Printing of Responsive Photonic Cellulose Nanocrystal Microfilm Arrays
Interactive materials capable of changing appearance upon exposure to external stimuli, such as photonic inks, are generally difficult to achieve on a large scale as they often require self‐assembly processes that are difficult to control macroscopically. Here this problem is overcome by preparing arrays of cellulose nanocrystal (CNC) microfilms from discrete nanoliter sessile droplets. The obtained microfilms show extremely uniform and intense color, enabling exceptional consistency in optical appearance across the entire array. The color can be controlled through the initial ink formulation, enabling the printing of polychromatic dot‐matrix images. Moreover, the high surface‐to‐volume ratio of the microfilms and the intrinsic hydrophilicity of the natural building block allow for a dramatic real‐time colorimetric response to changes in relative humidity. The printed CNC microfilm arrays overcome the existing issues of scalability, optical uniformity, and material efficiency, which have held back the adoption of CNC‐based photonic materials in cosmetics, interactive‐pigments, or anticounterfeit applications.
Introduction
Cellulose is one of the most abundant natural substances and has played a crucial role in human history, with uses ranging from cotton clothing to paper manufacture. Recently, its nanoscale derivatives, namely cellulose nanocrystals (CNCs) and nanofibers (CNFs), have gained popularity for their unique capability to combine biocompatibility and sustainability with exceptional optical properties. Starting with these natural building blocks and using only self-assembly techniques, it is possible to achieve a wide variety of different functionalities, such as circular polarizers. By using two different "CNC inks," areas with different colors were deposited simultaneously within the same array. Due to the inherent left-handedness of the helicoidal CNC architecture, the microfilms reflect only left-handed circularly polarized (LCP) light and therefore the image could be revealed or hidden using simple circular polarization filters. The absence of any reflectance in the right-handed circular polarization (RCP) is further evidence of highly-ordered and well-aligned films. [7,18] This versatile method can be used to deposit CNC microfilm arrays onto a range of silanizable substrates, exemplified with silicon in Figure S2 (Supporting Information) and flexible polydimethylsiloxane (PDMS) in Figure 1d. In the latter case, the CNC films can withstand the deformation of the PDMS without cracking or detaching, contrasting strongly with dish-cast CNC films that are often brittle and prone to breaking during handling. [19,20] Such resistance to cracks is mainly due to their thinness (<5 µm) and to the discontinuous nature of the patterned array. Additionally, adhesive tape can be used to detach the array (Figure 1e), enabling transfer to a wider range of substrates. The CNC microfilms are interactive, exhibiting a dramatic, real-time colorimetric response to changes in humidity. As shown in Figure 1f, a "butterfly" dot matrix image changed from blue, through green and red, to infra-red upon exhaling onto the sample (t = 0-4 s). Once the exposure to humid breath stopped, the CNC films quickly recovered their original blue color (t = 6-10 s).
Figure 1. a,b) Photographs of uniform red, green, and blue arrays of CNC microfilms deposited onto a patterned glass substrate, with individual microfilm diameters of 1000 µm (top) and 600 µm (bottom). As the viewing angle is increased from (a) 15° (relative to the normal) to (b) 30° the arrays are observed to iridescently blue-shift, characteristic of a well-ordered 1D photonic structure. c) A "Christmas tree" dot matrix image consisting of both red and green arrays of 600 µm CNC microfilms is clearly visible when imaged through a left-handed circular polarizer, but not visible through a right-handed circular polarizer. d) A CNC microfilm array prepared on a flexible PDMS substrate. e) An array of CNC microfilms was transferred onto adhesive tape. f) A blue "butterfly" dot matrix image rapidly and reversibly changed color in response to the moisture in an exhaled breath. All scale bars correspond to 5 mm.

The method to prepare arrays of CNC films is illustrated schematically in Figure 2 and can be divided into three key stages: i) preparing a hydrophilic/hydrophobic patterned surface by microcontact printing, ii) forming a sessile CNC droplet array by selective dewetting, and iii) slowly drying the droplets under an immiscible layer of hexadecane oil to form a CNC microfilm array. The third fabrication step is vital to obtain highly uniform microfilms via a self-assembly process. The optical appearance of a CNC droplet rapidly dried in the ambient environment (t < 5 min, Figure 3a) is very different from that of an equivalent droplet dried slowly under immiscible hexadecane (t ≤ 2 d, Figure 3b). Due to their large surface-to-volume ratio, pinned sessile droplets dried in the ambient environment had a faster evaporation rate at the edge than at the center, resulting in an outward capillary flow. This outward flow deposited the colloidal CNCs near the edge of the droplet, resulting in a dry microfilm with the profile described in Figure 3c, which is commonly known as a "coffee stain." [21,22] The radial shear also disrupted the organization of the CNCs, [23][24][25] with many tilted domains observed in dark-field microscopy (Figure S3a, Supporting Information). In contrast, droplets dried slowly under hexadecane experience uniform water loss across the droplet surface, resulting in a negligible capillary flow. As such, the slowly dried microfilms exhibit a dome-shaped cross-sectional profile (Figure 3d), with typical heights of 1.8 and 3.5 µm for microfilms dried from 600 and 1000 µm diameter droplets, respectively (Figure S4, Supporting Information). Most importantly, these films present an extremely well-aligned internal architecture, with no disruption at the edge, leading to the optimum photonic response across the film (Figure 3b).
Scanning electron microscope (SEM) images of the cross-section of a typical microfilm confirmed that the highly ordered, monodomain structure extends from the center to the edge (Figure 4). To understand the formation of uniform color in CNC microfilms, a time-lapse of a droplet drying under hexadecane was recorded (Figure S5, Supporting Information). Initially, multiple birefringent cholesteric domains were randomly orientated throughout the sessile droplet. Strong planar anchoring of the local director n on both the liquid-substrate and liquid-liquid interfaces favors the alignment of the helical axis m locally perpendicular to both. The high aspect ratio of the deposited droplet made the top interface nearly parallel to the horizontal substrate, resulting in a uniform vertical alignment of m throughout the sessile droplet, [12,13,27] as confirmed by the disappearance of the birefringent features within 80 min. The slow drying rate allowed neighboring domains to merge to form a single large cholesteric domain, with possible trapped defects and disclinations. [7,26] During the last hour of drying, the film became dark red before gradually shifting to its final blue color. Although a single nanoliter droplet took up to two days to dry by this method, the drying time required to produce large arrays of high-quality microfilms remains the same. Furthermore, the crucial droplet surface-to-volume ratio that allows anchoring to dominate is unaffected by the scale of the array.
During the drying process, we observed that some droplets depinned from the hydrophilic/hydrophobic boundary. Once depinned, further loss of water led the droplet to contract both vertically and laterally. As shown in Figure 5, the colors of such microfilms are significantly red-shifted upon depinning, with the extent of the color-shift dependent on the lateral contraction of the film. Much like that reported for spherical microdroplets, [12] upon reaching a kinetically-arrested state this additional lateral contraction leads to reduced vertical compression exerted along the helicoidal axis, m, resulting in an apparent increase of the final pitch, p. To guarantee that depinning did not occur upon drying, we systematically employed a sacrificial CNC coating step (see Experimental Section and Figure S6, Supporting Information), which allowed for the reliable and repeatable fabrication of large area CNC microfilm arrays with uniform color.
Cross-polarized optical images of representative CNC microfilms are shown in Figure 6a-f. Cross-polarization allows for the selective imaging of the color reflected from the helicoidal nanostructure by subtracting the thin-film interference fringes. A color shift was observed adjacent to disclination lines, where the local ordering has been disturbed by trapped defects or dust. No tilted domains were observed under dark-field illumination (Figure S3b,c, Supporting Information). The corresponding reflection spectra are shown in Figure 6g; the spectra were recorded at the center of each microfilm, collected through a left-handed circular polarizer and referenced against a silver mirror (i.e., a perfect left-handed cholesteric sample would have 100% reflectance in its photonic band gap). For microfilms with a diameter of 1000 µm, the peak reflectance could reach this theoretical limit, while microfilms with a diameter of 600 µm have a reflectance of up to 50%. The peak intensity reduction from the blue to the red microfilm is attributed to the weakened interference arising from the decreased number of pitch repeats for a fixed thickness. For comparison, reflection spectra recorded in RCP showed no peak (Figure S7b, Supporting Information). The film-to-film uniformity across the CNC microfilm array is illustrated in the stitched images in Figure 6h and Figure S8 (Supporting Information). The microfilms demonstrate much better material efficiency in comparison to thicker dish-cast films due to the high reflectivity arising from the well-ordered nanoarchitecture as well as the suppression of edge effects. [7,11,20,28] The high dynamic sensitivity of the CNC microfilms to humidity is due to the reduced thickness and the perfect alignment of m across the microfilms, as compared to dish-cast CNC films. [20,29] This is exemplified in Movie S1 (Supporting Information), where the moisture released from a fingertip is shown to be sufficient to shift a blue array to infrared in seconds. To quantify this behavior, the peak wavelength shift for a blue CNC microfilm (Ø = 1000 µm) was tracked upon three cycles of 0% ↔ 75% relative humidity (RH) at 20 °C. As shown in Figure 7a, a 50 nm red-shift with negligible hysteresis was observed upon cycling the relative humidity over this range. The blue, green, and red CNC microfilms (Ø = 1000 µm) all red-shifted by ≈11% with respect to the initial peak wavelength over this humidity range (Figure S9a, Supporting Information). The colorimetric response is not caused by a change in temperature, as confirmed by the stable peak wavelength upon heating the sample from 20 to 60 °C at 0% RH (Figure S9b, Supporting Information). Furthermore, upon locally heating the sample from 20 to 60 °C under a constant humid nitrogen flow (75% RH at 20 °C, corresponding to 12.94 g(H2O) m−3), the microfilm shifted to a lower wavelength, confirming that the film is sensitive to relative humidity rather than absolute humidity (Figure S9c, Supporting Information).

Figure 6. a-f) Cross-polarized reflection micrographs of blue, green and red CNC microfilms with a diameter of (a-c) 1000 µm and (d-f) 600 µm. g) Corresponding reflectance spectra for panels (a-f) collected through a left-handed circular polarizer, with the intensity normalized against a silver mirror. h) Stitched images showing a larger area of the uniformly green CNC microfilm array (Ø = 600 µm).
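To give a feel for the pitch-repeat argument in the preceding paragraph, the short sketch below estimates the number of helicoidal repeats in a film of fixed thickness, assuming the usual cholesteric Bragg condition λ ≈ n̄p with the average refractive index n̄ ≈ 1.55 quoted in the Experimental Section; the wavelengths and film height are illustrative values, not measurements from this work.

```python
# Rough estimate of pitch repeats N = h/p for a fixed film thickness,
# assuming the Bragg condition lambda ~ n*p for a cholesteric reflector.
n_avg = 1.55          # average refractive index (from the profilometry section)
h = 1.8e-6            # film thickness in m (typical 600 um microfilm)

for color, wavelength in [("blue", 450e-9), ("green", 530e-9), ("red", 630e-9)]:
    pitch = wavelength / n_avg
    repeats = h / pitch
    print(f"{color}: pitch ~ {pitch*1e9:.0f} nm, ~{repeats:.1f} repeats")
# fewer repeats for the red film -> weaker interference peak, as discussed above
```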
To explore the responsivity of the CNC film array to large changes in relative humidity (0% ↔ 100% RH), the local environment was rapidly alternated six times between saturated and dry atmospheres over a period of 90 s (Figure 7b; Movie S2, Supporting Information), during which the peak wavelength reversibly and repeatedly shifted by ≈300 nm. This much greater peak shift corresponds to an increasing rate of water uptake for CNC films at higher RH, as previously demonstrated by the weight increase of dish-cast CNC films in a humid environment. [20] Moreover, the thin CNC microfilms have complete water penetration and require only minimal water uptake for instantaneous color shifts (<1 s), in contrast to other reported CNC-based films (typically minutes to hours). [20,30] Figure 7c shows the change in reflected color of a blue microfilm when subjected to 0%-95% RH. As CNC films are known to have negligible porosity, [31] such color change can be attributed to the swelling of the film upon water uptake. The slight decrease of the reflectance with increasing humidity is attributed to the decrease of overall birefringence, as the birefringent CNCs were diluted with isotropic water. To quantify the swelling, the corresponding change of the optical path length, l_opt, was measured from thin-film interference spectral fringes (see the Experimental Section and Figure S10 in the Supporting Information). As shown in Figure 7d, increasing the humidity from 0% to 95% RH resulted in l_opt increasing by a factor of 1.7. Upon water infiltration, the average refractive index, n̄, of the CNC microfilms must decrease, and as the thickness is given by h = l_opt/n̄, it can be concluded that the swollen film contains at least 41 vol% of water. This amount of water uptake, although large, is insufficient to rehydrate the CNCs beyond a kinetically-arrested gel. This explains why the microfilms can be repeatedly subjected to rapid swelling without disruption of the underlying nanostructure.

Figure 7. a) The peak wavelength shift of a blue CNC microfilm to changes in relative humidity (0% ↔ 75% RH), with little hysteresis observed over three cycles of (•) increasing and (▴) decreasing RH. b) The reproducible and large peak wavelength shift of a CNC microfilm subject to six cycles of rapid humidity change (0% ↔ 100% RH). c) The colorimetric response of a blue CNC microfilm (Ø = 1000 µm) upon changing the relative humidity stepwise from 0% to 95% RH. The spectra were taken through a left-handed circular polarizer, and the intensity was normalized against a silver mirror. d) The optical path length (l_opt), as calculated from the thin-film interference, increased by 70% upon increasing from 0% to 95% RH.
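The 41 vol% figure follows from a simple lower bound: since l_opt = n̄h grows by 1.7× while n̄ can only decrease upon water uptake, the thickness must grow by at least 1.7×. The short check below spells out this arithmetic; it is an illustration of the bound, not an independent measurement.

```python
# Lower bound on the water content of the swollen film:
# l_opt = n*h increases by 1.7x while n can only decrease, so h_wet/h_dry >= 1.7.
swelling = 1.7                                   # measured increase of l_opt
h_ratio_min = swelling                           # minimum thickness increase
water_fraction_min = (h_ratio_min - 1) / h_ratio_min
print(f"minimum water volume fraction: {water_fraction_min:.1%}")   # ~41.2%
```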
Conclusion
In conclusion, we have developed a scalable printing method to produce arrays of structurally colored CNC microfilms from spatially-defined nanoliter sessile droplets. Complex dot matrix images were designed by controlling the size and location of individual CNC microfilms via micropatterning of the substrate. By employing slow evaporation, the deposition and self-assembly of the CNCs were not disrupted by commonly observed shear or "coffee-ring" effects. When combined with strong anchoring of the CNCs to the substrate, this resulted in a highly-ordered, monodomain structure with little to no defects (e.g., Figure 6d-f). Our method enables the fabrication of responsive microfilms with high reflectivity, where the color is deterministically controlled by the CNC ink formulation. Finally, the instantaneous, reversible colorimetric response, from ultraviolet to infrared, opens new pathways for CNC-based responsive photonic pigments to be used in decorative, sensing, or anticounterfeiting applications.
Experimental Section
Characterization: Photographs were taken with a digital camera (D3200, Nikon), mounted with a left-handed circular polarizer (CIR-PL, Hoya). Polarized optical images were collected with a customized ZEISS Axio scope A1 microscope in reflection mode using a halogen lamp (ZEISS HAL100) as a light source in Koehler illumination. The reflected light could also be filtered with a quarter wave plate and a linear polarizer mounted at different orientations to distinguish LCP or RCP light. A beam splitter directed the filtered light to a CCD camera (UI-3580LE-C-HQ, IDS) and an optical fiber (core diameter 50 µm) that was mounted in confocal configuration to collect microspectroscopic data (AvaSpec-HS2048 spectrometer, Avantes). All spectra were normalized to the reflection spectrum of a silver mirror (Thorlabs, PF10-03-P01) in LCP unless stated otherwise. All bright-field images were taken with a ZEISS EC Epiplan-Apochromat objective (10x, NA 0.3) and dark-field images were taken with a ZEISS EC Epiplan-Apochromat objective (20x, NA 0.6). The stitched images were taken by scanning over a 16 mm² area with a ZEISS EC Epiplan-Neofluar objective (5x, NA 0.13). Scanning electron microscope (SEM) images were collected with a Mira3 system (TESCAN) operated at 3 kV and a working distance of 4-5 mm. The samples were mounted on aluminum stubs using conductive carbon tape and coated with a 10 nm thick layer of platinum with a sputter coater (Quorum Q150T ES).
CNC Suspension: The cellulose nanocrystal suspension was prepared as described previously. [8] In short, Whatman No. 1 cellulose filter paper (30 g) was hydrolyzed with sulfuric acid (64 wt%, 420 mL, VWR) for 30 min at 66 °C and then quenched by dilution with cold Milli-Q water. After removing most of the acid and supernatant by centrifugation, the suspension was dialyzed against deionized water (MWCO 12-14 kDa). The suspension was tip sonicated in an ice bath (Fisherbrand Ultrasonic disintegrator 500 W, amplitude 30% max, 5000 J g−1 of CNC). The final suspension was concentrated through a dialysis membrane (MWCO 12-14 kDa) using an osmotic bath of poly(ethylene glycol) (35 kDa, Sigma-Aldrich). The final measured concentration was 14.5 wt%. The stock suspension was diluted with Milli-Q water and sodium chloride (0.1 M) to obtain suspensions with 8.1 wt% of CNC and [NaCl]/[CNC] ratios of 25, 50, and 100 mmol kg−1. The suspensions were equilibrated for 7 d to allow phase separation, and the denser anisotropic phase was used.
Stamp Preparation by Soft Lithography and Punch: Stamps with 600 µm features were prepared by casting PDMS (SYLGARD 184, 10: 1 w/w precursor: crosslinker) onto a master fabricated via photolithography. Stamps with 1000 µm features were prepared by punching holes through a sheet of PDMS using a 1.0 mm biopsy punch (BPP-10F, kai medical).
Patterned Substrates by Microcontact Printing: Glass and silicon substrates were rinsed with ethanol and blown dry with nitrogen gas. Polydimethylsiloxane (PDMS) substrates were cleaned by tape (Scotch Magic tape, 3M). The substrates were then activated with a plasma cleaner (Femto, Diener electronic) for 8 s (glass) or 32 s (PDMS), rendering the whole surface hydrophilic. 25 µL of 5% v/v octadecyltrichlorosilane (OTS, Sigma-Aldrich) solution in hexadecane (Sigma-Aldrich) was spread across a 90 mm petri dish using a swab (Absorb-Tip Foam Swab, Techspray). Stamps were inked by conformal contact with the coated petri dish for 30 s. The inked stamp was then applied conformally to the activated substrate for 60 s. After treatment, the contacted surface was rendered hydrophobic while the untouched areas remained hydrophilic. The contact angle of a 0.5 µL water droplet was <5° for the freshly plasma-treated glass and 102° for the OTS-treated surface.
CNC Microfilm Arrays by Blade Coating and Selective Dewetting: A customized blade-coating set-up was prepared using a glass capillary placed on top of a glass slide, with a coverslip (thickness 0.13-0.16 mm) used as a spacer. The CNC suspension was prepared as previously reported and is described in the Supporting Information. [8] A drop of CNC suspension (10 µL) was pipetted near one end of the patterned substrate. By sliding the "blade" across the substrate, the CNC suspension broke into many small sessile droplets, defined by the hydrophobic boundaries of the prepatterned substrate. To enforce pinning, sacrificial CNC films were first produced, whereby the wetted substrate was immediately placed on a hot plate (SD162, Stuart Equipment) at 55 °C. Once dry, the sacrificial CNC films were removed by rinsing with milli-Q water, followed by gentle wiping with damp tissue (White Medical Wipes, Kimberly Clark). Scanning electron microscope (SEM) images showed persistent CNCs sticking at the droplet's contact line ( Figure S6, Supporting Information). Such surface irregularities can pin the contact line of a drying droplet. [21] A second array of sessile droplets was then applied to the same substrate and dried slowly under hexadecane. Once fully dried, the substrate was taken out from the hexadecane bath, rinsed in ethanol and blown dry with nitrogen gas. Rinsing with ethanol temporarily redshifted the microfilms, with no sign of damage, but upon drying with nitrogen the microfilms blue-shifted back to their original color. In contrast, when freshly-prepared microfilms were immersed in water, the CNCs were redispersed, resulting in the loss of the microfilms.
Profilometry: The cross-sectional height profile of the microfilm, h, was measured by scanning across the sample using a stylus profilometer (DektakXT, Bruker) with a 12.5 µm tip radius at 4 mg force, with a resolution of 0.33 µm and a speed of 100 µm s−1. To measure the profile of the optical path length, l_opt, reflectance spectra of CNC microfilms were acquired by microspectroscopy in RCP to isolate the thin-film interference fringes from the reflectance of the CNC nanoarchitecture. Thin-film interference imposes a sinusoidal variation on the reflectance according to [32] R(λ) = A + B cos(4π l_opt/λ + C), where R(λ) is the measured reflectance at wavelength λ, and A, B, and C are free parameters varied to give the best fit without affecting the oscillation frequency. The cross-sectional and optical path length profiles were in excellent agreement, with a scaling factor corresponding to the average refractive index, n̄ = l_opt/h = 1.55.
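As a rough illustration of how l_opt can be extracted from such fringes, the sketch below fits the sinusoidal model written above to an RCP reflectance spectrum; the file name, starting values, and synthetic data are placeholders, not part of the original analysis.

```python
# Minimal sketch: fit R(wl) = A + B*cos(4*pi*l_opt/wl + C) to RCP fringes.
# Wavelengths in nm, so the fitted l_opt also comes out in nm.
import numpy as np
from scipy.optimize import curve_fit

def fringe_model(wl, A, B, C, l_opt):
    return A + B * np.cos(4 * np.pi * l_opt / wl + C)

# wl, R = np.loadtxt("rcp_spectrum.txt", unpack=True)   # real measurement
wl = np.linspace(500, 800, 400)                          # synthetic example
R = fringe_model(wl, 0.05, 0.02, 0.3, 2800) + 0.002 * np.random.randn(wl.size)

popt, _ = curve_fit(fringe_model, wl, R, p0=[0.05, 0.02, 0.0, 3000])
print(f"fitted l_opt ~ {popt[3]:.0f} nm")                # ~2800 nm for a dry film
```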
Temperature and Humidity Control: As shown in Figure S11 (Supporting Information), the sample was placed in a heating chamber (THMS600, Linkam Scientific) with the humidity of the inlet gas measured by an electronic humidity sensor (TSP01, Thorlabs, ±1% between 20% and 80% RH, otherwise ±3% error). For the temperature-controlled study, dry nitrogen gas was flushed into the chamber while the sample temperature was cycled three times from 20 to 60 °C. For the absolute humidity experiment, the sample was heated from 20 to 60 °C, while the supplied water vapor was kept constant at 12.94 g m−3. For the relative humidity-controlled study, dry and wet nitrogen gas flows were mixed such that the relative humidity could be tuned between 0% and 75% at 20 °C. To investigate the response to large changes in relative humidity, the tester exhaled onto the sample until a significant color change was observed. To recover the color, the sample was blown with dry nitrogen gas.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. Additional data relating to this publication is available from the University of Cambridge data repository (https://doi.org/10.17863/CAM.25747).
"Physics"
] |
High-power electrically pumped terahertz topological laser based on a surface metallic Dirac-vortex cavity
Topological lasers (TLs) have attracted widespread attention due to their mode robustness against perturbations or defects. Among them, electrically pumped TLs have gained extensive research interest due to their advantages of compact size and easy integration. Nevertheless, only limited studies on electrically pumped TLs have been reported in the terahertz (THz) and telecom wavelength ranges, with relatively low output powers, leaving a wide gap to practical applications. Here, we introduce a surface metallic Dirac-vortex cavity (SMDC) design to solve the difficulty of increasing power for electrically pumped TLs in the THz spectral range. Due to the strong coupling between the SMDC and the active region, robust 2D topological defect lasing modes are obtained. More importantly, the sufficient gain and large radiative efficiency provided by the SMDC increase the output power to a maximum peak power of 150 mW, which demonstrates the practical application potential of electrically pumped TLs.
For the two-dimensional system, the effective continuum Hamiltonian takes the form of a Dirac Hamiltonian with a vortex mass term. Firstly, we calculate the effective refractive index of the device with a non-etched active region for later calculations of the band diagram of the topological cavity, based on the finite element method. As the thickness of the active region increases, the effective refractive index increases rapidly. The thickness of our active region is 11.7 μm. As shown in Fig. S1b, effective refractive indices of 3.08 and 3.6 are obtained for structures without and with metal covering, respectively. A large effective refractive index contrast can be achieved for a thinner active region layer. Although the decrease in the thickness of the active region will introduce fewer cascade periods and thus smaller gain, the large refractive index contrast can improve the surface emission efficiency, which may also result in high output power.

S3.

As shown in Fig. S3, the absorption boundary design plays an important role in the realization of highly robust single-mode lasing. The device without an absorption boundary cannot operate stably in the topological mode, while the device with an absorption boundary demonstrates robust operation of the topological mode, as illustrated by both the lasing spectra and the far-field patterns. This can be attributed to the increased loss of the whispering-gallery modes due to the introduction of the absorption boundary. Besides, in Fig. S3c and S3f, the circular mesa cavity device with a circular absorption boundary demonstrates multimode operation and its far-field does not satisfy the C3v symmetry, since the whispering-gallery modes have a lower loss for a circular boundary. As shown in Fig. S4a, although the second-order DFB device demonstrates single-mode operation, its far-field shows a long strip pattern. To maintain fundamental-mode operation, the ridge width of the second-order DFB device is limited, which results in a large far-field divergence angle. The device with = 0 demonstrates multi-mode operation. This can be attributed to the unoptimized design of this photonic crystal structure, which results in a band-edge structure with many modes having similar gain conditions for lasing. However, from another perspective, this demonstrates the important role of the introduction of the vortex cavity for single-mode operation. Photonic crystal devices can also realize a large threshold gain difference with careful design [3,4]. Compared with photonic crystal devices lasing on band-edge modes, the topological cavity device introduces an approach to obtain stable mid-gap single-mode operation, which gives a large FSR and threshold margin simultaneously.
S6. Calculation of loss and vertical radiative efficiency of the cavity modes
The loss of the cavity modes and the vertical radiative efficiency are calculated using the three-dimensional (3D) full-wave finite element method. For the calculations, we take the device with = 0.18 and 0 = 30.5 μm as the example.
The calculated Q factors for different modes are shown in Figure S5. The photon loss rate κ_r due to the vertical radiation of the laser is estimated by extracting the time-averaged integrated power flow through the open air domains and normalizing it with respect to the resonator energy in the 3D simulation [5]: κ_r = P_rad/W, with P_rad = ∮ ⟨S⟩ · n̂ dS the time-averaged power flow (⟨S⟩ = ½ Re(E × H*)) and W = ¼ ∫ (ε|E|² + μ|H|²) dV the resonator energy (R.1), where E and H represent the electric and magnetic fields respectively, n̂ represents the unit normal vector of the circular air domains, ε the dielectric constant and μ the permeability. The corresponding quality factors have been derived from the relation Q = ω/κ, where ω is the eigenfrequency of the topological mode.
The radiative out-coupling efficiency is assumed to be proportional to the ratio of the vertical radiation loss rate to the total loss rate. Based on the 3D simulation, a photon loss rate of ≈ 28.5 GHz is obtained from the integral results through equation (R.1), which corresponds to a radiative quality factor of 140. Considering the calculated total quality factor of 96 from the 3D simulation, an in-plane quality factor of 320 is obtained. As a result, an out-coupling efficiency of ≈ 68.5% is estimated. This result is obtained without considering the waveguide loss. The facet reflectivity for the metal-metal waveguide structure with a waveguide width of 100 μm and an active region thickness of 11.7 μm around 3.2 THz is about 80% [6]. Therefore we obtain a mirror loss of ≈ 1.1 cm−1.
Meanwhile, the total optical loss coefficient for F-P plasmonic QCLs operating in the wavelength range of 3~4 THz [6-8] has been experimentally measured to be in the range of 10~15 cm−1. Considering that the total loss is the sum of the waveguide loss and the mirror loss, we take a total loss of 12.5 cm−1, from which a waveguide loss of 11.4 cm−1 is obtained. Figure S6 shows the L-I-V curves of the F-P device, with a slope efficiency of 28 mW/A. In theory, the slope efficiency of the F-P device is proportional to the fraction of the total loss contributed by the mirror loss. Meanwhile, for the topological cavity device, the total optical loss coefficient and the slope efficiency can be written analogously in terms of the optical loss coefficient due to the vertical radiation and the in-plane mirror loss, which can be obtained from the corresponding quality factors. Table S1 compares our results with other typical high-power surface-emitting THz QCLs reported previously. Clearly, our device exhibits fairly competitive radiation loss and efficiency compared with devices based on different photonic designs, providing huge potential for high output power. For the absolute output power of these devices, the epitaxial material used in the device fabrication is another critical factor, which may introduce an inconsistency between the radiation efficiency and the output power. Therefore, the sufficient gain provided by the surface metal cavity design and the large vertical radiation loss and efficiency introduced by the topological photonics design together result in the increase of the output power of our device. (Table S1 notes: (1) estimated with the typical parameters from refs. [6-8]; (2) calculated from the Q factor.) As shown in Fig. S7, the output power of the device with a size of 14 0 is nearly twice that of the device with a size of 10 0, which demonstrates nearly linear power upscaling with increasing size. Furthermore, the device can be made much larger for higher output power. Due to the large free spectral range (FSR) of the topological mid-gap mode, stable single-mode operation is expected for larger device sizes. Furthermore, a smaller divergence angle will be obtained with increasing device size. However, the biggest challenge for a large-area device is heat dissipation; nevertheless, this issue could potentially be addressed by thinning the active region accordingly. As shown in Fig. S6, the devices with phase modulation maintain the vortex polarization properties while the symmetry of the far-field patterns changes from C3 to a double-lobed beam ( = 3) and a doughnut beam ( = 1.5). Furthermore, the lasing spectra also remain the same as those of the device without phase modulation. These results show that the symmetry of the output far-field can be tuned through phase modulation with the topological lasing properties unchanged.
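As a quick numerical illustration of this loss budget, the sketch below evaluates the standard Fabry-Perot mirror-loss expression α_m = ln(1/(R1·R2))/(2L) with the 2 mm cavity length and 80% facet reflectivity quoted in this supplement; the formula itself is the usual textbook expression, assumed here rather than taken verbatim from the garbled equation.

```python
# Loss budget for the reference F-P device (values quoted in the text).
import numpy as np

L = 0.2                 # cavity length in cm (2 mm)
R1 = R2 = 0.8           # facet reflectivity of the metal-metal waveguide
alpha_m = np.log(1.0 / (R1 * R2)) / (2 * L)   # mirror loss, cm^-1
alpha_tot = 12.5        # assumed total optical loss, cm^-1
alpha_w = alpha_tot - alpha_m                 # waveguide loss, cm^-1
print(f"alpha_m = {alpha_m:.2f} cm^-1, alpha_w = {alpha_w:.1f} cm^-1")
# -> alpha_m ~ 1.1 cm^-1 and alpha_w ~ 11.4 cm^-1, matching the values above
```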
Fig. S1 | Effective refractive index calculation. a Simulation diagram. b Effective refractive index as a function of the thickness of the active region.
Fig. S2 | Performance of the SMDC device with = . and = . μm under different temperatures. a L-I-V results under different temperatures. b Lasing spectra under different temperatures. c,d,e Far-field patterns under different temperatures. We also characterized the performance of the SMDC TLs at different temperatures, as shown in Fig. S2. With increasing temperature from 8.5 K to 35 K, the devices still maintain robust single-mode operation and C3v-symmetric far fields, which indicates the operating robustness of the devices at different temperatures.
Fig. S3 | The effects of the absorbing boundary condition on lasing properties. a,d Far-field pattern and lasing spectrum of the hexagonal mesa cavity device without an absorbing boundary. b,e Far-field and spectrum of the hexagonal mesa cavity device with a 15 μm wide highly doped absorbing boundary. c,f Far-field and spectrum of the circular mesa device with a 15 μm wide highly doped absorbing boundary. To study the effects of the absorbing boundary condition on lasing properties, we fabricated devices with different boundary conditions for comparison. Lasing spectra and far-field patterns of the devices with different boundary conditions and parameters of = 0.18 and 0 = 30.5 μm are shown in Fig. S3.
Fig. S4 | Far-field patterns and lasing spectra for different types of devices. a,d Far-field pattern
Figure S5. 3D simulation of the Q factors for different modes. Next, we use the experimental results of the F-P device from the same wafer as the topological cavity device to modify the theoretical calculation by considering the waveguide loss. For the F-P device with a 2 mm cavity length and 100 μm ridge width, the mirror loss is (1/2L) ln(1/(R1·R2)) = (1/L) ln(1/R1).
Figure S6. L-I-V curves of the F-P device with 2 mm cavity length and 100 μm ridge width.
Hence the vertical radiation loss (Q = 140) and the combined vertical and in-plane loss (Q = 66.7) are calculated to be 17.5 cm−1 and 25.5 cm−1, respectively. Therefore, after taking the waveguide loss into consideration, we obtain an out-coupling efficiency of 17.5/(25.5 + 11.4) = 0.474 for the SMDC TL device with = 0.18 and 0 = 30.5 μm. Meanwhile, a slope efficiency of 151 mW/A is obtained experimentally from the P-I-V curve in Fig. 3b. Therefore, the theoretically calculated slope-efficiency ratio between the topological and F-P devices is 5.386, which is in excellent agreement with the experimental value of 5.393. This verifies the reliability of the numerical simulation. This relatively high out-coupling efficiency originates from the coherent constructive emission over the entire lattice area.
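The quoted efficiency ratio can be reproduced from the loss figures above. In the sketch below the F-P out-coupling fraction is taken as mirror loss over total loss, an assumption consistent with the numbers quoted in this supplement rather than a formula stated explicitly in it.

```python
# Consistency check of the slope-efficiency ratio between the SMDC and F-P devices.
alpha_perp = 17.5            # vertical radiation loss of the SMDC cavity, cm^-1
alpha_perp_plus_par = 25.5   # vertical + in-plane loss, cm^-1
alpha_w = 11.4               # waveguide loss, cm^-1
alpha_m = 1.1                # F-P mirror loss, cm^-1

eta_topo = alpha_perp / (alpha_perp_plus_par + alpha_w)    # ~0.474
eta_fp = alpha_m / (alpha_m + alpha_w)                     # ~0.088 (assumed form)
print(f"theory ratio:     {eta_topo / eta_fp:.3f}")        # ~5.386
print(f"experiment ratio: {151 / 28:.3f}")                 # ~5.393
```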
Fig. S7 | Power performance of devices with different sizes.
Fig. S8 | Schematic diagram of the far-field testing optical path. As illustrated in Fig. S8, the far-field properties of the devices were directly measured by a Golay cell detector scanning over a spherical surface with a radius of around 15 cm from the device. For the polarization measurements, a linear polarizer was added to filter the cross-polarized components during the far-field intensity scanning. By changing the direction of the linear polarizer, the polarized far-field properties were measured.
Table S1. Radiation loss and efficiency for different single-chip THz surface-emitting QCLs.
"Physics",
"Engineering"
] |
Uncertainty Relations for Coarse–Grained Measurements: An Overview
Uncertainty relations involving incompatible observables are one of the cornerstones of quantum mechanics. Aside from their fundamental significance, they play an important role in practical applications, such as detection of quantum correlations and security requirements in quantum cryptography. In continuous variable systems, the spectra of the relevant observables form a continuum and this necessitates the coarse graining of measurements. However, these coarse-grained observables do not necessarily obey the same uncertainty relations as the original ones, a fact that can lead to false results when considering applications. That is, one cannot naively replace the original observables in the uncertainty relation for the coarse-grained observables and expect consistent results. As such, several uncertainty relations that are specifically designed for coarse-grained observables have been developed. In recognition of the 90th anniversary of the seminal Heisenberg uncertainty relation, celebrated last year, and all the subsequent work since then, here we give a review of the state of the art of coarse-grained uncertainty relations in continuous variable quantum systems, as well as their applications to fundamental quantum physics and quantum information tasks. Our review is meant to be balanced in its content, since both theoretical considerations and experimental perspectives are put on an equal footing.
Introduction
The physics of classical waves distinguishes itself from that of a classical point particle in several ways. Waves are spread-out packets of energy moving through a medium, while a particle is localized and follows a well-defined trajectory. It was thus most surprising when it was discovered in the early 20th century that quantum objects, such as electrons and atoms, could exhibit behavior that at times was best described according to wave mechanics. Moreover, it was shown that either wave or particle behavior could be observed depending almost entirely upon how an observer chooses to measure the system. This complementarity of wave and particle behavior played a key role in the early debates concerning the validity of quantum theory [1], and has been linked to several interesting and fundamental phenomena of quantum physics [2][3][4][5]. Though several complementarity relations have been cast in quantitative forms [6,7], perhaps complementarity is most frequently observed in terms of quantum uncertainty relations. In words, uncertainty relations establish the fact that the intrinsic uncertainties associated to measurement outcomes of two incompatible observations of a quantum system can never both be arbitrarily small. We note that this type of behavior appears in classical wave mechanics, for example in the form of time-bandwidth uncertainty relations, which are quite important in communications and signal processing [8]. In contrast, there is no aspect of a classical physics that prohibits us from measuring all of the relevant properties of a classical point particle, at least in principle.
In addition to quantum fundamentals, quantum uncertainty relations play an important role in several interesting tasks associated to quantum information protocols, such as the detection of quantum correlations and the security of quantum cryptography [9]. In this paper, we focus on continuous variable (CV) quantum systems [10,11]. Though many interesting results have been found for discrete systems, they are outside the scope of this manuscript. We refer the interested reader to Reference [9], being a comprehensive unification and extension of two older reviews on entropic uncertainty relations, more focused on the physical [12] and information-theoretic [13] side respectively. However, since the coarse-grained scenario situates itself somehow in-between the discrete and continuous description, we make a short introduction to discrete entropic uncertainty relations before discussing their coarse-grained relatives.
In CV systems, one encounters a fundamental problem when performing measurements. That is, the eigenspectra of the corresponding observables are infinite dimensional, and can be continuous or discrete. Since any measurement device registers measurement outcomes with a finite precision and within a finite range of values, the experimental assessment of CV observables can be quite different from theory. Of course, one can consider a truncation of the relevant Hilbert space [14], as well as some type of binning or coarse graining of the measurement outcomes. This is similar to the idea of coarse graining that was discussed by Gibbs [15] and used by Paul and Tanya Ehrenfest [16,17] in the early 20th century to account for imprecise knowledge of dynamical variables in statistical mechanics [18]. Coarse graining has also appeared in the quantum mechanical context as an attempt to describe the quantum-to-classical transition, where the idea is that measurement imprecision could be responsible for the disappearance of quantum properties [19][20][21][22][23]. Though this is quite an intuitive notion, it was recently shown that one can always find an uncertainty relation that is satisfied non-trivially for any amount of coarse graining [24]. That is, quantum mechanical uncertainty is always present in this type of "classical" limit. This motivates the formulation of coarse-grained uncertainty relations.
In addition to the necessity of coarse graining, there could be practical advantages: for tasks such as entanglement detection, it might be interesting to perform as few measurements as possible, advocating the use of coarse-grained measurements. However, improper handling of coarse graining can result in false detections of entanglement [25,26], pseudo-violation of Bell's inequalities or the Tsirelson bound [27,28], and sacrifice security in quantum key distribution [29], for example. Thus, the proper formulation and application of uncertainty relations for coarse-grained observables is both interesting and necessary.
In the present contribution we review the current state of the art of uncertainty relations (URs) for coarse-grained observables in continuous-variable quantum systems. In Section 2 we review the concept of uncertainty of continuous variable (CV) quantum systems in more depth and introduce several prominent URs. In Section 3 we discuss the utility of CV URs in quantum physics and quantum information, in particular for identifying non-classical states and quantum correlations. Section 4 presents the problem of coarse graining of CVs in detail, and two coarse-graining models are provided. The current status of URs for these coarse-graining models is reviewed in Section 5, where we present a series of coarse-grained URs previously reported in the literature [12,24,[30][31][32]. In addition, we extend the validity of some of these URs to general linear combinations of canonical observables. Section 6 is devoted to the experimental investigation and application of coarse-grained URs in quantum physics and quantum information. Concluding remarks are provided in Section 7.
Uncertainty Relations
The history of uncertainty relations traces back to the early days of the formalization of quantum theory and begins with the celebrated work by Heisenberg in 1927 [33] (see [1] for an English version).
The work discussed what later became known as Heisenberg's uncertainty principle. The first mathematical formulation of this principle, in [33], essentially reads

$$ \Delta x \, \Delta p \gtrsim h, \qquad (1) $$

where ∆x and ∆p are the uncertainties of the position and linear momentum of a particle, respectively, and h is the Planck constant. Although the existence of such a principle is ultimately due to the non-commutativity of the position and momentum observables, it took almost 80 years for all the physical meanings, scope and validity of this principle to be elucidated [34]. Distinct physical meanings emerge from different definitions of the "uncertainty" of position or momentum, and in each case a proper multiplicative constant makes the lower bound sharp. All of these inequalities are known by the generic name of Uncertainty Relations, referred to throughout this review as URs.
Even though the inception of the URs was made in the context of the position and momentum of a particle, their existence can be extended to the "uncertainties" associated with any pair of non-commuting observables in discrete or continuous variable quantum systems. Thus, generically we can define the URs as inequalities that stem from the fact that the measured quantities involved are associated with non-commuting observables. Nowadays, it is clear that there are three conceptually distinct types of URs [34]: (i) URs associated with the statistics of the measurement results of non-commuting observables after preparing the system repeatedly in the same quantum state, or statistical URs for short, (ii) the error-disturbance URs, also known as noise-disturbance URs, relating the imprecision in the measurement of one observable and the corresponding disturbance in the other, and (iii) the joint measurement URs associated with the precision of the joint measurements of non-commuting observables. The error-disturbance URs have two main contributions: one in References [35-37], which present state-dependent error-disturbance URs, and the other in References [38-40], which argue for a state-independent characterisation of the overall performance of measuring devices as a measure of uncertainty that satisfies a UR of the form given in Equation (1). There was a certain controversy involving these two contributions and we recommend the work [41], which discusses the limitations of the state-dependent error-disturbance URs. The development of joint measurement URs has an early contribution in Reference [42] and further developments were given in References [38,43-48].
The statistical URs are also referred to in the literature as preparation URs. This is because it is impossible to prepare a quantum system in a state for which two non-commuting observables have sharply defined values. However, here we prefer to call them statistical URs, as they express the limits to the amount of information that can be obtained about incompatible observables of a quantum system when it is repeatedly measured after being prepared in the same initial state in each round of the measurement process. We emphasize that there is not any attempt to measure the two non-commuting observables simultaneously. In each round of the measurement process only one observable is measured, the choice of which could be made randomly. In this sense the "uncertainties" contained in the statistical URs are of the statistical type: the more certain the sequence of outcomes of one observable is in a given state, then the more uncertain is the sequence of outcomes of the other non-commuting observable(s) considered.
This review focuses on statistical URs that are valid for coarse-grained measurements in continuous variable quantum systems, although a similar approach can be made for the other two types of URs mentioned above. There are two types of quantum mechanical degrees of freedom: the ones that can be described by a Hilbert space of quantum states with finite dimension and the others in which it has infinite dimension. In particular, we are interested only in continuous variable (CV) systems where the Hilbert space, H, of pure states, |ψ⟩, has an infinite dimension. The CV systems that we consider consist of a finite set of n bosonic modes, sometimes called "qumodes" [10], so that H := H_1 ⊗ . . . ⊗ H_n. Each mode is described by a pair of canonically conjugate operators, x̂_j and p̂_j, such that

$$ [\hat{x}_j, \hat{p}_k] = i\hbar\,\hat{1}\,\delta_{jk}. \qquad (2) $$

Therefore, the separable Hilbert space of each mode, H_j, has a countable basis {|n_j⟩}_{n_j = 1,...,∞} consisting of eigenstates of the number operator, viz. n̂_j |n_j⟩ = n_j |n_j⟩, evidencing the infinite dimensionality of the Hilbert space of the quantum states. In the case of mixed states we use density operators represented by Greek letters with a hat, i.e., ρ̂, σ̂, etc. Important examples of CV systems are the motional degrees of freedom of atoms, ions and molecules, where x̂_j and p̂_j are the components of the position and linear momentum of the particles (in this case ħ in Equation (2) is the usual reduced Planck constant, i.e., ħ = h/2π); the quadrature modes of the quantized electromagnetic field, where x̂_j and p̂_j are canonically conjugate quadratures (in this case ħ in Equation (2) is just ħ = 1) [10]; and the transverse spatial degrees of freedom of single photons propagating in the paraxial approximation (in this case ħ in Equation (2) is ħ = λ/2π, where λ is the photon's wavelength [49]).
In what follows, we summarize the principal statistical URs in CV systems that have been generalised to coarse-grained measurements. The corresponding coarse-grained URs will be presented in Section 5.
Heisenberg (or Variance) Uncertainty Relation
Let us consider two operators

$$ \hat{u} = \mathbf{d}^{T}\hat{\mathbf{x}}, \qquad \hat{v} = \mathbf{d}'^{\,T}\hat{\mathbf{x}}, \qquad (3) $$

where T means transposition and we define the 2n-dimensional vector of operators

$$ \hat{\mathbf{x}} = (\hat{x}_1, \ldots, \hat{x}_n, \hat{p}_1, \ldots, \hat{p}_n)^{T}, \qquad (4) $$

as well as the arbitrary real vectors, d = (a, a′)^T = (a_1, . . . , a_n, a′_1, . . . , a′_n)^T and

$$ \mathbf{d}' = (\mathbf{b}, \mathbf{b}')^{T} = (b_1, \ldots, b_n, b'_1, \ldots, b'_n)^{T}. \qquad (5) $$

The commutation relation of û and v̂ is

$$ [\hat{u}, \hat{v}] = i\hbar\,\gamma, \qquad \gamma := \mathbf{d}^{T}\mathbf{J}\,\mathbf{d}', \qquad (6) $$

where J is the 2n × 2n-dimensional matrix of the symplectic norm [50]:

$$ \mathbf{J} = \begin{pmatrix} \mathbf{O} & \mathbf{I} \\ -\mathbf{I} & \mathbf{O} \end{pmatrix}, \qquad (7) $$

and the n × n matrices in the blocks are the identity matrix I and the null matrix O. In this review, matrices of an arbitrary shape not treated as quantum-mechanical operators are denoted in bold and without a hat. The parameter γ in definition Equation (6) is a scalar that in some sense quantifies the non-commutativity of û and v̂. Commutation relations such as Equation (6) are called Canonical Commutation Relations (CCR) (sometimes the name CCR is used only in the case when γ = 1; however, as ħγ can be interpreted as an effective Planck constant, the name CCR here is well justified). However, a CCR between two operators û and v̂ does not guarantee that they are necessarily Canonically Conjugate Operators (CCOs). For this to be true we additionally need that the eigenvectors of û and v̂ be connected by a Fourier Transform. In such a case we call û and v̂ CCOs (also note that when two operators like the ones defined in Equation (3) have their eigenstates connected by a Fourier Transform, they necessarily satisfy a commutation relation like in Equation (6), as can be easily shown. However, the converse is not true. Take for example the single mode operators û = x̂ and v̂ = x̂ + p̂, which satisfy [û, v̂] = [x̂, p̂] = iħ but are not a Fourier pair).
Every pair of operators, û and v̂, that obey a CCR also satisfies the statistical UR:

$$ \sigma^2_{P_u}\,\sigma^2_{P_v} \ge \frac{\hbar^2\gamma^2}{4}, \qquad (8) $$

where σ²_{P_u} := ⟨û²⟩ − ⟨û⟩² and σ²_{P_v} := ⟨v̂²⟩ − ⟨v̂⟩² are the variances of the marginal probability distribution functions (pdf):

$$ P_u(u) = \langle u|\hat{\rho}|u\rangle, \qquad P_v(v) = \langle v|\hat{\rho}|v\rangle, \qquad (10) $$

where we have defined ⟨. . .⟩ := Tr(. . . ρ̂), with ρ̂ being an arbitrary n-mode quantum state. We call the UR in Equation (8) the Heisenberg UR, or variance-product UR. For one mode CCOs, such as û = x̂ and v̂ = p̂ (therefore γ = 1), the Heisenberg UR in Equation (8) was first proved by Kennard in 1927 [51], inspired by the inequality in Equation (1) of Heisenberg's seminal paper of the same year [33]. Later, it was also proved by Weyl in 1928 [52]. In 1929 Robertson [53] extended the Heisenberg UR to any pair of Hermitian operators Â and B̂:

$$ \sigma^2_{P_A}\,\sigma^2_{P_B} \ge \frac{1}{4}\left|\langle[\hat{A},\hat{B}]\rangle\right|^2. \qquad (12) $$

This result extends the Heisenberg UR in Equation (8) to û and v̂ that are not CCOs. For every variance-product UR in Equation (12) there is an associated linear UR:

$$ \sigma^2_{P_A} + \sigma^2_{P_B} \ge \left|\langle[\hat{A},\hat{B}]\rangle\right|. \qquad (13) $$

In fact, this UR is a consequence of Equation (12) and the trivial inequality (σ_{P_A} − σ_{P_B})² ≥ 0, so that

$$ \sigma^2_{P_A} + \sigma^2_{P_B} \ge 2\,\sigma_{P_A}\sigma_{P_B} \ge \left|\langle[\hat{A},\hat{B}]\rangle\right|, \qquad (14) $$

from which it also follows that the linear UR is weaker than the variance-product UR. In 1930 Schrödinger [54] improved the lower bound in Equation (12), so the new stronger UR reads:

$$ \sigma^2_{P_A}\,\sigma^2_{P_B} \ge \left|\frac{\langle[\hat{A},\hat{B}]\rangle}{2}\right|^2 + \left|\frac{\langle\{\hat{A}-\langle\hat{A}\rangle,\,\hat{B}-\langle\hat{B}\rangle\}\rangle}{2}\right|^2, \qquad (15) $$

where {· · · , · · ·} is the anti-commutator. One interesting property of the Heisenberg UR in Equation (8) is that the lower bound is independent of the quantum state ρ̂ under consideration. Another property is that it can be seen as a bona fide condition on the covariance matrix of an n-mode quantum state ρ̂, viz. the matrix of second moments of the CCOs, contained in the vector x̂, of the state ρ̂:

$$ V_{jk} = \frac{1}{2}\left\langle\{\hat{x}_j - \langle\hat{x}_j\rangle,\,\hat{x}_k - \langle\hat{x}_k\rangle\}\right\rangle. \qquad (16) $$

Indeed, in [55,56] it was shown that the bona fide condition on the covariance matrix V of a quantum state ρ̂ is

$$ \mathbf{V} + \frac{i\hbar}{2}\mathbf{J} \ge 0, \qquad (17) $$

where the inequality means that the matrix on the left hand side is positive semi-definite, viz. all of its eigenvalues are greater than or equal to zero. Applying the inequality in Equation (15) to the canonical conjugate operators x̂ and p̂ yields the corresponding bound in Equation (18). For one-mode systems, this inequality is equivalent to the bona fide condition in Equation (17). However, for multimode systems it is not enough. For multimode systems, a way to verify the bona fide condition on the covariance matrix was given in [57,58]. It was shown that testing the condition in Equation (17) is equivalent to verifying the linear UR in Equation (13) for all the operators, û and v̂, defined in Equation (3). Therefore, using Equation (14) we can write the series of implications in Equation (19) relating the Heisenberg UR (8), the linear UR (13), and the bona fide condition (17). Thus, it is enough to verify the violation of the Heisenberg UR for some pair of operators û and v̂ to confirm that the bona fide condition on the covariance matrix of some n-mode operator ρ̂ is not satisfied.
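To make the bona fide condition in Equation (17) concrete, the short sketch below builds the symplectic matrix J for n modes and numerically checks positive semi-definiteness of V + iħJ/2; it assumes ħ = 1 and the block ordering (x̂_1, …, x̂_n, p̂_1, …, p̂_n) used above, and the covariance matrices are illustrative rather than taken from the review.

```python
import numpy as np

def symplectic_form(n):
    """J for the ordering x_1..x_n, p_1..p_n (convention of Eq. (7) assumed)."""
    I, O = np.eye(n), np.zeros((n, n))
    return np.block([[O, I], [-I, O]])

def is_bona_fide(V, hbar=1.0):
    """Check V + i*(hbar/2)*J >= 0, i.e. the Hermitian matrix has no negative eigenvalue."""
    n = V.shape[0] // 2
    M = V + 1j * (hbar / 2) * symplectic_form(n)
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-12))

hbar = 1.0
V_vacuum = (hbar / 2) * np.eye(2)        # single-mode vacuum/coherent state
V_bad = np.diag([0.1, 0.1]) * hbar       # both quadrature variances too small
print(is_bona_fide(V_vacuum, hbar))      # True
print(is_bona_fide(V_bad, hbar))         # False: not a valid quantum covariance matrix
```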
Entropic URs
The use of entropy functions to quantify uncertainty of a probabilistic variable dates back to the early work of Shannon [59]. Since then, several different entropy functions have been defined, with distinct relations to meaningful characteristics of the probability distributions considered. A number of these entropy functions have found use in quantum mechanics and, in particular, in QIT [9]. Here we outline the application of these functions to uncertainty relations between non-commuting observables.
Shannon-entropy UR
The UR based on the differential Shannon entropy for the operators defined in Equation (3) is:

$$ h[P_u] + h[P_v] \ge \ln\left(e\pi\hbar|\gamma|\right), \qquad (20) $$

where P_u and P_v are the marginal pdfs defined in Equation (10) and the differential Shannon entropy of a pdf, P, is defined as [60]:

$$ h[P] := -\int \mathrm{d}z\, P(z)\ln P(z). \qquad (21) $$

For û and v̂ as CCOs, this uncertainty relation was first proved in 1975 by Bialynicki-Birula and Mycielski [61]. In their derivation the authors used the L_p–L_q norm inequality for the Fourier transform operator obtained by Beckner [62]. Please note that in the literature this inequality is sometimes referred to as the Babenko-Beckner inequality (Equation 1.104 from [12] provides an extension of this inequality to the case of arbitrary mixed states, using two variants of the Minkowski inequality), because Babenko [63] had proved it before Beckner, but only for certain combinations of (p, q) parameters. For the sake of completeness, we should also mention that Hirschman [64] had derived a weaker version of (20) with the constant eπ inside the logarithm replaced by 2π. The extension of the validity to operators û and v̂ that are not CCOs was provided very recently in References [58,65].
The Shannon-entropy UR is in general stronger than the Heisenberg UR, as the former implies the latter. This can be seen by using the inequality for a pdf P [60]:

$$ h[P] \le \frac{1}{2}\ln\left(2\pi e\,\sigma^2_P\right), \qquad (22) $$

where σ²_P is the variance of P. Therefore, we can write the chain of inequalities:

$$ \ln\left(2\pi e\,\sigma_{P_u}\sigma_{P_v}\right) \ge h[P_u] + h[P_v] \ge \ln\left(e\pi\hbar|\gamma|\right), \qquad (23) $$

which encompasses the URs in Equations (8) and (20). It is clear from Equation (23) that the verification of the Shannon-entropy UR for any pair of the operators in Equation (3) is enough to guarantee the bona fide condition in Equation (17) [58]. When the quantum state ρ̂ is Gaussian, viz. when the Wigner function of ρ̂ is a multivariate Gaussian probability distribution [11], the marginal pdfs, P_u and P_v, are also Gaussians. Remembering that the differential Shannon entropy of a Gaussian pdf P, with variance σ²_P, is h[P] = (1/2) ln(2πeσ²_P) [60], we can see that Gaussian states saturate the first inequality in Equation (23). Therefore, for Gaussian states the Heisenberg UR and the Shannon-entropy UR are completely equivalent. As we will see in Section 5 this is not the case for the coarse-grained versions of these URs.
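A minimal numerical sketch of this chain (assuming ħ = 1 and γ = 1, so the bound is ln(eπ)): for Gaussian marginals the entropies are given by the closed form quoted above, a minimum-uncertainty state saturates the bound, and a noisier state exceeds it. The variances used are illustrative.

```python
import numpy as np

hbar = 1.0
def h_gauss(var):
    # differential Shannon entropy of a Gaussian pdf with variance `var`
    return 0.5 * np.log(2 * np.pi * np.e * var)

bound = np.log(np.e * np.pi * hbar)          # right-hand side of Eq. (20) for gamma = 1

# minimum-uncertainty (squeezed) Gaussian: sigma_x * sigma_p = hbar/2
var_x = 0.1
var_p = (hbar / 2) ** 2 / var_x
print(h_gauss(var_x) + h_gauss(var_p), ">=", bound)   # equal: saturation

# thermal-like state with larger variances: strictly above the bound
print(h_gauss(1.0) + h_gauss(1.0), ">=", bound)
```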
Rényi-Entropy URs
The UR based on the differential Rényi entropy for the operators defined in Equation (3) that are CCOs is given by the inequality:

$$ h_\alpha[P_u] + h_\beta[P_v] \ge \ln\left(\frac{\pi\hbar}{\alpha^{\frac{1}{2-2\alpha}}\,\beta^{\frac{1}{2-2\beta}}}\right), \qquad (24) $$

where 1/α + 1/β = 2, with 1/2 ≤ α ≤ 1, and γ = 1 since we deal with CCO operators. As before, P_u and P_v are the marginal pdfs defined in Equation (10) and the differential Rényi entropy of order α for an arbitrary pdf, P, is defined as [60]:

$$ h_\alpha[P] := \frac{1}{1-\alpha}\ln\int \mathrm{d}z\, P^{\alpha}(z). \qquad (25) $$

The Rényi-entropy UR was proved recently (in 2006) by Bialynicki-Birula [31] (see also [12]), again with the help of the powerful mathematical tools developed in [62]. Please note that in the limit α → 1 we also have β → 1, and consequently α^{1/(2−2α)} β^{1/(2−2β)} → 1/e. Therefore, in the limit α → 1 the Rényi entropies reduce to the Shannon entropies, so the expression in Equation (24) reduces to the Shannon-entropy UR in Equation (20) for γ = 1. As far as we know, in contrast to the Shannon-entropy UR, the extension of the Rényi-entropy UR to the general case of operators that are not necessarily CCOs is still a challenge for the future. A first attempt in this direction was provided in Reference [65], where the authors show that the Rényi UR in Equation (24) is still valid when the eigenvectors of û and v̂ are connected by a Fractional Fourier Transform [8], which corresponds to a rotation in phase space.
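The sketch below numerically illustrates Equation (24) as written above, using the closed-form Rényi entropy of a Gaussian pdf (a standard result, assumed here rather than derived in the review); minimum-uncertainty Gaussian marginals then saturate the bound for every conjugate pair (α, β).

```python
import numpy as np

hbar = 1.0
def h_renyi_gauss(var, a):
    # differential Renyi entropy of order `a` for a Gaussian pdf with variance `var`
    if np.isclose(a, 1.0):
        return 0.5 * np.log(2 * np.pi * np.e * var)          # Shannon limit
    return 0.5 * np.log(2 * np.pi * var) - np.log(a) / (2 * (1 - a))

def rhs(a, b):
    # right-hand side of Eq. (24) for gamma = 1
    return np.log(np.pi * hbar) - np.log(a) / (2 - 2 * a) - np.log(b) / (2 - 2 * b)

var_x = 0.2
var_p = (hbar / 2) ** 2 / var_x          # minimum-uncertainty Gaussian marginals
for a in (0.6, 0.75, 0.9):
    b = a / (2 * a - 1)                  # conjugate index: 1/a + 1/b = 2
    lhs = h_renyi_gauss(var_x, a) + h_renyi_gauss(var_p, b)
    print(f"alpha={a:.2f}, beta={b:.2f}: {lhs:.4f} >= {rhs(a, b):.4f}")
```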
All of the URs mentioned in this section (this is a general pattern, though) can be cast in the general form F[P_u, P_v] ≥ f, where F is an uncertainty functional [the left hand side of the inequalities in Equations (8), (20) and (24), for example] and f represents the respective lower bound. In particular, we do not pay much attention here to the Tsallis entropy and the URs associated with it. Again, such URs can be cast in the general form stated above and their derivation is usually very similar in spirit to the case of the Rényi entropy. In Section 3 we will summarise the relevance of statistical URs in general and in particular the URs in Equations (8), (20) and (24). In Section 5 we will present versions of the Heisenberg, Shannon-entropy and Rényi-entropy URs for coarse-grained measurements.
Utility of Uncertainty Relations in Quantum Physics
Uncertainty relations can be applied in several useful and interesting ways. First, they provide a way to test whether experimental results are consistent with quantum mechanics, since data from the measurement of incompatible observables must satisfy any valid quantum UR. This is particularly helpful in identifying systematic errors in the measurement process, in testing the experimental reconstruction of density matrices, phase-space distributions (quantum state tomography), as well as the covariance matrix [66], or any other set of moments of the CCOs of the modes.
URs can also be used to characterize non-classical states of light, such as squeezed states [67]. In this case, observation of a variance σ²_{P_u} ≤ ℏ/4, where û is a phase-space quadrature as in Equation (3), indicates noise fluctuations in this quadrature that are smaller than those of the vacuum state. As a consequence of the Heisenberg UR, the noise fluctuations in the conjugate quadrature must then be larger than or equal to ℏ/(4σ²_{P_u}). In a similar fashion, in Reference [68] it was shown that violation of one out of an infinite hierarchy of inequalities involving normally ordered quadrature moments is sufficient to demonstrate non-classicality. We note that σ²_{P_u} ≤ ℏ/4 corresponds to the lowest-order inequality of this set. Related techniques have been developed based on the quantum version of Bochner's theorem for the existence of a positive semi-definite characteristic function [69,70]. Both of these methods have been used experimentally in Reference [71]. More recently, these two techniques were unified into a single criterion involving derivatives of the characteristic function [72], and put to the test on a squeezed vacuum state.
To our knowledge, the first application of URs to identify quantum correlations was described in Reference [73], in which the authors proposed a Heisenberg-like UR, similar to that in Equation (8), to identify non-classical correlations between both the phases and intensities of the fields produced by a non-degenerate parametric oscillator. It was shown by M. Reid [74] that these measurements provide a method to demonstrate correlations for which the seminal Einstein-Podolsky-Rosen (EPR) argument [75] is valid. An experiment using this UR-based method to demonstrate EPR correlations between light fields was realized shortly thereafter [76]. It was later shown by Wiseman et al. [77,78] that the Reid EPR criterion is indeed a method to identify quantum states that violate a "local hidden state" model of correlations. This type of correlation has been called "EPR-steering", or just "steering" [79], as this was the terminology used by Schrödinger when he discussed EPR correlations in 1935 [80]. Since 2007, EPR-steering has been understood to make up part of a hierarchy of quantum correlations, situated between entanglement [81,82] and Bell non-locality [83]. In addition to methods utilizing variance-based URs [84], entropic URs, such as those in Section 2.2, can be used to identify EPR-steering [85,86] and to quantify high-dimensional entanglement [87,88]. Some of these URs can be used to test security in continuous variable quantum cryptography [89,90], and it has been shown that violation of entropic EPR-steering criteria is directly related to the secret key rate in one-sided device-independent cryptography [91]. We also highlight techniques based on a matrix-of-moments approach [92]. Continuous-variable EPR-steering has been observed in intense fields [76,93,94] as well as in photon pairs [85,95-97].
Perhaps one of the most important tasks in quantum information is identifying quantum entanglement. In this respect, URs have also found widespread use in simple and experimentally friendly entanglement detection methods, as we will now describe. Several early entanglement criteria for bipartite CV systems were developed using URs [98-101]. A particularly convenient method to construct entanglement criteria is to use the Peres-Horodecki positive partial transposition (PPT) argument [102,103] and apply it to uncertainty relations [82,104-107]. The PPT argument is as follows. A bipartite separable state σ̂₁₂ can be written as [108]

σ̂₁₂ = Σ_i p_i ρ̂_{1i} ⊗ ρ̂_{2i}, (27)

where ρ̂_{1i} and ρ̂_{2i} are bona fide density operators of subsystems 1 and 2, respectively. The transpose of the state ρ̂_{2i}, here denoted ρ̂ᵀ_{2i}, is still a positive operator, since full transposition preserves the eigenspectrum. Thus, partial transposition (with respect to the second subsystem) of σ̂₁₂ gives the valid quantum state σ̂₁₂^{T₂} = Σ_i p_i ρ̂_{1i} ⊗ ρ̂ᵀ_{2i}. On the other hand, partial transposition of an entangled state ϱ̂₁₂, which cannot be written in the form (27), can lead to a non-physical density matrix, since partial transposition may not preserve the positivity of the eigenspectrum. Thus, one can identify entanglement in a bipartite density operator by calculating the partial transposition and searching for negative eigenvalues, and even quantify the amount of entanglement via the negativity [109]. However, application of this method in experiments requires quantum state tomography and reconstruction of the density operator, which involves a large number of measurements. A more experimentally friendly method to identify entanglement is to evaluate an UR applied to the partial transposition of ϱ̂₁₂, which we describe in the next paragraph. The PPT argument provides only a sufficient entanglement criterion for a general bipartition of m × (n − m) modes, but it is necessary and sufficient in the particular case of bipartitions of 1 × (n − 1) modes of CV Gaussian states [10,110]. Thus, there are no Gaussian states which are PPT entangled states in bipartitions of the form 1 × (n − 1). However, there do exist entangled CV Gaussian states that are PPT in general bipartitions of the type m × (n − m). These are called bound entangled states [111]. In Gaussian states, this set of bound entangled states coincides with the set of all states whose entanglement in a bipartition m × (n − m) cannot be distilled using local operations and classical communication [112-114]. For non-Gaussian states, to our knowledge, it is conjectured that the set of bound entangled states in a given bipartition is only a subset of the set of undistillable states in that bipartition.
For continuous variables, Simon showed that transposition is equivalent to a momentum reflection, taking the Wigner phase-space distribution W(x, p) → W^{T₂}(x, p) = W(x, Tp) [57], where T is a diagonal matrix whose elements are +1 for non-transposed modes and −1 for the transposed ones. Thus, evaluating the "transposed" Wigner function is the same as evaluating the original Wigner function with a sign change in the reflected p variables.
For simplicity, we consider now the particular example of global operators of a bipartite state:

û± = û₁ ± û₂ (29)

and

v̂± = v̂₁ ± v̂₂. (30)
We note that operators with the same sign satisfy the commutation relation [û±, v̂±] = 2iℏγ, so that these non-commuting operators, when used as inputs to the uncertainty functionals, fulfill the UR of the aforementioned form,

F[P_{û±}, P_{v̂±}] ≥ f(2ℏ|γ|),

[note the factor of 2 in the argument of f(·)]. Using the transformation of the Wigner function under partial transposition described above, one can evaluate the uncertainty functional of the partially transposed state ϱ̂₁₂^{T₂} via measurements on the actual state ϱ̂₁₂ using the relation F[P_{û±}, P_{v̂±}](ϱ̂₁₂^{T₂}) = F[P_{û±}, P_{v̂∓}](ϱ̂₁₂), which can be lower than f(2ℏ|γ|) since the operators with different signs do commute. This possibility, when experimentally confirmed, indicates that ϱ̂₁₂^{T₂} is not a bona fide density operator, and thus the bipartite quantum state ϱ̂₁₂ is entangled.
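As a minimal numerical illustration of this PPT+UR test (not a reproduction of any specific experiment), the sketch below applies the variance-product version of the criterion to a two-mode squeezed vacuum, assuming ℏ = 1, γ = 1, vacuum quadrature variance 1/2, and the standard variances of the global operators of Equations (29) and (30) for that state.

```python
import numpy as np

# Conventions assumed: hbar = 1, [x_i, p_j] = i*delta_ij, vacuum quadrature variance 1/2,
# so for a two-mode squeezed vacuum with squeezing parameter r,
#   var(x1 - x2) = var(p1 + p2) = exp(-2r),  var(x1 + x2) = var(p1 - p2) = exp(+2r).
hbar, gamma = 1.0, 1.0
separable_bound = hbar**2 * gamma**2        # f(2*hbar*|gamma|) for the variance-product UR

def ppt_variance_test(r):
    var_u_minus = np.exp(-2 * r)            # var(x1 - x2), measured on the actual state
    var_v_plus = np.exp(-2 * r)             # var(p1 + p2)
    product = var_u_minus * var_v_plus
    return product, product < separable_bound   # True -> entanglement detected

for r in [0.0, 0.3, 1.0]:
    product, entangled = ppt_variance_test(r)
    print(f"r = {r:.1f}: var(x1-x2)*var(p1+p2) = {product:.3f}, entangled detected: {entangled}")
# r = 0 (two-mode vacuum) sits exactly at the separable bound; any r > 0 violates it.
```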
Building on this general reasoning (the PPT argument applied to an UR), several entanglement criteria have been developed. A comprehensive list of such criteria contains those based on variances [115,116] and higher-order moments [117,118], the Shannon entropy [105], the Rényi entropy [106], the characteristic function [119], as well as the triple product variance relation [120]. Particularly powerful is the formalism developed by Shchukin and Vogel, which provides an infinite set of inequalities involving moments of the bipartite state [121], such that violation of a single inequality indicates entanglement. We note that some of these criteria can be applied to any non-commuting global operators. Uncertainty-based approaches (using the PPT method directly or not) have been developed for multipartite systems [122,123], and a general framework to construct entanglement criteria for multipartite systems based on the "PPT+UR" interrelation was presented in Reference [107]. The Shchukin-Vogel hierarchy of moment inequalities has also been applied to the multipartite case [124].
The PPT+UR approach has been used to identify continuous-variable entanglement experimentally in several systems, including entangled fields from parametric oscillators and amplifiers [94,125,126], spatially entangled photon pairs produced from parametric down-conversion [96,120,127], and time/frequency entangled photon pairs [128,129]. A higher-order inequality in the Shchukin-Vogel criteria [121] has been used to observe genuine non-Gaussian entanglement [130].
Realistic Coarse-Grained Measurements of Continuous Distributions
Coarse graining of observables with continuous spectra is a consequence of any realistic measurement process. In the laboratory, an experimentalist is given the task of designing projective measurements in order to recover information about probability densities of a continuous-variable quantum system. Naturally, only partial information about the underlying continuous structure of the infinite-dimensional physical system is retrieved in a laboratory experiment. Whichever measurement design is chosen, the experimentalist is faced with two main difficulties, namely the finite detector range and the finite measurement resolution, related, respectively, to the size of the total region of possible detection events and to the precision with which events are registered. The detector range problem [25,29] results from the finite amount of resources available to the experimentalist. For instance, consider a position discriminator based on a multi-element detector array. The array has a spatial reach (in a single spatial dimension) that increases linearly with the number of detectors. In a similar fashion, the sampling time of a single-element detector used in raster-scanning mode increases linearly with the chosen detection range. Continuous variables such as the position are also inevitably affected by the inherent finite resolution of the measurement apparatus [32], such as the size of each individual detector in the array, or the pixel size of a camera. Altogether, the finite detector range and measurement resolution restrict the capability to probe the detection position, limiting the experimentalist to a coarse-grained sample of the underlying CV degree of freedom.
The constraints imposed by the finite spatial reach and resolution of the measurement apparatus are thus important features that must be considered in the experimental design. Ideally, the experimentalist would choose measurement settings producing the finest coarse-grained sample possible. As a trade-off, increased resolution entails the sampling of a greater number of pixels (if the range of detection is preserved), increasing the amount of resources used in data acquisition and analysis. The compromise between the resources used and the chosen resolution depends on the specific design and measurement technique. A single raster-scanning detector is inherently inefficient and leads to acquisition times that grow with the number of scanned outcomes. On the other hand, the acquisition time is dramatically reduced by the use of multi-element detector arrays [131-134]. Other techniques, such as position-to-time multiplexing [135,136], allow the sampling of multiple position outcomes with single-element detectors, but at the expense of an increased dead time between consecutive detections. We have exemplified the finite detector range and finite measurement resolution problems in terms of a detector that registers the position of a particle. However, similar considerations are valid for any detection system that registers a digitized value of a continuous physical parameter.
Under constraints on resource utilisation, such as the number of detectors and/or the sampling time, the experimentalist needs to set the number of possible detection outcomes for their coarse-grained measurements. A natural question that arises, therefore, concerns which coarse-graining design allows the extraction of the desired information. Naively, one might think that the usual quantum mechanical features learnt from physics textbooks would be directly observable from the coarse-grained distributions obtained in the laboratory. The most prominent counter-example is the experimental observation of the Heisenberg UR in Equation (8). As shown in Reference [32], coarse-grained distributions of conjugate continuous variables do not necessarily satisfy the well-known UR valid for continuous distributions. In order to accurately inspect the uncertainty product of the measured distributions in accordance with the Heisenberg UR, the latter must be modified to account for the detection resolution of the measurement apparatus. Another important quantum mechanical feature that one usually fails to observe from standard coarse-grained distributions is the mutual unbiasedness [137] of measurement outcomes of incompatible observables. That is, eigenstates of, say, the coarse-grained position operator do not necessarily produce a uniform distribution of outcomes for coarse-grained momentum measurements. In addition, some authors [138-141] have demonstrated that one can define functions of incompatible observables that indeed commute. Interestingly, it was shown in Reference [142] that one can indeed recover full quantum mechanical unbiasedness using a specific periodic coarse-graining design rather than the standard one. Other practical issues regarding false positives in entanglement detection [26,29] and cryptographic security [25,29] must also be reconsidered when one deals with realistic coarse-grained distributions.
In this section, we will introduce the projective measurement operators for both the standard and the periodic models of coarse graining. Practical features such as measurement resolution, detector range and positioning degrees of freedom in the measurement design will be discussed. We will also briefly discuss relations of mutual unbiasedness between coarse-grained measurement outcomes in the domains of incompatible observables. A detailed discussion of uncertainty relations for coarse-grained distributions will be presented in the next section.
Coarse-Graining Models
A laboratory experiment necessarily yields a discrete, finite set of measurement outcomes for any observable of any physical system. This is also the case for an experiment probing a continuous degree of freedom, û, for which measurement outcomes {u_k}, labeled by the discrete integer index k ∈ Z, relate to the underlying continuous real variable u ∈ R corresponding to the eigenspectrum of û. In the most general scenario, a coarse-graining model is obtained from an arbitrary partition of the set of real numbers R into intervals R_k with u_k ∈ R_k. The orthogonality of the measurement outcomes requires the subsets to be mutually disjoint: R_k ∩ R_{k'} = ∅, ∀ k ≠ k'. Even though the continuous variable can be formally discretised into an infinite number of outcomes (with k an unbounded integer), the experiment can only probe a finite range of the continuous variable. Thus, the detection range, R_range, can be formally defined by the union of the disjoint subsets associated with the probed outcomes:

R_range = ∪_{k ∈ Z_k} R_k. (33)

This relation limits the set of possible values of k to a finite subset of integers Z_k ⊂ Z. Due to the finite range, R_range, of the measurement process it is important to ensure, under reasonable experimental conditions, that the underlying probability density is supported within the chosen range of detection [25,29]. Mathematically, a faithful coarse-grained measurement design should ensure that

∫_{R_range} P_u(u) du ≈ 1, (34)

where P_u is the marginal pdf defined in Equation (10).
The probability p^(u)_k that the outcome u_k is produced is written as an integral of the marginal probability density, P_u, over the continuous variable:

p^(u)_k = ∫_{R_k} P_u(u) du, (35)

where the integration is performed over the interval R_k. Due to the faithful coarse-graining condition in Equation (34) we have

Σ_{k ∈ Z_k} p^(u)_k ≈ 1. (36)

We can define projective operators associated with the coarse-grained measurements:

Ĉ_k = ∫_{R_k} du |u⟩⟨u|, (37)

so that the probabilities (35) can be written as p^(u)_k = Tr(ρ̂ Ĉ_k) = ∫_{R_k} P_u(u) du, with P_u(u) = ⟨u|ρ̂|u⟩. In order to study mutual unbiasedness and uncertainty relations, we shall later in this and the following sections define coarse-grained operators like those in Equation (37) for conjugate variables of the quantum state, such as the position and the linear momentum of a quantum particle.
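As a simple illustration of Equations (35) and (36), the following sketch (the Gaussian marginal, bin width and detection range are arbitrary illustrative choices) computes the coarse-grained probabilities of a continuous pdf and checks the faithfulness condition.

```python
import numpy as np
from scipy import integrate

# Coarse-grained probabilities p_k = int_{R_k} P_u(u) du for a Gaussian marginal,
# using adjacent bins of width Delta over a finite detection range.
sigma = 1.0
P_u = lambda u: np.exp(-u**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

Delta = 0.5                      # bin width
k_values = np.arange(-10, 11)    # probed outcomes; detection range = [-5.25, 5.25)
u_centers = k_values * Delta     # u_k with u_cen = 0

p_k = np.array([integrate.quad(P_u, u - Delta / 2, u + Delta / 2)[0] for u in u_centers])

print("sum of coarse-grained probabilities:", p_k.sum())   # ~1: the faithfulness condition
print("most likely outcome:", k_values[np.argmax(p_k)])
```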
Standard Coarse Graining
The standard model of coarse graining describes, for example, the typical projective measurements performed with an array of adjacent, rectangular detectors. A conventional example of such an apparatus is the image sensor of a digital camera, for which the pixel size stands for the detection resolution whereas the length of the full sensor embodies the range of detection. In the current analysis, we shall consider a linear detector array along a single spatial dimension rather than the two-dimensional area of a typical image sensor, as illustrated in Figure 1 (a multi-element detector array and its detection range). The coarse-graining interval representing the detection window of the k-th pixel of the linear array is then:

R_k = [u_cen + (k − 1/2)∆, u_cen + (k + 1/2)∆), (39)

where ∆ is the detector or pixel size, also commonly referred to as the coarse-graining width or the bin width. Using the definition in Equation (39), the discretised outcomes u_k represent the u value of the center of the corresponding bin:

u_k = u_cen + k∆. (40)

The parameter u_cen sets the position of the central bin of the array, whose outcome label is k = 0, yielding u_0 = u_cen. To illustrate the effect of the coarse-graining design on measured distributions, we plot in Figure 2 coarse-grained distributions (blue bars) obtained using 3 different resolutions: ∆ = 2 (left column), ∆ = 1 (central column) and ∆ = 1/2 (right column). For each resolution, we plot two distinct distributions obtained using u_cen = 0 (top row) and u_cen = ∆/2 (bottom row). In other words, the coarse-graining bins of the distributions plotted at the bottom of the figure are displaced by half a "pixel" with respect to the distributions at the top. Clearly, the distribution obtained using a fixed resolution is not unique, but the effect of small displacements (smaller than the bin width) becomes less important as the resolution is increased. For comparison, the generating continuous distribution is plotted in red. We shall now use this model of standard coarse graining to explicitly define the discretised counterparts of the position and momentum operators given in Equation (3).
These discretised operators can be written as

û_∆ = Σ_k u_k Ĉ_k and v̂_δ = Σ_l v_l Ĉ_l, (41)

where the projector Ĉ_k is defined in Equation (37) (with an equivalent definition for v̂ measurements), and we use ∆ (δ) as the detection resolution for û (v̂) measurements. According to the definition in Equation (35), as a result of the coarse-grained measurement of û and v̂ we obtain the discrete probabilities p^(u)_{∆,k} and p^(v)_{δ,l}. The discrete variances associated with these discrete probabilities are:

σ²_{p^(u)_∆} = Σ_k p^(u)_{∆,k} u_k² − (Σ_k p^(u)_{∆,k} u_k)², σ²_{p^(v)_δ} = Σ_l p^(v)_{δ,l} v_l² − (Σ_l p^(v)_{δ,l} v_l)², (42)

where the sets of discrete probabilities are those obtained from Equation (35) with the bins of Equation (39). One can see from the definitions (42) that if the bin widths ∆ and δ are such that p^(u)_{∆,k} and p^(v)_{δ,l} are sufficiently close to unity for some value of k and l, we have σ²_{p^(u)_∆} ≈ σ²_{p^(v)_δ} ≈ 0. Thus, naive application of any of the variance-based URs given in Section 2.1 would indicate a false violation of a UR. It has been shown in Reference [32] that the same argument applies to discretized versions of entropic URs, such as those of Section 2.2. Thus, proper treatment of standard coarse-grained measurements is essential in order to take advantage of the practical application of URs in QIT and quantum physics in general. In Section 5 we show how this can be done.
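The false violation described above is easy to reproduce numerically. The sketch below (assuming ℏ = 1 and a minimum-uncertainty Gaussian pair with σ_u σ_v = 1/2; all values are illustrative) bins the two marginals with increasing widths and shows the naive product of discrete variances dropping below ℏ²/4.

```python
import numpy as np
from scipy import integrate, stats

hbar = 1.0
sigma_u, sigma_v = 1.0, 0.5      # saturates the continuous Heisenberg UR

def discrete_variance(sigma, width, n_bins=101):
    """Variance of the binned distribution, using bin centers as outcomes (Equation (42))."""
    centers = (np.arange(n_bins) - n_bins // 2) * width
    probs = np.array([integrate.quad(stats.norm(0, sigma).pdf,
                                     c - width / 2, c + width / 2)[0] for c in centers])
    mean = np.sum(probs * centers)
    return np.sum(probs * (centers - mean)**2)

for width in [0.1, 1.0, 3.0, 6.0]:
    product = discrete_variance(sigma_u, width) * discrete_variance(sigma_v, width)
    print(f"Delta = delta = {width}: naive variance product = {product:.4f} "
          f"(continuous bound hbar^2/4 = {hbar**2 / 4})")
# For large bins the product falls below 0.25: a false violation of Equation (8) that
# merely reflects improper treatment of the coarse graining.
```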
Periodic Coarse Graining
A distinct model of coarse graining discussed in the literature [142,143] is referred to as periodic coarse graining (PCG). In this model, the partition of the whole set of real numbers R is performed in a periodic manner, leading to a finite number d of subsets R_k, with k = 0, ..., d − 1. The resulting discretization uses the index k as a direct label for the detection outcomes, in a similar fashion to what is usually defined for finite-dimensional quantum systems. The subsets R_k are defined as [142]:

R_k = ∪_{n ∈ Z} [u_cen + nT_u + k s_u, u_cen + nT_u + (k + 1) s_u), (44)

where s_u plays the role of a bin width similar to the resolution ∆ used for the standard coarse graining. In the definition in Equation (44), bins of size s_u are arranged periodically, with the parameter T_u representing the period, as illustrated in Figure 3 for the particular design using d = T_u/s_u = 5 detection outcomes. It is important to notice that this coarse-graining design does not distinguish detections in distinct bins associated with the same detection outcome k (ranging from 0 to 4 in Figure 3). For example, a detection within any bin colored in red in Figure 3 would lead to the same detection outcome k = 1. An interesting feature of the PCG model is that the number of detection outcomes is fully adjustable by the choice of the parameters T_u and s_u, regardless of the chosen detection range. For instance, doubling the range of detection allows one to design a PCG measurement using twice as many periods, while maintaining the same number d = T_u/s_u of detection outcomes. As with the standard model, the reference coordinate u_cen sets the center of the detection range also for the PCG design. Using the subset definition given in Equation (44), we can explicitly write the projector operators, Equation (37), for the PCG model as

Π̂_k^(u) = Σ_{n ∈ Z} ∫_{u_cen + nT_u + k s_u}^{u_cen + nT_u + (k+1) s_u} du |u⟩⟨u|, (45)

where we extend the sum in n over Z without loss of generality, assuming that Equation (34) is satisfied. Analogously, we also define the PCG projective operators over the conjugate variable v:

Π̂_l^(v) = Σ_{n ∈ Z} ∫_{v_cen + nT_v + l s_v}^{v_cen + nT_v + (l+1) s_v} dv |v⟩⟨v|, (46)

where we define s_v and T_v as the bin width and the periodicity used in the PCG measurements of v.
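The outcome labeling implied by Equation (44) is easily implemented; the sketch below (assuming u_cen = 0 and arbitrary values of T_u and d) assigns the PCG outcome of a few detection positions and shows that detections one period apart are not distinguished.

```python
import numpy as np

# Periodic coarse graining: the outcome label of a detection at position u depends only
# on where u falls inside one period T_u, divided into d bins of width s_u = T_u / d.
def pcg_outcome(u, T_u, d):
    s_u = T_u / d
    return int(np.floor((u % T_u) / s_u))   # k in {0, ..., d-1}

T_u, d = 2.0, 5
for u in [0.1, 0.5, 2.1, -0.3, 7.75]:
    print(f"u = {u:+.2f}  ->  PCG outcome k = {pcg_outcome(u, T_u, d)}")
# Detections separated by a multiple of T_u (e.g. u = 0.1 and u = 2.1) produce
# the same outcome label.
```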
Mutual Unbiasedness in Coarse-Grained Measurements
If a finite-dimensional quantum system is prepared in an eigenstate of a given observable, the measurement outcomes of complementary observables are completely unbiased: each one of them occurs with equal probability, 1/d, where d is the dimension of the quantum system's Hilbert space. This unbiasedness relation is an important feature of quantum mechanics with no classical counterpart, and is usually cast in terms of the basis vectors constituting the eigenstates of two (or more) complementary observables. To be more precise, two orthonormal bases {|a_k⟩} and {|b_l⟩} are said to be mutually unbiased if and only if |⟨a_k|b_l⟩|² = 1/d for all k, l = 0, ..., d − 1 [137]. The observation of unbiased measurement outcomes is customary in experiments with finite-dimensional quantum systems. More than merely routine, measurements in mutually unbiased bases (MUB) constitute a key procedure in several quantum information processing tasks, such as verification of cryptographic security [9], certification of quantum randomness [144], detection of quantum correlations [145-147] and tomographic reconstruction of quantum states [148,149].
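A textbook example of a mutually unbiased pair is the computational basis and the discrete Fourier basis; the short check below (dimension d = 5 is an arbitrary choice) verifies |⟨a_k|b_l⟩|² = 1/d for all pairs.

```python
import numpy as np

d = 5
computational = np.eye(d, dtype=complex)                      # columns |a_k>
fourier = np.array([[np.exp(2j * np.pi * k * l / d) / np.sqrt(d)
                     for l in range(d)] for k in range(d)])   # columns |b_l>

overlaps = np.abs(computational.conj().T @ fourier)**2
print(np.allclose(overlaps, 1 / d))    # True: every overlap equals 1/d
```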
Mutual unbiasedness is also extendable to continuous-variable quantum systems [150], for which bases {|u⟩} and {|v⟩} of operators such that [û, v̂] = iℏγ satisfy |⟨u|v⟩|² = 1/(2πℏ|γ|), i.e., the overlap of the basis vectors |u⟩ and |v⟩ is independent (no bias) of their eigenvalues, u and v. (Note, however, that even though û and v̂ are mutually unbiased observables, this does not imply that they are complementary, as would be the case for operators in a discrete quantum system [151]. In continuous-variable quantum mechanics, mutual unbiasedness does not imply that û and v̂ are maximally incompatible [152]. In this case, complementary observables are typically defined as CCOs, that is, forming a Fourier transform pair.) For CV systems, nevertheless, this relation is rather a theoretical definition than an experimentally observable fact, since the experimentalist has neither the capability to prepare nor to measure the (infinitely squeezed) eigenstates of û and v̂. Instead, both the preparation and measurement procedures are limited by the finite resolution of the experimental apparatus. As discussed previously in this section, measurements of a CV degree of freedom render discretized, coarse-grained outcomes whose probabilities, Equation (35), are provided by a coarse-graining model described by the projective operators given in Equation (37). These coarse-grained probabilities obtained experimentally do not, in general, preserve the mutual unbiasedness obeyed by the underlying continuous variables.
To elaborate on the issue, let us consider sets of projectors {Ĉ_k^(u)} and {Ĉ_l^(v)} defining coarse-grained measurements in the complementary domains u and v of a continuous-variable quantum system ρ̂. We assume measurement designs providing a number d of outcomes in each domain. In this scenario, the requirement for mutual unbiasedness is that the coarse-grained probabilities for measurements of one variable are evenly spread between all discretized outcomes whenever the quantum state is localized with respect to the coarse graining applied to its conjugate variable (and vice-versa). The subtlety in this requirement is the (infinite) degeneracy of normalizable quantum states that can be localized with respect to the chosen coarse graining. To emphasize this degeneracy, we write the outcome probabilities, Equation (35), with an explicit dependence on the quantum state in order to mathematically phrase the condition for mutual unbiasedness in coarse-grained CV: the outcomes of {Ĉ_k^(u)} and {Ĉ_l^(v)} are mutually unbiased if for all quantum states ρ̂ and all k₀, l₀ = 0, ..., d − 1 we have [142]:

p_k^(u)(ρ̂) = δ_{k,k₀} ⟹ p_l^(v)(ρ̂) = 1/d, and p_l^(v)(ρ̂) = δ_{l,l₀} ⟹ p_k^(u)(ρ̂) = 1/d, (47)

where, again, we stress that the probabilities p_k^(u)(ρ̂) and p_l^(v)(ρ̂) depend on the quantum state ρ̂, as in Equation (35). Having formulated the condition for mutual unbiasedness, Equation (47), it is easy to perceive that the adjacent, rectangular subsets defining the standard coarse graining [Equation (39)] will not lead to unbiased measurement outcomes. Any CV distribution localized in a single coarse-graining bin (for example in the u variable) generates a probability density that decays in the Fourier domain (the v variable) along the adjacent bins within the detection range. This decay generates a non-constant coarse-grained distribution that, by definition, is biased. Furthermore, the number d of detection outcomes in the standard design depends directly on the selected detection range, as well as on the chosen resolution. As a consequence, even though a particular localized distribution could lead to approximately unbiased coarse-grained outcomes in the Fourier domain, an extended detection range would increase the number of outcomes, thus spoiling the unbiasedness.
It is thus evident that in order to retrieve unbiased outcomes from coarse-grained measurements, a more contrived coarse-graining design is needed. As it turns out, it was shown in Reference [142] that the PCG design exactly fulfils the requirements for unbiased measurements of finite cardinality stated in Equation (47). A relation between the periodicities T_u and T_v used in the PCG of the conjugate variables u and v was analytically derived as a single condition for unbiased coarse-grained measurements:

T_u T_v = (2πℏ) d/m, with m a positive integer coprime with d. (48)

The unbiasedness condition stated in Equation (48) establishes infinite possibilities for the pair of periodicities T_u and T_v that can be used to design the mutually unbiased pair of PCG measurements defined in Equations (45) and (46), respectively. For instance, the simplest and most important case is the condition with m = 1, since it is valid for all d and provides the best trade-off between experimentally accessible periodicities: T_u T_v = (2πℏ)d. Conditions with m > 1 are also possible but are not general, since they depend on the chosen number of outcomes d [142]. For example, for d = 4, valid conditions are found using m (mod d) = 1, 3, whereas for d = 5, valid conditions are found using m (mod d) = 1, 2, 3, 4. Importantly, the case with m (mod d) = 0 is always excluded, since in this case the PCG projectors describe commuting sets [138-140]; in other words, a joint eigenstate of the product Π̂_k^(u) Π̂_l^(v) exists for all k and l whenever T_u T_v = 2πℏ/c with c ∈ N [153]. It is also interesting to note that, using the periodicity definition from the PCG design (T = ds), it is possible to write the unbiasedness condition given in Equation (48) in alternative, equivalent ways:

s_u T_v = T_u s_v = (2πℏ)/m, or equivalently s_u s_v = (2πℏ)/(m d). (49)

Finally, in Reference [143] these results were generalized to PCG measurements applied to an arbitrary pair of phase-space variables other than the conjugate pair formed by position and momentum. What is more, a triple of unbiased PCG measurements was also shown to exist for rotated phase-space variables, along the same lines as the demonstration of a MUB triple in the continuous regime done in Reference [150]. Experimental demonstrations of unbiased PCG measurements were also carried out in References [142,143], both of them utilizing the transverse spatial variables of a paraxial light field.
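The unbiasedness can also be checked numerically. The sketch below (assuming ℏ = 1, the simplest condition T_u T_v = (2πℏ)d with m = 1, and a probe wavefunction constant on a single u-bin of outcome k = 0, whose momentum density has a simple closed form) verifies that all d momentum-domain PCG outcomes are obtained with probability 1/d.

```python
import numpy as np

hbar, d, m = 1.0, 5, 1
T_u = 2.0
T_v = 2 * np.pi * hbar * d / (m * T_u)    # unbiasedness condition
s_u, s_v = T_u / d, T_v / d

# psi(u) = 1/sqrt(s_u) on [0, s_u) gives |psi~(p)|^2 = (2*hbar/(pi*s_u*p^2)) * sin(p*s_u/(2*hbar))^2.
def momentum_density(p):
    p = np.where(np.abs(p) < 1e-9, 1e-9, p)      # regularize p = 0 (the limit is finite)
    return (2 * hbar / (np.pi * s_u * p**2)) * np.sin(p * s_u / (2 * hbar))**2

def pcg_momentum_probs(n_periods=2000, pts_per_bin=400):
    probs = np.zeros(d)
    dp = s_v / pts_per_bin
    offsets = (np.arange(pts_per_bin) + 0.5) * dp            # midpoint rule inside a bin
    for n in range(-n_periods, n_periods):
        for l in range(d):
            p = n * T_v + l * s_v + offsets
            probs[l] += momentum_density(p).sum() * dp
    return probs

q = pcg_momentum_probs()
print(np.round(q, 4))                          # every outcome close to 1/d = 0.2
print("unbiased:", np.allclose(q, 1 / d, atol=1e-3))
```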
UR for Coarse-Grained Observables
A kind of paradigm shift in the theory of uncertainty relations was brought about by the observation that uncertainty can be efficiently characterized solely by means of probability distributions. As a result, tools known from information theory, such as information entropy, Fisher information and other measures, came into play. Additionally, the notion of uncertainty for discrete systems could be better captured in this way. Since products of variances calculated for observables such as the spin are bounded in a state-dependent manner (so that the ultimate lower bound typically assumes the trivial value of 0), information entropies provide an attractive alternative [154]. For a discrete probability distribution p = (p_1, p_2, ...), the Rényi entropy of order α reads

H_α[p] = (1/(1−α)) ln Σ_k p_k^α.

Written already in the Rényi form, the above equation is the discrete counterpart of Equation (25), and it corresponds to the discrete counterpart of Equation (21) when α = 1.
In the finite-dimensional case, given by an arbitrary state ρ̂ acting on a d-dimensional Hilbert space H and a pair of non-degenerate, non-commuting observables Â and B̂, one usually defines the probabilities associated with projective measurements:

p_i^(A) = ⟨a_i|ρ̂|a_i⟩, p_j^(B) = ⟨b_j|ρ̂|b_j⟩,

where |a_i⟩ and |b_j⟩, i, j = 1, ..., d, are the eigenstates of the operators associated with the two observables. Discrete entropic URs for the above probability distributions are of the general form

H_α[p^(A)] + H_β[p^(B)] ≥ B_{αβ}(U),

with U ∈ U(d) being a unitary matrix with matrix elements U_{ij} = ⟨a_i|b_j⟩. We denote by c₁ the largest value of |U_{ij}|². The substantially more renowned Maassen-Uffink (MU) bound [155], derived in 1988, is B^{MU}_{αβ} = −ln c₁. This bound is, however, valid only for the conjugate parameters 1/α + 1/β = 2. Very recently, a plethora of new results [41,156-163] improving the celebrated MU bound has been obtained. In particular, an approach based on the notion of majorization (suitable from the perspective of resource theories and quantum thermodynamics [164]) provides a significant qualitative novelty [156,157,159,163], which will also be touched upon in this section.
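As a quick illustration of the MU bound (for α = β = 1, which satisfies 1/α + 1/β = 2), the sketch below takes the computational and Fourier bases in dimension d = 4 and a randomly drawn pure state; these choices are arbitrary and serve only to exercise the inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
U = np.array([[np.exp(2j * np.pi * i * j / d) for j in range(d)] for i in range(d)]) / np.sqrt(d)

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

p_A = np.abs(psi)**2                      # outcomes in the eigenbasis of A (computational)
p_B = np.abs(U.conj().T @ psi)**2         # outcomes in the eigenbasis of B (Fourier)

shannon = lambda p: -np.sum(p[p > 1e-15] * np.log(p[p > 1e-15]))
bound = -np.log(np.max(np.abs(U)**2))     # Maassen-Uffink bound -ln c_1 (= ln d here)
print(f"H(A) + H(B) = {shannon(p_A) + shannon(p_B):.4f} >= {bound:.4f}")
```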
In this review we are concerned with the case in which the continuous probability distributions P_u(u) and P_v(v) are replaced (viz. they were measured this way) by their discrete counterparts (k, l ∈ Z). According to the discussion in Section 4 we can use the definitions in Equations (35) and (39), and the condition in Equation (33), to write the discrete probabilities:

p^(u)_{∆,k} = ∫_{u_k − ∆/2}^{u_k + ∆/2} P_u(u) du, p^(v)_{δ,l} = ∫_{v_l − δ/2}^{v_l + δ/2} P_v(v) dv, (53)

with k ∈ Z_k ⊂ Z (and analogously for l). In the following we describe a series of URs for these discrete probabilities, known as coarse-grained URs, derived in [24,30-32]. These are the coarse-grained counterparts of the Heisenberg, Shannon-entropy and Rényi-entropy URs in Equations (8), (20) and (24), respectively. Here, we will closely follow the treatment in [24,32]; however, before we start we give a short historical overview and discuss a path towards extensions going beyond CCOs. The idea that generic quantum uncertainty could be quantified by the sum of Shannon entropies evaluated for discretized position and momentum probability distributions appeared for the first time in the contribution by Partovi [165]. He also derived the first coarse-grained UR, which in form is reminiscent of the Deutsch bound for finite-dimensional systems [154] (please note that both papers [154,165] were published in 1983; however, Partovi in his first sentence refers to a "recent letter" by Deutsch). Both bounds [154,165] were obtained by means of a direct optimization, independently applied to every logarithmic contribution. The symmetry between the developments of the URs for finite-dimensional and coarse-grained systems happened to be much deeper, as the second coarse-grained result, by Bialynicki-Birula [30], is a counterpart of the MU bound [155]. The former result is an application of the continuous variant of the Shannon-entropy UR (so of the L_p-L_q norm inequality by Beckner [62]) supported by the Jensen inequality for convex functions, while the MU bound is a direct consequence of the Riesz theorem for the l_p-l_q norms. Please note that, relatively often, the integration limits in (53) were chosen as "from k∆ to (k + 1)∆" and "from lδ to (l + 1)δ"; however, this choice causes a formal pathology in the limit of infinite coarse graining [166]. Thus, sticking to the terminology of Equation (39), in theory it is better to avoid borderline settings for the position of the central bin, i.e., u_cen = ∆/2.
To briefly report later developments, one should mention that Partovi reconsidered the problem he had posed several years earlier, pioneering applications of majorization techniques [167]. Schürmann and Hoffmann [168] discussed the Shannon-entropy UR from the perspective of the integral equation associated with it, while the first author conjectured an improvement (discussed in more detail later) which agrees with his numerical tests [169]. Finally, we mention (without details) an erroneous improvement of [31] by Wilk and Wlodarczyk [170,171], mainly devoted to the case of the Tsallis entropy.
Although the URs were originally derived for CCOs û and v̂, here we show which of the URs in [24,32] remain valid also for operators û and v̂ that are arbitrary linear combinations of all the positions and momenta of the n bosonic modes, like the ones defined in Equation (3), viz. operators that are not necessarily CCOs. In the general case, we stress that there is always a unitary metaplectic transformation Û_S (Û_S belongs to the metaplectic group Mp(2n, R) and is always associated with a matrix S that belongs to the symplectic group Sp(2n, R) [50]) that connects û and v̂, viz. v̂ = Û_S† û Û_S. However, this metaplectic transformation is not necessarily a π/2 rotation, which would be the case if û and v̂ were CCOs. In order to see this, we first define two sets of operators (û, û')ᵀ = (û = û₁, ..., ûₙ, û'₁, ..., û'ₙ)ᵀ = √γ S̃ x̂ and (v̂, v̂')ᵀ = (v̂ = v̂₁, ..., v̂ₙ, v̂'₁, ..., v̂'ₙ)ᵀ = √γ S' x̂, where S̃ and S' are some matrices belonging to the symplectic group Sp(2n, R), with the only restriction that the first rows of S̃ and S' correspond to the real coefficients d and d' in Equation (5), respectively, which define the operators û and v̂ in Equation (3). Due to the properties of symplectic matrices, all the pairs ûᵢ and û'ⱼ, and also v̂ᵢ and v̂'ⱼ, satisfy CCRs, viz. [ûᵢ, û'ⱼ] = iℏγδᵢⱼ and [v̂ᵢ, v̂'ⱼ] = iℏγδᵢⱼ with i, j = 1, ..., n. However, it is immediate to see that (v̂, v̂')ᵀ = S(û, û')ᵀ, where the matrix S := S' S̃⁻¹ is a generic symplectic matrix. Then the Stone-von Neumann theorem guarantees that the change (û, û')ᵀ → (v̂, v̂')ᵀ is unitarily implementable by a metaplectic transformation Û_S [50]. In particular we have Û_S† û Û_S = v̂, the first component of (v̂, v̂')ᵀ.
URs Proved Only for CCOs
The key concept behind the treatment of coarse-grained URs in [24,32] is the introduction of the piecewise-continuous probability density functions:

Q_{∆,u}(u) = Σ_{k ∈ Z_k} p^(u)_{∆,k} D_∆(u, u_k), Q_{δ,v}(v) = Σ_{l ∈ Z_l} p^(v)_{δ,l} D_δ(v, v_l), (54)

where D_∆(u, u_k) and D_δ(v, v_l) are called the histogram functions (HF), with u_k (and v_l in an analogous way) defined in Equation (40). Generically, these functions are defined such that they are normalized in each bin:

∫_{u_k − ∆/2}^{u_k + ∆/2} D_∆(u, u_k) du = 1, (55)

and approach the Dirac delta distribution for infinitesimal bin size:

lim_{∆→0} D_∆(u, u_k) = δ(u − u_k). (56)

Therefore, in the limit Z_k, Z_l → Z and ∆, δ → 0 we have Q_{∆,u}(u) → P_u(u) and Q_{δ,v}(v) → P_v(v). We stress here that the HF can, in general, have any functional form as long as it is non-negative, normalized and fulfills Equation (56). However, the most common histogram function is the rectangular HF:

D^R_∆(u, u_k) = 1/∆ for |u − u_k| ≤ ∆/2, and 0 otherwise, (57)

with an equivalent definition for D^R_δ(v, v_l). In Figure 2 we show examples of coarse-grained probability distribution functions Q_{∆,u}(u) (the areas beneath these functions are displayed in full), constructed using rectangular histogram functions and different bin sizes ∆.
The discrete Rényi entropy is always positive and, because lim_{x→∞} (2x/π) R²₀₀(x, 1) = 1 (see Equation (28) in [174]; this result is based on the appropriate asymptotic expansion [175] valid for z → ∞), the coarse-grained UR in Equation (63) is non-trivially satisfied for arbitrary (even very large) values of the coarse-graining widths. However, this desired property is not enjoyed by the UR first derived in [31]. That UR corresponds to Equation (63) in the coarse-grained regime Γ/4 ≲ 1.79, in which ε₁(Γ/4) = 1/e. Obviously, this is not a mere coincidence, as Equation (63) subsumes (65). This is clearly visible inside the definition of ε, which involves the minimum of two different bounds.
When Γ/4 > 1.79 the lower bound in Equation (65) is negative, so this UR is trivially satisfied, since the discrete entropy is always non-negative. From the above considerations we can obtain a UR for the variances σ²_{Q_{∆,u}} and σ²_{Q_{δ,v}} if we set α = 1 in Equation (58) and use the inequality (22), obtaining Equation (66). Now, we can use the decompositions:

σ²_{Q_{∆,u}} = σ²_{p^(u)_∆} + σ²_{D_∆}, σ²_{Q_{δ,v}} = σ²_{p^(v)_δ} + σ²_{D_δ}, (67)

where the variances of the discrete probability distributions were defined in Equation (42), while σ²_{D_∆} and σ²_{D_δ} are the variances of the generic HFs. Therefore, applying the above splitting to Equation (66) we arrive at the lower bound of Reference [24], Equation (68). When the HFs are rectangular, and in the coarse-grained regime Γ/(4|γ|) ≲ 1.79 where ε₁(Γ/(4|γ|)) = 1/e, we recover the UR [32]:

(σ²_{p^(u)_∆} + ∆²/12)(σ²_{p^(v)_δ} + δ²/12) ≥ ℏ²γ²/4, (69)

where we have used the fact that in this case

σ²_{D_∆} = ∆²/12 and σ²_{D_δ} = δ²/12. (70)

Both Equations (68) and (69) are coarse-grained versions of the Heisenberg UR in Equation (8). It is important to emphasize that Equation (69) cannot be obtained by simply substituting the discrete variances for the continuous ones in Equation (8). Although σ²_{D_∆} and σ²_{D_δ} are the variances of generic HFs, viz. D_∆(u, u_k) and D_δ(v, v_l) for any value of k and l, it is interesting to associate them with the respective central bins, namely those that contain the mean values of the probability distributions P_u and P_v. By doing this, together with choosing the origins of the coordinates in the middle of the central bins, we can see that the variances σ²_{p^(u)_∆} and σ²_{p^(v)_δ} are free from contributions associated with the statistics of the central bins. Thus, if the widths of the coarse graining increase in the measurement of û and v̂, the respective central bin widths grow, so that the variances σ²_{p^(u)_∆} and σ²_{p^(v)_δ} only involve contributions from the tails of the probability distributions Q_{∆,u} and Q_{δ,v}. Therefore, for large coarse grainings, the variances σ²_{D_∆} and σ²_{D_δ} become dominant in the inequalities in Equations (68) and (69). Thus, in the regime specified by Equation (71), both Equations (68) and (69) are satisfied trivially. Note that in Equation (71) we have used the relation 4π² ≥ e^{2(h[D_∆]+h[D_δ])}/(e² σ²_{D_∆} σ²_{D_δ}) > 0, which can be obtained from the inequality in Equation (22).
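Re-using the binning routine of the earlier sketch, the following check (assuming ℏ = 1 and the same minimum-uncertainty Gaussian pair; the widths are arbitrary illustrative values) evaluates the corrected product of Equation (69) and shows that, in contrast to the naive product, it stays above ℏ²/4 for all the widths scanned here.

```python
import numpy as np
from scipy import integrate, stats

hbar = 1.0
sigma_u, sigma_v = 1.0, 0.5

def discrete_variance(sigma, width, n_bins=101):
    centers = (np.arange(n_bins) - n_bins // 2) * width
    probs = np.array([integrate.quad(stats.norm(0, sigma).pdf,
                                     c - width / 2, c + width / 2)[0] for c in centers])
    mean = np.sum(probs * centers)
    return np.sum(probs * (centers - mean)**2)

for width in [0.1, 1.0, 3.0, 6.0]:
    corrected = ((discrete_variance(sigma_u, width) + width**2 / 12) *
                 (discrete_variance(sigma_v, width) + width**2 / 12))
    print(f"Delta = delta = {width}: corrected product = {corrected:.4f} >= {hbar**2 / 4}")
# The rectangular-HF correction width^2/12 restores a bound that the binned data respect.
```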
However, Equation (68) is only the starting point for a second construction, proposed in [24], that is free from the above limitation and cannot be satisfied trivially. This improved UR, Equation (72), involves a function K(t) that is implicitly defined through the error function (proportional to ∫₀ˣ e^{−y²} dy) and through M⁻¹(t), the inverse of an invertible function M; we refer the reader to [24] for the explicit definitions. The idea behind the derivation of the coarse-grained UR in Equation (72) is the following. One rewrites Equation (68) in a form for which the optimal HF [24] is a Gaussian with support restricted to the central bin and whose variance is an appropriate function of σ²_{D_∆} (σ²_{D_δ}) (for details see [24]). For this optimal HF the Shannon entropy can be evaluated explicitly, which results in the left-hand-side product in Equation (72).
According to the discussion above Equation (71), the coarse-grained UR in Equation (72) has no contributions from the statistics corresponding to the central bins. In the limit ∆, δ → 0 we recover the Heisenberg UR in Equation (8), thanks to the identities derived in [24]. In the opposite limit of infinite coarse graining, viz. ∆, δ → ∞, we have σ²_{p^(u)_∆}, σ²_{p^(v)_δ} → 0. It is important to note that, since π² > Γ²ε₁²(Γ/4) whenever both ∆ and δ are finite, it is forbidden to set σ²_{p^(u)_∆} and σ²_{p^(v)_δ} simultaneously equal to zero, as this would contradict the coarse-grained UR (72). This means that no quantum state (pure or mixed) can be localised in both observables û and v̂ that are CCOs. In other words, the associated probability distributions cannot simultaneously have compact support. This remarkable conclusion somewhat threatens the scientific program to recover classical mechanics solely from coarse-grained averaging, physically originating from the finite precision of the observations [19,176,177]. Indeed, quantum features can be observed in the measurement of û and v̂ irrespective of the precision of the detectors. However, for very large coarse-graining widths the variances σ²_{p^(u)_∆} and σ²_{p^(v)_δ} only involve the small probabilities of detection outside the central bins. Thus, as these probabilities are likely very small, they would be particularly susceptible to statistical fluctuations, and it would in general require very long acquisition times to collect the amount of data necessary to verify the UR (72) in the regime of extremely large coarse graining. Finally, if we let α = 1 in Equation (58), use rectangular HFs such that Equation (60) is valid and restrict the size of the involved bins such that ε₁(Γ/(4|γ|)) = 1/e (this is the coarse-graining regime where Γ/4 ≲ 1.79), we obtain a simplified form of the coarse-grained UR.
URs Valid for General Observables, û and v̂, Defined in Equation (3)
Because the coarse-grained UR in Equation (58) was derived only for CCOs û and v̂, a priori it is not clear why the above UR should remain valid also for the generalized observables defined in Equation (3). This fact, however, can be proved with the help of the Shannon-entropy UR (20), which has been properly extended to the desired observables, and the inequalities

h[P_u] ≤ H[p^(u)_∆] + ln ∆, h[P_v] ≤ H[p^(v)_δ] + ln δ,

whose detailed derivation, based on the Jensen inequality, is relegated to Appendix B. Passing to the discrete entropies we find the coarse-grained UR:

H[p^(u)_∆] + H[p^(v)_δ] ≥ ln(eπℏ|γ|/(∆δ)),

which looks the same as the one derived in [30] for CCOs. Here, however, the validity of this UR has been extended to any observables û and v̂ as defined in Equation (3). Also, following the same arguments that lead from Equation (66) to the UR in Equation (69), we can see that the UR for the discrete variances is also valid for general û and v̂ as defined in Equation (3).
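The sketch below (assuming ℏ = 1, γ = 1, and the same illustrative Gaussian pair as before) evaluates the discrete Shannon entropies and compares their sum with the Bialynicki-Birula-type lower bound just discussed, for a few choices of the bin widths.

```python
import numpy as np
from scipy import integrate, stats

hbar, gamma = 1.0, 1.0
sigma_u, sigma_v = 1.0, 0.5

def discrete_shannon(sigma, width, n_bins=201):
    centers = (np.arange(n_bins) - n_bins // 2) * width
    probs = np.array([integrate.quad(stats.norm(0, sigma).pdf,
                                     c - width / 2, c + width / 2)[0] for c in centers])
    probs = probs[probs > 1e-15]
    return -np.sum(probs * np.log(probs))

for Delta, delta in [(0.2, 0.2), (1.0, 0.5), (4.0, 4.0)]:
    lhs = discrete_shannon(sigma_u, Delta) + discrete_shannon(sigma_v, delta)
    rhs = np.log(np.e * np.pi * hbar * abs(gamma) / (Delta * delta))
    print(f"Delta={Delta}, delta={delta}: H_Delta + H_delta = {lhs:.3f} >= {rhs:.3f}")
# For large bins the right-hand side becomes negative and the relation, although valid,
# carries no information, which motivates the improved bounds discussed in this section.
```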
To briefly summarize, entropic uncertainty relations for coarse-grained probability distributions have almost exclusively been considered for position and momentum variables. As far as we know, the only exceptions are given in References [58,65]. However, as we have shown here, the generalization of entropic URs for the discretised probabilities associated with general observables û and v̂, which are linear combinations of positions and momenta, can be carried out in many cases. In each case, however, a careful analysis should be performed to verify that the related coarse-grained URs are also valid for these generalised operators. Here, we have done this only for the simplest cases.
Coarse-Grained URs Merged with the Majorization Approach
In [174] the coarse-grained scenario was discussed with the help of the results obtained in [156,157,159], namely the majorization-based approach to the quantification of uncertainty. Briefly, a majorization relation x ≺ y between two arbitrary d-dimensional probability distributions means that for every n ≤ d the inequality Σ_{k=1}^{n} x↓_k ≤ Σ_{k=1}^{n} y↓_k holds, with equality (normalization) for n = d. Traditionally, by "↓" we denote decreasing order, so that x↓_k ≥ x↓_l for all k ≤ l. The Rényi entropy (and also others, such as the Tsallis entropy) is Schur-concave, which implies that x ≺ y leads to H_α[x] ≥ H_α[y]. In the context of coarse-grained probability distributions it was conceptually simpler to consider the so-called direct-sum majorization introduced in [159]. An advantage of the majorization approach is that it covers a regime of (α, β) parameters, β = α to be precise, which is in some sense perpendicular to the conjugate choice 1/α + 1/β = 2. In [174] an infinite hierarchy of majorization vectors, depending on a single parameter Γ = ∆δ/ℏ, was derived. The discussion is conducted for CCOs, thus one can easily recognize the dimensionless parameter Γ as the one which appears in all previous URs with γ = 1.
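The majorization relation and the Schur-concavity of the Rényi entropy are easy to verify directly; the sketch below uses two arbitrary four-outcome distributions chosen only for illustration.

```python
import numpy as np

def majorizes(y, x):
    """True if x is majorized by y: partial sums of decreasingly ordered x never exceed those of y."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12))

def renyi(p, alpha):
    return np.log(np.sum(p**alpha)) / (1 - alpha)

x = np.array([0.4, 0.3, 0.2, 0.1])
y = np.array([0.7, 0.2, 0.1, 0.0])
print("x majorized by y:", majorizes(y, x))                 # True
print("H_2(x) =", renyi(x, 2), ">= H_2(y) =", renyi(y, 2))  # Schur-concavity in action
```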
The main result, namely a family of lower bounds denoted as R^(n)_α(∆δ/ℏ) for n = 2, ..., ∞, has been presented in Equation (27) of [174]; however, we refrain from providing its detailed construction here. It seems enough to say that the bound in question is a function of R²₀₀(j₀Γ/4, 1), with j₀ being certain positive integers. In other words, in spirit, the majorization bound is close to that derived in [24] and extensively discussed above. A comparison of the new bound and (63) for α = 1 = β (the only value of both parameters for which the involved bounds describe the same situation) showed that R^(n)_1 outperforms (63) in the regime where the R₀₀-term does contribute to ε₁. The asymptotic behavior of the new and previous coarse-grained bounds shows that for α = 1 = β and large Γ, all R^(n)_1 bounds improve on (63) by a divergent factor Γ/4. Moreover, the typical behavior of discrete majorization bounds has been confirmed in the coarse-grained setting. In the discrete case, the majorization relations almost surely dominate the MU bound, the exception being a small neighborhood of the point at which the unitary matrix U is the Fourier matrix. The analog of the Fourier matrix in the coarse-grained scenario is the continuous limit Γ → 0. This perhaps intuitive fact has been rigorously shown by means of the asymptotics of R^(∞)_1 for small Γ, which is equal to −(1/2) ln Γ.
Other Coarse-Grained URs
At the very end of this long section we would like to touch on a few coarse-grained URs which go beyond the standard position-momentum conjugate pair. First of all, Bialynicki-Birula also extended his major Shannon-entropy UR to the case of angle and angular momentum [30], as well as (together with Madajczyk) to variables on the sphere [178]. Coarse graining in these physical settings is only relevant for the periodic CVs (the angle on a circle and the two angles on a sphere), as the conjugate variables are discrete (though infinite-dimensional).
The coarse-grained scenario has also been developed [179] in relation to the memory-assisted UR [180] relevant for quantum key distribution. The result, even though non-trivial, differs from Equation (63) in a similar fashion as the MU bound differs from the UR in the presence of quantum memory by Berta et al. [180].
Going in a completely different direction, Rastegin [181] in his recent contribution proposed an extension of (65) to the case of a modified CCR, which assumes the form [x̂, p̂] = iℏ(1 + βp̂²). The parameter β is related to the so-called minimal length predicted by certain variants of string theory and similar approaches (not to be confused with the β playing the role of a conjugate parameter in the MU bound and similar URs for the Rényi entropies).
Last but not least, some of us have very recently derived an inequality (see Equations 9-12 of [182]) which can be understood as an UR (valid for CCOs) in the setting relevant for the periodic coarse graining discussed in Section 4.1.2. As this UR involves an additional averaging of p^(x)_k(ρ̂) and p^(p)_l(ρ̂), defined below Equation (47), with respect to the positioning degrees of freedom, we do not provide further details of this construction and encourage the interested reader to consult [182].
Applications of Coarse-Grained Measurements and Coarse-Grained Uncertainty Relations
As discussed above, when detecting the position and momentum of particles such as photons or individual atoms, coarse-grained measurements are not just necessary but can be much more practical. In this regard, URs that deal with coarse-grained measurements can be useful for several applications, such as those discussed in Section 3.
Section 3 discussed the use of URs along with the PPT argument for the convenient detection of quantum entanglement in continuous-variable quantum systems. However, sufficient care must be taken with regard to coarse-grained measurements. The pitfalls of applying the usual entanglement criteria for continuous variables to coarse-grained measurements were discussed in Reference [26], where it was argued that this can lead to false-positive identifications of entanglement, such that the entanglement criteria based on uncertainty relations discussed in Section 3 can be (falsely) violated even for separable states. For a simple illustration of this, consider the most trivial example of a separable continuous-variable state, the two-mode vacuum state [10]. Even though the state is separable, improper application of entanglement criteria, without correctly taking the coarse graining into account, can lead to erroneous results. A demonstration of this is shown in Figure 5. We consider the results from coarse-grained measurements, and apply entanglement criteria based on an ideal continuous-variable UR and on its coarse-grained version. The red circles show a variance-based entanglement criterion based on the variance-product UR in Equation (8), using the global operators defined in Equations (29) and (30), as developed in Reference [99]. Here we have subtracted the lower bound from the product of variances, so that a negative value indicates entanglement. As can be seen, when the coarse graining is large, one would erroneously conclude that the quantum state is entangled. On the other hand, the coarse-grained variance-product UR (69) applied to the global operators (29) and (30) never indicates that the state is entangled, as indicated by the blue squares in Figure 5. Similar results hold for other UR-based entanglement criteria.
To show how coarse-grained data should be properly handled to identify entanglement, an experimental study was performed in a system of spatially entangled photons [26]. In particular, the same variance criterion based on (69) was tested for the global operators defined in Equations (29) and (30), in which case entanglement was identified for a wide range of coarse-graining widths. It was also shown that coarse-grained entropic entanglement criteria, for example those based on the inequality in Equation (58) (α = β = 1) applied to the operators (29) and (30), can be superior to coarse-grained variance-based criteria, identifying entanglement when the variance criteria do not, even for the case of Gaussian states. This is due to the fact that the coarse-grained probability distribution functions, such as those shown in Figure 2, are not Gaussian functions, even when the quantum state under investigation is Gaussian.
An advantage of coarse graining is that the measurement time can be drastically reduced. In Reference [86], EPR-steering was tested for discrete distributions obtained from standard binning of measurements on the two-photon state produced from spontaneous parametric down-conversion, using a coarse-grained version of the EPR-steering criteria of Reference [85]. Bi-dimensional steering was observed for sample sizes ranging from 8 × 8 to 24 × 24, representing a considerable reduction in measurement overhead when compared with the quasi-continuous measurements reported in Reference [85], which sampled about 100 data points per Cartesian direction (about 10⁴ total measurements) to evaluate entropic EPR-steering criteria of continuous variables.
Standard coarse graining has been studied in the context of quantum state reconstruction of single- and two-mode Gaussian states, and of the quantum-to-classical transition [183]. Two scenarios were considered: direct reconstruction of the covariance matrix alone, and full reconstruction of the state using maximum-likelihood estimation. The reconstructed coarse-grained functions were compared to those of Gaussian states subject to thermal squeezed reservoirs, indicating that in this context coarse graining does not produce a thermalized (decohered) Gaussian state.

Figure 5 (caption): Numerical results testing entanglement criteria for the two-mode vacuum state, a separable pure state. The entanglement criteria are based on URs following the PPT argument outlined in Section 3. The criteria are evaluated as a function of the bin widths ∆ = δ, which are given in units of the standard deviations σ_{P_u} and σ_{P_v}. We note that σ_{P_u} = σ_{P_v} for the two-mode vacuum state. The red circles show the variance-product UR in Equation (8), where we apply the naive approach in which the variances of the continuous variables are calculated from the discretized data using Equation (42). One can see that in this case we obtain a false positive for entanglement when the coarse-graining widths are large. The blue squares show the coarse-grained variance-product UR in Equation (69), both applied to the global operators in Equations (29) and (30). Here the lower bounds of both inequalities have been subtracted, so that a negative value indicates entanglement. The lines are merely guides for the eye.
The work mentioned above considered standard coarse graining, as described in Section 4.1. In some cases it is interesting to consider different models, such as the periodic coarse graining described in Section 4.1.2. The mutual unbiasedness of periodic coarse graining described in Section 4.2 has been tested experimentally for two [142] and even three [143] phase-space directions. It was shown that mutual unbiasedness appears when the appropriate bin widths of the two or three conjugate variables are chosen. Periodic coarse graining has also been used in the detection of spatial correlations of photon pairs from SPDC [182]. Using a novel entanglement criterion based on the UR for characteristic functions [153], it was possible to identify entanglement with as few as 2 × 2 measurements in position and momentum (8 in total), representing a considerable reduction in measurement overhead.
Simple binary binning of homodyne measurements has been proposed as a means to test dichotomic Bell inequalities in CV systems, while allowing for high detection efficiency [184-187]. Other types of non-standard coarse graining have been proposed as a means to violate a Bell inequality using homodyne measurements on non-Gaussian states [188]. Though it was shown that one could in principle achieve maximal violation, exotic non-Gaussian states are required. In Reference [27] it was shown that imperfect binning could result in false violations of Bell inequalities, and even in violations of Cirelson's bound for quantum Bell correlations.
A subject closely related to the periodic coarse graining of CVs is that of the so-called modular variables [189-191], for which a phase-space variable u is rewritten as u = n_u ℓ + ū, where n_u is the integer component and ū the modular component, such that 0 ≤ ū < ℓ. Here ℓ is a scaling parameter of appropriate dimension. For two CCOs, such as x̂ and p̂, the integer operator of one observable, say n̂_x, and the modular operator of the other observable, p̄, satisfy URs that closely resemble those of the angle and angular-momentum variables [30]. The modular variable construction was first introduced by Aharonov et al. [138,189] as a method to identify non-locality in quantum mechanics. Since then, several interesting applications have been developed. Variance-based URs for the modular variable construction were proposed as a method to identify a novel type of squeezing, as well as entanglement in pairs of atoms [192]. This entanglement criterion was used in Reference [193], along with one based on entropic uncertainty relations, to identify spatial entanglement of photon pairs that have passed through multiple-slit apertures. Application to multiple-photon states was studied in Reference [194]. It is worth noting that in this case the usual CV entanglement criteria discussed in Section 3 are incapable of detecting entanglement. Modular variables have been proposed as a way to test the Greenberger-Horne-Zeilinger paradox in CV systems [195], as well as quantum contextuality [196-198], and as a method to construct algebras resembling those of discrete systems [190,191,199].
Finally, we briefly mention that URs play an important role in attempts to unify quantum theory with general relativity. In this case, the Heisenberg uncertainty principle is modified into a generalized uncertainty principle that takes Planck-scale effects into account; these impose a coarse graining that is a fundamental part of nature, leading to minimum- and maximum-length quantum mechanics. An extensive literature exists on the subject; for two recent reviews, see References [200,201].
Conclusions
Uncertainty relations play a two-fold role in quantum physics: on the one hand, they have historically marked the difference between classical and quantum physics; on the other hand, they are a tool that can be used to identify and even quantify interesting quantum properties. Beginning with the seminal work of Heisenberg in 1927, several uncertainty relations have been developed for continuous variable quantum systems. However, in a realistic experimental setting, one never has access to the infinite-dimensional spectrum associated with these observables. Thus, coarse graining is imposed by the detection apparatus to account for the measurement precision and range.
Here we have provided a review of several quantum mechanical uncertainty relations tailored specifically to coarse-grained measurements of continuous quantum observables. Our aim was to survey the state of the art of the subject, from theoretical advances to experimental applications of coarse-grained uncertainty relations. We have also extended the validity of some coarse-grained URs already in the literature to general linear combinations of canonical observables in n-mode bosonic systems.
Several interesting open questions remain. First, it would be interesting to generalize all of the coarse-grained URs presented here to pairs of observables connected by general unitary metaplectic transformations. Second, one can consider applying coarse graining to URs not mentioned explicitly here, such as the triple variance product criteria [120,150] and the UR for characteristic functions [153], as well as the plethora of moment inequalities arising from tests for non-classicality [68,72] and entanglement [117,118,121]. Third, and most importantly, a deeper discussion of the role of coarse-grained URs within the scientific program to recover classical mechanics solely from coarse-grained averaging should be developed. We hope that this review encourages this discussion.
Author Contributions: All authors contributed equally to this work.
Acknowledgments:
The authors acknowledge financial support from the Brazilian Funding Agencies CNPq, CAPES (PROCAD2013 project) and FAPERJ, and the National Institute of Science and Technology-Quantum Information. Ł.R. acknowledges financial support by Grant number 2015/18/A/ST2/00274 of the National Science Center, Poland.
Conflicts of Interest:
The authors declare no conflict of interest. The founding sponsors had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, and in the decision to publish the results.
The Kinase Activity-deficient Isoform of the Protein Araf Antagonizes Ras/Mitogen-activated Protein Kinase (Ras/MAPK) Signaling in the Zebrafish Embryo
Background: The full-length Araf (Araf-tv1) inhibits Nodal/Smad2-induced mesendodermal induction in zebrafish embryos. Results: Araf-tv2, a kinase-deficient isoform of zebrafish Araf, can inhibit developmental functions of Fgf/Ras signaling. Conclusion: Araf-tv1 and Araf-tv2 regulate distinct signaling pathways in zebrafish embryos. Significance: The zebrafish araf gene is expressed to produce different variants with distinct functions during embryogenesis.
Germ layer specification and patterning are important events of early embryonic development and are directed by multiple signaling pathways. Nodal signaling is the key mesendoderm inducer and dorsalizing signal during amphibian and fish embryogenesis (1-5). Studies in the zebrafish indicate that maternal Wnt/β-catenin signaling is essential for initiating the expression of zygotic nodal genes in the dorsal blastodermal margin (2), while maternal Eomesodermin is required for nodal gene expression in the lateral and ventral blastodermal margin (6). Fibroblast growth factors (FGFs) in part mediate Nodal activity in mesendoderm induction and dorsal development, and the zygotic expression of fgf genes requires Nodal and Wnt/β-catenin signals (7-11).
The Ras-Raf-Mek-Erk kinase cascade is one of the most important pathways downstream of Fgf signaling (12). Each component of the cascade has multiple family members, e.g., Nras, Kras, and Hras of the Ras family, Araf, Braf, and Raf1 of the Raf family, Mek1 and Mek2 of the Mek family, and Erk1 and Erk2 of the Erk family. During signal transduction, membrane-localized GTP-bound Ras, which is activated by cytokines and growth factors, targets Raf for phosphorylation; the activated Raf kinase activates Mek1/2 via phosphorylation; and the activated Mek1/2 in turn phosphorylate Erk1/2. This cascade has been shown to participate in the regulation of various cellular processes, such as cell differentiation, proliferation, migration, and survival, as well as in disease (13,14).
Recently, we demonstrated that Araf directly cross-talks with Nodal/Smad2 signaling in a MAPK-independent fashion (15). Full-length Araf inhibits Nodal/Smad2 signaling by directly phosphorylating the linker region of Smad2, leading to ubiquitination and degradation of activated Smad2. In the zebrafish embryo, araf antagonizes the mesendoderm induction and dorsalization exerted by Nodal signaling. Yokoyama et al. reported that the murine Araf locus can express a truncated isoform, DA-Raf1, which retains the Ras-binding domain but lacks the kinase domain (16). DA-Raf1 was found to interfere with ERK activation and to positively regulate myogenic differentiation. It was unclear whether the zebrafish araf locus produces similar truncated isoforms with a role in embryonic development.
In this study, we identified the araf transcript variant araf-tv2 in the zebrafish embryo, which is predicted to encode a C-terminally truncated protein lacking the kinase activity domain. Although Araf-tv2 is able to physically associate with full-length Araf (Araf-tv1), it does not interfere with the inhibitory effect of Araf-tv1 on Nodal/Smad2 signaling. Instead, Araf-tv2 inhibits the Ras/MAPK pathway and may play a role in embryonic development by controlling Fgf/MAPK signaling.
Experimental Procedures
Zebrafish Strains-Wild-type Tübingen strain fish were used in this study. Embryos were raised in Holtfreter's water at the standard temperature of 28.5 °C as previously described (17). Ethical approval was obtained from the Animal Care and Use Committee of Tsinghua University.
Plasmids-Zebrafish araf-tv2, kras, hras, and nras were amplified from a cDNA pool of zebrafish embryos. The coding sequence of human HRAS, KRAS, or NRAS was amplified from cDNAs of HEK293T cells. For synthesizing mRNA, the coding sequence of the corresponding gene was cloned into the pXT7 or pCS2 vector; for transfection into mammalian cells, the coding sequence was cloned into the pCMV5 vector with an HA, Myc, or Flag tag. Human or zebrafish Hras G12V, Kras G12V, or Nras G12V constructs were made by mutating the 12th glycine to valine. HA-tagged Araf, Araf-N, and Araf-C constructs were described before (15). To make the araf-tv2-specific antisense RNA probe, a fragment containing 314 bp of the araf-tv2 3′UTR was amplified and subcloned into the EZ-T vector.
mRNA Synthesis and Microinjection-mRNAs were synthesized in vitro using the T7 or SP6 mMessage mMachine Kit (Ambion) and purified using the Qiagen RNeasy Mini Kit. Individual or mixed mRNAs were injected into the yolk of zebrafish embryos at the one-cell stage. araf-MOs were described previously (15).
Whole-mount in Situ Hybridization and Immunofluorescence-Antisense RNA probes were made by in vitro transcription using digoxigenin-labeled UTP. Whole-mount in situ hybridization was performed following standard procedures. Endogenous p-Erk1/2 was detected by whole-mount immunofluorescence using a p-Erk1/2 antibody (CST #9101, diluted 1:200) as described before (15). The embryos were observed by confocal microscopy.
araf-tv2 Is Expressed during Zebrafish Early Embryonic Development-The mouse Araf gene has been found to express, in addition to full-length Araf, a C-terminally truncated Araf isoform that lacks the kinase activity domain (16, 18). According to ZFIN, the zebrafish araf locus consists of 15 exons and 14 introns and is predicted to produce three araf transcript variants (Fig. 1A). The two long variants encode an identical full-length Araf protein of 608 residues and are hereafter named araf-tv1; the third variant is shorter, expected to code for a C-terminally truncated Araf protein of 265 residues, and named araf-tv2 (Fig. 1A). Compared with araf-tv1, araf-tv2 contains a sequence derived from the 6th intron, immediately downstream of the 6th exon sequence, and does not harbor any sequences from the 7th-15th exons. The variants araf-tv1 and araf-tv2 are most likely generated by alternative cleavage and polyadenylation because they have completely different 3′ untranslated regions. The zebrafish Araf-tv2 protein is similar to the murine DA-Raf1 (16), which lacks the CR3 kinase activity domain.
The existence of araf-tv2 in zebrafish embryos was confirmed by two rounds of RT-PCR using total RNA isolated from embryos at 24 hpf as the template (Fig. 1B), followed by sequencing. Using araf-tv1- and araf-tv2-specific primers, we simultaneously detected both transcript variants in embryos at different stages by RT-PCR in the same reaction mixture. Results showed that the araf-tv1 level remained high from the 2-cell stage to 24 h postfertilization (hpf), whereas the araf-tv2 level was low at the 2-cell stage and then gradually increased (Fig. 1C). However, araf-tv2 levels were always lower than araf-tv1 levels before the completion of gastrulation (bud stage). Whole-mount in situ hybridization using an araf-tv2-specific probe revealed that araf-tv2 transcripts were ubiquitously distributed in embryos from the 2-cell to bud stages and mainly enriched in the head region at 24 and 36 hpf (Fig. 1D), resembling the araf-tv1 expression pattern (15). The expression pattern of araf-tv2 suggests a role in early development of zebrafish embryos.
Araf-tv2 Inhibits Fgf/MAPK Signaling in Zebrafish Embryos-DA-Raf1, a splicing isoform of murine Araf without the kinase activity domain, has been shown to antagonize Ras-Erk signaling (16). We wondered whether araf-tv2 overexpression would impair MAPK activation in zebrafish embryos. As shown in Fig. 3A, the level of endogenous p-Erk1/2 was high in control embryos. However, araf-tv2 overexpression caused an obvious reduction in the amount of p-Erk1/2, in contrast to a marginal increase of p-Erk1/2 in embryos overexpressing araf-tv1. On the other hand, neither isoform caused an apparent change in the amounts of p-Akt and p-Jnk1/2, which can be activated by cytokines and growth factors. Given that fgf17b is a potent regulator of zebrafish embryonic development (9), we tested whether araf-tv2 overexpression had an impact on MAPK activation upon fgf17b overexpression. Western blot analysis of embryonic cell lysates showed that co-injection of araf-tv2 and fgf17b mRNAs led to a marked decrease in the amount of p-Erk1/2 compared with fgf17b mRNA injection alone (Fig. 3B), whereas this effect was not observed for co-injection of araf-tv1 and fgf17b mRNAs (Fig. 3C). As detected by immunofluorescence, p-Erk1/2 was essentially restricted to the blastodermal margin at the shield stage. Overexpression of fgf17b induced p-Erk1/2 throughout the entire blastoderm, and the ectopic p-Erk1/2 was completely abolished by co-overexpression of araf-tv2 but not araf-tv1 (Fig. 3, D and E). Taken together, these results suggest that araf-tv1 and araf-tv2 have different functions during development, with araf-tv2 antagonizing Fgf/MAPK signaling.
Both Araf-tv2 and Araf-tv1 Inhibit Germ Layer Formation and Patterning-Because of the technical difficulty of specifically knocking down or knocking out araf-tv2, we first investigated its possible function by injecting araf-tv2 mRNA into one-cell stage embryos and examining marker expression during gastrulation. Overexpression of araf-tv2 resulted in decreased expression of the pan-mesodermal marker ntl and the dorsal markers gsc and chd (Fig. 4A). These results suggest that, like araf-tv1 (15), araf-tv2 plays an inhibitory role in mesodermal induction and dorsal development. Furthermore, the araf-tv2-injected embryos exhibited decreased expression of the anterior neuroectodermal marker otx2 and the posterior neuroectodermal marker hoxb1b but had an expanded domain of the ventral epidermal marker gata2 (Fig. 4A). It is likely that araf-tv2 acts to repress neural induction. Interestingly, the effect of araf knockdown could be efficiently compromised by either araf-tv1 or araf-tv2 overexpression (Fig. 4, B and C). The underlying mechanisms may be different because the two araf variants act on different signaling pathways.

FIGURE 2. Araf-tv2 interacts with but does not antagonize functions of Araf-tv1. A and B, detection of Araf-tv1 and Araf-tv2 interactions by co-immunoprecipitation (co-IP). Tagged Araf-tv1 and Araf-tv2 were co-expressed in HEK293T cells. IP, immunoprecipitation; WB, Western blot; TCL, total cell lysates. Arrowheads indicate the nonspecific IgG band (same for other panels). C, detection of physical interaction between Araf-tv2 and different Raf members by co-IP in HEK293T cells. Note that Araf-tv2 associated only with Araf-tv1 but not with Braf or Raf1a. D, Araf-tv2 interacted with the C-terminal part of Araf-tv1. Araf N and Araf C refer to the N-terminal and C-terminal (containing the kinase domain) parts of Araf, respectively. E, Araf-tv2 did not antagonize the inhibitory effects of Araf-tv1 on ARE3-luciferase reporter expression with or without TGF-β1 stimulation in Hep3B cells. The indicated amounts of plasmids were per well of a 24-well plate. Statistical significance: **, p < 0.01 by Student's t test. F, Araf-tv2 could not block Araf-tv1-stimulated ERK activation in HEK293T cells. Long and short refer to longer and shorter exposure times, respectively. Endogenous ERK1/2, p-ERK1/2, and tubulin were examined. G, Araf-tv2 did not interfere with the interaction of Araf-tv1 with Smad2. H and I, Araf-tv2 did not promote the degradation of p-SMAD2C in Hep3B cells. Endogenous p-SMAD2C was detected using an anti-p-SMAD2 (S465/467) antibody (H). The relative p-SMAD2C level is the ratio of p-SMAD2C to total SMAD2, measured by ImageJ analysis (I).
Araf-tv2 Inhibits Fgf-induced Germ Layer Formation and Patterning-Next, we tested whether araf-tv2 could inhibit the functions of Fgf signaling during embryonic development. As shown in Fig. 5A, injection of fgf17b mRNA alone into zebrafish embryos enhanced mesoderm induction and dorsal development, with ectopic or expanded expression of ntl, myod, gsc, and chd and decreased expression of eve1 and gata2; inhibited endoderm formation, with a reduction of sox32 and sox17 expression; and promoted neuroectodermal posteriorization, with an expanded hoxb1b expression domain and a smaller otx2 expression domain. These effects were compromised by co-overexpression of araf-tv2 in a dose-dependent manner. In sharp contrast, co-overexpression of araf-tv1 had little effect on fgf17b-induced marker changes. Therefore, araf-tv2 may regulate embryonic development mainly by inhibiting Fgf signaling, a mechanism differing from that of araf-tv1 (15).
Araf-tv2 Inhibits Nras and Kras Signaling-Fgf ligands can transduce their signal to Ras/MAPK (20). Given that Araf-tv2 retains the Ras-binding domain, we reasoned that it might associate with Ras proteins. Immunoprecipitation in HEK293T cells showed that zebrafish Araf-tv2 associated with human KRAS and NRAS, and much more strongly with their constitutively active forms (KRAS G12V and NRAS G12V) (Fig. 6A). However, Araf-tv2 appeared not to interact with human HRAS or HRAS G12V, consistent with the previous report that the Araf protein has low affinity for Hras (21).
We then tested whether the association of Araf-tv2 with Ras would disrupt Ras-Raf interaction. We found that KRAS G12V interacted strongly with Araf-tv1 and Raf1a and weakly with Braf in HEK293T cells. The physical association between KRAS G12V and full-length Raf proteins was markedly reduced by co-transfection of Araf-tv2 (Fig. 6B), suggesting an antagonizing effect on Ras-Raf binding. Further biochemical analyses showed that overexpression of Araf-tv2 attenuated the Ras-induced increase of endogenous p-ERK1/2 in HEK293T cells (Fig. 6, C-E).

FIGURE 3. Araf-tv2 attenuates Fgf-stimulated MAPK activation in zebrafish embryos. A, overexpression of araf-tv2 mRNA decreased p-Erk1/2 levels in embryos. Embryos at the one-cell stage were injected with three increasing doses of araf-tv1 or araf-tv2 mRNA and collected at the 75% epiboly stage for immunoblotting using different primary antibodies. The control embryos were injected with gfp mRNA. The bands of p-Erk1/2 were quantified with ImageJ software. B and C, p-Erk1/2 was detected by immunoblotting in embryos injected with fgf17b mRNA alone or in combination with araf-tv2 (B) or araf-tv1 mRNA (C). The p-Erk1/2 levels were analyzed. D and E, the spatial distribution of p-Erk1/2 was detected by immunofluorescence in differently injected embryos. Ratios of affected embryos are indicated. Scale bar, 200 µm.
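The figure legends above describe quantifying p-Erk1/2 bands with ImageJ and expressing them relative to a loading or total-protein control. As a hedged illustration only (the paper does not give its exact normalization procedure), such a calculation could look like the following; the band-intensity numbers and variable names are hypothetical.

```python
# Hypothetical densitometry values exported from ImageJ (arbitrary units).
p_erk = {"control": 1250.0, "araf_tv1": 1380.0, "araf_tv2": 520.0}
total_erk = {"control": 2100.0, "araf_tv1": 2050.0, "araf_tv2": 2150.0}

# Relative p-Erk1/2 level = p-Erk band / total Erk band,
# then normalized to the control lane (control = 1.0).
ratios = {k: p_erk[k] / total_erk[k] for k in p_erk}
normalized = {k: ratios[k] / ratios["control"] for k in ratios}
print(normalized)  # a value below 1.0 would indicate reduced p-Erk1/2
```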
The antagonizing effect of araf-tv2 on Ras/MAPK signaling was further substantiated in zebrafish embryos. Injection of in vitro synthesized zebrafish kras G12V or nras G12V mRNA increased the amount of p-Erk1/2 (Fig. 6, F and G) and induced ectopic activation of p-Erk1/2 in zebrafish embryos (Fig. 6, H and I); both effects were effectively inhibited by co-injection of araf-tv2 mRNA. Thus, Araf-tv2 acts as a direct antagonist of Ras and blocks its downstream MAPK activation.
Ras-induced Developmental Defects Are Counteracted by Araf-tv2-We next investigated the developmental effects of ras overexpression by injecting zebrafish kras G12V or nras G12V mRNA into one-cell stage embryos and examining the expression of germ layer markers at gastrulation stages. Overexpression of ras mRNA led to up-regulated expression of ntl, gsc, chd, and hoxb1b and a reduction of eve1, gata2, and otx2 (Fig. 7, A and B). The high degree of similarity in the changes of marker expression patterns between fgf17b (Fig. 5A) and ras overexpression indicates that Ras indeed mediates Fgf signaling in regulating embryonic development. Importantly, these effects of ras overexpression on marker expression were alleviated by co-overexpression of araf-tv2 mRNA (Fig. 7, A and B). We speculate that endogenous Araf-tv2 might act to restrict Fgf/Ras signaling activity during embryonic development.
Mek1/2 are direct substrates of Raf kinases, and phosphorylated Mek1/2 activate downstream Erk1/2. Therefore, the effects of Mek1/2 overexpression in embryos would be expected to be unaffected by araf-tv2. We found that injection of human caMEK1 mRNA, encoding a constitutively active form of MEK1, expanded gsc and chd expression and decreased eve1 and gata2 expression (Fig. 8, A and B), a dorsalized phenotype. These changes were not compromised by co-injection of araf-tv2 mRNA. This result supports the idea that Araf-tv2 antagonizes Ras signaling upstream of Mek1/2.

(Figure legend, continued) Except that otx2 and hoxb1b were examined at the 75% epiboly stage, the other markers were examined at the shield stage. Orientations of embryos: animal-pole views with dorsal to the right for ntl, chd, and gata2; animal-pole views with dorsal to the bottom for otx2; dorsal views for gsc; lateral views with dorsal to the right for hoxb1b. The ratio of embryos with the representative pattern is indicated. Scale bar, 200 µm.

FIGURE 5. araf-tv2 overexpression antagonizes fgf17b-induced marker changes. A and B, expression patterns of marker genes revealed by whole-mount in situ hybridization. Embryos at the one-cell stage were injected with fgf17b mRNA alone or together with araf-tv2 (A) or araf-tv1 (B) mRNA and harvested for probing at the shield stage for ntl, gsc, chd, eve1, and gata2 or at the 75% epiboly stage for myod, sox32, sox17, otx2, and hoxb1b. Orientations of embryos: lateral views for ntl and hoxb1b; animal-pole views with dorsal to the right for myod, chd, eve1, and gata2; dorsal views for gsc, sox32, and sox17; animal-pole views with dorsal to the bottom for otx2. The ratio of embryos with marker change is indicated. Scale bar, 200 µm.
Discussion
In this study, we demonstrate the expression of the araf splicing variant araf-tv2 in zebrafish embryos, which encodes a C-terminally truncated Araf isoform without the kinase domain. Although Araf-tv2 physically binds to Araf-tv1, it has little effect on Araf-tv1 function in mediating Ras/MAPK signaling or repressing TGF-β/Smad2 signaling. We uncover that Araf-tv2 associates with Ras proteins and prevents them from signaling to MAPK. In the zebrafish embryo, araf-tv2 overexpression is sufficient to restrict Fgf/MAPK signaling and thereby regulate germ layer formation and patterning.

FIGURE 6. Araf-tv2 binds to Ras proteins and inhibits Ras-stimulated MAPK activation. A, Araf-tv2 bound to KRAS and NRAS. Interactions between Araf-tv2 and different human RAS proteins were detected in HEK293T cells by co-immunoprecipitation. The G12V mutation makes the corresponding Ras constitutively active. Arrowheads indicate the nonspecific IgG. B, Araf-tv2 impaired the binding of Ras to Raf members in HEK293T cells. Note that KRAS G12V interacted only weakly with Braf, as indicated by *. C-E, overexpression of Araf-tv2 attenuated the RAS-induced increase of endogenous p-ERK1/2 in HEK293T cells. Note that Araf-tv2 efficiently blocked the effect of KRAS G12V (C) and NRAS (D) but had little effect on HRAS activation of p-ERK1/2 (E). Co-transfection of Araf-tv1 enhanced Ras activation of ERK1/2. F and G, araf-tv2 overexpression blocked Ras activation of endogenous Erk1/2 in embryos. Embryos at the one-cell stage were injected with zebrafish kras G12V (F) or nras G12V (G) mRNA alone or in combination with different doses of araf-tv2 mRNA and harvested at the 75% epiboly stage for immunoblotting. The relative levels of p-Erk1/2 were quantified. H and I, araf-tv2 overexpression inhibited ectopic Erk1/2 activation by kras G12V (H) or nras G12V (I) overexpression in embryos. The injected embryos at the shield stage were subjected to immunofluorescence using an anti-p-Erk1/2 antibody. The ratio of affected embryos is indicated. Scale bar, 200 µm.

In araf morphants, the endogenous p-Erk1/2 level was not obviously altered. However, unaltered p-Erk1/2 levels in araf morphants might be a combined effect of araf-tv1 and araf-tv2 loss: loss of araf-tv1 impairs Mek/MAPK activation to a certain degree, and loss of araf-tv2 enhances Ras/MAPK signaling. Nevertheless, araf-tv1 and araf-tv2 appear to take part in germ layer formation and patterning by regulating distinct signaling pathways. The specific developmental functions of the araf transcript variants need to be verified by genetic depletion of the individual variants. We attempted to knock out the araf-tv2-specific coding sequence (84 bp) using Cas9 technology, but this was unsuccessful. This effort should continue in the future.
Author Contributions-C. X. designed the study, performed and analyzed experiments, and wrote the paper. X. L. assisted with experiments. A. M. conceived the study and wrote the paper. All authors reviewed the results and approved the final version of the manuscript.
Long-term resveratrol administration improves diabetes-induced pancreatic oxidative stress, inflammatory status, and β cell function in male rats
Abstract Diabetes is a metabolic disorder caused by insulin resistance or a defect in insulin secretion by pancreatic beta cells. The aim of this study was to evaluate the possible effectiveness of long-term administration of resveratrol on inflammatory and oxidative stress markers in the pancreatic tissue of diabetic rats. Male Wistar rats (n = 24) were randomly divided into four groups of six animals, namely a healthy group, a healthy group receiving resveratrol, a diabetic control group, and a diabetic group receiving resveratrol. Diabetes was induced by a single-dose injection of streptozotocin (50 mg/kg; ip) 15 min after injection of nicotinamide (110 mg/kg; ip). Resveratrol was administered by gavage (5 mg/kg/day) for 4 months. Administration of resveratrol alleviated hyperglycemia and weight loss and improved pancreatic β cell function measured by HOMA-β. Resveratrol improved oxidative stress markers (nitrate/nitrite, 8-isoprostane, and glutathione) and proinflammatory markers (tumor necrosis factor α, cyclooxygenase 2, interleukin 6, and nuclear factor kappa B) in the pancreatic tissue of diabetic rats. Resveratrol administration had no significant effect on the activity of the superoxide dismutase and catalase enzymes. These observations indicate that resveratrol administration may be effective in improving pancreatic function and reducing the complications of diabetes.
INTRODUCTION
Diabetes mellitus is a widespread metabolic disorder caused by impaired secretion of insulin by the beta cells of the islets of Langerhans or by insulin resistance (Dooley et al., 2016). In recent years, the prevalence of diabetes has sharply increased in almost all regions of the world, such that 425 million people worldwide are now suffering from diabetes (Herrera et al., 2021).
Hyperglycemia causes oxidative stress by increasing the production of reactive oxygen species (ROS) and impairing antioxidant defenses through the reduction of antioxidants such as glutathione (GSH) and of the activity of antioxidant enzymes (Robertson et al., 2004; Zhang et al., 2020). Evidence indicates that oxidative stress plays a key role in the pathogenesis of diabetes (Zhang et al., 2020). Hyperglycemia, along with oxidative stress, promotes the production of inflammatory cytokines such as tumor necrosis factor alpha (TNF-α) and interleukin-6 (IL-6) and activates the proinflammatory transcription factor nuclear factor kappa B (NF-κB), which in turn increases the production of inflammatory cytokines. This process leads to inflammation, cell damage (including lipid peroxidation, protein oxidation, and DNA damage), and eventually cell death (Palsamy, Subramanian, 2010; Zephy, Ahmad, 2015). Oxidative stress and inflammation in the pancreatic tissue have also been shown to cause dysfunction and death of the beta cells of the islets of Langerhans (Palsamy, Subramanian, 2010; van der Spuy, Pretorius, 2009).
Resveratrol (trans-3,5,4′-trihydroxystilbene) is a stilbene compound and a phytoalexin produced by plants in response to stressful stimuli (Weiskirchen, Weiskirchen, 2016). The beneficial effects of this compound, including antioxidant, anti-cancer, anti-inflammatory, and anti-obesity roles, as well as protection of the heart and nerves, have been observed in both humans and laboratory animals. Previous studies have shown that short-term resveratrol administration can exert beneficial anti-diabetic effects such as lowering blood sugar in hyperglycemic animals, protecting pancreatic beta cells, improving insulin function, and reducing insulin secretion in hyperinsulinemic animals (Ku et al., 2011; Szkudelski, Szkudelska, 2015).
Also, Palsamy and Subramanian (2010) reported that short-term administration of resveratrol (30 days) reduces oxidative stress in the pancreas of diabetic rats.
While short-term administration of resveratrol has been suggested to have a protective effect on pancreatic beta cells and to improve pancreatic structure and function, there is insufficient information on the effects of long-term administration. The present study, therefore, aims to evaluate the effect of long-term (4 months) resveratrol administration at a dose of 5 mg/kg/day on inflammatory and oxidative stress markers in the pancreatic tissue of diabetic rats.
Experimental animals and group design
A total of 24 male Wistar rats (Razi Institute, Tehran, Iran), weighing 320-350 g and kept under standard conditions (temperature 22-25 °C, 12 h dark-light cycle, and free access to water and food), were randomly divided into four groups of 6 animals as follows: 1. Healthy control group (NC): no intervention was performed on the rats of this group; 2. Healthy group receiving resveratrol (NTR); 3. Diabetic control group (DC); 4. Diabetic group receiving resveratrol (DTR). The study was approved by the Ethical Committee of Tabriz University of Medical Sciences (Code: IR.TBZMED.REC.1389.82) and adhered to the tenets of the Declaration of Helsinki (Arvin et al., 2017).
Induction of diabetes
To induce type 2 diabetes, 110 mg/kg of nicotinamide was first injected intraperitoneally and, after 15 minutes, 50 mg/kg of streptozotocin (STZ) dissolved in 0.1 M citrate buffer (pH 4.5) was injected intraperitoneally. Nicotinamide protects pancreatic β cells (up to 40%) from STZ cytotoxicity, producing an insulin-independent diabetes mellitus (type 2 diabetes) (Masiello et al., 1998). To prevent hypoglycemia due to insulin released from pancreatic cells damaged by STZ, a 10% glucose solution was provided to diabetic rats for 24 h, beginning 6 h after the streptozotocin injection. Forty-eight hours after injection, the blood glucose of the rats was measured with a glucometer (Arkray, Kyoto, Japan), and values above 250 mg/dl were considered indicative of type 2 diabetes (Figure 1). Water-soluble resveratrol (Cayman Chem., Ann Arbor, MI, USA) at a dose of 5 mg/kg was given by daily gavage at noon for 4 months, and the dose of resveratrol was monitored and adjusted weekly (Palsamy, Subramanian, 2008). At the end of the experimental period, rats were anesthetized with 80 mg/kg of ketamine and killed by decapitation. Blood (5 ml) was collected from the retro-orbital sinus of the rats. Pancreatic tissues were removed by dissecting the abdomen, immediately frozen in liquid nitrogen, and kept at −70 °C until homogenization. All interventions were performed from morning to noon, and all measurements were done in the noon interval.
Tissue homogenization
According to the instructions of the Cayman commercial kit (Cayman Chem., Ann Arbor, MI; Item No: 10409), 200 mg of fresh pancreatic tissue was homogenized for 15 minutes with 400 μl of cold hypotonic buffer (10 mM NaCl, 2 mM MgCl2, 10 mM HEPES, 20% glycerol, 0.1% Triton X-100, 1 mM dithiothreitol, 3 μl of 1 M of 10% P-40, complete protease inhibitor cocktail, pH 7.4) and then centrifuged for 10 minutes at 14,000 rpm at 4 °C. The supernatant, containing cytoplasmic proteins, was used to measure TNF-α, IL-6, 8-isoprostane, SOD, GSH, CAT, and COX-2. The remaining pellet was homogenized in 50 μl of cold extraction buffer (hypotonic buffer, 39.8 μl of 5 M NaCl, and 5 μl of 10 mM dithiothreitol) and centrifuged for 10 minutes at 14,000 g at 4 °C, and the supernatant, containing the nuclear proteins, was used to measure NF-κB. A protein determination kit (Cayman Chem., Ann Arbor, MI; Item No: 704002) was used to evaluate the cytoplasmic and nuclear protein concentrations of the pancreas.
Measurement of glucose and insulin
Blood glucose was measured with a glucometer (Arkray, Kyoto, Japan). Insulin was determined with a proprietary ELISA kit (Cayman Chem., Ann Arbor, MI, USA) according to the manufacturer's instructions, with absorbance read at 450 nm, and concentrations were calculated from a standard curve (Soufi, Mohammad Nejad, Ahmadieh, 2012).
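The HOMA-IR and HOMA-β indices reported in the Results are presumably derived from these fasting glucose and insulin measurements. The paper does not state the exact formulas it used, so the snippet below is only a hedged sketch based on the commonly used HOMA expressions for glucose in mg/dl and insulin in µU/ml; the example values are invented.

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance (common form)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def homa_beta(glucose_mg_dl, insulin_uU_ml):
    """Homeostasis model assessment of beta-cell function (common form);
    only defined for glucose above 63 mg/dl."""
    return 360.0 * insulin_uU_ml / (glucose_mg_dl - 63.0)

# Hypothetical example values, not taken from the study.
print(homa_ir(280.0, 6.0))    # ~4.15
print(homa_beta(280.0, 6.0))  # ~9.95
```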
Measurement of oxidative stress markers
SOD activity, GSH and 8-isoprostane concentrations, and the nitrate to nitrite ratio were assessed with proprietary ELISA kits (Cayman Chem., Ann Arbor, MI, USA) according to the manufacturer's instructions and calculated from standard curves. CAT activity was measured using the IBL kit (IBL, Hamburg, Germany) according to the Aebi method. Measurements were based on the hydrogen peroxide decomposition rate at 240 nm at 20 °C, and the enzyme activity (nmol/mg protein) was obtained through the following formula (Khameneh et al., 2013): k = 0.153 × log(A240 at t = 0 / A240 at t = 15).
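For concreteness, a hedged sketch of that rate-constant calculation is given below. It simply implements the formula quoted above (the 0.153 factor is consistent with the first-order Aebi expression 2.303/t for a 15-unit time interval); the absorbance values are hypothetical, and normalization to protein content would still be needed to express activity per mg protein.

```python
import math

def catalase_rate_constant(a240_t0, a240_t15):
    """First-order rate constant for H2O2 decomposition at 240 nm,
    k = 0.153 * log10(A_t0 / A_t15), following the formula in the text."""
    return 0.153 * math.log10(a240_t0 / a240_t15)

# Hypothetical absorbance readings at the start and after 15 time units.
print(catalase_rate_constant(0.450, 0.310))  # ~0.0248
```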
Measurement of inflammatory markers
IL-6 and TNF-α levels were determined using IL-6 and TNF-α ELISA kits (Invitrogen, USA) based on the manufacturer's instructions, with absorbance read at 450 nm. NF-κB activity was measured using an NF-κB p65 transcription factor kit (Cayman Chem., Ann Arbor, MI) according to the manufacturer's instructions and expressed as OD 450 nm/mg protein (Soufi et al., 2012). COX-2 activity was assessed with an ELISA kit (Cayman Chem., Ann Arbor, MI, USA) according to the instructions and calculated from standard curves (Khameneh et al., 2013).
Changes in insulin resistance and assessment of pancreatic β-cell function
Figure 2-A illustrates that, 4 months after the induction of diabetes, the insulin resistance level had increased significantly in both diabetic groups compared to healthy control rats (p < 0.01 for both), and that resveratrol administration in both the healthy and diabetic groups had no effect on insulin resistance. Figure 2-B illustrates that pancreatic beta cell function was significantly lower in rats of both diabetic groups than in control rats (p < 0.01 for both) and that long-term resveratrol administration improved pancreatic beta cell function in both healthy and diabetic rats (p < 0.05 for both).
Statistical analysis
The obtained data are expressed as mean ± standard error (SE), and p-values < 0.05 were considered statistically significant. Differences between groups were compared by one-way analysis of variance (ANOVA) followed by Tukey's test using SPSS software version 18.
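The study reports its statistics from SPSS; as a hedged illustration of the same analysis pipeline (one-way ANOVA followed by Tukey's test on four groups of six rats), the Python sketch below uses SciPy and statsmodels. The numbers are invented placeholders, not data from the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical glucose values (mg/dl) for the four groups of six rats.
nc  = np.array([95, 102, 98, 110, 105, 99])
ntr = np.array([97, 100, 103, 96, 108, 101])
dc  = np.array([310, 295, 330, 305, 288, 320])
dtr = np.array([240, 255, 230, 262, 248, 236])

# One-way ANOVA across the four groups.
f_stat, p_value = stats.f_oneway(nc, ntr, dc, dtr)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD post hoc comparisons at alpha = 0.05.
values = np.concatenate([nc, ntr, dc, dtr])
groups = ["NC"] * 6 + ["NTR"] * 6 + ["DC"] * 6 + ["DTR"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```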
Changes in body weight, blood sugar, and blood insulin
As shown in Table I, body weight and blood insulin concentrations had decreased significantly in diabetic rats compared to healthy control animals 4 months after the induction of diabetes (p < 0.01 for both factors), while the blood sugar level was significantly higher in the diabetic group than in the healthy control group (p < 0.01). Administration of resveratrol to healthy rats changed none of these variables compared to the healthy control group. Although resveratrol administration to diabetic rats increased body weight and decreased blood sugar compared to the diabetic group (p < 0.05 for both factors), these variables did not return to the levels of the healthy control group. Although long-term use of resveratrol increased insulin levels in diabetic rats, this increase did not reach statistical significance (p = 0.059).
Changes in oxidative stress markers
Figure 3 shows that long-term diabetes increased the concentration of 8-isoprostane and the ratio of nitrate to nitrite (p < 0.01 for both factors) and decreased SOD activity (p < 0.05) and GSH concentration (p < 0.01) in the pancreatic tissue of diabetic rats compared to healthy controls. Four months of resveratrol administration prevented a significant decrease in SOD activity in resveratrol-treated diabetic rats relative to the healthy group. Resveratrol also reduced the concentration of 8-isoprostane (p < 0.01) and the ratio of nitrate to nitrite (p < 0.05) and increased the GSH content (p < 0.05) compared to the diabetic group, but these markers did not return to the levels of the healthy control group. Administration of resveratrol to healthy rats led only to a significant increase in CAT activity (p < 0.05).
Changes in inflammatory markers
Figure 4 shows that COX-2 and NF-κB activities (p < 0.05) and TNF-α and IL-6 concentrations (p < 0.01) increased significantly in the pancreatic tissue of diabetic rats in comparison to healthy control animals. These variables remained unchanged in healthy rats administered resveratrol compared to the healthy group. In diabetic rats, however, resveratrol prevented significant increases in COX-2 activity and IL-6 concentration relative to the healthy group and decreased NF-κB activity and TNF-α concentration compared with the diabetic rats (p < 0.05 for both factors). Nonetheless, NF-κB activity (p < 0.05) and TNF-α concentration (p < 0.01) remained significantly higher than in the healthy control group.
DISCUSSION
Oxidative stress has been shown to be one of the major causes of hyperglycemia. In other words, oxidative stress in pancreatic beta cells reduces insulin gene promoter activity and mRNA production, resulting in the inhibition of insulin gene expression. A reduction has also been reported in the DNA-binding capacity of PDX-1, an important transcription factor for the insulin gene (Evans et al., 2002; O'Brien, Granner, 1996). In this condition, the body has to break down proteins and fats to provide the required energy from storage sources, causing atrophy of muscle and fat mass and thus evident weight loss (Palsamy, Sivakumar, Subramanian, 2010). In line with previous studies, our results revealed that diabetic rats developed hyperglycemia and gradual but noticeable weight loss over a 4-month period (Soylemez et al., 2008). However, the administration of resveratrol to diabetic rats for 4 months attenuated hyperglycemia and the weight-loss process, with a relative increasing trend in blood insulin levels (though this increase was not significant; p = 0.059), indicating a possible improvement in energy metabolism.
Resveratrol has been reported to protect the pancreas, improve the function of pancreatic β cells, and reduce insulin secretion (Szkudelski, Szkudelska, 2015). This enables the pancreas to produce and secrete more insulin over a longer time period and therefore helps to improve metabolism and weight regulation in the long run (Soylemez et al., 2008; Leonard et al., 2003).
Results of the present study also suggest the effectiveness of resveratrol on pancreatic beta cell function, in that resveratrol administration increased the HOMA-β index in both healthy and diabetic rats, consistent with the results of other studies (Movahed et al., 2013).
Studies suggest that resveratrol leads to more efficient insulin signaling through the Akt pathway by reducing oxidative stress, resulting in reduced insulin resistance (Brasnyó et al., 2011). In a study conducted by Cheng et al. (2015), resveratrol reduced the HOMA-IR index in diabetic rats. A similar result was observed in a human study conducted by Movahed et al. (2013). However, in some human studies (Goh et al., 2014) and in the present study, resveratrol administration did not reduce this index. The reason for this difference in results may be differences in methods (dose and duration of resveratrol administration) among studies.
Under normal conditions, mitochondria and the NADPH oxidase pathway produce moderate levels of reactive oxygen species (ROS), such as superoxide (•O2−), hydrogen peroxide (H2O2), and the hydroxyl radical (•OH), which participate in some physiological processes. In the first step, ROS are detoxified by SOD, resulting in the dismutation of •O2− to H2O2, and the latter is then detoxified by catalase or glutathione peroxidase and converted to water and molecular oxygen (Su, Hung, Chen, 2006). Additionally, GSH, another intracellular antioxidant molecule, is an essential substrate for the activity of glutathione peroxidase and acts as a direct scavenger of ROS by being converted to GSSG.
Overproduction of ROS or a decline in the ability of antioxidants to detoxify them causes oxidative stress, which can be traced by measuring lipid peroxidation, decreased activity of antioxidant enzymes, protein oxidation, and apoptosis (Soufi, Mohammad Nejad, Ahmadieh, 2012). Chronic hyperglycemia has been shown to induce oxidative stress through direct production of ROS or by alteration of the redox balance (van der Spuy, Pretorius, 2009).
Our results are in line with previous studies in this field, in that the rats in the diabetic groups developed chronic hyperglycemia with an elevated 8-isoprostane level, decreased SOD antioxidant activity, an increased nitrate to nitrite ratio, and a reduced GSH concentration in the pancreatic tissue.
As a member of the eicosanoid family, 8-isoprostane is known as an indicator of lipid peroxidation during oxidative stress and antioxidant deficiency and is produced by the oxidation of phospholipids by oxygen radicals (Morrow et al., 1995). Nitric oxide (NO•) is another factor contributing to oxidative stress; it can react with superoxide to form a powerful oxidant, peroxynitrite (ONOO−), causing further cell damage. NO has a short half-life that limits its direct measurement. Therefore, its metabolites, nitrite (NO2−) and nitrate (NO3−), are usually measured in studies (Pitocco et al., 2010). Palsamy and Subramanian (2010) reported that administration of resveratrol for 30 days resulted in significant reductions in lipid peroxides, hydroperoxides, and protein carbonyls, as well as an enhancement of antioxidant enzyme activities, in treated diabetic rats. Additionally, the antioxidant effects of resveratrol improved the structure and function of the pancreas. The results of the present study are in line with these previous studies, in that 4-month administration of resveratrol prevented an excessive increase in lipid peroxidation (8-isoprostane level), a decrease of SOD antioxidant enzyme activity, a significant increase in the ratio of nitrate to nitrite, and a significant decrease in GSH concentration in the pancreatic tissue.
Hyperglycemia increases the activity of the NF-κB transcription factor in various ways. NF-κB is one of the most important proinflammatory transcription factors and plays a key role in the progression of diabetes-induced inflammation in different tissues (Clarkson, Thompson, 2000; Mantha et al., 1993). By activating proinflammatory cytokines such as TNF-α, IL-1, and IL-6, this factor increases collagen production, cell-adhesion molecules, and the mobilization and activation of leukocytes in tissues; in addition, it participates in a positive feedback loop leading to greater NF-κB activity and the production of inflammatory agents, including cyclooxygenase-2 (COX-2). This inflammation-inducing process, along with tissue damage due to oxidative stress, leads to apoptosis in the tissue. Inflammatory cytokines are involved in increasing COX-2 expression in many tissues through the activation of NF-κB and MAPK (mitogen-activated protein kinase) (Paik et al., 2000; Palsamy, Subramanian, 2010).
In terms of tissue inflammation, the results of this study are also in agreement with those of other studies, in that the activities of NF-κB and COX-2 and the concentrations of TNF-α and IL-6 were significantly higher in the pancreatic tissue of diabetic rats than in that of healthy rats.
There are several studies on the anti-inflammatory effects of resveratrol. Kennedy et al. (2009), for instance, investigated the effects of resveratrol on the reduction of inflammation in human adipocytes and observed that resveratrol administration blocked inflammatory responses and inhibited the induction of IL-6, IL-8, and IL-1β expression. Palsamy and Subramanian (2011) also studied the effects of resveratrol on the kidneys of diabetic rats and reported that resveratrol administration for 30 days improved proinflammatory factors, such as NF-κB, COX-2, IL-6, and TNF-α, in a diabetic group compared to diabetic controls. In another study, oral administration of resveratrol (5 mg/kg body weight) to diabetic rats for 30 days resulted in significant reductions in the levels of the inflammatory markers TNF-α, IL-1β, IL-6, and NF-κB in the pancreatic tissue (Palsamy, Subramanian, 2010). The present results are consistent with those of other investigations, in that 4-month administration of resveratrol decreased NF-κB activity and TNF-α concentration and prevented the increase of COX-2 activity and IL-6 concentration in the pancreas of diabetic rats.
CONCLUSION
Diabetes has been shown to cause oxidative stress, an increase in proinflammatory mediators, and inflammation in the pancreatic tissue. Long-term administration of resveratrol attenuated the rise in blood sugar and mitigated the adverse effects of diabetes on the pancreatic tissue. Therefore, given that previous studies indicate no significant side effects of resveratrol administration, it has a beneficial effect in reducing the complications of diabetes in the pancreatic tissue and can be useful as a supplement for reducing diabetes-induced problems. To confirm this hypothesis, however, further clinical trials in humans are needed.
FIGURE 1 - The scheme of experimental procedures.
FIGURE 2 - The effect of long-term resveratrol administration on the insulin resistance index (HOMA-IR) and the beta cell function index (HOMA-β) in the studied groups during the experimental period. Values are expressed as mean ± SE for six rats in each group. * indicates p < 0.05 and ** indicates p < 0.01 relative to the healthy control group (NC); # indicates p < 0.05 relative to the diabetic control group (DC). NTR and DTR represent the resveratrol-administered healthy and diabetic groups, respectively.
FIGURE 3 - The effect of long-term resveratrol administration on catalase activity (A), superoxide dismutase activity (B), 8-isoprostane concentration (C), reduced glutathione (GSH) concentration (D), and the nitrate to nitrite ratio (E) in the pancreatic tissue of the studied rats during the experimental period. Values are expressed as mean ± SE for six rats in each group. * and ** indicate p < 0.05 and p < 0.01 relative to the healthy control group (NC), respectively. # and ## indicate p < 0.05 and p < 0.01 relative to the diabetic control group (DC), respectively. NTR and DTR represent the resveratrol-administered healthy and diabetic groups, respectively.
FIGURE 4 - The effect of long-term resveratrol administration on COX-2 (A) and NF-κB (B) activities and the concentrations of TNF-α (C) and IL-6 (D) in the pancreatic tissue of the studied groups during the experimental period. Values are expressed as mean ± SE for six rats in each group. * indicates p < 0.05 and ** indicates p < 0.01 relative to the healthy control group (NC); # and ## indicate p < 0.05 and p < 0.01, respectively, relative to the diabetic control group (DC). NTR and DTR represent the resveratrol-administered healthy and diabetic groups, respectively.
TABLE I - Values for the measured factors of body weight, blood sugar, and blood insulin in the study groups. *Values are expressed as mean ± SE for six rats in each group. ** indicates p < 0.01 relative to the healthy control group, and # indicates p < 0.05 relative to the diabetic control group. NTR and DTR represent the resveratrol-administered healthy and diabetic groups, respectively.
Interference competition between wolves and coyotes during variable prey abundance
Abstract Interference competition occurs when two species have similar resource requirements and one species is dominant and can suppress or exclude the subordinate species. Wolves (Canis lupus) and coyotes (C. latrans) are sympatric across much of their range in North America where white‐tailed deer (Odocoileus virginianus) can be an important prey species. We assessed the extent of niche overlap between wolves and coyotes using activity, diet, and space use as evidence for interference competition during three periods related to the availability of white‐tailed deer fawns in the Upper Great Lakes region of the USA. We assessed activity overlap (Δ) with data from accelerometers onboard global positioning system (GPS) collars worn by wolves (n = 11) and coyotes (n = 13). We analyzed wolf and coyote scat to estimate dietary breadth (B) and food niche overlap (α). We used resource utilization functions (RUFs) with canid GPS location data, white‐tailed deer RUFs, ruffed grouse (Bonasa umbellus) and snowshoe hare (Lepus americanus) densities, and landscape covariates to compare population‐level space use. Wolves and coyotes exhibited considerable overlap in activity (Δ = 0.86–0.92), diet (B = 3.1–4.9; α = 0.76–1.0), and space use of active and inactive RUFs across time periods. Coyotes relied less on deer as prey compared to wolves and consumed greater amounts of smaller prey items. Coyotes exhibited greater population‐level variation in space use compared to wolves. Additionally, while active and inactive, coyotes exhibited greater selection of some land covers as compared to wolves. Our findings lend support for interference competition between wolves and coyotes with significant overlap across resource attributes examined. The mechanisms through which wolves and coyotes coexist appear to be driven largely by how coyotes, a generalist species, exploit narrow differences in resource availability and display greater population‐level plasticity in resource use.
| INTRODUCTION
The competitive exclusion principle posits that co-occurring species with high resource use overlap will compete, resulting in exclusion when resources are limited (Gause, 1934; Hardin, 1960).
Intermediate to exclusion, resource competition can reduce fitness of individuals and result in a reduction of species abundance (Fedriani et al., 2000). Interference competition occurs where two species have similar resource requirements that are concentrated or limited and one species is dominant (e.g., kleptoparasitism, territory displacement; Case & Gilpin, 1974). Described as an active form of competition, interactions between individuals often result in the subordinate species realizing some cost (Schoener, 1983) such as loss of space (Tannerfeldt et al., 2002), reduction in time active (Hayward & Slotow, 2009), or loss of life (e.g., intraguild predation; Polis et al., 1989;Sunde et al., 1999).
Reducing interactions or competition may improve fitness for one or both species experiencing interference, as seen with cape foxes (Vulpes chama) avoiding black-backed jackals (Canis mesomelas) to reduce interspecific killing (Kamler et al., 2012). Limiting competition also may be possible through niche partitioning (Schoener, 1974). Niche partitioning can occur through natural selection where differences in morphology arise and allow adaptation of two otherwise competing species to fill niches that are functionally different (Wilson, 1975). Ecologically, altering foraging time or effort can facilitate niche partitioning and reduce interspecific contact (Toweill, 1986). Several species of bats, similar in body size and prey selection, coexist using temporal segregation (Swift & Racey, 1983).
In addition to temporal segregation, two species occupying a similar niche may exhibit spatial or dietary differentiation, or specialization, that can reduce competition and allow coexistence (Schoener, 1974).
Wolves (Canis lupus) and coyotes are sympatric across most of their ranges in North America (Arjo & Pletscher, 2004) but differ in body size (wolves 18.0-55.0 kg [Mech, 1974]; coyotes 9.1-14.7 kg [Bekoff & Gese, 2003]). Where wolves occur, coyotes may modify their distribution, behavior, and pack size to limit interspecific competition or wolf aggression (Arjo & Pletscher, 1999;Berger & Gese, 2007;Fuller & Keith, 1981;Thurber & Peterson, 1992) and coyote abundance may be suppressed as compared to wolf-free areas (Levi & Wilmers, 2012;Smith et al., 2003). However, co-occurring wolves and coyotes can exhibit high spatial overlap when comparing home ranges and core areas (Arjo & Pletscher, 1999;Atwood, 2006;Berger & Gese, 2007); yet previous studies have not provided a mechanism for coexistence where this spatial overlap occurs. Home range overlap does not equate to overlap in resource use, nor does use occur across a home range or core area simultaneously or homogenously. Consideration for activity and spatial segregation between these species at finer spatial and temporal scales than the home range may provide a mechanism for coexistence. In addition, diet may be important to consider as across much of eastern North America, white-tailed deer (Odocoileus virginianus) are an important prey of wolves and coyotes (Arjo et al., 2002;Ballard et al., 1999), though deer age classes selected may differ between species (Arjo et al., 2002;Kautz et al., 2019;Mech & Boitani, 2003;Patterson et al., 1998). The onset of white-tailed deer parturition provides a large influx of vulnerable prey that exhibits immobility and hiding behavior for about 5 weeks postparturition, followed by increased mobility and social behavior (Ozoga et al., 1982).
This temporal variability in deer fawn size and mobility provides a resource within the optimal prey size range of both wolves and coyotes (Carbone et al., 1999) and may reduce interference competition.
We quantified the degree of temporal, dietary, and spatial overlap of wolves and coyotes at the population level to estimate the potential for interference competition and identify the mechanism for how these sympatric canids coexist using accelerometer-enabled GPS collars, scat analysis, and resource utilization functions during May-August. We hypothesized that coyotes, as the subordinate carnivore, avoid wolves through temporal differentiation. We predicted coyotes would shift activity peaks and would exhibit reduced activity as compared to wolves. We hypothesized that wolf and coyote diets differ due to body size and optimal prey size (Carbone et al., 1999;Thurber & Peterson, 1992), where coyotes select smaller prey as compared to wolves. We predicted that wolves' diet would be mostly white-tailed deer as they are considered ungulate specialists. We predicted coyotes, as generalist omnivores, would exhibit a more variable diet due to avoidance of wolves and exclusion from prey resources by wolves. We hypothesized that wolves, as the dominant carnivore, exclude coyotes from areas with greatest probability of occurrence by white-tailed deer, and use those areas disproportionately more as compared to availability. Specifically, we predicted wolves, while active, would select for areas with greater adult white-tailed deer probabilities. We predicted that coyotes, while active, would select for areas of greater snowshoe hare and ruffed grouse densities during all time periods and greater fawn probabilities shortly after deer parturition as compared to wolves.
Finally, we predicted coyote resting sites (i.e., inactive sites) would be in areas of lesser probability of wolf occurrence.
| Capture and telemetry
We captured coyotes and wolves each spring (May-June) using No. 3 padded foothold traps (Oneida Victor) and modified MB-750 foothold traps (modified off-set jaws, additional swivels, and altered drag; D. Beyer, unpublished data), respectively. Additionally, we captured coyotes with relaxed locking cable restraints (Wegan et al., 2014) during February-March each year. We anesthetized coyotes and wolves with a mixture of ketamine hydrochloride (4 and 10 mg/kg, respectively; Ketaset®, Fort Dodge Laboratories, Inc.) and xylazine hydrochloride (2 mg/kg; X-Ject E™, Butler Schein Animal Health) (Kreeger et al., 2002). We fitted coyotes and wolves with a global positioning system (GPS) collar with a very high frequency (VHF) transmitter and an onboard triaxial accelerometer to record activity (Model GPS7000SU, Lotek Wireless). We programmed GPS collars to acquire and store locations every 15 min from 1 May to 31 August 2013-2015. Before individuals were released at the capture site, we administered yohimbine hydrochloride (0.15 mg/kg; Hospira©) to reverse the effects of xylazine hydrochloride. We uploaded data weekly using ultra high frequency communication and a handheld command unit (Lotek Wireless Inc.) from a fixed-wing aircraft. Approval for all capture and handling procedures was provided by Mississippi State University's Institutional Animal Care and Use Committee (protocol 12-012).
| Time periods
We selected three time periods related to white-tailed deer fawn availability to wolves and coyotes. The preparturition period (PPP, 1 May-26 May) occurs before the annual birth pulse of fawns, when only adult deer are on the landscape. The limited mobility period (LMP, 27 May-30 June) occurs when fawns are young, immobile, and within the predicted optimal prey size of coyotes, beginning at fawn parturition and extending to 35 days postparturition (Carbone et al., 1999; Ozoga et al., 1982; Petroelje et al., 2014). The social mobility period (SMP, 1 July-31 August) occurs when fawns exceed the predicted optimal prey size of coyotes (Carbone et al., 1999) and when fawn behavior switches from hiding to running with associated family groups (Nelson & Woolf, 1987). Fawns in Michigan gain on average 0.2 kg/day during their first month, weighing about 9 kg by the end of LMP (Verme & Ullrey, 1984), and would reach optimal prey size for wolves during SMP. After 31 August, the fall molt begins, making it difficult to distinguish adult and fawn hair in scat samples (Adorjan & Kolenosky, 1969).
| Estimates of prey availability
We identified white-tailed deer, ruffed grouse (Bonasa umbellus), and snowshoe hare (Lepus americanus), a priori, as prey that may be important in wolf and coyote diets as they appeared to be dominant available prey in the study area (D. Beyer, unpublished data) and within the optimal prey size range (Carbone et al., 1999). We used snowshoe hare pellet counts to estimate hare density and grouse drumming surveys to estimate grouse density within the study area (see Appendix A, Methods).
We estimated probability of occurrence by adult female and fawn deer across the landscape using a resource utilization function (RUF; Marzluff et al., 2004) to regress the occurrence distribution (OD) of individual deer on landscape covariates thought to influence their use. To estimate ODs, we used VHF relocation data from radio-collared adult female white-tailed deer (n = 113) captured using Clover traps (Clover, 1956) and neonate fawn deer (n = 100) captured using vaginal implant transmitter guided searches or opportunistically during 2013-2015 (Kautz et al., 2019, 2020). We used Brownian bridge movement models (BBMM) in package "BBMM" (Nielson et al., 2013) for program R (version 3.01, R Development Core Team, 2018) to produce a 99% OD for each deer/time period (i.e., PPP, LMP, SMP) combination (Figure 1). We included adult female deer with ≥20 VHF locations or fawn deer with ≥5 VHF locations, as neonates were subject to greater predation during the first 16 weeks after birth (Kautz et al., 2019) and including only fawns with ≥20 locations would bias the average RUF toward individuals that survived.
A total of 87, 89, and 94 adult female deer during PPP, LMP, and SMP, respectively, and 39 and 37 fawns during LMP and SMP, respectively, had adequate locations for analyses. The BBMM includes a term for the location error, the estimated error of each VHF triangulated location, and assumes that movement between consecutive locations was related and not random. We regressed the magnitude of the OD on landscape variables (distance to water, distance to roads, distance to edge, patch size, and land cover) thought to influence deer resource selection (Duquette et al., 2014). Because the scale of deer movement data was coarser and lacked activity data as compared to wolf and coyote data, we did not include carnivore presence to predict occurrence. We used the 2011 National Land Cover Database (NLCD; Jin et al., 2013) as a categorical assignment of land cover across the 30 × 30 m grid. We combined land covers into the following seven major classes: deciduous forest, mixed forest, evergreen forest, woody wetlands/emergent herbaceous wetlands, open water, grassland/shrub, and developed, which included categories containing less than 1% of land cover (e.g., urban, agriculture, and barren; Appendix A, Table A1). We calculated landscape metrics for each cell, including patch size and distance to edge (NLCD; Jin et al., 2013), distance to road (Michigan Geographic Framework, all roads v17a), and distance to water (Michigan Geographic Framework, hydrography lines v17a), in ArcMap 10.3 (Environmental Systems Research Institute) and Geospatial Modeling Environment (Beyer, 2012). Before fitting models, we used Pearson's correlation to determine any covariates that were related (i.e., |r| > 0.7) and selected and retained the one that was more ecologically relevant for further analyses. We estimated the population-level RUF for adult female and fawn deer from the individual RUF coefficients averaged for each age class during each time period using the equation

$$\bar{\beta}_i = \frac{1}{n}\sum_{j=1}^{n}\hat{\beta}_{ij} \quad (1)$$

where n is the number of individuals and $\hat{\beta}_{ij}$ is the estimate of coefficient i for individual j. We estimated the variance of the population-level coefficients using the equation

$$\widehat{\mathrm{var}}\!\left(\bar{\beta}_i\right) = \frac{1}{n(n-1)}\sum_{j=1}^{n}\left(\hat{\beta}_{ij}-\bar{\beta}_i\right)^2 \quad (2)$$

which includes intraindividual and interindividual variation (Marzluff et al., 2004; Millspaugh et al., 2006). We then predicted probability of occurrence by adult female and fawn deer across the landscape for each period by using the scaled coefficients from each population-level RUF and spatially derived a relative value of resource suitability for all model covariates layered over a 30 × 30 m cell grid, which corresponds to the resolution of the NLCD (Jin et al., 2013), the coarsest resource attribute.
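To make Equations (1) and (2) concrete, a minimal Python sketch of the population-level averaging is given below; the coefficient matrix and its values are hypothetical, not from the study.

```python
import numpy as np

# Hypothetical matrix of individual RUF coefficients:
# rows = individuals (j = 1..n), columns = covariates (i).
beta_hat = np.array([
    [0.12, -0.40, 0.05],   # individual 1
    [0.08, -0.22, 0.01],   # individual 2
    [0.15, -0.35, -0.02],  # individual 3
])
n = beta_hat.shape[0]

# Equation (1): population-level coefficient = mean across individuals.
beta_bar = beta_hat.mean(axis=0)

# Equation (2): conservative variance of the population-level mean,
# treating individuals as a random sample, so the empirical spread of the
# estimates carries both inter- and intra-individual variation.
var_beta_bar = np.sum((beta_hat - beta_bar) ** 2, axis=0) / (n * (n - 1))
se_beta_bar = np.sqrt(var_beta_bar)

for i, (b, se) in enumerate(zip(beta_bar, se_beta_bar)):
    print(f"covariate {i}: beta = {b:.3f}, SE = {se:.3f}")
```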
We used k-fold cross-validation as a measure of model fit for the adult female and fawn white-tailed deer RUFs.
| Activity pattern
To assess daily activity patterns of coyotes and wolves and examine how each species partitions times of activity, we used accelerometers onboard GPS collars. Accelerometers measured gravitational acceleration four times per second along two axes (x and y). We programmed GPS collars to store activity data on the collar averaged across 5-min intervals. We considered a collared individual active when summed accelerometer readings were ≥30.7 (Petroelje et al., 2020) and subset the 5-min intervals to observations of active intervals only. We used a one-tailed t test with unequal variances to assess if coyotes, the subordinate species, were active a smaller proportion of the time than wolves, the dominant competitor (Hayward & Slotow, 2009). We estimated mean daily (24-hr) overlap of activity between coyotes and wolves using the active 5-min intervals and the R package "overlap" (Ridout & Linkie, 2009) for each time period (i.e., PPP, LMP, and SMP). We used the coefficient of overlapping (Δ), where 0 is no overlap and 1 is complete overlap, as a measure of activity pattern overlap (Linkie & Ridout, 2011; Ridout & Linkie, 2009). We used the nonparametric estimator for circular data recommended for small sample sizes (Ridout & Linkie, 2009). This coefficient takes the minimum of the two species' kernel density estimates of activity at each time of day and integrates it, i.e., the area under the minimum curve, as the measure of overlap (Linkie & Ridout, 2011).
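As an illustration of the coefficient of overlapping, a Python sketch follows; it approximates the kernel density step with a von Mises kernel on circular time-of-day data, whereas the study used the estimator implemented in the R package "overlap". The sample data and the smoothing parameter are hypothetical.

```python
import numpy as np
from scipy.special import i0

def vonmises_kde(times_rad, grid, kappa=40.0):
    # Circular kernel density estimate of activity times (radians, 0..2*pi)
    # using a von Mises kernel; kappa controls smoothing (an assumption).
    d = grid[None, :] - times_rad[:, None]
    dens = np.exp(kappa * np.cos(d)) / (2 * np.pi * i0(kappa))
    return dens.mean(axis=0)

def overlap_delta(times_a, times_b, n_grid=512, kappa=40.0):
    # Coefficient of overlapping: integral of the minimum of the two
    # activity densities over the 24-h (circular) cycle;
    # 0 = no overlap, 1 = complete overlap.
    grid = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    fa = vonmises_kde(times_a, grid, kappa)
    fb = vonmises_kde(times_b, grid, kappa)
    return np.trapz(np.minimum(fa, fb), grid)

# Hypothetical example: active 5-min intervals expressed as time of day (hours).
rng = np.random.default_rng(0)
coyote_hours = rng.normal(6, 2, 300) % 24     # dawn-biased activity
wolf_hours = rng.normal(7.5, 2.5, 300) % 24
delta = overlap_delta(coyote_hours / 24 * 2 * np.pi,
                      wolf_hours / 24 * 2 * np.pi)
print(f"coefficient of overlapping = {delta:.2f}")
```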
| Scat collection and diet analysis
We collected wolf and coyote scats opportunistically throughout the study area while driving along roads or performing other field activities during 1 May-31 August 2013-2015. We collected scats in plastic bags and labeled each with sample location, date collected, associated tracks present, and species. We used scat size and shape, and associated tracks, to identify the species that deposited the scat (Green & Flinders, 1981; Mech, 1970; Prugh & Ritland, 2005; Thompson, 1952). We excluded scats without associated tracks that were >28.1 mm and <29.0 mm, as these were above the third quartile for coyotes and below the first quartile for wolves and could therefore not be identified to species. We washed collected scats in double-layered nylons and oven-dried the contents so that all that remained were feathers, hair, bone fragments, seeds, and vegetation (Johnson & Hansen, 1979). Once contents were dried, we identified prey items including white-tailed deer (adult or fawn; Adorjan & Kolenosky, 1969), snowshoe hare, ruffed grouse, Rodentia, seeds, and other (which included other avian species, unknown species, vegetation, and invertebrates) based on hair coloration, scale pattern, and length (Adorjan & Kolenosky, 1969; Mathiak, 1938; Spiers, 1973; Wallis, 1993). We recorded the proportion of each prey item in each scat using a 1 × 1 cm grid to estimate the percent volume of each item.
We assessed whether coyote diets contained greater volumes of deer fawns, grouse, and snowshoe hare than wolf diets using analysis of variance. We calculated dietary breadth (B) and food niche overlap (α) for each species during each time period using Pianka's (1973) formulas, $B = 1/\sum_i p_i^2$ and $\alpha_{pq} = \sum_i p_i q_i \big/ \sqrt{\sum_i p_i^2 \sum_i q_i^2}$, where $p_i$ is the proportion of food item i in the diet of predator p and $q_i$ is the proportion of food item i in the diet of predator q.
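A short Python sketch of these two indices, using hypothetical diet proportions purely for illustration:

```python
import numpy as np

def dietary_breadth(p):
    # Dietary breadth: B = 1 / sum(p_i^2), where p_i is the proportion
    # of food item i in the diet.
    p = np.asarray(p, dtype=float)
    return 1.0 / np.sum(p ** 2)

def niche_overlap(p, q):
    # Pianka's symmetric food niche overlap between predators p and q:
    # alpha = sum(p_i * q_i) / sqrt(sum(p_i^2) * sum(q_i^2)).
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2))

# Hypothetical diet proportions (fawn, adult deer, hare, grouse, rodent, other).
coyote = [0.30, 0.20, 0.15, 0.05, 0.20, 0.10]
wolf   = [0.35, 0.45, 0.05, 0.03, 0.05, 0.07]
print(f"B coyote = {dietary_breadth(coyote):.2f}, B wolf = {dietary_breadth(wolf):.2f}")
print(f"alpha = {niche_overlap(coyote, wolf):.2f}")
```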
| Space use
Population-level resource selection assumes that individuals select habitats similarly (Thomas & Taylor, 2006). However, Alldredge et al. (1998) suggested this assumption is rarely met and individual variation is important for population-level inference, especially if exclusion is occurring. Thus, we analyzed coyote and wolf location data with a Design III approach using individuals as replicates, accounting for individual-level variation, to assess population-level use (Thomas & Taylor, 2006). We used RUFs to relate the OD of individual wolves and coyotes to covariates thought to influence resource use.
To generate each OD, we used 15-min GPS relocations (x = 1,595.7/OD) from collared wolves and coyotes collected during 1 May-31 August 2013-2015. To identify the activity state of an individual at each GPS location, we used activity data collected from accelerometers and assigned each 15-min location as active if the nearest 5-min activity interval was ≥30.7 (gravitational acceleration, unit-less), otherwise we considered the location as inactive (Petroelje et al., 2020). For each collared individual, we used a dynamic Brownian bridge movement model (dBBMM; Kranstauber et al., 2017) within the package "move" for program R (version 3.01, R Development Core Team, 2018) to generate a 99% OD across a 30 × 30 m grid for all inactive (i.e., sleeping, resting) and all active (i.e., traveling, foraging) GPS relocations for each time period (i.e., PPP, LMP, and SMP; Figure 2). The dBBMM offers improvements over traditional utilization distribution estimators (e.g., fixed-kernel estimators) as it accounts for temporal autocorrelation by using the time and distance between locations and assumes movement between locations is random, modeled as a conditional random walk, which is likely given 15-min GPS relocations.
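A minimal sketch of the activity-state assignment described above, matching each 15-min GPS fix to the nearest 5-min accelerometer interval and applying the 30.7 threshold; the table layout and column names are assumptions for illustration only.

```python
import pandas as pd

ACTIVITY_THRESHOLD = 30.7  # summed accelerometer reading defining "active"

def label_fixes(gps, activity):
    # gps: DataFrame with a 'timestamp' column of 15-min GPS fixes.
    # activity: DataFrame with 'timestamp' and 'acc_sum' columns of 5-min
    # accelerometer intervals. Both are hypothetical tables.
    gps = gps.sort_values("timestamp")
    activity = activity.sort_values("timestamp")
    # Match each fix to the nearest 5-min activity interval.
    merged = pd.merge_asof(gps, activity, on="timestamp",
                           direction="nearest",
                           tolerance=pd.Timedelta("5min"))
    merged["state"] = (merged["acc_sum"] >= ACTIVITY_THRESHOLD).map(
        {True: "active", False: "inactive"})
    return merged

# Example with toy data:
gps = pd.DataFrame({"timestamp": pd.date_range("2014-05-01", periods=4, freq="15min")})
activity = pd.DataFrame({
    "timestamp": pd.date_range("2014-05-01", periods=12, freq="5min"),
    "acc_sum": [5, 40, 12, 55, 3, 2, 70, 66, 10, 8, 45, 50]})
print(label_fixes(gps, activity)[["timestamp", "acc_sum", "state"]])
```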
The dBBMM estimates the Brownian motion variance ($\sigma_m^2$), which varies along the GPS path via a sliding window to account for changes in movement behavior (Kranstauber et al., 2017). We selected a window of 23 locations (5.75 hr) and a margin of five locations to estimate $\sigma_m^2$, as wolves and coyotes displayed similar crepuscular activity patterns during each time period (Figure 3). We generated ODs for each individual wolf or coyote during each time period (i.e., PPP, LMP, SMP) and each activity level (active or inactive), resulting in six ODs per individual, and considered the 99% OD as the outer boundary of the area available to each wolf and coyote.
We used linear models (Marzluff et al., 2004) to regress the occurrence probability within each grid cell (i.e., height of the OD) on nine prey or landscape covariates to estimate the relative importance of each covariate to wolf and coyote occurrence; for coyotes, we also included the population-level predicted probability of occurrence for wolves in each grid cell as a measure of avoidance. Before fitting models, we used Pearson's correlation to determine any covariates that were related (i.e., |r| > 0.7) and selected and retained the one that was more ecologically relevant for further analyses.
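The sketch below illustrates these two steps, correlation screening of covariates followed by an ordinary least-squares fit of OD height on the retained covariates, on a hypothetical grid-cell table; it is a simplified stand-in, not the study's exact model.

```python
import numpy as np
import pandas as pd

def screen_covariates(df, covariates, threshold=0.7):
    # Drop one covariate from each highly correlated pair (|r| > threshold).
    # Which member of a pair is retained was an ecological judgment call in
    # the study; here we simply keep the first one encountered.
    corr = df[covariates].corr().abs()
    dropped = set()
    for i, a in enumerate(covariates):
        for b in covariates[i + 1:]:
            if a in dropped or b in dropped:
                continue
            if corr.loc[a, b] > threshold:
                dropped.add(b)
    return [c for c in covariates if c not in dropped]

# Hypothetical grid-cell table: OD height plus prey/landscape covariates.
rng = np.random.default_rng(1)
cells = pd.DataFrame({
    "od_height": rng.random(500),
    "dist_edge": rng.random(500),
    "dist_road": rng.random(500),
    "deer_prob": rng.random(500),
    "hare_density": rng.random(500),
})
covs = ["dist_edge", "dist_road", "deer_prob", "hare_density"]
retained = screen_covariates(cells, covs)

# Individual RUF: least-squares regression of OD height on the retained covariates.
X = np.column_stack([np.ones(len(cells))] + [cells[c] for c in retained])
beta, *_ = np.linalg.lstsq(X, cells["od_height"].to_numpy(), rcond=None)
print(dict(zip(["intercept"] + retained, np.round(beta, 3))))
```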
To estimate a population-level RUF, we calculated standardized mean parameter estimates for each species during each activity level and time period using Equation (1) and then calculated the conservative population-level variance using Equation (2) assuming the individuals were selected randomly from the population (Marzluff et al., 2004;Millspaugh et al., 2006). We set α = 0.05 for all population-level RUFs for inference. This is conservative due to small sample size of fewer than 30 individual coyotes and wolves. To assess model fit, we used k-fold cross-validation of wolf and coyote RUFs following procedures used for white-tailed deer.
| Capture and telemetry
We captured and collared 19 coyotes (15 females, four males) and 12 wolves (five females, seven males). The forested environment limited direct observation of collared individuals, though all individuals used in analyses were resident adults. Collared wolves represented each of the four packs within the study area. Two wolves collared from each of two packs were analyzed separately.
| Estimates of prey availability
We used the unstandardized population-level RUF for each deer age class to predict probability of occurrence by adult female and fawn deer across the landscape during each time period (Appendix B, Table B2).
| Activity pattern
Proportion of time active did not differ between wolves and coyotes during PPP or LMP; however, during SMP coyotes were more active than wolves (p < .01). Mean daily activity overlap for coyotes and wolves was greater than 0.86 across time periods (Table 1), though it was greatest during PPP (Δ = 0.92). Two activity peaks, one near dawn and one near dusk, were detected for both canids, though wolves lacked an activity peak during dawn hours in PPP and were often more active several hours following sunrise compared to coyotes (Figure 3).
| Scat collection and diet analysis
We collected 522 and 518 scats initially classified as coyote or wolf, respectively. Diameter of scats with confirmed coyote tracks (x̄ = 25.2 mm, SD = 4.4 mm) was smaller (one-tailed Welch two-sample t test, p < .01) than that of scats with confirmed wolf tracks (x̄ = 33.3 mm, SD = 6.1 mm).
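The comparison above can be reproduced in outline with SciPy's Welch t test (the simulated diameters below are hypothetical and only mirror the reported means and SDs; the alternative argument requires SciPy ≥ 1.6):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical scat diameters (mm) from scats with confirmed tracks.
coyote_diam = rng.normal(25.2, 4.4, 120)
wolf_diam = rng.normal(33.3, 6.1, 90)

# Welch two-sample t test (unequal variances), one-tailed with the
# alternative that coyote diameters are smaller than wolf diameters.
t_stat, p_value = stats.ttest_ind(coyote_diam, wolf_diam,
                                  equal_var=False, alternative="less")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")
```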
We determined 377 and 305 scats to be coyote or wolf, respectively, identified by tracks or scat diameter and with an associated collection date; these were used in diet analyses. Food niche overlap varied among time periods (Table 1).
| Space use
Resource utilization functions for each species, activity level, and time period contained considerable variation among individuals; however, population-level RUFs consistently showed greater variation in selection of resource attributes by coyotes compared to wolves (Figures 7 and 8). Some individual wolves and coyotes selected for resource attributes similarly (Appendix B, Table B1).
| DISCUSSION
Wolves and coyotes exhibited considerable overlap in all metrics of resource use examined (Table 1). The greatest divergence was identified within diel activity patterns, then diet, followed by spatial partitioning during periods of activity and inactivity. Given the considerable overlap in all resource metrics, coyotes may experience interference competition by wolves; however, the combination of greater plasticity in activity, diet, and space use by coyotes likely allowed coexistence with wolves in this system.
Our prediction that coyotes may avoid wolves by altering the timing of their active periods and decreasing activity within those periods was not supported across time periods, as activity overlap was high and coyotes were not less active than wolves (Figure 5).
TABLE 1 Summary of wolf and coyote overlap for each resource metric examined (i.e., activity, diet, and space use)
Wolf and coyote activity was predominantly crepuscular, with substantial overlap during all time periods, as found previously (Arjo & Pletscher, 1999); however, wolves lacked a dawn activity peak during PPP whereas coyotes did not. The proportion of time spent active for both species generally increased across time periods, but during SMP coyotes were more active than wolves. Temporal partitioning can be used to reduce aggression when interference competition exists (Litvaitis, 1992), though other canids exhibiting interference competition also lacked temporal partitioning (e.g., coyotes and kit fox [Vulpes macrotis; Kozlowski et al., 2008], coyotes and swift fox [Vulpes velox; Kitchen et al., 1999]). Predators are often thought to follow the activity patterns of their prey (Curio, 1976), and though both canids were most active during crepuscular periods, coyotes may not need to avoid wolves through temporal partitioning if spatial partitioning is sufficient to limit interference competition.
It also is possible that temporal partitioning does not occur during summer because wolf space use is reduced by denning and pup rearing (Arjo & Pletscher, 1999). We only examined activity during summer (i.e., May-August); greater overlap between wolves and coyotes may occur during winter months when prey is more limited (Arjo et al., 2002), which could result in temporal partitioning to reduce interference competition that was not identified here.
Though wolves and coyotes differ in body size, and thus predicted optimal prey size, deer composed a greater proportion of the diet of wolves than of coyotes, which is expected for an obligate carnivore and ungulate specialist (Paquet & Carbyn, 2003), though deer (adult and fawns) still represented the greatest proportion of any prey for coyotes across time periods. We predicted that coyotes would select for smaller prey items based on their predicted optimal prey size (Carbone et al., 1999), and rodents and hare were found in greater volumes in coyote scat as compared to wolves. However, deer fawns and grouse found in diets of coyotes and wolves did not differ by volume in scats. Though rodents consistently represented a greater proportion of the coyote diet compared to wolves, greater differentiation would likely have been observed if prey remains of Rodentia in scat were identified to genus, as beaver can be an important food resource for wolves (Mech & Peterson, 2003) and coyotes are reported to consume a variety of small mammals (Bekoff, 1977).
We found limited evidence for spatial segregation between wolves and coyotes (Figures 7 and 8). Similarly, Berger and Gese (2007) found no evidence of spatial segregation between wolves and coyotes (see also Arjo & Pletscher, 2004). Population-level RUFs showed greater variation in selection by coyotes as compared to wolves when active and inactive. The greater variation observed in coyotes was likely due to more generalist behavior and their subordinate responses to wolves, as seen in other populations (Arjo & Pletscher, 2004; Arjo et al., 2002). Resource utilization functions for individual coyotes demonstrated selection for divergent resources, suggesting coyotes can employ multiple strategies to coexist with wolves at fine spatial scales (Appendix B, Table B1). This is important to consider when characterizing population-level resource selection, as individual variation may be greater (Marzluff et al., 2004), and potentially important, especially in the context of interference competition. In addition to individual variation, in complex landscapes selection of single resource attributes may not provide good estimates of species presence (as indicated by the many individual models with multiple resource attributes influencing occurrence).
FIGURE 8 Population-level resource utilization function standardized coefficients (β) with 95% confidence intervals for active wolves (green) and coyotes (blue). Land cover covariates (*) indicate selection relative to the reference value of deciduous land cover, the most common land cover on the landscape. The three time periods relate to white-tailed deer availability: preparturition (PPP), limited mobility (LMP), and social mobility (SMP).
Although coyotes and wolves did not select for similar attributes at the population level, individual RUFs of each species included the same significant resource attributes (Appendix B, Table B1). Given our small sample size, we did not include interaction terms for resource attributes, to reduce overparameterization, though further investigation of landscape complexity and resource interactions may improve our understanding of coyote avoidance of wolves, especially with respect to multiple prey species interactions. However, even at the population level, examining use of resource attributes with separate RUFs for active and inactive behaviors demonstrates the complexity of resource partitioning for a coyote population coexisting with wolves and how use may differ among activities (i.e., foraging, loafing). High individual variation in resource use among coyotes, as manifested at the population level, likely facilitates coexistence between coyotes and wolves.
Our prediction that active wolf occurrence would be positively related to adult female deer occurrence was not supported.
However, during LMP adult female and fawn deer and wolf active and inactive occurrence were negatively related to distance to edge at the population level. In addition, adult female and fawn deer and active wolf occurrence during SMP were inversely related to distance to roads. Fawn white-tailed deer use has also been found to be greater near roads in other areas of Michigan's Upper Peninsula, USA (Duquette et al., 2014), and roads have been suggested to act as a refuge by decreasing the probability of encountering wolves (Gurarie et al., 2011; Muhly et al., 2011; Theuerkauf & Rouys, 2008). However, wolves sometimes use roads and trails for travel (Thurber et al., 1994; Whittington et al., 2005) and may hunt along these features, as seen in Banff and Jasper National Parks, Canada, where wolf encounter rates with caribou (Rangifer tarandus) increased near anthropogenic linear features (Whittington et al., 2011).
We predicted active coyotes would select areas of greater probability of occurrence for fawns, snowshoe hares, and ruffed grouse.
Though fawns were a large proportion of the diet of coyotes during LMP (Figure 6), we did not see increasing coyote occurrence with greater deer probability (Figures 7 and 8). Coyotes can respond functionally with respect to fawn consumption and may not shift their space use to select for areas of high fawn use. Coyote occurrence was not positively related to hare density (Figures 7 and 8), and though hare represented a smaller proportion of the coyote diet, the lack of a spatial response suggests coyotes may have also responded functionally as hare densities declined significantly over the study period (Appendix A, Table A2). Coyote occurrence was not influenced by grouse density, though we would not expect a large spatial response as grouse represented a small proportion of the diet of coyotes across time periods (Figure 6).
We predicted inactive coyote occurrence would be inversely related to wolf occurrence to avoid encounters during vulnerable activities such as loafing or sleeping, but at the population-level RUF this prediction was not supported (Figure 7). Coyote avoidance of areas with greater wolf use has been observed in Michigan's Upper Peninsula, USA, though those areas of wolf use were reduced in extent and greater in intensity of use due to smaller home ranges resulting from scavenging on livestock carcass dumps, which were not present in our study area. This regional variation in spatial response to wolves may be explained by risk of aggressive interactions. Merkle et al. (2009) found that 79% of wolf-coyote interactions occurred at wolf-killed carcasses and 7% of those interactions resulted in a coyote mortality; thus, avoidance of wolves may be less important where scavenging wolf kills is less common.
Predation on coyotes by wolves is often used to confirm interference competition (Arjo & Pletscher, 1999; Berger & Gese, 2007; Merkle et al., 2009; Thurber & Peterson, 1992) and can account for up to 50% of mortality for transient coyotes (Berger & Gese, 2007). Wolves have been present in Michigan's Upper Peninsula since the late 1990s (Beyer et al., 2009). Additionally, our study area was mostly forested, in contrast to the more open habitats of the western United States, which is likely to influence visible distance, scent dispersion, and spatial overlap between wolves and coyotes. Greater habitat complexity can result in lesser competition by reducing niche overlap (Levins, 1979), and reduced scent dispersion in complex habitats increases search times for detection dogs (Leigh & Dominick, 2015), conditions that likely also apply to wolves and coyotes.
Alternatively, Crimmins and Van Deelen (2019) suggested that in areas
where white-tailed deer are a main prey source, as in this study, coyotes are less likely to scavenge wolf kills because they are capable of killing adult deer themselves, potentially reducing conflict relative to systems without large-bodied ungulate resources. They found no evidence that increasing wolf populations were limiting coyote abundance in Wisconsin, USA, which shares many similarities with our study area in Michigan's Upper Peninsula, USA, though lower wolf densities may also be important in facilitating coexistence in that region. Though deer were the greatest shared prey for wolves and coyotes in this study, given the generalist habitat use of deer, as supported by the adult female and fawn RUFs, it seems unlikely that deer presented a concentrated prey source during the study period. Further, during this time fawns are of a size to be consumed in a single meal or easily transported, which reduces the likelihood of scavenging, and adult deer are difficult to capture.
| CONCLUSIONS
Interference competition suggests that dominant species can suppress or exclude subordinate competitors where resource use overlap is high (Case & Gilpin, 1974).
ACKNOWLEDGMENTS
We received support for this project from Safari Club International
CONFLICT OF INTEREST
The authors declare that they have no competing interests.
Snowshoe hare
Following recommendations of Hodges and Mills (2008), we estimated snowshoe hare abundance from mid-April to early May 2013-2015, following snowmelt, by counting fecal pellet groups within 1 m² plots. We established pellet plot sites within each land cover class (Jin et al., 2013) and sampled aspen separately (Ellenwood et al., 2015), as it is preferred winter forage for snowshoe hares (Bookhout, 1965) and differs from the dominant deciduous cover (i.e., sugar maple [Acer saccharum]).
We sampled remaining land cover types, with ≥30 pellet plot sites in each, to identify if any were of importance for snowshoe hare ("open water" and "developed" were not sampled). At each site, we compared the land cover layer designation to the actual vegetation observed, using the designations provided by Jin et al. (2013), to correctly assign each plot to a land cover classification. Each plot was a 10-cm × 10-m rectangle, and we counted all pellets greater than 50% contained by the rectangle. We used plots that were uncleared of hare pellets prior to surveying, as they do not require a waiting period between clearing and counting. We converted density estimates from hares/ha to hares/km² and applied a correction factor of 1.41 to account for natural log bias produced by the transformation (Murray et al., 2002). In addition, we calculated a study-area density using the weighted mean by proportion of land cover to examine trends in the hare population over time.
TABLE A1 Major land cover classes (NLCD; Jin et al., 2013) and percent of the study area:
- Deciduous forest (43%): Areas dominated by trees generally greater than 5 m tall and greater than 20% of total vegetation cover; more than 75% of the tree species shed foliage simultaneously in response to seasonal change. Aspen (Populus tremuloides or P. grandidentata) represents the dominant cover for 12% of deciduous forests within the study area (Ellenwood et al., 2015).
- Woody or emergent herbaceous wetland (29%): Areas where forest or shrubland vegetation accounts for greater than 20% of vegetative cover and the soil or substrate is periodically saturated with or covered with water, or areas where perennial herbaceous vegetation accounts for greater than 80% of vegetative cover and the soil or substrate is periodically saturated with or covered with water.
- Mixed forest (10%): Areas dominated by trees generally greater than 5 m tall and greater than 20% of total vegetation cover; neither deciduous nor evergreen species are greater than 75% of total tree cover.
- Evergreen forest: Areas dominated by trees generally greater than 5 m tall and greater than 20% of total vegetation cover; more than 75% of the tree species maintain their leaves all year, and the canopy is never without green foliage.
Ruffed grouse
We used 65 roadside male grouse drumming survey sites and five visits per site to estimate density of grouse. Surveys were conducted when wind speeds were <8 mph and there was no precipitation, as these conditions may inhibit bird activity or detection (Zimmerman and Gutiérrez, 2007). We established survey sites >1.6 km apart to ensure site independence and assumed grouse have a maximum detection radius of 550 m from each survey point (Hansen et al., 2011). We surveyed during the spring drumming season, given the seasonality of this behavior, and included survey date as a covariate of detection. We included proportion of aspen land cover (Ellenwood et al., 2015) within each site detection radius as a covariate of abundance. We used Akaike information criterion for small sample sizes (AICc) to rank N-mixture models for best fit (Burnham & Anderson, 2002) when estimating grouse abundance. We considered all combinations of covariates of detection and abundance, a total of four models each year, and we considered the model with the lowest AICc score as the best supported model for each year. We assumed the grouse population had a 1:1 sex ratio (Gullion, 1981) and estimated the population density by doubling the estimated drumming (i.e., male) grouse abundance from the best supported N-mixture model and converting this number to a density by dividing it by the total area surveyed.
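A hypothetical worked example of the final density conversion (detection radius, number of sites, and sex ratio as described above; the abundance value itself is made up):

```python
import numpy as np

n_sites = 65
detection_radius_km = 0.55                       # 550 m maximum detection radius
area_per_site = np.pi * detection_radius_km ** 2
total_area_surveyed = n_sites * area_per_site    # km^2

drumming_abundance = 160                         # hypothetical N-mixture estimate
total_abundance = 2 * drumming_abundance         # assumes a 1:1 sex ratio
density = total_abundance / total_area_surveyed
print(f"surveyed area = {total_area_surveyed:.1f} km^2, "
      f"density = {density:.2f} grouse/km^2")
```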
TABLE A2 Mean (x̄) pellet counts for snowshoe hare pellet plots with 95% confidence intervals (CI) by dominant land cover or species (i.e., aspen; Populus tremuloides or P. grandidentata) classification, with number of sites (n) and estimated density (hares/km²) by land cover and for the overall study area for each year, Michigan's Upper Peninsula, USA, 2013-2015
Ruffed grouse
We detected an average of 0.7, 0.4, and 0.6 drumming grouse at each site during 2013-2015, respectively. Timing of survey visit (i.e., date) influenced detection of drumming grouse during all three survey years (Table A3). N-mixture models estimated detection (15.8%-33.4%) and abundance (137-178) as relatively stable across years with confidence intervals overlapping each year (Table A3).
Drumming male grouse abundance estimates were doubled to estimate total grouse abundance. a N-mixture model includes covariates of detection on the left and abundance on the right. The "date" covariate was Julian date. The null model (intercept only) is indicated as "1." Covariates for ruffed grouse include "asp" as the proportion of aspen (Populus tremuloides or P. grandidentata) land cover within each survey site.
| 8,572.2 | 2021-01-11T00:00:00.000 | ["Environmental Science", "Biology"] |
The equilibrium state of bifilar helix as element of metamaterials
In the present article, we study a long bifilar helix in which the electric currents are quasi-stationary, i.e. the wavelength of the electromagnetic field is much longer than one turn of the helix. All components of the force acting on a physically small element of one helix from the other helix, which has a large length, are calculated. The case when the currents in the two helices have the same direction relative to the x axis is considered. The dependence of the radial component of the force of interaction between the two helices on the pitch angle is determined. At various pitch angles the helices can attract or repel each other while the direction of the current does not change. The value of the pitch angle at which the two helices do not interact, so that the bifilar helix formed by them is in an equilibrium state, is found.
Introduction
Metamaterials are artificially engineered, structured materials with properties that have not yet been found in nature. The properties of metamaterials derive both from the inherent properties of their constituent materials and from the geometrical arrangement of those materials. They are made from assemblies of multiple elements fashioned from materials such as metals or plastics. Metal helices are widely used as elements of metamaterials because, when an electric current flows in them, electric dipole moments and magnetic moments are generated simultaneously. Consequently, a metamaterial based on helical elements displays both dielectric and magnetic properties, which enhances its usefulness [1][2][3][4][5][6][7][8][9][10][11] (Fig. 1). A special place among such elements is occupied by bifilar helices, consisting of two helical conductors. These conductors are arranged mutually symmetrically: the second helix is rotated with respect to the first helix by 180 degrees around the common helix axis (the x axis). Electric currents in such bifilar helices are more balanced than in the case of single helices, which leads to symmetry of the properties of the metamaterial: some components of the tensors of dielectric susceptibility and magnetic susceptibility vanish. At the same time, this raises the question of the stability of the bifilar helix, since the electric currents in it are close to each other and can interact strongly [12][13][14][15]. The objective of this article is to find the value of the pitch angle at which the two helices do not interact and the bifilar helix formed by them is in an equilibrium state. The determination of the condition of the equilibrium state can be used for the design and manufacture of metamaterials consisting of bifilar helices as elements. All components of the force acting on a physically small element in the center of one helix from the other helix, which has a large length, are calculated. The integral equation for the determination of the pitch angle is found and numerically solved, yielding the value of the pitch angle at which the two helices do not interact and the bifilar helix formed by them is in an equilibrium state.
The equilibrium state of bifilar helix
In this work, a long bifilar helix in which the electric currents are quasi-stationary, i.e. the wavelength of the electromagnetic field is much longer than one turn of the helix, is considered. In Fig. 2, as an example, a two-turn helix and the helix in unfolded form are presented. The geometry of the problem is presented in Fig. 3.
Fig. 2. Two-turn helix and helix in unfolded form, where r is the radius of the turn, L is the total wire length, α is the pitch angle, h = 2π/|q| is the helix pitch, q is the specific twisting of the helix (q > 0 for a right-handed helix and vice versa), and cot α = qr.
The following notations are used: $\vec{r}_0$ is the radius-vector from the current element to the origin of the coordinate system, and $I_1 d\vec{l}_1$ is the current element of the second helix. The standard relations between the projections of a vector in the Cartesian and polar coordinate systems are used throughout. The vector $\vec{B}$ of the magnetic field induction at point A (Fig. 3), generated by the current elements $I\,d\vec{l}$ of the first helix, is calculated by the Biot-Savart formula, $\vec{B} = \frac{\mu_0 I}{4\pi}\int \frac{d\vec{l}\times\vec{r}_0}{r_0^3}$. When its components are written in the Cartesian coordinate system, the integrand of one component is an odd function, so that component is equal to zero. This is a very important feature: it provides the balance of the two symmetrically arranged helices, i.e. the absence of forces along the x and y axes at the center of the helix (see below). Suppose that at point A the element $I_1 d\vec{l}_1$ of the second helix is located. The second helix is disposed symmetrically relative to the first helix; together they form a bifilar helix. We note that $I_1 > 0$ and $I > 0$, because the currents flow in the same direction with respect to the x axis, while the transverse projections of $d\vec{l}_1$ have the opposite sign with respect to those of $d\vec{l}$. The force acting on the element $I_1 d\vec{l}_1$ of the second helix is calculated by Ampère's formula, $d\vec{F}_1 = I_1\, d\vec{l}_1 \times \vec{B}$. All components of the force acting on a physically small element of one helix from the other helix, which has a large length, are calculated.
The force is calculated at the center of a long bifilar helix, i.e. at x = 0. We find that the following components of the force are equal to zero: $F_{1x} = 0$ and $F_{1\varphi} = 0$, where $\varphi$ is the polar angle. Consequently, at the center of the helix there are no forces that could rotate the helices around their axis or move them along this axis. At the same time, the relation $F_{1r} \neq 0$ is satisfied, i.e.
the component of the force acting along the radius r of the helix loop is present. We also see that the force $F_{1r}$ depends strongly on the pitch angle of the bifilar helix.
The case when the currents in the two helices have the same direction relative to the x axis is considered. At large pitch angles of the helices the relation $F_{1r} < 0$ is satisfied, i.e. the helices are mutually attracted. In the limiting case $\alpha \to \pi/2$, $q \to 0$, the well-known classical formula for the force of attraction between co-directed long parallel currents, $F/L = \mu_0 I I_1/(2\pi \cdot 2r)$, is obtained; here 2r is the distance between the long parallel currents. At small pitch angles of the helices, $\alpha \to 0$, $q \to \infty$, when the loops take a form close to flat loops, the inequality $F_{1r} > 0$ is satisfied, i.e. the helices repel each other. After a change of variables, we see that the root of the equation $F_{1r}(\alpha) = 0$ does not depend separately on the radius of the helix r or on the helix pitch $h = 2\pi/|q|$. The root of the equation $F_{1r}(\alpha) = 0$ is determined only by the pitch angle of the helix $\alpha$, and in this sense the angle $\alpha = \operatorname{arccot}(qr)$ is a universal characteristic of the bifilar helix. The equation $F_{1r}(\alpha) = 0$ is numerically solved for the equilibrium bifilar helix, and its root is $\alpha_0$ (see Fig. 4). The equilibrium pitch angle $\alpha_0$ is the same for all sizes of helices, both small and large, if the condition of quasi-stationary current $\lambda \gg l$ is satisfied. Here $l = \sqrt{(2\pi r)^2 + h^2}$ is the length of one turn of the helix and $\lambda$ is the wavelength of the electromagnetic field. The value of the pitch angle $\alpha_0 = 38.4°$ is found. At this value the relation $F_{1r}(\alpha_0) = 0$ is satisfied, i.e. the two helices do not interact and the bifilar helix formed by them is in an equilibrium state.
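As an illustration of the calculation described above (not the authors' analytic derivation), the following Python sketch discretizes one helix, evaluates the Biot-Savart field at a current element of the second helix at the bifilar center, and takes the radial component of the Ampère force. All discretization parameters and function names are assumptions; with a sufficiently long helix the radial force should change sign from repulsion at small pitch angles to attraction at large ones, in line with the behavior described in the text.

```python
import numpy as np

def radial_force(alpha_deg, r=1.0, n_turns=400, n_per_turn=200):
    # Brute-force Biot-Savart + Ampere estimate of the radial force on a
    # current element of helix 2 at the center of a long bifilar helix
    # (axis along x). The prefactor mu0*I*I1/(4*pi) is set to 1.
    alpha = np.radians(alpha_deg)
    h = 2 * np.pi * r * np.tan(alpha)            # helix pitch from pitch angle
    phi = np.linspace(-n_turns * np.pi, n_turns * np.pi, n_turns * n_per_turn)
    dphi = np.gradient(phi)
    # Helix 1 points and current elements dl (current in the +x sense).
    p1 = np.stack([h * phi / (2 * np.pi), r * np.cos(phi), r * np.sin(phi)], axis=1)
    dl1 = np.stack([np.full_like(phi, h / (2 * np.pi)),
                    -r * np.sin(phi), r * np.cos(phi)], axis=1) * dphi[:, None]
    # Field point A: element of helix 2 (rotated by 180 deg) at phi = 0.
    A = np.array([0.0, -r, 0.0])
    rv = A - p1                                   # vectors from source elements to A
    rn = np.linalg.norm(rv, axis=1)[:, None]
    B = np.sum(np.cross(dl1, rv) / rn ** 3, axis=0)   # Biot-Savart sum
    # Unit tangent of helix 2 at A (same +x sense of current).
    t2 = np.array([h / (2 * np.pi), 0.0, -r])
    t2 = t2 / np.linalg.norm(t2)
    F = np.cross(t2, B)                           # Ampere force direction at A
    radial_unit = np.array([0.0, -1.0, 0.0])      # outward radial direction at A
    return float(F @ radial_unit)                 # >0 repulsion, <0 attraction

for a in (20, 30, 38.4, 50, 70):
    print(f"alpha = {a:5.1f} deg -> radial force ~ {radial_force(a):+.4f}")
```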
Conclusion
We have studied a long bifilar helix in which the electric currents are quasi-stationary. All components of the force acting on a physically small element in the center of one helix from the other helix, which has a large length, are calculated. The value of the pitch angle $\alpha_0 = 38.4°$ at which the two helices do not interact and the bifilar helix formed by them is in an equilibrium state is found. The determination of the condition of the equilibrium state can be used for the design and manufacture of metamaterials consisting of bifilar helices as elements.
Fig. 1. a) Photo of the helices array In0.2Ga0.8As/GaAs/Ti/Au (the square grid in the photo is a negative photoresist made of a polymeric material, thickness about 1 micron) [4]; b) SEM image of an array of one-turn InGaAs/GaAs/Ti/Au helices resonant in the THz range [5]; c) DNA-like helices resonant in the microwave range [6][7].
Fig. 3. The geometry of the problem (Cartesian and polar coordinate systems).
| 1,885.8 | 2016-01-01T00:00:00.000 | ["Materials Science", "Physics"] |
Reliable OFDM Data Transmission with Pilot Tones and Error-Correction Coding in Shallow Underwater Acoustic Channel
: The performance of Underwater Acoustic Communication (UAC) systems are strongly related to the specific propagation conditions of the underwater channel. Horizontal, shallow-water channels are characterised by extremely disadvantageous transmission properties, due to strong multipath propagation and refraction phenomena. The paper presents the results of communication tests performed during a shallow, inland-water experiment with the use of a laboratory model of a UAC system implementing the Orthogonal Frequency-Division Multiplexing (OFDM) technique. The physical layer of data transmission is partially configurable, enabling adaptation of the modulation and channel coding parameters to the specific propagation conditions. The communication tests were preceded by measurement of the UAC channel transmission properties. Based on the estimated transmission parameters, four configurations of OFDM modulation parameters were selected, and for each of them, communication tests were performed with the use of two Error-Correction Coding (ECC) techniques. In each case, the minimum coding rate was determined for which reliable data transmission with a Bit Error Rate (BER) of less than 10 − 4 is possible.
Introduction
The designers of shallow-water communication systems try to implement the techniques of modern radiocommunications, but both the BER and the data transmission rates achieved are much worse in the case of UAC systems. This is due to the disadvantageous properties of UAC channels, namely the sea and inland waters, but also to the technical capabilities of the generation and reception of acoustic waves. The range of a UAC system is determined mainly by the absolute value of the absorption attenuation, which grows in proportion to the square of the frequency of the system [1]. Differences in the attenuation of frequency components of the transmitted signal, which grow with range, have a degrading influence on the shape of the signal spectrum, and thus the time-domain waveform is distorted. To avoid these distortions, the bandwidth should be reduced as the system's range increases. Therefore, the differences in attenuation have a limiting effect on the bandwidth of the system and reduce its throughput. Another phenomenon that strongly impacts the transmission properties of the UAC channel consists of reflections from the sea bottom and the water's surface, as well as from other objects present in the water. This causes multipath propagation, which goes hand-in-hand with strong refraction caused by a significant change in sound velocity as a function of depth. Both multipath propagation and refraction produce time dispersion of the transmitted signal. The time dispersion causes Inter-Symbol Interference (ISI), the consequence of which is frequency-selective fading observed in the received signal spectrum. Moreover, if the UAC transmitter or receiver, or objects reflecting the signal, remain in motion, the possibilities of correct information detection are significantly limited due to the Doppler effect, which causes signal spectrum distortions that manifest as Inter-Carrier Interference (ICI) in the case of multi-carrier systems.
The Orthogonal Frequency-Division Multiplexing (OFDM) technique is a digital modulation scheme used by many wireless communication standards, such as WiFi (IEEE 802.11 a/g/n), WiMAX (IEEE 802.16), and the fourth generation (4G) cellular systems. The popularity of OFDM stems from its capability to convert a long multipath channel in the time domain into multiple parallel single-tap channels in the frequency domain, thus considerably simplifying receiver design. Such a feature makes OFDM an attractive choice for UAC systems [2]. UAC systems using the OFDM technique are characterized by a high flexibility of modulation parameters, which allows the signal to be adapted to the specificity of a particular communication channel. At very short ranges on the order of several hundred meters, OFDM systems work in a frequency bandwidth from several to several dozen kHz, allowing transmission rates of tens of kbps, but with a Bit Error Rate not less than 10⁻¹. The use of Error-Correction Coding allows a BER of less than 10⁻³ to be achieved, while reducing the transmission rate to single kbps [3][4][5]. At long distances over 50 km, the OFDM technique allows a transmission rate of several dozen bps to be achieved, but such a system is characterized by low reliability (BER not less than 10⁻¹), despite the use of ECC [6].
The paper presents the results of underwater acoustic OFDM communication test performed at a distance of 1 km in a very shallow inland-water channel. The aim of the experiment was to adapt the OFDM parameters to the propagation conditions of a shallow-water channel in such a way that it is possible to achieve high reliability of data transmission with a BER less than 10 −4 . Such reliability was achieved using OFDM technique together with the channel equalization using pilot tones and ECC.
A similar OFDM experiment, carried out over a distance of 340 m using two pilot tone schemes but without Error-Correction Coding, is described in [7]. The OFDM data transmission was tested with different configurations of modulation parameters. The subcarrier spacing varied from 78.13 Hz to 1250 Hz, and the OFDM symbol duration from 0.8 ms to 12.82 ms. The minimum BER achieved was equal to 0.004.
The adaptation of the OFDM modulation scheme parameters, such as symbol duration and subcarrier spacing, to transmission properties of the UAC channel, are well described in literature on wireless communications [8]. However, the detailed procedures of determining the UAC transmission parameters of the channel, such as the delay spread, Doppler spread, coherence bandwidth, and coherence time, are rarely presented. It applies in particular to the threshold levels of relevant transmission characteristics based on which the parameters are determined. The performed underwater acoustic OFDM communication experiment has shown that the choice of the criteria for determining transmission parameters, based on which OFDM modulation scheme is designed, has an impact on the achieved data transmission rate and reliability.
The organization of the paper is as follows. Section 2 describes the hardware of the UAC system used during the inland-water experiment as well as the physical layer of the OFDM data transmission. Section 3 presents the results of the measurement of the underwater channel and the estimation of its basic transmission parameters; the setup of the experiment is also described in this section. In Section 4 the results of the OFDM transmission tests are presented, and they are discussed in Section 5.
Materials and Methods
The OFDM technique was implemented in a laboratory model of an acoustic data transmission system designed at the Department of Sonar Systems, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology. The system enables the OFDM modulation parameters and the ECC rate to be adjusted to the propagation conditions so as to obtain the desired Bit Error Rate of less than 10⁻⁴. Some of the OFDM signal parameters (the signal bandwidth B, the carrier frequency f_c, and the sampling frequency f_s) are fixed, and they result from the parameters of hardware components such as ultrasonic transducers or underwater telephones. However, two of the key OFDM signal parameters, namely the symbol duration T_OFDM and the subcarrier spacing B_OFDM, are chosen adaptively. Thus, the OFDM data transmission tests of the UAC system are preceded by measurement of the channel's impulse response and estimation of its transmission parameters, which makes it possible to match the values of B_OFDM and T_OFDM to the specific propagation conditions.
Instrumentation Used
Both the transmitter and receiver of the laboratory model of the OFDM data transmission system use laptop computers with the Matlab environment for digital signal generation and analysis. The laptop computers communicate with underwater HTL-10 telephones from Sonel Sp. z o.o. The HTL-10 was developed in 2006 for the needs of the Polish Navy as a device for underwater communication with the parameters specified in the STANAG 1074 standardisation agreement. It is a compact device enclosed in a cassette with a height of 150 mm, a width of 380 mm, and a depth of 330 mm. It performs the generation of the communication signal and the analysis of the received signals with the use of digital signal processors by Texas Instruments: a 16-bit TMS320VC5416 fixed-point processor and a TMS320C6713B (DSP) 32-bit floating-point processor. It contains multichannel analogue-to-digital converters with 16-bit resolution and a maximum sampling frequency of 250 kHz. The source of the sampling frequency is an AD9834 direct digital synthesis circuit from Analog Devices. The underwater telephone works with an NI-USB6363 external recording and generating device from National Instruments. The HTL-10 devices pass the analog signal to a hydroacoustic transducer and receive the signal from a receiving transducer. Both the transmitting and receiving transducers are omnidirectional transducers with a resonant frequency of 34 kHz. The decay of their Transmitting Voltage Response (TVR) is equal to 3 dB within a ±5 kHz range around 34 kHz [9].
Structure of the OFDM Signal
The process of OFDM signal generation in the transmitter is as follows. The input data stream is formed into complex Binary Phase Shift Keying (BPSK) constellation symbols. Each OFDM frequency-domain symbol is composed of N_s samples, of which N_b samples are BPSK symbols and the remaining N_s − N_b samples are zeros. Thus, each of the OFDM subcarriers carries an information bit as a binary phase of value π or −π rad. The ratio between N_b and N_s is equal to B/f_s, where the transmission bandwidth B is equal to 5 kHz and the sampling frequency f_s is equal to 200 kHz. The frequency-domain symbols are processed by the Inverse Fast Fourier Transform (IFFT) to obtain time-domain symbols, each of duration T_OFDM. Each of them is extended by a cyclic prefix of duration T_g equal to 1/4 of the symbol duration T_OFDM. The OFDM symbols prepared in this way, preceded by a synchronisation preamble, modulate a carrier wave of frequency f_c equal to 30 kHz, which is different from the resonant frequency of the transmitting and receiving transducers; however, for a broadband system these frequencies do not need to be precisely equal.
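A simplified Python sketch of this chain (BPSK mapping, IFFT, cyclic prefix, carrier modulation) for the 256-subcarrier configuration; it is an illustration under stated assumptions, not the system's actual Matlab implementation, and it places the BPSK symbols on the lowest IFFT bins rather than reproducing the exact band placement.

```python
import numpy as np

fs = 200_000          # sampling frequency [Hz]
B = 5_000             # transmission bandwidth [Hz]
fc = 30_000           # carrier frequency [Hz]
N_b = 256             # subcarriers carrying BPSK symbols (one configuration)
N_s = N_b * fs // B   # IFFT length, so that N_b / N_s = B / fs

def ofdm_symbol(bits):
    # Map bits to BPSK (+1/-1), place them on the first N_b IFFT bins and
    # zero the rest, then prepend a cyclic prefix of 1/4 the symbol duration
    # and modulate the 30 kHz carrier.
    assert len(bits) == N_b
    X = np.zeros(N_s, dtype=complex)
    X[:N_b] = 2 * np.asarray(bits) - 1          # BPSK constellation symbols
    x = np.fft.ifft(X)                          # time-domain OFDM symbol
    cp = x[-N_s // 4:]                          # cyclic prefix, T_g = T_OFDM / 4
    baseband = np.concatenate([cp, x])
    t = np.arange(len(baseband)) / fs
    return np.real(baseband * np.exp(2j * np.pi * fc * t))

rng = np.random.default_rng(3)
passband = ofdm_symbol(rng.integers(0, 2, N_b))
print(len(passband), "samples =", len(passband) / fs * 1e3, "ms")
```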
The subcarrier spacing B OFDM and symbol duration T OFDM are matched to the transmission parameters of the UAC channel, in which the communication is performed. The coherence bandwidth B c of the channel sets an upper limit on subcarrier spacing B OFDM . At the same time, B OFDM should be much larger than the Doppler spread ν M . The coherence time T c sets an upper limit on the symbol duration T OFDM , which, on the other hand, should be longer than the delay spread τ M of the channel [8].
Pilots Tones and Correction Coefficients
The signal transmitted in the UAC channel suffers from time dispersion, which manifests itself as selective fading of the signal spectrum. To reduce the negative effects of this phenomenon on the information detection process, some of the OFDM symbols' spectra are used as reference pilot tones to equalise the values of the subcarriers of the symbols carrying the data. Every second time-domain OFDM symbol is transmitted as a pilot symbol, and all of its subcarriers carry data that is known in the receiver. Such a pilot tone pattern is shown in Figure 1. For each subcarrier, a correction coefficient C_H is calculated on the basis of the complex value of a given subcarrier at the transmitter and receiver sides [7]:

$$C_H[k, f_n] = \frac{H_{TX}[k, f_n]}{H_{RX}[k, f_n]},$$

where $H_{TX}[k, f_n]$ and $H_{RX}[k, f_n]$ denote the complex value of subcarrier $f_n$ of the k-th OFDM symbol at the transmitter and receiver, respectively. Each $H_{RX}[k+1, f_n]$ subcarrier is corrected by the mean value of the two neighbouring C_H coefficients, i.e. those of the subcarrier in the preceding and following pilot OFDM symbols (except for $H_{RX}[K-1, f_n]$, for which only $C_H[K-2, f_n]$ is taken into account). Thus, the equalised values of the subcarriers are calculated as

$$\hat{H}_{RX}[k+1, f_n] = H_{RX}[k+1, f_n]\cdot\frac{C_H[k, f_n] + C_H[k+2, f_n]}{2}.$$

Such a pilot tone pattern allows the influence of ISI and noise on the received signal to be suppressed, and thus significantly improves the BER of data transmission.
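A toy Python sketch of this pilot-based equalisation; the indexing convention (pilots at even symbol indices) and the exact form of the correction coefficient are assumptions consistent with the description above, and the flat-channel test data are purely illustrative.

```python
import numpy as np

def equalize_with_pilots(H_rx, H_tx_pilots):
    # H_rx: received frequency-domain symbols, shape (K, N), with pilot
    # symbols at even indices k = 0, 2, 4, ... and data symbols in between.
    # H_tx_pilots: known transmitted pilot values, shape (K//2, N).
    K, N = H_rx.shape
    C = H_tx_pilots / H_rx[0::2, :]              # correction coefficients
    H_eq = H_rx.copy()
    for idx, k in enumerate(range(1, K, 2)):     # data symbols
        if idx + 1 < C.shape[0]:
            coeff = 0.5 * (C[idx] + C[idx + 1])  # mean of neighbouring pilots
        else:
            coeff = C[idx]                       # last data symbol: one pilot only
        H_eq[k] = H_rx[k] * coeff
    return H_eq

# Toy example: 6 OFDM symbols (3 pilots, 3 data), 8 subcarriers, flat channel.
rng = np.random.default_rng(4)
tx = np.sign(rng.standard_normal((6, 8))).astype(complex)   # BPSK values
channel = 0.5 * np.exp(1j * 0.3)
rx = tx * channel
eq = equalize_with_pilots(rx, tx[0::2, :])
print(np.allclose(np.sign(eq[1::2].real), np.sign(tx[1::2].real)))
```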
Error-Correction Coding
Channel coding is implemented in the OFDM system to protect information bits from errors after transmission through the communication channel. Two classical block coding schemes are used, namely Bose-Chaudhuri-Hocquenghem (BCH) codes, and Reed-Solomon (RS) codes. During the inland-water experiment, the codes of different parameters were tested to find the maximum coding rate that allows reliable transmission with a BER less than 10 −4 to be achieved.
Data Frame
Each of the OFDM data frames starts with a synchronisation preamble, which is a Pseudo-Random Binary Sequence (PRBS). It is based on an m-sequence of rank 8, which modulates the carrier frequency f c equal to 30 kHz. The sequence is repeated 20 times and its duration T synch is equal to 1.02 s. The PRBS sequence is followed by the OFDM signal of a duration of 2.5 s. Subsequent data frames contain the synchronisation sequence and the OFDM signal. The duration of each frame is equal to 3.52 s. In each frame, a constant amount of information N i is sent, which is equal to 5 kbits. Thus, the data transmission rate in the case of ECC not being used, but with the pilot tones technique, is equal to 1.42 kbps. The Bit Error Rate is calculated as BER = N e /N i , where N e is the number of incorrectly detected bits in a single transmission frame, and N i is the number of all transmitted bits.
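A quick arithmetic check of the frame timing, uncoded data rate, and BER definition quoted above (the error count N_e is a hypothetical value):

```python
# Frame timing and uncoded data rate (values taken from the text).
T_synch = 1.02        # synchronisation preamble duration [s]
T_ofdm_block = 2.5    # OFDM signal duration per frame [s]
T_frame = 3.52        # total frame duration [s]
N_i = 5_000           # information bits per frame

rate_bps = N_i / T_frame
print(f"uncoded data rate = {rate_bps:.0f} bps (~1.42 kbps)")

# Bit Error Rate for a single frame with N_e incorrectly detected bits:
N_e = 3               # hypothetical number of bit errors
print(f"BER = {N_e / N_i:.1e}")
```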
Transmission Parameters of Shallow-Water Channel
In order to estimate the transmission parameters of an inland shallow-water channel, measurement tests were conducted in Wdzydze Lake on the northern edge of the Bory Tucholskie forest complex (53°58′31″ N, 17°54′19″ E) on 5 May 2017. Wdzydze Lake is a freshwater lake. Its bottom is covered with a layer of mud and it falls steeply into the depths of the water. It is, in many respects, very similar to the Baltic Sea. A significant part of the lake area is more than 40 m deep, and the deepest area is more than 70 m deep. In terms of chemical composition, the waters of Wdzydze Lake represent the calcium bicarbonate type. The water temperature on 5 May 2017 was 15 °C. The weather was windless, it was not raining, and the water surface was calm. The transmitting stand was placed on a boat, which drifted very slowly. The receiving stand was in a measuring container at a fixed position, 50 m from the lake shore, by a floating platform. The transmitting transducer was lowered to a depth of 10 m; the water depth at this location was about 20 m. There were no objects in the water near this transducer. The receiving transducer was lowered to a depth of 4 m. The water depth at this place was 7 m [10]. The distance between the transmitter and receiver was 1035 m. This was measured with the use of a 19x HVS GPS receiver by Garmin, which was integrated with MaxSea software by TimeZero and electronic maps by Jeppesen. Such a hardware-software set ensured a distance measuring accuracy of 3 m. Figure 2 shows the positions of the transmitting and receiving stands. The water depth along the line between the positions of the transmitting and receiving stands varies from 7 to 40 m. Figure 3 shows the sound speed profile measured using a sound speed meter constructed in the Department of Sonar Systems, Gdansk University of Technology. The device uses a direct method of sound speed measurement. Its accuracy is ±0.5 m/s. The sound speed profile was needed for estimating the underwater system range. On the basis of this profile an expected minimum range of the UAC system was calculated, equal to 400 m.
During the tests, the Time-Varying Impulse Response (TVIR) was measured. In the case of a band-limited bandpass channel, the TVIR is equivalently described by a time-varying complex baseband impulse response h(t, τ), defined in a window of observation time t and delay τ, which relates the complex envelopes of the transmitted and received signals:

$$r_b(t) = \int h(t, \tau)\, s_b(t - \tau)\, d\tau = h(t, \tau) * s_b(t),$$

where $s_b(t)$ and $r_b(t)$ are the complex envelopes of the transmitted s(t) and received r(t) passband signals, and * represents the convolution operation. The impulse response h(t, τ) was measured by the correlation method with the use of a Pseudo-Random Binary Sequence (PRBS), based on an m-sequence of rank 8, whose duration T_s was equal to 51 ms. It was repeated up to 127 times to gather one TVIR. The bandwidth and the carrier frequency of the probe signal were equal to 5 kHz and 30 kHz, respectively. The sampling frequency f_s was equal to 200 kHz. Sixteen impulse responses were measured during the tests. In order to assess the time dispersion and variability of the UAC channel, a stochastic model based on the channel Space-Time-Frequency Correlation Function (STFCF) $R_h(\Delta t, \Delta f)$ is used. The STFCF is obtained as the autocorrelation function of the time-varying transfer function H(t, f) of the channel, which, in turn, is calculated as the Fourier transform of the impulse response h(t, τ). The STFCF is a function of the time and frequency differences under the assumption that the TVIR of the channel represents a wide-sense stationary uncorrelated scattering process [11]. Then, it is possible to calculate the two-dimensional scattering function

$$S(\nu, \tau) = \iint R_h(\Delta t, \Delta f)\, e^{-j 2\pi (\nu \Delta t - \tau \Delta f)}\, d\Delta t\, d\Delta f,$$

where Δt and Δf are the time and frequency differences, respectively, ν is the Doppler shift, τ is the delay, and $R_h(\Delta t, \Delta f)$ is the STFCF of the channel. An example magnitude of the TVIR and the corresponding scattering function are shown in Figure 4. The probable reflections of the transmitted signal are seen as the multipath components of the TVIR. The scattering function S(ν, τ) is the basis for the calculation of the transmission parameters: delay spread τ_M, Doppler spread ν_M, coherence time T_c, and coherence bandwidth B_c, which are used for designing the physical layer of the UAC data transmission system [8,12]. The delay spread is calculated on the basis of the Power Delay Profile (PDP) P(τ), which is obtained as the integral of S(ν, τ) over the Doppler shift ν domain. It describes the average signal power reaching the receiver as a function of delay τ, and it characterizes the time dispersion of the channel. The time dispersion can be assessed as a maximum delay spread τ_M, measured at a given threshold level T_r of P(τ), or as an rms value

$$\tau_{rms} = \sqrt{\frac{\int (\tau - \bar{\tau})^2 P(\tau)\, d\tau}{\int P(\tau)\, d\tau}}, \quad \text{where } \bar{\tau} = \frac{\int \tau\, P(\tau)\, d\tau}{\int P(\tau)\, d\tau}.$$

The values of delay spread averaged over the results of the analysis of the 16 measured impulse responses are as follows. The maximum delay spread was measured as the time duration between the first and last multipath components with values higher than a threshold level of 0.1 or 0.01 of the maximum value of P(τ), which corresponds to a decrease of 10 dB or 20 dB from the maximum value, respectively. In the case of a threshold level of 0.1, the maximum delay spread was equal to 8.67 ms, and in the case of a threshold level of 0.01, it was equal to 27.13 ms. The rms value of the delay spread was equal to 17.12 ms. These are values typical for shallow-water channels with multipath propagation.
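As an illustrative sketch (not the authors' processing code), the following Python snippet derives the PDP, the DPSD, and the threshold-based and rms spreads from a toy scattering function; the toy channel shape is an assumption chosen only to exercise the formulas above.

```python
import numpy as np

def dispersion_parameters(S, tau, nu, threshold=0.1):
    # S: scattering function magnitude, shape (len(nu), len(tau)).
    # PDP and DPSD are obtained by integrating S over the Doppler and
    # delay axes, respectively.
    pdp = np.trapz(S, nu, axis=0)
    dpsd = np.trapz(S, tau, axis=1)

    def width_at(profile, axis_vals, thr):
        # Maximum spread: span between the first and last samples that
        # exceed thr * max(profile).
        idx = np.flatnonzero(profile >= thr * profile.max())
        return axis_vals[idx[-1]] - axis_vals[idx[0]]

    # RMS delay spread from the normalised PDP.
    p = pdp / np.trapz(pdp, tau)
    tau_mean = np.trapz(tau * p, tau)
    tau_rms = np.sqrt(np.trapz((tau - tau_mean) ** 2 * p, tau))
    return {
        "tau_max": width_at(pdp, tau, threshold),
        "tau_rms": tau_rms,
        "nu_max": width_at(dpsd, nu, threshold),
    }

# Toy scattering function: exponential PDP x Gaussian Doppler spectrum.
tau = np.linspace(0, 0.05, 500)          # delay axis [s]
nu = np.linspace(-5, 5, 201)             # Doppler axis [Hz]
S = np.exp(-tau / 0.005)[None, :] * np.exp(-0.5 * (nu / 1.0) ** 2)[:, None]
print(dispersion_parameters(S, tau, nu))
```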
The integral of S(ν, τ) over the delay τ domain gives the Doppler Power Spectral Density (DPSD) P(ν). The Doppler spread of the transmitted signal is calculated similarly to the delay spread, that is, as a maximum Doppler spread ν M or rms Doppler spread ν rms . The maximum Doppler spread measured as a width of P(ν) at a threshold level of 0.1 of its maximum value was equal to 2.23 Hz. Changing the threshold level to 0.01 of the maximum value of P(ν) results in ν M equal to 3.5 Hz. The rms value of the Doppler spread was equal to 0.83 Hz. This is a small value of Doppler spread caused by the slow drift of the boat with the transmitting transducer. Assuming a carrier frequency of the transmitted signal equal to 30 kHz, a Doppler shift of 3.5 Hz corresponds to a motion of 0.17 m/s. The exemplary PDP and DPSD, calculated on the basis of the impulse response presented in Figure 4, are shown in Figure 5.
Averaging R_h(∆t, ∆f) over the ∆t domain yields the Space-Frequency Correlation Function (SFCF) R(∆f), and averaging over the ∆f domain yields the Space-Time Correlation Function (STCF) R(∆t) (Figure 6). The coherence bandwidth B_c, within which the signal amplitude spectrum is flat and its phase characteristic is linear, is calculated as the width of R(∆f) at a given threshold level T_r; in OFDM radiocommunication systems the threshold level is usually equal to 0.5 or 0.9 [8]. The coherence time T_c, specifying the time interval over which the TVIR remains approximately constant, is calculated in the same manner on the basis of R(∆t). For the inland-water experiment, both the coherence bandwidth B_c and the coherence time T_c were calculated at threshold levels of 0.5, 0.7, and 0.9 of the maximum value of the corresponding correlation function. The obtained coherence bandwidth values were equal to 124.84 Hz, 68.54 Hz, and 36.74 Hz, respectively. These values are typical for shallow-water channels with multipath propagation, whose coherence bandwidth is usually on the order of several dozen Hz. The coherence time T_c was equal to 3.03 s, 1.16 s, and 0.36 s for the three threshold levels, respectively. The values of all transmission parameters are summarized in Table 1.
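A short sketch of the threshold-width measurement used above; R is a correlation-function magnitude sampled on the axis x (frequency difference for B_c, time difference for T_c), and the normalization step is an assumption.

```python
import numpy as np

def coherence_width(R, x, threshold=0.9):
    """Width of |R| above `threshold` of its maximum; x in Hz gives B_c, x in s gives T_c."""
    Rn = np.abs(R) / np.abs(R).max()
    above = np.where(Rn >= threshold)[0]
    return x[above[-1]] - x[above[0]]
```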
OFDM Transmission Tests
In order to protect the signal transmitted in the UAC system against ISI and ICI, the physical layer of data transmission should be matched to the transmission parameters of the channel. The subcarrier spacing should be smaller than the coherence bandwidth and much larger than the Doppler spread; at the same time, the duration of the OFDM symbol should be longer than the delay spread and much shorter than the coherence time of the channel [8]. The transmission parameter values described in Section 3 differ significantly depending on the method of their determination. For example, the coherence bandwidth determined as the width of the SFCF at the threshold level of 0.5 of its maximum value is almost four times larger than the coherence bandwidth determined at the threshold level of 0.9. Depending on the particular set of transmission parameters, different subcarrier spacing and symbol duration values satisfy the physical-layer design rules. Therefore, four OFDM modulation schemes were chosen for the inland-water experiment, differing in the number of subcarriers: 64, 128, 256, or 512. The number of subcarriers determines the subcarrier spacing B_OFDM and the symbol duration T_OFDM, as shown in Table 2. The reliability of data transmission with an OFDM waveform configured in this way was tested during the inland-water experiment. The OFDM signal bandwidth B was the same as for the PRBS probe signal, that is, 5 kHz around the carrier frequency f_c of 30 kHz. For each OFDM parameter configuration, 20 transmission tests were performed and the mean BER was calculated (Table 2). The lowest BER was obtained for 256 subcarriers, with a subcarrier spacing of 19.53 Hz and a symbol duration of 51.20 ms. For 512 subcarriers, the BER was significantly higher than for the other configurations. As can be seen, with pilot tones as the only ISI-suppression technique, it is possible to obtain data transmission with a BER below 10^−1. The data transmission rate in this case is equal to 1.42 kbps.
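A small sketch of the relation between the number of subcarriers and the OFDM parameters quoted above (B = 5 kHz); guard intervals and pilot overhead are ignored here, which is an assumption about how Table 2 was built.

```python
B = 5000.0  # OFDM signal bandwidth [Hz]
for n_sub in (64, 128, 256, 512):
    spacing = B / n_sub   # subcarrier spacing B_OFDM [Hz]
    symbol = n_sub / B    # symbol duration T_OFDM [s]
    print(f"{n_sub:4d} subcarriers: {spacing:6.2f} Hz, {symbol * 1e3:6.2f} ms")
# 256 subcarriers give 19.53 Hz and 51.20 ms, matching the values quoted above.
```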
The next experimental tests were conducted using BCH and Reed-Solomon Error-Correction Coding with different numbers of information bits L_i and a codeword length L_msg = 2^m − 1, where m ∈ {5, 6, 7, 8}. The resulting BER values as a function of the code rate C_r = L_i/L_msg are shown in Figure 7. From among the ECC parameter configurations, those were chosen that allow a BER below 10^−2, 10^−3, or 10^−4 to be obtained with the lowest possible redundancy, and thus the highest possible code rate. The transmission rates obtained with these coding parameters are shown in Table 3.
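A minimal sketch of the code-rate and effective-throughput bookkeeping implied above; the (L_msg, L_i) pair below is only an illustrative example for m = 8, not a configuration reported in the paper, and the raw rate is the pilot-tone-only value quoted earlier.

```python
L_msg, L_i = 255, 117            # illustrative codeword/message lengths for m = 8 (assumed)
C_r = L_i / L_msg                # code rate
raw_rate = 1.42e3                # uncoded bit rate from the pilot-tone-only configuration [bps]
effective_rate = raw_rate * C_r  # throughput after coding [bps]
print(f"C_r = {C_r:.3f}, effective rate ~= {effective_rate:.0f} bps")
```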
Discussion
During the tests without the ECC technique, the lowest BER was obtained for 256 OFDM subcarriers, with a subcarrier spacing of 19.53 Hz and a symbol duration of 51.20 ms. Such a subcarrier spacing is smaller than the coherence bandwidth B_c measured at the threshold level of 0.9 of the maximum value of the SFCF of the channel. In the 64-subcarrier case, for which the BER is almost twice as high as for 256 subcarriers, the subcarrier spacing is larger than the B_c measured at the threshold levels of 0.7 and 0.9, although still smaller than the value measured at 0.5. Thus, the threshold level of 0.5 of the maximum value of the SFCF appears to be too low an indicator of strong correlation between the frequency components of the channel transfer function, and it should not be used to choose the subcarrier spacing in a UAC OFDM system.
In the case of 512 subcarriers, the BER was significantly higher than for the other subcarrier configurations. With such a number of subcarriers located in the 5 kHz band at a sampling rate of f_s = 200 kHz, the DFT used for OFDM symbol processing may have insufficient resolution. Moreover, although the subcarrier spacing of 9.77 Hz is greater than each of the three Doppler spread values shown in Table 1, the margin is too small, and the Doppler shift may affect the OFDM signal spectrum, causing ICI. It therefore seems advisable to use the Doppler spread measure that attains the highest values, i.e., the maximum spread at the threshold level of −20 dB relative to the maximum value of the Doppler Power Spectral Density of the channel, and to make the subcarrier spacing much larger than the maximum Doppler spread calculated in this way.
In each of the OFDM configurations, the OFDM symbol duration was longer than the maximum delay spread measured at the threshold level of −10 dB relative to the maximum value of the Power Delay Profile. In the case of 64 subcarriers, however, the symbol duration was shorter than the maximum delay spread measured at the threshold level of −20 dB and shorter than the rms value of the delay spread of the channel. In the case of 256 subcarriers, for which the lowest BER was achieved, the symbol duration was longer than the maximum delay spread measured at the threshold level of −20 dB. It therefore seems worth using the latter measure of the time dispersion of the UAC channel when designing the physical layer of the OFDM data transmission system. In all subcarrier configurations, the OFDM symbol duration was much shorter than the coherence time of the channel.
Implementing the ECC technique made it possible to obtain a BER below 10^−4 in the case of 64, 128, and 256 subcarriers. Only for 256 subcarriers was this possible with both types of ECC, i.e., BCH codes and Reed-Solomon codes. The highest transmission rate with a BER below 10^−4 was obtained using a Reed-Solomon code and was equal to 653.63 bps. Thus, applying the ECC reduces the transmission rate by more than half compared with the configuration without ECC, but brings a significant gain in reliability.
Conclusions
The communication tests performed during the shallow inland-water experiment with the laboratory model of an OFDM system have shown that it is possible to achieve reliable data transmission with a Bit Error Rate below 10^−4. The tested UAC channel was about 1 km long, with a depth varying along this distance from 20 to 40 m. It was thus a typical case of a very shallow channel characterised by numerous reflections of the transmitted signal from the bottom and the surface of the water. Obtaining such a low BER required the use of two ISI-suppression techniques, namely pilot tones and Error-Correction Coding. Achievable BER and data transmission rates were measured for four different configurations of the OFDM signal parameters, selected to match the physical layer of data transmission to the transmission parameters of the channel, calculated on the basis of the measured Time-Varying Impulse Responses.
The lowest BER was obtained for a subcarrier spacing smaller than the coherence bandwidth measured at the threshold level of 0.9 of the maximum value of the Space-Frequency Correlation Function (SFCF) of the channel. At the same time, the subcarrier spacing was much larger than the maximum Doppler spread measured at the threshold level of −20 dB relative to the maximum value of the Doppler Power Spectral Density. The OFDM symbol duration was longer than the delay spread of the channel measured at the threshold level of −20 dB relative to the maximum value of the Power Delay Profile.
In the case of the largest number of subcarriers, i.e., 512, reliable data transmission could not be obtained. For the smallest number of subcarriers, i.e., 64, a data transmission rate of 72.42 bps was obtained using a BCH code with a rate of 0.051. For 128 and 256 subcarriers, transmission rates of hundreds of bps were achieved, which is sufficient for most applications of reliable underwater communications, such as submarine-to-submarine or submarine-to-surface-platform communications, monitoring of bottom installations, or AUV remote control.
Funding: This research received no external funding.
Acknowledgments: Special thanks to Jan Schmidt and Krzysztof Liedtke for technical support in the experimental tests.
Conflicts of Interest:
The author declares no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: ADC
| 6,998.6 | 2020-03-23T00:00:00.000 | ["Physics"] |
HDAM: Heuristic Difference Attention Module for Convolutional Neural Networks
The attention mechanism is one of the most important forms of prior knowledge for enhancing convolutional neural networks. Most attention mechanisms are bound to the convolutional layer and use local or global contextual information to recalibrate the input, which is a popular design strategy. Global contextual information helps the network consider the overall distribution, while local contextual information is more general; in both cases the contextual information makes the network attend to the mean or maximum value of a particular receptive field. Unlike most attention mechanisms, this article proposes a novel attention mechanism, the heuristic difference attention module HDAM. HDAM's input recalibration is based on the difference between the local and global contextual information rather than on mean or maximum values alone. At the same time, to give different layers more suitable local receptive field sizes and to increase the flexibility of the local receptive field design, we use a genetic algorithm to heuristically produce local receptive fields. First, HDAM extracts the mean values of the global and local receptive fields as the corresponding contextual information. Then the difference between the global and local contextual information is calculated. Finally, HDAM uses this difference to recalibrate the input. In addition, we use the heuristic search ability of the genetic algorithm to find the local receptive field size of each layer. Our experiments on CIFAR-10 and CIFAR-100 show that HDAM achieves higher accuracy than other attention mechanisms while using fewer parameters. We implement HDAM with the Python library PyTorch, and the code and models will be made publicly available.
Introduction
Convolutional Neural Networks (CNNs) [1] have developed remarkably over the past 10 years. Owing to their efficient representations, CNNs have achieved outstanding results in multiple downstream tasks, such as classification [2], detection [3], and segmentation [4], and efforts to improve their representation capabilities have never stopped. For example, in the early days of CNNs, researchers found that the depth [5] of the network has a great impact on performance, because a deeper network captures richer high-dimensional information. However, beyond a certain depth, not only does the computational cost increase and inference slow down, but the performance of the network also degrades severely due to vanishing or exploding gradients [6]. Besides depth, the width of the network is another important factor: similar to increasing the depth, increasing the width [7,8] can also improve the representation ability of the network, but it likewise increases computation and extends inference time. Cardinality [9] diversifies the style of convolution within the same layer and can significantly improve the network with no or few additional parameters. All of the above improve performance by stacking, whereas the skip connection [10] changes the way information is transmitted; its advantage is that no additional parameters are required, gradient explosion and vanishing are alleviated, and the convergence speed of the network improves. However, additional storage is needed to keep the skip-connected features during inference. Although the above methods can improve the performance of CNNs to a certain extent, they consume considerable computing resources such as memory and floating-point operations. The attention mechanism, in contrast, can improve CNNs with a small number of parameters and no additional storage requirements. More importantly, the attention mechanism simulates the human visual system, which pays more attention to meaningful information than to meaningless information.
Among these, SENet [11] is one of the most representative attention networks. SENet [11] proposes a channel attention mechanism that computes the global average of each channel and uses it as contextual information (CI) to recalibrate the input, which emphasizes the global feature of each channel. On this basis, CBAM [12] and BAM [13] additionally consider spatial attention and add the global maximum value to enrich the global CI, so that the network can more accurately find what and where to focus on. To make this attention more general, GENet [14] extracts local CI instead of global CI. Building on GENet [14], SPANet [15] extracts local CI at different spatial scales.
These attention strategies use few parameters to enhance the performance of CNNs. They recalibrate the input by multiplying the input pixels with global or local CI embeddings, integrating the CI into the network's information flow. Therefore, most attention mechanisms use either global CI or local CI. Revisiting the meaning of the two, global CI represents the average value over the entire feature map and reflects the trend of the overall pixel values, whereas local CI describes the average value within a local receptive field, i.e., within a small area of the sample. The two are different, and the animal visual system pays attention to this difference: the difference in color distribution between objects is a prerequisite for an observer to distinguish them and pay special attention. Current attention strategies do not take this into consideration. Therefore, this paper proposes a novel attention strategy based on the difference between the global and local CI. At the same time, in order to design a more reasonable local receptive field, we adopt a heuristic strategy, introducing a genetic algorithm (GA) [16] to perform a heuristic search for the size of the local receptive field.
Specifically, we first extract the global and local CI, obtain their embeddings through a shared two-layer multi-layer perceptron (MLP) [17], calculate the difference between the global and local embeddings, and use this difference to recalibrate the input. At the same time, we encode the combination of local receptive field sizes of all layers in the network and search for the best combination through the GA [16]. We validate HDAM on CIFAR-10 [18] and CIFAR-100 [18], using accuracy and the number of parameters as evaluation metrics. The results show that HDAM surpasses various current state-of-the-art network models, including classic networks, attention networks, and networks obtained by neural architecture search (NAS) [19,20].
Related Work
In this part, we will introduce the work related to HDAM from two aspects: Convolutional neural network and Attention mechanism.
Convolutional neural network
In the first decade of the 21st century, the development of CNNs was at a low ebb, limited by hardware. With the gradual increase in computing power and the success of AlexNet [23] in 2012, CNN research flourished, and CNNs have since been the main backbone of computer vision, making remarkable achievements. After AlexNet, researchers continued to improve the performance of CNNs. GoogleNet [8] and VGG [7] increased the depth of CNNs and found that depth is an important factor affecting performance. However, training such models must be carefully designed, for example the initialization and learning-rate settings, otherwise it is difficult to achieve the desired performance. Batch normalization (BN) [21] attributes this to the fact that each convolutional layer fits an input whose distribution changes at every inference step, i.e., the input exhibits an internal covariate shift, and therefore proposes to normalize the data of each batch; this makes training easier and yields compelling performance. Although BN makes training easier, the gradient explosion and vanishing caused by increasing depth still limit the potential of CNNs. The skip connection proposed by ResNet [10] largely solves this problem, because it alleviates the gradient attenuation caused by the chain rule, and ResNet [10] has provided an efficient topology template for later CNN designs. In addition to depth, WideResNet [5], built on ResNet [10], shows that expanding the width is also an effective means of improving CNNs; depth and width are thus important hyperparameters affecting CNN performance. The convolution operation itself affects performance from another perspective: depthwise separable convolution [24] uses fewer parameters to achieve accuracy similar to standard convolution and is mainly used on mobile devices, while ResNeXt [9] uses multiple convolution styles in the same convolutional layer and greatly improves performance with no or few additional parameters. Different from the above methods, current CNN design focuses mainly on performance-improvement strategies based on the attention mechanism, and this paper also proposes a new type of attention network.
Attention mechanism
The attention mechanism simulates how the animal visual system works, i.e., it pays attention to the more informative parts. The performance of a model can thereby be improved with no or few additional parameters. The attention mechanism mainly extracts the CI of the feature maps and then multiplies the CI back into the network to increase the network's sensitivity to this information. SENet [11] is a typical attention network: it extracts the result of global average pooling as CI. SPANet [15] and GENet [14] extract the local mean as the local CI, which makes the extraction approach based on global CI more general. In addition to using the mean value as CI, CBAM [12] and BAM [13] also use the maximum value as a component of the CI. Different from all existing attention mechanisms, we extract global and local CI at the same time, seek the difference between the two, and feed this difference back into the network. At the same time, in order to find the most suitable local receptive field size, we introduce the heuristic search of the GA [16] into the design of attention mechanisms for the first time, generating the most suitable combination of local receptive field sizes.
Proposed Algorithm
In this part, we discuss HDAM in detail. HDAM mainly includes four parts, namely global and local CI extraction, embedding and difference calculation, input recalibration, and the search for the best local receptive field. To explain HDAM precisely, we provide a detailed formula derivation.
Contextual information extraction
CI extraction is an important operation of the attention mechanism. The CI represents a condensation of the information within a specific receptive field and is the basis for the embedding calculation.
We use the mean to represent the CI of a receptive field. First, we divide the input into non-overlapping patches, each patch being a local receptive field, and compute the channel-wise average value of each receptive field, which serves as its CI on each channel. Given Input ∈ R^(C×H×W), we first reshape Input into R^(P×C×Ĥ×Ŵ), where P denotes the number of local receptive fields (patches) and Ĥ and Ŵ denote the height and width of a patch, so that P = HW/(ĤŴ). With the addition of the global receptive field, the final receptive field matrix RF lies in R^((P+1)×C×Ĥ×Ŵ), and the CI is computed as CI = Mean(RF), where Mean(·) calculates the spatial mean of RF and CI ∈ R^((P+1)×C). If RF is the global receptive field, the result is the global CI; otherwise it is the local CI.
Embedding and difference calculation
The embedding calculation maps the extracted CI. To control the number of parameters, we use a shared two-layer MLP to map the extracted global and local CI, with a ReLU [22] activation after the first layer and a softmax activation after the second layer (for clarity, the bias is ignored): Embedding = Softmax(W_2 ReLU(W_1 CI)), where Embedding ∈ R^((P+1)×C) and W_1 and W_2 denote the weights of the two-layer MLP. The Embedding consists of the Local Embedding (LE) ∈ R^(HW/(ĤŴ)×C) and the Global Embedding (GE) ∈ R^(1×C). Finally, the difference coefficient (DC) is obtained as the cross entropy between the global embedding and each local embedding, DC_p = CrossEntropy(GE, LE_p) for p = 1, …, P, where DC ∈ R^(HW/(ĤŴ)×1).
Recalibration
The recalibration multiplies the difference coefficients with the input. This process lets the difference between the global and local CI flow into the network during inference, enriching the subsequent feature processing, so that the gradient carries the difference information when optimizing the network parameters. Each DC is broadcast into a matrix with the same shape as its corresponding local receptive field, the resulting matrix is multiplied element-wise by the Input, and the Output is finally reshaped back to R^(C×H×W): Output_p = DC_p · Input_p, for p = 1, …, P.
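A minimal PyTorch sketch of the HDAM forward pass described above is given below. It is not the authors' released code: the reduction ratio of the shared MLP, the direction of the cross entropy (global embedding as the reference distribution), and the nearest-neighbour broadcasting are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HDAM(nn.Module):
    def __init__(self, channels, patch_size, reduction=16):
        super().__init__()
        self.patch = patch_size                      # local receptive field size in pixels
        hidden = max(channels // reduction, 4)       # reduction ratio is an assumption
        self.mlp = nn.Sequential(                    # two-layer shared MLP
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Local CI: channel-wise mean of each non-overlapping patch -> (B, C, H/p, W/p)
        local_ci = F.avg_pool2d(x, self.patch)
        # Global CI: channel-wise mean of the whole feature map -> (B, C, 1, 1)
        global_ci = x.mean(dim=(2, 3), keepdim=True)

        # Shared MLP + softmax over channels gives the local and global embeddings.
        le = F.softmax(self.mlp(local_ci.flatten(2).transpose(1, 2)), dim=-1)   # (B, P, C)
        ge = F.softmax(self.mlp(global_ci.flatten(2).transpose(1, 2)), dim=-1)  # (B, 1, C)

        # Difference coefficient: cross entropy between global and local embeddings (assumed direction).
        dc = -(ge * torch.log(le + 1e-8)).sum(dim=-1)                            # (B, P)

        # Broadcast each coefficient over its patch and recalibrate the input.
        dc = dc.view(b, 1, h // self.patch, w // self.patch)
        dc = F.interpolate(dc, scale_factor=self.patch, mode="nearest")          # (B, 1, H, W)
        return x * dc
```

In a ResNet-50 residual block, such a module would be applied to the block input with the patch size decoded from the searched array, e.g. HDAM(channels=256, patch_size=4); the value 4 here is purely illustrative.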
Local receptive field search
In this part, we elaborate on how the heuristic search of the GA works in the local receptive field design, using ResNet-50 [10] as the base model for the explanation. ResNet-50 [10] consists of four units, and each unit consists of several residual blocks; the numbers of residual blocks in the four units are three, four, six, and three, respectively. We apply HDAM only to the input of each residual block. The input spatial sizes of the residual blocks in these four units are 16, 16, 8, and 4, respectively. Taking the first unit as an example, because the input spatial size of each residual block is 16, the local receptive field size of each block can be chosen from [1/16, 1/8, 1/4, 1/2, 0], where the numbers denote the proportion of the input spatial size and 0 means that HDAM is not used; the corresponding ranges for the remaining units are [1/16, 1/8, 1/4, 1/2, 0], [1/8, 1/4, 1/2, 0], and [1/4, 1/2, 0].
We use an array of length 16, i.e., the total number of residual blocks in all units, to represent the combination of local receptive field sizes of all blocks in ResNet-50 [10]; Figure 1 shows an example. We treat each combination of local receptive fields as an individual and use the GA [16] to search for the best combination. Algorithm 1 shows the entire process of the GA [16].
Dataset
We conduct our experiments on the two most popular datasets, CIFAR-10 [18] and CIFAR-100 [18]. The CIFAR [18] dataset was collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton and is divided into two subsets, CIFAR-10 [18] and CIFAR-100 [18], according to the number of categories. Each subset contains 60,000 images of size 32 × 32, including 50,000 training images and 10,000 test images. The difference is that CIFAR-10 [18] contains 10 categories of images, each with 6,000 images, of which 5,000 are used for training and 1,000 for testing, whereas CIFAR-100 [18] contains 100 categories, each with 600 images, of which 500 are used for training and 100 for testing.
Algorithm 1 Local receptive field search
Require: The population size N, the maximal generation number T, the crossover probability µ, the mutation probability ν.
1: P_0 ← Initialize N arrays as a population using the encoding strategy;
2: Decode each individual and generate the corresponding CNN (ResNet-50 [10]); train and validate each CNN, then take the highest accuracy as the fitness of each individual in P_0;
3: t = 0;
4: while t < T do
5:   Q_t ← ∅;
6:   while |Q_t| < N do
7:     p_1, p_2 ← Select two arrays from P_t with binary tournament selection;
8:     q_1, q_2 ← Generate two arrays from p_1 and p_2 by the crossover operation with probability µ and the mutation operation with probability ν;
9:     Q_t ← Q_t ∪ q_1 ∪ q_2;
10:  end while
11:  Train and evaluate the CNNs in Q_t;
12:  P_{t+1} ← Select N arrays from P_t ∪ Q_t by environmental selection;
13:  t ← t + 1;
14: end while
Ensure: The architecture of ResNet-50 [10] with the best array.
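A hedged Python sketch of the search in Algorithm 1 is shown below. The fitness evaluation (training and validating the decoded ResNet-50 with HDAM) is abstracted into an `evaluate` callback, the per-block option lists follow the ranges given in the previous section, and the environmental selection is sketched here as simple elitist truncation, which is an assumption rather than the authors' exact operator.

```python
import random

OPTIONS = ([[1/16, 1/8, 1/4, 1/2, 0]] * 3 + [[1/16, 1/8, 1/4, 1/2, 0]] * 4
           + [[1/8, 1/4, 1/2, 0]] * 6 + [[1/4, 1/2, 0]] * 3)  # 16 residual blocks

def random_individual():
    return [random.choice(opts) for opts in OPTIONS]

def tournament(pop, fit):
    a, b = random.sample(range(len(pop)), 2)          # binary tournament selection
    return pop[a] if fit[a] >= fit[b] else pop[b]

def crossover(p1, p2, mu=0.9):
    if random.random() > mu:
        return p1[:], p2[:]
    cut = random.randrange(1, len(p1))                # one-point crossover (assumed)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(ind, nu=0.2):
    return [random.choice(opts) if random.random() < nu else g
            for g, opts in zip(ind, OPTIONS)]

def search(evaluate, N=20, T=20):
    pop = [random_individual() for _ in range(N)]
    fit = [evaluate(ind) for ind in pop]              # validation accuracy of the decoded CNN
    for _ in range(T):
        offspring = []
        while len(offspring) < N:
            q1, q2 = crossover(tournament(pop, fit), tournament(pop, fit))
            offspring += [mutate(q1), mutate(q2)]
        fit_off = [evaluate(ind) for ind in offspring]
        merged = sorted(zip(pop + offspring, fit + fit_off), key=lambda t: t[1], reverse=True)
        pop, fit = [p for p, _ in merged[:N]], [f for _, f in merged[:N]]
    return pop[0]                                     # best local receptive field combination
```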
Peer Competitor
To illustrate the superior performance of HDAM, we select a variety of CNN models for comparison, including classic CNNs, CNNs found by NAS, and CNNs with mainstream attention mechanisms. The architectures found by NAS include those obtained by semi-automatic NAS and by fully automatic NAS. The classic CNN structures include DenseNet [25], Maxout [26], VGG [7], Network in Network [27], Highway Network [28], All-CNN [29], and FractalNet [30]. The structures found by semi-automatic NAS include Genetic CNN [31], EAS [32], and Block-QNN-S [33]. The structures found by fully automatic search include Large-scale Evolution [34], CGP-CNN [35], NAS [36], MetaQNN [37], and AE-CNN [38]. The CNNs based on the attention mechanism include SE-Net [11] and CBAM [12]. Except for the attention-based CNNs, we directly use the experimental results reported in the original papers, because these results are usually the best obtainable; the attention-based CNNs we retrain ourselves.
Parameter Settings
We use ResNet-50 [10] as the base model in which HDAM is embedded. Given our computing resources, two NVIDIA 2080Ti graphics processing units (GPUs), we set the population size to 20 and the maximal number of generations to 20. The crossover and mutation probabilities are set to 0.9 and 0.2, respectively. We use SGD with momentum as the optimizer, with the momentum and weight decay set to 0.9 and 5e-4, respectively. A total of 250 epochs is used to train the individuals, the batch size is 128, and the learning rate schedule is shown in Table 1. The training data are augmented following [39]: random cropping pads four zeros on all borders of the image and then randomly crops the image to a size of 32 × 32.
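A sketch of the CIFAR training pipeline implied above (4-pixel zero padding followed by a random 32 × 32 crop); the horizontal-flip step is an assumption, as it is commonly paired with this cropping scheme but is not stated explicitly in the text.

```python
import torchvision.transforms as T
from torchvision.datasets import CIFAR10

train_tf = T.Compose([
    T.RandomCrop(32, padding=4),   # pad four zeros on every border, then crop 32x32
    T.RandomHorizontalFlip(),      # assumed additional augmentation
    T.ToTensor(),
])
train_set = CIFAR10(root="./data", train=True, download=True, transform=train_tf)
```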
Experiment Results
To avoid the influence of chance, each experiment is conducted 5 times and the average of these 5 runs is taken as the final result. In addition to accuracy, the number of parameters is used as an evaluation index. Table 2 shows the experimental results of HDAM on the two datasets. The first column of the table gives the model names, the second and third columns give the accuracy on CIFAR-10 [18] and CIFAR-100 [18] for each model, the fourth column gives the number of parameters of each model, and the last column gives the model category: hand-crafted, semi-automatic, or fully automatic. '-' means that the corresponding model has no public record.
The experimental results show that HDAM obtains the best accuracy of 96.10 on CIFAR-10 [18] and the highest accuracy of 79.79 on CIFAR-100 [18]. The accuracy of HDAM on CIFAR-10 [18] is 1.32 higher than the best hand-designed classic CNN, 0.33 higher than the best CNN generated by semi-automatic NAS, 0.4 higher than the best CNN generated by fully automatic NAS, and 0.35 higher than the best attention-based CNN. On CIFAR-100 [18], HDAM is 2.09 higher than the best hand-designed CNN, 0.44 higher than the best CNN generated by semi-automatic NAS, 0.64 higher than the best CNN generated by fully automatic NAS, and 0.53 higher than the best attention-based CNN. In terms of the number of parameters, HDAM has about half the parameters of the attention networks SE-ResNet-101 [11] and CBAM-ResNet-101 [12], which means that HDAM saves nearly half of the parameters while achieving higher performance.
Conclusion and Future Work
We propose a new attention module, HDAM, based on heuristic search and on the difference between the local and global CI. The module calculates the global and local CI simultaneously but, unlike any previous attention mechanism, does not use the local or global CI directly to recalibrate the input; instead, it calculates the difference between the two and recalibrates the input with this difference. In addition, to design a more reasonable local receptive field size, we introduce heuristic search into attention mechanism design for the first time: we encode the local receptive field of each convolutional layer into individuals and use the GA to search for the most suitable combination of local receptive fields. We use ResNet-50 as the base model to embed HDAM, test it on CIFAR-10 and CIFAR-100, and compare it with four types of CNNs, including classic and state-of-the-art models. The results show that HDAM surpasses all the above models on both CIFAR-10 and CIFAR-100; compared with the most popular attention-based models, HDAM obtains higher accuracy with nearly half of the parameters. In future work, we will use weight inheritance to reduce the time spent searching for local receptive fields.
Table 2: Comparison between the proposed HDAM and the state-of-the-art peer competitors in terms of classification accuracy and the number of parameters on the CIFAR-10 [18] and CIFAR-100 [18] datasets.
Figure 1 :
Figure 1: An encoded individual. The numbers in the dashed lines indicate other size options in a residual block.
| 4,753 | 2022-02-19T00:00:00.000 | ["Computer Science"] |
Carbon-Bonded Alumina Filters Coated by Graphene Oxide for Water Treatment.
The aim of this paper is to prepare nano-functionalized ceramic foam filters from carbon-bonded alumina. The carbon-bonded filters were produced via the Schwartzwalder process using a two-step approach, and the prepared ceramic foam filters were further coated with graphene oxide. The graphene oxide was prepared by the modified Tour method. The C/O ratio of the graphene oxide was evaluated by XPS, EDS, and elemental analysis (EA); the amount and type of individual oxygen functionalities were characterized by XPS and Raman spectroscopy; the microstructure was studied by TEM; and XRD was used to evaluate the interlayer distance. In the next step, the filters were coated with graphene oxide by dip-coating. After drying, the prepared composite filters were used for the purification of water containing lead, zinc, and cadmium ions. The sorption efficiency was very high, suggesting the potential use of these materials for the removal of heavy metals from wastewater.
Experimental Procedures
The graphene oxide (GO) suspension was prepared according to the modified Tour's method. A mixture of concentrated sulfuric acid and phosphoric acid in a volume ratio of 9:1 (360:40 ml) was cooled to 5 °C. Next, graphite (3.0 g) and subsequently potassium permanganate (18.0 g) were added. The exothermic reaction heated the mixture to ≈20 °C. The reaction mixture was stirred and then heated to 50 °C for 1 h. The mixture was then cooled to 20 °C and poured onto 500 g of ice. After the ice dissolved, 30% hydrogen peroxide (50 ml) was added to remove the remaining unreacted potassium permanganate and manganese dioxide. The obtained GO was then purified by repeated centrifugation and redispersion in deionized water until a negative test for sulfate ions with Ba(NO3)2 was achieved. After the centrifugation, the final GO slurry contained ≈5 g of graphene oxide in 250 ml of water, which corresponds to a concentration of 20 g/L.
Methods
The morphology was investigated using scanning electron microscopy (SEM) with a field emission gun (FEG) electron source (Tescan Lyra dual-beam microscope). Elemental composition and mapping were determined using an energy-dispersive spectroscopy (EDS) analyzer (X-MaxN) with a 20 mm² SDD detector (Oxford Instruments, High Wycombe, UK) and AZtecEnergy software (Oxford Instruments, High Wycombe, UK). For the measurements, the samples were placed on a carbon conductive tape. SEM and SEM-EDS measurements were carried out with a 10 kV electron beam.
X-ray fluorescence (XRF) analysis was performed with an ARL PERFORM'X sequential WD-XRF spectrometer (Thermo Scientific, Waltham, MA, USA) equipped with an Rh-anode end-window X-ray tube type 4 GN fitted with a 50 μm Be window. All peak intensity data were collected with the software Oxsas (Thermo Scientific, Waltham, MA, USA) in a vacuum. The generator settings and collimator-crystal-detector combinations were optimized for all 82 measured elements, with an analysis time of 6 s per element. The obtained data were evaluated by the standardless software Uniquant 5 integrated in Oxsas. The analyzed powders were pressed into pellets about 5 mm thick with a diameter of 40 mm without any binding agent and covered with a 4 μm supporting polypropylene (PP) film.
Scanning transmission electron microscopy (STEM) was performed with a Tescan Lyra dual-beam microscope equipped with an FEG electron source and a STEM sample holder. For the measurements, the sample suspension was drop-cast onto a 200 mesh Cu TEM grid and dried in a vacuum oven (50 °C). STEM measurements were carried out with a 30 kV electron beam.
Combustion elemental analysis (EA) was performed using a PE 2400 Series II CHNS/O Analyzer (Perkin Elmer, Waltham, MA, USA). The instrument was used in CHN operating mode to convert the sample elements to simple gases (CO2, H2O, and N2). The PE 2400 Analyzer automatically performed combustion, reduction, homogenization of the product gases, separation, and detection. An MX5 microbalance (Mettler Toledo, Columbus, OH, USA) was used for precise weighing of the samples (1.5-2.5 mg per sample). The internal calibration was performed with N-phenyl urea.
Raman spectroscopy was performed with an InVia Raman microscope (Renishaw, Wotton-under-Edge, England) in backscattering geometry with a CCD detector. A DPSS laser (532 nm, 50 mW) with an applied power of 5 mW and a 50× magnification objective was used for the measurement. Instrument calibration was performed with a silicon reference, which gives a peak at 520 cm−1 and a resolution of less than 1 cm−1. The samples were suspended in deionized water (1 mg/ml), ultrasonicated for 10 minutes, deposited on a small piece of silicon wafer, and dried.
High-resolution X-ray photoelectron spectroscopy (XPS) was performed with an ESCAProbeP spectrometer (Omicron Nanotechnology Ltd., Taunusstein, Germany) with a monochromatic aluminum X-ray radiation source (1486.7 eV). The sample was applied to a conductive carbon tape before the analysis. Wide-scan surveys of all elements were performed, followed by high-resolution scans of the C 1s and O 1s regions.
Optical microscopy (OM) was performed with a digital light microscope VHX-200 D (Keyence, Osaka, Japan). The uncoated carbon-bonded filters were investigated using the side-light source. The objective allows magnifications in the range of 20×-200×.
The concentrations of metal ions before and after the sorption experiment were determined by atomic absorption spectroscopy (AAS, Agilent Technologies, Santa Clara, CA, USA) on an Agilent 280FS AA instrument using the flame-atomization technique. Measurements were carried out in an acetylene-air flame.
| 1,257.2 | 2020-04-01T00:00:00.000 | ["Materials Science", "Environmental Science"] |
Perspectives for Improvements on Real-Time GNSS Positioning with the Use of New Observables
The research aims to analyze Global Navigation Satellite System (GNSS) positioning solutions obtained with the Precise Point Positioning (PPP) method in kinematic mode, in which the position is estimated epoch by epoch from the data of a single GNSS receiver together with precise ephemerides and satellite clock corrections, both of high accuracy. The adopted methodology employs the open-source software RTKLIB, whose post-processing capability enables the evaluation of kinematic PPP performance when new observables such as the L5 and E5a bands, crucial for enhancing positional accuracy and mitigating multipath effects, are incorporated. For the survey simulations, data from the UFPR and POVE stations of the Brazilian Network for Continuous Monitoring of GNSS (RBMC) were used; the selected stations are located in the Northern and Southern regions of Brazil. The GNSS data were processed with the same tracking duration, a 45-minute cold start, in order to observe the convergence of the solution. The outputs estimated in RTKLIB were then assessed relative to the station reference coordinates provided by SIRGAS-CON. Using precise ephemerides, it was possible to conclude that adding the new observables for triple-frequency positioning improved the accuracy of the obtained coordinates, particularly in the Vertical component. The accuracy gain in the experiments using only triple-frequency data with a 15° elevation mask was approximately 25% at the UFPR station and approximately 37% at the POVE station. Furthermore, reducing the elevation mask from 15° to 10° had a positive impact on dual-frequency positioning at the UFPR station, with gains of over 3% in the East component and around 21% in the North component compared with the values obtained with the 15° mask; similarly, for triple-frequency positioning at the POVE station, gains exceeding approximately 16% in the North component and around 22% in the Vertical component were observed.
Introduction
Precise Point Positioning (PPP) (Zumberge et al. 1997) has been extensively researched, leading to the development of various online services capable of achieving sub-centimeter point positioning not only in static post-processing mode but also in real-time applications using a single Global Navigation Satellite System (GNSS) receiver (Grinter & Roberts 2013). PPP has therefore proven to be an excellent tool for geodetic and geodynamic applications, such as geodetic control, local and global deformation monitoring, the dynamics of lithospheric plate motion, cadastral surveys, and photogrammetry (Alves, Monico & Romão 2011).
Several related works have been carried out, such as the studies of Rizos et al. (2012), Janssen and McElroy (2013), and Zanetti (2018), which evaluated whether PPP is a viable alternative for precise positioning under certain conditions and configurations. These studies provide comparative analyses between PPP and more traditional positioning techniques, such as static relative positioning. Future works related to PPP should therefore focus on questions such as whether PPP is more accurate than traditional techniques and, if so, under what conditions and configurations. Huber et al. (2010) and Grinter and Roberts (2011) provide a second contextualization, analyzing the potential and limitations of PPP. These authors discuss PPP advances over the last two decades, highlighting its current capabilities and limitations and presenting the likely direction of this technology, and conclude that further PPP research will provide an increasingly wide variety of products, especially in terms of accuracy. Moreover, Naciri, Hauschild and Bisnath (2021) report significant improvements in PPP performance with the use of new Global Positioning System (GPS), Galileo, and BeiDou-2/3 signals, achieving horizontal and vertical Root Mean Square (RMS) errors of 2.3 and 2.6 cm, respectively, in static processing and 5.4 and 7.5 cm in kinematic processing after 1 hour of processing using real-time satellite correction products. The authors also found that mitigating known biases in the GPS Block-IIF L5 signal can lead to average improvements of approximately 15% and 20% in horizontal and vertical RMS, respectively. These gains demonstrate the potential for continued advancement and improvement in PPP accuracy.
Given the importance of PPP and recent technological advances, it is crucial to analyze and understand the different configurations that allow the use of the L5 and E5a carriers in data processing, evaluating their accuracy and bias. This study aims to contribute to the understanding of PPP applicability and limitations by analyzing processing configurations and assessing the precision, bias, and accuracy of kinematic PPP. The newer GNSS carrier frequencies, such as L5 and E5a, are considered more modern and were developed to minimize noise, particularly in pseudorange measurements, compared with the legacy L1 band.
RTKLIB
The software RTKLIB performs the analysis of pseudorange and carrier-phase observations through models integrated into its library and, being open source, allows new algorithms to be added through the C/C++ programming language. Regarding the modules contained in this computational resource, RTKLIB provides RTKPOST (for post-processing), RTKPLOT (for solution and data visualization), RTKNAVI (for the retention, decoding, and conversion of GNSS data transmitted in real time), and RTKCONV (for conversion to the RINEX data format). It is important to highlight that RTKLIB allows the processing of multi-constellation GNSS data and supports dual- and triple-frequency observations.
The Receiver Independent Exchange Format (RINEX) consists of text files for observation data, navigation messages, and meteorological data for a specific date and receiving station, each containing a header with general information and a data section. The observation data section includes the code pseudorange (in meters), the carrier phase, and the observation time recorded according to the receiver clock. The clock correction files contain solutions for GNSS time synchronization errors. Additionally, Ionosphere Map Exchange Format (IONEX) files can be used, which store maps of Vertical Total Electron Content (VTEC) and daily values of GNSS differential code biases derived from GNSS data.
The precise satellite orbit file is essential because it describes the satellite's motion around the Earth under the forces acting on it. As for the navigation RINEX file, the broadcast ephemerides contain position, velocity, and clock information for all GNSS constellation satellites for each day. The antenna calibration files are provided in the Antenna Exchange Format (ANTEX) and are used for absolute corrections of the antenna phase center.
Regarding the processing of the UFPR station data, when the program was requested to process triple-frequency data with the IONEX ionospheric correction, the generated outputs had an RMS exceeding the decimeter range. The RMS quantifies the accuracy of the GNSS solutions with respect to the values considered as true. To solve this problem, the Slant Total Electron Content (STEC) estimation was used instead.
Selected Stations
The selected stations were UFPR, located in Curitiba, Paraná, and POVE, located in Porto Velho, Rondônia, both belonging to the Brazilian Network for Continuous Monitoring of GNSS (RBMC) (Figure 1).
RBMC is a network of GNSS stations established in Brazil to provide accurate and reliable positioning data. The network is managed by the Brazilian Institute of Geography and Statistics (BIGS) and plays a crucial role in supporting various applications such as surveying, mapping, and geodetic studies.
Given this, the tracking months chosen on the BIGS digital site were January 2022 for UFPR and July of the same year for POVE. TEC values are higher during months near the equinoxes and lower during the winter months. In addition, ionospheric scintillation, a rapid variation in the amplitude and phase of radio signals as they pass through ionospheric irregularities, is stronger in regions near the magnetic equator, as stated by Pacini and Raulin (2006). Consequently, since the role of the L5 and E5a observables is to perform well in locations affected by multipath and ionospheric effects, the addition of the new carriers was expected to improve the accuracy of the post-processing outputs for the selected stations.
GNSS Processing
The positioning mode employed in this study is kinematic PPP, using different frequency combinations of the satellite signals. Two configurations were used: L1+L2/E5b and L1+L2/E5b+L5/E5a. The filter type selected for processing is Forward. Two elevation masks were tested, 15° and 10°, determining the minimum elevation angle for satellites to be included in the solution. Receiver dynamics were turned off, indicating that dynamic modelling of the receiver was not used. The solid Earth tides correction was applied, and the ionospheric correction was performed by estimating the Total Electron Content (TEC). The recording interval of the data was set to 1 second, ensuring regular sampling of the measurements. The adopted parameters are summarized in Table 1.
Regarding the elevation mask of the adopted settings, satellites are excluded if they are below a certain elevation angle or have a low Signal-to-Noise Ratio (SNR). However, in some specific cases of this study, the elevation mask was lowered to evaluate potential benefits in terms of ellipsoidal height estimation. By allowing lower-elevation satellites to contribute to the positioning solution in these selected cases, it is possible to assess whether there are improvements in determining the ellipsoidal height component of the position. This analysis helps in understanding the impact of relaxing the elevation mask on the accuracy and reliability of the PPP results.
Additionally, the processing was conducted to compare the PPP performance with and without the inclusion of the L5/E5a bands. These carrier frequencies are considered more modern and were developed to minimize noise, particularly in pseudorange measurements.
Results
UFPR Station Data Processing
The result for the UFPR station, on the Day of Year (DoY) 004/2022, tracking from 10:00 to 10:45 AM with a 1-second recording interval and the GPS and Galileo constellations, is shown in Figure 2.
The results of this study highlight the influence of including the L5/E5a observable on the positioning performance at the UFPR station. As shown in Figure 2, the bias and RMS of the East and North coordinates, in meters, were improved by the addition of the L5/E5a observable, while the degradation in the Vertical component was minimal. These findings emphasize the advantages of considering L5/E5a signals when performing GNSS positioning.
It is worth noting that the inclusion of the L5/E5a signal degraded the ellipsoidal height precision by ~2%, as shown in Table 2. The decrease in precision was approximately 0.009 m, which may be significant in applications that require height accuracy. Therefore, the benefits of including the L5/E5a signal and the potential degradation in height precision should be weighed according to the specific requirements of each application.
As satellite signals pass through the Earth's atmosphere, they are affected by layers such as the ionosphere and troposphere, which can cause delays and errors in GNSS positioning (Fonseca Junior 2002). When a satellite is closer to the horizon, the path of its signal through these layers becomes longer, which can significantly degrade the accuracy and precision of the positioning results. To mitigate this effect, an elevation mask can be applied to exclude satellites that are too close to the horizon (Mendes Da Rocha et al. 2017). However, to improve the quality of the altimetric adjustment by making it more diverse in terms of observations, a 10° elevation mask was tested, meaning that all satellites above this angle were used in the solution; all the other parameters in Table 1 were kept unchanged. It is worth noting that while this approach may improve the vertical positioning component, it can potentially introduce errors in the horizontal components due to the increased influence of the ionosphere and troposphere at low elevation angles.
By analyzing the results presented in Figure 3 and Table 3, where the addition of the L5/E5a signal improved the East component by ~39%, the North component by ~34%, and the Up component by ~14%, it is evident that reducing the elevation mask to 10° resulted in improvements in the East and North components of ~3% and ~21%, respectively, for dual frequency, and of ~33% in the North component for triple frequency, when compared with the same test using the 15° elevation mask. However, reducing the elevation mask from 15° to 10° led to significant degradation of the Vertical component in both cases, with values exceeding ~100%, suggesting that reducing the elevation mask negatively impacts the adjustment of the input data.
Regarding the time required to converge to a centimeter-level solution, it can be observed (Figures 2 and 3) that the period is nearly identical in both cases. To provide a more detailed analysis, Table 4 presents the percentage gains for the comparison between the solutions obtained with the 15° and 10° masks, as given in Tables 2 and 3, respectively. These percentage gains are useful for assessing the impact of the 10° mask. It is important to note that the convergence time can vary depending on several factors, such as the number and quality of the available satellites, the atmospheric conditions, and the positioning algorithm used. Table 4 shows the positioning gains and degradations found when reducing the elevation mask; the East and North components exhibited significant gains when the elevation mask was reduced.
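As a minimal sketch (not RTKLIB output parsing), the component-wise RMS and the percentage gains reported in Tables 2-4 can be computed along these lines; the error series names are placeholders for East/North/Up differences, in metres, relative to the SIRGAS-CON reference coordinates.

```python
import numpy as np

def rms(err):
    """Root mean square of a coordinate error series [m]."""
    return float(np.sqrt(np.mean(np.square(err))))

def gain_percent(rms_ref, rms_new):
    """Positive values indicate an improvement of the new configuration over the reference."""
    return 100.0 * (rms_ref - rms_new) / rms_ref

# e.g. gain_percent(rms(err_up_dual), rms(err_up_triple)) for the Vertical component
```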
The evaluation of additional time frames, of 45 minutes each, used to validate the previous statements is presented in Table 5.
POVE Station Data Processing
The result for the POVE station, DoY 191/2022, tracking from 10:00 to 10:45 AM with the GPS and Galileo constellations and a 1-second recording interval, is given in Figure 4.
Figure 4 presents a comparison of the positioning results obtained with and without the use of the L5/E5a signal. The figure clearly shows that the addition of this observable has a positive impact on the biases and RMS with respect to the ground-truth site coordinates, including the East and North components as well as the ellipsoidal height, with gains exceeding ~50%. Moreover, the results presented in Table 6 corroborate this conclusion, demonstrating a significant improvement in the RMS values when the L5/E5a signal is included in the positioning solution. The results obtained in this study therefore suggest that the inclusion of the L5/E5a signal can be of great importance in improving the accuracy and reliability of GNSS positioning solutions, especially in challenging environments where the use of traditional signals may be limited by ionospheric and tropospheric effects.
Afterward, the elevation mask was again reduced, in order to verify the reasoning discussed in Section 3.1 for the UFPR station, as shown in Figure 5.
By analyzing the results in Figure 5, it can be concluded that the accuracy of the ellipsoidal height and the North coordinate obtained through dual-frequency processing with the reduced elevation mask did not improve compared with the results presented in Figure 4. However, when considering only the triple-frequency data with the elevation mask reduced to 10° in a region with a more active ionosphere, significant improvements were observed in the North and Vertical components, with RMS gains exceeding ~60%. These findings suggest that reducing the elevation mask in areas with a more variable ionosphere can be a viable approach when working with triple-frequency data.
Based on the results mentioned in the previous paragraph, it was possible to calculate the gains in terms of RMS, as shown in Table 7.
The convergence time of the solution is practically the same in Figures 4 and 5. Finally, Table 8 presents the percentage gains for the comparison between the solutions obtained with elevation masks of 15° and 10°, as previously presented in Tables 6 and 7.
The evaluation of additional time frames, of 45 minutes each, used to validate the previous statements is presented in Table 9.
The values measured over 45-minute intervals at multiple times throughout the day reinforce the findings for the UFPR and POVE stations, as the gains obtained with the addition of the new observables are evident in the processing performed with RTKLIB.
Conclusion
In the context of the conducted studies, reducing the elevation mask from 15° to 10° resulted in overall degradation for both analyzed frequency configurations. At the UFPR station, for dual frequency, improvements were observed in the East and North components, of approximately 3% and 21%, respectively; however, the Vertical component experienced a degradation of ~153%, indicating that the benefits of the reduction are not significant compared with the losses it causes. For triple frequency at the UFPR station, there was an RMS degradation in the East and Vertical components of approximately 34% and 111%, respectively, with a gain of ~33% in the North component, which does not compensate for the losses caused by the reduction.
At the POVE station, located in a region with a more active ionosphere, the reduction of the elevation mask degraded all components analyzed in dual frequency, with losses of ~17% in the East component, ~33% in the North component, and ~5% in the Vertical component. In the case of triple frequency, the degradation in the East component was ~66%, with gains of ~16% and ~22% in the North and Vertical components, respectively.
Again, these gains are not decisive, as the degradation in the East component was significant. Next, the benefits of including the L5/E5a signals were analyzed. At both stations, tracking was performed at different times of the day, with each session lasting 45 minutes, from 9:00 a.m. to 6:45 p.m. This allowed the average gains provided by the inclusion of these signals to be estimated, considering a 15° elevation mask: an average precision gain of ~33% in the East component, ~35% in the North component, and ~71% in the Vertical component.
Therefore, the inclusion of the L5/E5a observables provided overall benefits to the precision of the positioning solution, improving the results in all analyzed scenarios.
Figure 1. Location of the RBMC stations used in Brazil.
Figure 2. Comparison between dual- and triple-frequency positioning with a 15° elevation mask at the UFPR station.
Figure 3. Comparison between dual- and triple-frequency positioning with a 10° elevation mask at the UFPR station.
Figure 4. Comparison between dual- and triple-frequency positioning with a 15° elevation mask at the POVE station.
Figure 5. Comparison between dual- and triple-frequency positioning with a 10° elevation mask at the POVE station.
Table 1. General positioning settings.
Table 2. Dual- and triple-frequency accuracies with a 15° elevation mask at the UFPR station.
Table 3. Dual- and triple-frequency accuracies with a 10° elevation mask at the UFPR station.
Table 4. Percentage improvement gain with a 10° elevation mask at the UFPR station.
Table 6. Dual- and triple-frequency accuracies with a 15° elevation mask at the POVE station.
Table 7. Dual- and triple-frequency precision with a 10° elevation mask at the POVE station.
Table 8. Percentage improvement gain with a 10° elevation mask at the POVE station. | 4,208.4 | 2024-08-26T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Robust PPG Peak Detection Using Dilated Convolutional Neural Networks
Accurate peak determination from noise-corrupted photoplethysmogram (PPG) signals is the basis for further analysis of physiological quantities such as heart rate. Conventional methods are designed for noise-free PPG signals and are insufficient for PPG signals with a low signal-to-noise ratio (SNR). This paper focuses on enhancing PPG noise resiliency and proposes a robust peak detection algorithm for PPG signals distorted by noise and motion artifacts. Our algorithm is based on convolutional neural networks (CNNs) with dilated convolutions. We train and evaluate the proposed method using a dataset collected via smartwatches under free-living conditions in a home-based health monitoring application. A data generator is also developed to produce noisy PPG data used for model training and evaluation. The method's performance is compared against other state-of-the-art methods and is tested with SNRs ranging from 0 to 45 dB. Our method outperforms the existing adaptive threshold, transform-based, and machine learning methods. The proposed method shows overall precision, recall, and F1-score of 82%, 80%, and 81% across all SNR ranges. In contrast, the best results obtained by the existing methods are 78%, 80%, and 79%. The proposed method proves to be accurate for detecting PPG peaks even in the presence of noise.
Introduction
There is a growing demand for ubiquitous health monitoring systems. These systems are developed to provide proactive healthcare solutions as well as reduce medical costs: e.g., providing efficiency and cost-savings for doctors, nurses, and pharmaceutical companies [1]. Fortunately, rapid advancements in the Internet of Things (IoT)-based systems and wearable devices offer opportunities for the development of health monitoring systems [2]. Such IoT-based healthcare systems can provide comprehensive patient care by leveraging various sensor types, communication units, and computing resources. Wearable electronics, such as wristbands and smart rings, enable the ubiquitous collection of biomedical signals, including electrocardiogram (ECG) and photoplethysmogram (PPG) [3].
PPG is a low-cost, non-invasive, and simple optical technique used for measuring the synchronous blood volume changes in tissue such as the surface of the finger, toe, wrist, and forehead [3]. This approach is widely used in wearable IoT-based applications due to its high level of feasibility and ease of measurement [4]. Collected PPG signals can be used to extract various health parameters, such as heart rate and heart rate variability. These health parameters are obtained by the determination of the systolic peaks in the PPG records. However, the quality of the PPG waveform is easily affected by surrounding noises such as background noises and motion artifacts. The noises are unavoidable in IoT-based healthcare systems, as users engage in a variety of physical activities. Subsequently, when the signal quality is poor (i.e., low signal-to-noise ratio (SNR)), accurate detection of peaks in PPG signals becomes challenging. This issue increases false peak determination, which results in inaccurate vital signs extraction.
Numerous studies have been proposed to determine the PPG signal peaks accurately. In traditional methods, signals are inspected by experts, and then the peaks' locations are annotated manually. These methods are often implemented in hospitals/clinics and are mainly used as gold standard methods for validation [5]. Such manual peak detection methods are time-consuming and require domain knowledge. Therefore, they are not feasible in health monitoring applications due to the growth of data volume over time.
On the other hand, various automatic signal processing techniques have been developed for PPG peak detection. These methods mainly include adaptive threshold [6], transform-based techniques [7], derivative calculation [8], and computer-based filtering [9]. Adaptive threshold techniques are commonly used for peak detection in biomedical signals (e.g., ECG and PPG). The methods set a threshold, which is increased or decreased adaptively based on the amplitudes of detected peaks in the signals. The threshold can be updated according to different attributes, such as duration, amplitude, beat-to-beat intervals, and sampling frequency [10,11]. In some cases, the threshold-based methods require ECG signals in addition to PPG signals, which add to the cost of the medical equipment and hinder their use in (remote) healthcare systems [12].
In addition, transform-based techniques are proposed for PPG peak detection. These methods leverage signal processing techniques, such as the discrete wavelet transform [13], stationary wavelet transform [7], and Hilbert transform [14,15]. Wavelet-based techniques are mostly used for denoising. These techniques decompose signals into multiple sub-bands with the same resolution as the original signals. Then, by composing the desired sub-bands, the informative parts are regenerated, and baseline wander and high-frequency (HF) noise are eliminated. The Hilbert transform is also employed for peak detection tasks. The Hilbert transform is a powerful tool for instantaneously analyzing the amplitude and frequency of a signal. Refs. [14,16] indicate that the zero-crossing points in the Hilbert transform correspond to peaks' locations. However, these methods are insufficient for noise-contaminated signals, and they become unreliable if the SNR drops below a certain level.
In addition, machine-learning-based approaches have been developed for PPG signal analysis [17,18]. For example, a three-layered feedforward neural network was introduced in [19] for PPG peak detection. The method was only trained and evaluated with low-noise signals.
The conventional peak detection techniques in the literature are mainly designed for noise-free or low-noise PPG signals. Therefore, they are insufficient to determine PPG peaks' locations when the signal quality is poor due to motion artifacts and HF noises. These noises are inevitable in wearable-based health and well-being monitoring systems. We believe that a peak detection method is required to determine systolic peaks in noisy PPG, leveraging temporal information in the signal. The robustness of such a method needs to be investigated against different noise levels.
In this paper, we propose a CNN-based peak determination approach for PPG signals with different levels of motion artifacts. The convolution layers in our network are dilated, resulting in a large receptive field. Therefore, the network can use temporal information in PPG peak detection and learn complex problems associated with the noisy PPG signals. Our analysis exploits PPG signals and motion artifacts collected by wearable devices in health monitoring under free-living conditions. We develop a generator function to produce PPG signals with a wide range of noise, augmenting the training data and creating noisy signals similar to real-life PPG records. Using the PPG signals, the proposed method is evaluated in comparison with state-of-the-art PPG peak detection methods. In summary, the major contributions of the paper are as follows:
• Proposing a dilated-convolutional-neural-network-based method for addressing the problem of PPG peak detection in the presence of noise.
• Assessing the robustness of the proposed method using noisy PPG signals with SNRs ranging from 0 to 45 dB.
• Evaluating the proposed method in terms of accuracy compared to conventional methods, including adaptive threshold and Hilbert transform.
• Providing the model implemented in Python for the community to be used in their solutions (https://github.com/HealthSciTech/Robust_PPG_PD).
The rest of the paper is organized as follows. The background and related work of this research is outlined in Sections 2 and 3. We introduce the dataset used in this work in Section 4. Section 5 describes the development of the proposed method in detail. In Section 6, we evaluate our method in comparison with other published methods. Finally, Section 7 concludes the paper.
Background
In this section, we briefly describe PPG and neural networks proposed for PPG-based applications.
Photoplethysmography
PPG is a convenient method for sensing the blood flow rate at peripheral sites. Therefore, this signal can be used to determine the cardiac cycle [3]. The PPG sensor includes two main components, i.e., a light source and a photodetector. PPG signals are acquired by emitting light in different wavelengths (e.g., infrared, red, and green, often at 940, 660, and 550 nanometers, respectively) to the skin surface and capturing the reflected light via photodetectors. The infrared and red lights are commonly used for measuring heart rate and blood oxygen saturation. Furthermore, the green light is widely used in wearable devices such as smartwatches [20].
The variation in the PPG signal is associated with cardiac and respiration oscillations. Figure 1 indicates a view of a PPG signal, where the heart rate values can be estimated by measuring the difference of the time interval between two successive peaks. The signal consists of two main components, i.e., the alternating current (AC) and direct current (DC). The AC part denotes synchronous cardiovascular fluctuations caused by cardiac activity, while the DC portion denotes various low-frequency elements of the blood flow, such as respiration [3,18]. The PPG method is widely used in wearable and mobile applications [20]. However, the collected signal is highly susceptible to noise. The typical PPG noises include motion artifact, baseline wander, and environmental noise [21]. Motion artifacts are generated due to the user's hand movements. Baseline wander is a low-frequency noise, which is noticeable as a fluctuating pattern in the PPG signal. Environmental noise is produced by additional sources (e.g., ambient light) collected in addition to PPG during the monitoring. The noise level in the PPG signal depends on different factors, such as the sensor quality, sensor setup (e.g., electric current), intensity of the activity, and environmental factors [20,22]. In this study, the additive noise includes baseline wander and motion artifacts, and we focus on the noise levels in PPG signals.
Neural Networks in PPG Applications
Artificial neural networks are inspired by the human brain and imitate how biological neurons interact with one another, comprising an input layer, hidden layers, and an output layer [23]. Neural network algorithms have recently been used in various PPG signal applications. In [24], a classification method based on a multilayer perceptron (MLP) network was presented. In that study, an MLP network was trained to classify the onset and systolic patterns of the PPG signals with different window sizes. The preprocessing stage includes two steps, i.e., signal segmentation and smoothing using a simple mean square regression. Then, the results are fed to the network as features for pattern recognition. Chen et al. [25] proposed a hidden Markov model for PPG classification. They first used linear predictive coding and sample entropy methods to extract different features from the PPG waveforms. Then, a vector quantization method was employed to convert the features into prototype vectors, which were utilized to estimate the hidden Markov model parameters. Reiss et al. [26] introduced a CNN architecture for heart rate estimation. In their study, the PPG signals and corresponding three-axis accelerometer data were used to train the model.
For PPG noise removal, Ref. [17] proposed a deep recurrent neural network and stochastic modeling to recover noise-corrupted PPG signals. They first used recurrent neural networks for segmentation. Then, a Kalman filter was employed to extract clean PPG and create a stochastic model. They also tested their method on a real-time dataset acquired by a wearable glove. In addition to noise removal, deep learning methods were proposed for PPG quality assessment [18,27]. In these studies, 1D and 2D CNN models were trained to discriminate between reliable and unreliable signals. The methods were evaluated by comparing the results with ECG references.
Related Work
In this section, we describe several PPG peak detection methods, of different complexities, that have been developed over the last decades. Most peak detection methods contain two main stages: preprocessing or filtering (Section 3.1), and envelope detection and peak determination (Section 3.2).
Preprocessing
Preprocessing is one of the important stages in the PPG peak detection task. This step aims to remove components of the signal that do not reflect the features of interest (e.g., heart rate and HRV). In the preprocessing step, different filtering methods, such as low-pass filters, high-pass filters, adaptive filters, singular value decomposition, and mode decomposition, are employed to suppress baseline distortion and HF noise. Such methods make the systolic peaks more prominent in the PPG signals. Ref. [28] proposed low-pass and high-pass filtering methods with cut-off frequencies of 0.4 and 8 Hz to remove motion artifact and HF noise, respectively. According to [29], the significant component of background noise lies in the frequency range of 0.15 to 5.0 Hz, and noise removal was achieved by employing a band-pass filter with cut-off frequencies of 0.5 and 5.5 Hz. Prieto et al. [30] used a combination of two zero-phase-delay fourth-order high-pass and eighth-order low-pass Butterworth filters with a bandwidth of 0.1-16 Hz to remove unwanted signal components. Moving average filters were also utilized for noise suppression. For instance, in [8], a three-point bidirectional moving average was proposed to remove the phase delay caused by the filter. In [31], the authors proposed a novel two-step method consisting of noise filtering and noise elimination. In the noise filtering step, a band-pass filter was followed by a three-point moving average filter for signal smoothing. For the noise elimination part, the signal was segmented into cycles, and three statistical features (i.e., standard deviation, kurtosis, and skewness) were calculated. Then, a threshold was set based on the features, and the segments beyond the threshold were eliminated. A three-point moving average filter was used for smoothing the signal at the final stage.
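As an illustration of the band-pass plus moving-average style of preprocessing described above (not a reimplementation of any specific cited method), a minimal SciPy sketch; the 0.5-5 Hz passband and the 100 Hz sampling rate are assumptions for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess_ppg(ppg: np.ndarray, fs: float = 100.0) -> np.ndarray:
    """Zero-phase band-pass filtering followed by a 3-point moving average."""
    # Fourth-order Butterworth band-pass; the assumed 0.5-5 Hz passband keeps the
    # cardiac band while suppressing baseline wander and high-frequency noise.
    sos = butter(4, [0.5, 5.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, ppg)
    # 3-point moving average for final smoothing, as used in several cited works.
    return np.convolve(filtered, np.ones(3) / 3.0, mode="same")
```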
In [32], a variational mode decomposition was used to enhance the signal quality and suppress the motion artifact. The decomposition was implemented in two stages to minimize the balancing error. Paradkar et al. [7] introduced a singular value decomposition along with a moving average filter to extract the periodic component from the raw signal and reduce background noise. Ye et al. [33] proposed a hybrid motion artifact removal method combining adaptive filtering and signal decomposition. In this method, the PPG records and the acceleration data (used as the reference) were used as the input of the adaptive filter simultaneously. After using the adaptive filter, a signal decomposition method was performed to remove remaining motion artifact components of the raw PPG signal. Similarly, an adaptive filter was introduced for motion artifact reduction in [34]. The PPG-based filtering techniques reduce the effect of noise, but they cannot guarantee highly accurate results all the time. For example, they cannot remove noise peaks, when the noise overlaps (e.g., in frequency) with the PPG peaks. Moreover, they might require additional sensors, e.g., accelerometer or gyroscope.
Envelope Extraction and Peak Determination
This stage generally includes extracting different features, such as the maxima and minima, the slope of the signal, and the signal envelope, and then applying well-known and robust algorithms (e.g., adaptive threshold, transform-based, and machine-learning-based algorithms) to determine the signal peaks.
Adaptive Threshold
The adaptive threshold is a common technique used for PPG peak detection. This method employs a constant specified by signals' temporal and frequency domain features and time intervals. The constant could be decaying or growing due to the dynamic nature of the PPG waveform [6,28]. For example, Shin et al. [35] proposed to update the threshold according to different features such as the sampling frequency, preceding peaks, and standard deviation of the signal. In [10], the adaptive thresholding is equipped with a morphological filter to remove the low noises and a slope sum function is used to pinpoint the location of peaks accurately. Van Gent [36,37] presented a method based on an adaptive threshold followed by moving average and spline interpolation methods if the detected peaks show clipping. In [38], a derivative-based method was proposed equipped with a slope sum function and an adaptive threshold to mitigate false peak detection. The complexity of the adaptive threshold PPG peak detection methods is low. However, the methods are sensitive to noise and fail to accurately identify the peaks when the PPG signal is contaminated by noise. In other words, the PPG signal changes rapidly due to noise, so the methods are incapable of selecting appropriate thresholds.
Transform-Based Techniques
In addition, transform-based techniques have been developed for PPG peak detection. These methods are mainly based on various linear transformations, such as wavelet [7] and Hilbert transforms [14], by which the signal's temporal and frequency domain features are extracted. Then, various thresholds, such as zero-crossing points or decision logic, are set to extract the signal components and the corresponding peaks in the original signals. In [14], a Hilbert transform was accompanied by moving average and Shannon energy envelope techniques to locate the position of the systolic peaks. In [15], a Hilbert double envelope method was proposed for PPG peak detection. Through the Hilbert transform, the lower and upper signal envelopes were obtained. In the next step, the Hilbert transform was applied to the upper envelope, and the lower envelope was retrieved. Finally, a local maximum method was applied to the output signal to locate the PPG peaks. Vadrevu et al. [39] introduced a stationary wavelet transform to extract two sets of coefficients from the PPG signal. Then, using a multiscale sum and product, the peaks' sharpness was enhanced at the edges, while the other values remained near zero. Following that, the zero-crossing points were extracted to obtain the locations of the systolic peaks. Leveraging transform-based methods, the peak positions can be detected more accurately. For instance, Ref. [40] proposed a robust algorithm, enabled by a Hilbert transform, amplitude thresholding, and the signal derivative, to detect PPG systolic peaks. Their algorithm achieved better performance in comparison to an adaptive threshold technique. However, the transform-based methods are still insufficient for wearable-based PPG, as they fail to precisely determine systolic peaks in distorted PPG signals.
In another work, Ref. [11] introduced a positioning algorithm to locate PPG peaks. The method included denoising and abnormal interval removal steps. In [41], PPG peaks were automatically detected and corrected, exploiting a Poincaré plot feature and envelope detection. In [38], a window-based approach was introduced for peak detection. The peak location was determined by sliding a certain window over the PPG signal. Following that, several well-documented time-domain strategies (i.e., refractory period, clipping detection, and peak verification) were proposed to address disturbances associated with PPG signals. However, the method's accuracy is highly affected by the window size. The methods mentioned above are accurate when the PPG signal quality is good. However, they are highly susceptible to motion artifacts and environmental noises. They fail to differentiate false noisy peaks from systolic peaks and subsequently result in inaccurate peak detection. Moreover, the probability of false peak detection increases in signals with a high heart rate. Consequently, these methods are insufficient for wearable-based monitoring, in which the users might engage in various physical activities.
Machine Learning Methods
Traditional machine learning and deep learning methods have been employed to analyze cyclostationary biosignals such as PPG and ECG. For example, Ref. [42] proposed a 1D CNN for QRS complex detection. In the preprocessing stage, a derivative function followed by an averaging system was used for noise removal. Then, the signals were fed to the CNN for automatic feature extraction and classification. In [43], a Faster R-CNN model was proposed for ECG peak detection. Their method included three steps. First, the ECG signals were segmented and transformed into 2D images. Second, the images were fed to the model, and the output feature map was input into a regional proposal network. In the final step, by setting a threshold, low-probability outputs were excluded. Their method was tested with 24 h wearable ECG recordings. In addition, Ref. [44] proposed an automatic R-peak detection method for ECG signals. The method comprised a bidirectional LSTM to obtain the probabilities and locations of R-peaks. The machine-learning-based methods mentioned above have been utilized for ECG R-peak detection. They are insufficient for PPG signals due to the difference in the signals' origins. PPG systolic peak detection is more challenging, as the peak slopes are not as steep as those of the QRS complex in ECG. PPG signal quality and the waveform are also highly susceptible to artifacts generated, for example, by the user's hand movements.
For PPG peak detection, Ref. [19] proposed an online sequential learning algorithm. Their method included two steps. First, they divided the PPG signals into a set of segments defined by fundamental sinusoids; among these segments, only one contained a peak. In the second step, a feedforward neural network was trained to detect peaks in the segments. However, the method could not differentiate systolic peaks from noise peaks, so it might fail with noisy signals. Moreover, the evaluation was limited to noise-free and low-noise PPG signals.
Dataset
The PPG dataset used in this paper is a part of a health monitoring study [45]. During the study, the participants were asked to wear Samsung Gear Sport smartwatches, with which their vital signs, physical activity, and sleep were tracked continuously. The monitoring was performed under free-living conditions, where the participants engaged in their normal daily routines.
The recruitment and data collection took place in southern Finland between July and August 2019. The recruitment started with the students and staff members of the University of Turku. More recruiting was then performed with snowball sampling, and in the end, 46 individuals were recruited. All of the participants were healthy individuals, and both males and females were present in equal numbers. The following exclusion criteria were used in the recruitment: (1) any restrictions using wearable devices at work, (2) restrictions regarding physical activity, (3) a diagnosed cardiovascular disease, and (4) symptoms of illness at recruitment time. Due to technical and practical issues, PPG signals from all 46 participants were not available, and data from 10 participants had to be excluded. Thus, PPG data from 36 participants were used in our analysis. Table 1 summarizes the background information of the participants. All PPG signals were recorded with Samsung Gear Sport smartwatches [46]. The smartwatch has compact dimensions of 44.6 × 42.9 × 11.6 mm, and it weighs 67 g with the strap. The smartwatch is waterproof, its battery lasts about 3 days, and it includes a PPG sensor and a built-in inertial measurement unit. The device runs an open-source Tizen operating system, enabling customized data collection and data transmission.
For the data collection, the participants were asked to wear the smartwatches on their non-dominant hands. The watches were programmed to collect data for 24 h at the sampling frequency of 20 Hz. We upsampled the PPG signals to 100 Hz to include the tolerance distance in the peak detection (see Section 5.3.2). The upsampling was performed using a linear interpolation technique, which is a conventional method for upsampling low-frequency signals [47]. In this method, a line is fitted between each pair of data points. Then, based on the upsampling rate, new data points are fitted on the line. The participants were also asked to send the collected data via Wi-Fi to our server using our Tizen app [45]. Our monitoring system is depicted in Figure 2, including the Samsung Gear Sport smartwatch for data collection, a smartphone as a gateway layer for data transmission, and the cloud server. This study was conducted following the ethical principles set by the Declaration of Helsinki and the Finnish Medical Research Act (No 488/1999). In addition, the University of Turku Ethics committee for Human Sciences gave a favorable statement (No 44/2019) of the study protocol. All study participants received both oral and written information about the study before their written consent was obtained. Study participation was entirely voluntary, and at all times, the participant had a right to withdraw from the study without giving any reason. At the end of the monitoring period, each participant was compensated with an EUR 20 gift card.
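A minimal sketch of the linear-interpolation upsampling from 20 Hz to 100 Hz described above; the exact implementation is not given in the text, so the NumPy-based version below is illustrative.

```python
import numpy as np

def upsample_linear(signal: np.ndarray, fs_in: float = 20.0, fs_out: float = 100.0) -> np.ndarray:
    """Upsample a 1D signal by fitting straight lines between consecutive samples."""
    t_in = np.arange(len(signal)) / fs_in            # original sample times
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)   # new sample times at fs_out
    return np.interp(t_out, t_in, signal)

ppg_20hz = np.random.randn(20 * 15)      # one 15 s segment at 20 Hz (placeholder data)
ppg_100hz = upsample_linear(ppg_20hz)    # ~1500 samples at 100 Hz
```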
Deep Learning-Based PPG Peak Detection
In this section, we present a deep-learning-based method designed for PPG peak detection. The method is trained using noisy PPG signals. In the following, we first describe the data preparation step, including a generator function to produce noisy PPG signals. We then present the proposed model architecture and peak extraction method. The data analysis pipeline is shown in Figure 3.
Data Preparation
Data preparation generates noisy PPG signals with different SNRs to train and test the proposed model. The signals are generated using the available database, presented in Section 4. In this regard, we extract clean PPG signals and noise from the database. The collected PPG signals are nonstationary in terms of the noise level. In other words, the noise levels vary throughout the monitoring due to, for example, the user's hand movement. Hence, the signals are divided into (quasi)stationary segments, within which we assume the noise level is fixed. The length of the segments should be long enough to allow meaningful waveform analysis but short enough to ensure the segments are (quasi)stationary. Note that too-short segments lead to low-resolution features. In our analysis, 15 s segments are selected.
The clean PPG signals are obtained using a PPG quality assessment technique, including five morphological features (i.e., spectral entropy, Shannon entropy, approximate entropy, kurtosis, and skewness) [48] and a support vector machine method. Then, the peak annotations (of the clean signals) are performed using a derivative-based method. In addition, we verify the method's accuracy by randomly selecting the annotation outputs and examining them manually.
Moreover, baseline wander and motion artifacts are extracted from the collected data and stored in the noise dataset. The noises are additive and independent. Then, the clean signals (along with the peak locations) and noises are fed to a generator function.
Generator Function
A generator function is designed to create noisy PPG signals by aggregating clean PPG with noise. The noisy PPG signals are then utilized for training and testing the model. The generator function returns batches of normalized noisy PPG signals, their SNR values, and systolic peaks labels. Figure 4a shows a view of a generated PPG signal and its labeling vector.
The generator function includes five steps, as follows:
1. Clean PPG signal selection: a 15 s window of clean PPG signal ($X$) is randomly selected.
2. Noise selection: a 15 s window of noise ($N$) is randomly selected. Note that our noise dataset includes baseline wander and motion artifacts.
3. Noisy PPG generation: a weighted arithmetic mean is utilized to create the noisy PPG signals:
$$S = w_X X + w_N N,$$
where $w_X$ and $w_N$ are the weights of the clean PPG signal and noise, respectively. In our case, $w_X$ is 1, while $w_N$ is a random number with uniform distribution (0, 5). Therefore, PPG signals with different noise levels are constructed. The signals are then normalized to [−1, 1] to be used for training and testing our model.
4. Label extraction: a binary format is used for labeling the systolic peaks in the constructed PPG signal. In this labeling, "1" corresponds to the peak locations, whereas the rest of the signal is labeled as "0". Moreover, additional "1"s are added to the points adjacent to each systolic peak to make the model more robust against false positives. In other words, although one point is considered as the location of the peak, five labels (i.e., the peak, two preceding, and two succeeding points) are set to "1" (see Figure 4). The use of five "1"s instead of only one "1" in the labeling vector leads to more robust positive predictions and therefore reduces the noise effect in identifying the peak's location. It should be noted that the label values are created according to the systolic peaks in the clean PPG signals (not in the aggregated noisy PPG).
5. SNR extraction: the SNR is calculated for each constructed noisy PPG signal as follows:
$$\mathrm{SNR} = 10 \log_{10}\left(\frac{P_{\mathrm{Signal}}}{P_{\mathrm{Noise}}}\right),$$
where $P_{\mathrm{Signal}}$ and $P_{\mathrm{Noise}}$ are the signal and noise powers, respectively. The procedure of the generator function is also shown in Algorithm 1.
Algorithm 1 The generator function.
Initialize: win_size ← window size, batch_size ← batch size, w_X ← 1
while i < batch_size do
    X ← select a window of the clean PPG signal randomly
    clean_peak ← extract the corresponding peak locations
    N ← select a random noise window of the same size
    w_N ← a random number with uniform distribution (0, 5)
    S ← w_X X + w_N N
    norm_sig ← normalize the noisy signal (i.e., S)
    label ← create a binary label vector for the noisy signal
    SNR ← calculate the SNR
    i ← i + 1
end while
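A minimal Python sketch of the generator logic in Algorithm 1. The clean-signal and noise pools, the window size of 1500 samples (15 s at 100 Hz), and the label widening to five samples follow the description above; variable names and the exact power-ratio SNR computation are illustrative.

```python
import numpy as np

def generate_noisy_ppg(clean_pool, peak_pool, noise_pool,
                       win_size=1500, batch_size=800, rng=None):
    """Yield batches of normalized noisy PPG segments, peak labels, and SNRs."""
    rng = rng or np.random.default_rng()
    while True:
        signals, labels, snrs = [], [], []
        for _ in range(batch_size):
            i = rng.integers(len(clean_pool))
            x = clean_pool[i][:win_size]                     # clean PPG window (X)
            peaks = peak_pool[i]                             # its systolic peak indices
            n = noise_pool[rng.integers(len(noise_pool))][:win_size]
            w_n = rng.uniform(0.0, 5.0)                      # w_X = 1, w_N ~ U(0, 5)
            s = x + w_n * n                                  # S = w_X X + w_N N
            snr = 10.0 * np.log10(np.sum(x ** 2) / np.sum((w_n * n) ** 2))
            s = 2.0 * (s - s.min()) / (s.max() - s.min()) - 1.0   # normalize to [-1, 1]
            y = np.zeros(win_size)
            for p in peaks:                                  # widen each peak label to 5 samples
                y[max(p - 2, 0):p + 3] = 1.0
            signals.append(s); labels.append(y); snrs.append(snr)
        yield np.stack(signals)[..., None], np.stack(labels), np.array(snrs)
```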
Model Architecture
To detect PPG peaks, we develop a CNN architecture with dilated convolutions, also known as atrous convolutions (or convolutions with holes). Using dilated convolutions instead of regular ones results in a larger receptive field with the same number of trainable parameters. This is achieved by inserting holes into the filter, i.e., some of the inputs are skipped, as indicated in Figure 5. The dilation rate controls the amount of skipping, and filters with higher dilation rates have more holes. A dilated convolution with a dilation rate of 1 is a particular case that is equivalent to a standard convolution. Dilated convolutions were utilized for the first time in efficient wavelet decomposition [49]. Later, they were successfully applied in different deep learning applications, such as semantic image segmentation [50,51] and audio generation [52]. In our CNN model architecture, dilated convolutional layers are stacked, and their dilation rate is doubled at every layer. This approach results in vast receptive fields even with few layers (see Figure 6), which is computationally very efficient. Moreover, the input resolution is retained through the network. In contrast with our method, other existing approaches that expand the receptive field, such as strided convolutions (stride larger than 1) or pooling layers, reduce the spatial resolution [53]. Stacking dilated causal convolutional layers while simultaneously increasing the dilation rate was first proposed by Oord et al. [52] as part of their WaveNet architecture for audio generation. Our model is architecturally simpler, as we use a feedforward structure without any residual or skip connections. We also do not enforce causality. Therefore, the receptive fields of the neurons in our model can contain both preceding and succeeding information, which allows our model to make more accurate predictions.
Figure 6. The receptive field of a neuron in a three-layer dilated convolution network is illustrated with bold lines. Note how the dilation rate (DR) is doubled at every layer. Our model is deeper than this illustration, as it contains four additional layers.
Our model is fully convolutional, and it is a stack of seven 1D convolutional layers, as indicated in Figure 7. The input resolution of 1500 time steps is retained through the model. The kernel size is 3 for every layer. The model performs sequence-to-sequence mapping: it produces a probability value for every time step, indicating how likely that signal point is to be a systolic peak. Two PPG examples with the probability values (i.e., the CNN model predictions) are shown in Figure 8. The dilation rate is 1 in the first convolutional layer, and it is doubled at every following layer, reaching 64 at the final convolutional layer. This network structure results in a wide receptive field of 255 time steps for a neuron in the final classification layer. With a kernel size of 3 and a dilation rate of $2^{l-1}$ in layer $l$, the size of the receptive field $r_l$ of any convolutional layer $l$ in our model can be determined as
$$r_l = r_{l-1} + 2^{\,l} = 2^{\,l+1} - 1, \qquad r_0 = 1,$$
which gives $r_7 = 255$ time steps for the final layer. To keep our model compact, we slowly increase the number of filters as the network becomes deeper. The first convolutional layer contains four filters, while the second-to-last convolutional layer contains 32 filters. The final convolutional layer performs the binary classification; therefore, it has only one filter. It uses the sigmoid function (Equation (4)) as the activation function, while all preceding layers use the exponential linear unit (ELU, Equation (5)) as the activation function [54]:
$$\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad (4)$$
$$\mathrm{ELU}(x) = \begin{cases} x, & x > 0, \\ \alpha\,(e^{x} - 1), & x \le 0, \end{cases} \qquad (5)$$
where α > 0. Moreover, we chose Adam [55] as the optimizer and binary cross-entropy as the loss function. The binary cross-entropy loss L is defined as follows:
$$L = -\left[\, y \log(p) + (1 - y)\log(1 - p) \,\right],$$
where y is the binary class label (0 or 1) and p is the predicted probability indicating how likely it is that the prediction belongs to the positive class labeled as 1. The proposed model is very small, since it only has 3169 trainable parameters.
Figure 8. The lower row shows the two inputs, and the upper row includes the two labeling vectors predicted by the model.
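A minimal Keras sketch consistent with the description above: seven stacked 1D convolutions with kernel size 3, dilation rates doubling from 1 to 64, ELU activations, a final sigmoid layer, Adam, and binary cross-entropy. The intermediate filter counts are an assumption (the text only specifies 4 filters in the first layer, 32 in the second-to-last, and 1 in the last); the progression below is one choice that reproduces the reported 3169 trainable parameters.

```python
from tensorflow.keras import layers, models

def build_peak_detector(win_size=1500):
    """Fully convolutional sequence-to-sequence PPG peak detector."""
    filters = [4, 8, 8, 16, 16, 32]        # assumed progression from 4 up to 32
    dilations = [1, 2, 4, 8, 16, 32]       # dilation rate doubled at every layer
    inputs = layers.Input(shape=(win_size, 1))
    x = inputs
    for f, d in zip(filters, dilations):
        x = layers.Conv1D(f, kernel_size=3, dilation_rate=d,
                          padding="same", activation="elu")(x)
    # Final classification layer: dilation 64, one filter, sigmoid output per time step.
    outputs = layers.Conv1D(1, kernel_size=3, dilation_rate=64,
                            padding="same", activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_peak_detector()
model.summary()  # 3169 trainable parameters; receptive field r_7 = 2**8 - 1 = 255 steps
```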
Peak Finder
We develop a wrapper function to extract the precise peak locations from the predictions provided by the CNN model and to detect and remove false peaks. Mainly, the function performs the following three tasks:
1. Removes predictions with low probability.
2. Extracts the precise peak locations within the predicted values.
3. Discards false peaks in the predictions.
In the following, we describe the three tasks of the wrapper function in more detail.
Low-Probability Signal Removal
In the first step, predictions with low probability are discarded using a threshold value. A labeling vector, in which each time step carries one probability value between 0 and 1, is fed to the wrapper function. Then, a local threshold filter is applied to the predicted time steps, and the time steps below the defined probability value are filtered out. The threshold is chosen empirically after evaluating a considerable number of predicted time steps. Decreasing the probability threshold improves the recall but reduces the precision.
Peak Extraction
We use a local maximum finder to determine the exact peak locations. For this purpose, we design a search function that finds, within the model predictions, the five-sample segment with the highest probability values. In each five-sample segment of the predicted time steps, the index of the highest probability is chosen as the location of the peak. Moreover, if there are two equal probability values in the selected segment, the first one is chosen, and the corresponding index is extracted as the location of the peak. Figure 9 illustrates a segment of the model prediction with its corresponding probability values. As shown in Figure 9, seven samples are above the threshold. In this example, the function finds the sample with the highest probability (i.e., 0.85) by comparing the neighboring points. We produce a balanced labeling vector for the noisy PPG signals, as described in Section 5. This idea helps our model to achieve higher precision while maintaining a lower tolerance distance. In other words, in the data generation stage, we introduced a new labeling method that generates a series of five "1"s instead of only one "1" at the location of each peak. This means that, if the algorithm finds a peak in the peak detection phase, there might be a time difference between the exact peak location and the detected peak. This time difference is introduced as the tolerance distance. Figure 10 shows a segment of the PPG signal and the defined tolerance distance with gray shaded rectangles. In the proposed method, the tolerance distance is 50 ms, which is smaller than the tolerance distance (i.e., 88 ms) used in other studies in the literature [44,56]. If a peak is detected within this range, it is considered a true peak.
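A minimal sketch of the thresholding and local-maximum search described above; the threshold value is illustrative, and ties are resolved by taking the first index, as in the text.

```python
import numpy as np

def extract_peaks(probs, threshold=0.5, seg_len=5):
    """Drop low-probability time steps, then keep one local maximum per 5-sample group."""
    candidates = np.where(probs >= threshold)[0]       # indices above the probability threshold
    peaks, i = [], 0
    while i < len(candidates):
        start = candidates[i]
        # candidate samples falling within the same five-sample neighbourhood
        seg = candidates[(candidates >= start) & (candidates < start + seg_len)]
        peaks.append(int(seg[np.argmax(probs[seg])]))  # np.argmax returns the first maximum on ties
        i += len(seg)
    return peaks
```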
Peak Correction
In the third step, too-close peaks are discarded. Ventricular depolarization cannot occur in the refractory period despite the presence of stimuli. Therefore, no peak is presented in PPG signals during the refractory period after a peak. Our analysis assumes that the maximum heart rate is 200 beats per minute and, accordingly, the minimum distance between two successive peaks is 300 ms. This step is necessary when we aim to maximize the recall while a low-value probability threshold is defined.
Accordingly, PPG peaks within a distance of less than the threshold (i.e., 300 ms) are considered as false peaks. In this regard, we add a peak to the false-peak list if its distance to the preceding peak is less than 300 ms. Then, the false-peak list is sorted based on the peaks' probabilities. In the next step, we select the peak with the highest probability in the false-peak list and add it to the peak list. Then, we calculate its distance to the preceding peak. If the distance is larger than the threshold, it is kept as a peak; otherwise, it is removed. We repeat this step until the false-peak list is empty. For clarity, let us take an example of the PPG peak correction. Four systolic peaks are indicated in Figure 11. In the first round, we calculate the distance between each peak and its preceding peak. As shown in the figure, the distance between the first peak (P1) and the second peak (P2) is 250 ms. Therefore, P2 is added to the false-peak list. Likewise, P3 is added to the false-peak list. In the next step, we sort these false peaks based on their probabilities. Then, we start with the highest probability (i.e., P3) and calculate its distance to P1. The distance is 350 ms, which is above the threshold. Hence, P3 is considered a systolic peak. In the next round, we choose P2 and follow the same procedure. As the distance between P2 and P1 is less than the threshold, P2 is not a systolic peak and is removed from the false-peak list.
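A minimal sketch of the false-peak removal described above, assuming a 100 Hz sampling rate so that the 300 ms refractory distance corresponds to 30 samples; the re-admission loop follows the probability-ordered procedure of the example.

```python
def correct_peaks(peaks, probs, fs=100.0, min_rr_s=0.3):
    """Discard peaks violating the refractory period, keeping the most probable ones."""
    min_dist = int(min_rr_s * fs)                  # 300 ms -> 30 samples at 100 Hz
    accepted, false_list = [], []
    for p in sorted(peaks):                        # flag peaks too close to the preceding kept peak
        if accepted and p - accepted[-1] < min_dist:
            false_list.append(p)
        else:
            accepted.append(p)
    # Re-admit flagged peaks, highest probability first, whenever they no longer
    # violate the minimum distance to any already accepted peak.
    for p in sorted(false_list, key=lambda idx: probs[idx], reverse=True):
        if all(abs(p - q) >= min_dist for q in accepted):
            accepted.append(p)
    return sorted(accepted)
```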
Evaluation and Results
We evaluate the proposed method using the PPG data collected via the Samsung smartwatches in free-living conditions. The evaluation includes the data of 36 healthy individuals. The model generalization is an essential factor that should be taken into consideration. We validate the performance of the proposed method by implementing an inter-patient test, in which training and testing data are selected from separate individuals. In this regard, the PPG data of 26 participants (i.e., 9,600,000 15 s segments) are utilized for the training phase. We train the proposed model using (1) the noisy PPG signals constructed via the generator function and (2) their true labeling vectors. For the testing phase, the data of the remaining 10 participants (i.e., 35,800 15 s segments) are selected. We separate the users to avoid any data leakage between the model training and testing. Similar to the training phase, the generator function is utilized to create noisy PPG segments. The test PPG signals are fed to the model, and the labeling vectors are estimated. Then, the method's performance is assessed by comparing the estimated labeling vectors with the true labeling vectors.
In our experiments, we used a Linux machine with an AMD Ryzen Threadripper 2920X 12-core processor, an NVIDIA TITAN RTX GPU (24 GB memory), and 126 GB RAM. We used the TensorFlow (v2) deep learning framework with the high-level Keras API to construct our model. A batch size of 800 and 200 epochs, with 60 steps per epoch, were selected for model training. In the training data, the range of SNR is from −2.5 to 47.5 dB (completely noisy to noise-free signal). The data are clustered into 10 ranges with a step of 5 dB. This balancing prevents the network from over-learning a specific SNR value. A total of 9,600,000 segments were used for the training phase (90% training and 10% validation). Figure 12 indicates four examples of 15 s noisy PPG signals, with different noise levels, used in the training phase. The method was implemented using TensorFlow [57], Keras [58], and SciPy [59] in Python. In addition to the proposed method, we implemented five existing methods for PPG peak detection. First, Elgendi et al.'s method [60] was performed, using a dynamic threshold and two event-related moving average methods. Second, we utilized Van Gent et al.'s method [36] as an adaptive threshold method; it uses an adaptive threshold along with a moving average on both sides of each sample. Third, Kuntamalla et al.'s [8] method was implemented to estimate PPG peaks using an adaptive threshold, which is empirically set to 0.35. Fourth, the method of Chakraborty et al. [40], a transform-based approach, was used to estimate the peaks' locations using a Hilbert transform. Finally, a 1D-CNN method was implemented. The model was fully convolutional, consisting of six stacked 1D convolutional layers. The input data were the same as the inputs of the proposed method. It should be noted that, for the Elgendi and Van Gent methods, we used the versions implemented in the NeuroKit [61] and HeartPy [62] Python packages.
Evaluation Measures
A beat-to-beat comparison was made between the detection results and the reference test set labels to evaluate the algorithm's accuracy. In the comparison, a true positive (TP) is when a PPG peak is detected correctly, a false negative (FN) is when the method fails to detect a peak, and a false positive (FP) is when the algorithm detects, e.g., noise as a peak. Then, the performance of the proposed method was assessed by calculating precision, recall, and F1-score, as follows [39]:
$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN},$$
$$\text{F1-score} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} = \frac{2\,TP}{2\,TP + FP + FN}. \qquad (9)$$
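The beat-to-beat scores follow directly from the TP, FP, and FN counts; a minimal sketch:

```python
def prf1(tp: int, fp: int, fn: int) -> tuple:
    """Precision, recall, and F1-score from beat-to-beat comparison counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)   # == 2*tp / (2*tp + fp + fn)
    return precision, recall, f1

# Example with placeholder counts: 80 correctly detected peaks, 18 false detections, 20 missed peaks.
print(prf1(tp=80, fp=18, fn=20))  # ~(0.816, 0.800, 0.808)
```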
Test Set Results
Our proposed method was evaluated using the test dataset created by the generator function. The function generated 100 Hz noisy PPG signals along with the SNR values and the corresponding labeling vectors. A total of 35,800 15 s noisy PPG signals with a balanced range of SNRs were used for the testing. The SNR values were between −2.5 and 47.5 dB in our evaluation. The signals were divided into 5 dB SNR groups. Then, the performance of the method was investigated for each group. Figure 13 shows a PPG segment with different peak detection results. The SNR is 8.82 dB. The vertical dashed lines show the true peaks, and the markers indicate the estimated peaks. Our method was successful at locating the systolic peaks in the 15 s segment. However, the other methods missed one or several peaks and detected false peaks as systolic peaks. The Kuntamalla method had the worst performance in this example. The performance of the models for different SNR groups is shown in Figure 14. A quantitative comparison is also presented in Table 2. Figure 14a illustrates the methods' precision. All the methods except the Kuntamalla method obtain an equal precision value (i.e., 98%) when the SNR is above 42.5 dB. However, the precision values drop when the SNR values decrease. For example, from SNR 45 to 25 dB, the precision for the proposed method, Elgendi, Van Gent, Chakraborty, Kuntamalla, and 1D-CNN decreases by 15%, 18%, 24%, 19%, 20%, and 16%, respectively. As indicated, the proposed deep-learning-based method outperforms the existing methods. The results show that the number of false positives in the proposed method is lower compared to the other methods. Therefore, our method detects fewer false peaks as systolic peaks in the noisy PPG signals. Figure 14b indicates the methods' recall values. The figure shows that all the methods perform well in noise-free conditions, i.e., almost 96% recall. Our method showed better recall values in all SNR ranges except at 20 dB SNR, in which our method achieved 79% while the 1D-CNN method achieved 80%. As indicated, there are decreasing trends in the recall values when the SNR decreases. The falling trends are more intense at lower SNRs. At the lowest SNR, the difference between our method and the other methods reaches its highest value. As presented in Table 2, at SNR 0 dB, the differences between our method and the other methods are 7% (Elgendi), 8% (Van Gent), 12% (Kuntamalla), 18% (Chakraborty), and 1% (1D-CNN). The recall values show that our method obtains fewer false negatives compared to the other methods. Therefore, our method is relatively more successful in detecting the true peaks.
Finally, the methods' F1-score values are illustrated in Figure 14c. When the SNR values drop, the F1-score of the proposed method decreases with a smaller slope compared to the existing methods. The difference is bigger with lower SNR values. For example, as shown in Table 2, the F1-score values at 0 dB SNR are 0.52, 0.46, 0.43, 0.40, 0.38, and 0.48 for our method, Elgendi, Van Gent, Chakraborty, Kuntamalla, and 1D-CNN, respectively. In summary, our method was compared with five existing PPG peak detection methods using different noise levels (i.e., from noise-free PPG signals to distorted PPG signals due to high level of noise). Our findings showed that the performance of all the methods was similar in noise-free conditions, i.e., when the SNRs were higher than 40 dB. The noise-free signals included no or a few false peaks, and the methods could detect the systolic peaks successfully. However, in real-life situations, the PPG signals, collected by wearable devices, might be distorted due to motion artifact or environmental noise. With the medium-noise-level signals (i.e., SNRs from 40 to 15 dB), our method outperformed the state-of-the-art methods. With the high-noise-level signals (i.e., SNRs lower than 15 dB), the performances of all the methods (including the proposed method) were diminished. However, the results showed that our method was more successful with low-quality signals as well. Consequently, the proposed method has the best performance in all the SNR groups, particularly when the SNR values are not large. The method is more robust against noise and could better discriminate between the systolic and noise peaks.
Computation time: In addition to the accuracy assessment, we evaluated the computation time of the testing phase. We repeated the experiments 100 times and calculated the computation time of the methods. The average values and standard errors are indicated in Table 3. The Elgendi method (including rule-based steps) had the lowest execution time, i.e., 0.75 ms. The execution time of the proposed deep learning method was 1.081 ms on average, which is lower than the processing times of the Van Gent, Chakraborty, Kuntamalla, and 1D-CNN methods (i.e., 8.55 ms, 2.48 ms, 2.55 ms, and 1.22 ms, respectively).
Limitations and Future Work
The dataset used in this paper was limited to healthy participants. However, other studies [41] indicated that arrhythmias, such as premature atrial contractions, premature ventricular contractions, and atrial fibrillation, might affect the accuracy of peak detection methods. The method's performance should be investigated with the data of non-healthy individuals to address the lack of generalizability of the results.
Moreover, our evaluation was restricted to one dataset collected during free-living conditions using Samsung Gear Sport smartwatches. The method performance should be evaluated with different physical activities. In our future work, we intend to validate our method with other databases, such as [63,64], in which the users are engaged more in intense physical activities such as cycling and running.
Conclusions
In this paper, we presented a robust CNN-based peak detection method for PPG signals with different noise levels. The proposed method included three phases. A generator function was introduced in the first phase, combining clean PPG records with different levels of noise. In the second phase, a dilated CNN was proposed. The use of dilated convolutions provided a large receptive field, which enhanced the efficiency of time series processing with CNNs. In the third phase, a wrapper function was implemented to detect the locations of the PPG peaks. After predicting the peaks, a filtering function was used to remove the false peaks. We evaluated the proposed method using the PPG data collected via wearable devices under free-living conditions. Our method was compared with five existing PPG peak detection methods. The performances of the methods were similar with noise-free PPG. However, our method exhibited higher accuracy when the noise level increased. We showed that the average F1-score of the proposed method was 81%, while the Elgendi, Van Gent, Chakraborty, Kuntamalla, and 1D-CNN methods obtained 77%, 74%, 77%, 69%, and 79%, respectively. Our results indicated that the proposed PPG peak detection method was more successful in terms of recall and precision in a noisy environment.
"Medicine",
"Computer Science",
"Engineering"
] |
Star Wars? Space Weather and Electricity Market: Evidence from China
This paper aims to investigate the impact of space weather on China's electricity market. Based on data products provided by NOAA and the National Energy Administration in China, this paper uses solar wind velocity as a solar weather indicator and the disturbance storm time index as a magnetospheric weather indicator, matched against monthly Chinese electricity market data over 10 years. Based on a VAR model, we found that (1) space weather increases the demand for electricity in China: solar wind speed and the geomagnetic index increase the electricity consumption of Chinese society as a whole, with space weather mainly increasing the electricity consumption of the secondary and industrial sectors. (2) The geomagnetic index significantly promotes power station revenue. (3) Space weather is associated with increased energy consumption: the geomagnetic index significantly increases the coal consumption rate of fossil power plants in China, but the solar wind speed shows no significant relationship with the coal consumption rate of fossil power plants.
Introduction
A clear understanding of power market trends and influencing factors not only contributes to more scientific power system planning but also to the better implementation of energy conservation and coal emission reduction policies. Many physicists have shown that space weather can have a disastrous impact on electricity infrastructure. Space weather can substantially change temperature conditions, global navigation satellite systems, and power grids, which play key roles in the electricity market. However, little economic literature focuses on how space weather influences the electricity market. As dangerous space weather phenomena occur frequently and with large uncertainty, these events cost the world more than USD 2.7 trillion every year [1] and, as reported by Deutsche Bank, will be among the biggest black swan events of the next decade. It is therefore important to examine the impact of space weather on the electricity market.
This paper adopts solar wind velocity (Vsw) as the index of solar activity and the disturbance storm time index (Dst) as the index of geomagnetic activity. We then match them with monthly electricity market data from the past 10 years. This work is based on a VAR (vector autoregression) model to measure the impact of space weather on the Chinese electricity market, including electric energy consumption, power station sales and revenue, and the coal consumption rate of the power supply.
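A minimal sketch of the kind of VAR estimation and impulse-response analysis described above, using statsmodels; the file name, column names, and lag selection are illustrative rather than the paper's exact specification.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Monthly series (illustrative column names): solar wind speed (Vsw), Dst index,
# and total electricity consumption of the whole society.
df = pd.read_csv("space_weather_electricity.csv", index_col="month", parse_dates=True)
data = df[["Vsw", "Dst", "electricity_consumption"]]

model = VAR(data)
results = model.fit(maxlags=12, ic="aic")   # lag order selected by AIC
print(results.summary())

# Impulse response of electricity consumption to a space-weather shock.
irf = results.irf(periods=10)
irf.plot(impulse="Vsw", response="electricity_consumption")
```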
The research contributes to the literature in three ways: (1) it provides empirical evidence on space weather for identifying and evaluating the factors influencing power demand, which is conducive to more scientific planning for power industry development. (2) It provides evidence that space weather increases the energy consumption of electric coal and provides a new perspective for the implementation of energy conservation and emission reduction policies. (3) It provides a quantitative study of the effects of space weather on the electricity market.
Space Weather
Space weather is a branch of space physics concerned with the time-varying conditions within the solar system, emphasizing the space surrounding the Earth, and includes two parts: solar activity and geomagnetic activity. Solar storms and geomagnetic storms occur frequently and with high uncertainty. Dangerous levels of solar and geomagnetic storms can occur every 3 years, and storms of disastrous levels can occur every 25 years.
There are many instances of such storms having had a disastrous impact on the Earth over the course of history. The solar and magnetic storms of May 1921 contributed to the collapse of New York's power system, overheating the telegraph system and causing train stations to burn down [2]. Additionally, telegraphic communications were poor in many parts of Japan on the day of the main perturbation of the magnetic storm that occurred from 21:00 on 14 May 1921 to 11:00 on 15 May 1921. The Carrington event of September 1859 was the largest solar storm event on record, causing massive telegraph system failures worldwide [3]. In March 1989, extremely violent solar activity triggered the most intense geomagnetic storm of the last century and greatly disrupted the ionosphere, leading directly to worldwide power system disruptions [4]. The storm affected Quebec's power system the most severely, where transformers and capacitors tripped directly and rapid voltage fluctuations occurred, and the local population of 6 million people suffered severe economic losses. Meanwhile, numerous reports of communication channel failures and alarms occurred in the United Kingdom, as the supergrid transformers at Norwich Main and Indian Queens were switched out of service with damaged core bolt insulation and overheating.
Solar Activity
Solar activity modulates the flux of galactic cosmic rays that influence the Earth's atmosphere and its reflectance of the energy flux coming from the Sun. This can induce long-term trends in the Earth's climate and can sometimes lead to great weather abnormalities.
Physicists usually use the velocity of solar wind to measure how active the solar activity is. Solar wind is plasma emitted from the sun that has huge magnetic energy in its interior [5].
Geomagnetic Activity
The magnetosphere is the region of space surrounding the Earth in which the primary magnetic field is the Earth's magnetic field; it is shaped by the interaction of the solar wind with the Earth's magnetic field [6]. Geomagnetic activity is caused by changes in the solar wind, with an effective transfer of energy that results in significant changes in the currents and plasma of the Earth's magnetosphere. Physicists usually use the Dst index to measure how active the geomagnetic activity is. The Dst index describes the temporal development of magnetic storms and the intensity of the ring current [7].
Space Weather and Electricity Market
There is little evidence in the economic literature that shows that space weather may have an impact on electricity markets. Based on daily power price data from 2000 to 2001 in the USA, it was found that stronger geomagnetic activity drives a significant increase in real-time power prices [8]. As unfavorable space weather conditions could inhibit power transmission, space weather is hypothesized to impede the transmission of power from low-cost to high-cost zones and thus to increase the dispersion in prices across zones. Additionally, in China, because there is a lack of liberalization in the energy trade market, it is hard to measure how space weather impacts price. However, no quantitative research has been done to explore the relationship between space weather and power demand or the power station revenue and non-renewable energy consumption during electricity generation. To explore more areas of how space weather impacts the electricity market, in this paper, we mainly focus on how space weather influences the electric energy consumption of society, the power station revenue, and the coal consumption rate of fossil plants.
Temperature Channel
Space weather can influence temperature, which in turn can influence the electricity market. For solar activity, early literature showed that solar brightness was the main cause of global warming before industrialization, while after industrialization the influence of solar brightness on the Earth's climate was reduced [9] and became insignificant. However, given the technical capabilities available to those scholars at the time, the possibility that ultraviolet light and magnetized plasma output from solar activity were important contributors to climate change could not be ruled out [10]. Indeed, many scholars have recently noted that solar activity can drive the long-wave radiation flux toward high-latitude regions of the Earth, which increases the temperature of the atmosphere and the world's oceans [11]. Additionally, high-energy electrons released by solar activity destroy stratospheric ozone, further enhancing the ultraviolet radiation reaching the Earth's surface and raising atmospheric temperatures.
Geomagnetic activity likewise drives long-wave radiation that significantly alters the Earth's atmospheric and oceanic temperatures [12]. Similar to solar activity, geomagnetic activity releases energetic electrons that break down the ozone layer, thereby increasing the Earth's exposure to ultraviolet radiation and energetic particles and raising atmospheric temperatures [13]. Moreover, ionization driven by the energetic electrons from solar and geomagnetic activity triggers chemical reactions that produce large amounts of particles and greenhouse gases [14], which can further contribute to global warming. Although the influence of geomagnetic activity on temperature is mainly concentrated in high-latitude regions [15], there is still substantial evidence that the impact of space weather on middle- and low-latitude areas exists [16].
These temperature changes could then affect the electricity market [17]. First, as the climate warms, consumers who consider environmental factors will shift from energy sources such as oil and fuel to electric energy [18]. Second, as the temperature rises, consumers' cooling demand increases the amount of electricity used for cooling, which in turn increases the demand for electricity [19] and the revenue of electricity producers [20]. However, in winter, consumers have less heating demand, which might in turn decrease power demand and power station revenue [21]. Multivariate regression analysis has been used to confirm that U.S. electricity and natural gas consumption is affected by climate factors such as temperature, relative humidity, and wind speed. Other scholars have discussed the relationship between higher temperatures and increased electricity demand in Athens (Greece), the United Kingdom, and Australia [18,22,23]. When the temperature is low, electricity demand increases as the temperature decreases because of the need for heating; when the temperature is high, electricity demand increases as the temperature increases because of the cooling demand.
Global Navigation Satellite System (GNSS) Channel
Space weather is related to the global navigation satellite system (GNSS), and the GNSS plays a significant role in the electricity market. Time-stamping services rely on timing devices and increasingly on GNSS constellations such as China's BeiDou Navigation Satellite System. Power stations, substations, and dispatching centers need GNSS timing devices to record the correct time and system switching sequence and to trigger protection actions for power system automation, power grid equipment operation, and grid mishaps [24]. When the timing devices are damaged because the GNSS is impacted by space weather, the electricity wasted in transmission and generation systems increases.
Additionally, satellite systems can be severely damaged by solar storms [25]. Historically, plasma from the solar wind and the magnetosphere has reduced the operational efficiency of geosynchronous-orbit satellites. During the Halloween storms of 2003, a combination of solar and magnetic storms caused 19 satellites to fail completely and 47 satellites to show anomalies. Satellites exposed to this harsh environment are subject to episodes of deep dielectric charging, surface charging, solar panel degradation, single-event upsets, and other deleterious effects. Either suddenly or gradually over time, such effects can have catastrophic or lifetime-shortening consequences for satellite systems [26]. Furthermore, when the Sun and the magnetosphere add extra energy to the low-density layers of the atmosphere at low Earth orbit (LEO) altitudes, spacecraft fly through a higher-density layer and experience a stronger drag force [27]. Artificial satellites then drift away from their intended orbits, interfering with operational efficiency.
The interaction of the solar wind with the magnetic field is also related to disturbances in the ionosphere. As GNSS satellite signals pass through the ionosphere, they are refracted and diffracted by these disturbances. The disturbed signals observed by a receiver on the ground may cause a loss of lock or cycle slips.
Power Grid and Pipeline Channel
Space weather can directly impact power grids and pipelines. Geomagnetic storms can dramatically change the Earth's intrinsic magnetosphere, generating an electromagnetic induction effect that drives geomagnetically induced currents (GICs) in the closed-loop systems formed by transformers and transmission lines. This results in network equipment damage and other chain failures and, in serious cases, can even cause widespread power outages and other severe accidents.
According to geomagnetic measurements, the operation of power grids, pipelines, and other technical systems is subject to geomagnetically induced effects [28]; since geomagnetic activity significantly disturbs the geomagnetically induced currents, it can result in a loss of efficiency in the power supply system [29].
As grid losses mean that electricity is wasted during transmission, electricity consumers need to use more electricity to achieve the same utility. This, in turn, drives up the demand for electricity throughout society. For power producers, because much power is lost in the supply process, producers need to increase generation to make up for that loss, meaning that they will sell more power and thus earn more profit.
Space Weather and Consumption Rate of Fossil Power Plants
The coal consumption rate of a power supply refers to the amount of standard coal consumed by a power plant to supply one kilowatt-hour of electricity to the grid and is an overall indicator of the operating economy of fossil power plants.
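As a concrete illustration of this definition, the short sketch below computes the rate from hypothetical plant figures (the numbers are placeholders, not data from this study).

```python
# Illustrative only: hypothetical figures, not data from this study.
def coal_consumption_rate(standard_coal_g, electricity_supplied_kwh):
    """Standard coal consumed (grams) per kWh of electricity supplied to the grid."""
    return standard_coal_g / electricity_supplied_kwh

# e.g., 3.05e11 g of standard coal to supply 1.0e9 kWh -> 305 g/kWh
print(coal_consumption_rate(3.05e11, 1.0e9))
```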
When fossil power plants generate electricity, power stations produce apparent power as a combination of active power and reactive power. The active power is the real power dissipated in the circuit, whereas the reactive power moves back and forth between the load and the source. The geomagnetically induced current (GIC) effect caused by geomagnetic activity drives the magnetic cores of transformers and power station equipment into saturation, which increases reactive power consumption. As a result, power stations need to generate more reactive power, and therefore consume more coal, to supply the same amount of electricity. This means that the coal consumption rate of fossil power plants increases [30].
Independent Variables
As shown in the literature review, this work uses vsw as a proxy for solar activity because vsw measures the solar wind stream traversing the interplanetary field close to the Earth. Additionally, vsw can always capture extreme solar activity. We then use dst to measure geomagnetic activity because this measure describes the temporal development of magnetic storms and the intensity of the ring current at low and mid latitudes. Additionally, the dst value is more precise than other indexes such as the geomagnetic Kp index, which only takes integer values from 0 to 9. However, the dst index is usually negative, and when it is negative, the smaller the dst index, the more severe the geomagnetic activity; if the dst index is positive, the geomagnetic activity is very calm and has little impact on the Earth. For convenience, we used the method proposed by Forbes [8]: we first set the positive part of the dst index to zero, which corresponds to calm geomagnetic activity, and we then flip the sign of the negative data, so that the larger the transformed dst index is, the stronger the geomagnetic activity.
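A minimal sketch of this transformation, assuming the dst series is held in a pandas Series named `dst` (the variable names are ours, not from the paper):

```python
import pandas as pd

def transform_dst(dst: pd.Series) -> pd.Series:
    """Forbes-style rectification: calm periods (dst >= 0) become 0; storm periods
    (dst < 0) become -dst, so larger values mean stronger geomagnetic activity."""
    out = dst.copy()
    out[out >= 0] = 0   # positive (calm) values set to zero
    return -out         # flip sign of the remaining negative values

# Example: pd.Series([15, -3, -120, 40]) -> [0, 3, 120, 0]
```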
Model
This paper adopts a VAR (vector autoregression) model [31] to estimate the relationship between space weather and electricity in China.
The model is of the general form Y_t = c + Σ_{j=1..p} β_j Y_{t−j} + Σ_{i=1..r} γ_i X_{t−i} + μ_t, where Y_t denotes the dependent variable in period t, Y_{t−j} and X_{t−i} denote the dependent variable at lag length p and the explanatory variable at lag length r, respectively, c, β_j, and γ_i are the parameters to be estimated, and μ_t denotes the random perturbation. The optimal lag lengths of the explained and explanatory variables are determined by the minimum values of the AIC (Akaike information criterion) or the FPE (Akaike's final prediction error) criterion. The AIC is an estimator of prediction error and thereby of the relative quality of statistical models for a given dataset, while the FPE criterion provides a measure of model quality by simulating the situation in which the model is tested on a different dataset. Both AIC and FPE take their minimum values when the lag lengths of the explained and explanatory variables are optimal. After taking logarithms of the coal consumption, electricity consumption, and power station revenue series, we performed an ADF test on all of the variables, and the variables passed the stationarity test.

Table 1 shows the AIC information concerning the electricity consumption of the whole of society. The optimal lag length is three, as an AIC lag length of three is the minimum. The stability test was then performed using the AR root test method; the model is stable if the moduli of all of the AR roots of the model and the reciprocals of their values are less than 1. Additionally, the data pass the balance test, as all of the dots are inside the circle, as shown in Figure 1.
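A minimal sketch of this workflow (ADF test, AIC-based lag selection, stability check, and impulse responses) using statsmodels; the column names and lag limits are our assumptions, and the paper's exact specification of how the space-weather series enter the system may differ:

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

# df columns assumed: 'log_consumption', 'dst', 'vsw' (monthly series)
def fit_space_weather_var(df: pd.DataFrame, maxlags: int = 8):
    # ADF stationarity check on each (log-transformed) variable
    for col in df.columns:
        stat, pvalue, *_ = adfuller(df[col].dropna())
        print(f"ADF {col}: stat={stat:.2f}, p={pvalue:.3f}")

    model = VAR(df.dropna())
    order = model.select_order(maxlags=maxlags)  # AIC / FPE / BIC / HQIC table
    p = order.aic                                # lag length minimizing the AIC
    results = model.fit(p)

    print("stable:", results.is_stable())        # AR-root stability check
    irf = results.irf(10)                        # impulse response functions
    return results, irf
```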
Table 2 shows that the vsw index can significantly promote the electricity consumption of the whole of society and passes the 90% robustness test. Additionally, the dst index can increase the electricity consumption of the whole of society and passes the 95% robustness test. The VAR model's impulse response analysis studies the effect of changes in the error term on the system, i.e., Granger causality. Though the empirical results cannot establish true causality, Granger causality can still test whether one time series S is a driving factor of another time series T: it tests whether the prediction of time series T when the past information of both S and T is known is more effective than the prediction of T using the past information of T alone, rather than measuring the direct impact between the indicators. Figure 2 is the impulse response function graph showing the Granger causality between the electricity consumption of the whole of society and space weather. We can learn from Figure 2 that during the optimal lag period, society's electricity consumption increased in response to the dst and vsw indexes, which means that solar activity and geomagnetic activity can explain the promotion of the electricity consumption of society. During the first period, the electricity consumption of society decreased in response to dst; during the second and third periods, the consumption increased, and it did not respond to dst later. Additionally, society's electricity consumption responded positively to vsw from the first period to the seventh period.

Table 3 shows the AIC information concerning the electricity consumption of the secondary sector. The optimal lag length is three, as an AIC lag length of three is the minimum, and the data pass the balance test, as all of the dots are inside the circle, as shown in Figure 3.

Table 4 shows that the vsw index can significantly promote the electricity consumption of the secondary sector and passes the 99% robustness test. Additionally, the dst index can increase the electricity consumption of the secondary sector and also passes the 99% robustness test. We can learn from Figure 4 that during the optimal lag period, the sector's electricity consumption increased in response to the dst and vsw indexes, which means that solar activity and geomagnetic activity help to explain the promotion of electricity consumption in the secondary sector. * represents statistical significance at the 10% level, ** at 5%, and *** at 1%, respectively.
Table 5 shows the AIC information concerning the electricity consumption of the industry sector. The optimal lag length is three, as an AIC lag length of three is the minimum, and the data pass the balance test, as all of the dots are inside the circle, as shown in Figure 5.

Table 6 shows that the vsw index can significantly promote electricity consumption in the industry sector and passes the 99% robustness test. Additionally, the dst index can increase the electricity consumption of the industry sector and also passes the 99% robustness test. We can learn from Figure 6 that during the optimal lag period, the electricity consumption of industry increased in response to the dst and vsw indexes, which means that solar activity and geomagnetic activity can promote the electricity consumption of the industry sector. * represents statistical significance at the 10% level, ** at 5%, and *** at 1%, respectively.

Table 7 shows the AIC information concerning power station revenue. The optimal lag length is two, as an AIC lag length of two is the minimum, and the data pass the balance test, as all of the dots are inside the circle, as shown in Figure 7 (balance test of the data concerning power station revenue).

Table 8 shows that vsw does not have a significant impact on power station revenue. However, the dst index helps to explain the increase in power station revenue and passes the 90% robustness test. We can learn from Figure 8 that during the optimal lag period, power station revenue increased in response to dst, which means that geomagnetic activity helps to explain the promotion of power station revenue. The power station revenue responded positively to the dst index from the first period to the third period.

Table 9 shows the AIC information concerning the coal consumption rate of fossil power plants. The optimal lag length is two, as an AIC lag length of two is the minimum, and the data pass the balance test, as all of the dots are inside the circle, as shown in Figure 9. * represents statistical significance at the 10% level, ** at 5%, and *** at 1%, respectively.
Space Weather and Coal Consumption Rate of Fossil Power Plants
Table 10 shows that vsw does not have a significant impact on the coal consumption of fossil power plants. However, the dst index can increase the coal consumption of fossil power plants and passes the 90% robustness test. Additionally, we can learn from Figure 10 that during the optimal lag period, the coal consumption rate of fossil power plants increased in response to dst, which means that geomagnetic activity helps explain the promotion of the coal consumption rate of fossil power plants. The coal consumption rate of fossil power plants responded positively to the dst index from the first period to the second period. * represents statistical significance at the 10% level, ** at 5%, and *** at 1%, respectively.
Conclusions
This work examined the impact of space weather on the electricity market in low- and middle-latitude areas. Using vsw as a proxy for solar activity and dst as a proxy for geomagnetic activity, the results show that solar activity and geomagnetic activity can promote societal electric energy consumption, in the secondary sector and the industry sector in particular. Additionally, geomagnetic activity can promote power station revenue and the coal consumption rate of fossil plants.
Based on the results, it is suggested that the Chinese government should consider more astrophysical factors when implementing energy conservation and emission reduction policies. Accordingly, astrophysical topics should be given more attention when developing physics textbooks for general education. Moreover, space weather forecasting should be improved in order to protect the electricity market in advance. The China National Space Administration should launch more satellites to observe various kinds of space weather from different positions in space, which is the basis for improving forecasting ability using historical data. Additionally, investment in technology for protecting power grids and power stations from damage caused by the geomagnetically induced current effect of geomagnetic activity should be given high consideration.
Author Contributions: T.W. and Z.Y. designed the study, analyzed the data, and wrote the manuscript. Z.Y. and M.G. collected the data and coordinated the data analysis. J.C. and Z.Y. revised the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research is supported by grants from the National Natural Science Foundation of China (Grant 71991482). | 6,701.8 | 2021-08-26T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Time-resolved Studies of Intermolecular Electronic Energy Transfer Processes between Molecules in Solution
A summary of the main energy transfer (ET) mechanisms between isolated pairs of molecules in solution is presented. Various ET models for many-molecule systems are discussed with regard to their basic assumptions and their range of applicability. Time-resolved studies of ET in solution allow the predominant ET mechanism to be determined and ET models to be tested.
INTRODUCTION
Energy transfer processes are widespread in nature. The intermolecular energy transfer (ET) from excited molecules to unexcited ones plays a major role, e.g., in photosynthesis and the visual process. 2 Further, there is great interest in the technical use of such ET processes because excitation energy can be transported with negligible losses over molecular distances (up to some 100 Å) within time intervals of less, or much less, than a nanosecond. In addition, energy can be transferred from one spectral region to another. Examples are the exciton migration in solids 3 or the sensitization of photophysical 4 and photochemical processes.
In the particular field of laser physics, ET processes have been widely used to extend the lasing range, increase the output efficiency, and influence the spectral and temporal characteristics of the output pulses of energy-transfer dye lasers 6,7 or solid-state laser materials. 8,9 Thus, the investigation of ET mechanisms is stimulated on the one hand by the aim of gaining principal knowledge of these mechanisms and on the other hand by the goal of achieving successful technical utilization of some basic ET processes.
The early studies of ET, after the classical papers of Förster, 1 Dexter 11 and Galanin, 12 were exclusively carried out with stationary methods, such as concentration quenching of luminescence and fluorescence depolarization.
It is, however, very difficult to distinguish between various ET mechanisms using these methods alone. Time-resolved ET studies have opened the possibility of determining the predominant energy transfer mechanism and, with increasing accuracy of the experimental data, of refining ET models.
The purpose of this paper is to give a summary of the main ET mechanisms occurring between interacting molecules in solution and to summarize briefly the substantial assumptions and limitations of the most important models. Further, we briefly describe some time-resolved studies of ET processes and compare their results with the theoretical predictions of the relevant ET models.
ENERGY TRANSFER MECHANISMS
Though nearly all ET theories are based on classical descriptions with parameters adapted to experiment, most of them can be derived from quantum mechanical considerations. An atomic system like a molecule in solution can be characterized by its Hamiltonian. If there is another independent molecule in the vicinity, some kind of mutual interaction can take place, the strength of which depends on the distance and the type of interaction energy. In the case of comparatively weak interaction between distant molecules the Hamiltonian of the system can be written as

H = H_Donor + H_Acceptor + H_int   (1)

Describing the initial and final states of the system of two molecules accordingly, an expression for the matrix element of the interaction Hamiltonian is obtained. Its first term is the Coulombic interaction, whereas the second term represents the exchange interaction; the latter requires a spatial overlap of the electronic clouds of the acceptor and donor molecules and is therefore expected to be significant only for small distances of the donor–acceptor pair. The Coulombic interaction is of long range, and the expansion of the interaction Hamiltonian in a series yields contributions of dipole–dipole, dipole–quadrupole, quadrupole–quadrupole, etc., interactions, the strength of which again depends on the mutual distance of donor and acceptor.
For a description of the energy transfer kinetics one could employ the density matrix approach 13 for two coupled two-level systems (Figure 1) without using perturbation theory. However, when taking into account the broad spectral bands and the extremely short transverse relaxation times of polyatomic molecules in solution, time-dependent perturbation theory proves to be more fruitful. The process considered is D* + A → D + A* (no reverse process is assumed to take place).
This approach starts from Fermi's Golden Rule for the transfer rate, implying that no reverse transfer is possible:

k_ET ∝ (2π/ħ) |H_if|² δ(ΔE)   (4)

where ΔE = 0 represents the condition of energy conservation, which is fulfilled by the spectral overlap of electronic and vibrational levels of donor and acceptor molecules (Figure 1).
As is obvious from Eqs. (5) and (7), the transfer rate strongly depends on the intermolecular distance R_DA of the donor–acceptor pair.
It should be noted that several energy transfer mechanisms can act simultaneously in solution. The above-mentioned dipole–dipole and exchange transfer can be combined with a diffusion process of neighbouring molecules which determines the transfer rate. 15,16 Pure radiative energy transfer processes are also possible. 17 The predominant mechanism is determined mainly by the mean distance between donor and acceptor and some characteristic lengths (R_T, l_Dif, R_0) (Figure 2).
So far we have only dealt with an isolated donor–acceptor pair. However, quantities measured by fluorescence or absorption spectroscopic methods in disordered systems are mean numbers of excited donors and acceptors per volume, which can be obtained by averaging with respect to the spatial distribution of both donor and acceptor molecules.
Thus, in modelling energy transfer processes one has to make certain assumptions concerning the spatial distribution of donor and acceptor molecules, independent of the predominant transfer mechanism. Several ET models are given below. In deriving these models a common rate equation method is used, implying that thermal equilibrium over the vibrational levels of the excited donor molecule is established and that coherent effects are neglected. The first condition can be violated, e.g., in intramolecular electronic ET, 8 whereas the latter may not be fulfilled in the case of very strong interaction, leading to transfer times in the fs range.
MODELLING OF INTERMOLECULAR ET
In the following we would like to summarize some features of the most important models for intermolecular energy transfer.
Radiative energy transfer
In this case the ET consists of two independent intramolecular processes taking place successively in time: the emission of a photon by the primarily excited donor and its absorption by an acceptor located at a certain distance. The absorption (= ET) probability is mainly determined by the optical density in the direction of observation. The self-absorption of fluorescence is an important case of radiative ET between identical molecules. It is very difficult to give a general description of self-absorption because the results strongly depend on the particular experimental geometry of observation. 13 However, it can be stated that reabsorption strongly influences the measured decay time, which is generally prolonged. In the case of strong self-absorption the decay time measured from the ensemble of excited molecules depends not only on the cavity dimensions and the geometry of excitation and detection, but also on the wavelength at which the fluorescence light is observed.
Under the assumption of weak self-absorption the measured decay time τ_m is independent of wavelength 19 and the true molecular fluorescence decay time τ can be derived from it by 20,21

τ_m = τ (1 − η_fl · a)^(−1)   (8)

with τ_m(c → 0) = τ. Here η_fl, f(λ), ε(λ) and c are the fluorescence quantum yield, the normalized fluorescence spectrum, the molecular extinction coefficient and the concentration of the emitting molecules, respectively; η_fl · a is the combined probability of emission followed by absorption of the light along its propagation path x. These formulae can be used as a rough estimate for transverse observation of a point-like excited volume, where the light penetrates a length x within the solution in the direction of observation (Figure 3).
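A small numerical illustration of this correction, based on our reconstruction of Eq. (8) above (the parameter values are arbitrary placeholders):

```python
def true_lifetime(tau_measured_ns: float, eta_fl: float, a: float) -> float:
    """Invert tau_m = tau / (1 - eta_fl * a): recover the true molecular
    fluorescence lifetime from the reabsorption-prolonged measured value."""
    return tau_measured_ns * (1.0 - eta_fl * a)

# e.g., tau_m = 4.6 ns, quantum yield 0.9, reabsorption probability a = 0.2
# -> tau ~ 3.8 ns (reabsorption indeed prolongs the measured decay)
print(true_lifetime(4.6, 0.9, 0.2))
```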
Diffusion controlled collisional energy transfer
In the diffusion model the excited donor molecule moves towards the unexcited acceptor and an encounter of both within a certain transfer (or encounter radius Rr) causes a de-excitation of the donor with a excitatio cuvette Figure 3 Point-like excitation in a semi-infinite cuvette containing molecules with weak self-absorption of fluorescence.
certain probability.The diffusive motion is assumed to be incoherent and isotropic among homogeneous distributed molecules and the mutual donor acceptor interaction is restricted to the small encounter sphere (Rr 3 10/22,23).The diffusion length /Dif which the donor travels during its excited state lifetime ro is determined by the solvent viscosity r/ /Dif-" /DDif "t'D (10) where DDif is the relative diffusion constant of both molecules in the solvent 1 DDif--DD + Da o:- For molecules in solution D is of the order of 10 -6 cm 2 s -1 10 -4 cm 2 s -1 and ET by encounter is likely for rDA < lDif.
The theory starts with the diffusion equation for the probability of the donor to be in the excited state at point r and time t: 15

∂p_D(r, t)/∂t = D_Dif Δ p_D(r, t) − p_D(r, t)/τ_D   (12)

Spatial averaging of the one-donor solution leads to the following equation for the number of donors still excited after a delta-function-like excitation pulse at t = 0: 22

D(t) = D(t = 0) e^(−t/τ_D) · exp{ −4π D_Dif R_T N_A t − 8π^(1/2) R_T² N_A (D_Dif t)^(1/2) }   (13)

On the other hand, the increase of the excited acceptors is given by Eq. (14), with the abbreviation k_T = 4π D_Dif R_T N_A; N_A and τ_A are the acceptor density in cm^−3 and the acceptor lifetime in the excited state, respectively.
Equations (13) and (14) represent the temporal behavior of an ensemble of excited donors and acceptors, respectively. Thus Eqs. (13) and (14) also describe the donor and acceptor fluorescence after excitation by a very short light pulse. Besides the shortening of the exponential decay of an isolated donor,

exp(−t/τ_D)  ⟹  exp[−(1/τ_D + k_T) t]   (17)

with k_T = 4π D_Dif R_T N_A, a characteristic nonexponential term proportional to exp[−const · t^(1/2)] appears.
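The following sketch evaluates the donor survival of Eq. (13) as reconstructed above; the parameter values are arbitrary but of the order of magnitude quoted in the text:

```python
import numpy as np

def donor_survival_diffusion(t_s, tau_D=4e-9, D=1e-5, R_T=8e-8, N_A=6e18):
    """Eq. (13): D(t)/D(0) for diffusion-controlled collisional transfer.
    t_s [s], tau_D [s], D [cm^2/s], R_T [cm] (encounter radius), N_A [cm^-3]."""
    k_T = 4.0 * np.pi * D * R_T * N_A                  # stationary rate, Eq. (17)
    transient = 8.0 * np.sqrt(np.pi) * R_T**2 * N_A * np.sqrt(D * t_s)
    return np.exp(-t_s / tau_D - k_T * t_s - transient)

t = np.linspace(0.0, 10e-9, 6)                         # 0-10 ns
print(donor_survival_diffusion(t))
```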
The diffusion theory can also be applied if there is no real motion of the molecules but only a migration of excitation energy, as occurs, e.g., in molecular crystals via excitons. Then, of course, the parameter D_Dif has to be replaced by the exciton diffusion constant, which strongly depends on the type of exciton and which can take values as large as D_x ≈ 10^−3–10^2 cm² s^−1. Thus, diffusion lengths up to some microns can be obtained, clearly demonstrating effective ET over large distances by excitons. 13 A similar case exists for molecules in solution if the excitation energy is first spread among the highly concentrated donor ensemble (N_D > N_A) by repeated transfer steps (D_i* + D_j → D_i + D_j*) until an encounter with an acceptor molecule occurs, D_n* + A → D_n + A*.
Thus, the donor excitation is quenched only in a final act (trap). The single step of such a cascade might be a real collision, a Coulomb interaction, or an exchange interaction.
Exchange interaction
Starting from the matrix element of the exchange interaction (Eq. 3), Dexter 11 derived the expression for the transfer rate of this kind of ET for an isolated donor–acceptor pair (Eq. 7). This formula holds for distances R > L, where the overlapping electronic wavefunctions can be assumed to decline exponentially. A measurable quantity is again only the statistical average of the individual donor decays caused by this transfer rate. This spatial averaging over an infinitely large number of acceptors randomly distributed in space leads to the following donor fluorescence decay function:

D(t) = D(t = 0) e^(−t/τ_D) exp{ −(4π/3) R_T³ N_A γ^(−3) g(e^γ · t/τ_D) }   (18)

Here R_T is the critical distance at which the exchange transfer occurs with the same probability as the spontaneous decay of an isolated donor, γ = 2R_T/L, and g(z) is given by

g(z) = −z ∫_0^1 exp(−z·y) (ln y)³ dy   (19)

which results from the spatial averaging (z = e^γ · t/τ_D). 24 For any z > 0 the integration can be performed term by term, leading to a Taylor series which, for sufficiently large z, can be expressed as

g(z) ≈ (ln z)³ + 1.732 (ln z)² + 5.934 (ln z) + 5.445 + O(e^(−z) (ln z)³ z^(−2))   (20)

For other nonuniform spatial distributions of donors and acceptors the spatial averaging will yield mean donor decay functions differing from Eq. (18) in magnitude as well as in time behavior. 5
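As a numerical cross-check of Eqs. (19) and (20) as reconstructed above, the sketch below compares the integral definition of g(z) with its asymptotic series:

```python
import numpy as np
from scipy.integrate import quad

def g_integral(z: float) -> float:
    """g(z) = -z * int_0^1 exp(-z*y) (ln y)^3 dy (reconstructed Eq. 19)."""
    val, _ = quad(lambda y: np.exp(-z * y) * np.log(y) ** 3, 0.0, 1.0, limit=200)
    return -z * val

def g_asymptotic(z: float) -> float:
    """Large-z expansion, Eq. (20), without the exponentially small remainder."""
    lnz = np.log(z)
    return lnz**3 + 1.732 * lnz**2 + 5.934 * lnz + 5.445

for z in (10.0, 100.0, 1000.0):
    print(z, g_integral(z), g_asymptotic(z))
```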
Förster model of long-range energy transfer and related models

In this model the donor transfers its excitation energy to one acceptor in its surroundings via a dipole–dipole interaction (DDI). Further assumptions with respect to the spatial distributions of the molecules are: i) The donor is surrounded by a homogeneous distribution of acceptor molecules which are fixed in space during the time of interaction. ii) No mutual influence between donors exists. iii) Each donor possesses its own acceptor environment.
These assumptions imply directly that the number of excited donors is much smaller than the number of unexcited acceptors (N_D ≪ N_A), a condition obviously fulfilled in steady-state experiments but clearly not in the case of donor excitation by ultrashort light pulses. We will refer to this in Section 4, taking the assumptions to be valid for now. Starting again from the rate equation for the deactivation of an excited donor,

dp_D(t)/dt = −(1/τ_D) p_D(t) − Σ_{i=1..N_A} k_ET(R_DA,i) p_D(t)   (21)

where p_D(t) is again the probability of finding the donor still excited and k_ET(R_DA) is given by Eq. (5). The sum extends over all acceptors in the vicinity of the donor under consideration.
Figure 4 shows the time behaviour of the donor decay (Eq. 22) and the rise of the acceptor excitation (Eq. 23) according to the Förster model after a δ-function-like excitation pulse of the donor ensemble. The optical transitions which form the overlap integral of Eq. (6) can be of singlet or triplet type. 7 When the DDI is weak because of a forbidden optical transition in the donor or acceptor, higher terms of the interaction (quadrupole) must be considered, 13,26 resulting in smaller R_0 values.
But large transfer distances are possible even for forbidden optical transitions of the donor if the quantum yield η_D is high (phosphorescence) and the acceptor optical transition is allowed (large ε_A). 1 The critical ET radius R_0 can be related to a critical concentration at which dipole–dipole ET becomes efficient:
N_A0 = 3/(2π^(3/2) R_0³)   or   c_A0 = 3000/(2π^(3/2) N R_0³)   (24)

(Here N_A0 is in cm^−3 and c_A0 in mol l^−1; N is Avogadro's number of molecules per mol.) 27 There are some specific spatial configurations to which Förster-type ET can be applied under the restrictions mentioned above. First, there is the two-dimensional ET in monolayers, 28 where the donor decay takes the form

D(t) = D(t = 0) exp{ −t/τ_D − Γ(2/3) · q · (t/τ_D)^(1/3) }   (25)

Here q denotes the ratio q = n_A/n_A0, where n_A and n_A0 are the two-dimensional concentration and critical concentration, n_A0 = 1/(π R_0²) in cm^−2, and Γ(2/3) = 1.354. Second, in a quasi-one-dimensional system the donor decay function after spatial averaging has the form of Eq. (26), 13 where l_A is the linear acceptor density in cm^−1.
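A quick numerical check of the reconstructed Eq. (24): for R_0 ≈ 52 Å (the value reported below for rhodamine 6G–malachite green), the critical concentration falls in the 10^−3 mol/l range quoted later for efficient Förster transfer.

```python
import math

N_AVOGADRO = 6.022e23  # mol^-1

def critical_concentration_mol_per_l(R0_angstrom: float) -> float:
    """Foerster critical acceptor concentration c_A0 = 3000 / (2 pi^{3/2} N R0^3),
    with R0 converted from Angstrom to cm (reconstructed Eq. 24)."""
    R0_cm = R0_angstrom * 1e-8
    return 3000.0 / (2.0 * math.pi**1.5 * N_AVOGADRO * R0_cm**3)

print(critical_concentration_mol_per_l(52.0))   # ~3e-3 mol/l
```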
EXAMPLES OF TIME RESOLVED ET STUDIES
The simplest observable manifestation of ET is the quenching of fluorescence due to the interaction between excited donors and unexcited acceptors or, conversely, the sensitization of the fluorescence of a primarily unexcited acceptor.
Stationary measurements of concentration quenching of the fluorescence 29 or the concentration depolarization 3 qualitatively indicate the existence of an ET process and permit, to some degree, the determination of characteristic transfer parameters.
Experimentally, an exact distinction between the various ET mechanisms is only possible by means of time-resolved spectroscopic methods. Fortunately, recent developments in lasers have delivered an almost ideal excitation source for time-resolved investigations of ET processes, capable of producing monochromatic, short-duration (nearly δ-like) pulse excitation. 33 This is necessary because the various mechanisms of ET differ mainly in the time domain shortly after excitation.
Until now, most of the time-resolved investigations of ET of molecules in solution have been made by single-shot or synchroscan streak camera and single-photon-counting measurements of fluorescence. By means of excite-and-probe-beam spectroscopy using a ps- or fs-continuum, the ET process can be observed over a broad spectral region, and even the excitation of non-fluorescent acceptors via ET processes can be studied with high time resolution (e.g., Ref. 37).
Confirmation of Förster's √t-law
There are a lot of experimental data which agree well with the theoretical predictions deduced from the common Förster model. In particular, in an intermediate acceptor concentration range of some c_A ≈ 10^−2–10^−3 mol l^−1 the validity of Förster's decay law (Eq. 17) is well proved 33,34,38 in rigid solutions.
An example is given in Figure 4, which shows the fluorescence decay and rise of a donor and an acceptor molecule according to Eqs. (22) and (23), respectively. The time-resolved fluorescence was detected by a streak camera system. 17 Another confirmation of the Förster model for the donor decay, in a time interval ranging from 10 ps up to several nanoseconds, was reported by Tredwell et al. 38 In these experiments the acceptor concentration ranged from 10^−2 to 10^−3 mol l^−1 in a rhodamine 6G (donor)–malachite green (acceptor) mixture in ethanol.

Figure 5: Fluorescence decay of rhodamine 6G (10^−4 mol/l) in ethanol with (A) 0 mol/l, (B) 10^−3 mol/l, (C) 2.5×10^−3 mol/l, (D) 5×10^−3 mol/l malachite green in ethanol; streak camera records. 38

For all concentrations a unique transfer radius of R_0 = 52 ± 1 Å was obtained from a plot of log I versus t^(1/2) (Eq. 22).
Deviations from the simple √t-law
In low-viscosity solvents, deviations from Förster's decay law (Eq. 22) have been observed owing to additional motion of the molecules during the time interval of the ET.
There are several attempts to take into account an additional diffusion of the excitation energy [39–41], which are all based on an expansion in the ratio of the diffusion length to the energy transfer radius, l_Dif/R_0. 39,42,49 Slight deviations from the dominating time behaviour according to Eq. (22) have been found experimentally for acceptor concentrations c_A > 1×10^−3 mol l^−1 in liquid solutions (e.g., in methanol, D_Dif ≈ 2×10^−5 cm² s^−1). 43,44 The experimental curves can be fitted well by modified equations for the donor decay which take into account the motion of the molecules.

ET under the influence of an inhomogeneous spatial distribution

Despite the various confirmations of the Förster model, several experimental deviations from the Förster √t-law have also been reported which cannot be explained by additional diffusion 5,35,37 in the low concentration range. Even at concentrations as low as 10^−4–10^−5 mol l^−1 a very efficient ET was observed, suggesting an increase of the critical transfer radius R_0 in this concentration range, which is in contradiction to its physical meaning (Table I).
The efficient donor–acceptor ET has also been demonstrated by the surprisingly fast rise of the acceptor excitation in ps excite-and-probe-beam experiments (Figure 6). The rise time to the maximum of the acceptor excited-state population amounts, for the donor–acceptor pairs rhodamine 6G (donor) with cresyl violet, DOTCI and oxazine 725 (acceptors), to t_r ≈ 20 ps, 40 ps and 50 ps, respectively, and shows no significant dependence on acceptor concentration or solvent viscosity. 37 Similarly short transfer times have been observed for rhodamine 6G to cresyl violet and malachite green. 36 These short rise times are also in discrepancy with the theoretical prediction of about 400 ps (at c_A ≈ 10^−2 mol l^−1) derived from Förster's formula (Figure 4). 34

Figure 6(a): Spectral changes in optical density at various delay times after excitation; curves obtained with a ps excite-and-probe spectrometer with a ps-continuum probe pulse; rhodamine 6G donor, cresyl violet acceptor, in ethanol; R_0 = 52 ± 2 Å. 37

Figure 6(b): Excitation density of donor and acceptor molecules versus delay time; rhodamine 6G direct excitation by a 5 ps SHG (532 nm) laser pulse, cresyl violet excitation via ET from rhodamine 6G; c_D = 1×10^−4 mol/l, c_A = 5×10^−5 mol/l. 37

This contradiction can be removed by the assumption of a generalized Förster model taking into account a possible inhomogeneous spatial distribution of acceptor molecules around the donor molecules. Charged dye molecules in solution are very likely to interact via weak static van der Waals forces described by a Lennard-Jones potential (e.g., Figure 7), causing a mutual attraction of these molecules. In this way an inhomogeneous spatial distribution of acceptor molecules can be produced. Assuming a simple step-like sphero-symmetrical inhomogeneous distribution, spatial averaging according to this inhomogeneity leads to an analytical expression with a modified time behaviour of the donor and acceptor ensemble (Eq. 27). 37 For a = b and R_1 = 0 this equation approaches the normal Förster-like donor decay for a homogeneous acceptor distribution. The dependence on R_1 (the radius of the forbidden volume) is not very significant if it is small enough (R_1 ≈ 10 Å).
By fitting an experimental curve with the numerical calculations of A(t) one can derive the parameters of the model (Figure 8), e.g., a/b ≈ 25, R_1 ≈ 10 Å, R_2 ≈ 20 Å for an ET of rhodamine 6G to cresyl violet in ethanol with R_0 = 50 ± 2 Å. 37 Thus, from experimental data one may obtain not only information about the efficiency of the ET (R_0) but also about the distribution of the molecules (a/b, R_1, R_2). Equation (27) can also be derived in a modified form for two-dimensional planar and also spherical geometries of the donor–acceptor assembly. 25 These models have been successfully applied in describing energy transfer processes in microstructured systems (micelles, bilayers), see, e.g., Refs. 45, 46 and 47, which present model systems of photosynthesis.
CONCLUSIONS
We have given a survey of the main ET mechanisms occurring between interacting molecules in solution and have outlined their range of occurrence. Further, we have summarized some of the most common models together with their substantial assumptions and limitations. It has been shown that these models, within their range of applicability, can be used to adequately describe energy transfer processes in solution over a wide range of concentration ratios, solvents and molecular structures.
We have not, for the sake of brevity, gone into the problem of mutual donor–donor interaction as outlined in Refs. 48 and 49. This problem is treated in detail in a forthcoming paper, together with a detailed investigation of energy transfer processes in microstructured systems (micelles, vesicles, bilayers). 25
Figure 1: Simplified energy level scheme of donor and acceptor molecules.
Figure 4: Time-resolved fluorescence decays indicating an ET from rhodamine 6G.
Figure 7: Inhomogeneous spatial distribution of acceptors around one excited donor; a, b: increased and bulk density of acceptor molecules; R_1, R_2: radii of the sphere of raised acceptor concentration and of the forbidden volume. 37
Table I
Critical transfer radius R0 calculated from the decay of the donor fluorescence in dependence on the acceptor concentration C A. | 5,195.4 | 1988-01-01T00:00:00.000 | [
"Chemistry",
"Physics"
] |
Performance of Inductive Coupled Power Transfer Versus the Coil Shape-Investigation using Finite Element Analysis
The objective of this paper is to investigate the impact of the spiral coil shape of inductive coupled power transfer on its performance. The coil shapes evaluated are: circular, square and pentagon spiral shapes. The coils are modelled in Ansoft Maxwell software. Simulations are carried out to determine the mutual inductance, coupling coefficient and magnetic flux density. The performance in term of magnetic flux density, mutual inductance and coupling coefficient of the three coils shapes are compared. Of the three shapes, the pentagon is shown to have the best performance in term of its mutual inductance, coupling coefficient and magnetic flux density.
Introduction
Wireless Power Transfer (WPT) is a technology widely researched nowadays. This is due to the fact that the conventional wired system is messy and inconvenient, and may also cause hazards such as electric shock or electrocution [1]. Meanwhile, WPT is convenient, tidy, and can help achieve cleaner and greener energy options [2]. The technology is being researched for various applications such as mobile device charging [3], electric vehicle charging [4], implantable medical devices, and even large-scale power transmission, i.e., transmitting electrical power without transmission lines [6].
The two most common WPT systems are Inductive Coupled Power Transfer (ICPT) and Capacitive Coupled Power Transfer (CCPT) [7]–[10]. An ICPT system uses two closely spaced coils, one primary (transmitter) and one secondary (receiver). The current flowing through the primary coil generates a magnetic flux that is received by the secondary coil. Capacitive coupling uses two parallel plate pairs separated by a gap; the energy fed to the receiver plate depends on the electric field in both plate pairs. Generally, ICPT operates based on the magnetic field and CCPT on the electric field. Figure 1 shows an overview of ICPT and CCPT. This paper focuses on ICPT. The ICPT system comprises two independent electrical systems with mutual coupling. The two coils or windings are physically separated by an air gap. Since ICPT involves magnetic coupling, there is a need to understand the magnetic field characteristics of the materials used under various shapes, parameters, and conditions. These magnetic field characteristics influence the mutual inductance and coupling coefficient and thus the performance of the ICPT. The shape and design of the coil play an important role in ICPT. Different shapes and designs of the coil generate different patterns of magnetic field and flux distribution. A shape may not generate a stable and suitable magnetic field, thereby affecting the efficiency of the system. For example, research done in [11] showed that a spiral coil without any core performs better than a rounded winding of coil and even has lower resistance. In [12], it was shown that a rectangular spiral coil can improve the power transfer capability compared to a circular one while also having a higher tolerance for misalignment.
Recognizing the significance of the coil shape for ICPT performance, this paper aims to investigate the performance of ICPT for various shapes of the spiral coil. The shapes evaluated are circular, square, and pentagon spiral coils. The performance is evaluated based on their magnetic flux density, mutual inductance, and coupling coefficient. The aim is to gain further insight into the magnetic field distribution in the coils and its impact on ICPT performance.
Coupling coefficient and Mutual Inductance
During wireless power transmission between the primary coil and the secondary coil, there will be some leakage inductance. Mutual inductance, by contrast, is caused by the magnetic flux from the primary coil cutting the secondary coil and inducing a voltage and current in it. In a loosely coupled system the leakage inductance can sometimes be higher than the mutual inductance, which reduces the magnetizing flux [1], [13]. The relationship between the mutual inductance M and the coupling coefficient k is given by M = k·√(L1·L2), where L1 is the inductance of the primary coil and L2 is the inductance of the secondary coil. The power transferred to the receiver side, P2, can then be calculated from the operating frequency ω, the primary coil current Ip, and the mutual inductance M. From this relationship, it is known that the output power also depends on the mutual inductance: higher mutual inductance means higher efficiency of the ICPT. Therefore, for the purpose of investigating the performance of ICPT for various coil shapes, this research observes the mutual inductance and coupling coefficient of the system. There are many methods to calculate the mutual inductance, such as Maxwell's formulas, Grover's method, Neumann's integrals, and Finite Element Analysis (FEA) [14]. In this work, the FEA software Ansoft Maxwell is used to determine the coupling coefficient and mutual inductance. The software is used to model the ICPT for all of the shapes, and the simulations are done in a 3-D environment. FEA is chosen because this method does not involve complex manual calculation, and the software promises consistent and reliable results for the various shapes evaluated.
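A small sketch of this relationship, rearranged to recover k from the FEA outputs M, L1, and L2 (the numerical values below are placeholders, not the values of Table 2):

```python
import math

def coupling_coefficient(M_uH: float, L1_uH: float, L2_uH: float) -> float:
    """k = M / sqrt(L1 * L2); dimensionless, 0 <= k <= 1 for passive coils."""
    return M_uH / math.sqrt(L1_uH * L2_uH)

# e.g., L1 = L2 = 10 uH and M = 2.5 uH -> k = 0.25
print(coupling_coefficient(2.5, 10.0, 10.0))
```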
Methodology
The comparative study of the impact of various coil shapes on the wireless power transfer performance is done using the FEA software Ansoft Maxwell. The plan view of the coil designs for the three shapes is shown in Figure 2. The 3-D designs of the coils are shown in Figure 3, Figure 4, and Figure 5 for the circular, square, and pentagon spiral coils, respectively. The parameters for the design are summarized in Table 1 and Table 2.
The coil dimensions, thickness, number of turns, and separation between traces are given in Table 1 for the three shapes compared. As seen from the table, the values are not exactly the same for all of the shapes but are very close to each other, because it is impossible to have exactly the same values for three different shapes. For this comparison, however, we are trying to achieve the same self-inductance for all three shapes, and therefore the other parameters have to be varied accordingly. As shown in Table 2, the self-inductances of the circular and pentagon coils are the same, while the square coil has a slightly higher self-inductance. Note that the primary (transmitter) and secondary (receiver) coils use exactly the same dimensions and parameter values.
In this paper, 3-D simulations were carried out to obtain the magnetic flux density for the three shapes. The mutual inductance and coupling coefficient at different air gap distances were then also obtained using 3-D simulations. The simulation results are presented and discussed in the following section. The magnetic flux density contour plots obtained from the FEA software are shown in Figure 6, Figure 7, and Figure 8 for the circular, square, and pentagon spiral coils, respectively. From Figure 6, for the circular spiral coil, the highest flux density is 4.7 × 10^-4 T and the lowest value is 3.37 × 10^-5 T. The flux density is consistent throughout the coil, with almost the entire coil area exhibiting the highest flux density value; the value then decreases further outside the coil area. From Figure 7, for the square spiral coil, the highest flux density is 5.13 × 10^-4 T and the lowest value is 3.77 × 10^-5 T. Compared to the circular spiral coil, the values are higher; however, the flux distribution is not as consistent. For the pentagon spiral coil, the highest flux density is 5.60 × 10^-4 T and the lowest value is 4.15 × 10^-5 T, as seen in Figure 8. This is the highest among the three shapes, and its flux distribution is more consistent than that of the square coil but less consistent than that of the circular coil. Overall, the pentagon shape has the highest flux density, and this is also reflected in the mutual inductance and coupling coefficient values plotted in Figure 9 and Figure 10, respectively. For the same distance, the pentagon has the highest mutual inductance and coupling coefficient, followed by the square and circular shapes. As the distance increases, the values decay approximately exponentially. All of the values are obtained from the FEA simulation software.
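To quantify the observed decay of the coupling with air gap, one can fit an exponential model to the simulated points; the sketch below illustrates the approach with placeholder arrays that would be replaced by the FEA results of Figures 9 and 10.

```python
import numpy as np
from scipy.optimize import curve_fit

def k_model(d_mm, k0, d0):
    """Empirical model: coupling coefficient decaying exponentially with gap d."""
    return k0 * np.exp(-d_mm / d0)

# Placeholder data (replace with the simulated coupling coefficients vs. air gap)
gap_mm = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
k_sim = np.array([0.30, 0.18, 0.11, 0.07, 0.04])

(k0, d0), _ = curve_fit(k_model, gap_mm, k_sim, p0=(0.5, 10.0))
print(f"k0 = {k0:.3f}, decay length d0 = {d0:.1f} mm")
```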
Conclusion
A comparative study of various coil shapes for ICPT has been carried out using FEA simulation. Three coil shapes were designed and simulated in Ansoft Maxwell to calculate the mutual inductance and coupling coefficient over various air gap distances. Different coil shapes produce different magnetic flux, and the flux contributes to the mutual inductance and coupling coefficient. The higher the magnetic flux, the higher the mutual inductance and the higher the coupling coefficient. However, the mutual inductance and coupling coefficient decrease as the air gap distance increases. Comparing the performance of the three shapes, the pentagon spiral shape shows the best performance. Future work is needed to validate the simulation results experimentally. | 2,052.4 | 2018-01-01T00:00:00.000 | [
"Physics"
] |
Evaluating the Role of Machine Learning in Defense Applications and Industry
Machine learning (ML) has become a critical technology in the defense sector, enabling the development of advanced systems for threat detection, decision making, and autonomous operations. However, the increasing use of ML in defense systems has raised ethical concerns related to accountability, transparency, and bias. In this paper
Introduction
In recent decades, the term machine learning (ML) has gained widespread popularity, extending beyond the scientific community. ML has been touted as a means to easily enhance the system in which it is implemented. In contrast to classical programming, ML leverages technological advancements such as increased computational capabilities and the collection of large amounts of data (Big Data) to facilitate the generation of algorithms that describe the relationship between inputs and outputs. These algorithms can then be utilized to predict the likelihood of future events statistically.
Technologies or techniques based on ML or, in more general terms, on artificial intelligence (AI) have revolutionized many industrial sectors because of their numerous advantages, including: • Improved efficiency: ML techniques can automate repetitive and time-consuming tasks, such as data entry and analysis, freeing up employees' time to focus on more complex and creative tasks [1].
•
Increased accuracy: ML algorithms can analyze large datasets quickly and accurately, providing insights that would be difficult for humans to identify [2].This can help businesses make more informed decisions, improve product quality, and reduce errors.
•
Cost savings: By automating tasks and improving accuracy, ML can help businesses save money on labor and reduce waste [3].
•
Predictive maintenance: ML can analyze data from sensors and other sources to identify when equipment is likely to fail, allowing businesses to perform maintenance before a breakdown occurs [4].
•
Fraud detection: ML algorithms can detect patterns in data that may indicate fraudulent activity, such as credit card fraud or insurance fraud [5].
• Improved supply-chain management: ML algorithms can analyze data from across the supply chain to identify areas for improvement, such as reducing inventory levels or improving delivery times [6].
Currently, many renowned companies, such as Google, Netflix, Amazon, Twitter, Facebook, IBM, Apple, Microsoft, and Oracle, are investing significant financial resources in ML technology to analyze customer profiles and develop new products for the market. For instance, Netflix leverages machine learning to analyze user search data and recommend content on the home screen that aligns with user preferences [7]. Another example is Amazon's Alexa technology, which records user voices and sends the data to the cloud-based Alexa Voice Services for analysis using ML algorithms to interpret user commands and subsequently provide relevant outputs to the device [8]. These examples bring technologies from the civilian world to tactical or military scenarios, such as virtual assistants (Apple's Siri or Amazon's Alexa), speech recognition, and text-to-speech for transcription and translation. Overall, ML within AI can provide significant benefits to businesses and industries, including the defense industry, mainly from a security perspective. Other technologies in conjunction with AI and ML were analyzed in relation to defense in [9], such as neuromorphic processors for advanced computing, which help to process high-volume data. Also, new and evolved tactical scenarios were proposed using emerging technologies in [9,10].
In addition, the defense sector is closely related to Public Protection and Disaster Relief (PPDR) applications and Mission-Critical Services (MCSs) due to their shared objective of safeguarding public safety and security. Both the defense sector and PPDR/MCS entities are involved in emergency response and crisis management. They work to mitigate the impact of disasters, natural or human-made, and protect civilians from harm. The defense sector and PPDR/MCS agencies are responsible for safeguarding critical infrastructure [11] such as power plants, communication networks, transportation systems, government facilities, and water infrastructure. This protection is crucial for maintaining essential services during emergencies [12]. In this regard, ML and artificial intelligence techniques in general can help attain the technological innovation that both institutions require through cooperation and coordination for effective responses and resilience in the face of a wide range of challenges. Notable research, such as that of Petrov et al. in [13] on achieving end-to-end reliability for mission-critical traffic in 5G network software, Spantideas et al. in [14] on intelligent Mission-Critical Services over Beyond 5G networks with a focus on control loop and proactive overload detection, and Skarin et al. in [15] aiming towards mission-critical control at the edge and over 5G, exemplifies the cutting-edge contributions in this field. These studies paved the way for the integration of advanced ML techniques into the defense sector's critical operations. One can see that ML methods play a pivotal role in optimizing these applications. Their adaptability and effectiveness make them directly applicable to the defense sector, enhancing its capabilities significantly. ML applications were discussed in [16] in relation to a tool called DIVVA that verifies and validates disaster-related information on social media platforms. This work used an ML technique based on a bidirectional LSTM model that achieved 84% accuracy in information classification.
In [17], one can find a list of ML techniques that can be used in disaster detection from six areas of interest: early warning damage, damage assessment, monitoring and detection, forecasting and predicting, post-disaster coordination and response, and long-term risk assessment and reduction. Artificial neural networks are the most promising approach today for the detection of, for example, earthquakes, according to [17]. However, a traditional Support Vector Machine (SVM) has been used in research to detect changes in images, allowing for a classification process. Likewise, a variety of mission-critical applications are emerging for critical infrastructures and missions. Within public networks based on 5G and future 6G technologies, centralized deep reinforcement learning (CDRL) and federated DRL (FDRL) are ML solutions for critical services [18]. All these solutions, when the defense sector comes into play, must be analyzed from an ethical and legal point of view due to the human involvement required. In this context, our contribution arises.
Our Contribution
The pervasive influence of machine learning (ML) has extended its transformative reach across various industries, and the defense sector is unequivocally emblematic of this paradigm shift. However, the defense sector is more delicate due to the nature and repercussions that decisions can have on human lives. For this reason, several works in the literature have proposed the use of artificial intelligence and machine learning in tactical scenarios or military systems. However, no work has assessed the repercussions that this could have from an ethical and legal point of view. In this work, we extend the contribution of the authors of [19].
This paper provides the following contributions:
• An assessment of the implications of using ML algorithms and integrating them into defense systems from an ethical and legal perspective.
• A presentation of the requirements of a framework for carrying out ML-defense integration from an ethical and legal point of view.
• An analysis of the challenges, advantages, and disadvantages of including ML in defense systems, including an example of a project that has implemented ML.
The introduction provides a brief overview of the growing interest in leveraging ML technologies for defense purposes and outlines the paper's objective to uncover potential problems and roadblocks.
Method
The methodology followed was based on the authors' extensive military experience and real-world encounters with ML applications. We intended to identify and discuss the inherent issues of utilizing ML in military applications in order to support better-informed decision making about using ML effectively in this sector.
• Data collection: The authors collected data through a combination of structured technical research and real projects. They engaged with fellow military personnel, defense technologists, and ML experts to gather insights and anecdotes related to the challenges faced during the deployment of ML systems. The projects are not described in their entirety for confidentiality reasons given the military environment.
• Case-study selection: The authors selected a case study involving various ML applications within the defense sector. This case study was the SALAs project (Section 6) and included examples from a survey regarding opinions on the use of lethal autonomous weapon systems.
• Problem identification: The collected data were systematically analyzed to identify recurring challenges across the authors' own experiences. The challenges were categorized into technical, ethical, operational, and strategic dimensions to provide a holistic view.
• Comparative analysis: The authors performed a comparative analysis of the identified challenges in order to create a framework. Although the challenges could be interpreted as applying to artificial intelligence in general, the analysis focused on the military perspective.
Drawing mainly from their collective experience, the authors formulated practical recommendations for mitigating these challenges and advancing the successful integration of ML in defense. These recommendations encompass legal and ethical aspects that have not yet been considered in the literature.
ML and Defense Sector
The usage of ML in defense has the potential to enhance security, improve decision making, and increase efficiency. The examples mentioned above for the civilian world can be extended to applications in defense. The tactical landscape would be completely transformed, including new technologies. This transformation was explained in [9], where ten use cases were proposed involving emerging technologies such as 5G and future 6G networks. ML algorithms arose decades ago; however, their implementation in real systems has not been practical until recently. Today, 5G and future 6G communication systems bring with them techniques that improve processing and the amount of data that can be used, making it possible to introduce ML into various systems. For this reason, it is time to address the dilemma that the use of ML in defense could pose. Virtual assistants recognize and respond to spoken commands, allowing for the hands-free operation of devices and appliances and also providing automated responses to common questions or issues; this increases and guarantees the security of soldiers on the battlefield. Speech-to-text technology converts spoken language into text, enabling the real-time transcription of speeches, lectures, and meetings, which improves communications and, again, the security and safety of humans. However, using these technologies for military purposes also raises significant legal and ethical concerns that must be carefully considered. For several reasons, assessing the legal and ethical implications of AI and ML in the defense industry is essential. Firstly, the use of these technologies in defense is subject to national and international legal frameworks that regulate the development, production, and use of weapons. For example, developing and deploying autonomous weapons systems raises concerns about accountability, transparency, and human control. These issues are critical for ensuring compliance with international law and maintaining global peace and security.
Secondly, AI and ML applications in defense can lead to unintended consequences, including biases, errors, and discrimination. For example, facial recognition technologies used in defense can perpetuate racial and gender biases. As such, it is essential to establish a legal framework that takes into account the potential risks and harms associated with these technologies.
Thirdly, ethical considerations must be taken into account when using AI and ML in defense. These technologies raise questions about the moral responsibility of individuals and institutions, the value of human life, and the protection of human rights. For example, the use of AI and ML in decision-making processes may lead to decisions that conflict with ethical principles, such as fairness, justice, and equality.
Significant shortcomings remain despite the need to establish a legal framework and address ethical considerations when using AI and ML in defense. Many legal frameworks are outdated and do not fully account for the unique challenges posed by AI and ML applications in defense. Additionally, ethical considerations are often neglected, and there is a lack of consensus as to which ethical principles should guide the development and use of these technologies.
Given the basic idea behind how ML works and the use that large companies are currently making of it in various fields, this article analyzes the ethical and legal issues involved in applying this technology in projects related to a country's defense sector, as well as the disadvantages, challenges, and advantages it entails. It should be understood that when talking about the defense of a country or a war, it is not only necessary to think of those physical events with destructive results involving tangible materials. We must also think of computer attacks, the hacking of networks, terrorism, social alarm and insecurity, piracy, uncontrolled immigration, threats from space, and any other action or omission that provokes a situation of hostility at the national level.
We extracted a series of advantages and disadvantages regarding the introduction of ML in different defense systems, as shown in Figure 1. The union of ML with defense technologies offers several advantages. Firstly, it enables the simulation of possible war scenarios, allowing defense agencies to evaluate different strategies and develop effective countermeasures. Additionally, machine learning enhances the training of troops by providing realistic and dynamic simulations, improving their decision-making abilities and situational awareness. Furthermore, ML algorithms can analyze historical data to extract valuable insights, helping operators understand and respond to complex contextual situations more effectively. By processing vast amounts of information, these algorithms can also provide operators with additional and useful intelligence, assisting in decision-making processes and enhancing overall operational efficiency. Another benefit is the potential for early warning and the prevention of future incidents. These algorithms can analyze patterns and anomalies in data to identify potential risks, enabling proactive measures to be taken to mitigate or prevent potential threats. However, there are also notable disadvantages to consider. Implementing machine learning in defense systems requires a significant financial investment, as it involves developing and maintaining robust infrastructure, acquiring advanced technologies, and training personnel. Obtaining relevant data to train machine learning algorithms can also be challenging, as it often requires extensive and diverse datasets that accurately represent real-world scenarios.
Defense Systems for Integrating ML
Various systems in defense already include ML-based technologies. We collected the main candidates that have been selected as pioneers in the application of these techniques to evolve defensive technologies, as follows:
1. Autonomous systems: These include unmanned aerial vehicles (UAVs) or ground vehicles, which leverage ML algorithms to enable them to navigate and complete missions without direct human control. For example, Ref. [20] proposed an autonomous network using multi-agent reinforcement learning for early threat detection, which is an increasingly important part of the cybersecurity landscape given the growing scale and scope of cyberattacks.
2. Predictive maintenance: ML algorithms are used to analyze data from sensors and other sources to predict when equipment may fail, allowing maintenance to be performed before a breakdown occurs. The SOPRENE project [21] proposed the use of ML for predictive maintenance (a minimal illustrative sketch is given after this list).
3. Cybersecurity: ML can analyze network traffic to detect anomalies and potential threats, enabling faster response times and reducing the risk of cyber attacks [22].
4. Situational awareness: ML algorithms can analyze data from a variety of sources, including sensors, cameras, and social media, to provide real-time situational awareness to military personnel [23]. The automated detection of refugee dwellings from satellite imagery using multi-class graph-cut segmentation and shadow information was presented in [24].
5. Logistics and supply-chain management: ML algorithms can optimize logistics and supply-chain management by analyzing data on inventory levels, shipping times, and other factors to improve efficiency and reduce costs [25].
6. Threat detection: ML algorithms can be used to detect potential threats, such as explosives or weapons, at security checkpoints or during cargo inspections [20].
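As a rough illustration of the predictive-maintenance use case in item 2, the sketch below trains a classifier on synthetic sensor features to flag equipment likely to fail. The feature names, data, and model choice are assumptions for illustration only and are not taken from the SOPRENE project [21].

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical sensor readings: vibration level, temperature, hours since overhaul.
X = np.column_stack([
    rng.normal(0.5, 0.2, n),   # vibration level
    rng.normal(70, 10, n),     # temperature
    rng.uniform(0, 5000, n),   # hours since last overhaul
])
# Synthetic rule: failures become more likely with vibration and accumulated wear.
y = ((2.0 * X[:, 0] + 0.0002 * X[:, 2] + rng.normal(0, 0.3, n)) > 1.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```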
Overall, ML-based technologies have the potential to significantly enhance the capabilities of defense systems and personnel, improving efficiency, accuracy, and safety.
Legal Framework
As history has shown, novel technologies often emerge and gain widespread use among the population, leading to legal and political interventions to regulate their usage. Such regulations may take the form of ethical, health-related, or social recommendations. The advent of ML techniques and their ability to deliver automated decisions without human intervention is no exception to this trend, as their regulation has only been considered in recent years while they materialized from a purely theoretical concept into practical applications. The European Commission (EC) published a "White Paper" in Brussels in February 2020 [26], which was the first significant document on the legal regulation of AI at the European level. Its primary goal was to create an ecosystem of excellence that can support the development and adoption of artificial intelligence across the EU economy and public administration by leveraging the strengths of industrial and professional markets and the huge volume of digital data produced worldwide.
According to an official statement from the EC [27], there are seven essential requirements for AI legislation:
• Human action and oversight.
Additionally, special concern has been expressed about the protection of fundamental rights, such as personal data protection, privacy, and non-discrimination, as well as the civil and criminal liability of actions performed by autonomous systems or machines utilizing machine learning.
Based on these considerations, a member country of the European Commission launched a regulatory pilot project on AI called "sandbox" in June 2022 [28]. Sandbox will serve as a tool to carry out a first regulatory project based on the experience of legislative authorities and companies developing AI to identify the best practices for the implementation of this technology. This pilot project is expected to be the starting point for the European Regulation on AI and is anticipated to be implemented within the next two years.
Ethical Framework
The ethical dilemmas that AI or the use of ML algorithms can raise have been discussed in many fields. For example, UNESCO addressed in [29] the ethical dilemmas of AI in different sectors, such as the automotive, legal, and art sectors. However, the defense sector has not yet been taken into account, which further justifies our analysis. An important point today is meeting the Sustainable Development Goals [30]; likewise, the defense sector and its applications must analyze these requirements and consider their implications.
Unquestionably, the use of warfare systems must comply with International Humanitarian Law (IHL), which establishes a set of rules aimed at limiting the effects of armed conflicts for humanitarian reasons (Hague Law). To this end, it is crucial that human leaders supervise war actions to establish an adequate level of discrimination and precaution, depending on the tactical situation, in order to ensure that the risks to non-combatants are proportional to the military objectives' importance. However, in a war situation where the enemy could use autonomous and lethal weapon systems, the semi-automatic establishment of appropriate levels may not be compatible with protecting the units activated on the battlefield and the mission's success.
While military AI can be appropriate for its speed, determination, and accuracy, it can also be concerning due to its making of decisions without temperance, meditation, or adaptation to the various mazes of warfare. Factors such as identifying combatants, peaceful populations, civilians, and military allies; attacks with inordinate intensity; threats; system failures; impersonations; and insufficiently experienced deployed systems could promote an escalation in the war situation and increase the risk instead of providing advantages. It is challenging for a machine to automatically attend to and evaluate the principles of distinction and proportionality, such as distinguishing between a terrified civilian and a dangerous combatant in an urban scenario or applying just defensive force in the face of aggression, which requires a quantitative, qualitative, and ethical assessment.
Other essential aspects are accountability and human dignity. Responsibility serves as a deterrent against unconscionable actions, as a guarantor of law enforcement, and as a moral punishment. Human dignity is crucial from a moral standpoint because a human death chosen by a completely autonomous algorithm can lead to understanding human life as an object. The decision to eliminate a life should involve at least a prior moral judgment.
A recent example of the ethical challenges posed by the use of ML in military tactical environments is Google's decision not to renew Project Maven with the US Pentagon. This decision was controversial, with more than 4000 employees requesting the project's cancellation, suggesting a lack of confidence in the government's use of this system [31].
Finally, it is important to note that in 2015, the Open Roboethics Institute (ORI) conducted an international survey in which it received responses from more than 1000 people from 54 different countries. Among other aspects, this survey asked for opinions on the use of lethal autonomous weapon systems (SALAs) [32]. The results from the SALAs survey [32] are collected and shown in Table 1.
Table 1. Opinion survey on the use of SALAs.
67%: All types of SALAs should be banned internationally.
56%: The use and development of SALAs should be prohibited.
85%: SALAs should not be used for offensive purposes.
71%: Remotely operated weapon systems should be used instead of SALAs.
60%: The respondent would prefer to be attacked by remotely operated systems rather than SALAs.
Based on the data presented in Table 1, it seems that from an ethical point of view, the development and use of SALAs were not very well received by the respondents. For example, 67% of the respondents thought that all types of SALAs should be banned internationally. Moral and ethical factors prevailed over war strategies and successful operations, refuting the famous expression "all is fair in war".
Example of ML Application in Defense: ATLAS
The Advanced Targeting and Lethality Automated System (ATLAS) project [33] pursued as a primary goal the equipment of US combat tanks with AI and ML capabilities, allowing them to identify and attack targets three times faster than conventional methods. Therefore, from an AI and ML point of view, the ATLAS project focused on developing a learning algorithm capable of processing a large volume of data collected by sensors. Its purpose was to detect and identify threats automatically and assign the corresponding orientation and elevation to weapon systems so that they can proceed with an attack [34]. In addition, an ML algorithm capable of directly assigning the best weapon from those available to shoot down the detected target was also implemented. In pursuit of this objective, ATLAS concentrated on advancing the following technology areas:
• Data collection regarding possible types of military targets and the pre-training of the ML algorithm used.
• Image processing, where we should highlight the capacity for the detection, classification, recognition, identification, and tracking of targets that can be achieved by applying ML techniques for this purpose.
• Trigger control: in this area, advanced targeting algorithms, the automation of the firing process, and the recommendation of the weapon to be used according to the identified target are very important.
• The technical support integrated into the combat vehicle, since a high-voltage power supply system (600 Vdc) and the integration of sensors and electronics are necessary.
• Sensors: in order to carry out this automation and to provide the ML algorithm with real-time working data, the tanks are equipped with image sensors in the visible, NIR (near-infrared), SWIR (short-wave infrared), MWIR (medium-wave infrared), and LWIR (long-wave infrared) wavebands; gyro mechanisms that make possible the continuous 360° rotation of the sensors and rangefinder; and LADAR (laser detection and ranging)/LIDAR (light detection and ranging)-type lasers.
Challenges in the Defense Sector
ML has great prospects in applications related to military environments. However, to create reliable products, it is still necessary to resort to simulation systems, knowledge, and data engineering, according to reference [35]. The main challenges that emerged from this evaluation are summarized in Figure 2.
• Possible friendly fire: There is a possibility of fire between units of the same side due to the misidentification of assets, confusion between allies and hostile units, errors in communicating the nature of identified assets, or insufficient contextualization during objective development. The automation of tasks is associated with a lack of tactical patience or even pre-action meditation, which can result in unassessed collateral damage.
• Adversarial attacks against ML models: With the passage of time and the widespread use of this technology, the emergence of methods for attacking or interfering with these systems (adversarial evasion attacks) has led to the need to study the reliability, privacy, and security of these algorithms. For example, in imaging systems, noise imperceptible to the human eye could be inserted in such a way as to induce a reliable classification error during jamming (an illustrative gradient-sign sketch is given after this list). Of note are "white-box" attacks, which occur when the enemy knows how the algorithm (of the deep neural network) works, and "black-box" attacks, which occur when the adversary knows only the type of input and output of the system. Researchers at Stony Brook University (New York) and IBM developed the ARES evaluative framework [36] based on reinforcement learning for adversarial ML, allowing researchers to explore system-level attack/defense strategies and re-examine target defense strategies as a whole.
• Transparency: As in safety-critical systems, these types of applications require high transparency, high security, and building user trust. Regarding transparency, the challenges are to improve user confidence in the recommendations given by the system; identify previously unknown causal relationships that can be tested with other methods; determine the limits of system performance; ensure fairness to avoid systematic biases that may result in unequal treatment for some cases; and improve model interpretability, so that users can predict the system recommendations, understand the model parameters, and understand the training algorithm.
• Ethics in decisions made by machines: There is a degradation of "humanization" in the decisions made. A machine does not consider the death of civilians as collateral damage or take into account the morality of annihilating the life of an enemy combatant, even when they have indicated their surrender. Thus, it is a major challenge for a machine to learn and contemplate this criterion in its decision algorithm.
• The scarcity of data and the lack of values: The performance of an ML algorithm depends mainly on the quality of the samples, the availability of large amounts of samples or data, and whether the data are optimal or meaningful for the exercise. For example, in the case of the US Army, which is a great power with a great deal of combat experience and a very large amount of recorded data, it may be considered that the number of samples is insufficient for the application of ML in a real, substantial, and imminent confrontation. On the one hand, there is a large amount of unknown data on the adversary, and on the other hand, obtaining data in real time during combat is difficult due to the impossibility of computing and processing the extracted information. If the existing database originates from exercises, it will definitely be limited to certain levels of security and costs and will therefore be substantially different from a real battle. As a solution, it has been proposed to fill this data gap through very arduous fieldwork, taking all possible real values and then identifying them, labeling them, and creating a database with labels according to the needs of the ML algorithm. Another option would be to sample using real-time strategy games, where the commander can play various roles in different scenarios and thus accumulate experience in the form of data.
• Failures in the evaluation criteria: The ultimate goal in developing ML algorithms is creating a system that aids decision making based on accumulated experience and experience gained in new scenarios. However, the main challenge is to determine the extent to which the algorithm is valid and reliable for decision making and to make it extensible to other scenarios. Therefore, the decision-making process of the created system requires a large number of experiments and simulations to test its effectiveness.
• The complexity of modeling a tactical environment: A large amount of information is relevant concerning a battlefield, among which the state of the combat units and weapons present is of relative importance. The situation is complex due to the difficulty of controlling the behavior of the units, modeling the battlefield environment, intuiting the mechanisms of action, and identifying the evaluation criteria. To all this, we must add the determination of factors such as the efficiency and cost of the war; the damage produced and the need to sustain the resources deployed; and the need to take over air, land, or sea space.
On the other hand, political and economic factors also exist, since military objectives often represent political decisions. Therefore, on some occasions, regardless of the outcome of a battle, if political and economic containment is achieved, the consequences could be positive and valid as a strategy for a tactical environment. All this makes it difficult to build the upper layers of the modeling system.
• Limited and uncertain information: During a battle, the information received may be incomplete, and the source may not be certain. Therefore, making decisions with these obstacles may not guarantee profit. Because of this, it is imperative to consider how to discretize time and action, create temporary windows of advantage, and seize the initiative in order to attain military or strategic objectives. Command staff who make decisions according to routines or prescribed protocols will be at a tactical disadvantage. Ideally, they should pay attention to contingencies and innovate their tactics as needed. However, the transfer of historical knowledge by military experts in the form of facts and rules is an indisputable premise to begin the development of an ML algorithm to be applied in tactical environments. As a basis, the system must know what is meant by the military domain; the performance of the weapons used; the models of warfare (asymmetric, symmetric, hybrid, destructive, nuclear, etc.); the relevant decision models (political, economic, securing civilians, the conquest of territory, etc.); the rules in armed conflicts; and the rules of operation between combatants or between allies.
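To illustrate the adversarial-evasion challenge described above, the following sketch applies a fast-gradient-sign perturbation to the input of a toy linear classifier. The model, data, and epsilon value are hypothetical; real attacks of this kind typically target deep neural networks, but the gradient-sign principle is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=20)   # weights of a toy logistic "image" classifier
b = 0.0
x = 0.15 * w              # a toy input the model classifies confidently as class 1
y_true = 1

def predict_prob(v):
    # Probability of class 1 under the toy logistic model.
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))

# Gradient of the cross-entropy loss with respect to the input.
grad_x = (predict_prob(x) - y_true) * w

# Fast-gradient-sign perturbation with a small epsilon.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:    ", round(float(predict_prob(x)), 3))
print("perturbed prediction:", round(float(predict_prob(x_adv)), 3))
```

A small per-feature perturbation of fixed magnitude is enough to pull the predicted probability sharply toward the wrong class, which is the essence of the white-box evasion attacks mentioned above.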
Following on from the above information, the challenges faced in [37] during war simulation to predict the winning warship using Random Forest are presented as an example. The example study is presented in Table 2, alongside the challenges proposed.
Challenges (application to the warfare simulation predicting the battleship winner using Random Forest [37]):
• Possible friendly fire: In this case, this challenge did not apply, since the algorithm did not have to identify friendly or enemy units; it only predicted the winner.
• Adversarial attacks against ML models: The enemy could match the disposition of dummy weapons. One would have to check whether this could affect the training data and the final result.
• Transparency: In order for the command to completely trust the prediction of the algorithm, it should be aware of all sources and constraints. In this case, the information appeared transparent but was also very simple.
• The complexity of modeling the tactical environment: The study did not consider the scenario in which the battle would take place nor the initial state of the combat units. It only took into account the size, speed, capacity, number of crew, attack, additional attack, and defense of the ships.
• Limited and uncertain information: The study offered only one final winner. It was necessary to discretize the time and action in case there was, for example, a misfire or weapon limitation. This would change the situation.
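For concreteness, the sketch below mimics the setup summarized above: a Random Forest predicting the winning ship from per-ship attributes such as size, speed, capacity, crew, attack, additional attack, and defense. The data are synthetic placeholders rather than the dataset used in [37].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 500
# Feature vector: difference between ship A and ship B for each of the seven
# attributes (size, speed, capacity, crew, attack, additional attack, defense).
X = rng.normal(size=(n, 7))
# Synthetic label: ship A wins when its combined attack/defense advantage is positive.
y = ((X[:, 4] + 0.5 * X[:, 5] + X[:, 6] + 0.2 * X[:, 1]) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("feature importances:", np.round(clf.feature_importances_, 3))
```

Even this toy version exposes the limitations listed above: the model sees only static attribute differences, with no scenario, timing, or uncertainty in the inputs.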
Conclusions
Machine learning (ML) has become a pervasive force, revolutionizing industries far and wide, and defense is no exception. Its implementation offers a spectrum of advantages, encompassing heightened efficiency, substantial cost savings, adept fraud detection mechanisms, and the refined management of intricate supply chains. However, the incorporation of ML into defense operations unearths a host of intricate legal and ethical concerns that loom large. Currently, there is a discernible void in terms of comprehensive legal and ethical frameworks capable of governing these novel technologies. This void poses an imminent risk, as the absence of clear guidelines could inadvertently pave the way for unforeseen perils to emerge. The urgency of instituting a comprehensive legal framework is paramount, primarily to pre-emptively address the latent threats and potential harms that ML might introduce into defense systems.
Ethical considerations similarly cast a lengthy shadow over the unbridled integration of ML within defense. These span the realm of both individual and institutional moral responsibilities. The ethical conundrums expand when confronted with matters of safeguarding human life and protecting the rights that underpin human dignity. One of the most striking and controversial aspects revolves around the delegation of decision-making authority to autonomous machines bolstered by ML capabilities, operating without the capacity for moral discernment. While this marks a departure from the norm, it remains a distinctive issue demanding earnest contemplation.
The nexus of legal and ethical implications underscores the urgency of contemplating the application of AI and ML in defense contexts. International laws must be meticulously adhered to and human rights vigilantly safeguarded, necessitating the establishment of robust ethical precepts. However, for all the efforts to underscore the need for these frameworks, tangible gaps persist, reminding us that the path to their implementation is intricate and multifaceted.
Therefore, in this work, we carried out a crucial assessment to determine the requirements and weaknesses related to the creation of future legal and ethical frameworks. These frameworks must take into account the conditions of human nature that surround the defense sector and tactical scenarios. In this analysis, we found a limitation in the lack of information due to the confidential nature of the sector. This, in turn, became the central pillar for defining such frameworks. While establishing a legal framework and addressing ethical concerns pose challenges, taking advantage of potential benefits and minimizing risks and harms is necessary. At the same time, this analysis included the advantages of strengthening defense systems with ML, such as improving training systems for combatants or tactical situation prediction that guarantees the success of the mission. The disadvantages include the difficulty of carrying out the testing phase for systems and the initial deployment barriers. In addition, the challenges of the application of ML in defense projects were studied and identified, such as complexity reduction and faster recovery from failures. These ongoing challenges and potential opportunities highlight possible research directions for advancing the application of machine learning in defense while also addressing legal, ethical, and operational considerations.
Figure 1. Advantages and disadvantages of applying ML in military environments.
Figure 2. Challenges of applying ML in military environments.
Table 2. Example of a training data challenge. | 7,708.4 | 2023-10-22T00:00:00.000 | [
"Computer Science",
"Political Science",
"Law"
] |
Toll-Like Receptor 21 of Chicken and Duck Recognize a Broad Array of Immunostimulatory CpG-oligodeoxynucleotide Sequences
CpG-oligodeoxynucleotides (CpG-ODNs) mimicking the function of microbial CpG-dideoxynucleotides containing DNA (CpG-DNA) are potent immune stimuli. The immunostimulatory activity and the species-specific activities of a CpG-ODN depend on its nucleotide sequence properties, including CpG-hexamer motif types, spacing between motifs, nucleotide sequence, and length. Toll-like receptor (TLR) 9 is the cellular receptor for CpG-ODNs in mammalian species, while TLR21 is the receptor in avian species. Mammalian cells lack TLR21, and avian cells lack TLR9; however, both TLRs are expressed in fish cells. While nucleotide sequence properties required for a CpG-ODN to strongly activate mammalian TLR9 and its species-specific activities to different mammalian TLR9s are better studied, CpG-ODN activation of TLR21 is not yet well investigated. Here we characterized chicken and duck TLR21s and investigated their activation by CpG-ODNs. Chicken and duck TLR21s contain 972 and 976 amino acid residues, respectively, and differ from TLR9s as they do not have an undefined region in their ectodomain. Cell-based TLR21 activation assays were established to investigate TLR21 activation by different CpG-ODNs. Unlike grouper TLR21, which was preferentially activated by CpG-ODN with a GTCGTT hexamer motif, chicken and duck TLR21s do not distinguish among different CpG-hexamer motifs. Additionally, these two poultry TLR21s were activated by CpG-ODNs with lengths ranging from 15 to 31 nucleotides and with different spacing between CpG-hexamer motifs. These suggested that compared to mammalian TLR9 and grouper TLR21, chicken and duck TLR21s have a broad CpG-ODN sequence recognition profile. Thus, they could also recognize a wide array of DNA-associated molecular patterns from microbes. Moreover, CpG-ODNs are being investigated as antimicrobial agents and as vaccine adjuvants for different species. This study revealed that there are more optimized CpG-ODNs that can be used in poultry farming as anti-infection agents compared to CpG-ODN choices available for other species.
Introduction
Chicken and duck are two major farmed avian species. Production loss caused by infectious diseases is a major problem in the poultry industry, thus, developing new strategies to combat infections is required for preventing massive losses [1][2][3]. Toll-like receptors (TLRs) are pattern-recognition receptors for detecting microbial pathogens. These are type I transmembrane receptors with an extracellular domain comprising multiple leucine-rich repeats, followed by a transmembrane region and a highly conserved cytoplasmic Toll/IL-1 receptor (TIR) domain. Ligand binding of these TLRs occurs within the ectodomain. The TIR domain provides a key site for homophilic interaction, with the TIR domain containing MyD88 family adapter proteins for activating NF-κB and IRF signaling pathways [4][5][6][7]. For avian TLRs, their downstream signaling molecules are relatively similar to those of mammalian TLRs, suggesting that avian TLRs might have a similar mechanism of action to their mammalian orthologs [8,9]. Activating these TLRs initiates early innate immunity and activates adaptive immunity for the host responses to invading microorganisms. Because of their potent immunostimulatory activity, different TLR agonists are being investigated as anti-infectious agents or as vaccine adjuvants for different species [10][11][12].
Bacterial and viral DNA are potent immune stimuli. Immunostimulatory activity of these microbial DNAs is attributed to sequence motifs containing unmethylated CpG-dideoxynucleotides in the DNA. Synthetic CpG-containing oligodeoxynucleotides (CpG-ODNs) mimic the stimulatory effect of these microbial DNA in activating immune cells [24][25][26][27]. The structure-function relationship of CpG-ODNs is better investigated in mammalian cells. In mammals, CpG-ODN activity is determined by its nucleotide length and the number of CpG-hexamer motifs, and the spacing, position, and surrounding bases of these motifs in the oligodeoxynucleotide. Moreover, studies with mammalian TLR9 revealed that the species-specific activity of CpG-ODN is largely determined by its CpG-hexamer motifs. For example, CpG-ODN with GACGTT motif displays the greatest activity toward mouse TLR9. In contrast, CpG-ODN with GTCGTT motif preferentially activates human TLR9 and TLR9 from other domestic animals including sheep, goat, horse, pig, and dog [28][29][30][31].
TLR9-mediated immunostimulatory activities of CpG-ODN have been extensively studied in mammals. CpG-ODN administration induces cytokine production, subsequently leading to maturation, differentiation, and proliferation of immune cells [25,26,[32][33][34]. Because these effects facilitate both antigen-dependent and antigen-independent eradication of infected microbes, CpG-ODNs are investigated for their function as vaccine adjuvants and as antimicrobial agents [11,[35][36][37][38]. Similarly, CpG-ODNs have been shown to protect chickens and ducks against bacterial and viral infections by acting as vaccine adjuvants or antimicrobial agents [12,18]. Nevertheless, TLR9 is missing in avian species; thus, the immunostimulatory effects of CpG-ODNs observed in chicken and duck are mediated by TLR21 [8,9,16,17]. Although CpG-ODN efficacy as either a vaccine adjuvant or antimicrobial agent in birds would be determined by its immunostimulatory activity, the nucleotide sequence requirement for a CpG-ODN to strongly activate avian TLR21 has not been well investigated. Here, we characterized chicken and duck TLR21s and investigated their activation by CpG-ODNs. Results showed that these two avian TLRs have a broad CpG-ODN sequence recognition profile.
Approval of Animal Work
Animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC), National Health Research Institutes, Taiwan. Chicken (Gallus domesticus), Peking duck (Anas platyrhynchos var. domestica), and white Muscovy ducks (Cairina moschata) were purchased from the Animal Drugs Inspection Branch, Animal Health Research Institute (Miaoli, Taiwan), and the Livestock Research Institute (Ilan, Taiwan). These animals were handled following the guidelines.
Reagents and Antibodies
TRIzol reagent, SuperScript IV kit, and AccuPrime DNA Polymerases were purchased from Invitrogen (San Diego, CA, USA). RNeasy Mini Kit and QuantiNova SYBR Green PCR Kit were purchased from QIAGEN (Hilden, Germany). CpG-ODNs were purchased from Integrated DNA Technologies, Inc. Luciferase assay reagents were purchased from Promega (Madison, WI, USA). Anti-FLAG antibody and anti-actin antibody were purchased from Sigma (St. Louis, MO, USA) and Santa Cruz Biotech Inc. (Dallas, TX, USA), respectively.
Molecular Cloning of Chicken and Duck TLR21s cDNA
Total RNAs were purified from chicken and duck spleens using TRIzol. First-strand cDNA libraries were prepared from total RNA using the SuperScript IV first-strand synthesis kit based on the manufacturer's instructions. To clone chicken TLR21 cDNA, forward and reverse primers (5′-atgatggagacagcggagaaggcatg-3′ and 5′-ctacatctgtttgtctccttccctgg-3′) were designed based on the coding region of chTLR21 (GenBank: JQ042914.1). cDNA containing a complete coding region of chTLR21 was cloned through PCR from the prepared chicken spleen first-strand cDNA library. To clone full-length duck TLR21 cDNA, forward and reverse primers (5′-acaggagccccccaccgccca-3′ and 5′-accccatggatggttttcctccacccca-3′) were designed based on several sequences or predicted gene sequences of avian TLR21 mRNAs (GenBank: KT35043, NW_013186152.1, and NOIK01001195.1). cDNA of full-length duck TLR21 containing both 5′- and 3′-untranslated regions and a coding region was cloned through PCR from the prepared duck spleen first-strand cDNA library. The nucleotide sequence and deduced protein sequence of this duck TLR21 were submitted to GenBank (accession number MT081574).
Expression Vectors for Chicken, Duck, and Grouper TLR21s
Chicken and duck TLR21 expression vectors were constructed through PCR amplification of the corresponding protein-coding regions from the first-strand cDNA libraries of chicken and duck spleen. Forward and reverse primers for chicken TLR21 were 5′-atgatggagacagcggagaaggcatg-3′ and 5′-catctgtttgtctccttccctggg-3′, and primers for duck TLR21 were 5′-atggcacggccccgcccctcc-3′ and 5′-ctatgccttctcctctttctccccacgc-3′. Amplified DNA fragments were subcloned into a pEF6 vector in frame with a FLAG tag at their C-terminal ends. The expression vector for grouper TLR21 was generated as previously reported [22].
TLR21 Activation Assays
HEK 293 cells were grown in DMEM supplemented with 10% fetal bovine serum (FBS). Cells were co-transfected with the indicated TLR21 expression vector and a NF-κB-controlled luciferase reporter gene, treated with various CpG-ODNs as indicated, and the TLR21 activation assay was performed as previously described [22]. Relative luciferase activities were calculated as fold induction compared to unstimulated control. Data are expressed as means ± SD (n = 3).
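As a small illustration of the fold-induction calculation described above, the sketch below converts raw reporter readings into fold induction and a mean ± SD; the luciferase values are hypothetical placeholders, not measurements from the study.

```python
import numpy as np

# Hypothetical raw NF-κB luciferase readings (arbitrary units), n = 3 wells each.
unstimulated = np.array([1050.0, 980.0, 1010.0])
cpg_treated = np.array([8200.0, 7600.0, 8900.0])

# Fold induction relative to the unstimulated control, reported as mean ± SD.
fold_induction = cpg_treated / unstimulated.mean()
print(f"fold induction = {fold_induction.mean():.2f} ± {fold_induction.std(ddof=1):.2f}")
```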
Tissue Isolation from Chicken and Duck for First-Strand cDNA Preparation
Tissues from heart, liver, spleen, kidney, bursa of Fabricius, thymus, and lung were aseptically removed from one-week-old chickens and ducks after euthanization. Tissues were then gently minced and were soaked in TRIzol for total RNA extraction. First-strand cDNA was generated using SuperScript IV kits.
Preparation and Culture of Chicken and Duck Splenocytes
Spleens were aseptically removed from one-week-old chickens and ducks after euthanization. Organs were minced and pressed gently through 70 µm cell strainers. Cells were then washed and suspended in RPMI medium. The cell suspension was then carefully layered on Ficoll-Paque PREMIUM (GE Healthcare, Chicago, IL, USA), and the splenic cell layer was separated by centrifugation at 400× g for 40 min. Splenocytes were washed thrice and subsequently cultured in RPMI medium containing 10% FBS at 37 °C in a humidified cell incubator with 5% CO2.
RT-qPCR Analysis of Gene Expression
Splenocytes isolated from chicken and duck were treated with different CpG-ODNs for 4 h, and total RNA was extracted using QIAGEN RNeasy Mini Kit. Reverse transcription was performed using SuperScript IV first-strand synthesis kit. Quantitative PCR was carried out using a Roche LightCycler 480 System (Basel, Switzerland), QIAGEN, QuantiNova SYBR Green PCR Kit, and gene-specific primers. mRNA expression was normalized to GAPDH.
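As an illustration of GAPDH-normalized quantification, the sketch below applies the standard 2^-ΔΔCt calculation to hypothetical Ct values; the exact quantification model used in the study is not stated here, so this is an assumption made only for illustration.

```python
# Hypothetical Ct values (target gene = IL-6, reference gene = GAPDH).
ct_target_control, ct_gapdh_control = 28.5, 18.0   # untreated splenocytes
ct_target_treated, ct_gapdh_treated = 24.0, 18.2   # CpG-ODN-treated splenocytes

# Normalize each condition to GAPDH, then compare treated to control.
delta_ct_control = ct_target_control - ct_gapdh_control
delta_ct_treated = ct_target_treated - ct_gapdh_treated
ddct = delta_ct_treated - delta_ct_control

fold_change = 2 ** (-ddct)
print(f"IL-6 fold change relative to control: {fold_change:.1f}")
```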
SDS-PAGE and Immunoblot Analysis
Cells were lysed with lysis buffer containing complete protease inhibitor cocktail (Roche Life Science, Indianapolis, IN, USA). Cell lysates were resolved by SDS-PAGE and transferred to PVDF membranes. Membranes were incubated with the indicated primary antibody and then with the HRP-conjugated secondary antibody. Visualization of the immunoreactive bands was performed using chemiluminescent HRP substrate (Millipore, Temecula, CA, USA) and UVP BioSpectrum Imaging System.
Statistical Analysis
Data are expressed as mean ± SD. All groups were from three independent experiments. Statistical analyses were performed using Student's t-test. p < 0.05 was considered statistically significant.
Characterization of Chicken and Duck TLR21s
A chicken (chi) TLR21 cDNA was previously reported to encode a TLR21 protein of 972 amino acid residues [19]. Based on the expected high sequence identity between nucleotide sequences of the TLR21 gene in avian species, we designed two primers based on the 5′- and 3′-untranslated regions of goose (Anser cygnoides domesticus) and spot-billed duck (Anas zonorhyncha) TLR21 sequences identified from the NCBI nucleotide database to clone the duck (Anas platyrhynchos var. domestica, Peking duck) cDNA. We therefore cloned a full-length duck (duc) TLR21 cDNA, and the sequence was submitted to GenBank (accession number: MT081574). The cDNA encodes a TLR21 protein of 976 amino acid residues, which is less than the 979 amino acid residues of the giant grouper (Epinephelus lanceolatus) TLR21 characterized for its interaction with CpG-ODNs [22].
These three TLR21s contain an extracellular domain, a transmembrane domain, and a Toll/IL-1 (TIR) cytosolic domain, and they have an N-terminal leucine-rich repeat (LRR-NT), leucine-rich repeats (LRRs), and a C-terminal leucine-rich repeat (LRR-CT) in their ectodomain. The TIR domain is better conserved among these TLR21s. In addition, the three boxes in the TIR domain involved in mammalian TLR signaling are conserved in the TIR domains of these TLR21s (Figure 1). Ligand binding occurs at TLR ectodomains; therefore, we used SWISS-MODEL software to compare the predicted three-dimensional ectodomain structures of chicken, duck, and grouper TLR21s. Ectodomains of these TLR21s have relatively similar horseshoe-shaped solenoid three-dimensional structures. A previous study revealed that fish TLR9s contain an undefined region (also called the Z-loop) in their ectodomains, whereas their functional homologs, TLR21s, do not [23]. Analysis of chicken and duck TLR21s did not show an undefined region in their ectodomain (Figure 2), indicating that this fish TLR21 feature is preserved in these two avian TLR21s.
Phylogenetic Analysis of Chicken and Duck TLR21s
Vertebrate TLRs are divided into six families, namely, families 1, 3, 4, 5, 7, and 11. The TLR11 family contains two subfamilies, TLRs 11-13 and TLRs 20-22. TLR21 is an ortholog of mouse TLR13 [39,40]. When searching NCBI nucleotide databases, putative sequences for the TLR13 of avian species including wild turkey, helmeted guinea fowl, great tit, Atlantic canary, and white-throated sparrow were identified, but none of their TLR21s were found. As birds are not reported to have TLR13, whether these putative sequences were TLR21s that were mistakenly annotated as TLR13s requires clarification [40]. Phylogenetic and protein identity analyses using the protein sequences of these TLR21s and hypothetical TLR13s from different avian and fish species using ClustalW2 revealed that the ducTLR21 protein sequence is closely related to protein sequences in avian species, having 93.8%, 74.8%, and 61.3% protein identity to the goose, chicken, and sparrow sequences, respectively. ducTLR21 has 43.9% protein identity to grouper TLR21 and around 43-46% protein identity to various fish TLR21 sequences. Generally, TIR domains are better conserved among different TLRs. Consistently, the protein identity of the chiTLR21 TIR domain to the TIR domains of duck and grouper TLR21 is 86.3% and 56.6%, respectively (Figure 3).
Figure 1. Alignment of chicken, duck, and grouper TLR21 protein sequences. Chicken and grouper TLR21 proteins were retrieved from NCBI database. The accession number is NP-001025729.1 for chicken TLR21 and AJW66342.1 for grouper TLR21. Duck TLR21 sequence was submitted to NCBI database under the accession number MT081574. Signal peptide, leucine-rich repeats (LRRs), N-terminal LRR (LRR-NT), C-terminal LRR (LRR-CT), transmembrane domain (TM), and Toll/interleukin receptor (TIR) domain are assigned based on previous reports on chicken TLR21 [39]. The boxed regions are box1, box2, and box3 in the TIR domain. Amino acids are color-coded to indicate their chemical properties: green, hydroxyl/amine/basic/Q; blue, acidic; pink, basic; red, hydrophobic (including aliphatic Y). Asterisk, identical residues; two dots, highly conservative substitutions; single dot, conservative substitutions.
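The percent-identity figures quoted above can be reproduced, in principle, by the simple column-wise calculation sketched below on a pairwise alignment; the aligned fragments shown are hypothetical placeholders rather than the actual ClustalW2 alignment of the TLR21 sequences.

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity over aligned columns, skipping columns where both sequences have gaps."""
    assert len(aligned_a) == len(aligned_b)
    compared = matches = 0
    for a, b in zip(aligned_a, aligned_b):
        if a == "-" and b == "-":
            continue
        compared += 1
        if a == b and a != "-":
            matches += 1
    return 100.0 * matches / compared

seq1 = "MKTLL-LVAASGSHSL"   # hypothetical aligned fragment
seq2 = "MKALLPLVTASGSQSL"   # hypothetical aligned fragment
print(f"identity: {percent_identity(seq1, seq2):.1f}%")
```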
Tissue Distribution of TLR21 in Chicken and Duck and Activation of Their Splenocytes by CpG-ODNs
TLR21 expression in heart, liver, spleen, kidney, bursa of Fabricius, thymus, and lung tissues of chickens and ducks was analyzed by RT-qPCR. chiTLR21 had higher expression in the spleen and bursa of Fabricius, and modest expression in the thymus and lung. ducTLR21 had the strongest expression in the spleen, modest expression in the bursa of Fabricius, thymus, and lung, and weak expression in the kidney and liver. These results revealed that chicken and duck TLR21s are expressed in immune-relevant tissues (Figure 4). We further investigated the induction of cytokine production in chicken and duck cells by different CpG-ODNs. For this, splenocytes were purified from chickens and ducks and treated with CpG-ODNs with different sequences and different CpG-hexamer motif types including GACGTT, GTCGTT, and AACGTT; induction of IL-6, IL-8, and IFNγ was then analyzed by RT-qPCR. Results revealed that these CpG-ODNs have different activities toward the chicken and duck splenocytes. Nevertheless, regardless of which CpG-hexamer motif type they possess, these CpG-ODNs were able to activate cytokine production in chicken and duck splenocytes (Figure 5). This activation profile by CpG-ODNs is quite different from that of some fish TLR21s, previously shown to preferentially respond to CpG-ODNs with a GTCGTT hexamer motif [21][22][23].
Broad CpG-ODN Sequence Recognition of Chicken and Duck TLR21s
Because splenocytes comprise different cell types, which is not favorable for a precise study of CpG-ODN activities, we then used a cell-based TLR21 activation assay to investigate the activation of chicken and duck TLR21s by different CpG-ODNs and compared their activation profiles with that of grouper TLR21. For this, the cell-based TLR21 activation assay was established by co-transfecting an expression vector for chicken, duck, or grouper TLR21 together with an NF-κB-driven luciferase reporter gene into HEK293 cells. The cells were then stimulated with CpG-ODNs of different sequences and with different CpG-hexamer motif types. In contrast to the preferential activation of grouper TLR21 by CpG-ODNs with the GTCGTT hexamer motif, chicken and duck TLR21s were activated by CpG-ODNs with different types of hexamer motifs (Figure 6). This is consistent with the abilities of these CpG-ODNs to activate chicken and duck splenocytes (Figure 5), suggesting that chicken and duck TLR21s have broad CpG-ODN sequence recognition profiles.
Chicken and Duck TLR21s Do Not Distinguish Different Types of CpG-hexamer Motifs
The CpG-hexamer motif type is essential in determining the species-specific activity of CpG-ODNs toward mammalian TLR9s. Human TLR9 responds to CpG-ODNs with the GTCGTT motif, whereas mouse TLR9 is strongly activated by CpG-ODNs with GACGTT or AACGTT but only weakly activated by CpG-ODNs with GTCGTT [28][29][30]. Furthermore, previous studies and Figure 6C show that zebrafish and grouper TLR21s respond preferentially to CpG-ODNs with a GTCGTT motif [21,22]. To confirm that chicken and duck TLR21s have a broad recognition profile toward different types of CpG-hexamer motifs, CpG-ODNs with the same nucleotide sequence and length but different CpG-hexamer motif types were designed for TLR21 activation. These CpG-ODNs, including CpG-2000 containing GACGTT motifs, CpG-2722 containing CTCGTT motifs, CpG-1670 containing AACGTT motifs, and their derivatives generated by replacing their CpG-hexamer motifs with different motif types, are shown in Figure 7. When chicken and duck TLR21-expressing cells were treated with these CpG-ODNs, no significant differences in chiTLR21 and ducTLR21 activation capability were observed (Figure 7). These results reveal that chicken and duck TLR21s do not distinguish different types of CpG-hexamer motifs as mammalian TLR9s and grouper TLR21 do.
Responsiveness of Chicken and Duck TLR21s to CpG-ODNs with Different Lengths and Varied Spacing between Their CpG-hexamer Motifs
Previous studies revealed that CpG-ODN length and the spacing between two CpG-hexamer motifs of a CpG-ODN can determine CpG-ODN activity. For example, CpG-C4609, with 12 nucleotides, activates rabbit TLR9 more strongly than CpG-2007 and CpG-1826, which contain 22 and 20 nucleotides, respectively [41]. CpG-2722, with four thymidines between two GTCGTT motifs, activates grouper TLR21 more strongly than CpG-272 and CpG-2721, which contain two thymidines and no spacing between the two GTCGTT motifs, respectively [22]. Therefore, we further investigated the responsiveness of chicken and duck TLR21s to CpG-ODNs with different lengths and varied spacing between CpG-hexamer motifs. CpG-2722 contains 19 nucleotides; based on this sequence, lengthened CpG-ODNs were designed by adding a GTCGTT motif to the 3′-end and inserting thymidine spacing between the second and third GTCGTT motifs. Chicken and duck TLR21-expressing cells were treated with these CpG-ODNs. The results showed no major change in stimulatory activity as the nucleotide length increased from 19 to 31 (Figure 8). In addition, the thymidine spacing between the first and second GTCGTT motifs was adjusted, and 5′ or 3′ nucleotides were trimmed to generate CpG-ODNs shorter than CpG-2722, and their activities were analyzed. Compared with CpG-2722, no major change was observed in the activities of CpG-2722-7, -8, -9, and -10, which have 21, 17, 15, and 16 nucleotides, respectively. Nevertheless, the activities of CpG-2722-11 and -12, comprising only 10 and 13 nucleotides, were reduced (Figure 8). Overall, these data suggest that chicken and duck TLR21s are activated by CpG-ODNs with lengths from 15 to 31 nucleotides, and that the spacing between the two GTCGTT motifs does not play a role in determining activity.
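The construction logic of these length and spacing variants can be illustrated with a short sketch. Since the full CpG-2722 sequence is not reproduced in this excerpt, a placeholder 19-mer with two GTCGTT motifs and a four-thymidine spacer stands in for it; the variant naming and the placeholder itself are purely illustrative.

```python
MOTIF = "GTCGTT"

def lengthen(base: str, spacer_len: int) -> str:
    """Append a thymidine spacer and one additional GTCGTT motif to the 3' end."""
    return base + "T" * spacer_len + MOTIF

# Placeholder standing in for CpG-2722 (19 nt, two motifs, 4-T spacer):
cpg_2722_like = "T" + MOTIF + "TTTT" + MOTIF + "TT"
assert len(cpg_2722_like) == 19

# Lengthened derivatives (25-31 nt) obtained by varying the new 3' spacer:
for n in range(0, 7):
    seq = lengthen(cpg_2722_like, n)
    print(f"spacer of {n} T's -> {len(seq)} nt: {seq}")
```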
Requirement of CpG-dideoxynucleotides for Activation of Chicken and Duck TLR21s
CpG-dideoxynucleotides in a CpG-hexamer motif are required for CpG-ODN activation of mammalian TLR9s [28][29][30]. CpG-2722 contains two copies of CpG-hexamer motifs, and previous studies revealed that the second copy of the CpG-hexamer motif at the 3′ end of CpG-2722 is not required for activating grouper TLR21, since CpG-2727, containing a reversed CpG-dideoxynucleotide in its second copy of the CpG-hexamer motif, had similar activity in grouper TLR21 activation as CpG-2722 [22]. To investigate this property of CpG-dideoxynucleotides and the number of CpG-hexamer motifs required for CpG-ODN to activate chicken and duck TLR21s, CpG-2007-1 was generated in which all CpG-dideoxynucleotides in CpG-2007 were reversed. Chicken and duck TLR21s were stimulated with CpG-2007, -2007-1, -2722 and -2727. Results revealed that CpG-2727 had activity as strong as CpG-2722 in activating chicken and duck TLR21s, whereas CpG-2007-1 activity was reduced (Figure 9), indicating that at least one copy of the CpG-hexamer motif with CpG-dideoxynucleotides is required for CpG-ODN to strongly activate chicken and duck TLR21s.
Figure 9. Activation of chicken and duck TLR21s by CpG-ODNs with reversed CpG-dideoxynucleotides. HEK293 cells were co-transfected with a control vector or an expression vector for the indicated TLR21, along with an NF-κB-controlled luciferase reporter gene, and treated with 0.8 µM CpG-ODN for 7 h before luciferase activities were measured. Data represent means ± SD (n = 3 independent experiments). *** p < 0.001 compared with control. Sequences of the CpG-ODNs used in this study are shown on the left.
Discussion
CpG-ODNs are potent immunostimulants investigated as anti-infectious agents and vaccine adjuvants for various species [11][35][36][37][38]. In mammals, their cellular receptor is TLR9; in avian species, it is TLR21; and fishes have both TLR9 and TLR21 [19][20][21]. The immunostimulatory activity of a CpG-ODN is determined by structural features including its CpG-hexamer motif type, the spacing between motifs, its nucleotide sequence, and its length [28][29][30][31]. While the structure-function relationship of the interaction between CpG-ODNs and TLR9 has been investigated, the interaction between CpG-ODNs and TLR21 is not well understood. Moreover, it is unclear whether the functional properties of avian and fish TLR21s differ. In this study, we characterized chicken and duck TLR21s and investigated the structural requirements for CpG-ODNs to strongly activate these two poultry TLRs.
Computer modeling revealed that chicken and duck TLR21s have a horseshoe-like ectodomain similar to other TLRs. Also, like fish TLR21s, these two TLR21s do not contain an undefined region corresponding to that between LRR14 and LRR15 of their functional homolog, TLR9. Of the mammalian TLRs, TLR7 and TLR8 are most closely related to TLR9, and all three contain an undefined region in their ectodomains [42][43][44]. Mouse, rat, and rabbit TLR8s are relatively insensitive to ligand stimulation, and a varied undefined region in the ectodomain of these three TLRs has been suggested to cause their low responsiveness [45,46]. In fishes, varied undefined regions were found in TLR9s, but TLR21s do not contain this undefined region, leading to speculation about whether TLR21 is the major receptor for CpG-DNA in most fishes and whether TLR9 may have low activity or even be nonfunctional in some fishes [23]. TLR9 is not present in avian species. Of the ten avian TLRs, TLR21 is the only one that recognizes CpG-DNA. Thus, it appears significant that TLR21 has been selected through evolution to ensure that avian species can detect pathogens through their microbial DNA.
Various studies have shown that mammalian TLR9s have preferences for different CpG-ODN nucleotide sequences. The optimal CpG-hexamer motifs for activating mouse and rabbit TLR9s are GACGTT and AACGTT, whereas for activating TLR9 in humans and other domestic animals the optimal motif is GTCGTT [28][29][30][31]. Similarly, CpG-ODNs with a GTCGTT motif activate zebrafish and grouper TLR21s more strongly than CpG-ODNs with a GACGTT motif [21,22]. In addition, nucleotide length also plays a role in determining CpG-ODN stimulatory activity. Generally, 18-24 nucleotides are required for a CpG-ODN to strongly activate mouse and human TLR9s. Nevertheless, CpG-ODNs that are 12-14 nucleotides long show stronger activities toward rabbit TLR9 [41]. Our results show that chicken and duck TLR21s are strongly activated by CpG-ODNs with either a GACGTT or a GTCGTT motif, indicating that these TLRs do not distinguish different types of CpG-hexamer motifs. Furthermore, chicken and duck TLR21s are activated by CpG-ODNs of different lengths (15-31 nucleotides) and with different spacing between their CpG-hexamer motifs. Thus, compared with mammalian TLR9s and fish TLR21s, chicken and duck TLR21s have broad ligand recognition profiles with respect to CpG-ODN sequence and length.
Major infectious diseases of poultry birds, including salmonellosis, coccidiosis, Campylobacter infections, avian influenza, infectious bronchitis, Marek's disease, infectious bursal disease, and Newcastle disease, can cause large economic losses to the industry. Some bird-borne microbes can also spread to humans and threaten human health. Therefore, there is a need to prevent infectious diseases in poultry [1][2][3][47,48]. Vaccination is commonly used to protect humans and other species against microbial infections. While vaccines have been used in poultry farming to reduce infectious diseases, some conventional vaccines and adjuvants have various disadvantages, including virulence reversion of live attenuated or inactivated vaccines, and the toxicity and poor ability of traditional adjuvants to induce optimal immune responses. Adjuvants such as Freund's adjuvant frequently induce strong side effects, resulting in abscesses and granulomas at the injection site. Although aluminum salts significantly enhance the serum humoral response when added to vaccines, the capability of this adjuvant to induce cell-mediated immune responses is poor [49][50][51][52].
CpG-1018 has been used as an adjuvant in a hepatitis B vaccine approved by the US FDA in 2017. This vaccine proved more effective than aluminum salt-adjuvanted hepatitis B vaccines [53,54], suggesting that CpG-ODN is a potent and safe adjuvant. When formulated with antigens, CpG-ODNs can increase the survival of poultry birds challenged with various viruses and bacteria causing infectious diseases, including avian influenza, Newcastle disease virus, infectious bursal disease virus, Salmonella, and E. coli, by increasing cytokine production, lymphocyte proliferation, and serum IgG. While CpG-ODNs with the AACGTT motif have not yet been well investigated, the CpG-ODNs shown to have adjuvant activity contain either CTCGTT or GACGTT motifs. Furthermore, in chickens, CpG-ODNs can be administered through different routes, including oral, intranasal, subcutaneous, and in ovo injection [55][56][57][58][59][60][61][62][63][64][65][66]. Thus, CpG-ODN development has expanded the strategies available for designing adjuvanted vaccines for poultry birds. Our studies show that chicken and duck TLR21s have broad CpG-ODN sequence recognition profiles, revealing that there are more CpG-ODN choices for optimal use as adjuvants in vaccines to boost antigen-dependent immune responses in poultry birds.
Conclusions
TLR21 is a pattern recognition receptor that detects microbial DNA to initiate host responses to infection. In addition, it is the cellular receptor that mediates the anti-infectious and adjuvant activities of CpG-ODNs in avian species. The use of CpG-ODNs as vaccine adjuvants for poultry birds is being investigated [10][11][12][16,17]. Our results provide a novel description of the CpG-ODN recognition features of chicken and duck TLR21s. Unlike mammalian TLR9s and some fish TLR21s, these two avian TLR21s recognize a broad array of CpG-ODN sequences for their activation. This suggests that the innate immune systems of chicken and duck have a strong ability to sense DNA-associated molecular patterns from microbes and to initiate immune responses for host defense against infection. In addition, the results suggest that there are more CpG-ODN choices available for use as immunostimulatory agents (such as vaccine adjuvants) in poultry birds than in other species.
| 8,134 | 2020-11-02T00:00:00.000 | ["Biology", "Medicine"] |
Shaped cathodes for the production of ultra-short multi-electron pulses
An electrostatic electron source design capable of producing sub-20 femtosecond (rms) multi-electron pulses is presented. The photoelectron gun concept builds upon geometrical electric field enhancement at the cathode surface. Particle tracer simulations indicate the generation of extremely short bunches even beyond 40 cm of propagation. Comparisons with compact electron sources commonly used for femtosecond electron diffraction are made.
INTRODUCTION
The field of ultrafast structural dynamics is growing quickly, as shorter and brighter hard X-ray and electron pulses are being produced and implemented to light up atoms in motion. [1][2][3][4][5][6][7][8] The advent of fourth-generation light sources 9,10 has made the production of ultra-bright femtosecond (fs) hard X-ray pulses possible, which have been successfully applied to time-resolved diffraction [11][12][13][14] and ultrafast coherent imaging. [15][16][17] On the other hand, the use of ultrashort electron bursts has also emerged as a powerful means to atomically resolve dynamical phenomena and structure in the laboratory setting. [18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35] In this regard, different approaches for the generation of fs multi-electron bunches have been developed to meet the time resolution required to observe the movement of atoms, i.e., sub-picosecond electron pulses, and ideally the shorter the better to avoid temporal blurring in stroboscopically recorded images. Compact femtosecond electron diffraction (FED) instruments with electrostatic electron guns, based on quasi-flat cathode and anode electrodes, have enabled a time resolution of ≈100 fs (rms, root-mean-square deviation) with bright multi-electron pulses. 36,37 For simplicity, electron pulses were assumed to be Gaussian in shape, and therefore a conversion factor of 2.355 has been used to calculate fwhm (full-width-at-half-maximum) from rms values. Recent designs with a minimal cathode-to-sample distance have brought the temporal resolution of these sources closer to the limit imposed by their initial energy spread, or single-electron pulse limit. 30,[38][39][40][41] Electron kinetic energies (KE) produced by electrostatic guns typically range from sub-1 keV to 100 keV and are commonly referred to as sub-relativistic. More advanced electron sources based on radio-frequency (RF) photo-injectors are known to generate ultrashort, bright pulses of relativistic electrons (KE > 1 MeV). This technology is relatively mature within the accelerator community owing to its use in synchrotron and free-electron laser facilities and has become popular for monitoring ultrafast structural dynamics. [42][43][44][45][46][47][48][49] As its energy spread comes under control, laser-driven electron acceleration is also arising as a low-cost alternative for the generation of ultrashort and ultrabright electron pulses with KE in the 200 keV-1 GeV range. [50][51][52][53] In addition, different active and passive electron pulse compression schemes have been proposed and/or demonstrated. [27][28][29][53][54][55][56][57][58][59][60][61][62][63][64] One of the most successfully applied methods in recent FED experiments with ultrabright electron bursts relies on the use of an RF (or microwave) cavity that acts as a temporal lens. [27][28][29]55,56,[60][61][62][63] This methodology was found to compress dense sub-relativistic multi-electron pulses down to 67 fs (rms). 60 Shorter multi-electron pulses are expected from this approach, for which synchronization noise has limited the instrument response to about 80-150 fs (rms). [60][61][62] However, a recent phase-lock scheme based on passive optical enhancement has reduced this timing jitter to only ≈5 fs (rms). [65][66][67] Therefore, RF-cavity (temporal lens) electron pulse rebunching still holds great promise in providing sub-20 fs (rms) temporal resolution with bright multi-electron bunches. 56
Furthermore, all-optical electron pulse compression through the use of a single-cycle THz resonator has recently been shown to bring the duration of multi-electron pulses from 395 fs (rms) to 32 fs (rms) [930 fs (fwhm) to 75 fs (fwhm)] with a minimal long-term timing drift of ≈4 fs (rms). 68,69 This method is expected to generate even shorter multi-electron bursts. 68
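For reference, the rms-to-fwhm conversion factor of 2.355 quoted above is simply the standard Gaussian relation (not specific to this work):

\[ \mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma, \qquad \text{e.g. } \sigma_t = 12\ \text{fs (rms)} \;\Rightarrow\; \mathrm{FWHM} \approx 28\ \text{fs}. \]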
RESULTS AND DISCUSSION
Here, we introduce a rather simple all-electrostatic electron gun design that delivers multi-electron bursts as short as 12 fs (rms) [28 fs (fwhm)] at a relatively long electron propagation distance of 10 cm (the sample position in our instrument), without the need for electron pulse temporal rebunching. A 300 kV FED setup based on this source concept is under construction at the University of Waterloo. The electron gun exploits the advantage of strong on-axis electric field acceleration at the electron birth. Figure 1 shows a computer-aided design (CAD) of the key electron source components alongside a geometrical depiction of the photocathode head. The cathode surface has a parabolic shape with a small flat circular area of 1 mm in diameter centered at the symmetry axis, or electron propagation axis, defined as (0, 0, z). This flat region is necessary to avoid an excessive kick in the transverse direction acting on off-axis electrons, which would greatly deteriorate the transverse and longitudinal properties of the electron bunch. The cathode is positioned inside a double magnetic lens (in-lens system). The magnetic fields generated by the two lenses point in opposite directions along the z-axis in order to provide a resultant field B_z = 0 at the cathode surface (0, 0, 0). This is a necessary condition to null emittance growth caused by magnetic fields at the electron pulse birth. 70 This in-lens source design yields a lateral spot size of ≈190 µm (rms) for a quasi-parallel electron beam at the sample's plane. A normalized transverse emittance of 0.025 mm mrad (or a transverse coherence length of about 3 nm) was obtained. This value suffices for the study of most inorganic and organic crystalline materials composed of small molecules. The lens system has been optimized to operate under the assumption of a core material with a relative magnetic permeability of 10^4 and a saturation magnetic field of 1.5 T. Such values are easily attainable with various soft magnetic iron alloys. 71 The required total power was estimated to be only 200 W.

FIG. 1. CAD of the electron source concept. The main components are a parabolically shaped photocathode head with a small flat region and a double magnetic lens system with a conical inner form. This cathode shape has been carefully selected in order to confer an on-axis geometrical surface electric field magnitude of 50 MV/m without exceeding a maximum of 60 MV/m in other parts of the head. The in-lens system ensures B_z = 0 at the electron birth to avoid magnetic emittance growth. The conical anode shape helps to maintain the surface electric field at the anode |E_a| ≲ 5 MV/m and brings the magnetic poles closer to the cathode head. The magnetic lens system has been optimized in order to obtain, within practical constraints, a reasonable electron spot size at the sample position and low power consumption to avoid water-cooling. Inset: the black dotted line corresponds to the symmetry z-axis; green curves depict two parabolas displaced from the propagation axis by 0.5 mm in the radial direction, following the equation f(r) = 1.6 cm⁻¹ r²; the blue segment highlights the flat region of 1 mm in diameter.
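A minimal numerical sketch of the cathode profile described in the Fig. 1 caption is given below. The exact way the flat central disc joins the displaced parabolas is our reading of the caption, and the sampling radii are arbitrary; this is illustrative only.

```python
# Cathode head surface profile along the radial coordinate r (lengths in mm).
# Fig. 1 caption: 1 mm diameter flat central disc; curved part follows parabolas
# f(r) = 1.6 cm^-1 * r^2 displaced 0.5 mm from the propagation axis.
K = 0.16      # 1.6 cm^-1 expressed in mm^-1
R_FLAT = 0.5  # radius of the flat region, mm

def cathode_z(r_mm: float) -> float:
    """Axial height of the cathode surface at radius r (mm); flat disc at z = 0."""
    r = abs(r_mm)
    if r <= R_FLAT:
        return 0.0
    return K * (r - R_FLAT) ** 2  # displaced parabola

for r in (0.0, 0.5, 1.0, 2.0, 3.0, 5.0):
    print(f"r = {r:3.1f} mm -> z = {cathode_z(r):5.3f} mm")
```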
Local electric field enhancement by several orders of magnitude (≈1 GV/m) is a well-known effect in field emitters and in photo-triggered tip sources producing single to a few electrons. [72][73][74][75][76][77][78] Recently, such nanoemitters have been successfully applied to monitor photocurrents in nanostructures 77 as well as ultrafast structural dynamics. 78 The cathode head introduced here exploits moderate geometrical field enhancement while permitting the generation of multi-electron bunches.
Electrons in the simulations were generated at the cathode surface assuming a temporal (longitudinal) Gaussian profile of 6 fs (rms) [14 fs (fwhm)], an initial energy spread of 0.2 eV, and a lateral (transverse) Gaussian spot size of 50 µm (rms). Such initial electron pulse parameters can be obtained via single-photon photoemission using the second harmonic of the output of a non-collinear optical parametric amplifier (NOPA). [79][80][81] A NOPA provides the frequency tunability necessary to match the work function of various candidate metals such as Ti, stainless steel, Mo, and W. The photocathode is held at a potential V = −300 kV with respect to ground. The cathode-anode separation distance along the propagation axis, d_z ≈ 5 cm, confers an average on-axis electric field ⟨E_z⟩ = −ΔV/d_z ≈ −6 MV/m. Equipotential lines, calculated using Poisson Superfish, 82 are shown in panel A of Fig. 2 (red traces). A large increase in the magnitude of the on-axis electric field |E_z| can be observed as we approach the cathode head, reaching a maximum value of 50 MV/m at the surface; see Fig. 2(b).
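As a quick arithmetic check of the quoted average field, using only the numbers above:

\[ \langle E_z \rangle = -\frac{\Delta V}{d_z} \approx -\frac{300\ \text{kV}}{5\ \text{cm}} = -6\ \text{MV/m}, \]

so the 50 MV/m geometrical surface field at the shaped cathode corresponds to roughly an eightfold enhancement over this average value.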
One of the major concerns of the current design is vacuum breakdown, which can compromise the stability of our electron source. Critical surface vacuum breakdown fields, E_s,c, have been measured and found to be E_s,c ≈ 6.5-10 GV/m for refractory metals. [83][84][85] Such a critical threshold is commonly expressed as |E_s,c| = β_m β_g |ΔV_c|/d, where β_m and β_g are microscopic and geometrical field enhancement factors, respectively, and ΔV_c is the critical applied potential drop over a given separation distance, d, between two electrodes. Thus, |ΔV_c|/d equals the magnitude of the average critical applied field |⟨E_c⟩|, and β_g|⟨E_c⟩| becomes what we refer to as the "geometrical critical surface electric field" |E_g,c|, which therefore results in |E_s,c| = β_m |E_g,c|. Typical values of β_m for polished surfaces lie in the range of 100-300 (with 100 corresponding to a mirror-like surface finish 85 ). On the other hand, the magnitude of the maximum geometrical surface electric field |E_g,max| in previous compact electron gun designs was calculated to be about 20 MV/m and therefore satisfies the condition β_m|E_g| < |E_s,c|. Note that we cannot modify |E_s,c|, but we can modify β_m and |E_g| within certain limits, with β_m → 1 for a roughness-free surface. Surface conditioning is known to reduce dark current and greatly increase |⟨E_c⟩| 86-88 by bringing β_m from several hundred down to about 20-50 for refractory metals. 84 The use of a solid cathode head made of Ti, for instance, is therefore essential to allow for proper surface processing. This is difficult to achieve in back-illuminated electron guns because they rely on ultrathin film photocathodes that can easily be damaged by arcing. On this subject, the cathode shape was optimized to obtain a maximum geometrical surface electric field |E_g,max| < 60 MV/m in order to keep β_m|E_g| below |E_s,c|. In addition, the source design ensures low surface electric fields at the anode electrode (|E_a| ≲ 5 MV/m), a fact that will greatly mitigate anode-initiated vacuum breakdown. 88

Figure 3 shows the electron pulse duration σ_tz (rms) obtained from ASTRA simulations 89 for different electron source geometries as a function of the number of electrons per bunch. The black trace corresponds to our 300 kV FED electron gun concept for a total electron propagation distance d_T = 10 cm. The blue and red traces refer to the results obtained for conventional 100 kV compact FED setups with flat parallel electrodes, d_T = 2 cm, and constant on-axis electric fields of E_z = −20 MV/m and −10 MV/m, respectively. Note that despite the relatively long propagation distance, the proposed design provides σ_tz < 20 fs for bunches containing 10^4 electrons, and only σ_tz ≈ 12 fs in the limit of low space-charge effects. It should be mentioned, however, that the main disadvantage of the proposed electron source is its relatively larger spot size, ≈190 µm (rms), compared with that of a compact FED setup, ≈55 µm (rms), with the same initial electron beam parameters.

FIG. 3. Standard deviation, or root-mean-square (rms), electron pulse duration (σ_tz, in fs) as a function of the number of electrons per bunch (#e⁻) obtained from ASTRA particle tracer simulations. 89 The black trace corresponds to our 300 kV FED design and d_T = 10 cm. The blue and red traces correspond to compact 100 kV FED setups (i.e., large flat cathodes) with d_T = 2 cm and constant electrostatic fields of E_z = −20 MV/m and −10 MV/m, respectively.
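Returning to the breakdown criterion β_m|E_g| < |E_s,c| discussed above, a quick numerical check with the quoted figures is sketched below; the particular β_m values tried are illustrative, not measured values for this cathode.

```python
# Rough breakdown-margin check using the numbers quoted in the text.
E_SC_MIN = 6.5e9   # V/m, lower end of the critical surface field for refractory metals
E_G_MAX = 60e6     # V/m, design limit on the geometrical surface field

for beta_m in (20, 50, 100):  # conditioned (20-50) vs polished, mirror-like (~100)
    effective = beta_m * E_G_MAX
    status = "below" if effective < E_SC_MIN else "ABOVE"
    print(f"beta_m = {beta_m:3d}: beta_m*|E_g,max| = {effective / 1e9:.1f} GV/m "
          f"({status} E_s,c ~ 6.5 GV/m)")
```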
The most noteworthy feature of this new source design is its ability to deliver ultrashort multi-electron bursts after significantly long propagation distances. As can be seen in Fig. 4(a) by direct comparison against conventional FED setups, geometrical field enhancement plays a key role in minimizing the temporal broadening caused by the energy spread at photoemission. Temporal broadening is known to be dominant at the initial stage of electron propagation 90 and is found to be approximately inversely proportional to the geometrical surface electric field at the electron birth. Hence, geometrical surface field enhancement appears as a simple means to reduce the temporal broadening introduced by the initial momentum spread.
Therefore, the proposed electrostatic electron source approach lessens the need to use microwave cavities to compensate for such temporal spreading. 67 Moreover, we can also observe in Fig. 4 an important decrease in the electron pulse expansion rates when the KE is increased from 100 keV to 300 keV (red and blue dots versus closed and open black dots). This is a consequence of relativistic effects that diminish space-charge repulsive forces by a factor of γ⁻³. 91 We find it remarkable (see the closed black dots in Fig. 4(b)) that σ_tz is below 60 fs even for a pulse containing 10^4 electrons and after 40 cm of propagation. Longer propagation distances may be advantageous for improving transverse electron beam properties owing to the use of additional electron optics and apertures between the electron source and the sample. Given the extremely short length of the produced electron bursts, we decided to explore the effect of power-source instabilities on the instrument response time. Fluctuations of the electron gun voltage, σ_ΔV, result in variations of the arrival time (t_0) of each electron pulse at the sample (or time-zero jitter, σ_t0). For our 300 kV FED design we found σ_t0/fs ≈ 4×10⁻² · (σ_ΔV/eV) · (d_T/cm). Hence, voltage drifts of 10 ppm (σ_ΔV = 3 eV) and d_T = 10 cm translate into σ_t0 ≈ 1.2 fs. A state-of-the-art high-voltage power supply is therefore necessary to guarantee that the overall temporal instrument response is not limited by fluctuations of the voltage source. 92
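The two numerical relations quoted in this paragraph can be evaluated directly; the snippet below is a plain sanity check of the γ⁻³ space-charge scaling and of the time-zero jitter estimate, using no parameters beyond those stated in the text.

```python
M_E_C2_KEV = 511.0  # electron rest energy in keV

def lorentz_gamma(ke_kev: float) -> float:
    """Lorentz factor for an electron of kinetic energy ke_kev (keV)."""
    return 1.0 + ke_kev / M_E_C2_KEV

g100, g300 = lorentz_gamma(100.0), lorentz_gamma(300.0)
suppression = (g300 / g100) ** 3  # space-charge forces scale as gamma^-3
print(f"gamma: {g100:.3f} at 100 keV vs {g300:.3f} at 300 keV; "
      f"space-charge forces ~{suppression:.1f}x weaker at 300 keV")

def time_zero_jitter_fs(sigma_dv_ev: float, d_t_cm: float) -> float:
    """Time-zero jitter (fs) from the empirical relation quoted in the text."""
    return 4e-2 * sigma_dv_ev * d_t_cm

# 10 ppm drift of 300 kV corresponds to sigma_dV = 3 eV; d_T = 10 cm:
print(f"sigma_t0 = {time_zero_jitter_fs(3.0, 10.0):.1f} fs")  # -> 1.2 fs
```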
CONCLUSIONS
We have presented an electron source that relies solely on electrostatic fields and is capable of generating ultrashort and bright multi-electron pulses with minimal temporal degradation over long propagation distances. The two main ingredients of this electron source design are: (i) geometrical field enhancement, which increases the strength of the electric field at the electron birth and therefore reduces the temporal broadening caused by the initial energy spread; and (ii) higher KE, which helps to diminish the detrimental effects of space charge and leads to a decrease of the electron pulse expansion rate. With a time resolution down to ≈12 fs (rms), FED instruments based on this new electron gun concept hold great promise for resolving even high-frequency vibrational modes without the necessity of implementing RF electron pulse rebunching or all-optical electron pulse compression schemes.
| 3,558.8 | 2017-01-25T00:00:00.000 | ["Engineering", "Physics"] |
Optimized peptide based inhibitors targeting the dihydrofolate reductase pathway in cancer
We report the first peptide-based hDHFR inhibitors designed on the basis of structural analysis of dihydrofolate reductase (DHFR). A set of peptides was rationally designed and synthesized using solid phase peptide synthesis and characterized using nuclear magnetic resonance and enzyme immunoassays. The best candidate among them, a tetrapeptide, was chosen based on molecular mechanics calculations and evaluated in the human lung adenocarcinoma cell line A549. It showed a significant reduction of cell proliferation, and an IC50 of 82 µM was obtained. The interaction of the peptide with DHFR was supported by isothermal calorimetric experiments revealing a dissociation constant Kd of 0.7 µM and a ΔG of −34 ± 1 kJ mol−1. Conjugation with carboxylated polystyrene nanoparticles further improved its growth inhibitory effects. Taken together, this opens up new avenues to design, develop and deliver biocompatible peptide-based anti-cancer agents.
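As a rough consistency check (not part of the original analysis), the reported Kd and ΔG can be compared through the standard relation ΔG° = RT ln(Kd/c°), here assuming a temperature of ~25 °C and a 1 M standard state, since the measurement temperature is not stated in this excerpt.

```python
import math

R = 8.314    # J / (mol K)
T = 298.15   # K, assumed (~25 degC); the actual ITC temperature is not given here
KD = 0.7e-6  # M, dissociation constant from ITC

dG = R * T * math.log(KD / 1.0)  # standard state c0 = 1 M
print(f"dG ~ {dG / 1000:.1f} kJ/mol")  # about -35 kJ/mol, close to the reported -34 +/- 1
```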
All the synthesized peptides were characterized by mass and NMR spectral techniques. The mass spectra of the peptides are shown in Fig. S1. The structure of all the peptides was confirmed by NMR spectroscopy. For instance, the ¹H and ¹H-¹H COSY spectra of peptide 2 are shown in Fig. S2 and S3, respectively. The sequence specificity of the different peptides was confirmed using ROESY spectra. First, all resonances of the amino acid residues in the peptide chain were assigned using the ¹H-¹H TOCSY spectrum (Fig. S4a). The N-terminal amino group of peptide 2 (FMYL) was not visible in the ¹H NMR spectrum, possibly due to broadening. The sequence of the remaining amino acids (MYL) was ascertained using a combination of COSY and ROESY NMR spectra (Fig. S4b). Similarly, following the same approach, the sequence specificity of peptide 11 (YSFML) was confirmed (Fig. S5).
Fourier transform infrared (FTIR) spectral analysis:
The bond stretching frequency of each functional group present in the nanoparticles was recorded using a PerkinElmer FTIR spectrometer (model: L1860121, USA), scanning from 4000 cm⁻¹ to 500 cm⁻¹ over 42 consecutive scans at room temperature.
Size and zeta potential measurement: The hydrodynamic diameter and zeta potential values of the nanoparticles were measured at room temperature using a ZetaPALS zeta potential analyzer (Brookhaven Instruments Corporation).
Atomic Force Microscopic (AFM) analysis:
The surface morphology of the native carboxylated polystyrene nanoparticles and the peptide-conjugated polystyrene nanoparticles was investigated using an atomic force microscope (Park Systems NX-10 AFM, XEI software for imaging) in tapping mode at room temperature.
UV-Visible spectral analysis:
The absorbance values of peptide solutions at different concentrations, and of the unreacted peptide extracts, were recorded on a UV-Visible spectrophotometer (Thermo Fisher Scientific NanoDrop 1000) at a resolution of 1 nm.
Zeta potential measurement:
The zeta potential of the native polystyrene nanoparticles was found to be −30.14 mV. Modification of the particle surfaces with peptide reduced the magnitude of the surface potential to −8.51 mV. This result is expected, as peptide conjugation lowers the number of free carboxyl groups, which are responsible for the high zeta potential magnitude of the native nanoparticles.
Atomic Force Microscopy (AFM) analysis
Surface morphologies of the native polystyrene and Pep-g-PS nanoparticles obtained by AFM analyses are shown in Fig. S6. The native polystyrene nanoparticles exhibited a population of homogeneous, non-agglomerated, spherical particles with smooth surfaces. The size of the discrete nanoparticles was found to be 180-200 nm (length/width and height). After modification with peptide molecules via covalent conjugation, increased agglomeration was observed, with an associated increase in size to 329 ± 81 nm (measured along the longest axis).
Attenuated total reflectance/Fourier transform infrared (ATR/FTIR) spectral analysis:
ATR/FTIR spectra of the native and modified polystyrene nanoparticles were recorded to determine the changes caused by the peptide. The bond stretching vibration frequencies of the different functional groups are shown in Fig. S7. For the native polystyrene particles, the strong absorption peaks at 3308 cm⁻¹, 1722 cm⁻¹, and 1654 cm⁻¹ were assigned to the stretching vibration bands of the -OH group (asymmetric vibration), the -COOH group, and the -C=C group present in the structure of the polystyrene unit. The peaks at 2927 cm⁻¹ and 3056 cm⁻¹ were attributed to the asymmetric stretching vibrations of the -C-H group in the methylene and methine units, respectively. The additional band at 745 cm⁻¹ also confirmed the presence of aromatic units.
The peptide molecules synthesized via the solid-phase route also showed a group of vibration bands at 3470 cm⁻¹, 1695 cm⁻¹, and 1635 cm⁻¹, indicating the presence of strong asymmetric stretching bands of the -NH2 group, the -C=O group, and the -CONH2 group in the peptide chain.
After conjugation of the peptide molecules onto the surface of the polystyrene nanoparticles, the strong absorption band at 1722 cm⁻¹ was shifted to lower frequencies, in the range of 1675-1640 cm⁻¹, compared with both the peptide and the polystyrene, indicating the formation of a new amide linkage between the -COOH group of the polystyrene unit and the -NH2 group of the peptide unit. In addition, the peak around 3300 cm⁻¹ broadened and shifted to 3591 cm⁻¹, indicating the incorporation of additional amine groups into the modified polystyrene.
Video S1 and S2: A549 cells, treated with 1 µg/ml of propidium iodide (PI) to monitor cell death, were either left untreated (Video S1) or treated with 100 µg/ml of peptide 2 and imaged every hour for a period of 44 h using the IncuCyte ZOOM imaging system.
| 1,265.6 | 2018-02-16T00:00:00.000 | ["Chemistry", "Biology"] |
Impacts of the Finnish service screening programme on breast cancer rates
Background The aim of the current study was to examine the impact of the Finnish breast cancer (BC) screening programme on population-based incidence and mortality rates. The programme has historically been targeted at a rather narrow age band, mainly women aged 50–59 years. Methods The study was based on information on breast cancer during 1971–2003 from the files of the Finnish Cancer Registry. Incidence, cause-specific mortality, and incidence-based (refined) mortality from BC were analysed with Poisson regression. Age-specific incidence and routine cause-specific mortality were estimated for the most recent five-year period available; incidence-based mortality was estimated for the whole steady state of the programme, 1992–2003. Results There was excess BC incidence at the actual screening ages; incidence in ages 50–69 was increased by 8% (95% CI 2.9–13.4). There was an increasing temporal tendency in the incidence of localised BC and, correspondingly, a decrease in that of non-localised BC. The latter was most consistent in age groups in which screening had been ongoing for several years or, eventually, after the last screen. The refined mortality rate from BC diagnosed in ages 50–69 changed by -11.1% (95% CI -19.4, -2.1). Conclusions The current study demonstrates that BC screening in Finland is effective in reducing mortality rates from breast cancer, even though the impact at the population level is smaller than expected based on the results of randomised trials among women screened at ages 50 to 69. This may be explained by the rather young age group targeted in our country. Consideration of whether to extend targeted screening up to age 69 is warranted.
Background
The impact of mammography screening in decreasing breast cancer (BC) mortality has been shown among women invited at ages 50-69 in several randomised trials [1]. There is growing evidence from cohort follow-up studies that the service screening programmes implemented in the late 1980s or early 1990s affect breast cancer mortality among the invited to at least a similar degree as the randomised trials [2][3][4][5][6]. Studies using dynamic materials [7][8][9] or modelling historical screening coverage [10,11] have also demonstrated the effectiveness of organised screening. On the other hand, some have proposed that screening has not clearly affected population breast cancer mortality rates [12,13], or that breast cancer or overall mortality is not affected in general [14,15].
There are several biases in dynamic population-based studies. Death from breast cancer often occurs several years after diagnosis; studies based on routine death records may therefore suffer from misclassification of the screening status [2,7,12]. This can be corrected if incidence-based mortality is used. Another source of bias in the early trend studies is that national screening programmes were introduced gradually, and the screening coverage in the targeted groups became complete only after several years of implementation.
The aim of the current study was to examine the impact of the breast cancer screening programme in Finland on population-based incidence and mortality rates. Finland provides an exceptional setting for such a study, because the programme has historically been targeted at a rather narrow age band, mainly women aged 50-59 years.
Methods
The study was based on information on breast cancer (ICD-10 code C50) from the files of the Finnish Cancer Registry. Women diagnosed with a new (incident) primary breast cancer (N = 74,175), or who died from breast cancer (N = 22,799), in 1971-2003 were included. The analysis was restricted to invasive breast cancers. Following the age groups targeted in screening, ages 40-69 were included in the analysis of incidence data; follow-up of subsequent deaths from breast cancer was extended up to age 79. The incident cases were classified into two groups by stage of the disease, localised and non-localised at diagnosis, based mainly on the lymph node status as available in the cancer registration. For 9.4% of the BC cases the stage information was not available at the cancer registry. The proportion of cases with unknown stage was 13% in 2003, and thus stage-specific information was considered adequate for the analyses up to the year 2002.
In order to assess the historical coverage of screening, national data on the invitations were collected from the records of the Mass Screening Registry, a subunit of the Finnish Cancer Registry. Coverage of the screening registration varied during the 15-year period of routine BC screening: in 1987-1990 the average coverage of registration of invitations was 83% (N = 358,200/431,300), whereas from 1991 onwards the estimated annual registration coverage was > 95% (1,257,000/1,259,200; estimated average registration coverage 99.8%). These estimates are derived from comparisons of the yearly registered invitations with external documentation on the invitations maintained by the Radiological Society of Finland [16].
If a municipality or a screening centre had not registered its invitations, we requested from the municipal health authorities or screening centres mailed information on the invited age groups, by birth cohort and invitational year, and thereafter estimated the biannual coverage of invitations by combining the numbers of invited women with the respective population data. We obtained mailed information on screening invitations during 1987-1990 for 140 municipalities for which registered invitational data were available only from 1991 onwards. During the overall study period the number of municipalities changed from 461 (1987, 1988) to 446 (2002); annual invitational data became available in total for 410 (89%) of them.
For incidence-based, i.e., refined, mortality, individual patient data were studied. Only those deaths were included where the diagnosis of breast cancer took place in the given calendar period and age group. For the refined mortality, age was defined as the age at diagnosis of the corresponding incident case. Deaths from breast cancer were included and stratified by 5-year calendar period, birth cohort, and age at diagnosis. In these data, breast cancer deaths occurring at an age older than 79 years were excluded. The length of follow-up after diagnosis was made symmetric between the groups on age and period: deaths in 1972-1983 among those diagnosed in 1972-76 at ages 40 to 69, deaths in 1977-88 among those diagnosed in 1977-81 at ages 40-69, and so on, until deaths in 1992-2003 among those diagnosed in 1992-1996. There was thus a 5-year period for incidence and a 12-year period for mortality in each time window. Incident cases from the calendar period 1992-1996 were the most recent ones included in this incidence-based analysis. This restriction was made to obtain comparability of information over the decades, and also to obtain as long a follow-up time as possible.
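A minimal sketch of this symmetric windowing is given below. The first, last, and general pattern of the windows follow the text ("and so on"), so the two middle windows are inferred; the record format and function name are illustrative.

```python
# Incidence-based ("refined") mortality windows: 5-year diagnosis periods,
# each followed by a 12-year mortality follow-up, as described above.
WINDOWS = [
    {"dx": (1972, 1976), "death": (1972, 1983)},
    {"dx": (1977, 1981), "death": (1977, 1988)},
    {"dx": (1982, 1986), "death": (1982, 1993)},
    {"dx": (1987, 1991), "death": (1987, 1998)},
    {"dx": (1992, 1996), "death": (1992, 2003)},
]

def counts_refined_death(dx_year, dx_age, death_year, death_age):
    """True if a BC death counts towards refined mortality in some window."""
    if not (40 <= dx_age <= 69) or death_age > 79:
        return False
    return any(w["dx"][0] <= dx_year <= w["dx"][1] and
               w["death"][0] <= death_year <= w["death"][1] for w in WINDOWS)

print(counts_refined_death(1993, 55, 2001, 63))  # True
print(counts_refined_death(1998, 55, 2003, 60))  # False: diagnosed after 1996
```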
Annual population denominators were used by 1-year age group for estimating screening coverage and in the refined-mortality follow-up, and by 5-year age group and calendar year for breast cancer incidence and routine mortality rates.
Coverage estimates of screening
Invitational coverage was defined as the number of women invited at least once within the recommended screening interval (a two-year period) divided by the population size. The actual invitational coverage increased gradually during the first five years of the programme, 1987-1991, i.e., during the pseudo-randomised implementation phase (Figure 1). The actual coverage proportions were converted to estimates describing whether a woman had ever been invited in her lifetime. The estimated coverage of 'ever invited' was approximately 10%-89% within ages 50-64 during 1987-1991, and 100% since 1992. In ages 65-69, lifetime coverage rose above 10% from 1992 and reached 100% by 1997, mainly through ageing of the invited group.
Statistical analyses
The incidence of and mortality from breast cancer, as well as the incidence-based mortality, were analysed with Poisson regression. To adjust for the fluctuating changes in incidence and mortality with age and cohort, the incidence and mortality were modelled as functions of numerical calendar year and polynomials of numerical age (in 5-year age groups) and synthetic birth cohort [17]. The orders of the polynomials, i.e., the forms of the marginal age and cohort curves, were searched for overall incidence, localised and non-localised incidence, and mortality separately, using the data from the calendar period 1953-1970. The choice of polynomial degrees was based on the likelihood ratio statistic and descriptive evaluation. In the estimation of overall incidence, a 10th-order polynomial of age and a 4th-order polynomial of cohort were needed; for localised and non-localised incidence the corresponding orders were 10 and 6 for age, and 8 and 2 for cohort, respectively; and for mortality the orders of the polynomials were 5 for both age and cohort. In the analysis of incidence-based mortality, the orders of the polynomials for age at diagnosis and cohort were both 3, and each year of follow-up between diagnosis and death was included as a level of a categorical variable.
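The sketch below shows how such a Poisson regression with polynomial age and cohort terms and a person-years offset can be set up. The data frame, column names, and low polynomial orders are hypothetical, and the statsmodels formula interface is only one way of fitting this kind of model; the authors may well have used different software. The linear cohort term is omitted because cohort = year − age would make it exactly collinear with the other linear terms.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical aggregated table: one row per age group x calendar year cell.
df = pd.DataFrame({
    "cases":  np.random.poisson(30, 200),            # BC counts
    "pyears": np.random.uniform(5e4, 2e5, 200),      # woman-years at risk
    "age":    np.random.choice(np.arange(40, 70, 5), 200),
    "year":   np.random.choice(np.arange(1971, 2003), 200),
})
df["cohort"] = df["year"] - df["age"]

# Poisson regression of counts on calendar year plus polynomial terms of age
# and cohort, with log person-years as offset (orders kept small for brevity).
formula = ("cases ~ year"
           " + age + I(age**2) + I(age**3)"
           " + I(cohort**2) + I(cohort**3)")
model = smf.glm(formula, data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["pyears"])).fit()
print(model.summary())

# A screening-period indicator (e.g., year >= 1992 for screened ages) could be
# added to the formula to estimate the excess relative risk during steady state.
```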
In the statistical analyses, emphasis was placed on the steady state of the programme, with high actual screening coverage among women in the main target ages (particularly ages 50-59). We included an indicator for the period from 1992 onwards, with the exclusion of the years 1987-91, to indicate the steady state of the screening programme. During the steady state, prevalence screens took place mainly at the onset, at age 50. In the first, descriptive phase, the observation during the recent screening period 1998-2002 was contrasted with the expectation without screening, as derived from modelling information prior to the screening era (years 1971-1986) within each age group. Because of the symmetry requirements in the data input when studying incidence-based mortality (see above), this expectation could not be estimated for exactly the same period of time, and these results are shown separately. In the second phase, the effect of screening was studied for the screened age groups only (with an estimated coverage of at least 90%). For example, the age group 60-64 years consisted of screened birth-year cohorts after 1992, and the group 65-69 years since 1997. In order to estimate the screening effects, parallel developments among screened and non-screened women were assumed. The non-screened group consisted here of women aged 40-49 years, and of women aged 65-69 years up to the end of 1991. Note that even the observed incidence and mortality rates shown in the tables below are based on models.
Ethical consideration
Information on breast cancer incidence and mortality, as well as on population numbers and screening indices, was based on tabular statistical data only; according to the current legislation, no approval from the ethical committee was required.

Results

Table 1 shows the age-specific BC incidence and mortality rates observed in 1998-2002; the corresponding estimates without screening, as extrapolated from the experience of these same age groups in the period before the national implementation of screening; excess relative risk estimates; and woman-years at risk. In addition, Figures 2 and 3 illustrate the overall developments of BC incidence and routine mortality trends by calendar year and 5-year age group among women aged 40-69, as well as the values obtained by fitting the models.
There was excess BC incidence at the actual screening ages (18% and 11% in ages 50-54 and 55-59 years, Table 1). In all age groups studied, there was an increasing tendency of localised breast cancers, particularly pronounced in the screening target ages, and, correspondingly, a decreasing trend in the incidence of non-localised breast cancers. The decrease in the incidence of non-localised breast cancers was most consistent at ages 55 to 69 years, i.e., in late screening or up to five years after the last screen. The age-specific routine breast cancer mortality rate had increased up to the mid-1990s, particularly in ages younger than those targeted by the national screening programme (ages 40-44 and 45-49 years; Table 1). We observed, however, a consistent decreasing trend in the mortality rate since the mid-1990s in all studied age groups (Figure 3). In general, the death rates in ever-screened age groups (50 to 69, age at death) had increased less than among the non-screened, or had remained stable (Table 1).
Incidence-based mortality rates from breast cancers diagnosed in ages 40-49 showed an increase of 28%-34%. In the screening ages (50 to 69), the corresponding point estimates showed a slightly smaller increase or a decreasing tendency (Table 2). The decrease was largest for deaths from BC cases diagnosed at ages 55-59 and 60-64. By specific age group, the incidence-based mortality result was statistically significant only among women aged 45-49.
Figure 2. Observed and fitted breast cancer incidence rates by age group. The first five years since the implementation of the national programme have been excluded from the fitted rates.

Table 3 summarises the tentative screening effects, obtained from comparisons of trends between screened and non-screened age groups. The overall BC incidence was in excess by 8.0%; in particular, the incidence of localised BCs was in excess (22.5%). The incidence of breast cancers non-localised at diagnosis was 9.0% lower than the expectation in the absence of screening. Routine BC mortality in all the screened ages combined (age at death) had decreased by 5.6% compared with the expectation without screening. The change in refined BC mortality was -11.1% (95% CI -19.4, -2.1%). Among women screened most intensively (50-64), the corresponding estimates by 5-year age groups were -3.1%, -14.7%, and -17.1%, respectively.
Discussion
Two phases of analysis were constructed. The first, descriptive phase (Tables 1 and 2) gave an overall picture of the situation, whereas the second, more analytic phase (Table 3) targeted directly the estimation of the effects of the screening programme. In association with the steadily running breast cancer screening service, we observed a slight increase in the overall breast cancer incidence and a decrease in the incidence of non-localised breast cancers. A decrease in the incidence of non-localised breast cancers may be considered an indicator predicting reductions in the mortality rate. A small, even though statistically non-significant, decrease was documented in breast cancer mortality among the invited age groups. The effect of screening obtained from the incidence-based mortality analysis varied from -3.1% to -17.1% in the most intensively screened age groups. Despite the higher than predicted mortality, we observed a consistent recent decrease in breast cancer mortality since the mid-1990s in all the studied age groups 40-69.

Figure 3. Observed and fitted breast cancer mortality rates by age group at death. The first five years since the implementation of the national programme have been excluded from the fitted rates.
The invitational coverage of the screening programme was close to 100% in the main target ages of the organised screening programme, 50-59 years; among non-targeted ages only very low coverage was observed. We made efforts to assess the coverage of the registered invitations using external information. Attendance (compliance) rates in the programme have been very high: during 1991-1999 they were 89% at the first screen and 92% at subsequent screens, as reported from 10 centres accounting for 55-60% of the screens in the whole national programme [18]. The Finnish Radiological Society [16] reported an average compliance rate of 89% among all the centres in 1987-1997.
During the implementation period the screening design was randomised, which affected the population-based coverage. After 1991 (i.e., after the randomisation period), some municipalities may have used a three-year invitational interval instead of the recommended two years for a few birth cohorts. Sometimes the first screen could take place at age 51. All of these factors explain why age-specific coverage fell short of 100%. Overall, however, these deficits in screening coverage had only a minor impact on the national programme.
When considering the ages targeted for screening, one limitation was that the screening coverage was not complete at ages 60-69. Another limitation was that, during the late 1990s, several municipalities offered this age group another population-based invitational screening modality outside the organised programme, paid for by the women themselves. Little information is available on the functioning of this self-paid modality. According to the only nation-wide published information, its invitational coverage at ages 60-69 could have been almost at the same level as that of the organised screening modality during the late 1990s [19]. Based on data from the city of Turku, attendance rates in the self-paid modality could have been about 20% lower among women aged 60 to 69 than in the organised, municipality-funded screening modality [20].
In the randomised screening trials, an average effect of about -25% on the refined mortality rate among the invited has been reported when the screening programme was run at ages 50-69 [1]. The current study indicated an average effect of -11.1% on the population-based refined mortality rates in the respective age groups. Effects associated with screening at ages 50-59 years, as primarily defined in the Finnish screening policy, have not been reported in detail from the randomised studies. In the Swedish trials, the screening effect was -16% (95% CI -30% to 1%) among women who started screening at age 50-59 and -33% (95% CI -47% to -16%) among women who started screening at age 60-69 [21]. Unlike in the Finnish programme, women who had started screening at age 50-59 were systematically screened also after they reached age 60; therefore our current estimate (-8.6%) is not directly comparable. Among women aged 60-69 at diagnosis, the effect in the current study was -16.6%, only about half of the impact reported from the above randomised studies. Concerning the latter comparison, it is likely that the deficit in actual screening coverage at that age affects the Finnish rates.
A larger point estimate of effectiveness for screening women at age 50-69, in comparison with screening women only at age 50-59, is also supported by a recent report from the Copenhagen BC screening programme, with the analysis based on age at death (not age at screen, as in our study) [3]. The average effect on refined mortality at ages 50 to 79 at death was -25%; the corresponding average estimate at ages 50 to 64 at death (where screening at age 50-59 should primarily have had its effect), as extracted from the report, is -11%, and at ages 65-79 at death (where screening at age 60-69 should primarily have had its effect) it is -30%. Our results on refined mortality are in line with the Copenhagen findings for the younger targeted age group, but not with those for the older targeted age group.
An earlier study attempted to estimate how large an impact on 'routine' breast cancer deaths would have occurred in Finland if a biennial breast cancer screening programme had been started in 1988 among women aged 50 to 69 [22]. A screening coverage of 80% was assumed, as well as efficacy as reported from the randomised trials. At ages 50 to 69 at death, the estimated average decrease in the population-based routine death rate from breast cancer was -8% during the first five years after the onset of the programme, and -14% and -19% during the next two five-year periods. The estimate of the current study (-5.6%) was clearly smaller than the two latter estimates. It is likely that these findings of small impacts on the population-based rates in the current study can be explained largely by the rather narrow age group historically targeted in our country.
The impacts of breast cancer screening on incidence have not been adequately investigated for service screening programmes. In randomised trials, controls were usually screened at the end of the randomisation period (usually within 6-8 years of the onset), and therefore follow-up of incidence is affected by screening [1]. There is a unique report from a trial in Malmö, Sweden, where controls aged 55-69 were not invited; it suggested an excess of about 10% in lifetime BC incidence attributable to screening [23].
Another study, using a non-randomised design, suggested much larger excess incidence rates [24]. A long-term follow-up study of a recent routine screening programme, with correction for lead time, has suggested, however, that only rather small over-diagnosis of some 5% in relative terms might be expected [25]. A recent trend study from Finland, with a very long follow-up time since the last screen when screening was done at ages 50-59, reported no excess in the estimated cumulative incidence [26]. The current study suggests that if there is over-diagnosis in the Finnish BC screening programme, the relative estimate is small. Even a small excess risk in breast cancer incidence may translate into considerable numbers when contrasted with the number of deaths prevented, particularly since the number of deaths prevented by screening at ages 50 to 59 only is rather small.
There was a consistent decrease in BC mortality since the mid-1990s in all the age groups studied. Importantly, this was also seen among women below the ages targeted for screening. This development may indicate an effect of improved treatments or of improved early diagnosis (stage migration) outside organised screening. The trend is in general agreement with findings from the US that developments in treatment and in early diagnosis in areas of health care other than organised screening probably have a larger effect in rather young age groups, say 40 to 54, than in the older ages targeted by screening programmes [27]. This development could affect the number of deaths prevented and also the relative screening effect in the future.
One further limitation in estimating the tentative screening impact was that women aged 40-49 (almost entirely unscreened), as well as women aged 65-69 during the first few years of the programme, contributed to the expected rates without screening. We considered the inclusion of non-screened age groups necessary for estimating screening effects (Table 3), because there were no unscreened regions in the country, and otherwise we could not have assessed developments during the screening period in the absence of screening, such as changes in background risk, changes in diagnostic activity or in the use of mammograms outside screening, or improvements in treatment. For estimating the steady phase of the programme, when prevalence screens take place mainly at age 50, we considered that the very low coverage of screening invitations below age 50 has no material effect. Including the experience of unscreened age groups during the screening period did seem to alter the estimates meaningfully; compare the results in Tables 1 and 2 with those in Table 3.
Irrespective of the decreasing trend since the mid-1990s, the long-term trend showed a rather large overall increase in the mortality rates among women below age 50. The increase in the background risk, earlier reported to be among the highest in Finland [28], is the most likely explanation. Among young women the relative risk estimates may be imprecise due to small numbers. A further problem for the modelling was that the baseline absolute mortality rate was much lower in the youngest age groups than among women in the screening ages (Tables 1 and 2).
The incidence and mortality were modelled with Poisson regression using numerical calendar time and polynomial functions of 5-year age group and synthetic cohort. When polynomial terms of age and cohort are included in the model, the changes in the incidence and mortality rates with age and cohort can be adequately taken into account. One must, however, carefully check the plausibility and sensitivity of the model-based rates, since this kind of modelling can lead to implausible predictions. We look forward to comparing the current modelling results with other approaches for estimating incidence and mortality, especially the use of incidence-based mortality.
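As a concrete illustration of this modelling step, a minimal sketch is given below, assuming a data frame with columns cases, person_years, year, age and cohort and second-degree polynomial terms; these names and degrees are illustrative placeholders, not the specification actually used in the study.

```python
# Minimal sketch of a Poisson rate model with calendar-time, age and cohort terms.
# Column names and polynomial degrees are illustrative, not the study's actual model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_rate_model(df: pd.DataFrame, age_degree: int = 2, cohort_degree: int = 2):
    """log(rate) = b0 + b1*year + polynomial(age) + polynomial(cohort)."""
    X = pd.DataFrame({"year": df["year"]})
    for d in range(1, age_degree + 1):
        X[f"age^{d}"] = df["age"] ** d          # polynomial terms for 5-year age group
    for d in range(1, cohort_degree + 1):
        X[f"cohort^{d}"] = df["cohort"] ** d    # polynomial terms for synthetic cohort
    X = sm.add_constant(X)
    model = sm.GLM(
        df["cases"], X,
        family=sm.families.Poisson(),
        offset=np.log(df["person_years"]),      # person-years as exposure
    )
    return model.fit()

# Fitted (expected) rates from such a model can then be contrasted with observed
# rates to express screening-period changes as relative differences.
```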
The effect of screening on incidence and mortality can be considered stable from five years after the beginning of screening. Therefore, the first five years, with their varying effects on incidence and mortality, were excluded from the basic descriptive analyses. A further problem when considering the national trends was that the Finnish programme was implemented gradually, owing to the randomised design of the implementation phase. Including or excluding that period did not materially affect the estimates.
In this study, information on screening was based mainly on dynamic (open) information on the whole targeted population at ages 50-59. At older ages, 60-69 years, women were invited only partially or irregularly. Expected rates without screening were based on the patterns before the national screening programme, as well as on the experience of non-screened age groups during the screening era. As the whole population in the mainly targeted ages was invited, there was no experience of non-screened women at those ages during the screening period. Using individual-level information on those invited and screened during the current screening periods could still bring new information for evaluating screening effectiveness.
Conclusion
In conclusion, the current study demonstrates that BC screening in Finland has been effective in reducing mortality from breast cancer. The impact is smaller than that observed in randomised trials. This may be explained at least partially by the rather narrow age group targeted; a large proportion of women aged 60-69, who were included in the target populations of the randomised breast cancer screening trials, were unscreened in our country. The results warrant considering whether screening should be extended up to age 69. Given the rather small relative estimates, further research using individual-level information on those invited and screened could bring new information on the effectiveness of the screening programme. | 5,835.2 | 2008-01-28T00:00:00.000 | ["Medicine", "Biology"] |
Magnified Time-Domain Ghost Imaging
Ghost imaging makes it possible to image an object without directly seeing it. Originally demonstrated in the spatial domain using classical or entangled-photon sources, ghost imaging was recently shown to be transposable into the time domain to detect ultrafast signals with high temporal resolution. Here, using an incoherent supercontinuum light source whose spectral fluctuations are imaged via spectrum-to-time transformation in a dispersive fiber, we experimentally demonstrate magnified ghost imaging in the time domain. Our approach is scalable and overcomes the resolution limitation of time-domain ghost imaging.
Ghost imaging allows the imaging of an object without directly seeing it. Originally demonstrated in the spatial domain, ghost imaging was recently shown to be transposable into the time domain to detect ultrafast signals, even in the presence of distortion. We propose and experimentally demonstrate a temporal ghost imaging scheme that generates a 5× magnified ghost image of an ultrafast waveform. Inspired by shadow imaging in the spatial domain and building on the dispersive Fourier transform of an incoherent supercontinuum in an optical fiber, the approach overcomes the resolution limit of standard time-domain ghost imaging, generally imposed by the detector speed. The method can be scaled to higher magnification factors using longer fiber lengths and a light source of shorter duration.
I. INTRODUCTION
Ghost imaging retrieves the image of an object from the correlation between a spatially resolved structured illumination pattern and the total intensity transmitted through (or reflected by) the object. 1,2 Originally developed to test the EPR paradox using entangled photon sources, 2-6 the concept has subsequently been expanded to classical spatially incoherent light sources 2,7-11 and more recently to pre-programmed illumination with a spatial light modulator. 12 More advanced schemes based on multiplexing have also been demonstrated, to reduce the acquisition time 13 or to image objects which vary slowly with time. 14 Compared to standard imaging techniques, a unique property of ghost imaging is its intrinsic insensitivity to distortions that may occur between the object and the single-pixel detector that only measures the total transmitted (or reflected) intensity. 15,16 This specific feature has made ghost imaging particularly attractive for many applications ranging from microscopy and compressive sensing to LIDAR or atmospheric sensing.
Very recently, taking advantage of the space-time propagation correspondence in optics, 17-20 ghost imaging has been transposed into the time domain to produce the image of an ultrafast signal by correlating, in time, the intensity of two temporally incoherent light beams, neither of which independently carried information about the signal. 21 Significantly, it was also demonstrated that the technique is insensitive to distortion that the signal may experience between the object and the detector, e.g., due to dispersion, nonlinearity, or attenuation. Although this transposition has opened up novel opportunities for the detection of ultrafast waveforms, an important limitation that may restrict the usability of time-domain ghost imaging is its finite resolution, determined by the fluctuation time of the random light source and/or the speed of the detection system.
Here, we propose a new approach for ghost imaging in the time domain that magnifies the retrieved temporal object and allows overcoming the limited resolution of the original ghost imaging scheme imposed by the finite speed of photodetectors, without the need for advanced temporal imaging schemes. 22 The approach is inspired by shadow imaging in the spatial domain and builds on the dispersive Fourier transform of the fast random fluctuations of an incoherent supercontinuum (SC) generated by noise-seeded modulation instability. We experimentally demonstrate a factor of five magnification of a temporal object in the form of the transmission of an electro-optic modulator. Higher magnification factors can be obtained by using a longer length of dispersive fiber or a shorter initial pulse duration. By proposing a new approach to improve the resolution of time-domain ghost imaging, our results open a new avenue to blindly detect and magnify ultrafast signals.
II. MAGNIFIED TIME-DOMAIN GHOST IMAGING USING DISPERSIVE FOURIER TRANSFORM
In time-domain ghost imaging, the fast temporal fluctuations of an incoherent light source are divided between a test arm, where a temporal object modulates the intensity fluctuations of the source, and a reference arm where the fluctuations are resolved in real time in the image plane 21 [i.e., the plane of the reference arm detector, see Fig. 1(a)]. By correlating the time-resolved fluctuations from the reference arm with the total (integrated) power transmitted in the test arm, a perfect copy of the temporal object can be retrieved. The temporally incoherent light source may be a quasi-continuous wave source with a fluctuation time inversely proportional to the source bandwidth, or a pulsed source with large intensity variations both within a single pulse and from pulse to pulse. 23 The correlation is calculated from multiple measurements synchronized with the temporal object. Note that the average intensity value of the source over the measurement time window does not affect the ghost image; however, if the magnitude of the source intensity fluctuations varies over the duration of the temporal object (which can be the case especially for a pulsed source), the ghost image is distorted and requires post-processing correction. 23

FIG. 1. Operation principle of (a) standard time-domain ghost imaging and (b) magnified time-domain ghost imaging. The ghost plane is defined in the reference arm as the equivalent of the object plane (which is, by definition, located in the test arm), such that the dispersion accumulated by the light between the source and the ghost plane is equal to the dispersion accumulated between the source and the temporal object. I = light intensity. T = transmission of an intensity modulator (= object). C = correlation function.
In order to obtain a magnified ghost image, the temporal fluctuations of the source in the reference arm must be magnified 23 whilst in the test arm one only needs to measure the total (integrated) intensity with no modification compared to standard time-domain ghost imaging [see Fig. 1(b)]. In principle, magnification of the source fluctuations can be obtained using a time lens system. [24][25][26] However, time lens systems generally require complicated schemes to impose the necessary quadratic chirp onto the signal to be magnified and typically operate only at a fixed repetition rate with limited numerical apertures.
A different approach consists of using a broadband, temporally incoherent light source whose random intensity fluctuations can then be magnified in the reference arm using a simple dispersive fiber. There are of course some constraints which need to be considered. First, the time span ΔT_GP of the light source fluctuations in the ghost plane, defined as the equivalent of the object plane in the reference arm (see Fig. 1), needs to be longer than or equal to the duration of the temporal object to be retrieved. Second, the characteristic fluctuation time at the ghost plane (and correspondingly in the object plane), τ_c^GP, needs to be shorter than the shortest object detail that one wishes to resolve. These criteria can be fulfilled by an incoherent SC source whose spectral fluctuations are converted into the time domain using spectrum-to-time transformation or dispersive Fourier transform, as illustrated in Fig. 2, whereby the dispersion of an optical fiber separates in time the fluctuations associated with each spectral component. 27,28 The characteristic frequency of the SC spectral fluctuations is well approximated by Δω_c ≈ 1/ΔT_0, where ΔT_0 is the average duration of the SC pulses. When these spectral fluctuations are converted into the time domain by a dispersive fiber with total dispersion β_2 L_a, the resulting incoherent pulse source exhibits (intra-pulse) temporal fluctuations τ_c^GP ≈ |β_2| L_a / ΔT_0 at the ghost (and object) plane. The fluctuations from the pulse source are then divided between the test arm, where the temporal object is located, and the reference arm, where they are stretched further in a second segment of dispersive fiber with total dispersion β_2 L_b. The fluctuation time at the image plane in the reference arm is then τ_c^IP ≈ |β_2| (L_a + L_b) / ΔT_0, such that the temporal fluctuations in the reference arm are magnified by a factor M = (β_2 L_a + β_2 L_b) / (β_2 L_a) = 1 + L_b / L_a compared to the fluctuations in the ghost plane [see Fig. 2(a) and also the supplementary material].

FIG. 2. ΔT_0 and Δν_0 represent the initial duration and bandwidth of the SC pulses, respectively. ΔT_GP and ΔT_IP represent the duration of the SC pulses at the ghost and image planes, respectively. τ_c^GP and τ_c^IP denote the characteristic fluctuation time within each SC pulse at the ghost and image planes, respectively. τ_s is the temporal blur resulting from the different spectral components λ_1 and λ_2 that correspond to the temporal edges of the initial SC pulses and temporally overlap in the ghost plane.
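As a quick numerical check of these relations, the short snippet below evaluates the fluctuation times and the magnification factor; the parameter values (β_2 = −20 ps²/km, L_a = 2.5 km, L_b = 10 km, ΔT_0 ≈ 200 ps) are those quoted in the experimental section below, and the calculation is only an order-of-magnitude illustration.

```python
# Numerical check of the magnification relations (units: ps, ps^2/km, km).
beta2 = -20.0   # ps^2/km, fiber dispersion at 1550 nm
L_a = 2.5       # km, fiber before the ghost plane
L_b = 10.0      # km, additional fiber in the reference arm
dT0 = 200.0     # ps, average initial SC pulse duration

tau_c_GP = abs(beta2) * L_a / dT0           # fluctuation time at the ghost plane
tau_c_IP = abs(beta2) * (L_a + L_b) / dT0   # fluctuation time at the image plane
M = 1 + L_b / L_a                           # magnification factor

print(f"tau_c^GP ~ {tau_c_GP:.2f} ps")      # ~0.25 ps
print(f"tau_c^IP ~ {tau_c_IP:.2f} ps")      # ~1.25 ps
print(f"M = {M:.1f}")                       # 5.0
```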
By correlating the magnified random fluctuations measured in the reference arm with the total intensity transmitted through the object in the test arm, one then directly obtains an M-times magnified image of the temporal object, effectively improving the overall resolution of the ghost imaging scheme by the same factor (for a fixed detector bandwidth). The initial duration ΔT_0 of the SC pulses is finite, such that each time instant in the ghost plane (and, equivalently in the test arm, each time instant of the object plane) actually includes the contribution from several spectral components. These spectral components propagate to the image plane with different group velocities due to dispersion [see Fig. 2(b)], which results in a "temporal blur effect" that limits the resolution of the imaging system. The temporal blur is defined as the delay τ_s, in the image plane, between the frequencies corresponding to the temporal edges of the initial SC pulses and contributing to the same time instant in the ghost (and object) plane (see the supplementary material). Basic geometric considerations in Fig. 2 show that this blur amounts to τ_s = (M − 1) ΔT_0. For each SC pulse i, the oscilloscope records a pair of measurements: the magnified fluctuations I_ref^(i)(t) and the total intensity transmitted through the electro-optic modulator, I_test^(i). This pair is recorded N times, and the correlation function which produces the ghost image is then calculated (up to normalization) according to

C(t) = ⟨ΔI_ref^(i)(t) · ΔI_test^(i)⟩,

where ⟨·⟩ represents the ensemble average over the N realizations (i = 1 . . . N) and ΔI^(i) = I^(i) − ⟨I^(i)⟩.
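A minimal numerical sketch of this correlation step is given below; the array shapes, the toy object, and the normalization by standard deviations are illustrative assumptions, not the exact processing used in the experiment.

```python
import numpy as np

def ghost_image(I_ref, I_test):
    """Correlate reference-arm traces with integrated test-arm intensities.

    I_ref  : (N, T) array, time-resolved reference traces, one per SC pulse
    I_test : (N,) array, total (integrated) intensity in the test arm
    Returns C(t), whose shape reproduces the (magnified) temporal object.
    """
    dI_ref = I_ref - I_ref.mean(axis=0)            # fluctuations about the ensemble mean
    dI_test = I_test - I_test.mean()
    C = (dI_ref * dI_test[:, None]).mean(axis=0)   # ensemble average over N realizations
    # Normalization by the standard deviations (an assumption made here for convenience)
    return C / (dI_ref.std(axis=0) * dI_test.std() + 1e-12)

# Toy usage: random "speckle" traces and a two-pulse object
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 400)                                    # ns
obj = 1.0 * (np.abs(t - 1.0) < 0.375) + 0.5 * (np.abs(t - 2.5) < 0.375)
I_ref = rng.gamma(2.0, 1.0, size=(20_000, t.size))                # incoherent fluctuations
I_test = (I_ref * obj).sum(axis=1)                                # single-pixel detection
C = ghost_image(I_ref, I_test)                                    # ~ shape of obj
```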
III. EXPERIMENTAL SETUP
The experimental setup is illustrated in Fig. 3(a). The light source consists of an incoherent SC followed by a dispersive fiber which performs the dispersive Fourier transform. The incoherent SC with large shot-to-shot spectral fluctuations is generated by injecting 0.5 ns pulses produced by an erbium-doped fiber laser (Keopsys PEFL-KULT) operating at 1547 nm with a 100-kHz repetition rate into the anomalous dispersion regime of a 6-m long dispersion-shifted fiber (Corning ITU-T G.655) with zero dispersion at 1510 nm. The SC generation process is initially triggered by modulation instability, which breaks up the long initial pulse into a series of random, shorter pulses. The spectral components of the SC below 1550 nm are filtered out with a long-pass filter to obtain a relatively flat spectrum. The average power of the SC is then reduced with an attenuator (Thorlabs VOA50-FC) to avoid any nonlinear process that may occur during further propagation in an optical fiber. After the spectral filtering stage, the SC has a bandwidth of 80 nm, and the average duration of the SC pulses ΔT_0 was measured to be less than 200 ps. Note that the duration of the SC after filtering is shorter than that of the original pump pulses. This is because the SC is not transform-limited, so that filtering out the residual pump effectively removes the undepleted temporal wings of the initial long pulse, thus reducing the overall duration.
The spectral fluctuations of the SC are subsequently converted into the time domain through dispersive Fourier transform using an SMF-28 fiber of length L_a = 2.5 km and dispersion parameter β_2 = −20 ps²/km at 1550 nm. The average duration ΔT_GP of the source pulses at the ghost (and object) plane was then measured to be approximately 4 ns. As required, this value exceeds the duration of the temporal object to be measured (see below). The standard deviation of the magnitude of the fluctuations is nearly constant over this time span [see the dotted black curve in the inset of Fig. 3(b)], such that the ghost image will not be distorted. The pulses are then split between the test and reference arms with a 50/50 coupler. In the test arm, the temporal object is the transmission of a zero-chirp 10-GHz-bandwidth electro-optic modulator (Thorlabs LN81S-FC) driven by a programmable nanosecond pulse generator (iC-Haus iC149). It consists of two 0.75-ns pulses with different amplitudes, spanning a total duration of 3.5 ns. In the reference arm, the temporal fluctuations are magnified with an additional dispersive step in an SMF-28 fiber of length L_b = 10 km, selected to yield an integer magnification factor of 5. Direct measurement of the average duration of the pulses ΔT_IP after the additional dispersive step in the reference arm confirmed the 5× increase in the duration, to 20 ns.
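For orientation, a rough consistency check (an assumption for illustration, not a calculation taken from the text) compares the measured durations with a simple dispersive-stretching estimate, ΔT_GP ≈ |β_2| L_a · Δω_0, based on the ~80 nm filtered bandwidth:

```python
import math

c = 3e8                    # m/s
lam = 1550e-9              # m, centre wavelength
d_lam = 80e-9              # m, filtered SC bandwidth (~80 nm)
beta2_La = 2e-26 * 2.5e3   # s^2: (20 ps^2/km = 2e-26 s^2/m) * 2500 m = 50 ps^2

d_omega = 2 * math.pi * c * d_lam / lam**2   # rad/s, angular bandwidth
dT_GP = abs(beta2_La) * d_omega              # stretched duration at the ghost plane
dT_IP = 5 * dT_GP                            # after further magnification by M = 5

print(f"dT_GP ~ {dT_GP * 1e9:.1f} ns")   # ~3 ns, same order as the measured ~4 ns
print(f"dT_IP ~ {dT_IP * 1e9:.0f} ns")   # ~16 ns, same order as the measured ~20 ns
```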
The detector in the test arm is a 5-GHz InGaAs photodiode (Thorlabs DET08CFC/M) whose response is integrated over 5 ns, such that the effective bandwidth is equal to 0.2 GHz only and the temporal profile of the object cannot be resolved in the test arm. The detector in the reference arm is a 1.2-GHz InGaAs photodiode (Thorlabs DET01CFC). The intensities measured by the two detectors are recorded by a real-time oscilloscope (Tektronix DSA72004). The detection bandwidth was intentionally limited to 625 MHz (with a sampling rate of 6.25 GS/s). Thus, the effective response time of the detection system that measures the fluctuations in real time in the reference arm is τ d = 1.6 ns.
IV. RESULTS AND DISCUSSION
The correlation C(t) calculated over N = 100,000 SC pulses allows us to construct a ghost image magnified by a factor of M = 5, as shown in Fig. 4. In this figure, we compare the ghost image with the original temporal object, measured directly with a continuous-wave laser and a 5-GHz photodiode (Thorlabs DET08CFC/M) and magnified 5× through post-processing. We see very good agreement, both in terms of the duration and of the amplitude ratios of the object pulses, confirming the fivefold magnification of the object duration in the ghost imaging configuration. The slight distortion of the magnified image is caused by the blur effect of the magnifying scheme.
The temporal resolution τ_R of the imaging scheme is determined by the combination of (i) the time response τ_d of the detection system, (ii) the characteristic time τ_c^GP of the random intensity fluctuations in each pulse at the ghost (or object) plane, and (iii) the initial duration ΔT_0 of each SC pulse (i.e., before the spectrum-to-time transformation), which induces a temporal blur τ_s = (M − 1) ΔT_0 in the image plane [see Fig. 2(b) and also the supplementary material]. The overall resolution can then be approximated as

τ_R ≈ [ (τ_d / M)² + (τ_c^GP)² + (τ_s / M)² ]^(1/2).    (3)

Note that in writing Eq. (3) we account for the fact that the fluctuations of the light source are magnified by a factor M, effectively improving the resolution τ_d of the detection system by the same factor. The resolution of the magnified ghost imaging system is illustrated in Fig. 5 as a function of the initial SC pulse duration ΔT_0 and for different values of the magnification factor. We can see that for short initial durations (≤1 ps), it is the fluctuation time of the light source at the ghost (or object) plane, τ_c^GP, that determines the overall resolution of the imaging system. In contrast, for long SC pulse durations (≥100 ps), it is the time delay τ_s between the SC frequencies at the image plane that sets the temporal resolution. The response time of the detection system τ_d only has an effect for small magnification factors. The temporal resolution τ_R in the results of Fig. 4 is estimated to be 360 ps, determined both by the resolution of the detection system in the reference arm, τ_d = 1.6 ns, and by the temporal spreading of the SC frequencies in the image plane, τ_s ≈ 0.8 ns, the fluctuation time of the SC pulses at the ghost (or object) plane, τ_c^GP ≈ 0.3 ps, having a negligible influence. The resolution of the imaging system is therefore improved by a factor τ_d/τ_R approximately equal to the magnification factor M compared to the standard ghost imaging setup.

FIG. 5. Resolution of the magnified ghost imaging setup as a function of the supercontinuum initial duration and for various magnification factors. The time response of the detection system in the reference arm τ_d is taken to be 1.6 ns, and the characteristic fluctuation time of the supercontinuum pulses in the ghost plane τ_c^GP is set equal to 50 ps²/ΔT_0 (consistent with our experimental parameters). The dashed lines illustrate the different factors that limit the resolution (detector speed τ_d, characteristic time of fluctuations at the ghost plane τ_c^GP, and initial duration of the SC pulses ΔT_0).
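Assuming the quadrature combination written in Eq. (3), the snippet below plugs in the quoted experimental values and reproduces the ~360 ps resolution estimate:

```python
import math

tau_d = 1600.0    # ps, response time of the reference-arm detection system
tau_c_GP = 0.3    # ps, fluctuation time at the ghost plane
tau_s = 800.0     # ps, temporal blur (M - 1) * dT0 in the image plane
M = 5             # magnification factor

# Quadrature combination of the three contributions, referred back to the object plane
tau_R = math.sqrt((tau_d / M) ** 2 + tau_c_GP ** 2 + (tau_s / M) ** 2)
print(f"tau_R ~ {tau_R:.0f} ps")                        # ~358 ps, i.e. ~360 ps
print(f"improvement tau_d/tau_R ~ {tau_d / tau_R:.1f}") # ~4.5, close to M = 5
```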
The optimum resolution of the scheme can be achieved for a magnification factor M >> 1 and is equal to √(2 |β_2| L_a) (corresponding to an initial SC duration ΔT_0 = √(|β_2| L_a)). This implies that one should use a short segment of dispersive fiber to convert the spectral fluctuations of the SC to the time domain. However, one should also bear in mind that the fiber segment must be long enough to image the full temporal object. It is then clear that an incoherent SC source with a short (average) initial pulse duration ΔT_0 and a large bandwidth would be ideal for the scheme demonstrated here.
V. CONCLUSION
Using dispersive spectrum-to-time transformation of the fluctuations of an incoherent supercontinuum, we have demonstrated ghost imaging with magnification in the time domain. This approach increases the effective fluctuation time of the light source and thus overcomes the limited resolution of the standard time-domain ghost imaging generally imposed by the speed of the detection system. Large magnification factors can be obtained using a pulsed light source of short duration. We emphasize that the magnified approach demonstrated here is also insensitive to any distortion that would affect the light field after the object. Our results open novel perspectives for dynamic imaging of ultrafast waveforms with potential applications in communications and spectroscopy.
SUPPLEMENTARY MATERIAL
See supplementary material for a description of the analogy between dispersive spectrum-to-time transformation in the temporal domain and shadow imaging in the spatial domain. | 4,488.4 | 2016-12-31T00:00:00.000 | ["Physics"] |
Comparative Analysis of Monetary Policy Shocks and Exchange Rate Fluctuations in Nigeria and South Africa
The study examined a comparative analysis of monetary policy shocks and exchange rate fluctuations based on evidence from the two largest economies in Africa (Nigeria and South Africa) from 1985 to 2015. Data were derived from various sources, which include the National Bureau of Statistics, the Central Banks' reports and the World Bank database. Vector Autoregressive (VAR) analysis was used as the estimation technique. The results indicated that in South Africa the foreign interest rate explained the largest share of exchange-rate variation in the short run and remained the dominant source of variation in the long run, while in Nigeria the world oil price had the greatest influence on the exchange rate in both the short-run and long-run periods. Based on these results, the study recommended that the monetary authorities and policymakers in both countries encourage external currency inflows into the economy.
Introduction
There is general consensus that domestic price fluctuation undermines the role of money as a store of value and frustrates investment and growth. In the move towards stabilization and growth, countries, especially African federations, shifted their attention to monetary union and exchange-rate regimes. As a monetary instrument, exchange rate movement is a concern for investors, analysts, managers, governments and policymakers due to its fluctuations (Atsin, 2010). According to Sulaiman (2006), monetary policy deals with influencing the availability and cost of credit to the economy with a view towards achieving the macroeconomic goals of the nation. Simeon-Oke and Aribisala (2010), however, viewed the exchange rate as the price of one currency in terms of another currency. This rate is an exceptional price in which government is interested. The exchange rate of an economy has a crucial role to play, as it directly affects all the macroeconomic variables such as domestic price indicators, the profitability of traded goods and services, the allocation of resources and investment decisions, which explains why the monetary authorities and private sector seek stability in these variables (Ajakaiye, 2001). The exchange rate is also seen as an essential macroeconomic variable which helps with the formulation of economic policies and reform programmes, in which these policies accelerate the achievement of set macroeconomic goals. These goals include achieving and upholding price stability, balance of payments equilibrium, full employment, even distribution of income, and economic growth and development.
Monetary variables and the exchange rate are macroeconomic variables that contribute to the growth and development of an economy. Monetary policy managers have paid collective attention to exchange-rate instability and monetary policy shocks in the past. Despite efforts by monetary authorities to stabilize exchange-rate fluctuations and the shocks to monetary variables, there has been no consensus on the outcome, and the empirical research, especially on monetary policy shocks and exchange-rate fluctuations, remains inconclusive. This study undertakes a comparative analysis of monetary policy variables and the exchange rate in Nigeria and South Africa. The remaining part of the paper is structured as follows: section II reviews the literature; the method of analysis is presented in section III; results and discussion are presented in section IV; while the summary, conclusion and recommendations are made in section V.
Monetary Policy: Monetary policy refers to deliberate measures taken by the monetary authorities to change the quantity, availability or cost of money. Monetary policy is a method of economic management that stimulates economic growth and economic development by using its instruments, and it can be either expansionary or restrictive. An expansionary monetary policy aims to sustain the growth of aggregate demand through an increase in the rate of money supply and a lowering of interest rates, while a restrictive monetary policy is designed to reduce money supply and increase interest rates. Monetary policy aims to control the money supply in order to ameliorate various economic problems, which include balance of payments (BOP) imbalances, inflation and unemployment (Gbosi, 2002). Thus, macroeconomic objectives such as sustainable economic growth, price stability and BOP equilibrium, along with full employment, can be achieved through monetary policy by the monetary authorities. Monetary authorities are responsible for using monetary policy to grow their economies.
Exchange Rate: The exchange rate is defined as the rate at which one country's currency can be exchanged for another, and thus it is the price of one currency in terms of another (Anyawu & Oaikhena, 1995). The exchange rate measures the value of a nation's currency against other countries' currencies and reflects the economic situation of the country compared to other countries. In an open economy, the exchange rate is an important variable due to its interaction with other internal and external variables. Domestic and foreign economic policies as well as economic development greatly affect the exchange rate. In addition, the exchange rate is a variable that affects the economic performance of the country; it is an economic indicator (Havva, Mohammad & Teimour, 2012). The importance of the exchange rate derives from the fact that it connects the price systems of two different currencies, making it possible for international traders to make a direct comparison of the prices of traded goods. The exchange rate can be bilateral or multilateral. A bilateral exchange rate is the exchange rate of one currency, such as the Nigerian Naira, in terms of another, for example the US dollar (USD). A multilateral exchange rate, on the other hand, also referred to as the nominal effective exchange rate, is the rate of one currency against a weighted composite basket of the currencies of that country's trading partners.
Empirical Literature: Mehmet and Zekeriya (2013) investigated monetary policy shocks and macroeconomic variable evidence from fast-growing emerging economies. The study used Vector Autoregressive (VAR) Analysis as an estimation technique. The results showed that a contractionary monetary policy appreciates the domestic currency, increases interest rates, effectively controls inflation rates, and reduces output. The study did not find any evidence of the price, output, exchange rate and trade puzzles that are usually found in VAR studies. It was also found that the exchange rate was the only transmission mechanism in developed countries. Zandweghe (2015) examined the relationship between monetary policy shocks and aggregate supply in the United Kingdom between 1970 and 2014. The VAR method was employed as an estimation technique. The result showed that accommodative productivity or loose monetary policy shocks temporarily boost labor by increasing work effort and the work week of capital. This study is country-specific, however, and omitted some important variables. Anzuici, Marco and Patrizio (2013) investigated the empirical relationship between monetary policy and commodity prices in the USA by means of a standard VAR model. The results suggested that expansionary US monetary policy shocks drove up the broad commodity price index and all of its components.
Alain (2007) examined the effect of monetary policy shocks on the Philippine economy. VAR was employed, and the results of the monetary shock impulse responses showed that the inflation rate, world oil price, and narrow money supply significantly impacted the Philippine economy. Asad, Ahmad and Hussain (2012) studied the impact of the real effective exchange rate on inflation in Pakistan using time-series data on real GDP, nominal GDP, the real effective exchange rate, prices and money supply for the period 1973 to 2007. The findings showed that the real effective exchange rate had an impact on inflation in Pakistan and that a positive and significant relationship was found between the real effective exchange rate and inflation. Paul and Muazu (2016) investigated the causes and effects of exchange-rate volatility on economic growth in Ghana using ARCH, GARCH and VECM. The results showed that shocks to the exchange rate are mean reverting, with painful consequences in the short run. It was also found that almost three-quarters of shocks to the real exchange rate are self-driven, while the remaining one-quarter is attributed to factors such as government expenditure and money-supply growth, terms of trade, and output shocks. Furthermore, excessive volatility was found to be detrimental to economic growth, although only up to a point, as a growth-enhancing effect can also emanate from innovation and more efficient resource allocation. Babatunde and Olufemi (2014) studied monetary policy shocks and exchange-rate volatility in Nigeria from 1980 to 2009. In the study, classical ordinary least squares and error correction models were employed. The results showed that both real and nominal exchange rates in Nigeria have been unstable. It was equally found that the variation in the monetary-policy variables explains the movement of the exchange rate through a self-correcting mechanism, with little or no intervention from the monetary authority. Results from the causality tests showed that there is a causal link between the past values of monetary-policy variables and the exchange rate. Ade and Philip (2014) examined exchange-rate fluctuations and macroeconomic performance in sub-Saharan Africa, employing a dynamic panel co-integration analysis. The tentative results showed that a long-run relationship and a bidirectional relationship existed between exchange-rate volatility and macroeconomic performance. Muhammad and Eatzaz (2009) examined the relationship between monetary variables and nominal exchange rates in Pakistan. Generalized Method of Moments (GMM) estimates provided considerable support for the flexible-price monetary model on the basis of country-by-country analysis. Therefore, the study concluded that monetary variables confirmed results for the determination of nominal exchange rates and validated monetary models as long-run equilibrium conditions. Ncube and Ndou (2013) investigated contractionary monetary policy and exchange-rate shocks on the South African trade balance using structural vector autoregressive models, specifically the recursive and sign-restricted SVAR models. The interest rate was used as a monetary variable among other variables. The findings showed that the real effect of exchange-rate appreciation and contractionary monetary policy worsened the trade balance as a percentage of GDP in the long-run periods. The shock of the exchange rate on the trade balance was greater than contractionary monetary policy shocks.
The contractionary monetary policy operated through the expenditure-switching channel rather than the income channel in the short-run, which deteriorated the trade balance. Annari and Renee (2012) examined monetary policy and inflation in South Africa using a VECM augmented with foreign variables. The study employed a co-integrated vector autoregressive (VAR) model and discovered three significant long-run economic relations: the augmented purchasing power parity, the uncovered interest parity, and the Fisher parity. These long-run relations were imposed on the VECM to investigate the effect of a monetary-policy shock on inflation. The results suggested an effective functioning of the monetary-transmission mechanism in South Africa. From the above literature review, it is clear that there are inconsistencies in the findings of many studies; some variables are omitted in some studies, which gives room for further study and also for extending the period covered by previous researchers. This study is also comparative, thereby providing better insight than country-specific studies.
Methodology
Model Specification and Estimation Technique: In order to arrive at the model of this study, theoretical and empirical specifications of past researchers were considered. The modified model is in line with Fatai and Akinbobola (2015) and Babatunde and Olufemi (2014). The following variables were used in the study: exchange rate, money supply, interest rate, foreign interest rate, world oil price and Consumer Price Index. The econometric forms of the variables are presented in the vector autoregressive estimation below. Vector Autoregressive (VAR): In this research, the VAR model was employed. Sims (1980) opined that, if there is simultaneity among a number of variables, then all these variables should be treated in the same way.
In other words, there should be no distinction between endogenous and exogenous variables.
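As an illustration of this estimation approach, a hedged sketch of the VAR workflow (estimation, impulse responses, and forecast-error variance decomposition) is given below; the column names and lag order are placeholders, not the specification actually estimated in the study.

```python
# Hedged sketch of the VAR workflow (statsmodels); column names and lag order
# (exr, ms, intr, cpi, fir, wop; 2 lags) are placeholders, not the study's choices.
import pandas as pd
from statsmodels.tsa.api import VAR

def run_var(df: pd.DataFrame, lags: int = 2):
    # df holds quarterly series: exchange rate, money supply, interest rate,
    # consumer price index, foreign interest rate, world oil price.
    model = VAR(df[["exr", "ms", "intr", "cpi", "fir", "wop"]])
    results = model.fit(lags)

    irf = results.irf(10)       # impulse responses over ten quarters
    fevd = results.fevd(10)     # forecast-error variance decomposition

    irf.plot(impulse="fir", response="exr")   # exchange-rate response to a FIR shock
    print(fevd.summary())                     # variance shares attributable to each shock
    return results
```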
Figure 1: Internal Monetary Variable Response to Exchange Rate: Nigeria Experience
Vector Autoregressive (VAR) Impulse Response Analysis (Nigeria Experience): The impulse response function identifies the responsiveness of an endogenous variable to a one-standard-deviation shock in one of the innovations of the model. Furthermore, it is used to predict the response of each endogenous variable to changes in other exogenous variables (Ogunsakin, 2011). The results of the impulse response functions are therefore presented. The result in figure 1 revealed that the response of the exchange rate to a shock coming from the interest rate, as an internal monetary variable in Nigeria, is positive from the beginning and is close to zero in quarter two, after which it becomes negative and statistically insignificant. The economic implication of this finding is that the result is in line with the theoretical prediction. Looking critically at the results in figure 1, they equally confirm that a standard-deviation shock coming from the money supply has a negative effect from the beginning to quarter three, after which it becomes positive and has an insignificant effect on the exchange rate. This equally conforms to the theoretical prediction. The response of the exchange rate to the shock from the CPI, as shown in figure 1, is positive and statistically significant. The CPI was initially insignificant from quarter one up to quarter two but became significant later. The economic implication of this finding is that an increase in the CPI corresponds to an increase in the exchange rate. However, it is not considered to be a good indicator of economic growth and price stability in Nigeria during the study period. The result in figure 2 showed the external monetary variables' shocks relative to the exchange rate in Nigeria: a standard-deviation shock from the foreign interest rate has a negative but statistically insignificant impact on the exchange rate in Nigeria during the study period. This finding conforms to the a priori expectation; according to this theoretical prediction, an unexpected foreign interest rate shock has no impact on the exchange rate. The response of the exchange rate to the shock from the world oil price is positive and significant. The economic implication of this is that an increase in the world oil price attracts more foreign currencies and strengthens the Naira. Table 1 showed the variance decomposition of the exchange rate with respect to both the internal and external monetary variables in Nigeria during the study period. In the short-run, apart from the own shock, the world oil price has the highest percentage (31%) of variance in the exchange-rate decision in Nigeria. This is followed by the foreign interest rate, which accounts for 11% of the variation in the exchange-rate decision in Nigeria during the study period. Furthermore, the interest rate and CPI are responsible for a 6% and 5% variation, respectively, in the exchange-rate decision in Nigeria, while in the long run the world oil price accounts for 53% of variations, the foreign interest rate for 20% of variations and the money supply for 9% of variations in the exchange rate. The interest rate and CPI are insignificant in the long-run, accounting for 4% and 3% of the variations, respectively, in the exchange-rate decisions in Nigeria during the study period. This is similar to the work of Babatunde and Olufemi (2014).
Vector Autoregressive (VAR) Impulse Response Analysis (South African Experience):
The responses of South African internal monetary variables to the exchange rate, presented in figure 3, showed that the standard-deviation shock of the interest rate to the exchange rate was negative from quarter one to quarter five, oscillated to equilibrium at the end of quarter five, became positive up to quarter eight, and declined into negative territory from quarters nine to ten. The implication is that the interest rate contributed both positive and negative movement to the exchange rate of South Africa during the study period. The shocks from money supply to the exchange rate depict a negative impact from quarter one to the end of quarter three, then oscillate in a positive upward movement from the end of quarter three, and later decline from quarter seven. The internal CPI impulse to the exchange rate is positive from quarter one to quarter three and declines negatively over a long period up to quarter nine, as shown in figure 3. Figure 4 presented the deviations of the external monetary variables, the foreign interest rate (FIR) and world oil price (WOP), relative to the exchange rate in South Africa. The FIR standard deviation to the exchange rate was negative from the beginning of quarter one to the early part of quarter five, then moved slightly positive up to quarter eight, and later became negative during the study period. The innovation of the world oil price to the exchange rate was equally negative from quarter one to the end of quarter three, and later became positive for a long period. Variance Decomposition (South African Experience): Table 2 (above) shows the variance decomposition of the exchange rate with respect to both the internal and external monetary variables in the South African economy during the study period. In the short-run, apart from the own shock of the exchange rate, the foreign interest rate has the highest percentage (14.4%) of variance in the exchange-rate decision in South Africa. This is followed by money supply and interest rate, which account for 9.7% and 9.6% of variations, respectively, in the exchange-rate decision in South Africa during the study period. Also, the world oil price and CPI are responsible for a 3.9% and 2.2% variation, respectively, in the exchange-rate decision in South Africa during the study period. In the long-run, the foreign interest rate accounts for 28% of variations, the interest rate for 18% of variations, and the money supply for 10% of variations in the exchange rate. The CPI accounts for 9% of variations in the exchange rate. The world oil price is insignificant in the long-run, with only a 5% impact on the exchange-rate decision in South Africa during the study period.
Summary: This study focused on a comparative analysis of monetary policy shocks and exchange rates in the two largest economies in Africa (Nigeria and South Africa). The rationale for this analysis was the realization that fluctuations in the exchange rate, driven by internal and external monetary policy innovations, took place in both Nigeria and South Africa. Based on the analysis, the South African foreign interest rate has the highest variation in the short-run, followed by money supply and interest rate; the world oil price and CPI have lower variations in the short-run. In the long-run, for the South African Rand in terms of other currencies, the foreign interest rate likewise has the highest percentage of variation in the exchange rate, followed by the interest rate and money supply, while the CPI and world oil price have the lowest variations in exchange-rate fluctuations. The Nigerian analysis revealed that the world oil price has the highest percentage of variation in the exchange rate in the short-run, followed by the foreign interest rate and interest rate, while the CPI and money supply have lower variations in the exchange rate in the short-run. In the long-run, the world oil price also has the highest percentage influence on the exchange rate, followed by the foreign interest rate and money supply. The domestic interest rate and CPI had lower variations in the exchange rate in Nigeria during the study period.
Conclusion and Recommendations
The aim of this study was to make a comparative analysis of monetary policy shocks and exchange-rate fluctuations, based on evidence from the two largest economies in Africa (Nigeria and South Africa). The study concluded that the external monetary variables employed in the study, the world oil price and the foreign interest rate, had a greater impact than internal monetary variables such as money supply, interest rate and CPI. In South Africa, the foreign interest rate accounted for the largest share of exchange-rate variation during the study period, while in Nigeria the world oil price accounted for the largest share. Likewise, the foreign interest rate dominated the long-run variation in the South African case, while the world oil price dominated in the Nigerian case. The study also concluded that Nigeria is a crude-oil exporting country, which contributes largely to exchange-rate fluctuations; this implies that the higher the price per barrel of crude oil, the more foreign currency Nigeria earns. South Africa, by contrast, is not a crude-oil exporting country but rather an importing one. Furthermore, the foreign interest rate contributes moderately to exchange-rate fluctuations in both countries. The internal monetary variables also contribute to the exchange rate, but not significantly enough to influence its fluctuations. From the findings, it is therefore suggested that the monetary authorities and policymakers in the two countries encourage external currency inflows related to the foreign interest rate and world oil price into their economies. It is equally suggested that the internal monetary variables, particularly the interest rate and consumer price index, should be strengthened in order to avoid sudden movements of the country's currency in terms of other currencies. | 4,757.2 | 2018-01-15T00:00:00.000 | ["Economics"] |
Emergence of Highly Multidrug-Resistant Bacteria Isolated from Patients with Infections Admitted to Public Hospitals in Southwest Iran
Background The emergence of multidrug-resistant (MDR) microorganisms causing infections is increasing worldwide and becoming more serious in developing countries. Among those, Acinetobacter species are becoming prominent. Objectives The aim of this study was to determine the rate of antimicrobial resistance of the bacteria causing infections, Acinetobacter species in particular, in local public hospitals in Firuzabad, Fars province, Iran. Methods This cross-sectional study was performed on different clinical specimens collected from patients who were suspected of infections hospitalized from March 2016 to March 2019 in local hospitals of Firuzabad, Fars province, Iran. The bacterial isolates were identified following standard microbiological methods. Clinical and Laboratory Standards Institute guidelines were used to identify the antibiotic susceptibility of these isolates. Results Overall, 1778 bacterial etiologies were isolated from 1533 patients diagnosed with infection. Of these, 1401 (78.8%) were Gram-negative and the remaining were Gram-positive bacteria. Escherichia coli (37.1%), Klebsiella spp. (13.9%), and Acinetobacter species (10.4%) were the most common isolated bacteria. Antibiotic sensitivity testing in this study showed a high resistance rate of Acinetobacter species to all antibiotics tested except Colistin. During the study period, the rate of infection with highly multidrug-resistant Acinetobacter species increased from 7.2% to 13.3%. Conclusions This study highlights the emergence of MDR bacterial agents such as Acinetobacter species as a new threat in our region. However, a decrease in the rate of infection with Pseudomonas aeruginosa was noticeable.
Introduction
The emergence of antimicrobial-resistant bacteria in the community and hospitals is a critical threat to public health worldwide [1][2][3]. Unnecessary antibiotic use, excessive use of broad-spectrum antibiotics, and improper prescription of antibiotic drugs are the main reasons for the increased prevalence of antibiotic-resistant microorganisms [4]. Despite this rise in the prevalence of drug-resistant pathogens, the development of new antimicrobial agents is declining drastically [5]. Accordingly, the possibility of facing a rising number of potentially untreatable infections in the near future is a cause for concern. Furthermore, decreased sensitivity to the available antibiotics is a major concern in Iranian healthcare facilities [6][7][8].
A major challenge in treating patients with bacterial infections is the selection of appropriate antibiotics for their treatment. This can be determined based on information about the antimicrobial resistance patterns in the area. For this reason, updated data on antimicrobial resistance patterns in every region are required. The goal of the present study was to provide up-to-date data on the antibiotic resistance patterns of bacterial infections in this area. Such data could provide a practical guide for physicians. More importantly, it would highlight the serious threat of multidrug-resistant (MDR) bacteria causing infections, some of which are entirely resistant to every antibiotic available. Studies like this will draw special attention to the necessity of future studies in order to find new medications for treating infections with such bacteria.
Study Subjects.
This cross-sectional study was conducted within a 3-year period from March 2016 to March 2019 in local public hospitals in Firuzabad, southwest Iran. The samples were taken as a part of the routine diagnostic practice; however, after the approval of the ethics committee (Approval ID: IR.SUMS.REC.1393.8313), informed consent was obtained from each participant or legal guardian.
Using sterile equipment and aseptic techniques, 1778 clinical samples including blood, urine, sputum, wound swab, and endotracheal tube (ETT) specimens were collected from 1533 patients diagnosed with infection based on clinical signs and laboratory investigations. Patients were aged between 1 and 90 years old (38.7 ± 24.4 years). The specimens were taken from patients by medical nurses and laboratory technicians and were transported to the laboratory immediately for further analysis.
Sample Collection.
For blood culture collection, the venipuncture method was used to obtain blood samples. Two sets of blood specimens were collected from different venipuncture sites. Each bottle consisted of 7-10 mL of blood for adult patients [9]. The collected volume of blood for pediatric patients was based on the weight of the patients [10].
For the diagnosis of urinary tract infections (UTI), clean-catch midstream urine (MSU), neonatal bagged urine, indwelling catheter (Foley catheter) urine, or suprapubic catheter urine was collected from patients. The sputum samples were taken into sterile containers and were immediately analyzed microscopically by Gram staining. The samples containing less than 10 epithelial cells and more than 25 leukocytes per field at 100x magnification were included in the study as eligible sputum specimens [11].
ETTs were obtained immediately after extubation. Roughly 1 cm of the distal end of the ETTs was cut for microbiological culture analysis. The tips were placed in a 15 mL conical tube containing 5 mL of sterile phosphate-buffered saline (PBS). Conical tubes were centrifuged at 400 g upon delivery, and pellets were used for further analysis.
For wound culture, wounds were first rinsed thoroughly with sterile saline solution. A small area (1 cm) of clean viable tissue was identified, and the sterile swab was rotated on it for 5 seconds while applying enough pressure to produce exudate. Swabs were then transferred into sterile containers. All the specimens were processed in the laboratory immediately (within 1 hour) to keep the samples stable.
Bacteriological Investigation.
Culturing and identification of isolates were on the basis of standard guidelines for microbiological examination [12,13]. Briefly, blood samples were collected as soon as clinical symptoms appeared and before administration of antimicrobial therapy. For the identification of pathogens, the BACTEC™ (Becton Dickinson, USA) blood culture bottle system was employed. Blood culture specimens were incubated for 7 days. Positive blood cultures were plated on Columbia blood agar with 5% sheep blood, MacConkey agar, and chocolate agar. Blood and MacConkey agar plates were incubated for 2 days in an atmosphere with 5% CO2. Chocolate agar plates were incubated anaerobically in Gas-Pack anaerobic jars with Gas-Pack envelopes (BBL; Becton Dickinson & Co., Cockeysville, Md., USA) and palladium catalyst to achieve and maintain an anaerobic atmosphere enriched with CO2.
Urine specimens were cultured on blood agar and MacConkey agar plates using calibrated 0.001 mL loops for quantitative urine cultures. Greater than or equal to 100 000 colony-forming units (CFU) of bacteria per mL of MSU or neonatal bagged urine samples were considered positive for infection. A positive growth of bacteria for other types of urine specimens was considered infection as well.
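As a hedged illustration of the quantitative-culture arithmetic described above (the loop volume and positivity threshold are taken from the text; the colony counts are hypothetical), converting a plate count to CFU per mL only requires dividing by the inoculated volume:

```python
# Sketch: quantitative urine culture interpretation (illustrative values only).
LOOP_VOLUME_ML = 0.001          # calibrated loop volume stated in the text
THRESHOLD_CFU_PER_ML = 100_000  # positivity cut-off for MSU / neonatal bagged urine

def cfu_per_ml(colony_count, loop_volume_ml=LOOP_VOLUME_ML):
    """Convert colonies counted on the plate to CFU per mL of urine."""
    return colony_count / loop_volume_ml

for colonies in (5, 80, 150):  # hypothetical plate counts
    density = cfu_per_ml(colonies)
    positive = density >= THRESHOLD_CFU_PER_ML
    print(f"{colonies} colonies -> {density:,.0f} CFU/mL -> positive: {positive}")
```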
Respiratory specimens were routinely cultured onto several solid media, including chocolate agar, sheep blood agar, and MacConkey agar. Sputum cultures with more than 5 colonies per plate of potential respiratory pathogens were considered positive for infection. The assessment of wound infection was performed by inoculating the swabs on blood agar, MacConkey agar, and chocolate agar and incubating at 37°C for 24 to 48 hours.
All culture media were supplied by bioMérieux, France. Bacterial isolates were identified using conventional methods based on their morphological, biochemical, and physiological characteristics. Briefly, Gram staining was performed on smears of inocula of single colonies from pure subcultures in 20 μl of sterile PBS. The stained slides were analyzed using light microscopy. Subsequently, the identification of the isolated Gram-positive and Gram-negative bacteria was carried out using biochemical tests. For the Gram-positive isolates, catalase, coagulase, DNase, Bacitracin, Novobiocin and Optochin susceptibility, hippurate hydrolysis, 6.5% NaCl broth salt tolerance, and bile esculin tests were applied. The identification of Gram-negative bacteria involved performing triple sugar iron (TSI), Simmon's citrate, sulfide-indole-motility (SIM), urease, methyl red (MR), Voges-Proskauer (VP), lysine decarboxylase, arginine decarboxylase, ornithine decarboxylase, phenylalanine deaminase, oxidase, oxidation-fermentation (OF), and acetate utilization tests [14].
Antimicrobial discs were obtained from Padtan-TEB Co., Tehran, Iran. For the interpretation of antibiotic susceptibility testing, the diameters of the inhibition zones around the discs were measured, and isolates were classified as sensitive, intermediate, or resistant according to the Clinical and Laboratory Standards Institute (CLSI) guidelines.
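The zone-diameter interpretation can be sketched as follows; the breakpoint numbers below are hypothetical placeholders, not CLSI breakpoints, and real interpretation must use the CLSI tables for each organism/antibiotic pair:

```python
# Sketch: interpreting disc-diffusion zone diameters against breakpoints.
# The values below are hypothetical placeholders, NOT CLSI breakpoints.
BREAKPOINTS_MM = {
    "ciprofloxacin": {"resistant_max": 15, "susceptible_min": 21},
    "gentamicin":    {"resistant_max": 12, "susceptible_min": 15},
}

def interpret_zone(antibiotic, zone_mm):
    bp = BREAKPOINTS_MM[antibiotic]
    if zone_mm <= bp["resistant_max"]:
        return "R"   # resistant
    if zone_mm >= bp["susceptible_min"]:
        return "S"   # sensitive
    return "I"       # intermediate

print(interpret_zone("ciprofloxacin", 18))  # -> "I" with these placeholder cut-offs
```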
Statistical Analysis.
Data management and analysis were carried out using WHONET 5.6 software.
Results
Overall, 1533 patients developed infections during this three-year study. Among them, 889 (58%) were female and 644 (42%) were male. Infections with Gram-positive and Gram-negative bacteria were detected in 293 out of 1533 (19.1%) and in 1181 out of 1533 (77.0%) patients, respectively. Furthermore, coinfection with both Gram-positive and Gram-negative bacteria was found in 59 out of 1533 patients (3.8%).
The distribution of bacterial isolates in different clinical specimens and hospital wards is shown in Tables 1 and 2, respectively.
Among Gram-negative bacteria, E. coli (37.1%), Klebsiella spp. (13.9%), and Acinetobacter spp. (10.4%) were the dominant causes of infections. However, S. aureus was the most prevalent isolate among Gram-positive bacteria (Figure 1). The prevalence of bacteria involved in infections during the study period is presented in Table 3. There was a decrease in the rate of Pseudomonas aeruginosa infections from 12.2% in the years 2016-2017 to 4.8% in the years 2018-2019. However, an increasing trend in infections due to Acinetobacter spp., from 7.2% in the years 2016-2017 to 13.3% in the years 2018-2019, was demonstrated (Table 3). The antibiotic resistance patterns of the Gram-positive and Gram-negative bacteria are presented in Tables 4 and 5, respectively. MDR patterns of Gram-positive and Gram-negative bacterial agents are shown in Tables 6 and 7. The highest rate of multidrug resistance among Gram-positive bacteria was found in the isolates of Enterococcus spp. (91.4%), followed by S. epidermidis (64.9%) and S. aureus (38.8%), while among Gram-negative bacteria, the highest rate of multidrug resistance was detected in the isolates of Acinetobacter spp.
Urinary tract infection was present in 62.1% of the patients, followed by respiratory tract infection (19.6%) and wound infection (15.5%). UTIs were more frequent among women (74%), and E. coli was the major cause of them (62.0%).
About fifty-five percent of the patients with respiratory tract infections were those who were receiving tracheal intubation, most of whom were hospitalized in intensive care units (ICU). The main bacteria isolated from ETT cultures were Acinetobacter spp. (44.9%). The same bacteria were the most frequent cause of positive sputum cultures (28.2%). These bacteria were highly resistant to most of the antibiotics. However, Colistin was the only antibiotic that all of the mentioned bacteria were still sensitive to.
Staphylococcus epidermidis was the major cause of positive blood cultures (20.0%) and these bacteria showed the most sensitivity to Vancomycin and were most resistant to Erythromycin. Furthermore, Staphylococcus aureus was the most frequent microorganism isolated from wound cultures (21.8%), all of which were sensitive to Vancomycin.
Discussion
In the present study, the pattern of antibiotic resistance of bacteria isolated from patients with infection was investigated. During the three years of the study, 1533 patients developed infections. When the incidence of infections was examined in different wards of the general hospitals in Firuzabad, Fars province, Iran, the patients admitted to the internal medicine ward had the highest rate of infection (28.9%), followed by the surgical (orthopedic and general surgery) wards (23%), the pediatric ward (13%), the neonatal ICU (12%), the ICU (11.5%), the CCU (7.4%), and the emergency department (4.2%). UTI was the most frequent infection during the study period. Escherichia coli was found to be the most common pathogen isolated (37.1%), followed by Klebsiella spp. (13.9%) and Acinetobacter spp. (10.4%).
Overall, the prevalence of infections with Gram-negative bacteria was higher than with Gram-positive bacteria (78.8% versus 21.2%). The most frequent Gram-negative bacterium causing infections was E. coli (21.6%); while in other studies Enterobacter spp., Klebsiella pneumoniae, and Pseudomonas aeruginosa were the most prevalent causative organisms of infections [17], the incidence of infection with E. coli and Acinetobacter spp. is increasing in our region. The prevalence of MDR among Acinetobacter spp. isolates was found to be 100%, which is far higher than reports from Saudi Arabia (74%) and Ethiopia (71.6%) [18,19]. The emergence of MDR Acinetobacter spp. may complicate the choice of the appropriate antibiotic for treatment and increase the mortality rate in hospitalized patients [20]. Except for Colistin (Polymyxin E), with 100% sensitivity, Acinetobacter spp. isolates exhibited high rates of resistance to all the antibiotics that are routinely used in clinical pathology laboratories for Gram-negative bacteria.
These results contradict a previous study in which drug resistance to Colistin was high (49.8%) in the northern part of Iran [21].
Furthermore, between 5.3% and 69.8% of the isolated E. coli were resistant to the different antibiotics. Among the tested antibiotics for E. coli isolates, the lowest antibiotic resistance was detected for Amikacin, followed by Colistin. However, these isolates were highly resistant to Nalidixic acid and Co-Trimoxazole, which is consistent with a previous report from Isfahan, Iran [22]. Pseudomonas aeruginosa isolates showed high rates of sensitivity to the studied antibiotics and were detected in only 8.0% of infections, which is far lower than infections with E. coli and Acinetobacter spp. isolated from clinical samples. The rate of MDR among Pseudomonas aeruginosa isolates was 9.8%, which is lower than the MDR rate of 31% reported by a recent study in Tehran, Iran [23].
Staphylococcus aureus was the most common Gram-positive bacterium isolated from infected patients and generally comprised 7.5% of all bacterial infections and 35.5% of all Gram-positive bacterial infections. Staphylococcus aureus was mostly isolated from patients with wound and respiratory tract infections. We did not find Vancomycin-resistant Staphylococcus aureus during the study period among infected patients.
This finding is in line with that of a previous study on Staphylococcus aureus in which, although all studied isolates were MDR, they were generally susceptible to Vancomycin [24].
Enterococcus spp., as the second most frequently isolated Gram-positive bacteria, showed a notably high multidrug resistance rate of 91.4%, which is comparable to the rates reported by a study in Taiwan [25]. Further, 75.2% of Enterococcus spp. isolates were found to be Vancomycin-resistant, which might be associated with the extensive use of Vancomycin in the hospital environment. Vancomycin-resistant Enterococci (VRE) have become a serious problem in almost every hospital, especially in high-risk patients [26][27][28].
There were 116 patients with the same bacterial isolates recovered from different clinical samples. All isolates were tested for their antibiotic susceptibility. Approximately 80% of them had similar patterns of antibiotic susceptibility. However, 52 patients had different bacterial species in clinical samples collected from multiple anatomical sites of infection.
Conclusion
In conclusion, the results indicate the increasing prevalence of infections with emerging opportunistic pathogens such as Acinetobacter spp. in our region. They are able to cause different types of infections. While the rate of infection with other Gram-positive and Gram-negative bacteria remained unchanged during the study period, a reduction in the rate of infection with Pseudomonas aeruginosa is evident. However, the emergence of MDR Acinetobacter spp. seems set to become a major threat in the near future. Further, the considerable rate of infection with E. coli should not be ignored. Moreover, molecular analysis of the isolates is recommended to characterize the antibiotic resistance genes.
Data Availability
Data are available within the article.
Disclosure
The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 3,458 | 2021-08-13T00:00:00.000 | [
"Medicine",
"Biology"
] |
Theoretical developments on the initial state in relativistic particle collisions
We discuss recent progress towards developing accurate initial state descriptions for heavy ion collisions, focusing on weak coupling based approaches that enable one to constrain the high-energy structure of nuclei from deep inelastic scattering or proton-nucleus collisions. We review recent developments to determine the event-by-event fluctuating nuclear geometry, to describe gluon saturation phenomena at next-to-leading order accuracy, and to include longitudinal dynamics in the initial state descriptions.
Introduction
A crucial ingredient needed to simulate the space-time evolution in heavy ion collisions is the structure of the colliding nuclei at small momentum fraction x, which is the region probed in high-energy collisions. A consistent description of the heavy ion collision initial state together with, e.g., deep inelastic scattering (DIS) data has been achieved in approaches based on collinear factorization and the Color Glass Condensate (CGC). In the EKRT model based on collinear factorization [1], the partonic content of the nuclei is described in terms of nuclear parton distribution functions. In the CGC approach (implemented e.g. in the IP-Glasma [2] framework), the DIS and p+A cross sections and the time evolution of the color fields immediately after the heavy ion collision are described in terms of the universal Wilson line correlators.
There are also many other approaches to describe the initial state in heavy ion collisions, including, for example, parametrization-based models such as TRENTo [3] and different event generators (Pythia/Angantyr, EPOS, HIJING). In this contribution we, however, focus on weak coupling approaches with a direct connection to DIS and p+A collisions.
Probing nuclear geometry in photon-nucleus scattering
In heavy ion collisions the initial state eccentricities are transformed into momentum space anisotropies by the hydrodynamically evolving QGP. As such, a crucial input to the QGP simulations is the spatial distribution of nuclear matter at the initial condition and immediately after the collision, before an approximately thermalized QGP is formed. Consequently, there has been extensive activity in recent years to constrain the event-by-event fluctuating shape of the proton and of heavy nuclei.
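For reference, the initial-state eccentricities mentioned here are commonly defined from the transverse energy (or entropy) density profile; a widely used (though not unique; conventions for the radial weight vary between groups) definition is

```latex
\varepsilon_n e^{i n \Phi_n} \;=\; -\,
\frac{\int \mathrm{d}^2 r \; r^{\,n}\, e^{i n \phi}\, e(r,\phi)}
     {\int \mathrm{d}^2 r \; r^{\,n}\, e(r,\phi)}\,,
\qquad n \geq 2,
```

where e(r, φ) is the initial density in the transverse plane and Φ_n is the corresponding participant-plane angle.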
Exclusive processes, where the total momentum transfer is the Fourier conjugate to the impact parameter, directly probe the nuclear geometry [4]. Exclusive vector meson production and DVCS were extensively studied at HERA. In recent years, vector-meson photoproduction has been studied in ultra-peripheral photon-mediated collisions at RHIC and at the LHC, where photon-nucleus processes are available before the EIC era. Such measurements directly probe the nuclear geometry, down to very small x_P ~ 10^-5, and are sensitive probes of non-linear QCD dynamics. Furthermore, the potential to constrain nuclear PDFs in the poorly constrained small-x region has also been investigated recently [5].
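Schematically (in dipole-picture language and neglecting the real part and skewedness corrections), the coherent vector-meson cross section is the squared Fourier transform of the impact-parameter-dependent average scattering amplitude, which is why the |t| spectrum encodes the average nuclear geometry:

```latex
\frac{\mathrm{d}\sigma^{\gamma^{*} A \to V A}}{\mathrm{d}t}
\;\propto\;
\left| \int \mathrm{d}^2 b \; e^{-i \mathbf{b}\cdot\boldsymbol{\Delta}} \,
\big\langle A(x_{\mathbb{P}}, \mathbf{b}) \big\rangle \right|^{2},
\qquad t \simeq -\boldsymbol{\Delta}^{2}.
```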
When exclusive vector meson production is measured as a function of the momentum transfer [6,7], the resulting spectrum falls more steeply than the one obtained as a Fourier transform of the Woods-Saxon density profile. This can be interpreted as a signature of saturation phenomena that effectively transform the nuclear density profile towards the black disc shape at small x_P [8].
In this conference, measurements of the incoherent J/ψ photoproduction cross section in ultra-peripheral heavy ion collisions (i.e. in photon-nucleus collisions) as a function of the momentum transfer were reported for the first time by the ALICE and STAR collaborations [9]. The incoherent production channel, where the target dissociates, is interesting as it probes the event-by-event fluctuations in the target geometry [10]. For example, the HERA data have been shown to prefer significant event-by-event geometry fluctuations for the proton [11]. The new ALICE data are shown in Fig. 1, where they are compared to CGC calculations that either use spherical nucleons or include an event-by-event fluctuating nucleon geometry constrained by the HERA data [12]. Although the overall cross section is overestimated by the theory calculation (i.e. not enough nuclear suppression is obtained), the t-slope can be interpreted to prefer the calculation with nucleon substructure similar to that of protons at HERA kinematics. These results confirm the importance of sub-nucleon fluctuations to describe the measured incoherent J/ψ process at high energies, representing a first experimental step towards using the quantum fluctuations of the gluon field to search for saturation effects in heavy nuclei. In addition, this measurement, when confronted with models, demonstrates that the contribution of the dissociative component to the total incoherent cross section depends on |t|; thus, future analyses shall study the incoherent production of J/ψ as a function of rapidity and |t| [47]. Finally, this analysis, together with recent measurements [18,20], indicates that new or improved theoretical models are needed to describe simultaneously the energy and |t| dependence of both the coherent and the incoherent J/ψ photoproduction processes, to gain a better understanding of saturation effects at a more fundamental level. In addition to nucleon substructure fluctuations, recent progress towards probing the deformed structure of, e.g., uranium and xenon in deep inelastic scattering was also presented in this conference [13].
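The statement that incoherent production probes target fluctuations follows from the Good-Walker picture, in which the coherent cross section is governed by the target-averaged amplitude while the incoherent one measures its variance over target configurations:

```latex
\frac{\mathrm{d}\sigma_{\mathrm{coh}}}{\mathrm{d}t} \;\propto\;
\Big| \big\langle A(x_{\mathbb{P}}, \boldsymbol{\Delta}) \big\rangle \Big|^{2},
\qquad
\frac{\mathrm{d}\sigma_{\mathrm{incoh}}}{\mathrm{d}t} \;\propto\;
\Big\langle \big| A(x_{\mathbb{P}}, \boldsymbol{\Delta}) \big|^{2} \Big\rangle
- \Big| \big\langle A(x_{\mathbb{P}}, \boldsymbol{\Delta}) \big\rangle \Big|^{2},
```

where the angular brackets denote the average over the fluctuating target configurations.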
Gluon saturation at the precision level
Despite the fact that the leading order CGC calculations (that resum α_s ln 1/x contributions to all orders) have been successful in describing a large amount of small-x data, it is crucial to develop the theory to next-to-leading order accuracy to enable precision-level comparisons with current and future measurements. Over the last couple of years, there has been an extensive effort in the community to bring the theory calculations describing gluon saturation phenomena to next-to-leading order accuracy.
At small x, cross sections factorize into a convolution of Wilson line correlators and a hard impact factor. The energy dependence of the Wilson lines is described by the perturbative Balitsky-Kovchegov (BK) or JIMWLK evolution equations. For a fully consistent NLO calculation, all of these ingredients, including the non-perturbative initial condition for the high-energy evolution, need to be promoted to this order in α_s.
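For orientation, the leading-order BK equation referred to throughout this section can be written, in its fixed-coupling form, as an evolution equation in rapidity Y = ln 1/x for the dipole amplitude N:

```latex
\frac{\partial N(\mathbf{r}_{01})}{\partial Y}
= \frac{\alpha_s N_c}{2\pi^{2}} \int \mathrm{d}^{2}\mathbf{r}_{2}\,
\frac{\mathbf{r}_{01}^{2}}{\mathbf{r}_{02}^{2}\,\mathbf{r}_{12}^{2}}
\Big[ N(\mathbf{r}_{02}) + N(\mathbf{r}_{12}) - N(\mathbf{r}_{01})
      - N(\mathbf{r}_{02})\,N(\mathbf{r}_{12}) \Big],
```

where the r_ij are transverse dipole sizes; the NLO corrections discussed below add terms of relative order α_s to this kernel.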
The NLO BK evolution equation became available already in 2007 [14], and later the NLO JIMWLK equation was also obtained [15,16], although no method to solve it numerically is currently known. Typically, the non-perturbative initial condition describing the proton structure at moderately small x has been extracted from fits to the proton structure function data [17]. This became possible at NLO accuracy once the hard impact factor for DIS (the photon light front wave function at NLO) became available [18] (see also Ref. [19] for a complementary approach based on the proton valence quark wave function). The first NLO fit was reported in Ref. [20], and recently a successful description of both the total and heavy quark production cross sections in DIS has also been obtained [21].
In addition to total DIS cross sections, impact factors for many other scattering processes are currently known at NLO. These include, for example, exclusive vector meson production [22-25] and dijet/dihadron production [26-28] in DIS, and inclusive hadron production in proton-nucleus collisions [29-31]. Recently, the first phenomenological applications at NLO accuracy have also become available. In particular, consistent NLO calculations (with the caveat that the NLO BK evolution equation is approximated by a leading order equation into which the dominant higher order corrections have been resummed) compared to available data exist for exclusive light and heavy vector meson production [22-24], inclusive π0 production in proton-lead collisions [32] and dijet production in DIS [26] (although in that case the initial condition for the small-x evolution is not constrained by other collider data). Additionally, numerical results where leading order Wilson line correlators are used together with NLO impact factors exist for charged hadron production in proton-nucleus collisions [33]. The obtained nuclear suppression factors for charged hadron and dijet production are shown in Figs. 2 and 3.
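The nuclear suppression factors shown in those figures follow the usual definition of a nuclear modification factor, i.e. the yield in proton-nucleus (or e+A) collisions divided by the appropriately scaled proton-proton (or e+p) reference, so that unity corresponds to the absence of nuclear effects:

```latex
R_{pA}(p_T, y) \;=\; \frac{1}{A}\,
\frac{\mathrm{d}\sigma^{pA}/\mathrm{d}p_T\,\mathrm{d}y}
     {\mathrm{d}\sigma^{pp}/\mathrm{d}p_T\,\mathrm{d}y}
\qquad \Big(\text{equivalently } \frac{\mathrm{d}N^{pA}}{\langle N_{\mathrm{coll}}\rangle\,\mathrm{d}N^{pp}} \text{ for centrality-selected events}\Big).
```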
This rapid progress towards NLO accuracy has brought the field to the point where precision-level studies of saturation phenomena are becoming feasible. As saturation effects are typically expected to be only moderate [34], there is likely no smoking gun for gluon saturation even at the EIC. Instead, it will be crucial to perform global analyses where different DIS and p+A observables are simultaneously included. As there is always a dependence on the non-perturbative input to the small-x evolution equation, in such analyses it will also be crucial to properly take into account the uncertainties in this non-perturbative input (and in other non-perturbative ingredients, such as the vector meson wave function when calculating exclusive vector meson production). At the moment this non-perturbative input does not have any uncertainty estimates available at NLO [20], but first steps to include uncertainties in the extraction of the BK evolution initial condition [35] and the fluctuating proton geometry [36] at leading order have been taken recently.
Longitudinal dynamics
Many state-of-the-art descriptions of QGP evolution use 3+1D hydrodynamical simulations and hadronic afterburners. In order to fully describe longitudinal dynamics in heavy ion collisions, a realistic x-dependent initial condition is also necessary. This energy dependence has been included, for example, in the recent TRENTo-3D [3] initial state parametrization.
In the weak coupling approach one can again apply either collinear factorization or the CGC to go beyond midrapidity. In the EKRT model [41] the input is an x-dependent nuclear PDF such as EPPS21 [42]. In this conference, recent developments to the EKRT model were presented [38], including spatially dependent nuclear PDFs with event-by-event fluctuations, a dynamical event-by-event saturation criterion based on minijet production, minijet multiplicity fluctuations, and global energy conservation. When this new 3D initial state description is coupled to 3+1D relativistic hydrodynamics, a good description of key heavy ion observables away from midrapidity is obtained, as illustrated in Fig. 4.
[Figure: Nuclear suppression factor R_pPb (3.0 < y < 3.5, resummed calculation) for inclusive charged particle production in proton-lead collisions at the LHC compared to the LHCb data [37]. Figure from Ref. [33].]
unambiguously positive for the R range studied. Theory uncertainties in the NLO result can be divided into four classes; three of these are displayed in Fig. 2 (Bottom). We first show uncertainties from the unknown order N²LO contributions beyond the NLO impact factor; they are estimated by varying the running coupling scale c = 0.5-2 both in the NLO coefficient function, where µ_R = cP⊥, and in the Sudakov factor. Since they are parametrically of order α_s² ln²(P⊥/µ_0), the band width grows with decreasing q⊥. This illustrates the importance of controlling powers of α_s ln(P⊥/µ_0) for future precision studies.
The second source of uncertainty is the missing contributions from the full NLO BK kernel. To gauge the sensitivity to these, we use two different formulations [85,86] of the kinematically constrained running coupling BK equation that differ by the additional resummation of single transverse logarithms [84]. The blue area shows the corresponding sensitivity, with a relative variation of O(10%); including the full NLLx BK RGE can therefore significantly improve the overall precision of the computation. Thirdly, variations with respect to α_s,max are shown by the gray band in Fig. 2 (Bottom). Though, as expected, they grow at small q⊥, this sensitivity is mitigated, especially in large nuclei, because the scale (minimal transverse size) controlling the coupling is set instead by Q_s. Lastly, power corrections of order q⊥²/P⊥² and Q_s²/P⊥² (not shown), previously discussed at LO [25,26], can be O(10%) for q⊥ ≲ 1.5 GeV and P⊥ = 4 GeV. Fig. 3 displays R_eA, the ratio of the azimuthally averaged back-to-back dijet yield in e+A to e+p collisions. Such ratios minimize the aforementioned theory uncertainties as well as experimental ones. The top plot shows the q⊥ dependence of R_eA for a large nucleus; for simplicity, we take A^(1/3) = 6. At LO, it has a "Cronin" peak well known from the corresponding ratio in proton-nucleus (p+A) collisions [87]; in the CGC, it is generated by coherent multiple scattering that shifts the typical momentum imbalance to larger q⊥ in heavier nuclei [88]. At NLO, we see that the Cronin enhancement is washed out by Sudakov corrections alone. A further strong effect is seen from the NLO contributions dominantly caused by the WW gluon TMD RGE, which suppresses R_eA analogously to the R_pA case [89,90]. Qualitatively, Sudakov logs suppress configurations corresponding to small q⊥ (or large r_bb') in the projectile. However, since a fundamental consequence of gluon saturation is that even configurations with small r_bb' are sensitive to nonlinear RG evolution with x, its precocious onset in large nuclei [91] leads to a suppression in R_eA with A^(1/3). This is clearly demonstrated in the bottom plot. For fixed q⊥ = 1.5 GeV, one observes an increasing suppression with A^(1/3). The systematics of this suppression with A^(1/3) and q⊥ are sensitive to the WW TMD RGE. Additional plots with different kinematic choices are provided in the supplemental material.
[Figure: q⊥ and A dependence (top and bottom, respectively) of the nuclear modification factor R_eA for the azimuthally averaged back-to-back dijet yield.]
While more detailed studies are necessary, our results are suggestive that inclusive back-to-back dijets in e+A collisions show strong potential to be a golden channel for gluon saturation at the EIC. Our conclusions can be strengthened by minimizing the stated theory uncertainties and by extending the comprehensive NLO study here to the di-hadron channel. Global analyses incorporating other e+A small-x final states [85,92-102] and analogous studies [103-119] in p+A collisions at RHIC and the LHC will further enable unambiguous determination of the dynamics of gluon saturation.
In the CGC approach, new developments towards a 3D initial condition were also presented. In the new McDipper initial condition [40], the initial quark and gluon production is calculated in a similar manner as the inclusive particle production discussed in Sec. 3, with the proton small-x structure being described by a parametrization fitted to HERA data. Although currently the parton production is computed at leading order accuracy, extensions to higher order accuracy are in principle possible. Using initial state estimators, a good description of key heavy ion observables is obtained, although the flow decorrelation is underestimated, as shown in Fig. 5.
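The flow (or eccentricity) decorrelation referred to here is commonly quantified by a factorization-breaking ratio of the type used by the STAR and CMS collaborations; written for the flow vectors V_n (an analogous estimator can be built from the initial-state eccentricity vectors), it reads

```latex
r_n(\eta^{a}, \eta^{b}) \;=\;
\frac{\big\langle V_n(-\eta^{a})\, V_n^{*}(\eta^{b}) \big\rangle}
     {\big\langle V_n(\eta^{a})\, V_n^{*}(\eta^{b}) \big\rangle},
```

which deviates from unity when the magnitude and phase of V_n decorrelate with increasing rapidity separation.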
The x-dependence can also be calculated perturbatively by solving the JIMWLK equation. In this conference, the initial state geometry and momentum correlations obtained from the JIMWLK evolution in Ref. [43] were presented. These results suggest that the initial momentum correlations are short-range in rapidity, unlike the event geometry, for which correlations vanish much more slowly when the rapidity separation increases. The longitudinal structure of nuclei governed by the JIMWLK evolution can also be coupled to 3D classical Yang-Mills simulations to determine the time evolution after the collision, before the QGP is formed. The first such implementation was recently shown in Ref. [44]. When coupled to 3+1D hydrodynamical simulations, a good description of particle spectra, mean p_T and flow harmonics at midrapidity is obtained, but again not enough longitudinal decorrelation of the flow is obtained, which can be seen to suggest a need for an additional source of fluctuations.
Conclusions
The initial state of heavy ion collisions contains a vast amount of interesting fundamental physics, and a realistic initial state description is also crucial to probe in detail the QGP properties.At the moment there is a rapid development to include a realistic description for longitudinal dynamics which enables one to compare simulations of heavy ion collisions to observables away from midrapidity, and to understand the saturation effects at precision level.
The initial state can be inferred directly from heavy ion collisions, or probed in other scattering processes such as deep inelastic scattering or proton-nucleus collisions. A simultaneous description of the heavy ion initial state and other collider data can be obtained in collinear factorization based approaches such as EKRT, or in Color Glass Condensate based implementations such as IP-Glasma. For EKRT, recent developments to include longitudinal dynamics were presented in this conference. In CGC based initial state descriptions, there is currently rapid progress in the field to include higher order corrections. These developments are crucial both to enable accurate studies of gluon saturation phenomena, especially in the next decade when the Electron-Ion Collider [45] becomes operational, and to develop precise initial state descriptions for heavy ion collisions including a perturbatively calculated x-dependence.
Figure 2 :
Figure 2: Cross section for the incoherent photoproduction of J/ψ vector mesons in ultra-peripheral Pb-Pb collisions at √s_NN = 5.02 TeV measured at midrapidity. The uncorrelated uncertainty (statistical and systematic added in quadrature) is indicated with the vertical bar, while the correlated uncertainty is indicated by the grey band. The width of each |t| range is given by the horizontal bars. The lines show the predictions of the different models described in the text. The bottom panel presents the ratio of the integral of the predicted to that of the measured cross section in each |t| range. The relative uncertainties on the ratios calculated from GSZ are 45%.
Figure 1.
Figure 1. Incoherent J/ψ production as a function of the squared momentum transfer |t| measured by ALICE, compared to theory calculations with (MS-hs) and without (MS-p) nucleon substructure fluctuations. Figure from Ref. [9].
Figure 3 .
Figure 3. Nuclear suppression factor for dijet production in deep inelastic scattering in EIC kinematics as a function of the dijet momentum q_T. Figure from Ref. [26].
Figure 4 .
Figure 4. Pseudorapidity distribution of the charged hadron multiplicity calculated from the 3D EKRT model [38] compared to the ALICE data.
Figure 5 .
Figure 5. Flow decorrelation estimated from the initial spatial eccentricity in the McDipper model compared to the STAR data [39]. Figure adapted from Ref. [40]. | 4,120.2 | 2023-12-12T00:00:00.000 | [
"Physics"
] |
Rational engineering of a native hyperthermostable lactonase into a broad spectrum phosphotriesterase
The redesign of enzyme active sites to alter their function or specificity is a difficult yet appealing challenge. Here we used a structure-based design approach to engineer the lactonase SsoPox from Sulfolobus solfataricus into a phosphotriesterase. The five best variants were characterized and their structure was solved. The most active variant, αsD6 (V27A-Y97W-L228M-W263M) demonstrates a large increase in catalytic efficiencies over the wild-type enzyme, with increases of 2,210-fold, 163-fold, 58-fold, 16-fold against methyl-parathion, malathion, ethyl-paraoxon, and methyl-paraoxon, respectively. Interestingly, the best mutants are also capable of degrading fensulfothion, which is reported to be an inhibitor for the wild-type enzyme, as well as others that are not substrates of the starting template or previously reported W263 mutants. The broad specificity of these engineered variants makes them promising candidates for the bioremediation of organophosphorus compounds. Analysis of their structures reveals that the increase in activity mainly occurs through the destabilization of the active site loop involved in substrate binding, and it has been observed that the level of disorder correlates with the width of the enzyme specificity spectrum. This finding supports the idea that active site conformational flexibility is essential to the acquisition of broader substrate specificity.
PLL representatives are extremely thermostable [30-38] and are particularly promising candidates for biotechnological use 39,40 by virtue of their high stability and resistance towards denaturing agents 41 . PLLs, however, exhibit much lower phosphotriesterase activity than PTEs. Engineering experiments aiming to increase the phosphotriesterase activity of PLLs have therefore been performed. In particular, because the active sites of PTEs and PLLs differ mainly in the length, sequence, and conformation of their loops 7 and 8 16,26,42 , loop grafting experiments were performed and proved to be difficult. This approach resulted in aggregation-prone or inactive enzymes 16,34 , but was successful when using a stepwise approach 43 . Recently, more classical engineering techniques were successful in increasing the phosphotriesterase activity of the PLL DrOPH 44 .
In this study, we worked on the PLL SsoPox from the hyperthermophilic archaeon Sulfolobus solfataricus 32,45 . SsoPox exhibits high hydrolytic activity against acyl-homoserine lactones and oxo-lactones 11 , together with a promiscuous, low phosphotriesterase activity 46 . Previous work showed that engineering can successfully improve SsoPox catalytic efficiency against phosphotriesters such as ethyl-paraoxon 11,47 . These previous studies highlighted the role of a key active site loop 8 residue, W263, which modulates the active site loop conformational space 11 . In this study, we aimed to further improve the phosphotriesterase activity of SsoPox, including against phosphothionoesters. SsoPox is a very appealing candidate for the bioremediation of phosphotriesters because of its unique thermal stability and its ability to resist aging, solvent and protease treatments 48 . On the other hand, SsoPox is a poor phosphothionoesterase, exhibiting a strong preference (100-fold) for methyl-paraoxon over methyl-parathion, a feature referred to as the thiono-effect 46 . We used a rational strategy: taking advantage of the structural similarity between SsoPox and the bacterial PTE from the mesophilic Brevundimonas diminuta (BdPTE) 27 , we designed and produced a structure-based combinatorial library, and screened this library for improved phosphotriesterase activity with the aim of obtaining heat stable, highly active variants with broad specificity. We obtained several improved variants, including against methyl-parathion, with a 2,210-fold improvement in activity. All these variants but variant αsB5 contained a mutation at position 263. Interestingly, the addition of a W263 mutation using site-saturation mutagenesis 12 on an αsB5 background revealed an incompatibility of these mutations, instead of the expected increase in phosphotriesterase activity. Among these improved variants, we obtained a mutant that lost the thiono-effect against methyl-parathion, exhibiting a >2,000-fold improvement in catalytic efficiency (kcat/KM ~10^4 M-1 s-1) relative to the wild-type enzyme, and a methyl-paraoxon/methyl-parathion activity ratio of ~1. The X-ray structures of several improved variants reveal that the selected mutations increase the active site loop conformational flexibility and reshape the active site. The best variants were extensively characterized against eight commercially available insecticides, along with previously reported SsoPox monovariants harboring mutations at residue 263, and their specificity spectra were determined.
Results and Discussion
The lactonase SsoPox was engineered for higher phosphotriesterase activity using structure-based combinatorial libraries. By comparing structures of enzymes with similar topology, it was possible to redesign, using modelling tools, the active site cavity of SsoPox to mimic as closely as possible that of BdPTE (Fig. 1; Table S1). This method consisted of two main steps: (i) the identification of mutations corresponding to equivalent positions in the two enzyme active sites using structural alignment, and (ii) the rational selection of mutations, at positions with no equivalent residue, which mimicked the shape and chemical nature of the target enzyme cavity. A mutation dataset was thus obtained and used to develop a combinatorial library consisting of random combinations of our pre-selected mutations.
[Figure 1 caption: Structural alignments of SsoPox (red) and BdPTE (yellow) structures guided the mutational design. (C) Active site superposition of the BdPTE (green) and SsoPox (cyan) active sites. Residues with a structural equivalent (orange sticks) in both structures were used to design mutations of SsoPox residues into the corresponding BdPTE residues. Positions with no structural equivalent (red sticks) were designed using modelling tools (purple sticks) to mimic the active site cavity size, shape and chemical properties of the BdPTE crystal structure model (Figure S1). (D) The mutation dataset consists of mutations to the BdPTE sequence when there is a structural equivalence (orange), and of mutations designed when there is no structural equivalence (red). The third set of mutations (black) adds more diversity at two selected positions, 258 and 263. (E) Once the library had been validated, primers carrying mutations were used to shuffle them and generate a gene library with random combinations of the selected mutations. (F) The library was screened for paraoxonase activity to identify enzymes with improved proficiency.]
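As a minimal sketch of what a "random combination of pre-selected mutations" means in silico (the mutation subset below is taken from substitutions named in the text, but the inclusion probability and the sampling scheme are assumptions of this illustration, not the study's actual protocol):

```python
import random

# Subset of the mutation dataset mentioned in the text:
# wild-type residue/position -> allowed substitutions.
MUTATION_DATASET = {
    "V27": ["A"], "Y97": ["W"], "Y99": ["F"], "L228": ["M"],
    "C258": ["A", "L"], "W263": ["F", "L", "M", "A"],
}

def random_variant(p_include=0.5):
    """Draw one library member: each position is mutated with probability p_include (assumed)."""
    mutations = []
    for wt_pos, choices in MUTATION_DATASET.items():
        if random.random() < p_include:
            mutations.append(f"{wt_pos}{random.choice(choices)}")
    return mutations

random.seed(0)
for _ in range(5):
    member = random_variant()
    print("-".join(member) or "wild-type")
```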
Structure-based design produced mutants with improved phosphotriesterase activity. The structure-based design considerably reshaped the active site cavity of SsoPox, theoretically bringing it closer to that of BdPTE, both in shape and nature (Figure S1). A total of 14 key positions within the active site were identified, and were mutated to specific residues, or degenerated into several possible amino acids (see Methods). The paraoxonase activity of 184 randomly selected clones was screened and the 14 best were sequenced. Most of these 14 clones were found to possess the same mutations (e.g. W263F/L/M/A in 73% of cases) (Figure S2A; Table S3). Of them, the most active variants, αsA1 (C258L-I261F-W263A), αsA6 (F46L-C258A-W263M-I280T), αsB5 (V27A-I76T-Y97W-Y99F-L130P-L226V), αsC6 (L72I-Y99F-I122L-L228M-F229S-W263L), and αsD6 (V27A-Y97W-L228M-W263M), were subjected to kinetic and structural characterization. It should be noted that αsA1 was obtained in a previous engineering effort 47 , and that four of the five selected variants contained a substitution of the key residue W263. We previously highlighted the role of this residue in substrate binding and in modulating the active site loop 8 conformation, resulting in an increase in activity against promiscuous substrates 11 . Variant αsB5 does not harbor any substitution at position W263. We therefore decided to test the W263 substitution on an αsB5 background, using a saturation mutagenesis strategy. The five variants (αsA1, αsA6, αsB5, αsC6 & αsD6) were characterized against several phosphotriesters, including ethyl-paraoxon (I), ethyl-parathion (II), methyl-paraoxon (III), methyl-parathion (IV) and malathion (V) (Fig. 2). Overall, all variants exhibit improved catalytic efficiencies against all tested substrates, with the exception of αsA1 with malathion (~2-fold decrease) (Fig. 3A; Table 2). The best improved variant, αsD6, exhibits a 2,210-fold increase in catalytic efficiency against methyl-parathion (Table 2). The largest improvements were observed for ethyl/methyl-parathion, two poor substrates for wild-type SsoPox. Interestingly, while the wt enzyme shows a clear preference for small substituents 46 , most selected variants lost this preference, possibly indicating an enlargement of the active site cavity. These improvements are in the range of what was previously obtained, but with more intensive mutation protocols and for only one OP substrate 49 . Finally, it was noted that two variants (αsA6 & αsC6) presented substrate inhibition with some thiono-phosphotriesters.
The five selected variants were further evaluated for their lactonase activity (Table 2). Interestingly, variants αsA6 and αsC6 exhibit enhanced lactonase activity against γ- and δ-lactones, while αsA1, αsB5 and αsD6 show reduced lactonase catalytic efficiencies. This emphasizes the fact that the improvement of the phosphotriesterase activity does not necessarily compromise the cognate lactonase activity of the enzyme. These mutations therefore dramatically increased a new activity (phosphotriesterase) without the complete loss of the native/original function (lactonase) 4 . These mutants are different from those generated on other PLLs through loop insertion. While the stepwise insertion of residues in loop 7 can significantly increase the ability of PLLs to degrade phosphotriesters, it also drastically decreases their lactonase activity 43 . Insertion in loop 7 has been previously described as a key evolutionary event in the transition from lactonase to phosphotriesterase 50 . The crystal structures of variants αsA1, αsA6, αsB5, αsC6 and αsD6 were solved (Table 1). While most structures were solved at high or medium resolutions (1.4-2.55 Å), mutant αsD6 could only be solved at low resolution (2.95 Å). Given this resolution, we limited our interpretations and retained these data for the consistency of the study, as the structure exhibits structural features (loop 8 disorder) that are consistent with all the other mutants. Overall, the mutants' structures are similar to that of wt-SsoPox. However, the active site cavities are larger for the mutants: most selected mutations replace residues with smaller ones: W263 is mutated into M/L/I, V27 into A, I76 into T, L130 into P, and Y99 into F. Only two selected substitutions correspond to bulkier side chains: Y97W and L228M. This enlargement is difficult to quantify, even with the structures of these mutants, because of the extremely mobile nature of loop 8 in the variants. Enlarging the active site was a main objective of the active site redesign of SsoPox, to enable the bulkier phosphotriesters to bind within the natural lactonase enzyme. Additionally, the loop 8 conformations differ in the mutants, as compared to the wt-enzyme. Because part of loop 8, including position 263, is located at the enzyme dimer interface, altered loop 8 conformations modulate the relative orientation of both monomers. In the case of the mutants characterized in this study, the dimer reorientation yields significant displacements of up to 5.2 Å (e.g., αsA1, between equivalent carbon α positions), as compared to wt-SsoPox (Figure S5). Similar reorientations were previously observed upon substrate binding or mutations of W263 11,42 .
Although the active sites of the improved variant structures superpose well onto the wt-structure, all selected mutants display altered loop 8 conformations. Changes are subtle for some of the variants (e.g. αsA1), but are large for others (e.g. αsC6) (Fig. 4). Notably, mutant αsA6 was crystallized in two different conformations: one in which loop 8 adopts a wt-like conformation, referred to as the closed conformation (CC), and another one where loop 8 is unfolded, referred to as the open conformation (OC) (Figure S6). In the two other mutants, αsD6 and αsB5, loop 8 could not be modeled due to the lack of electronic density, which was likely due to the high level of motion of this enzyme region. Analysis of the normalized thermal motion B-factor supports this hypothesis. It also confirms that loop 8 is highly mobile in all selected mutants, as compared to wt-SsoPox (Fig. 5). The higher mobility of loop 8 may also partly explain the improved ability of the variants to hydrolyze phosphotriesters with large substituents. Improved mutant αsD6 lost the thiono-effect. SsoPox presents a marked preference for oxono-OP substrates as compared to thiono-ones (>100-fold in catalytic efficiency) 46 . Conversely, PTEs do not exhibit such a drastic preference, paraoxon being a slightly better substrate than parathion for the enzyme 27,51 . Interestingly, some selected variants retained a strong preference for oxono-phosphotriesters (~10-fold), whereas others such as αsD6 lost the thiono-effect (Figure S7). The wild-type enzyme exhibits a low KM value with methyl-parathion (121 µM), but also a very low kcat value (1.10 × 10−3 s−1), which could reflect a strong interaction between the thiono moiety and the metal cations (low KM value) that would subsequently impair catalysis (low kcat value) 52 . Interestingly, the improved variants do not exhibit significant differences in KM values for thiono-phosphotriesters, but show a dramatic increase in kcat values as compared to the wild-type enzyme (e.g. kcat values of 6.89 s−1 and 1.10 × 10−3 s−1 with methyl-parathion for the αsD6 variant and the wild-type enzyme, respectively). The two selected variants that lost the thiono-effect, namely αsB5 and αsD6, have only two mutations in common: V27A and Y97W. Both residues are located relatively close to the bi-metallic active site, yet do not interact directly with the metals (V27-Fe: 6.7 Å, Y97-Co: 4.3 Å; Fig. 6). Because other PLLs have been shown to exhibit high charge coupling between the β metal and the conserved tyrosine residue 33,53 , Y97 is a likely candidate for the thiono-effect, and will be examined in future studies.
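A minimal sketch of the normalized B-factor analysis mentioned above, assuming Biopython is available, using only Cα atoms, and normalizing as a per-chain z-score (one common convention; the analysis reported in the paper may differ in its details):

```python
from statistics import mean, stdev
from Bio.PDB import PDBParser  # requires biopython

def normalized_ca_bfactors(pdb_path, chain_id="A"):
    """Return {residue number: z-scored Calpha B-factor} for one chain."""
    structure = PDBParser(QUIET=True).get_structure("model", pdb_path)
    chain = structure[0][chain_id]
    raw = {res.get_id()[1]: res["CA"].get_bfactor()
           for res in chain if "CA" in res}
    mu, sigma = mean(raw.values()), stdev(raw.values())
    return {resnum: (b - mu) / sigma for resnum, b in raw.items()}

# In this picture, loop 8 residues (around position 263) would stand out with
# large positive z-scores in the mutant structures if the loop is more mobile
# than the rest of the protein.
```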
Mutations of W263 and αsB5 are incompatible. αsB5 (V27A-I76T-Y97W-Y99F-L130P-L226V) is the only selected mutant that does not contain a substitution at position W263. Because W263 substitutions were previously shown to be key for increasing SsoPox's phosphotriesterase activity, we used saturation mutagenesis to introduce W263 mutations into the background of αsB5 12 . The best selected mutants against paraoxon (Figure S2C) were found to be αsB5-W263I/L/M. Surprisingly, none of these mutants demonstrated an increase in phosphotriesterase activity in comparison with αsB5, but rather a decrease in activity (Table 2). This is intriguing because, taken individually, these mutations have been shown to greatly improve phosphotriesterase activity. For example, with ethyl-paraoxon as a substrate, W263M and αsB5 produce catalytic efficiency improvements of 14- and 67-fold, respectively, while the combination αsB5-W263M increases activity only 11-fold (Table 2). This example of negative epistasis could be due to the mode of action of these mutations. The structures reveal that the αsB5 mutations have a destabilizing effect on loop 8, as illustrated by the B-factor analysis. The same was observed for W263M in a previous study 11 . Combining these two destabilizing sets of mutations may have harmed the active site integrity, and in particular the necessary alignment and pre-organization of the catalytic residues, a phenomenon previously described for another enzyme as "conformational active site disorder" 54 . The destabilizing effect of these mutations can also be observed on the overall enzyme stability, with αsB5, W263M and αsB5-W263M exhibiting Tm values of 70.4 °C, 85.3 °C and 69.1 °C, respectively, as compared to 106 °C for the wt enzyme (Table S4). Here, the effects of the combined five mutations harbored by variant αsB5 together with the previously identified positive substitutions at residue W263 were not additive, underlining that single beneficial mutations do not necessarily induce positive epistatic effects. Synergistic effects of mutations are often complex to predict, and most engineering strategies usually lead to an optimization plateau that is difficult to overcome 55 .
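To make the negative epistasis quantitative, one can compare the observed fold-improvement of the combined variant with a multiplicative (log-additive) expectation built from the single backgrounds. The fold-improvements below are the ethyl-paraoxon numbers quoted in the text; the multiplicative null model itself is an assumption of this illustration, not a claim of the study:

```python
import math

fold_w263m = 14      # ethyl-paraoxon improvement of W263M alone (from the text)
fold_asb5 = 67       # improvement of alpha-sB5 alone (from the text)
fold_combined = 11   # improvement of alpha-sB5-W263M (from the text)

expected = fold_w263m * fold_asb5                 # no-epistasis (multiplicative) expectation
epistasis = math.log10(fold_combined / expected)  # negative value => antagonistic interaction

print(f"expected ~{expected}-fold, observed {fold_combined}-fold, "
      f"epistasis = {epistasis:.2f} log10 units")
```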
Improvement of engineered variants over W263 mutation.
In a previous report, we noted that mutation of W263 into each of the other 19 residues leads to an increase of the promiscuous paraoxonase activity 11 . Here we note that most of the improved variants (i.e. αsA1, αsA6, αsB5, αsC6 & αsD6) (i) exhibit higher paraoxonase activity than W263L/M, and (ii) above all, are capable of hydrolyzing phosphotriesters that are not substrates for the wild-type enzyme or the W263 mutants, such as ethyl-parathion, chlorpyrifos or fensulfothion (Table 2). The activity spectrum of these engineered variants against the tested phosphotriesters is therefore much wider than that of the wild-type enzyme and of the W263 mutants.
Relation between active site loop disorder and broad enzymatic specificity.
The organophosphorus compound bioremediation potential of several mutants was investigated. In particular, the best variants, αsD6 and αsB5, alongside the other improved variants αsB5-W263I/L/M, W263L/M and the wt-enzyme, were characterized for their phosphotriesterase activity with ethyl-paraoxon (I), ethyl-parathion (II), malathion (V), chlorpyrifos (VI), diazinon (VII), fenitrothion (VIII), fensulfothion (IX) and coumaphos (X) (Fig. 2). Interestingly, we found that most mutants were able to hydrolyze fensulfothion, previously reported as an inhibitor of the wild-type enzyme 46 . Fensulfothion was indeed previously shown to bind head-to-tail into the wild-type enzyme active site 46 , a non-productive binding mode. Changes in the active site loop 8 conformational ensemble of the engineered mutants might have allowed for the productive binding of fensulfothion. All mutants selected and used in this study are destabilized as compared to the wt-enzyme (Fig. 7; Table S4). The structures reveal that these mutations, located on loop 8, contribute to increasing the mobility of the loop. Interestingly, the ability of the tested mutants to hydrolyze a wide range of phosphotriesters, including compounds that were inhibitors of the wt-enzyme, correlates with a lower Tm (Fig. 7). This correlation suggests that the degree of disorder of active site loop 8 may modulate the enzyme's ability to bind and hydrolyze a variety of phosphotriesters, a wider conformational ensemble of loop 8 being correlated with a broader enzyme specificity. This observation is consistent with previous studies highlighting the importance of the destabilization of active site loops to evolve new activities and new substrate specificities 54,56 . Such flexible active site loops involved in enzymatic specificity support the notion of fold polarity: a portion of the active site (e.g. the loop) is weakly connected to the enzyme scaffold, thereby making it possible for new functions to evolve with few changes 57 .
Mutant αsD6 is an active, broad-spectrum phosphotriester biodecontaminant.
Mutant αsD6 was further investigated for bioremediation purposes. Organophosphate-based pesticides are usually spread at concentrations in the millimolar range, leading to micromolar-range contaminations. We thus investigated the potential of αsD6 for decontaminating pesticide solutions at 250 µM, to simulate a contamination of groundwater or runoff waters. We investigated two enzyme-to-substrate ratios [E]/[S], 10−2 and 10−3 respectively, for decontaminating OP solutions and determined the time required to hydrolyze 95% of the pesticide preparations (Fig. 3B; Figure S4). With an [E]/[S] ratio of 10−3, 95% degradation was achieved within 10 minutes for four substrates (ethyl-paraoxon, diazinon, fenitrothion and coumaphos). Two substrates were decontaminated within an hour (ethyl-parathion and chlorpyrifos), and fensulfothion was degraded in three hours.
When the enzyme concentration was increased 10-fold ([E]/[S] ratio of 10−2), five substrates (ethyl-paraoxon, ethyl-parathion, diazinon, fenitrothion and coumaphos) were decontaminated within two minutes, while fensulfothion needed ~20 minutes and the two other substrates tested (chlorpyrifos and malathion) required 40 minutes. Given both its catalytic proficiency and high thermal stability (Tm = 82.5 °C), αsD6 is a promising candidate for organophosphorus compound bioremediation and, in particular, the decontamination of water runoffs, soils, food products and materials. Moreover, SsoPox variants were previously shown to be compatible with immobilization procedures, including alginate beads and polyurethane-based coatings 48,58 , and could be of prime interest for the development of filtration devices for water treatment purposes.
Owing to their similar barrel topology, SsoPox and BdPTE structures (PDB ID 2vc5 and 1dpm, respectively) were superimposed (Fig. 1) using PyMol 59 , making it possible to identify structurally equivalent residues at their respective active sites. In an effort to reshape the SsoPox active site, side chains that were well superimposed between the two enzymes were mutated using Coot into the residue present in BdPTE 60 , with the exception of V27, which was mutated into Ala (and not Gly) to avoid increased entropy (Fig. 1; Table S1). However, due to the major differences in loops 7 and 8 between PLLs and PTEs, numerous residues were not superimposable. These active site residues (Y99, L228, F229 and W263) were therefore mutated in an effort to mimic the BdPTE active site cavity in terms of shape and chemical nature, as illustrated by the chimeric reconstruction of SsoPox carrying all these mutations (Figure S1). Other mutations were also implemented in the dataset, e.g. C258A and C258L and different substitutions of the key position W263 (F/L/M/A) that have been shown to improve the phosphotriesterase activity 11 .
Synthesis of the combinatorial library. The SsoPox coding gene was amplified from the previously described pET22b-SsoPox plasmid 46 . For each of the 14 mutations, a mutagenesis primer of 30-33 bp (~10-15 bp on each side of the mutation) was synthesized (Table S2). For mutations located close together, primers could share several mutations, and different primers were synthesized and mixed to keep an equivalent statistical probability for each of the 14 mutations. Typically, 2 pmol of primer mixture and 100 ng of DNaseI (TaKaRa)-generated fragments of the SsoPox gene were assembled as previously described 17 , followed by nested PCR with the external cloning primers (SsoPox-lib-pET-5′/3′) (Table S2). The PCR product was then cloned into a customized version of the pET-32b(-ΔTrx) plasmid and electroporated into E. cloni cells (Lucigen, USA). After growth on agar plates, plasmid extraction was performed to create the plasmid bank. Variant genes were PCR amplified using T7-prom and the primers listed in Table S2, and sequenced. Sequencing of 10 randomly picked colonies showed that the combinatorial library contains an average of 5.3 ± 3.3 library mutations and 0.43 ± 0.53 random mutations per gene.
Screening of the library. The plasmid library was used to transform the Escherichia coli strain BL21(DE3)-pGro7/GroEL (TaKaRa) to obtain colonies expressing the library of SsoPox variants. Randomly picked clones (184), representing a coverage of 3.8% of the library, were grown in 96-well plates in 500 µL of ZYP medium as previously described 11. Production of proteins and chaperones was induced after five hours of culture at 37 °C by reducing the temperature to 25 °C and adding CoCl2 (0.2 mM) and arabinose (0.2%, w/v). After overnight growth, the cell lysate was used to perform activity screening with 100 µM of paraoxon substrate (Figure S2B), after partial purification of the protein by a 15 minute heating step at 70 °C 35,49. The screening was performed in 50 mM HEPES pH 8, 150 mM NaCl, 0.2 mM CoCl2. Kinetics of paraoxon/methyl-parathion hydrolysis were monitored by following the absorbance at 405 nm for 10 minutes using a microplate reader (Synergy HT, BioTek, USA) and the Gen5 software. Screening was performed as described above (Figure S2C): clones were produced in liquid ZYP medium and their activity was screened with 100 µM of paraoxon substrate. The plasmids corresponding to the most interesting variants were extracted and the SsoPox variant encoding genes were sequenced.

Kinetic assays. Generalities. Catalytic parameters were evaluated in triplicate at 25 °C and recorded using a microplate reader (Synergy HT, BioTek, USA) and the Gen5.1 software, in a 6.2 mm path length cell for a 200 µL reaction in 96-well plates, as previously explained 30. Catalytic parameters were obtained by fitting the data to the Michaelis-Menten (MM) equation using GraphPad Prism 6 software. When Vmax could not be reached in the experiments, the catalytic efficiency was obtained by fitting the linear part of the MM plot to a linear regression using GraphPad Prism 6 software. For some SsoPox variants, the MM plot was fitted to the substrate inhibition equation using GraphPad Prism 6 software, enabling us to determine a KI for several substrates. Consequently, the catalytic efficiencies calculated under these conditions are valid only at low substrate concentrations. In some other cases, saturation could not be reached, and kcat/KM values were therefore determined using linear regression. For coumaphos, data were fitted to a one-phase decay non-linear regression.
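As an illustration of the fitting step (performed here with GraphPad Prism), the Michaelis-Menten and substrate-inhibition fits can be reproduced with SciPy; the rate data and function names below are invented for the example and are not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax.[S] / (Km + [S])"""
    return vmax * s / (km + s)

def substrate_inhibition(s, vmax, km, ki):
    """v = Vmax.[S] / (Km + [S].(1 + [S]/Ki)); the derived kcat/Km is only
    meaningful at low substrate concentrations, as noted in the text."""
    return vmax * s / (km + s * (1.0 + s / ki))

# Invented rate data for illustration only ([S] in mM, v in µM/s)
s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])
v = np.array([0.8, 1.4, 2.4, 3.1, 3.6, 3.8])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[4.0, 0.3])
print(f"Vmax = {vmax:.2f}, Km = {km:.3f}")
# kcat/Km follows by dividing Vmax by the enzyme concentration used in the assay.
```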
Lactonase activity characterization. Lactonase kinetics were performed using a previously described protocol 30. The time-course hydrolysis of lactones was performed in lac buffer (2.5 mM Bicine pH 8.3, 150 mM NaCl, 0.2 mM CoCl2, 0.25 mM cresol purple and 0.5% DMSO) over a concentration range of 0-2 mM for 3-oxo-C10 AHL (XI) or 0-5 mM for undecanoic-γ/δ-lactones (XII, XIII). Cresol purple (pKa 8.3 at 25 °C) is a pH indicator used to follow lactone ring hydrolysis through the acidification of the medium. The molar extinction coefficient at 577 nm was evaluated by recording the absorbance of the buffer over an acetic acid concentration range of 0-0.35 mM.
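The calibration logic can be summarized as follows: the absorbance change of cresol purple per unit of acid released is first determined from the acetic acid titration, and the absorbance slope of a lactone hydrolysis time course is then converted into a rate. The sketch below uses invented absorbance values purely to illustrate that conversion.

```python
import numpy as np

# Calibration: absorbance at 577 nm versus acetic acid added (invented values)
acid_mM = np.array([0.0, 0.05, 0.10, 0.20, 0.35])
a577    = np.array([1.20, 1.05, 0.90, 0.62, 0.20])

slope, intercept = np.polyfit(acid_mM, a577, 1)   # dA577 per mM of acid released
dA_per_mM_acid = abs(slope)

# Assay: measured absorbance slope during lactone hydrolysis (invented value)
dA_dt = -0.004                                    # absorbance units per second
hydrolysis_rate_mM_per_s = abs(dA_dt) / dA_per_mM_acid
print(hydrolysis_rate_mM_per_s)                   # protons released ~ lactone rings opened
```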
Bioremediation of pesticide solutions at 250 µM. For all OP substrates except malathion, experiments were performed in triplicate at 25 °C using a microplate reader (Synergy HT, BioTek, USA). Wavelengths were chosen as described in the kinetic assays section. Degradation of malathion was followed by GC/MS in triplicate. The OP concentration was fixed at 250 µM; all the substrates were soluble at this concentration. The best SsoPox variant for each substrate was used, namely αsD6 for all the OPs. Two enzyme-to-substrate ratios [E]/[S] were used, 10−2 and 10−3, corresponding to enzyme concentrations of 90 µg.mL−1 and 9 µg.mL−1, respectively. The reactions were monitored until the plateau was reached. Experimental measurements were obtained using the Gen5.1 software and then analyzed with GraphPad Prism 6 software. Curves were fitted using one-phase decay non-linear regression with equation (1), $Y = (Y_0 - \mathrm{Plateau})\, e^{-kt} + \mathrm{Plateau}$, where Y0 = 0% and Plateau = 100%. The rate constant k was determined, and the time required to observe 95% decontamination was calculated accordingly. The curves and results of the fits are shown in Figure S4. Degradation at 95% was confirmed by an end-point GC/MS measurement for all pesticides at the 10−3 enzyme-to-substrate ratio. 100 µL of activity buffer solution was first extracted with 100 µL of chloroform. Organic extracts were analyzed using a Clarus 500 gas chromatograph equipped with a SQ8S MS detector (Perkin Elmer, Courtaboeuf, France). 1 µL of organic extract was volatilized at 220 °C (split 15 mL.min−1) in a deactivated FocusLiner with quartz wool (SGE, Ringwood, Australia), and the compounds were separated on an Elite-5MS column (30 m, 0.25 mm i.d., 0.25 µm film thickness) for 12 minutes using a temperature gradient (80-280 °C at 30 °C.min−1, five minutes' hold). Helium flowing at 2 mL.min−1 was used as the carrier gas. The MS inlet line was set at 280 °C and the electron ionization source at 280 °C and 70 eV. Full-scan monitoring was performed from 40 to 400 m/z in order to identify chemicals by spectral database search using MS Search 2.0 operated with the Standard Reference Database 1A (National Institute of Standards and Technology, Gaithersburg, MD, USA). m/z is the mass-to-charge ratio of the base peak fragment detected for each molecule. In the case of weak pesticide signals, extracted ion chromatograms were generated with base peak ions to confirm the presence of chemicals (paraoxon (I) and ethyl-parathion (II), m/z 109; malathion (V) and fenitrothion (VIII), m/z 125; diazinon (VII), m/z 137; coumaphos (X), m/z 97; fensulfothion (IX), m/z 293; and chlorpyrifos (VI), m/z 197). Selected ion recording using base peak ions was applied in order to specifically monitor pesticides and collect peak areas for kinetics. All samples were analyzed over short periods of time to avoid signal drift. All data were processed using TurboMass 6.1 (Perkin Elmer).
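For illustration, the one-phase decay fit and the 95% decontamination time can be reproduced as follows; the time-course values are invented, and only the fitted rate constant k of equation (1) is needed to compute t95 = ln(20)/k.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(t, k, y0=0.0, plateau=100.0):
    """Equation (1): Y = (Y0 - Plateau).exp(-k.t) + Plateau, with Y0 = 0% and Plateau = 100%."""
    return (y0 - plateau) * np.exp(-k * t) + plateau

# Invented degradation time course (% pesticide hydrolyzed versus minutes), for illustration
t = np.array([0, 1, 2, 5, 10, 20, 40], dtype=float)
y = np.array([0, 22, 39, 70, 90, 99, 100], dtype=float)

(k,), _ = curve_fit(lambda t, k: one_phase_decay(t, k), t, y, p0=[0.1])
t95 = np.log(20.0) / k        # Y reaches 95% of the plateau when exp(-k.t) = 0.05
print(f"k = {k:.3f} min^-1, t95 = {t95:.1f} min")
```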
Crystallization. Crystallization assays were performed as previously described 42,46. Crystallization was carried out using the hanging-drop vapor diffusion method in 96-well plates (Greiner Microplate, 96 well, PS, F-bottom) on ViewDrop II seals (TPP Labtech). Equal volumes (0.5 µL) of protein and reservoir solutions were mixed, and the resulting drops were equilibrated against a 150 µL reservoir solution containing 20-30% (w/v) PEG 8000 and 50 mM Tris-HCl buffer (pH 8). Crystals appeared after a few days at 4 °C.
Data collection and structure refinement. Crystals were first transferred to a cryoprotectant solution consisting of the reservoir solution and 20% (v/v) glycerol. Crystals were then flash-cooled in liquid nitrogen. X-ray diffraction data were collected at 100 K using synchrotron radiation at the ID23-1 beamline (ESRF, Grenoble, France) with an ADSC Q315r detector. X-ray diffraction data were integrated and scaled with the XDS package (Table 1) 61. The phases were obtained using the native structure of SsoPox (PDB code 2vc5) as a starting model, by molecular replacement with MOLREP or PHASER 62,63. The models were built with Coot and refined using REFMAC 60,64. We note that three structures presented in this work exhibit one disordered monomer (with correspondingly low electron density): αsB5 (monomer D, out of a total of 4 monomers), αsD6 (monomer C, out of 4 monomers) and αsC6 (monomer C, out of 4 monomers). Structure illustrations were prepared using PyMOL 48.
Relative B-factor analysis. The occupancies of all residues in all tested structures were set to 1 for this analysis. For residues with alternate conformations, the sums of occupancies were set to 1. Structures were re-refined with REFMAC 60,64. The relative B-factor values were obtained by normalizing the B-factor values of each residue by the average B-factor of the whole structure as previously described 11,54,57.
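The relative B-factor normalization can be illustrated with Biopython; this is a minimal sketch (the file name is a placeholder, and the actual analysis was carried out on the structures re-refined with REFMAC as described above).

```python
import numpy as np
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
structure = parser.get_structure("variant", "variant.pdb")   # placeholder file name

residue_b, all_b = {}, []
for residue in structure.get_residues():
    bvals = [atom.get_bfactor() for atom in residue]
    if bvals:
        residue_b[residue.get_full_id()] = float(np.mean(bvals))
        all_b.extend(bvals)

mean_b = float(np.mean(all_b))
# Relative B-factor: per-residue mean B divided by the average B of the whole structure
relative_b = {res_id: b / mean_b for res_id, b in residue_b.items()}
```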
"Biology",
"Chemistry"
] |
In rats with estradiol valerate-induced polycystic ovary syndrome, acute blockade of ovarian β-adrenoreceptors improves ovulation
Background Polycystic ovary syndrome is characterized by hyperactivity of the ovarian sympathetic nervous system, increases in the content and release of norepinephrine, and decreases in the number of β-adrenoreceptors. In the present study, β-adrenoreceptors in the ovaries of rats with polycystic ovary syndrome were blocked, and the resultant effects on ovulation, hormone secretion and the enzymes responsible for catecholamine synthesis were analyzed. Methods At 60 days of age, vehicle- or estradiol valerate-treated rats were injected with propranolol [10−4 M] into the ovarian bursas on the day of oestrus. The animals were sacrificed on the next day of oestrus, and the ovulation response, the steroid hormone levels in serum and the immunoreactivity of tyrosine hydroxylase and dopamine β-hydroxylase in the ovaries were measured. Results In animals with induced polycystic ovary syndrome and β-adrenoreceptor blockade, ovulation was restored in more than half of the animals, and hyperandrogenism was decreased with respect to the levels observed in the estradiol valerate-treated group. Tyrosine hydroxylase and dopamine β-hydroxylase were present in the theca cells of the growing follicles and in the interstitial gland. Injection of propranolol restored the tyrosine hydroxylase and ovarian dopamine β-hydroxylase levels in rats with induced polycystic ovary syndrome. Conclusions The results suggest that a single injection of propranolol, a nonselective β-adrenoreceptor antagonist, into the ovarian bursas decreases the serum testosterone concentration and the formation of ovarian cysts, improving the ovulation rate, accompanied by lower levels of tyrosine hydroxylase and dopamine β-hydroxylase in the ovary.
Background
Polycystic ovarian syndrome (PCOS) is the most common cause of infertility in women of reproductive age. It has a prevalence between 6 and 10% based on the U.S. National Institutes of Health criteria and 15% when the Rotterdam criteria are applied [1,2]. PCOS is a multifactorial pathology characterized by hyperandrogenism, anovulation, the presence of multiple ovarian cysts, irregularities in the menstrual cycle, and variable levels of gonadotropins [3,4]. The etiology of PCOS is unknown, but intrinsic abnormalities in the synthesis and secretion of androgens are a probable basis for the syndrome [5]. Additionally, involvement of the sympathetic nervous system that innervates the ovaries during the development of the syndrome is suggested by studies in women with PCOS, in whom a high density of catecholaminergic nerve fibers has been shown [6]; further, in rats, the participation of sympathetic nerve fibers in the modulation of androgen secretion in the ovaries has been revealed [7], which may contribute to the etiology of PCOS [8]. In rats, the main catecholamine present in the ovaries is norepinephrine (NE), which stimulates steroidogenesis [9][10][11], follicular development [12][13][14][15] and ovulation [16][17][18] by regulating α- and β-adrenoreceptors (ADR) [19][20][21].
There is evidence that nonhormonal procedures can result in PCOS. Luna et al. [22] showed that peripheral stimulation of β-adrenoreceptors (ADRB) with isoproterenol in wild-type adult rats promotes an increase in the number of precystic and cystic ovarian follicles without changes in plasma steroid levels, while blocking ADRB with propranolol in the same model inhibits their formation. The authors suggested that stimulation of ADRB activates the sympathetic nervous system of the rat ovary, which could be one mechanism of PCOS development, and that ADRB blockade could be a therapeutic alternative for women with PCOS [22]. Fernandois et al. [23] showed that prolonged blockade of β1- and β2-adrenoreceptors in 8- and 10-month-old rats, by daily i.p. injection of propranolol (5 mg/kg of body weight) for 60 days, recovered estrous cyclicity and elevated the ovulation rate and the levels of serum sex steroids. We have previously shown that, in the cyclic rat, acute blockade of β1- and β2-adrenoreceptors by propranolol injection on different days of the estrous cycle reduced the number of ova shed only in those animals treated on diestrus 2, without affecting ovulation on the other days of the cycle [24].
Several experimental models have been proposed to induce PCOS in neonatal, prepubertal or adult rats, depending on the phenotypic and physiological characteristics to be investigated, including steroidal and nonsteroidal drugs (dehydroepiandrosterone, dihydrotestosterone, letrozole, and estradiol valerate (EV) administration) [25][26][27] and genetic or environmental manipulations (genetically modified rat models as well as models developed with exposure to constant light or stress) [28,29]. To study the relationship between PCOS and sympathetic innervation, the most commonly used PCOS model is generated by a single injection of EV in prepubertal rats, which results in a polycystic ovary morphology, irregular estrous cycles [30,31], alterations in basal and pulsatile luteinizing hormone (LH) and follicle-stimulating hormone (FSH) concentrations, and an increased androgen response to human chorionic gonadotropin stimulation [32]. The ovaries of rats injected with EV present increased sympathetic neural activity [8,[32][33][34]. This increase is due to changes in the homeostasis of ovarian catecholamines that begin before the development of cysts and persist after their formation [8]. This change is accompanied by an increase in the release and content of NE from the nerve terminals in the ovary, an increase in the activity of tyrosine hydroxylase (TH), the rate-limiting enzyme for the synthesis of catecholamines, and a downregulation of ADRB2 in theca-interstitial cells [8,32,35].
Previous studies have analyzed the participation of ovarian innervation in the development of PCOS in rats following EV injection, and increased activity of the sympathetic nerves of the ovary has been shown. Bilateral sectioning of the superior ovarian nerve (SON) in EV-treated rats restores ovulation [8], while unilateral sectioning of the SON in the same animal model restores ovulation mainly in the innervated ovary, and the NE concentration was decreased only in denervated ovaries [36]. In a previous study [37], we showed that the elimination of noradrenergic fibers by guanethidine injection before the establishment of PCOS prevents the blockade of ovulation and the hyperandrogenism. In animals in which PCOS was already established, peripheral sympathetic denervation by guanethidine also restores ovulatory capacity, but it was not as efficient in reducing hyperandrogenism. This suggests that the elimination of noradrenergic fibers before the establishment of PCOS prevents two characteristics of the syndrome: the blockade of ovulation and hyperandrogenism [37]. Electroacupuncture treatment [33,38] or voluntary exercise [39] in EV-treated rats reduces sympathetic activity, restores the oestrus cycle and ovulation, and normalizes LH secretion and steroidogenesis by regulating ADR.
Based on this evidence, the aim of the present study was to analyze whether acute pharmacological blockade of ovarian ADRB restores ovarian functions in the EV-induced model of PCOS.
Animals
Newborn female rats of the CII-ZV strain were kept with their mothers under controlled light conditions (lights on from 05:00 to 19:00 h) until weaning and were provided free access to food and water ad libitum under the same light conditions. The animals were provided by the Facultad de Estudios Superiores-Zaragoza, UNAM, and the Bioethics Committee approved the experimental protocols. All procedures described in the present study were performed in accordance with the Guide for the Care and Use of Laboratory Animals of the Mexican Council for Animal Care (NOM-062-ZOO-1999) and with the Guidelines for the Use of Animals in Neuroscience Research from the Society for Neuroscience. Every effort was made to minimize the number of animals in each experimental group and to ensure minimal discomfort and pain.
Experimental designs
Ten-day-old female rats were intramuscularly injected with 2.0 mg EV (Sigma Chemical Co., St. Louis, MO, USA) dissolved in 0.1 mL sesame oil. The vehicle group (Vh) was injected with a single 0.1 mL dose of sesame oil. Vaginal smears were performed daily after vaginal opening was first observed.
At 60 days of age, the animals in vaginal oestrus were randomly assigned to one of the following four experimental groups: 1) Vh group (n = 10). Rats treated with sesame oil were sacrificed at 60 days of age, on oestrus day. 2) Vh group plus propranolol (n = 10). The ovarian bursas of rats treated with sesame oil were injected with 20 μL of propranolol [10−4 M] (Sigma Chemical Co., USA) dissolved in 0.9% saline solution. 3) EV group (n = 8). Rats treated with EV were sacrificed at 60 days of age, on oestrus day. 4) EV group plus propranolol (n = 9). The ovarian bursas of rats treated with EV were injected with 20 μL of propranolol [10−4 M] (Sigma Chemical Co., USA) dissolved in 0.9% saline solution.
Surgery
Following the methodology previously described [40], each rat underwent a bilateral laparotomy under general anesthesia, and the ovaries were exteriorized to enable injection of 20 μL of propranolol into each one with the aid of a motorized stepper Nano-Injector (Stoelting Co., USA) and a 100 μL microsyringe (Hamilton, USA) equipped with a 29-gauge needle; the injection rate was 4 μL/min. To prevent fluid leakage, the needle was kept in the ovarian bursa for 2 min. Subsequently, the ovaries were carefully cleaned, dried, and returned to the abdominal cavity, and the skin and muscle were sutured. The surgeries were performed between 9:00 and 11:00 A.M.
Autopsy procedures
Animals from each group were deeply anesthetized with pentobarbital between 9:00 and 11:00 A.M. following confirmation of oestrus by vaginal smear after the surgery. Blood was obtained by intracardiac puncture; it was allowed to clot and was centrifuged for 15 min at 3000 rpm. The serum was stored at −20 °C until progesterone, testosterone and oestradiol levels were measured. The animals were then perfused with 200 mL of saline solution followed by 200 mL of 4% paraformaldehyde dissolved in a phosphate-buffered solution (PBS). At autopsy, the oviducts were dissected, the number of ova shed was counted with the aid of a stereomicroscope, and ovulation was corroborated by observing the presence of corpora lutea (CL).
Ovarian morphology
The ovaries were dissected, kept in paraformaldehyde for 24 h, rinsed with saline and kept in a PBS solution with 30% sucrose until histochemical processing. The paraformaldehyde-perfused ovaries were sectioned with a cryostat (Microm HM 525) at −20 °C, and the 10-μm-thick sections were subsequently mounted on coated glass slides. Serial ovarian sections from five animals of each group were stained with hematoxylin-eosin and examined under a light microscope. All sections from each group were analyzed for the presence of fresh CL and follicular cysts with a Leica binocular microscope (DM750) coupled to a Leica camera (ICC50 HD). The criteria used to define fresh CL were healthy cells with large nuclei and the presence of blood vessels. Follicular cyst structures were defined according to Brawer et al. [30].
Immunofluorescence to TH and dopamine β-hydroxylase (DBH)
The ovarian sections of three animals taken at random from each experimental group (Vh, Vh + Pro, EV, and EV + Pro) were rinsed with PBS (pH 7.4) and then rinsed twice with PBS containing 0.5% Triton X-100. Nonspecific binding sites were blocked with IgG-free 2% bovine serum albumin (Sigma Chemical Co., USA). The sections were then incubated overnight at 4-8 °C with primary antibodies: polyclonal rabbit anti-TH antibody (1:200, sc-14,007, Santa Cruz Biotechnology Inc., USA) or polyclonal rabbit anti-DBH (1:200, sc-, Santa Cruz Biotechnology Inc., USA), and the sections were subsequently incubated with a FITC-labelled goat anti-rabbit secondary antibody (Vector Labs Inc., USA). The slides were counterstained with Vectashield coupled with DAPI (Vector Labs Inc., USA) for nuclear staining. For negative controls, the primary antibody was substituted with PBS. Photomicrographs were taken using an Evolution VF digital camera (Media Cybernetics, Inc., USA) coupled to a fluorescence microscope (BX-41, Olympus Co.). From the ovarian sections of each animal, 10 ovarian follicles that exhibited the follicular antrum and the oocyte were selected, except in the cysts, where the oocyte is absent (n = 3 animals per group with 10 pseudo-replicas per animal). Using the National Institutes of Health's ImageJ software, the relative fluorescence of TH or DBH immunoreactivity was quantified following the methodology used previously [37,[40][41][42]]. The color micrographs were converted to 8-bit grayscale images, and the criteria used to define the intensity settings were kept constant between all sections (the selection area in square pixels was equal for each ovarian follicle analyzed). The regions of interest were randomly selected based on visualization, and the fluorescence intensity was quantified in a constant area for each class of follicle evaluated.
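The ImageJ measurement can be mimicked programmatically as a check; the sketch below converts a micrograph to an 8-bit grayscale array and measures the mean intensity in a fixed-size region of interest (the file name and coordinates are placeholders, not values from the study).

```python
import numpy as np
from PIL import Image

# Load a color micrograph and convert it to 8-bit grayscale, as done in ImageJ
gray = np.asarray(Image.open("follicle.tif").convert("L"), dtype=float)  # placeholder file

def roi_mean_intensity(image, top, left, size):
    """Mean gray value in a size x size square ROI (the same area is used for every follicle)."""
    return image[top:top + size, left:left + size].mean()

intensity = roi_mean_intensity(gray, top=120, left=200, size=100)  # placeholder coordinates
print(intensity)
```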
Hormone measurements
The progesterone, testosterone and oestradiol serum levels were measured using radioimmunoassay kits purchased from Diagnostic Products (Los Angeles, CA).
Progesterone results are expressed in ng/mL, and testosterone and oestradiol results are expressed in pg/mL. The intra- and interassay coefficients of variation were 8.35 and 9.45 for progesterone, 9.65 and 10.2 for testosterone, and 8.12 and 9.28 for oestradiol, respectively.
Statistics
The results are expressed as the mean ± standard error (SE) for all experiments. The number of ova shed by ovulating rats was analyzed using Kruskal-Wallis tests, followed by Mann-Whitney U-tests. The ovulation rate, expressed as the number of ovulating animals per number of treated animals, was analyzed using Fisher's exact probability test. Hormone serum levels and the immunoreactivity of TH or DBH were analyzed using one-way analysis of variance followed by Tukey's test, with GraphPad Software, Inc. (San Diego, CA, USA). A probability ≤5% was considered significant.
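The statistical workflow can be reproduced with SciPy and statsmodels; the sketch below uses the reported ovulation counts (1/8 in the EV group versus 6/9 in the EV plus propranolol group) for the Fisher test, while the other values are invented placeholders used only to show the calls.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Number of ova shed per ovulating rat (invented values): Kruskal-Wallis, then Mann-Whitney U
ova_vh, ova_ev_pro = [11, 12, 10, 13, 12], [4, 6, 5, 7, 5, 6]
print(stats.kruskal(ova_vh, ova_ev_pro))
print(stats.mannwhitneyu(ova_vh, ova_ev_pro))

# Ovulation rate: Fisher's exact test on ovulating vs non-ovulating animals (EV vs EV + Pro)
print(stats.fisher_exact([[1, 7], [6, 3]]))

# Hormone levels (invented values): one-way ANOVA followed by Tukey's test
testosterone = np.array([20, 22, 19, 80, 85, 78, 45, 48, 50], dtype=float)
groups = ["Vh"] * 3 + ["EV"] * 3 + ["EV+Pro"] * 3
print(stats.f_oneway(testosterone[:3], testosterone[3:6], testosterone[6:]))
print(pairwise_tukeyhsd(testosterone, groups))
```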
Results
Ovulation rate and number of ova shed (Table 1)
The animals injected with EV exhibited vaginal opening at 14 ± 0.0 days of age and were in oestrus according to the vaginal smear, which remained unchanged until the day of sacrifice. Animals injected with Vh exhibited vaginal opening at 35.1 ± 1.2 days of age and had 4-day oestrous cycles.
In the Vh group, all the animals ovulated regardless of whether or not they were injected with propranolol. However, the number of ova shed was smaller in the Vh plus propranolol group than in the Vh group (Table 1).
In the EV group, 1/8 animals ovulated, while in the EV plus propranolol group, 6/9 of the microinjected animals ovulated. The number of ova shed by the EV group microinjected with propranolol was smaller than the number observed in the Vh group (Table 1).
Hormone serum levels
Microinjection of propranolol into both ovarian bursas of Vh-treated rats did not change progesterone levels compared with the Vh group. Animals injected with EV exhibited higher concentrations of progesterone than the controls. A single injection of propranolol into the ovarian bursas of EV-treated rats resulted in lower progesterone levels than those observed in EV-injected rats (Fig. 1a).
In the Vh group, propranolol microinjection into both ovarian bursas did not modify testosterone levels compared with the Vh-injected group. Testosterone levels in EV animals were higher than those in Vh-injected animals. In these animals, the microinjection of propranolol into both ovarian bursas resulted in lower testosterone levels than in the EV-treated group but higher testosterone levels than in Vh-injected animals (Fig. 1b).
The microinjection of propranolol in Vh-treated animals did not change oestradiol levels compared with Vh-injected rats. The hormone levels in EV-treated animals were higher than those in Vh-treated animals. The microinjection of propranolol into both ovarian bursas resulted in lower oestradiol levels than in the EV-treated group (Fig. 1c).
Ovarian morphology
The ovaries of rats injected with Vh, whether or not they were microinjected with propranolol into both ovarian bursas, presented growing follicles at different stages and CL (Fig. 2a and c). The ovaries of rats injected with EV presented follicular cysts, and only the ovaries of a single rat had CL (Fig. 2b). In the ovaries of EV-treated rats microinjected with propranolol into both ovarian bursas (Fig. 2d), CL were observed, as in the Vh group.
TH and DBH immunoreactivity in ovarian tissue
The data had a normal distribution (TH fluorescence intensity of follicles with antrum: p value 0.9702; cysts: p value 0.5176; Shapiro-Wilk normality test). TH and DBH immunoreactivity were found only in the interstitial tissue and theca cells of antral follicles. Compared to the Vh group, TH immunoreactivity was not significantly different in the ovarian tissue of Vh rats injected with propranolol. The highest intensity of TH immunoreactivity was observed in the theca cells of the ovarian follicles from the EV group. Propranolol injection into the ovarian bursas of EV-treated rats restored TH immunoreactivity with respect to the EV group (Fig. 3).
The microinjection of propranolol did not modify the DBH immunoreactivity in the Vh group. The DBH immunoreactivity in the ovaries of rats injected with EV was higher than in the Vh group. Propranolol injection into the ovarian bursas of EV-treated rats restored DBH immunoreactivity in ovarian tissue with respect to the EV group (Fig. 4).
Discussion
The results of the present study show that acute blocking of ADRB in ovaries with PCOS reestablishes ovulation in more than half of the animals, decreases progesterone, testosterone and oestradiol levels, prevents the development of ovarian cysts (as determined by the observation of ovarian tissue with growing follicles or presence of CL), and restores the enzymes responsible for the synthesis of NE to their basal levels.
Hyperactivity of the sympathetic ovarian system has been proposed to be associated with hyperandrogenemia [5,7,43,44]; however, this relationship is not yet clear [43,45]. Lara et al. [8] showed that the levels of NE in the ovaries increased slightly at 30 days after an EV injection. When the animals were analyzed 60 days after injection with EV, they had higher levels of ovarian NE and testosterone than the controls did. Rats injected with EV develop PCOS morphology, show a downregulation of ADRB2, and show an increase in nerve growth factor (NGF) and its low-affinity receptors in the ovary [7,8,32,46]. This association suggests that NGF [7,43,44] induces androgen overproduction in ovaries with PCOS, which is also a result of hyperactivation of the catecholaminergic system acting on ovarian steroid-secreting cells [32]; however, when NGF actions are blocked in the ovaries, ovarian functions are restored [46].
Previous studies have shown that EV-treated rats with unilateral sectioning of the SON recovered ovulation by the innervated ovary and normalized testosterone and oestradiol levels [36]. This result suggests that noradrenergic fibers arriving via the SON participate in the hyperandrogenism of the PCOS model. On the other hand, Linares et al. [47] showed that the bilateral section of the vagus nerve (VG) in EV-injected rats restored ovulation in both ovaries, suggesting that the neural information carried by the SON and VG plays a role in the regulatory mechanisms of development and maintenance of PCOS.
Other studies using agonists and antagonists of ADR have suggested that α-adrenoreceptors (ADRA) and ADRB are present in the ovaries [10,11,19,[48][49][50][51]. Consistently, Ojeda and Lara [52] showed that NE acts on ADRB in theca and granulosa cells and stimulates progesterone and testosterone secretion, but not oestradiol secretion. Likewise, in EV-treated rats, progesterone and androgen secretion increased in an NE-dependent manner [34].
According to Luna et al. [22], the ovaries of adult rats injected daily with isoproterenol for 10 days secreted, on day 11, a higher amount of androstenedione than the ovaries of the control group. This increase was not observed in rats studied 30 days after isoproterenol treatment; although ovarian cysts were still present, the adrenergic activity was similar to that of the control group, suggesting that the animals began to recover after the isoproterenol treatment ended. This response differs from that of EV-treated rats, which show hyperandrogenism and noradrenergic hyperactivation for longer periods [8]. Fifty-six days after EV injection, several groups have described the presence of follicular cysts, and ovarian noradrenergic activity remains higher than normal [8,32,34,36,46,53]. We therefore suppose that the mechanisms involved in the formation of the polycystic ovary induced by isoproterenol and by EV are different.
The findings from this study showed that a single propranolol injection into the ovaries of EV-treated rats improves the ovulation rate, as evidenced by the presence of CL. Moreover, progesterone and testosterone levels were lower in EV-treated rats microinjected with propranolol than in those treated only with EV; hence, the ADRB blocker begins to restore ovarian steroidogenesis. We suggest that if ADRB blockade is maintained, the concentration of steroid hormones could decrease even further. Although not all rats in the EV plus propranolol group ovulated, there was a decrease in testosterone concentration in all animals treated with the ADRB antagonist, which suggests variability among animals. It has been suggested that in prepubertal animals, the regulation of the enzymes that participate in progesterone, testosterone and oestradiol synthesis does not occur in parallel. This suggests that the synthesis of each hormone is not regulated by the same signals and that the changes in steroid hormone levels are not explained by changes in gonadotropin secretion [54].
According to Fernandois et al. [23], there is a correlation between reproductive aging and PCOS; both processes are accompanied by increased intraovarian sympathetic tone. In their study, it was proposed that after 2 months of ADRB blockade there was a reactivation of follicular development, improved ovarian cycling activity, an increased ovulation rate and a decrease in the number of cystic structures. Luna et al. [22] proposed that PCOS could be induced by ADRB activation in rats and could be prevented by the simultaneous administration of an agonist and an antagonist of ADRB. In the present study, a single propranolol injection into the ovarian bursas of EV-treated rats resulted in an ovarian morphology with follicular development and the presence of CL, indicating that the animals ovulated. However, this treatment was not able to reestablish ovarian functions in all animals. Espinoza et al. [37] showed that chronic administration of guanethidine (a drug that destroys noradrenergic fibers) prior to the induction of PCOS with EV prevents the blockage of ovulation and hyperandrogenism. However, animals that have already developed PCOS are not able to reduce testosterone levels; despite pharmacological denervation, neural signals still arrive in the ovaries via the SON.
It is possible that when ADRB are blocked, NE acts on α-adrenoreceptors, maintaining high testosterone levels despite treatment with propranolol. Manni et al. [38] showed that the expression of ADRA1 was higher in the ovaries of rats with PCOS. Although the effect of ADRA activation on ovarian steroidogenesis in PCOS rats has not been studied, it has been shown that in cultured granulosa cells obtained from adult rats, phenylephrine (an ADRA1A agonist) stimulates the secretion of progesterone [11], which is a precursor of testosterone.
Fig. 3 Distribution of TH in ovaries of Vh (a) or EV-treated rats (c) and after the bilateral injection of propranolol (Pro) (b, d) into the ovarian bursas. (e) Negative control in which the primary antibody was substituted with PBS. The ovarian sections were stained with anti-TH antibody (green), and nuclear staining was performed with DAPI (blue). TH is observed throughout the ovary, including the F: follicle and T: theca cells. Bar: 100 μm. (f) ImageJ analysis of TH relative fluorescence, means ± SE (n = 3 animals per group with 10 pseudo-replicas per animal); a p < 0.05 vs the Vh group; b p < 0.05 vs the EV group (one-way analysis of variance, followed by Tukey's test).
According to Morales-Ledesma et al. [36], NE release from the sympathetic fibers to the ovaries is increased in EV-treated rats. This change is associated with higher TH activity [8,32,35]. In the present study, we show that TH and DBH immunoreactivity is present in the theca-interstitial cells of EV-treated rats, and this activity is likely associated with the synthesis and secretion of testosterone. To our knowledge, this study is the first to show that a single propranolol injection into the ovarian bursas of EV-treated rats decreases TH immunoreactivity. These observations suggest that the functional activity of ovarian sympathetic tone is diminished by blocking ADRB. Likewise, DBH immunoreactivity is decreased in EV-treated rats. This finding suggests that the increase in TH activity produces a downregulation of DBH immunoreactivity in the ovaries as a form of negative feedback on NE synthesis.
Fig. 4 Distribution of DBH in ovaries of Vh (a) or EV-treated rats (c) and after the bilateral injection of propranolol (Pro) (b, d) into the ovarian bursas. The ovarian sections were stained with anti-DBH antibody (green), and nuclear staining was performed with DAPI (blue). (e) Negative control in which the primary antibody was substituted with PBS. DBH is observed throughout the ovary, including the F: follicle and T: theca cells. Bar: 100 μm. (f) ImageJ analysis of DBH relative fluorescence, means ± SE (n = 3 animals per group with 10 pseudo-replicas per animal); a p < 0.05 vs the Vh group; b p < 0.05 vs the EV group (one-way analysis of variance, followed by Tukey's test).
"Biology",
"Medicine"
] |
Novel MOA Fault Detection Technology Based on Small Sample Infrared Image
This paper proposes a novel metal oxide arrester (MOA) fault detection technology based on small-sample infrared images. The research addresses both the detection process and data enhancement. A lightweight MOA identification and localization algorithm is designed at the edge, which not only reduces the amount of data uploaded but also reduces the search space of the cloud algorithm. In order to improve the accuracy and generalization ability of the defect detection model under small-sample conditions, a multi-model fusion detection algorithm is proposed: different features of the image are extracted by multiple convolutional neural networks, multiple classifiers are trained, and a weighted voting strategy is finally used for fault diagnosis. In addition, an expansion model for fault samples is constructed using transfer learning and deep convolutional generative adversarial networks (DCGAN) to solve the problem of unbalanced training data sets. The experimental results show that the proposed method can accurately locate arresters under small-sample conditions, and that after data expansion the recognition rate of arrester anomalies improves from 83% to 85%, demonstrating high effectiveness and reliability.
Introduction
A metal oxide arrester (MOA) is widely used as an important protective device for the safe operation of power transmission and transformation systems. However, due to the impact of lightning overvoltage and switching overvoltage, as well as environmental temperature and humidity, the characteristics of the MOA will change [1]. Therefore, good operating characteristics of the MOA are particularly important for the power transmission and transformation system.
There are two main methods for MOA condition monitoring. The first is based on leakage current and can be divided into three approaches: the full current method [2][3][4], the third harmonic current method [5][6][7] and the capacitive current compensation method [8,9]. However, these methods require an aging test, are inefficient and have difficulty overcoming the interference of harmonics in the operating voltage. The second method is based on infrared thermal imaging [10,11]. Because MOA faults cause a local temperature rise, the grade of a heating defect can be judged by comparing the temperature differences between parts of the MOA. Compared with other defect detection methods, infrared detection technology is simple, safe and more efficient. However, the complex environment and the scarcity of fault samples make intelligent inspection difficult. Therefore, how to realize automatic fault identification is a hot topic for researchers.
Early research on state detection was mainly based on traditional image recognition and machine learning methods for image data mining. References [12,13] proposed using self-organizing maps (SOM) to analyze the thermal characteristics of MOA infrared images under working voltage, so as to determine the MOA state, and a related approach was described in [14].
The proposed system consists of edge devices (patrol smart cars, etc.) and servers (cloud servers, etc.). The infrared thermal images of the MOA are collected and stored by edge devices and processed quickly on the spot according to the needs of communication and cloud diagnosis, before being uploaded to the cloud server for fault identification over a high-speed communication network (4G, power wireless private network, etc.). Computing is distributed across the whole system network, including edge intelligent devices and cloud servers, and data is stored in the intelligent devices at the edge of the network. Therefore, the system can meet the construction needs of a low-delay, low-energy-consumption, high-precision power Internet of Things.
MOA Identification and Localization
Aiming at the problem that traditional detection methods have difficulty overcoming the complex background interference of a power grid, key-component localization is proposed to localize and extract different types of MOAs in substations or transmission lines. In order to suit edge devices, an improved SSD-MobileNet network that performs well in both speed and scale is adopted.
Infrared Thermal Fault Detection of the MOA
However, achieving full automation of MOA defect detection is still very challenging due to the visual complexity of defects and the small number of defective MOAs.
(1) The amount of abnormal MOA data is not enough to train a robust classification model.
(2) The existing fault detection algorithms based on deep learning usually rely on a single neural network, which is often limited by the characteristics of that network when facing different background applications.
(3) The visual complexity of defects makes it difficult, if not impossible, to construct a precise model.
On the one hand, a data expansion model is constructed through transfer learning combined with a deep convolutional generative adversarial network to solve the problem of data imbalance. On the other hand, different infrared features are extracted by multiple neural networks and multiple classifiers are trained; finally, a combination strategy is used to fuse the prediction results, improving the accuracy and generalization ability of the detection model.
MOA Identification and Localization
A single shot multibox detector (SSD) is a classic one-stage target detection model proposed by Wei Liu in 2016 [25]. As a fast recognition and positioning network, SSD is widely used in target detection, and its architecture is shown in Figure 2.
In order to cope with the limited computing resources at the edge, in this paper the lightweight MobileNet structure is used to replace the original VGG16 base network, and the average pooling layer and the fully connected layer are cut down. MobileNet is a series of lightweight networks proposed by Google [26]. Figure 3 shows the standard convolution and the MobileNet structure. MobileNet uses depthwise separable convolutions instead of traditional convolutions, decomposing the original standard convolution into a depthwise convolution and a pointwise convolution. Each channel of the input data is convolved separately, and convolution is then performed with 1 × 1 kernels whose channel number equals that of the input data, thus removing a large amount of redundant calculation. First, the MOA image is resized to a fixed size of 300 × 300. Then, forward propagation through the base network is used to extract features and form the feature maps. Finally, the additional feature network is used for regression calculation and non-maximum suppression to generate the prediction of the target object bounding box and category.
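A depthwise separable convolution of the kind used in MobileNet can be sketched in PyTorch as follows; this is a generic illustration, not the exact layer configuration of the improved SSD-MobileNet used in this paper, and the channel sizes in the comparison are arbitrary.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style block: per-channel 3x3 depthwise convolution followed by
    a 1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                                   groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

# Parameter comparison with a standard 3x3 convolution (128 -> 256 channels)
standard = nn.Conv2d(128, 256, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv(128, 256)
print(sum(p.numel() for p in standard.parameters()))    # 294912
print(sum(p.numel() for p in separable.parameters()))   # 34688, roughly 8.5x fewer
```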
Data Expansion
As shown in Figure 4, a data expansion model based on transfer learning and a deep convolutional generative adversarial network (TL-DCGAN) is proposed. First, the transfer learning method is used to train a model, DCGAN1, which can generate normal samples by using the large number of existing normal MOA images. Then, the weights of DCGAN1 are transferred again, and the limited fault data are used to train the data expansion model DCGAN2.
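In practice, the transfer step amounts to initializing DCGAN2 from the weights of DCGAN1 before fine-tuning on the scarce fault images. The following PyTorch-style sketch is only an illustration of that idea; G1 and D1 stand for the trained DCGAN1 networks and are assumptions of the example.

```python
import copy
import torch

def init_dcgan2_from_dcgan1(G1, D1, lr=2e-4):
    """Create DCGAN2 by copying DCGAN1's generator and discriminator weights
    (the transfer-learning step of TL-DCGAN) and return fresh optimizers for
    fine-tuning on the limited fault-image data."""
    G2, D2 = copy.deepcopy(G1), copy.deepcopy(D1)
    opt_G2 = torch.optim.Adam(G2.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_D2 = torch.optim.Adam(D2.parameters(), lr=lr, betas=(0.5, 0.999))
    return G2, D2, opt_G2, opt_D2
```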
Generative Adversarial Network
As shown in Figure 5, the GAN network structure mainly includes the generator (G) and the discriminator (D). The objective function of GAN training is

$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$,

where $p_{data}(x)$ is the probability distribution of the real samples, $p_z(z)$ is the distribution of the input random noise and $V(G, D)$ is the cross-entropy loss.

The loss function of G is $L_G = -\log p(s|x_{fake})$ and the loss function of D is $L_D = -[\log p(s|x_{real}) + \log(1 - p(s|x_{fake}))]$, where $s$ denotes the "real" label, $x_{real}$ is a real sample, $x_{fake} = G(z)$ is a generated (false) sample and $p(s|x) = D(x)$ is the probability assigned by the discriminator that $x$ is real. The optimization goal of the GAN model is for the samples generated by G to become indistinguishable from real samples by D. Therefore, in training, for G, the larger $p(s|x_{fake})$ is, the better, which corresponds to $\min_G V(D, G)$; for D, the larger $p(s|x_{real})$ and the smaller $p(s|x_{fake})$ are, the better, which corresponds to $\max_D V(D, G)$.
Data Expansion Model Based on TL-DCGAN
Compared with the original GAN, a variant called the deep convolutional GAN (DCGAN) was proposed in 2016 [27]; it uses mature convolutional neural networks (CNN) instead of multilayer perceptrons (MLP) and removes the pooling layers, making the overall network model differentiable.
Improved Generator Structure
In order to improve the resolution of the generated MOA images, an additional convolutional layer is added on the basis of DCGAN. The generator structure of the MOA fault image expansion model is shown in Figure 6. This is also done to make the generated data distribution closer to the real data distribution, to prevent the gradient from vanishing and to improve network stability.
The generator is mainly composed of the input layer, a fully connected layer, convolutional layers and residual blocks, in which the convolutional layers are fractionally strided (transposed) convolutions and the activation function is the ReLU function. First, a set of uniformly distributed random noise is input and expanded to a feature matrix of size 4 × 4 × 1024 through the fully connected layer. Then, in the first convolutional layer, deconvolution, batch normalization and the activation function are applied, and the output feature matrix size is 8 × 8 × 512. Next, the output feature matrix reaches a size of 16 × 16 × 256 after passing through two residual blocks, which increase the network depth and improve its representation ability. Finally, after further convolutional layers and residual blocks, a 128 × 128 pixel MOA image is obtained. The parameters of the generator's convolutional layers are listed in Table 1; the convolution kernel size is 3 × 3 and the stride is 2.
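A simplified version of such a generator can be sketched in PyTorch as follows; the channel progression (4 × 4 × 1024 up to a 128 × 128 image) follows the description above, but the exact layer parameters of Table 1 are not reproduced here, so this is only an illustrative approximation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(z_dim, 4 * 4 * 1024)          # noise -> 4 x 4 x 1024
        def up(cin, cout):                                 # fractionally strided convolution
            return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                                 nn.BatchNorm2d(cout), nn.ReLU(True))
        self.net = nn.Sequential(
            up(1024, 512),                                 # 8 x 8 x 512
            up(512, 256), ResidualBlock(256),              # 16 x 16 x 256
            up(256, 128),                                  # 32 x 32 x 128
            up(128, 64),                                   # 64 x 64 x 64
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh()) # 128 x 128 image

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 1024, 4, 4))

print(Generator()(torch.randn(1, 100)).shape)   # torch.Size([1, 3, 128, 128])
```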
Improved Discriminator Structure
As shown in Figure 7, compared with the original DCGAN network, the discriminator structure designed in this paper adds one more convolutional layer. In addition, to improve network performance, residual modules are constructed similarly to those in the generator above. The leaky ReLU activation function is used, and the convolution kernel size is 3 × 3. After each convolution, batch normalization and activation are performed. The input of the discriminator is a real or generated MOA image with a size of 128 × 128. The image size is reduced by downsampling in the convolutional layers, the network is then deepened by two residual blocks, and the extracted feature information is transmitted to the deeper layers of the network. Finally, after several convolutional layers and residual blocks, the feature map size becomes 4 × 4 × 1024 and is input to the fully connected layer to obtain the image discrimination result. The parameters of the discriminator's convolutional layers are shown in Table 2, in which the convolution kernel size is 3 × 3 and the stride is 2.
Defect Detection
As shown in Figure 8, an MOA infrared state detection framework based on multi-model fusion is proposed. Because of the small number of MOA infrared fault samples, using a traditional single neural network to extract the feature vector easily leads to overfitting of the model during classifier training. Therefore, this paper uses several convolutional neural networks to extract a variety of MOA fault features, selects the relevance vector machine (RVM) as the feature vector classifier to train multiple weak learners, and finally uses a combination strategy to fuse them for defect detection.
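The weighted-voting fusion can be expressed compactly; the sketch below fuses per-model class probabilities with weights (here, hypothetical validation accuracies) and is independent of whether the individual classifiers are RVMs or other models.

```python
import numpy as np

def weighted_vote(prob_list, weights):
    """Fuse per-model class probabilities (each array of shape [n_samples, n_classes])
    by a weighted average and return the predicted class per sample."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = np.tensordot(w, np.stack(prob_list), axes=1)   # [n_samples, n_classes]
    return fused.argmax(axis=1), fused

# Hypothetical outputs of three feature-extractor/classifier pairs for two samples
p1 = np.array([[0.6, 0.4], [0.3, 0.7]])
p2 = np.array([[0.5, 0.5], [0.2, 0.8]])
p3 = np.array([[0.7, 0.3], [0.4, 0.6]])
labels, fused = weighted_vote([p1, p2, p3], weights=[0.85, 0.80, 0.83])
print(labels)   # fused normal/fault decision per sample
```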
Depth Feature Extraction
In recent years, deep learning has developed rapidly; deep convolutional neural networks in particular have achieved good results in image classification and target recognition and have greatly improved efficiency. Therefore, this method is used as a feature extractor to identify infrared thermal faults of the MOA.
In a deep convolutional neural network, most neurons connect only to nearby neurons and share weights, which greatly reduces the number of network parameters and improves training speed. As shown in Figure 9, a deep convolutional network has three main structures: the convolutional layer, the pooling layer and the fully connected layer.
In the convolutional layer, the input data is convolved with a linear filter and the result is passed through a nonlinear activation function to obtain a feature map. Each feature map encodes one feature and shares the same parameters; different feature maps use different parameters to extract different features. The convolution formula is
$$x_{ij}^{k} = f\left(x_{ij}^{k-1} * w_{ij}^{k} + b_{j}^{k}\right),$$
where $x_{ij}^{k}$ is the feature map of the k-th layer, i and j are the input dimensions, $x_{ij}^{k-1}$ is the input data from the previous layer, the convolution filter of layer k is determined by the weight $w_{ij}^{k}$ and the bias term $b_{j}^{k}$, and f is the nonlinear activation function.
The pooling layer downsamples the feature map, reducing its dimension and the number of network parameters, which makes the features easier to process in subsequent layers and reduces overfitting to a certain extent. The pooling formula is
$$x_{ij}^{k} = f\left(\beta_{ij}^{k}\,\mathrm{down}\!\left(x_{ij}^{k-1}\right) + b_{j}^{k}\right),$$
where down is the downsampling function; if the downsampling window size is n × n, the output feature map is reduced n-fold. $\beta_{ij}^{k}$ and $b_{j}^{k}$ are the multiplicative and additive bias parameters, respectively. The fully connected layer is similar to a traditional neural network, in which each neuron is connected to all inputs.
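As a concrete illustration of the two layer equations above, the following NumPy sketch applies a single-channel 'valid' convolution and an n × n mean-pooling step with multiplicative and additive biases; the tanh activation and the choice of mean pooling as the downsampling function are assumptions made only for this example.

```python
# Illustrative NumPy rendering of the convolution and pooling equations
# (single channel, 'valid' convolution, n x n mean pooling; a sketch only).
import numpy as np


def conv_layer(x_prev, w, b, f=np.tanh):
    """x^k = f(x^{k-1} * w^k + b^k) for one 2-D feature map."""
    kh, kw = w.shape
    H, W = x_prev.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x_prev[i:i + kh, j:j + kw] * w) + b
    return f(out)


def pool_layer(x_prev, beta, b, n=2, f=np.tanh):
    """x^k = f(beta * down(x^{k-1}) + b), with down = n x n mean pooling."""
    H, W = x_prev.shape
    cropped = x_prev[:H // n * n, :W // n * n]
    down = cropped.reshape(H // n, n, W // n, n).mean(axis=(1, 3))
    return f(beta * down + b)


x = np.random.rand(8, 8)
feat = conv_layer(x, w=np.random.randn(3, 3), b=0.1)   # 8x8 -> 6x6
pooled = pool_layer(feat, beta=1.0, b=0.0)              # 6x6 -> 3x3
print(feat.shape, pooled.shape)
```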
Although classifiers based on supervised learning are very mature, they need a large amount of labeled data to train a classification model with high accuracy and strong generalization. However, in an actual power grid system, infrared samples of faulty MOAs are usually scarce and the image background is complex. Therefore, in this paper, different convolutional neural networks (AlexNet, GoogLeNet, ResNet, RetinaNet) are used to extract different features of the MOA image, so that the image can be analyzed comprehensively from different aspects and more reliable detection results can be obtained.
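The sketch below illustrates this multi-backbone feature extraction with pretrained networks available in tf.keras.applications. ResNet50 and InceptionV3 stand in for the paper's ResNet and GoogLeNet; AlexNet and RetinaNet are not bundled with Keras and are omitted here, and the input sizes and preprocessing shown are illustrative rather than the paper's exact configuration.

```python
# Sketch: several pretrained CNNs used as fixed feature extractors for MOA crops.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import resnet50, inception_v3

backbones = {
    "resnet50": (resnet50.ResNet50(include_top=False, pooling="avg",
                                   weights="imagenet", input_shape=(224, 224, 3)),
                 resnet50.preprocess_input, (224, 224)),
    "inception_v3": (inception_v3.InceptionV3(include_top=False, pooling="avg",
                                              weights="imagenet", input_shape=(299, 299, 3)),
                     inception_v3.preprocess_input, (299, 299)),
}


def extract_features(images):
    """Return one feature matrix per backbone for a batch of images in [0, 255]."""
    feats = {}
    for name, (model, preprocess, size) in backbones.items():
        x = tf.image.resize(images, size)          # each backbone has its own input size
        feats[name] = model.predict(preprocess(x), verbose=0)
    return feats


images = np.random.uniform(0, 255, (4, 128, 128, 3)).astype("float32")  # stand-in MOA crops
for name, f in extract_features(images).items():
    print(name, f.shape)   # (4, 2048) for both backbones in this configuration
```

Each feature matrix would then be used to train its own RVM weak learner, as described in the fusion step below.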
MOA Fault Detection Based on Integrated Learning
In order to obtain more accurate judgments and improve the generalization ability of the defect recognition model, this paper proposes a multi-model combination strategy based on a weighted voting rule and the F1 score. The F1 score, also known as the balanced F-score, is the harmonic mean of the model precision (P) and recall (R); its maximum is 1 and its minimum is 0. It is often used to measure the accuracy of binary classification models.
The F1 score of each weak learning machine $M_i$ is calculated as
$$F1_i = \frac{2PR}{P + R},$$
where P is the precision and R is the recall. Then, according to the performance of each RVM classifier on the verification set, a voting weight $w_i$ is calculated, giving higher weight to the classifiers with higher reliability so as to improve the reliability of the ensemble classifier. Finally, from the prediction result $h_i(x)$ of each weak learning machine and its voting weight $w_i$, the final model prediction H(x) is obtained with the weighted voting rule
$$H(x) = \arg\max_{c} \sum_{i=1}^{n} w_i \, \mathbb{I}\left(h_i(x) = c\right),$$
where n is the number of weak learning machines, that is, the number of deep feature types. Whether the MOA is abnormal can then be determined from the prediction H(x) of the integrated classifier.
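The fusion step can be sketched as follows. Since the exact weight formula is not reproduced in this text, the sketch assumes that each weak learner's vote is weighted by its normalized F1 score on the verification set; the 0/1 labels, the toy predictions and the use of scikit-learn are illustrative only.

```python
# Sketch of F1-weighted voting fusion over several weak learners
# (weights = normalized validation F1 scores is an assumption, not the paper's formula).
import numpy as np
from sklearn.metrics import f1_score


def fuse_predictions(val_true, val_preds, test_preds):
    """val_preds / test_preds: one label array per weak learner (0 = normal, 1 = fault)."""
    f1 = np.array([f1_score(val_true, p) for p in val_preds])
    weights = f1 / f1.sum()                      # assumed normalization
    votes = np.zeros((len(test_preds[0]), 2))    # weighted vote per class per sample
    for w, pred in zip(weights, test_preds):
        for i, c in enumerate(pred):
            votes[i, c] += w
    return votes.argmax(axis=1)


# Toy example with three weak learners.
val_true = np.array([0, 1, 1, 0, 1])
val_preds = [np.array([0, 1, 0, 0, 1]),
             np.array([0, 1, 1, 0, 0]),
             np.array([1, 1, 1, 0, 1])]
test_preds = [np.array([1, 0]), np.array([1, 1]), np.array([0, 1])]
print(fuse_predictions(val_true, val_preds, test_preds))
```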
Experimental Results and Analysis
To evaluate the performance of the proposed MOA defect detection system, we tested it on an MOA image data set from a substation in Jiangxi Province, China. The data acquisition equipment is an advanced pistol-grip thermal imager with 640 × 480 infrared resolution, model FLIR E98 (Shenzhen Keruijie Technology Co., Ltd., Shenzhen, China). The experiment environment is as follows: Windows 10, TensorFlow 1.3, Anaconda (Python 3.6), Keras 2.1.5, a Core i9-9900K CPU and a GTX 2080 GPU with 8 GB of memory.
MOA Positioning Experiment
In the experiment, the model parameters are initialized with the weights of the classical network, and the infrared MOA data are divided into training, verification and test sets in a ratio of 6:1:3. Empirical values are selected as the initial hyperparameters: the learning rate is set to 0.0015, the batch_size to 16 and the number of epochs to 500. As shown in Figure 10, the proposed MOA identification and location algorithm can effectively identify and locate different types of MOAs (with rated voltages of 110 kV, 220 kV and 500 kV, respectively). In order to further verify the advantages of the proposed method in MOA identification and location, the proposed algorithm and commonly used deep learning algorithms are tested and compared on the same data set. The results are shown in Table 3, where the algorithms are compared in terms of mAP, recognition speed and model training time. It can be seen from the table that one-stage algorithms are superior to two-stage algorithms in recognition speed, model size and training time, whereas the recognition accuracy of two-stage algorithms is significantly higher than that of one-stage algorithms. Although the proposed algorithm is slightly inferior to the two-stage algorithms in accuracy and slightly slower than You Only Look Once (YOLO) in speed, its overall performance makes it the most suitable for deployment on embedded edge devices, and it can achieve fast, high-precision identification and positioning of the MOA.
Data Expansion Experiment
In order to maintain the diversity of the samples and enhance the generalization ability of the training model, a total of 2435 infrared images of MOAs under different natural conditions in several areas were obtained from a power grid company, including 1981 normal samples and 454 fault samples. First, the fault samples were expanded to 1696 using traditional augmentation methods, and then the original DCGAN model was trained. In the experiment, the Adam optimizer is used, the learning rate is set to 0.0002, the momentum value to 0.5 and the batch_size to 64.
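The quoted optimizer settings translate directly into a standard DCGAN training step, sketched below with tf.keras; the generator and discriminator models, the latent dimension of 100 and the sigmoid output of the discriminator are assumptions for the example, not details taken from the paper.

```python
# DCGAN optimizer settings quoted above; generator/discriminator models are assumed
# to exist (see the discriminator sketch earlier) and to output sigmoid probabilities.
import tensorflow as tf

LEARNING_RATE = 2e-4
MOMENTUM_BETA1 = 0.5          # the "momentum value" in the text maps to Adam's beta_1
BATCH_SIZE = 64

gen_optimizer = tf.keras.optimizers.Adam(LEARNING_RATE, beta_1=MOMENTUM_BETA1)
disc_optimizer = tf.keras.optimizers.Adam(LEARNING_RATE, beta_1=MOMENTUM_BETA1)
bce = tf.keras.losses.BinaryCrossentropy()


@tf.function
def train_step(real_images, generator, discriminator, latent_dim=100):
    noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake_images, training=True)
        # Discriminator: real -> 1, generated -> 0; generator: fool the discriminator.
        d_loss = bce(tf.ones_like(real_out), real_out) + bce(tf.zeros_like(fake_out), fake_out)
        g_loss = bce(tf.ones_like(fake_out), fake_out)
    disc_optimizer.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    gen_optimizer.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    return d_loss, g_loss
```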
It can be seen from Figure 11a that in the original DCGAN model without transfer learning, some MOA contour information appears only at 100 epochs, training is relatively slow, and a complete and usable normal MOA image cannot be generated even after 500 epochs of training. As shown in Figure 11b, the DCGAN1 model, which migrates the weights of the classical algorithm, can generate the basic features of the MOA image at 100 epochs, such as orientation features and the target contour, and can generate a more complete image at 500 epochs. On the basis of the trained DCGAN1 model, we continue to use the idea of transfer learning to train the model. Because the weights of DCGAN1 are reused, it can be clearly seen in Figure 11c that DCGAN2 is already able to capture the basic characteristics of the MOA at 100 epochs. In order to judge the performance of the improved model more accurately, the discriminator and generator loss curves of the original DCGAN model and the improved DCGAN2 model are plotted in Figure 12, where the x-axis shows the training step and the y-axis the loss value of the discriminator or generator. In the initial stage of training, the generator has been trained only a few times, the extracted MOA features are not comprehensive, the generated images differ considerably from real MOAs and the discriminator can easily tell real from generated images; therefore, the generator loss is much larger than the discriminator loss. As training progresses, the MOA features captured by the generator become more and more sufficient, and the generated images come closer and closer to the real sample data. Comparing the loss curves of the two models shows that the loss of the improved DCGAN2 model eventually converges, which indicates that the generated images are of better quality.
MOA Thermal Fault Detection Experiment
In addition to the average accuracy mentioned in the previous section, the F1 score is added to evaluate the performance of the classifier. The fault identification results for different types of MOA are shown in Figure 13. It can be seen from Table 4 that the highest recognition rate of a model trained with a single neural network is only 76%, while the recognition accuracy of the multi-model fusion classifier proposed in this paper is improved by 5%, to 81%, which can effectively identify infrared thermal faults of the MOA.
In addition, the proposed method identifies and locates the MOA before the condition detection, which reduces the search space of the fault detection model. As shown in Figure 14, the fault detection accuracy after positioning is significantly higher than that of global detection; the average accuracy of the final detection therefore increases from 81% to 83%.
Conclusions
In this paper, an infrared thermal fault detection method for small samples is proposed. (1) To solve the problem of sample imbalance, transfer learning and a deep convolutional generative adversarial network are used to expand the data of faulty MOAs. Experiments show that the expanded training set can improve the accuracy of the fault detection model by 2%. (2) To minimize the interference of the background with defect detection, the task is divided into two steps: target recognition and state detection. First, the improved SSD algorithm is used to identify and locate the MOA; the experimental results show that the proposed algorithm can accurately locate different types of MOA in different scenarios. (3) A variety of convolutional neural networks are used to extract a variety of MOA features, multiple weak classifiers are trained, and a combination strategy is then used to integrate the prediction results, further improving the prediction accuracy and generalization ability of the model. (4) The proposed method is based on simulation data and real cases, and many problems remain to be studied further: how to combine the fault characteristics of the equipment to make the model interpretable and improve identification accuracy, and how to improve the real-time performance of the detection system through the cooperation of edge computing and cloud computing to meet engineering application requirements, are the next research directions.
| 9,539.6 | 2021-07-21T00:00:00.000 | [
"Computer Science"
] |
The rebirth of Kairos Theology and its implications for Public Theology and Citizenship in South Africa
This article explores the relationship between kairos theology and public theology, placing a particular emphasis on kairos aspects such as contextuality, criticality and change. The article draws from and reflects on the dialogue between South African and Palestinian kairos theologies, the more recent Kairos South Africa movement and the shackdwellers' movement Abahlali baseMjondolo in order to describe a public theology marked by responsibility and contextuality.
Kairos Theology?
Kairos theology is rightly noted as a brand of liberation theology in and beyond South Africa. Vuyani Vellem puts it well: "Black Theology in South Africa, Kairos Theology, Black Theology in America, Latin American Liberation Theology, Minjung, Dalit, Feminist Theology, African Theology, Contextual Theology and Womanist Theology - all use the category of liberation to define their task, purpose and methodology. All of them, originating from different contexts, symbolize a global, 'worldly' expression of the liberation motif for another possible world." 2 He employs the notion of "liberative expectancy" to refer to the importance of the symbol of the Kairos in public life. 3
In South Africa, kairos theology tends to be relegated to the apartheid era. It is a prophetic theology for a time of struggle. John de Gruchy describes South African theology as comprising 'theologies of the struggle' and 'post-apartheid theologies'. 4 He places the kairos paradigm under 'prophetic theology' as one of the key theologies of the struggle for liberation. 5 Its liberational orientation, however, cannot be limited to the South African struggle against apartheid. As we celebrate the South African Kairos Document of 1985/1986, 6 it is clear what a tremendous impact this theological tradition has exerted on various other settings and situations. 7 It facilitated prophetic praxis in relation to different spheres of public life - politics, economics, civil society, and public opinion formation. 8 Most recently, it is a theological tradition which resulted in the Palestinian Kairos Document of 2009. 9 Against the background of the kairos documents, coupled with many reflections on the nature and meaning of kairos theology, for me it is a theological tradition (rather than a theology itself) in which the dimensions of contextuality, criticality and change are specially discernible. 10
2. Kairos Theology - and the public good?
When I reflect on public theology, it is a way of drawing attention to the inherent public nature of Christian faith, the concern for the public dimension of Christian theology, the potential relevancy of theology beyond the ecclesial domain, and the intentionally public role of churches - indeed all notable components of our multifaceted perspectives and practices of public theology. In my understanding, the public theology discourse engages with the question of public responsibility amidst a history of overwhelming contradictions, ambiguities and complexities. 11 The notion of 'public' in public theology should arguably not be reduced to simply meaning the opposite of 'private'. Nor ought it merely to become synonymous with 'social'. There is a fundamental sense of public theology lost when it receives these kinds of reductionisms, as being nothing more than 'relational theology' or 'social theology'. 12 Another reductionist tendency has to do with 'public' being used interchangeably with 'contextual'. Public theology is indeed concerned with relationality, with sociality, and with contextuality - but it need not be reduced to any of these aspects, as important as they are concerning the nature and role of public theology. Perhaps a fourth reductionist issue relates to public theology being viewed as 'particularistic' in the same way liberation, political, black, feminist, womanist, African, minjung, dalit, and other so-called particularistic theologies are stereotypically regarded with particularity in mind. In this regard, some can mistakenly reduce public theology to being a particular North American discourse, while others can reduce it to being a particular theology in conflict with the relevant concerns of Latin American liberation theology or African womanist theology and the like.
Against the background of such forms of conceptual confinement, the notion of 'public' is important in my understanding and practice of public theology along two fronts. In the first place, I concur with those who underline the philosophical content behind what we today refer to as 'the public sphere', influenced in no small measure by the insights of Jürgen Habermas, who discusses the public sphere as a distinctive, modern dimension of societal life characterised by communicative action through rational, participatory, transformational discourse. 13 Appreciating this philosophical texture concerning 'the public sphere' helps direct our attention in public theology not in general to relationality, sociality, contextuality or particularity, but more specifically to this 'public sphere'.
11 The following discussion draws from an earlier piece: Clint Le Bruyns, "Public Theology? On responsibility for the public good" (May 25th, 2011). Available online [internet] at: http://www.ecclesio.com/2011/05/public-theology-on-responsibility-for-the-public-good-%E2%80%93-by-clint-lebruyns [accessed 25/05/2011].
12 For example, cf. an unpublished paper by Steve de Gruchy, "Introducing the Methodology of Social Theology" (June 2006). My criticism relates to at least two points. Firstly, he describes as 'social theology' what many of us functioning within the public theological discourse would understand as 'public theology' and, secondly, he describes as 'public theology' what many of us functioning within the public theological discourse would regard as a narrow - even fundamentalistic - conception of public theology. Regarding the latter point, he simply describes public theology as follows: "The study of an issue of public concern with a view to speaking to the 'public square' rather than the church, and that requires engagement with contemporary discourse in a secular and religiously pluralistic world. (e.g. Responding to 'gay marriages', teaching evolution, the death penalty)."
In the second place, then, this notion of 'public' helps us reflect more carefully on the concrete engagement of theology with the public sphere. Theology is in contact and conversation with concrete realms of public life - political, economic, civil society and public opinion. 14 There are thus inevitable implications for understanding the agenda and mode of public theology. It is a way of understanding and practising theology which must contribute in constructive, dialogical, enriching and transforming ways to 'the public good'. 15 For example, without dissolving the theoretical integrity of theological content, public theology demands of us a developing expertise in other disciplines of knowledge matched by a commitment to participate in conversations and exercises beyond the borders of a congregation or theological seminary.
Those of us schooled in such traditions as black theology or liberation theology affirm the agenda of 'the common good' as encapsulated in the role of theology in society. Others read this phrase narrowly as a Marxist ideological taint. Albert Nolan talks about "the coming of God's kingdom, God's reign on earth" as "the object of Christian hope" and, without losing perspective of the language of transcendence, talks about it more plainly than we typically do as theologians. 16 "Our hope," he assumes, "is that God's will be done on earth" - and then concludes: "What God wills is always the common good. What God wants is whatever is best for all of us together, whatever is best for the whole of creation". 17
The kairos theological tradition offers the South African and broader, global community a resourceful and challenging case study of the ambiguity that tends to characterise our theological perspectives and practices in relation to the common good. In reality we do not find it easy to appreciate what is best for everyone, concedes Nolan, since what we may often hope for are "too often selfish and self-serving, egocentric and narrow-minded: hopes for a better future for myself, my family, my own country at the expense of other people; hopes for economic growth and a higher standard of living for some, regardless of others". 18 "But if our attempts are to do, as far as possible, with whatever is for the common good, then we are doing God's will, and to that extent God's will is being done on earth". 19
While public theology, drawing deeply from its different theological wells, attends to the agenda of the common good, I would suggest a more specific emphasis around this agenda, that of 'the public good'. This is to help us maintain the connection between theology and the public sphere with its political, economic, civil society and public opinion domains. 20 The challenge of public theology is to assist us in humble and ambitious ways for taking responsibility to contribute meaningfully and concretely to the public good. Perusing the theological literature during South Africa's post-apartheid era, kairos theology as a brand of liberation theology does not appear to feature significantly in contributing to a liberation theology for our democratic society. Perhaps I could phrase it as such: kairos theology was important for the common good in the quest for liberation in the struggle against apartheid, but is arguably found greatly wanting in its resourcefulness for the public good in the quest for reconstruction and transformation in contemporary struggles.
The rebirth of kairos theology?
It is now a platitude to point out the fact that something happened to our theological paradigms during the post-apartheid era. Questions have been raised about our theological context, content, methodology and application. Some coin provocative phrases to draw attention to some of these shifts. Steve de Gruchy suggested we moved from 'church struggle to church struggles'. 21 Charles Villa-Vicencio argued we moved from theologies of liberation to those of reconstruction. 22 Etienne de Villiers talked about the move towards an ethics of responsibility. 23 McGlory Speckman emphasised the move towards development. 24 Isabel Phiri talked about a new concern for theologies of life. 25
The post-apartheid theological discourse in South Africa alerts us to the possibility (at least), or the reality (more frankly speaking), that our theologies in the new South Africa may not necessarily be as appropriate and responsive as we would like for the kind of public impact and critical participation that the times demand. I think this is why in various quarters in more recent years we are revisiting the South African Kairos Document and our kairos theological tradition. We do not appear to be fully confident that we have a public theology evidencing these much-needed dimensions of contextuality, criticality and change.
In December 2009 the Palestinian Kairos Document was released. It stood in line within this great kairos theological tradition, itself inspired by the South African Kairos Document. 26 Palestinian Christians cried out to the ecumenical church and international community about their ongoing suffering under Israeli occupation and apartheid coupled with the deafening silence of the international community of believers and nations. 27 They drew special attention to the destructive public impact of the ways in which biblical and theological resources were being employed. 28
A month later a group of South African church leaders and theologians began the process of formulating a document that could serve as a South African response to the Palestinian struggle with its issues of occupation, apartheid and Zionism. As part of this group I recall how much this process of reading the Palestinian Kairos Document and attempting a South African response forced us to ask questions about our own life-situation and the public role of theology and church. It pushed us to revisit our own Kairos Document and the quality of public theology for liberation, justice and dignity it envisaged and called for. I remember how much the Palestinian Kairos Document seemed to be helping us reconnect with our South African context and to review the state of prophetic public theology in the church, academy and society.
The process underlined these aspects of contextuality, criticality and change. In the end we submitted a response to Kairos Palestine in April 2010, at the time of Easter, with the support of more than 60 key church leaders and theologians. 29
A month later, in May 2010, the South African Council of Churches held a consultation in Kempton Park on the Palestinian Kairos Document. A number of stalwarts of the kairos theological tradition were present, including Frank Chikane and Allan Boesak. I, along with several others, presented critical input on the situation in Palestine-Israel, the Palestinian Kairos Document, and the need for solidarity with Palestinians. In retrospect, our input did not generate much discussion about Palestine-Israel itself - as it did about South Africa today. What the ecumenical leaders and theologians really used the occasion to talk about was the kairos theological tradition and the need to revisit it for its relevance in regard to our public challenges and public responsibility in South Africa.
A few months later, in October 2010, many of us gathered in Pietermaritzburg for the 25th anniversary of the South African Kairos Document. 30 Besides 'looking back' during the commemoration, what resurfaced throughout were the aspects of contextuality, criticality and change. People were fatigued, overwhelmed and despairing of the numerous public challenges they confronted as persons in communities. People were by no means politically correct in their tone, view and assessment of the political and other powers in public life; on the contrary, they were frustrated, angry and critical. They were extremely critical of the church and theology as public role-players for transformation in political, economic and cultural life. There was indeed a deep sense of nostalgia (cf. painful longing) for the quality of theological engagement as was evident in the kairos theological tradition. In fact, among us were again some of the stalwarts of the kairos theological tradition, such as Itumuleng Mosala, who displayed much scepticism and negativity regarding the public contribution of theology in contemporary South Africa. Others, such as Albert Nolan and Allan Boesak, presented scathing criticisms of the political and economic powers from a theological perspective. Discussion by these and other thinkers assisted us around questions of social analysis and the weaknesses of the churches and their theologies. In general, though, our deliberations offered less direction around the future of theology and church as constructive role-players in transforming public life for the common good.
There have been a number of related initiatives and events that I've participated in since then. A common thread has been celebrating and revisiting our kairos theological tradition. Progressively the question of an organised movement with a 'kairos consciousness' 31 surfaced. Eventually 'Kairos Southern Africa' was established in early 2011.
These individual and collective initiatives have convinced me about the emerging rebirth not so much of a kairos theology, but of a kairos theological tradition with its kairos consciousness marked by contextuality, criticality and change. I would suggest that those of us who are active role-players and thought leaders within various theological paradigms - black theology, liberation theology, womanist theology, feminist theology, confessing theology, African theology, public theology, etc. - must take cognisance of this apparent 'rebirth' of kairos consciousness and seriously consider what implications and responsibilities it presents to us in present-day South Africa as public theologians in one way or another.
Kairos Theology -and the future of public responsibility?
It has become increasingly evident that a kairos consciousness needs to be regained with its aspects of contextuality, criticality and change. As an example of this in action, I want to direct our attention to the recent document of December 2011 submitted by Kairos Southern Africa to the African National Congress at the occasion of the launch of its centenary celebrations. 32 It takes the form of a letter, structured by way of 17 'words' from the churches to the ruling party. At the time of its submission, the letter was endorsed by about 100 church leaders, theologians and concerned people, including those beyond the Christian or religious realm, who all identified with the message of Kairos Southern Africa. As one who was consulted about its contents and as a signatory, it is remarkable to observe how seriously this letter was taken by the ANC and the strong support it has thus far received from the public, with a 1-million-signature campaign presently in motion. The 17 words of the letter could be reduced to 4 words: celebration, confession, collaboration, critique. The Kairos Southern Africa letter holds importance as an attempt to regain a kairos consciousness among ordinary people. There is the sense of an increasing disconnect between the people and the powers, and between the people and democratic life together; this is being addressed to some extent in this letter.
31 See Allan Boesak, "Kairos Consciousness" (March 25th, 2011)
The question of a participatory democracy or active citizenship remains a pressing matter for public theology in South Africa. 33The notion of 'participatory democracy' or 'active citizenship' is in political vogue in and beyond South Africa today.It has become a way of talking about the responsibility all people must assume for the integrity and advancement of their life together.Initiatives of Theological Projects such as Kairos Southern Africa 34 , the Beyers Naudè Centre for Public Theology at Stellenbosch University 35 , or the Theology and Development Programme at the University of KwaZulu-Natal 36 , are all notable cases in point of an attempt at a public or social theology embedded in a kairos consciousness directed towards a responsible citizenship ethic.
An ethic of responsible citizenship?
The public discourse on citizenship revolves around important questions of identity and human dignity, agency and power, and public responsibility. 37 This notwithstanding, at least two preliminary points should be made. The first issue is that we acknowledge that this discourse is neither neutral nor uncontested. We should not be naïve and ignorant about its perceived ideological nature, how it might be employed for specific political and economic agendas, and the possible popular resistance to it. Moreover, an ideal of citizenship might not even be desirable; political and economic powers can deliberately discourage it.
A second issue has to do with the tendency of a conceptual narrowness in how it is understood, an issue current literature on citizenship bemoans. The notion of citizenship underlines the call for a new kind of politics, a different quality of life together. It is understandable why the idea of a more 'critical' citizenship is in vogue. Literature on citizenship demands a more transcendent view of this concept, a perspective that goes beyond such paradigms as 'indigenous politics', 'party politics', 'state politics', and 'imperialist politics'. These aspects undergird the need for an ethic of responsible citizenship. 38
In order to offer some orientation and content to an ethic of responsible citizenship, I want to propose the following framework: First, kairos consciousness as a vision of change; second, kairos consciousness as a virtue of criticality; and third, kairos consciousness as a practice of contextuality.
5.1 Public theology for responsible citizenship: Vision of change
A few years ago, on a beautiful Sunday afternoon, I took a stroll down the streets of the university section of central Stellenbosch. At that time I was working in Systematic Theology at Stellenbosch University's Faculty of Theology. As I walked past the Faculty of Education building, a team of workers were positioning a very large banner above the entrance of the building. It read: "Education… can change the world." I thought to myself, "How inspiring! And that's something a theologian would say." Moments later I reached the Faculty of Law building. There another team was hard at work with a similar banner, except that this time it read: "Law… can change the world." What was going on in Stellenbosch? Banners of this type were being placed above the entrance to every Faculty building. Indeed, on reaching my own Faculty, there was the banner: "Theology… can change the world." I could only imagine how surprised and intrigued these university students would be as they rushed (or staggered!) to Monday morning classes the next day, to be confronted with this critical connection between their particular science and the life-giving agenda of social transformation and public responsibility.
José Galizia Tundisi in an essay on "The Advocacy Responsibility of the Scientist" offers several statements about the interconnection between science (in a broad-based sense) and social transformation. 39
First, he emphasises the fundamental importance of knowledge in the world of science, "that scientists from all walks of life have the task of promoting and increasing knowledge through their professional activities". 40 Without deflecting from an understanding of theology that is 'lived' and 'experienced', he indirectly reminds us of a theology that is also 'thought' and 'known'. It is thus quite appropriate - even essential - that we still behold theology as a science, concerned with different rationalities, conceptualisations and discourses. Second, he contends that while the world of science readily contributed to social progress in a variety of realms and ways during the past century, scientists in the twenty-first century "face a different problem" and that "the world needs science and scientists in a very different way". 41 Tundisi is thinking more specifically about the ecological challenges confronting us today as that 'different problem'. His implied point is double-edged: On the one hand, the world of science continually makes us aware of new challenges calling forth public responsibility; on the other hand, the world of science continually needs to be open to modification and reform in order to better advance social wellbeing. A credible world-affirming public theology would, therefore, be one that continues the liberational paradigm of taking social analysis and kairos consciousness seriously coupled with a self-critical, re-forming nature. We need to think and do theology in ways that facilitate conscientisation about existential realities and in ways always open to critique and reformation.
Third, Tundisi underlines the transformative responsibility within the world of science. He asks: "Given this reality, this reality of which science has made us aware, how can we scientists, we gatherers and disseminators of knowledge, help to change this course of events? What is our role in remediation?" 42 His line of questioning disturbs the stereotypical myth of what I'd refer to as 'a dusty consciousness' - a knowledge relegated to dusty bookshelves. On the contrary, his expectation of science is that it must be responsive to that about which it has made us aware. The quality of public theology required today cannot simply 'pass on' in a 'repetitive' fashion dogmas of the past; rather, it must help us engage with and change social realities for a better life for all being.
Fourth, it thus follows that "striving to understand the world and then disseminating our findings to the public is more important than ever". 43 He hereby picks up on the necessity for public reception. It is not sufficient that the world of science attests to these aforementioned features; it demands reception, ownership and participation throughout the public arena where politics, economics, civil society and public opinion interconnect. Science thus moves from vision to action. The relevance for public theology is that, indeed, our theological capital for the constructive transformation of life must not remain stuck within the ecclesial quarters or among the professional elite of the churches, but must most certainly be 'received' by the broader community of people. Furthermore, our theological contributions must transcend mere visionary activity; they must extend to broader aspects of our life together, such as the domains of public policy, international law, political practice, economic ideologies, and so on. While an extremely complex and admittedly controversial arena of public theological engagement, theology at its best envisions, embodies and advances the public good.
40 Tundisi, "The Advocacy Responsibility of the Scientist", 448.
41 Tundisi, "The Advocacy Responsibility of the Scientist", 448.
42 Tundisi, "The Advocacy Responsibility of the Scientist", 449.
43 Tundisi, "The Advocacy Responsibility of the Scientist", 449.
Fifth, Tundisi argues for science as "a method for social transformation" which "can be an integral part of the noble work to improve the quality of life". 44 This underlines the actual impact of science on everyday life. To what degree does it impact life in actual, concrete, constructive ways? Similarly, the connection between theology and social transformation must be actually experienced, as opposed to merely sought after or anticipated. To the extent that churches and theologians influence public policy or political law for the betterment of life for all, therein lies its eventual nobility as a public science.
Sixth, Tundisi then concludes on a hopeful note: As an ecological scientist, he says, "I am convinced that our moral responsibility and our engagement with society can help to save our planet". 45 His essay, albeit modestly, appeals for a consideration of a science of hope, or of scientists of hope. Theologies of hope feature pre-eminently within the Christian theological tradition, and are receiving renewed attention today in the light of a vast array of despairing challenges. 46 Once again the litmus test rests with ordinary people confronting social, existential realities, who will respond with either despair or hope based on what kind of public reception and impact our theology brings to our life together.
Public theology for responsible citizenship: Virtue of criticality
The idea of public theology today is a contested notion. There is the obvious story of ambiguous public theological conception and engagement during apartheid South Africa; the South African Kairos Document of 1985 confronted us about this ambivalence and its very life-and-death implications. State Theology proponents exercised destructive public theology, a legacy part and parcel of our continuing social baggage. Church Theology representatives reflected an impotent, indifferent, self-centred public theology, which preserved the status quo. Prophetic theology advocates were committed to a prophetic public theology that could discern, confront, oppose, struggle, revolutionise, perhaps even transform. 47 Against the background of our historical narrative, it behoves us to indeed be discerning and cautious about this idea of so-called public theology.
The narrative of Abahlali baseMjondolo is an intriguing, insightful and instructive narrative exploring the challenges of responsible citizenship, along with the role of the churches in their various forms and manifestations. 48 In a cursory study, I would highlight the following findings for further reflection and deliberation:
1. The question has come up as to how Abahlali baseMjondolo (AbM) should or could relate to the churches. So far, AbM has sometimes approached churches, and more and more, AbM is being approached by church people and church-based organisations. But the churches are complex, and they might have their own, different agendas and possibilities in relation to AbM's struggle.
2. The discussions responded to the following two questions. First, "What are our experiences of the church in our struggle so far?", and second, "What do we need from churches?"
3. So far, in the struggles of the AbM, there has only been a loose connection with churches and it has not been well-defined. It has really only arisen from time-to-time in response to incidences of tragedy.
4. But beyond these tragedies and crises, there has been no time really to celebrate liturgy in our place together with church people, and nor have we had a constructive workshop to talk about these things properly before now. Because of this loose connection, the church doesn't know about our life in the shacks, it has no experience of it. Because it has not been present, the church does not know about the difficulties that the people go through and it does not know about the crises we face … and so, the church does not feel our pain.
5. Because of this loose connection too, the church is not here with us to pass on important moral principles that are about how it is to be human beings - the church is not here with us.
6. This distance is not healthy. The tragedies that happen here in the shacks, and the knocking down of people's houses, can put people onto the streets. Surely in these cases, the churches could even provide temporary shelter?
7. But more than that, church ministers are people that others are prepared to listen to and so, if they were there with us, then it is possible that their presence could even stop demolitions from going ahead. And if we from AbM and the churches grew closer together, it is possible that we might start learning better about what they believe - for now, we don't really know.
8. But even this weak connection that we have, and what we have said already, shows that we need each other and that we need to make our voices stronger together because it is important to build a common struggle. For a long time in our struggles, many people looked down on us because we are from the shacks; they think of us almost as if we are criminals who prey on others. But our recent connection with the Bishop makes us think that some people are thinking hard and seriously about our experiences, and they do not just assume we are a bunch of worthless hooligans. That our struggles are taken seriously by respected people is important.
9. In addition, these respected people in the churches have connections overseas, and maybe they could help with some of the immediate crises of poverty that affect people in the shacks.
10. Although we ask the question about 'what do we need from the churches?', we must start from the position that we must work together. We must acknowledge that we are together actually because, inside the church, we have women, children, people who are from the jondolos - so why do we disconnect the 'Sunday church' from the day-to-day life and struggle of AbM? This '2-in-1' division must be discussed and the two aspects must be made to complement each other.
11. We acknowledge that the government is a very bad listener to the poor. But it listens to the churches. So maybe we can use that to add to the strength of our voice. Perhaps church leaders can use their status to persuade the government on our issues. How would it be if church leaders joined us in our marches - wouldn't that make the government listen more? The church leaders give their support to many important public awareness campaigns (for example regarding the protection of children's rights, or the fight against crime) and this is because sometimes people are prepared to listen when church people say things. Looking at the poster on the wall of our meeting room here about the churches supporting the call for an 'HIV/AIDS free generation', perhaps there is a challenge to the churches to launch a new awareness campaign for a 'shack-free generation'. We have seen in our experiences that, sometimes when people were losing their rights - for example to their land - that some priests and churches stepped in to stop it, or at least to provide help.
12. This discussion makes us think not only about the church out there.
13. We know that church is the closest place to the people because church is not a church without people. And so the people know it as the most recognised place - a place of safety, where there are no thieves and others who prey on us. So we have this feeling about the church that everyone has a 'willingness' to support and give dignity. We are nobody without the church. Somehow in this way, the church can be a bridge between AbM and the government because it will be seen that we are not animals. And then nothing is impossible.
14. In the history of South Africa, before 1994 and at the peak of mobilisation and unrest, we saw some religious figures playing a role. But discrimination, racism and apartheid are not over! Now apartheid is between those who are rich and those who are poor, and we see that this apartheid is getting worse. This should make the church uncomfortable and therefore, the need for their intervention is just as important now as it was then - and they cannot do it on their own, they must work with the movements of the poor.
15. There is a perception that religious people are trustworthy. As we come from the shacks, we are not trusted. Even our churches from the shacks are not trusted.
16. When we look at charity and relief work like feeding schemes, it is better when these come from churches than from political parties because when it comes from the political parties it is actually like a 'bait', something is expected from us in return. And also, if the churches were involved in this kind of work, then they would know about how our life is, which is important.
17. There are statements in the Bible that are important.
18. Churches are meant to be agents of justice. They understand that unity is important in some of their own work, and in different areas they are joining together to work better. This approach should also apply in connection with social movements and justice.
19. There seem to be many possibilities that can be developed between the struggles of AbM and the churches. But, we are also not naïve about the churches. We know that some parts of the church pray with the rich and powerful people, that some parts of the church continue to give their blessing to this government. But although the church has these problems, we are sure that God is on the side of the poor.
Public theological engagement is myopic, fantastical and impotent when it merely engages with people, institutions and structures of power, as opposed to also engaging with social movements and grassroots initiatives. This, I would argue, is a present weakness in public theological engagement in South Africa through such initiatives as Kairos Southern Africa and related public theological projects. In this way, a 'critical edge' is lost or veiled, to the detriment of the public witness of the church and theology in advancing an ethic of responsible citizenship.
5.3 Public theology for responsible citizenship: Practice of contextuality
We cannot take it for granted that theologians in post-apartheid democratic South Africa find the notion of public theology acceptable, or at the very least accept it uncritically. Whereas some scholars embrace this notion quite explicitly for its appropriateness and resourcefulness in how they conceive of the public nature, rationality and impact of theology (for example: E de Villiers, N Koopman, C Le Bruyns, D Smit, V Vellem), others engage with it more soberly and conditionally (for example: A Boesak, J Cochrane, J de Gruchy, S de Gruchy, R Tshaka), while still others ignore or respectfully dismiss it as not being a helpful or relevant way of making sense of theology and its role in our social life together (for example: T Maluleke). The critical questions these scholars raise are extremely important and thus indeed valid.
Broadly speaking I would venture to say that there tends to be two dominant categories of contestation.First, there is a disagreement or reservation on typological grounds. 49Here scholars criticise a view of public theology narrowly restricted to theologies of (re)construction following pre-liberation protest and struggle.A distinction is made between theologies of protest and theologies of reconstruction.It implies that complex issues of protest and struggle are overcome or not as relevant within a new democratic milieu.The new context has seemingly made oppositional theologies unnecessary and inappropriate.Construction rather than confrontation is the assumed mode of engagement.It is a public theological engagement which ultimately makes space for those of high power and influence, and denies meaningful participation by those limited or lacking in democratic public agency.The problem of this conception of public theology is its constricted model of publicness.
Second, there is a disagreement or reservation on ideological grounds.50 Here scholars criticise a view of public theology oblivious to local, contextual, post-colonial dynamics. It speaks too abstractly, generally and universalistically about public theological engagement, in so doing positing an overarching theological paradigm into which everything must fit. It ignores the regional and parochial nature of theology, thereby painting a "rather benign, metropolitan and even romantic" notion of public life. It dismisses local theologies such as black theology, African theology, women's theology, and liberation theology. It dismisses critical historical dimensions such as postcoloniality. The problem of this conception of public theology is its acontextual model of publicness.
In another essay, Maluleke underlines the fact that when we think of South Africa simply as 'a young democracy', we do so to our peril.51 It is a misnomer, he contends. He argues for a consciousness of post-colonial life as a life still grappling with life as a colony. The complex and conflicting dynamics of power, identity and alienation are ever-present. Maluleke appreciates the need for many in the public theology domain to emphasise theories of reconstruction, but asserts that there can be no meaningful constructive theory without a meaningful theory of resistance. I find his contention valid. However, for the purposes of public theological engagement towards social transformation, his argument is true in what it affirms but unhelpful in what it denies. The post-colonial, post-apartheid, democratic South Africa demands an ethics of resistance, but it must be conceived along with an ethics of reconstruction. It seems as if Maluleke reduces construction to co-option or uncritical cooperation with the powers. Do we not need a public theological engagement that can protest, resist and oppose at the same time as it seeks to contribute to the reshaping of our life together? Or are we called to simply remain 'watchdogs' of society, instead of critical, prophetic participants in public transformation as well?
A public theology expressive of such a sanitised and acontextual nature is a public theology not worthy of its name. However, one of my contentions with these scholars who contest the notion of public theology concerns the ironic fact that, possibly unknowingly, they make voices outside of South Africa - in the 'North' - the determinants as to what public theology is all about. Their point of departure revolves around public theology advocates outside of South Africa. Advocates of public theology within South Africa are then left to defend the charges. They criticise North American and European representatives of public theology, but draw general conclusions for us all about the meaning and appropriateness of public theology. Why do they base their assessments simply on these sources? Should they engage critically with sources within South Africa, would their assessment be the same? Would they necessarily find public theology advocates who view public theology as a replacement of liberation theology? Would they find us guilty of discarding local historical, post-colonial dynamics of power and transformation? Would they really discern a way of doing theology with no traces of a black liberation theology?
At the same time, it is on this point of contextuality that public theological engagement must be examined for its relation to ordinary people's movements, such as Abahlali. A kairos consciousness is rooted in a people's theology. The public theology to which I am committed is a public theological engagement that seeks to draw from the African soil and to dream of overcoming all that oppresses and dehumanises. All in all, a public theology embedded in a kairos consciousness of contextuality, criticality and change towards the nurturing of a responsible citizenship.
49 James R Cochrane, "Against the Grain: Responsible Public Theology in a Global Era" in International Journal of Public Theology 5 (2011), 44-62.
50 Tinyiko Sam Maluleke, "Reflections and Resources - The Elusive Public of Public Theology: A Response to William Storrar" in International Journal of Public Theology 5 (2011), 79-89.
51 Tinyiko Sam Maluleke, "May the Black God Stand Please!: Biko's Challenge to Religion" in Andile Mngxitama, Amanda Alexander & Nigel C Gibson (eds), Biko Lives! Contesting the Legacies of Steve Biko (New York: Palgrave Macmillan, 2008), 115-126.

A concluding note

I conclude with a poem by Zolani Mkiva, an imbongi yesizwe, who hails from Idutywa in the Eastern Cape. His poem "Son of the Soil" (1974) leaves us with the challenge of rootedness in Africa and the option of transcendence.

I do not have perfumed lips
But I speak the truth
I do not have cat eyes
But I can see the true colours of the universe
I do not have donkey ears
But I can hear what makes sense and what is a nuisance
I do not have a dog nose
But I can smell and distinguish between carbon monoxide & oxygen
I do not have a big heart
But I do have passion for love and I love people
I do not have soft hands
But I can deliver my people from shame
I am the son of the soil
Like daughters of the land
I am the filament of freedom
I am the pistil of peace
I am the calyx of consciousness
I am the corolla of the people's cause
I am the pollen of prosperity
I am the anther of amicable solutions
I am the stem of our society
The son of the soil
21 See Steve de Gruchy, "From church struggle to church struggles" in John de Gruchy with Steve de Gruchy
See, for example: Etienne de Villiers, "A Christian Ethics of Responsibility: Does it provide an adequate theoretical framework for dealing with issues of public morality?" in Scriptura 82 (2003), 23-38; D E De Villiers, "Human rights and moral responsibility: Their relationship in the present South African society" in Ned-Geref Teologiese Tydskrif 141: 3&4 (2000), 212-224.
Evolution of Regeneration in Animals: A Tangled Story
The evolution of regenerative capacity in multicellular animals represents one of the most complex and intriguing problems in biology. How could such a seemingly advantageous trait as self-repair become consistently attenuated in the course of evolution? This review article examines concepts of the origin and nature of regeneration, its connection with the processes of embryonic development and asexual reproduction, and its relationship to the mechanisms of tissue homeostasis. The article presents a variety of classical and modern hypotheses explaining the different trends in the evolution of regenerative capacity, which is not always beneficial for the individual or, notably, for the species. Mechanistically, these trends are driven by the evolution of signaling pathways and the progressive restriction of differentiation plasticity with concomitant advances in adaptive immunity. Examples of phylogenetically enhanced regenerative capacity are considered as well, with appropriate evolutionary reasoning for the enhancement and discussion of its molecular mechanisms.
INTRODUCTION
Animal regeneration is a subject of continuous scientific interest. The first experimental studies on regeneration were carried out in the 18th century (Reaumur, 1712;Tremblay, 1744). Despite the remarkable progress in the field (Bely and Nyberg, 2010;Zattara et al., 2019), we have to face the fact that regenerative capacity varies colossally among the animal taxa. Despite the enormous amount of experimental data on regeneration, the mechanisms of its evolution remain largely uncertain.
The first attempts to understand the laws that drive the evolution of regenerative capacity in animals date to the 19th century. Since then, the so-called first rule of regeneration ("the regenerative capacity of animals decreases with an increase in anatomical complexity") was re-formulated by many authors independently (Vorontsova and Liosner, 1960). The first counterexamples of phylogenetically enhanced regenerative capacity in animals date back to the 19th century as well.
August Weismann (1834-1914) was the first to propose comprehensive evolutionary reasoning for the diverse regeneration potential in animals. He postulated that regenerative capacity is an adaptive trait that is subject to phylogenetic alterations and therefore may vary considerably among the taxa. According to Weismann, the regenerative capacity of a particular organ depends on three factors: anatomical and physiological complexity, the frequency of damage to the organ, and its significance for survival (Weismann, 1893, 1899). In the 20th century, similar views were expressed by Arthur Edwin Needham, who also emphasized the relevance of environmental conditions (for instance, he believed that aquatic environments are highly favorable for regeneration) (Needham, 1952). Needham's remarks on the adaptive value of high regenerative capacity, particularly on its ambiguous evolutionary feasibility and controversial impact on survival, represent an important addition to Weismann's concept. According to Needham, the routes of adaptation to the damaging factors are multiple. Even under conditions of frequent damage to an organ, its regeneration would not necessarily be the unique or least expensive adaptive mechanism; the compensations for the loss may include the enhanced breeding capacity, as well as the effective avoidance of the damage through enhanced mobility (Needham, 1952).
Despite the long history of the subject, the evolution of regenerative capacity in animals is far from being fully understood (Bely and Nyberg, 2010). In a broad sense, the problematics of contemporary experimental studies and theoretical investigations in the field were set out by Weismann (1893, 1899) and Needham (1952). It includes questions such as whether regeneration is a primitive or an adaptive trait, what role damage frequency plays in the evolution of regenerative capacity, what role the environment plays, what drives the dynamic evolutionary alterations in regenerative capacity, and whether it is appropriate to consider regeneration a direct correlate of asexual reproduction. The answers to these and other old questions in their contemporary perspective are the subject of this review.
CONTRIBUTION OF RUSSIAN SCIENTISTS TO THE THEORY OF REGENERATION
The first comprehensive Russian studies in the field of regeneration date back to the early 20th century. We should mention the research by K. N. Davydov, performed on acorn worms Ptychodera minuta and Ptychodera clavigera. Davydov was one of the first to express the idea of the similarity between regeneration and embryonic development; his conclusions were based on the comparison of the process of anterior regeneration in P. minuta and P. clavigera with embryonic development (Davydov, 1903).
By the 1930s, several large scientific centers for the study of regeneration were formed in Russia. One of those was headed by academician A. A. Zavarzin. Scientific activities of his team had a pronounced evolutionary dimension; their principal findings include the archetypal similarity of skeletal muscle regeneration (with the involvement of myoblasts) in representatives of different taxa (Zavarzin, 1938).
Another famous team focused on studying regeneration in invertebrates (predominantly Porifera) was headed by B. P. Tokin (Tokin, 1969; Korotkova, 1988). B. P. Tokin reckoned that the term «regeneration» was historically coined as a generic notion encompassing multiple different phenomena. He believed that restoration of lost parts (extremities or organs) proceeds by a different scenario and obeys laws other than those of the so-called «somatic embryogenesis», the formation of a whole organism from a limited number of preserved cells or small tissue fragments. In this regard, B. P. Tokin and colleagues proposed a broader concept of «regulation», a unifying term for regeneration per se and «somatic embryogenesis» (Tokin, 1969). This idea was subsequently criticized by Liosner, who questioned the criteria for the distinction between the regeneration of body parts and «somatic embryogenesis». L. D. Liosner justly pointed out that in many cases the distinction is vague, e.g., the restoration of a body terminus in many invertebrates (cnidarians, planarians, annelids, etc.) satisfies the definitions of both regeneration and somatic embryogenesis (Liozner, 1975).
Another key term that B. P. Tokin was operating with was «integration», a universal measure of adaptive fitness showing a tendency toward a continuous increase in the course of phylogenesis. B. P. Tokin believed that the ability to regenerate body parts increases evolutionarily along with "integration" (as indicated by the high regeneration rates characteristic of the integument and internal organs in vertebrates), while the capacity for asexual reproduction and somatic embryogenesis decreases (Tokin, 1969). Tokin's views on the origin of regenerative capacity should be mentioned as well: he believed that physiological regeneration arose very early based on the properties and metabolic needs of primitive living systems, while reparative regeneration evolved later, based on the principles of physiological regeneration and the subsequent evolution of metabolic pathways and defense mechanisms of the body (Tokin, 1969).
Another influential Russian team working on fundamental problems of regeneration was the laboratory headed by M. A. Vorontsova and L. D. Liosner (the Laboratory of Growth and Development at the Institute of Human Morphology, Russian Academy of Medical Sciences, Moscow). The scope of their scientific interest within the field of animal regeneration was extremely diverse. Initially, the model choice was confined to limb regeneration in amphibians, with the main focus on the balance of destruction and proliferation and the role of mitogenic radiation in these processes; a series of such studies was published in the Wilhelm Roux' Archiv für Entwicklungsmechanik der Organismen (Blacher et al., 1933; Liosner et al., 1936). Later on, the focus of scientific interest eventually shifted toward the regeneration of internal organs, notably parenchymal organs, in amphibians and ultimately in mammals. The vast experimental data on the regeneration of different organs (kidneys, liver, lungs, testes, ovaries, etc.) allowed a number of important fundamental generalizations (Vorontsova and Liosner, 1960; Liozner, 1974). In particular, Vorontsova found out that all parenchymal organs regenerate in a similar way; to describe this, the term «regenerative hypertrophy» was introduced: compensation of the loss through cell proliferation or through an increase in the size of individual cells, without restoration of the initial morphogenetic complexity (Vorontsova and Liosner, 1960; Liozner, 1974). V. F. Sidorova showed that cellular mechanisms of postnatal regeneration of parenchymal organs correspond to postnatal growth rather than embryonic development, as no additional structural units (lobules, acini, and nephrons) are formed after the resection (Sidorova, 1964, 1978). A. G. Babaeva demonstrated the key role of the immune system in regeneration, notably the ability of lymphocytes to stimulate or suppress the repair processes in mammals (Babaeva, 1989, 1990). The works of G. B. Bolshakova and her co-workers opened a new research area, the study of the regeneration of the internal organs of mammalian fetuses; it was shown that in the prenatal period, myocardial injury in 16-day-old rat fetuses causes an increase in the proliferation of cardiomyocytes away from the injury zone, while the formation of connective tissue in the damaged zone is slow, which turns out to be unfavorable for the survival of such animals in the postnatal period (Bolshakova, 2008). A. V. Elchaninov showed that after resection of the liver of rat fetuses, the proliferation of hepatocytes is also activated and the liver mass is restored, although, in contrast to the postnatal period, without an increase in the ploidy and size of hepatocytes (Elchaninov and Bolshakova, 2011a,b, 2012).
Findings of other Russian research teams that worked successfully on specific fundamental issues of animal regeneration should be mentioned as well. These include the role of the retinal pigment epithelium in retinal regeneration in tailed amphibians, studied by Mitashov (1996), and the role of polyploidy in liver regeneration and myocardial repair in mammals, demonstrated by Brodsky and Uryvaeva (1977).
THE ORIGINS OF REGENERATION
From the very beginning of regeneration studies, two opposing opinions have been expressed about its origin. Some experts qualified regeneration as a primary property of living systems (A. E. Needham and T. H. Morgan adhered to this view) (Needham, 1952), while others believed that it had emerged as a trait in some primitive organisms along with multicellularity (Weismann, 1893, 1899; Bely and Nyberg, 2010). The second opinion implies the understanding of regeneration as an epiphenomenon: an inducible replay of a program which underlies a particular morphogenetic process (asexual reproduction, growth, and embryogenesis) and is used repeatedly in the case of damage (Garza-Garcia et al., 2010; Tiozzo and Copley, 2015).
Except for the radically different interpretation of very early events, these two theories are mutually consistent, as both allow viewing regeneration in terms of fundamental homology and account for the employment of recognizable, highly conserved patterns (which can be loosely defined as intensive physiological maintenance of the remnant complemented by active reconstruction of the missing part). Repair processes in different organisms have much in common, for example, rapid re-epithelialization of the damaged site, activation of cell proliferation, activation of matrix metalloproteinases, scavenging and regulatory activities of macrophages and other cells of the immune system (Elchaninov et al., 2018, 2019), the impact of the nervous system, etc. Repair processes may involve dedifferentiation and transdifferentiation of cells and, notably, activation of a stereotyped genetic program (Fumagalli et al., 2018; Darnet et al., 2019; Mehta and Singh, 2019).
Moreover, the diversity of views on the origin of regeneration is more of historical interest, as early studies considered this process only at the level of tissues and organs while understandably neglecting the corresponding phenomena at subcellular levels. With the current state of knowledge, it is difficult to ignore the events and processes of restoration and maintenance of intracellular integrity, including the continuous renewal of organelles, turnover of the membranes, duplication of centrioles, division of mitochondria, disassembly and reassembly of the nuclear envelope during mitosis, etc. A unicellular organism devoid of any ability to regenerate would be maladaptive if viable at all; therefore, the direct association of regenerative capacity with multicellularity is hardly reasonable. Vorontsova and Liosner (1960) distinguished several types of regeneration which had evolved separately; this point has been reflected in recent studies (Bely and Nyberg, 2010). For example, the regeneration of various components of organs, the regeneration of whole organs, and the regeneration of the entire body from a fragment represent different types of regeneration. Some of these types are continuously preserved by evolution, while others become eliminated (for example, the regeneration of the entire body from a fragment).
Despite the distinct common features, repair processes in different animal taxa may take dramatically different ways (Alvarado and Tsonis, 2006). These ways are most commonly distinguished by the scale of damage-induced cell proliferation and its contribution to the morphogenesis, with the extremes called morphallaxis and epimorphosis (the terms were introduced by T. H. Morgan) (Figures 1A-C). Morphallaxis proceeds by a spatial reorganization of the remnant at the initial stages of repair; for example, Hydra regenerates by morphallaxis ( Figure 1A) (Alvarado and Tsonis, 2006). The opposite way, epimorphosis, proceeds through the formation of regeneration blastema composed of low-differentiated cells with high proliferative capacity ( Figure 1C). Epimorphosis is characteristic of limb regeneration in tailed amphibians (Caudata) and to a certain extent also of planarian regeneration ( Figure 1B) (Gurley et al., 2008). Currently, most experts agree that the clear distinction between epimorphosis and morphallaxis hardly makes sense, as any real regeneration is usually a combination of both (Agata et al., 2007;Bely and Nyberg, 2010). For instance, the oral pole regeneration in Hydra is distinctly epimorphic (Chera et al., 2009;Galliot and Ghila, 2010).
The apparent phylogenetic primacy of morphallaxis is indirectly indicated by its broad representation in both bilaterians and non-bilaterians, whereas epimorphosis is specific for bilaterians (Bely and Nyberg, 2010). Considering their similarity, it can be assumed that epimorphosis evolved on the basis of morphallaxis (Agata et al., 2007;Ben Khadra et al., 2018;Ferrario et al., 2018). It should be noted that the overall homology of regeneration mechanisms in animals is not that obvious. The mechanisms of regeneration in distant taxa can differ beyond recognition, as can be illustrated by the diverse genesis of regeneration blastema in invertebrates (Das, 2015;Bertemes et al., 2020) and vertebrates (Seifert and Muneoka, 2018;Muneoka and Dawson, 2020).
In planarians, the formation of blastema results from the proliferation of neoblasts in response to amputation (Bertemes et al., 2020); in crustaceans and insects, wound blastema is formed from the migrating epidermal cells that undergo dedifferentiation (Mito et al., 2002;Das, 2015;Bando et al., 2018).
Phylogenetic plasticity of regeneration mechanisms in Caudata, with optional stem cell involvement and varying contributions of dedifferentiation and transdifferentiation, should be noted (Seifert and Muneoka, 2018;Muneoka and Dawson, 2020). For instance, in newts, myoblasts are formed by fragmentation of muscle fibers, whereas in axolotls, they form by differentiation of myosatellite cells found within the blastema (Seifert and Muneoka, 2018;Muneoka and Dawson, 2020).
Based on these findings, K. Muneoka et al. reckon that regenerative capacity in vertebrates evolved independently in different taxa originating from a hypothetical common tetrapod ancestor incapable of limb regeneration. The authors use this concept to describe the evolution of epimorphic limb regeneration in amphibians (Seifert and Muneoka, 2018;Muneoka and Dawson, 2020) and suggest a similar scenario for the evolution of regenerative capacity in mammals, with their ability to partially restore the terminal phalanx of a finger by forming a blastema-like structure through remodeling and growth of bone tissue, which is different from the mechanisms of blastema formation in amphibians (Seifert and Muneoka, 2018;Muneoka and Dawson, 2020).
It should also be noted that, in mammals, cellular sources of the wound blastema of the terminal phalanx differ in an age-dependent manner. In mouse embryos at advanced developmental stages, wound blastema is a derivative of chondrogenic cells of the terminal phalanx, which express Msx1, Msx2, Dlx5, and Bmp4 markers. A similar amputation performed in the neonatal period promotes the formation of the wound blastema as a derivative of mesenchymal cells located predominantly beneath the nail organ and expressing Msx1, while the blastema cells express Bmp2 and Bmp7 (Seifert and Muneoka, 2018;Muneoka and Dawson, 2020).
The diversity of cellular mechanisms of blastema formation has been emphasized by Brockes et al., whose theory of the origin and evolution of regeneration is based on two assumptions: (1) regeneration employs the highly conserved principal mechanisms of growth, development, and maintenance of tissue homeostasis that are universally found in animals and ensure the capability of self-repair in certain species/taxa; and (2) these highly conserved cellular mechanisms are governed and regulated by a relatively small number of taxon-specific genes responsible for the pronounced regenerative capacity (Garza-Garcia et al., 2010).
The first of these points is consistent with the evidence on the molecular invariance of morphogenetic processes (i.e., various types of morphogenesis involve similar regulatory cascades) (Cary et al., 2019;Mehta and Singh, 2019). The second point (existence of "principal regulator" genes) is less evident; notable examples include fgf20 proposed as a primary regulator of fin regeneration in Danio rerio (Whitehead et al., 2005;Poss, 2010). A taxon-specific protein Prod1 (Geng et al., 2015), found in newts and salamanders but missing in D. rerio, Xenopus, and mammals, participates in the neural control over regeneration and patterning (Garza-Garcia et al., 2010;Geng et al., 2015;Muneoka and Dawson, 2020). The presence of Prod1 orthologs in Ambystoma mexicanum and Ambystoma maculatum places its origin before the divergence of Salamandridae and Ambystomidae (Garza-Garcia et al., 2010). In a planarian Schmidtea mediterranea, 15% of 1065 genes associated with homeostasis and regeneration have no homologs in other organisms and are considered taxon-specific (Reddien et al., 2005). According to Brockes et al., this group of genes is likely to comprise principal regulators that determine the ability to regenerate (Garza-Garcia et al., 2010).
The concept of principal regulators has also been indirectly supported by a comparative genomic study encompassing 132 species of multicellular animals with different regeneration capacities. A group of 118 highly conserved genes, 96% of which encode Jumonji C (JmjC) domain-containing proteins, has been found to be specific to the «highly regenerative» species. The evolutionary loss of such genes has been associated with a dramatic decrease in regenerative capacity (Cao et al., 2019).
The evolutionary relationship between morphallaxis and epimorphosis is disputable. The assumption of their intrinsic homology was expressed by Bely and Nyberg (2010). This point of view is supported by the non-random incidence of both regeneration modes among animal taxa, as well as the fundamental similarity of the cellular processes underlying them. However, the depth of this similarity varies, and the mechanisms can be fundamentally different. Moreover, the terms «morphallaxis» and «epimorphosis», in the sense in which Morgan (who coined them) used them, do not take into account the overall mechanistic diversity of regenerative processes in the animal kingdom; as a result, phenomena of different nature are combined under one term. In this regard, some authors propose to abandon the use of the terms «morphallaxis» and «epimorphosis». For instance, K. Agata suggested the new terms «distalization» and «intercalation» (Agata et al., 2007; Tiozzo and Copley, 2015). Recent findings indicate a striking diversity of regulation and implementation of regenerative processes at molecular and cellular levels; even within a taxonomic group, the mechanisms of regenerative response may vary significantly. In this regard, the concept of homology as related to regeneration becomes a distinctly complex problem (Tiozzo and Copley, 2015).
The question of the origin of reparative regeneration is closely related to the problem of how physiological regeneration (i.e., the non-injury-induced restorative processes) and reparative regeneration relate to each other. In general, physiological regeneration is defined as the restoration of organs, tissues, cells, and subcellular structures lost during their normal life cycle or when performing their functions (Vorontsova and Liosner, 1960). In the modern understanding, physiological regeneration is inherent in all tissues and cells; however, it proceeds in different forms. The phenomena of physiological regeneration include desquamation of epidermal cells, renewal of the intestinal epithelium, restoration of the uterine mucosa during the menstrual cycle, etc. (Carlson, 2007). B. P. Tokin viewed physiological regeneration as a mechanistic basis and direct evolutionary precursor to reparative processes. In an extreme interpretation (currently only of historical interest), reparative regeneration is an enhanced version of physiological regeneration. This simplification is due to the fact that cell proliferation, observed in some tissues under normal conditions and activated after injury, was the only measurable sign of regeneration. Currently, it is obvious that reparative regeneration differs in its mechanisms from physiological regeneration and, according to some views, evolved as an epiphenomenon which partially employs both the principles of physiological regeneration and the highly conserved molecular and cellular mechanisms of embryonic development and growth (Goss, 1992; Tiozzo and Copley, 2015).
In any case, there is no doubt that regeneration as a process arose very early in evolution and therefore involves highly conserved cellular mechanisms of morphogenesis. The intrinsic similarity of regeneration processes with asexual reproduction (Vorontsova and Liosner, 1960; Martinez et al., 2005; Kawamura et al., 2008; Burton and Finnerty, 2009; Zattara and Bely, 2016), growth (Bely and Wray, 2001; Gurley et al., 2008), and embryonic development (Martin and Parkhurst, 2004; Ghosh et al., 2008; Vogg et al., 2019) has been repeatedly noted.
REGENERATION AND ASEXUAL REPRODUCTION
Indeed, it is quite difficult not to link regeneration with asexual reproduction (Vorontsova and Liosner, 1960; Martinez et al., 2005; Brockes and Kumar, 2008; Kawamura et al., 2008; Burton and Finnerty, 2009; Zattara and Bely, 2016). In many organisms, regeneration can be morphologically indistinguishable from asexual reproduction by budding or fission. The mechanisms of asexual reproduction could be "easily" adapted for regeneration; the key difference is the stimuli that trigger these processes. Such a concept has been supported by molecular studies of regeneration and asexual reproduction in hydras, planarians, annelids, and other invertebrates (Martinez et al., 2005; Mehta and Singh, 2019; Reddy et al., 2019a,b) revealing specific involvement of stem cells and generically similar roles of Wnt-signaling in these two processes (Mehta and Singh, 2019).
Ultimately, the phenomenon of restoration of the entire body from a fragment can be considered as asexual reproduction (Tokin, 1969). B. P. Tokin viewed the decreasing capacity for asexual reproduction as a direct correlate (and reflection) of the loss in regenerative capacity.
The resemblance of asexual reproduction to regeneration in invertebrates is remarkable. However, despite the rich recent history of comparative studies on the histological level, only a limited number of specific molecular findings support the intrinsic similarity of the two processes. The positive examples include similar expression of Pl-en in the nervous system, as well as Pl-Otx1 and Pl-Otx2 in the anterior body wall, foregut, and nervous system, of the annelid worm Pristina leidyi during regeneration and asexual reproduction (Bely and Wray, 2001). Also, Hydra shows a similar expression of HyBMP5-8b, a BMP5-8 ortholog involved in axial patterning and formation of tentacles, in budding and regeneration (Reinhardt et al., 2004). However, despite the outward similarity of asexual reproduction with regeneration, these two processes evolved separately. For instance, the closest common ancestor of Annelida was probably capable of regenerating the anterior and posterior ends of the body but was devoid of the ability to reproduce itself asexually (Zattara and Bely, 2011, 2016). In Nematostella vectensis, molecular markers expressed during asexual reproduction and regeneration significantly overlap; however, no expression of the regeneration markers Nv-otxC and anthox1 is observed during asexual reproduction (Burton and Finnerty, 2009).
REGENERATION AND EMBRYOGENESIS
As noted above, K. N. Davydov was one of the first to express the idea of a similarity between regeneration and embryonic development, based on his comparison of anterior regeneration in P. minuta and P. clavigera with embryonic development (Davydov, 1903). The relationship between regeneration and embryogenesis is of particular importance for evolutionary biology, as it allows experimental investigation of the emergence of new structures. Sánchez Alvarado and coauthors developed an original view of this problem (Sánchez Alvarado, 2000; Elliott and Sánchez Alvarado, 2013). In their opinion, limb development in arthropods and vertebrates is governed by similar molecular cascades. However, the closest common ancestors of arthropods and vertebrates had no limbs at all. What factors, then, predetermined the homology? (Sánchez Alvarado, 2000; Elliott and Sánchez Alvarado, 2013).
The answer to this question can be obtained by studying regeneration. The similarity of embryonic limb buds to the regeneration blastema is evident both histologically and at the level of molecular signaling cascades (Galis et al., 2003). In planarians, the blastema contains key components of the molecular pathways regulating the establishment of anterior-posterior (Wnt-signaling), dorsal-ventral (BMP-pathway), and medial-lateral polarities (Sánchez Alvarado, 2000; Elliott and Sánchez Alvarado, 2013; Karami et al., 2015). In the opinion of Sánchez Alvarado and coauthors, «the molecular processes underlying blastema formation and regeneration have been co-opted by sexually reproducing animals for the production of new structures such as limbs during the evolution of their developmental processes» (Sánchez Alvarado, 2000; Elliott and Sánchez Alvarado, 2013).
In molecular terms, embryonic development and regeneration are very different. N. vectensis shows no asymmetric expression of Hox-like genes (characteristic of embryogenesis) during asexual reproduction or regeneration (Burton and Finnerty, 2009). In zebrafish, the epimorphic regeneration of fins requires fgf20a expression, which is not required for fin development (Whitehead et al., 2005). In Xenopus, three Abdominal B-type Hox genes XHoxc10, XHoxa13, and XHoxd13 show different expression patterns in regenerating and developing limbs (Christen et al., 2003). The similarities and differences of embryonic development, asexual reproduction, and regeneration are consistent with the idea that the capacities of asexual reproduction and regeneration evolved on the basis of signaling pathways of growth and development; however, the "borrowing" was selective and proceeded in a variety of ways.
Apparently, signaling pathways governing regeneration and asexual reproduction in primitive animals were eventually redirected for the performance of other tasks, e.g., limb development (Sánchez Alvarado, 2000;Elliott and Sánchez Alvarado, 2013).
EVOLUTIONARY MAINTENANCE OF REGENERATIVE CAPACITY
Regardless of the character of regeneration origins at the most ancient stages of evolution (whether it was a primary or secondary property of animals), this property was propagated in diverse forms throughout the animal kingdom.
The problem of how regenerative capacity is maintained during evolution is one of the key ones; however, there are very few specific experimental studies. The very idea of the maintenance of the ability to regenerate, and the role of damage frequency in this process, was initially developed by Weismann (1893, 1899) and further tested in the works of T. H. Morgan (1901). Further insight into the role of injury and the value of regeneration for the fitness of a species was developed by Needham (1952) and Goss (1969).
According to the classical reasoning, frequent damage to an organ is favorable for the maintenance of its regenerative capacity (Weismann, 1893, 1899), given that its loss will significantly reduce the individual's fitness and the overall costs are not detrimental for the species (Needham, 1952; Goss, 1969).
At the initial stages of evolution, aggressive environmental conditions apparently played a principal role in maintaining the regenerative capacity (Wulff, 2006). Indeed, a high frequency of damage is typical for some groups of highly regenerative organisms in natural environments, to the extent that the majority of individuals in wild populations show distinct signs of damage and repair (Clark et al., 2007; Bely and Nyberg, 2010). However, the high regenerative capacity may be preserved even at low frequencies of damage. T. H. Morgan, in his classical studies on hermit crabs, showed that the rudimentary hind limbs, hidden in the shell and rarely damaged unless the shell is broken (in which case the animal would likely perish), regenerate in the same way as front limbs (exposed to the environment and frequently damaged or autotomized) (Morgan, 1901; Sunderland, 2010). Notably, hydras and planarians, with their remarkable regenerative capacities, show no signs of active repair in the wild (Bely and Nyberg, 2010). As emphasized by Needham, regeneration would never be the only adaptive response to frequent damage. Instead, the species may enhance its reproductive potential; the animals may also develop mobility, protective coloration, exoskeleton, etc. (Needham, 1952).
Theoretically, as already noted, the severity of damage must be balanced by the cost of the regenerative process. Excessive severity of damage will kill the animal, whereas its insignificance for the normal functioning (due to dispensability or redundancy of the damaged structure) will eliminate the need for regeneration. However, in practice, it is rather difficult to determine the cost of damage, as well as the cost of regeneration for a particular organism (Tiozzo and Copley, 2015). Several studies indicate that regeneration is indeed associated with significant energy expenditures (Naya et al., 2007) and functional opportunity costs that affect the survival and reproductive capacity of the organism (Bernardo and Agosta, 2005;Maginnis, 2006;Suzuki et al., 2019). Complex adaptive reactions (e.g., autotomy, which helps to minimize the loss of biological fluids and tissues when attacked by predators) can reduce the cost of damage thus increasing the feasibility of regeneration (Maginnis, 2006;Mcgaw, 2006;Bateman et al., 2008). In the general case, the regeneration is feasible when its benefits and rates override the possible negative effects from the existence of functionally immature and burdensome intermediate structures (Ramos et al., 2004;Dupont and Thorndyke, 2006;Barr et al., 2019) or incomplete/deviant recovery in cases of atypical regeneration (Lailvaux et al., 2009;Bely and Nyberg, 2010).
Due to the difficulties and contradictions of adaptationism (when applied on its own), alternative hypotheses were proposed to explain the evolutionary maintenance of regenerative capacity. In this regard, pleiotropic effects and phylogenetic inertia represent particularly important factors that should be discussed separately.
In an evolutionary context, the term «pleiotropy» refers to the maintenance of regenerative capacity of an organ in close association with some other important morphogenetic process, for example, asexual reproduction, growth, embryogenesis, or regeneration of another organ (possibly regulated by the same genetic frameworks). Pleiotropy implies default activation of related morphogenetic processes; for instance, in cnidarians and flatworms, the mechanisms of regeneration and normal growth are intrinsically similar (Alvarado and Tsonis, 2006;Bosch, 2007).
The concept of phylogenetic inertia refers to cases when regenerative capacity confers no distinct selective advantages to the species, nor shows distinct associations with any other morphogenetic process. In such cases, regeneration is preserved for the reason of insufficient selection pressure (or time) for its elimination. This concept provides a valuable description for the evolution of regenerative capacity in annelids, some of which retained the capacity while others lost it (Bely and Wray, 2001;Bely, 2006).
EVOLUTIONARY ENHANCEMENT OF REGENERATIVE CAPACITY
It should be noted that evolutionary enhancement of regenerative capacity is rare. Nevertheless, distinct minor trends can be illustrated by the enhanced regenerative capacity of muscle and liver tissues in mammals and birds compared with amphibians (Liozner, 1974; Carlson, 2005) and the enhanced regeneration of extremities in arthropods compared with other ecdysozoans (Maruzzo and Bortolin, 2013). Other famous examples are the regeneration of the tail in lizards (Garza-Garcia et al., 2010) and the high capacity for skin regeneration in the spiny mouse, Acomys (Brant et al., 2016). Despite these impressive examples of enhanced regenerative capacity, the taxa involved are too distantly related to allow comprehensive investigation of common evolutionary patterns.
One of the most productive strategies for tracing the evolutionary dynamics of regenerative capacity is to compare closely related species with different regenerative capacities (Bely and Sikes, 2010; Zattara et al., 2019). The phylum Nemertea is one of the most promising in this respect, as all of its studied species are capable of regenerating the posterior portion of the body, while only some of them can regenerate the anterior terminus (Bely et al., 2014; Zattara and Bely, 2016). The findings indicate that the common ancestor of Nemertea was capable of regenerating the posterior portion, but not the anterior terminus. In the evolution of Nemertea, this capacity was reinforced in at least four instances, as revealed by facile regeneration of the anterior terminus in the corresponding species (one among Palaeonemertea and three among Pilidiophora; Zattara et al., 2019). The repeated events of enhancement were apparently promoted by the repeated emergence of certain traits which allowed the transition (probably, the long-term survival of decapitated individuals) (Zattara et al., 2019). Mechanistically, the enhancement may result from the activation of some embryonic developmental programs in adults. Such an assumption is consistent with the experiments on the embryos of Nemertopsis bivittata, which, after being cut into two parts, develop into two individuals (whereas the adults of this species are non-regenerative) (Martindale and Henry, 1995). Such mechanisms can be highly conserved; cf. the organizing roles of Wnt/β-catenin signaling during apical regeneration in Hydra and early development in vertebrates (Guder et al., 2006; Reddy et al., 2019a; Vogg et al., 2019).
AN EVOLUTIONARY DECLINE IN REGENERATIVE CAPACITY
The decline in regenerative capacity is a very strong phylogenetic trend, the examples of which can be found in any phylum (Bely and Nyberg, 2010;Lai and Aboobaker, 2018). However, its accurate comparative assessment in different groups of animals is complicated (Bely, 2010;Bely and Sikes, 2010).
Meanwhile, mechanistic reasons for the decline, though much discussed, remain understudied. In the view of adaptationists, regenerative capacity may be attenuated as a direct consequence of low damage frequency (Baumiller and Gahn, 2004). However, this view is not supported by experimental findings such as the efficient regeneration of rudimentary limbs in hermit crabs reported by T. H. Morgan. The same applies to the regeneration of internal organs, which, according to A. Weismann, should regenerate poorly (Weismann, 1893, 1899). In the 20th century, this concept was criticized by M. A. Vorontsova, L. D. Liosner, and their followers (Vorontsova and Liosner, 1960; Liozner, 1974).
In addition, a decline in regenerative capacity may occur as a result of a significant change in the adaptive value of the organ. In the case of a dramatic gain in adaptive value, damage to the organ may kill the individual without giving regeneration a chance. However, a decrease in the adaptive value of an organ may also promote a decline in its regenerative capacity, as happens with the multiplication of identical or similar structures, e.g., the reduced capacity for limb regeneration in certain arachnids (Brautigam and Persons, 2003).
Regenerative capacity may also decrease in a pleiotropic manner. Galis et al. (2003) suggest that the regenerative capacity of vertebrate limbs evolves in connection with their embryonic development. In the case of the early onset of limb development, its formation coincides with basic morphogenetic events involving complex interactions of multiple embryonic structures. As a consequence, the limb develops under powerful inducing effects of somites, lateral plate mesoderm, etc., but not as a self-organizing structure. Accordingly, the regenerative capacity of the definitive limb is reduced (Galis et al., 2003).
When the onset of limb development is delayed until the completion of fundamental inductive interactions between the primary germ layer derivatives (somites, neural tube, etc.), the autonomously developing limbs will be regenerative. This concept can be illustrated by the delayed limb development in Caudata (whose capacity for limb regeneration is renowned). Opposite examples include the fins of sharks and lungfish, as well as the limbs of birds and mammals, which develop from early anlagen and regenerate poorly. At the same time, the concept does not account for the poor limb regeneration in Anura, whose limbs develop fairly late, but regenerate well in larvae only (Galis et al., 2003). However, adult Anura are not completely devoid of the ability to regenerate limbs: in Rana temporaria and Rana clamitans, limb regeneration can be obtained after additional damaging effects on the wound surface (Polezhaev, 1946), while in Xenopus laevis, the same effect can be achieved by blocking proton channels and limiting the duration of local immune responses (Adams et al., 2007;Fukazawa et al., 2009).
Close to the concept under consideration is the evo-devo concept of modules: networks of genes that control the behavior of cells. Defining the concept of modularity is not a trivial task. In developmental biology, the hypothesis of modules assumes the division of a developing organism into functional or organizational subunits that have pronounced morphological isolation (for example, somites) or correspond to a certain part of the body of an adult, such as a limb or a kidney (Bolker, 2000). Raff (1996) listed the following module characteristics: a module should have a discrete genetic specification, hierarchical organization, interactions with other modules, a particular physical location within a developing organism, and the ability to undergo transformations on both developmental and evolutionary time scales.
In connection with the problem of the evolution of regeneration, this concept implies the idea of developmental constraint, i.e., restraints on phenotype production due to limited interaction among modules. For example, an increase in the complexity of the structure at the histological level can prevent the propagation of gradients of morphogens or bioelectric signals, which can lead to a decrease in the regenerative capacity (Tiozzo and Copley, 2015). The interplay of regeneration and immunity represents a special issue (Mescher et al., 2017). The advent of adaptive immunity apparently collided with the pronounced regenerative capacity. In the highly regenerative Caudata, many components of adaptive immunity are underdeveloped; for example, compared with tailless amphibians, they lack antiviral immunity (Cotter et al., 2008;Murawala et al., 2012). Significant upgrade of the adaptive immune system during metamorphosis in Anura is consistent with the observed decline in the regenerative capacity of the adult individuals compared with the larvae (Robert and Ohta, 2009;Godwin and Rosenthal, 2014). In Anura, the immune system undergoes significant developmental changes. Prior to metamorphosis, it is functionally immature, as indicated by larval repertoires of T cell and B cell receptors, low expression of MHC I, low levels of B cell-mediated responses and antibody production, the negligible activity of natural killer cells, and low activity of helper and killer T cells. Metamorphosis is associated with a significant upgrade of these indicators; in addition, it brings the capacity of MHC II-dependent activation of helper T cells (Robert and Ohta, 2009). The increase in activity of natural killer cells and T cells in tailless amphibians leads to enhanced antitumor and antiviral immunity, which apparently costs them their regenerative potential.
Similar patterns are observed in mammals, with the pronounced regenerative capacity (manifested in scarless wound healing and myocardial regeneration) confined to certain stages of fetal development (Porrello et al., 2011; Vivien et al., 2016). The pronounced regenerative capacity of fetal skin and myocardium can be associated with certain functional properties of the developing immune system. It has been demonstrated that during this period the body more readily develops a Th2-mediated anti-inflammatory response than pro-inflammatory reactions (Sattler and Rosenthal, 2016). The shifted balance apparently favors a full-value compensation of the defect in line with its immediate tissue environment rather than its replacement with fibrous tissue. Apart from the plausible role of T cell-mediated responses, the influence of innate immunity should be considered as well. The development of organs is accompanied by their colonization with macrophages of bone marrow origin as opposed to primary populations of embryonic macrophages, which may also affect the regenerative capacity (Epelman et al., 2014; Elchaninov et al., 2019, 2020). Apparently, similar reasons explain the high capacity for skin regeneration in the spiny mouse Acomys, whose skin wounds show an almost complete absence of macrophages and low levels of pro-inflammatory cytokines (Brant et al., 2016).
Thus, it can be noted that evolutionary maturation of the immune system leads to a decrease in regenerative potential, as illustrated by the inability of frogs to regenerate limbs after metamorphosis, as well as by the loss of scarless healing of skin wounds in mammals.
The inverse correlation between adaptive immunity and regenerative capacity (Godwin et al., 2017) may reflect the important role of under-, trans-, or dedifferentiated cells in regeneration (considered in the next section). It has been suggested that the advanced adaptive immunity (characteristic of Anura, birds, and mammals) is poorly compatible with the presence of non-differentiated cells, which are considered compromised and become eliminated along with foreign cells. The constant immune pressure on the populations of cells with high differentiation potential negatively affects the regenerative capacity (Godwin et al., 2017).
Another reason for the decline in regenerative capacity may be the high energy cost of this process. In animals with a short lifespan, individuals invest more resources in reproduction, which leads to a decrease in regenerative potential; this apparently has happened in certain species of lizards (Fox and McCoy, 2000; Bernardo and Agosta, 2005). A similar relationship between reproduction and regeneration can be observed in species with asexual reproduction, e.g., annelids that have lost the capacity for anterior regeneration (Bely and Wray, 2001; Bely, 2010; Zattara and Bely, 2013). Regeneration may also affect development; for instance, it significantly delays metamorphosis in fruit flies, cockroaches, butterflies, and crabs, which can also adversely affect survival (Suzuki et al., 2019). Other possible causes for the decline in regenerative capacity include warm-bloodedness (Goss, 1969), which is closely related to the evolution of adaptive immunity, as well as a hard skeleton (Wulff, 2006) and finite growth (Bely and Wray, 2001; Bely, 2010).
Elucidation of the mechanisms that determine the decline of regenerative capacity is challenging, especially given the varying degree of such effects in evolution. It has been noted that in certain groups of animals, e.g., annelids, regeneration is reduced to wound healing; amphibians and fish tend to exhibit hypomorphic regeneration, whereas reptiles may show either decreased rates of recovery or confinement of repair to certain stages of ontogeny (Vorontsova and Liosner, 1960; Han et al., 2003, 2008; Seifert and Muneoka, 2018). In the planarian Dendrocoelum lacteum, when the body is cross-cut at a certain level, the tail fragments are incapable of regenerating the head. It has been found that this restriction is due to uninhibited Wnt/β-catenin signaling in such fragments and that ectopic suppression of Wnt/β-catenin signaling makes them capable of anterior regeneration (Liu et al., 2013; Maden, 2018). Similarly, the lack of anterior regeneration observed in certain annelids has been associated with low expression of nanos (Bely and Sikes, 2010).
DIFFERENTIATION STATUS AS A CORRELATE OF REGENERATIVE CAPACITY
According to Weismann's theory, the regenerative capacity decreases as the structural and functional organization becomes more complex. In other words, Weismann believed that complex structural patterns are poorly compatible with regeneration, which requires pronounced tissue plasticity and a sufficient degree of freedom for the reconstruction.
Despite the vagueness and controversy of the term "organization complexity" as applied to animals, differentiation plasticity of cells is certainly connected with regeneration capacity.
The terms «transdifferentiation», «dedifferentiation», and «redifferentiation» have a rich history of scientific usage. The issue of their exact meanings and, in general, whether their use makes sense, is still open. After long controversy, the definitions still vary. Literally, dedifferentiation is the loss of structural and functional specialization; accordingly, redifferentiation may be understood as the reacquisition by a particular cell of its previous differentiated phenotype (Odelberg, 2004, 2005; Grigoryan, 2016). «Transdifferentiation» is a particularly controversial term. Some experts use it loosely, even to describe a transition between derivatives of the same germ layer, for example, the transition between cholangiocyte and hepatocyte (Michalopoulos, 2011). Others use it in a narrower sense, to describe a transition between germ layers; the examples include the transition of the coelomic epithelium into gut epithelium during gut regeneration in holothurians (Dolmatov et al., 2019) and the transition of pigment cells of the iris into epithelial cells of the lens (Grigoryan, 2016). «Dedifferentiation» implies explicit transition to a low-differentiated state with high proliferative activity. A classic example of dedifferentiation is observed during regeneration of the retina from the pigment epithelium in newts, during which the epithelial cells lose melanin granules, enter proliferation, and differentiate into neurons (Mitashov, 1996); the whole sequence, however, can be justly classified as redifferentiation or even transdifferentiation. Formation of the wound blastema during regeneration of newt limbs also involves dedifferentiation, with muscle fibers losing their striation and undergoing fragmentation to become myoblasts (Odelberg, 2005).
Differentiation plasticity of cells at the site of injury (or directed to it) is closely related to the extent of remodeling in response to damage, with the extremes termed morphallaxis («blastema-less» regeneration) and epimorphosis (which involves the formation of blastema). For instance, the diploblastic Hydra can be considered as an organism that is constantly in a state of regeneration (Sánchez Alvarado, 2000;Martínez and Bridge, 2012). In Hydra, non-differentiated pluripotent cells of the gastric column are constantly proliferating and changing their location within the body (Sánchez Alvarado, 2000;Bosch, 2007;Vogg et al., 2019). According to some expert opinions, these cells may be considered as a hidden permanent analog of the blastema. The constant «circulation» of such cells in Hydra's body provides a reasonable alternative to their emergency accumulation at the site of damage (which would be an epimorphic feature). Moreover, the constant presence of non-differentiated progenitors enables the triggering of determination and differentiation processes immediately after damage, which is typical for morphallaxis (Sánchez Alvarado, 2000).
In triploblastic animals, the evolution of an expanded system of cell differentiation checkpoints posed critical restrictions on the pluripotency. In planarians (considered as the most primitive triploblastic animals), the only pluripotent cells are neoblasts. In the case of damage to the planarian body, neoblasts actively proliferate and form blastema. It is believed that the cells involved in the restoration of the entire body from a fragment have similar properties in different groups of animals (endowed with such capacity). These cells are marked with RNA/protein-rich structures referred to as nuage, germ plasm, or chromatoid bodies (nuGPCB) which typically contain the expression products of germline-associated genes of Vasa, Nanos, Piwi, Tudor, Pumilio, and Bruno families. In invertebrates, non-differentiated cells are also typically marked by high expression of PIWI/piRNA genes, which ensures genome stability (Tiozzo and Copley, 2015;Lai and Aboobaker, 2018).
In more complex triploblastic animals, e.g., tailed amphibians, the pluripotency is restricted even further. These animals lack a reserve of pluripotent cells, which emerge during regeneration as a result of dedifferentiation and transdifferentiation of the preexisting differentiated cells (Alvarado and Tsonis, 2006;Brockes and Kumar, 2008;Li et al., 2015). In tailless amphibians and salamanders, the potency of accumulating non-differentiated cells in response to injury is dramatically reduced or restricted to the larval stages (Agata and Inoue, 2012). Relative contributions of dedifferentiation and transdifferentiation to regeneration remain disputable, partly due to the pluralism of definitions for these processes in different settings (Galliot and Ghila, 2010). The majority of experts agree that dedifferentiation and transdifferentiation characteristically occur during regeneration in Hydra, as well as during Wolffian regeneration of the lens in Caudata (Galliot and Ghila, 2010;Henry and Hamilton, 2018). Transdifferentiation of coelomic epithelial cells into enterocytes can be observed during regeneration in sea cucumbers (Dolmatov et al., 2019;Boyko et al., 2020). At the same time, the cells of regenerating limbs in tailed amphibians have been shown to retain their key differentiation determinants (Kragl et al., 2009;Slack, 2017).
According to a number of authors, the ability of cells to return to the cell cycle is closely related to the concept of cell plasticity (Galliot and Ghila, 2010). In the course of evolution, the regulation of the cell cycle in some animals became more complicated, with the appearance of additional checkpoints, which in turn could cause a decrease in regenerative capacity.
In a comparative study of the mechanisms of cell cycle regulation, it was found that 23 cyclins are encoded in the Saccharomyces cerevisiae genome, and these regulate six proline-directed serine/threonine protein kinases. Cdc28 is required for driving the cell cycle. The multifunctional kinase Pho85 regulates G1 progression and other intracellular processes. In humans, 13 members of the CDK family (cyclin-dependent kinases) have been found to interact with 29 cyclins and cyclin-related proteins (Malumbres and Barbacid, 2005). A family of five proteins (known as Ringo or Speedy) has been found in vertebrates but not in S. cerevisiae, Caenorhabditis elegans, or Drosophila melanogaster (Nebreda, 2006).
It has been found that CDK7, CDK8, and CDK9 are not very different from their yeast orthologs. CDK4 and CDK6 first appeared in multicellular organisms. The increased number of cyclins in the mammalian genome has resulted in a large variety of CDK-cyclin complexes. However, only 10 cyclins (three D-type, two E-type, two A-type, and three B-type cyclins) are known to be directly involved in driving the mammalian cell cycle (Malumbres and Barbacid, 2009).
The control of the mitotic cycle in the nuclei of muscle fibers in Anamnia and mammals involves different numbers of regulatory proteins. It has been found that non-amniotic vertebrates possess a single functional INK4 gene, which is responsible for the synthesis of cyclin-dependent kinase inhibitor 2 (p16Ink4). In contrast, mammals have two Ink4 genes (Ink4a, which produces p16INK4a and ARF, and Ink4b, which produces p15INK4b). p16INK4a and p15INK4b block the activity of cyclin-dependent kinases 4 and 6 (CDK4,6) under normal conditions. In mammals, there is an additional mechanism of inhibition of cell cycle re-entry by the alternative reading frame protein (ARF) acting through tumor protein p53. Under normal conditions, MDM2 ubiquitinates p53 and targets it for destruction (Seifert et al., 2012).
Despite the limitations in proliferative potential and phenotypic plasticity, mammalian tissues do present certain examples of dedifferentiation. However, these examples are most often associated with pathological processes, not to mention tumorigenesis. For example, under conditions of severe viral or toxic liver damage, cholangiocytes are prone to dedifferentiation, with subsequent redifferentiation to cholangiocytes or transdifferentiation to hepatocytes (Michalopoulos, 2011). Another effect of viral or toxic liver damage on cell differentiation status is the loss of lipid droplets by Ito cells and their transition to myofibroblasts (Unanue, 2007).
CONCLUSION
In the course of the evolution of certain animal taxa, more and more checkpoints were added to the regulation of the cell cycle and exit from it. These checkpoints are maintained by the expanded system of cyclins and cyclin-dependent kinases with associated gene-and-protein networks and circuits (Malumbres and Barbacid, 2009;Seifert et al., 2012). The establishment of complex multilevel control of the mitotic cycle was inevitably coupled to enhanced control of the differentiation status; this association represents a major cause for the decline in regenerative capacity in vertebrates. An eventual increase in the activity of metabolic processes in warm-blooded animals allowed neither the preservation of non-differentiated cells in sufficiently high numbers nor the massive waves of dedifferentiation fraught with tumorigenesis (Sánchez Alvarado, 2000;Li et al., 2015).
Regeneration is a complex and diversified process inherent to life at different levels of its organization. For obvious reasons, morphologically advanced cases of regeneration (such as restoration of the entire body from a fragment or regeneration of amputated limbs) draw more attention than others. As a consequence, a limited number of regeneration model organisms are used for research: zebrafish, newts, hydra, and planaria. Moreover, the same type of damage, amputation, is used very often, which narrows our understanding of regeneration and its evolution. Almost nothing is known about the mechanisms of regeneration in such animals after toxic, viral, or bacterial damage. Such damage is usually treated in the relevant sections of microbiology and toxicology and is not taken into account by regeneration researchers.
The evolution of regeneration can be studied by various approaches (Vorontsova and Liosner, 1960; Bely and Nyberg, 2010). The methodology involves a reduction of the phenomenon to particular events assigned to different levels of organization and classified accordingly, with appropriate accounting for their relative contributions in a single model. Moreover, it is obvious that the evolution of regeneration is not a unidirectional process. Despite a major trend of declining regenerative capacity with increasing organizational complexity, the phenomenon is modified in a variety of ways and never completely eliminated. For instance, mammals, which have suffered a pronounced phylogenetic decline in regenerative capacity, are capable of restoring neither amputated limbs nor other external appendages (the repair is limited to wound healing). At the same time, regeneration of certain organs and structures in mammals is morphologically consistent and results in complete functional recovery; characteristic examples include the restoration of the auricle tissue after a perforating wound (Williams-Boyce and Daniel, 1986) and restoration of the liver mass after massive resections (Bangru and Kalsotra, 2020).
Evolutionary studies on regeneration involve overcoming certain biases. Regrettably, the studies on regenerative capacity are still linked to a limited number of animal models and species. Importantly, in natural habitats, the organs may be damaged by disease rather than mechanically, which dramatically affects the course of regeneration. Regeneration of pathologically altered organs has been experimentally studied in mammals; for other animal taxa, the corresponding data are fragmentary or missing.
AUTHOR CONTRIBUTIONS
AE, GS, and TF contributed the text. All authors read and approved the final version of the manuscript.
"Biology",
"Philosophy"
] |
Very-Short-Term Power Prediction for PV Power Plants Using a Simple and Effective RCC-LSTM Model Based on Short Term Multivariate Historical Datasets
Improving the accuracy of very-short-term (VST) photovoltaic (PV) power generation prediction can effectively enhance the quality of operational scheduling of PV power plants and provide a reference for PV maintenance and emergency response. In this paper, the effects of different meteorological factors on PV power generation, as well as their degree of impact at different time periods, are analyzed. Secondly, according to the characteristics of the radiation coordinate, a simple radiation classification coordinate (RCC) method is proposed to classify and select similar time periods. Based on the characteristics of the PV power time series, the selected similar time period dataset (including power output and multivariate meteorological factor data) is reconstructed as the training dataset. Then, the long short-term memory (LSTM) recurrent neural network is applied as the learning network of the proposed model. The proposed model is tested on two independent PV systems from the Desert Knowledge Australia Solar Centre (DKASC) PV data. The proposed model achieves a mean absolute percentage error of 2.74–7.25%, and according to four error metrics, the results show that the robustness and accuracy of the RCC-LSTM model are better than those of the four comparison models.
Introduction
Developing renewable energy can effectively reduce dependence on fossil fuels and other combustion-based energy sources, thereby improving the world's energy and economic security [1,2]. Thus, the importance of developing new energy sources is increasingly prominent [3,4]. Due to its clean, safe, and sustainable characteristics, photovoltaic (PV) power generation is still receiving continuous attention worldwide. PV technology has been improved in materials [5] and maintenance strategies [6,7] in recent years. According to the latest data [8], the global installed capacity of new PV reached 100 GW in 2018, accumulating to 505 GW. Among them, the newly installed PV capacity in China, the US, Japan, and Germany reached 45 GW, 10.6 GW, 6.5 GW, and 3.0 GW, respectively. However, because the power output of a PV power generation system is largely affected by environmental factors, the economic benefits of a PV plant depend on the flexibility of PV power systems [9]. In order to improve the flexibility of the demand side and supply side in the PV market, increasing the resolution and accuracy of PV power generation predictions becomes critical and urgent [10]. The proposed RCC-LSTM model does not require very long-term historical data, which allows the system to forecast VST PV power generation after a short period of self-running, and the sliding window enables the predictive model to be self-updated in real time and adapt to the natural attenuation of PV systems.
The main achievements of this work can be summarized as follows: (1) We present a new method for VST PV power forecasting that combines similar time period collection using the RCC algorithm with neural network learning prediction algorithms. The model uses only the previous PV power data and meteorological data, i.e., solar radiation, temperature, and humidity. A notable advantage of our method is that it uses only variables that are easily obtainable (previous PV power and simple weather data). In comparison with other methods, it does not use future weather predictions, which are not always available for all PV plants, or sky images, which require special equipment to be processed and recorded. (2) At the five-minute time resolution, the correlation between different meteorological data and power output at different time periods was explored. The specified time point radiation coordinates, which had the highest correlation with power output, are further proposed. The similar time periods collected by the RCC method are used as training samples for the prediction model. This method reduces the calculation cost of the model and enhances the prediction accuracy.
(3) Based on the dataset from two independent PV systems, a comprehensive comparative study is conducted comparing the proposed method with mainstream data-driven methods, including the RCC-BPNN, RCC-Elman, and RCC-RBFNN models and the LSTM model, in all four seasons. The experimental results show that the proposed RCC-LSTM model has an obvious advantage in forecasting accuracy.
The rest of the work is organized as follows. Section 2 presents the materials and methods in this paper. Section 3 describes the proposed methodology in this paper. Section 4 shows the experimental results and provides an analysis and comparison of the test results. Finally, the conclusions and future work are presented in Section 5.
The Description of The Experimental Data
This paper uses the measured data from the YULARA PV system in Alice Springs, Australia, at a latitude of 22°79′ S and a longitude of 130°16′ E. In order to verify the scope and robustness of the proposed model, two separate systems (i.e., 3A and 4) with different PV technologies and panel ratings are selected. Figure 1 shows the map of the systems; their power generation ranges are 22.56 KW and 327.6 KW, respectively. The detailed information of these two systems is shown in Table 1.
The data of the two consecutive years (2017 and 2018) are chosen for this experiment and can be downloaded from [49]. The resolution of the historical dataset is 5 min, and the data mainly include active power (KW), temperature (°C), relative humidity (%), global horizontal radiation (W/m²×sr), and diffuse horizontal radiation (W/m²×sr).
General Structure of the Proposed Model
The detailed overall structure of the proposed method is described in Figure 2. To further understand the details of the method, an additional description of each part is given in this section.
Data Preprocessing
During the training process of the deep learning network, the quality of the training data will affect the accuracy of the prediction model. Therefore, the training data should be preprocessed before being passed to the network, which includes the cleaning of abnormal data (such as PV panel anomalies) and the filling in of missing data (such as system and equipment failures). After that, in order to meet the data requirements of the training network and to avoid the unbalanced data distribution caused by the different unit ranges of the different feature vectors, these data are normalized into the same unit of measurement. Finally, the training dataset is rearranged according to the PV output sequence and the structure of the neural network.
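As an illustration of this preprocessing stage, a minimal sketch is given below; the column names, the negative-value filter, and the interpolation gap limit are assumptions for demonstration rather than the authors' exact procedure.

```python
import pandas as pd

def preprocess(csv_path):
    """Clean a 5-min resolution PV dataset and min-max normalize each feature.
    Column and index names are assumed for illustration only."""
    cols = ["power", "temperature", "humidity", "global_rad", "diffuse_rad"]
    df = pd.read_csv(csv_path, parse_dates=["timestamp"], index_col="timestamp")
    # Remove physically impossible readings (e.g., negative radiation or power).
    df[df < 0] = pd.NA
    # Fill short gaps caused by logger or equipment failure by time interpolation.
    df = df.interpolate(method="time", limit=6)
    # Min-max normalize every feature to [0, 1] so that no unit dominates training.
    df_norm = (df - df.min()) / (df.max() - df.min())
    return df_norm[cols]
```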
Radiation Coordinate Classification Method
PV power output is related to many factors [50], including some meteorological factors, type of PV module, the installation structure, and the working characteristics of the PV module among others and it is almost impossible to include all the influencing factors. However, it is easy to understand that the weight of these factors on the PV power output is not constant. They act differently under different time periods and different weather conditions. Moreover, the natural attenuation of PV systems has a certain effect on their degree of impact.
This paper selects three random days in each of the four seasons for analysis (January 15, February 22, and December 20 for summer in Australia; April 30, May 20, and June 1 for autumn; July 7, July 18, and August 6 for winter; and September 1 and September 22 for spring). The time range is from 8:30 to 17:30, 109 time points in total, and four representative and easily available meteorological factors are selected for comparison. The correlations between the different features and the power output at different time periods are calculated by ρ_{d,p}, and the values for the randomly selected days are averaged.
The correlation coefficient ρ_{d,p} is defined as follows:
ρ_{d,p} = Σ_{i=1..N} (X_i − X̄)(Y_i − Ȳ) / sqrt( Σ_{i=1..N} (X_i − X̄)² · Σ_{i=1..N} (Y_i − Ȳ)² ),
where X represents the different meteorological influencing factors, Y represents the power generation, and N represents the number of time points. A correlation coefficient in the range 0.8–1.0 represents a very strong correlation, 0.6–0.8 a strong correlation, 0.4–0.6 a medium correlation, 0.2–0.4 a weak correlation, and 0–0.2 a very weak correlation. Figure 3 presents the correlation coefficients of the four different meteorological factors with PV power generation at different time periods in different seasons. In general, it can be seen that the correlation between the different impact factors and the PV power output changes with the seasons. Their respective influences on the PV power output are also constantly changing, which further explains why this problem is difficult for linear models to solve.
Radiation Coordinate Classification Method
PV power output is related to many factors [50], including some meteorological factors, type of PV module, the installation structure, and the working characteristics of the PV module among others and it is almost impossible to include all the influencing factors. However, it is easy to understand that the weight of these factors on the PV power output is not constant. They act differently under different time periods and different weather conditions. Moreover, the natural attenuation of PV systems has a certain effect on their degree of impact.
This paper selects three random days for analysis in four quarters (includes January 15, February 22, December 20, summer in Australia, April 30, May 20, June 1, Autumn in Australia, July 7, July 18, August 6, winter in Australia, and September 1, September 22, spring in Australia). Time range from 8:30-17:30, 109-time points in total, four representative and easily available meteorological factors are selected for comparison. The correlations between different features with power output at different time period are calculated by ρd,p, and the average of three days was randomly selected.
The ρd,p is defined as follows: where X represents the different meteorological influential factors, Y represents the power generation, and N represents the number of time points. The correlation coefficient ranges from 0.8 to 1.0, the representation has a strong correlation, the strong correlation at 0.6-0.8, the medium correlation between 0.4-0.6, the weak correlation at 0.2-0.4. And the very weak correlation at 0-0.2. Figure 3 represents the correlation coefficient of four different meteorological factors with PV power generation at different time periods in different seasons. In general, it can be seen that the correlation between different impact factors and PV power output also changes in different seasons. Their respective influences on PV power output are also constantly changing, which further explains why the linear models are difficult to solve. Figure 3a illustrates the correlation between temperature and PV power generation over different hours of the day. It can be seen that there is a certain degree of similarity between the four seasons, but there are still minor differences, and the correlation varies from −0.75 to 1 during a single day, this fluctuation is large. At the same time, in Figure 3b. The fluctuation range of correlation between humility and PV power generation over the different time period is between −1 and 0.75, there are also subtle differences in four seasons. Moreover, the Figure 3c represents the correlation between diffuse horizontal radiation and PV power generation. The correlation is in the range from −0.6 to 1 and there is a kind of regulation within one day, but the degree of correlation in different seasons is quite different. However, as shown in Figure 3d. The correlation between global horizontal radiation and PV power output always keeps a high value, most are distributed between 0.8 and 1.0, a few are between 0.4 and 0.7. Thus, it is more reasonable to use global horizontal radiation to collect similar time periods.
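The correlation analysis above can be reproduced with a short script such as the following; it assumes ρ_{d,p} is the ordinary sample (Pearson) correlation coefficient and uses hypothetical variable names for the per-period series.

```python
import numpy as np

def pearson_corr(x, y):
    """Sample correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

def factor_correlations(factors, power):
    """factors: dict of meteorological series for one time period (hypothetical layout);
    power: the matching PV power series for the same period."""
    return {name: pearson_corr(series, power) for name, series in factors.items()}
```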
Therefore, in order to ensure that the forecasting model can adapt to different seasons while learning the slight differences between time periods, the method separates the different time periods and uses a sliding window for VST power prediction.
The range of the climate parameters in a short period of time is small, and climate change is relatively stable in the northwestern region where the PV plant is located. It may therefore work to speculate on the climate at the predicted point by analyzing the climate situation before the predicted point. On this basis, the origin (0,0,0) is set as the reference point, and the global horizontal radiation, which has the highest correlation with power output, is selected. By setting the global radiation in the time period before a given time point to the coordinates of its start, mean, and end values, the Euclidean distance between these coordinates and the origin can be calculated as follows:
d_i = sqrt( G_start_i² + G_mean_i² + G_end_i² ),
where G_start_i, G_mean_i, and G_end_i are the start, average, and end values of the global horizontal radiation for the time period (i−n, i−1) before the predicted time point, respectively, and n is the number of time points in the selected time period. Five conditions with n equal to 2, 3, 4, 5, and 6 are selected for verification, and i represents the index of the time point during the day of the test. The correlation ρ_{d,p} between d and p is then calculated, where p is the PV power output value at time-step i.
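A sketch of the radiation-coordinate construction and its correlation with the next-step power output is given below; the array layout and the default window length n = 4 are illustrative assumptions.

```python
import numpy as np

def radiation_coordinate(global_rad, i, n=4):
    """(start, mean, end) of global horizontal radiation over the n points before i."""
    window = np.asarray(global_rad[i - n:i], float)
    return window[0], window.mean(), window[-1]

def distance_to_origin(coord):
    """Euclidean distance d_i from a radiation coordinate to the origin (0, 0, 0)."""
    return float(np.sqrt(sum(c ** 2 for c in coord)))

def d_power_correlation(global_rad, power, n=4):
    """Correlation between d_i and the power output at the predicted time point i."""
    idx = range(n, len(power))
    d = np.array([distance_to_origin(radiation_coordinate(global_rad, i, n)) for i in idx])
    p = np.array([power[i] for i in idx])
    return np.corrcoef(d, p)[0, 1]
```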
As shown in Table 2, the correlation between the d values of different-scale radiation coordinates and the PV power output p at the next moment is obtained. In Figure 4, it can be observed that they have high correlation values; moreover, the ρ_{d,p} mean value and standard deviation for the different time-step lengths are shown in Figure 4a,b, and when the number of time points is 4, the correlation is higher and the stability is better. Thus, the 4 time points before the prediction point are selected as the analysis time period.
According to the above-mentioned characteristics, the radiation coordinate classification (RCC) method is proposed as the classification method for selecting similar time periods. The data corresponding to the similar time periods obtained are reconstructed into a training dataset; after training, the data of the corresponding period are input to predict the power output. The specific process is as follows. Firstly, collect the data in the same time period of the last 30 days before the current day, together with the first two time periods before the target time point. Reconstitute these data into a feature array A composed of (Pt, Tt, Ht, Gt, Dt, Pt+1). Then, the power and meteorological parameters in the feature array are normalized. The normalization formula is defined as
A_kl_new = (A_kl − A_kmin) / (A_kmax − A_kmin),
where A_kl_new represents the data obtained after normalization, A_kl is the specific value of the power or meteorological data, k indicates the start, mean, or end value listed, l indicates the number of hours in the unit time period, and A_kmin and A_kmax are the minimum and maximum values of the meteorological data in the feature column. Secondly, the radiation classification feature coordinate is defined as (G_start, G_mean, G_end), where the definition of each parameter is the same as above.
Combine the 32 time periods into corresponding three-dimensional vectors, and calculate the Euclidean distance λ between each of these 32 feature coordinates and the target period feature coordinate. The formula is defined as
λ = sqrt( (G_start − G'_start)² + (G_mean − G'_mean)² + (G_end − G'_end)² ),
where G'_start, G'_mean, and G'_end represent the start, mean, and end values of the global horizontal radiation for the specified time period before the predicted time, respectively. An empirical threshold λ' is set, and the time periods for which λ is less than λ' are selected. The meteorological and power data corresponding to these time periods are used as the dataset for training the network.
The data include the temperature (Tt), relative humidity (Ht), global horizontal radiation (Gt), diffuse horizontal radiation (Dt), PV power output (Pt), and the PV power output at the next time point (Pt+1) of each step. These data have been normalized, and they are arranged from far to near, according to the distance from the target time period, as the training set of the prediction network.
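The similar-time-period selection described above can be sketched as follows; the threshold value and the far-to-near ordering follow the text, while the fallback to a fixed number of closest periods is an assumption taken from Step 3 later in the paper.

```python
import numpy as np

def select_similar_periods(hist_coords, target_coord, lam_thresh, k_min=10):
    """Select similar time periods by the Euclidean distance lambda between each
    historical radiation coordinate and the target-period coordinate."""
    hist = np.asarray(hist_coords, float)      # shape (32, 3): (G_start, G_mean, G_end)
    target = np.asarray(target_coord, float)   # shape (3,)
    lam = np.sqrt(((hist - target) ** 2).sum(axis=1))
    near_to_far = np.argsort(lam)
    selected = [i for i in near_to_far if lam[i] < lam_thresh]
    if len(selected) < k_min:                  # assumed fallback (cf. Step 3 below)
        selected = list(near_to_far[:k_min])
    # Training samples are arranged from far to near, as described in the text.
    return sorted(selected, key=lambda i: -lam[i])
```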
LSTM Recurrent Neural Network
As one of the most advanced recurrent neural networks, the long short-term memory (LSTM) recurrent neural network has shown remarkable results in numerous time-series learning tasks [51,52]. Unlike the neurons of traditional recurrent neural networks, the LSTM has memory blocks connected through successive layers, and it enables the network to selectively memorize the input training data through a unique three-gate structure. This structure ensures that the network can learn the multivariate influences of nonlinear tasks. In addition, the cascade structure of the LSTM gives it excellent performance on time-series problems. For example, there are some good examples of forecasting work based on LSTM. Wang et al. [53] establish a hybrid day-ahead PV power forecasting model based on CNN and LSTM. This model first uses the CNN to extract local features of the data and then applies the LSTM to extract the overall temporal features, and the prediction performance is outstanding.
Fundamentally, there are three logic gate structures in every single cell, namely the forget gate, the input gate, and the output gate, and each operation process mainly includes four sub-operations, as shown in Figure 5.
The formula corresponding to each part of the operation is as follows [54]:
Forget gate: f_t = σ(W_f · [h_{t−1}, X_t] + b_f).
Input gate: i_t = σ(W_i · [h_{t−1}, X_t] + b_i), C̃_t = tanh(W_C · [h_{t−1}, X_t] + b_C).
Merge process: C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t.
Output gate: o_t = σ(W_o · [h_{t−1}, X_t] + b_o), h_t = o_t ⊙ tanh(C_t).
Here σ(·) is the sigmoid function and ⊙ denotes element-wise multiplication. The cell cascade structure is shown in Figure 6 [51], where h_t is P_{t+1} and represents the power output at the next moment, X_t is an eigenvector composed of (T_t, H_t, G_t, D_t, P_t), and T_t, H_t, G_t, and D_t represent the four multivariate meteorological factors at time point t, while P_t represents the power output at time point t. The training process of the LSTM is shown in Figure 7.
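For reference, a single LSTM cell update implementing the standard gate equations cited above can be written as follows; this is a generic sketch, not the authors' TensorFlow implementation, and the stacked weight layout is an assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell update. W maps the concatenated [h_prev, x_t] to the four
    gate pre-activations (forget, input, candidate, output) stacked row-wise."""
    hidden = h_prev.size
    z = W @ np.concatenate([h_prev, x_t]) + b
    f = sigmoid(z[:hidden])                   # forget gate
    i = sigmoid(z[hidden:2 * hidden])         # input gate
    g = np.tanh(z[2 * hidden:3 * hidden])     # candidate cell state
    o = sigmoid(z[3 * hidden:])               # output gate
    c_t = f * c_prev + i * g                  # merge process
    h_t = o * np.tanh(c_t)                    # hidden state / output feature
    return h_t, c_t
```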
Error (Evaluation) Metrics
To prove the stability of the proposed RCC-LSTM-based PV output prediction method, the coefficient of determination (R²), the mean absolute percentage error (MAPE), the mean absolute error (MAE), and the root mean square error (RMSE) metrics are calculated. The definitions of these error metrics are shown below [1].
The R² is defined as
R² = 1 − Σ_{i=1..N} (P_a,i − P_f,i)² / Σ_{i=1..N} (P_a,i − p̄_a)².
The MAPE is defined as
MAPE = (1/N) Σ_{i=1..N} |P_f,i − P_a,i| / P_a,i × 100%.
The MAE is defined as
MAE = (1/N) Σ_{i=1..N} |P_f,i − P_a,i|.
The RMSE is defined as
RMSE = sqrt( (1/N) Σ_{i=1..N} (P_f,i − P_a,i)² ).
In this study, forecast results of the model run for a whole day are evaluated. P_f,i and P_a,i represent the predicted and real PV output at time point i, respectively, p̄_a is the average value of the actual PV output, and N is the number of prediction sample points. N equals 109 in this study.
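These four metrics can be computed as below; the implementation follows the standard definitions given above and assumes no zero-valued actual power points in the MAPE denominator.

```python
import numpy as np

def metrics(p_actual, p_forecast):
    """R^2, MAPE (%), MAE and RMSE for one day of forecasts (N points)."""
    a, f = np.asarray(p_actual, float), np.asarray(p_forecast, float)
    err = f - a
    r2 = 1.0 - np.sum(err ** 2) / np.sum((a - a.mean()) ** 2)
    mape = 100.0 * np.mean(np.abs(err) / a)   # assumes all actual values are nonzero
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    return {"R2": r2, "MAPE": mape, "MAE": mae, "RMSE": rmse}
```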
Proposed Methodology
In order to describe the method more intuitively, the implementation procedure of the prediction method is shown in Figure 8. The detailed steps of the RCC-LSTM prediction model are shown below:
Step 1: Collect historical PV power output and multivariate meteorological factors datasets. The meteorological factors include air temperature, relative humidity, global horizontal radiation and diffuse horizontal radiation.
Step 2: Preprocess the data, including the cleaning of abnormal data and normalization.
Step 3: According to the meteorological characteristic values of the time period before the forecasting point, the RCC algorithm is used to determine the similar time periods of the forecasting time period in the sample sets. A threshold λ' is set in the RCC; if fewer than 10 samples satisfy it, the data of the first 10 time periods with the smallest λ values are selected as the training dataset of the neural network.
Step 4: Determine the cell numbers of the LSTM, and initialize the threshold values and weights of LSTM RNN, respectively.
Step 5: The LSTM neural network is trained by using similarity time period samples, and then the prediction model is obtained.
Step 6: Input the power output and the values of the meteorological factors of the specific time period before forecasting time points into the prediction model to forecast the power output value.
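Steps 4–6 can be sketched with tf.keras as follows; the network size, optimizer, number of epochs, and tensor shapes are illustrative assumptions rather than the settings used in the paper.

```python
import numpy as np
import tensorflow as tf

def build_lstm(n_steps=4, n_features=5, n_units=32):
    """Small LSTM regressor: 4 past time points x (T, H, G, D, P) -> next power."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(n_units, input_shape=(n_steps, n_features)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def forecast_next(model, X_train, y_train, x_query):
    """X_train: (n_samples, 4, 5) similar-period windows selected by RCC,
    y_train: (n_samples,) power at the next time point (hypothetical arrays)."""
    model.fit(X_train, y_train, epochs=50, verbose=0)
    return float(model.predict(x_query[np.newaxis, ...], verbose=0)[0, 0])
```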
Results and Discussion
To verify the validity of the proposed RCC-LSTM model, several typical networks, including RCC-BPNN, RCC-RBFNN [47], RCC-Elman, and LSTM-RNN [55], are chosen for comparison, and the tests are conducted in four seasons and on two different PV systems. In addition, four different evaluation metrics (RMSE, MAPE, MAE, and R²) are applied to verify the prediction accuracy of the RCC-LSTM model. Figures 9 and 10 represent the forecasting result curves obtained by running the different prediction models on two random days, respectively.
To further test the performance of the proposed RCC-LSTM model in different seasons, several days in different seasons are chosen to expand the validation sample set, with three random days in each season. The detailed information about the different evaluation metrics is shown below.
The metric results of the RMSE in SITE 3A and SITE 4 are shown in Figures 11 and 12, respectively. The RCC-LSTM model has the best prediction accuracy: the mean value of RMSE is 0.94 KW (in SITE 3A), which is the minimum among all models; compared with the other models, the enhancement is 24.79%, 23.25%, 45.83%, and 8.23%, respectively. In SITE 4, the RMSE has a mean value of 12.58 KW, and the enhancement is 38.38%, 28.18%, 55.33%, and 16.08%, respectively. However, due to the limited dataset, the standard deviation performance of LSTM in RMSE is slightly lower than that of RCC-LSTM.
As seen in Figures 13 and 14, compared with the four other models, in SITE 3A the average MAPE of the presented model is reduced by 28.70%, 23.30%, 43.40%, and 9.67%, respectively. In SITE 4, the presented model's average MAPE improvement relative to the four compared models is 44.04%, 31.31%, 53.11%, and 18.40%, respectively.
In addition to the RMSE, MAPE, and MAE, R² is also a meaningful parameter for evaluating prediction models; the average value of R² (in %) and the standard deviation of the different forecasting models are presented in Figure 17. Owing to the self-updating time window, all these models have good correlation performance, while the results of the proposed model are still better than the others. In SITE 4, the mean value and the standard deviation of R² are 0.9747 and 0.0176, respectively; they are both the best among all models. The situation in SITE 3A is the same as in SITE 4.
Therefore, the RCC-LSTM model has outstanding performance in the VST prediction of PV generation, especially in desert areas where the weather changes are more moderate. As the data set accumulates, the forecasting result of cloudy weather will also be improved.
The environment framework of this experiment is TensorFlow, implemented in Python 3.6 on a 64-bit personal computer with an Intel(R) Core(TM) i5-7300HQ CPU @ 2.50 GHz and 8.00 GB of RAM. As shown in Figure 18, the average training time cost at the different time points and the average runtime of every time point are presented in Figure 18a,b, respectively. Owing to the similar-time-period collection strategy for the training set, the training dataset is small. Thus, every real-time prediction step only requires a few seconds, which is acceptable in practical applications. Furthermore, because the training dataset of RCC-LSTM is selected by RCC, compared with the LSTM model, the RCC-LSTM not only improves the prediction accuracy but also reduces the training time cost of the prediction model. The average training time cost of RCC-LSTM is 28.84% lower than that of LSTM, and the runtime could be reduced much further by improving the hardware environment or optimizing the code.
Conclusions and Future Work
A simple and effective RCC-LSTM model for VST PV power forecasting is proposed in this paper. The proposed method applies the RCC method as a tool for collecting similar time periods and employs the LSTM to extract features from the time-series photovoltaic power data and to learn long-term information in the sequence. Based on the dataset from two independent PV systems located in Central Australia and at five-minute forecasting horizons, a comprehensive comparative study is conducted to compare the proposed RCC-LSTM method with four available data-driven methods, including RCC-BPNN, RCC-Elman, RCC-RBFNN, and LSTM. Then, four error metrics are calculated and compared. The average daily RMSE, MAPE, and MAE of the RCC-LSTM model at site 3A are 0.940 KW, 5.053%, and 0.587 KW, respectively, and those at site 4 are 12.58 KW, 4.449%, and 7.590 KW, respectively. Compared with the other methods, the average enhancement of RMSE, MAPE, MAE, and R² is 30.01%, 31.49%, 31.50%, and 2.152%, respectively, which illustrates the superior performance of the proposed method. In addition, the average prediction time cost of the RCC-LSTM is 28.84% lower than that of the basic LSTM. Therefore, it is proven that the proposed model can be used to predict VST PV power generation for PV power plants.
In future work, the proposed model can be improved in terms of its architecture and training data, and a more flexible selection of the threshold values should be implemented. Also, the cell number can be adjusted to be applied to different weather conditions.
"Engineering",
"Computer Science"
] |
The Design of Contactors Based on the Niching Multiobjective Particle Swarm Optimization
Contactors are important components in circuits. To solve the multiobjective optimization problems (MOPs) of contactors, a niching multiobjective particle swarm optimization (NMOPSO) with the entropy weight ideal point theory is proposed in this paper. The new algorithm selects and archives the nondominated solutions based on the niching theory to ensure the diversity of the nondominated solutions. To avoid missing the extreme solutions of each objective during the multiobjective optimization process, extra particle swarms that search for the independent optimal solution of each objective are added to this algorithm. In order to determine the best compromise solution, a method to select the compromise solution based on the entropy weight ideal point theory is also proposed in this paper. Using the algorithm to optimize the characteristics of a typical direct-acting contactor, the results show that the proposed algorithm can obtain the best compromise solution in MOPs.
Introduction
Contactors are very important electrical equipment used to control circuits. The performance of contactors directly influences the safety and stability of the circuit. In recent years, with the increasing demands on contactor performance, the optimization problem of contactors has become a hot issue. The optimization problem of contactors is a typical multiobjective optimization problem: the characteristics and the power consumption are all optimization objectives. In the past, due to the limited calculation efficiency of the contactors' characteristics, the optimization of contactors was mostly based on orthogonal experiments, which placed more emphasis on single-objective optimization [1,2]. In recent years, with the development of approximation models, the calculation efficiency of the contactors' characteristics has been improved, and it has become possible to apply multiobjective optimization algorithms to the optimization problem of contactors [3][4][5].
In the early days, multiobjective optimization was realized by linear weighting, setting constraints, and other methods that integrate the multiple objectives into a single objective. These methods were complicated and easily affected by subjective experience. In recent years, with the development of intelligent optimization algorithms, combining intelligent optimization algorithms with the Pareto optimal solution to achieve multiobjective optimization has received wide attention. Among these algorithms, multiobjective genetic algorithms (GA) have developed fastest. Some of these algorithms are based on nondominated sorting, such as the nondominated sorting genetic algorithm (NSGA), the nondominated sorting genetic algorithm II (NSGA-II), and the niched Pareto genetic algorithm (NPGA) [6][7][8][9], and some others are based on decomposition, such as the multiobjective evolutionary algorithm based on decomposition (MOEA/D) and MOEA/D-M2M [10,11]. All these multiobjective GA algorithms can realize multiobjective optimization. However, due to the limitations of the GA, these algorithms still have some disadvantages in optimization efficiency and effectiveness. Since the particle swarm optimization (PSO) algorithm is more effective than the GA in many cases because of its simple operation, fast convergence rate, and excellent searching ability, some researchers have studied multiobjective particle swarm optimization (MOPSO) [12][13][14]. Parsopoulos and Vrahatis have solved the dual-objective optimization problem by using a two-particle swarm optimization algorithm [15]. Coello et al. have proposed a MOPSO based on an adaptive grid and introduced the concept of an archive [16]. Brits et al. have proposed NichePSO based on the niching theory, borrowed from the idea of EAs, which has been widely used for solving MOPs [17]. In [12], Qu et al. have proposed a distance-based locally informed particle swarm (LIPS) and eliminated the need for a niching parameter in PSO. In [18], a recently developed multiobjective particle swarm optimizer (D2MOPSO) is proposed; a new archiving technique that facilitates attaining better diversity is used in this algorithm. In [19], a dynamic multiple-swarm PSO is proposed, in which the number of swarms is adaptively adjusted throughout the search process via the proposed dynamic swarm strategy. Lin et al. have proposed a novel MOPSO algorithm using multiple search strategies (MMOPSO), where two search strategies are designed to update the velocity of each particle [20]. Most of these algorithms focus on the selection of the best solutions and the archiving strategy to make the solution set more diverse; some similar algorithms can be found in [21][22][23][24]. Campos et al. proposed a bare bones particle swarm optimization with scale matrix adaptation (SMA-BBPSO) to avoid the premature convergence problem [25]. In [26], an information sharing mechanism (ISM) is proposed to improve the performance of a particle. In [27], Qin et al. have proposed an improved PSO algorithm with an interswarm interactive learning strategy (IILPSO) that overcomes the drawbacks of the canonical PSO algorithm's learning strategy. These scholars care more about the convergence and the learning strategy of PSO, which improve MOPSO indirectly [28][29][30]. Most of the above algorithms are based on archiving, and these algorithms have been used in many fields [31][32][33][34].
The MOPSO variants above focus on diversity rather than on the limiting case of each objective, so they often miss the independent optimal solutions because of their selection and archiving strategies. At the same time, the results obtained by these MOPSO algorithms are mostly Pareto optimal sets; in engineering, a compromise solution needs to be selected from such a set, and the above algorithms do not propose a method for selecting it. In view of these problems, this paper proposes an NMOPSO that considers the independent optimal solutions and gives a method to determine the compromise solution based on entropy weight ideal point theory.
Taking a typical high-power DC contactor as an example, this paper first establishes an approximate model of its static and dynamic characteristics based on a radial basis function (RBF) network, so that the computational efficiency of the contactor characteristics meets the needs of the NMOPSO. Then, the compromise solution is obtained with the NMOPSO proposed in this paper. The objectives before and after the optimization are compared to demonstrate the effect of the algorithm. The proposed multiobjective optimization algorithm is also suitable for similar optimization problems in other fields.
Approximate Model of Contactors' Static and Dynamic Characteristics
The optimization objectives of contactors include static and dynamic characteristics, power consumption, mass, and so on, so the optimization of contactors is a typical multiobjective optimization problem. Among all these objectives, the static and dynamic characteristics are the most important: they directly influence the contactor's switching speed and indirectly influence its life and reliability. The static characteristics of a contactor comprise the electromagnetic flux and the electromagnetic force.
The dynamic characteristics of a contactor include the making time and the speed of the armature. They can be solved with a fourth-order Runge-Kutta method once the flux and the force are known, so the key to obtaining the static and dynamic characteristics is the calculation of the flux and the force. The finite element method (FEM) is widely used to calculate the magnetic field; however, it takes minutes or even hours to solve the dynamic characteristics by FEM, and this computational cost limits its application in optimization. It is therefore necessary to replace the FEM by an approximate model to realize fast calculation of the static and dynamic characteristics. The RBF network is an accurate approximate model; it is suitable for the calculation of contactor characteristics because of its fast convergence speed and strong local approximation ability. The principle of an RBF network is to represent the objective function by a sum of radial basis functions [35], i.e., y(x) = ∑_{i=1}^{m} λ_i φ(r_i) + ε, where y is the real value of the objective, the sum without the error term is the value obtained by the RBF network, ε is the error, λ_i are the weights, φ is the RBF, r_i = ‖x − x_i‖ is the distance between the input vector and a center, x is the input vector, x_i are the centers of the RBF, m is the number of centers, and c is a real constant appearing in φ. Collecting the basis-function evaluations into a matrix, this relation can also be expressed in matrix form, and the weights can then be obtained by solving the resulting linear system. An RBF network is built in two steps: the first is selecting the centers of the RBF, and the second is determining the weights. The selection of the centers is the key step and directly influences the accuracy of the approximate model. The most common center-selection methods are random selection, k-means clustering, orthogonal least squares (OLS), and so on; among them, OLS is a relatively accurate method. OLS selects the centers one by one from the sampling points by judging the contribution rate of each sampling point to the error, so the resulting RBF network has relatively few centers. The network is therefore computationally efficient and suitable for calculating the characteristics of contactors: computing the dynamic characteristics with the RBF networks takes only about 1.5 s, which satisfies the needs of optimization algorithms. The multiobjective optimization of the contactor can then be achieved by combining the RBF network with a multiobjective optimization algorithm.
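To make the two-step construction concrete, the following minimal Python sketch fits an RBF surrogate with multiquadric basis functions and predicts new points. The training data, the constant c, and the choice of using all sampling points as centers (rather than OLS selection) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def rbf_fit(X, y, centers, c=1.0):
    """Solve for the weights lambda in y ~ sum_i lambda_i * phi(||x - x_i||)."""
    r = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # distances to centers
    Phi = np.sqrt(r**2 + c**2)                     # multiquadric basis phi(r) = sqrt(r^2 + c^2)
    lam, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares weights
    return lam

def rbf_predict(Xnew, centers, lam, c=1.0):
    r = np.linalg.norm(Xnew[:, None, :] - centers[None, :, :], axis=2)
    return np.sqrt(r**2 + c**2) @ lam

# Hypothetical usage: surrogate for one contactor characteristic (e.g., retention force)
X = np.random.rand(50, 5)          # 50 FEM samples of 5 design parameters (placeholder data)
y = np.random.rand(50)             # corresponding FEM outputs (placeholder data)
lam = rbf_fit(X, y, centers=X)     # here every sample is used as a center
y_hat = rbf_predict(X, X, lam)
```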
NMOPSO with the Entropy Weight Ideal Point Theory
3.1. The Multiobjective Optimization Problems. An MOP can be described as follows [36]: minimize F(X) = [f_1(X), f_2(X), …, f_m(X)], subject to g_j(X) ≤ 0 and h_k(X) = 0, where F(X) is the objective vector, f_i(X) is the ith optimization objective, m is the number of optimization objectives, X = (x_1, …, x_n) is the decision vector, and g_j(X) and h_k(X) are the constraints. The objectives of an MOP are interrelated: the improvement of one objective is often accompanied by the deterioration of others. The solutions of an MOP cannot easily be compared with each other, and there is no single solution that makes all objectives reach their optimum. Therefore, Pareto dominance is used, and a Pareto optimal set is obtained instead of a single best solution.
Definition 1 (Pareto dominance). Let X_a and X_b be decision vectors of the MOP; X_a dominates X_b if f_i(X_a) is not worse than f_i(X_b) for every objective i and is strictly better for at least one objective. Definition 2 (Pareto optimal). A decision vector is Pareto optimal if it is not dominated by any other feasible decision vector; all Pareto optimal decision vectors form the Pareto optimal set. Definition 3 (Pareto front). All the objective vectors corresponding to the Pareto optimal set make up the Pareto front of the multiobjective problem.
NMOPSO Considering the Independent Optimal Solutions
Particle swarm optimization (PSO) was proposed by Kennedy and Eberhart in 1995 [37]. They compared the optimization process with the process of birds searching for food and designed optimization strategies based on the behavior of the flock. The basic PSO is used for single-objective optimization, and its core is the update rule for the position and velocity of each particle: when birds search for food, each one follows both its own experience and the birds that have performed better. Accordingly, the velocity and position of each particle in the swarm are updated as v_ij(t+1) = W v_ij(t) + c_1 r_1 (P*_ij − x_ij(t)) + c_2 r_2 (P*_gj − x_ij(t)) and x_ij(t+1) = x_ij(t) + v_ij(t+1), where t is the iteration counter, v_ij and x_ij are the jth dimension of the velocity and position of particle i, W is the inertia weight, which determines whether particles have better global or better local search ability, c_1 is the cognition weight, c_2 is the social weight, r_1 and r_2 are two random values uniformly distributed in (0, 1), P*_ij is the jth dimension of the personal best (pbest) of particle i, and P*_gj is the jth dimension of the global best (gbest) of the swarm.
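The update rule above translates directly into a few lines of NumPy; the parameter values (W = 0.5, c1 = c2 = 2.0) below are common defaults chosen for illustration rather than values taken from this paper.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, W=0.5, c1=2.0, c2=2.0):
    """One velocity/position update for a swarm of shape (n_particles, n_dims)."""
    r1 = np.random.rand(*pos.shape)
    r2 = np.random.rand(*pos.shape)
    vel = W * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```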
There are two main differences between MOPSO and PSO. First, the result of MOPSO is not one best solution but a Pareto optimal set. Second, the solutions in MOPSO often cannot be compared directly with each other, so it is hard to select P*_ij and P*_gj. In view of these differences, the following two strategies are used in MOPSO, and the resulting algorithm is called NMOPSO.
(1) The archiving strategy based on the niching theory.
In the optimization process, an external archive is established to retain the nondominated solutions obtained so far; the final Pareto optimal set is obtained by sorting and deleting solutions in this archive. To maintain the diversity of the archive, the solutions are sorted by a fitness obtained from niching theory through formula (7): the fitness Ft_i of an archive member P_i decreases as other members crowd around it, through a sharing function s(d(i, j)) that is positive when the Euclidean distance d(i, j) between the ith and jth solutions is smaller than the niche radius σ and is shaped by a constant coefficient α. If the number of solutions in the archive exceeds the defined value, the last (worst-fitness) solutions are deleted.
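A minimal sketch of how such a niche-count-based fitness and archive truncation could look; the concrete sharing function sh(d) = 1 − (d/σ)^α for d < σ is the textbook choice and an assumption here, since formula (7) is not reproduced above.

```python
import numpy as np

def niching_fitness(F, sigma=0.1, alpha=1.0):
    """F: (n, m) objective values of archive members. Fewer close neighbours -> larger fitness."""
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)        # pairwise distances
    share = np.where(d < sigma, 1.0 - (d / sigma) ** alpha, 0.0)     # textbook sharing function
    return 1.0 / share.sum(axis=1)                                   # inverse niche count

def truncate_archive(archive, F, max_size, sigma=0.1, alpha=1.0):
    """Keep the max_size best-fitness (least crowded) members."""
    fit = niching_fitness(F, sigma, alpha)
    keep = np.argsort(-fit)[:max_size]
    return [archive[i] for i in keep], F[keep]
```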
(2) The selection strategy of the pbest and the gbest.
For each particle, if its new solution dominates its previous pbest in an iteration, the new solution is set as the pbest of the particle; otherwise, the new solution is set as the pbest with a probability of 50%. For all particles, the gbest is selected by roulette from the external archive according to the fitness.
By analyzing the optimal solution of each objective independently, the limiting case of that objective can be obtained. These solutions are called independent optimal solutions here; they are representative in the optimization and can be used to evaluate the optimization effect. However, in the optimization process, NMOPSO does not explicitly search for the independent optimal solution of each objective but converges to the Pareto optimal set. Moreover, the particle density near an independent optimal solution is high, so the independent optimal solution may be mistakenly deleted when the worst solutions are removed from the archive. To solve this problem, single-objective particle swarms are added to search for the independent optimal solution of each objective, and these solutions are stored in the archive to ensure that the final Pareto optimal set contains the limiting case of each objective. They also provide data support for determining the best compromise solution.
NMOPSO tends to converge prematurely to one or a few solutions because of its high convergence rate, and this is more obvious when the independent optimal solutions of each objective are added. Therefore, appropriate mutation must be added to the NMOPSO to maintain the diversity of the solutions. Here, each dimension of the particles is mutated with a probability Cb before each iteration, where Cb is calculated from formula (8) using the initial mutation probability Cb_0, the current iteration t, the maximum number of iterations N, and a constant coefficient e. Cb initially declines fast and then flattens out, which gives the PSO strong global search ability at the beginning of the optimization, to avoid falling into local optima, while still letting it converge quickly to the Pareto optimal set at the end of the optimization. With the above strategies, the Pareto optimal set is obtained in 8 steps.
Step 1. Initialize the multiobjective particle swarm, the single-objective particle swarms, and the external archive.
Step 2. Search the independent optimal solutions of each objective by the single-objective particle swarms.
Step 3. Calculate the value of each particle in the multiobjective particle swarm according to F(X). Keep the nondominated solutions and the independent optimal solutions in the external archive.
Step 4. Delete the dominated solutions and the repeated solutions in the archive. Calculate the fitness of the solutions in the archive and sort them by fitness; if the number of solutions in the archive exceeds the defined value, delete the last solutions.
Step 5. Select the gbest by the roulette in the external archive according to the fitness.
Step 6. For each particle in the current iteration, judge whether it is dominated by its previous solution and select the pbest (a sketch of this rule is given after these steps).
Step 7. Update the velocity and position of each particle according to formula (6), and mutate the particle according to formula (8).
Step 8. If the number of iterations equals N, stop the algorithm; otherwise, return to Step 3.
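As one concrete piece of the loop, Step 6's pbest rule (keep the new solution if it dominates the old pbest, otherwise accept it with 50% probability) can be sketched as follows; minimization of all objectives is assumed, and the arrays at the bottom are placeholder data.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization assumed)."""
    return np.all(fa <= fb) and np.any(fa < fb)

def update_pbest(pos, F, pbest, pbest_F):
    """Step 6: per-particle pbest update with the 50% acceptance rule from the text."""
    for i in range(len(pos)):
        if dominates(F[i], pbest_F[i]) or np.random.rand() < 0.5:
            pbest[i], pbest_F[i] = pos[i].copy(), F[i].copy()
    return pbest, pbest_F

# Placeholder usage with 50 particles, 5 decision variables, 2 objectives
pos, F = np.random.rand(50, 5), np.random.rand(50, 2)
pbest, pbest_F = update_pbest(pos, F, pos.copy(), F.copy() + 0.1)
```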
To verify the performance of the proposed NMOPSO, three standard test functions, ZDT1, ZDT2, and ZDT3, are used [38]. The Pareto front of ZDT1 is convex, that of ZDT2 is concave, and that of ZDT3 is discontinuous. According to formula (9), the distance between the solutions in the archive and the true Pareto front can be calculated; this is called the generational distance [39], GD = (1/n) √(∑_{i=1}^{n} dist_i²),
where n is the number of solutions in the archive and dist_i is the Euclidean distance between the objective vector of the ith nondominated solution and the nearest member of the true Pareto optimal set. GD = 0 means all the nondominated solutions lie on the Pareto front.
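Computing the generational distance for an archive against a sampled true front is straightforward; the sketch below assumes both are given as arrays of objective vectors.

```python
import numpy as np

def generational_distance(archive_F, true_front_F):
    """GD = sqrt(sum of squared nearest-front distances) / n."""
    d = np.linalg.norm(archive_F[:, None, :] - true_front_F[None, :, :], axis=2)
    nearest = d.min(axis=1)            # distance of each archive point to the front
    return np.sqrt(np.sum(nearest**2)) / len(archive_F)

# Illustrative use with random placeholder data
gd = generational_distance(np.random.rand(100, 2), np.random.rand(500, 2))
```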
The NMOPSO is compared with two classical algorithms, NSGA-II and MOPSO, and with two typical up-to-date algorithms, AGMOPSO and CDMOPSO [14]. Here, the number of particles is set to 50, the number of nondominated solutions in the archive is set to 100, and the maximum number of iterations is set to 100. The results of each algorithm are obtained from 20 independent runs. Figure 1 shows a typical result of each algorithm, and Table 1 shows the comparison results.
Figures 1(a)-1(c) show the comparison of NMOPSO with the classical algorithms, and Figures 1(d)-1(f) show the comparison with the up-to-date algorithms. In Figure 1, all five algorithms converge to the Pareto optimal set and have good diversity. However, according to the data in Table 1, NMOPSO works significantly better than the others when ZDT3 is optimized and obtains the same or even better results compared with the up-to-date algorithms on ZDT1 and ZDT2. At the same time, because of its search and archiving strategy for the independent optimal solutions, the proposed NMOPSO captures the limiting case of each objective better; for ZDT3 in particular, it does so more reliably. This helps to determine the best compromise solution.
Selection of Compromise Solution Based on the Entropy Weight Ideal Point Theory
The result obtained by NMOPSO is a Pareto optimal set; in an engineering application, however, a decision maker needs to select a compromise solution from the set. Here, an entropy weight ideal point method is proposed to select the best compromise solution.
The ideal point is an important concept in MOPs. For a multiobjective optimization problem with Pareto optimal set P, the vector F_0 = (f_1^0, …, f_m^0), whose components f_i^0 are the best values attainable for each objective over P as in relation (10), is called the ideal point of the MOP; it corresponds to the independent optimal solutions of each objective.
Because of the mutual restraint among the objectives of an MOP, the ideal point generally cannot be attained. The ideal point method therefore looks for the X in the solution set that minimizes the weighted distance between F(X) and F_0, as in (11), where w_i is the weight of the ith objective. Because the unit and magnitude of each objective are different, the weights in (11) must be determined according to the actual situation of each objective; to avoid the influence of subjective experience, an entropy weight method is used here to determine them. In addition, the relative rate of change (f_i(X) − f_i^0)/max_{X∈P} f_i(X) is used in place of f_i(X) − f_i^0, which eliminates the impact of the units and characterizes the degree of change of each objective more accurately.
The entropy weight method determines the weight according to the amount of information carried by each objective: the more information an objective contains, the larger its weight. To obtain the entropy weight of each objective in the Pareto solution set, each objective is first standardized according to (12). Here PK = (pk_ij) is the nondominated solution set containing n solutions and m objectives, with i = 1, 2, …, n and j = 1, 2, …, m; pb_ij is the standardized data; pe_ij = pb_ij / ∑_{i=1}^{n} pb_ij is the probability of each value of the jth objective; and E_j is the entropy of the jth objective computed from these probabilities. If pe_ij = 1/n for all i = 1, 2, …, n, i.e., the probability of each value is the same, the objective contains no information and its weight is set to 0.
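A minimal sketch of the standard entropy-weight computation on a nondominated set; the min-max standardization used below is one common choice and an assumption, since equation (12) is not reproduced above.

```python
import numpy as np

def entropy_weights(PK):
    """PK: (n_solutions, m_objectives) objective values of the Pareto set."""
    span = PK.max(axis=0) - PK.min(axis=0)
    pb = (PK - PK.min(axis=0)) / (span + 1e-12)          # assumed min-max standardization
    pe = pb / (pb.sum(axis=0) + 1e-12)                   # probability of each value per objective
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -np.nansum(pe * np.log(pe), axis=0) / np.log(len(PK))  # entropy per objective
    w = 1.0 - E                                          # more information -> larger weight
    return w / w.sum()

weights = entropy_weights(np.random.rand(100, 4))        # placeholder Pareto set, 4 objectives
```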
The Optimization Results of Contactor
The structure of the typical direct-acting contactor is clear and its working principle is simple, so a typical direct-acting contactor is selected here to verify the effect of the proposed NMOPSO. The structure of the contactor is shown in Figure 2. The retention force of the contactor, the making time, the coil power, and the mass of the armature are selected as the optimization objectives, representing the static and dynamic characteristics of the contactor, the power consumption, and other needs. The retention force is expected to be larger, while the other objectives are expected to be smaller. The main parameters affecting the objectives are the size parameters X_1, R_1, R_2, R_3 and the coil resistance R_Ω. To ensure that there is no interference between the components of the contactor, the initial values and constraints of each parameter are given in Table 2.
Among the above four objectives, the coil power and the mass of the armature can be calculated directly from the coil resistance and the size parameters, whereas the retention force and the making time must be obtained from the approximate model of the contactor's static and dynamic characteristics. The dynamic characteristics are obtained by iterative solution of the static characteristics, so to validate the approximate model it suffices to verify the accuracy of the dynamic characteristics. Here, the coil current is measured with an oscilloscope and the armature displacement with a laser displacement sensor. Figure 3 compares the RBF network approximate model results, the FEM results, and the measured data.
In Figure 3, the three curves essentially coincide; the making time of this contactor is 23 ms in practice. To verify the performance of the NMOPSO in the contactor field, the retention force and the making time are optimized here, and the result is compared with the MOPSO. The number of particles is set to 50, the number of particles in the archive is set to 100, and the maximum number of iterations is set to 100 for all algorithms. Figure 4 shows the Pareto front of the force and the making time. Both algorithms can find the Pareto front of the MOP; however, thanks to the mutation and the retention of the independent optimal solutions, the results obtained by the NMOPSO are closer to the limiting case of each objective, and some solutions obtained by the NMOPSO clearly outperform those of the MOPSO. The performance of the NMOPSO is better and suits the optimization of the contactor.
Using the NMOPSO, the four contactor objectives above (the retention force, the making time, the coil power, and the mass of the armature) are optimized. The entropy weight of each objective is calculated by formula (12), and the independent optimal solution of each objective is recorded in Table 3. According to Table 3, the objectives of the contactor have considerable room for optimization.
The value of each objective after the optimization is shown in Table 4. Among these objectives, the improvement of the retention force and of the coil power is the most obvious: the force is increased by 32.3 N and the coil power is decreased by 1.2 W, while the making time and the mass are not obviously improved but remain at their original level. At this point, changing a parameter to improve one objective makes the other objectives worse. For example, reducing the coil resistance to 40 Ω while keeping the other parameters constant reduces the making time to 21.3 ms and increases its rate of improvement from 1.8% to 3.2%, but it also increases the coil power to 19.6 W, dropping its rate of improvement from 6.1% to 0%. The compromise solution obtained by the entropy weight ideal point method is therefore relatively better.
Conclusion
To solve the MOP of contactors, an NMOPSO that considers the independent optimal solutions and mutation is proposed in this paper. The NMOPSO selects and sorts the nondominated solutions through an archiving strategy based on the niching technique, and an entropy weight ideal point method is included to obtain the best compromise solution.
Taking a typical high-power DC contactor as an example, an approximate model of its static and dynamic characteristics is established based on the RBF network, so that the solving efficiency of the static and dynamic objectives satisfies the requirements of the MOP. Then, the objectives (the retention force of the contactor, the making time, the coil power, and the mass of the armature) are optimized by the proposed NMOPSO, and the best compromise solution is determined. After the optimization, the retention force of the contactor increased by 30% and the coil power decreased by 6%, while the other two objectives remained at their original level. In conclusion, the proposed method performs well for the optimization of contactors and yields good optimization results. The multiobjective optimization algorithm is also applicable to similar optimization problems in other fields.
Figure 1: The performance of five algorithms.
Figure 3: The comparison of the accuracy of the approximate model.
Figure 4: The Pareto front of the force and making time.
Table 1: The performance of five algorithms.
Table 2: The initial values of each parameter.
Table 3: The optimal and initial values of each objective.
Table 4: The final optimal results. | 5,887 | 2018-07-04T00:00:00.000 | [
"Computer Science"
] |
A Cooperative Model for IS Security Risk Management in Distributed Environment
Given the increasing cooperation between organizations, the flexible exchange of security information across allied organizations is critical to effectively manage information systems (IS) security in a distributed environment. In this paper, we develop a cooperative model for IS security risk management in a distributed environment. In the proposed model, the exchange of security information among the interconnected IS in the distributed environment is supported by Bayesian networks (BNs). In addition, for an organization's IS, a BN is utilized to represent its security environment and dynamically predict its security risk level, by which the security manager can select an optimal action to safeguard the firm's information resources. An actual case study illustrates the cooperative model presented in this paper and shows how it can be exploited to manage distributed IS security risk effectively.
Introduction
With the increase in collaboration between organizations, the management of information systems (IS) security risk is distributed across the allied organizations, and cooperative activities between organizations are imperative [1][2][3][4]. Therefore, to assess the security risk level of an IS in a distributed environment more effectively, it is critical to develop a system for the exchange of security information among the interconnected IS. However, achieving a flexible exchange of security information in a distributed environment is a significant modelling challenge [5]. Unfortunately, few previous studies on IS security take this issue into account.
In this paper, a cooperative model for IS security risk management is proposed to estimate the risk level of each associated organization's IS and to support the decision making of security risk treatment in a distributed environment. In the model, the exchange of security information among the interconnected IS is achieved through Bayesian networks (BNs). Moreover, a BN is also exploited to model the security environment of an organization's IS and predict its security risk level. However, it is a difficult and critical task for a security manager to establish an appropriate BN that is suitable for the environment of the organization's information systems [6][7][8].
To address this issue, in this paper, we develop an algorithm to support the BN initiation. Finally, based on the security risk level for an organization's IS, the security manager selects an optimal action to protect its information resources.
The remaining sections of this paper are organized as follows. We first review the relevant literature in Section 2. Then we discuss the development of the cooperative model in detail in Sections 3 and 4. The proposed model is further demonstrated and validated in Section 5 via a case study. Finally, we summarize our contributions and point out further research directions.
Literature Review
There has been increased academic interest in IS security risk management. From the technical literature, security protocols [9], firewall and intrusion detection techniques [10,11], and authentication technologies [12,13] have been examined. From an economics perspective, some researchers have investigated investment in information systems security [14,15], the economics of vulnerability disclosure [16,17], and the characteristics of internet security breaches that impact the market value of breached firms [18].
[Fragment of Table 1 — Communication between estimation components] It consists of the request id, the sender's id, and the probability distribution of the requested variable. Upon receiving the list of components capable of providing the required input from the registration component, the request component sends requests directly to these components. Then, the sender sends the probability distribution of the requested variable.
In recent years, a new managerial perspective on IS security has emerged from the literature. This perspective focuses on the managerial processes that control the effective deployment of technical approaches and security resources to create a secure IS environment in an organization. From this perspective, Feng and Li [19] proposed an IS security risk evaluation model based on improved evidence theory. For handling the uncertain evidence found in IS security risk analysis, their model provided a novel approach to defining the basic belief assignment of evidence theory; it also presented a method of testing evidential consistency, which is capable of resolving the conflicts arising from uncertain evidence. Then, to identify the causal relationships among security risk factors and analyze the complexity of vulnerability propagation, they developed a security risk analysis model (SRAM) [20], in which vulnerability propagation analysis is performed to determine the propagation paths with the highest IS security risk level. Yan [21] presented a conceptual model for IS security analysis, which facilitates the identification of potential security risks. Chen et al. [22] focused on controlling risks in the form of faults of information networks and developed an approach to estimate the risk level of the vulnerability of information networks.
Bayesian networks (BNs), also known as probabilistic belief networks, are a knowledge representation tool capable of representing dependence and independence relationships among random variables [23]. A BN consists of a directed acyclic graph whose nodes are the random variables and a set of conditional probability distributions (beliefs) for those variables. BN inference means computing the conditional probability of some variables given the evidence, which is defined as a collection of findings. This operation is also called probability updating or belief updating.
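As a toy illustration of belief updating, consider a two-node network "threat → incident" with made-up probability tables; conditioning on the observed evidence and renormalizing gives the posterior belief. The numbers are purely illustrative.

```python
# Toy BN: P(threat) and P(incident | threat); update the belief in 'threat' after
# observing 'incident = True' via Bayes' rule (exact inference by enumeration).
p_threat = {True: 0.2, False: 0.8}                   # prior (illustrative)
p_incident_given_threat = {True: 0.7, False: 0.1}    # likelihood (illustrative)

joint = {t: p_threat[t] * p_incident_given_threat[t] for t in (True, False)}
posterior_threat = {t: joint[t] / sum(joint.values()) for t in joint}
print(posterior_threat)   # e.g. {True: 0.636..., False: 0.363...}
```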
In this paper, the developed BN is not only used to facilitate the dynamic prediction of the security risk level of an organization's IS but is also exploited to model the IS security environment.
Model Architecture
In a distributed environment, the proposed model consists of many interconnected network information systems, which we call "associated members." Each associated member is installed with three kinds of components: a monitor component, an estimation component, and a treatment component. Besides these three kinds of components, a registration component contains the information about each estimation component; all estimation components in the distributed environment are required to register with it. The cooperative model architecture is shown in Figure 1.
The interactions between the estimation components and the registration component are shown in Figure 2. In the interactive process, as shown in Table 1, there are four kinds of shared information: search request, search reply, registration message, and communication between estimation components.
Bayesian Network Development
In this section, we present an algorithm based on ant colony optimization (shown in Algorithm 1) to develop the Bayesian network (BN), which is able to model the security environment of an associated member in a distributed environment.
The equations appearing in the algorithm are as follows.
(1) Heuristic information and (2) the pheromone updating rule, in which τ_ij is the pheromone level on the arc i → j, ρ (0 < ρ ≤ 1) is a parameter that controls the pheromone value, and G* is the BN structure that best suits the organization's IS.
(3) The probabilistic transition rule, in which two nodes i and j are chosen randomly according to a probability determined by the pheromone and heuristic information on the arc i → j.
The manager interface of our proposed model is shown in Figure 4, in which the security manager can specify the BN for each associated organization.
Once new evidence is obtained through the monitor components, the estimation component makes the BN update its belief (the probability distribution over the risk-level variable) in real time and exchanges the updated beliefs about the security state with the other associated members.
Conclusions
In a distributed environment, in order to manage information systems (IS) security effectively, a cooperative model based on Bayesian networks is presented and illustrated in this paper. We contribute to the IS security literature by supporting the exchange of security information among interconnected IS. Furthermore, for the modelling of the IS security environment, an algorithm based on ant colony optimization facilitates predicting the IS threat level more objectively. The model proposed in this paper has great potential for future extensions and refinements to provide more utility for the management of IS security.
"Computer Science"
] |
A Code for Simulating Heat Transfer in Turbulent Channel Flow
A numerical method was designed to solve the time-dependent, three-dimensional, incompressible Navier–Stokes equations in turbulent thermal channel flows. Its originality lies in the combination of several well-known discretization methods and in its parallel nature. A vorticity–Laplacian-of-velocity formulation is used, so the pressure has been removed from the system. Heat is modeled as a passive scalar, and any other quantity modeled as a passive scalar can be studied just as easily, including several of them at the same time. These methods have been successfully used for extensive direct numerical simulations of passive thermal flow for several boundary conditions.
Introduction
Turbulence is probably the open problem in physics with the most applications in daily life. Turbulent flows are intrinsic to almost any flow in engineering, but they are also extremely important in meteorology or the dispersion of contaminants. It is well known that we still lack even an existence and uniqueness theorem for the solution of the governing equations of turbulent flows, the Navier–Stokes equations. Moreover, these equations cannot be solved analytically apart from some trivial examples. It is generally admitted that simulation can be considered at three levels of detail, Reynolds-Averaged Navier–Stokes (RANS), Large Eddy Simulation (LES), and Direct Numerical Simulation (DNS) [1], which also correspond to different levels of precision. Both RANS and LES models can simulate typical industrial flows with the necessary accuracy and in practical computation times [2][3][4][5], but DNS is so far the only trustworthy method to investigate physical properties of flows that are little understood.
However, DNS is extremely expensive, as every scale of turbulence, both temporal and spatial, has to be properly resolved. This implies very fine grids and small time steps. Moreover, as a consequence of Kolmogorov's scales [6,7], the number of points needed to carry out the simulation grows extremely fast, as Re^{9/4}, where Re is the Reynolds number. Thus, to use DNS to understand turbulent flows, very simple geometries are employed, allowing very precise and fast numerical tools, which will be described in this paper. Canonical wall-bounded flows are pipes, channel flows, and the flat-plate boundary layer. These flows are in essence theoretical abstractions and do not appear as such in reality; however, they constitute the basic building blocks of more complete real flows and have been studied for 80 years. For isothermal flows, experiments on these canonical flows have been instrumental in the development of the theory since the birth of boundary layer research. In particular, they remain important for the study of very high Reynolds numbers, as DNS is still not able to reach Reynolds numbers as high as those of experiments. High Reynolds numbers are needed to obtain a clear separation of scales related to the near-wall turbulence cycle. The main control parameter is the friction Reynolds number Re_τ = h u_τ/ν, where ν is the viscosity, h is a characteristic length, and u_τ is the friction velocity at the wall. The largest experiment ever done was published in 2018, at a friction Reynolds number of 2 × 10^4 [8], a value that is absolutely out of reach for DNS. That said, simulations of wall-bounded turbulence are particularly helpful in identifying physical processes occurring in near-wall turbulence, as the whole velocity field (and its derivatives) is available. Moreover, experiments showing the 3D thermal field of a turbulent flow are almost impossible to perform; for thermal flows, DNS can give insight where experiments are impossible.
The first wall-bounded simulation was run by Kim, Moin, and Moser in 1987 [9] for channel flows, and in 1988 by Spalart [10] for turbulent boundary layers. Since then, the friction Reynolds number simulated has been growing continuously [11][12][13][14][15][16][17][18][19]. Some of these simulations, Ref. [12,16,18,19], were made with an earlier version of this code. The history is similar for thermal passive flows, but the Reynolds number reached grew at a slower pace. The first DNS of a thermal flow was carried out by Kim and Moin in 1987 [20]. They obtained first-order turbulence statistics for Re_τ = 180 and several Prandtl numbers, Pr = 0.1, 1, and 2; the Prandtl number is the ratio of momentum diffusivity to thermal diffusivity. They also computed a very important number for modeling, the turbulent Prandtl number. In addition, for the Prandtl number of air, Pr = 0.71, correlations between the velocity and the temperature were also calculated. A somewhat artificial boundary condition was imposed in which heat was generated internally and removed from both cold isothermal walls; this condition plays for the thermal field a role analogous to that of the pressure gradient for the velocity field. Later, Lyons et al. [21] performed a simulation for Re_τ = 150 and Pr = 1, with both walls kept at different temperatures. Finally, Kasagi et al. [22] performed a DNS for Re_τ = 150 and Pr = 0.71 with a more realistic boundary condition, the mixed boundary condition (MBC from now on). Under this condition, the average heat flux over both heating walls is constant and the temperature increases linearly in the streamwise direction, while the instantaneous heat flux may vary with time and position. Piller [23] showed that the MBC acts as an ideal isothermal boundary condition in the inner layer and as an ideal isoflux boundary condition (fixed heat flux) in the outer layer.
After these simulations, the trend has been to increase the friction Reynolds number for different molecular Prandtl numbers. However, the values of Re_τ and Pr are limited by the computational cost. Yano and Kasagi [24] stated that the computational cost can be approximated as L_x^2 L_y Re_τ^4 Pr^{3/2}. Moreover, even if only one equation has to be added to the system, this equation is also nonlinear. Typically, the cost of accurately computing the nonlinear term accounts for 80 to 90% of the total time of the simulation. Thus, adding the energy equation roughly doubles the computational time, and the Reynolds number achieved is still low for the majority of Prandtl numbers.
In this paper, we restrict ourselves to explaining the numerical code and its implementation. Since 2018, several works have been published using this algorithm, which was written in Fortran90. In Lluesma et al. [25], the authors validated the code and found the optimal size of the computational box for Reynolds numbers up to Re_τ = 2000. The range of Prandtl numbers below Pr = 0.7 was covered in Alcántara-Ávila [26]. Later, in [27], the large structures of Couette flow were analyzed. In [28,29], the authors extended the study to Pr > 0.7, reaching Pr = 10, and Re_τ = 5000 for Pr = 0.7. Finally, this code has been used to study the large structures found in stratification problems; see [30].
The structure of the paper is as follows. In the next section, the numerical problem is described. The third section is devoted to explaining the numerical techniques used for discretization in both time and space. The fourth section describes the parallelization techniques used. Finally, conclusions and future work are outlined.
Methods
The flow considered in this work is a turbulent channel flow driven by a pressure gradient (Poiseuille flow). The flow is treated as incompressible, and the thermal field as a passive scalar. As mentioned before, the boundary condition used for the thermal field is the MBC: a uniform heat flux heats both walls, introducing heat into the flow, and the wall temperature increases linearly in the streamwise direction and does not depend on time.
Figure 1 shows a schematic representation of the lower half of the computational box, with contours of an instantaneous snapshot of the streamwise velocity colored by the velocity magnitude. The flow moves from left to right. Periodic conditions are imposed at the streamwise and spanwise boundaries. As said above, the computational box was optimized in [25]. The dimensions of the box are L_x = 2πh, L_y = 2h, and L_z = πh in the streamwise, wall-normal, and spanwise directions, respectively. Coordinates in these directions are denoted by x, y, and z, and the corresponding velocities by U, V, and W, or, in index notation, U_i. The same convention is used for the vorticity, Ω = ∇ × U = (Ω_1, Ω_2, Ω_3) = (Ω_x, Ω_y, Ω_z), and the helicity, H = U × Ω = (H_1, H_2, H_3) = (H_x, H_y, H_z). The temperature is represented by T; however, the transformed temperature Θ (defined below) is used throughout the paper. Uppercase letters denote instantaneous flow magnitudes. Using the Reynolds decomposition, one can obtain the averaged value, denoted by an overbar, and the fluctuating part, denoted by a lowercase letter, of the flow magnitudes, i.e., U = Ū + u. The superscript + indicates normalization in wall units, using ν and u_τ = (τ_w/ρ)^{1/2}, where τ_w is the mean wall shear stress and ρ is the fluid density.
The behavior of turbulent flows is described by the Navier–Stokes equations, composed of the continuity and momentum equations, together with the energy equation: ∇·U⁺ = 0, ∂U⁺/∂t + (U⁺·∇)U⁺ = −∇P⁺ + Re_τ^{-1} ∇²U⁺ plus the driving pressure-gradient forcing, and ∂Θ⁺/∂t + (U⁺·∇)Θ⁺ = (Re_τ Pr)^{-1} ∇²Θ⁺ plus a source term involving U⁺/U⁺_xyz arising from the MBC, where U⁺_xyz, the average of U⁺ in time and in the three spatial directions, is the bulk velocity. In the energy equation, the transformed temperature Θ = T_w − T is used: since the MBC is used, the temperature in the channel increases linearly in the streamwise direction and periodic conditions cannot be applied to T; T_w contains the nonperiodic part of the temperature, which makes Θ periodic in the streamwise direction. Notice that the method described below is also useful for any other passive scalar, such as a NOx concentration. It is also important to point out that, for clarity, only the system with one energy equation is explained here; however, as heat transfer is treated as a passive scalar, several different Prandtl numbers can be run simultaneously. In what follows, to ease the discussion, the + superscripts will be omitted.
Using some algebra, which involves taking the curl of Equations (1) and (2) twice, these equations can be written in velocity–vorticity form, giving a fourth-order equation for V and a second-order equation for Ω_y, of the form ∂(∇²V)/∂t = h_v + Re_τ^{-1} ∇²(∇²V) and ∂Ω_y/∂t = h_g + Re_τ^{-1} ∇²Ω_y, where h_v and h_g collect the nonlinear parts of Equations (4) and (5), expressed in terms of the helicity components. The main advantage of this system is that the pressure is not present, which simplifies the problem. Notice that the remaining velocities and vorticities can easily be recovered from the continuity equation, ∂U/∂x + ∂V/∂y + ∂W/∂z = 0, and the definition of Ω_y, Ω_y = ∂U/∂z − ∂W/∂x; taking derivatives and reorganizing the terms yields Equations (10) and (11). As the flow is periodic in both x and z, it is natural to use Fourier methods in these directions. The Fourier expansion of any field ϕ(x, y, z) is ϕ(x, y, z) = ∑_{k_x, k_z} ϕ̂(k_x, y, k_z) exp[i(k_x x + k_z z)], where the hat indicates the Fourier coefficient, and k_x and k_z are the wavenumbers in x and z, respectively. In Fourier space, Equations (10) and (11) become algebraic and can be trivially solved for Û and Ŵ at each nonzero wavenumber pair. Notice that this technique cannot be used for the (0, 0) modes of U and W; for these, one writes and solves the corresponding Navier–Stokes equations instead. The pressure gradient in the spanwise direction is negligible, while the pressure gradient in the streamwise component can be used to keep the mass flow constant. Using Fourier transforms in x and z, Equations (3)–(5) become three decoupled problems, where ĥ_{t,k_x k_z} is the Fourier transform of the nonlinear term of the energy equation.
Problem (14)–(16) is usually solved by splitting V̂_{k_x k_z} into three parts (wavenumbers are omitted), V̂ = V̂_p + a V̂_a + b V̂_b, where a and b are chosen so as to fulfill the homogeneous Neumann boundary condition. Due to the symmetry of the problem, V̂_b(y) = V̂_a(−y), so only Equations (21), (22), (24) and (25) have to be solved. Notice that a 3D PDE has been transformed into N_x × N_z independent one-dimensional problems.
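To illustrate how U and W are recovered once V and Ω_y are known, the sketch below solves, for each nonzero wavenumber pair, the 2 × 2 algebraic system obtained from continuity and the definition of Ω_y in Fourier space; the array shapes and the FFT sign convention are assumptions of this sketch, not taken from the code described in the paper.

```python
import numpy as np

def recover_uw(dVhat_dy, Om_y_hat, kx, kz):
    """Solve  i*kx*U + i*kz*W = -dV/dy  and  i*kz*U - i*kx*W = Om_y
    for each Fourier mode (kx, kz) != (0, 0). Inputs are 2D arrays over (kx, kz)."""
    U = np.zeros_like(dVhat_dy, dtype=complex)
    W = np.zeros_like(dVhat_dy, dtype=complex)
    for i, kxi in enumerate(kx):
        for j, kzj in enumerate(kz):
            if kxi == 0 and kzj == 0:
                continue                  # the (0,0) mode is handled from its own equation
            A = np.array([[1j * kxi, 1j * kzj],
                          [1j * kzj, -1j * kxi]])
            rhs = np.array([-dVhat_dy[i, j], Om_y_hat[i, j]])
            U[i, j], W[i, j] = np.linalg.solve(A, rhs)
    return U, W

# Illustrative use for a small set of modes at one wall-normal location
kx, kz = np.arange(4), np.arange(3)
U, W = recover_uw(np.random.rand(4, 3) + 0j, np.random.rand(4, 3) + 0j, kx, kz)
```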
CFD Techniques
As said above, a Fourier method in x and z is the best option for this problem; it decouples problems (21)–(25) into a large number of problems in y. The numerical technique used in y is therefore critical to obtain an accurate and fast algorithm. This kind of problem has mostly been addressed by spectral methods [9,31]; a typical technique in channels is Chebyshev polynomials [11,32], which has also been applied to thermoconvective problems [33][34][35]. In this case, however, it is more efficient to use compact finite differences (CFD). The CFD method was introduced in a groundbreaking article by S. K. Lele in 1992 [36]. Lele's idea was to use finite differences to solve problems presenting a range of spatial scales, generalizing some Padé schemes that had been used earlier [37,38]. The main advantage of Lele's CFD is that it maintains the freedom of typical finite-difference (FD) methods in choosing the mesh points while offering very high precision, comparable to spectral methods. This is critical in turbulent problems, as one typically needs many more points close to the wall than in the outer regions of the flow.
Throughout this part of the work, the following notation is used. Let u(y) be a real-valued function. Supposing that u is differentiable enough, the first and second derivatives of u at the point y_i are denoted by u'_i = u'(y_i) and u''_i = u''(y_i); for a derivative of order n, we write u_i^{(n)} = d^n u/dy^n (y_i). Here, y_i is a point belonging to a certain discretization of the interval [a, b], where a and b are finite and y_0 = a, y_n = b. Without loss of generality, we focus on schemes for the second derivative. FD schemes compute an approximation of u''_i through a linear combination of the values of the function close to y_i; CFD, instead, relates a linear combination of the second derivatives to the values of the function. To explain the practical use of CFD schemes, we work with one specific example, the scheme used in [29], in which the authors used a stencil of seven points in the function and five in the second derivative. Assume that a relation of the form ∑_j α_j u''_{i+j} = ∑_j a_j u_{i+j} holds for some unknown coefficients α_j and a_j, where the sums run over the five-point and seven-point stencils, respectively. Without loss of generality, one of these coefficients can be set to one, so from now on α_0 = 1. Defining h_j = y_i − y_{i+j}, Taylor's theorem expands each value around y_i, and the relations between the coefficients a_j and α_j are derived by matching the Taylor series coefficients. In this case, the formal truncation error of the approximation is of tenth order. Translating this information into algebraic equations leads to a linear system for the coefficients. Note that this is the matrix that comes directly from matching Taylor's expansions; it is usually ill-conditioned, so the results can be very inaccurate. A typical procedure to overcome this problem is to normalize each row by its maximum absolute value.
As the mesh can be nonuniform, this system must be solved for each point of the mesh. When approaching the boundaries, no ghost points are used; instead, the stencils are adapted by removing the points that lie outside the interval, which reduces the formal truncation error there. Once the coefficients for every point have been computed, we obtain two sparse matrices, one containing the a_i coefficients, A_yy, and another one made of the α_i coefficients, B_yy, such that B_yy (u''_0, …, u''_n)^t = A_yy (u_0, …, u_n)^t, where the superscript ()^t denotes the transpose. Note that this relation can be used in both directions, to differentiate a function or to integrate it.
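The coefficient computation for one interior point can be sketched as follows: the unknowns are the seven a_j and four of the five α_j (with α_0 = 1), found by requiring the relation to be exact for monomials (y − y_i)^k, k = 0, …, 10. The stencil offsets and the row normalization follow the description above; the nonuniform grid at the bottom is purely illustrative.

```python
import numpy as np

def cfd_second_derivative_coeffs(y, i):
    """Coefficients of  sum_j alpha_j u''_{i+j} = sum_j a_j u_{i+j}
    with a 7-point stencil in u (j = -3..3), a 5-point stencil in u'' (j = -2..2),
    and alpha_0 = 1, by matching Taylor expansions up to (y - y_i)^10."""
    ju, jd = np.arange(-3, 4), np.arange(-2, 3)      # stencil offsets for u and u''
    hu, hd = y[i + ju] - y[i], y[i + jd] - y[i]
    rows, rhs = [], []
    for k in range(11):                              # exactness for monomials (y - y_i)^k
        p = hu**k                                    # monomial values on the u stencil
        d2 = k * (k - 1) * hd**(k - 2) if k >= 2 else np.zeros_like(hd)  # its 2nd derivative
        alpha_cols = d2[[0, 1, 3, 4]]                # columns for alpha_{-2}, alpha_{-1}, alpha_1, alpha_2
        rows.append(np.concatenate([p, -alpha_cols]))
        rhs.append(d2[2])                            # alpha_0 * u''_i term moves to the right-hand side
    A, b = np.array(rows), np.array(rhs)
    scale = np.abs(A).max(axis=1)                    # row normalization against ill-conditioning
    coeffs = np.linalg.solve(A / scale[:, None], b / scale)
    a = coeffs[:7]
    alpha = np.array([coeffs[7], coeffs[8], 1.0, coeffs[9], coeffs[10]])
    return a, alpha

y = np.cumsum(np.linspace(0.01, 0.03, 20))           # illustrative stretched grid
a, alpha = cfd_second_derivative_coeffs(y, 10)       # coefficients at one interior point
```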
Time Discretization
To get good accuracy in reasonable computation times, a third-order Runge–Kutta method has been chosen, derived in [39]. For any problem of the form ∂ϕ/∂t = Lϕ + N(ϕ), where L is a linear operator and N is nonlinear, the equation is advanced through three substeps, with ϕ_i = ϕ(t_i) for t_n = t_0, t_1, t_2, t_3 = t_{n+1}. This method presents two problems. The first is a memory problem, due to the need to store two nonlinear terms in substeps two and three. The second is a possible loss of accuracy near the wall, due to the explicit computation of the linear operator on the right-hand side. Both problems are solved in the algorithm with a little algebra: when ϕ_2 is to be computed, the right-hand side of the second substep, except for the nonlinear term N_1, can be computed and stored in a single memory buffer. The same tactic can also be applied to the computation of ϕ_1, but only when Δt does not change; if the time step changes, the derivative of ϕ_0 has to be computed explicitly. To avoid this loss of accuracy, it is a good idea to recompute the maximum Δt only every few steps, using the Courant–Friedrichs–Lewy (CFL) condition. The simulations made with this code ran with a CFL number of 0.7, showing remarkable stability. Equations (30)–(32) are solved for φ, Ω_y and Θ using CFD. In what follows we work with the energy equation; the linear operator is L = Re_τ^{-1}∇² for φ and Ω_y, and L = (Pr Re_τ)^{-1}∇² for Θ, which in Fourier space becomes, e.g., L̂ = (Pr Re_τ)^{-1}(∂²/∂y² − k_x² − k_z²). Denoting by RHS_i the right-hand side of (30)–(32), transforming to Fourier space, using the two matrices described in the CFD section (28), and defining η as the first constant of the previous equation, a new matrix M combining A_yy and B_yy with the wavenumber-dependent factor can be defined, so that the final problem to solve is M ϕ_i = B_yy RHS.
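The sketch below shows, for a single Fourier mode and a single substep, how an implicit treatment of the viscous term leads to a banded system of this kind. The identification M = B_yy − η Δt ν (A_yy − k² B_yy), with ν = 1/(Re_τ Pr), is this sketch's own assumption about how the pieces combine (it follows from B_yy u'' = A_yy u), not a statement of the paper's exact matrices, and a dense solve stands in for the banded LAPACK routine used in production.

```python
import numpy as np

def implicit_substep(rhs, A_yy, B_yy, k2, eta, dt, nu):
    """Solve (I - eta*dt*nu*(d2/dy2 - k2)) phi_new = rhs for one (kx, kz) mode.
    Using the compact-FD relation B_yy u'' = A_yy u and multiplying through by B_yy:
    (B_yy - eta*dt*nu*(A_yy - k2*B_yy)) phi_new = B_yy rhs."""
    M = B_yy - eta * dt * nu * (A_yy - k2 * B_yy)
    return np.linalg.solve(M, B_yy @ rhs)   # production code would use a banded solver

# Illustrative use with small random stand-ins for the compact-FD matrices
n = 8
A_yy = np.random.rand(n, n)
B_yy = np.eye(n) + 0.1 * np.random.rand(n, n)
phi_new = implicit_substep(np.random.rand(n), A_yy, B_yy, k2=4.0, eta=0.5, dt=1e-3, nu=1/1000)
```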
Notice that this matrix is banded, which allows low-storage schemes and the use of very efficient LAPACK routines.
Parallelization Strategy
The most expensive part of the algorithm is the computation of the nonlinear terms ĥ_v, ĥ_g and ĥ_t, because they have to be computed in physical space due to the aliasing problem [31]. This involves two global operations:
1. Direct and inverse Fourier transforms in x and z, as the nonlinear term has to be computed in physical space.
2. Global transpositions of the data: as the data are distributed throughout the supercomputer, several all-to-all communications must be performed, which are critical and extremely demanding of the fast network of the supercomputer.
The code requires a total of comm = 3 × (9 + 3 h_p) global communications per step, where h_p is the total number of heat-transfer problems studied.
The number of points of the problem, and thus the memory, depends on the Reynolds number studied. Another constraint is the efficiency of the fast Fourier transforms (FFT), so numbers made up of powers of 2 and 3 are typically chosen to increase the speed of the FFT. Assuming a mesh with mx × my × mz points, mx is generally the largest. It is then natural to start each substep of the Runge–Kutta scheme with the data set distributed in y–z planes. To avoid load imbalances, the number of nodes must be a divisor of the number of complex planes, which is mx/2 after dealiasing; this number is also the maximum number of nodes that can be used. The maximum number of OpenMP processes at each node is mx/2/nprocs, where nprocs is the number of processors of the node.
The code implementing the algorithm described above, for which Figure 2 can serve as a roadmap, is structured as follows:
1. Read data and configuration files (HDF5).
2. Runge–Kutta substeps on the data φ, Ω_y and Θ, in Fourier space, distributed through the supercomputer with a (y, z, x) shape.
   (a) Compute statistical quantities of the flow if needed, and perform inverse transforms in z of U and Ω.
   (b) Calculation of the nonlinear terms, then move back to step 2(a) for the next substep.
3. End of program.
The code spends 90% of the time, and almost all of the communication time, computing the nonlinear term, and only 10% actually solving the equations. A large share of this 90% is spent on global communications; this is a critical step and requires an in-depth study of the supercomputer where the simulation is going to run.
Finally, notice that while the global communications use MPI routines, the granularity of the code makes it natural to use OpenMP routines for the Fourier transforms, the derivatives, and the viscous step. This is an important advantage given the foreseen evolution of supercomputing.
Conclusions
The main goal of this work is to develop a code to perform direct numerical simulations of distinctive wall-bounded thermal canonical flows. This code has run for roughly 50 million CPU hours on several supercomputers, already producing several articles. The FFT and CFD techniques have been described, in particular how to use a non-equispaced grid for CFD. In addition, the combination of the Runge–Kutta time stepper with the previous techniques has been explained. Finally, the parallelization scheme has been outlined; its main advantage is the granularity of the data, allowing a large number of small problems in each step that can be solved efficiently using OpenMP techniques.
The raw data that support the findings of this study are available from the corresponding author upon reasonable request. One-point statistics can be downloaded from the web page of our group: http://personales.upv.es/serhocal/ (accessed on 24 March 2021).
Abbreviations
The following abbreviations are used in this manuscript: | 5,226.2 | 2021-04-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Dynamic Performance Evaluation of Blockchain Technologies
In recent years, the rapid development of blockchain technologies has attracted worldwide attention. Their application has been extended to many fields, such as digital finance, supply chain management, and digital asset transactions. For some enterprises and users, how to choose the most effective platform from the many blockchains available, so as to control costs and share data, is an important issue. To comprehensively evaluate blockchain technologies, we first construct three-level evaluation indicators in terms of technical, market, and popularity indicators. Then, we propose an improved global DEA–Malmquist index without explicit inputs to assess the dynamic performance of blockchain technologies. Finally, we carry out an empirical analysis to evaluate 31 public blockchains' performance from May 2018 to April 2020. The results indicate that the overall performance of blockchain technologies is basically on the rise, and that some blockchain technologies that have not yet received widespread attention show good dynamic performance.
With the mature use of big data and cloud computing, Swan summarized the advantages of blockchain technologies through theoretical research and pointed out that these technologies can promote the development of supply chain finance [3]. Pilkington noted that the public blockchain is an important part of the blockchain landscape [4]. Dinh et al. pointed out that a blockchain is a shared digital ledger maintained by a group of nodes that do not fully trust each other [5]. Smetanin et al. provided a systematic review of current blockchain evaluation approaches [6]. Previous studies on blockchain evaluation include both analytical and simulation-based approaches. For instance, many researchers have studied the application of queuing theory to blockchain evaluation [7]-[14]. By introducing the Markov process and the Markov decision process, researchers evaluated blockchain systems from the perspectives of stability and anti-attack ability [15]-[21]. By adopting the random walk model, researchers have studied the double-spending attack issue in blockchains [22], [23]. Lu showed the application of blockchain technology in IoT, security, data management, and other fields [24].
It is important to note that previous studies mostly focus on a specific blockchain technology (such as Bitcoin, Ethereum, or IOTA) or target a specific scenario (such as selfish mining or double-spending attacks). However, there are still few studies on the comprehensive evaluation of blockchain technologies. The China Center for Information Industry Development (CCID) released the first global public blockchain technology assessment index in May 2018 [25]. It is worth noting that although Bitcoin's technical ranking is low, it is still one of the most popular blockchains. Furthermore, Tang et al. designed 3 first-level indicators and 11 second-level indicators to evaluate public blockchains and ranked them through the TOPSIS method [26]. However, Tang's study used a single-period evaluation method instead of a global evaluation method and did not consider the dynamic performance changes of blockchain technologies. To this end, we introduce a non-parametric method to evaluate the dynamic performance of blockchain technologies.
Data Envelopment Analysis (DEA) is a non-parametric method to identify best practices of peer decision making units (DMUs), in the presence of multiple inputs and outputs.
Since DEA was first introduced by Charnes et al. [27], there have been numerous studies in many areas. DEA provides not only efficiency scores for inefficient DMUs but also efficient projections of those units onto an efficient frontier. Considering that the R&D process of a blockchain differs from a traditional production process, it is difficult to obtain and quantify the actual investment in the research and development of blockchain technologies. We therefore use an index DEA model, which sets the inputs of all decision-making units to 1 and directly compares the outputs [28]. In terms of output indicators, we conduct a comprehensive analysis of blockchain technologies at three levels: technical indicators (CCID: basic technology, applicability, and creativity), market indicators (the average rate of return and the standard deviation of the rate of return), and popularity indicators (Google search heat). Besides, the dynamic evaluation of blockchain technologies over time is an urgent problem to be solved, so we further analyze the two elements of the Malmquist index, namely the technology frontier shift and the technical efficiency change [29], [30]. The technology frontier shift measures the development or decline of all DMUs, while the technical efficiency change measures changes in technical efficiency. Although the standard DEA–Malmquist productivity index has been widely adopted, it encounters an infeasibility problem when computing the directional distance function (DDF) across periods [31]-[33]. To avoid this problem, we introduce the global Malmquist productivity (GM) index developed by Pastor and Lovell [34]. It should be emphasized that the global benchmark technology covers the benchmark technologies of all periods; in this way, the GM index gets rid of the infeasibility problem [35]. Besides, some studies have also dealt with the problem of calculating the DDF with the DEA–Malmquist index [36]-[38].
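A minimal sketch of how a DEA model without explicit inputs and the global Malmquist ratio could be computed; the output-oriented multiplier form below (unit "virtual input") and the random placeholder data are this sketch's assumptions, not the paper's exact model specification.

```python
import numpy as np
from scipy.optimize import linprog

def wei_efficiency(Y, k):
    """DEA efficiency of DMU k with no explicit inputs (multiplier form):
    max u.y_k  subject to  u.y_j <= 1 for every DMU j,  u >= 0."""
    n, m = Y.shape
    res = linprog(c=-Y[k], A_ub=Y, b_ub=np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return -res.fun

# Hypothetical output data (rows: blockchain observations, columns: the six indicators)
Y_t  = np.random.rand(31, 6)           # period t
Y_t1 = np.random.rand(31, 6)           # period t+1
Y_global = np.vstack([Y_t, Y_t1])      # pooled data defines a single global frontier

blockchain = 0                          # index of one blockchain
gm = wei_efficiency(Y_global, 31 + blockchain) / wei_efficiency(Y_global, blockchain)
# gm > 1 indicates that this blockchain's performance improved between the two periods
```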
Different from previous studies, this paper makes the following contributions. First, we integrate DEA and the global Malmquist index to evaluate the dynamic performance of blockchain technologies. Second, we adopt an index DEA model to calculate the global Malmquist index, reflecting the fact that blockchains have no explicit inputs in practice. Third, we further investigate the efficiency change (EC) and technical change (TC) to show the improvement of blockchain technologies over time. Finally, we evaluate the performance of 31 public blockchains over 24 months. By adjusting the length of the observation period, we find that the development of different blockchain technologies shows different trends throughout the whole period and each subperiod, and that some blockchain technologies that have not yet received widespread attention show good dynamic performance.
The rest of the paper is organized as follows. In section II, we introduce the proposed evaluation model. Section III presents data, empirical study, the results, and discussion. Section IV summarizes the study. Acronyms and abbreviations can be seen in the appendix, Table 6.
A. EVALUATION INDICATOR
With the development of blockchain technologies, more and more public chain projects have been launched, and choosing the best-performing one among them has become a new problem. To evaluate a blockchain project, we need to consider three aspects: the technical performance of the blockchain, the market performance of its cryptocurrency, and the popularity of the project. Therefore, we constructed an indicator system on three levels. First, technical indicators: in Tang's research, the basic technology and applicability obtained from the China Center for Information Industry Development (CCID) are used as two sub-indicators under the technical indicators [26]; similarly, we obtain three indicators, basic technology, applicability, and creativity, from the CCID index, which together describe the technical level of a blockchain. Second, market indicators: the market performance of the cryptocurrency issued through a blockchain project reflects investors' expectations of the project, and here we mainly consider two indicators, the mean and the standard deviation of the rate of return. Third, popularity: through Google Trends we can obtain the popularity of each blockchain project. Fig. 1 shows the indicator system we constructed.
The R&D process of blockchain technology differs from the traditional production process. A blockchain technology is born through the project team's architectural design, protocol design, expansion design, application design, and programming. Users join the decentralized blockchain network and conduct transactions with other network participants. The platform issues cryptocurrencies to incentivize users, and cryptocurrencies can also be traded between users. Based on blockchain technology, users can encrypt data before sending it and add identity verification in the transmission authorization process; operations involving personal data require identity authentication for decryption and confirmation of rights, and operation records and other tamper-evident information are recorded on the chain and synchronized across the network. Blockchain technology can also be applied to many fields such as copyright protection, logistics traceability, and supply chain finance. However, it is difficult to obtain and quantify the specific inputs of the R&D process (e.g., funds invested, personnel involved, and working hours). Moreover, in the R&D of blockchain technology, the most important elements are the design of the consensus mechanism, the account model, and the smart contracts, and different consensus mechanisms and account models are not directly comparable. Therefore, when designing the evaluation system, we shifted our focus to the outputs of blockchain technologies. Specifically, we comprehensively evaluate the performance of blockchain technology from three perspectives: experts (expert ratings), investor attitudes (the market performance of the corresponding cryptocurrencies), and social popularity (Google search heat).
The indicator system we proposed comprehensively considers a blockchain technology from technical status, market performance and popularity. The first level of indicators evaluates the current state of blockchain technologies and shows the performance of blockchain technologies from the perspective of experts. The second level of indicators evaluates the market performance of blockchain technologies. Through the fluctuation of cryptocurrency prices, we can see the confidence of investors in the development potential of blockchain technologies from the perspective of investors. The third level of indicators analyzes the popularity of blockchain technologies among the public, and the popularity directly determines the participants and investors that blockchain technologies can obtain. The index system formed by the three levels of indicators covers the evaluation of blockchain technologies from the perspectives of experts, investors, and followers, and comprehensively considers the current performance and development potential of blockchain technologies.
In this study, due to the non-parametric method we adopted, there is no need to compare the importance of indicators at different levels. In the following work, we will treat all six indicators as the desirable (undesirable) output of blockchain technologies in the evaluation model.
1) TECHNICAL INDICATORS
The technical evaluation of a blockchain project mainly considers the basic technical level, the application level, and the innovation ability of the public chain, which correspond to CCID's evaluation indicators Basic-tech, Applicability, and Creativity, respectively. These three indicators are quantified by the expert scoring method. Since CCID has established a technology assessment index for public blockchains [25], its scoring results are used as the following three indicators.
a: BASIC-TECH (y 1 )
In the basic technology evaluation, the CCID index mainly examines the realized functions, basic performance, security, and degree of centralization of the public chain.
b: APPLICABILITY (y 2 )
In the application-level evaluation, the first-phase evaluation model mainly inspects the application scenarios supported by the public chain, the number and ease of use of wallets, and the development support on the chain.
c: CREATIVITY (y 3 )
In the creativity evaluation, it mainly inspects the scale of the public chain's innovation team, the code update status, the code influence, and other aspects.
2) MARKET INDICATORS
Issuing cryptocurrency is a common way for blockchain technologies to reward participants. Based on different consensus algorithms such as PoS, PoW, BOINC, the blockchain platform will issue cryptocurrencies to users. Users can also make profits through transactions after obtaining cryptocurrencies. The fluctuation of cryptocurrency prices reflects the attitude of investors towards the future development trend of corresponding blockchain technology.
With the development and popularization of blockchain technologies, cryptocurrency has gradually become a financial asset that has attracted much attention. The price fluctuations of cryptocurrencies are relatively large in all financial asset investments. The market performance of cryptocurrencies reflects investors' confidence in the blockchain technology behind the currency. The prices of cryptocurrencies are closely linked to the development of corresponding blockchain projects. In our evaluation system, we use the monthly mean of daily returns (y 4 ) and the monthly standard deviation of daily returns (b 1 ) to examine the market performance of cryptocurrencies. Since the cryptocurrency adopts a continuous trading system, we take the daily price at 0:00 as the opening price, and the 24:00 price as the closing price. After calculating the rate of return on a daily basis, we obtain the average of the monthly rate of return as y 4 . Similarly, we can obtain the monthly standard deviation of daily returns as b 1 . The former represents the profitability of virtual currencies, while the latter represents the stability of profitability. It should be noted here that the monthly standard deviation of daily returns (b 1 ) is an undesirable indicator. The smaller the value of this indicator, the better the cryptocurrency performance.
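For illustration, the two market indicators could be computed from a daily price series roughly as follows; this sketch uses close-to-close returns from a pandas series, whereas the paper takes the 0:00 price as the open and the 24:00 price as the close, and the column and function names are only placeholders.

```python
import pandas as pd

def monthly_return_stats(prices: pd.Series) -> pd.DataFrame:
    """prices: daily closing price of one cryptocurrency, indexed by date."""
    daily_ret = prices.pct_change().dropna()                 # daily rate of return
    grouped = daily_ret.groupby(daily_ret.index.to_period("M"))
    return pd.DataFrame({
        "y4_mean_return": grouped.mean(),                    # desirable output
        "b1_return_std": grouped.std(),                      # undesirable output
    })
```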
3) POPULARITY INDICATORS
In addition to the above indicators, the public's sentiment towards blockchain projects directly affects the development of blockchain projects. More attention means more participants and investors, which are very important for the development of blockchain projects. In the search market, Google handles around 90% of searches worldwide [39]. The popularity of search terms over time and across various regions of the world can be compared in Google Trends [40]. We use the average Google search heat (y 5 ) of a public blockchain to reflect the popularity of this blockchain.
B. EVALUATION PROCESS
In this special case, we cannot obtain explicit inputs for each blockchain in each period, so the traditional GM index must be modified. Next, we introduce a modified Malmquist index to deal with this problem.
First, we define the contemporaneous benchmark technology for period t and define the global benchmark technology as the convex envelope of all contemporaneous technologies. The contemporaneous Malmquist index is then defined on the contemporaneous technology, whereas its global counterpart is defined on S G. Next, to deal with desirable and undesirable outputs simultaneously, we introduce the directional distance function (DDF) into the model. The DDF was first introduced by Chung et al. [41]; it allows decision makers to choose the direction of projection by adjusting the input/output direction vector. Considering only desirable and undesirable outputs, we define the DDF on the global technology T G. Since the global DDF uses all blockchains over all periods to construct the production possibility set, it is computed for blockchain o at period t under the VRS assumption. Although both adjacent periods refer to the global frontier when calculating the Malmquist index, the respective contemporaneous frontiers are still used in the calculation of efficiency change. The closeness of the frontier of period t + 1 to the global frontier can be represented by a ratio: the larger the ratio, the closer the frontier of period t + 1 is to the global frontier. The closeness of the frontier of period t to the global frontier is represented analogously. The change of the frontier from period t to period t + 1 is then expressed by the ratio of these two ratios, and the Malmquist index can be decomposed into efficiency change and technical change. Because all DMUs refer to the same global reference set, this Malmquist index does not suffer from the infeasibility problem; and because each period references the common global frontier, this Malmquist index is transitive.
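The displayed formulas of this subsection did not survive reproduction here. For reference, the standard Pastor-Lovell global Malmquist index and its decomposition into efficiency change (EC) and technical change (TC) take roughly the following form, written with generic distance functions D rather than the paper's DDF with an undesirable output; this is a sketch of the textbook definitions, not the paper's exact equations.

```latex
M^{G} = \frac{D^{G}(x^{t+1},y^{t+1})}{D^{G}(x^{t},y^{t})}
      = \underbrace{\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}}_{\mathrm{EC}}
        \times
        \underbrace{\frac{D^{G}(x^{t+1},y^{t+1})\,/\,D^{t+1}(x^{t+1},y^{t+1})}
                         {D^{G}(x^{t},y^{t})\,/\,D^{t}(x^{t},y^{t})}}_{\mathrm{TC}},
\qquad
M^{G}_{t,t+2} = M^{G}_{t,t+1}\cdot M^{G}_{t+1,t+2}.
```

The second identity is the transitivity property referred to at the end of the paragraph above.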
III. EMPIRICAL ANALYSIS
In this section, we apply the modified GM index to assess the performance of 31 public blockchains from May 2018 to April 2020. The desirable indicators are Basic-tech (y 1 ), Applicability (y 2 ), Creativity (y 3 ), the monthly mean of daily returns (y 4 ), and Google search heat (y 5 ); the undesirable indicator is the monthly standard deviation of daily returns (b 1 ). Table 1 shows the descriptive statistics of each indicator, including the mean, median, maximum, minimum, standard deviation, kurtosis, and skewness. The standard deviation of y 5 is the largest, while the standard deviations of the other variables are relatively small. The skewness values of the variables are all non-zero and the kurtosis values differ from 3, indicating that the data do not strictly follow a normal distribution. Therefore, evaluating blockchain technologies with the global Malmquist index, a non-parametric method, is more reasonable. Table 7 (see Appendix) presents more detailed descriptive statistics (geometric mean, standard deviation, maximum, minimum, and range) for the thirty-one public blockchains. As shown, under the Basic-tech indicator the average level of EOS is the highest, while that of Bitcoin Cash is the lowest. In terms of applicability, the 31 blockchains are similar. In addition, EOS has the largest Creativity score of 61.1, while Bytecoin has the smallest at only 1.1; it is worth mentioning that Bytecoin's Google search heat is also the smallest on average. In terms of the average rate of return, only NEO, Bitcoin, and Tezos have positive values among the 31 public blockchains. The standard deviation of the rate of return reflects a similar instability in the overall performance of the blockchains.
B. ANALYSES FROM THE MODIFIED GM INDEX
We calculated the modified global Malmquist index and its components EC (efficiency change) and TC (technical change) for each pair of adjacent months from May 2018 to April 2020. The whole-period GM performance index is the geometric mean of the 23 month-to-month GM indexes, namely the 23rd root of the product of the growth rates from the 23 pairs of adjacent months (from May-June 2018 to March-April 2020). The monthly GM index and its components EC and TC over the whole sample period are presented in Tables 8, 9, and 10 (see Appendix).
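The whole-period index described above is simply the geometric mean of the month-to-month indexes; a minimal sketch (function name hypothetical) is given below.

```python
import numpy as np

def whole_period_gm(monthly_gm):
    """Geometric mean of the 23 month-to-month GM indexes (May 2018 to April 2020)."""
    monthly_gm = np.asarray(monthly_gm, dtype=float)
    return float(np.exp(np.mean(np.log(monthly_gm))))
```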
To observe the overall dynamic performance of each public blockchain project, we calculated the geometric mean of the monthly indicators of each blockchain, as shown in Table 5. For a more detailed analysis, we divided the 24 periods into four stages (May 2018 to November 2018, November 2018 to May 2019, May 2019 to November 2019, and November 2019 to April 2020), as shown in Table 6.
From Table 2, we can see that most of the blockchain projects improved throughout the observation period, and the five with the largest average improvement are IOTA, Zcash, Hcash, XLM, and NULS. In contrast, the geometric averages of the GMs of NEO, Decred, Ontology, Bytecoin, and Monero are all less than 1, meaning that, at the overall level, these blockchains experienced different degrees of regression. The average GMs of EOS and Ethereum are both 1, showing no obvious improvement or regression from period 1 to period 24. Bitcoin, which has attracted a lot of attention, performed well in this evaluation, with a mean GM of 1.018. From the decomposition of GM, we can see that Bitcoin's progress mainly comes from technological progress. The mean EC of Bitcoin is equal to 1, which means that from period 1 to period 24 Bitcoin has not made significant efficiency progress compared to other blockchains; according to CCID's previous evaluation results, this is likely due to its slow transaction speed and flaws in its consensus mechanism. Table 3 gives more detailed information. In the stages May 2018 to November 2018 and May 2019 to November 2019, many DMUs have a GM index of less than 1. In the second half of 2018, the China Banking and Insurance Regulatory Commission, together with other ministries and commissions, jointly issued the "Reminder on the Risk of Illegal Fund Raising in the Name of 'Virtual Currency' and 'Blockchain'". At the same time, many countries adopted a series of regulatory measures against ICO projects, and these policies had a short-term negative impact on the development of blockchain. In the second half of 2019, the headquarters of the People's Bank of China issued the document "Strengthening the Prevention and Control of Supervision and Combating the Trading of Virtual Currency" to further strengthen the supervision of ICO projects. In the past, blockchain projects used cryptocurrency as the main reward mechanism; with the increasingly strict supervision of virtual currency transactions around the world, blockchain projects were directly and negatively affected in terms of market performance and popularity. In terms of technical indicators, due to the impact of negative news during these two periods, the basic technology, applicability, and creativity also declined.
The other two stages each have only two blockchains with a GM index of less than 1. Combined with the overall situation, the two periods of regression (May 2018 to November 2018 and May 2019 to November 2019) may be caused by the transformation of blockchain technologies from ICO-oriented to application-oriented.
Bitcoin and Ethereum are the two blockchain projects that attract the most attention. We list their GM index by period in Table 4. Bitcoin has a GM index greater than 1 in 14 of the 23 pairs of periods, while Ethereum has 15. The efficiency change of both projects is 1, and by looking at their global efficiency in each period we find that they have their own advantages: Bitcoin is always the most popular, while Ethereum leads on the technical indicators. Therefore, both have the highest overall efficiency of 1 in each period, which makes sense. Since GM is equal to EC times TC, in this case where EC is equal to 1, the GM changes of both projects come from TC.
In Table 5, we compare the rankings from three different blockchain technology evaluation methods. The second and third columns show the global Malmquist index and ranking of blockchain technologies over the entire observation period. The fourth and fifth columns show the global Malmquist index and ranking of blockchain technologies from July to August 2018. The last column is the ranking of blockchain technologies obtained by Tang using the TOPSIS method in August 2018.
Obviously, the blockchain technologies that show great progress at the overall level (GM index greater than 1) are not well ranked under Tang's method. For example, the top three, IOTA, Zcash, and Hcash, are ranked 23rd, 12th, and 11th, respectively, in Tang's single-month ranking results. At the same time, in the single-period evaluation, the ranking results obtained by the global Malmquist index method also differ considerably from those of the TOPSIS method: the top three blockchain technologies, Litecoin, Stratis, and Hcash, are ranked 16th, 20th, and 11th, respectively, under Tang's method.
This difference comes from two aspects. The first is that Tang's method mainly focuses on a cross-sectional comparative evaluation of current blockchain technology performance, whereas the global Malmquist index method pays more attention to the dynamic performance of blockchain technology, in other words, the change between previous and current performance. This is very important in the dynamic performance evaluation of blockchain technologies: a blockchain technology that can maintain continuous progress will have stronger competitiveness in the foreseeable future. The second is that Tang's method can only consider the performance of a single period at a time, whereas, thanks to the transitivity of the global Malmquist index, we can evaluate the performance of blockchain technology over any time span as needed. This allows the method to consider the dynamic performance of blockchain technology more comprehensively. Based on the above factors, the dynamic performance evaluation obtained with this method provides more noteworthy information.
IV. CONCLUSION
This paper provides a dynamic evaluation method to support users in selecting among the many available blockchain projects.
First, we propose a new indicator system for the dynamic performance evaluation of blockchain technologies. Through multi-perspective indicators, we can conduct a more comprehensive evaluation of the performance of blockchain technologies. Second, we apply the modified GM index method to the evaluation of blockchain technologies' dynamic performance. Unlike Tang's static evaluation method, our method can evaluate the dynamic performance of blockchain technology (its progress or regression). Third, different from previous studies that focused on a single object and a single scenario, this study achieves a multi-perspective comprehensive evaluation of blockchain technologies' performance through a benchmarking method.
Under the modified GM index, the application fields of the Malmquist index have been further expanded. In this study, we evaluated the performance of 31 public blockchains over 24 months. Compared with other scholars' research, our results reflect more detail: they accurately show the dynamic performance of each public blockchain across periods; through the decomposition of the GM index, the reasons for improvements in blockchain performance are clarified; and by adjusting the length of the observation period, we found that the development of different blockchain technologies shows different trends throughout the whole period and each subperiod. This allows us to evaluate the dynamic performance of blockchain technology in more detail.
This research still has certain limitations. In follow-up research, the indicator system should be further enriched; compared with second-hand data, first-hand data can better reflect the performance of a blockchain. In future research, the impact of policy supervision on blockchain performance should also be discussed in more depth.
See Tables 6 to 10.
ZHONGBAO ZHOU was born in 1977. He is currently a Professor with the School of Business Administration, Hunan University, China. His research interests include reliability modeling, system optimization, and decision making.
RUIYANG LI was born in 1995. He received the B.S. degree in mathematics and applied mathematics from Hunan University, China, in 2017, where he is currently pursuing the Ph.D. degree in management science and engineering with the School of Business Administration. His research interests include system optimization and decision making.
HELU XIAO was born in 1986. He is currently an Assistant Professor with the Business School, Hunan Normal University, China. His research interests include system optimization and decision making.
"Computer Science",
"Business"
] |
On the Estimate Measurement Uncertainty of the Insertion Loss in a Reverberation Chamber Including Frequency Stirring
In this paper, we present an enhancement of a previous model for the standard measurement uncertainty (MU) of the insertion loss (IL) in a reverberation chamber (RC) including frequency stirring (FS). Unlike the previous model, the enhanced model does not require specific conditions on the parameter to be measured, and it is applicable to all usable measurement conditions in RCs. Moreover, a useful majorant is also derived; it is obtained under a weak condition on the coefficient of variation (CV) of the parameter to be measured. Measurement results support the validity of the proposed enhancement and of the majorant.
I. INTRODUCTION
Measurement uncertainty (MU) quantification is very important to improve the applications of reverberation chambers (RCs) [1]. Hybrid stirring increases the number of uncorrelated samples and, consequently, reduces the MU [1]-[7]. In this paper, we consider a hybrid stirring realized by a combination of frequency stirring (FS) and mechanical stirring (MS) [2]-[3]. FS measurements also allow the transformation into the time domain [8]-[10]. The MU of the insertion loss (IL) in an RC with hybrid MS and FS was addressed in [11], where a model was developed under conditions of well-stirred fields; it is here called the previous model. In [11], the MU is estimated following the approach described in [12], considering it as a type A uncertainty. This type of uncertainty is normally the main component of the MU in RCs [13]. The type B evaluation of uncertainty depends on the manufacturer's specifications of the instrumentation, as well as on the specific calibration procedure used for the measurements, which can change from case to case; however, the instrumentation used for the measurements and the main settings are also given here. Actually, a wider treatise on the MU in RCs was presented in [13], where an approach similar to that in [11] was used, as will be specified below; nevertheless, some meaningful differences between the approaches should be discussed, and we do so in Section V. The purpose of this paper is to enhance the previous model and its usability. The enhanced model does not require specific conditions for its validity; it is de facto a generalization of the previous model. It is found that such an enhancement is applicable to all usable measurement conditions of the IL in RCs, including conditions at low frequencies. A useful majorant of the standard MU is also obtained; it requires that the coefficient of variation (CV) of the measured samples be less than or equal to one. We find that the previous model coincides with the majorant; it can be applied when a conservative margin for statistical fluctuations is considered and the abovementioned CV is less than one.
II. THEORY
We develop the enhancement by treating the IL as in [11]. Following [11], the IL is written as the ensemble average, over the N uncorrelated field configurations obtained from the MS of the metallic stirrer(s) in the chamber, of the squared amplitude of the transmission coefficient S21, which is a random variable (RV). The IL is therefore a sample mean (SM) and exhibits statistical fluctuations: it is itself an RV. We can then write the mean, variance, and CV of the RV ILf [11]. Note that f = fk - f1, where f1 and fk are the minimum and maximum frequencies of the FS. We are interested in the mean and variance of W, and we want to transform (9a) so that it gives a significant connection between MS and FS. The quantities estimated by the corresponding sample means from the N uncorrelated samples of S21 are never totally uncorrelated, as the former includes an effect of the latter; however, they are sufficiently uncorrelated that (10a) can be well approximated by (10b). It is highlighted that (10b) is also valid in the case of sample estimates. We can also write [11] the standard deviation of the means ILf1,0, ILf2,0, ..., ILfk,0. Manipulating (9), (10b), and (11), we obtain (12) and (13). When (5a) is met and the CV of the squared field amplitude equals 1, which corresponds to the case of well-stirred fields, (12) and (13) become equal to (10) and (13) in [11], respectively, as expected. Practically, W and its variance are also RVs, as the parameters on the right-hand sides of (12) and (13), and of the similar equations below, are sample estimates; they depend on N, and in these cases we omit the zero in their subscripts. When (5a) is met, a variation of the enhanced model (12)-(13) can be obtained; in fact, (12) and (13) then become (14) and (15) [21]. Since population parameters are estimated by the corresponding sample statistics computed from the N uncorrelated samples of S21 (for this reason, sample-estimate symbols are used in (14) and (15)), we can de facto verify whether (5a) is met only when N is much greater than one, because in such cases the statistical fluctuations are strongly reduced. When N is not much greater than one, we can assume that (5a) is met, calculate its average in the FSB, and compare the results with measurements to check whether the assumption was true; note that the estimate tends to zero in the FSB. In Section IV it is shown that when N is greater than or equal to eight, (12)-(13) give practically the same results as (14)-(15); otherwise, the results from (14)-(15) are worse, and in particular the corresponding standard MU and relative standard MU are smaller than those from (12)-(13), as expected. It will be shown that the results from (12)-(13) match those from measurements. Moreover, for equal N, the difference between the results from (12)-(13) and those from (14)-(15) is consistent with [13], as the results in Section IV will confirm. Following the assumptions and developments made in [11, after eq. 14], the model is extended to p independent positions of at least one of the two antennas, where the subscripts p, mp, and sp denote p independent positions, multiple positions, and a single position, respectively, and where an additional variance term accounts for the lack of perfect uniformity [11]. Note that the constancy of the CV of the squared field amplitude for all positions p is an assumption
that is absolutely acceptable. If k = 1 (only MS), then (16) simplifies accordingly. It is useful to rewrite (16), and it is also useful to consider the ratio leading to (21) and (22). Note that the right-hand sides of (24) and (25) are the same as in the previous model, and they give majorants of the corresponding standard MUs. It is specified that the subscript c in (24) and (25) denotes that the fields meet condition (5b).
It is important to highlight that (12) and (13), as well as the corresponding (20) and (23), constitute a general model for the standard MU of the IL in RCs; in fact, (10)-(13) in [11], as well as (24) and (27) in [11], are particular cases of (12)-(13) and of (20) and (23), respectively, obtained under the well-stirred condition. Finally, before describing the measurement setup, it is useful to express the CV of the squared field amplitude through the K-factor, denoted by Kf,0.
III. MEASUREMENTS SETUP
Measurements are made in the RC at Università Politecnica delle Marche, Ancona, Italy, which works in step mode for the measurements used in this paper. The measurement setup and acquisition settings are the same as in [11], except that two antenna configurations are used here: one configuration minimizes the direct link between the antennas, which are distant and cross-polarized, and the other maximizes it. In the latter case, the antennas are in line of sight at a known distance from each other; they are tip-to-tip and co-polarized. Several distances were used for the measurements, but for brevity only the results for distances of 0.05 m and 0.3 m are shown. The former and latter configurations are called A and B, respectively. The measurement setup includes a four-port VNA, model Agilent 5071B, and two antennas, model Schwarzbeck Mess-Elektronik USLP 9143, whose usable frequency range (FR) spans from 250 MHz to 7 GHz for EMC tests. The IF bandwidth and source power, which determine the instrument measurement uncertainty along with the set FR and the amplitude of the measured transmission coefficient, are set to 3 kHz and 0 dBm, respectively. Over the FR from 0.2 GHz to 8.2 GHz, 16,001 frequency points are acquired with a step frequency (SF) of 500 kHz for a number of mechanical positions M = 64 [11]. The number 64 corresponds to the total number of acquired stirrer positions, which in turn corresponds to the total number of acquired frequency sweeps (M = 64) [11]. The total sweeps are divided into n sets of sweeps, so that each set includes N sweeps and M = n · N; the settings n and N can be changed to test the enhanced model [11]. For each sweep, the total number of processed frequency points (16,000) is divided into q sets of frequencies, so that f = (k - 1) · SF and the total equals k · q. Unlike in [11], a different symbol is used here for the total number of frequency points to avoid confusion with the symbol of the K-factor. The value of q is the number of FSBs (of width f) included in the FR. The mean W0 in (12) is estimated n times, and the standard deviation of these n averages Wi (i = 1, 2, ..., n) is calculated [11]; this standard deviation is an estimate of the measured standard uncertainty. When such an uncertainty is normalized to the average of the averages Wi, an estimate of the relative standard uncertainty is obtained. The measured standard MU is compared with the corresponding expected standard MUs, which are obtained by applying (12), (14), and (24) using any of the n estimates Wi and the corresponding sample estimates, as mentioned above. Similarly, the measured relative standard MU is compared with the corresponding expected relative standard MUs obtained from (13), (15), and (25). The non-correlation of the samples is verified by the autocorrelation function (ACF); the threshold used here is 1/e, where e is Napier's number.
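A rough numerical sketch of the measured-uncertainty estimate described above might look as follows, assuming the acquired data are arranged as an M-by-frequency matrix of S21 samples; the function name, array shapes, and the simple reshaping are assumptions for illustration, not the authors' processing code.

```python
import numpy as np

def measured_relative_mu(S21, N, k):
    """Sketch of the measured relative standard MU of the insertion loss.
    S21: complex array, shape (M_stirrer_positions, n_freq_points).
    N: sweeps per set (M should be divisible by N); k: frequency points per FS bandwidth."""
    M, F = S21.shape
    il = np.abs(S21) ** 2                                   # |S21|^2 per stirrer position and frequency
    n_sets = M // N
    q = F // k
    # Average over N stirrer positions, then over the k points of each FS bandwidth
    W = il[:n_sets * N].reshape(n_sets, N, F).mean(axis=1)  # IL per set, per frequency
    W = W[:, :q * k].reshape(n_sets, q, k).mean(axis=2)     # W_i per set, per FS band
    std_W = W.std(axis=0, ddof=1)                           # measured standard MU per FS band
    return std_W / W.mean(axis=0)                           # relative standard MU per FS band
```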
In general, thresholds of 0.5 and 0.7 could also be used [22]; however, the higher the threshold, the higher the residual correlation of the samples [21]. Note that the 64 frequency sweeps of each IL measurement can be thought of as a matrix of 64 rows and 16,001 columns, where only the frequency changes along each row (FS), whereas only the stirrer position changes along each column (MS). The ACF is calculated for both rows and columns. For measurements where the IL includes a significant variable direct component, the ACF is considerably affected. A short sequence of frequency samples, with the average of the direct component removed, could be considered, as done in [21]; this method has the drawback of using only a few samples for the estimate of the ACF and is, in any case, not reliable [21]; therefore, it is not used here. Here, the direct component is removed before calculating the ACF for the measurements concerning configuration B; it is removed for each frequency point, i.e., for both MS and FS. The direct component to be removed is obtained by using all 64 sweeps. For both the measurements from configuration A, where it is not necessary to remove the residual direct component, and the measurements from configuration B, acceptable results are obtained according to the abovementioned threshold; they are not explicitly shown here for brevity. However, to ensure non-correlated samples over the whole FR and for any FSB, a decimation of samples by a factor of 8 is applied to the samples concerning configuration B, and a decimation by a factor of 2 to the samples concerning configuration A. Hence, the SF becomes 1 MHz for configuration A measurements and 4 MHz for configuration B measurements. Finally, we note that when an appreciable direct component is not present, or when it is removed and the stirred component is well stirred, the non-correlation can be verified by using the correlation coefficient (CC) applied to the squared amplitude of the samples [23]. With this method, it is confirmed that the results worsen as the FSB increases, as well as when it is too small relative to the number of samples [23].
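As an illustration of the 1/e criterion and the sample decimation mentioned above, a simple sketch for finding the first lag at which the normalized autocorrelation drops below 1/e is given below; this is a generic estimator, not the authors' exact procedure, and the decimation comment only mirrors the factors quoted in the text.

```python
import numpy as np

def first_uncorrelated_lag(x, threshold=np.exp(-1.0)):
    """Smallest lag at which the normalized autocorrelation drops below 1/e."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. n-1
    acf = acf / acf[0]
    below = np.nonzero(acf < threshold)[0]
    return int(below[0]) if below.size else len(x)

# Decimate frequency samples so that adjacent retained samples are uncorrelated,
# e.g. keep every 8th point (SF 0.5 MHz -> 4 MHz) for configuration B:
# decimated = samples[::first_uncorrelated_lag(samples)]
```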
IV. RESULTS
The effect of the enhancement of the previous model and of the majorant is clearly visible in the results below. Therefore, to make the verification of the proposed models effective and simple, we use (12)-(13), (14)-(15), and (24)-(25). Figs. 1 and 2 show the standard MUs and the relative standard MUs given by (12), (14), (24) and by (13), (15), (25), respectively, for the measurements concerning configuration A; note that f = (k - 1) · 1 MHz. Fig. 3 shows an enlargement of Fig. 2 at low frequencies. Figs. 4 to 7 show the CVs of interest, together with their average and RMS values, and the K-factor; in particular, Fig. 6 shows an enlargement. All figures show the corresponding statistical fluctuations. The comparison between the measured standard MUs and the corresponding expected standard MUs shows that (12)-(13), as well as (14)-(15), are supported by the measurement results. To prove that the models also work well for different FSBs, the expected relative standard MUs are shown in Fig. 8 and Fig. 9, where k = 100 (f = 99 MHz) and k = 200 (f = 199 MHz), respectively. It is also confirmed that the expected results from (24) and (25) are the same as those from (12) and (13), respectively, when K = 0 (taking the equal sign in (24)-(25)), except at low frequencies (f < 250 MHz), where a deviation is expected and observed (see Fig. 3). From Figs. 1-9 it is also noted that (12)-(13) and (14)-(15) give practically the same results for N = 8. In Fig. 10, where N is 2, the difference between the results from (13) and (15) is clearly visible. Such a difference is due to the statistical fluctuations, which increase as N decreases, as mentioned above; the same applies to (12) and (14). The slight difference between the results from (13) and (25) visible in Fig. 10, as well as between the results from (12) and (24) when the equal sign is taken in (24)-(25), is due to the value of N; it decreases as N increases, because the relevant sample statistics are underestimated when N is small, as mentioned above. It is important to note that the measured standard MU and the expected standard MU from (12), as well as the corresponding relative standard MUs, match even when N is small (N < 4) because of the effect of such an underestimate (see Fig. 10 for the relative standard MUs); otherwise, the abovementioned difference is acceptable from N = 4 [11]. Figs. 11 to 15 show the results of the measurements concerning configuration B for d = 0.05 m. In particular, Figs. 11 and 12 show the expected standard MUs and expected relative MUs along with the corresponding measured MUs; the FSB is 96 MHz. One notes that expected and measured results match again, and that (24) and (25) are clearly majorants of the corresponding measured uncertainties in these cases. It is important to note that the results from (12)-(13) are essentially the same as those from (14)-(15) except for N < 8, as Fig. 10 shows. However, we ultimately adopt the enhanced model (12)-(13), even though we believe that the variation (14)-(15) can generally be used for N ≥ 8. Finally, we highlight that the results in [21], where no decimation was applied, did not match well because the samples were partially correlated; the effect of a residual correlation is also appreciable in [21, Fig. 9] for f > 5 GHz.
V. DISCUSSION
The standard MU of the IL of an RC, as well as the relative standard uncertainty, is estimated as a type A evaluation uncertainty and compared with the corresponding measured uncertainties. The estimate of the MU is made so that the uncertainty component due to the non-uniformity of the field in the RC is highlighted and obtained separately, apart from a multiplying factor. The non-uniformity is affected by the load in the RC and increases as the load increases. Such a component of uncertainty is connected to the reciprocal location, orientation, and polarization of the transmitting and receiving antennas for a given RC. The model gives good results at low frequencies as well. The non-uniformity of the field in an RC cannot be neutralized by increasing the number of samples N k, even though a marginal reduction of this component of the MU could be achieved by widening the FSB [11]. This aspect is very important when this component has to be reduced, for example when the effect of a strong load on the uniformity has to be mitigated or when a very low total uncertainty is necessary. In [13], the PDFs of the sample statistics of interest are derived theoretically; the theory is applied to parent distributions with two or six degrees of freedom according to the sample statistic to be processed. The RVs, represented by the same number of samples N k p from hybrid stirring, are all assumed to be identically distributed (ID), so that the theoretical PDF, and hence the corresponding uncertainty, can be obtained. It is specified that M in [13] corresponds to k in [11] and here when MS and FS, but no position stirring, are considered, whereas M corresponds to the product k p in [11] and in this paper when MS, FS, and position stirring are considered. It is important to highlight that the standard MU obtained here and in [11] is equivalent to that obtained in [13] for the average power when the dependence on frequency and the non-uniformity of the field are negligible in the FSB. For such measurement conditions, the averages W and Wmp certainly exhibit PDFs that can be approximated by a normal (Gaussian) sampling distribution, according to the total number of acquired samples N k p, and the confidence intervals can also be obtained. However, one can note that the N k p RVs are not strictly ID, as the IL is subject to the non-uniformity of the fields inside an RC; such variations are affected by the load of the RC, as mentioned above. In other words, the RVs have the same type of PDF, but not strictly the same mean and standard deviation; that is, they are not strictly statistically equivalent. At low frequencies, the distributions of the field and power deviate from those theoretically known; therefore, the theory is an approximation at low frequency. From the experimental point of view, when all samples are mixed together, the real total uncertainty is obtained. However, the assumption of ID RVs simplifies the theoretical developments and is certainly acceptable for a small FSB and little non-uniformity of the field in the RC. The theory can also be extended to cases where the fields are partially incoherent, i.e., cases where K > 0 [13, p. 31]. In [13], many PDFs of practical interest and the related uncertainties are derived, as well as the corresponding confidence intervals, including the PDF and the uncertainty of the maximum value for both field and power.
The standard "uncertainty of the uncertainty" is also achieved. Results from some applications expected for the standard [1] are also shown. Finally, we believe that the averages W and Wmp exhibit PDFs approximately normal in all common usable measurement conditions in RCs including loaded RCs [20], [24]. Similarly, we believe that the assumption of RVs ID made in [13], along with the extension of the theory to cases where K > 0, causes an acceptable approximation in all common usable measurement conditions in RCs including loaded RCs.
VI. CONCLUSIONS
In this paper, an enhancement of the previous model for the standard MU in an RC is presented; it is de facto a generalization of the previous model. Moreover, a useful majorant of the standard MU is derived as well. Measurement results show that the enhanced model works well at both high and low frequencies; it includes the previous model as a particular case and does not require specific conditions for its validity. The majorant requires a weak condition on the CV of the parameter to be measured, namely that it be less than or equal to one. The majorant, which corresponds to the previous model, could be used when the abovementioned CV is less than one and a conservative margin is considered for the statistical fluctuations; however, it may not work well at low frequencies, where the condition for its validity is not guaranteed. Finally, the comparison between the model presented here and that in [13] was discussed; it is concluded that both approaches are practically sound.
"Engineering"
] |
Numerical Simulation and Compact Modeling of Thin Film Transistors for Future Flexible Electronics
In this chapter, we present a finite element method (FEM)-based numerical device simulation of low-voltage DNTT-based organic thin film transistor (OTFT) by considering field-dependent mobility model and double-peak Gaussian density of states model. Device simulation model is able to reproduce output characteristics in linear and saturation region and transfer characteristics below and above threshold region. We also demonstrate an approach for compact modeling and compact model parameter extraction of organic thin film transistors (OTFTs) using universal organic TFT (UOTFT) model by comparing the compact modeling results with the experimental results. Results obtained from technology computer-aided design (TCAD) simulation and compact modeling are compared and contrasted with experimental results. Further we present simulations of voltage transfer characteristic (VTC) plot of polymer P-channel thin film transistor (PTFT)-based inverter to assess the compact model against simple logic circuit simulation using SmartSpice and Gateway.
Introduction
The interest in organic thin film transistors (OTFTs) has increased significantly in the past few years, and they have been demonstrated for various applications such as flexible low-cost displays [1], organic memory [2], key components of RFID tags [3], low-end electronic products, and polymer circuits and sensors [4]. Flexible electronics is a new technology that builds electronic circuits by depositing electronic materials on flexible substrates like plastics, paper, and even cloth. Compared with inorganic electronics, organic or flexible electronics have the following advantages. First, they can be manufactured at very low cost and at low temperatures. Second, they are thin, lightweight, foldable, and bendable, with strong light absorption, no crushing, mechanical flexibility, low energy consumption, and high emission efficiency. Third, the cost is lower due to cheaper materials and lower-cost deposition processes [5], and they can be used for large-area applications. In practice, a stack of organic semiconductors (OSCs) and low-temperature polymer gate dielectrics, together with a rapid annealing process, is suitable for high-throughput, low-cost printed manufacturing [6]. Researchers have replaced inorganic semiconductors with organic materials such as DNTT [7], poly(3-octylthiophene) (P3OT), poly(3-hexylthiophene) (P3HT), and poly(3-alkylthiophene) layers, and dielectric layers are chosen to achieve complete flexibility. A bigger challenge is to enhance the real performance of organic devices so as to expand their usage in real-time commercial applications [8]. To enhance device speed, a great deal of research effort has been dedicated to increasing the mobility of organic materials by improving the deposition conditions [9]. In addition to mobility, other methods of improving OTFT performance include scaling the channel length and changing the active-layer thickness. The OTFT is usually fabricated in an inverted structure with the gate at the bottom and the source and drain at the top. Gundlach et al. [10] showed that the bottom-contact structure has a strong dependence on the contact barrier and that, due to the different nature of the interface between the channel and the insulator, the device exhibits different electrical properties [11]. Recently, for p-type OSCs, very high mobility values of several tens of cm² V⁻¹ s⁻¹ have been reported for polymers and small molecules, indicating that OSCs have great potential for improved performance through chemical structure and process optimization [12]. In addition to performance, a deep understanding of the instability issues of OTFTs and finding stable and reliable solutions is therefore very important [13]. Since systematic experimental investigation is very costly and time-consuming, technology computer-aided design (TCAD) simulation of semiconductor devices is becoming very important for investigating the design and electrical characteristics of a device prior to fabrication. Organic semiconductor technology has emerged over the past 20 years, and the corresponding device models have been developed and studied over the past decade. Compared to the silicon industry, for which public models are clearly defined and commonly used to provide designers with a relatively good description of the process, organic transistors still lack complete device models that can fully describe their electrical characteristics. Therefore, TCAD simulation and compact modeling of organic transistors become very important.
In this chapter, we present a 2D device simulation of a low-voltage DNTT-based OTFT using Silvaco's ATLAS 2D simulator, which solves the Poisson equation [14]-[22], the continuity equations for charge carriers, the drift-diffusion transport model, and the density of defect states model to simulate the electrical characteristics of the device. Silvaco's UTMOST IV model parameter extraction software is used to obtain compact model parameters with the UOTFT model [23]. The TCAD simulation results and compact modeling results are compared and contrasted with the experimentally measured characteristics of the device. The compact model has been applied to logic circuit simulation, and the voltage transfer characteristics of a PTFT-based inverter circuit have been simulated using the compact model parameters extracted with the UOTFT model. This chapter contains five sections. This section introduces the content of the chapter. The device structure and simulation are described in Section 2. The compact modeling, model verification, and parameter extraction are explained in Section 3. The results and discussion are given in Section 4. Finally, the conclusions drawn are given in Section 5.
Device structure and finite element-based numerical simulation
The OTFT is designed on the bottom-gate top-contact of a flexible PEN substrate. A gate dielectric composed of a 3.6-nm-thick aluminum oxide layer and a 1.7-nm-thick n-tetradecylphosphonic acid self-assembled monolayer (SAM) was used [24]. Next, an organic semiconductor layer having a thickness of 11 nm was placed on the AlOx/SAM gate dielectric. The AlOx/SAM gate dielectric (5.3 nm) is very small in thickness and has a large capacitance per unit area, so transistors and circuits can operate at a low voltage of about 3 V. The OTFT has a channel length of 200 μm and a channel width of 400 μm, Lov = 10 μm.
The energy band diagram of a metal-insulator-semiconductor (MIS) structure is given in Figure 1. The valence band maximum (E V ) and the conduction band minimum (E C ) of an inorganic semiconductor correspond closely to the HOMO and LUMO of an organic semiconductor. For DNTT in particular, the HOMO is approximately -5.19 eV and the LUMO is about -1.81 eV [7, 24]. This gives a sufficiently large HOMO-LUMO energy gap of 3.38 eV, which is adequate for transistor operation.
To start the ATLAS simulation, we defined the physical structure and device dimensions, including the location of the electrical contacts. Figure 2 shows a cross-sectional view of the bottom-gate, top-contact DNTT-based OTFT.
Device physical equation
We can calculate the charge carrier densities by simultaneously solving the basic device equations, including the Poisson equation [14]-[22], the electron and hole continuity equations, the charge transport equations, and the defect density of states equations. The first three are the default equations that ATLAS uses to determine the electrical behavior of the device.
The Poisson equation determines the electrostatic potential (and hence the electric field) in the device from the mobile carriers and the distribution of fixed charges, as given by Eq. (1):

∇ · (∈ ∇ψ(x,y)) = −ρ(x,y)   (1)

where ∈ is the permittivity of the region, ψ(x,y) is the electrostatic potential, and ρ(x,y) is the charge density given by Eq. (2).
ρ(x,y) = q [ p(x,y) − n(x,y) + N_D⁺(x,y) − N_A⁻(x,y) ]   (2)

where p(x,y) is the hole density, n(x,y) is the electron density, N_D⁺(x,y) is the ionized donor density, and N_A⁻(x,y) is the ionized acceptor density. To account for trapped charge, the Poisson equation is modified by adding an additional term Q_T representing the trapped charge, where Q_T = q(N_tD⁺ + N_tA⁻), N_tD⁺ = DENSITY × F_tD, and N_tA⁻ = DENSITY × F_tA. Here, N_tD⁺ and N_tA⁻ are the ionized density of donor-like traps and the ionized density of acceptor-like traps, respectively, and F_tD and F_tA are the probabilities of ionization of donor-like and acceptor-like traps, respectively.
Due to charge accumulation, a potential is generated, which affects the intensity of electric field distribution and current. The voltage applied to the gate electrode generates an electric field that attracts a few or majority carriers. In addition, for OTFTs, the voltage potential between the source and the drain establishes another electric field along the channel that drives the charge carriers and produces a current.
The continuity equations describe the dynamics of the charge carrier distributions over time, as given in Eqs. (4) and (5):

∂n/∂t = (1/q) ∇ · J_n + G_n − R_n   (4)
∂p/∂t = −(1/q) ∇ · J_p + G_p − R_p   (5)

In these equations, q is the magnitude of the electronic charge, n is the electron density, p is the hole density, J is the corresponding current density, G is the corresponding generation rate, and R is the corresponding recombination rate. For organic/metal-oxide semiconductor field-effect transistors (MOSFETs), there is no optical absorption, so the generation term is simplified and the material properties are described by the minority-carrier recombination lifetime. Since MOSFETs are majority-carrier devices, the details of carrier generation and recombination are relatively unimportant. The physical properties of organic semiconductors depend on the generation and movement of polarons [25].
A third important set of equations describing the device physics gives the carrier current densities; they contain drift and diffusion parts:

J_n = q n μ_n E + q D_n ∇n   (6)
J_p = q p μ_p E − q D_p ∇p   (7)

These equations determine the current density from the carrier mobility (μ), the electric field (E), the carrier densities (n, p), and the carrier diffusion coefficients (D). The diffusion coefficients are related to the mobilities through the Einstein relation, D_n = (k T_L / q) μ_n and D_p = (k T_L / q) μ_p. In summary, the ATLAS software solves the Poisson equation, the continuity equations, and the current density equations [26, 27] simultaneously at each node of a two-dimensional grid for a given device structure, subject to boundary conditions (including those applied at the contacts). With the help of ATLAS, the electric field distribution and the electron and hole current densities are calculated at each node, and the terminal current at each electrode is obtained.
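ATLAS solves the coupled Poisson, continuity, and drift-diffusion equations self-consistently on a 2D mesh. Purely as an illustration of how such equations are discretized on a grid, here is a minimal 1D finite-difference Poisson solve with a prescribed charge density; it is not the ATLAS algorithm, and the function name and interface are hypothetical.

```python
import numpy as np

def solve_poisson_1d(rho, eps, dx, psi_left, psi_right):
    """Minimal 1-D finite-difference Poisson solve:  d/dx( eps * dpsi/dx ) = -rho.
    rho, eps: arrays of length n (charge density and permittivity per node),
    dx: grid spacing, psi_left/psi_right: Dirichlet boundary potentials."""
    n = len(rho)
    A = np.zeros((n, n))
    b = -rho * dx**2
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = psi_left, psi_right
    for i in range(1, n - 1):
        e_m = 0.5 * (eps[i - 1] + eps[i])   # permittivity on the left half-interval
        e_p = 0.5 * (eps[i] + eps[i + 1])   # permittivity on the right half-interval
        A[i, i - 1], A[i, i], A[i, i + 1] = e_m, -(e_m + e_p), e_p
    return np.linalg.solve(A, b)
```

In a real device simulator this solve is iterated (e.g., with Gummel or Newton schemes) together with the continuity equations until the potential and carrier densities are mutually consistent.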
Density of defect states model
The assumed total density of states (DOS), g(E), consists of four bands: two tail bands (an acceptor-like conduction-band tail and a donor-like valence-band tail) and two deep-level bands (one donor-like and one acceptor-like), which are modeled using Gaussian distributions [15]-[22], [28]. Here, E is the trap energy, E C is the conduction band energy, E V is the valence band energy, and the subscripts (T, G, A, D) stand for tail, Gaussian (deep level), acceptor, and donor states, respectively. For the exponential tails, the DOS is defined by its conduction- and valence-band-edge intercept densities (NTA and NTD) and by its characteristic attenuation energies (WTA and WTD).
For the Gaussian distributions, the DOS is defined by the total state densities (NGD and NGA), the characteristic attenuation energies (WGD and WGA), and the peak energies (EGD and EGA).
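Assuming the usual exponential-tail plus double-Gaussian forms that these parameter names suggest, the total defect DOS can be sketched as below; the exact normalization of the Gaussian prefactors in the simulator may differ, so this is illustrative only.

```python
import numpy as np

def defect_dos(E, Ec, Ev, NTA, WTA, NTD, WTD, NGA, WGA, EGA, NGD, WGD, EGD):
    """Total defect DOS g(E): two exponential band tails plus two deep Gaussians.
    Energies in eV; densities in the simulator's per-volume, per-energy units."""
    g_ta = NTA * np.exp((E - Ec) / WTA)            # acceptor-like conduction-band tail
    g_td = NTD * np.exp((Ev - E) / WTD)            # donor-like valence-band tail
    g_ga = NGA * np.exp(-((EGA - E) / WGA) ** 2)   # acceptor-like deep Gaussian
    g_gd = NGD * np.exp(-((E - EGD) / WGD) ** 2)   # donor-like deep Gaussian
    return g_ta + g_td + g_ga + g_gd
```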
Trapped carrier density
The ionized densities of donor and acceptor states are given by Eqs. (14) and (15), where p TA, p GA, n TD, and n GD are defined below. f tGA(E,n,p) and f tTA(E,n,p) are the ionization probabilities for the Gaussian and tail acceptor DOS, while f tTD(E,n,p) and f tGD(E,n,p) are the occupation probabilities of a trap level at energy E for the tail and Gaussian donor states; the steady-state expressions are given by the corresponding equations [24]-[27], where v n is the thermal velocity of electrons, v p is the thermal velocity of holes, and n i is the intrinsic carrier concentration. SIGGAE and SIGTAE are the electron capture cross sections of the Gaussian and tail acceptor states, respectively; SIGGAH and SIGTAH are the corresponding hole capture cross sections; and SIGTDE, SIGGDE, SIGTDH, and SIGGDH are the equivalents for the donor states [8].
Poole-Frenkel mobility model
Miller et al. [29] first described the single-phonon hopping rate used to model hopping transport in inorganic semiconductors. Later, Vissenberg et al. [30] studied the dependence of carrier transport on the energy distribution and hopping distance in amorphous transistors, which further helps to determine the carrier mobility. The widely used Poole-Frenkel mobility model [31] gives the field-dependent mobilities μ_p^PF(E) and μ_n^PF(E) for holes and electrons, respectively, where μ_n0 and μ_p0 are the zero-field mobilities for electrons and holes, and E is the electric field. DELTAEN.PFMOB and DELTAEP.PFMOB are the activation energies at zero electric field for electrons and holes, respectively; BETAN.PFMOB and BETAP.PFMOB are the electron and hole Poole-Frenkel factors; and Tneff and Tpeff are the effective temperatures for electrons and holes.
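A sketch of the basic Poole-Frenkel field dependence implied by these parameters is given below; the simulator's full model may include additional fitting terms, so this is an illustrative form rather than the exact ATLAS expression.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def pf_mobility(E, mu0, delta_e, beta, t_eff):
    """Basic Poole-Frenkel field-dependent mobility:
    mu(E) = mu0 * exp(-delta_e/(k*T_eff)) * exp(beta*sqrt(E)/(k*T_eff)).
    E in V/cm, delta_e in eV, beta in eV*(cm/V)**0.5, t_eff in K."""
    kt = K_B * t_eff
    return mu0 * np.exp(-delta_e / kt) * np.exp(beta * np.sqrt(E) / kt)
```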
Compact modeling, model parameter extraction, and model verification
The technology and operation of organic thin film transistors (OTFTs) have various unique features that require a dedicated compact TFT model. The important features of OTFTs include operation in the carrier accumulation mode, an exponential density of states, interface traps and space-charge-limited carrier transport, nonlinear parasitic resistance, source and drain contacts without junction isolation, and the dependence of mobility on carrier concentration, electric field, and temperature. The universal organic TFT (UOTFT) model [23] extends the unified charge control model (UCCM), previously used for a-Si and poly-Si TFTs [23,32], to OTFTs and introduces a general expression for modeling the channel conductivity of OTFTs [30,33]. In this way, the UOTFT model is applicable to various OTFT device architectures, material specifications, and manufacturing technologies.
Model features
The UOTFT model is based on a general-purpose compact modeling approach [23,32], which provides smooth interpolation of the drain current between the linear and saturation operating regions, including channel length modulation effects, and also provides a unified expression for the gate-induced charge in the conductive channel that is valid in all operating regimes. This model also gives a unified charge-based mobility description, the drain-source current, and the gate-to-source and gate-to-drain capacitances.
Model description
The control equations for the UOTFT model are described here for the n-channel OTFT case. The p-channel case can be obtained by a direct change of the voltage, charge, and current polarities.
The charge accumulated in the channel per unit area at zero channel potential, (−Q_acc)_0, is calculated with the help of the solution of the UCCM equation given in [34].
Here, C_i is the gate insulator capacitance per unit area, Vgse is the effective intrinsic gate-source voltage, Vgs is the intrinsic gate-source voltage, VT is the temperature-dependent threshold voltage parameter, and VO is the temperature-dependent characteristic voltage describing the carrier density of states, including the influence of interface traps; ε_0 is the vacuum permittivity, and EPSI and TINS are model parameters representing the relative permittivity and thickness of the gate insulator, respectively.
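The UCCM equation itself is not reproduced above. As an illustration only (not the exact UOTFT formulation), one widely used explicit interpolation with the expected limiting behavior - linear charge control C_i·V_GT in strong accumulation and an exponential decay below threshold, with VO setting the width of the transition - is:

(−Q_acc)_0 ≈ C_i · VO · ln[1 + exp(V_GT / VO)]

where V_GT denotes the gate overdrive, i.e. the effective gate-source voltage measured from the threshold voltage VT (a symbol introduced here for the sketch).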
Effective channel mobility
For accurate modeling of OTFTs, the characteristic power-law dependence of mobility on carrier concentration should be considered. Following the results of percolation theory [30], the effective channel mobility is expressed in the UOTFT model in terms of the model parameters MUACC, VACC, and GAMMA. MUACC is a temperature-dependent parameter that defines the effective channel mobility at the onset of strong channel accumulation; this onset point is controlled by the model parameter VACC. The power-law dependence of the mobility on carrier concentration is defined by the temperature-dependent model parameter GAMMA.
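The expression is not reproduced in the text; a power law of the general form implied by these parameter definitions, assumed here for illustration, is:

μ_eff = MUACC · [(−Q_acc)_0 / (C_i · VACC)]^GAMMA

so that μ_eff equals MUACC when the accumulated charge corresponds to a gate overdrive of VACC.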
Intrinsic drain-source current
The drain-source current of the intrinsic transistor due to the charge carriers accumulated in the channel is defined by general interpolation expressions [23], where G_ch is the effective channel conductance in the linear region, V_dse is the effective intrinsic drain-source voltage, V_ds is the intrinsic drain-source voltage, the parameter LAMBDA defines the finite output conductance in the saturation region, and MSAT is the model parameter that provides a smooth transition between linear and saturated transistor operation. I_sat is the ideal intrinsic drain-source saturation current. The effective channel conductance in the linear region, G_ch, is obtained from G_ch0, the intrinsic effective channel conductance in the linear region, and R_ds, the nonlinear bias-dependent series resistance of the intrinsic channel region defined by the temperature-dependent model parameter RDS and the model parameter VRDS; Weff and Leff are the effective channel width and length, respectively.
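The interpolation expressions themselves are omitted above. A generic form consistent with these definitions, of the kind used in unified TFT compact models and assumed here as a sketch rather than the exact UOTFT equations, is:

I_ds0 = G_ch · V_dse · (1 + LAMBDA · V_ds),   V_dse = V_ds / [1 + (V_ds / V_sat)^MSAT]^(1/MSAT)
G_ch = G_ch0 / (1 + G_ch0 · R_ds),   G_ch0 = (Weff / Leff) · μ_eff · (−Q_acc)_0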
The drain saturation current I_sat is determined from the saturation voltage V_sat, which is in turn set by the temperature-dependent model parameter ASAT.
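To make the flow of the compact-model calculation concrete, the following minimal Python sketch evaluates an intrinsic drain current from the generic expressions sketched above, taking V_sat proportional to the gate overdrive through ASAT and I_sat = G_ch·V_sat. Everything here is an illustrative assumption: the function name ids_intrinsic and all parameter values are invented for the sketch (they are not the extracted DNTT parameters of Table 2), voltages are handled as magnitudes for the p-channel device, and the leakage term is omitted.

import numpy as np

# Illustrative parameter values (assumptions, not extracted values)
CI = 3.4e-8            # gate insulator capacitance per unit area [F/cm^2]
VT = 1.0               # threshold voltage magnitude [V]
VO = 0.1               # characteristic voltage [V]
MUACC, VACC, GAMMA = 1.0, 1.0, 0.2   # mobility parameters [cm^2/Vs, V, -]
ASAT, MSAT, LAMBDA = 1.0, 2.0, 0.0   # saturation/interpolation parameters
RDS = 0.0              # series resistance [Ohm] (neglected in this sketch)
W, L = 400e-4, 200e-4  # channel width and length [cm] (device geometry)

def ids_intrinsic(vgs, vds):
    """Accumulation-channel drain current from the generic interpolation scheme."""
    vgt = vgs - VT                                   # gate overdrive
    q_acc = CI * VO * np.log1p(np.exp(vgt / VO))     # smooth charge-control interpolation
    mu_eff = MUACC * (q_acc / (CI * VACC)) ** GAMMA  # power-law effective mobility
    g_ch0 = (W / L) * mu_eff * q_acc                 # intrinsic channel conductance
    g_ch = g_ch0 / (1.0 + g_ch0 * RDS)               # conductance including series resistance
    v_sat = max(ASAT * vgt, 1e-12)                   # saturation voltage (kept positive)
    v_dse = vds / (1.0 + (vds / v_sat) ** MSAT) ** (1.0 / MSAT)
    return g_ch * v_dse * (1.0 + LAMBDA * vds)       # smooth linear-to-saturation current

# Example: transfer-curve points at |VDS| = 2 V
for vgs in (1.5, 2.0, 2.5, 3.0):
    print(f"|VGS| = {vgs:.1f} V  ->  |IDS| = {ids_intrinsic(vgs, 2.0):.3e} A")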
The drain-source leakage current is obtained from an expression in which IOL is a temperature-dependent leakage saturation current model parameter, NGSL and NDSL are non-ideality factors for the gate and drain bias, respectively, SIGMAO is a model parameter representing the zero-bias drain-source conductivity, and V_th is the thermal voltage at the device operating temperature. The total intrinsic drain-source current is obtained by combining the accumulation-channel current with this leakage contribution.
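The leakage expression is not reproduced above. A form consistent with these parameter definitions, of the kind used in related unified TFT models and assumed here for illustration rather than quoted from the UOTFT documentation, is:

I_leak = IOL · [exp(V_ds / (NDSL · V_th)) − 1] · exp(−V_gs / (NGSL · V_th)) + SIGMAO · V_ds

so that the total intrinsic current would be I_ds = I_ds0 + I_leak.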
Material parameters used for DNTT
The DNTT-based OTFT is designed in a bottom-gate, top-contact configuration. The designed structure has a channel length of 200 μm and a channel width of 400 μm with L_ov = 10 μm, as shown in Figure 2. The parameters used for the simulation of the DNTT-based OTFT structure [24] are listed in Table 1. Figure 3 shows the transfer characteristics obtained from the TCAD-based numerical simulation and the compact model-based simulation of the DNTT-based organic thin film transistor, together with the measured characteristics of the DNTT-based OTFT [24]. The transfer characteristics are obtained by varying the gate-to-source voltage (V_GS) from 0 to −3 V while keeping the drain voltage constant at −2 V. There is very good agreement between the TCAD-based numerical simulation, the compact model-based simulation, and the experimental transfer characteristics of the fabricated device. Figure 4 shows the output characteristics obtained from the TCAD-based numerical simulation and the compact model-based simulation of the DNTT-based organic thin film transistor, together with the measured output characteristics of the DNTT-based OTFT [24]. The output characteristics are obtained by varying the drain-to-source voltage (V_DS) from 0 to −3 V while keeping the gate-to-source voltage (V_GS) constant at −1.5, −1.8, −2.1, −2.4, −2.7, and −3.0 V. The simulated output characteristics match the experimental output characteristics of the fabricated device.
Parameter extraction
The OTFT model parameters extracted for the low-voltage DNTT-based OTFT using the UOTFT model are given in Table 2. The extraction process starts with the collection
Simulation of logic circuit
To validate the UOTFT model, a simple logic circuit based on p-type OTFTs only has been implemented and simulated. The schematic in Figure 5 shows the simple inverter circuit used in the simulation, with a load transistor held at an auxiliary gate voltage (V).
Table 2. Model parameters extracted for the UOTFT model.
The given inverter circuit works like a potential divider between the driver and the load OTFT. When the input voltage is lower in magnitude than the threshold voltage (more positive than VT), the driver OTFT turns off. Conversely, when it exceeds the threshold voltage (more negative than VT), the driver OTFT turns on. The operation of the inverter also depends on the size of the load TFT relative to the driver TFT. To assess whether the simulation correctly reproduces this dependence, the size of the load OTFT and its gate voltage (V) are kept fixed, while the size and gate voltage of the driver OTFT are varied. Figure 6 shows the voltage transfer characteristic (VTC) of the inverter circuit for driver-TFT W/L ratios of 10, 120, and 1140. As the W/L ratio of the driver OTFT increases, its impedance decreases, and the transition between the high and low states becomes sharper.
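To illustrate the potential-divider behavior described above and its dependence on the driver W/L ratio, the following self-contained Python sketch solves a two-transistor inverter by bisection on the output voltage. It deliberately uses a crude square-law TFT current (not the UOTFT expressions), treats all voltages as magnitudes, and the function names (ids, vtc_point) and parameter values are assumptions made only for this sketch.

def ids(vgs, vds, w_over_l, mu=1.0, ci=3.4e-8, vt=1.0):
    """Crude above-threshold TFT drain current (all voltages as magnitudes)."""
    vgt = max(vgs - vt, 0.0)
    vdse = min(vds, vgt)  # linear region up to saturation at vds = vgt
    return w_over_l * mu * ci * (vgt - 0.5 * vdse) * vdse

def vtc_point(vin, vdd=3.0, wl_driver=10.0, wl_load=2.0, vload_gate=3.0):
    """Output voltage of the p-type-only inverter, solved by bisection on Vout."""
    lo, hi = 0.0, vdd
    for _ in range(60):
        vout = 0.5 * (lo + hi)
        i_driver = ids(vin, vout, wl_driver)           # pulls the output node down
        i_load = ids(vload_gate, vdd - vout, wl_load)  # pulls the output node up
        if i_driver > i_load:
            hi = vout  # node discharges, so the equilibrium Vout lies lower
        else:
            lo = vout
    return 0.5 * (lo + hi)

# Example: coarse voltage transfer characteristic (magnitudes only)
for vin in (0.0, 1.0, 1.5, 2.0, 3.0):
    print(f"|Vin| = {vin:.1f} V  ->  |Vout| = {vtc_point(vin):.2f} V")

Raising wl_driver in this sketch steepens the high-to-low transition, mirroring the trend reported for the simulated inverter.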
Conclusion
We presented a finite element method (FEM)-based device simulation of a low-voltage DNTT-based OTFT, considering a field-dependent mobility model and a double-peak Gaussian density of states model, using the device simulator ATLAS. We also presented the application of the UOTFT model and a parameter extraction method for organic TFTs. The numerical simulations, the experiments, and the compact-model simulations show the same behavior, as demonstrated by the agreement in Figure 3 and Figure 4. We simulated an OTFT based on DNTT, demonstrated the application of the UOTFT model to organic TFTs, and used experimental data from DNTT-based OTFTs to extract parameters for Silvaco's general-purpose organic TFT compact model. The model has been verified against a logic circuit simulation. It can be concluded that the UOTFT model provides accurate modeling with a simple parameter extraction procedure for various organic TFTs. The results show that the UOTFT model correctly reproduces the behavior of the devices reported in this study and is expected to be usable for more complex circuits based on organic thin film transistors. | 4,666.2 | 2020-06-10T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics",
"Computer Science"
] |
The role of astrocyte‐mediated plasticity in neural circuit development and function
Neuronal networks are capable of undergoing rapid structural and functional changes called plasticity, which are essential for shaping circuit function during nervous system development. These changes range from short-term modifications on the order of milliseconds, to long-term rearrangement of neural architecture that could last for the lifetime of the organism. Neural plasticity is most prominent during development, yet also plays a critical role during memory formation, behavior, and disease. Therefore, it is essential to define and characterize the mechanisms underlying the onset, duration, and form of plasticity. Astrocytes, the most numerous glial cell type in the human nervous system, are integral elements of synapses and are components of a glial network that can coordinate neural activity at a circuit-wide level. Moreover, their arrival to the CNS during late embryogenesis correlates to the onset of sensory-evoked activity, making them an interesting target for circuit plasticity studies. Technological advancements in the last decade have uncovered astrocytes as prominent regulators of circuit assembly and function. Here, we provide a brief historical perspective on our understanding of astrocytes in the nervous system, and review the latest advances on the role of astroglia in regulating circuit plasticity and function during nervous system development and homeostasis.
neurons belonging to eleven distinct functional domains organized along the dorsal-ventral axis [17]. Astrocytes tile throughout the vertebrate spinal cord, and interestingly, spinal cord astrocytes also show domain-specific expression profiles, leading to the hypothesis that domain-specific astrocytes represent subclasses. Indeed, a recent report found that depletion or mutation of astrocytes from the pMN domain of the mouse spinal cord disrupted sensorimotor circuit formation and maintenance, and that astrocytes from neighboring domains do not repopulate the region to compensate [12,13]. More recently, in situ single-cell gene expression analyses of cortical astrocytes found laminar organization of astrocyte transcriptomes, as well as markers for superficial, mid and deep astroglial populations in adult mouse and human cortex [18]. Disruption of neuronal differentiation in murine cortical layers L2-4 resulted in aberrant astrocyte organization in the superficial layers. Moreover, inversion of neuronal layers in the cortex at postnatal day 14 (P14) resulted in similarly upturned astrocytic marker expression. Together, these data demonstrate that astrocytes show region-specific expression and function [19]. It remains to be tested whether these changes are astrocyte-intrinsic, or whether the neuronal microenvironment resolves local astroglial identity.
Astrocyte expansion coincides with synapse development
Recent data suggest that astrocytes are also critical regulators of nervous system development [8,20]. Indeed, studies from multiple model systems suggest that the expansion of astrocytic membrane domains occurs in tandem with the birth and refinement of synapses within individual circuits, including visual processing, attention, memory, and motor control pathways [21][22][23][24][25][26][27][28]. Astrocytes extend fine processes to establish non-overlapping territories after the first postnatal week in the developing mouse cortex, coincident with synaptogenesis [29]. The timing of astrocyte migration and expansion in the developing spinal cord occurs earlier in rodents, ranging from late prenatal stages to postnatal day seven [30]. Similarly, astrocytes extend processes into the Drosophila ventral nerve cord (analogous to the vertebrate spinal cord) during the final stage of embryogenesis [31], and by 6 days post-fertilization in the developing zebrafish spinal cord [32]. In humans, astrocytes are born in late fetal stages [33]. Although studies of human astrocyte development are challenging and a precise timecourse of human astrocyte-synapse association has yet to be done [34,35], a single cortical human astrocyte can extend processes from the soma that gradually ensheath upwards of two million individual synapses [36]. It is noteworthy that human astrocytes are larger, more structurally complex, and more diverse than astrocytes in any other chordates assessed to date [14,[36][37][38][39]. In each case, the expansion of astrocytic membranes into the neuropil occurs alongside synaptogenesis. Together, these studies intimated that astrocyte-derived cues could influence synapse development, and vice versa.
Fig. 1 Astrocytes locally support neuronal synapses. a Light microscopy image of a single astrocyte (cyan) contacting the pre-synaptic membrane of the Drosophila A18b neuron (magenta), with pre-synapses highlighted in the inset (yellow). First instar larva. Genotype: A18b (94E10-lexA; 8xlexAop-2xBrp-short::cherry; lexAop-myr::GFP), astrocyte (25h07-gal4; hs-FLPG5;; 10xUAS(FRT.stop)myr::smGdP-HA, 10xUAS(FRT.stop)myr::smGdP-V5, 10xUAS-(FRT.stop)myr::smGdP-FLAG). Scale bar, 200 nm. b TEM image showing a single astrocyte (cyan) contacting the pre-synaptic membrane of the A18b neuron (magenta), and the post-synaptic membrane of an A27a neuron (green), with synapses highlighted (yellow asterisks). Genotype: wild type. First instar larva. Scale bar, 500 nm
Astrocyte regulation of synapse number
Given the relatively late timecourse of astrogenesis during nervous system development, astrocytes are not present to regulate embryonic waves of neurogenesis and axon outgrowth. Though outside the scope of this review, note that astrocytes do form part of the neurogenic niche that regulates adult neurogenesis (reviewed in [40]), and that reactive astrocytes regulate axon regrowth and recovery after nervous system injury [41]. During early postnatal development, astrocytes elaborate their fine processes concurrent with synaptogenesis. The timing of astrocyte arrival in the CNS, and their strategic positioning of peri-synaptic processes, make astrocytes an attractive candidate to promote assembly of neurons into neural circuits. We now appreciate that these roles include, but are not limited to, structural and functional synaptogenesis [21,25,26,42], synapse pruning [43,44], and synapse maintenance [13].
Astrocytes in synaptogenesis
A role for astrocytes in synaptogenesis was first defined in the lab of Dr. Ben Barres by taking advantage of mouse retinal ganglion cell (RGC) culturing systems. These pioneering studies demonstrated that addition of astrocytes to neuronal cultures was sufficient to promote synapse formation and spontaneous activity of RGC neurons, which are largely inactive in the absence of glial support [28,45,46]. A similar role for astrocytes in promoting synaptogenesis using rat RGC microcultures was described shortly thereafter [47], and more recently in cultured human cerebral cortical spheroids [48]. Together, these data provide direct cross-species evidence that astrocytes are able to directly promote circuit development and function in vitro. Genetic access to astrocytes during circuit assembly was not available until recently [49][50][51][52][53], yet these advances have rapidly expanded our understanding of the contribution of astrocytes to circuit development. Advances on invertebrate and vertebrate in vivo animal models demonstrate that astrocytes are regulators of neural circuit assembly in C. elegans [54,55]; Drosophila [56,57]; feline [58]; Xenopus [59]; rodent [8,25,28,47]; and human [60,61].
Over the last decade, we have greatly expanded our understanding of the molecular mechanisms by which astrocytes regulate synaptogenesis. Astrocyte-derived (secreted and membrane-bound) synaptogenic and antisynaptogenic cues dynamically interact to finely tune synapse number during neural circuit assembly [8,62]. As these pathways have been extensively covered elsewhere [8,63], here we focus specifically on Hevin and SPARC, which are essential for the generation of functional synapses during mammalian nervous system development, and also regulate synapse plasticity (discussed below) [26]. These proteins are of additional interest given that the upregulation of their expression profiles has been linked to neurodevelopmental disorders [64] and reactive astrogliosis in adults [65][66][67]. The matricellular protein Hevin is secreted by astrocytes, localizes to excitatory CNS synapses throughout the organism's life, and peaks in its expression during synaptogenesis and following CNS injury [14,[67][68][69]. Extensive studies of retinocollicular and thalamocortical synapse development have demonstrated that Hevin is required for the formation and maturation of glutamatergic synapses [26,27,70]. In the latter case, Hevin refines thalamic presynaptic inputs onto cortical dendrites by bridging pre-synaptic Neurexin-1α to dendritic Neuroligin-1B, and loss of Hevin causes a reduction of mature glutamatergic synapses [27]. Astrocytes also produce SPARC (Secreted Protein Acidic and Rich in Cysteine), which acts as a competitive inhibitor to antagonize Hevin-induced synaptogenesis. Accordingly, while Hevin null mice show decreased numbers of excitatory synapses in the superior colliculus, SPARC KO mice show enhanced synaptogenesis in the same brain region at P14 [26]. Interestingly, SPARC does not inhibit excitatory synaptogenesis induced by astrocyte-derived thrombospondins, but is a specific antagonist of Hevin. Because these proteins are not known to physically interact, it remains to be seen how Hevin and SPARC function together to tune synapse number in vivo [26]. More recently, a novel in vivo enzymatic assay defined a proteome for extracellular astrocyte-neuron junctions in the primary visual cortex (V1 cortex) and found that astrocytic Neuronal Cell Adhesion Molecule (NRCAM) binds to NRCAM-gephyrin complexes on postsynaptic neurons to induce the formation and function of inhibitory GABAergic synapses, with only minor effects on excitatory synapses [71]. Together, these results identify a direct role for astrocytes in the control of excitatory and inhibitory synapse assembly and maturation in vivo, while also displaying the heterogeneity of astroglial cues depending on the synapse subtype.
Astrocytes in synapse pruning
Overproduction of synapses and their subsequent experience-dependent elimination is critical for refinement of neuronal circuits during development [72]. This is especially well-characterized during ocular dominance plasticity (discussed further below), and during Drosophila circuit rewiring in metamorphosis and regeneration [43,59]. In 2013, Chung et al., demonstrated that astrocytes and microglia participate in synapse elimination via two activity-dependent phagocytic receptors, MEGF10 and MERTK, which trigger engulfment of excitatory and inhibitory synapses in the developing mouse visual system (Fig. 2c). Loss of MEGF10 in mouse results in a failure to refine retinogeniculate connections in the developing visual system, resulting in ectopic synapses with reduced functionality [43]. Similarly, the Drosophila homolog of MEGF10, Draper, is necessary for clearance of axons during injury and during circuit remodeling [44,73,74]. A feature of the adult CNS is its ability to engage in activity-dependent synaptic plasticity during learning and memory [75]. Strikingly, MEGF10 and MERTK-dependent synaptic engulfment by astrocytes continues through adulthood in both murine and human cortical layers, which may contribute to learning, memory, and disease [43,76,77]. A recent report in Drosophila discovered that during a critical period of brain development in young adults, the extracellular domain of the amyloid precursor protein-like (APPL, homologous to human APP) regulates glial expression of Draper and clearance of neuronal debris after injury [78]. It will be interesting to test whether APPL/APP also regulates developmental pruning of synapses. Thus, the number of synapses on a neuron is not exclusively an intrinsic property but is heavily regulated by glial signals.
Astrocytes tune synapse function and synaptic plasticity
The establishment of functional neuronal circuits does not only depend on early synaptogenic and pruning processes. To achieve precise CNS wiring, the developing nervous system must be able to adapt to the onset of neural activity, which can induce extensive, activity-dependent functional and structural remodeling of mature synapses [79]. Also known as plasticity, these restructuring events are usually driven by the arrival of environmental stimuli via sensory afferents [80,81]. The progression from immature to functionally effective neuronal circuits that drive robust behavior is dependent on careful regulation of short- and long-term remodeling events. These events are strongly enriched during developmental windows called critical periods [80,82]. If changes in synaptic strength are not carefully regulated, the activity passing through a given neuronal circuit could increase or decrease unchecked, resulting in abnormal activity patterns, the loss of sensitivity for synaptic partners, or excitotoxicity [79,83,84].
Fig. 2 Select mechanisms for astrocyte-induced plasticity. a Hebbian plasticity. Recruitment of NMDA receptors is mediated by astrocyte-derived Hevin and the cell adhesion molecules Neuroligin-1 (NL1) and Neurexin-1 (Nrxn1) during the ocular dominance plasticity critical period. Astrocyte chondroitin sulfate proteoglycans (CSPGs) and SPARC stabilize AMPA postsynaptic receptors. Astrocyte gap junction proteins Connexins 30 and 43 regulate metabolite transport through monocarboxylate transporters (MCT1/2) between astrocytes and neurons in an activity-dependent manner to facilitate plasticity. b Homeostatic plasticity. Astrocyte-derived SPARC limits aggregation of AMPA receptors to facilitate synaptic scaling in response to chronic silencing. Additionally, receptors and transporters located in the astrocytic membrane monitor neuronal Ca2+ transients and release of neurotransmitters, resulting in gliotransmitter release. c Structural-homeostatic plasticity. Astrocyte-secreted Chrdl1 restricts neuronal plasticity by directly switching postsynaptic neurotransmitter receptor identity. Astrocyte-derived Neuroligin (NL) binds dendritic Neurexin (Nrxn) to mediate the closure of critical periods by stabilizing dendrite microtubule populations. Synapse elimination is driven by neuronal activity, and is regulated by astroglial MERTK and MEGF10. d Repeated excitatory postsynaptic potentials evoke more robust synaptic activity in potentiated circuits over time. Conversely, synapses targeted by long term depression display lower levels of excitability following stimulation. e Homeostatic mechanisms decrease the difference between synaptic input and output by bidirectionally adjusting the probability of transmitting an action potential postsynaptically
Functional and structural modifications to synapse and circuit function are generally categorized as either Hebbian or homeostatic plasticity (Fig. 2a-c). During Hebbian plasticity, coincident activity at pre- and postsynaptic sites causes modifications that alter synaptic efficacy through a positive feedback loop (Fig. 2d). The most widely studied form of Hebbian plasticity is long-term potentiation (LTP), which underlies long-term memory [79,[85][86][87]. Hebbian plasticity usually occurs at the single-synapse scale rather than circuit-wide, where an increase in presynaptic firing increases the probability of a further increase in postsynaptic gain [79,88]. Conversely, homeostatic plasticity is a negative feedback mechanism that is activated in response to chronic changes in activity and serves to prevent runaway excitation/inhibition in response to Hebbian plasticity (Fig. 2e). Although homeostatic plasticity can function on individual synapses [89], it also functions on the scale of whole neurites, neurons, and even to balance levels of activity through an entire circuit via functional and structural remodeling [88,[90][91][92][93]. The changes that arise from these dynamic remodeling events can have profound effects on circuit function, behavior, and human health [82], yet the developmental mechanisms that promote or restrict plasticity are not yet fully understood at the cellular or molecular level.
Astrocytes regulate Hebbian plasticity, one synapse at a time
Following the discovery of Hebbian plasticity over half a century ago [85], many different forms of remodeling have been identified, including both local (synaptic) and circuit-wide [79,80,86]. However, studies of neurons alone have failed to reveal how circuit plasticity is established and circuit balance is maintained. As mentioned above, RGCs cultured in the presence of astrocytes show elevated neuronal activity [46]. Recent advancements in microscopy and genetic strategies for monitoring glial cell populations have led to a new awareness of how astrocytic networks are strategically arranged to support and modify synaptic activity [47,94,95].
In the mammalian CNS, glutamate triggers ion flow through N-methyl-D-aspartate receptors (NMDARs) on postsynaptic membranes to power excitatory neurotransmission [96]. Repeated stimulation of sensory and learning pathways (such as those in the hippocampus) can enhance recruitment of NMDARs to the postsynaptic terminal, thereby increasing the efficacy of long-term synaptic transmission (e.g. LTP) [97,98]. In addition, the concentration of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs) on postsynaptic membranes can alter short-term synaptic plasticity [99]. As aforementioned, astrocytes secrete the matricellular protein Hevin, which increases the number and size of excitatory synapses in RGC cultures and in vivo during development of the visual system [26,27,70]. Recent studies revealed an additional role for Hevin in neuronal plasticity. During ocular dominance plasticity (ODP), monocular deprivation weakens synapses downstream of the closed eye, while cortical connections downstream of the open eye are coincidently strengthened [58,[100][101][102]. This process is dependent on the differential recruitment of post-synaptic NMDARs [101,[103][104][105]. Astrocyte-derived Hevin organizes presynaptic Neurexins and postsynaptic Neuroligins (binding partners), thereby aligning the pre-synaptic neurotransmitter release machinery with post-synaptic NMDARs during the ODP critical period (Fig. 2a) [27,106]. Accordingly, Hevin null mice exhibit reduced ODP, which can be rescued upon viral delivery of Hevin to astrocytes [27]. Interestingly, though Hevin and SPARC have opposing functions in synaptogenesis, synaptic plasticity (LTP) is also reduced in the hippocampus of SPARC null mice. Reduced LTP in SPARC null animals is not the result of decreased NMDAR localization, but rather of increased levels of AMPARs on postsynaptic membranes and enhanced baseline activity of these synapses [107]. These data suggest that during developmental plasticity, Hevin and SPARC function together to fine-tune the ratio of NMDARs to AMPARs to facilitate LTP. Overall, these data demonstrate that astrocytes modulate synaptic strength by regulating post- and presynaptic receptor composition.
Astrocyte tiling and synaptic transmission
Bidirectional communication between astrocytes and synaptic terminals is critical for the establishment and maintenance of neuronal transmission [8,106]. A property of astrocytes that expands the complexity of these interactions is the capacity of a single astrocyte to associate with millions of synapses within an expanded glial network. This network is generated by gap junctions (GJs) that allow neighboring astrocytes to tile with one another, yet maintain non-overlapping territories [108][109][110]. GJs are intercellular channels that integrate astrocytes into functional syncytia, which facilitate signaling and transport of metabolites between neuronal and non-neuronal tissues [111,112]. The GJ proteins connexin 30 (Cx30) and connexin 43 (Cx43) are highly expressed in astrocytes [69,113], and act to shuttle glucose and its metabolites (lactate) from astrocytes to neurons in an activity-dependent manner (Fig. 2a) [111,114,115]. In the absence of astroglial Cx30 and Cx43, astrocyte-dependent glutamate recycling and potassium buffering at the synapse is impaired. This results in enhanced excitability at hippocampal CA1 Schaffer collateral synapses due to increased levels of AMPARs [94]. Interestingly, analysis of synapse numbers on CA1 neurons of Cx30 −/− Cx43 −/− mice revealed that while the overall number of synapses remains unchanged in the absence of GJs, the pool of silent synapses was significantly decreased. This led to defects in Hebbian plasticity, including suppressed LTP and increased occurrences of long-term depression. Disruptions to astrocyte GJ proteins have been linked to impairments in LTP during memory allocation and stress in the mammalian hippocampus [115][116][117]; thus, GJ-dependent astrocyte communication is critical for developmental learning and behavior.
Astrocyte-derived extracellular matrix modifies synaptic plasticity
Finally, astrocytes may influence synaptic efficacy through modulation of the extracellular environment [118]. Chondroitin sulfate proteoglycans (CSPGs) are glycosylated extracellular matrix proteins secreted by both neurons and glia that form integral components of perineuronal nets (PNNs). PNNs are lattice-like aggregates of CSPGs surrounding neuronal processes [119]. PNNs emerge in concert with the arrival of astrocytes during late postnatal development, corresponding with the closure of critical periods of experience-dependent plasticity [82,120]. CSPGs act as lateral diffusion barriers for AMPARs, and can therefore influence post-synaptic receptor composition to modulate short-term synaptic plasticity (Fig. 2a) [121]. It was recently shown that removal of PNNs from the mouse deep cerebellar nuclei increased synaptic plasticity and improved active learning during eyeblink conditioning, a form of motor learning. Conversely, loss of PNNs inhibited the formation of eyeblink-associative memories [122]. Although astrocytes express a variety of CSPGs during nervous system development [14,69] and following injury [123][124][125], the relative contribution of astrocyte- versus neuron-derived ECM to circuit development and plasticity remains poorly defined [126,127]. An important open question is whether astrocytic CSPGs modulate AMPAR aggregation in vivo during circuit assembly. In addition, it remains unknown whether astrocyte-derived and neuronal-derived CSPGs trigger different signaling cascades. This is especially critical to determine because altered CSPG signaling is linked to poor outcomes in injury and disease [123][124][125][128][129][130][131].
Astrocytes are key regulators of homeostatic plasticity, from synapses to circuits
Homeostatic plasticity arises in response to prolonged changes in network activity, shifting the balance away from extreme excitation (E) or inhibition (I) to maintain E/I balance. Thus, a neuron can preserve its ability to respond to activity via Hebbian plasticity by maintaining a functional state of excitability [132]. This form of plasticity was first theorized to exist as a "normalizing" force in mathematical models of circuit function [133][134][135], as the technology to detect, characterize, and confirm homeostatic plasticity was not available until much later [136][137][138][139][140][141]. Homeostatic plasticity has the capacity to modify multiple substrates within neural circuits. Synaptic scaling is a homeostatic mechanism that alters the strength of individual synapses, which can modify the activity and function of neurotransmission for a single neuron [132,142]. Synaptic scaling is a modification of the number of AMPARs at the post-synaptic terminal, which can reduce synaptic strength and the probability of a postsynaptic potential. Modifications to neurotransmission through synaptic scaling can take place either locally, by removing AMPARs at the terminal [143], or globally, by affecting the rate of transcription of AMPARs within a circuit [144]. Although several neuronal mechanisms for homeostatic plasticity have been defined [141], a looming question is how this form of plasticity is regulated at the circuit level. The juxtaposition of synapses and perisynaptic processes of astrocytes (Fig. 1) makes astrocytes a great target for studying the regulation of homeostatic plasticity [88,145].
We now understand that astrocytes secrete a number of proteins that modulate homeostatic plasticity at the synaptic and circuit level. A well-documented example is SPARC [146,147]. As noted above, analysis of cultured hippocampal slices from astrocyte-specific SPARC knock-out mice revealed increased numbers of postsynaptic GluR2 receptor subunits, which caused an ectopic accumulation of postsynaptic surface AMPARs and impaired LTP following high frequency stimulation [107]. Accordingly, loss of SPARC inhibits synaptic scaling following activity deprivation by TTX (Fig. 2c) [107]. These data demonstrate that astrocytes can co-opt the same signaling pathways to regulate both Hebbian and homeostatic plasticity during circuit assembly.
During critical periods, activity across circuits can also alter neuronal architecture, ranging from retraction/extension of individual synaptic elements to modifications of dendrites and axons. This form of plasticity, also known as homeostatic structural plasticity, can drastically affect the probability of forming synapses between neighboring neurons by physically increasing or decreasing membrane space [88,90,92,93,148,149]. Recent work has defined important roles for astrocytes in homeostatic structural plasticity [42,93,150]. In the CNS, astrocyte-secreted Chordin-like 1 (Chrdl1) was shown to regulate the switch to GluA2-containing AMPARs at synapses in the developing mouse cortex [42]. Moreover, in a monocular enucleation model of ocular dominance homeostatic plasticity (reviewed in [151]), Chrdl1 knock-out resulted in ectopic synaptic remodeling events, suggesting that astrocytes use Chrdl1 to restrict neuronal plasticity (Fig. 2c). Excitingly, a recent report demonstrated that astrocytes close a critical period of motor dendrite remodeling in the Drosophila CNS via astrocyte Neuroligin to motor neuron Neurexin signaling (Fig. 2c) [93]. Importantly, this study pinpoints astrocytes as a putative target in critical period-dependent neurodevelopmental disorders [152].
Astrocytes shape neuronal signaling
Although astrocytes themselves are not electrically excitable, astrocytes measure local synaptic activity via metabotropic and ionotropic-sensing receptors [153][154][155]. Interestingly, astrocytes display Ca 2+ transients that mirror neuronal activity [156][157][158][159][160]. In Drosophila, astrocyte Ca 2+ signaling in the ventral nerve cord occurs in parallel with motor waves [161]. Similarly, co-imaging of Ca 2+ transients in astrocytes and neurons following mouse whisker stimulation demonstrated that neuronal and glial calcium waves operate in synchrony during sensory tasks [162,163]. Voluntary limb movements have also been shown to cause Ca 2+ elevations of motor cortex astrocytes in awake, moving mice, suggesting efferent activity is tightly coupled to astrocyte activation [164]. It remains unclear whether astrocytic Ca 2+ waves originate within the gap junction-coupled astroglial network, or represent the product of the linear summation of neighboring neuronal activity [165][166][167][168][169]. Nevertheless, we now understand that activity-dependent calcium elevation in astrocytes can induce release of small molecules including glutamate, ATP, and D-serine [170][171][172][173] in a calcium-and SNARE protein-dependent mechanism [174]. In turn, these "gliotransmitters" modify synaptic transmission and short-term plasticity (Fig. 3) [155,161,[175][176][177][178][179]. For example, induction of calcium transients within astrocytes by direct manipulation or via inositol tris-phosphate-dependent signaling has been shown to depress or enhance synaptic transmission [154,175,180]. Recently, Ma et al. (2016) demonstrated that a Drosophila TRPA1 calcium channel (Water witch) is expressed in astrocytes and facilitates the accumulation of calcium in response to local neuronal activity. Astrocytic calcium can in turn modulate downstream dopaminergic neuron activity and locomotor behavior in vivo [161]. Indeed, it is now apparent that astrocytes are essential for rhythmic locomotor behaviors [93,[181][182][183]. In mouse, the sensory TRPA1 Ca 2+ channel similarly maintains basal astrocytic calcium levels to facilitate constitutive D-serine release at the synapse. Loss of TRPA1 impairs NMDA-dependent LTP at Schaffer collateral to CA1 pyramidal neuron synapses, demonstrating that fine astrocytic processes tune synaptic plasticity in an activity-dependent manner [184]. There is mounting evidence that astrocyte calcium signaling arises to modulate sensory-evoked neuronal activity. Well documented examples include somatosensory stimulations that trigger Ca 2+ elevations in astrocytes which amplify stimulus-evoked cortical plasticity via noradrenaline and acetylcholine [185][186][187][188]. More recently, Lines et al. (2020) correlated the modulation of somatosensory afferents to astrocyte Ca 2+ waves, and showed that astrocyte activation dampens sensory-evoked neuronal activity in S1 (primary somatosensory cortex) [160]; thus, placing bidirectional astrocyte-neuron communication at the center of sensory information processing in the mammalian cortex. This opens an exciting line of future work, where astrocytic activation could be manipulated to modulate neuronal activity during critical periods of circuit development and disease.
Finally, astrocyte networks may contribute to circuit plasticity during memory allocation and goal-directed behaviors. Ectopic activation of astrocytes is sufficient to induce de novo NMDA-dependent LTP in CA3-CA1 pyramidal neurons [189,190]. Interestingly, the authors also found that while direct neuronal activation impaired memory formation, delayed activation through astrocytes strongly enhanced memory allocation [190], suggesting that indirect signaling through astrocytes may be necessary to gate LTP. Indeed, a recent report suggests that gliotransmission by astrocytes recruits metabotropic glutamate receptors to the presynaptic terminal during spike timing-dependent plasticity, a process that shifts developing hippocampal synapses from long term depression (LTD) to LTP [191,192] (Fig. 3). Extensive work in mouse models has also defined astrocytes as key regulators of inhibition. As aforementioned, astrocyte-derived NRCAM influences inhibitory synapse development and function in the developing visual cortex. Similarly, astrocytic activation in the limbic system can drive depression of excitatory synapses and enhancement of inhibitory synapses in the central amygdala [179]. In the developing somatosensory cortex, astrocytic signaling mediates spike-timing-dependent LTD [192,193]; and in the developing prefrontal cortex, astrocytic GABA B receptors monitor local concentrations of GABA and, in turn, regulate low gamma oscillations (see more below) and goal-directed behaviors [194]. Thus, bidirectional signaling between neurons and astrocytes ensures proper E/I balance in multiple brain regions to shape the flow of information through neural circuits and facilitate the neuronal plasticity that is essential for learning, memory, and goal-directed behaviors.
Astrocytes and circuit pattern generation
It is evident that astrocytes tune synaptic and circuit architecture during sensory-dependent plasticity. Yet, plasticity also occurs even during "idle" periods with little environmental input [195][196][197]. Since the discovery of electroencephalography [198], scientists have identified several rhythmic voltage fluctuations in the brain, from individual neurons to whole neuronal networks [199]. These oscillations emerge in all brain regions, and their patterns underlie the basis for internal day/night cycles, sensory representation, and short term memory [200,201]. Astrocytes are capable of modulating neuronal rhythms by mediating ion homeostasis at the synapse [194,[202][203][204][205][206]. Interestingly, astrocyte-dependent ion homeostasis seems to be critical for oscillatory behaviors such as sleep [207,208]. Indeed, a suite of papers describing the role of astrocytes in sleep in fly and mouse was published in the last six months alone. In brief, astrocytes exhibit calcium waves that follow natural circadian rhythms - they are highest during wake phases and lowest during sleep [207,209]. Interestingly, astrocytes accumulate high levels of calcium during wake cycles in order to encode sleep need [209][210][211], similar to astrocytic calcium encoding futility-induced passivity in zebrafish [178]; enhancing astrocytic calcium caused perpetual sleep in Drosophila [210], and reducing astrocytic calcium is necessary for slow-wave sleep in mouse [207]. Finally, this transition in behavioral state is dependent on the local concentration of neurotransmitters (dopamine, serotonin, and endocannabinoids) sensed by astrocytic receptors [212,213]. It will be interesting to test whether high levels of neuronal activity during the day also drive sleep recovery (e.g. napping) in an astrocyte-dependent manner. Additionally, as sleeping behaviors can change dramatically over the course of organismal development (e.g. neonate versus adult humans) or with the seasons (e.g. torpor), it will be important to determine the degree to which astrocytes contribute to sleeping behaviors outside of the standard diurnal clock. Thus, astrocytes coordinate both developmental and homeostatic circuit activity from the scale of individual synapses to circuits.
Fig. 3 (caption fragment): ... and associated astrocytes (a) at two synaptic (S) connections. Synapse 1: Influx of calcium into gap junction-coupled astrocytes via TRPA1 channels occurs in response to local neuronal activity. Elevation of astrocytic calcium causes astrocyte activation and release of gliotransmitters including D-serine, which induces NMDAR-dependent LTP. Synapse 2: Elevation of astrocytic calcium can also drive release of the gliotransmitters ATP and glutamate, which can stimulate pre-synaptic adenosine and metabotropic glutamate (mGluR) receptors, respectively, to promote signaling of downstream neurons and drive the flow of information through the circuit
Concluding remarks
Synaptic plasticity ensures the correct assembly and tuning of millions of synapses during nervous system development [82,86]. From their perisynaptic location, astroglia have been shown to organize pre-and postsynaptic elements that modify Hebbian mechanisms of plasticity [27,107,190]. Moreover, astrocytes are also capable of instructing homeostatic plasticity both at the synapse and more broadly within neurons and circuits to counterbalance sustained periods of augmented activity [42,93,145]. Astrocytes even contribute to computation within neural networks to drive circuit and animal behavior [161,178,214]. Excitingly, a new study found that neuronal LTP can induce changes in astrocyte perisynaptic coverage to facilitate extended crosstalk between neighboring synapses [183]. Thus, a closer examination of how neurons and astrocytes bidirectionally interact and communicate during developmental circuit plasticity is warranted. The advent of genetic tools for visualization of astrocyte dynamics in zebrafish should provide a rich avenue for such exploration [32,178].
Additionally, although calcium signaling in astrocyte perisynaptic processes often occurs in parallel with neuronal activity [162,163,167,169], global calcium levels can act independently [166,186]. There is recent evidence suggesting that astrocyte networks modulate autonomic control of heart rate, such that astrocyte Cx43-mediated release of ATP presumably regulates excitatory circuits in the brainstem [215]. Given the importance of local astrocytic signaling to circuit function, unraveling the importance of astrocyte-network signaling across broad brain regions is a necessary future line of research.
Finally, the rapid evolution of sequencing techniques has made the term "astrocyte" an umbrella-like category for a group of highly heterogeneous cells [13,184,185,216]. Though there is some in vivo evidence that astrocytes can become locally specialized to provide circuit-specific support [13], this remains an open area ripe for future investigation. Future efforts should be directed at understanding how astrocytes acquire these unique expression profiles, and how this specialization guides their function within individual neurons and circuits. These types of experiments will require the development of intersectional tools that enable manipulation of specific subpopulations of astrocytes within the intact nervous system. Though challenging, the availability of single cell RNA sequencing datasets will undoubtedly speed up identification of region-specific markers that could be used for development of tools to test how functionally diverse astrocyte populations are in vivo, and how this diversity ensures proper neural circuit assembly and function. | 7,033.6 | 2021-01-07T00:00:00.000 | [
"Biology"
] |
Thallium in mineral resources extracted in Poland
Thallium concentrations in primary mineral commodities extracted in Poland and processed at high temperatures were determined by the ICP-MS method. Samples of hard and brown coal, copper-silver and zinc-lead ores, and argillaceous and calcareous rocks of different genesis and age were analyzed. The highest thallium concentrations occur in the zinc-lead ores, with an average content of 52.1 mg/kg. The copper ores contain on average 1.4 mg/kg of thallium. Hard coals from the Upper Silesian Coal Basin display a higher thallium content than those exploited in the Lublin Coal Basin. Brown coals from the Turow deposit are distinguished by a much higher value, 0.7 mg/kg Tl, than those from the huge Bełchatów deposit and the smaller deposits of the Konin-Turek region. Average thallium concentrations in clays used for ceramic materials are lower than 1 mg/kg, except for the Mio-Pliocene Slowiany deposit. The average content of thallium in the studied limestone and dolomite raw materials used for cement, lime, metallurgical flux, and refractories is very low in comparison with the average amounts in world carbonate rocks.
Introduction
Thallium metal and its compounds are regarded as highly toxic. Thallium accumulation in the organism destroys important parts of the cells and causes genetic changes in the embryo, disturbances in the cardiovascular, respiratory, and nervous systems, degenerative changes in the suprarenal glands, alopecia, damage to the liver and kidneys, as well as loss of hearing, sight, and hair (Repetto et al. 1998, Seńczuk 2002). Thallium is a metal dispersed in the environment. Due to its ionic radius, close to those of potassium and rubidium, it often substitutes for these elements (mainly in feldspars and micas). It is a chalcophile element in the hydrothermal environment, entering sulphides (marcasite, pyrite, galena, sphalerite, antimonite, realgar) and sulphosalts (geochronite, boulangerite, jordanite) in amounts from traces to over 0.1%. In the weathering zone and in sedimentary rocks, thallium is bound by secondary minerals (e.g. jarosite, Pb jarosite), clay minerals, iron and manganese hydroxides (psylomelane contains up to 0.1% Tl), and the organic matter. Thallium may be quite easily emitted to the environment during high-temperature production processes. It has low melting (303.5°C) and boiling (1473°C) points, which makes it relatively easy to release to the environment. Thus, smelting of zinc-lead concentrates, reduction of limonitic ironstones, coal burning, and the production of sulphuric acid and cement are the main sources of anthropogenic pollution with thallium.
Materials and Methods
Thallium concentrations were determined in different mineral resources exploited in Poland and processed at high temperatures. 112 samples of hard coal from Carboniferous deposits of the Upper Silesian Coal Basin (USCB) and 29 samples from the Lublin Coal Basin (LCB) were studied. Samples of Miocene brown coal were taken from Turów (25 samples), Bełchatów (42 samples), and, in total, 37 samples from the Adamów, Lubstów, Kazimierz, and Koźmin deposits. All exploited lithotypes of the Lower Zechstein Cu-Ag ore are represented by a total of 152 samples (Polkowice-Sieroszowice, Rudna, and Lubin mines). Zinc and lead ores are represented by a total of 67 samples (Trzebionka mine near Chrzanów and Pomorzany mine near Olkusz). The analyzed samples of clay rocks represent the following genetic and stratigraphic kinds: ice-dammed clays and silts, tills, alluvial clays and muds, eolian loess clays, lacustrine and marine clays and silts, as well as Jurassic and Triassic continental and epicontinental claystones. Individual clay deposits supplied 1-6 samples, depending on lithological variability. In total, 178 samples of clay rocks from 41 deposits were analysed. Carbonate rocks - limestones and marls - were sampled in deposits of a wide span of ages. Individual deposits are represented by 4 up to 13 samples, depending on lithological variability. The total number of samples of carbonate rocks amounts to 137.
The thallium content was determined by the ICP-MS method, using Perkin Elmer ELAN DERCII equipment (USA), after complete dissolution of all studied samples of ores or coarse-grained individual ore minerals, fossil fuels, and rock raw materials.
Results and Discussion
Thallium content in Polish hard coals falls into the interval from <0.2 mg/kg (detection limit) to 5.3 mg/kg. Coals from the Upper Silesian Coal Basin display higher thallium content in comparison to the coals from the Lublin Coal Basin. The contents are 0.5 mg/kg and 0.3 mg/kg in the Upper Silesian and Lublin coals, respectively. The highest concentration was observed in the upper coal series of the USCB, generally enriched in pyritic sulphur, i.e. in the Libiaz and Laziska Beds.
The thallium content in Polish brown coals ranges from <0.2 to 2.4 mg/kg. The coals from the Bełchatów deposit and the small deposits of the Adamów-Konin region, i.e. Koźmin, Lubstów, Adamów, and Kazimierz, are typically low in thallium; the content does not exceed 0.4 mg/kg, and the average is below 0.2 mg/kg. The coals from the Turów deposit are distinguished by significantly higher thallium contents; their average equals 0.7 mg/kg. This enrichment may result from the alimentation of the coal basin by material derived from granitoids of the crystalline basement and from Tertiary volcanic rocks interfingering with the coal-bearing formation.
Polish coals display lower average Tl concentrations in comparison with published data on world deposits. For example, Virginia hard coals contain on average 1.2 mg/kg, and Brazilian coals 2 mg/kg (Kalkreuth et al., 2006; Kabata-Pendias and Mukherjee, 2007). Very high thallium contents exceeding 30 mg/kg are present in the coals from the Wulantuga deposit in China (Qi et al. 2007). Lignites from the Turkish Kantal Basin display 4.2 to 8.0 mg/kg thallium (average 5.8 mg/kg), and those from Pond Creek, Kentucky, USA, contain up to 46 mg/kg in the upper seams (Karayigit et al., 2001; Hower et al., 2005). In contrast, in the majority of brown coal deposits thallium is quite low, e.g. in the Turkish lignites the average thallium content is 0.14 mg/kg, while in coals from Xingren, Guizhou (China) it is 0.11 mg/kg (Gürrdal, 2008; Dai et al., 2006).
In copper ores thallium was found in concentrations ranging from <0.2 to 17.9 mg/kg. The average concentration in the copper-bearing shales is the highest, being much lower in the sandstones and carbonates. The shales contain on average 3.8 mg/kg of thallium, while the dolomites and sandstones contain below 1 mg/kg. The ores from the Rudna mine contain on average 0.9 mg/kg, whereas those from the Polkowice-Sieroszowice and Lubin mines contain 1.4 and 1.5 mg/kg, respectively. The average thallium concentration in the copper-bearing shales is higher than that in standard black shales - 2.0 mg/kg (Yudowich and Ketris, 1997); however, it is lower than the value of 8.3 mg/kg given for black shales by Huyck (1990) and distinctly lower than in metal-bearing shales, 16.6 mg/kg (Huyck, 1990).
In zinc-lead ores the thallium concentration ranges from below 0.2 to 550 mg/kg. Its average concentration of 52.1 mg/kg is very high in comparison with the copper ores (1.3 mg/kg). The geometric mean (9.8 mg/kg) and median (11.1 mg/kg) are also high when compared with the corresponding values in sedimentary rocks, especially in carbonates (average 0.1-1.4 mg/kg; Kabata-Pendias and Mukherjee, 2007). High thallium concentrations are characteristic of the ores from the Pomorzany mine (on average 82.5 mg/kg Tl) in comparison with the Trzebionka mine (average 15.5 mg/kg). Studies on minerals (sphalerite, galena, marcasite) have proved that the highest thallium amounts (0.08-1.20%) are found in the metacolloidal pyrites, while melnikovite contains 0.05% (Kucha and Jędrzejczyk, 1995; Sawłowicz, 1981). Much lower amounts of thallium occur in galena - up to 60 mg/kg (average content of 13.2 mg/kg) - and in sphalerite - up to 92 mg/kg (average 26.5 mg/kg). It can be concluded that the presence of thallium in Zn-Pb ores and derived concentrates is connected first of all with the occurrence of the thallium-bearing polymorphs of iron sulphide.
Rock raw materials
The thallium concentration in Polish clays subjected to high-temperature processing ranges from <0.2 to 1.3 mg/kg; the average amount is 0.7 mg/kg, the geometric mean 0.6 mg/kg, and the median 0.7 mg/kg. The average concentration in the studied deposits oscillates from <0.2 to 1.2 mg/kg. The most thallium is contained in the Mio-Pliocene Poznań Clays of the Słowiany and Fordon deposits, while the lowest concentrations occur in the Cretaceous kaolinite clays of the Maria deposit, the Miocene refractory clays of the Rusko-Jaroszów deposit, the Pleistocene loesses from Izbica, and the Triassic clays of the Ligota Dolna deposit. The occurrence of thallium in clay raw materials is probably due to the presence of relict feldspars and micas, in which thallium substitutes for potassium. Thallium may occasionally also be bound by clay minerals, iron and manganese hydroxides, or the organic matter. Our results fit the average concentration in world claystones, corresponding to the interval of 0.5-2 mg/kg Tl (Kabata-Pendias and Mukherjee, 2007), but seem to be somewhat lower; only in the Słowiany deposit does the content average 1.2 mg/kg, exceeding the level of 1 mg/kg. In the clays of greater importance for construction ceramics in Poland, i.e. the Pleistocene ice-dammed clays and the Neogene Poznań and Krakowiec series, the average thallium content is the same - 0.7 mg/kg (Table 2). These rocks also display similar potassium contents (Nieć and Ratajczak, 2004), which may point to a distinct relation of thallium with potassium-bearing feldspars and layered minerals. The correlation of thallium with iron compounds, which are ubiquitous in clay rocks, may be of lesser importance.
In carbonate rocks the thallium concentration ranges from <0.2 to 0.8 mg/kg, its arithmetic average being 0.2 mg/kg, while the median and geometric mean are below 0.2 mg/kg. The average amounts of thallium in the individual deposits studied were most frequently below 0.2 mg/kg.
2. The highest thallium concentrations occur in the zinc-lead ores, the average content being 52.1 mg/kg. The ores from the Pomorzany mine (average 82.5 mg/kg) are richer in thallium than those from the Trzebionka mine (average 15.5 mg/kg). The copper ores contain on average 1.4 mg/kg of thallium; among them, the shale ores contain 3.8 mg/kg, while the carbonate and sandstone ores contain below 1 mg/kg of thallium.
3. Hard coals from the Upper Silesian Coal Basin display a higher thallium content than those exploited in the Lublin Coal Basin. The highest thallium concentration occurs in the seams of the Libiaz and Laziska Beds, occupying the upper portion of the USCB profile. Brown coals from the large Bełchatów deposit and the smaller ones of the Konin-Turek region show low thallium concentrations not exceeding 0.4 mg/kg, with an average below 0.2 mg/kg. Brown coals from the Turow deposit are distinguished by much higher values, 0.7 mg/kg Tl.
4. Average thallium concentrations in clays used for ceramic materials are lower than 1 mg/kg, except for the Mio-Pliocene clays from the Slowiany deposit. In the ice-dammed Pleistocene clays, as well as in the Poznan and Krakowiec clays, the average equals 0.7 mg/kg.
5. The average amount of thallium in the studied calcareous raw materials used for manufacturing portland cement and lime and as a flux in iron and steel metallurgy is very low in comparison with the average amounts in world carbonate rocks. The analyzed carbonate rocks generally contain below 0.2 mg/kg Tl. Only the limestones from the Kowala and Nowiny deposits, and the dolomites from the Dubie deposit, are relatively enriched in this element.
Table 1.
Statistical parameters of thallium in Polish mineral resources (mg/kg) | 2,462.6 | 2013-04-23T00:00:00.000 | [
"Geology"
] |
Synthesis and Characterization of Ligand-Stabilized Silver Nanoparticles and Comparative Antibacterial Activity against E. coli
Silver is a well-established antimicrobial agent. Conjugation of organic ligands with silver nanoparticles has been shown to create antimicrobial nanoparticles with improved pharmacodynamic properties and reduced toxicity. Twelve novel organic ligand functionalized silver nanoparticles (AgNPs) were prepared via a light-controlled reaction with derivatives of benzothiazole, benzoxazine, quinazolinone, 2-butyne-1,4-diol, 3-butyn-1-ol, and heptane-1,7-dioic acid. UV-vis, Fourier-transform infrared (FTIR) spectroscopy, and energy-dispersive X-ray (EDAX) analysis were used to confirm the successful formation of ligand-functionalized nanoparticles. Dynamic light scattering (DLS) revealed mean nanoparticle diameters between 25 and 278 nm. Spherical and nanotube-like morphologies were observed using transmission electron microscopy (TEM) and scanning electron microscopy (SEM). Seven of the twelve nanoparticles exhibited strong antimicrobial activity and five of the twelve demonstrated significant antibacterial capabilities against E. coli in a zone-of-inhibition assay. The synthesis of functionalized silver nanoparticles such as the twelve presented here is critical for the further development of silver-nanoconjugated antibacterial agents.
Introduction
Silver is a toxic transition metal and a known antibacterial agent against both aerobic and anaerobic bacteria [1,2]. It has been incorporated into wound dressings and medical treatments for centuries. More recently, nanotechnology has amplified the efficiency of silver in medicine [3,4]. Silver nanoparticles have a high surface area-to-volume ratio and unique chemical and physical properties, making them ideal for antibacterial use [5]. Specifically, they have been shown to inhibit bacterial growth through mechanisms that include the precipitation of cellular proteins, interference with DNA function, and inhibition of the electron transport chain [6]. Silver nanoparticles also demonstrate antibacterial effects in both Gram-positive and Gram-negative bacteria [7].
Several distinct silver nanoparticles have been synthesized for medicinal use in the last decade. However, many of the syntheses are limited by low yields and therefore rendered less useful while the need for new antibacterial nanoparticles remains. Recently, AgNPs synthesized with quinazolin-4(3H)-one derivatives were developed by Abdulkader Masri's group [8]. This work has provided a foundation for new organic ligand-functionalized AgNPs to be further developed.
One promising route being studied currently for improving the synthesis and efficacy of silver nanoparticles is the incorporation of organic compounds. For example, quinazolinone is a highly stable nitrogen-containing heterocyclic scaffold used to generate antibacterial drugs [8,9]. Benzoxazine is a bicyclic compound containing an oxazine ring attached to a benzene ring that acts as a basis for the synthesis of other organic ligands such as quinazolinone [10,11]. Both benzothiazole, a benzene ring fused with a thiazole ring, and heptane-1,7-dioic acid, commonly referred to as pimelic acid, are also used as starting materials for antibacterial compounds [12,13]. These organic ligands can be coated with silver, enabling their conjugation with nanoparticles.
Results
Twelve organic ligand functionalized silver nanoparticles (AgNPs) were synthesized and characterized with UV-vis spectroscopy, FTIR, EDAX, zeta potential analysis, TEM, SEM, and DLS analysis to confirm the formation, size, and shape of the nanoparticles.
UV-Vis and FTIR Spectroscopy
The UV-vis spectra of the organic ligands (1-12) were measured both independently and again following conjugation with AgNPs. The presence of SPR maxima within the typical range of 380-480 nm confirmed successful nanoparticle formation [14]. In Figure 2, the comparative UV-vis spectra display an SPR maximum of around 460 nm for 2-amino-6-bromobenzothiazole-AgNPs (5-np) and no peak at all for its independent organic ligand counterpart. Since the peak present in the AgNPs falls within the characteristic 380-480 nm range and there is no similar peak in the UV-vis spectrum of the isolated organic ligand, successful conjugation of the nanoparticle is apparent. Analyses of the UV-vis spectra for all conjugated silver nanoparticles with their organic ligands can be found in the Supporting Information (Figure S1). Peaks within the characteristic 380-480 nm range were visible for all 12 molecules, with similarly missing peaks within that range for the independent organic ligands. This signifies the successful formation of all 12 organic ligand-functionalized AgNPs. Both UV band broadening and red shifting were also observed and can be attributed to some aggregation and size confinement with the size increase of the AgNPs. This may be the result of less confined charge carrier wave functions [14]. Initially, some peaks also showed aggregation due to size variation and difficulty controlling the combination of nanoparticles; however, we were able to overcome the agglomeration by adjusting the dilution parameters of the conjugated silver nanoparticles. Additionally, FTIR analyses of the 12 conjugated nanoparticles with their corresponding organic ligands were carried out. The complete set of results can be found in the Supporting Information (Figure S2). The differences between the organic ligands as independent compounds and as nanoparticle conjugates were highly apparent in the FTIR spectra. This can especially be seen in the comparative FTIR analyses of 5-np and organic ligand 5 along with 11-np and organic ligand 11, as shown in Figure 3. All observed bands with slightly varied shifts can be attributed to the distinct vibrational stretching of newly formed functional groups, such as N-H, C=O, C-N, and C=N. This clear functional group transformation was present in the comparative FTIR spectra for all 12 AgNPs and their corresponding organic ligands, further confirming the successful formation of the organic ligand functionalized AgNPs. These results signify that NaBH4, the reducing agent used in the synthesis of the AgNPs, most likely led to a functional group transformation that promoted silver coordination with the organic ligands.
The organic ligands could have also acted as capping or stabilizing agents for the AgNPs.
Specifically, numerous deviations between 5-np and organic ligand 5 (Figure 3) were observed. As an example, organic ligand 5 showed two bands at 3190 and 2980 cm−1. These data are characteristic of primary amine (NH2) group stretching. Organic ligand 5 possesses an amino-benzothiazole scaffold, but when the organic ligand is conjugated with silver, the band indicating NH2 disappears, and one strong band appears at 3115 cm−1, as shown by the FTIR spectrum of 5-np. Also, in the 11-np to organic ligand 11 FTIR comparison, a minor alcohol peak is apparent at 3680 cm−1 for the ligand, but a strong broad band can be observed at 3380 cm−1 for the conjugated nanoparticle. This shift could be due to the remote effects of the typical interaction between silver and the terminal alkyne of the organic ligand in the AgNPs. These structural changes indicate the successful formation of the conjugated AgNPs.
EDAX Analysis
Scanning electron microscopy (SEM) with energy-dispersive X-ray analysis (EDAX) was used to further confirm the formations of the conjugated AgNPs. The data generated by EDAX analysis yielded spectra showing peaks corresponding to the elements composing the conjugated AgNPs. This elemental analysis further confirmed the binding of the nanoparticles to the organic ligands. Representative EDAX analyses are shown in Figure 4 and data for all 12 AgNPs can be found in the Supporting Information ( Figures S3-S6).
Zeta Potential and Size Measurement
Zeta potential values indicate nanoparticle stability, and those of higher magnitude are representative of higher particle stability. Based on the zeta potentials obtained, nine of the twelve AgNPs were considered to have good stability, with zeta potentials around −50 mV. This reflects high electrostatic repulsions between adjacent particles. The 2-butyne-1,4-diol-AgNPs (10-np), 3-butyn-1-ol-AgNPs (11-np), and heptane-1,7-dioic acid-AgNPs (12-np) displayed lower stability compared to those of the other AgNPs mentioned in Table 1. These results are displayed by the representative zeta potential graphs in Figure 5 and in full by the values in Table 1. The average sizes of the functionalized nanoparticles were also measured and ranged from 25 to 278 nm, confirming sizes within the nano range. Size information for the AgNPs can also be found in Table 1. Table 1. Average sizes and zeta potentials for the 12 AgNPs.
TEM and SEM Analyses
The 12 synthesized silver nanoparticles were further analyzed with TEM. Results showed the formation of nanoparticles with a wide distribution of shapes and sizes, shown in Figure 6. Among the 12 AgNPs, eight were spherically shaped (1-np, 3-np, 5-np, 6-np, 7-np, 9-np, 11-np, and 12-np) whereas the other four displayed nanotube morphologies (2-np, 4-np, 8-np, and 10-np). Agglomeration in some of the samples (namely 4-np, 10-np, 11-np, and 12-np) occurred due to covalent and metallic bond formation as a result of functionalization by the organic ligand.
SEM images of the 12 AgNPs were also obtained, yielding the morphological data presented in Figure 7. The morphological details provided by the SEM analyses matched the data shown in the TEM images. Both sets of images confirm spherical morphologies for eight nanoparticles and nanotube morphologies for four nanoparticles. The nanotubelike structure of these four AgNPs could be attributed to high-speed crystal growth or overgrowth within a short time.
Antibacterial Assay Analysis
The antimicrobial activities of the 12 conjugated silver nanoparticles and organic compounds 1-10 were compared to silver nitrate in a zone of inhibition assay using E. coli, a Gram-negative bacterium that has been extensively used in biotechnology, microbiology, and molecular biology [15]. The organism is common, genetically versatile, and can replicate rapidly, making it an excellent choice for this study. Strong antimicrobial activity was observed for silver nitrate (AgNO3), with no visual colony formation. Additionally, a white precipitate was observed on the plate. Similarly strong antibacterial activity was apparent for the following nanoparticles with no visible colony formation in the zones of inhibition: 4-np, 6-np, 7-np, 8-np, 9-np, 10-np, and 12-np. These nanoparticles all had zone of inhibition diameters of approximately 9.00 mm or higher except 9-np. 7-np and 12-np both produced zones of inhibition with diameters higher than 11.00 mm. These two AgNPs show the most promise for further development as antimicrobial agents. Several of the other nanoparticles showed modest activity, including 1-np, 2-np, 3-np, 5-np, and 11-np. These organic ligand-functionalized AgNPs all produced zone-of-inhibition diameters of approximately 6.00 mm or above, which is larger than the zones of inhibition produced by any of the singular organic ligands (Figure 8).
Only three of the unconjugated organic compounds showed noticeable antimicrobial activities with full bacterial clearance being the standard for measurable activity. However, following conjugation, all 12 of the organic ligand-functionalized AgNPs demonstrated bacterial clearance to an extent significantly higher than their respective singular ligands [16,17].
Discussion
The zone-of-inhibition assay demonstrated strong antimicrobial activities in 4-np, 6-np, 7-np, 8-np, 9-np, 10-np, and 12-np. Neither strong nor significant antibacterial activity was found in the organic ligands corresponding to the AgNPs. This signifies that the conjugation of the silver nanoparticles to the organic ligands was the factor increasing antimicrobial activity. Among these seven conjugated AgNPs, three had nanotube-like morphologies (4-np, 8-np, and 10-np) and four had spherical morphologies (6-np, 7-np, 9-np, and 12-np). These data alone show no correlation between conjugated AgNP morphology and antimicrobial ability. However, in combination with size data, the shapes of the AgNPs gain significance. The three nanotube-shaped AgNPs were additionally three of the largest particles analyzed with sizes of 278.25 nm, 152.42 nm, and 218.60 nm. The 2-np particle, which was also nanotube-shaped, showed significant but not strong antibacterial activity, with a zone-of-inhibition diameter of 6.35 mm; 2-np only had a size of 121.70 nm, which is lower than those of the other nanotube-shaped AgNPs. This difference in both size and antimicrobial activity could be an indicator that larger nanotube-shaped AgNPs tend to be more effective for antibacterial purposes. As for 5-np, it was also relatively large, 171.25 nm, but was spherical. Antimicrobial activity was determined to be significant, but not strong like the other large particles. This was concluded from the 6.47 mm clearance diameter. The lowered antibacterial capabilities of 5-np show that the size benefit for bacterial inhibition is most likely exclusive to nanoparticles with nanotube-like morphologies.
In contrast to this, the opposite seems to hold true for the spherically shaped nanoparticles, but with a weaker correlation. Both 7-np and 9-np have the smallest sizes of all 12 nanoparticles: 25.02 nm and 38.72 nm, respectively. The high antibacterial activity for particles 7-np and 9-np could be due to the small sizes and spherical shapes providing more facile transport through the E. coli cytoplasmic membrane, increasing biodistribution [18]. Particles 6-np and 12-np, however, are much more average in size: 86.02 nm and 76.60 nm, respectively. This signifies that antimicrobial capabilities are influenced by factors other than size and shape.
A few of these factors to be examined include agglomeration, organic ligand identity, and organic ligand functionalized AgNP complexity. All nanoparticles for which agglomeration was observed (4-np, 10-np, 11-np, and 12-np) except for 11-np showed strong antimicrobial activity with zones of inhibition above 9.00 mm. While the exception of 11-np lessens the validity of this correlation, links between agglomeration and antimicrobial activity could yield useful results with larger samples. The wide distribution between both the properties of the varying organic ligands and their respective antimicrobial activities when conjugated with AgNPs also suggests that the individual identities of the ligands greatly affect results. Both AgNPs with benzoxazine (6-np and 7-np) performed exceptionally well in the zone-of-inhibition assay, and the same held true for the two AgNPs with quinazolinone (8-np and 9-np). It is notable that the use of quinazolinone derivatives for conjugation with AgNPs for antimicrobial activity has been explored and found to be effective [8]. However, this pairing pattern was not matched by the other nanoparticles, showing that the actual changes made to the compounds from which the organic ligands were derived influenced antimicrobial activity as well. Furthermore, functionalization of the nanoparticles themselves was shown to alter antibacterial ability. The organic ligands 3, 5, and 7 showed promising antimicrobial activity in the zone-of-inhibition assay, with moderate bacterial clearances. Despite this, when conjugated with nanoparticles, only 7-np showed strong antibacterial activity. These results exemplify the structural alteration caused by the conjugation of the AgNPs. This outcome confirms that not only the structure of the organic ligand but the structural interactions between the ligand and the silver must be considered in the synthesis of new organic ligand functionalized AgNP-based antimicrobial agents.
The results presented here suggest a high potential for the discovery of new antimicrobial agents using a variety of organic ligands. Studies conducted with a larger set of organic ligand-functionalized AgNPs could provide further insight into the conceptual link between organic ligand differentiation and antimicrobial activity distribution. Additional biological analysis could also be useful. This study presents numerous correlations between the characteristics of organic ligand-conjugated AgNPs and their respective antibacterial capabilities. However, more research is needed to confirm definitive causation for the proposed factors. The variety of organic ligands used led to a differentiation between the abilities of the conjugated nanoparticles, resulting in seven strong candidates for use as antimicrobial agents.
Scheme 1.
Reaction scheme for the synthesis of the benzoxazine-4-one-derivatives.
Scheme 2.
Reaction scheme for the synthesis of the quinazolinone derivatives.
Synthesis of Benzoxazin-4-One Derivatives
2-aminobenzoic acid: Sodium hydroxide (10.00 g, 250 mmol) was added to a 250 mL Erlenmeyer flask containing ice-cold water (60.00 mL) and a magnetic stir bar. The solution was stirred until the sodium hydroxide dissolved. The phthalimide (10.00 g, 73 mmol) was quickly added to the sodium hydroxide solution. An ice bath was placed around the flask and stirring was continued. Sodium hypochlorite (14.00 mL, 5.00 M) was added to the solution, which was stirred for another 15 min before removal from the ice bath. The solution, which had turned a faint-yellow color, was then heated to 75 • C. This temperature was maintained for an additional 15 min. The solution was cooled in an ice bath and 10.00 mL of the solution was transferred to a small beaker. Hydrochloric acid (8.00 M) was added until the solution reached a neutral pH (7.0). Then, glacial acetic acid (10.00 mL) was added, and the resulting precipitate was washed and recrystallized with cold water.
(6) 2-methyl-4H-3,1-benzoxazin-4-one: A mixture of anthranilic acid (10.00 mmol) and acetic anhydride (1.50 mL) was heated at 150 • C for 2.5 h. Excess acetic anhydride was then removed under reduced pressure and the resulting solid was triturated with petroleum ether, collected by filtration, and dried in a vacuum.
(7) 2-phenyl-4H-3,1-benzoxazin-4-one: Anthranilic acid (5.00 g, 22 mmol) was dissolved slowly at room temperature in 10.00 mL of anhydrous pyridine with continuous stirring. The solution was heated by the addition of anhydrous pyridine, then cooled to 10 • C in a water bath. Cooling of the mixture contributed to the production of solid crystals. Benzoyl chloride (2.60 mL, 22 mmol) was then slowly added to 10.00 mL of anhydrous pyridine and stirred for 30 min. The resulting solid was washed with water and treated with aqueous sodium bicarbonate to remove any unreacted acid. The reaction mixture was left stirring overnight when the resulting product was not formed immediately. The reaction can be slowed by various conditions, but the crude can be recrystallized using ethanol in these situations. Following the addition of the DI water and the dissolution of the precipitate in its entirety, the final solid product was formed through the neutralization of the reaction mixture using sodium bicarbonate.
Synthesis of Silver Nanoparticles
For the syntheses of the AgNPs, the benzothiazole, benzoxazine, quinazolinone, heptane-1,7-dioic acid, 2-butyne-1,4-diol, and 3-butyn-1-ol derivatives were conjugated with silver to generate the twelve functionalized silver nanoparticles. Each organic ligand derivative (1-12) (10.00 mg) was reacted aqueously with silver nitrate in ethanol (0.1 mM, 3.00 mL). The mixture was then stirred for 6 h in the dark and a solution of NaBH4 (10 µL, 6 mM) was added. The mixture transitioned from transparent to light brown following the addition of the reducing agent, suggesting the reduction of silver ions and the formation of silver-functionalized nanoparticles. All organic derivatives (1-12) were conjugated through adjustments to the diverse volume ratio (v/v) of the twelve derivatives to the silver nitrate solution [14]. The procedure was performed in duplicate under red light at 620 nm for 6 h and the nanoparticle formation yields were compared.
Antibacterial Assay
The effects of the synthesized nanoparticles on bacterial growth were evaluated using a halo assay. A bacterial lawn was generated from a saturated overnight culture of E. coli bacteria (DH5A) grown in lysogeny broth (LB), diluted 100-fold in LB, and spread onto LB agar plates using sterile glass beads. Plates were dried for approximately 15 min before solutions of the organic ligand-modified AgNPs (1 µg/mL, 3 µL) were spotted onto the plates. As a control, the organic ligands corresponding to their respective nanoparticles were also spotted on the plates (1 µg/mL, 3 µL). Ampicillin (100 mg/mL, 3 µL, Fisher Bioreagents) was used as a positive control and ultrapure water (3 µL) was used as a negative control. An AgNO 3 solution was included as a reference. The plates were then incubated at 37 • C overnight before imaging with a BioRad ChemiDoc MP Imaging System and Image Lab Software 5.2 (Bio-Rad, Hercules, CA, USA). Analysis was performed using Mac OS 10.15.7 Preview, Microsoft PowerPoint, and NIH ImageJ. Antimicrobial activity was determined by the clearance of bacteria. Each assay was performed using three independent biological replicates.
Conclusions
Twelve silver nanoparticles functionalized with organic ligands were synthesized with red light and characterized through a variety of methods. Various analyses confirmed successful nanoparticle conjugation with the organic ligands. A variety of nanoparticle sizes and shapes also resulted from the array of organic ligands used, and all tested AgNPs exhibited some antibacterial capabilities against E. coli. However, seven of the AgNPs showed especially strong antimicrobial properties, indicating the potential for further development. A correlation between the morphological details of the conjugated nanoparticles and the extent of their antimicrobial activities was observed. The differentiation between the organic ligands led to a wide distribution of antibacterial capabilities of the produced AgNPs. While nanoparticles are more effective and less toxic than traditional antibacterial agents, further research on the biological effects of these nanoparticles must be conducted. More research must be conducted on the seven AgNPs possessing strong antibacterial capabilities. Optical density (OD) measurements of the bacterial assay should be analyzed for more precise quantitative data. Future investigation of the nanoparticle solutions at varying concentrations along with the determination of MIC values would also be beneficial. Nevertheless, this research reveals promising new insights into the use of organic ligand-functionalized AgNPs as antimicrobial agents with high potency, reduced toxicity, and strong mechanisms of activity. These particles represent both concrete and conceptual contributions to the field of nanomedical chemistry.
Funding: The publication of this article was funded in part by the Open Access Subvention Fund and the John H. Evans Library. The NIH provided partial support for common lab supplies (R15-GM112119).
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable. | 6,372.8 | 2022-12-01T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
GLIDING HUMP PROPERTIES AND SOME APPLICATIONS
ABSTRACT. In this note we consider several types of gliding hump properties for a sequence space E and we consider the various implications between these properties. By means of examples we show that most of the implications are strict and they afford a sort of structure between solid sequence spaces and those with weakly sequentially complete β-duals. Our main result is used to extend a result of Bennett and Kalton which characterizes the class of sequence spaces E with the property that E ⊂ S_F whenever F is
Over the past eighty years the "gliding hump" technique has been a frequently used tool to establish results in summability and sequence space theory. Among the more familiar examples would be the Silverman-Toeplitz theorem, which gives necessary and sufficient conditions for the regularity of a summability method [22], the Mazur-Orlicz bounded consistency theorem ([6], [12] and [13]), the theorem of Köthe and Toeplitz on the weak sequential completeness of the Köthe dual of a solid sequence space [11], and the theorems of Schur on the characterization of coercive matrices and the equivalence of weak and strong convergence in ℓ¹ [19]. Whereas the first three of these have subsequently been argued using functional analytic techniques (see e.g. [24] and [10]), no such "soft" proofs of Schur's theorems are known.
In section 3 of this note we introduce various types of gliding hump properties and discuss the implications between them. We give examples in section 5 to show that most of these implications are strict and that they are, in some sense, affording a structure to the set of sequence spaces between the solid spaces and those with weakly sequentially complete β-duals.
2. NOTATION AND PRELIMINARIES. Let ω denote the linear space of all scalar (real or complex) sequences. By a sequence space E we shall mean any linear subspace of ω. A sequence space E endowed with a locally convex topology is called a K-space if the inclusion map E → ω is continuous, where ω has the topology of coordinatewise convergence. A K-space E with a Fréchet topology is called an FK-space. If, in addition, the topology is normable then E is called a BK-space. We assume throughout this note familiarity with the standard sequence spaces and their natural topologies (see e.g. [24], [9]).
For a sequence space E the multiplier space of E and the β-dual of E are given by M(E) = {x ∈ ω : xy ∈ E for each y ∈ E} and E^β = {x ∈ ω : Σ_k x_k y_k converges for each y ∈ E}, where xy denotes the coordinatewise product. For x ∈ ω, n ∈ ℕ the n-th section of x is x^[n] = Σ_{k=1}^{n} x_k e^k, where e^k = (δ_{ki}) is the k-th coordinate vector. For any positive term sequence μ = (μ_k) let … If (E, F) is a dual pair then σ(E, F), τ(E, F) denote the weak topology and the Mackey topology respectively. For a sequence space E and a linear subspace F of E^β, (E, F) is a dual pair under the natural bilinear form ⟨x, y⟩ = Σ_k x_k y_k. If E is a K-space containing φ, the space of finitely non-zero sequences, we let S_E := {x ∈ E : x^[n] → x in E} and W_E := {x ∈ E : x^[n] → x in σ(E, E′)}, where E′ denotes the topological dual of E. A K-space E containing φ with E = S_E is called an AK-space.
If A = (a_{nk}) is an infinite matrix with scalar entries, the convergence domain c_A admits a natural FK-topology [21]. For x ∈ c_A we write ‖x‖_A := sup_n |Σ_k a_{nk} x_k|.
We begin by introducing several types of gliding hump properties.
DEFINITION 3.1. A sequence (y^(n)) in ω \ {0} is called a block sequence if there exists an index sequence (k_j) such that y_k^(n) = 0 for any n, k ∈ ℕ with k ∉ ]k_{n−1}, k_n], where k_0 := 0, and it is called a 1-block sequence if furthermore y_k^(n) = 1 for each k ∈ ]k_{n−1}, k_n] and n ∈ ℕ. Let E be a sequence space containing φ. E has the gliding hump property (ghp) if for each block sequence (y^(n)) and any monotonically increasing sequence (n_j) of integers there exists a subsequence (m_k) of (n_j) with Σ_k y^(m_k) ∈ E (pointwise sum). E has the pointwise gliding hump property (p_ghp) if for each x ∈ E, any block sequence (y^(n)) satisfying sup_n ‖y^(n)‖_∞ < ∞ and any monotonically increasing sequence (n_j) of integers there exists a subsequence (m_k) of (n_j) with Σ_k x·y^(m_k) ∈ E (pointwise sum).
E has the uniform gliding hump property (u_ghp) if the sequence (m_k) in the definition of the p_ghp may be chosen independently of x ∈ E. E has the pointwise weak gliding hump property (p_wghp) if the definition of the p_ghp is fulfilled for each 1-block sequence.
E has the uniform weak gliding hump property (u_wghp) if the definition of the u_ghp is fulfilled for each 1-block sequence.
We say that E has the strong p_ghp (u_ghp, p_wghp or u_wghp) if Σ_k x·y^(m_k) ∈ E (pointwise sum) holds for any subsequence of (m_k) in the above definitions; in this case, we use the notation sp_ghp, su_ghp, sp_wghp and su_wghp, respectively. REMARKS 3.2. Let E be a sequence space containing φ. (a) Obviously, the definition of the ghp corresponds with the definition given in [20], [4], and the definition of the p_wghp corresponds to the weak gliding hump property considered by D. Noll [14].
(f) In an earlier paper, T. Leiger and the first author proved the validity of theorems of Mazur-Orlicz type under the assumption that A is a sequence space such that M(A) has the ghp, that is, A has the u_ghp. Actually, in each instance only the fact that M(A) has the p_ghp was used in the arguments.
THEOREM 3.3. Let E be an FK-space containing φ. Then S_E has the strong p_ghp; in particular, if E is an FK-AK space then E has the strong p_ghp.
PROOF. The FK topology of E may be generated by seminorms p_r (r ∈ ℕ) such that p_r(x) ≤ p_{r+1}(x) (r ∈ ℕ and x ∈ E). (∗) Since S_E is an FK-AK space we may assume that E is an FK-AK-space. Now, let x ∈ E be given. Then p_r(x − x^[n]) → 0 (n → ∞ and r ∈ ℕ).
On account of (∗) it is sufficient to prove x·y^(j) → 0 in E. To that end let r ∈ ℕ be given. Then we have by (∗) that p_r(x·y^(j)) = p_r(Σ_k x_k y_k^(j) e^k) → 0, which proves x·y^(j) → 0. In general, W_E fails the p_wghp. [Example: Let A be the summation matrix and E := c_A. Then W_E fails the p_wghp since x := … ∈ W_E (pointwise sum) and (n_j) := (2j) does not have any subsequence (m_k) such that Σ_k x·y^(m_k) ∈ E (pointwise sum), since …] THEOREM 3.5. Let E be a sequence space containing φ, and let B be a matrix such that E ⊂ c_B. Then E ⊂ S_B if E has the p_wghp.
PROOF. Suppose E has the p_wghp. We know from Theorem 6 of D. Noll [14] and Remark 3.2(a) that (E^β, σ(E^β, E)) is weakly sequentially complete. Therefore, by an inclusion theorem of G. Bennett … Since E has the p_wghp we may assume that Σ_k x·y^(m_k) ∈ E (otherwise we switch over to a subsequence (y^(m_k)) and adapt the chosen index sequences). For a proof of Theorem 3.5 it is sufficient to prove x ∈ S_B.
(ii) If F is any separable FK-space with E ⊂ F then E ⊂ S_F. (iii) If A is any matrix with E ⊂ c_A then E ⊂ S_A. PROOF. The equivalence (i) ⇔ (ii) is Theorem 6, (i) ⇔ (ii), of G. Bennett and N. J. Kalton [2]. The implication (ii) ⇒ (iii) is obviously valid since domains c_A are separable FK-spaces.
Assume (E, τ(E, E^β)) is not AK. Thus, we may choose an x ∈ E and an absolutely convex σ(E^β, E)-compact subset K of E^β such that p_K(x^[n] − x) ↛ 0 (n → ∞), where p_K(z) := sup_{a ∈ K} |Σ_k a_k z_k| (z ∈ E).
Therefore we may choose an index sequence (n_j) and a sequence (a^(j)) in K such that … Since K is σ(E^β, E)-compact, the topologies … and σ(E^β, E) coincide on K, and (E^β, …) is metrizable, so we may assume that (a^(j)) is σ(E^β, E)-convergent to an a ∈ K. (Otherwise we switch over to a subsequence of (a^(j)).) If A denotes the matrix given by a_{ik} := a_k^(i) (i, k ∈ ℕ) then, in summability language, the last assumption tells us E ⊂ c_A.
From (∗) we get x ∉ S_A, which contradicts the assumption that (iii) is true.
□ COROLLARY 3.7. Let E be a sequence space containing φ and F be a separable FK-space with E ⊂ F. If E has the p_wghp then E ⊂ S_F. PROOF. Theorems 3.6 and 3.5. □ COROLLARY 3.8. Let Y be a sequence space and E be an FK-space with φ ⊂ Y ∩ E, and B be a matrix with Y ∩ S_E ⊂ c_B. Then Y ∩ S_E ⊂ S_B if Y has the p_wghp.
The statement remains true if we replace c_B by any separable FK-space F. PROOF. Corollary 3.7 and the fact that Y ∩ S_E has the p_wghp. □ COROLLARY 3.9. Let E be a separable FK-space containing φ such that S_E ≠ W_E. Then W_E fails the p_wghp (whereas S_E has the strong p_ghp). PROOF. Theorem 3.3 and Corollary 3.7. □ PROOF. The implication (a) ⇒ (b) comes from the AK-property of c_0 and the monotonicity of FK-topologies. (This statement follows also by Theorem 3.5 since c_0 obviously has the p_wghp.) Using standard estimations we may prove (c) ⇒ (a). We are going to prove the essential part (b) ⇒ (c). Let c_0 ⊂ S_A. Therefore, we can apply the above remark to any x ∈ c_0. If ‖A‖ = ∞ we may choose a sequence (n_j) in ℕ and index sequences (α_j) and (β_j) (j ∈ ℕ) such that … Defining y ∈ c_0 by y_k := … sgn a_{n_j,k} if α_j ≤ k ≤ β_j and y_k := 0 otherwise, we get |Σ_{k=α_j}^{β_j} a_{n_j,k} y_k| = … ≥ j (j ∈ ℕ). Thus Σ_k a_{nk} y_k does not converge uniformly in n ∈ ℕ, which contradicts c_0 ⊂ S_A.
Using the same method we also get a proof of a theorem containing a theorem of Hahn (equivalence of (a) and (c)). However, we should mention that the proof of '(a) ⇒ (c)' presented in [18, Theorem 4.1, p. 110] is more elegant. PROOF. (a) ⇒ (b) follows from the continuity of the inclusion map, the fact that … is an FK-AK-space, and the monotonicity of FK-topologies, whereas (c) ⇒ (a) may be proved with classical estimations.
'(b) ⇒ (c)': Let … ⊂ S_A. Thus, obviously, … ⊂ c_A is true. We assume sup_n Σ_k |a_{nk}| = ∞. In the next step we use this method to reprove both the well-known Schur theorem and the Hahn theorem. (The Schur theorem characterizes the matrices summing all bounded sequences; the Hahn theorem tells us that a conservative matrix which sums all x ∈ X also sums all bounded sequences, where X denotes the set of all sequences with values 0 and 1.) Moreover, we take an extended version of Schur's theorem (see [3]) into consideration. In case of conservative matrices the equivalence (a) ⇔ (c) is Hahn's theorem.
The implications (1) and (5) are immediate corollaries of Theorem 3.5 since m and m_0 have the p_wghp.
For a proof of (…) and (10) we refer to [3]. Now, we give a proof of (6). For that we assume that A is a matrix with real entries. [In the general case of complex entries we have to note that Σ_k a_{nk} y_k converges uniformly in n ∈ ℕ if and only if this is true for the real part of a_{nk} and the imaginary part of a_{nk}.] Let (c*) be true. Then … ⊂ c_A.
We define y ∈ m_0 by y_k := sgn a_{n_j,k} if … and y_k := 0 otherwise. Since … the series Σ_k a_{nk} y_k does not converge uniformly in n ∈ ℕ. Therefore y ∉ S_A, which contradicts X ⊂ S_A. The aim of this section is the presentation of some examples distinguishing almost all of the gliding hump properties. For that purpose we collect known connections between gliding hump and related properties of sequence spaces in the following graphic.
Figure 1: Each arrow stands for 'implies' and the corresponding number in the circle gives the number of the example in 5.1 proving the strictness of the implication.
(e) su_ghp ⇒ su_wghp ⇒ u_wghp ⇒ p_wghp; su_ghp ⇒ sp_ghp ⇒ sp_wghp ⇒ p_wghp; su_ghp ⇒ u_ghp ⇒ p_ghp ⇒ p_wghp; and N. J. Kalton [2, Theorem 5] we get E ⊂ W_B, in particular E ⊂ L_B and E ⊂ c_B. Now, assume E ⊂ W_B and E ⊄ S_B, that is, there exists x ∈ E ⊂ W_B with x ∉ S_B; therefore we may choose an ε > 0 and index sequences … such that … Now we employ a gliding hump argument. Let k_0 := … and choose n_1 such that … Then there exists a ν_1 ∈ ℕ with n_{ν_1} > n_1 and a k_1 > … such that … Inductively, we get index sequences (k_j), (n_j), (ν_j) with … and define a subsequence (y^(j)) of a 1-block sequence by y_k^(j) := 1 if …
4.1. Let A = (a_{nk}) be a matrix with φ ⊂ c_A and let x ∈ c_A. Then x ∈ S_A ⟺ Σ_k a_{nk} x_k converges uniformly in n ∈ ℕ. This observation gives us a short proof of the following theorem containing a Toeplitz-Silverman theorem.
THEOREM 4.2 (matrices being conservative for c_0). For matrices A = (a_{nk}) the following statements are equivalent: (a) c_0 ⊂ c_A. (b) c_0 ⊂ S_A. (c) φ ⊂ c_A and ‖A‖ := sup_n Σ_k |a_{nk}| < ∞.
| 3,465 | 1995-01-01T00:00:00.000 | [
"Mathematics"
] |
Block Preconditioning Matrices for the Newton Method to Compute the Dominant λ-Modes Associated with the Neutron Diffusion Equation
In nuclear engineering, the λ-modes associated with the neutron diffusion equation are applied to study the criticality of reactors and to develop modal methods for the transient analysis. The differential eigenvalue problem that needs to be solved is discretized using a finite element method, obtaining a generalized algebraic eigenvalue problem whose associated matrices are large and sparse. Then, efficient methods are needed to solve this problem. In this work, we used a block generalized Newton method implemented with a matrix-free technique that does not store all matrices explicitly. This technique reduces mainly the computational memory and, in some cases, when the assembly of the matrices is an expensive task, the computational time. The main problem is that the block Newton method requires solving linear systems, which need to be preconditioned. The construction of preconditioners such as ILU or ICC based on a fully-assembled matrix is not efficient in terms of the memory with the matrix-free implementation. As an alternative, several block preconditioners are studied that only save a few block matrices in comparison with the full problem. To test the performance of these methodologies, different reactor problems are studied.
Introduction
The neutron transport equation is a balance equation that describes the behavior of the neutrons inside the reactor core. For three-dimensional problems this equation is defined in a phase space of dimension seven, which makes the problem very difficult to solve. Thus, some approximations are considered, such as the multigroup neutron diffusion equation, which relies on the assumption that the neutron current is proportional to the gradient of the neutron flux by means of a diffusion coefficient.
Given a configuration of a nuclear reactor core, its criticality can be forced by dividing the production operator in the neutron diffusion equation by a positive number, λ, obtaining a neutron balance equation: the λ-modes problem. For the two energy groups approximation and without considering up-scattering, this equation can be written as [1]: where φ_1 and φ_2 denote the fast and thermal flux, respectively. The macroscopic cross-sections D_g, Σ_ag, and νΣ_fg, with g = 1, 2, and Σ_12, are values that depend on the position. The largest eigenvalue in magnitude, called the k-effective (or multiplication factor), indicates a measure of the criticality of the reactor, and its corresponding eigenfunction describes the steady-state neutron distribution in the reactor core. Next, dominant eigenvalues and their corresponding eigenfunctions are useful to develop modal methods for the transient analysis.
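For reference, under the stated two-group, no-up-scattering assumptions, the λ-modes equations take the standard form sketched below in the notation of this paragraph; this is a reconstruction, and the paper's own display may differ in detail.

```latex
\begin{aligned}
-\nabla\!\cdot\!\left(D_1 \nabla \phi_1\right) + \left(\Sigma_{a1} + \Sigma_{12}\right)\phi_1
  &= \frac{1}{\lambda}\left(\nu\Sigma_{f1}\,\phi_1 + \nu\Sigma_{f2}\,\phi_2\right), \\
-\nabla\!\cdot\!\left(D_2 \nabla \phi_2\right) + \Sigma_{a2}\,\phi_2
  &= \Sigma_{12}\,\phi_1 .
\end{aligned}
```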
To make a spatial discretization of Problem (1), a high order continuous Galerkin finite element method is used, leading to a generalized algebraic eigenvalue problem of the form: where the matrix elements are given by: where N_i is the prescribed shape function for the ith node. The vector φ = (φ_1, φ_2)^T is the algebraic vector of the finite element weights corresponding to the neutron flux in terms of the shape functions. The shape functions used in this work are Lagrange polynomials. The subdomains Ω_e (e = 1, ..., N_t) denote the cells in which the reactor domain is divided and where the cross-sections are assumed to be constant. Similarly, Γ_e is the corresponding subdomain surface, which is part of the reactor boundary. More details on the finite element discretization can be found in [2]. For the implementation of the finite element method, the open source finite element library Deal.II [3] has been used. In this work, a matrix-free strategy for the blocks of the matrix M and for the non-diagonal blocks of L is developed. In this way, matrix-vector products are computed on the fly in a cell-based interface. For instance, we can consider that a finite element Galerkin approximation that leads to the matrix M_{1,1} takes a vector u as input and computes the integrals of the operator multiplied by trial functions, and the output vector is v. The operation can be expressed as a sum of N_t cell-based operations, where P_e denotes the matrix that defines the location of cell-related degrees of freedom in the global vector and M^e_{1,1} denotes the submatrix of M_{1,1} on finite element e. This sum is optimized through sum-factorization. Details about the implementation are explained in [4]. This strategy greatly reduces the memory used by the matrix elements.
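The cell-by-cell application of M_{1,1} described here can be summarized compactly as follows (a sketch using the notation just introduced; the original display may be formatted differently):

```latex
v \;=\; M_{1,1}\,u \;=\; \sum_{e=1}^{N_t} P_e^{T}\, M^{e}_{1,1}\, P_e\, u .
```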
Calculation of the dominant lambda mode has traditionally utilized the classical power iteration method, which, although robust, converges slowly for dominance ratios near one, as occurs in some practical problems. Thus, acceleration techniques are needed to improve the convergence of the power iteration method. Some approaches in diffusion theory are, for instance, Chebyshev iteration [5] and Wielandt shift [6]. Alternative approaches to the power iteration method have been studied in an attempt to improve upon the performance of accelerated power iteration methods [7,8]. The subspace iteration method [9], the Implicit Restarted Arnoldi Method (IRAM) [10], the Jacobi-Davidson method [11], and the Krylov-Schur method [2] implemented in the SLEPc library [12] have been used to compute the largest or several dominant eigenvalues for the neutron diffusion equation and their corresponding eigenfunctions. More recently, other Krylov methods have been used to compute these modes for other approximations of the neutron transport equation [7,13]. Usually, applying these kinds of methods requires either transforming the generalized problem (2) into an ordinary eigenvalue problem or applying a shift-and-invert technique. In both cases, in the solution process, it is necessary to solve numerous linear systems. These systems are not well-conditioned, and they need to be preconditioned. Thus, the time and computational memory needed to compute several eigenvalues become very high.
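For contrast with the accelerated and Krylov variants discussed here, a minimal sketch of the classical power iteration for the generalized problem M x = λ L x is given below. It uses dense NumPy arrays and illustrative parameter choices; it is not the implementation used in reactor codes.

```python
import numpy as np

def power_iteration(M, L, x0, tol=1e-8, max_iter=500):
    """Largest eigenvalue of M x = lam L x via the iteration x <- L^{-1} M x."""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(L, M @ x)            # one outer iteration: invert the loss operator
        lam_new = x @ (M @ x) / (x @ (L @ x))    # Rayleigh-quotient estimate of lambda
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, x
        lam = lam_new
    return lam, x
```

Convergence of this scheme degrades as the dominance ratio approaches one, which is exactly the situation that motivates the block Newton methods discussed next.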
One alternative is to use a method that does not require solving any linear system, such as the generalized Davidson method, used for neutron transport calculations in [14]. Other methods are the block Newton methods, which have been shown to be very efficient in the computation of several eigenvalues in neutron diffusion theory. These methods either do not need to solve as many linear systems as the Krylov methods or avoid solving any linear system with some hybridization. One of these Newton methods is the modified block Newton method, which has been considered for the ordinary eigenvalue problem associated with Problem (2) [15] or directly for the generalized eigenproblem (2) [16]. One advantage of these block methods is that several eigenvectors can be approximated simultaneously, and as a consequence, the convergence behavior improves. The convergence of the eigensolvers usually depends on the eigenvalue separation, and if there are clustered or multiple eigenvalues, the methods may have problems finding all the eigenvalues. In practical situations of reactor analysis, the dominance ratio corresponding to the dominant eigenvalues is often near unity, resulting in a slow convergence. In the block methods, this convergence only depends on the separation of the group of target eigenvalues from the rest of the spectrum. Another advantage is that these methods do not require solving as many linear systems as the previous methods. However, these linear systems still need to be preconditioned. Another Newton method of this kind is the Jacobian-free Newton-Krylov method, which has been studied with traditional methods such as the power iteration used as the preconditioner [17,18] or with a more sophisticated Schwarz preconditioner [19]. In this work, we use the Modified Generalized Block Newton Method (MGBNM) presented in [16], and we propose several ways to precondition the linear systems that need to be solved in this method in an efficient way.
The structure of the rest of the paper is as follows. In Section 2, the modified generalized block Newton method is described. In Section 3, the different preconditioners for the MGBNM are presented. The performance of the preconditioners is presented in Section 4 for two different benchmark problems. Finally, Section 5 synthesizes the main conclusions of this work.
The Modified Generalized Block Newton Method
This method was presented by Lösche in 1998 [20] for ordinary eigenvalue problems, and an extension to generalized eigenvalue problems was studied in [16]. Given the partial generalized eigenvalue problem (2) written as: where X ∈ R^{n×q} is a matrix with q eigenvectors and Λ ∈ R^{q×q} is a diagonal matrix with the q associated eigenvalues, we suppose that the eigenvectors can be factorized as X = ZS, where Z is an orthogonal matrix. Moreover, the biorthogonality condition W^T Z = I is introduced, where W is a fixed matrix. Thus, if we denote K = S Λ S^{-1}, the problem (4) can be rewritten as: We construct this projection to ensure that the method converges to independent eigenvectors. Then, the solution of Problem (4) is obtained by solving the non-linear problem: By applying Newton's method, a new iterated solution arises as: where ∆Z^{(k)} and ∆K^{(k)} are solutions of the system obtained when Equations (6) are substituted into Equations (5), and these are truncated at the first-order terms.
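The displays referenced in this paragraph are presumably of the following form in the notation just introduced: the block eigenvalue problem, the nonlinear function whose root is sought, and the Newton update. This is a reconstruction consistent with the surrounding text, not a verbatim copy of the paper's equations.

```latex
M X = L X \Lambda, \qquad
F(Z, K) = \begin{pmatrix} M Z - L Z K \\[2pt] W^{T} Z - I \end{pmatrix} = 0, \qquad
Z^{(k+1)} = Z^{(k)} + \Delta Z^{(k)}, \quad K^{(k+1)} = K^{(k)} + \Delta K^{(k)} .
```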
The matrix K^{(k)} is not necessarily a diagonal matrix, and as a result, the system is coupled. To avoid this problem, the modified generalized block Newton method (MGBNM) needs to apply the following two steps. Firstly, the modified Gram-Schmidt process is used to orthonormalize the matrix Z^{(k)}. Then, the Rayleigh-Ritz projection method for the generalized eigenvalue problem [21] is applied. Thus, the corrections for each eigenpair are obtained from the solutions of the linear systems: The solution of these systems is computed by using the Generalized Minimal Residual method (GMRES), computing the matrix-vector products with block matrix multiplications. However, these systems need to be preconditioned (in each iteration and for each eigenvalue) to reduce the condition number of the matrix.
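To make the sequence of steps concrete, below is a minimal dense sketch of a modified block Newton iteration of this flavor: orthonormalization, Rayleigh-Ritz projection on the pencil (M, L), then one bordered Newton correction per Ritz pair. It is an illustration under simplifying assumptions (dense matrices, direct solves instead of preconditioned GMRES, and a simple orthogonality constraint), not the authors' matrix-free implementation.

```python
import numpy as np

def mgbnm_sketch(M, L, Z0, tol=1e-6, max_iter=30):
    """Simplified modified block Newton iteration for M x = lam L x (dense sketch)."""
    Z, _ = np.linalg.qr(Z0)                    # orthonormalize the initial block
    n, q = Z.shape
    for _ in range(max_iter):
        # Rayleigh-Ritz projection of the pencil (M, L) onto span(Z)
        Mp, Lp = Z.T @ M @ Z, Z.T @ L @ Z
        theta, S = np.linalg.eig(np.linalg.solve(Lp, Mp))
        order = np.argsort(-theta.real)
        theta, S = theta.real[order], S[:, order].real
        X = Z @ S                              # Ritz vectors
        R = M @ X - (L @ X) * theta            # block residual, column i scaled by theta[i]
        if max(np.linalg.norm(R[:, i]) for i in range(q)) < tol:
            return theta, X
        # One bordered Newton correction per Ritz pair
        for i in range(q):
            J = M - theta[i] * L
            A = np.block([[J, -(L @ X[:, [i]])],
                          [X[:, [i]].T, np.zeros((1, 1))]])
            rhs = np.concatenate([-R[:, i], [0.0]])
            sol = np.linalg.solve(A, rhs)      # replaced by preconditioned GMRES in the paper
            X[:, i] += sol[:n]
        Z, _ = np.linalg.qr(X)
    return theta, X
```

In the paper the inner solves are performed with GMRES on matrix-free operators; the dense solve above simply marks where those preconditioned systems appear.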
Preconditioning
The first choice for a preconditioner is assembling the matrix: and constructing the full preconditioner associated with this matrix. We use the ILUT(0) preconditioner since A is a non-symmetric matrix. There are no significant differences if the preconditioner obtained for the matrix associated with the first eigenvalue is used for all eigenvalues in the same iteration, because in the matrix A only the value of λ_i changes, and usually the eigenvalues in reactor problems are clustered. This preconditioner is denoted by P.
To devise an alternative preconditioner without the necessity of assembling the matrix A, we write the explicit inverse of A by using its block structure, where: We desire a preconditioner for A obtained by suitably approximating A^{-1}. Let us call P_J a preconditioner for J. For instance, P_J = (LU)^{-1}, where L and U are the incomplete lower and upper triangular factors of J. Thus, we can define, after setting C_2^T = Z^T P_J, the preconditioner of A as: The previous preconditioner does not need to assemble the entire matrix A, but it needs to assemble the matrix J to build its ILU preconditioner. Therefore, the next alternative that we propose is using a preconditioner of −L instead of J = M − λ_1 L. This preconditioner works well because, in the discretization process, the matrix L comes from the discretization of the differential operator that contains the gradient and diffusion terms. In addition, in nuclear calculations, λ_1 is near 1.0. Thus, we can build a preconditioner of −L instead of the matrix J. We denote by P_L the preconditioner P_J where the preconditioner of −L is used to precondition the block J.
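For orientation, preconditioners of this kind follow the standard Schur-complement pattern for a 2 × 2 block matrix; the partitioning below is generic (J plays the role above, while B, C, and D are placeholder blocks rather than the paper's specific choices).

```latex
\begin{pmatrix} J & B \\ C & D \end{pmatrix}^{-1}
=
\begin{pmatrix}
J^{-1} + J^{-1} B\, S^{-1} C\, J^{-1} & -\,J^{-1} B\, S^{-1} \\[4pt]
-\,S^{-1} C\, J^{-1} & S^{-1}
\end{pmatrix},
\qquad S \;=\; D - C\,J^{-1} B .
```

Replacing J^{-1} with P_J and the remaining inverses with cheap approximations yields an application of the preconditioner that never assembles A explicitly.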
Finally, the last alternative avoids assembling the matrix L by taking advantage of its block structure. For that purpose, we carry out a process similar to the one used for the matrix A: we write the explicit form of the inverse of L and substitute the inverses of the blocks by preconditioners. Thus, the preconditioner of L, denoted Q_L, has a block structure in which P_11 and P_22 denote preconditioners of L_11 and L_22, respectively. The block matrices L_11 and L_22 are symmetric and positive definite, so we can use the Incomplete Cholesky decomposition (IC(0)) as their preconditioner. The main advantage of this preconditioner is that it permits a matrix-free implementation that does not require allocating all matrices: we only need to assemble the blocks L_11 and L_22 to construct the associated IC(0) preconditioners. The application of P_J with −Q_L used to precondition J is called P_Q.
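A hedged sketch of this last alternative may clarify the matrix-free idea. The 2×2 block-lower-triangular shape assumed below, with blocks L11, L21, L22, is an assumption made for the example (the text only states which diagonal blocks are assembled); spilu stands in for the IC(0) factorization, and SciPy replaces PETSc.

```python
import numpy as np
import scipy.sparse.linalg as spla

def block_L_preconditioner(L11, L21, L22):
    """Matrix-free preconditioner Q_L: only L11 and L22 are factorized."""
    n1, n2 = L11.shape[0], L22.shape[0]
    P11 = spla.spilu(L11.tocsc())              # stand-in for IC(0) of L11
    P22 = spla.spilu(L22.tocsc())              # stand-in for IC(0) of L22

    def apply(r):
        r1, r2 = r[:n1], r[n1:]
        y1 = P11.solve(r1)
        y2 = P22.solve(r2 - L21 @ y1)          # block forward substitution
        return np.concatenate([y1, y2])

    return spla.LinearOperator((n1 + n2, n1 + n2), matvec=apply)
```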
Numerical Results
In this section, the performance of the proposed preconditioners is tested on two different problems: a version of the 3D NEACRP reactor [22] and a configuration of the Ringhals reactor [23]. The neutron diffusion equation in both problems has been discretized using the finite element method presented in Section 1 with Lagrange polynomials of degree three, since previous works have shown that this degree is necessary to obtain accurate results in similar reactor problems [2]. Four eigenvalues were computed for each reactor.
The incomplete lower-upper preconditioner with level 0 of fill (ILU) is provided by the PETSc package [24].
As the modified generalized block Newton method needs an initial approximation of a set of eigenvectors, a multilevel initialization with two meshes was used to obtain this approximation (for more details, see [25]).
The stopping criterion for all solvers has been set to 10^{-6} in the global residual error, where λ_i is the i-th eigenvalue and x_i its associated eigenvector, normalized so that ‖x_i‖ = 1.
The modified block Newton method has been implemented using a dynamic tolerance in the residual error of the solution in the linear systems. The tolerance values have been set to {10^{-2}, 10^{-3}, 10^{-5}, 10^{-8}, 10^{-8}, ...}.
The methods have been implemented in C++ using the data structures provided by the libraries deal.II [3] and PETSc [24]. The computations were run on an Intel Core i7-4790 @ 3.60 GHz with 32 GB of RAM running Ubuntu GNU/Linux 16.04 LTS.
NEACRP Reactor
The NEACRP benchmark in a near-critical configuration [22] is chosen to assess the proposed methodology. The reactor core has a radial dimension of 21.606 cm × 21.606 cm per assembly. Axially, the reactor, with a total height of 427.3 cm, is divided into 18 layers with heights (from bottom to top) of 30.0 cm, 7.7 cm, 11.0 cm, 15.0 cm, 30.0 cm (10 layers), 12.8 cm (two layers), 8.0 cm, and 30.0 cm. Figure 1 shows the reactor geometry and the distribution of the different materials, and the cross-sections of the materials are displayed in Table 1. The total number of cells in the reactor domain is 3978, and zero-flux boundary conditions were imposed. The spatial discretization of the neutron diffusion equation with polynomials of degree three gave 230,120 degrees of freedom. The mesh built to obtain an initial guess had 1308 cells, and the computational time needed to obtain this approximation was 24 s. The four dominant eigenvalues computed are collected in Table 2, which shows that the spectrum of the problem is clustered, with two degenerate eigenvalues. The fast-flux distribution for each mode is displayed in Figure 2.

First, we show the convergence history of the MGBNM for the solution of the eigenvalue problem. Figure 3 plots the residual error against the number of iterations for the NEACRP reactor; the MGBNM needed only four iterations to reach a residual error of 10^{-6}.

Table 3 collects the average number of iterations obtained by directly applying the ILU preconditioner of A and the total time that the GMRES method needs to reach the residual error in the linear systems, given in Tol(‖b − Ax‖). The time spent assembling the matrices and building the preconditioner (setup time (s)) is also displayed. These data are presented for each iteration and as a total sum. The table shows that the number of iterations is not very high, but the time spent assembling the matrix and constructing the preconditioner increases the total CPU time considerably. A new preconditioner for A must be built in each iteration because the columns related to the block Z change considerably in each update.

Table 4 displays the same data for the proposed block preconditioner P_J, which uses the ILU preconditioner to approximate the inverse of M − λ_1 L. In this case, the matrix M − λ_1 L only needs to be assembled once, in the first iteration, to build the preconditioner, since only a preconditioner of M − λ_1 L is required and the value of λ_1 is very similar in all iterations. The mean number of GMRES iterations with P_J was larger than in the previous case, but the total CPU time of this block preconditioner was 26 s lower than that of the full preconditioner. Table 5 shows the data for the block preconditioner P_J when the Geometric Multigrid (GMG) preconditioner is used to approximate the inverse of M − λ_1 L.
Compared with the results of Table 4, these results show that although the total number of iterations and the setup time are much lower for the GMG, the total computational time is much higher, because applying the GMG preconditioner is more expensive than applying the ILU preconditioner.

The next results were obtained with the block preconditioner, in this case approximating (M − λ_1 L)^{-1} by the ILU preconditioner of −L (P_L) and by a block preconditioner of −L (Q_L). The most relevant data for comparing the preconditioners considered in this work are reported in Table 6: the total number of GMRES iterations, the total setup time, the total time to compute the solution, and the maximum memory used by the matrices. We observe that the number of iterations increases when coarser approximations of the inverse of A are used, but the setup time of each preconditioner becomes smaller, and the maximum memory is also reduced significantly. In terms of total CPU time, the block preconditioner, in all of its versions, improves on the times obtained by applying the ILU preconditioner of A directly. Among the options for obtaining a preconditioner of M − λ_1 L there are no large differences in computational time, but there is an important saving in memory; when memory consumption is taken into account, the best results are obtained by P_L.

Table 7 shows the timings and the memory spent on matrix allocation with and without the matrix-free technique. The results show that not only the matrix memory consumption and the assembly time are reduced, but also the time spent computing the matrix-vector products. Overall, the matrix-free strategy reduces the total CPU time by about 30%.

Finally, we compared the MGBNM with this methodology against other eigenvalue solvers commonly used in neutron diffusion computations (Table 8), showing results for different numbers of computed eigenvalues (No. eigs). In particular, we have chosen for this comparison the generalized Davidson method preconditioned with the block Gauss-Seidel preconditioner and the Krylov-Schur method applied after reducing the generalized eigenvalue problem to an ordinary eigenvalue problem, as in [2]. Both methods were used through the SLEPc library [12]. From the computational times, we deduce that the MGBNM was twice as fast as the other solvers for the computation of one and two eigenvalues, and it remained very competitive for four eigenvalues.
Ringhals Reactor
For a practical application of the preconditioners in a real reactor, we have chosen the configuration of the Ringhals reactor. In particular, we have chosen the C9 point of the BWR reactor Ringhals I stability benchmark, which corresponds to an operating point that degenerated into an out-of-phase oscillation [23]. The reactor is composed of 27 planes with 728 cells in each plane; a more detailed representation of its geometry is given in Figure 4. The spatial discretization using finite elements of degree three gave 1,106,180 degrees of freedom. The coarse mesh considered to obtain an initial guess for the MGBNM had 6709 cells, and the problem associated with this mesh had 386,768 degrees of freedom. The computed dominant eigenvalues were 1.00191, 0.995034, 0.992827, and 0.991401, and the corresponding fast fluxes are represented in Figure 5. The convergence history of the MGBNM for the Ringhals reactor is represented in Figure 6. For this reactor, the number of iterations needed to reach the tolerance (10^{-6}) was also equal to four.
Figure 2. Fast fluxes' distribution of the NEACRP reactor corresponding to the first four modes: (a) 1st mode, (b) 2nd mode, (c) 3rd mode, (d) 4th mode.
Figure 3. Convergence history of the Modified Generalized Block Newton Method (MGBNM) for the NEACRP reactor.
Figure 5. Fast fluxes' distribution of the Ringhals reactor corresponding to the first four modes.
Table 2. Eigenvalues for the NEACRP reactor.
Table 3. Data for the preconditioner P for the NEACRP reactor. GMRES, Generalized Minimal Residual.
Table 4. Data for the preconditioner P_J with ILU for the NEACRP reactor.
Table 5. Data for the preconditioner P_J with the Geometric Multigrid (GMG) for the NEACRP reactor.
Table 6. Data obtained by using different preconditioners for the NEACRP reactor.
Table 7. Data obtained using different matrix implementations for the NEACRP reactor.
Table 8. Computational times for the MGBNM with P_Q, the generalized Davidson method, and the Krylov-Schur method for the NEACRP reactor. eigs, eigenvalues. | 4,954.6 | 2019-01-15T00:00:00.000 | [ "Computer Science" ] |
Life science database cross search: A single window system for dispersed biological databases
A comprehensive search system for bioscience databases is under development. We constructed a search service, the Life science database cross search system (https://biosciencedbc.jp/dbsearch/index.php?lang=en), by integrating numerous biomedical databases using database crawling algorithms. The system integrates 600 databases containing over 90 million entries indexed for biomedical research and development.
Background:
Cross-search services for bioscience databases are still developing. Because data are considerably dispersed across various organizations and networks, finding required information immediately is difficult. Additionally, conducting comprehensive searches in large bioscience databases using general web search engines such as Google is difficult [1]. Some search-related infrastructure dedicated to research, such as BioCaddie [2] and World Wide Science [3], has been constructed, but because of the deep web problem [4] these search results are neither comprehensive nor efficient. In this project, we collected all the data from selected bioscience web databases that contained entries in the deep web and developed a web search engine that can search the compiled, comprehensive bioscience database.
Methodology:
This web service is constructed in three steps as shown in Figure 1 and the details are described below.
Web data crawling:
URLs of bioscience web database entries were collected from database catalog sites and funding databases. We checked their site policies, terms of use, and robots.txt files and evaluated the pros and cons of crawling the web data using a well-known algorithm [5]. Then, biocurators investigated each entry in the database and identified the data containing text suitable for text search. Additionally, they checked the range of variation of the URLs. For instance, some URLs comprised zero-padded sequential numbers with a predetermined number of digits, or well-known identifiers in the bioscience field such as PDB or UniProt IDs. Database crawling scripts were programmed from the compiled URL list. If a database entry contained useful metadata such as "species name," "gene name," or "date of creation of the database entry," the script parsed each metadata field for storage. These metadata were classified into bioscience categories, so that the category of each database could be distinguished.
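The URL-generation step described above can be illustrated with a short sketch. The base URL, padding width, and politeness delay below are invented for the example; this is not the project's actual crawling script, which would also have to honour each site's robots.txt and terms of use as noted above.

```python
import time
import requests

def sequential_urls(base, n_entries, width):
    # e.g. base="https://db.example.org/entry/" -> .../entry/000001, 000002, ...
    return [f"{base}{i:0{width}d}" for i in range(1, n_entries + 1)]

def crawl(urls, delay=1.0):
    pages = {}
    for url in urls:
        resp = requests.get(url, timeout=30)
        if resp.ok:
            pages[url] = resp.text             # raw HTML, parsed for metadata later
        time.sleep(delay)                      # throttle requests to the source site
    return pages

# pages = crawl(sequential_urls("https://db.example.org/entry/", 100, 6))
```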
Server construction and application installation:
A web server and a search server were constructed.
[A] Web server: The input interface containing the search box was written in PHP. When a user enters words into the text box, related words are suggested via an internal dictionary. The search query is parsed, converted into JSON format, and processed (as described in Section 3) before being passed to the Elasticsearch application. The web server then parses the JSON data returned by Elasticsearch and displays the search results, with parsed title and snippet fields for readability. The web interface was published together with a page listing the target databases and a help page, both written in English and Japanese.

[B] Search server: Elasticsearch version 1.7 was installed on the search servers. The crawled data were analyzed by a bigram tokenizer and indexed. A cluster structure comprising multiple servers provides load balancing and data redundancy. The search server receives a search query request from the web server and passes it to the Elasticsearch search process.
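The bigram indexing described above might be configured roughly as follows. The index name, type name, and field names are illustrative assumptions, not the service's actual schema; the settings use the Elasticsearch 1.x mapping syntax (string fields, nGram tokenizer) mentioned in the text.

```python
import json
import requests

settings = {
    "settings": {
        "analysis": {
            "tokenizer": {
                "bigram_tokenizer": {"type": "nGram", "min_gram": 2, "max_gram": 2}
            },
            "analyzer": {
                "bigram_analyzer": {"type": "custom", "tokenizer": "bigram_tokenizer"}
            }
        }
    },
    "mappings": {
        "entry": {
            "properties": {
                "title":   {"type": "string", "analyzer": "bigram_analyzer"},
                "body":    {"type": "string", "analyzer": "bigram_analyzer"},
                "species": {"type": "string", "index": "not_analyzed"}
            }
        }
    }
}
# Create the index on a local Elasticsearch 1.7 node.
requests.put("http://localhost:9200/biodb", data=json.dumps(settings))
```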
Search query processing and the search algorithm:
The search query entered into the search box is subjected to Boolean interpretation, word segmentation, and stop-word removal in the web server. In this process, if the query contains words that are often used in the field of bioinformatics (e.g., "gene" and "database"), the ranking score of the related database category is boosted. The search results returned by Elasticsearch are displayed in order of decreasing score.
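A sketch of this query-side processing is given below. The stop-word list, category name, field names, and boost factor are placeholders; the real service's dictionary-based suggestion and Boolean parsing are not reproduced.

```python
STOP_WORDS = {"the", "of", "and"}              # placeholder stop-word list

def build_query(raw_query, boost_terms=("gene", "database")):
    terms = [t for t in raw_query.lower().split() if t not in STOP_WORDS]
    should = []
    if any(t in terms for t in boost_terms):
        # Boost hits from the related database category.
        should.append({"term": {"category": {"value": "genomics", "boost": 2.0}}})
    return {
        "query": {
            "bool": {
                "must": [{"match": {"body": " ".join(terms)}}],
                "should": should
            }
        },
        "sort": ["_score"]
    }
```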
Availability:
Users can retrieve comprehensive information on matched text by entering a keyword query into the search box of the Life science database cross search system [7]. Detailed information is available via the link in each search result snippet. The index is automatically updated by a batch script when updates of the original site are detected, or when regular annual updates are recognized on the basis of RSS feeds or update histories. This extensive service should be helpful as an academic search infrastructure for researchers who need comprehensive access to entries in bioscience databases, as well as for users in intellectual property departments. This type of domain-specific search infrastructure has been expensive to construct, and the legal decision to crawl each database has been difficult to make. However, search infrastructure has recently become ubiquitous through global commercial search engines and is now prevalent in everyday society, and conventions such as "robots.txt" are well understood. The importance of crawling is widely recognized because data-driven analytic studies are becoming common, and Japanese copyright law has been revised to clarify the legitimacy of data crawling. In this situation, Life science database cross search is the only service that provides high-quality search capabilities over large quantities of bioscience data, and it is reasonable to expect that bioscience databases will continue to be distilled into this infrastructure.
Conclusion:
We have described the development and use of a comprehensive search system that applies database crawling algorithms to integrate 600 databases containing over 90 million entries indexed for biomedical research and development. | 1,206.8 | 2019-12-31T00:00:00.000 | [ "Computer Science", "Biology" ] |
Solution group representations as quantum symmetries of graphs
In 2019, Atserias et al. constructed pairs of quantum isomorphic, non-isomorphic graphs from linear constraint systems. This article deals with quantum automorphisms and quantum isomorphisms of colored versions of those graphs. We show that the quantum automorphism group of such a colored graph is the dual of the homogeneous solution group of the underlying linear constraint system. Given a vertex- and edge-colored graph with certain properties, we construct an uncolored graph that has the same quantum automorphism group as the colored graph we started with. Using those results, we obtain the first known example of a graph that has quantum symmetry and finite quantum automorphism group. Furthermore, we construct a pair of quantum isomorphic, non-isomorphic graphs that both have no quantum symmetry.
Introduction
Generalizing Mermin's magic square [12], linear system games were introduced by Cleve and Mittal in [7]. It was then shown by Cleve, Liu and Slofstra [6] that perfect quantum strategies for those games are related to the representations of the associated solution group, a finitely presented group with generators and relations reflecting the underlying linear system.
Together with his collaborators, the first author [1] constructed quantum isomorphic but non-isomorphic graphs from linear system games having perfect quantum strategies but no such classical strategies. In this case, a quantum isomorphism corresponds to a perfect quantum strategy of the isomorphism game, which was also introduced in [1].
Quantum automorphism groups of graphs were first introduced by Banica [2] and Bichon [5] to obtain examples of quantum permutation groups. They are generalizations of the automorphism group of a graph in the framework of Woronowicz's compact matrix quantum groups. An interesting question to ask is which quantum permutation groups can be realized as the quantum automorphism group of some graph; this question has been considered, for example, in [4].
In this work, we give a correspondence between representations of solution groups and quantum isomorphisms/quantum automorphisms of colored versions of the graphs appearing in [1]. More specifically, we will see that the quantum automorphism groups of the colored versions of those graphs are given by the dual of the homogeneous solution group of the underlying linear constraint system. Furthermore, representations of the non-homogeneous solution group provide quantum isomorphisms between the colored versions of the graphs.
Additionally, we will discuss a decoloring procedure for vertex- and edge-colored graphs which, for certain graphs, does not change the quantum automorphism group. This procedure decolors the vertices by adding paths of different lengths to vertices of different colors. In a second step, we get rid of the edge-colors by subdividing the colored edges and then adding paths of different lengths to the newly added vertices.
The previous results allow us to construct two explicit examples of graphs that were not known before. First, we construct a graph that has quantum symmetry and finite quantum automorphism group. Second, we obtain a pair of graphs that are quantum isomorphic and non-isomorphic, where additionally neither graph has quantum symmetry.
The article is structured as follows. In Section 2, we briefly discuss compact quantum groups and give the definition of the quantum automorphism group of a vertex- and edge-colored graph. In Section 3, we define colored versions of the graphs appearing in [1] and prove that the dual of the solution group is the quantum automorphism group of such a graph. Section 4 deals with decoloring the graphs from the previous section without changing the quantum automorphism group. In Section 5, we provide (uncolored) graphs whose quantum automorphism group is the dual of a solution group; in particular, we obtain a graph with quantum symmetry and finite quantum automorphism group, see Corollary 5.6. Finally, we discuss quantum isomorphisms of the colored graphs in Section 6, where we present our example of a pair of graphs that are quantum isomorphic but non-isomorphic, with neither graph having quantum symmetry, see Corollary 6.10.
Preliminaries
We start with the definition of a compact quantum group, see [20]. Throughout this article, we write A ⊗ B for the minimal tensor product of the C*-algebras A and B.
We then write G ≅ H. Definition 2.3. Let G = (C(G), ∆_G) be a compact quantum group. We say that G is finite if the C*-algebra C(G) is finite-dimensional.
The following example can for example be found in [11,Example 4.5].
We will now define compact matrix quantum groups, a subclass of compact quantum groups.
Definition 2.5 ([19]).A compact matrix quantum group G is a pair (C(G), u), where C(G) is a unital C * -algebra and u = (u ij ) ∈ M n (C(G)) is a matrix such that • the matrix u and its transpose u T are invertible.
The matrix u is usually called fundamental representation of G.
A very important example is the quantum symmetric group, the quantum analogue of the symmetric group.It was defined by Wang in [17].
Definition 2.6. The quantum symmetric group S_n^+ = (C(S_n^+), u) is the compact matrix quantum group where C(S_n^+) is the universal C*-algebra generated by the entries of an n × n matrix u = (u_ij) that are projections and whose rows and columns each sum to 1. Note that a matrix u = (u_ij), i, j ∈ [n], with entries from a nontrivial unital C*-algebra satisfying these relations, as in the definition above, is known as a magic unitary.
Quantum automorphism groups of finite graphs were defined in [2], [5].We give a more general definition in the following, for vertex-and edge-colored graphs.Note that edgecolorings are similar to distances in finite quantum metric spaces, as considered in [3].For us, graphs are undirected and do not have loops nor multiple edges.A colored graph is a graph G with vertex set V and edge set E along with a coloring function c : V ∪ E → S for some set S. We refer to c(x) as the color of the vertex/edge x.To be specific, we will sometimes refer to colored graphs as vertex-and edge-colored graphs.Furthermore, we will also consider edge-colored graphs where the coloring function is defined only on the edge set E, i.e., edges but not vertices are colored.In either case we will use E c to refer to the set of edges of color c.Also, A Gc will denote the adjacency matrix of the edge color c, i.e., (A Gc ) ij = 1 if (i, j) ∈ E c and (A Gc ) ij = 0 otherwise.Definition 2.7.Let G be a vertex-and edge-colored graph.The quantum automorphism group Qut(G) is the compact matrix quantum group (C(Qut(G)), u), where C(Qut(G)) is the universal C * -algebra with generators u ij , i, j ∈ V (G) and relations where ( 4) is nothing but k u ik (A Gc ) kj = k (A Gc ) ik u kj for all i, j ∈ V (G) and all edge-colors c.
It is not immediately obvious that this defines a compact matrix quantum group.By Definition 2.5 we have to show that u and u T are invertible as well as that the comultiplication n is a compact matrix quantum group.The first equation follows for example from [15,Lemma 2.1.2].For the second equation, we take vertices i, j with c(i) = c(j).We have ∆(u ij ) = k u ik ⊗ u kj .Note that there is no k ∈ V (G) with c(k) = c(i) and c(k) = c(j), since we assumed c(i) = c(j).We deduce u ik ⊗ u kj = 0 for all k and thus ∆(u ij ) = 0. Summarizing, we see that Qut(G) is indeed a compact matrix quantum group.
The following lemma gives relations that are equivalent to Relation (4), see [15, Proposition 2.1.3].Lemma 2.8.Let u ij , 1 ≤ i, j ≤ n, be the generators of C(S + n ).Then Relation (4) is equivalent to the relations for some edge-color c.
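In the commutative (classical) case a magic unitary is simply a permutation matrix, so Relation (4) reduces to P A_{G_c} = A_{G_c} P for every edge color c, together with the requirement that vertex colors are preserved. The following small sketch, with made-up variable names, checks these conditions for a candidate permutation; it only illustrates the classical shadow of Definition 2.7.

```python
import numpy as np

def is_colored_automorphism(perm, color_adjacency, vertex_colors):
    """perm[i] is the image of vertex i; color_adjacency maps colors to 0/1 adjacency matrices."""
    n = len(perm)
    P = np.zeros((n, n), dtype=int)
    for i, j in enumerate(perm):
        P[j, i] = 1                                    # permutation matrix sending i -> perm[i]
    if any(vertex_colors[i] != vertex_colors[perm[i]] for i in range(n)):
        return False                                   # vertex colors must be preserved
    return all(np.array_equal(P @ A, A @ P) for A in color_adjacency.values())
```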
Colored graphs whose quantum automorphism group is the dual of a solution group
In the following definition, we use 1 to denote the identity element of a group.
If b = 0, then we refer to Γ(M, b) as the homogeneous solution group of the system Mx = 0, and define it in the same way as above except that we add the relation γ = 1. This is equivalent to removing γ from the list of generators, changing the right-hand side of the equation in (3) to 1, and removing items (4) and (5). We will sometimes also use Γ_0(M) to denote this group.
We will denote by S_k(M) the set {i ∈ [n] : M_ki = 1}, often writing simply S_k when M is clear from context. We use ±1^S to denote the set of functions α : S → {1, −1}, and will typically write α_i instead of α(i). We will also use ±1^S_0 to denote the subset of such functions satisfying ∏_{i∈S} α_i = 1, and similarly use ±1^S_1 for the set of such functions satisfying ∏_{i∈S} α_i = −1.
Definition 3.2. Given M and b as above, the colored graph G(M, b) has as vertices the pairs (k, α) with k ∈ [m] and α ∈ ±1^{S_k}_{b_k}; the vertex (k, α) has vertex color k, and the sets V_k = {(k, α) : α ∈ ±1^{S_k}_{b_k}} partition the vertex set. For any l, k ∈ [m] such that S_l ∩ S_k ≠ ∅, the graph G contains all (non-loop) edges between V_l and V_k (thus each V_k induces a complete subgraph). Given an edge e between adjacent vertices (l, α) and (k, β), the color of e, denoted c(e), is the function α△β ∈ ±1^{S_l ∩ S_k} defined by (α△β)_i = α_i β_i.
Note that α△β = β△α, and so the edge colors defined above really are edge colors and not arc (directed edge) colors.
Remark 3.3. According to the above definition, it is possible for two edges e and e′ between pairs of vertices (l, α), (k, β) and (l′, α′), (k′, β′) to be colored the same color even if l is equal to neither l′ nor k′, i.e., the edges are between different pairs of subsets V_l, V_k and V_{l′}, V_{k′}. This can happen since it is possible that S_l ∩ S_k = S_{l′} ∩ S_{k′} and α△β = α′△β′. However, we do wish such edges to be distinguished by the (quantum) automorphism group of G(M, b), i.e., u_{(l,α),(l′,α′)} u_{(k,β),(k′,β′)} = 0, where u is the fundamental representation of Qut(G(M, b)). This could be done explicitly by defining the color of the edge e (for instance) to be c(e) = ({l, k}, α△β). However, this is redundant for our purposes, since such edges e and e′ are already distinguished by the (quantum) automorphism group due to the vertex colors of their endpoints. In other words, the edges between V_l and V_k can be thought of as being implicitly colored distinctly from those between V_{l′} and V_{k′} whenever {l, k} ≠ {l′, k′}.
Remark 3.4. We also note that, in order to reduce the total number of colors used, for each pair l, k ∈ [m] we can choose one color α ∈ ±1^{S_l ∩ S_k} and replace the edges of that color with non-edges. Moreover, instead of coloring the edges between V_l and V_k with functions from ±1^{S_l ∩ S_k}, we can simply use {1, ..., 2^{|S_l ∩ S_k|} − 1}, and this will not change Qut(G), as explained in the previous remark. In fact, we will need to do this for some of our results later on.
Example 3.5. Consider the linear system Mx = b whose two rows read, in multiplicative notation, x_1 x_2 x_3 = 1 and x_1 x_4 x_5 = −1 (see the caption of Figure 1). Then the graph G(M, b) is the one given in Figure 1. We used Remarks 3.3 and 3.4 to reduce the number of colors needed in the graph.
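To make Definition 3.2 concrete, the following sketch enumerates the vertices and colored edges of G(M, b) for the system of Example 3.5. The encoding of M and b (with row products equal to (−1)^{b_k}) and the plain dictionary representation are illustrative choices, and the color reduction of Remarks 3.3 and 3.4 is not applied.

```python
from itertools import product
from math import prod

M = [[1, 1, 1, 0, 0],
     [1, 0, 0, 1, 1]]
b = [0, 1]                                    # row products must equal (-1)**b[k]

S = [[i for i, m in enumerate(row) if m] for row in M]     # supports S_k
vertices = []                                              # pairs (k, alpha)
for k, (supp, bk) in enumerate(zip(S, b)):
    for signs in product([1, -1], repeat=len(supp)):
        if prod(signs) == (-1) ** bk:
            vertices.append((k, dict(zip(supp, signs))))

edges = {}                                    # (index, index) -> edge color alpha triangle beta
for a in range(len(vertices)):
    for c in range(a + 1, len(vertices)):
        (l, alpha), (k, beta) = vertices[a], vertices[c]
        common = sorted(set(S[l]) & set(S[k]))
        if common:
            edges[(a, c)] = tuple((i, alpha[i] * beta[i]) for i in common)

print(len(vertices), "vertices,", len(edges), "colored edges")   # 8 vertices for this system
```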
where each u (k) is a magic unitary.Furthermore, u Proof.If l = k, then c((l, α)) = l = k = c((k, β)) and thus u (l,α),(k,β) = 0.This shows that u has the block form given in the lemma statement.Moreover, each diagonal block must be a magic unitary since u is a magic unitary.Now fix k ∈ [m].Pick α, β ∈ ±1 S k b k arbitrarily.Suppose that α, β ∈ ±1 S k b k are such that α△β = α△ β.Note that since all these functions are elements of ±1 S k b k , the operation △ is simply the pointwise product, and therefore we have that α△α = β△β.In particular, this implies that the color of the edge between (k, α) and (k, α) is equal to the color of the edge between (k, β) and (k, β ′ ) if and only if β ′ = β.Therefore, u Thus the operator u Thus the set of operators appearing in any row (and similarly any column) of Since the entries of any given row of a magic unitary commute, and every row of u (k) contains the same set of entries, we have that all entries of u (k) commute.Remark 3.7.Let Γ = Γ(M, 0) be the homogeneous solution group of the system M x = 0.By Example 2.4, the group C * -algebra C * (Γ) can be written as Note that we slightly abuse the notation by writing x i instead of u x i .
Proof.The proof is structured as follows: we use the universal properties of C * (Γ) and C(Qut(G)) respectively to first obtain a * -homomorphism ϕ 1 : C * (Γ) → C(Qut(G)), and then obtain a *homomorphism ϕ 2 : C(Qut(G)) → C * (Γ).We will then show that ϕ 1 and ϕ 2 are inverses of each other thus proving that they are in fact isomorphisms.Lastly, we will prove that ϕ 1 intertwines the coproducts as described in the theorem statement.
Step 1: Construction of a * -homomorphism For this step, we will construct elements y i of C(Qut(G)) that satisfy the relations of the generators x i of C * (Γ) given in Remark 3.7.This proves that there is a * -homomorphism ϕ 1 from C * (Γ) to C(Qut(G)) such that ϕ 1 (x i ) = y i .Later we will see that ϕ 1 is in fact an isomorphism.
Let u be the fundamental representation of Qut(G).By Lemma 3.6, u = k∈[m] u (k) where each u (k) is a magic unitary indexed by α,β := u (k,α),(k,β) depends only on k and the value of α△β ∈ ±1 S k 0 .Thus, as in the proof of Lemma 3.6, for each δ ∈ ±1 S k 0 we let u α,β such that α△β = δ.Note that for any 0 and thus every row/column of u (k) contains the same set of operators and Now we define y α for all k ∈ [m] and i ∈ S k .We first aim to show that Consider the edges between the subsets V l and V k .For each δ ∈ ±1 S l ∩S k , let A δ be the adjacency matrix of the graph consisting of the edges of G colored δ.Further, let B δ be the submatrix of A δ consisting of the rows indexed by V l and columns indexed by V k .In other words, , and similarly define B + and B − .Then Since u must commute with each A δ by definition of Qut(G), it must also commute with both A + and A − .This implies that u (l) . Considering the (l, α), (k, β) entry of both sides of the former equation, we see that Note that for every term in the above sums, we have that α where α = α△α ′ , and similarly for u Doing the same for u (l) B − = B − u (k) yields the same equation but with α i β i replaced with −α i β i and combining these proves that i .
So we have shown that the value of y (k)
i does not depend on k, and thus we will simply denote this operator by y i .Now note that since y i is a linear combination (with real coefficients) of the operators u (k) α for k ∈ [m] such that i ∈ S k , and these operators are entries of the magic unitary u, we have that y * i = y i .Also, by Equation ( 6) we have that u β = 0 for α = β and thus Thus the y i satisfy relation (1) from Definition 3.1.Next we will show that relation (2) of Definition 3.1 holds, i.e., that y i y j = y j y i if there exists k ∈ [m] such that i, j ∈ S k .Suppose that i, j, k are as described.Then y i = y (k) i and y j = y (k) j are both linear combinations of the entries of u (k) which pairwise commute by Lemma 3.6.Therefore y i and y j commute as desired.
Lastly, we must show that relation (3) of Definition 3.1 holds.Recall that we are trying to show that C * (Γ 0 (M )) ∼ = C(Qut(G)), i.e., we are in the homogeneous solution group case.Thus we must show that i∈S k y i = 1 for all k ∈ [m].We have that α are pairwise orthogonal, when we expand the above product all cross terms disappear and we obtain as desired.Therefore the elements y i ∈ C(Qut(G)) for i ∈ [n] satisfy the relations of the generators of C * (Γ) and thus there exists a * -homomorphism Step 2: Construction of a * -homomorphism For this step, we will construct elements v (l,α),(k,β) of C * (Γ) that satisfy all the relations of the generators u (l,α),(k,β) of C(Qut(G)).This proves that there is a * -homomorphism . Later we will see that ϕ 2 is an isomorphism.
, and that Note that the latter implies that p + i and p − i are orthogonal, i.e., p + i p − i = 0. We will abuse notation somewhat and write p α i i to denote p + i whenever α i = +1 and similarly for p − i when α are indeed well defined since the commutativity of the elements p ± i for i ∈ S k follows from the commutativity of the x i for i ∈ S k (i.e., relation (2) from Definition 3.1).Since it is the product of pairwise commuting projections, we have that v This also implies that for a fixed k ∈ [m] the projections v α are pairwise orthogonal.Now suppose that α ∈ ±1 S k 1 , i.e., that i∈S k α i = −1.Then, using the easily checked fact that x i p α i i = α i p α i i , we have that Thus v Combining this with Equation ( 7), we have that Next, for all k ∈ [m] and α, β δ where δ = α△β.Lastly, define v to the the matrix indexed by V (G) such that We aim to show that v satisfies all of the relations satisfied by the fundamental representation u of Qut(G).First, v is a magic unitary since all of its entries are projections and the sum of its This means there exists j ∈ S l ∩ S k such that α j α ′ j = β j β ′ j .Since all these terms are in {+1, −1}, we have that δ j = α j β j = α ′ j β ′ j = δ ′ j .This implies that p So we have shown that v is a magic unitary satisfying all of the relations satisfied by u and thus by the universal property of C(Qut(G)), there exists a * -homomorphism Step 3: Showing that ϕ 1 and ϕ 2 are inverses of each other.
We first show that ϕ 2 • ϕ 1 : C * (Γ) → C * (Γ) is the identity.Since both ϕ 1 and ϕ 2 are *homomorphisms, it suffices to show that ϕ 2 • ϕ 1 acts as the identity on the generators To see that the final expression above is equal to x i , we will show that multiplying it by x i yields 1.First, note that since x i p α i i = α i p α i i , we have that Thus, making use of Equation ( 9) in the last equality, we have that ) is the inverse of x i , which is its own inverse by definition.Therefore ϕ 2 • ϕ 1 (x i ) = x i and thus ϕ 2 • ϕ 1 is the identity map on C * (Γ).Now we will show that is the identity map.As above, it suffices to show that ϕ 1 •ϕ 2 acts as the identity on all entries of the fundamental representation u.Other than 0, the entries of u are precisely the elements u where we have used the fact that α to denote the final expression of Equation (10).We wish to show that Since the y i satisfy all of the relations that the x i satisfy, the same argument as for the v Suppose that α = β.Then there exists j ∈ S k such that α j = β j and therefore where the penultimate equality comes from the fact that α j = β j implies that α j β j = −1.Using the above we have that This completes the proof that α and thus that ϕ 1 • ϕ 2 is the identity on C(Qut(G)).Combining with the above, this proves that ϕ 1 : C * (Γ) → C(Qut(G)) and ϕ 2 : C(Qut(G)) → C * (Γ) are isomorphisms of these C * -algebras which are inverse to each other. Step Since ϕ 1 and the coproducts are * -homomorphisms, it suffices to prove the identity on the generators, i.e., that First, let us determine ∆ G (u α,δ△α and thus Since we have already shown that (ϕ 1 ⊗ ϕ 1 ) • ∆ Γ (x i ) = y i ⊗ y i , this completes the proof.
Constructing uncolored graphs that have the same quantum automorphism group as a given colored graph
In the following, we will describe a procedure that given certain vertex-and edge-colored graphs produces a decolored graph with isomorphic quantum automorphism group.This procedure is divided into two steps: We first decolor the vertices and then the edges.We start with a lemma known from [8, Lemma 3. The following lemma can be found in [14,Lemma 3.2].The distance d(v, w) between two vertices v, w ∈ V (G) is the length of a shortest path connecting v and w.Lemma 4.2.Let G be a finite graph and u vw , 1 ≤ v, w ≤ n be the generators of C(Qut(G)).If we have d(v, p) = d(w, q), then u vw u pq = 0.
Lemma 4.3. Let G be a finite graph and u the fundamental representation of
Proof. Let v, w ∈ V(G) and p ∈ V(G) with d(v, p) = k as above. Using Relation (2) and Lemma 4.2, we obtain the claimed identity.
We will now define an edge-colored graph G′ from the vertex- and edge-colored graph G.
Definition 4.5. Let G be a vertex- and edge-colored graph. Depending on the color c, attach a path of length n_c ∈ N_0 to every vertex colored c, where n_{c_1} ≠ n_{c_2} for colors c_1 ≠ c_2, and then decolor the vertices of the graph. We choose one of the edge-colors of G and let all edges in the added paths have this edge-color. We denote this new edge-colored (but not vertex-colored) graph by G′.
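A small sketch of this vertex-decoloring step is given below, using networkx (the paper does not prescribe any implementation). The path lengths are assigned arbitrarily but distinctly per color, matching the requirement n_{c_1} ≠ n_{c_2}; the names of the added path vertices are invented for the example.

```python
import networkx as nx

def decolor_vertices(G, vertex_color):
    """Return G': attach a path of length n_c to every vertex of color c."""
    # Distinct path lengths per color (n_c in N_0, different colors -> different lengths).
    lengths = {c: i for i, c in enumerate(sorted(set(vertex_color.values()), key=str))}
    H = G.copy()
    for v in list(G.nodes):
        prev = v
        for step in range(1, lengths[vertex_color[v]] + 1):
            w = (v, "tail", step)              # fresh vertex for the attached path
            H.add_edge(prev, w)
            prev = w
    return H
```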
The next lemma gives a description of the fundamental representation of Qut(G ′ ).We will see in the following theorem that the quantum automorphism groups of G and G ′ are isomorphic.
Example for the construction of the graph G ′ from G. We chose the edge-colors between the added paths to be black.
where additionally u v i w i = u v 0 w 0 for all i and u v i w i = 0 for c(v) = c(w), c(.) being the vertex-colors in the original graph G. Proof.
Step 1: It holds u v i w 0 = u v 0 w i = 0 for i = 0. We know that it holds deg(v i ) ∈ {1, 2} for vertices v i with i > 0. Since we have deg(w 0 ) ≥ 3 by assumption, we get u v i w 0 = 0 by Lemma 4.1.We similarly obtain u v 0 w i = 0.
Step 2: We have u v i w j = 0 for i = j, i, j > 0. First assume i < j.Then v 0 is a vertex with d(v i , v 0 ) = i and deg(v 0 ) ≥ 3. Since i < j, we know that the vertices q with d(w j , q) = i are in the path added to w and thus deg(q) ∈ {1, 2}.We deduce u v i w j = 0 by Lemma 4.3.The case i > j follows similarly.
Step 3: It holds since u v 1 p 0 = 0 and u v 0 w 2 = 0 by Step 1. Furthermore, for i ≥ 1, we have since it holds that u v i+1 w i−1 = 0 and u v i w i+2 = 0 by Step 1 or Step 2. Note that if there is no w i+2 , we still get u v i w i = u v i+1 w i+1 by a similar calculation, since then w i is the only neighbor of w i+1 .
Step 4: It holds Proof.Let u be the fundamental representation of Qut(G ′ ).As in Lemma 4.6, we denote by v i , 0 ≤ i ≤ n c(v) the vertices in the added path with d(v, v i ) = i and let Let c e be the color of the edges in the attached paths in G ′ .For colors c = c e , we get that By Lemma 4.6, we directly see that uA (11)).The adjacency matrix where B i is the (V i−1 ×V i )-matrix with (B i ) a i−1 b i = δ ab .Also by Lemma 4.6, we see that uA G ′ ce = A G ′ ce u implies u 0 A Gc e = A Gc e u 0 .Furthermore, since (u i ) v i w i = (u 0 ) v 0 w 0 for all i, we see that C(Qut(G ′ )) is generated by the entries of u 0 .We conclude that C(Qut(G ′ )) is generated by a magic unitary u 0 that fulfills u 0 A Gc = A Gc u 0 for all edge-colors c and (u 0 ) v 0 w 0 = 0 for c(v) = c(w) (see Lemma 4.6).This yields a surjective * -homomorphism ϕ : C(Qut(G)) → C(Qut(G ′ )), w ab → (u 0 ) ab , where w is the fundamental representation of Qut(G).
For the other direction, take the fundamental representation w of Qut(G) and build the matrix w ′ as follows where we put (w i ) a i b i = w a 0 b 0 .Note that w ′ is a magic unitary if and only if the matrices w i are magic unitaries.We compute b;nc(b)≥i where we used w ab = 0 for c(a) = c(b) and the fact that w is a magic unitary.Similarly, we get a;nc(a)≥i (w i ) a i b i = 1 and thus w ′ is a magic unitary.It remains to show that w ′ commutes with A G ′ c for all edge colors c.We deduce from ( 12) and ( 14) that wA Gc = A Gc w implies 13) and ( 14), we see that Note that we have by definition of w i and B i .We similarly get that w i+1 B t i+1 = B t i+1 w i is automatically fulfilled by definition of w i and B i .Since we have wA Gc e = A Gc e w by assumption, we obtain We will now deal with the edge-colors of G ′ .First, we need the following easy lemma.
Lemma 4.8.Let w ij , 1 ≤ i, j ≤ n be elements in a C * -algebra such that the matrix w = (w ij ) 1≤i,j≤n is a magic unitary.Then w ij w kl + w il w kj is a projection if and only if w ij w kl = w kl w ij and w il w kj = w kj w il .
Proof.Let w ij w kl + w il w kj be a projection.Since it is self-adjoint, we have Multiplying by w ij and w il , respectively, yields w ij w kl = w ij w kl w ij and w il w kj = w il w kj w il .But, by taking adjoints, this implies w ij w kl = w kl w ij and w il w kj = w kj w il .The other direction is clear, since w ij w kl + w il w kj is the sum of two orthogonal projections if w ij w kl = w kl w ij and w il w kj = w kj w il .
We will now define an (uncolored) graph G′′ from the edge-colored graph G′. In the next theorem, we will see that for certain graphs the quantum automorphism groups of G′ and G′′ are isomorphic. Let G be a graph and e = (u, v) ∈ E(G). We say that we subdivide e if we delete the edge e = (u, v) from G and add a vertex w as well as edges (u, w) and (w, v) to the graph.
Definition 4.9. Let G be a vertex- and edge-colored graph. First decolor the vertices by applying the construction of Definition 4.5 to obtain the edge-colored graph G′. We denote the color of the newly added edges in G′ by c_0. We subdivide each colored edge with c(e) ≠ c_0 and add a path of length m_c to the subdividing vertex, where m_{c_1} ≠ m_{c_2} for colors c_1 ≠ c_2, and then decolor the edges of G′. We call this graph G′′.
Figure 3: Example for the construction of the graph G ′′ from G ′ .We chose black to be the edge-color c 0 .Lemma 4.10.Let G be a vertex -and edge-colored graph such that deg(v) ≥ 3 for all v ∈ V (G) and let G ′′ be the graph as in Definition 4.9.We denote the vertex that subdivided the edge e in G by e 0 and the vertices in the added path with d(e 0 , e i ) = i by e i , 1 ≤ i ≤ m c(e) .Furthermore where u e i f i = u vx u wy + u vy u wx for c(e) = c(f ), e = (v, w), f = (x, y) and u e i f i = 0 for c(e) = c(f ).
and thus u e 0 v 1 = 0 by Lemma 4.1.If n c(v 0 ) ≥ 2, then v j has at least one neighbor of degree 1 or 2. Since e 0 only has neighbors of degree ≥ 3, we get u e 0 v j = 0 by Lemma 4.3.
Assume i = j and i, j = 0. Let furthermore i < j.Then e 0 is a vertex with d(e i , e 0 ) = i and deg(e 0 ) = 3.Since i < j, we know that the vertices q with d(v j , q) = i are in the path and thus deg(q) ∈ {1, 2}.We deduce u e i v j = 0 by Lemma 4.3.The case i > j follows similarly.
It remains to show We know deg(q) ≤ 3 for vertices q with d(q, e i ) = i, since either q = e 0 or q is a vertex in the added path.We deduce u e i v i = 0 by Lemma 4.3.Now assume n c(v 0 ) = 0.If m c(e 0 ) = 0, then we know deg(e 0 ) = 2 and thus u e 0 v 0 = 0 by Lemma 4.1.If m c(e 0 ) > 0, then e 0 has a neighbor of degree 1 or 2. If v 0 has no neighbor of this degree, then u e 0 v 0 = 0 by Lemma 4.3.The vertex v 0 only has a neighbor of degree 2 if there exists a subdivision f 0 of some edge f = (v 0 , w 0 ) with m c(f ) = 0.Then, it holds If m c(e 0 ) = 1, then deg(e 1 ) = 1 = 2 = deg(f 0 ) which yields u e 1 f 0 = 0 by Lemma 4.1.If m c(e 0 ) ≥ 2, then e 1 has a neighbor e 2 with deg(e 2 ) ∈ {1, 2}.Since the neighbors v 0 , w 0 of f 0 have degree ≥ 3, we get u e 1 f 0 = 0 by Lemma 4.3.We deduce u e 0 v 0 = 0 in all those cases.
Step 2: It holds u e i f 0 = u e 0 f i = 0 for i = 0. We know that it holds deg(e i ) ∈ {1, 2} for vertices e i with i > 0. If m c(f 0 ) > 0, then we have deg(f 0 ) = 3 and get u e i f 0 = 0 by Lemma 4.1.Let m c(f 0 ) = 0.If m c(e 0 ) = 1, then deg(e 1 ) = 1 and thus u e 1 f 0 = 0 by Lemma 4.1.If m c(e 0 ) ≥ 2, then e i has a neighbor with degree 1 or 2. Since the neighbors of f 0 have degree ≥ 3 by assumption, we get u e i f 0 = 0 by Lemma 4.3.We similarly obtain u e 0 f i = 0.
Step 3: It holds u e i f j = 0 for i = j, i, j > 0. First assume i < j.Then e 0 is a vertex with d(e i , e 0 ) = i and deg(e 0 ) = 3.Since i < j, we know that the vertices q with d(f j , q) = i are in the path and thus deg(q) ∈ {1, 2}.We deduce u e i f j = 0 by Lemma 4.3.The case i > j follows similarly.
Step 4: It holds u e i f i = u e 0 f 0 for all i.We first show u e 0 f 0 = u e 1 f 1 .It holds since u e 1 p 0 = 0 by Step 1 and u e 0 f 2 = 0 by Step 2. Furthermore, for i ≥ 1, we have = u e i+1 f i+1 since we have u e i+1 f i−1 = 0 and u e i f i+2 = 0 by Step 2 or Step 3. Note that if there is no f i+2 , we still get u e i f i = u e i+1 f i+1 by a similar calculation, since then f i is the only neighbor of f i+1 .Step 6: It holds u e i f i = u vx u wy + u vy u wx for e = (v, w), f = (x, y).We have where we used u vf 1 = 0, u wf 1 = 0 by Step 1.Note that e 0 is the only vertex in E ′ 0 that is a common neighbor of v and w.Thus, we have (u vx + u vy )u kf 0 (u wx + u wy ) = 0 for all k = e 0 by Lemma 4.
By
Step 4, we obtain u e i f i = u vx u wy + u vy u wx .Remark 4.11.Note that the operator u (v,w)(x,y) = u vx u wy + u vy u wx does not depend on the order of (v, w) or (x, y), since u (v,w)(x,y) = u vx u wy + u vy u wx = u vy u wx + u vx u wy = u (v,w),(y,x) , u (v,w)(y,x) = u vy u wx + u vx u wy = u wx u vy + u wy u vx = u (w,v),(x,y) .
Before stating the theorem, we first need to define a quantum subgroup of the quantum automorphism group of the graph G ′ .It is straightforward to check that the comultiplication is a *-homomorphism.Definition 4.12.Let G be a vertex -and edge-colored graph and G ′ as in Definiton 4.5.We define Qut * c 0 (G ′ ) to be the compact matrix quantum group whose corresponding C * -algebra is generated by a magic unitary x with xA Gc = A Gc x for every edge color c and x ik x jl = x jl x ik for c((i, j)) = c((k, l)), c = c 0 , where c 0 is the edge-color we choose for the newly added edges in G ′ .Theorem 4.13.Let G be a vertex -and edge-colored graph such that deg(v) ≥ 3 for all v ∈ V (G).Let G ′′ be the graph as in Definition 4.9 and let Qut * c 0 (G ′ ) be the compact matrix quantum group as in Definition 4.12.Then there exists a * -isomorphism ϕ : Proof.Let u be the fundamental representation of Qut(G ′′ ).As in Lemma 4.10, we denote the vertex that subdivided the edge e in G by e 0 and the vertices in the added path with d(e 0 , e i ) = i by e i , 1 ≤ i ≤ m c(e) .Moreover, we denote by v i , 0 ≤ i ≤ n c(v) the vertices in the added path with d(v, v i ) = i.The adjacency matrix of G ′′ is of the form where A G 0 is the adjacency matrix of the edge-color that is not subdivided together with the paths from the construction of G ′ , B G ′′ is the matrix with (B G ′′ ) ve = 1 for v incident to e, 0 otherwise, , where w and u 0 are blocks in u (see (15)).Furthermore, since (u i ) e i f i = w ak w bl + w al w bk for e = (a, b), f = (k, l), we also have that C(Qut(G ′′ )) is generated by the entries of the matrix w.We will now show that w fulfills the relations of the generators of C(Qut * c 0 (G ′ )).We already know wA G ′ 0 = A G ′ 0 w.We will now show wA Gc = A Gc w for every edge color c = c 0 .By Lemma 2.8, this is equivalent to w ik w jl = 0 for c((i, j)) = c((k, l)) if (i, j) ∈ E ′ 0 or (k, l) ∈ E ′ 0 .We will show that the elements of w fulfill those relations.Let v ∈ V (G ′ ), (a, b) = e ∈ E ′ 0 .It holds Multiplying w va from left and right yields We deduce s;(v,s) / ∈E ′ 0 w va w sb w va = 0 and since all elements in the sum are positive, we get w va w sb w va = 0 for all v, s ∈ V (G ′ ) with (v, s) / ∈ E ′ 0 .We deduce w va w sb = 0 for all v, s ∈ V with (v, s) / ∈ E ′ 0 .Similarly, by using u 0 B t G ′′ = B t G ′′ w, we get w av w bs = 0 for all v, s ∈ V (G ′ ) with (v, s) / ∈ E ′ 0 .It remains to show w ik w jl = 0 for c(e) = c(f ), e = (i, j) ∈ E ′ 0 , f = (k, l) ∈ E ′ 0 .We know (u 0 ) ef = 0 for c(e) = c(f ) from Lemma 4.10.Then w ik w jl + w il w jk = (u 0 ) ef = 0 and by multiplying w ik and w il from the left, respectively, we get w ik w jl = 0 and w il w jk = 0. Thus, we have shown w ik w jl = 0 for c((i, j)) = c((k, l)) if (i, j) ∈ E ′ 0 or (k, l) ∈ E ′ 0 which is equivalent to wA Gc = A Gc w for every edge color c = c 0 .
Summarizing, we get wA Gc = A Gc w for every edge color c.By Lemma 4.10, we furthermore know that (u 0 ) ab = w ik w jl + w il w jk for a = (i, j), b = (k, l) and c(a) = c(b), c(a) = c 0 and thus w ik w jl + w il w jk is a projection.Then Lemma 4.8 yields w ij w kl = w kl w ij and w il w kj = w kj w il .Therefore, we get a surjective * -homomorphism ϕ : , where w ′ is the fundamental representation of Qut * c 0 (G ′ ).For the other direction, take the fundamental representation w ′ of Qut * c 0 (G ′ ) and build the matrix w ′′ as follows where we used w ′ ak w ′ bl = 0 for c((a, b)) = c((k, l)) and the fact that w ′ is a magic unitary.We get f ;nc(f )≥i (u ′ i ) f i e i = 1 similarly and therefore w ′′ is a magic unitary.It remains to show that w ′′ commutes with A G ′′ .Similar to equation (17), we see that and therefore 16) and the form of w ′′ , we see that which is true because of the following: We get the following corollary, which we will use in the next section to construct a graph with quantum symmetry and finite quantum automorphism group.
(Uncolored) graphs whose quantum automorphism group is the dual of a solution group
In this section, we will look at certain graphs from Definition 3.2, where we replace one of the edge colors by non-edges. Furthermore, we restrict to graphs coming from linear constraint systems as in the following definition. By using Theorem 3.8 and the decoloring procedure in Section 4, we will obtain (decolored) graphs whose quantum automorphism group is the dual of the corresponding solution group.
By the isomorphism above, we get that C(Qut(G ′′ * (M K 3,4 , 0))) is finite-dimensional and non-commutative which yields the assertion.
Quantum Isomorphisms
In this section we consider quantum isomorphisms between the colored graphs G(M, b) and G(M, b′) for b ≠ b′. In particular, we give analogs of Lemma 3.6 and Theorem 3.8 for this case. From these we obtain non-isomorphic colored graphs G(M, b) and G(M, b′) that are quantum isomorphic, where neither G(M, b) nor G(M, b′) has quantum symmetry. Further, applying the decoloring techniques of Section 4 allows us to produce non-isomorphic uncolored graphs G and G′ without quantum symmetry that are nevertheless quantum isomorphic. This appears to be the first known such example.
To define quantum isomorphism of colored graphs, we must provide a suitable generalization of the isomorphism game.The way to do this is quite natural.Definition 6.1 (Isomorphism game for colored graphs).Given colored graphs G and G ′ , with respective color functions c and c ′ , the (G, G ′ )-isomorphism game has as inputs and outputs for both Alice and Bob the set V (G) ∪ V (G ′ )2 .To win, upon receiving x ∈ V (G) Alice (respectively Bob) must respond with y ∈ V (G ′ ) and vice versa.If this condition is met, then there is a vertex g a ∈ V (G) that is either Alice's input or output, and there is similarly g b ∈ V (G) and g ′ a , g ′ b ∈ V (G ′ ).Alice and Bob then win if the following conditions are met: 3. (g a , g b ) is an edge of color c if and only if (g ′ a , g ′ b ) is an edge of color c.
We then say that two colored graphs G and G ′ are quantum isomorphic if there is a quantum strategy3 that wins the (G, G ′ )-isomorphism game with probability 1.In [10] it was shown that (uncolored) graphs G and G ′ are quantum isomorphic if and only if there exists a magic unitary u such that A G u = uA G ′ .Precisely the same proof applied to colored graphs gives the following: Proposition 6.2.Colored graphs G and G ′ with coloring functions c and c ′ are quantum isomorphic if and only if there exists a V (G) × V (G ′ ) magic unitary u such that c(g) = c ′ (g ′ ) implies u gg ′ = 0 for g ∈ V (G), g ′ ∈ V (G ′ ), and A Gc u = uA G ′ c for all edge colors c. i .
So we have shown that the value of y (k) i does not depend on k, and thus we will simply denote this operator by y i .Now note that since y i is a linear combination (with real coefficients) of the operators u Thus the y i satisfy relation (1) from Definition 6.5.Next we will show that relation (2) of Definition 6.5 holds, i.e., that y i y j = y j y i if there exists k ∈ [m] such that i, j ∈ S k .Suppose that i, j, k are as described.Then y i = y (k) i and y j = y (k) j are both linear combinations of the entries of u (k) which pairwise commute by Lemma 6.6.Therefore y i and y j commute as desired.
Lastly, we must show that relation α are pairwise orthogonal, when we expand the above product all cross terms disappear and we obtain by definition.Therefore the elements y i ∈ Iso(G, G ′ ) for i ∈ [n] satisfy the relations of the generators of A and thus there exists a *homomorphism ϕ 1 from A to Iso(G, G ′ ) such that ϕ 1 (x i ) = y i for all i ∈ [n].
Step 2: Construction of a * -homomorphism ϕ 2 : Iso(G, G ′ ) → A. This step is almost identical to Step 2 of the proof of Theorem 3.8, and so we omit it.We only remark that the biggest change is that when showing that v 8) gets an additional factor of (−1) b k +b ′ k in the first and last expressions.
Step 3: Showing that ϕ 1 and ϕ 2 are inverses of each other.This step is nearly identical to Step 3 of the proof of Theorem 3.8 and so we omit it.
Definition 3.1. Let M ∈ F_2^{m×n} and b ∈ F_2^m. The solution group Γ(M, b) of the linear system Mx = b is the group generated by elements x_i for i ∈ [n] and an element γ satisfying relations (1)-(5).
Figure 1: The graph G(M, b) for M and b as above. The vertices on the left-hand side are the solutions of x_1 x_2 x_3 = 1, the vertices on the right-hand side are the solutions of x_1 x_4 x_5 = −1. We used Remarks 3.3 and 3.4 to reduce the number of colors needed in the graph.
δ = 1 .
Next we must show that for a, b ∈ V (G), we have v a,b = 0 whenever c(a) = c(b), i.e., the colors of a and b are different.Recall from the definition of G(M, b) that the color of the vertex (l, α) is l.Thus v satisfies this relation by definition.Lastly, we must show that for a, b, a ′ , b ′ ∈ V (G), we have v a,b v a ′ ,b ′ = 0 whenever the edges {a, a ′ } and {b, b ′ } have different colors (or when one is an edge and the other is not).The previously shown relation already implies this one unless c(a) = c(b) and c(a ′ ) = c(b ′ ).So we may assume that a = (l, α), b = (l, β), a ′ = (k, α ′ ), and b ′ = (k, β ′ ).Thus, letting δ = α△β and δ δ ′ .Suppose the edges {a, a ′ } and {b, b ′ } do have different colors.By definition of G(M, b) this is equivalent to α△α ′ = β△β ′ .
αũ(k) α = 1 .
can be used to show that, for fixed k ∈ [m] the ũ(k) α are pairwise orthogonal projections satisfying α∈±1 S k 0 For i ∈ S k and β ∈ ±1 S k 0 we have that u (k)
2 . 3 ]Lemma 4 . 1 .
. The degree deg(v) of a vertex v denotes the number of neighbors of v in the graph G. Let G be a finite graph, A G be its adjacency matrix and u vw , 1 ≤ v, w ≤ n be the generators of C(Qut(G)).If (A l G ) vv = (A l G ) ww for some l ∈ N, then u vw = 0. Particularly, if deg(v) = deg(w), then u vw = 0.
1 . 4 . 4 .
since we have deg(p) = deg(q) and thus u pq = 0 by Lemma 4.Remark Lemma 4.1, Lemma 4.2 and Lemma 4.3 also work for vertex-and edge-colored graphs (take the decolored adjacency matrix in Lemma 4.1), since colors just add more relations on the generators of C(Qut(G)).
Lemma 4 . 6 .
Let G be a vertex -and edge-colored graph with deg(v) ≥ 3 for all v ∈ V (G) and let G ′ be the graph as in Definition 4.5.Denote by v i , 0 ≤ i ≤ n c(v) , the vertices in the added path with
0 for all i by Step 3 .Theorem 4 . 7 .
The case n c(v) > n c(w) follows similarly.Let G be a vertex -and edge-colored graph with deg(v) ≥ 3 for all v ∈ V (G) and let G ′ be as in Definition 4.5.Then there exists a * -isomorphism ϕ : C
Step 5 :
It holds u e i f i = 0 for c(e) = c(f ).Since c(e) = c(f ), we know m c(e) = m c(f ) .First assume m := m c(e) < m c(f ) .Then we have u emfm = 0 by Lemma 4.1, since deg(e m ) = 1 = 2 = deg(f m ).We deduce u e i f i = u e 0 f 0 = u emfm = 0 for all i by Step 3. The case m c(e) > m c(f ) is similar.
Corollary 4 . 14 .
Let G be a vertex -and edge-colored graph such that deg(v) ≥ 3 for all v ∈ V (G).Denote by G ′ and G ′′ the graphs constructed in Definitions 4.5 and 4.9.If Qut
1 .
c(g a ) = c ′ (g ′ a ) and c(g b ) = c ′ (g ′ b ); 2. g a = g b if and only if g ′ a = g ′ b ; Based on the above, we define the following 4 : for any x ∈ {+1, −1}.Therefore, k ∈ [m] such that i ∈ S k , and these operators are entries of the magic unitary u, we have that y * i = y i .Also, by (23) we have that u for α = β and thus
( 3 )
of Definition 6.5 holds, i.e., thati∈S k y i = (−1) b k +b ′ k for all k ∈ [m].We have that i∈S k ′ 0 (for the same color by assumption, for different colors because the product is 0).Note that w ′′ is a magic unitary if and only if the matrices u ′ i are magic unitaries.Let e = (a, b) ∈ E ′ 0 .We compute mwhere we put(u ′ i ) e i f i = w ′ ak w ′ bl +w ′ al w ′ bk for e = (a, b), f = (k, l).Those elements are projections by Lemma 4.8, because we have w ′ ak w ′ bl = w ′ bl w ′ ak for (a, b), (k, l) ∈ E f ;nc(f )≥i | 12,594.4 | 2021-11-24T00:00:00.000 | [
"Mathematics"
] |
An Analysis of Defence Expenditure and Economic Growth in South Africa Teboho
This research paper investigates the relationship between defence expenditure and economic growth in South Africa. The South African Defence Force plays a vital role in peacekeeping and security in Africa and in the SADC region, which makes the country a particularly interesting case study on the nexus between defence expenditure and economic growth. This investigation presents such a study by estimating an econometric model of South African military expenditure that considers purely economic factors. The study covers the period from 1988 to 2012. To determine the long-run equilibrium, the Johansen cointegration and Engle-Granger techniques were applied, and Granger causality tests were then performed on the variables of interest. The study concludes that there is a long-run relationship between defence expenditure and economic growth. In the causal analysis, military expenditure appears to Granger-cause gross domestic product per capita at the 5 percent significance level.
Introduction
The protection of citizens has long been a priority of governments, and the literature shows that it has become a core political mandate. Defence spending describes the situation in which a country ensures internal and external security for its citizens. From an economic point of view, this means that defence spending competes with all the other public goods the population may need. According to Shahbaz, Afza and Shabbir (2013), there are two paths through which military expenditure may affect economic growth. First, a rise in military expenditure may increase total demand by stimulating output and, ultimately, economic growth. Second, an increase in defence expenditure may also lead to improvements in infrastructure. World military expenditure peaked at US$1 trillion in 2004, with the United States accounting for 47 percent of the world total (World Council of Churches, 2005). Before portraying the picture of military expenditure in South Africa, this study takes a closer look at military expenditure from a continental perspective. According to Smaldone (n.d.), African military expenditure has been a small fraction of global military outlays: during 1989 African countries accounted for 1.5 percent of world military spending, compared with 1.8 percent a decade earlier (Smaldone, n.d.). Perlo-Freeman, Sköns, Solmirano and Wilandh (2013) indicate that military expenditure in Sub-Saharan Africa had increased strongly in previous years but fell by 3.2 percent for the first time since 2003.
South Africa is the most advanced economy on the African continent. Stalenheim, Fruchart, Omitoogun and Perdomo (2006) note that military expenditure in Africa rose by less than 1 percent in 2005, with South Africa among the countries that accounted for 62 percent of Africa's military spending. During the apartheid era, South Africa built up one of the largest arms industries of any newly industrialising economy. Batchelor, Dunne and Lamb (2002) state that during the 1970s and 1980s there was a sustained upward trend in military spending as a result of South Africa's military involvement in Namibia/Angola. However, South Africa's military expenditure was reduced substantially from the late 1980s, attributed to the ending of the apartheid regime and the end of the Cold War. After nearly 10 years of defence budget cuts, by the end of the 1990s South Africa's military spending as a percentage of GDP was at the same level as it had been in the early 1960s.
This study briefly reviews the literature on military spending in South Africa and in other international studies. It then investigates the causal linkages between gross domestic product (GDP) and military spending. More importantly, the purpose of this study is to examine the following questions. First, is there a long-run equilibrium between defence expenditure and economic growth? Second, is there causality running between defence expenditure and economic growth in South Africa, or vice versa? It should be noted that, in the 2010 budget speech, the then Minister of Defence described South Africa's military budget as a "shoestring" budget, insufficient for one of Africa's biggest contributors to peacekeeping forces, and argued that the defence budget should be at least 2 percent of GDP. This statement drew attention to the need to explore the econometric relationship between defence expenditure and economic growth.
The study is organized as follows: Section 2 presents the literature survey, covering international studies and then South African studies. Section 3 describes the data and the variables used in this study. Sections 4 and 5 present the econometric methods and the model specification, respectively. Sections 6 and 7 give the motivation for the variables used and the empirical results. Lastly, Section 8 contains the summary and conclusion.
Literature Survey
Military spending has been a major item of national expenditure in many countries, but it has received relatively little attention from researchers. Tambudzai (n.d.) examines the rationality of the defence budgetary process in Southern Africa by identifying the main factors that influence military expenditure. The study uses panel data analysis to examine the behaviour of a particular group of countries over a given time period, drawing on time series data from 1996 to 2005 for each of 12 countries covering most of Southern Africa. The results validate the significance of both economic and strategic variables in determining military expenditure in developing countries. It was found that the change in military burden is not explained by previous growth rates of military expenditure. The study also indicated the importance of GDP per capita and wars as determinants of military expenditure in Southern Africa. Dunne and Perlo-Freeman (2001) evaluate the driving forces behind military spending in developing countries by comparing a period during the Cold War with the period afterwards. Their study is concerned with developing economies and, in particular, with the impact of changes in security webs on military spending. The authors estimated separate models for the Cold War period and the post-Cold War period (1990-97). For the Cold War period, the coefficient on the income term is insignificant, suggesting that military spending rises more or less in proportion to income.
Military spending in Egypt has seen marked changes through different phases. Abu-Qarn, Dunne, Abdelfattah and Zaher (2010) undertake a time series analysis of the evolution of military spending in Egypt over the period 1960-2007. The regression results suggest that the military burden in Egypt is mainly determined by an autoregressive (AR) process, together with some important economic and strategic factors; the findings indicate that an increase in GDP decreases Egypt's military burden. Several studies, such as Khalid and Mustapha (2014) and Rashid and Arif (2012), have examined the determinants of military expenditure in developing countries. Tambudzai (2006) tested the effects of economic, external and geopolitical factors on Zimbabwe's military expenditure, motivated by the fact that the economic crisis at the time made Zimbabwe's defence expenditure a matter of concern. The study employs ordinary least squares (OLS) on time series data from 1980 to 2003. The long-run finding is that military expenditure and income show a negative relationship at the 5 percent significance level. The effect of military expenditure on economic growth in Pakistan, and its causality, were investigated by Shahbaz, Afza and Shabbir (2013). Their findings confirm a long-run equilibrium between military expenditure and economic growth; specifically, the study showed a negative impact of military expenditure on economic growth in Pakistan. Anwar, Rafique and Joiya (2012) examined defence spending and its linkages with economic growth in Pakistan, applying the Johansen cointegration method and causality analysis; they conclude that military expenditure and economic growth are cointegrated and that causality runs from economic growth to defence spending. Following the trend in the literature that military expenditure is causally prior to economic growth, Dunne, Nikolaidou and Vougas (1998) empirically investigated this hypothesis for Greece and Turkey. Their study indicates a positive impact of military expenditure on economic growth in Greece; conversely, for Turkey they find a significant negative causal link from military expenditure to economic growth.
With respect to studies on South Africa, Dunne and Vougas (1999) found evidence of a significant negative effect of the military burden on economic growth over the period 1964 to 1996. Their study applied Granger causality within a VAR system, which suggests no significant relationship among the variables; the authors further suggested that the military expenditure of the apartheid system had a detrimental effect on South Africa's growth. Another South African study, by Aye, Balcilar, Dunne, Gupta and van Eyden (2013), found no cointegration between GDP and military expenditure at the 5 percent significance level, using only the Johansen cointegration technique. The current study intends to contribute to the literature in the following ways. First, it extends the period of investigation relative to previous studies such as Dunne and Vougas (1999). Second, it applies two types of cointegration test so that the cointegration results are more robust. Lastly, it contributes to the existing literature by using only economic variables to explain military expenditure, unlike previous studies. This study argues that military expenditure is an economic issue and should therefore be explained by economic variables.
Data Description
To analyse the impact of economic growth on defence expenditure, the current study uses data spanning 1988 to 2012. The data were extracted from the International Monetary Fund (IMF) and Stockholm International Peace Research Institute (SIPRI) websites. All the variables used in the study were transformed into logarithms. The variables are: government expenditure on military spending, government spending on education, government spending on health, population growth and gross domestic product per capita.
Econometric Methods
This research paper applies modern econometric methods, namely unit root testing, cointegration and causality tests, to investigate the phenomenon at hand. In the first step, the study applies the Augmented Dickey-Fuller (ADF) unit root test to establish the order of integration of the variables under scrutiny. After determining the order of integration, the next step is to determine whether the variables are cointegrated, using the Johansen and Engle-Granger cointegration tests. Lastly, having established a long-run relationship among the variables, the study performs the causality tests.
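As a rough illustration of this testing sequence, the sketch below runs the ADF unit root test, the Johansen cointegration test (trace and maximum-eigenvalue statistics) and the Engle-Granger residual-based test using statsmodels in Python. The file name and column labels (MILEX, GDPC, EDUC, HEAT, POP) are hypothetical placeholders, not the study's actual dataset.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller, coint
    from statsmodels.tsa.vector_ar.vecm import coint_johansen

    # Hypothetical annual series, 1988-2012 (file and column names are illustrative only).
    df = pd.read_csv("south_africa_1988_2012.csv", index_col="year")
    data = np.log(df[["MILEX", "GDPC", "EDUC", "HEAT", "POP"]])

    # Step 1: ADF unit root test on levels and first differences.
    for col in data.columns:
        for label, series in [("level", data[col]), ("1st diff", data[col].diff().dropna())]:
            stat, pval, *_ = adfuller(series, autolag="AIC")
            print(f"{col:5s} {label:8s} ADF={stat:6.2f} p={pval:.3f}")

    # Step 2a: Johansen cointegration test (trace and max-eigenvalue), lag order 3.
    res = coint_johansen(data, det_order=0, k_ar_diff=3)
    print("trace stats:", res.lr1, "5% critical values:", res.cvt[:, 1])
    print("max-eigen stats:", res.lr2, "5% critical values:", res.cvm[:, 1])

    # Step 2b: Engle-Granger residual-based test of MILEX on the remaining regressors.
    stat, pval, _ = coint(data["MILEX"], data[["GDPC", "EDUC", "HEAT", "POP"]])
    print(f"Engle-Granger: stat={stat:.3f}, p-value={pval:.3f}")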
Model Description
This paper adopts a modified version of the model of Tambudzai (2006), who estimated the determinants of military expenditure in Zimbabwe. The econometric model is

MILEX_t = f(POP_t, EDUC_t, GDPC_t, HEAT_t)    (1.1)

which, for the purpose of estimation, can be expressed in logarithmic form as

ln MILEX_t = β0 + β1 ln POP_t + β2 ln EDUC_t + β3 ln GDPC_t + β4 ln HEAT_t + ε_t    (1.2)

where MILEX is defence/military spending by the government, POP is population growth, EDUC is general government expenditure on education, GDPC is South African gross domestic product per capita, HEAT is general government expenditure on health and ε_t is the disturbance term capturing all other factors not included in the model.
Motivation for Some Variables Used
Gross domestic product per capita is defined as the total market value of all final goods and services produced annually within the boundaries of a country, using both domestic and foreign-supplied resources, divided by the population. Collier and Hoeffler (2002) indicated that resource availability, meaning what a country can afford, is viewed as the most important determinant of the level of military expenditure. Most studies have used the growth rate of GDP: as GDP rises, a country has more resources for production and both greater means and a greater need for protection. According to Nikolaidou (n.d.), the inclusion of non-military government expenditure, that is, government expenditure on education and health, represents the economic burden of defence and is expected to enter the equation with a negative sign to account for the opportunity cost of defence. Lastly, the population variable is included to capture possible size effects. Dunne and Perlo-Freeman (2001) state that a large population may be seen as providing some intrinsic security, reducing the need for military expenditure, or may reduce costs by allowing reliance on a large army rather than on hi-tech equipment. On the other hand, "public good" theory would suggest that a high population makes military spending more effective, as it benefits a larger number of people as a "pure public good".
Empirical Results
Before proceeding to the econometric analysis, Table 1 below presents the descriptive statistics for the study. For the South African economy, GDP per capita averages US$31,345.34 million, while on average South Africa spends US$71,329.28 million, with US$38,048.20 million on health. The average South African population is 43 million. The study also reports a correlation matrix, which is important because it describes the degree of relationship between the variables.
The correlation matrix shows that population, government expenditure on health and government expenditure on education are all negatively correlated with defence expenditure, although the coefficients are not particularly high except for population. Only the correlation between GDP per capita and defence expenditure is positive, with a low coefficient. After confirming the stationarity of the variables, the next step is cointegration. The Johansen cointegration method, introduced by Johansen (1991) following criticism of the earlier Engle-Granger two-step approach, is applied in this study. Before that, the study determines the lag length to be used in the cointegration analysis on the basis of the information criteria; the lag-order selection statistics (LR, FPE, AIC, SC, HQ) indicate an optimal lag length of 3. Tables 5 and 6 below show the results of the Johansen cointegration test. In both tables the first column gives the number of cointegrating equations, the second column gives the trace statistic (Table 5) or the max-eigenvalue statistic (Table 6), and the last column gives the critical values of the Johansen test at the 5% significance level. The trace and maximum eigenvalue tests indicate that there are four cointegrating vectors between the variables at the 5 percent significance level. The cointegrating vectors are between military expenditure, gross domestic product per capita, population, and government expenditure on health and education. The test concludes by comparing the trace statistic of 24.612 with its corresponding critical value: if the critical value is less than the trace statistic, a long-run equilibrium exists, and the same applies to the maximum eigenvalue test. These cointegration results are also confirmed by applying the Engle-Granger test below, with critical values from Davidson and MacKinnon (1993).
Table 7 above presents the results of the Engle-Granger cointegration test. The computed ADF test statistic for this study is -4.564, which is less than the critical value of -4.43 at the 10% level. This implies that the null hypothesis of no cointegration is rejected. These findings reveal a long-run equilibrium between MILEX, GDPC, HEAT, EDUC and POP.
Granger Causality Results
Since the study has confirmed the existence of cointegration among the variables, the final step is to determine the direction of causality. Pairwise Granger causality tests were used to examine the causal relationship between military expenditure and economic growth, the variables of interest. The causality results are presented in Table 8. The rejection rule is that the null hypothesis is rejected when the probability value (p-value) is less than the 5 percent significance level. In this case, the first hypothesis cannot be rejected; conversely, the second hypothesis is rejected in favour of the conclusion that military expenditure does Granger-cause gross domestic product per capita.
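A minimal sketch of the pairwise test using statsmodels is given below, reusing the hypothetical DataFrame from the earlier sketch; the choice of lag order and of levels versus differences is left to the analyst, and the snippet is illustrative only.

    from statsmodels.tsa.stattools import grangercausalitytests

    # Each tuple is (cause, effect): H0 is "cause does not Granger-cause effect".
    pairs = [("GDPC", "MILEX"), ("MILEX", "GDPC")]
    for cause, effect in pairs:
        print(f"\nH0: {cause} does not Granger-cause {effect}")
        # statsmodels tests whether the SECOND column Granger-causes the FIRST,
        # so pass the columns in the order [effect, cause]; log levels used here purely for illustration.
        grangercausalitytests(data[[effect, cause]], maxlag=3)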
Summary and Conclusion
The present investigation studied the relationship between defence expenditure and economic growth in South Africa, using annual time series data spanning 1988 to 2012. All the variables used in the study were subjected to unit root testing. To determine the long-term equilibrium, the Johansen and Engle-Granger cointegration tests were applied. Granger causality tests were then performed on the variables of interest.
In conclusion, the following research questions were considered. Is there a long-run equilibrium between defence expenditure and economic growth? The study concludes that there is a long-run relationship between defence expenditure and economic growth; hence, any policy affecting either variable may affect the other. These findings are consistent with Shahbaz, Afza and Shabbir (2013). The second research question was: is there causality running between defence expenditure and economic growth in South Africa, or vice versa? The study concludes that there is no causality running from gross domestic product per capita to military expenditure, implying that gross domestic product per capita does not appear to drive either higher or lower military expenditure. Conversely, and interestingly, military expenditure appears to Granger-cause gross domestic product per capita at the 5 percent significance level. These findings are similar to those of Dunne, Nikolaidou and Vougas (1998), specifically for the Turkish case. Based on the causality results, it is recommended that policy makers should not base military expenditure decisions on gross domestic product as a yardstick.
Table 1 :
Descriptive statistics of the variables. Table 2 below shows the results of the ADF unit root test, an essential step before cointegration. The first column lists the variables used, the second and third columns give the ADF model used, the fourth column gives the ADF statistics and the last column gives the critical values. Following the ADF results, all the variables under study are non-stationary in levels, but become stationary after first differencing.
"Economics",
"Political Science"
] |
Bleaching of crude marula oil using activated bentonite and activated marula shells: A comparative analysis
Refining of edible and cosmetic oil consists of various steps, including a bleaching process. Bleaching is an important step as it removes colour pigments, improving the taste and appearance of the oil. In this work, activated bentonite and activated marula shells were used to refine crude marula oil. Fresh bentonite was activated using H2SO4, whilst marula shells were activated using both H2SO4 and KOH. Vacuum bleaching of crude marula oil was performed using both the activated bentonite and the activated marula shells. Fresh, activated and spent bentonites and marula shells were analysed using X-ray Diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, X-ray Fluorescence (XRF) and Thermal gravimetric analysis (TGA) for phase identification, elemental composition of the crystalline material, functional groups present and thermal decomposition of the materials. β-carotene and chlorophyll in the bleached and crude marula oils were analysed using a UV-visible spectrophotometer. Acid values (AV) and free fatty acid (FFA) values in the oil were calculated using standard procedures. The effects of the acid activation parameters, namely acid concentration and activation time, were investigated. Acid and alkali activation improved the adsorption properties of fresh bentonite and fresh marula shells for removing colour pigments. An increase in acid concentration and activation time resulted in an increase in the bleaching capacity of the adsorbent. Optimum acid concentrations for activation were 2 N and 15 N for bentonite and marula shells, respectively, and optimum activation times were 7 and 1.5 h, respectively. The study indicated that activated bentonite and activated marula shells are effective and were successful in improving the appearance of crude marula oil to the required quality standards. Marula shells are a by-product waste of cosmetic oil processing, and their use assists in waste utilization in line with circular economy principles.
INTRODUCTION
It is very important to have a quality oil with an appropriate colour that attracts customers. Coloured compounds in some oils are not favoured by customers and should be bleached out. The colour of oil is mostly a result of α- and β-carotene, which can be removed through adsorption using activated bentonite; this is termed bleaching. Bleaching involves mixing oil with clay under specific conditions to eliminate colour components and other impurities. It is usually carried out under vacuum at contact temperatures of 90-100°C and contact times of 20-30 min; the bleaching earth used depends on the oil type and quality, with about 1% of bleaching material commonly used during refining of the oil Erten et al (2004). The process involves decomposition of colouring organic pigments, ion exchange, and physical and chemical adsorption on the bleaching earth, with the power of an acid-activated bentonite to remove impurities commonly measured by its adsorption capacities for chlorophyll and β-carotene. Bleaching increases the shelf life of the oil by removing gums, soaps, odour and metal components, and its success is linked to the quality of the crude oil to be refined and to the bentonite composition Didi et al (2009). It addresses aspects of oil refining that are important to oil quality, including removal of colour pigments, trace metals, phosphatides, odour and remaining soap; materials with good adsorption strength remove colouring pigments from crude oil Kaynak et al (2004). Vacuum bleaching is carried out to minimize oxidation and provide better stability Tai et al (2007).
During commercial bleaching, the oil is first neutralised before bleaching then it is mixed in a bleaching tank with the adsorbent to form a slurry.The slurry is transferred to the bleaching filter tank where steam is applied followed by filtration process to separate the oil from the bleaching agent Deniz (2020).Bleaching process is done before the final process of refining called deodorization Shin et al (2020).
Bentonite
Bentonite is an abundant clay-based mineral mainly composed of the smectite clay mineral montmorillonite Gates et al (2009). Its structure is made up of two tetrahedral silicon sheets surrounding an octahedrally coordinated aluminium sheet Eren et al (2009). Bentonite belongs to the smectite group, a 2:1 layer silicate with a slight negative charge attributed to ionic substitutions in the octahedral and tetrahedral sheets, arising from the replacement of trivalent aluminium ions with Mg 2+ and Fe 2+; these clays shrink upon drying and swell upon wetting. Bentonites exhibit high surface area and good adsorptive properties and can absorb water in the interlayer sites Christidis et al (1997).
In oil refining, Tonsil (acid activated bentonite) adsorbs impurities in crude oil during bleaching process such as fatty acids, gums, phosphatides, and trace metals.Several methods have been carefully reported to activate bentonites.These include photonic, electric and biological processes Zhou (2011), activation by acid Steudel et al (2009), thermal activation Toor et al (2012), interparticle polymerization by polymer addition Betega et al (2008), chemical grafting of organic compounds Liu (2007).Among the methods, thermal treatment and acid activation processes are widely used to improve the adsorptive properties of the clay owing to their simplicity and low-cost.In acid activation, inorganic acids such as H2SO4 and HNO3 are usually used to treat bentonite to improve its adsorptive capacity by removing some impurities.In addition, the adsorptive performance of the clay mainly depends on the activation procedures.
Activated bentonite has been useful in producing sulphur, conservation of water, for protection of the environment, adsorption in the paper industry as well as in the chemical industry.Applications of acid-activated bentonite powders include carbonless copy paper preparations, as catalysts, adsorbents, electrodes, bleaching earth, pillared clay, nanocomposites and organoclay.Moreover, activated bentonite can also be used to remove oil coloured pigments, metal oxides from edible and cosmetic oils.To get maximum bleaching performance, there is need to control preparation of acid-activated bentonite.
Marula Shells
The marula tree is related to the mango tree and bears a fruit containing a hard shell with 2-4 seeds; it grows mostly in Southern African countries Moyo et al (2015). The valuable oil extracted from the seeds can be used as cooking oil (Mamvura et al., 2018; Edokpayi et al., 2019) and is also used in cosmetics. The marula fruit is juicy and may be eaten fresh. Marula shells are disposed of as agricultural waste after use of the marula seed and fruit, resulting in environmental pollution. Marula products can be used for many applications Francis et al (2016), but marula shells are not considered valuable because few processes are available to convert them into useful products Molelekoa et al (2018).
Adsorption in Oils
Biosorption has gained increasing research interest because it is renewable and cheap. Marula shells have been investigated for reducing the quantities of Pb (II) and Cu (II) Moyo et al (2015), methylene blue in aqueous solution (Mathew et al., 2017; Edokpayi et al., 2019), adsorption of methyl orange and zinc Lotfy et al (2012), and removal of parasites Misihairabgwi et al (2014). Marula shells can be acid activated, resulting in an increase in acidic functional groups. They can also be alkali activated, resulting in expansion of the carbon lattice, which improves porosity and increases surface area owing to metallic potassium (K) intercalation, and enhances the uptake of organics Foo et al (2010). Previous studies have reported the use of activated groundnut hulls, snail shells and rice husks in bleaching crude palm oil Ojewumi et al (2021), activated carbon in refining sunflower oil Guliyev et al (2018), activated carbon in refining carp oil Monte et al (2015), activated carbon in removing glycidyl esters from palm oil Restiawaty et al (2021), and activated carbon from African teak wood and coconut shell in bleaching palm oil Onwumelu (2021). There is no report in the literature on the use of marula shells for bleaching of crude marula oil. This work is an extensive study with the objective of determining the effects of activation on marula shells and bentonite and of removing colour pigments from crude marula oil using activated marula shells and activated bentonite, with potential developments in the cosmetics and food industry. It is therefore important to understand the behaviour of the adsorbent during activation treatment, and since the production rate of marula shells is approximately one tonne per tree Rakereng et al (2019), there is a potential supply to meet the bleaching needs of an oil refining company, thereby promoting recycling of agricultural waste for a clean environment.
Materials
Fresh bentonite (FB) was bought locally in Botswana, fresh marula shells (FM) and Tonsil (T) were collected from one of the marula oil processing companies in Botswana.Tonsil was used as a reference adsorbent since it is used as a commercial bleaching earth in oil production industries.
Acid Activation of Bentonite
Fresh bentonite (ECCABOND-N Bentonite) was mixed with 2 N H2SO4 (prepared from 98% pure acid) at a temperature of 90°C in a round-bottomed flask for 7 h in an ESCO perchloric acid fume hood using an electric heater (Medline Scientific Limited MS-E). The whole process is shown in Fig. 1. After the activation process, 125 mm Whatman filter paper was used to filter the solid residue, followed by continuous washing of the residue with distilled water to remove excess acid; drying was then done at 105°C in a TTM-J4 oven for 4 h Önal et al (2007). The dried bentonite was pulverized to a particle size passing a 75 µm sieve. The activated bentonite, denoted AB, was stored in tightly closed polythene bags.
Activation of bentonite was repeated in 1, 3, 5, 7, 10, 15 and 20 N acid concentrations at a fixed temperature of 90°C for 7 h.To determine effect of activation time, the slurry was activated at various contact times (0.5, 1, 1.5, 2, 3, 8 h) in a 2 N acid concentration solution at 90°C.
Acid Activation of Marula Shells
Acid activated marula shells were prepared through two step chemical activation process.Marula shells were carbonized first at 500°C then mixed with 2 N 98% H2SO4 in a beaker and heated for 1.5 h at 90°C with continuous stirring.The mixture was cooled to room temperature.The solid residue was filtered followed by continuous washing with distilled water then drying was done at 105°C for 6 h and heated in the furnace for 30 min at a temperature of 450°C.
The same procedure was repeated at concentrations of 1, 3, 5, 7, 10, 15 and 20 N at 90°C for 1.5 h. To determine the effect of activation time, the same procedure was repeated for 0.5, 1, 2, 3, 7 and 8 h in 15 N acid concentration at 90°C. Acid activated marula shells (ACM) were stored in sealed tubes.
Alkali Activation of Marula Shells
Activated carbon from marula shells were prepared through one step chemical activation process.At first, fresh marula (FM) shells were washed with distilled water, then drying at 100℃ overnight in an oven.The dried marula shells were then pulverized using ball mill (size 1-2 mm).The crushed marula shells were mixed with KOH (ratio 1:1) mechanically using mortar and pestle and the mixture was activated using an electric resistance tube furnace Model OTF 1200 X at 700℃ for 1h (heating rate: 10℃ per min) under an atmosphere of nitrogen (flow rate: 20.0 std.cubic centimetres per minute).After cooling to room temperature, the resulting activated sample was soaked in 1M HCl to remove intercalated K on the carbon pores and inorganic particles before being washed with deionized water several times.Finally, the wet alkali activated marula (AM) was oven dried at 100℃ overnight.Dried activated carbon sample was cooled to room temperature before being stored in a separate sealed tube for further use.
Characterization of Bentonite and Marula Shells
X-ray diffraction (XRD) spectra of FB, AB, spent bentonite (after bleaching) (SB), FM, AM, spent marula (SM) and Tonsil (T) samples were generated using a Bruker D8 Advance powder diffractometer. The radiation used was Cu-Kα with a wavelength of 1.5418 Angstroms Dinh et al (2022). Data were collected over a 2θ range of 5 to 90 degrees at steps of 0.02 degrees with a counting time of 0.2500 s/step.
Fig. 1. Acid activation of bentonite process flow
The machine was run at 40 mA and 40 kV. pH was measured using an Orion Star A111 Thermo Scientific pH meter by dispersing 1.43 g of sample in 14.3 mL of deionized water. The chemical composition of FB, AB and SB was determined using an Olympus DELTA portable X-ray fluorescence (XRF) analyser. Thermal decomposition of the adsorption materials was analysed using a Mettler Toledo TGA/DSC3+ instrument. Functional groups of the adsorption materials were determined using a Vertex 70v vacuum FTIR spectrometer. Proximate analysis of all adsorbents was done using a LECO TGA 701. The filtration rate of the different adsorbents was also observed by passing slurry through a permeable material.
Bleaching of Marula Oil
Vacuum bleaching was conducted by connecting a 500 mL flat-bottomed flask to an MRC vacuum pump (model CVP-13). 140 g of crude marula oil was placed into the flask and heated to 90℃ under vacuum whilst stirring at 240 rpm using a Camlab MS-H280-Pro electric heater. The vacuum was then released intermittently, and 1.4 g of activated bentonite was added. The bleaching apparatus was re-evacuated and heated further to 100℃ for 20 min. After 20 min, the vacuum was broken and the clay-oil mixture was filtered using Whatman 125 mm filter paper under vacuum filtration to separate the oil from the clay Kaynak et al (2004). The same procedure was performed using FB, AM, ACM and T.
Characterization of Marula Oil after Bleaching
Chlorophyll pigment content was calculated for both crude and bleached marula oil using an AOCS Official Method Topkafa et al (2013). Chlorophyll content was determined at wavelengths of 710 and 630 nm on a UV-Vis spectrophotometer (Model Evolution 201) using equation 1, where C is the chlorophyll pigment content (ppm), L is the cell length (cm) and A is the absorbance.
Carotenoid content was measured spectrophotometrically in the same way as chlorophyll content, using specific absorbance measurements. Oil (1 g) was placed in a 25 mL volumetric flask, which was then filled to the mark with isooctane (40 g/L). The maximum absorbance was recorded (in the region 440-455 nm) and the carotenoid concentration was calculated using equation 2, where C is the β-carotene pigment content (ppb), A is the absorbance, W is the sample weight (g) and V is the volume of the solution (mL). The acid value was determined according to the AOAC 940.28 and ISO 660:2009 standard procedures.
Bleaching Capacity
Fresh bentonite, activated bentonite and activated marula shells were tested to determine their bleaching capacity for marula oil. Bleaching capacity (BC), or fractional degree of bleaching (FDB), was calculated using equation 3:

BC = (A0 − A) / A0 × 100%    (3)

where A0 is the crude oil absorbance and A is the bleached oil absorbance, both at the maximum absorbance wavelength (362 nm) of the crude oil.
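A minimal sketch of this calculation is given below; the absorbance values are placeholders rather than measurements from the study, and equation 3 is applied as the standard fractional-degree-of-bleaching formula.

    def bleaching_capacity(a_crude: float, a_bleached: float) -> float:
        """Fractional degree of bleaching, BC = (A0 - A) / A0, expressed as a percentage."""
        return 100.0 * (a_crude - a_bleached) / a_crude

    # Placeholder absorbances at 362 nm (illustrative only, not measured values).
    a0 = 1.25  # crude marula oil
    readings = {"FB": 1.05, "AB": 0.45, "AM": 0.60, "T": 0.75}
    for adsorbent, a in readings.items():
        print(f"{adsorbent}: BC = {bleaching_capacity(a0, a):.1f}%")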
Effect of Acid Concentration and Activation Time
Effect of acid concentration and activation time was investigated for acid activated bentonite and acid activated marula shells.
Adsorption Material Characterization
Adsorption material properties were examined using XRD, XRF, TGA and FTIR.
XRD
The crystallinity of fresh, activated and spent bentonite is shown in Fig. 2. Fresh, activated and spent bentonite show the same peaks, with the presence of cristobalite (C), montmorillonite (M), willemite (W) and quartz impurities (Q), indicating the presence of the smectite phase Maged et al (2020). The montmorillonite, cristobalite, quartz and willemite peaks are positioned at 6.498°, 22.181°, 26.507° and 31.753°, respectively. Quartz (SiO2) and cristobalite (SiO2) have the same chemical formula but differ in crystal structure, and the sample shows more cristobalite than the other silica phases. This observation agrees with previous studies showing that the physicochemical treatment undergone by the material does not affect the principal clay structure Meziti et al (2011). The quality of bentonite depends on the crystal chemistry as well as the size and shape of the montmorillonites Kaufhold et al (2002). Other crystals such as cristobalite play an important role in adsorption capacity Salem et al (2015) because of the loose bonds that exist on the surface of its minute grains compared with montmorillonite Fuller et al (1940). From this observation, the primary structure of the bentonite did not change as a result of the activation process. Only small changes are noted in the regions 5° < 2θ < 13° and 21° < 2θ < 24°, with peaks corresponding to diffraction angles of 6.498° and 22.181° and inter-lamellar distances of 13.5918 Å and 4.0043 Å, respectively. The montmorillonite and cristobalite peak intensities decreased and the peak widths increased slightly due to acid activation with H2SO4. This can be attributed to the replacement of cations (Ca 2+, Fe 2+, Al 3+) with H + ions. However, the differences between the XRD profiles are minimal. Acid activation did not affect the crystallinity of the clay mineral Önal et al (2007), but the surface properties were affected, with an improvement in adsorption capacity. This results from the replacement of cations in the bentonite structure by hydrogen ions and the removal of impurities from the clay material, followed by leaching of ions from the octahedral and tetrahedral positions, exposing the edges of the platelets Toor et al (2015). Activation results in H + ions replacing cations, thereby breaking up octahedral and tetrahedral sheets, which changes the properties of the clay, such as increasing its adsorption capacity Foletto et al (2011).
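As a quick cross-check, the quoted inter-lamellar distances follow from Bragg's law, d = λ/(2 sin θ), with the Cu-Kα wavelength of 1.5418 Å used for the scans; the short sketch below reproduces the two values and is purely illustrative.

    import math

    CU_KALPHA = 1.5418  # X-ray wavelength in Angstroms (Cu-Kalpha, as used for the XRD scans)

    def d_spacing(two_theta_deg: float, wavelength: float = CU_KALPHA) -> float:
        """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
        theta = math.radians(two_theta_deg / 2.0)
        return wavelength / (2.0 * math.sin(theta))

    # Montmorillonite and cristobalite reflections quoted in the text.
    for peak in (6.498, 22.181):
        print(f"2theta = {peak:6.3f} deg  ->  d = {d_spacing(peak):.4f} A")
    # Prints approximately 13.59 A and 4.00 A, matching the reported spacings.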
The diffraction patterns of FM, AMC and SMC showed the uncrystallised nature of the adsorbent, since no prominent peaks were detected, as shown in Fig. 2. The absence of sharp peaks is a typical characteristic of activated carbon. After the adsorption process, the diffraction pattern did not change, showing that adsorption using marula shells does not lead to bulk phase changes of the activated carbon prepared.
XRF
The chemical compositions of FB, AB and SB are shown in Fig. 3, with Al, Ca, Cr, Fe, K, Mn, P, Si and Zn present in all three samples, similar to previously reported compositions (Patel et al., 2007; Tabak et al., 2007; Changchaivong et al., 2009; Abdallah et al., 2011; Nweke et al., 2015; Abdullahi et al., 2017). The anions of the bentonite interlayer are not included.
Fig. 3. Elemental analysis for FB, AB and SB
There are noted changes in the chemical composition between fresh and activated bentonite. The amount of silicon increased from 61,700 ppm to 88,500 ppm, an increase of 43%, most likely due to acid activation Didi et al (2009), whereby protons penetrate the mineral layers, exchanging with hydrated interlayer cations, promoting the movement of cations in the tetrahedral or octahedral sites (Venaruzzo et al., 2002; Maged et al., 2020) and releasing cations into solution as the clay mineral layers dissolve; the silicate groups remain intact and the final product consists of hydrated, protonated, amorphous silica with a three-dimensional cross-linked structure. Sulphur was detected after acid activation, indicating that its source was the sulphuric acid. This showed that acid treatment of the bentonite caused changes in some features of the physical structure. The bleaching process did not change the structure significantly: Al, S, Ca and Fe remained relatively constant, while Si was reduced significantly (by 37%). Phosphorus was detected for the first time only after the bleaching process, meaning it was introduced from the marula oil.
pH and Filtration Rate
The addition of sulphuric acid during the activation process resulted in a lower pH for AB (Table 1), with hydrogen ions replacing cations within the lattice and at the edges during activation, making it acidic Salem et al (2015). The spent bentonite is also acidic for the same reason; the change in its pH value may be due to some H + ions being neutralised during the bleaching process. Pores start to develop within the bentonite structure when it is mixed with an acid solution, which increases its specific surface area compared to its original state. Al 3+, Fe 2+ and Mg 2+ move out of the octahedral layers, leaving vacancies in the cationic spaces Salem et al (2015); Soetaredjo et al (2021). Treatment of bentonite with acids may collapse the clay framework under severe activation conditions. The activated marula carbon is basic because of the KOH added during the activation process. Activation of biochar leads to an increase in surface area and porosity driven by metallic potassium intercalation, which expands the carbon lattice (Hui et al., 2015). Filtration is important in oil refining because it is essential for separating the adsorbent and the oil after bleaching. The filtration rate then determines the flow rate of oil being filtered using the different adsorbents.
Thermal Gravimetric Analysis (TGA)
Figs. 4 (a), (b) and (c) show the TGA curve of bentonite, marula shells and tonsil respectively.Thermogravimetric analysis of (a) was conducted to find out effects of activating fresh bentonite on degradation of fresh bentonite.From the analysis, two levels of degradation are noted for fresh and activated bentonite Galamboš et al (2021).For fresh and activated bentonite, there was mass loss at first degradation of 7% and 9%, respectively which was as a result of water loss from the interlayer of mineral Jlassi et al (2021).
Weight loss also appeared at second degradation with mass losses of 5% and 2% for FB and AB, respectively and this was attributed by hydroxylation which is desorption of -OH groups from the surface of bentonite material in water form Martín et al (2021).The second decomposition occurred at 597℃ for both FB and AB and from the graphs after decomposition, fresh bentonite sample remains with a higher residue compared to acid activated bentonite Amalanathan et al (2021).This is because hydroxyl groups and interlayer water become less due to activation process thereby easy decomposition of AB Mao et al (2020) resulting in less residue.Thermogravimetric analysis for spent bentonite indicated three levels of degradation.Generally, the change in decomposition of spent bentonite involves removal of adsorbed water molecules at temperatures below 200℃, hydroxylation and degradation of organic material at temperatures above 200℃ Naser et al (2021).There was a sharp weight drop for spent bentonite sample from 200℃ to 500℃.This is because of evaporation of marula oil which is a volatile matter and has a flash point of 250℃.This remarkably high flash point makes marula oil less flammable and enables safer handling as a cosmetic and edible oil.Total weight loss for spent bentonite and activated bentonite was approximately 49% and 14% respectively.This can be attributed to the fact that residual oil adsorbed during bleaching process was 35% of total weight of the spent bentonite which is in line with literature (Huang et al., 2010;Eliche-quesada et al., 2014;Liu et al., 2020).
The initial weight loss of marula shells in Fig. 4(b) was recorded at approximately 195℃ (1-4%) because of evaporation of water bound by the raw material, moisture in the adsorbent and water on the surface of the marula shell plant material, as reported by different authors (Edokpayi et al., 2015; Pathak et al., 2017; Joshua et al., 2019; Salman et al., 2019). There was a subsequent weight loss at 250℃ due to decomposition of hemicellulose and cellulose, and thermal degradation at 370℃ that can be related to lignin degradation; hemicellulose and cellulose polymers have lower stability than lignin (Yang et al., 2007; Postai et al., 2016). The total weight losses of activated marula carbon and spent marula carbon were 14% and 72%, respectively, a difference of 58%, which represents the residual oil left in the activated carbon after the bleaching process. For Tonsil, the total weight losses were 14% and 49% for fresh and spent Tonsil, respectively, corresponding to 35% residual oil in the spent Tonsil. The proximate analyses of the materials after thermogravimetric analysis are summarised in Table 2, which shows the percentage content of volatile solids, moisture, ash and fixed carbon for the adsorbents. These results correspond to the TGA curves in Fig. 4.
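The residual-oil figures quoted above follow directly from the differences in total TGA weight loss; a small sketch of that subtraction, using only the percentages reported in the text, is given below.

    # Total TGA weight losses (%) before and after bleaching, as reported in the text.
    weight_loss = {
        "bentonite":     {"activated": 14, "spent": 49},
        "marula carbon": {"activated": 14, "spent": 72},
        "tonsil":        {"fresh": 14, "spent": 49},
    }

    for adsorbent, losses in weight_loss.items():
        before = losses.get("activated", losses.get("fresh"))
        residual_oil = losses["spent"] - before
        print(f"{adsorbent}: residual oil ~ {residual_oil}% of spent adsorbent weight")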
FTIR Analysis
The FTIR results are shown in Fig. 5 and Table 3, with FM, AM and SM showing visible differences in their spectra. A reduction in intensity and the disappearance of a number of peaks after activation of the marula shells is noted, with some peaks emerging after the adsorption process that relate to residual oil attached to the biosorbent. The spectrum of SM showed very intense peaks compared to AM and FM, suggesting that adsorption has taken place. The spectra of the spent biosorbents (SM, SB, ST) showed new peaks that were absent from the other spectra. Since the new peaks that emerged were similar, the same functional groups must have been involved in the adsorption process.
From Fig. 5(a), the peak at 612/cm shifted to 619/cm after the activation and adsorption processes. A new peak emerged at 2169/cm after activation, which relates to weak -C≡C- stretch vibrations and can be assigned to alkynes. The band at 2169/cm moved to 2155/cm after adsorption. Peaks at 2853/cm and 2930/cm emerged after adsorption and are attributed to medium C-H stretch vibrations of alkanes. The presence of aliphatic compounds suggests that these compounds contain C-H bonds and are organic, and these IR bands are similar to those of other plant materials (Moyo et al., 2015; Edokpayi et al., 2015; Mathew et al., 2017).
The FTIR spectra of bentonite show the same number of vibration and OH stretching bands for FB and AB. Band positions deviated slightly due to the replacement of cations (Ca 2+, Fe 2+, Al 3+) with H + during activation. There was a decrease in intensity and a shift of the band at 1632/cm for FB to 1625/cm for AB, due to loss of water during the high-temperature activation process. The stretching vibration band of the Si-O group at 1012/cm shifted slightly to a higher frequency of 1019/cm after acid activation, attributed to dissolution during the activation process (Christidis et al., 1997; Eren et al., 2009). The individual band positions for the marula and bentonite samples and their assignments (alcohol O-H out-of-plane bending, quartz and aluminium-magnesium hydroxide O-H bending, amorphous silica Si-O in-plane stretching in the tetrahedral sheet, aliphatic amine C-N stretching and montmorillonite O-H stretching, among others) are summarised in Table 3. Like the XRD results, the FTIR bands did not show a significant change in AB compared to FB, meaning that acid activation did not destroy the parent bentonite. Fourier transform infrared spectroscopy was able to distinguish the chemical structure of the adsorbents before and after activation and adsorption, and helped in identifying the type and extent of adsorption occurring at the surface of the adsorbent.
Bleaching of Marula Oil
The following parameters were measured on the filtered oil samples: chlorophyll, β-carotene, acid value, FFA value and bleaching capacity. The crude oil before refining was also analysed to ascertain any differences. The results are summarised in Table 4.
Bleaching Capacity
From Table 4, activated bentonite has four times better adsorption capacity than unmodified (fresh) bentonite. Impurities and colour pigments in the oil were adsorbed onto the activated bentonite at above half capacity. Raw bentonite was less effective in the bleaching process, even with heating, because of its lower surface area per unit volume available for adsorption Shin (2020). Results from the literature correspond to this observation (Christidis et al., 1997; Makhoukhi et al., 2009; Foletto et al., 2011), where the bleaching capacities of activated bentonite were above 50%, depending on the type of oil bleached, activation time, acid concentration and the type of acid used.
Activated carbon and Tonsil had bleaching capacities of 52% and 40%, respectively, supporting the use of activated carbon in place of commercial bleaching earth. Activation using KOH produces activated carbon with good pore development and a greater surface area because of potassium intercalation into the carbon network during the activation process. This shows the importance of the activation step and the benefits it brings to the process.
Chlorophyll and β-Carotene Content
A decrease of more than 50% in chlorophyll and β-carotene content was noted after bleaching with activated bentonite and activated marula shells, removing most of the colour pigments (Table 4) compared to raw bentonite. More than half of the chlorophyll present was adsorbed by raw bentonite, while activated bentonite removed almost all the chlorophyll present in the crude oil. The presence of high levels of chlorophyll causes serious problems for the refiner. The amounts of β-carotene removed differed between the adsorbents: 63%, 52% and 12% of β-carotene was removed by AB, AM and FB, respectively, showing the effectiveness of activation. The results showed that some β-carotene remains after the bleaching process, suggesting that the final refining step, deodorization, should be responsible for removing the remaining β-carotene.
Acid Value and FFA Value
The acid value and FFA value were reduced upon addition of the adsorbent; free fatty acids in cooking oil generally range from 0% to 3% Ojewumi et al (2021). The shelf life of oil is linked to the amount of free fatty acids, with low values indicating that the shelf life of the oil will be prolonged. The lowest FFA value was obtained for AB, followed by T, then AM and lastly FB.
Colour Appearance of Marula Oil after Bleaching Process
Two types of natural pigments, carotenoids and chlorophylls, determine the colour of edible and cosmetic oils. AB and AM were found to be capable of improving the colour of the oil by removing these pigments during the bleaching process (Fig. 6). The process also removed odour compounds, making the oil smell better, although these were not tested in this study. Oil treated with AB also exhibited a more transparent appearance, showing that more impurities were adsorbed. AB and AM improved the colour of the marula oil by removing the cloudy appearance of the crude oil, leaving the oil with a clear appearance (Fig. 6).
Effect of Acid Concentration on Activation of Adsorbent
The bleaching capacities of the samples after activation at varying acid concentrations are shown in Fig. 7. Bleaching capacity increased with increasing acid concentration for both bentonite and marula shells. The optimum acid concentrations were 2 N and 15 N for bentonite and marula shells, respectively. Activated bentonite has a higher bleaching capacity than acid-activated marula shells.
Increasing the acid concentration improves bleaching capacity, but beyond 2 N for bentonite and 15 N for marula shells a significant reduction in bleaching capacity was observed due to collapse of the material structure. Further increases in acid concentration destroy the crystal structure because of leaching of Al 3+, Mg 2+ and Fe 3+ from the octahedral sites, hence the decrease in surface area; similarly, the carbon lattice collapses with further increases in acid concentration, reducing the bleaching capacity beyond 15 N for marula shells. The bleaching capacities of the adsorbents activated for varying times are shown in Fig. 8. Maximum bleaching capacity was obtained at activation times of 1.5 and 7 h for marula shells and bentonite, respectively; this difference in contact time is a result of differences in material properties. Prolonged contact time reduces bleaching capacity because of pore enlargement and collapse of the material.
CONCLUSIONS
FB, AB, FM and AM were characterized extensively to bring out the changes before and after activation of the adsorbents. The crystallinity of the bentonite material was not affected by acid activation, but its surface properties were.
Alkali activation also changed the surface properties, and this study has demonstrated that sulphuric acid and KOH produce activated bentonite and activated carbon that are effective in reducing the amounts of chlorophyll, carotenoids and free fatty acids in crude marula oil, with a large difference in colour intensity observable by the naked eye between crude and bleached marula oil. Activation with 2 N acid (bentonite), 15 N acid (marula shells) and KOH at a 1:1 ratio (marula shells) yields adsorbent materials that effectively remove colour pigments from marula oil, with bleaching capacities three times better than fresh bentonite. Bleaching capacity increases with increasing acid concentration and activation time. Bentonite and marula shells were able to refine marula oil effectively; there is therefore no need to dispose of marula shells as agricultural waste, since valuable, biodegradable products for oil refining can be produced from them. Marula shells can also be blended with bleaching earth available on the domestic market to improve adsorption. Future studies should, in addition to analysing the effects of the activating agent, temperature and bleaching time when using marula shells, also focus on finding sustainable technologies for treating the spent marula carbon and spent bentonite after the bleaching process.
Fig. 2 .
Fig. 2. XRD patterns for adsorption material (a) bentonite clay (b) marula shells
Fig. 7 .
Fig. 7. Effect of acid concentration on activation of bentonite and marula shells
Fig. 8 .
Fig. 8. Effect of activation time on activation of bentonite and marula shells
Table 1 .
pH values and filtration rate for bentonite and
Table 3 .
Compounds represented by marula shells, bentonite and tonsil (columns: FM, AM, SM, FB, AB, SB, FT, ST)
"Materials Science"
] |
Service Cooperation Incentive Mechanism in a Dual-Channel Supply Chain under Service Differentiation
Principal-agent models of the manufacturer and retailer are built under complete information and under asymmetric information, and the optimal profit-sharing ratio and the optimal fixed payment are analysed and compared in both situations. An incentive mechanism offered by the manufacturer to the retailer for service effort in a dual-channel supply chain is studied. The results imply that the manufacturer's profit under asymmetric information decreases dramatically compared with complete information, and that the retailer can gain profit by providing a lower service level, thus reducing the efficiency of the supply chain.
Introduction
With the development of the Internet, the growth of customers' enthusiasm for online shopping and the increase in online orders, channel reconstruction is a measure that more and more enterprises choose to adopt [1]. In this case, a dual-channel supply chain with both an online channel and an offline channel emerges, which may lead to serious channel conflicts. Previous research has indicated that buy-back strategies [2], price compensation strategies [3], price-discount strategies [4], and two-part tariff and promotion-level compensation strategies [5] can effectively alleviate channel conflict and are conducive to achieving supply chain coordination, but the important role of the service level has been neglected. In recent years, a number of e-commerce giants (e.g. Jingdong Mall, Suning Electronics and Taobao) have used high-quality service rather than intense price wars to gain competitive advantages, and the flooring, LED and other industries have continually increased capital investment to improve their service levels. All of this indicates that competition between enterprises is gradually focusing on service rather than the product. Enterprises that devote themselves to better logistics and delivery service, return or replacement service, maintenance and experience service may win customers' trust. As a result, competition and cooperation over service in a dual-channel supply chain have drawn attention in both academia and business.
The introduction of an online channel is beneficial for enhancing manufacturers' bargaining power and reducing the double marginalisation effect, which is the main theoretical support for the rapid development of electronic commerce [6]. Besides the convenience of online shopping, quality of service is another factor that might affect customers' purchasing behaviour [7], and opening a direct channel might force retailers to improve their service level [8]. Considering the significant influence of service on customer purchasing behaviour, existing scholars have mainly studied two dimensions: service competition and service cooperation. On service competition, Xu et al. study the Stackelberg and Nash game decisions when suppliers compete with retailers on service [9]. Similarly, assuming that the service costs of traditional retailers are private information, Mukhopadhyay studies the optimal decisions for service competition in a multi-channel supply chain under information sharing and non-sharing [10]. Chen et al. formulate a model in which the customer's channel selection based on service level affects demand [11]. Chen and Liu study the optimal decisions of supply chain members when competing with differentiated services, and find that service competition makes the dual-channel supply chain superior to the single channel [12]. Sun establishes a service competition model in which customers' channel preference is considered, and finds that there is service discrimination in the supply chain after a direct channel is added, which may reduce customers' overall utility and the performance of the supply chain system [13]. In addition, Luo et al. study the influence of value-added service in the online channel on service competition and supply chain profit [14]. Dan studies the retailer's optimal service and pricing strategy under non-cooperation in a dual-channel supply chain [15]. Various decision schemes under service competition are conducive to improving supply chain performance, but the loss of system efficiency remains large. Therefore, some scholars turn to the optimal decisions under service cooperation in a dual-channel supply chain. For example, Xiao studies the pricing strategy in a dual-channel supply chain under service cooperation [16]. Luo et al. build a supply chain coordination mechanism based on suppliers and retailers sharing the cost of services [17]. Kong et al. study the impact of different service costs on the pricing strategies of manufacturers and retailers under service cooperation [18].
As the above literature shows, most existing research on service in dual-channel supply chains takes the perspective of service competition. Research on service cooperation is gradually deepening, and most of it assumes information symmetry. In fact, the relationship between service cost and service level is not linear, and the retailer's motivation to further increase its service investment becomes very weak once the service level reaches a certain point. Consumers, however, demand a nearly perfect level of service, which pushes the manufacturer, facing fierce competition, to motivate the retailer to improve its service level. Hence, considering a cooperation mode in which the manufacturer entrusts all of the online channel's services to the retailer, we study the manufacturer's optimal incentive strategy for motivating the retailer to provide a high service level to customers of both the online channel and the traditional retail channel.
Model Description
We consider a dual-channel supply chain composed of one manufacturer and one retailer, with the manufacturer as the leader. The manufacturer sells its products to end customers directly at price $p_d$. The retailer buys the products from the manufacturer at wholesale price $w$ and then sells them to customers at retail price $p_r$, where $w < p_r$. Since the retailer has the locational advantage of facing customers directly, in order to improve online-channel efficiency and customer satisfaction and to effectively integrate supply chain resources, the manufacturer entrusts the electronic channel's services (e.g., return and replacement service, advertising, mail notification) to the retailer. That is, the retailer not only provides service $s_r$ to customers of the traditional retail channel but also provides service $s_d$ to customers of the online channel. The cost of providing service increases with the service level and is governed by the cost coefficient $\eta$: the smaller the value of $\eta$, the greater the effect of a unit of service. We assume the service level $s \in [0,1]$, where $0$ means the retailer provides no service and $1$ means the retailer provides perfect service. The manufacturer forms a principal-agent relationship with the retailer, in which the manufacturer is the principal and the retailer is the agent; the specific process is shown in Figure 1. Although it is difficult for the manufacturer to observe the retailer's service level, the sales volume $Q$ generated by the retailer's service can be observed exactly. The service output function satisfies $f''(s) < 0$, indicating that improving the service level increases the service output at a decreasing rate; for analytic simplicity we assume $f(s) = ks$, where $k$ is the service output coefficient. Since the retailer provides service for both the online channel and the traditional channel, the sales volumes of the manufacturer's channel and of the retailer's channel each consist of the deterministic service output plus a random term $\varepsilon$, an exogenous stochastic variable capturing changes in consumer preference or the market environment. Following [19] and [20], depending on the sales volumes the manufacturer pays a service reward $t(Q_d, Q_r)$ to the retailer in an effort to maximize its own profit. Weitzman [21] established the rationality of using a linear contract, and Holmström and Milgrom [22] proved that a linear contract can optimize the system, so we assume a linear incentive function in which $\alpha$ is the fixed payment that the manufacturer pays to the retailer and $\beta$ is the profit-sharing ratio offered by the manufacturer to motivate the retailer to improve its service level. As in [21], we assume the manufacturer is risk neutral and the retailer is risk averse, meaning the retailer adjusts its plan to reduce risk and protect its interests from damage. We also assume there is no cross-buying between customers.
Based on the above assumptions, the manufacturer's expected profit and the retailer's profit can be written down accordingly. Since the retailer is risk averse, following [21] we use absolute risk aversion to describe its degree of risk aversion and adopt the classical constant-absolute-risk-aversion utility function $u(\pi_r) = -e^{-\rho \pi_r}$. The parameter $\rho$ measures risk aversion: the retailer is risk seeking when $\rho < 0$, risk neutral when $\rho = 0$, and risk averse when $\rho > 0$. We assume the real returns are normally distributed with mean $m$ and variance $n$, i.e., $\pi_r \sim N(m, n)$. The retailer's expected utility then depends only on the certainty equivalent $m - \tfrac{\rho}{2} n$, so maximizing the expected utility is equivalent to maximizing the certainty-equivalent earnings [19]. The retailer decides whether to accept the contract by comparing this amount with its reservation payoff: if the certainty-equivalent gain $\pi_r$ is less than the retained earnings $v_0$, i.e., $\pi_r < v_0$, the retailer will not accept the contract.
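The certainty-equivalent logic above can be illustrated with a short numerical sketch. The code below is not the paper's model: the contract form, the quadratic service-cost term, and all parameter values are illustrative assumptions, used only to show how the CARA certainty equivalent $m - \tfrac{\rho}{2} n$ compares with a Monte-Carlo estimate of the retailer's expected utility.

```python
import numpy as np

# Illustrative sketch (not the paper's exact model): a risk-averse retailer
# receives a linear reward t = alpha + beta * Q_d plus a retail margin on Q_r,
# pays an assumed quadratic service cost, and faces normally distributed demand noise.

rng = np.random.default_rng(0)

# Assumed parameters (hypothetical values for illustration only)
alpha, beta = 2.0, 0.3          # fixed payment and profit-sharing ratio
p_r, w = 10.0, 6.0              # retail price and wholesale price
k, eta = 5.0, 4.0               # service output and service cost coefficients
rho = 0.5                       # retailer's absolute risk aversion
s_d, s_r = 0.6, 0.8             # service levels chosen for the two channels
sigma_eps = 1.5                 # std. dev. of the demand shock

# Deterministic parts of the channel sales volumes, Q = k*s + eps
q_d_mean, q_r_mean = k * s_d, k * s_r
service_cost = 0.5 * eta * (s_d**2 + s_r**2)   # assumed convex cost

def retailer_profit(eps_d, eps_r):
    """Retailer's realized profit for one draw of the demand shocks."""
    reward = alpha + beta * (q_d_mean + eps_d)          # linear incentive
    margin = (p_r - w) * (q_r_mean + eps_r)             # retail margin
    return reward + margin - service_cost

# Mean and variance of the (linear-in-noise) profit, hence the CARA
# certainty equivalent CE = m - (rho/2) * n.
m = retailer_profit(0.0, 0.0)
n = (beta**2 + (p_r - w)**2) * sigma_eps**2
ce_analytic = m - 0.5 * rho * n

# Monte-Carlo check: certainty equivalent recovered from expected utility.
eps = rng.normal(0.0, sigma_eps, size=(200_000, 2))
profits = retailer_profit(eps[:, 0], eps[:, 1])
ce_mc = -np.log(np.mean(np.exp(-rho * profits))) / rho

print(f"analytic CE = {ce_analytic:.3f}, Monte-Carlo CE = {ce_mc:.3f}")
```

The retailer accepts the contract only if this certainty equivalent is at least the retained earnings $v_0$, which is how the participation constraint enters the models below.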
Service Cooperation Incentives under Information Symmetry
In a dual-channel supply chain, information symmetry means that the manufacturer, who controls the online channel, knows the customer-related information held by the retailer, including consumer preferences, service costs, and service levels. To maximize its own profit, the manufacturer controls the fixed payment and the profit-sharing ratio so that the retailer provides a higher service level in both channels while still receiving at least its retained earnings. Conversely, lacking an information advantage, the retailer will certainly maintain service quality to improve customer satisfaction and avoid damaging its retained earnings. Service cooperation under information symmetry thus allows the supply chain members to achieve a win-win outcome. Under information symmetry, the retailer's service levels can be observed by the manufacturer. In this case the incentive compatibility constraint does not bind, and any service level $s$ can be achieved through a compulsory contract satisfying the participation constraint (IR). The manufacturer chooses appropriate service levels $s_d$ and $s_r$, fixed payment $\alpha$, and profit-sharing ratio $\beta$ to maximize its own profit, which gives the decision model. To maximize the manufacturer's profit, the participation constraint must bind with equality. Taking the first-order partial derivatives of (9) with respect to $s_d$, $s_r$, and $\beta$ and setting them to zero yields the optimal service levels for the online and retail channels, with $\beta^* = 0$. When the manufacturer can observe the retailer's service level, the service level in each channel is inversely proportional to the service cost, proportional to the product's sales price, and unrelated to the profit-sharing ratio. In this case the retailer obtains only the fixed-payment part of the incentive compensation and does not share in the online channel's profits.
Substituting $s_d^*$, $s_r^*$, and $\beta^*$ into (8) and (9), we obtain the optimal fixed payment $\alpha^*$ paid by the manufacturer and the manufacturer's optimal expected profit. Since the service levels are observable, if the retailer fails to provide them the manufacturer can force the retailer's profit below its retained earnings by reducing the fixed payment as a punishment.
Cooperation Incentive Services under Asymmetric Information
The dual-channel supply chain is a value chain composed of different stakeholders, in which information is often distributed asymmetrically. Since it is difficult for the manufacturer to observe the retailer's service level, the retailer has more information than the manufacturer. In order to obtain more profit, the retailer often conceals or misrepresents important information, such as consumers' service requirements and preferences and the service costs. The manufacturer cannot obtain accurate service information, so compulsory measures lose their efficacy. In this case, the manufacturer designs a corresponding incentive scheme to motivate the retailer to improve its service level, and both the participation constraint and the incentive compatibility constraint play a role, giving a reformulated decision model. The retailer chooses the optimal service levels $s_d$ and $s_r$ to maximize its profit; from the first-order conditions of the incentive compatibility constraint, and by solving the resulting two equations, we obtain the optimal service levels provided by the retailer for the online channel and the traditional channel under information asymmetry. Taking the first-order partial derivative of (12) with respect to $\beta$ and setting it to zero gives the optimal profit-sharing ratio under asymmetric information. Equation (13) shows that, when the manufacturer cannot observe the retailer's service level, the service level the retailer provides to online-channel customers is proportional to the profit-sharing ratio and inversely proportional to the service costs; when the profit-sharing ratio is raised, the service level provided to the online channel increases correspondingly. The service level of the retail channel is related to the retail and wholesale prices: retail-channel customers enjoy a higher service level when the difference between the wholesale price and the retail price is large, and the retail channel also provides a higher service level when the profit-sharing ratio is raised. Meanwhile, the retailer's risk preference $\rho$ influences its profit-sharing ratio: the more conservative the retailer, the lower the profit-sharing ratio offered by the manufacturer.
Substituting $s_d^*$, $s_r^*$, and $\beta^*$ into (11) or (12) yields the optimal fixed payment that the retailer receives from the manufacturer and the manufacturer's optimal profit under information asymmetry.
Analysis about the Impact of Different Services on Cooperative Mechanisms
The distinction between the online channel and the traditional channel gives rise to different customer service experiences. The online channel offers customers more product information and convenient, unrestricted shopping hours, while the traditional channel offers lower perceived risk, an in-store experience, and no delivery wait. Different service experiences in the two channels alter customer purchasing behavior and change the demand structure of the market. Under service cooperation, the retailer provides different optimal service levels to the two channels. In the following, we further analyze whether information symmetry affects the profits of the manufacturer, the retailer, and the supply chain system when the manufacturer cooperates with a retailer providing differentiated services. Given that the relevant parameters lie between 0 and 1, the difference between the manufacturer's profits under information symmetry and under information asymmetry is always positive; therefore, the manufacturer suffers a loss because of information asymmetry. Exploiting its advantage of facing customers directly, the retailer passes incomplete information to the manufacturer, making it difficult for the manufacturer to predict market demand accurately and, in turn, to make sound production and transportation decisions.
Unlike the manufacturer, the retailer earns the same profit whether information is symmetric or not, because its certainty-equivalent gain is unchanged. The retailer nevertheless benefits: under information asymmetry the service levels in both channels are lower than under information symmetry, so the retailer obtains the same profit while providing less service. In other words, the retailer's profit effectively increases for a given service level. Because the manufacturer's profit falls, the profit of the whole supply chain system also falls.
The retailer's service costs directly influence the manufacturer's profits under cooperation. Whether information is symmetric or not, the manufacturer's profit decreases as the service cost coefficient $\eta$ increases. Under information asymmetry, a reduction in service costs enables the retailer to obtain a larger profit-sharing ratio ($\partial \beta / \partial \eta < 0$), and the manufacturer's profit loss caused by information asymmetry becomes larger ($\partial \Delta E / \partial \eta < 0$). Accordingly, the manufacturer should devote more effort to collecting market-demand information.
Numerical Analysis
This section uses numerical examples to verify the impact of service costs on the manufacturer's profits under information symmetry and asymmetry. We assume a product sold through both channels with a given set of parameter values. Using MATLAB 7.0 for the simulations, we obtain how the manufacturer's profit changes with the retailer's service costs under information symmetry and asymmetry, as shown in Figures 2-5.
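A lightweight sketch of this kind of sensitivity experiment is shown below. It is not a reproduction of the paper's simulation: the closed-form profit expressions are not restated here, so the two profit functions, the form of the asymmetric-information loss, and all parameter values are stand-in assumptions meant only to illustrate how one would sweep the service cost coefficient $\eta$ and compare the two information regimes.

```python
import numpy as np

# Hypothetical stand-in profit functions: the manufacturer's profit falls as the
# service cost coefficient eta rises, and information asymmetry subtracts an
# additional agency loss that grows as eta shrinks (consistent with the
# qualitative findings reported in the text, not with the paper's exact formulas).

def manufacturer_profit_symmetric(eta, p_d=8.0, k=5.0):
    # First-best: the assumed optimal service level rises as eta falls,
    # boosting channel sales, but is capped at the perfect-service level 1.
    s_star = min(p_d * k / (10.0 * eta), 1.0)
    return p_d * k * s_star - 0.5 * eta * s_star**2

def manufacturer_profit_asymmetric(eta, rho=0.5, sigma2=2.0):
    # Second-best: subtract an assumed agency/risk-premium loss that is larger
    # when service is cheap (low eta) and the retailer's private information matters more.
    agency_loss = rho * sigma2 / eta
    return manufacturer_profit_symmetric(eta) - agency_loss

for eta in np.linspace(0.5, 5.0, 10):
    sym = manufacturer_profit_symmetric(eta)
    asym = manufacturer_profit_asymmetric(eta)
    print(f"eta={eta:4.2f}  symmetric={sym:7.3f}  asymmetric={asym:7.3f}  gap={sym - asym:6.3f}")
```

Under these assumptions both profit curves decline in $\eta$ while the symmetric-asymmetric gap shrinks as $\eta$ grows, mirroring the qualitative pattern described for Figures 2-5.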
As shown in Figures 2 and 3, whether information is symmetric or not, the manufacturer's profit always decreases as the retailer's service costs increase, and under information asymmetry the decrease is only slightly larger than under information symmetry. This indicates that the retailer's concealment of service information does not seriously affect the manufacturer's profit.
Figures 4 and 5 show that the service level the retailer provides to the traditional channel is always higher than that provided to the direct online channel, and that as service costs increase the differentiation between the two service levels becomes increasingly pronounced, especially under information asymmetry.
Conclusion
With the continuous development of the market, customer service has become an important factor that strongly influences the interests of supply chain members. Assuming that there is no cross-buying in a dual-channel supply chain composed of one manufacturer and one retailer, we study the incentive mechanism that arises when the manufacturer entrusts the online channel's service to the retailer. We establish a principal-agent model and derive the optimal fixed payment and optimal profit-sharing ratio. We also compare and analyze the relationship between the service levels of the online and retail channels and the impact of service costs and market uncertainty on the manufacturer's profit. The results show that a decrease in the retailer's service cost and an increase in market uncertainty both reduce the manufacturer's profit. It is worth noting that some of the assumptions in this paper are quite strict; for example, cross-buying is ruled out, which differs from the actual market situation. Further research is therefore needed. | 4,390.4 | 2014-12-01T00:00:00.000 | [
"Business",
"Economics"
] |
A combinatoric shortcut to evaluate CHY-forms
In our recent work, we proposed a differential operator for the evaluation of multi-dimensional residues on isolated (zero-dimensional) poles. In this paper we discuss some new insight on evaluating the (generalized) Cachazo-He-Yuan (CHY) forms of scattering amplitudes using this differential operator. We introduce a tableau representation for the coefficients appearing in the proposed differential operator. Combining the tableaux with the polynomial form of the scattering equations, the evaluation of the generalized CHY form becomes a simple combinatoric problem. It is thus possible to obtain the coefficients arising in the differential operator in a straightforward way. We present the procedure for a complete solution of the n-gon amplitudes at one-loop level in a generalized CHY form. We also apply our method to fully evaluate the one-loop five-point amplitude in the maximally supersymmetric Yang-Mills theory; the final result is identical to the one obtained by Q-cut.
Introduction
The scattering equations were derived at tree level from the high-energy behavior of string theory in [1,2] and have drawn attention from theoretical physicists in diverse contexts [3,4]. Incorporating the scattering equations, Cachazo, He and Yuan [5][6][7] proposed a closed formula for arbitrary n-point tree amplitudes in a variety of massless quantum field theories. This formula was proven by Dolan and Goddard in [8] for Yang-Mills theory in arbitrary dimensions, and a polynomial form of the scattering equations was obtained by the same authors in [9].
The scattering equations have been generalized to loop level in a number of contexts, such as open string theory, the pure spinor formalism of the superstring, and ambitwistor string theory [10][11][12]. The CHY expressions were subsequently extended to one and two loops for the bi-adjoint scalar theory, gauge theory and gravity in [13][14][15][16]. In addition to the ambitwistor approach, the one-loop generalized CHY forms were obtained from tree-level ones in one higher spatial dimension in [17,18] for scalar and gauge theories, and higher-loop CHY forms for scalar theory were constructed in [19].
Besides constructing the generalized CHY forms, it is also a challenge to evaluate such multi-dimensional CHY integrals. Indirect methods, such as the "building block method" [20][21][22][23] and the "integration rules" [24][25][26][27][28][29][30][31], evaluate the generalized CHY integrals reductively. Direct approaches have also been explored. At tree level, the n-point CHY form is evaluated for gauge and gravity theories in special kinematics in [32], and the scattering equations are solved in four dimensions up to six points in [33][34][35]. Elimination theory is exploited in solving the scattering equations up to seven points in [36,37], and a general prescription based on elimination theory is proposed in [38]. A direct evaluation of the CHY form for the MHV tree amplitude is given in [39]. Algebraic-geometry-based methods (the companion matrix method [40], the Bezoutian matrix method [41], and polynomial reduction techniques [42,43]) are employed to evaluate the CHY expressions without solving the underlying scattering equations.
In our previous work [44] we proposed a conjecture that enables us to compute multidimensional residues on isolated (zero-dimensional) poles by a differential operator. Here we briefly recall that conjecture.
Suppose $f_1, f_2, \ldots, f_k$ are homogeneous polynomials in the complex variables $z_1, z_2, \ldots, z_k$ of degrees $d_1, d_2, \ldots, d_k$ respectively, and assume that the common zeros of $f_1, f_2, \ldots, f_k$ consist of a single isolated point $p$. Let $R(z_i)$ be a holomorphic function in a neighborhood of $p$. The conjecture states that the residue of $R$ at $p$ can be computed by applying a differential operator $D$ to $R$ and evaluating the result at $p$, where $D$ is of the form
$$D = \sum_{r_1 + \cdots + r_k = \sum_i (d_i - 1)} a_{r_1, r_2, \ldots, r_k}\; \partial_1^{r_1} \partial_2^{r_2} \cdots \partial_k^{r_k}\,.$$
Here the coefficients $a_{r_1, r_2, \ldots, r_k}$ are $z$-independent constants, $\partial_i = \partial/\partial z_i$ for $i = 1, \ldots, k$, and the $r_i$'s are non-negative integers; the sum runs over all solutions of the constraint $r_1 + \cdots + r_k = \sum_i (d_i - 1)$. Moreover, it is conjectured that $D$ is uniquely determined by two conditions, coming respectively from 1) the local duality theorem [45,46] and 2) the intersection number of the divisors $D_i = (f_i)$.
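To make the structure of $D$ concrete, the following sketch enumerates the multi-indices $(r_1,\ldots,r_k)$ allowed in the sum, i.e., the non-negative integer solutions of $r_1+\cdots+r_k=\sum_i(d_i-1)$; their number is the number of unknown coefficients $a_{r_1,\ldots,r_k}$ one has to fix. The degree list in the example is the one relevant for the four-point prepared form discussed below; nothing else about the operator is computed here.

```python
from itertools import combinations_with_replacement
from math import comb

def multi_indices(k, total):
    """All non-negative integer k-tuples (r_1, ..., r_k) with sum equal to `total`."""
    results = []
    # Choose rows for `total` indistinguishable unit "tiles" among k rows.
    for rows in combinations_with_replacement(range(k), total):
        r = [0] * k
        for i in rows:
            r[i] += 1
        results.append(tuple(r))
    return results

# Degrees of the divisors for the four-point prepared form discussed in the text:
# the three polynomial scattering equations of degrees 1, 2, 3 and h'_4 = sigma_i.
degrees = [1, 2, 3, 1]
order = sum(d - 1 for d in degrees)          # order of D; here 0 + 1 + 2 + 0 = 3

indices = multi_indices(len(degrees), order)
print(f"order of D = {order}")
print(f"number of coefficients a_r = {len(indices)} "
      f"(stars-and-bars check: {comb(order + len(degrees) - 1, len(degrees) - 1)})")
print(indices[:5], "...")
```

For the degree list above this reproduces the third-order operator with 20 coefficients quoted in the four-point warm-up example.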
This conjecture has been verified numerically to be widely applicable for computing both degenerate and non-degenerate multi-dimensional residues, as long as the poles are isolated. In the application to evaluating the generalized CHY integrals, the linear equations for the $a_{r_1,\cdots,r_k}$ arising from the local duality theorem and from the intersection number requirement are the structures of interest. In this paper we study these structures by introducing a tableau representation for each $a_{r_1,\cdots,r_k}$. Using the tableau representation, we explain how to obtain the differential operator efficiently, especially for the polynomial scattering equations, which makes the evaluation of the generalized CHY forms a simple combinatoric problem. In this way, the evaluation of CHY integrals in their prepared forms (which we define later in this paper) is easily achieved for any number of external lines at one loop.
This leads straightforwardly to a full evaluation of the n-gon one-loop integrand in the generalized CHY form, since the n-gon integrand is naturally of the prepared form. Furthermore, for the amplitudes in super-Yang-Mills, the one-loop integrands for four and five particles can also be recast into the prepared form using the cross-ratio identities derived in [31], after which they are calculated effortlessly. We expect that the one-loop integrands for higher points can also be transformed into the prepared form or similar expressions. Upon evaluating the prepared forms, the final result is identical to the Q-cut representation obtained in [47] (the Q-cut method itself was proposed in [48]).
2 Prescription for determining differential operators
In this section we introduce our new method for determining the differential operators, which is more efficient than the approach taken in [44]. We begin with a warm-up example, the four-point one-loop SYM integrand. After that we present the notation used in obtaining the coefficients of the differential operators. Finally, we use this method to obtain the complete solution for the coefficients for a particular class of CHY integrands.
Toy model: one-loop four-point SYM integrand
The one-loop scattering amplitude for four external gluons in N = 4 SYM has been revisited in various contexts in the literature [47,49]. The generalized CHY integral for its loop integrand is given in [14,15]. A detailed analysis of the evaluation of this generalized CHY integral has already been presented in [44]. In this section we reconsider this example as a toy model to illustrate the main ideas of the method proposed in this paper. Details and more general discussions of the method appear in later sections.
First let us recall the setting. The four-point generalized CHY form of the integrand was obtained in [14,15]; in it, PT$_4$ is the well-known Parke-Taylor factor, given as a sum over the cyclic permutation group $S_4$ of four objects. For illustrative purposes, here we only consider the first term in PT$_4$, which leads to an integral in which we fix the gauge $\sigma_4 = 1$; the $h_i$'s are the corresponding polynomials, the loop momentum is denoted $\ell$, and a few shorthand notations are used. As shown in [44], using the global residue theorem the integral can be rewritten accordingly.
Figure 1. Tableaux for the coefficients appearing in this example. The red tiles in the tableaux are allowed to be permuted among the rows; the meaning of such permutations is discussed shortly.
where $\tilde{h}_i$ is obtained from $h_i$ with the replacement $\sigma_4 \to \sigma'_4$. From now on in this section we simply write $\sigma_4$ for $\sigma'_4$ for notational convenience. The main idea of the method presented in [44] is to replace the contour integration with a differential operator $D$, which should be of third order in this particular case and can be written as
$$D = \sum_{\substack{0 \le r_i \le 3 \\ r_1+r_2+r_3+r_4=3}} a_{r_1,r_2,r_3,r_4}\, \left(\frac{\partial}{\partial\sigma_1}\right)^{r_1} \left(\frac{\partial}{\partial\sigma_2}\right)^{r_2} \left(\frac{\partial}{\partial\sigma_3}\right)^{r_3} \left(\frac{\partial}{\partial\sigma_4}\right)^{r_4},$$
where the coefficients $a$ are to be fixed by the local duality theorem and the intersection number equation. One of the main points of the current paper is to provide an efficient method to determine those coefficients without solving the local duality and intersection number equations by brute force. Now we briefly describe how the shortcut method works in this particular example. There are 20 coefficients $a_{r_1,r_2,r_3,r_4}$ to be determined. Before turning to the equations from local duality and the intersection number, we first classify the coefficients as follows. For each coefficient $a$ we define its rank to be $R(a_{r_1,r_2,r_3,r_4}) = r_1!\,r_2!\,r_3!\,r_4!$. Furthermore, we associate a tableau with each $a$. All tableaux (up to subscript permutation) for our example are shown in figure 1. The three tableaux in figure 1 depict $a_{0,1,1,1}$, $a_{0,0,1,2}$ and $a_{0,0,0,3}$ and their counterparts with permuted subscripts, respectively.
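The bookkeeping just described can be reproduced with a short sketch: it lists the 20 multi-indices with $r_1+r_2+r_3+r_4=3$, computes the rank $R = r_1!\,r_2!\,r_3!\,r_4!$ of each, and groups them into the three permutation classes shown in figure 1. Only the counting is done here, not the determination of the coefficients themselves.

```python
from itertools import product
from math import factorial
from collections import defaultdict

# Enumerate the 20 coefficients a_{r1,r2,r3,r4} of the third-order operator D
# for the four-point example and group them by permutation class of indices.
indices = [r for r in product(range(4), repeat=4) if sum(r) == 3]

def rank(r):
    """Rank of a coefficient, R(a_r) = r1! r2! ... rk!."""
    out = 1
    for x in r:
        out *= factorial(x)
    return out

classes = defaultdict(list)
for r in indices:
    classes[tuple(sorted(r))].append(r)

print(f"total coefficients: {len(indices)}")          # expected: 20
for rep, members in sorted(classes.items()):
    print(f"class {rep}: rank {rank(rep)}, {len(members)} coefficients")
# The three classes correspond to a_{0,1,1,1}, a_{0,0,1,2} and a_{0,0,0,3}
# (and their index permutations), with ranks 1, 2 and 6 respectively.
```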
Similarly, this equation can also be read off from the tableau. For that purpose, we paint red the two tiles as shown in figure 2(b) and permute the red tiles among the four rows. To get eq. (2.11), we simply take the sum of the products of the corresponding $a$'s with their ranks and the corresponding $\ell_{i,j}$'s, where $i, j$ are the row indices of the red tiles of the respective tableaux obtained through the tile moving described above. Accordingly, it is also seen that the first term and the fourth term on the left-hand side of eq. (2.9) are equal. By considering all the equations $D(\sigma_i \sigma_j \tilde h_1) = 0$ (for the various $i, j$) and $D(\sigma_i \tilde h_2) = 0$ in a similar way, we can see that all the terms in the intersection number equation are actually equal. Hence the solutions for the $a$'s appearing in (2.9) are just the inverses of their respective prefactors. Up to this point, we have thus solved all the $a$'s and found the exact form of the operator $D$. Now we can apply the $D$ operator to evaluate the generalized CHY integrand.
Preliminary: notations & tableaux
In the previous discussion, the one-loop SYM integrand for four external particles was massaged into a sum of several terms, all of the form (2.13), where the $\sigma_j$ ($j = 1, \cdots, n-1$) are the $(n-1)$ variables that need to be integrated out (with the gauge choice $\sigma_n = 1$) and $\sigma'_n$ is the auxiliary parameter that comes into play in the homogenization of the scattering equations (for details, see sections 4.2 and 4.3 of [44]). The $h_j$ ($j = 1, \cdots, n-1$) denote the $(n-1)$ polynomial scattering equations (3.6) that are necessary to capture the behavior of an $n$-point scattering, and $h'_n = \sigma_i$ ($i = 1, \cdots, n-1$) or $h'_n = \sigma'_n$. In this form, the divisors are generated either by one of the polynomial scattering equations or by the simplest monomial possible. As reviewed in the introduction, such contour integrations are associated with a differential operator whose coefficients are fixed by the local duality theorem and the intersection number. These constraints are all linear, and the local duality theorem in particular gives rise to a sparse constraint matrix, which in general can be solved by various sparse-matrix methods. In the case of an integral of the form (2.13), thanks to the beautiful mathematical structure of the scattering equations, these constraints can in fact be solved analytically. In this section we present a method that considerably reduces the size of the constraint matrix and quickly yields the analytic solutions for the coefficients. For this reason, we call an expression of the form (2.13) prepared. Moreover, as we will observe in more examples, the CHY-like representations of amplitudes/integrands can often be reduced to the prepared form or to similar forms with a slightly modified $h'_n$, even though the original expressions appear much more complicated. For an expression with a modified $h'_n$ that is still a monomial, we expect our algorithm to generalize.
As shown in the discussion of the four-point one-loop integrand, a graphical representation of the coefficients in the operator comes in handy for demonstrating our method in a more intuitive way. Let us take the coefficient $a_{0,1,2,3,4}$ as an example and address a few details of its corresponding tableau, depicted in figure 3(a). The $i$-th row of the tableau corresponds to the $i$-th index of the coefficient, and the number of tiles in the $i$-th row is equal to the value of the $i$-th index. The total number of tiles is the order of the corresponding differential operator. As mentioned above, we define the rank of the coefficient $a_{r_1,r_2,\cdots,r_n}$ as the integer $R(a_{r_1,r_2,\cdots,r_n}) = r_1!\,r_2!\cdots r_n!$ (2.14), and this number is written on top of the corresponding tableau. Sometimes we drop the row labels in the tableau, as shown in figure 3(b); such a tableau represents the class of $a$'s whose indices are related by permutations. For instance, the tableau in figure 3(b) corresponds to the coefficients $\{a_{\rho(0,0,1,2,3)}\}_{\rho\in S_5}$, where $S_5$ is the permutation group of five items and $\rho(0,0,1,2,3)$ means a permutation of the five digits $(0,0,1,2,3)$.
As shown in figure 3, we have painted some tiles red in the tableaux. For a given unpainted tableau, we associate with it a class of colored tableaux obtained in the following way: in each row only the rightmost tile may be painted red, and we can choose to paint it or not. So altogether we have $2^J$ colored tableaux for each given unpainted one, where $J$ is the number of nonempty rows in the unpainted tableau. From each colored tableau, we can obtain several new tableaux (disregarding their coloring) by permuting the red tiles among the rows. For example, figure 4 depicts the new tableaux resulting from the tile moving of the first colored tableau; in this example, we can obtain $a_{0,0,1,2,3}$, $a_{0,0,2,1,3}$, $a_{0,1,1,1,3}$ and $a_{1,0,1,1,3}$ from $a_{0,0,1,1,4}$.
So far we have played with the tableaux. Now we demonstrate how the tile moving acquires its actual meaning from the local duality theorem. Recall that the polynomial scattering equation of degree $m$ for $n$ particles takes a specific polynomial form (throughout this paper we adopt the gauge choice $\sigma_n = 1$, and all the polynomial scattering equations, if not specified otherwise, are homogenized with an auxiliary parameter $\sigma'_n$; for notational compactness, we drop the prime and simply denote the auxiliary parameter as $\sigma_n$ from now on). The order of the differential operator $D$ associated with the prepared form for $n$ points is $M = (n-1)(n-2)/2$. Since the differential operator $D$ acting on a monomial in the $\sigma_i$'s does nothing but pick out the exponents of the $\sigma_i$'s, the local duality theorem yields linear relations among the coefficients, where the $r_j$'s are non-negative integers satisfying the Frobenius equation. Here we see that the rank of $a_{r_1,\cdots,r_n}$ is merely the product of the factorials that arise when taking derivatives multiple times. There are $C_n^m$ coefficients $a_{r_1,\cdots,r_n}$ on the right-hand side, and their corresponding tableaux all have $r_j$ black tiles in row $j$ and $m$ red tiles that sit in the $m$ different rows $\{i_1, i_2, \cdots, i_m\}$. The tableaux corresponding to these $a$'s are exactly those related by permuting the red tiles as mentioned above! In other words, given a tableau that has $r_j$ black tiles in the $j$-th row and one red tile in each of the rows $\{i_1, \cdots, i_m\}$, there is a unique equation following from the local duality theorem that relates this tableau to those obtained by permuting the red tiles. Furthermore, scanning over the monomials of the correct degrees to construct the local duality equations is equivalent to exhausting all the legitimate colorings and writing down the corresponding equations.
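The statement that $D$ acting on a monomial merely picks out exponents (producing the factorial ranks) can be checked symbolically. The following sketch is only an illustration of that mechanism with sympy, not an implementation of the local duality equations themselves.

```python
import sympy as sp

# Illustration: applying prod_i (d/dsigma_i)^{r_i} to the monomial
# prod_i sigma_i^{s_i} and evaluating at the origin gives r_1!...r_n! when the
# exponents match (s == r) and zero otherwise. This is the mechanism behind the
# factorial "ranks" appearing in the local duality equations.

n = 4
sigma = sp.symbols(f"sigma1:{n+1}")

def apply_term(monomial, r):
    """Apply (d/dsigma_1)^{r_1} ... (d/dsigma_n)^{r_n} to `monomial`, evaluate at 0."""
    expr = monomial
    for s, power in zip(sigma, r):
        expr = sp.diff(expr, s, power)
    return expr.subs({s: 0 for s in sigma})

monomial = sigma[1] * sigma[2] * sigma[3]          # sigma2*sigma3*sigma4, exponents (0,1,1,1)
print(apply_term(monomial, (0, 1, 1, 1)))          # 1 = 0!*1!*1!*1! (matching exponents)
print(apply_term(monomial, (0, 0, 1, 2)))          # 0 (non-matching exponents)
print(apply_term(sigma[3]**3, (0, 0, 0, 3)))       # 6 = 3! (rank of a_{0,0,0,3})
```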
As mentioned above, the equations arising from the local duality theorem are not independent of each other, and the number of equations is much larger than the number of coefficients. With the correspondence between the colorings and the local duality constraints established, we are ready to present a method that systematically selects just enough independent local duality equations to fix the coefficients completely.
Coefficient generating algorithm
In this section, we spell out our method that analytically solves the local duality and the intersection number constraints associated with a prepared form and display the general solutions of the coefficients in the corresponding differential operator.
Although there are several possible colored tableaux associated with a given uncolored tableau, there is a preferred one that leads (by shuffling the red tiles among the rows) to new tableaux whose ranks are either lower than or equal to that of the original. This particular colored tableau is constructed as follows. For convenience let us assume $a_{r_1,r_2,\cdots,r_n}$ is a coefficient with indices ordered such that $r_{i_1} \le r_{i_2} \le \cdots \le r_{i_n}$. In the preferred colored tableau, we paint red the last tile in row $i_n$. Now for the other rows: if row $i_p$ has one red tile and $r_{i_p} - r_{i_{p-1}} \le 1$ with $r_{i_{p-1}} > 0$, we also paint red the last tile of row $i_{p-1}$; otherwise all the tiles in row $i_{p-1}$ remain black and the coloring process stops there.
There are two types of coefficients $a_{r_1,r_2,\cdots,r_n}$: let $(j_1, j_2, \cdots, j_n)$ be a permutation of the indices such that $r_{j_1} \le r_{j_2} \le \cdots \le r_{j_n}$; if $r_{j_{p+1}} - r_{j_p} \le 1$ for $p = 1, \cdots, n-1$, then $a_{r_1,r_2,\cdots,r_n}$ is called elementary; otherwise it is called non-elementary. It is easy to observe that, for an elementary coefficient, the aforementioned preferred coloring relates it to coefficients of lower or equal ranks, whereas a non-elementary coefficient is related only to lower-rank ones through the preferred coloring. Moreover, for a particular prepared form for $n$ points, the corresponding tableaux all have $M = (n-1)(n-2)/2$ tiles in total. Up to row permutation, there is a unique tableau with the highest rank among the elementary ones, namely the tableau associated with $a_{0,0,1,2,3,\ldots,n-2}$, whose rank is $R_e = (n-2)!\,(n-3)!\cdots 1!$.
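A compact sketch of this classification is given below: for a given multi-index it computes the rank, checks the elementary condition (consecutive sorted entries differing by at most one), and compares with $R_e = 1!\,2!\cdots(n-2)!$. The preferred-coloring construction itself acts on tableaux and is not reproduced here.

```python
from math import factorial, prod

def rank(r):
    """Rank R(a_r) = r_1! r_2! ... r_n! of a coefficient."""
    return prod(factorial(x) for x in r)

def is_elementary(r):
    """Elementary: the sorted indices increase by at most one at each step."""
    s = sorted(r)
    return all(s[i + 1] - s[i] <= 1 for i in range(len(s) - 1))

def highest_elementary_rank(n):
    """R_e, the rank of a_{0,0,1,2,...,n-2}, the highest-rank elementary coefficient."""
    return prod(factorial(j) for j in range(1, n - 1))

n = 5
print(highest_elementary_rank(n))                       # 12 = 1!*2!*3! for n = 5
for r in [(0, 0, 1, 2, 3), (0, 1, 1, 1, 3), (0, 0, 1, 1, 4), (0, 0, 0, 0, 6)]:
    print(r, "elementary" if is_elementary(r) else "non-elementary", "rank", rank(r))
```

For $n=5$ this flags $a_{0,0,1,2,3}$ as the highest-rank elementary class (rank 12), in line with the five-point example discussed later.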
The above observations about the prepared form provide a natural order in which the local duality constraints can be conveniently solved. In the following discussion we show that, in the operator associated with a prepared form, any coefficient with rank lower than $R_e$ must vanish, while the elementary ones of rank $R_e$ can be easily worked out; this leaves only non-elementary coefficients with ranks greater than $R_e$. Those higher-rank coefficients can be obtained by an inductive method: suppose we have obtained all the coefficients (both elementary and non-elementary) up to some rank $R_0 \ge R_e$ and the next possible rank is $R_1$; then a non-elementary coefficient of rank $R_1$ whose tableau is painted in the preferred way can be read off directly from the relation associated with the colored tableau, i.e. it can be represented in terms of other coefficients of lower ranks only. This procedure continues all the way up to the highest-rank coefficients, and in this way we can determine all the coefficients analytically, one after another.
Trivial coefficients. In general, the differential operator D may be very complicated. However, it is easy to see that many coefficients in D are actually zero, i.e. trivial, when D is associated with a form that is prepared (see (2.13)).
Firstly, for a prepared form with $h_n = \sigma_i$, the coefficients $a_{r_1,\cdots,r_n}$ with $r_i \ge 1$ must vanish, because the local duality theorem applied to $h_n$ requires $D(\sigma_1^{r_1}\cdots\sigma_n^{r_n}\,\sigma_i) = 0$ for all non-negative integers $r_j$ solving $r_1 + \cdots + r_n = M - 1$.
Secondly, the coefficients $a_{r_1,\cdots,r_n}$ must vanish if $R(a_{r_1,\cdots,r_n}) < R_e$. To see this, let us consider a prepared form with $h_n = \sigma_i$. Since $r_1 + \cdots + r_n = M$, the lowest-rank coefficients must be elementary and have $r_i \ge 1$ when $n \ge 5$ (when $n = 4$ the lowest-rank coefficient has the $i$-th index equal to $0$ and all the rest equal to $1$), so they obviously vanish. Now we only need to show that every elementary coefficient of the form $a_{r_1,\cdots,r_{i-1},0,r_{i+1},\cdots,r_n}$ (where $r_1!\cdots r_n! < R_e$) is zero, since the non-elementary ones can be written in terms of coefficients of lower rank only. Let $\{i_1, \cdots, i_{i-1}, i_{i+1}, \cdots, i_n\}$ be a permutation of $\{1, \cdots, i-1, i+1, \cdots, n\}$ such that $r_{i_1} \le \cdots \le r_{i_n}$ with consecutive entries differing by at most one. Then we must have $r_{i_n} \le n-3$ in order to guarantee that the rank is lower than $R_e$ (see the footnote below), and on the other hand we must have $r_{i_1} \ge 1$ because the remaining indices sum to $M = \mathrm{ord}(D)$. Hence the tableau for $a_{r_1,\cdots,r_{i-1},0,r_{i+1},\cdots,r_n}$ has only the $i$-th row empty and all the other rows filled. The aforementioned preferred coloring then leads to $(n-1)$ red tiles in total, and the local duality relation represented by this colored tableau involves $a_{r_1,\cdots,r_{i-1},0,r_{i+1},\cdots,r_n}$ and other coefficients that all have $r_i = 1$; i.e. $a_{r_1,\cdots,r_{i-1},0,r_{i+1},\cdots,r_n}$ is rewritten as a linear combination of several vanishing $a$'s and is therefore zero. By induction, all the non-elementary coefficients with ranks lower than $R_e$ are also zero.
Non-trivial elementary coefficients. Now we turn to the elementary coefficients of rank $R_e$. Recall that the intersection number constraint reads $D(J) = \prod_j \deg(h_j)$, where $J$ is the Jacobian determinant of the divisors, $h_n = \sigma_i$, and the scattering equations $h_j$ ($j = 1, \cdots, n-1$) are polynomials in the $\sigma_i$'s of degree $j$ respectively. Using the explicit forms of the $h$'s, we can see that only coefficients of rank no greater than $R_e$ appear in (2.18). Combining this with the previous discussion of the trivial coefficients, the intersection number equation involves only the elementary coefficients of rank $R_e$, which can therefore be solved directly.
Footnote 1: If $r_{i_n} = n-2$ and all the other $r_j$'s saturate their lower bounds, we have $R(a_{0,r_2,\cdots,r_n}) = r_{i_n}!\, r_{i_{n-1}}! \cdots r_{i_1}! = (n-2)!\,(n-3)!\cdots 1!$.
The relations corresponding to the painted tableaux in figure 6 are independent of each other, since every relation involves a new coefficient. These relations are also complete, since they are linear and fix all the coefficients uniquely.
One-loop five-point SYM amplitude
In this and the next section, we further illustrate the aforementioned method by discussing more examples: the one-loop MHV amplitude for five external particles in N = 4 SYM and the n-gon amplitude for an arbitrary number of particles. At first sight, the one-loop SYM integrand [14] might appear considerably more complicated than the four-point case due to the presence of the Pfaffian. Fortunately, supersymmetry, which enters the story through the GSO projection, simplifies the integrand/amplitude a great deal and casts it into a form that is no longer intimidating. We first demonstrate this process using the relations among the Jacobi theta functions studied in [50] and the decomposition of the polarization vectors. In the end we arrive at an expression that is a sum of prepared forms, and the evaluation of this expression becomes straightforward.
Simplify the one-loop generalized CHY integrand
The generalized CHY form for the one-loop five-point integrand in SYM was first conjectured by Geyer et al. in [14,15] and checked against the corresponding Q-cut expressions up to five points. In it, PT$_5$ is the well-known Parke-Taylor factor for five points, whose general expression (3.2) is a sum over the cyclic permutation group $S_n$ of $n$ elements. The shorthand notation $F$ denotes the summation of the Pfaffians over the GSO sectors with even spin structures, namely $F = \sum_\alpha c_\alpha Z_\alpha \,\mathrm{Pf}(W_\alpha)\big|_{\tau\to\infty}$, where $\tau$ is regarded simply as a parameter in the SYM computation but can be interpreted as the moduli parameter of the corresponding one-loop worldsheet in string theory. $Z_\alpha$ denotes the theory-specific partition function of each GSO sector; in the case of $\mathcal{N}=4$ super Yang-Mills it is expressed in terms of the Dedekind eta function $\eta(\tau)$, with $\alpha$ labeling the GSO sectors with even spin structures, namely (R+), (NS+), and (NS$-$). The coefficients $c_\alpha$ take care of the GSO projection and thus take the values $\{c_2, c_3, c_4\} = \{+1, -1, +1\}$. The matrix in the Pfaffian takes a skew-symmetric form (3.5), in which the polarization vectors and momenta of the external gluons are denoted by $\epsilon$'s and $k$'s (without loss of generality, we choose the helicities of $\epsilon_1$ and $\epsilon_2$ to be negative and the rest positive throughout this section). $S^\alpha$ denotes the fermionic propagator in the RNS formalism of string theory, with $\alpha$ labeling the sector; its explicit form will not be used in this paper. We have also re-parameterized the variables as $\sigma = e^{2\pi i (z - \tau/2)}$, and the scattering equations in terms of the complex variables $z_i$ involve $\theta_1(z_{ij}|\tau)$, with $G$ denoting the bosonic propagator.
Now we begin to simplify the Pfaffian (3.4), which by definition is a polynomial in the elements of the matrix (3.4). The terms in this polynomial are all of the same form: a coefficient $c(\{\epsilon\},\{k\})$ containing only kinematic information times a product of propagator factors over index pairs, where $(i'_1, \cdots, i'_m)$ is a permutation of $(i_1, \cdots, i_m)$. The diagonal terms $C_{ll}$ of the off-diagonal blocks in (3.4) are the only ones that contribute factors other than $S^\alpha$. The union of all the indices above is simply $\{1,2,3,4,5\}$ with each number appearing twice, while the index sets $\{i_1,\cdots,i_m\}$ and $\{j_1,\cdots,j_{5-m}\}$ have no overlap. The coefficients $c(\{\epsilon\},\{k\})$ and the entries $C_{ll}$ are evidently universal for all GSO sectors and can be pulled out of the summation over the spin structures; the only factors relevant for the GSO summation are the products of the $S^\alpha$'s. Such summations are worked out explicitly at one loop for the $\mathcal{N}=4$ partition functions in [50], exploiting the properties of the Jacobi theta functions. For $m \le 3$ the weighted sum simply vanishes, and thus the surviving terms in the five-point Pfaffian are those with $m = 4$ or $m = 5$, which sum up to simple results when $z_{i_1 i'_1} + \cdots + z_{i_m i'_m} = 0$ (a condition satisfied trivially in our case). Using these relations, the sum of the Pfaffians corresponding to the MHV amplitude over the even spin structures can be written down directly, where $\rho\{345\}$ and $\rho\{2345\}$ run over the permutation cosets $S_3/S_2$ and $S_4/(S_2\times S_2)$ respectively. This expression does not yet put all the external particles on the same footing. In the $\mathcal{N}=4$ SYM case, it is possible to make the cyclic symmetry manifest at the level of the integrand by projecting the polarization vectors $\epsilon^\mu_i$ onto the external momenta $k^\mu_i$ and eliminating the loop momentum $\ell$ using the scattering equations (3.6). In the decomposition of the polarization vectors, $\epsilon^{\rho_1\rho_2\rho_3\rho_4}$ denotes the Levi-Civita tensor and $v^{\rho_i}_i$ denotes the $\rho_i$-th component of the vector $v_i$. Now all the $\epsilon_i\cdot\ell$ terms can be rewritten in terms of the $R_{i,j}$'s using the scattering equations, leading to a form of (3.9) that treats all the external lines democratically.
Transformations to the prepared CHY integrand
So far we have reduced the Pfaffian in the SYM integrand to a simple expression. Substituting the explicit expression for the Parke-Taylor factor and applying the linear transformations that turn the original scattering equations into polynomials, we write down the corresponding expressions for the five-point integrand. The former term is already of the prepared form (2.13) up to the global residue theorem (which only introduces an overall minus sign to the result), while the latter, although not yet of the form we want, can easily be massaged into one using the cross-ratio identities derived in [31], where $i \in \{1,\ldots,5\}$. The denominator on the right-hand side contains only the factors $(\sigma_i - \sigma_j)$ of non-adjacent pairs $(ij)$, and these factors are canceled by the Vandermonde determinant introduced by the linear transformations of the scattering equations, leaving the monomial $\sigma_i$ in the denominator, which is precisely the prepared form up to the global residue theorem.
Evaluating the prepared form of the five point integrand
In order to evaluate (3.14), we first construct the differential operators that capture the residues corresponding to the prepared expressions obtained previously. Without loss of generality, we take the prepared form with $h_5 = \sigma_1$ as an example to demonstrate the detailed evaluation (all the other terms can be computed the same way), where $H(\sigma)$ is a holomorphic function of the $\sigma_i$'s. The differential operator for this residue is of order 6 and has 210 coefficients $a_{r_1,r_2,r_3,r_4,r_5}$ to be determined. We first get rid of those coefficients that are obviously zero. As discussed in the previous sections, these are the $a_{r_1,\cdots,r_5}$ with $r_1 \ge 1$ and those with ranks lower than $3!\,2!\,1! = 12$, namely $a_{0,1,1,1,3}$, $a_{0,1,1,2,2}$, and $a_{0,0,2,2,2}$ (including permutations of the last four indices); in total 140 coefficients vanish. It is easy to observe that the elementary coefficients in this case are $a_{1,1,1,1,2}$, $a_{0,1,1,2,2}$ and $a_{0,0,1,2,3}$ (including index permutations), and their corresponding tableaux are depicted in figure 7. Among them, only $a_{0,0,1,2,3}$ (including the permutations of the last four indices) is non-vanishing. Therefore the intersection number constraint $DJ = 24$ determines these coefficients, where $\rho$ runs over all permutations of $\{0,1,2,3\}$ and $v^{(\rho)}$ has the same meaning as in (2.20); for this particular case the non-vanishing elementary coefficients can be written down explicitly. Now we are left only with the non-elementary coefficients, whose ranks are all higher than 12, that is, $a_{0,0,1,1,4}$, $a_{0,0,0,3,3}$, $a_{0,0,0,2,4}$, $a_{0,0,0,1,5}$ and $a_{0,0,0,0,6}$. In figure 8 their corresponding tableaux are shown, arranged in such an order that the preferred coloring relates each tableau only with the ones to its left. The coefficients corresponding to the two tableaux in the second column of figure 8, namely $a_{0,0,1,1,4}$ and $a_{0,0,0,3,3}$, can be written in terms of the non-vanishing elementary one alone, and their analytic solutions follow directly.
Table 1. Final results for the pentagon diagrams. The first line lists the terms and the second line their analytic expressions, respectively. The last line shows the forward-limit channels corresponding to each term, where the cut lines in the Q-cut analysis are depicted in figure 9. For a given cut line, its momentum is chosen to be $\ell$.
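The counting in the previous paragraph can be checked mechanically. The sketch below enumerates the 210 multi-indices of the order-6 operator, removes those with $r_1 \ge 1$ (forced to vanish by $h_5 = \sigma_1$) and those of rank below $R_e = 12$, and lists the surviving index classes; it reproduces only the bookkeeping, not the actual values of the coefficients.

```python
from itertools import product
from math import factorial, prod
from collections import Counter

ORDER, N = 6, 5                       # order of D and number of sigma variables
R_E = prod(factorial(j) for j in range(1, N - 1))       # 1!*2!*3! = 12

def rank(r):
    return prod(factorial(x) for x in r)

all_indices = [r for r in product(range(ORDER + 1), repeat=N) if sum(r) == ORDER]
print("total coefficients:", len(all_indices))                    # 210

vanish_h5 = [r for r in all_indices if r[0] >= 1]                 # killed by h_5 = sigma_1
vanish_lowrank = [r for r in all_indices if r[0] == 0 and rank(r) < R_E]
print("vanishing:", len(vanish_h5), "+", len(vanish_lowrank),
      "=", len(vanish_h5) + len(vanish_lowrank))

survivors = [r for r in all_indices if r[0] == 0 and rank(r) >= R_E]
classes = Counter(tuple(sorted(r)) for r in survivors)
for rep, count in sorted(classes.items(), key=lambda kv: rank(kv[0])):
    print(f"class {rep}: rank {rank(rep)}, {count} coefficients")
```

Running this confirms the 140 vanishing coefficients and lists the surviving classes, headed by the elementary class of $a_{0,0,1,2,3}$ (rank 12) and followed by the non-elementary classes named in the text.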
This concludes the computation of the residues at the finite poles and at infinity (the computation of the residue at infinity is transformed into evaluating a prepared form with $h_n = \sigma_n$, in the same fashion as in [44]).
Analysis for the spurious poles
In this subsection we investigate the spurious poles that arise from the extra solutions to the polynomial scattering equations. Such contributions should vanish in consistent supersymmetric theories, and we will show that this is indeed true in this particular case. It is easy to observe that the extra solutions all coincide at the position $\sigma_1 = \sigma_2 = \cdots = \sigma_{n-1} = 1$. Naturally, we prefer to deal with poles at the origin and shift the spurious pole there by the parameter transformation $\sigma_i \to \sigma_i + 1$. The residue at the spurious pole is therefore associated with the meromorphic form below.
Figure 9. Pentagon diagram.
Figure 10. Again, for a given cut line in the Q-cut analysis, its momentum is chosen to be $\ell$.
where the factor $H_5$ is read off from (3.14) and (3.24). The product $\prod_{i+1<j}(\sigma_i - \sigma_j)$ contains all non-adjacent pairs $(\sigma_i - \sigma_j)$, with the gauge choice implicitly being $\sigma_5 = 1$. The shifted polynomial scattering equations then take a new form. Note that their leading-order terms do not give rise to a zero-dimensional intersection, and we have to apply transformations to these polynomials such that the divisors generated by them coincide at an isolated point. The explicit expressions for such transformations will not be needed, and we simply display the new polynomials. According to the transformation law, we also need to take care of the determinant of the transformations, which is simply 1 here.
In principle we need to construct a different differential operator following section 4.4 of [44]; to this end the hatted polynomials are homogenized in such a way that the higher-order terms are lowered to the same degree as the leading-order ones, so that the order of this operator is 4. Luckily the actual form of the operator is not needed in this case, since (3.24) is holomorphic at the shifted origin and the lowest term in its numerator is of degree 5. The action of the fourth-order differential operator on (3.24) must therefore vanish when evaluated at the shifted origin.
Since the spurious poles do not contribute, the total residue obtained previously is the final integrand for the five-point SYM amplitude at one loop; the expressions for the pentagon contributions $I^{P_i}_5$ and the remaining contributions are given in table 1 and table 2. Just like its four-point counterpart studied in [44], our result exhibits a clear one-to-one correspondence with the forward-limit channels in the Q-cut analysis. We expect this nice property to continue to hold for higher points.
The method we have illustrated so far generalizes naturally to the one-loop integrands for any number of external particles. For higher-point cases, we expect the integrands can be decomposed into several residues associated with the prepared form, or with similar expressions with slightly more complicated $h_n$'s.
Similar to the analysis in section 2, when $h_n$ is a monomial the corresponding local duality theorem leads to the vanishing of some coefficients.
The local duality equations arising from the scattering equations are universal regardless of the choice of $h_n$; that is to say, the fact that the coefficients in the corresponding differential operator can be related only to those with lower or equal ranks holds for any number of points, and most of the relations among these coefficients remain the same. The construction of the differential operator always boils down to solving for the elementary coefficients. Since the number of elementary coefficients is quite limited, these coefficients can be solved efficiently.
One-loop n-gon amplitude: a direct evaluation
In this section we discuss the evaluation of the generalized CHY form for the one-loop n-gon amplitude. The evaluation can be done directly since all n-gon amplitudes are already of the prepared form. The one-loop n-gon amplitude is conjectured in [17] to take the form (4.1), where PT denotes the Parke-Taylor factor. The terms in (4.1), after homogenization, are of the form
where $i\|j$ denotes pairs of neighboring indices in the cyclic order. In terms of the differential operator representation, we have an expression in which $v^{(\rho^{-1})}$ means the result of performing the permutation $\rho^{-1}$ on $v = \{0, 0, 1, 2, 3, \cdots, n-3, n-2\}$, and the subscripts of $v$ index the components of $v^{(\rho^{-1})}$. These $a$'s have already been obtained in eq. (2.29). On the other hand, as we will see from the later discussion, the spurious pole does not contribute here. Thus the sum of (4.3) over $\rho \in S_n$ is the final result for the general one-loop n-gon amplitude.
It is easy to see that each factor $\sigma_{\rho(i)} - \sigma_{\rho(i+1)}$ in the denominator is canceled by a factor in the numerator. The final degree of $\sigma$ in the numerator is $(n-2)(n-1)/2$.
According to our conjecture, the differential operator $D$ for the residue at the shifted origin is of order $(n-4)(n-1)/2 + 2$ for $n \ge 4$. Since this order is smaller than the numerator degree $(n-2)(n-1)/2$ (the difference is $n-3 \ge 1$), it is easy to see that the spurious residues all vanish for the n-gon with $n \ge 4$.
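A quick numerical check of this degree counting is given below; the two formulas it uses (numerator degree $(n-2)(n-1)/2$ and operator order $(n-4)(n-1)/2 + 2$) are the reconstructed expressions from the preceding paragraphs and are assumptions of this sketch rather than independent derivations.

```python
# Quick check of the degree counting used in the vanishing argument for the
# n-gon spurious poles.
for n in range(4, 13):
    numerator_degree = (n - 2) * (n - 1) // 2
    operator_order = (n - 4) * (n - 1) // 2 + 2
    assert numerator_degree - operator_order == n - 3   # always positive for n >= 4
    print(f"n={n:2d}: numerator degree {numerator_degree:3d} > operator order {operator_order:3d}")
```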
Conclusion and outlook
In this paper we use our previously proposed differential operator to compute the residues on the solutions of the scattering equations. We find that the coefficients in the differential operator can be determined in a combinatoric way. We present an analytical solution for the differential operator for the prepared forms of the generalized CHY integrands for n-point scattering amplitudes. We then use this differential operator to evaluate the one-loop CHY integrand for five points in N = 4 SYM, which is cast into prepared forms, and the one-loop CHY integrands of the n-gon amplitudes, which are naturally of the prepared form for any number of external lines.
In both examples, we arrive at the compact final expression effortlessly using our method, and the final results are analytical and identical to the ones obtained through the Q-cut analysis. Although the examples studied in this paper can be massaged into the prepared form, more complicated denominators may turn up in the integrands when we investigate higher-point one-loop amplitudes in super Yang-Mills. An immediate follow-up is to generalize our method to differential operators corresponding to these new ingredients and to determine how efficient the method is for higher-point amplitudes in comparison with other methods. We are hopeful that it is possible to work out such generalizations and solve for the corresponding differential operator analytically, which will lead to a straightforward evaluation of the one-loop generalized CHY forms.
At the moment of writing, the construction of generalized CHY forms in super Yang-Mills theory has been promoted to two loops for four external lines. Another natural direction to take is to study the combinatoric structures of the two-loop scattering equations and lift the coefficient-generating method to two loops, which in turn will provide a useful tool in constructing the generalized CHY form for any number of external lines at two-loop level.
Moreover, the method we propose is theory-independent and thus has the potential to evaluate any CHY-type expressions in pure Yang-Mills and Gravity theory. This, in turn, will make the generalized CHY forms an efficient method for computing amplitudes. | 9,515.8 | 2017-06-01T00:00:00.000 | [
"Mathematics"
] |
Increased Susceptibility of Mycobacterium tuberculosis to Ethionamide by Expressing PPs-Induced Rv0560c
Tuberculosis, an infectious disease, is one of the leading causes of death worldwide, and drug-resistant tuberculosis exacerbates its threat. Despite long-term and costly treatment with second-line drugs, treatment failure rates and mortality remain high. Therefore, new strategies for developing new drugs and improving the efficiency of existing drug treatments are urgently needed. Our research team previously reported that PPs, a new class of potential anti-tuberculosis drug candidates, can inhibit the growth of drug-resistant Mycobacterium tuberculosis. Here, as a result of further research on PPs, we report a synergistic effect of PPs with ethionamide (ETH), one of the second-line drugs. While investigating gene expression changes by microarray and 2DE (two-dimensional gel electrophoresis), we found that PPs induced the strongest overexpression of Rv0560c in M. tuberculosis. Based on this result, a protein microarray using the Rv0560c protein was performed, and it was confirmed that Rv0560c interacted most strongly with EthR, a repressor of EthA, the enzyme involved in activating ETH. Accordingly, a synergy experiment was conducted under the hypothesis that PPs increase the susceptibility of M. tuberculosis to ETH. As a result, in the presence of 0.5× MIC PPs, ETH inhibited the growth of drug-sensitive and drug-resistant M. tuberculosis even at a concentration roughly 10-fold lower than its original MIC. We also suggest that this effect is due to the interaction between PPs and Rv2887, the repressor of Rv0560c. The effect was further confirmed in a mouse model of pulmonary tuberculosis, supporting the potential of PPs as a booster that enhances the susceptibility of M. tuberculosis to ETH in the treatment of drug-resistant tuberculosis. However, more in-depth mechanistic studies and extensive animal and clinical trials are needed.
Introduction
Tuberculosis, a communicable disease caused by Mycobacterium tuberculosis, is one of the leading causes of death worldwide, killing 1.3 million HIV-negative (up from 1.2 million in 2019) and 214,000 HIV-positive people in 2020 (up from 209,000 in 2019) [1]. Moreover, the development of drug resistance, including multidrug resistance (MDR), extensive drug resistance (XDR), and total drug resistance (TDR), in M. tuberculosis decisively impedes the efficacy of currently available drug therapies [2]. To respond to the threat of tuberculosis to humankind, it is necessary to develop new drugs and treatment strategies based on the basic molecular biology of tuberculosis.
The development of anti-tuberculosis drugs started with streptomycin in 1944 by Nobel Prize laureate Selman Waksman, followed by p-aminosalicylic acid in 1949, isoniazid in 1952, pyrazinamide in 1954, ethambutol in 1962, rifampin in 1963, and cycloserine in 1964 [3]. As mentioned above, the current anti-tuberculosis drugs discovered from the 1950s to the 1970s were developed through clinical trials that continued into the 1980s. However, there followed a void in tuberculosis drug R&D of about 30 years, lasting until about 2000 [4]. This void in anti-tuberculosis drug discovery is all the more striking because it coincided with the emergence of drug-resistant M. tuberculosis. In contrast, the recent FDA approval of bedaquiline and delamanid for MDR/XDR tuberculosis has spurred the development of tuberculosis drugs [5].
To develop treatments for drug-resistant tuberculosis, our research team discovered and reported anti-tuberculosis drug candidates called PPs (methyl (S)-1-((3-alkoxy-6,7-dimethoxyphenanthren-9-yl)methyl)-5-oxopyrrolidine-2-carboxylate derivatives) [6]. PPs are a new class of drugs with basic structures different from those of existing tuberculosis drugs. Both in vitro and in vivo experiments confirmed that PPs could effectively inhibit the growth of drug-resistant M. tuberculosis. PPs also showed no detectable toxicity in single/repeat oral toxicity and genotoxicity studies performed at the GLP (good laboratory practice) level. These results suggest that PPs have sufficient potential for developing new drugs for treating drug-resistant tuberculosis. Therefore, various experiments were additionally performed, and here we report on gene expression changes induced by PPs and their excellent synergistic effect with ethionamide (ETH).
To briefly preview the article: during gene expression profiling of M. tuberculosis, overexpression of Rv0560c by PPs was confirmed, and the function of this protein was explored. A protein microarray of M. tuberculosis probed with the Rv0560c protein confirmed that Rv0560c interacted most strongly with EthR, the repressor of EthA involved in ETH activation. Accordingly, in vitro and in vivo experiments were conducted under the hypothesis that PPs would increase the susceptibility of M. tuberculosis to ETH. The results are described in this article.
Exploring PPs-Induced Alterations of Gene Expression in M. tuberculosis
Gene expression changes in M. tuberculosis H37Rv after treatment with PP1S were investigated (Figure 1). A two-dimensional gel electrophoresis (2-DE) experiment was performed to examine changes in protein expression in M. tuberculosis treated with PP1S at 1× and 10× MIC (Figure 1A). Seven differentially expressed proteins with fold changes of at least 1.5 between drug-treated and untreated samples were identified (Figure 1B). Of these, the two most over-expressed proteins (spots 1 and 2) were identified as the same protein, Rv0560c, a putative benzoquinone methyltransferase. Two under-expressed proteins (spots 3 and 4) were identified as enoyl-CoA hydratase and translation elongation factor EF-Tu, respectively. The remaining three proteins (spots 5, 6, and 7) showed no consistent changes. Comparative transcript analysis of M. tuberculosis after treatment with PP1S or PP2S by microarray also confirmed that the Rv0560c gene was the most highly upregulated (Figure 1C). Microarray results analyzed by genomic category are presented in Supplementary Table S2. Overexpression of Rv0560c was verified by real-time PCR (Figure 1D) and Western blotting (Figure 1E). PPs upregulated Rv0560c in a concentration-dependent manner, and salicylates used as positive control drugs also upregulated Rv0560c.
(Figure 1 caption: (B) Spots with significantly different expression between PP1S-treated and untreated groups; the two spots whose expression was significantly increased by PP1S were identified as Rv0560c. (C) Microarray analysis under the same conditions as 2DE showed that Rv0560c was the gene most upregulated by PPs. (D) RT-PCR and (E) Western blotting with an Rv0560c-specific antibody confirmed the overexpression; Rv0560c was also upregulated by salicylate (SAL). RT-PCR results are expressed as mean ± standard deviation.)
ETH-Boosting Activity of PPs
Rv0560c, confirmed to be overexpressed by PPs, was synthesized, and its interaction with all proteins of M. tuberculosis was investigated using a protein microarray (Figure 2). The results show that EthR had the highest interaction with Rv0560c (Figure 2A). To verify this result, the Rv0560c and EthR proteins were synthesized, and the direct interaction between the two proteins was tested by surface plasmon resonance (SPR) (Figure 2B). EthR induced a change in refractive index when added to Rv0560c immobilized on the sensor surface. The K_D of EthR was 31.9 µM, indicating that it binds Rv0560c with low micromolar affinity.
Given the critical role of EthR in the repression of EthA, we tested whether PPs could boost the anti-tubercular activity of ETH (Figure 3). In the presence of 0.5× MIC of PP1S or PP2S, ETH inhibited the growth of M. tuberculosis H37Rv even at concentrations more than 10-fold lower than the MIC of ETH alone (Figure 3A). However, in the opposite condition, no change in the MIC of PPs was observed in the presence of ETH at 0.5× MIC (Figure 3B). The ETH-sensitizing effect of PPs on M. tuberculosis was similar for the XDR strain (Supplementary Figure S1). Treatment with salicylates used as positive controls also increased the sensitivity of M. tuberculosis H37Rv to ETH (Supplementary Figure S2).
(Figure 3 caption: (A) In the presence of 0.5× MIC PPs, ETH inhibited the growth of M. tuberculosis over concentrations ranging from 0.5× down to 0.06× of its MIC (p < 0.05). (B) Conversely, no significant susceptibility-increasing effect on M. tuberculosis was observed at the various concentrations of PPs tested in the presence of 0.5× MIC of ETH (p > 0.05). Values are expressed as mean ± standard deviation, and statistical significance was determined using an unpaired Student's t-test comparing fluorescence values with samples treated with either ETH or PPs alone.)
In the M. tuberculosis-infected animal model, compared with the group administered 10 mg/kg/day of ETH alone, mice treated with PP2S in combination with 10 mg/kg/day of ETH showed significantly lower numbers of M. tuberculosis (Figure 4). In addition, PPs were tested for synergistic effects with first-line drugs, and no synergistic or antagonistic effects were observed for any of the drugs tested (Table 1).
Interaction between Rv2887, a Repressor of Rv0560c, and PPs
It was hypothesized that the overexpression of Rv0560c might occur through an interaction of PPs with Rv2887, a repressor of Rv0560c, and an experiment was conducted to test this (Figure 5). The interaction between PPs and Rv2887 was first analyzed in silico (Figure 5A). PP1S and PP2S could dock into a hydrophobic cavity formed by residues Leu13, Leu20, and Leu35 of Rv2887, while the phenanthrene rings of PPs could contact the Arg21 side chain through cation-π interactions. These results were confirmed by SPR, demonstrating direct binding of PP1S and PP2S to Rv2887, with K_D values of 274 µM and 250 µM, respectively (Figure 5B).
Discussion
M. tuberculosis is a globally distributed, lethal human pathogen. Its adaptation as a pathogen is further enhanced by the modularity, flexibility, and interactivity of mycobacterial effectors and their regulators [7]. There is a need to develop new drugs and therapies based on an understanding of the mechanisms that maintain the adaptability characteristic of these mycobacteria.
Our research team reported the anti-tuberculosis effect of PPs, a new class of candidate drugs structurally different from existing tuberculosis drugs [6]. As reported in our previous study, the anti-tuberculous activity and toxicity results for PPs confirmed the feasibility of entering the next stage of development. Thus, we proceeded with various in-depth studies. While exploring the M. tuberculosis gene expression pattern, it was found that PPs significantly upregulated Rv0560c. Based on this finding, a synergistic effect of PPs with ETH was inferred and verified.
Two highly over-expressed proteins in response to PPs were identified as the same benzoquinone methyltransferase encoded by Rv0560c. Although the function of Rv0560c is currently unknown, it shares sequence identity with a benzoquinone methyltransferase involved in ubiquinone biosynthesis in Escherichia [8,9]. Rv0560c is also suggested to be involved in menaquinone biosynthesis [5-7], specifically in demethylmenaquinone-to-menaquinone conversion [8], the same as Rv0558 [8-10]. Our data confirmed that Rv0560c binds strongly to EthR, a TetR/CamR family repressor that controls ETH bioactivation in M. tuberculosis [10]. Given the critical role of EthR in the repression of EthA (the ETH bioactivator) [11], we tested whether PPs could boost the anti-tubercular activity of ETH. A synergistic effect of PPs at 0.5× MIC was demonstrated even at ETH concentrations lower than 1× MIC. This synergistic effect was also seen in a tuberculosis mouse model. In an experiment using salicylate, which is known to cause overexpression of Rv0560c [8], it was likewise confirmed that M. tuberculosis became more sensitive to ETH, validating the results of this study. Domain and homology analyses suggested that Rv2887 might be a transcriptional regulator [12]. Our data suggest that Rv0560c is controlled by a tight interaction between the repressor Rv2887 and its binding motif [9,13,14]. Overexpression of the MarR (multiple antibiotic resistance repressor) family transcription factor Rv2887 can lead to repression of Rv0560c [13], and Rv0560c is upregulated following deletion of Rv2887 [14]. This study confirmed that PPs can bind to Rv2887, similar to previous reports showing that salicylate can increase the expression of Rv0560c by binding to Rv2887 [15]. Therefore, we extend the existing EthR-EthA [16] and Rv2887-Rv0560c [13] pathways and propose a new, merged Rv2887-Rv0560c-EthR-EthA pathway.
Despite these results, this study still has limitations. As mentioned above, how EthR acts as a repressor of EthA in the activation of ETH is already well understood [11,16]. With this mechanism in mind, the observed interaction between EthR and the Rv0560c overexpressed by PPs, and the resulting increase in the sensitivity of M. tuberculosis to ETH, were interpreted on the basis of the EthR-EthA pathway. However, the lack of direct experimental evidence for this remains a limitation and needs to be investigated in more detail in future studies. In addition, the function of Rv2887 as a repressor of Rv0560c has been reported previously [9,13,14]. This study confirmed the interaction between PPs and Rv2887 and the resulting overexpression of Rv0560c. However, more experiments are required to determine whether this result is due to the Rv2887-Rv0560c pathway. Moreover, more extensive and in-depth animal validation is needed, along with various additional experiments on drug target validation.
ETH, one of the most effective second-line drugs, has several side effects. Thus, it is essential to reduce its dose while maintaining its anti-tuberculosis effect [17]. Accordingly, several studies have reported on the development of new drugs that can increase the sensitivity of M. tuberculosis to ETH [17,18]. Therefore, the synergistic effect of PPs with ETH has significant implications for the treatment of tuberculosis. Further mechanistic studies, extensive animal efficacy studies, and clinical trials are needed in the future.
In conclusion, overexpression of Rv0560c by PPs increases the sensitivity of M. tuberculosis to ETH through the Rv2887-Rv0560c-EthR-EthA pathway newly proposed in this study (Supplementary Figure S3). Therefore, using PPs could be a new treatment strategy for drug-resistant tuberculosis. Further research needs to be carried out extensively before PPs can be applied to clinical treatment.
Drugs
PPs were prepared as previously reported [6]. Among PPs, the structures of PP1S and PP2S tested in this study are presented in Supplementary Figure S4. Isoniazid, rifampicin, streptomycin, ethambutol, pyrazinamide, ETH, and salicylate were purchased from Sigma-Aldrich (USA).
Exploration of Gene Expression of M. tuberculosis Changed by Drugs
M. tuberculosis cultures in the intermediate exponential growth phase were used to examine drug-induced changes in gene expression. A freshly prepared M. tuberculosis culture that had reached the intermediate exponential phase was adjusted to an OD600 of 0.8 using a spectrophotometer (DR1900, Hach, Loveland, CO, USA). PPs were then added at 1× MIC or 10× MIC, and cultures were incubated for 6 h at 37 °C and 180 rpm. The culture volume was 30 mL, and three samples were tested for each drug, including a control group not treated with PPs. The drug-exposed cultures were centrifuged at 3000 rpm for 10 min and washed to obtain cell pellets. RNA extracted from the harvested M. tuberculosis was used for the microarray experiment, and extracted protein was used for the 2DE experiment. The microarray and 2DE experiments are described briefly below; detailed methods are presented in the Supplementary Material.
Microarray
Microarray experiments with drugs in M. tuberculosis were performed according to previously reported methods [19]. First, RNA was extracted from M. tuberculosis with an RNAprotect Bacteria Reagent kit (QIAGEN, Hilden, Germany). The extracted RNA was sent to ebiogen (Seoul, Korea) for microarray analysis. Briefly, RNA samples were reverse transcribed into cDNA, and target cRNA probes were synthesized and hybridized on a MYcroarray.com (M. tuberculosis) 3 × 20 k microarray. Images were then acquired with an Agilent DNA microarray scanner (Agilent Technologies, Santa Clara, CA, USA) and quantified using Feature Extraction software (Agilent Technologies, Santa Clara, CA, USA). Differentially transcribed genes were sorted by functional category [20]. A hypergeometric distribution method was used to determine which functional categories of genes were affected by the drug [21,22]. Genes identified with a fold-change cutoff of >2 and p < 0.01 were analyzed for the in-depth study.
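For illustration only, a hypergeometric enrichment test of the kind described can be computed with SciPy; the function name and the gene counts below are invented, not taken from the study.

```python
from scipy.stats import hypergeom

def category_enrichment_pvalue(n_total_genes, n_category_genes,
                               n_regulated_total, n_regulated_in_category):
    """P(X >= k): probability that at least this many regulated genes fall in
    the functional category by chance, under a hypergeometric null."""
    # The survival function at k-1 gives P(X >= k).
    return hypergeom.sf(n_regulated_in_category - 1,
                        n_total_genes,
                        n_category_genes,
                        n_regulated_total)

# Illustrative numbers (not from the study): 4,000 genes in total,
# 120 in the category, 200 regulated genes overall, 15 of them in the category.
p = category_enrichment_pvalue(4000, 120, 200, 15)
print(f"enrichment p-value: {p:.3g}")
```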
2DE
Control and drug-treated cells were harvested, lysed in lysis buffer (pH 7, 0.3% SDS, 28 mM Tris base), and transferred to tubes containing silica beads (0.1 mm, MP Biomedicals, Irvine, CA, USA). The cells were disrupted with 10 pulses of 30 s using a BeadBug microtube homogenizer (Benchmark Scientific, Inc., Sayreville, NJ, USA) at 4000 Hz, with 20 s of intermediate ice-cooling. The concentration of extracted protein was measured by BCA (bicinchoninic acid) protein assay (Pierce, Waltham, MA, USA). The 2DE experiment was performed by ProteomeTech (Seoul, Korea) as previously described [6]. Protein spots were analyzed with ImageMaster 2D Platinum software (Amersham Biosciences, Amersham, UK). Separation of proteins from the gel and protein identification were performed as reported previously [23,24].
Confirmation of Overexpression of Rv0560c Gene Using Real-Time PCR and Western Blot
As previously reported, real-time PCR experiments for M. tuberculosis were performed [25]. The Rv0560c-targeting primers used in this study were as follows: sense primer (5′-3′), GTAGAACTGGCTCGGCATGA; antisense primer (5′-3′), CCGGTCGAATACCAACACGA. Western blotting experiments were also performed as previously reported [26].
Protein Microarray
First, the Rv0560c protein was biotinylated using a biotinylation kit (EZ-Link Micro NHS-PEG4, Thermo Fisher Scientific, Waltham, MA, USA). An MTBprot™ Mycobacterium tuberculosis Proteome Microarray (BC Biotechnology, China) containing 4262 full-length recombinant M. tuberculosis proteins was treated with 3 µg of biotinylated Rv0560c protein for 8 h at 4 °C and sequentially incubated with Streptavidin-Alexa Fluor 546 conjugate. Rv0560c-specific binding proteins were detected by scanning with a GenePix 4100A microarray laser scanner (Molecular Devices, San Jose, CA, USA). This analysis was conducted by Geneonbiotech (Daejeon, Korea).
In Silico Protein Modeling and Docking
Molecular docking studies were performed with the Glide docking program of the Schrödinger suite to evaluate the binding mode of PPs within the Rv2887 protein (PDB code 5X7Z), using the OPLS2005 force field. The grid center was set as the center of the selected residue, and the length of one side of the cubic grid was 15 Å. After the grid was created, ligand molecules were docked to the generated receptor grid using the Glide SP docking program.
SPR Analysis
The ProteOn XPR36 protein interaction array system (Bio-Rad, Hercules, CA, USA) was used to measure the interactions between the Rv0560c and EthR proteins and between the Rv2887 protein and PPs. After the purified Rv0560c or Rv2887 protein was immobilized on a ProteOn GLH sensor chip, dilutions of the EthR protein or PPs were prepared in PBS at various concentrations and passed over the chip at a flow rate of 100 µL/min. Data were analyzed with ProteOn Manager software. The rate of complex formation is described by the association rate constant (k_a, M⁻¹ s⁻¹) and the rate of complex decay by the dissociation rate constant (k_d, s⁻¹).
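The equilibrium dissociation constant reported for such SPR experiments follows from the two rate constants as K_D = k_d / k_a. A minimal sketch, with illustrative rate values rather than the measured ones, is:

```python
def equilibrium_kd(ka_per_molar_per_s, kd_per_s):
    """Equilibrium dissociation constant K_D = k_d / k_a (in molar)."""
    return kd_per_s / ka_per_molar_per_s

# Illustrative rate constants (not measured values from the study).
ka = 1.0e3   # association rate constant, M^-1 s^-1
kd = 3.2e-2  # dissociation rate constant, s^-1
print(f"K_D = {equilibrium_kd(ka, kd) * 1e6:.1f} uM")
```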
In Vitro Evaluation of the Synergistic Anti-Tuberculosis Effect
The synergistic effect of drugs against M. tuberculosis was determined following previous reports [27,28]. It was evaluated against H37Rv and XDR M. tuberculosis using the checkerboard titration method in 96-well plates. In each well, 200 µL of 7H9 broth was mixed with M. tuberculosis (1.6 × 10⁵ CFU/mL), and various concentrations of each drug were prepared by two-fold serial dilution. After 7 days of incubation under aerobic conditions at 37 °C, 20 µL of 0.02% resazurin solution (Sigma, Saint Louis, MO, USA) was added to each well, and plates were incubated at 37 °C for an additional 2 days. Wells with M. tuberculosis growth turned pink, while wells in which growth was suppressed remained blue, the original color of resazurin. The color was measured and quantitatively analyzed with a Victor multimode microplate reader (PerkinElmer, Waltham, MA, USA). The MIC was the lowest drug concentration inhibiting M. tuberculosis growth. The FICI (fractional inhibitory concentration index) was calculated using these MIC values.
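A minimal sketch of how an FICI can be computed from checkerboard MICs; the interpretation cut-offs below follow common practice (they may differ from the study's exact criteria), and the MIC values are illustrative, not the study's data.

```python
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index from checkerboard MICs."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret_fici(value):
    # Commonly used cut-offs; the study's thresholds may differ.
    if value <= 0.5:
        return "synergy"
    if value > 4.0:
        return "antagonism"
    return "no interaction"

# Illustrative MICs in ug/mL (not measured values from the study).
index = fici(mic_a_alone=2.0, mic_b_alone=8.0, mic_a_combo=1.0, mic_b_combo=0.5)
print(index, interpret_fici(index))
```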
Evaluation of ETH-Boosting Effect in a Pulmonary Tuberculosis Mouse Model
In this experiment, BALB/c female mice (Dooyeol Biotech, Seoul, Korea) were used (6 mice per group). A freshly cultured M. tuberculosis H37Rv suspension with an OD600 of about 0.8 was aerosolized for 30 min using a nebulizer (Omron, Kyoto, Japan). A low dose of about 2 log CFU per mouse was used for infection. The drug was dissolved in corn oil (Sigma, Saint Louis, MO, USA). Mice were treated with the drug once a day, 5 days a week, for 4 weeks from the first day of infection. The mice were then sacrificed, and their lung bacterial load was enumerated as CFU. This animal study was conducted under the Soonchunhyang University Institutional Animal Care and Use Committee (IACUC, SCH22-0098).
Synthesis of Proteins and Preparation of Polyclonal Antisera
Rv3855 (Supplemental Figure S5), Rv2887 (Supplemental Figure S6), and Rv0560c (Supplemental Figure S7) proteins were synthesized, and rabbit polyclonal antisera against Rv0560c protein was prepared (Supplemental Table S1). Experiments on these were performed by Young In Frontier (Seoul, Korea), and detailed methods are presented in Supplemental Material. | 6,095.6 | 2022-10-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
FLUID IMAGE REGISTRATION USING A FINITE VOLUME SCHEME OF THE INCOMPRESSIBLE NAVIER STOKES EQUATION
This paper proposes a stable numerical implementation of the Navier-Stokes equations for fluid image registration, based on a finite volume scheme. Although fluid registration methods have succeeded in handling large deformations in various applications, they still suffer from perturbed solutions due to the choice of the numerical implementation. Thus, a robust numerical scheme in the optimization step is required to enhance the quality of the registration. A key challenge is the use of a finite volume-based scheme, since we have to deal with an equation of hyperbolic type. We propose the classical Patankar scheme based on pressure correction, called the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE). The performance of the proposed algorithm was tested on magnetic resonance images of the human brain and hands, and compared with the classical implementation of fluid image registration [13], in which the authors used successive over-relaxation in the spatial domain with Euler integration in time to handle the nonlinear viscous term. The obtained results demonstrate the efficiency of the proposed approach, visually and quantitatively, using the difference between images as well as the PSNR and SSIM measures.
1. Introduction. Image registration is one of today's most challenging problems in image processing; its main objective is to find geometrical correspondences between two or more images, which we call the reference and the template [9,39,31,10,19,45,44]. Image registration is an important tool in various areas of application such as astronomy, robotics, and especially bio-medical imaging [32,40,38,30,20,23,2]. These images contain a similar but not identical object. Indeed, scalar intensities are known while a transformation vector is to be calculated. To compute this vector, a natural way is to formulate image registration as an optimization problem involving a distance measure to evaluate the similarity. However, due to different illuminations (grey-levels) of the image regions, the pairing cannot always be achieved successfully. An additive regularization term is then needed, motivated by the nature of the transformation. Broit introduced the so-called elastic regularization [5]; this term is based on a linear elasticity model. Even if this model has been used in several image registration problems [24,11,29], it has limitations, such as not handling largely deformed data sets [32]. A more robust term is constructed from nonlinear elasticity, which is addressed in many works; see [26,22,33,34]. The advantage of nonlinear models is that they preserve topology much better and also allow for intuitive deformations when the displacements are large. In contrast, nonlinear models are still limited in the treatment of irregular deformations. Recently, a non-local topology-preserving segmentation-guided registration method was proposed to handle large deformations, but it is restricted to smooth ones, where the shapes to be matched are viewed as hyperelastic materials. To treat large transformations, Christensen [14] developed one of the most effective models used in non-parametric registration, the fluid registration scheme, together with the approach of Thirion [12,13]; other fluid-type registration methods are also proposed in [4,8,27]. The purpose of fluid image registration is to compute the displacement u for a given force field f, while the deforming image is considered to be embedded in a viscous fluid whose motion is governed by the Navier-Stokes equations for conservation of momentum (where the pressure is neglected). This equation is given by

(1)    µ∆v(x,t) + (λ + µ)∇(∇·v(x,t)) = f(x,t,u(x,t)),    v(x,t) = ∂_t u(x,t) + v(x,t)·∇u(x,t).
The component ∆v is called the viscous term because it constrains the velocity field spatially, while the term ∇(∇·v(x,t)) allows for contraction and expansion of the fluid. The second part of (1), defining the material derivative of the displacement u, nonlinearly relates the velocity v and the displacement vector field. The constants µ and λ are the Lamé coefficients. Although the fluid problem appears easy to solve, in reality it is quite complicated. One of the key complications is the choice of the demons force [6,17,7]. Moreover, since fluid image registration is not based on an optimization approach but on a set of nonlinear partial differential equations (PDEs), great care must be taken in the choice of the discretization method. Another well-known drawback of this method is the computational cost. Indeed, numerically, fixing the force field f, the first part of equation (1) is solved for v using the successive over-relaxation (SOR) scheme [14], which computes an accurate fluid model at the expense of a large computational time. Then an explicit Euler scheme is used to advance u in time. The method requires at each iteration the computation of the Jacobian matrix of the displacement field, which is also computationally very expensive. This framework is thus time-consuming, which motivates the search for faster implementations.
The focus of this paper is on a stable and fast numerical implementation of the Navier-Stokes equations for an incompressible fluid to resolve the fluid image registration problem, introducing ideas from computational fluid dynamics into problems in image analysis [42,18,15].
The use of the finite difference approximation is limited because it does not take into account the properties of the continuous operator. To overcome this problem, we choose an alternative and appropriate discretization scheme for the continuous operator. In this paper, we propose a finite volume type scheme to handle the different properties of the fluid registration in the discretization of the proposed Navier-Stokes equation [28]; in particular, we define a pressure term in the registered image. Despite the success of the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm [35] (introduced by Brian Spalding and his student Suhas Patankar [37]) in solving the Navier-Stokes equations and many other problems in computational fluid dynamics (CFD) [16,35,41], no such attempt has been made to solve the fluid image registration problem. For this reason, in our work we use the SIMPLE algorithm as a discretization approach to solve the Navier-Stokes equation applied to the image registration problem. The numerical results confirm that our proposed scheme is more efficient than the finite difference discretization used in the previous literature [31].
This paper is organized as follows: Section 2 explains the main concepts of the image registration and the proposed Navier-Stokes equation for solving the fluid registration problem. In Section 3, we propose the discretization scheme. Finally, simulations and comparisons of our result with the finite difference method are presented in Section 4.
2. Mathematical model of fluid registration. The registration problem is usually formulated as a minimization problem between two images, the template image and the reference image [31,25]. Given are two images T, R : Ω ⊂ R^d → R, compactly supported on Ω, where d = 2. The goal is to find a transformation x + u(x) on Ω ⊂ R^d, or a deformation, such that ideally T(x + u(x)) ≈ R(x) for all x ∈ Ω. This goal is achieved by minimizing a so-called distance measure D. As this problem is ill-posed, an appropriate regularization S is inevitable.
A variational formulation of the image registration problem is to find a minimizer u of

(2)    J(u) = D(T(· + u), R) + α S(u),    u ∈ A,

where A denotes the set of admissible transformations and α is a regularization parameter.
The regularization term S is in general based on the gradient of u, denoted by ∇u. Important choices for this regularizer include the diffusion registration [31]

(3)    S_diff(u) = (1/2) ∑_{l=1}^{d} ∫_Ω |∇u_l|² dx,

and the elastic registration [31,25]

(4)    S_elas(u) = ∫_Ω (µ/4) ∑_{j,k=1}^{d} (∂_{x_j} u_k + ∂_{x_k} u_j)² + (λ/2) (∇·u)² dx,

where d is the dimension of the image and λ and µ are the Lamé coefficients. One of the most successful regularization choices is the fluid one [31], given as an elastic potential of ∂_t u. To solve this problem, the Euler-Lagrange identity is used, which coincides with the resolution of the Navier-Lamé equation. Based on the properties of this regularization, we propose a dynamic equation for incompressible Newtonian fluids, governed by the Navier-Stokes equations; this equation couples the velocity vector field v to a scalar pressure p:

(5)    ∂v/∂t + v·∇v = −∇p + ν∆v + f(·, u(·)),    ∇·v = 0,

where the velocity v is related to the displacement u through

(6)    v(x,t) = ∂_t u(x,t) + v(x,t)·∇u(x,t).

The main analogy followed in this paper is the parallel between the incompressible Newtonian fluid and the image velocity of each pixel v(x) under the image registration concept. To this end, the Navier-Stokes equation (5) is introduced, where ν is the diffusion factor. The pressure p is modelled as the effect of each region on its neighbours in the image during the registration process; in the following we treat p as an external force. On the other hand, ∇·v = 0 is well posed since the pixels are incompressible. The analogy is summarized in Table 1.
(Table 1. The analogy between the incompressible Newtonian fluid and image registration.)
Finally, f is taken to be the external force obtained from
the gradient of the distance D between the two images T and R. A typical choice for this distance is the sum of squared differences (SSD) measure, defined as

(7)    D_SSD(T, R; u) = ‖T(· + u(·)) − R(·)‖²_{L²(Ω)}.

Hence the external force f is computed by

(8)    f(x, u(x)) = −(T(x + u(x)) − R(x)) ∇T(x + u(x)).

We can effectively assure the existence of a solution to problem (5) using the techniques in [42], since we deal with a two-dimensional problem. There are many advantages in the use of the Navier-Stokes equation. Firstly, the existence of a solution is not an issue, since it is well developed in the literature [42], while uniqueness is assured only in the 2D case. Secondly, there are many stable numerical approaches that can be used to solve this equation, derived from a classical problem of fluid dynamics. Finally, the method can be implemented efficiently, and a sufficient theoretical framework exists on which we can rely to understand the obtained results.
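A minimal NumPy/SciPy sketch of the SSD measure (7) and a force of the form in (8). Since (8) was reconstructed from context, the sign and scaling used here are an assumption, and all function names and array conventions are ours rather than the authors'.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(T, u):
    """Sample T at x + u(x); u[..., 0] is the x (column) displacement,
    u[..., 1] the y (row) displacement, both in pixel units."""
    ny, nx = T.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    coords = np.stack([yy + u[..., 1], xx + u[..., 0]])
    return map_coordinates(T, coords, order=1, mode="nearest")

def ssd_and_force(T, R, u):
    """SSD distance (7) and a force of the form -(T(x+u) - R) * grad T(x+u)."""
    Tu = warp(T, u)
    diff = Tu - R
    ssd = np.sum(diff ** 2)
    grad_y, grad_x = np.gradient(Tu)        # derivatives along rows and columns
    force = -diff[..., None] * np.stack([grad_x, grad_y], axis=-1)
    return ssd, force
```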
For mathematical transparency of the Navier-Stokes equation (5), a convenient approach is to consider its dimensionless form [18], obtained by introducing a reference length L* and a reference velocity V*. We then set

x′ = x/L*,    t′ = t/T*,    v′ = v/V*,    p′ = p/V*²,    f′ = f L*/V*²,

where T* is the reference time defined as T* = L*/V*. Concerning the choice of these parameters, we initialise the reference velocity by V* = 1 and the reference length L* by the length of the image domain. Substituting into (5), we obtain for v′, p′, f′ the following equation

(9)    ∂v′/∂t′ + v′·∇v′ = −∇p′ + (1/R_e)∆v′ + f′,

where R_e is a dimensionless constant called the Reynolds number, defined as

(10)    R_e = V* L* / ν.

In practice, the Reynolds number is taken in the interval [2, 100]. There is in fact a relation between the choice of the Reynolds number and the type of deformations: the Reynolds number becomes higher when the deformations between the template and reference images are larger. To avoid complicating the notation, we keep the same notation as in (5), i.e., we substitute v′, p′, f′ by v, p, f. Then we rewrite equation (9) as follows:

(11)    ∂v/∂t + v·∇v = −∇p + (1/R_e)∆v + f,    ∇·v = 0.

In the following section, we discuss the proposed discretization scheme for the Navier-Stokes equation (11), appropriate to the image registration problem.
3. Discretization.
To solve the Navier-Stokes equation (11), we use a finite volume discretization based on the staggered control volumes illustrated in Fig. 1. To calculate the variables v_1 (the horizontal component of velocity), v_2 (the vertical component of velocity) and p (pressure), we use three staggered meshes [21,36]. The use of a staggered mesh has a number of advantages for solving incompressible flow problems. In particular, this type of grid overcomes the numerical problems associated with pressure-velocity coupling that arise when a collocated grid is used (spurious pressure oscillations). Indeed, the representation of curved boundaries on a staggered Cartesian grid avoids some complexities which are inevitable when a non-staggered one is used. The control volumes for v_1 and v_2 are displaced with respect to the control volume for the continuity equation. In Fig. 1, 'P' denotes the node at which the partial differential equation is approximated, and 'E', 'W', 'N', 'S' are its neighbours. Cell faces 'e' and 'w' for v_1 and 'n' and 's' for v_2 lie midway between the nodes.
Firstly, we rewrite the momentum equations of (11) in differential form for both velocity components:

(12)    ∂v_1/∂t + ∂(v_1 v_1)/∂x + ∂(v_2 v_1)/∂y = −∂p/∂x + (1/R_e)(∂²v_1/∂x² + ∂²v_1/∂y²) + f_1,
        ∂v_2/∂t + ∂(v_1 v_2)/∂x + ∂(v_2 v_2)/∂y = −∂p/∂y + (1/R_e)(∂²v_2/∂x² + ∂²v_2/∂y²) + f_2,
        ∂v_1/∂x + ∂v_2/∂y = 0.

To solve this system of equations, we discretize each equation separately and use the SIMPLE method.
The discretized momentum equation for v_1 is derived by integrating the first equation of (12) over the control volume corresponding to v_1, using the points (E, P, n, s) shown in Fig. 1, and over the time interval from t to t + ∆t. Using the fact that the velocities v_1 and v_2 do not depend on the vertical and horizontal components, respectively, on each control-volume face, this yields the discrete balance (13)-(14), with vol = ∆x∆y. To simplify the study of this problem, we introduce new entities, the face fluxes J_E, J_W, J_N and J_S. Substituting these new entities into equation (14) gives (15). Similarly, integrating the third (continuity) equation of (12) over the corresponding control volume and time interval gives (16); combining (15) and (16), we find (17). On the other hand, to calculate the entities (J_E, J_W, J_N, J_S) on each grid, an approximation method is needed. For this reason, we suppose that the flow is unidirectional and laminar, with constant pressure and without the external force (f = 0), on each grid cell [35]. Thus, the first equation of (12) reduces in the ox direction to equation (18), and the second one, representing the equation in the vertical direction oy, to equation (19). In order to respect the conservation of the velocities v_1 and v_2, these quantities must be continuous across the common boundary of two control volumes. Therefore, we supplement equations (18) and (19) with boundary conditions that depend on the control volume corresponding to the velocity components v_1 and v_2, given by the following systems.
The first system, (20), uses the grid {P, E, s, n}, and the second, (21), uses the grid {w, e, P, N}; see Fig. 1.
To calculate the entities (J_E, J_W, J_N and J_S), we use the solutions of equations (20) and (21). The solution of equation (20) is given by expression (22), and the solution of (21) by expression (23). Here v_1e and v_1n are only notations used to distinguish the two solutions of equations (20) and (21). To simplify the notation, we introduce the quantity P_e defined in (24), called the Peclet number, which represents, on the boundary portion of the control volume, the local ratio of inertial forces to viscous forces. Equation (22) is then rewritten as (25). We can now calculate the expression of J_E using equation (25); injecting the expression of J_E into (17), we obtain (26). In the same way, we can find the other expressions J_W, J_N and J_S (a small numerical illustration of this Peclet-based weighting is given below).
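The weighting induced at a control-volume face by the exact 1D convection-diffusion solution can be written as A(Pe) = Pe / (exp(Pe) − 1). Because the derivation above is only partially recoverable, the sketch below is an assumption: it shows the standard exponential-scheme form (in Patankar's sense), not necessarily the paper's exact expressions, and all names are ours.

```python
import numpy as np

def exponential_scheme_weight(peclet):
    """A(Pe) = Pe / (exp(Pe) - 1), the exact 1D convection-diffusion weighting."""
    pe = np.asarray(peclet, dtype=float)
    # Guard against 0/0: A(Pe) -> 1 as Pe -> 0.
    small = np.abs(pe) < 1e-8
    return np.where(small, 1.0, pe / np.expm1(np.where(small, 1.0, pe)))

# Face Peclet number as defined in the text: Pe = Re * v_face * dx.
Re, v_face, dx = 50.0, 0.2, 1.0 / 256   # illustrative values only
print(exponential_scheme_weight(Re * v_face * dx))
```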
Finally, the discretization equation can be written as

(27)    a_P v_1,P = a_E v_1,E + a_W v_1,W + a_N v_1,N + a_S v_1,S + b,

where the coefficients a_P, a_E, a_W, a_N, a_S and the source term b are assembled from the face fluxes, and the face Peclet numbers are P_e = (R_e v_1)_E ∆x, P_n = (R_e v_2)_n ∆y and P_s = (R_e v_2)_s ∆y. The momentum equation for the other direction is obtained in the same way [35], so the discretization for v_2 is given by equation (28), with analogous coefficients. Since we have obtained the different momentum equations, the pressure must be corrected at each step to enforce ∇·v = 0.
3.1. The pressure correction equation. The main idea of this step is to improve the guessed pressure p * such that the velocity field will progressively converge to the solution of the continuity equation, last equation of (12).
Generally, the corrected pressure p is obtained using

(29)    p = p* + p′,

where p′ is the pressure correction. Then, we have to compute the new velocity components corresponding to the corrected pressure. The two corresponding velocity corrections, denoted by v′_1 and v′_2 respectively, are obtained through

(30)    v_1 = v*_1 + v′_1,    v_2 = v*_2 + v′_2.

Using the fact that v*_1 satisfies equation (27), if we replace the expression of v_1 in equation (27), we obtain an algebraic problem that is easy to solve following the same steps as in [35].
Finally, the pressure-correction equation is given by (31) [35]. To solve these equations, we use the resulting algebraic systems to obtain fields satisfying the conservation equations [35]. We summarize the proposed method in Algorithm 1 below.
Algorithm 1: The proposed finite volume algorithm for image registration.
Data: v*_1, v*_2, the pressure field p*, and the Reynolds number 1 ≤ R_e ≤ 1000.
Result: The velocities v_1, v_2 and the pressure p.
1. Solve the momentum equations (27) and (28) with the guessed pressure p*.
2. Solve the pressure-correction equation (31) and update the pressure and the velocities via (29) and (30).
3. Repeat the whole procedure until convergence is reached.
Once we have calculated v_1 and v_2, the computation of the deformations u is given through a finite difference scheme. In the following subsection, we detail the computation of the deformations u.
3.2. Approximation of the displacement u.
To compute the displacement u = (u_1, u_2) from the associated velocities v_1 and v_2, calculated through Algorithm 1, we use an Euler scheme to solve equation (6). For each grid point x_i ∈ R², with a fixed index i = (i_1, i_2) ∈ N², we have u^k(x_i) ∈ R², and we set

U_i^k = (U_i^{k,1}, U_i^{k,2}) ≈ u^k(x_i),

where U_i^{k,1} and U_i^{k,2} are, respectively, the first and second component approximations of u. We also set

V_i^k = (V_i^{k,1}, V_i^{k,2}) ≈ v^k(x_i),

which is the velocity approximation, where V_i^{k,1} is the first component and V_i^{k,2} is the second one.
As a first step, to approximate the ∇u term, a centred finite difference approximation is used. For a fixed iteration k and for i = (i_1, i_2), we form the Jacobian matrix approximation

(∇u)_i^k ≈ J_i^k = [ (U^{k,1}_{i_1+1,i_2} − U^{k,1}_{i_1−1,i_2})/(2∆x)   (U^{k,1}_{i_1,i_2+1} − U^{k,1}_{i_1,i_2−1})/(2∆y) ;  (U^{k,2}_{i_1+1,i_2} − U^{k,2}_{i_1−1,i_2})/(2∆x)   (U^{k,2}_{i_1,i_2+1} − U^{k,2}_{i_1,i_2−1})/(2∆y) ].

Secondly, for the partial time derivative ∂_t u, we use a forward finite difference approximation. Therefore, from equation (6), the displacement and the velocity are connected through the following Euler scheme, performed for all i:

U_i^{k+1} = U_i^k + ∆t (I − J_i^k) V_i^{k+1}.

Our algorithm to compute the transformations u is finally summarized in Algorithm 2; a small numerical sketch of this update step is given below. If the relative change of the distance measure is brought below a user-supplied tolerance tol = 10⁻⁵, the iteration is stopped; otherwise, with v^{k+1}_1 and v^{k+1}_2 from Algorithm 1, we return to step 1.
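A minimal NumPy sketch of the update just described, assuming the reconstructed form U^{k+1} = U^k + ∆t (I − ∇U^k) V^{k+1}; the function names and array conventions (axis 0 = y, axis 1 = x; component 0 = x, component 1 = y) are our own, not the authors'.

```python
import numpy as np

def jacobian(u, dx=1.0, dy=1.0):
    """Centred-difference Jacobian of the displacement field u (ny, nx, 2),
    where u[..., 0] is the x component and u[..., 1] the y component."""
    du1_dy, du1_dx = np.gradient(u[..., 0], dy, dx)   # axis 0 = y, axis 1 = x
    du2_dy, du2_dx = np.gradient(u[..., 1], dy, dx)
    J = np.empty(u.shape[:2] + (2, 2))
    J[..., 0, 0], J[..., 0, 1] = du1_dx, du1_dy       # row 0: grad of u_1
    J[..., 1, 0], J[..., 1, 1] = du2_dx, du2_dy       # row 1: grad of u_2
    return J

def euler_step(u, v, dt):
    """One forward-Euler step of du/dt = (I - grad u) v (material derivative)."""
    J = jacobian(u)
    convected = np.einsum("...ij,...j->...i", J, v)   # (grad u) v per pixel
    return u + dt * (v - convected)
```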
Algorithm 2:
The proposed algorithm to compute the displacement u.
4. Results and discussion. In this section, we evaluate the performance of the proposed registration model. We have tested our algorithm on a benchmark of approximately one hundred MRI images. We present only five of them, chosen with different deformation types and grey-level histograms. To measure the robustness of the proposed algorithm, we compare it with the classical implementation of the fluid registration model [13], for which the authors used a finite difference scheme. We confirm that neither human participants nor animals were involved in this research.
In the first experiment, we first choose two images of human hands, used for the first time in [1] and downloaded from the website 1. In Fig. 2, we present the reference and template images.
Our aim is to find an optimal geometric transformation between corresponding images, finally recovering the reference image by registering the template one. In Fig. 3, we present the registered image obtained with the proposed approach, compared with the one obtained by the fluid image registration proposed in [13]. In Fig. 4, we present the grid of deformations applied to the template image and the Jacobian determinants of the deformation field. To show the evolution of the deformations with respect to the iterations, we present in Fig. 5 the application of the deformation to a rectangular grid with a rotation of 90°. We note that in some cases we apply a zoom to a region of the grid to better see the deformations. Finally, in Fig. 6, we compare our algorithm with [13] using the error between the two images (reference and template). In the second example, we take two MRI images of a human brain downloaded from the web site 2. We use the same process as in the first example and keep the same order of figures. Fig. 7 shows the reference and template brain images, Fig. 8 illustrates the obtained results, Fig. 9 shows the deformation field over the whole grid and the Jacobian determinants, while Fig. 10 shows the evolution of the deformations on a rectangular grid; the errors between the deformed template and the reference images using the fluid image registration [13] and the proposed approach are presented in Fig. 11. In the third experiment, we apply the proposed approach to another slice of a human brain. Again, we compare our model to the fluid image registration model [13] in Fig. 13 and, as in the previous tests, show the deformation field and the Jacobian determinants in Fig. 14, while the evolution of the deformation on a rectangular grid and the error between template and reference images are presented, respectively, in Fig. 15 and Fig. 16. We follow the same steps for the human head image (Fig. 17) and present the obtained results in Fig. 18, Fig. 19, Fig. 20 and Fig. 21. Finally, we show the results obtained in Fig. 23, Fig. 24, Fig. 25 and Fig. 26 for the EPI slice image of a human brain (Fig. 22) downloaded from the web site 3. Note that the parameters chosen in the implementation of the proposed approach are presented in Table 2. Concerning the Jacobian determinant, it is well known from calculus that the determinant of the Jacobian matrix J = I + ∇u (here, I = diag(1, ..., 1) ∈ R^{d×d}) can be used to assess the invertibility of the deformation mapping x → x + u(x) as well as local volume change. Indeed, the Jacobian determinant values change at each iteration and for all the tests. In general, the Jacobian determinant values lie in the interval [0.005, 0.5]. For example, in the first test (human hands) min(det(J)) = 0.0105 and max(det(J)) = 0.4962, while in the fourth test (human brain) min(det(J)) = 0.0051 and max(det(J)) = 0.4988.
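As a small illustration of the diagnostic just discussed, the per-pixel determinant det(I + ∇u) can be computed directly from the displacement field; the function name and grid spacings below are ours and purely illustrative.

```python
import numpy as np

def deformation_jacobian_determinant(u, dx=1.0, dy=1.0):
    """det(I + grad u) at every pixel of a 2D displacement field u (ny, nx, 2),
    where u[..., 0] is the x component and u[..., 1] the y component."""
    du1_dy, du1_dx = np.gradient(u[..., 0], dy, dx)   # axis 0 = y, axis 1 = x
    du2_dy, du2_dx = np.gradient(u[..., 1], dy, dx)
    return (1.0 + du1_dx) * (1.0 + du2_dy) - du1_dy * du2_dx

# A positive determinant everywhere indicates a locally invertible,
# orientation-preserving mapping x -> x + u(x).
```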
If we focus on the error between the template image and the reference one, in all examples we can see that the proposed method is more efficient than the classical implementation of the fluid image registration [13]. However, we remark that the grey-levels of the deformed template differ from those of the original template in approximately all tests. There are in fact several reasons for this grey-level loss. The most widespread reason is the interpolation process, but in our problem the main reason comes from the use of the Navier-Stokes equation (velocity-pressure (v, p) form). In fact, the Navier-Stokes equations can be simplified by introducing as dependent variables the stream function ψ, satisfying ∇⊥ψ = v, and the vorticity ω, which satisfies ω = ∇ × v. Based on the work proposed by Bertalmio et al. [3], the image intensity I is related to the vorticity ω by a Poisson equation ∆I = ω, and according to this equation I is more regular than ω (the smoothing effect of the Laplace operator). Indeed, the image intensity I defines the velocity field by v = ∇⊥I. Due to this high regularity of the image I, intensities are perturbed. A grey-level measure is therefore needed to evaluate the ability of the proposed approach to avoid this drawback. We use two measures, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM); a small computational sketch of both is given at the end of this section. The PSNR is a popular metric used to measure the quality of the estimated image, while the SSIM is a complementary measure which gives an indication of image quality based on known characteristics of the human visual system [43]. In Table 3, the SSIM and PSNR values are calculated for all the MRI images used, to confirm the robustness of the proposed approach in preserving the grey-level values compared to the fluid registration scheme of [13]. The best results are represented by bold numbers. The proposed method always outperforms the others in terms of both PSNR and SSIM. On the other hand, we can also see some topological changes, especially in Fig. 13, where the two central disconnected components are merged; this is related to the diffusion effect caused by the term (1/R_e)∆v. We can also see the diffusion process in the subsequent figures. To illustrate the effect of the diffusion process, we have added two tests showing the evolution of the registered image with respect to the Reynolds coefficient R_e, using images of the same and of different modalities (see Figs. 27 and 28). We specify that in all the above experiments we chose the coefficient R_e manually, with respect to the lowest error between the deformed template and the reference image.
Figure 11. Difference error between template and reference images of human brain 1 using fluid registration (on the left) and the proposed approach (on the right).
Figure 16. Difference error between template and reference images of human brain 2 using fluid registration (on the left) and the proposed approach (on the right).
The full codes for the proposed model are implemented in MATLAB 2013. Typically, the execution of the main implemented programme requires an average of 2-4 minutes on a 2.4 GHz quad-core Pentium computer for 256 × 256 grey-scale images; for colour and large-size images the computation time becomes more considerable, while the execution of the fluid image registration takes about 3-7 minutes under the same conditions.
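A minimal sketch of how the two quality measures can be computed with scikit-image, assuming 8-bit grey-scale images; this is an illustration, not the authors' MATLAB code, and the function wrapper name is ours.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def registration_quality(reference, deformed_template, data_range=255):
    """PSNR and SSIM between the reference image and the registered template."""
    psnr = peak_signal_noise_ratio(reference, deformed_template,
                                   data_range=data_range)
    ssim = structural_similarity(reference, deformed_template,
                                 data_range=data_range)
    return psnr, ssim
```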
5. Conclusion.
A new numerical approach for fluid image registration based on the Navier-Stokes equation was introduced to improve the quality of the registration. A finite volume-based scheme was used to reduce the errors of the discretization step. The performance of this new approach is supported by the efficiency of the results, both visually and in terms of the error criteria.
List of abbreviations. SIMPLE: Semi-Implicit Method for Pressure-Linked Equations.
Compliance with ethical standards.
• Funding: This research was entirely funded by the institution of the authors.
• Conflict of interest: The authors declare that they have no conflict of interest.
• Neither human participants nor animals are involved in this research. | 6,276.2 | 2018-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Maximum Likelihood Estimation Based Nonnegative Matrix Factorization for Hyperspectral Unmixing
Hyperspectral unmixing (HU) is a research hotspot of hyperspectral remote sensing technology. As a classical HU method, the nonnegative matrix factorization (NMF) unmixing method can decompose an observed hyperspectral data matrix into the product of two nonnegative matrices, i.e., endmember and abundance matrices. Because the objective function of NMF is the traditional least-squares function, NMF is sensitive to noise. In order to improve the robustness of NMF, this paper proposes a maximum likelihood estimation (MLE) based NMF model (MLENMF) for unmixing of hyperspectral images (HSIs), which substitutes the least-squares objective function in traditional NMF by a robust MLE-based loss function. Experimental results on a simulated and two widely used real hyperspectral data sets demonstrate the superiority of our MLENMF over existing NMF methods.
Introduction
A hyperspectral image (HSI) can be represented as a three-dimensional data cube, containing both spectral and spatial information to characterize the radiation properties, spatial distribution and geometric characteristics of ground objects [1,2]. Compared with panchromatic, RGB and multispectral pictures that have only several broad bands, an HSI usually has hundreds of spectral bands. The rich spectral information of HSI can be used to discriminate subtle differences between similar ground objects, which makes HSI suitable for different applications, such as target recognition, mineral detection and precision agriculture [1-3]. Due to the scattering of the ground surface and the low spatial resolution of the hyperspectral sensor, an observed HSI pixel is often a mixture of multiple ground materials [4-6]. This is the so-called "mixed pixel". The presence of mixed pixels seriously affects the application of HSIs. To address the problem of mixed pixels, hyperspectral unmixing (HU) techniques have been developed [4-8]. HU aims to decompose a mixed spectrum into a collection of pure spectra (endmembers) while also providing the corresponding fractions (abundances). In terms of the spectral mixture mechanism, HU algorithms can be roughly categorized into linear and non-linear ones [4,5]. Although, in general, the nonlinear mixing assumption represents most real cases better, the linear mixing assumption (although more simplified) has been proved to work very satisfactorily in many cases in practice. Taking into account its mathematical tractability, it has attracted significant attention from the scientific community. For these reasons, the linear mixture model is adopted in the present paper, in which a measured spectrum can be represented as a linear combination of several endmembers.
Nonnegative matrix factorization (NMF) is a widely used linear HU method [9-20]. In this framework, HU is regarded as a blind source separation problem, and the observed HSI matrix is decomposed into the product of a pure-pixel matrix (endmember matrix) and the corresponding proportion matrix (abundance matrix). Respecting the physical constraints, nonnegativity constraints on the endmembers and abundances and an abundance sum-to-one constraint (ASC) are imposed. The NMF algorithm has the advantages of intuitiveness and interpretability. However, due to the large number of unknown dependent variables, the solution space of the NMF model is too large. To restrict its solution space, many NMF variants have been proposed by adding constraints on the abundances or endmembers [10-16]. Miao et al. incorporated a volume constraint on the endmembers into the NMF formulation and proposed a minimum volume constrained NMF (MVC-NMF) model [10], which can perform unsupervised endmember extraction from highly mixed image data without the pure-pixel assumption. Jia et al. introduced two constraints into the NMF [11], i.e., piecewise smoothness of spectral data and sparseness of the abundance fractions. Similarly, two constraints on the abundance (i.e., an abundance separation constraint and an abundance smoothness constraint) were added to the NMF [12]. Qian et al. imposed an l_{1/2}-norm-based sparsity constraint on the abundance and proposed the l_{1/2}-NMF unmixing model [13]. Lu et al. considered the manifold structure of HSI and incorporated manifold regularization into the l_{1/2}-NMF [14]. Wang et al. added an endmember dissimilarity constraint to the NMF [15].
Although the aforementioned NMF methods improve the classical NMF unmixing model to a certain extent, they ignore the effect of noise. As the objective function of NMF is the least-squares loss, NMF is sensitive to noise, and the corresponding unmixing results are usually inaccurate and unstable. To suppress the effect of noise and improve the robustness of the model, many robust NMF methods have been proposed [17-20]. He et al. proposed a sparsity-regularized robust NMF by adding a sparse matrix to the linear mixture model to model the sparse noise [17]. Du et al. introduced a robust entropy-induced metric (CIM) and proposed a CIM-based NMF (CIM-NMF) model, which can effectively deal with non-Gaussian noise [18]. Wang et al. proposed a robust correntropy-based NMF model (CENMF) [19], which contains a correntropy-based loss function and an l_1-norm sparse constraint on the abundance. Based on Huber's M-estimator, Huang et al. constructed l_{2,1}-norm and l_{1,2}-norm based loss functions to obtain a new robust NMF model [20,21]. Defining the l_{2,1}-norm (l_{1,2}-norm) based loss function actually assumes that the column-wise (row-wise) approximation residual follows a Laplacian (Gaussian) distribution from the viewpoint of maximum likelihood estimation (MLE). However, in practice this assumption may not hold well, especially when the HSI contains complex mixed noise, such as impulse noise, stripes, deadlines, and other types of noise [22,23].
Inspired by robust regression theory [23,24], we treat the approximation residual with an MLE-like estimator and propose a robust MLE-based l_{1/2}-NMF model (MLENMF) for HU. It replaces the least-squares loss in the original NMF by a robust MLE-based loss, which is a function (associated with the distribution of the approximation residuals) of the approximation residuals [24]. The proposed MLENMF can be converted into a weighted l_{1/2}-NMF model and solved by a re-weighted multiplicative update iteration algorithm [9,13]. By choosing an appropriate weight function, MLENMF automatically assigns small weights to bands with large residuals, which effectively reduces the effect of noisy bands and improves the unmixing accuracy. Experimental results on simulated and real hyperspectral data sets show the superiority of MLENMF over existing NMF methods.
The rest of the paper is organized as follows. Section 2 introduces the NMF and l_{1/2}-NMF models. Section 3 describes the proposed MLENMF method. The experimental results and analysis are provided in Section 4. Section 5 discusses the effect of the parameters of the algorithm. Finally, Section 6 concludes the paper.
NMF Unmixing Model
Under the linear spectral mixing mechanism, an observed spectrum h ∈ R^{M×1} can be represented linearly by the endmembers z_1, ..., z_P [4,10-13]:
$$h = Zs + \varepsilon, \qquad (1)$$
where Z = [z_1, ..., z_P] ∈ R^{M×P} represents the endmember matrix, s ∈ R^{P×1} is the coefficient (abundance) vector, and ε is the residual. Applying the above linear mixing model (1) to all hyperspectral pixels h_1, ..., h_N, the following matrix representation is obtained:
$$H = ZS + E, \qquad (2)$$
where H = [h_1, ..., h_N] ∈ R^{M×N} and S = [s_1, ..., s_N] ∈ R^{P×N} are the nonnegative hyperspectral data matrix and abundance matrix, respectively, and E ∈ R^{M×N} is the residual matrix.
In Equation (2), to make the decomposition as accurate as possible, the residual should be minimized. An NMF unmixing model is then obtained by taking into account the nonnegativity of the endmember and abundance matrices:
$$\min_{Z \ge 0,\, S \ge 0} \ \|H - ZS\|_F^2, \qquad (3)$$
where ||·||_F denotes the Frobenius norm and Z ≥ 0 means that each element of Z is nonnegative. As each column of the abundance matrix S records the proportions of the endmembers representing a pixel, the columns of S (each one corresponding to a pixel) should satisfy the sum-to-one constraint, i.e., ∑_{p=1}^{P} S_{pn} = 1, n = 1, ..., N. The NMF model (3) can be easily solved by the multiplicative update algorithm [9,13]. However, its solution space is very large [13]. To restrict the solution space, an l_{1/2} constraint can be added to the abundance matrix S, giving the l_{1/2}-NMF model [13]:
$$\min_{Z \ge 0,\, S \ge 0} \ \|H - ZS\|_F^2 + \lambda \|S\|_{1/2}, \qquad (4)$$
where λ is a regularization parameter and ||S||_{1/2} is the l_{1/2} regularizer [13]. As proved in Refs. [13,25], the l_{1/2} regularizer is a good choice for enforcing sparsity in hyperspectral unmixing because the sparsity of the l_q (1/2 ≤ q < 1) solution increases as q decreases, whereas the sparsity of the solution for l_q (0 < q ≤ 1/2) shows little change with respect to q. Meanwhile, the sparsity enforced by l_{1/2} also drives the volume of the simplex to be minimized [13].
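As an illustration of how the model (4) is typically optimized, the following NumPy sketch implements the multiplicative update rules of the l_{1/2}-NMF; it is our own minimal rendering (random initialization, and a simple column normalization standing in for the usual data-augmentation trick that enforces the sum-to-one constraint), not the reference implementation of [13].

```python
import numpy as np

def l12_nmf(H, P, lam=0.1, n_iter=500, eps=1e-9):
    """Minimal sketch of l_{1/2}-NMF unmixing (model (4)).

    H : (M, N) nonnegative data matrix (bands x pixels); P : number of endmembers.
    Returns the endmember matrix Z (M, P) and the abundance matrix S (P, N).
    """
    M, N = H.shape
    rng = np.random.default_rng(0)
    Z = rng.random((M, P)) + eps
    S = rng.random((P, N)) + eps
    for _ in range(n_iter):
        # standard multiplicative update for the endmembers
        Z *= (H @ S.T) / (Z @ S @ S.T + eps)
        # multiplicative update for the abundances with the l_{1/2} penalty term
        S *= (Z.T @ H) / (Z.T @ Z @ S + 0.5 * lam * S ** (-0.5) + eps)
        # crude surrogate for the abundance sum-to-one constraint
        S /= S.sum(axis=0, keepdims=True) + eps
    return Z, S
```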
MLENMF Unmixing Model
In the NMF models (3) and (4), the objective function ||H − ZS||_F^2 is the least-squares (LS) loss, which is sensitive to noise. Here, we employ a new robust MLE-based loss to replace the LS objective function and propose an MLE-based NMF (MLENMF) model for HU.
Firstly, the matrix norm is rewritten in vector norm form:
$$\|H - ZS\|_F^2 = \sum_{i=1}^{M} \|H_i - (ZS)_i\|_2^2, \qquad (5)$$
where H_i is the i-th row of the matrix H. We can regard the least-squares objective function as a sum of approximation residuals and then construct an MLE-like robust estimator to approximate the minimum of the objective function. Denoting the approximation residual of the i-th band by e_i = ||H_i − (ZS)_i||_2 and defining the residual vector e = [e_1, ..., e_M]^T, Formula (5) can be rewritten as:
$$\|H - ZS\|_F^2 = \sum_{i=1}^{M} e_i^2. \qquad (6)$$
Assume that e_1, ..., e_M are independent and identically distributed (i.i.d.) random variables following the same probability distribution function g_θ(e_i), where θ is the distribution parameter. The likelihood function can be expressed as:
$$L_\theta(e_1, \ldots, e_M) = \prod_{i=1}^{M} g_\theta(e_i). \qquad (7)$$
According to the principle of MLE, the following objective function should be minimized:
$$D_\theta(e) = \sum_{i=1}^{M} \varphi_\theta(e_i), \qquad (8)$$
where φ_θ(e_i) = −ln g_θ(e_i). If we replace the objective function ||H − ZS||_F^2 in Equation (4) by the loss in Equation (8), we get the following optimization problem:
$$\min_{Z \ge 0,\, S \ge 0} \ \sum_{i=1}^{M} \varphi_\theta(e_i) + \lambda \|S\|_{1/2}. \qquad (9)$$
In fact, the aim is to construct a loss function that replaces the least-squares function in order to reduce the impact of noise. To construct this loss function, we analyze its Taylor expansion. Assume that g_θ is symmetric and g_θ(e_i) < g_θ(e_j) if e_i > e_j. We can infer that: (1) g_θ(0) is the global maximum of g_θ and φ_θ(0) is the global minimum of φ_θ; (2) φ_θ(e_i) = φ_θ(−e_i); (3) φ_θ(e_i) > φ_θ(e_j) if e_i > e_j. For simplicity, we assume φ_θ(0) = 0. According to the first-order Taylor expansion around e_0, D_θ(e) can be approximated as [24]:
$$\tilde{D}_\theta(e) = D_\theta(e_0) + (e - e_0)^T D_\theta'(e_0) + \frac{1}{2}(e - e_0)^T W (e - e_0), \qquad (10)$$
where D_θ'(e_0) is the first-order derivative of D_θ(e) at e_0 and W is the Hessian matrix. The mixed partial derivatives satisfy ∂²D_θ/∂e_i∂e_j = 0 (i ≠ j) because the error residuals e_i and e_j are assumed i.i.d., and hence W is a diagonal matrix. Taking the derivative of the approximation with respect to e gives
$$\tilde{D}_\theta'(e) = D_\theta'(e_0) + W(e - e_0). \qquad (11)$$
As φ_θ(0) = 0 is the global minimum of φ_θ, the minimum of D_θ(e) is D_θ(0). The approximation should also reach its minimum at e = 0, since it approximates D_θ(e), so we require D̃_θ(0) = 0 and D̃_θ'(0) = 0. We can then derive the following formulas from Equation (11):
$$D_\theta'(e_0) = W e_0, \qquad (12)$$
$$\tilde{D}_\theta(e) = \frac{1}{2}\sum_{i=1}^{M} W_{i,i}\, e_i^2 + b_\theta, \qquad (13)$$
where W_{i,i} is the i-th diagonal element of W and b_θ is a constant independent of e. Denoting w_i = W_{i,i}, Equation (13) can be written as
$$\tilde{D}_\theta(e) = \frac{1}{2}\sum_{i=1}^{M} w_i\, e_i^2 + b_\theta. \qquad (14)$$
As φ_θ(x) is a nonlinear and nonconvex function, it is difficult to solve the model (9) directly. Inspired by Formula (14), we can get:
$$\sum_{i=1}^{M} \varphi_\theta(e_i) \approx \frac{1}{2}\left\|W^{1/2}(H - ZS)\right\|_F^2 + b_\theta, \qquad (15)$$
and then the model (9) can be expressed as a weighted NMF model:
$$\min_{Z \ge 0,\, S \ge 0} \ \left\|W^{1/2}(H - ZS)\right\|_F^2 + \lambda \|S\|_{1/2}. \qquad (16)$$
The objective function of the model (16) can be rewritten as:
$$\left\|W^{1/2}H - W^{1/2}ZS\right\|_F^2 = \|\tilde{H} - \tilde{Z}S\|_F^2, \qquad (17)$$
where H̃ = W^{1/2}H and Z̃ = W^{1/2}Z. Then the model (16) can be expressed as:
$$\min_{\tilde{Z} \ge 0,\, S \ge 0} \ \|\tilde{H} - \tilde{Z}S\|_F^2 + \lambda \|S\|_{1/2}. \qquad (18)$$
It is easy to see that the model (18) is also an l_{1/2}-NMF problem and can be solved by the multiplicative update iteration rules [9,13]:
$$\tilde{Z} \leftarrow \tilde{Z} \odot \frac{\tilde{H}S^T}{\tilde{Z}SS^T}, \qquad (19)$$
$$S \leftarrow S \odot \frac{\tilde{Z}^T\tilde{H}}{\tilde{Z}^T\tilde{Z}S + \frac{\lambda}{2}S^{-1/2}}, \qquad (20)$$
where ⊙ and the fraction bar denote element-wise multiplication and division. The final endmember matrix is Z = W^{−1/2} Z̃. In the model (18), a key factor is the weight. In this paper, the weight function is set as the logistic function [23,24,26]:
$$w_i = \omega_\theta(e_i) = \frac{\exp\left(\gamma\tau - \gamma e_i^2\right)}{1 + \exp\left(\gamma\tau - \gamma e_i^2\right)}, \qquad (21)$$
where γ, τ are positive scalars. The parameter γ controls the decreasing rate from 1 to 0, and τ controls the location of the demarcation point [24]. Clearly, the value of the weight function decreases rapidly as the residual e_i increases. The MLE weight function in Equation (21) can approximate the weights of commonly used loss functions, such as the l_{2,1}, maximum correntropy, and Huber weights.
When γ = 2 and τ → 0, the MLE weight function becomes w_i = 1/(1 + exp(2e_i^2)), which is close to the l_{2,1}-type weight 1/(1 + e_i^2). The corresponding weights are shown as the red and blue lines in Figure 1a.
When γ = 1/σ^2 and τ → 0, the MLE weight function becomes w_i = 1/(1 + exp(e_i^2/σ^2)), which is close to the weight of the maximum correntropy criterion, exp(−e_i^2/(2σ^2)) (σ is a parameter). The corresponding weights are shown in Figure 1b. By choosing appropriate parameters, the MLE weight can also approximate the Huber weight, as shown in Figure 1c.
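The truncating behaviour of the logistic weight in Equation (21), as reconstructed above, is easy to check numerically; the short script below is ours and only illustrates the qualitative shape (the parameter values are arbitrary).

```python
import numpy as np

def mle_weight(e, gamma, tau):
    # Logistic weight of Eq. (21) as reconstructed above: close to 1 for residuals
    # below the demarcation point tau, decaying towards 0 beyond it.
    z = np.clip(gamma * (tau - e**2), -50, 50)   # clipping only guards against overflow
    return np.exp(z) / (1.0 + np.exp(z))

e = np.linspace(0.0, 3.0, 7)
print(np.round(mle_weight(e, gamma=5.0, tau=1.0), 3))
# small residuals keep weights near 1, large residuals are suppressed towards 0
```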
Based on Equations (14) and (21), the objective function of the MLE can be obtained as:
$$\varphi_\theta(e_i) = -\frac{1}{2\gamma}\left(\ln\left(1 + e^{\gamma\tau - \gamma e_i^2}\right) - \ln\left(1 + e^{\gamma\tau}\right)\right). \qquad (25)$$
From Equations (8) and (25), we can see that the probability distribution function g_θ(e_i) has the form:
$$g_\theta(e_i) = \exp(-\varphi_\theta(e_i)) = \left(\frac{1 + e^{\gamma\tau - \gamma e_i^2}}{1 + e^{\gamma\tau}}\right)^{\frac{1}{2\gamma}}. \qquad (26)$$
If τ = 0 and γ → 0, the probability distribution function g_θ(e_i) is actually a Gaussian distribution:
$$g_\theta(e_i) \propto \exp\left(-\frac{e_i^2}{4}\right). \qquad (27)$$
In this case, the weight defined in Equation (21) is ω_i = 1/2, which is the LS case.
In Figure 2a, we compare the MLE objective function with the LS loss function. The MLE objective function is controlled by the parameters γ, τ and is truncated to a constant for large residuals (e.g., |e_i| > 2). As this constant has no effect on the optimization model, the negative effect of noise (points with large residuals) is automatically diminished. Compared with the MLE function, the LS loss function is global and increases quadratically with the residual. When heavy noise is present, the objective function of the LS model is dominated by the points with heavy noise. Figure 2b shows the influence functions [22,27] of MLE and LS. The influence function of a loss φ(e) is defined as ψ(e) = ∂φ(e)/∂e, which measures the robustness of the loss function as the error residual increases. For a residual e_i > 0, the influence function of MLE first increases, then decreases, and finally reaches zero. This means that large errors ultimately have no effect on the MLE-based model. The influence function of LS, however, continues to grow linearly, so the LS loss function is seriously affected by noise. In the presence of noise, MLE is clearly more robust than LS.
The procedure of the proposed MLENMF is shown in Algorithm 1.
Algorithm 1 (MLENMF).
1. Initialize Z^(0) = Z_0, S^(0) = S_0, v = 1, W = I.
2. Run the following steps until convergence:
(a) Compute the band-wise errors e_i = ||H_i − (Z^(v−1)S^(v−1))_i||_2, i = 1, ..., M.
(b) Calculate the weight w_i of each band from Equation (21) and form W = diag(w_1, ..., w_M).
(c) Compute the weighted matrices H̃ = W^{1/2}H and Z̃ = W^{1/2}Z^(v−1).
(d) Update the weighted endmember matrix and the abundance matrix by the multiplicative rules (19) and (20); set v = v + 1.
3. Output the endmember matrix Z = W^{−1/2}Z̃ and the abundance matrix S.
Remark. The current method assumes that different bands are independent, from which an MLE solution can be deduced. The band independence assumption is only used in the derivation of the MLE estimator. By means of this assumption, we finally obtain a weighted NMF model in which the weight function reduces the effect of noisy bands. Although hyperspectral bands are not independent from each other in practice, the final weighted NMF model (i.e., MLENMF) can still alleviate the negative effects of noise.
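A compact NumPy sketch of Algorithm 1, written under the reconstructions given above (logistic weight, weighted multiplicative updates, data-dependent choice of τ); the variable names, initialization and stopping rule are our own simplifications, not the authors' code.

```python
import numpy as np

def mlenmf(H, P, lam=0.1, c=1.0, xi=0.8, n_iter=200, eps=1e-9):
    """Sketch of MLENMF (Algorithm 1): re-weighted l_{1/2}-NMF unmixing."""
    M, N = H.shape
    rng = np.random.default_rng(0)
    Z = rng.random((M, P)) + eps
    S = rng.random((P, N)) + eps
    for _ in range(n_iter):
        # (a) band-wise squared approximation residuals e_i^2 = ||H_i - (ZS)_i||_2^2
        e2 = np.sum((H - Z @ S) ** 2, axis=1)
        # (b) logistic MLE weights; tau = (100*xi)-th percentile of e_i^2, gamma = c/tau
        tau = np.percentile(e2, 100 * xi) + eps
        w = 1.0 / (1.0 + np.exp(np.clip((c / tau) * (e2 - tau), -50, 50)))
        # (c) weighted data and endmember matrices
        sqw = np.sqrt(w)[:, None]
        Hw, Zw = sqw * H, sqw * Z
        # (d) multiplicative updates of the weighted l_{1/2}-NMF model (18)
        Zw *= (Hw @ S.T) / (Zw @ S @ S.T + eps)
        S *= (Zw.T @ Hw) / (Zw.T @ Zw @ S + 0.5 * lam * S ** (-0.5) + eps)
        Z = Zw / (sqw + eps)          # undo the weighting: Z = W^(-1/2) * Z~
    return Z, S
```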
Evaluation Metrics
Spectral angular distance (SAD) and root mean square error (RMSE) are used to quantitatively evaluate the accuracy of estimated endmembers and abundances.
The SAD is defined as:
$$\mathrm{SAD}_k = \arccos\left(\frac{z_k^T \hat{z}_k}{\|z_k\|_2\,\|\hat{z}_k\|_2}\right),$$
where SAD_k measures the similarity between the k-th real endmember z_k and the estimated endmember ẑ_k. The RMSE is:
$$\mathrm{RMSE}_k = \sqrt{\frac{1}{N}\,\|s_k - \hat{s}_k\|_2^2},$$
where s_k and ŝ_k are the k-th real and estimated abundance maps (i.e., the k-th row vectors of S and Ŝ), respectively, and N is the number of pixels in the HSI.
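In code, the two metrics are one-liners; the following rendering (array shapes are our assumption) follows the formulas above.

```python
import numpy as np

def sad(z_true, z_est):
    """Spectral angular distance between a reference and an estimated endmember."""
    cos = z_true @ z_est / (np.linalg.norm(z_true) * np.linalg.norm(z_est))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def rmse(s_true, s_est):
    """Root mean square error between a reference and an estimated abundance map."""
    return float(np.sqrt(np.mean((np.asarray(s_true) - np.asarray(s_est)) ** 2)))
```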
Implementation Details
The vertex component analysis (VCA) and fully constrained least squares (FCLS) methods are used to generate the initial endmembers Z_0 and abundances S_0 for the different unmixing methods [11-19]. The regularization parameter λ in l_{1/2}-NMF and CENMF depends on the sparsity of the material abundances and is estimated using the sparseness criterion of Ref. [13]. The parameters of CIM-NMF and Huber-NMF are set to the values recommended in Ref. [18]. The proposed MLENMF contains two parameters, γ and τ, as shown in Equation (21). Clearly, τ is related to the amplitude of the squared residuals e_i^2, and for different data sets the amplitudes of the residuals may differ, so it is difficult to fix a specific value of τ. Here, we set τ in a data-dependent way: τ is the (100ξ)-th percentile of the residual vector e = [e_1^2, ..., e_M^2]^T, where ξ ∈ (0, 1] controls the ratio of inliers.
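The data-dependent choice of τ described above amounts to a single percentile call; a tiny sketch (with placeholder residuals of our own) is given below.

```python
import numpy as np

# Placeholder squared band residuals e_i^2 (162 bands); in practice these come
# from the current factorization, as in step (a) of Algorithm 1.
e2 = np.random.default_rng(1).gamma(2.0, 1.0, size=162)
xi = 0.8                                # assumed ratio of inliers
tau = np.percentile(e2, 100 * xi)       # tau = (100*xi)-th percentile of e_i^2
print(tau)
```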
Tables 1 and 2 show the average results of 20 random experiments under different degrees of noise. Each SAD (RMSE) value is the mean of the SAD (RMSE) over the seven endmembers. It is clear that the performance of the different methods improves as the SNR or µ increases, and MLENMF shows better results under all degrees of noise. To visualize the results of the different methods, the real and estimated spectra for endmember 1 (i.e., "Carnallite NMNH98011") at µ = 20 are shown in Figure 3. Here, we only show the results for endmember 1 due to space limitations; similarly good results are obtained for the other endmembers. It can be seen that the spectral curve estimated by MLENMF approximates the reference well, while the curves of the other methods deviate in amplitude from the reference spectrum. As the reference spectrum and the spectra estimated by the different methods have similar shapes, the SAD values of the different methods show only small differences, as shown in Table 1. Notwithstanding, the estimated abundance maps of the different methods show large differences, as shown in Figure 4. Taking into account both the SAD and RMSE results, we can see that our MLENMF method is more robust than the other NMF methods when the data contain noise.
Experiments on Real Data
Two real hyperspectral unmixing data sets, i.e., Urban and Jasper, are used to evaluate the performance of the different NMF unmixing methods (available at https://rslab.ut.ac.ir/data and https://sites.google.com/site/feiyunzhuhomepage/datasets-ground-truths, accessed on 2 July 2019). The Urban data were obtained by the HYDICE sensor. The scene has a size of 307 × 307 pixels and each pixel corresponds to a 2 × 2 m² area. The original data have 210 bands, of which bands 1-4, 76, 87, 101-111, 136-153, and 198-210 are severely affected by dense water vapor and the atmosphere. After removing these noisy bands, 162 bands are kept. The scene contains four reference materials: Asphalt Road, Grass, Tree, and Roof, which are also available at https://rslab.ut.ac.ir/data (accessed on 2 July 2019).
We first perform experiments on the Urban data with 162 bands. The parameters of MLENMF are set to ξ = 0.8 and c = 1. The endmembers and abundances estimated by the different unmixing methods are compared with the ground-truth references, and the SAD and RMSE results are reported in Tables 3 and 4, respectively. Compared with the other NMF methods, the proposed MLENMF shows better overall results. Figure 5 shows the endmembers estimated by the different methods. It can be seen that the other methods cannot estimate the endmember "Roof" well, while our MLENMF generates a spectral curve that is similar to the reference signature. From the abundance maps in Figure 6, we can see that the maps of MLENMF are more consistent with the reference maps than those of the comparison methods.

To test the unmixing performance of the different methods in the presence of noisy bands, we also calculate the SAD and RMSE for the Urban data with all 210 bands and show the results in Tables 5 and 6, respectively. The parameters of MLENMF in this case are set to ξ = 0.4 and c = 10. Even with some known bad bands, our MLENMF provides the best results.

The Jasper data were collected by the AVIRIS sensor, covering a spectral range of 380 to 2500 nm with a total of 224 spectral bands, including 26 noisy bands. The spectral resolution is 9.46 nm, and the image size is 100 × 100 pixels. The image mainly contains four materials: Tree, Water, Soil, and Road. The parameters of MLENMF are set to ξ = 0.4 and c = 1. The SAD and RMSE results of the different unmixing methods on this data set are shown in Tables 7 and 8, respectively. Figures 7 and 8 show the estimated endmembers and abundance maps of the different methods. It can be seen from these results that the proposed MLENMF provides more accurate estimates of the endmembers and abundances.
Discussion
As described in Section 4.2, τ is the (100ξ)-th percentile of the residual vector e = [e_1^2, ..., e_M^2]^T, and γ = c/τ with c ∈ (0, 10] and ξ ∈ (0, 1]. By tuning these parameters, the MLE objective function can be truncated, as shown in Figure 9. The parameters c and ξ control the decreasing rate and the location of the truncation point, respectively: the larger the value of c, the greater the degree of truncation; the smaller the value of ξ, the earlier the truncation point. As shown in Figure 9, when the noise or residual is large, it is better to choose a larger c and a smaller ξ, which truncates the weights of larger residuals to a constant (see the red dotted line). We take the Urban data set as an example to show the effect of the parameters c and ξ. Figure 10 shows the SAD results of MLENMF on the Urban data with 210 bands. The results in Figure 10a are obtained by fixing ξ = 0.4 and changing c in the set {0.1, 0.5, 1, 2, 5, 10}. When ξ is fixed, larger c values correspond to better unmixing results. As shown in Figure 9, c affects the degree of truncation: if a large c is chosen, the weights of large errors are truncated to a constant (e.g., the objective function values are constant for errors larger than 1.5, shown as the red solid line in Figure 9), and since their objective function values are constant, such errors have no influence on the model. For the Urban data with all 210 bands, MLENMF with a larger c can therefore effectively alleviate the effect of the noisy bands. By fixing c = 10 and changing ξ in the set {0.1, 0.2, 0.4, 0.6, 0.8, 1}, Figure 10b shows the SAD of MLENMF versus the parameter ξ. It is better to set ξ in the interval [0.4, 0.8] when c is fixed. The parameter ξ determines the ratio of inliers; as the data contain noisy bands, the value of ξ should be less than 1.
When the known noisy bands of the Urban data are removed, the experimental results on the Urban data with 162 bands are obtained by fixing ξ = 0.8 and c = 1, respectively. The results are shown in Figure 11. From Figure 11a, we can see that the proposed MLENMF is not sensitive to the parameter c, because different c values generate similar results for small errors in the case of low-noise or noise-free data, as shown in Figure 9. From Figure 11b, the best result is achieved at ξ = 1, which means that almost all data points are inliers.
The above analysis recommends setting the parameter ξ in the interval [0.4, 0.8]. For data with heavy noise, ξ can be set to a small value, such as ξ = 0.4. The parameter c is chosen in the interval [1, 10]; for data with heavy noise, one can set c = 10, otherwise a moderate value c = 1 is recommended.
Conclusions
This paper proposes a maximum likelihood estimation-based nonnegative matrix factorization (MLENMF) model for hyperspectral unmixing. The proposed MLENMF employs an MLE-like loss function that replaces the least-squares loss function in the NMF model. The MLE-like loss is a robust loss that truncates the objective function values of noisy points and thus reduces their negative effect on the unmixing model. Experimental results on a simulated data set and two real data sets (Urban and Jasper) show that the proposed MLENMF model has an obvious noise suppression effect and obtains more accurate unmixing results. The current model assumes that different bands are independent so that an MLE solution can be deduced; in practice, the assumption of band independence is not generally valid, and taking the dependence between bands into account is left for future work.
Figure 2. Comparison of the objective function (a) and influence function (b) of MLE and LS.
Figure 3. The reference and estimated spectra for endmember 1 of the simulated data.
Figure 4. The reference and estimated abundance maps for endmember 1 of the simulated data.
Figure 6. Comparison of the abundance maps estimated by the different algorithms for the Urban data with 162 bands for different materials: (a) Asphalt Road, (b) Grass, (c) Tree, (d) Roof.
Figure 8. Comparison of the abundances estimated by the different methods on the Jasper data for different materials: (a) Tree, (b) Water, (c) Soil, (d) Road.
Figure 9. Comparison of the MLE objective function and LS under different parameters.
Figure 10. The SAD results under different parameter settings on the Urban data with 210 bands. (a) SAD versus c at ξ = 0.4. (b) SAD versus ξ at c = 10.
Figure 11. The SAD results under different parameter settings on the Urban data with 162 bands. (a) SAD versus c at ξ = 0.8. (b) SAD versus ξ at c = 1.
Table 1. The SAD results of different unmixing methods for the simulated data.
Table 2. The RMSE results of different unmixing methods for the simulated data.
Table 3. The SAD results of different methods for the Urban data with 162 bands.
Table 4. The RMSE results of different methods for the Urban data with 162 bands.
Table 5. The SAD results of different methods for the Urban data with 210 bands.
Table 6. The RMSE results of different methods for the Urban data with 210 bands.
Table 7. The SAD results of different methods for the Jasper data.
Table 8. The RMSE results of different methods for the Jasper data. | 7,718.2 | 2021-01-01T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Engineering"
] |
The mathematical foundations of general relativity revisited
The purpose of this paper is to present for the first time an elementary summary of a few recent results obtained through the application of the formal theory of partial differential equations and Lie pseudogroups in order to revisit the mathematical foundations of general relativity. Other engineering examples (control theory, elasticity theory, electromagnetism) will also be considered in order to illustrate the three fundamental results that we shall provide. The paper is therefore divided into three parts corresponding to the different formal methods used. 1) CARTAN VERSUS VESSIOT: The quadratic terms appearing in the "Riemann tensor" according to the "Vessiot structure equations" must not be identified with the quadratic terms appearing in the well known "Cartan structure equations" for Lie groups, and a similar comment can be made for the "Weyl tensor". In particular, "curvature + torsion" (Cartan) must not be considered as a generalization of "curvature alone" (Vessiot). Roughly, Cartan and followers have not been able to "quotient down to the base manifold", a result only obtained by Spencer in 1970 through the "nonlinear Spencer sequence" but in a way quite different from the one followed by Vessiot in 1903 for the same purpose and still ignored. 2) JANET VERSUS SPENCER: The "Ricci tensor" only depends on the nonlinear transformations (called "elations" by Cartan in 1922) that describe the "difference" existing between the Weyl group (10 parameters of the Poincaré subgroup + 1 dilatation) and the conformal group of space-time (15 parameters). It can be defined by a canonical splitting, that is to say without using the indices leading to the standard contraction or trace of the Riemann tensor. Meanwhile, we shall obtain the number of components of the Riemann and Weyl tensors without any combinatoric argument on the exchange of indices. Accordingly, the Spencer sequence for the conformal Killing system and its formal adjoint fully describe the Cosserat/Maxwell/Weyl theory, but General Relativity is not coherent at all with this result. 3) ALGEBRAIC ANALYSIS: Contrary to other equations of physics (Cauchy equations, Cosserat equations, Maxwell equations), the Einstein equations cannot be "parametrized", that is, the generic solution cannot be expressed by means of the derivatives of a certain number of arbitrary potential-like functions, solving therefore negatively a 1000 $ challenge proposed by J. Wheeler in 1970. Accordingly, the mathematical foundations of mathematical physics must be revisited within this formal framework, however striking it may look for certain apparently well established theories such as electromagnetism and general relativity. We insist on the fact that the arguments presented are of a purely mathematical nature and are thus unavoidable.
algebraic analogue of a differential sequence. Finally, we get in the third part of the paper: THIRD FUNDAMENTAL RESULT: Contrary to other equations of physics (Cauchy equations, Cosserat equations, Maxwell equations), the Einstein equations cannot be "parametrized", that is, the generic solution cannot be expressed by means of the derivatives of a certain number of arbitrary potential-like functions, solving therefore negatively a 1000 $ challenge proposed by J. Wheeler in 1970.
As none of these results can be obtained without the preceding difficult, purely mathematical arguments, which are thus unavoidable, the purpose of this paper is to present them for the first time in a rather self-contained and elementary way through explicit basic examples.
INTRODUCTION:
The purpose of this paper is to present an elementary summary of a few recent results obtained through the application of the formal theory of systems of ordinary differential (OD) or partial differential (PD) equations and Lie pseudogroups in order to revisit the mathematical foundations of general relativity (GR). More elementary engineering examples (elasticity theory, electromagnetism (EM)) will also be considered in order to illustrate the three fundamental results that we shall provide. The paper, based on the material of two lectures given at the department of mathematics of the University of Montpellier 2, France, in May 2013, is divided into three parts corresponding to the different formal methods used. 1) FIRST PART: Lie groups of transformations may be considered as Lie pseudogroups of transformations, that is to say groups of transformations solutions of systems of OD or PD equations, but no action-type method can be used as the parameters no longer appear.
2) SECOND PART: The work of Cartan is superseded by the use of the canonical Spencer sequence while the work of Vessiot is superseded by the use of the canonical Janet sequence but the link between these two sequences and thus these two works is not known today.
3) THIRD PART: Using duality theory, the formal adjoint of the Spencer operator for the conformal group of transformations of space-time provides the Cosserat equations, the Maxwell equations and the Weyl equations on an equal footing. Even though such a result allows one to unify the finite elements of engineering sciences, it also leads to contradictions in GR that we shall point out.
The new methods involve tools from differential geometry (jet theory, Spencer operator, δ-cohomology) and homological algebra (diagram chasing, snake theorem, extension modules, double duality). The reader may just have a look at the book ([18], review in Zbl 1079.93001) in order to understand the amount of mathematics needed from many domains.
The following diagram summarizes at the same time the historical background and the difficulties presented in the abstract. Roughly, Cartan and his followers have not been able to "quotient down to the base manifold" ([1,2]), a result only obtained by Spencer in 1970 through the nonlinear Spencer sequence ([5], [9], [15], [22]), but in a way quite different from the one followed by Vessiot in 1903 for the same purpose ([17], [25]). Accordingly, the mathematical foundations of mathematical physics must be revisited within this formal framework, however striking it may look for certain apparently well established theories such as EM and GR.
FIRST PART : FROM LIE GROUPS TO LIE PSEUDOGROUPS
If X is a manifold with local coordinates (x^i) for i = 1, ..., n = dim(X), let E be a fibered manifold over X, that is a manifold with local coordinates (x^i, y^k) for i = 1, ..., n and k = 1, ..., m, simply denoted by (x, y), with projection π : E → X : (x, y) → (x) and changes of local coordinates x̄ = ϕ(x), ȳ = ψ(x, y). If E and F are two fibered manifolds over X with respective local coordinates (x, y) and (x, z), we denote by E ×_X F the fibered product of E and F over X, the new fibered manifold over X with local coordinates (x, y, z). We denote by f : X → E : (x) → (x, y = f(x)) a global section of E, that is a map such that π ∘ f = id_X, but local sections over an open set U ⊂ X may also be considered when needed. We shall use for simplicity the same notation for a fibered manifold and its set of sections while setting dim_X(E) = m. Under a change of coordinates, a section transforms like f̄(ϕ(x)) = ψ(x, f(x)) and the derivatives transform like:
(∂f̄^l/∂x̄^r)(ϕ(x)) (∂ϕ^r/∂x^i)(x) = (∂ψ^l/∂x^i)(x, f(x)) + (∂ψ^l/∂y^k)(x, f(x)) (∂f^k/∂x^i)(x).
We may thus introduce new coordinates (x^i, y^k, y^k_i) transforming like:
ȳ^l_r (∂ϕ^r/∂x^i)(x) = (∂ψ^l/∂x^i)(x, y) + (∂ψ^l/∂y^k)(x, y) y^k_i,
and, more generally, the q-jet bundle J_q(E) with local coordinates (x, y_q), where both a section f_q of J_q(E) and the jet prolongation j_q(f), with components the derivatives of f up to order q, are over the section f of E. Of course J_q(E) is a fibered manifold over X with projection π_q while J_{q+r}(E) is a fibered manifold over J_q(E) with projection π^{q+r}_q, ∀r ≥ 0. DEFINITION 1.1: A (nonlinear) system of order q on E is a fibered submanifold R_q ⊂ J_q(E) and a solution of R_q is a section f of E such that j_q(f) is a section of R_q. DEFINITION 1.2: When the changes of coordinates have the linear form x̄ = ϕ(x), ȳ = A(x)y, we say that E is a vector bundle over X. Vector bundles will be denoted by capital letters C, E, F and will have sections denoted by ξ, η, ζ. In particular, we shall denote as usual by T = T(X) the tangent bundle of X, by T* = T*(X) the cotangent bundle, by ∧^r T* the bundle of r-forms and by S_q T* the bundle of q-symmetric tensors. When the changes of coordinates have the form x̄ = ϕ(x), ȳ = A(x)y + B(x) we say that E is an affine bundle over X and we define the associated vector bundle E over X by the same local coordinates with B(x) = 0. Finally, if E = X × X, we shall denote by Π_q = Π_q(X, X) the open subfibered manifold of J_q(X × X) defined independently of the coordinate system by det(y^k_i) ≠ 0, with source projection α_q : Π_q → X : (x, y_q) → (x) and target projection β_q : Π_q → X : (x, y_q) → (y).
If E is a fibered manifold over X with local coordinates (x, y), we may introduce the vertical bundle V(E) ⊂ T(E) as a vector bundle over E with local coordinates (x, y, v), obtained by setting u = 0 in the coordinates (x, y, u, v) of T(E), with changes v̄^l = (∂ψ^l/∂y^k)(x, y) v^k. Of course, when E is an affine bundle over X with associated vector bundle E over X, we have V(E) = E ×_X E.
For a later use, if E is a fibered manifold over X and f is a section of E, we denote by f −1 (V (E)) the reciprocal image of V (E) by f as the vector bundle over X obtained when replacing (x, y, v) by (x, f (x), v) in each chart. A similar construction may also be done for any affine bundle over E.
We now recall a few basic geometric concepts that will be constantly used throughout this paper. First of all, if ξ, η ∈ T, we define their bracket [ξ, η] ∈ T by the local formula ([ξ, η])^i(x) = ξ^r(x)∂_r η^i(x) − η^r(x)∂_r ξ^i(x). When I = {i_1 < ... < i_r} is a multi-index, we may set dx^I = dx^{i_1} ∧ ... ∧ dx^{i_r} for describing ∧^r T* by means of a basis and introduce the exterior derivative d : ∧^r T* → ∧^{r+1} T* : ω = ω_I dx^I → dω = ∂_i ω_I dx^i ∧ dx^I, with d ∘ d = 0. The Lie derivative of an r-form with respect to a vector field ξ ∈ T is the linear first order operator L(ξ), linearly depending on j_1(ξ) and uniquely defined by the following three properties: L(ξ)f = ξ^i ∂_i f for any function f, L(ξ)d = dL(ξ), and L(ξ)(α ∧ β) = (L(ξ)α) ∧ β + α ∧ (L(ξ)β). We now turn to group theory and start with two basic definitions: Let G be a Lie group, that is a manifold with local coordinates (a^τ) for τ = 1, ..., p = dim(G) called parameters, with a composition G × G → G : (a, b) → ab, an inverse G → G : a → a^{−1} and an identity e ∈ G satisfying: (ab)c = a(bc) = abc, aa^{−1} = a^{−1}a = e, ae = ea = a, ∀a, b, c ∈ G. DEFINITION 1.4: G is said to act on X if there is a map X × G → X : (x, a) → y = ax = f(x, a) such that (ab)x = a(bx) = abx, ∀a, b ∈ G, ∀x ∈ X, and we shall then say that we have a Lie group of transformations of X. In order to simplify the notations, we shall use global notations even if only local actions exist. It is well known that the action of G on itself allows one to introduce a purely algebraic bracket on its Lie algebra G = T_e(G).
DEFINITION 1.5:
A Lie pseudogroup of transformations Γ ⊂ aut(X) is a group of transformations solutions of a system of OD or PD equations such that, if y = f(x) and z = g(y) are two solutions, called finite transformations, that can be composed, then z = g ∘ f(x) = h(x) and x = f^{−1}(y) are also solutions, while y = x is the identity solution denoted by id = id_X, and we shall set id_q = j_q(id). In all the sequel we shall suppose that Γ is transitive, that is ∀x, y ∈ X, ∃f ∈ Γ, y = f(x). It becomes clear that Lie groups of transformations are particular cases of Lie pseudogroups of transformations, as the system defining the finite transformations can be obtained by eliminating the parameters among the equations y_q = j_q(f)(x, a) when q is large enough. The underlying system may be nonlinear and of high order. Looking for transformations "close" to the identity, that is setting y = x + tξ(x) + ... when t ≪ 1 is a small constant parameter and passing to the limit t → 0, we may linearize the above nonlinear system of finite Lie equations in order to obtain a linear system of infinitesimal Lie equations of the same order for vector fields. Such a system has the property that, if ξ, η are two solutions, then [ξ, η] is also a solution. Accordingly, the set Θ ⊂ T of solutions of this new system satisfies [Θ, Θ] ⊂ Θ and can therefore be considered as the Lie algebra of Γ. EXAMPLE 1.6: While the affine transformations y = ax + b are solutions of the second order linear system y_xx = 0, the projective transformations y = (ax + b)/(cx + d) are solutions of the third order nonlinear system Ψ ≡ (y_xxx/y_x) − (3/2)(y_xx/y_x)^2 = 0. The sections of the corresponding linearized systems respectively satisfy ξ_xx = 0 and ξ_xxx = 0. The generating differential invariant Φ ≡ y_xx/y_x of the affine case transforms like u = ū ∂_x f + (∂_xx f/∂_x f) when x̄ = f(x), and we let the reader exhibit the corresponding change for Ψ as an exercise.
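As a small sanity check of the statement that the bracket of two solutions is again a solution, one can verify symbolically that the bracket of two infinitesimal affine transformations (solutions of ξ_xx = 0 in Example 1.6) again satisfies ξ_xx = 0; the SymPy script below is ours and purely illustrative.

```python
import sympy as sp

x, a1, b1, a2, b2 = sp.symbols('x a1 b1 a2 b2')

# two solutions of the linearized affine Lie equation xi_xx = 0
xi = a1 + b1 * x
eta = a2 + b2 * x

# bracket of vector fields on the line: [xi, eta] = xi * eta' - eta * xi'
bracket = sp.expand(xi * sp.diff(eta, x) - eta * sp.diff(xi, x))

print(bracket)                  # a1*b2 - a2*b1 : again affine (here even constant)
print(sp.diff(bracket, x, 2))   # 0 : the bracket is again a solution
```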
We now sketch the discovery of Vessiot ([17], [25]), still not known today after more than a century for reasons which are not scientific at all. Roughly, a Lie pseudogroup Γ ⊂ aut(X) is made up of finite transformations y = f(x), solutions of a (possibly nonlinear) system R_q ⊂ Π_q, while the infinitesimal transformations ξ ∈ Θ are solutions of the linearized system R_q = id_q^{−1}(V(R_q)) ⊂ J_q(T), as we have T = id^{−1}(V(X × X)). When Γ is transitive, there is a canonical epimorphism π^q_0 : R_q → T. Also, as changes of source x commute with changes of target y, they exchange between themselves any generating set of differential invariants {Φ^τ(y_q)} as in the previous example. One can then introduce a natural bundle F over X, also called a bundle of geometric objects, by patching changes of coordinates of the form x̄ = f(x), ū = λ(u, j_q(f)(x)). A section ω of F is called a geometric object or structure on X and transforms like ω̄(f(x)) = λ(ω(x), j_q(f)(x)). This is a way to generalize vectors and tensors (q = 1) or even connections (q = 2). As a byproduct we have Γ = {f ∈ aut(X) | j_q(f)^{−1}(ω) = ω} and we may say that Γ preserves ω. Replacing j_q(f) by f_q, we also obtain R_q = {f_q ∈ Π_q | f_q^{−1}(ω) = ω}. Coming back to the infinitesimal point of view and setting f_t = exp(tξ) ∈ aut(X), ∀ξ ∈ T, we may define the ordinary Lie derivative with values in the vector bundle F_0 = ω^{−1}(V(F)) by the formula:
Dξ = L(ξ)ω = (d/dt) j_q(f_t)^{−1}(ω) |_{t=0},
and we say that D is a Lie operator because Dξ = 0, Dη = 0 ⇒ D[ξ, η] = 0, as we already saw.
Differentiating r times the equations of R q that only depend on j 1 (ω), we may obtain the r-prolongation R q+r = J r (R q ) ∩ J q+r (T ) ⊂ J r (J q (T )). The problem is then to know under what conditions on ω all the equations of order q + r are obtained by r prolongations only, ∀r ≥ 0 or, equivalently, R q is formally integrable (FI). The solution, found by Vessiot, has been to exhibit another natural vector bundle F 1 with local coordinates (x, u, v) over F with local coordinates (x, u) and to prove that an equivariant section c : F → F 1 : (x, u) → (x, u, v = c(u)) only depends on a finite number of constants called structure constants. The integrability conditions (IC) of R q , called Vessiot structure equations, are of the form I(j 1 (ω)) = c(ω) and are invariant under any change of source.
We provide, in a self-contained way and in parallel manners, the following five striking examples, which are among the best nontrivial ones we know, and invite the reader to imagine at this stage any possible link that could exist between them (a few specific definitions will be given later on). EXAMPLE 1.7: Coming back to the last example, we show that Vessiot structure equations may even exist when n = 1. For this, if γ is the geometric object of the affine group y = ax + b and 0 ≠ α = α(x)dx ∈ T* is a 1-form, we consider the object ω = (α, γ) and get at once the two second order Medolaghi equations:
α(x)∂_x ξ + ξ ∂_x α(x) = 0,   ∂_xx ξ + γ(x)∂_x ξ + ξ ∂_x γ(x) = 0.
Differentiating the first equation and substituting the second, we get the zero order equation:
ξ ∂_x ( (∂_x α(x) − α(x)γ(x)) / α(x)^2 ) = 0,
whose integrability condition is the Vessiot structure equation ∂_x α − αγ = cα^2 with a single structure constant c. EXAMPLE 1.8: (Principal homogeneous structure) When Γ is the Lie group of transformations made up of the constant translations y^i = x^i + a^i for i = 1, ..., n of a manifold X with dim(X) = n, the characteristic object invariant by Γ is a family ω = (ω^τ = ω^τ_i dx^i) ∈ T* ×_X ... ×_X T* of n 1-forms with det(ω) ≠ 0, in such a way that Γ = {f ∈ aut(X) | j_1(f)^{−1}(ω) = ω}, where aut(X) denotes the pseudogroup of local diffeomorphisms of X, j_q(f) denotes the derivatives of f up to order q, and j_1(f) acts in the usual way on covariant tensors. For any vector field ξ ∈ T = T(X), the tangent bundle of X, introducing the standard Lie derivative L(ξ) of forms with respect to ξ, we may consider the n^2 first order Medolaghi equations:
Ω^τ_i ≡ (L(ξ)ω^τ)_i = ω^τ_r(x)∂_i ξ^r + ξ^r ∂_r ω^τ_i(x) = 0.
The particular situation is found with the special choice ω = (dx^i) that leads to the involutive system ∂_i ξ^k = 0. Introducing the inverse matrix α = (α^i_τ) = ω^{−1}, the above equations amount to the bracket relations [ξ, α_τ] = 0 and, using crossed derivatives on the solved form ∂_i ξ^k = −α^k_τ(x) ξ^r ∂_r ω^τ_i(x), we obtain the n^2(n − 1)/2 zero order equations:
ξ^r ∂_r c^τ_{ρσ}(x) = 0,   with   c^τ_{ρσ}(x) ≡ α^i_ρ(x) α^j_σ(x) (∂_i ω^τ_j(x) − ∂_j ω^τ_i(x)).
The integrability conditions (IC), that is the conditions under which these equations do not bring new equations, are thus the n^2(n − 1)/2 Vessiot structure equations:
∂_i ω^τ_j(x) − ∂_j ω^τ_i(x) = c^τ_{ρσ} ω^ρ_i(x) ω^σ_j(x),
with constants c^τ_{ρσ} = −c^τ_{σρ}. When X = G, these equations can be identified with the Maurer-Cartan (MC) equations existing in the theory of Lie groups, on the condition of changing the sign of the structure constants involved, because we have [α_ρ, α_σ] = −c^τ_{ρσ} α_τ. Writing these equations in the form of the exterior system dω^τ = c^τ_{ρσ} ω^ρ ∧ ω^σ and closing this system by applying once more the exterior derivative d, we obtain the quadratic IC:
c^λ_{μρ} c^μ_{στ} + c^λ_{μσ} c^μ_{τρ} + c^λ_{μτ} c^μ_{ρσ} = 0,
that is the Jacobi conditions. EXAMPLE 1.9: (Riemann structure) If ω ∈ S_2T* is a nondegenerate metric, that is det(ω) ≠ 0, then Γ = {f ∈ aut(X) | j_1(f)^{−1}(ω) = ω} is the group of isometries of ω and is a Lie group with a maximum number of n(n + 1)/2 parameters. A special metric could be the Euclidean metric when n = 1, 2, 3, as in elasticity theory, or the Minkowski metric when n = 4, as in special relativity ([12]). The first order Medolaghi equations:
Ω_{ij} ≡ (L(ξ)ω)_{ij} = ω_{rj}(x)∂_i ξ^r + ω_{ir}(x)∂_j ξ^r + ξ^r ∂_r ω_{ij}(x) = 0
are also called classical Killing equations for historical reasons. The main problem is that this system is not involutive unless we prolong it to order two by differentiating once the equations.
For such a purpose, introducing ω^{−1} = (ω^{ij}) as usual, we may define the Christoffel symbols:
γ^k_{ij}(x) = (1/2) ω^{kr}(x) (∂_i ω_{rj}(x) + ∂_j ω_{ri}(x) − ∂_r ω_{ij}(x)).
This is a new geometric object of order 2, providing the Levi-Civita isomorphism j_1(ω) = (ω, ∂ω) ≃ (ω, γ) of affine bundles and allowing us to obtain the second order Medolaghi equations:
Γ^k_{ij} ≡ (L(ξ)γ)^k_{ij} = ∂_{ij} ξ^k + γ^k_{rj}(x)∂_i ξ^r + γ^k_{ir}(x)∂_j ξ^r − γ^r_{ij}(x)∂_r ξ^k + ξ^r ∂_r γ^k_{ij}(x) = 0.
Surprisingly, the following expression, called the Riemann tensor:
ρ^k_{lij}(x) ≡ ∂_i γ^k_{lj}(x) − ∂_j γ^k_{li}(x) + γ^k_{ri}(x)γ^r_{lj}(x) − γ^k_{rj}(x)γ^r_{li}(x),
is still a first order geometric object and even a 4-tensor with n^2(n^2 − 1)/12 independent components satisfying the purely algebraic relations:
ρ^k_{lij} + ρ^k_{ijl} + ρ^k_{jli} = 0,   ω_{rl} ρ^l_{kij} + ω_{kr} ρ^r_{lij} = 0.
Accordingly, the IC must express that the new first order equations R^k_{lij} ≡ (L(ξ)ρ)^k_{lij} = 0 are only linear combinations of the previous ones, and we get the Vessiot structure equations:
ρ^k_{lij}(x) = c (δ^k_i ω_{lj}(x) − δ^k_j ω_{li}(x)),
with the only structure constant c describing the constant Riemannian curvature condition of Eisenhart ([4], [16, p 139]). One can proceed similarly for the conformal Killing system L(ξ)ω = A(x)ω and obtain that the Weyl tensor must vanish, without any structure constant involved ([16, p 141]). EXAMPLE 1.10: (Contact structure) We only treat the case dim(X) = 3, as the case dim(X) = 2p + 1 needs much more work ([15, p 684]). Let us consider the so-called contact 1-form α = dx^1 − x^3dx^2 and consider the Lie pseudogroup Γ ⊂ aut(X) of (local) transformations preserving α up to a function factor, that is Γ = {f ∈ aut(X) | j_1(f)^{−1}(α) = ρα}, where again j_q(f) is a symbolic way of writing out the derivatives of f up to order q and α transforms like a 1-covariant tensor. It may be tempting to look for a kind of "object" whose invariance should characterize Γ. Introducing the exterior derivative dα = dx^2 ∧ dx^3 as a 2-form, we obtain the volume 3-form α ∧ dα = dx^1 ∧ dx^2 ∧ dx^3. As it is well known that the exterior derivative commutes with any diffeomorphism, we obtain successively:
j_1(f)^{−1}(α) = ρα  ⇒  j_2(f)^{−1}(dα) = dρ ∧ α + ρ dα  ⇒  j_2(f)^{−1}(α ∧ dα) = ρ^2 α ∧ dα.
As the volume 3-form α ∧ dα transforms through a division by the Jacobian determinant ∆ = det(∂f/∂x) ≠ 0, the desired object is thus no longer a 1-form but a 1-form density ω = (ω_1, ω_2, ω_3), transforming like a 1-form but up to a division by the square root of the Jacobian determinant. It follows that the infinitesimal contact transformations are vector fields ξ ∈ T = T(X), the tangent bundle of X, satisfying the 3 so-called first order Medolaghi equations:
Ω_i ≡ (L(ξ)ω)_i = ω_r(x)∂_i ξ^r − (1/2) ω_i(x)∂_r ξ^r + ξ^r ∂_r ω_i(x) = 0.
When ω = (1, −x^3, 0), we obtain the special involutive system:
∂_1ξ^1 − x^3∂_1ξ^2 − (1/2)(∂_1ξ^1 + ∂_2ξ^2 + ∂_3ξ^3) = 0,
∂_2ξ^1 − x^3∂_2ξ^2 − ξ^3 + (x^3/2)(∂_1ξ^1 + ∂_2ξ^2 + ∂_3ξ^3) = 0,
∂_3ξ^1 − x^3∂_3ξ^2 = 0.
For an arbitrary ω, we may ask for the differential conditions on ω such that all the equations of order r + 1 are only obtained by differentiating r times the first order equations, exactly like in the special situation just considered where the system is involutive. We notice that, in a symbolic way, ω ∧ dω is now a scalar c(x), providing the zero order equation ξ^r ∂_r c(x) = 0, and the condition is c(x) = c = cst. The integrability condition (IC) is the Vessiot structure equation:
ω_1(∂_2ω_3 − ∂_3ω_2) + ω_2(∂_3ω_1 − ∂_1ω_3) + ω_3(∂_1ω_2 − ∂_2ω_1) = c,
involving the only structure constant c.
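The constant-curvature Vessiot structure equation of Example 1.9 can be checked on a concrete metric; the SymPy sketch below (our own, using the unit 2-sphere as a toy example) computes the Christoffel symbols and the Riemann tensor from the formulas above and verifies ρ^k_{lij} = c(δ^k_i ω_{lj} − δ^k_j ω_{li}) with c = 1.

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
X = [th, ph]
w = sp.Matrix([[1, 0], [0, sp.sin(th) ** 2]])   # round metric of the unit 2-sphere
winv = w.inv()
n = 2

def gamma(k, i, j):
    # Christoffel symbols gamma^k_ij of the Levi-Civita connection
    return sp.Rational(1, 2) * sum(
        winv[k, r] * (sp.diff(w[r, j], X[i]) + sp.diff(w[r, i], X[j]) - sp.diff(w[i, j], X[r]))
        for r in range(n))

def rho(k, l, i, j):
    # rho^k_{l,ij} = d_i gamma^k_{lj} - d_j gamma^k_{li} + gamma^k_{ri} gamma^r_{lj} - gamma^k_{rj} gamma^r_{li}
    return sp.simplify(sp.diff(gamma(k, l, j), X[i]) - sp.diff(gamma(k, l, i), X[j])
                       + sum(gamma(k, r, i) * gamma(r, l, j) - gamma(k, r, j) * gamma(r, l, i)
                             for r in range(n)))

c = 1   # constant curvature of the unit sphere
ok = all(sp.simplify(rho(k, l, i, j)
                     - c * (sp.eye(n)[k, i] * w[l, j] - sp.eye(n)[k, j] * w[l, i])) == 0
         for k in range(n) for l in range(n) for i in range(n) for j in range(n))
print(ok)   # True
```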
FIRST FUNDAMENTAL RESULT : Comparing the various Vessiot structure equations containing structure constants that we have just presented and that we recall below in a symbolic way, we notice that these structure constants are absolutely on equal footing though they have in general nothing to do with any Lie algebra.
Accordingly, the fact that the ones appearing in the MC equations are related to a Lie algebra is a coincidence and the Cartan structure equations have nothing to do with the Vessiot structure equations. Also, as their factors are either constant, linear or quadratic, any identification of the quadratic terms appearing in the Riemann tensor with the quadratic terms appearing in the MC equations is definitively not correct ( [22]). We also understand why the torsion is automatically combined with curvature in the Cartan structure equations but totally absent from the Vessiot structure equations, even though the underlying group (translations + rotations) is the same.
HISTORICAL REMARK 1.12: Despite the prophetic comments of the italian mathematician Ugo Amaldi in 1909 ([16,p46-52]), it has been a pity that Cartan deliberately ignored the work of Vessiot at the beginning of the last century and that the things did not improve afterwards in the eighties with Spencer and coworkers (Compare MR 720863 (85m:12004) and MR 954613 (90e:58166)).
SECOND PART : THE JANET AND SPENCER SEQUENCES
Let µ = (µ_1, ..., µ_n) be a multi-index with length |µ| = µ_1 + ... + µ_n, of class i if µ_1 = ... = µ_{i−1} = 0, µ_i ≠ 0, and set µ + 1_i = (µ_1, ..., µ_{i−1}, µ_i + 1, µ_{i+1}, ..., µ_n). We set y_q = {y^k_µ | 1 ≤ k ≤ m, 0 ≤ |µ| ≤ q} with y^k_µ = y^k when |µ| = 0. If E is a vector bundle over X with local coordinates (x, y) and J_q(E) is the q-jet bundle of E with local coordinates (x, y_q), the Spencer operator allows one to distinguish a section ξ_q from a section j_q(ξ) by introducing a kind of "difference" through the operator D : J_{q+1}(E) → T* ⊗ J_q(E) : ξ_{q+1} → j_1(ξ_q) − ξ_{q+1}, with local components (∂_i ξ^k(x) − ξ^k_i(x), ∂_i ξ^k_j(x) − ξ^k_{ij}(x), ...) and, more generally, (Dξ_{q+1})^k_{µ,i}(x) = ∂_i ξ^k_µ(x) − ξ^k_{µ+1_i}(x). Minus the restriction of D to the kernel S_{q+1}T* ⊗ E of the canonical projection π^{q+1}_q : J_{q+1}(E) → J_q(E) can be extended to the Spencer map δ : ∧^r T* ⊗ S_{q+1}T* ⊗ E → ∧^{r+1} T* ⊗ S_q T* ⊗ E. The kernel of D is made up of sections such that ξ_{q+1} = j_1(ξ_q) = j_2(ξ_{q−1}) = ... = j_{q+1}(ξ). Finally, if R_q ⊂ J_q(E) is a system of order q on E, locally defined by linear equations Φ^τ(x, y_q) ≡ a^{τµ}_k(x) y^k_µ = 0, its first prolongation R_{q+1} ⊂ J_{q+1}(E) is locally defined by the linear equations Φ^τ(x, y_q) = 0, d_i Φ^τ(x, y_{q+1}) ≡ a^{τµ}_k(x) y^k_{µ+1_i} + ∂_i a^{τµ}_k(x) y^k_µ = 0, and R_{q+r} has symbol g_{q+r} = R_{q+r} ∩ S_{q+r}T* ⊗ E ⊂ J_{q+r}(E), locally defined by a^{τµ}_k(x) ξ^k_{µ+ν} = 0, |µ| = q, |ν| = r, if one looks at the top order terms. If ξ_{q+1} ∈ R_{q+1} is over ξ_q ∈ R_q, differentiating the identity a^{τµ}_k(x) ξ^k_µ(x) ≡ 0 with respect to x^i and subtracting the identity a^{τµ}_k(x) ξ^k_{µ+1_i}(x) + ∂_i a^{τµ}_k(x) ξ^k_µ(x) ≡ 0, we obtain the identity a^{τµ}_k(x)(∂_i ξ^k_µ(x) − ξ^k_{µ+1_i}(x)) ≡ 0 and thus the restriction D : R_{q+1} → T* ⊗ R_q. This first order operator induces, up to sign, the purely algebraic monomorphism 0 → g_{q+1} → T* ⊗ g_q on the symbol level ([17], [24]). The Spencer operator has never been used in GR. DEFINITION 2.1: R_q is said to be formally integrable (FI) when the restriction π^{q+r+1}_{q+r} : R_{q+r+1} → R_{q+r} is an epimorphism ∀r ≥ 0. In that case, the Spencer form R_{q+1} ⊂ J_1(R_q) is a canonical equivalent formally integrable first order system on R_q with no zero order equations. DEFINITION 2.2: R_q is said to be involutive when it is formally integrable and the symbol g_q is involutive, that is all the δ-sequences ... → ∧^s T* ⊗ g_{q+r} → ... are exact, ∀ 0 ≤ s ≤ n, ∀ r ≥ 0. Equivalently, using a linear change of local coordinates if necessary, we may successively solve the maximum numbers β^n_q, β^{n−1}_q, ..., β^1_q of equations with respect to the leading or principal jet coordinates of strict order q and class n, n − 1, ..., 1. Then R_q is involutive if R_{q+1} is obtained by only prolonging the β^i_q equations of class i with respect to d_1, ..., d_i for i = 1, ..., n. In that case, such a prolongation procedure allows one to compute in a unique way the principal jets from the parametric ones and may also be applied to nonlinear systems as well ([6], [17]).
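A minimal symbolic illustration (ours) of the Spencer operator in the simplest case q = 1, m = n = 1: D measures the failure of a section ξ_1 = (ξ(x), ξ_x(x)) of the jet bundle to be the prolongation j_1(ξ) of its zero order part.

```python
import sympy as sp

x = sp.symbols('x')

def spencer(xi, xi_x):
    """(D xi_1)_x = d(xi)/dx - xi_x : vanishes iff the section xi_1 equals j_1(xi)."""
    return sp.simplify(sp.diff(xi, x) - xi_x)

print(spencer(sp.sin(x), sp.cos(x)))   # 0 : the section is the jet j_1(sin x)
print(spencer(sp.sin(x), sp.exp(x)))   # cos(x) - exp(x) : not a jet
```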
When R_q is involutive, the linear differential operator D : E → J_q(E) → J_q(E)/R_q = F_0 of order q, obtained by composing j_q with the canonical projection Φ, is involutive and one obtains the canonical linear Janet sequence
0 → Θ → E → F_0 → F_1 → ... → F_n → 0.
Each operator D_{r+1} : F_r → F_{r+1} is first order involutive, as it is induced by D : ∧^r T* ⊗ J_{q+1}(E) → ∧^{r+1} T* ⊗ J_q(E) : α ⊗ ξ_{q+1} → dα ⊗ ξ_q + (−1)^r α ∧ Dξ_{q+1}, and generates the compatibility conditions (CC) of the preceding one. As the Janet sequence can be cut at any place, the numbering of the Janet bundles has nothing to do with that of the Poincaré sequence, contrary to what many people believe in GR. Similarly, we have the involutive first Spencer operator D_1 : C_0 = R_q → J_1(R_q) → J_1(R_q)/R_{q+1} ≃ T* ⊗ R_q/δ(g_{q+1}) = C_1 of order one, induced by D : R_{q+1} → T* ⊗ R_q. Introducing the Spencer bundles C_r = ∧^r T* ⊗ R_q/δ(∧^{r−1} T* ⊗ g_{q+1}), the first order involutive (r + 1)-Spencer operator D_{r+1} : C_r → C_{r+1} is induced by D : ∧^{r+1} T* ⊗ R_{q+1} → ∧^{r+1} T* ⊗ R_q and we obtain the canonical linear Spencer sequence ([17, p 150]):
0 → Θ → C_0 → C_1 → ... → C_n → 0,
which is nothing else than the Janet sequence for the first order involutive system R_{q+1} ⊂ J_1(R_q). Introducing the other Spencer bundles C_r(E) = ∧^r T* ⊗ J_q(E)/δ(∧^{r−1} T* ⊗ S_{q+1}T* ⊗ E), the linear Spencer sequence is induced by the linear hybrid sequence:
0 → E → C_0(E) → C_1(E) → ... → C_n(E) → 0,
which is at the same time the Janet sequence for j_q and the Spencer sequence for J_{q+1}(E) ⊂ J_1(J_q(E)) ([17, p 153]). Such a sequence projects onto the Janet sequence, and the two fit into a commutative diagram with exact columns. In this diagram, which only depends on the linear differential operator D = Φ ∘ j_q, the epimorphisms Φ_r : C_r(E) → F_r for 0 ≤ r ≤ n are induced by the canonical projection Φ = Φ_0 : C_0(E) = J_q(E) → J_q(E)/R_q = F_0 if we start with the knowledge of R_q ⊂ J_q(E), or from the knowledge of an epimorphism Φ : J_q(E) → F_0 if we set R_q = ker(Φ). In the theory of Lie equations considered here, E = T, R_q ⊂ J_q(T) is a transitive involutive system of infinitesimal Lie equations of order q, and the corresponding operator D is a Lie operator. As an exercise, we invite the reader to draw this diagram in the affine and projective 1-dimensional cases.
Indeed, when n = 2, one has 3 parameters (2 translations + 1 rotation) and a commutative diagram which only depends on the left commutative square; in this diagram, there is no way to compare D_1 (curvature alone, as in Vessiot) with D_2 (curvature + torsion, as in Cartan).
To prove that the adjoint of D_1 provides the Cosserat equations, which can be parametrized by the adjoint of D_2, we may lower the upper indices by means of the constant Euclidean metric and look for the factors of ξ_1, ξ_2 and ξ_{1,2} = −ξ_{2,1} in the integration by parts of the corresponding sum. Finally, we get a nontrivial first order parametrization, σ^{11} = ∂_2φ_1 together with similar expressions for the other stress and couple-stress components, by means of the three arbitrary functions φ_1, φ_2, φ_3, in a coherent way with the Airy second order parametrization obtained if we set φ_1 = ∂_2φ, φ_2 = ∂_1φ, φ_3 = −φ when µ^1 = 0, µ^2 = 0, as we shall see in the third part.
The link between the FI of R_q and the CC of D is expressed by a diagram that may be used inductively, and the "snake theorem" ([17], [23]) then provides a long exact connecting sequence. If we apply such a diagram to first order Lie equations with no zero or first order CC, we have q = 1, E = T, and we may apply the Spencer δ-map to the top row with r = 2 in order to get a commutative diagram with exact rows and exact columns, except possibly the first column, which may not be exact at ∧^2T* ⊗ g_1. We shall denote by B^2(g_1) the coboundary as the image of the central δ, by Z^2(g_1) the cocycle as the kernel of the lower δ, and by H^2(g_1) the Spencer δ-cohomology at ∧^2T* ⊗ g_1 as the quotient.
Comparing the classical and conformal Killing systems by using the inclusions R_1 ⊂ R̂_1 ⇒ g_1 ⊂ ĝ_1, we finally obtain a commutative and exact diagram in which a diagonal chase allows us to identify the Ricci tensor with S_2T* ⊂ T* ⊗ T* ≃ T* ⊗ ĝ_2 and to split the right column ([22]). The Ricci tensor only depends on the "difference" existing between the classical Killing system and the conformal Killing system, namely the n second order jets (elations once more). The Ricci tensor, thus obtained without contracting the indices as usual, may be embedded in the image of the Spencer operator made up of 1-forms with values in 1-forms that we have already exhibited for describing EM. It follows that the foundations of both EM and GR are not coherent with jet theory and must therefore be revisited within this new framework.
THIRD PART: ALGEBRAIC ANALYSIS

EXAMPLE 3.1: Let a rigid bar be able to slide along a horizontal axis with reference position x, and attach two pendula, one at each end, with lengths l_1 and l_2, making small angles θ_1 and θ_2 with respect to the vertical. If we project Newton's law with gravity g on the perpendicular to each pendulum in order to eliminate the tension of the threads, and denote the time derivative by a dot, we get the two equations:

ẍ + l_1 θ̈_1 + g θ_1 = 0,    ẍ + l_2 θ̈_2 + g θ_2 = 0

As an experimental fact, starting from an arbitrary movement of the pendula, we can bring them to rest if and only if l_1 ≠ l_2, and we then say that the system is controllable.
More generally, we can bring the OD equations describing the behaviour of a mechanical or electrical system to the Kalman form ẏ = Ay + Bu with input u = (u_1, ..., u_p) and state y = (y_1, ..., y_m). We say that the system is controllable if, for any given y(0) and y(T) with T < ∞, one can find an input u(t) producing a trajectory y(t) that joins them. In 1963, R.E. Kalman discovered that the system is controllable if and only if rk(B, AB, ..., A^{m−1}B) = m. Surprisingly, such a functional definition admits a formal test which is only valid for Kalman-type systems with constant coefficients and is thus far from being intrinsic. In the PD case, the Spencer form will replace the Kalman form.

EXAMPLE 3.2: ẏ − u̇ = 0 ⇒ y(t) − u(t) = c = cst, so that u(T) − u(0) = y(T) − y(0) can always be achieved; the system is thus controllable in the sense of the definition, but z = y − u ⇒ ż = 0 is not controllable in the sense of the test.

EXAMPLE 3.3: ẏ_1 − a(t)y_2 − ẏ_3 = 0, y_1 − ẏ_2 + ẏ_3 = 0. Any way of bringing this system to Kalman form provides the controllability condition a(a − 1) ≠ 0 when a = cst, but nothing can be said if a = a(t). Also, getting y_1 from the second equation and substituting into the first, we get the second order OD equation ÿ_2 − ÿ_3 − ẏ_3 − a(t)y_2 = 0, for which nothing can be said at first sight.
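As a small numerical illustration of Kalman's rank criterion (our addition, not part of the original text; the matrices are arbitrary toy values), one simply stacks B, AB, ..., A^{m−1}B and checks the rank:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(m-1) B] column-wise for an m-state system."""
    m = A.shape[0]
    blocks = [B]
    for _ in range(m - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """Kalman test: y' = A y + B u is controllable iff rk(B, AB, ..., A^(m-1)B) = m."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# A generic 2-state, single-input example (toy values)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))   # True for this choice of (A, B)
```

As stressed above, this test only applies to constant-coefficient systems already brought to Kalman form.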
PROBLEM 1: Is a SYSTEM of OD or PD equations "controllable" (the answer must be YES or NO), and how can we define controllability?

Now, if a differential operator D : ξ → η is given, the direct problem is to find (generating) compatibility conditions (CC) in the form of an operator D_1 : η → ζ such that Dξ = η ⇒ D_1η = 0. Conversely, the inverse problem is to find an operator D_{−1} : θ → ξ such that D generates the CC of D_{−1}; we shall then say that D is parametrized by D_{−1}. Of course, solving the direct problem (Janet, Spencer) is necessary for solving the inverse problem.

EXAMPLE 3.4: When n = 2, the Cauchy equations for the stress in continuum mechanics are ∂_1σ^{11} + ∂_2σ^{21} = 0, ∂_1σ^{12} + ∂_2σ^{22} = 0 with σ^{12} = σ^{21}. Their parametrization σ^{11} = ∂_{22}φ, σ^{12} = σ^{21} = −∂_{12}φ, σ^{22} = ∂_{11}φ was discovered by Airy in 1862, and φ is called the Airy function. When n = 3, Maxwell and Morera discovered a similar parametrization with 3 potentials (exercise).

EXAMPLE 3.5: When n = 4, the Maxwell equations dF = 0, where F ∈ ∧²T* is the EM field, are parametrized by dA = F, where A ∈ T* is the 4-potential. The second set of Maxwell equations can also be parametrized by the so-called pseudopotential, which is a pseudovector density (exercise).

EXAMPLE 3.6: If n = 4, ω is the Minkowski metric and φ = GM/r is the gravitational potential, then φ/c² ≪ 1 and a perturbation Ω of ω may satisfy in vacuum the 10 second order Einstein equations for the 10 components of Ω. The parametrizing challenge was proposed in 1970 by J. Wheeler for 1000 $ and solved negatively in 1995 by the author, who only received 1 $.

PROBLEM 2: Is an OPERATOR parametrizable (the answer must be YES or NO), and how can we find a parametrization?
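Returning to Example 3.4, the Airy parametrization can be verified mechanically. The following sketch (our addition, with arbitrary symbol names) checks that it satisfies both Cauchy equations identically:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
phi = sp.Function('phi')(x1, x2)          # an arbitrary Airy function

# Airy parametrization of the plane stress field
s11 = sp.diff(phi, x2, 2)
s22 = sp.diff(phi, x1, 2)
s12 = s21 = -sp.diff(phi, x1, x2)

# Cauchy equilibrium equations: both simplify to zero for any phi
eq1 = sp.simplify(sp.diff(s11, x1) + sp.diff(s21, x2))
eq2 = sp.simplify(sp.diff(s12, x1) + sp.diff(s22, x2))
print(eq1, eq2)   # 0 0
```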
Let A be a unitary ring, that is, 1, a, b ∈ A ⇒ a + b, ab ∈ A, 1a = a, and even an integral domain, that is, ab = 0 ⇒ a = 0 or b = 0. We say that M = _A M is a left module over A if x, y ∈ M ⇒ ax, x + y ∈ M, ∀a ∈ A, and we denote by hom_A(M, N) the set of morphisms f : M → N such that f(ax) = af(x).

PROBLEM 3: Is a MODULE M torsion-free, that is, t(M) = 0 (the answer must be YES or NO), and how can we test such a property?
In the remainder of this paper we shall prove that these three problems are in fact identical, and that only the solution of the third provides the solution of the other two.
Let ℚ ⊂ K be a differential field, that is, a field (a ∈ K ⇒ 1/a ∈ K) equipped with n commuting derivations ∂_1, ..., ∂_n. Using an implicit summation on multi-indices, we may introduce the (noncommutative) ring of differential operators D = K[d_1, ..., d_n] = K[d] with elements P = a^µ d_µ such that |µ| < ∞ and d_i a = a d_i + ∂_i a. Now, if we introduce differential indeterminates y = (y^1, ..., y^m), we may extend d_i y^k_µ = y^k_{µ+1_i} to Φ^τ ≡ a^{τµ}_k y^k_µ → d_iΦ^τ ≡ a^{τµ}_k y^k_{µ+1_i} + ∂_i a^{τµ}_k y^k_µ for τ = 1, ..., p. Therefore, setting Dy^1 + ... + Dy^m = Dy ≃ D^m, we obtain by residue the differential module or D-module M = Dy/DΦ. Introducing the two free differential modules F_0 ≃ D^m, F_1 ≃ D^p, we obtain equivalently the free presentation F_1 → F_0 → M → 0. More generally, introducing the successive CC as in the preceding section, we may finally obtain the free resolution of M, namely the exact sequence ... → F_2 → F_1 → F_0 → M → 0, the maps being induced by D_2, D_1, and D respectively.
The key "trick" is to let D act on the left on column vectors in the operator case and on the right on row vectors in the module case. Homological algebra has been created for finding intrinsic properties of modules that do not depend on their presentation or even on their resolution.

EXAMPLE 3.7: In order to understand that different presentations may nevertheless provide isomorphic modules, let us consider the linear inhomogeneous system Py ≡ d_{22}y = u, Qy ≡ d_{12}y − y = v with K = ℚ. Differentiating twice, we get y = d_{11}u − d_{12}v − v and two fourth order CC. However, as PQ = QP, we also get the CC C ≡ d_{12}u − d_{22}v − u = 0 and two corresponding resolutions, in which the two differential modules appearing on the right can be identified.

We now exhibit another approach by defining the formal adjoint of an operator P and of an operator matrix D, namely ad(D) : ∧^n T* ⊗ F* → ∧^n T* ⊗ E*, where E* is obtained from E by inverting the transition matrix; EM provides a fine example of such a procedure ([12]). Now, with operational notations, let us consider the two differential sequences ξ → η → ζ (given by D and D_1) and ν ← µ ← λ (given by their adjoints).

EXAMPLE 3.10: With ∂_{22}ξ = η_2, ∂_{12}ξ = η_1 for D, we get ∂_1η_2 − ∂_2η_1 = ζ for D_1. Then ad(D_1) is defined by µ_2 = −∂_1λ, µ_1 = ∂_2λ, while ad(D) is defined by ν = ∂_{12}µ_1 + ∂_{22}µ_2 for the stress tensor density ([16], [26]). Accordingly, the second order Airy parametrization is nothing else than the adjoint of the only Riemann CC involved, namely ∂_{11}ε_{22} + ∂_{22}ε_{11} − 2∂_{12}ε_{12} = 0, which is the linearization of the Riemann tensor of Example 1.9.

EXERCISE 3.20: Prove that ÿ_2 − ÿ_3 − ẏ_3 − a(t)y_2 = 0 is controllable if and only if ȧ + a² − a ≠ 0 (a Riccati-type condition) and find a parametrization. | 10,759 | 2013-06-12T00:00:00.000 | [
"Mathematics"
] |
The Cosmos in Its Infancy: JADES Galaxy Candidates at z > 8 in GOODS-S and GOODS-N
We present a catalog of 717 candidate galaxies at z > 8 selected from 125 square arcmin of NIRCam imaging as part of the JWST Advanced Deep Extragalactic Survey (JADES). We combine the full JADES imaging data set with data from the JWST Extragalactic Medium Survey and the First Reionization Epoch Spectroscopic COmplete Survey (FRESCO), along with extremely deep existing observations from the Hubble Space Telescope (HST)/Advanced Camera for Surveys (ACS), for a final filter set that includes 15 JWST/NIRCam filters and five HST/ACS filters. The high-redshift galaxy candidates were selected from their estimated photometric redshifts calculated using a template-fitting approach, followed by visual inspection from seven independent reviewers. We explore these candidates in detail, highlighting interesting resolved or extended sources, sources with very red long-wavelength slopes, and our highest-redshift candidates, which extend to z_phot ∼ 18. Over 93% of the sources are newly identified from our deep JADES imaging, including 31 new galaxy candidates at z_phot > 12. We also investigate potential contamination by stellar objects, and do not find strong evidence from spectral energy distribution fitting that these faint high-redshift galaxy candidates are low-mass stars. Using 42 sources in our sample with measured spectroscopic redshifts from NIRSpec and FRESCO, we find excellent agreement with our photometric redshift estimates, with no catastrophic outliers and an average difference of ⟨Δz⟩ = ⟨z_phot − z_spec⟩ = 0.26. These sources comprise one of the most robust samples for probing the early buildup of galaxies within the first few hundred million years of the Universe's history.
INTRODUCTION
The earliest galaxies that appeared from the Cosmic Dark Ages fundamentally changed the Universe. For hundreds of millions of years after recombination, the decoupling of matter and radiation, the Universe's baryon content consisted of predominantly neutral hydrogen that was gravitationally pooling and collecting, pulled by early dark matter halos. Eventually these massive clouds collapsed and formed the first stars, which gave off energetic ultraviolet (UV) radiation, ionizing the neutral hydrogen medium throughout the universe. Reionization is thought to have taken place across the first billion years after the Big Bang, but exactly how this process occurred, and more specifically, what types of galaxies are responsible for this phase transition, has been an active area of research for decades (Barkana & Loeb 2001; Stark 2016; Dayal & Ferrara 2018; Finkelstein et al. 2019; Ouchi et al. 2020; Robertson et al. 2023). Observations of early galaxies offer us a vital insight into the first stages of galaxy formation and evolution, and help us understand the emergence of the elements heavier than helium. To aid in understanding these distant sources, in this paper we present a sample of 717 galaxies and candidate galaxies with spectroscopic and photometric redshifts corresponding to the first 200 to 600 Myr after the Big Bang and describe their selection and properties.
To explore the very early universe, researchers search for galaxies at increasingly high redshifts using deep observations from space. One of the pioneering early universe surveys was the Hubble Space Telescope (HST) Deep Field project (HDF, Williams et al. 1996), a set of observations at wavelengths spanning the near-ultraviolet to near-infrared (IR). These data provided an opportunity to explore galaxy evolution out to z = 4 − 5 (Madau et al. 1996). Following the success of the HDF, the next decades were spent observing multiple deep fields down to unprecedented observational depths of 30 mag (AB) at optical and near-IR wavelengths. These surveys included the Hubble Ultra-Deep Field (HUDF, Beckwith et al. 2006), HUDF09 (Bouwens et al. 2011a), HUDF12 (Ellis et al. 2013; Koekemoer et al. 2013), the UVUDF (Teplitz et al. 2013), the HST Great Observatories Origins Deep Survey (GOODS, Giavalisco et al. 2004), the Cosmological Evolution Survey (COSMOS, Scoville et al. 2007), and the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS, Grogin et al. 2011; Koekemoer et al. 2011). The Brightest of Reionizing Galaxies survey (BoRG, Trenti et al. 2011) was a campaign to search for bright high-redshift galaxies across a wide (274 square arcminutes), but relatively shallow, area. Researchers hoping to target fainter galaxies also focused on lensing clusters, leading to the Cluster Lensing and Supernova Survey with Hubble (CLASH, Postman et al. 2012), the Hubble Frontier Fields (HFF, Lotz et al. 2017), and the Reionization Lensing Cluster Survey (RELICS, Coe et al. 2019).
It has therefore been exciting to see the fruits of these observations: the discovery of many thousands of galaxies at z > 4 (Bunker et al. 2004, 2010; Bouwens et al. 2011b; Lorenzoni et al. 2011; Ellis et al. 2013; McLure et al. 2013; Oesch et al. 2013; Schenker et al. 2013; Oesch et al. 2014; Bouwens et al. 2015; Finkelstein et al. 2015; Ishigaki et al. 2015; Harikane et al. 2016; McLeod et al. 2016; Oesch et al. 2018; Morishita et al. 2018; Bridge et al. 2019; Rojas-Ruiz et al. 2020; Bouwens et al. 2022; Bagley et al. 2022; Finkelstein et al. 2023). While these sources have been found through multiple methods, the primary method of high-redshift galaxy selection relies on photometry alone. Neutral hydrogen within, surrounding, and between distant galaxies serves to absorb ultraviolet radiation, leading to what is commonly referred to as the "Lyman break" in the spectral energy distribution (SED) at 912 Å. At redshifts above z > 5, the increasingly neutral hydrogen in the universe results in Lyman-α forest absorption between 912 Å and 1216 Å, and sources at these redshifts are more commonly known as "Lyman-α break" or "Lyman-α dropout" galaxies. By identifying galaxies where the Lyman break and Lyman-α break fell between two adjacent filters at a given redshift, these sources could be selected in large quantities, as done initially in Guhathakurta et al. (1990) and Steidel & Hamilton (1992). A similar approach involves fitting galaxy photometry to simulated or observed galaxy SEDs, a method that utilizes more data than pure color selection (Koo 1985; Lanzetta et al. 1996; Gwyn & Hartwick 1996; Pello et al. 1996; Koo 1999; Bolzonella et al. 2000). These results require accurate template sets that span the full color space of the photometric data, and include the effects of both dust extinction and intergalactic medium (IGM) absorption. This template-fitting procedure is uncertain at high redshifts given the current lack of UV and optical SEDs for galaxies in the early universe (for a review of selection methods for finding high-redshift galaxies, see Stern & Spinrad 1999; Giavalisco 2002; Dunlop 2013; Stark 2016).
While both galaxy selection techniques have been used to find galaxies out to z ∼ 10 with HST, the reddest filter on the telescope's Wide Field Camera 3 (WFC3/IR) is at 1.6 µm, such that potential galaxies at higher redshifts would have their Lyman-α break shifted out of the wavelength range of the instrument. Exploring the evolution of galaxies at earlier times was limited by the availability of deep, high-resolution near- and mid-IR observations. This changed with the launch of the James Webb Space Telescope (JWST) in late 2021, an observatory carrying a suite of sensitive infrared instruments behind a 6.5 m primary mirror. The instruments include NIRCam (Rieke et al. 2005), a high-resolution camera operating at 0.7 − 5.0 µm across a 9.7 square arcminute field of view, and NIRSpec (Jakobsen et al. 2022), a spectrograph operating at similar wavelengths with a unique multi-object shutter array capable of obtaining spectra at multiple resolutions.
In the first year of JWST science, researchers have identified scores of candidate high-redshift galaxies at z > 8 (Castellano et al. 2022; Naidu et al. 2022; Harikane et al. 2023; Finkelstein et al. 2023; Leethochawalit et al. 2023; Morishita et al. 2023; Whitler et al. 2023; Yan et al. 2023; Donnan et al. 2023; Adams et al. 2023; Austin et al. 2023; Harikane et al. 2023; Pérez-González et al. 2023; Atek et al. 2023). Some of these sources have been spectroscopically confirmed at z > 8 (Arrabal Haro et al. 2023a,b), demonstrating the efficacy of using NIRCam for early universe observations. It should be noted, however, that this is an imperfect science: Arrabal Haro et al. (2023a) describe how the early bright z ∼ 16 candidate CEERS-93316 was spectroscopically found to be at z_spec = 4.9, with strong line emission and dust obscuration simulating the colors of a distant galaxy, a possibility discussed in Naidu et al. (2022) and Zavala et al. (2023).
One of the largest JWST Cycle 1 extragalactic surveys by time allocation is the JWST Advanced Deep Extragalactic Survey (JADES; Rieke & JADES Collaboration 2023), a GTO program that will eventually encompass 770 hours of observations from three of the telescope's instruments: NIRCam, NIRSpec, and the mid-infrared instrument MIRI. These data, which focus on the GOODS-S and GOODS-N regions of the sky, are ideal for finding and understanding the most distant galaxies through imaging and follow-up spectroscopy. Because the JADES target regions have been observed by multiple telescopes and instruments across the electromagnetic spectrum, there is a rich quantity of ancillary data for comparing with JWST images and spectroscopy.
Early JADES observations resulted in the discovery of the highest-redshift spectroscopically-confirmed galaxy thus far, JADES-GS-z13-0 (z_spec = 13.20 +0.04 −0.07) (Robertson et al. 2023; Curtis-Lake et al. 2023). Because of NIRCam's wavelength range and dichroic offering simultaneous short wavelength (0.7 − 2.3 µm) and long wavelength (2.4 − 5.0 µm) images, these and other high-redshift candidates are detected in multiple bands at wavelengths longward of the Lyman-α break. The high-redshift galaxies that can be observed thanks to the wavelength coverage of JWST are vital for exploring the potential downturn in the number density of ultra-high-redshift (z ≳ 10) galaxies previously predicted by HST observations alone (Oesch et al. 2018).
In this study, we present the results of a search through the first year of JADES NIRCam imaging of the GOODS-S and GOODS-N regions for galaxy candidates at z > 8, where we combine the deepest HST optical and near-IR observations with JADES NIRCam data taken across ten filters. These data are supplemented by medium-band JWST imaging in five additional filters from both the publicly available JWST Extragalactic Medium Survey (JEMS) (Williams et al. 2023a) and First Reionization Epoch Spectroscopic COmplete Survey (FRESCO) (Oesch et al. 2023) programs. We perform template fitting in order to select high-redshift candidates, capitalizing on the large number of filters at wavelengths longer than 2 µm. Because of both the unparalleled HST coverage and the mixture of medium and wide NIRCam filters present in the JADES data, these data currently represent the best opportunity for uncovering galaxies at z > 8 with minimal low-redshift interlopers. The deepest portions of the JADES dataset probe down to 5σ depths of 2.17 nJy (30.6 mag AB) at 2.7 µm, currently deeper than the other similar JWST extragalactic fields studied in the literature. In addition, because of the FRESCO grism spectra and the JADES NIRSpec spectroscopy, we also have a number of spectroscopic redshifts for these sources confirming their selection, providing constraints on the accuracy of photometric redshifts for galaxies in the early universe.
The structure of this paper is as follows. We begin by introducing the JADES dataset used in this study, and we discuss our data reduction and photometric and spectroscopic measurements in Section 2. In Section 3 we describe how we estimate photometric redshifts and, from these results, select candidate galaxies at z > 8. We then spend the bulk of this study exploring the resulting sample in Section 4, separating the objects into three bins: z = 8 − 10 (§4.1), z = 10 − 12 (§4.2), and z > 12 (§4.3). We then consider candidate galaxies that fall out of our primary selection either because of their template fits (§4.4) or their proximity to brighter sources (§4.5). We also discuss the possibility of these sources being low-mass stars (§4.6), describe which candidates have been included in samples previous to this study (§4.7), and explore the impact of different galaxy template sets for photometric redshifts (§4.8). Finally, we examine the selection and further properties of these sources in Section 5 and conclude in Section 6. Throughout this paper we assume the Planck Collaboration et al. (2020) cosmology with H_0 = 67.4 km s^−1 Mpc^−1, Ω_M = 0.315, and Ω_Λ = 0.685. All magnitudes are provided using the AB magnitude system (Oke 1974; Oke & Gunn 1983).
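As a quick sanity check of the redshift-to-age conversion quoted in the introduction (this sketch is our addition, not part of the original text, and assumes astropy is installed), the adopted cosmology gives ages of a few hundred Myr at the redshifts targeted here:

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmological parameters adopted in this paper (Planck Collaboration et al. 2020)
cosmo = FlatLambdaCDM(H0=67.4 * u.km / u.s / u.Mpc, Om0=0.315)

for z in (8, 10, 12, 14, 18):
    print(f"z = {z:2d}: age of the Universe = {cosmo.age(z).to(u.Myr):.0f}")
# z = 8 corresponds to roughly 600 Myr after the Big Bang, while the
# highest-redshift candidates (z ~ 15-18) correspond to roughly 200-270 Myr.
```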
JADES IMAGING AND PHOTOMETRY
JADES is a joint Guaranteed Time Observations (GTO) program between the NIRCam and NIRSpec extragalactic GTO teams that consists of NIRCam imaging, NIRSpec spectroscopy, and MIRI imaging across the GOODS-S (RA = 53.126 deg, DEC = −27.802 deg) and GOODS-N (RA = 189.229 deg, DEC = +62.238 deg) (Giavalisco et al. 2004) fields. In this section we describe the Cycle 1 JADES observations taken as of February 8, 2023, the data reduction, and the measurement of fluxes and spectroscopic redshifts. The full description of these observations is provided in Eisenstein et al. (2023a).
Observations
In this paper we will discuss galaxy candidates selected from the NIRCam imaging in both GOODS-S, with observations taken on UT 2022-09-29 through 2022-10-10 (Program 1180, PI: Eisenstein), and GOODS-N, with observations taken on UT 2023-02-03 through 2023-02-07 (Program 1181, PI: Eisenstein). In addition, a set of NIRCam parallels (9.8 square arcmin each) were observed during NIRSpec observation PID 1210 (PI: Ferruit) on UT 2022-10-20 to 2022-10-24 within and southwest of the JADES Medium footprint in GOODS-S. Another set of NIRCam observations (9.8 square arcmin) parallel to NIRSpec PID 1286 (PI: Ferruit) were observed on UT 2023-01-12 to 2023-01-13 to the northwest of the JADES Deep footprint in GOODS-S. These data were partly presented in both Robertson et al. (2023) and Tacchella et al. (2023), although here we combine the full suite of JADES data observed as of February 8, 2023.
The total current survey area of the JADES GOODS-S is 67 square arcminutes, with 27 square arcminutes for the JADES Deep program, and 40 square arcminutes for the JADES Medium program.The filters used for JADES Deep are NIRCam F090W, F115W, F150W, F200W, F277W, F335M, F356W, F410M, and F444W (λ = 0.8 − 5.0µm), while JADES Medium uses the same filters without F335M.For the 1286 parallel, the JADES observations include the F070W filter.
The total current area of the NIRCam GOODS-N program is 58 square arcminutes. The NIRCam filters observed for GOODS-N are F090W, F115W, F150W, F200W, F277W, F335M, F356W, F410M, and F444W (λ = 0.8 − 5.0 µm). The GOODS-N observations are separated into two portions: the northwest (NW) portion, which covers 30.4 square arcminutes, and a southeast (SE) portion, which covers 27.6 square arcminutes. The NW portion was taken under PID 1181 (PI: Eisenstein) with NIRCam as the prime instrument and MIRI in parallel, while the SE portion was taken as part of the same program with NIRSpec as prime and NIRCam in parallel.
We also include observations taken for the JWST Extragalactic Medium-band Survey (JEMS, Williams et al. 2023a). These data, which are part of program PID 1963 (PIs C. Williams, S. Tacchella, M. Maseda), were taken on UT 2022-10-12. For this study, we use the NIRCam data from JEMS, which cover the Ultra Deep Field (UDF, Beckwith et al. 2006) with the NIRCam A module, with the NIRCam B module to the southwest, spanning the JADES Deep and Medium portions, for a total area of 10.1 square arcminutes. The NIRCam observations in the JEMS survey were taken with the F182M, F210M, F430M, F460M, and F480M filters (Williams et al. 2023a).
We also supplement our observations with NIRCam data from the First Reionization Epoch Spectroscopic COmplete Survey (FRESCO, PID 1895, PI P. Oesch). While nominally a NIRCam grism survey across GOODS-S and GOODS-N, we use the FRESCO F182M, F210M, and F444W imaging of GOODS-S and GOODS-N to supplement the filters available in JADES. The FRESCO area extends beyond the JADES Deep and Medium region, and we do not select galaxies in this region due to the lack of NIRCam filter coverage afforded by the JADES observations. We use the FRESCO grism data as well as the NIRSpec observations from PID 1210 and 1286 to measure spectroscopic redshifts for sources within our sample.
The GOODS-S and GOODS-N regions have been the target of deep HST observations, and we utilize existing HST/ACS and WFC3 mosaics. We use the HST/ACS mosaics from the Hubble Legacy Fields (HLF) v2.0 for GOODS-S and v2.5 for GOODS-N (25′ × 25′ for GOODS-S, and 20.5′ × 20.5′ for GOODS-N; Illingworth et al. 2013; Whitaker et al. 2019). We use data in the HST/ACS F435W, F606W, F775W, F814W, and F850LP filters.
JADES NIRCam
The data reduction techniques used in this present study will be fully described in a future paper (Tacchella et al. in prep), but they follow the methods outlined in Robertson et al. (2023) and Tacchella et al. (2023), which we briefly summarize here.For both the JADES GOODS-S and GOODS-N observations, the data were first reduced using the JWST calibration pipeline v1.9.2, with the JWST Calibration Reference Data System (CRDS) context map 1039.The raw images (uncal frames) are processed using the default JWST Stage 1 pipeline, which performs the detector-level corrections and results in count-rate images (rate frames).
The JWST pipeline Stage 2 involves flat fielding and flux calibration, and was run largely with the default values. We convert from counts/s to MJy/sr following Boyer et al. (2022). During the data reduction, we discovered that the current long wavelength flats used in the JWST pipeline result in non-astrophysical artifacts in the final mosaics. To mitigate this effect, we developed our own sky flats, stacking in each filter 80 − 200 source-masked raw uncal frames from across PID 1180, 1210, 1286, and JEMS. For F335M and F410M, where we did not have enough exposures to properly perform this stacking procedure, we instead constructed these sky flats via interpolation using the other wide-band LW sky flats.
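The sky-flat construction described above amounts to a normalized, source-masked median stack. The following is a minimal sketch of that idea (our addition, not the team's pipeline code); the function name and the in-memory lists of frames and masks are placeholders.

```python
import numpy as np

def build_sky_flat(frames, source_masks):
    """Median-combine source-masked, normalized frames into a sky flat.

    frames       : list of 2D arrays (count-rate images in a single filter)
    source_masks : list of boolean arrays, True where real sources fall
    """
    stack = []
    for frame, mask in zip(frames, source_masks):
        masked = np.where(mask, np.nan, frame)    # blank out source pixels
        masked = masked / np.nanmedian(masked)    # normalize each frame to its sky level
        stack.append(masked)
    flat = np.nanmedian(np.array(stack), axis=0)  # pixel-wise median over all frames
    return flat / np.nanmedian(flat)              # renormalize the flat to unity

# Toy usage with synthetic frames
rng = np.random.default_rng(0)
frames = [rng.normal(1.0, 0.01, (64, 64)) for _ in range(10)]
masks = [np.zeros((64, 64), dtype=bool) for _ in range(10)]
flat = build_sky_flat(frames, masks)
```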
After Stage 2, we used custom corrections for common features seen in JWST/NIRCam data (Rigby et al. 2023). We fit and subtracted the 1/f noise (Schlawin et al. 2020) assuming a parametric model. To fit for the scattered-light "wisps" in the NIRCam SW channel, we constructed templates by stacking our images from the JADES program (PIDs 1180, 1210, and 1286) as well as other publicly available programs (PIDs 1063, 1345, 1837, and 2738), and then subtracted these scaled templates for the SW channel detectors A3, A4, B3, and B4 (Tacchella et al. in prep). The background was removed using the photutils Background2D class (Bradley et al. 2023).
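For readers unfamiliar with that step, a minimal photutils-based background subtraction looks roughly like the sketch below (our addition; the box and filter sizes are illustrative choices, not the values used for the JADES mosaics).

```python
import numpy as np
from astropy.stats import SigmaClip
from photutils.background import Background2D, MedianBackground

# Placeholder for a calibrated 2D NIRCam frame
image = np.random.default_rng(1).normal(0.01, 0.002, size=(1024, 1024))

bkg = Background2D(
    image,
    box_size=(64, 64),               # illustrative mesh size in pixels
    filter_size=(3, 3),              # median filter applied to the background mesh
    sigma_clip=SigmaClip(sigma=3.0),
    bkg_estimator=MedianBackground(),
)
image_sub = image - bkg.background   # background-subtracted frame
```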
We created our final mosaics using the JWST Pipeline Stage 3, after performing an astrometric alignment using a custom version of the JWST TweakReg software.In both GOODS-S and GOODS-N, we calculated the relative and absolute astrometric corrections for the individual images grouped by visit and by photometric band.We matched to sources in a reference catalog created from HST F814W and HST F160W mosaics with astrometry tied to Gaia-EDR3 (Gaia Collaboration et al. 2021, private communication from G. Brammer).Following this alignment, we performed the default steps of Stage 3 of the JWST pipeline for each filter and visit.For our final mosaics we chose a pixel scale of 0.03 arcsec/pixel and drizzle parameter of pixfrac=1 for both the SW and LW images.
FRESCO
The FRESCO (Oesch et al. 2023) NIRCam grism spectroscopic data in the F444W filter (λ = 3.9−5.0µm) were reduced and analyzed following the routines in Sun et al. (2023) and Helton et al. (2023).Here we briefly summarize the main steps of the process.Because we aim to conduct a targeted emission line search of [O III] and Hβ lines for our z > 8 galaxy candidates, and we do not expect any of them to have strong continuum emission that can be detected with grism data, we used a median-filtering technique to subtract out the remaining continuum or background on a row-by-row basis, following the methods outlined by Kashino et al. (2023).We extracted 2D grism spectra using the continuum-subtracted emission-line maps for all objects that are brighter than 28.5 AB mag in the F444W band and within the FRESCO survey area.The emission lines from sources fainter than 28.5 AB mag are not expected to be detected with FRESCO.The FRESCO short-wavelength parallel imaging observations were used for both astrometric and wavelength calibration of the F444W grism spectroscopic data.
We extracted 1D spectra from the 2D grism spectra using the optimal extraction algorithm (Horne 1986) with the light profiles of sources in the F444W filter. We then performed automatic identifications of > 3σ peaks in the 1D spectra (see Helton et al. 2023), and fit these detected peaks with Gaussian profiles. We tentatively assigned spectroscopic redshifts for > 3σ peaks which minimize the difference from the estimated photometric redshifts (Section 4.9). Visual inspection was performed on these tentative spectroscopic redshift solutions, and spurious detections caused by either noise or contamination were removed. The final grism spectroscopic redshift sample of JADES sources will be presented in a forthcoming paper from the JADES collaboration.
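To illustrate the peak-fitting step, here is a hedged sketch (our addition, using a synthetic spectrum, not the survey's extraction code) of fitting a Gaussian to a detected line and converting the centroid to a redshift under the assumption that the line is [O III] λ5007; the wavelengths and noise level are invented for the example.

```python
import numpy as np
from astropy.modeling import models, fitting

# Synthetic 1D grism spectrum: wavelength in micron, flux in arbitrary units
rng = np.random.default_rng(2)
wave = np.linspace(4.30, 4.40, 400)
flux = 0.8 * np.exp(-0.5 * ((wave - 4.355) / 0.0015) ** 2) + rng.normal(0, 0.05, wave.size)

# Gaussian line on top of a constant continuum/background level
init = models.Gaussian1D(amplitude=0.8, mean=4.355, stddev=0.002) + models.Const1D(0.0)
fitter = fitting.LevMarLSQFitter()
best = fitter(init, wave, flux)

lam_obs_ang = best[0].mean.value * 1.0e4   # micron -> Angstrom
z_line = lam_obs_ang / 5008.24 - 1.0       # if the line is [O III] 5007 (vacuum wavelength)
print(f"line redshift = {z_line:.3f}")
```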
JADES NIRSpec
In addition to the FRESCO data, we discuss NIRSpec spectroscopic redshifts in Section 4.9, and these were reduced following the same procedure as outlined in Curtis-Lake et al. (2023), Cameron et al. (2023), Bunker et al. (2023b), and Bunker et al. (2023a). For the present study, we are only using the derived spectroscopic redshifts from these data.
Photometry
To compute the photometry from both the GOODS-S and GOODS-N mosaics in each filter, we used the software package jades-pipeline developed by authors BR, BDJ, and ST. We began by creating an inverse-variance-weighted stack of the NIRCam F277W, F335M, F356W, F410M, and F444W images as an ultra-deep signal-to-noise ratio (SNR) image. From this SNR image, jades-pipeline utilizes software from the Photutils package to define a catalog of objects with five contiguous pixels above a SNR of 3 (Bradley et al. 2023), creating a segmentation map in the process.
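In code, the detection step amounts to weighting each long-wavelength mosaic by its inverse variance, converting the stack to a signal-to-noise map, and segmenting it. The sketch below is our own simplified illustration with synthetic arrays, not the jades-pipeline implementation.

```python
import numpy as np
from photutils.segmentation import detect_sources

rng = np.random.default_rng(3)
# Synthetic stand-ins for the F277W/F335M/F356W/F410M/F444W mosaics and 1-sigma error maps
sci_images = [rng.normal(0.0, 1.0, (200, 200)) for _ in range(5)]
err_images = [np.ones((200, 200)) for _ in range(5)]
for sci in sci_images:
    sci[100:103, 100:103] += 10.0                # a fake source present in every band

def snr_stack(sci, err):
    """Inverse-variance-weighted stack, converted to a signal-to-noise map."""
    weights = [1.0 / e**2 for e in err]
    wsum = np.sum(weights, axis=0)
    stacked = np.sum([s * w for s, w in zip(sci, weights)], axis=0) / wsum
    return stacked * np.sqrt(wsum)               # SNR = stacked signal / (1 / sqrt(sum of weights))

snr = snr_stack(sci_images, err_images)
segmap = detect_sources(snr, threshold=3.0, npixels=5)   # >= 5 contiguous pixels above SNR of 3
print(0 if segmap is None else segmap.nlabels)
```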
From this catalog, we calculated circular and Kron aperture photometry on both the JWST NIRCam mosaics as well as the 30 mas pixel scale HST Legacy Fields mosaics (Illingworth et al. 2016; Whitaker et al. 2019) for the ACS F435W, F606W, F775W, F814W, and F850LP filters. Forced photometry was performed using a range of aperture sizes. The uncertainties we report were measured by combining in quadrature both the Poisson noise from the source and the noise estimated from random apertures placed throughout the image (e.g. Labbé et al. 2005; Quadri et al. 2007; Whitaker et al. 2011). Elliptical Kron aperture fluxes were measured using Photutils with a Kron parameter of K = 2.5 and the default circularized radius six times larger than the Gaussian-equivalent elliptical sizes, while masking segmentation regions of any neighboring source. We created empirical HST/ACS and JWST/NIRCam point spread functions to estimate and apply aperture corrections assuming point source morphologies (Z. Chen, private communication).
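The circular-aperture measurement and the random-aperture noise estimate can be sketched as follows (our addition, using a synthetic image; the Poisson term mentioned above is omitted for brevity, and the positions, pixel scale, and number of blank apertures are illustrative).

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

rng = np.random.default_rng(4)
image = rng.normal(0.0, 0.01, (500, 500))     # placeholder background-subtracted mosaic
image[248:252, 248:252] += 1.0                # a fake source near the center

# A 0.2" diameter aperture on the 0.03"/pixel mosaics -> radius of ~3.3 pixels
radius_pix = 0.1 / 0.03
src_aper = CircularAperture([(250.0, 250.0)], r=radius_pix)
flux = aperture_photometry(image, src_aper)["aperture_sum"][0]

# Empirical noise: the same aperture dropped at many random (assumed blank) positions
xy = rng.uniform(50, 450, size=(500, 2))
blank_sums = aperture_photometry(image, CircularAperture(xy, r=radius_pix))["aperture_sum"]
flux_err = np.std(blank_sums)
print(f"flux = {flux:.3f} +/- {flux_err:.3f} (image units)")
```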
For this present study we will fit to the JADES "CIRC1" (0.2 ′′ diameter aperture) fluxes, which reduces the background noise associated with the use of larger apertures, and is appropriate given the typically small sizes found for high-redshift galaxies (Shibuya et al. 2015;Curtis-Lake et al. 2016;Robertson et al. 2023;Tacchella et al. 2023).We note that in Section 4.1 we discuss a sample of morphologically extended sources with photometric redshifts z a = 8 − 9, although these sources consist of multiple smaller clumps, supporting the use of the smaller aperture photometry for their selection.We also use Kron aperture fluxes to calculate some derived parameters, such as the UV magnitude M U V to better encompass the full flux from more extended sources.We estimated the 5σ limiting flux across both GOODS-S and GOODS-N from the 0.2 ′′ diameter fluxes and uncertainties.In Table 1, we report these 5σ limiting fluxes in nJy for these portions of GOODS-S: JADES Deep, JADES Medium, the 1210 Parallel, and the 1286 Parallel.In addition, we report the limiting fluxes for both the shallower NW portion of the GOODS-N field and the SE portion.Understanding these depths is important for exploring the recovery of high-redshift galaxies across the JADES data.
GALAXY SELECTION AT z > 8
Our final photometric catalogs span 20 optical and near-IR filters, including both HST/ACS and JWST/NIRCam observations. Because of the multiple datasets included in these catalogs, however, objects will only have coverage in a subset of these filters, with the maximum number being in the area of the JEMS survey in the GOODS-S region, where there is coverage in 19 filters (F070W was only observed in the 1286 parallel, no portion of which overlaps with JEMS). In this section we describe how we identified z > 8 sources from the measured flux catalog. Throughout this study, we will identify sources using "JADES-GS-" or "JADES-GN-" followed by the right ascension (RA) and declination (DEC) values in decimal degrees corresponding to the source.
As discussed in the introduction, we choose to employ template fitting in this study due to the large quantity of available data in the JADES data set, especially longward of the potential Lyman-α break for objects at z > 8.The rest-frame UV and optical continuum can be fit with the templates as well, better constraining the exact redshift than with color selection alone.In addition, potential strong optical emission lines such as [OIII]λ5007 observed in high-redshift galaxies can boost the flux in photometric filters, and can be modeled with template fitting.The JADES data set includes multiple medium-band filters longward of 3µm, where these effects can be more significant.
EAZY Photometric Redshifts
In order to estimate the redshifts of the GOODS-S galaxies, we used the photometric redshift code EAZY (Brammer et al. 2008).EAZY combines galaxy templates and performs a grid-search as a function of redshift.We used the EAZY photometric redshift z a , corresponding to the minimum χ 2 of the template fits, to identify high-redshift galaxies.For the fits, we started with the EAZY "v1.3" templates, which we plot in the left panel in Figure 1.These templates include the original seven templates modified from Brammer et al. (2008) to include line emission, the dusty template "c09 del 8.6 z 0.019 chab age09.40av2.0.dat," and the high-equivalent-width template taken from Erb et al. (2010) (the equivalent width of [OIII]λ5007 measured for the galaxy this template is derived from, Q2343-BX418, is 285 Å).We supplemented these with seven additional templates that were designed to optimize photometric redshift estimates for mock galaxy observations from the JAGUAR simulations (Williams et al. 2018).These templates were created to better span the observed color space of the JAGUAR galaxies, including both red, dusty and blue, UV-bright populations.Similar to what has been demonstrated by other authors (e.g.Larson et al. 2023), we found that young galaxies with very high specific star-formation rates can have very blue observed UV continuum slopes, which is made more complex due to strong nebular continuum and line emission (Topping et al. 2022).To aid in fitting these galaxies, we generated additional templates using Flexible Stellar Population Synthesis (fsps, Conroy & Gunn 2010), and added these to the "5Myr" and "25Myr" simple stellar population models introduced in Coe et al. (2006) for fitting blue galaxies in HUDF.We show our additional templates in the right panel of Figure 1, and we provide these templates online hosted on Zenodo: https://doi.org/10.5281/zenodo.7996500.Multiple templates from the full set contain nebular continuum and line emission, including that from Lyman-α.
In each redshift bin considered, EAZY combines all of the available templates together and applies an IGM absorption consistent with the redshift (Madau 1995).The best fit in that redshift bin, measured using the minimum χ 2 , is recorded in a χ 2 (z) surface that is output from the program.We explored the redshift range z = 0.01 − 22, with a redshift step size ∆z = 0.01.We did not adopt any apparent magnitude priors, as the exact relationship between galaxy apparent magnitude and redshift at z > 8 is currently not well constrained, so any attempt to impose a prior would serve to only remove faint objects from the sample.To prevent bright fluxes from overly constraining the fits and to account for any photometric calibration uncertainties not captured by the offset procedure described below (e.g.due to detector specific offsets as observed in Bagley et al. 2023), we set an error floor on the photometry of 5%, and additionally, we used the EAZY template error file "template error.v2.0.zfourge" to account for any uncertainties in the templates as a function of wavelength.We also explored the use of the EAZY templates discussed in Larson et al. (2023), which were used in finding high-redshift galaxies in the JWST Cosmic Evolution Early Release Science (CEERS) observations, and we describe how using these photometric redshifts affects our final sample in Section 4.8.
To match the EAZY template set to the observed fluxes in our catalog, we estimated photometric offsets with EAZY. We calculated the offsets for the GOODS-S and GOODS-N data separately, where we first fit the observed photometry for a sample of galaxies with an SNR in F200W between 5 and 20, and calculated the offsets from the observed photometry to the template photometry. We then applied these offsets to the photometry and re-fit, iterating on this procedure. We list the final photometric offsets that we used for GOODS-S and GOODS-N, normalized to F200W, in Table 2. These offsets are within 10% of unity for all of the filters, with the exception of a large offset used for the F850LP observations in GOODS-N. We find that the F850LP depths are among the shallowest in our dataset (Table 1), which is likely contributing to the large offset. While we observe differences between the GOODS-S and GOODS-N offsets, this is primarily driven by the comparison of the HST/ACS photometry to the NIRCam photometry. To demonstrate this, we recalculated the photometric redshifts but used identical filter sets between the sources in the two fields, excluding the HST/ACS bands and the NIRCam medium bands F182M, F210M, F430M, F460M, and F480M, where we have limited coverage in these filters in GOODS-S and GOODS-N. The median difference in the photometric offsets between the GOODS-S and GOODS-N fits for the remaining filters as provided in Table 2 is 0.006 with a standard deviation of 0.012. However, when we calculate the offsets without the HST/ACS bands or the NIRCam medium bands, the median difference goes down to 0.001 with a standard deviation of only 0.004, consistent with no difference. We used the χ²(z) values output from EAZY to calculate a probability P(z) ∝ exp(−χ²(z)/2), assuming a uniform redshift prior, where we normalize such that ∫ P(z) dz = 1. The P(z) and χ²(z) values allowed us to calculate P(z > 7), the summed probability from EAZY that the galaxy is at z > 7, as well as the χ² minimum for EAZY fits restricted to z < 7. These statistics, and others, are helpful for identifying and removing interlopers from our sample.
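As a concrete (and simplified) illustration of that conversion, the snippet below, which is our addition and not EAZY code, turns a χ²(z) curve into a normalized P(z) under a flat prior and integrates it above z = 7; the toy χ² surface is invented.

```python
import numpy as np

def pz_from_chi2(zgrid, chi2, z_split=7.0):
    """Convert a chi^2(z) curve into a normalized P(z) under a flat prior and
    return the integrated probability above z_split."""
    chi2 = np.asarray(chi2, dtype=float)
    pz = np.exp(-0.5 * (chi2 - chi2.min()))    # subtract the minimum for numerical stability
    pz /= np.trapz(pz, zgrid)                  # enforce  integral of P(z) dz = 1
    high = zgrid > z_split
    return pz, np.trapz(pz[high], zgrid[high])

zgrid = np.arange(0.01, 22.01, 0.01)
chi2 = 10.0 + 0.5 * (zgrid - 9.5) ** 2         # toy chi^2 surface with a minimum near z ~ 9.5
pz, p_gt7 = pz_from_chi2(zgrid, chi2)
print(f"P(z > 7) = {p_gt7:.3f}")
```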
In Figure 2 we show the EAZY fit to an object in GOODS-S, JADES-GS-53.17551-27.78064, along with the P(z) surface and the JADES NIRCam thumbnails. The source is an F115W dropout, with no visible flux at shorter wavelengths. The fit constrained to z < 7 produces significantly more F115W flux than is observed, lending evidence to this galaxy being at z > 9.
High-Redshift Galaxy Selection and Catalogs
Figure 2 caption (in part): The EAZY fit for JADES-GS-53.17551-27.78064 at the best-fit redshift z_a = 9.66. We plot the NIRCam photometry with red points, and the HST photometry with light purple points. The error bars represent the 1σ uncertainties on the fluxes. We plot 2σ upper limits with downward-pointing triangles. The blue line represents the fit corresponding to z_a and the gold line shows the best fit at z < 7. We show the minimum χ² value for the z_a fit, as well as the ∆χ² value corresponding to the difference between the minimum χ² and the χ² for the fit at z < 7. In the inset, we plot P(z), with the 1σ, 2σ, and 3σ uncertainty regions derived from the P(z) surface in shades of grey, and z_a with a blue line, along with the P(z) distribution for the z < 7 best fit in gold (also normalized to 1). Above the inset we provide the summed probability of the source being at z > 7, along with the EAZY redshifts corresponding to σ_68,low and σ_68,high. Below the SED we plot 2″ × 2″ thumbnails for the JADES NIRCam filters.

Because of the extensive deep photometric data for the GOODS-S and GOODS-N fields, we chose to use the EAZY photometric redshifts for finding z > 8 candidates, as template fitting utilizes more photometric data points in the fit than color selection by itself. Following work done in the literature (Finkelstein et al. 2023), we
selected galaxies at z > 8 by imposing these rules on the EAZY fits: 1. The redshift of the fit corresponding to the minimum χ², z_a, must be greater than 8.
2. The SNR in at least two photometric bands must be above 5. For this study, we chose NIRCam F115W, F150W, F200W, F277W, F335M, F356W, F410M, or F444W, as these filters are longward of the Lyman-α break at z > 8. We used the photometry derived using 0.2″ diameter apertures for measuring this SNR.
3. The summed probability of the galaxy being at z > 7 must be greater than 70%, or ∫_7^22 P(z) dz > 0.7.
4. The difference between the overall minimum χ 2 and the minimum χ 2 at z < 7, ∆χ 2 , must be greater than 4.
5. There should be no object within 0.3″ (10 pixels in the final JADES mosaics), or within the object's bounding box, that is 10 times brighter than the object.
For this study, we targeted galaxies at z_a > 8, as galaxies above this redshift should have no observed flux in the JWST/NIRCam F090W filter. This allows us to use the deep JADES F090W observations to aid in visually rejecting lower-redshift contaminants. The second requirement, that the source be detected in multiple bands, was chosen to ensure that the sources we selected were not artifacts found in individual exposures, such as cosmic rays or bad pixels. We imposed the EAZY ∫_7^22 P(z) dz > 0.7 limit (which we will shorten to "P(z > 7)") and the ∆χ² limit in order to help remove objects where EAZY could fit the observed SED at low redshift with high probability. In Harikane et al. (2023), the authors recommend the use of a more strict cut, ∆χ² > 9, and we consider this cut in Section 5. We also, in Section 4.4, discuss those objects where ∆χ² < 4 in our sample, as these sources, though faint, may contain true high-redshift galaxies that should be considered. Finally, we remove objects with close proximity to bright sources because of the possibility of selecting tidal features or stellar clusters near to the edges of relatively nearby galaxies. We list those objects that satisfied our other requirements but were close to a brighter source, along with discussion of these targets, in Section 4.5. We chose not to implement a direct cut on χ², as this metric is dependent on the flux uncertainties, which vary across the field in such a way as to make a comparison of the value between objects difficult and potentially non-meaningful. We still report the resulting χ² values, however. In comparison, the ∆χ² value is calculated from two fits to the same photometry and uncertainties, and is helpful in exploring the relative goodness of fits at different redshifts.
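Schematically, cuts 1-4 reduce to simple boolean masks over the catalog columns; the sketch below is our own illustration (the column names z_a, p_gt7, dchi2, and snr_* are placeholders, not the released catalog columns, and the proximity cut 5 is omitted because it requires positional matching against brighter neighbors).

```python
import numpy as np

def select_high_z(cat, snr_bands):
    """Apply selection cuts 1-4 to a structured catalog of EAZY results."""
    snr = np.vstack([cat[f"snr_{b}"] for b in snr_bands])
    n_detected = np.sum(snr > 5.0, axis=0)
    return (
        (cat["z_a"] > 8.0)        # cut 1: best-fit photometric redshift above 8
        & (n_detected >= 2)       # cut 2: at least two bands with SNR > 5
        & (cat["p_gt7"] > 0.7)    # cut 3: integrated P(z > 7) above 0.7
        & (cat["dchi2"] > 4.0)    # cut 4: delta chi^2 relative to the z < 7 fit
    )

# Tiny demonstration catalog with three sources
cat = np.zeros(3, dtype=[("z_a", float), ("p_gt7", float), ("dchi2", float),
                         ("snr_f200w", float), ("snr_f277w", float)])
cat["z_a"] = [9.2, 8.5, 11.0]
cat["p_gt7"] = [0.95, 0.40, 0.99]
cat["dchi2"] = [12.0, 2.0, 30.0]
cat["snr_f200w"] = [8.0, 6.0, 12.0]
cat["snr_f277w"] = [9.0, 7.0, 15.0]
print(select_high_z(cat, ["f200w", "f277w"]))   # [ True False  True]
```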
These cuts resulted in 1078 objects in GOODS-S and 636 objects in GOODS-N.From here, we began the process of visual inspection, first to remove obvious nonastrophysical data artifacts, including extended diffraction spikes from stars, and hot pixels caused by cosmic rays.We also removed extended, resolved lowredshift, dusty sources, many of which were not visible in HST imaging.These sources were identified by very red slopes between 1 and 5µm, and have half-light radii r half ≳ 1 ′′ in the filters where they are observed.These sources comprise only ∼ 0.4% of the objects that satisfy our cuts.After removing these sources, we were left with 580 possible objects in GOODS-S, and 212 objects in GOODS-N.There is a much larger fraction of spurious sources in GOODS-N as compared to GOODS-S as these data had a larger number of bright pixels and cosmic rays, which primarily affected the NIRCam longwavelength channels.
After this initial inspection, authors KH, JH, DE, MWT, CNW, LW, and CS independently graded each target with a grade of Accept, Reject, or Review. For those objects where 50% or more of the reviewers accepted the candidate, it was added to the final candidate list. In cases where greater than 50% of the reviewers chose to reject the candidate, the candidate was removed entirely from the candidate list. In all other cases (57 objects in GOODS-N and 102 objects in GOODS-S), the reviewers did one more round of visual inspection with only the grades Accept or Reject, with a larger discussion occurring for objects where necessary. Again, 50% or more Accept grades were required for these galaxies under review to be listed as part of the final sample.
RESULTS
Our final z > 8 samples consist of 535 objects in GOODS-S and 182 objects in GOODS-N.In Table 3 we provide the descriptions of the columns in our final catalog; the catalog itself is provided as an online table on Zenodo: https://doi.org/10.5281/zenodo.7996500.We include 0.2 ′′ diameter aperture photometry in each of the observed photometric bands, as well as the EAZY z a , χ 2 , P (z > 7), and ∆χ 2 values used in selecting the galaxies.We also provide the σ 68 , σ 95 , and σ 99 confidence intervals estimated from the P (z) distribution.In this table we also list the z > 8 candidates that have EAZY ∆χ 2 < 4, and we will discuss these sources in Section 4.4.Similarly, in our output table, we list those z > 8 candidates that were either within 0.3 ′′ or within the bounding box of a target 10 times brighter than the candidate, which we discuss in Section 4.5.
We show the positions of the GOODS-N sources in the left panel and the GOODS-S sources in the right panel of Figure 3. On these figures, we include both those with EAZY ∆χ 2 > 4 (dark points) and EAZY ∆χ 2 < 4 (lighter points).The relatively higher density of sources in the southern portion of the GOODS-N observations compared to the northern portion is a result of the increased observational depth in that region.In GOODS-N, we find 2.1 objects in our z > 8 sample per square arcminute in the NE footprint, and 4.3 objects per square arcminute in the SW footprint.Similarly, the deepest portions of the JADES GOODS-S coverage are the large rectangular JADES Deep region, and the smaller 1210 parallels, where a significantly higher density of objects are detected.In GOODS-S, we find 7.8 objects in our z > 8 sample per square arcminute in JADES Deep, 4.5 objects per square arcminute in JADES Medium, 4.1 objects per square arcminute in the 1286 parallel, and 13.9 objects per square arcminute in the 1210 parallel.
Figure 3 caption (in part): (Left) GOODS-N footprint. In grey, we plot the outline of the Hubble Deep Field (North) (Williams et al. 1996) as a comparison to the JADES survey area. (Right) GOODS-S footprint. The dashed blue outline highlights the JADES Deep GOODS-S area, and the solid blue outline highlights the JADES Medium GOODS-S region. The colored squares denote additional NIRCam pointings from the JEMS survey (burgundy), the 1210 parallels (purple), and the 1286 parallels (green). In grey, we plot the outline of the Hubble Ultra Deep Field (Beckwith et al. 2006). There is a noticeable increase in the density of sources in the JADES Deep and the ultra-deep 1210 parallel footprints.

In Figure 4, following similar work done in the literature (Finkelstein et al. 2023; Pérez-González et al. 2023; Austin et al. 2023), we show the F277W observed AB magnitude measured using a Kron aperture against the EAZY photometric redshift for each candidate z > 8 galaxy in GOODS-S and GOODS-N. Across the top we show the distribution of the photometric redshifts, and on the right side we show the F277W magnitude distribution for the photometric redshift sample as well as for the GOODS-N and GOODS-S samples independently. For those objects where we have spectroscopic redshifts from NIRSpec or FRESCO, we plot those values instead. In Figure 4 we can see how the usage of wide filters for estimating photometric redshifts leads to relative dearths of objects at z ∼ 10 and z ∼ 13, as these redshifts are where the Lyman-α break falls in the gaps between the F115W and F150W filters and between the F150W and F200W filters, respectively. This is an artificial effect: for Lyman-α-break galaxies, estimations of precise redshifts are highly predicated on the flux in the band that probes the break, and when the break sits between filters, the resulting redshifts are more uncertain. For example, faint galaxies at z ∼ 9 − 11 can have similar SEDs where there is flux measured in the F150W filter and none measured in the F115W filter. This degeneracy results in photometric redshifts of these galaxies of z_a ∼ 9.5, with broad χ² minima reflecting larger redshift uncertainties. At slightly higher redshifts, however,
the Lyman-α break moves into the F150W filter and this results in red F150W − F200W colors, leading to more precise photometric redshifts. This same effect is seen between the F150W and F200W filters at z ∼ 13. The usage of medium-band filters, like NIRCam F140M, F162M, F182M, and F210M, would help mitigate this effect somewhat for galaxies at these redshifts. We further explore these gaps by simulating galaxies across a uniform redshift range in Appendix A.
In this section we discuss the candidates in three subcategories: z_a = 8 − 10 (Section 4.1), z_a = 10 − 12 (Section 4.2), and z_a > 12 (Section 4.3). For each subcategory we describe the properties of the sample, plot example SEDs for galaxies spanning the magnitude and redshift range, and discuss notable examples.
z phot = 8 − 10 Candidates
We find 547 total galaxies and galaxy candidates combined across the JADES GOODS-S (420 sources) and GOODS-N (127 sources) areas at z_a = 8 − 10. We show a subsample of the EAZY SED fits and the JADES thumbnails for eight example candidate high-redshift galaxies in this photometric redshift range in Figure 5. In each plot, we show both the minimum χ² fit, as well as the fit constrained to be at z < 7. We chose these objects from the full sample to span a range of F277W Kron magnitudes as well as photometric redshifts.
Because of the availability of both NIRSpec and FRESCO spectroscopy for our sample, there are 34 (27 in GOODS-S and 7 in GOODS-N) galaxies in this photometric redshift range where a spectroscopic redshift has been measured. For 14 of these sources (13 in GOODS-S and 1 in GOODS-N), the resulting spectroscopic redshift is z_spec = 7.65 − 8.0. Because these objects satisfied our photometric redshift selection criteria, we choose to include them in our sample, and discuss their spectroscopic redshifts in Section 4.9.
Figure 4 caption (in part): F277W AB Kron magnitude plotted against the best-fitting EAZY z_a photometric redshift for the 717 galaxies and candidate galaxies in the GOODS-S (blue) and GOODS-N (red) z > 8 samples. Along the top we show the redshift distribution for the full sample (grey) as well as the GOODS-S and GOODS-N samples. On the right we show the magnitude distribution in a similar manner. The points colored with dark circles are plotted with the available spectroscopic redshifts for those sources, which, in many cases, extend to z < 8. We discuss these sources in Section 4.9. There is a lack of sources at z ∼ 10 because of how the Lyman-α break falls between the NIRCam F115W and F150W filters at this redshift, making exact photometric redshift estimates difficult. The brightest source in the sample is the spectroscopically-confirmed galaxy GN-z11 at z_spec = 10.6, and the highest-redshift spectroscopically-confirmed source is JADES-GS-z13-0 at z_spec = 13.2. There is one source, JADES-GN-189.15981+62.28898 at z_a = 18.79, which we plot as a right-facing arrow in the plot and discuss in Section 4.3.

There are a number of sources in this photometric redshift range with extended morphologies, often seen in the JADES data as multiple clumps observed in the images at shorter wavelengths. In Figure 6, we show a subsample of nine resolved galaxies with z_a = 8 − 9. For each object we show the F090W, F115W, and F356W thumbnails, along with a color image combining these three filters. Each thumbnail is 2″ on a side, showcasing the resolved sizes of some of these targets. At z = 8 − 10, 1″ corresponds to 4.6 − 4.9 kpc, and we provide a scale bar of 0.5″ in each panel. At these redshifts, F090W is to the blue of the Lyman-α break, so the galaxies should not appear in this filter, F200W spans the rest-frame UV, and F356W spans the rest-frame optical continuum. We are then seeing UV-bright star-forming clumps in the F200W filter, and rest-frame ∼4000 Å stellar continuum in F356W. We show two sources, JADES-GS-53.1571-27.83708 (top row, left column) and JADES-GS-53.08738-27.86033 (top row, middle column), which have spectroscopic redshifts from FRESCO at z_spec = 7.67 and z_spec = 7.96 respectively, as indicated below the photometric redshifts in the color panel.
These nine sources show multiple irregular morphologies, and many are elongated.
JADES-GN-189.18051+62.18047 is an especially complex system at z_a = 8.92, with four or five clumps that span almost 7 kpc at this photometric redshift, similar to the "chain of five" F150W dropout system presented in Yan et al. (2023). Five of the extended sources we highlight were previously presented in the literature: JADES-GS-53.1571-27.83708, JADES-GS-53.08738-27.86033, JADES-GS-53.08174-27.89883, JADES-GS-53.1459-27.82279, and JADES-GS-53.10393-27.89059 (McLure et al. 2013; Bouwens et al. 2015; Finkelstein et al. 2015; Harikane et al. 2016; Bouwens et al. 2021). Given the depth and resolution of NIRCam, we can see new details for these sources beyond what was observed in the HST ACS and WFC3 observations, such as the nearly ∼0.8″-long haze to the northeast of JADES-GS-53.10393-27.89059, which corresponds to about 4 kpc at the candidate photometric redshift.
z phot = 10 − 12 Candidates
We find a total of 137 galaxies and candidate galaxies at z_phot = 10 − 12: 92 in GOODS-S and 45 in GOODS-N. We show the EAZY SED fits and the JADES thumbnails for eight example candidates in this photometric redshift range in Figure 7.
In the GOODS-S region, this redshift range includes two of the spectroscopically-confirmed galaxies from Robertson et al. (2023) and Curtis-Lake et al. (2023), JADES-GS-z10-0 (z_spec = 10.38 +0.07 −0.06) and JADES-GS-z11-0 (z_spec = 11.58 +0.05 −0.05). The EAZY photometric redshifts for these targets are z_a = 10.84 for JADES-GS-z10-0 and z_a = 12.31 for JADES-GS-z11-0. Both photometric redshifts are higher than the measured spectroscopic redshifts, but considering the P(z) uncertainty in both measurements, they are within 2σ of the true values. Indeed, the ∆χ² between the minimum value corresponding to z_a and the value at z_spec is 10.25 for JADES-GS-z10-0 and 1.75 for JADES-GS-z11-0. In Robertson et al. (2023), the authors estimate photometric redshifts for these sources using the Bayesian stellar population synthesis fitting code Prospector (Johnson et al. 2021) and recover P(z) surfaces that are similarly offset to higher values than the spectroscopic redshifts. In the GOODS-N region, we find the brightest object overall in our sample (a Kron F277W aperture magnitude of 25.73 AB), GN-z11, discussed at length in Tacchella et al. (2023) and spectroscopically confirmed to lie at z = 10.603 ± 0.001 in Bunker et al. (2023b). In our EAZY fit, we estimate z_a = 11.0, which is within 2σ of the spectroscopic redshift, but again higher than the spectroscopic redshift. We further explore this difference in Section 4.9.
We want to highlight three galaxies seen in Figure 7 because of their extended, somewhat complex morphologies. JADES-GS-53.13918-27.84849 (z_a = 10.45, first row, right column), an F115W dropout, has three components and spans 0.5″, which is 2 kpc at this photometric redshift. We observe an increase in the F444W flux over what is seen at 3 − 4 µm, which could either be a result of [OII]λ3727 emission at this redshift or evidence of a Balmer break. The F115W dropout JADES-GS-53.09872-27.8602 (z_a = 10.69, second row, right column) is the southern clump of two morphologically distinct components separated by 0.3″ (1.2 kpc at this photometric redshift) in the rest-frame UV, which become less distinct at longer wavelengths. The northern clump, JADES-GS-53.09871-27.86016 (z_a = 9.59), is also in our sample, but the EAZY fit prefers a lower photometric redshift, which is consistent to within 1σ. Finally, JADES-GS-53.07597-27.80654 (z_a = 11.27, third row, right column) consists of two bright, connected clumps separated by 0.2″ (580 pc at this photometric redshift). The sources are detected as separate clumps in the relatively shallower FRESCO F182M and F210M data as well. These sources could be interacting seed galaxies or star-forming clumps in the very early universe.
z phot > 12 Candidates
We find 33 galaxies and candidate galaxies across both the JADES GOODS-S (23 sources) and GOODS-N (10 sources) footprints at z > 12. We show their SEDs and thumbnails for eight examples in Figure 8, and we show the remainder in Figures 18, 19, and 20 in Appendix B. For objects at these redshifts, the Lyman-α break falls in the F150W filter at z = 12, in between the F150W and F200W filters at z = 13.2, and in between the F200W and F277W filters at z = 17.7. The objects in our z > 12 sample, then, are a mixture of solid F150W dropouts and more tentative galaxies that show evidence for faint F200W flux associated with the Lyman-α break lying in that filter.
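As a sanity check on the filter boundaries quoted above, the location of the redshifted break follows directly from λ_obs = 1215.67 Å × (1 + z). The short sketch below reproduces this arithmetic; the approximate NIRCam filter edges are assumptions taken from public instrument documentation rather than values stated in this paper.

```python
# Sketch: where the Lyman-alpha break lands relative to a few NIRCam filters.
LYA_REST_UM = 0.121567  # rest-frame Lyman-alpha wavelength in microns

# (blue edge, red edge) in microns, approximate half-power points (assumed values)
FILTER_EDGES = {
    "F150W": (1.33, 1.67),
    "F200W": (1.75, 2.23),
    "F277W": (2.42, 3.13),
}

def break_location(z):
    """Describe which filter (or inter-filter gap) contains the redshifted break."""
    lam = LYA_REST_UM * (1.0 + z)
    names = sorted(FILTER_EDGES, key=lambda n: FILTER_EDGES[n][0])
    for i, name in enumerate(names):
        blue, red = FILTER_EDGES[name]
        if blue <= lam <= red:
            return f"z={z}: break at {lam:.2f} um, inside {name}"
        if i + 1 < len(names) and red < lam < FILTER_EDGES[names[i + 1]][0]:
            return f"z={z}: break at {lam:.2f} um, between {name} and {names[i + 1]}"
    return f"z={z}: break at {lam:.2f} um, outside the listed filters"

for z in (12.0, 13.2, 17.7):
    print(break_location(z))
```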
Our sample in this redshift range includes the other two high-redshift spectroscopically-confirmed galaxies from Robertson et al. (2023) and Curtis-Lake et al. (2023), JADES-GS-z12-0 (z spec = 12.63 +0.24 −0.08) and JADES-GS-z13-0 (z spec = 13.20 +0.04 −0.07). We estimate EAZY photometric redshifts for these targets of z a = 12.46 for JADES-GS-z12-0, and z a = 13.41 for JADES-GS-z13-0. While both photometric redshifts are quite uncertain due to the width of the bands used to probe the Lyman-α break, the range of uncertainties based on the EAZY σ 68 redshifts are consistent with the spectroscopic redshifts.
Because of the importance of these galaxies for understanding galaxy formation in the very early universe, we will discuss the candidate galaxies in this redshift range individually, in order of decreasing photometric redshift. In our descriptions, we include brief discussions of two of the spectroscopically confirmed galaxies from Robertson et al. (2023) and Curtis-Lake et al. (2023), JADES-GS-z12-0 and JADES-GS-z13-0, but we refer the reader to these papers for more detailed discussions of these sources.
JADES-GN-189.15981+62.28898 (z a = 18.79). This F200W dropout, the highest-redshift candidate in our sample, is clearly detected in multiple LW filters. There is no detection in the F200W filter, and we calculate a dropout color assuming a 2σ upper limit on the F200W flux of m F 200W − m F 277W > 1.29. While this source lies in the relatively shallower GOODS-N NW portion of the survey, the large ∆χ 2 provides strong evidence for this source being at high redshift.
JADES-GS-53.12692-27.79102 (z a = 15.77). This is one of the more intriguing objects in our sample, as it is a relatively bright (m F 277W,Kron = 29.37) F150W dropout detected at greater than 16σ in all of the detection bands. While there may be F115W flux observed in the thumbnail, it is only at SNR = 1.76. As a result, caution should be exercised in adopting the derived redshift for this source, since this object's fluxes are also consistent with it being at z = 5.
JADES-GS-53.0541-27.70399 (z a = 15.67). This F150W dropout has quite large photometric redshift uncertainties, but the σ 68 range is still consistent with it being at z > 12. The source 1′′ to the east is a potential F090W dropout with z a = 8.24, but we measure P (z < 7) = 0.68 from the EAZY fit, so it does not appear in our sample.
JADES-GS-53.19592-27.7555 (z a = 15.32). This slightly extended F150W dropout has SNR > 5 in three filters: F277W, F356W, and F444W. It is also over 3′′ away from any bright sources. Because of the non-detection in F150W, we estimate that m F 150W − m F 200W > 0.74 given a 2σ upper limit on the observed F150W flux.
JADES-GS-53.07557-27.87268 (z a = 15.31). This is one of the faintest z > 12 sources (m F 277W,Kron = 30.04), although it is observed at SNR > 5 in three filters: F277W, F356W, and F444W. In the thumbnail we show how this candidate is surrounded by other, brighter sources. The sources to the northwest and southeast are both at z a ∼ 1.0, while the source with multiple components to the northeast is an F435W dropout at z a = 3.74.
JADES-GS-53.17847-27.75591 (z a = 15.13). This very compact F150W dropout is quite faint (m F 277W,Kron = 29.79), and is relatively isolated, with the nearest bright galaxy being almost 2′′ to the west. The lack of significant detections in the bands to the blue of the proposed Lyman-α break (m F 150W − m F 200W > 1.14) provides strong evidence of this source's photometric redshift.
JADES-GN-189.32608+62.15725 (z a = 14.77). This is an F150W dropout 0.5′′ northeast of an F850LP dropout galaxy at z a = 5.2, which is a slightly higher redshift than the potential secondary minimum in the P (z) surface for this source. We measure m F 150W − m F 200W = 2.46, and find that this source is still at z > 12 within the σ 68 range on the photometric redshift.
JADES-GS-53.02212-27.85724 (z a = 14.59). This slightly diffuse F150W dropout has SNR > 5 in all of the bands where it is detected, and we measure m F 150W − m F 200W = 2.91. It is near the western edge of the JADES medium mosaic, and is 2.5′′ southeast of the star GOODS J033205.16-275124.2.
JADES-GS-53.10763-27.86014 (z a = 14.44). This is a faint, diffuse F150W dropout that is within 1.5′′ of a larger galaxy at z a = 0.9. While there is evidence of F115W flux in the thumbnail, it is only at SNR = 1.64.
JADES-GS-53.07427-27.88592 (z a = 14.36). This F150W dropout is undetected (SNR < 0.8) in each of the bands shortward of the potential Lyman-α break, and we measure m F 150W − m F 200W = 4.05. It is 1′′ away from an F435W dropout at z a = 4.31, and could be associated with that source, as the secondary P (z) peak indicates.
JADES-GN-189.16733+62.31026 (z a = 14.33). This F150W dropout is very bright in F277W (m F 277W,Kron = 27.28), pushing it above the distribution at these redshifts as seen in Figure 4. It is within 0.5′′ of an F435W dropout at z a = 4.34. As a result, the potential Lyman-α break for this object could be a Balmer break if these two sources are associated at similar redshifts.
JADES-GS-53.11127-27.8978 (z a = 14.22). This source is an F150W dropout with solid SNR > 5 detections in the LW JADES filters. The SW fluxes for this object may be impacted by detector artifacts, which are seen to the northwest and southeast of the source.
JADES-GS-53.06475-27.89024 (z a = 14.0). This F150W dropout (m F 150W − m F 200W > 2.27) is detected in the F277W filter at 19.9σ, and is found in the exceptionally deep GOODS-S JADES 1210 Parallel. In this region, there are no medium-band observations from either JEMS or FRESCO for this source, and we do not see any detection in any of the WFC3 or ACS bands. This source is a quite promising high-redshift candidate, with ∆χ 2 ∼ 65.
JADES-GS-53.14673-27.77901 (z a = 13.68). This F150W dropout is quite well detected in multiple bands, including F182M, but it has a fairly broad P (z) surface, although the σ 68 values are consistent with z > 12 solutions. At z phot ∼ 13−14, the fits are less well constrained due to the widths of the F150W and F200W photometric bands and the gap between them.
JADES-GS-53.14988-27.7765 (z a = 13.41). This source, also known as JADES-GS-z13-0, was spectroscopically confirmed to be at z spec = 13.20 in Curtis-Lake et al. (2023), and the NIRCam photometry and morphology for the source were discussed in Robertson et al. (2023). The source is fairly bright (m F 200W,Kron = 28.81) with a strong observed Lyman-α break, and the σ 68 range on the photometric redshift (z σ68 = 12.92 − 14.06) is in agreement with the observed spectroscopic redshift.
JADES-GN-189.27873+62.2112 (z a = 13.12). This source has F182M and F210M fluxes boosted by flux from a diffraction spike. This source has an F150W detection at 2.4σ, potentially demonstrating that it is at a slightly lower redshift, as indicated by the large P (z) distribution.
JADES-GN-189.11004+62.23638 (z a = 13.12). The F182M and F210M fluxes for this F150W dropout (m F 150W − m F 200W = 2.69) are also boosted by a diffraction spike from a nearby star. There appears to be F115W flux at 2.14σ, but this is shifted 0.3′′ to the northeast of the primary source seen in F200W and F277W.
JADES-GS-53.16635-27.82156 (z a = 12.46). This galaxy, also known as JADES-GS-z12-0, was originally spectroscopically confirmed to lie at z spec = 12.63 in Curtis-Lake et al. (2023), and the NIRCam photometry and morphology for the source were discussed in Robertson et al. (2023). Recent, deeper NIRSpec observations for this source showed C III]λλ1907,1909 line emission, indicating a spectroscopic redshift of z spec = 12.479 (D'Eugenio et al. 2023). This source is a bright (m F 277W,Kron = 28.64) F150W dropout (m F 150W − m F 200W = 2.01) with a 26σ detection in F277W, and is observed at the 4 − 6σ level in the relatively shallow F182M and F210M filters.
JADES-GS-53.02868-27.89301 (z a = 12.39). This source is an F150W dropout with strong detections (SNR > 5) in each filter to the red of the potential Lyman-α break.
JADES-GS-53.10469-27.86187 (z a = 12.27). This source is an F150W dropout with strong detections in F200W and F277W. The fluxes at F115W and F150W are observed at 1.55σ and 1.39σ significance, respectively.
JADES-GN-189.27641+62.20724 (z a = 12.19). This source is well detected in F200W and F277W (SNR > 10 in both filters), but is quite faint at longer wavelengths. There is a source 1.5′′ to the southeast of the target at z spec = 2.44 (Reddy et al. 2006), so we caution that the observed Lyman-α break for the high-redshift candidate may be a Balmer break at 1.5 µm.
JADES-GS-53.14283-27.80804 (z a = 12.06). This object is an F150W dropout that is primarily seen in F200W (SNR = 5.89) and F277W (SNR = 6.72). The fit very strongly favors the high-redshift solution, and it does not seem to be associated with the nearby galaxy to the east, an F814W dropout at z a = 5.98 with [OIII]λ5007 potentially boosting the F335M flux.
JADES-GS-53.18936-27.76741 (z a = 12.05). This source is a very faint, slightly diffuse F150W dropout. While the ∆χ 2 still favors the z > 8 fit, the lower-redshift solution would help explain the boosted F210M flux as potentially arising from an [OIII] emission line, although the F210M flux is only significant at 2.6σ.
We caution that photometric redshifts at z > 12 are quite uncertain, and that our sources are observed down to very faint magnitudes, and thus require deep spectroscopic follow-up to confirm. In many cases we also raise the possibility that the source is potentially associated with a nearby galaxy at lower redshift. For the GOODS-S sources, continued observations extending both the size of the JADES Medium region and the depth of JADES Deep, planned for Cycle 2 as part of JADES, will help to provide evidence as to whether these sources are truly at high redshift or not.
z > 8 Candidates with ∆χ 2 < 4
In the previous sections we explored those objects for which the EAZY fit strongly favors a high-redshift solution. The fits to these sources at their proposed photometric redshifts indicate strong Lyman-α breaks and more robust upper limits on the photometric fluxes blueward of the break. For cases where the observed HST/ACS or short-wavelength JWST/NIRCam fluxes have higher uncertainties (for fainter objects or those objects in shallower parts of the GOODS-S or GOODS-N footprint), fits at z < 7 are less strongly disfavored, leading to values of ∆χ 2 < 4.
We selected candidate z > 8 galaxies in our sample that satisfy our criteria outlined in Section 3.2, but where ∆χ 2 < 4. While the bulk of the output EAZY P (z) indicates that the galaxy is at high redshift (P (z > 7) > 0.7), the minimum χ 2 for the z < 7 solution is more similar to the overall minimum χ 2 at z > 8. In this section we explore these targets, as they represent a non-negligible number of candidates.
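A minimal sketch of this classification logic is given below, assuming the P(z) and χ2(z) surfaces from the template fit are available on a common redshift grid; the function name and the omission of the Section 3.2 SNR cuts are simplifications of ours, not the actual pipeline.

```python
import numpy as np

def classify_candidate(p_z, z_grid, chi2_z, z_best, dchi2_robust=4.0):
    """
    Classify a candidate using the P(z) and chi^2(z) surfaces from a template
    fit: require a best-fit redshift above 8 and P(z > 7) > 0.7, then split on
    the chi^2 difference between the overall best fit and the z < 7 fit.
    """
    p_z = np.asarray(p_z, dtype=float)
    z_grid = np.asarray(z_grid, dtype=float)
    chi2_z = np.asarray(chi2_z, dtype=float)

    p_z = p_z / np.trapz(p_z, z_grid)                    # normalize P(z) to unity
    p_high = np.trapz(p_z[z_grid > 7.0], z_grid[z_grid > 7.0])

    chi2_min = chi2_z.min()                              # overall best fit (at z_best)
    chi2_low = chi2_z[z_grid < 7.0].min()                # best fit constrained to z < 7
    dchi2 = chi2_low - chi2_min

    if z_best <= 8.0 or p_high <= 0.7:
        return "rejected"
    return "robust" if dchi2 >= dchi2_robust else "tentative (dchi2 < 4)"
```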
Following the initial selection of these objects, they were visually inspected following the same routine as for the ∆χ 2 > 4 objects, where an object was removed from the sample if the majority of reviewers flagged it for rejection. Our final sample consists of 163 candidates in GOODS-S and 64 candidates in GOODS-N, for a total of 227 objects. These objects are also plotted with lighter symbols in Figure 3. While these sources span the full redshift range of the ∆χ 2 > 4 sample, the median F277W Kron magnitude is 29.47 for the GOODS-S objects and 29.11 for the GOODS-N objects, fainter than the median F277W magnitudes of the ∆χ 2 > 4 sources (29.25 for GOODS-S and 28.62 for GOODS-N). This is expected, as ∆χ 2 is strongly dependent on the observed flux uncertainties for each source.
In Figure 9 we highlight some targets from both GOODS-S and GOODS-N with ∆χ 2 < 4, demonstrating the variety of targets in this subcategory. The median F090W flux (measured in a 0.2′′ diameter aperture) for the full sample of GOODS-S and GOODS-N z > 8 candidates is -0.02 nJy (the distribution is consistent with 0 nJy), while the median F090W flux for the combined sample of ∆χ 2 < 4 targets is 0.39 nJy. Targets like JADES-GS-53.04744-27.87208 (z a = 12.33) and JADES-GN-189.33478+62.1919 (z a = 12.38) have faint F115W and F150W flux measurements (with high uncertainties) consistent with dusty z ∼ 4 solutions with strong emission lines, similar to CEERS-93316 observed in Arrabal Haro et al. (2023a). Many of these objects are also limited by the lack of deep HST/ACS data; JADES-GS-53.04744-27.87208 only has coverage in F435W and F606W due to its position in the southwest of the JADES GOODS-S footprint.
We include these sources and their fluxes to aid in the selection of high-redshift galaxies in future deep JWST surveys with different filter selections and observational depths. While a larger number of these objects may be lower-redshift interlopers masquerading as z > 8 galaxies, this sample may serve as a pool of additional sources to be placed on multi-object slit masks in follow-up spectroscopic campaigns to confirm source redshifts. In addition, these objects are helpful for calibrating template sets, as they have colors that can be fit with models at both low and high redshift.
z > 8 Candidates Proximate to Brighter Sources
In addition to exploring the sources with ∆χ 2 < 4, we also looked at objects (at all ∆χ 2 values) that were near bright sources, being within 0.3′′ of, or within the bounding box of, a source with ten times greater brightness. There is an increased probability that proximate sources are at similar redshifts, and so we can compare the χ 2 distribution of the brighter galaxy to that of the fainter high-redshift candidate. At faint magnitudes, Balmer breaks and strong line emission can lead to sources being mistaken for higher-redshift objects, which can be seen by looking at the χ 2 minima for these sources. In addition, being so close to a bright source can potentially introduce flux into the circular aperture photometry and change the observed colors of the candidate galaxy and the shape of the SED. We went through the same visual classification procedure for these sources as we did for the full sample, and ended up with 41 candidates (30 with ∆χ 2 > 4) in GOODS-S and 17 candidates (14 with ∆χ 2 > 4) in GOODS-N. These sources have a median F277W Kron magnitude of m AB = 28.61 for those in GOODS-S and m AB = 27.50 for those in GOODS-N, and range in redshift between z = 8.0 − 16.7.
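A schematic version of this proximity screen is sketched below; the function name, the plain small-angle separation, and the bounding-box format are our own simplifications rather than the catalog-level implementation used for the paper.

```python
import numpy as np

def flag_proximate(ra, dec, flux, bbox, radius_arcsec=0.3, flux_ratio=10.0):
    """
    Flag sources lying within `radius_arcsec` of, or inside the bounding box of,
    a neighbor at least `flux_ratio` times brighter. ra/dec are in degrees,
    flux in any consistent unit, and bbox is (ra_min, ra_max, dec_min, dec_max)
    per source.
    """
    ra, dec, flux = map(np.asarray, (ra, dec, flux))
    flags = np.zeros(len(ra), dtype=bool)
    for i in range(len(ra)):
        # small-angle separation in arcsec, with a cos(dec) correction on RA
        dra = (ra - ra[i]) * np.cos(np.radians(dec[i])) * 3600.0
        ddec = (dec - dec[i]) * 3600.0
        sep = np.hypot(dra, ddec)
        bright = flux >= flux_ratio * flux[i]
        in_box = np.array([(b[0] <= ra[i] <= b[1]) and (b[2] <= dec[i] <= b[3])
                           for b in bbox])
        near = (sep < radius_arcsec) | in_box
        near[i] = False                       # do not compare a source to itself
        flags[i] = np.any(near & bright)
    return flags
```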
We want to specifically highlight some of the higher-redshift candidates from this subsample. Notably, there are three galaxies at z a > 12: JADES-GS-53.08016-27.87131 (z a = 16.74), JADES-GS-53.09671-27.86848 (z a = 12.03), and JADES-GN-189.23121+62.1538 (z a = 12.16). JADES-GS-53.08016-27.87131 has ∆χ 2 < 4, although this source is the most interesting due to its photometric redshift. This source has a very faint F200W detection (SNR > 4 in a 0.2′′ diameter aperture), but is relatively bright at longer wavelengths (with an F277W AB magnitude of 29.20). This source lies 3′′ − 4′′ north of a pair of interacting galaxies at z spec = 1.1 (Bonzini et al. 2012; Momcheva et al. 2016), and flux from the outskirts of these galaxies may be contributing to the aperture photometry for this object, leading to an artificially red UV slope. We caution that this source may be a stellar cluster associated with this pair or with the dusty galaxies that are to the north of its position.
Stellar Contamination
One primary source of contamination for high-redshift galaxy samples is low-mass Milky Way stars and brown dwarfs, which at low temperatures can have near-IR colors similar to high-redshift galaxies, and many studies have explored the selection of these sources from within extragalactic surveys (Ryan et al. 2005; Caballero et al. 2008; Wilkins et al. 2014; Finkelstein et al. 2015; Ryan & Reid 2016; Hainline et al. 2020). Candidate brown dwarfs have been observed in extragalactic surveys, such as GLASS (Nonino et al. 2023). To explore whether our sample contains objects with a high probability of being brown dwarfs, we looked at the sizes of the targets in our sample and their fits to stellar models and observed brown dwarf SEDs.
We fit the targets in our sample using the jades-pipeline profile fitting software, which utilizes the python lenstronomy package (Birrer & Amara 2018; Birrer et al. 2021). We fit each source, as well as the other nearby sources within 2′′ and up to two magnitudes fainter than the primary galaxy, with Sersic profiles. Objects that are fainter or farther from the source are masked instead of fit. We use the final residuals to determine the goodness of fit for each source. We measured the sizes using the NIRCam F444W mosaic, as brown dwarfs are bright and unresolved at 4 µm (Meisner et al. 2020). To determine whether an object was unresolved, we looked at those sources whose observed half-light radius was smaller than the NIRCam long-wavelength channel pixel size (0.063′′). We note that the maximum half-light radius measured using the same procedure on a sample of stars and brown dwarfs in GOODS-S and GOODS-N was 0.02′′, but we adopted a larger limit to broaden our search. These stars and brown dwarfs were identified using both photometric fits to theoretical brown-dwarf models and identification of sources with proper motions compared to HST observations, and will be described further in Hainline et al. (in prep).
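The size criterion itself reduces to a simple threshold, sketched below with the pixel-scale limit quoted above; the helper name is ours, and the measured half-light radius is assumed to come from the profile fit described in the text.

```python
def unresolved_flag(r_half_arcsec, lw_pixel_arcsec=0.063):
    """
    Flag a source as unresolved if its F444W half-light radius is smaller than
    the NIRCam long-wavelength pixel scale. This is deliberately looser than
    the ~0.02" maximum measured for known stars and brown dwarfs, so that the
    search is not overly restrictive.
    """
    return r_half_arcsec < lw_pixel_arcsec
```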
We fit the NIRCam photometry of our z > 8 candidates using both the SONORA cloud-free brown dwarf models from Marley et al. (2018) as well as a sample of observed brown-dwarf spectra from the SpeX Prism Spectral Library. As the SpeX spectra, in general, are only observed to 2.5 µm, we took a group of objects across the temperature range that were detected in the Wide Field Infrared Survey Explorer (WISE) AllWISE catalog (Cutri et al. 2021) and used their photometry at 3.4 and 4.6 µm to create extrapolated spectra out to 5 µm, which we used to estimate NIRCam photometry, following Finkelstein et al. (2023). We supplemented these with empirical NIRCam SEDs of M dwarfs, obtained from a selection of extremely compact objects in F115W-F200W color-magnitude space, consistent with stellar evolutionary models and JWST observations of globular clusters (Weisz et al. 2023, B. Johnson priv. comm.). The full set of model photometry was then fit to the observed NIRCam 0.2′′ diameter aperture photometry for the z > 8 candidate galaxies using a χ 2 minimization approach. We compared the resulting χ 2 minima for the stellar fits to those from the EAZY galaxy templates, and if an object had ∆χ 2 < 4 between the galaxy model fit and the stellar fit, it was flagged as a brown dwarf candidate.
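A minimal sketch of this comparison is shown below, assuming each stellar model is an array of band fluxes on the same filter set as the observations; the analytic normalization and the function names are our own simplifications of the χ2-minimization described in the text.

```python
import numpy as np

def best_scaled_chi2(model_flux, obs_flux, obs_err):
    """
    chi^2 of a model SED against observed aperture photometry, after solving
    analytically for the single multiplicative normalization that minimizes chi^2.
    """
    w = 1.0 / np.asarray(obs_err, dtype=float) ** 2
    model_flux = np.asarray(model_flux, dtype=float)
    obs_flux = np.asarray(obs_flux, dtype=float)
    scale = np.sum(w * obs_flux * model_flux) / np.sum(w * model_flux ** 2)
    return np.sum(w * (obs_flux - scale * model_flux) ** 2)

def brown_dwarf_flag(obs_flux, obs_err, stellar_models, chi2_galaxy, dchi2_limit=4.0):
    """Flag a candidate if any stellar model fits within dchi2_limit of the galaxy fit."""
    chi2_star = min(best_scaled_chi2(m, obs_flux, obs_err) for m in stellar_models)
    return (chi2_star - chi2_galaxy) < dchi2_limit
```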
We find 303 objects in our z > 8 sample with ∆χ 2 > 4 that are unresolved, with a half-light radius less than 0.063′′ (42% of the full z > 8, ∆χ 2 > 4 sample), while only six objects across both fields have fits to stellar models within ∆χ 2 < 4 of the minimum EAZY χ 2 (two of these sources have lower χ 2 values with the brown dwarf fits). We flag the sources in the online table if they satisfy either of these requirements. Of these objects, only two sources are both unresolved and have stellar fits within ∆χ 2 < 4: JADES-GS-53.0353-27.87776 (z a = 10.82) and JADES-GN-189.19772+62.25697 (z a = 8.61). The latter source, which is detected with HST WFC3/IR, was identified as the Y-dropout candidate GNDY-6474515254 in Bouwens et al. (2015). While this object has evidence for being a brown dwarf, FRESCO identified both [OIII]λλ5007,4959 emission lines (z spec = 8.28), ruling out the brown dwarf hypothesis.
There are additional brown dwarf candidates among the z > 8 candidate galaxies with ∆χ 2 < 4 identified in §4.4. We find 83 objects with an F444W half-light radius less than 0.063′′, and 17 objects with stellar fits within ∆χ 2 < 4 (6 of these sources have lower χ 2 values with the brown dwarf fits). We caution that, because of the larger flux uncertainties for these objects, it is more likely that models would fit these data with comparable χ 2 values, but we include flags in the online table in these cases. Only five of the sources in this subsample are both unresolved and have brown dwarf fits comparable to the EAZY fits: JADES-GS-53.02588-27.87203 (z a = 8.84), JADES-GS-53.12444-27.81363 (z a = 8.33), JADES-GS-53.07645-27.84677 (z a = 8.64), JADES-GN-189.16606+62.31433 (z a = 8.6), and JADES-GN-189.07787+62.23302 (z a = 8.1). These sources, on visual inspection, do not appear to be strong brown dwarf candidates because they are quite faint, which would indicate potentially unphysical distances compared to models of the halo brown dwarf population (Ryan et al. 2005). While there are 19 unresolved sources in the sample that are proximate to brighter objects (as in Section 4.5), none of these have stellar fits within ∆χ 2 < 4 of the EAZY fits.
z > 8 Candidates in the Literature
As the GOODS-S and GOODS-N fields have been observed across a wide wavelength range and to deep observational flux limits, a number of the sources in our sample have been previously presented in the literature. As described in Robertson et al. (2023), both JADES-GS-z10-0 (JADES-GS-53.15883-27.7735) and JADES-GS-z11-0 (JADES-GS-53.16476-27.77463) were previously identified in Bouwens et al. (2011b), as UDFj-38116243 and UDFj-39546284, respectively; both of these galaxies are in our z > 8 sample. Similarly, we also previously discussed GN-z11, first identified in Bouwens et al. (2010) and later further explored in Oesch et al. (2014, 2016), which is present in our sample as JADES-GN-189.10605+62.24205.
In Bouwens et al. (2023), the authors use the publicly available JEMS data to search for z > 8 candidates in GOODS-S and construct a sample of ten sources. Nine of the ten sources appear in our sample (their source XDFY-2376346017, which they measure at z EAZY = 8.3 +0.2 −0.2, is at z a = 7.89 in our fits, and we additionally measure a FRESCO z spec = 7.975 for this source), and of those, eight sources were previously known. The remaining two sources were not previously known, and also appear in our sample: JADES-GS-53.13918-27.78273 (z a = 10.49) and JADES-GS-53.16863-27.79276 (z a = 11.71). Donnan et al. (2023) perform a similar search, and find two additional candidates that fall into our sample: JADES-GS-53.17551-27.78064 (z a = 9.66) and JADES-GS-53.12166-27.83012 (z a = 9.42) (they also independently recover JADES-GS-53.16863-27.79276). The photometric redshifts presented in Bouwens et al. (2023) for JADES-GS-53.13918-27.78273 (XDFH-2334046578 in their sample, z EAZY = 11.8 +0.4 −0.5) and JADES-GS-53.16863-27.79276 (XDFJ-2404647339 in their sample, z EAZY = 11.4 +0.4 −0.5) are broadly similar to our values, but we measure a much lower redshift for the former due to the availability of the F150W flux from JADES. Similarly, Donnan et al. (2023) estimate similar photometric redshifts to what we find for JADES-GS-53.17551-27.78064 (UDF-21003 in their sample, z phot = 9.79 +0.15 −0.13) and JADES-GS-53.16863-27.79276 (UDF-16748 in their sample, z phot = 11.77 +0.29 −0.44), but they claim a much higher redshift for JADES-GS-53.12166-27.83012 (UDF-3216 in their sample, z phot = 12.56 +0.64 −0.66), which is inconsistent with the measured F150W flux. We note that this latter candidate appears in our catalog of sources proximate to brighter objects, although with ∆χ 2 = 4.32.
For the full sample of ∆χ 2 > 4 candidates at z > 8, we additionally cross-matched their sky positions against GOODS-S and GOODS-N high-redshift catalogs in the literature, including Bunker & Wilkins (2009). Because our JADES mosaics were aligned using the GAIA reference frame, we had to carefully visually match against each sample, as they use different reference frames. In Table 4, we list the targets that were matched to sources previously discussed in the literature and the photometric redshifts for these sources, and we include the references for each object.
We find that 47 objects across the full ∆χ 2 > 4 catalog have been discussed previously in the literature, 42 in GOODS-S and 5 in GOODS-N. As previously mentioned, seven are at z a > 10, three are at z a = 9 − 10, and the remaining 37 are at z a = 8 − 9.
In Larson et al. (2023), the authors present a series of theoretical galaxy templates (https://ceers.github.io/LarsonSEDTemplates) designed to be used with EAZY to better model the bluer UV slopes expected for very high-redshift galaxies. To create these templates, the authors first used EAZY to calculate photometric redshifts for mock galaxies in the CEERS Simulated Data Product V3 catalog using the EAZY "tweak fsps QSF v12 v3" templates, which were derived from fsps. At this point, the authors created an additional set of templates that better matched the simulated m F 200W − m F 277W colors for the z > 8 galaxies in their sample, using the binary stellar evolution models BPASS (Eldridge et al. 2017) with nebular emission derived from the spectral synthesis code CLOUDY (Ferland et al. 2017). These templates resulted in significantly better photometric redshift estimates for the mock galaxies with the CEERS filter set.
To explore how our choice of EAZY templates affects our final z > 8 sample, we fit the photometry for all of the objects recovered across GOODS-N and GOODS-S with EAZY using the recommended template set from Larson et al. (2023) for fitting z > 8 galaxies: tweak fsps QSF v12 v3 along with the BPASS-only "Set 1" and the "BPASS + CLOUDY - NO LyA" "Set 4" templates. We ran EAZY in an otherwise identical manner, including the template error function used, and we utilized the same photometric offsets as provided in Table 2.
The resulting photometric redshifts for our primary sample of z > 8 sources show no significant differences or noticeably improved photometric redshift fits: only 4% have |z Larson − z a |/(1 + z a ) > 0.15 (23 sources in GOODS-S and 5 sources in GOODS-N). More importantly, if we look at those sources where z Larson < 8, only 2.5% (14 sources in GOODS-S and 4 sources in GOODS-N) have significantly different photometric redshifts. In the majority of these cases, the lower-redshift solution offered by the Larson et al. (2023) templates is at the same secondary χ 2 minimum seen for our own template fits, and the validity of the fit is strongly dependent on the observed F090W or F115W fluxes.
The sources in our sample with ∆χ 2 < 4 fits are less robust, as has previously been discussed, and show more discrepancy between the fits with our EAZY templates and those from Larson et al. (2023). Here, 37% have |z Larson − z a |/(1 + z a ) > 0.15 (67 sources in GOODS-S and 17 sources in GOODS-N). 64 of these GOODS-S sources and 15 of the GOODS-N sources (for a total fraction of 35% of the ∆χ 2 < 4 objects) have z Larson < 8.
In addition, we derived a sample of z a > 8 sources from fits with the Larson et al. (2023) templates after applying the same SNR, P (z > 7), and ∆χ 2 cuts as described in §3.2. We compared the resulting candidates with those from the original template set, and after visual inspection found a total of 10 additional z > 8 candidates (7 in GOODS-S and 3 in GOODS-N), which we list in Table 6. Of those sources, 5 are at z a = 6 − 8 and 2 have P (z < 7) < 0.7 in our own EAZY fits. The remaining three objects are quite faint, but should be considered alongside the main sample. We conclude that our results would not be significantly improved by using the Larson et al. (2023) templates.
Spectroscopic Redshifts
In total, we have spectroscopic redshifts for 42 objects in our sample. As discussed previously, five of the high-redshift galaxies have been spectroscopically confirmed to lie at z > 10: JADES-GS-z10-0, JADES-GS-z11-0, JADES-GS-z12-0, JADES-GS-z13-0 (Curtis-Lake et al. 2023; D'Eugenio et al. 2023), and GN-z11 (Bunker et al. 2023b). In this section, we discuss the other objects in our sample with spectroscopic confirmation from both JWST NIRSpec and JWST NIRCam grism spectroscopy from FRESCO. We also compare the photometric redshifts to the spectroscopic redshifts and discuss the observed offset between the two values. Four additional GOODS-S sources have NIRSpec spectroscopic redshifts at z > 8. Besides GN-z11, there are no GOODS-N NIRSpec spectroscopic redshifts for our sample. An additional 28 sources in our sample have FRESCO spectroscopic redshifts, with 19 objects in GOODS-S and 8 in GOODS-N. As described in Section 4.5, there are 4 additional GOODS-S sources and 1 GOODS-N object with FRESCO z spec that are proximate to other bright sources. These FRESCO redshifts were derived from either single [OIII]λ5007 line detections or, in some brighter cases, multiple line detections. Fifteen of the sources in our sample of z > 8 candidates have FRESCO spectroscopic redshifts at z < 8 (13 in GOODS-S and 1 in GOODS-N), in all cases at z spec > 7.6. We chose to include these objects as they satisfy our EAZY selection criteria.
In Figure 10 we show the spectroscopic redshifts of the objects in our sample against their photometric redshifts. There are no catastrophic outliers, defined here as those objects where |z spec − z a |/(1 + z spec ) > 0.15. As discussed previously for individual objects, the photometric redshifts have a systematic offset such that EAZY is slightly overpredicting the distances to these galaxies (⟨∆z = z a − z spec ⟩ = 0.26). To estimate the scatter on the relationship, we also calculated the normalized median absolute deviation, defined as σ NMAD = 1.48 × median(|δz − median(δz)|/(1 + z spec )), where δz = z spec − z phot . For all of our sources with spectroscopic redshifts, σ NMAD = 0.05. Understanding the source of this offset is quite important given the usage of photometric redshifts in deriving statistical parameters like the UV luminosity function. By constraining the EAZY fit for each of these sources to be at the spectroscopic redshift, we find that the primary reason for these higher-redshift fits is the flux of the filter that spans the Lyman-α break. In the fits where the redshift is constrained to be at z spec , the observed fluxes in the band at the Lyman-α break are overestimated by the template fits. While this effect may be due to photometric scatter upwards in those bands, it is more likely due to the templates themselves. In Arrabal Haro et al. (2023b), the authors present z spec = 8 − 10 objects with NIRSpec spectroscopy which show a larger offset (⟨∆z = z a − z spec ⟩ = 0.46 ± 0.11) to higher photometric redshifts, and these authors also hypothesize that this might be a result of potential differences between the observed high-redshift galaxy SEDs and the templates used to model high-redshift galaxies.
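The offset and scatter statistics quoted here can be computed as in the sketch below, which assumes the standard 1.48 × median normalization for σ NMAD and uses the outlier definition given above; the function name is illustrative.

```python
import numpy as np

def photoz_statistics(z_spec, z_phot):
    """
    Mean offset and normalized median absolute deviation (sigma_NMAD) between
    photometric and spectroscopic redshifts, with dz = z_spec - z_phot as in
    the text, plus a count of catastrophic outliers.
    """
    z_spec, z_phot = np.asarray(z_spec, float), np.asarray(z_phot, float)
    dz = z_spec - z_phot
    mean_offset = np.mean(z_phot - z_spec)                        # <z_a - z_spec>
    nmad = 1.48 * np.median(np.abs(dz - np.median(dz)) / (1.0 + z_spec))
    n_outliers = np.sum(np.abs(dz) / (1.0 + z_spec) > 0.15)       # catastrophic outliers
    return mean_offset, nmad, n_outliers
```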
One potential source of excess flux in the UV is the strength of the Lyman-α emission line in our templates. To explore the effect of this line, we first took our EAZY templates and removed the Lyman-α contribution by cutting out the flux between 1170 and 1290 Å and replacing that portion of each template with a linear fit. Using these templates without Lyman-α, we re-fit every one of our sources with spectroscopic redshifts, and calculated a new σ NMAD = 0.02, as well as a new average offset of ⟨∆z = z a − z spec ⟩ = 0.19. While this is smaller, the offset is still present, indicating that Lyman-α flux is not the dominant factor. One alternate possibility is that the strength of the optical emission lines at long wavelengths may not be fully reflected in our limited template set, and for those z spec < 9 sources (where the optical emission is not redshifted out of the NIRCam filters), this may have the effect of pushing fits to higher redshifts. At the redshift range of our sample, the FRESCO redshifts are calculated preferentially for those objects with strong line emission, which may not be probed by our template set. Understanding this offset may prove important for future fits to high-redshift galaxies, and it will necessitate the creation of templates derived from high-resolution NIRSpec spectra of these sources once larger samples are observed.
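A sketch of this template modification is given below, assuming each template is sampled on a rest-frame wavelength grid in Angstroms; the simple linear interpolation across the masked window stands in for the "linear fit" described in the text.

```python
import numpy as np

def remove_lya(wave_aa, flux, blue=1170.0, red=1290.0):
    """
    Replace the 1170-1290 Angstrom region of a rest-frame template with a
    straight line joining the continuum on either side, suppressing the
    Lyman-alpha emission line.
    """
    wave_aa, flux = np.asarray(wave_aa, float), np.array(flux, float)
    inside = (wave_aa > blue) & (wave_aa < red)
    outside = ~inside
    # linear interpolation across the masked window using the surrounding points
    flux[inside] = np.interp(wave_aa[inside], wave_aa[outside], flux[outside])
    return flux
```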
Rejected High-Redshift Candidates
Finding and characterizing high-redshift galaxies is a complex process, even given the IR filters on board JWST. In our visual inspection, we found a number of bright galaxies that we rejected from our z > 8 sample for multiple reasons, and in this section we provide four examples as case studies to demonstrate the sorts of galaxies with colors that can mimic those of high-redshift galaxies. This analysis follows discussions in Naidu et al. (2022) and Zavala et al. (2023), and is seen directly with CEERS-93316, a candidate galaxy at z phot = 16.4 which was shown to be at z spec = 4.912 (Arrabal Haro et al. 2023a).
JADES-GS-53.0143-27.88355 (m Kron,AB = 29.3) appears from the thumbnails and from the EAZY minimum χ 2 fit to be an F150W dropout at z a = 12.51. However, the red UV slope indicates that perhaps this object is much dustier and at low redshift (z alt = 3.41), where the Hα emission line would boost the observed F277W flux. This UV slope could also arise from the bright source to the southeast (at z spec = 0.2472, Cooper et al. 2012). In addition, there is what appears to be a flux detection in the F115W thumbnail, which helps to rule out the high-redshift solution.
JADES-GS-53.08294-27.85563 (m F 277W,Kron = 26.8) appears to be a bright F150W dropout clump immediately adjacent to another object. The SED is well fit at z a = 14.51, and the fit constrained to be at z < 7 is significantly worse (z alt = 3.56). The secondary source, which is detected in all five of the JADES HST/ACS bands (although only at 2σ significance in F435W), has an EAZY template redshift of z a = 3.4. This redshift puts the Balmer break between the NIRCam F150W and F200W filters, and we cannot rule out the possibility that this is what we are observing for JADES-GS-53.08294-27.85563. This source appeared in a sample of "HST dark" galaxies in Williams et al. (2023b), where they present a photometric redshift of z phot = 3.38, although they use a larger aperture for measuring photometry, which may introduce flux from the nearby source. JADES-GS-53.08294-27.85563 was further imaged in six NIRCam medium-band filters as part of the JWST Cycle-2 GO observations of the JADES Origins Field (JOF, PID 3215; Eisenstein et al. 2023b), and a more detailed analysis of this source will be presented in a forthcoming paper from the collaboration (Robertson, B. et al. in prep.). Both sources will be observed with the NIRSpec prism in JWST GTO program 1287 (Willott, C. in prep).
Figure 10. (Left) Spectroscopic redshifts for objects in our sample measured from both NIRSpec (square symbols) and FRESCO (circular symbols) spectroscopy, as compared to the EAZY photometric redshifts. We highlight those four sources proximate to brighter sources, as discussed in Section 4.5, with black filled circles. (Right) A zoom-in on the dashed region at z = 7 − 9 in the left panel. We find that the estimated photometric redshifts overpredict the spectroscopic redshifts (⟨∆z = za − zspec⟩ = 0.26), potentially due to the differences between our adopted templates and high-redshift galaxies. We explored removing Lyman-α emission in our template set, and this lowers the offset to ⟨∆z = za − zspec⟩ = 0.19.
Figure 11. Example SEDs for four visually rejected GOODS-S and GOODS-N galaxies. In each panel, the colors, lines, and symbols are as in Figure 2.
JADES-GS-53.20055-27.78493 (m F 277W,Kron = 28.8) appears to be an F200W dropout at z a = 15.89, southeast of another, brighter source. While positive flux is observed at the 1-2 nJy level in F115W and F150W, this is at SNR < 0.8 in both cases. This source was ruled out as an F200W dropout because of the detection at 4σ of F090W flux, which can be seen in the thumbnail.
JADES-GN-189.30986+62.20844 (m F 277W,Kron = 27.4) is best fit at z a = 11.36, placing the observed Lyman-α break at 1.5 µm. This object is proximate to another, brighter galaxy with an EAZY fit at z a = 1.87 and a complex morphology, first observed as part of the GOODS survey in Giavalisco et al. (2004). This is at a lower redshift than the alternate EAZY result for JADES-GN-189.30986+62.20844, z a = 2.58, but the faint F115W detection (SNR = 3.64) demonstrates that the minimum χ 2 redshift solution for this object is erroneous.
DISCUSSION
In this section we explore the selection and derived properties of this large sample of candidate high-redshift galaxies in more detail. A full description of the theoretical implications of these sources is outside the scope of this paper. The stellar mass and star formation histories for these sources will be the focus of a study by Tacchella et al. (in prep), while the full estimation of the evolution of the UV luminosity function at z > 8 from the JADES sources will be presented in Whitler et al. (in prep).
UV Magnitudes
We calculated the UV magnitudes from the EAZY fits to explore the range of intrinsic UV brightnesses for the sample. To calculate M UV we started by fitting the Kron magnitude catalog fluxes, with the redshift fixed to the value derived from the smaller circular apertures or, if available, the spectroscopic redshift for each source. This was done so as to not bias the resulting UV magnitudes against more extended objects, by encompassing more of the total flux. From here, we took the best-fitting rest-frame EAZY template for each object, passed it through a mock top-hat filter centered at 1500 Å with a width of 100 Å, and calculated the intrinsic UV magnitude based on the resulting flux.
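A hedged sketch of this measurement is given below; it assumes the best-fit template is available in the observed frame in f_ν units already normalized to the photometry, and it adopts a standard cosmology for the distance modulus, neither of which is specified at this point in the text.

```python
import numpy as np
from astropy.cosmology import Planck18 as cosmo

def m_uv_from_template(wave_obs_aa, fnu_njy, z):
    """
    Absolute UV magnitude from a best-fit template: average f_nu over a 100 A
    rest-frame top-hat centered at 1500 A, convert to an apparent AB magnitude,
    and then to an absolute magnitude including the (1+z) bandwidth term.
    """
    wave_obs_aa = np.asarray(wave_obs_aa, float)
    fnu_njy = np.asarray(fnu_njy, float)
    lo, hi = 1450.0 * (1.0 + z), 1550.0 * (1.0 + z)       # observed-frame window
    window = (wave_obs_aa >= lo) & (wave_obs_aa <= hi)
    fnu = np.mean(fnu_njy[window])                        # mean f_nu in the top-hat
    m_ab = -2.5 * np.log10(fnu * 1e-9 / 3631.0)           # apparent AB magnitude
    return m_ab - cosmo.distmod(z).value + 2.5 * np.log10(1.0 + z)
```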
In Figure 12 we show the resulting M UV values against the photometric and spectroscopic redshifts for the sample. As expected, GN-z11 is by far the brightest source in the sample at M UV = −22.03. Excitingly, we find 227 objects in our sample with M UV > −18, and 16 objects (all in GOODS-S) with M UV > −17, entirely at z a < 11.5. These UV-faint high-redshift galaxy candidates demonstrate the extraordinary depth of the JADES survey. In addition, these results stand in contrast to the decline in the number counts of HST-observed galaxies discussed in Oesch et al. (2018) and Bouwens et al. (2019), and help to confirm results from other JWST surveys (Finkelstein et al. 2023; Harikane et al. 2023; Pérez-González et al. 2023).
Dropout Colors
As discussed in the introduction, high-redshift samples are traditionally assembled by targeting Lyman-α dropout galaxies in color space. In Hainline et al. (2020), the authors used the JAGUAR mock catalog (Williams et al. 2018) to explore the NIRCam colors of simulated dropout samples, and demonstrated the trade-off between sample completeness and accuracy for high-redshift dropout galaxies. Because of the utility of dropout selection, we sought to explore how successful this technique alone would be at finding the JADES z > 8 candidate galaxies. We utilized a uniform two-color selection scheme to target F090W, F115W, and F150W dropouts within our primary z > 8 sample, where in each case the color limit for the filters that targeted the Lyman-α break was m 1 − m 2 > 1.0, while the color limit for the filters that targeted the rest-frame UV was m 2 − m 3 < 0.5.
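The two-color test reduces to the check sketched below, where m1 is the AB magnitude in the dropout filter and m2, m3 sample the continuum redward of the break; in practice a non-detection in m1 would be replaced by its 2σ limiting magnitude, as done elsewhere in the paper. The function name is illustrative.

```python
def is_dropout(m1, m2, m3, break_color=1.0, uv_color=0.5):
    """
    Uniform two-color dropout test: a red color across the Lyman-alpha break
    (m1 - m2) together with a flat rest-frame UV color (m2 - m3). For F115W
    dropouts, for example, (m1, m2, m3) = (F115W, F150W, F200W).
    """
    return (m1 - m2 > break_color) and (m2 - m3 < uv_color)
```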
In Figure 13, we show the F090W, F115W, and F150W color selections in the top row of plots, targeting the entire z > 8 sample in each panel. In the bottom panel we show a photometric redshift histogram for the sources in the sample with a thick grey line and, in the shaded regions, the F090W, F115W, and F150W dropout sample distributions. We sum these distributions and plot the result with a thick black line. The lone F090W dropout at z a > 15 is JADES-GS-53.12692-27.79102, which we discuss in Section 4.3 and plot in the upper-right panel of Figure 8. We find that 71% of the z > 8 sample would be selected as dropouts with these color criteria, while 171 GOODS-S and 37 GOODS-N objects in the sample are not selected by any scheme; these are predominantly at z ∼ 8.5 and z ∼ 11.5, as seen in the bottom panel of the figure. These candidates have colors just outside of the selected color space: at z ∼ 8.5, while m F 090W − m F 115W > 1.0, m F 115W − m F 150W = 0.5 − 1.0. A similar effect is seen for the F115W and F150W dropouts at z ∼ 11.5. This effect could be mitigated by expanding the selection criteria, but at the risk of including significantly more lower-redshift interlopers (Hainline et al. 2020).
Figure 12. M UV plotted against the best-fitting EAZY za photometric redshift or the observed spectroscopic redshift for the z > 8 galaxies and candidate galaxies in the GOODS-S (blue) and GOODS-N (red) z > 8 samples. The points and colors are the same as in Figure 4. On the right we show the UV magnitude distribution. GN-z11 stands out for its extreme M UV.
Another way of looking at color selection is by directly plotting the dropout color against the EAZY photometric redshift. At z phot = 8, the Lyman-α break is at ∼ 1.1 µm, which is on the blue edge of the NIRCam F115W band, and by z phot = 10, the Lyman-α break should sit between the F115W and F150W filters, so for objects at increasing photometric redshifts in this range, the F115W SNR will vary as the galaxy's rest-frame UV emission drops out of this band. In Figure 14, we plot the m F 115W − m F 150W color against the EAZY z a value for the GOODS-N and GOODS-S objects at z phot = 8−10. As expected, the m F 115W − m F 150W color increases in this redshift range. We find that 95% of the candidate high-redshift galaxies selected as F115W dropouts by our cuts have z a > 8.75, while 16% of the candidates at z a = 8.75 − 10.0 in our sample would still fall outside of this simple color cut.
Using ∆χ 2 to Discern Between High- and Low-Redshift Template-fitting Solutions
Fitting a galaxy's SED with templates or stellar population synthesis models enables a measurement of the probability of a galaxy being at a range of photometric redshifts. In this study, we have used the difference in χ 2 values between the best-fit model and the model constrained to be at z < 7 as our metric of accuracy. The exact ∆χ 2 value we measure for each object is dependent on the template set used, as well as the flux uncertainties and, in our case, the template error function and photometric offsets used. As a result, as is the case for any continuous figure of merit, choosing a specific cut is a trade-off between sample accuracy and completeness.
In Harikane et al. (2023), the authors show, by injecting and recovering mock galaxies in the CEERS extragalactic data, that ∆χ 2 > 4, the value we adopt in this current work (following Bowler et al. 2020; Donnan et al. 2023; Finkelstein et al. 2023; Harikane et al. 2022), is not sufficient for properly removing low-redshift interlopers. Instead, these authors recommend the stricter cut of ∆χ 2 > 9. Because we have a larger number of observed photometric filters in the JADES data, choosing a low ∆χ 2 limit may result in the inclusion of more potential interlopers, which has led us to release output catalogs that include all of the sources we visually inspected regardless of the chosen ∆χ 2 cut. If we instead look only at those objects in our sample with ∆χ 2 > 9, our primary sample is reduced to 483 candidates (358 in GOODS-S and 125 in GOODS-N), or 67% of the 717 ∆χ 2 > 4 sources (67% in GOODS-S and 69% in GOODS-N). This subsample selected with a stricter cut has a similar redshift distribution to our full sample (19 of the 33 candidates at z > 12 would still be included), but the sources have brighter F277W magnitudes, as would be expected. The median F277W magnitude for the ∆χ 2 > 4 sample is 29.11, while the median F277W magnitude for the ∆χ 2 > 9 sample is 28.96. It should be noted that every source in our sample with a spectroscopic redshift has ∆χ 2 > 13. Pushing the cut to even stricter values, we find that 45% of the original sample has ∆χ 2 > 15 and 36% of the original sample has ∆χ 2 > 20.
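The effect of stricter thresholds can be tabulated with a few lines of bookkeeping, as sketched below over arrays of ∆χ2 values and F277W magnitudes for the inspected sources; the function name and threshold list are illustrative.

```python
import numpy as np

def dchi2_cut_summary(dchi2, m_f277w, thresholds=(4, 9, 15, 20)):
    """
    Summarize how the sample size and median F277W magnitude respond to
    progressively stricter delta-chi^2 cuts, relative to the loosest threshold.
    """
    dchi2, m_f277w = np.asarray(dchi2, float), np.asarray(m_f277w, float)
    base = np.sum(dchi2 > thresholds[0])
    for t in thresholds:
        keep = dchi2 > t
        print(f"dchi2 > {t}: {keep.sum()} sources "
              f"({100.0 * keep.sum() / base:.0f}% of the dchi2 > {thresholds[0]} sample), "
              f"median m_F277W = {np.median(m_f277w[keep]):.2f}")
```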
Candidate Galaxies with Red Long-Wavelength Slopes
In our visual inspection of the galaxy candidates, we find a number of high-redshift candidates with very red long-wavelength slopes, following the discovery of similar sources at z phot = 5−9 in Furtak et al. (2023), Barro et al. (2023), Akins et al. (2023), and Labbé et al. (2023b). These objects are often very bright and unresolved in F444W, and in many cases are comparatively faint at shorter wavelengths. To systematically search for these sources in our full sample, we selected those objects that have m F 277W − m F 444W > 1.3 and m F 200W − m F 356W > 0.0. These color limits ensure that the observed red long-wavelength slope is not due to an emission line boosting the F444W flux, and they return sources similar to those presented in the literature. For our sample, these cuts select 12 objects (9 in GOODS-S and 3 in GOODS-N). Of those sources, 11 are at z a = 8 − 10, while one source is at z a = 11.64. We provide the IDs, z a values, F444W magnitudes (measured in a 0.2′′ aperture), and colors for these sources in Table 5, and we show six of these sources in Figure 15. Outside of the highest-redshift source, JADES-GS-53.11023-27.74928, these sources have fairly tight lower limits on their redshifts due to both the lack of flux observed in the F090W band and the red slope not being easily reproduced at low redshift. The EAZY templates used in the present analysis are able to fit the observed SEDs as high-redshift sources, as is demonstrated with the blue lines in Figure 15. However, JADES-GS-53.11023-27.74928 is very faint (∼ 1 nJy) at wavelengths shorter than 2 µm, making a photometric redshift estimate difficult. JADES-GS-53.18354-27.77014, a source with a FRESCO spectroscopic redshift (z spec = 8.38), is extended, with three visible clumps spanning 0.6′′ (2.9 kpc at z a = 8.38), of which the central knot has a very red observed UV-through-optical slope.
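The color screen described above is a simple pair of cuts, sketched below on 0.2′′ aperture AB magnitudes; the function name is ours.

```python
def red_lw_slope(m_f200w, m_f277w, m_f356w, m_f444w):
    """
    Color cuts for candidates with red long-wavelength slopes: a red
    F277W-F444W color together with m_F200W - m_F356W > 0 (fainter at 2 um
    than at 3.5 um), so that a single emission line boosting F444W cannot by
    itself produce the selected slope.
    """
    return (m_f277w - m_f444w > 1.3) and (m_f200w - m_f356w > 0.0)
```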
The origin of these sources is not obvious. One possible cause of such a red slope is the presence of a dust-obscured accretion disk from supermassive black hole growth in these objects, as discussed in Furtak et al. (2023), Barro et al. (2023), and Akins et al. (2023). This would be of interest given the lack of ultra-high-redshift active galaxies currently known, and the short timescales over which these supermassive black holes would have had to grow in the early universe. Another alternative is that these sources could have strong optical line emission that boosts the long-wavelength flux, similar to what is presented in Endsley et al. (2023). In this work, the authors describe how galaxy models with young stellar populations or supermassive black hole growth can replicate the photometry for a sample of sources selected from JWST CEERS. An alternate view is offered in Labbé et al. (2023), who argue that sources like these are instead very massive, and that the red long-wavelength slope is indicative of an evolved population, although this interpretation is in contrast to theoretical models of galaxy growth (Prada et al. 2023). A continued exploration of the stellar properties of JADES sources at z = 7 − 9 with red long-wavelength slopes will be presented in Endsley et al. (in prep). However, until a number of these sources are followed up with deep spectroscopy, their nature will remain elusive.
CONCLUSIONS
In this paper, we have assembled a sample of 717 galaxies and candidate galaxies at z > 8 selected from the 125 sq. arcmin JWST JADES observations of GOODS-N and GOODS-S. We combined these data with publicly available medium-band observations from JEMS and FRESCO, and we describe our data reduction and photometric extraction. Our primary results are listed below:
• Using the template-fitting code EAZY, we calculated photometric redshifts for the JADES sources, and selected z > 8 candidates based on source SNR, the resulting probability of the galaxy being at z > 7, P (z > 7), and the difference in χ 2 between the best fit at z > 8 and the fit at z < 7. The final sample was visually inspected by seven of the authors, and contains 182 objects in GOODS-N and 535 objects in GOODS-S, consistent with the areas and observational depths in the different portions of the JADES survey.
• The photometric redshifts of these sources extend to z ∼ 18, with an F277W Kron magnitude range of 25 − 31 (AB). The brightest source in our sample is the previously studied galaxy GN-z11 (m F 277W,Kron = 25.73). We find 33 galaxy candidates at z a > 12, with the highest-redshift candidate being JADES-GN-189.15981+62.28898 with a photometric redshift of z a = 18.79.
• We find a number of galaxies and galaxy candidates at z = 8 − 12 that are visually extended across many kpc and consist of multiple UV-bright clumps with underlying diffuse optical emission, potentially demonstrating very early massive galaxy growth.
• Forty-two of the sources in our sample have spectroscopic redshift measurements. Each spectroscopic redshift agrees with the photometric redshift for the source to within |z spec − z a |/(1 + z spec ) < 0.15. We find an average offset between the calculated photometric redshifts and the spectroscopic redshifts of ⟨∆z = z a − z spec ⟩ = 0.26, lower than the results seen with other high-redshift samples in the literature. We speculate that the offset may be due to differences between the templates used to fit these objects and the observed galaxy SEDs, which will be mitigated as more accurate templates are created using high-redshift galaxy spectra from JWST/NIRSpec.
• To explore whether any of the sources are consistent with being low-mass stars, we fit our sources with brown dwarf models and measured whether the objects are unresolved. The galaxy templates fit the photometry with better accuracy than the brown dwarf templates for the vast majority of cases.
• We demonstrate that while traditional color selection would find most of the sources in our sample, at specific redshift ranges there are a number of sources that fall outside of typical color selection criteria.
• These results are robust to the exact EAZY templates used; the vast majority of sources found in our sample have similar redshifts when fit using the independently derived templates from Larson et al. (2023).
• Our sample includes a number of intriguing sources with red long-wavelength slopes, potentially from dust heated by a growing supermassive black hole at z > 8. This red slope could also be due to an abundance of strong optical line emission from young stellar populations.
Taken together, these sources represent an exciting and robust sample for follow-up studies of the early universe. The detailed stellar populations, as well as the resulting evolution of the mass and luminosity functions for the z > 8 JADES galaxies, will be presented in forthcoming studies from the JADES collaboration members. We also look forward to JADES Cycle 2 observations, which will push to fainter observed fluxes. In addition, many of these sources will be observed with JADES NIRSpec MSA spectroscopy to both confirm their redshifts and explore their ionization and metallicity properties. JWST has only just opened the door to the early universe, and the years to come promise to be the most scientifically fruitful in the history of extragalactic science.
We want to thank the anonymous referee for their comments and suggestions, which significantly improved this paper. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-03127 for JWST. These observations are associated with PIDs 1063, 1345, 1180, 1181, 1210, 1286, 1963, 1837, 1895, and 2738. Additionally, this work made use of the lux supercomputer at UC Santa Cruz, which is funded by NSF MRI grant AST1828315, as well as the High Performance Computing (HPC) resources at the University of Arizona, which are funded by the Office of Research Discovery and Innovation (ORDI), Chief Information Officer (CIO), and University Information Technology Services (UITS). We acknowledge support from the NIRCam Science Team contract to the University of Arizona, NAS5-02015.
Figure 1 .
Figure 1.EAZY templates used in this work.In the left panel, we show the EAZY "v1.3" templates, and in the right panel, the new EAZY templates created for fitting to JADES galaxies.The templates are presented in Fν units normalized at 2.8µm.These templates include both UV-faint quiescent and UV-bright star-forming populations, and were both compiled from the literature and created using Flexible Stellar Population Synthesis (fsps, Conroy & Gunn 2010).
Figure 2 .
Figure 2. Example best-fit SED for object JADES-GS- 53.17551-27.78064,at the best-fit redshift za = 9.66.We plot the NIRCam photometry with red points, and the HST photometry with light purple points.The error bars represent the 1σ uncertainties on the fluxes.We plot 2σ upper limits with downward pointing triangles.The blue line represents the fit corresponding to za and the gold line shows the best fit at z < 7. We show the minimum χ 2 value for the za fit, as well as the ∆χ 2 value corresponding to the difference between the minimum χ 2 and the χ 2 for the fit at z < 7.In the inset, we plot P (z), with the 1σ, 2σ, and 3σ uncertainty regions derived from the P (z) surface with shades of grey, and za with a blue line, along with the P (z) distribution for the z < 7 best fit in gold (also normalized to 1).Above the inset we provide the summed probability of the source being at z > 7, along with the EAZY redshifts corresponding to σ 68,low and σ 68,high .Below the SED we plot 2 ′′ × 2 ′′ thumbnails for the JADES NIRCam filters.
Figure 3 .
Figure 3. (Left) GOODS-N footprint showing the positions of the z > 8 candidates.The southeastern portion (dashed) of the GOODS-N area is deeper than the northwestern portion (solid), resulting in a larger density of candidate high-redshift galaxies.In grey, we plot the outline of the Hubble Deep Field (North)(Williams et al. 1996) as a comparison to the JADES survey area.(Right) GOODS-S footprint.The dashed blue outline highlights the JADES Deep GOODS-S area, and the solid blue outline highlights the JADES Medium GOODS-S region.The colored squares denote additional NIRCam pointings from the JEMS survey (burgundy), the 1210 parallels (purple), and the 1286 parallels (green).In grey, we plot the outline of the Hubble Ultra Deep Field(Beckwith et al. 2006).There is a noticeable increase in the density of sources in the JADES Deep and the ultra-deep 1210 parallel footprint.
Figure 4
Figure4.F277W AB Kron Magnitude plotted against the best-fitting EAZY za photometric redshift for the 717 galaxies and candidate galaxies in the GOODS-S (blue) and GOODS-N (red) z > 8 samples.Along the top we show the redshift distribution for the full sample (grey) as well as the GOODS-S and GOODS-N samples.On the right we show the magnitude distribution in a similar manner.The points colored with dark circles are plotted with the available spectroscopic redshifts for those sources, which, in many cases, extends to z < 8.We discuss these sources in Figure4.9.There is a lack of sources at z ∼ 10 because of how the Lyman-α break falls between the NIRCam F115W and F150W at this redshift, making exact photometric redshift estimates difficult.The brightest source in the sample is the spectroscopically-confirmed galaxy GN-z11 at zspec = 10.6, and the highest-redshift spectroscopically-confirmed source is JADES-GS-z13-0 at zspec = 13.2.There is one source JADES-GN-189.15981+62.28898,at za = 18.79, which we plot as a right-facing arrow in the plot and discuss in Section 4.3.
Figure5.Example SEDs for eight ∆χ 2 > 4 candidate GOODS-S and GOODS-N galaxies at za = 8 − 10.In each panel, the colors, lines, and symbols are as in Figure2.
Figure 8 .
Figure 8. Example SEDs for eight candidate GOODS-S and GOODS-N galaxies at za > 12.The remainder of the objects are in Figures 18, 19, and 20 in Appendix B. In each panel, the colors, lines, and symbols are as in Figure 2.
Figure 9 .
Figure9.Example SEDs for eight ∆χ 2 < 4 candidate GOODS-S and GOODS-N galaxies.In each panel, the colors, lines, and symbols are as in Figure2.
Figure 13. (Top Row) F090W, F115W, and F150W color selection diagrams for the GOODS-S and GOODS-N z > 8 sample. We show all of the sources in each panel, with upward pointing arrows plotted for those whose colors place them off the top of each figure. The colored region indicates the simple dropout selection in each panel, and we indicate which sources are selected with thick lines around the symbol. (Bottom Row) Photometric redshift distribution for the z > 8 sample plotted with a thick grey line, with the photometric redshifts for each color-selected sample overplotted in colored regions. In black, we plot the sum of all three of the dropout distributions. While the F090W, F115W, and F150W dropouts broadly map to photometric redshifts of z = 8 − 8.5, z = 8.5 − 11.5, and z = 11.5 − 15 respectively, there are a large number of sources in our sample that would not be selected via color selection, as seen by comparing the black and grey histograms.
Figure 14. mF115W − mF150W color, as measured using 0.2″ diameter aperture photometry, plotted against the best-fitting EAZY za photometric redshift for the GOODS-S (blue) and GOODS-N (red) samples. As photometric redshift increases, the effect of the Lyman-α break can be seen in the redder mF115W − mF150W color. We shade the region and highlight those objects selected by a mF115W − mF150W > 1.0 and mF150W − mF200W < 0.5 color cut with a thick outline around the point.
Figure 15. Example SEDs for six candidate galaxies with red long-wavelength slopes. In each panel, the colors, lines, and symbols are as in Figure 2.
Figure 16. Photometric redshifts plotted against input redshifts for a simulated EAZY SED placed at a grid of uniformly spaced redshifts between z = 7 − 18 with ∆z = 0.2, and at F200W SNR values between 0.5 and 20. We plot all of the resulting photometric redshifts in grey, and those with F200W SNR > 3 in red. We see a pile-up of sources at photometric redshifts of z ∼ 10 and z ∼ 13.
Figure 17. Distribution of photometric redshifts from the results shown in Figure 16. We plot the distribution of all of the sources with grey bars, and we plot the distribution of sources with F200W SNR > 3 with red bars. The gaps we observe at z ∼ 10 and z ∼ 13 in Figure 4 are more easily visible here for the simulated galaxies.
Figure 18. Continuation of Figure 8. In each panel, the colors, lines, and symbols are as in Figure 2.
Figure 19. Continuation of Figure 18. In each panel, the colors, lines, and symbols are as in Figure 2.
Figure 20. Continuation of Figure 19. In each panel, the colors, lines, and symbols are as in Figure 2. | 25,295.2 | 2023-06-04T00:00:00.000 | [
"Physics"
] |
Transient effect to small duty-cycle pulse in cascaded erbium-doped fiber amplifier system
Abstract. The factors that influence the transient effect on a small duty-cycle pulse in a cascaded erbium-doped fiber amplifier (EDFA) system are studied in simulation and experiment. The factors considered are the number of cascaded EDFAs and the peak power and extinction ratio of the optical pulse. The results show that the optical pulse is severely distorted by the transient effect of the EDFAs, and the distortion becomes more serious as each of the three parameters increases. To avoid or mitigate the transient effect, a method of adding another optical signal with a different wavelength to the objective pulse is employed in the experiment. The experimental results show that this method can effectively restrain the transient effect in a cascaded EDFA system.
Introduction
In the past few decades, erbium-doped fiber amplifiers (EDFAs) have been widely used due to their high gain, low noise figure, and broad gain spectrum. Generally speaking, many EDFAs are cascaded in a long-distance optical fiber communication system. Because of the slow gain dynamics of the EDFA, the transient effect, 1-7 which can significantly degrade the amplifying performance, is inevitable, especially in a cascaded EDFA system. On the one hand, most research on the transient effect to date has addressed wavelength division multiplexing (WDM) systems, where the duty cycle of the signals is usually 50%. [8][9][10][11][12] In this research, the number of cascaded EDFAs, the switching time, and the signal powers were found to influence the transient effect in WDM systems. [9][10][11][12] However, if a small duty-cycle pulse with a long period and a narrow pulse width is used, that research may be insufficient to describe the amplification process. On the other hand, small duty-cycle pulses are commonly used in fiber-sensing techniques, such as coherent optical time-domain reflectometry (COTDR), which is used to monitor faults in long-distance cascaded EDFA systems. 1 When a small duty-cycle pulse propagates through a cascaded EDFA system, the EDFA transient effect produces an optical surge and pulse distortion, accompanied by nonlinear effects. This severely affects the quality of the sensing and limits the monitoring distance. Therefore, the transient effect is an obstacle that must be overcome when a small duty-cycle pulse is used in a cascaded EDFA system.
In this paper, the transient effect on a small duty-cycle pulse propagating through a cascaded EDFA system is studied in both experiment and simulation. The study shows that the probe pulses are distorted by the transient effect, and that the number of cascaded EDFAs, the peak power, and the extinction ratio of the small duty-cycle pulse are the main influencing factors: the distortion becomes more serious as the values of these three factors increase. A method is then proposed and successfully used to suppress the transient effect in a cascaded EDFA system.
Design of Experiment and Simulation
The experimental setup is shown in Fig. 1. An ECL (external cavity laser) with a wavelength of 1561.42 nm is used as the source, and an optical pulse is generated by an acousto-optic modulator (AOM) with an extinction ratio (ER) of 50 dB. Here the ER is defined as the ratio of the peak power to the base power of the optical pulse. The modulated optical pulse is launched into the cascaded EDFA system, in which the length of each fiber section is 85 km. The gain of each EDFA is 17 dB and the noise figure of each EDFA is 4.5 dB. To observe the distortion of the small duty-cycle optical pulse, a 5∕95 optical coupler is placed after each EDFA; the output of the 5% port is sent to a photodetector (PD), and the PD output is directed to an oscilloscope (Agilent MSO6104A) to monitor the pulse shape. The conversion gain of the PD's RF output is 10³ V/W.
To compare with the experimental results, an EDFA is designed for the simulation using the "Er doped fiber dynamic" module of the OptiSystem software, which is specially intended for time scales on the order of microseconds or longer. The algorithm of this module is based on the solution of the propagation and rate equations for transitions between the upper and lower levels of a two-level system approximation. 13 The parameters of the devised EDFA are listed in Table 1; the emission and absorption cross-sections are left at the software defaults. In the simulations, the parameters of the EDFA are fixed, and the optical configuration is the same as that of the experiment shown in Fig. 1. The evolution of the pulse shape in the cascaded EDFA system is monitored through the 5% port of the coupler after each EDFA. The influences of the number of cascaded EDFAs, the ER, and the peak power of the input pulse on the transient effect are studied separately, in both the experiments and the simulations, and compared in the next section.
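To make the two-level gain dynamics concrete, the short sketch below integrates a reduced single-reservoir (log-gain) EDFA model in Python. It is only an illustration of the mechanism, not the OptiSystem algorithm, and the saturation energy and metastable-level lifetime are assumed values rather than the parameters of Table 1.

```python
import numpy as np

# Reduced single-amplifier model of EDFA gain dynamics (log-gain form):
#   dG/dt = (G0 - G)/tau - (Pin/Esat) * (exp(G) - 1)
# Assumed, illustrative parameters:
tau = 10e-3                      # metastable-level lifetime, s (assumed)
G0 = np.log(10**(17/10))         # small-signal log gain (17 dB)
Esat = 10e-6                     # saturation energy, J (assumed)

def gain_trace(pin, dt):
    """Integrate the log gain G(t) for an input power trace pin(t) in W."""
    G = np.empty_like(pin)
    g = G0                       # fully recovered after the long dark period
    for i, p in enumerate(pin):
        g += dt*((G0 - g)/tau - (p/Esat)*(np.exp(g) - 1.0))
        G[i] = g
    return G

# 5 us pulse, 0 dBm peak, 50 dB extinction ratio, sampled every 10 ns
dt = 10e-9
t = np.arange(0.0, 20e-6, dt)
pin = np.where(t < 5e-6, 1e-3, 1e-8)
pout = pin*np.exp(gain_trace(pin, dt))
# The leading edge sees the full stored gain; the trailing edge sees a
# depleted gain, which is the origin of the pulse distortion discussed below.
```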
Transient Effect with the Increasing Numbers of Cascaded EDFAs
In this part we focus on the influence of the number of cascaded EDFAs on the transient effect. In the experiment and the simulation, the width of the optical pulse is 5 μs with a period of 50 ms, and the peak power and the ER of the pulse are 0 dBm and 50 dB, respectively. The output pulse shapes from the experiment and the simulation are shown in Figs. 2 and 3, respectively. The values of the peak pulse power and the power of the accumulated amplified spontaneous emission (ASE) are compared in Table 2.
The tendency for pulse distortion due to the transient effect in the EDFAs is nearly the same in the experimental and simulated results: pulse distortion and ASE power accumulation can be seen in both. According to Table 2, the ASE power grows with the increasing number of cascaded EDFAs because the ASE accumulates along the chain; on the contrary, the pulse peak power decreases. Therefore, the pulse distortion becomes more and more serious with the increasing number of cascaded EDFAs, as shown in Figs. 2 and 3. The distortion is very serious in panels (c) and (d) of Figs. 2 and 3, where the 5 μs pulse is nearly split into two pulses. The reason is that the ASE power is too high while the pump power of each EDFA is finite. When many erbium (Er) ions are consumed by the ASE, the remaining ions cannot adequately amplify a pulse with a width of 5 μs; a pulse with a width of 1 or 2 μs may already exhaust the remaining Er ions. As the Er ion population recovers over time, the later part of the pulse can be re-amplified, so the split output pulse is seen in both the experiment and the simulation. In brief, the transient effect is enhanced with the increasing number of cascaded EDFAs, and the pulse distortion therefore becomes more serious.
Transient Effect with Different ER
The conditions of the experiment and the simulation in this section are the same as in Sec. 3.1, except that the peak power of the pulse is 10 dBm and another continuous-wave (CW) laser is added to the system to change the ER of the optical pulse. The peak power of the optical pulse is kept unchanged, and the ER is varied by changing the power of the CW laser. The output pulse shape after the second EDFA is distorted, and the power variation within a single pulse changes with the ER of the optical pulse; the results of both the experiment and the simulation are given in Fig. 4, where the ERs of the pulses are 5, 12, 15, and 21 dB, respectively. From Fig. 4 we find that the ER of the signal affects the pulse distortion: the distortion of the output pulse becomes more serious as the ER increases. The reason is that a lower ER corresponds to a higher noise level, and the noise of a low-ER pulse consumes more Er ions in the metastable level while it is being amplified. Therefore, when an EDFA amplifies a pulse with a low ER, it cannot accumulate as many Er ions in the metastable level, and the resulting pulse distortion is milder. However, reducing the ER has drawbacks of its own; for example, the sensing distance of a fiber sensor becomes shorter when the ER is low. A method is therefore needed to restrain the transient effect without reducing the ER.
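As a worked example of the relation between the CW power and the ER (illustrative numbers only, not the exact experimental settings): with the peak power fixed, the ER in dB is simply the difference between the peak and base powers in dBm, so the CW power required for a target ER follows directly.

```python
def cw_power_for_er(peak_dbm, er_db):
    """Base (CW) power in dBm giving the requested extinction ratio."""
    return peak_dbm - er_db      # a ratio in linear units is a difference in dB

# Peak power fixed at 10 dBm as in this section; ER values as in Fig. 4
for er_db in (5, 12, 15, 21):
    base_dbm = cw_power_for_er(10, er_db)
    print(f"ER = {er_db:2d} dB -> CW power = {base_dbm:5.1f} dBm "
          f"({10**(base_dbm/10):.2f} mW)")
```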
Transient Effect with Different Peak Power of Input Pulse
To study the influence of the peak power of the input pulse on the transient effect, pulses with a fixed ER of 50 dB, a period of 50 ms, a pulse width of 5 μs, and peak powers of −5, 0, 5, and 10 dBm are employed in the experiment and the simulation. The output pulse shapes from the second EDFA in both the experiment and the simulation are given in Fig. 5. In Fig. 5 we see that the higher the peak power, the more serious the pulse distortion. This is because, when the period and pulse width of a small duty-cycle pulse are kept the same, the number of Er ions accumulated in the metastable level is the same, and the transient gain of the EDFA is therefore the same for pulses with different peak powers. As a result, if the input power is higher, the output from the EDFA is higher, and the pulse distortion is more serious.
Method to Restrain the Transient Effect
Small duty-cycle pulses are commonly used in long-distance distributed fiber-sensing systems. In this situation, the performance of a sensor is limited by the pulse distortion induced by the transient effect of the EDFAs. The transient effect is more notable in a cascaded EDFA system, such as a transatlantic optical fiber link, where COTDR is usually used for nondestructively measuring the attenuation of a fiber link and locating discrete defects. It is therefore important to mitigate the transient effect in long-distance sensing. To restrain the transient effect on a small duty-cycle pulse in a cascaded EDFA system, the method adopted in the experiment is to add another signal with a different wavelength. The experimental setup is shown in Fig. 6. Compared to Fig. 1, we add an additional pulse (marked as the complementary pulse) with a wavelength of 1562.23 nm, whose duty cycle is complementary to that of the small duty-cycle pulse being amplified (called the objective pulse). The arrangement of the combined pulses is sketched in Fig. 7; the two pulses are combined and launched into the cascaded EDFA system at the same time. Using this method, the amplified optical pulses observed in the experiment are shown in Fig. 8. A slight pulse distortion still occurs after transmission through four EDFAs, which might be caused by small gaps between the two pulses, induced by the not quite ideal electrical pulse signals from our generator making the two pulses incompletely complementary. However, this distortion would not seriously affect the detection results in a sensing system. Two points should therefore be noted in practical applications. One is that the peak powers of the two pulses should be equal; if not, there will still be pulse distortion and accumulated ASE caused by the transient effect. The other is that the two pulses should be completely complementary; otherwise, there will still be pulse distortion caused by the transient effect. It was found in the experiment and the simulation that when the pulse period is larger than 2 μs, the transient effect begins to affect the objective pulse.
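A minimal sketch of the pulse arrangement described above, assuming idealised rectangular pulses in arbitrary units (not the measured waveforms): the complementary pulse at the second wavelength fills the dark interval of the objective pulse, so the total power entering the EDFA chain is constant and the population inversion is no longer modulated.

```python
import numpy as np

dt = 0.1e-6
period, width = 50e-3, 5e-6
t = np.arange(0.0, period, dt)

objective = np.where(t < width, 1.0, 0.0)   # small duty-cycle probe pulse
complementary = 1.0 - objective             # second-wavelength filler pulse

total = objective + complementary           # what each EDFA actually amplifies
assert np.allclose(total, 1.0)              # constant input -> steady inversion
```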
Conclusion
In this paper, the pulse distortion caused by the transient effect in a cascaded EDFA system is observed, and the factors that influence it are studied in experiment and simulation. These factors include the number of cascaded EDFAs and the ER and peak power of the input pulse; the results show that the pulse distortion caused by the transient effect becomes more serious as each of the three factors increases. Moreover, to reduce the harm caused by the transient effect, a complementary pulse is added to the objective pulse being amplified, and the experimental results show that this method can effectively suppress the transient effect. | 2,877 | 2013-02-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Multiplicity dependence of (multi-)strange hadron production in proton-proton collisions at √s = 13 TeV
The production rates and the transverse momentum distribution of strange hadrons at mid-rapidity (|y| < 0.5) are measured in proton-proton collisions at √s = 13 TeV as a function of the charged particle multiplicity, using the ALICE detector at the LHC. The production rates of K 0 S , Λ, Ξ, and Ω increase with the multiplicity faster than what is reported for inclusive charged particles. The increase is found to be more pronounced for hadrons with a larger strangeness content. Possible auto-correlations between the charged particles and the strange hadrons are evaluated by measuring the event-activity with charged particle multiplicity estimators covering different pseudorapidity regions. When comparing to lower energy results, the yields of strange hadrons are found to depend only on the mid-rapidity charged particle multiplicity. Several features of the data are reproduced qualitatively by general purpose QCD Monte Carlo models that take into account the effect of densely-packed QCD strings in high multiplicity collisions. However, none of the tested models reproduce the data quantitatively. This work corroborates and extends the ALICE findings on strangeness production in proton-proton collisions at 7 TeV.
Introduction
The production rates of strange and multi-strange hadrons in high-energy hadronic interactions are important observables for the study of the properties of Quantum Chromodynamics (QCD) in the non-perturbative regime. In the simplest case, strange quarks (s) in proton-proton (pp) collisions can be produced from the excitation of the sea partons. Indeed, in the past decades a significant effort has been dedicated to the study of the actual strangeness content of the nuclear wave function [1]. In QCD-inspired Monte Carlo generators based on Parton Showers (PS) [2] the hard (perturbative) interactions are typically described at the Leading Order (LO). In this picture the s quark can be produced in the hard partonic scattering via flavour creation and flavour excitation processes as well as in the subsequent shower evolution via gluon splitting. At low transverse momentum, ss pairs can be produced via non-perturbative processes, as described for instance in string fragmentation models, where the production of strangeness is suppressed with respect to light quark production due to the larger strange quark mass [3]. However, these models fail to quantitatively describe strangeness production in hadronic collisions [4][5][6].
An enhanced production of strange hadrons in heavy-ion collisions was suggested as a signature for the creation of a Quark-Gluon Plasma (QGP) [7,8]. The main argument in these early studies was that the mass of the strange quark is of the order of the QCD deconfinement temperature, allowing for thermal production in the deconfined medium. The lifetime of the QGP was then estimated to be comparable to the strangeness relaxation time in the plasma, leading to full equilibration. Strangeness enhancement in heavy-ion collisions was indeed observed at the SPS [9] and higher energies [10,11]. However, strangeness enhancement is no longer considered an unambiguous signature for deconfinement (see e.g. [12]). Strange hadron production in heavy-ion collisions is currently usually described in the framework of statistical-hadronisation (or thermal) models [13,14]. In central heavy-ion collisions, the yields of strange hadrons turn out to be consistent with the expectation from a grandcanonical ensemble, i.e. the production of strange hadrons is compatible with thermal equilibrium, characterised by a common temperature. On the other hand, the strange hadron yields in elementary collisions are suppressed with respect to the predictions of the (grand-canonical) thermal models. The suppression of the relative abundance of strange hadrons with respect to lighter flavours was suggested to be, at least partially, a consequence of the finite volume, which makes the application of a grand-canonical ensemble not valid in hadron-hadron and hadron-nucleus interactions (canonical suppression) [15][16][17]. However, this approach cannot explain the observed particle abundances if the same volume is assumed for both strange and non-strange hadrons [18] and does not describe the system size dependence of the φ meson, a hidden-strange hadron [19,20].
The ALICE Collaboration recently reported an enhancement in the relative production of (multi-) strange hadrons as a function of multiplicity in pp collisions at √ s = 7 TeV [21] and in p-Pb collisions at √ s NN = 5.02 TeV [22,23]. In the case of p-Pb collisions, the yields of strange hadrons relative to pions reach values close to those observed in Pb-Pb collisions at full equilibrium. These are surprising observations, because thermal strangeness production was considered to be a defining feature of heavy-ion collisions, and because none of the commonly-used pp Monte Carlo models reproduced the existing data [3,21]. The mechanisms at the origin of this effect need to be understood, and then implemented in the state-of-the-art Monte Carlo generators [3].
In this paper, strangeness production in pp interactions is studied at the highest energy reached at the LHC, √ s = 13 TeV. We present the measurement of the yields and transverse momentum ( p T ) distributions of single-strange (K 0 S , Λ, Λ̄) and multi-strange (Ξ−, Ξ̄+, Ω−, Ω̄+) particles at mid-rapidity, |y| < 0.5, with the ALICE detector [24]. The comparison of the present results with the former ones for pp and p-Pb interactions allows the investigation of the energy, multiplicity and system size dependence of strangeness production. Schematically, the multiplicity of a given pp event depends on (i) the number of Multiple Parton Interactions (MPI), (ii) the momentum transfer of those interactions, (iii) fluctuations in the fragmentation process. A systematic study of the biases induced by the choice of the multiplicity estimator along with the specific connections to the underlying MPI are also discussed in this paper. The paper is organised as follows. In Sect. 2 we discuss the data set and detectors used for the measurement; in Sect. 3 we describe the analysis techniques; in Sect. 4 we cover the evaluation of the systematic uncertainties; in Sect. 5 we present and discuss the results; in Sect. 6 we report our conclusions.
Experimental setup and data selection
A detailed description of the ALICE detector and its performance can be found in [24,25]. In this section, we briefly outline the main detectors used for the measurements presented in this paper. The ALICE apparatus comprises a central barrel used for vertex reconstruction, track reconstruction and charged-hadron identification, complemented by specialised forward detectors. The central barrel covers the pseudorapidity region |η| < 0.9. The main central-barrel tracking devices used for this analysis are the Inner Tracking System (ITS) and the Time-Projection Chamber (TPC), which are located inside a solenoidal magnet providing a 0.5 T magnetic field. The ITS is composed of six cylindrical layers of high-resolution silicon tracking detectors. The innermost layers consist of two arrays of hybrid Silicon Pixel Detectors (SPD), located at an average radial distance r of 3.9 and 7.6 cm from the beam axis and covering |η| < 2.0 and |η| < 1.4, respectively. The SPD is also used to reconstruct tracklets, short two-point track segments covering the pseudorapidity region |η| < 1.4. The outer layers of the ITS are composed of silicon strips and drift detectors, with the outermost layer having a radius r = 43 cm. The TPC is a large cylindrical drift detector of radial and longitudinal sizes of about 85 < r < 250 cm and −250 < z < 250 cm, respectively. It is segmented in radial "pad rows", providing up to 159 tracking points. It also provides charged-hadron identification information via specific ionisation energy loss in the gas filling the detector volume. The measurement of strange hadrons is based on "global tracks", reconstructed using information from the TPC as well as from the ITS, if the latter is available. Further outwards in radial direction from the beam-pipe and located at a radius of approximately 4 m, the Time of Flight (TOF) detector measures the time-of-flight of the particles. It is a large-area array of multigap resistive plate chambers with an intrinsic time resolution of 50 ps. The V0 detectors are two forward scintillator hodoscopes employed for triggering, background suppression and event-class determination. They are placed on either side of the interaction region at z = −0.9 m and z = 3.3 m, covering the pseudorapidity regions −3.7 < η < −1.7 and 2.8 < η < 5.1, respectively.
The data considered in the analysis presented in this paper were collected in 2015, at the beginning of Run 2 operations of the LHC at √ s = 13 TeV. The sample consists of 50 M events collected with a minimum bias trigger requiring a hit in both V0 scintillators in coincidence with the arrival of proton bunches from both directions. The interaction probability per single bunch crossing ranges between 2 and 14%.
The contamination from beam-induced background is removed offline by using the timing information in the V0 detectors and taking into account the correlation between tracklets and clusters in the SPD detector, as discussed in detail in [25]. The contamination from in-bunch pile-up events is removed offline excluding events with multiple vertices reconstructed in the SPD. Part of the data used in this paper were collected in periods in which the LHC collided "trains" of bunches each separated by 50 ns from its neighbours. In these beam conditions most of the ALICE detectors have a readout window wider than a single bunch spacing and are therefore sensitive to events produced in bunch crossings different from those triggering the collision. In particular, the SPD has a readout window of 300 ns. The drift speed in the TPC is about 2.5 cm/µs, which implies that events produced less than about 0.5 µs apart cannot be resolved. Pile-up events produced in different bunch crossings are removed exploiting multiplicity correlations in detectors having different readout windows.
Analysis details
The results are presented for primary strange hadrons. 1 The measurements reported here have been obtained for events having at least one charged particle produced with p T > 0 in the pseudorapidity interval |η| < 1 (INEL > 0), corresponding to about 75% of the total inelastic cross-section. In order to study the multiplicity dependence of strange and multi-strange hadrons, for each multiplicity estimator the sample is divided into event classes based on the total charge deposited in the V0 detectors (V0M amplitude) or on the number of tracklets in two pseudorapidity regions: |η| < 0.8 and 0.8 < |η| < 1.5. The event classes are summarised in Table 1. Since the measurement of strange hadrons is performed in the region |y| < 0.5, the usage of these three estimators allows one to evaluate potential biases on particle production, arising from measuring the multiplicity in a pseudorapidity region partially overlapping with the one of the reconstructed strange hadrons (N |η|<0.8 tracklets ), or disjoint from it (V0M and N 0.8<|η|<1.5 tracklets ). In particular, the effect of fluctuations can be expected to be stronger if the multiplicity estimator and the observable of interest are measured in the same pseudorapidity region. The usage of two different non-overlapping estimators allows the study of the effect of a rapidity gap between the multiplicity estimator and the measurement of interest.
The events used for the data analysis are required to have a reconstructed vertex in the fiducial region |z| < 10 cm. As mentioned in the previous section, events containing more than one distinct vertex are tagged as pile-up and discarded. For each event class and each multiplicity estimator, the average pseudorapidity density of primary charged-particles dN ch /dη is measured at mid-rapidity (|η| < 0.5) using the technique described in [27]. The dN ch /dη values, corrected for acceptance and efficiency as well as for contamination from secondary particles and combinatorial background, are listed in Table 1. When multiplicity event classes are selected outside the |η| < 0.8 region, the corresponding charged particle multiplicity at mid-rapidity is characterized by a large variance. In the case of the V0M estimator, the variance ranges between 30 and 70% of the mean dN ch /dη for the highest and lowest multiplicity classes, respectively.
The following decay channels are studied [28]: K 0 S → π+π−, Λ → pπ− (Λ̄ → p̄π+), Ξ− → Λπ− (Ξ̄+ → Λ̄π+) and Ω− → ΛK− (Ω̄+ → Λ̄K+).
1 A primary particle [26] is defined as a particle with a mean proper decay length cτ larger than 1 cm, which is either (a) produced directly in the interaction, or (b) from decays of particles with cτ smaller than 1 cm, excluding particles produced in interactions with material.
Table 2 Track, topological and candidate selection criteria applied to K 0 S , Λ and Λ̄ candidates. DCA stands for "distance of closest approach", PV represents the "primary event vertex" and θ is the angle between the momentum vector of the reconstructed V 0 and the displacement vector between the decay and primary vertices. The selection on DCA between V 0 daughter tracks takes into account the corresponding experimental resolution.
In the following, we refer to the sum of particles and antiparticles, Λ + Λ̄, Ξ− + Ξ̄+ and Ω− + Ω̄+, simply as Λ, Ξ and Ω.
The details of the analysis have been discussed in earlier ALICE publications [5,6,18,22]. The tracks retained in the analysis are required to cross at least 70 TPC readout pads out of a maximum of 159. Tracks are also required not to have large gaps in the number of expected tracking points in the radial direction. This is achieved by checking that the number of clusters expected based on the reconstructed trajectory and the measurements in neighbouring TPC pad rows do not differ by more than 20%.
Each decay product arising from V 0 (K 0 S , Λ, Λ̄) and cascade (Ξ−, Ξ̄+, Ω−, Ω̄+) candidates is verified to lie within the fiducial tracking region |η| < 0.8. The specific energy loss (dE/dx) measured in the TPC, used for the particle identification (PID) of the decay products, is also required to be compatible within 5σ with the one expected for the corresponding particle species' hypothesis. The dE/dx is evaluated as a truncated mean using the lowest 60% of the values out of a possible total of 159. This leads to a resolution of about 6%. A set of "geometrical" selections is applied in order to identify specific decay topologies (topological selection), improving the signal/background ratio. The topological variables used for V 0 s and cascades are described in detail in [6]. In addition, in order to reject the residual out-of-bunch pile-up background on the measured yields, it is required that at least one of the tracks from the decay products of the (multi-)strange hadron under study is matched in either the ITS or the TOF detector. The selections used in this paper are summarised in Table 2 for the V 0 s and in Table 3 for the cascades.
Strange hadron candidates are required to be in the rapidity window |y| < 0.5. K 0 S (Λ) candidates compatible with the alternative V 0 hypothesis are rejected if they lie within ±5 MeV/c² (±10 MeV/c²) of the nominal Λ (K 0 S ) mass. A similar selection is applied to the Ω, where candidates compatible within ±8 MeV/c² of the nominal Ξ mass are rejected. The width of the rejected region was determined according to the invariant mass resolution of the corresponding competing signal. Furthermore, candidates whose proper lifetimes are unusually large for their expected species are also rejected to avoid combinatorial background from interactions with the detector material. The signal extraction is performed as a function of p T . A preliminary fit is performed on the invariant mass distribution using a Gaussian plus a linear function describing the background. This allows for the extraction of the mean (μ) and width (σ G ) of the peak. A "peak" region is defined within ±6(4)σ G for V 0 s (cascades) with respect to μ for any measured p T bin. Adjacent background bands, covering a combined mass interval as wide as the peak region, are defined on both sides of that central region. The signal is then extracted with a bin counting procedure, subtracting counts in the background region from those of the signal region. Alternatively, the signal is extracted by fitting the background with a linear function extrapolated under the signal region. This procedure is used to compute the systematic uncertainty due to the signal extraction. Examples of the invariant mass peaks for all particles are shown in Fig. 1.
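The bin-counting extraction described above can be sketched as follows (schematic only: the peak model, band widths and data are placeholders, not the analysis code used for this measurement).

```python
import numpy as np
from scipy.optimize import curve_fit

def peak_plus_line(m, A, mu, sigma, a, b):
    """Gaussian signal peak on top of a linear background."""
    return A*np.exp(-0.5*((m - mu)/sigma)**2) + a + b*m

def raw_yield(centers, counts, n_sigma=6):
    """Bin counting: peak region minus side bands of equal combined width."""
    p0 = [counts.max(), centers[np.argmax(counts)], 0.003, counts.min(), 0.0]
    (A, mu, sigma, a, b), _ = curve_fit(peak_plus_line, centers, counts, p0=p0)
    half = n_sigma*abs(sigma)                       # half-width of the peak region
    peak = np.abs(centers - mu) < half
    left = (centers >= mu - 2*half) & (centers < mu - half)
    right = (centers > mu + half) & (centers <= mu + 2*half)
    background = counts[left].sum() + counts[right].sum()
    return counts[peak].sum() - background
```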
Only the Λ turns out to be affected by a significant contamination from secondary particles, coming from the decay of charged and neutral Ξ. In order to estimate this contribution we use the measured Ξ− and Ξ̄+ spectra, folded with a p T -binned 2D matrix describing the decay kinematics and secondary reconstruction efficiencies. The Ξ → Λπ decay matrix is extracted from Monte Carlo (see below for the details on the generator settings). The fraction of secondary particles in the measured Λ spectrum varies between 10 and 20%, depending on p T and multiplicity. Further details on the uncertainties characterising the feed-down contributions are provided in the next section.
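The feed-down subtraction logic can be written schematically as a matrix folding (the matrix and spectra below are placeholders; in the analysis the matrix is taken from the Monte Carlo simulation described below).

```python
import numpy as np

def subtract_feeddown(lambda_raw, xi_yield, F):
    """Remove secondary Lambdas produced in Xi decays.

    lambda_raw : raw Lambda counts per Lambda pT bin (length n)
    xi_yield   : corrected Xi yields per Xi pT bin (length m)
    F[i, j]    : reconstructed secondary Lambdas in Lambda bin i per Xi in bin j
                 (decay kinematics times secondary reconstruction efficiency)
    """
    return lambda_raw - F @ xi_yield

# Toy numbers, only to show the shapes involved
F = np.full((4, 3), 0.05)
lambdas = np.array([1000.0, 800.0, 500.0, 200.0])
xis = np.array([120.0, 80.0, 30.0])
primary_lambdas = subtract_feeddown(lambdas, xis, F)
```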
The raw p T distributions are corrected for acceptance and efficiency using Monte Carlo simulated data. Events are generated using the PYTHIA 6.425 (tune Perugia 2011) [29,30] event generator, and transported through a GEANT 3 [31] (v2-01-1) model of the detector. With respect to previous GEANT 3 versions, the adopted one contains a more realistic description of (anti)proton interactions. The quality of this description was cross-checked comparing to the results obtained with the state-of-the-art transport codes FLUKA [32,33] and GEANT 4.9.5 [34]. It was found that a correction factor < 5% is needed for the efficiency of Λ̄, Ξ̄+ and Ω̄+ for p T < 1 GeV/c, while the effect is negligible at higher p T . Events generated using PYTHIA 8.210 (tune Monash 2013) [35,36] and EPOS-LHC (CRMC package 1.5.4) [37] and transported in the same way are used for systematic studies, namely to compute the systematic uncertainties arising from the normalisation and from the closure of the correction procedure (details provided in the next sections).
The acceptance-times-efficiency changes with p T , saturating at a value of about 40%, 30%, 30% and 20% at p T of about 2, 3, 3 and 4 GeV/c for K 0 S , Λ, Ξ and Ω, respectively. These values include the losses due to the branching ratio. They are found to be independent of the multiplicity class within 2%, limited by the available Monte Carlo simulated data. The dependence of the efficiency on the generated p T distributions was checked for all particle species. It is found to be relevant only in the case of the Ω, where large p T bins are used. This effect is removed by reweighting the Monte Carlo p T distribution with the measured one using an iterative procedure.
In order to compute p T and the p T -integrated production yields, the spectra are fitted with a Tsallis-Lévy [38] distribution to extrapolate into the unmeasured p T region. The Tsallis-Lévy function describes the p T spectra well for all the examined strange hadrons over the measured p T range.
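For reference, one common form of the Lévy-Tsallis parameterisation and the extrapolation step are sketched below; the parameter values are placeholders and the normalisation convention may differ in detail from the one used in the fits described above.

```python
import numpy as np
from scipy.integrate import quad

def levy_tsallis(pt, dndy, C, n, m0):
    """d2N/(dpT dy) in one common Levy-Tsallis convention."""
    mt = np.sqrt(pt**2 + m0**2)
    norm = dndy*(n - 1)*(n - 2)/(n*C*(n*C + m0*(n - 2)))
    return pt*norm*(1.0 + (mt - m0)/(n*C))**(-n)

# Placeholder parameters for a Lambda-like spectrum (mass in GeV/c^2)
pars = (0.2, 0.25, 7.0, 1.116)

measured, _ = quad(levy_tsallis, 0.4, 8.0, args=pars)    # measured pT range
total, _ = quad(levy_tsallis, 0.0, 50.0, args=pars)      # full pT range
extrapolated_fraction = 1.0 - measured/total
mean_pt = quad(lambda p: p*levy_tsallis(p, *pars), 0.0, 50.0)[0]/total
```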
Systematic uncertainties
Several sources of systematic effects on the evaluation of the p T distributions were investigated. The main contributions for three representative p T values are summarised in Table 4 for the INEL > 0 data sample.
The stability of the signal extraction method was checked by varying the widths used to define the "signal" and "background" regions, expressed in terms of number of σ G as defined in Sect. 3. The raw counts were also extracted with a fitting procedure and compared to the standard ones computed by the bin counting technique. An uncertainty ranging between 0.2 and 3.5% depending on p T is assigned to the signal extraction of the V 0 s and cascades based on these checks.
The stability of the acceptance and efficiency corrections was verified by varying all track, candidate and topological selection criteria within ranges leading to a maximum variation of ±10% in the raw signal yield. The results were compared to those obtained with the default selection criteria (Sect. 3). Variations not compatible with statistical fluctuations (following the prescription in [39] with a 2σ threshold) are added to the systematic uncertainty.
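The decision of whether a given variation is statistically significant can be sketched with a simple Barlow-style check (assuming the varied sample is a subset of the default one, with a 2σ threshold; the exact prescription of [39] may differ in detail).

```python
import numpy as np

def significant_part(y_def, e_def, y_var, e_var, threshold=2.0):
    """Keep a variation only where it exceeds `threshold` standard deviations.

    Since the varied selection uses a subset of the default candidates, the
    statistical error on the difference is estimated from the difference of
    the squared statistical uncertainties.
    """
    delta = y_var - y_def
    sigma_delta = np.sqrt(np.abs(e_var**2 - e_def**2))
    with np.errstate(divide="ignore", invalid="ignore"):
        n_sigma = np.where(sigma_delta > 0, np.abs(delta)/sigma_delta, np.inf)
    return np.where(n_sigma > threshold, delta, 0.0)
```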
The resulting uncertainty from topological and track selections (except TPC dE/dx) depends on p T and amounts at most to 4%, 5%, 4% and 6% for K 0 S , Λ, Ξ and Ω, respectively.
The TPC dE/dx selection is used to reduce the combinatorial background in the strange baryon invariant mass distribution. The uncertainty was evaluated varying the TPC dE/dx selection requirements between 4 and 7σ and was found to be at most 1% (3%) for the Λ (Ξ and Ω). For the K 0 S the uncertainty due to the TPC dE/dx usage was evaluated by comparing results obtained adopting the default loose PID requirements (5σ) with those obtained without applying any PID selection. The difference is found to be negligible (< 1%).
The systematic uncertainty for the competing decay rejection was investigated by removing this condition entirely for the Λ and Ω. It resulted in a deviation on the p T spectra of at most 4% and 6%, respectively. For the K 0 S the systematic uncertainty was evaluated by changing the width of the competing rejected mass window between 3 and 5.5 MeV/c², and the corresponding deviation was found to be at most 1%. For the strange baryons, the systematic uncertainty related to the proper lifetime was computed by varying the selection requirements between 2.5 and 5 cτ. The variation range for the K 0 S was set to 5-15 cτ. The statistically significant deviations were found to be at most 3% for the Ω and negligible (< 1%) for all other particles.
An uncertainty related to the absorption in the detector material was assigned to the anti-baryons, mostly due to the interactions of anti-proton daughters. It was estimated on the basis of the comparison of the different transport codes mentioned in Sect. 3. The uncertainty on the absorption cross section for baryons and K 0 S was found to be negligible. Furthermore an additional 2% uncertainty is added to account for possible variations of the tracking efficiency with multiplicity (Sect. 3).
The uncertainty due to approximations in the description of the detector material was estimated with a Monte Carlo simulation where the material budget was varied within its uncertainty [25]. The assigned systematic uncertainty ranges between 8% at low p T to about 1% at high p T .
The Λ p T spectrum is affected by an uncertainty coming from the feed-down correction, due to the uncertainties on the measured Ξ spectrum and on the multiplicity dependence of the feed-down fraction. Furthermore, the contribution from the neutral Ξ0 was taken into account by assuming Ξ±/Ξ0 = 1 or using the ratio provided by the Monte Carlo (using the reference PYTHIA 6 sample described in the previous section). The difference between these two estimates was taken into account in the calculation of the total uncertainty due to the feed-down correction, which ranges from 2% to 4% depending on p T and multiplicity.
The systematic uncertainty due to the out-of-bunch pileup rejection was evaluated by changing the matching scheme with the relevant detectors, considering the following configurations: matching of at least one decay track with the ITS (TOF) detector below (above) 2 GeV/c of the reconstructed (multi-) strange hadron; ITS matching of at least one decay track in the full p T range. Half of the maximum difference between these configurations and the standard selection was taken as the systematic uncertainty, which was found to increase with transverse momentum and to saturate at high p T , reaching a maximum value of 2.4% (3%) for V 0 s (cascades).
The effect of a possible residual contamination from inbunch pile-up events was estimated varying the pile-up rejection criteria and dividing the data sample in three groups with an average interaction probability per bunch crossing of 3%, 6%, 13%, respectively. The resulting uncertainty is larger at low multiplicity, and ranges between 1% (3%) for the K 0 S to 2% (3%) for the baryons in high-(low-)multiplicity events.
Several additional consistency checks have been performed which are described in the following. The analysis has been repeated separately for events with positive and negative z-vertex position, as well as considering candidates reconstructed in positive and negative rapidity windows. The resulting p T -spectra were found to be statistically compatible with the standard analysis.
In order to ensure that the estimated systematic uncertainties are not affected by statistical fluctuations, two checks were performed. First of all, the threshold used to consider a variation statistically significant was varied between one and three standard deviations. Then, the analysis was repeated with wider p T bins. These checks showed that statistical fluctuations do not significantly affect the estimated uncertainties.
Fig. 4 Transverse momentum distribution of K 0 S , Λ, Ξ, and Ω, for multiplicity classes selected using the tracklets in 0.8 < |η| < 1.5. Statistical and total systematic uncertainties are shown by error bars and boxes, respectively. In the bottom panels ratios of multiplicity dependent spectra to INEL > 0 are shown. The systematic uncertainties on the ratios are obtained by considering only contributions uncorrelated across multiplicity. The spectra are scaled by different factors to improve the visibility. The dashed curves represent Tsallis-Lévy fits to the measured spectra.
The uncertainty on the extrapolated fraction was estimated repeating the fit to the spectra with five alternative functions (Blast-Wave, Boltzmann, Bose-Einstein, m T -exponential, Fermi-Dirac). Since these alternative functions do not, in general, describe the full p T -distribution, the fit range was limited to a small number of data points in order to obtain a good description of the fitted part of the spectrum. This procedure was repeated separately for the low and for the high p T extrapolation, with the final uncertainties being dominated by the low p T contribution. The reliability of the extrapolation uncertainty estimate was checked using a simple linear extrapolation to p T = 0 as an extreme case and in a full Monte Carlo closure test where the EPOS model was used as data and PYTHIA was used to estimate the corrections. The resulting uncertainty on the integrated yields is around 2.5% for the Λ and ranges between 3% (4%) at high multiplicity to 19% (12%) at low multiplicity for the Ω (Ξ). The extrapolation uncertainty on p T is ∼2% for the Λ and ranges between 2% (3%) at high multiplicity to 12% (7%) at low multiplicity for the Ω (Ξ).
Fig. 5 p T of K 0 S , Λ, Ξ, and Ω in multiplicity event classes selected according to different estimators (see text for details). Statistical and systematic uncertainties are shown by error bars and empty boxes, respectively. Shadowed boxes represent uncertainties uncorrelated across multiplicity.
The main focus of this paper is the study of the multiplicity dependence of strangeness production. In this light, the different systematic uncertainties can be categorised in the following way:
1. Fully uncorrelated uncertainties: the change in the data is completely uncorrelated across multiplicity classes. These sources are rare, as most systematic effects have a smooth evolution with multiplicity.
2. Fully correlated uncertainties: lead to a correlated shift of the data in the same direction, independently of the multiplicity class being studied. The common part of this shift has to be considered separately, since it does not affect the shape of the multiplicity-dependent observable.
3. Anti-correlated uncertainties: the effect is opposite in low and high multiplicity events.
In the following we quote separately uncertainties that affect the trends as a function of multiplicity and those that lead to a constant fractional shift across all multiplicity classes. The effect of every systematic variation was evaluated simultaneously in each multiplicity class and in minimum bias events to separate the fully correlated part of the uncertainty.
Sources leading to a global, fully-correlated shift in the spectra were subtracted from the total uncertainty while the remaining contribution could in principle belong to any of the three aforementioned categories. However, since different (independent) sources are combined in the final uncertainties, it is a reasonable assumption to consider them as uncorrelated. For the figures shown in Sect. 5, total uncertainties are shown as boxes, while shadowed boxes represent uncertainties uncorrelated across different multiplicity classes. Statistical uncertainties are depicted as error bars.
Results and discussion
Particles and anti-particles turn out to have compatible transverse momentum distributions within uncertainties, consistently with previous results at the LHC. In the following, unless specified otherwise, we present results for their sum. The p T distributions of strange hadrons, measured in |y| < 0.5, are shown in Figs. 2, 3, 4 for the different multiplicity classes, selected using the estimators V0M, tracklets in |η| < 0.8 and tracklets in 0.8 < |η| < 1.5, as summarised in Table 1. The Lévy-Tsallis fits to the p T distributions, used for the extrapolation, are also displayed. The bottom panels depict the ratio to the minimum bias (INEL > 0) p T distribution. The p T spectra become harder as the multiplicity increases, as also visible in Fig. 5, which shows the average p T as a function of the mid-rapidity charged particle multiplicity. While the V0M and N 0.8<|η|<1.5 tracklets results show the same trend, the spectra obtained with the N |η|<0.8 tracklets estimator are systematically softer for comparable dN ch /dη values (though still compatible within uncertainty in the case of the strange baryons).
The increase of the p T as a function of the charged-particle multiplicity (Fig. 5) is compatible, within uncertainties, for all particle species. The hardening of the p T spectra with the charged-particle multiplicity was already reported for pp [40] and p-Pb collisions [22] at lower energies.
It is interesting to notice that the ratio of the measured p T spectra to the minimum bias one, shown in the bottom panels of Figs. 2, 3, 4, reaches a plateau for p T ≳ 4 GeV/c. This applies to all particle species and to all multiplicity estimators. The trend at high p T is highlighted in Fig. 6, which shows the integrated yields for p T > 4 GeV/c as a function of the mid-rapidity multiplicity. Both the yields of strange hadrons and the charged particle multiplicity are self-normalised, i.e. they are divided by their average quoted on the INEL > 0 sample. The high-p T yields of strange hadrons increase faster than the charged particle multiplicity. Despite the large uncertainties, the data also hint at the increase being non-linear. The self-normalised yields of strange hadrons reach, at high multiplicity, larger values for the N |η|<0.8 tracklets multiplicity selection as compared to the other estimators. For all multiplicity selections the self-normalised yields of baryons are higher than those of K 0 S mesons. The Monte Carlo models EPOS-LHC, PYTHIA 8 and PYTHIA 6, also shown in Fig. 6, reproduce the overall trend of strange hadrons seen in the data, with EPOS-LHC showing a clear difference between the K 0 S and the baryons, as observed in the data. Indeed, a non-linear increase can be explained by the combined effect of multiplicity fluctuations and interactions between different MPIs, which produces a collective boost [41]. Both color reconnection effects implemented in PYTHIA and a collective hydrodynamic expansion can account for the non-linear increase pattern.
Fig. 8 dN /dy (integrated over the full p T range) as a function of multiplicity for different strange particle species reported for different multiplicity classes (see text for details). Error bars and boxes represent statistical and total systematic uncertainties, respectively. The bin-to-bin systematic uncertainties are shown by shadowed boxes.
Fig. 9 dN /dy (integrated over the measured p T ranges 0-12, 0.4-8, 0.6-6.5 and 0.9-5.5 GeV/c for K 0 S , Λ, Ξ− and Ω−, respectively) as a function of multiplicity for different strange particle species reported for different multiplicity classes (see text for details). Error bars and boxes represent statistical and total systematic uncertainties, respectively. The bin-to-bin systematic uncertainties are shown by shadowed boxes.
The left (right) panel of Fig. 7 shows the ratio of the p T spectra obtained with different estimators for multiplicity classes with comparable average dN ch /dη values. The comparison between the V0M and N |η|<0.8 tracklets estimators shows that the bias introduced by the latter estimator does not depend on the particle species and is more pronounced at low and intermediate p T , although the uncertainties are large.
The p T -integrated yields of strange hadrons are shown in Fig. 8 for the three estimators considered in this paper. For reference, Fig. 9 shows the results integrated in the measured p T range with no extrapolation. These are characterised by a smaller uncertainty and can be useful for Monte Carlo tuning. In the rest of this section, we focus on the discussion of the fully extrapolated yields. The results obtained with the V0M and N 0.8<|η|<1.5 tracklets estimators follow a similar linear trend, while the results obtained with N |η|<0.8 tracklets tend to saturate, showing a lower dN /dy for a similar high-multiplicity class. In order to gain insight on this difference, a generator-level PYTHIA (tune Perugia 2011) simulation study was performed. The abundance of strange hadrons in |y| < 0.5 was studied as a function of the estimated mid-rapidity charged particle multiplicity for several event classes, selected using charged primary particles measured in the η ranges corresponding to the experimental estimators presented in this paper: this generator-level study confirms the trend observed in the data, which can be understood in terms of a selection bias sensitive to fluctuations in particle yields. Indeed, an estimator based on charged tracks enhances charged primary particles over neutral particles or secondaries. If the multiplicity estimator and the observable under study are measured in the same η region, one expects primary charged-particles to be enhanced with respect to strange hadrons, which explains the saturation of yields observed in Fig. 8 for the N |η|<0.8 tracklets estimator. As a further check of this interpretation, Fig. 10 shows the correlations between the yields of different strange hadrons for the multiplicity estimators studied in this paper. The trend is linear, and similar for all estimators, confirming that the selection bias on primary charged-particles is stronger than that on strange hadrons.
Fig. 11 Integrated yields of K 0 S , Λ, Ξ, and Ω as a function of dN ch /dη in V0M multiplicity event classes at √ s = 7 and 13 TeV. Statistical and systematic uncertainties are shown by error bars and empty boxes, respectively. Shadowed boxes represent uncertainties uncorrelated across multiplicity. The corresponding results obtained for the INEL > 0 event class are also shown.
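The selection bias discussed above can be illustrated with a toy model (purely schematic: the Poisson multiplicities below are invented and this is not the PYTHIA study performed here). Selecting high-multiplicity events with a counter that overlaps the measurement window inflates the counted charged multiplicity through its own fluctuations, while the strange yield is not inflated, so the yield at fixed dN ch /dη appears to saturate.

```python
import numpy as np
rng = np.random.default_rng(1)

n_ev = 200_000
activity = rng.gamma(2.0, 2.0, n_ev)        # common per-event activity (e.g. number of MPIs)
nch_mid = rng.poisson(5.0*activity)         # charged particles, |eta| < 0.8
nch_fwd = rng.poisson(5.0*activity)         # charged particles, disjoint eta window
n_strange = rng.poisson(0.5*activity)       # strange hadrons at mid-rapidity

def event_class(estimator, frac=0.01):
    """Mean mid-rapidity charged and strange multiplicities of the top `frac` events."""
    sel = estimator >= np.quantile(estimator, 1.0 - frac)
    return nch_mid[sel].mean(), n_strange[sel].mean()

overlapping = event_class(nch_mid)   # estimator overlaps the measurement region
disjoint = event_class(nch_fwd)      # estimator in a disjoint eta window
# The overlapping selection returns a larger mean nch_mid for the same strange
# yield, i.e. a lower strange yield at fixed charged multiplicity.
print(overlapping, disjoint)
```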
The energy dependence of the strangeness yields and p T versus the charged particle multiplicity at mid-rapidity is studied in Figs. 11 and 12, where our results are compared to the previous pp measurements at √ s = 7 TeV [21]. The minimum bias results in the INEL > 0 event class at √ s = 7 and 13 TeV [5] are also shown.
As can be seen in Fig. 11, the yields of strange hadrons increase with the charged particle multiplicity following a power law behaviour, and the trend is the same at √ s = 7 and 13 TeV. The INEL > 0 results also follow the same trend at all the tested centre-of-mass energies. This result indicates that the abundance of strange hadrons depends on the local charged particle density and turns out to be invariant with the collision energy, i.e. an energy scaling property applies for the multiplicity-dependent yields of strange hadrons. It should also be noted that the yields of particles with larger strange quark content increase faster as a function of multiplicity, as already reported in [21]. Figure 13 shows the Λ/K 0 S (no Σ0 contribution considered here), Ξ/K 0 S and Ω/K 0 S ratios compared to calculations from grand-canonical thermal models [13,14], which were found to satisfactorily describe central Pb-Pb data at √ s NN = 2.76 TeV. In the context of a canonical thermal model for a gas of hadrons, an increase of the relative strangeness abundance depending on the strange quark content can be interpreted as a consequence of a change in the system volume, called canonical suppression. Indeed, it was recently shown in [18] that the existing ALICE data can be described within this framework, introducing an additional parameter to quantify the rapidity window over which strangeness is effectively correlated. It was also suggested that strangeness follows a universal scaling behaviour in all colliding systems, when the transverse energy density [42] or the multiplicity per transverse area [43] are used as a scaling variable. While there are some caveats in the observation reported in these papers (due to the uncertainties on the transverse size of the system), this is a very intriguing observation that could help to clarify the origin of strangeness enhancement.
The p T is seen in Fig. 12 to be harder at 13 TeV than at 7 TeV for event classes with a similar dN ch /dη. All the tested Monte Carlo models describe in a qualitative way the observed smooth rise of the p T with dN ch /dη; from a quantitative point of view, EPOS-LHC provides a slightly better description of the p T multiplicity evolution, especially for what concerns the strange baryons.
In Fig. 14 the results on the strange hadron yields as a function of the charged particle multiplicity at mid-rapidity are compared with some commonly-used general purpose QCD inspired models, focusing on the multiplicity classes defined by the V0M estimator. The yields of all measured strange hadrons are compared with the model predictions; PYTHIA broadly reproduces the K 0 S results, however it shows a less pronounced increase of the strange baryon yields versus the charged particle multiplicity than what is observed in the data. The description worsens with increasing strangeness content. Recent attempts to improve the PYTHIA color reconnection scheme to account for the observed strangeness enhancement are discussed in [3], however they cannot explain the reported experimental observations.
The DIPSY Monte Carlo [44,45] is based on a BFKL inspired initial state dipole evolution model [46] interfaced to the Ariadne model [47] for final state dipole evolution and to the PYTHIA hadronisation scheme, where the latter has been modified to take into account the high density of strings that occurs in events with several MPI. More specifically, depending on their transverse density, the strings are allowed to form "colour-ropes" leading to increased string tension, which leads to an increase of strangeness production stronger than in default PYTHIA [48]. However, despite a qualitative improvement in the description of the data, the discrepancy for the distributions remains large. Moreover, as discussed in [21], DIPSY also predicts an increase of protons normalised to pions as a function of multiplicity, which is not observed in our √ s = 7 TeV data. The Monte Carlo generators based on EPOS [49] rely on the Parton Based Gribov Regge Theory (PBGRT) [50]. A common feature of all the EPOS versions is a collective evolution of matter in the secondary scattering stage in all reactions, from pp to AA, with a core-corona separation mechanism [51] which defines the initial conditions of the secondary interactions. In EPOS, the initial parton scatterings create "flux tubes" that either escape the medium and hadronise as jets or contribute to the core, described in terms of hydrodynamics. The core is then hadronised in terms of a grand-canonical statistical model. The relative amount of multi-hadron production arising from the core grows with the number of MPIs. The EPOS-LHC model adopted in this analysis (distributed with the CRMC package 1.5.4) shows a pronounced increase of the strangeness yields as a function of the charged particle multiplicity at mid-rapidity. Despite the differences in the underlying physics, the comparison to data leads to a similar conclusion for EPOS-LHC and DIPSY.
The comparison of the Monte Carlo model predictions to the data indicates that the origin of the strangeness enhancement in hadronic collisions and its relation to the QCD deconfinement phase transition are still open problems for models. While the current versions of these models do not reproduce the data quantitatively, it is possible that the agreement can be improved with further tuning or with new implementations. A quantitative description of strangeness production and enhancement in a microscopic Monte Carlo model would represent a major step to understand flavour production in hadronic collisions at high energy.
Summary
We studied the production of primary strange and multi-strange hadrons at mid-rapidity in pp collisions at √ s = 13 TeV, focusing on the multiplicity dependence. The main feature of this analysis is the usage of multiplicity estimators defined in different pseudorapidity regions, providing a different sensitivity to the fragmentation and MPI components of particle production and allowing for a detailed study of the selection biases due to fluctuations.
Hardening of the p T spectra with the increase of the multiplicity is observed, as already reported for pp [40] and p-Pb collisions [22] at lower energies.
The p T -integrated yields of strange hadrons increase as a function of multiplicity faster than the ones of unidentified charged particles. This behaviour is even more pronounced for particles with higher strangeness content, confirming the earlier observations in pp collisions at √ s = 7 TeV [21]. This leads to a multiplicity-dependent increase of the ratios of the multi-strange baryons Ξ and Ω over K 0 S , while the Λ over K 0 S ratio turns out to be constant within uncertainties. In the context of a canonical thermal model, an increase of the relative strangeness abundance depending on the strange quark content can be understood as a consequence of an increase in the system volume leading to a progressive removal of canonical suppression.
Comparing the 13 TeV results to the 7 TeV ones, the data exhibit an interesting scaling property: for multiplicity estimators selected in the forward region, the strange hadron yields turn out to be independent of the centre-of-mass energy, the √ s increase resulting just in harder p T spectra. The use of high-multiplicity triggered data collected during the full Run 2 period will allow us to test these scaling ansatzes, extending the measurement to higher multiplicities, comparable to peripheral Pb-Pb collisions at the LHC (dN ch /dη ∼ 40-50).
The color reconnection effects implemented in PYTHIA 8 and DIPSY as well as the collective hydrodynamic expansion implemented in EPOS-LHC could not account quantitatively for all the reported results. Some of the qualitative features of the data are described by these models, including the indication of a non-linear increase of the yields of strange hadrons at high p T , similar to what was previously reported in [52] for J/ψ and D meson production. Although none of the tested models can reproduce the data from a quantitative point of view, EPOS-LHC and DIPSY do provide a better qualitative description of the strangeness enhancement. The ALICE Collaboration gratefully acknowledges the outstanding performance of the LHC complex, the resources and support provided by all Grid centres and the Worldwide LHC Computing Grid (WLCG) collaboration, and the funding agencies that supported the building and running of the ALICE detector.
Data Availability Statement
This manuscript has associated data in a data repository. [Authors' comment: The numerical values of the data points will be uploaded to HEPData.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3. | 10,362.8 | 2020-01-01T00:00:00.000 | [
"Physics"
] |
Observation of 1.5 μm band entanglement using single photon detectors based on sinusoidally gated InGaAs/InP avalanche photodiodes
We describe the application of single photon detectors based on InGaAs/InP avalanche photodiodes incorporating the sinusoidal gating technique into a 1.5 μm band entanglement measurement. We constructed two detectors based on this technique with a 500 MHz gate frequency. Using these detectors, we successfully demonstrated the high-speed and high signal-to-noise ratio observation of 1.5 μm-band time-bin entangled photon pairs.
Introduction
Great progress has been made on quantum key distribution (QKD) systems over optical fibres in the last 10 years [1]. The key distribution distances of QKD systems using attenuated laser sources have already exceeded 100 km [2,3]. However, QKD systems with attenuated laser sources often face problems with unconditional security. For example, QKD systems based on the Bennett Brassard 1984 protocol [4] implemented with attenuated laser sources are known to be vulnerable to an attack called photon number splitting (PNS) [5]. Although the use of 'decoy states' is effective in increasing the distance [6] and a successful QKD experiment over a 107 km fibre has been undertaken with this scheme [7], the implementation of decoy states complicates the signal processing significantly and thus results in a huge system cost overhead.
In contrast, entanglement-based QKD is promising with a view to achieving long-distance key distribution with relatively simple signal processing [8,9]. Since the security of QKD can be analysed by quantifying the amount of entanglement shared between two parties [10], we expect to realize QKD with a high level of security simply by distributing entangled photons to two parties. In fact, Waks et al showed that we can realize entanglement-based QKD that is secure against individual attacks using practical components such as entanglement sources based on spontaneous parametric processes and threshold detectors, which are single photon detectors with no photon number discrimination ability [11]. Also, recent work by Tsurumaru and Tamaki [12], Beaudry et al [13] and Koashi et al [14] independently showed that we can achieve unconditionally secure QKD using entanglement sources that probabilistically emit multi-pairs and threshold detectors. Another merit of entanglement-based QKD is that we can eliminate the need for a random number generator, which has been an unresolved issue as regards a high-speed, attenuated laser-based QKD system with a GHz clock rate [3].
Although entanglement-based QKD has been intensively developed for free-space systems [15], there have only been a few experiments in the telecom wavelength bands [16]-[19]. As far as we know, only two experiments ([18] and [19], both by our group) have been conducted using two 1.5 µm band photons. This is mainly because of the slow and inefficient single photon detectors available for use in the 1.5 µm band.
In the experiment reported in [18], we used single photon detectors based on InGaAs/InP avalanche photodiodes (APDs) operated in a conventional gated mode with a gate frequency of 5 MHz [20]. This slow gate frequency resulted in a very slow sifted key rate of 0.3 bit s −1 even without a transmission fibre. In our second experiment [19], we employed superconducting single photon detectors (SSPD) and time-bin entangled photons generated at a clock frequency of 333 MHz. Thanks to the very low dark count probability of the SSPDs, we successfully distributed keys over 100 km (50 km × 2) fibres. However, mainly because of the relatively low quantum efficiency of the SSPDs (∼1%), the obtained secure key rate was as low as 0.14 bit s −1 . This small key rate has unfortunately made our entanglement-based QKD systems impractical.
Recently, groups from Nihon University [21] and Toshiba Cambridge [22] reported 1.5 µm band single photon detection using InGaAs/InP APDs with improved gate frequencies. Namekata et al. employed a sinusoidal signal as a gate, and inserted filters that reject the sinusoidal signal at the output, by which they can efficiently suppress the APD capacitive response to the gate signal at the output [21]. In the scheme demonstrated by Yuan et al., a self-differencing circuit was inserted so that the APD capacitive response to the rectangular gate could be cancelled out [22]. With these techniques, we can detect smaller avalanche signals. As a result, we can reduce the gate voltage supplied to the APD, which results in a smaller afterpulse probability. Alternatively, if we accept an afterpulse probability that is similar to that of a conventional gated-mode InGaAs/InP APD, we can increase the gate frequency to a range typically between 500 MHz and 1.25 GHz. These techniques have already been used in several point-to-point QKD experiments [23,24].
In this paper, we describe the observation of 1.5 µm band entangled photons using highspeed InGaAs/InP APDs. In section 2, we show that these high-speed InGaAs/InP APD-based single photon detectors are attractive candidates for realizing practical entanglement-based QKD systems over optical fibres, based on numerical calculation. In section 3, we describe the two single photon detectors that we constructed using 500 MHz sinusoidally gated InGaAs/InP APDs. Section 4 reports the observation of time-bin entangled photons using high-speed single photon detectors and a significant increase in the speed of 1.5 µm band entanglement measurement. Section 5 summarizes the paper.
Comparison of telecom-band single photon detectors for entanglement-based QKD
In this section, we compare the performance of entanglement-based QKD systems with various types of telecom-band single photon detectors using a numerical simulation. The simulation model is shown in figure 1. We assume the protocol proposed by Bennett et al. in 1992, which we refer to as BBM92 [9], implemented with a sequential time-bin entanglement and an active basis selection scheme using differential phase modulation [25]. A 1.5 µm band entangled photon pair source generates a pulsed entangled photon train through a spontaneous parametric process such as spontaneous parametric downconversion and spontaneous four-wave mixing (SFWM). We assume that the number distribution of the photon pairs per pulse is Poissonian with an average value of µ. The signal and idler photons are input into transmission fibres with a collection efficiency of α. Each fibre has a length L with a loss coefficient of 0.2 dB km −1 . Then, each photon is passed through a phase modulator for the basis selection and input into a 1-bit delayed Mach-Zehnder interferometer. The total transmittance of the phase modulator and the interferometer is denoted by β. The two output ports of the interferometer are connected to the single photon detectors. Alice and Bob apply phase modulations to the incoming pulses with a value that is randomly chosen from {0, π/2}. It is only when the phase difference between pulses given by their phase modulation coincides that the measurement results of Alice and Bob exhibit correlation [25].
In [26], we reported a detailed theory for calculating the two-photon interference visibility obtained by entangled pairs generated by spontaneous parametric sources. We can modify the theory for use in calculating the error rate of the QKD system shown in figure 1. By taking account of the fact that four detectors and active demodulation are used, the probability of obtaining a coincidence that can contribute to the key is well approximated with the following equation: Here, α t denotes the total collection efficiency (= αβ10 −0.02L ) and α t η is assumed to be much smaller than 1. In an intuitive picture, the first term in equation (1) corresponds to the probability of a coincidence caused by a correlated pair. On the other hand, the second term denotes the probability of an accidental coincidence caused by the uncorrelated pairs generated in a multi-pair emission event, which we denote by R acc .
The sifted key rate is given by where f c denotes the clock frequency of the whole system or the gate frequency of the detectors. Since half of the accidental coincidences contribute to errors, the error rate e is obtained as The rate of the final key that is secure against general individual attacks is given by the following equation [11]: Here, τ (e) denotes the privacy amplification factor of the BBM92 protocol given in [11], h(e) is the binary entropy function, and f (e) characterizes the performance of the error correction algorithm. We calculated the secure key rate as a function of the fibre length between the entanglement source and Alice/Bob for four types of single photon detector: conventional gated-mode InGaAs/InP APDs [20], up-conversion detectors [27,28], SSPDs [3,29] and high-speed InGaAs/InP APDs [21,22]. The quantum efficiencies, dark count probabilities and the maximum clock frequency or gate frequency for each detector are listed in table 1. We used the f (e) value shown in [11], and assumed that there is no baseline system error and that the errors are caused only by the detector dark counts. The average photon pair number per pulse µ was optimized to maximize the secure key rate for each fibre length. The result is shown in figure 2. It is clear that the SSPDs and the up-conversion detectors are more effective in increasing the secure key distribution distance than detectors based on InGaAs/InP APDs. The maximum distribution distances are 460 km (230 km × 2) for SSPDs and 244 km (122 km × 2) for up-conversion detectors. However, at approximately those maximum distribution distances, the secure key rates are very low. For example, the secure key rate for a fibre length of 400 km (200 km × 2) with SSPDs is ∼1.2 × 10 −6 bit s −1 , which means that we need ∼230 h to establish 1 key bit, and this is obviously useless in real communication. Therefore, the maximum distance over which meaningful QKD is possible is in fact limited by the secure key rate with those high signal-to-noise ratio detectors. If we assume that we require a minimum secure key rate of 1 bit s −1 , then the maximum key distributions with the detectors listed in table 1 are 38 km (conventional gated-mode InGaAs/InP detectors), 20 km (up-conversion detectors), 110 km (SSPD), and 126 km (high-speed InGaAs/InP detectors). Thus, if we consider both the error rate and the secure key rate, high-speed InGaAs/InP detectors can perform better than other detectors depending on the bit rate requirement. In addition, InGaAs/InP APDs do not require cryogenic cooling as with the SSPD, or additional optics as with the up-conversion detectors. Therefore, we believe that the high-speed APDs constitute a promising candidate for realizing practical entanglement-based QKD systems.
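This kind of rate-versus-distance comparison can be reproduced at least qualitatively with a short numerical sketch. Since equations (1)-(7) are not written out here, the coincidence, accidental-coincidence and privacy-amplification terms in the following Python snippet are simplified stand-ins rather than the exact expressions of the paper, and the collection efficiency α, interferometer transmittance β and error-correction factor f(e) are placeholder values.

```python
# Rough, illustrative sketch of the secure-key-rate comparison described above.
# The coincidence, accidental and privacy-amplification terms are simplified
# assumptions (not the paper's equations (1)-(7)); alpha, beta, f_ec are placeholders.
import numpy as np

def secure_key_rate(L_km, eta, pd, fg, mu, alpha=0.5, beta=0.5, f_ec=1.22):
    a_t = alpha * beta * 10 ** (-0.02 * L_km)        # total transmittance, 0.2 dB/km fibre
    p_true = 0.125 * mu * (a_t * eta) ** 2           # assumed correlated-pair coincidence term
    p_acc = 0.125 * (mu * a_t * eta + 2 * pd) ** 2   # assumed accidental-coincidence term
    p_coin = p_true + p_acc
    e = 0.5 * p_acc / p_coin                          # half of the accidentals become errors
    if e >= 0.5:
        return 0.0
    tau = -np.log2(0.5 + 2 * e - 2 * e ** 2)          # BBM92 privacy amplification (Waks et al. form)
    h = -e * np.log2(e) - (1 - e) * np.log2(1 - e)    # binary entropy
    r_sift = fg * p_coin
    return max(0.0, 0.5 * r_sift * (tau - f_ec * h))  # secure bits per second

# Example: scan fibre length for an SSPD-like detector, optimizing mu at each length.
for L in (0, 50, 100, 150):
    rates = [secure_key_rate(L, eta=0.01, pd=6e-10, fg=1e10, mu=m)
             for m in np.logspace(-4, -1, 60)]
    print(L, max(rates))
```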
Here, we briefly note that intensive research is being undertaken to improve the SSPDs' quantum efficiencies. For example, it is shown that the quantum efficiency of an SSPD chip (which presumably does not include the loss involved in the photon collection or the coupling between fibre and the device) can be as high as 57% with an integrated optical cavity and an anti-reflection coating [30]. Thus, those devices may be useful for realizing high-speed entanglement-based QKD if they are successfully incorporated into a practical photon detection system in the near future.
Principle of the sinusoidal gating technique
Here, we briefly review the principle of single photon detection using a sinusoidally gated InGaAs/InP APD [21]. When the APD is biased over the breakdown voltage V B , an incoming photon excites an electron-hole pair in an absorption region and it is amplified into a macroscopic current, which is called an avalanche. An unexpected avalanche, namely a dark count, can be triggered by either a thermal carrier or a trapped carrier from a previous avalanche. Geometrical and chemical impurities in the APD's active layer sometimes trap carriers and release them at a later time, and the carrier causes another avalanche without an incoming photon: this phenomenon is known as an afterpulse.
In single photon counting applications, we usually cool the APD with a Peltier cooler so that we can reduce the dark counts caused by the thermal carrier. However, if we cool the APD too much, the afterpulse probability increases, because the time for which a carrier remains trapped in an impurity tends to become longer as the temperature is reduced. For the above reasons, the APD is typically operated in a temperature range of −70 to −30 °C.
Even when the APD is cooled, such a detector is too noisy to be run in the Geiger mode, so we generally use it in the gated mode: the APD is periodically biased over V B , but is disabled most of the time by being biased below V B . Usually, a dc signal V dc is applied, in addition to a time-dependent signal.
We employed a rectangular pattern as a gate signal in the conventional gated mode. With this method, it is difficult to distinguish the avalanche signals from the APD capacitive response to the gate signal [22]. As a result, we needed to use a relatively large gate signal (or large dc bias) so that we could obtain a larger avalanche signal that could be easily distinguished from the APD capacitive response. The larger avalanche (i.e. larger number of carriers) increases the probability of carrier trapping by impurities, and thus results in a larger afterpulse probability.
Sinusoidally gated APDs overcome this limitation [21]. In addition to the usual dc bias, we employ a sinusoidal signal as a gate. With this scheme, while the avalanche signals are close to a delta function in the time domain, the APD capacitive response (which is also a sinusoidal signal) is a delta function in the frequency domain. As a result, we can easily distinguish the avalanche from the APD capacitive response in the frequency domain, and thus we can reduce the afterpulse probability by reducing the gate voltage. Alternatively, if we accept a similar level of afterpulse probability to that of conventional gated-mode detectors, we can significantly increase the gate frequency.
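The frequency-domain picture described above can be illustrated numerically. In the following sketch, a 500 MHz sinusoid standing in for the capacitive response and a short Gaussian pulse standing in for an avalanche are passed through a notch (band-rejection) filter; the waveform shapes and amplitudes are invented for illustration and do not model a real APD.

```python
# Minimal numerical illustration: the sinusoidal capacitive response is concentrated
# at the gate frequency, so a band-rejection (notch) filter removes it while the
# short avalanche peak survives. All shapes and amplitudes are illustrative only.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 20e9                                                   # sampling rate of the simulated waveform (Hz)
t = np.arange(0, 100e-9, 1 / fs)
f_gate = 500e6                                              # sinusoidal gate frequency

capacitive = 1.0 * np.sin(2 * np.pi * f_gate * t)           # APD capacitive response (sinusoid)
avalanche = 0.1 * np.exp(-((t - 50e-9) / 0.3e-9) ** 2)      # small, short avalanche peak
waveform = capacitive + avalanche

b, a = iirnotch(w0=f_gate, Q=5, fs=fs)                      # band-rejection filter at 500 MHz
filtered = filtfilt(b, a, waveform)

print("peak before filtering:", waveform.max())             # dominated by the sinusoid
print("peak after filtering: ", filtered.max())             # dominated by the avalanche
```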
Detector set-up
Our detector set-up is shown in figure 3. The APD is surrounded by a gated passive quenching circuit (GPQC). We used a synthesizer to produce the gate signal at a frequency of f g = 500 MHz, which passes through a phase shifter and an amplifier, and is then input into the GPQC. The sinusoidal signal is converted into another sinusoidal signal at the same frequency as a result of the capacitive response of the APD. When an avalanche occurs, a peak is added to the sinusoidal signal. At the output, band rejection filters are inserted to eliminate the 500 MHz component. Then, the output signal is amplified and input into a discriminator. In the following, the deadtime and the discrimination level set by the discriminator are expressed as τ and V disc , respectively. Although numerous parameters can be adjusted, we adopted the following process: first, we set V g at 10 V (peak-to-peak), then we set V disc at the smallest possible value without it being overwhelmed by noise and then we increased the dark count probability P d by changing V dc . Although this approach is simple, we obtained similar results to those obtained when P d is changed by varying other parameters (for instance, V g ). Furthermore, in our set-up, V dc was by far the easiest parameter to increase without inducing instability.
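The discrimination step can be summarised by a small sketch: samples of the filtered output are compared with the threshold V_disc and, after each detection, the channel is blanked for the deadtime τ. The toy trace and threshold below are illustrative values, not a model of the actual electronics.

```python
# Sketch of the threshold-plus-deadtime discrimination described above.
# The toy trace, threshold and pulse times are illustrative assumptions.
import numpy as np

def discriminate(samples, fs, v_disc, tau):
    """Return detection times (s) from rising-edge threshold crossings with a deadtime."""
    dead_samples = int(round(tau * fs))
    detections, next_allowed = [], 0
    above = samples > v_disc
    for i in range(1, len(samples)):
        if above[i] and not above[i - 1] and i >= next_allowed:   # rising-edge crossing
            detections.append(i / fs)
            next_allowed = i + dead_samples                        # enforce the deadtime
    return np.array(detections)

# Toy usage: two pulses 100 ns apart are both counted with tau = 40 ns,
# while the second one is suppressed with tau = 400 ns.
fs = 5e9
t = np.arange(0, 1e-6, 1 / fs)
trace = 0.02 * np.random.default_rng(0).standard_normal(t.size)
for t0 in (0.2e-6, 0.3e-6):
    trace += 0.5 * np.exp(-((t - t0) / 1e-9) ** 2)
print(len(discriminate(trace, fs, v_disc=0.2, tau=40e-9)))   # -> 2
print(len(discriminate(trace, fs, v_disc=0.2, tau=400e-9)))  # -> 1
```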
Detector characterization
A continuous light from an external-cavity diode laser with a wavelength of 1551 nm is modulated into pulses with an intensity modulator driven by a pulse generator. The pulse width and the repetition frequency are 100 ps and 100 MHz, respectively. Using calibrated optical attenuators, we set the average input photon number per second at N input = 10 6 . The synthesizers and the pulse generator are synchronized. Therefore, we can ensure that photons arrive when the detector is gated by tuning the phase shifter. We can insert a deadtime τ ranging from 40 ns to 1.6 µs using the discriminator. If we denote the count rate per second and the overall afterpulse probability as C r and P a , respectively, we can evaluate the quantum efficiency with the following formula: The quantity P d f g can simply be read on the counter, in the absence of light. Therefore, if we measure P a , we can determine η using equation (8). We measured P a by observing the autocorrelation of the dark counts [31]. The dark count signal from the discriminator is divided into two paths and input into a time interval analyser (TIA). One channel is directly input as a start signal and the other is delayed by ∼100 ns and used as a stop signal for the TIA. We define the correlation function, π(n), which denotes the probability that an afterpulse event occurs at the nth gate from the start pulse. Then, the waveform we obtain with the TIA is given by N s {π(n) + d}, where N s and d are the number of start pulses and the dark count probability per gate, respectively. Then, the overall afterpulse probability P a is obtained as where n d (= τ f g ) is the number of gates in the deadtime set by the discriminator and the TIA. A typical histogram obtained with the TIA is shown in figure 4(a). Here, the dark count probability was set at d = 1.7 × 10 −5 by adjusting V dc . Using equation (9), the overall afterpulse probability can be calculated as a function of the inserted deadtime, as shown in figure 4(b). This suggests that even without any deadtime (except for the 40 ns deadtime inserted by the TIA), the afterpulse probability has already been suppressed to ∼5%. In this way, we measured P a for various d values. Then, using the measured P a and C r for each d, we finally obtained η as a function of d using equation (8).
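The afterpulse bookkeeping described above can be written compactly. The TIA histogram is modelled as N_s{π(n) + d}, so π(n) is recovered as h(n)/N_s − d and summed over the gates after the deadtime; since equation (9) is not reproduced here, the summation below should be read as an assumed form of that expression, and the histogram used is synthetic.

```python
# Sketch of the overall afterpulse probability estimated from a dark-count
# autocorrelation histogram. The summation is an assumed reading of equation (9);
# the histogram values are synthetic.
import numpy as np

def afterpulse_probability(hist, n_starts, d, tau, fg):
    """Estimate P_a from a TIA histogram with one bin per gate after the start pulse."""
    pi = hist / n_starts - d          # correlation function pi(n) = h(n)/N_s - d
    n_d = int(round(tau * fg))        # number of gates blanked by the deadtime
    return pi[n_d:].sum()             # sum pi(n) over the gates following the deadtime

# Synthetic example: exponentially decaying afterpulsing on a flat dark-count floor.
fg, d, n_starts = 500e6, 1.7e-5, 1_000_000
n = np.arange(0, 1000)                                # gate index after the start pulse
pi_true = 2e-4 * np.exp(-n / 50.0)                    # assumed afterpulse decay
hist = n_starts * (pi_true + d)
print(afterpulse_probability(hist, n_starts, d, tau=40e-9, fg=fg))
```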
We constructed two channels of single photon detectors, which we denote as channels 1 and 2. The afterpulse probability of channel 1 was much larger than that of channel 2 (whose afterpulse characteristics are shown in figure 4). Therefore, we inserted a 400 ns deadtime into channel 1 so that the afterpulse probability was reduced to a similar level to that of channel 2. The quantum efficiency and the afterpulse probability as a function of the dark count probability for the two channels are shown in figure 5. Thus, we successfully constructed two channels of 500 MHz gated-mode single photon detectors using sinusoidally gated InGaAs/InP APDs.
In the entanglement measurement experiments that follow, we operated the two detectors under the conditions shown in table 2. The set-up for generating and observing 1.5 µm time-bin entangled photon pairs is shown in figure 6 [32,33]. A continuous laser light from an external cavity diode laser with a wavelength of 1551.1 nm was modulated into a pulse train with a pulse width of 100 ps and a repetition frequency of 500 MHz by using an intensity modulator. The pulse train was amplified using an erbium-doped fibre amplifier (EDFA), and launched into a dispersion-shifted fibre (DSF) after being passed through filters to eliminate the amplified spontaneous emission noise from the EDFA. In the DSF, a sequential time-bin entangled state was generated through SFWM. The quantum state of a generated photon pair is approximately given by Here, |k z denotes a state in which there is a photon in a temporal position k and a mode z (= s, i), and N is the number of pulses in which the phase coherence of the pump is preserved. The DSF was cooled with liquid nitrogen to suppress the noise photons caused by spontaneous Raman scattering [33]. The photons output from the DSF were passed through a polarizer to eliminate noise photons with a polarization state that was orthogonal to the photon pair polarization state. The photons were input into a fibre Bragg grating (FBG) to suppress the pump photons and then launched into an arrayed waveguide grating (AWG) to separate signal and idler photons. The wavelengths of the signal and the idler were 1547.9 and 1554.3 nm, respectively, both with a 0.2 nm (25 GHz) bandwidth. Each photon was passed through an optical bandpass filter to further suppress the noise photons, and then launched into a 1-bit delayed Mach-Zehnder interferometer fabricated using planar lightwave circuit (PLC) technology [32]. The photons output from the interferometers were detected by single photon detectors based on sinusoidally gated InGaAs/InP APDs. The signal and idler channels were connected to channels 1 and 2 of the detectors, respectively. The detection signals from the single photon detectors were input into a time interval analyzer (TIA) for coincidence measurements.
The state |k z is transformed by the PLC interferometer as follows: With the above transformation, the state shown by equation (10) is transformed as follows: Here, only those terms that contribute to the coincidence are shown. This equation implies that we can observe two-photon interference in any time slot except for the very first and last slots in the whole photon pulse train, and the probability of obtaining the coincidence is given by 1/(4N) at the peak of the two-photon interference fringe. When the number distribution of the photon pairs is given by a Poissonian with an average photon-pair number per pulse of µ, the coincidence rate at the peak of the two-photon interference fringe R p is approximately given by [26], where α z , η z and d z denote the total collection efficiency, the detector quantum efficiency, and the dark count probability per gate for the mode z (= s, i), respectively. This equation is valid when α z η z is much smaller than 1. In our experiment, α s and α i were both ∼ −8.5 dB including the insertion loss of the PLC interferometer. Therefore, equation (13) gives a good approximation of the peak coincidence rate.
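For orientation, the expressions referred to as equations (10)-(12) typically take the following standard forms for a sequential time-bin entangled state analysed with 1-bit delay interferometers; the normalisation and phase conventions below are assumptions rather than the paper's exact equations, but they reproduce the quoted coincidence probability of 1/(4N) at the fringe peak.

```latex
% Assumed standard forms (not the paper's exact equations):
% sequential time-bin entangled state over N phase-coherent pump pulses,
|\psi\rangle \;\approx\; \frac{1}{\sqrt{N}} \sum_{k=1}^{N} |k\rangle_s\, |k\rangle_i ,
% action of a lossy 1-bit delay interferometer with phase \theta_z on each photon,
|k\rangle_z \;\to\; \tfrac{1}{2}\left( |k\rangle_z + e^{i\theta_z}\, |k+1\rangle_z \right), \qquad z = s, i ,
% so the coincidence terms in slot k+1 acquire the amplitude
\frac{1}{4\sqrt{N}} \left( 1 + e^{i(\theta_s + \theta_i)} \right) |k+1\rangle_s |k+1\rangle_i ,
% giving a coincidence probability of 1/(4N) per slot at the fringe peak.
```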
Coincidence-to-accidental ratio (CAR) measurement
The CAR is often used to evaluate the strength of the temporal correlation between correlated photons. Prior to the entanglement observation, we measured the CAR of the photon pairs generated by the SFWM in the DSF using our detectors. CAR > 1 implies the existence of a temporal correlation between the photons, and a larger CAR means a stronger correlation.
According to [26], the theoretical CAR C is obtained with the following equation.
This equation implies that the CAR generally improves if we reduce µ, but is eventually limited by the detector dark counts if µ is set too small. We removed the interferometers from the set-up shown in figure 6 and measured the coincidence counts using the TIA. Typical histograms obtained with the TIA measurements are shown in figure 7(a) (average idler photon number per pulse: 0.019) and figure 7(b) (0.002). The main peak, which is caused by the coincidences between photons generated with the same pump pulse, corresponds to the 'coincidences'. On the other hand, the small side peaks (which are more apparent in figure 7(a) than in (b)) are caused by the coincidences between photons generated in different temporal positions, and these peaks provide an estimation of the number of accidental coincidences contained in the main peak. We obtain the CAR by dividing the counts in the main peak by the average counts in the side peaks.
The measured CAR as a function of the average idler photon number per pulse µ i is plotted in figure 7. The CAR reached its maximum value of 83 at µ i = 0.002. In [33], we reported a CAR obtained using the same set-up (and the same DSF) with conventional gated-mode detectors operated at a frequency of 4 MHz, in which the maximum CAR was 28. This clearly shows that our sinusoidally gated detectors have both a much higher gate frequency and a better signal-to-noise ratio than the conventional gated-mode detectors used in [33].
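The CAR evaluation amounts to dividing the main-peak counts by the mean of the side-peak counts. The following sketch performs this bookkeeping on a synthetic histogram; the numbers are invented so that the resulting CAR is of the same order as the measured value.

```python
# Sketch of the CAR evaluation described above, applied to a synthetic histogram
# binned per pump pulse. The count values are illustrative only.
import numpy as np

def car_from_histogram(counts, main_bin):
    """counts[k]: coincidences at a delay of k pump pulses; main_bin: true-pair peak index."""
    accidental = np.delete(counts, main_bin).mean()   # average of the side peaks
    return counts[main_bin] / accidental

# Synthetic example: a strong main peak on top of a flat accidental background.
rng = np.random.default_rng(1)
counts = rng.poisson(12.0, size=21).astype(float)     # side peaks, ~12 accidentals each
counts[10] += 1000.0                                  # true coincidences in the central peak
print(car_from_histogram(counts, main_bin=10))        # CAR of roughly 80-90
```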
Observation of two-photon interference
We observed two-photon interference using the set-up shown in figure 6. We fixed the temperature of the interferometer for the signal, and swept that for the idler, while measuring the coincidence counts. µ was set at ∼0.01. The result is shown in figure 8(a). We obtained a clear two-photon interference fringe with a visibility of 90.6 ± 1.5%. The peak coincidence rate estimated from the fitted curve was 184.8 counts s −1 . The peak coincidence rate calculated with equation (13) was 362 counts s −1 . This relatively large discrepancy between theory and experiment is caused by the noise photons: a previous report [33] shows that even when the DSF was cooled with liquid nitrogen, a significant number of noise photons were generated through spontaneous Raman scattering, which reduced the actual number of correlated photons and thus resulted in the smaller coincidence rate.
For comparison, we undertook the same measurement using two conventional gated-mode APDs with a 4 MHz gate frequency. The quantum efficiencies and dark count probabilities of the detectors were ∼8% and ∼5 × 10 −5 , respectively. All the other parameters, including the average photon-pair number per pulse, were set at the same values as those used for the experiment with the sinusoidally gated APDs. The obtained two-photon interference fringe is shown in figure 8(b). The visibility of the fitted curve was 87.8 ± 9.6%. It is clear that a better signal-to-noise ratio and an increased measurement speed with the sinusoidally gated APDs resulted in a larger visibility with less uncertainty. The peak coincidence rate obtained from figure 8(b) was 0.6 counts s −1 , which means that the use of sinusoidally gated APDs led to an increase in the coincidence rate by a factor of ∼300. Equation (13) suggests that the peak coincidence rate is approximately proportional to η s η i f g . Using the parameter values of our experimental set-up, the calculated ratio of the peak coincidence rate obtained with the sinusoidally gated APDs to that for the conventional gated-mode APDs is 280, which agrees well with the experimental result. Thus, we successfully achieved a significant increase in the speed of the coincidence fringe measurement by using the sinusoidally gated APDs.
Conclusion
We reported the observation of 1.5 µm band entangled photons using single photon detectors based on InGaAs/InP APDs with the sinusoidal gating technique. First, we showed that high-speed detectors based on InGaAs/InP APDs are potentially useful for practical QKD systems based on entangled photon pairs. Then, we constructed two detectors using the sinusoidal gating technique that operated at a gate frequency of 500 MHz. With those detectors, we achieved a significant reduction in the measurement time compared with that when using conventional gated-mode APDs. We consider these high-speed single photon detectors based on InGaAs/InP APDs to be useful for realizing practical quantum communication over optical fibre networks. | 6,235.6 | 2009-04-01T00:00:00.000 | [
"Physics"
] |
New Atomic Decomposition for Besov Type Space Ḃ 0 1,1 Associated with Schrödinger Type Operators
Let (X , d, μ) be a space of homogeneous type. Let L be a nonnegative self-adjoint operator on L 2 (X ) satisfying certain conditions on the heat kernel estimates which are motivated from the heat kernel of the Schrödinger operator on R n . The main aim of this paper is to prove a new atomic decomposition for the Besov space Ḃ 0,L 1,1 (X ) associated with the operator L . As a consequence, we prove the boundedness of the Riesz transform associated with L on the Besov space Ḃ 0,L 1,1 (X ).
We further assume that (X , d, μ) satisfies the noncollapsing condition, i.e., there exists c 0 > 0 such that μ(B(x, 1)) ≥ c 0 for all x ∈ X . From now on, for any measurable subset E ⊂ X , we denote V (E) := μ(E). For all x ∈ X and r > 0, we also denote V (x, r ) = μ(B(x, r )).
Note that the classical Hardy space H 1 (X ) is a suitable substitution for the space L 1 (X ) when we work with Calderón-Zygmund operators but the classical Hardy space might not be suitable for the study of certain operators that lie beyond the Calderon Zygmund class. This observation highlights the need for the development of new function spaces that adapt well to these operators. In recent times, there has been a remarkable progress in the field of function spaces associated with operators, reflecting the growing interest in understanding the behaviour of these operators and their associated function spaces. See for example [1,5,15,21,23,29] and the references therein.
Motivated by this ongoing research, we aim to study a new atomic decomposition of Besov spaces associated to Schrödinger type operators. Throughout this paper, we assume that H is a non-negative self-adjoint operator on L 2 (X ) which generates the analytic semigroup {e −t H } t>0 . Denote by p t (x, y) and q t (x, y) the kernels of e −t H and t He −t H , respectively. We assume that the kernels p t (x, y) satisfy the following conditions: (H1) There exist positive constants C and c such that for all x, y ∈ X and t > 0; (H2) There exist positive constants δ 1 , c and C such that whenever d(x, x′) ≤ √ t and t > 0; (H3) ∫ X p t (x, y) dμ(x) = 1 for y ∈ X .
In fact, the assumptions (H1) and (H2) can be assumed only for the kernel p t (x, y) since the estimates in (H1) and (H2) for p t (x, y) imply similar estimates for q t (x, y). However, for the sake of simplicity, we make the assumptions for both p t (x, y) and q t (x, y).
Standard examples of operators which satisfy conditions (H1), (H2) and (H3) include the Laplacians on the Euclidean spaces R n , the Laplace-Beltrami operators on non-compact Riemannian manifolds with doubling property, the Bessel operators on (0, ∞) n , the sub-Laplacians on stratified Lie groups and certain degenerate elliptic operators on doubling spaces and domains.
Our motivation is to study the Schrödinger operator L = H + V which is a nonnegative self-adjoint operator on L 2 (X ). Under suitable conditions, the potential V induces a critical function ρ which appears on the upper bounds and regularity estimates of the heat kernels of L and its time derivative. We refer the reader to Sect. 2.1 for a general definition of critical functions and further details.
In this paper, without the assumption L = H + V , we assume that L is a nonnegative self-adjoint operator on L 2 (X ). Denote by p t (x, y) and q t (x, y) the kernels of e −t L and t Le −t L , respectively. Suppose that ρ is a critical function defined on X . See Sect. 2.1 for the precise definition of critical functions. We assume that the kernels p t (x, y) and q t (x, y) satisfy the following conditions: (L1) For all N > 0, there exist positive constants c and C so that for all x, y ∈ X and t > 0; (L2) There is a positive constant δ 2 so that for all N > 0, there exist positive constants c and C such that the corresponding estimate holds whenever d(x, x′) ≤ √ t and t > 0; (L3) There is a positive constant δ 3 such that for all x, y ∈ X and t > 0.
(b) Note that the condition (L1) implies that for all N > 0, there exist positive constants c and C so that for all x, y ∈ X and t > 0. Since the proof of (4) is standard, we leave it to the interested reader. (c) As mentioned above, an example of a pair of operators (H , L) which satisfies our assumptions is given by an operator H as above together with L = H + V for a suitable potential V . See Sect. 2.1, as well as [9, Section 6] and [34]. We remark that our work on the operator L in this paper only relies on the assumptions (L1), (L2), (L3) and does not use the representation L = H + V .
Our aim is to study the homogeneous Besov space Ḃ 0,L 1,1 (X ) associated with the operator L.
When L = −Δ is the Laplacian on R n , the Besov space Ḃ 0,L 1,1 (R n ) coincides with the classical Besov space Ḃ 0 1,1 (R n ). It is well known that the Besov space Ḃ 0 1,1 (R n ) is contained in the Hardy space H 1 (R n ) and is used in proving the dispersive estimates of the wave equations (see for example [3,8,13]) and the regularity of the Green functions on domains (see for example [20]). See also [17,18,24-26] and the references therein for further discussion on Besov spaces of type Ḃ 0 1,1 and Besov spaces on spaces of homogeneous type. It is worth noticing that in the definition above we define the Besov space as a subset of L 1 (X ). This is more advantageous than the approach using new distributions as in [5,26].
We are interested in atomic decompositions of the Besov space Ḃ 0,L 1,1 (X ). Note that atomic decompositions of Besov spaces associated to non-negative self-adjoint operators satisfying Gaussian upper bounds were obtained in [5] for homogeneous Besov spaces and in [27] for inhomogeneous Besov spaces. Adapting ideas in [5,27], we can define atoms for the Besov space Ḃ 0,L 1,1 (X ) as follows. Note that the atoms defined in [5,27] are supported in balls associated to dyadic cubes. See Lemma 2.2 for the definition of dyadic cubes. In this paper, we do not need the dyadic cubes in Definition 1.3 and we are able to prove the following result.
Then there exist a sequence of (L, M)-atoms {a j } and a sequence of coefficients {λ j } ∈ ℓ 1 so that f = ∑ j λ j a j in L 1 (X ).
where {a j } is a sequence of (L, M)-atoms and {λ j } ∈ ℓ 1 , then f ∈ Ḃ 0,L 1,1 (X ). The proof of Theorem 1.4 will be presented later. In comparison with the atomic decomposition in Theorems 4.2 and 4.3 in [5], the main difference is that in Theorem 1.4 the convergence used in the atomic decomposition is in L 1 (X ) instead of in the space of new distributions associated with the operator L; moreover, Theorem 1.4 uses the atoms associated with balls rather than the dyadic cubes as in Theorems 4.2 and 4.3 in [5].
We now consider new atoms associated with the critical function ρ which will be defined in Sect. 2.1. Note that the idea of the atomic decomposition associated to the critical functions was used in the setting of Hardy spaces. In [16], the atomic decomposition associated to the critical functions was studied for the Hardy spaces associated to Schrödinger operators with a potential satisfying a certain reverse Hölder inequality. Then the results were extended to encompass a broader scope, incorporating Schrödinger operators in various contexts such as stratified Lie groups and doubling manifolds. See for example [9,34]. However, this is the first time the atomic decomposition associated to the critical functions has been established for the Besov spaces. Definition 1.5 Let ε > 0 and ρ be a critical function. A function a is said to be an (ε, ρ(·))-atom if there exists a ball B such that the following conditions hold. It is interesting that the atoms in Definition 1.5 depend on the critical function ρ only. This type of atom can be viewed as an extended version of the atoms used for the inhomogeneous Besov spaces. In fact, in the particular case ρ = constant, the atoms in Definition 1.5 turn out to be the atoms which characterize the inhomogeneous Besov spaces. See for example [26]. Our main result is the following theorem.
Theorem 1.6 If f ∈ Ḃ 0,L 1,1 (X ), then there exist a sequence of (ε, ρ(·))-atoms {a j } for some ε > 0 and a sequence of coefficients {λ j } ∈ ℓ 1 so that f = ∑ j λ j a j . Conversely, if f = ∑ j λ j a j , where {a j } is a sequence of (ε, ρ(·))-atoms with ε > 0 and {λ j } ∈ ℓ 1 , then f ∈ Ḃ 0,L 1,1 (X ). The organization of the paper is as follows. In Sect. 2, we recall the definitions of critical functions and dyadic cubes, and prove some kernel estimates of the spectral multipliers of H . In Sect. 3, we set up the theory of the inhomogeneous Besov space B 0 1,1 (X ), including atomic decomposition results. The proofs of the main results are given in Sect. 4. Finally, Sect. 5 is devoted to the proof of the boundedness of the Riesz transform associated with L on Besov spaces.
Throughout the paper, we always use C and c to denote positive constants that are independent of the main parameters involved but whose values may differ from line to line. We write A ≲ B if there is a universal constant C so that A ≤ C B, and A ≈ B if A ≲ B and B ≲ A. Given a λ > 0 and a ball B := B(x, r ), we write λB for the λ-dilated ball, which is the ball with the same center as B and with radius λr . For each ball B ⊂ X , we set S 0 (B) = B and S j (B) = 2 j B\2 j−1 B for j ∈ N.
Critical Functions
A function ρ : X → (0, ∞) is called a critical function if there exist positive constants C ρ and k 0 so that for all x, y ∈ X .
Note that the concept of critical functions was introduced in the setting of Schrödinger operators on R D in [19] (see also [30]) and then was extended to the spaces of homogeneous type in [34].
A simple example of a critical function is ρ ≡ 1. Moreover, one of the most important classes of critical functions is the one involving weights satisfying the reverse Hölder inequality. Recall that a non-negative locally integrable function w is said to be in the reverse Hölder class R H q (X ) with q > 1 if there exists a constant C > 0 so that for all balls B ⊂ X . Note that if w ∈ R H q (X ) then w is a Muckenhoupt weight. See [32]. Now suppose V ∈ R H q (X ) for some q > max{1, D/2} and, following [30,34], set Then it was proved in [30,34] that ρ is a critical function provided q > max{1, D/2}. The following result, which is taken from Lemma 2.3 and Lemma 2.4 of [34], will be useful in the sequel.
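The displayed definition of ρ in terms of V is omitted above. As an illustration only, following Shen's critical radius and its extension to spaces of homogeneous type, such a critical function typically has the form below; this is an assumed standard form, not necessarily the exact expression of [30,34].

```latex
% Assumed standard form of the critical function associated with V \in RH_q,
% in the spirit of Shen's critical radius and its homogeneous-type analogue.
\rho(x) \;=\; \sup\Big\{ r > 0 \;:\; \frac{r^{2}}{V(x,r)} \int_{B(x,r)} V(y)\, d\mu(y) \;\le\; 1 \Big\},
\qquad x \in X .
```

Here V (x, r ) denotes the volume μ(B(x, r )) as above, while V (y) denotes the potential.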
Lemma 2.1
Let ρ be a critical function on X . Then there exist a sequence of points {x α } α∈I ⊂ X and a family of functions {ψ α } α∈I satisfying the following: (ii) For every λ ≥ 1 there exist constants C and N 1 such that
Dyadic Cubes
We now recall an important covering lemma in [10].
Remark 2.3
Since the constants η and a 0 are not essential in the paper, without loss of generality, we may assume that η = a 0 = 1/2. We then fix a collection of open sets in Lemma 2.2 and denote this collection by D. We call open sets in D the dyadic cubes in X and x Q k τ the center of the cube Q k τ ∈ D. We also denote For the sake of simplicity we might assume that κ 0 = 1.
Kernel Estimates
Denote by E H (λ) a spectral decomposition of H . Then by spectral theory, for any bounded Borel function F : [0, ∞) → C we can define F(H ) = ∫ 0 ∞ F(λ) dE H (λ) as a bounded operator on L 2 (X ). It is well known that the kernel of cos(t √ H ) satisfies the finite speed propagation property, namely it is supported in {(x, y) ∈ X × X : d(x, y) ≤ c 0 t} for some c 0 > 0. See for example [31].
In what follows, without loss of generality we may assume that c 0 = 1. We have the following useful lemma.
(b) For any N > 0 and s = 2(N + 3D + 2) there exists C = C(N ) so that for all x, y, y ∈ X with d(y, y ) < λ, and for all functions Then we have This, along with Lemma 2.5, (H2) and the fact G W 2 On the other hand, Therefore, which implies (13). The estimate (14) can be proved similarly. This completes our proof.
Lemma 2.7
Let ϕ ∈ S(R) be an even function. Then for any N > 0 there exists C N such that and for all t > 0 and x, y, y ∈ X with d(y, y ) < t.
We need only to prove (16). Hence, By (14), for N > 0 we have for all t > 0 and x, y, y ∈ X with d(y, y ) < t.
Remark 2.8
The results in Lemmas 2.5, 2.6 and 2.7 hold true if we replace H by L since we do not use the assumption (H3) in the proofs.
for all x ∈ X and t > 0.
Proof Let ψ j be the function as in the proof of Lemma 2.7 for j = 0, 1, 2, . . .. Then we have Arguing similarly to the proof of Lemma 2.7, we also yield that for any N > n and j = 0, 1, 2, . . ., This, together with (19), implies that Using (20), and letting R → ∞, the above identity deduces that Therefore, due to Lemma 2.5, the upper bound of q t (x, y) and Fubini's theorem, In addition, from the conservation property (H3), we immediately have This completes our proof.
Inhomogeneous Besov Spaces B 0 1,1 (X) and Atomic Decomposition
In this section, we will introduce the Besov space B 0 1,1 (X ). Our approach relies on the function spaces associated to the "Laplace-like" operator. This is motivated from the classical case in which the classical Besov spaces can be viewed as Besov spaces associated with the Laplacian. In our setting, under the three conditions (H1), (H2) and (H3), the operator H satisfies important properties which are similar to the Laplacian on the Euclidean space.
Inhomogeneous Besov Spaces
In the sequel we will show that the Besov space B 0 1,1 (X ) is independent of the operator H . This is a reason why we do not include the operator H in the notation of the Besov space. In order to prove Lemma 3.2 we need the following technical lemmas.
Lemma 3.3 For each
Proof We first recall the following fact in [4] lim s→0 Assume that f ∈ B 0 1,1 (X ). It follows that f ∈ L 1 (X ). For each n ∈ N, define From the Gaussian upper bound condition (H1) and (3), By (21), Similarly, On the other hand, since e −s H is bounded on L 1 (X ), we have In addition, By the Dominated Convergence Theorem, It follows that This, along with the fact that f k ∈ L p (X ) for each n ∈ N and p ∈ [1, ∞), implies that L p (X ) is dense in B 0 1,1 (X ) for each p ∈ [1, ∞). This completes our proof.
We are now ready to prove Lemma 3.2.
Proof of Lemma 3.2
Assume that { f k } is a Cauchy sequence in B 0 1,1 (X ). Hence, this is also a Cauchy sequence in L 1 (X ) since B 0 1,1 (X ) → L 1 (X ). As a consequence, f k → f ∈ L 1 (X ) for some f ∈ L 1 (X ). On the other hand, we have Since { f k } is a Cauchy sequence in B 0 1,1 (X ), for any > 0 there exists N such that for m, k ≥ N , Fixing k, then using Fatou's Lemma we have It follows that This completes our proof.
Atomic Decomposition
In order to establish atomic decomposition for the Besov space, we need another Calderón reproducing formula.
Proof Similarly to the proof of Lemma 3.4, it suffices to prove the proposition for f ∈ L 2 (X ) ∩ B 0 1,1 (X ). Observe that which implies that This, along with spectral theory, yields in L 2 (X ). Set Then, by Lemma 3.9 and Corollary 3.5, This implies that for some g ∈ L 1 (X ). This, in combination with (25), implies that f = g for a.e.. Therefore, for f ∈ L 2 (X ) ∩ B 0 1,1 (X ). This completes our proof.
For any bounded Borel function ϕ defined on [0, ∞), we now define, for λ > 0 and some ε > 0, the following quantities. Arguing similarly to the proof of Theorem 1.2 in [28], we have:
Lemma 3.8 Let (ϕ, ϕ 0 ) be a pair of even functions in A(R).
Then, for λ > 2n, we have
We now introduce the notion of atoms for the Besov space B 0 1,1 (X ). Definition 3.10 Let ε > 0. A function a is said to be an ε-atom if there exists a ball B with r B ≤ 1 such that the following conditions hold. (b) In particular, if f ∈ B 0 1,1 (X ) is supported in a ball B with r B = 1, then there exist a sequence of ε-atoms {a j } supported in 3B for some ε > 0 and a sequence of numbers {λ j } such that (28) and (29) hold true.
Proof (a) Let , be as in Lemma 3.6 such that Moreover, according to Lemma 2.4, we have, for t > 0 and x, y ∈ X , and where F ∈ { , , , }.
We first decompose f 1 as follows: For each Q ∈ D 2 as in Remark 2.3, we set It is clear that For the part f 2 , we write For each Q ∈ D j with j ≥ 3, we set and Then we have Therefore, We next claim that a Q is an atom for each Q ∈ D j , j ≥ 2. Indeed, for j = 2 we have It follows, by (30) and Remark 2.
On the other hand, by Lemma 2.7, Hence, a Q is a multiple of an ε-atom associated to the ball B Q for each Q ∈ D j with j = 2.
Arguing similarly to above, we can verify that for Q ∈ D j , j ≥ 3, a Q satisfies (i)-(iii) in Definition 3.10 with the corresponding ball defined by The condition ∫ a Q (x) dμ(x) = 0 follows directly from Lemma 2.9 and the fact that the two functions involved are even and vanish at 0. Hence, a Q is a multiple of an ε-atom associated to B Q with ε = δ for each Q ∈ D j , j ≥ 3.
It remains to show that ∑ j≥2 ∑ Q∈D j |s Q | ≲ ‖ f ‖ B 0 1,1 (X ) . It remains to prove that ∫ 0 1 ‖t He −t H a‖ L 1 (X ) dt/t ≲ 1.
To do this, we write t He −t H a 1 dt t For the second term E 2 , using the Gaussian upper bound of q t (x, y), To estimate the term E 1 , using the fact that By the smoothness condition of the atom a and the Gaussian upper bound of q t (x, y), we have It follows that E 1 1.
It remains to estimate E 3 . Note that if r B = 1, then E 3 = 0. Hence, we need only to consider the case r B < 1. Due to the cancellation property of the atom a, we have It follows that E 3 1. This completes our proof of (a).
where {s Q } is a sequence of numbers satisfying (29) and {a Q } is a sequence of -atoms defined by (32) and (33). From (30), (32) and (33), we have This completes the proof of (b).
We now introduce a new variant of the inhomogeneous Besov spaces. For > 0, the Besov space B 0, 1,1 (X ) is defined as the set of functions f ∈ L 1 (X ) such that When = 1, we simply write B 0 1,1 (X ).
Definition 3.12
Let > 0 and > 0. A function a is said to be an ( , )-atom if there exists a ball B such that Using the approach in the proof of Theorem 3.11 and the scaling argument, we are also able to prove the following theorem.
Theorem 3.13
Let > 0 and f ∈ L 1 (X ). Then f ∈ B 0, 1,1 (X ) if and only if there exist a sequence of ( , )-atoms {a j } for some > 0 and a sequence of numbers {λ j } such that and In particular, if f ∈ B 0, 1,1 (X ) supported in a ball B with r B = , then there exist a sequence of ( , )-atoms {a j } j supported in 3B for some > 0 and a sequence of numbers {λ j } such that (34) and (35) hold true.
Proof of Theorem 1.4
We state the following results in which the proofs of Lemma 4.1 and Proposition 4.2 below are similar to those of Lemmas 3.4, 3.2, 3.3 and Corollary 3.5.
Lemma 4.1 Let ψ be an even function in
Then we have for f ∈Ḃ 0,L 1,1 (X ).
Proposition 4.2
The following properties hold true for the homogeneous Besov space Ḃ 0,L 1,1 (X ).
Proof Similarly to the proof of Lemma 3.4, it suffices to prove the proposition for f ∈ L 2 (X ) ∩Ḃ 0,L 1,1 (X ). By spectral theory, On the other hand, from Lemma 2.7, This implies that for some g ∈ L 1 (X ). This, in combination with (37), implies that f = g for a.e.. Therefore, for f ∈ L 2 (X ) ∩Ḃ 0,L 1,1 (X ). This completes our proof.
Proof of Theorem 1.4:
The proof of the atomic decomposition for functions f ∈ Ḃ 0,L 1,1 (X ) is similar to that of Theorem 4.2 in [5] and the proof of Theorem 3.11. Hence, we leave it to the interested reader.
For the reverse direction, it suffices to show that there exists C > 0 such that Suppose that a is an (L, M)-atom associated with a ball B. Then we have For the first term, we have For the second term, using a = L M b, This completes our proof.
Proof of Theorem 1.6
We refer the reader to Sect. 2.1 for the index set I, the family functions {ψ α } α and the family of balls {B α } α which will be used in this section.
Lemma 4.5 For each f
Proof Denote Then we write We estimate E 1 first. Owing to Lemma 2.1 and the upper bound of q t (x, y), we have This implies that where Since J 1,β is uniformly bounded in β ∈ I, using (39) we obtain If β ∈ I 2,α , then ψ α (y) = 0 for all y ∈ B β . Therefore, By the upper bound of q t (x, y) and the fact that d(x, y) > ρ(x α ) whenever x ∈ B α , y ∈ B β with β ∈ I 2,α , we further simplify to that On the other hand, invoking (5) we have Therefore, dt t dμ(y).
Since {B β } β∈I is a finite overlapping family and ∪ α∈J 2,β B α ⊂ X \B * β , we also obtain that This completes our proof.
We are ready to give the proof of Theorem 1.6.
Proof of Theorem 1.6: We first prove that each function f ∈Ḃ 0,L 1,1 (X ) admits an atomic decomposition as in the statement of the theorem.
Observe that by (5), for z ∈ 3B, This, together with (L1), yields that It follows that A 3 1. This completes our proof.
Application to Boundedness of Riesz Transforms Associated to Schrödinger Operators on R n
In this section, we show the boundedness of the Riesz transforms associated to Schrödinger operators L = −Δ + V on R n on the new Besov space Ḃ 0,L 1,1 (R n ). It is worth noticing that although we restrict ourselves to the Schrödinger operators on R n , our approach works well in more general settings, including those listed in Remark 1.1.
Let L = −Δ + V be a Schrödinger operator on R n , n ≥ 3, with V ∈ R H n/2 . Our main result in this section is the following theorem. We would like to remark that in the classical case, the Riesz transform ∇(−Δ) −1/2 is bounded on the classical Besov space Ḃ 0 1,1 (R n ). See for example [6, Proposition 2.4]. In the setting of Theorem 5.1, we have a better estimate for the Riesz transform ∇L −1/2 since, by Theorem 3.13, Ḃ 0 1,1 (R n ) ↪ Ḃ 0,L 1,1 (R n ). Therefore, as a consequence of Theorems 3.13 and 5.1, we have:
In order to prove Theorem 5.1 we need the following technical lemma. To do this, we write | 6,295.8 | 2023-07-28T00:00:00.000 | [
"Mathematics"
] |
Proteomic analysis of the pulvinus, a heliotropic tissue, in Glycine max
Certain plant species respond to light, dark, and other environmental factors by leaf movement. Leguminous plants both track and avoid the sun through turgor changes of the pulvinus tissue at the base of leaves. Mechanisms leading to pulvinar turgor flux, particularly knowledge of the proteins involved, are not well known. In this study we used two-dimensional gel electrophoresis and liquid chromatography-tandem mass spectrometry to separate and identify the proteins located in the soybean pulvinus. A total of 183 spots were separated and 195 proteins from 165 spots were identified and functionally analyzed using single enrichment analysis for gene ontology terms. The most significant terms were related to proton transport. Comparison with guard cell proteomes revealed similar significant processes but a greater number of pulvinus proteins are required for comparable analysis. To our knowledge, this is the first report on the analysis of proteins found in the soybean pulvinus. These findings provide a better understanding of the proteins required for turgor change in the pulvinus.
Introduction
Plants respond to light, gravity, touch, and other environmental signals by both temporary and permanent differential growth.1,2 The directional growth of plants as a response to an external stimulus is called tropism. The movement of leaves by sunlight, or heliotropism, can angle leaf lamina both toward (diaheliotropism) and away from (paraheliotropism) the light depending on the intensity of the irradiance, circadian rhythms (nyctinasty), and environmental stresses. Diaheliotropic movement has been shown to increase water use efficiency and carbon assimilation compared to horizontal controls in cotton and soybean.3,4 Paraheliotropism benefits plants by maintaining high levels of photosynthetic quantum yield under stressed conditions and reducing UVB radiation levels.5,6 The pulvinus is an enlarged motor organ at the base of leaves found in many leguminous plants. It has been observed to force the movement of leaves in heliotropic, seismonastic, and nyctinastic patterns.7,8 Unlike pulvini found in maize and oat that respond to gravity by permanent growth, the movement of soybean pulvini is reversible.8,9 In heliotropism, upon light exposure, an asymmetric turgor gradient formed between the adaxial and abaxial motor cells leads to leaf movement. Potassium ion influx coupled with chloride ion influx is powered by a proton gradient and results in osmotic water influx.7 In addition to heliotropism, the pulvinus changes turgor for nyctinastic leaf folding, and is affected by alterations in the length of the photoperiod.10 The structure of the pulvinus reveals its specialized role in leaf movement. In contrast to the stem and petiole, the pulvinus has a relatively larger cortex and smaller pith. The motor cells are part of the cortex.11 Two types of pulvinar vacuoles found in many species participate in the volume flux. Tannin-rich vacuoles have been previously mentioned as a major source of cellular volume change.12 Still other studies have found the primary volume changes to be from the type of vacuole that does not contain tannin.13 A number of proteins have been linked with pulvinar heliotropism and nyctinasty. H+-ATPase activity increases turgor by H+ efflux and consequent K+ influx, and H+-ATPase inhibitors reduce the diaheliotropic response in soybean pulvinus.14 Blue light acts as a deactivator of H+-ATPase in Phaseolus vulgaris motor cells, which leads to decreased turgor pressure in the illuminated region, rather than as an activator of H+-ATPase in the opposite region.15 Furthermore, studies on gravitropic grass pulvini proteins have begun to identify differentially expressed proteins, including one demonstrating MAPK-like activity.16
While studies have identified a number of genes involved in a variety of tropic responses, there remains a great dearth of knowledge on the gene products and their expression patterns. The purpose of this study was to map the proteome of the soybean (Glycine max) pulvinus using trichloroacetic acid/acetone extraction, two-dimensional gel electrophoresis (2-DE), and identification by liquid chromatography-tandem mass spectrometry (LC-MS/MS). This map would highlight the molecular and functional characterization of the pulvinus at the protein level. The soybean was selected because of its importance as a food crop, the relative size of its pulvinus, and the ease of growing samples. After profiling the pulvinus proteome, it was compared to previously identified proteomes of nearby leaf tissue as well as guard cells, which are functionally similar to pulvini in that they also change cell shape through differences in turgor.
Plant material
Soybean (G. max cv. Clark) seeds were soaked overnight in tap water before they were planted in 6-inch pots (2-3 per pot) with an LC1 soil mixture (Sun Gro Horticulture, Vancouver, BC, Canada). The plants were grown in a growth chamber at the University of Maryland, College Park, set to a 16:8 photoperiod, temperatures of 25:20 °C day:night, a PAR of 500 µmol m -2 s -1 , and 60% relative humidity. The plants were watered to avoid water stress and received 100 ppm 20:20:20 fertilizer once a week. The plants were harvested after the appearance of six or seven trifoliolate leaves (between six and eight weeks). The terminal and lateral pulvini from the second through sixth trifoliolate leaves were separately excised with a razor one to two hours into the light period and frozen in liquid nitrogen. The pulvini were stored in a −80°C freezer until further use.
Protein extraction
Trichloroacetic acid (TCA)/acetone precipitation, described previously by Natarajan et al., was used to extract pulvinar protein. 17 For each of three biological replicates, approximately 2.0 g of pulvinus was ground into a powder using a mortar and pestle with liquid nitrogen, then extracted with a 10% TCA (w/v)/0.07% β-mercaptoethanol (v/v) in acetone mixture. Following a minimum of one hour incubation at −20°C and centrifugation at 14,000 g at 4°C for 20 minutes, the supernatant was discarded. The pellet was rinsed with 0.07% β-mercaptoethanol in acetone solution followed by centrifugation at 14,000 g (4°C) for 20 minutes; the rinsing and centrifugation steps were repeated until the supernatant was clear. After vacuum drying, the pellet was resolubilized in a 7 M urea, 2 M thiourea, 4% CHAPS, 1% DTT solution and sonicated on ice for 45 minutes. The supernatant was collected after centrifugation at 14,000 g (4°C) for 20 minutes, and the protein concentration was quantified using the Bradford method. 18
2D gel electrophoresis
For first-dimension electrophoresis, 500 µg of protein in a solution of 7 M urea, 2 M thiourea, 4% CHAPS, 50 mM DTT, 1% IPG buffer (pH 4-7), and 0.002% bromophenol blue was loaded onto 13 cm IPG strips, pH 4-7, and run on a flatbed Ettan IPGphor II (GE Healthcare, Piscataway, NJ, USA) under conditions described earlier by Natarajan et al.: 30 V for 13 hours, 500 V for one hour, 1000 V for one hour, 8000 V gradually over 1:30 hours, 8000 V for 24,000 Vh, and 5000 V for ten hours. 17 The final step was truncated if the protein appeared to be sufficiently separated. Prior to second-dimension SDS-PAGE, the IPG strips were equilibrated twice to reduce and alkylate the disulfide bridges, first in DTT and then in iodoacetamide (IAA), 15 minutes each in equilibration buffer (50 mM Tris-HCl pH 8.8, 6 M urea, 30% glycerol, 2% SDS, 0.002% bromophenol blue, 1% DTT). The strips were loaded onto 12.5% polyacrylamide gels using a Hoefer SE 600 Ruby electrophoresis unit and run at 15 mA per gel for 30 minutes and 25 mA per gel for up to five hours. The gels were stained for two days using Coomassie Blue G-250. Gels were scanned on an ImageScanner III (GE Healthcare) and analyzed using Progenesis SameSpots software (Nonlinear Dynamics, Durham, NC, USA). Gels were stored in a 17.5% aqueous ammonium sulfate solution until further use.
In-gel digestion
Trypsin digestion of selected spots was based on the methods of Shevchenko et al. (1996) and Gharahdaghi et al. (1999). 19,20 Spots were excised and rinsed twice with 50% methanol, ten minutes each. The gel pieces were reconstituted and subsequently dehydrated in solutions of 25 mM ammonium bicarbonate and acetonitrile, respectively, by placement on a shaker for ten minutes per solution. This step was repeated a second time. The spots were further dried in a speed vac concentrator for about 15 minutes. Each gel piece was then reswollen with a 20 μL aliquot of 10 ng/μL porcine trypsin (sequencing grade, Promega, Madison, WI, USA) in 25 mM ammonium bicarbonate and refrigerated for one hour at 4°C before overnight incubation at 37°C.
The excess trypsin solution surrounding each gel spot was transferred to a new tube, and the remaining peptides were extracted from the gel pieces with 50% acetonitrile/5% trifluoroacetic acid (TFA) (v/v). A 50 μL aliquot of the extraction mixture was added to each gel piece and placed on a shaker for an hour. The supernatants were added to the original trypsin digests, and the extraction was repeated once more with another 50 μL aliquot. The supernatants were then dried for up to two hours in a speed vac concentrator. The peptides were solubilized in a 20 μL solution containing 5% acetonitrile/0.1% formic acid.
Mass spectrometry
The peptides were run on an LTQ Orbitrap XL hybrid linear ion trap Orbitrap mass spectrometer (ThermoFisher Scientific, San Jose, CA, USA) with reverse-phase chromatography on a 100 × 0.18 mm BioBasic-18 column at a 3 μL/min flow rate. The 30-minute linear gradient was 5-40% acetonitrile in a 0.1% formic acid solution. The survey scan covered the range 400-1600 m/z (resolution of 30,000 at m/z 400), and for each polypeptide gel spot, the MS/MS spectra of the five most abundant ions were recorded. The electrospray voltage was 3.5 kV, with normalized collision energy set to 30% and a minimum ion count of 5000. Mascot Distiller version 2.3.00 was employed to produce searchable peak lists.
Data analysis
MS/MS data were analyzed with the Scaffold toolkit version 3 (Proteome Software, Portland, OR). Scaffold searches MS/MS data against several database search engines and computes a peptide probability incorporating similar results among the search engines. Based on the peptide distribution, a protein probability is computed and the peptides are identified as parts of the computed protein. 21 The MS/MS data were searched against the UniProt Knowledgebase. The results were limited to G. max, with minimum values of two significant peptide matches, 80% peptide identification probability, and 95% protein identification probability. Uncharacterized proteins were identified by examining the homologous protein clusters at 100%, 90%, and 50% homology as curated by UniRef. Proteins without a name at the 50% homology level remained uncharacterized.
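A minimal sketch of these identification filters, assuming a hypothetical CSV export of the Scaffold results (the column names below are placeholders, not Scaffold's actual export format):

```python
# Keep only identifications meeting the thresholds described above:
# >= 2 significant peptides, >= 80% peptide probability, >= 95% protein probability.
import pandas as pd

hits = pd.read_csv("scaffold_export.csv")  # hypothetical export file

confident = hits[
    (hits["num_significant_peptides"] >= 2)
    & (hits["peptide_probability"] >= 0.80)
    & (hits["protein_probability"] >= 0.95)
]
print(f"{confident['accession'].nunique()} proteins pass the filters")
```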
Biological process gene ontology (GO) terms, as listed in the Gene Ontology Annotation (GOA) Database, a collaboration with the UniProt Knowledgebase, were retrieved for the proteins and input into the agriGO version 1.2 GO analysis program (China Agricultural University, Beijing, China) for single enrichment analysis. For this analysis, the abundance of the GO terms found among identified pulvinar proteins was compared to a background list of soybean genes. Single enrichment analysis indicates the dominant biological processes in the examined tissue relative to the entire plant species; it is one of several available methods to compare the significance of each GO term. The analysis used, as defaults, a hypergeometric statistical method with the Benjamini-Yekutieli correction and a minimum of five terms for significance. 22 The reference GO term list was the soybean locus genome provided by Phytozome (US Department of Energy, Joint Genome Institute, Walnut Creek, CA, USA). GO terms significantly present in higher numbers in the pulvinus tissue are discussed below.
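The enrichment test itself can be illustrated with a minimal Python sketch, assuming per-term counts have already been tabulated; the counts, term labels, and background size below are illustrative rather than taken from this study, and scipy/statsmodels stand in for agriGO:

```python
# For each GO term, a hypergeometric test compares its count among the identified
# pulvinus proteins with its count in the background genome, followed by
# Benjamini-Yekutieli correction of the p-values.
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

background_size = 46_000   # hypothetical number of background soybean loci
study_size = 129           # pulvinus proteins with biological-process GO terms

# term -> (count in study set, count in background); values are made up
term_counts = {
    "GO:0015992 proton transport": (9, 120),
    "GO:0006108 malate metabolic process": (5, 60),
}

pvals = []
for term, (k_study, k_background) in term_counts.items():
    # P(X >= k_study) when drawing study_size proteins from the background
    p = hypergeom.sf(k_study - 1, background_size, k_background, study_size)
    pvals.append(p)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")
for (term, _), p, padj, sig in zip(term_counts.items(), pvals, p_adj, reject):
    print(f"{term}: p={p:.3g}, adjusted={padj:.3g}, significant={sig}")
```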
Protein extraction, identification, and functional analysis
To investigate the functional composition of the pulvinus proteome, 2-DE was used to separate the extracted proteins. A total of 183 spots were then excised for LC-MS/MS analysis (Figure 1). Running the MS/MS data through Scaffold resulted in the identification of 195 proteins; 18 of the 183 spots either did not contain any peptides or had a protein identification probability <95%. Fifty-five proteins were named directly as a result of the Scaffold analysis workflow. An additional 116 proteins were named through the UniRef sequence clusters at the 100%, 90%, or 50% homology level. The remaining 24 proteins were not named even at the 50% homology level. Of the 195 proteins, 129 had GO terms for biological processes. Protein or cluster names, UniProtKB accession numbers, UniRef accession numbers, molecular weights, corresponding gel spots, and genetic information are listed in Supplementary Table 1.
AgriGO single enrichment analysis of the 129 proteins against a background of soybean genome loci identified 74 significant GO terms (data not shown). Most of the 74 terms fell under three broad parent terms: nucleobase, nucleoside, and nucleotide metabolic processes (23 child GO terms), nitrogen compound metabolic processes (13 child GO terms), and transport (11 child GO terms). All terminal significant child GO terms are listed in Supplementary Table 2. The top two significant biological process GO terms were related to proton transport. The third significant term was a negative regulator of proton transport (oxidative phosphorylation, GO:0006119). Photosynthesis and respiration were also enriched compared to the reference background.
Proton transport
The two most significant (lowest e-values) GO terms were related to the transport of protons through hydrolysis and synthesis of ATP (Supplementary Table 2). This would indicate a higher demand for proton transport in the pulvinus relative to the overall soybean plant and confirms our understanding of turgor change in pulvinar motor cells. The changes in turgor pressure between the adaxial and abaxial motor cells that result in leaf movement are triggered by K+ and Cl− flux across the plasma membranes. Proton concentrations dictate the level of activity in ion channels and the consequent osmotic movement. 7 Vacuolar H+-ATPases are believed to also assist in Ca2+ accumulation through the generation of the electrochemical gradient across the tonoplast. 23 In their examination of apoplast pH levels in the nyctinastic S. saman pulvini, Lee and Satter confirmed that swelling of motor cells corresponded to proton extrusion, and that the same light treatment resulted in opposing proton fluxes between the adaxial and abaxial motor cells. 24 Okazaki found similar pH changes in P. vulgaris pulvini and noted that an increase of extracellular pH was accompanied by inhibition of plasma membrane H+-ATPase activity, which would lead to a turgor decrease. 15 Unlike Lee and Satter, Okazaki found that blue light increased extracellular pH in both adaxial and abaxial protoplasts and speculated that the relative differences in turgor between the two regions resulted from differences in the amount of blue light received. It is important to remember that the responses measured by the two groups of researchers were not the same; Lee and Satter were looking at the leaf movements of a nyctinastic plant, while Okazaki was examining the heliotropic leaf movement of P. vulgaris. The apparent differences in pH response to blue light may be a result of a photoreceptive signaling pathway unique to heliotropic plants.
Malate metabolism
Malate metabolism was the fifth most common significant GO term as calculated by agriGO (Supplementary Table 2). In the pulvinus, the dissociated anion of malic acid, malate, is believed to serve as an additional source of charge balance in the cell, as the cotransport of Cl− does not account for all the positive charge associated with K+ movement. Malic acid appears in higher concentrations during the day than at night in the whole pulvinus of P. coccineus, and increased concentrations correspond to regions of K+ accumulation. The majority of malic acid arises through de novo synthesis in situ (~80%), not by transport between the abaxial and adaxial regions. 25 Malate is also known to be an important counter-ion in guard cells. 26 Four of the five proteins with the malate metabolism annotation were various isoforms of malate dehydrogenase (spots 83-85, 87). The fifth, malic enzyme (spot 13), is involved in the synthesis of malic acid. A precursor to malate is sucrose, believed to be present in motor cells through import from nearby tissues and also through the degradation of starch. One protein identified as the starch-degrading enzyme beta-amylase was detected in various amounts across eight spots in the pulvinus gel (spots 15-17, 28-30, 51 and 62).
Cytoskeletal processes
Two significant GO terms, protein polymerization and microtubule-based movement, were both annotated to eight tubulin proteins (spots 18-21, 24, 25, 27, 38, 39 and 43). The multiple tubulin proteins are an example of the tetraploid nature of soybean; despite having over 86% sequence homology, five tubulin beta-2 chain proteins are encoded by genes located on five different chromosomes (data not shown). While tubulin is the cytoskeletal protein type identified by significant GO terms, studies have found actin (spots 53, 55, 57) to be more integral to pulvinar bending. The GO terms for actin proteins in the soybean pulvinus, however, have yet to be annotated and therefore could not be included in the analysis. Kanzawa et al. and Yao et al. detected the fragmentation of actin filaments in response to cold and electrical stimuli, respectively, in Mimosa pudica pulvini; Kanzawa et al. also detected microtubule fragmentation, but microtubule modulators did not appear to affect pulvinar bending while actin modulators did. 1,27 Yao et al. noted that both actin rearrangement and the presence of intracellular Ca2+ were required for bending. 27 Several other proteins with Ca2+-dependent activity were detected in the pulvinus, including inositol 3-phosphate synthase (spot 31), apyrase (spots 33 and 91), and annexin (spot 110). These three proteins could function as early participants in signal-transduction pathways that lead to ion movement across the plasma membrane.
Ion transporters and aquaporins
Along with proton ATPases came the expectation of finding aquaporins and K+ transport proteins in the pulvinus. Previous studies have suggested the presence of aquaporins and at least two K+ transporters for inward and outward ion movement. 28,29 The channels, conventionally labeled KH and KD for hyperpolarized and depolarized K+ channels, respectively, correspond to inward and outward ion movement. However, Yu et al. found evidence for a voltage-independent KH channel, not affected by extracellular pH levels, that appears to play more than a minor role in K+ influx. 30 Fleurat-Lessard et al. detected high levels of a 23 kDa aquaporin, in addition to V-H+-ATPases, in the aqueous vacuole of mature M. pudica pulvini compared to juvenile, non-functioning pulvini. 31 At least two aquaporin-like gene products were successfully cloned from S. saman pulvini; one of the two, SsAQP2, also demonstrated diurnal and circadian patterns and was not found to an appreciable degree in nearby leaf tissue. 28

Unfortunately, we did not detect any other transporters (other than ferritin, spot 147) or any proteins localized to the plasma membrane in the soybean pulvinus. There could be several explanations for the lack of these proteins. For one, aquaporins have an isoelectric point (~8) higher than the range used in this study. Furthermore, plasma membrane proteins typically are not recovered by traditional protein extraction methods, requiring an additional partitioning step and special extraction buffers. 32 V-type H+-ATPases, such as the three detected in the soybean pulvinus, are not only localized in vacuoles but may also be located in the plasma membrane. The GO annotation for cellular component of the three V-H+-ATPases did not specify the plasma membrane or the tonoplast; given the dearth of other plasma membrane proteins, it seems more likely that the H+-ATPases detected were from endomembranes.
Pulvinus proteome compared to the root, leaf, and guard cell
The pulvinus proteome profiled here was compared with the leaf proteome identified by Xu et al. to verify differences between the two tissues. 33 One advantage of comparing the two tissues is that the protein extraction process, as well as the soybean cultivar examined, were the same in both studies. The distribution of pulvinar proteins among the various functional categories was expected to differ from the leaf because the dominant biological functions are distinct for each type of tissue. Xu et al. identified a large number of soybean leaf proteins involved in energy production. 33 Several of those proteins were also identified in the pulvinus, including the rubisco large subunit, rubisco activase, and oxygen-evolving enhancer proteins 1 and 2, among others. The pulvinus proteome also contained ATP synthases and two other proteins involved in nucleoside phosphate biosynthesis which did not appear in the leaf proteome. While the separation methods for the two tissues were largely the same, Xu et al. used the method of Bevan et al. for classifying the proteins without reporting significant GO terms. 34 As a result, a direct side-by-side comparison of significant molecular processes cannot be undertaken. In general, the pulvinus proteome contained many more stress-response proteins than the leaf.
Pulvinus proteome compared to guard cells
Besides the pulvinus motor cell, another cell type demonstrating rapid turgor changes is the guard cell. Like the pulvinus, guard cells require light-triggered H+-ATPase activity for ion transport and subsequent water movement. In guard cells, H+-ATPase activity in the plasma membrane increases when exposed to blue light, with a corresponding decrease of extracellular pH and an increase in turgor. 26 However, the opposite response occurred in Okazaki's examination of P. vulgaris motor cells, where blue light illumination led to the inhibition of H+-ATPase. 15 This suggests that, despite functional similarities, guard cells and heliotropic pulvinus motor cells have distinct mechanisms for their turgor responses. It could be that guard cells, more often located on the abaxial leaf surface, have a photoreceptive mechanism similar to that of the abaxial motor cells of nyctinastic plants such as S. saman. Therefore, comparing the proteomes of the two tissues could assist in determining the degree of similarity of guard cells to pulvini of either heliotropic or nyctinastic plants.
Zhao et al. examined the diploid Arabidopsis guard cell proteome. 35 Their top GO term was related to stress response (response to cold), with energy categories making up three of the top eight. In the soybean pulvinus, GO terms for photosynthesis and respiration were less significant than in the Arabidopsis guard cell. The most abundant protein found in the Arabidopsis guard cell was the stress response protein TGG1, which functions as a defense against pathogen attack. 35 In the soybean pulvinus, protein abundance was not measured, so no similar observations could be made. The different rankings of significant GO terms may reflect different mechanisms between the two. Zhu et al. compared mesophyll and guard cells from the tetraploid crop species Brassica napus using a combination of quantitative GO analysis and a qualitative functional approach following the functional categories listed by Bevan et al. for the Arabidopsis genome. 34,36 Zhu et al. found that the guard cell proteins most highly enriched relative to the mesophyll cell fell under the categories of energy, photosynthesis, membrane and transport, metabolism, and stress response. Categorization of the soybean pulvinus proteins using the Bevan et al. method found similar results for the major groups (data not shown). In contrast to the guard cell proteome, cytoskeletal and other structural proteins appeared to factor in more strongly in the pulvinus. Both Zhao et al. and Zhu et al. recovered around 10-fold more proteins than were recovered from the soybean pulvinus, including many of the plasma membrane proteins that were not extracted using the methodology of this study. The much larger number of proteins detected in the guard cells compared to the pulvinus conceivably skews the GO analysis. Additional comparisons of the two proteomes would require a higher amount of protein extracted from the soybean pulvinus, and care must be taken to compare the soybean pulvinus with the guard cell profile of another tetraploid organism.
Conclusions
To our knowledge, this is a novel report on the analysis of protein in the soybean pulvinus. These findings provide a better understanding of the molecular basis of pulvinar protein function. In summary, 195 proteins were extracted and positively identified from 165 spots of the pulvinus gel. Gene ontology analysis of significant terms found that the top dominant GO biological processes were related to proton transport, malate metabolism, and cytoskeletal movement, which is in good agreement with previous studies on pulvinar response. Regrettably, many highly basic proteins were not detected, including proteins with isoelectric points above pH 7 and plasma membrane proteins. To recover the latter, methods such as two-phase partitioning and the use of special buffers would be necessary to obtain the integral membrane proteins. Finally, the number of proteins recovered from the pulvinus will have to increase for future comparative studies with proteins of the guard cell.
Figure 1.
Figure 1. Representative SDS-PAGE gel of the soybean lateral pulvinus, pH 4-7. The lateral and terminal pulvini did not significantly differ from one another (data not shown). The numbers correspond to the proteins identified through LC-MS/MS and listed in Supplementary Table 1. Eighteen spots did not produce significant hits.
"Biology",
"Environmental Science",
"Materials Science"
] |
Learning misclassification costs for imbalanced classification on gene expression data
Background Cost-sensitive algorithms are an effective strategy for solving imbalanced classification problems. However, the misclassification costs are usually determined empirically based on user expertise, which leads to unstable performance of cost-sensitive classification. Therefore, an efficient and accurate method is needed to calculate the optimal cost weights. Results In this paper, two approaches are proposed to search for the optimal cost weights, targeting the highest weighted classification accuracy (WCA). One is grid searching for the optimal cost weights and the other is function fitting. Comparisons are made between the two approaches. In the experiments, we classify imbalanced gene expression data using an extreme learning machine to test the cost weights obtained by the two approaches. Conclusions Comprehensive experimental results show that the function fitting method is generally more efficient and can find the optimal cost weights with acceptable WCA.
Background
Classification of gene expression data reveals tremendous information in various application fields of biomedical research, such as cancer diagnosis, prognosis, and prediction [1][2][3]. However, gene expression data are composed of high-dimensional, noisy, and imbalanced samples [4]. The characteristic of imbalanced data is a serious imbalance in the proportion of positive and negative samples [5,6]. Gene expression data require a series of pre-processing steps to avoid misleading classification results [7]. Moreover, the classification of gene expression data is a cost-sensitive problem, even though both positive and negative classifications of cancer genes provide important evidence for doctors to make a treatment plan.
Traditional machine learning algorithms usually assume that the training set is balanced. For imbalanced datasets, such as gene expression datasets, classical classification algorithms that optimize the correct classification rate (CCR) may be biased toward the majority classes. However, misclassifications of minority classes usually carry a higher cost than those of majority classes. Therefore, the introduction of cost-sensitive learning (CSL) is necessary to eliminate the defects of traditional classification algorithms on imbalanced datasets. Traditionally, oversampling the minority class, undersampling the majority class, and synthesizing new minority samples can be used to handle this problem. In this work, we instead utilize a more principled way to search for the optimal cost weights.
In CSL, the misclassification cost is an important factor in evaluating the classification performance on imbalanced datasets. However, solving for the misclassification cost matrix is not a trivial task in many situations [8][9][10]. A direct solution for finding the misclassification costs is to assign them manually according to user expertise or to calculate the costs inversely from the class distribution [11][12][13]. More sophisticated solutions can be found by fitting the importance of features to adaptive equations.
In this paper, we learn the misclassification cost from the evaluation functions of cost-sensitive algorithms, using weighted classification accuracy as the measurement of cost-sensitive classification performance. The cost weights that lead to optimal classification performance are learned by a grid searching strategy, which helps researchers obtain a reference weight. Then, three fitting functions are found to represent the optimal cost weights. A series of comprehensive experimental results show that the function fitting approach is an effective way of finding the optimal cost weights while targeting a high weighted classification accuracy (WCA). Fitting functions can accurately locate the optimal weights, and appropriate weights greatly improve the accuracy of the model. Because imbalanced data greatly affect classification accuracy, we discuss cost-sensitive classification algorithms for the imbalance problem. CSL is one of the hottest topics in the field of machine learning. Many works have studied CSL and embedded misclassification costs into various classifiers, such as decision trees (DTs), support vector machines (SVMs), and extreme learning machines (ELMs). Chai et al. [14] considered the testing costs of missing values in naive Bayes (NB) and DT algorithms. Feng [15] defined a customized objective function for misclassification costs and designed a score-evaluation-based cost-sensitive DT. For multi-class classification problems, Feng's method generally achieves higher classification accuracy or lower misclassification costs. Zhao and Li [16] extended the evaluation function by including a weighted information gain ratio and the test cost for a cost-sensitive DT. The proposed cost-sensitive DT algorithm not only reduced the misclassification cost but also improved the classification efficiency of the original C4.5 algorithm [17,18]. Lu et al. [19] used cost-sensitive DTs as base classifiers and constructed a cost-sensitive rotation forest. Two kinds of DTs, i.e., EG2 and C4.5, were considered and tested [20]. These experiments show that integrating cost-sensitivity into classification algorithms can effectively improve classification efficiency.
Cost sensitivity and classification algorithms combine to form efficient classification methods. Cao et al. [21] proposed embedding evaluation measures into the objective function to improve the performance of a cost-sensitive support vector machine (CS-SVM). He et al. [22] integrated a Gaussian Mixture Model (GMM) into the CS-SVM to deal with the imbalanced classification problem. Cheng and Wu [23] added weights to features and introduced a weighted-features cost-sensitive SVM (WF-CSSVM). The WF-CSSVM algorithm showed significant performance improvement in both accuracy and cost. Silva et al. [24] combined the CS-SVM with a semi-supervised learning method to form a hybrid classification algorithm. The effectiveness of the proposed hybrid method is shown in experimental results on Earth monitoring and landscape mapping. Cao et al. [25] tackled the problem of multi-label imbalanced data classification. They successfully assigned different misclassification costs to different label sets to reduce the overall misclassification cost.
CS-ELM has been studied by many researchers in various respects. Zong et al. [26] introduced a weighted extreme learning machine (WELM) for imbalanced data learning. It was claimed that the WELM can be extended to a cost-sensitive ELM (CS-ELM). Zheng et al. [27] formally applied the concept of cost-sensitivity to the extreme learning machine (ELM). Yan et al. [28,29] extended Zheng et al.'s work and introduced a cost-sensitive dissimilar ELM (CS-D-ELM). Compared to traditional ELM algorithms, the CS-ELM algorithms guarantee the classification accuracy and reduce the misclassification cost. More recently, Zhang and Zhang [30] addressed the problem of defining and optimizing the cost matrix for CS-ELM to make it more robust and stable [31,32]. Zhu and Wang [33] treated CS-ELM as a base classifier to solve a semi-supervised learning problem. Experimental results show that the CS-ELM has better performance in terms of accuracy, cost, efficiency, and robustness than other existing classifiers.
Classical definition of cost matrix
Considering the binary classification problem, the confusion matrix shows four types of classification results according to the prediction values, namely, true positive, false positive, false negative and true negative (Table 1) [34,35].
The CSL seeks the overall minimum cost by introducing sensitive costs, rather than only aiming at high CCR. While there are several types of classification costs, it should be noted that this work only focuses on the misclassification cost.
Misclassification costs can be viewed as penalties for errors in the classification process. In binary classification problems, the costs caused by different types of errors may differ. We define the minority class as positive (P) and the majority class as negative (N), and construct the cost matrix C as shown in Table 2.
In Table 2, C00 and C11 show the costs of correct classification. By default, we set the costs of correct classifications to 0. C01 and C10 show the costs of misclassification, where C01 denotes the misclassification cost of samples from the P class, and C10 denotes the misclassification cost of samples from the N class. Therefore, the cost matrix in Table 2 can be simplified accordingly.

Correct classification rates versus weighted classification accuracy

For classical machine learning problems, the classification accuracy usually refers to the correct classification rate (CCR) [36][37][38], also called overall accuracy (OA) [39][40][41][42], which is the proportion of all correctly classified samples. However, for imbalanced datasets where the numbers of positive and negative samples differ significantly, the CCR might be misleading [43,44]. Consider a test set containing 99 negative samples but only one positive sample [45,46]: a poorly designed classifier that simply labels all samples as negative will achieve an overall accuracy of 99/100 = 0.99, even though the accuracy for the positive class is 0. To resolve this issue, we introduce the notion of adaptive classification accuracy (ACA). By embedding a weight w_i for the i-th class, we obtain the weighted classification accuracy (WCA) as Formula (9); by enforcing w_1 + w_2 = 1, Formula (9) reduces to Formula (10). Formula (10) can easily be extended to multi-class classification problems as Formula (11), where n denotes the number of classes, M_i (i = 1, 2, ..., n) denotes the number of samples belonging to the i-th class, and CM_i (i = 1, 2, ..., n) denotes the number of correctly classified samples within the i-th class. Since the WCA describes the classification accuracy more accurately, we use the WCA to evaluate the classification performance of cost-sensitive classifiers in the problem of gene expression data classification.
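The simplified cost matrix and the CCR, ACA, and WCA expressions referenced above are not preserved in this record; a reconstruction that follows directly from the definitions in the text (the ACA form, an unweighted mean of per-class accuracies, is an assumption, and the original typography and numbering may differ) is:

```latex
C = \begin{pmatrix} 0 & C_{01} \\ C_{10} & 0 \end{pmatrix},
\qquad
\mathrm{CCR} = \frac{\sum_{i=1}^{n} CM_i}{\sum_{i=1}^{n} M_i},
\qquad
\mathrm{ACA} = \frac{1}{n}\sum_{i=1}^{n} \frac{CM_i}{M_i}

\mathrm{WCA} = \frac{w_1 \frac{CM_1}{M_1} + w_2 \frac{CM_2}{M_2}}{w_1 + w_2}
\;\xrightarrow{\;w_1 + w_2 = 1\;}\;
\mathrm{WCA} = w_1 \frac{CM_1}{M_1} + w_2 \frac{CM_2}{M_2},
\qquad
\mathrm{WCA} = \sum_{i=1}^{n} w_i \frac{CM_i}{M_i},\quad \sum_{i=1}^{n} w_i = 1
```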
Optimal cost weights searching
From the University of California Irvine (UCI) standard classification repository, we chose the Leukemia, Colon, Prostate, Lung, and Ovarian gene expression datasets for cost weight searching and further testing, i.e., the Leukemia, Colon, Prostate, Lung, and Ovarian cancer datasets from the tumor data, respectively. All details of the aforementioned datasets are shown in Table 3.
Optimal cost weights searching by grid searching strategy
The optimal weights are searched for by an adaptive algorithm using grid searching. There are two crucial factors to consider: the sample importance w and the sample categorical distribution p. The sample categorical distribution p is the proportion between the number of positive-class and negative-class samples in the test set, which is constructed by random sampling. As such, it is necessary to study the relationship between the three factors, namely w, p, and WCA, where WCA is the fitness value for the grid searching strategy. In general, the grid searching strategy can be described as follows (the detailed algorithm steps are listed in Table 4):
1) Set the searching region as M, the grid searching step size as T, and the initial position as P_0;
2) Calculate the fitness of the current position, and record the position P_max that has the best fitness f_max (f_max = WCA);
3) Update the current location, P = P + T;
4) If the current fitness value is greater than f_max, update f_max and P_max;
5) Return f_max and P_max.
The extreme learning machine is an effective single hidden-layer feed-forward neural network (SLFN) learning algorithm. The cost-sensitive extreme learning machine (CS-ELM) is a kind of ELM that attaches a cost matrix to the output layer. In this research, we set the number of hidden neurons to 10; fewer neurons make the results more sensitive, so the effect of changing the weights is easier to observe. Seven different gene expression datasets are used to obtain classification results with CS-ELM as the classifier. CS-ELM minimizes the conditional risk by embedding the misclassification cost in the ELM.
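The risk expression itself is not preserved in this record; assuming the standard conditional-risk formulation used in cost-sensitive learning, the minimized quantity can be written as:

```latex
R(i \mid x) = \sum_{j \in \{c_1, \dots, c_m\}} P(j \mid x)\, C(i, j)
```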
where R(i|x) is the conditional risk when the sample x is assigned to class i, P(j|x) is the conditional probability that x belongs to class j, and C(i, j) is the cost of misclassifying a sample of class j into class i, where i, j ∈ {c_1, c_2, …, c_m} and m is the number of classification categories.
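As a concrete illustration of the grid-searching steps listed above, the following minimal Python sketch sweeps the cost ratio w_c = C01/C10 (with C10 fixed to 1, as in the next subsection) and keeps the value that yields the highest WCA; here `train_cs_elm` and `weighted_accuracy` are hypothetical placeholders for the CS-ELM training and WCA evaluation routines, and the cost-matrix orientation (positive class first) is an assumption rather than taken from Table 2.

```python
import numpy as np

def grid_search_cost_weight(train_cs_elm, weighted_accuracy,
                            X_train, y_train, X_test, y_test,
                            w, wc_grid=None):
    """Sweep the cost ratio w_c = C01 / C10 over a grid and return the value
    that maximizes the weighted classification accuracy (WCA) on the test split."""
    if wc_grid is None:
        wc_grid = np.arange(0.5, 20.0, 0.5)        # searching region M, step size T
    best_wc, best_wca = None, -np.inf
    for wc in wc_grid:
        cost_matrix = np.array([[0.0, wc],          # C01 = w_c (misclassified positive)
                                [1.0, 0.0]])        # C10 = 1   (misclassified negative)
        model = train_cs_elm(X_train, y_train, cost_matrix, n_hidden=10)
        y_pred = model.predict(X_test)
        wca = weighted_accuracy(y_test, y_pred, class_weights=(w, 1.0 - w))
        if wca > best_wca:                          # update f_max and P_max
            best_wc, best_wca = wc, wca
    return best_wc, best_wca
```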
Optimal cost weights searching by function fitting
In this subsection, we use w and p as independent variables and define a function fitting problem of the form w_c = f(w, p), where w_c = C01/C10, w = w_1/(w_1 + w_2), and p represents the proportion of positive and negative classes. We set C10 to 1 to reduce the complexity of the calculation, i.e., w_c = C01. The sample distribution p, the optimal weight w_c = C01/C10, and the highest fitness value for each dataset are listed in Table 6.
We used an automatic fitting software package named 1STOPT to perform the function fitting [47]. In 1STOPT, the Levenberg-Marquardt and Universal Global Optimization algorithms are used to fit functions. We compared 500 functions of different types and selected the three functions with the highest correlation coefficients. We compare the fitting functions with the optimal weights in Figs. 1, 2 and 3, which show the three-dimensional interpolation of the optimal weights and the fitting functions. The red surface represents the optimal weights; the green, yellow, and blue surfaces are the fitted surfaces of f_1, f_2 and f_3. The correlation coefficients R of f_1, f_2 and f_3 indicate that the overall fit of function f_1 is better than that of the other two. Function f_2 gradually deviates from the optimal weights as the value of w increases and the value of p decreases. Function f_3 is slightly coarser than function f_1 in general.
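The fitted expressions f_1-f_3 are not reproduced in this record, so the sketch below only illustrates the surface-fitting workflow with scipy standing in for 1STOPT; the candidate functional form and the (w, p, w_c) samples are invented for illustration and are not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical candidate surface w_c = f(w, p); not one of the paper's f1-f3.
def candidate(X, a, b, c):
    w, p = X
    return a + b * p + c * p / w

# Illustrative (w, p) -> optimal w_c samples; in the paper these come from the
# grid search over the gene expression datasets (Table 6).
w = np.array([0.3, 0.5, 0.5, 0.7, 0.7])
p = np.array([1.68, 2.5, 5.0, 5.0, 8.0])
wc_opt = np.array([2.1, 3.0, 6.2, 5.8, 9.5])

params, _ = curve_fit(candidate, (w, p), wc_opt)
ss_res = np.sum((wc_opt - candidate((w, p), *params)) ** 2)
ss_tot = np.sum((wc_opt - wc_opt.mean()) ** 2)
print("fitted params:", params, "R^2:", 1 - ss_res / ss_tot)
```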
Comparison with grid searching and function fitting
Using different gene expression datasets, we compared the optimal cost weights obtained from the grid searching strategy and the fitted functions f_1, f_2 and f_3. In Table 6, we compare the WCAs on four different datasets, namely Ovarian, Prostate, Lung1 and Lung2, whose majority-to-minority class proportions are 1.68, 2.5, 5 and 8, respectively. All WCAs are computed using ELM as the base classifier. We also compare the two approaches with ECSELM. The best-fitting results are listed in Table 6.
For each dataset, we plot the weight variation for different values of w. For different datasets, the best-fitting function (chosen from f_1, f_2 and f_3) might differ (Fig. 4). Figure 4 shows that the more unbalanced the dataset is, the higher the degree of fit we can obtain, and the closer the cost weights obtained from the fitting functions are to the optimal weights. In addition, the cost weights from functions f_1 and f_3 are slightly superior to those from f_2. We put all cost weights obtained by the different methods in a three-dimensional plot and show the results in Fig. 5.
For each dataset, we also illustrate the comparison of WCAs against different w values (Fig. 6). In addition, we compare the WCAs of the optimal weights and of f_1-f_3 with ECSELM [48].
In Fig. 6, we can see that the WCAs of the three fitting functions are lower than the optimal accuracy when w is less than 0.5. The reason is that the fit of the cost weights in this range is poorer. Moreover, it can be seen from Fig. 6 that the WCAs of the fitting functions approach the optimal accuracy as p increases. Furthermore, the WCAs of our approaches are better than those of ECSELM in most cases. Compared with ECSELM, our methods are more stable while still guaranteeing a high WCA, which demonstrates the robustness of our strategy. Similar to the case of the cost weights, we assemble all WCAs obtained by the different methods in a three-dimensional plot (Fig. 7). In summary, we find that function f_1 generally provides better classification performance than the other two functions, while the fitting functions f_3 and f_2 have better performance when the variable p is large (above 5).
Fig. 2. The values of function w_c2 compared with the optimal weights.
Fig. 3. The values of function w_c3 compared with the optimal weights.
Conclusions
In this paper, we have proposed two approaches to calculate the optimal cost weights for gene expression data: a grid searching strategy and a function fitting method. They enrich the ways of calculating cost weights for imbalanced data classification problems. In general, the function fitting approach is more efficient than the grid searching strategy. The experimental results also show that the function fitting approach can accurately find the optimal cost weights for imbalanced gene expression datasets.
A limitation of this work is that, although the ELM classifier was tested, the stability of the function fitting method has not been proven, especially for other significantly different datasets. The exploration of the proposed algorithm's stability is left as future work.
"Computer Science"
] |
The Digital Engagement of Older People: Systematic Scoping Review Protocol
Background: There is an ongoing negative narrative about aging that portrays older people as a socioeconomic burden on society. However, increased longevity and good health will allow older adults to contribute meaningfully to society and maximize their well-being. As such, a paradigm shift toward healthy and successful aging can be potentially facilitated by the growing digital technology use for mainstream (day-to-day activities) and assisted living (health and social care). Despite the rising digital engagement trend, digital inequality between the age groups persists. Objective: The aims of this scoping review are to identify the extent and breadth of existing literature of older people’s perspectives on digital engagement and summarize the barriers and facilitators for technological nonuse, initial adoption, and sustained digital technology engagement. Methods: This review will be based on the Arksey and O’Malley framework for scoping reviews. The 6-stage framework includes: identifying research questions, identifying relevant studies, study selection, charting the data, summarizing and reporting the results, and a consultation exercise. Published literature will be searched on primary electronic databases such as the Association of Computing Machinery, Web of Science, MEDLINE, PsycINFO, CINAHL, and ScienceDirect. Common grey literature sources will complement the database search on the topic. A two-stage (title/abstract and full article) screening will be conducted to obtain eligible studies for final inclusion. A standardized data extraction tool will be used to extract variables such as the profile of the study population, technologies under investigation, stage of digital engagement, and the barriers and facilitators. Identified and eligible studies will be analyzed using a quantitative (ie, frequency analysis) and qualitative (ie, content analysis) approach suitable for comparing and evaluating literature to provide an evaluation of the current state of the older person’s digital engagement. Inclusion will be based on the Joanna Briggs Institute–recommended participant, concept, and context framework. Articles on older people (65 years and older), on digital technology engagement, and from a global context will be included in our review. Results: The results of this review are expected in July 2021. Conclusions: The findings from this review will identify the extent and nature of empirical evidence on how older people digitally engage and the associated barriers and facilitators. International Registered Report Identifier (IRRID): PRR1-10.2196/25616 (JMIR Res Protoc 2021;10(7):e25616) doi: 10.2196/25616
Background
Global demographic trends show that the worldwide age structure is changing more rapidly than ever before. The United Nations defines older people as those aged 65 years or older based on chronological age. Currently, there are over 703 million older people, and this number is expected to reach 2.1 billion by the year 2050 [1,2]. Population projections indicate that Europe and North America have the fastest growing aging populations, and by 2050, the population percentage of older adults is expected to reach 34% in Europe and 28% in North America [3].
There is an ongoing negative narrative about aging in which age-related changes, disability, and dependency among older people with poor and deteriorating health conditions imply increased expenditure on health and a burden on the socioeconomic aspects of society [4]. Further, the COVID-19 pandemic has also underlined how older people are generally perceived and valued in our contemporary society [5][6][7]. This crisis exacerbated existing and deeply rooted inequalities such as underfinancing in the care sector and the chronic shortage of caregivers (in both the health and social sectors) [8]. However, contrary to the negative narrative, increased longevity and good health allow older adults to contribute socially and economically in meaningful ways and to maximize their well-being late into life [9][10][11]. To facilitate healthy and successful aging, fast-growing digital technology, with all its drawbacks, barriers, and challenges, offers a staggering promise and opportunity [12].
Despite substantial mixed and inconclusive findings, several studies and reviews have demonstrated the positive impact of digital technologies on different dimensions of an older person's life, including health, housing, services and transactions, mobility and transportation, access to information, communication and work, recreation, and self-fulfillment [13][14][15]. Moreover, digital technologies play a substantial role in improving older people's quality of life and independence [16][17][18]. However, one review reported an ambivalence toward digital technology due to negative effects such as a sense of privacy and personal security breaches, whereas personal safety during emergencies was reported as a positive effect of owning a mobile phone [18].
Over the past decades, digital technology use among older populations has grown exponentially, both in the mainstream (day-to-day lives) and in assisted care (health and social care) [19,20]. Changes in the workplace and the "digital by default" strategy for delivering public services are among the contributing factors forcing older people to engage digitally [21]. Digital engagement in health promotion and social support through health information is also growing. However, the breadth and extent of digital technology use among older people remain limited to communications such as sending or receiving emails, instant messaging, video calls (Skype), and making voice calls [14]. A perceived or actual lack of interest, skill gaps, and socioeconomic factors have been mentioned as possible reasons for the limited use of digital technologies [14]. Besides, the age-related decline in vision, hearing, cognition, and dexterity also contributes to the limited use of digital technologies [22][23][24].
Comparatively, there is a discrepancy in digital involvement, access, and connectivity between the younger and older populations [16,24]. For instance, in the United Kingdom between 2014 and 2019, a significant proportion of the older population never connected digitally at all or had not used the internet in the past 3 months. The 2019 Office for National Statistics (ONS) survey showed that 13.5% of older people aged 65 to 74 years and 47% of those 75 years and older had never used the internet [16]. A similar population-based study in 7 European countries reported only 12% internet use among older people (60 years and older), of whom 64% used it for health-related issues [25]. In the United States, smartphone ownership among older people 65 years and older is significantly lower than the national average (81%); that is, 59% of those between the ages of 65 and 74 years are smartphone owners, but this falls to 40% among those 75 years and older [26].
To create a digitally inclusive and accessible world, the International Organization for Standardization recommends human-centered and accessible designs (ISO 9241-11:2018) [27]. Adaptation guidelines such as text font size, screen settings, contrast, and color adjustments are among the recommended standards. These modalities enable older people with physical disabilities to engage digitally [28]. However, technology designs are mostly driven by technology-push rather than user demand-pull factors. Additionally, the fast-evolving nature of digital technology makes it challenging for older people to catch up and sustain engagement with the adaptation guidelines.
Digital Engagement Later in Life
To thrive in the increasingly digitalized world, an acquaintance with technology is inadvertently becoming a mandatory way of life [21]. Despite the current assumption that older people are not using digital technologies, many studies have indicated that older people are competent and skilled digital technology users [29,30]. Still, there is a gap in evidence, with some key questions that require illumination:
1. What are the contributing factors to the digital inequality between the age groups?
2. How can we understand older people's digital technology use?
3. What constitutes the diversity of digital technology use?
The term "digital engagement or disengagement" has been widely used in marketing research with an aim toward promoting marketing strategies to end consumers [31][32][33].Factors like brand factor, product factor, consumer factors, and content factors have been the main focus of these studies [34].Though the factors are intertwined, this review will focus on studies that explore drivers of technological nonuse, initial adoption, and sustained digital engagement from older people's perspectives.
Overall, we propose to understand the current state of knowledge about older people's digital engagement through the stages of digital engagement (nonuse, initial adoption, and sustained engagement). This will facilitate an ongoing drive to reduce digital inequality and, in doing so, provide new understandings to promote the well-being of older people. It will also help identify potential alternatives for older people who remain nonusers of digital technology.
Digital Engagement Dimensions
To facilitate this review, operationalizing older people's digital engagement and disengagement is considered an important step in deciphering the continuum (Figure 1). This continuum, with a three-stage approach, involves technological nonuse, initial adoption or acceptance, and sustained digital engagement. This categorization will enrich the evidence mapping and the identification of barriers and facilitators for each dimension (initial adoption, sustained engagement, and technology nonuse). The description for each digital engagement dimension is provided in the following sections.
Stage I: Digital Technology Nonuse
Technological nonuse is not as absolute as the term may suggest and goes beyond the absence of technology [35]. It is also a mistake to assume a person has never used a single digital technology, as use or nonuse is a constant negotiation and renegotiation to engage or disengage with technology. This also includes older people who access digital technology through their existing social support system (family and friends). To understand the possible factors affecting older people's engagement and disengagement, efforts to investigate technological nonuse should be encouraged [35,36]. This will pave the way to understanding the bigger picture of digital exclusion among older people.
Governments across Europe (eg, the United Kingdom, Sweden, and Spain) have shown commitments to provide digital technologies through a framework (eg, universal service obligations for broadband) and accessible internet to citizens [37][38][39]. However, evidence has indicated that technological nonuse, later-in-life digital disengagement, and lower use rates are the main features of digital inequality among the older population [21,40]. The nonuse might involve technology, a service, an application, a platform, a communication medium, a set of practices, or some combination thereof. For example, the 2019 ONS survey in the United Kingdom showed that 13.5% of older people aged 65 to 74 years and 47% of those 75 years and older had never used the internet [16].
The drivers of technological nonuse are not limited to sociodemographic and economic characteristics, but also include the absence of tailored instructions and guidance, a lack of knowledge and confidence, and health-related barriers and costs [41]. According to Knowles and Hanson [17], the accessibility and trustworthiness of digital technologies, values, and religious and cultural expectations are salient determinants of older people's technological nonuse. Moreover, complexity, security, and privacy issues also contribute to technological nonuse among this age group.
Stage II: Initial Adoption
Studies dealing with user (older person) decisions to accept or reject digital technology, and the drivers that influence the user decision, will inform this stage. This will answer questions such as, "What influences users' decision to use a particular digital technology?" A considerable range of models and theories, such as the Theory of Reasoned Action (TRA) [42], the Technology Acceptance Model (TAM) [43], the Unified Theory of Acceptance and Use of Technology (UTAUT) [44], the Diffusion of Innovations Theory [45], and Igbaria's model [46], have been developed to facilitate an understanding of the drivers of initial technology adoption or rejection [47]. The TAM [43,48] and UTAUT [44], a derivative of the TRA, are among the prevailing theories. The TAM identifies antecedent factors such as perceived usefulness, perceived ease of use, and attitude toward technological acceptance, whereas UTAUT, an extension of the TAM, further develops the model by adding social influence and other moderating factors such as gender, age, experience, and voluntariness of use [44]. This review will scope studies that address these factors with age (65 years and older) as an important moderating factor [44]. Furthermore, this review will include qualitative accounts from older people's perspectives, unlike the TAM and UTAUT models, which are widely used to quantify acceptance [49].
Stage III: Sustained Digital Engagement
People who actively used technology may start to disengage due to age or the generational effects of aging [30]. According to Damodaran et al [50], sustained digital engagement is affected by the complexity and fast-changing nature of digital technology. Additionally, users' low awareness of available design adaptation modalities such as font size, color, and screen settings determines sustained use. The manuals and guidelines on such design adaptations, which enhance older people's capacity to adapt to technologies, are frequently inaccessible and outdated. Learning and support from an existing social support network, such as family, play a crucial role at this stage [50]. A similar study reported that sustained mobile technology use among older people was influenced by personal factors (physical, cognitive, and mental changes), environmental factors (financial costs, social influence, and learning to use technology), and technical factors (complexity and usability, absence of feedback, and design challenges) [29].
Sustained use is vital in understanding the digital divide among different socioeconomic groups [50]. However, studies suggest that it is one of the under-researched areas of digital engagement. A growing body of evidence has focused on understanding early adoption, with the assumption that once people subscribe to a technology, they will keep using it. However, there is evidence of a digital engagement negotiation and renegotiation between use and nonuse and vice versa [21]. Therefore, this review will include studies focused on factors that prevent or promote sustained use among older people.
Scoping Review Rationale
There is a growing body of literature that often gives glowing accounts of the positive effects of digital technology engagement among older people. However, there is a gap in comprehensive reviews of evidence on the complexities of the barriers and facilitators of older people's digital engagement. This review will summarize the current state of knowledge concerning older people's perspectives on digital engagement and disengagement across technological nonuse, initial adoption, and sustained use. In addition, the varieties of technologies used or being used in social and health care for older people will be identified.
Studies have shown that the use of digital technology has a great impact on different dimensions of older people's lives, for example, quality of life [18], decision making [29], and mobility and social connectedness [14]. However, there are no reviews of existing studies that summarize the state of knowledge from older people's perspectives, specifically the drivers of engagement and disengagement from technological nonuse to initial adoption and sustained use. This scoping review aims to provide a base for a more comprehensive understanding of digital engagement among older people. The findings will inform older people, designers, developers, and decision makers about practical implications. In addition, this review will set an agenda for future research and further in-depth understanding of older people's digital engagement.
The findings from this review will indicate the extent of evidence on older people's digital engagement, inform the extent and breadth of knowledge about the barriers and facilitators of older people's digital engagement, and delineate the scope of what we already know. Further, these findings will indicate the gaps in ongoing research on the issue.
The Rationale in Light of the COVID-19 Pandemic
As of November 2020, there have been over 50 million COVID-19 cases and over 1 million deaths worldwide. Governments worldwide have implemented different levels of public health measures such as lockdowns, social distancing, testing, contact tracing, and isolation [51]. As a result, digital technology use as a modality for coping with the crisis and maintaining socioeconomic continuity has substantially increased. For example, people are now using technology to work from home, to speak to their families and loved ones, and to source entertainment and information [52,53]. In addition, contact tracing apps were implemented in European countries, China, Singapore, and the United Kingdom [54,55]. Despite the unanticipated nature of the crisis and the higher vulnerability associated with age, the existing digital technology inequality among the age groups could imply low use or uptake of such services for the well-being of an older person, exacerbating the existing inequality [56]. In this new configuration of societal roles and innovative ways to tackle transmission and stop the pandemic, a future understanding of older people's digital engagement will shed light on existing efforts to make technologies equitable.
Overview
The methodology for this scoping review is informed mainly by the Arksey and O'Malley framework for scoping reviews and will examine the extent, range, and breadth of evidence for the drivers of digital engagement among older people [57]. Additional recent methodological developments on scoping reviews by Levac et al [58] and Tricco et al [59] (ie, the PRISMA-ScR [Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews] checklist) will be incorporated into the main framework. The framework has 6 steps, described in the following sections.
Stage 1: Identifying Research Questions
Identifying relevant and broader research questions is the first step in the process of a scoping review. Our review questions are as follows:
1. What is known from the existing literature about the perspectives of older people on digital technology engagement?
2. What digital technologies have been used in the health and social care of older people?
Stage 2: Identifying Relevant Studies
A comprehensive search of identified electronic databases will be conducted to locate relevant studies. Our search will include primary databases such as the Association for Computing Machinery Digital Library; Library, Information Science, and Technology Abstracts; MEDLINE; PsycINFO; CINAHL; and ScienceDirect. The search will be complemented by interdisciplinary (Web of Science, EBSCO, and Scopus) and secondary databases (Cochrane Library and Joanna Briggs Institute [JBI] reviews). In addition, common grey literature sources from key journals (JMIR, the Journal of Gerontology, and the Journal of Gerontechnology) and Google Scholar will be included. Additional manual searches of peer-reviewed and grey source literature on the current COVID-19 crisis and the role of digital technology engagement among older people, published from December 2019 onward, will be included to support the review rationale.
Taking into consideration the research question and the JBI recommended population, concept, and context (PCC) approach, keywords and their synonyms, plurals, spellings, and acronyms will be used to develop a comprehensive search strategy as follows.
1. The population of this study is limited to studies conducted among older people 65 years and older. Terms such as "older person," OR "older people," OR "elderly," OR "geriatric," OR "old," OR "frail," OR "older user" will be used to form the population.
2. The concept will include studies dealing with digital engagement. Terms such as "digital," OR "digital technology," OR "digital engagement," OR "digital technology engagement," OR "technology" will form the concept.
3. There will be no restriction by context in terms of the geography of the studies.
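To make the assembly of such a strategy concrete, the following minimal sketch combines the population and concept term lists with Boolean operators. It is an illustration only: the term lists are copied from the protocol, but the quoting and OR/AND syntax would need to be adapted to each database interface, and the `or_block` helper is a name introduced here, not part of any search tool.

```python
# Minimal sketch: combining PCC term lists into a Boolean search string.
# The exact field tags, truncation symbols, and quoting rules differ per database.

population_terms = ["older person", "older people", "elderly", "geriatric",
                    "old", "frail", "older user"]
concept_terms = ["digital", "digital technology", "digital engagement",
                 "digital technology engagement", "technology"]

def or_block(terms):
    """Join the synonyms of one PCC element with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# Population AND concept; no context restriction is applied (criterion 3).
search_string = or_block(population_terms) + " AND " + or_block(concept_terms)
print(search_string)
```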
All identified literature from our broad search strategy will be exported to the EndNote library manager and Evidence for Policy and Practice Information (EPPI) Reviewer 4 for the two-stage screening (title/abstract and full article).
Stage 3: Study Selection
Inclusion and exclusion criteria to select studies will be generated based on the scope of the inquiry. Accordingly, we will use an iterative approach, going back and forth to refine the search strategy and study selection.
Inclusion Criteria
Peer-reviewed articles will be the primary target, but grey literature sources with important insights into the scope will also be included to enrich the review. The inclusion criteria will be in line with the PCC of the studies, described as follows:
Study Identification
The study selection will involve two stages of screening. EPPI Reviewer software version 4 (from the Evidence for Policy and Practice Information and Co-ordinating Centre) will be used to facilitate the screening process.
1. Title and abstract screening will be performed according to the inclusion and exclusion criteria.
2. Articles qualified by the title and abstract screening will be further considered for full article appraisal.
Full articles will be accessed through the University of Brighton library, interuniversity library resources, and contacting the authors. The search results, screening process, and reasons for exclusion will be presented using the PRISMA-ScR flow diagram.
Stage 4: Charting the Data
Important variables from studies found to be eligible for final inclusion in the scoping review will be extracted using a customized data extraction tool. The extracted variables will inform the scope and the breadth of the existing literature on older people's digital engagement (Textbox 1). Variables such as study design, source of data, study size, study setting, study population, digital technology used, stage of digital engagement under study, and the barriers and facilitators of digital engagement among older people will be extracted.
Textbox 1. Variables to be extracted by review questions.
• Authors
• Year of publication

Stage 5: Collating, Summarizing, and Reporting the Results

An extension of the PRISMA-ScR flow diagram and guideline for reporting scoping reviews will be used to describe and collate the results of the final review [58,59]. The scoping will involve quantitative analysis (ie, frequency analysis), numerical description, and common characterization of the studies by study setting (geography or distribution), type of study design, mean age of the study participants, and other features. Finally, the qualitative analysis will be conducted using the content analysis technique. Conceptual categories and definitions will be formed to inform the meanings, barriers, facilitators, and experiences of older people related to digital technology engagement. These categories will be used to generate themes. Levac et al [58] recommended qualitative content analysis to facilitate the summary and to make sense of the extracted variables. This relational conceptual analysis will help explore relationships between the concepts extracted from the articles in the field. Charting of important variables and a narrative description of the findings will be presented in the review report.
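As a concrete illustration of the quantitative, frequency-based part of this stage, the short sketch below tabulates charted records by design, setting, and engagement stage. The records and column names are hypothetical placeholders standing in for the data-extraction table, not actual review data.

```python
# Minimal sketch of the planned frequency analysis over charted variables.
import pandas as pd

charted = pd.DataFrame([
    {"design": "qualitative", "region": "Europe",        "stage": "initial adoption", "mean_age": 71},
    {"design": "survey",      "region": "North America", "stage": "sustained use",    "mean_age": 74},
    {"design": "qualitative", "region": "Asia",          "stage": "nonuse",           "mean_age": 78},
])

# Numerical description: counts by study design, setting, and engagement stage.
for column in ["design", "region", "stage"]:
    print(charted[column].value_counts(), "\n")

# Simple numerical summary of participant age across the included studies.
print("Mean of reported mean ages:", charted["mean_age"].mean())
```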
Stage 6: Consultation Exercise
We will conduct a consultation based on the identified preliminary literature findings on the topic of interest with identified stakeholders including advocacy groups, older people, academicians, digital developers, practitioners, and other early-stage researchers.This consultation exercise will be done after the preliminary electronic search on the common databases.
The findings from the consultation exercise will inform our revision of the research question and refine the search strategy.
The findings from the consultation exercise will be thematically presented.
Dissemination and Ethical Requirements
We will comment on the ethical approval status of the included studies.However, for this review, ethical approval is not required since it uses publicly available sources.The key finding from this scoping review will be made available online and will be disseminated to key stakeholders.
Results
We have conducted a preliminary search of the primary databases.We expect the final database search of this review to be completed in May 2021.We envisage disseminating the findings from this systematic scoping review in a scientific peer-reviewed journal.
Discussion
We conceptualized older people's digital engagement in a three-stage continuum from nonuse and initial adoption to sustained engagement.The findings from this review will identify the extent and nature of empirical evidence on how older people digitally engage and the associated barriers and facilitators at each stage of the continuum.
Figure 1. Older people's digital engagement dimensions and stages later in life. | 5,167.4 | 2020-11-09T00:00:00.000 | [
"Sociology",
"Computer Science"
] |
Bioinformatics analysis of Rab GDP dissociation inhibitor beta and its expression in non-small cell lung cancer
Background Lung cancer has been considered as one of the most important causes of cancer-related mortality worldwide. To predict lung cancer, researchers have identified several molecular markers. However, many underlying markers of lung cancer remain unclear. One of these markers is Rab GDP dissociation inhibitor beta (GDIβ), which is related to tumorigenicity, development and invasion. This study was designed to analyze the biological characteristics of Rab GDIβ and to detect the mRNA and protein expressions of Rab GDIβ in lung cancer cells; this study also aimed to investigate the functions of this protein in lung cancer. Method Using online software from the websites of NCBI, ProtParam and so on, we analyzed the biological characteristics of Rab GDIβ. RT-PCR was performed to detect gene expressions in A549 and 16HBE cell lines and immunohistochemistry (IHC) staining was conducted to detect Rab GDIβ protein expression in 57 cases of human lung cancer tissues and 19 cases of normal lung tissues. The association of protein expression with patient clinical and pathological characteristics was assessed in each dataset. Results Bioinformatic analysis on Rab GDIβ: The mRNA of human Rab GDIβ contains two transcript variants; the common structural elements of the two proteins are mainly α-helix, random coil, β-turn and extended strand. Three and four transmembrane domains could be found in the entire polypeptide chain of protein variants 1 and 2, respectively; both transcript variants are hydrophilic and soluble proteins. The RT-PCR result: The mRNA expression of Rab GDIβ was down-regulated in A549 cells compared with that in 16HBE cells. The IHC result: The protein expression of Rab GDIβ in lung cancer cells was significantly lower than that in normal lung tissues (P <0.05) but was not correlated with patients’ age, gender, tumor size, pathological type, differentiation, lymph node metastasis, distant metastasis and TNM stage. Conclusion The expression of Rab GDIβ was low in non-small cell lung cancer (NSCLC). Hence, Rab GDIβ may be a tumor suppressor and could function as an indicator of tumorigenesis in NSCLC; nevertheless, this result should be further studied. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_201
Background
Lung cancer has been considered as one of the most common malignancies and yields one of the lowest survival rates among cancers [1]. Non-small cell lung cancer (NSCLC) accounts for more than 80% of all lung cancers. Mortality related to this malignant disease has increased by 465% during the last 30 years in the People's Republic of China [2]. Even when NSCLC patients have received standard treatments, including surgical resection, traditional chemotherapy, radiation therapy and molecular targeted therapy, the five-year survival rate of NSCLC is still lower than 15% [3][4][5]. It has long been acknowledged that the aggressive nature of lung cancer is closely related to the activation of oncogenes and the inactivation of tumor suppressor genes [6][7][8]. However, numerous molecular alterations are involved in lung cancer development [9]; cancer initiation, progression and metastasis also remain poorly understood. Thus far, we still lack markers that can be used in early detection and targeted therapy. Therefore, novel cancer-specific molecular targets and signalling pathways should be developed to establish new therapeutic strategies against this devastating malignancy and to improve patient survival.
In our previous study on mitochondria proteomics, differentially expressed proteins, including Rab GDP dissociation inhibitor beta (GDIβ), were screened and identified [10]. Rab GDIβ is a member of the GDP dissociation inhibitor family that controls the recycling of Rab GTPases involved in membrane trafficking [11,12]. Rab GTPases, one of the Ras superfamily members of monomeric GTPases, are small G proteins. In recent years, a new function of Rab proteins has been observed in the control of tumor progression. Evidence [13][14][15][16][17][18][19] has further shown that Rab proteins are necessary to facilitate cancer cell adhesion and migration. As a Rab protein control factor, Rab GDIβ is also involved in the development of multiple tumors. Rab GDIβ controls the access of GTPases to regulatory guanine nucleotide exchange factors and GTPase-activating proteins [20]; Rab GDIβ may also function in tumor cell apoptosis [21]. Wang et al. [22] discovered that Rab GDIβ was significantly up-regulated in the highly metastatic gallbladder carcinoma cells GBC-SD18H compared with the poorly metastatic GBC-SD18L cell line. Proteomic analysis results have also shown that Rab GDIβ expressions in gastric cancer [23] and ovarian cancer [24] were aberrant compared with those in normal tissues. Therefore, Rab GDIβ possibly participates in cancer initiation and progression.
However, whether Rab GDIβ is involved in the development of NSCLC is yet to be reported. In the present study, the biological characteristics of Rab GDIβ were analyzed. RT-PCR was conducted to detect the mRNA expression levels of Rab GDIβ in lung adenocarcinoma cell line A549 and normal human bronchial epithelial cell line 16HBE. Immunohistochemistry (IHC) was performed to quantify the protein expression of Rab GDIβ in lung cancer tissues. Then the relationship between this expression and clinicopathological factors was examined. These studies may provide important references related to the potential function of Rab GDIβ in human NSCLC progression.
Bioinformatics analysis
Biological characteristics, including physicochemical properties, homology, secondary structure, transmembrane domain, functional domain, subcellular localisation, three-dimensional (3D) structure, phosphorylation sites and functions of Rab GDIβ, were analyzed using online software from the websites of NCBI, ProtParam, SOPMA, TMpred, SMART, ProtScale, SOSUI, PSORT II, Swiss-MODEL Repository and so on. The gene and protein interaction networks of Rab GDIβ were established on the basis of the platform of GeneMANIA and STRING9.0.
Cell culture
Human lung adenocarcinoma cells A549 were obtained from the central laboratory of Xi'an Jiaotong University and grown in a complete medium containing RPMI 1640 supplemented with 10% foetal bovine serum and pen/strep (100 U/ml penicillin and 100 U/ml streptomycin). The cells were grown at 37°C in an incubator with a humidified atmosphere of 5% CO2 until confluency was reached. Human normal bronchial epithelial cells 16HBE were kindly provided by the Tumor Cell Library of the Chinese Academy of Medical Sciences.
Patients and tissue procurement
Fifty-seven patients with primary lung cancer who underwent surgical resection at the Department of Thoracic Surgery of the Second Affiliated Hospital of Xi'an Jiaotong University between June 2006 and June 2011, without pre-operative chemotherapy and/or radiation therapy, were enrolled in this study. Nineteen cases of normal lung tissue were taken from benign lung lesions or from sites at least 5 cm distant from the cancer. The collected lung cancers were mainly NSCLC (32 adenocarcinomas, 19 squamous carcinomas, 2 adenosquamous carcinomas, 1 carcinoid, 1 small cell lung cancer (SCLC), 1 pulmonary metastatic tumor from the oesophagus and 1 pulmonary metastatic tumor from the cervix). Patients' characteristics, such as gender, age, pathological pattern, lymph node invasion and Union for International Cancer Control (UICC) stage, are summarised in Table 1. The tissue procurement protocol used in this study was approved by the Human Research Committee of Xi'an Jiaotong University, and written informed consent was obtained from each patient. All of the fresh tumor specimens and normal lung tissues were collected in the operating room, snap frozen in liquid nitrogen and stored at −80°C until analysis.
Primary reagent
RPMI 1640 and foetal bovine serum were purchased from Hyclone (USA). RNAfast200 and RevertAid™ first-strand cDNA synthesis kit were separately obtained from Feijie Biotechnology Company (Shanghai, China) and Fermentas (USA). Rabbit polyclonal antibody against Rab GDIβ was purchased from ProteinTech (USA).
RT-PCR
Total RNA was extracted from cultured cells with RNAfast200 according to the manufacturer's instructions. A total of 1 μg of RNA was used for reverse transcription; cDNA was generated and used as a template for RT-PCR analysis. RT-PCR was then performed using the RevertAid™ first-strand cDNA synthesis kit. PCR conditions for all of the reactions were as follows: Rab GDIβ, 94°C for 3 min, 94°C for 30 s, 50°C for 30 s and 72°C for 1 min (30 cycles) and 72°C for 5 min; and β-actin, 94°C for 3 min, 94°C for 30 s, 58°C for 30 s and 72°C for 1 min (30 cycles) and 72°C for 5 min. Gel images were collected by applying conventional electrophoresis on the PCR product. The expression levels of Rab GDIβ in A549 and 16HBE cell lines were evaluated using ImageJ 1.44 software with β-actin as the reference. Each of the above experiments was performed in triplicate.
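The β-actin-normalized evaluation described above amounts to a simple ratio of band intensities. The sketch below shows one way this can be computed; the intensity values are hypothetical stand-ins for ImageJ densitometry measurements, and the function name is introduced here for illustration only.

```python
# Minimal sketch of the densitometric normalization to β-actin.
rab_gdib_a549 = [1520.0, 1480.0, 1610.0]   # Rab GDIβ band intensity, triplicate (hypothetical)
actin_a549    = [6100.0, 6040.0, 6230.0]   # β-actin band intensity, triplicate (hypothetical)

def relative_expression(target, reference):
    """Relative expression = mean(target / reference) over replicate lanes, with SD."""
    ratios = [t / r for t, r in zip(target, reference)]
    mean = sum(ratios) / len(ratios)
    sd = (sum((x - mean) ** 2 for x in ratios) / (len(ratios) - 1)) ** 0.5
    return mean, sd

mean, sd = relative_expression(rab_gdib_a549, actin_a549)
print(f"Relative Rab GDIβ expression in A549: {mean:.2f} ± {sd:.2f}")
```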
Immunohistochemistry
IHC staining was conducted according to standard streptavidin-peroxidase (SP) methods. In brief, the tissue specimens were fixed in neutral buffered formalin and then embedded in paraffin wax. Tissue sections (thickness = 4 μm) were dewaxed, rehydrated, subjected to heat-induced antigen retrieval and blocked with normal goat serum for 15 min. The slides were incubated with rabbit polyclonal antibody against Rab GDIβ at 4°C overnight, rinsed with phosphate buffered saline (PBS) and incubated with horseradish peroxidase-labelled secondary antibody. Rab GDIβ localisation was revealed using 3,3′-diaminobenzidine (DAB) as a chromogen. A negative control experiment was performed by replacing the primary antibody with PBS. The IHC staining levels of Rab GDIβ were assessed using a semi-quantitative staining index method [25]. The percentage of positive cells was assessed quantitatively and scored as follows: 0, <5% of the total counted cells were stained; 1, 5% to 24% of the total counted cells were stained; 2, 25% to 50% of the total counted cells were stained; and 3, >50% of the total counted cells were stained. Intensity was graded as follows: 0, no signal; 1, weak; 2, moderate; and 3, strong staining. A staining index ranging from 0 to 6 was generated by multiplying the percentage of positive cells and staining intensity of each sample. The total staining score was also graded as negative (-, score 0), weak (+, score 1 to 2), moderate (++, score 3 to 4), or strong (+++, score 5 to 6). "-" was defined as negative expression, and "+, ++, +++" were defined as positive. All of the slides were examined and scored independently by two pathologists who were blinded to the patient data.
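The semi-quantitative index just described is easy to express programmatically. The sketch below simply re-implements the scoring rules quoted above; the function names and the worked example values are introduced here for illustration and are not part of the original scoring protocol.

```python
# Minimal sketch of the semi-quantitative staining index used for IHC scoring.
def percentage_score(pct_positive):
    """0: <5%, 1: 5-24%, 2: 25-50%, 3: >50% of the counted cells stained."""
    if pct_positive < 5:
        return 0
    if pct_positive < 25:
        return 1
    if pct_positive <= 50:
        return 2
    return 3

def grade(index):
    """Map the index to the negative/weak/moderate/strong categories of the text."""
    if index == 0:
        return "-"        # negative expression
    if index <= 2:
        return "+"        # weak
    if index <= 4:
        return "++"       # moderate
    return "+++"          # strong

# Example: 30% positive cells (score 2) with moderate intensity (2) gives index 4.
pct, intensity = 30.0, 2
index = percentage_score(pct) * intensity
print(index, grade(index), "positive" if index > 0 else "negative")
```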
Statistical analysis
The associations between IHC staining and clinicopathological factors were examined by the χ2 test and Fisher's exact test. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) 19.0 software, and P <0.05 was considered statistically significant.

mRNA expression of Rab GDIβ in cell lines

The electrophoresis results of the mRNA level of Rab GDIβ in A549 and 16HBE cell lines are shown in Figure 5. In particular, the mRNA level of Rab GDIβ in A549 was down-regulated. Using ImageJ 1.44 software and β-actin as a reference, we found that the relative expression level of Rab GDIβ in A549 was 0.25 ± 0.07.
Protein expression of Rab GDIβ in tissue specimens
Human lung cancer and normal lung tissue specimens were immunohistochemically stained. Our results showed that Rab GDIβ was expressed in the cell membrane and the cytoplasm, as indicated by brown granules (Figure 6). The IHC results also showed that 12 of 19 (63.2%) normal lung samples were positively stained. Among the 57 lung cancer cases, only 18 showed positive expression, and the positive rate was only 31.6%. The protein expression level of Rab GDIβ in adenocarcinoma was uniformly and significantly decreased (P = 0.014) compared with normal samples. Similarly, the χ2 test showed that the P value between squamous carcinomas and normal tissues was 0.022, meaning that the expressions in squamous carcinomas and normal tissues were also significantly different. Thus, Rab GDIβ expression was either absent or decreased in NSCLC.
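As an illustration of the χ2 and Fisher's exact comparisons named in the statistical-analysis section, the sketch below uses SciPy on a 2×2 table built from the overall positive/negative counts quoted above (18 of 57 cancer samples vs 12 of 19 normal samples). The paper's quoted P values refer to subgroup comparisons run in SPSS, so this is only an analogous worked example, not a reproduction of those exact analyses.

```python
# Minimal sketch of the association tests, using SciPy in place of SPSS.
from scipy.stats import chi2_contingency, fisher_exact

#            positive  negative
table = [[18,        39],    # lung cancer cases (counts from the text above)
         [12,         7]]    # normal lung tissue

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
print("significant at P < 0.05" if min(p_chi2, p_fisher) < 0.05 else "not significant")
```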
Associations between IHC staining and clinicopathological factors
The Rab GDIβ protein expression in NSCLC tissues was not correlated with the patients' clinicopathological characteristics, such as age, gender, tumor size, pathological type, differentiation, lymph node metastasis, distant metastasis and tumor node metastasis (TNM) stage (Table 1). Adenosquamous carcinoma, carcinoid, SCLC and metastatic carcinoma were removed during the analysis because the included cases were very few.
Discussion
Bioinformatics is a new discipline that combines computer techniques and applied mathematics; bioinformatics is also a major core area of life science and natural science. In the current study, bioinformatics was applied to analyze the biological characteristics of Rab GDIβ. Our results revealed high homology between humans and other species, suggesting that Rab GDIβ is highly conserved and implicated in in vivo processes. Rab GDIβ is primarily located in the cytoplasm and the organelles of membrane structures; this result indeed showed that Rab GDIβ was involved in cellular vesicle transport. Genes and proteins interacting with Rab GDIβ are mainly family members of small G proteins. These data can be used to analyze the biological processes and signal transduction pathways in which Rab GDIβ may participate. Thus, bioinformatics provided relevant information to elucidate the structure and function of Rab GDIβ. Rab GDIβ translocates prenylated Rab proteins from the cytosol to the membrane to form budding transport vesicles. Rab GDIβ also assists the subsequent retrieval of Rab proteins [26,27]; some of these proteins are tumorigenic or tumor suppressive [28]. Although the functions of Rab proteins in cancer progression have been studied intensively, information on Rab GDIβ action in this regard remains limited. Thus far, few expression studies have suggested that Rab GDIβ can activate or inhibit tumor progression. Sun et al. [29] conducted IHC and western blot analyses, confirming that an increased level of GDIβ is associated with pancreatic carcinoma. In another study, a proteomic analysis identified GDIβ as a protein upregulated by the effect of retinoic acid on the human breast cancer cell line MCF-7 [30]. By contrast, the expression levels of GDIβ in SKpac cells and chemo-resistant human ovarian cancer tissues are down-regulated [24].
In the current study, the mRNA levels of Rab GDIβ in lung adenocarcinoma cells and normal human bronchial epithelial cells were quantified by RT-PCR. This study is the first to report that the expression level of Rab GDIβ in lung adenocarcinoma cells was significantly lower than that in normal cells. Considering that mRNA expression may not accurately reflect protein level, we detected protein levels by IHC to verify this conclusion. The results showed that the protein level of Rab GDIβ was consistent with the corresponding mRNA level of Rab GDIβ in NSCLC and normal tissues; however, the protein level was not associated with patients' age, gender, tumor size, pathological type, differentiation, lymph node metastasis, distant metastasis and TNM stage. These findings suggested that the expression of Rab GDIβ in NSCLC was low, and this protein may be a candidate biomarker that could be used to diagnose NSCLC in early stages. This protein could also be used to provide new insights into the pathological mechanisms of tumor formation and development.
In previous studies, Rab GDIβ has been considered a gene that promotes differentiation and apoptosis but inhibits proliferation in various tumors. The current study suggested that Rab GDIβ was associated with human NSCLC. Rab GDIβ could be a potentially valuable prognostic indicator in patients with NSCLC. This information may also be used as a reference by clinicians when they provide individualised therapy with optimal benefits for patients with NSCLC.
Conclusion
In summary, our data showed for the first time that the expression of Rab GDIβ decreased in human NSCLC. Rab GDIβ level was not correlated with patients' age, gender, tumor size, pathological type, differentiation, lymph node metastasis, distant metastasis and TNM stage. Rab GDIβ may be used as a novel marker in early-onset human NSCLC. However, the current study is only a preliminary report, and the number of samples in this research is limited; thus, further experiments should be conducted to confirm our conclusion. We recommend that a larger sample size should be used in future studies, and an overexpression vector should be constructed in cells to investigate the specific functions of Rab GDIβ in NSCLC. | 3,634.4 | 2014-11-04T00:00:00.000 | [
"Biology"
] |
Stability of nonlinear time-varying digital 2-D Fornasini-Marchesini system
Stability of a system described by the time-varying nonlinear 2-D Fornasini-Marchesini model is considered. Notions of stability of the system are given, together with theorems for stability and asymptotic stability that can be considered as an extension of the Lyapunov stability theorem to this class of systems.
Introduction
The Lyapunov stability theorem is frequently used in control theory. It enables one to test the stability of linear time-invariant and time-varying systems as well as nonlinear systems. An approach based on the Lyapunov theorem is also often used for the analysis and design of robust control systems.
Two-dimensional (2-D) systems have found many applications, for instance in the analysis of systems described by partial differential equations, in the design of digital filters, etc. However, one of the main problems in the analysis and design of 2-D systems is stability. Whereas a 2-D system may be viewed as a generalization of a 1-D one, the extension of 1-D stability tests to 2-D systems is rather difficult. To date there is no simple stability test for linear 2-D systems comparable to those for 1-D systems. Therefore, every new method for stability testing of 2-D systems can be useful.
In Kurek (1995) a stability condition is given for a 2-D system described by the nonlinear Roesser model. The condition is similar to the Lyapunov one. Analogous conditions for the linear Roesser model can be found in El-Agizi and Fahmy (1979) and Bliman (2002). In Kojima et al. (2011) it is shown that asymptotic stability of a linear 2-D system is equivalent to the existence of a vector Lyapunov functional satisfying certain positivity conditions together with its divergence along the system trajectories. In Tatsuhi (2001) it is, however, shown that the application of the 2-D Lyapunov matrix inequality to the robust stability of a system described by the Fornasini-Marchesini model is limited. Alternatively, in Zidong and Xiaohui (2003) robust stability of the linear uncertain Fornasini-Marchesini model is considered using the LMI approach. Some recent results concerning stability of the nonlinear Fornasini-Marchesini model can be found in Zhu and Hu (2011).
In this note we consider the stability problem for nonlinear 2-D systems described by a model similar to the Fornasini-Marchesini one. The stability and asymptotic stability notions are defined for the system. Next, sufficient conditions for stability and asymptotic stability are formulated, similar to the second Lyapunov stability theorem. The presented results are illustrated by a numerical example. Finally, concluding remarks are given. The obtained stability conditions are simple and can be used for testing the stability of 2-D nonlinear as well as linear systems.
Stability of 2-D system
There are a number of state-space models for linear digital 2-D systems, eg. Roesser (1975), Fornasini and Marchesini (1980), Kurek (1985). In this paper we will deal with a digital time-varying nonlinear 2-D system described by a model similar to the 2-D Fornasini-Marchesini one,

x(k + 1, t + 1) = f 01 (x(k, t + 1), k, t + 1) + f 10 (x(k + 1, t), k + 1, t) (1)

where x ∈ R n is a state vector and f 01 and f 10 take values in R n . The system we will call the time-varying nonlinear 2-D Fornasini-Marchesini system. The state vector of a 2-D system is finite dimensional, but the solution to the system is calculated under an infinite dimensional set of boundary conditions (BC). For instance, two BC sets, denoted (2) and (3), can be defined for system (1). Since BC set (3) can be considered as a global state χ(h) of the 2-D system, χ(h) = {x(k, t) for k + t = h}, we define, for simplicity of the presentation, stability of system (1) assuming this BC set.
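Once boundary data are fixed, the recursion (1) can be iterated numerically. The sketch below does this for an illustrative scalar case: the maps f01 and f10, the boundary values, and the grid size are placeholders chosen here for demonstration, not quantities from the paper. For this linear choice the decay of the state along the diagonals k + t = h is consistent with the sufficient condition |a10| + |a01| < 1 mentioned in the remarks to the examples later in the text.

```python
# Minimal sketch: iterating the 2-D model (1) from boundary data on the axes.
import numpy as np

def f01(x, k, t):
    return 0.4 * x   # illustrative placeholder map (values in R^n, here n = 1)

def f10(x, k, t):
    return 0.5 * x   # illustrative placeholder map

K, T = 40, 40
x = np.zeros((K + 1, T + 1))
x[0:6, 0] = 1.0      # boundary data on the k-axis, nonzero only near the origin
x[0, 0:6] = 1.0      # boundary data on the t-axis

# Shifted form of (1): x(k, t) = f01(x(k-1, t), k-1, t) + f10(x(k, t-1), k, t-1).
for k in range(1, K + 1):
    for t in range(1, T + 1):
        x[k, t] = f01(x[k - 1, t], k - 1, t) + f10(x[k, t - 1], k, t - 1)

# The global state on the diagonal k + t = h shrinks as h grows.
for h in (5, 10, 20, 40):
    diag = [abs(x[k, h - k]) for k in range(max(1, h - T), min(K, h - 1) + 1)]
    print(f"h = {h:2d}  max |x(k,t)| on k+t=h : {max(diag):.4f}")
```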
Then, denoting by ||x|| a vector norm, the Lyapunov stability of 2-D system (1) can be defined analogously to 1-D systems.
Definition 1 A state x e ∈ R n is an equilibrium state of system (1) if and only if for each integer numbers k 0 , t 0 < ∞ the equality ||x(k 0 + i, t 0 + j) − x e || = 0 for i + j = 0 implies ||x(k 0 + i, t 0 + j) − x e || = 0 for i + j > 0.
Definition 2 An equilibrium state x e of system (1) is stable if and only if for each real number ε > 0 and integer numbers k 0 , t 0 < ∞ there is a real number δ(ε, k 0 , t 0 ) > 0 such that ||x(k 0 + i, t 0 + j) − x e || ≤ δ(ε, k 0 , t 0 ) for i + j = 0 implies ||x(k 0 + i, t 0 + j) − x e || ≤ ε for i + j > 0.
Remark It follows from the definition that ||x(k, t) − x e || ≤ ε for k > k 0 and t > t 0 for a stable equilibrium state x e independent of the BC set, also for BC set (2).
Remarks
1. In short one can say that the equilibrium state x e of system (1) is asymptotically stable if and only if it is stable and ||x(k, t) − x e || → 0 for k + t → ∞. 2. From the definition it follows that x(k, t) → x e for k, t → ∞ for asymptotically stable equilibrium state x e independent on BC set, as well for BC set (2).
Definition 4
The equilibrium state x e is stable (asymptotically stable) in the large if and only if it is stable (asymptotically stable) and δ(ε, k 0 , t 0 ) → ∞ for ε → ∞.
Definition 5
The equilibrium state x e is uniformly stable (asymptotically stable) if and only if it is stable (asymptotically stable) and for each ε, k 0 and t 0 there exists δ(ε, k 0 , t 0 ) = δ(ε) independent on k 0 and t 0 .
Moreover, we say that an equilibrium state x e is unstable if it is not stable. Then, analyzing stability of the system it is easy to note that the system can have, analogously to 1-D systems, many stable, asymptotically stable or unstable equilibrium states but only one equilibrium state if the state is asymptotically stable in the large.
The main result
Based on Definition 1 one can prove the following Theorem (Kurek 1995).
Theorem 1 A state x e ∈ R n is an equilibrium state of system (1) if and only if
x e = f 01 (x e , k, t + 1) + f 10 (x e , k + 1, t) for all k, t.
Next one can prove the following theorems, similar to the well-known second stability theorem of Lyapunov for 1-D systems (Ogata 1967).
Theorem 2 Given system (1) with equilibrium state x e = 0. The equilibrium state is uniformly stable if there exist real numbers ξ > 0 and ρ ∈ [0, 1], and a real scalar function ϕ(x, k, t):
The equilibrium state x e is uniformly asymptotically stable in the large if the conditions are satisfied for ξ → ∞ and condition (6) is fulfilled.
Proof According to Theorem 2 the equilibrium state x e is uniformly stable if conditions of the theorem are satisfied. Moreover, because of (8) it follows from Theorem 1 that there is only one equilibrium state x e = 0 for ||x|| ≤ ξ .
Next, from (8) and (7) one finds that the function ϕ[x(k, t), k, t] is a decreasing function for k + t + 1 ≥ k 0 + t 0 and BC set (3), and since the positive definite function ϕ[x(k, t), k, t] ≥ 0 is decreasing for k + t → ∞, there exists ϕ 0 ≥ 0 such that ϕ[x(k, t), k, t] → ϕ 0 . However, according to (8) this is possible if and only if x(k, t) → 0, and this implies that the equilibrium state is asymptotically stable. In this case also ϕ[x(k, t), k, t] → 0. Thus, ϕ 0 = 0, too.
Finally, similarly as in the proof of Theorem 2, one easily finds that the equilibrium state x e is uniformly asymptotically stable in the large if condition (6) is satisfied and ξ → ∞.
Remarks
1. All remarks to Theorem 2 apply respectively to Theorem 3; in particular, instead of condition (8) one can use the following one. 2. If the change ϕ (x 12 , k, t; ρ) in (8) is a negative definite function in x and it is also a continuous function in x, k and t, then condition (d) is satisfied, since in this case ϕ (x 12 , k, t; ρ) → 0 only if x 1 , x 2 → 0. 3. From Fig. 1 one can see that ||x|| < δ(ξ) is a subregion of attraction of the asymptotically stable equilibrium state x e and there is only one equilibrium point x e such that ||x e || ≤ ξ .
One can note that the following corollaries simply follow from Theorem 2.
Corollary 1 The equilibrium state x e is unstable if there exists ξ > 0 such that the change ϕ (x, k, t; ρ) of function ϕ(x, k, t) along trajectory x of system (1) is positive definite in x for all k and t, i.e.
ϕ (x 12 , k, t; ρ) > 0 for ||x 1 ||, ||x 2 || ≤ ξ and x 1 , x 2 ≠ 0.
Corollary 2 The equilibrium state x e can be stable or unstable if there exists ξ > 0 such that the change ϕ (x, k, t; ρ) of function ϕ(x, k, t) along trajectory x of system (1) is neither a negative nor a positive definite function for ||x|| < ξ.
Remark In this case, checking the stability, one has to design another function ϕ(x, k, t) or apply different stability test.
In practice, for instance in image filtering, we deal mostly with linear 2-D systems. Unfortunately, in contrast to 1-D systems, there are no effective tests for their stability, even though the necessary and sufficient stability conditions for linear 2-D systems are well known. For this reason, we present simple examples which illustrate stability testing of a linear 2-D system using the presented results.
Remarks
1. Let us note that the proper choice of the Lyapunov candidate function can significantly improve the stability test. For instance, using the following function one can analogously show that every 1st-order system such that |a 10 | + |a 01 | < 1 has an asymptotically stable in the large equilibrium state x e = 0. 2. One can note that system (9) is unstable if a 10 , a 01 > 0 and a 10 + a 01 > 1. Indeed, in this case for x(k + 1, t) = x(k, t + 1) = x 0 one has x(k + 1, t + 1) > x 0 .
Example 2 Given the linear time-invariant 2-D Fornasini-Marchesini system

x(k + 1, t + 1) = A 01 x(k, t + 1) + A 10 x(k + 1, t)

The equilibrium state of the system clearly is x e = 0. Checking stability of the equilibrium state we choose the following positive definite function

ϕ(x, k, t) = x T P x (10)

where P ∈ R n×n is a symmetric positive definite matrix. Since the function is continuous, conditions (b) and (c) of Theorem 2 are satisfied according to remark 1 to Theorem 2. Then, for z = [x(k, t + 1) T x(k + 1, t) T ] T one has ϕ[x(k + 1, t + 1), k + 1, t + 1] = z T [A 01 A 10 ] T P [A 01 A 10 ] z. Thus, we obtain

ϕ (x 12 , k, t; ρ) = z T Q z (11)

where Q = [A 01 A 10 ] T P [A 01 A 10 ] − diag(ρ P, (1 − ρ) P). Hence, the equilibrium state is asymptotically stable if the matrix Q is negative definite, since in this case the quadratic form (11), according to remark 3 to Theorem 3, has an upper bound because it is a continuous negative definite scalar function with maximum in x = 0.
Finally, since the function ϕ(x, k, t) in (10) satisfies conditions (a), (b) and (c) of Theorem 2 for all x with ξ → ∞, and since the change ϕ (x, k, t) is negative definite for all x if matrix Q is negative definite and condition (6) is satisfied, the equilibrium state x e is asymptotically stable in the large.
Remark Matrix Q can be negative definite only if matrices Q 11 = A T 01 P A 01 − ρ P and Q 22 = A T 10 P A 10 − (1 − ρ)P are negative definite. It is easy to see that the above condition can be satisfied only if ρ > max i |λ i | 2 and (1 − ρ) > max i |μ i | 2 , where λ i and μ i denote, respectively, the eigenvalues of matrices A 01 and A 10 , i = 1, . . ., n. This means, however, that the stability condition can be satisfied only if max i |λ i | 2 + max i |μ i | 2 < 1.
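As a numerical illustration of this remark, the sketch below builds the matrix Q for the quadratic candidate ϕ(x) = xᵀPx, so that Q has the block structure with Q11 and Q22 as stated above, and checks both its negative definiteness and the eigenvalue condition. The matrices A01, A10 and the choice P = I are illustrative assumptions, not data from the paper.

```python
# Numerical sketch of the stability check from Example 2.
import numpy as np

A01 = np.array([[0.3, 0.1],
                [0.0, 0.2]])
A10 = np.array([[0.4, 0.0],
                [0.1, 0.3]])
P = np.eye(2)                          # any symmetric positive definite P may be tried

# Necessary condition from the remark: max|λ|^2 + max|μ|^2 < 1.
lam = max(abs(np.linalg.eigvals(A01)))
mu = max(abs(np.linalg.eigvals(A10)))
print("max|λ|^2 + max|μ|^2 =", lam**2 + mu**2)

rho = 0.5 * (lam**2 + (1 - mu**2))     # any ρ with max|λ|^2 < ρ < 1 - max|μ|^2
A = np.hstack([A01, A10])
Q = A.T @ P @ A - np.block([[rho * P, np.zeros((2, 2))],
                            [np.zeros((2, 2)), (1 - rho) * P]])

eigQ = np.linalg.eigvalsh((Q + Q.T) / 2)
print("eigenvalues of Q:", np.round(eigQ, 4))
print("Q negative definite (sufficient for asymptotic stability):", bool(eigQ.max() < 0))
```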
Concluding remarks
The stability notion for nonlinear parameter-varying digital 2-D systems was presented. Then, stability conditions were formulated for nonlinear time-varying digital 2-D systems similar to the Fornasini-Marchesini model. In particular, Example 2 gives simple sufficient stability condition for 2-D system described by the linear time-invariant Fornasini-Marchesini model.
The 2-D Roesser model can be easily presented as the Fornasini-Marchesini one. Thus, results obtained for stability of the Fornasini-Marchesini model can also be easily applied to systems described by the Roesser model. Then, it is easy to find that the presented stability conditions are neither a simple consequence nor a simple generalization of the results for the Roesser model given, for instance, in Kurek (1995); they are similar but different. This is a consequence of the fact that the conditions are only sufficient, not necessary and sufficient.
It is, however, well known that there can be a lot of different only sufficient or only necessary stability conditions for the system.
The presented stability theorems are similar to the Lyapunov stability theorem for 1-D discrete-time systems and can be considered as a generalization. However, the presented theorems are not a simple consequence of the Lyapunov theorem since BC sets for 2-D systems are infinite dimensional, whereas they are finite dimensional for 1-D systems.
The presented results can be easily generalized to N-D systems.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. | 3,239.6 | 2012-06-29T00:00:00.000 | [
"Mathematics"
] |
Combination of Carbonate Hydroxyapatite and Stem Cells from Human Deciduous Teeth Promotes Bone Regeneration by Enhancing BMP-2, VEGF and CD31 Expression in Immunodeficient Mice
The objective of this study was to clarify the efficiency of a combination of stem cells from human deciduous teeth and carbonate apatite in bone regeneration of calvarial defects. Immunodeficient mice (n = 5 for each group/4 groups) with artificial calvarial bone defects (5 mm in diameter) were developed, and stem cells from human deciduous teeth (SHEDs) and carbonate hydroxyapatite (CAP) granules were transplanted with an atelocollagen sponge as a scaffold. A 3D analysis using microcomputed tomography was performed, and 12 weeks after transplantation, histological and immunohistochemical evaluations of the markers bone morphogenetic protein-2 (BMP-2), vascular endothelial growth factor (VEGF), and cluster of differentiation (CD) 31 were carried out. In the 3D analysis, regenerated bone formation was observed in the SHEDs and CAP groups, with the combination of SHEDs and CAP showing significantly greater bone regeneration than that in the other groups. Histological and immunohistochemical evaluations showed that combining SHEDs and CAP enhanced the expression of BMP-2, VEGF, and CD31, and promoted bone regeneration. This study demonstrates that the combination of SHEDs and CAP transplantation may be a promising tool for bone regeneration in alveolar defects.
Introduction
Regenerative medicine is a therapeutic alternative to transplantation that allows for the regeneration of dysfunctional tissues [1]. The key to cell-based regenerative medicine is stem cells, and currently, somatic mesenchymal stem cells (MSCs) are widely utilized. MSCs are unspecialized cells that can self-renew and differentiate into any other cell [2]. Advances in stem-cell biology have facilitated research into stem-cell sources. Miura et al. reported that SHEDs were present in primary dental pulp tissue [3]; SHEDs can be easily utilized because they can be obtained by less invasive techniques, as deciduous teeth are usually discarded [4]. Thus, our group has repeatedly focused on SHEDs in our investigations.
In previous studies, the bone-regeneration ability of SHEDs, dental pulp stem cells (DPSCs), and bone marrow MSCs (BMSCs) was at the same level in skull-deficient immunodeficient mice [5]. SHEDs also have a higher proliferative capacity than BMSCs [6]. Further, the cell population of SHEDs isolated from deciduous dental pulp exhibits a high expression of CD90, CD73, and CD105, which are positive markers of MSCs. Additionally, angiogenesis and bone-regeneration ability were higher in CD146-positive SHEDs than in heterogeneous SHED cells and CD146-negative SHED cells [7]. However, when bone defects such as cleft palates are present, the 3D morphology of the defect is complicated, and external mechanical loading occurs during oral function. Therefore, the use of combined carriers, which are artificial materials, is essential in cell transplantation for bone regeneration in the maxillofacial region.
Using a material with a composition similar to that of natural bone minerals is important for reducing inflammatory responses and achieving optimal resorption behavior [8].
Artificial bone materials such as hydroxyapatite and β-tricalcium phosphate composites (β-TCP) are applied in oral surgery and prosthodontic fields [9,10]. These inorganic materials have high affinity for osteoblasts but are difficult to clinically evaluate for bone regeneration because they are mostly non-resorbable or have long resorption times and are radiopaque. Further, non-absorbable carriers may inhibit tooth eruption and the artificial movement of teeth. Therefore, a carrier in which regenerated bone acquires early physiological bone metabolism is desirable. Unsintered carbonate apatite (CAP), wherein a part of the hydroxyapatite crystal structure is replaced by carbonic acid, is the main inorganic component of bone and teeth and has high bio-affinity and shape retention. Additionally, containing carbonic acid, CAP is highly soluble in acid, and is susceptible to resorption by osteoclasts [11]. The resorption properties of CAP could be attributed to the tendency of carbonates to reduce crystallinity in the apatite structure; this promotes bone remodeling or turnover. CAP undergoes only osteoclastic resorption; therefore, its absorption rate closely matches that of natural bone [12].
CAP promotes osteoblast differentiation and CAP artificial bones have high bone conductivity [12,13]. A comparison between CAP bone prostheses and autologous bone revealed that CAP was similarly absorbed and replaced by bone in osteoclast precursor cell-culture experiments and in a rat tooth extraction socket model [14]. In addition, CAP bone prostheses showed promising results in the biopsy-histological assessment upon implant entry and three years after implantation in two-way cases of maxillary sinus floor elevation [15,16].
We conducted a pilot study focusing on the usefulness of CAP [17]. Preliminary studies clarified that in a beagle canine jaw fissure model, bone regeneration occurs at the jaw fissure site after BMSC and CAP carrier transplantation, and movement of a tooth into the regenerated bone area is possible [17][18][19]. However, bone regenerative capacity and regenerated bone assessment have not been compared among SHEDs, CAP, and combined SHEDs and CAP transplantations, and the mechanism remains unclear. Therefore, this study aimed to compare bone-regeneration ability and evaluate bone regeneration after SHEDs, CAP, and combined SHEDs + CAP transplantations in immunodeficient mice with skull bone defects. We aimed to build a scientific rationale, based on the obtained results, for the use of this therapeutic strategy to promote optimal bone regeneration in cleft lip/palate.
Ethics Statement
Human deciduous teeth pulp harvest was approved by the Preliminary Review Board of the Epidemiological Research Committee of Hiroshima University (approval no. E-20-2). All experimental protocols were approved by the Ethics Committee for Animal Experiments of Hiroshima University School of Dentistry (approval no. A 20-81).
Cell Isolation
Upper right primary canine teeth were extracted from 11-year-old male patients undergoing orthodontic treatment at Hiroshima University Hospital; the SHEDs were isolated and cultured. Informed consent was obtained from the parents of all donors. After extraction, teeth were immediately soaked in phosphate-buffered saline (PBS) containing 100 mM amphotericin. The periodontal ligaments were dissected after the teeth were disinfected with isodine and hibitane. The teeth were split at the cementoenamel junction using an osteoclamp. Pulp tissue was then collected and dissected in 10 mL α-MEM with 4 mg/mL collagenase and 3 mg/mL dispase. Subsequently, it was transferred to 10 mL tubes containing collagenase and dispase solution and incubated at 37 • C and 5% CO 2 for a maximum of 30 min.
The tubes were then centrifuged for 5 min at 1500 rpm. The supernatant was aspirated, and the tissue was suspended in α-MEM with 20% (w/v) fetal bovine serum (FBS), 0.24 µL/mL kanamycin, 0.5 µL/mL penicillin, and 1 µL/mL amphotericin. The suspension was then cultured in a 35 mm cell-culture dish and incubated at 37 • C and 5% CO 2 . When at least 200 colonies were formed, the cells were removed from the culture dish, PBS containing 0.25% (w/v) trypsin and 1 mM ethylenediamine tetra-acetic acid was added, and the cells were passaged. After the first passage, the cultures were incubated at 37 • C and 5% CO 2 in 10% (w/v) FBS/α-MEM with the aforementioned antibiotics [5][6][7]. Cells from the sixth passage were used in this study.
Calvarial Bone Defect Immunodeficient Mouse Model
To prevent immunogenic and graft rejections, 6-week-old male immunodeficient mice (BALB/c-nu; Japan Charles River International Laboratories Inc., Yokohama, Japan) were used. Mice were fed non-fluorescent, alfalfa-free, solid food. An anesthetic consisting of midazolam, medetomidine, and butorphanol was applied before surgery. After each mouse was administered general anesthesia, a 5.0 mm-diameter calvaria defect was made using a trephine bur in the center of the calvaria, as previously described [5,7]. CAP (Cytrans Granules; GC Corporation, Tokyo, Japan) was ground using a nano-grinder (NP-100; Thinky Corporation, Tokyo, Japan) to produce a mean particle size of ~110 nm. The mean particle size was validated by scanning electron microscopy (SEM) (S-3400N, Hitachi High-Technologies, Tokyo, Japan) and particle-size analysis (scattering intensity and laser diffraction).
The graft material containing SHEDs and 110 nm-sized CAP was implanted into the defect area using an atelocollagen sponge (Mighty) as a scaffold.
Microcomputed Tomography (µCT) Analysis
The calvarial area was scanned by µCT (Skyscan1176; Bruker, Kontich, Belgium) at a resolution of 35 µm immediately after transplantation (t0) and at 4 (t1) and 8 weeks (t2). The scans were performed parallel to the coronal aspect of the calvaria. ZedView (Lexi, Tokyo, Japan) was used for 3D reconstructions of the microradiographic images, and RapidForm 2006 (INUS Technology, Seoul, Korea) and FreeForm (SensAble Technologies; Wilmington, MA, USA) were used, respectively, to cut and measure the images. Bone volume was measured by calculating the difference between the filled spaces at t1 and t2 in the 3D constructed defect of each group.
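The volume calculation just described reduces to counting mineralized voxels in the reconstructed defect and multiplying by the voxel volume. The sketch below shows this under simplifying assumptions: the binary masks are synthetic stand-ins for the segmented µCT volumes, the isotropic voxel size is taken from the 35 µm scan resolution stated above, and the segmentation itself (performed in the commercial software) is not reproduced.

```python
# Minimal sketch of the bone-volume computation from binary voxel masks.
import numpy as np

voxel_size_mm = 0.035                        # 35 µm isotropic resolution
rng = np.random.default_rng(0)
defect_t1 = rng.random((80, 80, 40)) < 0.10  # voxels classified as mineralized at t1 (synthetic)
defect_t2 = rng.random((80, 80, 40)) < 0.25  # voxels classified as mineralized at t2 (synthetic)

def bone_volume_mm3(mask):
    """Bone volume = number of mineralized voxels × voxel volume."""
    return mask.sum() * voxel_size_mm ** 3

v1, v2 = bone_volume_mm3(defect_t1), bone_volume_mm3(defect_t2)
print(f"t1: {v1:.2f} mm^3, t2: {v2:.2f} mm^3, newly formed bone: {v2 - v1:.2f} mm^3")
```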
Histological Evaluation via Hematoxylin and Eosin (H&E) and Masson's Trichrome (MT) Staining
After the mice were sacrificed, the regenerated tissue was fixed in 4% (w/v) paraformaldehyde in PBS, soaked for decalcification in 14% EDTA for 1 month, embedded in paraffin, and then cut into 5 µm-thick sections. H&E and MT staining were performed as previously described [5,7]. For MT staining, tissue sections were viewed under a fluorescence microscope (BZ-X800, Keyence, Osaka, Japan) coupled with BZ-II imaging analysis software (Keyence). A range was selected in the tissue section in which the 5 mm-diameter defect was created. The areas stained red indicated mature bone, and the areas stained blue indicated collagen fibers. These, along with the osteoid, were measured using BZ-II imaging analysis software (Keyence).
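To give a rough idea of how such a percentage-area measurement can be expressed in code, the sketch below classifies pixels of a selected region as red-dominant (mature bone) or blue-dominant (collagen/osteoid) with crude channel thresholds. It is only a stand-in for the BZ-II analysis: the image is synthetic, the thresholds are arbitrary assumptions, and a real histological workflow would require calibrated color deconvolution rather than this rule.

```python
# Rough sketch of a percentage-area measurement on an RGB region of interest.
import numpy as np

rng = np.random.default_rng(1)
region = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)  # synthetic RGB tile

r = region[..., 0].astype(int)
g = region[..., 1].astype(int)
b = region[..., 2].astype(int)
red_dominant = (r > 120) & (r > b + 30) & (r > g + 30)    # crude "mature bone" rule
blue_dominant = (b > 120) & (b > r + 30) & (b > g + 30)   # crude "collagen/osteoid" rule

total = region.shape[0] * region.shape[1]
print(f"mature bone area: {100 * red_dominant.sum() / total:.1f} %")
print(f"collagen/osteoid area: {100 * blue_dominant.sum() / total:.1f} %")
```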
Immunohistochemical (IHC) Analysis
IHC analysis was performed to detect bone morphogenetic protein-2 (BMP-2) expression. For the analysis of angiogenesis on the slice from the center of the transplant site, sections were stained for vascular endothelial growth factor (VEGF-A) and CD31 markers. After deparaffinization and dehydration of the tissue sections, Dako Protein Block was used to inhibit non-specific responses. Anti-VEGF-A antibody, anti-CD31 antibody, and anti-BMP-2 antibody were used as primary antibodies and allowed to react overnight at 4 °C. The primary antibodies were diluted in sterile PBS. After the sections were washed with PBS, they were treated with anti-rabbit IgG as a secondary antibody for 1 h at room temperature. The color was then developed with 3,3′-diaminobenzidine using a Histofine ® SAB-PO kit. Subsequently, sections were counterstained with hematoxylin. For VEGF-A and BMP-2 IHC staining, tissue sections were observed using a fluorescence microscope (BZ-X800, Keyence) and a range of 5 mm defects were selected in the tissue sections. The percentage of VEGF-A- and BMP-2-stained areas in the transplanted area was calculated using the BZ-II imaging analysis application (Keyence). For CD31 IHC staining, CD31-positive blood-vessel counts in four random ranges were calculated at 200× magnification to evaluate angiogenesis. Tissue sections were observed using fluorescence microscopy (BZ-X800, Keyence) and analyzed using the BZ-II imaging analysis application (Keyence).
Statistical Analysis
The Kruskal-Wallis test was performed using Bell Curve for Excel (SSREI). The differences among groups were analyzed using the Bonferroni method. Data are presented as mean ± standard deviation (SD). Significance was considered at p < 0.05 and p < 0.01.
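A minimal sketch of this group comparison is given below, using SciPy in place of the Excel add-in. The paper specifies Kruskal-Wallis with Bonferroni-adjusted group differences but does not name the pairwise test, so Mann-Whitney U with Bonferroni correction is used here as one common choice, and the bone-volume values are hypothetical (n = 5 per group).

```python
# Minimal sketch: Kruskal-Wallis across the four groups, then Bonferroni-corrected pairwise tests.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {
    "control":   [0.8, 1.0, 0.9, 1.1, 0.7],
    "SHEDs":     [1.6, 1.8, 1.5, 1.9, 1.7],
    "CAP":       [1.9, 2.1, 2.0, 2.3, 1.8],
    "SHEDs+CAP": [3.1, 3.4, 2.9, 3.6, 3.2],
}

h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))     # 6 pairwise comparisons
for a, b in pairs:
    _, p_pair = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: Bonferroni-adjusted p = {min(1.0, p_pair * len(pairs)):.4f}")
```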
SEM Analysis and 3D Evaluation of Regenerated Bone after In Vivo Transplantation with SHEDs + CAP, SHEDs, or CAP
The SEM analysis of CAP granules showed a mean particle size of 110 nm at 100× magnification (Figure 1a).
At t0, in all groups, no new bone formation was observed in the calvarial area. In the control group at t1 and t2, wound closure at the bone defect area was seen, but only a few newly regenerated bones were observed. At the center of the calvarial defects in the other groups, significant wound closure with newly generated bone was observed relative to that in the control group (Figure 1c). The SHEDs + CAP group had significantly greater bone volume at four and eight weeks after transplantation compared to that of the other groups. Additionally, significantly greater bone volume was observed in the CAP and SHEDs groups than in the control group (Figure 1d). At t1 (4 weeks), a small amount of newly formed bone was detected in the control and SHEDs groups, and newly formed bone was more prominent in the CAP and SHEDs + CAP groups than in the control group. Newly formed bone volume was significantly higher in the SHEDs, CAP, and SHEDs + CAP groups at t1 and t0 (d).
H&E Staining
In the H&E staining, newly formed bone was clearly observed in the SHEDs, CAP, and SHEDs + CAP groups. In contrast, only a few newly formed bone areas were observed in the control group. Biodegradation of the scaffold is shown as a blank area in Figure 2a.
MT Staining
In the MT staining, areas stained red indicated mature bone, and areas stained blue indicated collagen fibers and osteoids. In the control group, only a few collagen fibers and osteoids were observed. In contrast to the control group, the SHEDs + CAP group prominently exhibited collagen fibers, osteoids, and newly formed bone. In the SHEDs group, a small amount of newly formed bone was observed (Figure 2b). Among transplantation sites, the percentage area of mature bone was significantly higher in the SHEDs + CAP transplantation group than in the other groups. The SHEDs and CAP transplantation groups showed significantly enhanced percentage areas of mature bone compared to the control group (Figure 2c).
IHC Staining
IHC staining for BMP-2 was performed to observe bone formation, and staining for VEGF and CD31 was performed to observe angiogenesis.
Comparison of BMP-2 Expression in SHEDs, CAP, and SHEDs + CAP Groups
The SHEDs + CAP group had a prominent BMP-2-stained area at the scaffold center (Figure 3a). The percentage of the area stained for BMP-2 among transplanted sites was significantly higher in the SHEDs + CAP transplantation group compared to the other groups (Figure 3b). Compared to the control group, the SHEDs transplantation group showed a significantly higher percentage area of BMP-2 (Figure 3b). No significant differences in BMP-2 expression were observed between the CAP transplantation and control groups (Figure 3b).
Comparison of Angiogenesis in SHEDs, CAP, and SHEDs + CAP In Vivo Transplantation
After IHC staining for VEGF-A, the SHEDs + CAP transplantation group showed a significantly enhanced percentage of the VEGF-A area in the transplantation site relative to the other groups (Figure 4a,b). Additionally, the CAP transplantation and SHEDs transplantation groups showed significantly higher proportions of the VEGF-A area compared to the control group (Figure 4b). After IHC staining for CD31 expression, a significant increase in new blood vessels was observed in the SHEDs + CAP group, but only a few new blood vessels were observed in the control group (Figure 5a). There were significantly more blood vessels in the SHEDs + CAP group (10.85 ± 1.76 blood vessels); however, no significant difference was found in the number of blood vessels between the SHEDs and SHEDs + CAP groups. SHEDs, CAP, and SHEDs + CAP groups had significantly more blood vessels than the control group (Figure 5b). (a) IHC analysis of CD31 expression revealed new blood vessels (indicated using red arrows) in the SHEDs, CAP, and SHEDs + CAP groups compared to the control group. (b) SHEDs + CAP had significantly more blood vessels than the CAP group and the control group. There was no significance difference in the number of blood vessels between the SHEDs and SHEDs + CAP groups. n = 5 for each group, ** p < 0.01, * p < 0.05. CD31 scale bar = 100 µm.
Discussion
Transforming apatite carriers into smaller-sized granules may facilitate transplantation and lead to earlier bone formation and carrier resorption. Therefore, in this study, CAP granules were crushed to an average size of 110 nm, and CAP transplantation was performed in calvarial defects in immunocompromised mice. µCT analysis at one and two months after transplantation showed significantly more bone regeneration in the CAP transplantation group than in the control group. Additionally, comparable bone regeneration was observed in the SHEDs and CAP transplantation groups, although the difference between them was not significant. Moreover, marked mature bone formation was confirmed in the CAP transplantation group by MT staining two months after transplantation. No residue of the CAP carrier was observed. In recent studies, CAP promoted osteoblast differentiation and CAP artificial bone had high bone conductivity [13]. Comparing CAP bone prostheses with autologous bone revealed that CAP was similarly absorbed and replaced by bone in osteoclast precursor cell-culture experiments and a rat tooth extraction socket model [14]. Additionally, good results were reported from the biopsy-histological assessment in CAP bone prostheses upon implant entry and three years after implantation in two-way cases of maxillary sinus floor elevation [15,16]; our results are consistent with these previous findings.
Osteogenesis in MSCs can be efficient with sustained stimulation by osteo-inductive bio-factors, such as BMP-2. BMP-2 promotes bone formation by directing MSC differentiation into osteoblasts or osteocytes [20,21]. Our previous in vitro study demonstrated that SHEDs have a higher BMP-2 expression compared to BMSCs or DPSCs [6]. Based on these findings, we investigated SHEDs transplantation combined with CAP and the consequent changes in BMP-2 expression. The SHEDs + CAP transplantation group clearly showed significantly higher bone regeneration than the control, CAP, and SHEDs transplantation groups. Additionally, BMP-2 IHC staining showed that the percentage of areas stained for BMP-2 among the transplantation sites was significantly higher in the SHEDs + CAP transplantation group than in the control, SHEDs, and CAP transplantation groups. Moreover, compared to the control group, the SHEDs transplantation group had significantly higher BMP-2 expression. A recent study showed that transplantation in Wistar rats using SHEDs-CAP scaffolds can enhance BMP-2 and BMP-7 expression, while the attenuation of MMP-8 expression strongly indicated that SHEDs-CAP scaffolds may be a promising treatment for bone regeneration [22]. CAP scaffolds have osteoinductive and osteoconductive properties that can induce a microenvironment suitable for SHEDs to differentiate into an osteogenic lineage [22]; this is consistent with our findings. However, we found no significant differences in BMP-2 expression between the CAP transplantation and control groups. This raises the possibility that the BMP-2 detected here was produced by the transplanted SHEDs and acted on the transplanted tissue. The effects of BMP-4, BMP-6, and other growth factors such as basic fibroblast growth factor (bFGF) should be investigated and elucidated.
VEGF is a signaling protein that regulates angiogenesis, the formation of new blood vessels, a process that proceeds in four stages and occurs throughout life. VEGF plays a dual role in this process: first, it acts on endothelial cells to promote migration and proliferation, and second, it stimulates osteogenesis through the regulation of osteogenic growth factors [23]. In the present study, the expression levels of VEGF and CD31 were significantly higher in the SHEDs and SHEDs + CAP groups than in the other two groups. This is consistent with the results of a previous study in which dental-derived MSCs, upon receiving pro-angiogenic signals, secreted a range of pro-angiogenic factors, including VEGF, FGF2, and platelet-derived growth factor (PDGF), which bind to their corresponding receptors on endothelial cells [24].
BMP-2 and VEGF are important factors involved in the coordination of endothelial cells and osteoblasts, which is mediated by multiple growth factors and cytokines during bone formation [25]. The combination of VEGF and BMP-2 has been studied for its interacting effects on angiogenesis and osteogenesis in MSCs [26]. The formation of supportive vascular networks can be stimulated by VEGF, which affects BMP-2 and enhances bone formation [27]. Additionally, VEGF may function as a mobilization cytokine for endothelial progenitor cells, which promote bone regeneration. Endothelial cell and osteoblast migration from neighboring tissues can be stimulated by VEGF and BMP-2 released from scaffolds [28]. Therefore, BMP-2, VEGF, and bFGF expression may be regulated by reciprocal signaling pathways and, via these interactions, may synergistically promote MSC osseous differentiation [29]. Accordingly, in vitro studies of SHEDs and CAP should be conducted in the future to elucidate the signaling pathways underlying their bone-regenerative mechanisms of action.
Our study suggests that, compared to SHEDs or CAP transplantation alone, SHEDs + CAP transplantation can enhance BMP-2 and VEGF expression and induce superior angiogenic and osteogenic potential. However, before SHEDs and CAP can be applied clinically, various factors must be studied, including cell number, cell population homogeneity, growth factors, CAP granule size, and the detailed mechanisms of bone regeneration.
Conclusions
This study led to the following conclusions: (1) the transplantation of SHEDs with CAP granules can induce more new bone formation than the transplantation of either SHEDs or CAP alone, and (2) the transplantation of SHEDs with CAP granules can enhance BMP-2 expression and promote angiogenesis by enhancing VEGF and CD31 expression. These results indicate that the SHEDs + CAP combination may be a promising tool for bone regeneration in alveolar defects.
Informed Consent Statement: Informed consent was obtained from the parents of all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
THE EFFECT OF STRUCTURAL REINFORCEMENT IN THE SOLID ANKLE FOOT ORTHOSIS: A FINITE ELEMENT ANALYSIS
Background: The Solid Ankle Foot Orthosis (SAFO) restricts plantarflexion and dorsiflexion. It helps improve foot clearance, which can prevent foot drop. During gait, a specific area of the orthosis typically experiences high pressure, which leads to cracks in that area. Structural reinforcement can be introduced to tackle this problem. This study aims to
INTRODUCTION
Stroke is one of the leading causes of disability worldwide [1]. Gait abnormalities affect a large proportion of patients who have recovered from a stroke. Around 20-30% of patients who survive an acute stroke lose their ability to walk, and some need an assistive device to perform their daily activities [2]. Post-stroke gait is characterized by reduced walking speed, increased energy consumption, asymmetry, drop foot, and a lack of muscle activity in the stance phase of the gait cycle [3].
Drop foot is a symptom of a neuromuscular disorder that can be temporary or permanent. In patients with drop foot, the tibialis anterior muscle is weakened, eliminating the patient's ability to perform dorsiflexion to lift the forefoot from the ground [4]. Around 20% of post-stroke patients suffer from drop foot [5].
Apart from drop foot, post-stroke gait is also characterized by reduced muscle activity during the stance phase. Heel rise marks the transition from mid-stance to terminal stance. It begins with internal plantarflexion of the ankle, after which the foot pushes against the ground to generate forward propulsion.
Under normal conditions, the calf muscles reach peak contraction during the terminal stance. This contraction compensates for the external moment arising from the push of the foot against the ground during heel rise [6]. AFOs are commonly divided into two types, passive and active, the latter having a control system that can adapt to the human gait [7]. Moreover, passive AFOs are divided into articulated and non-articulated types, which can be used by post-stroke patients with insufficient balance and unstable stature [8].
Solid-type AFOs (SAFOs) can be applied to post-stroke patients with weakness of the tibialis anterior muscles (dorsiflexion muscles) and of the calf muscles that compensate during heel rise. This is possible because the SAFO prevents both plantarflexion and dorsiflexion [9].
When plantarflexion is restricted, the drop foot problem is resolved and better foot clearance is provided. In addition, restricting plantarflexion inhibits the heel-rise process and eliminates the terminal stance phase. Under this condition, the stabilizing function of the calf muscles, which are weakened in post-stroke patients, is no longer needed [10].
Most AFOs (73%) are custom-fabricated, and the most frequently used material is thermoplastic (83%). The production process takes a long time and is performed manually, so the skill of the orthotist determines the quality of the fabricated AFO. Therefore, an initial quantification of crucial AFO characteristics, such as stiffness and thickness, would be very beneficial in determining the resistance that an AFO can provide [11].
Orthosis fabrication processes such as AFO manufacture can be carried out using Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) to improve the quality of health services for rehabilitation patients [12][13][14]. The application of both can be reflected in the use of the Finite Element Method (FEM) to model an AFO for a specific patient. This method allows orthotists to evaluate the mechanical behavior and the distribution of stress concentrations before the AFO manufacturing process. This reduces the occurrence of errors, saving time and fabrication material [11].
A SAFO experiences mechanical compressive forces during the gait cycle, which can reach a critical point and eventually lead to cracks in specific areas of the SAFO. Orthotists address this problem by applying structural reinforcements of various shapes and at different positions. Prefabricated SAFOs have been reported to have a fracture ratio of one to two relative to custom SAFOs [15]. In their research, Gomes et al. [16] used structural reinforcements on the retromalleolar section with different dimensions; the thickest reinforcement provided the best endurance.
METHODS
The design phase of the SAFO 3D model was carried out using Autodesk Fusion 360. Fig. 1 shows the 3D SAFO models designed with varying reinforcement lengths. The reinforcement lengths applied to the models were 260 mm, 130 mm, and no reinforcement.
The thickness of the reinforcements was 6 mm in all models. The simulation started with selecting the material that forms the SAFO model. The materials used were carbon fiber (CF) and polypropylene (PP). After choosing the material, the boundary conditions were set, including the fixed support and the direction and magnitude of the applied load.
The load was applied to the hindfoot, midfoot, and forefoot. These loads represent human gait at initial contact, midstance, and terminal stance, respectively. The magnitude of the applied loads was 500 N.
Apart from describing the external forces during the gait cycle, a load was also applied to the posterosuperior part of the SAFO. This load was directed perpendicular to the sagittal plane, pointing towards the medial region of the SAFO, and was used to measure the stiffness of the SAFO models.
A convergence study was conducted to find the optimal mesh size for the entire simulation [17,18]. The design used as the experimental model during the convergence study was model III (reinforcement length of 260 mm). The load case simulated for the convergence study was the initial contact of the gait cycle. The convergence study results are shown in Fig. 2.
Fig. 2. Convergence Study of SAFO Model
Fig. 2 shows that the stress results were stable, i.e., convergent, from 130,917 to 609,675 elements. Therefore, 281,375 elements were considered an optimal choice. The results of this convergence study indicated that a mesh size of 0.003 m was optimal, and this mesh size was used throughout the simulation. A convergence study with the same method has also been carried out by Basri et al. [19].
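To make the convergence criterion concrete, the short sketch below (not the authors' code; the tolerance and the placeholder stress values are assumptions for illustration) picks the coarsest mesh whose maximum stress changes by less than a chosen tolerance relative to the previous refinement.

```python
def first_converged(element_counts, max_stresses, tol=0.02):
    """Return the first element count whose max stress changes by < tol
    relative to the previous (coarser) mesh, or None if never converged."""
    for i in range(1, len(element_counts)):
        rel_change = abs(max_stresses[i] - max_stresses[i - 1]) / abs(max_stresses[i - 1])
        if rel_change < tol:
            return element_counts[i]
    return None

# Hypothetical stress values (MPa) for four refinements of model III:
elements = [52_000, 130_917, 281_375, 609_675]
stresses = [41.2, 44.5, 44.6, 44.8]
print(first_converged(elements, stresses))  # -> 281375 with these placeholder values
```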
The model's mechanical behavior in response to loading was analyzed. The stress simulation results were used in Equation 1 to calculate the safety factor, which is obtained by dividing the maximum stress the material can withstand (its ultimate tensile strength) by the equivalent stress obtained from the simulation [20]:

Safety factor = σ_UTS / σ_vM    (1)

The safety factor shows the safety level of the simulated model. A high safety factor value indicates that the AFO has high endurance [21] and can therefore be considered safe to use. The level of stiffness of each model was also analyzed. It was determined from the deformation obtained in the loading simulation of the posterosuperior AFO (cuff area). Fig. 3 shows the schematic calculation of the AFO stiffness level.
The highest deformation value in the simulation was used in Eq. 2 to obtain the angular deflection, and the angular deflection was then used in Eq. 3 to determine the rotational stiffness, whose value represents the stiffness level of each AFO model. A similar analysis of stiffness using rotational stiffness was previously carried out by Chen et al. [15].
The von Mises stress data were used to calculate the safety factor of the model, which is depicted in Figure 6. The SAFO with CF material had a higher safety factor than the one with PP material; the material's Ultimate Tensile Strength (UTS) plays a significant role in the safety factor. Regarding subphases, the safety factor is highest during initial contact, indicating that both SAFOs are safe during this subphase. The concern in Figure 6 is with the midstance and terminal stance for the SAFO with PP material, which showed values under 100%, indicating that the SAFO would be prone to cracking or failure during those two subphases.
Rotational stiffness indicates how much a structure resists rotation when subjected to forces [17]. For example, under cuff loading, PP SAFO variation III (with 260 mm reinforcement) had a rotational stiffness of 87.67 Nm/°, whereas CF SAFO variation III yielded 228.99 Nm/°. AFO models with a higher stiffness level would have a better effect on stroke patients with weakened tibialis anterior and calf muscles [23,24]. Higher rotational stiffness provides more ankle stability to the user because a higher force is needed to bend the SAFO.
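As an illustration of these calculations, the sketch below computes a safety factor and an approximate rotational stiffness from simulation outputs. The exact forms of Eqs. 1-3 are not reproduced in the extracted text, so the formulas below, as well as the UTS value, lever arm, and deflection, are assumptions for illustration only, not the authors' values.

```python
import math

def safety_factor_percent(uts_mpa, max_von_mises_mpa):
    """Assumed form of Eq. 1: material UTS divided by the peak equivalent
    (von Mises) stress from the simulation, expressed as a percentage."""
    return 100.0 * uts_mpa / max_von_mises_mpa

def rotational_stiffness_nm_per_deg(force_n, lever_arm_m, max_deflection_m):
    """Assumed reading of Eqs. 2-3: angular deflection from the cuff deflection
    and lever arm, then stiffness = applied moment / angular deflection."""
    moment_nm = force_n * lever_arm_m
    theta_deg = math.degrees(math.atan(max_deflection_m / lever_arm_m))
    return moment_nm / theta_deg

# Placeholder inputs: the CF stress value comes from the text; everything else is made up.
print(safety_factor_percent(uts_mpa=600.0, max_von_mises_mpa=43.959))
print(rotational_stiffness_nm_per_deg(force_n=500.0, lever_arm_m=0.3, max_deflection_m=0.004))
```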
Comparing the stress results of the gait-cycle simulations with respect to material showed that the CF SAFO yielded a lower maximum stress than the PP SAFO. For example, PP SAFO variation I (without reinforcement) during terminal stance had a maximum von Mises stress of 46.836 MPa, whereas CF SAFO variation I during the same phase produced a maximum von Mises stress of 43.959 MPa.
The safety factors obtained from the maximum stresses showed that the CF SAFO was far superior to the PP SAFO. The simulated stresses of the CF SAFO model in all phases and variations were lower than the material's ultimate tensile strength, and the safety factor of the CF SAFO model in all phases and variations was above 100%. This result showed that all CF SAFO variation models are safe for use during the gait cycle. Considering the high safety factors of the CF SAFO, the thickness of the manufactured CF SAFO could be reduced in the future for material and cost efficiency [25].
The Effect of Structural Reinforcement Length
The stress results for the different reinforcement lengths showed a decrease in the von Mises stress as the length of the applied reinforcement increased. This can be observed in variation III (with 260 mm reinforcement), which yielded the lowest stress compared to the other model variations under the same gait subphase and material. This result agrees with the study of Gomes et al. [16], which showed that stress decreased as the length of the applied reinforcement increased.
The decrease in the stress obtained during the gait simulation with increasing reinforcement length is due to a change in the SAFO geometry. The longer the reinforcement used in the model, the larger the model's volume. This higher volume causes the stress to be distributed over more elements, which reduces the stress obtained during the gait simulation [24].
The maximum stress in all gait simulations was found in the ankle trimline area. The function of a SAFO, providing ankle stability to the patient, requires high rigidity in the ankle trimline area [26]. This results in a low flexion tolerance in that area. Under this circumstance, that specific area experiences a higher stress concentration than others, making it vulnerable to cracks [22].
The comparison of deformations during the cuff loading simulation showed a decrease in deformation as the reinforcement length increased. These results indicate that the reinforcement affects the geometry of the designed SAFO model, stiffening the region where the maximum stresses accumulate [22].
Despite this, when the SAFO is applied, the terminal stance is eliminated due to its high rigidity.
Therefore, although the PP SAFO simulations during terminal stance in all variations predicted failure due to cracks, the PP SAFO is still safe for patient use because the terminal stance has already been eliminated.
CONCLUSIONS
In general, the toughness of the SAFO against the forces occurring during the stance phase of the gait cycle increases as the length of the applied reinforcement increases. Comparing the materials under the same variations and phases, it can be concluded that the CF SAFO provides better toughness than the PP SAFO. A comparison of the simulation results in terms of reinforcement length showed that the structural stiffness of the SAFO model increases as the length of the applied reinforcement increases. Furthermore, for the same reinforcement variation, the CF SAFO gives a better stiffness level than the PP SAFO.
CF SAFO variation II gives the best parameter combination. Based on the safety factor calculation, this model has the highest result at each simulated phase of the gait cycle. In terms of the level of stiffness providing ankle stability, CF SAFO variation II also gives the best result, with a rotational stiffness of 246.52 Nm/°. Applying dynamic loading should bring the simulation results closer to the real gait cycle, and fatigue analysis could be carried out to examine the behavior of the SAFO model under repeated use.
Fig. 1. The dimensions of the reinforcements for each SAFO variation: (a) Model I, (b) Model II, (c) Model III (unit: mm).
Fig. 6. Safety factor of the three SAFO models with different structural reinforcements in three subphases (initial contact, midstance, and terminal stance): (a) PP and (b) CF.
The deformations obtained show the level of structural stiffness of each SAFO model variation. For PP, the highest level of stiffness was obtained in variation II, with a value of 94.88 Nm/°. Variation II also yielded the highest level of stiffness for the CF material, with a value of 246.52 Nm/°. The effect of increased stress due to the large moment arm during terminal stance is evident in the stress results of all the PP SAFO variations: all safety factors of the PP SAFO during terminal stance are below 100%. This result shows that the stress during terminal stance exceeds the ultimate tensile strength of PP. Hence, all variations of the PP SAFO would be prone to failure during terminal stance, causing cracks to appear in the ankle trimlines.
"Engineering",
"Mathematics"
] |
The double-edged sword of AI: Ethical Adversarial Attacks to counter artificial intelligence for crime
Artificial intelligence (AI) has found a myriad of applications in many domains of technology and, more importantly, in improving people's lives. Sadly, AI solutions have already been utilized for various violations and thefts, even receiving the name AI for Crime (AIC). This poses a challenge: are cybersecurity experts thus justified in attacking malicious AI algorithms, methods, and systems in order to stop them? Would that be fair and ethical? Furthermore, AI and machine learning algorithms are prone to being fooled or misled by so-called adversarial attacks. However, adversarial attacks could also be used by cybersecurity experts to stop criminals using AI and to tamper with their systems. The paper argues that attacks of this kind could be named Ethical Adversarial Attacks (EAA) and that, if used fairly, within regulations and legal frameworks, they would prove to be a valuable aid in the fight against cybercrime.
Introduction
Artificial intelligence has been replacing many human activities. It has brought about a major revolution in countless domains of people's lives, such as education, Industry 4.0, data science, transport, and healthcare. Usually, AI solutions outperform humans in solving complex tasks of prediction, handling incomplete data, and data mining [13]. Undoubtedly, automation has many advantages, but it also poses a number of threats. These do not only result from unintentional errors made by machines, which are usually the effect of improperly planned learning, but can also be caused by intentional action, for example, by injecting incorrect data into training datasets. This particular action is called an adversarial attack. In other words, it consists in cybercriminals disrupting the correct machine learning process so that the trained model serves their criminal purposes, as shown in Fig. 1. Therefore, as in the game of 'rock, paper, scissors', the AI arms race continues, creating new and better tools and methods to stop AI for Crime (AIC) and stay one step ahead of cybercriminals. One viable cybersecurity solution could be the application of Ethical Adversarial Attacks (EAA), the concept of which is introduced in this paper.
Good and bad scenarios of using AI
There are both optimistic and pessimistic possible scenarios of using artificial intelligence. Given the outcomes of its possible application, AI may be seen as a double-edged sword.
AI to do good things
As widely known, AI is nowadays increasingly used in many domains of our lives to help people (e.g., to make decisions, predict, and solve complex problems). There is a myriad of such applications and deployments of AI solutions, discussed, to name just a few, in [4,5,12]. Actually, due to the broad range of applications, as well as their complexity, it would probably be impossible to mention all of them here. Nevertheless, AI technologies are commonly believed to be effective, reliable, created with the best intentions, and used to help and do good things within the framework of regulations and societal expectations.
AI designed to do bad things intentionally
Unfortunately, as with all technologies, there is the possibility of misusing them for bad purposes. AI technologies may be utilized by criminals to enable the spreading of fake news, perform cyberattacks, commit computer crimes, launder money, steal data, etc. [1,2]. The malicious use of AI has become so widespread that the term AI for Crime (AIC) has been introduced [7].
Therefore, researchers and societies, as well as law enforcement agencies, need to be prepared for these new, modern, and sometimes unprecedented AI-supported crimes, and, most importantly, should be aware that such crimes have become a part of the current ecosystem, especially on the internet.
One interesting yet alarming example of AIC is the situation in which criminals or hackers attack (or fool) normally working, legal machine learning and artificial intelligence solutions, which in turn may result in their malfunctioning. Such practices are termed adversarial machine learning; several classes of such attacks on AI systems have already been distinguished, such as evasion attacks, poisoning attacks, exploratory attacks, and many more. As a result, crucial AI systems, such as those used for medical image classification or those applied in intelligent transport and personal cars, could, while under attack, generate mistakes and faults or simply be fooled; all this might result in considerable harm.
So far, such attacks have not been common. However, there are theoretical advances and considerations that foresee adversarial attacks as an emerging threat. For example, it has been shown that skillfully crafted inputs can affect artificial intelligence algorithms to sway the classification results in a fashion tailored to the adversary's needs [3], and that successful adversarial attacks can change the results of medical image classification or healthcare systems [8], as well as of other decision support systems.
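To illustrate the mechanism of an evasion-style attack in the simplest possible setting, the sketch below applies an FGSM-like perturbation to a toy logistic-regression classifier built from made-up weights; it is purely illustrative and does not correspond to any real system or to the attacks cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1      # toy "trained" logistic-regression parameters
x = rng.normal(size=5)              # a legitimate input
y_true = 1.0                        # its correct label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p = sigmoid(w @ x + b)
grad_x = (p - y_true) * w           # gradient of the cross-entropy loss w.r.t. the input
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)   # FGSM step: bounded move that increases the loss

print("clean prediction      :", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```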
Cybersecurity and ethics
Here, it should be clarified why one should be concerned about the countermeasures in cybersecurity being "ethical" at all. In substance, cybersecurity is the antithesis of cybercrime. It encompasses the concepts, technologies, tools, best practices, and all the other diverse elements of the complex ecosystem whose objective is to mitigate cyberattacks, protect people's assets, eliminate vulnerabilities in systems, and so on. Yet, despite the domain being wrongly perceived as purely technical, the results of such actions (or the lack thereof) are highly likely to influence various privileges of the individual, or even to infringe basic human rights [10]. Thus, ethics and ethical behaviour ought to be inescapably taken into consideration in all cybersecurity-related planning, as a way of guaranteeing the protection of people's freedom and privacy [9].
Should Ethical Adversarial Attacks become a conventional cybersecurity tool?
In the authors' opinion, one of the most crucial domains of research in AI and security should be devoted to countering adversarial machine learning and proposing effective detectors [11]. Even though such attacks have not been carried out 'in the wild' yet, one can expect them to occur soon. Efforts must thus be made so that cybersecurity experts are sufficiently prepared to tackle adversarial machine learning. One of the possible countermeasures and solutions to AIC, apart from detection mechanisms, could be attacking the AI and ML solutions used by criminals and wrongdoers in order to stop them. An example of such an attack could consist in changing the labels of fraudulent transactions so that this type of transaction is no longer detected by the trained fraud detection system. It should also be noted that AI, like any new technology, may fall into the wrong hands and then be used as a powerful cybercrime tool. Criminals can use AI to conceal malicious code in benign applications or to create malware capable of mimicking trusted system components. Hackers can also execute undetectable attacks, as these blend with an organization's security environment; for example, although TaskRabbit was hacked, compromising 3.75 million users, investigations could not trace the attack. To combat hackers, AI is also used to improve computer system security through continuous monitoring, network data analysis for intrusion detection and prevention, antivirus software, etc. Still, this approach is rather reactive and mostly focuses on damage control. Thus, it is worth considering whether cybersecurity experts should start resorting to an ethical method modelled on adversarial attacks to counteract the activity of criminals. Such an approach could be named an Ethical Adversarial Attack, as depicted in Fig. 2.
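As a purely illustrative companion to the label-changing example above, the sketch below shows, on synthetic data, how flipping the labels of one class in a training set degrades a classifier's ability to detect that class. The dataset, the flip fraction, and the model are arbitrary choices made for demonstration, not a prescription for a real EAA.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic, imbalanced "transactions": class 1 plays the role of fraud.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def fraud_recall(train_labels):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, train_labels)
    return recall_score(y_te, clf.predict(X_te))

# Poison the training labels: relabel most of the "fraud" samples as benign.
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
fraud_idx = np.where(y_tr == 1)[0]
flipped = rng.choice(fraud_idx, size=int(0.8 * len(fraud_idx)), replace=False)
y_poisoned[flipped] = 0

print("recall, clean labels   :", fraud_recall(y_tr))
print("recall, poisoned labels:", fraud_recall(y_poisoned))
```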
Therefore, the authors would like to introduce the EAA concept, i.e., the postulate to discuss and acknowledge ethical adversarial machine learning, which would stop, fool, or successfully attack AI/ML algorithms designed with malicious intent to harm societies. Such tools and techniques should be created along with relevant legal and ethical frameworks. Even more importantly, the authors believe that methods of this kind should be included in national and international research strategies and roadmaps. Naturally, although this might prove to be a very effective tool for fighting cybercrime, it is crucial for such AI solutions to be explainable and fair, following the xAI (explainable AI) paradigm [6]. This way, all users and societies will be able to understand how and why EAAs are applied, and that, despite stemming from the tools utilized by criminals, the ethical attacks are in fact designed to do good and to protect IT systems and citizens. Successful implementation of such a strategy would also mean that a range of ethical issues would have to be considered. One of them is that, inverting the Biblical injunction 'do not be overcome by evil, but overcome evil with good' (Romans 12:21), one would here be overcoming evil with evil. Another dilemma concerns the degree of confidentiality that would need to be preserved: on the one hand, making the results public helps other researchers in their fight against cybercrime; on the other hand, cybercriminals may use the very same results to dodge the cybersecurity measures. If the ethical questions of EAAs were properly addressed, this would also contribute to building greater trust in the solution among citizens as well as businesses and policy-makers.
Conclusion
In this paper, the concept of Ethical Adversarial Attacks has been introduced. The authors have postulated discussing EAA as the answer in the arms race against adversarial attacks and the misuse of AI systems (AI for Crime). The goal of this paper is to spark interdisciplinary discourse regarding the requirements and conditions for the fair and ethical application of EAAs.
"Computer Science"
] |
Accelerated simulation methodologies for computational vascular flow modelling
Vascular flow modelling can improve our understanding of vascular pathologies and aid in developing safe and effective medical devices. Vascular flow models typically involve solving the nonlinear Navier–Stokes equations in complex anatomies and using physiological boundary conditions, often presenting a multi-physics and multi-scale computational problem to be solved. This leads to highly complex and expensive models that require excessive computational time. This review explores accelerated simulation methodologies, specifically focusing on computational vascular flow modelling. We review reduced order modelling (ROM) techniques like zero-/one-dimensional and modal decomposition-based ROMs and machine learning (ML) methods including ML-augmented ROMs, ML-based ROMs and physics-informed ML models. We discuss the applicability of each method to vascular flow acceleration and the effectiveness of the method in addressing domain-specific challenges. When available, we provide statistics on accuracy and speed-up factors for various applications related to vascular flow simulation acceleration. Our findings indicate that each type of model has strengths and limitations depending on the context. To accelerate real-world vascular flow problems, we propose future research on developing multi-scale acceleration methods capable of handling the significant geometric variability inherent to such problems.
Introduction
The motivation for accelerating vascular flow simulations
Despite the widespread use of computational models across many scientific disciplines, their use in real-time and many-query contexts is limited by their high computational cost. These scenarios frequently arise in vascular blood flow modelling. Real-time vascular flow simulations could provide guidance to clinicians prior to performing a treatment procedure or provide near-instant feedback during the procedure [1,2]. Many-query vascular flow simulations can be used to iteratively design new vascular implements, establish safety and performance measures for treatment devices, and simulate interventions on a population scale through so-called in silico trials [3].
Coupling the haemodynamics to solid mechanics or biochemical reaction models may also be required in certain applications. Fluid-structure interaction (FSI) is required when vessel distensibility is important or when there is a complex interaction between blood flow and valves or implanted devices [7-10]. Biochemical reactions are crucial in modelling thrombosis and endothelialisation, which depend on interactions between blood and the blood-contacting surfaces of devices [11,12]. The constitutive nature of blood adds additional complexity: it is a suspension containing various biochemically active particles and molecules, meaning that multi-phase, multi-component flow-biochemistry models may be required when modelling flow-thrombosis in small vessels [11,13].
As well as being multi-physical in nature, the length and time scales in vascular flow problems can differ greatly. Vascular flow is inherently pulsatile, which leads to features such as flow separation, vortex transport, mixing regions and impingement varying topologically throughout the cardiac cycle [14]. Variation in length scale and morphology can also influence these flow features. This leads to varying flow regimes in different regions of the vasculature and at different times of the cardiac cycle. Vascular flow modelling encompasses short-term processes such as systemic haemodynamics, autoregulation and recanalisation in addition to long-term processes such as remodelling and thrombosis [15-18]. Physiological changes due to factors such as age and lifestyle also have an impact on various flow problems. Vastly different length scales are also present, with thrombosis and endothelialisation happening at the molecular level on the micro-scale, whereas systemic blood flow occurs in arteries with diameters up to a few centimetres.
Nonlinear effects further complicate vascular flow modelling. This can result from the convective nonlinearity in the Navier-Stokes equations, the geometric complexity of blood vessels, or the interactions across different length and time scales between blood flow and other physical and physiological phenomena. Nonlinear flow features are often found in the presence of vascular pathologies such as stenosis, atherosclerosis, aneurysms or valve defects [19-22]. Flow-device interactions can be an additional source of nonlinearity [23-25].
The most prominent complexities in vascular flow modelling can be summarised as: (i) nonlinearity, (ii) geometrical complexity, (iii) multi-physics, (iv) multi-scale in time, and (v) multi-scale in space. In practice, assumptions can be made to simplify or eliminate these complexities for most problems, allowing for successful computational modelling. When aiming to accelerate vascular flow simulations, problem-specific approaches that are suited to handling particular types of complexity will be required depending upon the specific target application.
Reduced order models and machine learning for acceleration
Simulation acceleration refers to reducing the run time of computational models and is typically achieved through modelling assumptions and simplifications. Reduced order models (ROMs) are low-order representations of high-order models that preserve essential model input-output behaviour at the cost of some model accuracy, and they are a common approach for accelerating expensive computational models [26,27]. ROMs can be categorised into two families: a priori ROMs and a posteriori ROMs. The former seek to reduce the order of the system prior to solving the high-dimensional model, using techniques such as spatial dimension reduction (SDR) or proper generalised decomposition (PGD). A posteriori ROMs are data-driven techniques that depend on first solving the high-dimensional model or acquiring experimental data to generate snapshot solution fields. Snapshot data are decomposed into a reduced representation using, for example, proper orthogonal decomposition (POD) [28-31], dynamic mode decomposition (DMD) [32,33] or variants thereof. The reduced representation can then be advanced in time directly or combined with projection or interpolation techniques to construct a ROM. There are a multitude of ROM techniques, some of which have been applied to vascular flow problems. Recent advances in machine learning have improved some ROM methodologies and provided alternative techniques to accelerate simulations. Machine learning acceleration methods operate under a similar paradigm to many ROM techniques, with an expensive offline training phase that primes the model for fast online inference in new geometries, parameter values or time points. There are various ways to use machine learning in simulation acceleration. Machine learning ROMs typically use machine learning to augment or replace a component of a ROM, or they use machine learning entirely in place of existing ROM components [34,35]. Physics-informed machine learning strategies are another possibility. In this approach, flow measurements are supplemented by additional constraints based on the underlying governing equations and boundary conditions [36]. Physics-agnostic techniques ignore the underlying physics of the problem, but instead use large amounts of data to identify mappings from images or geometries to flow quantities of interest [37]. Other techniques include tailor-made networks designed to handle point-cloud data [38,39] and operator learning strategies [40,41]. Given the relatively recent emergence of machine learning simulation techniques, they have not yet been widely applied to the acceleration of vascular flow simulations.
Overview
This review aims to provide an overview of various methods for accelerating simulations (figure 2) and to collate, categorise, and critique each method with respect to the target application of vascular flow modelling. We decompose vascular flow modelling into a series of complexities (nonlinearity, geometric complexity, multi-physics and multi-scale in time and space) and assess various acceleration methods with respect to these complexities. For ROM approaches, we provide guidance on what type of vascular problems the method may be suitable for, what problems they have already been applied to, and how successful these studies were in terms of the accuracy and acceleration offered by the approach compared to traditional numerical methods. For machine learning approaches, we review some common methods, discuss their benefits and limitations, and advise what vascular problems they may be suitable for. Throughout this review, we measure acceleration factors by comparing run times for a single evaluation of the accelerated and full-order models (FOMs), unless otherwise stated. For complementary reviews on parametric model reduction, model order reduction in fluid dynamics, data-driven cardiovascular flow modelling, machine learning for cardiovascular biomechanics, real-time simulation of computational surgery, and the challenges of vascular fluid dynamics, see [5,26,29,42-44]. Finally, we note that although this review focuses on vascular flow acceleration, the complexities of this application (nonlinearity, geometric complexity, multi-physics and multi-scale) are encountered across many other computational modelling domains. Therefore, we believe this review will be useful to computational vascular flow modelling researchers and the broader computational modelling community.
Reduced order modelling of vascular flow
ROMs aim to reduce the dimensionality of a numerical problem either by applying prior knowledge of the problem itself or by inferring knowledge based on previously gathered data from the system of interest. ROM methods can be described as a priori or a posteriori, depending on whether the reduction of the system exploits prior knowledge about the FOM or information (data) collected after solving it, respectively. A priori methods are useful when there exist symmetries or other known information about the underlying system, or when the system is too complex to solve with traditional techniques. A posteriori methods are useful when readily available data from the FOM can be used to guide the construction of the ROM. Another categorisation for ROM methods is whether the approach is intrusive or non-intrusive. Intrusive methods require the explicit use of the underlying high-order numerical implementation of the FOM, whereas non-intrusive methods operate entirely separately from the FOM. Intrusive methods can be more numerically robust due to their incorporation of the underlying governing equations, but non-intrusive techniques can be easier to implement and use in conjunction with commercial solvers, which are common when studying fluid dynamics problems. Many categories of ROM have been applied to vascular flow, with various benefits and limitations to each approach. This section will describe some of the most common ROM techniques and their suitability for modelling various vascular flow complexities.
Spatial dimension reduction
The three-dimensional (3D) unsteady incompressible Navier-Stokes equations in non-dimensional form are: find (u, p) such that

∂u/∂t + (u · ∇)u = −∇p + (1/Re) ∇²u,  ∇ · u = 0,

where u is the velocity, p is the pressure and Re is the Reynolds number, dependent upon the fluid density ρ and dynamic viscosity μ. The spatial dimension is d = 3, except for some cases of plane-symmetric or axisymmetric flow, when d = 2, and the domain Ω ⊂ R^d has a suitably regular boundary to ensure the existence of solutions. Spatial dimension reduction (SDR) involves reducing these equations down to a zero-dimensional (0D)/one-dimensional (1D)/two-dimensional (2D) model that describes bulk quantities instead of the full spatio-temporal flow fields. A comprehensive review of 0D and 1D techniques has been provided by Shi et al. [45].
We provide an overview of this approach, quantify the acceleration and accuracy offered, and discuss how applicable this method is to vascular flow simulation acceleration.
Zero-dimensional models
Lumped parameter models (referred to from here on as 0D models) exploit the analogy between hydraulic networks and electrical circuits. Blood pressure and flow rate are represented by voltage and current, and the frictional, inertial and elastic effects of blood flow are described by electrical resistance, inductance and capacitance, respectively [45]. Established methods for modelling electrical circuits (Kirchhoff's current law, Ohm's law for voltage-current) with ordinary differential equations (ODEs) can then be used to describe vascular flow problems. The first 0D models were based on the Windkessel model, which consists of a capacitor that describes the storage properties of large arteries and a resistor that describes the dissipative nature of small peripheral vessels [45]. This simple approach cannot model specific pressure and flow rate changes in particular vascular segments and it cannot fully describe the effects of arterial impedance, venous pressure fluctuations, or pulse wave transmission. Various extensions to this model have been used to capture these more complex physiological phenomena by adding additional resistors, inductances and capacitors. For example, in a system with capacitance/compliance C, voltage/pressure P, charge/flow rate Q, inductance/inertia L and resistance R, the two ODEs describing the system take the form [46]

C dP/dt = Q_in − Q_out,  L dQ/dt = P_in − P_out − R Q.  (2.2)

Multi-compartment models can also be used to describe flow and pressure characteristics within specific vascular segments.
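As a hedged illustration of how such a lumped 0D segment can be integrated in practice (the parameter values and inflow waveform below are arbitrary placeholders, not fitted physiological values), a minimal Python sketch is:

```python
import numpy as np
from scipy.integrate import solve_ivp

R, L, C = 1.0, 0.05, 1.5        # resistance, inertance, compliance (arbitrary units)
P_out = 5.0                      # fixed downstream pressure

def q_in(t):
    return 5.0 + 3.0 * np.sin(2.0 * np.pi * t)   # crude pulsatile inflow, period 1

def rhs(t, state):
    P, Q = state
    dP_dt = (q_in(t) - Q) / C          # C dP/dt = Q_in - Q_out
    dQ_dt = (P - P_out - R * Q) / L    # L dQ/dt = P_in - P_out - R*Q
    return [dP_dt, dQ_dt]

sol = solve_ivp(rhs, (0.0, 10.0), y0=[P_out, 0.0], max_step=1e-2)
print(sol.y[0, -1], sol.y[1, -1])      # compartment pressure and flow at the final time
```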
One-dimensional models
In 1D models, the form of the velocity profile across the vessel radius is constrained, which simplifies the 3D governing equations. 1D blood flow is governed by the axisymmetric forms of the incompressible continuity and Navier-Stokes equations, which can be written as

∂A/∂t + ∂(AU)/∂x = 0,  ∂U/∂t + U ∂U/∂x + (1/ρ) ∂p/∂x = f/(ρA),

where x is a local coordinate describing the vessel segment, A is the cross-sectional area, U and p are the cross-sectionally averaged velocity and pressure, ρ is the blood density and f is a viscosity-dependent term describing the frictional force per unit length [45,47]. These equations can be further coupled to a pressure-radius relationship that describes the elasticity of the vessel wall. The reduced equations can be solved using various numerical techniques, such as the method of characteristics [48,49] or finite differences [50]. A primary benefit of 1D models over 0D models is that they can capture pressure and velocity pulse wave propagation [51]. Waves carry information about the medium in which they travel, so capturing the pressure and velocity waves in blood vessels can tell us about the function of the cardiovascular system and provide information about various vascular pathologies, such as atherosclerosis and hypertension [52].
Two-dimensional models
For 2D vascular models, the 3D vessel loses its torsion and curvature, becoming a straightened tube governed by the 2D Navier-Stokes equations. 2D models include the radial variation of the velocity and pressure fields in an axisymmetric tube, whereas 1D models only consider the cross-sectionally averaged quantities. These models are used less frequently now due to improved computer processing power and widely available commercial solvers that make solving the 3D problem more tractable [53]. However, in certain applications, such as the calculation of fractional flow reserve (FFR), 2D models have been shown to be significantly faster than 3D models while retaining a clinically viable level of accuracy [54].
Summary
Table 1 summarises several vascular flow ROM studies using SDR methods. We include the specific application, the reported accuracy compared to the FOM as a baseline, and the acceleration factor compared to the FOM. The accuracy reported for most ROMs was above 90% and the acceleration factors ranged from 10² to 10⁵. However, the ROMs are limited to investigating simple flow parameters, such as FFR, pressure drop or flow rates [60]. Gashi et al. [54] demonstrated that adding complexity (going from steady state to unsteady) reduces the acceleration offered by three orders of magnitude. Mirramezani & Shadden [59] presented a comprehensive study applying distributed 1D lumped parameter models to aortic, aorto-femoral, coronary, cerebrovascular, pulmonary and paediatric blood flow problems. Analytical expressions were used to allow the model to capture energy losses along vascular segments due to viscous dissipation, unsteadiness, flow separation, vessel curvature and vessel bifurcations.
Conclusion
Zero-dimensional SDR models are suitable for global pressure/flow rate analysis of large regions of the cardiovascular system [45]. 1D models assume axisymmetric flow solutions to capture pressure and velocity pulse wave propagation [51]. 2D models can evaluate local flow fields with radial velocity variation in axisymmetric domains [61].
A prominent use of SDR models is providing boundary conditions to 3D models, thereby incorporating information from significantly larger portions of the vasculature than it would be feasible to model in 3D [62-70]. In this way, SDR models can facilitate multi-scale spatial models that provide well-resolved 3D flow information in local regions of interest while still including the effect of distal or proximal regions. Zero-dimensional SDR models are unable to describe the nonlinearities that can arise in cardiovascular mechanics due to the convective acceleration term in the Navier-Stokes equations and/or the complex velocity-pressure relationship in distensible vessels [45]. 1D SDR models can approximate the effect of vessel wall elasticity on blood flow by adding a constitutive law that relates blood pressure to vessel cross-sectional area [51]. SDR models are generally only suitable for bulk velocity/pressure analysis in relatively simple geometries (i.e. where axisymmetry is a valid assumption). They are typically unsuitable for complex multi-physics or multi-scale temporal problems but well suited to spatial multi-scale problems.
Proper orthogonal decomposition
SDR methods depend upon being able to apply geometrical simplifications (i.e. axisymmetry) or analogies with electrical circuit analysis to the vascular flow problem at hand, in order to simplify the 3D Navier-Stokes equations into something easier and faster to solve. While SDR methods can be useful in capturing bulk quantities across large spatial scales, the applicability of these methods to other vascular flow complexities is limited. An alternative approach is to solve the expensive 3D Navier-Stokes equations and leverage the wealth of information contained in the data generated from these simulations to develop a ROM for the specific problem solved in the first instance. This is often referred to as a data-driven (or a posteriori) approach, as the FOM must be solved for some instances prior to ROM construction.
The method used to extract low-dimensional structures from high-dimensional data is key to any data-driven ROM. The most commonly used approach for this in fluid dynamics is the POD. POD was first introduced in fluid dynamics to analyse the structure of experimental turbulent flow and was later adopted for the purpose of efficient simulation and control of fluid flows [71,72]. POD extracts leading-order information from data in the form of orthogonal modes ordered by their energetic contribution to the data. In fluid flows, these modes typically capture spatial information contained within the data.
Before performing the POD, a snapshot matrix U is constructed by stacking columns of spatial data from different timesteps or input parameter configurations in a large matrix. A mean state, derived by averaging over the timesteps or parameter configurations, will often be subtracted from the snapshot matrix prior to performing the decomposition. Typically, the snapshot matrix will have many more rows than columns. POD is then performed by taking the singular value decomposition (SVD) of U,

U = Φ S V*,

where Φ is a matrix of the left singular vectors, or POD modes, S is a diagonal matrix containing the singular values and V* is a matrix of right singular vectors. The success of POD in model order reduction stems from the observation that, in most complex physical systems, the meaningful behaviour of the system is captured by a low-dimensional subspace spanned by the first few POD modes. The singular values quantify the relative importance of each POD mode based upon its energetic contribution to the snapshot matrix. This knowledge makes it possible to truncate the system to a certain energy level by discarding the low-energy POD modes and retaining the high-energy modes. POD-based ROMs have seen widespread application, including classical fluid dynamics problems [73-75], aerodynamics [76], FSI [77,78] and blood flow problems [30,78-83]. However, POD alone is not sufficient to build a ROM. POD provides a low-dimensional representation of the snapshots of the system, but the low-order representation must be combined with projection or interpolation techniques to build a ROM that can predict solution fields at new timesteps or input parameter configurations.
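A minimal sketch of snapshot POD via the SVD is given below. The snapshot matrix here is random stand-in data, whereas in practice its columns would be CFD solution fields at different timesteps or parameter values, so the truncation rank reported for random data is not representative of real flows.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snapshots = 5000, 40
U = rng.normal(size=(n_dof, n_snapshots))        # stand-in snapshot matrix (tall, thin)
U_mean = U.mean(axis=1, keepdims=True)
U_fluct = U - U_mean                              # subtract the mean state

Phi, s, Vh = np.linalg.svd(U_fluct, full_matrices=False)   # U_fluct = Phi S V*
energy = np.cumsum(s**2) / np.sum(s**2)           # cumulative energy content
N = int(np.searchsorted(energy, 0.99) + 1)        # modes needed for 99% of the energy
Phi_r = Phi[:, :N]                                # truncated POD basis

a = Phi_r.T @ U_fluct                             # low-dimensional coefficients
U_approx = U_mean + Phi_r @ a                     # rank-N reconstruction
print(N, np.linalg.norm(U - U_approx) / np.linalg.norm(U))
```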
Proper orthogonal decomposition with projection
Projection-based methods use the underlying governing equations of a system and the POD modes to construct a ROM. The governing equations are projected onto the POD basis to derive a set of reduced equations embedded in this low-dimensional space. A common approach is to use the Galerkin projection (GP) [84,85]. POD-GP ROMs are among the most common ROMs that have been applied to vascular flow problems [30,82,83].
A POD-GP ROM can be derived by decomposing the velocity field u(x, t) as

u(x, t) ≈ Σ_{j=1}^{N} a_j(t) Φ_j(x),  (2.5)

where the Φ_j denote the POD modes and the a_j are the temporal coefficients. The Galerkin projection of the Navier-Stokes equations onto each mode Φ_i is expressed in terms of the inner product 〈 • , • 〉. Following some algebraic manipulation using the decomposition from equation (2.5), the POD-GP ROM can be written as [86]

da_i(t)/dt = A_i + Σ_j B_ij a_j(t) + Σ_{j,k} C_ijk a_j(t) a_k(t),  i = 1, ..., N,  (2.7)

where A_i, B_ij and C_ijk are tensors determined by the specific form of the governing system. The functional forms of the coefficient tensors (equation (2.8)) are built from inner products involving the POD modes and the time-averaged flow ū = ∫_0^T u(x, t) dt [29]. The double sum in equation (2.7) arises due to the nonlinearity of the Navier-Stokes equations and is responsible for the slower ROM speeds and greater storage demands required in the case of nonlinear systems.
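To show how the reduced system (2.7) can be advanced in time once the coefficient tensors are available, the sketch below integrates a small quadratic ROM with randomly generated, mildly dissipative tensors standing in for the true A_i, B_ij and C_ijk; it illustrates the structure of the computation rather than any specific vascular case.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
N = 6                                                   # number of retained POD modes
A = rng.normal(scale=0.1, size=N)                       # constant term
B = -np.eye(N) + rng.normal(scale=0.05, size=(N, N))    # mildly dissipative linear term
Cq = rng.normal(scale=0.01, size=(N, N, N))             # quadratic (convective) term

def rhs(t, a_vec):
    # da_i/dt = A_i + B_ij a_j + C_ijk a_j a_k
    return A + B @ a_vec + np.einsum("ijk,j,k->i", Cq, a_vec, a_vec)

sol = solve_ivp(rhs, (0.0, 20.0), y0=np.zeros(N), max_step=0.05)
print(sol.y[:, -1])    # final temporal coefficients; u(x, t) ≈ mean flow + Phi @ a(t)
```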
Nonlinearity. When applied to problems governed by nonlinear equations, POD-GP does not fully decouple the ROM equations from the FOM, as the algebraic form of the ROM equations retains dependence on the FOM [29]. This means that the algebraic operators for the ROM need to be recomputed at every iteration of the system, which limits the acceleration that this approach can offer for the target application of vascular flow. It is possible to overcome this issue by using hyper-reduction techniques, such as the discrete empirical interpolation method (DEIM), which approximates the algebraic operators instead of calculating them exactly [87]. Buoso et al. [30] employed this technique in a POD-GP-DEIM ROM to evaluate coronary blood flow, and found an acceleration by a factor of 25 for this method compared to the FOM.
Geometric complexity. Complex geometric variability can be modelled by POD-Projection methods, as the POD modes can be made to contain spatial information about the geometry used to generate the data by mapping them back to a fixed reference geometry. However, applying any kind of ROM to a geometry not included in the training data is typically very challenging. In particular, when looking at vascular flow, the variability in morphology from one person to the next can be extreme, with entire vascular segments sometimes missing in certain regions [88]. In some cases, such as when modelling relatively simple features such as stenosis in reasonably straight vessels, it is possible to parameterise the geometric variation and include these parameters as input to the ROM, as in [30]. However, for pathologies such as intracranial aneurysms, where blood flow is highly dependent on the morphology, the number of parameters needed and the amount of high-fidelity data required can be prohibitive. Buoso et al. [30] demonstrated the use of DEIM to accelerate mesh generation by a factor of 10, which could help improve the overall efficiency of a simulation pipeline studying blood flow in multiple geometries.
Multi-physics. Provided the governing equations are known and data can be generated for the system, POD-Projection techniques are suitable for multi-physics problems. A common multi-physics application of POD-Projection is to FSI problems [89-91]. Ballarin & Rozza [92] applied a POD-GP ROM to three idealised 2D FSI problems, including a parameterised valve configuration. The ROM showed good qualitative agreement across all cases and an acceleration factor of the order of ten.
Multi-scale (time). While POD-Projection ROMs are able to reduce simulation times significantly, the long-term stability of the ROM for unsteady flow problems is not guaranteed [29,93]. This instability can be related to the truncation of the POD basis, the violation of boundary conditions, or an inherent lack of numerical stability [86]. Various stabilisation techniques can overcome these issues, such as balanced truncation and balanced POD [29], pressure stabilisation [74], or adding corrective terms to the ROM equations to increase dissipation [94]. Adding these stabilisation techniques to a ROM may increase its long-time accuracy, but will likely come at the cost of increased computational demands [90]. Lassila et al. [29] noted that periodically driven inflow problems have been shown to demonstrate accurate long-term predictions. Given the quasi-periodic nature of vascular flow, this may imply that ROM stability is satisfactory in this context. However, care must be taken to train the ROM with data that are representative of the entire cardiac cycle. Flow features will exhibit strong time dependence due to the pulsatile nature of vascular flow [14]. As a result, training a ROM using data from only one part of the cardiac cycle (e.g. flow acceleration) is unlikely to produce a ROM capable of accurately predicting the flow at another time (e.g. diastole).
Multi-scale (space). The spatial information is contained within the POD modes when constructing a POD-Projection ROM. The number of spatial degrees of freedom is the same as the number of rows in each POD mode, which means that data and computing requirements for POD ROMs will increase as the mesh size grows. Furthermore, as POD requires input data from a FOM, using a refined mesh that captures fine flow details could lead to prohibitive run times when solving the FOM. This means that POD-Projection ROMs are often unsuitable for problems where large regions of the vasculature need to be modelled. A possible strategy to mitigate this issue is to couple a POD-Projection ROM with boundary conditions that are derived from an SDR ROM. Using this technique allows for the high spatial resolution of the POD-Projection approach in the region of interest while still accounting for the effects of the proximal and/or distal vasculature using the SDR model. This technique has been used in various haemodynamics studies to couple high-fidelity 3D models to SDR models, but POD-Projection ROMs have not been used for the 3D model [67,81,95].
Other comments. While using the underlying governing equations is thought to improve the robustness of projection-based ROMs, it is also a weakness with regard to ease of implementation. Constructing a projection-based ROM requires explicit use of the underlying numerical implementation of the FOM, which may not be available or straightforward to use. In particular, when solving fluid dynamics problems, researchers often turn to commercial software for which source code is not readily available. This can hinder incorporating projection-based ROMs into simulation pipelines that are not built upon open-source software. Equation-free or non-intrusive methods offer an alternative strategy that mitigates these issues.
Proper orthogonal decomposition with interpolation
An alternative to projection-based ROMs is to use interpolation-based methods. Given a snapshot matrix $U$, with SVD given by $U = \Phi \Sigma V^{*}$, it is possible to reconstruct each column of $U$ using
$$u^{n}(x, t; \mu) = \sum_{j=1}^{N} a_{j}^{n}(t; \mu)\,\Phi_{j}(x), \qquad (2.9)$$
where μ are the parameter configurations contained in the snapshots, $a_{j}^{n}(t; \mu)$ are a set of time- and parameter-dependent coefficients, N is the number of truncated POD modes retained for the ROM and $\Phi_{j}$ are the POD modes. The coefficients $a^{n}$ are a set of temporal coefficients that can be considered as a path through the coordinate system given by Φ [76]. The goal of POD-Interpolation is to predict the trajectory of the system under a new set of parameter values by using interpolation between the trajectories of previously computed parameter values. To perform the interpolation step, authors have turned to various techniques, such as linear interpolation [96], radial basis functions (RBFs) [76,97,98], Taylor series methods, or Smolyak grids [99]. Once calculated, the new set of coefficients can be multiplied by the retained POD modes to efficiently calculate the solution field of interest for a new parameter configuration or time point.
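As a concrete illustration of this two-step procedure (POD basis extraction offline, coefficient interpolation online), the following minimal sketch uses NumPy and SciPy for a steady, single-parameter example; the snapshot data, mode count and thin-plate-spline RBF are illustrative choices rather than a reproduction of any cited implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical snapshot matrix: each column is a flow field (n_dof values)
# computed by the FOM at one training parameter value mu.
n_dof, n_snapshots = 2000, 30
mu_train = np.linspace(0.5, 2.0, n_snapshots)
U = np.array([np.sin(m * np.linspace(0, 3, n_dof)) for m in mu_train]).T

# 1) POD: thin SVD of the snapshot matrix, truncated to N modes.
Phi, S, Vt = np.linalg.svd(U, full_matrices=False)
N = 5
Phi_N = Phi[:, :N]                         # retained spatial POD modes

# 2) Project each snapshot onto the modes to obtain its coefficients a(mu).
A = Phi_N.T @ U                            # shape (N, n_snapshots)

# 3) Interpolate the coefficients over the parameter space with RBFs.
rbf = RBFInterpolator(mu_train[:, None], A.T, kernel="thin_plate_spline")

# 4) Online stage: evaluate coefficients at an unseen parameter and
#    reconstruct the field as u(mu*) ~ sum_j a_j(mu*) Phi_j.
mu_new = np.array([[1.23]])
a_new = rbf(mu_new).ravel()
u_new = Phi_N @ a_new
print(u_new.shape)                         # (2000,) field at the new parameter
```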
Nonlinearity POD-Interpolation is a non-intrusive method, meaning that no modification of the underlying FOM numerical code is required.This means that the ROM is agnostic to the system it is being applied to and, therefore, POD-Interpolation does not suffer the same drawbacks as POD-Projection when applied to nonlinear systems.This does not guarantee that results with a POD-interpolation approach will be accurate for a nonlinear system, but the speed of the model is not drastically reduced in this scenario as can be the case when using POD-Projection on nonlinear problems.
Geometric complexity Similarly to POD-Projection methods, POD-Interpolation is suitable for complex-shaped individual geometries due to the POD modes containing rich spatial information.However, the success of this approach is also limited when applied to geometries that were not included in the training data.Girfoglio et al. [69] applied POD-Interpolation methods to patient-specific aortic blood flow in the presence of a left ventricular assist device, but only constructed their ROM for a single patient geometry.Geometric parameterisation approaches have been applied to POD-Interpolation methods, but not in the context of vascular flow problems [76].
POD-Interpolation approaches can be applied to subdomains of the FOM domain used to generate the snapshots.For example, if high-fidelity data were generated for a vessel with an aneurysm, it is possible to build a POD-Interpolation ROM only for the aneurysm rather than the full geometry.This can further accelerate the ROM, as the number of data and interpolation operations required is reduced.This feature of POD-Interpolation ROMs gives them an advantage when modelling flow in complex geometries where dense volumetric meshes are required (e.g. when modelling a flow-diverting stent), as the amount of data is vastly reduced without affecting the model performance.
Multi-physics
The authors of [77] used a non-intrusive POD-RBF ROM for one-way and two-way coupled FSI problems and found acceleration factors of order 10^5-10^6 while showing qualitative ROM-FOM agreement. Hajisharifi et al. [98] applied a POD-RBF ROM to a fluidised bed problem. Compared to the FOM, the POD-RBF ROM provided an acceleration factor of order 10^5 and an accuracy of approximately 99% when reconstructing the time evolution of the Eulerian and Lagrangian fields. They tested local and global POD approaches and found the local calculation of POD bases produced a more accurate and efficient ROM.
Multi-scale (time) Similarly to POD-Projection techniques, POD-Interpolation methods do not have any guarantee of long-term solution stability.
Multi-scale (space) In principle, POD-Interpolation ROMs can be coupled with 0D/1D models for boundary conditions by including the coupling parameters describing the inflow/ outflow conditions in the ROM construction.When evaluating the POD-Interpolation ROM, one can obtain the boundary condition parameter input from the output of the 0D/1D boundary condition model and use this to evaluate the 3D flow field using the ROM.In this way, POD-Interpolation approaches can be suitable for modelling highly resolved regions of interest in 3D while conscribing to the effects of the peripheral vasculature.This POD-Interpolation-SDR approach is yet to be applied to vascular flow, but coupling 0D/1D models with 3D computational fluid dynamics (CFD) is common [67,81,95].
Other comments Walton et al. [76] noted that POD-Interpolation, when all POD modes are retained, is equivalent to performing element-wise interpolation across all spatio-temporal coordinates.Therefore, the maximum accuracy for a POD-Interpolation ROM will be bounded by the elementwise interpolation error.For this reason, the acceleration offered by POD-Interpolation ROMs should not only be calculated relative to the high-fidelity CFD model, but also relative to the cost of performing element-wise interpolation of the solution field.Despite this limitation, relative to elementwise interpolation, POD-Interpolation is still capable of vastly reducing the number of interpolation operations required to calculate a new solution and the amount of data that needs to be stored offline.
Summary
POD-Projection and POD-Interpolation techniques have been applied to a wide range of vascular flow problems, including blood flow in tetralogy of Fallot patients [80,81], coronary blood flow [30,82,100], aneurysm blood flow [101], aortic blood flow [69,102] and FSI problems [92]. Tables 2 and 3 demonstrate that POD-Interpolation ROM techniques typically accelerate by factors ranging from 10^2 to 10^6, while acceleration factors for POD-Projection ROMs range from 10^1 to 10^3. Wang et al. [96] compared POD-GP and POD-Interpolation approaches for steady-state heat conduction problems with different numbers of parameters. They found that the POD-GP approach was more reliable, with better performance as the number of parameters grew. POD-Interpolation may require more snapshots than POD-GP to achieve similar accuracy, so despite the faster evaluation times of POD-Interpolation, the overall offline cost to build a ROM of equal accuracy to the POD-GP ROM may be greater. Xiao et al. [99] and Xiao et al. [97] performed two studies comparing POD-GP with various POD-Interpolation techniques (Taylor series, Smolyak grids and RBF interpolation). In both studies, the interpolation-based ROMs were found to be approximately one order of magnitude faster while maintaining good accuracy relative to the high-fidelity model.
Conclusion POD-Projection and POD-Interpolation approaches have been applied to nonlinear, geometrically complex, multi-physics vascular flow problems.Both of these approaches can be coupled to 0D/1D models to capture multi-scale phenomena across large spatial scales in the vasculature.Geometric parameterisations can be incorporated into POD-based ROMs in an attempt to build models suitable for unseen geometries, but these models are limited in their generality and in the complexity of geometry they can model with a reasonable number of parameters.Attempts to build POD-based ROMs that are entirely general to geometry have seen either large errors [80] or minimal acceleration [81].POD-based ROMs are often unsuitable for problems with large time scales, as the long-term stability of the POD modes is not guaranteed.
Dynamic mode decomposition
Dynamic mode decomposition (DMD) was originally developed by Schmid [104] for analysing spatio-temporal data from simulations and experiments.Modes are extracted from the data and can then be used to describe the physical mechanisms present in the data or for dimensionality reduction.For ROM construction, DMD can provide an alternative technique to POD for extracting leading-order modes from data.DMD trades the optimal reconstruction property of POD for physical interpretability, as the eigenvalue associated with each mode provides quantitative information on the oscillation frequency or growth/decay rate of the given mode [105].
Both DMD and POD use the SVD, but the difference arises in the construction of the snapshot matrix prior to performing SVD. In POD, the snapshot matrix is given by $U = [u_1, u_2, \ldots, u_N]$, whereas in DMD the snapshots are arranged into two time-shifted matrices, $U_1 = [u_1, \ldots, u_{N-1}]$ and $U_2 = [u_2, \ldots, u_N]$. The goal of DMD is to compute an approximation to the matrix A, where $U_2 \approx A U_1$ [106]. To do this, SVD is applied to $U_1$ and the resulting decomposition is used to calculate the pseudoinverse of $U_1$, which is then used to calculate A. Thus, DMD finds a best-fit linear model that approximates the underlying time dynamics present in the data. In DMD, N will typically be a set of timesteps for the evolution of the system for one set of parameter values. Using the DMD model, an initial state can be propagated forward in time at a low cost. DMD ROMs are non-intrusive by being equation-free and entirely data-driven.
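A minimal NumPy sketch of the exact-DMD procedure just described is given below; the snapshot sequence is a synthetic placeholder (two travelling waves) and the truncation rank is an assumed choice.

```python
import numpy as np

# Synthetic snapshot sequence: columns are states u_k at successive timesteps.
n_dof, n_steps, dt = 500, 60, 0.1
x = np.linspace(0, 2 * np.pi, n_dof)[:, None]
t = np.arange(n_steps) * dt
U = np.cos(5 * x - 2 * t) + 0.5 * np.cos(11 * x - 6 * t)

# Time-shifted snapshot matrices, so that U2 ~ A U1.
U1, U2 = U[:, :-1], U[:, 1:]

# SVD of U1, truncated to rank r (assumed here).
Phi_svd, S, Vt = np.linalg.svd(U1, full_matrices=False)
r = 4
Ur, Sr, Vr = Phi_svd[:, :r], S[:r], Vt[:r, :].conj().T

# Reduced operator A_tilde = Ur* A Ur, built via the pseudoinverse of U1.
A_tilde = Ur.conj().T @ U2 @ Vr / Sr

# Eigenvalues encode growth/decay and oscillation frequency of each mode;
# the corresponding columns of 'modes' are the spatial DMD modes.
eigvals, W = np.linalg.eig(A_tilde)
modes = (U2 @ Vr / Sr) @ W

# Propagate an initial state forward in time at low cost.
b = np.linalg.lstsq(modes, U[:, 0], rcond=None)[0]   # mode amplitudes
k = 10                                               # timestep index
u_k = (modes * eigvals**k) @ b                       # predicted state at step k
print(np.linalg.norm(u_k.real - U[:, k]) / np.linalg.norm(U[:, k]))
```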
Since its inception, numerous extensions to DMD have been proposed to help tackle complexities such as nonlinearity, varying characteristic time scales in a given application, or handling externally driven data sequences.These extensions are thoroughly presented in [33].Despite its growing use as a tool for analysing complex spatio-temporal data, DMD has seen limited application to vascular flow.We will discuss the applicability of DMD and its extensions to modelling vascular flow.
Nonlinearity
DMD aims to find an optimal linear model based on data. The underlying system in blood flow problems is nonlinear but the strength of this nonlinearity will vary depending upon the application. Habibi et al. [105] found that more DMD modes are required in an aneurysm model than in a stenosis model to achieve a particular reconstruction accuracy, highlighting the problem-specific nature of the complexity of vascular flow. In cases where nonlinearity is strong, a large number of measurements of the field of interest may be required to ensure the nonlinearity is captured in the reduced model. Extended DMD (EDMD) is an approach designed to help with this issue by using nonlinear functions of the measurements as input to the DMD algorithm [33,107].
Geometric complexity
Similarly to POD modes, DMD modes contain spatial information, so this approach is well suited to constructing ROMs for individual complex geometries.Habibi et al. [105,108,109] have demonstrated the use of DMD to identify blood flow structures in cerebral aneurysms and stenosis models.However, as with POD, using DMD to evaluate flow fields in an unseen geometry is very challenging.
Multi-physics
DMD is suitable for multi-physics problems as the decomposition can be applied separately to each field.DMD can also be used to identify spectral coherence between each field in multi-physics applications, which can help to improve understanding of the problem.So far, the main use of DMD in multi-physics problems is to study FSI.Rodríguez-López et al. [110] used DMD to capture spatio-temporal evolution of flow over a flexible membrane wing using experimental data.They found that basic DMD could not reconstruct the fields accurately.Instead, they used high-order DMD (DMDho), developed by Le Clainche & Vega [111].Where basic DMD only uses the previous snapshot, DMDho estimates each snapshot as a linear combination of a number of previous snapshots, thus improving performance in regimes where the FSI was stronger.This suggests that as the complexity of the system increases, accurate propagation of the time dynamics may require more than just the previous snapshot.This is worth considering when adding complexity (e.g.vessel elasticity, thrombosis models, device interactions) to vascular flow DMD models.
Multi-scale (time)
DMD ROMs are perhaps most beneficial for problems of complex temporal nature.A DMD ROM is inherently designed to uncover time dynamics in a system and then propagate the reduced system forwards in time.Vascular flow is often modelled as periodic, with results from a single cardiac cycle taken to be representative of the flow for all time.This assumption can break down when autoregulation occurs or when complex long-term physiological phenomena, such as blood clotting, occur.The period of a cardiac cycle is roughly one second, whereas processes such as blood clotting can occur over a period of months.Multi-resolution DMD (DMDmr) provides a way to robustly separate complex systems into a hierarchy of multi-resolution time components [112].DMDmr uses iteratively shorter snapshot sampling windows and recursive extraction of DMD modes from slow to fast time scales, which improves the predictions for short-time future states.This technique has been further generalised by Dylewsky et al. [113].Provided with the appropriate data, DMDmr may be able to produce ROMs that can capture both longand short-term effects of blood flow.Identifying a ROM for long-term effects (clotting, plaque build-up etc.) may be particularly useful in reducing the cost of vascular models, as current approaches are too expensive to simulate these processes for the time scales over which they occur [3].Another approach to handle complex temporal patterns is multi-stage DMD (mDMD) [105].mDMD divides a temporal system into stages and applies DMD to each stage in turn.This allows more DMD modes to be used during periods with a more complex flow, while reducing the number of modes required when the flow is simpler, as demonstrated by Habibi et al. [105].This approach can improve the efficiency of the ROM and reduce data storage requirements, but does not extend the original DMD method to more complex problems.
Multi-scale (space)
DMD modes are local to wherever the high-fidelity data were generated, so using this approach for large regions of the vasculature is not possible without generating enormous amounts of high-fidelity data.However, DMD with control (DMDc) allows for input controllers to be integrated into the DMD algorithm.Habibi et al. [105] used inlet velocity as a controller for cardiovascular flow.It may be possible to extend this approach to account for other flow parameters or boundary conditions, thus allowing the inexpensive DMD ROM to be coupled to 0D/1D SDR models that account for the large-scale flow changes in the vasculature.
Summary
Despite DMD being used as a ROM technique, very few papers directly compare the efficiency of the DMD ROM with the FOM used to generate the training data. Table 4 highlights a few studies that did evaluate the DMD ROM efficiency. From this, we can see speed-ups ranging from ∼10^0 to 10^2. This acceleration seems small but, given the non-iterative, equation-free nature of DMD ROMs, it is still notable. It should also be noted that the authors of [115] included offline calculation times when determining the ROM speed-up, so higher acceleration values would be found if they only compared the online evaluation time with the FOM. Only a few papers in the literature use DMD for vascular flow problems. Habibi et al. [105] used multi-stage DMD with control (mDMDc) to reveal hidden low-dimensionality in patient-specific blood flow in coronary stenosis and cerebral aneurysms. They found that mDMDc requires fewer modes than DMD to reconstruct the velocity fields to a given accuracy, but these modes were not used to construct a ROM. Habibi et al. [109] used DMD for data assimilation in Womersley flow, 2D idealised aneurysm flow and 3D real aneurysm flow, but in this instance the DMD analysis was not used to construct a ROM. Di Labbio & Kadem [117] performed POD and DMD analysis of left ventricular flow and found that while DMD requires more modes to achieve a particular energy level, it also preserves global particle advection using fewer modes. Another important point to consider when using DMD for vascular flow is that due to the periodic nature of the flow, unstable modes will either decay or grow over time, thus potentially under- and over-influencing the dynamics as time goes on [117].
Conclusion
DMD can be used to construct reduced order linear dynamical systems from data that approximate underlying nonlinear dynamics.DMD ROMs can be inexpensively propagated forwards in time or used to extract coherent structures from data.DMD offers the benefit of having an associated frequency attached to each mode, thus providing interpretability (i.e.growth/decay/oscillation for each mode).DMD modes contain spatial information so this approach can be used to model individual complex geometries.DMD models are typically built with time as the only input parameter, so parametric DMD ROMs are rare; however, very recent work has begun to investigate this by adding interpolation into the DMD approach [118].DMDc offers the potential to include input controllers into a DMD model, so this approach can be used to include the effects of, for example, varying inlet flow rate [105].The input controllers could also potentially be boundary conditions derived from 0D/1D blood flow models, thus allowing DMD ROMs to account for larger portions of the vasculature.DMD can be applied to multi-physics problems; however, a high-order DMD approach may be required to correctly reconstruct the fields of interest [111].DMD ROMs are not commonly applied to vascular flow problems to date.A promising application of DMD in vascular flow is to problems where evaluating the long-term effects is not possible with conventional models.For these problems, DMD could perhaps be used to construct an efficient ROM for the time dynamics of long-term blood flow phenomena.
Other techniques
There are various other ROM techniques that have not been as widely used as those discussed previously.Herein, we will discuss two of those techniques, the reduced basis (RB) method, which has seen some application to vascular flow problems, and the proper generalised decomposition (PGD), which has not been applied to vascular flow modelling.
Reduced basis
The RB method is usually applied to the fast solution of parameter-dependent problems [29,119,120]. Similarly to POD-based ROMs, the RB method uses a set of snapshots of the FOM. Whereas POD uses the SVD to extract an optimal basis from the snapshots, the RB method is more general and can use various alternative approaches (e.g. Gram-Schmidt orthonormalisation [121]) to construct a basis spanning a sub-space of typically much lower dimension than that of the full-order solution manifold. RB methods often employ a greedy procedure for basis construction, whereby
optimal snapshots are computed based upon an a posteriori error estimation [122]. A key advantage of the greedy approach is that the specific dynamics of the problem at hand guide the sample selection process [26]. Following basis construction, a Galerkin projection is often applied to build the ROM, similarly to POD-Projection ROMs.
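The following NumPy sketch illustrates the greedy flavour of basis construction; since a true a posteriori error estimator is problem-specific, the sketch substitutes the exact projection error over a precomputed candidate set (a "strong greedy" simplification), and all data are hypothetical.

```python
import numpy as np

# Hypothetical candidate set: one precomputed solution per parameter value.
n_dof, n_candidates = 1000, 40
mu = np.linspace(0.1, 1.0, n_candidates)
snapshots = np.array([np.exp(-m * np.linspace(0, 5, n_dof)) for m in mu]).T

def project(basis, u):
    """Orthogonal projection of u onto the span of the basis columns."""
    return basis @ (basis.T @ u) if basis.shape[1] else np.zeros_like(u)

# Greedy construction: repeatedly add the candidate that is currently worst
# represented by the basis, orthonormalising with Gram-Schmidt.
basis = np.zeros((n_dof, 0))
tol, max_size = 1e-8, 10
for _ in range(max_size):
    errors = np.linalg.norm(snapshots - project(basis, snapshots), axis=0)
    worst = int(np.argmax(errors))
    if errors[worst] < tol:
        break
    new_vec = snapshots[:, worst] - project(basis, snapshots[:, worst])
    basis = np.column_stack([basis, new_vec / np.linalg.norm(new_vec)])

print(basis.shape)          # (n_dof, selected basis size)
```

In a genuine RB method the error indicator would be an inexpensive a posteriori estimator evaluated over a parameter training set, so that only the selected snapshots ever require a FOM solve.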
The RB method has seen some application to vascular flow problems.Manzoni et al. [123] used this approach with RBF for interpolating the geometric parameters to calculate flow fields in 2D parameterised carotid artery bifurcation geometries.For two test cases of global deformations of the carotid branches and stenosis near the carotid sinus, they achieve speed-ups of 96 and 88 times, respectively.Lassila et al. [124] applied the RB method to inverse problems in flow through stenosed arteries and in optimal shape design for femoropopliteal bypass grafts, reporting estimated speed-ups of 30-175 times.While effective in predicting downstream shear rates in the stenosis problem and in identifying optimal design configurations, the models were only applied to 2D steady-state problems.Colciago & Deparis [125] combined POD and the RB method, specifically the greedy algorithm, to build a ROM for a haemodynamics problem, noting CPU time gains of order 10 3 .The application was to a femoropopliteal bypass problem, which was modelled using a 3D reduced FSI formulation, highlighting the suitability of the RB approach to multi-physics applications.The authors note that the greedy enrichment scheme can favour reducing the error in certain variables, especially when the quantities in the problem are of different orders of magnitude, so care should be taken in building an appropriate error estimator for multi-physics applications.Aside from vascular flow applications, the RB method has been applied to various other nonlinear Navier-Stokes problems [126,127], including FSI problems [128].Coupling the parametric RB method to boundary conditions derived from 0D vascular models is possible in order to capture some multi-scale spatial effects.
Proper generalised decomposition
PGD generalises POD using separated representations while avoiding the need for any a priori knowledge about the solution [129]. Not using snapshot generation allows PGD to be applied to previously unsolved problems, which POD, DMD and RB ROMs are mostly incapable of. For a problem defined in space of dimension D, PGD provides an approximate solution $u_N$ in the separated form
$$u_N(x_1, \ldots, x_D) = \sum_{i=1}^{N} \prod_{j=1}^{D} F_i^j(x_j).$$
The PGD approximation is therefore a sum of N functional products involving D functions $F_i^j(x_j)$ [130]. PGD solutions are constructed by successive enrichment, where a functional product $F_n$ is determined using the functions from the previous n − 1 steps. It should be noted that each enrichment step involves solving a nonlinear problem by means of a suitable iterative process. In PGD, both the number of terms N and the functions F are unknown a priori, making PGD an a priori ROM method. In a typical separation of variables, the coordinates $x_i$ could be space and time coordinates, but in PGD additional coordinates can be included for problem-specific inputs such as boundary conditions or material parameters. Furthermore, if M nodes are used to discretise each of the coordinate spaces, the total number of PGD unknowns is N × M × D instead of the $M^D$ degrees of freedom found in standard mesh-based discretisations [130]. When the solution field is sufficiently regular, the number of terms N will be relatively small, highlighting how PGD overcomes the curse of dimensionality [131].
PGD was initially developed for solving time-dependent nonlinear problems in structural mechanics [132].It has since been applied to rheology [133] and the incompressible Navier-Stokes equations [131].Chinesta et al. [133] noted a speed-up of the order of 10 2 when using PGD for a transient rheology problem.Dumon et al. [131] found a speed-up of approximately 100 times for a 2D stationary diffusion problem, whereas a speed-up of 5-10 times was found for various Navier-Stokes problems, the most complex of which was a 2D lid-driven cavity flow.PGD has also been applied to multi-scale in time applications, where it is possible to separate the time dimension (1D in nature) into a multi-dimensional time space; however, in this study the authors are not able to draw conclusions on the efficiency of the ROM [134].PGD has also seen application to multi-scale in space and multi-physics problems, where the authors highlight that the savings due to PGD increase with problem complexity [135,136].Despite its potential usefulness in complex problems with known/ unknown equations, PGD has not seen as widespread use as other reduced order techniques.
Accelerating simulations with machine learning
Machine learning is a branch of artificial intelligence that excels at extracting underlying patterns in data. The basic building block of many machine learning algorithms is the neural network, shown in figure 3. Neural networks consist of a collection of processing units, called neurons, and a set of directed weighted synaptic connections between the neurons. The connections between neurons symbolise the passing of information between neurons, with a fully connected neural network (FCNN) meaning that all neurons in a given layer receive information from all neurons in the previous layer and pass information to all neurons in the subsequent layer. Each neuron processes the information it receives via some calculations and produces an output. The final layer is referred to as the output layer, where the final output of the network is produced. The fully connected neural network in figure 3 has two inputs, two hidden layers with four neurons per layer and one output. The objective of the network is to approximate a mapping between the input and output variables, given data to learn from. In vascular flow modelling, the inputs may be variables like space, time or Reynolds number and the outputs may be velocity, pressure or other variables of interest. Each neuron is characterised by three functions: the propagation function, the activation function and the output function. The propagation function converts the vectorial input from the previous layer's outputs into a scalar input. The activation function quantifies the extent to which a particular neuron is active by applying a chosen function to the net input, such as the hyperbolic tangent or rectified linear unit functions [137]. Including activation functions for several sequential layers allows the deep network to approximate nonlinear mappings from inputs to outputs. The output function calculates the scalar output of a neuron based upon its activation state. Each neuron has a trainable weight associated with it, and each layer often has a trainable bias. These weights and biases are the network parameters that are optimised through training.
For a supervised learning problem, training data consist of a set of inputs with known outputs. During the training procedure, input data are passed through the network to give an output that is compared to the ground truth values for the output. A loss function is used to quantify the discrepancy between the network output and the ground truth output. The parameters associated with the network are optimised, typically through back-propagation and gradient descent algorithms, in order to minimise the loss [138]. Once the network has been trained to accurately match predictions for the training dataset, it can be used for input data where ground truth output values are unknown. Typically, the accuracy of the network will be assessed by evaluating its output on a dataset that was not used in training, or through procedures such as cross-validation. A trained neural network can be considered to approximate a function that maps the input data to the output data. Hornik et al. showed that such feedforward networks are universal approximators, able to approximate any continuous function to arbitrary accuracy given sufficiently many neurons.
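The PyTorch sketch below ties the preceding description together: it builds the fully connected architecture described for figure 3 (two inputs, two hidden layers of four neurons, one output, tanh activations) and trains it with a mean-squared-error loss, back-propagation and gradient descent; the synthetic dataset and hyper-parameters are illustrative only.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Fully connected network: 2 inputs -> 4 -> 4 -> 1 output, tanh activations.
model = nn.Sequential(
    nn.Linear(2, 4), nn.Tanh(),
    nn.Linear(4, 4), nn.Tanh(),
    nn.Linear(4, 1),
)

# Synthetic supervised data: inputs (e.g. space/time) with known outputs
# (e.g. a velocity component) standing in for simulation or measurement data.
X = torch.rand(256, 2)
y = torch.sin(3 * X[:, :1]) * torch.cos(2 * X[:, 1:])

loss_fn = nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)

# Training loop: forward pass, loss evaluation, back-propagation, update.
for epoch in range(2000):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimiser.step()

# Inference on unseen inputs where the ground-truth output is unknown.
with torch.no_grad():
    print(model(torch.tensor([[0.3, 0.7]])))
```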
Machine learning and deep learning have both employed neural networks to great effect in various classification and regression tasks in fields such as computer vision and natural language processing [141,142].Common across all learningbased strategies is the utilisation of data and the framework of an expensive up-front training stage preceding a cheap inference stage when evaluating the model for new data.In this way, machine learning approaches bear resemblance to ROM methods.A benefit of machine learning compared to ROMs is that the operations used in machine learning are highly parallelisable, which allows them to be trained and tested using highly parallel computing hardware, such as graphics processing units (GPUs).This can reduce the time taken for training and inference, which is driving the growing interest in using machine learning-based simulation methods for acceleration.
Machine learning can be used in conjunction with ROMs, where the dimensionality reduction inherent to the ROM provides acceleration and machine learning is used to improve or replace some aspect of the ROM. For example, when constructing an interpolative ROM, such as in the POD-Interpolation method, using a neural network for interpolation can produce a ROM capable of outperforming POD-GP ROMs in terms of both acceleration and accuracy for certain applications [143-145]. Alternatively, machine learning can be used in place of conventional simulation methods to directly infer solution fields or other quantities of interest from inputs such as medical images and point clouds of spatio-temporal coordinates [36,37]. In this instance, the machine learning model itself provides acceleration relative to the FOM, either through reduction of the dimensionality of the problem or through exploitation of parallel computing hardware.
3.1. Machine learning reduced order models
3.1.1. Machine learning-augmented reduced order models
Various attempts have been made to augment ROMs with machine learning. Neural networks (NNs) are adept at interpolation, so using them in POD-Interpolation ROMs is a natural choice. Hesthaven & Ubbiali [143] were among the first to apply a POD-NN ROM to parameterised steady-state PDEs (the Poisson equation and lid-driven cavity problems). In this model, the network approximates a mapping from the input parameter vector (including, e.g. material/geometry parameters) to the ROM coefficients.
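A minimal sketch of this POD-NN idea is given below, with NumPy for the offline POD step and PyTorch for the network that maps an input parameter to the ROM coefficients; the snapshots, single scalar parameter and network size are hypothetical, and the sketch is not a reproduction of the cited implementations.

```python
import numpy as np
import torch
from torch import nn

# Offline stage: hypothetical FOM snapshots, one column per parameter value.
n_dof, n_train = 1500, 50
mu = np.linspace(0.5, 2.0, n_train)
U = np.array([np.exp(-m * np.linspace(0, 4, n_dof)) for m in mu]).T

# POD basis and the projection coefficients of each training snapshot.
Phi, _, _ = np.linalg.svd(U, full_matrices=False)
Phi_N = Phi[:, :8]
coeffs = Phi_N.T @ U                                   # (8, n_train)

# Network mapping the input parameter to the ROM coefficients.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 8))
x = torch.tensor(mu[:, None], dtype=torch.float32)
y = torch.tensor(coeffs.T, dtype=torch.float32)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

# Online stage: evaluate the network at a new parameter and reconstruct the
# field as Phi_N @ a(mu*), with no call to the FOM.
mu_new = torch.tensor([[1.1]])
a_new = net(mu_new).detach().numpy().ravel()
u_new = Phi_N @ a_new
print(u_new.shape)
```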
The POD-NN approach offers similar accuracy to POD-GP, while reducing computation time by two to three orders of magnitude. Wang et al. [146] extended the work by Hesthaven & Ubbiali [143] to time-dependent PDEs and applied it to a quasi-1D PDE problem. In this case, the time coordinate is included as an additional input to the neural network, allowing evaluation of the ROM at different timesteps. For the simple test problem, the authors found ROM accuracy of 99% and an acceleration factor of order 10^7 relative to the FOM. San et al. [144] applied the POD-NN approach to the viscous Burgers equation to model time-dependent nonlinear wave propagation, using a different network design from Hesthaven & Ubbiali [143] and Wang et al. [146]: their network maps from the ROM coefficients at time $t_n$ and any controllable input parameters (e.g. Reynolds number) to an output that characterises the ROM coefficients at time $t_{n+1}$. Within this framework, they present two variations: (i) a sequential network, where the outputs are the ROM coefficients, and (ii) a residual network, where the outputs are the residual between the ROM coefficients of $t_{n+1}$ and $t_n$. Of these two approaches, the residual network is found to be superior and both approaches outperform POD-GP for the Burgers equation application. Balzotti et al. [147] applied the POD-NN approach to optimal control of steady-state flow in a patient-specific coronary artery bypass graft. The Reynolds number parameterised the inflow and was the single input parameter for which the ROM was constructed. The objective of the optimal control algorithm was to identify the normal stress that has to be imposed at the outlet to ensure a satisfactory agreement between the computed and clinically measured velocity fields. Online evaluation of the ROM took approximately 10^-4 s, which is a speed-up of order 10^6 compared to the FOM. The POD-NN model was comparably accurate to a POD-GP model applied to the same problem, but the POD-NN ROM was four orders of magnitude faster [103].
It is also possible to augment POD-GP ROMs with machine learning. Two challenges in POD-GP ROMs are: (i) the potential lack of long-term stability and accuracy and (ii) the lack of complete decoupling for nonlinear governing equation projection onto the reduced basis and the subsequent high cost of evaluating these nonlinear reduced operators. To address the first challenge, Wang et al. [148] used a long short-term memory (LSTM) network, a type of recurrent neural network designed to operate on sequential data. The POD coefficients are fed into the LSTM units and the physical/geometric parameters are fed into the initial hidden state of the LSTM. When applied to various problems (3D Stokes flow, 1D Kuramoto-Sivashinsky equation and 2D Rayleigh-Bénard convection), the LSTM-POD-GP ROM is found to improve stability and accuracy compared to POD-GP for nonlinear problems. Furthermore, the LSTM ROM facilitates accurate predictions beyond the time interval of the training data. To address the second challenge, Gao et al. [149] proposed a non-intrusive approach to hyper-reduction that approximates the ROM velocity function using a FCNN. The FCNN-enhanced POD-GP ROM was applied to two nonlinear PDEs (1D viscous Burgers equation and 2D flame model) and found to be accurate to approximately 95%. The ROM was also shown to be more stable and accurate for the test problems than POD-GP with alternative hyper-reduction methods (DEIM), in the limit of a small basis. Another approach to improve accuracy is to use machine learning to adapt the ROM to a given input. Daniel et al. [150] used a deep classification network to recommend a suitable local POD-GP ROM from a dictionary of possible ROMs. This approach could be used in conjunction with small local ROMs, which have been shown to outperform a single global ROM in terms of accuracy and acceleration [98,151].
Machine learning-based reduced order models
Dimensionality reduction is a crucial step in ROM construction and is commonly performed using techniques such as POD or DMD. Autoencoders (figure 3) are neural networks used to compress and decompress high-dimensional data and are thus being increasingly used in the dimensionality reduction step in reduced models. Autoencoders can provide nonlinear data embedding, whereas POD and DMD offer only a linear reduced basis [34,35]. This could allow autoencoders to compress complex nonlinear data more accurately than POD or DMD. Another approach that can offer nonlinear dimensionality reduction is manifold learning. Csala et al. [152] compared four manifold learning (locally linear embedding, kernel principal component analysis (PCA), Laplacian eigenmaps, isometric mapping) and two ML-based (autoencoder, mode decomposing autoencoder) nonlinear dimensionality reduction methods to PCA. They found that all six of the nonlinear dimensionality reduction methods achieved lower reconstruction errors than PCA for spatial reduction, but that only the autoencoder-based reduction was definitively superior for temporal reduction. Maulik et al. [34] used a ROM based on a convolutional autoencoder (CAE) and an LSTM to model the viscous Burgers equation and the inviscid shallow-water equations. In these advection-dominated systems, the deep learning (DL)-based ROM outperforms the POD-GP method. The CAE-LSTM approach is 14 times faster than the POD-GP method, producing errors of the same magnitude. Pant et al. [35] used a 3D CAE to compress simulation data and advance the solution in time without solving the Navier-Stokes equations in an iterative fashion. Using a 3D CAE allows for features to be extracted in both spatial and temporal axes, which mitigates the need for an additional network (e.g. an LSTM) for time propagation. Using this approach, the authors reduce computational run times by two orders of magnitude compared to traditional CFD solvers.
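The sketch below shows the basic encoder/decoder structure in PyTorch, compressing a high-dimensional state to a small nonlinear latent vector and reconstructing it; the dimensions, data and training settings are illustrative, and in a convolutional autoencoder the linear layers would be replaced by convolutions.

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    """Compress a high-dimensional state to a low-dimensional latent vector
    and reconstruct it; the latent space plays the role of the reduced basis."""
    def __init__(self, n_dof: int, latent: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_dof, 128), nn.ReLU(),
            nn.Linear(128, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 128), nn.ReLU(),
            nn.Linear(128, n_dof),
        )

    def forward(self, u):
        return self.decoder(self.encoder(u))

# Hypothetical snapshot data: each row is a flow state at one time instant.
torch.manual_seed(0)
snapshots = torch.rand(200, 1000)

model = Autoencoder(n_dof=1000, latent=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(snapshots), snapshots)
    loss.backward()
    opt.step()

# The encoder output is the reduced representation; a time-stepping model
# (e.g. an LSTM) can then be trained to advance these latent vectors in time.
latent_states = model.encoder(snapshots)
print(latent_states.shape)          # (200, 8)
```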
Fresca et al. [153] constructed a POD-DL-ROM that uses POD to reduce the dimensionality of the training data, improve training efficiency and reduce complexity. Compared to previous work by the same authors, enhancing with POD reduces the DL-ROM training time from 15 h to 24 min. The DL-ROM itself uses CAEs and feedforward neural networks trained on the POD-reduced solution vectors. Fresca & Manzoni [145] used the same approach for a series of additional applications including an unsteady advection-diffusion-reaction system, a coupled PDE-ODE Monodomain/Aliev-Panfilov system, a nonlinear elastodynamics problem and the unsteady Navier-Stokes equations. For the most pertinent example, the Navier-Stokes problem, the acceleration factor was of the order 10^5 compared to the FOM while achieving a comparable accuracy to the more expensive non-enhanced DL-ROM. Fresca & Manzoni [154] used the same POD-DL-ROM for flow around a cylinder, FSI between an elastic beam and a laminar flow, and blood flow in a cerebral aneurysm. High levels of accuracy are qualitatively displayed for each application. Acceleration factors for all applications are of the order of 10^5. Essentially, the approach of Fresca et al. [145,153,154] reduces the size of the data passed through the network and the number of training parameters required, thus improving the efficiency of training and testing while preserving the precision of the DL-ROM without POD enhancement.
Conclusion
Machine learning (ML) has a lot to offer the ROM field, as demonstrated by the various studies in table 5 that used ML and ROMs in conjunction. ML can be used to provide closure in projection-based ROMs, improve interpolation in POD-Interpolation ROMs, improve long-time ROM predictions, or offer alternative dimensionality reduction algorithms that are essential in almost all ROMs. ML-ROMs are able to address the weaknesses that hinder various reduced order methods, such as poor performance for nonlinear problems, lack of stability or lack of generality. As a result, ML-ROMs will typically be suitable for a wider array of vascular flow problems than the traditional ROM techniques from which they are derived. Balzotti et al. [147] demonstrated the superior acceleration capacity of a POD-NN ROM compared to a POD-GP ROM for a vascular flow problem due to the POD-NN approach being better suited for the nonlinear nature of the problem. Similarly, Csala et al. [152] demonstrated the superior spatial reduction capability of nonlinear ML-based dimensionality reduction techniques when applied to aneurysm blood flow, which suggests that more accurate models may be possible using ML-based reduction techniques. Fresca & Manzoni [154] conversely used traditional dimensionality reduction techniques (POD) in conjunction with an ML-based ROM and achieved high levels of accuracy and acceleration for aneurysm blood flow. While not for vascular flow applications, Wang et al. [148] and Gao et al. [149] augmented POD-GP ROMs with ML and achieved improved stability and accuracy. These findings demonstrate that ML-ROMs are a compelling option for vascular flow problems. In particular, ML-ROMs can offer methods suitable for vascular flow problems that are nonlinear, geometrically complex, multi-physics and multi-scale in time.
Physics-informed machine learning simulation
Machine learning can be used to construct fast surrogate models for vascular flow problems that directly predict haemodynamic quantities of interest, as in work by Itu et al. [37], Rutkowski et al. [155] and Liang et al. [156] (discussed further in §3.3.1).A criticism of this approach is that the models do not guarantee the underlying physics in the problem will be respected.This can be somewhat resolved by incorporating known physics into the learning procedure [157].The most widely used techniques to achieve this are physics-informed neural networks (PINNs), which can combine data acquired from simulations or experiments with knowledge of the underlying governing equations and boundary conditions [36,158].In contrast to most machine learning simulation techniques, PINNs can be used in the absence of data.PINNs without training data may be less accurate than with data, but data-free PINNs offer a direct alternative to standard numerical techniques [159].While PINNs were initially developed for solution and discovery of PDEs in forward and inverse scenarios, the development of data-free and parametric PINNs has since seen them applied to simulation acceleration.PINNs have been demonstrated to vastly reduce simulation times, particularly in the context of parametric design optimisation problems, hence our focus on this technique in this review [160,161].
A typical PINN is shown in figure 3. The PINN consists of a network with simulation parameters (e.g. space/time coordinates) as input and solution fields (e.g. velocity/pressure) as output. Fully connected neural networks are typically used for PINNs, but various other approaches have demonstrated superior results for certain applications [162]. For the chosen architecture, automatic differentiation is typically used to differentiate network outputs with respect to its inputs, thus acquiring derivatives such as $u_x$, $p_x$, $u_t$, etc., which can be combined to formulate governing equation residuals. For the incompressible Newtonian Navier-Stokes equations, the residual of the x-momentum equation will take the form
$$r_x = \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z} + \frac{\partial p}{\partial x} - \frac{1}{Re}\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right),$$
where u = (u, v, w) is velocity, p is pressure and Re is the Reynolds number. Reduced Navier-Stokes equations (e.g. equation (2.3) for 1D blood flow) can also be used as residuals [163]. The residuals are included in the loss function for the network, which encourages the network to learn mappings that minimise the residuals and therefore satisfy the underlying governing equations. It is possible to enforce additional loss constraints that penalise the network for non-satisfaction of boundary conditions, such as the no-slip condition that is often applied on blood vessel walls. Alternatively, boundary conditions can be imposed as hard constraints through the network architecture [164]. Once trained, the PINN is able to infer solution fields that satisfy data, governing equations and boundary conditions. PINNs are designed to improve the efficiency of non-informed networks through reducing the amount of data required and helping the network train efficiently by discarding non-physical mappings. A further benefit of PINNs is their potential to be used as an alternative to traditional numerical solvers. If data are unavailable, PINNs can be trained on PDE residual points and boundary conditions alone, mirroring the procedure of traditional numerical techniques. However, the input coordinates need only be a point cloud rather than the volumetric mesh required for typical numerical solvers. Furthermore, unlike traditional numerical solvers, when a problem is ill-posed with incomplete or noisy boundary conditions, PINNs are still a viable option [165]. A final benefit of PINNs is that they are well suited to solving inverse problems as well as forward problems, whereas traditional numerical techniques are usually only suitable for forward problems.
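To illustrate how such a residual enters the loss, the following PyTorch sketch uses automatic differentiation to evaluate the x-momentum residual of the steady 2D incompressible Navier-Stokes equations (a simplification of the 3D unsteady form above) at a set of collocation points; the network, collocation points and Reynolds number are placeholders, and the boundary-condition, continuity and data terms of a complete PINN loss are omitted for brevity.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Network mapping spatial coordinates (x, y) to flow variables (u, v, p).
net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 3),
)

def grad(f, x):
    """First derivatives of the scalar field f with respect to the inputs x."""
    return torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

def x_momentum_residual(xy, Re=100.0):
    out = net(xy)
    u, v, p = out[:, 0:1], out[:, 1:2], out[:, 2:3]
    du = grad(u, xy)                       # columns: [u_x, u_y]
    dp = grad(p, xy)                       # columns: [p_x, p_y]
    u_x, u_y = du[:, 0:1], du[:, 1:2]
    u_xx = grad(u_x, xy)[:, 0:1]
    u_yy = grad(u_y, xy)[:, 1:2]
    return u * u_x + v * u_y + dp[:, 0:1] - (u_xx + u_yy) / Re

# Collocation points where the governing-equation loss is evaluated.
xy = torch.rand(512, 2, requires_grad=True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    # A complete PINN loss would add boundary-condition, continuity and
    # (if available) data-misfit terms to this residual term.
    loss = x_momentum_residual(xy).pow(2).mean()
    loss.backward()
    opt.step()
```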
Once trained, a PINN can quickly infer physics-respecting solution fields given spatio-temporal inputs, making them a promising acceleration technique.However, generalising a PINN for additional input parameters can decrease accuracy and increase training time, so the fast inference speeds must be balanced against training cost and accuracy.Despite their promise, PINNs are a relatively new technique for simulation and the application of PINNs towards acceleration and vascular flow is in its infancy.We aim to address three questions in order to determine the usefulness of PINNs for vascular flow acceleration: (i) How suitable are PINNs for simulation acceleration?(ii) How fast are PINNs relative to traditional numerical techniques?(iii) Are PINNs suited to the complexities of vascular flow acceleration?
How suitable are physics-informed neural networks for acceleration?
Developing and using a PINN model often consists of three stages: (i) generating or acquiring data from simulations or experiments, (ii) training the network while incorporating known physics and boundary conditions and (iii) using the model to infer solutions for new inputs. In inference mode, PINNs are usually faster than a traditional numerical model, but a standard PINN must ordinarily be retrained for each new problem configuration, so several strategies have been proposed to generalise the network across parameters. Arthurs & King [160] introduced two input parameters describing the peak inflow rate and diameter in a pipe flow problem. Sun et al. [159] similarly included parameters that describe geometry and viscosity as input to their PINN. When parameterising the network in this manner, an active learning strategy can reduce the cost of up-front data generation. This consists of refining the training data with additional finite-element model (FEM) samples in regions of the parameter space where the PINN prediction is poor. Costabal et al. [167] used a positional encoding mechanism for PINNs that creates an input space for the network representing the geometry of a given object, improving PINN performance in complex geometries. However, for a Poisson forward problem in a simple domain, the positional encoding method was not observed to outperform traditional PINNs. De Avila Belbute-Peres et al. [168] developed a hyper-PINN approach, where an additional network is trained on sets of model input parameters (e.g. geometric parameters, boundary conditions, material properties) and network weights from previously trained PINN models for each simulation configuration. This precursor network learns how to map from the input parameter space to the weights needed for the PINN model for that particular parameter configuration. For a new parameter set, the precursor produces the weights needed to directly use the PINN in inference mode, thus bypassing the need to train a new PINN model entirely.
As an alternative to generalising PINNs, reducing training time sufficiently can mean that training a new PINN for each problem is still a tractable approach. Kissas et al. [163] suggested transfer learning to solve this problem. Transfer learning consists of initialising new PINN models with the parameters from a model previously trained on a similar problem, which can drastically reduce training time. This is similar to providing an accurate initial guess in iterative numerical methods. A transfer learning approach could allow for a new PINN to be trained for each new simulation configuration (new geometry, boundary conditions, etc.) while still providing an acceleration relative to solving the problem with traditional numerical techniques. For this approach to make sense, the new PINN must be trained without the use of training data from solving the numerical model. To this end, Desai et al. [168] proposed a one-shot transfer learning approach for PINNs, which consists of training for a selection of PDEs and then reusing some of the trained layers for an unseen PDE, thereby reducing training time. Another approach to accelerate training is to incorporate a hyper-parameter into the activation functions in the PINN [169]. The hyper-parameter dynamically changes the loss function topology throughout training and is shown to accelerate PINN convergence and increase accuracy. Residual-based adaptive refinement can also accelerate training [170,171]. This approach increases the number of network training points in regions where the PDE residual is large during training, thus accelerating convergence.
How fast are physics-informed neural networks?
Once the PINN training time is sufficiently reduced, or the network is generalised appropriately, the question remains of how fast PINNs are relative to traditional numerical techniques. Table 6 collates the literature on PINNs where the authors commented on the acceleration offered by their approach.
Arthurs & King [160] and Hennigh et al. [161] conducted design optimisation studies using PINNs. Arthurs & King [160] developed a parametric PINN model for Navier-Stokes applications and ran a parameter sweep experiment to identify the value of the geometric input parameter that would lead to a target pressure drop. This is a typical many-query problem, where repeated model evaluations are required to identify some kind of threshold in the output variable. The trained PINN required only 7.6 s to perform the sweep over 81 parameter points, whereas the same sweep using FEM would have taken 400 times longer. Scaling up the number of parameter queries to 1 million only increases the run time to 11.1 s, highlighting the scalability of the PINN due to its fast inference speed. However, it should be noted that the PINN evaluation was only performed at two spatial points, as this is all that is required to calculate the pressure drop. This demonstrates a benefit of PINNs, in that they can be used to query specific regions of interest, but the FEM model inherently evaluates the entire spatial field, so directly comparing model efficiency is not fair in this case. Hennigh et al. [161] presented NVIDIA SimNet, an AI-accelerated multi-physics simulation framework based on PINNs. They studied a design optimisation problem where SimNet is able to reduce total compute time by approximately 45 000 times compared to a commercial solver and 150 000 times compared to OpenFOAM. Gao et al. [173] trained physics-informed CNNs for super-resolution of low-resolution flow field inputs using only knowledge of the conservation laws and boundary conditions. They applied this approach to 2D flow in a vascular domain and parametric super-resolution for internal flow with a parameterised inlet velocity profile. The model accurately refines the spatial resolution by 400 times for the flow fields with any new inlet BCs sampled in the 20-dimensional parameter space. The speed-up for the trained model compared to the highly resolved CFD model is 3364 times. Sun et al. [159] used data-free parametric PINNs for flow in 2D idealised stenotic and aneurysmal vessels. They achieved accurate results in all test problems with mean test errors of order 10^-4 to 10^-8 depending upon the problem and variable of interest. The authors noted that in the data-free PINN regime, implementing boundary and initial conditions with hard constraints improved performance when compared with the more widely used soft constraints. The trained PINN can be evaluated in 0.02 s, whereas the CFD model takes 40 s, yielding a speed-up of 2000 times. However, training the PINN took hundreds of times longer than an individual CFD simulation. The PINN will therefore only reduce total computational cost in scenarios where a large number of model evaluations are required, such as uncertainty quantification or design optimisation. Sun et al. [159] suggested that the speed-up offered by their approach will be increasingly advantageous when more complex applications are considered.
Physics-informed neural networks for vascular flow acceleration
PINNs are inherently suited to nonlinear problems due to the nonlinear function approximating capacity of the network. In fact, the earliest applications of PINNs include nonlinear PDEs, such as the Navier-Stokes and Schrödinger equations [36]. Since then, PINNs have been successfully applied to various cardiovascular fluid dynamics problems, all of which are governed by the nonlinear Navier-Stokes equations [162,163,174-178]. Individual complex geometries are relatively straightforward to handle with PINNs. Instead of the usual volumetric mesh required for traditional numerical techniques, PINNs require only spatio-temporal coordinates as input and do not require connectivity between these points. Volumetric meshes may still be required in order to generate simulation data to train the PINN, but if the PINN is used to generalise across geometries, then users can forego the time-consuming meshing step for some of the geometries [159]. Raissi et al. [179] used PINNs to infer flow fields from concentration fields in an image-derived 3D aneurysm model and Sun et al. [159] applied PINNs with hard boundary condition enforcement to model flow in idealised stenosis and aneurysm models. This highlights two geometrically relevant applications of PINNs.
PINNs can also tackle multi-physics problems.Figure 3 shows a single-physics PINN, but additional physics can be added by using a second network that maps from the same inputs as the first network (space and time) to different outputs (e.g.displacements and stresses for solid mechanics).It is therefore possible to calculate all the required derivatives in order to impose the governing equations and boundary conditions from each aspect of the multi-physics problem.This approach has been applied to an inverse Navier-Stokes and Cahn-Hilliard blood flow-thrombosis problem [177], multi-phase heat transfer [180] and FSI [181].
Basic PINNs are not commonly applied to extrapolating the associated PDE in time. Kim et al. [182] proposed a dynamic pulling method (DPM) to overcome this issue. DPM manipulates the PINN's gradients to ensure the PDE's residual loss term continuously decreases during training. This is shown to improve extrapolation in time for various test problems. Basic PINNs are also not well suited to problems spanning very large spatial regions. The issue with large spatial and temporal domains is that the domain can become arbitrarily large, leading to prohibitive training times. The primary approach to tackling these problems is incorporating domain decomposition into the PINN framework. Decomposing the large spatio-temporal domain into smaller sub-domains allows for sub-PINNs to be trained in each sub-domain. This improves training efficiency as well as reducing error propagation, allowing for domain-specific hyper-parameter tuning, increasing representation capacity and facilitating parallelisation [183].
Conservative PINNs (cPINNs), extended PINNs (XPINNs) and parallel-in-time PINNs (PPINNs) are three possible domain decomposition approaches that can tailor PINNs for multi-scale problems. cPINNs enforce conservation properties at spatial sub-domain boundaries using flux continuity and solution averaging across the interfaces [183]. XPINN is an extension to cPINN that applies to any type of PDE, not only conservation laws, and allows for decompositions in time and space [184]. Shukla et al. [185] compared cPINN and XPINN for a series of forward problems and found that for space decomposition, cPINNs are more efficient in terms of communication cost but that XPINNs are more flexible as they can handle time decomposition, a wider array of PDEs and arbitrarily shaped sub-domains. PPINNs are an extension to PINNs that mitigate the issue of long-time integration through time-domain decomposition and the use of a coarse-grained solver for long-time supervision [186]. The coarse-grained solver provides initial conditions for the PPINN in each time sub-domain. The coarse-grained solver needs to be fast enough to solve the long-time PDE cheaply with some degree of accuracy, hence reduced-order or simplified models are viable options. Meng et al. [186] stated that the PPINN method could be extended to spatial domain decomposition, with a coarse-grained solver used to estimate the global solution and then a series of PINNs applied in parallel to spatial sub-domains, thus increasing training efficiency relative to applying one PINN for the entire domain.
Conclusion
PINNs offer a mixture of numerical mechanistic models and data-driven phenomenological models.Training a PINN model can be expensive compared to running a high-fidelity numerical model, so they are most useful for acceleration when a once-trained PINN can be used for numerous parameter or geometry instances.Various methods have been studied to parameterise PINNs [159,160,166,167].An alternative approach is to use PINNs in conjunction with transfer learning techniques to quickly retrain the model for a new system instance [168].Employing techniques such as these can make PINNs a viable option for accelerating vascular flow simulations, particularly as PINNs (and extensions thereof) are well suited to handling nonlinear, geometrically complex, multi-physics and multi-scale modelling problems.
Other techniques
Given the relatively recent application of machine learning to simulation and the continued growth of the machine learning field, there are numerous other machine learning methods that have been or can potentially be applied to vascular flow acceleration.Reviewing them all in detail is beyond the scope of this study, and in most instances, there is insufficient relevant literature to do so, but we will briefly discuss several of these approaches and highlight how they may prove useful in the future for our target application.
Physics-agnostic machine learning simulation
An alternative to augmenting/constructing ROMs using machine learning or attempting to encode physics into machine learning is to build a machine learning model that directly predicts the haemodynamic quantities of interest from inputs such as images or geometries [37,155]. Some of these approaches are collated in table 7. One of the earliest examples of this is by Itu et al. [37], who used a machine learning model to predict FFR given parameterised coronary artery anatomy as input. The model consists of a FCNN with inputs corresponding to features of the coronary anatomy and FFR as the solitary output. Using this approach, the authors achieved an accuracy of 83.2% in correctly diagnosing positive ischaemia and reduced model run time by a factor >80. Liang et al. [156] trained a DNN to predict steady-state pressure and velocity fields in the thoracic aorta using 729 aorta geometries generated from a statistical shape model and CFD data generated for each geometry [193]. The DNN consisted of autoencoders to encode the aorta shapes and the fields of interest and another network to map between the encoded shapes and fields. The trained network predicted velocity and pressure fields with mean errors of 2.0% and 1.4%, respectively. DNN evaluation time is approximately one second, whereas each CFD simulation took approximately 15 min, giving a speed-up of approximately 900 times. Liang et al. [194] applied this network structure to identifying the geometry corresponding to a particular pressure field, thus demonstrating an application of this method to inverse modelling. Morales et al. [189] applied two FCNNs, one with prior dimensionality reduction and one without, to predict endothelial cell activation potential (ECAP) from left atrial appendage (LAA) geometry. Their models were trained on 210 LAA geometries using CFD data. With and without dimensionality reduction, the average error was 5.8% and 4.7%, respectively. The network with dimensionality reduction was approximately 50 times faster than the other network when performing cross-validation. Gharleghi et al. [191] used a machine learning surrogate to replace a transient CFD solver in order to calculate WSS in the left main bifurcation of the coronary artery. The network requires the steady-state CFD solution for a given case as an input, but can then predict the transient WSS to an accuracy of greater than 95% within 0.2 s using a CPU and 0.001 s using a GPU. Rutkowski et al. [155] trained a CNN to map from 4D flow phase-contrast magnetic resonance images to highly resolved flow fields using CFD data as labels. The focus of this work was fast and accurate flow field generation directly from images, foregoing the need for time-consuming and expensive simulation set-up and execution. The network successfully de-noised flow images, improved velocity field accuracy and enhanced near-wall flow measurements. Ferdian et al. [190] similarly developed a residual network that was applied to super-resolution of 4D flow magnetic resonance images of aortic blood flow. Their approach was able to predict flow rates in a real patient to greater than 95% accuracy within 40-90 s depending on the image size.
Various physics-agnostic machine learning simulation methods have been able to accurately and efficiently predict flow fields and flow-derived quantities in vascular flow applications. Provided that a FOM can be constructed and that sufficient data can subsequently be generated, the breadth of vascular flow problems that could be accelerated by these surrogate models is large. However, the vast amount of data required to generate accurate results could constrain these approaches, particularly in vascular flow applications where geometric data are typically derived from medical images that can be expensive to acquire and difficult to process. This is highlighted by Liang et al. [156], Morales et al. [189] and Gharleghi et al. [191] relying upon data augmentation strategies to extend their cohorts of real patients into larger cohorts of mostly synthetic patients. While this is necessary to create sufficiently large datasets, there is a risk that the augmentation may produce unrealistic results, as demonstrated by Morales et al. [189] discarding 30% of their initial training samples due to unrealistic flow features. It is possible that data augmentation approaches from the wider machine learning field, such as variational autoencoders or generative adversarial networks, could provide techniques to generate highly realistic synthetic datasets [195-197]. Another issue with physics-agnostic machine learning simulation methods is that the up-front cost of running CFD simulations in large cohorts to generate training data and the subsequent cost of training the complex network can lead to large overall costs. Despite these challenges, machine learning surrogate models are able to make predictions in previously unseen geometries due to being trained over an extensive array of different geometries. This is a crucial challenge in many vascular flow modelling problems that most acceleration techniques do not address with such generality.
Point network simulation
Typical convolutional deep learning architectures require regular input data, such as images. Point-Net was developed to allow the direct use of irregular point cloud data with techniques typically applied to regular input data [198]. A benefit of using a Point-Net architecture is its ability to generalise well to new input point clouds. For vascular flow applications, this means generalising to unseen geometries, which can lead to large savings in simulation times. Point-Net-based models have been applied to cardiovascular flow problems. Li et al. [38] used a Point-Net-based model to predict steady-state haemodynamics before and after coronary artery bypass surgery. Their approach yielded a prediction accuracy for velocity and pressure fields of around 90%. The time to evaluate the deep learning model was 600 times less than for the CFD model (1 s versus 10 min), although 40 h of training time was required prior to using the former. The same authors also applied their Point-Net-based model to predict steady-state aneurysm haemodynamics before and after treatment with a porous-medium flow-diverting stent model [39]. A similar prediction accuracy was found (>87%) and the calculation time was reduced by a factor of 1800. Kashefi & Mukerji [199] developed a physics-informed Point-Net (PIPN) and evaluated it for steady-state incompressible flow problems. The acceleration factor is approximately 35 for trained PIPN evaluation compared to the standard numerical solver. Compared to PINNs, the accuracy of PIPNs is similar when trained to the same convergence criterion, but the computational cost of PINNs is 18 times greater. This factor is increased when exploiting the inherent generalisation of PIPN to model new geometries, as in this scenario the PINN simulation time is ∼12 hours using 20 processors and the PINN will often need to be re-trained. PIPN is a recent technique that has not yet been applied to vascular flow.
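A minimal sketch of the Point-Net idea, i.e. shared per-point layers combined with a permutation-invariant global pooling, is given below. It is a generic illustration rather than the model of Li et al. [38] or the PIPN of Kashefi & Mukerji [199]; the layer widths, output fields and point counts are assumptions.

```python
# Minimal PointNet-style sketch for per-point field prediction on an irregular
# point cloud (e.g. mesh nodes of a vessel). Sizes and outputs are assumed.
import torch
from torch import nn

class TinyPointNet(nn.Module):
    def __init__(self, out_dim=4):   # e.g. (u, v, w, p) per point (assumed)
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128), nn.ReLU()
        )
        self.head = nn.Sequential(
            nn.Linear(128 + 128, 128), nn.ReLU(), nn.Linear(128, out_dim)
        )

    def forward(self, xyz):                                   # xyz: (batch, n_points, 3)
        local_feat = self.point_mlp(xyz)                      # shared per-point features
        global_feat = local_feat.max(dim=1, keepdim=True).values  # permutation-invariant pooling
        global_feat = global_feat.expand(-1, xyz.shape[1], -1)
        return self.head(torch.cat([local_feat, global_feat], dim=-1))

net = TinyPointNet()
cloud = torch.randn(2, 1024, 3)      # two synthetic geometries, 1024 nodes each
fields = net(cloud)                  # (2, 1024, 4) predicted field values
```

Because the per-point layers are shared and the pooling is symmetric, the same trained network can be evaluated on point clouds of different sizes and orderings, which is the property that gives this family its generalisation to unseen geometries.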
Operator networks
The function approximation capacity of neural networks is well known, but it is also possible for neural networks to approximate operators that map between function spaces [200]. The first and most general operator network is the deep operator network (DeepONet) [40]. DeepONet consists of a branch network, which encodes the input function space, and a trunk network, which encodes the domain of the output functions. The inputs to the branch network are function values at fixed sensors, and the inputs to the trunk network are the spatio-temporal coordinates at which to evaluate the operator. The output of the trunk network is a set of basis functions, and the output of the branch network is the basis coefficients [41]. Combining the basis coefficients and functions using the dot product gives the operator network output. Following training, the DeepONet approximates the underlying solution operator for the input function and coordinate spaces. Other operator learning methods include the Graph Kernel Network and Fourier Neural Operator [201,202]. Physics-informed extensions to operator networks that can reduce the required training data have also been studied [41,203].
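A minimal sketch of this branch/trunk structure is given below; the network widths, sensor count and number of basis functions are illustrative assumptions rather than values used in any cited study.

```python
# Minimal sketch of the DeepONet structure described above: a branch network
# encoding the input function sampled at fixed sensors and a trunk network
# encoding evaluation coordinates, combined with a dot product.
import torch
from torch import nn

class TinyDeepONet(nn.Module):
    def __init__(self, n_sensors=50, n_basis=40):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, 64), nn.Tanh(), nn.Linear(64, n_basis))
        self.trunk = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, n_basis))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, coords):
        # u_sensors: (batch, n_sensors) input function values at fixed sensor locations
        # coords:    (batch, 2) spatio-temporal points (x, t) at which to evaluate the operator
        coeffs = self.branch(u_sensors)   # basis coefficients
        basis = self.trunk(coords)        # basis functions evaluated at coords
        return (coeffs * basis).sum(dim=-1, keepdim=True) + self.bias

net = TinyDeepONet()
out = net(torch.randn(8, 50), torch.rand(8, 2))   # (8, 1) operator outputs
```

Training fits the branch and trunk networks jointly against pairs of input functions and solution values, after which evaluating the learned operator at a new input function and coordinate is a single forward pass.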
Operator learning approaches have been applied to various linear and nonlinear problems involving explicit and implicit operators [40]. Cai et al. [187] used DeepONets for electroconvection, which is a multi-physics problem involving coupled flow, electric and concentration fields. They noted that training the DeepONets takes approximately 2 h, but the evaluation time once trained is less than 1 s, representing a speed-up of approximately 1000 times when compared with the NekTar solver used to generate training data. Mao et al. [188] used DeepONet for a hypersonic flow problem involving a coupling between flow and finite-rate chemistry. They found that the trained network was five orders of magnitude faster than the CFD solver used to generate the data. Furthermore, Cai et al. [187] and Mao et al. [188] combined multiple DeepONets to build a DeepM&MNet, which is specifically designed to handle multi-scale and multi-physics modelling. DeepONets have also been used as a surrogate for expensive microscopic models, thus accelerating the coupling between micro- and macro-scale models [204]. Recent work has also investigated using physics-informed DeepONets for long-time integration of parametric partial differential equations [205]. Applications of operator learning to vascular flow problems are limited, but two examples are by Yin et al. [192] and Arzani et al. [206]. Yin et al. [192] applied DeepONets to simulation of aortic dissection, a complex fluid-structure interaction problem. The DeepONet was able to make predictions in less than 1 s, whereas the FEM used to produce training data took approximately 12 h to run using 20 processors. Arzani et al. [206] applied an operator learning surrogate model to 2D cardiovascular flow applications, but the focus of this work was on interpretability and generalisation rather than acceleration.
Compared to function-based learning strategies, a benefit of operator learning methods is that they demonstrate small generalisation errors [40]. Furthermore, DeepONets have been shown to overcome the curse of dimensionality, in that they do not require exponentially more training data to improve the approximation accuracy [207]. These techniques can potentially address many of the inherent complexities of vascular flow, particularly the multi-physics and multi-scale nature of the problem, but they have not yet seen widespread adoption.
Summary
This review presents simulation acceleration methods based on ROM and machine learning for the target application of vascular flow. The review focuses on five complexities that are common in vascular flow problems, but which are also found across a multitude of other domains, namely: (i) nonlinearity, (ii) geometric complexity, (iii) multi-physics, (iv) multi-scale in time and (v) multi-scale in space. Each complexity presents unique challenges for vascular flow simulations and their acceleration. The ROM methods discussed in this review are spatial dimension reduction (SDR), POD and dynamic mode decomposition (DMD) ROMs, as well as brief overviews of reduced basis (RB) methods and proper generalised decomposition (PGD). The machine learning approaches reviewed are machine learning-augmented ROMs, machine learning-based ROMs, physics-informed neural networks (PINNs), physics-agnostic networks, Point-Nets and operator networks. We found that all acceleration methods are well suited to some of the complexities of vascular flow and limited for others, as highlighted in table 8.
Reduced order modelling
SDR methods are suitable for capturing spatial multi-scale behaviour and some nonlinear and multi-physics effects, but only in simplified geometries where axisymmetry or other assumptions are valid [45]. These methods calculate bulk quantities instead of full spatio-temporal fields and are not designed for temporal multi-scale problems. SDR methods are widely used in various vascular applications, with one of their most common uses being the derivation of boundary conditions for 3D models [45,63]. Due to their simplistic nature, SDR models can provide large acceleration ranging from two to six orders of magnitude [54].
POD-based ROMs branch into two categories depending upon whether they combine POD with projection or interpolation. POD-Projection and POD-Interpolation ROMs are able to calculate 3D time-varying solution fields in individual complex geometries. POD-Projection has been applied to various vascular flow problems [30,80-82,92,100,101]. Both approaches are suitable for multi-physics problems. For nonlinear problems, the projection applied to the governing equations does not fully de-couple the ROM and the full-order model, limiting the acceleration offered by POD-Projection ROMs. POD-Interpolation does not depend upon the governing equations of the system, so it does not suffer the same limitations for nonlinear applications. However, POD-Interpolation ROMs have been shown to generalise less effectively than their projection-based counterparts [96]. Neither POD-Projection nor POD-Interpolation are well suited to multi-scale modelling in time, with the long-term stability of POD modes not guaranteed. Finally, while neither approach is inherently well suited to spatial multi-scale modelling, coupling the POD-based ROM to an SDR ROM could produce a model that can quickly and accurately provide full spatio-temporal fields in a region of interest while capturing the influence of the systemic vasculature. Due to the non-iterative nature of POD-Interpolation, it can typically provide large accelerations ranging from two to six orders of magnitude, whereas POD-Projection acceleration ranges from one to three orders of magnitude [77,97,99,100].
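The following sketch illustrates the common starting point of both approaches, namely extracting a POD basis from a snapshot matrix, together with the POD-Interpolation route of interpolating modal coefficients over a parameter. The snapshot data, parameter and mode counts are synthetic placeholders chosen for illustration.

```python
# Minimal sketch: building a POD basis from snapshots, then the
# POD-Interpolation route (interpolating modal coefficients over a parameter).
import numpy as np
from scipy.interpolate import RBFInterpolator

n_dof, n_snapshots, n_modes = 5000, 40, 8
params = np.linspace(0.0, 1.0, n_snapshots)[:, None]   # one scalar parameter (assumed)
snapshots = np.random.rand(n_dof, n_snapshots)          # columns = full-order solutions

# POD basis from the thin SVD of the snapshot matrix, truncated to n_modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :n_modes]                                   # (n_dof, n_modes)

# Modal coefficients of each snapshot in the reduced basis.
coeffs = basis.T @ snapshots                             # (n_modes, n_snapshots)

# POD-Interpolation: interpolate coefficients over the parameter space and
# lift the prediction back to the full space.
interp = RBFInterpolator(params, coeffs.T)
new_coeffs = interp(np.array([[0.37]]))                  # (1, n_modes)
reconstructed = basis @ new_coeffs.T                     # approximate full-order field

# POD-Projection would instead substitute u ≈ basis @ a(t) into the governing
# equations and solve the resulting small system for a(t) at each time step.
```

The final comment marks the fork between the two families: interpolation never touches the governing equations (hence the larger accelerations but weaker generalisation), while projection keeps them and so remains partially coupled to the full-order operators for nonlinear terms.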
Similarly to POD-based ROMs, DMD ROMs can provide full spatio-temporal fields in individual geometries and could be coupled to SDR models to capture the influence of large regions of the vasculature. DMD ROMs are less common than POD-based approaches, so application to multi-physics simulation acceleration has not been thoroughly investigated. The main benefit of DMD ROMs is that they are designed to approximate the temporal dynamics of the system, which makes them well suited to the long-time model integration required in temporal multi-scale problems.
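A minimal sketch of (exact) DMD applied to a synthetic snapshot sequence is given below, showing how the identified modes and eigenvalues can be advanced forward in time; all data and truncation choices are placeholders.

```python
# Minimal sketch of exact dynamic mode decomposition on a snapshot sequence,
# and how the result is advanced forward in time. Data are synthetic.
import numpy as np

n_dof, n_steps, r = 2000, 60, 10
data = np.random.rand(n_dof, n_steps)            # columns = snapshots at successive times
X1, X2 = data[:, :-1], data[:, 1:]               # time-shifted snapshot pairs

U, s, Vh = np.linalg.svd(X1, full_matrices=False)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r, :].conj().T

A_tilde = Ur.conj().T @ X2 @ Vr @ np.linalg.inv(Sr)      # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)                       # temporal dynamics
modes = X2 @ Vr @ np.linalg.inv(Sr) @ W                   # DMD modes

# Amplitudes from the first snapshot, then advance k steps via the eigenvalues.
b = np.linalg.lstsq(modes, X1[:, 0].astype(complex), rcond=None)[0]
k = 100
prediction = (modes @ (eigvals**k * b)).real              # state k steps ahead
```

Because the temporal behaviour is captured explicitly by the eigenvalues, extrapolating far beyond the training window costs only a few matrix-vector products, which is what makes the approach attractive for long-time integration.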
Other techniques include RB methods and PGD. RB methods are a similar approach to POD-Projection ROMs and have been successfully applied to various nonlinear, multi-physics, geometrically complex problems [126-128]. RB methods have been applied to vascular flow problems such as flow field calculation in 2D parameterised carotid arteries, inverse modelling in stenosed arteries and flow in femoropopliteal bypass problems [123-125]. The acceleration offered by RB methods ranges from two to three orders of magnitude. PGD sits apart from most ROM methods, as it uses separated representations and successive enrichment a priori instead of applying dimensionality reduction to snapshots from the full-order model a posteriori in order to construct the reduced basis [129]. PGD has been applied to Navier-Stokes and rheology applications with acceleration ranging from one to two orders of magnitude [131,133]. This approach is well suited for separable problems, whether the separation is in space or time [130,134,135]; however, it has not been applied as widely as other ROM methods and has seen no application to vascular flow simulation acceleration.
Machine learning simulation acceleration
Machine learning offers an array of approaches for simulation acceleration. A common approach is to use machine learning in conjunction with ROM methods, where the learning algorithm augments or replaces part of the ROM method. Neural networks can be used to provide a powerful high-dimensional interpolation algorithm in the POD-Interpolation ROM approach [143,144,146] or to overcome the difficulties POD-Projection ROMs encounter for nonlinear equations [148,149]. Autoencoders can also replace the dimensionality reduction common across most ROM methods [34,35]. Another approach is to build a machine learning ROM based on autoencoders and feedforward neural networks while using POD for dimensionality reduction of the data passed to the machine learning ROM [145,153,154]. Machine learning can overcome some of the limitations of traditional ROMs and broaden the scope of problems for which the ROM methods are suitable.
PINNs are a machine learning-based simulation method that lies at the intersection of equation-based and data-driven modelling [36]. To be used for simulation acceleration, PINNs need to be able to generalise across new input parameters and/or geometries, or they need to be sufficiently fast to train that a new PINN can be constructed for each new problem instance. The former can be achieved by adding extra inputs to the network or by constructing a precursor network that handles the parametric dependence in the problem [160,166,167]. Faster training times can be achieved through techniques such as transfer learning, trainable activation functions and residual-based adaptive refinement [163,168,170,171]. When used in an acceleration context, such as many-query parameter sweeps, PINNs have been demonstrated to reduce total simulation time by two to five orders of magnitude, depending upon the application and the number of queries [160,161]. PINNs and their extensions are suitable for all of the complexities that commonly occur in vascular flow problems and have been successfully applied to aneurysm flow modelling and synthesis of non-invasive flow measurements in a bifurcating vessel model [163,179].
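The following sketch illustrates the composite loss that characterises a PINN (cf. figure 3c), here for a generic 1D viscous Burgers problem rather than any of the vascular applications cited above; the network size, collocation points and data term are assumed for illustration.

```python
# Minimal sketch of a physics-informed loss for the 1D viscous Burgers equation
# u_t + u u_x - nu u_xx = 0, combining a data term and a PDE residual term.
import torch
from torch import nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
nu = 0.01  # assumed viscosity

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    grad = lambda out, wrt: torch.autograd.grad(out, wrt, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    return u_t + u * u_x - nu * u_xx

# Collocation points for the PDE loss and (synthetic) labelled points for the data loss.
x_c, t_c = torch.rand(256, 1), torch.rand(256, 1)
x_d, t_d = torch.rand(32, 1), torch.zeros(32, 1)
u_d = -torch.sin(torch.pi * x_d)                     # assumed initial-condition data

loss = (pde_residual(x_c, t_c) ** 2).mean() + \
       ((net(torch.cat([x_d, t_d], dim=1)) - u_d) ** 2).mean()
loss.backward()                                       # gradients for an optimiser step
```

Parametric or geometric generalisation, as discussed above, would amount to appending extra inputs (e.g. a stenosis severity or inflow parameter) to the network and sampling them during training, so that one trained model answers many queries.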
Alternative machine learning-based simulation techniques include physics-agnostic methods, Point-Nets and operator networks. Physics-agnostic simulation methods have been applied to vascular flow problems such as fractional flow reserve prediction in coronary arteries, steady-state pressure and velocity prediction in the thoracic aorta, inverse geometry prediction in the aorta, endothelial cell activation potential prediction and prediction of flow fields from magnetic resonance images [37,155,156,189,194]. While these approaches can accelerate solution evaluations by two to three orders of magnitude and tend to generalise well to previously unseen geometries, they require large datasets and the network outputs do not necessarily respect the underlying physics in the problem. Point-Nets facilitate the use of powerful convolutional deep learning architectures on datasets consisting of point clouds. They have been used for steady-state haemodynamics predictions before and after coronary artery bypass surgery and aneurysm flow diversion, producing accurate predictions and reducing prediction time by two to three orders of magnitude compared to the computational fluid dynamics model [38,39]. Point-Nets generalise well to new geometries despite paying no attention to underlying governing equations, but require large datasets for training. Physics can inform Point-Nets, but this is a new technique with very few use cases to date [199]. Operator learning techniques, such as DeepONets, are other powerful simulation techniques that have demonstrated strong generalisation capabilities, the ability to accelerate by two to five orders of magnitude, and the ability to overcome the curse of dimensionality [40,187,188]. However, operator learning is an emerging technique that has only seen a small number of applications to vascular flow problems to date [192,206].
Challenges
Despite years of research on ROMs and the recent application of machine learning to simulation acceleration, applying these techniques to real-world vascular flow problems remains challenging. Three key challenges identified by this review are:
1. The development of accelerated simulation methods that can handle large geometric variability, facilitating their application to previously unsimulated and dynamically varying geometries.
2. The development of accelerated simulation methods for multi-scale problems, enabling seamless evaluation of small- and large-scale processes over short- and long-term time scales.
3. The development of a benchmarking framework for accelerated simulation methods, allowing for systematic quantification and comparison of new approaches and driving transparent progress in the field.
A critical challenge to widespread adoption of simulation acceleration in vascular flow applications is incorporating large geometric variability into the models. Whether performing large-scale testing of medical devices in cohorts with varying anatomy, simulating medical device responses as part of treatment planning for an individual patient, or providing real-time surgical feedback during an operation, the ability of the accelerated model to accurately evaluate haemodynamics in previously unsimulated or dynamically changing geometries is essential. Efforts to introduce geometric variability into vascular flow ROMs have mainly focused on developing parameterised models [30,82,92,100]. While these approaches yielded accurate results, acceleration was of only one order of magnitude in most cases, with the largest acceleration roughly three orders of magnitude. Furthermore, models typically only used a small number of parameters describing features such as vessel diameter or stenosis severity and position [30]. In pathologies with highly complex shapes, such as aneurysms, identifying descriptive parameterisations with few parameters may not be possible. This would be further exacerbated by device modelling or fluid-structure interaction. A possible approach to overcome this is to use domain decomposition ROMs that can partition an unseen geometry into sub-geometries that bear resemblance to the geometries for which snapshots were previously calculated [208,209]. This approach has been applied to flow over urban landscapes and pipe flow problems so far, but could potentially be applied to vascular flow problems, where the sub-geometries could be a set of commonly required vascular segments and configurations. Machine learning approaches such as physics-agnostic simulation methods [156,189,191] and Point-Nets [38,39] have demonstrated the ability to generalise to unseen geometries by using large sets of mostly synthetic geometries and corresponding simulation data for training. These are the most promising attempts to provide generalisation across geometries in vascular simulation acceleration, but they are still hampered by the amount of data required and the risk that data augmentation strategies can lead to unrealistic results. Informing these approaches with physics could potentially reduce the data requirement and increase the reliability of the results, but there have been few studies into this to date [199].
Multi-scale problems represent the second challenge for accelerated simulation of vascular flow models. When using computational models to inform treatment decisions or in assessing medical device safety and efficacy, short- and long-term metrics are likely to be required. Depending upon the specific problem, models of small-scale processes like thrombosis or endothelialisation may need to be coupled to models of large-scale haemodynamic effects. In principle, DMD ROMs are well suited to long-term solution evaluation, but the few studies using this approach for vascular flow applications have focused on solution reconstruction rather than long-term prediction [105,109]. Domain decomposition PINN methods, such as cPINNs, XPINNs and PPINNs, are suitable for multi-scale problems in time and space, but have also seen little use in vascular flow applications [183-185]. DeepONets have also shown great potential for multi-scale applications. Wang & Perdikaris [205] used DeepONets for long-time prediction of partial differential equations, while Cai et al. [187] and Mao et al. [188] used modular DeepONets trained individually on single-physics single-scale problems to facilitate multi-physics and multi-scale modelling for electroconvection and flow-chemistry applications. Modular DeepONets are referred to as DeepM&MNets (Deep Multi-Physics & Multi-Scale Networks) and represent a promising approach towards the challenge of long-time evaluation of multi-physics and multi-scale models, which are crucial in vascular flow applications.
The final challenge we want to highlight is the need for a benchmarking framework for assessing simulation acceleration methods. Throughout this review, quantitatively comparing different approaches has proved challenging due to the following factors that vary across studies: (i) the amount of training data; (ii) training details, e.g. stopping/convergence criteria, number of modes retained in the model; (iii) accuracy and acceleration metrics, e.g. error metrics and variables of interest, acceleration relative to the FOM or to the entire offline cost; (iv) target applications. To overcome this challenge, we propose the development of a benchmarking framework for use in the simulation acceleration community. This should consist of a series of example problems of varying nature and complexity, datasets for each example problem for use in training, specified allowances and/or metrics for the computational cost of data generation and training, and metrics defined for assessment of accuracy and acceleration. The example problems should also be motivated by real-world problems where a balance often must be struck between the amount of training data available for the machine learning model and the task for which it is to be used (e.g. many-query tasks, control problems, real-time prediction, etc.). Development and subsequent use of this framework would enable objective assessment and comparison of methodological advances in the field. Inspiration could also be taken from the medical image analysis field, where challenge problems are commonly proposed with publicly available data and predefined metrics to assess model performance for tasks like registration and segmentation [210,211].
Outlook
Accelerated vascular flow models are essential for applications such as in silico trials (ISTs), patient-specific treatment planning, and real-time surgery feedback. ISTs can require the evaluation of nonlinear, multi-physics, multi-scale models in large cohorts of virtual patients, who are anatomically and physiologically diverse, undergoing treatment with different devices [3,212]. Patient-specific treatment planning requires similarly complex models that can be evaluated in an individual patient in a reasonable time frame given the prognosis of the pathology in question. Real-time surgery feedback requires complex model evaluation in individual patients fast enough to provide haptic feedback or visualisations to the surgeon performing the procedure [44]. These three applications highlight some of the impact that accurate and efficient vascular flow models can have on patient care, which makes developing these approaches a worthwhile endeavour. This review has identified that the key challenge to be addressed is the development of multi-scale simulation acceleration methods that can handle the large geometric variability inherent to vascular flow problems. We also suggest that, to achieve quantifiable and transparent progress in simulation acceleration, the community should develop a benchmarking framework consisting of a series of exemplar problems with standardised metrics for assessing acceleration and accuracy. This would benefit both the simulation acceleration and the vascular flow modelling communities.
Figure 1. Vascular flow modelling is a multi-physics, multi-scale problem where nonlinearity and geometric complexity frequently arise.
Figure 2. Taxonomy of various simulation acceleration methods reviewed in this paper.
Figure 3. Selected neural network designs that can be used for simulation acceleration. (a) A fully connected neural network with two inputs, two hidden layers with four neurons per layer and one output. (b) A fully connected autoencoder, consisting of an encoder, a latent space and a decoder. (c) A physics-informed neural network, where physical constraints based on partial differential equations (PDEs) and boundary conditions (BCs) are included in the loss function of the network. x is position, t is time, u is velocity, p is pressure, superscript D or B means data or boundary point, F_i are N residual equations.
Table 1. Various ROM papers using SDR for vascular flow problems. Acceleration is measured by comparing the time taken for one ROM evaluation with one FOM evaluation; this is the case for all tables presenting acceleration statistics, unless otherwise stated. Calculated by assuming a linear relationship between number of CPUs and simulation execution time. Where accuracy is not reported, only qualitative ROM-FOM agreement was presented in the referenced paper. AW, area-weighted; CCA, common carotid artery; FFR, fractional flow reserve; NT, normalised time (WCT × number of computation tasks); ROM, reduced order model; SDR, spatial dimension reduction; WCT, wall clock time.
Table 2. Various ROM papers using POD for vascular flow and other selected problems. Maximum error estimated from graph in paper and used to calculate minimum accuracy (which occurs close to systole). Authors report computational savings of 99% (therefore an acceleration factor of 100). The 1530 acceleration factor is calculated from simulation times presented for ten patients in table 2 of [100]. Mean acceleration calculated across three test cases in table 1 of [103]. DEIM, discrete empirical interpolation method; FFR, fractional flow reserve; FSI, fluid-structure interaction; GP, Galerkin projection; LVAD, left ventricular assist device; p, pressure; Δp, pressure drop; PA, pulmonary artery; PI, pulsatility index; POD, proper orthogonal decomposition; RBF, radial basis functions; ROM, reduced order model; ToF, tetralogy of Fallot; u_x, x-component of velocity; WSS, wall shear stress.
DMD is a less well-established technique than POD, so few (if any) attempts have been made to tackle this problem.
Table 3. ROM papers comparing POD-Projection and POD-Interpolation approaches for various applications.
likely that they can provide more acceleration than this in some scenarios. Furthermore, Lu & Tartakovsky
Table 4. ROM papers using DMD for various applications. Authors include offline calculation times in DMD computational time, hence the ROM sometimes being slower than the FOM [115].
Table 5. Machine learning ROM studies for various applications. CAE, convolutional autoencoder; CPU, central processing unit; CVRC, continuously variable resonance combustor; DEIM, discrete empirical interpolation method; DL, deep learning; FCNN, fully connected NN; FOM, full-order model; GP, Galerkin projection; LDC, lid-driven cavity; LSTM, long short-term memory; ML, machine learning; NN, neural network; PDE, partial differential equation; POD, proper orthogonal decomposition; RN, residual network; ROM, reduced order model; SN, sequential network; SST, sea surface temperature.
applied to the same problem. However, if the PINN relies on data generated by the numerical model and requires a potentially expensive training procedure prior to use, then the question of how to use PINNs for acceleration remains. In order to prove a useful and powerful tool for simulation acceleration, PINNs will either need to be able to generalise to unseen problems in a similar fashion to how parametric ROMs operate, or they will need to have a sufficiently small training time such that training a new PINN model is more efficient than solving a traditional numerical model. Generalising a PINN model can require adding additional parameters into the training procedure. These parameters could describe geometry, boundary conditions, or material properties and there are various ways to incorporate this information into the PINN. The most straightforward approach is to include additional network input parameters. Arthurs & King
Table 6. Various PINN papers that mention the acceleration capability of their method.
Table 7. Various machine learning simulation papers applied to vascular flow problems that mention the acceleration capability of their method. Ten-fold cross-validation used with 300 geometries; one round of cross-validation on 30 geometries took 30 s or 25 min for each model, which is used to calculate the evaluation time for one geometry and compared to the reported 2 h CFD simulation time to calculate acceleration factors. P-V accuracy taken for test cases with damage included, from table 3 of [192]. Network requires the steady-state CFD result as input, which takes <2 min to calculate; with this included, the acceleration factor is approximately 90. AE, autoencoder; CFD, computational fluid dynamics; CPU, central processing unit; DeepONet, deep operator network; ECAP, endothelial cell activation potential; FCNN, fully connected NN; FDS, flow-diverting stent; FOM, full-order model; FFR, fractional flow reserve; GPU, graphics processing unit; LAA, left atrial appendage; MRI, magnetic resonance images; MSE, mean-squared error; NN, neural network; PCA, principal component analysis; P-V, pressure-volume.
Table 8. Reduced order modelling and machine learning acceleration methods and their suitability for modelling various vascular flow complexities. RB, PGD and Point-Net simulation acceleration approaches were briefly reviewed in this paper but not in sufficient detail to include in this table. ✓, method is suitable; ∼, somewhat suitable; ✗, not suitable. In isolation the methods are not well suited for spatial multi-scale problems, but they can be coupled to patient-specific SDR models so that boundary conditions are derived from large portions of the vasculature. Includes various types of NN used in conjunction with the ROM approach, such as FCNNs or RNNs. Physics-agnostic approaches are not only suitable for complex individual geometries, but are capable of generalising to previously unseen geometries. While suitable for multi-scale problems in principle, the data-hungry nature of physics-agnostic approaches may lead to prohibitive data requirements for problems spanning large spatial and time scales. Basic PINNs are not designed for multi-scale problems, but extensions such as cPINNs, XPINNs and PPINNs are. cPINNs, conservative PINNs; DeepONet, deep operator network; DMD, dynamic mode decomposition; FCNN, fully connected NN; PGD, proper generalised decomposition; POD, proper orthogonal decomposition; POD-I, POD-Interpolation; POD-P, POD-Projection; NN, neural network; PINN, physics-informed NN; PPINNs, parallel-in-time PINNs; RB, reduced basis; RNN, recurrent NN; ROM, reduced order model; SDR, spatial dimension reduction; XPINNs, extended PINNs.
"Medicine",
"Engineering",
"Computer Science"
] |
Dataset on the Acceptance of e-learning System among Universities Students' under the COVID-19 Pandemic Conditions
The COVID-19 pandemic has produced an unprecedented change in the educational system worldwide. Besides the economic and social impacts, there is the dilemma of students' acceptance of the new "e-learning" educational system within educational institutions. In particular, university students have had to handle several kinds of environmental, electronic and mental struggles due to COVID-19. To capture the current circumstances of more than two hundred thousand Jordanian university students during COVID-19, students were randomly selected to respond to an online survey distributed through universities' portals and websites between March and April 2020. At the end of the data-gathering process, we had received 587 records. The dataset includes 1) demographics of students and 2) students' perspectives concerning the factors influencing their intention to use the e-learning system within the Jordanian universities context. Data were analyzed using Partial Least Squares - Structural Equation Modelling (PLS-SEM). The results confirmed the positive direct effect of the variables (subjective norm, perceived ease of use, and perceived usefulness) on the students' intention to use the e-learning system. The results also confirmed the mediating effect of perceived usefulness and perceived ease of use between subjective norm and the behavioral intention to use the e-learning system, which was partially supported.
© 2020 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license.
Specifications
The Jordanian university students' intention to use the e-learning system during the COVID-19 pandemic; subjective norm (SN), perceived ease of use (PEU), and perceived usefulness (PU); technology acceptance model (TAM).
Type of data: The raw data are available in Excel Workbook format. The analysed data in this article are provided in tables and figures.
How data were acquired: Online survey.
Data format: Raw.
Parameters for data collection: The target population of this work was Jordanian university students affected by the COVID-19 pandemic. In light of the university closures, all Jordanian universities converted to the e-learning system; as a result, more than 200 thousand students from various universities had to cope with a new educational system unprecedentedly.
Description of data collection: We collected data using an online survey through universities' portals and websites between March and April 2020. The participants were Jordanian university students. The questionnaire is provided as a supplementary file.
Data source location: Jordan.
Data accessibility: Repository name: Mendeley Data. Data identification number: 1. Direct URL to data: https://bit.ly/3i8KkI5.
Value of the Data
• The dataset is valuable because it can be utilized as a reference for understanding the effect of the factors, namely subjective norms, perceived ease of use and perceived usefulness, on students' intention to accept the e-learning system.
• The data present the natural flow of measuring students' intention to accept/use the e-learning system during the COVID-19 pandemic, which can be replicated in other countries.
• The data can help in understanding the factors that affect e-learning system acceptance by integrating subjective norms with the extended Technology Acceptance Model (TAM), and by using both perceived ease of use and perceived usefulness as mediators between subjective norms and e-learning system acceptance.
• Finally, the data are useful for all parties involved, especially universities' management, the ministry of higher education, decision-makers in a country, and researchers and practitioners in the e-learning field.
Data description
The data presented in this paper focus on students in Jordanian universities who use the e-learning system. The research was conducted according to, and complies with, all regulations established in the ethical guidelines of the Jordanian Ministry of Higher Education and Scientific Research. The data file (spreadsheet) accompanying this article consists of 587 rows and 24 columns. Every row represents an individual's response to the survey. A five-point scale was applied to allow the respondents to indicate how much they disagree or agree with a certain statement, so a numerical value in the dataset file indicates the respondent's level of agreement, with 1 being "strongly disagree" and 5 being "strongly agree". Demographic data regarding the Jordanian university students' profile indicated that 394 respondents were male and 193 were female. Regarding age, 148 (25%) of the respondents were between 18 and 20 years old, 311 (53%) were between 21 and 23 years old, and 128 (22%) were more than 23 years old. The majority of respondents were bachelor students, 572 (97%), followed by master's students, 15 (3%). Of the 587 students that participated in the study, 111 are from the Faculty of Business, 172 from the Faculty of Languages and Arts, 39 from the Faculty of Engineering, 116 from the Faculty of Pharmacy, 125 from the Faculty of Educational Sciences, and 24 from the Faculty of Law. Further, each variable's items in the questionnaire were given a label, as shown in Table 1. The majority of respondents were undergraduate students (572). Of the 587 students that participated in the study, 172 are from the Faculty of Languages and Arts, 116 from the Faculty of Pharmacy, 111 from the Faculty of Business, 77 from the Faculty of Science and Information Technology, 48 from the Faculty of Educational Sciences, 39 from the Faculty of Engineering, and 24 from the Faculty of Law. Table 1 shows the test of the measurement model (outer model), including composite reliability, indicator reliability and average variance extracted. With regard to composite reliability, this criterion was assessed to verify internal consistency reliability. The values showed the construct scores are at an acceptable level of reliability [2,3]. Hence, internal consistency was confirmed. For the factor loadings, all items were higher than 0.70. Besides, the validity of the instrument was demonstrated by calculating the average variance extracted [1]. The results of the average variance extracted (convergent validity) are at an acceptable level, with all variables having an average variance extracted value larger than 0.50 (see Table 1).
In addition, as shown in Table 2, the test of discriminant validity was also part of the measurement model (outer model) assessment. Discriminant validity was evaluated to determine the extent to which a given latent variable of the study is distinct from the others. When the average variance extracted of an individual latent construct is higher than the multiple squared correlations of that construct with the other constructs, discriminant validity is at an acceptable level [4]. Thus, the results revealed that all studied variables had good discriminant validity values (see Table 2).
Experimental design, materials and methods
Data were gathered using an online survey through Jordanian universities' student portals and websites (between March and April 2020). The students were asked to fill in the online questionnaire through the provided link, and 587 responses were received. The questionnaire and the answers to the questions are provided as a supplementary file. The data were analysed using the PLS-SEM approach with SmartPLS 3.0 software [1]. The study's model contained two levels of constructs (upper and lower); thus, to conduct the measurement model assessment, composite reliability, indicator reliability, average variance extracted, and discriminant validity were each tested. With regard to composite reliability, this criterion was assessed to verify internal consistency reliability. The values in Table 1 show that the construct scores exceed the acceptable reliability level of 0.7 [2,3] (see Table 1). Hence, internal consistency was confirmed. Besides, all factor loadings are higher than 0.70. The validity of the instrument was demonstrated by calculating the average variance extracted [1]. The average variance extracted is the indicator used for measuring convergent validity, by measuring the variance that the items share with their respective variable [1]. The results of the average variance extracted (convergent validity) are also presented in Table 1, with all variables having an average variance extracted value larger than 0.50.
Finally, the discriminant validity test, as shown in Table 2, was also conducted to evaluate the extent to which a given latent variable of the study is distinct from the others. When the average variance extracted of an individual latent construct is higher than the multiple squared correlations of that construct with the other constructs, discriminant validity is at an acceptable level [4]. As illustrated in Table 2, all studied variables had good discriminant validity values.
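As a rough illustration of how these reliability and validity statistics can be computed from standardised outer loadings, a short sketch is given below; the loadings and the inter-construct correlation used are made-up illustrative values, not the study's results.

```python
# Sketch of composite reliability (CR), average variance extracted (AVE) and a
# Fornell-Larcker check from standardised outer loadings. Values are assumed.
import numpy as np

loadings = {                      # standardised loadings per construct (assumed values)
    "SN":  np.array([0.82, 0.79, 0.85]),
    "PEU": np.array([0.88, 0.84, 0.81, 0.77]),
    "PU":  np.array([0.90, 0.86, 0.83]),
}

def composite_reliability(lam):
    # CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def ave(lam):
    # AVE = mean squared loading
    return (lam ** 2).mean()

for name, lam in loadings.items():
    print(name, round(composite_reliability(lam), 3), round(ave(lam), 3))

# Fornell-Larcker criterion: sqrt(AVE) of a construct should exceed its
# correlations with the other constructs (illustrative correlation value).
corr_sn_peu = 0.55
print(np.sqrt(ave(loadings["SN"])) > corr_sn_peu)
```

The thresholds quoted in the article (CR above 0.7, loadings above 0.70, AVE above 0.50) correspond directly to checks on these computed quantities.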
Ethics statement
This work neither involves chemicals, procedures or equipment that have any unusual hazards inherent in their use nor involves the use of animal or human subjects.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article.
"Education",
"Economics"
] |
From better schools to better nourishment: evidence from a school-building program in India
This is a short paper analyzing the potential effects of a targeted school-building program on health indicators. The Kasturba Gandhi Balika Vidyalaya (KGBV) program in India intended to build residential schools for girls from historically disadvantaged sections of the society, providing a unique multifaceted policy setting with tenets of gender equality, affirmative action, and infrastructure reform in education. Exploiting the potentially exogenous cross-sectional variations generated by the institutional features of implementation of this intervention, I run triple-difference regressions to find that the program led to increases in body mass index (BMI) among the underweight. There seems to be a positive correlation between KGBV exposure and probability of being in the “healthy” band of BMI indicators. Current version: February 13, 2020
Introduction
Empirically identifying the causal effects of education on outcomes of interest is difficult because of obvious endogeneity issues. While most papers use mandated regulations, e.g., compulsory schooling reforms, some like Duflo (2001) and Chin (2005) use infrastructure reforms to generate identifying variations. Interestingly, very little seems to be known about the direct effects of such infrastructural reforms on health indicators even though such effects seem very likely. A potential channel through which schooling reforms may lead to better health could be the switch away from child labor, much of which is usually physically demanding, and a reform that keeps children in schools is likely to prevent the incidence of such labor. The other obvious channel is access to better sanitation and hygiene, which is likely to be brought about by reforms providing better schooling infrastructure.
Among the little evidence that exists, Breirova and Duflo (2004) find causal effects of the INPRES program of Indonesia on mortality and long-term fertility decisions although the context of their paper is to study the impacts of parental education on child mortality.
In this paper, I study a school-building program from India, namely, the Kasturba Gandhi Balika Vidyalaya (KGBV) initiative, to answer the question: does better schooling infrastructure lead to better health of the affected individuals? In a sense, the idea is to study the direct impacts of an infrastructural reform on the short-term health status of those potentially affected.
The intention of the KGBV program was to build residential schools for girls in Grades 6-8 from historically disadvantaged sections of society in educationally backward blocks identified based on predefined literacy thresholds all over India. The primary idea was to increase the levels of educational achievement among girls in the country and, since the program was targeted toward the scheduled castes (SCs) and scheduled tribes (STs), which are the marginalized sections of the Indian population, the program can essentially be viewed as an affirmative action in the field of elementary education. Chatterjee (2017) finds that KGBV led to an increase in enrollment and reading test scores of kids potentially exposed to the program, in what constitutes, to date, the only direct causal estimates of the program, to the best of my knowledge. Reduced-form effects of this program on health indicators have not been estimated. From what is previously known, this reform did not explicitly include any stated features to improve the health of the girls going to KGBV schools. The stated objectives were mainly to increase literacy and enrollment. Consequently, the estimates in this paper can also be considered potential spillover effects of the program.
Since the program was essentially implemented in certain regions based on whether female literacy rates in that region were less than the national average, a potentially attractive source of exogenous variation to identify the causal effects would be to compare these regions to others before and after program implementation. However, as Chatterjee (2017) argues, this methodology would lead to confounding estimates as there were other contemporary programs introduced based on this criteria in the country. Therefore, following Chatterjee (2017), I use a triple-difference estimation strategy exploiting plausibly exogenous cohort-level variation in exposure to the KGBV program to identify causal effects on health status. I use body mass index (BMI) as a proxy outcome variable for the health status.
The KGBV program: building residential schools for girls
The KGBV program was introduced by the Indian government in the year 2004-2005 for improvement in educational status of the historically marginalized sections of the Indian population, viz., SCs and STs. While 75% of all KGBV seats were reserved for minority girls, the remaining 25% was kept open also for families below the poverty line, irrespective of minority status. Implementation of the program was nationwide and carried out in all regions classified as educationally backward blocks (EBBs). A block is an administrative division smaller than a district but bigger than a village. A block is considered to be an EBB if the female rural literacy in the block is below the national average and if the gender gap in literacy is above the national average based on the 2001 census.
Census figures suggest that roughly 25% of the Indian population consists of the SCs and STs. The state of Punjab has the highest percentage of SCs (approximately 29%), whereas Mizoram has the highest percentage of STs (95%). India also has a very unfavorable sex ratio for women, with only about 74 out of a total of 593 districts -as per the census of 2001 -having at least as many women compared to men. While it is quite possible that marginalized sections of the Indian society have higher prevalence of malnutrition, the implementation of KGBV however, did not make health status a salient feature for consideration of program penetration.
In general, since the program was implemented in the EBBs, it is not unlikely that the health status of these regions would have been poor especially if we assume a positive association between literacy and health.
An evaluation report by the Planning Commission of India (Niti Aayog 2015) points out that 3,609 KGBVs have been sanctioned throughout the country. Around 69% of the teachers have had some sort of prior training and the majority of them have either a postgraduate degree or a professional qualification (such as Bachelor of Education [B.Ed.]). The report also suggests that about 80% of the schools are equipped with computer facilities and have access to fully functional libraries. KGBV is a voluntary program; therefore, the reform should not be confused with other standard compulsory schooling reforms prevalent elsewhere in the world. The KGBV program, implemented in 2004-2005, targeted girls in middle school in India, which corresponds to the age group of 11-14 years. Therefore, during the period of the survey used in the study (conducted in 2011-2012; described in the following section), girls aged 11 years are the youngest to be ever affected by KGBV and those aged 22 years during the survey must be the oldest to have ever been exposed to KGBV.
A reason why this program serves as an interesting case study for estimating the effects of education on health is from the perspective of the policymaker in a developing country.
KGBV was a mix of an infrastructure reform, a gender-equality reform, and affirmative action, and therefore the effects of such a three-pronged policy in a large economy such as India may be relevant in terms of replicability elsewhere in the developing world. Since these schools are essentially residential in nature, it is not unlikely that greater enrollment would naturally lead to better health through nutritional channels. For instance, some news reports suggest that in the state of Telangana, which has 475 KGBVs catering to 80,000 underprivileged kids, non-vegetarian items have been included in the weekly menus with the idea of increasing the intake of protein, leading to better nourishment, which otherwise would not have been affordable for these families (https://www.thehindu.com/news/cities/ Hyderabad/mutton-on-menu-for-girls-of-kgbv-schools/article22320963.ece). This provides a potential channel through which KGBV exposure may lead to better BMI for the malnourished kids.
From education to better health: establishing the link
While the link between education and health has been widely studied, dating back to Grossman (1972), empirically identifying the causal effects of education on health has relied on finding relevant instruments for education, and the commonly used approach is to exploit schooling reforms (see Arendt 2005, 2008; Brunello et al. 2013, 2016; Parinduri 2017, and so on). The central idea of such a strategy is that a schooling reform is unlikely to affect health through channels other than education.
The other very important government intervention widely studied in the field of education is improving schooling infrastructure. For instance, the INPRES school construction program in Indonesia (Duflo 2001) and "Operation Blackboard" in India (Chin 2005) have been found to have significant effects on various measures of education. However, very little seems to be known about the direct effects of such infrastructural reforms on health indicators.
Why is it important to know the direct effect of this schooling infrastructure policy on health? This is because countries like India usually run against strict budgets in terms of development expenditure on health and education. For instance, while India spends about 4%-6% of its gross domestic product (GDP) on education, it is only able to spend about 2% of its GDP on health, compared to the much higher shares in developed nations, such as 18% for the USA (see https://thewire.in/health/indias-defence-budget-is-nearly-five-times-the-health-budget and https://www.crfb.org/papers/american-health-care-health-spending-and-federal-budget).
Consequently, if spillovers exist from a reform in one sector to another, the design and implementation of policy become a lot more efficient. The purpose of this paper is to study whether such spillovers actually exist, using KGBV as an example case. Chatterjee (2017) has already evaluated the impact of the KGBV program on educational outcomes. The motivation of this paper is that, in the presence of potential spillovers to other outcomes such as health, the overall assessment of the impact of the policy may be underestimated if one does not take into account the unintended consequences as well.
In this paper, we use measures of BMI as the outcome variable and as a proxy indicator of health status. This choice of variable is motivated by two factors. First, KGBV schools are residential in nature and, as a result, meals and dietary supplements provided at these schools are likely to be very different from the standard nutritional intake at home. Considering the high incidence of malnutrition, better intake in schools is most likely to manifest in improved health through nutritional status. BMI is the commonly accepted metric for measures of health along this dimension. Unfortunately, we do not observe caloric intake in the data set, and the results can therefore not be validated through a more accurate channel. Second, as these KGBV schools require kids to be in residence, the likelihood of parents sending their kids off to child labor is minimized, and this may lead to better BMI measures. This is because most of the work that child laborers do involves physical labor in potentially unhealthy and hazardous environments for this age group, which is likely to have an impact on their health in terms of expending relatively more calories than are consumed. As a result, their BMI would be at a low level in the counterfactual condition. The majority of existing evidence on the impact of residential schools on health and related outcomes is somewhat aberrant. In general, it has been associated with the mental trauma of being away from one's family (Schaverien 2015) or exposure to cohorts alienated from society or from marginalized backgrounds, leading to poorer labor market consequences and poorer lifestyle choices (Kaspar 2014). The overall impact of sociocultural dimensions, and how it matters in terms of impacts on BMI, has been sparsely studied in the context of a residential school, with the notable exception of Cardoso and Caninas (2010).
The paper contributes to the literature in three major ways. First, to the best of my knowledge, this is the only paper looking at the direct effects of a school infrastructure policy on BMI. Second, considering the unique context of the policy, this paper presents new estimates of how a mix of affirmative action, gender equality, and infrastructure building in education may affect health indicators. Third, most of the works on schooling reforms used as instruments for education [with the exception of the studies by Parinduri (2017) and Breirova and Duflo (2004)] have studied the context of developed countries. However, the basic link between health and education assumes greater policy significance in the context of developing countries because of tighter budgets and the potential for spillovers and, therefore, potential efficiency gains. This paper contributes to the literature by pointing out this positive externality of an education policy on the health sector.
3.2 BMI: are KGBV cohorts healthier?
Figure 1 presents a snapshot of the density of BMI across the sample. The two vertical lines at BMI = 18.5 and BMI = 25 form a band, indicating the healthy zone contained within. It is evident from the figure that the healthy band seems to have a higher density for cohorts with potentially more exposure to KGBV. This is further confirmed by the t-tests reported in Table 2, which show that KGBV cohorts have a higher probability of having a healthy BMI.
Figure 1 notes: The figure plots the density of BMI values. The area between the two vertical lines is the potential healthy zone, i.e., BMI between 18.5 and 25. The panel on the right plots the densities for the affected cohort, i.e., girls of lower castes in the affected age cohort, and the left-hand side panel includes everyone else.
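As an illustration of the comparison summarised in Table 2, the sketch below constructs the healthy-BMI indicator and compares its mean between the KGBV-exposed cohort and everyone else with a two-sample t-test; the BMI draws are synthetic placeholders, not the survey data.

```python
# Sketch of the healthy-band comparison: a binary indicator for
# 18.5 <= BMI <= 25, compared across cohorts with a Welch t-test.
# The BMI samples below are synthetic stand-ins for the survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bmi_exposed = rng.normal(20.5, 3.0, 800)      # assumed synthetic draws: exposed cohort
bmi_other = rng.normal(19.8, 3.2, 2500)       # assumed synthetic draws: everyone else

healthy = lambda bmi: ((bmi >= 18.5) & (bmi <= 25)).astype(float)
t_stat, p_value = stats.ttest_ind(healthy(bmi_exposed), healthy(bmi_other), equal_var=False)
print(healthy(bmi_exposed).mean(), healthy(bmi_other).mean(), p_value)
```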
Estimation
As in Chatterjee (2017), I use cohort-level variation in exposure to KGBV in a triple-difference estimation framework over three different cross sections, viz., gender, age cohort, and caste.
Although using cross-sectional regional variation based on the reach of the program may seem appealing, for reasons described by Chatterjee (2017), I refrain from doing so. Such a methodology has been used by Debnath (2012), but it is unlikely to capture the KGBV effects uniquely, as another program simultaneously affected these regions. Debnath (2012), as a result, estimates the joint effects of the two programs, whereas the cohort-based method used by Chatterjee (2017) isolates the KGBV effect; this paper builds on the Chatterjee (2017) design. In the study by Chatterjee (2017), robust standard errors were left unclustered and regional fixed effects were not used; both are added here, making the empirical framework of this paper stronger. The village indicator is essentially the place of residence and, in these societies, migration rates are very low due to informal insurance considerations (Munshi and Rosenzweig 2016); selection into KGBV localities is therefore less of an issue here.
I propose to run the following specification as my main model, largely replicating the methodology of Chatterjee (2017):

Y_i = α_r + β1·girl_i + β2·affected_i + β3·disadvantaged_i + β4·(girl_i × affected_i) + β5·(girl_i × disadvantaged_i) + β6·(affected_i × disadvantaged_i) + θ·KGBV_i + X_i'γ + ε_i, with KGBV_i = girl_i × affected_i × disadvantaged_i, (1)

where α_r represents the regional fixed effects, girl is a dummy variable for females, affected is the dummy variable for the age cohort 11-22 years, and disadvantaged is a dummy for marginalized castes. The interaction of the three cross-sectional dummy variables, captured by KGBV, generates potentially exogenous variation in access under the assumption that the difference in the difference-in-differences of mean values of outcome Y along the three cross sections is statistically indistinguishable from zero in the absence of the intervention. The controls for age, education of male and female household heads, and size of the household are represented by X. The outcome is represented by Y, which, for most of our regressions, is going to be BMI.
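To make the estimation concrete, a minimal sketch of how such a triple-difference regression could be run in Python is shown below. The data-frame column names (bmi, girl, affected, disadvantaged, village, age, hh_size, educ_head_f, educ_head_m), the toy data, and the use of statsmodels are illustrative assumptions, not the paper's actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the IHDS-II low-BMI subsample (column names assumed).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "girl": rng.integers(0, 2, n),
    "affected": rng.integers(0, 2, n),
    "disadvantaged": rng.integers(0, 2, n),
    "age": rng.integers(6, 31, n),
    "hh_size": rng.integers(2, 10, n),
    "educ_head_f": rng.integers(0, 16, n),
    "educ_head_m": rng.integers(0, 16, n),
    "village": rng.integers(0, 50, n),
})
df["bmi"] = 16 + 0.2 * df["girl"] * df["affected"] * df["disadvantaged"] + rng.normal(0, 1, n)

# girl*affected*disadvantaged expands to all main effects, double interactions,
# and the triple interaction, which plays the role of the KGBV term.
formula = ("bmi ~ girl * affected * disadvantaged + age + hh_size "
           "+ educ_head_f + educ_head_m + C(village)")
fit = smf.ols(formula, data=df).fit(cov_type="cluster",
                                    cov_kwds={"groups": df["village"]})
print(fit.params["girl:affected:disadvantaged"])  # triple-difference estimate
```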
I present a brief summary of the identification strategy in Table 3. The treatment group, as identified by the affected dummy variable described above, consists of girls who have ever been exposed to the policy. Since the KGBV program was intended for girls in middle school and middle school in India roughly corresponds to the 11-14 age group, we consider as affected by the KGBV only those girls who are currently in that age group or would have been in that age group after the introduction of the policy.

Table 3. Summary of identification strategy
Age in data set:    6-10 years | 11-14 years | 15-22 years | 23 years and above
Age in policy year: 0-3 years  | 4-7 years   | 8-15 years  | 16 years and above
Exposure to policy: Not yet exposed | Currently exposed | Previously exposed | Never exposed

Since our sample includes all individuals in the age group of 6-30 years, we consider the 6- to 10-year-old kids as part of the control cohort, as they are yet to be in middle school. Moreover, girls aged 23 years and above would have potentially completed middle school by the time the policy was implemented. Considering that Indian schools mostly followed a no grade-detention policy up to middle school, this is a fairly innocuous assumption. Girls aged 11-14 years are currently likely to be in middle school, and those aged 15-22 years would have completed middle school post-KGBV intervention. As a result, these girls are considered the treated cohort.
To make sure that this cohort convergence is meaningful for estimating the causal effect of KGBV on BMI, I additionally run two other specifications, which make the identification of the control group more intuitive. First, I restrict the sample to only include girls (as KGBV would only have affected them) and then run a standard difference-in-differences across the other two dimensions:

Y_i = α_r + θ1·(disadvantaged_i × affected_i) + θ2·disadvantaged_i + θ3·affected_i + X_i'γ + ε_i. (2)

Here, θ1 is the effect of the intervention on the affected cohort's BMI among girls from the disadvantaged sections of the society compared to girls from other sections. Then, I restrict the sample to only the disadvantaged groups (who are also the only ones potentially affected by KGBV) and run the following specification:

Y_i = α_r + φ1·(girl_i × affected_i) + φ2·girl_i + φ3·affected_i + X_i'γ + ε_i. (3)

Here, φ1 is the differential effect of the intervention on the affected cohort's BMI between the girls and boys of the disadvantaged sections of the society.
Results
In this section, I present the results from the estimation and falsification exercises. I also report findings from robustness checks.
Impact of KGBV on BMI
I use the BMI of individuals in the age group of 6-30 years as the main dependent variable to check for any effects of the program on this health indicator at the extensive and intensive margins. The choice of this variable is almost obvious: since the channels through which we expect KGBV to affect health status are improved access to health and sanitation, enhanced nutrition through a better diet in the residential schools, or a reduction in child labor, any health effects are most likely to show up in how well nourished the individual is. As a result, BMI seems to be the best approximation of any such measure.
I do not find any extensive margin effects, as reported in Column 1 of Table 4. KGBV did not lead to any change in the probability of being malnourished (BMI < 18.5). However, the estimation results of Equation 1 in Column 2 indicate significant intensive margin effects.
KGBV seems to have led to an improvement in the health status of malnourished individuals. I find that with KGBV exposure, BMI is higher for the malnourished category by 0.19 points, which is roughly 1.25% of the mean. One concern could be that religion is an omitted variable and that behavior may differ by religious identity. However, because the caste categorization is largely prevalent only among Hindus, and the majority of the sample consists of Hindus either way, this is unlikely to be a major concern. Regressions including religion as a control do not change the magnitude of the effect; standard errors are marginally higher, but the effect sizes remain significant at the 95% level of confidence.
The causal estimate holds under the assumption that in the counterfactual condition, i.e., in the absence of the KGBV, this estimated difference in BMI would be statistically indistinguishable from zero. Since this is an assumption about the counterfactual condition, there is no way to test it statistically. However, as per standard practice, one might run placebo regressions to provide some support for this assumption. Following Chatterjee (2017), I run the same specification on the pre-policy sample; as reported in Column 3 of Table 4, not only is the point estimate insignificant for this falsification exercise, it is also much smaller in magnitude, which is reassuring in terms of support for the assumptions required to sustain the identification strategy. I also perform a robustness check (results not reported here but available upon request) by running the same regressions on a sample from states that did not have a single EBB based on the 2001 census and hence potentially had no exposure to the KGBV program. The p-value of the significance test for the coefficient on BMI for our main specification is 0.93, indicating no impact whatsoever. This may be considered a placebo experiment supporting the main analysis.

Notes (Table 4): Column 1 suggests no effects of the program at the extensive margin of health; Column 2 presents the intensive margin effects, whereas Column 3 reports the falsification results at the intensive margin. The sample in columns 1-2 is restricted to individuals in the age group of 6-30 years in the IHDS-II data set (2011-12), a period after the KGBV was implemented. In Column 3, the sample is from the pre-policy period, i.e., IHDS-I. All columns report results from different regressions. Column 1 is a regression on the full sample. Columns 2 and 3 are results from restricted subsamples: the sample is restricted to low-BMI individuals, with BMI < 18.5, for whom only an increase in BMI is meaningful. The extensive margin of this measure is the outcome variable in Column 1. The coefficient KGBV is the causal effect of the KGBV program on outcomes, as described in the section on the triple-difference estimation strategy. All regressions include regional (village) fixed effects and control for the relevant baseline dummy variables and double interactions. Additional controls are age, family size, and education of the household head, both female and male. Robust standard errors clustered at the regional (village) level are reported in parentheses. **p < 0.05.
In Table 5, I present the results from the regressions described in Equations (2) and (3) above in columns 1 and 2, respectively. In Column 1, I restrict the sample to girls only and run a difference-in-differences regression along the other two dimensions, finding very similar effects on the BMI of the affected cohort for disadvantaged kids relative to kids of the general castes; the point estimate is very similar in magnitude to the one estimated using the triple-difference method. In Column 2, I restrict the sample to disadvantaged kids and find a similar positive cohort effect on the BMI of girls relative to boys, although the magnitude is somewhat larger. The fact that the estimates across these specifications are not very different supports the identification strategy and suggests that relying on this cohort convergence in a cross-sectional setting is sensible.
Choice of age cohorts
In the above analysis, identification is critically reliant on the choice of age cohorts. In other words, the control group for this quasi-experimental design includes the boys who do not belong to the disadvantaged SC/ST castes and who are aged 6-10 years and 23-30 years. A concern could be that the choice of this age cohort-based control group is not meaningful.
To alleviate such concerns, I run cohort-specific regressions of BMI for individuals with BMI < 18.5 in a difference-in-differences framework, using dummies for whether the individual is disadvantaged, whether the individual is female, and their interaction, apart from all the other usual controls, with the age-6 cohort as the omitted category.

Notes (Table 5): The sample includes 6- to 30-year-old individuals in IHDS-II with BMI values < 18.5. In Column 1, the subsample is restricted to females only; the estimated coefficient gives the effect of the intervention on the affected cohort's BMI for the disadvantaged groups compared to other groups. In Column 2, the subsample is restricted to the disadvantaged groups only, so the estimated coefficient gives similar cohort effects for girls relative to boys within this subcategory. All regressions include regional (village) fixed effects and controls for the relevant baseline dummy variables. Additional controls are age, family size, and education of the household head, both female and male. Robust standard errors clustered at the regional (village) level are reported in parentheses. ***p < 0.01.

Figure 2 plots the estimated coefficients from these regressions by age. Each point on the graph represents the estimated triple difference for that particular age cohort relative to the omitted age of 6 years; that is, instead of KGBV, which is essentially girls*disadvantaged*affected relative to the unaffected, points on the graph in Figure 2 represent girls*disadvantaged*age relative to age 6. The vertical lines represent the 95% confidence intervals. If the identification strategy is meaningful, one would expect these coefficients to be significant for the affected cohorts only, and this is largely what we see in the figure. The coefficients become significant at age 11, the first age cohort in the treated group, and are insignificant for all younger cohorts, which were unaffected by the KGBV. Roughly around age 22, which is the eldest cohort likely to be affected, the coefficient comes down to almost zero. All coefficients from age 23 and above are largely insignificant, providing support for the identification strategy.
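A minimal sketch of how such age-cohort coefficients and their confidence intervals could be estimated and plotted is given below. It reuses the toy data frame and illustrative column names from the earlier triple-difference sketch; the cohort-dummy construction and plotting details are my own, not taken from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

# df: same toy data frame as in the earlier sketch (columns assumed).
# Interact girl*disadvantaged with age-cohort dummies, age 6 omitted.
df["age_cohort"] = df["age"].clip(upper=23)  # pool ages 23 and above
formula = ("bmi ~ girl * disadvantaged * C(age_cohort, Treatment(reference=6)) "
           "+ hh_size + educ_head_f + educ_head_m + C(village)")
fit = smf.ols(formula, data=df).fit(cov_type="cluster",
                                    cov_kwds={"groups": df["village"]})

# Collect the triple-interaction coefficients and their 95% confidence intervals.
rows = []
for name in fit.params.index:
    if name.startswith("girl:disadvantaged:C(age_cohort"):
        lo, hi = fit.conf_int().loc[name]
        rows.append((name, fit.params[name], lo, hi))
coef = pd.DataFrame(rows, columns=["term", "estimate", "lo", "hi"])

ages = coef["term"].str.extract(r"T\.(\d+)").astype(int)[0]
plt.errorbar(ages, coef["estimate"],
             yerr=[coef["estimate"] - coef["lo"], coef["hi"] - coef["estimate"]],
             fmt="o")
plt.axhline(0, color="gray", lw=0.8)
plt.xlabel("Age cohort"); plt.ylabel("Triple-difference coefficient")
plt.show()
```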
Further robustness checks
A remaining concern with the above analysis could be that short-run and long-run effects are mixed, because the age span of the people in the sample implies that the estimation is done for people of school age as well as older than school age. Furthermore, household composition variables are usually good controls for school-aged children but may not be so for adults, because household composition is potentially endogenous to education. I thank an anonymous referee for pointing this out and motivating this robustness check. As a result, in this section, I report results from the above analysis after restricting the sample to younger cohorts, so as to include closer-to-school-age people in the control group. Table 2 has been reproduced as Column 1 in Table 4. The estimates mostly hold up even after restricting the sample to younger (and closer-to-school-age) cohorts: there still seems to be a positive effect on BMI among the underweight children across the board. This exercise potentially addresses some of the concerns mentioned above.
Are KGBVs allotted based on health status?
Another concern regarding the validity of the above empirical exercise relates to the placement of the KGBV program. Is it possible that the introduction and implementation of KGBV were driven by initial differences in health status? If so, a potential selection bias may confound the above estimates. The policy targeted the historically marginalized sections of the Indian population. Therefore, even though the caste identity of an individual, i.e., whether one is from an SC/ST household or not, is random, the possibility that KGBVs were prioritized in areas with a low base in terms of health indicators is problematic, because the above strategy would then overestimate the true effect of the program.
To alleviate these concerns, I conduct a very simple analysis on separately collected data, comparing mean BMI levels in states that received KGBV funds with those in states that did not. The results from this analysis are reported in Table 7. While the mean BMI levels in states that have received KGBV funds do appear to be numerically smaller than those of states not receiving any KGBV funds, the difference is only marginal and statistically indistinguishable from zero. Therefore, it is very unlikely that the government was prioritizing KGBV in states based on BMI, which is our primary outcome variable here. This exercise provides reasonable support for the validity of our empirical framework.

Notes: All columns report results from different regressions. The coefficient KGBV is the causal effect of the KGBV program on outcomes, as described in the section on the estimation strategy. All regressions include regional (village) fixed effects and controls for the relevant baseline dummy variables and double interactions. Additional controls are age, family size, and education of the household head, both female and male. Robust standard errors clustered at the regional (village) level are reported in parentheses. **p < 0.05; *p < 0.1.
Conclusions
Building residential schools for disadvantaged girls in India appears to have led to significant improvements in BMI among potentially malnourished people in areas potentially exposed to the program. The probability of having a healthy BMI seems to be higher for individuals potentially affected by the policy. Since the program studied in this paper was a targeted education reform, much of this effect can be interpreted as an ancillary reduced-form effect of the program on health. One channel through which these effects may operate is that better education leads to better awareness about hygiene and sanitation, which in turn leads to better observed health. Other channels may include a decline in child labor and access to better nutrition in the residential school setup. | 6,799.8 | 2020-03-01T00:00:00.000 | [
"Economics"
] |
Alignment Method of an Axis Based on Camera Calibration in a Rotating Optical Measurement System
The alignment problem of a rotating optical measurement system composed of a charge-coupled device (CCD) camera and a turntable is discussed. The motion trajectory model of the optical center (or projection center in the computer vision) of a camera rotating with the rotating device is established. A method based on camera calibration with a two-dimensional target is proposed to calculate the positions of the optical center when the camera is rotated by the turntable. An auxiliary coordinate system is introduced to adjust the external parameter matrix of the camera to map the optical centers on a special fictitious plane. The center of the turntable and the distance between the optical center and the rotation center can be accurately calculated by the least square planar circle fitting method. Lastly, the coordinates of the rotation center and the optical centers are used to provide guidance for the installation of a camera in a rotation measurement system. Simulations and experiments verify the feasibility of the proposed method.
Introduction
In recent years, an optical measurement method based on the photogrammetry principle [1][2][3] has developed rapidly because of its high speed, non-contact nature, high accuracy and flexibility. Optical measurement is one of the effective methods for coordinate measurement, trajectory measurement or surface reconstruction. It is widely used in fields such as three-dimensional (3D) sensor measurement, panoramic image mosaicking, aerospace, virtual reality (VR), augmented reality (AR), industrial manufacturing and so on [4][5][6][7][8][9].
In large-scale scene or 360-degree annular area measurement, due to the limitation of the field of view (FOV) of the camera, a single-lens camera cannot cover the whole measurement range of the target. Therefore, scanning photography [10] is employed, in which a rotating mechanism rotating around a fixed point enables the camera to enlarge the measuring range by acquiring images from multiple angles. Theoretically, adjacent images taken by a camera rotating around its projection center can be matched by a purely projective transformation of their regions of overlap, without requiring the three-dimensional shape of the scene [11][12][13]. In order to facilitate calculation and later data fusion, it is crucial in practice to make the optical center of the camera coincide with the rotation center of the rotating mechanism for the creation of panoramic images. Here, the optical center of the camera is the origin of the camera coordinate system in computer vision. It is also called the projection center in many references and corresponds to the first nodal point (or principal node) of the camera lens installed in the same medium. For close-range applications, any misalignment of the two centers will be carried into the final results and the images may not be spliced well. For simplicity, in the rest of this article, the "optical center of the camera" is abbreviated as "optical center".
The alignment of a rotating optical measurement system has always been a hot topic in many fields and has aroused the research interest of many researchers. Carlo Tomasi et al. [11] proposed a centering procedure, in which the rotation center was aligned by moving the camera via a translation platform with the help of the transition line of far and nearby targets; the location accuracy can reach one tenth of a millimeter. Antero Kukko et al. [14] designed special calibration target "cones" to align the optical center with the rotation center, realizing the adjustment of a digital camera into a spherical panoramic camera head. The achieved projection center uncertainty was approximately 1 mm. Kauhanen et al. [15] developed a method to find the rotation center based on camera calibration, in which a three-dimensional calibration target is employed. The X, Y and Z shifts for the correction of the camera location were obtained by a numerical method. The standard deviation between the projection center and the rotation center after calibration could reach 0.161 mm. In contrast, in some studies, the relationship between the camera and the rotation mechanism is calculated by system calibration in advance and then substituted into the final image mosaic [16][17][18]. For instance, Zhang Zuxun et al. [19,20] designed a measuring system named the Photo Total Station System (PTSS), within which a metric camera with known internal parameters was rigidly installed on the telescope of a total station to extend the FOV and improve the accuracy. Zhang et al. [21] designed a theodolite-camera (TC) measurement system consisting of a theodolite and a camera for precise measurement at large viewing angles. The total station or theodolite can provide the spatial coordinates of the control points, which can be used to establish the orientation of the camera while it rotates. This kind of measurement system needs a complicated calculation process to eliminate the influence of the misalignment between the optical center and the rotation center.
For any rotating optical measurement system, the motion trajectory of the optical center will be a circle centered on the rotation center. If the radius of the circle is equal to 0, it means that the optical center coincides with the rotation center. Otherwise, the optical center is not matched with the rotation center. However, since the imaging system is composed of multiple lenses, it is hard to accurately calculate and determine the real position of the optical center. Therefore, it is a challenge to guarantee a good alignment between the optical center and the rotation center, and any misalignment could affect the subsequent information stitching.
In order to calculate the coordinates of the optical center and rotation center to solve the alignment problem, the idea of camera calibration [22][23][24][25][26] is introduced. In this paper, we propose a method based on Zhang's camera calibration principle [27] to calculate the positions of the optical center in a unified world coordinate system from the external parameters (including translation vector and rotation matrix) of the camera in a rotating optical measurement system composed of a non-metric camera and a general rotating platform. With the rotation of the turntable, a set of external parameter matrices are obtained, from which a series of optical centers can be calculated to find out the rotation center by the least square circle (LSC) fitting method. The optical center and the rotation center provide the guidance for installation of the camera in a rotating measurement system. In order to reduce the fitting complexity and improve the fitting accuracy, we introduce an auxiliary plane coordinate system to map the optical centers on a unified virtual plane before circle fitting. In addition, for reducing the errors, we collect the images when the camera rotates from different positions, and multiple sets of optical centers are calculated simultaneously to obtain multiple fitted circle centers. The weighted algorithm is used to determine the final rotation center. Computer simulations and some experiments verify that the method can guide the installation and adjustment of the rotating optical measurement system. The rest of the paper is organized as follows: Section 2 illustrates the principle of the proposed method. Section 3 shows some simulations to validate the proposed method. Section 4 displays some experimental results to validate the proposed method and discusses the strengths and contributions of the proposed method. Section 5 summarizes this paper.
Principle
In this section, we explain the basic composition of the proposed rotating optical measurement system and the related techniques for finding out the rotation center.
The Composition of the Rotating Optical Measurement System
The established rotating optical measurement system and the calibration unit include a checkerboard target, a camera, servo control units, computer processing units, etc. The schematic diagram of the measurement system is shown in Figure 1a.
In Figure 1a, the camera is placed on the servo control units, consisting of a rotating platform and two translation platforms perpendicular to each other. The two translation platforms are fixed on the turntable to move the camera for multiple calibrations and final alignment. Or is the rotation axis of the turntable, Oc-XcYcZc is the camera coordinate system, and Ow-XwYwZw is the fixed world coordinate system; these coordinate systems are explained in Section 2.2. The checkerboard target is fixed in front of the system for camera calibration. A sequence of images, each including the whole checkerboard or a part of it, is collected while the camera is rotated by the turntable.
Imaging Model
First, we give an introduction to the imaging model of the rotating optical measurement system. As shown in Figure 1b, Or is the rotating axis, P is a point on the target, and P1 and P2 are its imaging points when the camera is rotated to Position 1 and Position 2. Oc1 and Oc2 are the optical centers for the two shots, respectively. Oc1Zc1 and Oc2Zc2 are the optical axis directions of the camera, and Oi-uivi (i = 1, 2) are the image coordinate systems with origins Oi. The rotating angle between the two optical axis directions equals the rotation angle of the turntable, and f is the focal length of the camera. If the optical center deviates from the rotation axis of the turntable, Oc1 and Oc2 should be on a circle centered on the rotation center; that is, OrOc1 = OrOc2 = r, where r is the radius of the circle. When the optical center coincides with the rotation center, r→0. The positions of the rotation center and optical centers can be obtained with our method. They can be used to guide the alignment of the optical center and rotation center by adjusting the distance and direction of the camera movement. The details are presented in the next sections.
Theoretically, the sequential positions of the optical centers should be distributed over the whole circle as evenly as possible when the camera is rotated by the turntable. However, the obtained positions can only cover a small arc of the circle, because it is almost impossible to provide a very large, circular calibration target. In order to improve the calculation accuracy of the rotation center, the camera is moved to different positions to collect multiple groups of images and obtain multiple sets of optical centers. Only if the turntable plane is parallel to the plane Ow-YwZw and the optical axis of the camera is perpendicular to the target at each initial position will the motion trajectory of the optical center lie on a concentric circular arc parallel to Ow-YwZw. However, for an actual rotating optical measurement system, these conditions may not be met in the calibration process, which makes the obtained fitted circles non-concentric. To overcome this problem, we introduce an auxiliary coordinate system, Op-XpYpZp. The relationship between Op-XpYpZp and Ow-XwYwZw is depicted in Figure 2. The auxiliary coordinate system helps us map the multiple sets of optical centers onto the plane parallel to Ow-YwZw so that the rotation center can be found by fitting planar concentric circles.
Here, we emphasize the four coordinate systems used in the calibration for clarity.

1. World Coordinate System: Ow-XwYwZw is a right-handed (Xw-Yw-Zw), orthogonal, three-dimensional coordinate system whose origin Ow is established at the upper left corner of the fixed checkerboard plane; that is, Zw = 0 for points on the fixed checkerboard plane. The world coordinate system is selected as the reference coordinate system during calibration.
2. Image Coordinate System: Oi-uivi is an orthogonal coordinate system fixed in the image plane of the camera, where the ui and vi axes are parallel to the upper and side edges of the sensor array, respectively, and the origin Oi is located at the upper left corner of the array.
3. Camera Coordinate System: Oci-XciYciZci is a right-handed (Xci-Yci-Zci), orthogonal coordinate system. The origin Oci is located at the camera's optical center, and the Zci axis is perpendicular to the image plane and coincides with the optical axis of the camera. The Xci and Yci axes are parallel to the ui and vi axes of Oi-uivi, respectively. The plane where Zci = f is the image plane, where f is the principal distance between the optical center and the image plane.
4. Auxiliary Coordinate System: Op-XpYpZp is a right-handed (Xp-Yp-Zp), orthogonal coordinate system. Its origin Op is located at the optical center. The plane XpOpZp is a virtual plane; the Xp and Yp axes are parallel to Yw and Xw in the same direction, respectively, and the Zp axis is parallel to Zw in the opposite direction.
Steps to Determine the Rotation Center of the Rotating Optical Measurement System
The procedure for determining the rotation center of the rotating optical measurement system is divided into three steps. First, we collect images when the camera rotates from different positions (i) and calibrate the internal parameters and external parameters (rotation matrix R and translation vector T) of the camera by Zhang's method. Then we can get the coordinates of the optical centers in the unified world coordinate system. Second, with the help of the auxiliary coordinate system, we calculate the rotation center by the LSC fitting method from the obtained world coordinates of the optical centers. At last, the final rotation center is obtained through a weighted algorithm. The flow chart of the procedure is shown in Figure 3.
Calculate the Optical Center of the Camera
In the traditional camera calibration method based on the pinhole camera model, the camera's internal and external parameters are calculated from feature points with known world coordinates and corresponding image coordinates. The relationship between the camera coordinate system and the world coordinate system is described by the rotation matrix R and translation vector T, which reflect the spatial position of the camera in the fixed world coordinate system. If the homogeneous coordinates of a point in the world coordinate system, M = (Xw, Yw, Zw, 1)^T, and its image coordinates, m = (u, v, 1)^T, are known, the relationship between M and m is described as

s·m = A [R T] M, (1)

where s is an arbitrary non-zero scale factor; fx and fy are scale factors of the u-axis and v-axis, respectively; α is the skew factor of the two image axes; and (u0, v0) is the principal point of the camera. The rotation matrix R is an orthogonal unit matrix, T is a three-dimensional translation vector, and A is the internal parameter matrix of the camera, defined as

A = [fx α u0; 0 fy v0; 0 0 1]. (2)

In Zhang's method [27], the target plane is located on the XwOwYw plane of the world coordinate system, where Zw = 0. Equation (1) is then simplified as Equation (3),

s·m = A [r1 r2 T] (Xw, Yw, 1)^T = λ H (Xw, Yw, 1)^T, (3)

where H is a 3×3 matrix defined up to the constant scale factor λ, and r1 and r2 are the first two columns of R. The homography matrix H sets up the mapping between the points on the target and the points on the image. Multiple target images with different poses (at least two in theory) are needed to calculate the internal and external parameters of the camera. In our calibration, to provide a fixed reference world coordinate system, we keep the target still and rotate the camera to calibrate R and T. Assuming the coordinates of a point are Mc in the camera coordinate system and Mw in the world coordinate system, Mc and Mw are connected by the following rigid motion equation:

Mc = R Mw + T. (4)

If Mc is known, Mw can be obtained by Equation (5):

Mw = R^(-1) (Mc − T). (5)

Since the rotation matrix R is an orthogonal unit matrix, R^T = R^(-1), and Equation (5) can be rewritten as

Mw = R^T (Mc − T). (6)

The coordinates (0, 0, 0) of the optical center in the camera coordinate system can be mapped to the world coordinate system by Equation (6). The motion trajectory of the optical centers should be a circular arc on the turntable plane when the camera is rotated by the turntable.
In practice, with the help of the auxiliary coordinate system Op-XpYpZp, Equation (7) is used to map the optical centers onto the plane parallel to Ow-YwZw, where Rcp is a 3×3 rotation matrix that describes the rotation relationship between the camera coordinate system and the auxiliary coordinate system.
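As an illustration of this step, a minimal Python/OpenCV sketch is shown below for recovering the optical-center positions C = −R^T·T (Equation (6) with Mc = 0) from a set of checkerboard images taken at different turntable angles. The file pattern, board size, and square size are placeholders, and the mapping into the auxiliary frame via Rcp is omitted here.

```python
import glob
import cv2
import numpy as np

board = (9, 6)            # inner corners of the checkerboard (assumed)
square = 14.175           # square size in mm
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in sorted(glob.glob("rotation_*.png")):   # placeholder file pattern
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# One calibration over all views; each view gets its own extrinsics (R, T).
ret, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)

# Optical center in world coordinates for each view: C = -R^T T.
centers = []
for rvec, tvec in zip(rvecs, tvecs):
    R, _ = cv2.Rodrigues(rvec)
    centers.append((-R.T @ tvec).ravel())
centers = np.array(centers)   # one row per rotation step
```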
Method for Coincidence of the Optical Center and the Rotation Center
Since not enough optical centers can be obtained when the camera rotates at one position, the camera is moved to different positions to collect more images, as shown in Figure 4, where ci (i = 1, 2, 3) represent the three initial positions of the camera. In order to easily move the camera through the translations to align the rotation center, we require c1c2⊥c1c3, c1c2∥OwZw, and c1c3∥OwYw. The LSC fitting [28] is performed to calculate the rotation center, which requires that the sum of the squares of the distances between each sample point and the fitted curve be minimized. Let (y0, z0) be the two-dimensional (2D) coordinates of the rotation center and (yj, zj) (j = 1~N, N is the number of camera rotations) be the 2D coordinates of the optical centers projected onto the YwOwZw plane. In order to obtain (y0, z0), the objective function is

f(y0, z0, r) = Σ_{j=1}^{N} [√((yj − y0)² + (zj − z0)²) − r]². (8)

Theoretically, the radius of the fitted circle varies when the camera is moved, but the circle centers should remain unchanged. However, there are deviations among the fitted centers Ori in the actual fitting. The initial rotation center Ōr is decided by the average of the Ori. We adjust the camera to Ōr and calibrate the camera again to get a set of new optical centers O'rj, whose average is O'r. The weighted average value of the Ori and O'r is taken as the final rotation center Or = (Oy, Oz). The calculation formulas are as follows:

Oy = Σ_{m=1}^{n+1} kym·Oym,   Oz = Σ_{m=1}^{n+1} kzm·Ozm, (9), (10)

where Oym and Ozm represent the coordinates of the fitted circle centers Ori and O'r in the Yw and Zw directions, respectively; m = 1, 2, ..., n+1, where n is the number of fitted circles. kym and kzm represent the weighting factors, which are decided by the variance σm = (σym, σzm) of each group of optical centers by Equations (11) and (12).
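A minimal numerical sketch of this fitting and weighting step is shown below, using synthetic stand-in data for the projected optical centers. The geometric circle fit uses scipy.optimize.least_squares, and the inverse-variance weighting at the end is an illustrative choice, since Equations (11) and (12) are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle(y, z):
    """Geometric least-squares circle fit: minimize sum of (distance to center - r)^2."""
    def residuals(p):
        y0, z0, r = p
        return np.hypot(y - y0, z - z0) - r
    p0 = [y.mean(), z.mean(), np.hypot(y - y.mean(), z - z.mean()).mean()]
    y0, z0, r = least_squares(residuals, p0).x
    return (y0, z0), r

# Synthetic stand-in for three groups of projected optical centers:
# arcs of different radii around a common (unknown) rotation center.
rng = np.random.default_rng(1)
true_center = np.array([-50.0, 900.0])
groups = []
for radius in (50.0, 65.0, 80.0):
    ang = np.deg2rad(np.arange(0, 24, 1.5))          # 16 rotation steps
    y = true_center[0] + radius * np.cos(ang) + rng.normal(0, 0.05, ang.size)
    z = true_center[1] + radius * np.sin(ang) + rng.normal(0, 0.05, ang.size)
    groups.append((y, z))

centers, spreads = [], []
for y, z in groups:
    c, r = fit_circle(y, z)
    centers.append(c)
    spreads.append((y.std(), z.std()))   # per-group variability, used for weighting

centers, spreads = np.array(centers), np.array(spreads)
w = 1.0 / spreads**2                      # inverse-variance weights (illustrative
w /= w.sum(axis=0)                        # stand-in for Equations (11)-(12))
O_y = (w[:, 0] * centers[:, 0]).sum()
O_z = (w[:, 1] * centers[:, 1]).sum()
print("Estimated rotation center:", O_y, O_z)
```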
Computer Simulation
In order to verify our method, we carried out computer simulations. The simulated camera had the following parameters: f x = f y = 1700.00 pixels, u 0 = 600.00 pixels, v 0 = 500.00 pixels and α = 0. The size of a square of the simulated checkerboard is 14.175 × 14.175 mm. The simulated rotation center in the world coordinate system is O r = (−80.00, −50.00, 900.00) mm. The simulated camera is placed at Cr = (−80.00, −33.31, 852.87) mm, Cg = (−80.00, −5.26, 852.85) mm and Cb = (−80.00, −33.37, 821.75) mm in the world coordinate system and 50.00 mm, 65.00 mm and 80.00 mm away from the rotation center, respectively, as shown in Figure 5.
For each position, 20 pairs of external parameter matrices (R and T) were assigned to simulate the movement of the camera. A total of 60 simulated images "seen" by the camera were substituted into Zhang's method to calculate the internal parameters A and external parameters R and T, from which the positions of the optical centers were obtained. Then 3.5% Gaussian noise was added to the coordinates of each pixel of the images, and the same operation was performed to calculate A, R and T and the optical centers. The obtained internal parameters in both cases are shown in Table 1. The trajectories of the three groups of optical centers projected on the YwOwZw plane are shown in Figure 6: Figure 6a is the result without noise, and Figure 6b is the result with noise. The fitted circle curves obtained by the LSC fitting method are shown in Figure 7, and the fitted centers and radii are shown in Table 2. It can be seen from Table 2 that the three circle centers and radii are the same as the set values when noise is not considered. With 3.5% Gaussian noise, the average of the three fitted circle centers is (−49.98, 900.02) mm, and the errors of the fitted center and radius are less than 0.05 mm and 0.19 mm, respectively. These results confirm that the proposed method can be used to locate the rotation center in the rotating optical measurement system.
Experimental Setting
We carried out experiments to verify our method. The experimental setup is shown in Figure 8a. The rotation control system is composed of a CCD camera (model: Baumer TXG50; resolution: 1224 × 1025 pixels) with a 16 mm focal length imaging lens (model: MA1214M-MP), two translation stages (a Zolix TSA50-C electric translation platform with a repositioning precision of less than 5 µm, and a PI-M406 with a repositioning precision of 0.078 µm) and a turntable (Zolix RAP200 electric rotation platform with a repositioning precision of less than 0.005°). The translation stages are fixed on the rotation platform, and the camera is fixed on a translation stage. The stepper motors move the camera along the Yw and Zw axes and rotate it with the turntable, respectively. During the experiment, the exposure time of the camera is 12,000 µs. The checkerboard image (the size of a square is 14.175 × 14.175 mm) is displayed on an LCD screen (Philips 190V with a resolution of 1440 × 900 pixels; the dot pitch is 0.2835 mm/pixel). The LCD screen is kept still during the calibration process. Since the LCD display is flat and the screen glass is very thin, refraction can be ignored. Of course, a checkerboard target with a higher machining accuracy could also be employed to provide feature points with higher accuracy.
Determination of the Rotation Center
In the experiment, the camera is placed at three different positions, as shown in Figure 4, where c1c2⊥c1c3 and c1c2 = c1c3 = 20 mm. At each initial position, the camera is rotated 16 times, for a total of 22.5 degrees with 1.5 degrees per step. Therefore, in total, 48 frame patterns are captured during the rotation of the turntable and divided into three sets. One set of the patterns is shown in Figure 8b. The screen is placed within the depth of field of the camera lens to avoid the influence of defocus on the calibration results. The external parameter matrices (R and T) are obtained by Zhang's method, in which the calculated R and T are almost lens-distortion-free. The re-projection pixel error of the calibration results was 0.039 pixels in our experiment.
From the rotation matrices obtained by the calibration, the rotation angles γix(j), γiy(j) and γiz(j) (i = 1, 2, 3; j = 1, 2, ..., 16) of the Xw, Yw and Zw axes of the world coordinate system with respect to the Xc, Yc and Zc axes of the camera coordinate system at the initial position can be calculated, respectively. Theoretically, if the optical axis Zc of the camera at the initial position is parallel to Zw and the plane Oc-XcZc is parallel to the plane Ow-YwZw, the rotation angle γiy(j) should be equal to the actual rotation angle of the turntable, and the rotation angles γix(j), γiz(j) = 0 when the camera is rotated around the Yc axis, as shown in Figure 9a. Three sets of the angle curves are marked with different colors in the figure, where the horizontal axis represents the index of the camera's rotation, marked as N = 16(i − 1) + 1~16i. In fact, the rotation angles γix(j) and γiz(j) fluctuate around 0 degrees; we use γ'ix(j), γ'iz(j) and γ'iy(j) to denote the three calculated rotation angles, respectively, as shown in Figure 9b. If these angles are used to calculate the optical centers directly, the three fitted centers will not be concentric on the plane Ow-YwZw. Figure 10a,b shows the projection of the optical centers on the plane Ow-YwZw and the fitted trajectory, respectively. The deviations of the circle centers are large, so the rotation center cannot be found accurately. With the help of the auxiliary coordinate system Op-XpYpZp, the optical centers can be mapped onto a plane parallel to Ow-YwZw through the rotation matrix Rcp. In Rcp, the three rotation angles are Δγix(j) = γ'ix(j) − γix(j), Δγiy(j) = γ'iy(j) − γiy(j) and Δγiz(j) = γ'iz(j) − γiz(j). After the angle correction with Rcp, the optical centers calculated by Equation (7) are unified to the same world coordinate system.
Figure 10. (a) Projection of the optical centers on the YwOwZw plane; (b) fitted circular curves (red *, green * and blue * are the fitted centers of the three fitted curves marked by c1, c2 and c3, respectively).
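A minimal sketch of the rotation-angle bookkeeping described above is given below, assuming each calibrated view provides a 3×3 rotation matrix. The Euler-angle convention ("xyz"), the synthetic input matrices, and the way the correction rotation is composed are illustrative assumptions; the paper does not spell out Rcp explicitly.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def euler_xyz_deg(R):
    """Rotation angles about the X, Y and Z axes (degrees) for one calibrated view."""
    return Rotation.from_matrix(R).as_euler("xyz", degrees=True)

# Placeholder inputs: one calibration rotation matrix per rotation step and the
# nominal turntable angles (degrees) for the same steps.
theta_steps = np.arange(0, 24, 1.5)
R_views = [Rotation.from_euler("y", t, degrees=True).as_matrix() for t in theta_steps]

for R, theta in zip(R_views, theta_steps):
    gx, gy, gz = euler_xyz_deg(R)
    # Ideally gy equals the turntable angle while gx = gz = 0; the deviations below
    # are the angle corrections that enter the auxiliary rotation (illustrative Rcp).
    dgx, dgy, dgz = gx, gy - theta, gz
    R_cp = Rotation.from_euler("xyz", [dgx, dgy, dgz], degrees=True).as_matrix()
    R_corrected = R_cp.T @ R  # one possible way to apply the correction; the paper's
                              # exact composition of Rcp is not reproduced here
```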
Three sets of the optical centers and the fitted circular arcs with different radii on the Ow-YwZw plane are shown in Figure 11a,b. Employing the optical center coordinates, the distances among c1, c2 and c3 are calculated as c1c2 = 20.16 mm and c1c3 = 20.08 mm; the errors of these distances are 0.78% and 0.42%, respectively. The fitting results and the root mean square error (RMSE) of the three groups are displayed in Table 3. The average value of the three fitted centers is Ōr = (−55.20, 961.67) mm, which is regarded as the initial value of the rotation center. In order to obtain a more accurate rotation center, we adjusted the camera to Ōr, employing the translation stages along the Yw and Zw directions, respectively, and rotated the camera 16 times to calculate the optical centers O'rj, which are distributed over a small range, as shown in Figure 12. The average value of O'rj is O'r = (−54.59, 962.05) mm. The final rotation center Or is calculated from Or1, Or2, Or3 and O'r using Equations (9)-(12), giving Or = (−54.69, 961.96) mm. The camera is then moved to Or and rotated 16 times to calculate the optical centers O''rj. The standard deviations (STD) of O'rj relative to Ōr and of O''rj relative to Or in the Yw and Zw directions are shown in Table 4. It is obvious that Or is more reliable than Ōr. These results indicate that the camera's optical center has been positioned accurately at the rotation center of the turntable and that the systematic uncertainty of our method remains about 0.1 mm.
Verification
Two experiments are designed to verify whether the optical center is aligned with the rotation center by our method after the camera has been fixed at the position Or.
Calculating the Angle Formed by the Two Space Points M1, M2 and the Optical Center
In the experiment, we calculated the angle α formed by the two space points M1, M2 and the optical center, as shown in Figure 13. When the optical center coincides with the rotation center, the angle α remains unchanged when the camera is rotated. However, when the optical center does not coincide with the rotation center, the angle changes with the rotation of the camera on the turntable; that is, α1 ≠ α2.
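To make this check concrete, a minimal sketch of computing such an angle from two image points is given below, assuming the internal parameter matrix A is known and lens distortion has already been removed. The pixel coordinates are placeholders; the intrinsic values reuse the simulated parameters quoted earlier.

```python
import numpy as np

def view_angle_deg(A, uv1, uv2):
    """Angle at the optical center between the rays through two image points.

    For a pinhole camera, the ray direction of pixel (u, v) in the camera frame
    is d = A^{-1} [u, v, 1]^T; the angle between two such rays is invariant under
    a pure rotation of the camera about its optical center.
    """
    A_inv = np.linalg.inv(A)
    d1 = A_inv @ np.array([uv1[0], uv1[1], 1.0])
    d2 = A_inv @ np.array([uv2[0], uv2[1], 1.0])
    cosang = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Placeholder intrinsics and image coordinates of M1, M2 in two shots.
A = np.array([[1700.0, 0.0, 600.0],
              [0.0, 1700.0, 500.0],
              [0.0, 0.0, 1.0]])
alpha_1 = view_angle_deg(A, (412.3, 655.1), (701.8, 640.9))  # before rotation
alpha_2 = view_angle_deg(A, (380.5, 652.7), (670.2, 638.4))  # after rotation
print(abs(alpha_1 - alpha_2))  # stays near zero if the centers coincide
```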
In the experiment, four point pairs at different positions on the checkerboard plane are selected to calculate the angles αk (k = 1~4), marked as a red circle, green triangle, pink diamond and blue square, as shown in Figure 14. Sixteen angles αk(j) (j = 1, 2, ..., 16) are calculated as the camera is rotated in the coaxial condition; the four angle curves are shown in Figure 15a and the four difference curves of the adjacent angles are shown in Figure 15b. We then moved the camera away from the rotation center; the calculated angle curves are shown in Figure 15c and the four difference curves of the adjacent angles are shown in Figure 15d. The standard deviations of the four sets of angles in both cases are shown in Table 5. It can be seen from Figure 15 and Table 5 that the angles αk of the four groups of marker points in the coaxial case are basically unchanged, and the standard deviation of the angles in the coaxial case is obviously smaller than that in the off-axis case. This indicates that the optical center coincides with the rotation center of the turntable.

In the second experiment, we compared the camera coordinates of spatial points on the target before and after camera rotation. Employing the imaging model shown in Figure 1b, the coordinates of a spatial point P are (Xcl, Ycl, Zcl) and (Xcr, Ycr, Zcr) in the camera coordinate systems Oc1-Xc1Yc1Zc1 and Oc2-Xc2Yc2Zc2, respectively. If the plane Oc-XcZc is parallel to the plane Ow-YwZw, then, when the optical center coincides with the rotation center, the relationship between (Xcl, Ycl, Zcl) and (Xcr, Ycr, Zcr) is expressed by Equation (13),

(Xcl, Ycl, Zcl)^T = Rθ (Xcr, Ycr, Zcr)^T, (13)

where Rθ is the rotation matrix corresponding to a rotation about the Yc axis by the angle θ, and θ is the rotation angle of the turntable.
Otherwise, if the optical center deviates from the rotation center, the relationship between (Xcl, Ycl, Zcl) and (Xcr, Ycr, Zcr) should be expressed by Equation (14),

(Xcl, Ycl, Zcl)^T = Rθ (Xcr, Ycr, Zcr)^T + T, (14)

where T is the translation vector between Oc1-Xc1Yc1Zc1 and Oc2-Xc2Yc2Zc2. We shot the checkerboard images when the camera was rotated by the turntable through different rotation angles θi and extracted the checkerboard corners as marked points. Employing their homogeneous coordinates (Xwi, Ywi, Zwi, 1)^T in the world coordinate system, their image coordinates (ui, vi, 1)^T, and the internal parameter matrix A, the rotation matrix Ri and translation vector Ti between the world coordinate system and the camera coordinate system at each position can be obtained from Equation (1). Then the coordinates (Xci, Yci, Zci)^T of the marked points can be calculated by Equation (15),

(Xci, Yci, Zci)^T = Ri (Xwi, Ywi, Zwi)^T + Ti. (15)

Assume that (Xcl, Ycl, Zcl) and (Xcr, Ycr, Zcr) stand for the original coordinates and the coordinates after camera rotation of a spatial point, respectively, and that (X'cl, Y'cl, Z'cl) stands for the coordinates calculated by Equation (13) from (Xcr, Ycr, Zcr). We can then compare (X'cl, Y'cl, Z'cl) with the original coordinates (Xcl, Ycl, Zcl). If they are equal, it is proved that the optical center coincides with the rotation center; otherwise, the optical center of the camera deviates from the center of the turntable.
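A minimal sketch of this verification step is given below, using synthetic extrinsics for a reference view and a coaxially rotated view; the rotation-angle sign convention, board layout, and variable names are assumptions for illustration.

```python
import numpy as np

def cam_coords(R, T, world_pts):
    """Map Nx3 world points into the camera frame: M_c = R M_w + T (Equation (15))."""
    return world_pts @ R.T + T.reshape(1, 3)

def rot_y(theta_deg):
    """Rotation matrix about the camera Y axis by theta (illustrative form of R_theta)."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

# Placeholders: extrinsics of the reference view (0 deg) and of a view rotated by theta
# about the same optical center, plus checkerboard corner coordinates in the world frame.
R0, T0 = np.eye(3), np.array([0.0, 0.0, 900.0])
theta = 6.0
Rt, Tt = rot_y(-theta) @ R0, rot_y(-theta) @ T0   # synthetic "coaxial" rotated view
world_pts = np.mgrid[0:9, 0:6].reshape(2, -1).T.astype(float) * 14.175
world_pts = np.hstack([world_pts, np.zeros((world_pts.shape[0], 1))])

M_cl = cam_coords(R0, T0, world_pts)          # marked points before rotation
M_cr = cam_coords(Rt, Tt, world_pts)          # marked points after rotation
M_cl_check = M_cr @ rot_y(theta).T            # Equation (13): rotate back by theta

# If the optical center coincides with the rotation center, the residual is ~0.
print(np.abs(M_cl_check - M_cl).max())
```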
We first aligned the optical center with the rotation center using the proposed method. The camera captured the target images when the turntable was rotated by 0°, 6°, 9° and 12°, as shown in Figure 16; the original position of the camera corresponds to 0°. The coordinates (Xcl, Ycl, Zcl) and (Xcri, Ycri, Zcri) (i = 1, 2, 3) of the marked points were calculated, and (X'cli, Y'cli, Z'cli) was obtained by multiplying (Xcri, Ycri, Zcri) by the corresponding rotation matrix Rθi according to Equation (13). Comparing these with the coordinates (Xcl, Ycl, Zcl), we can see that they coincide accurately, as shown in Figure 17, where the original coordinates (Xcl, Ycl, Zcl) are marked as red "+" and the resulting coordinates (X'cli, Y'cli, Z'cli) as blue "o"; the circular zones are enlarged parts. The standard deviations in the Xc and Yc directions are listed in Table 6.
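A minimal numpy sketch of this consistency check is given below. It assumes that the turntable rotates the camera about its Y axis and that the mapping of Equation (13) is a pure rotation; the sign convention of the angle and the placeholder point coordinates are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def rot_y(theta_deg):
    """Rotation matrix about the camera Y axis (assumed turntable axis)."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0,       1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

def alignment_residuals(pts_before, pts_after, theta_deg):
    """Map points observed after a turntable rotation back to the original
    camera frame and return per-point residuals; small residuals indicate
    that the optical center coincides with the rotation center."""
    pts_mapped = (rot_y(theta_deg) @ pts_after.T).T   # Equation-(13)-style mapping
    return np.linalg.norm(pts_mapped - pts_before, axis=1)

# Toy usage with placeholder checkerboard-corner coordinates (mm):
pts_cl = np.random.rand(20, 3) * 100 + [0, 0, 500]
pts_cr = (rot_y(-6.0) @ pts_cl.T).T                   # ideal coaxial case
print(alignment_residuals(pts_cl, pts_cr, 6.0).max()) # ~0 when coaxial
```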
We then moved the optical center away from the rotation center and repeated the above steps. The captured target images are shown in Figure 18. When Rθi acts on the coordinates (Xcri, Ycri, Zcri) (i = 1, 2, 3) of the marked points, there is a significant deviation, as shown in Figure 19. The standard deviations in the Xc and Yc directions are listed in Table 6.
The experiments verify that our method can align the optical center with the rotation center. When the optical center coincides with the rotation center, the extracted marked points coincide with each other after Rθi acts on the coordinates obtained after camera rotation, and the alignment errors in the Xc and Yc directions are small; otherwise, the deviations in the Xc and Yc directions are large.
Figure 19. The comparative diagrams of the marked points at (a) 6°; (b) 9°; (c) 12°.
Conclusions
We propose a method based on camera calibration with a two-dimensional target to solve the problem of aligning the camera's optical center with the rotation center in a rotating optical measurement system composed of a camera and a rotating platform. An auxiliary plane coordinate system is introduced to adjust the external parameter matrix of the camera, and the rectified external parameter matrix is used to calculate the optical centers in the unified world coordinate system. Multiple fitted circles determine the rotation center more accurately than a single fitted circle, which provides higher precision in subsequent applications such as panoramic mosaicking. Simulations and experiments verified the effectiveness of the proposed method.
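As an illustration of the circle-fitting step mentioned above, the following sketch fits circles to optical-center positions computed at different turntable angles and averages their centers. The algebraic (Kåsa-style) least-squares fit shown here is one standard approach; the paper's exact fitting procedure is not reproduced in this extract, so treat this only as a plausible implementation.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa-style) least-squares circle fit to 2D points.
    Returns (center, radius)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc**2 + yc**2)
    return np.array([xc, yc]), r

def rotation_center(circles_points):
    """Estimate the rotation center as the mean of the centers of several
    fitted circles (one circle per tracked optical-center trajectory)."""
    centers = [fit_circle(p)[0] for p in circles_points]
    return np.mean(centers, axis=0)
```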
It should be noted that the optical center of the camera at its initial position should be kept at an appropriate distance from the rotation center during calibration in order to increase the accuracy of the circle fitting. Although this paper only discusses aligning the optical center of the camera in a one-dimensional rotating optical system, the proposed method can in principle be extended to the alignment of the optical center with the rotation axes of a two-dimensional rotating optical system. However, the current experimental setup is only suitable for one-dimensional rotating optical systems; in future work, we will adopt a device suitable for two-dimensional (pitch and yaw) rotating platforms. | 11,677.4 | 2020-10-05T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
21cmFAST v3: A Python-integrated C code for generating 3D realizations of the cosmic 21cm signal.
The field of 21-cm cosmology – in which the hyperfine spectral line of neutral hydrogen (appearing at the rest-frame wavelength of 21 cm) is mapped over large swathes of the Universe’s history – has developed radically over the last decade. The promise of the field is to revolutionize our knowledge of the first stars, galaxies, and black holes through the timing and patterns they imprint on the cosmic 21-cm signal. In order to interpret the eventual observational data, a range of physical models have been developed – from simple analytic models of the global history of hydrogen reionization, through to fully hydrodynamical simulations of the 3D evolution of the brightness temperature of the spectral line. Between these extremes lies an especially versatile middle-ground: fast semi-numerical models that approximate the full 3D evolution of the relevant fields: density, velocity, temperature, ionization, and radiation (Lyman-alpha, neutral hydrogen 21-cm, etc.).
Summary
The field of 21-cm cosmology – in which the hyperfine spectral line of neutral hydrogen (appearing at the rest-frame wavelength of 21 cm) is mapped over large swathes of the Universe's history – has developed radically over the last decade. The promise of the field is to revolutionize our knowledge of the first stars, galaxies, and black holes through the timing and patterns they imprint on the cosmic 21-cm signal. In order to interpret the eventual observational data, a range of physical models have been developed – from simple analytic models of the global history of hydrogen reionization, through to fully hydrodynamical simulations of the 3D evolution of the brightness temperature of the spectral line. Between these extremes lies an especially versatile middle-ground: fast semi-numerical models that approximate the full 3D evolution of the relevant fields: density, velocity, temperature, ionization, and radiation (Lyman-alpha, neutral hydrogen 21-cm, etc.). These have the advantage of being comparable to the full first-principles hydrodynamic simulations, but significantly quicker to run; so much so that they can be used to produce thousands of realizations on scales comparable to those observable by upcoming low-frequency radio telescopes, in order to explore the very wide parameter space that still remains consistent with the data. Amongst practitioners in the field of 21-cm cosmology, the 21cmFAST program has become the de facto standard for such semi-numerical simulators. 21cmFAST (Mesinger and Furlanetto 2007; Mesinger, Furlanetto, and Cen 2011) is a high-performance C code that uses the excursion set formalism (Furlanetto, Zaldarriaga, and Hernquist 2004) to identify regions of ionized hydrogen atop a cosmological density field evolved using first- or second-order Lagrangian perturbation theory (Zel'Dovich 1970; Scoccimarro and Sheth 2002), tracking the thermal and ionization state of the intergalactic medium, and computing X-ray, soft UV and ionizing UV cosmic radiation fields based on parametrized galaxy models. For example, the following figure contains slices of lightcones (3D fields in which one axis corresponds to both spatial and temporal evolution) for the various component fields produced by 21cmFAST.
However, 21cmFAST is a highly specialized code, and its implementation has been quite specific and relatively inflexible. This inflexibility makes it difficult to modify the behaviour of the code without detailed knowledge of the full system, or without disrupting its workings. This lack of modularity within the code has led to widespread code "branching" as researchers hack new physical features of interest into the C code; the lack of a streamlined API has led derivative codes which run multiple realizations of 21cmFAST simulations (such as the Monte Carlo simulator 21CMMC; Greig and Mesinger 2015) to re-write large portions of the code in order to serve their purpose. It is thus of critical importance, as the field moves forward in its understanding – and the range and scale of physical models of interest continues to increase – to reformulate the 21cmFAST code in order to provide a fast, modular, well-documented, well-tested, stable simulator for the community.
Features of 21cmFAST v3
This paper presents 21cmFAST v3+, which is formulated to follow these essential guiding principles. While keeping the same core functionality of previous versions of 21cmFAST, it has been fully integrated into a Python package, with a simple and intuitive interface, and a great deal more flexibility. At a higher level, in order to maintain best practices, a community of users and developers has coalesced into a formal collaboration which maintains the project via a GitHub organization. This allows the code to be consistently monitored for quality, maintaining high test coverage, stylistic integrity, dependable release strategies and versioning, and peer code review. It also provides a single point of reference for the community to obtain the code, report bugs and request new features (or get involved in development).
A significant part of the work of moving to a Python interface has been the development of a robust series of underlying Python structures which handle the passing of data between Python and C via the CFFI library. This foundational work provides a platform for future versions to extend the scientific capabilities of the underlying simulation code. The primary new usability features of 21cmFAST v3+ are:

• Convenient (Python) data objects which simplify access to and processing of the various fields that form the brightness temperature.
• Enhancement of modularity: the underlying C functions for each step of the simulation have been de-coupled, so that arbitrary functionality can be injected into the process.
• Conversion of most global parameters to local structs to enable this modularity, and also to obviate the requirement to re-compile in order to change parameters.
• Simple pip-based installation.
• Robust on-disk caching/writing of data, both for efficiency and simplified reading of previously processed data (using HDF5).
• Simple high-level API to generate either coeval cubes (purely spatial 3D fields defined at a particular time) or full lightcone data (i.e. those coeval cubes interpolated over cosmic time, mimicking actual observations).
• Improved exception handling and debugging.
• Simple configuration management, and also more intuitive management for the remaining C global variables.
• Comprehensive API documentation and tutorials.
• Comprehensive test suite (and continuous integration).
While in v3 we have focused on the establishment of a stable and extendable infrastructure, we have also incorporated several new scientific features, appearing in separate papers:

• Generate transfer functions using the CLASS Boltzmann code (Lesgourgues 2011).
21cmFAST is still in very active development. Amongst further usability and performance improvements, future versions will see several new physical models implemented, including milli-charged dark matter models (Muñoz, Dvorkin, and Loeb 2018) and forward-modelled CMB auxiliary data (Qin, Poulin, et al. 2020).
In addition, 21cmFAST will be incorporated into large-scale inference codes, such as 21CMMC, and is being used to create large data-sets for inference via machine learning. We hope that with this new framework, 21cmFAST will remain an important component of 21-cm cosmology for years to come.
Examples
21cmFAST supports installation using conda, which means installation is as simple as typing conda install -c conda-forge 21cmFAST. The following example can then be run in a Python interpreter.
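The example itself is not reproduced in this extract; the following sketch illustrates what a minimal lightcone run might look like. The function and attribute names (py21cmfast, run_lightcone, brightness_temp) reflect our reading of the documented v3 interface and should be checked against the package documentation; the parameter values are illustrative only.

```python
import py21cmfast as p21c

# Generate a small brightness-temperature lightcone (illustrative parameters;
# names and defaults should be verified against the 21cmFAST v3 docs).
lightcone = p21c.run_lightcone(
    redshift=6.0,                                  # minimum redshift of the lightcone
    max_redshift=12.0,
    user_params={"HII_DIM": 100, "BOX_LEN": 200},  # cells and box size in Mpc
)
print(lightcone.brightness_temp.shape)             # 3D field: two sky axes + line of sight
```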
Performance
Despite being a Python code, 21cmFAST v3 does not diminish the performance of previous pure-C versions. It utilises CFFI to provide the interface to the C code through Python, which is managed by custom Python classes that oversee the construction and memory allocation of each C struct.
OpenMP parallelization is enabled within the C-code, providing excellent speed-up for large simulations when performed on high-performance machines.
Note that while a full lightcone simulation can be expensive to perform, it takes only 2–3 minutes to calculate a coeval box (excluding the initial conditions). For instance, the aforementioned timing for v3 includes 80 minutes to generate the initial conditions, which also dominate the maximum RAM required, with an additional ~4 minutes per snapshot to calculate all required fields of perturbation, ionization, spin temperature and brightness temperature.
To guide the user, we list some performance benchmarks for variations on this simulation, run with 21cmFAST v3.0.2. Note that these benchmarks are subject to change as new minor versions are delivered; in particular, operational modes that reduce maximum memory consumption are planned for the near future. At this time, the 21cmFAST team suggests using 4 or fewer shared-memory cores. However, as performance varies between machines, users are advised to benchmark scalability on their own systems.
Figure 2: Brightness temperature lightcone produced by the example code in this paper. | 1,909.6 | 2020-10-22T00:00:00.000 | [
"Computer Science"
] |
Order parameter profiles in a system with Neumann – Neumann boundary conditions
In this article we consider a critical thermodynamic system with the shape of a thin film confined between two parallel planes. It is assumed that the state of the system at a given temperature and external ordering field is described by order-parameter profiles, which minimize the one-dimensional counterpart of the standard φ4 Ginzburg–Landau Hamiltonian and meet the so-called Neumann–Neumann boundary conditions. We give an analytic representation of the extremals of this variational problem in terms of Weierstrass elliptic functions. Then, depending on the temperature and ordering field, we determine the minimizers and obtain the phase diagram in the temperature–field plane.
Introduction
In the current article we consider a generic mean-field model describing a phase transition in a film in which the surfaces order at exactly the same temperature as the bulk. Mathematically this is described by imposing (Neumann, Neumann) boundary conditions on the order parameter. This type of boundary condition can be achieved in a magnetic system. It can, in principle, also be accomplished for a fluid, but a very precise arrangement of the surfaces must be made so that the wall–fluid interaction exactly matches the interactions inside the fluid system. Let us recall that in the vicinity of the critical temperature Tc of the bulk system, one observes a diversity of surface phase transitions [1,2] of different kinds, in which the surface orders before, together with, or after the bulk of the system; these are known as normal (or extraordinary), surface-bulk and ordinary surface phase transitions. Mathematically these surface transitions correspond to different boundary conditions imposed on the order parameter characterizing the system. For a simple fluid or for binary liquid mixtures the wall generically prefers one of the fluid phases or one of the components. In the vicinity of the bulk critical point this leads to the phenomenon of critical adsorption [3-15]. It can be modelled by considering a local surface field h1 acting solely on the surfaces of the system. When the system undergoes a phase transition in its bulk in the presence of such surface ordering fields, one speaks of the "normal" transition [16]. It has been shown to be equivalent, as far as the leading critical behaviour is concerned, to the "extraordinary" transition [2,16], which is achieved by enhancing the surface couplings more strongly than the bulk couplings. The usual way these boundary conditions are described mathematically is to require that the order parameter diverges at the surfaces; one then has the so-called (+, +) or (+, −) boundary conditions, which differ by the equal or opposite behaviour of the order parameter in the vicinity of the borders of the system. There is also the special case of surface enrichment, when the surface orders simultaneously with the bulk. This is normally termed the surface-bulk, or special, phase transition. One way to achieve it is to enhance the coupling on the surface and very near to it, so as to attain the delicate balance needed for both orderings to occur simultaneously. On a mean-field level this can be described by imposing Neumann boundary conditions on the boundary. These are the boundary conditions studied in the current article.
Statement of the problem
Let us consider an Ising-type critical thermodynamic system having the shape of a thin film confined between two parallel planes placed at a distance L from one another. Then, within the framework of the Ginzburg–Landau theory of phase transitions, see e.g. [17,18], the state of the system is described by the minimizers of the one-dimensional counterpart of the standard φ4 Ginzburg–Landau Hamiltonian, expressed in terms of the order parameter φ(z), with z ∈ (0, L) the independent variable associated with the position of a layer perpendicular to the bounding planes. Here, the parameters τ and η represent the temperature of the system and the applied ordering field, respectively, and the prime indicates differentiation with respect to the variable z.
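Since expression (1) itself is not reproduced in this extract, the following is a sketch of a standard dimensionless form of the functional, its Euler–Lagrange equation and its first integral that is consistent with the quantities used below; the exact normalisation of the coefficients is an assumption.

```latex
% Assumed dimensionless form of the phi^4 Ginzburg--Landau functional (sketch).
\[
  F[\varphi;\tau,\eta,L]
  = \int_{0}^{L}\!\left[\tfrac{1}{2}\bigl(\varphi'(z)\bigr)^{2}
    + \tfrac{\tau}{2}\,\varphi^{2}(z)
    + \tfrac{1}{4}\,\varphi^{4}(z)
    - \eta\,\varphi(z)\right]\mathrm{d}z ,
\]
% whose Euler--Lagrange equation and first integral are
\[
  \varphi'' = \tau\varphi + \varphi^{3} - \eta ,
  \qquad
  \tfrac{1}{2}\bigl(\varphi'\bigr)^{2}
  - \tfrac{\tau}{2}\varphi^{2}
  - \tfrac{1}{4}\varphi^{4}
  + \eta\,\varphi = \varepsilon ,
\]
% so that (\varphi')^2 = P(\varphi), with P a quartic polynomial in \varphi
% depending on tau, eta and epsilon.
```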
Given three real numbers L > 0, τ and η, we are interested in the minimizers φ = φ(z) of the functional F[φ; τ, η, L] which (i) are continuous together with their derivatives up to second order in the interval [0, L], and (ii) satisfy the so-called Neumann–Neumann boundary conditions (2), requiring the derivative φ′ to vanish at both bounding planes. Actually, (2) are the natural boundary conditions associated with the regarded variational problem, which means that any variation of the order parameter φ is allowed at both end points z = 0 and z = L.
It is clear that each such minimizer should satisfy the corresponding Euler–Lagrange equation, which, on account of expressions (1), is given by Equation (3). Obviously, the order parameter φb(τ, η, L) of the bulk system, i.e. the constant (with respect to z) solution of Equation (3) determined as the real root of the cubic polynomial for which the functional attains its minimum, meets the requirements (i) and (ii). Therefore, a minimizer of the regarded functional with the required properties always exists, and the question is whether it is φb or some other, non-constant, solution φ(z) of Equation (3) satisfying the conditions (i) and (ii) together with the energy condition (6). Equation (3) possesses a first integral ε(φ); that is, ε = ε(φ) takes a certain constant real value on any non-constant, sufficiently smooth, real-valued solution of Equation (3). Thus, the problem we are interested in can be formulated as follows. Given three real numbers L > 0, τ and η, find the non-constant real-valued solutions φ(z) of Equation (8), of the form (dφ/dz)² = P(φ) with P a quartic polynomial corresponding to a certain ε ∈ R, that meet the conditions (i), (ii) and (6).
Solution of the problem
Let us first express the general solution of Equation (8), taking advantage of the fact that the polynomial P(φ) must have at least one simple real root ρ because of the considered boundary conditions (2). Then, since P(ρ) = 0 and the right-hand side of Equation (8), that is P(φ), is a polynomial of fourth degree in the variable φ, we can, following [19, pp. 452-455], express each solution of Equation (8) in the form (10), where ℘(z; g2, g3) is the Weierstrass elliptic function with elliptic invariants g2 and g3 determined by the coefficients of P. Now, taking into account the well-known properties of the Weierstrass elliptic functions (see, e.g., [20]), it is evident that each function of form (10) is real-valued provided that z, ρ, g2, g3 ∈ R, as in the considered case. Consequently, bearing in mind that P′(ρ) ≠ 0 since ρ is a simple root of the polynomial P(φ), that ℘(z; g2, g3) → ∞ as z → 0, and the Weierstrass differential equation satisfied by ℘, we see that each function of form (10) satisfying condition (14) meets the conditions (i) and (ii).
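For completeness, the classical representation referred to here (the result in reference [19] cited above) can be sketched as follows; the precise form of Eq. (10) in the paper may differ by normalisation conventions.

```latex
% Classical Weierstrass representation of the solutions of (phi')^2 = P(phi),
% where rho is a simple real root of the quartic P (sketch; conventions may differ).
\[
  \varphi(z) \;=\; \rho \;+\;
  \frac{\tfrac{1}{4}\,P'(\rho)}
       {\wp\!\left(z - z_{0};\, g_{2}, g_{3}\right) - \tfrac{1}{24}\,P''(\rho)} ,
\]
% with g_2, g_3 the algebraic invariants of the quartic P and z_0 an integration constant.
```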
Hereafter we assume L = 1 and solve numerically for ρ the transcendental equation (14) for given values of the parameters τ and η. In this way we find the functions of form (10) that satisfy the conditions (i) and (ii). Then we compare the energy of each such state of the system with the energy corresponding to the constant solution φb(τ, η, 1) and determine the minimizers. Two minimizers of the functional F[φ; τ, η, 1] corresponding to τ = −400 and η = −97.52 are depicted in Figure 1. They both have energy equal to the energy F[φb; τ, η, 1] of the "finite" bulk system. As a result of a number of lengthy computations, we have obtained the phase diagram shown in Figure 2. This is a work in progress, focused mainly on the mathematical aspects of the problem; the physics behind this study will be discussed elsewhere. In the current article we have found the general form of the solution, see Eq. (10), of Equation (3) for the order-parameter profile of a system with film geometry subjected to Neumann–Neumann boundary conditions, see Eq. (2). The typical form of the order-parameter profile as a function of the orthogonal variable z is shown in Fig. 1; one observes that the profiles are mirror-symmetric with respect to the middle of the system. We have determined the phase diagram of the film system, shown in Fig. 2. The two branches of the diagram below the point Tc,L are symmetric with respect to the η = 0 line. For such a system it would be interesting to determine the response functions (the local and the total ones), as well as the force acting between the plates of the system, which near Tc,L is known as the Casimir force [21-24].
Figure 2. Phase diagram. The blue region in the (τ, η)-plane comprises the values of the parameters τ and η for which two non-constant minimizers co-exist, while in the white region we find only the constant solution φb(τ, η, 1). All three types of minimizers co-exist at the border between these regions, indicated by the black thick curves. The point Tc,L with coordinates τ = −227.556, η = 0 marks the critical point of the finite system. | 2,112.6 | 2018-01-01T00:00:00.000 | [
"Physics"
] |
A one-pot three-component process for the synthesis of 6-amino-4-aryl-5-cyano-3-methyl-1-phenyl-1,4-dihydropyrano[2,3-c]pyrazoles in aqueous media
A three-component process for the one-pot synthesis of 6-amino-4-aryl-5-cyano-3-methyl-1-phenyl-1,4-dihydropyrano[2,3-c]pyrazoles from 3-methyl-1-phenyl-2-pyrazolin-5-one, aromatic aldehydes and malononitrile using p-dodecylbenzenesulfonic acid (DBSA) as the catalyst (10 mol %) in aqueous media is described. This method provides several advantages, such as mild reaction conditions and a simple work-up procedure, and is environmentally friendly. In addition, water is used as a green solvent.
Introduction
The 21st century is the era of green chemistry: more and more chemists are devoted to research on 'green synthesis', meaning that the reagents, solvents and catalysts are environmentally friendly. Recently, organic reactions in water, without the use of harmful organic solvents, have attracted much attention, because water is a cheap, safe and environmentally benign solvent.1 However, water as a solvent was not frequently used until recently, for several reasons: many organic materials do not dissolve in water, and many reactive intermediates and catalysts decompose in water. It is therefore necessary to add a phase-transfer catalyst (PTC) or surfactant such as hexadecyltrimethylammonium bromide (HTMAB), tetrabutylammonium bromide (TBAB) or p-dodecylbenzenesulfonic acid (DBSA), because these promote uniform dispersion of the organic materials in water during the synthesis.
It is known that polyfunctionalized benzopyrans and their derivatives are very useful compounds. They have been widely used as pharmaceutical intermediates owing to their useful biological and pharmacological properties, such as antibacterial, anticoagulant, anticancer, spasmolytic, hypnotic, diuretic and insecticidal activities.2-6 Some 2-amino-4H-pyrans can be employed as photoactive materials.7 Furthermore, multisubstituted 4H-pyrans also constitute a structural unit in a series of natural products.8,9 In addition, substituted pyrazoles and their derivatives are important pharmaceuticals and agricultural chemicals. Usually these compounds are synthesized in organic solvents.10,11 In the course of our investigations to develop new synthetic methods in water, we have recently completed a series of organic syntheses using water as the solvent.12 Herein we report a one-pot, three-component and highly efficient method for the synthesis of 6-amino-4-aryl-5-cyano-3-methyl-1-phenyl-1,4-dihydropyrano[2,3-c]pyrazoles catalyzed by DBSA in aqueous media. This efficient synthesis in aqueous media offers not only operational simplicity but also the corresponding products in good to excellent yields (Scheme 1).
In the presence of DBSA, the reaction of 3-methyl-1-phenyl-2-pyrazolin-5-one 1, aromatic aldehydes 2 and malononitrile 3 was performed in water at 60 °C, and high yields of products 4 were obtained. The results are summarized in Table 1. As shown in Table 1, a series of aromatic aldehydes 2 were reacted with 1 and 3 in the presence of DBSA in aqueous media at 60 °C; the reaction proceeds smoothly to afford the corresponding 6-amino-4-aryl-5-cyano-3-methyl-1-phenyl-1,4-dihydropyrano[2,3-c]pyrazoles 4 in good to excellent yields. No obvious effect of the electronic nature of the substituents in the aromatic ring was observed. Benzaldehyde and aromatic aldehydes containing electron-donating groups (such as alkyl, alkoxyl and hydroxyl groups) or electron-withdrawing groups (such as halide and nitro groups) were employed and reacted well to give the corresponding products 4 in good to excellent yields under these reaction conditions.
The catalyst (DBSA) plays a crucial role in the success of the reaction in terms of both rate and yield. For example, the reaction of 2,4-dichlorobenzaldehyde with 1 and 3 could be carried out in the absence of DBSA by stirring the mixture (1, 2i and 3) in aqueous media at 60 °C for 6 h, but the yield was poor (24%). Additionally, 4-dimethylaminobenzaldehyde (1l) failed to give the corresponding 6-amino-4-aryl-5-cyano-3-methyl-1-phenyl-1,4-dihydropyrano[2,3-c]pyrazole, and the starting materials were quantitatively recovered under the same conditions. The explanation for this result may be the strongly electron-donating dimethylamino group in (1l), which reduces the reactivity. A degree of tautomerisation may occur in (1l), with formation of a quinoid structure and thus decreased reactivity of the aldehyde group (Scheme 2). In conclusion, a procedure for the preparation of 6-amino-4-aryl-5-cyano-3-methyl-1-phenyl-1,4-dihydropyrano[2,3-c]pyrazoles catalyzed by DBSA in aqueous media has been developed. This is a one-pot three-component condensation in aqueous media. Aqueous solution is a clean and environmentally desirable system, and no harmful organic solvents are used. This report proposes and demonstrates a new, useful and attractive process for the synthesis of these compounds.
Experimental Section
General Procedures. Liquid aldehydes were distilled before use. IR spectra were recorded on a Bio-Rad FTS-40 spectrometer (KBr). 1H NMR spectra were measured on a Bruker AVANCE 400 (400 MHz) spectrometer using TMS as internal reference and CDCl3 as solvent. Elemental analyses were determined using a Perkin-Elmer 2400 II elemental analyzer. General procedure for the preparation of 4. A mixture of 3-methyl-1-phenyl-2-pyrazolin-5-one (1, 2 mmol), aromatic aldehyde (2, 2 mmol), malononitrile (3, 2 mmol), and DBSA (10 mol%) in water (40 mL) was stirred at 60 °C for 3 hours. The mixture was then cooled to room temperature, and the solid was filtered off and washed with H2O (40 mL). The crude products were purified by recrystallization from ethanol (95%) to give 4. Data for the compounds are given below: | 1,113 | 2006-06-29T00:00:00.000 | [
"Chemistry"
] |
Application of Deep Learning and Unmanned Aerial Vehicle on Building Maintenance
Introduction
Changes in customer preference may negatively affect building sustainability, well-being, and safety and may eventually increase competitiveness in the market. For proactive and prompt building maintenance and repair work, customers seek quick, effective building monitoring approaches to avoid severe damage and unnecessary expenditure [1]. Conventional approaches for examining building structures typically require the involvement of building surveyors who conduct assessments of building elements. These assessments include lengthy site inspection for systematic recording of the building elements' physical condition on the basis of note-taking, photographs, drawings, and customer-supplied information [2], followed by analysis of the collected data and writing of a health assessment report of the building. The components of this report include the assessed building's current state, recent updates, maintenance and repair records, and future long-term repair cost estimates [3]. However, this approach is a time-, labor-, and cost-intensive process and can endanger the surveyors' health and safety, particularly when the building to be assessed is a mid- to high-rise structure. Convolutional neural networks (CNNs) have been applied to detect the deterioration of many structures such as roads, bridges, and tunnels but have rarely been employed to detect deterioration of building external walls [4-6]. Moreover, unmanned aerial vehicles (UAVs) have wide applications in deterioration detection. Consequently, a UAV-CNN combination for external wall deterioration detection could have practical applications, ensuring surveyor safety.
In this study, we focused on the automated image-based detection and localization of key defects (efflorescence, spalling, cracking, and defacement) in the external wall tiles of buildings. However, this study was only a pilot study and thus has a few limitations: (1) the model could not consider multiple defect types simultaneously; in other words, all the considered images belonged to only one category; (2) the model considered only images with visible defects.
Herein, this study reports a CNN application for the automated assessment of the external wall tile condition of buildings, with a brief discussion of the method for selecting the most common defects of these tiles. First, we provide a brief overview of various applications of CNNs, including deep learning techniques, for resolving computer vision-related problems, followed by a description of the theoretical basis for the current study. This research proposes a model for detection and localization that is based on transfer learning, involving the use of VGG-16 to execute feature extraction as well as feature classification. Next, the localization problem and the class activation mapping (CAM) technique incorporated within the defect localization model are discussed. Subsequently, we discuss the employed dataset, the developed model, and the obtained results, followed finally by conclusions and directions for future studies.
Factors Leading to Building Deterioration.
Building lifespan can vary from decades to centuries. In general, building durability can be increased through constant protection, repair, and maintenance activities [7,8]. The deterioration rate and degree differ among building components, with construction design, material, method, construction quality, and environment being the crucial influencing factors [9]. Several factors leading to building deterioration may be divided into the following categories: natural environment (temperature, relative humidity, sunshine, wind, and water), natural disasters (earthquakes and typhoons), and human factors (design, construction, users, management, and maintenance) [10-13].
Building External Wall Tile Defects and Their Types
External wall tile defects not only influence the overall appearance of buildings but also endanger public safety; for instance, they may lead to injuries due to falling tiles. External wall tile defects can be roughly divided into five types: defacement, efflorescence, cracking, spalling, and bulging. Of these, defacement, efflorescence, cracking, and spalling have been the main focus of most studies: (1) Defacement. Defacement, the most significant and common type of external wall tile deterioration in buildings, is closely related to the architectural shape and design of a building and the long-term influence of wind and rain on it [14]. Several major factors result in the defacement of external wall tiles. For instance, when rebar is exposed due to external wall cracks, water containing rust from the corroded iron flows out of the walls, defacing the affected areas.
Moreover, installation of accessories can damage external wall tiles, thus promoting algal and fungal growth on the affected walls. (2) Efflorescence. Efflorescence, commonly known as whiskering, saltpetering, or "wall cancer", often affects the hollow bricks of building finishes, joints of external wall tiles, or joints of stone veneers. Efflorescence prevention in cement mortar or concrete-based structures is impossible. (3) Cracking. The main causes of external wall cracking include overloading of buildings, uneven land subsidence, and violent shaking during earthquakes [15]. The drying shrinkage of external wall concrete, corrosion expansion of rebar, secondary construction of external wall accessories, and man-made disasters of fire and explosion can aggravate this cracking. Furthermore, tile breakage can lead to entry of rainwater into the main bodies of buildings, resulting in internal and external structural deterioration. Hence, cracks on a building's facade can influence the building's appearance and cause rainwater invasion, possibly leading to inconvenience in daily life or loss of property or even affecting building safety and durability. (4) Spalling. Spalling is characterized by falling off of surface decorative materials (e.g., tiles and coating) due to reduction in adhesive strength, aging of cement mortar and concrete, poor tile quality, high temperature caused by fire, or natural forces (e.g., strong wind and violent shaking during earthquakes) [16-18]. (5) Bulging. Bulging mainly occurs between concrete and the base cement mortar. Gaps form between the layers of cement mortar and surfaces of external wall tiles, resulting in material separation. Long-term changes in temperature or humidity lead to a reduction in adhesive strength and separation of adhesive interfaces for various adhesives.
The methods of building deterioration detection include visual assessment, percussion-based identification, rebound intensity assessment, ultrasonic wave propagation assessment, pull-out testing, infrared thermography, and UAV use [44-46]. Compared with other methods, the application of UAVs is a more efficient way to collect large amounts of building data [47,48].
In addition to deterioration detection, UAVs can be used in environmental monitoring, traffic management, pollution monitoring, and security [49-51]. UAVs are also an important emerging technology for developing sustainable communities [52].
CNN Use for Building Deterioration Detection.
With the development of deep learning, applications of automatic defect detection in community infrastructure and the built environment are increasing. CNNs have been used for rapid structural damage detection and maintenance cost estimation after a serious earthquake so as to provide a reference for owners and decision-makers to make accurate and timely risk management decisions [53]. Region-based CNN (R-CNN) and Faster R-CNN have also been used for road damage detection and classification [54]. Other CNN applications include the detection of concrete cracks [55-57], automated detection of deformation at the bottom of steel box girders of long-span bridges [58], and automated detection of building types in street images [59]. In addition, CNNs have gradually been used in building external wall defect detection. Agyemang and Bader applied a CNN for detecting cracks on building external walls and assessing the defects therein [3]. Perez et al. also used CNNs to detect building defects [9]. As shown in the related research, VGG-16 and CAM are commonly used methods in building defect detection applications.
In summary, although deep learning has been used in many engineering fields [60,61], it has been used less often for detecting external wall deterioration. Moreover, integrating UAV and deep learning applications may increase the practical value of automated external wall deterioration detection.
Materials and Methods
This study developed a deep learning model with the ability to classify defects, namely efflorescence, spalling, cracking, and defacement, in the external wall tiles of buildings. By applying CNNs, we identified the related limitations and challenges based on the nature of not only the defects to be investigated but also the surroundings. Images showing the defect types from different external wall tile sources were collected first, and the data were then appropriately cut and resized; the resulting dataset was used to train the network model. Next, using a transfer learning technique with a VGG-16 model pretrained on ImageNet as our base model, this study customized and initialized the weights. Subsequently, a separate set of images, not previously seen by the trained model, was used to validate and examine the trained model's robustness. Finally, this study applied CAM to address the localization problem.
Dataset.
All external wall tile images were obtained using mobile phones, handheld cameras, and drones; thus, they differed in resolution and size. Accordingly, to increase the study dataset size, the obtained images, originally up to 3024 × 4032 pixels, were sliced into images with a resolution of 224 × 224 pixels. In total, 5680 images were used as the training dataset for our model, all of which were labeled and categorized as efflorescence (n = 1382), spalling (n = 1386), cracking (n = 1551), and defacement (n = 1361) images (Figure 1). Additionally, 10% of the images in the dataset, selected at random, were used to form a validation dataset. The datasets can be viewed at the public website: To prevent overfitting, this study applied a variety of image augmentation processes, namely rescaling, rotation, and height and width shifts, to the training dataset; a sketch of such an augmentation pipeline is shown below.
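The following torchvision sketch illustrates the kind of augmentation pipeline described above; the specific rotation and shift ranges were not reported in this extract, so the values are placeholders.

```python
import torchvision.transforms as T

# Training-time augmentation analogous to the described rescaling, rotation,
# and height/width shifts (ranges below are illustrative assumptions).
train_transform = T.Compose([
    T.Resize((224, 224)),
    T.RandomRotation(degrees=15),
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # width/height shift
    T.ToTensor(),                                     # rescales pixel values to [0, 1]
])
```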
Method for Automated Defect Detection.
This study used a modified model as the feature extractor (Figure 2) and applied fine-tuned transfer learning to an ImageNet-pretrained VGG-16 network [62]. In this transfer learning approach, the network is first trained on a large dataset so that it acquires a basic ability to recognize objects; subsequently, the classification layers of the network are replaced with the required categories to make the network more robust.
This study used VGG-16 because it is powerful yet has a simple architecture with relatively few layers.
This architecture comprises five convolutional layer blocks with max pooling for feature extraction, followed by three fully connected layers and a final 1 × 1000 Softmax layer. In the CNN used here, the input comprises 224 × 224-pixel RGB images, and the first block consists of two convolutional layers with 32 filters, each of size 3 × 3. The second, third, and fourth convolution blocks use filters of sizes 64 × 3 × 3, 128 × 3 × 3, and 256 × 3 × 3, respectively. This simple architecture eases model modification for transfer learning and CAM while preserving the model's accuracy.
In the determination of hyperparameters, some default values were used directly and others were determined by testing and adjustment on the training data. The default values for the optimizer (SGD), momentum (0.9), and weight decay (5e-4) were used without modification [63]. The learning rate was explored in the range 0.001 to 0.01, and convergence was found to be more efficient at 0.01. Although many loss functions exist, the cross-entropy loss was used owing to the research objective of basic classification. The batch size is usually set to a power of 2, and 2^5 (i.e., 32) was chosen based on the system performance. To fine-tune the VGG-16 model, the initial four convolutional layer blocks were first used as the generic feature extractor, and the final 1 × 1000 Softmax layer was replaced with a 1 × 4 classifier (for efflorescence, spalling, cracking, and defacement). Finally, the newly modified model was retrained such that only the weights of the fifth convolutional block were updated during training.
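The following PyTorch sketch illustrates the transfer-learning setup described above (ImageNet-pretrained VGG-16, a 4-way classification head, frozen early blocks, and the stated SGD hyperparameters). It is a generic illustration under those assumptions, not the authors' implementation; in particular, the study's modified filter counts are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG-16 and replace the 1000-way classifier
# with a 4-way head (efflorescence, spalling, cracking, defacement).
model = models.vgg16(pretrained=True)
model.classifier[6] = nn.Linear(4096, 4)

# Freeze the first four convolutional blocks; in torchvision's layer ordering,
# these are the first 24 entries of model.features, so only the fifth block
# and the classifier are trained.
for layer in list(model.features)[:24]:
    for param in layer.parameters():
        param.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.01, momentum=0.9, weight_decay=5e-4,
)
criterion = nn.CrossEntropyLoss()   # 4-class cross-entropy objective
```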
CAM-Based Object Localization.
Problems in object localization differ from those in image classification. Algorithms can determine the class of image features or objects and detect and label the objects within the image usually by placing a rectangular bounding box, indicating the algorithm's confidence of existence [64]. Moreover, for a detected object, a neural network provides four numbers as the output; these numbers function to parameterize the aforementioned bounding box.
For the identification of discriminative regions in the image, CAM can be combined with classification-trained CNNs. In CAM, a map highlighting the image regions relevant to a specific class is obtained by reusing the CNN classifier layers, so as to obtain good localization results. In this study, the application of CAM to the current study model increased the accuracy of image localization. Figures 3(a) and 3(b) illustrate the loss and learning curves derived for our model on the training dataset. Epoch, presented on the horizontal axis in both curves, represents one training cycle in which the entire dataset was entered into the network. Therefore, when the loss curve presents a lower value, the probability of image recognition error is low, and when the learning curve presents a value close to 1.0, the model training accuracy is high. As indicated in Figure 3(a), at around the 50th cycle, the loss curve reached stable convergence, achieving good image recognition. As presented in Figure 3(b), model training remained in a good state. The training dataset included 5680 images, and the training involved 500 cycles. As shown in Figure 3, our model was well trained. Moreover, the accuracy for the optimal training dataset was 86%, with a final loss of 0.0576 at the end of the 500th cycle of training; no model overfitting was identified during training. As presented in Table 1, the model's accuracy rates for efflorescence, cracking, and defacement were 91%, 86%, and 98%, respectively, but that for spalling was only 76%.
Defect Localization Using CAM.
To further analyze the reasons why the accuracy rate for spalling was low, we visualized the dataset by applying CAM, a computationally inexpensive method. In the resulting images (Figure 4), large network responses are indicated in red. Figure 4 shows the focus of the various artificial neural networks.
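A minimal sketch of the CAM computation is given below. It assumes a classification head of the global-average-pooling plus linear-layer form used in the original CAM formulation; the study's exact head is not detailed in this extract.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a class activation map as a weighted sum of the final
    convolutional feature maps.

    feature_maps : array of shape (C, H, W) from the last conv layer
    class_weights: array of shape (C,), the weights connecting the
                   global-average-pooled features to the target class
    """
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0)             # keep positive class evidence only
    if cam.max() > 0:
        cam = cam / cam.max()            # normalize to [0, 1] for overlaying on the image
    return cam
```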
Next, a confusion matrix was examined, which showed that 94.44% and 5.56% of these images presented mosaic tiles and lath bricks, respectively. Because of the small unit area of mosaic tiles, as these tiles fell, they left dirty, black stains behind. Moreover, during the process of capturing images of sample areas, trees may have blocked the light and created shadows (Figure 6). Similarly, lighting problems during image capture were the reason for the misclassification of efflorescence as spalling (Figure 7, red circles). Thus, when sunlight was too bright or when the spalling pattern was irregular, the model misclassified efflorescence as spalling during model training (Figure 8). Finally, some cracking was also misclassified as spalling during model training (Figures 9 and 10, red circles).
Conclusions
This study combined a UAV with a deep learning model for automated detection of external wall tile deterioration of buildings and made modifications to improve the efficiency of the method. The results indicated that our model had high accuracy and recall, the respective rates of which were 91% and 80% for efflorescence, 76% and 100% for spalling, 86% and 86% for cracking, and 98% and 78% for defacement (Table 1). Compared with traditional detection methods, the use of UAVs is inexpensive and affords higher mobility, efficiency, and safety. However, UAV efficiency can be affected by the climate, lighting, wind, and blind spots in the test area, as well as by the limitations of UAV operational technology. In the future, these limitations may be overcome through the use of relatively robust camera lenses, sensors, systems, and automation technologies, making UAVs safer and more efficient and increasing their application in the field of construction.
In the current study, the recognition accuracy for spalling was slightly low, indicating some limitations in spalling recognition from the existing images. Therefore, in future studies, the use of infrared scanners, which detect differences in depth and can recognize whether tiles have fallen, is highly recommended to improve recognition accuracy. Besides using more data, a deeper network can also be considered; a deeper network can identify more detailed characteristics and improve accuracy. Moreover, to identify multiple defect types simultaneously, multiple labels can be assigned within an image together with corresponding loss functions. For normal photos (without deterioration), the model would assign relatively low probabilities of belonging to each of the four deterioration types. Two methods are considered to further improve the model's applicability: (1) setting a basic threshold in the model, so that input photos scoring below the threshold are classified as background (not belonging to the four types of deterioration); and (2) taking photos of normal exterior wall tiles, equal in number to the single-deterioration photos, as a background (fifth) type and then retraining the model.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 3,893.4 | 2021-04-19T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Hypofractionated image-guided radiotherapy for the treatment of acoustic neuromas: A dosimetrically acceptable alternative to stereotactic radiosurgery in a resource-constrained environment
Historically, surgery was considered the treatment of choice, but the introduction of stereotactic radiosurgery in the latter half of the 20th century allowed for significantly lower levels of morbidity and similar or better levels of tumour control.3 Radiosurgery is less invasive than surgery and can be performed on outpatients, making it the preferred choice for many. Radiosurgery is commonly used as the favoured treatment option for small to medium ANs.4
Introduction
Acoustic neuromas (ANs) or vestibular schwannomas are benign neoplasms associated with the vestibular cranial nerve. Although benign, untreated lesions may cause symptoms because of local nerve compression of the auditory, vestibular, facial or, less commonly, the trigeminal nerve.1,2 Very large lesions may become life-threatening because of brainstem compression.
There are currently 139 countries classified as low- and middle-income countries (LMICs) by the World Bank. An analysis revealed that only four LMICs had the required number of linacs for radiotherapy treatment: 55 of the 139 LMICs had no access to radiotherapy treatment at all (40 in Africa), while 80 had some level of access.6 Over 60% of linacs in Africa are situated in South Africa and Egypt.7 Even in South Africa, it is estimated that only 69.7% of the required number of linacs are installed, and that an additional 56 will be required by 2020.6 Very few linacs in South Africa are modified for radiosurgery treatment.
A survey performed at Groote Schuur Hospital (Cape Town, South Africa) in 2013 to determine the level of access to radiosurgery in South Africa found that there were no Gamma Knife or CyberKnife units in the country.8 Proton-based radiosurgery was introduced to the country in 1993, followed by the first linac-based radiosurgery in 1994. More than 2300 patients were treated between 1993 and 2012. Proton-based radiosurgery was used for 22.6% of patients, with 71.4% treated on five modified linacs operating within the private hospital sector. Although 84% of the South African population are treated in state facilities, with no access to private medical facilities, only 6% of the total number of radiosurgery treatments were performed within state sector facilities because of limited resources.8,9 Groote Schuur Hospital (GSH) has offered limited radiosurgery to state patients since 2008, treating 25 between 2008 and 2012 using a second-hand Radionics cone-based radiosurgery system. This broke down and became obsolete in 2012, and a new replacement system was not financially possible. This meant that state patients had no access to radiosurgery facilities unless they could afford treatment in the private sector. Consequently, alternative treatment options had to be investigated.
The acquisition of a 6 MV Varian Unique™ linac equipped with advanced megavoltage-based image-guided radiotherapy (IGRT) software allowed for the implementation of hypofractionated IGRT using non-coplanar dynamic conformal arc (DCA) fields. The decision was taken to utilise this technique for the treatment of small to medium ANs. This study aimed to analyse the radiotherapy treatment plans and quality assurance results for this patient group retrospectively in order to compare these with published data for traditional radiosurgery techniques. In this way, the viability of this approach as an alternative treatment option in under-resourced settings could be investigated. The study was approved by the Human Research Ethics Committee of the University of Cape Town (HREF 104/2016).
Equipment
The Varian Unique™ is a 6 MV photon-beam linac equipped with a 120 Millennium multileaf collimator (MLC) allowing 5 mm resolution for the central leaves. It is equipped with a high-stability Exact-IGRT couch top. Imaging is undertaken using the PortalVision aS1000 amorphous silicon electronic portal imager with a pixel size of 0.392 mm.10 IGRT is performed through advanced 2D-2D image matching, allowing for automatic repositioning of patients in the longitudinal, lateral and vertical directions. Reported positioning accuracy is within 1.5 mm, with improved results demonstrated with repeated imaging.10 Isocentric stability on the linac is reported to be < 0.6 mm on average, with a maximum deviation of < 1.8 mm.10 The Unique allows for DCA and RapidArc™ (RA) radiotherapy.
Patient population and treatment planning
Fifteen AN patients were treated between February 2013 and January 2016 (eight left- and seven right-sided lesions). All of these patients presented with at least moderate hearing loss on the affected side, with nine having no functional hearing. All were immobilised using a Green Clarity™ five-point head and shoulder mask fitted to the CIVCO™ carbon fibre S-frame overlay board. CT and MRI images were fused for all of the patients, and treatment planning was performed on the Varian Eclipse™ treatment planning system (v11.0) with the analytical anisotropic algorithm (AAA) using a 2 mm calculation grid size. A gross target volume (GTV) to planning target volume (PTV) margin of 2 mm was applied for all patients to allow for the higher expected level of positional uncertainty when compared to traditional radiosurgery treatment options. The mean GTV was 1.6 cc (range 0.6 cc to 9.3 cc), resulting in a mean PTV of 4.0 cc (range 2.3 cc to 15.6 cc). Despite the GTV to PTV margin expansion resulting in a mean increase in volume of 2.3 cc (± 1.36), only 1.1 cc (± 1.01) of this involved brain parenchyma; the remaining margin involved primarily bone and air.
All patients were treated with three non-coplanar DCA fields in three fractions to a total of 19.2 Gy (6.4 Gy × 3 Rx), with one exception, where dose reduction was applied because of tumour size and critical structure constraints (Patient 14; 6.0 Gy × 3 Rx). The prescription dose was planned to coincide with the 80% isodose line. Plans were then renormalised to display coverage of the PTV by the 100% isodose line.
Treatment plan evaluation
Because of the high dose per fraction and the critical location of lesions treated with radiosurgery, plans require a high level of conformity and fast dose fall-off. Several dosimetric indices have been proposed for comparing different radiosurgery plans and modalities, and for determining compliance with protocols.
As a first step in the analysis, all plans were evaluated against the Radiation Therapy Oncology Group (RTOG) conformity index (CI RTOG), along with the homogeneity index (HI RTOG) and quality of coverage (Q RTOG), in order to verify radiosurgery protocol compliance.11

The CI RTOG is given by CI RTOG = V RI / TV, where V RI is the volume encompassed by the prescription isodose, also known as the prescription isodose volume (PIV), and TV is the target volume. The RTOG defines plans with a conformity index between 1.0 and 2.0 as not deviating from protocol, plans with a CI RTOG between 2.0 and 2.5 or between 0.9 and 1.0 as having minor deviations, and plans with a CI RTOG value greater than 2.5 or less than 0.9 as having major deviations from protocol.11,12

The HI RTOG is defined as HI RTOG = I max / RI, where I max is the maximum dose in the target and RI is the prescription or reference isodose. Protocol compliance is defined as HI RTOG ≤ 2.0, with an HI RTOG between 2.0 and 2.5 defined as a minor and HI RTOG > 2.5 as a major deviation.11,12

Q RTOG evaluates how well the required dose covers the target and is defined as Q RTOG = I min / RI, where I min is the minimum dose received by the target. Protocol compliance requires 90% coverage, with 80% and < 80% coverage defined as minor and major deviations, respectively.11,12

Alternative indices for plan comparison have been proposed because of the risk of 'false scores' provided by the more simplistic RTOG indices. As a second step, all plans were analysed using the Paddick conformity and gradient indices (CI Paddick and GI Paddick) and the dose heterogeneity index (D H).13,14,15

The Paddick conformity index (CI Paddick) allows for the simultaneous assessment of conformity and quality of coverage and is defined as CI Paddick = (TV PIV)² / (TV × V RI), where TV is the target volume, TV PIV is the target volume covered by the PIV, and V RI is the total volume covered by the prescription isodose. It is inversely related to the RTOG conformity index.12,13,15 An ideal plan would result in a CI Paddick of 1.0, with a conformity index of ≥ 0.6 considered acceptable.16

The Paddick gradient index (GI Paddick) attempts to reflect how fast the dose falls off beyond the target in order to spare normal tissue and critical structures. It is defined as GI Paddick = V 50%RI / V RI and evaluates the volume of tissue receiving half of the prescription isodose in relation to the volume receiving the prescription isodose.14,15

The dose heterogeneity index (D H) evaluates the dose variation within the tumour volume; an index of zero indicates a homogeneous dose distribution.
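For illustration, the indices above can be computed from a plan's volumes and doses as in the following sketch; the numbers in the usage example are placeholders, not values from this study.

```python
def rtog_indices(piv, tv, i_max, i_min, ri):
    """RTOG radiosurgery indices from plan volumes (cc) and doses (Gy)."""
    return {
        "CI_RTOG": piv / tv,    # prescription isodose volume / target volume
        "HI_RTOG": i_max / ri,  # maximum target dose / prescription dose
        "Q_RTOG":  i_min / ri,  # minimum target dose / prescription dose
    }

def paddick_indices(tv, piv, tv_piv, piv_half):
    """Paddick conformity and gradient indices."""
    ci = tv_piv**2 / (tv * piv)  # combines target coverage and selectivity
    gi = piv_half / piv          # half-prescription volume / prescription volume
    return {"CI_Paddick": ci, "GI_Paddick": gi}

# Placeholder example: PTV 4.0 cc, PIV 5.0 cc, 3.8 cc of the PTV covered,
# 50%-isodose volume 15 cc, prescription 19.2 Gy.
print(rtog_indices(piv=5.0, tv=4.0, i_max=24.0, i_min=19.2, ri=19.2))
print(paddick_indices(tv=4.0, piv=5.0, tv_piv=3.8, piv_half=15.0))
```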
Normal tissue and critical structure analysis
Radiosurgery and hypofractionated radiotherapy involve a much higher dose per fraction than conventional radiotherapy. Given the increased risk of late normal tissue complications, extra care must be taken to protect normal brain tissue and organs in close proximity to the tumour. Caution should also be exercised when converting tissue tolerance data based on conventional fractionation and applying it to hypofractionated treatments, because of the limitations of the linear quadratic model.17 For ANs, critical structures to consider include normal brain parenchyma, the brainstem, the optic nerve or chiasm, and the cochlea. Maximum doses were recorded for the brainstem, optic pathway and cochlea, and were evaluated based on the normal tissue tolerance guidelines from the AAPM TG101 report.18,19 The risk of radiation-induced necrosis of brain tissue was estimated by evaluating the volume of tissue receiving a 12 Gy (V12) or 10 Gy (V10) single fraction equivalent (SFE) dose. The linear quadratic model with α/β = 2 was used to convert the 12 Gy SFE and 10 Gy SFE doses to three-fraction dose levels (12 Gy Eq = 19.65 Gy in 3 Rx; 10 Gy Eq = 16.21 Gy in 3 Rx).19 It is worth noting, however, that because ANs are not central lesions, much of the dose spill falls outside of brain parenchyma. There is no consensus as to whether the V12/V10 volumes should include all tissue, be limited to brain tissue, or whether or not the target volume should be included or excluded.17 Three different V12/V10 dose volumes were therefore assessed in this study: (1) tissue V12/V10 including GTV, (2) brain V12/V10 including GTV, and (3) brain V12/V10 excluding GTV (unaffected brain).
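The single-fraction-equivalent conversion quoted above can be reproduced with the linear quadratic model as in the following sketch, which solves the BED-matching quadratic and returns the three-fraction totals of about 19.65 Gy and 16.21 Gy for α/β = 2.

```python
import numpy as np

def equivalent_total_dose(d_single, n_frac, alpha_beta=2.0):
    """Total dose in n_frac fractions with the same biologically effective
    dose (BED) as a single fraction of d_single Gy (linear quadratic model)."""
    bed = d_single * (1.0 + d_single / alpha_beta)
    # Solve n * d * (1 + d / alpha_beta) = BED for the dose per fraction d.
    a, b, c = n_frac / alpha_beta, n_frac, -bed
    d = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    return n_frac * d

print(round(equivalent_total_dose(12.0, 3), 2))  # ~19.65 Gy in 3 fractions
print(round(equivalent_total_dose(10.0, 3), 2))  # ~16.21 Gy in 3 fractions
```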
Patient-specific quality assurance and positioning
Film dosimetry was performed on every patient plan prior to treatment delivery to compare predicted and delivered dose.Kodak EDR 2 film was used for coronal plane verification of the composite plan, including the couch, gantry and collimator rotations of the original plan.Film dose calibration was verified through absolute dose measurements performed in a custom designed cylindrical water phantom with a PTW Freiburg GmbH 0.016 cc pinpoint ionisation chamber.PTW Verisoft TM (V5.1) software was used to perform a gamma analysis, adopting dose difference (DD) and distance to agreement (DTA) criteria of 3% and 2 mm.
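As an illustration of the 3% DD / 2 mm DTA criterion, the following toy one-dimensional gamma calculation shows how a pass rate could be computed; it is a simplified sketch, not the PTW Verisoft implementation, and the synthetic dose profiles are assumptions.

```python
import numpy as np

def gamma_pass_rate_1d(ref_dose, eval_dose, positions, dd=0.03, dta=2.0):
    """Toy 1D global gamma analysis: for each reference point, gamma is the
    minimum over evaluated points of sqrt((dose diff / DD)^2 + (dist / DTA)^2);
    a point passes if gamma <= 1. DD is taken relative to the reference maximum."""
    dd_abs = dd * ref_dose.max()
    passed = 0
    for x_r, d_r in zip(positions, ref_dose):
        g2 = ((eval_dose - d_r) / dd_abs) ** 2 + ((positions - x_r) / dta) ** 2
        if np.sqrt(g2.min()) <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref_dose)

x = np.linspace(0, 50, 251)                      # positions in mm
ref = np.exp(-((x - 25) / 10) ** 2)              # synthetic reference profile
meas = 1.01 * np.exp(-((x - 25.4) / 10) ** 2)    # slightly shifted/scaled "measurement"
print(gamma_pass_rate_1d(ref, meas, x))          # pass rate in percent
```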
Individual patient position verification was performed by monitoring the positional shifts required during each setup, and by the final position as recorded on the Varian Portal Vision Advanced Imaging module.
Statistical analysis
All GSH DCA results were analysed to determine deviations from the RTOG radiosurgery criteria. The GSH DCA CI Paddick, GI Paddick and D H results were then compared to the Gamma Knife Perfexion TM, Novalis TM DCA, Novalis TM intensity modulated radiotherapy (IMRT) and CyberKnife TM results as published by Gevaert et al. 15 A one sample t-test compared the GSH mean results for each index to a hypothetical mean (the published values for each index) for each technique.
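A one-sample t-test of this kind can be reproduced with standard statistical software; the sketch below uses SciPy with placeholder CI Paddick values and an illustrative published mean, neither of which are the study's actual data.

```python
from scipy import stats

# Hypothetical per-patient CI_Paddick values for the GSH DCA plans (placeholders,
# not the study's data), compared against a published mean for another technique.
gsh_ci_paddick = [0.62, 0.71, 0.68, 0.75, 0.66, 0.70, 0.73, 0.64]
published_mean = 0.77   # e.g. a value reported for another platform (illustrative)

t_stat, p_value = stats.ttest_1samp(gsh_ci_paddick, popmean=published_mean)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```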
Radiation Therapy Oncology Group radiosurgery criteria
Analysis of the CI RTOG , HI RTOG and Q RTOG values for all patients revealed only one deviation from protocol (Patient 14).This occurred where dose reduction was applied and tumour coverage compromised in order to comply with brainstem dose constraints.The remaining plans were fully compliant with RTOG radiosurgery criteria (Figure 1).
Study comparison: CI Paddick , GI Paddick , D H
All techniques complied with the minimum published CI Paddick criteria of ≥ 0.6 (Figure 2). The GSH DCA CI Paddick results compared well to those from the Novalis DCA and Novalis IMRT, but were inferior to the Universitair Ziekenhuis Brussel (UZB) published Gamma Knife and CyberKnife results (p < 0.0001). 15,16 The largest variation in CI Paddick values was observed for small volumes, with a trend towards better conformity for larger lesions. This finding was not statistically significant.
The results for the GSH DCA gradient index were inferior to all published techniques (Figure 3), with the Novalis IMRT technique being closest, but still superior to the GSH DCA technique ( p = 0.0003).
With regard to conformity, the gradient index appeared to improve with lesion size.This finding was not statistically significant.
Homogeneity was similar to that reported for the Novalis IMRT and CyberKnife, and proved to be superior to that obtained with the Gamma Knife ( p < 0.0001) and Novalis DCA ( p = 0.0002).
Normal tissue and critical structure analysis
All critical structures were analysed against the AAPM TG101 criteria, with the tolerances for three fraction treatments listed (Table 1). 18,19 Two plans had brainstem volumetric doses exceeding 18 Gy in three fractions. The risk of inducing radiation necrosis was assessed using V12 Eq and V10 Eq data for tissue, brain (including the GTV), and unaffected brain (excluding the GTV). Risk levels from two publications are superimposed in Figure 4. 20,21 An increased number of data points were observed above the risk lines for the V10 Eq volume analysis compared to the V12 Eq analysis. This indicates that the GSH DCA technique may be more sensitive to the V10 Eq data, and that analysis of the V12 Eq data in isolation may therefore result in under-reporting of the risk of inducing radionecrosis.
Patient-specific quality assurance and patient positioning
Film dosimetry results for the coronal plane showed excellent agreement, with a mean gamma analysis pass rate of 99.8% (± 0.4), with criteria set at 3% DD and 2 mm DTA. The lowest pass rate was 98.7%.
Patient positional shifts were recorded daily.Four treatments required an initial shift of 3 mm and one a 5 mm shift in the longitudinal direction.All other shifts were 2 mm or less, with final mean positional deviation in all three directions < 0.01 cm ± 0.03.
Discussion
Best practice options for radiotherapy as defined in high-income countries are frequently unavailable in LMICs. With population growth expected in many already heavily constrained LMICs, there is both a dire need and an obligation to innovate in order to remove the hurdles to effective care delivery, and to expand available capacity. 22 By reviewing patients with small to medium ANs, this study sought to investigate the viability of adopting hypofractionated IGRT using a non-coplanar DCA technique as an alternative treatment approach in under-resourced settings.
The GSH DCA technique was compliant with the RTOG radiosurgery criteria with the exception of a single large-volume treatment, where dose reduction and decreased tumour coverage were accepted in order to meet critical structure tolerances. The DCA technique also compares favourably with other linac-based techniques for conformity and homogeneity, allowing compliance with minimum standards. This is an important finding given the lack of radiotherapy treatment options in many parts of the world. 5,6,7,8 The gradient index results indicate that the dose fall-off is not as steep as that obtained with traditional radiosurgery techniques, as is expected from the larger penumbra obtained with the 5 mm MLC leaves relative to micro-multileaf collimators or divergent cones. This results in a greater volume of tissue being included in the V10 Eq, thereby potentially increasing the risk of inducing radiation necrosis. It also results in the V10 Eq becoming a more sensitive predictor for radiation necrosis than the V12 Eq. However, because of the non-central location of the AN lesions, only 7.36 cc (± 4.47) out of the total of 11.01 cc (± 5.34) of V10 Eq mean volume actually involved brain parenchyma and target. If the actual GTV is excluded from this V10 Eq evaluation, the volume further decreases by a mean of 1.6 cc. The remaining V10 Eq dose was deposited in bone, skin and air. This finding may suggest that there is a decreased risk of radiation necrosis for AN patients when compared to radiosurgery patients with other centrally located lesions, where most (or all) of the V12 and V10 volumes are within the brain parenchyma.
Doses to the brainstem and optic system or chiasm were well controlled, but 13 patients did receive a cochlea dose exceeding the recommended maximum tolerance.This may or may not be clinically significant depending on the degree of hearing loss prior to radiosurgery.Clinical follow-up will be required to investigate the potential impact of these doses on the patient group.
The patient-specific quality assurance results indicate a good correlation between the predicted and delivered dose.Variation in final patient position did not exceed 1 mm, thus confirming that the 2 mm GTV to PTV expansion margin was sufficient to allow for the increased inaccuracy in position when compared to traditional radiosurgery techniques.
Conclusion
The GSH DCA technique allows for dosimetrically acceptable treatment of small to medium ANs where hearing preservation is not a clinical factor.In addition, this technique compares favourably with other mainstream radiosurgery techniques when accepted indices are calculated.Care must be taken with critical structures and normal brain parenchyma, with specific reference to the V10 Eq volumes.Clinical follow-up is required for the GSH DCA patients to determine long-term treatment outcomes.
FIGURE 3 :
FIGURE 3: Paddick gradient index comparison between the Groote Schuur Hospital dynamic conformal arc technique and published results from Universitair Ziekenhuis Brussel for CyberKnife, Novalis intensity modulated radiotherapy, Novalis DCA and Perfexion GK techniques. 15
FIGURE 1 :
FIGURE 1: Radiation Therapy Oncology Group radiosurgery criteria results for the Groote Schuur Hospital dynamic conformal arc technique.
FIGURE 2 :
FIGURE 2: Paddick conformity index comparison between the Groote Schuur Hospital dynamic conformal arc technique and published results from Universitair Ziekenhuis Brussel for CyberKnife, Novalis intensity modulated radiotherapy, Novalis DCA and Perfexion GK techniques.
FIGURE 4 :
FIGURE 4: Volume of tissue, brain (including gross target volume) and unaffected brain (excluding gross target volume) receiving 12 Gy Eq and 10 Gy Eq .
TABLE 1 :
Critical structure doses for Groote Schuur Hospital dynamic conformal arc technique. Generated from Groote Schuur Hospital data. †, data points exceed recommended maximum dose levels. Source: | 4,230.8 | 2017-06-01T00:00:00.000 | [
"Medicine",
"Physics"
] |
Target Verification via Novel Adaptive Segmentation Used to Detect and Track Moving Objects
This is an original study integrating the utmost (extreme sport) activities with academic research. In this research, a new image segmentation (NIS) method is presented to search object information, via a global histogram, for initial target verification in the region of interest (ROI) area. Adaptive singular value decomposition (ASVD) is then combined with it to suppress lighting variation in color images. An HSV color model is integrated with computer vision techniques to fit dynamic environments for object detection. In addition, several tracking algorithms are applied to estimate and track the activity data. Experimental results show that the objects can be successfully detected and tracked across the image sequences, and that the tracking rate of the Kalman filter (HSV) is better than that of the other algorithms.
Introduction
A widely used family of segmentation methods integrates features such as brightness, color, and texture over local image patches and then clusters those features based on, for example, mode-finding, graph partitioning, or fitting mixture models. The multi-label, interactive image segmentation algorithm is formulated in discrete space using combinatorial analogs of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs [1]. Besides, the contour detector combines multiple local cues into a globalization framework based on spectral clustering. The state-of-the-art segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical regional tree [2]. On the other hand, color detection is a primary aspect of many computer vision applications, including face detection, illicit content detection and other related applications. The HSV (Hue, Saturation and Value) color space distribution and part of the computer vision technique are applied to detect the object after initial target verification in this paper [3]- [8].
The Camshift algorithm, Kalman filter, frame difference, optical flow and particle filter are often demonstrated in the tracking field. The Camshift algorithm is able to handle dynamic distributions by readjusting the search window size for the next frame based on the moments of the current frame's distribution. It is a variation of the Meanshift algorithm, based on the same principles, that also accounts for dynamically changing distributions [5] [9]. The frame difference method recognizes the motion of objects by comparing two given images and identifying pixels belonging to the same object [10] [11]. Besides, the Kalman filter is widely used in the field of moving object tracking in sequential images and has numerous applications: vehicles, surveillance cameras, navigation, perceptual user interfaces, tracking objects (missiles, faces, heads, hands) and many other computer vision applications [12] [13] [14].
The activities of the Laser sailboat, Paragliding, Windsurfing and Kitesurfing featured in our experiment are described as follows. The Laser sailboat is a popular small dinghy; the boat may be sailed by either one or two people. It was designed by Bruce Kirby (Canadian) and emphasizes simplicity and performance. Paragliding uses a lightweight, free-flying, foot-launched glider aircraft with no rigid primary structure; it is a recreational and competitive adventure sport of flying. Windsurfing uses the wind to move forward, while surfing uses the force of waves; it is a sailing activity in which a board is powered across the water by the wind. Finally, Kitesurfing is a recreational and adventure sport; it is a surface water sport combining aspects of snowboarding, windsurfing, Paragliding, and gymnastics into one extreme sport.
Novel Adaptive Segmentation (NAS)
The novel adaptive segmentation (NAS) combines both adaptive singular value decomposition (ASVD) to reduce the effect about the light variation, and new image segmentation (NIS) for the region of interest (ROI) area, which is utilized to confirm the initial target verification.
Adaptive Singular Value Decomposition (ASVD)
To reduce the effect of the light variation in object tracking, adaptive singular value decomposition was utilized in every individual color channel of RGB [15].
The color image is composed of the R, G and B color channels. X_D is an original color image channel, D ∈ {R, G, B}, with resolution m × n. The singular value decomposition of a color image channel is shown below:

X_D = U_D Σ_D V_D^T, (1)

where Σ_D is the singular value matrix, and U_D and V_D are orthogonal matrices.

To reduce the effect of the light variation in target identification, the singular value matrix was multiplied by the weighted compensation coefficient ξ_D to deal with low contrast problems from varying light. The ASVD formula is shown in Equation (2):

X_D^E = U_D (ξ_D Σ_D) V_D^T, (2)

where X_D^E is the image after adaptive lighting compensation and ξ_D is the weighted compensation coefficient of color channel D, derived from Equations (3) and (4).
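As a concrete illustration of the channel-wise compensation described above, the following numpy sketch rescales the singular values of each RGB channel; because Equations (3) and (4) are not reproduced here, the choice of ξ_D (a simple brightness-ratio scaling toward an assumed target mean) is an assumption rather than the authors' formula.

```python
import numpy as np

def asvd_channel(channel, target_mean=150.0):
    """Rescale the singular values of one color channel to compensate lighting.

    The compensation coefficient xi is assumed here to be the ratio of a
    target mean brightness to the channel's current mean brightness; the
    original paper derives xi from its Equations (3) and (4)."""
    u, s, vt = np.linalg.svd(channel.astype(float), full_matrices=False)
    xi = target_mean / max(channel.mean(), 1e-6)   # assumed coefficient
    compensated = u @ np.diag(xi * s) @ vt
    return np.clip(compensated, 0, 255).astype(np.uint8)

def asvd_rgb(image):
    """Apply the per-channel ASVD compensation to an RGB image (H x W x 3)."""
    return np.dstack([asvd_channel(image[..., c]) for c in range(3)])

# Example on a synthetic, dimly lit image.
rng = np.random.default_rng(0)
dark = (rng.random((120, 160, 3)) * 80).astype(np.uint8)
print(asvd_rgb(dark).mean())   # mean brightness pulled toward the target
```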
New Image Segmentation (NIS)
A new image segmentation method that segments the region by a global threshold is utilized, providing a new way to segment the ROI (region of interest) area. It is rapid and efficient at separating foreground and background data to obtain approximate object data for preliminary target recognition. The derived thresholding rule is defined piecewise (with a separate case when u < max2), where u is the mean value of the image; max1 and max2 are the mean values in the areas below and above u, respectively; g(x, y) is the value of the image at coordinate (x, y); th denotes the threshold value used to extract the object area from the image; and σ is the weight of the threshold, for which the optimal value of 0.5 is chosen in our experiment. Figure 2 shows verification images of the utmost activities obtained by the NAS. In addition, salt-and-pepper noise is tested; 30% noise is the limit in our trial for verifying targets, as in Table 1.
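Since the exact piecewise threshold equation is not reproduced above, the sketch below only illustrates the general idea of a mean-based global threshold with weight σ = 0.5; the specific combination of u, max1 and max2 used here is an assumption, not the paper's rule.

```python
import numpy as np

def nis_like_threshold(gray, sigma=0.5):
    """Segment foreground from background with a mean-based global threshold.

    u is the global mean; max1/max2 are the mean values of the pixels below
    and above u. The threshold is assumed here to sit sigma of the way
    between those two class means (the paper's exact rule may differ)."""
    u = gray.mean()
    below, above = gray[gray < u], gray[gray >= u]
    max1 = below.mean() if below.size else u
    max2 = above.mean() if above.size else u
    th = max1 + sigma * (max2 - max1)     # assumed combination
    return (gray >= th).astype(np.uint8)  # 1 = candidate object region

rng = np.random.default_rng(1)
gray = (rng.random((100, 100)) * 255).astype(np.uint8)
mask = nis_like_threshold(gray)
print(mask.mean())   # fraction of pixels labelled as foreground
```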
Comparison of NAS with Other Segmentation
We refer to the D. Martin segmentation benchmark and evaluate the results when choosing an optimal scale for the entire dataset (ODS) or per image (OIS), as well as the average precision (AP), on the Berkeley segmentation database. Figure 3 and Table 2 show the results of our proposed NAS and the comparison to relevant segmentation methods in the Berkeley segmentation database. Besides, we randomly selected portrait images of 100 people to test the NAS on the FEI, FERET_550, CMU-PIE68 (128) and Multi_Pie_session01 (128) datasets, respectively. The comparison is shown in Table 3.
Object Detection
The target detection stages include HSV color model, computer vision technique, optimal parameter assignment, and comparison of ASVD with other light compensation.
HSV Color Model
The HSV color space identifies a particular color as a combination of hue, saturation and value. In the single-hexcone model of this color space, the outer edge of the top of the cone contains all the pure colors. The H parameter describes the angle around the wheel. The S (saturation) is zero for any color on the axis of the cone; the center of the top circle is white. An increase in the value of S corresponds to a movement away from the axis. The V (value or lightness) is zero for black. An increase in the value of V corresponds to a movement away from black and toward the top of the cone.
Optimal Parameter Assignment
The HSV color model is utilized in four types and five different environments.
The optimal hue value for blue is set between 0.61 and 0.667, the red hue value is chosen to be approximately zero, and the green hue value is set between 0.275 and 0.33 in our experiment. The above optimal parameter settings achieve complete object detection, as in Figure 5. Figure 6 shows the moving object detection rate, normalized by the maximum experimental data.
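A hue-window detector of this kind can be sketched as follows; the hue intervals are the ones quoted above, while the added saturation/value lower bound and all function names are assumptions for illustration.

```python
import colorsys
import numpy as np

# Hue windows quoted in the text (hue normalized to [0, 1]).
HUE_WINDOWS = {"blue": (0.610, 0.667), "red": (0.000, 0.030), "green": (0.275, 0.330)}

def hue_mask(rgb_image, color, min_sv=0.2):
    """Return a binary mask of pixels whose hue falls in the chosen window.

    min_sv is an assumed lower bound on saturation and value, added only to
    suppress near-grey pixels; the paper does not specify such a bound."""
    lo, hi = HUE_WINDOWS[color]
    mask = np.zeros(rgb_image.shape[:2], dtype=np.uint8)
    for i in range(rgb_image.shape[0]):
        for j in range(rgb_image.shape[1]):
            r, g, b = rgb_image[i, j] / 255.0
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            if lo <= h <= hi and s >= min_sv and v >= min_sv:
                mask[i, j] = 1
    return mask

rng = np.random.default_rng(2)
img = (rng.random((40, 40, 3)) * 255).astype(np.uint8)
print(hue_mask(img, "blue").sum(), "blue-hued pixels detected")
```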
Comparison of ASVD with Other Light Compensation
The other light compensation methods are adaptive singular value decomposition in the two-dimensional discrete Fourier domain (ASVDF) for lighting detection, and a novel illumination compensation method called adaptive singular value decomposition in the 2D discrete wavelet domain (ASVDW), both of which can be used to overcome the lighting variation problem [17] [18]. The detection performance of ASVD is better than that of ASVDF and ASVDW on the normalized experimental data, as in Figure 7.
Object Tracking
The object tracking stage includes the Kalman filter, the Camshift algorithm and the frame difference method. These are applied to track the sail of the Laser sailboat and Windsurfing rig, the wing of the Paraglider and the kite of the Kitesurfing gear. We utilize these algorithms to track moving objects and compare different features in our experiment. Their theory is briefly described below.
Camshift Algorithm
Kalman Filter Algorithm
The Kalman filter is based on the optimal recursive data processing algorithm and performs the restrictive probability density propagation.It is a set of mathematical equations which establishes an efficient computational (recursive) means to estimate the state of a process in several aspects [19] [20].It calculates estimations of past, present, and future states, and it can do the same even when the precise nature of the modeled system is unknown.
The Kalman filter estimates a process by using a form of feedback control.
The filter estimates the process state at some time and then obtains feedback in the form of noisy measurements.The equations of the Kalman filters fall into two groups.
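The following is a minimal constant-velocity Kalman filter sketch for tracking a detected object centroid; the state model, noise levels and class names are illustrative assumptions, not the configuration used in the experiments.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter for a 2D object centroid.

    State x = [px, py, vx, vy]; only the position is measured. The process
    and measurement noise levels are illustrative, not taken from the paper."""
    def __init__(self, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)  # motion model
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # measure position only
        self.Q, self.R = q * np.eye(4), r * np.eye(2)
        self.x, self.P = np.zeros(4), np.eye(4) * 100.0

    def step(self, z):
        # Time update: project the state and covariance forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Measurement update: correct with the observed centroid z = (px, py).
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                       # filtered position estimate

kf = CentroidKalman()
for t in range(5):
    print(kf.step((10 + 2 * t, 20 + t)))        # simple example track
```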
Frame Difference (Background Subtraction) Method
The core of the background subtraction algorithm is to utilize the difference between the current image and a background image to detect and recognize moving objects. It is a simple algorithm, but very sensitive to changes in the external environment and has poor anti-interference ability. This method can obtain the complete movement data and detect the moving object from the difference between the current image and the background image [21] [22] [23] [24]. The background subtraction is based on four principal steps, which are described below: 1) Pre-Processing First, spatial smoothing is utilized to eliminate device noise and remove various environmental factors under different light intensities. Another main factor is the data format used by the background subtraction model.
2) Background Modeling
This step uses the new video frame in order to calculate and update the background model.The background model should be robust against environmental changes in the background, but sensitive enough to identify all moving objects of interest.
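A minimal sketch of the frame-difference idea with a running-average background model is shown below; the learning rate and difference threshold are assumed values for illustration.

```python
import numpy as np

def detect_motion(frames, alpha=0.05, diff_threshold=25):
    """Frame-difference / background-subtraction sketch.

    The background is a running average of past grey-level frames (learning
    rate alpha), and pixels whose absolute difference from it exceeds
    diff_threshold are marked as moving; both parameters are assumptions."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        masks.append((diff > diff_threshold).astype(np.uint8))
        # Background modeling: slowly adapt to environmental changes.
        background = (1 - alpha) * background + alpha * frame
    return masks

rng = np.random.default_rng(3)
seq = [np.full((60, 80), 100, np.uint8) for _ in range(4)]
seq[2][20:30, 30:45] = 200                    # a bright "object" enters frame 2
print([m.sum() for m in detect_motion(seq)])  # moving pixels per frame
```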
Experimental Results and Discussion
We desire to reduce the effect of light variation before initial target verification. The tracking rate approaches 86.61% with the Kalman filter (HSV), which is better than the other algorithms. The Kalman filter (HSV) is very powerful, desirable and robust in cases when the object briefly loses tracking, as in Figure 10 and Table 4.
In our experiment, the Kalman filter (frame difference) is worse at tracking moving objects in the Windsurfing, Paragliding and Kitesurfing sequences and in complicated environments, due to weather causes: changes in the clouds and violent variation of the wind and waves. Moreover, airstreams influenced by the sunshine and strong wind cause the Paraglider to be unstable and dangerous. Gusty wind and abrupt wave vibration affect the position error in the moving direction of the sea activity.
Conclusion
A new image segmentation method is proposed, which rapidly and efficiently judges the approximate object data; ASVD is then utilized to suppress lighting variation in color images; and the HSV color model, several computer vision techniques and the Kalman filter (HSV) tracking algorithm prove suitable for moving object detection and tracking in dynamic environments in this research.
Figure 6 .
Figure 6.Detection rate of the utmost activities.
The principle of the Camshift algorithm is based on the principles of the Meanshift algorithm. It can be described by the following steps. First, set the region of interest in the probability distribution image of the entire image. Second, choose an initial location of the search window. Third, calculate a color probability distribution of the regional center of the search window. Next, iterate the Meanshift algorithm to find the centroid of the probability image, and store the zeroth moment (distribution area) and centroid location. Last, for the following frame, center the search window at the mean location found in the previous step, and set the window size to a function of the zeroth moment (then go to the third step). The Camshift algorithm flowchart is shown in Figure 8.
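The core mean-shift window update behind Camshift can be sketched on a color-probability (back-projection) image as follows; the Camshift-specific window rescaling from the zeroth moment is omitted, and all parameter values are illustrative.

```python
import numpy as np

def meanshift_window(prob, window, n_iter=10):
    """Shift a search window to the centroid of a probability image.

    prob is a 2D array of per-pixel color probabilities (a back-projection);
    window is (row, col, height, width). The Camshift extension would also
    rescale the window from the zeroth moment; here only the shift is shown."""
    r, c, h, w = window
    for _ in range(n_iter):
        patch = prob[r:r + h, c:c + w]
        m00 = patch.sum()                      # zeroth moment of the window
        if m00 == 0:
            break
        rows, cols = np.indices(patch.shape)
        dr = int(round((rows * patch).sum() / m00 - (h - 1) / 2))
        dc = int(round((cols * patch).sum() / m00 - (w - 1) / 2))
        if dr == 0 and dc == 0:
            break
        r = int(np.clip(r + dr, 0, prob.shape[0] - h))
        c = int(np.clip(c + dc, 0, prob.shape[1] - w))
    return r, c, h, w

prob = np.zeros((100, 100)); prob[60:70, 65:75] = 1.0   # blob of high probability
print(meanshift_window(prob, (40, 40, 30, 30)))          # window migrates onto the blob
```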
Figure 7 .
Figure 7.Comparison of detection rate of the ASVD and relevant studies in the Laser sailboat.
The equations fall into two groups: first, the time update equations and, second, the measurement update equations. The time update equations are responsible for projecting forward (in time) the current state and error covariance estimates to obtain the a priori estimate for the next time step. The measurement update equations are responsible for the feedback. For a linear, discrete-time system, the Kalman filter random-process estimation model, ongoing cycle and signal-flow graph are shown in Figures 9-11.
Figure 9 .
Figure 9.A complete picture of the operation of the Kalman filter.
Figure 10 .
Figure 10.The ongoing discrete Kalman filter cycle.
Figure 13 .
Figure 13. Comparison of the tracking rate of the Kalman filter (HSV) and relevant utmost activities.
Adaptive singular value decomposition (ASVD) is utilized in every individual color channel of RGB. The images demonstrate the color luminance value being reduced from 180 down to 150 to lessen the illumination variation effect, as in Figure 1. In addition, Figure 7 shows the performance of ASVD to be better than the other light compensation methods ASVDW and ASVDF. Second, the NIS is proposed to search the target region of interest (ROI) area, which is efficient and rapid in judging the object information for initial target verification. Besides, we compare NAS with other segmentation methods in the Berkeley segmentation database, in which an average score of 0.64 is achieved, better than most of the other image segmentation methods, as in Table 2. Moreover, salt-and-pepper noise and four kinds of face datasets are used in trials of NAS. With 30% salt-and-pepper noise as the limit, a 50% score from randomly chosen datasets of 100 frames shows that performance with light variation compensation is better than without it. Third, the HSV color model and several computer vision methods are applied to fit the dynamic environments for object detection. Four different utmost activities are utilized to test the above methods, and the moving object could be completely detected, as in Figure 5. Fourth, we apply several different tracking algorithms to track moving objects, which include the Laser sailboat, Windsurfer, Paraglider and Kitesurfer. In our experiment, the sail of the Laser sailboat and Windsurfing rig, the wing of the Paraglider and the kite of the Kitesurfing gear are assumed candidates in the video. Experimental results reveal that objects could be successfully tracked in the sequential images, and the performance of the accurate Kalman filter (HSV) is better than that of the other algorithms.
Table 2 .
Compared to relevant studies in the Berkeley segmentation database.
Table 3 .
Comparative results in the face dataset by NIS.
Labeling is a clustering operation that assigns the same label to connected pixels in the same area and different labels to pixels in different areas.
Table 4 .
Object tracking rate (%) of the utmost activities. | 3,416 | 2018-10-29T00:00:00.000 | [
"Computer Science"
] |
Oral and Periodontal Implications of Hepatitis Type B and D. Current State of Knowledge and Future Perspectives
Periodontitis is characterized by low-grade inflammation of the periodontal tissues, the structures that support and connect the teeth to the maxilla and mandible. This inflammation is caused by the accumulation of subgingival bacterial biofilm and gradually leads to the extensive damage of these tissues and the consequent loss of teeth. Hepatitis B is a major global health concern; infection with the hepatitis B virus causes significant inflammation of the liver and the possibility of its gradual evolution to cirrhosis. Hepatitis D, caused by infection with the delta hepatitis virus, is manifest only in patients already infected with the type B virus in a simultaneous (co-infected) or superimposed (superinfected) manner. The dental and periodontal status of patients with hepatitis B/D could exhibit significant changes, increasing the risk of periodontitis onset. Moreover, the progression of liver changes in these patients could be linked to periodontitis; therefore, motivating good oral and periodontal health could result in the prevention and limitation of pathological effects. Given that both types of diseases have a significant inflammatory component, common pro-inflammatory mediators could drive and augment the local inflammation at both a periodontal and hepatic level. This suggests that integrated management of these patients should be proposed, as therapeutical means could deliver an improvement to both periodontal and hepatic statuses. The aim of this review is to gather existing information on the proposed subject and to organize significant data in order to improve scientific accuracy and comprehension on this topic while generating future perspectives for research.
Introduction
The oral cavity hosts over 700 bacterial species, which usually co-exist in a harmonious state, called eubiosis when commensal bacteria do not allow harmful ones to trigger diseases [1]. These bacteria can also be found inside the gingival groove, or sulcus, a narrow space delimited by the tooth's surface and the gingiva [2]. If the gingival sulcus is not properly and periodically cleaned by professional and at-home methods, this will allow the emergence of highly pathogenic bacteria. Consequently, this subgingival pathogenic bacterial biofilm will cause periodontal inflammation (periodontitis) [3]. In other words, if the subgingival biofilm is left undisturbed for lengthy periods of time, allowing highly pathogenic bacteria to colonize, the conditions for the onset of periodontitis are met [4]. As a result, these bacteria and their toxins reach the gingival tissues, causing the inflammatory response that is characteristic of periodontitis [3]. This low-grade, local inflammation usually has a gradual evolution, generating a damaging setting due to acidosis and enzyme activation for crucial elements of the periodontium, such as collagen fibers [5]. These fibers are the main component of the periodontal ligament, the structure of which connects the tooth to the alveolar bone. If damaged, the ligament will contribute to the formation of periodontal pockets (deep areas along the tooth's root), which provide the ideal environment for more pathogenic bacteria [5]. Eventually, the alveolar bone itself is targeted by these bacteria and, under the effect of their collagenolytic enzymes and cellular damage, will begin to lose its normal size [6]. Consequently, the teeth will lose their support, increase their mobility, and finally be extracted, with significant consequences on the patient's life quality and general health [6]. In addition, it is important to highlight that it is not only natural teeth that can be affected by the inflammation of their supporting tissues but dental implants too. In this case, the disease is called "peri-implantitis" and can lead to increased implant mobility, poor bone integration and, in some cases, implant removal [7].
Extensive research performed during the last few decades has shown that the consequences of periodontitis go beyond the disruption of normal dental functions [8]. The periodontium is linked to the rest of the body by blood and lymphatic means [9]. As a result, every pathologic alteration in general homeostasis has the potential to affect periodontal health [5]. Periodontitis, on the other hand, can affect a patient's overall health, as well as the clinical presentation of specific conditions [5]. Researchers have investigated the bi-directional relationship between periodontitis and systemic health and sickness, leading to the formation of the "periodontal medicine" concept [10]. This concept incorporates and discusses the mutually influencing interactions that occur between periodontitis and systemic illnesses such as diabetes and cardiovascular disease [11,12]. Other significant correlations have been highlighted between periodontitis and autoimmune diseases such as rheumatoid arthritis and psoriasis [13,14]. In 2018, with the new classification of periodontal diseases, this concept gained clinical relevance, as certain systemic conditions were found to significantly modify the severity and rate of progression of periodontal diagnosis [15].
Hepatitis B is an infectious illness that damages the liver, caused by the hepatitis B virus (HBV) [16]. The virus is spread by contact with infected blood or bodily fluids [17]. In locations where the illness is widespread, infection around the time of birth or when in contact with other people's blood throughout infancy are the most typical ways of contracting hepatitis B [17]. In locations where the illness is uncommon, the most common sources of transmission are intravenous drug use and sexual contact. Working in healthcare, blood transfusions, dialysis, living with an infected person, and traveling to countries with high infection rates are also considered to be significant risk factors [18]. HBV is capable of causing both acute and chronic infection [19]. Many people have no symptoms when they first become infected [20]. During an acute infection, some people may experience vomiting, yellowish skin, weariness, black urine, and abdominal discomfort [18,20]. These symptoms usually last a few weeks, and the first infection is seldom fatal [20]. Once infected, symptoms may develop from 30 to 180 days later [20,21]. If entering a chronic phase, the infection can lead to life-threatening complications, including cirrhosis or hepatocellular carcinoma [22].
Hepatitis D is caused by infection with the hepatitis D virus (HDV) and only occurs in individuals who are already infected with the HBV type [23]. HDV transmission can occur either concurrently with HBV infection (co-infection) or is superimposed on chronic hepatitis B or hepatitis B carrier status (superinfection) [23]. Because of the severity of its effects, an HDV infection in a person with chronic hepatitis B (superinfection) is considered the most dangerous kind of viral hepatitis [24]. In acute infections, these problems include an increased chance of liver failure and a rapid development of liver cirrhosis, as well as an increased risk of developing liver cancer in chronic infections [25]. Hepatitis D has the greatest mortality rate of any hepatitis infection, at 20% when combined with the hepatitis B virus [26]. According to a 2020 prediction, 48 million people are now afflicted with this virus [27].
Previous research was performed on the analysis of possible pathogenic connections existing between periodontitis and chronic hepatitis C (CHC) caused by the infection with the hepatitis C virus (HCV) [28,29]. Thus, it was highlighted that patients with CHC could exhibit significantly more severe oral health challenges, including periodontal ones, caused by the clinical manifestations of periodontitis (gingival bleeding, pocket depth, attachment loss) when compared to healthy controls. Local periodontal inflammation has been shown to have increased strength in CHC patients, as depicted by the immunological quantitative assessment of relevant pro-inflammatory mediators in gingival crevicular samples (GCF) [29]. These mediators include interleukins (IL, such as IL-1α, IL-1β, IL-18), inflammasomes (NLRP3 inflammasome), collagenolytic enzymes (Caspase-1) and pentraxins (PTX, such as PTX-3 and C-reactive protein). All of these pro-inflammatory markers were shown to express more elevated GCF levels in patients with periodontitis and CHC than in periodontitis patients with no CHC or with healthy controls [28]. Interestingly, these markers have also been shown to express serum-elevated levels in CHC patients, suggesting an additional periodontal risk in such patients [29]. Conversely, the implementation of non-surgical periodontal therapy in these patients has delivered less significant improvements in the intensity of local periodontal inflammatory reaction than in non-CHC periodontitis patients, suggesting a limited efficiency of the therapy in their case. Thus, an additional therapeutical focus should be given to these types of patients when seeking periodontal or dental care [28,29].
Given the results generated by our previous research on the topic of possible pathogenic connections existing between HCV infection and periodontitis, we aim to expand the project on HBV and HDV infection and periodontal links. Thus, we performed a review of the existing relevant scientific literature in order to gather data and assess the current state of the art. The aim of this review is to extract and compile available information on the subject so as to set future development of this topic and to generate the scientific background needed for the onset of future projects on possible pathogenic connections existing between periodontitis and HBV/HDV infection.
Materials and Methods
This review followed the criteria and guidelines of the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) (Figure 1).
PICO Question
"Is there currently relevant scientific data on the possible significant pathogenic connections between oral implications and HBV/HDV infection that could offer the background for future development of the subject?" (Population: patients with hepatitis B/D infection; Intervention: oral health status assessment; Comparison: relevant background information; Outcome: development of complementary studies).
Exclusion Criteria of the Studies
Some of the generated search articles were excluded on the following premises: (1) they reported in vitro and experimental animal studies, (2) self-reported studies focusing on dental practitioners and dental students' knowledge of the transmission of the virus, and (3) letters and editorials. All other remnant studies were included for further analysis.
Information Extraction and Review Structuring
The selected papers were carefully read, and relevant information for the review was extracted. The review consisted of three parts: (1) information on HBV infection and oral implications, (2) information on HDV infection and oral implications, and (3) future perspectives and proposed development of the subject.
HBV Infection and Oral Implications
The search for papers on HBV infection and oral implications returned 58 results, ranging from 1977 to 2022. After applying the exclusion criteria, 16 papers were finally selected for critical reading and idea synthesis. Additional papers were extracted from these papers' references list, if they were not generated by the database search.
The earliest papers on the subject were published between 1984-1985, focusing on the detection of HBV antigens (or viral particles) in the oral fluids of infected patients. The study by Polloch et al. identified HBV surface antigens (HBsAg) in the GCF samples of HBV-infected patients [30]. In 1985, Ben-Aryeh et al. performed a similar study, which concluded that HBsAg was found in 90% of the assessed GCF samples originating from HBV-infected patients [31]. The surface antigen was also found in the saliva samples of the same patients. It could be speculated that the saliva might have been contaminated with the blood of gingival origin due to gingival inflammation. However, the authors found no significant correlation between the presence of HBsAg in the saliva samples and the gingival status of the participating patients in terms of gingival inflammation or the gingival bleeding index [31]. This led the authors to suggest that the source of HBsAg presence in the saliva samples was, in fact, the GCF [28]. The virus circulates in the general bloodstream, reaches the lymphatic system and eventually into the GCF due to a difference in osmotic pressure. Then it reaches the saliva, using the GCF as a carrier [32]. This hypothesis is also endorsed by the 1984 study by Hurlem et al., who suggested that HBV-infected patients may pose a higher risk of viral transmission in the dental office by double carriers: saliva and increased gingival bleeding when dealing with gingival inflammation [33].
This hypothesis is further emphasized by a more recent study by Kamimura et al., who found a strong correlation between occult blood traces in saliva and the presence of viral HBV particles [34]. This study highlighted how HBV DNA particles were found in saliva samples, particularly of elderly patients diagnosed with periodontitis [34]. The study further speculates that this may pose an increased risk of horizontal HBV transmission in the family, where the probability of contact with infected saliva is quite elevated [34]. The risk is further enhanced if patients suffer from periodontitis, with an increased gingival bleeding index [34]. A study by Farghaly reached similar conclusions suggesting that patients with periodontitis showed a higher proportion of hepatitis exposure and a higher detectability of salivary HBsAg [35]. Additional risk factors were considered to be the rural residence of patients or a medical history of past blood transfusions. Thus, the study concluded that the presence of periodontitis, severe gingival bleeding and poor oral hygiene were associated with the risk of hepatitis and the detectability of salivary hepatitis markers [35]. Similar results were generated in the study by Sharifian et al., who considered that the most frequent risk factors for HBV infection in studied patients were positive periodontal diagnosis and family history [36].
The clinical settings of unfavorable dental and periodontal diagnosis and liver damage were assessed by Yang et al., who concluded that an increased number of absent teeth were associated with an increased risk of primary liver cancer [37]. A study by Nagao et al. also highlighted that periodontitis might be correlated with viral liver disease [38]. However, the results seem inconclusive so far, as a 2011 study by Anand et al. found that the number of dental caries and the periodontal status of patients with nonalcoholic cirrhosis did not differ significantly from that of the controls without any liver disease [39]. Nevertheless, other oral health issues, such as halitosis, have been directly linked to HBV infection and periodontitis, including a study by Hun Han et al. [40]. The authors concluded that patients with periodontitis, HBV infection and neglected tongue-brushing had the highest prevalence of volatile sulphur halitosis, suggesting that liver function should be evaluated in patients dealing with bad breath [40].
The periodontal management of patients with an HBV infection was studied by Seshima et al., who reported a case of effective, regenerative periodontal therapy [41]. The patient suffered from HBV infection and diabetes mellitus, which can significantly impact the body's healing and regenerative capabilities [41]. However, considering the medical history of the patient, the authors reported a clinical improvement in the periodontal parameters [41]. A study by Ting et al. suggested the use of statins as an adjunctive to periodontal therapy in patients with an HBV infection [42]. This is justified by the antiviral properties of statins, as well as their antibacterial capabilities, including on important periodontal pathogens such as Porphyromonas gingivalis [42]. Concerning the surgical management of periodontal patients with an HBV infection, Hong et al. reported no episodes of postoperative bleeding in patients, despite a significant correlation of the international normalized ration (INR) with HBV infection diagnosis [43]. The authors suggested that it was not only INR values that should be considered when evaluating patients with liver diseases for procedures with a post-surgical bleeding risk [43].
From an immunological perspective, certain inflammation mediators were targeted in the saliva samples of patients with an HBV infection [44]. Pro-inflammatory interleukins (IL-2 and IL-4), as well as anti-inflammatory ones (IL-10), expressed significantly more elevated levels in the saliva samples of HBV infected patients than in the healthy controls, as depicted by the enzyme-linked immunosorbent assay (ELISA) used in this study [44]. The same immunological method (ELISA) was proposed in the study by Gharavi et al. as a diagnosis tool for HBV infection in samples of saliva, with good sensitivity and specificity [45].
HDV Infection and Oral Implications
The search for papers on the oral implications of HDV infections retrieved 13 papers, of which only 3 could be selected, for critical reading after applying the exclusion criteria. This low number of papers on the subject suggests a limited current understanding of the subject and should stimulate future developments of the topic.
In a 1986 article, Cottone et al. raised awareness among dental practitioners and members of the dental office team of the possibility of the transmission of the newly identified, at that time, HDV virus [46]. The authors stated that the hepatitis D virus could pose a serious threat to all members of the dental team and thus encouraged vaccination against the HBV virus, as it would also offer protection against HDV [46].
One of the main reasons why the literature on the oral implications of HDV is limited, is that the viral infection is mainly conditioned by a co-existing or pre-existing infection with HBV. Hence, the patient target group is limited only to HBV-positive persons. Even though the association of the HBV and HDV viruses is generally accepted and agreed upon, some authors have reported exceptions to this. In 2016, Weller et al. detected HDV in the salivary glands of Sjogren syndrome patients [47]. Their micro-array analysis showed that HDV was present in more than 50% of the samples originating from primary Sjogren syndrome patients [47]. The novelty of the study was the fact that the identification of HDV was independent of any HBV presence. This suggests that HDV is able to set up an independent presence without HBV, at least at the salivary gland level, and exhibits a unique tissue tropism [47]. The results of this study raise significant awareness from an oral health perspective, as Sjogren syndrome is considered to be a major trigger for dental and periodontal problems, as well as an extra-hepatic manifestation of liver diseases, including viral infections.
Currently, there is insufficient data on whether HDV particles could be carried by saliva, similar to HBV. Only one study, performed by Isaeva et al., focused on this topic but found no detection of HDV antibodies in saliva samples originating from patients with HBV and HDV infection [48]. Despite the fact that the saliva samples were positive for HBV antigens and antibodies, this was not the case for HDV, suggesting a lower concentration of these elements in the saliva than for HBV [48]. Nevertheless, the matter should be addressed by complementary research in order to increase its scientific understanding.
Future Perspectives
As shown by the literature review (Table 1), currently, there is sufficient data on the oral implications of HBV infection and little, or almost no insight, into these implications in HDV ones. HBV infection and oral implications cover mostly the detection of viral antigens in saliva and gingival fluid and less about the clinical, dental, or periodontal status of infected patients [49]. There is also a gap in the literature regarding the assessment of various pro-inflammatory elements in samples of gingival fluid or saliva, as this can have relevance for the characterization of low-grade inflammatory periodontal reactions and their elements in this type of patient. Regarding HDV infection, this has received little attention from the perspective of oral health implications, mainly because patients with HBV and HDV co-infection or supra-infection may be more difficult to gather for larger studies. The epidemiology of the HDV infection may vary significantly from region to region, and as HBV vaccinations continue to gain popularity, the spread of the HDV virus may also decelerate. A future study should assess the dental and periodontal health status of HBV + HDV infected individuals, such as the number of missing teeth, periodontal diagnosis, and the type of diagnosed periodontal conditions, in terms of the severity and rate of progression, as compared to the controls. The ideal circumstance would be to include patients who do not suffer from other systemic diseases that could influence the manifestation of periodontitis (such as diabetes mellitus), but this would remain to be established by the study design and group characteristics [50]. An immunological analysis via the ELISA method would be necessary in order to measure specifically targeted pro-inflammatory mediators in GCF samples that have relevance in both periodontitis and HBV + HDV infections pathogenesis. The local and systemic effects of periodontal therapy in patients diagnosed with periodontitis and HBV + HDV infection should also be evaluated, from a clinical and immunological standpoint, in order to detect improvements in the expression of inflammatory mediators as a sign of the inflammatory reaction's modulation.
As the prevalence of HDV infection in Romania experienced recent rising trends [51] and considering the general increase in population mobility after the COVID-19 pandemic, the development of such a research project could be of significant interest and deliver valuable and high-novelty results. With the experience gained from the previous HCV infection study, we plan to apply the same principles and management of the project in this new research direction in order to improve existing knowledge and increase scientific awareness of the topic.
Conclusions
The existing literature offers sufficient background information on the oral implications of HBV infection in order to fundamentally support the development of a research project on the topic of HBV + HDV co-infection, where data is scarce and has significant gaps. | 4,842.6 | 2022-09-26T00:00:00.000 | [
"Medicine",
"Biology"
] |
Research on investment portfolio model based on neural network and genetic algorithm in big data era
With the maturity of neural network theory, it provides new ideas and methods for the prediction and analysis of stock market investment. The purpose of this paper is to improve the accuracy of stock market investment prediction, we build neural network model and genetic algorithm model, study the law of stock market operation, and improve the effectiveness of neural network and genetic algorithm. Through the empirical research, it is found that the neural network model can make up for the shortcomings of the traditional algorithm through the optimization of genetic algorithm.
are also increasing [3]. It needs to be emphasized that at present, some stock market prediction methods emphasize ideal state, but due to the complex internal and external environment, various uncertain factors always impact the stock market investment market, which to a certain extent improves the prediction difficulty of the stock market, and greatly reduces the prediction effectiveness of the stock market investment [4]. Therefore, even the prediction methods with high popularity often fail in market prediction [5]. At present, the rapid development of science and technology provides a new way for stock market investment prediction and analysis, especially the growing maturity of neural network theory, which has been well applied in many aspects, such as signal processing, pattern recognition and so on. By analyzing the theory of neural network, it can be found that neural network has great advantages in self-adaptive and self-learning, has the characteristics of nonlinear approximation ability, and has a high degree of agreement with the stock market prediction [6]. Therefore, it is a good attempt to apply neural network to stock market prediction [7].
The specific contributions of this paper include: (1) A literature survey about various existing neural network and genetic algorithms, and analyze their advantages and disadvantages. (2) An effective neural network model and genetic algorithm model for improve the accuracy of stock market investment prediction is proposed. (3) Performance analysis of the proposed algorithm and an evaluation of the algorithm with respect to other existing algorithms.
The remainder of this paper is organized as follows: Related work will be introduced in Sect. 2. Neural network model and genetic algorithm model is explained in Sect. 3. Experimental results and discussion will be presented in Sect. 4 and conclusion will be drawn in Sect. 5.
Related work
In the process of social practice, people need to find the optimal solution in a complex system in order to solve the problem efficiently. However, because the solution space is relatively large, the correlation between the parameters and the target value is difficult to determine, and there are relatively many factors to be considered, so how to deal with the optimization problem must be highly valued. In many cases, people determine the approximate optimal solution by comparing and analyzing random candidate solutions [8]. The essence of this method is to randomly sample parameters from the domain of definition to obtain the optimal solution. This method is simple and easy, but it is only suitable for problems with a small search space; for problems with a large search space, it cannot solve the problem simply by the exhaustive method, and more advanced optimization techniques are needed [9]. In contrast, the genetic algorithm with 'survival of the fittest' as the core has great advantages. By introducing a competition mechanism into the algorithm, the search efficiency can be improved. The basic process of the genetic algorithm is to determine a group of initial solutions in a random way, and then conduct individual search to obtain an independent solution, which is defined as a 'chromosome'. Through the 'fitness value' index, the adaptability of chromosomes in the population can be effectively evaluated, and then whether to select them to enter the next stage can be judged [10]. According to the principle of survival of the fittest, on the basis of continuous crossing, selection and variation, the evolutionary selection of chromosomes forms a chromosome group with higher adaptability. After reaching a certain number of iterations, the chromosome convergence is completed and the optimal solution of the genetic algorithm is obtained [11]. By analyzing the process, we can find that the whole process of the genetic algorithm is essentially similar to the genetic principle in the biological sense [12].
At the operational level, genetic algorithm is not complex. According to the above discussion, it is essentially an iterative process, that is, it starts from the initial group of individuals, and obtains the approximate optimal solution through continuous cross selection and mutation operation [13].
Markov chain analysis is an important part of genetic algorithm, and its core is the convergence theory [14]. It can be found that the convergence of traditional genetic algorithm is generally based on the Markov chain limit theory. In the practice of solving problems, the ultimate goal of genetic algorithm is to determine the global optimal solution. The essence of the whole process is random search, which has great uncertainty [15]. The operation process of genetic algorithm is continuously optimized under the expected value of the optimal solution, and it is regarded as the initial sequence [16]. Through the convergence theory of genetic algorithm, its convergence can be effectively verified. Not only that, in order to achieve good convergence of genetic algorithm, we must focus on two parameters, one of which is the possibility of breaking away from the satisfactory solution set on the premise of determining the satisfactory solution; the other is the possibility of still not obtaining the satisfactory solution on the premise of not obtaining the satisfactory solution, and the convergence of genetic algorithm is formed on the basis of the above two parameters matching General theory [17]. The convergence research based on the two parameters is pure probability research, which is intuitive and simple in the convergence verification of genetic algorithm [18].
Methods
The purpose of this paper is to improve the accuracy of stock market investment prediction. By combining the neural network model and the genetic algorithm model, we can predict the operation law of the stock market [19]. This paper relies on existing theoretical research results to optimize the real number coding scheme and improve the effectiveness of the neural network algorithm and genetic algorithm. In this paper, the real number coding method is adopted, the sample segmentation is optimized, and the training is strengthened; the training speed and convergence speed of the neural network are improved so as to avoid falling into local minima, and a three-layer neural network is constructed to determine the global optimal solution and thus effectively solve the problem.
Genetic algorithm to optimize the learning rules of neural network
In the training process, neural network learning rules need to be set in advance [20,21]. However, whether the learning rules are reasonable or not is uncertain [22,23]. Therefore, it is necessary to design and optimize neural network learning rules with the help of the genetic algorithm, so as to improve the ability of the neural network algorithm to solve complex problems and the adaptability of the algorithm to uncertain environments [24]. Research results show that how to design the coding of learning rules is the core problem in the evolution process; so far, there are no cases with application value [25,26]. Therefore, the study of learning rules is only at the initial stage, and its process is as follows: Step 1 The effective coding method is determined, the learning rules are coded, and the matching between individual and single learning rules is realized.
Step 2 To construct a training set, the elements of the training set are determined firstly, and then the corresponding learning training is carried out according to the matching learning rules.
Step 3 Calculate the fitness of all learning rules.
Step 4 Select and determine learning rules that meet the requirements.
Step 5 Cross selection, individual variation processing, analysis of individual attributes, to determine the next generation of population.
Step 6 Repeat the above steps until the goal of evolution is achieved.
In this paper, after optimization by the genetic algorithm, the connection weights of the neural network are improved. By solving the existing problems of the neural network, its generalization ability is enhanced [27]. On this basis, the learning model of the neural network is constructed, and the global optimal solution is obtained to achieve the ability to solve specific problems.
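To make the selection-crossover-mutation cycle described above concrete, here is a minimal real-coded genetic algorithm on a toy objective (not the stock model itself); the population size, crossover and mutation probabilities, and the objective function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    """Toy objective: higher is fitter (peak at x = [1, 1])."""
    return -np.sum((x - 1.0) ** 2)

def real_coded_ga(pop_size=30, n_genes=2, n_gen=50, pc=0.8, pm=0.1):
    pop = rng.uniform(-5, 5, (pop_size, n_genes))          # initial population
    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        # Selection: fitness-proportional (shifted to be positive).
        prob = fit - fit.min() + 1e-9
        prob /= prob.sum()
        parents = pop[rng.choice(pop_size, pop_size, p=prob)]
        # Crossover: arithmetic blend of adjacent pairs with probability pc.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                a = rng.random()
                children[i]     = a * parents[i] + (1 - a) * parents[i + 1]
                children[i + 1] = a * parents[i + 1] + (1 - a) * parents[i]
        # Mutation: small Gaussian perturbation with probability pm per gene.
        mask = rng.random(children.shape) < pm
        children[mask] += rng.normal(0, 0.5, mask.sum())
        pop = children
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(real_coded_ga())   # best individual found; should approach [1, 1]
```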
GA-BP algorithm design
Parameters play a decisive role in the performance of the algorithm model. In this paper, finite length coding is chosen. After the design of the algorithm coding scheme is completed, the parameter coding is transformed into the genetic algorithm coding, and then the function used to accurately evaluate the algorithm performance is determined, and the global search is completed in the parameter space. In this way, not only the space can be expanded, but also the target of regional search can be realized, and a balance state can be achieved between the two. In the initial stage of genetic search, due to the uncertainty brought by cross variation, the search scope has been expanded to a certain extent. After obtaining the high fitness solution, the crossover operation completes the search near the above solution. Therefore, through the genetic operation, we can determine the best combination of parameters to meet the requirements of practical application and solve the problem. In terms of algorithm implementation, the specific process is as follows: (1) Step 1 Randomly forming n codes and forming initial set s. (2) Step 2 Complete the coding in sequence, decode the coding, determine a parameter combination P reflecting BP model, determine BP, evaluate the BP and obtain its corresponding fitness value. (3) Step 3 According to the appropriate value determined in the previous step, determine n individuals, and enter the next generation to obtain the next generation group. In this step, some individuals may need to be selected multiple times. (4) Step 4 According to the probability P and the fitness value of different codes, the parent generation is determined, then cross inheritance is carried out, and the next population is entered after random pairing. (5) Step 5 According to the probability P and fitness, select the parent population that meets the requirements, insert new individuals through mutation inheritance, and achieve the goal of population iteration. (6) Step 6 By repeating the above steps repeatedly, the search target can be achieved on the premise of meeting the standard requirements.
Here x denotes the number of input-layer nodes, w the number of hidden-layer nodes, and a a constant between 0 and 10 in the empirical rule used to choose the hidden-layer size; a commonly used version of this rule is assumed in the sketch below.
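To make Steps 1-2 of the procedure above concrete, the sketch below encodes the weights of a small three-layer BP network as one real-valued chromosome and scores it by the inverse of its mean squared training error. The hidden_nodes helper implements the commonly quoted empirical rule w = sqrt(x + y) + a; this particular form, together with the dummy data and all parameter choices, is an assumption for illustration rather than the formula actually used in the paper.

import numpy as np

def hidden_nodes(x, y, a=10):
    # Commonly used empirical rule (assumed form): w = sqrt(x + y) + a, with 0 <= a <= 10.
    # With x = 5 inputs, y = 1 output and a = 10 this gives 12 hidden nodes,
    # matching the 5-12-1 structure adopted later in the paper.
    return int(round(np.sqrt(x + y))) + a

def decode(chromosome, n_in, n_hidden, n_out=1):
    # Step 2: map the chromosome back to the BP weight matrices.
    k = n_in * n_hidden
    w1 = chromosome[:k].reshape(n_in, n_hidden)
    w2 = chromosome[k:k + n_hidden * n_out].reshape(n_hidden, n_out)
    return w1, w2

def fitness(chromosome, X, y, n_in, n_hidden, n_out=1):
    # Step 2 (continued): evaluate the decoded BP network; higher fitness = smaller error.
    w1, w2 = decode(chromosome, n_in, n_hidden, n_out)
    pred = (np.tanh(X @ w1) @ w2).ravel()
    return 1.0 / (1.0 + np.mean((pred - y) ** 2))

rng = np.random.default_rng(0)
X = rng.random((50, 5))                          # 50 dummy windows of 5 normalized closes
y = rng.random(50)
chrom = rng.standard_normal(5 * 12 + 12 * 1)     # one individual for a 5-12-1 network
print(hidden_nodes(5, 1), fitness(chrom, X, y, n_in=5, n_hidden=12))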
Neural network toolbox
As a highly complex and comprehensive algorithm model, a neural network places relatively high demands on its supporting toolbox. A neural network toolbox provides ready-made activation functions and training routines, so that the network designer can call the required subroutines directly instead of re-implementing them; this supports the learning and training process, satisfies the calling requirements to the greatest extent, and improves the effectiveness of network learning. When building the algorithm model, different types of algorithms are integrated into the neural network toolbox, which makes algorithm design considerably more convenient.
Genetic algorithm toolbox
All genetic-algorithm operations ultimately act on chromosomes. Chromosomes are essentially vectors, which can be represented as matrices, so the genetic operators become matrix operations; the basic data unit is thus a matrix of arbitrary dimension. With this representation, users can ignore the low-level details of the matrix computations and improve operational efficiency at the programming level. In applications of the genetic algorithm, the toolbox provides the necessary routines in a scalable and standardized form. Through its matrix-computation capability, it improves the efficiency of the genetic algorithm, reduces the difficulty of programming with chromosomes, and thereby makes the overall problem easier to solve.
Sample data
The number of samples is closely related to the accuracy of the algorithm: it depends on the complexity of the mapping to be learned and, to some extent, on the noise in the data. The more complex the mapping, the more training samples are required, and the noisier the data, the larger the sample set must be. In selecting samples, the following principles should be observed: (1) meet the requirements of sample quantity; (2) meet the requirements of sample accuracy; (3) meet the requirements of sample representativeness; (4) meet the requirements of sample distribution.
Stock return is a common index in the quantitative analysis of stock-market investment. However, if the stock market is treated as a nonlinear dynamic system, the return is not the best transformation of the price series, and forecasting the price itself should be fully considered. Moreover, as the number of training samples increases, the amount of computation grows considerably and the convergence speed during training declines, requiring a longer convergence time; if the number of samples is too small, the network cannot fit the corresponding stock-index curve. In this paper, 'gzmt' (Guizhou Maotai, stock code 600519) is chosen as the representative case for the empirical analysis. In selecting the data, the basic principles of representativeness, continuity, uniform distribution and accuracy are followed to improve the applicability of the algorithm.
To reduce the prediction error and improve accuracy, a reasonable number of samples must be chosen to meet the training requirements of the neural network. Taking the prediction characteristics of the network fully into account, the training samples are optimized to make the subsequent testing more convenient.
Before formal learning begins, the effectiveness of the network depends largely on data preprocessing, which affects both its accuracy and its training speed. In general, the raw samples cannot be fed directly into network training; they must first be processed appropriately. In other words, the acquired data samples usually need to be normalized before they can be used for network training.
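A typical normalization step is sketched below with min-max scaling; the paper does not state which scheme it uses, so this particular choice is an assumption.

import numpy as np

def normalize(prices):
    # Min-max scaling of a closing-price series to [0, 1].
    lo, hi = prices.min(), prices.max()
    return (prices - lo) / (hi - lo), lo, hi

def denormalize(scaled, lo, hi):
    # Map network outputs back to price units.
    return scaled * (hi - lo) + lo

scaled, lo, hi = normalize(np.array([1480.0, 1495.0, 1512.0, 1503.0]))  # dummy closes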
Closing price network model
In this paper, 277 historical closing prices are selected as data samples to build the basic data model for the stock study. To ensure that the data samples meet the requirements, they must be normalized before use. The network has p = 5 input nodes and t = 1 output node; that is, the closing prices of the five trading days before day t + 1 are used to predict the closing price of day t + 1. Since the number of hidden-layer nodes cannot be determined theoretically, in this paper, on the premise that the error meets the requirements and in order to keep the computational cost as low as possible, the best number of hidden nodes is determined experimentally within a reasonable range. The numerical results show that a fast three-layer network can be established only with the L-M (Levenberg-Marquardt) training method. The most widely used structure is the 5-12-1 network, with 5 input nodes, 12 hidden nodes and 1 output node. Because the BP network has good generalization ability, 159 training samples and 122 test samples are selected from the data set. Figure 1 is a schematic diagram of the stock market forecast.
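The sliding-window construction described above (five previous closes in, the next close out) can be sketched as follows. The closing-price array is replaced by placeholder data, and, since scikit-learn offers no Levenberg-Marquardt trainer, the 'lbfgs' solver is used here merely as a stand-in for a fast second-order method; both substitutions are assumptions, not the paper's actual setup.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(prices, lag=5):
    # Each sample: closes of the previous `lag` trading days -> next day's close (5-12-1 layout).
    X = np.array([prices[i:i + lag] for i in range(len(prices) - lag)])
    y = np.asarray(prices[lag:])
    return X, y

closes = np.linspace(0.2, 0.8, 277)              # placeholder for 277 normalized closing prices
X, y = make_windows(closes, lag=5)
X_train, y_train = X[:159], y[:159]              # 159 training windows, as in the paper
X_test, y_test = X[159:], y[159:]

net = MLPRegressor(hidden_layer_sizes=(12,), solver="lbfgs", max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print("test MSE:", np.mean((net.predict(X_test) - y_test) ** 2))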
The main goal of this paper is to optimize the BP network and to build an efficient and accurate prediction model based on the genetic network. In forecasting the closing price of Guizhou Maotai (GZMT), the optimized network model must undergo the necessary testing and training after it is introduced. Figure 2 is the test chart of the BP network model.
In this paper, based on the operation tools provided by the neural network toolbox and combined with the algorithm discussed above, the calculation of the closing-price network is completed through programming. After training, the test set is used to carry out the necessary tests, which shows how well the samples follow the stock-index curve. Figure 3 is the neural network fitting curve.
Through sample training, the number of iterations needed for the target error to meet the requirements is determined. From the network model constructed in this paper, the error curve can be calculated. Figure 4 is the error curve.
By training on 150 groups of sample data, the error is reduced to the greatest extent, with most errors falling close to 0. The learning and training in this paper has therefore basically achieved the expected goal, and the neural network model shows high prediction accuracy. Figure 5 is the fitting curve of the test samples.
After three rounds of training, the SSE of the sample data reaches 11.5857e−004, a relatively good fitting result, and the error analysis also confirms the good performance. We therefore choose the L-M back-propagation algorithm for learning and training: it gives a good approximation and runs quickly, and, most importantly, it helps avoid the local-minimum problem, although it requires a relatively large amount of memory. It should be emphasized that during learning and training the learning rate and target error must be tracked and observed, so that their impact on the convergence speed is kept as small as possible. In summary, the artificial neural network algorithm used in this paper can effectively improve the efficiency of stock market prediction and has high application value.
Prediction and analysis of stock market based on GA-BP network model
The specific process of GA-BP network model is as follows: Step 1 Determine the initial function.
Step 2 Complete the fitness training.
Step 3 Obtain the initial population.
Step 4 Call the genetic function.
Step 5 Determine the weights and thresholds of the neural network.
Step 6 Construct the targeted network training.
Step 7 The network is determined by weight and threshold.
Step 8 Tracking and observing the network performance.
Step 9 Solve the prediction problem according to the network.
The neural network designed in this paper has a three-layer structure: its inputs are the closing prices of days T − 4, T − 3, T − 2, T − 1 and T, its output is the closing price of day T + 1, and the learning rate is 0.5. Learning is terminated once either of the following conditions is met: the goal error is ≤ 1e−5, or the number of epochs reaches 1000.
The genetic algorithm is run for 100 iterations, and the GA-BP network is then trained as shown in the figures below. Figure 6 shows the GA-BP network training and Figure 7 the training iteration diagram.
By comparing the output values with the actual values, it can be seen that the network achieves a good prediction effect on the test samples. Figure 8 is the GA-BP fitting curve of the test samples.
When comparing against other algorithms, evaluation through the test indicators of econometric techniques is difficult. This paper therefore adopts the most widely used evaluation indicators, summarized from research results at home and abroad.
Mean absolute error: $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert \hat{y}_i - y_i \rvert$. Mean square deviation: $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (\hat{y}_i - y_i)^2$, where $y_i$ is the actual closing price, $\hat{y}_i$ the predicted value, and $n$ the number of test samples.
The computation time of each algorithm is also compared.
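Both indicators translate directly into code; the implementation below is a straightforward transcription with arbitrary variable names.

import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error between actual and predicted closing prices.
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def mse(y_true, y_pred):
    # Mean square deviation between actual and predicted closing prices.
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)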
According to the comparison results, the real-coded genetic algorithm optimization model not only achieves higher accuracy and a better overall effect than the ordinary neural network, but also prevents the occurrence of local minima and improves the convergence speed, giving it high practical value. Table 1 lists the comparison results of the three algorithms. With the development of intelligent technology, neural networks have become a frontier interdisciplinary research field, and work in this area helps to improve the design level and overall performance of neural network systems. Building on existing research results, this paper improves the application efficiency of the algorithm by optimizing specific components of the neural network. It systematically explains the technical terms and relevant indicators of stock-market investment, reviews the common prediction methods, and discusses the current research hotspots and difficulties. It also discusses the related theory and application process and analyzes the similarities and differences between genetic algorithms and neural network techniques, showing how their combination avoids the local-minimum problem. The Chinese stock market is chosen as the research sample, and the feasibility of short-term stock-market forecasting is established, laying a theoretical foundation for this study. A scientific and reasonable selection of input parameters can reflect the scale and quality of the information in the stock market while avoiding the learning and training difficulties, or even non-convergence, caused by overlapping information. Since the stock market changes over time, the network training samples also need to be adjusted accordingly, otherwise the prediction accuracy cannot be guaranteed; preprocessing of the raw data is therefore an essential step. The empirical study shows that, through optimization by the genetic algorithm, the neural network model can make up for the shortcomings of the ordinary algorithm.
"Computer Science",
"Business"
] |
Tunable unconventional kagome superconductivity in charge ordered RbV3Sb5 and KV3Sb5
Unconventional superconductors often feature competing orders, small superfluid density, and nodal electronic pairing. While unusual superconductivity has been proposed in the kagome metals AV3Sb5, key spectroscopic evidence has remained elusive. Here we utilize pressure-tuned (up to 1.85 GPa) and ultra-low temperature (down to 18 mK) muon spin spectroscopy to uncover the unconventional nature of superconductivity in RbV3Sb5 and KV3Sb5. At ambient pressure, we observed time-reversal symmetry breaking charge order below T1* ≃ 110 K in RbV3Sb5, detected as an enhancement of the width of the internal magnetic field distribution sensed by the muon ensemble, with an additional transition at T2* ≃ 50 K. Remarkably, the superconducting state displays a nodal energy gap and a reduced superfluid density, which can be attributed to the competition with the charge order. Upon applying pressure, the charge-order transitions are suppressed, the superfluid density increases, and the superconducting state progressively evolves from nodal to nodeless. Once optimal superconductivity is achieved and charge order is eliminated, we find a superconducting pairing state that is not only fully gapped, but also spontaneously breaks time-reversal symmetry. Our results point to unprecedented tunable nodal kagome superconductivity competing with time-reversal symmetry-breaking charge order and offer unique insights into the nature of the pairing state.
Similarly to charge order, superconductivity, with transition temperature T c ∼ 1 K at ambient pressure, was also reported to display intriguing features, such as multiple gaps in (K,Cs)V 3 Sb 5 [31][32][33], diminished superfluid density in KV 3 Sb 5 [24], and double-dome structures in the pressure phase diagrams of all three compounds [34][35][36].However, no consensus on the superconducting gap structure has been reached yet [24,[31][32][33][37][38][39][40], partly due to the challenges of performing spectroscopic studies under extreme conditions including ultra low temperatures and large pressures.Moreover, the role of the unconventional charge order in the emergence of these unusual superconducting features remains unclear, since the former onsets at a much higher temperature than the latter.In this regard, the sensitivity of both T c and T co on applied pressure [34][35][36] offers a rare setting to study the interplay between these two orders with a disorder-free tuning knob.
Here, we tackle these issues by employing zero-field and high-field muon spin relaxation experiments to directly probe the interplay between charge order and superconductivity across the temperature-pressure phase diagram of RbV 3 Sb 5 .This allows us to assess not only the timereversal symmetry-breaking nature of these two states, but also the evolution of the low-energy superconducting excitations as T co is suppressed and T c is enhanced.The latter measurements unearth a remarkable transition from nodal pairing, when superconductivity coexists with charge order, to nodeless pairing, when superconductivity onsets alone.They also reveal distinct relationships between T c and the superfluid density in these two regimes.The same behaviors are also observed in KV 3 Sb 5 , attesting to the robustness of our conclusions for the understanding of the pairing mechanism in the AV 3 Sb 5 family.We discuss different scenarios for the symmetries of both the superconductivity and charge order states that may account for the unusual nodal-tonodeless transition.
A. Probing spontaneous time-reversal symmetry breaking
Scanning tunneling microscopy observes 2×2 charge order in RbV3Sb5 (Fig. 1b and Ref. [14]) with an unusual magnetic field response [14], suggestive of time-reversal symmetry-breaking charge order. To directly probe signatures of time-reversal symmetry-breaking, we carried out zero-field (ZF) µSR experiments (see Fig. 1c) on both single crystal and polycrystalline samples of RbV3Sb5 above and below Tco. The ZF-µSR spectra (see Fig. 1d) were fitted using the Gaussian Kubo-Toyabe (GKT) depolarization function [41] multiplied with an exponential decay function [24]:

A_ZF(t) = A0 [ 1/3 + 2/3 (1 − Δ²t²) exp(−Δ²t²/2) ] exp(−Γt).   (1)

Here, Δ/γµ is the width of the local field distribution due to the nuclear moments and γµ = 0.085 µs−1 G−1 is the muon gyromagnetic ratio. As we discussed previously in our work that reported time-reversal symmetry-breaking in KV3Sb5 [24], the exponential relaxation rate Γ is mostly sensitive to the temperature dependence of the electronic contribution to the muon spin relaxation. In Fig. 1e, the temperature dependences of the Gaussian and exponential relaxation rates Δ and Γ for the polycrystalline sample of RbV3Sb5 are shown over a broad temperature range. The main observation is the two-step increase of the relaxation rate Γ, consisting of a noticeable enhancement at T1* ≃ 110 K, which corresponds approximately to the charge-order transition temperature Tco, and a stronger increase below T2* ≃ 50 K. To substantiate this result, data from the single crystals are presented in Fig. 1f. The data from the up-down (34) and forward-backward (12) sets of detectors not only confirm the increase of Γ, but also shed more light on the origin of the two-step behavior. In particular, while Γ34 is enhanced mostly below T2* ≃ 50 K, Γ12 also features a mild initial increase right below T1* ≃ 110 K. Since the enhanced electronic relaxation rate below T1* is seen mostly in Γ12, it indicates that the local field at the muon site lies mostly within the ab-plane of the crystal. Below T2*, the internal field also acquires an out-of-plane component, as manifested by the enhancement of both Γ12 and Γ34. The increase of the electronic contribution to the internal field width is also accompanied by maxima and minima in the temperature dependence of the nuclear contribution to the internal field width Δ/γµ, particularly for the up-down set of detectors (Figs. 1e and f).
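The fit function of Eq. (1) is straightforward to evaluate numerically. The sketch below implements the static Gaussian Kubo-Toyabe form multiplied by the exponential term; the parameter values are placeholders, and in practice Δ and Γ would be obtained by least-squares fitting of the measured asymmetry (e.g. with scipy.optimize.curve_fit).

import numpy as np

def gaussian_kubo_toyabe(t, delta):
    # Static Gaussian Kubo-Toyabe polarization from randomly oriented nuclear dipolar fields.
    x = (delta * t) ** 2
    return 1.0 / 3.0 + 2.0 / 3.0 * (1.0 - x) * np.exp(-x / 2.0)

def zf_asymmetry(t, a0, delta, gamma):
    # Eq. (1): GKT (nuclear contribution) multiplied by exp(-Gamma t) (electronic contribution).
    return a0 * gaussian_kubo_toyabe(t, delta) * np.exp(-gamma * t)

t = np.linspace(0.0, 8.0, 200)                        # time in microseconds
a = zf_asymmetry(t, a0=0.25, delta=0.35, gamma=0.05)  # delta, gamma in inverse microseconds (placeholders)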
The increase in the exponential relaxation of RbV 3 Sb 5 between T * 1 and 2 K is about 0.05 µs −1 , which can be interpreted as a characteristic field strength Γ 12 /γ µ 0.6 G.While these ZF-µSR results are consistent with the onset of time-reversal symmetry-breaking at T co , highfield µSR experiments, illustrated in the inset of Fig. 1g, are essential to confirm this effect, as we discussed previously [24].Fig. 1g shows the probability distribution of the magnetic field measured at 3 K for the single crystal samples of RbV 3 Sb 5 in the presence of a c-axis magnetic field of 8 T (see Methods for the details of the analysis).The contribution from the internal field is clearly identified.Fig. 1h shows the corresponding temperaturedependent relaxation rate σ HTF for different values of the external c-axis field.For 3 T, it displays a non-monotonic behavior, staying nearly constant across T * 1 and then decreasing to a minimum before increasing again at low temperatures.Upon increasing the external field, the relaxation rate not only shows an increase right at T * 1 110 K, but its temperature dependence below T * 2 is reversed from being reduced to being enhanced upon lowering the temperature.Thus, as shown in Fig. 1, the relaxation rate extracted from the high-field µSR data shows a qualitatively similar two-step increase as the ZF data at the same characteristic temperatures T * 1 110 K and T * 2 50 K -although the features are more pronounced at high fields.Because the temperature dependence of the nuclear contribution to the relaxation cannot be changed by an external field, we conclude that the two-step increase in the relaxation rate is driven by the electronic/magnetic contribution.
Therefore, the combination of ZF-µSR and high-field µSR results on RbV 3 Sb 5 provide direct evidence for timereversal symmetry-breaking below the onset of charge order, which approximately coincides with T * 1 110 K.As we previously discussed for KV 3 Sb 5 [24], one plausible scenario to explain this effect is that loop currents along the kagome bonds are generated by a complex charge order parameter [16,21,22].Within this framework, muons can couple to the fields generated by these loop currents, resulting in an enhanced internal field width sensed by the muon ensemble (see also the Supplementary Information).The lower-temperature increase of the relax- ation rate at T * 2 50 K is suggestive of another ordered state that modifies such loop currents.An obvious candidate is a secondary charge-ordered state onsetting at T * 2 .Indeed, experimentally, it has been reported that some kagome metals may display two charge-order transitions [19,20].Theoretically, different charge-order instabilities have been found in close proximity [18].Because time-reversal symmetry is already broken at T * 1 , it is not possible to distinguish, with our µSR data, whether this secondary charge-order state would break time-reversal symmetry on its own, or whether it is a more standard type of bond-charge-order.As shown in Figure 2a, both T * 1 and T * 2 are suppressed by hydrostatic pressure.More specifically, the two-step charge-order transition becomes a single time-reversal symmetry-breaking charge-order transition at ∼ 1.5 GPa, above which T * 1 = T * 2 shows a faster suppression (see the Methods section for details).
The same ZF-µSR analysis can also be employed to probe whether there is time-reversal symmetry-breaking inside the superconducting state. Because charge order already breaks time-reversal symmetry at Tco ≫ Tc, it is necessary to suppress Tco, which can be accomplished with pressure. The maximum pressure we could apply (1.85 GPa) is not enough to completely suppress the charge order in RbV3Sb5, but it allows us to enter the optimal-Tc region of the phase diagram (see Fig. 3a), in which only a single time-reversal symmetry-breaking charge-order transition is observed (see Fig. 2a). This pressure value is also large enough to access the pure superconducting state of the related compound KV3Sb5. In Fig. 2b, we show the behavior of the internal field width Γ, extracted from the ZF-µSR data, across the superconducting transition of KV3Sb5 measured both at ambient pressure (red, where charge order is present) and at 1.1 GPa (grey, where charge order is absent). While at ambient pressure Γ is little affected by superconductivity, at the higher pressure there is a significant enhancement of Γ, comparable to what has been observed in superconductors that are believed to spontaneously break time-reversal symmetry, such as Sr2RuO4 [42]. A similar enhancement of Γ below Tc ∼ 3 K is observed for RbV3Sb5 at p = 1.85 GPa, as shown in Fig. 2c. This provides strong evidence for time-reversal symmetry-breaking superconducting states in KV3Sb5 and RbV3Sb5, indicative of an unconventional pairing state.
B. Superfluid density as a function of pressure
An additional property of the superconducting state that can be directly measured with µSR is the superfluid density. This is accomplished by extracting the second moment of the field distribution from the muon spin depolarization rate σsc, which is related to the superconducting magnetic penetration depth λ as ΔB² ∝ σ²sc ∝ λ⁻⁴ (see Methods section). Because λ⁻² is proportional to the superfluid density, so is σsc. Figures 3 and 4 summarise the pressure and temperature dependences of σsc (measured in an applied magnetic field of µ0H = 5 mT) in the superconducting states of RbV3Sb5 and KV3Sb5. As the temperature is decreased below Tc, the depolarization rate σsc starts to increase from zero due to the formation of the flux-line lattice (see Fig. 4a). As the pressure is increased, not only Tc (as determined from
AC susceptibility and µSR experiments), but also the low-temperature value of σ sc (measured at the baseline of 0.25 K) show a substantial increase for both compounds, as shown in Figs.3a and b.In both cases, T c,ons first quickly reaches a maximum at a characteristic pressure p max−Tc , namely, 3.5 K at p max−Tc 1.5 GPa for the Rb compound and 2.3 K at p max−Tc 0.5 GPa for the K compound.Beyond those pressure values, the transition temperature remains nearly unchanged.The superfluid density σ sc (0.25 K) also increases significantly from its ambient-pressure value upon approaching p max−Tc , by a factor of approximately 7 for the Rb compound and 5 for the K system.In both cases, σ sc (0.25 K) continues increasing beyond p max−Tc , although at a lower rate that may indicate approach to saturation.These behaviors are consistent with competition between charge order and superconductivity.Indeed, as shown in Figs.3c and d, the increase in the superfluid density is correlated with the decrease in the charge ordering temperature T co .More specifically, the pressure values p max−Tc for which T c is maximum are close to the critical pressures p cr,co beyond which charge order is completely suppressed.In fact, as displayed in Fig. 3b, p cr,co essentially coincides with p max−Tc for KV 3 Sb 5 .Competition with charge order could naturally account for the suppression of the superfluid density towards the lowpressure region of the phase diagram, where T co is the largest.Since charge order partially gaps the Fermi surface, as recently seen by quantum oscillation [43] and ARPES [17,32] studies, the electronic states available for the superconducting state are suppressed, thus decreasing the superfluid density [44,45].
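Since σsc is proportional to λ⁻² and hence to the superfluid density, the factor-of-seven increase quoted above for RbV3Sb5 translates directly into penetration depth; the short estimate below simply restates that proportionality (the factor itself is taken from the text).

factor_sigma = 7.0                        # sigma_sc(p ~ pmax-Tc) / sigma_sc(ambient) for RbV3Sb5
factor_superfluid = factor_sigma          # n_s is proportional to lambda^-2, i.e. to sigma_sc
factor_lambda = factor_sigma ** -0.5      # lambda is proportional to sigma_sc^(-1/2)
print(f"lambda shrinks by a factor of about {factor_sigma ** 0.5:.1f}")   # ~2.6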
Having extracted σ sc , we can directly obtain the magnetic penetration depth λ (see Methods).For polycrystalline samples, this gives an effective penetration depth λ eff , whereas for single crystals, it gives the in-plane λ ab .It is instructive to plot the low-temperature penetration depth not as a function of pressure, but as a function of T c [46].As shown in Fig. 3e, the ratio T c /λ −2 eff for unpressurized RbV 3 Sb 5 is ∼ 0.7, similar to the one previously reported for KV 3 Sb 5 [24].This ratio value is significantly larger from that of conventional BCS superconductors, indicative of a much smaller superfluid density.Moreover, we also find an unusual relationship between λ −2 eff and T c in these two kagome superconductors, which is not expected for conventional superconductivity.This is presented in the inset of Fig. 3e: below p max−Tc , the superfluid density (which is proportional to λ −2 eff ) depends linearly on T c , whereas above p max−Tc , T c barely changes for increasing λ −2 eff .Historically, a linear increase of T c with λ −2 eff has been observed only in the underdoped region of the phase diagram of unconventional superconductors.Deviations from linear behavior were previously found in an optimally doped cuprate [47], in some Fe-based superconductors [48], and in the chargeordered superconductor 2H-NbSe 2 under pressure [47].Therefore, in RbV 3 Sb 5 and KV 3 Sb 5 , it is tempting to attribute this deviation to the suppression of the competing charge ordered state by the applied pressure.More broadly, these two different dependences of T c with λ −2 eff indicate superconducting states with different properties below and above p max−Tc .
To further probe this scenario, we quantitatively analyze the temperature dependence of the penetration depth λ(T) [49] for both compounds as a function of pressure, see Figs. 4a and b. Quite generally, upon decreasing the temperature towards zero, a power-law dependence of λ⁻²_eff(T) is indicative of the presence of nodal quasiparticles, whereas an exponential saturation-like behavior is a signature of a fully gapped spectrum. The low-T behavior of λ⁻²_ab(T) for single crystals of RbV3Sb5 and KV3Sb5, measured down to 18 mK and shown in Fig. 4c, displays a linear-in-T behavior, consistent with the presence of gap nodes. Quantitatively, the curve is well described by a phenomenological two-gap model, where one of the gaps has nodes and the other does not (see Methods).
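The distinction drawn here can be illustrated with the simplest limiting forms: a linear-in-T depletion of λ⁻² for line nodes versus an exponentially activated depletion for an isotropic s-wave gap. These are textbook low-temperature approximations with arbitrary parameter values, not the self-consistent two-gap (Eilenberger) model actually used in the Methods.

import numpy as np

def lambda2_nodal(T, lam2_0, slope):
    # Line nodes: superfluid density (lambda^-2) decreases linearly in T at low T.
    return lam2_0 * (1.0 - slope * T)

def lambda2_nodeless(T, lam2_0, gap):
    # Isotropic s-wave, low-T limit: lambda^-2(T) ~ lambda^-2(0) [1 - sqrt(2*pi*gap/T) * exp(-gap/T)],
    # with the gap and T expressed in the same (temperature) units.
    T = np.maximum(T, 1e-9)
    return lam2_0 * (1.0 - np.sqrt(2.0 * np.pi * gap / T) * np.exp(-gap / T))

T = np.linspace(0.02, 0.6, 60)                       # kelvin (placeholder range)
nodal = lambda2_nodal(T, lam2_0=25.0, slope=0.5)     # lambda^-2 in um^-2 (placeholder)
nodeless = lambda2_nodeless(T, lam2_0=25.0, gap=0.4)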
Such a linear-in-T increase of λ −2 eff (T ) upon approaching T = 0 is also seen in polycrystalline samples for pressures up to p max−Tc .In the case of RbV 3 Sb 5 (Fig. 4a), for the only pressure value available above p max−Tc ≈ 1.5 GPa, the penetration depth curve seems to be better fitted by a model with a nodeless gap.This is much clearer in the case of KV 3 Sb 5 (Fig. 4b): above p max−Tc ≈ 0.5 GPa, λ −2 eff (T ) displays a saturation-like behavior that is well captured quantitatively by a model with a nodeless gap.Since p max−Tc is close to p co,cr , specially for the K compound, these results show that charge order strongly influences the superconducting gap structure in (Rb,K)V 3 Sb 5 , inducing nodes in an otherwise fully gapped pairing state.To the best of our knowledge this is the first direct experimental demonstration of a plausible pressure-induced change in the superconducting gap structure from nodal to nodeless in these kagome superconductors.
One possible explanation for these results is on the changes that the emergence of charge order causes on the Fermi surface.First-principle calculations on AV 3 Sb 5 compounds indicate the existence of multiple Fermi pockets in the absence of charge order [43].The simplest fully-gapped pairing state is an s-wave one consisting of different nodeless gaps (with potentially different signs) around each pocket.The onset of long-range charge order further breaks up these pockets into additional smaller pockets.Depending on the relative sign between the original gaps and on the details of the reconstructed Fermi pockets, accidental nodes could emerge.Such a scenario was proposed in the case of competing s +− -wave superconductivity and spin-density wave in iron-pnictide superconductors [50].
The main drawback of this scenario is that it does not account for the time-reversal symmetry-breaking of the "pure" superconducting state.In this regard, a fully gapped pairing state that also breaks time-reversal symmetry is the chiral d x 2 −y 2 +id xy state [51,52].As long as the charge ordered state preserves the point-group symmetry of the disordered state, the chiral pairing symmetry is expected to be retained below T co , suggesting a nodeless superconducting state.However, if the chargeordered state breaks the threefold rotational symmetry of the lattice, as proposed experimentally [53] and theoretically [18,21] for certain AV 3 Sb 5 compounds, a nodal gap is stabilized for a sufficiently large charge order parameter, as we show in the supplementary material.In this case, the nodal-to-nodeless transition does not coincide with the full suppression of charge order, unless the transition from the charge-ordered superconducting state to the "pure" superconducting state is first-order.We note that the same conclusions would also apply for the triplet chiral p x + ip y state.
II. CONCLUSION
Our results provide direct evidence for unconventional superconductivity in (Rb,K)V 3 Sb 5 , by combining the observations of nodal superconducting pairing and a small superfluid density at ambient pressure, which in turn displays an unconventional dependence on the superconducting critical temperature.Moreover, we find that the hydrostatic pressure induces a change from a nodal superconducting gap structure at low pressure to a nodeless, time-reversal symmetry-breaking superconducting gap structure at high pressure, a behavior correlated with the suppression of time-reversal symmetry-breaking charge order.Our results point to the rich interplay and accessible tunability between nodal unconventional superconductivity and time-reversal symmetry-breaking charge orders in the correlated kagome lattice, offering new insights into the microscopic mechanisms involved in both orders.
III. METHODS
Experimental details: Zero field (ZF) and transverse field (TF) µSR experiments at ambient pressure on the single crystalline and polycrystalline samples of RbV3Sb5 and KV3Sb5 were performed on the GPS instrument and high-field HAL-9500 instrument, equipped with BlueFors vacuum-loaded cryogen-free dilution refrigerator (DR), at the Swiss Muon Source (SµS) at the Paul Scherrer Institut, in Villigen, Switzerland.µSR experiments under pressure were performed at the µE1 beamline of the Paul Scherrer Institute (Villigen, Switzerland using the instrument GPD, where an intense high-energy (pµ = 100 MeV/c) beam of muons is implanted in the sample through the pressure cell.The 4 He cryostats equipped with the 3 He insets (base temperature 0.25 K) were used.A mosaic of several crystals stacked on top of each other was used for these measurements.The magnetic field was applied both in-plane (along the ab-plane) and out-of-plane (along the crystallographic c-axis).A schematic overview of the experimental setup for zero-field and low transverse field measurements is shown in Figure 1c.The muon spin is forming 45 • with respect to the c-axis of the crystal.The sample was surrounded by four detectors: Forward (1), Backward (2), Up (3) and Down (4).A schematic overview of the experimental setup for high-field µSR instrument is shown in the inset of Figure 1g.The muon spin forms 90 • with respect to the c-axis of the crystal.The sample was surrounded by 2 times 8 positron detectors, arranged in rings.The specimen was mounted in a He gas-flow cryostat with the largest face perpendicular to the muon beam direction, along which the external field was applied.Zero field and high transverse field µSR data analysis on single crystals of RbV3Sb5 were performed using both the so-called asymmetry and single-histogram modes [54].
µSR experiment: In a µSR experiment [55] nearly 100 % spin-polarized muons µ + are implanted into the sample one at a time.The positively charged µ + thermalize at interstitial lattice sites, where they act as magnetic microprobes.In a magnetic material the muon spin precesses in the local field Bµ at the muon site with the Larmor frequency νµ = γµ/(2π)Bµ (muon gyromagnetic ratio γµ/(2π) = 135.5 MHz T −1 ).Using the µSR technique, important length scales of superconductors can be measured, namely the magnetic penetration depth λ and the coherence length ξ.If a type II superconductor is cooled below Tc in an applied magnetic field ranging between the lower (Hc1) and the upper (Hc2) critical fields, a vortex lattice is formed which in general is incommensurate with the crystal lattice, with vortex cores separated by much larger distances than those of the crystallographic unit cell.Because the implanted muons stop at given crystallographic sites, they will randomly probe the field distribution of the vortex lattice.Such measurements need to be performed in a field applied perpendicular to the initial muon spin polarization (so-called TF configuration).
Pressure cell: Pressures up to 1.9 GPa were generated in a double wall piston-cylinder type cell made of CuBe/MP35N, specially designed to perform µSR experiments under pressure [56].A fully assembled typical double-wall pressure cell is presented in Extended Data Fig. 1.The body of the pressure cell consists of two parts: the inner and the outer cylinders which are shrink fitted into each other.Outer body of the cell is made out of MP35N alloy.Inner body of the cell is made out of CuBe alloy.Other components of the cell are: pistons, mushroom, seals, locking nuts, and spacers.The mushroom pieces and sealing rings were made out of non hardened Copper Beryllium.With both pistons completely inserted, the maximum sample height is 12 mm.As a pressure transmitting medium Daphne oil was used.The pressure was measured by tracking the superconducting transition of a very small indium plate by AC susceptibility.The filling factor of the pressure cell was maximized.The fraction of the muons stopping in the sample was approximately 40 %.
Crystal structure of RbV3Sb5: Additional characterization information is provided here on the kagome superconductor RbV3Sb5 which crystallizes in the novel AV3Sb5-type structure (space group P 6/mmm, where A = K, Rb, Cs).The crystallographic structure of prototype compound RbV3Sb5 shown in panel (a) of Extended Data Figure 2 illustrates how the V atoms form a kagome lattice (medium beige circles) intertwined with a hexagonal lattice of Sb atoms (small red circles).The Rb atoms (large purple circles) occupy the interstitial sites between the two parallel kagome planes.In panel (b) the vanadium kagome net has been emphasized, with the interpenetrating antimony lattice included to highlight the unit cell (see dashed lines).Extended data Figures 2c shows an optical microscope image of several single crystals of RbV3Sb5 on millimeter paper.The Laue X-ray diffraction image (see the Extended data Figure 2d) demonstrates the single crystallinity of the samples used for µSR experiments.
Magnetization measurements of RbV3Sb5:
The magnetization measurements show the abrupt drop in macroscopic magnetization across T * 1 = TCDW,1 105 K for the field applied along the c-axis, as shown in Extended Figure 3. Interestingly, a shallow minimum around T * 2 = TCDW,2 50 K is also seen in magnetization, followed by sizeable increase at lower temperatures.
Analysis of high field TF-µSR data: Figure 1g shows the probability field distribution, measured at 3 K for the single crystal samples of RbV3Sb5 in the c-axis magnetic field of 8 T. In the whole investigated temperature range, two-component signals were observed: a signal with fast relaxation (low frequency) and another one with a slow relaxation (high frequency).The narrow signal arises mostly from the muons stopping in the silver sample holder and its position is a precise measure of the value of the applied magnetic field.The width and the position of the narrow signal is assumed to be temperature independent and thus they were kept constant in the analysis.The relative fraction of the muons stopping in the sample was fixed to the value obtained at the base-T and kept temperature independent.The signal with the fast relaxation, which is shifted towards the lower field from the applied one, arises from the muons stopping in the sample and it takes a major fraction (80 %) of the µSR signal.This points to the fact that the sample response arises from the bulk of the sample.We note that in high magnetic fields we cannot systematically discriminate between the nuclear and the electronic contribution to the relaxation rate and thus we show the total high-field muon spin relaxation rate σHTF in Figure 1i.
Knight shift of RbV3Sb5: Extended Data Figure 4 shows the the temperature dependence of the Knight shift, measured at various applied magnetic fields.Knight shift is defined as Kexp = (Bint − Bext)/Bext, where Bint and Bext are the internal and externally applied magnetic fields, respectively.Kexp shows a sharp changes across T * 1 and T * 2 , which indicates the change of local magnetic susceptibility with two characteristic temperatures.
Analysis of ZF-µSR data under pressure: As an example, in the Extended data Figure 5 is displaying the zero-field µSR spectra, recorded at p = 1.07 GPa for various temperatures.The experimental data were analyzed by separating the µSR signal on the sample (s) and the pressure cell (pc) contributions [47,57]: Here A0 is the initial asymmetry of the muon-spin ensemble, and As (Apc) and Ps(t) [Ppc(t)] are the asymmetry and the time evolution of the muon-spin polarization for muons stopped inside the sample (pressure cell), respectively.The response of the pressure cell [Ppc(t)] was studied in separate set of experiments.
The sample contribution includes both, the nuclear moment and an additional exponential relaxation Γ caused by appearance of spontaneous magnetic fields: Here P GKT ZF (t) is the Gaussian Kubo-Toyabe (GKT) relaxation function (see Eq. 1) describing the magnetic field distribution created by the nuclear magnetic moments [41].Fits of Eq. 2 to the ZF-µSR pressure data were performed globally.The ZF-µSR time-spectra taken at each particular pressure (p = 0.16, 0.59, 1.07, 1.53, and 1.89 GPa) were fitted simultaneously with As, Apc, and σGKT as common parameters, and λ an individual parameter for each particular data set.The fits were limited to T 150 K, i.e. up to the temperature where the nuclear contribution of RbV3Sb5 remains constant (σGKT const, see Fig. 1e).
Time-reversal symmetry-breaking charge orders under pressure: Here we show (see Extended Data Figure 6a-f) the evolution of the two time-reversal symmetry-breaking transition temperatures T * 1 and T * 2 with the application of hydrostatic pressure.Two step time-reversal symmetry-breaking transition is clearly observed under the pressures of p = 0.16 GPa and 0.59 GPa.At 1 GPa, these two transitions become indistinguishable and above 1 GPa we see only transition at T * 2 , which decreases upon further increasing the pressure.Extended Data Figure 6f shows the pressure evolution of T * 1 and T * 2 , extracted from µSR results, and of previously reported charge order temperature Tco,1 [35].The value of Tco,2 [19] at ambient pressure is also shown.This phase diagram suggests that two time-reversal symmetry-breaking state turn into single time-reversal symmetry-breaking state at ∼ 1.5 GPa, above which T * 2 shows faster suppression and follows the phase line of the charge order.This phase diagram confirms the charge orders being origin for the time-reversal symmetry-breaking in RbV3Sb5.
Macroscopic superconducting properties under pressure:
The temperature dependence of the AC-susceptibility χAC for various pressures, shown in Extended Data Figures 7a and b for the polycrystalline samples of RbV3Sb5 and KV3Sb5, indicates a strong diamagnetic response and sharp superconducting transitions in both samples.This points to the high quality of the samples and providing evidence for bulk superconductivity in these polycrystalline samples.
For RbV3Sb5 and KV3Sb5, the λ⁻²(T) data above pmax−Tc are analysed using two s-wave gaps. At pressures below pmax−Tc, a combination of a dominant nodal |cos(6ϕ)|-gap and one s-wave gap is used.
Analysis of the temperature dependence of the penetration depth for the single crystals RbV3Sb5 and KV3Sb5 at ambient pressure: λ⁻²_eff(T) at ambient pressure was analyzed within the framework of the quasi-classical Eilenberger weak-coupling formalism, where the temperature dependence of the gaps is obtained by solving self-consistent coupled gap equations. This method is described in detail elsewhere [62-65], including in our recent paper on KV3Sb5 [24]. The temperature dependence of λ⁻²_ab down to 18 mK in an applied field of 5 mT is shown in Extended Data Figure 8 for RbV3Sb5 along with the KV3Sb5 data. A well pronounced two-step behaviour is observed in RbV3Sb5, similar to KV3Sb5 [24], which was explained by two-gap superconductivity with very weak interband coupling (0.001-0.005) and strong electron-phonon coupling. The interband coupling is extremely small, yet sufficient to enforce the same Tc for the different bands while still producing the two-step temperature dependence of the penetration depth [66]. The λ⁻²_ab(T) for both (Rb,K)V3Sb5 is well described by one constant gap and one dominant angle-dependent |cos(6ϕ)|-gap, indicating the presence of gap nodes. Upon increasing pressure the two-step behaviour is smoothed out, but the angle-dependent gap becomes more dominant and persists all the way up to pmax−Tc ≈ pcr,co ≈ 1.5 GPa and 0.5 GPa for RbV3Sb5 and KV3Sb5, respectively. At pressures above pmax−Tc, λ⁻²(T) is described by constant (nodeless) gaps.
TF-µSR spectra for RbV3Sb5 and KV3Sb5: Extended Data Figures.9a and b show the TF-µSR spectra, measured near ambient pressure above and below the superconducting transition temperature Tc for RbV3Sb5 and KV3Sb5, respectively.Extended Data Figures . 9c and d show the TF-µSR spectra, measured above and below the superconducting transition temperature Tc for RbV3Sb5 at p = 1.85 GPa and for KV3Sb5 at p = 1.1 GPa, respectively.In order to obtain well ordered vortex lattice, the measurements were done after field cooling the sample from above Tc.Above Tc, the oscillations show a damping essentially due to the random local fields from the nuclear magnetic moments.Below Tc the damping rate increases with decreasing temperature due to the presence of a nonuniform local magnetic field distribution as a result of the formation of a flux-line lattice in the superconducting state.Figures 9c and d show that damping in the superconducting state significantly increases upon application of hydrostatic pressure.
Analysis of TF-µSR data under pressure: The TF µSR data were analyzed by using the following functional form: [58] Here As and Apc denote the initial assymmetries of the sample and the pressure cell, respectively.ϕ is the initial phase of the muon-spin ensemble and Bint represents the internal magnetic field at the muon site.The relaxation rates σsc and σnm characterize the damping due to the formation of the FLL in the superconducting state and of the nuclear magnetic dipolar contribution, respectively.In the analysis σnm was assumed to be constant over the entire temperature range and was fixed to the value obtained above Tc where only nuclear magnetic moments contribute to the muon depolarization rate σ.The Gaussian relaxation rate, σpc, reflects the depolarization due to the nuclear moments of the pressure cell.The width of the pressure cell signal increases below Tc.This is due to the influence of the diamagnetic moment of the superconducting sample on the pressure cell, leading to the temperature dependent σpc below Tc.In order to consider this influence we assume the linear coupling between σpc and the field shift of the internal magnetic field in the superconducting state: where σpc(T > Tc) = 0.25 µs −1 is the temperature independent Gaussian relaxation rate.µ0Hint,NS and µ0Hint,SC are the internal magnetic fields measured in the normal and in the superconducting state, respectively.As indicated by the solid lines in Extended data Figs.9a-d the µSR data are well described by Eq. ( 5).
Theoretical model for the nodal-to-nodeless transition: Here we present more details about our theoretical model for the nodal-to-nodeless transition assuming that the "pure" superconducting state is the timereversal symmetry-breaking chiral d x 2 −y 2 + idxy state.
The kagome lattice has point group D6h and the d-wave order parameter has two degenerate components, corresponding to the d_{x²−y²} component Δ1 and the d_{xy} component Δ2. The combined order parameter transforms as the E2g irreducible representation (irrep) of the point group. Writing Δ1 = Δ0 cos θ e^{iφ1} and Δ2 = Δ0 sin θ e^{iφ2}, the Landau free-energy expansion to quartic order is a function of Δ0, θ, and the relative phase φ = φ1 − φ2 between the two order parameters (a standard form consistent with the statements below is sketched after this paragraph). From here we see immediately the well-known result that for β2 > 0 a relative phase of φ = ±π/2 is preferred, whereas for β2 < 0 a relative phase of φ = 0, π is selected [67]. Hereafter we will focus on the β2 > 0 case and take β1 > β2 in order for the free energy to be bounded. In this situation, the φ = ±π/2 phase is selected, implying time-reversal symmetry breaking and a nodeless pairing state.
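The explicit quartic expansion is not reproduced in the text above; a standard form for a two-component E2g order parameter in the (Δ0, θ, φ) parametrization, consistent with the statements that β2 > 0 selects φ = ±π/2 and that boundedness requires β1 > β2, is the following reconstruction, to be read as a plausible standard expression rather than the authors' exact one:

\[
  F \;=\; \alpha\,\Delta_0^{2} \;+\; \beta_1\,\Delta_0^{4}
          \;-\; \beta_2\,\Delta_0^{4}\,\sin^{2}(2\theta)\,\sin^{2}\varphi ,
  \qquad \varphi \equiv \varphi_1 - \varphi_2 .
\]

For β2 > 0 the quartic part is minimized by sin²(2θ) sin²φ = 1, i.e. θ = π/4 and φ = ±π/2, which is the chiral, nodeless, time-reversal symmetry-breaking combination, and positivity (boundedness) of the quartic part then requires β1 > β2, as stated above.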
We now consider what happens inside the charge-ordered (CO) state.Some of the proposed 2 × 2 × 2 charge-order configurations, such as the tri-hexagonal, Star of David, and superimposed tri-hexagonal Star of David phases [18], are triple-QM /triple-QL states that preserve the D 6h point group of the kagome lattice.Here, QM and QL refer to the wave-vectors ( 1 2 , 1 2 , 0) and ( 1 2 , 1 2 , 1 2 ) of the Brillouin zone.In these cases, because ∆ continues to transform as the two-dimensional E2g irrep, the superconducting state is expected to remain chiral and nodeless.
However, other proposed 2 × 2 × 2 CDW phases break the threefold rotational symmetry of the lattice, implying that ∆1 and ∆2 no longer onset at the same temperature.This is the case of the so-called staggered trihexagonal and staggered Star of David phases [18], which are double-QL/single-QM states.In this case, a composite quantity transforming as the E2g irrep of the point group can be constructed from the order parameters of the CO state: where Mi, with i = 1, 2, 3, denote the CO order parameter associated with each wave-vector in the star of QM .
Table I: Comparison between different extrema of the free energy in Eq. ( 10).An additional constraint on the parameters arise as a consequence of the square-root function appearing in the solution for θ.
While ∆ above also transforms as E2g, it cannot be combined with the above composite in the free energy, as it is not gauge-invariant.Nevertheless, we can construct a composite superconducting order parameter combination that is gauge invariant and still transforms as the E2g irrep: The "scalar product" between the composites (8) and ( 9) is now gauge-invariant and transforms trivially under the point group operations and, thus, an allowed term in the free energy expansion.Since we are interested in the fate of the superconducting state inside the CO state, we consider for concreteness and without loss of generality, the particular configuration M1 = M3 = ∆CO/ √ 2, M2 = 0.The full expression for the free energy then reads where λ is a coupling constant, assumed hereafter to be positive.Minimization of the free energy yield two possible minima, whose free energies are given by: As summarized in Table I, solution 1 corresponds to a superconducting state where only ∆1 is non-zero, resulting in a nodal state, since the gap ∆1 must vanish along the kx = ±ky directions.In contrast, in solution 2, both ∆1 and ∆2 are non-zero.Although they have different magnitudes and their relative phase is no longer ±π/2, the total gap function is always finite, implying that solution 2 it is a nodeless state.
The solution that minimizes the free energy depends on the values of α and ∆ 2 CO .The nodeless state (solution 2) takes place as long as the following condition is met: which arises from enforcing the argument of square root in the expression for the angle θ * to be positive (see Table I).When the constraint is saturated, i.e. for we have F1 = F2.For larger values of ∆CO, the nodal state (solution 1) is favored.Defining where t = T T c,0 is the reduced temperature, we can obtain the transition temperatures for both solutions as a function of ∆ 2 CO .We find Hence, for a finite CO order parameter, the nodal state onsets first, followed by a transition at lower temperatures to a nodeless state.Accordingly, for a fixed temperature, there is a nodeless to nodal transition upon increasing ∆CO.This is illustrated in Extended Data Fig. 10, which shows t c,nodal and t c,nodeless as a function of ∆CO.The specific parameters used in making the figure were: α0 = 0.1, β1 = 1, β2 = 0.4, and λ = 0.25.
It is important to emphasize that, even in the nodeless state, the minimum gap value can be very small. To show that, we consider the full gap function, in which f_{kx²−ky²} and f_{kxky} are form factors that vanish at kx = ±ky and at kx = 0 or ky = 0, respectively. Here, θ and φ are functions of ΔCO and are given in Table I. Evaluating the minimum gap as a function of ΔCO gives the inset of Extended Data Fig. 10, which was obtained using the same parameters as above and setting the reduced temperature to t = 0.4. As expected, the gap minimum vanishes continuously across the nodeless-to-nodal transition.
Figure 1 :
Figure 1: (Color online) Time-reversal symmetry-breaking charge order in RbV3Sb5.(a) Schematic example of an orbital current state (red arrows) in the kagome lattice.(b) Scanning tunneling microscopy of the Sb surface showing 2×2 charge order as illustrated by black lines.The inset is the Fourier transform of this image, displaying 1×1 lattice Bragg peaks (blue circles) and 2×2 charge-order peaks (red circles).The latter have different intensities, suggesting a chirality of the charge order.(c) A schematic overview of the experimental setup (see the methods section).(d) The ZF µSR time spectra for the polycrystalline sample of RbV3Sb5, obtained at T = 5 K.The dashed and solid curves represent fits using the Gaussian Kubo Toyabe (GKT) function without (black) and with (red) a multiplied exponential exp(−Γt) term, respectively.Error bars are the standard error of the mean (s.e.m.) in about 10 6 events.The temperature dependences of the relaxation rates ∆ and Γ, which can be related to the nuclear and electronic contributions respectively, are shown in a wide temperature range for the polycrystalline (e) and the single crystal samples (f) of RbV3Sb5.Panel (f) presents Γ obtained from two sets of detectors.The error bars represent the standard deviation of the fit parameters.(g) Fourier transform of the µSR asymmetry spectra for the single crystal of RbV3Sb5 at 3 K in the presence of an applied field of µ0H = 8T.The black solid line is a two-component signal fit.The peaks marked by the arrows denote the external and internal fields, determined as the mean values of the field distribution from the silver sample holder (mostly) and from the sample, respectively.Inset shows the schematic high-field µSR experimental setup (see the methods section).(h) The temperature dependence of the high transverse field muon spin relaxation rate σHTF for the single crystal of RbV3Sb5, normalized to the value at 300 K, measured under different c-axis magnetic fields.(i) The temperature dependence of the relaxation rate, measured under magnetic field values of µ0H = 8 T and 9.5 T.
Figure 2:
Figure 2: (Color online) Time-reversal symmetry-breaking charge order and superconductivity in (K,Rb)V3Sb5 under pressure.(a) The pressure dependences of the transition temperatures T * 1 and T * 2 .Temperature dependence of the zero-field muon spin relaxation rate Γ for KV3Sb5 (b) and RbV3Sb5 (c) in the temperature range across Tc, measured at ambient pressure and above the critical pressure at which Tc is maximum.The error bars represent the standard deviation of the fit parameters.
Figure 3 :
Figure 3: (Color online) Coupled charge order and nodal superconductivity in kagome lattice.Pressure dependence of the superconducting transition temperature (left axis) and of the base-T value of σsc (right axis) for the polycrystalline samples of RbV3Sb5 (a) and KV3Sb5 (b).Here, Tc,ons and T c,mid were obtained from AC measurements and Tc,µSR, from µSR.Pressure dependence of λ −2eff and charge order temperature Tco (Ref.[35]) for RbV3Sb5 (c) and KV3Sb5 (Ref.[36])(d).The arrows mark the critical pressure pcr,co at which charge order is suppressed and the pressure pmax−Tc at which Tc reaches its maximum value.(e) Plot of Tc versus λ −2 eff (0) in logarithmic scale obtained from our µSR experiments in KV3Sb5 and RbV3Sb5.Inset shows the plot in a linear scale.The dashed red line represents the relationship obtained for the kagome superconductor LaRu3Si2 as well as for the layered transition metal dichalcogenide superconductors T d -MoTe2 and 2H-NbSe2[47,48].The relationship observed for cuprates is also shown[46], as are the points for various conventional superconductors .The error bars represent the standard deviation of the fit parameters.
Figure 4: (Color online) Tunable nodal kagome superconductivity.The temperature dependence of the superconducting muon spin depolarization rates σsc for RbV3Sb5 (a) and KV3Sb5 (b), measured in an applied magnetic field of µ0H = 5 mT at ambient and various applied hydrostatic pressures.The error bars represent the standard deviations of the fit parameters.The solid (dashed) lines correspond to a fit using a model with nodeless (nodal) two-gap superconductivity.(c) The inverse squared penetration depth λ −2 ab for the single crystals of KV3Sb5 and RbV3Sb5 as a function of temperature at ambient pressure.
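The depolarization rate σsc shown here is commonly related to the magnetic penetration depth via the standard vortex-lattice expression (valid for applied fields well below the upper critical field); this textbook relation is given for orientation only and is not necessarily the exact conversion used by the authors.

```latex
% Second moment of the field distribution of an ideal vortex lattice (H << Hc2):
\frac{\sigma_{\mathrm{sc}}(T)}{\gamma_{\mu}} \simeq 0.0609\,\frac{\Phi_{0}}{\lambda^{2}(T)},
\qquad \gamma_{\mu}/2\pi = 135.5\ \mathrm{MHz\,T^{-1}},
```

where Φ0 is the magnetic flux quantum, so a larger σsc corresponds directly to a smaller λ and hence a larger superfluid density.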
Extended Data Figure 1: Pressure cell for µSR. Fully assembled typical double-wall piston-cylinder pressure cell used in our µSR experiments. The schematic view of the positron and muon detectors at the GPD spectrometer is also shown. In reality, each positron detector consists of three segments. The collimators reduce the size of the incoming muon beam.
Extended Data Figure 2: Crystal structure of RbV3Sb5. Three-dimensional representation (a) and top view (b) of the atomic structure of RbV3Sb5. (c) Optical microscope image of several single crystals of RbV3Sb5 on millimeter paper. The hexagonal symmetry is immediately apparent. (d) Laue X-ray diffraction image of the single-crystal sample of RbV3Sb5, oriented with the c-axis along the beam.
Extended Data Figure 3: Bulk magnetization for RbV3Sb5. The temperature dependence of the magnetic susceptibility of RbV3Sb5 above 1.8 K. The vertical grey lines mark the concomitant time-reversal symmetry-breaking and charge-ordering temperatures T*1 = TCDW,1 (approximately 110 K) and T*2 = TCDW,2.
Extended Data Figure 4: Knight shift for RbV3Sb5. The temperature dependence of the Knight shift for the single crystal of RbV3Sb5, measured at various magnetic fields applied along the c-axis.
Extended Data Figure 5: Zero-field spectra of RbV3Sb5 under pressure. The ZF-µSR time spectra for the polycrystalline sample of RbV3Sb5, recorded at various temperatures under an applied pressure of p = 1.07 GPa. The solid lines in panel (a) represent fits to the data by means of Eq. 3.
Extended Data Figure 6: (Color online) Pressure evolution of time-reversal symmetry-breaking charge orders in RbV3Sb5. (a-e) The temperature dependence of the absolute change of the electronic relaxation rate ∆Γ = Γ(T) − Γ(T > 150 K) for the polycrystalline sample of RbV3Sb5, measured at various pressures. (f) The charge order temperatures Tco,1 and Tco,2 (after References
Extended Data Figure 7: Macroscopic superconducting properties under pressure. Temperature dependence of the AC susceptibility χ for the polycrystalline samples of RbV3Sb5 (a) and KV3Sb5 (b), measured at nearly ambient and various applied hydrostatic pressures up to p ≈ 1.8 GPa. Arrows mark the onset temperature Tc,ons and the temperature Tc,mid at which χdc = −0.5.
Extended Data Figure 8: Temperature dependence of the penetration depth at ambient pressure. The superconducting muon depolarization rate σsc,ab for the single crystals of RbV3Sb5 and KV3Sb5 as a function of temperature, measured in 5 mT applied perpendicular to the kagome plane. The solid lines correspond to a model with one constant gap and one dominant angle-dependent gap.
Extended Data Figure 9: Transverse field µSR spectra. The transverse field µSR spectra for RbV3Sb5 (a,c) and KV3Sb5 (b,d), obtained above and below Tc (after field cooling the sample from above Tc) close to ambient pressure (a and b) and at the maximum applied pressure (c and d). Error bars are the standard error of the mean (s.e.m.) in about 10⁶ events. The error of each bin count n is given by the standard deviation (s.d.) of n. The errors of each bin in A(t) are then calculated by standard error propagation. The solid lines in panel (a) represent fits to the data by means of Eq. 5. The dashed lines are guides to the eye.
Extended Data Figure 10: (Color online) Calculated superconducting phase diagram. Normalized superconducting critical temperature as a function of the charge order parameter ∆co. As ∆co is increased, a transition from nodeless to nodal superconductivity occurs. As evidenced in Fig. 3c and d, the charge order is suppressed by pressure. As pressure is increased, ∆co is reduced, and the superconducting state goes from nodal to nodeless. The inset shows the minimum gap magnitude as a function of ∆co plotted along the dashed line in the phase diagram. The transition between nodal and nodeless superconductivity is clearly visible. | 11,222.4 | 2022-02-15T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Antimicrobial Spectrum of Titroleane™: A New Potent Anti-Infective Agent
Tea Tree oil (TTO) is well known for its numerous good properties but might be also irritating or toxic when used topically or ingested, thus limiting the number of possible applications in Humans. The aim of the study was to characterize the antimicrobial spectrum as well as the toxicity of Titroleane™, a new anti-infective agent obtained from TTO but cleared of its toxic monoterpenes part. The susceptibility to Titroleane™ of various pathogens (bacteria and fungi) encountered in animal and human health was studied in comparison with that of TTO. Antimicrobial screening was carried out using the broth microdilution method. Activities against aerobic, anaerobic, fastidious and non-fastidious microorganisms were performed. For all microorganisms tested, the MIC values for Titroleane™ ranged from 0.08% to 2.5%, except for Campylobacter jejuni, and Aspergillus niger. In particular, Titroleane™ showed good efficacy against skin and soft tissue infection pathogens, such as methicillin resistant Staphylococcus aureus (MRSA), intra-abdominal infections and oral pathogens, as well as fish farming pathogens. Toxicity testing showed little and similar cytotoxicities between TTO and Titroleane™ of 37% and 23%, respectively at a concentration of 0.025% (v/v). Finally, we demonstrated that the antimicrobial activity of Titroleane™ is similar to that of TTO.
However, some of these molecules (or their oxidation products) are known to be toxic and irritating and are considered allergens. These include terpinolene, γ-terpinene, α-pinene, 1,8-cineole and p-cymene [15,16]. Because of these monoterpenes, batches of TTO can have very different toxicological profiles [17]. In order to regulate the quality of TTO, the International Standard ISO 4730 specifies the required amounts of the different compounds.
A proprietary fractionation process, SETUBIO, yields a TTO derivative named Titroleane™ by removing the monoterpenes contained in TTO [18]. Titroleane™ is enriched in active molecules, such as the monoterpene alcohols previously identified by Southwell et al. [19] and Carson and Riley [20], at a guaranteed concentration of >60%, with a monoterpene concentration of <5% instead of 40% in standard TTO. In addition to Titroleane™, the process generates a second fraction, called Fraction 2, mostly composed of monoterpenes. The aim of this study is to evaluate the antimicrobial spectrum of Titroleane™ and to validate that, even after removal of the toxic and irritating molecules (which are also known for their antimicrobial properties), the activity of Titroleane™ remains similar to that of TTO. Industrial applications and other potential properties of Titroleane™ will also be discussed. Table 1 shows the chemical composition of TTO, Titroleane™ and Fraction 2, obtained by gas chromatography with flame ionization detection (GC-FID). TTO and Fraction 2 contain high amounts of monoterpenes, with maximal concentrations of 72% and 83%, respectively, while Titroleane™ contains less than 3.2%. Conversely, monoterpene alcohols are present at higher concentrations in Titroleane™ than in TTO and Fraction 2. Indeed, Titroleane™ contains 71% to 75% terpinen-4-ol, compared with 48% in TTO and 9.6% in Fraction 2. Titroleane™ also contains 5% to 9% α-terpineol, while standard TTO never contains more than 5%. The levels of sesquiterpenes and sesquiterpene alcohols are low in both Titroleane™ and TTO: viridiflorene is present at 3.3% in Titroleane™ versus 3% in TTO, and δ-cadinene represents 2.8% in Titroleane™ versus 3% in TTO. Fraction 2 contains less than 1% α-terpineol and less than 0.01% sesquiterpenes and sesquiterpene alcohols. We also wanted to determine the impact of the composition of each extract on its plastic container. Figure S1 shows this impact after two months of storage: the containers of TTO and Titroleane™ (whose main compound is terpinen-4-ol) remained unaltered, while the container of Fraction 2 was visibly modified. This behavior is presumably due to the high monoterpene concentration of Fraction 2.
Homogeneity between Batches
As GC-FID analyses of three productions of Titroleane™ demonstrated similar compositions (data not shown), we also wanted to test the homogeneity of our process by evaluating the antibacterial activity of different batches. Consequently, four different and independent batches (i.e., B1, B2, B3 and B4) were tested for their antibacterial activity against Escherichia coli, Staphylococcus aureus and Yersinia enterocolitica. They demonstrated the same antibacterial activity: the MICs of the different productions of Titroleane™ against E. coli, S. aureus and Y. enterocolitica ranged from 1.25% to 2.5%, as shown in Table 2.
MIC of Titroleane™ and Fraction 2
Titroleane™ and Fraction 2 were tested in MIC assays for their respective antibacterial activities. For production batch B2 (Table 3), the MICs against E. coli, S. aureus and Y. enterocolitica were 1.25% for Titroleane™ and 2.5% for Fraction 2. Similar results were measured for production batch B3, with MICs of 2.5% for both fractions against S. aureus and Y. enterocolitica. E. coli was susceptible to 2.5% Titroleane™, and no MIC was recorded for Fraction 2.
Antibacterial Spectrum of Titroleane™
MICs were determined for forty-nine bacteria and five fungi to define the antimicrobial spectrum of Titroleane™. There was no clear trend of inhibition depending on the characteristics of the bacterial strains, as shown in Table 4. Regarding Gram-positive bacteria, Titroleane™ showed good activity toward Bacillales and Lactobacillales, with MICs ranging from 0.62% to 2.5% and from 1.25% to 2.5%, respectively. Similar results were obtained for C. xerosis (MIC = 1.25%) and Clostridium sp. (MIC = 0.62% to 2.5%). Titroleane™ showed good efficacy against wild-type S. aureus and E. coli (MIC = 0.62%), as well as against MRSA (methicillin-resistant S. aureus) (MIC = 1.25%). Considering the inhibition of spore germination, Titroleane™ had a good anti-germination effect on B. atrophaeus spores at 0.62%. Within Gram-negative bacteria, Enterobacteriales, which are of high concern as infectious threats, were the most sensitive to Titroleane™ (MIC = 0.31% to 1.25%). Bacteroidales were also very susceptible to Titroleane™, most notably B. fragilis, with a MIC of 0.08%. Titroleane™ also showed antifungal activity against Candida sp. (MIC = 1.25%) and the lipophilic yeast M. furfur (MIC = 1.25%). No activity was observed against A. niger at the tested concentrations.
Activity of Titroleane™ versus TTO
To compare the activity between Titroleane™ and TTO, another series of MICs was performed on two fungi and thirty-one bacteria representing a wide variety of microorganisms. TTO and Titroleane™ showed similar activities toward tested bacteria and yeast, as shown in Table 5. Their MICs only differed for Y. enterocolitica (0.31% TTO versus 1.25% Titroleane™), V. anguillarum (0.15% TTO versus 0.62% Titroleane™) and V. nigripulchritudo (0.15% TTO versus 1.25% Titroleane™).
Cytotoxicity of TTO versus Titroleane™
The cytotoxicity of Titroleane™ was evaluated on human fibroblast in comparison to TTO, as shown in Figure 1. A low and stable concentration of solvent (DMSO) was used to minimize the impact on cell viability. In the same way, extracts were tested at low concentrations, ranging from 0.025% to 0.006%. The control SDS shows significant cytotoxic activity at a rate of 98%. Little toxicity was recorded with a maximum of 37% with TTO and a significant 23% with Titroleane™ at a concentration of 0.025%. No cytotoxicity was measured at 0.006% for both extracts.
Discussion
Novel antimicrobial agents are still needed to counteract infections and, in the last 20 years, the search for novel antimicrobial agents from plants has been of great interest [21].
Facing the worldwide dissemination of resistant pathogens, as well as the lack of new therapeutic options, it is now urgent to discover new active antimicrobial agents to fight and treat resistant infections [22]. In this context, during the last 20 years we have observed a rekindling interest of the scientific community in exploring the plant world in order to find these new antimicrobial agents [23]. Nevertheless, essential oils can also be dangerous because of their composition in active but also toxic molecules. Indeed, standard TTO is well known for its numerous good properties but might be also irritating or toxic when used topically or ingested, thus limiting the number of possible human applications [15,16,24,25].
SETUBIO, the original process for detoxifying Tea Tree essential oil, generates two fractions: one, named Titroleane™, in which the bioactive beneficial compounds of TTO are concentrated, and a second, identified as Fraction 2, which mainly contains monoterpenes. Indeed, gas chromatography analysis of Titroleane™ showed an important reduction of its monoterpenic content compared to TTO, as shown in Table 1. Titroleane™ contains 100-times less α-pinene, more than five-times less terpinolene, 10-times less 1,8-cineole, 37-times less α-terpinene, 10-times less γ-terpinene and 26-times less limonene than TTO. On the contrary, Fraction 2 contains neither sesquiterpenes nor sesquiterpene alcohols, and almost two-fold more γ-terpinene than TTO. Even if TTO and Titroleane™ have quite different compositions, their cytotoxicity on fibroblasts is similar. Titroleane™ is, nevertheless, of interest since it is cleared of several known allergens [15,16].
The chromatographic profiles were studied on different batches, and both fractions had consistent activity toward the tested strains, which attests to a reproducible production process for Titroleane™. The similar antimicrobial activity observed for the two fractions is supported by research on isolated single compounds of TTO. First, 1,8-cineole has been described widely for its antibacterial activity on vegetative bacteria and biofilms [26,27], as well as for its antifungal activity [28,29]. Second, α-pinene has also been studied for its antibacterial activity against Gram-negative bacteria and C. albicans [10,30-32]. Third, γ-terpinene was initially reported to be inactive [20,33], but recent reports described antimicrobial activity and explored its mechanism [34,35]. Fourth, cymene was found to potentiate the antimicrobial activity of other molecules and might indirectly play a role in the antibacterial activity of Fraction 2 [12].
Terpinen-4-ol has been identified in several studies as the principal active component of TTO with α-terpineol, effective, among others, against bacterial skin pathogens [19,20,27,30,36,37]. Terpinen-4-ol was defined as the best antifungal component of TTO, followed by α-terpineol [6]. These two compounds represent more than 70% of the composition of Titroleane™, which could explain the very good antimicrobial activity of the product.
To define the antimicrobial spectrum of Titroleane™, various microorganisms were tested. As shown in Table 4, Titroleane™ has sporicidal activity against B. atrophaeus and is active against C. sporogenes, Y. enterocolitica, V. cholerae and S. enterica enterica Typhimurium. These are all non-toxigenic surrogates for strains involved in bioterrorism and epidemic infections, such as anthrax, botulism, and plague [38].
Gram-negative bacteria, which are of high concern as infectious threats, showed good susceptibility to Titroleane™. Indeed, its antibacterial activity is particularly pronounced against Enterobacteriaceae, such as E. coli and S. enterica enterica Typhimurium, which are responsible for half of the community-acquired intra-abdominal infections in the European Union, according to Sartelli M and collaborators [39]. Good efficacy against Bacteroides spp., the most frequently represented anaerobic bacteria in intra-abdominal infections in Europe [39], has also been demonstrated. Unfortunately, beneficial bacteria involved in disease prevention, such as Lactobacillus spp. and Bifidobacterium spp. [40,41], are also susceptible to Titroleane™. Nevertheless, depending on the concentration used, Titroleane™ could kill or inhibit pathogens or potential pathogens with a limited impact (or side effect) on the commensal ones. This has been previously demonstrated for TTO [42].
Many pathogens involved in skin infections have demonstrated interesting susceptibilities to Titroleane™, such as S. aureus and MRSA, S. epidermidis, E. faecalis, Klebsiella spp., C. albicans, and P. aeruginosa. These pathogens are responsible for more than 50% of hospital-acquired surgical site infections (SSI) in the US, according to the BiomedTracker Part 1 report of September 2014 [43], and for 100% of community-acquired skin and soft tissue infections (SSTI) in France, according to the national survey on nosocomial infections in 2012 [44].
Regarding its antibacterial spectrum, we have demonstrated that Titroleane™ is also active against periodontal bacteria [45,46], involved in caries and plaque formation, as well as against acne vulgaris [47] and vaginosis, caused by Candida spp. [48]. Titroleane™ might also be of interest in non-cholerae vibriosis infections, common in fish farming and with a rising incidence in humans [49].
One could expect the antimicrobial activity spectrum of Titroleane™ to be weaker than that of standard TTO, due to the absence from Titroleane™ of numerous compounds that could potentiate its antibacterial activity. Indeed, it has been found that terpinen-4-ol and α-terpineol have an antagonistic effect in killing Demodex mites, whereas terpinen-4-ol and terpinolene have a synergistic effect. These data would suggest that Titroleane™ could be less active than TTO, owing to the absence of the synergistic effect brought about by the presence of terpinolene [14]. Surprisingly, as demonstrated by the results in Table 5, Titroleane™ has the same efficacy as TTO. The MIC values of TTO presented in this study are supported by many previous articles. Its sporicidal activity against B. subtilis spores at concentrations above 1% has been previously demonstrated [50], as well as its activity against oral pathogens at 0.2% [51]. Several studies also showed its activity against enteric and skin pathogens [52,53], and in vivo testing demonstrated the activity of TTO against bacteria involved in acne [54]. However, previous experiments on P. aeruginosa showed no antibacterial activity of TTO, whereas in the present study we clearly demonstrated that P. aeruginosa is susceptible. This discrepancy can be explained by the different strains used and/or the methods used in the preparation of the oil [53].
Because Titroleane™ has similar MICs to TTO, one can reasonably believe that it could be studied for the same applications.
Concerning periodontal disease, a TTO gel was found to have a positive effect on chronic periodontitis after local delivery [55]. In addition, an evaluation of the effect of a 0.2% TTO mouthwash on the oral flora of forty volunteers suggests that TTO could reduce the total number of oral bacteria and decrease the dental biofilm [56,57]. Given this good efficacy in inhibiting the growth of periodontal strains, Titroleane™ could also be used for mouth hygiene, while containing fewer of the TTO molecules known to be toxic to human health.
Concerning skin infections, wound dressings have been formulated with TTO, and studies suggest that they promote healing as well as the decolonization of S. aureus from human wounds in vivo [58,59], probably because terpinen-4-ol and α-terpineol are able to penetrate the entire thickness of the epidermis [60]. Dressings made with Titroleane™ could be used to control SSTI infections in hospitals and in the community.
Numerous formulations and studies were made on the effectiveness of TTO in acne treatment in vitro and in vivo [61], thus, the results of Titroleane™ look promising toward a future treatment for acne.
Studies on TTO compounds suggest that many more properties than its broad spectrum of antimicrobial activity could be allocated to Titroleane™. Further investigations have to be done to fully discover the whole potential of Titroleane™ for human health benefits.
Finally, the toxicity of TTO and Titroleane™ was investigated. The difference between the two oils is not as large as we expected, which can be explained in two ways. The first reason lies in the model used. Half of the molecules in TTO are hydrophobic, whereas most of the molecules in Titroleane™ are hydrophilic. This means that, in this model (as in most cell culture models), more molecules are solubilized and in contact with the cells for Titroleane™ than for TTO. In other words, the molecules of Titroleane™ must be more accessible to cells than those of TTO. The second reason is that Titroleane™ is a concentrate of TTO's monoterpene alcohol fraction; in this test, however, we used the same concentrations to compare them. The results show that the cytotoxicities of Titroleane™ and TTO are similar, even though their compositions are highly different.
Active Compounds
Standard Tea Tree Essential Oil (TTO), according to the norm ISO/FDIS 4730:2017 was purchased from Helpac (Auzon, France). The compound profile of Titroleane™ was obtained from Lexva Company (Saint-Beauzire, France), by gas chromatography with flame ionization detector (GC-FID).
TTO Fractionation
TTO fractionation was performed by distillation. TTO was loaded onto a fractionating Vigreux-type column (with finger indentations), 3 m high and 10 cm in diameter, at 72 °C at the head of the column. The distillation of TTO was used to separate the mixture into two fractions based on their volatilities. The fraction containing sesquiterpenes and alcohols was named Titroleane™, while the fraction mostly composed of monoterpenes was named Fraction 2.
Batches were stored in 10 L or 20 L high-density polyethylene containers. Samples were collected in 60 mL polystyrene flasks and stored at 22 °C, protected from light. In order to evaluate the impact of the composition on plastic containers, the 60 mL polystyrene flasks were observed after two months. Batch B2 and the associated TTO and Fraction 2 were used for the visual observation.
Industrial Batches
Four industrial batches of Titroleane™, named B1, B2, B3 and B4, were produced at different times. If not specified, Titroleane™ refers to batch B2. The antimicrobial efficacy of these batches was tested on an initial set of three bacterial species: Escherichia coli as a Gram-negative model, Staphylococcus aureus as a Gram-positive model and Yersinia enterocolitica as a foodborne model. Assays were performed once immediately after production.
Samples Preparation
To enhance the solubility of TTO and Titroleane™, dimethyl sulfoxide (DMSO) (Sigma Aldrich, Saint-Quentin-Fallavier, France) was used as a solvent. Oils were diluted at 10% final volume (v/v) in 10% DMSO. All experiments were performed simultaneously with a control solution of 10% DMSO. Master stocks of bacterial strains and fungi were stored in a −80 °C freezer.
Growth Conditions and Inoculum Preparation
The bacteria and fungi were grown on agar plates, and a single colony was then transferred to a second plate. For aerobic bacteria, the incubation time was 24 h; for fungi and anaerobic bacteria, 48 h; and for microaerobic strains, 72 h. The oxygen-free atmosphere was obtained using GenBOX™ Anaer or Microaer (bioMérieux, Craponne, France). The inoculum was prepared from isolates of the second agar plate in TPS solution (tryptone 0.1% (211705, BD, Le Pont de Claix, France) and sodium chloride 0.85% (S9888, Sigma Aldrich, Saint-Quentin-Fallavier, France)) and was then diluted in an adequate broth at 5×10⁵–1×10⁶ CFU mL⁻¹.
All strains were cultivated at 37 °C except for moulds and yeasts, which were grown at 30 °C. Sabouraud dextrose agar and broth supplemented with sterile olive oil were used to grow M. furfur (UMIP 1634.86).
Bacillus atrophaeus Spore's Suspension Preparation
Plates with isolated colonies were incubated for two weeks at 37 °C and then used to prepare a highly concentrated suspension in TPS solution. The suspension was pasteurized at 80 °C for 20 min to kill all remaining vegetative cells. The spores were then counted using a Malassez cell-counting chamber, and the suspension was diluted to 10⁶ spores mL⁻¹.
Minimum Inhibitory Concentration (MIC) Determination
MIC values were measured using the broth microdilution method, according to the Clinical and Laboratory Standards Institute (CLSI) guidelines [74], with modifications to the broths used in order to meet the growth requirements of each organism. Indeed, some authors have shown that there are no significant differences, depending on the culture medium used (i.e., Sabouraud dextrose broth (SDB), RPMI or Christensen's urea broth (CUB)), in the MICs determined by the broth microdilution (BMD) method recommended by the CLSI [75]. Hence, the standard protocol was adapted for fungi as follows: fungi were diluted at 10⁵ CFU mL⁻¹ in Sabouraud broth (Candida albicans, Candida kefyr, Candida tropicalis, Aspergillus niger) or Sabouraud broth + 20% olive oil (Malassezia furfur), and plates were incubated aerobically at 30 °C and analyzed after 48 h. Then, 96-well microplates were prepared with three controls: a growth control without product, a negative control without bacteria, and a solvent control (DMSO from 2.5% to 0.0049%). Wells were prepared by serial dilutions from 2.5% to 0.0049%, with 50 µL of product and 50 µL of suspension. Microplates were then incubated under the growth conditions defined above, and MICs were read visually. The same batch of Titroleane™ was used for the whole spectrum study. A two-fold variation between two MICs, corresponding to one dilution, was considered non-significant.
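To make the dilution scheme and the read-out explicit, a minimal Python sketch of the two-fold series (2.5% down to about 0.0049% v/v) and of how a MIC would be read from per-well growth calls is given below; the growth pattern is illustrative, not a measured result.

```python
# Two-fold broth microdilution series and MIC read-out (illustrative sketch).

def dilution_series(top=2.5, n_wells=10):
    """Two-fold serial dilutions starting at `top` percent (v/v)."""
    return [top / 2 ** i for i in range(n_wells)]

def read_mic(concentrations, growth):
    """MIC = lowest concentration with no visible growth.

    `growth` lists booleans (True = visible growth), ordered from the
    highest to the lowest concentration.
    """
    inhibitory = [c for c, g in zip(concentrations, growth) if not g]
    return min(inhibitory) if inhibitory else None  # None = no MIC in the tested range

concs = dilution_series()             # [2.5, 1.25, 0.625, ..., ~0.0049]
growth = [False, False] + [True] * 8  # hypothetical well readings
print(read_mic(concs, growth))        # -> 1.25
```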
In Vitro Cytotoxicity
Human foreskin fibroblast cell lines were purchased from the ATCC collection (SCRC-1041). Cells were grown in MEM (10370-047, Gibco, Paisley, UK) supplemented with 10% heat-inactivated fetal bovine serum (10270, Gibco, Paisley, UK), 2 mM L-glutamine (P04-80100, PAN™ Biotech, Aidenbach, Germany), 100 µg mL⁻¹ ampicillin (A9518, Sigma-Aldrich, Saint-Quentin-Fallavier, France), 0.1 mU mL⁻¹ penicillin-streptomycin (P4333, Sigma-Aldrich, Saint-Quentin-Fallavier, France) and 2.5 µg mL⁻¹ amphotericin B (P06-01050, PAN™ Biotech, Aidenbach, Germany). Cells were maintained at 37 °C in a 5% CO₂ humidified atmosphere. Cytotoxicity was assessed with the Neutral Red assay for cell viability [76]. Cells were seeded in 96-well plates at a concentration of 10⁵–10⁶ cells per mL in complete MEM and incubated for 24 h. Various concentrations of samples, all prepared in 0.3% DMSO (final concentration in wells, v/v) and sonicated for 10 min before use, were then added to wells with a final volume of 200 µL. Negative (no treatment), solvent (DMSO 0.3%) and positive (0.1% SDS, BP166, Fisher Scientific, Villebon-sur-Yvette, France) controls were included. Each condition was tested three times per assay. An exposure period of 24 h was chosen for determining and comparing the in vitro cytotoxic potential of the samples. After incubation, the supernatant was removed and cells were washed three times with PBS (10010-015, Gibco, Paisley, UK) before adding Neutral Red solution (229810250, ACROS Organics, Antwerp, Belgium) prepared in complete MEM at 0.005% (v/v). The plate was then incubated for an additional 3 h. The Neutral Red solution was washed off three times with PBS, the dye was solubilized in acetic acid 1%:ethanol 50% (v/v), and the absorbance was measured at 540 nm using a microplate reader (Infinite M200 Pro, Tecan, Crailsheim, Germany). The cell survival rate was determined by comparing the absorbance values obtained with treated cells and with DMSO.
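As an illustration of the last step, the survival-rate calculation reduces to normalizing the 540 nm absorbance of treated wells to the DMSO solvent control; a minimal sketch is shown below, with made-up absorbance values chosen only to show the arithmetic.

```python
# Percent viability relative to the DMSO solvent control (Neutral Red assay).
import statistics

def survival_rate(treated_abs, solvent_control_abs):
    return 100.0 * statistics.mean(treated_abs) / statistics.mean(solvent_control_abs)

dmso_wells    = [0.82, 0.79, 0.84]   # hypothetical triplicate, solvent control
treated_wells = [0.61, 0.65, 0.63]   # hypothetical triplicate, extract at 0.025%

viability = survival_rate(treated_wells, dmso_wells)
print(f"viability {viability:.0f}%, cytotoxicity {100 - viability:.0f}%")
```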
Statistical Analysis
The data were analyzed using one-way ANOVA with Dunnett's test, which compares each condition to a control, here the solvent control [77]. Statistical analysis was carried out with the free SAS University Edition (SAS Studio version 9.4), run with Oracle VM VirtualBox version 5.1 and VMware Player version 5.0.
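For readers who prefer an open-source equivalent of this comparison-to-control analysis, a minimal sketch using scipy.stats.dunnett (available in SciPy 1.11 or later) is given below; the viability values are placeholders, not the study data.

```python
# Dunnett's many-to-one comparison of treated groups against the solvent control.
import numpy as np
from scipy import stats

solvent_control = np.array([100.0, 98.0, 102.0, 99.0, 101.0])  # hypothetical % viability
tto             = np.array([62.0, 65.0, 60.0, 64.0, 63.0])
titroleane      = np.array([75.0, 78.0, 77.0, 74.0, 79.0])

res = stats.dunnett(tto, titroleane, control=solvent_control)
print(res.pvalue)   # one adjusted p-value per treated group vs the control
```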
Abbreviations
The following abbreviations are used in this manuscript: | 5,686.8 | 2020-07-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Effects of oral florfenicol and azithromycin on gut microbiota and adipogenesis in mice
Certain antibiotics detected in urine are associated with childhood obesity. In the current experimental study, we investigated two representative antibiotics detected in urine, florfenicol and azithromycin, for their early effects on adipogenesis, gut microbiota, short-chain fatty acids (SCFAs), and bile acids in mice. Thirty C57BL/6 mice aged four weeks were randomly divided into three groups (florfenicol, azithromycin and control). The two experimental groups were administered florfenicol or azithromycin at 5 mg/kg/day for four weeks. Body weight was measured weekly. The composition of the gut microbiota, body fat, SCFAs, and bile acids in colon contents were measured at the end of the experiment. The composition of the gut microbiota was determined by sequencing the bacterial 16S rRNA gene. The concentration of SCFAs and bile acids was determined using gas chromatography and liquid chromatography coupled to tandem mass spectrometry, respectively. The composition of the gut microbiota indicated that the two antibiotics altered the gut microbiota composition and decreased its richness and diversity. At the phylum level, the ratio of Firmicutes/Bacteroidetes increased significantly in the antibiotic groups. At the genus level, there were declines in Christensenella, Gordonibacter and Anaerotruncus in the florfenicol group, in Lactobacillus in the azithromycin group, and in Alistipes, Desulfovibrio, Parasutterella and Rikenella in both the antibiotic groups. The decrease in Rikenella in the azithromycin group was particularly noticeable. The concentration of SCFAs and secondary bile acids decreased in the colon, but the concentration of primary bile acids increased. These findings indicated that florfenicol and azithromycin increased adipogenesis and altered gut microbiota composition, SCFA production, and bile acid metabolism, suggesting that exposure to antibiotics might be one risk factor for childhood obesity. More studies are needed to investigate the specific mechanisms.
Introduction
According to data released by the World Health Organization (WHO) in 2008, more than one billion adults worldwide were overweight, and among those individuals, 300 million adults were obese [1,2].Obesity is not only prevalent in adults but also in children [3,4].Obesity is related to specific major chronic diseases, such as type II diabetes mellitus, coronary heart disease, atherosclerosis, and nonalcoholic fatty liver disease [5].However, the etiology of obesity is not fully understood.Recently, the gut microbiota was found to be closely correlated with metabolic and immune function and an important contributor to adipogenesis.For example, Ley et al. found that obesity was related to intestinal microorganisms, with a decreased ratio of Bacteroidetes to Firmicutes observed in obese mice compared to lean mice from the same brood [6].Severe obesity and insulin resistance occurred in mice one week after in vivo inoculation of Enterobacter cloacae B29 that could produce endotoxins [7].The gut microbiota in human intestinal tracts was highly diverse in normal weight subjects compared with obese subjects, with Firmicutes more predominant in obese subjects [8].Obese people with low microbial gene species richness tend to gain more weight with a high fat diet than obese people with high species richness [9].
The gut microbiota has a close relationship with the body's energy homeostasis and fat metabolism, but it also tends to be affected by factors such as breastfeeding, caesarean section, and dietary habits, as well as other factors [10,11].Among them, exposure to antibiotics is a potential threat that cannot be ignored.Due to extensive use in humans and animals, antibiotics have frequently been found in aquatic environments and in food.Thus, they are listed as a class of emerging environmental pollutants [12].Compared to other environmental pollutants, antibiotics can not only do direct harm to the human body but can also interfere with the human microbiome, possibly resulting in metabolic diseases, such as obesity [13].For example, lab studies have found that certain antibiotics, such as vancomycin, penicillin and chlortetracycline, can alter the bacterial composition and endocrine activity of the gut microbiota, affect production of short-chain fatty acids (SCFAs) and the metabolism of bile acids, and increase body fat [14][15][16][17][18]. Several epidemiological studies have reported that antibiotic use in children was positively associated with obesity in children [19][20][21][22].Recently, several studies have reported extensive exposure of school-aged children to antibiotics by measuring antibiotics in urine, and certain antibiotics, such as florfenicol and trimethoprim, were related to obesity [23,24].
Antibiotics have different effects on gut microbiota due to differences in their antimicrobial spectrum and thus differ in their capacity to interfere with fat metabolism.Florfenicol and azithromycin are two typical antibiotics frequently detected in urine, and they have a potential impact on fat metabolism, but there is still a lack of laboratory data regarding their effect on adipogenesis.Since florfenicol first came to the market in Japan in the 1990s, it has been used in a number of countries to treat bacterial disease in animals including cattle, pigs, chicken and fish.Florfenicol is an amphenicol antibiotic and has a broad spectrum of antimicrobial activity that includes a wide range of gram-positive and gram-negative bacteria and mycoplasma [25].Because it is widely used in aquaculture, florfenicol is typically more abundant in water environments than other antibiotics.Azithromycin has relatively broad but shallow antibacterial activities.It inhibits some gram-positive bacteria, some gram-negative bacteria, and many atypical bacteria.Azithromycin is a second generation macrolide that is primarily used to treat respiratory tract and genital tract infections, and it has been recommended by a number of countries and regions as a first-line treatment for these types of infections [26].To support the findings in human populations, the current study aimed to explore the effects of azithromycin and florfenicol on the gut microbiota and adipogenesis in mice.
Mouse model, sample collection, and body measurement
The study was approved by the research ethics committee of Fudan University, and carried out in accordance with the relevant guidelines and regulations.To explore possible effects of early exposure to azithromycin and florfenicol on adipogenesis in mice, we chose to start the experiment when mice were four weeks old, which is in line with previous studies, because this time is considered to be a critical period for gut microbiota development in mice [14].A total of 30 three-week-old C57BL/6 mice were obtained from the Animal Experimental Center of Fudan University.The mice were randomly divided into three groups (azithromycin, florfenicol, and control), and each group included ten mice (five males and five females).The mice were separately bred in polycarbonate cages based on sex and group.Before the experiment, the mice were acclimatized to standardized laboratory conditions at a temperature of 21±1˚C, a relative humidity of 50±10%, and a 12 h light-dark cycle for one week.These conditions were maintained throughout the entire study.Antibiotics were orally administered in drinking water for four weeks starting the fourth week after birth.The drinking water was spiked with azithromycin and florfenicol at a concentration of 35 mg/L and 33 mg/L, respectively.We changed the solutions containing the antibiotics every day.According to the daily consumption by mice, the exposure dose was estimated to be 5 mg/ kg/day for each of the two antibiotics [14].Florfenicol and azithromycin dihydrate with a purity above 98% were purchased from Sigma-Aldrich Corporation (St Louis, MO, USA).During the experiment, no adverse events were observed.At the end of each week during antibiotic administration, the mice were weighed using an electronic scale.At the end of the fourth week during antibiotic administration, the body fat content of all mice was determined after excretion of feces and urine using a MesoQMR nuclear magnetic resonance analyzer for body composition analysis of conscious animals (Niumag Corporation).Mice were killed by CO 2 inhalation and cervical dislocation.Then, mice were dissected, an incision was made in the colon with a sterile scalpel, and the contents of the colon were collected into sterilized centrifuge tubes using sterile forceps.All samples were immediately flash-frozen in liquid nitrogen and stored at -80˚C until use.
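The stated 5 mg/kg/day figure follows directly from the drinking-water concentrations once a daily water intake and body weight are assumed; the short check below uses intake and weight values typical for young mice, which are assumptions for illustration rather than measurements from this study.

```python
# Dose (mg/kg/day) = water concentration (mg/L) x daily intake (L) / body weight (kg).
def daily_dose(conc_mg_per_l, intake_ml_per_day, body_weight_g):
    return conc_mg_per_l * (intake_ml_per_day / 1000.0) / (body_weight_g / 1000.0)

print(round(daily_dose(35.0, 3.0, 20.0), 1))  # azithromycin at 35 mg/L -> ~5.2 mg/kg/day
print(round(daily_dose(33.0, 3.0, 20.0), 1))  # florfenicol at 33 mg/L  -> ~5.0 mg/kg/day
```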
16S rRNA gene sequencing and data processing
Total genomic DNA was extracted from thawed colon content samples using a Powersoil DNA Extraction Kit (MoBio, Carlsbad, CA, USA) in 96-well format, and the 16S rRNA gene was amplified with barcoded fusion primers targeting the V3, V4, and V5 regions.Amplicon pools were sequenced on a 2×150 bp Illumina MiSeq platform.Two reads were paired, and paired-end reads were assembled with an overlapping region of at least 20 bp guaranteed; reads containing N were removed.Primers and adaptor sequences, bases with a quality of less than 20 at both ends, and sequences with a length of less than 400 bp were excluded.After pairing the abovementioned sequences, high-quality sequences were classified into multiple operational taxonomic units (OTUs) according to sequence similarity (> 97%).To make the data more interpretable, we edited the OTUs according to their representation among the samples.We ranked the abundance of OTUs from high to low, and OTUs after the top 30 were classified into "Others".This classification reduced the noise of amplicon datasets and avoided spurious associations when there was a preponderance of zero counts.Taxonomic assignment and diversity calculations were also conducted.The distance between samples was calculated using the evolution and abundance information in the sequence of each sample to reflect whether there was a significant difference in the microbial community between samples.Linear discriminant analysis effect size (LEfSe) was employed to detect significant differences in relative abundance of microbial taxa between groups at the species level.
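The "top 30 OTUs plus Others" reduction described above is a simple table operation; a minimal pandas sketch is given below, with hypothetical file and column layout, to make the step concrete.

```python
# Collapse an OTU count table to relative abundances of the 30 most abundant OTUs + "Others".
import pandas as pd

def collapse_top_otus(counts: pd.DataFrame, top_n: int = 30) -> pd.DataFrame:
    """counts: rows = OTUs, columns = samples (raw read counts)."""
    rel = counts.div(counts.sum(axis=0), axis=1)              # per-sample relative abundance
    ranking = rel.mean(axis=1).sort_values(ascending=False)   # rank OTUs by mean abundance
    top = ranking.index[:top_n]
    collapsed = rel.loc[top].copy()
    collapsed.loc["Others"] = rel.drop(index=top).sum(axis=0)
    return collapsed

# counts = pd.read_csv("otu_table.csv", index_col=0)   # hypothetical OTU table
# abundances = collapse_top_otus(counts)
```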
For SCFAs, 50.0 mg of the colonic contents was weighed and EBA was added as an internal standard; the sample was then completely homogenized with 1.0 ml of 0.5 M oxalic acid for 5 min and centrifuged at 8000 r/min for 10 min. The supernatant was filtered through a 0.22 μm nylon membrane, and 1 μl of the filtrate was analyzed using a capillary gas chromatograph equipped with a flame ionization detector (FID) (GC-2010 Plus, Shimadzu). The SCFAs were separated on a fused-silica capillary column with a free fatty acid phase (DB-FFAP), 30 m × 0.53 mm i.d., coated with a 1.0 μm film thickness. Nitrogen was used as the carrier gas at an initial flow rate of 6.44 ml/min in constant pressure mode. The initial oven temperature of 50˚C was maintained for 1.0 min, raised to 180˚C at 20˚C/min and held for 1.0 min, and then increased to 200˚C at 20˚C/min and held for 2 min. Glass wool was inserted into the glass liner of the injection port, and the split injection mode was used with a split ratio of 30 to 1. The FID and injection port temperatures were 240 and 200˚C, respectively. The flow rates of the hydrogen, air and nitrogen supplied to the FID were 40, 400 and 30 ml/min, respectively. The SCFAs were quantified using the internal standard curve method.
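As a worked illustration of the internal standard curve method mentioned above, quantification reduces to converting the analyte/internal-standard peak-area ratio through a linear calibration; the sketch below uses invented peak areas and an invented calibration slope purely to show the arithmetic.

```python
# Internal-standard quantification from GC-FID peak areas (illustrative numbers).
def quantify(area_analyte, area_internal_standard, slope, intercept=0.0):
    """Concentration from the analyte/IS area ratio via a linear calibration curve."""
    ratio = area_analyte / area_internal_standard
    return (ratio - intercept) / slope

# e.g. acetate peak area 152000, EBA (internal standard) peak area 98000,
# hypothetical calibration slope of 0.012 area-ratio units per unit concentration
print(round(quantify(152_000, 98_000, 0.012), 1))
```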
For bile acids, after 50.0 mg of colonic content was weighed and isotope-labeled internal standards (LCA-d4 and CA-d5) were added, the bile acids were completely homogenized with 1 ml of 0.2 M NaOH for 20 min and centrifuged at 8000 r/min for 10 min.The extraction step was repeated three times, and the supernatants were combined.After the Oasis Prime HLB cartridge (60 mg/3 cc, Waters) was conditioned with 2 ml of methanol and 2 ml of water, the combined supernatants were loaded.The cartridge was washed with 2 ml of 20% methanol in water solution, and bile acids were eluted with 3 ml of acetonitrile:methanol (90:10) containing 1% formic acid.The eluent was evaporated with a weak nitrogen flow, and the bile acids were reconstituted with 0.5 ml of 50% methanol water solution.After the reconstituted solution was filtered through a 0.22 μm nylon membrane, 10 μl of the filtrate was analyzed using ultra-performance liquid chromatography coupled with high-resolution quadrupole time-of-flight mass spectrometry (SYNAPT G2, Waters Micromass, Manchester, UK).Bile acids were separated on an analytical column (Acquity UPLC HSS T3 column, 100 mm × 3.0 mm × 1.8 μm) at a column temperature of 50˚C with a mobile phase of methanol and water containing 0.1% formic acid at a flow rate of 0.55 ml/min.The linear elution program was as follows: from 0 to 3.00 min, increased to 35% from 5% methanol; from 3.00 to 6.00 min, increased to 65% methanol; from 6.00 to 8.00 min, increased to 80% methanol; from 8.00 to 9.00 min, increased to 90% methanol; from 9.00 to 10.00 min, increased to 95% methanol; from 10.00 to 10.50 min, increased to 98% methanol; from 10.50 to 11.00 min, decreased to 5% methanol and maintained until 12.00 min.For mass spectrometry, nitrogen and argon were used as the desolvation and collision gas, respectively.The flow rate of the desolvation gas was set to 800 L/h with a temperature of 400˚C.The capillary voltage was set to 2.8 kV, the cone gas was set to 30 L/h, the source temperature was set to 110˚C, and the sampling cone voltage was set to 30 v. The bile acids were quantified using the isotope-dilution method, and the calibration curve was based on the peak area of the extracted precursor ion chromatogram for each bile acid from the total ion chromatogram at a mass window of 0.02 Da.
Statistical analysis
Quantitative data are reported as the mean ± standard error. The Kolmogorov-Smirnov D test was used to ascertain the normality of the data. Statistical differences in body composition and in the colonic SCFA and bile acid contents between the groups were tested with one-way analysis of variance (one-way ANOVA) and multiple t-tests with Bonferroni correction for continuous variables. All analyses were conducted for males and females separately. A p-value less than 0.05 was considered statistically significant. All statistical analyses were performed using R 3.1.1 software [28]. Reads were assembled using PANDAseq (v.2.7) [29]. Trimmomatic (v.0.30) was used to filter primers and adapter sequences [30]. USEARCH (v.8.0) was employed to pair assembled and filtered reads [31]. The QIIME pipeline with the RDP classifier Bayesian algorithm was used for taxonomic assignment against the SILVA_119 16S rRNA database [32,33]. OTU classification, UniFrac analysis, and calculation of diversity metrics were also conducted with the QIIME pipeline. Linear discriminant analysis effect size (LEfSe) was conducted using Galaxy Online software [34,35].
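A minimal sketch of this testing workflow (normality check, one-way ANOVA across the three groups, then pairwise t-tests with a Bonferroni-adjusted threshold) is given below in Python; the body-fat values are placeholders, not the study data.

```python
# Normality check, one-way ANOVA, and Bonferroni-corrected pairwise t-tests.
import numpy as np
from itertools import combinations
from scipy import stats

groups = {
    "control":      np.array([8.7, 8.5, 9.0, 8.6, 8.8]),      # hypothetical % body fat
    "florfenicol":  np.array([10.6, 10.4, 10.9, 10.5, 10.7]),
    "azithromycin": np.array([10.2, 10.0, 10.4, 10.1, 10.3]),
}

for name, x in groups.items():                 # Kolmogorov-Smirnov normality check
    p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue
    print(f"{name}: KS p = {p:.3f}")

print("ANOVA p =", stats.f_oneway(*groups.values()).pvalue)

pairs = list(combinations(groups, 2))
alpha_adj = 0.05 / len(pairs)                  # Bonferroni-corrected threshold
for a, b in pairs:
    p = stats.ttest_ind(groups[a], groups[b]).pvalue
    print(f"{a} vs {b}: p = {p:.4g}, significant = {p < alpha_adj}")
```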
Body weight and composition
As shown in Fig 1, there was no significant difference in weight among the three groups at the beginning of the experiment. The antibiotic groups gained more weight than the control group during the 4-week period, especially the males. The weight gain between week 1 and week 4 was larger in the florfenicol group than in the azithromycin group. The average percent body fat in the florfenicol group (10.62% ± 0.34%) and azithromycin group (10.20% ± 0.28%) was significantly higher than that in the control group (8.73% ± 0.22%). The differences in body fat between the antibiotic groups and the control group were larger in males than in females. The difference in weight gain between the florfenicol group and the control group was also larger in males than in females. The absolute value and percentage of lean mass were similar for antibiotic-treated and control mice.
General OTU information
A total of 14,226,262 16S rDNA sequence reads covering the V3, V4, and V5 regions were obtained from the 30 samples. The number of sequence reads per sample ranged from 368,026 to 620,382, with an average of 474,208, and the average read length was 445 bp. The taxa detected across samples were classified into 12 phyla, 23 classes, 33 orders, 61 families, 125 genera and 255 species. Overall, twelve phyla were identified in the study groups. Among them, five phyla were commonly found in each group, and three phyla were dominant at >1% relative abundance but varied in relative abundance with sex and antibiotic treatment. The majority of sequencing reads at the phylum level belonged to Firmicutes, Bacteroidetes, and Proteobacteria, which dominated the bacterial community in each of the studied groups. A total of 61 families were detected. Among them, nine families were commonly present, and Lachnospiraceae, Ruminococcaceae, and Helicobacteraceae were the dominant families detected. A total of 125 OTUs were identified at the genus level. The highest number of genera (125) was detected in the male control group, whereas the lowest (76) was found in the female mice treated with azithromycin. Antibiotics significantly decreased the number of genera in each of the antibiotic treatment groups. A total of 255 different species were found, and the highest number of species was present in the control group.
Diversity of gut microbiota
There are two types of alpha diversity indexes, community richness indexes (Chao, ACE) and community diversity indexes (Shannon).Fig 2 shows that the control group had higher richness and diversity index levels than the antibiotic groups (both males and females), and there were no significant sex-related differences.Good's coverage, a measure of sampling completeness, ranged between 91.86% and 96.89% at a 97% similarity level.Beta diversity reflects diversity differences among samples.UniFrac principal coordinates analysis (PCoA) of 15,034 OTUs (grouped at 97% sequence similarity) showed a clear separation between the control and antibiotic samples using an unweighted analysis (Fig 3).The percentages of variation represented by PC1 and PC2 were 26.31% and 15.26%, respectively.In the two antibiotic groups, the male and female groups were also well separated.
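For concreteness, the two kinds of alpha-diversity indices named above can be computed directly from a vector of per-OTU read counts for one sample; the sketch below implements the Shannon index and the bias-corrected Chao1 estimator on an illustrative count vector.

```python
# Shannon diversity and (bias-corrected) Chao1 richness from per-OTU counts.
import math

def shannon(counts):
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def chao1(counts):
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)   # singletons
    f2 = sum(1 for c in counts if c == 2)   # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

sample = [120, 80, 40, 10, 5, 2, 1, 1, 1]   # hypothetical OTU counts for one sample
print(round(shannon(sample), 3), round(chao1(sample), 1))
```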
Composition of gut microbiota
There were substantial variations in the composition of the gut microbiota associated with antibiotics and sex. Scoring and ranking were carried out for these differences. In females, Fig 4a shows that, at the phylum level, the relative abundance of Firmicutes increased and the relative abundance of Bacteroidetes and Proteobacteria decreased significantly in the azithromycin group compared with the control group. The relative abundance of Verrucomicrobia increased and that of Deferribacteres decreased significantly in the florfenicol group compared with the control group. In males, the relative abundance of Firmicutes increased while that of Bacteroidetes decreased to nearly zero in the azithromycin group compared with the control group. For florfenicol-treated mice, the relative abundance of Firmicutes was slightly higher in males than in females. For azithromycin-treated mice, females had higher relative abundance levels of Bacteroidetes and Proteobacteria and a lower relative abundance level of Firmicutes than males.
At the family level, Fig 4b shows that in females the relative abundance of Lachnospiraceae, Ruminococcaceae, and Porphyromonadaceae increased, while that of Helicobacteraceae, Rikenellaceae, and Bacteroidaceae decreased significantly in the azithromycin group compared with the control group. In the florfenicol group, the relative abundance of Bacteroidaceae and Verrucomicrobiaceae increased, whereas the relative abundance of Ruminococcaceae, Helicobacteraceae, Rikenellaceae and Porphyromonadaceae decreased compared with the control group. After florfenicol treatment, males had a lower relative abundance of Bacteroidaceae than females. LEfSe determines the features most likely to explain differences between groups by coupling standard tests for statistical significance with additional tests encoding biological consistency and effect relevance. At the genus level, Christensenella, Gordonibacter, and Anaerotruncus decreased in the florfenicol group; Lactobacillus decreased in the azithromycin group; Alistipes, Desulfovibrio, Parasutterella and Rikenella declined in both antibiotic groups; and the decrease of Rikenella in the azithromycin group was particularly noticeable. Furthermore, LEfSe analysis identified other representative species of each group as biomarkers to distinguish the different groups (Fig 5).
SCFAs and bile acids
As shown in Fig 6, the two antibiotics decreased the concentration of some SCFAs in the colonic contents, and florfenicol showed a stronger effect compared with azithromycin. Florfenicol decreased the concentration of some SCFAs in the colonic contents, especially AA, NBA, and NVA, with a 10-fold lower concentration than that in the control group. However, for azithromycin, only PPA and NVA showed decreased concentrations. There were no significant differences between the male and female groups.
In males, the mice treated with florfenicol had a higher concentration of CDCA and lower concentrations of LCA, DCA, UDCA and HDCA compared with the control group. The concentrations of all bile acids except CDCA and CA in the mice treated with azithromycin decreased compared with the control group, and the mice treated with azithromycin showed no significant difference compared with those treated with florfenicol. There were some differences between the male and female groups. Compared with females, males had lower concentrations of all bile acids in the azithromycin group and lower concentrations of secondary bile acids (LCA, DCA, UDCA, HDCA) in the florfenicol group.
Discussion
In this study, azithromycin and florfenicol were found to increase adipogenesis in mice and to alter the overall abundance, diversity and composition of the gut microbiota, the production of SCFAs, and the metabolism of bile acids in colonic contents. The changes in gut microbiota showed some antibiotic- or sex-specific differences. To the best of our knowledge, this is the first time the effects of azithromycin and florfenicol on gut microbiota composition and adipogenesis in mice have been explored.
We observed that antibiotics disturbed the gut microbiota composition. Most previous studies support these findings. For example, oral administration of amoxicillin, cefotaxime, and vancomycin decreased the abundance and altered the composition of the gut microbiota in rats [17]. The gut microbiota diversity indicated by the Shannon index was lower in both antibiotic groups than in the control group, suggesting that the antibiotics reduced not only the richness but also the diversity of the gut microbiota, which is in line with results from previous studies [17,36]. Our study also found that azithromycin decreased the relative abundance of Bacteroidetes and Proteobacteria and increased the relative abundance of Firmicutes, which corresponds with the results of most other studies [6,37]. However, in Khan's study, the Bacteroidetes/Firmicutes ratio increased [36]. Differences in dose, type of mice and the duration of the experiments may be the reason for this inconsistency. Azithromycin increased the relative abundance of Parabacteroides and decreased the relative abundance of Desulfovibrio in the female group, but the opposite results were observed in the male group. This might be related to the differences in endocrine activity and genetic background between male and female mice and needs to be further explored.
Both florfenicol and azithromycin decreased the concentrations of SCFAs in the colons of mice, but some differences were seen between the two antibiotics. This may be related to the changes in the relative abundance and number of SCFA-producing bacteria in the gut microbiota induced by these antibiotics. For example, the decrease in the relative abundance of the dominant propionate-producing Bacteroidetes in the group treated with azithromycin could have caused the reduced concentration of propionate in the colonic samples [38]. However, the relative abundance of acetate-producing Blautia increased in all the antibiotic groups except for the male mice treated with florfenicol, yet the concentration of acetate in these groups did not increase [39]. This may be explained by the possibility that the decrease in other acetate-producing bacteria, such as Ruminococcaceae, or the effect of the decreased absolute amount of Blautia on acetate production, exceeded the effect of the increased relative abundance of acetate-producing Blautia [38]. Moreover, the relative abundance of butyrate-producing Roseburia and Anaerotruncus increased in mice treated with azithromycin but not in mice treated with florfenicol, which might explain the discrepancy in butyrate concentration between the two experimental groups [40].
Some human studies reported a decreased concentration of SCFAs induced by antibiotics, which supports our findings. Høverstad et al. reported that oral intake of particular antibiotics reduced the concentration of SCFAs in fecal samples under a therapeutic dose [41]. Young also found that amoxicillin under a therapeutic dose decreased the concentration of butyrate in fecal samples from men [42]. However, Cho's result was inconsistent with our findings; in his study, penicillin and vancomycin at an exposure dose of 1 μg per g body weight increased most of the SCFAs in mice [14]. The possible reason for this could be the different exposure dose and type of antibiotic. In our study, a therapeutic dose was used, which reduced the total number of SCFA-producing bacteria and then decreased the concentration of SCFAs in the gut.
We found that florfenicol and azithromycin increased the concentrations of primary bile acids in the colon but decreased the concentrations of secondary bile acids, suggesting downregulation of the 7-dehydroxylation process. Some studies have also reported similar effects for other antibiotics, such as vancomycin. In a single-blinded randomized controlled trial with 20 male subjects, 7 days of vancomycin t.i.d. was found to decrease fecal secondary bile acids and increase primary bile acids in plasma and fecal samples under the therapeutic dose [43,44]. The 7-dehydroxylation activity of Clostridium and Eubacterium in the gut microbiota is the key process that turns primary bile acid into secondary bile acid [45]. However, these two bacterial genera had extremely low relative abundances in the gut microbiota of our experimental mice, suggesting there might be other undiscovered bacteria responsible for 7-dehydroxylation. For example, other Clostridium family microbiota, such as Ruminococcaceae and Lachnospiraceae, might share similar functions with Clostridium scindens, and the relative abundances of these bacteria were also decreased by florfenicol and azithromycin [46]. Another possible explanation could be that florfenicol and azithromycin decreased the total level of gut microbiota in our study, as indicated by the Chao and ACE index, and they reduced the number of all bacteria involved in secondary bile acid synthesis. Sayin et al. found that germ-free mice had a higher gene expression level of primary bile acid biosynthesis compared with normal C57BL/6 mice, which supports the above explanation [27]. Furthermore, there are other reactions related to bile acid metabolism, such as deconjugation, oxidation and epimerization, and Bacteroidaceae and Lachnospiraceae are the main taxa that participate in these reactions [47]. In a human study, after subjects were treated with rifaximin b.i.d. for 8 weeks, the relative abundance of Bacteroidaceae and Lachnospiraceae in fecal samples was negatively correlated with primary bile acids and secondary/primary bile acid ratios in serum samples [46]. This finding is consistent with our results, indicating that in addition to the 7-dehydroxylation process other reactions, possibly involving Bacteroidaceae and Lachnospiraceae, also affect the production of secondary bile acid [48].
In our study, male mice in the florfenicol group had lower concentrations of secondary bile acids and male mice in the azithromycin group had lower concentrations of all bile acids. Gut microbiota and sex hormones may contribute to this difference. Males in the florfenicol and azithromycin group exhibited lower concentrations of Bacteroides and Lactobacillus, respectively. These taxa have been associated with bile acid homeostasis, and decreased relative abundance may result in dysfunction of bile acid synthesis [49]. In addition, the sex hormone in males is correlated with bile acid concentration, which might result in a decrease in the excretion of bile acid [50]. The reason for the sex-associated differences in the alteration of bile acids caused by antibiotics is not clear and should be investigated in future studies.
In this study, both azithromycin and florfenicol were observed to increase adipogenesis in mice, and growth was accelerated in male mice, which is supported by most previous studies. For example, both of the antibiotics decreased the relative abundance of Lactobacillus, and this was consistent with the finding by M. Cox et al. In their study, when the same amount of food was ingested, the penicillin group with a lower relative abundance of Lactobacillus exhibited more weight gain than the control group [51]. We found that the increased relative abundance of Rumen caused by azithromycin was associated with adipogenesis in mice. A previous study demonstrated that individuals with a higher abundance of Rumen bacteria had a higher incidence of insulin resistance, fatty liver and low-grade inflammation [52]. Azithromycin decreased the relative abundance of Akkermansia muciniphila, which could lead to an increase in intestinal inflammation and obesity [53]. The Shannon and Chao index and the body fat measurements of mice in our study also suggested that individuals with low bacterial richness had higher overall body fat than those with high bacterial richness [54,55]. Moreover, some human studies have also found a strong association between antibiotic exposure and obesity in boys [20,23,56,57].
SCFAs and bile acids are important metabolites in fat metabolism. They not only adjust the body's energy intake but also affect fat metabolism [58,59]. Our study found that body fat rate was significantly higher and the SCFA content was significantly lower in both of the antibiotic groups compared with the control group. The decreased concentration of SCFAs may contribute to obesity as follows. First, because SCFAs can serve as not only a source of energy but also the main nutrients for intestinal cells to maintain intestinal homeostasis, a decrease in SCFAs may lead to nutrition deficiency in intestinal cells and decreased defense, which leads to increased susceptibility to low-level inflammation, eventually leading to the occurrence of obesity [60]. Second, some SCFAs are negatively correlated with the occurrence of obesity. For example, butyric acid and propionic acid can promote gluconeogenesis through the cyclic AMP pathway and brain-gut neural circuits and inhibit the accumulation of fat in adipose tissue [61,62]. Therefore, a decrease in the production of these SCFAs may lead to down-regulation of related metabolic pathways, resulting in the occurrence of obesity.
An increased ratio of primary/secondary bile acids was found in our study, and this increase may lead to obesity via several pathways. First, primary bile acids have been reported to have strong antimicrobial activity and could cause a major shift in gut microbiota composition toward Firmicutes and against Bacteroidetes [63]. The ratio of Firmicutes to Bacteroidetes increased significantly in our study, which is a sign of obesity [64,65]. Second, the increase in the primary/secondary bile acid ratio could also lead to a decrease in basal metabolic rate [58,66]. In the case of the same energy intake and daily activities, a lower basal metabolic rate leads to more fat accumulation in the body. In addition to the well-established roles of bile acids in dietary lipid absorption and cholesterol homeostasis, they act as signaling molecules with systemic endocrine functions [67,68]. Alterations in bile acid metabolism can lead to insulin resistance and obesity [43]. For example, bile acids activate mitogen-activated protein kinase pathways and are ligands for the G-protein-coupled receptor TGR5, and secondary bile acids have a much stronger activating capacity for TGR5 than primary bile acids [69]. Downregulation of TGR5 could disrupt energy metabolism and lead to obesity [70].
Conclusions
We found that azithromycin and florfenicol altered gut microbiota composition, the production of SCFAs, and the metabolism of bile acids in colonic contents and increased adipogenesis in mice. Some effects showed a sex difference. These findings support the hypothesis that exposure to antibiotics is one important risk factor for childhood obesity, and the effects may be mediated by gut microbiota. More lab studies are needed to investigate specific mechanisms and to confirm the role of gut microbiota in adipogenesis induced by antibiotics.
Fig 1 .
Fig 1. The weight change within four weeks and body composition after four weeks. One-way analysis of variance (one-way ANOVA) and multiple t-tests with Bonferroni correction were employed to measure differences in weight gain, body fat (g), body fat rate (%), lean weight (g), and lean rate (%) between the three groups. * indicates that the difference between the two line-linked groups was statistically significant (p < 0.05). https://doi.org/10.1371/journal.pone.0181690.g001
Fig 2 .
Fig 2. ACE index and Shannon index in the samples from the normal and antibiotic-treated groups. One-way analysis of variance (one-way ANOVA) and multiple t-tests with Bonferroni correction were employed to measure differences between the three groups. The ACE index represents the community richness of the gut microbiota, and the Shannon index represents the community diversity of the gut microbiota. * indicates that the difference between the two line-linked groups was statistically significant (p < 0.05).
Fig 3 .Fig 4 .
Fig 3. The multivariate principal coordinate data analysis of the groups. F1, female control group; F2, female mice treated with florfenicol; F3, female mice treated with azithromycin; M1, male control group; M2, male mice treated with florfenicol; M3, male mice treated with azithromycin. Each group has ten mice (5 male and 5 female subjects). PC1 and PC2 are the two principal coordinate components. PC1 indicates the maximum possible interpretation of the main components of the data variations. PC2 represents the largest proportion of interpretation of remaining variations, etc. https://doi.org/10.1371/journal.pone.0181690.g003
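For readers unfamiliar with the ordination shown in this figure, the sketch below illustrates how classical principal coordinate analysis (PCoA) derives PC1 and PC2 from a sample-to-sample distance matrix. The toy distance matrix is invented, and the actual analysis would typically start from Bray-Curtis or UniFrac distances between the sequenced samples; this is a generic illustration, not the pipeline used in the study.

```python
import numpy as np

def classical_pcoa(D, k=2):
    """Classical PCoA (metric MDS) on a symmetric distance matrix D (n x n)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)                 # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    coords = vecs[:, :k] * np.sqrt(np.maximum(vals[:k], 0))   # PC1, PC2, ...
    explained = vals[:k] / vals[vals > 0].sum()    # fraction of variation per axis
    return coords, explained

# Toy distance matrix between four samples (illustration only).
D = np.array([[0.0, 0.3, 0.7, 0.8],
              [0.3, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.2],
              [0.8, 0.7, 0.2, 0.0]])
coords, explained = classical_pcoa(D)
print(coords)
print("variation explained by PC1, PC2:", explained)
```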
Fig 5 .
Fig 5. LEfSe results in the three groups, representing the relevant features on taxonomic trees. AZM, mice treated with azithromycin; FFC, mice treated with florfenicol. Due to the length of bacterial names, abbreviations are employed under the family level. Each group has 10 mice (5 male and 5 female subjects). https://doi.org/10.1371/journal.pone.0181690.g005
Fig 6 .
Fig 6. Concentration of SCFAs and bile acids in samples from the normal and antibiotic-treated groups. One-way analysis of variance (one-way ANOVA) and multiple t-tests with Bonferroni correction were employed to measure the concentration differences in bile acids and SCFAs between the three groups. * indicates that the difference between the two line-linked groups was statistically significant (p < 0.05). | 7,490 | 2017-07-25T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Knockdown of Stanniocalcin-1 inhibits growth and glycolysis in oral squamous cell carcinoma cells
Abstract Oral squamous cell carcinoma (OSCC) is the most common malignancy among head and neck squamous cell carcinomas. Targeted therapy plays a crucial role in the treatment of OSCC. However, new targets still need to be developed. Stanniocalcin-1 (STC-1) is a glycoprotein hormone that affects the progression of cancers. However, the potential role of STC-1 in OSCC progression remains to be explored. Here, we aimed to elucidate the role of STC-1 in OSCC. We revealed that STC-1 was highly expressed in OSCC tissues and correlated with poor patient prognosis. Knockdown of STC-1 significantly suppressed the growth of OSCC cells and restrained glycolysis by reducing glucose consumption, ATP production, and lactate levels. Mechanistically, STC-1 ablation inhibited the PI3K/Akt pathway, reducing the phosphorylation levels of PI3K and Akt. In conclusion, STC-1 depletion restrained OSCC cell growth and glycolysis via the PI3K/Akt pathway and has the potential to serve as a therapeutic target for OSCC.
Introduction
Oral squamous cell carcinoma (OSCC) is the most common type of malignancy in head and neck squamous cell cancer [1]. Despite extensive research efforts to identify pathogenic factors and develop new treatments, the overall prognosis remains poor [2]. The occurrence of metastasis is a major factor contributing to poor survival in patients with OSCC and has become a prominent research focus in studies investigating potential treatments for OSCC [3]. Targeted therapy plays a crucial role in the treatment of OSCC by targeting specific molecular markers or signaling pathways to inhibit tumor growth and spread. For example, targeted drugs against the EGFR, such as erlotinib and cetuximab, have been used in the treatment of OSCC and have shown some efficacy [4,5]. However, new targets still need to be developed.
Although normal cells rely primarily on oxidative phosphorylation for energy, tumor cell growth is predominantly fueled by glycolysis even in the presence of oxygen, a process known as aerobic glycolysis, which is a fundamental biochemical characteristic of cancer cells [6,7]. Targeting cancer cell glycolysis is considered a promising approach for therapeutic interventions [6,7]. The PI3K/Akt pathway is vital in various cellular functions such as movement, autophagy, and cancer progression, and it is also critical in regulating metabolic adaptations that support cell growth [8]. Akt is considered a significant driver of the glycolytic phenotype in tumor cells, making them dependent on glycolysis for survival [9]. Akt enhances glucose uptake and glycolysis by increasing the expression of GLUT1, activating glycolytic enzymes, and regulating the expression, activity, and mitochondrial interaction of HK [9].
Stanniocalcin-1 (STC-1) is a glycoprotein hormone implicated in the progression of various cancers [10]. STC-1 expression has been found during many developmental and pathophysiological processes in mammals [11,12]. Recent studies have shown that STC-1 promotes tumor angiogenesis, glycolysis, and metastasis in several malignancies, including breast, lung, and gastric cancers. Knockdown of STC-1 inhibits glycolysis in prostate cancer [13,14,16]. STC-1 promotes tumor angiogenesis by upregulating VEGF in gastric cancer cells [15]. STC-1 enhances the stem-like characteristics of glioblastoma cells by binding to and activating NOTCH1 [17]. However, the potential role of STC-1 in OSCC progression remains to be explored.
In this study, we aimed to elucidate the role of STC-1 in OSCC and explore the therapeutic potential of targeting STC-1 in OSCC treatment. We found that STC-1 is highly expressed in oral squamous cell carcinoma cells, and that STC-1 knockdown can inhibit OSCC cell growth and glycolysis, which is related to the regulation of the PI3K/Akt pathway.
2 Materials and methods
Bioinformatics
Transcriptome data for STC-1 in OSCC were obtained from The Cancer Genome Atlas (TCGA) database using the Genomic Data Commons Data Portal. Differential expression analysis was conducted between normal oral tissues (n = 40) and OSCC tissues (n = 520) using DESeq2 software. Statistical significance was evaluated using an adjusted p-value threshold of <0.05. Survival analysis was performed using data from the UALCAN database (https://ualcan.path.uab.edu/), where patients were categorized into high and low STC-1 expression groups based on median STC-1 expression levels. Kaplan-Meier survival curves were generated, and the log-rank test was used to determine significance (p < 0.05).
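UALCAN provides the survival curves directly, but the median-split Kaplan-Meier analysis described above can be reproduced offline roughly as follows. This is a hedged Python sketch: the file name and column names are assumptions for illustration only, not a fixed format from the study.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical table: one row per patient with STC-1 expression (TPM),
# overall survival time (months) and an event indicator (1 = death).
df = pd.read_csv("oscc_clinical_expression.csv")      # assumed file and columns
high = df["STC1_TPM"] >= df["STC1_TPM"].median()       # median split, as in UALCAN

kmf = KaplanMeierFitter()
ax = kmf.fit(df.loc[high, "os_months"], df.loc[high, "event"],
             label="STC-1 high").plot_survival_function()
kmf.fit(df.loc[~high, "os_months"], df.loc[~high, "event"],
        label="STC-1 low").plot_survival_function(ax=ax)

res = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print(f"log-rank p = {res.p_value:.3g}")
```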
Edu assay
HSC-3 and CAL-27 cells were incubated with EdU reagent (Abcam) for 2 h, after which the reagent was removed. EdU-positive cells were photographed using a fluorescence microscope (Zeiss, Germany) and quantified as a percentage of total cells.
Glucose intake, lactate, and ATP production test
The glycolysis levels of the cells were assessed using kits for glucose intake (ab136955), lactate production (ab65330), and cellular ATP levels (ab83355; Abcam, Cambridge, UK).
Statistics
GraphPad 8.0 software was used for all analyses. Data are presented as mean ± standard deviation. Statistical significance was evaluated using Student's t-test or analysis of variance, and p < 0.05 was considered statistically significant.
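As an illustration of the statistical comparisons described above, the following Python sketch runs a one-way ANOVA across three groups and pairwise Student's t-tests against the control, with an optional Bonferroni adjustment when several comparisons are made. All numerical values are invented and the group names are placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical OD450 readings for three groups (illustration only).
sh_nc     = np.array([1.02, 0.98, 1.05, 1.00])
sh_stc1_1 = np.array([0.61, 0.66, 0.58, 0.63])
sh_stc1_2 = np.array([0.55, 0.59, 0.62, 0.57])

# One-way ANOVA across the three groups.
f, p_anova = stats.f_oneway(sh_nc, sh_stc1_1, sh_stc1_2)
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4g}")

# Pairwise Student's t-tests against control, Bonferroni-adjusted.
pairs = {"sh-STC-1#1": sh_stc1_1, "sh-STC-1#2": sh_stc1_2}
m = len(pairs)
for name, grp in pairs.items():
    t, p = stats.ttest_ind(sh_nc, grp)
    print(f"{name}: raw p = {p:.4g}, Bonferroni-adjusted p = {min(p * m, 1.0):.4g}")
```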
STC-1 was highly expressed in human OSCC tissues and cells
To determine the role of STC-1 in OSCC, we first examined its expression in OSCC tissues. We noticed an elevated transcripts per million (TPM) value of STC-1 in OSCC tissues (Figure 1a). Further, through the UALCAN database, we noticed that the expression of STC-1 was correlated with poor prognosis (survival) of patients with OSCC (p = 0.02, Figure 1b). Subsequently, we examined the expression of STC-1 in OSCC cell lines, including HSC-3 and CAL-27 cells, and the normal oral keratinocyte cell line HOK. Through quantitative polymerase chain reaction (qPCR) assays, we revealed high mRNA levels of STC-1 in the OSCC cell lines (Figure 1c). Similarly, immunoblot assays confirmed that STC-1 was upregulated in HSC-3 and CAL-27 cells compared to HOK cells (Figure 1d). Therefore, STC-1 was highly expressed in OSCC tissues and cells.
The depletion of STC-1 suppressed the growth of OSCC cells
We next used shRNAs to investigate the role of STC-1 in the growth of OSCC cells, including HSC-3 and CAL-27 cells. STC-1 was downregulated in these cells by transfection of STC-1 shRNAs, sh-STC-1#1 and sh-STC-1#2. Through immunoblot assays, we noticed that the expression of STC-1 was significantly decreased upon transfection of these shRNAs (Figure 2a). Interestingly, CCK-8 assays confirmed that the downregulation of STC-1 restrained the growth of HSC-3 and CAL-27 cells, with decreased OD450 values at 1, 2, and 3 days (Figure 2b). Consistently, EdU assays showed that the percentage of EdU-positive cells was decreased upon knockdown of STC-1 in HSC-3 and CAL-27 cells, suggesting the suppression of cell growth (Figure 2c). Collectively, STC-1 knockdown suppressed the growth of OSCC cells.
STC-1 ablation restrained the glycolysis of OSCC cells
Then, we examined the effects of STC-1 knockdown on glycolysis in HSC-3 and CAL-27 cells. Using the glucose consumption kit, we noticed that knockdown of STC-1 suppressed glucose consumption in HSC-3 and CAL-27 cells (Figure 3a). Furthermore, we noticed that STC-1 knockdown suppressed ATP production in these OSCC cells (Figure 3b). Our results further confirmed that lactate production was suppressed upon STC-1 knockdown in HSC-3 and CAL-27 cells (Figure 3c). Consistently, immunoblot assays confirmed that STC-1 knockdown restrained the expression of HK2 and LDHA, two markers of glycolysis, in OSCC cells (Figure 3d). Therefore, STC-1 knockdown restrained the glycolysis of OSCC cells.
STC-1 knockdown suppressed the PI3K/Akt pathway in OSCC cells
Finally, the possible mechanism underlying STC-1 knockdown suppressing OSCC progression was investigated. Through immunoblot assays, we detected the phosphorylation levels of PI3K and Akt in both HSC-3 and CAL-27 cells. Interestingly, we noticed that STC-1 ablation suppressed the phosphorylation of both PI3K and Akt in HSC-3 and CAL-27 cells, suggesting the suppression of the PI3K/Akt pathway (Figure 4). Therefore, we believe that STC-1 knockdown suppressed the PI3K/Akt pathway in OSCC cells.
Discussion
OSCC remains a significant global health challenge due to its high incidence and mortality rates [4]. Current treatments for OSCC include surgery, radiotherapy, and chemotherapy, but the outcomes are often not satisfactory, with a high rate of recurrence and metastasis [1]. There is an urgent need for the development of targeted therapies for OSCC, which requires the identification and validation of specific molecular targets [2]. The selection of these targets is crucial for improving therapeutic efficacy, reducing side effects, and improving the overall survival and quality of life of OSCC patients [5]. Key areas of focus include EGFR inhibitors like cetuximab and erlotinib, immunotherapies targeting PD-1/PD-L1 pathways such as pembrolizumab and nivolumab, and exploration of other molecular targets like VEGF, HER2, and the PI3K/AKT/mTOR pathway [18]. Efforts are also underway to personalize treatment based on individual tumor profiles [18]. However, the integration of targeted therapies into standard OSCC treatment protocols is still in the developmental stages and requires further research. Interestingly, we showed here that STC-1 is highly expressed in OSCC cells, and that knocking down STC-1 can inhibit OSCC cell growth and glycolysis. Therefore, we believe that STC-1 has the potential to serve as a promising target in OSCC. STC-1 is a glycoprotein hormone involved in calcium and phosphate homeostasis, cellular protection against stress, angiogenesis, and inflammation [10,19]. The relationship between STC-1 and cancer has become a focus of research, with elevated expression of STC-1 observed in various cancers, including breast cancer, lung cancer, and ovarian cancer [14,20,21]. Studies have shown that STC-1 may promote tumor growth by activating signaling pathways such as PI3K/Akt and MAPK, facilitate tumor metastasis by affecting the degradation of the extracellular matrix, regulate the tumor microenvironment by promoting angiogenesis, and inhibit tumor cell apoptosis, leading to chemotherapy resistance [22,23]. These findings underscore the multifunctional role of STC-1 in tumor biology and highlight the therapeutic potential of targeting STC-1 across various cancers. Strategies targeting STC-1 may help inhibit tumor growth and metastasis and improve chemotherapy efficacy. However, the specific mechanisms of STC-1 in cancer still require further investigation. Herein, our results confirmed that STC-1 ablation inhibited OSCC cell growth and glycolysis. However, the precise mechanism needs further study.
OSCC is closely associated with altered glucose metabolism, particularly an increase in glycolysis, a phenomenon known as the Warburg effect. This metabolic reprogramming is a hallmark of cancer cells, including those in OSCC, and is characterized by a preference for glycolysis over oxidative phosphorylation for energy production, even in the presence of oxygen [24,25]. Interestingly, we confirmed that STC-1 could serve as a key regulator in mediating glycolysis in OSCC cells. We believe that STC-1 may affect OSCC progression via mediating glycolysis.
The PI3K/Akt pathway is a critical regulator of cancer metabolism, and our data confirmed that STC-1 ablation suppresses this pathway in OSCC cells. Previous studies have demonstrated that inhibiting the PI3K/Akt pathway can significantly reduce glycolysis and tumor progression in colorectal cancer and nasopharyngeal carcinoma [26,27]. Activation of this pathway enhances glucose uptake by increasing the translocation of glucose transporters like GLUT1, upregulates key glycolytic enzymes such as hexokinase, phosphofructokinase, and pyruvate kinase, and inhibits apoptosis, allowing for sustained cancer cell proliferation [26]. In addition, Akt regulates HIF-1α, which further enhances glycolysis under hypoxic conditions, and alters the localization of metabolic enzymes like hexokinase to the mitochondria, increasing glycolytic flux [28]. Therefore, targeting the PI3K/Akt pathway is a potential therapeutic strategy to disrupt the altered energy metabolism in cancer cells. Notably, this pathway plays a crucial role in the development of OSCC by influencing cell proliferation, survival, angiogenesis, and metastasis [29]. Its activation is associated with resistance to conventional therapies, making it a potential target for therapeutic intervention. Our findings confirmed that STC-1 ablation inhibits the PI3K/Akt pathway, thereby inhibiting OSCC cell growth and glycolysis.
Overall, our findings, combined with those from other studies, highlight the potential of STC-1 as a promising therapeutic target in OSCC and other cancers.Future studies will focus on developing specific inhibitors targeting STC-1 and evaluating their efficacy in clinical settings. | 2,661.4 | 2024-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Design and implementation of a versatile magnetic field mapper for 3D volumes
Graphical abstract
Hardware in context
The Magnetic Field Mapper (MFM) is a robotic sensor that uses a triple-axis magnetometer to map the magnetic field distribution over large areas. The maps can then be analyzed to locate sources of magnetic interference and imperfections, and to compare the measured field and its inhomogeneity with a target field. The importance of accurate and quick measurement of permanent magnets' fields cannot be overstated in research, development and industrial production processes, medical devices, automobiles, motors and electronic design and manufacturing. In some applications such as NMR spectroscopy, extremely high accuracy is required in measuring the field of the NMR magnet to identify homogeneous regions which become candidate sweet spots for high resolution spectroscopy [1,2]. In most cases, 3D mapping is also needed. The distribution of the magnetic field is measured by a Hall probe composed of Hall sensors oriented along three dimensions [3,4], which is mounted on translation stages that allow motion in three dimensions [5-7].
Some prior solutions proposed by other works cover very small areas [8]. Commercially available mappers also measure up to ±2 T but their designs are proprietary [9,12]. The geometry used in our proposed MFM is simple and supported by a simple truss structure. The mechanical structure is easy to build and capable of measuring the magnetic field up to ±2 T at about the same precision as other commercially available devices. It is low-cost and easily reproducible. The structure can be activated either manually or programmatically. The three linear stages are constructed on an aluminum plate called the base plate. The linear stages are 600 mm above the base plate, ensuring no ferromagnetic material is in close proximity to the sample space, and are supported by the truss mechanism to make the whole system stable during movement (Fig. 1(b)).
The approximate magnetic field sensing volume is 100 × 100 × 300 mm. The X, Y and Z movements of the mapper are controlled by homemade motor controllers. These controllers can work independently or can be controlled by a user interface using any data acquisition (DAQ) card. For the MFM, we use a homemade DAQ system called PhysLogger. The PhysLogger interface not only activates the stepper motors but also records the triple-axis magnetic field measurement. Details of the control and data recording can be found on PhysLogger's website [17].
Description
The whole apparatus is constructed on an 8 mm thick non-magnetic aluminium plate (called the base plate) of size 910 × 295 mm. A cage of four aluminium extrusion profiles, each of cross section 20 × 20 mm and height 1 m, is made as a stand for the Z slide, as we have to keep the motors away from the sensor probe. The sensor is mounted on a triple-axis translation stage which moves the sensor probe. To ensure structural stability, the trusses are assembled at an angle of 55° w.r.t. the horizontal level of the cage. Three-dimensional linear stages are produced from components commonly used in a typical 3D printer such as timing belts, stepper motors, linear rails, lead screws, linear rods and bearings. For example, the Z stage is driven by a lead screw of pitch 6 mm and a timing belt while the X and Y stages are driven by lead screws of pitch 8 mm. The translation and triple-axis field sensing are all computer controlled. The complete assembly is shown in Fig. 1(c). A power supply (12 V, 5 A) is used to energize the motor controllers.
Design files
The design files and animated videos are located at https://doi.org/10.17632/7jzgfwhznn.3, whose description is given in Table 1.
3D structure files
The complete assembly of the MFM is located in the step (.stp) file, which can be opened in any open-source 3D CAD software. Once it opens, all component files will automatically open and become immediately accessible. All the aluminium plates, slides, triangular and rectangular strips and base parts are manufactured on a milling machine. The drawings of the machined parts are also included in a zip folder. The Hall sensor housing is manufactured with a Creality resin printer, whose .stl file is also located in the data repository.
Videos
The animated videos (.wmv files) recording the phased, stage-wise construction of the assembly are also located in the given repository location which also contains working video (.mpeg file) demonstrating the GUI and recording of data.
Bill of materials summary
The bill of materials (BOM) has two subsections, one for machined components and the other for purchased components. The designator column in the machined components table directs the reader to the repository. For brevity, the "Drawings of the machined parts.zip" folder has been relabeled as "Drawings" in the designator column. All of these components were purchased off-the-shelf from a local market in Lahore and are generally available in any standard mechanical workshop. The "source of materials" column in the purchased components table includes the source for off-the-shelf purchasing. Fig. 1 shows the basic assembly of the MFM, which is designed and constructed using the concept of the three Cartesian linear stages of a 3D printer. The X slide holds the magnetic sensor probe cavity, which is 700 mm away from the truss cage structure. The X slide is mounted on an orthogonal Y slide which in turn connects to a Z slide. These slides are fixed on the truss structure of 1 m height to avoid any ferromagnetic material, within the structure, near the probe. Finally, the entire truss structure rests on an aluminum base plate. We now describe the various segments of the MFM, outlining how the entire structure can be built.
The truss assembly
The truss provides stability to the MFM. It is produced quickly and cheaply as it needs no formwork to support its shape. The truss is connected in triangles and behaves like a single solid object. The truss is built using four aluminium extrusion profiles of cross section 20 × 20 mm and length 1000 mm (see Fig. 2(b)). The cross section is a hollow square with diagonal protrusions outside the square, terminating in the vertices of another large square. These four profiles are mounted on an aluminium plate (called the cage base plate) (Fig. 2(a)) of size 180 × 150 × 8 mm using M5 tapered head screws. A similar aluminium plate is placed at the top of these profiles to complete the cage, as shown in Fig. 2(c) and 2(d). Fig. 2(e) describes the triangles of the truss, which are constructed from aluminium strips of dimensions 20 × 10 × 260 mm. These strips are cut by a milling machine at an angle of 55° at both ends. Five of these strips cover one side of the cage. To form the triangles, the first strip is placed at an angle of 55° from the surface and the second strip is placed in such a way that the angle between the two strips is 110°. The triangles are made on only two opposite sides of the cage. The complete truss structure can be seen in Fig. 2(f). The Z slide is placed on an untriangled side. All these strips are mounted on the cage ribs by M4 cap head screws. An animated video of the construction of the truss and the drawings of the machined parts are available in the data repository link https://doi.org/10.17632/7jzgfwhznn.3 (Truss.wmv, Drawings of machined parts >> Truss).
The Z slide
Two aluminium extrusion profiles (20 × 20 mm) of length 375 mm, two linear guide rails (15 mm), a lead screw (length 440 mm, diameter 10 mm, pitch 6 mm), two ball bearings (FK-12), one ball screw nut (10 mm), two top and bottom plates of aluminium (115 × 60 mm), an aluminium slider assembly and two joint plates are configured to construct the Z slide. Both linear guide stages are mounted on the aluminium extrusion profiles, 66 mm apart from each other (see Fig. 3(a) and 3(b)). The linear guide stage and lead screw are joined together with the slider assembly and a ball screw nut, as can be seen in Fig. 3(b). Finally, M3 cap head screws hold these assemblies. Two aluminium plates (115 × 60 mm), shown in Fig. 3(c), also support this assembly from the top and bottom by ball bearings. The allowable distance travelled by the Z slide is 300 mm. The slide is translated by a NEMA 17 stepper motor which is attached to the slide with a timing belt (S2M 202) and pulleys with 18 and 40 teeth respectively. These are shown in Fig. 3(d) and 3(e). The gear ratio of this assembly is therefore (18/40) × 6 = 2.7, where 6 mm is the screw pitch. A close-up view of the back side is shown in Fig. 3(f).
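As a quick check of the drive arithmetic above, the short snippet below recomputes the Z-axis travel per motor revolution; the 200 full steps per revolution assumed for the NEMA 17 motor is a typical value and not stated in the text.

```python
# Z-axis drive: 18-tooth motor pulley, 40-tooth screw pulley, 6 mm lead screw pitch.
motor_teeth, screw_teeth, pitch_mm = 18, 40, 6
travel_per_motor_rev = (motor_teeth / screw_teeth) * pitch_mm   # = 2.7 mm

steps_per_rev = 200  # assumed full-step count of a typical NEMA 17 motor
print(f"Z travel per motor revolution: {travel_per_motor_rev} mm")
print(f"Z travel per full step: {travel_per_motor_rev / steps_per_rev * 1000:.1f} um")
```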
This assembly is fastened to the truss structure with two aluminium plates (180 × 65 × 8 mm) from top and bottom, as described by Figs. 3(f) and 3(g). The X and Y slides are also mounted on the Z slide at this slider assembly with the help of an aluminium plate (265 × 60 × 8). Fig. 3(h) describes these assemblies together while the overall structure incorporating the Z slide is shown in Fig. 3(i). The animated videos of the construction of the Z slide and the drawings of the machined parts are available in the repository https://doi.org/10.17632/7jzgfwhznn.3 (Z slider assembly.wmv, Z slide assembly.wmv, Mounting Z slide on the truss.wmv, Drawings of machined parts >> Z Slide).
The X and the Y slides
In order to construct the X and Y slides, two linear guide stages (9 mm), each possessing two sliders, are mounted on an aluminium plate (213 × 60 × 12) with M3 cap head screws. These linear guide stages are 40 mm apart. A lead screw (8 mm) and linear guide stage are assembled together by an H-shaped assembly and a screw nut (8 mm). This entire process is illustrated in Fig. 4(a) through 4(c). Both ends of the lead screw are fixed by aluminium plates (60 × 22 × 12 mm). One end of the lead screw is attached to a knob using a bearing for manual movement of the stage, while the other end is attached to the NEMA 17 stepper motor with a flexible coupling shaft (5 × 8 mm) and an assembly, which can be seen in Fig. 4(d) to 4(f). Moreover, M3 cap head screws are used for fastening the coupling assembly. Both X and Y slides are assembled with the same components and have identical dimensions and sizes. The travel of either the X or Y slide is 110 mm. The gear ratio in these stages is 8 mm, as there is no timing belt and pulley in these stages and the gear ratio is equal to the pitch of the lead screw. The Y slide is attached below the X slide with the H shape assembly part, while the magnetic probe assembly is attached to the H shape assembly part of the Y slide. After assembling both of these slides, they are mounted on the Z slide through a connecting aluminium plate. See Fig. 4(f), 4(g) and 4(h). Once again, animated construction videos are in our repository (X or Y slide.wmv, X and Y slide.wmv, Drawings of machined parts >> X or Y Slide, Drawings of machined parts >> Joint Plate XY&Z.pdf). Fig. 5(a) describes the assembly of the magnetic field probe, which is styled as a cantilever, probe face down, consisting of an aluminum plate (455 × 60 × 8) and an aluminum rod of minimum height 235 mm and maximum height 1000 mm, which is hollow from the inside and whose length is adjustable. It has a 3D printed sensor housing comprising pockets for each axis. The sensor housing holds Hall sensors (CYSJ902 GaAs Hall Effect Element) in the X, Y and Z directions which measure the respective magnetic fields. The 3D housing is designed in Solid Edge and made on a Creality resin printer and cured by ultraviolet (UV) light. Fig. 5(b) describes the designed sensor housing. After snug placement of the sensors in the respective pockets, the housing is inserted into the hollow aluminium rod, whose inside and outside radii are 8 and 10 mm (see Fig. 5(c)). An adapter is designed and manufactured by turning, with M16 threads on both sides, to attach the aluminium rod to the plate. The plate is then attached to the Y slide H shape assembly with M4 cap head screws (see Fig. 5(d)). An animated video of the final assembly and drawings of machined parts are available in the dataset link https://doi.org/10.17632/7jzgfwhznn.3 (Probe sensor.wmv, Final assembly.wmv, Drawings of machined parts >> Cantilever.pdf, Drawings of machined parts >> Base Plate.pdf).
Motor controller
To manually control the stepper motors driving the linear stages, we use a home-grown, open-source stepper motor controller device [13]. This device is equipped with a TMC2208 smart motor controller which is controlled using an STM32F103C8T6-powered, Arduino-based development board, also called "The Blue Pill" [15]. With the help of an alphanumeric liquid-crystal display (LCD) screen and push buttons, the microcontroller provides a user interface to input basic control parameters such as the mechanical transfer functions, movement speed, and position. The motor controller is also compatible with PhysLogger, which is used to integrate multiple linear stages and to provide a more robust, computer-based user interface [16].
Data acquisition and control
To digitally control the linear stages and acquire and record the sensor measurements, we use PhysLogger, which is a pocket-sized data logging device with robust hardware and a friendly user interface [16]. Primarily, PhysLogger lets us measure and control voltage on multiple physical channels, but it can also interface with a family of instruments that can measure and control other derived quantities. One of these instruments is called a Stepper Motor Controller, which can drive bipolar stepper motors, and another is called a PhysHall, which is a magnetic flux sensor equipped with the GaAs Hall effect detection chip [16]. Once connected with the PhysLogger, the user interface allows us to talk to the sensors and actuators with intuitive on-screen widgets that can be customized according to the job requirements. One of these widgets is connected to two of the motor controllers to manually control the movement of the sensing probe in two dimensions. Another widget is an on-screen data table that records the position of all the linear stages as well as the magnetic field vector components corresponding to a particular three-dimensional position. Once measurement data has been collected at enough physical positions in the vicinity of an assembly of magnets, the data table is exported to any open source data processing software for further analysis.
Fig. 4. X and Y slide stages: a) linear guide stage and the lead screw, b) assembling the linear stages and lead screw with the help of the H shape assembly, c) the assembly mounted on the 12 mm thick aluminum plate (213 × 60), d) motor assembly components, e) assembling the motor coupling with the slide, f) X slide mounted on the Y slide, g) attaching the X and Y slides to the Z slide, and h) close-up view of all three slides with the truss structure.
Operation instructions
Motor controllers are powered by a 12 V, 5 A direct current (DC) adapter; all three motor controllers are driven by the same supply. PhysLogger is connected to the PC using a C-type USB cable. The motor controllers are connected to the digital pins of the PhysLogger by C-to-C type USB cables. The Hall sensors are connected to the analog channels of the PhysLogger. The user interface is populated by selecting several widgets from the graphical menu to graphically control the motor controllers and collect data. This process involves renaming the Hall sensors and motor controllers to match the printed labels on the devices, adjusting the gear ratios and pitch of the linear slides, and configuring the channels that correspond to the respective Hall sensors. Once the linear stage widgets have been set up to work with the motor controllers, the magnetic probe can be moved in any direction in controlled steps. This means that the probe can be moved to any three-dimensional position in the work space where we want to measure the magnetic field. The stages can be moved in fine or coarse steps as listed in the linear stage widget, as can be seen in Fig. 6. In parallel, a data table widget is added to the same work space, with data columns for the three Hall readings and the corresponding three-dimensional position. The data acquired in this table contain the three-axis positions of the motors and the corresponding magnetic field components in three dimensions. The data can be entered manually by clicking a manual data entry button or periodically by setting a sampling period. After collecting data, it can be exported for further analysis to Matlab or any other data processing software. The operation of the magnetic field mapper can be seen in the given video link in Table 1. Besides the working of the MFM, there are some important safety instructions, which are listed below.
There should be no ferromagnetic material near the MFM while taking readings, as it can change the measured magnetic field values. Do not touch the sensor probe or the magnet under test while performing experiments, to ensure accurate readings. Use the MFM on a horizontal surface. Avoid exposing the Hall probe to direct sunlight or to strong light sources.
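As a rough planning aid for the step-scanning procedure described above, the sketch below estimates the number of grid points and the total scan time for the full sensing volume. The step size and dwell time per point are assumed values for illustration, not specifications of the instrument.

```python
import numpy as np

# Rough scan planning for the quoted 100 x 100 x 300 mm sensing volume.
step_mm = 2.0                       # chosen step size (the examples below use 1-2 mm)
nx = int(100 // step_mm) + 1
ny = int(100 // step_mm) + 1
nz = int(300 // step_mm) + 1
points = nx * ny * nz

dwell_s = 2.0                       # assumed time per point (move + settle + sample)
print(f"grid: {nx} x {ny} x {nz} = {points} points")
print(f"estimated scan time at {dwell_s} s/point: {points * dwell_s / 3600:.1f} h")
```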
Results
In order to test and validate this device, the magnetic fields of differently shaped magnets were measured. The first example is an I-shaped assembly formed by assembling small cubic magnets. The assembly and its dimensions are shown in Fig. 7(a). The magnetic field is measured along one side of the magnet. The quiver plot in Fig. 7(b) shows the direction of the magnetic field in the xy plane through directed arrows. Furthermore, the results for the three magnetic field components are shown in Fig. 7(c), (d) and (e).
The second example is a trapezoidal magnet which is magnetized at an angle of 30° w.r.t. the central axis of the magnet, as shown in Fig. 8(a). The parallel sides of the magnet are 31.66 mm and 5 mm, while the slanting sides are 50 mm each. The magnet is placed on the MFM base plate and the field is measured in a rectangular region outside and surrounding the magnet body whose area is 70 mm × 50 mm. The field is measured every 2 mm. Fig. 8(b) shows the three-dimensional magnetic field lines as a quiver plot. The thick arrows identify the magnetization direction of the magnet, which is 30°. Fig. 8(c), (d) and (e) respectively show the magnetic field distribution along the x, y and z directions within our region of interest.
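Once the PhysLogger data table has been exported, plots such as the quiver plot described above can be reproduced in any data processing environment. The sketch below is a generic Python/matplotlib example; the file name and column names are assumptions about the exported table, not a fixed format.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Exported PhysLogger data table; column names are assumed for illustration.
df = pd.read_csv("trapezoid_scan.csv")          # columns: x, y, z, Bx, By, Bz

# Quiver plot of the in-plane field measured over the 70 mm x 50 mm region.
plane = df[np.isclose(df["z"], 0.0)]
plt.quiver(plane["x"], plane["y"], plane["Bx"], plane["By"], angles="xy")
plt.xlabel("x (mm)")
plt.ylabel("y (mm)")
plt.title("Measured in-plane field around the trapezoidal magnet")
plt.gca().set_aspect("equal")
plt.show()
```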
The third example is an NMR-specific permanent magnet. Fig. 9(a) shows the magnet (PM-1055-050 N, Metrolab), which has a nominal field of 0.45 T. The movement of the probe is along the z-axis and the field is measured every 2 mm. The total distance covered by the probe inside the magnet is 72 mm. The probe measures the three-dimensional magnetic field at each point. The graph in Fig. 9(b) shows the vectorial field indicating a region of high field uniformity with a top-hat profile.
The fourth example is a ring of trapezoidal magnets. This arrangement is a variant of the Halbach design [10] and can be employed in low-field NMR systems [11]. The goal is to keep the magnetic field horizontal and homogeneous in a small region where the sample, whose NMR spectrum is to be acquired, is placed. A single Halbach ring is shown in Fig. 9(c) and a graph of the magnetic field inside the ring is shown in Fig. 9(d). The three-dimensional sensor of the MFM measures three components of the magnetic field at each point inside the ring every 1 mm. The inside diameter of the ring is 30 mm. The graph is plotted at fixed values of y and z, which are zero, while x varies from −15 to 15 mm.
Fig. 8. a) The trapezium magnet. b) Magnetic field lines in 3D, with the black arrow or the dashed line representing the manufacturer-specified magnetization direction. c), d) and e) The magnetic field distribution along the three vectorial directions.
Discussion
The fields of different magnet shapes, including an I-shaped assembly, a trapezoidal magnet, a Halbach ring and an NMR permanent magnet, were measured with the constructed MFM. The results obtained with the MFM are highly accurate and reliable; they agree either with the datasheet or with simulated profiles (within 0.5% error). In principle, the proposed MFM can measure a magnet of any shape or volume with dimensions under 100 × 100 × 300 mm, with a spatial resolution of 0.1 mm in each axis and magnetic fields up to ±2 T, with selectable full-scale ranges of ±1.5 T, ±1 T and ±100 mT and field resolutions of 5 mT, 0.5 mT and 0.05 mT respectively.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 9. a) An NMR permanent magnet in relation to the MFM probe. b) The magnetic field components along the three axes as the probe moves along the z-axis from 0 to 72 mm at fixed x and y. c) A Halbach ring magnet. d) The magnetic field components along the x, y and z axes as measured by the MFM, plotted at fixed y and z equal to zero while x varies from −15 to 15 mm.
"Physics"
] |
Current stage of studies of the star configurations at intermediate energies with the use of the BINA detector
The Space Star Anomaly in proton-deuteron breakup cross-section occurs at energies of about 10 MeV. Data for higher energies are sparse. Therefore, a systematic scan over star configurations in the range of intermediate energies between 50 and 100 MeV/nucleon is carried out on the basis of data collected with the large acceptance BINA detector. The preliminary cross section results for forward star configurations at 80 MeV/nucleon slightly surpass the theoretical calculations, but the systematic uncertainties are still under study. Also, a new variable describing rotation of star configurations is proposed. doi:10.21468/SciPostPhysProc.3.005
Introduction
Studies of the most basic properties of nuclear interactions require a system simple enough for strict calculations to be performed, which at the same time should be complicated enough to make some additional dynamics possible. These conditions are fulfilled by few-body systems. The system of three nucleons (3N) is the simplest non-trivial environment in which nucleon-nucleon (NN) interaction models can be tested. Although NN interactions are dominant in few-nucleon interactions, a system composed of at least 3 nucleons cannot be properly described without introducing an additional dynamical ingredient, the three-nucleon force (3NF).
State-of-the-art theoretical descriptions of nuclear interactions usually use either of two theoretical approaches in modelling the dynamics: realistic potentials or Effective Field Theory (EFT), as it is not possible to directly apply QCD in this energy region. Phenomenological NN potentials are either complemented with additional diagrams treating excitations on an equal footing with nucleons, as if stable particles were produced (∆isobar in Coupled Channels approach) or with a phenomenological 3NF model. Another approach is more fundamental and applies the Chiral Perturbation Theory in order to reach an EFT description (Chiral EFT) [1,2]. Recent developments of the calculations with realistic potentials involve relativistic effects [3,4] as well as Coulomb interactions [5].
Even though the development of the theories is well advanced, some experimentally discovered effects are still awaiting their correct theoretical description. The discrepancies between theory and experiment were observed both in cross sections and in polarisation-dependent observables. The most intriguing inconsistencies are the A y puzzle (for analysing powers) and the Space Star Anomaly (SSA) observed in the differential cross-section of the breakup reaction. The latter effect is surprising, as it occurs at relatively low energies. The effect has not been explained theoretically since its discovery in 1989 [6]. The star configurations are defined in the centre of mass system as a final state of the d + p → p + p + n reaction, where the momenta of the outgoing nucleons form an equilateral triangle. Depending on the angle of inclination with respect to the beam axis (α), one defines the Space Star (α=90°), Forward Plane Star (with neutron momentum pointing upstream of the beam, α=0°), and Backward Plane Star (with neutron moving in the beam direction, α=180°). The biggest discrepancies for the Space Star configuration are observed at energies ranging from 7.5 to 13 MeV/nucleon, where the differences reach 15% [7], and the theories overestimate the measurements. The main component of the theoretical cross-sections at such low energies results from the s-wave channel of the NN interaction, which reduces the range of possible explanations for the effect. Expected 3NF and Coulomb effects are very small and were included in the theories without clarifying the observed inconsistency. At such low energies, relativistic effects are negligible. Up till now, the calculations do not reproduce the cross-sections for the Space Star Anomaly in a reliable way. Interestingly, the discrepancy is twice as large in neutron-deuteron breakup, where the experimental cross-sections exceed the theoretical predictions by 30% [8,9]. In other words, the effect changes sign under isospin change. A set of very precise cross-section measurements for breakup, in both p + d and n + d collisions in a wide range of energy, is needed to solve this charge-symmetry breaking puzzle.
The set of differential cross-sections for the 1 H(d, pp)n and 2 H(p, pp)n reactions collected with the large acceptance BINA detector is used for scanning the star configurations over a wide range of beam energies between 50 and 160 MeV/nucleon. The main aim is to check whether the effect is still measurable for energies within this range. The only published data about the Space Star for this energy range showed a lack of the effect at 65 MeV [10], while for the second-highest energy (19 MeV) the effect was small, but still visible [11].
The BINA experimental setup
Currently, the Big Instrument for Nuclear Polarization Analysis (BINA) detector setup is placed in the Cyclotron Centre Bronowice (CCB), PAS Krakow [12]. A liquid hydrogen or deuterium target is placed in the centre of a scattering chamber made of 149 phoswich detectors of triangular cross-section, called Ball. Ball covers polar angles between 40° and 160° with an angular resolution of about 10°. The second part of the detector is called Wall. It consists of a Multi-Wire Proportional Chamber (MWPC), thin rectangular scintillators for energy loss measurement (∆E) and a calorimeter made of thick scintillator blocks (E). The combination of signals from E and ∆E is used for particle identification. Track reconstruction for forward angles is based upon the MWPC employing 3 detection layers. The Wall covers polar angles θ ∈ (10°, 35°) and its angular resolution is about 0.5°. Such a combination of detectors covers a solid angle of almost 4π, which makes the system well suited to measurements of a variety of star geometries.
The experiments performed with BINA at KVI, Groningen, and the ongoing measurements at CCB, Krakow, make it possible to extend the experimental data on the Space Star cross section at 50, 80, 108 and 160 MeV/nucleon. New measurements (proton beam at 108, 135 and 160 MeV) were performed in August-October 2019 and will be analysed soon.
Results
The preliminary results (Fig. 1) at 3 different inclination angles α are obtained from the data taken at KVI Groningen in 2011 for the d+p→p+p+n reaction at 80 MeV/nucleon. The angle α=20° corresponds to polar angles (in the laboratory system) of the registered protons θ1=θ2=24.2°±1° and the corresponding relative azimuthal angle of the proton pair. The analysis [13,14] relies on reconstruction of the proton energies, polar angles and the relative azimuthal angle of the proton pair. It includes corrections for detector efficiency, cross-over regions, as well as normalisation to the luminosity. For the star configurations, the correction for loss of coincidences due to double-hits in the same elements has not been implemented yet. The missing efficiency factor depends on the angular configuration only and is thus energy independent. Preliminary estimations provide a correction factor of about 5%, so the presented cross-section distributions should be increased by roughly 0-10%. Simulations of the losses due to double-hits in all the detector elements and analysis of the systematic uncertainties are under way. 3NF effects are small and Coulomb effects are negligible (see Fig. 1). Relativistic effects should be included, as they account for a measurable effect even at 65 MeV [10]. The symmetry axis in Fig. 1 corresponds to star configurations. The other points are measured for the same angular configuration of protons, but with an asymmetric momentum distribution between them. All the presented data show coincidences of two particles registered in the Wall. This makes it possible to check the feasibility of providing new good-quality results on star configurations. The cross-section for the star configurations in deuteron-on-proton breakup increases with α until it reaches the value corresponding to α=180°, which is about 3 times higher than for α=20°, making it possible to reduce statistical uncertainties. On the other hand, α > 140° corresponds to Ball-Ball coincidences, which due to worse angular resolution are subject to higher systematic error.
New variable - rotation in the plane
Standard analysis of star configurations with respect to α, in the case of a proton beam and the BINA detector, leads to the following picture (Fig. 3, region near β=0°): quite few star configurations are covered by Wall-Wall coincidences (the highest angular resolution), Ball-Wall coincidences (intermediate angular resolution) give no contribution, and the angles from α=50° up to α=180° can be registered as Ball-Ball coincidences (the lowest angular resolution) [5,15]. The normalisation of the experimental data is not finalised and does not contain, e.g., effects due to double-hits. The way for improvement is paved by the application of another variable (Fig. 3). In addition to α, one can define the β angle, which is a measure of rotation about the axis perpendicular to the plane spanned by the momenta of the reaction products. The axis of rotation originates at the reaction vertex. The rotation through β=120° results in a configuration where the neutron and a proton are exchanged; e.g., in the case of the Forward Plane Star (α=0°) the neutron will be registered at the same angle as one of the protons, whereas the second proton will move in the direction against the beam in the centre of mass system.
Applying rotation through the β angle, one can measure Ball-Wall coincidences in the region not accessible to the standard approach. Although it is still not possible to improve resolution for detecting the Space Star configuration (α=90°), the regions between α=0° and α=60°, as well as between α=130° and α=180°, are within the acceptance of the Ball-Wall coincidences.
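As a concrete illustration of the β rotation introduced above, the minimal numpy sketch below applies Rodrigues' rotation formula to a toy star configuration: three equal momenta 120° apart in the x-y plane, with the plane normal along z. The vectors are invented for illustration and do not correspond to actual BINA kinematics; the point is only that rotating the whole configuration by β = 120° about the plane normal maps each momentum onto the next one, i.e. it permutes the outgoing nucleons.

```python
import numpy as np

def rotate(v, axis, beta_deg):
    """Rodrigues rotation of vector v about a unit axis by beta degrees."""
    k = axis / np.linalg.norm(axis)
    b = np.radians(beta_deg)
    return v * np.cos(b) + np.cross(k, v) * np.sin(b) + k * np.dot(k, v) * (1 - np.cos(b))

# Toy star configuration: three equal c.m. momenta 120 degrees apart in the x-y plane.
normal = np.array([0.0, 0.0, 1.0])          # axis perpendicular to the momentum plane
p1 = np.array([1.0, 0.0, 0.0])
p2 = rotate(p1, normal, 120.0)
p3 = rotate(p1, normal, 240.0)

# A beta = 120 deg rotation about the plane normal permutes the momenta.
print(np.allclose(rotate(p1, normal, 120.0), p2))   # True
print(np.allclose(rotate(p2, normal, 120.0), p3))   # True
```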
Conclusion
Although the BINA energy scan is very promising, the current state of the analysis does not allow us to draw a clear conclusion about the Space Star Anomaly, as the configuration at α=90° has not been analysed yet. In order to reach the expected number of configurations, together with the Space Star, one has to develop the analysis for Ball-Ball coincidences. The analysis of Wall-Wall coincidences is about to be finalised. Preliminary data show the required statistical accuracy and their shape is correctly reproduced by state-of-the-art calculations. The normalisation is to be corrected for loss of data due to double-hits and the systematic effects have to be studied in detail. The scope of the studies can be extended by applying a newly proposed variable, β, which makes it possible to analyse numerous configurations at the same α angle simultaneously. In addition to that, new measurements of the breakup reaction have recently been performed and await analysis.
"Physics"
] |
Passivity-Based Control With Active Disturbance Rejection Control of Vienna Rectifier Under Unbalanced Grid Conditions
In this paper, a practical passivity-based control (PBC) with active disturbance rejection control (ADRC) is proposed to improve the performance of the Vienna rectifier under unbalanced grid conditions. In general, a traditional double-loop control based on positive- and negative-sequence transformation is used for the Vienna rectifier under unbalanced grid conditions. However, it cannot fundamentally eliminate the additional time delay caused by the second-harmonic filter, nor the loss of performance caused by the linear weighted sum in the proportional-integral (PI) controller. Moreover, the complexity of the controller is high because the positive- and negative-sequence currents need to be controlled separately. PBC is a nonlinear controller based on energy dissipation, and it is highly robust to disturbances. Furthermore, a line-voltage-based PBC in the inner current loop can deal with the voltage unbalance effectively and easily, without negative-sequence transformation. To improve the disturbance rejection ability, ADRC is applied in the outer voltage loop, which overcomes the PI drawbacks of step overshoot and slow response. Under unbalanced grid conditions, the proposed control strategy offers good performance, easy implementation and a short computation time with PBC in the inner current loop, and strong robustness and fast tracking performance with ADRC in the outer voltage loop. The detailed mathematical model, control principle and controller design of the Vienna rectifier are thoroughly analyzed. In addition, simulation results based on SIMULINK are given. Finally, a downsized 5 kW Vienna rectifier prototype is built to validate the correctness and effectiveness of the proposed strategy.
I. INTRODUCTION
The Vienna rectifier, which was proposed by Johann W. Kolar in 1997, needs only one power switch per phase, has no dead-time effect, and each switch withstands only half of the dc voltage stress. Moreover, it can realize sinusoidal current and unity power factor with high reliability and high power density. Thus, it is widely applied in unidirectional rectification, such as active power filters, power factor correction, communication power supplies, uninterruptible power supplies, electric vehicle charging and renewable power generation [1], [2].
A voltage unbalance often occurs in the power system because of unbalanced or nonlinear loads. Here, voltage unbalance means unequal voltage magnitudes at the fundamental frequency (under-voltages or over-voltages), harmonic distortion and dc offset injection; in other words, the three-phase voltages are not sinusoidal or are unequal in magnitude. According to [3], the voltage unbalance factor of many grids in the U.S. exceeds 3%, whereas according to the IEC standard the grid voltage unbalance factor must remain below 2% over long periods of time. Excessive voltage unbalance can lead to disturbances of the Vienna rectifier such as over-current, over-temperature and output voltage fluctuations, and may even cause the Vienna rectifier to fail to operate normally [4]-[6].
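As a side note, the voltage unbalance factor quoted above is commonly defined as the ratio of the negative- to the positive-sequence voltage magnitude. The following sketch (not part of the original paper; names and the numerical example are illustrative) computes it from three phase phasors via the symmetrical-component transformation.

    import numpy as np

    def voltage_unbalance_factor(v_a, v_b, v_c):
        """Voltage unbalance factor VUF = |V_negative| / |V_positive| in percent."""
        a = np.exp(2j * np.pi / 3)                  # 120-degree rotation operator
        v_pos = (v_a + a * v_b + a**2 * v_c) / 3    # positive-sequence component
        v_neg = (v_a + a**2 * v_b + a * v_c) / 3    # negative-sequence component
        return 100 * abs(v_neg) / abs(v_pos)

    # Example: phase A sags to 110 V while B and C stay at 220 V (rms phasors)
    va = 110 * np.exp(1j * 0)
    vb = 220 * np.exp(-2j * np.pi / 3)
    vc = 220 * np.exp(2j * np.pi / 3)
    print(voltage_unbalance_factor(va, vb, vc))     # about 20 %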
In general, the Vienna rectifier is controlled as a positive-sequence voltage source [7]-[9], and it works normally under balanced grid conditions. Unfortunately, the control of the Vienna rectifier is challenged by the negative-sequence current that is generated under unbalanced grid conditions and threatens the rectifier's safe operation. Usually, positive- and negative-sequence transformations are used in traditional current controllers to detect the positive- and negative-sequence currents, and two current controllers are then used to regulate them separately [10], [11]. Although such sequence-current controllers are effective under unbalanced grid conditions, they suffer from the additional time delay caused by the second-harmonic filter, they are more complex because multiple parameters have to be calculated and controlled, and they are also easily affected by high-order harmonics [12]-[14].
Normally, a double-loop control strategy, consisting of an outer voltage loop and an inner current loop, is used in the Vienna rectifier system. For the inner current loop, a proportional-integral (PI) control is typically used first, but it is difficult to achieve the desired performance for the nonlinear Vienna rectifier [15]. Proportional-resonant (PR) control can provide a fast dynamic response with high stability, but it is too complex to implement in practice [16], [17]. Hysteresis control (HC) and direct power (DP) control have simple control structures and fast dynamic responses, but their switching frequencies are not constant, which makes their filters difficult to design [18], [19]. Predictive control (PC) can eliminate the forecast error and minimize resonant behavior, but it is highly sensitive to parameters and imposes a heavy computational burden [20]. Dead-beat (DB) control is a type of predictive control; it offers fast voltage and current regulation, but it also places high demands on the controller hardware [21].
For the outer voltage loop, a PI controller is normally used, but it has some weaknesses, such as the performance loss inherent in a linear weighted sum and the complications brought by the integral action. In this paper, an active disturbance rejection control (ADRC) controller is used in the outer voltage loop to guarantee fast tracking and robustness of the system, a line-voltage-based passivity-based control (PBC) controller is used in the inner current loop to guarantee stability and dynamic performance under unbalanced conditions, and the combined PBC with ADRC control strategy is discussed in detail.
The PBC strategy, which was proposed by R. Ortega and M. Spong in 1989 [22], is a nonlinear control method based on energy dissipation; it achieves stability and tracks a given reference by means of damping injection.
PBC is highly robust to system parameter deviations and external disturbances, and it is easy to design and implement in practice. In electrical applications, PBC has been used in power conversion systems, rectifiers, photovoltaic inverters, motor drives, static synchronous compensators, dynamic voltage restorers and dc-dc converters [23]-[27]. Compared with linear control and other nonlinear control strategies, a line-voltage-based PBC strategy in the inner current loop can be designed and realized easily and is expected to improve the Vienna rectifier's static and dynamic performance under unbalanced grid conditions. The ADRC strategy, which was proposed by J. Han in the 1980s [28], is a nonlinear robust control strategy; it is driven by the estimation and tracking errors, and it actively estimates and compensates internal dynamic changes and external disturbances in real time. In electrical applications, ADRC has been used in dc-dc converters, gyroscopes, permanent magnet synchronous motors and flywheel energy storage systems [29]-[32]. Compared with a PI control strategy, an ADRC strategy in the outer voltage loop is expected to provide the desired current reference for the inner current loop and to improve the Vienna rectifier's robustness and tracking performance.
In this study, a practical PBC with ADRC control strategy is proposed to improve the performance of the Vienna rectifier under unbalanced grid conditions. The Vienna rectifier topology is analysed thoroughly, and the line-voltage-based mathematical model in Euler-Lagrange (EL) form is derived. In addition, the passivity of the Vienna rectifier is demonstrated, and the implementation differential equations are given. Furthermore, a PBC with ADRC based controller is designed. Simulation verification based on SIMULINK is carried out, and experiments on a downsized 5 kW prototype further verify the proposed control strategy for the Vienna rectifier. The rest of this study is organized as follows. In Section II, the Vienna rectifier topology is introduced and analysed. In Section III, the mathematical model based on the EL equation is derived. In Section IV, the PBC with ADRC based controller is designed according to the EL model. In Sections V and VI, the simulation and prototype experiment results are presented, respectively. Finally, Section VII concludes the study.
II. VIENNA RECTIFIER TOPOLOGY ANALYSIS
A. VIENNA RECTIFIER TOPOLOGY
The Vienna rectifier is made up of L, R, SW_i, VD_i1, and VD_i2 (i = a, b, and c), as depicted in Fig. 1, where L and R are the lumped inductor and the equivalent resistor, respectively, SW_i is the bidirectional power switch of phase i, and VD_i1 and VD_i2 are the power diodes of phase i.
B. VIENNA RECTIFIER PRINCIPLE
It is assumed that all the components are ideal. Taking phase i as an example, when the power switch SW_i is on, phase i is clamped to the dc midpoint (point O), and the Vienna ac voltage is equal to 0. When the power switch SW_i is off and the phase current is positive, the power diode VD_i1 conducts and the Vienna ac voltage is equal to v_dc1. When the power switch SW_i is off and the phase current is negative, the power diode VD_i2 conducts and the Vienna ac voltage is equal to -v_dc2. If the switching function is defined as S_i, then S_i describes the Vienna rectifier states uniquely.
According to the modulation principle and KCL, the Vienna rectifier ac and dc voltages can be expressed as follows, where v_rio is the Vienna rectifier ac voltage, v_dc and Δv_dc are the sum and difference voltages of the upper and lower capacitors, respectively, and sgn(x) is a sign function: when x ≥ 0, sgn(x) is equal to 1, and when x < 0, sgn(x) is equal to −1. It is assumed that the dc capacitor voltages are perfectly balanced due to the voltage balance control, from which we obtain,
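To make the switching logic above concrete, the following minimal Python sketch (not from the paper; names and the balanced-dc-link example are illustrative) evaluates the per-phase ac voltage from the switch state and the current direction, assuming ideal components.

    def sgn(x):
        """Sign function as defined in the text: +1 for x >= 0, -1 otherwise."""
        return 1 if x >= 0 else -1

    def vienna_phase_voltage(S_i, i_i, v_dc1, v_dc2):
        """Ac-side voltage v_rio of phase i.

        S_i   : switching function (1 = bidirectional switch SW_i on, 0 = off)
        i_i   : phase current, whose sign selects the conducting diode when SW_i is off
        v_dc1 : upper dc-link capacitor voltage
        v_dc2 : lower dc-link capacitor voltage
        """
        if S_i == 1:
            return 0.0                               # phase clamped to dc midpoint O
        return v_dc1 if sgn(i_i) > 0 else -v_dc2     # VD_i1 or VD_i2 conducts

    # Example: switch off, positive current, balanced 800 V dc link
    print(vienna_phase_voltage(0, 5.0, 400.0, 400.0))   # -> 400.0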
III. VIENNA RECTIFIER MATHEMATICAL MODEL
A. ABC COORDINATE MODEL
According to KVL, the Vienna rectifier ac voltages and currents can be expressed as follows, where v_ON is the potential difference between the dc midpoint (point O) and the ac midpoint (point N). Considering that i_a + i_b + i_c = 0 at all times, we can further write, where v_ab, v_bc, v_ca, S_ab, S_bc, and S_ca are the line voltages and line switching functions, respectively. Combining (3), (4), and (5), we obtain, where S and v denote the zero-sequence switching function and the zero-sequence voltage, respectively.
Then the potential difference v_ON can be described as follows. To make the Vienna rectifier operate stably, the synchronous rotating coordinate frame is adopted. The currents, line voltages, and line switching functions are transformed as follows, where i_d, i_q, v_d, v_q, S_d, and S_q are the dq-frame currents, line voltages and line switching functions, respectively, and T_1 and T_2 are the transformation matrices from the abc frame to the dq frame. Combining (6), (8), and (9), we can further obtain,
C. EULER-LAGRANGE MODEL
We can then write the mathematical model of the Vienna rectifier in EL form, where M is a positive definite symmetric coefficient matrix, i.e. M^T = M; J is an antisymmetric coefficient matrix, i.e. J^T = −J; R is a symmetric positive definite coefficient matrix, i.e. R^T = R and R > 0, which means that the converter is dissipative; X is the state variable vector; and V is the control input variable vector.
It is assumed that the system storage energy function is H(X), whose time derivative can then be computed. It can be seen that H(X) is positive semidefinite and Q(X) is positive definite. If the output variable vector Y is taken equal to X, the supply rate X^T V bounds the rate of change of the stored energy for any input. According to the passivity-based control principle, the Vienna rectifier is therefore strictly passive.
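The underlying equations are not reproduced above; they presumably follow the standard quadratic form used in PBC. A sketch consistent with the EL model just defined (an assumption, not copied from the paper) is:

\[
H(X) = \tfrac{1}{2} X^{\mathrm{T}} M X, \qquad
\dot{H}(X) = X^{\mathrm{T}} M \dot{X} = X^{\mathrm{T}} V - X^{\mathrm{T}} R X \le Y^{\mathrm{T}} V,
\]

where the antisymmetric term vanishes because \(X^{\mathrm{T}} J X = 0\); the dissipated power \(Q(X) = X^{\mathrm{T}} R X > 0\) for \(X \neq 0\), which is the strict-passivity condition.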
IV. PBC WITH ADRC CONTROLLER DESIGN
A. INNER LOOP PBC CURRENT CONTROL
The PBC control objective is that X converges to X* = (i_d* i_q* v_dc*)^T and that the error vector X_e = X − X* converges to 0, where i_d*, i_q*, and v_dc* are the ac current and dc voltage references, respectively. In order to accelerate the convergence of X_e to 0, the damping injection matrix R_d = diag{r_11, r_11, r_22} is used, where r_11 and r_22 are positive damping coefficients. The mathematical model with state vector X can then be rewritten in terms of the error vector X_e, and the line-voltage-based PBC control equation is selected accordingly. According to (10) and (17), the Vienna rectifier PBC control equations are obtained, from which the line and phase switching functions follow.
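The control equations themselves are not reproduced here; in the standard damping-injection form they would read as follows (a sketch under that assumption, using the EL matrices defined above):

\[
M \dot{X}_e + (J + R + R_d) X_e = 0,
\qquad\text{achieved by choosing}\qquad
V = M \dot{X}^{*} + (J + R) X^{*} - R_d X_e,
\]

so that the augmented dissipation \(R + R_d\) makes the error \(X_e\) decay faster than with the natural damping \(R\) alone.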
B. OUTER LOOP ADRC VOLTAGE CONTROL
To implement PBC, the dq-frame reference currents i_d* and i_q* are needed. In general, i_q* is set to 0 to realize unity power factor, or it is set according to the reactive power compensation requirement, while i_d* is obtained from the outer voltage loop, for which a traditional PI controller is normally used. To improve the static and dynamic performance, an ADRC controller can be used instead to compensate the disturbances of the system. The ADRC can be divided into three parts: the tracking differentiator (TD), the extended state observer (ESO) and the nonlinear state error feedback (NLSEF) [33], [34]. Accordingly, the controller design is divided into the TD, ESO and NLSEF designs, as depicted in Fig. 2.
1) DESIGN OF TD
A TD module can effectively improve the response speed and reduce the voltage overshoot, and a first-order TD module can be selected to track the dc voltage reference, where x_1 is the tracking signal of the dc voltage reference, α_1 is the TD speed coefficient (the greater α_1, the higher the tracking speed), and δ_1 is the coefficient of the TD non-linear function, which can be described as follows.
2) DESIGN OF ESO
An ESO module provides a real-time estimate of the dc voltage. The inputs of the ESO are the input and output of the Vienna rectifier, i.e. the d-axis current reference i_d* and the dc voltage v_dc; the outputs of the ESO are the estimated state and the extended (disturbance) state. In the ESO implementation, z_1 and z_2 are the real-time estimate and the extended state value, respectively, e is the error between the output estimate and the input signal, b is a feedback coefficient of the ADRC output, β_1 and β_2 are weight coefficients, and fal() is a non-linear smooth function. In fal(), α is a tracking coefficient, which ranges from 0 to 1; the smaller the tracking coefficient, the faster the tracking. δ is a filter coefficient; the greater δ, the better the filtering, but the tracking delay increases, so a suitable value should be selected in the application.
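The ESO equations are referenced but not reproduced above; the following discrete-time sketch shows the generic Han-type second-order ESO for a first-order plant that the description suggests (an assumption, not the paper's exact implementation; all names are illustrative).

    def fal(e, alpha, delta):
        """Han's non-linear smooth function: linear inside |e| <= delta, |e|^alpha outside."""
        if abs(e) <= delta:
            return e / (delta ** (1.0 - alpha))
        return (abs(e) ** alpha) * (1.0 if e >= 0 else -1.0)

    def eso_step(z1, z2, y, u, h, b, beta1, beta2, alpha, delta):
        """One Euler step of a second-order extended state observer.

        z1, z2 : estimated output and estimated total disturbance
        y, u   : measured plant output (dc voltage) and control input (d-axis current reference)
        h      : sampling period; b, beta1, beta2, alpha, delta as defined in the text
        """
        e = z1 - y
        z1_next = z1 + h * (z2 - beta1 * e + b * u)
        z2_next = z2 + h * (-beta2 * fal(e, alpha, delta))
        return z1_next, z2_next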
3) DESIGN OF NLSEF
The output of the NLSEF is determined by the difference between the reference estimate and the actual estimate, where α_4 and δ_4 are the tracking coefficient and the filter coefficient, respectively, and β_3 is a weight coefficient.
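A typical first-order NLSEF with disturbance compensation, consistent with the structure described above and reusing fal() from the ESO sketch (a sketch only; the paper's exact expression is not reproduced here):

    def nlsef(x1, z1, z2, b, beta3, alpha4, delta4):
        """Non-linear state error feedback producing the d-axis current reference.

        x1     : TD output (tracked dc voltage reference)
        z1, z2 : ESO estimates of the output and of the total disturbance
        """
        e1 = x1 - z1
        u0 = beta3 * fal(e1, alpha4, delta4)   # non-linear feedback of the tracking error
        return u0 - z2 / b                     # compensate the estimated disturbance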
C. DC BALANCE IMPLEMENTATION
In the previous analysis it was assumed that the voltages of the upper and lower capacitors are perfectly balanced; in practice, a dc voltage balance control must be adopted to balance them. A zero-sequence voltage injection scheme is used in this study, where v_zsvi is the injected zero-sequence voltage and sgnn(x) is a sign function: when x ≥ 0, sgnn(x) is equal to 1, and when x < 0, sgnn(x) is equal to 0. The zero-sequence voltage is injected only when the current direction is not consistent with the polarity of the dc voltage difference, and its magnitude is determined by the dc voltage difference and the capacitance value. The Vienna rectifier ac voltages are then obtained and modulated, using space vector modulation or sinusoidal pulse width modulation, to drive the power switches SW_a, SW_b, and SW_c.
The ADRC control is used in the outer voltage loop to generate the active reference current i_d*, and the line-voltage-based PBC control is used in the inner current loop to obtain the output voltage reference v_rio. The PBC with ADRC control diagram of the Vienna rectifier is illustrated in Fig. 3.
V. SIMULATION VERIFICATION
To verify the proposed PBC with ADRC control strategy, simulations under balanced and various unbalanced grid conditions are carried out in SIMULINK. The main circuit parameters of the Vienna rectifier are listed in Table 1, and the control coefficients are listed in Table 2.
A. BALANCED GRID SIMULATION
In Fig. 4, the three-phase grid has balanced voltages with sinusoidal waveforms, equal amplitudes and equal phase differences. The Vienna rectifier currents are also balanced sinusoidal waveforms with equal amplitudes and phase differences, and each current is in phase with its corresponding voltage. The dc voltage is stable at 800 V, and the upper and lower capacitors share the same voltage. This proves that the proposed PBC with ADRC control strategy can operate the Vienna rectifier stably with unity power factor and low total harmonic distortion (THD) under balanced grid conditions.
B. UNBALANCED GRID SIMULATION
In Fig. 5, voltage and current waveforms under different unbalanced grid conditions are shown. In Fig. 5a, the three-phase grid initially has balanced voltages. At time t_1, the voltage of phase A drops from 220 V to 110 V; the Vienna rectifier currents remain balanced sinusoidal waveforms, their amplitudes increase gradually, and they remain in phase with the corresponding voltages. At time t_2, the voltage of phase A returns to its normal value, and the Vienna rectifier currents gradually return to their original values. In the transient process there are some dc voltage fluctuations, and the transient time is less than 0.02 s.
In Fig. 5b, the three-phase grid initially has balanced voltages. At time t_3, fifth-harmonic voltages, whose amplitudes are 10% of the fundamental voltage amplitudes, are injected into phases A, B, and C, respectively. The Vienna rectifier currents remain balanced sinusoidal waveforms without harmonics, phase shift or amplitude change. At time t_4, the fifth-harmonic voltages are reset to 0, and the Vienna rectifier currents show no obvious change. In the transient process the dc voltages show no obvious change, meaning that the grid harmonics have no influence on the Vienna rectifier.
In Fig. 5c, the grid voltages are initially balanced. At time t_5, a positive dc offset, whose amplitude is 20% of the fundamental voltage amplitude, is injected into the voltage of phase A. The Vienna rectifier currents remain balanced sinusoidal waveforms without dc offset, phase shift or amplitude change. At time t_6, the dc offset is reset to 0, and the Vienna rectifier currents show no obvious change. In the transient process the dc voltages show only very small fluctuations.
From the simulation results we conclude that, whether under voltage drop, harmonic injection or dc offset unbalanced grid conditions, the proposed PBC with ADRC control strategy can operate the Vienna rectifier stably with high power quality and good output performance: the grid currents are always balanced sinusoidal waveforms with unity power factor and low THD, and the output voltage is always stable at the set value.
C. ADRC SIMULATION
The dc voltage waveforms under PI and ADRC control are shown in Fig. 6. At time t = 0, the main circuit breaker is switched on, the power supply starts to charge the dc capacitors, the dc voltages increase gradually, and the output voltage ultimately reaches 360 V. At time t = 0.1 s, the soft-start resistor is bypassed, the dc voltages increase rapidly, and the output voltage finally reaches 515 V. At time t = 0.2 s, the controller starts to operate, and the output voltage increases and is stabilized at 800 V after a transient period. The PI control has a response time of about 0.07 s with a voltage overshoot of 100 V, whereas the ADRC control has a response time of only 0.03 s without voltage overshoot, showing that the ADRC control outperforms the PI control.
VI. EXPERIMENTAL VERIFICATION
In order to further verify the effectiveness of the proposed PBC with ADRC control strategy, a downsized 5 kW Vienna rectifier prototype is built, as shown in Fig. 7. The main circuit parameters are listed in Table 3, and experiments under balanced and various unbalanced grid conditions are carried out on the prototype. Fig. 8 gives the voltage and current waveforms of the Vienna rectifier under balanced grid conditions. The proposed PBC with ADRC strategy operates the Vienna rectifier stably with sinusoidal current and unity power factor, while stabilizing the output voltage to drive the load.
In Fig. 9a, the Vienna rectifier initially operates stably under balanced grid conditions. At time t_1, the voltage of phase B drops from 87 V to 40 V, i.e. the line voltages v_ab and v_bc change from 150 V to 120 V, while the line voltage v_ca does not change. The phase current i_b increases gradually, remaining a sinusoidal wave with no phase-angle shift. In Fig. 9b, at time t_2, the voltages return to the balanced grid condition, and the phase current i_b decreases gradually, again with a sinusoidal wave and no phase-angle shift. In all these transient processes the output voltage remains constant to drive the load, and the grid currents remain sinusoidal with unity power factor.
In Fig. 10a, the Vienna rectifier initially operates stably under balanced grid conditions. At time t_3, a fifth-harmonic voltage, whose amplitude is 20% of the fundamental voltage amplitude, is superposed onto the voltage of phase B; the THD of the line voltages v_ab and v_bc increases, while the THD of the line voltage v_ca does not change. The phase currents i_a and i_b increase gradually with no fifth harmonic, no phase-angle shift and no obvious increase in current THD, and the Vienna rectifier currents remain balanced. In Fig. 10b, at time t_4, the fifth-harmonic voltage is reset to 0, and the grid voltages simultaneously undergo a 90°-leading phase shift. The phase currents i_a and i_b also show 90°-leading phase shifts, the Vienna rectifier currents reach steady state after a transient of about 38 ms, and the currents then begin to increase gradually. During the transient, the Vienna rectifier currents are always kept within the rated current range without over-current, which validates that the proposed solution has not only good steady-state performance but also good transient performance.
In Fig. 11a, the Vienna rectifier initially operates stably under balanced grid conditions. At time t_5, a dc offset of 25 V is superposed onto the phase B voltage; the line voltage v_ab shifts negatively, while the line voltage v_ca does not change. The Vienna rectifier currents i_a and i_b remain balanced without any obvious amplitude change or phase-angle shift. In Fig. 11b, at time t_6, the dc offset is reset to 0, and the Vienna rectifier currents again remain balanced without any obvious amplitude change or phase-angle shift.
From the experimental results we conclude that the proposed solution ensures the stable operation of the Vienna rectifier with good output performance and high power quality under both balanced and various unbalanced grid conditions, including voltage drop, harmonic distortion and dc offset injection. The experimental results strongly support the analysis given above and reinforce the simulation results.
VII. CONCLUSION
This study proposes a practical control strategy based on PBC with ADRC for the Vienna rectifier. The proposed strategy adopts a line-voltage-based PBC in the inner current loop and ADRC in the outer voltage loop. There is no need for negative-sequence transformation or for harmonic and dc component detection under unbalanced conditions; only instantaneous voltage and current measurements and controls are required. The control strategy therefore offers easy implementation, a short computation time, good performance, strong robustness and fast tracking performance.
| 5,817 | 2020-01-01T00:00:00.000 | [ "Engineering" ] |
Carrier type inversion in quasi-free standing graphene: studies of local electronic and structural properties
We investigate the local surface potential and Raman characteristics of as-grown and ex-situ hydrogen intercalated quasi-free standing graphene on 4H-SiC(0001) grown by chemical vapor deposition. Upon intercalation, transport measurements reveal a change in the carrier type from n- to p-type, accompanied by a more than three-fold increase in carrier mobility, up to μh ≈ 4540 cm2 V−1 s−1. On a local scale, Kelvin probe force microscopy provides a complete and detailed map of the surface potential distribution of graphene domains of different thicknesses. Rearrangement of graphene layers upon intercalation to (n + 1)LG, where n is the number of graphene layers (LG) before intercalation, is demonstrated. This is accompanied by a significant increase in the work function of the graphene after the H2-intercalation, which confirms the change of majority carriers from electrons to holes. Raman spectroscopy and mapping corroborate surface potential studies.
While several groups have investigated the structural properties and electronic band structure of H2-intercalated graphene 7,8,10,12, there are currently no layer-specific studies demonstrating the changes in the local electronic properties (e.g. surface potential or work function) of graphene after intercalation. In this paper, we present the effects of H2-intercalation on the local electronic and structural properties of the QFSG. The number of graphene layers was verified using Raman spectroscopy and mapping, whereas a detailed image of the surface potential of the layer structure was constructed using frequency-modulated Kelvin probe force microscopy (FM-KPFM) 31. The study of the high-resolution surface potential maps with the aid of Raman spectroscopy provided direct evidence of the consequent increase in the number of graphene layers upon intercalation (i.e. to (n + 1)LG, where n is the number of graphene layers (LG) before intercalation). This is accompanied by a considerable increase of the work function upon intercalation, which is evidence for the change of the carrier type from electrons to holes, with the Fermi level straddling either side of the Dirac point as a function of H2-intercalation.
Results
As-grown graphene sample. Based on the van der Pauw measurements on the as-grown sample, the carrier concentration and electron mobility were determined to be n_e ≈ 1.8 × 10^12 cm^−2 and μ_e ≈ 1370 cm^2 V^−1 s^−1, respectively.
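As a quick consistency check (not part of the original paper), the implied sheet resistance R_s = 1/(n e μ) can be computed from the reported transport values; the sketch below simply reuses the carrier densities and mobilities quoted in the text for the as-grown and intercalated samples.

    E = 1.602e-19  # elementary charge in C

    def sheet_resistance(n_cm2, mu_cm2_per_Vs):
        """Sheet resistance in ohm/sq from carrier density (cm^-2) and mobility (cm^2/Vs)."""
        return 1.0 / (n_cm2 * E * mu_cm2_per_Vs)

    print(sheet_resistance(1.8e12, 1370))   # as-grown sample:    ~2.5 kOhm/sq
    print(sheet_resistance(1.5e13, 4540))   # intercalated sample: ~90 Ohm/sq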
To investigate the layer structure of the as-grown graphene sample, Raman spectroscopy and mapping were employed (Figs. 1a-c). The G peak intensity and 2D peak shift Raman maps presented in Fig. 1a,b, respectively, clearly show two main features: the terraces and the terrace edges, covered with graphene of different thicknesses. For additional Raman maps, including the intensity, shift and full-width-at-half-maximum (FWHM) of the G and 2D peaks, see Supplementary Fig. S1. Three individual spectra taken at the terraces and edges are plotted in Fig. 1c, and a summary of the Raman peak analysis is presented in Table 1. The red spectrum in Fig. 1c was collected on a terrace of the graphene sample. The top inset in Fig. 1c shows the 2D peak fitted with a single Lorentzian. The single-Lorentzian fit and the narrow FWHM of 35 cm−1 (ref. 14) indicate that the areas plotted in red on the Raman map (Fig. 1b) are indeed 1LG. This method was repeated for the green areas at the terrace edges, where the G peak exhibits a significant increase in intensity (Fig. 1a,c) and the 2D peak is broader than that of 1LG (FWHM = 62 cm−1). Moreover, the 2D peak at the terrace edge is blue-shifted (taking the position of the maximum of the overall fit) towards higher wavenumbers by ~33 cm−1 compared to 1LG (Fig. 1b,c). This peak shows the typical line shape of AB-stacked 2LG and can be fitted with four Lorentzians 15,16. While the G peak intensity can be influenced by the twist angle between two graphene layers that are not AB stacked 17, the 2D peak shift and line shape give a better indication of the number of layers in this particular case. A representative spectrum collected from the blue area of the terrace edge is plotted in blue in Fig. 1c. This blue-shifted 2D peak (~48 cm−1 and ~15 cm−1 compared to the 1LG and 2LG 2D peaks, respectively) is much broader (FWHM = 75 cm−1) than that of 1LG and 2LG, possibly indicating the presence of 3LG. There are some reports in the literature showing that fitting the line shape of graphene with six Lorentzian components is an indication of 3LG 18. However, it is important to point out that, while our fits of 1LG and 2LG with one and four Lorentzians, respectively, clearly show the expected line shapes, the fitting of 3LG with six Lorentzians is not entirely justified given the spatial resolution of the system; in this case the Raman signal contains contributions from both 2LG and 3LG. The same could be true for 2LG, as the signal might potentially contain contributions from 1LG; however, the area of 2LG where the representative spectrum was taken (Fig. 1c) is larger than the spatial resolution of our system. Some small variations of the 2D peak shift (~6 cm−1) within the terraces are visible in Fig. 1b, forming non-uniform areas of ~1 μm in size. Additionally, deviations of the G peak shift (~4 cm−1) have been measured and are presented in Supplementary Information Fig. S1.

Figure 1. Raman maps (10 × 10 μm^2) of the G peak intensity (a and d) and 2D peak shift (b and e) for the as-grown (a and b) and intercalated (d and e) samples. Raman spectra taken on the terraces and edges: (c) as-grown sample, with 1LG, 2LG and 3LG depicted by red, green and blue lines, respectively; (f) intercalated sample, with 2LG and 3LG depicted by green and blue lines, respectively. The insets in (c) and (f) show the selected 2D peaks fitted with Lorentzians.
It has been shown that in graphene on SiC the presence of residual strain in the carbon lattice can result in variations in the 2D peak shift 19 . Furthermore, these variations may also be related to charge inhomogeneities 20,21 . Since the 2D peak in graphene is directly related to the Fermi energy, the 2D peak shift can be additionally influenced by doping. In particular, because of the linear dispersion relation, 1LG is far more sensitive to doping than thicker layers, where the dispersion relation is parabolic. While many groups 15,20,21 use the position of G and 2D peaks as a powerful technique to measure the carrier concentration of exfoliated graphene on SiO 2 , using these studies as a reference to determine the doping and charge inhomogeneities in CVD graphene on SiC would be inaccurate, since the interactions between graphene and supporting substrate are different. Thus, the combination of strain and charge carrier coupling may be the origin of the fluctuations of 2D and G peak positions on the terraces 19 .
To further evaluate and resolve the different graphene layers in the as-grown sample, FM-KPFM was used to produce the topography and surface potential maps shown in Fig. 2a,b, respectively. Figure 2a reveals that the terraces are ~4 μm wide and ~5 nm high. The representative 10 × 10 μm^2 map of the surface potential reveals SiC terraces covered by a continuous layer of 1LG (Fig. 2b). 2LG covers a small portion of the terrace edges (see the narrow band in the top left corner of Fig. 2b), while most of them are covered with 3LG (exhibiting the brightest contrast). In addition to these main features, the terraces are decorated with 2LG islands of ~500 nm in size, as identified from their contrast. Both the substrate preparation and the CVD growth conditions may contribute to the formation of these 2LG islands. For further assessment of the different contrast levels, the histogram in Fig. 2c was used.
For the as-grown sample, we report the difference in the surface potential (ΔU_CPD) between 1LG and 2LG to be ΔU_CPD(1LG−2LG) = −127 mV. When the tip is biased, the increase in the surface potential, and consequently in the brightness of the FM-KPFM images, with increasing number of layers is characteristic of n-type doping. For practical applications it is often important to determine the work function of the material. To quantitatively measure the work function of each graphene layer, the work function of the scanning probe microscopy (SPM) tip was calibrated to be Φ_Tip = 4.52 ± 0.05 eV (see the Experimental section and Fig. 2d). The schematic energy band diagrams of 1, 2 and 3LG are displayed in Fig. 2e. The corresponding graphene work functions were measured to be Φ_1LG = 4.78 ± 0.03 eV, Φ_2LG = 4.60 ± 0.05 eV and Φ_3LG = 4.50 ± 0.08 eV, respectively (for details regarding the tip calibration, see the Experimental section). The decrease in work function (increase of electron carrier concentration) with increasing number of layers confirms the n-type character of the as-grown sample. Having correlated the Raman characteristics with the surface potential maps, it is then possible to conclude that the SiC terraces are indeed covered with 1LG, whereas thicker graphene (2-3LG) grows at the terrace edges, resulting in a lower local work function.
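A minimal numerical sketch of how such work-function values follow from the measured contact potential difference is given below (illustrative only; the sign convention of U_CPD depends on the instrument setup, and the example numbers are simply back-calculated from the values quoted above).

    PHI_AU = 4.99  # work function of the gold reference in eV

    def tip_work_function(u_cpd_ref):
        """Calibrate the tip against gold: Phi_tip ~ Phi_Au + e*U_CPD (U_CPD in volts, e = 1 in eV/V)."""
        return PHI_AU + u_cpd_ref

    def sample_work_function(phi_tip, u_cpd):
        """Work function of the scanned area: Phi_sample ~ Phi_tip - e*U_CPD."""
        return phi_tip - u_cpd

    # Example: a tip calibrated to 4.52 eV and a measured U_CPD of -0.26 V
    # would give a sample work function of about 4.78 eV (as reported for 1LG).
    print(sample_work_function(4.52, -0.26))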
Ex-situ intercalated graphene sample. The ex-situ intercalated sample (i.e. the H2-intercalated version of the as-grown sample described above) was measured using the Hall effect in the van der Pauw geometry, giving a hole carrier concentration and mobility of n_h ≈ 1.5 × 10^13 cm^−2 and μ_h ≈ 4540 cm^2 V^−1 s^−1 (i.e. more than three times greater than for the as-grown sample), respectively. The transformation of the graphene from an electron- to a hole-doped material is a fingerprint of the successful intercalation of the sample. Figures 1d-f show the G peak intensity and 2D peak shift maps of the intercalated sample (for additional Raman mapping, see Supplementary Information, Fig. S2). Similar to the Raman analysis of the as-grown sample, individual representative spectra taken at the terraces and edges are plotted in Fig. 1f and a summary of the analysis is presented in Table 1. In Fig. 1e, the 2D peak of the spectrum taken on the terraces (depicted in green) is significantly blue-shifted compared to the as-grown sample (15 cm−1). In addition, the peak is broader, with a FWHM of 58 cm−1 (top inset of Fig. 1f). The line shape as well as the blue-shift of the 2D peak is a clear indication of AB-stacked 2LG covering the terraces, which is in agreement with previous reports on intercalated graphene on 6H-SiC(0001) 15,22. Analysis of the terrace edges (depicted in blue) demonstrates that the 2D peak has a FWHM of ~71 cm−1 and can be fitted with six Lorentzians 18. This further confirms the increased thickness of graphene at the edges, implying that it is now 3LG, or a mixture of 2 and 3LG, given the spatial resolution of the Raman system. It is important to note that 4LG, which was observed in FM-KPFM, was not resolved by Raman due to limitations in spatial resolution. By demonstrating that the terraces are covered with 2LG upon H2-intercalation, we prove that the IFL (interfacial layer), which was under the 1LG in the case of the as-grown sample, is now transformed into the new first graphene layer. This results in the general rearrangement of the graphene layers to (n + 1)LG, where n is the number of layers before intercalation.
After intercalation, some inhomogeneity of the 2D peak shift map on the terraces (i.e. 2LG) is still observed (Fig. 1e). Compared with the as-grown sample, after intercalation the shifts of the 2D peak position are limited to ~2 cm−1 (~1 cm−1 for the G peak, see Supplementary Information, Fig. S2). This indirectly supports the quasi-free standing nature of the H2-intercalated graphene. Having established the successful transformation of the as-grown graphene to QFSG (it should be noted that the sample maintains its excellent morphology, as presented in the topography map in Fig. 3a), we further consider the changes in the surface potential map of the intercalated sample to observe the effect of H2-intercalation on the local electronic properties of QFSG. The map of the surface potential and the corresponding histogram (Fig. 3b,c) show substantial changes compared to the as-grown sample. Taking into consideration the three contrast levels and correlating the three different regions with the Raman maps presented in Fig. 1d,e, we deduce that the terraces are now completely covered with continuous 2LG and the previously observed 2LG and 3LG features (islands and terrace edges) are now transformed into 3LG and 4LG, respectively. The p-type character of the intercalated sample (hole conductivity) results in a decrease in the surface potential with increasing number of graphene layers. This is attributed to the fact that U_CPD between the tip and the sample becomes more negative with increasing number of layers (Fig. 3c), which can be inferred by comparing the schematic band diagrams of the as-grown and intercalated graphene in Figs. 2e and 3d. Upon intercalation, the surface potential difference between 2LG and 3LG is ΔU_CPD(2LG−3LG) = −54 mV, whereas between 3LG and 4LG it is ΔU_CPD(3LG−4LG) = −59 mV. In addition to the features described above (terrace edges and islands), some bright patches are observed in the middle of the terraces, which show a surface potential difference of ~36 mV with respect to 2LG, resulting in a relatively low contrast difference. These features are of approximately the same size as the inhomogeneities seen in the maps of the 2D peak shift in Fig. 1e. From the topography, these patches are elevated by ~200 pm with respect to the 2LG. It can be speculated that these features might be due to hydrocarbon species 23 or hydrogen atoms 8 trapped underneath the graphene layers, which slightly elevate the graphene. The schematic diagram of the energy band structure for 2LG and 3LG of the intercalated graphene is shown in Fig. 3d. The work functions for 2LG and 3LG were calculated as Φ_2LG = 4.98 ± 0.03 eV and Φ_3LG = 5.07 ± 0.04 eV, respectively. It should be noted that the measurements on the as-grown and intercalated samples were performed using different SPM tips; in the latter case, the calibrated work function of the tip is Φ_Tip = 4.88 ± 0.01 eV. The significant increase in work function compared to the as-grown sample suggests that the Fermi energy crosses the charge neutrality point, thus providing independent proof that the conductivity changes from n- to p-type upon intercalation.
With the annealing of the sample in a hydrogen environment at temperatures around 1100 °C, the H2 molecules enter the graphene stack from the terrace edges and defect sites, where their position is more energetically favorable 24. The intercalated hydrogen molecules then dissociate into H atoms to form Si-H bonds, which decouple and lift the IFL from the SiC substrate 8,24,25 and convert it into 1LG (Fig. 3e). As discussed previously, the intercalation transforms the as-grown graphene from electron- to hole-doped. The hole-doping of the QFSG can be explained by the spontaneous polarization of the substrate, as shown, for example, in Refs. 11,26. In addition, by decoupling the IFL, the charge transfer from the SiC is reduced and only environmental p-doping affects the sample. One of the major improvements triggered by the H2-intercalation is the significant increase in the mobility of the graphene layer. This is typically attributed to the decoupling of the graphene layers from the substrate, which suppresses phonon scattering, the main mechanism of mobility degradation in the case of the as-grown graphene 7,27. Other mechanisms, such as the transformation of graphene to graphane (a form of hydrogenated graphene with sp3-bonded carbon-hydrogen bonds), have also been examined experimentally and theoretically 13. In the case of QFSG, one possible remaining mechanism of mobility degradation is Coulomb scattering from charged impurities 9,27.
Discussion
We investigated the effects of ex-situ H2-intercalation of CVD-grown graphene on 4H-SiC(0001) using bulk transport measurements, local surface potential mapping, and Raman spectroscopy and mapping. Transport measurements on the as-grown sample demonstrated n-type doping with μ_e ≈ 1370 cm^2 V^−1 s^−1. Following the ex-situ intercalation, the graphene conductivity switched to p-type, which, along with a significant increase in mobility, μ_h ≈ 4540 cm^2 V^−1 s^−1, is indicative of successful intercalation. The FM-KPFM measurements of the as-grown sample revealed that the SiC terraces were covered predominantly by 1LG and additionally decorated with 2LG islands and 2LG/3LG edges. The work function measurements also indicated an increase in the electron concentration (i.e. a decrease in work function) as the number of layers increased. In contrast, the H2-intercalated sample exhibited an increase of the work function as the number of graphene layers increased. This is the ultimate proof that the Fermi energy crossed the charge neutrality point from n- to p-type upon intercalation. In addition, for the first time a high-resolution image of the surface potential of intercalated graphene was constructed, which provided a detailed understanding of the layer structure and its transformation upon decoupling from the substrate. The Raman studies proved that upon intercalation the 1LG has been transformed into 2LG and, in general, the as-grown layers (n) have been transformed into (n + 1)LG, following the conversion of the IFL into 1LG.
Thus, using local, layer-resolving techniques, we demonstrate successful transformation of graphene covalently bound to the substrate into QFSG with superior electronic properties. QFSG is one of the preferable candidates for high-speed electronics, as the decoupling of the IFL from the SiC substrate increases the mobility dramatically, while maintaining the excellent intrinsic electronic and topographic structure.
Sample growth and H 2 -intercalation.
For this study, graphene samples were grown by the CVD method at 1600 °C under a laminar argon flow in an Aixtron VP508 hot-wall reactor. Semi-insulating, on-axis oriented 4H-SiC(0001) substrates (Cree) of 10 × 10 mm^2 size were cut from a 4-inch wafer and etched in hydrogen at 1600 °C prior to the epitaxy process. Graphene growth was controlled by the Ar pressure, the Ar linear flow velocity and the reactor temperature. The process relies critically on the creation of dynamic flow conditions in the reactor, which control the Si sublimation rate and enable the mass transport of hydrocarbons to the SiC substrate. Tuning the value of the Reynolds number enables the formation of an Ar boundary layer that is thick enough to prevent Si sublimation while allowing diffusion of hydrocarbons to the SiC surface, followed by CVD growth of graphene on the SiC surface. The ex-situ intercalation of hydrogen into the same sample was achieved by annealing the sample in molecular hydrogen at a temperature of 1100-1200 °C and a reactor pressure of 900 mbar. Cooling down in the H2 atmosphere keeps the hydrogen atoms trapped between the graphene and the substrate. Prior to unloading the sample, the process gas was changed back to argon 6,8.

Measurements. The mobility and carrier concentration of the as-grown and ex-situ hydrogen-intercalated samples were characterized using Hall effect measurements in the van der Pauw geometry under ambient conditions. Raman maps of 10 × 10 μm^2 were obtained using a Horiba Jobin-Yvon HR800 system in order to investigate the structure of the graphene samples. A 532 nm laser (5.9 mW power) was focused through a 100× objective lens onto the graphene sample. The spectral resolution was 1.59 cm−1. A Raman spectrum was first obtained for a reference SiC substrate and then used to subtract the substrate-related signal, allowing effective separation of the Raman peaks originating only from the graphene. The Raman maps were constructed by mapping the G and 2D peak intensity, shift and FWHM of 3025 individual spectra with an XY resolution of 0.2 μm. The G peak (~1582 cm−1) originates from the first-order scattering process due to the doubly degenerate phonon mode vibrations at the centre of the Brillouin zone 15,28,29. The 2D peak (~2700 cm−1) originates from the double-resonance scattering process near the K point and exhibits a dispersive behavior. A characteristic feature of an increasing number of graphene layers on SiC is the blue shift of the 2D peak 15. Furthermore, the 2D peak of 1LG can be fitted with a single Lorentzian, whereas for 2LG and 3LG it is fitted with four (indicative of AB stacking) and six Lorentzians, respectively 28,30; some limitations of the fitting process are discussed in the text.
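As an illustration of the multi-Lorentzian fitting described above, a minimal Python sketch is given below; the file name and the initial guesses are hypothetical and not taken from the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(x, x0, gamma, A):
        """Single Lorentzian with centre x0, FWHM gamma and amplitude A."""
        return A * (gamma / 2) ** 2 / ((x - x0) ** 2 + (gamma / 2) ** 2)

    def multi_lorentzian(x, *params):
        """Sum of N Lorentzians; params = (x0, gamma, A) repeated N times."""
        y = np.zeros_like(x, dtype=float)
        for i in range(0, len(params), 3):
            y += lorentzian(x, *params[i:i + 3])
        return y

    # Hypothetical 2D-band spectrum of a 2LG area: four Lorentzians are fitted.
    # wavenumber, counts = np.loadtxt("spectrum_2LG.txt", unpack=True)
    # p0 = [2690, 40, 1e3, 2700, 40, 1e3, 2710, 40, 1e3, 2720, 40, 1e3]
    # popt, _ = curve_fit(multi_lorentzian, wavenumber, counts, p0=p0)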
A Bruker Dimension Icon scanning probe microscope was employed to investigate the surface potential of the graphene samples under ambient conditions. Despite the high resolution of an atomic force microscope, traditional topography maps cannot be reliably used to identify the number of layers in graphene grown on SiC [30-33]. For this study, doped Si PFQNE-AL tips (Bruker) with a spring constant k ≈ 0.4-1.2 N m−1 were used in single-pass tapping mode. In FM-KPFM, the cantilever oscillates at its mechanical resonant frequency f_0 ≈ 300 kHz, while an additional AC voltage at a much lower frequency, f_mod ≈ 5 kHz with V_mod ≈ 2-5 V, is applied to induce a frequency shift (f_0 ± f_mod). The f_0 ± f_mod side lobes are monitored by a PID feedback loop, which applies a compensating DC voltage to minimize the side lobes, thus acquiring the surface potential map. Since FM-KPFM detects the force gradient via the frequency shift, it can achieve a spatial resolution of < 20 nm, limited only by the tip apex diameter 32,34,35. Thus, FM-KPFM allows the surface potential [i.e. the contact potential difference (U_CPD)] to be determined with great accuracy for each number of graphene layers. To further investigate the surface potential of the graphene samples, the work function of the PFQNE-AL tip (Φ_Tip) was calibrated against a gold reference sample using the approximation Φ_Tip ≈ Φ_Au + eU_CPD (Fig. 2d), where the work function of gold, Φ_Au = 4.99 ± 0.003 eV, was measured using ultraviolet photoelectron spectroscopy 31. Following the tip work function calibration, small areas (~2-5 μm) of the graphene samples were scanned and, using Φ_Sample ≈ Φ_Tip − eU_CPD, a work function was determined for each graphene layer. It is important to stress that the surface potential maps and the work function measurements were performed on different days; therefore, variations in the relative humidity of the ambient air may lead to changes in the surface doping, which explains the discrepancy.
| 5,343.8 | 2015-06-01T00:00:00.000 | [ "Materials Science" ] |
Modelling of a hydraulic system coupled with lumped masses
ABSTRACT A coupled hydraulic-mechanical system with a lumped-parameter mechanical part has been set up, measured and mathematically modelled in the frequency domain. The main focus of this article is the identification of unknown system parameters, which depends on the models of coupling and dissipation. The set-up under investigation can be excited hydraulically, by flow rate, or mechanically, by force. The responding pressures of the hydraulic subsystem and the accelerations of the mechanical subsystem are measured, from which transfer functions between the excitation and the system states can be calculated. The property of reciprocity is used in the processing of the measurement data. With a suitable two-step strategy and non-linear optimization, unknown system parameters can be identified from measurements. Additionally, the agreement of model and measurement and the physical meaningfulness of these parameters are examined. The proposed model succeeds in predicting measured transfer functions whose data were not used for the identification of the model parameters.
Introduction
The presented article deals with mathematical modelling and model updating of a coupled hydraulic-mechanical system. Model updating is an active area of research whose goal is to calculate unknown parameters of a dynamic model in such a way that the best possible agreement between modelled and measured system responses is achieved. Various applications have been reported for mechanical systems, where the equation of motion is usually discretized by finite element (FE) techniques.
In model updating, frequency response function (FRF) data from measurements can be linked to the modelled FRFs by a suitable optimization criterion. The best known method to update FRFs in connection with FEM is the response function method (RFM) [1], which uses analytical sensitivities to solve the inverse problem. The advantage of this method is that it does not require a complete modal analysis of the system at hand. Since in model updating the convergence of the parameters to the correct values cannot be guaranteed if the model deviations are too large, a function-weighted sensitivity method was presented in [2]. With this approach, faster convergence and better robustness to measurement noise were shown. In [3], updating of the equations of motion was considered; using assurance criteria (POTMAC and KINMAC [22]), the model and measurement data matched much better than in the comparison of transfer functions, which proved to be more sensitive to the choice of model parameters.
An important aspect of model updating is the manner of updating and the strategy behind it. If there is enough measurement data to perform a complete modal analysis of a given mechanical structure, one can use the modal assurance criterion (MAC) [23], the coordinate modal assurance criterion (COMAC) [24] and the coordinate strain assurance criterion (COSMAC), as shown in [25], for parameter identification. If there is not enough data available, i.e. only certain FRFs, the frequency response assurance criterion (FRAC) [26] can be used and an appropriate optimization criterion can be set up. In addition, [25] also describes that the locations of the analytically calculated resonance frequencies can be compared with the corresponding measured ones and combined into a suitable quality functional. Another approach is offered by [27], where the Response Surface Method (RSM) [28] and Derringer's function method [29] are combined, using MAC values and FRF data for the formulation of an optimization criterion. A two-step procedure is described that updates the mass and stiffness matrices in the first step and only the damping parameters in the second step. This is done because these techniques produce large errors in the prediction of FRFs of damped systems near resonance and anti-resonance points if one tries to update everything in one step. In a later publication [30], this procedure is successfully applied to detect damage in a cantilever beam. An approach using a classical non-linear optimization problem with constraints, solved via Matlab, is offered by [31]. Here, the analytically calculated FRFs are evaluated at certain frequency points and compared with measurement data. Parameters for the mass matrix and the stiffness matrix as well as viscous damping are calculated in one step. This method was also verified by means of an experiment.
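For reference, the FRAC mentioned above compares a modelled and a measured FRF over a common frequency grid; a minimal sketch of the standard definition (illustrative, not taken from the article) is:

    import numpy as np

    def frac(H_model, H_meas):
        """Frequency Response Assurance Criterion between two complex FRF vectors.

        Returns a value between 0 and 1; 1 means the two FRFs are perfectly correlated.
        """
        H_model = np.asarray(H_model)
        H_meas = np.asarray(H_meas)
        num = abs(np.vdot(H_meas, H_model)) ** 2
        den = np.vdot(H_model, H_model).real * np.vdot(H_meas, H_meas).real
        return num / den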
Altogether, there are many publications on methods of model updating applied to mechanical systems via FE models. In this paper we deal with a coupled hydraulic-mechanical system, whose mathematical model cannot be expressed by a standard FE approach. Special emphasis is laid on dissipation. In the course of the experiments, it became more and more evident that viscous mechanical damping and linear resistances in the hydraulic subsystem are not sufficient to adequately model the measured dissipation effects. A damping model is therefore considered that couples the dissipation effects of the mechanical and hydraulic subsystems. Furthermore, three different damping models are compared to validate the necessity of this new form of modelling dissipation in the case of fluid-structure interaction (FSI). In addition, it becomes apparent that the plates closing the rigid-walled hydraulic volumes are themselves flexible, and thus FSI must be taken into account.
In [32], the hydraulic part of the test-bench shown here, the hydraulic chain oscillator, was studied separately. First, taking into account the coupling effects of the individual closing plates as well as an associated dissipation due to their movement, a more detailed mathematical model than in [18] was created. The model updating strategy used there proceeds in two stages, similar to that offered in [27,30], but the optimization criterion is formed from measured FRF data and the analytically calculated ones, similar to the method shown in [31]. The difference is that the analytically calculated FRFs are compared with the measured ones only at the locations of the resonances and anti-resonances, i.e. not every sampling step for which measured data are available is included in the optimization criterion. Additionally, the resulting constrained optimization task is transformed into an unconstrained one by incorporating a suitable differentiable function. The physical parameters of the chain oscillator were thus calculated and checked for reasonableness. With this approach, a good agreement between modelled and measured transfer functions could be achieved with realistic system parameters.
In this article, the findings and methods from [32] will now be used and applied to the entire test-bench. The property of reciprocity is used to correct the hard-to-measure flow rate with respect to its amplitude. The new physical models as well as the model-updating strategies will be applied to a coupled hydraulic-mechanical system for the first time. The calculated parameters of the system are checked for their physical meaningfulness to validate the presented model approach. In addition, it is also shown that the model has some predictive power. FRF measurement data not used for model updating are compared with the FRFs calculated by the model and their agreement is quantified by means of FRAC.
Experimental set-up
The photograph in Figure 1 displays the test-bench; some of its important components are marked with numbers and described in Table 1. The hydraulic subsystem consists of four cavities which are connected by short bores. The bores can be regarded as rigid-walled pipelines. The cavities and pipelines are milled into a massive steel block and closed by steel plates. It can be seen in Figure 1 that the first cavity is closed by a circular steel plate.
Figure 1. Test-bench
Two plates of different thicknesses are available to close this chamber: a thicker rigid one (as in the photograph), subsequently referred to as the rigid plate, and another one, half as thick, subsequently referred to as the coupling plate. It is therefore possible to investigate the system's behaviour depending on which closing plate is used for the first chamber. The cavities in the middle of the block are both closed with one thick plate. The last chamber is always closed by a coupling plate. The vertically arranged mechanical two-mass oscillator is attached to this plate by a screw in the centre. It consists of two masses of different weight, which are connected by leaf springs. Pressure pulsations of the fluid are transmitted via the coupling plate to the mechanical oscillator, and mechanical vibrations vice versa into the fluid. Trapped air can be removed via minimess connections and an additional outlet, which can be seen on the very right of the photograph. During an experiment, the additional outlet is closed with a ball valve. Figure 2 shows the inside of the first chamber; some details are again labelled with numbers, and the corresponding descriptions can be found in Table 1. The holes inside the chamber are the ends of the pipelines, which are drilled into the steel block. All chambers are sealed with an O-ring, as shown in this photograph. This rubber ring lies in a groove milled for it; by screwing on the plate, the ring is pressed into the groove and thus seals the chamber.
Measuring equipment
As can be seen in the schematic of the hydraulic circuit illustrated in Figure 3, there are two alternative ways of excitation. Hydraulic excitation is performed by the injected flow rate Q_V, which is controlled by an ultra-fast MOOG D760-995A servo valve whose spool position is measured. Mechanical excitation is exerted by the force F_H indicated at the top of the two-mass-oscillator. Figure 1 shows that there is a small circular plate on top of the oscillator, where an impact hammer can excite the system; the force of the hammer blow is measured. The pressure in each chamber as well as the pressure p_V of the inlet device are measured by pressure sensors. Small asymmetries of the mechanical construction can lead to tilting of the masses. To take this into account, four accelerometers are attached to each mass and the vertical acceleration of a mass is calculated as the mean value of these four signals. The outlet of the system is adjustable by a throttle. The signals of the pressure sensors are acquired with a 24 bit analog input card, the signals of the acceleration sensors and the valve spool position with a 16 bit analog input card. The spool position of the MOOG servo valve is controlled by the National Instruments software tool LabVIEW. The desired output signal is passed through a 16 bit analog output card and then fed to a signal amplifier, from which it is forwarded to the servo valve.
Preparing measurement data
As opposed to pressure, acceleration and force, the flow rate cannot be measured directly. If the oil flows through an orifice, the flow rate can be approximated by the orifice equation,
Q_V = Q_N ξ sqrt(Δp / p_N),   (1)
where Q_N is the nominal flow rate and Δp/p_N is the ratio of the pressure drop across the valve to the nominal pressure; ξ denotes the measured valve spool position relative to full opening. However, the nominal quantities in Eq. (1) are not exactly known. The property of reciprocity can provide a remedy for this issue, as explained in the following.
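As a minimal illustration of this approximation, the orifice relation can be evaluated as sketched below; the numerical values are illustrative placeholders, not the entries of Table 2.

```python
import numpy as np

def orifice_flow_rate(xi, dp, Q_N, p_N):
    """Approximate injected flow rate from the measured valve spool position.

    xi  : spool position relative to full opening (0..1)
    dp  : pressure drop across the valve [Pa]
    Q_N : nominal flow rate [m^3/s]
    p_N : nominal pressure drop [Pa]
    """
    return Q_N * xi * np.sqrt(dp / p_N)

# illustrative numbers only (not the values of Table 2)
Q = orifice_flow_rate(xi=0.1, dp=4.0e6, Q_N=6.3e-4, p_N=3.5e6)
print(f"Q approx {Q:.3e} m^3/s")
```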
Reciprocity
At first glance the model appears to have mechano-acoustical properties. Reciprocity relations of a mechano-acoustical system were described in [33] and, in more detail including the two-port theory, in [34]. In [35], a formal proof of the reciprocity of mechano-acoustical systems was given. It was also demonstrated that the reciprocal property is always valid if the differential equations of motion are symmetrical in the spatial variables; any asymmetries would become noticeable as a sound-absorbing effect. In [34], it is also pointed out that caution should be exercised with respect to reciprocity if there are poor connections in the mechanical system under consideration, which may cause non-linearities, or if other nonlinear effects are present.
A vibroacoustical system can be regarded as a two-port or power gate. For the test-bench under consideration, Figure 4 shows the two external inputs, the flow rate Q_V injected by the valve and the force F_H applied by the impact hammer, as well as the state variables in close proximity to the external inputs, the pressure p_V in the inlet device and the velocity v_10 of the upper mass. Together they form a two-port, i.e. a power gate. A system is considered reciprocal if there is a special relationship between the transfer functions of the two gates, so that they are coupling- or transmission-balanced, meaning that they have the same transmission behaviour in both directions. For a reciprocal model of the test-bench, this implies that the cross transfer functions of the two-port coincide (up to the sign conventions of the in- and outputs): the FRF from the injected flow rate to the velocity of the upper mass equals the FRF from the hammer force to the inlet pressure. Applying one input at a time, these relationships can be established from the measurements, yielding a simple relationship between two measured frequency response functions that can be used to calibrate the flow rate.
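A simple numerical check of this property is sketched below: assuming the reciprocity relation equates the magnitudes of the two measured cross FRFs, the relative deviation between them indicates how closely the test-bench behaves reciprocally.

```python
import numpy as np

def reciprocity_mismatch(H_v10_QV, H_pV_FH):
    """Relative magnitude deviation between the two cross FRFs of the power
    two-port (flow rate -> upper-mass velocity vs. hammer force -> inlet
    pressure).  For a perfectly reciprocal system, and up to the chosen sign
    convention, the deviation is zero at every frequency line."""
    return np.abs(np.abs(H_v10_QV) - np.abs(H_pV_FH)) / np.abs(H_pV_FH)

# synthetic example: identical FRFs give zero mismatch
f = np.linspace(20, 1000, 981)
H = 1.0 / (1 + 1j * f / 300.0)
print(reciprocity_mismatch(H, H).max())   # -> 0.0
```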
Reciprocity of the test-bench
The constants in Eq. (1) are now set according to Table 2. Q_N and p_N were taken from [36], where the same servo valve was used, and Δp is the mean pressure drop between the supply pressure and the pressure p_V in the inlet device.
In case of reciprocity, Eq. (1) for the injected flow rate is adapted for all measurements and replaced by Eq. (5), which includes a correction factor α* = 1.156. This factor is calculated such that the squared error between the magnitudes of the two measured cross transfer functions of the two-port is minimized (see Figure 5).
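A possible least-squares computation of such an amplitude correction factor is sketched below; the exact definition used in Eq. (5) may differ in detail.

```python
import numpy as np

def amplitude_correction(H_flow_side, H_force_side):
    """Scalar that matches the magnitudes of the two measured cross FRFs in
    the least-squares sense (reciprocity-based amplitude calibration).

    Solves min_a sum((a*|H_flow_side| - |H_force_side|)^2), which has the
    closed-form solution returned below."""
    m1, m2 = np.abs(H_flow_side), np.abs(H_force_side)
    return float(np.dot(m1, m2) / np.dot(m1, m1))
```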
Experimental conditions
The following conditions apply to all experiments described in this publication. The fluid used is the HLP oil Shell Tellus S2 MA 32, whose temperature is in the range of 32-35 °C for all experiments performed. With a supply pressure of 12 MPa, the mean valve spool position and the adjustment of the outlet throttle are selected such that a mean flow rate of approx. 8.3 × 10^-5 m^3/s flows permanently through the system, which causes a mean pressure of approx. 8 MPa in each chamber. In case of flow rate excitation, a chirp-shaped flow rate with an amplitude of approx. 1.3 × 10^-5 m^3/s is superimposed on the mean flow rate; the chirp signal lasts 4 s and sweeps from 20 to 1000 Hz. In case of force excitation, the system's vibrations are measured for approx. 3 s. To reduce the influence of noise, each experiment is repeated eight times and the average of these eight individual measurements is used for the identification. To assess the scatter of these measurements, the frequency response function between the measured flow rate Q_V^M and the pressure in the inlet pipeline p_V^M is shown in Figure 6. Red represents the ±2 standard deviation range, calculated from the eight individual transfer functions; black is the corresponding average. It can be seen that the measurement uncertainties are small.
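The averaging and scatter band of Figure 6 can be reproduced from the individual FRF estimates along the following lines (a sketch; the paper's exact FRF estimator is not specified here).

```python
import numpy as np

def frf_mean_and_band(frf_runs):
    """frf_runs: complex array of shape (n_runs, n_freq), one FRF per
    repetition.  Returns the averaged magnitude and the +/- two standard
    deviation band around it."""
    mag = np.abs(frf_runs)
    mean = mag.mean(axis=0)
    band = 2.0 * mag.std(axis=0, ddof=1)
    return mean, mean - band, mean + band
```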
Mathematical modelling
In this section, the individual components of the test-bench are described mathematically and combined into a complete model of the test-bench. According to Figure 7, the test-bench can be separated into three different parts plus the coupling element. The inlet device (I), the hydraulic chain-oscillator (II) and the two-mass-oscillator (III) can be modelled independently, while the coupling plate (IV) links the two-mass-oscillator with the hydraulic chain-oscillator.
Assumptions for the mathematical model
It is assumed that the flow in the pipelines is always laminar and that the fluid is linearly elastic (compressible), with viscosity accounting for internal friction. Temperature-dependent properties of the fluid are neglected, since the temperature of the fluid does not change during the measurements. The pipelines considered can neither stretch nor move. As can be seen in Figure 7, the pipelines have elbows, which are accounted for by discrete linear resistances (see sec. 3.2.2). Furthermore, the pressure in the chambers is assumed to cause small elastic bending of the closing plates and the coupling plate, which is treated in sec. 3.4. The clamping situation of the plates is not exactly known, but in sec. 3.4 two different idealized situations are considered. This motion of the plates is assumed to involve special dissipative effects, which are discussed in section 3.6.
The clamping between the lower mass (see m_20 in Figure 7) and the coupling plate is assumed to be ideal, and excitations of the oscillator from its housing via the top springs are neglected. In addition, the movement of the mechanical two-mass-oscillator is assumed to cause a point load in the centre of the coupling plate ((IV) in Figure 7). For the mathematical model, the mechanical two-mass-oscillator can only move in the vertical direction. The masses are rigid, the leaf springs between them can be modelled with linear stiffnesses, and the occurring dissipation is assumed to be viscous and linear, as detailed in section 3.5.
Modelling straight pipelines
A relation between the pressures and flow rates at the end points of a pipeline can be derived from the dissipative pipeline model given in [37]. An overview of different models and solutions derived from the fundamental equations of fluid dynamics can be found in [15]. Neglecting convective terms, the partial differential equations of the pressure and flow rate dynamics are transformed into the Laplace domain, leaving a system of ordinary differential equations which can be solved analytically. The remaining unknown system functions are defined by the assumed fluid properties; in this article, we deal with a viscous compressible fluid whose temperature-dependent properties can be neglected. Additionally, it is shown in [38] that the relationship between the pressure and flow rate solutions at two points of a pipeline can be described by a power gate. The characteristics at the end points α and β of a pipeline section can therefore be written in the form of Equations (6), where Q_{αβ,α} flows into the pipeline section and Q_{αβ,β} flows out. Equations (6) and (7) include the fluid parameters E, ρ and ν, which denote bulk modulus, density and kinematic viscosity, respectively, as well as the radius r and the length l_{αβ} of the pipeline section; ω denotes the angular frequency and J_0 and J_2 are Bessel functions of the first kind. Equations (7) are valid for a viscous compressible fluid without temperature-dependent properties; if other characteristics are to be considered, these functions take a different form.
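One common Bessel-function form of such a dissipative line model, consistent with the J_0/J_2 notation used here, is sketched below; the exact expressions in Equations (7) may differ in sign or normalisation conventions, so this is an assumed form for illustration rather than a reproduction of the paper's formulas.

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind (complex arguments allowed)

def line_parameters(omega, r, l, E, rho, nu):
    """Propagation operator gamma and characteristic impedance Z of a rigid,
    laminar, compressible transmission line (dissipative pipeline model)."""
    s = 1j * omega
    c = np.sqrt(E / rho)                    # speed of sound in the oil
    x = 1j * r * np.sqrt(s / nu)            # Bessel argument
    corr = np.sqrt(-jv(0, x) / jv(2, x))    # frequency-dependent friction correction
    gamma = (l * s / c) * corr              # propagation operator
    Z = (rho * c / (np.pi * r**2)) * corr   # characteristic impedance
    return gamma, Z

# illustrative values only (not the identified parameters of the paper)
g, Z = line_parameters(omega=2 * np.pi * 100, r=5e-3, l=0.2,
                       E=1.6e9, rho=860.0, nu=3.2e-5)
```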
Modelling pipelines with resistances
As can be seen in Figure 7, the pipelines in the block have elbows. A possible solution to take them into account would be to divide the pipelines into sections and consider their boundary conditions separately, which would increase the modelling effort. Therefore, a simpler solution like the one presented in [39] was used. In order to take into account elbows, as well as possible deviations from a circular cross-section, laminar resistances at the ends of the pipeline can be modelled. This model is schematically illustrated in Figure 8.
The corresponding equations relate the internal pressures to the external ones through the laminar resistances; from them, a relationship between the flow rates and the external pressures can be derived in the form of the pipeline model (9).
The inlet device
First introduced and modelled in [32], the inlet device ((I) in Figure 7) consists of an inlet resistance, a capacity and an inlet pipeline. Because of a jump in cross-section, the inlet pipeline has to be split into two sections. This yields a relationship between the flow rates Q_V and Q_01 and, using Equations (6) or (9), a description of the inlet pipeline with the jump in cross-section. If the pipeline model according to Eq. (6) is used, the entries contain Z_V2, γ_V2 and Z_V1, γ_V1, which are defined in Equations (7) with the corresponding radii r_V2 and r_V1. Eliminating the unknown pressure p_a, two relevant equations remain, whereby the model structure is invariant with respect to the pipeline model used.
The hydraulic chain-oscillator
Part (II) in Figure 7 shows the hydraulic chain-oscillator, a massive steel block in which four chambers are milled, connected by bores and closed by steel plates. As described in [32], it can be considered as a cascaded arrangement of capacities and pipelines. Figure 9 shows the conditions in a chamber β, described mathematically by the balance
s C_β p_β = Q_{αβ,β} - Q_{βγ,β} - s V_{β,χ},   (14)
where the capacity C_β due to the oil volume V_β in the chamber is given by Eq. (15). As can be seen in Figure 9, the pressure in the chamber lifts the plate. This effect causes a pressure-dependent displacement volume V_{β,χ}, which can be calculated from Kirchhoff's plate theory [40]. Pressure is applied to the plate within the radius r_p, the radius of the sealing O-ring; the radius at which the plate is clamped is denoted by r_L. The boundary conditions are not exactly known; it is assumed that they lie between fixed and pin-jointed. According to Equations (16), the two displacement volumes depend on the plate's flexural rigidity K_P = E_S h^3 / (12 (1 - ν_S^2)), which in turn depends on Poisson's ratio ν_S, Young's modulus E_S and the plate's thickness h. The rigid plate and the middle plate have the thickness h_r, which is twice the coupling plate's thickness h_c. Substituting V_{β,χ} by C_{β,χ} p_β, Eq. (14) becomes a relation in the chamber pressures and flow rates only. The pipelines in the block are modelled according to Equations (6), which yields the connection between chambers α and β; alternatively, the pipeline model (9) can be used in the same manner.
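For orientation, the two capacity contributions can be evaluated as sketched below for the simplified case of a clamped circular plate pressurized over its full radius (r_p = r_L); the paper's Equations (15) and (16) treat the general case with r_p < r_L and both boundary conditions, so the formulas here are an assumption for illustration only.

```python
import numpy as np

def oil_capacity(V_oil, E_oil):
    """Hydraulic capacity of the oil volume in a chamber, C = V / E  [m^5/N]."""
    return V_oil / E_oil

def clamped_plate_capacity(a, h, E_S, nu_S):
    """Displacement volume per unit pressure of a clamped circular plate of
    radius a, uniformly pressurized over its full radius (Kirchhoff theory)."""
    K_P = E_S * h**3 / (12.0 * (1.0 - nu_S**2))   # flexural rigidity
    return np.pi * a**6 / (192.0 * K_P)

# illustrative numbers only
C_oil = oil_capacity(V_oil=2.0e-4, E_oil=1.6e9)
C_pl = clamped_plate_capacity(a=0.05, h=0.01, E_S=2.1e11, nu_S=0.3)
```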
The two-mass-oscillator
Part (III) in Figure 7 shows the mechanical part of the construction. Modelled in the Laplace domain, Eq. (19) describes the subsystem's dynamics with the state vector z_m = [a_10, a_20]^T. M_m is the mass matrix of the oscillator with the masses m_10 and m_20 of approximately 10 and 20 kg, respectively, D_m denotes the viscous damping effects of the leaf springs, and K_m is the oscillator's stiffness matrix, including the leaf springs' stiffnesses k_10 and k_20.
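A sketch of such a two-degree-of-freedom assembly is given below; the chain topology and the entries are assumptions for illustration, while the paper's actual matrices follow Eq. (19).

```python
import numpy as np

def two_mass_matrices(m10, m20, k10, k20, d10, d20):
    """Mass, damping and stiffness matrices of a two-DOF chain oscillator
    (upper mass - leaf springs - lower mass - leaf springs - base)."""
    M = np.diag([m10, m20])
    K = np.array([[ k10, -k10],
                  [-k10,  k10 + k20]])
    D = np.array([[ d10, -d10],
                  [-d10,  d10 + d20]])
    return M, D, K

# illustrative values only
M_m, D_m, K_m = two_mass_matrices(10.0, 20.0, 2.0e6, 4.0e6, 20.0, 40.0)
```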
The coupling element
Part (IV) in Figure 7 constitutes the coupling element. It is realized by a steel plate that links the dynamics of the hydraulic chain-oscillator with the mechanical two-mass-oscillator. In order to develop a suitable mathematical model of this component, its loading situation is shown in Figure 10. The pressure in the chamber lifts the plate, as for the chambers of the hydraulic chain-oscillator (see section 3.4). Additionally, the force from the mechanical two-mass-oscillator acts on the centre of this plate. Again, a displacement line w_χ, depending on the geometry, the boundary conditions and the acting forces, can be calculated from Kirchhoff's plate theory [40]. The displacement in the centre of the plate can be written in the form of Eq. (20), where x_d(t) is a displacement due to dissipative effects, which are not covered by plate theory. Integrating the displacement w_χ(r,t) over the entire surface, the resulting displacement volume reads as in Eq. (22). For fixed and pin-jointed boundary conditions, the parameters ζ_p, ζ_F, ξ_p and ξ_F are given in Equations (A1) and (A2) in appendix A. Equations (20) and (22) can be rewritten as Equations (23), where k_P denotes the plate's stiffness, C_P the capacity due to the coupling plate and A_K the coupling surface. Furthermore, ζ_p = ξ_F holds; this is related to the circular geometry of the plate and the applied Kirchhoff theory and occurs if the load due to the force F is assumed to be a point load. It can be seen that A_K links the states of the mechanical to the hydraulic system; vice versa, with negative sign, the hydraulic state is linked to the mechanical subsystem. It is assumed that there are three forms of dissipation: a viscous mechanical damping of the plate, considered with the damping ratio d_P; hydraulic leakage, modelled by the hydraulic resistance R_P; and, based on the available measurement data, an additional coupled damping. With V_d(t) = Ṽ_d(t) - A_K x_d(t) and a coupling dissipation area A_d, a possible dissipation model for the coupling plate is given by Equations (24). Inserting Equations (24) into Equations (23) and transforming to the Laplace domain, the coupling element's dynamics (25) is obtained.
Proof of dissipativity
To prove that the suggested dissipation model extracts energy from the system, a closer look at the system's power gates is necessary. Rewriting the model (25) with a dissipation force F_d and a dissipation flow rate V̇_d shows that, if the matrix D_χ according to Eq. (25) is positive definite, F_d and V̇_d produce a positive power together with the system's state variables ẋ and p. Sylvester's criterion [41] states that a Hermitian (in the real case, symmetric) matrix is positive definite if, and only if, all its leading principal minors are positive. For D_χ, the leading principal minors can be evaluated accordingly.
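The criterion can be checked numerically as sketched below for any candidate dissipation matrix.

```python
import numpy as np

def is_positive_definite(D):
    """Sylvester's criterion: a (real symmetric) matrix is positive definite
    iff all leading principal minors are positive."""
    D = np.asarray(D, dtype=float)
    return all(np.linalg.det(D[:k, :k]) > 0 for k in range(1, D.shape[0] + 1))

# example with an arbitrary 2x2 matrix (not the identified D_chi)
print(is_positive_definite([[5.0, 1.0], [1.0, 3.0]]))   # True
```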
The dissipation model for closing plates
As can be seen in Figure 7, there is no mechanical system attached to the first two plates. In these cases the dissipation model (24) can be simplified, because no external force acts on the plate.
Combining this relation with Eq. (24b), the dissipation through the plate of a chamber α is obtained in the form of Eq. (29).
The model of the test-bench
All of the previous partial models can now be combined to obtain a mathematical description of the entire set-up. With the chosen state vector z = [a_10, a_20, p_4, p_3, p_2, p_1, p_V]^T and the input vector u = [F_H, Q_V]^T, the system's dynamics can be written in matrix notation as model (30). Its mass matrix includes the masses of the mechanical oscillator, the capacities due to the volumes and closing plates, and the end capacity of the inlet device. Additionally, the coupling surface A_K and the dissipation surface A_d between the hydraulic and mechanical subsystems appear in the system matrices, together with a dissipation matrix. The stiffness matrix K considers the leaf springs' stiffnesses k_10 and k_20 as well as the stiffness k_P of the coupling plate; the coupling between the mechanical and hydraulic subsystems is also contained in K.
The matrix T is chosen in the form T_{1:2,1:2} = I_2, T_{3:7,3:7} = s I_5 (35) to link the state and input vectors through the given system form (30). Finally, the input matrix B links the input vector with the system's dynamics. Model (30) can be rewritten such that any transfer function of interest, G_{αβ}(s) = ẑ_α/û_β, can be computed, because in each experiment only one excitation quantity of the input vector u is active at a time.
Reciprocity of the mathematical model
In section 2.3.2 the test-bench's reciprocal characteristics were shown by means of measured transfer functions. The reciprocity properties of the model (30) shall now be investigated. Calculating the transfer functions between the flow rate and the velocity of the upper mass, and between the force of the hammer blow and the pressure in the inlet device, shows that the reciprocal property of the model is lost, because the dissipation area A_d is always greater than zero. In the following section 4.2.2, unknown dissipative system parameters are identified using measurement data. Since the measured data for flow rate excitation are calibrated according to reciprocity, a corresponding correction must be made here. For the correctly parametrized model, the requirement follows from the reciprocity theorem (4); but since the correlation between the measured FRFs and the modelled FRFs is different, a modified condition must now apply, assuming that only the amplitude level of the flow rate was not measured exactly.
Parameter identification
If a mathematical model is available, two problems have to be solved to identify unknown parameters. First, a suitable strategy for finding these parameters is presented; then, a proposal is made for dealing with numerical issues during this process.
Preface: discussion of the modelling of dissipative effects
In the previous section, a model for the test-bench was developed that contains special dissipative parameters which do not occur in standard models of hydraulic systems. These include the sealing-gap resistances R_H,α according to Eq. (29), R_P in Eq. (24) and the inlet resistance R_L of the inlet device, as well as the resistances of the pipelines according to pipeline model (9). This section explains why these parameters are indispensable. The aim was to develop a model that approximates the measurement data sufficiently well to enable the determination of possible damping parameters. Standard models of hydraulic systems usually only include dissipative parameters such as the kinematic viscosity and resistances, where the resistances typically represent throttles if such exist in the set-up under consideration. In preliminary experiments, however, it became clear that the kinematic viscosity alone is not sufficient to approximate the dissipation seen in the measured data.
The resistance R_L of the inlet device is indispensable to ensure the correct damping of the fifth mode at about 780 Hz, which can be seen in the comparison of model and measurement in sections 5.1 and 5.2 in Figures 16 and 25. If this parameter were not present, the kinematic viscosity would have to be chosen unrealistically high. Additionally, without the resistances of the pipelines it is also not possible to keep the kinematic viscosity in a reasonable range. The coupling dissipation according to (25), with the dissipation surface A_d, is not only necessary to link mechanical and hydraulic dissipation; it also serves to adjust the amplitude level of a modelled transfer function. Together with the coupling surface A_K it acts like a gain on the transfer functions. The optimization strategy presented in the following section would fail if the amplitude levels of model and measurement differed too much.
In the upcoming section 4.5, three cases with different dissipation models are therefore compared, in order to show that these non-standard modelling choices are both necessary and useful.
Strategy
Model parameters are determined in a two-step procedure. First, the model is considered completely frictionless: all dissipative variables are set to zero and only the remaining system is used. The remaining parameters are optimized so that the pole and zero locations of the model match those of the measurement as closely as possible. Once these parameters have been determined, the model is considered with dissipation, and the magnitudes of the measurement at the pole and zero locations are used as the criterion to determine the dissipative parameters. The process is first explained on a simple example before it is applied to the model of the test-bench.
Step one
In Figure 11, the transfer function between the force of the impact hammer F_H and the acceleration of the upper mass a_10 is shown for the case that there is no oil in the hydraulic chain-oscillator, i.e. without coupling. In this case, a transfer function G(s) with the necessary number of poles and zeros is used as the model. G(s) contains unknown system parameters x_i^d, which are responsible for the dissipation, as well as unknown parameters x_i^PZ, which are mainly responsible for the position of the poles and zeros. In Figure 11, G(s) evaluated with initial values is shown in black. Let G°(s) be the frictionless version of G(s); it is fed to an optimization task of the form (42), where ω_n^P = -j s_n^P are the angular frequencies at the pole locations and ω_m^Z = -j s_m^Z are the angular frequencies at the zero locations. This yields the optimized quantities x_i^{*,PZ}. In Figure 11, G°(s) evaluated at x_i^PZ = x_i^{*,PZ} is illustrated in green; the poles and zeros of the model can be seen to agree with the measurement.
Step two
In the second step, the unknown parameters x_i^d are determined. For this purpose, the transfer function G(s) is evaluated with the previously calculated values x_i^{*,PZ}, and the amplitude heights at the poles and zeros are adjusted to those of the measurement by optimizing the parameters x_i^d. Let G^M(s) be the measured transfer function; the optimization task (43) then yields the optimal parameters x_i^{*,d}. In Figure 11, G(s) evaluated with the optimal parameters x_i^{*,PZ} and x_i^{*,d} is shown in red. It can be seen that the model response now agrees well with the measurement.
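The two steps can be illustrated on a single-degree-of-freedom toy oscillator, as sketched below: the stiffness is identified from the pole location of the frictionless model, the damping from the magnitude at that pole. All names and values are hypothetical and only mimic the structure of tasks (42) and (43).

```python
import numpy as np
from scipy.optimize import least_squares

# toy system G(s) = 1 / (m s^2 + d s + k); m known, k and d to identify
m = 10.0
k_true, d_true = 4.0e5, 60.0

def frf(w, k, d):
    s = 1j * w
    return 1.0 / (m * s**2 + d * s + k)

w_meas = np.sqrt(k_true / m)                     # "measured" pole location
mag_meas = np.abs(frf(w_meas, k_true, d_true))   # "measured" magnitude at the pole

# step one: match the pole location of the frictionless model (d = 0)
res1 = least_squares(lambda k: [np.sqrt(k[0] / m) - w_meas], x0=[3.0e5])
k_id = res1.x[0]

# step two: match the magnitude at the pole with the dissipative model
res2 = least_squares(lambda d: [np.abs(frf(w_meas, k_id, d[0])) - mag_meas],
                     x0=[10.0], bounds=(0.0, np.inf))
d_id = res2.x[0]
print(k_id, d_id)   # close to k_true and d_true
```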
Numerical issues
For a practical application of the presented strategy, numerical problems have to be treated additionally. The unknown parameters of the vibroacoustical test-bench are spread across a wide range of orders of magnitude. For example, the bulk modulus is expected to be in the range of 1.5-1.7 × 10^9 N/m^2, while an unknown capacity due to the rigid plate ranges between 5 and 30 × 10^-14 m^5/N.
Many works in the field of numerical mathematics, such as [42], deal with the theoretical concepts of scaling in numerical analysis; mainly, scaling analysis deals with the invariance of a given mathematical model with respect to its scaling group. A practical application to different mathematical problems is shown in [43]. It is possible to move parameters into an equivalent numerical range with the help of scaling, but the functions that depend on these parameters may then drift apart numerically, which in turn has a detrimental effect on the problem posed. This can be counteracted by additionally scaling the dependent functions. For the solution of an optimization problem with a gradient-based method, it is also necessary to consider the effect of both scalings on the gradients. This could lead to a lengthy search for the best possible set of scaling factors. Moreover, the set of scaling factors is bound to the specific model and cannot simply be applied to another one, for which the calculation of these factors would start all over again. An additional aspect is that an unconstrained optimization task is simpler than a constrained one. In order to circumvent the problems associated with such a wide numerical range, to counteract possible scaling problems and to obtain an unconstrained optimization task, a function is introduced in a similar way as shown in [44]. Designed in that way, f_k is a smooth and monotonically increasing function which maps the infinite interval (-∞, ∞) of ξ_k onto the interval [κ_{k,min} x_{I,k}, κ_{k,max} x_{I,k}], and f_k(0, x_{I,k}) = x_{I,k} holds. The x_{I,k} are the initial values of the unknown physical parameters, with percentage limits κ_{k,min} and κ_{k,max} between which the optimal value is likely to lie. Figure 12 shows an example of how the function f_k works, with an initial value x_I = 5 and f constrained to lie between 2 and 10.
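One possible realisation of such a function is a sigmoid-based map, as sketched below; the paper follows [44] and its exact formula may differ, so this is only an assumed construction reproducing the example of Figure 12.

```python
import numpy as np

def make_parameter_map(x_I, kappa_min, kappa_max):
    """Smooth, monotonically increasing f(xi) that maps (-inf, inf) onto
    (kappa_min*x_I, kappa_max*x_I) with f(0) = x_I."""
    lo, hi = kappa_min * x_I, kappa_max * x_I
    c = np.log((x_I - lo) / (hi - x_I))      # shift so that f(0) = x_I
    def f(xi):
        return lo + (hi - lo) / (1.0 + np.exp(-(xi + c)))
    return f

f = make_parameter_map(x_I=5.0, kappa_min=0.4, kappa_max=2.0)  # range (2, 10)
print(f(0.0))              # 5.0
print(f(-50.0), f(50.0))   # approaches 2 and 10
```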
In a constrained optimization task, the functions f_k replace the unknown physical parameters x_k, and the dimensionless parameters ξ_k can then be used as optimization variables in the resulting unconstrained task. Applied, for example, to the constrained task (42), this yields the new unconstrained task (45). The resulting optimal parameters ξ_k^{*,PZ} can then be used to calculate the optimal physical quantities x_k^{*,PZ} by using Def. (44).
Algorithm
The task (45) is a summation of different partial minimization functions that can now be fed to a numerical solver. Since the individual partial minimization functions φ_m^Z and φ_n^P sometimes have very different magnitudes, they are normalized to one at the initial point. After this, each function can be given a scaling factor α for tuning. All optimization tasks performed in this publication were solved with the software tool Matlab, using a solver for non-linear least-squares problems based on a trust-region algorithm. This solver finds a minimum of the fitness function ||λ(μ)||_2^2 of a given vector-valued function λ with respect to the parameters μ. Considering task (45), the summed single functions φ_m^Z and φ_n^P can be regarded as the elements of such a vector-valued function λ; the corresponding computational task is given in (47). Task (43), which is used for the optimization of the damping parameters, is processed in an analogous way. Stopping criteria are the function tolerance and the iteration (step) tolerance, which are suitably chosen but equal for every optimization task. In addition, the case of very slow convergence is covered by stopping the calculation after a maximum number of iterations.
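The assembly of the normalized, optionally up-scaled partial functions into one vector-valued objective and its solution with a trust-region least-squares solver can be sketched as follows (here with SciPy instead of Matlab; all functions and values are illustrative placeholders).

```python
import numpy as np
from scipy.optimize import least_squares

def build_vector_objective(partials, x0, alphas=None):
    """Stack scalar partial functions phi into one vector-valued lambda(mu),
    each component normalized to one at the initial point and optionally
    scaled by a tuning factor alpha."""
    phi0 = np.array([abs(p(x0)) for p in partials])
    alphas = np.ones(len(partials)) if alphas is None else np.asarray(alphas)
    def lam(mu):
        return alphas * np.array([p(mu) for p in partials]) / phi0
    return lam

# toy example with two partial functions of a single parameter
partials = [lambda x: x[0] - 3.0, lambda x: 10.0 * (x[0] - 3.0)]
lam = build_vector_objective(partials, x0=np.array([1.0]), alphas=[1.0, 2.0])
res = least_squares(lam, x0=np.array([1.0]), method="trf",
                    ftol=1e-10, xtol=1e-10, max_nfev=200)
print(res.x)   # -> approx. 3.0
```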
Results of step one
The strategy and the function introduced above are now used to identify the unknown system parameters of the test-bench model (30). In the first step, the sought parameters are the stiffnesses of the leaf springs k_10 and k_20 as well as the coupling plate's stiffness k_P. Furthermore, the coupling area A_K and the capacities due to the plates, as well as the oil parameters E and ρ, have to be determined. The kinematic viscosity ν, the dissipative coupling area A_d, all viscous damping parameters of the leaf springs and the closing plates, as well as all hydraulic resistances are set to zero; a frictionless model is used for the calculation. The optimization is done for the case of hydraulic excitation, including all non-trivial measured pole points (i.e. the poles at 0 Hz are not considered) of all possible transfer functions. In addition, five selected zero points are considered, since without them the model result would deviate from the measurement in the range of these zero points. Furthermore, the two possible plate constellations are covered in one task. Figure 13 shows the evolution of the fitness function for the first 1000 iterations. The fitness function in this case consists of 40 partial functions that are each normalized to one at the initial point, as described in sec. 4.4. In the first 50 iterations a faster convergence can be observed than in the following ones. The first 1000 iterations take the solver about 70.5 s, and the rather slow convergence after the first 50 iterations justifies stopping the calculation early. Table 3 shows the physical parameters calculated by this task. In the course of the previous work [32], values for the bulk modulus, the fluid's density and the capacities could already be calculated; these values serve as a guideline for the choice of the initial values of this optimization task. The initial values of the individual parameters lie in their expected ranges. The bandwidths of the optimization parameters were set according to Def. (44) with the respective percentage deviations κ_{k,min} and κ_{k,max}, so that they do not exceed the expected range excessively.
It can be seen that the parameter values, except for the coupling plate's capacity C_P, are in the expected ranges. The expected ranges for the capacities, the coupling area and the plate stiffness were determined using Kirchhoff's plate theory [40] with the two assumed boundary conditions. Since the central plate shown in Figure 1 is not a circular plate, the expected range of the two central capacities C_m cannot coincide with that of the capacity of the circular rigid plate C_r, even though it has the same thickness. However, the way it is screwed on suggests that the capacity values must be smaller than those predicted for the circular plate.
Parameter C_end has not been discussed so far and is not part of the optimization. It is only responsible for the fifth pole (at approx. 770 Hz), which can be seen in Figures 16 and 25 in sections 5.1 and 5.2. The oil volume of the inlet device without the inlet pipeline was measured and amounts to 10-15 ml. With the calculated bulk modulus, this gives an expected range for the end capacity of 6.07-9.09 mm^5/N. The best fit with the measurement data is given by an end capacity of 7.130 mm^5/N, which lies within the expected range.
The expected ranges for the leaf spring stiffnesses were determined using static beam theory [45], distinguishing between two assumed boundary conditions: in one case, one edge is fixed and the other is free, with a force applied at the free edge; in the other case, one edge is fixed and the other, where the force is applied, is guided. The expected range for the bulk modulus was derived from [46], where experiments on the HLP oil Shell Tellus S32 were carried out; this oil has similar properties to the oil Shell Tellus S2 MA 32 used in the experiments published in this article. Since the chirp excitation causes rapid pressure changes, the measurement data of the quasi-adiabatic test cycle in [46], which also focused on rapid pressure changes, were considered. Finally, the density should be smaller than 872 kg/m^3, because this value is given in the data sheet of Shell Tellus S2 MA 32 for a temperature of 15 °C and the experiments were done at a measured oil temperature of 32-35 °C.
Results of step two
The results of Table 3 are now used for the second step. Three different types of dissipation model are analysed and their results are compared in Table 4; the evolution of each fitness function is illustrated in Figure 14. Dissipation model (I) is without any laminar resistances at the ends of a pipeline, so the pipeline model according to Equations (6) is used. Case (II) is a model without hydraulic dissipative effects due to the plate motion, i.e. no hydraulic resistance is considered for the plates, but the hydraulic resistances of the pipelines are modelled as in Equations (9). The third model (III) considers both effects. In order to allow a qualitative comparison of the optimization process, the same initial values were used for models (II) and (III); only for model (I) was the initial value of the kinematic viscosity chosen 36% higher. For all three cases, the same pole and zero points were used for the optimization task.
It can be seen in Figure 14 that the fitness functions start at 50, because in this case all three fitness functions consist of 38 partial functions, which are all normalized to one at the initial point; four of them are scaled up by a scaling factor α of two, as task (47) shows, since this choice leads to better results. The fitness function of model (I) converges fastest: the calculation is completed after 186 iterations and takes about 54.3 s, with 10 parameters to be identified. The fitness function of model (II) converges slowest; after 200 iterations it no longer decreases strongly per iteration step. For the first 500 iteration steps the solver takes about 400 s, with 14 parameters being optimized in this case. This long calculation time and the slow convergence after 200 steps mean that the solver can be stopped after these 500 iterations without missing a significantly better result. The convergence of the fitness function of model (III) is better than in case (II), but it also slows down after the first 200-300 iterations. For the first 500 steps, the solver takes about 510 s, which is longer than in case (II), but in case of model (III) 18 parameters are optimized. The optimization is stopped after 500 steps because of the slow convergence and the duration of the calculation. Table 4 shows the physical parameters determined by the optimization process for each dissipation model. The kinematic viscosity ν of the oil is in the expected range for the dissipation models (II) and (III); the range was taken from the Shell Tellus hydraulic oil data sheet. As can be seen, the hydraulic resistances at the beginning and end of a pipeline are necessary to keep the viscosity in a realistic range. The hydraulic resistances of the pipelines are in a similar size range for both model approaches, whereas the hydraulic resistances due to the plates are much larger in comparison. For the test-bench model (30), this can be interpreted to mean that more friction is generated by the presumed gap effects of the plates than by the elbows of the pipelines.
A d is needed for all dissipative models, because it is independent of the hydraulic resistances due to the plate motion and because it has a dissipative gain effect together with the coupling surface A K on all transfer functions in case of flow rate excitation.
Considering the viscous damping values of the leaf springs and the coupling plate, the physical reasonableness of the three model approaches can be assessed. First, it is assumed that the more rigid a mechanical element is, the lower the expected dissipation. It follows that the dissipative damping of the upper leaf springs has to be the largest, that of the leaf springs in the middle smaller, and that of the coupling plate the smallest. This holds for models (I) and (III).
Another interpretation is also possible. Building a stand-alone mechanical model with the viscous damping ratios, the stiffnesses of Table 3 and the masses (see Tab. A1), one can calculate the eigenvalues and, from these, the dimensionless damping ratios. Similar to the model (19) of the mechanical two-mass-oscillator, the autonomous model without excitation can be written in first-order (state-space) form (48b) with the eigenvalues (48c). It is possible to substitute system (48b) with decoupled equations of motion similar to those treated in [47] or [48]; the eigenvalues of the decoupled system (49a) are eig(Λ_S) = -ξ_i ω_i ± j ω_i sqrt(1 - ξ_i^2) (49b). The dynamics of system (49a) can substitute the dynamics of the physical system (48b) if their eigenvalues are equal. With Equations (48c) and (49b), the dimensionless damping ratios ξ_i and the system's natural frequencies ω_i can therefore be calculated from the complex eigenvalues λ_i of the physical model as ξ_i = -Re(λ_i)/|λ_i| and ω_i = |λ_i|. The results for the damping ratios ξ_i, expressed as percentages of critical damping, are shown in Table 5.
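The computation of the damping ratios from the complex eigenvalues of the stand-alone mechanical model can be sketched as follows; the numerical values are placeholders, not those of Tables 3, 4 and A1.

```python
import numpy as np

def modal_damping(M, D, K):
    """Damping ratios and natural frequencies of M x'' + D x' + K x = 0,
    obtained from the complex eigenvalues of the first-order system."""
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, D)]])
    lam = np.linalg.eigvals(A)
    lam = lam[np.imag(lam) > 0]        # one eigenvalue per conjugate pair
    omega = np.abs(lam)                # natural frequencies [rad/s]
    xi = -np.real(lam) / omega         # dimensionless damping ratios
    return xi, omega

xi, om = modal_damping(np.diag([10.0, 20.0]),
                       np.array([[30.0, -30.0], [-30.0, 70.0]]),
                       np.array([[2.0e6, -2.0e6], [-2.0e6, 6.0e6]]))
print(xi * 100)   # damping ratios in percent of critical damping
```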
The expected range of the damping ratios was taken from [49], covering the cases of a continuous metal structure and a metal structure with joints. As can be seen, one damping ratio of model (II) is far too large, which can be taken as evidence that a coupling dissipation effect exists. Only model (III) yields realistic values for both the kinematic viscosity and the damping ratios.
Comparison of measurement and model results
In this section the focus is on the match between the transfer functions of the model (30) and those of the measurements. In case of flow rate excitation, there are seven possible transfer functions, with two different closing plate situations and three different dissipation models. This would result in 42 different FRFs, whose presentation would go beyond the scope of this article. Therefore, three selected transfer functions are shown for each of the dissipation models and closing plate constellations. As Figure 15 shows, the FRFs between the flow rate and the acceleration of the upper mass, the pressure in the inlet device, and the pressure in chamber three will be analysed.
For this purpose the test-bench model (30) is evaluated with the identified parameters of Tables 3 and 4 as well as the fixed parameters of Table A1. Around the second resonance, model and measurement agree less well. A reason for this can be the torsion mode of the mechanical two-mass-oscillator at about 279 Hz; the mechanical part was measured without hydraulic coupling in an earlier experiment. It is apparent that this mode now couples with the hydraulic system and that the two-mass-oscillator loses more energy at this frequency than expected, due to the participating torsional vibration, an effect that was not considered in the model (19).
In the FRFs p̂_3/Q̂_V and â_10/Q̂_V a peak can be seen at 225 Hz; here, the piston pump that supplies the oil to the test-bench appears with its excitation frequency. Also in this case the match between model and measurement is quite good for each dissipation model. As in the previous case, models (II) and (III) give very similar results, both better than model (I).
Comparing these results, measurement as well as model, with the previous case, one can see that a different closing plate situation leads to a different FRF shape. In particular, the poles and zeros of the FRFs are now more pronounced than before. Furthermore, some of the resonances have shifted; in this case, they are around 160, 290 and 390 Hz. The resonance at about 325 Hz has not shifted in the same manner, and the second mechanical resonance at about 700 Hz (see the FRFs â_10/Q̂_V) has not shifted either. The coupling torsional mode of the two-mass-oscillator at approx. 279 Hz again leads to the fact that around the second mode at 290 Hz model and measurement do not correlate as well (see Figures 27, 30 and 33).
The effect of the piston pump at about 225 Hz can be seen again. The problem of unwanted noise occurs only above 500 Hz for both plate constellations and therefore affects only the resonance at 700 Hz.
Excitation with an impact hammer
In this section, the focus is on the predictive power of the model. As Figure 1 shows, the test-bench can be excited with an impact hammer. The parameters of Tables 3 and 4 are used to evaluate the model, although these parameters were determined using measurement data from flow rate excitation. In the following sections 6.2 and 6.3, the FRFs between the force of the hammer blow and the acceleration of the lower mass, as well as between the force and the pressure in chamber one, are shown, as illustrated schematically in Figure 34.
Discussion
As can be seen in Figures 35-46, the prediction of the poles works well for both plate constellations. Of course, the model-measurement match is not as good as in the previous section, where the parameters were optimized for it, but the model reproduces the basic shape of the measurement data well. Similar to the case of flow rate excitation, models (II) and (III) give very similar results, both better than model (I).
It should be emphasized that in this case the resonance at about 270 Hz (coupling plate in use) or 290 Hz (rigid plate in use) is reproduced less accurately in terms of its amplitude than in the case of flow rate excitation. The flow rate excites the two-mass-oscillator via the well-defined centre of the coupling plate, so the torsion mode is excited less than the vertical one, which leads to less dissipation and a better model/measurement match. With force excitation at the top of the two-mass-oscillator, the torsional mode is excited more strongly, which leads to the larger deviation between model and measurement.
Noise is a problem especially in the FRFs above about 400 Hz (this is true for all FRFs between pressure and force); it is not possible to detect the second mechanical resonance from these transfer functions. In addition, the model also indicates that the amplitude level of the second mechanical resonance will be very small.
An additional quality review of the graphic results in sections 5 and 6
In order to qualitatively assess the agreement between measured and modelled FRFs, several possibilities are available. In publications dealing with modal analysis, the modal assurance criterion (MAC), as shown in [23], or the coordinate modal assurance criterion (COMAC) [24] is usually used as a quality measure between measured and analytically calculated mode shapes. For this criterion one needs eigenvectors of the measured system as well as eigenvectors calculated from the designed model. Two inconveniences arise. On the one hand, in order to construct eigenvectors from a test system, one must carry out a comprehensive investigation of numerous FRFs of this system. On the other hand, it has to be possible to extract eigenvectors at natural frequencies from the mathematical model, which severely limits the model type; from the model (30), for example, eigenvectors cannot be assigned to certain frequencies in a classical sense. However, it is also possible to use experimental and analytical FRFs directly for a correlation study. One can implement the idea of COMAC based on FRF data, which results in the frequency response assurance criterion (FRAC) according to [26]. This criterion is best suited for the calculation of quality measures for the results shown in sections 5 and 6, because with the FRAC two FRFs with the same input-output relationship can be compared. The FRAC between a calculated FRF H and the corresponding measured one H^M, in a frequency range from f_α to f_β = f_α + N f_s with a constant sampling frequency f_s, can be defined as
FRAC = |Σ_f H(f) H^M(f)*|^2 / ( Σ_f |H(f)|^2 · Σ_f |H^M(f)|^2 ),
which can be interpreted as the normalized quadratic scalar product of the frequency responses over the considered range. The FRAC value is always in the interval [0,1], where 1 means perfect correlation between the two FRFs in the considered frequency range and 0 means none at all. Table 6 shows the FRAC values for the results of section 5. It can be seen that the FRAC values are all clearly above 0.9, with the exception of the FRF p̂_3/Q̂_V in the case of a coupling plate closing chamber one; but even in this case the FRAC value is above 0.85, which still indicates a good correlation. It can also be seen that the dissipation models (II) and (III) (see sec. 4.5.2) lead to quite similar FRAC values for each FRF, which are always better than those calculated with dissipation model (I). Table 7 shows the FRAC values for the results of section 6. It is pointed out again that the measured FRFs of section 6 were not used for the determination of system parameters, which can explain why most of the calculated FRAC values are below 0.9. However, most of them are above 0.85, which indicates a good correlation between model and measurement and confirms the model's good predictive power. Also in this case the dissipation models (II) and (III) lead to very similar FRAC values, always higher than those calculated with dissipation model (I). Compared with Table 6, the results of Table 7 also show that the closing plate situation has a stronger effect on the FRAC value.
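The FRAC values of Tables 6 and 7 can be computed along the following lines.

```python
import numpy as np

def frac(H, H_M):
    """Frequency response assurance criterion between a modelled FRF H and a
    measured FRF H_M evaluated on the same frequency lines; 1 means perfect
    correlation, 0 none at all."""
    num = np.abs(np.vdot(H_M, H))**2
    den = np.real(np.vdot(H, H) * np.vdot(H_M, H_M))
    return float(num / den)

# synthetic example
f = np.linspace(20, 1000, 981)
H = 1.0 / (1 + 1j * f / 300.0)
print(frac(H, H))          # 1.0
print(frac(H, H + 0.05))   # slightly below 1
```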
Conclusion and further investigations
It has been shown that the presented model in combination with the proposed parameter identification strategy leads to a good model-measurement match.
Bending of the outer walls due to the oil pressure, as occurs with the closing plates in the test-bench, was taken into account by means of hydraulic capacities. In section 4 it was shown that the necessary capacities agree with the displacement volumes calculated by circular plate theory. Additionally, it was shown that modelling the dissipative effects in the presented way leads to realistic model parameters; this was not possible with a simpler model structure. These results suggest that there are non-trivial dissipative effects in the mechanical-hydraulic interaction and that they are coupled. Furthermore, it has been shown that elbows in the pipelines have to be taken into account. With a suitable dissipation model of the coupling effect, it is not only possible to identify system parameters in a physically reasonable range, but also to make predictive statements with the model, as shown in section 6.
In order to quantify the quality of the graphical results from sections 5 and 6, the FRAC values for the corresponding FRFs were calculated, giving an average FRAC of 0.947 for the results of section 5 and of 0.887 for the results of section 6. This means that the model results fit the measured ones well: FRFs that were not used to calculate system parameters can be predicted well, under the additional requirement that the model parameters are physically reasonable. A reciprocal system property can help to adjust hard-to-measure quantities such as the flow rate. However, further investigation is needed to reconcile the dissipative coupling effect with the theory of reciprocity, in order to identify the mechanism by which this property is lost. Potentially, the hydraulic resistances due to the closing plate movements approximate a nonlinear effect.
Moreover, it should be examined whether the strategy shown here for the identification of unknown system parameters can also be transferred to other systems. Important quality criteria of the procedure are the duration of the parameter search, the physical meaningfulness of the parameter values and the accuracy of the calculated solutions. Of course, the method reaches its limits if an unsuitable mathematical model is proposed for a given set-up. Apart from this, there are still many possibilities to extend the presented updating method so that it converges faster and becomes more robust against numerical problems and measurement noise.
Appendix A. Tables and mathematical expressions
The parameters of Equations (20) and (22) differ depending on the assumed boundary condition: in case of a fixed boundary condition they read as in Equations (A1), and in case of a pin-jointed boundary condition as in Equations (A2), with K_P according to Eq. (16c). | 13,944 | 2022-07-07T00:00:00.000 | [
"Engineering"
] |
A method for coupling atoms to continuum mechanics for capturing dynamic crack propagation
ABSTRACT. Conventionally, dynamic crack propagation is modelled using fracture mechanics (either linear elastic, or with an extension to confined plasticity). Herein, we propose a different view, based on a coupling between an atomistic description at the crack tip and a classical continuum description away from it. The paper presents the theoretical background and some first numerical results. RÉSUMÉ. Crack propagation is mostly modelled using fracture mechanics (in the linear elastic framework or in extensions to confined plasticity). We propose here a different approach, in both the numerical and the physical modelling, which consists in coupling an atomistic view at the crack tip with a classical continuum description. This article presents the theoretical framework of the coupling as well as some numerical results in one dimension.
Introduction
In most approaches to date, fracture is modelled using a continuum approach, either using linear-elastic fracture mechanics, or with an extension to take care of confined inelastic yielding or crack bridging effects, such as in cohesive-zone models. In this contribution we describe a novel approach to fracture. The basic idea is that the crack tip is described by an atomistic model, while the surroundings are described as a continuum, discretized via a finite element method that exploits the partition-of-unity property of finite element shape functions. Indeed, by using enriched interpolation functions to capture cracks that may be incompatible with the underlying mesh structure, no remeshing is needed. At the same time, there is no need to use asymptotic enrichment functions at the continuum level, since the crack tip is described by an atomistic model. At the atomistic scale, either a classical molecular dynamics description or quantum mechanics can be employed. The latter approach relies on a very detailed description: the Schrödinger equation is solved as a function of the electronic configuration (Marx et al., 2000). The major drawback of such a description is its cost. A more pragmatic approach, which is used here (Zhou et al., 1997; Sutmann, 2002), employs a classical molecular approach with interatomic interactions captured by a potential.
The coupling of the two descriptions is the major challenge of this contribution. Indeed, away from the crack tip we have a classical continuum, but in its vicinity a discrete atomistic model is used. A crucial difficulty of this approach is that the characteristic parameters in space and time are much smaller at the atomic level than those at the continuum level, which are used in the finite element simulation.
Many coupling methods have been developed (Liu et al., 2006; Xiao et al., 2004). Herein, we propose a weak coupling between the two models, namely an energy coupling. Its major advantage is that energy has the same physical meaning in both domains. To be more precise, the continuum equations are cast in a weak form and are coupled to the atomic description via a partition of the energy.
Changing scales in the crack propagation problem
Dynamic crack propagation can be described as follows: a discontinuity, the crack Γ, is included in a structure described as the closure of the connected open set Ω. The displacements are prescribed on a boundary ∂_1Ω and external loads are given on ∂_2Ω. Finally, a body force field g_d is applied in Ω (Figure 1).
We consider a subdomain Ω_m of Ω that includes the crack tip. The goal is to link an atomistic description in the subdomain Ω_m to a continuum approach in Ω\Ω_m, i.e. outside the zone of the crack tip. In order to separate the two zones, Ω_m can be defined spatially as the crack-tip domain to which the plastic zone is confined, so that the macro subdomain outside this plastic zone can be captured by an elastic stress-strain relation.
Formulations
Denoting by u and d the displacements in the continuum and in the discrete zone, respectively, we have the initial-value problem in the continuum subdomain: for x ∈ Ω_M(t) and t ∈ [0; T], knowing u(x,0) and u̇(x,0), find (u, σ) ∈ U_ad × S_ad such that
ρ ü = div σ + g_d   [1]
together with the corresponding constitutive and boundary conditions. In the discrete subdomain Ω_m, where a discrete grid of N_a atoms is built, the second initial-value problem reads analogously. The interatomic forces are given by differentiation of a potential, and a good description relies on the proper choice of this potential. To test the coupling method, a classical potential such as that of Lennard-Jones or of Morse suffices. The parameters ε, σ, D and a are material properties and r is the interatomic distance, with r_e its value at equilibrium. We note that other potentials have been developed, allowing for a better description of the metallic bond, for example the EAM potential (Daw, 1989). The atomic problem relies on the resolution of non-linear equations with non-convex potentials. The study of existence and uniqueness properties for this kind of problem is not the aim of this contribution; see e.g. (Lions et al., 1965) for more information.
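The two classical pair potentials named above, in their standard forms, and the corresponding interatomic forces F = -dV/dr are sketched below.

```python
import numpy as np

def lennard_jones(r, eps, sigma):
    """Lennard-Jones potential V = 4*eps*((sigma/r)**12 - (sigma/r)**6) and force F = -dV/dr."""
    sr6 = (sigma / r)**6
    V = 4.0 * eps * (sr6**2 - sr6)
    F = 24.0 * eps * (2.0 * sr6**2 - sr6) / r
    return V, F

def morse(r, D, a, r_e):
    """Morse potential V = D*(1 - exp(-a*(r - r_e)))**2 and force F = -dV/dr."""
    e = np.exp(-a * (r - r_e))
    V = D * (1.0 - e)**2
    F = -2.0 * D * a * e * (1.0 - e)
    return V, F

# illustrative evaluation
print(lennard_jones(1.12, eps=1.0, sigma=1.0))
print(morse(1.05, D=1.0, a=2.0, r_e=1.0))
```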
In order to obtain an energetic framework and to allow for a discretization, the continuum problem is rewritten in a weak format and then in a more concise operator form; the atomic problem is treated in an analogous manner.
Coupling models
In order to achieve an efficient coupling between the two problems introduced before, and to specify the coupling conditions, we assume a weak coupling on the common zone. We consider the mechanical energy as the primary quantity. Dualizing the formulations and writing them in a weak format allows us to obtain a global description that preserves the descriptive properties of each model and focuses on the quantity of interest, the energy, which must not depend on the model.
In the common, or handshaking, zone Ω_c = Ω_M ∩ Ω_m, a velocity coupling is adopted in a weak format. Moreover, the energy is distributed between the two models inside the coupling zone in a partition-of-unity sense (Dhia et al., 2005). More precisely, on the whole domain Ω the energy is split between the two models by means of weight (repartition) functions, from which the weak formulation for the distribution of the energy follows.
Partition of unity for the energy distribution
As stated before, the displacement and velocity fields in the domains Ω_M and Ω_m are of a different nature. One field is continuous in its definition space, while the other has a discrete character and is only defined at the geometrical points that correspond to the atoms. In order to construct a velocity coupling, we assume equality of the velocities as we pass from one model to the other. This condition has to be written in a weak form, which means in a "global", or "integral", manner.
Accordingly, a new space, denoted by M and called the "mediator space", is constructed, onto which we project the fields u̇ and ḋ in order to compare them. The nature of M is determined by the discrete character of the atomic fields. Indeed, no extrapolation outside the atomic positions is possible if we wish to keep a physical meaning at the fine scale. Hence, M has to be a subspace of the physical atomic space.
More precisely, through the operator Π we project the velocities onto a discrete subset of the atomic positions included in Ω_c. Considering that M has been constructed as a Hilbert space, we introduce a scalar product c from M × M into R and write the velocity coupling in weak form. This formulation finally allows us to write the global equations, coupled with Lagrange multipliers.
Discretization
We now introduce two main discretizations. First, the macro-problem in Ω_M is discretized with a finite element interpolation; subsequently, a time discretization is needed to solve the dynamic global problem. Moreover, we use a Heaviside enrichment at the crack in order to avoid remeshing, by exploiting the partition-of-unity property of finite element shape functions (Moës et al., 1999; Remmers et al., 2003).
Spatial discretization for the continuum problem
The weak formulation allows us to adopt a finite element form for the macro-problem. With the shape functions N_i and nodal unknown vectors u_i, and using the Heaviside step function to take the discontinuity into account, one obtains the discrete displacement field. With this notation the bilinear form a_M(·,·) and the linear form l_M(·) take their discrete expressions and, with the classical scalar product, the coupling term in the continuum leads to the typical matrix formulation. Remark 1: The added term F_L^M is a fictitious force due to the coupling with Lagrange multipliers. This force takes non-zero values only in the coupling zone Ω_c.
Remark 2: The matrices M, K and the vector F_M contain information about the energy distribution, i.e., in the domain Ω_c the repartition function α is used to build their elementary terms.
Remark 3: The vector Λ contains the Lagrange unknowns; its size is equal to the cardinality of the coupled atomic subset of Ω_c times the dimension of the considered space.
Time integration scheme
In order to solve the coupled system we use a standard central difference scheme. This scheme is widely used in classical molecular dynamics, where it is also known as the Verlet algorithm. However, as it is explicit, its stability relies on an appropriate choice of the time step size. For the continuum problem other time schemes could have been used, but for simplicity the central difference scheme has also been adopted for this subdomain.
The global coupling equations follow. As in the continuum model, the equation for the atomic scale includes the repartition function β, and a fictitious coupling force f_L^m is introduced. The matrix m is diagonal with elementary terms β m_i, and the coupling force is defined accordingly. The system can be rewritten in terms of the three main vectors (U, d, Λ). The time scheme relies on a discretization with a time step Δt and has four stages: (1) given the quantities at step n, compute the displacements U_{n+1} and d_{n+1}; (2) compute the accelerations Ü_{n+1} and d̈_{n+1}, neglecting the Lagrange forces; (3) compute the predictor velocities U̇*_{n+1} and ḋ*_{n+1}; (4) correct these velocities to obtain the final velocities U̇_{n+1} and ḋ_{n+1} by taking into account the coupling terms and the Lagrange multipliers Λ_{n+1}.
The first three steps are straightforward and do not need further explanation. The last step computes the coupling terms and enforces condition (9).
From (9) and (18), with the matrix M standing for the lumped mass matrix, the coupling condition is expressed in terms of the averaged multiplier Λ̄_{n+1} = (Λ_{n+1} + Λ_n)/2. The new Lagrange multipliers Λ_{n+1} are subsequently computed by solving the linear system [20], in which b_{n+1} stands for the weak coupling condition evaluated on the predictor velocities. This term is therefore a measure of the error with respect to the solution that satisfies the system (18).
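To make the four stages and the final multiplier solve concrete, a minimal sketch is given below. It is not the authors' implementation: the matrices C_M and C_m are placeholders for the mediator-space (projection) coupling operators, the mass matrices are assumed lumped (diagonal), and f_macro and f_atom stand for the weighted force computations in each subdomain.

```python
import numpy as np

def coupled_verlet_step(U, Vu, Au, d, vd, ad, dt,
                        M_diag, m_diag, f_macro, f_atom, C_M, C_m):
    """One explicit central-difference (Verlet) step of the coupled FE/MD system.

    M_diag, m_diag : lumped (diagonal) weighted mass matrices, as 1-D arrays.
    C_M, C_m       : placeholder coupling matrices expressing the weak velocity
                     constraint C_M @ Vu - C_m @ vd = 0 on the mediator space.
    f_macro, f_atom: callables returning the force vectors in each subdomain.
    """
    # (1) displacements at step n+1
    U1 = U + dt * Vu + 0.5 * dt**2 * Au
    d1 = d + dt * vd + 0.5 * dt**2 * ad
    # (2) accelerations at step n+1, neglecting the Lagrange (coupling) forces
    Au1 = f_macro(U1) / M_diag
    ad1 = f_atom(d1) / m_diag
    # (3) predictor velocities
    Vu_star = Vu + 0.5 * dt * (Au + Au1)
    vd_star = vd + 0.5 * dt * (ad + ad1)
    # (4) Lagrange multipliers chosen so that the corrected velocities satisfy
    #     the weak coupling condition; b measures the constraint error on the
    #     predictor velocities.
    A = 0.5 * dt * (C_M @ np.diag(1.0 / M_diag) @ C_M.T
                    + C_m @ np.diag(1.0 / m_diag) @ C_m.T)
    b = C_M @ Vu_star - C_m @ vd_star
    lam = np.linalg.solve(A, b)
    Vu1 = Vu_star - 0.5 * dt * (C_M.T @ lam) / M_diag
    vd1 = vd_star + 0.5 * dt * (C_m.T @ lam) / m_diag
    return U1, Vu1, Au1, d1, vd1, ad1
```

With this correction the constraint C_M @ Vu1 - C_m @ vd1 = b - A @ lam vanishes by construction, which is the role played by system [20] in the text.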
A multiscale time decomposition
Since the two models involve very different orders of magnitude, it is useful to construct a multiscale time integration scheme. Indeed, at the atomic scale a very fine time step is needed to achieve sufficient accuracy and to satisfy the critical time step condition. The concept is simple: let Δt be the fine time step, i.e. that of the atomistic problem, and ΔT the coarse time step, such that ΔT = kΔt with k ∈ N*. The entire atomistic resolution is carried out at the fine scale, but the macro-problem is solved only at the coarse time steps. For consistency between the models, the coupling condition is enforced at each fine time step via an interpolation of the macro acceleration. Indeed, there is no need to compute all the macro quantities on the fine grid; we only need to approximate the macro acceleration in order to update the velocities for the coupling condition.
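The subcycled loop described above can be sketched as follows; this is only an outline under the stated assumptions, with all function arguments as placeholders, not the authors' code.

```python
# Sketch of the multiscale (subcycled) time loop: the macro problem advances
# with the coarse step DT = k*dt, the atomistic problem with the fine step dt,
# and the macro acceleration is interpolated linearly inside each coarse step
# so that the velocity coupling can be enforced at every fine step.

def multiscale_loop(n_coarse, k, dt, macro_step, atom_step,
                    interp_macro_accel, enforce_coupling):
    DT = k * dt
    for N in range(n_coarse):
        a_macro_start, a_macro_end = macro_step(DT)   # one coarse FE update
        for j in range(1, k + 1):
            theta = j / k                             # position within the coarse step
            a_macro = interp_macro_accel(a_macro_start, a_macro_end, theta)
            atom_step(dt)                             # one fine MD update
            enforce_coupling(a_macro)                 # weak velocity coupling at the fine step
```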
One-dimensional results
As an example we consider a longitudinal bar, described with finite elements and with molecular dynamics, with a coupling region in between. The bar is subjected to a traction wave; the initial configuration imposes a displacement on the leftmost ten elements, and the right-hand side is fixed. The whole domain is 88.1563 nm long and 176 atoms are placed in the molecular domain. The interatomic distance is r_e = 0.3253 nm, and the finite element size is h = 4r_e. In the molecular model we use a Lennard-Jones potential with ε = 43.306 zJ and a mass m = 0.00448 yg. These parameters come from FCC Al properties, and the corresponding material properties for the finite element model are derived from the atomic properties. In Figure 3 we see the displacements in the bar as the wave propagates. The zoom in Figure 3(e) shows how the atoms follow the travelling wave in the coupling zone.
We now focus on the mechanical energy transfer as the wave passes through the coupling zone from the finite element to the molecular domain. In the first case (Figure 4(a)), the coupling length L_c covers 4 elements. The wave passes from one domain to the other, but there is a non-negligible energy loss. When L_c covers 10 elements, i.e. the second case (Figure 4(b)), the mechanical energy transfer is improved. A good transfer is reached when γ = L_c/L_i → 1, and then the energy losses are negligible (< 1%). Conversely, when the coupling length is too small compared to the wavelength, some reflection occurs in the finite element domain. In Figure 5 we have plotted the energy transfer ratio, as the wave passes from the finite element to the molecular domain, against the number of atoms involved in the coupling domain, for different multiscale ratios (from h = r_e to h = 10r_e). We observe that it does not depend on the multiscale ratio; only the number of atoms has an influence. To achieve a good energy balance, a sufficient number of atoms in the coupling zone is needed: with approximately 20 atoms the energy loss is below 2%.
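For reference, the diagnostic used here can be computed as sketched below. The sketch is generic (kinetic plus potential energy on each side) and the variable names are placeholders, not quantities defined in the paper.

```python
import numpy as np

# Sketch of the energy transfer diagnostic: compare the mechanical energy found
# in the molecular domain after the wave has crossed the coupling zone with the
# energy initially injected on the finite element side.

def kinetic(mass, vel):
    return 0.5 * np.sum(mass * vel**2)

def transfer_ratio(m_atoms, v_atoms, potential_md, E_initial_fe):
    """Fraction of the initial FE-side energy recovered in the MD domain."""
    E_md = kinetic(m_atoms, v_atoms) + potential_md
    return E_md / E_initial_fe

# Energy loss (or reflection) in percent:
# loss = 100.0 * (1.0 - transfer_ratio(m_atoms, v_atoms, potential_md, E_initial_fe))
```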
Conclusion
We have presented the theoretical concepts of a coupling method between two physically different models. The formulation has been written in a weak format in order to preserve the accuracy of each model, with the mechanical energy as the fundamental quantity. We have studied the method on a one-dimensional example and have observed the energy transfer as information passes from one domain to the other. The coupling length and the number of atoms involved in the transition zone have a major influence on the energy balance.
Figure 3. Wave propagation in a one-dimensional beam
Figure 4. Mechanical energy transfer from FE to MD
Figure 5. Energy transfer ratio depending on the multiscale ratio | 3,434.2 | 2008-01-01T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Early access to science research opportunities: Growth within a geoscience summer research program for community college students
Undergraduate research experiences benefit students by immersing them in the work of scientists and often result in increased interest and commitment to careers in the sciences. Expanding access to Research Experience for Undergraduate (REU) programs has the potential to engage more students in authentic research experiences earlier in their academic careers and grow and diversify the geoscience workforce. The Research Experience for Community College Students (RECCS) was one of the first National Science Foundation (NSF)-funded REU programs exclusively for 2-year college students. In this study, we describe findings from five years of the RECCS program and report on outcomes from 54 students. The study collected closed- and open-ended responses on post-program reflection surveys to analyze both student and mentor perspectives on their experience. Specifically, we focus on students’ self-reported growth in areas such as research skills, confidence in their ability to do research, and belonging in the field, as well as the mentors’ assessment of students’ work and areas of growth, and the impact of the program on students’ academic and career paths. In addition, RECCS alumni were surveyed annually to update data on their academic and career pursuits. Our data show that RECCS students learned scientific and professional skills throughout the program, developed a sense of identity as a scientist, and increased their interest in and excitement for graduate school after the program. Through this research experience, students gained confidence in their ability to “do” science and insight into whether this path is a good fit for them. This study contributes to an emerging body of data examining the impact of REU programs on community college students and encourages geoscience REU programs to welcome and support more community college students.
Introduction
Research experiences provide undergraduate students exposure to and immersion in the work of scientists and are one way to increase participation and pursuit of geoscience careers and
Background
The Council on Undergraduate Research defines an undergraduate research experience as "A mentored investigation or creative inquiry conducted by undergraduates that seeks to make a scholarly or artistic contribution to knowledge" [9].Research experiences are implemented as multi-week summer research programs, ongoing research programs throughout the year, or course-based programs [10].The National Science Foundation describes research experience programs as "one of the most effective avenues for attracting students to and retaining them in science and engineering, and for preparing them for careers in these fields" [11].Many studies have explored the success factors and benefits to students that are outlined in the National Academy's report on research experiences [8].Results of previous studies indicate that participants in research experiences were more likely to enroll in and complete a STEM major and more likely to continue toward a graduate degree, e.g., [12][13][14][15].Students often reported that their participation in a research experience confirmed their interest in a STEM career path and that research experience formed a stepping-stone into a STEM career, e.g., [16][17][18].Prior studies also described a wide range of skill and knowledge gains ranging from disciplinary content knowledge that allowed students to situate a research question in their field of study, to the development of research skills such as data collection and analysis in laboratory and field studies, e.g., [16][17][18][19].Studies further described that students developed an overall understanding of the scientific process, including the reading of primary scientific literature; and intellectual skills such as working collaboratively, critical thinking, leadership, and scientific communication, e.g., [16][17][18][19][20]. Finally, prior studies described REU programs as formative for the development of students' identities as scientists and a sense of belonging to the scientific community [21][22][23].Over the course of their research experiences students developed relationships with their mentors, intellectually engaged with the topic, developed ownership of the project, and learned how to overcome hurdles, e.g., [16,17,19,24].All these positive outcomes resulted in students feeling part of the STEM community and developing self-confidence around independent research, which in turn often resulted in interest in or commitment to the discipline [25,26].Participation in an REU program also allowed students to initiate a professional network of scientists, starting with their peers, mentors, members of their research group, or the REU program staff.
Initiatives to overcome barriers to engaging community college students in these research opportunities, as described by Hewlett [27], and expand access to research opportunities at community colleges are gaining momentum [28].And there is a growing body of evidence of the positive impacts REUs have on community college students, e.g., [29][30][31][32].This study contributes to the emerging body of data, examining the impact of an REU program on community college students in the geosciences in particular, and best practices for welcoming and supporting more community college students in geoscience REU programs.
Theoretical framework
The theoretical framework underpinning the learning process that occurs during summer research experiences builds on Vygotsky's social constructivism [24,33].A social constructivist approach emphasizes that learning is a continuous process in which knowledge is constantly reconstructed and new knowledge is integrated with prior knowledge [34].In REU programs, students share their understanding and views of science concepts with their mentors and research groups, other REU students, and REU program staff, using the language of the scientific community, reflecting a common understanding of the scientific community, and demonstrating skills that are used by scientists.Thus, students learn science in an authentic research setting, while they are embedded in the social context of a research lab or group, and through interactions with scientists.As the research mentor engages with the student in sharing knowledge, working through challenges, and discussing the science, social constructivism proposes that students learn and problem-solve beyond their knowledge level.This fosters critical thinking and results in learners that are motivated and independent [33].Hunter et al. [24] further described the extension of the social constructivist pedagogical approach into a learning model that builds on communities of practice, in which newcomers (the students) are socialized into the practice of the community of scientific research, through mutual engagement with, and direction and support from, experienced scientists.Thus, students learn how to conduct research.
The design of the RECCS program follows the social constructivist approach and community of practice framework, as student researchers learn about scientific research in an authentic setting and are paired with research mentors that introduce them to the research culture and provide the context in which students develop into independent researchers.RECCS students develop their own research questions under the guidance of their mentors and are encouraged to relate their findings to the "big picture" in the context of their research group and the broader field.As their projects unfold, students often find the need to revise their questions in response to unexpected results or unforeseen interruptions in their original research plan.Thus, students expand their knowledge of their particular research topic, as well as the research process, or what a scientist does.The RECCS student cohort, peer mentors, and the RECCS program staff form an additional layer of mentors who guide the student in integrating newly learned knowledge with existing knowledge.
Description of the RECCS program
RECCS is an NSF-funded REU program in environmental and geosciences that was designed for community college students in Colorado in consultation with local community college faculty members.RECCS students participate in a nine-week paid summer program where they complete an authentic research project guided by a research mentor or mentor team from research institutions such as the University of Colorado Boulder (CU Boulder), the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Geological Survey (USGS).Throughout the summer, RECCS students work towards two program deliverables, a scientific poster and a short scientific oral presentation, about their individual research projects.
The RECCS program staff supports the student researchers individually and as a cohort, complementing the support from research mentors and striving to create an inclusive learning community.Students spend an introductory week with the RECCS program staff learning foundational research skills such as asking research questions, reading scientific papers, interpreting graphs, and exploring what makes a good poster and presentation.Throughout the program they also reflect on the scientific process and the thinking and working like a scientist to foster students' science identity and feeling of belonging within science.The introductory week also includes an overnight trip to the University of Colorado's Mountain Research Station, a field station in the nearby Rocky Mountains, for cohort-building and hands-on fieldwork activities.For the duration of the program, students meet as a cohort with the RECCS program team once a week for a science communication and professional development workshop.Weekly workshop assignments are scaffolded to keep students moving toward the program deliverables and to reflect the scientific process.For example, as students dive into background reading early in the summer, they learn how to draft an introduction to their project.Professional development sessions include presentations from scientists, career panels, training on science ethics, resume writing workshops, and goal setting.These weekly cohort check-ins also allow students to share their current challenges or achievements and to support one another.RECCS alumni provide further support, serving as peer mentors that help students to understand their experience through a peer lens.After the program, RECCS students are invited to join an alumni email list and an alumni LinkedIn group to stay connected and receive information about future professional opportunities such as internships, fellowships, or jobs.
The RECCS application process is competitive, with an average of 50 applicants per year for 10 summer researcher positions.The RECCS program criteria require that students are enrolled in a Colorado community college and have not previously earned a college degree.While applicants need to demonstrate interest and some coursework in STEM, the RECCS selection committee takes a holistic view of student potential, focusing their selection process on students' responses to several open-ended application questions (e.g., potential benefit of the program, an example of overcoming a challenge in a previous work experience, future academic and career goals).This gives students who may be early in their science education, who may not yet have had exposure to the geosciences, or who may be pursuing a second career an opportunity to describe other experiences and assets they bring to the program that might not be represented in traditional metrics, such as courses taken or GPA, alone.
Study participants
This study includes data from five RECCS cohorts (2015-2019) for a total of 54 students.Participants had varied backgrounds and prior experiences.Forty-one percent (41%) were firstgeneration college students, which is higher than the national average for community colleges of 29% [7].Although age was not asked of students in the 2015-2017 cohorts, in the 2018 and 2019 cohorts 39% were considered non-traditional students, defined by NSF as 30 years old or older.(See Table 1 for a summary of student demographics).Finally, at the start of the program, 50 participants (93%) were enrolled in STEM disciplines at their community colleges and four (7%) were enrolled in non-STEM disciplines.
This study also includes data from 41 research mentors, 11 of whom participated more than one summer and thus provided assessment of more than one student in the study.Seventy-six percent of mentors were university-based researchers while 24% were affiliated with a national research lab (i.e., NOAA, USGS) at the time of participation.Most mentors (approximately 70%) worked with another mentor as part of a mentor team, typically made up of a research faculty mentor and a graduate student mentor, while the remainder mentored students individually.
Study design and methods
The study collected quantitative and qualitative data in the form of closed- and open-ended responses on post-program reflection surveys to analyze both student and mentor perspectives on their experiences. In addition, RECCS alumni were surveyed annually to update data on their academic and career pursuits. This study was approved by the University of Colorado Boulder Institutional Review Board Protocol 15-0034. Students and mentors were recruited to the study at the start of each summer (2015-2019). All 54 students and 41 mentors were included.
Table 1. Study participants (N = 54 students).
Race and ethnicity
White: 69%
More than one race or ethnicity selected: 24%
Asian: 4%
African American or Black: 2%
Hispanic or Latinx: 2%
Student reflection survey
The student reflection survey included intact item blocks from the Undergraduate Research Student Self-Assessment (URSSA) survey, a widely used and validated instrument to assess student outcomes of undergraduate research experiences in the sciences [35]; please see S1 File for URSSA questions included in the student survey.For the first three URSSA question blocks, students self-reported how much personal gain they perceived in three areas: i) research skills, ii) thinking and working like a scientist, and iii) personal and professional gains related to research, on a 5-point Likert scale from no gains to great gains.A final URSSA survey question block was included in the student survey to collect reflections on how the program impacted their academic or professional interests and preparedness.This question included eight statements, such as My research has prepared me for graduate school and Doing research introduced me to a new field of study, rated on a 4-point Likert scale from Strongly disagree to Strongly agree.To add context to this survey question, students were also asked an open-ended follow-up question to elaborate on how the research experience may have influenced their thinking about future career and graduate school plans.
Mentor reflection survey
At the end of the program, mentors were asked to rate students on several aspects of the research experience, such as preparation, work ethic, and quality of deliverables, using a 5-point Likert scale from Well below average to Well above average in terms of mentors' expectations for undergraduate researchers; please see S1 File for questions included in the mentor survey.For students who were mentored by a mentor team, their mentors' ratings were averaged so that there was one rating per student.During analysis, mentor responses for the two highest ratings-Well above average and Above average-were collapsed into one rating for Above average.Likewise, mentor responses for the two lowest ratings-Below average and Well below average-were collapsed into one rating for Below average.
Starting with the 2018 cohort, mentors also rated students' progress as scientists-in-training on a 4-point Likert scale from Very little to Significant.As with the previous mentor survey question, for students who were mentored by more than one person, an average of the mentors' ratings was used in the analysis.Finally, mentors were asked to briefly elaborate on areas in which their mentees made the most progress.using descriptive statistics.Open-ended responses were coded using content analysis [36] and following a hybrid approach that included both deductive and inductive coding [37].In the student survey, after responding to the closed-ended Likert-scale question about the program's impact on their academic or professional interests and preparedness (see above), students were asked to elaborate on how the research experience might have influenced their future plans.Taking a hybrid approach to coding these responses, first, a set of eight a priori codes was created based on the content of the eight statements in the closed-ended Likert question that preceded the open-ended question, such as, Prepared me for graduate school, Prepared me for a job, Introduced me to a new field of study, and Enhanced my resume.During the coding process, six additional (a posteriori) codes were added to the coding scheme to account for responses that did not fit one of the existing (a priori) codes.For example, Increased confidence and Still undecided were added (see complete list of codes in the Results section).In the mentor survey, after responding to the closed-ended Likertscale question described above about student progress as scientists-in-training, mentors were asked to briefly elaborate on the areas students made the most progress.To analyze these responses, a set of a priori codes was created based on the content in the four URSSA question blocks (i.e., Research skills; Thinking and working like a scientist; Personal and professional gains; and Engaged in the behaviors of a researcher).During coding, one additional code was added a posteriori: Awareness of career options (see the complete list of codes in Results section).
For reliability, two researchers independently coded all responses using the software Dedoose, and Cohen's kappa was calculated; please see S3 File for reliability data. To reconcile the five codes with lower inter-rater reliability scores (k < 0.50) in the student data set, the two researchers met to refine the definition of those codes and re-apply them to the responses. Final inter-rater reliability for each code ranged from k = 0.51 to 0.84. Reliability for the coded mentor responses was high for all codes (k = 0.92) and did not require further discussion. Code frequencies for the student and mentor open-ended responses were reported in frequency tables.
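For readers unfamiliar with the statistic, the sketch below shows how Cohen's kappa compares observed coder agreement to the agreement expected by chance. It is an illustrative example only and is not part of the study's analysis pipeline; the example labels are invented.

```python
from collections import Counter

# Illustrative sketch: Cohen's kappa for two coders applying the same code
# (1 = code applied, 0 = not applied) to the same set of responses.

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b) and labels_a, "parallel, non-empty lists"
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1.0 - expected)  # undefined if expected == 1

# Example with invented labels: observed agreement 0.8, chance agreement 0.5,
# so kappa = 0.6 (moderate-to-substantial agreement).
print(cohens_kappa([1, 1, 0, 0, 1, 0, 1, 1, 0, 0],
                   [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]))
```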
Tracking alumni academic and career paths
Data on academic enrollment and professional positions of RECCS alumni were collected through an annual alumni survey sent by email to all RECCS alumni.To supplement the surveys, we mined data from the RECCS LinkedIn group, which was a valuable tool for keeping current with alumni who may not have returned the survey.
Finding 1: RECCS students reported gains in research skills; growth in thinking like scientists; and personal gains related to research, such as increased confidence. Students also felt engaged in the real work of scientists during their research experience
Results from the student reflection survey show that as a group, RECCS students reported good gains in all categories that were assessed with respect to engaging in scientific work-Research skills; Thinking and working like a scientist; and Personal and professional gains related to research.Gains were reported on a 5-point scale with mean scores ranging from 4.28 to 4.57 for these three question blocks (Fig 1 ); see S2 File for complete survey data.Students also reported on their time spent engaged in the attitudes and behaviors of a researcher on a 5-point scale.RECCS students felt that they engaged in these behaviors a fair amount of the time during their research experience, a mean of 4.55 for this question block.Compared to the mean scores from a large sample of undergraduate student researchers (n = 506) published by Weston and Laursen [26], RECCS students self-reported higher average gains and time spent engaged in the behaviors of a researcher.
Looking at each item in these question blocks, RECCS students reported the most growth in Preparing a scientific poster (4.78), a research skill that was explicitly taught during the RECCS workshop (Table 2). However, students also reported high growth in areas not directly taught as part of their cohort training but that they experienced in their daily work within their research groups, for example, Confidence in my ability to contribute to science (see the items detailed below). Of these four URSSA constructs, students reported the least gains in Research skills (4.28), and particularly for the items Calibrating instruments needed for measurement (3.77) and Keeping a detailed notebook (4.05). On a reflective post-survey, it is difficult to interpret a response of no or little gain without the context of students' skill level before the program. However, the lower gains reported might suggest that not all students had the same opportunity to practice a particular skill during the nine weeks, perhaps due to the emphasis some mentors or research projects placed on other skills.
Finding 2: Mentors reported that most students produced high-quality deliverables and made significant progress as scientists-in-training
Most mentors assessed RECCS students' preparation, work ethic, and the quality of their posters and presentations as Average or Above average, in terms of their expectations for undergraduate researchers (Fig 2 ); see S2 File for complete survey data.Approximately half of RECCS students (54%) were rated Above average in preparation and an even greater percentage, nearly three-quarters (73%), as Above average for work ethic.For quality of student deliverables, approximately three-quarters of their posters (77%) and final presentations (73%) were rated as Above average.Mentors in the 2018 and 2019 cohorts were also asked to rate students' progress as scientists-in-training and reflect on the areas they thought students made the most progress.Twenty-nine mentors provided ratings of 20 students.Three-quarters of these students were rated as having made Significant progress as scientists-in-training during the research experience and the remaining one-quarter as having made Moderate progress (Fig 3): see S2 File for complete survey data.
When elaborating on areas of student progress, most mentors listed areas that fell under the code of research skills (82%), such as computer programming skills, lab/field skills, or science communication skills (Table 3). For example, one mentor described their mentee as "Mastering basic microbiology lab skills that they had no prior experience with, and data analysis (they learned a coding language and were able to perform statistical analyses and also generate high-quality figures with little assistance!)". Another mentor reflected, "She became much better at expressing why her research was important and in understanding how to express concepts without overstating them". Mentors also described how some students demonstrated thinking more like a scientist by the end of the summer (50%), which included things like critical thinking or gaining a better understanding of the scientific process. One mentor assessed their mentee as having made the most progress "Problem-solving issues and coming up with ideas to test in general." Related to personal gains, mentors recognized a boost in some students' confidence. For example, "I think what was most important was the confidence she gained to work on things outside of her comfort zone". Of the same student, another mentor said, "She expressed at the end that she was not used to being asked what she thought about things". Some mentors noted that their mentees gained awareness of what everyday research was like. For example, "I think he learned a lot about what is really involved in research, and that not all of it is fieldwork or cool results - and there is lots of hard work too. I think that was valuable for him".
When elaborating on how the research experience may have influenced their thinking about their future academic and career plans, students described impacts that went beyond the eight statements listed in the closed-ended question that preceded (see Table 4).Some did say they felt better prepared for graduate school (19%), but more of them expressed a real interest in attending (56%) and some in conducting research more specifically (22%).They described feeling less intimidated and more comfortable in a research environment, realizing that graduate school was feasible, and seeing graduate school as a more attainable goal to achieve.Some credited their mentors and the graduate students they met for influencing their thinking (22%).For example, "Seeing the bigger projects that my mentor and his cohorts were working on made me want to pursue that as a career, and talking to them about graduate school made it less intimidating and more of a feasible possibility".Some students felt the research experience also made them aware of more career options and of the resources available to them.For example, one student wrote, "I now know my options.As a first-generation college student, sometimes I have difficulties finding the appropriate resources.Through this program, I was exposed to new people, new resources, and a richer understanding of graduate school.Cost has always been a concern of mine, but I know that it is not impossible.(Especially in the sciences!!)".
Other students elaborated on how the experience influenced their planned field of study.For example, it clarified their desired field (e.g., "I realized I loved studying science and mathematics!" and "I think that doing the research in my field showed that I still wanted to pursue my original degree.It showed me that my interest for environmental sciences was minimal and my goal for planetary science remained the same").Others were exposed to a new field of interest that changed their plans, like the student who was considering adding a minor in biochemistry.Finally, two students (4%) were admittedly undecided about their future plans and did not elaborate further.
So, where are RECCS alumni now? Since completing the REU, 53 alumni from this study have remained in touch with the program via the alumni survey or the LinkedIn group. To date, 29 alumni (54.7%) have completed a degree (Table 5), most of them (27 out of 29) in STEM fields. Of those 27 alumni with STEM degrees, nearly half (13 out of 27, 48%) were employed in STEM fields at the time of this writing.
Limitations
REU programs are intensive research experiences that serve small numbers of students.Thus, the ability to report statistical significance in findings is limited and we ground all quantitative data in qualitative findings.The number of students we are reporting on (N = 54) is fairly small but constitutes five years of program experience.We also want to stress that despite efforts to find the best mentor-mentee match, the experience of each of these 54 students was unique and each mentor-mentee pair experienced the research experience differently.In this study, we tried to identify the commonalities in experiences that students and mentors described but some individual experiences likely differed from the shared or universal patterns we report.While the broad program structure was the same across all five years, we slightly modified the program from year to year.For example, students stayed one night at the university's Mountain Research Station in 2014-2017 but in 2018 we increased the stay to two nights, which may have allowed stronger bonds to develop within the cohort.Another example is that in one of the years, the students had a cohort Facebook group, in other years they didn't, which may have led to different levels of communication.These small changes in program elements may have slightly changed the experience of students in different cohorts.
Student growth throughout a summer research experience has many facets
The results from the RECCS program show the benefits and impacts of the research experience for the community college students in the program across many different aspects.We showed that RECCS students learned scientific and professional skills throughout the program, developed a sense science identity and that students described increased motivation, interest, and excitement for a STEM career path or graduate-level research after the program.Our data suggest that students felt their research project was relevant and inspired a sense of project ownership, and working in a supportive RECCS environment built a sense of belonging to this community of scientists.Students benefited from learning research and technical skills-such as lab techniques, computer programming, and data analysis-as well as soft skills such as communicating and presenting their work.The supportive cohort, weekly training, and regular check-ins by the RECCS staff in addition to the mentor support created an environment that fostered students' self-efficacy, boosted their confidence in their own abilities, and created a sense of community.While similar benefits to participants in a variety of research experiences have been described in the literature, criticism has been raised that limited evidence exists to measure the specific impact of REU programs on participating students [17].Impact analysis and program evaluation in REU programs have usually been based on student reflection and self-report data as the primary data source, which is subject to response bias [38].In this paper, we present data from mentor reflections in addition to student self-report data (see Hunter et al. [24] for a detailed discussion of this approach).We found that most RECCS mentors tended to focus their assessment of student growth on areas of research and technical skills, or critical thinking and problem-solving skills, however more than a quarter also noticed students' personal gains, such as confidence and comfort in a research environment.In a separate study, we measured the skill gains around paper and graph reading of RECCS students using eye tracking technology and showed that student skills over the course of the nine-week program became more expert-like, corroborating the self-reported data of RECCS students and their mentors who describe they gained research skills [20].We argue that along with these gains in skills, increased confidence and the ability of students to see themselves as scientists are critical components of RECCS students' success.Both self-efficacy and science identity are important predictors of success and persistence in STEM, particularly for those from underrepresented groups [39,40].Self-report data presented here suggest gains in RECCS students' self-efficacy and science identity.Based on findings from others who correlate career intention with science identity and self-efficacy [41], we interpret RECCS students' gain in these areas as factors that contributed to their persistence in the sciences.Further work on the impact of science identity on career success in REU program contexts is necessary to confirm this correlation.
Other studies have shown that a cohort and a scaffolded program towards an academic achievement can provide a structure and safety net that allows learners to overcome the negative impact of imposter feelings or self-doubt [42] and that it is important for individuals to feel competent in order to maximize motivation, performance and well-being [43].The structure of the RECCS program may have provided this structure that allowed participants to develop science identity by countering feelings of self-doubt.The weekly workshop assignments and check-ins, along with support from the RECCS staff, peer mentors, and the cohort likely influenced student success within the program.The ongoing alumni support and the network of like-minded students continue to provide this supportive structure as RECCS alums explore their career paths.
Community college students show strong growth as researchers throughout a summer REU program, and the impact of such a program is large
In the Braided River career path framework [4], the RECCS program is intentionally designed as a channel for community college students to access research experience early in their careers.The program provides Colorado community college students the chance to work closely with researchers and within research groups from a research-intensive university or national lab for a summer and provides a supportive and encouraging environment, as well as scaffolding toward a final poster and oral presentation.Students spent the majority of their time each week (30 or more hours) with their research mentor teams and about five to ten hours per week with the RECCS staff and their cohort.The time with the cohort and RECCS staff and especially the scaffolded structure of supporting students in developing their program deliverables throughout the duration of the program appeared to provide an important anchor for student success and complemented the work with the mentors.The growth reported by students and mentors also highlights the assets and experiences that students bring to the individual research project and their research groups.While about half of mentors thought that students came in with average or below-average preparation, mentors lauded the above-average work ethic of three-quarters of the students and the similarly excellent quality of the final products (i.e., poster and oral presentation).Through a research experience like RECCS early in their academic careers, students can gain confidence in their ability to "do" science and insight into whether this path is a good fit for them.The summer researchers embraced the opportunity and thrived in the program even though they were in the early stages of foundational coursework at their community colleges.
Summer research experiences provide opportunities to test research as a career pathway early in students' academic experiences
Research experiences provide an opportunity for participants to develop a vision for STEM careers early in their career path and a vision of themselves as researchers, through exposure to role models in their peer mentors and research mentors and through the experience of completing an independent research project [15,44,45].Our data corroborate these findings that a summer research experience is a critical juncture in the career path of community college students and an important tributary in the Braided River career path model.Despite this foray into research being exploratory for some students, most of them finished the summer with a clearer plan for their future academic and research pursuits and were proud of their accomplishments.Considering that nationwide only 14% of STEM-educated workers with bachelor's degrees were employed in STEM jobs [46], RECCS alumni with a science degree have so far remained in the science workforce in greater numbers (48%).
As a Colorado-focused program, about half the RECCS cohort lived within 50 miles of the CU campus, which allowed mentors who had funding available to offer students a research assistant position for the following semester.The opportunity for students to continue working with their mentors in paid research assistant positions, transfer to local Colorado 4-year colleges, and sustain the relationships they made with their cohort illustrates the benefits of an intentionally designed, local program to support and inspire community college students' transition to a four-year degree and beyond.On the alum survey, we heard from students who enrolled at a 4-year institution together with others from their cohort, continuing to provide support for each other through the transition and their college experience.
Over the last three years, the COVID-19 pandemic impacted some of the RECCS alumni who were not employed in STEM jobs.Several alumni described this in the most recent alumni survey, explaining how setbacks during the pandemic forced them to delay applying to graduate programs, to focus their resources on childcare, or to choose a job outside of science out of necessity, but RECCS alumni described being hopeful of and intending to return to STEM fields in the future.More data collected over the coming years as RECCS alumni complete degrees, return to school, or change jobs will support a better understanding of the impact of the pandemic on STEM career trajectories.
RECCS student growth viewed through the lens of social constructivism
The multi-faceted growth exhibited by community college students during their participation in the RECCS program can be understood within the framework of Vygotsky's social constructivism, which underscores the role of social interactions and collaborative learning in cognitive development of learners.Our findings imply that the RECCS program serves as an authentic platform where students engage in a dynamic process of knowledge co-construction.A key component of Vygotsky's theory is the notion that learning is a continuous process of reconstructing knowledge, where new understandings are woven into existing understanding, a process that happens throughout the mentor-mentee interactions and is supported by the weekly trainings.Our findings show that RECCS students acquired scientific and professional skills, increased their confidence and thinking like a scientist, cultivated a sense of science identity and developed interest in graduate school.Through the lens of social constructivism, we show how students' immersion within the RECCS environment fostered a sense of belonging to the community of scientists' students work with throughout the summer.The symbiotic relationship between mentors, mentees, and program staff is an example of the principles of Vygotsky's "Zone of Proximal Development," as students are guided towards becoming confident and self-directed learners [34].In the context of the Braided River career path framework [4], the RECCS program emerges as a potent tributary for community college students to pursue a STEM career.By aligning with Vygotsky's principles, we recognize that RECCS students are active participants in the scientific community.
Conclusion
Community colleges are an important component of the college landscape; about 54% of the U.S. population has attended community college at some point.Our data shows that early research opportunities for community college students such as RECCS can inspire students to advance a career in STEM.The supportive cohort and strong scaffolding appeared to be important success factors for the program.The personal growth towards becoming a STEM researcher that the students and their mentors described included a variety of research and communication skills, the development of a science identity and sense of belonging, and a vision for a career path within STEM.While RECCS community college students entered the program with different levels of preparation, mentors described them as hard-working and as achieving above-average quality in their research products, launching them into a STEM career path early in their academic career.RECCS students enter STEM careers at levels that well exceed the national average.This research clearly supports the value of continuing to invest in community college research opportunities.It not only demonstrates that community college students have both the capacity and work ethic to thrive in a research environment, but that the impact of these programs can help to shape career trajectories and increase persistence in STEM.
Students also gained Confidence in my ability to contribute to science (4.76), Confidence in my ability to do research (4.74), Confidence in my ability to do well in future science courses (4.60), and An understanding of what everyday research is like (4.73). These data suggest that students felt better equipped with newly acquired skills, confidence for pursuing scientific research further, and an awareness of what that pursuit entails. The specific attitudes and behaviors of a researcher that students reported engaging in most frequently during the summer were Feeling responsible for the project (4.78), Engaging in real-world research (4.74), Feeling like a scientist (4.63), Feeling part of the scientific community (4.62), and Thinking creatively about the project (4.62). These data suggest that students were developing a sense of belonging to the scientific community.
Fig 1. Self-reported data at the end of the research experience. RECCS student responses are compared with responses from undergraduate researchers in a large sample [26]. https://doi.org/10.1371/journal.pone.0293674.g001
Finding 3: Students agreed that the research experience positively impacted their interest in and preparedness for graduate school
Upon completion of the research experience, students reflected on how it impacted their preparedness for and interest in future academic or career opportunities (Fig 4); see S2 File for complete survey data. All of them agreed that the research experience enhanced their resumes, and nearly all agreed that it helped prepare them for a 4-year college. There was a similarly high percentage of students (>90%) who felt it prepared them for advanced coursework, prepared them for a job, and prepared them for graduate school.
Fig 4. Impact of the research experience on academic and professional preparedness and interest (N = 54). https://doi.org/10.1371/journal.pone.0293674.g004
The question block on research skills included items like Explaining my project to people outside my field; Conducting observations in the field or lab; and Calibrating instruments. The question block about thinking and working like a scientist included items such as Analyzing data for patterns and Problem-solving in general. Items assessing personal and professional gains included Confidence in my abilities and Comfort collaborating in a group. A fourth URSSA question block asked students to report how much time they spent engaged in eight different behaviors typical of a researcher, for example, Engage in real-world science research; Feel responsible for the project; and Feel part of a scientific community. Students self-reported their engagement in these behaviors on a 5-point Likert scale from none to a great deal. Following Weston & Laursen [26], a mean value was calculated for each of these four URSSA question blocks.
Table 3. Percent of mentor responses that included content in each code category for the question, In what areas did your student make the most progress? (N = 28 mentors).
Finally, one mentor's observations of student growth included a greater awareness of career options, specifically understanding the different levels of academic degrees. Overall these mentor reflections on areas of student growth, while representing only a subset of students from the 2018 and 2019 cohorts, support the findings students self-reported. https://doi.org/10.1371/journal.pone.0293674.t003 | 8,958.6 | 2023-12-21T00:00:00.000 | [
"Geology",
"Education",
"Environmental Science"
] |
Interaction, bond formation or reaction between a dimethylamino group and an adjacent alkene or aldehyde group in aromatic systems controlled by remote molecular constraints
Peri-peri interactions in naphthalene systems control the degree of bond formation between a peri-dimethylamino group and a polarised alkene or aldehyde group. Two peri-phenyl groups, which repel, induce closer N⋯C interactions or bond formation, while the ethylene link in the corresponding acenaphthene system has the opposite effect and, for the more electron-deficient alkenes, leads to formation of a fused azepine ring initiated by the tert-amino effect. In related 1,8-fluorene derivatives N⋯C interactions occur for an aldehyde and a moderately polarised alkene, but fused azocines are formed when the alkene is more reactive.
Introduction
Intramolecular interactions between a nucleophile and an electrophile can be considered as representing different stages of the reaction between them, depending on their separation. 1 This approach originated with structural studies on trans-annular N⋯CO interactions in pyrrolizidine alkaloids. 2 However, the peri-naphthalene skeleton has provided a more convenient system open to the study of a wide variety of such interactions starting with the X-ray structural determinations of Dunitz et al. on Me 2 N/CO, MeO/CO and HO/CO systems as in 1-3. 3 Such interactions can change the chemical properties of the groups, e.g. the dimethylamino aldehyde 4 protonates on oxygen, not nitrogen, with HCl to give salt 5 with formation of a N-C bond. 4,5 Studies have been extended to interactions between electron-rich centres and electrophilic polar multiple bonds such as alkynes, nitriles and alkenes in 6-8. [6][7][8][9][10] The interactions between a dimethylamino group and a range of polarised alkenes has been studied the most intensively. For the most electrophilic alkenes a long bond (1.60-1.66 Å) can form between the two groups producing a zwitterionic doubly fused five-membered ring e.g. in 9 and 10. The constraint applied by the peri-naphthalene system has been used for studying other interactions such as unusual hydrogen bonding situations 11 and through space magnetic coupling between specific elements. 12 The 2,2′disubstituted biphenyl system has also been employed, with the 1,5 interactions in naphthalenes replaced by 1,6 interactions with greater freedom of movement between the groups due to the possibility of rotation about the inter-ring bond. Thus, there is a long Me 2 N⋯CHO interaction in 11 (2.989(2) Å), but formation of a zwitterion with a six-membered ring in 12, with a Me 2 N + -C(CN) 2 − bond (1.586(3) and 1.604(3) Å). 13 Recently we demonstrated that for peri-naphthalenes containing a Me 2 N-group adjacent to a -CHC(CN) 2 group, the Me 2 N⋯CC separations can be controlled by substituents at the opposite pair of peri-positions, e.g. an ethylene bridge as in acenaphthene 13 opens up the separation between the two groups, while two peri-phenyl groups, which repel each other, reduces the separation in 14. 14 Furthermore, we observed a temperature variable separation between the groups in the salt 15. Remarkably the Me 2 N⋯C separation at 200 K is 2.098(4) Å but reversibly contracts to 1.749(3) Å at 100 K. From all these data we were able to propose a preliminary reaction coordinate for the reaction between the groups. Here we now report the structures of two families of peri-naphthalenes with a dimethylamino group next to different electrophilic groups with either an ethylene bridge or two phenyl groups at the opposite peri-positions. Furthermore, we report the structures of a small family of 9,9-dimethylfluorenes with dimethylamino and electrophilic groups adjacent, which are designed as modified biphenyl systems in which the phenyl rings are constrained to be close to coplanar but pulled away from each other by the single carbon link between the two rings. A peri-disubstituted acenaphthene system has been used to investigate nucleophilic attack at silicon 15 and to prepare frustrated lone pair systems. 16
Discussion
Peri-Diphenyls derivatives 1,8-Diphenylnaphthalene 17 was converted in three steps to its 4-dimethylamino derivative 16, which was peri-lithiated using n-butyl lithium, and converted to the aldehyde 17 with DMF in 34% yield. The aldehyde underwent Knoevenagel condensations with malonitrile, methyl cyanoacetate and cyclohexan-1,3-dione to give three derivatives 18, 20 and 21 with two nitriles, a nitrile and an ester or a cyclic diketone, respectively, which activate the alkene to nucleophilic attack from the adjacent dimethylamino group (Scheme 1). Solution NMR studies suggests that only 21 has a bond between the peri-groups, since there is no low field 13 C NMR resonance at ca. 160 ppm for the peri-alkene carbon but a signal instead at 89.8 ppm. The crystal structures of the dimethylamino derivative 16, the peri-aldehyde 17 and the Knoevenagel products were determined by X-ray crystallography to study the interaction between the functional groups. We have already reported the structure of 18. 14 Comparison of the structures of the dimethylamino derivative 16 and the peri-dimethylamino-aldehyde 17 show how the nitrogen lone pair which is partially conjugated with the naphthalene ring in 16 has oriented towards the aldehyde carbon in 17 (Fig. 1). Thus, the torsion angles of the N-CH 3 bonds with the aromatic ring in aldehyde 17 are much less asymmetric: 51.6(3)/−78.0(3) cf. 23.9(2)/−105.1(2)°in 16, as the lone pair is rotated further away from alignment with the p orbitals of the aromatic ring (Table 1). The two phenyl rings in aldehyde 17 are tilted at 56.7 and 59.2°in the same sense from the best naphthalene plane, and lie at 20.7°to each other (ESI: † Table S2). The phenyl groups are splayed apart in the naphthalene plane, with displacements of 4-4.5°from their symmetrical positions at their peri-attachment positions, and a contact of 3.012 Å between their ipso carbon atoms. The nearby exo angle ψ between the fused rings in the naphthalene core is expanded to 126.1(2)°. The latter leads to the corresponding exo angle ϕ at the naphthalene between the Me 2 N and CHO groups being contracted to 117.5(2)°to produce a closer Me 2 N⋯CHO separation of 2.309(3) Å. This is 0.18 Å shorter than in the case of 4 4 without the two phenyl groups (2.489(6) Å). In other respects the molecular geometries of 17 and 4 are very similar (Table 1), for example the Me 2 N⋯CO angles are 112.56 (17) and 113.5(3)°r espectively. It is of note that the difference between the exo angles ϕ and ψ in the peri-amino-aldehyde 17 is larger than in the amine 16 which lacks an electrophilic peri-neighbour (8.6 v 5.4°) suggesting that the attractive Me 2 N/CHO interaction contributes to this asymmetry in the exo angles.
We have already reported that a similar effect is shown in a crystal of the toluene solvate of the dinitrile 18, where the installation of the two phenyl groups led to the reduction in the Me 2 N⋯CHC(CN) 2 by 0.054 Å to 2.3603(19) Å 14 compared to the corresponding molecule 8 8 without the phenyl substituents (Table 1). The phenyl groups take similar orientations to those in 16 and 17, and the exo angles, ϕ and ψ, change to 117.74 (12) and 126.24(13)° (Fig. 2). The larger separation between the two functional groups in dinitrile 18, compared to the aldehyde 17 (2.359 cf. 2.309 Å) is mainly attributable to the larger displacements of the functional groups out of the naphthalene plane in opposite directions. It is important to note that the exact molecular structure is dependent on both the attraction between the groups and optimisation of crystal packing, thus for the peri-diphenyl series the Me 2 N⋯C is shorter for the aldehyde than for the dinitrile (2.309 v 2.359 Å), but it is the other way round for the series without the phenyl groups (2.489 v 2.413 Å), though the differences are quite small.
Replacement of one of the nitriles in 18 with a methyl ester group has surprising consequences. Although this compound has a similar open structure 20 in CDCl3 solution according to NMR, in the crystal structure it adopts a closed zwitterionic structure 19 in which the dimethylamino group has added to the alkene (Fig. 2), promoted by the presence of the two phenyl groups. In contrast, the analogue without the two phenyl groups, cyanoester 23, adopts an open structure in the crystalline state with a Me2N⋯CHC(CN)CO2Me separation of 2.595(2) Å, an even larger value than in the corresponding dinitrile 8 (Table 2). 8 The difference in exo angles for the closed structure is, of course, considerably larger than for the aldehyde 17 and dinitrile 18: 17.6° v 8.6-8.9°. The formation of the five-membered ring reduces angle ϕ more and opens angle ψ further, illustrating the interdependence of these two angles. The disposition of the phenyl groups is similar to other diphenyl derivatives (Table S2 †). The Me2N-C bond formed between peri-substituents in 19 is 1.6719(14) Å long and the former alkene bond is now 1.4543(14) Å. In the two independent molecules of the biphenyl derivative 12, 13 where a dimethylamino group has added to an alkene-dinitrile to form a less strained six-membered ring, the Me2N-C bonds (1.586(3) and 1.604(3) Å) are more than 0.07 Å shorter than in 19, and the broken alkene bonds (1.493(3) and 1.487(3) Å) are 0.04 Å longer than in 19. Thus, in the naphthalene 19 the Me2N-C bond can be considered as not fully formed, and the alkene bond as not fully broken. An earlier stage in the Michael reaction between the two groups is illustrated in the chloride salt of naphthalene 24, which has one peri-dimethylammonium group in place of the phenyl groups. In this case the Me2N-C bond is even longer (1.754(6) Å) than in 19, and the former alkene slightly shorter (1.442(5) Å). 14 The accumulation of positive charge on the dimethylamino group in 19 leads to longer N-CH3 bonds (1.4899(16) and 1.4932(15) Å cf. 1.4586(18) and 1.4639(17) Å in 23 without phenyl groups) and the development of negative charge on the former alkene unit.
(Table 1: Selected geometric details of peri-diphenylnaphthalene derivatives 16-18 and their corresponding analogues 4 and 8 without phenyl groups.)
Study of a crystal of the chloroform solvate of 21, the Knoevenagel product formed from the diphenyl aldehyde 17 with cyclohexane-1,3-dione, showed that it also adopts a closed structure, as does its analogue without peri-phenyl groups, 25. 10 The Me2N-C bond (1.620(4) Å) is 0.05 Å shorter than in the corresponding cyanoester 19 and the bond to the carbanion centre (1.484(4) Å) is 0.03 Å longer, due to the stronger carbanion-stabilising ability of two ketone groups compared to an ester and a nitrile group. The presence of the two phenyls has made little difference to the Me2N-C bond compared to that of the analogue 25 without phenyl groups. The expected small increase and decrease in the exo angles ψ and ϕ, respectively, are compensated by changes in the angles at the peri-substituents (α, β, δ and ε). It is of note that for the diphenylnaphthalene derivatives whose crystals do not include solvent, 16, 17, and 19, there is a common packing motif, in which two molecules lie such that the two phenyl groups of one lie over the naphthalene of the other and vice versa (Fig. S1, ESI †).
Addition of acid to the aldehyde 17 leads to protonation of the carbonyl group and formation of an N-C bond between the peri-groups. Thus, addition of HCl gave the chloride salt 26 as a DCM solvate, while an attempted Knoevenagel reaction with Meldrum's acid gave the analogous salt with a monomalonate anion, 27 (Scheme 2). The crystal structures of both salts were determined (Fig. 3, Table 3). The phenyl groups have enhanced the difference in the exo angles ψ and ϕ between the peri-positions from 14.4° in naphthalene salt 5, 5 which has no peri-phenyl groups, to 18.2-18.4° in cations 26 and 27. This leads to shorter Me2N-C(OH) bonds, 1.617(5) and 1.621(2) Å, compared to 5 (1.638(2) Å) without the phenyl groups, and correspondingly the C-OH bonds are slightly longer: 1.360(4) and 1.363(2) v 1.353(2) Å. The anion in each of the salts forms a hydrogen bond to the cation's hydroxyl group (Fig. 3). All these structures with two peri-phenyl groups show evidence of the effect of the repulsion between the two phenyl groups on shortening the interaction or bond between the two opposite peri-groups, and in the case of 19 forcing the formation of the bond in the solid state. Attempted reaction of aldehyde 17 with nitromethane in the presence of ethylenediammonium diacetate led to a small amount of the crystalline bis-imine 22, formed by reaction between two equivalents of the aldehyde 17 and one of the catalyst, whose structure was determined by X-ray crystallography (Fig. 4). The structure is particularly informative with respect to the influence of the peri-phenyl groups. The imine and Me2N-groups are not well oriented for mutual interaction, both preferring to conjugate with the naphthalene ring. In contrast to the other structures discussed above, the phenyl groups have a much lower distorting effect on the two exo angles at either side of each naphthalene. Thus, instead of a difference of 8.5/8.6° between exo angles ψ and ϕ seen in 17 and 18, this is reduced to 1.8 and 2.6° in the two naphthalenes of the bis-imine. However, the naphthalene rings are strongly twisted, with angles of 9.9 and 11.5° between their benzene rings' best planes, so that all four sets of peri-substituent atoms are strongly displaced out of their best naphthalene planes, to opposite sides, by 0.271(2)-0.605(2) Å. The relative dispositions of the phenyl groups relative to the naphthalene plane and at the peri-positions, however, remain similar to those in the other diphenyl derivatives (Table S1 †). In the case of bis-imine 22 the phenyl groups do not exert their normal effect because the Me2N/CNR interactions are not attractive enough, or possibly repulsive, at ca. 2.5 Å. 19 In this case, faced with two unfavourable peri-interactions, the naphthalene rings distort strongly out of the aromatic plane, rather than within it, and so move the peri-substituents apart. Thus, there are limits to the compressive influence of the two peri-phenyl groups on an opposite set of peri-substituents.
Acenaphthene derivatives
To test the effect of tightening the opposite exo angle ψ on the separation of the interacting functional groups, a series of acenaphthene derivatives, which have an ethylene bridge between the second set of peri-positions, was synthesized (Scheme 3). Aldehyde 28 was prepared from the known peri-dimethylamino-bromoacenaphthene 16 by halogen-lithium exchange and treatment with DMF in 55% yield. Knoevenagel condensations of aldehyde 28 with malononitrile and methyl cyanoacetate, catalysed by ethylenediamine diacetate in refluxing methanol, yielded 29 14 and 30 in 88-91% yields. However, reaction with cyclic dicarbonyl compounds (Meldrum's acid, cyclohexane-1,3-dione and cyclopentane-1,3-dione) in DMSO at room temperature with no catalyst gave directly the fused azepines 34-36, formed by reaction between the functional groups of the initially formed Knoevenagel products 31-33. The structures of 28-30 and 34-36 were determined by X-ray crystallography. For 28-30 the small exo angle ψ within the five-membered ring of the acenaphthene (111.25(18)-111.73(17)°) leads to a larger exo angle ϕ between the two functional groups (127.03(18)-129.50(14)°), which increases the distance between them, the opposite of what was observed for the diphenyl derivatives. In the same way, it is the added space between the functional groups in the expected Knoevenagel products 31-33 which leads to a ready reaction between them, which is initiated by hydride transfer from the N-CH3 group to the polarised alkene according to the tertiary amino effect. 20,21 The structure of the aldehyde 28 shows a very significant difference to that of the corresponding naphthalene derivative 4 without the ethylene bridge (Fig. 5, Table 4). The two groups are splayed apart to a Me2N⋯C separation of 2.953(2) Å, cf. 2.489(6) Å in 4, due mainly to a widening of the ϕ exo angle to 129.50(14)°, and the aldehyde group has rotated so that it now lies at just 16.4° to the nearest C, C(H) bond of the aromatic system. Thus, it is not involved in an n-π* interaction of the type Me2N⋯CO as it is in 4. The nitrogen atom lies at 2.37 Å from the aldehyde hydrogen atom, and its theoretical lone pair axis lies at 27° to the N⋯H vector. Both the aldehyde and dimethylamino groups are oriented to optimise their conjugation with the acenaphthene ring, though the bond lengths between these groups and the ring are not significantly shortened compared to 4: Me2N-C: 1.4221(19) v 1.420(6) Å, and (O)C-C: 1.480(2) v 1.490(6) Å for 28 and 4 respectively. The dimethylamino group is displaced slightly towards the aldehyde, and the aldehyde is displaced more strongly away, but it is the larger exo angle which is the main cause of their increased separation.
The structures of the Knoevenagel products 29 and 30 have a similar pattern of in-plane displacements as in the aldehyde 28 due to the widening of the ϕ exo angle to 127.03(18)-128.62(13)°, but the alkenes lie at greater angles (35.9(3)-51.9(3)°) to their acenaphthene rings (Fig. 6, Table 4). The Me2N⋯C separations lie in the range 2.755(3)-2.846(3) Å, with the shortest for one of the two crystallographically independent molecules of dinitrile 29, which has the largest rotation of the alkene group away from the acenaphthene. In this case the Me2N⋯CC angle is reduced to 122.68(14)° (cf. 129.28(11) and 133.00(15)° in the other cases), and this can be considered as a rather long n-π* interaction. It is known that for a naphthalene with a peri-dimethylamino group located next to an electron-deficient alkene, on heating in DMSO at 60 °C the groups react to form a fused azepine, for example from the dinitrile 8 or the N,N-dimethylbarbiturate derivative 38 to the fused azepines 37 and 39 (Scheme 4). 22,23 Furthermore, recent related work has reported how 2-naphthol reacts with the peri-pyrrolidinyl aldehyde 40 to give 41. 24 These reactions are triggered by the tertiary amino effect, 20,21 whereby a hydride from the N-CH2 or N-CH3 group adds to the polarised alkene, and then the iminium cation and the carbanion formed add to each other (Scheme 5). In the case of the attempted preparation of the acenaphthene Knoevenagel products 31-33, the reaction goes directly to the azepine by stirring the aldehyde with the dicarbonyl compound in DMSO at room temperature in 40 to 80% yields. The widening of the exo angle allows the groups to get into positions to react more easily. The structures of the resulting three fused azepines 34-36 with various spiro cyclic dicarbonyl systems are shown in Fig. 7, with selected geometric data in Tables S4 and S5 (ESI †).
The structures are similar to related naphthalene-based systems 10 except that the widening of the ϕ angle is retained. The N-C(H2)-C(spiro) ring atoms are displaced to the same side of the acenaphthene ring, with the strongest displacements for the methylene and spiro carbons (1.163-1.354 and 0.551-0.919 Å respectively). The remaining methylene carbon is displaced to a smaller degree in the opposite sense (Table S5 †).
Fluorene derivatives
To provide a skeleton to contrast with the biphenyl system, but with the phenyl rings constrained to be near coplanar and with the separation between ortho substituents increased, the 9,9-dimethyl-fluorene skeleton 42 was selected. The parent dimethyl-hydrocarbon was converted to the ortho-diiodo compound 43 by bis-lithiation and treatment with iodine in 25% yield. The structure of the diiodo compound was confirmed by X-ray crystallography, and it shows considerable distortion of the fluorene system to accommodate an intramolecular I⋯I contact distance of 3.6392(4) Å (further details in the ESI †). Mono-lithiation of diiodo compound 43 and treatment with DMF gave the dimethylamino aldehyde 44 in one step in 42% yield. The reaction proceeds by addition of DMF to the mono-lithiated aromatic, followed by expulsion of dimethylamide, which substitutes the adjacent iodide. Barbasiewicz et al. observed a similar reaction with peri-diiodonaphthalene. 25 The aldehyde 44 gave the expected alkenes 45 and 46 by Knoevenagel reaction with malononitrile or nitromethane under reflux in methanol with ethylenediammonium diacetate as catalyst (Scheme 6). In contrast, just stirring aldehyde 44 with benzoyl-nitromethane, Meldrum's acid or cyclopentane-1,3-dione in DMSO at 20 °C gave fused azocine products 47-49, analogous to the behaviour of the more reactive acenaphthene derivatives (Scheme 7). Interestingly, on recrystallisation, some of the gem-benzoyl-nitro derivative 49 lost the benzoyl group, presumably due to the effect of water in the solvent, to give the fused nitro-azocine 50 (Scheme 7). Similar types of azocines have been reported from disubstituted biphenyls; 21 however, we are not aware of any such derivatives of the fluoreno-azocine ring system in 47-50. The crystal structures of 44 and 45 were determined to examine the interaction between the functional groups, and those of 48 and 50 to confirm their molecular structures (Table 5). In contrast, in the biphenyl series, the corresponding separations are quite different: 2.989(2) and 1.586(3)/1.604(3) Å respectively. In 44 the lack of rotational freedom in the fluorene has brought the Me2N- and -CHO functional groups closer together, and the Me2N⋯CO angle is 111.04(16)°, as would be expected for an n-π* interaction. The axis of the nitrogen lone pair lies at 9.5° to the Me2N⋯C(O) vector. The greater separation of the Me2N- and -CHC(CN)2 groups in the constrained fluorene system 45 prevents the bond formation seen in the biphenyl series. The Me2N⋯CC angle is favourable for an n-π* interaction (110.80(10)°), though the Me2N⋯C distance is particularly long, 2.8304(17) Å, and the angle between the axis of the N lone pair and the Me2N⋯C vector is 8.5°. In the fluorene plane the Me2N-group is displaced towards the alkene, which is displaced away, but the favourable alignment of groups in 44 and 45 is achieved by different combinations of (a) displacements of the groups to the opposite sides of the fluorene plane and (b) widening of the exo angles at the intervening ring fusions (Table 5).
(Table 5 footnotes: a ζ, ξ and τ: torsion angles. b ΔN, C: deviations of the 1- and 8-substituent atoms from the fluorene plane.)
X-ray crystallography also confirmed the structures of two of the fused azocines, 48 and 50, which are shown in Fig. 9 with selected molecular geometry in Tables S6 and S7 (ESI †).
The spiro junction in the azocine ring of 48 leads to more sp 3 C-sp 3 C bond strain with C-C bonds of 1.561(2) and 1.565(2) Å to the spiro atom, than in the nitro derivative 50 (1.506(4) and 1.525(4) Å). In the azocine ring the nitrogen atom adopts a moderately pyramidal bonding geometry (sum of angles: 48: 350.60(13)°; 50: 348.8(2)°), and there is significant angular strain along the N-CH 2 -C-CH 2 fragment (112.0-118.26°) with exception of the spiro carbon of 48. The N-C(H 2 )-C ring atoms are displaced to the same side of the fluorene ring, with the strongest displacements for the two carbons (1.430-1.458 and 0.847-0.953 Å). The remaining methylene carbon is displaced to a smaller degree in the opposite sense (Table S7 †). These out of plane displacements are larger than in the related acenaphthene derivatives.
Conclusions
These investigations show that the interaction between an electrophile and a nucleophile in the peri-positions of a naphthalene can be modified by the choice of substituents at the opposite peri-positions. Two phenyl groups repel each other and force the other substituents closer, and in one case, the cyanoester derivative 19, this led to them forming a bond, which in the absence of the phenyls they did not. In contrast, in the acenaphthene systems the ethylene group acts as a short constraint between the peri-positions, widening the separation between the interacting groups, with various consequences. The aldehyde group in compound 28 now has sufficient space to rotate into the plane of the aromatic system so the n-π* interaction is lost, while for the ethenedinitrile in 29 the Me2N⋯C interaction is lengthened, and for more reactive electrophiles cyclisation between the N-methyl group and the alkene occurs, forming a fused azepine ring as in 34-36, initiated by intramolecular hydride transfer, which the increased space between the groups permits. For the 1,6 interactions in the fluorene derivatives studied, two long n-π* interactions were observed for the less reactive electrophiles in 45 and 46, but for the more reactive ones the corresponding cyclisation to form fused azocine rings in 48 and 50 took place. The peri-diphenylnaphthalene system is the most promising for exploring further nucleophile-electrophile interactions and, in particular, for accessing separations nearer to the transition state for direct bond formation. Addition of small groups to the phenyl rings to increase the repulsion between them may lead to even closer peri-interactions at the opposite positions, and complement the very few N⋯C interactions known in the 1.7-2.3 Å range. 14 In this work we did not find, but also did not deliberately look for, polymorphs of the compounds whose crystal structures were determined. It is possible that different polymorphs or solvates may show somewhat different interactions between the groups, as a result of the effects of the different packing arrangements, just as in the crystal structure of the acenaphthene 29 the two independent molecules have differences in their conformations. In the diphenylnaphthalene series, in particular, this may be worth pursuing.
Experimental
Full details of the synthesis and characterisation of new substances and the determination of crystal structures by X-ray diffraction and their crystal data are provided in the ESI. † Crystallographic data are deposited at the Cambridge Crystallographic Data Centre with code numbers CCDC: 2069090-2069106 and 2069108.
Conflicts of interest
There are no conflicts of interest to report.
[tokens: 5,986.4 | created: 2021-05-27 | fields: Chemistry]
A single quantum dot-based nanosensor with multilayer of multiple acceptors for ultrasensitive detection of human alkyladenine DNA glycosylase
We develop a single quantum dot-based nanosensor with a multilayer of multiple acceptors for ultrasensitive detection of human alkyladenine DNA glycosylase.
Introduction
The genetic information of eukaryotes is stored in DNA whose maintenance and stability are vital to the organism. However, the genomic DNA is constantly exposed to various endogenous and environmental threats (e.g., reactive radical species, toxins, and radiation), causing a diversity of damaged bases, lesions, mismatches and base-pair modifications in the genome, 1 eventually leading to genomic instability and cancers. 2,3 The base excision repair (BER) pathway acts throughout the cell cycle to remove the damaged bases from DNA. BER is initiated by DNA glycosylases that recognize and catalyse the cleavage of the damaged/mismatched bases, generating an apurinic/apyrimidinic (AP) site. The repair process is completed by the action of AP endonucleases, deoxyribophosphodiesterases, DNA polymerases and DNA ligases. 4-6 BER enzymes (e.g., DNA glycosylases) play an important role in the repair of DNA lesions and have been associated with both individual and population disease susceptibility. 7 In addition, abnormal DNA glycosylases are associated with a variety of diseases such as cancer, 8,9 cardiovascular disease, 10 neurological disease and inflammation, 11 suggesting the important role of DNA glycosylases in cancer diagnosis and treatment.
So far, a series of methods have been developed for the detection of DNA glycosylase. [12][13][14][15][16][17][18][19][20] Gel electrophoresis coupled with radioactive labelling is the most general method, but it suffers from time-consuming procedures, poor sensitivity, and hazardous radiation. 12 High-performance liquid chromatography needs tedious DNA fragmentation and expensive instrumentation. 13 Magnetic nanoparticle-based separation techniques involve a long analysis time and complicated procedures. 14 Gold nanoparticle-based colorimetric assays enable visual detection of DNA glycosylase, but they require complicated procedures for the preparation and modification of gold nanoparticles. 15 Luminescence assays need additional chemical reagents, which increase the complexity of the experiments. 16 Fluorescence methods take advantage of either DNA probes labelled with a fluorophore and a quencher 17,18 or artificial fluorescent nucleotide analogs (e.g., pyrene, 2-aminopurine, and pyrrolo-dC) 19,20 to detect thymine DNA glycosylase (TDG), 18 uracil DNA glycosylase (UDG), 19 and human 8-oxoguanine DNA glycosylase (hOGG1), 20 but few approaches are available for the human alkyladenine DNA glycosylase (hAAG) assay. 17 Unlike other DNA glycosylases that are specific for a particular type of damaged base, hAAG excises a diversity of substrate bases damaged by alkylation and deamination (e.g., 3-methyladenine, 7-methylguanine, 1,N6-ethenoadenine, and hypoxanthine). 21 The hAAG cleaves the N-glycosidic bond between the sugar and the damaged base (see ESI, Fig. S1 †), and the resulting abasic nucleotide is excised and replaced with a normal nucleotide by the sequential action of endonuclease, polymerase and DNA ligase. 21 Previous research demonstrates that hAAG activity in peripheral blood mononuclear cells from lung cancer patients is higher than that in normal people. 22 Moreover, the high expression of hAAG may induce frameshift mutagenesis and microsatellite instability by binding to and stabilizing one and two base-pair loops and shielding them from repair in the presence and absence of the DNA mismatch repair pathway, eventually leading to a high risk of cancer. 23 Therefore, the accurate detection of hAAG activity is essential for biomedical research and clinical diagnosis.
Semiconductor quantum dots (QDs) exhibit unique optical and physical properties (e.g., high brightness, high quantum yield, good stability against photobleaching, narrow emission bands and size-tunable emission spectra) that are not shared by organic dyes and fluorescent proteins, and their dimensions are comparable to those of biomolecules. [24][25][26] QDs have found wide applications in imaging, sensing, drug delivery and biomedical research. [27][28][29][30][31][32] Recently, the combination of QDs with single-molecule/particle detection [33][34][35][36][37] enables the detection of nucleic acids, proteins, and even small molecules with extremely high sensitivity, low sample consumption, rapidity, and simplicity. [38][39][40] In this research, we develop a single QD-based nanosensor with a multilayer of multiple acceptors for ultrasensitive detection of hAAG using apurinic/apyrimidinic endonuclease 1 (APE1)-assisted cyclic cleavage-mediated signal amplification in combination with DNA polymerase-assisted, multiple Cy5-mediated FRET. We designed a hairpin probe with a hypoxanthine base (I) modified in its stem as the substrate of hAAG. The presence of hAAG induces the cleavage of the hairpin substrate, generating a trigger which can hybridize with a probe modified with an AP site to initiate the cyclic cleavage for the generation of abundant primers. The resultant primers can subsequently initiate the polymerase-mediated signal amplification to produce the biotin-/multiple Cy5-labeled double-stranded DNAs (dsDNAs), which can assemble onto the QD surface to form the QD-dsDNA-Cy5 nanostructure, leading to efficient FRET from the QD to Cy5 under excitation at 405 nm. This single QD-based nanosensor can sensitively detect hAAG with a detection limit as low as 4.42 × 10⁻¹² U mL⁻¹. Moreover, it can detect hAAG even in a single cancer cell, and distinguish the cancer cells from the normal cells.
Results and discussion
Principle of the hAAG assay
The principle of the single QD-based nanosensor with a multilayer of multiple acceptors for hAAG activity is illustrated in Scheme 1. The reaction system consists of a hairpin probe, an AP probe, and a capture probe. We designed a hairpin probe as the substrate of hAAG. The stem of the hairpin probe contains two complementary strands, with a hypoxanthine base (I) modified in the longer stem (blue + pink color, Scheme 1), which is mismatched with thymine (T) in the complementary strand (blue color, Scheme 1). We designed an AP probe with an AP site (green + purple color, Scheme 1) at the 23rd base from the 5′ end. The AP probe is paired with the trigger (pink color, Scheme 1) generated by the cleavage of the hairpin probe for the initiation of isothermal strand displacement amplification (SDA) 41,42 in the presence of APE1. In addition, we designed a capture probe (yellow color, Scheme 1) which can hybridize with the primer (green color, Scheme 1) for the initiation of DNA polymerase-assisted amplification. To prevent nonspecific amplification, we modified the 3′ termini of all probes with NH2.
This assay involves four steps: (1) hAAG-actuated hypoxanthine excision repair reaction, (2) APE1-mediated SDA, (3) DNA polymerase-assisted amplification and the incorporation of multiple Cy5 molecules, and (4) the assembly of biotin-/multiple Cy5-labeled dsDNAs onto the QD surface, resulting in efficient FRET from the QD to Cy5. In the presence of the hAAG, APE1 enzyme and hairpin probe, hAAG recognizes the I/T base pairs and cleaves the N-glycosidic bond between the sugar and the hypoxanthine base, releasing the hypoxanthine base to form an AP site. 43,44 Then APE1 cleaves the AP site, leading to the break of the hairpin probe into two portions (i.e., a trigger and a stable stem-loop DNA fragment) (see ESI, Fig. S1C †). The resultant triggers (pink color, Scheme 1) can hybridize with the AP probes to form the AP probe/trigger dsDNAs. Subsequently, the APE1 enzyme induces cyclic cleavage of dsDNAs, releasing the triggers and a large number of primers with 3′-OH (green color, Scheme 1). The released primers can initiate polymerization with the biotinylated capture probe as the template in the presence of the Klenow fragment, Cy5-dATP, dCTP, dGTP and dTTP, generating stable dsDNAs with the incorporation of multiple Cy5 molecules. These biotin-/multiple Cy5-labeled dsDNAs can self-assemble onto the QD surface via specific biotin-streptavidin binding to form the QD-dsDNA-Cy5 nanostructure. Under 405 nm excitation, efficient FRET occurs with the 605QD as the donor and Cy5 as the acceptor (see ESI, Fig. S2 †), and the Cy5 signals can be simply measured using a total internal reflection fluorescence (TIRF) microscope for the quantification of hAAG activity. In the absence of hAAG, the hypoxanthine base cannot be cleaved, and no trigger is generated. Consequently, neither APE1-mediated SDA nor DNA polymerase-assisted amplification reaction occurs, and no Cy5 signal is observed. Notably, this assay has four significant characteristics: (1) APE1 can specifically cleave the AP site in the DNA duplex of both the hairpin probe and AP probe/trigger dsDNA, initiating the APE1-mediated SDA for the release of a large number of primers; (2) the capture probe can specifically hybridize with the primer, forming dsDNA which may function as the template for the amplification, leading to the incorporation of multiple Cy5 molecules into the resultant dsDNA and eventually the formation of the QD-dsDNA-Cy5 nanostructure; (3) in the single QD-based FRET nanosensor, the QD functions not only as a FRET donor but also as a local nanoconcentrator to assemble multiple Cy5 acceptors. In contrast to the typical QD-based FRET approaches with a single donor-acceptor pair, 38 the assembly of biotin-/multiple Cy5-labeled dsDNAs onto a single QD leads to the formation of the multilayer of multiple Cy5 molecules in the QD-dsDNA-Cy5 nanostructure, significantly improving the FRET signals; (4) the near-zero background signal results from the specific recognition and cleavage of the hairpin probe substrate by hAAG and a high signal-to-noise ratio of single-molecule detection. Therefore, this single QD-based nanosensor with a multilayer of multiple acceptors can be applied for sensitive detection of DNA glycosylase.
Validation of the assay
In order to verify the feasibility of this assay, we performed gel electrophoresis, DNA melting temperature experiments, and fluorescence measurements (Fig. 1). We used a Cy5-labeled hairpin probe to perform the hAAG-actuated hypoxanthine excision repair reaction (Fig. 1A). In the absence of hAAG, only a 54-nt band of the Cy5-hairpin probe is observed with the co-localization of SYBR Gold and Cy5 (Fig. 1A, lane 1), indicating that no cleavage reaction occurred. In the presence of hAAG, the Cy5-hairpin probe is cleaved, generating a 15-nt band of the Cy5-labeled trigger (Fig. 1A, lane 2, red color) and a 38-nt band of the stem-loop DNA fragment (Fig. 1A, lane 2, green color), indicating that hAAG can recognize the I/T base pair and specifically excise the hypoxanthine with the assistance of APE1.
The cleavage of the AP site in the dsDNA by APE1 is verified by gel electrophoresis (Fig. 1B). The hybridization of the trigger with the AP probe leads to the formation of the trigger/AP probe dsDNA (Fig. 1B, lane 1). APE1 can cleave the AP site of the dsDNA, producing a 22-nt primer (Fig. 1B, lane 2) with the same length as the synthesized primer (Fig. 1B, lane 3), indicating the occurrence of APE1-mediated SDA. To confirm whether the hybridization of the primers with the capture probes can initiate the Klenow fragment polymerase-assisted amplification, we measured the melting curves of the products (Fig. 1C). The melting temperature of the products is 78 °C after amplification (Fig. 1C, red line), much higher than that before amplification (71 °C; Fig. 1C, blue line), suggesting the occurrence of the Klenow fragment polymerase-assisted amplification reaction. We further used fluorescence spectroscopy to verify the feasibility of this assay (Fig. 1D). No Cy5 signal is detected in the control without hAAG (Fig. 1D, blue line), indicating no FRET from the QD to Cy5 in the absence of hAAG. In contrast, in the presence of hAAG, a distinct Cy5 signal is observed, accompanied by a decrease of the QD signal (Fig. 1D, red line), suggesting efficient FRET from the QD to Cy5 as a result of the formation of the QD-dsDNA-Cy5 nanostructure. Moreover, the Cy5 fluorescence intensity increases with increasing concentration of hAAG (Fig. 2). On a logarithmic scale, the Cy5 fluorescence intensity exhibits a linear correlation with the concentration of hAAG in the range from 1 × 10⁻⁹ to 1 × 10⁻³ U mL⁻¹. The regression equation is F = 28.8 log10 C + 335.7 (R² = 0.981), where C represents the concentration of hAAG (U mL⁻¹) and F represents the Cy5 fluorescence intensity. The detection limit is calculated to be 8.98 × 10⁻¹⁰ U mL⁻¹ based on the principle of the control group plus three times the standard deviation.
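A short numerical sketch may help make the ensemble calibration above concrete. It inverts the reported regression (F = 28.8 log10 C + 335.7) to estimate an hAAG concentration from a measured Cy5 intensity and computes a detection limit as the blank mean plus three standard deviations; the example intensity and blank replicates are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch: the regression coefficients come from the text; the signal and
# blank values below are hypothetical placeholders.
SLOPE, INTERCEPT = 28.8, 335.7  # F = SLOPE*log10(C) + INTERCEPT, C in U/mL

def haag_concentration(f_cy5: float) -> float:
    """Invert the calibration curve to estimate the hAAG concentration (U/mL)."""
    return 10 ** ((f_cy5 - INTERCEPT) / SLOPE)

def detection_limit(blank_signals: list[float]) -> float:
    """Concentration corresponding to (mean blank + 3*SD), as defined in the text."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    sd = (sum((x - mean) ** 2 for x in blank_signals) / (n - 1)) ** 0.5
    return haag_concentration(mean + 3 * sd)

if __name__ == "__main__":
    print(f"{haag_concentration(250.0):.2e} U/mL")            # hypothetical intensity
    print(f"{detection_limit([74.0, 76.5, 75.2]):.2e} U/mL")  # hypothetical blank replicates
```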
Calculation of FRET efficiency of the single QD-based nanosensor
In the single QD-based nanosensor, FRET leads to the simultaneous quenching of the QD donor emission and sensitization of the Cy5 acceptor emission. The FRET efficiency (E) can be quantified based on eqn (1), E = 1 − F_DA/F_D, 25 where F_DA is the 605QD fluorescence intensity in the presence of the Cy5 acceptor, and F_D is the 605QD fluorescence intensity in the absence of the Cy5 acceptor. Under the optimal experimental conditions (see ESI, Fig. S4-S7 †), the FRET efficiency is calculated to be 84.9% according to eqn (1). This value is close to that (85.3%) obtained by the single QD-based nanosensor (see ESI, Fig. S4 †). Such a high FRET efficiency is reasonable theoretically. In the single QD-based nanosensor, the FRET efficiency (E) can be calculated based on eqn (2), E = nR₀⁶/(nR₀⁶ + r⁶), 25 where R₀ is the Förster distance, n is the average number of acceptor molecules interacting with one donor, and r is the distance from the acceptor to the donor centre. The Förster distance (R₀) is estimated to be 7.7 nm for the 605QD/Cy5 pair. 40 We calculated the separation distance between the 605QD donor and the Cy5 acceptor in the FRET-based nanosensor. In theory, five Cy5-dATP can be added to the end of each primer as a result of the DNA polymerase-assisted amplification. The distances between the Cy5 molecules in the extended primer and the surface of the streptavidin-coated QD are estimated to be 1.36 nm, 3.06 nm, 4.76 nm, 6.46 nm, and 7.82 nm, respectively, based on the assumption that the average length of a nucleotide in dsDNA is 0.34 nm. Taking into account the radius of the streptavidin-functionalized QD (7.5-10 nm) (see ESI, Fig. S8 †), the corresponding QD-Cy5 separation distances are calculated to be 11.36, 13.06, 14.76, 16.46, and 17.82 nm, respectively. On this basis, the FRET efficiency (E) for a single QD with multiple acceptors can be calculated to be 77.7% for Cy5 in position-1 (i.e., E1 = 77.7%), 60.2% for Cy5 in position-2 (i.e., E2 = 60.2%), and 42.0% for Cy5 in position-3 (i.e., E3 = 42.0%) based on eqn (2). In the QD-Cy5-Cy5-Cy5 configuration, the theoretical total FRET efficiency (Eth) can be obtained from the individual single-pair FRET efficiencies (i.e., E1, E2, and E3) based on eqn (3). 45 In this QD-dsDNA-Cy5 nanosensor with multilayered Cy5 acceptors, the total efficiency (Eth) is calculated to be 85.1%, consistent with the value obtained by fluorescence spectroscopy measurement (84.9%) and the value obtained by the single QD-based nanosensor (85.3%). We further investigated the distribution of energy in this QD-Cy5-Cy5-Cy5 configuration. The total FRET efficiency can be derived from three QD-Cy5 subsystems (i.e., the QD/Cy5 pairs with Cy5 in position-1, Cy5 in position-2, and Cy5 in position-3, respectively) based on eqn (4)-(6), 46 where E′1 is the FRET efficiency of the QD/Cy5 pair with Cy5 in position-1, E′2 is the FRET efficiency of the QD/Cy5 pair with Cy5 in position-2, and E′3 is the FRET efficiency of the QD/Cy5 pair with Cy5 in position-3. The calculated results are as follows: E′1 = 51.9%, E′2 = 22.5%, E′3 = 10.8%, and the total FRET efficiency is theoretically estimated to be 85.1%, consistent with the value obtained experimentally by fluorescence spectroscopy measurement (84.9%) and the single QD-based nanosensor (85.3%), further confirming the formation of the QD-dsDNA-Cy5 nanosensor with a multilayer of multiple Cy5 molecules.
To verify efficient FRET between the QD and Cy5 in the QD-dsDNA-Cy5 nanostructure, we further measured the fluorescence lifetime of the QD (Fig. 3). The average lifetime (τ) of the QD in the control group without hAAG is 25.2 ns, whereas the τ of the QD is reduced to 4.4 ns in the presence of 0.1 U mL⁻¹ hAAG. The FRET efficiency is calculated to be 82.5% according to eqn (7), E = 1 − τ_DA/τ_D, where τ_D is the fluorescence lifetime of the QD alone, and τ_DA is the fluorescence lifetime of the QD in the presence of hAAG. This value is close to that obtained by fluorescence spectroscopy measurement (84.9%) and the single QD-based nanosensor (85.3%).
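To make the efficiency bookkeeping above easy to check, the sketch below combines the quoted single-pair efficiencies into a total efficiency for one donor with several acceptors, assuming the standard kinetic treatment in which each acceptor contributes an independent transfer rate proportional to E_i/(1 − E_i); the paper's eqn (3)-(6) are assumed to follow this form. Under that assumption the script reproduces the quoted ~85% total, the ~52/22/11% per-acceptor shares, and the 82.5% lifetime-based value.

```python
# Hedged sketch of the FRET bookkeeping; the per-pair efficiencies and lifetimes
# are taken from the text, and the additive-rate combination rule is an assumption.

def efficiency_from_quenching(f_da: float, f_d: float) -> float:
    """Eqn (1): E = 1 - F_DA / F_D (donor intensity with / without acceptor)."""
    return 1.0 - f_da / f_d

def efficiency_from_lifetimes(tau_da: float, tau_d: float) -> float:
    """Eqn (7): E = 1 - tau_DA / tau_D."""
    return 1.0 - tau_da / tau_d

def total_efficiency(pair_efficiencies: list[float]) -> tuple[float, list[float]]:
    """Combine single-pair efficiencies for one donor with several acceptors.

    Each pair efficiency E_i is converted to a relative transfer rate
    x_i = E_i / (1 - E_i); the total efficiency is sum(x)/(1 + sum(x)) and the
    share routed through acceptor i is x_i/(1 + sum(x)).
    """
    x = [e / (1.0 - e) for e in pair_efficiencies]
    s = sum(x)
    return s / (1.0 + s), [xi / (1.0 + s) for xi in x]

if __name__ == "__main__":
    e_total, shares = total_efficiency([0.777, 0.602, 0.420])
    print(round(e_total, 3), [round(v, 2) for v in shares])  # ~0.851 and ~[0.52, 0.23, 0.11]
    print(round(efficiency_from_lifetimes(4.4, 25.2), 3))    # ~0.825, as quoted above
```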
Measurement of hAAG activity by single-molecule imaging
We employed the single QD-based nanosensor to measure hAAG activity. In the control group without hAAG, only the 605QD signals are observed in the donor channel (Fig. 4A), without Cy5 fluorescence signals observed in the acceptor channel (Fig. 4B). When hAAG is present, both the 605QD fluorescence signals (Fig. 4D) and the Cy5 fluorescence signals (Fig. 4E) are observed simultaneously as a result of FRET from the 605QD to Cy5 in the QD-dsDNA-Cy5 nanostructure, with the yellow signals indicating the colocalization of 605QD and Cy5 (Fig. 4F). The near-zero background signal observed in the negative control (Fig. 4B) is crucial for the sensitive detection of hAAG activity. Notably, the fluorescence intensity of 605QD in the presence of hAAG (Fig. 4D) is much lower than that of 605QD in the absence of hAAG (Fig. 4A) due to efficient FRET from the 605QD to Cy5, but the number of QDs remains almost unchanged. Therefore, the simple quantification of Cy5 counts can be used for accurate measurement of hAAG activity. In addition, we used transmission electron microscopy (TEM) to characterize the obtained QD-dsDNA-Cy5 nanostructures (see ESI, Fig. S10 †). The observed single QDs with good dispersion clearly indicate the formation of the single QD-based nanosensor.
Detection sensitivity
To evaluate the sensitivity of the single QD-based nanosensor, we measured the Cy5 counts in response to variable concentrations of hAAG under the optimal experimental conditions (see ESI, Fig. S4-S7 †). As shown in Fig. 5, when the concentration of hAAG increases from 1.0 × 10⁻¹¹ to 0.1 U mL⁻¹, the Cy5 counts increase correspondingly. On a logarithmic scale, the Cy5 counts exhibit a linear correlation with the concentration of hAAG in the range from 1 × 10⁻¹¹ to 1 × 10⁻³ U mL⁻¹ (inset of Fig. 5). The regression equation is N = 164.8 log10 C + 1955.5 (R² = 0.995), where C represents the concentration of hAAG (U mL⁻¹) and N represents the Cy5 counts. The detection limit is calculated to be 4.42 × 10⁻¹² U mL⁻¹ based on the principle of the control group plus three times the standard deviation. The sensitivity of the single QD-based nanosensor has improved by as much as 7 orders of magnitude compared with that of the magnetic nanoparticle-based separation approach (1 × 10⁻⁴ U mL⁻¹) 14 and the hyperbranched signal amplification-based fluorescent assay (9 × 10⁻⁵ U mL⁻¹), 17 and is 203-fold higher than that of the ensemble fluorescence measurement (Fig. 2). The improved sensitivity can be attributed to (1) the specific hAAG-induced hypoxanthine excision repair, 43,44 (2) the generation of large amounts of primers induced by APE1-mediated amplification, (3) the formation of the QD-dsDNA-Cy5 nanosensor with the multilayer of multiple Cy5 molecules, and (4) the near-zero background and high signal-to-noise ratio of single-molecule detection. 33,35
Detection selectivity
To evaluate the selectivity of the single QD-based nanosensor, we used T4 polynucleotide kinase (PNK) and three DNA glycosylases including human 8-oxoguanine-DNA glycosylase 1 (hOGG1), uracil DNA glycosylase (UDG), and thymine DNA glycosylase (TDG) as the interference enzymes. PNK can catalyse the transfer of phosphate from the gamma position of adenosine triphosphate to the 5′-hydroxyl group of the DNA substrate. 40 The hOGG1 can remove the damaged 8-hydroxyguanine (8-oxoG) from 8-oxoG/C base pairs in dsDNA and hydrolyze the 3′-phosphodiester bond of the abasic site. UDG can remove the uracil base from DNA and generate an abasic site by catalysing the hydrolysis of the N-glycosidic bond between deoxyribose and the uracil base. TDG can selectively remove T from G/T mismatches through the DNA BER pathway. 5 In theory, none of these interference enzymes can recognize and remove hypoxanthine from the hairpin probe substrate. As shown in Fig. 6, a high Cy5 signal is observed in response to hAAG (pink column, Fig. 6), while no significant Cy5 signal is detected in the presence of reaction buffer (red column, Fig. 6), UDG (violet column, Fig. 6), hOGG1 (yellow column, Fig. 6), TDG (cyan column, Fig. 6), and PNK (green column, Fig. 6). This can be explained by the fact that only hAAG can generate the biotin-/multiple Cy5-labeled dsDNAs which can assemble on the surface of the 605QD to obtain the QD-dsDNA-Cy5 nanostructure with the multilayer of multiple Cy5 molecules. These results clearly demonstrate the excellent selectivity of the single QD-based nanosensor towards hAAG.
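For the single-molecule readout described in the sensitivity analysis above, the same inversion can be applied to the Cy5 counts; a minimal sketch, assuming the reported regression N = 164.8 log10 C + 1955.5 (valid from 1 × 10⁻¹¹ to 1 × 10⁻³ U mL⁻¹), is given below. The example count is a hypothetical placeholder.

```python
# Hedged sketch: regression coefficients from the text; the example spot count is hypothetical.

def haag_from_counts(n_cy5: float, slope: float = 164.8, intercept: float = 1955.5) -> float:
    """Invert the Cy5-count calibration to give an hAAG concentration in U/mL."""
    return 10 ** ((n_cy5 - intercept) / slope)

print(f"{haag_from_counts(800.0):.2e} U/mL")  # hypothetical count of 800 Cy5 spots
```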
Kinetic analysis
We used the single QD-based nanosensor to measure the kinetic parameters of hAAG by incubating 0.1 U mL⁻¹ hAAG with 1 U of APE1 and varying concentrations of the hairpin probe substrate in a 5 min reaction at 37 °C. The enzyme kinetic parameters of hAAG are obtained by fitting the experimental data to the Michaelis-Menten equation, V = Vmax[S]/(Km + [S]), where Vmax represents the maximum initial velocity, [S] represents the concentration of the hairpin probe substrate, and Km is the Michaelis-Menten constant. As shown in Fig. 7, the initial velocity of hAAG increases with increasing concentration of the hairpin probe substrate. The Vmax is evaluated to be 10.99 s⁻¹ and Km is calculated to be 32.97 nM for hAAG. The Km value is consistent with that obtained by the radioactive assay (13-42 nM). 47 These results demonstrate that the proposed method can accurately determine the kinetic parameters of hAAG.
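The kinetic analysis can be reproduced with a standard nonlinear least-squares fit of the Michaelis-Menten model. In the sketch below the substrate concentrations and initial velocities are hypothetical placeholders chosen only to illustrate the fitting step, not the measured data.

```python
# Hedged sketch of the Michaelis-Menten fit (V = Vmax*[S]/(Km + [S]));
# the data arrays are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

substrate_nM = np.array([5, 10, 20, 40, 80, 160], dtype=float)   # hypothetical
velocity_per_s = np.array([1.5, 2.7, 4.3, 6.1, 7.9, 9.2])        # hypothetical

(vmax, km), _ = curve_fit(michaelis_menten, substrate_nM, velocity_per_s, p0=(10.0, 30.0))
print(f"Vmax = {vmax:.2f} s^-1, Km = {km:.2f} nM")
```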
Inhibition assay
We used cadmium (CdCl2) as a model inhibitor of hAAG to demonstrate the feasibility of the single QD-based nanosensor for the inhibition assay. CdCl2 can inhibit the activity of hAAG towards a DNA oligonucleotide containing 1,N6-ethenoadenine (εA) and hypoxanthine (I), and it exhibits a metal-dependent inhibitory effect on hAAG catalytic activity at concentrations of 50-1000 μM. 48,49 We measured the relative activity of hAAG in response to different concentrations of CdCl2, and we found that the relative activity of hAAG decreased with increasing concentration of CdCl2. The IC50 value is the inhibitor concentration required to reduce enzyme activity by 50%. The IC50 value is determined to be 83.66 μM (Fig. 8), which is smaller than the value for hAAG alone measured by the radioactive assay (120 μM). 48 This can be explained by the fact that CdCl2 inhibits the nuclease activity of APE1 in the range of 10-100 μM (ref. 50) and the inhibition of APE1 by CdCl2 contributes to the inhibition of the whole reaction.
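The IC50 determination is a routine dose-response fit. The sketch below assumes a simple log-logistic model and uses hypothetical CdCl2 concentrations and relative activities, so the fitted value only illustrates the procedure rather than reproducing Fig. 8.

```python
# Hedged sketch of an IC50 fit; the dose-response model choice and the
# data points are assumptions/placeholders, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, hill):
    return 100.0 / (1.0 + (conc / ic50) ** hill)

cdcl2_uM = np.array([10, 30, 60, 100, 300, 1000], dtype=float)  # hypothetical
rel_activity = np.array([92, 72, 57, 44, 21, 8], dtype=float)   # hypothetical, %

(ic50, hill), _ = curve_fit(dose_response, cdcl2_uM, rel_activity, p0=(80.0, 1.0))
print(f"IC50 = {ic50:.1f} uM (Hill slope {hill:.2f})")
```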
Detection of cellular hAAG activity
Accurate detection of intracellular hAAG is essential for clinical diagnosis and treatment. To demonstrate the capability of the single QD-based nanosensor for real biological sample analysis, we measured hAAG activity in both cancer and normal cells. The cancer cells include the human lung adenocarcinoma cell line (A549 cells) and human cervical carcinoma cell line (HeLa cells), and the normal cells include the human hepatocyte cell line (HL-7702 cells), normal human lung cell line (MRC-5 cells) and human embryonic kidney cell line (HEK-293 cells). As shown in Fig. 9A, fewer Cy5 counts are measured in HL-7702 cells (Fig. 9A, green column), MRC-5 cells (Fig. 9A, violet column), and HEK-293 cells (Fig. 9A, red column), only slightly higher than those measured in the control group with the inactivated A549 cell extracts (Fig. 9A, yellow column), indicating the lack of hAAG in normal cells. In contrast, more Cy5 counts are measured in cancer cells including A549 cells (Fig. 9A, pink column) and HeLa cells (Fig. 9A, blue column), indicating the presence of hAAG in A549 cells and HeLa cells, consistent with previous research. 14,17,49 These results demonstrate that this single QD-based nanosensor can be used to distinguish cancer cells from normal cells.
We further used A549 cells as a model to investigate the relationship between the Cy5 counts and cell number. As shown in Fig. 9B, the Cy5 counts increase with increasing number of A549 cells in the range from 1 to 5.8 × 10⁴ cells. On a logarithmic scale, the Cy5 counts exhibit a linear correlation with the number of A549 cells from 1 to 1000 cells (inset of Fig. 9B), and the corresponding equation is N = 703.6 log10 X + 204.9 (R² = 0.9947), where N is the Cy5 counts and X is the number of A549 cells. The detection limit is calculated to be 0.67 cells based on the evaluation of the average response of the negative control plus 3 times the standard deviation, indicating the feasibility of the single QD-based nanosensor for the detection of hAAG at the single-cell level.
Conclusions
In summary, we developed a single QD-based nanosensor with a multilayer of multiple acceptors for ultrasensitive detection of the DNA repair enzyme hAAG. Through a three-step reaction including hAAG-specific cleavage of the hairpin probe, APE1-mediated cyclic cleavage and DNA polymerase-assisted amplification, multilayered Cy5 molecules can be assembled onto a single QD, significantly enhancing the FRET efficiency and improving the detection sensitivity. This single QD-based nanosensor has significant advantages as follows: (1) the introduction of the endonuclease APE1 eliminates the design of a complex sequence with restriction sites for signal amplification; (2) the formation of the single QD-based nanosensor with the multilayer of multiple Cy5 molecules is confirmed theoretically and experimentally, significantly amplifying the FRET efficiency; (3) the introduction of single-molecule detection with a high signal-to-noise ratio and near-zero background greatly improves the detection sensitivity. The sensitivity of the single QD-based nanosensor (detection limit of 4.42 × 10⁻¹² U mL⁻¹) has improved by 7 orders of magnitude compared with that of the magnetic nanoparticle-based separation approach (1 × 10⁻⁴ U mL⁻¹) 14 and the hyperbranched signal amplification-based fluorescent method (9 × 10⁻⁵ U mL⁻¹). 17 Notably, this method can even detect hAAG in a single cancer cell and distinguish the cancer cells from the normal cells. In addition, this single QD-based nanosensor can be used for the kinetic study and inhibition assay, holding great potential for further applications in biomedical research, drug discovery and clinical diagnosis. Importantly, this single QD-based nanosensor may combine with appropriate DNA substrates to become a universal platform for the detection of various DNA repair enzymes.
Preparation of the hairpin probe
The 10 μM hairpin probe was incubated in the buffer containing 150 mM MgCl2 and 1 mM Tris-HCl (pH 8.0) at 95 °C for 5 min, followed by slow cooling to room temperature to form the hairpin structure. The obtained probes were stored at 4 °C for further use.
Enzyme reaction and the formation of the QD-dsDNA-Cy5 nanostructure
The enzyme reaction involves three consecutive steps. First, the 0.48 μM annealed hairpin probe was incubated in 10 μL of reaction solution containing a variable concentration of hAAG, 1 U of APE1 and 1× NEBuffer 4 at 37 °C for 1 h. Second, 1.44 μL of 10 μM AP probe and 2 U of APE1 were added to the solution, and the mixture was incubated at 37 °C for 40 min, followed by incubation at 95 °C for 20 min. Third, the amplification reaction was carried out in 25 μL of solution containing 0.576 μM capture probe, 8 μM Cy5-dATP, 200 μM dGTP, 200 μM dCTP, 200 μM dTTP, 2 U of Klenow fragment, and 1× NEBuffer 2 at 37 °C for 90 min in the dark. The reaction was terminated by heating at 75 °C for 20 minutes. Then, the enzyme reaction products and the 605QDs at a final concentration of 5 nM were incubated in 80 μL of QD incubation buffer (3 mM MgCl2, 100 mM Tris-HCl, and 10 mM (NH4)2SO4, pH 8.0) at room temperature for 15 min to form the QD-dsDNA-Cy5 nanostructure.
Oligonucleotide sequences:
GAT GAA TCC TAG ACT ATT TTT ATA GTC TAG GAT TCI TCG TGA CAA TAC AAC-Cy5
Trigger: TCG TGA CAA TAC AAC-NH2
Primer: CGC TGG AGC TGA GTT GTT GTA T
Gel electrophoresis and fluorescence measurement
The DNA products were analyzed using a Bio-Rad ChemiDoc MP Imaging System (Hercules, CA, USA). The products stained with SYBR Gold were analyzed by 12% polyacrylamide gel electrophoresis (PAGE) in TBE buffer (44.5 mM Tris-boric acid, 1 mM EDTA, pH 8.2) at a 110 V constant voltage for 50 min. The fluorescent DNA fragments of the enzyme reaction products were analyzed using an illumination source of Epi-green (460-490 nm excitation) and a 516-544 nm filter for SYBR Gold fluorophores, and an illumination source of Epi-red (625-650 nm excitation) and a 675-725 nm filter for Cy5 fluorophores.
The fluorescence signals of the reaction products were measured using an F-7000 spectrometer (Hitachi, Japan) equipped with a xenon lamp as the excitation source. The excitation wavelength was 405 nm, and the spectra were recorded in the range from 550 to 750 nm. Both the excitation and emission slits were set to 5.0 nm. The fluorescence intensities at 605 nm (the maximum emission of QDs) and 670 nm (the maximum emission of Cy5) were used for data analysis. The fluorescence lifetime of QDs was measured using an FLS1000 (Edinburgh Instruments, UK).
Measurement of melting curves
For the melting curve assay, the amplification products were analyzed using a Bio-Rad CFX Connect real-time system (Hercules, CA, USA) with SYBR Green I as the indicator, and the fluorescence intensity was monitored at intervals of 30 s. The 50 nM capture probe, 50 nM primer, 10 μM dATP, 250 μM dGTP, 250 μM dCTP, 250 μM dTTP and 2 U of Klenow fragment were incubated at 37 °C for 1 h to obtain the amplification products. The pre-amplification sample included only the 50 nM capture probe and 50 nM primer. Each sample was monitored at temperatures of 50-95 °C. The specific melting temperature was defined as the inflection point at which −dF/dT reaches a maximum (where F is the fluorescence intensity and T is the temperature).
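A minimal sketch of the melting-temperature readout (Tm taken where −dF/dT is maximal) is shown below, using a synthetic sigmoidal fluorescence trace rather than measured data.

```python
# Hedged sketch: Tm is read off as the temperature of maximal -dF/dT.
# The fluorescence trace is a synthetic placeholder, not experimental data.
import numpy as np

temps = np.arange(50.0, 95.5, 0.5)                    # temperature axis, degrees C
fluor = 1.0 / (1.0 + np.exp((temps - 78.0) / 1.5))    # synthetic sigmoidal melt curve

neg_dF_dT = -np.gradient(fluor, temps)
tm = temps[np.argmax(neg_dF_dT)]
print(f"Tm = {tm:.1f} C")   # ~78 C for this synthetic trace
```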
Single-molecule detection and data analysis
For single-molecule measurements, the reaction products were diluted 200-fold in QD incubation buffer. Then 10 μL of sample was placed on a coverslip for TIRF microscopy (Nikon Ti-E, Japan) imaging. A 405 nm laser was used to excite the 605QDs. The photons from the 605QD and Cy5 were collected by an oil-immersion 100× objective (Nikon, Japan), split into the 605QD channel (573-613 nm filter) and the Cy5 channel (661.5-690.5 nm filter) by a dichroic mirror, and imaged by a digital EMCCD camera (Hamamatsu Photonics K. K., Japan) with an exposure time of 500 ms. For data analysis, a region of interest of 600 × 600 pixels was selected for Cy5 molecule counting using ImageJ software. The number of Cy5 molecules was obtained by counting ten frames.
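The spot counting can be approximated in a few lines. The sketch below is a simplified stand-in for the ImageJ workflow (threshold each Cy5 frame, label connected bright regions, average over ten frames); the mean-plus-5×SD threshold is an assumption rather than the procedure used in the paper, and the synthetic frames are placeholders.

```python
# Hedged stand-in for the ImageJ counting step; thresholding rule and test
# frames are assumptions, not the published analysis.
import numpy as np
from scipy import ndimage

def count_spots(frame: np.ndarray) -> int:
    threshold = frame.mean() + 5.0 * frame.std()     # assumed thresholding rule
    _, n_spots = ndimage.label(frame > threshold)    # connected bright regions
    return n_spots

def average_count(frames: list[np.ndarray]) -> float:
    return float(np.mean([count_spots(f) for f in frames]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.poisson(10, size=(600, 600)).astype(float) for _ in range(10)]
    print(average_count(frames))  # only a handful of spurious spots for pure-noise frames
```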
Inhibition assay
For the hAAG inhibition assay, variable concentrations of CdCl2 were added to the glycosylase reaction mixture, followed by the above three-step reaction. The relative activity of hAAG (RA) was measured according to eqn (9), RA (%) = (Ni − N0)/(Nt − N0) × 100, where N0 is the Cy5 counting number when hAAG is absent, Nt is the Cy5 counting number when hAAG is present, and Ni is the Cy5 counting number in the presence of both hAAG and CdCl2. The IC50 value was calculated from the curve of RA versus the CdCl2 concentration.
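The relative-activity expression reconstructed above translates directly into code; the counts in the example are hypothetical placeholders.

```python
# Direct implementation of the reconstructed eqn (9): RA (%) = (Ni - N0)/(Nt - N0) * 100.

def relative_activity(n_i: float, n_t: float, n_0: float) -> float:
    return (n_i - n_0) / (n_t - n_0) * 100.0

print(relative_activity(n_i=900.0, n_t=1700.0, n_0=100.0))  # hypothetical counts -> 50.0
```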
Cell culture and preparation of cell extracts
Different cell lines including A549 cells, HeLa cells, HL-7702 cells, MRC-5 cells and HEK-293 cells were cultured in Dulbecco's modified Eagle medium (DMEM; Invitrogen, USA) containing 10% fetal bovine serum (FBS; Gibco, USA) and 1% penicillin-streptomycin (Invitrogen, USA). The cells were cultured in a humidified incubator with 5% CO2 at 37 °C. The nuclear extracts were collected using the nuclear extract kit (ActiveMotif, Carlsbad, CA, USA) according to the manufacturer's protocol. The obtained supernatant was subjected to the hAAG activity assay.
Conflicts of interest
There are no conflicts to declare.
[tokens: 7,508.2 | created: 2019-08-06 | fields: Chemistry, Physics]
ADAM17 mediates OSCC development in an orthotopic murine model
Background ADAM17 is one of the main sheddases of the cell and is responsible for the cleavage and release of the ectodomains of important signaling molecules, such as EGFR ligands. Despite the known crosstalk between ADAM17 and EGFR, which has been considered a promising targeted therapy in oral squamous cell carcinoma (OSCC), the role of ADAM17 in OSCC development is not clear. Method In this study the effect of overexpressing ADAM17 on cell migration, viability, adhesion and proliferation was comprehensively appraised in vitro. In addition, the tumor size, tumor proliferative activity, tumor collagenase activity and MS-based proteomics of tumor tissues were evaluated by injecting tumorigenic squamous carcinoma cells (SCC-9) overexpressing ADAM17 into immunodeficient mice. Results The proteomic analysis effectively identified a total of 2,194 proteins in control and tumor tissues. Among these, 110 proteins were down-regulated and 90 were up-regulated in tumor tissues. Biological network analysis uncovered that overexpression of ADAM17 regulates the Erk pathway in OSCC and further indicates which proteins in this pathway are regulated by ADAM17 overexpression. These results are also supported by the evidence of higher viability, migration, adhesion and proliferation in SCC-9 or A431 cells in vitro, along with increased tumor size, proliferative activity and tissue collagenase activity as an outcome of ADAM17 overexpression. Conclusion These findings contribute to understanding the role of ADAM17 in oral cancer development and as a potential therapeutic target in oral cancer. In addition, our study also provides the basis for the development of novel and refined OSCC-targeting approaches.
Introduction
ADAM17 (A Disintegrin And Metalloproteinase) or TACE (TNF-alpha Converting Enzyme) is a surface membrane-associated protein responsible for the cleavage of several membrane proteins, a biological cell process called shedding [1,2]. Among the shed proteins, ADAM17 releases the ectodomains of important signaling molecules such as TNF-α, TGF-α, EGF, HB-EGF and VEGFR2, and of adhesion molecules such as L-selectin, syndecans, CAMs (cell adhesion molecules) and cadherins [1,3]. ADAM17 expression, like that of other ADAM family members, is up-regulated in many types of cancers, correlating with tumor progression and aggressiveness [4]. The molecules that are shed by ADAM17 are mostly signaling molecules that regulate cell proliferation, survival, migration and invasion, properties associated with malignant cells and resulting mainly from the crosstalk between ADAM17 and the epidermal growth factor receptor (EGFR) [1,5]. Interestingly, EGFR is a widely studied oncogene in head and neck tumors [6], and agents targeting EGFR have emerged as a potential adjuvant therapy for OSCC [7,8]. Thus, despite the close relationship between ADAM17 and EGFR, the role that ADAM17 plays in oral cancer development is still not clear. Metalloproteinases are particularly important in oral cancer progression, and squamous cell carcinoma of the head and neck has been classified as the fifth most common type of cancer in the world [9].
To map the effect of ADAM17 overexpression in oral cancer, ADAM17-overexpressing cells have been subjected to in vitro analyses of proliferation, migration and adhesion and to orthotopic murine tumor formation, followed by MS-based proteomics and biological network analysis. Here we show that overexpression of ADAM17 in SCC-9 cells increases cellular viability, migration, adhesion and tissue collagenase activity. In addition, ADAM17 knockdown decreased adhesion and proliferation in A431 cells. The ADAM17-overexpressing SCC-9 cells also showed increased tumor size and proliferation in the orthotopic murine tumor model compared to the control (SCC-9 cells overexpressing GFP), and MS-based proteomics of those tumors revealed up-regulation of several Erk regulatory proteins, which are associated with Erk phosphorylation. These results can open novel avenues for the understanding of the role of ADAM17 and its downstream signaling components in oral cancer development.
Cell lines
The human OSCC (oral squamous cell carcinoma) cell line, SCC-9, was obtained from the American Type Culture Collection (ATCC) and cultured as recommended. SCC-9 cells originate from a human squamous carcinoma of the tongue. The human epidermoid carcinoma A431 cell line (originating from skin) was grown in Roswell Park Memorial Institute (RPMI)-1640 medium supplemented with 10% FBS and antibiotics at 37°C in a 5% CO2 air atmosphere.
Generation of stably transfected cells
SCC-9 cells were stably selected for expression of ADAM17 (AD17) or GFP. Briefly, cells were transfected with pcDNA-ADAM-HA, kindly provided by Dr. Ulrich from the Max-Planck Institute of Biochemistry [10], or control pcDNA-FLAG-GFP, using Lipofectamine PLUS (Invitrogen) following the manufacturer's instructions. After transfection, G418 antibiotic was added to cultures at a concentration of 0.8 mg/ml and incubated for about 2 weeks, until complete death of untransfected cells. Cells were then split and frozen as mixed populations of stably transfected cells expressing ADAM17 or GFP.
Generation of a stable ADAM17-knockdown A431 cell line
Lentiviral particle production and transduction of A431 cells for ADAM17 knockdown were achieved with the Mission® shRNA Vector pLKO.1-puro System using the shRNA Plasmid pLKO.1-Neo-CMV-tGFP (Sigma-Aldrich). Experimental procedures were carried out according to the manufacturer's instructions. After transduction, G418 antibiotic was added to the cultures at a final concentration of 0.8 mg/ml and incubated for about one week, until complete death of untransduced cells.
Orthotopic murine model of oral squamous cell carcinoma
SCC-9 cells stably expressing ADAM17 or GFP were grown to 75% confluence, and 2.5 × 10⁵ cells in 20 μl of phosphate-buffered saline (PBS) were implanted into the right lateral portion of the tongue of 6- to 8-week-old male Balb/c nude mice (n = 3), using a syringe with a 30-gauge disposable needle (BD Biosciences). This procedure was approved by the Institutional Committee for Ethics in Animal Research of the University of Campinas. Mice were sacrificed 20 days after implantation, and the tumor tissues from SCC-9 cells overexpressing either ADAM17 or GFP were immediately harvested.
Tumor size measurement
Measurements of tumor size were made using a caliper, and tumor volumes were calculated as: volume = 0.5 × (major value) × (minor value)² [11]. Tumors were then frozen on dry ice for further analysis (n = 3).
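For illustration, the following minimal Python sketch implements this caliper-based volume formula; the function name and the example measurements are ours, not from the study.

```python
# Minimal sketch of the tumor volume formula described above:
# volume = 0.5 * (major value) * (minor value)^2.
def tumor_volume(major_mm: float, minor_mm: float) -> float:
    """Return tumor volume in mm^3 from caliper measurements in mm."""
    return 0.5 * major_mm * minor_mm ** 2

# Example with hypothetical measurements (mm):
print(tumor_volume(6.2, 4.1))  # ~52.1 mm^3
```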
Proliferative activity of tumors by immunohistochemical expression of Ki-67
The proliferative activity of the orthotopic tumors was analyzed by immunohistochemical expression of Ki-67 using the monoclonal mouse anti-Ki67 antibody (clone Mib1, Dako) diluted 1:200, followed by the streptavidin-biotin peroxidase complex method (Biotinylated Link Universal Streptavidin-HRP, Dako). The percentage of positive cells was determined by counting labeled nuclei in six high-power fields (400× magnification) per case with the aid of ImageJ software.
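As a minimal illustration of this counting step, the sketch below computes a labeling index from hypothetical per-field counts; the paper's actual counts were obtained in ImageJ.

```python
# Minimal sketch (not the authors' ImageJ workflow) of the Ki-67 labeling
# index: the percentage of positive nuclei counted over six high-power fields.
# The counts below are hypothetical.
positive_per_field = [42, 55, 38, 61, 47, 50]
total_per_field = [120, 140, 110, 155, 130, 135]

ki67_index = 100 * sum(positive_per_field) / sum(total_per_field)
print(f"Ki-67 labeling index: {ki67_index:.1f}%")
```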
Protein extraction from tumors and in-solution trypsin digestion
Each tissue sample was homogenized in liquid nitrogen using a mortar and pestle. Tissue proteins from each tumor (n = 2) were separately resuspended in 100 μl of extraction buffer (8 M urea, 75 mM NaCl, 50 mM Tris, pH 8.2, protease inhibitor cocktail complete mini EDTA-free (Roche)) and incubated at room temperature for 30 min under agitation. After centrifugation at 12,000 × g for 10 min at 4°C, the supernatant was quantified using the Bradford method [12].
Briefly, the extracted proteins were reduced (5 mM dithiothreitol, 25 min at 56°C), alkylated (14 mM iodoacetamide, 30 min at room temperature in the dark) and digested with trypsin (Promega). The peptides were purified on C18 StageTips [13], dried in a vacuum concentrator and reconstituted in 0.1% formic acid.
LC-MS/MS analysis and data analysis
An aliquot containing 2 μg of proteins was analyzed on an LTQ Orbitrap Velos mass spectrometer (Thermo Fisher Scientific) coupled to nanoflow liquid chromatography (LC-MS/MS) via an EASY-nLC system (Proxeon Biosystem) with a Proxeon nanoelectrospray ion source, as described by Aragão et al. [14]. Briefly, the peptides were separated by a 2-90% acetonitrile gradient in 0.1% formic acid using a PicoFrit analytical column (20 cm × ID 75 μm, 5 μm particle size; New Objective) at a flow rate of 300 nl/min over 212 min. All instrument methods for the LTQ Orbitrap Velos were set up in data-dependent acquisition mode. Full-scan MS spectra (m/z 300-2000) were acquired in the Orbitrap analyzer after accumulation to a target value of 1 × 10⁶. Resolution in the Orbitrap was set to r = 60,000, and the 20 most intense peptide ions with charge states ≥ 2 were sequentially isolated to a target value of 5,000 and fragmented in the linear ion trap by low-energy CID (collision-induced dissociation) with a normalized collision energy of 35%. Two independent biological samples of control tumor and tumor overexpressing ADAM17 were analyzed, each run in duplicate.
The raw files were processed using MaxQuant version 1.2.7.4 [15], and the MS/MS spectra were searched using the Andromeda search engine [16] against the UniProt Human Protein Database (release July 11th, 2012; 69,711 entries). Tolerances of 20 ppm for precursor ions and 1 Da for fragment ions, and a maximum of two missed trypsin cleavages, were set as parameters for protein identification. Carbamidomethylation of cysteine was set as a fixed modification, and oxidation of methionine and protein N-terminal acetylation were chosen as variable modifications. Both peptide and protein identifications were filtered at a maximum of 1% false discovery rate. Bioinformatics analysis was performed using the software Perseus v.1.2.7.4 [15], available in the MaxQuant environment, and reverse and contaminant entries were excluded from further analysis.
Protein abundance was calculated on the basis of the normalized spectral protein intensity (LFQ intensity). The data were converted to log2 values. The two independent experiments, each with two technical replicates per condition, were grouped into control tumor and tumor overexpressing ADAM17, and a paired t-test was applied to test for differences in protein intensities between these groups.
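A hedged sketch of this quantification step is shown below; the column names and intensity values are invented for illustration, and the paper's actual analysis was carried out in Perseus.

```python
# Sketch of the label-free quantification workflow described above:
# log2-transform LFQ intensities and apply a paired t-test per protein.
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical LFQ intensity table: rows = proteins, columns = samples.
lfq = pd.DataFrame(
    {"ctrl_1": [2.1e7, 5.5e6], "ctrl_2": [1.8e7, 6.0e6],
     "ad17_1": [4.0e7, 2.5e6], "ad17_2": [3.6e7, 2.2e6]},
    index=["PROT_A", "PROT_B"],
)
log2 = np.log2(lfq)

ctrl = log2[["ctrl_1", "ctrl_2"]].to_numpy()
ad17 = log2[["ad17_1", "ad17_2"]].to_numpy()

# Paired t-test across matched replicates, one test per protein (row).
t, p = stats.ttest_rel(ad17, ctrl, axis=1)
results = pd.DataFrame(
    {"log2_fold_change": ad17.mean(axis=1) - ctrl.mean(axis=1), "p_value": p},
    index=log2.index,
)
print(results)
```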
Biological network analysis
Differentially expressed proteins were uploaded into the Ingenuity Pathway Analysis (IPA; Ingenuity Systems) Knowledge Base as a tab-delimited text file of gene names. Biological networks were generated from the Knowledge Base using interactions between the mapped Focus Genes (the user's list) and all other gene objects stored in the knowledge base. In addition, functional analysis of the networks was performed to identify the biological functions and/or diseases that were most significant to the genes in each network. The significance of functional enrichment was computed by Fisher's exact test (p < 0.05). A detailed description of IPA can be found on the Ingenuity Systems website.
Immunoblotting
For identification of the mature form of ADAM17, protein extraction was performed using a detergent-containing lysis buffer (50 mM Tris pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100) supplemented or not with 20 μM BB-2516 and 10 mM 1,10-phenanthroline (Sigma), as described by McIlwain et al. [17]. Cells were obtained from a confluent 10 cm plate, and lysis was carried out on ice for 15 minutes followed by centrifugation at 12,000 × g for 10 minutes. The supernatant was quantified, and 40 μg of total protein was loaded on SDS-PAGE.
For the expression analysis of EGFR and its phosphorylated form, membrane protein enrichment was performed. Cells were lysed by passage through a syringe in non-detergent lysis buffer (25 mM Tris-HCl pH 7.5, 10 mM CaCl2) supplemented with protease and phosphatase inhibitors (1 mM PMSF, protease inhibitor cocktail, 10 mM sodium pyrophosphate, 1 mM beta-glycerophosphate, 1 mM Na3VO4, 1 mM NaF). The lysate was centrifuged at 200 × g for 10 min, and the supernatant was subjected to ultracentrifugation at 100,000 × g for 1 h. The pellet was resuspended in RIPA buffer (25 mM HEPES pH 7.5, 150 mM NaCl, 1 mM EDTA, 1% NP-40, 0.2% sodium deoxycholate) containing phosphatase inhibitors.
Real time quantitative PCR
Total RNA was obtained using the TRIzol reagent (Invitrogen), and 2 μg of total RNA was used for reverse transcription with the First-Strand cDNA Synthesis Kit (GE Healthcare). Real-time quantitative PCR for ADAM17 was performed using SYBR Green PCR Master Mix (Applied Biosystems), and dissociation curves were run to confirm the specificity of the products. The threshold cycle (Ct) values of the target gene were normalized to those of the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) gene, and relative expression ratios were calculated by the 2^−ΔΔCt method. Three independent experiments were performed in triplicate. The following PCR primers were used: ADAM17 forward, 5′-GGACCCCTTCCCAAATAGCA-3′ and reverse 5′-ATGGTCCGTGAGATCCTCAAA-3′; GAPDH forward 5′-GAAGGTGAAGGTCGGAGTCAAC-3′ and reverse 5′-CAGAGTTAAAAGCAGCCCCTGGT-3′.
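The arithmetic of the 2^−ΔΔCt method can be sketched as follows; the Ct values are hypothetical and serve only to illustrate the calculation.

```python
# Minimal sketch of the 2^-ΔΔCt calculation described above.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Relative expression ratio by the 2^-ΔΔCt method (GAPDH as reference)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: hypothetical ADAM17 Ct values normalized to GAPDH,
# transfected cells vs. control cells.
print(relative_expression(22.0, 18.0, 25.0, 18.5))  # ≈ 5.7-fold
```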
Functional assays
Analysis of ADAM17 activity by HB-EGF shedding in an AP reporter assay
The HB-EGF shedding assay was performed as described by Le Gall et al. [18] with some modifications [19]. Briefly, HEK293 cells stably transfected with HB-EGF-AP were seeded into 100-mm dishes and transiently co-transfected with pcDNA-ADAM17-HA or GFP vector (negative control). After 48 h, cells were trypsinized, counted, and seeded at 3 × 10⁵ cells per well in 24-well plates (Corning). The following day, the cells were starved for 4 h and activated with PMA (50 ng/ml) for 30 min or 1 h in phenol red-free medium. The cleavage of HB-EGF-AP was measured after overnight incubation. Briefly, 100 μl of conditioned medium was collected from each well, added to individual wells of a 96-well plate containing 100 μl of AP buffer (0.5 M Tris-HCl, pH 9.5, containing 5 mM p-nitrophenyl phosphate disodium, 1 mM diethanolamine, 50 μM MgCl2, 150 mM NaCl, 5 mM EDTA) and measured at 405 nm. Two independent experiments were performed in triplicate.
Cell viability assay by MTT
SCC-9 GFP and SCC-9 ADAM17 cells were seeded in 96-well plates and incubated for 7 days. MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; 12 mM) was added, and cells were incubated for 4 h at 37°C in the dark. The media were removed, 100 μl of 1 N HCl and isopropanol (1:25) was added to each well, and the plates were incubated for 15 min at room temperature under gentle agitation. Finally, absorbance was measured at 595 nm. Three independent experiments were performed in triplicate.
Transwell migration assay
SCC-9 GFP and SCC-9 ADAM17 cells were plated in the upper chambers of 8 μm pore transwells (HTS Transwell-96 Well Plate, Corning) after a starvation period of 4 h. The cells were allowed to migrate towards the lower chamber containing EGF at a concentration of 100 ng/ml. At the end of the assay, cells in the top chamber were removed with a cotton swab, and the cells at the bottom of the insert filter were fixed with 10% formaldehyde for 10 min, washed with PBS and stained with 1% toluidine blue solution in 1% borax for 5 min. The dye was eluted using 1% SDS, and the absorbance was measured at 620 nm. Three independent experiments were performed in triplicate.
Cell adhesion assay
SCC-9 GFP and SCC-9 ADAM17 cells, or A431/untreated (mock), A431/control (scrambled) and A431/shRNA ADAM-17 cells, were subjected to an adhesion assay as described by Aragão et al. [19]. Briefly, 10⁶ cells were plated in 100 mm dishes, and a 96-well plate was coated with Matrigel™ (2 μg per well; BD Biosciences). After 24 h, cells were trypsinized and seeded in the coated 96-well plate, which had previously been washed three times with PBS and blocked with 3% BSA (bovine serum albumin) for 2 h. Adhesion was allowed to proceed for 1 h in serum-free medium supplemented with 3% BSA; the wells were then washed 3 times, and cells were fixed with 10% formaldehyde. Cells were stained with 1% toluidine blue containing 1% borax for 5 min. The dye was eluted using 100 μl of 1% SDS, and the absorbance was measured at 620 nm. Three independent experiments were performed with five replicates.
Bromodeoxyuridine (BrdU) labeling index
A431/control (scrambled) and A431/shRNA ADAM-17 cells were plated in 96-well plates at a density of 10,000 cells per well in medium containing 10% FBS. After 16 h, the cells were washed with PBS and cultured in serum-free medium for 24 h. Following serum starvation, the medium was replaced by medium containing 2% or 10% FBS. Proliferation rates were determined 24 h after incubation by measuring BrdU incorporation into DNA (Cell Proliferation ELISA BrdU Colorimetric, Roche Applied Science, Germany). Briefly, BrdU antigen was added to the cultures at a 1:10 dilution and kept for 2 h at 37°C in 5% CO2. After incubation, the medium was removed and the manufacturer's protocol was followed. Absorbance was measured at 450 nm with correction at 690 nm. One experiment was performed with five replicates.
Collagenase activity in tumors and conditioned media
Zymography was performed with total protein extract from tumors (3 μg) and with conditioned media (10 μg) from A431 cells carrying scrambled or ADAM17-targeting shRNA. For the A431 conditioned media, cells were washed three times in PBS and incubated for 24 h in serum-free medium. The media were collected, and PMSF (phenylmethylsulfonyl fluoride) was added to a final concentration of 1 mM. Briefly, cell debris was eliminated by centrifugation at 4,000 × g for 5 min at 4°C, and the media were subsequently concentrated using a 3,000 Da centrifugal filter (Millipore, Billerica, MA) at 4,000 × g at 4°C. Samples (proteins from tumors and conditioned media) were subjected to 1-D electrophoresis on 12% SDS-polyacrylamide gels containing 1 mg/ml gelatin under non-reducing conditions, and the gelatinolytic activity assay was performed as previously described [20]. Gels were stained with Coomassie blue and destained. Gelatin digestion was identified as clear bands against a blue background. One experiment was performed for the analysis of tumor samples, and two independent experiments were performed for the conditioned media.
Statistical analysis
For the functional experiments, Student's t-test, Fisher's exact test or ANOVA followed by Tukey's test was used, with the significance level set at 0.05 (GraphPad Prism version 5 for Windows).
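For readers reproducing these comparisons outside GraphPad Prism, a minimal SciPy/statsmodels sketch is given below; the group names and absorbance values are invented for illustration.

```python
# Sketch of the tests named above: Student's t-test and one-way ANOVA
# followed by Tukey's post-hoc test, alpha = 0.05.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical absorbance readings from three groups (e.g., adhesion assay).
mock = np.array([0.52, 0.55, 0.49, 0.53, 0.51])
scrambled = np.array([0.50, 0.54, 0.52, 0.51, 0.53])
shrna_ad17 = np.array([0.31, 0.35, 0.29, 0.33, 0.30])

# Two-group comparison (Student's t-test).
t, p = stats.ttest_ind(scrambled, shrna_ad17)
print(f"t-test p = {p:.4f}")

# Three-group comparison: one-way ANOVA followed by Tukey's test.
f, p_anova = stats.f_oneway(mock, scrambled, shrna_ad17)
values = np.concatenate([mock, scrambled, shrna_ad17])
groups = ["mock"] * 5 + ["scrambled"] * 5 + ["shRNA_AD17"] * 5
print(f"ANOVA p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```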
SCC-9 cells overexpressing ADAM17 have higher sheddase activity on HB-EGF
Stable overexpression of ADAM17-HA in SCC-9 cells was confirmed by immunoblotting (Figure 1A). SCC-9 cells overexpressing GFP (control) were selected in parallel, and the expression was checked by fluorescence (data not shown). To show that the overexpressed ADAM17-HA is present in its mature form, we performed cell lysis in the presence of BB-2516 and 1,10-phenanthroline. The result in Figure 1B indicates the expression of the mature form of ADAM17-HA in SCC-9 cells in the presence of these inhibitors.
To evaluate the activity of the recombinant ADAM17-HA protein in cells overexpressing ADAM17, we transfected pcDNA-ADAM17-HA into HEK293 cells stably expressing an HB-EGF-AP construct, which allows detection of shed HB-EGF, a known ADAM17 target, in culture supernatants [18]. The cells were stimulated with PMA, and the results indicate increased shedding of HB-EGF in cells transfected with pcDNA-ADAM17-HA compared with empty vector, whether or not the cells were stimulated with PMA (n = 2, Student's t-test, DMSO: p = 0.0038, PMA: p = 0.0066). Immunoblotting performed as a control indicated equal levels of ADAM17-HA and total protein (Figure 1D).
SCC-9 cells overexpressing ADAM17 show higher cell viability, migration and adhesion
SCC-9 cells overexpressing ADAM17-HA were evaluated in viability, migration and adhesion assays. First, SCC-9 cells were seeded in 96-well plates and, after 7 days, cell viability was evaluated by MTT assay. SCC-9 cells overexpressing ADAM17-HA had increased viability compared with the control (Figure 2A, n = 3, Student's t-test, p = 0.0004). For the migration assay, SCC-9 cells overexpressing ADAM17-HA or GFP were seeded in 96-well transwell plates containing EGF in the lower chamber. After 24 h, migration to the lower chamber was measured by a colorimetric assay. SCC-9 cells overexpressing ADAM17-HA showed increased migration in the presence of EGF (Figure 2B, n = 3, Student's t-test, p = 0.0316).
In the adhesion assay, SCC-9 cells overexpressing ADAM17-HA or GFP were seeded in 96-well plates coated with Matrigel. After 1 h, cells were fixed and stained, and adhesion was measured by a colorimetric assay. As seen in Figure 2C, ADAM17-HA increased the adhesion of SCC-9 cells (n = 3, Student's t-test, p = 0.0001).
ADAM17 knockdown decreased adhesion and proliferation in A431 cells
To further validate these data in another cell line, we used the A431 carcinoma cell line silenced for ADAM17 expression in the adhesion assay. As shown in Figure 2D, knockdown of ADAM17 decreased the adhesion of A431 cells (n = 3, p < 0.0003, ANOVA followed by Tukey's test). We also performed a proliferation assay by measuring BrdU incorporation into DNA in the presence of 2% or 10% FBS and observed lower proliferation in ADAM17-knockdown A431 cells compared with the control cells (Figure 2E, n = 1, quintuplicate, p < 0.05, ANOVA followed by Tukey's test).
Figure 2 ADAM17 regulates cellular viability, migration, adhesion and proliferation. A: SCC-9 cells stably expressing ADAM17-HA or FLAG-GFP were seeded in 96-well plates. After 7 days, cell viability was measured by MTT assay. Three independent experiments were performed (n = 3, Student's t-test, p = 0.0004). B: SCC-9 cells stably expressing ADAM17-HA or FLAG-GFP were seeded in serum-free medium in the upper chamber of 96-well transwell plates. EGF at a concentration of 100 ng/ml was added in serum-free medium to the lower chamber (n = 3, Student's t-test, p = 0.0316). C: SCC-9 cells stably expressing ADAM17-HA or FLAG-GFP were seeded in Matrigel-coated 96-well plates. After 1 h, cells were stained and adhesion was measured (n = 3, Student's t-test, p = 0.0001). D: ADAM-17 knockdown decreased adhesion of A431 cells. A431/untreated (mock), A431/control (scrambled) and A431/shRNA ADAM-17 cells were seeded in Matrigel-coated 96-well plates. After 1 h, cells were stained and cell adhesion was measured (n = 3, distinct letters represent significant differences at p < 0.0003, ANOVA followed by Tukey's test). E: ADAM-17 knockdown decreased proliferation of A431 cells. The proliferation assay was performed in A431/control (scrambled) and A431/shRNA ADAM-17 cells by measuring BrdU incorporation into DNA in the presence of 2% or 10% FBS (n = 1, quintuplicate, distinct letters represent significant differences at p < 0.05, ANOVA followed by Tukey's test).
Tumors overexpressing ADAM17 have increased size and higher proliferative activity
An orthotopic murine tumor formation model using SCC-9 cells overexpressing ADAM17 or GFP was performed. After 20 days, tumors were excised and their size was measured. As seen in Figure 3A, tumors induced with SCC-9 cells overexpressing ADAM17 had increased size compared to those induced with SCC-9 GFP cells (n = 3, Student's t-test, p = 0.0467). Tumors induced with SCC-9 cells overexpressing ADAM17-HA (AD17-HA) also showed higher proliferative activity, as assessed by immunohistochemical expression of Ki-67, compared to those induced with SCC-9 GFP cells (n = 6, Student's t-test, p < 0.0001) (Figure 3B).
MS-based proteomics and biological network analysis indicate up-regulated proteins in the Erk pathway
After protein extraction from the tumors (n = 2) and trypsin digestion, mass spectrometry analysis was performed by LC-MS/MS, followed by protein identification with MaxQuant and analysis with the Perseus software. A total of 2,194 proteins were identified at a false discovery rate (FDR) of less than 1%. The normalized spectral protein intensity (LFQ intensity) given by the MaxQuant algorithm was converted into log2 values. Normal distribution was verified by a histogram after normalization (Additional file 1: Figure S1). Correlation analysis between all individual replicates resulted in R values of at least 0.93, indicating high reproducibility among the samples (Additional file 1: Figure S2). We next performed statistical analysis to explore global proteomic differences between tumors overexpressing ADAM17 and control tumor samples. In total, 200 proteins showed statistically significant differences in expression (Student's t-test, p < 0.05, Additional file 2: Table S1 and Additional file 3: Table S2). Among them, 110 proteins were down-regulated and 90 were up-regulated in tumor tissues overexpressing ADAM17. Hierarchical clustering of the significantly changing proteins was performed using Z-scores calculated on the log2 intensity values and is represented as a heat map (Figure 4A).
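A hedged sketch of this clustering step is shown below using seaborn instead of Perseus; the protein table is randomly generated and only illustrates row-wise Z-scoring followed by hierarchical clustering.

```python
# Sketch of the heat-map step described above: Z-score the log2 LFQ
# intensities of the significant proteins and draw a clustered heat map.
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
# Hypothetical log2 LFQ intensities for 20 significant proteins x 4 samples.
log2_lfq = pd.DataFrame(
    rng.normal(25, 2, size=(20, 4)),
    index=[f"protein_{i}" for i in range(20)],
    columns=["ctrl_1", "ctrl_2", "ad17_1", "ad17_2"],
)

# z_score=0 standardizes each row (protein) before hierarchical clustering.
grid = sns.clustermap(log2_lfq, z_score=0, cmap="RdBu_r", figsize=(5, 8))
grid.savefig("heatmap.png")
```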
To further explore the biological network of the identified proteins, we examined functional pathway enrichment of the differentially expressed proteins using Ingenuity Pathway Analysis (IPA) (Additional file 4: Table S3). Of the 200 query molecules, 199 were eligible for network analysis based on the IPA Knowledge Base criteria. The top two networks were merged to obtain a global view of the proteins that were differentially regulated between the tumor tissues, control and overexpressing ADAM-17 (Figure 4B). The global network contained 58 proteins from the input data out of 70. The network revealed protein interactions mainly in the context of cancer (p = 1.57E-06), solid tumor (p = 4.61E-06), carcinoma (p = 9.78E-06), proliferation (p = 2.00E-07) and cell death (p = 6.36E-05), encompassing more than 50% of the differentially regulated proteins present in the network (39, 34, 33, 32 and 28 proteins, respectively) (Table 1). Additional file 4: Table S3 shows all functions and diseases related to the genes in the network and the respective p-values given by Fisher's exact test. It is also interesting to observe that Erk, Erk 1/2, NF-κB and p38 MAPK represent the major hubs in the network, indicating disrupted pathways in which the overexpression of ADAM-17 might be involved.
Erk activation was validated by immunoblotting in tumor tissue overexpressing ADAM17 and in ADAM17-knockdown A431 cells
Immunoblotting showed that Erk phosphorylation was increased in tumors overexpressing ADAM17 (Figure 5A-B, n = 2, Fisher's exact test, p = 0.0034). To further validate these data, we used the A431 carcinoma cell line with ADAM17 knockdown (Figure 5C) to analyze the Erk phosphorylation state, which confirmed lower Erk phosphorylation levels (Figure 5D-E, n = 2, Fisher's exact test, p = 0.0001).
SCC-9 cells overexpressing ADAM17 induced EGFR phosphorylation
To investigate downstream EGFR activation in SCC-9 cells overexpressing ADAM17, the total expression of EGFR and its phosphorylation levels were evaluated. We first prepared membrane-enriched protein extracts and then performed immunoblotting. As shown in Figure 6, SCC-9 cells overexpressing ADAM17 have increased EGFR phosphorylation compared with SCC-9-GFP control cells.
ADAM17 expression increases collagenase activity
The collagenase activity of the tumors was also evaluated by zymography. After protein extraction from the tumors, SDS-PAGE was performed under non-reducing conditions and indicated higher collagenase activity for a ~100 kDa band in tumors overexpressing ADAM17 compared to the control (Figure 7A and B, n = 3, Student's t-test, p = 0.0285). In addition, we analyzed the collagenase activity in the secretome of A431 cells with ADAM17 knockdown and confirmed these results, observing lower collagenase activity in the ~100 kDa band compared to the scrambled shRNA cell line (Figure 7C-D, n = 3, Student's t-test, p = 0.0370).
Discussion
In this report we have provided novel evidence demonstrating that ADAM17 overexpression has a potential role in oral cancer development. To first address the concern of whether the overexpressed ADAM17-HA was active and functional, we validated the presence of the mature form of the recombinant protein (Figure 1B), which is detected in the presence of the inhibitors BB-2516 and 1,10-phenanthroline [17]. We were able to demonstrate an increase in HB-EGF shedding activity in cells overexpressing ADAM17 (Figure 1C). These results are in agreement with recent studies that used these methods to demonstrate the functionality and activity of ADAM17 [17,18]. In a second step, we examined the effect of ADAM17 overexpression in SCC-9 cells on events associated with oral cancer development. We demonstrated that SCC-9 cells overexpressing ADAM17 showed an increase in cellular viability, migration and adhesion in vitro (Figure 2). Our data further demonstrated, by silencing ADAM17 expression, a decrease in adhesion (Figure 2D) and proliferation (Figure 2E) in A431 cells. These events dictated by ADAM17 have previously been associated with other cancer cell lines [1,21-26].
Figure 4 Bioinformatic analysis of the ADAM17-regulated proteome. (A) Clustering of significantly up- and down-regulated proteins in tumor samples compared with control, Student's t-test, p < 0.05, obtained in the Perseus software. (B) The global interaction network by IPA consists of 56 (28%) of the 200 differentially expressed proteins, up-regulated proteins (red) and down-regulated proteins (green), plus additional interacting molecules that were not identified in this study (white). The two top biological networks generated by IPA were merged to obtain a global view. Major hubs in the network are highlighted in blue.
ADAMs have been associated with many types of cancer, including brain, gastric, breast, prostate and lung cancers [1]. Many models have been used to study the roles of ADAMs in cancer; for example, cells overexpressing ADAM12 were used in a tumor model in a previous study by Rocks et al. [27], but they failed to induce tumors. In another study, knockdown of ADAM15 decreased malignant properties of prostate cancer PC-3 cells, such as migration and adhesion [21]. Although some studies have also shown an important role of ADAM17 in head and neck cancer [28-33], none of them investigated the ADAM17-mediated signaling components that might be involved in oral cancer development.
Firstly, we demonstrated, in an orthotopic tumor model of oral cancer, that tumors overexpressing ADAM17 were larger and showed higher proliferative activity by immunohistochemical expression of Ki-67 (Figure 3A and B, respectively). Secondly, to further establish the significance of ADAM17 in this process, MS-based proteomics was used to map pathways regulated by ADAM17 in tumors induced by SCC-9 cells overexpressing this metalloproteinase. In total, 2,194 proteins were identified in the tumors by MS, and 200 proteins showed differential expression with a p-value < 0.05. Among the regulated pathways found by biological network analysis, the Erk signaling cascade was predicted to be regulated (Figure 4). As expected, Erk activation by phosphorylation was confirmed by immunoblotting (Figure 5). Erk is a key component of the MAP kinase cascade, which is triggered by growth factors, most of which are substrates of ADAM17 [1,3,22]. The signal transduction is mediated by a MAPK cascade, including Ras, Raf, MEK and Erk 1/2 [34,35]. In addition, the Erk pathway is a downstream signaling pathway of EGFR activation, transmitting proliferative signals from ligands that bind and activate EGFR [5,34]. Some of the ligands of EGFR are important shed substrates of ADAM17, such as HB-EGF and TGF-α [1,3]. ADAM17 is a major sheddase responsible for EGFR signaling [1,5]; EGFR is a widely studied oncogene in head and neck tumors and a potential therapeutic target for OSCC treatment [7,8,36-38]. In fact, the overexpression of ADAM17 in SCC-9 cells induced higher EGFR activation by phosphorylation (Figure 6).
The NF-κB pathway was also regulated by the overexpression of ADAM17, as shown in Figure 4B. The NF-κB pathway is known to regulate the immune response to infection and is also referred to as a survival pathway of the cell, negatively regulating the apoptotic process [39]. Several reports show that dysregulation of the NF-κB pathway is related to cancer and, recently, to oral cancer development [40]. Other studies have also shown that different inhibitors of the NF-κB pathway may be important to hinder tumor progression and to induce tumor cell death [41,42]. TNF-α, one of the main targets of ADAM-17, is known to initiate one of the signaling cascades that ultimately leads to NF-κB pathway activation, linking ADAM17 overexpression to NF-κB pathway regulation, as observed in our data [43]. Accordingly, our data provide additional support for the involvement of the NF-κB pathway in the development of oral cancer. The downstream effects of activation of the EGFR, Erk and NF-κB pathways include regulation of cell adhesion, cell cycle progression, cell migration, cell survival, differentiation, metabolism, proliferation and apoptosis, all of which are dysregulated in many cancer types [25,34,35,44-47]. These features were evidenced by the main function and disease annotations of the proteins associated with ADAM17 overexpression identified by MS (Table 1).
Figure 6 EGFR shows increased activation in SCC-9 cells overexpressing ADAM17 (AD17-HA). Immunoblotting showing increased EGFR phosphorylation in SCC-9 ADAM17-HA cells. Immunoblotting was performed using anti-EGFR, anti-phospho-EGFR and anti-GAPDH antibodies. Phosphorylation levels were calculated from band intensity using ImageJ software, and the GAPDH-normalized intensity values are presented under the blots.
Further, we also observed that protein extracts from tumors overexpressing ADAM17 showed increased collagenase activity (Figure 7). Some studies have demonstrated that ADAM17 up-regulates other metalloproteinases, such as MMP-9 [48,49], and, interestingly, this activation was shown to be mediated by the Erk pathway [50]. Despite the complexity of this signaling cascade, the increased collagenase activity found in the extracts of tumors overexpressing ADAM17 could also be associated with Erk phosphorylation, since the Erk pathway was also activated in our model.
Conclusion
In summary, our study shows that ADAM17 overexpression interferes with the biological processes associated with oral tumorigenesis and is able to promote an increase in tumor size and proliferation in an orthotopic murine model of oral cancer development. MS-based proteomics of tumors overexpressing ADAM17 indicated that the proteins modulated by ADAM17 are involved in the activation of the Erk signaling pathway, which was also evidenced by EGFR activation and higher collagenase activity in tumors overexpressing ADAM17. Taken together, our findings indicate potential proteins regulated by ADAM17 overexpression and demonstrate the potential role of ADAM17 in the development of oral cancer. Understanding the mechanism by which ADAM17 is associated with oral cancer progression will provide the basis for the development of novel and refined OSCC-targeting approaches.
Figure 7 Tumors induced by injection of SCC-9 cells overexpressing ADAM17 (AD17-HA) have increased collagenase activity. C = tumors induced with SCC-9 overexpressing FLAG-GFP. ADAM17 = tumors induced with SCC-9 overexpressing ADAM17-HA. A: Collagenase activity is increased in tumors from SCC-9 overexpressing ADAM17-HA. B: Collagenase activity levels of the ~100 kDa band in panel A were calculated from band intensity using ImageJ software (n = 3, Student's t-test, p = 0.0285). C: shRNA-ADAM17 A431 cell conditioned media have reduced collagenase activity. D: Collagenase activity levels of the ~100 kDa band in panel C were calculated from band intensity using ImageJ software (n = 2, Student's t-test, p = 0.0370).
| 7,726.6 | 2014-02-05T00:00:00.000 | [ "Biology", "Medicine" ] |
Maintenance of aversive memories shown by fear extinction-impaired phenotypes is associated with increased activity in the amygdaloid-prefrontal circuit
Although aversive memory has mainly been addressed by analysing changes occurring in average populations, the study of the neuronal mechanisms of outliers allows an understanding of the involvement of individual differences in fear conditioning and extinction. We recently developed an innovative experimental model of individual differences in approach and avoidance behaviors, classifying mice as Approaching, Balancing or Avoiding animals according to their responses to conflicting stimuli. Approach and avoidance behaviors appear to be the primary reactions to rewarding and threatening stimuli and may represent predictors of vulnerability (or resilience) to fear. We submitted the three mouse phenotypes to Contextual Fear Conditioning. In comparison to Balancing animals, Approaching and Avoiding mice exhibited no middle- or long-term fear extinction. The two non-extinguishing phenotypes exhibited potentiated glutamatergic neurotransmission (spontaneous excitatory postsynaptic currents/spinogenesis) in pyramidal neurons of the medial prefrontal cortex and basolateral amygdala. Based on the a priori identification of outliers, we demonstrated that the maintenance of aversive memories is linked to increased spinogenesis and excitatory signaling in the amygdala-prefrontal cortex fear matrix.
Results
Analyses of middle-term extinction processes. CFC. Fifteen animals (5 mice/phenotype) (Fig. 1A) were submitted to CFC with repetitive sessions on day 0 (Conditioning), day 1 (Retrieval), and days 2, 3, 7 and 14 (Extinction). AV, BA and AP animals showed similar responses in the Conditioning phase (AV vs. BA: P = 0.97; AV vs. AP: P = 0.76; BA vs. AP: P = 0.75) and similar consolidation of aversive memories in the Retrieval phase (AV vs. BA: P = 0.59; AV vs. AP: P = 0.14; BA vs. AP: P = 0.30). BA mice progressively extinguished fear memories over time and, from day 7, their freezing times returned to levels similar to those of the Conditioning phase. Interestingly, no extinction of fear memories was observed in the AV and AP phenotypes. On day 14 the freezing times of AV (P = 0.38) and AP (P = 0.20) mice did not significantly differ from those of the Retrieval phase. Both AV and AP phenotypes showed freezing times that were similar to each other during the entire task and higher than those of BA animals on days 7 and 14. A two-way ANOVA (phenotype × day) of freezing times revealed significant phenotype (F2,12 = 10.89; P = 0.0021) and day (F5,60 = 18.67; P < 0.0001) effects. The interaction was also significant (F10,60 = 2.98; P = 0.004). Significant post-hoc comparisons on the interaction are shown in Fig. 1B.
Electrophysiological results. Spontaneous excitatory postsynaptic currents (sEPSCs) in mPFC pyramidal neurons were recorded in the three phenotypes and were completely abolished when an AMPA/kainate receptor antagonist (6-cyano-7-nitroquinoxaline-2,3-dione, CNQX) and an NMDA receptor antagonist (2-amino-5-phosphonovaleric acid, AP-5) were added to the artificial cerebrospinal fluid (ACSF) solution, confirming that the synaptic events were mediated by ionotropic glutamatergic receptors (Fig. 2A). Analysis of the sEPSCs demonstrated that the mean inter-event interval was significantly shorter in neurons from AV and AP mice than in those from BA mice, indicating that neurons from AV and AP animals exhibited an increased sEPSC frequency in comparison to neurons from BA animals, as shown by post-hoc comparisons on a one-way ANOVA of frequency values (F2,12 = 5.25; P = 0.02). The sEPSC frequencies exhibited by AV and AP mice were not significantly different from each other (P = 0.06) (Fig. 2B). Conversely, a one-way ANOVA of sEPSC amplitude failed to reveal a significant phenotype effect (F2,12 = 0.16; P = 0.85) (Fig. 2C). Furthermore, analyses of current kinetics (Fig. 2D) showed that the rise time (F2,12 = 1.14; P = 0.35) and decay time (F2,12 = 1.31; P = 0.31) were not significantly different among neurons from AV, AP and BA mice.
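As a rough illustration of the factorial structure of this analysis, the sketch below fits a phenotype × day ANOVA with statsmodels on simulated freezing times; note that the actual design is repeated-measures (each mouse is tested on every day), which this simplified ordinary-least-squares version does not model.

```python
# Simplified sketch of a phenotype x day two-way ANOVA on freezing times.
# The freezing values are randomly generated, not the paper's data, and the
# within-subject correlation of the repeated-measures design is ignored.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
phenotypes = ["AV", "BA", "AP"]
days = [0, 1, 2, 3, 7, 14]
rows = [
    {"phenotype": p, "day": d, "freezing": rng.normal(60, 10)}
    for p in phenotypes for d in days for _ in range(5)  # 5 mice/phenotype
]
data = pd.DataFrame(rows)

model = ols("freezing ~ C(phenotype) * C(day)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```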
Morphological reconstruction of pyramidal mPFC neurons. All recorded neurons filled with biocytin in fluorescent Nissl-stained slices were located in layer II-III of the mPFC area and exhibited the morphological features of cortical pyramidal neurons, as described in the Methods (Fig. 3).
Results of analyses of long-term extinction processes. CFC. Fifteen animals (5 mice/phenotype) were submitted to a CFC protocol with a long-term extinction phase (days 2, 3, 7, 14, 21, 28 and 60). Once again, AV, BA and AP animals showed similar responses in the Conditioning phase (AV vs. BA: P = 0.83; AV vs. AP: P = 0.79; BA vs. AP: P = 0.94) and similar consolidation of aversive memories in the Retrieval phase (AV vs. BA: P = 0.52; AV vs. AP: P = 0.94; BA vs. AP: P = 0.57). BA mice progressively extinguished fear memories over time. In fact, from day 7 to day 60, BA animals exhibited freezing times that were similar to those they had displayed during the Conditioning phase. Conversely, even at day 60, AV and AP animals had not extinguished fear memories. Namely, at day 60 the freezing times of AV (P = 0.46) and AP (P = 0.87) mice did not significantly differ from those of the Retrieval phase. During the entire task, both AV and AP phenotypes showed freezing times that were similar to each other and significantly higher than those of BA animals from day 7 to day 60. A two-way ANOVA (phenotype × day) of freezing times revealed significant phenotype (F2,12 = 17.36; P = 0.0003) and day (F8,96 = 12.13; P < 0.0001) effects. The interaction was also significant (F16,96 = 2.85; P = 0.0008). Significant post-hoc comparisons on the interaction are shown in Fig. 4.
Morphological results. AV, BA and AP phenotypes exhibited different spine densities and similar dendritic branching of the apical and basal arborizations of BLA (Fig. 5) and IL/PL mPFC (Fig. 6) pyramidal neurons.
BLA. As for the apical dendrites, post-hoc comparisons on the significant ANOVA of spine density (F2,12 = 66.35; P < 0.0001) indicated that AV animals had the highest spine density in comparison to AP (P = 0.001) and BA animals (P = 0.0002). AP mice also had a spine density that was higher (P = 0.0002) than that of BA animals. ANOVAs of dendritic length (F2,12 = 0.41; P = 0.67) and nodes (F2,12 = 0.52; P = 0.61) did not reveal a significant difference among phenotypes.
As for the basal dendrites, post-hoc comparisons on the significant ANOVA of spine density (F2,12 = 49.85; P < 0.0001) indicated that AV animals showed the highest spine density in comparison to AP (P = 0.002) and BA animals (P = 0.0002). AP animals also had a spine density that was higher (P = 0.0002) than that of BA animals. ANOVAs of dendritic length (F2,12 = 0.26; P = 0.77) and nodes (F2,27 = 0.43; P = 0.66) did not reveal a significant effect.
mPFC. As for the apical dendrites, post-hoc comparisons on the significant ANOVA of spine density (F2,12 = 73.97; P < 0.0001) indicated that AV animals had the highest spine density in comparison to AP and BA animals.
As for the basal dendrites, post-hoc comparisons on the significant ANOVA of spine density (F2,121 = 59.63; P < 0.0001) indicated that AV animals had the highest spine density in comparison to AP (P = 0.0005) and BA animals (P = 0.0002). AP animals also had a spine density that was higher (P = 0.0002) than that of BA animals. ANOVAs of dendritic length (F2,12 = 0.18; P = 0.83) and nodes (F2,12 = 1.14; P = 0.35) did not reveal a significant effect.
Freezing times of AV, BA and AP phenotypes were similar in the Conditioning and Retrieval phases, and they significantly increased between the Conditioning and Retrieval phases (## at least P = 0.001). On day 14 the freezing times of AV and AP mice still did not significantly differ from those of the Retrieval phase and were significantly higher than those of BA animals on days 7 and 14 (* at least P = 0.03). From day 7 on, the freezing times of BA mice were similar to those of the Conditioning phase. Data are presented as means ± SEM.
Discussion
The neurobiology of fear memory and extinction has typically been studied by analyzing changes that occur in average populations 18. However, studying the neuronal mechanisms that characterize outliers allows us to investigate trauma-related diseases 27. Following this approach, we investigated fear extinction and its neuronal correlates in outlier animals characterized by approach and avoidance behaviors that we have previously shown to be associated with different tuning of CB1 signaling in the amygdala 11.
Although the AV, BA and AP phenotypes exhibited similar behaviors in the Conditioning and Retrieval phases, only BA mice reduced their freezing times over time. Indeed, no fear extinction was observed in the AV and AP groups, even at day 60 of CFC. In parallel, AV and AP mice exhibited increased excitatory neurotransmission in mPFC pyramidal neurons in comparison to BA animals. The higher sEPSC frequency was not associated with differences in sEPSC amplitude or kinetic properties (rise and decay times). Such an electrophysiological pattern indicates increased glutamate release at the presynaptic level and no change in the postsynaptic features of the pyramidal neurons. Thus, the modified glutamatergic neurotransmission in the pyramidal neurons of the AV and AP phenotypes might be related to increased excitatory afferents reaching the mPFC. This finding is consistent with the high spine density of mPFC layer II-III pyramidal neurons of AV and AP mice. Similarly, enhanced spinogenesis was observed in BLA pyramidal neurons. Notably, it has been demonstrated that in mouse cortical layer II-III pyramidal neurons, glutamatergic signals trigger the growth of new spines that express glutamatergic receptors and are rapidly functional 28. Thus, the lack of fear extinction shown by the outlier AV and AP mice was associated with aberrant neuronal activation in the amygdaloid-prefrontal circuit, which allows us to propose a possible neuronal substrate linked to the rigid maintenance of aversive memories.
Freezing times of AV, BA and AP phenotypes were similar in the Conditioning and Retrieval phases, and they significantly increased between the Conditioning and Retrieval phases (### at least P = 0.0006). On day 60 the freezing times of AV and AP mice still did not significantly differ from those of the Retrieval phase and were significantly higher than those of BA animals from day 7 to day 60 (** at least P = 0.01). From day 7 on, the freezing times of BA mice were similar to those of the Conditioning phase. Data are presented as means ± SEM.
The basic scheme for fear conditioning, reconsolidation, and extinction (Fig. 7, in particular 7C) proposed by LeDoux 29, computationally modeled by Anastasio 30, and recently reviewed by Tovote et al. 31, assumes that thalamic and cortical glutamatergic projections conveying the signals of the CS (e.g., context) associated with the US (e.g., pain) converge on pyramidal neurons and GABAergic interneurons in the BLA. Information is then relayed to the medial part of the amygdaloid central nucleus (CEm), which in turn projects to the PAG and pre-motor structures that mediate the CR of fear. BLA pyramidal neurons can also relay signals to the medial ITC (ITCm), which in turn inhibits the lateral part of the amygdaloid central nucleus (CEl) projecting to the CEm. However, BLA pyramidal neurons can also excite the lateral ITC (ITCl), which inhibits the ITCm that in turn inhibits the CEl. This network elicits the disinhibition of the CEl and the activation of the CEm. The pyramidal neurons of layers II-III and V of the IL/PL subregions of the mPFC project not only to the BLA but also directly to the ITCl and ITCm 16,32. Accordingly, the activation of lateral amygdala pyramidal neurons drives the formation of fear memories 33, while BLA lesions disrupt the fear CR 17.
Notably, BLA pyramidal neurons arborize within layers II-III of the mPFC 32. This unique anatomy enables bidirectional communication between the pyramidal neurons of the BLA and those of the mPFC. During fear reconsolidation, the enhanced BLA activity triggers the activity of PL pyramidal neurons that synapse within the BLA, thus modulating the expression of fear responses 34. Conversely, once extinction is reached, PL activity is inhibited 35, whereas IL activity is stimulated 36,37. The IL pyramidal neurons involved in the fear response after extinction can suppress CEm responses via BLA GABAergic interneurons and the ITC 37,38. However, experimental studies of the involvement of IL in extinction have yielded inconsistent results: IL lesions fail to impair extinction recall 39, and optogenetic silencing of IL has no effect 40 or even facilitates extinction 41. Interestingly, optogenetic activation of IL has no effect on fear expression that has not yet been extinguished, suggesting that IL activation alone is not sufficient to suppress fear expression 41. The findings of the current study concur with previous findings that CS-evoked IL firing is greater in rodents that fail to acquire extinction than in rodents that do 42. Moreover, inactivation of the mPFC immediately before extinction facilitates rather than impairs extinction 43.
In the BLA, GABAergic interneurons (which inhibit glutamatergic pyramidal neurons) are the only cells to contain CB1 receptors, which mediate retrograde signaling and depolarization-induced suppression of inhibition. Thus, amygdaloid CB1 receptor activation efficiently inhibits GABA release, controlling the efficacy of its own synaptic input in an activity-dependent manner 19. As we previously demonstrated 11, AV and AP animals have increased CB1 density and functionality in the amygdala. Remarkably, greater CB1 availability in the amygdala is associated with increased severity of trauma-related psychopathology, suggesting a key role for compromised endocannabinoid function in the endophenotypic and phenotypic expression of threat symptomatology in humans 21. Notably, mutant mice lacking CB1 receptors on GABAergic neurons emit mainly active fear responses (escape attempts and risk assessment), in contrast to mice lacking CB1 receptors on glutamatergic neurons, which emit only passive fear responses (freezing) during the extinction process 44. Although the current study could not directly test the link between CB1 receptor expression and its behavioral and neurobiological correlates, considering the ensemble of our studies of this model as a whole 10-13, it can be hypothesized that in AV and AP mice the disinhibition of BLA pyramidal neurons is potentiated (Fig. 7D). Because some BLA pyramidal neurons transfer the conditioned fear signals from the intra-amygdaloid circuit to the mPFC 45, it can reasonably be supposed that in the AV and AP phenotypes the increased excitatory activity of BLA pyramidal neurons is associated with increased output to mPFC pyramidal neurons. Indeed, increases in sEPSC frequency and spine density in layer II-III mPFC pyramidal neurons were found only in the AV and AP animals. The mPFC pyramidal neurons in turn may send enhanced glutamatergic output to BLA neurons and thereby increase spinogenesis of BLA neurons, as found in AV and AP mice. We propose that the impaired fear extinction of AV and AP mice is associated with the increased disinhibition of BLA pyramidal neurons.
The lack of extinction, the increase in excitatory neurotransmission, and the increase in spinogenesis in BLA and mPFC pyramidal neurons of AV and AP mice are highly reminiscent of sensitization by stressful encounters, which favors fear responses, sustained neuronal hyperexcitability, and increased dendritic branching and spinogenesis in BLA pyramidal neurons 46,47. Stress-induced extinction deficits are associated with dysmorphic mPFC pyramidal neurons, decreased NMDA receptor expression, and altered cue-evoked IL firing 48,49. Thus, extinction-impaired AV and AP mice exhibit modifications in the same BLA-mPFC loop that is made dysfunctional by stress, thereby shifting the balance of the fear matrix to favor the pro-fear pole over the pro-extinction pole 45. Considered as a whole, our behavioral, electrophysiological and morphological findings demonstrate that the impaired fear extinction of AV and AP animals is related to increased activation of the BLA-mPFC network. Such increased excitatory activity could be linked to a form of entrapment in the retrieval/reconsolidation process.
Because all same-sex members of inbred strains are genetically identical, the individual differences we observed must reflect allelic and functional differences that are probably controlled by epigenetic factors, such as maternal care or social hierarchy 50. Based on the a priori identification of phenotypes (independently of fear conditioning testing), we suggest that the inability to extinguish fear memories is associated with specific neuronal encoding in the fear matrix. Interestingly, healthy monozygotic twins of human patients affected by anxiety-related disorders exhibit fear-circuit functional alterations that represent a risk factor for psychopathology 51. Hyperactivation of the amygdala in relation to attentional bias to threat has been found in individuals with stress- and anxiety-related disorders 27,52. Furthermore, a less-functional endocannabinoid-related gene variation is associated with reduced risk for PTSD and rapid amygdala habituation to threatening stimuli 53,54. Collectively, our data suggest that in avoiding or approaching mice, the improper inhibition of fear is associated with increased spinogenesis and excitatory signaling in the BLA and mPFC. Exposure to risk factors in individuals already at risk owing to their genetic and neurobiological constitution is likely to induce anxiety- or stress-related behaviors. On a brighter note, given that impaired fear extinction may be reversible 55, investigating its neural mechanisms may open important opportunities for drug discovery and for the development of next-generation therapies (diet or exposure-based therapy) that could reverse impairments in extinction.
Methods
Experimental procedure. The methods were carried out in accordance with the approved guidelines.
Male C57BL/6JOlaHsd mice (n = 169; approximately 40 days old at study onset) (Harlan, Italy) were used. All animals were housed 4 per cage, with food (Mucedola, Italy) and water ad libitum. The mice were kept under a 12-h light/dark cycle, controlled temperature (22-23 °C), and constant humidity (60 ± 5%). All efforts were made to minimize animal suffering and reduce the number of animals used, per the European Directive (2010/63/EU). All procedures were approved by the Italian Ministry of Health.
Based on extreme or mean values of the distribution of behavioral responses, we selected AV (n = 10), BA (n = 10) and AP (n = 10) animals by testing 169 mice in the Approach/Avoidance (A/A) Conflict Task 10-12. After two weeks, the AV, BA and AP animals were submitted to CFC with repeated extinction sessions (days 1, 2, 3, 7, 14, 21, 28 and 60).
On day 14 of CFC, half of the mice (5 mice/phenotype) were sacrificed to record sEPSCs from mPFC pyramidal neurons by whole-cell patch-clamp electrophysiological recordings. At the end of CFC (day 60), the remaining animals were sacrificed for morphological analyses of pyramidal neurons in the BLA and mPFC (PL-IL) using the Golgi-Cox technique. The timing of this dual methodological approach was designed to investigate the plastic properties of cortical and amygdaloid pyramidal neurons in relation to middle- and long-term fear extinction.
Behavioral analyses. A/A Conflict Task. The test has been previously described 10-12. The apparatus consisted of a Plexiglas Y-maze with a starting gray arm from which two choice arms (8 cm wide, 30 cm long and 15 cm high) stemmed, arranged at an angle of 90° to each other. A T-guillotine door was placed at the end of the starting arm to prevent backward movements of the animal. An arm entry was defined as all four legs entering one of the arms. The two choice arms differed in both color and brightness. One of the two arms had a black and opaque floor and walls and no light inside, while the other had a white floor and walls and was lit by a 16-W neon lamp. Notably, the colored "furniture" and the neon lamp were exchangeable between arms to alternate the spatial positions of the white and black arms. The apparatus was placed in a room that was dimly lit by a red light (40 W) and was always cleaned thoroughly with 70% ethanol and dried after each trial to remove scent cues. At the end of each choice arm, there was a blue, chemically inert tube cap (3 cm in diameter, 1 cm deep) that was used as a food tray. The depth of the tray prevented mice from seeing the reward at a distance but allowed easy access to it (i.e., eating as well as appreciation of the reward scent, preserving the olfactory cues).
Because appetites for palatable foods have to be learned, a week before testing the animals were exposed in their home cages for three days to a novel palatable food (Fonzies, KP Snack Foods, Munchen, Germany) 56. Fonzies (8% protein, 33% fat and 53% carbohydrate, for a caloric value of 541 kcal/100 g) consisted of corn flour, hydrogenated vegetable fat, cheese powder and salt.
At the beginning of behavioral testing, mice were subjected to a 1-day habituation phase in which the Y-maze arms were opened to encourage maze exploration. During habituation, no food was present in the apparatus. At the end of this phase and during subsequent testing, to increase the motivation to search for the reward, the animals were slightly food deprived by limiting food access to 12 hours/day. This regimen resulted in no significant body weight loss, as indicated by body weight measurements performed at the end of the habituation phase and before testing.
The testing phase started 24 h after the habituation phase at 8 a.m. and consisted of two 10-trial sessions. In Session 1 (S1), the slightly food-deprived animal was placed in the starting stem and could choose to enter one of the two arms, both containing the same standard food reward. After eating, the animal was allowed to rest in its cage for a 1-min inter-trial interval. At the end of each trial, the reward was replaced. The spatial position of each arm (black and dark, or white and lighted) was side-balanced during the whole test to exclude any side preference.
During Session 2 (S2), which started 24 h after S1, the white arm was rewarded with the highly palatable food (Fonzies), while the black arm remained rewarded with the standard food. Notably, the A/A test required the animal to choose between two conflicting drives: reaching a palatable reward placed in an aversive (white and lighted) environment, or reaching a standard food placed in a reassuring (black and dark) environment.
The A/A conflict index represents the difference in the number of white-arm choices between S1 and S2. Given that this index was normally distributed (mean = +1, SD = ±1.7), it allowed us to identify three categories of mice: AV, BA and AP animals. In particular, BA animals reacted with balanced responses between approach and avoidance behaviors, and their values of the A/A conflict index corresponded to the mean of the distribution. The two opposite tails of the distribution curve represented the few subjects that exhibited responses unbalanced toward one of the conflicting inputs: AV animals had A/A conflict index values corresponding to two standard deviations below the mean, while AP animals had values corresponding to two standard deviations above the mean. This test was implemented only to select the animal phenotypes (Fig. 1A).
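A minimal sketch of this selection rule, assuming the distribution parameters reported above (mean ≈ +1, SD ≈ 1.7), is given below; it is not the authors' code, and the example indices are hypothetical.

```python
# Sketch of the phenotype selection rule: classify each mouse from its A/A
# conflict index (white-arm choices in S2 minus S1) using mean +/- 2 SD cut-offs.
def classify_phenotype(index: float, mean: float = 1.0, sd: float = 1.7) -> str:
    """Return 'AV', 'BA' or 'AP' for a given A/A conflict index value."""
    if index <= mean - 2 * sd:
        return "AV"   # avoiding: strongly below the mean
    if index >= mean + 2 * sd:
        return "AP"   # approaching: strongly above the mean
    return "BA"       # balancing: around the mean

# Hypothetical indices for three animals:
print([classify_phenotype(i) for i in (-3, 1, 5)])  # ['AV', 'BA', 'AP']
```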
CFC. The test has been previously described 25 and its details are provided in the Supplementary Information. Briefly, during the Conditioning phase, a mouse was allowed to explore the conditioning chamber (Ugo Basile, Italy) for 3 min. Subsequently, three foot-shocks (0.5 mA, 2.0 s, 1-min inter-shock interval), which represented the US, were delivered. One minute after the third foot-shock, the animal was returned to its home cage.
After 24 h, during the Retrieval phase, and on testing days 2, 3, 7, 14, 21, 28 and 60 (i.e., the Extinction phase), the mouse was placed again for 6 min into the conditioning chamber, which represented the CS. No shocks were delivered during the Retrieval and Extinction phases. Notably, during the Retrieval phase, re-exposure to the CS previously paired with the aversive US induced a strong freezing response that progressively declined owing to extinction processes 29. Freezing times during the first 3 min of each phase were compared among phenotypes.
Slice preparation and electrophysiological recordings. The electrophysiological protocols have been previously described 57 and details are provided in the Supplementary Information. Briefly, on CFC day 14, the brains of 15 mice (5 mice/phenotype) were cut into 250 μm coronal sections, incubated in oxygenated ACSF, transferred to a recording chamber and submerged in continuously flowing oxygenated ACSF for electrophysiological recordings.
To study the sEPSCs using a Cs-methanesulfonate internal solution, whole-cell patch-clamp recordings in voltage-clamp mode (holding potential −70 mV) were performed from mPFC pyramidal neurons, visually identified in slices using an upright infrared microscope (Axioskop 2 FS, Zeiss, Germany), a 40× water-immersion objective (Achroplan, Zeiss, Germany), and a CCD camera (Cool Snap, Photometrics, AZ, USA).
In some experiments the AMPA/Kainate receptor antagonist CNQX (10 μ M) and the NMDA receptor antagonist AP-5 (50 μ M) were added to the ACSF. In this condition all synaptic events were blocked, indicating that they were due to glutamatergic receptor activation 57 .
Spontaneous synaptic events were analyzed off-line using the Mini Analysis Program (Synaptosoft Inc., USA). sEPSCs were manually detected using a 10 pA threshold crossing algorithm. The inter-event interval, event amplitude and kinetic parameters (rise and decay times) were compared among phenotypes.
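To make the detection step concrete, here is a hedged sketch (not the Mini Analysis Program's algorithm) of threshold-based event detection on a voltage-clamp trace; the synthetic data, sampling rate and helper names are assumptions, while the 10 pA threshold mirrors the description above.

```python
import numpy as np

def detect_sepscs(current_pa, fs, threshold_pa=10.0, min_separation_s=0.005):
    """Return event indices, amplitudes (pA) and inter-event intervals (s)."""
    baseline = np.median(current_pa)
    below = current_pa < (baseline - threshold_pa)       # inward currents are negative
    onsets = np.flatnonzero(np.diff(below.astype(int)) == 1) + 1
    # enforce a minimal separation so one event is not counted twice
    events, last = [], -np.inf
    for i in onsets:
        if (i - last) / fs >= min_separation_s:
            events.append(i)
            last = i
    events = np.asarray(events, dtype=int)
    amplitudes = baseline - np.array([current_pa[i:i + int(0.01 * fs)].min() for i in events])
    ieis = np.diff(events) / fs
    return events, amplitudes, ieis

# Synthetic example: baseline noise plus three exponentially decaying events.
rng = np.random.default_rng(1)
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
trace = rng.normal(0.0, 1.5, t.size)
for t0, amp in [(0.2, 25.0), (0.45, 18.0), (0.8, 30.0)]:
    mask = t >= t0
    trace[mask] -= amp * np.exp(-(t[mask] - t0) / 0.003)
idx, amps, ieis = detect_sepscs(trace, fs)
print(len(idx), np.round(amps, 1), np.round(ieis, 3))
```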
Labeling of Recorded Cells. The cell labeling protocol has been previously described 57 and details are provided in the Supplementary Information. Briefly, to morphologically identify the recorded cells, some neurons were filled with 2% biocytin (Sigma-Aldrich, Saint Louis, MO, USA) through the recording pipette. Immediately after recording, slices containing biocytin-loaded cells were incubated with Cy2-conjugated Streptavidin (1:200; Jackson Immunoresearch Laboratories, PA, USA). To assess the cytoarchitectonic areas and layers, slices were counterstained with NeuroTrace ® 640⁄660 deep-red Fluorescent Nissl Stain (Invitrogen) and mounted using an anti-fade medium (Fluoromount; Sigma-Aldrich, Saint Louis, MO, USA). Neurons of interest were identified using a 10× objective (Plan-Apochromat, Zeiss, Germany) and captured on a 40X oil immersion objective through a confocal laser-scanning microscope (CLSM700; Zeiss, Germany).
Morphological analyses. Golgi-Cox technique.
Morphological analyses have been previously described 58, and details are provided in the Supplementary Information. Briefly, at the end of the CFC (day 60), the brains of 15 mice (5 mice/phenotype) were processed by the Golgi-Cox technique to analyze neuronal morphology (dendritic length and nodes, spine density) in 200 μm coronal sections at the level of the BLA (from −0.70 to −2.06 mm relative to bregma) and mPFC (PL cortex: from +2.96 to +1.42 mm; IL cortex: from +2.10 to +1.34 mm) 59. The coronal sections were stained according to the method described by Gibb and Kolb 60. The stained sections were analyzed using a light microscope (Axioskop 2, Zeiss, Germany) with a 100× oil-immersion objective lens. A researcher unaware of the specimen phenotype performed the morphological analyses using Neurolucida v11 (MicroBrightField, VT, USA) software to reconstruct dendritic arbors.
Morphological features of mPFC -BLA pyramidal neurons. mPFC pyramidal neurons were identified by the presence of: a typical conic soma; a distinct single apical dendrite arising from the vertex of the soma, coursing toward the pial surface and entering molecular layer I; several basilar dendrites arising from the base of the soma, thinner and shorter than those of the apical arborization; and dendritic spines along both apical and basal arborizations. According to these criteria, 30 pyramidal neurons (2 neurons per animal for a total of 10 neurons per phenotype) belonging to layer II-III of IL/PL mPFC were selected.
BLA "pyramidal" neurons encompass a broad, continuous morphological spectrum, from neurons that are virtually identical to cortical pyramidal neurons to neurons that closely resemble cortical spiny stellate cells. Furthermore, bitufted/bipolar cells are present. According to these criteria, 30 pyramidal neurons (2 neurons per animal for a total of 10 neurons per phenotype) belonging to the BLA were selected.
In each selected neuron, the apical and basal dendritic trees were separately examined using Sholl Analysis. The parameters analyzed were: dendritic length (in μ m), calculated by summing the length of all processes passing through each shell; dendritic nodes, calculated by summing all points from which dendritic branches arose; terminal spine density (spine density/25 μ m), calculated by measuring a 25 μ m length of the dendritic terminal and counting the number of spines along the segment.
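A toy sketch of how the three parameters above can be computed from a minimal segment-based representation of a reconstructed arbor; the data structure and the numbers are illustrative assumptions rather than Neurolucida output.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Segment:
    length_um: float                      # length of this dendritic process
    is_node: bool                         # does the segment end in a branch point?
    terminal_spines: Optional[int] = None # spines on the last 25 µm, if terminal

def summarise(segments):
    total_length = sum(s.length_um for s in segments)
    nodes = sum(1 for s in segments if s.is_node)
    terminal_counts = [s.terminal_spines for s in segments if s.terminal_spines is not None]
    # spine density expressed, as in the text, per 25 µm of terminal dendrite
    spine_density_per_25um = sum(terminal_counts) / len(terminal_counts)
    return total_length, nodes, spine_density_per_25um

apical = [Segment(80, True), Segment(40, True), Segment(35, False, 22),
          Segment(30, False, 18), Segment(25, False, 20)]
print(summarise(apical))  # (210 µm, 2 nodes, 20 spines / 25 µm)
```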
Statistical analyses. All data were tested for normality (Shapiro–Wilk test) and homoscedasticity (Levene's test). Statistical analyses were applied on a per-mouse basis. Behavioral and morphological data were compared using ANOVAs, followed by Newman–Keuls tests when appropriate. Electrophysiological data were compared by using ANOVAs and Kolmogorov–Smirnov (K-S) tests. Values of P < 0.05 were considered statistically significant.
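A hedged sketch of this pipeline using SciPy is given below; the group means, sample sizes and variable names are invented, and the Newman–Keuls post-hoc test is omitted because it is not part of SciPy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
freezing = {ph: rng.normal(loc, 8, size=5) for ph, loc in
            [("AV", 70.0), ("BA", 55.0), ("AP", 40.0)]}

for ph, values in freezing.items():
    print(ph, "Shapiro-Wilk p =", round(stats.shapiro(values).pvalue, 3))

print("Levene p =", round(stats.levene(*freezing.values()).pvalue, 3))
print("ANOVA:", stats.f_oneway(*freezing.values()))

# K-S comparison of two invented sEPSC inter-event-interval samples (seconds)
iei_av = rng.exponential(0.20, size=200)
iei_ap = rng.exponential(0.12, size=200)
print("K-S:", stats.ks_2samp(iei_av, iei_ap))
```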
"Biology",
"Psychology"
] |
Some phenomena in tautological rings of manifolds
We prove several basic ring-theoretic results about tautological rings of manifolds W, that is, the rings of generalised Miller--Morita--Mumford classes for fibre bundles with fibre W. Firstly we provide conditions on the rational cohomology of W which ensure that its tautological ring is finitely-generated, and we show that these conditions cannot be completely relaxed by giving an example of a tautological ring which fails to be finitely-generated in quite a strong sense. Secondly, we provide conditions on torus actions on W which ensure that the rank of the torus gives a lower bound for the Krull dimension of the tautological ring of W. Lastly, we give extensive computations in the tautological rings of CP^2 and S^2 x S^2.
1. Introduction
1.1. Recollections on tautological rings. A smooth fibre bundle π : E → B with closed d-dimensional fibre W equipped with an orientation of the vertical tangent bundle T_π E has characteristic classes defined as follows. For each characteristic class c ∈ H^k(BSO(d)) of oriented d-dimensional vector bundles, we may form κ_c(π) := π_!(c(T_π E)) ∈ H^{k−d}(B), the generalised Miller–Morita–Mumford class (or κ-class) associated to c, by evaluating c on the vector bundle T_π E and integrating the result along the fibres of the map π. This construction may in particular be applied to the universal such fibre bundle, whose base space is the classifying space BDiff^+(W) of the topological group of orientation-preserving diffeomorphisms of W, to give universal characteristic classes κ_c ∈ H^*(BDiff^+(W)). If c has degree d then κ_c is a degree zero cohomology class, and may be identified with the characteristic number ∫_W c(TW) of W.
If we work in cohomology with rational coefficients then H * (BSO(d); Q) is generated by the Pontrjagin and Euler classes, and in this case we define the tautological ring R * (W ) ⊂ H * (BDiff + (W ); Q) to be the subring generated by all classes κ c . Our goal is to describe some quantitative and qualitative properties of these rings, for certain manifolds W . Before doing so, we introduce some variants. The topological group Diff + (W, * ) of diffeomorphisms of W which fix a marked point * ∈ W has a homomorphism to GL + d (R) by sending a diffeomorphism ϕ to its differential Dϕ * at the marked point. On classifying spaces this gives a map s : BDiff + (W, * ) −→ BGL + d (R) BSO(d) and for each c ∈ H * (BSO(d); Q) we may also form s * c ∈ H * (BDiff + (W, * ); Q). We let the tautological ring fixing a point R * (W, * ) ⊂ H * (BDiff + (W, * ); Q) be the subring generated by all the classes κ c and s * c.
Finally, if BDiff + (W, D d ) is the classifying space of the group of diffeomorphisms of W which are the identity near a marked disc D d ⊂ W , then we let the tautological ring fixing a disc R * (W, D d ) ⊂ H * (BDiff + (W, D d ); Q) be the subring generated by all the classes κ c . The inclusions of diffeomorphism groups therefore yield Q-algebra homomorphisms R * (W ) −→ R * (W, * ) −→ R * (W, D d ) whose composition is surjective.
These rings have been studied by Grigoriev [Gri17], and by Galatius, Grigoriev, and the author [GGRW17], mainly for the manifolds W = # g S n × S n with n odd. This is the natural generalisation of the case of oriented surfaces, i.e. n = 1, which has been studied in great detail: see e.g. [Mum83,Loo95,Fab99,Mor03]. Our purpose here is to explain to what extent those results apply to more general manifolds. We will only consider even-dimensional manifolds. For odd-dimensional manifolds the classes κ c have odd degree and so anticommute and are nilpotent, and tautological rings in this situation seem to have a different flavour.
1.2. Finiteness. Our first result concerns conditions under which the rings R * (W ) and R * (W, * ) are suitably finite.
Theorem A. Let W be a closed smooth oriented 2n-manifold, and assume that either (H1) H^*(W; Q) is non-zero only in even degrees, or (H2) H^*(W; Q) is non-zero only in degrees 0, 2n and odd degrees, and χ(W) ≠ 0. Then (i) R^*(W) is a finitely-generated Q-algebra, and (ii) R^*(W, *) is a finitely-generated R^*(W)-module.
The result under hypothesis (H2) generalises a theorem of Grigoriev [Gri17], and proceeds by establishing the same basic source of relations among κ-classes found by Grigoriev. In the case 2n = 2 this source of relations had been established by the author [RW12], using ideas of Morita [Mor89a,Mor89b]. As the later results of [Gri17] and the results of [GGRW17] are deduced almost entirely from this basic source of relations, the same results largely follow assuming only hypothesis (H2). For example, for g > 1, k odd, and n ≥ k, it follows that Q[κ ep1 , κ ep2 , . . . , κ epn−1 ] −→ R * (# g S k × S 2n−k )/ √ 0 is surjective, which was obtained in [GGRW17] only in the case k = n. We give details of this in Section 4.1.
The result under hypothesis (H1) is entirely new and its method of proof is novel. We consider a fibre bundle W → E π → B as determining a parametrised spectrum over B, and hence its rational "cochains" as giving a parametrised HQ-module spectrum over B. We then use the notion of Schur-finiteness from the theory of motives to obtain a Cayley-Hamilton-type trace identity for endomorphisms of this cochain object, which establishes concrete relations among κ-classes. Later we shall describe some explicit calculations done using these relations.
Corollary B. Let the torus T = (S^1)^k act smoothly and effectively on a connected closed smooth oriented manifold W, and suppose that either (i) χ(W) ≠ 0 and the fixed set W^T is connected, or (ii) the fixed set W^T is discrete and non-empty. Then Kdim(R^*(W)) ≥ k.
For example, if W 2n is a quasitoric manifold then case (ii) gives the estimate Kdim(R * (W )) ≥ n. As another example, if W = # g S n × S n with n odd then it is a consequence of the localisation theorem in equivariant cohomology (which we shall discuss in Section 3.1) that any torus action on W has connected fixed set, so by case (i) restricting the SO(n) × SO(n)-action on W constructed in [GGRW17,§4] to a maximal torus (which has rank n − 1) we obtain Kdim(R * (W )) ≥ n − 1 for g > 1, which recovers the calculation of that paper. This example admits many variants: the construction of [GGRW17,§4] can be easily modified to give a SO(k) × SO(2n − k)-action on # g S k × S 2n−k , so for any odd k and any n we have Kdim(R * (# g S k × S 2n−k )) ≥ n − 1.
We shall say more about this example in Section 4.1.
Examples.
In the last section of the paper we exhibit several phenomena in tautological rings by calculations for specific manifolds. The following result is complementary to Theorem A, and shows that the hypotheses of that theorem cannot be completely removed.
Theorem C. There are closed smooth manifolds W for which R^*(W)/√0 is not finitely-generated as a Q-algebra. There are examples of any dimension 4k + 2 ≥ 6, and in dimensions 4k + 2 ≥ 14 such manifolds can also be assumed to be simply-connected.
To show the effectiveness of the relations between κ-classes arising in the proof of Theorem A, we apply them to the simplest manifold whose tautological ring is not yet known, namely CP 2 . These relations, along with relations associated to the Hirzebruch L-classes coming from index theory, give the following.
Theorem D. The ring R * (CP 2 ) has Krull dimension 2. The ring R * (CP 2 , D 4 ) is a vector space of dimension at most 7 over Q.
In fact, we show that the ring R^*(CP^2)/√0 is equal either to a quotient of Q[κ_{p_1^2}, κ_{e p_1}, κ_{p_1^4}] whose variety is the union of a line and a plane, or to Q[κ_{p_1^2}, κ_{e p_1}, κ_{p_1^4}]/(4κ_{p_1^2} − 7κ_{e p_1}), whose variety is a plane. It would be interesting to determine which case occurs, and very interesting if it is the first case.
Finally, we give a calculation which shows that the lower bound of Corollary B is not always sharp. The 3-torus cannot act effectively on S^2 × S^2, and yet:
Theorem E. The ring R^*(S^2 × S^2) has Krull dimension 3 or 4.
The lower bound on the Krull dimension comes from a 1-parameter family of 2-torus actions, to which the method of proof of Corollary B is applied. The upper bound comes from the relations between κ-classes which we found in the proof of Theorem A.
Acknowledgements. I am grateful to Søren Galatius for an enlightening discussion of the ideas in Section 2 of this paper, and to Jens Reinhold and Dexter Chua for spotting several errors. I would also like to thank the anonymous referee for their useful suggestions. I was partially supported by EPSRC grant EP/M027783/1.
Tautological relations and finite generation
Unless specified, all cohomology in this paper will be taken with Q coefficients.
In this section we describe some techniques for obtaining relations between tautological classes, which for some manifolds W suffice to establish that R * (W ) is finitely-generated. The techniques we will introduce are perhaps more important than any particular application that can be made, but Theorem A will be a consequence.
2.1. Integrality. One consequence of conclusion (ii) of Theorem A is that R^*(W, *) is integral over R^*(W). In fact, this integrality statement implies the two finiteness statements, as follows.
Proposition 2.1. If R^*(W, *) is integral over R^*(W), then R^*(W) is a finitely-generated Q-algebra and R^*(W, *) is a finitely-generated R^*(W)-module.
For the sake of clarity, we will first formulate and prove a purely algebraic statement of which this proposition is a consequence.
Lemma 2.2. Let π : B → E be a homomorphism of Q-algebras, g : E → B be a homomorphism of B-modules (where E is made into a B-module via π), and C ⊂ E be a finitely-generated subalgebra, generated by {c_i}_{i∈I}. Let R ⊂ B be the subalgebra generated by g(C). If each c_i is integral over π(R) ⊂ E, with c_i^{n_i} = Σ_{j<n_i} π(a_{i,j}) · c_i^j for some a_{i,j} ∈ R, then R is generated by the finitely-many elements a_{i,j} and g(∏_{i∈I} c_i^{m_i}) with each m_i < n_i.
Proof. By definition R is generated by the elements g(∏_{i∈I} c_i^{k_i}), so we must show that these lie in the subring generated by the indicated elements. By assumption we may write c_i^{k_i} as a Q[π(a_{i,j})]-linear combination of terms c_i^j with j < n_i, and so we may write ∏_{i∈I} c_i^{k_i} as a Q[π(a_{i,j})]-linear combination of terms ∏_{i∈I} c_i^{m_i} with m_i < n_i. As g is B- and hence R-linear, we may therefore write g(∏_{i∈I} c_i^{k_i}) as a Q[a_{i,j}]-linear combination of terms g(∏_{i∈I} c_i^{m_i}) with m_i < n_i. Thus g(∏_{i∈I} c_i^{k_i}) lies in the subring generated by the indicated elements.
Proof of Proposition 2.1. The universal fibre bundle with fibre W may be identified with the natural projection This gives a Q-algebra homomorphism by pullback and an H * (BDiff + (W ); Q)-module homomorphism by fibre integration. As we have described in the introduction, taking the differential at the marked point gives a map s : BDiff + (W, * ) → BGL + d (R) BSO(d), which classifies the vertical tangent bundle of the universal fibre bundle p.
For the second part, as H * (BSO(d)) is a finitely-generated Q-algebra, we know that R * (W, * ) is a finitely-generated R * (W )-algebra, so under the integrality assumption it follows that R * (W, * ) is in fact finitely-generated as a R * (W )-module.
Thus in order to prove Theorem A we shall actually show that R * (W, * ) is integral over R * (W ).
2.2.
Outline. To motivate the proof of Theorem A let us first explain its proof under hypothesis (H1) and an additional assumption: that the universal smooth oriented fibre bundle W → E →π B = BDiff^+(W) satisfies the Leray–Hirsch property in rational cohomology, i.e. that π_1(B) acts trivially on H^*(W) and the Serre spectral sequence for π : E → B collapses. (The proof of Theorem A under hypothesis (H1) uses a technical device which allows the following argument to be made without this additional assumption.) Under this assumption, H^*(E) is a free finitely-generated H^*(B)-module, say with basis x̄_1, ..., x̄_k ∈ H^*(E) lifting a basis x_1, ..., x_k for H^*(W). Furthermore, as W has all its cohomology in even degrees, H^ev(E) is a free finitely-generated module over the commutative ring H^ev(B), with basis the x̄_i. For x ∈ H^ev(E), multiplication by x defines an endomorphism of this module, which therefore has a characteristic polynomial χ_x(z) ∈ H^ev(B)[z], and by the Cayley–Hamilton theorem (for finite modules over a commutative ring, alias the determinantal trick) we have χ_x(x) = 0 ∈ H^ev(E). Furthermore, the coefficients of the characteristic polynomial χ_x(z) may be expressed as polynomials in the traces of the endomorphisms given by multiplication by powers of x, which make sense as H^ev(E) is a finite free H^ev(B)-module. The following lemma relates such traces to fibre-integration and the Euler class of the vertical tangent bundle.
Lemma 2.3. For any x ∈ H^ev(E) we have Tr(x̄) = π_!(e(T_π E) · x) ∈ H^ev(B), where x̄ denotes the endomorphism of H^ev(E) given by multiplication by x.
We apply the above discussion to x = c(T_π E) for c ∈ H^*(BSO(2n)) a characteristic class of oriented 2n-dimensional vector bundles. Then the polynomial χ_x(z) is monic, has coefficients in the subring generated by the κ_{e c^i} = π_!(e(T_π E) · c(T_π E)^i), and satisfies χ_x(c(T_π E)) = 0. Thus we deduce that c(T_π E) = s^*c is integral over R^*(W) and hence that R^*(W, *) is integral over R^*(W). Theorem A in the case we are considering follows by applying Proposition 2.1. It remains to prove this lemma.
Proof of Lemma 2.3. Rational cohomology classes are determined by their evaluations against rational homology classes, and any rational homology class is carried on a map from a smooth oriented manifold. So we may assume that π : E → B is a fibre bundle over a smooth oriented manifold, still satisfying the Leray-Hirsch property.
The pairing is non-singular, as in the basisx i its matrix X agrees modulo the ideal H * >0 (B) of H * (B) with that of the intersection form of W in the basis x i , so det(X) ∈ H * (B) is a unit modulo H * >0 (B), and hence is a unit as the ideal H * >0 (B) is nilpotent. Let us writex ∨ i for the dual H * (B)-module basis of H * (E) with respect to this pairing, characterised by x i ,x ∨ j = δ ij . Then for any x ∈ H ev (E) we have so to establish the claimed formula we must show that ix i ·x ∨ i = e(T π E) ∈ H ev (E). The diagonal map ∆ : E → E × B E is a map of smooth oriented manifolds, whose normal bundle is identified with T π E. Thus the Euler class e(T π E) ∈ H d (E) may be described as ∆ * ∆ ! (1). It is therefore enough to show that This is the parametrised analogue of the classical formula [MS74, Theorem 11.11] for the Poincaré dual of the diagonal, and we shall prove it in the same way. For As the classes (b ·x ∨ j ) ⊗x k generate H * (E × B E) as a Q-module, it follows from Poincaré duality for E × B E that ∆ ! (1) = ix i ⊗x ∨ i , as required. 2.3. Parametrised spectra and Schur functors. The technical device we shall use to attempt the argument of the previous section without the Leray-Hirsch assumption is to consider a fibre bundle as a parametrised manifold over its base, and make the argument in the parametrised setting. In order to do so, we shall suppose that B is a connected CW-complex, and work in a symmetric monoidal category (Sp (apart from this map, the notation (−) ! will always denote Gysin maps). The functor r * is strong monoidal.
Our argument applies more generally than to oriented fibre bundles: for now, we let π : E → B be a Hurewicz fibration (later we will add a finiteness hypothesis to 1 However, our arguments are not model-dependent and can be applied in the ∞-categorical formalism of Ando, Blumberg, Gepner, Hopkins, and Rezk [ABG + 14, ABG11], and presumably even in more naïve models of parametrised spectra. the fibres of π). This defines a parametrised spectrum Σ ∞ B E ∈ Sp /B ; we shall abuse notation and continue to call it E. Note that r ! (E) ∈ Sp is the suspension spectrum Σ ∞ E + .
The ring spectrum HQ has a 2-periodic version and we write π * (HP Q) = Q[t ±1 ] with t ∈ π 2 (HP Q). The constant parametrised spectra H B Q := r * (HQ) and HP B Q := r * (HP Q) define ring objects in Sp /B , and the main objects we will consider are the function objects These are again ring objects, using the fibrewise diagonal map on E and the multiplication on H B Q and HP B Q; we write µ for the multiplication on either object. The map E → * gives ring maps H B Q → C and HP B Q → CP , making them H B Q-and HP B Q-modules respectively. Let us write (H B Q-mod, ⊗, H B Q) for the homotopy category of H B Q-module spectra, with derived smash product of H B Q-modules as the symmetric monoidal structure and unit H B Q; similarly write (HP B Q-mod, ⊗, HP B Q) for the homotopy category of HP B Q-module spectra. We have C ∈ H B Q-mod and CP ∈ HP B Q-mod, and we can calculate Both H B Q-mod and HP B Q-mod are Q-linear tensor categories (i.e. categories enriched in Q-modules, equipped with a symmetric monoidal structure which is an enriched functor) which are idempotent complete (the retract associated to an endomorphism e : X → X which is idempotent up to homotopy may be taken to be the homotopy colimit of a diagram X e → X e → X e → · · · of modules over the appropriate ring object).
We must now recall a little representation theory of symmetric groups; we need nothing beyond Lecture 4 of [FH91]. To each partition λ of n there is associated an irreducible representation S λ of Σ n , with character χ λ . This character takes rational (in fact, integer) values, so we may form the element which is central (as χ λ is a class function) and idempotent (the coefficient dim S λ n! is chosen to make this so). For any object X in a Q-linear tensor category (D, ⊗, 1), the action of the nth symmetric group Σ n on X ⊗n yields a map of Q-algebras so e(d λ ) is an idempotent endomorphism of X ⊗n in D; if D is idempotent complete then we write S λ (X) for the corresponding retract of X ⊗n in D: this defines the Schur functor S λ (−) on D. In this paper the trivial and sign representations will play the most prominent role, and we write or, if we wish to emphasise the ambient category, ∧ n D and Sym n D . The categories H B Q-mod and HP B Q-mod are idempotent complete Q-linear tensor categories, so there are defined Schur functors on both categories. Furthermore, let us write (V Q , ⊗ Q , Q) for the symmetric monoidal category of graded Q-modules, ]-modules (where t has degree 2). These are also idempotent complete Q-linear tensor categories. Taking homotopy groups defines functors π * (−) : which are strong monoidal (by the Künneth theorem, as every graded Q-or Q[t ±1 ]module is free). Taking derived homotopy fibres at b ∈ B defines functors is also strong monoidal. In particular, all of the above functors preserve Schur functors.
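As a concrete aside (not part of the argument), the following small computation checks, for the sign representation of Σ_4, that the element d_λ defined above is indeed idempotent in the group algebra Q[Σ_n]; the helper functions are ad-hoc assumptions, and here dim S_λ = 1 and χ_λ(σ) = sign(σ).

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def sign(perm):
    # sign of a permutation via counting inversions
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def compose(p, q):
    # (p∘q)(i) = p(q(i)); permutations are tuples of 0..n-1
    return tuple(p[q[i]] for i in range(len(p)))

def multiply(a, b):
    # multiply two elements of Q[Σ_n], stored as {permutation: coefficient}
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, Fraction(0)) + cp * cq
    return out

n = 4
d = {p: Fraction(sign(p), factorial(n)) for p in permutations(range(n))}
dd = multiply(d, d)
print(all(dd[p] == d[p] for p in d))  # True: d is idempotent
```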
Lemma 2.4. Let X ∈ H_B Q-mod or HP_B Q-mod be such that for each fibre X_b we have S_λ(π_*(X_b)) = 0. Then S_λ(X) ≃ *.
Proof. As taking homotopy groups preserves Schur functors, we have π_*(S_λ(X_b)) = S_λ(π_*(X_b)), which vanishes by assumption. Thus S_λ(X_b) ≃ *, so as taking derived fibres preserves Schur functors it follows that S_λ(X)_b ≃ *. Thus the map from S_λ(X) to the terminal object is an equivalence on derived fibres, and hence an equivalence, as taking derived fibres reflects isomorphisms.
2.4. Duals, trace, and transfer. We recall the framework of categorical traces, from [DP80]. If (C, ⊗, 1) is a symmetric monoidal category and X ∈ C is an object, a strong dual of X is an object X ∨ ∈ C and morphisms ε : are the identity maps of X and X ∨ respectively. If f : X → Y is a map of objects having strong duals, then the dual of f is If f : X → X is an endomorphism of X, the trace of f is the composition This agrees with the perhaps more obvious choice but the first definition is that of [DP80]. Generalising this more obvious choice, if f : A ⊗ X → B ⊗ X is a morphism then the trace of f over X is the composition If X is in addition equipped with a comultiplication d : X → X ⊗ X then the transfer of f is First, consider the symmetric monoidal category given by the homotopy category of (Sp /B , ∧ B , S 0 B ).
Lemma 2.5. If π : E → B is a Hurewicz fibration and its fibre has the homotopy type of a finite CW-complex, then its associated parametrised spectrum Σ ∞ B E is a strongly dualisable object in the homotopy category of parametrised spectra.
Suppose then that π : E → B is a Hurewicz fibration and its fibre has the homotopy type of a finite CW-complex. Recall that we abuse notation by writing E for Σ ∞ B E. The fibrewise suspension of the fibrewise diagonal map ∆ : E → E × B E gives a comultiplication on the object E, we may thus form On applying r ! : In particular for c ∈ H * (BSO(2n)) we have that trf * π (c(T π E)) = κ ec is a tautological class.
Let us now consider the symmetric monoidal categories (H B Q-mod, ⊗, H B Q) and (HP B Q-mod, ⊗, HP B Q).
Corollary 2.6. If π : E → B is a Hurewicz fibration and its fibre has the homotopy type of a finite CW-complex, then Proof. The functor has a monoidality given by the adjoint of the morphism given by evaluation and product. This is a strong monoidality: the induced morphism on derived fibres is a weak equivalence (by the Künneth theorem, as every π * (HQ) = Q-module is free). As E ∈ Ho(Sp /B ) is strongly dualisable by Lemma 2.5, so is C = F B (E, H B Q), because strong monoidal functors preserve (strong) duals. The argument for CP is identical.
Schur-finiteness and trace identities. Deligne has introduced [Del02, §1]
the notion of Schur-finiteness of an object X in an idempotent complete Q-linear tensor category to be the property that S λ (X) is trivial for some partition λ n. In this section we consider this notion applied to the category (HP B Q-mod, ⊗, HP B Q) and so consider an HP B Q-module X such that S λ (X) * , and let us in addition suppose that X is dualisable in HP B Q-mod. Let us write X ∨ for the dual of X, with duality structure given by η : Given an endomorphism f : X → X, we may form the endomorphism and take the trace over the last (n − 1) copies of X, i.e. apply the construction Tr X ⊗n−1 (−) described in the previous section, to obtain an endomorphism of X. This endomorphism is null because the idempotent e(d λ ) : X ⊗n → X ⊗n factors through S λ (X) which is contractible by assumption.
We now translate this into formulas. We have d λ = dim S λ n! σ∈Σn χ λ (σ) · σ so the essential calculation is to describe the endomorphism of X obtained from σ • (X ⊗ f ⊗n−1 ) : X ⊗n → X ⊗n by taking the trace over the last (n − 1) copies of X. This is a universal construction in idempotent complete Q-linear tensor categories, and has been worked out by Abramsky [Abr05, Proposition 3]. In the notation of that proposition, one takes A 1 = B 1 = X and U 2 = · · · = U n = X, then f 1 = Id X and f 2 = · · · = f n = f , and π = σ. The trace of σ • (X ⊗ f ⊗n−1 ) over the last (n − 1) copies of X is then given by Here p −1 σ = Id X , and g 1 is given by composing the f i along the cycle in σ starting at 1: as f 1 = Id X , if this cycle is (1, p 2 , . . . , p k ) then this gives g 1 = f •k−1 ; L(σ) is the set of cycles in the permutation σ which do not contain 1, and for such a cycle l = (p 1 , p 2 , . . . , p k ) we have s l := Tr(f p k • · · · • f p1 ) = Tr(f •k ).
We originally learnt this idea from the thesis of del Padrone [Pad06] (see [Pad06, Proposition 2.2.4] for a closely related result).
2.6. Proof of Theorem A under the first hypothesis. In this case we will work with the periodic chains CP . For each b ∈ B we have and so by 2-periodicity we have an isomorphism Here the right-hand side is to be interpreted as the tensor product of graded Q-modules, which in degree k is −i+2n=k Under hypothesis (H1) the cohomology H * (W ) is concentrated in even degrees, so if it has total degree k then we have ] = 0 and so it follows from Lemma 2.4 that ∧ k+1 CP * . As π : E → B is a fibre bundle with compact fibres Corollary 2.6 applies to it, so CP is dualisable in HP B Q-mod and hence the discussion of the previous section applies. Thus, as CP is a ring object in HP B Q-mod, for any multiplication by x yields an endomorphismx : CP → CP , and in this case composing the map (2.2) with 1 ∈ [HP B Q, CP ] HP B Q-mod gives the identity Corollary 2.7. The polynomial is monic of degree k and satisfies ρ x (x) = 0 ∈ H 2 * (E).
The Becker-Gottlieb transfer trf * π is the map on cohomology induced by the composition shows that trf * π (t) is the trace oft ∨ : CP ∨ → CP ∨ , which is the same as the trace oft : CP → CP : thus In particular we have Tr(x •i ) = trf * π (x i ). Substituting this into (2.3) therefore shows that The coefficient of x k is the sum over the k!-many (k + 1)-cycles σ ∈ Σ k+1 of sign(σ) = (−1) k , which is k!(−1) k . Thus after dividing by this coefficient we see that ρ x (x) = 0 as required.
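For readers who want to see the underlying linear algebra, the following sketch verifies at the level of ordinary matrices (an illustration only, not the parametrised argument) that the coefficients of a characteristic polynomial are polynomial expressions in the power traces Tr(A^i), and that the resulting monic polynomial annihilates A; this is the mechanism exploited above.

```python
import sympy as sp

k = 3
A = sp.Matrix(k, k, lambda i, j: sp.Symbol(f"a{i}{j}"))
z = sp.Symbol("z")

# Coefficients e_1, ..., e_k of char(A) from the traces p_i = Tr(A^i),
# via Newton's identities: i*e_i = sum_{j=1}^{i} (-1)^(j-1) e_{i-j} p_j.
p = {i: (A**i).trace() for i in range(1, k + 1)}
e = {0: sp.Integer(1)}
for i in range(1, k + 1):
    e[i] = sp.expand(sum((-1) ** (j - 1) * e[i - j] * p[j] for j in range(1, i + 1)) / i)

char_poly = sum((-1) ** i * e[i] * z ** (k - i) for i in range(k + 1))
assert sp.expand(char_poly - A.charpoly(z).as_expr()) == 0

# Cayley-Hamilton: substituting A for z gives the zero matrix.
cayley = sum(((-1) ** i * e[i] * A ** (k - i) for i in range(k + 1)), sp.zeros(k, k))
assert cayley.expand() == sp.zeros(k, k)
print("char-poly coefficients are polynomials in Tr(A^i); char_poly(A) = 0")
```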
Applying this to the universal fibre bundle p : BDiff + (W, * ) → BDiff + (W ) and the cohomology class s * c, and using the identity trf * and hence that R * (W, * ) is integral over R * (W ). Theorem A under hypothesis (H1) follows by applying Proposition 2.1.
2.7.
Proof of Theorem A under the second hypothesis. In this case we will work with the non-periodic chains C. We shall first prove the following generalisation of a theorem of Grigoriev [Gri17].
To begin with, we prove the following extension of Corollary 2.6, which is the appropriate form of Poincaré duality in our setting.
Lemma 2.9. If π : E → B is a Hurewicz fibration over a CW-complex, its fibre F has the homotopy type of a finite Poincaré complex of dimension n, and π 1 (B) acts trivially on H n (F ), then an orientation of F determines an identification of the dual of C with Σ n C in H B Q-mod.
Proof. This is proved in Section 3.1 of [HLLRW17] in dual form, where, passing to rational coefficients, the equivalence is expressed as The domain of this morphism is Σ n C and as We now consider a smooth oriented fibre bundle π : E → B as in the statement of Theorem 2.8, with B a CW-complex; this satisfies the hypotheses of the previous lemma. Fibre integration π ! : H * (E) → H * −2n (B) is realised in the category H B Q-mod by the morphism π ! : C → Σ −2n H B Q dual to the unit ι : H B Q → C, using the self-duality of C described in Lemma 2.9. We may thus define an H B Qmodule D by the homotopy fibre sequence realises capping with the fundamental class so is surjective. By the associated long exact sequence we have and the remaining even homotopy groups are zero. Now the map realises the unit so is an isomorphism, and it follows that π odd (D b ) = H −odd (W ) and the even homotopy groups of D b vanish. As the (d + 1)-st symmetric power of the graded Q-module H −odd (W ) vanishes, the rest follows from Lemma 2.4.
The map
so taking homotopy cofibres of these two maps gives a morphism then it lifts to a map to D and hence determines a mapā : The class π ! (a · b) may therefore be represented by Hence the class π ! (a · b) N may be written as as the degree k ofā is even so no sign is incurred in rearranging the factors. As k is even, the mapā N : (Σ k H B Q) ⊗N → D ⊗N factors through Sym N (D) which is contractible as long as N ≥ d + 1. Hence π ! (a · b) d+1 = 0. Similarly, if a = b then we can chooseb =ā : Σ k H B Q → D in which case the map a N ⊗ā N : which is contractible as long as 2N ≥ d + 1. Hence π ! (a 2 ) d+1 2 = 0. Now that we have Theorem 2.8, the entirety of Section 5 of [Gri17] goes through with only notational changes, as this only uses the statement of Grigoriev's theorem. In particular, for p ∈ H * (BSO(2n)) of even degree and χ = χ(W ) = 0, the analogue of [Gri17, Example 5.19] gives the relation sign(σ) · κ ec l(γ 2 ) · · · κ ec l(γ q(σ) ) · c l(γ1)−1 ∈ R * (W, * ) for each c ∈ H * (BSO(2n)), where k = dim Q H * (W ). These may of course be pushed forward to obtain relations in R * (W ). More generally, the trace identity technique of Section 2.5 may be used to find relations among tautological classes for any manifold. Recall that given a fibre bundle W → E π → B we have formed an associated object CP ∈ HP B Q-mod. Let us write d ev = dim Q H ev (W ) and d odd = dim Q H odd (W ). The first ingredient is the following consequence of a calculation of Deligne.
Proof. By Lemma 2.4 it is enough to verify that
where the latter Schur functor is taken in V Q . By [Del02, Corollary 1.9] a (Z/2-)graded vector space is annihilated by S λ (−) under the given assumption on its (super)dimension.
It is a simple exercise with the Murnaghan-Nakayama rule to show that the character χ_λ vanishes on all n-cycles if both d_ev > 0 and d_odd > 0. As d_ev cannot be zero, because H^0(W) ≠ 0, it follows that this relation is a monic polynomial in c (after perhaps scaling by a rational number) if and only if d_odd = 0. (This accounts for why we restricted to manifolds with only even rational cohomology in the first case of Theorem A.)
Torus actions
In this section we suppose that we have a smooth action of the torus T = (S 1 ) k on a d-dimensional orientable manifold W . We write W T for the fixed set of this action. The Borel construction gives a smooth fibre bundle and the action of T on the tangent bundle T W → W gives a vector bundle T T W := T W/ /T → W/ /T , which is the vertical tangent bundle of the smooth fibre bundle π. Following the usual notation of equivariant cohomology we write . , x k ] and H * T (W ) = H * (W/ /T ; Q). As (3.1) is a smooth fibre bundle, there is a ring homomorphism ρ : R * (W ) → H * T , and we denote by R * T ≤ H * T its image. Pulling back π along itself gives a smooth fibre bundle over W/ /T with canonical section, and so a ring homomorphism ρ * : R * (W, * ) → H * T (W ), and we denote by R * T ( * ) ≤ H * T (W ) its image. Our goal in this section is to describe conditions on the manifold W and the action of T on W which allow us to estimate the Krull dimension of R * (W ) as Kdim(R * (W )) ≥ k. We will regularly use the following standard piece of commutative algebra: when one ring is integral over another they have the same Krull dimension, by the "going up" and "going down" theorems [AM69, Ch. 5]. Our most general result is as follows.
Theorem 3.1. Let T act smoothly and effectively on a connected closed orientable manifold W . Let V 1 , V 2 , . . . , V p be an enumeration of the T -representations arising as normal spaces to points on W T , and let B i denote the Euler characteristic of the subspace of W T consisting of those path components having normal representation V i .
If some y i ∈ Q[y 1 , y 2 , . . . , y p ] is integral over the subring generated by p i=1 B i y n i , n = 1, 2, 3, . . . , then H * T is integral over R * T . In particular Kdim(R * (W )) ≥ k. It is perhaps not clear when the hypothesis of this theorem is likely to hold. The following lemma, which we learnt from [BCES16], gives a simple criterion.
Lemma 3.2. Suppose that we have discarded the B i which are zero, and that this is not all of them. If the remaining numbers B 1 , B 2 , . . . , B p have all partial sums non-zero, then Q[y 1 , y 2 , . . . , y p ] is finite over the subring generated by Claim. If (B + ) = (y 1 , y 2 , . . . , y p ) then Q[y 1 , . . . , y p ] is finite over B.
Our proof of this claim follows the discussion at [Jef]. Under the assumption the quotient ring Q[y 1 , y 2 , . . . , y p ]/(B + ) has every y i nilpotent, so is a finite Q-module; let z 1 , z 2 , . . . , z m ∈ Q[y 1 , y 2 , . . . , y p ] be lifts of these finitely-many generators, which can be taken to be homogeneous as the ideal (B + ) is homogeneous. We claim that these generate Q[y 1 , y 2 , . . . , y p ] as a B-module; let M ⊂ Q[y 1 , y 2 , . . . , y p ] be the B-submodule that they generate.
As the z i are homogeneous, and B is generated by homogeneous elements, M is a graded submodule of Q[y 1 , y 2 , . . . , y p ] with the monomial-length grading. Suppose p ∈ Q[y 1 , y 2 , . . . , y p ] is an element of minimal grading which does not lie in M . Then we may write , y 2 , . . . , y p ], and b j ∈ (B + ). But the b j have strictly positive degree, so the V j have strictly smaller degree than p so must lie in M , and hence p does too, which proves the claim.
In order to prove the lemma we must therefore show that (y 1 , y 2 , . . . , y p ) = (0, 0, . . . , 0) is the only simultaneous solution to the equations p i=1 B i y n i = 0 for n ∈ N. If (y 1 , y 2 , . . . , y p ) ∈ Q p is a solution, then grouping terms with y i = y j together we obtain distinct rational numbersȳ i solving the equations where eachB i is a partial sum of the B i , and hence non-zero by assumption. But this means that the vector (B 1ȳ1 , . . . ,B qȳq ) is in the kernel of the (transposed) Vandermonde matrix associated to (ȳ 1 , . . . ,ȳ q ), so as theȳ i are all distinct it follows that (B 1ȳ1 , . . . ,B qȳq ) = 0, and as theB i are all non-zero it follows thatȳ i = 0 as required.
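The following small computation (with invented numerical values) illustrates the Vandermonde step of this proof: once the values are distinct, the vanishing of the power sums forces the coefficient vector to be zero, mirroring the conclusion that (B̄_1ȳ_1, ..., B̄_qȳ_q) = 0 and hence ȳ_i = 0.

```python
import sympy as sp

y1, y2, y3 = sp.symbols("y1 y2 y3")
b1, b2, b3 = sp.symbols("b1 b2 b3")
eqs = [b1 * y1**n + b2 * y2**n + b3 * y3**n for n in (1, 2, 3)]

# Vandermonde determinant: nonzero whenever the y_i are pairwise distinct.
V = sp.Matrix(3, 3, lambda n, i: [y1, y2, y3][i] ** n)
print(sp.factor(V.det()))   # (y1 - y2)(y1 - y3)(y2 - y3) up to sign

# With distinct numerical y_i the only solution in (b1, b2, b3) is zero.
sol = sp.linsolve([e.subs({y1: 1, y2: -2, y3: 3}) for e in eqs], (b1, b2, b3))
print(sol)  # {(0, 0, 0)}
```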
The following corollary, whilst not so powerful as Theorem 3.1, is often easier to apply as one does not need to classify the normal representations at the fixed set.
Corollary 3.3. Let T act smoothly and effectively on a connected closed orientable manifold W, let X_1, X_2, ..., X_ℓ be the path components of the fixed set W^T, and let A_i := χ(X_i). If some x_i ∈ Q[x_1, x_2, ..., x_ℓ] is integral over the subring generated by the Σ_{i=1}^ℓ A_i x_i^n, n = 1, 2, 3, ..., then Kdim(R^*(W)) ≥ k.
Proof. There is a surjection φ from the subring A ⊂ Q[x_1, x_2, ..., x_ℓ] generated by the Σ_{i=1}^ℓ A_i x_i^n onto the subring B ⊂ Q[y_1, y_2, ..., y_p] generated by the Σ_{i=1}^p B_i y_i^n. If x_i is integral over A then there is a polynomial q(x) = Σ a_i x^i with coefficients in A such that q(x_i) = 0. Then q'(y) = Σ φ(a_i) y^i is a polynomial over B such that q'(φ(x_i)) = 0, so y_j = φ(x_i) is integral over B, and hence Theorem 3.1 applies.
Example 3.4. There are several standard conditions which oblige a torus action on a manifold W to have connected fixed-set. For example (i) Let W have dimension 2n, and suppose that all its cohomology apart from H 0 (W ; Q) and H 2n (W ; Q) lies in odd degrees, and that there is some cohomology in odd degrees. Then W T is connected (by the localisation theorem in equivariant cohomology, which we will describe in the following section). (ii) If W has trivial even-dimensional rational homotopy groups, then W T is empty or connected [Hsi75, Theorem IV.5].
In such cases χ(W T ) = χ(W ), so if this is non-zero then the hypotheses of Corollary 3.3 are satisfied.
Example 3.5. Suppose that the action of T k on W has isolated fixed points, or more generally that all A i are equal and non-zero. Then the subring generated by the i=1 A i x n i is the subring of symmetric polynomials in Q[x 1 , x 2 , . . . , x ], and every x i is integral over this so the hypotheses of Corollary 3.3 are satisfied.
This immediately implies that if W 2n is a quasitoric manifold (that is, the "toric manifolds" of [DJ91]) then Kdim(R * (W )) ≥ n, as such manifolds by definition have an action of an n-torus with isolated fixed points. Slightly more subtly, if G/K is a homogeneous space of rank zero (i.e. rk(G) = rk(K)) then a common maximal torus T of G and K acts on G/K with fixed points given by the finite set (W G (T ) · K)/K ⊂ G/K, where W G (T ) := N G (T )/T denotes the (finite) Weyl group of G, so Kdim(R * (G/K)) ≥ rk(G).
3.1. The localisation theorem. We now prepare for the proof of Theorem 3.1. Let X 1 , X 2 , . . . , X be the components of the fixed set W T , with d i := dim(X i ), let ν Xi be the normal bundle of X i in W , and let ν i be the T -representation which arises as each fibre of ν Xi . Let us write A i := χ(X i ), and write V 1 , V 2 , . . . , V p for an enumeration of the T -representations ν i which arise. Then we have B i = j s.t. νj =Vi A j . Let us write ρ i : H * T (W ) → H * T (X i ) for the restriction map in equivariant cohomology, and π ! : for the fibre integration maps. As the T -action on X i is trivial we have X i / /T = BT × X i , and so the fibre integration map (π i ) ! is simply given by slant product with the fundamental class of X i . As T acts on the normal bundle ν Xi → X i , there is an induced vector bundle ν T Xi is an isomorphism. Even more is true: Atiyah See [AP93, p. 366] for a textbook exposition of the localisation theorem.
3.2. Proof of Theorem 3.1. Using the diagram (3.3) to compute κ ep I = π ! (e(T T W )p I (T T W )) ∈ S −1 H * T , which we know lies in the subring H * T , gives When we multiply by e(T X i ) and integrate over X i the latter terms do not contribute, so as Xi e(T X i ) = χ(X i ) = A i we get Grouping these terms by the representation types V j instead gives Applying this to p I = p n j we find that T for all j and n.
Applying the hypothesis of the theorem for each j, we find that there exists an i such that all p j (V i ) lie in a common integral extension R * T ⊆ R ⊆ H * T . On the one hand R is integral over R * T . On the other hand by a theorem of Venkov [Ven59] the ring H * T is finite over the subring generated by the p j (V i ) (because V i is a faithful representation of T , by the standard lemma given below), and hence is finite (and so integral) over R . It follows that H * T is integral over R * T , so in particular they have the same Krull dimension, namely k.
Lemma 3.6. If T acts effectively and smoothly on a connected closed manifold W , then any T -representation arising as the normal space to a point on W T is faithful.
Proof. We may choose a T -invariant Riemannian metric on W , so the exponential map exp : T W → W is equivariant; the restriction of the exponential map to a fibre T x W → W is a diffeomorphism when restricted to a neighbourhood of 0 ∈ T x W .
If the action of T on the normal space V to W T at x had a non-trivial kernel {e} < T ≤ T then the T -action on T x W = T (W T )⊕V is trivial. By exponentiating, it follows that T fixes an open neighbourhood of x ∈ W . Thus the fixed set W T is a submanifold of W which contains an open subset; as W is connected it follows that it is the whole of W . This contradicts the action being effective.
3.
3. An extension. The discussion so far gives a technique more general Theorem 3.1, but difficult to formalise in a single result. It is best described through an example.
Proposition 3.7. Let T act effectively on W with two fixed components X_1 and X_2. Suppose that χ(X_1) = −χ(X_2) ≠ 0 but that the normal T-representations ν_1 and ν_2 at X_1 and X_2 have all Pontrjagin classes distinct (when they are non-zero). Then Kdim(R^*(W)) ≥ k.
Proof. We have that for all j and n, and p j (ν 1 ) − p j (ν 2 ) = 0 ∈ R * T . Hence Therefore after inverting the finite set . , x k ], we find that the p j (ν 1 ) lie in S −1 R * T , and hence by Venkov's theorem [Ven59] that S −1 H * T is a finite S −1 R * Tmodule. As S −1 H * T still has Krull dimension k (there is a maximal ideal m of H * T not containing the product of the finitely-many elements in S-as the intersection of all maximal ideals is zero-whence (S −1 H * T ) S −1 m ∼ = (H * T ) m so S −1 m is a maximal ideal of S −1 H * T of height k), it follows that S −1 R * T has Krull dimension k and so Kdim(R * T ) ≥ k. Hence Kdim(R * (W )) ≥ k.
4.1.
Manifolds with mostly odd cohomology. Let W be a 2n-dimensional manifold whose cohomology is only non-trivial in degrees 0, 2n, and odd degrees, let d = dim_Q H^odd(W), and suppose χ(W) = 2 − d ≠ 0. Then by Theorem A the Q-algebra R^*(W) is finitely-generated and R^*(W, *) is a finite R^*(W)-module. Furthermore, by our method of proof, Grigoriev's theorem holds for these manifolds (our Theorem 2.8). Therefore the results of Sections 2 and 3 of [GGRW17] hold for W as well, as Grigoriev's theorem was the only external input. So if d > 2 then By Example 3.4 (i), if T = (S^1)^k acts on such a manifold W then the fixed set W^T is connected, so Kdim(R^*(W)) ≥ k. The construction of [GGRW17, Section 4.1] can be mimicked to obtain an action of SO(k) × SO(2n − k) on #_g S^k × S^{2n−k} for any k, and the calculation of the characteristic classes κ_{e p_i} for the associated bundle is entirely analogous.
We obtain the following generalisation of the results of [GGRW17].
Corollary 4.1. For k odd and g > 1 we have and hence R * (# g S k × S 2n−k , D 2n ) is a finite-dimensional Q-vector space.
As in [GGRW17] results can be obtained for g = 0 or 1 too, but we shall not write them out here.
Quasitoric manifolds.
A quasitoric manifold W 2n has by definition a smooth action of T = (S 1 ) n with isolated fixed points, so has Kdim(R * (W )) ≥ n by Corollary 3.3 . Furthermore, the integral cohomology of W is supported in even degrees, so its rational cohomology is too, and therefore by Theorem A the Q-algebra R * (W ) is finitely-generated and R * (W, * ) is a finite R * (W )-module. 4.3. Non-finite generation. We shall give some examples of manifolds W for which R * (W ), and in fact even R * (W )/ √ 0, is not finitely-generated. We shall do so by constructing actions of a torus T on W and showing that the tautological subring R * T ≤ H * T is not finitely-generated. As H * T is an integral domain the natural surjection R * (W ) → R * T factors through R * (W )/ √ 0, which therefore shows that R * (W )/ √ 0 is not finitely-generated. Before attempting this method there is an important observation to be made.
Observation 4.2. Let T = (S 1 ) k act on W satisfying the hypotheses of Theorem 3.1; then that theorem shows that the inclusion R * T → H * T is integral. As H * T is Noetherian, and H * (BT ; H * (W )) is a finitely-generated H * T -module, it follows from the Serre spectral sequence for the Borel construction that H * T (W ) is a finitely-generated H * T -module and hence is integral over H * T . Therefore the morphism R * T → H * T → H * T (W ) is integral, so R * T → R * T ( * ) is integral too. It then follows from applying Lemma 2.2 as in the proof of Proposition 2.1 that R * T → R * T ( * ) is finite and R * T is a finitely-generated Q-algebra.
So to pursue the programme we have suggested one should only try to use torus actions which do not satisfy the hypotheses of Theorem 3.1. The following allows us to construct manifolds with torus actions having prescribed normal representations and Euler characteristics of its fixed sets.
Construction 4.3. Fix a positive odd integer n and an even integer k. Let Σ(k) 2n be the 2n-manifold of Euler characteristic k obtained as # g S n × S n (if k is nonpositive) or g S 2n (if k is positive). Let H(k) 2n+1 be the manifold with boundary Σ(k) 2n given by g S n × D n+1 or g D 2n+1 respectively. Let T be a torus, and suppose we are given even integers B 1 , B 2 , . . . , B p and distinct faithful complex T -representations V 1 , V 2 , . . . , V p , which are all of the same dimension and which have no trivial subrepresentations. Then we can form the manifold which has a T -action on the right-hand factors. We may then let M be the disjoint union M = M (1) M (2) · · · M (p).
As V i is representation having no trivial subrepresentations, T acts freely on S(V i ) and its only fixed point on D(V i ) is 0. Thus M (i) T = Σ(B i ) 2n × {0}, and the normal representation at these fixed points is given by V i .
Each V i may be written as a sum L 1 ⊕ · · · ⊕ L m of 1-dimensional complex Trepresentations; if a unit vector v ∈ S(V i ) is written in components as (l 1 , . . . , l m ) with all l j non-zero, then a t ∈ T which stabilises it must act trivially on each L j , so must act trivially on V i , so t must be the identity as V i is a faithful Trepresentation. Thus such a v ∈ S(V i ) must lie in a free orbit, so in particular each path component of M (i) has a free orbit. If one prefers a connected manifold, such free orbits in two different path components have tubular neighbourhoods T -equivariantly diffeomorphic to T × D 2n+2m−rk(T ) , which can therefore be cut out and the remaining pieces glued together T -equivariantly along the common boundaries T × S 2n+2m−rk(T )−1 . Doing this enough times yields a connected Tmanifold with the same fixed-point data, and hence by localisation with the same characteristic classes.
Lemma 4.4. The T -manifold M so obtained has κ p I = 0 and Proof. The second statement follows from (3.4 ). An analogous calculation shows that .
The bundle ν Xi → X i is trivial, so the equivariant bundle ν T Xi is isomorphic to the pullback of V i to X i / /T = BT × X i . Thus the total Pontrjagin class satisfies which pushes forward to zero (as dim(X i ) = 2n > 0), so κ p I = 0.
We now give our example.
Example 4.5. Let T = (S 1 ) 2 and V 1 be the 2-dimensional complex T -representation with weights {x 1 + x 2 , x 2 }, and V 2 be the 2-dimensional complex T -representation with weights {x 1 , x 2 }. Construction 4.3 with B 1 = 2 and B 2 = −2 yields a Tmanifold W (which may be chosen to have any dimension at least 6 and congruent to 2 modulo 4) having κ p I = 0 and . For the chosen representations the total Pontrjagin classes are 2 ). Here p 2 (V 1 ) = p 2 (V 2 ) = 0 and 1 , so the only non-zero κ ep I in this quotient ring are x 2 all have different degrees and are non-zero as they are not divisible by x 2 2 . On the other hand, multiplication of any two positive-degree elements in S is zero, as each positivedegree element is divisible by x 2 so a product is divisible by x 2 2 . Thus S is infinitelygenerated, so R * T is too, and hence R * (W )/ √ 0 is too.
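The following sketch reproduces the computation behind this example from the data stated above (the weights of V_1 and V_2 and the identity κ_{e p_I} = 2(p_I(V_1) − p_I(V_2))), under the assumption that the quotient ring referred to is obtained by reducing modulo x_2^2; it shows that the classes κ_{e p_1^n} are non-zero there, sit in distinct degrees, and have vanishing pairwise products, which is the source of the failure of finite generation.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

def p1(weights):
    # first Pontrjagin class of a sum of complex line bundles with given weights
    return sum(w**2 for w in weights)

def kappa_e_p1_power(n):
    # kappa_{e p_1^n} = 2*(p_1(V1)^n - p_1(V2)^n), reduced modulo x2^2
    value = sp.expand(2 * (p1([x1 + x2, x2])**n - p1([x1, x2])**n))
    return sp.rem(value, x2**2, x2)

classes = [kappa_e_p1_power(n) for n in (1, 2, 3)]
print(classes)  # [4*x1*x2, 8*x1**3*x2, 12*x1**5*x2]: nonzero, distinct degrees
print({sp.rem(sp.expand(a * b), x2**2, x2) for a in classes for b in classes})
# {0}: every product of two positive-degree generators vanishes mod x2**2
```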
Let us record some observations about this example.
Remark 4.6. If we suppose that n ≥ 5 is odd and the T -manifolds M (2, V 1 ) and M (−2, V 2 ) are glued along a free orbit as suggested above, then the (2n+4)-manifold M obtained is simply-connected and has the same integral homology as Remark 4.7. Although this tautological ring is not finitely-generated, Proposition 3.7 applies to this torus action and gives Kdim(R * (W )) ≥ 2. (Specifically, we have Remark 4.8. Choosing * ∈ X 2 gives a map R * (W, * ) → H * T , whose image is generated by the κ ep I = 2(p I (V 1 ) − p I (V 2 )) along with the characteristic classes of the representation V 2 , which are e(V 2 ) = x 1 x 2 , p 1 (V 2 ) = −(x 2 1 + x 2 2 ), and p 2 (V 2 ) = x 2 1 x 2 2 . Rearranging a little shows that this is the subring generated by e(V 2 ) and the p j (V i ), so is finitely generated. (Similarly if we choose * ∈ X 1 .) This raises the interesting possibility that R * (W, * ) might be finitely-generated in more generality than R * (W ) is.
4.4. The complex projective plane. Let us consider the manifold CP 2 , whose cohomology is supported in even degrees. Thus by Theorem A the Q-algebra R * (CP 2 ) is finitely-generated and R * (CP 2 , * ) is a finite R * (CP 2 )-module. We will explain estimates on the generators for these algebras, using the relations developed in Section 2.5. The computations were done with assistance from Maple™.
(4.4) (More generally, one could fully polarise this relation, by writing c = u + t · v + s · w, expanding out and taking the coefficient of ts: this gives a trilinear form in the variables (u, v, w) which vanishes for all u, v, w ∈ H * (BSO(4)) = Q[p 1 , e], and there is no reason to take these to be linear terms. However, we will not pursue this here.) The relations (4.1), (4.2), (4.3) and (4.4), multiplied by monomials in Q[p 1 , e] and pushed forward, show that certain κ e a p b 1 ∈ R * (CP 2 ) are decomposable. Specifically κ xp 3 1 is decomposable for any monomial x = 1, e, p 1 κ xep 2 1 is decomposable for any monomial x = 1, e, p 1 κ xe 2 p1 is decomposable for any monomial x = 1, e, p 1 κ xe 3 is decomposable for any monomial x = 1, e, p 1 .
the last of which shows that the generator κ p 4 1 may be eliminated from the ring Q[e, p 1 , κ p 2 1 , κ p 4 1 , κ ep1 ]/ √ J. One may also deduce from these relations that κ ep1 and κ p 2 1 are integral over Q[e, p 1 ], so that R * (CP 2 , * )/ √ 0 is finite over Q[e, p 1 ].
4.4.2. Fixing a disc. As passing from R * (CP 2 , * ) to R * (CP 2 , D 4 ) in particular kills e and p 1 , we deduce from the above that Corollary 4.11. R * (CP 2 , D 4 ) is a finite-dimensional Q-vector space.
In fact, setting K = J + (e, p 1 ) and simplifying, we find that K is generated by
Remark 4.14. In [Fin77,Fin78] there is given an analysis of S 1 -actions on simplyconnected 4-manifolds, from which it is possible to deduce-through a very laborious consideration of cases and analysis of fixed-point data-that for any circle action on CP 2 we have 4κ e 2 = κ ep1 and so by the first Hirzebruch relation we have 4κ p 2 1 − 7κ ep1 = 0. Alternatively, this may be proved using Hsiang's splitting theorem for the S 1 -equivariant cohomology of CP 2 [Hsi75, Theorem VI.1]. 4.4.4. The tautological variety. We find it quite revealing to consider the (reduced) tautological ring R * (CP 2 )/ √ 0 by considering its associated variety V CP 2 . The choice of generators κ p 2 1 , κ p 4 1 , and κ ep1 of R * (CP 2 ) presents V CP 2 as a subvariety of A 3 , and it follows from Corollary 4.10 that V CP 2 is contained in the union of the plane P := {4κ p 2 1 − 7κ ep1 = 0} and the line L := {κ p 2 1 − 2κ ep1 = 0, 316κ 3 ep1 − 343κ p 4 1 = 0}. Furthermore, it follows from the calculation of Section 4.4.3 that V CP 2 contains P, so the variety V CP 2 is either P or P ∪ L. It would be extremely interesting if L ⊂ V CP 2 , but no method for showing this seems to be available. (Each circle action on CP 2 gives a homomorphism R * (CP 2 )/ √ 0 → Q[x 1 ] and hence a morphism A 1 → V CP 2 , but by Remark 4.14 all such morphisms have image in P.) Similarly, by the calculation of Section 4.4.1 the four elements e, p 1 , κ p 2 1 , and κ ep1 generate R * (CP 2 , * )/ √ 0, which presents the associated variety V (CP 2 , * ) as a subvariety of A 4 . Eliminating the variable κ p 4 1 from the radical ideal described in Section 4.4.1 shows that V (CP 2 , * ) is contained in the union of the plane {4κ p 2 1 − 7κ ep1 = 0, κ ep1 − 4p 1 + 4e = 0} and the lines {κ p 2 1 − 2κ ep1 = 0, e = 0, κ ep1 − 7p 1 = 0} {κ p 2 1 − 2κ ep1 = 0, 2κ ep1 − 7e = 0, 5κ ep1 − 7p 1 = 0}. It follows from the calculation of Section 4.4.3 that the plane is contained in V (CP 2 , * ) . 4.5. The manifold S 2 × S 2 . The cohomology of S 2 × S 2 is supported in even degrees. Thus by Theorem A the algebra R * (S 2 × S 2 ) is finitely-generated and R * (S 2 × S 2 , * ) is a finite R * (S 2 × S 2 )-module. | 14,841.4 | 2016-11-09T00:00:00.000 | [
"Mathematics"
] |
Lightweight concrete with EVA recycled aggregate for impact noise attenuation
The purpose of this study is to evaluate the acoustic performance of lightweight concrete with ethylene vinyl acetate copolymer (EVA) residues to reduce impact noise on floors. Three types of concrete with three different mix proportions were evaluated. The method adopted includes the characterization of water absorption, voids and density of the samples. The experimental study of impact noise followed the procedures of ISO 140. The results indicate that the lightweight concrete with EVA recycled aggregate can reduce impact noise levels by up to 15 dB and that the highest percentage of EVA coarse aggregate does not entail higher acoustic performance.
Brazilian standard ABNT NBR 15575-3 (5) characterizes residential building floors as the element responsible for providing sound insulation, whether between distinct housing units or between rooms of the same unit used for night rest, domestic leisure or intellectual work. Table 1 shows the performance rating criteria recommended for the standardized weighted impact sound levels (L'_{nT,w}) provided by the structural slab.
Layers of deformable elastic materials are very important as the first stage of energy absorption. Moreover, whether or not it is combined with such layers, the floating floor presents the most satisfactory results (3).
Floating floors are a commonly used solution to reduce impact noise. They involve placing a resilient material between the structural concrete slab and the sub floor, which can improve impact sound insulation by up to 20 dB. The insulators (resilient materials) may be rubber pads, cork and other evenly distributed materials, or plates of glass wool, rock wool, expanded polystyrene, among others (4).
Experimental studies have contributed to developing products whose performance can be compared to that of traditional materials. Such studies evaluated and compared materials made from different waste types for mitigating impact noise on floors. Materials using waste such as carpets (6), recycled rubber (7, 8), coconut fiber (9) and footwear industry waste with PU and ethylene vinyl acetate copolymer (EVA) (10-12) provide performance similar to glass wool. These materials were used in a floating floor system, as a layer between the sub floor and the structural concrete slab.
The purpose of this study is to evaluate the acoustic performance of a new material with ethylene vinyl acetate copolymer (EVA) recycled aggregate replacing conventional coarse aggregate in the production of lightweight concrete sub floor for residential buildings.
INTRODUCTION
Lightweight concrete is characterized by the use of low-density aggregates with a high amount of voids between the particles, or by the replacement of solid material by air, which can be achieved through the incorporation of air or foam; a low specific mass can also be achieved by producing concrete without fines.
The low density of the mixture is achieved through the use of lightweight aggregates, which produces specific characteristics such as low density, ranging from 300 kg/m³ to 1,800 kg/m³, and compressive strength ranging from 0.3 MPa to 40 MPa. The coarse and fine aggregates are considered lightweight when their densities are less than 1,120 kg/m³ and 880 kg/m³, respectively (1).
These features indicate that the lightweight aggregate can be used for acoustic performance qualification in buildings, especially for the impact noise isolation of floors.
Noise in buildings can propagate through the air (airborne noise) or through the structures themselves (impact noise). Impact noise is produced by the percussion of solid bodies on a floor, transmitted through the structure and re-radiated by it into the air (e.g. falling objects, footsteps, hammering, percussion instruments) (2).
Transmission through the structure is the shortest and most direct path for impact noise. A hard floor that deforms only slightly under the impact loads and transmits the noise in a very short time, whilst on a deformable floor the transmission time is greater and therefore the amplitude of the transmitted impact force is smaller. In the two cases the sound response is very distinct, with higher sound frequencies produced in the first case and lower ones in the second (3). Bistafa (4) explains that even in thick and dense concrete slabs the impact noise level is high. Even though the transmitted sound level can be reduced by increasing the thickness, such a solution is not adopted due to its lack of efficiency and to the increasing costs of material and structure weight.
Water absorption, voids and specific mass
The methodology adopted in this work follows the procedures of ISO 6783 (15). The tests were made after 28 days of curing on two specimens, each 100 mm in diameter and 200 mm in height.
The specimens were dried in an oven at 60 ºC until they reached constant mass. This temperature was chosen to preserve the characteristics of the EVA aggregates. After that, the specimens were kept submerged in water for 72 hours in a climatized room at a temperature of 23 °C ± 2 °C. Afterwards, they were boiled for five hours.
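From the masses obtained in this procedure (oven-dry, saturated surface-dry after boiling, and immersed), the absorption, voids and density figures reported later can be computed. The sketch below uses the usual hardened-concrete relations (ASTM C642-style); the paper only cites ISO 6783, so the exact formulas are an illustrative assumption.

```python
def absorption_voids_density(m_dry, m_sat, m_imm, rho_water=1000.0):
    """Water absorption (%), voids index (%) and dry bulk density (kg/m3) from the
    oven-dry mass, the saturated surface-dry mass (after boiling) and the mass
    measured immersed in water, all in kg."""
    absorption = 100.0 * (m_sat - m_dry) / m_dry        # percentage of the dry mass
    voids = 100.0 * (m_sat - m_dry) / (m_sat - m_imm)   # percentage of the bulk volume
    dry_density = rho_water * m_dry / (m_sat - m_imm)   # bulk volume from hydrostatic weighing
    return absorption, voids, dry_density
```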
Acoustic performance
To determine the weighted normalized impact sound pressure levels, the specimens were tested using the method described by ISO 140-7 (16), which determines procedures for field measurements, and ISO 717-2 (17), which defines the method of obtaining the single-number rating for impact noise on floors.
The test sounds were generated with a normalized tapping machine Bruel & Kjaer type 3207. The noise was generated in the source room, on the floor immediately above the receiving room, where three measurements were carried out with a Quest sound level analyzer, in third-octave bands in the frequency range 100 Hz to 3150 Hz, at three different positions.
The rooms have hard surfaces and are separated by a structural concrete slab with a thickness of 100 mm; they are built with masonry walls coated with plaster and paint. Both rooms measure 4.64 m x 3.5 m x 2.76 m, with a floor area of 16.24 m² and a volume of 44.82 m³.
The sample tested was 1 m² which consisted of four
Materials
In this study two types of coarse aggregate were used: natural and industrially produced. The natural coarse aggregate comes from granite rock, with a maximum characteristic dimension of 9.5 mm. This type of natural aggregate was chosen because its grain size is similar to that of the coarse EVA aggregate. The natural aggregate was previously washed, dried in an oven to constant mass and kept in sealed plastic containers until use.
Characterization of the EVA aggregates was performed according to the methods specified by ISO 6782 (13). However, adjustments were made to enable the testing.
In the granulometric analysis test, conducted according to ISO 6274 (14), the samples showed a mass difference when the fractions retained in the sieves were weighed against the weight of the total sample. The solution was to weigh both the total sample and the fractions only after constant mass was reached, holding them at a temperature of 60 °C (13).
In the EVA aggregate specific mass test, which followed ISO 6783 (15), the samples floated when immersed, so an adjustment was necessary to keep the EVA aggregate underwater. A barrier screen was installed in the test apparatus in order to prevent the material from floating to the surface.
The coarse EVA aggregate used in this study comes from two types of recycling processes. The coarse EVA aggregate named EVA1 is an artificial aggregate obtained from an industrial process that removes the dust generated in the grinding step of the waste produced by the EVA footwear industry.
The coarse EVA aggregate named EVA2 is obtained through an artisanal recycling process in which waste from footwear companies is ground and packaged for sale. In the production of this aggregate, no treatment is given to the EVA powder generated during the process. This type of coarse EVA aggregate was chosen to allow the comparison of two different samples of lightweight aggregates.
The concrete was cast with three different types of mortar. The mix proportion 1:1:4 features 80% of coarse aggregate and 20% of fine aggregate; the mix proportion 1:1.5:3.5 is composed of 70% of coarse aggregate and 30% of fine aggregate; and the 1:2:3 mixture contains 60% of coarse aggregate and 40% of fine aggregate. The mix proportions, samples and designations adopted are presented in Table 2. The unit mass was lower for the EVA aggregates as well.
The EVA1 presented a unit mass corresponding to 8.5% of the unit mass of the natural coarse aggregate, while the EVA2 presented a unit mass corresponding to 5% of it. That is, the low mass of the EVA is also reflected in the presence of voids.
The EVA1 aggregate presented a fineness modulus 4% lower than that of the natural coarse aggregate, while the fineness modulus of the EVA2 aggregate was 12% lower. As they have the same maximum characteristic dimension and particle distribution, it is concluded that the aggregates are similar in size and distribution.
It is observed, however, that the EVA aggregates require more water to wet their grains than the natural aggregates do, and that EVA2 needs more water than EVA1: EVA1 and EVA2 have water absorptions 42.5 and 44.5 times higher, respectively, than the natural coarse aggregate, the absorption of EVA2 being 5% higher than that of EVA1.
plates of 50 cm x 50 cm x 3 cm. Thus, the results are valid for acoustic performance comparisons among the samples and can reveal the influence of the recycled EVA aggregate on the acoustic insulation of floors.
The treatment of the results consists in obtaining single-number quantities for the standardized impact sound pressure levels (L'nT). This number results from the comparison of the measured sound spectrum with the reference curve of ISO 717-2 (17), and expresses the acoustic performance, in dB, of the floor system tested. Nine samples prepared in the laboratory and placed on an uncoated concrete slab were tested.
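The comparison with the reference curve can be sketched as follows: the reference curve is shifted in 1 dB steps towards the measured spectrum until the sum of unfavourable deviations is as large as possible without exceeding 32 dB, and the rating is read off the shifted curve at 500 Hz. The reference values below are quoted from memory of ISO 717-2 and should be checked against the standard; the measured spectrum passed in is hypothetical.

```python
import numpy as np

# Third-octave band centre frequencies, 100 Hz to 3150 Hz (16 bands).
FREQS = [100, 125, 160, 200, 250, 315, 400, 500, 630, 800,
         1000, 1250, 1600, 2000, 2500, 3150]
# Impact sound reference curve (dB) as recalled from ISO 717-2 -- verify before use.
REF = np.array([62, 62, 62, 62, 62, 62, 61, 60, 59, 58, 57, 54, 51, 48, 45, 42], float)

def weighted_impact_level(measured):
    """Single-number rating: largest downward shift of the reference curve such that the
    sum of unfavourable deviations (measured above reference) stays <= 32 dB; the rating
    is the shifted reference value at 500 Hz."""
    levels = np.asarray(measured, float)
    shift = 100.0   # start far above the spectrum, where there are no unfavourable deviations
    while np.clip(levels - (REF + shift - 1.0), 0.0, None).sum() <= 32.0:
        shift -= 1.0
    return REF[FREQS.index(500)] + shift

# Hypothetical measured spectrum (dB):
# print(weighted_impact_level([60]*6 + [62]*5 + [58, 55, 52, 49, 46]))
```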
Materials characterization
The cement used was CPV-ARI, due to its high early strength and the need to quickly demold the material, which was molded into plates of small thickness compared to their width and length. The mechanical properties of the cement are shown in Table 3.
It is observed that the properties of the cement CPV-ARI meet the regulatory requirements, approving the material for testing.
Among the concretes with EVA, EVA1 showed more satisfactory results than EVA2, with lower water absorption and fewer voids. In terms of bulk density, the samples with EVA1 had lower values than those with EVA2.
In addition, the lower the mortar content, that is, the greater the amount of coarse aggregate in relation to fine aggregate, the lower the bulk density. For example, the 1:1:4 mix with EVA1 has a dry bulk density 15% lower than the 1:1.5:3.5 mix with the same materials and 18% lower than the 1:2:3 mix.
Acoustic performance
Figure 1 combines the results of all samples tested with their respective values of L'nT, pointing out that the sample called slab corresponds to the values measured under the slab of the receiving room, without the use of material between the slab and the tapping machine.
Water absorption, voids and bulk density of concretes
Table 5 presents the results of water absorption, voids and bulk density of concretes.
It is observed that, in general, the concretes with EVA aggregates showed higher water absorption, higher amounts of voids and lower bulk density, whether dry, saturated or real.
It is noticed that most of the absorption occurs in the 1:1:4 mix molded with EVA2, which had the highest water absorption. This sample had a water absorption 8.9 times greater than that of the reference mix and 2.2 times greater than that of the mix with EVA1. In parallel, the sample that showed the highest percentage of voids was the 1:1:4 with EVA2, with 38.56%.
It was also noted that the average bulk density of the EVA1 concretes corresponds to 46% of the bulk density of the reference trace, whereas the average bulk density of the EVA2 concretes is 63%.
The L' nT,w values can be comparatively analyzed in Figure 2, with the grouping by type of material composition of the specimens.
The samples made with natural coarse aggregate obtained values between 71 and 72 dB, which corresponds to the minimum performance rating for structural concrete slabs and falls outside the minimum performance standards for accessible roof terraces, according to Brazilian standard NBR 15575-3 (5). It is observed that the variation of the coarse-to-fine aggregate ratio had little influence on the final results. The samples with EVA1 showed greater variation, with values between 54 and 62 dB, and only the specimens with the 1:1.5:3.5 mixture can be classified as having superior performance. In the specimens prepared with EVA2 the results ranged from 56 to 62 dB, reaching the minimum performance rating for accessible roof terraces.
Relation between impact noise and voids
The test results of impact noise and amount of voids showed an inverse relation, as seen in Figures 3, 4 and 5.
It can be said for the specimens studied in this article that the increase in amount of voids leads to better acoustic performance in slabs and toppings.
The results can be divided into three distinct groups in relation to coarse aggregate mix proportions.
The samples made with natural coarse aggregate had the highest measured values, and among them the lowest value was that of sample 1:1:4 (Na). At frequencies up to 160 Hz the natural aggregate samples showed higher values than the bare slab, which places those frequencies near the resonance frequency, with the whole set vibrating simultaneously and amplifying the sound. In this case the sample and the concrete slab move together and in phase, excited by the low frequency of 160 Hz.
The second group, formed by the specimens made with EVA1 aggregate, presented intermediate sound pressure levels at frequencies from 315 Hz upwards, except for the 1:2:3 mix (E1C), which showed higher values. The best results were obtained for specimens prepared with EVA and a higher proportion of coarse aggregate, i.e., with the 1:1:4 and 1:1.5:3.5 mixes, which had lower densities. Nevertheless, the specimens with EVA1 residues in the 1:2:3 mix proportion presented sound pressure levels above the bare slab at 125 and 160 Hz, under the influence of the resonance frequency of the system.
In the third group, the sample with the 1:1.5:3.5 mix proportion (E2B) presents a different behavior: from 2000 Hz onwards the specimens showed values similar to the second group. In general, the third group, using the EVA2 aggregate, gave results intermediate between the EVA1 concrete and the reference. In samples with industrialized EVA residues, the reduction in the proportion of coarse aggregate raised the dry density values. However, changes in these proportions did not follow the same trend in the measured noise levels, as observed in Figure 7.
The group of samples prepared with EVA residues obtained by artisanal recycling presented a direct relation between the increase in dry bulk density and the measured noise levels. In Figure 8 this relation can be observed by comparing the two rising profiles, indicating that an increase in dry bulk density corresponds to an increase in the measured noise levels. In this group of specimens, the increase in dry density and the reduction in the proportion of coarse aggregate contribute to a poorer impact noise performance.
Relations between impact noise and real bulk density
The relation between the impact noise level and bulk density showed a variation between the results of natural aggregate specimens and specimens with EVA.
Most of the results show that an increase in bulk density caused a worse acoustic performance against impact noise in the specimens studied. In the specimens with natural aggregate, the reduction in the proportion of coarse aggregate resulted in higher measured noise levels. However, the bulk density did not follow the same trend, with a reduction in value between the Nb and Nc specimens, with 70% and 60% of coarse aggregate respectively (Figure 6). Furthermore, the use of a material of different composition on the slab prevents the resonance effect of the system, which occurred due to the presence of the natural aggregate both in the slab and in the samples with natural aggregate. However, it is also observed that the highest percentage of coarse EVA aggregate does not increase the impact noise performance. In the samples studied, the reduction from 80% to 60% of coarse aggregate resulted in better acoustic performance, with a reduction of 15 dB in the measured noise levels, from 77 dB to 62 dB.
The relations obtained between the measured sound levels, voids and bulk density indicate that, beyond the weight reduction provided by lightweight aggregates in structures, a major benefit could be a higher acoustic quality of concrete floors.
CONCLUSIONS
The concrete molded with EVA presented lower fresh bulk density in comparison with the concretes with natural aggregates. It follows that the higher the percentage of lightweight aggregate added to the mix, the lower the density.
In the impact noise tests, the lightweight concrete achieved the best acoustic performance, with a satisfactory rating for structural slabs. In the case of accessible roof terraces, the acoustic performance classification decreased; however, other available coatings that can help with soundproofing should be considered for this kind of roof.
It was noted that the incorporation of EVA as a resilient material in the sub floor can break the rigidity of the floor system, attenuating impact noise with efficiency.
Figure 1. L'nT sound pressure levels by frequency.
Figure 2. Weighted standardized impact sound pressure level of the samples.
Figure 4. Relation between impact noise and voids: samples with EVA1 coarse aggregate.
Figure 6. Relation between dry bulk density and impact noise levels: samples with natural coarse aggregate.
Table 1
Brazilian standard ABNT NBR 15575-3 recommended classification criteria for the acoustic performance for residential floors (5).
Table 2
Concrete proportions prepared in laboratory.
According to Table 4, it is observed that the EVA1 aggregate presents a bulk density corresponding to 6% of the bulk density of the natural coarse aggregate; for the EVA2 aggregate this figure is 7%. Similar relationships are observed for the surface-dry aggregate, showing that the EVA aggregates have densities far below those of the natural aggregates, as expected.
Table 4 presents the physical characteristics of the aggregates used, including natural, artificial, fine and coarse.
Table 3
Mechanical properties of the cement CPV-ARI.
Table 4
Physical characterization of aggregates.
Table 5
Water absorption, voids and bulk density of concretes. | 4,274.2 | 2013-06-30T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Tetraspanin CD151 as an emerging potential poor prognostic factor across solid tumors: a systematic review and meta-analysis
Tetraspanin CD151, also known as PETA-3 or SFA-1, has been reported to predict prognosis in various solid tumors. Yet, the results of these studies remained inconclusive. Here, we performed a meta-analysis of relevant studies published on the topic to quantitatively evaluate the clinicopathological significance of CD151 in solid tumors. The relevant articles were identified by searching the PubMed, Web of Science and Embase databases. The pooled hazard ratios (HRs) and corresponding 95% confidence intervals (CIs) of overall survival (OS) and disease-free survival (DFS) were calculated to evaluate the prognostic value of CD151 expression in patients with solid tumors. A total of 19 studies involving 4,270 participants were included, and we concluded that CD151 overexpression was associated with statistically significant poor OS (pooled HR = 1.498, 95% CI = 1.346-1.667, P<0.001) and poor DFS (pooled HR = 1.488, 95% CI = 1.314-1.685, P<0.001). Furthermore, the subgroup analysis revealed that the associations between CD151 overexpression and the outcome endpoints (OS or TTP) were significant within the Asian and European regions, as well as in patients with breast cancer or gastric cancer. Taken together, the pooled HRs showed that CD151 overexpression was associated with poor survival in human solid tumors. CD151 could be a valuable prognostic biomarker or a potential therapeutic target in solid tumors.
An increasing number of studies showed that increased CD151 expression in tumor tissue was associated with poor survival of cancer patients [10-13, 15, 17-27]. However, in invasive lobular breast cancer [15] and endometrial cancer [16], overexpression of CD151 might have diverse, even opposing roles, correlating with better clinical outcomes. The results of these individual studies were inconsistent. Thus, this comprehensive meta-analysis was performed to clarify the possible prognostic role of CD151 expression in solid tumors.
Demographic characteristics
125 articles were retrieved from the different databases (PubMed, Embase, and Web of Science). As shown in the search flow diagram (Figure 1), 125 records were initially retrieved through the pre-defined search strategy. Because of duplicated data, 65 records were removed. After screening the retrieved titles and abstracts, 39 records were excluded because no relevant endpoint was provided. The remaining 26 records were downloaded as full text and assessed very carefully. Among them, another 8 studies were excluded, including 1 experimental study and 7 studies without prognosis data. After selection, 18 published studies including 4,270 patients were finally selected for this meta-analysis. The median sample size was 145, with a wide range from 30 to 886. Among all cohorts, Asia (n = 14) was the major source region of the literature, followed by Austria (n = 1), Poland (n = 2), the USA (n = 1), and the UK (n = 1). The Newcastle-Ottawa Scale (NOS) was applied to assess these studies. The quality scores ranged from 6 to 9, indicating that the methodological quality was high.
As for cancer type, two studies evaluated non-small cell lung cancer; two focused on glioblastoma; three evaluated breast cancer; one evaluated gallbladder carcinoma; three evaluated gastric carcinoma; one evaluated clear cell renal cell carcinoma (CCRCC); one evaluated esophageal squamous cell carcinoma; one evaluated pancreatic cancer; two evaluated liver cancer; one evaluated prostate cancer; one evaluated colon cancer; and one evaluated endometrial cancer. Of all the included cohorts, 19 provided data on OS and 10 provided data on DFS.
Evidence synthesis
As described, this meta-analysis was based on two outcome endpoints: OS and DFS. 19 cohorts were included in the meta-analysis of OS. A fixed-effects model was utilized to calculate the pooled hazard ratio (HR) along with the 95% confidence interval (CI). The heterogeneity test reported a P value of 0.054 and an I² value of 36.9%. The results indicated that CD151 overexpression was significantly associated with poor OS in cancer patients (pooled HR = 1.498, 95% CI = 1.346-1.667, P<0.001) (Figure 2). 10 cohorts were included in the meta-analysis of DFS. As the heterogeneity test reported a P value of 0.146 and an I² value of 32.8%, a fixed-effects model was again applied. The results again demonstrated a significant association between CD151 expression and DFS (pooled HR = 1.488, 95% CI = 1.314-1.685, P<0.001) (Figure 3). A subgroup analysis was then performed; the results indicated that the associations between CD151 overexpression and poor OS and poor DFS were also significant in
Analysis of possible publication bias and sensitivity analysis
In this study, we utilized Begg's funnel plot as well as Egger's test to evaluate possible publication bias. As demonstrated, the funnel plots for OS and DFS showed no obvious asymmetry (Figure 4), and Egger's tests revealed only slight publication bias concerning OS (P=0.097), but not DFS (P=0.839). Therefore, we performed the trim-and-fill method to make the pooled HR more reliable, and the P value of the pooled HR remained less than 0.01 (figure not shown). Next, we performed a sensitivity analysis to determine the robustness of the above results. No single study was identified to dominate the meta-analysis, and deletion of any single study had no significant impact on the general conclusions (Figure 5).
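For illustration, Egger's regression asymmetry test can be reproduced from study-level log-HRs and their standard errors as sketched below; this is a generic implementation, not the authors' code, and the inputs are hypothetical.

```python
import numpy as np
from scipy import stats

def eggers_test(log_hr, se):
    """Egger's test: regress the standardized effect (effect/SE) on precision (1/SE);
    an intercept far from zero suggests funnel-plot asymmetry (publication bias)."""
    z = np.asarray(log_hr, float) / np.asarray(se, float)
    precision = 1.0 / np.asarray(se, float)
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    sigma2 = resid @ resid / (len(z) - 2)
    se_intercept = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    t = beta[0] / se_intercept
    return beta[0], 2 * stats.t.sf(abs(t), df=len(z) - 2)   # intercept and two-sided P value
```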
DISCUSSION
Tetraspanin CD151 is composed of various structural domains [4]. Site-directed mutagenesis studies have revealed that CD151 can be palmitoylated at multiple sites, which is vital for the assembly of the tetraspanin-enriched microdomain (TEM) network [28,29]. For instance, CD151 interacts with α3β1 and α6β4 integrins via palmitoylation of intracellular N-terminal and C-terminal cysteines [28]. Studies have demonstrated that the integrin subunits α3 and α6 may directly associate with CD151 at the QRD194-196 site [30].
As the first identified member of the tetraspanin family, CD151 has been shown to participate in tumor cell behaviors including growth, proliferation, motility and invasiveness, possibly via interaction with a number of different proteins. Silencing CD151 can decrease these cancer cell behaviors [31][32][33][34][35][36]. Existing evidence has also shown that CD151 expression is important for tumor neoangiogenesis and epithelial-to-mesenchymal transition [35,37].
In addition, clinical studies have investigated the potential prognostic value of CD151. Most of these studies, however, include only a limited number of patients, and the results are inconclusive. CD151 overexpression often predicts an unfavorable outcome in many cancers, such as gastric [11,14,24], prostate [10] and non-small cell lung cancer [18,27]. On the other hand, it is a favorable prognostic indicator in invasive lobular breast cancer [15] and endometrial cancer [16]. To our knowledge, the present study is the first and most comprehensive meta-analysis systematically exploring the possible prognostic role of CD151 up-regulation in solid malignancies.
Similarly, further subgroup analysis demonstrated that the associations between CD151 overexpression and poor OS and DFS were significant within Mongoloid patients, as well as poor OS in Caucasian patients. The results showed no difference in OS or DFS HRs among populations of different descent. When the data were stratified according to cancer type, the associations between CD151 overexpression and poor OS were also significant in breast cancer and gastric cancer. Together, our quantitative results strongly support the current mainstream viewpoint that an excess of CD151 has an undesirable impact not only on overall survival but also on disease-free survival. Additionally, several important implications emerge from this meta-analysis. High CD151 expression is very likely a general poor prognostic marker in cancer. We included a total of twelve different cancer types [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27], and the pooled results showed that high CD151 expression was associated with poor OS and DFS. These conclusions could possibly be extended to all solid tumors. Finally, this foreshadows the potential of CD151 to develop into a valuable therapeutic target and prognostic biomarker for solid tumors.
In conclusion, the significant association between CD151 over-expression and cancer patients' poor survival was clearly demonstrated in the present meta-analysis. These results indicate that CD151 could be a potential prognostic biomarker and a promising therapeutic oncotarget for solid tumors.
Publication search
This meta-analysis was performed following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses [38]. Databases including PubMed, EMBASE and Web of Science were searched from their inception to June 2016 using the search terms 'CD151', 'Tetraspanin', 'cancer or tumor or neoplasm or carcinoma', 'prognosis or prognostic or survival or predict or outcome or alive', and the following limits: human, article in English. All potentially eligible studies were retrieved. The bibliographies of these studies were also carefully scanned to identify further possibly eligible studies. When multiple studies of the same patient population were identified, the published report with the largest sample size was included.
Inclusion criteria
To be eligible for selection of this meta-analysis, studies: (a) should test CD151 expression for prognostic value in cancer; (b) CD151 expression was tested by immunohistochemistry (IHC) or quantitative real-time polymerase chain reaction (qRT-PCR); (c) should have hazard ratios (HRs) with 95% confidence intervals (CIs), or should enable calculation of these statistics from the data presented; (d) should classify CD151 expression as "high" and "low" or "positive" and "negative"; (e) should be written in English.
Exclusion criteria
Exclusion criteria were: (a) literature such as letters, editorials, abstracts, reviews, case reports and expert opinions; (b) experiments performed in vitro or in vivo but not associated with patients; (c) articles providing no HRs with 95% CIs for OS, or Kaplan-Meier curves with inadequate survival data for further analysis; (d) a follow-up duration of less than three years.
Data extraction
Two outcome endpoints were analyzed: overall survival (OS) and disease-free survival (DFS). Parameters were extracted from each paper, including the first author's surname, publication year, country of origin, number of patients, antibodies utilized, type of measurement, score for CD151 assessment, CD151 cut-off values, as well as OS and DFS. The main features of the selected studies are summarized in Table 1.
Cancer-specific survival, disease-specific survival and five-year OS were combined into OS. On the other hand, cumulative recurrence and progression-free survival were combined into DFS. OS was expressed as the time (in months) from cancer diagnosis to death. To assess the prognostic value of CD151 expression, the multivariate hazard ratio (HR) was extracted. For articles in which prognosis was plotted only as Kaplan-Meier curves, Engauge Digitizer V4.1 was applied to extract the survival data, and Tierney's method was utilized to calculate the HRs and 95% CIs [39].
Statistical analysis
Pooled HRs and 95% CIs for the two outcome endpoints (OS and DFS) were calculated. Statistical heterogeneity was assessed through Cochran's chi-square-based Q test, with a P value >0.10 indicating a lack of heterogeneity. We also quantified the effect of heterogeneity via $I^2 = 100\% \times (Q - df)/Q$. $I^2$ values below 25% could be considered low, values of about 50% moderate, and values of over 75% high [40]. According to the absence or presence of heterogeneity, a fixed-effects or random-effects model was applied to pool the HRs, respectively: without statistical heterogeneity, a fixed-effects model was employed; otherwise the random-effects model was used [41]. Funnel plots and Egger's test were utilized to assess possible publication bias [26]. A sensitivity analysis was also performed. Statistical analyses were performed with Stata 14.0 (StataCorp, College Station, TX). P values for all comparisons were two-tailed.
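The inverse-variance fixed-effects pooling and heterogeneity statistics described here can be sketched as follows; the study-level HRs and confidence intervals passed in are hypothetical placeholders, not the values from Table 1.

```python
import numpy as np
from scipy import stats

def fixed_effects_pool(hr, ci_low, ci_high):
    """Inverse-variance fixed-effects pooling of hazard ratios reported with 95% CIs,
    together with Cochran's Q (P value) and the I^2 statistic."""
    log_hr = np.log(hr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE recovered from the CI width
    w = 1.0 / se ** 2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (log_hr - pooled) ** 2)
    df = len(log_hr) - 1
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return {
        "pooled_HR": np.exp(pooled),
        "CI95": (np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)),
        "Q_P": stats.chi2.sf(q, df),
        "I2": i2,
    }

# Hypothetical example: three studies reporting HR (95% CI)
# print(fixed_effects_pool([1.6, 1.3, 1.5], [1.2, 0.9, 1.1], [2.1, 1.9, 2.0]))
```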
CONFLICTS OF INTEREST
The authors have declared that no competing interests exist. There were two parts of data (a and b) in the study Romanska HM [15]. | 2,683.8 | 2016-11-23T00:00:00.000 | [
"Medicine",
"Biology"
] |
Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations
This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration, the directly calculated affine matrix, which is obtained from limited pixel information, may misregister when large biases exist, thus severely misleading the following registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences obtained by triplanar 2D CNNs. Deformation is then removed iteratively through preregistration and demons registration. Compared with state-of-the-art registration frameworks, our method gains more accurate registration results on both synthetic and real datasets. Besides, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity standards for 2D and 3D registration. Experiment results also show faster convergence speed.
Introduction
The aim of image registration is to establish spatial correspondences between two or more images of the same or different scenes acquired at different times, from different viewpoints, and/or by different sensors. The ability to capture complex and large image deformations is vital to many computer vision applications, including image registration and atlas construction. The problem becomes more challenging when the object in the image or the edge of the image undergoes severe deformation [1].
Take medical image registration for example: tissues, organs or the body itself are prone to deform, move, and rotate under most circumstances. Most methods iteratively reach a satisfying overlap under specific mathematical criteria, maximizing or minimizing a deformation energy as described in (1). The fixed image is defined as $F$, while the moving image is $M$. Registration aims to find the optimal transformation model $T$ that best satisfies the energy $E$. As a result, the model $T$, the objective function (similarity metric), and the optimization method constitute the three main components of image registration. According to a state-of-the-art survey [2], registration can be classified into rigid and nonrigid registration. Rigid models restrain the optimum to a few parameters to achieve global registration, while nonrigid models recover local deformation through physical models such as elastic or viscous models, statistical models, support vector regression frameworks, and so forth. In order to fully overlap two images, researchers commonly adopt a two-step strategy, which contains an initial registration and a following iterative registration [3].
In the two-step strategy, registration first begins with a global affine transformation for initial global alignment, as in the state-of-the-art methods FLIRT [4] and ELASTIX [5,6]. Alternatively, fiducial markers are first detected through feature descriptors, for example the SIFT method [7], and the initial registration is carried out by establishing correspondences between these point sets. In the preregistration procedure, the rotation, scale and translation of the moving image are modified by the calculated affine matrix. After that, nonrigid registration proceeds iteratively. One severe problem of affine-matrix preregistration is that, when large distortion and rotation both exist, accuracy is limited by the correspondences between region-based descriptors. If the descriptor itself is not accurate, the problem becomes more severe, and once descriptors fail to discover point correspondences, the accuracy of the following registration is badly affected. As a result, imprecision may be introduced that misleads the following procedure. Besides, the traditional FLIRT and ELASTIX methods require images of the best quality; otherwise poor registration may occur. In order to address these limitations and capture very complex and large deformations, we propose a new approach for image registration based on a two-layer deep adaptive registration framework. First, in the preregistration procedure, the rotation, scale and translation between two images are obtained separately to achieve the initial registration. This is quite different from the traditional one-time-calculated affine matrix. For the rotation parameter, a CNN classifier is trained offline in order to identify the level of rotation of the current image under severe distortion. Then the scale and translation parameters are obtained, and an optimal preregistration is calculated from the parameters gained above. As for 3D images, triplanar 2D CNNs [8] around each voxel are utilized for the calculation of the final affine matrix. At this point, preregistration is done. Second, the rectified images are further recovered through the following nonrigid demons registration procedure. In the next cycle, the former registration further facilitates the results of the later registration of the last iteration. This iterative procedure is carried out until an optimal overlap between the two images is achieved. Besides, PCA is introduced to extract the most valuable features, and the detected features are fed into SSD, Kendall, Pearson, Spearman, and so forth to form a new similarity metric. A triplanar 2D PCA is also proposed to handle the 3D registration problem, and Figure 1 gives details of the algorithm. As a result, convergence speed is accelerated while the same registration accuracy is maintained. Figures 1 and 2 and Algorithm 3 illustrate the workflow of our framework for 2D and 3D image registration.
The work introduced in this paper contributes in the following aspects: (i) A CNN rotation classifier is trained for the preregistration of 2D images and, for 3D images, triplanar 2D CNNs are constructed to estimate the parameters of the affine matrix. This new preregistration performs better than the state-of-the-art ELASTIX and SURF-based methods.
(ii) A two-layer adaptive registration framework is constructed, and it performs better than other so-called two-step strategies.
(iii) PCA is used to extract valuable features, which are introduced into traditional similarity metrics such as SSD and Pearson. For 3D images, a triplanar 2D PCA is proposed to handle the 3D registration problem. Experiment results show that convergence speed is accelerated with the new similarity standard.
(iv) The proposed framework is tested on both synthetic and natural 2D and 3D images under deformations of various extents. Experiment results show that our two-layer deep adaptive registration framework is able to identify the extent of rotation under severe deformation more precisely and to correct large and complex distortions with a higher dice ratio than the comparative methods, as it adaptively modifies the differences between images while the others have no deep insight into the deformation between images.
The rest of the paper is organized as follows. The whole architecture of the proposed two-layer adaptive registration framework for 2D and 3D images is illustrated in Section 2; Section 3 explains the methodology of our CNN classifier preregistration; Section 4 introduces our preregistration in combination with demons nonrigid registration and our new PCA-related similarity metric; the proposed methods are evaluated in Section 5 on different datasets and under different evaluation principles; finally, the conclusion of this work is given in Section 6.
Architecture
2.1. CNNs for 2D Images. The whole workflow of our 2D image preregistration, compared with the traditional method, is illustrated in Figure 1. In traditional algorithms, an affine matrix is calculated through correspondences between detected features, containing information on rotation, scale, and translation. This procedure is significantly influenced by the accuracy of the detected feature points, and on severely deformed images traditional feature methods usually fail. Our algorithm processes each of the three elements above separately. By refining each procedure, accurate correspondences between the fixed and moving image are obtained. It works as follows.
(i) For rotation, the CNN classifier is first trained offline in order to rectify the rotation extent of an image under severe deformation. The trained CNN classifier can identify as many as 360 classes of rotation.
(ii) For scale, image size information is utilized to achieve consistency between the fixed and moving images.
(iii) For translation, the centroid of each image is calculated through a statistical algorithm and translation is achieved by utilizing the position information of the centroids (a minimal sketch of the scale and translation steps follows this list).
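For concreteness, the scale and translation steps of items (ii) and (iii) can be sketched with standard image-processing primitives as below; this is an illustrative assumption of how size and centroid information might be used, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def scale_and_translate(fixed, moving):
    """Estimate scale from the image sizes and translation from the intensity centroids,
    then resample the moving image accordingly."""
    scale = np.array(fixed.shape, float) / np.array(moving.shape, float)
    resized = ndimage.zoom(moving, scale, order=1)             # match the fixed image size
    shift = (np.array(ndimage.center_of_mass(fixed))
             - np.array(ndimage.center_of_mass(resized)))
    aligned = ndimage.shift(resized, shift, order=1)           # align centroid to centroid
    return aligned, scale, shift
```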
2.2. Triplanar CNNs for 3D Images. Different from the 2D image preregistration, the CNN classifier here is used to locate the slice of a voxel (along the x, y, z directions) instead of identifying rotation. The workflow of 3D image preregistration is shown in Figure 2. The main procedure includes sampling, slice classification, transform matrix calculation, and image transformation by the matrix. Using CNNs for 3D image registration is a new attempt to resolve image registration under high deformation. The detailed method is introduced in Section 5.2.
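A minimal sketch of the triplanar sampling around one voxel is given below; the patch size and the absence of boundary handling are assumptions made for illustration.

```python
import numpy as np

def triplanar_patches(volume, center, size=32):
    """Extract the three orthogonal 2D patches (through the x, y and z planes) centred
    on a voxel of a 3D volume, as input for the triplanar CNNs. Assumes the centre is
    at least size//2 voxels away from the volume border."""
    x, y, z = center
    h = size // 2
    p_x = volume[x, y - h:y + h, z - h:z + h]   # plane with fixed x
    p_y = volume[x - h:x + h, y, z - h:z + h]   # plane with fixed y
    p_z = volume[x - h:x + h, y - h:y + h, z]   # plane with fixed z
    return np.stack([p_x, p_y, p_z])
```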
Preregistration
Our strategy consists of, first, preregistration through a CNN classifier on both 2D and 3D images; then, the adaptive use of CNNs and the demons algorithm in the following nonrigid registration; and finally, an improved similarity metric that accelerates the convergence of the registration. In this section, we present our preregistration methodology by introducing the CNN rotation classifier.
Why We Use CNNs
(1) The Robustness of Classification. CNNs are a data-based classification method trained on an appropriate amount of data. A CNN is suitable for nearly any type of data and can classify with high accuracy, especially for low-quality fMRI or CT images and for images under high deformation (the experiments in Section 5 show that these two kinds of real data are suitable for CNN processing). The detailed CNN structure and the back-propagation training method are described in Section 3.5.
(2) Automatic Image Feature Perception. Nearly all preregistration methods are based on precise feature perception, so the feature perception method plays the key role in this procedure. Traditional image feature perception is usually based on expert-designed features: experts define fixed methods to detect specific features for a limited class of images. For example, the FLIRT method uses inter-modal voxel similarity measures, where the correlation ratio and mutual information are used to detect voxel relationships between different parts. This kind of method is highly limited with respect to image sources, quality and parameter settings; when an exceptional case occurs, for example when images with large deformation are input, it does not work well. In contrast, the features in the CNN method are learned by the network itself from the training data, such as edges, brightness, high- or low-frequency features, distribution features and so on. Once the training data are updated, the network automatically adapts to more features at the same time. Although the long training time and the complicated learning of network variables make the CNN method not so easy to use, its high accuracy makes it the trend and future of image processing.
(3) High Efficiency Classification. Although the training time of CNNs is long (depending on the training method, the network layer structure and hardware such as the GPU), the total time spent on testing or classification is very short. Once the network is well trained, the time consumed for processing is as short as a linear operation.
Above all, even though there are some good affine transformation methods based on expert knowledge, we still need a smarter one to adapt to more complex image processing tasks in the future.
Theory of CNNs.
The concept of deep learning was raised by Hinton and Salakhutdinov [9] in 2006, and it has brought great advances to machine learning since then. Deep learning aims to construct and use brain simulations to recognize data such as images/video, audio and text in an unsupervised way. A deep learning framework uses a multilayer "encoder" network to transform the high-dimensional data into a low-dimensional code and a similar "decoder" network to recover the data from the code. Outputs of a lower-layer network act as inputs of a higher-layer network, and the whole network aims to make outputs equal inputs without loss of information. By using lower-layer features to represent higher-layer features/classifications, a distributed feature representation of the data is found. Autoencoders, sparse coding, Restricted Boltzmann Machines (RBMs), Deep Belief Networks (DBNs) and CNNs are five kinds of deep learning frameworks. Convolutional neural networks are excellent deep learning architectures, first introduced by Fukushima [10] and applied to handwritten digit recognition. Image recognition and segmentation tasks have also successfully used CNNs since then, with an error rate as low as 0.23 percent on the MNIST database [11]; they also offer high speed and accuracy for image classification [12]. In facial recognition [13,14] and video quality analysis [15], CNNs likewise achieved large decreases in error rate and root mean square error.
A CNN is a multilayer perceptron in which each stage consists of a convolutional layer followed by a subsampling layer. Through locally connected networks, the stationary statistics of natural images are exploited by the network topology. First, images are sampled into small patches. In the convolutional layers, small feature detectors are learned from these extracted samples; a feature is then computed as the convolution of the feature detector with the image at each position. In the sampling layer, the number of features is reduced to lower the computational complexity and to introduce invariance properties. One significant property of the features learnt by CNNs is invariance to translation, rotation, scale and other deformations. This two-stage feature extraction structure gives CNNs high distortion tolerance when identifying input samples.
CNNs Methodology.
The goal of CNNs is no different from other classification methods: both focus on minimizing the total squared error. Let $c$ denote the number of classes and $N$ the size of the training dataset; the total squared error function can be written as $E = \frac{1}{2}\sum_{n=1}^{N}\sum_{k=1}^{c}(t_k^n - y_k^n)^2$ (2), where $t_k^n$ is the $k$-th dimension of the target of the $n$-th sample and $y_k^n$ stands for the corresponding output of the network; the activation function in the CNN is the sigmoid function, for a faster convergence rate. For a single sample $n$, (2) can be written as $E^n = \frac{1}{2}\sum_{k=1}^{c}(t_k^n - y_k^n)^2$ (3). The final aim of the CNN is to achieve the smallest total squared error between $t$ and $y$. For a traditional fully connected neural network, back propagation (BP) is used to calculate the partial derivatives that minimize the squared error; for a fully connected layer $l$, the output can be written as $x^{(l)} = f(W^{(l)}x^{(l-1)} + b^{(l)})$ (4), where $f$ is the sigmoid function. Unlike (4), as (5) shows, for a convolutional layer $l$ the image features $x_i^{(l-1)}$ from the prior layer are convolved with kernels $k_{ij}^{(l)}$, which differ from layer to layer, and $b_j^{(l)}$ is the offset of the sigmoid function: $x_j^{(l)} = f\big(\sum_{i \in M_j} x_i^{(l-1)} * k_{ij}^{(l)} + b_j^{(l)}\big)$ (5). For the sample layer, the number and type of image features are the same as in the prior layer except that the feature size is scaled down; each feature has a multiplicative and an additive offset, $x_j^{(l)} = f\big(\beta_j^{(l)}\,\mathrm{down}(x_j^{(l-1)}) + b_j^{(l)}\big)$ (6). The down-sampling size in this paper is 2, which means the image size of the next layer is shrunk by a factor of two in both width and height. Combining (4) and (5) gives equation (7) for the output layer, in which each entry stands for the value of one output computed from the input features, subject to the constraint condition that the outputs sum to 1 and each lies between 0 and 1. By calculating and training the kernels with the back propagation method we can finally obtain the best features from the different layers with high classification accuracy. As shown in Figure 3, the input images form the input layer; a detailed introduction can be found in Sections 3.4 and 3.5. The hidden layers are the four pairs of convolutional and subsampling layers, denoted $C_i$ and $S_i$ ($i = 1, 2, 3, 4$) and called the locally connected layers. The output layer is a combination of a fully connected layer and a softmax classifier for classification. Each of the $C_i$ and $S_i$ ($i = 1, 2, 3, 4$) layers is constructed of multiple maps and each map consists of multiple independent neural cells. Let $x^{(l-1)}$ and $x^{(l)}$ be the input and output of the $l$-th layer, $m_1^{(l-1)} \times m_2^{(l-1)}$ and $m_1^{(l)} \times m_2^{(l)}$ the sizes of the input and output maps, and $n^{(l-1)}$ and $n^{(l)}$ the numbers of input and output maps of that layer.
CNNs Structure Design.
We adopt a ten-layer CNN perceptron network (input and output layers included; convolutional and sample layers counted separately). The key variable settings, including the kernel size and sample rate of the different layers of the proposed CNN, are shown in Table 1 and Figure 3.

Table 1. Layer settings of the proposed CNN (layer 1 is the input layer):
Layer 2 - 1st convolutional layer, kernel 9 * 9
Layer 3 - 1st sample layer, sample rate 2
Layer 4 - 2nd convolutional layer, kernel 5 * 5
Layer 5 - 2nd sample layer, sample rate 2
Layer 6 - 3rd convolutional layer, kernel 5 * 5
Layer 7 - 3rd sample layer, sample rate 2
Layer 8 - 4th convolutional layer, kernel 5 * 5
Layer 9 - 4th sample layer, sample rate 2
Layer 10 - output layer, none

The learning rate is alpha = 1, the variable update batch size is 10, and the number of training iterations is 1000; all training and test images are normalized to 128 * 128 grayscale images with pixel values in [0, 1].
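As a concrete illustration, the layer configuration above can be expressed in PyTorch as follows. The kernel sizes, pooling rate, sigmoid activations, input size and number of classes follow the text; the channel widths per convolutional layer are not given in this excerpt and are therefore assumptions.

```python
import torch.nn as nn

class RotationCNN(nn.Module):
    """Four convolution + subsampling pairs followed by a fully connected softmax
    classifier, matching the 128x128 -> 4x4 size reduction described in the text."""
    def __init__(self, n_classes=360, channels=(6, 12, 24, 48)):   # channel widths assumed
        super().__init__()
        c1, c2, c3, c4 = channels
        self.features = nn.Sequential(
            nn.Conv2d(1, c1, kernel_size=9), nn.Sigmoid(), nn.AvgPool2d(2),   # 128 -> 120 -> 60
            nn.Conv2d(c1, c2, kernel_size=5), nn.Sigmoid(), nn.AvgPool2d(2),  # 60 -> 56 -> 28
            nn.Conv2d(c2, c3, kernel_size=5), nn.Sigmoid(), nn.AvgPool2d(2),  # 28 -> 24 -> 12
            nn.Conv2d(c3, c4, kernel_size=5), nn.Sigmoid(), nn.AvgPool2d(2),  # 12 -> 8 -> 4
        )
        self.classifier = nn.Linear(c4 * 4 * 4, n_classes)

    def forward(self, x):                   # x: (batch, 1, 128, 128), values in [0, 1]
        return self.classifier(self.features(x).flatten(1))   # logits for a softmax loss
```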
Training Image Rotation Classifier through CNNs.
Our input images for training are difference images between the fixed and moving image, $D = \lVert F - M \rVert$, where the moving image is deformed with different extents of rotation. Each rotation angle out of 360° is defined as one class, producing as many as 360 classes. Two distinguishing characteristics of CNNs are the receptive (perception) field and shared weights. The receptive field means that each neural cell in a layer is not connected to all neural cells in the adjacent layers but only to a local area of cells (9 * 9 as in Figure 3). Shared weights means that the connection weight parameters (9 * 9) of every neural cell to its local area are the same. As shown in Figure 3, suppose the size of the input image is 128 * 128. After convolution with filters whose kernel size is 9 * 9, the image changes into Ts1 of size 120 * 120; the image is then scaled into Tc1 of size 60 * 60 in layer S1. After four pairs of C and S layers, the original image is represented by Ts4, a matrix of only 4 * 4. In these hidden layers, the neural cells on a feature map are not fully connected but share the same weights; as a result, only 9 * 9 weight parameters need to be calculated, greatly reducing the computational complexity. A full connection exists between the Ts4 matrix and the output layer, eliminating the disparity caused by the partial connections in the hidden layers. The softmax classifier then identifies the matrix and outputs the detected result. After that, the parameters are fine-tuned through back propagation for 1000 iterations until convergence. After all these steps, the final classifier is obtained.
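A minimal sketch of how such difference-image training samples could be generated is shown below, assuming one class per degree of rotation as stated above; the interpolation order and the use of absolute differences are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def make_rotation_samples(fixed, angles=range(360)):
    """Build difference images D = |F - rotate(F, a)| labelled by the rotation angle a,
    one class per degree (360 classes in total)."""
    fixed = np.asarray(fixed, float)
    samples, labels = [], []
    for a in angles:
        rotated = rotate(fixed, angle=a, reshape=False, order=1)
        samples.append(np.abs(fixed - rotated))
        labels.append(a)
    return np.stack(samples), np.array(labels)
```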
Diffeomorphic Log Demons Registration.
In the 19th century, Maxwell first introduced the concept of demons to illustrate a paradox of thermodynamics. In 1998, Thirion [16] proposed a registration algorithm based on the demons model, which achieved high registration precision and efficiency through pixel velocities driven by edge-based forces.
Vercauteren et al. [17] proposed the nonparametric diffeomorphic demons algorithm. It considers the demons algorithm as an optimization procedure over the whole space of velocity fields and adapts that procedure to a space of diffeomorphic transformations; the resulting transformation is smoother and more accurate. Vercauteren et al. [18] then brought the process into the log-domain, that is, using a stationary velocity field; in addition, the algorithm is symmetric with respect to the order of the input images. Lorenzi et al. [19] implemented a symmetric local correlation coefficient in the log-demons diffeomorphic algorithm. Lombaert et al. [1] proposed spectral log-demons to capture large deformations. Peyrat et al. [20] implemented multichannel demons to register 4D time-series cardiac images.
(ii) Diffeomorphic Log Demons Algorithm. Here the diffeomorphic log demons algorithm is briefly recalled. A diffeomorphic transformation $s$ is related to the exponential map of the velocity field $v$: $s = \exp(v)$ (Algorithm 1) [1]. The log-demons framework alternates between the optimization of a similarity metric and an update given by the Euler-Lagrange equation in (10). In general, the procedure of the diffeomorphic log demons framework is described in Algorithm 2.

Algorithm 1: Exponential $s = \exp(v)$ [1].
Input: velocity field $v$. Output: diffeomorphic map $s = \exp(v)$.
(i) Choose $N$ such that $2^{-N}v$ is close to 0, e.g., such that $\max\lVert 2^{-N}v\rVert \le 0.5$ pixels.
(ii) Scale the velocity field: $s \leftarrow 2^{-N}v$.
for $N$ times do
(iii) Square: $s \leftarrow s \circ s$.
end for
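A small NumPy sketch of this scaling-and-squaring exponential is given below, representing transformations as displacement fields; linear interpolation and the composition convention are implementation assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(u, w):
    """Displacement of the composition: (id + u) o (id + w) = id + w + u(id + w)."""
    grid = np.mgrid[0:u.shape[1], 0:u.shape[2]].astype(float)
    warped_u = np.stack([map_coordinates(u[d], grid + w, order=1, mode='nearest')
                         for d in range(2)])
    return w + warped_u

def exp_velocity(v, max_step=0.5):
    """Scaling-and-squaring exponential of a stationary velocity field v of shape (2, H, W)."""
    vmax = np.max(np.linalg.norm(v, axis=0))
    n = max(0, int(np.ceil(np.log2(vmax / max_step)))) if vmax > 0 else 0
    s = v / (2.0 ** n)          # (ii) scale so each step moves at most ~max_step pixels
    for _ in range(n):          # (iii) square N times
        s = compose(s, s)
    return s                    # displacement field of exp(v)
```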
New Similarity Metric by Combination of PCA.
Mathematically, PCA is defined as an orthogonal linear transformation that maps the data to a new coordinate system so as to extract the greatest variance in the data set. As a result, it is able to avoid influences caused by image biases. Traditionally, PCA is used for dimensionality reduction to facilitate classification, visualization, communication, and storage of high-dimensional data. Here, PCA is applied to both 2D and 3D medical and ordinary images, and the detected feature representations are used as inputs of the similarity metric to achieve anatomical correspondence and assist the optimization procedure in registration. There are many classical metric measures, such as SSD, mutual information (MI), cross correlation (CC), pattern intensity, and their corresponding improved editions. In this paper, Pearson, Spearman and Kendall correlations and SSD, together with the features extracted by PCA, are utilized as the new similarity metric. Pearson, Spearman and Kendall are concepts from statistics and are frequently used in data mining. Pearson is short for the Pearson product-moment correlation coefficient (PPMCC), which was developed to measure the linear correlation between two variables. Spearman's rank correlation coefficient is a nonparametric measure of statistical dependence between two variables. Both of their values lie between +1 and -1. Spearman places no requirement on the variables, while Pearson requires the variables to follow a normal distribution; our use of log-demons registration avoids the influence brought by this.
(i) For 2D images of size $m \times n$, PCA is first applied to both the fixed image $F$ and the registered moving image $M$, giving $F_{pca}$ and $M_{pca}$. The most important information of the images can thus be fully utilized by feeding the combination of $F_{pca}$ and $M_{pca}$ into Pearson, Spearman, and so forth, forming the new similarity metric (a minimal code sketch follows this list).
(ii) For 3D images of size $m \times n \times p$, PCA is first applied to every slice along the $x$ axis, giving a series of slice-wise principal components; summarizing these yields the $x$-axis representation. The same operation is carried out on the $y$ and $z$ axis data, obtaining the corresponding representations. Then the PCA of both the fixed image $F(x, y, z)$ and the registered moving image $M(x, y, z)$ is calculated, and the information of the images is fully utilized by using these representations of $F$ and $M$ as inputs of PPMCC, Spearman, and so forth. The workflow of this part is shown in Figure 4.
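The 2D variant of this metric can be sketched as below: each image is projected onto its leading principal components and the Pearson correlation of the resulting features is used as the similarity score. The number of components and the per-image PCA are illustrative assumptions, since the exact feature construction is not fully specified in this excerpt.

```python
import numpy as np

def pca_features(img, k=16):
    """Project the (centred) image rows onto the top-k principal components."""
    x = img.astype(float) - img.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T                     # (rows, k) feature matrix

def pca_pearson(fixed, warped_moving, k=16):
    """Pearson correlation between the PCA features of the two images."""
    f = pca_features(fixed, k).ravel()
    m = pca_features(warped_moving, k).ravel()
    return float(np.corrcoef(f, m)[0, 1])
```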
Two-Layer Iterative Registration Framework.
Traditionally, two-step registration means an initial affine registration at the very beginning to coarsely rectify the deformation, followed by an iterative registration that optimizes a similarity metric to achieve fine registration. We also adopt the two-step strategy, but before it we build a classifier offline with CNN training to identify the rotation between the fixed and moving image under very large distortions, followed by scale and translation. Also, within each iteration the initial and the following registration are carried out in turn; this feedback procedure helps achieve higher registration accuracy compared with the traditional SURF and affine methods. The two-step procedure can be summarized as: Input: images F and M and an initial velocity field v; Output: a transformation s = exp(v) from F to M; Pre-registration: a SURF-related affine transformation or ELASTIX to globally register F and M; Repeat: demons registration. Besides, at the end of each iteration we use the new similarity metric combining PCA with the traditional SSD, Pearson, and so forth, which captures the most important features of the images. As a result, the convergence speed is greatly accelerated compared with the traditional SSD without PCA, while the same registration accuracy is maintained. Algorithm 3 shows the overall flow of the framework.
Experiment Results
In this section, the performance of the whole two-layer registration method is evaluated on both 2D and 3D images, on synthetic and natural datasets. For comparison, the traditional two-step methods, ELASTIX and a SURF-related algorithm, are used to preregister the moving and fixed images, followed by demons nonrigid registration. These methods are set as the baselines and are denoted ELASTIX+demons and SURF+demons; they first use detected features to register the images through an affine transformation and then use the original SSD as the similarity metric under the diffeomorphic log demons framework. Our method differs from this framework both in the preregistration and in the following nonrigid registration. For 2D images, we first train a rotation classifier through CNNs and preregister the moving image under large distortion and rotation; together with the scale and translation transformations, preregistration is then done. For 3D images, pre-trained triplanar 2D CNNs are utilized to locate voxels and establish feature correspondences. Finally, the PCA-related similarity metric is used to iteratively register the images under the diffeomorphic log demons framework.
The improvement of our two-layer method in registration accuracy, robustness to large deformation and rotation, and convergence speed is assessed against ground truth data. We also tested CNNs with other numbers of layers; the results showed that the ten-layer CNN achieved the highest score when classifying the rotation of deformed images. Four kinds of 2D source images [24][25][26][27] served as samples; an example of a sample image is shown in Figure 5. Take the F1 image for example: linear transformations such as rotation or translation are added to F1 by multiplying by a rotation matrix coded in MATLAB, and then four kinds of large and complex nonlinear transformations are added to F1 through special processing in Photoshop. The F1 image with only rotation is denoted F1r°, with only deformation F1d, and with both rotation and deformation F1dr°. The same notation is used for the T1 and Lena images. Figure 6 is an illustration after all this processing. For accurate identification of rotation, the difference image D1 = ‖F1 − F1r°‖ between the fixed and the rotated image is the input of the CNNs during training. After training, each angle out of 360° is defined as one class, giving 360 classes of distortion; for the other CNNs, the numbers of classes are 180, 90 and 36.
Our tests are carried out on a computer running Windows 7, with 8 GB RAM and an i7-4770 CPU @ 3.4 GHz. Taking the BrainWeb data [23,24] as an example, Table 3 shows the test results of the classifier on these data.
As can be seen from Table 3, when the input images are resized to 64×64 pixels, the identification of rotation can reach as much as 99.86% for classifier 90, while when the images are resized to 28×28 pixels the identification accuracy for classifier 36 is 99.97%. These figures are obtained when the training data are also used for testing. When the held-out BrainWeb testing data are put into the trained classifier, the accuracy reaches 99.56%, lower than on the training data itself but still very high compared with many usual classifiers. For the Lena, ITK and T1 training data, classifier 36 attains an accuracy of 99.94%. The number of iterations is set to 1000 for every training run.
(2) CNNs Preregistration Test. The SURF-based method, ELASTIX, and our CNN method are tested. Here, the SURF-based method means first using the SURF algorithm to detect features and then an affine transformation to initially register the images.
Figure 5: An example of an original sample image.
(ii) However, when rotation and large deformation appear simultaneously in the moving image, as for the rotated and deformed Lena image in Figure 8, both the ELASTIX and SURF methods fail. Under such circumstances, in our tests SURF found only one pair of corresponding points; as there are not enough feature correspondences, the initial registration failed.
(iii) By contrast, our trained CNN classifier, followed by the scale and translation operations, directly identified the rotation angle of the Lena image (a 90° rotation) and turned it back to the upright Lena image, as shown in Figure 9. For a fairer comparison, we also visualized the CNN's rotation handling in the same manner as SURF, i.e., as feature detection and matching on the Lena image; because a sufficient number of such "features" are detected, the CNN is able to recover the rotation applied to the Lena image.
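For concreteness, a minimal sketch of the SURF-plus-affine baseline described above is given below. It uses OpenCV (SURF requires the opencv-contrib-python package) and standard OpenCV calls; it is an illustrative stand-in for the baseline, not the authors' implementation, and it returns no result when too few correspondences are found, mirroring the failure mode seen in case (ii).

```python
import cv2
import numpy as np

def surf_affine_preregister(fixed, moving, hessian_threshold=400):
    """Baseline-style preregistration: SURF features + affine estimate.

    fixed, moving: 8-bit grayscale images. Returns (warped moving image,
    2x3 affine matrix), or (None, None) if too few matches are found.
    """
    surf = cv2.xfeatures2d.SURF_create(hessian_threshold)
    kp_f, des_f = surf.detectAndCompute(fixed, None)
    kp_m, des_m = surf.detectAndCompute(moving, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_m, des_f)
    if len(matches) < 3:
        return None, None  # not enough correspondences: preregistration fails
    src = np.float32([kp_m[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches])
    affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    if affine is None:
        return None, None
    warped = cv2.warpAffine(moving, affine, fixed.shape[::-1])
    return warped, affine
```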
(3) Accuracy Evaluation of Registration. The Dice ratio, defined in (11), is used to quantify the overlap between two datasets. In this section, both the Dice ratio and subjective visual assessment are used to evaluate the accuracy of the ELASTIX- and SURF-based registrations and of our method. After the preregistration of Section 5.1.1, the ELASTIX- and SURF-based methods run the diffeomorphic log-demons algorithm iteratively to achieve the best registration, whereas our method alternates the CNN classifier and the diffeomorphic log-demons algorithm to optimize the registration. This new two-layer registration framework makes full use of both the preregistration and the subsequent demons method, and the results show that it indeed improves accuracy. Figure 10 shows the registration procedure and results of the ELASTIX-based (Figure 10(c)) and SURF-based (Figure 10(b)) methods, while Figure 11 shows those of our method. When both rotation and deformation are present in image 1, our registration result is visibly much better than those of ELASTIX+demons and SURF+demons. To quantify this, the original fixed image and the registered moving images of the different methods are fed into (11) separately: the Dice ratios of the ELASTIX+demons and SURF+demons methods are 0.889 and 0.88, while our iterative CNN+demons method achieves 0.8964. A minimal sketch of the Dice computation is given below.
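The sketch below computes the standard Dice overlap between two binary masks; it is assumed here that the definition referenced as (11) is the usual one (2|A∩B| / (|A|+|B|)).

```python
import numpy as np

def dice_ratio(a, b):
    """Dice overlap between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define overlap as perfect
    return 2.0 * np.logical_and(a, b).sum() / denom
```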
Lung Atlases.
A description of the lung dataset can be found in Table 2 [28]. The EMPIRE10 lung datasets were first used for the MICCAI 2010 conference and contain 20 intra-patient thoracic CT image pairs. Figures 12 and 13 show our registration results, and the per-slice runtimes are shown in Figure 14: the shortest time for one slice is more than 1000 ms (1 s), and the time for slice 8 is 3500 ms. Although training our CNN classifier takes a long time, it is performed offline, and our CNN rotation, scale, and translation operations cost a total of only 39 ms. As a result, the method is quite attractive for real-time clinical applications.
Brain Atlases.
We select cross-sectional 2D images from the 20 subjects of the BrainWeb MRI database, 10 for training and the other 10 for testing. From Figure 15, we can see that our proposed preregistration rectifies both rotation and translation more successfully than the traditional ELASTIX affine registration.
An Attempt on 3D Image Registration by Using CNNs.
For the 3D image registration part we focus on brain atlas registration and give a CNN-based 3D registration method. We train on the 3D brain images of 18 subjects from the BrainWeb database in four steps (a minimal sketch of the triplanar slicing in steps (2)-(3) is given after this list): (1) Randomly select 10 label points in the 3D image according to a normal distribution. (2) Adjust the 3D brain image and separate it into 2D slices along the three directions (x, y, z). (3) Classify each 2D slice with the trained triplanar CNN (one network per direction) to obtain the predicted slice positions (predicted voxels).
(4) Adjust the 3D image so that the label voxels and the predicted voxels have the smallest Hamming distance. Experiments show that the highly accurate CNN classification results greatly improve the similarity of the moving 3D image to the fixed 3D image. The detailed procedure is shown in Figure 16.
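A minimal sketch of the triplanar slicing used in steps (2)-(3) is shown below. The classifier callables are hypothetical placeholders standing in for the three trained 2D CNNs (one per direction); this is an illustration of the data flow, not the authors' code.

```python
import numpy as np

def triplanar_slices(volume, index):
    """Extract the three orthogonal slices through a given voxel index.

    volume: 3D array; index: (i, j, k). Returns the slices that a triplanar
    CNN (one network per direction) would classify to predict slice positions.
    """
    i, j, k = index
    return volume[i, :, :], volume[:, j, :], volume[:, :, k]

def predict_voxel(volume, index, cnn_x, cnn_y, cnn_z):
    """Hypothetical wrapper: each cnn_* maps a 2D slice to a predicted slice index."""
    sx, sy, sz = triplanar_slices(volume, index)
    return cnn_x(sx), cnn_y(sy), cnn_z(sz)
```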
Convergence Speed
A coarse-to-fine registration strategy (three levels are recommended here) is adopted, and the convergence of the different similarity metrics is compared in Figure 17, where the horizontal axis is the number of iterations and the vertical axis is the metric value. First, the mean convergence of the three-level registration is calculated and then normalized. Several conclusions can be drawn: (1) both the PCA-based and the original SSD metrics converge regularly; (2) overall, the PCA-SSD and PCA-Pearson metrics perform best and converge faster than the original SSD metric; (3) the PCA-Spearman metric converges fastest at first but slows down later; (4) the Kendall metric performs worst compared with the other metrics.
Conclusion
In this paper, a comprehensive method for constructing a rotation classifier for images under severe deformation and rotation was proposed using CNNs. The classifier can distinguish up to 360 rotation classes, one per degree of rotation angle. It is used within our proposed two-layer deep adaptive registration framework: in each registration iteration, the preregistration (rotation identification by the trained classifier followed by scale and translation operations) and the subsequent diffeomorphic log-demons registration reinforce each other. In addition, the proposed PCA-based similarity metric helps achieve faster convergence.
The new two-layer registration framework is compared with traditional diffeomorphic log-demons registration combined with state-of-the-art ELASTIX and SURF preregistration. Because the baseline methods carry out preregistration only once, large deformations cannot be fully corrected. Tests on a variety of 2D and 3D MRI and CT datasets show that our framework outperforms the baselines in both registration quality and convergence speed. In future work, we plan to combine other deep learning frameworks, such as independent subspace analysis (ISA) [30] and sparse coding [31], to improve the current registration; to carry out more performance tests of the proposed two-layer framework on additional data sources; and to compare the performance of the proposed method with that of other deep learning models.
"Computer Science"
] |
Direct CP violation and the $\Delta I=1/2$ rule in $K\to\pi\pi$ decay from the Standard Model
We present a lattice QCD calculation of the $\Delta I=1/2$, $K\to\pi\pi$ decay amplitude $A_0$ and $\varepsilon'$, the measure of direct CP-violation in $K\to\pi\pi$ decay, improving our 2015 calculation of these quantities. Both calculations were performed with physical kinematics on a $32^3\times 64$ lattice with an inverse lattice spacing of $a^{-1}=1.3784(68)$ GeV. However, the current calculation includes nearly four times the statistics and numerous technical improvements allowing us to more reliably isolate the $\pi\pi$ ground-state and more accurately relate the lattice operators to those defined in the Standard Model. We find ${\rm Re}(A_0)=2.99(0.32)(0.59)\times 10^{-7}$ GeV and ${\rm Im}(A_0)=-6.98(0.62)(1.44)\times 10^{-11}$ GeV, where the errors are statistical and systematic, respectively. The former agrees well with the experimental result ${\rm Re}(A_0)=3.3201(18)\times 10^{-7}$ GeV. These results for $A_0$ can be combined with our earlier lattice calculation of $A_2$ to obtain ${\rm Re}(\varepsilon'/\varepsilon)=21.7(2.6)(6.2)(5.0) \times 10^{-4}$, where the third error represents omitted isospin breaking effects, and Re$(A_0)$/Re$(A_2) = 19.9(2.3)(4.4)$. The first agrees well with the experimental result of ${\rm Re}(\varepsilon'/\varepsilon)=16.6(2.3)\times 10^{-4}$. A comparison of the second with the observed ratio Re$(A_0)/$Re$(A_2) = 22.45(6)$ demonstrates the Standard Model origin of this "$\Delta I = 1/2$ rule" enhancement.
I. INTRODUCTION
A key ingredient to explaining the dominance of matter over antimatter in the observable universe is the breaking of the combination of charge-conjugation and parity (CP) symmetries. The amount of CP violation (CPV) in the Standard Model is widely believed to be too small to explain the dominance of matter over antimatter, suggesting the existence of new physics not present in the Standard Model. CPV in the Standard Model is highly constrained, requiring the presence of all three quark-flavor doublets and described by a single phase [3]. These properties imply that the "direct" CPV in K → ππ decays is a highly suppressed O(10 −6 ) effect in the Standard Model, making it a quantity which is especially sensitive to the effects of new physics in general, and new sources of CPV in particular.
Direct CPV was first observed in K → ππ decays by the NA48 (CERN) and KTeV (FermiLab) experiments [4,5] in the late 1990s, and the most recent world average of its measure is Re(ε′/ε) = 16.6(2.3) × 10⁻⁴ [6], where ε is the measure of indirect CPV (|ε| = 2.228(11) × 10⁻³). However, despite the impressive success of these experiments, it was only recently that a reliable, first-principles Standard Model determination of ε′ that could be compared to the experimental value became available. This is due to the presence of low-energy QCD effects that are difficult to model reliably.
Lattice QCD is the only known technique for determining the properties of low-energy QCD from first principles with systematically improvable errors. In this regime the high-energy physics is precisely captured by the ∆S = 1 weak effective Hamiltonian,
$$H_W = \frac{G_F}{\sqrt{2}}\, V^*_{us}V_{ud} \sum_{i=1}^{10}\left[z_i(\mu) + \tau\, y_i(\mu)\right] Q_i ,\qquad(1)$$
where the Fermi constant G_F = 1.166 × 10⁻⁵ GeV⁻², V_{qq′} is the Cabibbo-Kobayashi-Maskawa matrix element connecting the quarks q and q′, and τ = −V*_{ts}V_{td}/V*_{us}V_{ud}. The quantities z_i and y_i are the Wilson coefficients that encapsulate the high-energy behavior, and which have been computed to next-to-leading-order (NLO) in QCD perturbation theory and to leading order (with some important NLO terms) in electroweak perturbation theory [7], in the $\overline{\rm MS}$ scheme as a function of the scale µ. The task of the lattice calculation is to determine the matrix elements ⟨ππ|Q_i(µ)|K⁰⟩ of the weak effective operators Q_i renormalized in a scheme which can be defined non-perturbatively.
A further perturbative calculation is subsequently necessary to match such matrix elements to those in the $\overline{\rm MS}$ scheme. Conventionally, as shown in Eq. (1), the weak Hamiltonian is expressed in terms of 10 operators {Q_i}_{1≤i≤10} (as defined, for example, in Eqs. (4.1)-(4.5) of Ref. [7]) that are linearly dependent due to the Fierz symmetry. More convenient for our purposes is a second, 7-operator "chiral" basis [8] {Q_j}_{j=1,2,3,5,6,7,8} in which the operators are linearly independent and transform as irreducible representations of SU(3)_L × SU(3)_R. The relationship between these bases is discussed in more detail in Sec. VI B.
For an isospin-symmetric lattice calculation it is convenient to formulate the K → ππ matrix elements in terms of two amplitudes, A_0 and A_2, where A_I = ⟨(ππ)_I|H_W|K⁰⟩ and the subscript indicates the isospin representation of the two-pion state. These correspond to ∆I = 1/2 and ∆I = 3/2 decays, respectively. From these amplitudes, ε′ can be obtained directly as
$$\varepsilon' = \frac{i\,\omega\, e^{i(\delta_2-\delta_0)}}{\sqrt{2}}\left[\frac{{\rm Im}\,A_2}{{\rm Re}\,A_2} - \frac{{\rm Im}\,A_0}{{\rm Re}\,A_0}\right],\qquad(2)$$
where δ_I are the ππ scattering phase shifts and ω = Re(A_2)/Re(A_0). Note that the effects of isospin breaking and electromagnetism are not included in our simulation and are instead treated as systematic errors as discussed in Sec. VIII D.
In order to obtain on-shell kinematics, i.e. to ensure that E_ππ, the energy of the two-pion final state, satisfies E_ππ = m_K, we exploit the possibility of choosing appropriate spatial boundary conditions. With periodic boundary conditions for all the quarks, the ground state of the two-pion final state corresponds to E_ππ = 2m_π, with each of the pions at rest, and the state with E_ππ = m_K appears as an excited state. We would therefore need to resort to multi-state fits to rather noisy data in order to obtain the physical amplitudes. The change in the finite-volume corrections induced by modifying the boundary conditions is exponentially small [9,10] or else accounted for by the Lüscher and Lellouch-Lüscher [11,12] prescriptions with minor alterations [13].
In our calculation of A 2 [2,14,15] we employ antiperiodic spatial boundary conditions (APBC) for the down quark in some or all directions, which results in the charged pions also obeying cor-responding antiperiodic boundary conditions. The momenta of the charged pions are therefore discretized in odd-integer multiples of π/L in these directions, where the spatial volume of the lattice is V = L 3 . Since only the spectrum of the charged pions is changed by the APBC, we compute K + → π + π + matrix elements of operators which change I z , the third component of isospin, by 3/2 and then use the Wigner-Eckart theorem to obtain the physical K + → π + π 0 amplitude which is proportional to A 2 . Note that in order to ensure that E ππ = m K , L must be appropriately tuned.
The technique of using APBC on the down quark naturally breaks the isospin symmetry. For the ∆I = 3/2 calculation this symmetry breaking does not pose an issue because the final state of the measured K⁺ → π⁺π⁺ matrix element is the only doubly-charged two-pion state and therefore cannot mix with other ππ states due to charge conservation. However, the final state in the ∆I = 1/2 matrix elements has isospin 0 and is a linear combination of |π⁺π⁻⟩ and |π⁰π⁰⟩ states. Thus the breaking of isospin symmetry at the boundaries results in different energies for the |π⁺π⁻⟩ and |π⁰π⁰⟩ states and the APBC technique cannot be used. As a result, for the calculation of the ∆I = 1/2 amplitude we instead utilize G-parity boundary conditions (GPBC). G-parity is the combination of charge conjugation and a 180° isospin rotation about the y-axis, $\hat G = \hat C\, e^{i\pi \hat I_y}$.
The charged and neutral pions are both odd eigenstates of this operation, hence when applied as a boundary condition all pion states again become antiperiodic in the spatial boundary. While more general than the APBC approach, the use of GPBC introduces a number of technical and computational difficulties that we discuss in Ref. [10] and below.
Note that due to the ππ interaction being repulsive in the I = 2 channel but attractive in the I = 0 channel, the finite-volume ππ energies in these two representations differ at fixed lattice size and it is therefore not possible to use the same ensemble to measure both the ∆I = 3/2 and ∆I = 1/2 decay amplitudes with on-shell kinematics. In this document we present a detailed update of the calculation of A 0 and will combine it with the results for A 2 given in Ref. [2].
Among the necessary ingredients in the lattice calculation of the K → ππ matrix elements are the two-pion energies E_ππ and the amplitudes ⟨ππ|O_ππ|0⟩, where O_ππ is an interpolating operator that can create the required two-pion state from the vacuum. These quantities are determined by correlation functions describing the propagation of the two-pion state. The matrix elements ⟨ππ|Q_i|K⁰⟩ are obtained from K → ππ correlation functions in which the Euclidean time dependence is exponential in E_ππ, and the amplitudes corresponding to the annihilation of the two-pion state (and the creation of the kaon state) have to be removed. From the measurement of E_ππ and using the Lüscher formula [11], we also determine the s-wave isospin 0 ππ phase shift, δ_0(m_K), which enters the expression relating the matrix elements to ε′/ε, Eq. (2). The derivative of the phase-shift with respect to the energy is additionally required to determine the power-like (i.e. nonexponential) finite-volume corrections through the Lellouch-Lüscher formula [12] (cf. Sec. VI A).
The observation of a discrepancy from the predicted phase shift increased our motivation to extend the earlier calculation by increasing the statistics and using more sophisticated methods to better analyze the I = 0 two-pion system. In Ref. [1] we observed excellent stability in the determination of the ground-state two-pion energy E ππ ; the result was consistent between oneand two-state fits to our data (i.e. whether we assumed that just the ground-state was propagating or allowed for a contribution from an excited state) and was also independent, within the uncertainties, of the time separation between the insertion of the creation and annihilation operators (the O ππ introduced above). Nevertheless, we considered the best explanation for the discrepancy to be contamination from one or more excited states whose contribution with increasing time is masked by the rather rapid reduction in the signal-to-noise of our data. Therefore, in addition to increasing our statistics by more than a factor of 3, we have introduced two additional ππ interpolating operators. For our original calculation we used a ππ operator comprising two quark bilinear operators that create back-to-back moving pions of a particular momentum. Alongside this operator, which we label ππ(111), we have now added a scalar operator σ = 1 √ 2 (ūu +dd), and an operator creating pions with larger relative momenta that we label ππ(311). Here the number appearing in the parentheses of the ππ(. . .) operators is related to the components of the pion momentum in lattice units: (xyz) → (±x, ±y, ±z)π/L (the total ππ momentum is zero in all cases). Here and for the remainder of this document we will assume the lattice size L to be in lattice units unless otherwise stated. All three operators, once suitably projected onto a state that is symmetric under cubic rotations, have the same quantum numbers as the s-wave I = 0 two-pion state of interest and as such project onto the same set of QCD eigenstates, albeit with different coefficients.
In Ref. [18] we demonstrate that a simultaneous fit to the 3 × 3 matrix of ππ two-point correlation functions in which the two-pion states are created or annihilated by one of these three operators, results in a substantial reduction in the statistical and systematic errors. We find that, once the excited states are taken into account, the resulting I = 0 ππ-scattering phase shift at E lat ππ = 479.5 MeV is δ 0 (E lat ππ ) = 32.3(1.0)(1.8) • , where the errors are statistical and systematic, respectively. This significant increase in our result for δ 0 (E lat ππ ) brings us into much closer agreement with the dispersive prediction, which at our present value of E lat ππ is δ 0 (E lat ππ ) disp = 35.9 • , obtained using Eqs. 17.1-17.3 of Ref. [16] with m π = 139.6 MeV. (We refer the reader to Ref. [16] for estimates of the error on the dispersive prediction.) In this paper we present results for the ∆I = 1/2 K → ππ matrix elements obtained from our expanded data set of 741 measurements, using all three ππ interpolating operators.
In this analysis we also include an improved non-perturbative determination of the renormalization factors relating the bare matrix elements in the lattice discretization to those of operators renormalized in the RI-SMOM scheme (see Sec. V). Perturbation theory is then required to match the operators renormalized in the RI-SMOM scheme to those in the $\overline{\rm MS}$ scheme in which the Wilson coefficients have been computed. This calculation utilizes step-scaling to raise the matching scale from 1.53 GeV to 4.01 GeV, significantly reducing the systematic error associated with the perturbative matching.
Throughout this document results are presented in lattice units unless otherwise stated.
While the current paper is intended to be self-contained it should be viewed as the third in a series of three closely related papers. The first of these is Ref [10] which gives a detailed discussion of the implementation and properties of lattice calculations which impose G-parity boundary conditions. The second paper is Ref. [18] in which the same ensemble of gauge configurations and many of Green's functions used in the current paper are analyzed to study ππ scattering. This second paper contains the two-pion, finite-volume energy eigenvalues from which the I = 0 and I = 2 ππ scattering phase shifts are derived as well as the matrix elements of the ππ interpolating operators between the corresponding energy eigenstates and the vacuum which are used in the current paper.
For the convenience of the reader we summarize the primary results of this work in Tab. I. For further discussion we refer the reader to Sec. VIII. It is important to stress that the results and uncertainties in Tab. I have been obtained by combining a number of elements. The major direct contribution from this work is the evaluation of the matrix elements ⟨(ππ)_{I=0}|Q_i|K⁰⟩ in isosymmetric QCD, with the operators renormalized in the RI-SMOM($\slashed q,\slashed q$) scheme; the values collected in Tab. XXVII can be used to improve the precision in the determination of A_0.
The layout of the remainder of this paper is as follows: In Sec. II we introduce our lattice ensemble and give a general overview of our measurement techniques. In Sec. III we discuss and present results from fits to the single-pion, two-pion and kaon two-point correlation functions, the values of which are required as inputs to the fits of the K → ππ three-point correlation functions from which the matrix elements of the bare lattice operators are determined. In Sec. IV we discuss the measurement of these three-point functions and provide the results from the fits. In Sec. V we discuss our procedure for the non-perturbative renormalization of the operators Q i , the results of which are combined with the matrix elements of the bare lattice operators and other inputs to determine A 0 and ε /ε in Sec. VI. We follow this by a detailed discussion of the systematic errors in Sec. VII and present our final results for the matrix elements, decay amplitudes, and ε /ε, together with a discussion of the ∆I = 1/2 rule, in Sec. VIII. Finally we present our conclusions in Sec. IX. There are two technical appendices in which we present the Wick contractions of some of the correlation functions used in this project.
II. OVERVIEW OF MEASUREMENTS
In this section we provide an overview of the calculation, including information on the ensemble and the measurement techniques. The dislocation suppressing determinant ratio (DSDR) term [19] is a modification of the gauge action that suppresses the dislocations, or tears in the gauge field that enhance chiral symmetry breaking at coarse lattice spacings. This enables the calculation to be performed with larger lattice spacings, and hence larger physical volumes, at fixed computational cost, ensuring good control over finite-volume systematic errors. We use G-parity boundary conditions (GPBC) in three spatial directions in order to obtain nearly physical kinematics for our K → ππ decays.
The lattice parameters are equal to those of the 32ID ensemble documented in Refs. [20,21] except for the boundary conditions and that we now simulate with a lighter, physical pion mass of 142 MeV versus the 170 MeV pion mass of the 32ID ensemble. This allows the use of existing measurements such as the lattice spacing, and also enables the computation of the non-perturbative renormalization factors in a regime free of the complexities associated with GPBC.
The ensemble used for our 2015 calculation comprised 864 molecular dynamics (MD) time units (after thermalization), upon which 216 measurements were performed separated by 4 MD time units. Subsequent to the calculation, it was discovered [22] that an error existed in the generation of the random numbers used to set the conjugate momentum at the start of each trajectory, which gave rise to small correlations between widely separated lattice sites. While the resulting effects were determined to be two-to-three orders of magnitude smaller than our statistical errors, we nevertheless do not include these configurations in the present calculation.
In the period following our previous publication, we have dramatically increased the number of measurements. Configurations were generated on seven independent Markov chains originating from widely separated configurations of our original ensemble. Subsequent algorithmic improvements, particularly the introduction of the exact one-flavor algorithm (EOFA) [23][24][25], substantially reduced the cost of generating these configurations. The bootstrap measurement of the goodness-of-fit is a technique developed specifically for this and our companion work [18], and is detailed in Ref. [27]. To summarize, the goodness-of-fit is typically parameterized by a p-value that represents the likelihood that the data agree with the model, allowing only for statistical fluctuations. The p-value is computed by first measuring
$$q^2 = \sum_{a,b}\left[\bar x_a - f_a(p)\right]\,({\rm cov}^{-1})_{ab}\,\left[\bar x_b - f_b(p)\right],$$
where the $\bar x_i$ are the ensemble means of the data at coordinate i, p the fitting parameters, f the model function, and (cov)_{ab} is the covariance matrix. The value obtained for q² is then compared to the null distribution that describes how this quantity varies between independent experiments if only statistical fluctuations are allowed around the model. The null distribution is typically assumed to be the χ² distribution, but this is inappropriate when the fluctuations in the covariance matrix between experiments become significant, as is the case for our ππ measurements [27]. In that work we demonstrate that the null distribution can be estimated directly from the data through a simple bootstrap procedure, allowing for a more reliable p-value that is free from assumptions. This procedure also has the benefit of allowing us to neglect the autocorrelations in the determination of the covariance matrix on each bootstrap ensemble, which dramatically improves the statistical error but changes the definition of q² in a subtle way that cannot be accounted for by traditional methods.
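The following sketch illustrates the general idea of estimating the null distribution of q² by bootstrap resampling, as described above. It is a minimal sketch under simplifying assumptions (the data are recentered on the best-fit model, and no account is taken of autocorrelations or of the subtleties discussed in Ref. [27]); it is not the authors' exact recipe.

```python
import numpy as np

def q2_statistic(means, model, cov_inv):
    """q^2 = (xbar - f)^T cov^{-1} (xbar - f)."""
    r = means - model
    return float(r @ cov_inv @ r)

def bootstrap_p_value(data, model, n_boot=1000, rng=None):
    """Estimate the p-value of q^2 from a bootstrap null distribution.

    data: (n_samples, n_points) per-configuration measurements.
    model: (n_points,) model prediction at the best-fit parameters.
    The data are recentered so the model is true by construction, then
    resampled; the p-value is the fraction of resampled q^2 values that
    exceed the observed one.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, _ = data.shape
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False) / n          # covariance of the mean
    q2_obs = q2_statistic(mean, model, np.linalg.inv(cov))
    recentered = data - mean + model              # enforce the null hypothesis
    q2_null = np.empty(n_boot)
    for b in range(n_boot):
        sample = recentered[rng.integers(0, n, n)]
        m = sample.mean(axis=0)
        c = np.cov(sample, rowvar=False) / n
        q2_null[b] = q2_statistic(m, model, np.linalg.inv(c))
    return float(np.mean(q2_null >= q2_obs))
```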
C. Measurement technique
Measurements are performed using the all-to-all (A2A) propagator technique of Ref. [28], whereby the quark propagator is decomposed into an exact low-mode contribution obtained from a set of, in our case 900, predetermined eigenvectors, and a stochastic approximation to the high-mode contribution. This allows for the maximal translation of correlation functions in order to take full advantage of each configuration, as well as easy implementation of arbitrarily smeared source and sink operators. We perform full spin, color, flavor and time dilution such that the stochastic source is required only to produce a delta-function in the spatial location.
For all quantities we use smeared meson sources with an exponential (1s hydrogen wavefunction-like) structure, proportional to $e^{-|\vec x-\vec y|/\alpha}$, where α = 2 is the smearing radius and x and y are the spatial coordinates of the two quark operators. Several technicalities must be considered when using G-parity boundary conditions, including limitations on the allowed quark momenta which have implications for the cubic rotational symmetry, the preservation of which is essential for producing an operator that projects onto the rotationally-symmetric (s-wave) ππ state. These are detailed in Ref. [18].
More specific details of the various measurements are provided in the following sections.
III. RESULTS FROM TWO-POINT CORRELATION FUNCTIONS
In order to compute the K → ππ matrix elements it is necessary to measure the energies and amplitudes of the pion, kaon and ππ two-point Green's functions. In this section we present results for the kaon two-point function and summarize the results of Ref. [18] for the pion and ππ two-point functions. We also detail the determination of the energy dependence of the phase shift at the kaon mass scale, which is used to obtain the Lellouch-Lüscher [12] finite-volume correction. Under G-parity boundary conditions a quark field at the spatial boundary transforms into the charge conjugate of the opposite quark flavor. In Ref. [10] we introduced a notation whereby the quark field and its G-parity partner are placed in a two-component vector (with C the charge conjugation matrix). We will refer to the index of these vectors as a "flavor index". In this notation the propagator becomes a 2×2 "flavor matrix", and Pauli matrices inserted appropriately describe the flavor structure. In this notation the Wick contractions assume an almost identical form to those of the periodic case.
The strange quark is introduced into the G-parity framework as a member of an isospin doublet that includes a fictional degenerate partner, s′, into which the strange quark transforms at the boundary; the corresponding two-component field operator is defined analogously. With the introduction of this extra quark flavor a square root of the s/s′ determinant is required in order to generate a 2+1 flavor ensemble [10].
B. Kaon two-point function
Following Ref. [10], a stationary (G-parity even) kaon-like state can be constructed as a linear combination of the physical kaon K⁰ and a degenerate partner built from the u and fictional s′ quarks. This state can be created using an interpolating operator in which p = (1, 1, 1)π/(2L) is the quark momentum and Θ is defined in Eq. (4). Note that here and for the other operators presented in this document, the projection operators ½(1 ± σ₂) appear; these are necessary to define quark field operators that are eigenstates of translation and hence have definite momentum [10].
The two-point function is measured for all t₁ and t₂, and subsequently averaged over t₂ at fixed t = t₁ − t₂. The data are folded in t, i.e. data with t = t₁ − t₂ are averaged with those with t = L_T − (t₁ − t₂), where L_T is the lattice temporal extent, to improve statistics. We perform correlated fits to the following function,
$$C_K(t) = A_K\left(e^{-m_K t} + e^{-m_K (L_T - t)}\right),$$
where the second term accounts for the state propagating backwards in time through the lattice temporal boundary. The chosen fit range, p-value and the results of the fit are given in Tab. II. In physical units our kaon mass is 490.5(2.4) MeV, which is within 2% of the physical neutral kaon mass.
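A minimal sketch of fitting a folded two-point function to the form above is shown below. It uses an uncorrelated weighted fit for simplicity, whereas the analysis described in the text uses correlated fits; the temporal extent and initial guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

LT = 64  # temporal extent of the lattice (32^3 x 64 in this work)

def kaon_model(t, A, m):
    """A * (exp(-m t) + exp(-m (LT - t))): ground state plus backward propagation."""
    return A * (np.exp(-m * t) + np.exp(-m * (LT - t)))

def fit_kaon(t, corr, corr_err):
    """Simple uncorrelated fit to the folded kaon two-point function."""
    popt, pcov = curve_fit(kaon_model, t, corr, sigma=corr_err,
                           absolute_sigma=True, p0=(corr[0], 0.3))
    return popt, np.sqrt(np.diag(pcov))
```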
C. Pion two-point function
The isospin triplet of pion states can be constructed from the operators listed in Sec. V.A. of Ref. [10]. Due to the isospin symmetry the resulting two-point functions all have the same Wick contractions, and are most conveniently generated with the neutral pion operator, in which p_π = p₁ + p₂ is the total pion momentum and P_{p_π} is a flavor projection operator of the form ½(1 ± σ₂) whose sign depends on the particular choice of the quark momentum, per the discussion in Sec. IV.G. of Ref. [10]. We measure the two-point function with four different momentum orientations related by cubic transformations in order to improve the statistical error: p_π ∈ {(1, 1, 1), (−1, 1, 1), (1, −1, 1), (1, 1, −1)} in units of π/L. The corresponding choices of quark momentum are given in Ref. [18]. The two-point function is again averaged over all source timeslices and also over all four momentum orientations, and the data folded to improve statistics. Correlated fits are performed to a function of the same form as that used for the kaon.
D. The I = 0 ππ two-point function
Details of the strategy for measuring the I = 0 ππ two-point function can be found in Ref. [18].
In summary, we construct three operators with the quantum numbers of the I = 0 ππ state: The first and second operators, labeled ππ(111) and ππ(311), comprise two single-pion operators carrying equal and opposite momenta separated by ∆ = 4 timeslices in order to reduce the overlap with the vacuum state. The pion momenta in the former reside in the set (±1, ±1, ±1)π/L, and those of the latter in the set (±3, ±1, ±1)π/L and permutations thereof. We average over all nonequivalent directions of the pion momentum in order to project onto the rotationally symmetric state. The final, σ operator corresponds to the scalar two-quark operator $\frac{1}{\sqrt 2}(\bar u u + \bar d d)$. As mentioned previously, the pion and σ bilinear operators are smeared with a hydrogen wavefunction (exponential) smearing function of radius 2 lattice sites in order to improve their overlap with the lowest-energy states.
Two-point correlation functions are constructed from pairs of source and sink operators as
$$C_{\alpha\beta}(t) = \langle O^\dagger_\alpha(t_1)\, O_\beta(t_2)\rangle - \langle O^\dagger_\alpha\rangle\langle O_\beta\rangle,$$
where we include an explicit vacuum subtraction. Here t₁ specifies the earliest time at which any fermion operator appearing in the annihilation operator O†_α is evaluated, and likewise t₂ is the latest time appearing in the creation operator O_β, such that t = t₁ − t₂ is the time of propagation of the shortest-lived pion state. We average over many t₂ at fixed t = t₁ − t₂ and the data are folded to improve statistics, where the folding offsets are ∆_i = 4 for the ππ(111) and ππ(311) operators and zero for the σ operator. To the matrix of correlation functions we perform simultaneous correlated fits allowing for one or more intermediate states. We will use the result obtained by uniformly fitting to the temporal range 6-15 with all three ππ source/sink operators and allowing for two intermediate states (i_max = 1), which represents the "best fit" in Ref. [18]. The results, reproduced from that work, are given in the second column of Tab. III for the convenience of the reader. Note that our ππ ground-state energy and the kaon mass differ by about 2% (the quoted errors being statistical only), and as such our K → ππ calculation is not precisely energy-conserving. The effect of this difference is incorporated as a systematic error on our final result, as discussed in Sec. VII B.
It is interesting to compare the statistical errors of our ππ ground-state fit parameters to those of our 2015 analysis, which was performed using a single operator (ππ(111)) and the same t_min = 6 as our present analysis.
Comparing these to the results of this work in Tab. III we find that the error on the ground-state amplitude has reduced by a factor of 1.9 and the energy by a factor of 6.7. The former is compatible with the expected $\sqrt{741/216}\approx 1.9$ reduction in errors due to the increased statistics, but the latter has improved by a far greater amount. In Ref. [18] we demonstrate that this improvement in the errors is a result of the additional operators, in particular the σ operator, which vastly enhance the resolution on the ground-state energy.
The I = 0 ππ scattering phase shift is obtained via Lüscher's method [11,29] and has the value δ_0 = 32.3(1.0)(1.8)°, where the errors are statistical and systematic, respectively. The procedures by which we estimate our errors are detailed in Ref. [18].
Our decision to fit the ππ two-point function with two states limits the number of states that we can include in our K → ππ matrix element analysis. In order to study the possibility of residual contamination from a third state we repeat the analysis of the ππ two-point function with 3 states, the results of which are given in the third column of Tab. III. For a stable fit to the ππ data we found it necessary to use t min = 4, which is lower than the t min = 6 used for the primary fit and which exposes the result to enhanced excited state contamination. However comparing the results between the second and third columns of Tab. III we find little relative difference in the parameters associated with the ground-state, suggesting any such effects on the K → ππ matrix elements are small.
E. Phase-shift derivative at the kaon mass
As detailed in Sec. VI A, the Lellouch-Lüscher finite volume correction to the K → ππ matrix elements requires the evaluation of the derivative of the phase-shift with respect to the ππ energy evaluated at the kaon mass scale, or more specifically with respect to the variable q = kL/2π, where k² = E²_ππ/4 − m²_π is the square of the interacting pion momentum. This derivative cannot presently be obtained experimentally at this energy scale, and therefore an interpolating ansatz or direct lattice measurement is required.
In Ref. [18], alongside the stationary state examined above, we also compute the ππ energy at several non-zero center-of-mass momenta, allowing us to obtain the phase-shift at two values of the rest-frame energy that are lower than the kaon mass as well as a threshold determination of the scattering length. These results are also close to their corresponding dispersive predictions, albeit with somewhat larger excited-state systematic errors. Using these results we can directly measure the derivative of the phase-shift with respect to the energy using a finite-difference approximation; the resulting value is one of the estimates compared below. We can also obtain the derivative from the dispersive prediction of Colangelo et al. [16]. The derivative with respect to s = E²_ππ, computed at our lattice ππ energy using Eqs. 17.1-17.3 of Ref. [16] with m_π = 135 MeV, carries a statistical error arising from the uncertainty in the lattice spacing and the measured lattice ππ energy. Note that this result is obtained at the physical pion mass, which is 5% smaller than our lattice value. In order to estimate the impact of the difference in pion masses on this derivative we use NLO chiral perturbation theory [16,30] (ChPT) to estimate the derivative with respect to energy at k = 0.1 GeV, at which ChPT is expected to be reliable. Assuming that the slope with respect to √s is roughly constant (which is well motivated by the dispersion theory result, cf. Fig. 7 of Ref. [16]) we estimate the change in dδ_0/ds evaluated at our lattice ππ energy as 1.2%. This value is small relative to the final systematic error we assign to the derivative in Sec. VII D and can therefore be neglected here. Finally, applying ds/dq = 4.18(5) × 10⁵ MeV², where again the errors are statistical, converts this into a derivative with respect to q. The near-linearity of the dispersive prediction suggests that a linear ansatz, Eq. (24), may also be appropriate; with this ansatz we find dδ_0/dq = 1.259(36) rad.
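The conversion factor ds/dq follows directly from the definitions q = kL/2π and E_ππ = 2√(m²_π + k²). The following worked check is a sketch using approximate values quoted in this paper (E_ππ ≈ 479.5 MeV, m_π ≈ 142 MeV, L = 32, a⁻¹ = 1.3784 GeV) and reproduces the quoted value:
\[
\frac{ds}{dq} = \frac{ds}{dk}\,\frac{dk}{dq} = 2E_{\pi\pi}\,\frac{dE_{\pi\pi}}{dk}\cdot\frac{2\pi}{L} = 8k\,\frac{2\pi}{L},
\qquad k=\sqrt{E_{\pi\pi}^2/4-m_\pi^2}\approx 193~{\rm MeV},
\]
\[
\frac{2\pi}{L}\approx\frac{2\pi\times 1378.4~{\rm MeV}}{32}\approx 271~{\rm MeV}
\quad\Rightarrow\quad
\frac{ds}{dq}\approx 8\times 193\times 271~{\rm MeV}^2\approx 4.2\times10^{5}~{\rm MeV}^2 ,
\]
consistent with ds/dq = 4.18(5) × 10⁵ MeV².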
Given that the derivative of the phase shift is a subleading contribution and that the above values are all in reasonable agreement, we expect that the Lellouch-Lüscher factor can be obtained reliably. The variation in these results will be taken into account in our systematic error in Sec. VII D.
In our 2015 work [1] we also considered a linear ansatz in q. The resulting value is not as well motivated as the ansatz in Eq. (24) and is in disagreement with all four of the above results. Given the good agreement between our measured phase-shifts and the above estimates of the derivative with the dispersive predictions, we will not include this result in our systematic error estimate.
F. Optimal ππ operator
For use later in this document we define here an optimal operator that maximally projects onto the ππ ground state relative to the first-excited state.
Under the excellent assumption that the backwards-propagating component of the time dependence is small in the fit window, the two-point functions can be described as a sum of exponentials,
$$C_{\alpha\beta}(t) = \sum_i A^i_\alpha A^i_\beta\, e^{-E_i t},$$
where again Greek indices denote operators and Roman indices states. We wish to define an optimized operator that projects onto the ground state,
$$O_{\rm opt} = \sum_\alpha r_\alpha O_\alpha ,$$
for which
$$A^i_{\rm opt} = \sum_\alpha r_\alpha A^i_\alpha \approx \delta_{i0}\, A^0_{\rm opt},$$
where the approximate equality indicates that additional exponential terms resulting from excited-state contamination, although suppressed, still exist for an optimal operator composed of a finite number of ππ operators. Expanding the Green's function,
$$C_{{\rm opt},\beta}(t) = \sum_i \Big(\sum_\alpha r_\alpha A^i_\alpha\Big) A^i_\beta\, e^{-E_i t}.$$
Without loss of generality we can fix A^0_opt = 1, which alongside Eq. (29) is sufficient to define r_i:
$$\sum_\alpha r_\alpha A^i_\alpha = \delta_{i0}.$$
If the number of states is equal to the number of operators this can be interpreted as a matrix equation, $A\, r = \hat 0$, where the row index of A is the state index i and the column index the operator index α. Here $\hat 0$ is a unit vector in the 0-direction, and as such
$$r = A^{-1}\,\hat 0,\qquad(34)$$
i.e. r is the first column of the inverse matrix.
As our ππ fits include only two states, we drop the noisier ππ(311) operator in order to form a square matrix of correlation functions. We then obtain from Eq. (34) the coefficients r of the ππ(111) and σ operators, respectively. In Fig. 1 we compare the effective energy obtained with the optimal operator to that of the ππ(111) and σ operators alone. We observe a marked reduction in the ground-state energy and a noticeable improvement in the length of the plateau region resulting from the removal of excited-state contamination, as well as a significant improvement in the statistical error. This optimal operator will also be used in our matrix element fits in the following section.
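A minimal numerical sketch of the construction above is shown below; the amplitude values are placeholders, not the fitted values of this work.

```python
import numpy as np

def optimal_operator_coeffs(A):
    """Coefficients r such that sum_alpha r_alpha O_alpha projects onto the ground state.

    A[i, alpha] is the amplitude of operator alpha with state i (square matrix,
    #states == #operators). Following Eq. (34), A r = e_0, so r is the first
    column of A^{-1}, normalized to A^0_opt = 1.
    """
    e0 = np.zeros(A.shape[0])
    e0[0] = 1.0
    return np.linalg.solve(A, e0)

# Illustrative placeholder amplitudes for the (pipi(111), sigma) operators and
# the two lowest states; the actual values come from the two-point fits.
A = np.array([[1.0, 0.8],
              [0.4, -0.6]])
r = optimal_operator_coeffs(A)  # r[0] multiplies pipi(111), r[1] multiplies sigma
```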
IV. K → ππ DECAYS
In this section we detail the measurement and fitting of the K → ππ three-point Green's functions, from which the unrenormalized matrix elements (ππ) I=0 |Q i |K 0 are obtained.
A. Overview of measurements
On the lattice we measure three-point functions of the operators Q_i with a kaon source and a ππ sink, where t denotes the time separation between the kaon and four-quark operators, and t^{K→snk}_sep the time separation between the kaon and the ππ "sink" operator, O_snk. As described in Ref. [31], the Wick contractions of these functions fall into four categories based on their topology, as illustrated in Fig. 2.
Note that here and below we take care to differentiate between the G-parity kaon stateK 0 , which is a G-parity even eigenstate of the finite-volume Hamiltonian, and the physical kaon K 0 that is not an eigenstate of the system. The matrix elements of the physical kaon are related to those of the G-parity kaon by a constant multiplicative factor of √ 2 that serves as the analogue of the Lellouch-Lüscher finite-volume correction as described in Sec. VI.B. of Ref. [10].
In order to maximize statistics we translate the three-point function over multiple kaon timeslices and average the resulting measurements. The statistical error is dominated by the type3 and type4 diagrams, which contain a quark self-contraction within the four-quark operator that gives rise to a quadratic divergence in the continuum limit. This divergence is removed by defining the subtracted operators [31,32]
$$Q_i = \hat Q_i - \alpha_i\, \bar s\gamma_5 d ,\qquad(37)$$
where we henceforth denote the unsubtracted operators with a hat, $\hat Q_i$. The coefficients α_i in Eq. (37) are defined by imposing the condition
$$\langle 0|\left(\hat Q_i - \alpha_i\,\bar s\gamma_5 d\right)|K^0\rangle = 0 ,\qquad(38)$$
where we have allowed α_i to vary with time as this was found to offer a minor statistical improvement. Although the matrix element of this pseudoscalar operator vanishes by the equations of motion for energy-conserving kinematics and is therefore not absolutely necessary for our calculation, the subtraction reduces the systematic error resulting from the small difference between our ππ and kaon energies while simultaneously reducing the statistical error and suppressing excited-state contamination.
Due to having vacuum quantum numbers, the I = 0 ππ operators also project onto the vacuum state, and this off-shell matrix element dominates the signal unless an explicit vacuum subtraction is performed (Eq. (39)). However, due to our definition of the subtraction coefficient α_i in Eq. (38), the vacuum matrix elements appearing on the right-hand side of Eq. (39) vanish, making this subtraction unnecessary. In practice this cancellation is not exact in our numerical analysis for the following reason: while the ππ "bubble" ⟨0|O†_snk|0⟩ is formally time-translationally invariant, we observed a minor statistical advantage in evaluating this quantity with the ππ operator on the same timeslice as it appears in the full disconnected Green's function that is being subtracted, such that it is maximally correlated.
Therefore, for the right-most term in Eq. (39) we compute the product of the K → vacuum matrix element and the ππ bubble inside the average over the kaon source timeslice rather than after it, where t_K is the kaon timeslice and {t_K} the set of timeslices upon which measurements were performed. As suggested by the above, the coefficients α_i(t) are computed separately from the t_K-averaged matrix elements, and therefore the cancellation between the two terms in brackets is exact only up to the degree to which time-translation symmetry is realized at finite statistics. Due to our large statistics we found the difference in the fitted Q₆ matrix element obtained with and without the vacuum subtraction to be at the 0.1% level. Our final choices of cut incorporate data from this set in the range 6 ≤ t ≤ 11 (cf. Sec. IV E 4). In this window we observe that for both the C₂ and C₆ correlation functions, the contribution of the noisy, type4 disconnected diagrams is largely consistent with zero, albeit with much larger errors for the former. C₂ appears dominated by the type1 and type3 diagrams, which both contribute with the same sign, with a negligible contribution from the type2 diagrams. The contribution of the type1 and type3 diagrams appears to behave similarly for the C₆ three-point function, however here we observe a strong cancellation between those and the type2 diagrams.
The subtraction coefficients α_i are computed via Eq. (38) as the following ratio of two-point functions,
$$\alpha_i(t) = \frac{\langle 0|\,\hat Q_i(t)\, O_{K^0}(0)\,|0\rangle}{\langle 0|\,(\bar s\gamma_5 d)(t)\, O_{K^0}(0)\,|0\rangle},\qquad(41)$$
where the average of the correlation functions over the kaon source timeslice is implicit as above.
The Wick contractions for the ⟨0|Q̂_i(t) O_{K⁰}(0)|0⟩ two-point functions are identical to the components of the type4 K → ππ diagrams that are connected to the kaon. While these connected components are formally independent of the sink two-pion operator, in practice these quantities were computed using code that was organized differently for the ππ and σ operators. As described in Appendix B of this paper and Appendix B.2 of Ref. [33], the factors entering the type4 diagrams that determine the α_i were constructed from two separate bases of functions of the quark propagators, one for the σ and the other for the ππ(. . .) operators, where for each basis γ₅-hermiticity was used in a different way. While γ₅-hermiticity is an exact relation, the fact that we are using a stochastic approximation for the high modes of the all-to-all propagator allows small differences to arise between the values of the α_i computed in these two bases. We therefore have separate results for the α_i from the ππ and σ three-point function calculations.
In Fig. 4 we plot the time dependence of the α i for all ten operators. We observe excellent agreement between the results obtained from the two different bases of contractions as expected.
For a number of operators we find statistically significant but relatively small excited-state contamination for small t that in all cases appears to die away by t = 6. While the effects of this contamination are unlikely to significantly affect our final results, the cuts that we later apply to our fits nevertheless exclude data with t < 6.
The K → ππ matrix elements of the pseudoscalar operator s̄γ₅d are required to perform the subtraction of the divergent loop contribution. In this section we independently analyze these matrix elements in order to understand their time dependence and the corresponding effect of the subtraction on the amount of excited-state contamination in the final K → ππ result.
In the limit of large time separation between the source/sink operators and the four-quark operator, only the lowest-energy ππ and kaon states are present. Since the pseudoscalar matrix elements vanish by the equations of motion when the decay conserves energy and the kaon and ππ groundstate energies in our calculation differ by only 2%, we expect the subtraction to result in only a negligible shift in the central value but a marked improvement in the statistical errors in this limit.
However at finite time separations, the contributions of the excited states may take a long time to die away due to the increasing magnitude of the corresponding matrix elements between initial and final states of different energies. It is this concern that prompts us to study this system more carefully.
The lattice three-point function for a generic sink ππ operator, O_snk, has the following time dependence:
$$C^{\rm snk}_P(t, t^{K\to {\rm snk}}_{\rm sep}) = \sum_{i,j} A^i_{\rm in}\, A^j_{\rm out}\, M^{ij}_P\, e^{-E^i_{\rm in} t}\, e^{-E^j_{\rm out} t'},$$
where the subscript 'in' refers to the incoming kaonic state, 'out' to the outgoing two-pion state, and M^{ij}_P is the matrix element for the term involving in and out states i and j, respectively. It is convenient to define an "effective matrix element" by dividing out the ground-state time dependence and operator amplitudes,
$$M^{\rm eff,snk}_P(t) = \frac{C^{\rm snk}_P(t, t^{K\to {\rm snk}}_{\rm sep})}{A^0_{\rm in}\, A^0_{\rm out}\, e^{-E^0_{\rm in} t}\, e^{-E^0_{\rm out} t'}},$$
where t′ = t^{K→snk}_sep − t is the separation between the four-quark operator and the sink. Note that M^{eff,snk}_P is dependent on the sink operator through the terms involving the excited states, in which a ratio of ground and excited state amplitudes appears.
We measure the correlation function Eq. (42) for each of our three two-pion operators. Note that a vacuum subtraction is also required here and is performed in the same way as for the four-quark operators. The effective matrix elements for the ππ(111) and σ sink operators are shown in Fig. 5; the corresponding data for the ππ(311) operator are much noisier and have therefore been excluded. The form of this plot can be explained as follows: as E⁰_in ≈ E⁰_out we expect M⁰⁰_P to be small. If we then assume that the dominant excited-state contributions come from the term involving the excited kaon state and ground ππ state (i = 1, j = 0) and the term with the ground kaon state and the first excited ππ state (i = 0, j = 1), then we expect the data to behave accordingly. This ansatz implies an exponentially falling contribution from the excited pion state and an exponentially growing piece from the excited kaon state, giving rise to a bowl-like shape assuming that A¹_in and A¹_out have the same sign, which appears to be the case here. Furthermore, the exponentially-growing piece in t is expected to be larger for smaller t^{K→snk}_sep, and indeed we observe that the turnover point at which the exponentially-growing term begins to dominate occurs sooner for smaller t^{K→snk}_sep.
While the effective matrix elements of both sink operators initially trend towards zero, for the more precise ππ(111) data it seems that none of the five data sets are statistically consistent with zero at their maxima, suggesting we do not reach the limit of ground-state dominance. This is not necessarily an issue for our calculation given that the subtraction will heavily suppress these contributions in our final result, and furthermore the inclusion of multiple sink operators will improve our ability to extract the ππ ground-state matrix element. In order to disentangle these two effects it is convenient to examine the three-point function for the optimized sink operator discussed in Sec. III F. The time dependence of M eff,snk P for this operator is also shown in Fig. 5.
By definition this operator heavily suppresses A^j_out for j > 0, and indeed we find the data to be much flatter in the low-t region and also considerably closer to zero. The exponential growth and t^{K→snk}_sep dependence that enters due to the excited kaon term is expected to be largely unaffected by this transformation, however it seems that in several cases the plateaus extend much further into the large-t region than previously. It is likely that this is due to an accidental cancellation owing to the fact that A¹_out is positive for the ππ(111) operator and negative for the σ operator (cf. Tab. III), and hence the exponentially-growing terms for these operators have opposite signs.
We conclude by discussing the expected size of the excited-state contamination in the matrix elements of the subtracted four-quark operators arising from the pseudoscalar operator. In the K → ππ calculation, this dimension-3 operator is introduced to remove what in the continuum limit would be a quadratic divergence resulting from the self-contraction between two of the four quark fields appearing in those operators Q̂_i with a component transforming in the (8,1) or (8,8) representations. In our lattice calculation these terms behave as 1/a² when expressed in physical units. To leading order in a this 1/a² coefficient does not depend on the external states and is therefore removed from our ⟨0|(ππ)Q̂_i K⁰|0⟩ amplitude by the subtraction defined above, even though the coefficients α_i are determined from the ⟨0|Q̂_i K⁰|0⟩ matrix element in Eq. (41). Because of the chiral structure of the (8,1) and (8,8) operators, these coefficients have the structure α_i ∼ (m_s − m_d)/a² + . . . [8], where the ellipsis represents terms which are not power-divergent.
Thus, the s̄γ₅d subtraction removes the leading 1/a² term in the matrix element of Q̂_i, leaving behind a finite piece of size ∼ (m_s − m_d)Λ²_QCD multiplying s̄γ₅d. This remainder is not physical and depends on the condition chosen to define the α_i. However, it will contribute to our final result if E_ππ ≠ m_K. For the ground-state component (i = 0, j = 0) this term is thus heavily suppressed by the factor (E⁰_ππ − m_K). However for the excited states we expect this piece to be on the order of the physical contribution from the dimension-6 four-quark operator. As such it may result in a modest enhancement of the excited state matrix elements. Providing we are able to demonstrate that we have the excited ππ and kaon states under control through appropriate cuts on our fitting ranges, this should pose no obstacle to our calculation.
D. Description of fitting strategy
For a lattice of sufficiently large time extent that around-the-world terms in which states propagate through the lattice temporal boundary can be neglected, and assuming that the four-quark operator is sufficiently separated from the kaon source that the kaon ground state is dominant, the three-point Green's functions C_i of the weak effective operators defined in Eq. (36) have the general form
$$C_i(t, t^{K\to{\rm snk}}_{\rm sep}) = \frac{1}{\sqrt 2}\, A_K\, e^{-m_K t} \sum_j A^j_{\rm snk}\, M^j_i\, e^{-E_j\,(t^{K\to{\rm snk}}_{\rm sep}-t)} ,\qquad(48)$$
where M^j_i = ⟨(ππ)_j|Q_i|K⁰⟩ is the matrix element of the four-quark operator Q_i with the ππ state j, with M⁰_i corresponding to the physical K → ππ matrix elements required to compute A_0. The factor of 1/√2 relates the matrix element involving the kaon G-parity eigenstate to that of the physical kaon [10]. Here A_K is the amplitude of the G-parity kaon operator, A^j_snk are the amplitudes of the sink operator with the state j, and E_j is the energy of that state. These parameters are fixed to those obtained from the two-point function fits in Sec. III: A_K and m_K to the results given in Tab. II, and A^j_snk and E_j to the results obtained from the three-operator, two-state ππ fits given in the second column of Tab. III.
We perform simultaneous correlated fits over multiple sink operators to the form Eq. (48) in order to determine the matrix elements M^j_i, allowing for one or more states j. Independent one-state fits are also performed to the optimized sink operator defined in Sec. III F. The fits are performed to each weak effective operator separately, in the 10-operator basis (the relationship between these 10 linearly-dependent operators serves as a useful cross-check of the fit results) using the strategy outlined in Sec. II B. We apply a cut t_min on the separation t between the kaon and the four-quark operator in order to isolate the ground-state kaon, and also a cut t′_min on the separation t′ = t^{K→snk}_sep − t between the four-quark and sink operators. These cuts, the number of sink operators, and the number of excited ππ states included in the fit are varied in order to study systematic effects.
For use below we again define an "effective matrix element" in which the ground-state ππ and kaon amplitudes and time dependence are divided out,
$$M^{\rm eff,snk}_i(t') = \frac{\sqrt 2\; C_i(t, t^{K\to{\rm snk}}_{\rm sep})}{A_K\, A^0_{\rm snk}\, e^{-m_K t}\, e^{-E_0 t'}} .$$
These effective matrix elements converge exponentially to the ground-state matrix element at large t′. Note that, unlike in Sec. IV C, we are assuming that a cut, t_min, on the separation between the kaon and four-quark operators has been applied that is sufficient to isolate the contribution of the kaon ground state. As a result, these effective matrix elements can be assumed to be independent of t^{K→snk}_sep and a weighted average of our five datasets of different t^{K→snk}_sep can be applied to improve the statistical resolution of the data presented in our plots.
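A minimal sketch of this construction is shown below; it simply divides the measured three-point function by the ground-state amplitudes and time dependence, with the √2 factor converting from the G-parity kaon to the physical kaon normalization. The function name and arguments are illustrative assumptions.

```python
import numpy as np

def effective_matrix_element(C, t, tsep, A_K, m_K, A_snk0, E0):
    """Divide out the ground-state time dependence and amplitudes from a
    K -> pipi three-point function, following the definition in the text.

    C: measured three-point function at kaon--four-quark separation t and
    kaon--sink separation tsep.
    """
    tprime = tsep - t
    ground = (1.0 / np.sqrt(2.0)) * A_K * A_snk0 * np.exp(-m_K * t) * np.exp(-E0 * tprime)
    return C / ground
```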
E. Fit results
In this section we examine the results of fitting various subsets of our data, with the goal of finding an optimal fit window in which systematic errors arising from both excited ππ and kaon states are minimized.
In Figs. 6 and 7 we plot the fitted ground-state matrix elements M⁰_i as a function of t′_min for various choices of t_min, the number of sink operators and the number of states. The three-operator fits are performed using the ππ(311), ππ(111) and σ sink operators; for the two-operator fits we drop the noisier ππ(311) data; and for the one-operator fits we further drop the σ data. The one-operator, one-state fits are equivalent to those performed in our 2015 work, albeit with more statistics and more reliable ππ energies and amplitudes.
The discussion below will be focused on these figures. We will first discuss general features addressing the quality of the data and the reliability of the fits, and will then concentrate on searching for evidence of systematic effects (or lack thereof) arising from kaon and ππ excited states.
Based on those conclusions we will then present our final fit results.
Discussion of data and fit reliability
We will first comment on the fits to the optimal operator, labeled "opt." in the figures. This approach is outwardly advantageous in that the fits are performed to a single state.
[Caption of Figs. 6 and 7: The legends are given in the format #ops × #states followed by the cut t_min on the time separation between the kaon and the four-quark operators. Here "opt." is the fit to the optimal operator and "sys." is used to estimate the systematic error resulting from a third state.]
We compare the effective matrix elements of this optimal operator to those of the ππ(111) and σ operators alone, noting a marked improvement in the quality of the plateau. This behavior, which is also accounted for implicitly in the multi-state fits, demonstrates the power of the multi-operator technique for isolating the ground state. In Figs. 6 and 7 we observe that the fit results for the optimal operator agree very well with the multi-state fit results in all cases. While this approach does not appear to offer any statistical advantage, the strong agreement suggests that our complex multi-state correlated fits are under good control.
In Figs. 6 and 7 we observe for several ground-state matrix elements a trend in the fit results up to an extremum at t'_min = 7, followed by a statistically significant correction at the level of 1-2σ for the fits with t'_min = 8. Here and in Sec. VII A we present substantial evidence that the systematic errors resulting from excited kaon and ππ states are minimal, which makes it unlikely that this rise is associated with excited-state contamination. Certainly, if it were due to excited ππ states we would expect an improvement as more sink operators are added, but there is little evidence of such; and likewise, if excited kaon states were the cause we would expect an improvement as we increase the t_min cut, whereas no significant change is observed. The most likely explanation is a statistical fluctuation in our correlated data set, and indeed in Fig. 8 we see evidence of such a fluctuation peaking at t' = 7 which is likely driving this phenomenon.
(Table IV: p-values assessing how well the data with t' ≥ 7 are described by the models for the C_i correlation functions obtained by fitting to 3 operators and 2 states with t'_min = 5 and t'_min = 6.)
Given the above, an interesting question we can ask is whether the models we obtain from our fits with t'_min = 5, which in all cases lie within the plateau region before this rise, are a good description of the subset of data with t' ≥ 7, or in other words how likely it is that these data are consistent with this model allowing only for statistical fluctuations. In Tab. IV we list the p-values for these data using the models obtained by fitting to 3 sink operators and 2 states with t'_min = 5 and t'_min = 6, computed using the technique discussed in Sec. II B (with no free parameters). We observe excellent p-values in all cases bar M^0_3, and to a lesser extent M^0_4. The lower p-values for these operators are common to all of the multi-operator fits and are likely associated with the statistical fluctuations described above, which are more apparent for these matrix elements (cf. Fig. 6). Such unusual statistical fluctuations are to be expected when so many different operators and fitting ranges are examined. Of most importance in a calculation of Im(A_0) is M^0_6, for which we find that the model obtained with t'_min = 5 is an excellent description of the data with t' ≥ 7. The p-value is in fact little different from the value p = 0.525 obtained by fitting to these data directly, suggesting that the models are equally good descriptions despite the tension in the ground-state matrix elements.
For M^0_7 and M^0_8 (and to a lesser extent, M^0_10) we observe a discrepancy between the one-operator and multi-operator results at the 1-2σ level that persists even to large t'_min. Given the very clear plateaus in the multi-state fit results, this disagreement is again likely due to statistical effects in these correlated data. This is evidenced, for example, in Fig. 9 in which we overlay the M^eff,snk_8 effective matrix elements for the ππ(111) and σ sink operators with the multi-operator fit curve.
We observe that the fit curve for the ππ(111) operator is completely compatible with the data but favors a value that lies consistently within the upper half of the error bars, suggesting that the apparent flatness of the ππ(111) effective matrix element represents a false plateau; the fact that the multi-operator method is capable of resolving this behavior is a testament to its power.
Excited kaon state effects
We now address excited kaon state effects. Because the data rapidly become noisier as we move the four-quark operator closer to the kaon operator, and thus further away from the ππ operator, such effects are not expected to be significant. The simplest test is to vary the cut on the time separation between the kaon operator and the four-quark operator, t_min. The first three points from the left of each cluster in Figs. 6 and 7 show the result of varying t_min between 6 and 8 at fixed t'_min. As expected we observe no statistically significant dependence on this cut.
We can also test for excited kaon effects by examining the data near the kaon operator in more detail, alongside looking for trends in the five different K → sink separations at fixed t. The optimal operator proves convenient for examining this behavior as it neatly combines the two dominant sink operators and should be flat within the fit window. In Fig. 10 we plot the data for the M^eff,snk_4 and M^eff,snk_6 effective matrix elements with a distinction drawn between data included and excluded by a cut on the kaon to four-quark operator time separation of t_min = 6. We find no apparent evidence of excited kaon state contamination even for data excluded by the cut, nor do we observe any trends of the data in the K → sink separation.
We therefore conclude that excited kaon effects in our results are negligible.
Excited ππ state effects
The dominant fit systematic error is expected to be due to excited ππ states. Fortunately, given that we can change both the number of operators and the number of states alongside varying the fit window within a region where our data is most precise, there are a number of tests we can perform to probe this source of error.
We begin by comparing the multi-operator fits to the one-operator (ππ(111)) fit, the latter being equivalent to the procedure used for our 2015 work. In the majority of cases we see little evidence of excited-state contamination in the one-operator data, as evidenced by its agreement with the multi-operator fits as well as the strong consistency between the fits as we vary the fit window. However, for the M^0_5 and M^0_6 matrix elements we observe strong evidence of excited-state contamination in these fits at smaller t'_min. Fig. 6 clearly demonstrates how these effects are suppressed as we add more operators: initially the one-operator results converge with the 3-operator results at t'_min = 5 and 6, respectively, at which point the excited states appear to be sufficiently suppressed. Introducing a second operator and state eliminates part of this contamination and the convergence appears earlier, at t'_min = 4 and 5, respectively. Finally, on adding the third operator we find results that are essentially flat from t'_min = 3. This suggests that the 5% excited-state systematic error on our 2015 result, which used t'_min = 4, was significantly underestimated for these matrix elements.
In general we observe excellent agreement between the two- and three-operator fits with two states.
Unfortunately, as mentioned above, the ππ(311) data are considerably noisier than those of the other operators, and the associated ππ energy and amplitudes are less well determined; as such these data contribute relatively little to the fit. Nevertheless we do observe that for the M^0_5 and M^0_6 matrix elements, the introduction of the third operator results in values that for low t'_min (3 or 4) are in considerably better agreement with the results for larger t'_min, suggesting that in the regime in which these data are less noisy (i.e. closer to the ππ operator) the third operator is acting to remove some residual excited-state contamination. We conclude that it is beneficial to include the third operator.
In order to study the possibility of residual contamination from a third state we perform three-operator, three-state fits to the matrix elements using the ππ two-point function fit parameters given in the third column of Tab. III and the same fit ranges for t and t' used in the three-operator, two-state fits. The results for the ground-state matrix elements are also included in Figs. 6 and 7 with the label "sys.". We find that including this third state has very little impact and the results agree very well with the three-operator, two-state fits. This again suggests that we have the ππ excited-state systematic error under control.
A further test for excited-state contamination is to study the agreement of the fit curves with the data outside of the fit region. To this end, in Fig. 11 we plot the ππ(111) and σ operator data for the M^eff,snk_6 effective matrix element overlaid with the fit curves for the 3-operator, 2-state fits and for the 3-operator, 3-state fits described above, using t'_min = 4 and 5. The fitted ground-state matrix elements in these cases are all in complete agreement to within a fraction of their statistical errors. We observe that the 3-operator, 2-state fit curve with t'_min = 5 describes well the ππ(111) data at t' = 4 but shows a tension for the σ data at this timeslice. Fitting with t'_min = 4 does not resolve this tension, suggesting the effects of a third state are visible in the σ operator data at t' = 4. This is consistent with the pattern of couplings of the operators to the states in Tab. III, which shows a significant reduction in the couplings to higher states for the ππ(111) operator but almost equal-sized couplings of the σ operator to all three states.
For our final result we choose to focus upon the three-operator, two-state fits. While the majority of the corresponding curves in Figs. 6 and 7 are essentially flat from t'_min = 3, we opt for a conservative and uniform cut of t'_min = 5, at which we can strongly claim an absence of significant excited-state effects. In Sec. VII A we will consider means by which we can assign a systematic error to this result.
Final fit results
As discussed above we choose the 3-operator, 2-state fit with t'_min = 5 for our final result. As we observe no significant dependence on the cut on the separation between the kaon and four-quark operators we will choose t_min = 6. In Tab. V we present the full set of p-values and parameters for these fits. We obtain acceptable p-values in the majority of cases, with the notable exception of the Q_3 four-quark operator, for which p = 4%. We find that this p-value is not improved by increasing t'_min, and also that the one-operator, one-state fit with the same fit range (with which our chosen value is in excellent agreement) has a p-value of 15%. The low probability is therefore unlikely to be associated with any systematic effect and can be attributed to low-probability statistical effects. Comparing the errors in Tab. V with those of our 2015 calculation, we find that they have reduced by factors of 2.8 and 2.4 for M^0_2 and M^0_6, respectively. Comparing the 3-operator, 2-state fits to the 1-operator, 1-state fits in Fig. 6 we observe that the larger improvement for M^0_2 can be explained by the additional operators, whereas for M^0_6 these two approaches have similar errors. The fact that the error on M^0_6 has improved considerably more than the factor of 1.9 expected from the increase in statistics can therefore be attributed to the improved precision of the ππ two-point function fits observed in Sec. III D.
V. NON-PERTURBATIVE RENORMALIZATION OF LATTICE MATRIX ELEMENTS
The Wilson coefficients are conventionally computed in the MS (NDR) renormalization scheme, and therefore we are required to renormalize our lattice matrix elements in this scheme as well. This is achieved by performing an intermediate conversion to a non-perturbative regularization-invariant momentum scheme with symmetric kinematics (RI-SMOM). As the name suggests, these schemes can be treated both non-perturbatively on the lattice (provided the renormalization scale is sufficiently small compared to the Nyquist frequency π/a) and in continuum perturbation theory (provided the renormalization scale is sufficiently high that perturbation theory is approximately valid at the order to which we are working). Thus, we can use continuum perturbation theory to match our RI-SMOM matrix elements to MS, avoiding the need for lattice perturbation theory. The matching factors have been computed to one loop in Ref. [34].
In our 2015 calculation we computed the renormalization matrix at a somewhat low renormalization scale of µ = 1.529 GeV in order to avoid large cutoff effects on our coarse, a⁻¹ = 1.38 GeV ensemble. Due to this low scale, the systematic error associated with the perturbative RI to MS matching was our dominant error, with an estimated size of 15%. In this paper we utilize the step-scaling procedure [35,36] (summarized below) in order to circumvent the limit imposed by the lattice cutoff and increase the renormalization scale to 4.0 GeV, at which the error arising from the use of one-loop perturbation theory is expected to be significantly smaller. A separate step-scaling calculation to 2.29 GeV was performed in Ref. [37] and we will utilize those results to study the scale dependence of the perturbative and discretization errors in our operator normalization.
A. Summary of approach
Due to operator mixing, the renormalization factors take the form of a matrix. This is most conveniently expressed in the seven-operator chiral basis in which the operators are linearly independent and transform in specific representations of the SU(3)_L ⊗ SU(3)_R chiral symmetry group, an accurate symmetry of our DWF formulation even at short distances. In this basis the renormalization matrix is block diagonal, with a 1×1 matrix associated with the Q_1 operator that transforms in the (27, 1) representation, a 4×4 matrix for the (8, 1) operators Q_2, Q_3, Q_5 and Q_6, and a 2×2 matrix for the (8, 8) operators Q_7 and Q_8.
In the RI-SMOM scheme the renormalized operators are defined in terms of the bare lattice operators through a matrix of renormalization factors, with the Einstein summation convention implied and the label "RI" used as shorthand for the RI-SMOM scheme. The renormalization factors are fixed by a condition on projected, amputated vertex functions in which the index m is not summed over. Here α, β, γ and δ are combined spin and color indices, Z_q is the quark field renormalization, q is a four-momentum that defines the renormalization scale and the P_m are "projection matrices" described below. The quantities F_im on the right-hand side of this condition are found by evaluating its left-hand side at tree level. The vertex functions Γ^RI_im are computed via Eq. (53), where the sum is performed over the full four-dimensional lattice volume and q = p_1 − p_2. Here the E_m are a set of seven four-quark operators that each create the four quark lines that connect to the weak effective operator, where the momentum arguments indicate the incoming momenta and the quark momenta satisfy symmetric kinematics: p_1² = p_2² = (p_1 − p_2)² = q² ≡ µ². The subscript "amp." in Eq. (53) indicates that the external propagators are amputated by applying the ensemble-averaged inverse propagator, such that the resulting Green's function has a rank-4 tensor structure in the spin-color indices.
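Schematically, and assuming the standard RI-SMOM construction (our precise index structure, normalization and the explicit forms follow Ref. [33]):
\[ Q^{\rm RI}_i(\mu) = Z^{\rm RI\leftarrow lat}_{ij}(\mu)\, Q^{\rm lat}_j\,, \qquad \frac{Z^{\rm RI\leftarrow lat}_{ij}(\mu)}{Z_q^2(\mu)}\, P_m\big[\Gamma^{\rm lat}_{j}\big]\Big|_{p_1^2 = p_2^2 = q^2 = \mu^2} = F_{im}\,, \qquad \Gamma^{\rm lat}_{i} \sim \Big\langle \sum_x Q^{\rm lat}_i(x)\; E_m(p_1,p_2) \Big\rangle_{\rm amp.}\,, \]
with F_{im} the tree-level values of the projected vertex and external spin-color indices suppressed.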
These Green's functions are not gauge-invariant, hence the procedure must be performed using gauge-fixed configurations, for which we employ Landau gauge fixing. The use of momentum-space Green's functions introduces contact terms that prevent the use of the equations of motion, so that additional operators, beyond those needed to determine on-shell matrix elements, must be introduced if all possible operator mixings are to be included, as is required if the RI-SMOM scheme is to have a continuum limit. These are discussed below.
Note that the Wick contractions of Eq. (53) result in disconnected penguin-like diagrams that interact only by gluon exchange; these diagrams are evaluated using stochastic all-to-all propagators and are typically noisy, requiring multiple random hits and hundreds of configurations. The presence of disconnected diagrams also precludes the use of partially-twisted boundary conditions and therefore limits our choices of the renormalization momentum scale to the allowed lattice momenta.
The quark field renormalization Z_q is also computed in the RI-SMOM scheme via the amputated vertex function of the local vector current operator, q̄γ_µ q, from which we compute Z_V/Z_q, where Z_V is the corresponding renormalization factor for the local vector current. The factor Z_V is not unity as the local vector current is not conserved; however, it can be computed independently from the ratio of hadronic matrix elements containing the local and conserved (five-dimensional) vector currents, allowing Z_q to be obtained from the above. Alternatively, Z_q can also be computed from the local axial-vector current operator, q̄γ_µγ_5 q. Again the ratio Z_A/Z_q is determined from a three-point function evaluated in momentum space and, provided non-exceptional kinematics are used, is equivalent up to negligible systematic effects at large momentum [38]. The quantity Z_A is then determined by comparing the pion-to-vacuum matrix elements of the local and approximately conserved (five-dimensional) axial currents.
The independent projection matrices P_m contract the external spin and color indices, and are chosen with a tensor structure that reflects that of the operator with the same index. For the weak effective operators we can choose both parity-even and parity-odd projectors, which project onto the parity-even and parity-odd components of the amputated Green's function, respectively, and which should both provide the same result due to chiral symmetry. In practice, however, we have found that the parity-odd choices are better protected against residual chiral symmetry breaking effects that induce non-zero mixings between the different SU(3)_L ⊗ SU(3)_R representations (cf. Sec. 4.5 of Ref. [39]), and so we will use the parity-odd projectors exclusively. We consider two different projection schemes: the "γ_µ scheme", for which the parity-odd projectors have the spin structure
P^{γ_µ}_m = ±[ γ_µ ⊗ (γ_5 γ_µ) − (γ_5 γ_µ) ⊗ γ_µ ] ,   (55)
and the "/q scheme", whose spin structure is given in Eq. (56). For the full set of parity-odd and parity-even projectors we refer the reader to Sec. 3.3.2 of Ref. [33].
Similar choices of γ µ and / q projector exist also for the quark field renormalization. We will follow the convention of describing our RI-SMOM schemes with a label of the form SMOM (A, B) where the quantities A and B in parentheses describe the choices of projector for the four-quark operator and Z q , respectively. In this work we consider only the SMOM(γ µ , γ µ ) and SMOM( / q, / q) schemes as previous studies of the renormalization of the neutral kaon mixing parameter B K indicate that the non-perturbative running is better described by perturbation theory for these two choices than for the two mixed schemes [40]. We will compare our final results obtained using both intermediate schemes in order to estimate the systematic perturbative and discretization errors in computing the RI to MS matching.
B. Operator mixing
The seven weak effective operators mix with several dimension-3 and dimension-4 bilinear operators. For the parity-odd components these are the pseudoscalar density S_1 = s̄γ_5 d and a dimension-4 bilinear containing a discrete covariant derivative, the direction of which is indicated by an arrow in our notation. These mixings are accounted for by performing the renormalization with subtracted operators, whose subtraction coefficients b_j are obtained by requiring the corresponding projected, amputated vertex functions of the subtracted operators to vanish with symmetric kinematics at the scale q². The projection operators can be found in Sec. 7.2.6 of Ref. [37]. In practice we find that the subtraction coefficients are small due to the suppression of the mixing by a factor of the quark mass as a result of chiral symmetry, and also because the amputated vertex function Eq. (53) with a four-quark external state and a two-quark operator necessarily involves only disconnected diagrams, which are small at large momentum scales due to the running of the QCD coupling.
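A sketch of the subtraction, written in a generic form (the explicit bilinear basis, coefficients and projectors are those of Sec. 7.2.6 of Ref. [37]; the symbols B_j and the projector notation below are illustrative):
\[ Q^{\rm sub}_i = Q_i + \sum_j b^{\,i}_j\, B_j\,, \qquad P_{B_k}\Big[\Gamma^{\rm lat}_{Q^{\rm sub}_i}(p_1,p_2)\Big]\Big|_{q^2=\mu^2} = 0 \quad \text{for each bilinear } B_k\,, \]
where the B_j denote the dimension-3 and dimension-4 bilinears (e.g. S_1 = \bar s \gamma_5 d) and the conditions are imposed with symmetric kinematics.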
Mixing also occurs with the dimension-5 chromomagnetic penguin operator and a similar electric dipole operator, conventionally labeled Q_11 and Q_12, respectively [41]. These operators do not vanish by the equations of motion and therefore contribute also to the on-shell matrix elements, but they break chiral symmetry and as such are expected to be heavily suppressed [41,42]. It is therefore conventional to neglect their effects in, for example, the determination of the Wilson coefficients [7]. In our DWF calculation the dimension-1 mixing coefficients of these dimension-5 operators will be of order the input quark masses used in our RI-SMOM calculations or the DWF residual mass; the resulting effects, when combined with the required gluon exchange, should be at or below the percent level. Thus, in this work we neglect these operators.
In addition to the lower-dimension operators there is also mixing with both gauge-invariant and gauge-noninvariant dimension-6 two-quark operators. These operators enter at next-to-leading order and above, and are therefore naturally small provided we perform our renormalization at large energy scales.
The gauge-noninvariant dimension-6 operators vanish due to gauge symmetry and in many cases also by the equations of motion, and therefore do not contribute to on-shell matrix elements [43]. These operators enter the renormalization only at the two-loop level [8] and above, and given that the RI→ MS matching factors are at present only available to one loop, the systematic effect of disregarding these operators is likely to be much smaller than our dominant systematic errors. Nevertheless we are presently investigating position-space renormalization [44] which does not require gauge fixing and therefore does not suffer from such mixing, and as such we may be able to remove this systematic error in future work.
Of the gauge-invariant dimension-6 operators, the operator G_1 is the only one that mixes at one loop [45], with all others entering at two loops and above. In Ref. [37] we have investigated the impact of including the G_1 operator in our RI-SMOM renormalization and have computed the subsequent effect on the K → ππ amplitudes. This can be achieved without the need for measuring matrix elements of G_1 between kaon and ππ states by taking advantage of the equations of motion to rewrite those matrix elements for on-shell kinematics in terms of the matrix elements of the conventional four-quark operators, such that the entire effect of this operator is captured by changes in the values of the (8, 1) elements of the renormalization matrix. Note that at present the results including the G_1 operator have been computed only at the 2.29 GeV renormalization scale and not at the 4.0 GeV scale used for our final result. However, as demonstrated in Ref. [37] and also in Sec. VII F, the effects of including G_1 are at the few-percent level as expected, implying that the resulting systematic error is small compared to our other errors.
C. Step-scaling
Step-scaling [36] allows us to circumvent the upper limit on the renormalization scale imposed by the lattice spacing by independently computing the non-perturbative running of the renormalization matrix to a higher scale using a finer lattice. The multiplicative factor Λ^RI(µ_2, µ_1) relating the RI-SMOM operators renormalized at two different scales can be obtained from the ratio of renormalization matrices computed at those scales (sketched below), where µ_1 is a renormalization scale that lies below the cutoff of the original, coarser lattice while µ_2 is a higher scale, likely inaccessible on the coarser lattice. The quantity Λ^RI(µ_2, µ_1) is computed on finer lattices for which µ_2 also lies below the cutoff, and can then be applied to raise the renormalization scale to µ_2, giving the renormalization matrix Z^{RI←lat}(µ_2) which non-perturbatively converts our coarse-lattice operators into an RI scheme defined at a scale µ_2 potentially much larger than the inverse of our coarse lattice spacing. We will take advantage of this technique to avoid having to match perturbatively to MS directly at the lower energy scales allowed by our coarse, a⁻¹ = 1.38 GeV lattice.
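In equations, a sketch of the step-scaling construction just described (any continuum extrapolation of the running factor on the finer lattice is implicit):
\[ \Lambda^{\rm RI}(\mu_2,\mu_1) = Z^{\rm RI\leftarrow lat'}(\mu_2)\,\Big[Z^{\rm RI\leftarrow lat'}(\mu_1)\Big]^{-1}\,, \qquad Z^{\rm RI\leftarrow lat}(\mu_2) = \Lambda^{\rm RI}(\mu_2,\mu_1)\, Z^{\rm RI\leftarrow lat}(\mu_1)\,, \]
where lat' denotes the finer lattice on which the running factor is evaluated and lat the coarse lattice whose operators are being renormalized.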
D. Details and results of lattice calculation
We use the step-scaling procedure to obtain the renormalization matrix at a scale of µ_2 = 4.006 GeV by matching between our β = 1.75, a⁻¹ = 1.378(7) GeV (32ID) ensemble and a second, finer ensemble with β = 2.37 and a⁻¹ = 3.148(17) GeV, whose properties are described in Ref. [21] under the label "32Ifine". These ensembles have periodic spatial boundary conditions rather than G-parity boundary conditions but, as previously mentioned, boundary effects can be neglected for these high-energy Green's functions. Such quantities are also constructed to be insensitive to the quark mass scale, and therefore we can disregard the effects of the unphysically heavy pion masses of 170 MeV and 370 MeV on the 32ID and 32Ifine ensembles, respectively. Note also that, although we do not take the continuum limit of the step-scaling matrix computed on the 32Ifine ensemble, the fine lattice spacing and the typically small size of discretization effects on such quantities [46] suggest the induced error is also negligible compared to our other errors. We remind the reader that these calculations do not include the G_1 operator, and its absence in our calculation is treated as a source of systematic error in Sec. VII.
Due to the presence of disconnected diagrams in our calculation, the choices of quark momenta are restricted to the discrete values allowed by the finite volume. The closest match between allowed momenta on the 32ID and 32Ifine ensembles that can be chosen as the intermediate scale is µ_1^{32ID} = 1.531 GeV and µ_1^{32Ifine} = 1.514 GeV, respectively. The fact that these scales differ by 1.1% introduces a systematic error that, given the slow evolution of the coupling described by the QCD β-function, can be treated as negligible.
We obtain the quark field renormalization for the 32Ifine ensemble via the vector current operator as described in Sec. V A. For the 32ID ensemble we instead use the axial-vector operator, as the corresponding renormalization factor Z_A has been measured to much higher precision than Z_V on this ensemble (0.05% versus 1.2%, respectively) [47]. The measurements of Z_A and Z_V are treated as statistically independent from those of the amputated vertex functions and are incorporated into the calculation using the super-jackknife technique.
On the 32ID ensemble we perform the corresponding calculation at µ_1 = 1.531 GeV and obtain the renormalization matrices Z^{RI←lat}(µ_1) in both schemes. The results for the step-scaling matrix Λ(4.006 GeV, 1.514 GeV)_{ij} in both schemes are given in Tab. VII. In Tab. VIII we combine these step-scaling results with the 32ID Z^{RI←lat} results to produce the final renormalization matrices at 4.0 GeV, where the errors on the two independent ensembles have been propagated using the super-jackknife procedure.
As mentioned previously, we will also utilize step-scaled renormalization matrices computed at µ_2 = 2.29 GeV both with and without the G_1 operator included. This calculation used an intermediate scale of µ = 1.33 GeV to match between the coarse and fine ensembles. Details of this calculation can be found in Ref. [37]. In that work the statistical errors on Z_V and Z_A were not included in the results, and Z_V was used rather than Z_A in the determination of Z_q on the 32ID ensemble. In order to match the procedure outlined above we have reanalyzed the data from that work, the results of which are presented in Tab. IX for µ = 1.33 GeV and Tab. X for µ = 2.29 GeV. Note that at present only results in the SMOM(γ_µ, γ_µ) scheme are available with G_1 included.

VI. RESULTS FOR A_0 AND ε′/ε

The input parameters used in our calculations are listed in Tab. XI and their uncertainties are accounted for as a systematic error in the following section. In this table the value of Re(A_2) was obtained from the experimental measurement of K⁺ → π⁺π⁰ decays, and the value of Re(A_0) from K_S → π⁺π⁻ and K_S → π⁰π⁰ decays. The relationship between the isospin amplitudes and the experimental branching fractions and decay widths is described in detail in Secs. III.A and III.B of Ref. [14].
As previously mentioned, the Wilson coefficients that incorporate the short-distance physics "integrated out" from the Standard Model are known in perturbation theory in the 10-operator basis to NLO in the MS scheme. Partial calculations at NNLO are available in the literature [48-52], together with a preliminary study of a direct lattice determination [53]; in this manuscript we utilize the complete NLO results of Ref. [7] in the MS-NDR scheme for our central values, and the LO predictions to assign a systematic error due to the truncation of the perturbative series. For consistency with the NLO determination of the Wilson coefficients we follow Ref. [7] in utilizing the 2-loop determination of α_s given in Ref. [7] (and the 1-loop determination for the LO Wilson coefficients used to estimate the systematic error), despite the fact that a 4-loop calculation is available [54]. In order to fix the parameters of the 2-loop (1-loop) calculation, a value of α_s at a reference scale is required, and to minimize the perturbative truncation error it is desirable that this scale be close to the typical scale of the physical problem, in our case O(2 GeV). We therefore utilize the 4-loop calculation of α_s to run its value from the reference scale to the scales relevant here.

A. Lellouch-Lüscher factor

The conversion of our finite-volume matrix elements into physical, infinite-volume amplitudes requires the Lellouch-Lüscher factor F, given in Eq. (69), where δ_0 is the I = 0 ππ scattering phase shift and φ is a known function [11] of q = Lk/(2π), appropriately modified for our antiperiodic pion boundary conditions [13], with k the interacting pion momentum defined via Eq. (70). Note that Eq. (69) differs by a factor of two from the corresponding equation in Ref. [12] due to our different conventions on the decay amplitude (cf. Ref. [31]).
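For orientation, a schematic version of the ingredients just described, assuming the standard Lüscher and Lellouch-Lüscher forms and suppressing volume- and convention-dependent normalization factors (such as the factor of two noted above):
\[ \delta_0(k) + \phi(q) = n\pi\,, \qquad q = \frac{kL}{2\pi}\,, \qquad E_{\pi\pi} = 2\sqrt{m_\pi^2 + k^2}\,, \]
\[ F^2 \;\propto\; \left\{ q\,\frac{\partial\phi}{\partial q} + k\,\frac{\partial\delta_0}{\partial k} \right\} \frac{m_K\, E_{\pi\pi}^2}{k^3}\,. \]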
The calculation of the Lellouch-Lüscher factor requires the derivative of the phase shift with respect to interacting pion momentum, or correspondingly the ππ energy, evaluated at the kaon mass. The determination of this derivative is detailed in Sec. III E where we present values obtained both directly from the lattice and also from the dispersive prediction. Given the good agreement between our phase shifts and the dispersive predictions [18] we will use the dispersive result given in Eq. (22). The variation in the results will be incorporated as a systematic error in Sec. VII D.
Inserting these inputs into Eq. (69) yields our value for the Lellouch-Lüscher factor F.
B. Renormalized physical matrix elements
The infinite-volume matrix elements of the seven chiral-basis operators Q^R_j in a scheme R at the scale µ can be expressed without ambiguity in terms of the matrix elements M^lat_j = ⟨ππ|Q^lat_j|K⟩ of the corresponding lattice operators via Eq. (72), where a is the lattice spacing, Z^{R←lat}(µ) is a 7 × 7 renormalization matrix and F is the Lellouch-Lüscher factor.
(Table caption: values are taken from the PDG review [6], apart from those of Re(A_0), Re(A_2) and their ratio, ω, which were taken from Ref. [1]. Here φ_ε is the phase of the indirect CP-violation parameter ε.)
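Schematically, Eq. (72) takes the form (the explicit power of the lattice spacing converting to physical units, and the ordering of factors, follow the paper's conventions):
\[ M^{\rm R}_j(\mu) \;=\; F \sum_k Z^{\rm R\leftarrow lat}_{jk}(\mu)\, M^{\rm lat}_k\,, \]
with M^{\rm lat}_k expressed in physical units using the appropriate power of a.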
The CKM ratio τ = −V*_ts V_td / V*_us V_ud is obtained using the Wolfenstein parameterization expanded to eighth order, with parameters taken from the aforementioned review. The impact upon our result of the errors on those quantities marked with a (*) is incorporated as a systematic error in Sec. VII.
The ten conventional, linearly-dependent operators Q_i are defined in terms of the seven independent operators Q_j via Eq. (73), where 1 ≤ i ≤ 10, j runs over the set {1, 2, 3, 5, 6, 7, 8} and the matrix T is given in Eq. (74); these relations can be found as Eqs. (58) and (59) of Ref. [34]. This relationship applies both to RI-scheme and bare lattice operators.
In our lattice calculation we have evaluated the matrix elements of all ten linearly-dependent operators Q_i, as given in Tab. V. This provides a consistency test of the three Fierz identities: these identities are obeyed within statistical errors, with deviations at the 1% level in absolute size, validating our code. We do not expect the Fierz relations to be obeyed to floating-point accuracy, since our use of all-to-all propagators introduces a stochastic element into the inversion of the Dirac operator and our use of γ_5-hermiticity differs between the ten operators, introducing statistical noise in different ways into each evaluation.
Since the Fierz identities are not obeyed exactly by the data in Tab. V, we have a choice as to how the ten linearly-dependent matrix elements M^lat_i in that table are to be combined to give the seven independent matrix elements M^lat_j needed on the right-hand side of Eq. (72). To this end we choose to treat the M^lat_j as fit parameters whose best-fit values are obtained by minimizing a correlated χ², as sketched below. The result is an optimal combination that provably minimizes the statistical error on the resulting M^lat_j. The 10 × 10 covariance matrix C_ij is estimated by studying the variation of the bootstrap means of the matrix elements, and is given in Tab. XII. Note that we use the same covariance matrix for the fit to each bootstrap sample (a frozen fit) and therefore do not take into account in our errors the fluctuations in the covariance matrix over bootstrap samples. However, such effects are expected to be minimal due to our large number of configurations. The results for the bare matrix elements obtained by this procedure, along with those obtained by applying Eq. (73) to convert those results back into the 10-basis, are given in Tab. XIII. These results are quoted in physical units and incorporate the Lellouch-Lüscher finite-volume correction.
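A sketch of this correlated minimization, writing M^{10}_i for the ten measured matrix elements, M_j for the seven independent ones and T for the 10 × 7 matrix of Eq. (74):
\[ \chi^2(\{M_j\}) = \sum_{i,i'=1}^{10} \Big[M^{10}_i - (T M)_i\Big]\,(C^{-1})_{ii'}\,\Big[M^{10}_{i'} - (T M)_{i'}\Big]\,, \]
whose minimum is the generalized least-squares combination
\[ M = \big(T^{\sf T} C^{-1} T\big)^{-1} T^{\sf T} C^{-1} M^{10}\,. \]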
The results for the seven operators converted to the SMOM(γ_µ, γ_µ) and SMOM(/q, /q) schemes are given in the left two columns of Tab. XIV. The right two columns of that table show the matrix elements of the ten conventional operators in the MS scheme, obtained from the left two columns by an application of Eqs. (73) and (74). For the convenience of the reader in utilizing these results we also provide the covariance matrices for the SMOM(/q, /q) scheme matrix elements, which we will use as our central values in Sec. VIII, and for the MS matrix elements derived from them, in Tabs. XV and XVI, respectively. We can now obtain A_0 from our lattice calculation by combining these renormalized matrix elements with the Wilson coefficients and the CKM factor τ. The Wilson coefficients have been computed to next-to-leading order in QCD and electroweak perturbation theory in the MS scheme [7], and at µ = 4.006 GeV take the values given in Tab. XVII.
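The decay amplitude then follows from the standard effective-Hamiltonian expression (a schematic form; the precise grouping of CKM and normalization factors follows the conventions of Ref. [7] and Tab. XVII):
\[ A_0 = \frac{G_F}{\sqrt{2}}\, V_{ud} V^{*}_{us} \sum_{i=1}^{10} \big[\, z_i(\mu) + \tau\, y_i(\mu) \,\big]\, \langle \pi\pi(I{=}0) | Q_i(\mu) | K \rangle\,, \qquad \tau = -\frac{V^{*}_{ts} V_{td}}{V^{*}_{us} V_{ud}}\,. \]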
For the CKM matrix element ratio τ we use the value given in Tab. XI. Combining these inputs with the MS-renormalized matrix elements in Tab. XIV we obtain the results for Re(A_0) and Im(A_0) in the SMOM(/q, /q) intermediate scheme given in Eq. (77); a full discussion of the systematic errors is presented in Sec. VII.
The contributions of each of the ten operators to the real and imaginary parts of A_0 are given in Tab. XVIII.
The real and imaginary parts of A_0 comprise different linear combinations of the same basis of real lattice matrix elements. As the real part of the amplitude is precisely known from experiment and is not expected to receive significant contributions from new physics, we can use this quantity to replace part of the lattice input and thereby improve the precision of the imaginary part. The appropriate procedure is discussed in Refs. [55,56] in the context of the conventional basis of 10 non-independent operators, where the latter authors use it to eliminate the Q_2 matrix element. For our purposes it is more convenient to express the method in terms of the unrenormalized matrix elements in the 7-operator basis. We write Im(A_0) as a linear combination of the matrix elements M^lat_j = ⟨ππ|Q^lat_j|K⟩ of the unrenormalized lattice operators in the 7-basis, in infinite volume and physical units, with coefficients that we refer to as the "lattice Wilson coefficients" (Eq. (79)). Here T_ij is the 10 × 7 matrix expressing the 10 linearly-dependent operators in terms of the seven independent operators in the chiral basis, given in Eq. (74). The matrix Z^{MS←lat} is the product of the 7 × 7 perturbative matrix expressing the seven MS operators in terms of the seven RI operators and the non-perturbative 7 × 7 matrix which determines the RI operators in terms of the lattice operators.
We can then use Eq. (79) to remove the matrix element of one of the operators from Im(A_0): we add a multiple λ of the corresponding expression for Re(A_0), substitute the precisely-measured experimental value of Re(A_0), and choose λ such that the coefficient of the chosen matrix element vanishes. We could instead choose the parameter λ to give the result for Im(A_0) with the smallest statistical error. Since the value obtained for λ from this procedure is extremely close to that needed to remove the matrix element M^lat_3, we adopt the simpler procedure of eliminating M^lat_3, leading to the results given in Eqs. (85) and (86).
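A sketch of the substitution, writing w^Im_j and w^Re_j for the lattice Wilson coefficients multiplying the 7-basis matrix elements in Im(A_0) and Re(A_0), respectively (the notation is illustrative):
\[ {\rm Im}\,A_0 = \sum_j w^{\rm Im}_j M^{\rm lat}_j \;=\; \sum_j \big(w^{\rm Im}_j - \lambda\, w^{\rm Re}_j\big) M^{\rm lat}_j \;+\; \lambda\, {\rm Re}\,A_0^{\rm expt}\,, \qquad \lambda = \frac{w^{\rm Im}_3}{w^{\rm Re}_3}\,, \]
where the second equality uses \({\rm Re}\,A_0 = \sum_j w^{\rm Re}_j M^{\rm lat}_j\) with the experimental value substituted, and this choice of λ removes the M^lat_3 term.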
The resulting real part of the complex phase is in complete agreement with the value of 0.9998(2) obtained by combining PDG inputs [6] and the dispersive values for the phase shifts [16].
It is illustrative to break the value of Re(ε′/ε) into the so-called "QCD penguin" contribution, proportional to Im(A_0)/Re(A_0), and the "electroweak penguin" contribution, proportional to Im(A_2)/Re(A_2), the sum of which is equal to Re(ε′/ε). These terms have opposite sign, such that the sum involves an important cancellation. For the electroweak penguin contribution we find a value whose size relative to the QCD penguin term is quantified below. Using the results for Im(A_0) obtained using the SMOM(/q, /q) intermediate scheme we find
Re(ε′/ε)_QCDP = 0.00297(26),   (96)
and likewise for the SMOM(γ_µ, γ_µ) intermediate scheme,
Re(ε′/ε)_QCDP = 0.00283(25).
We observe that the two terms cancel at the 27(4)% and 28(4)% level relative to the QCDP contribution for the SMOM( / q, / q) and SMOM(γ µ , γ µ ) results, respectively. This degree of cancellation is considerably less than the 71(36)% observed in our 2015 analysis. Here the errors are statistical only.
We can also compute a purely lattice value of Re(ε′/ε) using Re(A_0) from our lattice calculation. Among the differences with respect to our 2015 analysis is the change in the value of the experimental inputs, notably that of the CKM ratio τ from 0.001543 − 0.000635i to 0.001558 − 0.000663i.
We first note that repeating the ππ two-point function analysis for our larger data set, but with a one-state fit to a single operator (ππ(111)) and a fit range 6-25 to match that of the 2015 analysis, yields a result indicating that the 3.4× increase in statistics is not by itself sufficient to account for the difference.
Again we observe a small increase, but one that is insufficient to account for the difference.
The result in Eq. (105) now differs from our primary result only in the ππ and K → ππ fitting strategies. Adopting the final fit ranges determined for the ππ and K → ππ fits in Secs. III and IV, such that the analysis now differs only in the number of ππ operators, gives the result in Eq. (106), which is much closer to our final result. The behavior we observe here is consistent with that displayed in Fig. 6, where we plot the dependence of the fitted matrix elements on the cut t'_min and the number of ππ operators included in the fits to the matrix elements (the ππ two-point function fits remain unchanged between the results displayed in this figure). This figure shows a significant discrepancy between the Q_6 matrix element obtained from a one-operator, one-state fit with t'_min = 4 and the plateau observed when further operators are included. With increased statistics the onset of the apparent plateau for the one-operator, one-state fit does not occur until t'_min = 5 (equal to the t'_min = 5 used to obtain the result in Eq. (106)), but the resulting value for the Q_6 matrix element is still several standard deviations larger than the strong plateau observed in the multi-operator fits.
We therefore conclude that the difference in Re(ε′/ε) between our present and 2015 analyses can be attributed primarily to unexpectedly large excited-state contamination in our previous analysis, masked by the rapid reduction in the signal-to-noise ratio, and that multiple operators are essential to isolate the ground-state matrix element even with large statistics.
VII. SYSTEMATIC ERRORS
In this section we describe the procedure used to estimate the systematic errors on our results.
We will quote the values as representative percentage errors on either the matrix elements or on A 0 as appropriate. A discussion of the systematic errors in the ∆I = 3/2 calculation can be found in Ref. [2].
A. Excited state contamination
In Sec. IV E we devoted considerable effort to finding an optimal fit window in which excited state effects are minimal. We were unable to find evidence of such effects arising from excited kaon states, which is to be expected given both the large relative energy of such states and also the fact that the rapid growth of statistical noise as the four-quark insertion is moved away from the ππ operator implies that the data furthest from the kaon operator dominates the fit results. As such we do not assign a systematic error to possible contamination from excited kaon states.
As for the contribution of excited ππ states, we found little evidence for such effects even within the single-operator fits to the ππ(111) data, except for the Q_5 and Q_6 matrix elements, where the single-operator fits showed statistically significant deviations from the common plateau region that did not die away until t' = 6. We observed that adding further sink operators and allowing for more ππ states substantially reduced the excited-state contamination, such that the fits were highly consistent even if we include data at times as low as t' = 3. Despite this we chose a conservative uniform cut of t'_min = 5 for our fits.
In order to assign a numerical error to the contamination from excited ππ states, we consider the comparison of the 3-operator, 3-state fit with t'_min = 4 and the 3-operator, 2-state fit with t'_min = 5, the latter being our chosen best fit. The former includes a third state and with t'_min = 4 appears capable of describing the data well outside of the fit range, as we observed in Fig. 11 (lower-left panel). We compute relative differences under the bootstrap between the values of the ground-state matrix elements, the results of which are shown in Tab. XX. The only statistically resolvable difference, at 1.5σ, is for the Q_9 matrix element, which makes only a negligible contribution to Im(A_0). For the dominant Q_4 and Q_6 matrix elements the differences cannot be resolved within our errors. We therefore conclude that the excited-state systematic error is likely to be much smaller than our dominant systematic errors and can be neglected.
B. Unphysical kinematics
As our values of E_ππ and m_K differ by 2.2(3)%, the K → ππ matrix elements are not precisely on shell. As discussed in Sec. IV, the primary consequence of these unphysical kinematics is the appearance of a divergent contribution from the pseudoscalar operator s̄γ_5 d that vanishes on shell by the equations of motion. In order to suppress this error we perform an explicit subtraction of the pseudoscalar operator, which leaves behind a finite, regulator-independent term that represents the dominant remaining systematic error from the unequal kaon and ππ energies. As we are close to being on shell we can reasonably assume a linear ansatz for the dependence of our result on the energy difference E_ππ − m_K. We estimate the associated systematic error by observing the change in the Q_2 matrix element as the kaon mass is increased by 4.5%. The measurement was performed using 69 configurations of our original ensemble [1], with 3 different K → π time separations (10, 12, and 14), and we observed a 6.9% increase in the matrix element. We scale this increase by the relative difference between our kaon and ππ energies, giving 3%.
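For reference, a sketch of the subtraction in a conventional form (the precise definition of the coefficients α_i used here follows Sec. IV):
\[ Q^{\rm sub}_i = Q_i - \alpha_i\, \bar s \gamma_5 d\,, \qquad \alpha_i = \frac{\langle 0 | Q_i | K \rangle}{\langle 0 | \bar s \gamma_5 d | K \rangle}\,, \]
so that the power-divergent pseudoscalar contribution cancels while an O(m_s Λ²_QCD) remainder survives off shell.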
Another means of estimating this systematic error is to vary the subtraction coefficients α i by an amount consistent with the expected size of the residual contribution of the pseudoscalar operator.
Given that the operator is dimension-3, its coefficient is originally O(m_s/a²), where the strange quark mass is in physical units. After the subtraction is performed, the residual term is expected to be of size O(m_s Λ²_QCD), which has a relative size of ∼ a²Λ²_QCD, or ∼5%, of the original contribution for Λ_QCD = 300 MeV. Increasing the subtraction coefficients α_i by this amount gives rise to the differences in the unrenormalized lattice matrix elements given in Tab. XXI. The observed variations are generally consistent with the above, but to be conservative we assign a relative systematic error of 5% on the matrix elements resulting from the off-shell difference E_ππ ≠ m_K.

C. Finite lattice spacing

We use the value provided in Ref. [1] as an estimate of the finite lattice spacing systematic error. This was obtained by comparing the values of the ∆I = 3/2 matrix elements between the continuum limit [2] and the calculation [14] performed on our 32³ × 64, β = 1.75 (32ID) lattice.
The parameters of the latter ensemble are identical to those used in this work to compute A_0, albeit without G-parity boundary conditions and with a larger-than-physical light quark mass giving a unitary pion mass of 170 MeV. The MS values for the three continuum matrix elements that contribute to A_2 are obtained by combining the continuum values of those matrix elements in the SMOM(/q, /q) scheme (Tab. XIV of Ref. [2]) with the RI → MS renormalization matrix computed on the 32ID lattice (Eq. 66 of Ref. [2]). As such this estimate addresses only the discretization errors on the matrix elements and not those on the renormalization factors (which are expected to be small). We find the values given in Tab. XXII. Averaging the three relative errors we arrive at an estimate of 12% discretization errors on the matrix elements.
D. Lellouch-Lüscher factor
As described in Sec. VI A, the calculation of the Lellouch-Lüscher factor, F, that accounts for the power-law finite-volume corrections to the matrix element requires an ansatz for the derivative of the ππ phase shift with respect to energy. In Sec. III E we present values for this derivative obtained from three methods:
• The Schenk parameterization [57] of the dispersive energy dependence obtained in Ref. [16];
• A linear approximation in the ππ energy above threshold, dδ_0/dE_ππ = δ_0/(E_ππ − 2m_π), which is inspired by the dispersive low-energy dependence found in Ref. [16] and can be related to dδ_0/dq via Eq. (70);
• A direct lattice calculation of the phase shift at energies close to and including the kaon mass.
Ignoring the noisier of the two lattice determinations, the results varied between dδ_0/dq = 1.26 and 1.41, a 12% spread. The resulting values of F differ by 1.5% since the dominant contribution arises from the derivative of the analytic function φ. We therefore assign a 1.5% systematic error to the matrix elements from this source.
E. Exponentially-suppressed finite volume corrections
We expect the remaining finite-volume corrections to our matrix elements to be dominated by the (exponentially-suppressed) interactions between the final-state pions that are not accounted for by the Lüscher and Lellouch-Lüscher prescriptions. In Refs. [2,14] we performed an in-depth analysis of the finite-volume errors on the matrix elements that comprise A_2 using SU(3) chiral perturbation theory, in which the mesonic loop integrals are replaced by discrete sums over the allowed momenta. We do not expect these effects to depend strongly on the form of the four-quark operator, and indeed comparable O(6-6.5%) corrections were estimated for both classes of operator that enter the calculation of A_2. We therefore assign a representative 7% systematic error to the matrix elements.

F. Renormalization

In the calculation of our step-scaled non-perturbative renormalization factors with scale µ = 4.01 GeV we have not incorporated the effects of the G_1 operator. A previous lattice study [37], performed in the SMOM(γ_µ, γ_µ) scheme and utilizing step-scaling from a low scale of µ = 1.33 GeV on our 32ID ensemble to a high scale of 2.29 GeV on a finer lattice, revealed the effects on A_0 of including this operator to be on the order of a few percent when combined with the matrix elements measured in our 2015 work [1]. Unfortunately, the statistical errors on the differences in the renormalized matrix elements at µ = 2.29 GeV with and without G_1 included were found to be too large to resolve the effect with any precision, and we find that this also applies to the matrix elements obtained in the present work. (The renormalization matrices with and without G_1 at µ = 2.29 GeV can be found in Tab. X.) As discussed in Ref. [37], the increase in the relative error on the bootstrap differences is associated largely with the step-scaling matrix Λ^RI that describes the running between the low and high energy scales. However, it is reasonable to expect that the largest effects of neglecting G_1 appear at the low energy scale in the step-scaling, where the QCD coupling is larger. It is therefore convenient to study instead the differences between the elements of the 7×7 lattice → MS renormalization matrix R^{MS←RI←lat}(µ) = H^{MS←RI}(µ) Z^{RI←lat}(µ), where H is the perturbative matching matrix. In the absence of systematic errors the matrix R^{MS←RI←lat} is independent of the intermediate RI scheme. We can then study this systematic error by examining the matrix Ξ(µ), constructed from the R-matrices obtained via the two intermediate schemes (a sketch is given below), where I is the 7 × 7 unit matrix and |·| implies that the absolute value of each element is taken.
The combination of R-matrices in this construction converts from the lattice scheme to MS through one intermediate scheme and back to the lattice scheme via the other, and hence becomes the unit matrix if no systematic errors exist. The deviation from the unit matrix is therefore a measure of the size of the systematic error. Under the reasonable assumption that the systematic errors in the two schemes are comparable in size, we expect the elements of Ξ to vary between zero and approximately twice the size of the systematic error present in each. We therefore assign a percentage systematic error that is one half of the largest observed element of Ξ at a scale µ.
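Explicitly, in a schematic notation with A and B labeling the two intermediate SMOM schemes (the ordering of the factors in the product is immaterial to the interpretation):
\[ R_{X}(\mu) \equiv H^{\rm MS\leftarrow RI_X}(\mu)\, Z^{\rm RI_X\leftarrow lat}(\mu)\,, \qquad \Xi(\mu) = \Big|\, \big[R_{B}(\mu)\big]^{-1} R_{A}(\mu) \;-\; I \,\Big|\,, \]
where the absolute value is taken element by element.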
In Tab. XXIV we tabulate the non-zero elements of Ξ for various MS scales and step-scaling procedures. Once again we observe that the effects of including or discounting the G 1 operator, while harder to statistically resolve after passing through the step-scaling procedure, are at the percent scale.
As expected there is a general trend towards smaller values as we increase the scale, consistent with the factor-of-three decrease in α_s² between 1.33 GeV and 4.01 GeV that is expected to describe the scaling of the missing NNLO terms. Unfortunately, the statistical errors on the results at 4.01 GeV are too large to resolve the residual systematic effects. Nevertheless, considering the results of this table and also the 3-4% differences observed in Re(A_0) and Im(A_0) between the schemes in Sec. VI C, we assign a 4% systematic error to the non-perturbative renormalization factors.
H. Parametric errors
We propagate the parametric uncertainties shown in Tab. XI to ReA 0 and ImA 0 . For ReA 0 the largest such uncertainty is the charm-mass dependence, which, however, is only a 0.3% effect.
For Im(A_0), the largest uncertainty is 5% from the τ parameter, 3% from α_s, and less than 1% from the charm and top quark masses. The other uncertainties have been estimated but are negligible compared to those quoted. We therefore estimate a total parametric uncertainty of 6% for Im(A_0) and 0.3% for Re(A_0).

I. Wilson coefficients

The variations of Im(A_0) among the available determinations of the Wilson coefficients are quite consistent, in the range 11-15%. This suggests that the bulk of the observed difference arises from the perturbative 3-to-4 flavor matching and running above the charm threshold, which is common to all of these determinations, and that improved theory input for the 3-to-4 flavor matching could significantly reduce it. (Note that in our calculation we take the matching scale across a flavor threshold equal to the corresponding quark mass in order to avoid large logarithms. Additional insights could be gained by studying the dependence on this matching scale as in Ref. [52].) In conclusion we assign a 12% systematic error on both Re(A_0) and Im(A_0) associated with the NLO determination of the Wilson coefficients.
J. Error budget
We divide the systematic errors into those that affect the calculation of the matrix elements of the MS weak operators Q_j and those that enter when these matrix elements are combined to produce the complex, physical decay amplitude A_0. The former are collected in Tab. XXV and the latter in Tab. XXVI. Note that our 4% estimate of the renormalization systematic error includes both lattice systematic errors and those associated with the truncation of the perturbative series in the RI → MS matching. While the latter are inappropriate to apply to matrix elements in the non-perturbative schemes, due to our estimation procedure we are at present unable to isolate these two effects and as such apply the full 4% systematic error also to these RI matrix elements.
VIII. FINAL RESULTS AND DISCUSSION
In this section we collect our final results including systematic errors and discuss the implications of our results. For consistency with our previous work we will use the SMOM( / q, / q) intermediate scheme for our central value.
A. Matrix elements
The renormalized, infinite-volume matrix elements in the RI and MS schemes are given in Tab. XIV, where the errors are statistical only. The corresponding relative systematic errors can be found in Tab. XXV. For the convenience of the reader we have reproduced the matrix elements in the SMOM( / q, / q) scheme including their systematic errors in Tab. XXVII. In order to allow the reader to compute derivative quantities from these matrix elements, the covariance matrices for the renormalized matrix elements in the SMOM( / q, / q) and MS schemes at 4.01 GeV can be found in Tabs. XV and XVI, respectively.
B. Decay amplitude
For the real part of the decay amplitude we take the value from Eq. (77a) and apply the systematic errors given in Tab. XXVI, obtaining the final value quoted in Eq. (109). The breakdown of the contributions of each of the 10 operators to these amplitudes can be found in Tab. XVIII. We observe that, at the scale at which we are working, the dominant contribution to Re(A_0) (97%) originates from the tree operator Q_2, while Q_1 has a contribution of about 13% that is largely cancelled by that of the penguin operator Q_6 [58,59]. Likewise, the dominant contribution to Im(A_0) is from the QCD penguin operator Q_6 [58,59], with a 14% cancellation from Q_4. The "∆I = 1/2 rule" refers to the enhancement by almost a factor of 450 of the I = 0 K → ππ decay rate relative to that of the I = 2 decay, corresponding to the experimentally-determined ratio Re(A_0)/Re(A_2) = 22.45(6). A factor of two contribution to this ratio arises from the perturbative Wilson coefficients [60-62]. While the remaining factor of ten has been viewed for some time as a consequence of the strong dynamics of QCD, the origin of this large factor has remained something of a mystery with no widely-accepted dynamical explanation.
In the past [14,15,63], and most recently in Ref. [2], when simulating with physical pion masses we have observed a sizeable cancellation between the two Wick contractions of the dominant (27, 1) operator contributing to the ∆I = 3/2 decay amplitude, leading to a significant suppression of Re (A 2 ). In these calculations we reproduced the experimental value of Re (A 2 ) and concluded that this cancellation was likely to be a very significant element in the ∆I = 1/2 rule. We stress that the cancellation between the two leading contributions to Re (A 2 ) depends sensitively on the light quark mass and becomes much less significant as the light quark mass is increased above its physical value. Note also that such a cancellation is not consistent with naïve factorization, which predicts that both contributions have the same sign and differ in size by a factor of three due to color suppression, although calculations using chiral perturbation theory and the 1/N c expansion [64,65] have previously suggested a reduction in A 2 .
In order to obtain a quantitative, first-principles result for Re(A_0)/Re(A_2), we also require knowledge of Re(A_0), which we provide in Eq. (109) of the present paper. Combining this with our earlier result for A_2 [2], we obtain the value given in Eq. (111), where the errors are statistical and systematic, respectively. The value in Eq. (111) agrees very well with the experimental result, demonstrating quantitatively that, within the uncertainties, the ∆I = 1/2 rule is indeed a consequence of QCD and thus providing an answer to an important long-standing puzzle.
For ε′/ε we use Eq. (2), combining the lattice values for the imaginary parts of the decay amplitudes with the experimental measurements of the real parts. The systematic error for Im(A_0) is taken from Tab. XXVI and that of Im(A_2) from Eq. 64 of Ref. [2]. The statistical and systematic errors on Im(A_0) and Im(A_2) are combined in quadrature, and their effect on ε′/ε is enhanced by the cancellation between the two terms in Eq. (2). However, one further important systematic error should be addressed: that arising from the effects of electromagnetism and the isospin-breaking difference, m_d − m_u, between the down and up quark masses.
While for most quantities these corrections enter at the 1% level or below, for ε′ this familiar situation does not hold. As can be seen from the formula used to compute ε′ in the Standard Model, given in Eq. (2), the I = 0 and I = 2 amplitudes A_0 and A_2 enter with equal weight. However, as is summarized by the ∆I = 1/2 rule, the amplitude A_2 is 22.5 times smaller than A_0. Thus, an isospin-breaking correction of 1% of A_0 appearing in A_2 represents an O(20%) correction to A_2 and a potential correction to ε′ of 20% or more.
The effects on ε′ of electromagnetism and m_d − m_u have been the subject of active research for some time [66-68]. The most recent results are those of Cirigliano et al. [68]. They provide a correction that is appropriate for our calculation, in which the contribution of the electroweak penguin operators Q7 and Q8 has been included. Their result is parametrized by Ω̂_eff, which is introduced into a version of Eq. (2) that incorporates these effects, and they find Ω̂_eff = (17.0 +9.1/−9.0) × 10⁻². Here we are reproducing Eqs. (54) and (60) of Ref. [68]. Since a careful discussion of these corrections is beyond the scope of this paper, we choose to treat these effects of isospin breaking as a systematic error whose size is given by the effect of including Ω̂_eff in Eq. (112). We find a result in which the errors are statistical and systematic, with the systematic error separated into isospin-conserving and isospin-breaking parts, respectively. We note that if we were to apply this negative correction directly to our result for Re(ε′/ε), the central value obtained, 0.00167, would nearly coincide with the experimental value, albeit with appreciable errors.
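For orientation, a commonly used schematic form of how an isospin-breaking parameter of this kind enters the master formula is written out below in LaTeX; this is an illustrative standard parametrization and an assumption of this note, not necessarily the exact Eq. (112) of the paper.

\[
\frac{\varepsilon'}{\varepsilon} \;\simeq\;
\frac{i\,\omega\, e^{i(\delta_2-\delta_0)}}{\sqrt{2}\,\varepsilon}
\left[\frac{\operatorname{Im}A_2}{\operatorname{Re}A_2}
-\bigl(1-\hat{\Omega}_{\rm eff}\bigr)\,\frac{\operatorname{Im}A_0}{\operatorname{Re}A_0}\right],
\qquad \omega=\frac{\operatorname{Re}A_2}{\operatorname{Re}A_0}.
\]

In conventions where the A0 term dominates Re(ε′/ε) with a positive contribution, a positive Ω̂_eff reduces the prediction, consistent with the negative shift described above.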
Our first-principles calculation of ε′/ε also allows us to place a new, horizontal-band constraint on the CKM unitarity triangle in the ρ̄-η̄ plane. In Fig. 12 we overlay this band with constraints arising from other sources. We find that our result is consistent with the other constraints and does not at present suggest any violation of the CKM paradigm. For more information on how this band was obtained, as well as the corresponding plot obtained using our 2015 results, we refer the reader to Ref. [72]. The error bands represent the statistical and systematic errors combined in quadrature. Note that the band labeled ε′ is historically (e.g. in Ref. [72]) labeled as ε′/ε, where ε is taken from experiment.
The calculation is performed on a single Möbius domain wall fermion ensemble with the Iwasaki+DSDR gauge action, with an inverse lattice spacing of 1.378(7) GeV and physical pion masses. G-parity boundary conditions are used in the three spatial directions, which induce non-zero momentum for the ground-state pions so that the energy of the lightest two-pion state matches the kaon mass to around 2%, thereby ensuring a physical, energy-conserving decay.
The new calculation reported here is based on an increase by a factor of 3.4 in the number of Monte Carlo samples and includes two additional ππ interpolating operators, which have dramatically improved our control over contamination arising from excited ππ states. The greater resolution among the excited finite-volume ππ states provided by our now three interpolating operators has allowed us to resolve the approximately 2σ discrepancy between our earlier lattice result for the I = 0 ππ scattering phase shift and the dispersive prediction, as will be detailed in Ref. [18]. These improved techniques result in a significant, 70% (2.6σ, if σ is determined from only the statistical error) relative increase in the size of the unrenormalized lattice value.

We are presently working [73] to circumvent this issue by computing the 3- to 4-flavor matching non-perturbatively using a position-space NPR technique [44].
Finally, in the current calculation we have adopted a new bootstrap method [27] to determine the χ² distributions appropriate for our calculation, in which the data are both correlated and non-Gaussian. The resulting improved p-values provide better guidance in our choice of fitting ranges and multi-state fitting functions.
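As an illustration of the general idea (not the specific procedure of Ref. [27]), the following minimal Python sketch estimates an empirical χ² distribution, and hence a p-value, by resampling correlated, non-Gaussian data; "chi2_of" stands for a user-supplied correlated fit returning the χ² of a data set and is an assumption of this sketch.

import numpy as np

def bootstrap_pvalue(samples, chi2_of, n_boot=2000, seed=0):
    # samples: array of shape (n_cfg, n_points), one row per configuration
    rng = np.random.default_rng(seed)
    n_cfg = samples.shape[0]
    chi2_obs = chi2_of(samples)              # chi^2 of the actual data set
    mean = samples.mean(axis=0)
    chi2_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_cfg, n_cfg)  # resample configurations with replacement
        resampled = samples[idx]
        # recentre each bootstrap sample on the original mean so that only the
        # fluctuations, not a shifted central value, enter the distribution
        recentred = resampled - resampled.mean(axis=0) + mean
        chi2_boot[b] = chi2_of(recentred)
    # p-value: fraction of bootstrap chi^2 values at least as large as observed
    return np.mean(chi2_boot >= chi2_obs)

The resulting p-values follow the actual sampling distribution of the data rather than an ideal χ² distribution, which is what makes them useful for guiding the choice of fit ranges.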
Finite lattice spacing effects remain a significant source of systematic error, as at present we have computed ε′ at a single, somewhat coarse lattice spacing. In the future we intend to follow the procedure used in our A2 calculation [2] to compute A0 at two different lattice spacings, allowing us to perform a full continuum limit. This is hampered by the need to generate new ensembles with G-parity boundary conditions, which, alongside the high computational cost of the measurements and the need for large statistics, requires significantly more computing power than is presently available.
A second important systematic error, which we plan to reduce in future work, comes from the effects of electromagnetic and light-quark-mass isospin breaking. As discussed in Sec. VIII D, the small size of the amplitude A2 relative to A0 gives a potential twentyfold enhancement of such effects, which are normally at the 1% level. The effects of electromagnetism and the quark mass difference m_d − m_u have been studied in considerable detail using chiral perturbation theory and large-Nc arguments, most recently in Ref. [68]. We take the size of their correction as an important systematic error for our present result and are exploring possible methods to also use lattice techniques to determine these effects [74,75].
The third error here is the systematic error associated with isospin-breaking and electromagnetic effects, while the first and second errors are the statistical error and the remaining systematic error, respectively.
These values are consistent within the quoted errors.
We believe that ε′ continues to offer a very important test of the Standard Model, with exciting opportunities for the discovery of new physics. For this promise to be realized, substantially more accurate Standard Model predictions are needed. Important improvements can be expected from a simple extension of the work presented here, studying a sequence of ensembles with decreasing lattice spacing so that a continuum limit can be evaluated. In addition, we are developing a second, complementary approach to the study of K → ππ decay which is based on periodic boundary conditions. This avoids the complexity of the G-parity boundary conditions used in the present work but requires that higher excited ππ states be used as the decay final state [76]. More challenging is the problem posed by the inclusion of electromagnetism, where new methods [74,75] are needed to combine the finite-volume methods of Lüscher [11] and Lellouch and Lüscher [12] with the long-range character of electromagnetism.
ACKNOWLEDGMENTS
We would like to thank our RBC and UKQCD collaborators for their ideas and support. We are also pleased to acknowledge Vincenzo Cirigliano for helpful discussion and explanation of isospin-breaking effects. The generation of the gauge configurations used in this work was primar-

Details of the operators can be found in Appendices B.1 and B.2 of Ref. [33].
For this appendix we will utilize the notation described in Sec. III A, whereby the quark field operators are placed in two-component "flavor" vectors ψ_l and ψ_h for the light and heavy quarks, respectively, and the corresponding propagators are matrices also in this flavor index. In this notation the creation operator for the G-parity-even neutral-kaon analog is defined such that its physical component corresponds to the usual neutral kaon operator (cf. Sec. VI.A of Ref. [10]); the σ creation operator is defined analogously. For convenience we will treat the meson bilinears as point operators in which both quarks reside on the same lattice site. (In our actual lattice calculation we use more elaborate source and sink operators, but those details are not needed to specify how we evaluate the Wick contractions.) The ten effective four-quark operators Q_i for i ∈ {1 . . . 10} written in the above notation are given in Sec. 3.2.2 of Ref. [33]. While their exact forms are not important for this discussion, we highlight the fact that the operators are written in terms of a common set of matrices M^μ_{i,V±A}, where the F_i are diagonal flavor matrices that pick out either the upper (0) or lower (1) element of the vector upon which they act. The matrices M^μ_{i,V±A} appear inside products of two bilinear operators, and the space-time index μ is summed over implicitly; following the notation of Ref. [33] we will suppress this index.

The Wick contractions are grouped into four types. The type3 and type4 diagrams are those that contain a quark loop at the location of the four-quark operator, with type4 corresponding to the subset of those diagrams that are disconnected (i.e. for which the σ operator self-contracts). For the ππ(. . .) operators the remaining, connected, contractions can be subdivided based on whether the two pion bilinear operators are directly connected by a quark line (type2) or not (type1); no such distinction exists, of course, for the σ sink operator, and hence we classify all its remaining diagrams as type1.
As in Ref. [33] it is convenient to write the ten expressions A_i in terms of a common basis of, in this case, 23 functions D_α(Γ_1, Γ_2), where the subscript indexes the function and Γ_{1,2} are spin-flavor matrices.
We will first write down the expressions for the correlation functions A_i in terms of these functions and will conclude the section with their definition. We list the contributions for each of the contraction types in turn, beginning with the type1 contractions, which involve functions such as

D_1(Γ_1, Γ_2) = tr[ Γ_2 G^l_{y,x} γ_5 G^h_{x,y} Γ_1 G^l_{y,z} G^l_{z,y} ],  (A9a)
D_6(Γ_1, Γ_2) = tr[ G^h_{x,y} Γ_1 G^l_{y,x} γ_5 ] tr[ G^l_{z,y} Γ_2 G^l ],  (A9b)

where G^l and G^h denote light- and heavy-quark propagators between the indicated lattice sites. We use the shorthand ABC . . . to denote the n-point Green's functions of the operators A, B, C, and so on, in descending time order.
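As a purely illustrative aid, the sketch below shows how single- and double-trace functions of this kind reduce to ordinary matrix products once the propagators G^l, G^h and the spin-flavor matrices Γ are represented as dense matrices; the argument names and the final light-quark propagator in D6 are assumptions of the sketch, not definitions from the paper.

import numpy as np

def chain_trace(*mats):
    # trace of an ordered product of matrices
    prod = mats[0]
    for m in mats[1:]:
        prod = prod @ m
    return np.trace(prod)

def D1(Gamma1, Gamma2, g5, Gl_yx, Gh_xy, Gl_yz, Gl_zy):
    # single-trace structure, as in Eq. (A9a)
    return chain_trace(Gamma2, Gl_yx, g5, Gh_xy, Gamma1, Gl_yz, Gl_zy)

def D6(Gamma1, Gamma2, g5, Gh_xy, Gl_yx, Gl_zy, Gl_yz):
    # product of two traces, as in Eq. (A9b); the final light propagator here is assumed
    return chain_trace(Gh_xy, Gamma1, Gl_yx, g5) * chain_trace(Gl_zy, Gamma2, Gl_yz)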
It is easy to see that the A vac | 28,236.6 | 2020-04-20T00:00:00.000 | [
"Physics"
] |
Study on the Implementation Method of Data Review for a Certain Weapon System in VxWorks
In the research and use of a certain large weapon system, practical navigation data is the most important and most effective basis for system analysis and fault reproduction. Aiming at the characteristics of the weapon launching and control system, such as multiple data types, complex data properties, and complex data management, this article uses an embedded computer system based on VxWorks to realize the review (playback) of multiple kinds of data in timer-interrupt mode. Experiments show that the real-time review system for large-capacity practical navigation data designed in this article is effective and reliable and completely fulfils the design requirements.
Introduction
In the research process of underwater weapons, to check and evaluate the launching and control system and the self-navigation system, corresponding practical navigation experiments must be carried out to test the functions and reliability of the weapon system and the self-navigation system. However, because of the expensive cost of practical experiments and the influence of many factors such as weather and sea state, the number of experiments is small. Moreover, for the weapon system, the launching state, sea state, and exterior environment differ between trials, and the targets differ greatly too, so each state in each launching process should be recorded in real time for later data review and analysis. For the research process this systematic work is very important: it can greatly reduce the number of experiments, save cost, shorten the research period, support effective analysis of the experiment data, and in particular allow problems to be found and analyzed by reviewing faults.
System components
The upper computer and the lower computer in the system are both militarily reinforced computers running the VxWorks operating system. The system components are shown in Figure 1. In Figure 1, the upper computer mainly handles operator inputs and displays the data in real time, while the lower computer mainly handles the interaction with the various exterior devices; the interfaces of the exterior devices are listed in Table 1. The functional components are shown in Figure 2.
(1) Main specifications of the A/D interface: 12 high-speed input channels; 16-bit resolution with an independent converter for each channel; a maximum sampling rate of 400, adjustable by program; a 32K dynamic cache; and interrupt sources. There are 7 interrupt sources; the No. 2, No. 3, and No. 4 interrupt sources are, respectively, the cache-empty interrupt, the interrupt raised when the cache falls below 1/4 full, and the interrupt raised when the cache falls below 3/4 full. When the data in the cache fall below the corresponding level, the D/A interface sends the corresponding interrupt request to the computer and sets the interrupt flag to "1"; this flag can also be polled by software. During continuous data review, to prevent the cache from overflowing or running empty (in which case the sampling clock would read invalid data), the cache of the board should be kept between 1/4 and 3/4 full.
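The flow-control rule described above can be sketched as follows; this is an illustrative Python sketch written only for clarity, with the 1/4 and 3/4 thresholds taken from the text and the function and return names being assumptions, not part of the actual system.

def playback_flow_control(fill_level, capacity):
    # Keep the board cache between 1/4 and 3/4 full during continuous review,
    # so the output clock never reads from an empty cache and writes never overflow it.
    low = capacity // 4
    high = (3 * capacity) // 4
    if fill_level <= low:
        return "write_next_block"   # top the cache up before it runs dry
    if fill_level >= high:
        return "hold_writes"        # wait for the output clock to drain the cache
    return "no_action"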
Review data source
For the real-time data review system, the data that need to be reviewed have many types and complex structures, and a large part of them are randomly arriving data such as network data, serial data, and USB data. At the same time, the data acquired by the A/D converter have characteristics such as large volume, high demands on the system, and high resource occupation. Therefore, the data sources should be processed separately: the randomly arriving data are time-stamped, whereas the A/D data, because of their fixed sampling rate and exact sampling times, need no per-sample timestamp; only the start time of acquisition is recorded. The A/D data format is pure binary, with at most 12 data channels. The types of review data sources are listed in Table 2.
Software design
There are three data-transmission modes that can realize the signal output: the inquiry (polling) mode, the timer-interrupt mode, and the hardware-interrupt mode. The VxWorks embedded real-time operating system is adopted in this system, and the data review frequency can reach at most 50 kHz, i.e. a review interval of 0.02 ms between two successive data items. Because the inquiry mode would occupy a large share of CPU resources and its low speed could cause the reviewed data to lose alignment, only the timer-interrupt mode is adopted in this system.
Definition of the data frame
The dynamic cache of the D/A board adopted in this system is 32K, and its data-flow modes include an open data-flow mode and a double-buffer data-flow mode. In the storing and reviewing process, the A/D data act as the clock, and every 500 samples form one frame; the network data, serial data, CAN-bus data, digital I/O data, and USB data that arrive randomly are recorded at the same time. During review, the data are stored and read according to this frame structure, which is shown in Figure 3. The high-speed A/D data are stored in the double-buffer structure, and in the storing and reviewing process one data frame is the data block used as the reference for storing data and refreshing the screen. The randomly arriving data are stored in the same data block, each item is time-stamped when it is stored, and it is reviewed according to its timestamp. The time precision is 0.04 s, which not only ensures the continuity of the display but also meets the system's resource requirements.
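A minimal sketch of the frame structure described above is given below in Python; the field names and types are assumptions for illustration only, not the system's actual storage format.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ReviewFrame:
    # 500 A/D samples form one frame and act as the time base for playback.
    ad_samples: List[int] = field(default_factory=list)
    # Randomly arriving records (network, serial, CAN, digital I/O, USB),
    # each stored with the timestamp used to schedule its review.
    async_records: List[Tuple[float, str, bytes]] = field(default_factory=list)

    def is_complete(self) -> bool:
        return len(self.ad_samples) >= 500

    def add_async(self, timestamp: float, source: str, payload: bytes) -> None:
        self.async_records.append((timestamp, source, payload))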
Software design
According to the frame time defined above, the time interval of the timer interrupt is 40 ms, so "tick = 40 ms" must be defined before the software connects the interrupt, which is realized by the call sysClkRateSet(400) in the program. The main functions used to realize the timer interrupt are:

STATUS sysClkConnect
    (
    FUNCPTR routine,  /* routine called at each system clock interrupt */
    int     arg       /* argument with which to call routine */
    )

void sysClkEnable (void)

The software flow chart of the timer interrupt is shown in Figure 4.
Result of data review
The system was tested on a practical navigation experiment; the quantities of experiment data are as follows.
(1) A/D data quantity: about 12 million bytes, from one exterior device with 4 analog channels, a sampling rate of 10K, a data resolution of 8 bits, and a data review time of 20 minutes. (2) Network data quantity: 2.7 million bytes, from 8 exterior devices.
The method for handling data loss in the data review is to analyze the data acquired by the A/D converter: if data are lost, faults exist in the data review, because the A/D data are acquired and reviewed strictly according to the sampling rate, so this channel is the main observation channel. The reviewed A/D data are shown in Figure 5. From Figure 5, the review system designed here has no data loss and the data clock is not distorted; CPU analysis shows that the review system occupies few system resources, with memory occupation below 30M and CPU occupation below 50%, which fulfils the requirements of the system.
Conclusions
The design and implementation of the data review system of a certain weapon system are introduced in this article. The system includes about 20 exterior devices, the data types are very complex, and the data quantity is large. Through effective analysis of the data characteristics, the data frame structure was designed, and the review data flow control was realized in timer-interrupt mode. The practical navigation experiment showed that the data review system was stable and reliable, and that it could effectively record and review the data without errors.

(2) Main specifications of the network interface: 10M/100M self-adaptive network interface; communication implemented with the UDP/IP protocol.
(3) Main specifications of the serial communication controller: 4 RS232 interfaces with a working baud rate of 9600 bps; 2 RS485 interfaces with a working baud rate of 9600 bps.
(4) Main specifications of the CAN bus controller: working baud rate of 1 Mbps.
(5) Main specifications of the D/A interface: 12-bit resolution.
Figure 2. Function Components | 1,992.2 | 2011-04-05T00:00:00.000 | [
"Computer Science"
] |
CNN BASED VEHICLE TRACK DETECTION IN COHERENT SAR IMAGERY: AN ANALYSIS OF DATA AUGMENTATION
The coherence image as a product of a coherent SAR image pair can expose even subtle changes in the surface of a scene, such as vehicle tracks. For machine learning models, the large amount of required training data is often a crucial issue. A general solution for this is data augmentation. Standard techniques, however, were predominantly developed for optical imagery; they do not account for SAR-specific characteristics and are therefore only partially applicable to SAR imagery. In this paper several data augmentation techniques are investigated for their performance impact regarding a CNN-based vehicle track detection, with the aim of generating an optimized data set. Quantitative results are shown for the performance comparison. Furthermore, the performance of the fully-augmented data set is put into relation to the training with a large non-augmented data set.
INTRODUCTION
Synthetic aperture radar (SAR) imagery allows for two different approaches to change detection: amplitude change detection or coherent change detection if provided with a coherent image pair. The coherence image as a product of a coherent image pair can expose even subtle changes in the surface of a scene, such as the tracks made by vehicles. Several approaches to vehicle track detection exist in the literature, including e.g. the use of convolutional networks (Quach, 2017) or conditional random fields (Malinas et al., 2015). Others seek to enhance the coherence image with the aim of boosting a threshold-based change detection method (Hammer et al., 2021).
As is common for all image classification via machine learning models, the large amount of required training data is often a crucial issue. A general solution for data scarcity is data augmentation, where different techniques are used to expand the existing data set in size and quality (Shorten and Khoshgoftaar, 2019). Current techniques for coherent track detection seek to make synthetic data look more like measured data using machine learning algorithms (Lewis et al., 2019) or insert simulated tire tracks into non-simulated images to obtain a larger variety of images (Turner et al., 2012). Most fundamental are techniques using geometric and color space modifications; however, these standard techniques were predominantly developed for optical imagery, do not account for SAR-specific characteristics and are therefore only partially applicable to SAR imagery. Several of these techniques can be ruled out merely by considering the specific properties of SAR images; for some, however, the question arises how well they are suited for the task of coherent change detection and what impact they have on the actual track detection performance. In this paper several data augmentation techniques (geometric and color space transformations) are investigated for their performance impact regarding a convolutional neural network (CNN) based vehicle track detection. With the aim of generating an optimized training data set, they are compared among one another and subsequently put into relation to the training with a larger non-augmented data set. It is of interest whether the augmentation of samples originating from a single image can compare with an un-augmented large data set extracted from multiple diverse images.
The paper is structured as follows: Section 2 contains a description of the data set used in this study. In Section 3 the process of data augmentation is specified. Section 4 describes the CNN architecture and training process. The results are presented in Section 5, while Section 6 contains the conclusions and an outlook to future work.
DATA
The experiment is conducted on an airborne interferometric SAR data set of the POLYGONE area, located in southern Rhineland-Palatinate, Germany, where three distinguishable vehicle tracks were generated by vehicle movements between the two overflights. The tracks overlap to an extent and feature an axle width of 2.0 m ± 0.2 m and a wheel width between 0.37 m and 0.4 m. Otherwise this area was not affected by human action in between the times of the overflights. Figure 1 shows an optical image of the scene, where the area of vehicle movement is marked in orange. The recorded data set consists of multiple coherent image pairs, showing the same scene under different aspect angles. A manually performed vector-based extraction of the three vehicle tracks yields the corresponding reference data in the form of a binary image distinguishing track from background.
SAR imagery
The SAR data set was part of a measurement campaign in 2015, recorded by the SmartRadar experimental sensor of Hensoldt Sensors GmbH, mounted on a Learjet. This is an X-band sensor with resolution in the decimeter range. The six image pairs used in this study were recorded during two overflights over Bann B of POLYGONE Range, approximately 4 hours apart. In between the overflights the vehicle movement took place, whereas at the time of the overflights the scene was static. In Table 1 basic properties of the POLYGONE acquisition are summed up. For this investigation six image pairs have been selected featuring only very small acquisition angle differences, so that high coherence values can be achieved. All image pairs were coregistered and subsequently the coherence was computed using the classical formula and a 7 x 7 pixel window. Coherence takes values in the interval [0,1], where zero reflects total incoherence (black) and 1 implies a fully coherent signal (white). In Figure 2, the resulting coherence images C1-C6 are depicted, showing the whole scene of the POLYGONE area. Note that all SAR images in this paper are visualized with range direction on the x-axis and azimuth on the y-axis. In all six images wide horizontal stripes are visible which are caused by a flawed motion compensation during SAR data processing. For the task at hand, however, this is not considered to be a problem. As a matter of principle a high coherence level surrounding the changed regions is essential for a successful coherent change detection. The grassland cropped shortly before the measurement campaign features such an area of high coherence, thus making the vehicle track detection in this area possible. For the images C1-C6 the coherence levels vary somewhat, which was to be expected, since the acquisition angles and the angle difference between each image pair differ. For this, see the azimuth angle αM of each master image, the azimuth difference ∆α regarding each image pair, and the mean local coherence level γ̄C of said grassland area (measured in a 500 x 500 pixel window) in Table 2.

Figure 2. Coherence images: Image C1 as the basis of an augmented data set, Images C2-C5 for the generation of a large un-augmented data set, and Image C6 for the testing.
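For reference, the classical sample-coherence estimator over a sliding window can be sketched as below; this is a minimal Python version assuming coregistered complex SLC images, with the 7 x 7 window taken from the text and everything else illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=7):
    # gamma = |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>), boxcar average over a win x win window
    def box(x):
        return uniform_filter(x, size=win, mode="nearest")
    cross = s1 * np.conj(s2)
    num = box(cross.real) + 1j * box(cross.imag)   # average real/imaginary parts separately
    den = np.sqrt(box(np.abs(s1) ** 2) * box(np.abs(s2) ** 2))
    return np.abs(num) / np.maximum(den, 1e-12)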
Since the images are not calibrated and recorded under different aspect angles, they show varying track orientation and backscattering and they also result in a different coherence level, thus qualifying for independent training and test imagery. In the main part of this investigation, Image C1 was used as source material for the training data set (un-augmented and augmented), whereas Image C6 functioned as test environment. Note the profound orientation difference of the two images. Images C2 -C5 subsequently were used in combination with Image C1 to provide for a large, diverse un-augmented data set.
Reference data
Reference imagery was generated in three steps: Firstly, a manual vector-based extraction of the three vehicle tracks was conducted, where both lanes were accounted for; secondly, the tracks were rasterized to generate a mask; and lastly, the mask was subjected to a morphological dilation operation to enforce a standard wheel width. The result is a binary image distinguishing track from background. Figure 3 shows the process of reference generation for the vehicle tracks in Image C1. Images C2 -C6 were processed accordingly.
DATA AUGMENTATION
Using but one image for the extraction of a training data set, as in this case, inevitably leads to some deficits regarding object orientation and backscattering variety. Data augmentation methods can be used to widen that variety, so that the trained model generalizes better. This raises the question whether standard data augmentation techniques on a small data set can equal the use of a larger data set with a wider variety. In that regard the interferometric data set of POLYGONE is very suitable for such an investigation.
Standard methods
When it comes to data augmentation a wide range of methods have been introduced, ranging from standard basic image manipulation to more sophisticated methods. GAN-based augmentation or other deep learning based augmentation methods are amongst the most advanced methods and are capable of very complex image generation. For the task at hand, however, these methods are somewhat disproportionate and hence are not taken into account.
The focus instead is on the standard methods of image manipulation such as geometric and color space transformations. However, since these standard augmentation techniques were developed primarily for optical use, not all are reasonable for the SAR-specific case. One such case is the widely used method of scaling. While for optical use the simple resizing of an image to imitate another resolution or object size is valid, in the SAR-specific case this disregards the more complex workings of image processing. Not only would the radiometry be tampered with, but, for more complex objects with specular reflections, so would the characteristic features of the signatures. Further, the resolution of a SAR image depends only on the sensor and the acquisition mode and is constant over the entire image. There are further techniques that are obsolete due to the missing radiometric background, such as shearing or noise injection. Since in SAR images small changes in aspect angle can result in profound signature changes, in particular due to multibounce reflections, rotation augmentation is usually an unsuitable method as well. However, for flat objects at ground level, such as in this case, it is a different matter. The absence of orientation-dependent specular reflections and the object flatness eliminate the main objections. Whether a difference in range and azimuth pixel spacing may cause unrealistic distortions when rotated is deemed insignificant when compared with the high potential of rotation augmentation. Many color space transformations rely on the presence of multiple channels and for this reason cannot be transferred to SAR imagery. However, simple modifications can also be applied to grayscale images.
In the following, five augmentation techniques are applied to the training samples from Image C1, including translation (A), flipping (B) and rotation (C), as well as changes to contrast (D) and brightness (E).
Applying augmentation methods
Original samples Based on a single coherence image C1 a base training data set of 2000 samples of the size 128 x 128 pixels was extracted, with the samples being perfectly centered on the tracks. The reference map provides the corresponding label data, accordingly. Figure 4 shows an exemplary set of these training data. Based on these samples, multiple training data sets were generated via the data augmentation techniques A-E.
Translation To avoid positional bias in the data, translation augmentation was used, where the sample centers are moved with a random displacement offset. Taking into account the sample size of 128 x 128 pixels and the track width a maximal translation offset of 45 pixels was chosen so that the maximal cut-off was no more than 35% of the sample. Figure 5 a) shows an exemplary set of these training data.
Flipping Random horizontal and vertical flipping of the original samples was conducted. The resulting training data are depicted in Figure 5 b).
Rotation Based on the original sample centers and Image C1 a rotation was executed for each sample using nearest-neighbor interpolation. For an optimal coverage of track orientations the rotation was executed randomly in the interval between 0° and 359°. Note that there is little variation regarding the orientation of the vehicle tracks in the original samples, suggesting the rotation augmentation to be able to improve the model training considerably. Figure 5 c) shows the exemplary rotated samples.
Contrast and brightness
The challenge of a track detection ultimately is to work with image pairs of poor coherence, so the contrast is bound to be smaller than that of the given training data. To take this into consideration, both contrast and brightness were modified to a small extent. For both, care was taken to ensure that the resulting values remained within the range a typical coherence image takes, between 0 and 1. Figures 5 d) and e) depict the corresponding augmentation samples with a distinct change in gray values.
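The five techniques A-E can be sketched in a few lines of Python, as below; the translation limit of 45 pixels, the rotation interval of 0° to 359° with nearest-neighbour interpolation and the clipping to [0, 1] follow the text, whereas the contrast and brightness ranges and the per-sample (rather than full-image) rotation are assumptions of the sketch.

import numpy as np
from scipy.ndimage import rotate, shift

def augment(img, lbl, rng):
    # A: translation by a random offset of at most 45 pixels
    dy, dx = rng.integers(-45, 46, size=2)
    img = shift(img, (dy, dx), order=0, mode="nearest")
    lbl = shift(lbl, (dy, dx), order=0, mode="nearest")
    # B: random horizontal and vertical flipping
    if rng.random() < 0.5:
        img, lbl = img[:, ::-1].copy(), lbl[:, ::-1].copy()
    if rng.random() < 0.5:
        img, lbl = img[::-1, :].copy(), lbl[::-1, :].copy()
    # C: rotation by a random angle in [0, 359] degrees, nearest-neighbour interpolation
    angle = rng.uniform(0.0, 359.0)
    img = rotate(img, angle, reshape=False, order=0, mode="nearest")
    lbl = rotate(lbl, angle, reshape=False, order=0, mode="nearest")
    # D + E: small contrast and brightness changes, clipped to the coherence range [0, 1]
    img = (img - 0.5) * rng.uniform(0.8, 1.2) + 0.5 + rng.uniform(-0.1, 0.1)
    return np.clip(img, 0.0, 1.0), lbl

In the paper the techniques were applied separately to build the sets DSA to DSE and only combined for DSA-E; this sketch applies them jointly to a single sample/label pair, with rng = np.random.default_rng(0) providing the random stream.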
Training data sets
In this paper two aspects are targeted: firstly, the investigation of the five data augmentation techniques and their performance impact in a vehicle track detection; and secondly, the aim of generating an optimized training data set that can match the performance of a larger non-augmented data set. For the first aspect, six training data sets were generated (one for the original un-augmented data and one for each augmentation technique, respectively), each consisting of 2000 samples. They are denoted as listed in Table 3. Regarding the optimized training data set, a further set was generated combining all the augmentation techniques A-E. Some exemplary training samples are depicted in Figure 6. Lastly, a large un-augmented data set was created by extracting samples not only from Image C1 but also from Images C2-C5, resulting in a data set of 10,000 images. In total, this leads to the generation of 8 training data sets, listed in Table 3.
CNN TRAINING
The U-Net architecture has been shown to be very effective for fast semantic segmentation of images, and hence was considered a suitable choice for the task at hand. In our experiment a standard 4-layer U-Net architecture was used, consisting of a contracting path, a bridge segment and an expansive path. Each encoder stage consists of two sets of convolutional and ReLU layers followed by a max-pooling layer. Conversely, each decoder stage involves a transposed convolutional layer followed by two sets of convolutional and ReLU layers. The 4-layer structure represents a good compromise between the position independence of the features and the fact that too much information is lost when the images in the lowest U-Net layer become too small. Note that with an input image size of 128 x 128 pixels, the lowest layer image has but a size of 16 x 16 pixels. The network was then trained with an Adam optimizer and fed with the different training data sets, respectively, whereby 200 samples of each training data set were used as validation data. Table 4 shows the final validation accuracies and losses for the different training data sets. The trained models were then applied to the test image C6.
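A minimal PyTorch sketch of a U-Net with this depth is given below; the channel widths, the two-class output and the other hyper-parameters are assumptions for illustration, while the number of down-sampling steps is chosen so that 128 x 128 inputs reach a 16 x 16 bottleneck, matching the description above.

import torch
import torch.nn as nn

def double_conv(cin, cout):
    # two sets of convolution + ReLU, as in each encoder/decoder stage described above
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet4(nn.Module):
    def __init__(self, channels=(16, 32, 64), classes=2):
        super().__init__()
        self.enc = nn.ModuleList()
        cin = 1                                              # single-channel coherence input
        for c in channels:
            self.enc.append(double_conv(cin, c))
            cin = c
        self.pool = nn.MaxPool2d(2)
        self.bridge = double_conv(channels[-1], 2 * channels[-1])   # 16 x 16 bottleneck
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        cin = 2 * channels[-1]
        for c in reversed(channels):
            self.up.append(nn.ConvTranspose2d(cin, c, 2, stride=2))
            self.dec.append(double_conv(2 * c, c))
            cin = c
        self.head = nn.Conv2d(channels[0], classes, 1)       # per-pixel track/background logits

    def forward(self, x):
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bridge(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)

Training would use, e.g., torch.optim.Adam(model.parameters()) with a pixel-wise cross-entropy loss, mirroring the Adam optimizer mentioned in the text.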
RESULTS
In the following the predictions for test image C6 regarding the individual trained networks are described. Aside from a visual inspection, two quality measures are used to assess the track detection performance. Firstly, a detection performance ratio (DPR) is introduced, which describes the ratio of detected pixels in the local track area and calls upon the reference mask to be able to do so. To capture the line continuity of the track detection, the segmentation result is converted into connected components with an 8-pixel connectivity. As a second criterion the maximal length Lmax of the major ellipse axis of the components is explored, where the full length of the vehicle tracks would equal an Lmax of 3745.6 pixels.
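Both measures can be computed as in the following Python sketch (using scikit-image); the DPR is implemented here simply as the fraction of reference track pixels that are detected, which is one reading of the definition above and should be taken as an assumption.

import numpy as np
from skimage.measure import label, regionprops

def detection_performance_ratio(pred, reference):
    # fraction of reference track pixels covered by the segmentation, in percent
    track = reference.astype(bool)
    return 100.0 * np.count_nonzero(pred.astype(bool) & track) / np.count_nonzero(track)

def longest_component_axis(pred):
    # label 8-connected components (connectivity=2 in 2-D) and return the
    # largest major-ellipse-axis length among them
    labelled = label(pred.astype(bool), connectivity=2)
    lengths = [p.major_axis_length for p in regionprops(labelled)]
    return max(lengths, default=0.0)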
Effect of data augmentation
In Figure 7, two details of the prediction results for Image C6 are visualized, regarding a training with the un-augmented data set DSorig and the augmented data sets DSA-DSE. For a better visual impression, the segments (red) are superimposed on the corresponding coherence image. Quantitative results are provided in Table 5. The performance regarding the un-augmented data set DSorig serves as a basis for the assessment of the augmentation impact, so it is of relevance how well this simple training data set can perform. Figure 7 a) demonstrates quite well that the un-augmented samples lack the variety to generalize the network. In particular, the lack of track orientation variety in the training samples becomes apparent, since the performance varies profoundly with the track orientation. Several track segments aligned more in azimuth direction even show an acceptable result. With a DPR of 11.8% and an Lmax of 313.9 pixels this is used as a basis of comparison. Figures 7 b)-f) show the effect of the individual data augmentation techniques. As was expected, the rotation augmentation has the most profound impact on the track detection performance (see Figure 7 d)), with the observed orientation-dependent performance differences seemingly eliminated completely.
Overall, a good performance can already be achieved with this augmentation technique alone, which also shows in the high values of the chosen performance measures, a DPR of 67.5% and an Lmax of 2192.9 pixels. In comparison, all other techniques have a much smaller effect on the performance. The flipping augmentation, even though by far not as powerful as the rotation technique, has some effect in the same direction. Most track orientations still cannot be detected; however, the performance measures (DPR of 13.3% and an Lmax of 374.7 pixels) show a certain improvement over the un-augmented data set. Employing contrast augmentation again leads to a small performance increase (DPR of 20.4% and an Lmax of 376.4 pixels). This improvement is probably due to the somewhat lower coherence level in test image C6 compared to the training image C1. Augmentation by translation and brightness modifications shows no clear improvement over the un-augmented data set. However, for the optimized data set DSA−E they seem to improve the robustness of the model, so the optimized data set deliberately includes all five augmentation techniques. The segmentation result of the network trained with data set DSA−E can be observed in Figure 8. The data augmentation with a combination of all five augmentation techniques could again considerably improve the track detection results, achieving a DPR of 72.5% and an Lmax of 2278.1 pixels.
Data augmentation vs larger un-augmented data set
To put the performance of the fully augmented data set into relation, a performance comparison to the network trained on the larger un-augmented data set DSBigData is provided in the following. A visual impression is given in Figure 9, showing the segmentation result for both the training with the fully augmented data set and the training with the large un-augmented data set. Although both show a good line continuity, the results of the large un-augmented data set surpass those of the fully augmented data set. This is also reflected in the performance measures listed in Table 6. The un-augmented data set produces a DPR of 79.4% and an Lmax of 3663.2 pixels. Even though standard methods of image manipulation such as geometric and color space transformations were not able to fully replace the use of a more diverse large data set, the performance increase is profound.

Figure 9. Segmentation of test area (Image C6); a) augmented data set DSA−E, b) large un-augmented data set DSBigData.
CONCLUSION AND OUTLOOK
The experiment was conducted on an airborne dual-pass SAR data set of the POLYGONE area, located in southern Rhineland-Palatinate, Germany, where three distinguishable vehicle tracks were generated by vehicle movements between the two overflights. The data set consists of multiple coherent image pairs, showing the same scene under different aspect angles. A manually performed vector-based extraction of the three vehicle tracks provided the corresponding reference data in the form of a binary image distinguishing track from background.
It was discussed which standard augmentation techniques are not reasonable for the SAR-specific case and which are to be investigated. Based on a single coherence image, a base training data set of 2000 samples was extracted, and subsequently multiple training data sets were generated via different data augmentation techniques. These include geometric transformations such as translation, flipping and rotation, as well as color space transformations such as changes to contrast and brightness. A second coherence image of the overflight functioned as test data, showing the same three vehicle tracks in a different orientation. A CNN with a 4-layer U-Net architecture was introduced and trained with an Adam optimizer for the different training data sets, respectively. The performance of the trained models was then assessed on the test image, whereby e.g. line continuity served as a quality criterion. As a result, the impact each augmentation technique has on the track detection performance can be rated. As a last step the training with augmented data was put into relation to the training with non-augmented data. For this, four additional coherence images of the scene were exploited to obtain a large data set matching the augmented data sets in size and featuring the three vehicle tracks for multiple orientations and coherence levels. Finally, a performance comparison was conducted between the best results obtained with the augmented data and the results of training with non-augmented data. In conclusion, this brings to light how well the data augmentation techniques can imitate an actual larger data set.

Figure 10. Foot tracks in the POLYGONE data set.
Future work could include how well this approach can be expanded to the prospect of foot track detection. The POLYGONE data set provides the means for such an investigation, with Figure 10 showing the area in question and the segmentation result regarding the network trained on vehicle tracks. | 4,999.4 | 2022-05-30T00:00:00.000 | [
"Environmental Science",
"Mathematics",
"Computer Science"
] |
Different types of soluble fermentable dietary fibre decrease food intake, body weight gain and adiposity in young adult male rats
Background Dietary fibre-induced satiety offers a physiological approach to body weight regulation, yet there is lack of scientific evidence. This experiment quantified food intake, body weight and body composition responses to three different soluble fermentable dietary fibres in an animal model and explored underlying mechanisms of satiety signalling and hindgut fermentation. Methods Young adult male rats were fed ad libitum purified control diet (CONT) containing 5% w/w cellulose (insoluble fibre), or diet containing 10% w/w cellulose (CELL), fructo-oligosaccharide (FOS), oat beta-glucan (GLUC) or apple pectin (PECT) (4 weeks; n = 10/group). Food intake, body weight, and body composition (MRI) were recorded, final blood samples analysed for gut satiety hormones, hindgut contents for fermentation products (including short-chain fatty acids, SCFA) and intestinal tissues for SCFA receptor gene expression. Results GLUC, FOS and PECT groups had, respectively, 10% (P < 0.05), 17% (P < 0.001) and 19% (P < 0.001) lower food intake and 37% (P < 0.01), 37% (P < 0.01) and 45% (P < 0.001) lower body weight gain than CONT during the four-week experiment. At the end they had 26% (P < 0.05), 35% (P < 0.01) and 42% (P < 0.001) less total body fat, respectively, while plasma total glucagon-like peptide-1 (GLP-1) was 2.2-, 3.2- and 2.6-fold higher (P < 0.001) and peptide tyrosine tyrosine (PYY) was 2.3-, 3.1- and 3.0-fold higher (P < 0.001). There were no differences in these parameters between CONT and CELL. Compared with CONT and CELL, caecal concentrations of fermentation products increased 1.4- to 2.2-fold in GLUC, FOS and PECT (P < 0.05) and colonic concentrations increased 1.9- to 2.5-fold in GLUC and FOS (P < 0.05), with no consistent changes in SCFA receptor gene expression detected. Conclusions This provides animal model evidence that sustained intake of three different soluble dietary fibres decreases food intake, weight gain and adiposity, increases circulating satiety hormones GLP-1 and PYY, and increases hindgut fermentation. The presence of soluble fermentable fibre appears to be more important than its source. The results suggest that dietary fibre-induced satiety is worthy of further investigation towards natural body weight regulation in humans.
Background
There is growing interest in targeting increased satiety as a potential countermeasure to the current global obesity epidemic, often embracing pharmacological and surgical approaches (eg [1][2][3]). However, an alternative physiological approach is to use food constituents that naturally increase satiety and reduce overall caloric intake. Dietary fibre is one such food constituent variously associated with decreasing appetite and body weight, but data from human trials remain equivocal [4][5][6][7][8]. This may be resolved in the first instance using laboratory animals where there is complete control over the diet and the opportunity to collect gut tissue samples in which to explore underlying molecular mechanisms. Studies in rats and mice have indeed shown that diets containing increased amounts of dietary fibre result in lower body weight and/or adiposity; however, effects on food intake and satiety are inconsistent, in part due to the variability in study designs, for example in terms of fibre type, duration of feeding, dose rate, and age and phenotype of the experimental animals [9][10][11][12][13]. Herein the aim was to control for the variations found in previous studies by quantifying responses to different types of soluble dietary fibre within the same study, using a fixed duration of feeding, a fixed dose rate and conventional young adult male rats. While results from this animal model may not apply directly to solve the obesity epidemic in humans, they may contribute to the scientific knowledge base leading towards that ultimate goal.
Dietary fibre is a general term describing indigestible carbohydrates that can be broken down by bacterial fermentation in the large intestine and includes insoluble molecules such as cellulose that are poorly fermented but also soluble molecules that tend to be more highly fermented [10,14,15]. This study utilised three commonly occurring yet contrasting types of soluble fermentable dietary fibre, with cellulose included as an insoluble control fibre for comparison. Beta-glucan is a highly polymerised branch-structured glucose polysaccharide found in cereals and bran, pectin is a highly polymerised galacturonic acid polysaccharide present in fruit and vegetables, and fructo-oligosaccharide (FOS) is a fructose polymer with a contrastingly low degree of polymerisation, present in fruit and vegetables as well as in artificial sweeteners. These soluble fibres also demonstrate different physicochemical attributes, for example pectin and beta-glucan have high viscosity whereas FOS has negligible viscosity, while all are highly fermentable [10,14,15]. Choice of these fibres was influenced by existing evidence for responses to their inclusion in diets for ad libitum-fed rats and mice in a wide range of study designs. Thus dietary inclusion of oat beta-glucan at 7% for 6 weeks decreased caloric intake and there were trends towards decreased body weight gain and decreased fat pad weights in adult diet-induced obese mice [9] but given at 5% for 6 weeks had no effect on food intake or body weight while decreasing epididymal fat pad weights in growing rats [10]. Food intake and growth rate were decreased in weanling rats given dietary pectin at 8% for 14 days [11] whereas food intake increased and body weight was unaffected in young rats given 5% pectin for 21 days in another unrelated study [12]; effects on body composition were not reported in these studies. In growing rats, dietary inclusion of FOS at 10% for 3-4 weeks decreased food energy intake, body weight gain and adipose tissue mass [13] but at 5% for 6 weeks in another study only decreased epididymal fat pad mass without affecting intake or body weight [10], while 6% of a similar carbohydrate type, galacto-oligosaccharide, in the diet for 3 weeks also reduced epididymal fat-pad weight [16]. Guided by these previous studies, the present study used normal outbred young adult rats, a standard dietary fibre inclusion rate of 10% and a duration of 4 weeks to compare and quantify intake, body weight and body composition responses to different soluble fibres for the first time within the same study.
Several decades of research have identified the key role played by gastrointestinal peptide hormones in the control of food intake and hunger/satiety [17]. Thus ghrelin secreted from the stomach stimulates food intake while cholecystokinin (CCK), peptide tyrosine tyrosine (PYY) and glucagon-like peptide-1 (GLP-1) from the intestine signal satiety and inhibit food intake [18]. It is postulated that the satiating effects of dietary fibre are mediated at least in part by these gut hormones since, for example, FOS feeding is associated with increased circulating GLP-1 and decreased ghrelin in rats [19] and oat beta glucan increases PYY secretion in diet-induced obese mice [9,20], but there are no reports on the effects of dietary pectin on these gut hormones. Gut hormone secretion was therefore assessed in our model from both plasma concentrations (ghrelin, CCK, PYY and GLP-1) and gut tissue gene expression (PYY and GLP-1). Since these hormones are secreted by enteroendocrine cells within the gut mucosa, it was also pertinent to investigate how these cells sense the presence of dietary fibre in the gut lumen. The products of dietary fibre fermentation are organic acids, mainly the short-chain fatty acids (SCFAs) acetate, propionate and butyrate, but also others such as succinate and lactate [21]. The presence of SCFA-activated receptors in colonic mucosa provides grounds for implicating SCFA sensing in the stimulation of gut satiety hormones [22]. Furthermore SCFA activation of free fatty acid receptors (ffar)2 and ffar3 has been associated with stimulation of PYY and GLP-1 secretion in human and rodent models [23][24][25]. The succinate receptor (sucnr1) is also expressed in intestinal mucosa [26] but has not been linked to gut hormone secretion. The present study therefore examined in the animal model concentrations of fermentation products (including SCFA) in the lower gut contents and gene expression for SCFA receptors in the lower gut tissue.
This experiment investigated the hypothesis that soluble dietary fibre from different sources decreases voluntary food intake, body weight and body fat mass in normal healthy young adult male rats and that the increased satiety may be brought about by increased amounts of fermentation products in the lower gut signalling via SCFA-activated receptors to stimulate gut satiety hormone secretion. The article reports that these soluble dietary fibres did indeed decrease food intake, weight gain and adiposity, and these changes were associated with increased hindgut fermentation and increased circulating concentrations of satiety hormones GLP-1 and PYY.
Diets
All diets during the study were purified, based on the AIN-93 M diet (American Society for Nutrition, Bethesda, MD, USA) for the maintenance of adult rats, and were made and supplied by Special Diet Services Ltd, Witham, Essex, UK ( Table 1): The control diet contained 5% w/w cellulose (CONT), an additional control diet contained 10% w/w cellulose (CELL), and the experimental diets contained 10% w/w fructo-oligosaccharide (FOS), oat beta-glucan (GLUC) or apple pectin (PECT) ( Table 1). During the final week before the experiment all rats were habituated to the purified CONT diet.
Animals, experimental procedure and tissue collection
All animal experimental procedures conformed to the UK Home Office Animal (Scientific Procedures) Act 1986, met institutional and national guidelines for the care and use of animals and were approved by local ethical review at the University of Aberdeen Rowett Institute of Nutrition & Health. The animal rooms were maintained at 21 ± 2°C and 55 ± 10% relative humidity, cages contained sawdust bedding with shredded paper for nesting and plastic tunnels for further enrichment, water was available ad libitum and the lighting regime was a standard 12 h light, 12 h dark.
After 1 week acclimatisation to individual housing while on CONT diet, young adult outbred male Sprague Dawley rats (12 weeks old, 467 ± 6.0 g; from Charles River Laboratories UK) were randomly allocated to weight-matched groups and offered the pelleted experimental diets ad libitum for 4 weeks (n = 10/group). Voluntary food intake was measured daily (by weighing food into the hopper and weighing uneaten food 24 h later) and body weight was measured twice a week. Body composition was measured in conscious rats at the start (0 week) and end (4 weeks) of the experiment by magnetic resonance imaging (MRI; EchoMRI 2004, Echo Medical Systems, Houston, TX, USA), which measures the masses of fat and lean tissues in live animals using nuclear magnetic resonance (NMR) technology (validated in [28,29]): the fat reading includes all of the fat molecules in the body expressed as equivalent weight of canola oil, and the lean reading is a muscle mass equivalent of all the body parts containing water, excluding fat, bone and substances that do not contribute to the NMR signal such as hair and claws.

Footnotes to Table 1: Oat beta-glucan (Cambridge Commodities Ltd., Ely, Cambridgeshire, UK). 4. Apple pectin (Solgar Apple Pectin; Revital Ltd.). 5. ME, metabolisable energy calculated from Atwater Fuel Energy of diet components. 6. GE, gross energy determined using a Gallenkamp Adiabatic Bomb Calorimeter. 7. Total carbohydrate by standard acid hydrolysis followed by colorimetric glucose determination. 8. Total nitrogen determined using a VarioMax CN Analyser. 9. Total fat measured using standard acid hydrolysis and gas chromatography. 10. NSP, non-starch polysaccharides, analysed by the Englyst procedure [27], using gas chromatography and spectrophotometry to measure constituent sugars, and including 11. fructose detection for the FOS diet by ultra-performance liquid chromatography. 12. GLUC diet analysed specifically for beta-glucan by assay kit (K-BGLU; Megazyme, Bray, Co. Wicklow, Ireland).
After the final MRI scan, rats were euthanised by decapitation under general inhalation anaesthesia (isoflurane; IsoFlo®, Abbott Animal Health, Maidenhead, Berkshire, UK), approximately 1-3 h after lights on. Final (trunk) blood samples were collected into chilled tubes containing EDTA as anti-coagulant and a peptidase inhibitor cocktail containing general protease inhibitor (cØmplete; Roche Diagnostics Ltd, Burgess Hill, West Sussex, UK) and dipeptidyl peptidase-4 inhibitor (Ile-Pro-Ile; Sigma), centrifuged immediately at 3000 g for 10 min, then plasma was stored at -20°C until analysis. The gut was dissected out for sampling the contents from caecum and colon and tissue from distal ileum and proximal colon. Caecum and colon contents were stored at -20°C until analysis. Tissue samples were immersed in RNAlater (QIAGEN, Crawley, UK) for 5 days at 4°C and then stored at -80°C until analysis.
Analysis of fermentation products
The concentrations of SCFA and other organic acids produced by bacterial fermentation in caecum and colon contents were determined by capillary gas chromatography using the method developed by Richardson et al. [30]. Briefly, samples were diluted with distilled water (1/4) and 2-ethylbutyric acid (5 mmol/L) was added as internal standard. Samples were then extracted in diethyl ether, derivatised with N-tert-butyldimethylsilyl-N-methyltrifluoroacetamide and analysed on Agilent GC HP-1 capillary columns. This method detects formate, acetate, propionate, butyrate, isobutyrate, valerate, iso-valerate, lactate and succinate.
RNA isolation and real-time qPCR
Total RNA was extracted from the gut tissue samples using RNeasy® Mini Kit (QIAGEN, Crawley, UK). Briefly, weighed samples were homogenised using Zirconia beads (BioSpec, Bartlesville, USA) and a Precellys®24 bead-based homogeniser (Bertin Technologies, Ann Arbor, USA), the homogenates were centrifuged in RNeasy spin columns and on-column DNase digestion was conducted before eluting the RNA. Total RNA was quantified by measuring the absorbance at 260 nm using a NanoDrop spectrophotometer and purity assessed by measuring the ratio of absorbance at 260 and 280 nm. Overall quality was also assessed using an Agilent Bioanalyzer (Agilent Technologies, Santa Clara, USA) and all samples had RIN values >8.9. RNA was then stored at -80°C. The synthesis of cDNA from the RNA was carried out using a high capacity cDNA kit and completed using a GeneAmp® PCR System 9700 thermal cycler (both from Applied Biosystems, Warrington, UK).
Relative gene expression analysis was carried out in line with the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines [31]. Initially, the expression of three candidate reference genes β-actin (beta-actin), B2M (beta-2-microglobulin) and Tbp (TATA box binding protein), together with ffar2, was determined by qPCR using a 7500 Fast Realtime PCR System (Applied Biosystems). B2M had the closest level of expression and similar PCR efficiency to ffar2 and was therefore chosen as the most appropriate reference gene for this study. Commercially available PCR primers were used (QIAGEN, Crawley, UK) for Tbp (reference # QT01605632), β-actin (QT00193473), B2M (QT00176 295), ffar2 (QT00384860), ffar3 (QT01299144), sucnr1 (QT00454496), GLP-1 (QT00192661), and PYY (QT02 352637). RNA abundance was measured by qPCR with a 7500 Fast Realtime PCR System (Applied Biosystems) using the SYBR green detection method.
Statistics
Daily food intakes and twice weekly body weight data were analysed by repeated measures ANOVA (General Linear Model with time, diet and their interaction as factors; Minitab version 16, Minitab Inc., State College, PA). Cumulative food intake, initial and final MRI data, changes in body weight, fat mass and lean mass during the experiment and final plasma hormone concentrations were analysed by one-way ANOVA (Minitab). Pairwise group comparisons were then made by the Tukey method. Pearson's product-moment correlation was used to examine the relationships between food intake and body weight gain, changes in fat and lean mass, and plasma gut satiety hormone concentrations (Minitab). For qPCR data, group-wise comparisons of gene expression ratios were performed using the public domain program Relative Expression Software Tool (REST; http://www.gene-quantification.info/) [32]. The gene expression values were normalised to the reference gene B2M and the data are presented as median expression levels relative to the level in the CONT group. Overall, P values of 0.05 or less were considered to be statistically significant.
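For the simpler end-point comparisons, the analysis pipeline can be sketched in Python as below (a minimal illustration with SciPy/statsmodels rather than Minitab; it covers only the one-way ANOVA, Tukey pairwise comparisons and Pearson correlations, not the repeated-measures model or REST, and the variable names are illustrative assumptions).

import numpy as np
from scipy.stats import f_oneway, pearsonr
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_groups(groups):
    # groups: dict mapping diet name (CONT, CELL, GLUC, FOS, PECT) to an array of per-rat values
    names = list(groups)
    f_stat, p_val = f_oneway(*(groups[n] for n in names))    # one-way ANOVA across diets
    values = np.concatenate([groups[n] for n in names])
    labels = np.concatenate([[n] * len(groups[n]) for n in names])
    tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)     # pairwise group comparisons
    return f_stat, p_val, tukey

def intake_gain_correlation(cumulative_intake, weight_gain):
    # Pearson product-moment correlation across all animals
    return pearsonr(cumulative_intake, weight_gain)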
Food intake, body weight and body composition
For daily food intake, repeated measures ANOVA revealed highly significant effects of diet, time and their interaction (all P < 0.001; Figure 1a). There were no differences between CONT and CELL group daily intakes at any time point. GLUC, FOS and PECT groups all had lower daily intakes than the CONT group (the primary control group given 5% w/w insoluble fibre), with the differences being greatest during the first week but nonetheless present throughout the experiment (Figure 1a). Compared with the CELL group (the additional control group given 10% w/w insoluble fibre), daily intakes were significantly lower throughout (P < 0.001-0.05) in the GLUC group from day 2 and in FOS and PECT groups from day 1 (Figure 1a). Cumulative intake was significantly lower for rats fed GLUC (P < 0.05), FOS and PECT (both P < 0.001) than CONT and CELL, with no significant difference between CONT and CELL groups nor between GLUC, FOS and PECT groups (Table 2). Repeated measures ANOVA revealed significant effects on body weight of time (P < 0.001) and diet group (P < 0.01), with order of decreasing magnitude CELL > CONT > GLUC > FOS > PECT (Figure 1b). In view of the divergence in body weight between the groups, MRI values for total fat and lean mass are expressed as a percentage of the final body weight. Initial percentage total fat and lean mass did not differ between the groups (Table 2). Compared with CONT, overall body weight gain was lower in GLUC (P < 0.01), FOS (P < 0.01) and PECT (P < 0.001) groups, and body fat mass gain was lower in GLUC, FOS and PECT (all P < 0.001), but there was no difference in lean tissue mass gain (Table 2). Differences in final body weight did not reach significance; final percentage total body fat was lower in GLUC (P < 0.05), FOS (P < 0.01) and PECT (P < 0.001) groups, and final percentage total lean tissue was higher in FOS (P < 0.05) and PECT (P < 0.01) groups (Table 2). Percentage total lean tissue was also greater in PECT than GLUC groups (P < 0.05), but there were no differences in fat or lean (changes in absolute mass or final percentage values) between CONT and CELL groups.
Across all groups (n = 50) there were highly significant correlations between cumulative food intake and body weight gain (P < 0.001) and fat mass gain (P < 0.001) and a weaker correlation with lean tissue gain (P < 0.01) (Figures 2a-c).
Plasma satiety hormone concentrations
Plasma concentrations of PYY and total GLP-1 were elevated in GLUC, FOS and PECT groups compared with CONT (all P < 0.001), and GLUC rats had lower PYY than FOS and PECT groups (both P < 0.05) and lower total GLP-1 than FOS rats (P < 0.05) (Figure 3a,b); there were no differences between CELL and CONT groups. There were no differences between any of the groups for active GLP-1, CCK or ghrelin concentrations (Figure 3c-e). Across all groups (n = 50) there were highly significant correlations between cumulative food intake and plasma concentrations of total GLP-1 (r = 0.516, P < 0.001) and PYY (r = 0.623, P < 0.001).
Concentrations of fermentation products in caecum and colon contents
Total concentrations of fermentation products were significantly greater in the caecum of FOS, GLUC and PECT groups than CONT or CELL groups, greater in the colon of FOS and GLUC than CONT and CELL groups, and greater in the colon of PECT than CELL group (P < 0.05-0.001; Table 3). The main fermentation products detected were acetate, propionate, butyrate and succinate; lactate concentrations were low and were not significantly different between groups in caecum or colon, and concentrations of formate, iso-butyrate, valerate and iso-valerate were minimal (not reported). In caecal contents, compared with CONT, the FOS group had higher succinate (P < 0.001), lower acetate (P < 0.001) and lower propionate (P < 0.01), GLUC had higher succinate (P < 0.001) and lower acetate (P < 0.001), PECT had higher succinate (P < 0.01) and the CELL group was not different (Table 3). Butyrate concentrations did not differ from CONT, but were lower in PECT than FOS group (P < 0.01). Succinate was higher (P < 0.01) and acetate was lower (P < 0.001) in FOS and GLUC groups than in the PECT group. Similar group differences were seen in colonic contents where, compared with CONT, the FOS group had higher succinate (P < 0.001) and lower acetate (P < 0.001), GLUC had higher succinate (P < 0.001) and lower acetate (P < 0.001), PECT had higher succinate (P < 0.001) and the CELL group was not different (Table 3). There were no differences in colonic butyrate concentrations, but succinate was higher (P < 0.001) and acetate was lower (P < 0.01) in FOS and GLUC groups than in the PECT group.
Gene expression for SCFA and succinate receptors and for satiety hormones in ileum and colon tissue
Relative to CONT group, ffar2 gene expression was increased in the distal ileum of the FOS group (P < 0.01; Table 4); ffar3 gene expression was decreased in proximal colon of FOS and PECT groups (P < 0.05-0.01); sucnr1 gene expression was increased in distal ileum and proximal colon of CELL group (P < 0.05-0.01); GLP-1 gene expression was decreased in distal ileum of PECT group (P < 0.05); PYY gene expression was decreased in distal ileum of PECT group and in proximal colon of CELL group (P < 0.05); and relative expression of all other gene/tissue combinations was not significantly different (Table 4).
Discussion
This experiment has shown how three very different soluble fermentable dietary fibres, beta-glucan, FOS and pectin, all similarly decreased voluntary food intake, weight gain and adiposity in normal healthy young adult male rats over a 4-week period, and the associated increased plasma concentrations of PYY and GLP-1 were strongly indicative of increased satiety. While these data demonstrate an animal model of fibre-induced satiety, the dietary inclusion rate (10% w/w, or 6.6 mg/kJ) was approximately twice the US recommended daily fibre intake of 3.3 mg/kJ for men [33,34] and further work is required to translate the findings to humans. Whereas 10% w/w insoluble fibre cellulose had no effect on food intake compared with the control group, the same inclusion rate of each of the three soluble dietary fibres had similar effects, decreasing cumulative intake by 10-19% over 4 weeks and daily intakes by ~12% after an initial week of adjustment. Although the immediate decreases in intake following introduction of the high-fibre diets may have reflected some initial unfamiliarity and low palatability to the rats, the consistently lower daily intakes thereafter, once they had become accustomed to the diets, were more likely indicative of increased satiety since they were associated with increased circulating concentrations of two important satiety hormones.
The three soluble dietary fibres specifically increased concentrations of total GLP-1 and PYY in plasma samples taken in the early part of the light phase at the end of the trial, indicative of raised tonic secretion of these hormones, and the magnitude of the increases was similar to that associated with increased satiety in other rat models [16,35]. Predictably, changes in active GLP-1 were not detected since it has a half-life of only 1-2 minutes and samples were not taken immediately after feeding episodes; nonetheless, measurement of total GLP-1 includes both the intact hormone and its primary metabolite and thereby provides an accurate indication of overall GLP-1 secretion [36]. Tonic plasma concentrations of ghrelin and CCK were unchanged. This was perhaps not surprising since ghrelin is secreted in the stomach and CCK is produced by I-cells that are located mainly in the duodenum and jejunum, whereas the presence of indigestible dietary fibre in the gut is most likely to be detected nearer to where it is fermented in the large intestine. In line with this argument, PYY and GLP-1 are both secreted from L-cells situated mainly in the distal small intestine and proximal large intestine [18]. Our data are consistent with findings in rats fed dietary resistant starch, where total GLP-1 and PYY were up-regulated in a sustained day-long manner through fermentation [35]. It is inferred that significant satiety-inducing effects of dietary fibre only manifest themselves after several days of consistently increased fibre intake, and it is tempting to speculate that this reflects a minimum exposure time for the underlying chronic changes in the gut environment and in tonic gut satiety hormone secretion to develop.
The reductions in food intake on the three diets containing soluble fermentable fibre were associated with significant reductions in body weight gain, and it is noteworthy that this was attributable to the arrested accumulation of fat with lean tissue growth maintained. Both body weight gain and fat mass gain were closely correlated with the cumulative food intake (Figure 2).
[Table 3. Concentrations (mmol/L) of fermentation products in caecum and colon contents of rats given different dietary treatments.]
These findings add important total body composition data to existing reports of dietary oat beta-glucan decreasing epididymal fat pad weights in growing rats [10] and of dietary FOS decreasing fat pad mass in rats [10,13], and lend support to the notion that fermentable dietary fibre has potential as a tool for body weight and body fat regulation through its effects on satiety. While differences in eating behaviour, physical activity and heat expenditure (from increased fermentation) may also have contributed to the observed differences between the soluble fibre-fed rats and controls, the overall outcome was a similar decrease in food intake, body weight and body fat in all three soluble fibre-fed groups. Soluble dietary fibres may influence satiety by virtue of their physicochemical properties; for example, those with high viscosity such as pectin and beta-glucan tend to slow the rate of gastric emptying and overall gut transit time, which can lead to reduced intakes [10,14,15]. However, the present intake responses were also shown with FOS, which has negligible viscosity [10]. The common feature of the three fibres studied herein was their fermentability in the large intestine, resulting in greatly increased total concentrations of fermentation products in the caecum and colon contents. The highest concentration was that of succinate, whereas dietary studies in the literature rarely appear to report succinate concentrations, concentrating rather on the three main SCFAs. Succinate is produced by intestinal bacterial species of the Bacteroides and Clostridium genera [21], numbers of Bacteroides have been shown to increase in rats given dietary inulin and FOS [37], and increased caecal succinate concentrations of comparable magnitude to those reported herein have previously been reported in rats given 6% FOS [38] or 5% pectin [39]. It appears from these latter studies and from the present one that the production of large amounts of succinate is favoured when rats are given a rapidly soluble and fermentable fibre source in a purified diet and when the protein source is a highly digestible one such as casein; this is thought to be because a lack of nitrogen relative to the carbohydrate availability in the large intestine promotes high succinate concentrations [38]. Moreover, succinate is absorbed by the intestinal mucosa more slowly than other SCFAs [40], which would lead to its relative accumulation. However, it is unlikely that the increased succinate per se elicited the satiety response in our model since succinate concentrations were lower in PECT than GLUC or FOS groups, yet the satiety response was similar. Furthermore, although increased luminal succinate was associated with increased PYY and GLP-1 secretion in the fermentable fibre-fed groups, there was no evidence for increased succinate signalling to the gut enteroendocrine cells via its receptor sucnr1. Indeed, the only increase in sucnr1 gene expression was unexpectedly seen in the CELL group, in the absence of elevated luminal succinate and without changes in PYY and GLP-1 secretion. It is unknown whether succinate activates the SCFA receptors ffar2 and ffar3 [41].
After succinate, in terms of relative concentrations, acetate was the dominant fermentation product in all three fermentable fibre groups, but whereas concentrations of propionate were greater than those of butyrate in PECT and GLUC groups, the reverse was observed in the FOS group. The differences in fermentation products between the soluble fibres may reflect different patterns of fermentation and/or different rates of fermentation and turnover, which the present measurements at a single time point would not have detected. Nonetheless, given the similar levels of satiety observed in all groups, it is difficult to attribute a role in satiety signalling to these differences in SCFAs.
[Table 4 footnote: ffar, free fatty acid receptor; sucnr, succinate receptor; GLP-1, glucagon-like peptide-1; PYY, peptide tyrosine tyrosine. Groups were given diets containing 10% w/w fibre as cellulose (CELL), fructo-oligosaccharide (FOS), oat beta-glucan (GLUC) or apple pectin (PECT); n = 10/group.]
It was perhaps surprising that there were no increases in concentrations of the three main SCFAs in caecum and colon contents of rats fed the fermentable fibres compared with controls; indeed, concentrations were unexpectedly lower for acetate in FOS and GLUC groups and for propionate in the FOS group. There may be a number of reasons for this, apart from the overwhelming dominance of succinate: for example, measuring concentrations at a single time point would have given no indication of biologically significant changes in rates of turnover. Nonetheless, the volume of contents and hence the total pool of SCFAs would have been increased in the fermentable fibre groups because the large intestine was visibly enlarged. This observation was supported by measurements of increased caecum and colon weights in rats given 10% PECT in a separate experiment (Adam et al., unpublished results) and is in agreement with published data on rats fed the dietary fibre inulin [42]. Although the caecum is the primary site for dietary fibre fermentation, SCFA concentrations and group differences were similar in colon and caecum contents. There was also visible leakage of caecal contents back into the distal ileum, indicating that enteroendocrine L-cells in both the distal ileum and proximal colon would have been exposed to SCFAs, from where satiety responses may have been elicited, and a recent report indicates that L-cell numbers are fairly evenly distributed along the rat jejunum, ileum and colon [43]. Whereas a recent review concludes overall that SCFAs may not play a role in appetite regulation [44], in vitro evidence shows that they can trigger GLP-1 release from cultured colonic L-cells via the SCFA receptors ffar2 and ffar3 [25] and stimulate GLP-1 (proglucagon) mRNA expression in cultured enteroendocrine STC-1 cells [35]. Some in vivo evidence indicates that oral administration of butyrate or propionate decreases food intake and that butyrate stimulates GLP-1 and PYY secretion in mice, but these were ffar3-deficient animals, indicating that ffar3 is not involved in SCFA stimulation of these gut satiety hormones [45]. The present data are consistent with this since we found no evidence for increased ffar3 gene expression, and indeed there was even a decrease in expression in the proximal colon of FOS and PECT groups (Table 4). Similarly, we found little evidence for increased gene expression for ffar2, the one exception being an increase in the distal ileum of FOS-fed rats. The absence of increased gene expression found here for PYY and GLP-1 in distal ileum and proximal colon, despite the greatly increased plasma concentrations of these hormones, was another apparently anomalous finding, and gene expression for both was even decreased in the PECT group. Since PYY and GLP-1 are secreted from intestinal L-cells, the increased plasma concentrations must reflect either increased secretory activity per cell or an increased number of L-cells. The apparent anomalies are most likely attributable to an overall increase in L-cell number along the enlarged intestines, which was not detectable here since the real-time PCR method measures expression per mg tissue.
The same PECT diet given for 4 weeks to adult male rats in a separate experiment increased the weights and lengths of both colon and small intestine (Adam et al., unpublished results). This is not without precedent, since a doubling of the L-cell population in the rat intestinal mucosa occurs with no change in L-cell density due to general gut hypertrophy, both after Roux-en-Y gastric bypass and in the Zucker Diabetic Fatty rat model [43,46]. These authors also report increased plasma GLP-1 concentrations in their models, associated with the increased L-cell number and an overall increase in proglucagon (GLP-1) gene expression (by in situ hybridisation) without any increase in gene expression per cell. Other reports of increased L-cell differentiation and number in the proximal colon of rats fed fermentable indigestible carbohydrate lend support to this argument [13,23]. Altogether, the present data are consistent with increased circulating PYY and GLP-1 concentrations arising from overall increased numbers of L-cells along the ileum and colon exposed to increased amounts of fermentation products, but further mechanistic investigations need to acknowledge the potential influence of gut hypertrophy in this model.
Conclusions
These data show that sustained daily intake of diverse types of soluble dietary fibre increases satiety and decreases overall food intake, weight gain and adiposity, associated with increased circulating GLP-1 and PYY and increased hindgut fermentation. Moreover, it appears that the presence of soluble fibre is more important than its source for reducing food intake and body weight gain. Therefore, the study demonstrates an animal model of dietary fibre-induced satiety that may be used for further investigation of underlying mechanisms and provides proof of principle worthy of investigation in humans towards a natural means of body weight regulation.
"Agricultural And Food Sciences",
"Medicine",
"Biology"
] |
Modal analysis of a multi-storey structure
The aim of this work was to carry out a modal analysis of a multi-storey structure. The natural mode shapes of the structure for the first five modes were obtained by a vector iteration process and are presented in tabular and graphical form. Displacements were calculated using three different methods (time history, SRSS and CQC). All three methods are described theoretically and the results of the calculation are obtained for each of them. The results of the three methods over a given time interval are compared graphically, and the results that coincide across all three methods are also compared. A modal seismic analysis based on spectral theory was performed as well. Comparison of the time history method and spectral theory shows that their results correspond at the maximum modal displacement.
Introduction
In this paper, the procedure of modal analysis of a five-storey structure is presented. The second section briefly describes modal analysis theoretically and then defines the layout of the structure and the data required to perform the modal structural analysis.
The third section briefly describes the vector iteration process that yields the natural mode shapes of the structure for all five modes, and finally presents the mode shapes in tabular and graphical form.
The fourth section presents the calculation of displacements using the time history, SRSS and CQC methods. These methods are first described theoretically; the calculations are then carried out, the displacements obtained by the three methods are compared over a given time interval, and the results are commented on.
The fifth section presents the modal seismic analysis based on spectral theory for the first mode, with the resulting time-dependent displacements shown graphically. Finally, the maximum modal displacements obtained by the time history method and by spectral theory are compared graphically.
A conclusion is drawn at the end of this paper.
Geometry and structural data
Modal analysis is a dynamic analysis of linear systems with N degrees of freedom, based on expansion in the natural modes (mode shapes) of the system. The method is applicable when the time dependence of the excitation force is the same, or approximately the same, for all masses, a condition that is satisfied for earthquake loading. The solution of the problem reduces to solving the matrix differential equation of motion (1).
The modal analysis was carried out for a five-storey structure with column dimensions b/h = 30/30 cm. The beams are treated as rigid in the direction of oscillation, the storey height is 3.0 m, the span of the beams is 6.0 m, and the masses are lumped at the level of the floor structures (beams): m1 = m2 = m3 = m4 = 6 tonnes and m5 = 3 tonnes, numbered from the first to the fifth floor. The damping ratio of the structure is ξ = 5% and the ground acceleration is ag = 0.35g.
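The matrix differential equation referred to as (1) is not reproduced in the extracted text; for a lumped-mass linear system under ground excitation it conventionally takes the standard form sketched below (the notation is assumed, not taken from the original):

```latex
\mathbf{M}\,\ddot{\mathbf{u}}(t) + \mathbf{C}\,\dot{\mathbf{u}}(t) + \mathbf{K}\,\mathbf{u}(t)
  \;=\; -\,\mathbf{M}\,\mathbf{r}\,a_g(t)
```

where M, C and K are the mass, damping and stiffness matrices, u(t) is the vector of floor displacements relative to the ground, r is the influence vector and a_g(t) is the ground acceleration.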
Vector iteration to obtain the natural mode shapes
To perform the modal structural analysis, we first determine the bending moment diagrams due to unit forces applied at the level of each floor mass. By multiplying these diagrams we obtain the members of the flexibility matrix. The modal analysis also requires the mass matrix. Multiplying the mass matrix by the flexibility matrix yields the dynamic matrix (5). Once the dynamic matrix has been determined, a vector iteration process is carried out to obtain the natural mode shapes of the structure.
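Because the numerical flexibility, mass and dynamic matrices are not reproduced here, the following sketch rebuilds them from the structural data given above under shear-frame assumptions (rigid beams, two 30/30 cm fixed-fixed columns per storey, an assumed concrete modulus E) and runs the vector iteration for the fundamental mode; all numerical values are therefore illustrative, not the original results.

```python
import numpy as np

# Shear-frame idealisation of the five-storey structure described above.
# E (concrete modulus) and fixed-fixed column behaviour are assumptions,
# so the numbers produced are illustrative only.
E = 30e9                          # Pa, assumed modulus of elasticity
b = 0.30                          # m, square column cross-section (30/30 cm)
I = b**4 / 12                     # m^4, second moment of area of one column
h = 3.0                           # m, storey height
k = 2 * 12 * E * I / h**3         # N/m, storey stiffness (two fixed-fixed columns)

M = np.diag([6e3, 6e3, 6e3, 6e3, 3e3])    # kg, lumped floor masses m1..m5

# Shear-building stiffness matrix K, flexibility matrix F = K^-1, dynamic matrix D = F M
K = np.zeros((5, 5))
for i in range(5):
    K[i, i] = k if i == 4 else 2 * k
    if i < 4:
        K[i, i + 1] = K[i + 1, i] = -k
F = np.linalg.inv(K)
D = F @ M

# Vector (power) iteration: D phi = (1 / omega^2) phi for the fundamental mode
phi = np.ones(5)
for _ in range(200):
    v = D @ phi
    lam = np.linalg.norm(v)       # converges to the largest eigenvalue of D, i.e. 1/omega1^2
    phi = v / lam
omega1 = 1.0 / np.sqrt(lam)       # rad/s, first circular frequency
T1 = 2.0 * np.pi / omega1         # s, fundamental period
print(f"T1 = {T1:.3f} s, first mode shape (top-normalised): {np.round(phi / phi[-1], 3)}")
```

Higher modes can be obtained from the same dynamic matrix by deflating the converged mode and repeating the iteration.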
Comparison of displacements obtained by the time history, SRSS and CQC methods
The time history method gives the actual displacement of a given mass of the structure as a function of time, obtained from the relative modal displacements of the mass in each mode and the number of modes considered. SRSS, the square root of the sum of squares, is a modal combination rule in which the peak value of each mode shape is squared; the square root of the sum of these squared peaks gives the total response. This modal combination assumes that the maximum modal values are statistically independent. Because the values are squared, each peak contributes as a positive quantity. For structures in which a large number of natural frequencies are almost identical, this assumption is not valid and the SRSS combination will not give satisfactory values for the overall response.
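As a sketch of this rule, if r_n denotes the peak response in mode n of the N modes considered, the SRSS estimate of the total peak response is:

```latex
r_{\max} \;\approx\; \sqrt{\sum_{n=1}^{N} r_n^{\,2}}
```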
CQC, the complete quadratic combination, sums the products of the peak values of the i-th and n-th mode shapes weighted by a correlation coefficient for the two modes. The correlation coefficient varies between 0 and 1 and is equal to unity for i = n.
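The text does not give the correlation coefficient explicitly; the sketch below assumes the commonly used Der Kiureghian expression for equal modal damping and shows how the CQC (and, for comparison, SRSS) combination of peak modal responses can be evaluated. The peak values and frequencies in the example are hypothetical.

```python
import numpy as np

def cqc(peaks, omegas, zeta=0.05):
    """CQC combination of peak modal responses `peaks` with circular frequencies
    `omegas`, assuming the Der Kiureghian correlation coefficient for a constant
    modal damping ratio `zeta`."""
    peaks = np.asarray(peaks, float)
    omegas = np.asarray(omegas, float)
    r = omegas[:, None] / omegas[None, :]          # frequency ratios beta_in
    rho = (8 * zeta**2 * (1 + r) * r**1.5) / ((1 - r**2)**2 + 4 * zeta**2 * r * (1 + r)**2)
    return np.sqrt(peaks @ rho @ peaks)

def srss(peaks):
    return np.sqrt(np.sum(np.asarray(peaks, float)**2))

# Hypothetical peak modal displacements (m) and circular frequencies (rad/s) for five modes
peaks = [0.030, 0.008, 0.003, 0.001, 0.0005]
omegas = [12.0, 36.0, 58.0, 76.0, 88.0]
print("SRSS:", srss(peaks), "CQC:", cqc(peaks, omegas))
```

For well-separated frequencies the off-diagonal correlation coefficients are small and the CQC result approaches the SRSS value, which is consistent with the near-identical results reported later for this structure.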
Modal seismic analysis by spectral theory
The modal seismic analysis by spectral theory was performed only for the first mode, because the oscillation periods of the remaining modes are too short.
The displacement response spectrum of the 1940 El Centro earthquake with 5% damping (Figure 8) was used in the calculation, and the results are given below.
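As a sketch of the spectral calculation for the first mode (notation assumed, not taken from the original), the peak displacement of floor j is obtained from the spectral displacement as:

```latex
u_{j,\max} \;=\; \Gamma_1\,\phi_{j1}\,S_d(T_1,\xi),
\qquad
\Gamma_1 \;=\; \frac{\boldsymbol{\phi}_1^{\mathsf T}\mathbf{M}\,\mathbf{r}}
                    {\boldsymbol{\phi}_1^{\mathsf T}\mathbf{M}\,\boldsymbol{\phi}_1}
```

where Γ1 is the participation factor of the first mode, φ_j1 is the first mode shape ordinate at floor j, and S_d(T1, ξ) is the spectral displacement read from the El Centro displacement spectrum at the fundamental period T1 and ξ = 5% damping.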
Conclusions
The modal analysis results for the structure considered are fully in line with expectations. In the first mode shape the displacements of all masses are in the same direction, and with each higher mode the number of changes in the direction of oscillation increases. In the fifth mode, each successive mass oscillates in the opposite direction to the one below it.
After calculating the displacements by the time history, SRSS and CQC methods, it can be seen that the results of all three methods are approximately the same; only small differences appear at mass 1 between the time history results and the two combination methods (SRSS and CQC).
Finally, the modal seismic analysis by spectral theory was performed. This calculation shows a good match between the maximum modal displacement obtained by the time history method and that obtained by spectral theory.
"Engineering",
"Physics"
] |