Measurement of Energy Spectrum of Ultra-High Energy Cosmic Rays

Ultra-High Energy Cosmic Rays (UHECRs) are charged particles with energies above $10^{18}$ eV that originate outside of the Galaxy. Because the flux of UHECRs at Earth is very small, the only practical way of observing them is by measuring the extensive air showers (EAS) that they produce in the atmosphere. This is done using air fluorescence detectors and giant arrays of particle detectors on the ground. The Pierre Auger Observatory (Auger) and the Telescope Array (TA) are two large cosmic ray experiments that use such techniques and cover 3000 km$^2$ and 700 km$^2$ on the ground, respectively. In this paper, we present the UHECR spectrum reported by TA, using an exposure of 6300 km$^2$ sr yr accumulated over 7 years of data taking, and the corresponding result of Auger, using 10 years of data with a total exposure exceeding 50000 km$^2$ sr yr. We review the astrophysical interpretation of the two measurements and discuss their systematic uncertainties.

In the case of a pure proton composition, the ankle can be explained by electron-positron pair production in interactions of the CR protons with the CMB photons [6]. In the case of a mixed composition, on the other hand, propagation effects are complicated by the fact that the primary nuclei also suffer interactions that cause a progressive reduction of their mass numbers. Other models assert that a cut-off in the acceleration mechanism at the sources may play some role in explaining the observed suppression of the cosmic ray flux [7].

Historically, observation of the cut-off in the energy spectrum was a technically challenging task. Because the rate of CRs with energies above $10^{20}$ eV is as low as 1 event per square kilometer per century, experiments with very large effective areas, long observation periods, and good energy resolution were required to see the effect. AGASA (Akeno Giant Air Shower Array) [9] and the High-Resolution Fly's Eye (HiRes) [8] were the first cosmic ray detectors large enough to measure the energy spectrum of UHECRs above $10^{19}$ eV, as shown in Fig. 1. There were two major differences in their results. First, there was a difference in the overall energy scale, which came from the difference in the techniques employed by the two experiments. AGASA used an array of scintillation counters that detected EAS particles at ground level, while HiRes employed fluorescence detectors sensitive to the fluorescence light emitted due to the energy deposition of the EAS particles in the atmosphere. The systematic uncertainties in determining the CR primary energy were ∼20% in both experiments. The second important difference was in the shape of the spectrum above the ankle. The HiRes spectrum showed a steepening at 6 × $10^{19}$ eV [8], as predicted by the GZK theory [4,5], whereas the AGASA spectrum extended well beyond the cut-off energy [9]. The tension between the two major experiments in the 1990s led to the idea of hybrid detection of UHECRs, in which both surface detectors and fluorescence detectors are used within a single experiment. The Telescope Array (TA) [10] and the Pierre Auger Observatory (Auger) [11] are modern hybrid cosmic ray experiments, and this paper describes their recent measurements of the UHECR spectrum.
TA is a cosmic ray observatory that covers an area of about 700 km$^2$ in the northern hemisphere, and Auger has an effective area of 3000 km$^2$ in the southern hemisphere. Both experiments use two types of instruments, surface detectors (SDs) and fluorescence detectors (FDs). The hybrid detection technique, in which CR showers are simultaneously observed with the FDs and SDs at the same site, allows a very precise determination of the CR energies and arrival directions. The FDs measure fluorescence light emitted by atmospheric molecules excited by the charged particles in the EAS, and observe the longitudinal development of the EAS using mirror telescopes coupled with clusters of photomultiplier tubes. Because the FDs are mostly sensitive to the calorimetric energy deposition in the atmosphere, the energy determination of the primary CRs is nearly independent of the details of the hadronic interactions within the EAS, for which there are considerable uncertainties in the different models. The FDs operate at a ∼10% duty cycle because FD data can be collected only during nights with low moonlight background and with dry air and clear skies. The SDs, on the other hand, directly measure EAS particles at ground level at a nearly 100% duty cycle, regardless of the weather conditions. This paper is organized as follows.

2 Telescope Array and Auger Instruments

TA Detectors

The Telescope Array experiment [12] is located in Millard County, Utah (USA), at 39.3° N latitude and ∼1400 m altitude above sea level. The TA SD consists of 507 particle counters arranged on a 1.2 km spaced square grid and covers an area of ∼700 km$^2$ on the ground. Each surface detector unit, shown in the left panel of Fig. 3, consists of two layers of 3 m$^2$, 1.2 cm thick plastic scintillators [13]. Scintillation light in each layer is collected and directed to a photomultiplier tube (PMT) by wavelength-shifting fibers; there is one PMT for each layer. The outputs of the PMTs of the upper and lower layers are individually digitized by 12 bit flash analog-to-digital converters (FADCs) at a 50 MHz sampling rate. The TA includes three fluorescence detector stations that overlook the surface array. The Middle Drum (MD) FD is located in the northern part of the TA, and the Black Rock Mesa (BR) and Long Ridge (LR) FDs are in the southern part. The MD station utilizes 14 refurbished telescopes previously used in the High-Resolution Fly's Eye (HiRes) experiment [14]. Each telescope consists of a ∼5 m$^2$ spherical mirror and an imaging camera of 256 PMTs that uses a sample-and-hold readout system. The telescopes are logically arranged in two layers, called rings, that observe two different elevation ranges. Physically, the MD telescopes are arranged in pairs: ring 2 telescopes, which observe higher elevations, are placed next to the ring 1 telescopes. The station covers 110° in azimuth and 3° to 31° in elevation. The Black Rock Mesa and Long Ridge stations each have 12 fluorescence telescopes that are also arranged in two rings. These telescopes use a new design, shown in the right panel of Fig. 3. Each mirror of the BR and LR telescopes is composed of 18 hexagonal segments. The radius of curvature of each segment is 6067 mm, and the total effective area of the mirror is 6.8 m$^2$. The imaging camera of a BR-LR telescope consists of 256 PMTs that are read out by a 40 MHz FADC system. Each station covers 108° in azimuth and 3° to 33° in elevation [15].
The calibration of the TA FDs is carried out by measuring the absolute gains of dozens of "standard" PMTs installed in each camera, using the CRAYS (Calibration using RAYleigh Scattering) system in the laboratory [16]. The rest of the PMTs in the cameras are calibrated relative to the standard PMTs by using Xe lamps installed at the center of each mirror [17,18].

Auger Detectors

The Pierre Auger Observatory [19] is located in a region called Pampa Amarilla, near the small town of Malargüe in the province of Mendoza (Argentina), at ∼35° S latitude and an altitude of 1400 m above sea level. The observatory has been in operation since 2004, and its construction was completed in 2008. A sketch of the site is shown in the right panel of Fig. 2. The Auger SD consists of 1660 water-Cherenkov detectors (WCDs) arranged on a hexagonal grid with 1.5 km spacing. The effective area of the array is ∼3000 km$^2$. Each WCD unit, shown in the left panel of Fig. 4, is a cylindrical plastic tank, with a 10 m$^2$ base area and a height of 1.2 m, filled with purified water. The Cherenkov radiation produced by the passage of charged particles through the water is detected by three PMTs, each 9 inches in diameter. The signals of the PMTs are digitized by FADCs at a 40 MHz sampling rate. Because the WCDs extend 1.2 m in the vertical direction, the Auger SD is sensitive to cosmic ray showers that develop at large zenith angles. The Auger FD consists of 24 telescopes placed in four buildings located along the perimeter of the site. A sketch of a telescope is shown in the right panel of Fig. 4. Each telescope has a 3.5 m × 3.5 m spherical mirror with a curvature radius of 3.4 m. The coma aberration is eliminated using Schmidt optics, consisting of a circular diaphragm of radius 1.10 m and a series of corrector elements mounted in the outer part of the aperture. An ultraviolet-transmitting filter is placed at the telescope entrance in order to reduce the background light and to provide protection from outside dust. The focal surface is covered by 440 PMTs, in 22 rows × 20 columns, and the overall field of view of the telescope is 30° in elevation and 28.6° in azimuth. The PMTs have hexagonal photocathodes and are surrounded by light concentrators in order to maximize the light collection and to guarantee a smooth transition between adjacent pixels. The signal from each PMT is digitized by a 10 MHz FADC with 12 bit resolution. The Auger FDs are calibrated using a portable cylindrical diffuser, called the drum [20]. During the calibration process, the drum is mounted on the aperture of each telescope and provides uniform illumination of the entire surface area covered by the PMTs of the telescope. The drum is absolutely calibrated using a NIST-calibrated photodiode, and provides an absolute end-to-end calibration of all pixels and optical elements of every Auger FD telescope. Long-term time variations in the calibration of the telescopes are monitored using LED light sources installed in each building.

Auxiliary Facilities at the TA and Auger

In order to reconstruct the shower energy from the FD information accurately, it is necessary to know the attenuation of the light due to molecular and aerosol scattering as the light propagates from the shower to the detector.
The molecular scattering can be calculated from knowledge of the air density as a function of height, and the aerosol content of the atmosphere is monitored each night during the FD data collection. The aerosols are measured in TA and Auger using similar instruments. These include central laser facilities (CLFs) placed in the middle of the arrays, and standard LIDAR (LIght Detection And Ranging) stations [19,21]. The Auger CLF has recently been upgraded to include a backscatter Raman LIDAR receiver. Other instruments, such as infrared cameras, are also employed in both experiments to continuously monitor the cloud coverage. Both the TA and Auger collaborations have enhanced the baseline configurations of their detectors to lower the minimum detectable shower energies. The TA low energy extension (TALE) consists of 10 additional fluorescence telescopes viewing higher elevation angles [22,23], from 32° to 59°, installed at the MD site, and an infill array of the same scintillation detectors as those used by the main TA SD array. In a similar way, Auger has installed three additional High Elevation Auger Telescopes (HEAT), viewing from 30° to 60° in elevation, at the Coihueco site (see Fig. 2). HEAT overlooks a 27 km$^2$ region on the ground that is filled with additional WCDs using 750 m spacing [19]. An important calibration facility, called the Electron Light Source (ELS) [24], has been implemented by the TA collaboration. The ELS is a linear accelerator installed in front of the TA Black Rock Mesa FD station, at a distance of 100 m from the detector. The ELS provides a pulsed beam of 40 MeV electrons that are injected into the FD field of view. The pulse frequency is 1 Hz, and each pulse has a duration of 1 µs and an intensity of about $10^{9}$ electrons. The ELS beam mimics cosmic ray air showers and provides an effective test not only of the FD calibration but also of other kinds of detectors, such as radio antennas.

Both the TA and Auger experiments use FD measurements to set their energy scale. The FD measures fluorescence photons produced by the de-excitation of atmospheric molecules (nitrogen and oxygen) that have been excited by the charged particles in the EAS, and provides a nearly calorimetric estimate of the total EAS energy. The fluorescence photons are emitted isotropically in the wavelength range between 290 and 430 nm. The most significant line emission, at 337 nm, contributes ∼25% of the total emission intensity. The number of emitted photons is proportional to the energy deposited by the charged particles in the EAS. The proportionality factor, called the fluorescence yield (FY), has been measured by several experiments using accelerator beams and radioactive sources (see [25] for a review of this topic). As the EAS develops in the atmosphere, fluorescence light emitted at different altitudes triggers the FD pixels (PMTs) at different times.

Fig. 5 Examples of reconstructed FD longitudinal profiles. For TA (left), the plot shows the number of detected photoelectrons versus slant depth along the shower propagation axis: points with error bars represent the data, the reconstructed fluorescence light is shown in red, and the Cherenkov contribution is shown in blue [26]. For Auger (right), the reconstructed energy deposition (points with error bars) is plotted versus slant depth along the shower axis; the red curve shows the Gaisser-Hillas function fitted to the data points [27].
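The Gaisser-Hillas parameterization mentioned in the Fig. 5 caption also provides a convenient way to sketch the calorimetric energy determination described in the following paragraphs. Below is a minimal numerical sketch in Python; the profile parameters and the 12% invisible-energy fraction are illustrative assumptions (within the 10 to 15% range quoted below), not values fitted by either experiment.

```python
import numpy as np

def gaisser_hillas(X, dEdX_max, X_max, X0=-100.0, lam=70.0):
    """Gaisser-Hillas longitudinal profile dE/dX(X), X in g/cm^2.
    X0 and lam are shape parameters; all values here are illustrative."""
    t = (X - X0) / (X_max - X0)
    return dEdX_max * t ** ((X_max - X0) / lam) * np.exp((X_max - X) / lam)

# Illustrative profile: dE/dX in PeV/(g/cm^2) versus slant depth
X = np.linspace(0.0, 1500.0, 1501)
dEdX = gaisser_hillas(X, dEdX_max=20.0, X_max=750.0)

# Calorimetric energy E_cal = integral of dE/dX over X (trapezoidal rule)
E_cal = float(np.sum(0.5 * (dEdX[1:] + dEdX[:-1]) * np.diff(X)))

# Total primary energy after the invisible-energy correction, assuming
# E_inv is 12% of the total energy E (so E_cal = 0.88 * E)
E_total = E_cal / (1.0 - 0.12)
print(f"E_cal = {E_cal:.0f} PeV, E_total = {E_total:.0f} PeV")
```

With these toy parameters the integral comes out near $10^{19}$ eV, i.e. a UHECR-scale shower; only the logic of the profile integration and the missing-energy correction carries over to the real reconstructions.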
The pointing directions of the triggered pixels and the pixel timing information are used to reconstruct the full geometry of the cosmic ray shower event, which includes the arrival direction and the impact parameter (the distance between the FD station and the EAS axis). Additionally, information from the triggered surface detector stations on the ground can be used to constrain the FD timing fit and improve the resolution of the EAS geometry. Events that are reconstructed using FD and SD information simultaneously are called hybrid events. Examples of reconstructed FD longitudinal profiles are shown in Fig. 5. The energy deposition (dE/dX) is determined as a function of the slant depth X along the shower axis using the intensity of the signal in the triggered pixels. The dE/dX reconstruction procedure requires the absolute calibration of the FD telescopes, knowledge of the attenuation of the light due to scattering by air molecules and aerosols, and the absolute fluorescence yield. The integral of the dE/dX profile gives the calorimetric energy of the shower: $E_{\rm cal} = \int (dE/dX)\, dX$. The total (and final) energy of the primary cosmic ray is obtained from $E_{\rm cal}$ after the addition of the so-called invisible (or, equivalently, missing) energy $E_{\rm inv}$, which is the energy carried away by high-energy muons and neutrinos that do not deposit their energy in the atmosphere and thus cannot be seen by the FD. For a typical EAS, $E_{\rm inv}$ is of the order of 10 to 15% of the total primary energy. Further details on the energy scale and invisible energy corrections in TA and Auger are discussed in Secs. 3.2 and 3.3.

Surface Detector Energy Reconstruction

The SD energy scale in both TA and Auger is calibrated to the FDs using well reconstructed hybrid events. This is done by comparing the SD energy estimators with the energies obtained from the corresponding FD longitudinal profiles on an event-by-event basis. The SD energy estimators in TA and Auger are obtained using conceptually similar analyses [28,29]. The energy of the primary CR particle, arriving at a given fixed zenith angle θ, is assumed to be directly related to the intensity of the shower front at a certain distance from the shower core. This relation depends on θ because the effective amount of air that the EAS propagates through before reaching ground level increases as 1/cos θ. The shower axis and the point of impact on the ground are determined from the timing and the intensity of the signal in the triggered SD stations. The best energy estimator is obtained by evaluating the intensity of the shower front at an optimum distance r_opt from the shower core. This is done using analytic functions with parameters determined from a fit to the intensity of the signal as a function of the distance from the shower core (see Fig. 6). The optimal energy determination distance for the TA SD, with its 1200 m spacing, is r_opt = 800 m. For Auger, whose spacing is 1500 m, r_opt is 1000 m. A similar reconstruction technique is used for the Auger events detected by the 750 m array placed in front of the HEAT telescopes (see Sec. 2.3); in this case r_opt is 450 m. Next, the intensity of the signal at the optimal distance r_opt is corrected for the zenith angle attenuation. In TA, this correction is made using a detailed Monte Carlo simulation [30,31], and an energy estimator E_SDMC is obtained. E_SDMC represents the reconstructed TA SD energy prior to the calibration of the energy scale by the FD; a schematic sketch of this calibration chain is given below.
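The sketch that follows is in the Auger style: a CIC-like zenith-attenuation correction followed by the power-law cross-calibration against the FD, both detailed in the next paragraphs. The polynomial coefficients and the toy event values are illustrative assumptions, not the published calibration constants.

```python
import numpy as np

def attenuation_correction(S_ropt, theta_deg, a=0.98, b=-1.68, theta_ref=38.0):
    """CIC-style zenith-attenuation correction of the shower-front signal at
    r_opt, normalized to a reference zenith angle (38 deg for the Auger
    1500 m array); the polynomial coefficients here are illustrative."""
    x = np.cos(np.radians(theta_deg)) ** 2 - np.cos(np.radians(theta_ref)) ** 2
    return S_ropt / (1.0 + a * x + b * x ** 2)

# Toy hybrid events: raw signals at r_opt, zenith angles, and FD energies
S_raw = np.array([25.0, 42.0, 70.0, 120.0, 200.0])
theta = np.array([12.0, 25.0, 38.0, 47.0, 55.0])
E_FD = np.array([4.8, 8.5, 15.0, 27.0, 48.0])   # EeV

S38 = attenuation_correction(S_raw, theta)

# Power-law cross-calibration E_FD = A * S38^B, a straight line in log-log
B, logA = np.polyfit(np.log(S38), np.log(E_FD), 1)
A = np.exp(logA)
print(f"E_FD ~ {A:.2f} * S38^{B:.2f}")
```

The TA chain differs in the details (the attenuation is taken from Monte Carlo, and the FD matching is a single scale factor rather than a power law), but the event-by-event comparison against hybrid FD energies is the same idea.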
Standard TA SD reconstruction uses events with θ < 45°. In the case of Auger, the zenith angle attenuation is derived from the data using the "Constant Intensity Cut" method [32].

Fig. 7 Calibration of the SD energy using the FD. Left: TA analysis [28]. For each hybrid event, E_FD is plotted versus E_SDMC/1.27 (shown as cross marks); 1.27 is the calibration factor required to match the SD energies estimated using the SD Monte Carlo technique to the energies measured by the FD [28]. Right: Auger analysis [29]. The SD energy estimators S_38, S_35, and N_19 are plotted versus the corresponding FD energies, for the hybrid events relevant for each type of Auger SD analysis. Solid lines represent fits to the power-law function described in the text.

In Auger, the reconstruction technique described above is applied to showers with θ < 60° for the 1500 m array and θ < 55° for the 750 m array. A different reconstruction technique is used for the Auger 1500 m array in the case of inclined showers (θ > 60°) [33]. In these showers, the electromagnetic component is largely absorbed by the atmosphere, and the signal in the WCDs is dominated by muons. The muon patterns (maps) are asymmetric because of the deflections of the muons in the magnetic field of the Earth. These maps are calculated for different zenith and azimuth angles using Monte Carlo simulations. The normalization of the maps, called N_19, is fitted to the data and provides an energy estimator for the inclined showers. Correlations between the SD energy estimators and the FD energies are shown in Fig. 7. In TA, the final event energy is determined by scaling down the Monte Carlo estimator E_SDMC by the factor E_SDMC/E_FD = 1.27, to match the FD energy scale [28]; the same factor is used at all energies. In Auger, the correlations between the SD energy estimators and the FD energies are well described by a power-law function E_FD = A S^B, where S is S_38, S_35, or N_19, depending on the type of Auger SD analysis. The parameters A and B are obtained from fits to the data [29]. Quantities relevant for the TA and Auger SD energy calibration are summarized in Tables 1 and 2, with the TA and Auger SD energy resolutions given in the last rows of the tables.

Systematic Uncertainties of the Energy Scale

Since both experiments calibrate their surface detectors to the FDs, the systematic uncertainties of their energy scales reduce to those of the FDs. Therefore, an effort has been made by both the TA and Auger collaborations to understand the uncertainties that affect the reconstruction of fluorescence detector events [31,34,35]. Table 3 shows a summary of the TA and Auger FD systematic uncertainties in terms of five major contributions: fluorescence yield, atmospheric modeling, FD calibration, determination of the longitudinal profile of the shower, and the invisible energy correction.

Table 3 Systematic uncertainties on the energy scale for TA [34] and Auger [35]. For Auger, the variation of the uncertainties refers to the energy range between 3 × $10^{18}$ eV and $10^{20}$ eV.

The fluorescence yield model used by the TA collaboration is based on the measurement of the absolute yield by Kakimoto et al. [36] in the 300 to 400 nm wavelength range. The Auger FY model combines the Airfly measurement of the absolute yield of the 337 nm band [38], with an uncertainty of 4%, with the wavelength spectrum [39] and the dependence on pressure [39], temperature, and humidity [40,41] of the emission bands at different wavelengths.
The contributions of the FY models to the systematic uncertainty on the energy scale are 11% for TA and 3.6% for Auger. In both cases, the FY contributions are dominated by the systematic uncertainties on the absolute FY. In TA, the aerosol transmission is estimated using the median of the aerosol optical depth profiles measured by the LIDAR [21]. The uncertainty on the shower energy determination, obtained by propagating the standard deviation of the LIDAR measurements, is under 10%. The Auger collaboration uses hourly estimates of the aerosol profile provided by the laser facilities placed in the middle of the SD array [42,43]. The uncertainties of these measurements contribute less than 6% to the reconstructed shower energy. A minor contribution to the systematic uncertainty arises from imprecise knowledge of the atmospheric density profiles. Both TA and Auger use the Global Data Assimilation System (GDAS), which provides atmospheric data on a 1° × 1° grid in longitude and latitude (∼110 km × 110 km) over the whole world, with a time resolution of 3 h [44]. A detailed discussion of the implementation of the GDAS atmospheric profiles in the Auger FD event reconstruction can be found in [45]. The uncertainties due to the calibration of the FD telescopes contribute ∼10% for both TA and Auger. They are dominated by the uncertainties on the absolute calibration described in Sec. 2. The uncertainties from the relative calibration systems, which track the short- and long-term changes of the detector response, are taken into account in both experiments and are small compared to those from the absolute calibration. The uncertainties arising from the reconstruction of the longitudinal shower profiles are obtained by comparing different reconstruction techniques, as well as by studying the energy reconstruction biases with Monte Carlo simulations. The contributions due to the shower profile reconstruction are ∼9% for TA and ∼6% for Auger. The last important contribution is the uncertainty due to the determination of the invisible energy E_inv. The TA collaboration estimates E_inv mainly from Monte Carlo simulations of proton air showers, using the QGSJetII-03 hadronic interaction model. For TA, the contribution to the systematic uncertainty on E due to the missing energy correction is estimated to be 5%. The Auger collaboration derives the invisible energy correction from data [46]. This is done by exploiting the sensitivity of the WCDs to the muons in the showers. The muons mostly originate from pion decays, each accompanied by a muon neutrino (or antineutrino); the signal in the WCDs is therefore sensitive to the muon size of the shower, which is well correlated with E_inv. This analysis keeps the uncertainty from the invisible energy estimate on the Auger energy scale well under 3%. Recently, the TA collaboration has also performed a check of its missing energy calculation by applying this method [46] to the inclined showers in its data. The total uncertainty on the energy scale is obtained by adding all individual contributions in quadrature. It is found to be 21% for TA and 14% for Auger. In addition, for Auger, the total uncertainty includes a further contribution of 5%, evaluated by studying the stability of the energy scale in different time periods and under different conditions.
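The quadrature combination is a one-liner; as a check, the sketch below reproduces the quoted TA total from the individual contributions listed above (the values are approximate readings of Table 3, which is not reproduced here).

```python
import math

# TA energy-scale contributions (percent), approximate values from the text
ta_contributions = {
    "fluorescence yield": 11.0,
    "atmosphere (aerosols)": 10.0,
    "FD calibration": 10.0,
    "profile reconstruction": 9.0,
    "invisible energy": 5.0,
}
total = math.sqrt(sum(v ** 2 for v in ta_contributions.values()))
print(f"TA total energy-scale uncertainty: {total:.0f}%")  # ~21%
```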
Energy Scale Comparison between the TA and Auger

Understanding all contributions to the difference in the energy scales of the two experiments is a difficult task, since many factors are related to the performance of the detectors and to the differences in the analysis techniques used by the two collaborations. On the other hand, two important contributions, the fluorescence yield and the invisible energy, can be considered external parameters of the experiments, in the sense that they are related to general properties of atmospheric showers and thus can be easily exchanged in the CR event reconstruction chains of both collaborations. Therefore, the difference in the energy assignment can be addressed by studying the differences in the fluorescence yield (FY) model and in the invisible energy corrections. The impact of the FY on the reconstruction of fluorescence events has been studied in detail for many years [47-49]. In the left panel of Figure 8, we report the results of the studies performed in [49]. Red points describe the effect (on Auger shower energies) of changing the fluorescence yield model from the FY model used by Auger to the FY model used by TA. The energy shift is ∼12% at 1 EeV and is slightly smaller at the highest energies. This shift is the combined effect of changing the absolute intensity of the fluorescence yield and all the parameters describing the relative intensities of the spectral lines and their dependence on the atmospheric conditions. The effects of the individual components can be disentangled by the following argument. The absolute FY from Kakimoto et al. [36], when normalized to the intensity of the 337 nm line, where the Airfly experiment made a precise measurement of the absolute FY, differs from the Airfly measurement [38] by ∼20% (the Airfly FY is higher) [25]. If the absolute FY from Kakimoto et al. [36] were used in the reconstruction of the Auger events, while retaining all other parameters of the Airfly model, one would expect the Auger energies to increase by ∼20%. From this, we conclude that the effects of the FY parameters other than the absolute FY are of the order of −10%. About half of this effect is due to the removal of the temperature and humidity dependence of the quenching cross sections (see also [51]), effects that are properly accounted for in the Auger experiment. We note that the 20% difference between the Kakimoto et al. and Airfly absolute FYs is outside the range defined by the uncertainties quoted for the two measurements, 10% [36] and 3.9% [38], respectively. It is not surprising that the ∆E/E results of TA and Auger (black and red points in the left panel of Figure 8) are different. For each experiment, the spectrum of the fluorescence photons detected by the FD is necessarily different from the one emitted at the axis of the cosmic ray shower: the fluorescence photon spectrum is folded with the FD spectral response, and the atmospheric transmission also depends on the wavelength. Since the Auger and TA FD spectral responses and atmospheric transmission conditions are generally different, we expect larger differences for the higher energy showers, which occur farther away from the telescopes. A better agreement between the energy shifts can be obtained by correcting the Auger energy shift for the effects of the different spectral response.
The results of this analysis are shown in the left panel of Figure 8 [49] as blue dots, which are in better agreement with the TA energy shift (black points). Following the above studies we conclude that, despite the above-mentioned inconsistency between the Airfly [38] and Kakimoto et al. [36] absolute FYs, the differences in the energy scales of TA and Auger due to the use of different FY models are at the level of 10-15% and are roughly consistent with the estimated uncertainties presented in Sec. 3.2. The validity of the estimates of the uncertainties on the FY has also been addressed with the ELS facility at the TA experiment. Preliminary results of several ELS runs, under different atmospheric conditions, have been presented in [52]. The ELS results are in better agreement with the Airfly FY model. The invisible energy (E_inv) corrections implemented by TA and Auger are shown in Figure 9 [31,34,35]. They are presented in terms of the percent contribution to the total shower energy E. At $10^{19}$ eV, the TA invisible energy correction is 7%, while that of Auger is 13%. The difference between the two corrections is about 6% (slightly smaller at higher energies). The two corrections agree within the systematic uncertainties quoted by the two collaborations, shown as dashed bands in Fig. 9. As already noted in Sec. 3.2, the invisible energy of TA has been estimated using Monte Carlo simulations of proton primaries with the QGSJetII-03 hadronic interaction model. For a heavier composition, the invisible energy correction would be larger. The assumption of proton primaries is consistent with the light composition observed by TA through the measurement of the mean depth of maximum shower development, ⟨X_max⟩ [53]. It is worth noting that the inference on mass composition strongly depends on the hadronic interaction models used to interpret X_max [54]. The Auger measurements of X_max [55] are consistent with those of TA [56], but they generally support a heavier mass composition. The Auger invisible energy correction has the advantage of being essentially insensitive to the hadronic interaction models, since it is derived from the data. It takes rather high values, even higher than those predicted by simulations for iron primaries. These higher values are due to the excess of muons measured by Auger in highly inclined events [57]. We can conclude this section by estimating the shifts of the Auger and TA energy scales if both the FY model and the invisible energy correction were exchanged. To a first approximation, they can be obtained by combining the two energy shifts presented above: the energies of the TA events would decrease by about 9% ((1 − 0.14) × (1 + 0.06) ≈ 0.91), while the energies of Auger would increase by about 5% ((1 + 0.12) × (1 − 0.06) ≈ 1.05).

TA and Auger Energy Spectrum

The energy spectrum is obtained by dividing the energy distribution of cosmic rays by the accumulated exposure of the detector. The calculation of the exposure for the surface detector is generally robust, especially above the energy threshold at which the array becomes fully efficient regardless of the event arrival direction. For the fluorescence detector, on the other hand, the calculation of the exposure must take into account the detector response as a function of energy and of the distance between the shower and the telescope, the conditions of the data collection, and the state of the atmosphere.
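The flux calculation itself is straightforward once the exposure is known; here is a minimal sketch with invented event counts (the exposure value is the TA SD figure quoted in the text), before any resolution correction.

```python
import numpy as np

edges = np.logspace(18.8, 20.0, 7)                  # energy bin edges in eV
counts = np.array([2400.0, 900.0, 310.0, 90.0, 17.0, 2.0])  # toy counts
exposure = 6300.0                                   # km^2 sr yr

J = counts / (exposure * np.diff(edges))            # flux in 1/(km^2 sr yr eV)
J_err = np.sqrt(counts) / (exposure * np.diff(edges))  # Poisson errors
E = np.sqrt(edges[:-1] * edges[1:])                 # geometric bin centers
E3J = E ** 3 * J                                    # customary E^3 J(E) plot
```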
The large exposures accumulated by the surface detectors of the Auger and TA experiments make it possible to study the UHECR flux at the highest energies in different declination bands, and the measurements can be used to constrain astrophysical models.

TA Data

The TA collaboration has measured four independent energy spectra [28]. The highest energies are covered by the SD, the intermediate energies are covered by the BR and LR telescopes [61,62], and the lowest energies are measured by the TALE telescopes using Cherenkov light. The TALE events have been divided into two categories: one in which fluorescence light dominates the flux of photons detected by the telescopes (TALE Bridge) [23], and another in which Cherenkov light is the dominant component (TALE Cherenkov) [22]. The exposures for the four different reconstruction methods are shown in the left panel of Fig. 10, and the energy spectra are shown in the left panel of Fig. 11. The spectrum obtained by combining the four measurements is presented in the right panel of Fig. 11. The TA spectrum, including the TA low energy extension, covers over 4.7 orders of magnitude in energy, starting at 4 × $10^{15}$ eV, just above the knee. The analysis of the TALE data has made it possible to observe the low energy ankle at ∼2 × $10^{16}$ eV and the second knee at ∼2 × $10^{17}$ eV. The ankle and the cut-off of the UHECR spectrum are confirmed with improved statistics by the BR and LR FDs and by the SD. At the very highest energies the combined spectrum is dominated by the SD measurements. The TA SD is fully efficient above 8 × $10^{18}$ eV, and its energy scale is fixed by the FD as described in Sec. 3.1. The TA SD exposure accumulated over 7 years of data taking is ∼6300 km$^2$ sr yr. It is estimated using a detailed Monte Carlo simulation that takes into account the detector effects and includes the unfolding corrections that must be applied to the observed event energy distribution to account for the bin-to-bin migrations due to the finite resolution of the detector [30]. Due to the steepness of the spectrum, the effects of the resolution would otherwise cause a positive bias in the observed flux, since the upward fluctuations of the energies are not fully compensated by downward fluctuations. It is customary to characterize the shape of the spectra using suitable functional forms. As seen in Fig. 11, the TA collaboration uses power laws with break points that correspond to the energies at which the spectral indices change. Above ∼3 × $10^{17}$ eV such a broken power-law function is fitted, and the values of the fitted parameters are given in Table 4.

Auger Data

The Auger collaboration has measured the energy spectrum using four different techniques. The first two measurements cover the highest energies and consist of two data sets, of vertical and inclined events, recorded by the Auger surface detector array with 1500 m spacing. The large size of the Auger water tanks, as well as the overall surface area coverage, are the key factors that have enabled the Auger collaboration to perform a high precision measurement of the UHECR energy spectrum with relatively high statistics. All four Auger spectra overlap in the region of the ankle. The cut-off is precisely measured by the 1500 m array, with an exposure of 42500 km$^2$ sr yr for the vertical and 10900 km$^2$ sr yr for the inclined showers. The data cover a period of about 10 years.
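Both collaborations must deal with the bin-to-bin migration caused by the finite energy resolution: TA through unfolding corrections in its Monte Carlo exposure calculation (described above), Auger through the forward-folding approach described later in this subsection. The following generic sketch builds a migration matrix for a Gaussian resolution in log10(E) and computes the resulting correction factor; the 0.08 resolution and the E^-3 trial spectrum are illustrative assumptions.

```python
import numpy as np
from math import erf, log10, sqrt

edges = np.logspace(18.4, 20.2, 19)               # energy bin edges, eV
centers = np.sqrt(edges[:-1] * edges[1:])
sigma = 0.08                                      # assumed resolution in log10 E

n = len(centers)
M = np.zeros((n, n))                              # M[i, j]: true bin j -> reco bin i
for j in range(n):
    mu = log10(centers[j])
    for i in range(n):
        a = (log10(edges[i]) - mu) / (sigma * sqrt(2.0))
        b = (log10(edges[i + 1]) - mu) / (sigma * sqrt(2.0))
        M[i, j] = 0.5 * (erf(b) - erf(a))         # Gaussian mass in reco bin i

J_trial = centers ** -3.0                         # trial (fitted) spectrum shape
counts_true = J_trial * np.diff(edges)
J_folded = (M @ counts_true) / np.diff(edges)     # spectrum after migration
correction = J_trial / J_folded                   # factor applied to the raw flux
```

Because the spectrum falls steeply, the folded flux exceeds the trial flux, so the correction is below one: exactly the positive bias described in the text.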
The SD exposure is a purely geometrical quantity, based on counting the active elemental hexagon cells of the array as a function of time, and is known with an uncertainty of better than 3% [63]. As can be seen in Fig. 12, the Auger collaboration characterizes the energy spectrum using a functional form different from that used by TA. The function used by Auger consists of a power law below the ankle and a power law with a smooth suppression at the highest energies:

$J(E) \propto E^{-\gamma_1}$ for $E < E_{\rm ankle}$, and $J(E) = J_0 \, (E/E_{\rm ankle})^{-\gamma_2} \, \dfrac{1 + (E_{\rm ankle}/E_s)^{\Delta\gamma}}{1 + (E/E_s)^{\Delta\gamma}}$ for $E \geq E_{\rm ankle}$.

Here γ_1 and γ_2 are the spectral indices below and above E_ankle, respectively, and therefore have the same meaning as the corresponding TA parameters. E_s is, to a good approximation, the energy at which the spectrum drops to one half of what would be expected in the absence of the cut-off, and ∆γ is the increment of the spectral index beyond the suppression region. J_0 is the overall normalization factor, conventionally chosen to be the value of the flux at E = E_ankle. The values of the parameters are given in Table 5.

Table 5 Values of the parameters of the functional form fitted to the combined Auger energy spectrum [29]. Statistical and systematic uncertainties are shown.

The Auger SD spectrum is corrected for the effects of the detector resolution using a forward-folding approach. First, a Monte Carlo simulation of the detector is used to calculate the bin-to-bin migration matrix of the resolution. Next, the measured Auger spectrum (before correction for the effects of the resolution) is fitted to the convolution of the functional form described above with the bin-to-bin migration matrix. Once the best-fit parameters (Table 5) are obtained, the resolution correction factor is calculated by dividing the fitted spectrum function by the convolution of the fitted spectrum function with the migration matrix (a generic sketch of such a matrix was given above). The final Auger spectrum is obtained by applying this resolution correction factor to the initial measurement of the spectrum.

Comparison of the TA and Auger Results

It is customary, in both TA and Auger, to present the cosmic ray spectrum as the flux J(E) multiplied by the third power of the energy, E³ (see Figs. 11 and 12). In this representation, the low energy ankle and the ankle are clearly seen as local minima, while the second knee and the high energy suppression appear as local maxima. Figure 13 shows the superimposed TA and Auger spectra simply as J(E) versus E. The stronger features, the ankle and the suppression, are still visible in the two results, even without multiplication by E³. The combined energy spectra of TA and Auger above 3 × $10^{17}$ eV are presented in the left panel of Fig. 14. There is clearly an overall energy scale difference between the two measurements, which is emphasized by the multiplication of the two results by the third power of the energy. The offset appears to be constant below the cut-off energy, above which the TA flux becomes significantly higher than that of Auger.

Fig. 13 The TA and Auger combined energy spectra J(E) as a function of E, as presented at the 34th International Cosmic Ray Conference (ICRC 2015) [28,29].

A more quantitative statement can be made by considering the ratio of the Auger and TA fluxes, shown in the right panel of Fig. 14. Below ∼2 × $10^{19}$ eV, the Auger flux is ∼20% lower than the TA flux, and the difference between the two measurements becomes large for E > 2 × $10^{19}$ eV.
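To make the comparison of the two parameterizations concrete, the sketch below evaluates both fit forms side by side. All parameter values are illustrative placeholders, not the published fit results of Tables 4 and 5.

```python
import numpy as np

def j_ta(E, J0=1.0, E_ankle=5e18, E_break=6e19, g1=3.3, g2=2.7, g3=4.6):
    """TA-style broken power law: indices g1/g2/g3 below the ankle, between
    ankle and break, and above the break; continuous at both break points."""
    E = np.asarray(E, dtype=float)
    J = np.where(E < E_ankle,
                 J0 * (E / E_ankle) ** -g1,
                 J0 * (E / E_ankle) ** -g2)
    high = E >= E_break
    J_at_break = J0 * (E_break / E_ankle) ** -g2
    J[high] = J_at_break * (E[high] / E_break) ** -g3
    return J

def j_auger(E, J0=1.0, E_ankle=5e18, g1=3.3, g2=2.6, Es=4e19, dg=3.0):
    """Auger-style form: power law below the ankle, power law with a smooth
    suppression above it (the functional form quoted in the text)."""
    E = np.asarray(E, dtype=float)
    low = J0 * (E / E_ankle) ** -g1
    high = (J0 * (E / E_ankle) ** -g2
            * (1.0 + (E_ankle / Es) ** dg) / (1.0 + (E / Es) ** dg))
    return np.where(E < E_ankle, low, high)

E = np.logspace(18.0, 20.3, 100)
ratio = j_auger(E) / j_ta(E)   # shape comparison of the two parameterizations
```

The sharp breaks of the TA form versus the smooth roll-off of the Auger form are what make a direct parameter-by-parameter comparison in the cut-off region ambiguous, as discussed next.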
It should be noted that below 2 × $10^{19}$ eV the two spectra agree within the systematic uncertainties of the two experiments: a shift in the energy scale of less than 20% (a negative energy shift for TA or a positive one for Auger) would bring the two measurements into agreement. This shift is well within the uncertainties described in Sec. 3.2, and it can be attributed to the different models of the fluorescence yield and/or the invisible energy correction used by the two collaborations (see Sec. 3.3). Another way to address the differences between the two measurements is to compare the fitted parameters of the functional forms that describe the shapes of the spectra (see the previous subsections). The values of E_ankle presented in Tables 4 and 5 can be compared directly and, as expected, they are in good agreement. In the region of the cut-off, on the other hand, the comparison is more difficult, since the parameters that define the two functional forms have different meanings. However, an unambiguous comparison can be made using the parameter suggested in [6] that defines the position of the observed cut-off. This is the energy E_1/2 at which the integral spectrum drops by a factor of two below what would be expected in the absence of the cut-off. E_1/2 has been calculated by both collaborations: for TA, E_1/2 = 60 ± 7 EeV (statistical error only) [28], and for Auger, E_1/2 = 24.7 ± 0.1 (+8.2/−3.4) EeV [29] (statistical and systematic errors). The two values of E_1/2 are significantly different, even after taking into account the systematic uncertainties in the energy scales of the two experiments. The difference between the TA and Auger spectra in the region of the cut-off is very intriguing. Because the TA experiment is in the northern hemisphere and Auger is in the southern hemisphere, the two experiments look at different parts of the sky, and the difference could be a signature of anisotropy in the arrival directions of the ultra-high energy cosmic rays. Moreover, the highest energies are the most promising for the identification of the sources of cosmic rays, since the deflections of the trajectories of the primaries in the galactic and extragalactic magnetic fields are minimized there. However, the measurement of the spectrum at the cut-off is affected by large uncertainties. In addition to the poor statistics, the analysis is complicated by the steepness of the flux: a large spectral index amplifies the effect of the uncertainty in the energy scale, and it increases the unfolding corrections required to account for the bin-to-bin migrations due to the finite energy resolution. A continuous and increasing effort is being made by the two collaborations to establish better control of these effects and a better evaluation of the systematic uncertainties.

Discussion

The TA and Auger collaborations have developed analyses to constrain astrophysical models using measurements of the energy spectrum. The observed features in the UHECR spectrum can reveal the astrophysical mechanisms of production and propagation of the UHECRs. Moreover, thanks to the unprecedented statistics accumulated by the two experiments, the collaborations have started studying the energy spectrum in different regions of the sky. This represents a big step forward in the cosmic ray field: combined analyses of the anisotropies of the arrival directions of cosmic rays, using high-statistics whole-sky data, and of the features in the energy spectrum can significantly improve our understanding of the nature of the UHECRs.
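Before turning to the model fits, the E_1/2 statistic introduced above can be made concrete with a short numerical sketch. The suppressed spectrum uses the smooth cut-off form of the previous subsection with made-up parameter values; the finite upper integration limit is chosen high enough that the remaining power-law tail is negligible.

```python
import numpy as np

E = np.logspace(18.7, 21.0, 5000)                 # eV
gamma, Es, dg = 2.6, 4.0e19, 3.0                  # illustrative values
J_nocut = E ** -gamma                             # uncut power law
J_sup = J_nocut / (1.0 + (E / Es) ** dg)          # smoothly suppressed flux

def integral_above(J):
    # trapezoidal integral from E[i] up to E[-1], for every i
    seg = 0.5 * (J[1:] + J[:-1]) * np.diff(E)
    return np.cumsum(seg[::-1])[::-1]

ratio = integral_above(J_sup) / integral_above(J_nocut)
E_half = E[:-1][np.argmin(np.abs(ratio - 0.5))]   # where the ratio crosses 1/2
print(f"E_1/2 ~ {E_half:.3g} eV")
```

Because E_1/2 is defined from the integral spectrum, it can be evaluated for any parameterization, which is precisely what makes it a useful bridge between the TA and Auger fit forms.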
Fitting Energy Spectrum to Astrophysical Models

The basic assumption of the models developed by TA and Auger is that UHECRs are accelerated at astrophysical sources (bottom-up models). In fact, most of the so-called top-down models, in which the primaries are generated by the decay of super-heavy dark matter, topological defects, or exotic particles, have been excluded by strong upper limits on the ultra-high energy photon and neutrino fluxes [11,58]. The basic approach developed by TA and Auger to interpret the UHECR spectrum consists of assuming a distribution of identical sources, a mass composition, and an energy spectrum at the sources. The spectrum at Earth is then simulated taking into account the interactions of the primaries with the cosmic radiation backgrounds (CMB, infrared, optical, and ultraviolet) and the magnetic fields encountered along their path. The models are characterized by parameters whose values are determined from fits to the experimental data.

Fig. 15 Interpretation of the TA SD spectrum with astrophysical models [59], in terms of the cosmic ray spectral index at the sources, p, and the cosmological evolution parameter, m.

In the TA model [59], the sources are distributed either uniformly or according to the large-scale structure (LSS) described by the distribution of galaxies in the Two Micron All Sky Survey [60]. Only proton primaries are simulated. This composition assumption is justified by measurements of the mean X_max made with the TA FD [53]. The spectrum of cosmic rays at the sources is parametrized as $\alpha E^{-p}(1+z)^{3+m}$, where z is the redshift and the parameter m describes the cosmological evolution of the source density (for m = 0 the source density is constant per comoving volume). The maximum energy to which the primaries are accelerated is fixed at $10^{21}$ eV, well above the energy of the GZK effect. The results of the TA analysis are shown in Fig. 15. The model that fits the SD spectrum [28] well is shown using solid and dashed lines, for the uniform and LSS density distributions of the sources, respectively. The confidence regions of the model parameters are shown in the right panel. The fitted parameters are p ≈ 2.2 and m ≈ 7 [59]; the latter indicates a very strong evolution of the sources. The conclusion of the analysis is that the TA spectrum is well described by the interactions of protons with the CMB: the GZK cut-off [4,5], due to photo-pion production, and the ankle, due to electron-positron pair production [6].

Fig. 16 Interpretation of the Auger spectrum with astrophysical models [69], where the X_max distribution is predicted assuming the EPOS-LHC UHECR air shower interaction model.

In the Auger model [69,70], the UHECR mass composition is not fixed but is fitted to the Auger X_max data [55], simultaneously with the fit to the UHECR energy spectrum [29]. The sources have an isotropic distribution in a comoving volume. The nuclei are accelerated with a rigidity-dependent mechanism up to a maximum energy $E_{\max} = Z R_{\rm cut}$, where Z is the charge of the nucleus and R_cut is a free parameter of the model. The spectrum of the sources is parametrized as $\alpha E^{-\gamma}$ (both source parameterizations are sketched in code below). The results of the analysis are presented in Fig. 16. The model that best fits the measured spectrum and the mean and standard deviation of X_max is shown using solid lines in the left panel. The model describes the measurements at energies above the ankle.
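The two injection parameterizations just described can be written down compactly; p and m below are the TA fitted values quoted in the text, while γ, R_cut, and the shape of the cut-off are illustrative choices, not the Auger best fit.

```python
import numpy as np

def injection_ta(E, z, p=2.2, m=7.0, alpha=1.0):
    """TA-style proton injection: power law E^-p with source-density
    evolution (1+z)^(3+m); p and m are the fitted values from the text."""
    return alpha * E ** -p * (1.0 + z) ** (3.0 + m)

def injection_auger(E, Z, gamma=1.0, R_cut=5e18, alpha=1.0):
    """Auger-style nuclear injection with a rigidity-dependent maximum
    energy E_max = Z * R_cut; the smooth exponential turn-off above E_max
    is one common choice, used here for illustration."""
    E_max = Z * R_cut
    spec = alpha * E ** -gamma
    return np.where(E < E_max, spec, spec * np.exp(1.0 - E / E_max))

E = np.logspace(18.0, 20.5, 200)
q_p = injection_ta(E, z=1.0)     # protons injected at redshift z = 1
q_N = injection_auger(E, Z=7)    # nitrogen-like nuclei, charge Z = 7
```

The rigidity dependence is the key structural difference: in the Auger-style model each nuclear species cuts off at Z times the same rigidity, so a hard γ with a low R_cut can reproduce the suppression without invoking propagation losses.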
The deviance (equivalent to a χ² per degree of freedom) as a function of the fitted parameters is shown in the right panel. The absolute minimum corresponds to a very hard injection spectrum (γ ≲ 1) and a low maximum acceleration energy, below the energy of the GZK cut-off. This suggests that the observed break of the spectrum is mainly due to a cut-off at the sources rather than to the effects of propagation. There is another, less significant, minimum at γ ≈ 2; in this case the value of R_cut is larger, and the propagation effects contribute to the break in the spectrum. The TA and Auger analyses thus lead to different conclusions. This is due to the difference in the energies at which the cut-off is observed and to the different primary mass composition assumptions in the models. In TA, the primaries are protons, while in Auger the composition is mixed, with a trend with energy toward heavier elements in the suppression region. It is worth mentioning that the Auger and TA measurements of X_max agree within the systematic uncertainties [56], but the inferred mass compositions are different because different hadronic interaction models and Monte Carlo codes have been used to interpret the data in the two experiments. Moreover, the sensitivity of the experiments to the mass composition in the suppression region is strongly limited by the reduced FD duty cycle, a limitation that the Auger collaboration plans to overcome with an upgrade of the SD detector, as described in Sec. 6.

Study of the Declination Dependence of the Energy Spectrum

The TA and Auger collaborations have started studying the energy spectrum in different declination bands. The exposures of the two SDs versus declination, for one year of data taking, are shown in Fig. 17 [29]. For TA, the exposure refers to the events detected by the SD with zenith angles θ below 45°. For Auger, the exposures are shown for the 750 m (θ < 55°) and 1500 m (θ < 60° and θ > 60°) arrays; the total Auger exposure obtained by adding the three contributions is also shown.

Fig. 17 Exposure as a function of declination (also called the directional exposure) for the TA and Auger SDs [29]. For Auger, the exposure is shown for the 750 m array (infill) and for the showers detected by the 1500 m array at zenith angles below (vertical) and above (inclined) 60°. The zenith angle range for TA is limited to 45°.

The study of the spectrum in different declination bands has become possible thanks to the large statistics accumulated by the two experiments. The study is motivated by recent indications of anisotropy in the arrival directions of cosmic rays. The TA collaboration has found an excess of events with E > 5.7 × $10^{19}$ eV in the so-called hot spot, an angular region of radius 20° in the direction (α = 148.4°, δ = 44.5°) (right ascension and declination), near Ursa Major [64,65]. The Auger collaboration has reported an indication of a dipole amplitude in right ascension for events with energies above 8 × $10^{18}$ eV, corresponding to a reconstructed dipole with (α, δ) = (95° ± 13°, −39° ± 13°). Auger has also found another, less significant, dipole amplitude at lower energies [66].

Fig. 18 Preliminary results on the energy spectra of TA [50] (left) and Auger [29] (right) in different declination bands.

The TA collaboration has measured the SD energy spectrum in two declination bands, δ > 26° and δ < 26°. For this analysis, TA events with zenith angles θ up to 55° have been selected.
In comparison to the standard SD spectrum calculation, which uses events with θ < 45°, this analysis increases the statistics and lowers the minimum declination of the events from about −6° to −16°. However, it requires a higher energy threshold of $10^{19}$ eV, which is above the ankle. It has been shown that the two TA spectrum calculations are fully consistent above $10^{19}$ eV [67]. The declination dependence of the TA spectrum using 6 years of data [50] is shown in the left panel of Fig. 18. Solid lines represent the fitted power laws with one break point. The definition of the so-called second break point energy (E_2) is equivalent to E_break of Table 4. The corresponding values are E_break = (69 ± 5) EeV for δ > 26° and E_break = (42 ± 6) EeV for δ < 26°. Even though the sensitivity of the analysis is low due to the limited statistics, it is interesting to note that the tension with the Auger data (which show the suppression at a significantly lower energy, see Sec. 4.2) is reduced in the lower declination band, the one that overlaps with the sky observed by Auger. The Auger collaboration has measured the energy spectrum in four declination bands, with an exposure of about 42500/4 km$^2$ sr yr each [29]. The results are presented in the right panel of Fig. 18. There is no significant declination dependence of the flux. It has been demonstrated that the small differences between the fluxes are consistent with the expectation from the dipole anisotropy [66]. The analysis is limited to declinations up to +24.8°, since it uses only the events detected by the 1500 m array with zenith angles < 60°. A systematic study of the difference between the spectra measured by the two experiments in the same declination band is of crucial importance, since it will help to clarify whether the differences between the spectra discussed in Sec. 4.2 are caused by the systematic uncertainties of the experiments or are due to an anisotropy signal. It is worth noting that, even if the spectra are compared in a declination band accessible to both experiments, such an analysis does not permit a definitive conclusion if the shapes of the directional exposures in the common declination band are not similar, because the spectra would then be affected by a potential anisotropy signal in different ways. As shown in Fig. 17, this is the case for the comparison of the TA spectrum with the Auger spectrum obtained from vertical events (θ < 60°): the two directional exposures have opposite trends, an increasing function of declination for TA and a decreasing one for Auger. At the time of writing, the Auger collaboration has not presented the declination dependence of the energy spectrum obtained using the inclined events (θ > 60°). We stress the importance of this measurement, since in the common declination band the directional exposure for the Auger inclined events has a shape similar to that of TA. Moreover, the comparison could then be extended to higher declinations, up to 44.8°, whereas the vertical-event Auger analysis reaches only 24.8°. At the 2016 Conference on Ultra-High Energy Cosmic Rays in Kyoto (Japan), the two collaborations presented a new and promising analysis method [67], proposed by the members of the joint working group, aimed at combining the results of the anisotropy searches of TA and Auger [68]. It consists of comparing the results of an alternative flux estimate, obtained by counting the numbers of events in the energy bins and weighting them by the inverse of the directional exposure (see the sketch below).
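The directional exposure of a fully efficient surface array with a zenith cut has a standard closed form (Sommers, Astropart. Phys. 14 (2001) 271), which, together with the exposure-weighted flux estimate just described, can be sketched as follows; the event declinations are toy values and all normalization constants are omitted.

```python
import numpy as np

def directional_exposure(dec_deg, lat_deg, theta_max_deg):
    """Relative geometrical directional exposure omega(delta) of a fully
    efficient surface array at latitude lat_deg with zenith cut theta_max."""
    a0 = np.radians(lat_deg)
    d = np.radians(np.asarray(dec_deg, dtype=float))
    xi = (np.cos(np.radians(theta_max_deg)) - np.sin(a0) * np.sin(d)) / (
        np.cos(a0) * np.cos(d))
    # maximum hour angle at which the source stays within the zenith cut
    am = np.arccos(np.clip(xi, -1.0, 1.0))
    return np.cos(a0) * np.cos(d) * np.sin(am) + am * np.sin(a0) * np.sin(d)

dec = np.linspace(-89.0, 89.0, 179)                # avoid the poles
w_ta = directional_exposure(dec, lat_deg=39.3, theta_max_deg=55.0)
w_auger = directional_exposure(dec, lat_deg=-35.2, theta_max_deg=60.0)

# Exposure-weighted flux estimate in one energy bin: each event enters
# with weight 1/omega(delta_i), removing the exposure-shape dependence
event_dec = np.array([5.0, 12.0, 21.0, -3.0])      # toy event declinations
flux_est = np.sum(1.0 / directional_exposure(event_dec, 39.3, 55.0))
```

The site latitudes and zenith cuts above are taken from the text; evaluating w_ta and w_auger over the common declination band reproduces the opposite trends visible in Fig. 17.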
As explained above, the resulting flux estimate does not depend on the shape of the directional exposure and therefore should be the same for TA and Auger. If a difference is found, it is to be ascribed to experimental effects, and it should be consistent with the systematic uncertainties of Sec. 3.2. The analysis presented in [67] is still preliminary; however, it has marked the road to be followed to understand the differences in the measurements of the energy spectrum at the highest energies. It is worth noting that the application of this method requires very good control of the systematic uncertainties. The alternative flux estimate is consistent with the standard flux calculation only if the arrival directions of the cosmic rays are distributed isotropically, and this can be established only if the systematic uncertainties on the event reconstruction and on the exposure calculations are well understood. The TA collaboration has shown that the two flux estimation methods are consistent in the declination band accessible to Auger with vertical events (δ < 24.8°) and in the full declination band −16° < δ < 90° (which includes the hot spot).

Conclusions and Outlook

The Telescope Array and the Pierre Auger Observatory are the two largest cosmic ray detectors built so far. Their large exposures have allowed observation of the suppression of the flux of cosmic rays at the highest energies with unprecedented statistics and precision. Both experiments combine the measurements of a surface array with fluorescence detector telescopes. The hybrid system allows a nearly calorimetric energy estimation, which is less sensitive to the large and unknown uncertainties of the hadronic interaction models, which are extrapolated well beyond the energies attainable in laboratory experiments. Having a precise estimate of the energy scale is of crucial importance for the measurement of the energy spectrum. In fact, the uncertainty in the energy estimation (∆E/E), when propagated to the energy spectrum J, is amplified by the power index γ with which the flux falls off with energy (∆J/J ≈ γ ∆E/E). TA and Auger measure cosmic rays in the northern and southern hemispheres, respectively. At energies below the suppression, the fluxes are expected to be the same because of the high level of isotropy in the arrival directions of the cosmic rays [10,11]. Good control of the systematic uncertainties of the energy scales of the two experiments is demonstrated by the remarkable agreement attained in the determination of the ankle at about 5 × $10^{18}$ eV. The energy of the ankle measured by TA is only 8% larger than the one measured by Auger (see Tables 4 and 5), which is roughly in agreement with the 20% difference in the flux normalization below the cut-off shown in Fig. 14. The difference in the ankle positions is fully consistent with the uncertainties in the energy scales quoted by the two experiments (21% and 14% for TA and Auger, respectively), and it is expected to be reduced if the two collaborations adopt the same models for the fluorescence yield and for the invisible energy correction (see Sec. 3.3). Despite the good agreement in the region of the ankle, and even at lower energies, the TA and Auger spectra differ significantly in the region of the suppression (see Fig. 14). As discussed in Sec. 4.2, this discrepancy can also be quantified by comparing the values of the E_1/2 parameter [6] that describes the position of the cut-off.
The values reported by the two collaborations differ by a factor of 2.5, which is well beyond the systematic uncertainties of the energy determination. Understanding the difference between the two spectra in the region of the cut-off is one of the major issues in the study of UHECRs. At these extreme energies, the deflections of the trajectories of the primaries in the galactic and extragalactic magnetic fields are minimized, allowing source identification, and therefore the spectra detected at Earth in the two different hemispheres could genuinely differ. The two collaborations have started studying their spectra in different declination bands. For TA, these studies are particularly relevant because of the hot spot near the Ursa Major constellation [64,65]. As shown in Sec. 5.2, these studies have great potential but are currently limited by statistics. Another important aspect of these studies is that the declination ranges of the exposures of the two experiments partially overlap. This offers the possibility of comparing the spectra in the same region of the sky [47,50,67]; any discrepancy found there would be indicative of an experimental effect due to the systematic uncertainties. One should note that the steepness of the spectrum in the energy region of the suppression amplifies the effects of the uncertainties in the energy scale and of the event bin-to-bin migration due to the finite energy resolution. These effects, in addition to the limited statistics, make the measurement of the flux at the energies of the suppression very challenging. The features of the energy spectrum at the highest energies are sensitive to the production and propagation of the UHECRs and have been used to constrain astrophysical models. As shown in Sec. 5.1, the TA spectrum is well fitted by a model in which the primaries are protons (a hypothesis consistent with the TA FD measurement of the mean X_max [53]); the ankle is then explained by proton interactions with the CMB via electron-positron pair production [6], and the cut-off by the GZK effect [4,5]. The Auger interpretation of its energy spectrum is more complicated. The inclusion of the trend toward heavier nuclei at the highest energies, inferred from the FD measurements [55], leads to a scenario in which the observed break of the spectrum is not due to the effects of propagation; in this model, the nuclei are accelerated by a rigidity-dependent mechanism with a cut-off that is directly observed in the spectrum measured at Earth. The studies presented by TA and Auger demonstrate that knowledge of the chemical composition plays an important role in the interpretation of the features of the energy spectrum. The ⟨X_max⟩ results of the two experiments are consistent [56], but the inferred mass compositions are different because the two collaborations have assumed different hadronic interaction models and used different Monte Carlo procedures. Extrapolation of the hadronic models beyond the energies attainable in accelerator physics is one of the major issues in understanding the air showers produced by UHECRs. The shower development is mainly influenced by particle production in the forward region, where accelerator data are available only for energies up to a few hundred GeV [72]. A big improvement in this field would be possible by building a fixed-target experiment using the beam of the LHC collider.
The two collaborations will be taking data in the coming years and are working on improving their detectors. The TA collaboration will quadruple the area of the SD array to approximately the current size of Auger, which is 3000 km². This extension, called TA×4 [73], will be realized by adding 500 surface detectors with a 2.08 km spacing. The aim is to improve the measurement of the cosmic rays beyond the suppression energy, as well as the sensitivity to the hot spot and other astrophysical sources. Also, two additional FD stations will be constructed to overlook the new SD array and to improve the composition studies at the highest energies. The Auger collaboration will upgrade the SD array by mounting scintillator detectors on top of each WCD station. The upgrade of the Pierre Auger Observatory is called AugerPrime [74]. The combined analysis of the signals of the two detectors will make it possible to extract the muonic shower component and to extend the composition sensitivity of the detector into the flux suppression region, where the FD measurements are limited by the duty cycle. This will improve the understanding of the origin of the cut-off and allow light primaries to be selected for anisotropy studies. Although the primary goal of TA and Auger is to study cosmic rays at the highest energies, an effort has been made with the TALE and Infill detectors to lower the minimum detectable shower energy. The TALE FD energy spectrum has made it possible to observe the low-energy ankle and the second knee. A similar result could be obtained with the HEAT telescopes of Auger. Building surface arrays with closer spacing is feasible for large collaborations such as TA and Auger, and it would extend the measurements down to the energies of the knee. The next decade will offer many opportunities to understand the origin of the UHECRs. TA×4 and Auger will view the full sky with a total collection area of 6000 km². The two collaborations are working together on combining their measurements. The declination band accessible to both experiments is instrumental in achieving a better understanding of the systematic uncertainties and the differences in the energy scales. This will allow us to measure the energy spectrum from the knee up to the suppression and beyond over the entire sky with unprecedented statistics and precision, which in turn will allow us to measure the energy spectra of cosmic rays in different declination bands or sky patches. So far, anisotropy studies on small (a few degrees) or intermediate angular scales have been carried out independently from the energy spectrum studies, although there were several studies of the energy dependence of anisotropies. We emphasize here that the energy spectrum, i.e., the number of cosmic ray particles per unit time and unit area arriving from a given direction in a given energy range, is by definition a function of the direction. The measurement of the full-sky energy spectrum by the future Auger and TA will make a crucial contribution to identifying the sources of ultra-high energy cosmic rays. We conclude this review with a compilation of recent experimental data on the energy spectrum, presented in Fig. 19.

Fig. 19 Energy spectra measured by IceCube [75], Yakutsk [76], KASCADE-Grande [77], HiRes I and HiRes II [78], Telescope Array [28], and Auger [29].
Artificial Neural Network Assisted Variable Step Size Incremental Conductance MPPT Method with Adaptive Scaling Factor

In conventional adaptive variable step size (VSS) maximum power point tracking (MPPT) algorithms, a scaling factor is utilized to determine the required perturbation step. However, the performance of the adaptive VSS MPPT algorithm is essentially decided by the choice of scaling factor. In this paper, a neural network assisted variable step size (VSS) incremental conductance (IncCond) MPPT method is proposed. The proposed method utilizes a neural network to obtain the optimal scaling factor for the current irradiance level for the VSS IncCond MPPT method. Only two operating points on the characteristic curve are needed to acquire the optimal scaling factor. Hence, expensive irradiance and temperature sensors are not required. By adopting a proper scaling factor, the performance of the conventional VSS IncCond method can be improved, especially under rapidly varying irradiance conditions. To validate the studied algorithm, a 400 W prototype circuit was built and experiments were carried out accordingly. Compared with the perturb and observe (P&O), α-P&O, golden section, and conventional VSS IncCond MPPT methods, the proposed method improves the tracking loss by 95.58%, 42.51%, 93.66%, and 66.14%, respectively, under the EN50530 testing condition.

Introduction

Solar power generation (SPG) has become one of the most valuable green energy sources due to its advantages of cleanliness, safety, non-pollution, inexhaustibility, and the absence of rotating components in assembly and operation. Moreover, solar power generation systems (SPGSs) are also widely accepted and used in remote areas because of their easy installation [1][2][3]. However, solar power generation systems are expensive, and changes in irradiance level and ambient temperature affect their power output. Hence, many scholars have studied how to increase generation efficiency so that these systems can output the maximum available power. Although numerous maximum power point tracking (MPPT) methods have been proposed in the literature, the traditional MPPT technologies, such as hill climbing, the perturb and observe method (P&O), and the incremental conductance method (IncCond), are still the most easily implemented and most widely used algorithms [4,5]. The main problem of the traditional MPPT methods is the tradeoff in the perturbation step size (PSS) during the MPPT process, because the PSS has considerable effects on the tracking speed and the steady-state oscillation of MPPT. If the PSS is too small, the tracking speed will be too slow. On the other hand, if the PSS is too large, the steady-state oscillation will be intensified. In order to solve this problem, researchers have proposed many new MPPT algorithms that can be applied under fast-varying solar irradiance conditions and that have rapid tracking speeds and low steady-state oscillation. These methods can be divided into three categories. In terms of output signals, common ANN outputs include the duty cycle corresponding to the MPP (D_MPP) [30], the current command corresponding to the MPP (I_MPP) [31,33], the voltage command corresponding to the MPP (V_MPP) [32,33,35], the power value of the maximum power point (P_MPP) [34,37], and the tracking direction command of the duty cycle [36]. In this study, an ANN-assisted MPPT method is proposed.
The proposed method utilizes a neural network to obtain the optimal scaling factor according to the current irradiance level, using the measured voltage and current values of two consecutive perturbation points as the ANN's input. To the best of the authors' knowledge, the proposed ANN architecture has not previously appeared in the literature. Compared with the adaptive scaling factor method proposed in [32], the proposed method does not require intensive simulations to obtain the optimal scaling factor. It also avoids the complicated calculation required when applying state estimation methods to determine the irradiance level. The proposed method has the advantages of easy implementation, simple calculation, and optimal performance under fast-varying solar irradiance conditions. The main novelties of the proposed MPPT technique are listed below:
• An optimal scaling factor according to the operating conditions is applied in the proposed VSS IncCond MPPT technique to obtain a fast tracking response and reduced steady-state oscillations.
• It requires fewer voltage samples than other techniques to identify the optimal scaling factor and hence enhances the tracking ability and dynamic efficiency.
• High-cost irradiance and temperature sensors are not needed, in contrast to other ANN-based MPPT methods.
• The proposed method can be easily integrated into the conventional VSS IncCond MPPT method, which improves its tracking performance.
To verify the effectiveness of the proposed MPPT method, it is compared with several MPPT methods suitable for fast-varying environments proposed in other literature, including the VSS IncCond, α-P&O [33], and golden section [34] MPPT methods. According to the simulation and experimental results, the proposed method has the best tracking accuracy, and both its tracking time and tracking loss are second only to the golden section method under uniform insolation conditions. Under varied insolation conditions, the tracking time, tracking accuracy, and overall tracking loss of the proposed method are the best among all compared methods. Under fast-varying irradiation conditions, the proposed method is far superior to the other methods. In summary, judging from its overall performance, the proposed method performs well across different operating conditions.

Mathematical Modeling and Conventional Variable Step Size Incremental Conductance MPPT Algorithm

In this section, the mathematical modeling of the solar cell is presented first. Additionally, a description of the conventional VSS IncCond MPPT method and some discussion of the effect of the scaling factor on the MPPT performance are provided.

Solar Cell Characteristics

This study uses the common single-diode model to simulate the solar cell characteristics; Figure 1 shows the solar cell model used in this study. From Figure 1, the relation between output current and voltage can be expressed as

I = I_g(S, T) − I_s(T)·[exp(q(V + I·R_S)/(N·A·K·T)) − 1] − (V + I·R_S)/R_P,  (1)

where q, K, T, and N are the electron charge (1.602 × 10^−19 C), the Boltzmann constant (1.38065 × 10^−23 J/K), the panel temperature in Kelvin, and the number of cells connected in series, respectively. I_g(S, T) is the photoelectric current for a given solar irradiance level S (W/m²) and panel temperature T, and I_s(T) is the reverse saturation current at a given panel temperature. A, R_S, and R_P are the diode ideality factor, equivalent series resistance, and equivalent shunt resistance, respectively. I_g(S, T) and I_s(T) can further be expressed by

I_g(S, T) = [I_SC + α_Isc(T − T_0)]·S/1000,  (2)
I_s(T) = C_0·T³·exp(−q·E_g/(A·K·T)),  (3)

where α_Isc is the temperature coefficient of I_SC, I_SC stands for the short-circuit current of the PV module, T_0 is the cell temperature under standard test conditions (STC), C_0 is the temperature coefficient, and E_g is the material bandgap. After formulating the relation between the solar panel output voltage and current, the output power can be acquired by multiplying the output voltage by the current. Figure 2 illustrates the characteristic curves of power versus voltage, and their absolute slope values, under different irradiance levels for the solar panels used in this study. In Figure 2, the slope of the P-V curve is defined as the derivative of power with respect to voltage (dP/dV).
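As a sanity check on the model above, the following sketch solves the implicit single-diode equation numerically and locates the MPP on the resulting P-V curve. All parameter values are illustrative placeholders, not the panel parameters of Tables 1 and 2.

```python
import math

# Placeholder single-diode parameters (illustrative, not the panel of Tables 1-2)
q, K = 1.602e-19, 1.38065e-23   # electron charge [C], Boltzmann constant [J/K]
T, N, A = 298.15, 36, 1.3       # panel temperature [K], series cells, ideality
Ig, Is = 8.0, 1e-9              # photoelectric current [A], saturation current [A]
Rs, Rp = 0.2, 300.0             # series / shunt resistance [ohm]

def panel_current(V):
    """Solve I = Ig - Is*(exp(q(V+I*Rs)/(N*A*K*T)) - 1) - (V+I*Rs)/Rp by bisection."""
    def f(I):
        Vd = V + I * Rs
        return Ig - Is * (math.exp(q * Vd / (N * A * K * T)) - 1.0) - Vd / Rp - I
    lo, hi = -1.0, Ig + 1.0          # f(lo) > 0 > f(hi) since f decreases with I
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Trace the P-V curve below Voc and locate the MPP numerically
curve = [(v / 10, v / 10 * panel_current(v / 10)) for v in range(0, 260)]
v_mpp, p_mpp = max(curve, key=lambda vp: vp[1])
print(f"MPP ~ {p_mpp:.1f} W at {v_mpp:.1f} V")
```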
Variable Step Size Incremental Conductance MPPT Algorithm

The conventional IncCond MPPT method uses a fixed PSS to track the MPP. Although a larger PSS yields a faster tracking speed, the oscillation generated around the MPP decreases the tracking accuracy and increases the power loss. Conversely, although a smaller PSS achieves higher tracking accuracy and lower power loss, it leads to a slower tracking speed. Therefore, the tradeoff between tracking speed, tracking accuracy, and power loss is an essential problem in the conventional fixed-PSS IncCond MPPT method. Hence, a VSS IncCond MPPT method was proposed in [9] to alleviate this problem. In the VSS IncCond MPPT method, the PSS is calculated through Equation (4): it is generated by multiplying a scaling factor by the P-V curve slope value (dP/dV) at the operating point (OP). As Figure 2 shows, the P-V curve slope is larger when the OP is far from the MPP; thus, a larger PSS can be taken to approach the MPP. When the OP approaches the MPP, the slope becomes smaller, and when the OP coincides with the MPP, the slope becomes 0. In this way, the steady-state oscillation can be minimized, solving the tradeoff problem between tracking speed and tracking accuracy. In Equation (4), the scaling factor M is usually a fixed value, and it can be further designed in accordance with Equation (5), in which |dP/dV|_∆Vmax represents the maximum slope when the PSS is ∆V_max under STC. However, as Figure 2 shows, the maxima of the P-V curve slopes differ across irradiance levels; therefore, to reach the same ∆V_max, the required optimal M value varies with the irradiance level as well. Under high irradiance levels, a smaller M should be selected because of the higher slope; under low irradiance levels, a larger M should be picked. In other words, a large value of M is beneficial for the tracking speed but results in substantial oscillations around the MPP, whereas a small value of M makes the MPP tracking slow. Because the required scaling factor differs across conditions, the universality of the method is limited. Therefore, to ensure that the VSS IncCond MPPT method has the optimal tracking speed, tracking accuracy, and power loss under each irradiance level, this study proposes an improved VSS IncCond MPPT method that selects the most appropriate scaling factor for the current operating environment. This study uses an ANN to calculate the optimal scaling factor; the design and implementation of the ANN are explained in the next section.
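The following sketch illustrates the variable-step update of Equation (4) in isolation; the sign logic follows the standard IncCond criterion, and the default scaling factor is a placeholder (the proposed method would instead query the trained ANN for it).

```python
def vss_inccond_step(V, I, V_prev, I_prev, M=0.5):
    """One VSS IncCond iteration: return the next voltage reference.

    M is the scaling factor; in the conventional method it is fixed,
    while the proposed method replaces it with the ANN-estimated OSF.
    """
    dV = V - V_prev
    dP = V * I - V_prev * I_prev
    if dV == 0:
        # No voltage change: fall back to a small fixed perturbation
        return V + (0.1 if I - I_prev > 0 else -0.1)
    slope = dP / dV                        # local dP/dV estimate at the OP
    direction = 1 if slope > 0 else -1     # climb toward dP/dV = 0 (the MPP)
    return V + direction * M * abs(slope)  # Equation (4): PSS = M * |dP/dV|
```

Far from the MPP the slope, and hence the step, is large; near the MPP the step shrinks toward zero, which is exactly the behavior described above.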
Description of the Proposed Method

The technique proposed in this study first uses an ANN to estimate the optimal scaling factor (OSF) for the current irradiance level, and then uses the VSS IncCond MPPT method to track the maximum power point. This section first introduces the OSF estimation method implemented by the ANN and then the way the VSS IncCond MPPT is conducted using the estimated OSF.

Neural Network Design and Implementation

A neural network builds a mathematical model by imitating the data processing of biological neural networks. A neural network can simulate complicated system behaviors that are not easy to model, without requiring a precise mathematical model. A typical artificial neural network model is composed of several neurons, as Figure 3 illustrates. The relation between a neuron's inputs and output is shown in Equation (6): the inputs are multiplied by weights, summed, and then converted through an activation function,

Y = f(∑_{i=1}^{n} W_i X_i + b),  (6)

where X, W, and b are the input, weight, and bias, ∑ is the summation, and f is the activation function, whose objective is to map the summed value to the corresponding output; Y is the output, i.e., the user's expected result, and i ranges over [1, n], where n is the size of the input data. Training the neural network means adjusting the weights and biases so that the network meets the expected output. During training, the ANN usually generates initial weight values randomly between +1 and −1. The weights function similarly to synapses: if the weights are larger, the connected neurons are activated more easily and the impact on the network becomes more evident; conversely, smaller weights have a smaller impact on the neural network. This study uses a back-propagation neural network, whose architecture can be divided into an input layer, hidden layers, and an output layer. The input layer is composed of one or more neurons; the number of input neurons is related to the problem to be solved. The hidden layer consists of one or more layers of neurons between the input layer and the output layer; its purpose is to represent the nonlinear relation between input and output. The hidden layer configuration should be decided based on the complexity of the problem; however, there is currently no standard method to determine it, and a good setting is usually found through multiple tests. The output layer is the ANN's output; likewise, the number of output neurons is related to the problem. Figure 4 shows a schematic diagram of the proposed OSF estimation neural network architecture, which is explained as follows. This study designed a four-layer neural network comprising one input layer, two hidden layers, and one output layer. The neural network input parameters used in this study are two solar cell operating voltages, V(t) and V(t) + ∆V, and the power change ∆P measured between these two consecutive perturbations, where ∆P is defined as P(V(t) + ∆V) − P(V(t)); the output parameter is the OSF. Each of the two hidden layers contains 10 neurons, and the activation function is Tansig, as shown in Equation (7); the Levenberg-Marquardt method is used as the training method in this study.

y(n) = (e^n − e^−n)/(e^n + e^−n) = Tansig(n)  (7)
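The sketch below shows a forward pass through the described 3-10-10-1 architecture. The weights are random placeholders standing in for the Levenberg-Marquardt-trained values, and a linear output neuron is assumed, since the output activation is not stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder weights; in practice these come from Levenberg-Marquardt training
W1, b1 = rng.uniform(-1, 1, (10, 3)), rng.uniform(-1, 1, 10)
W2, b2 = rng.uniform(-1, 1, (10, 10)), rng.uniform(-1, 1, 10)
W3, b3 = rng.uniform(-1, 1, (1, 10)), rng.uniform(-1, 1, 1)

def tansig(n):
    """Equation (7): tansig(n) = (e^n - e^-n)/(e^n + e^-n), i.e. tanh."""
    return np.tanh(n)

def estimate_osf(v_t, v_t_plus_dv, d_power):
    """Forward pass: inputs V(t), V(t)+dV, dP -> optimal scaling factor."""
    x = np.array([v_t, v_t_plus_dv, d_power])
    h1 = tansig(W1 @ x + b1)            # first hidden layer, 10 neurons
    h2 = tansig(W2 @ h1 + b2)           # second hidden layer, 10 neurons
    return float((W3 @ h2 + b3)[0])     # linear output neuron (assumed)

print(estimate_osf(17.0, 17.1, 0.35))   # placeholder operating-point values
```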
The accuracy of the neural network is directly related to the completeness of the training set. In the following, the method of generating the training set of the proposed neural network is described in detail. First, the irradiance level is set to 100 W/m² and the voltage is initialized to 0 V. Then, at every interval of ∆V, the voltage values of the previous and current points, as well as the power difference between these two points, are calculated as the neural network input; Matlab is then used to simulate the IncCond method to find the OSF value under this irradiance level, and the obtained OSF is utilized as the output parameter of the ANN. When the operating voltage reaches V_OC for a given irradiance level, 100 W/m² is added to the irradiance level and the voltage is reset to the initial value of 0 V. This procedure is repeated until the irradiance level reaches 1000 W/m². This study uses 0.1 V for ∆V; therefore, a total of 16,691 training samples were obtained. Additionally, this study acquired a total of 2630 verification samples under simulated irradiance levels of 210 W/m², 510 W/m², and 810 W/m².
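A compact sketch of this sweep is given below. The P-V model, the V_OC model, and the OSF search are all toy placeholders (the paper uses the single-diode model and intensive IncCond simulations in Matlab); only the loop structure mirrors the procedure described above.

```python
import math

DV = 0.1  # perturbation interval in volts, as in the text

def voc(s):
    """Toy open-circuit voltage vs. irradiance (placeholder)."""
    return 22.0 + math.log(s / 1000.0)

def power(v, s):
    """Toy P-V curve peaking near 0.8*Voc (placeholder for the diode model)."""
    vm = 0.8 * voc(s)
    return max(0.0, 0.4 * s * (1.0 - ((v - vm) / vm) ** 2))

def find_osf(s):
    """Stub: the paper finds this via intensive IncCond simulations."""
    return 10.0 / math.sqrt(s)  # toy trend: smaller M at higher irradiance

samples = []
for s in range(100, 1001, 100):          # irradiance sweep, 100 W/m^2 steps
    osf, v = find_osf(s), 0.0
    while v + DV <= voc(s):              # voltage sweep from 0 V up to Voc
        dp = power(v + DV, s) - power(v, s)
        samples.append(((v, v + DV, dp), osf))   # ANN input -> OSF target
        v += DV
print(len(samples), "training samples")  # the actual study reports 16,691
```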
Flow Chart of Proposed ANN-Assisted VSS IncCond MPPT Algorithm

The conventional VSS IncCond MPPT algorithm has the advantages of high tracking speed, high accuracy, low tracking loss, etc. However, the fixed scaling factor makes its performance decrease significantly when the irradiance level changes; therefore, this study proposes an ANN-assisted VSS IncCond MPPT algorithm. Figure 5 is the flow chart of the proposed method, and the dashed part of the figure is the conventional VSS IncCond MPPT method. As Figure 5 implies, the proposed method first generates the two OPs, V(t) and V(t) + ∆V, and records their voltage and current information; next, V(t), V(t) + ∆V, and the power change ∆P are input to the trained ANN to obtain the OSF for the current irradiance level; the VSS IncCond MPPT method is then conducted using this scaling factor. When the irradiance level changes, the system again visits the two OPs to estimate the new OSF value. This study selects the OSF based on the trained ANN, which eases the scaling factor tradeoff problem. The proposed method thus offers high tracking speed and no steady-state oscillations under all circumstances.

Performance Index (PI)

To fairly evaluate and compare the test results of the different tracking methods, a performance index (PI) is defined in this study. Figure 6 shows a typical MPP tracking response. The criterion defined for each measured item illustrated in Figure 6 is as follows: (1) Tracking time (Tr): the time required for the tracked power to increase to 95% of the maximum power; (2) Steady-state tracking accuracy (Acc): the steady-state average power divided by the power of the specific MPP; (3) Tracking energy loss (Loss): the shaded area shown in Figure 6, obtained by recording the power for a pre-set duration and taking the absolute value of the MPP value minus the tracked power value. Figure 7 shows the block diagram of the proposed MPPT system; this study uses MATLAB/SIMULINK to establish a simulation platform. To verify the superiority of the proposed method, its tracking performance is compared with four methods proposed in the literature: the conventional perturb and observe (P&O) method, the conventional VSS IncCond approach [18], the alpha-perturb and observe (α-P&O) technique [38], and the golden section (GS) method [39]. In P&O, a small voltage perturbation changes the solar panel's power. If the power variation is positive, the voltage perturbation keeps the same direction; however, if the power difference is negative, the perturbation direction is reversed. The α-P&O method is conceptually similar to the conventional P&O technique in that it continuously perturbs around the MPP. Unlike conventional P&O, the perturbation step of α-P&O gradually decreases until it reaches the minimum step allowed by the system. The golden section method converges to the MPP by interval shrinking. In the beginning, two points are chosen from the search space with known boundaries and evaluated; a new point is then generated accordingly. At each iteration, based on the evaluation results, the algorithm obtains a new, narrowed interval bounded by the new point and one of the previous points. The algorithm keeps iterating (shrinking the interval) until the interval becomes small enough.
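For reference, a minimal golden-section MPP search of the kind described above is sketched below; it assumes a unimodal P-V curve, and the toy power function and voltage bounds are placeholders rather than the test panel's values.

```python
import math

PHI = (math.sqrt(5) - 1) / 2   # golden ratio conjugate, ~0.618

def golden_section_mpp(power, v_lo, v_hi, tol=0.05):
    """Shrink [v_lo, v_hi] around the maximum of a unimodal P-V curve."""
    a, b = v_lo, v_hi
    c, d = b - PHI * (b - a), a + PHI * (b - a)   # interior probe points
    pc, pd = power(c), power(d)
    while b - a > tol:
        if pc > pd:                 # maximum lies in [a, d]: drop (d, b]
            b, d, pd = d, c, pc
            c = b - PHI * (b - a)
            pc = power(c)
        else:                       # maximum lies in [c, b]: drop [a, c)
            a, c, pc = c, d, pd
            d = a + PHI * (b - a)
            pd = power(d)
    return 0.5 * (a + b)

# Toy unimodal P-V curve peaking at 17.2 V (placeholder panel)
print(golden_section_mpp(lambda v: 400.0 - (v - 17.2) ** 2, 0.0, 22.0))
```

This interval-shrinking structure is also why the method must restart whenever the irradiance changes: the cached evaluations no longer describe the new curve.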
The parameters used for the simulations are shown in Tables 1 and 2; the simulation results are shown in Figure 8 and Table 3. Figure 8 shows the tracking waveforms of each compared method over a total simulation time of 1 s under an irradiance level of 500 W/m²; Table 3 lists the tracking performance of each method over a total simulation time of 1 s under irradiance levels of 100-1000 W/m² (in steps of 100 W/m², ten levels in total). In Table 3, the light red cells indicate the methods with the best PI under a single test condition; the methods with the worst PI are marked in light green. As Figure 8 and Table 3 show, the P&O method uses a larger PSS to attain a shorter dynamic tracking time, which results in a steady-state oscillation, keeping its average tracking accuracy below 98% and its tracking loss above 30 W. As the conventional VSS IncCond method uses a constant M value that is suitable only for the irradiance level of 1000 W/m², it has a slower tracking speed under other irradiance levels; moreover, its tracking accuracy is even below 60% under 100 W/m² because it did not reach a steady state by the end of the test. The performances of the other three methods are close to each other on the three performance indices. GS has the best average tracking time; the proposed method and α-P&O are second, but the differences are not pronounced. The proposed method has the best average tracking accuracy at 99.91%, followed by GS at 99.26% and α-P&O at 98.97%. Lastly, as GS reaches steady state more rapidly, it has the best average tracking loss at 14.45 W, followed by the proposed method at 21.97 W and α-P&O at 27.96 W. Regarding Table 3, the proposed method has the best tracking accuracy under each irradiance level. As for tracking speed, the proposed method and GS alternate in the lead. Moreover, the proposed method is inferior to GS in terms of tracking loss. In this study, the optimal scaling factors for the different irradiance levels are obtained via intensive simulations. To fairly evaluate and compare the results obtained with different scaling factor values, the tracking energy loss (E_loss) is utilized as the performance index, because it simultaneously takes the tracking time and the steady-state tracking accuracy into account. Figure 6 shows a typical tracking response of one MPP tracking curve under a given irradiance level and panel temperature; the tracking energy loss is defined as the area between the exact MPP and the power tracking curve within a certain time interval, as shown in the shaded part of Figure 6. To take both transient and steady-state responses into account, the total simulation time was set to 10 s. This study simulated 10 possible operating conditions (10 irradiance levels, 100-1000 W/m² with an interval of 100 W/m², at a constant panel temperature of 25 °C). The range of tested scaling factors is 0.1-10.0; with an interval of 0.01, there are 990 possible scaling factor values, and the scaling factor with the best tracking energy loss under each of the 10 operating conditions was recorded. The parameters used for these simulations are as detailed in Tables 1 and 2, and the obtained optimal scaling factor values are listed in Table 3. Figure 8 and Table 3 summarize the test results under the uniform insolation condition; however, a typical SPGS is often subjected to irradiance changes. Therefore, this study next considers the irradiance change shown in Figure 9, conducting MPP tracking with the five MPPT methods mentioned previously and plotting the acquired tracking waveforms in the same figure for comparison. The calculated tracking time, tracking accuracy, and tracking loss of each method in each section are listed in Table 4. In Figure 9, the irradiance changes as follows: 515 W/m² from 0 to 1 s; 1000 W/m² from 1 to 2 s; 325 W/m² from 2 to 3 s. As Figure 9 and Table 4 show, the P&O method's steady-state oscillation makes it perform the worst in tracking accuracy and tracking loss. Similarly, as VSS does not utilize the optimal scaling factor for each irradiance level, a longer tracking time is observed in the first step, which results in higher tracking loss. It is notable that when the irradiance level varies, α-P&O needs to adjust the PSS back to the maximum, and GS needs to restart its tracking procedure; these facts result in slower tracking speed and higher tracking loss. In particular, as the variation of the voltage command is large when GS restarts its tracking process, its average tracking loss increases significantly, exceeded only by P&O. As Table 4 indicates, in comparison with P&O, the proposed method improves the tracking speed by 78.84%, the tracking accuracy by 1.94%, and the tracking loss by 42.98% on average under the tested scenario. In comparison with α-P&O, it improves the tracking speed by 78.84%, the tracking accuracy by 0.5%, and the tracking loss by 3.35%. In contrast with GS, it improves the tracking speed by 56%, the tracking accuracy by 0.92%, and the tracking loss by 42.5% on average. In comparison with VSS, it improves the tracking speed by 42.1%, the tracking accuracy by 0.11%, and the tracking loss by 37.69% on average. Lastly, this study examines the tracking performance under fast irradiance changes. Figure 10 shows the simulated results of the proposed method, and Table 5 indicates the total loss. In Figure 10, the simulated scenario is as follows: the irradiance level is 300 W/m² from 0 to 3 s; it increases by 100 W/m² every 1 s from 3 to 9 s until it reaches 1000 W/m²; it is fixed at 1000 W/m² from 9 to 19 s; it decreases by 100 W/m² every 1 s down to 300 W/m² from 19 to 25 s; and it is set to 300 W/m² from 25 to 35 s. This is the testing condition with the fastest irradiance change in the EN 50530:2010 standard [35].
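Encoded directly from the description above, the following helper reproduces the stepped test profile; note that the EN 50530 standard itself specifies ramped irradiance slopes, so this stepwise form is only a sketch of the sequence as described in the text.

```python
import math

def en50530_profile(t: float) -> float:
    """Irradiance in W/m^2 at time t (s) for the stepped test sequence above."""
    if t < 3:
        return 300.0
    if t < 9:
        return 300.0 + 100.0 * math.floor(t - 2)    # +100 W/m^2 each second
    if t < 19:
        return 1000.0
    if t < 25:
        return 1000.0 - 100.0 * math.floor(t - 18)  # -100 W/m^2 each second
    return 300.0

print([en50530_profile(t) for t in (0, 3, 5, 10, 20, 30)])
```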
As Table 5 shows, the steady-state oscillation of P&O makes its tracking loss greater than 331 W, the worst of all; GS needs to restart its tracking procedure upon each irradiance change, so its tracking loss reaches 247 W. These two values are much higher than those of the other three compared methods. As Table 5 implies, in terms of tracking loss, compared with P&O, α-P&O, GS, and VSS, the proposed method improves by 95.46%, 41.73%, 93.92%, and 67.56%, respectively.

Experimental Result

To further verify the correctness of the proposed method, this study also carries out experiments on the five MPPT methods mentioned above. A 400 W prototype circuit was implemented, with which the experiments were carried out. Figure 11 shows a photo of the proposed system. In Figure 11, a low-cost DSP TMS320F280049 from Texas Instruments (Dallas, TX, USA) is used to realize the MPPT algorithms mentioned above. The experiments were performed with an AMETEK TerraSAS DCS80-15 solar array simulator (San Diego, CA, USA) in SAS mode as the power source. The parameters used in the experiments are identical to those used in the simulations. Figure 12 and Table 6 illustrate the tracking performance of the five MPPT methods under the uniform insolation condition; Figure 13 and Table 7 show the tracking performance of the five MPPT methods under the varying irradiation condition; Figure 14 shows the tracking waveform of the proposed method under the EN50530 testing condition, and Table 8 summarizes the tracking performance of each method under the EN50530 testing condition. As Figures 12-14 show, the obtained experimental results are similar to those acquired from the simulations, confirming the accuracy of the simulated results. As Table 6 shows, P&O again has the worst average tracking accuracy and tracking loss under uniform insolation conditions. Since the conventional VSS IncCond MPPT method does not adopt the OSF according to the irradiance level, it has a slow tracking speed and poor tracking accuracy at low irradiance levels. The performances of the other three methods are close to each other on the three performance indices; likewise, GS has the best average tracking time and average tracking loss, while the proposed method outperforms the others in average tracking accuracy. As Table 7 indicates, in the tested irradiance-change environments, in comparison with P&O, the proposed method improves the tracking speed by 78.84%, the tracking accuracy by 1.58%, and the tracking loss by 45.59% on average; compared with α-P&O, it enhances the tracking speed by 78.84%, the tracking accuracy by 0.9%, and the tracking loss by 6.67% on average; in contrast with GS, it improves the tracking speed by 56%, the tracking accuracy by 0.53%, and the tracking loss by 45.3% on average; in comparison with VSS, it enhances the tracking speed by 42.1%, the tracking accuracy by 0.01%, and the tracking loss by 34.49% on average. As Table 8 indicates, in terms of tracking loss under the EN50530 testing condition, compared with P&O, α-P&O, GS, and VSS, the proposed method improves by 95.58%, 42.51%, 93.66%, and 66.14%, respectively. Table 9 shows the rankings of the average performance indices from the simulation and experimental data of the five methods compared in this study under the three test scenarios. From Table 9, it can be observed that, except for the tracking accuracy ranking of the GS method dropping slightly (because the simulation platform adopts a floating-point format while the experimental platform adopts a fixed-point format), the rankings of the data acquired in the simulations and experiments are the same for all other methods on all performance indices.
From the comprehensive ranking of the data in Table 9, it can be seen that, for the uniform insolation condition, since the GS method adopts a segmented search, it can rapidly track to the vicinity of the MPP with a larger PSS; therefore, its tracking speed and tracking loss are the best among the five methods. In contrast, conventional VSS IncCond MPPT ranks last on all of the performance indices, since it cannot adopt the optimal scaling factor according to the current irradiance level, leading to a long tracking time under low-irradiance circumstances. The method proposed in this study ranks first in tracking accuracy and second in both tracking speed and tracking loss. Therefore, the GS method and the proposed method can be regarded as the most applicable of the five compared methods under uniform insolation conditions, whereas conventional VSS IncCond MPPT is the least applicable. In terms of the varying irradiation condition and the EN50530 test condition, since GS must restart its tracking mechanism whenever the irradiance varies, its tracking accuracy and tracking loss become worse, and this phenomenon becomes more pronounced when the irradiance varies more frequently. The conventional P&O method has the worst tracking accuracy and tracking loss among the five methods, since it adopts a larger PSS, which leads to an obvious steady-state oscillation. By contrast, the proposed method ranks first in each performance index, since the VSS mechanism lets it converge effectively to the MPP and it employs the ANN to calculate the optimal scaling factor according to the irradiance level. In summary, the proposed method can be regarded as the most suitable of the five methods under varying irradiance circumstances, whereas the GS and conventional P&O methods are less suitable.

Conclusions

A neural network assisted VSS IncCond MPPT method with an adaptive scaling factor for rapidly changing irradiance conditions is proposed in this study. Using any two operating points on the characteristic curves, an optimal scaling factor can be acquired with the proposed neural network to enhance the performance of the conventional VSS IncCond MPPT technique. Experimental results show that the proposed method performs well both under the varying irradiance condition and under the EN50530 testing condition. Compared with the P&O, α-P&O, golden section, and conventional VSS IncCond MPPT methods, the averaged tracking time/total tracking loss can be improved by 78.84%/45.59%, 78.84%/6.67%, 56.00%/45.30%, and 42.10%/39.49% under the tested varying irradiance condition. In addition, the proposed method achieves the highest averaged tracking accuracy. Moreover, the tracking loss can be reduced by 95.58%, 42.51%, 93.66%, and 66.14% under the EN50530 testing condition. The significant contribution of this study is that fast and accurate tracking can be accomplished without the need for extra, expensive irradiance and temperature sensors. The proposed method is uncomplicated and can be integrated effortlessly into conventional VSS IncCond MPPT algorithms, making the developed MPPT solution more applicable to solar generation system applications.
Anatomical considerations for appropriate mini-plate positioning in open-door laminoplasty to avoid plate impingement and screw facet violation

This study aimed to describe a safe zone for mini-plate positioning that can avoid instrument-related complications in laminoplasty. Fifty-one patients who underwent laminoplasty and were followed up for at least 1 year were retrospectively reviewed. The posterior surface length and inferior pole angle of the lateral mass were measured at each level using computed tomography. The safe zone was defined based on these measurements. Incidences of screw facet violation and plate impingement were recorded. Patient-reported outcome measures were compared between the appropriate position (AP) and inappropriate position (IP) groups. Among the 40 patients included, 15 (37.5%) had inappropriate plate positioning, causing screw facet violation or plate impingement, which more commonly occurred at distal (C5, C6) and proximal (C3, C4) levels, respectively. The lateral mass posterior surface length was shorter at the proximal levels, and the inferior pole angle of the lateral mass was smaller at the distal levels, signifying that the lateral mass becomes thin and long at the distal levels. Patient-reported outcome measures were not significantly different between the two groups. However, cervical range of motion at the final follow-up was significantly less in the IP group (p = 0.01). The suggested safe zone demonstrates that inserting the mini-plate with plate-to-lateral mass inferior pole distances of 4-5 mm and 5-6 mm at the C3-C5 and C6-C7 levels, respectively, would avoid instrument-related complications. The risk of plate impingement was higher at the proximal levels, whereas the risk of screw facet violation was higher at the distal levels in open-door cervical laminoplasty. These risks coincide with the anatomical differences at each level. Despite inappropriate positioning of the mini-plate, clinical outcomes were not adversely affected.

Adverse outcomes after laminoplasty may also be caused by instrument positioning [10][11][12]. It has also been reported that facet joint violation by mini-screws can decrease cervical range of motion (ROM) [9]. While previous studies have described optimal insertion points and trajectories for lateral mass screws or pedicle screws, not many have demonstrated an optimal method for mini-plate fixation in laminoplasty [3,8,13,14]. Therefore, this study was conducted to (1) describe the incidence of screw facet joint violation and plate impingement in open-door laminoplasty using mini-plate fixation, (2) define a safe zone to avoid instrument-related complications, and (3) identify whether inappropriately positioned mini-plates adversely affect clinical outcomes.

Materials and methods

Study design and participants. This was a retrospective cohort study approved by the institutional review board of our institute (Dongguk University Ilsan Hospital Institutional Review Board 2021-09-020). All methods were carried out in accordance with relevant guidelines and regulations. Informed consent was waived owing to the study's retrospective nature. The study was conducted in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement for cohort studies.
The medical records of 51 patients who underwent laminoplasty for cervical myelopathy caused by spondylosis or ossification of the posterior longitudinal ligament between September 2012 and March 2019 were retrospectively reviewed. Patients (1) who underwent surgery for trauma, infection, or tumor; (2) who lacked radiographic or clinical data; and (3) who had a follow-up period of less than 1 year were excluded. Patients with screw facet joint violation or possible plate impingement with the cranial lateral mass observed on postoperative computed tomography (CT) were classified into the inappropriate position group (IP group). Patients with no identifiable mini-plate-related complications were defined as the appropriate position group (AP group).

Surgical technique. Patients were placed in a prone position with their heads positioned on a Mayfield headrest. A midline posterior approach was used to expose the spinous process and lamina of the indicated levels. Dissection was performed until the lamina-lateral mass junction was exposed. The spinous processes were resected at the base. Open-side and hinge-side troughs were made at the lamina-lateral mass junction. After the lamina was carefully opened to avoid complete fracture of the hinge side, a mini-plate (Centerpiece, Medtronic, Minneapolis, MN, USA) was used to maintain the lamina opening. We attempted to position the mini-plate at the center of the posterior surface of the lateral mass. Two 5-mm screws were used to fix the plate to the lamina, and two 5-mm screws were inserted into the lateral mass to anchor the plate. Screws into the lateral mass were inserted perpendicular to the posterior surface of the lateral mass. For C7, partial laminectomy rather than open-door laminoplasty was performed to preserve the muscle insertions on the spinous process [15].

Variables and radiographic measurements. The neck pain visual analog scale (VAS), arm pain VAS, and neck disability index (NDI) were recorded preoperatively and at each postoperative follow-up. Three-dimensional CT scans were taken preoperatively for surgical planning and at 2 days postoperatively to evaluate the adequacy of decompression and the instrument position [16]. Possible plate impingement was diagnosed when the cranial edge of the mini-plate reached the caudal edge of the lateral mass of the adjacent proximal level (Fig. 1A). Screw facet violation was defined as the screw penetrating the ventral surface of the lateral mass on axial or sagittal reconstructed CT images (Fig. 1B). The posterior surface length of the lateral mass was measured as the distance between the cranial and caudal edges of the lateral mass on sagittal CT images. A sagittal image showing the pedicle-lateral mass junction was selected because the mini-plate is usually fixed to the lateral mass at this location. Within the same sagittal image, the inferior pole angle of the lateral mass was measured as the angle between the line drawn through the posterior surface of the lateral mass and the line drawn through the ventral-inferior border of the lateral mass that composes the facet joint. Measurements were performed bilaterally, and the mean value was used for evaluation (Fig. 2). Cervical lordosis was measured in the lateral view in the neutral position based on the angle between the lines passing through the lower margins of C2 and C6 or C7. The sagittal vertical axis (SVA) of C2-C7 was defined as the horizontal distance between the vertical line from the center of C2 and the posterior-superior aspect of C7.
Cervical ROM was measured as the change in the angle between the lower margin of C2 and the lower margin of C7 on dynamic (flexion and extension) radiographs.

Definition of safe zone. A safe zone that can avoid both screw facet violation and plate impingement at the proximal adjacent level was defined. To define the safe zone, we first measured the size of the mini-plate (Centerpiece, Medtronic, Minneapolis, MN, USA) used for all patients included in the study. The distances between the edge of the plate and the center of the screw fixation hole, between the centers of adjacent screw fixation holes, and between the cranial and caudal edges of the plate were 2 mm, 4 mm, and 8 mm, respectively (Fig. 3A). Second, the minimum distance from the inferior pole of the lateral mass needed to avoid screw facet violation was calculated. The minimum distance for screw placement (x) was calculated from the inferior pole angle (a) and the length of the screw using the equation x = screw length / tan(a), since tan(a) = screw length / x (Fig. 2). Finally, the safe zone, avoiding both screw facet violation and plate impingement, was defined as the allowable range of distances between the caudal edge of the mini-plate and the inferior pole of the lateral mass, under the assumption that the screw is inserted perpendicular to the posterior surface of the lateral mass. The minimum distance of the safe zone is x − 2 mm, where x is the distance needed to avoid screw facet violation and 2 mm is the distance from the caudal edge of the plate to the center of the caudal screw hole. The maximum distance of the safe zone is the posterior surface length of the lateral mass minus 8 mm, since the distance from the cranial edge to the caudal edge of the plate is 8 mm (Fig. 3B).
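The two bounds can be expressed compactly; the sketch below implements them with the plate geometry stated above, while the anatomical inputs are placeholder values for illustration, not the measured data of Table 3.

```python
import math

PLATE_LENGTH_MM = 8.0    # cranial edge to caudal edge of the mini-plate
EDGE_TO_HOLE_MM = 2.0    # plate caudal edge to caudal screw-hole center

def safe_zone(screw_len_mm, pole_angle_deg, posterior_len_mm):
    """(min, max) distance from the lateral mass inferior pole to the
    plate's caudal edge; min > max means no safe zone exists."""
    x = screw_len_mm / math.tan(math.radians(pole_angle_deg))
    lo = x - EDGE_TO_HOLE_MM                  # avoid screw facet violation
    hi = posterior_len_mm - PLATE_LENGTH_MM   # avoid cranial plate impingement
    return lo, hi

# Placeholder anatomy: inferior pole angle [deg], posterior surface length [mm]
for level, angle, length in (("C4", 63.0, 13.5), ("C7", 45.0, 15.5)):
    lo, hi = safe_zone(5.0, angle, length)
    print(f"{level}: safe zone {lo:.1f}-{hi:.1f} mm for a 5-mm screw")
```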
Statistical analysis. Student's t-test was performed to compare the patient-reported outcome measures between the IP and AP groups. A paired t-test was performed to compare preoperative and postoperative values. The Mann-Whitney U test and Wilcoxon signed-rank test were used to analyze cervical sagittal alignment and ROM, as these parameters did not demonstrate a normal distribution according to the Shapiro-Wilk test.

Results

Patient characteristics and inappropriate plate positioning. Forty patients with 120 levels met the inclusion criteria and were included in the study. Screw facet violation was observed in 8 patients (20.0%) and at 8 levels (6.7%). Furthermore, plate impingement was detected in 9 patients (22.5%) and at 11 levels (9.2%). In total, 15 patients (37.5%) had inappropriate plate positioning and were classified into the IP group (age, 67.7 ± 9.8 years; male, 76.2%; follow-up, 75.0% [15/20]) (Table 1). Screw facet joint violation was more frequently observed at the distal levels, including C5 and C6, whereas no screw violation was observed at C3. In contrast, plate impingement was more frequently detected at the proximal levels, including C3 and C4, whereas no plate impingement was detected at the most caudal level, C6 (Table 2) (Fig. 4). No plate/screw pullout or breakage occurred.

Radiographic results. The lateral mass posterior surface length was longer at the distal levels. Furthermore, the inferior pole angle of the lateral mass tended to decrease at these levels. Because of the smaller inferior pole angle at the distal levels, the minimum distance from the inferior pole for screw insertion, calculated as x = screw length / tan(a), was larger at the distal levels (Table 3) (Fig. 4). Lordosis of C2-C7 did not demonstrate significant intergroup differences at any postoperative follow-up period. Furthermore, there was no significant difference in C2-C7 SVA between the AP and IP groups. However, cervical ROM significantly decreased at the final follow-up in the IP group (p < 0.01), while it did not change significantly in the AP group (p = 0.91). Cervical ROM at the final follow-up was significantly smaller in the IP group than in the AP group (p < 0.01) (Table 4). Among the 40 patients included, 25 (62.5%) underwent a 1-year follow-up CT, which enabled evaluation of the hinge site union status. Among the 76 segments evaluated, three (3.9%) demonstrated hinge site non-union. One segment was associated with screw facet joint violation, while the other two segments were not associated with screw violation or plate impingement.

Patient-reported outcome measures. Neck pain VAS, arm pain VAS, and NDI significantly improved after the operation in both groups (p < 0.01). No significant difference in patient-reported outcome measures between the AP and IP groups was observed at any follow-up period (Table 5).

The defined safe zones are presented in Table 6 and Fig. 5. When inserting a 5-mm caudal screw for mini-plate lateral mass fixation, a 2- to 3-mm distance between the plate and the inferior pole of the lateral mass was required for the C3, C4, and C5 levels. However, at the C6 or C7 levels, a distance of approximately 5 mm from the inferior pole of the lateral mass was needed to avoid screw facet violation. Distancing the mini-plate by more than 5-6 mm at C3-C5 and 7-8 mm at C6-C7 from the inferior pole of the lateral mass was not safe because it would cause plate impingement at the cranial level. The safe zone was narrower when inserting a 7-mm caudal screw, as more distance is needed to avoid screw facet violation. Inserting a 7-mm caudal screw at the C7 level leaves no safe zone owing to the thin lateral mass at this level, as demonstrated by the small inferior pole angle at C7. Considering the median value of the safe zone, for C3-C5, leaving a 4- to 5-mm distance when inserting a 5-mm screw and a 5- to 6-mm distance when inserting a 7-mm screw from the inferior pole would avoid both screw facet violation and plate impingement. For C6-C7, leaving 6-7 mm when inserting a 5-mm screw would be appropriate.

Discussion

Optimal mini-plate insertion for cervical laminoplasty would adequately prevent reclosure of the open hinge without instrument-related complications such as screw pullout, plate breakage, screw facet violation, and plate impingement at the adjacent level [3,5,8,17]. Several studies have demonstrated that aggravation of kyphosis, decreased ROM, and postoperative neck pain are common after laminoplasty [10][11][12][18]. Although injuries to the posterior neck musculature have been commonly discussed as a factor causing these adverse outcomes, inappropriate instrument positioning, such as plate impingement or screw facet violation, would also have a negative effect on axial symptoms and may have been underestimated [3,8,9]. While transfacet fixation has been reported as a viable technique for fusion, screw facet violation by mini-screws does not limit facet joint motion and would accelerate the degenerative process at the involved level [19]. Chen et al. demonstrated that screw facet joint violation during mini-plate fixation results in decreased ROM and aggravated neck pain, although neurological recovery was not affected [9].
Two previous studies have suggested safe mini-screw insertion points for laminoplasty [3,8]. Chen et al. demonstrated a safe zone using 3D image rendering [8]. However, the safe zone definition in that study is complex, considering that, intraoperatively, surgeons can only adjust the plate position in the cranial or caudal direction. Min et al. also demonstrated a minimal safe distance for mini-screw insertion [3]. The limitation of that study is that the measuring method was not objectively described. Furthermore, although these two studies suggest that a certain distance from the inferior pole of the lateral mass is needed to avoid screw facet violation, they did not consider the possibility of plate impingement when the plate is located too cranially [3,8]. Therefore, the present study attempted to define the safe zone of mini-plate placement by considering both the minimum distance (to avoid screw facet violation) and the maximum distance (to avoid plate impingement) from the inferior pole of the lateral mass. In this study, screw facet violation was more common at the distal levels, including C5 and C6, whereas plate impingement was more common at the proximal levels, such as C3 and C4. The results of the radiological measurements demonstrate that this trend is consistent with the anatomical differences between the levels. The possibility of plate impingement is higher at the proximal levels because the posterior surface length is shorter there. In contrast, the smaller inferior pole angle at the distal levels signifies a thin lateral mass, which increases the possibility of screw facet joint violation. Considering such anatomical differences at each level, locating the mini-plate more caudally at the proximal levels and more cranially at the distal levels would help avoid instrument-related complications. The safe zone was described based on the distance between the inferior pole of the lateral mass and the caudal edge of the mini-plate. Although previous reports have used the screw insertion area as the reference point, we used the caudal edge of the mini-plate because it is easier to identify intraoperatively. The suggested safe zone demonstrates that inserting the mini-plate with a plate-to-lateral mass inferior pole distance of 4-5 mm for the C3-C5 levels and 5-6 mm for the C6-C7 levels would avoid instrument-related complications. Min et al. also demonstrated that more distance from the inferior border of the lateral mass is needed at the distal levels to avoid screw facet joint violation [3]. It is known that the lateral mass is generally thin at C7, which makes pedicle screws a more preferred choice than lateral mass screws [20,21]. The results of the present study also demonstrated that inserting a 7-mm screw for mini-plate fixation at C6-C7 would be unsafe owing to the thin lateral mass at these levels, as demonstrated by the small inferior pole angle. This finding supports performing partial laminectomy rather than laminoplasty at C7 because of the higher chance of instrument-related complications at this level [21,22]. Laminoplasty is often performed in patients with cervical spondylosis, which distorts the anatomical landmarks owing to bony spurs and spondylolisthesis. Lee et al. demonstrated that screw facet joint violation is more common in severely degenerative cervical spines than in mildly degenerative ones [7].
Although the suggested safe zone in the present study can be used as a reference when placing the mini-plate, such distortion of anatomical landmarks would make it difficult to identify appropriate insertion points. Therefore, individual assessment and preoperative planning with the radiographic measurements used in this study would further enhance the safety of laminoplasty. Although shorter screws can prevent facet joint violation, the weak fixation strength caused by decreased screw length can lead to screw pull-out. While this phenomenon was not observed in the present study, previous reports have demonstrated that screw pull-out can occur even with 5-mm screw fixation [5]. Park et al. recommended using two relatively long screws for the lateral mass in order to increase resistance to pull-out force [17]. Therefore, fixation with a longer screw in the optimal area is needed, rather than a decrease in screw length. The results of the present study demonstrate that clinical results, such as neck pain VAS or NDI, were not adversely affected by screw facet joint violation or plate impingement. However, cervical ROM was adversely affected by inappropriate positioning of the instrument. This corresponds to the findings of Chen et al., which suggested that screw facet joint violation is related to decreased ROM [9]. However, both studies included a small number of patients with instrument-related complications, which warrants further evaluation. This study had several limitations. First, the posterior surface of the lateral mass is not a flat plane but has a rounded curvature, and the measuring method of the present study has limited ability to reflect such curved surfaces. However, within the confines of two-dimensional images, the rounded curvature of the lateral mass cannot be completely captured. Furthermore, the minimal safety distance to avoid screw facet violation demonstrated in this study corresponds to that reported previously [3,8]. Second, as previously discussed, the study has limited capacity to demonstrate the clinical impact of inappropriate instrument positioning owing to the small sample size. Finally, the study is not free from the possibility of selection bias because it was a retrospective, single-center study. In conclusion, the risk of plate impingement was higher at the proximal levels, whereas the risk of screw facet violation was higher at the distal levels in open-door cervical laminoplasty. These risks coincide with the anatomical differences at each level. The demonstrated safe zone can be used as a reference for plate positioning. Despite inappropriate positioning of the mini-plate, the clinical outcomes were not adversely affected.

Data availability. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
A compact single channel interferometer to study vortex beam propagation through scattering layers

We propose and demonstrate a single channel interferometer that can be used to study how vortex beams propagate through a scatterer. The interferometer consists of a multifunctional diffractive optical element (MDOE) synthesized by the spatial random multiplexing of a Fresnel zone plate and a spiral Fresnel zone plate with different focal lengths. The MDOE generates two co-propagating beams, such that only the beam carrying orbital angular momentum is modulated by an annular stack of thin scatterers located at the focal plane of the Fresnel zone plate, while the other beam passes through the centre of the annulus without any modulation. The interference pattern is recorded at the focal plane of the spiral Fresnel zone plate. The scattering of vortex beams through stacks consisting of different numbers of thin scatterers was studied using the proposed optical setup. Conflicting results have been reported earlier on whether higher or lower charge beams suffer more deterioration. The proposed interferometer provides a relatively simple and compact means of experimentally studying the propagation of vortex beams through a scattering medium.

S.1 Fabrication of MDOEs

The design patterns were transferred to chromium-coated mask plates using a laser writing method in a conventional mask writer. Images of the mask patterns are shown in supplementary figure S1. The amplitude masks were used for the fabrication of two-level binary phase elements using UV lithography.

S.2 Simulation results

Simulations were done using Fresnel approximations at λ = 632 nm; the other parameters used were a sampling period of 4 µm, a sampling space of 1000 × 1000 pixels, and focal lengths f₁ = 25 cm and f₂ = 30 cm. In the first step, the variation in the scattering ratio due to an increasing number of layers in the stack of scatterers was studied. A scatterer is designed using the Gerchberg-Saxton algorithm (GSA) [1][2][3] with a scattering ratio given by b/B, where B is the length of the spectrum domain and b is the length outside which the intensity is zero. The process is shown in figure S4. A complex amplitude C, comprising a constant amplitude (the white window at the input) and a random phase distribution, is Fourier transformed, and the resulting amplitude is constrained to have values only within the scattering window of length b while the phase is retained. The process is iterated to obtain the random phase distribution, which will have a scattering ratio of σ = b/B.
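A minimal sketch of this design loop is given below, assuming a square grid and a centred square spectrum window; the grid size, scattering ratio, and iteration count are placeholder choices.

```python
import numpy as np

N, sigma = 512, 0.25                 # grid size B and scattering ratio b/B
b = int(sigma * N)
rng = np.random.default_rng(1)

amp_in = np.ones((N, N))             # constant-amplitude (white) input window
lo, hi = (N - b) // 2, (N + b) // 2
window = np.zeros((N, N), dtype=bool)
window[lo:hi, lo:hi] = True          # spectrum window of length b

phase = rng.uniform(0, 2 * np.pi, (N, N))   # random initial phase
for _ in range(50):
    spectrum = np.fft.fftshift(np.fft.fft2(amp_in * np.exp(1j * phase)))
    spectrum = np.where(window, spectrum, 0)  # constrain amplitude, keep phase
    field = np.fft.ifft2(np.fft.ifftshift(spectrum))
    phase = np.angle(field)                   # retain only the phase

# `phase` is the designed random diffuser with scattering ratio sigma = b/B
```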
S.3 Study of scattering characteristics of the scattering stack
The scattering characteristics of a stack of scatterers were studied using the experimental setup shown in figure S11. Light from a He-Ne laser (λ = 632.8 nm) is passed through a neutral density filter and a stack of scatterers. The stack of scatterers was created by stacking one scatterer over the other. The maximum scattering degree of the scatterer is obtained by trigonometry as θ = tan⁻¹(x/2L). From the scattering degree, the maximum period of the scatterer can be approximated as Λ = λ/sin θ. The values of the scattering degree, period, etc., for the different numbers of scatterers that make up the stack are given in Table S1. With an increase in the number of layers, the scattering degree increased, while the effective scattering period decreased, as described in the previous section.

Supplementary Figure S11: Experimental setup for studying the scattering characteristics of a stack of scatterers.
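A quick numerical check of the two relations above; the spot width x and the propagation distance L in this snippet are made-up illustration values, and only the wavelength matches the text.

import numpy as np

wavelength = 632.8e-9   # He-Ne laser wavelength (m)
x, L = 20e-3, 0.5       # hypothetical spot width (m) and screen distance (m)

theta = np.arctan(x / (2 * L))          # maximum scattering degree (rad)
period = wavelength / np.sin(theta)     # approximate maximum scatterer period (m)
print(np.degrees(theta), period)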
Eicosapentaenoic acid loaded silica nanoemulsion attenuates hepatic inflammation through the enhancement of cell membrane components

Background: Liver inflammation is a multistep process that is linked with the cell membrane fatty acid composition. The effectiveness of eicosapentaenoic acid (EPA) undergoes an irreversible change during processing due to its unsaturated nature, so the formulation of a nanocarrier for EPA is crucial for improving EPA's bioavailability and pharmacological properties.

Objective: In this study we aimed to evaluate the efficiency of EPA, alone or loaded on silica nanoemulsion, in the management of hepatic inflammation induced by diethyl nitrosamine (DEN), through the enhancement of cell membrane structure and functions.

Methods: A new formulation of EPA was prepared to modify the properties of EPA. Forty-eight male Wistar albino rats were classified into: control, EPA, EPA loaded silica nanoemulsion (EPA-NE), DEN induced hepatic inflammation, and DEN induced hepatic inflammation treated with EPA or EPA-NE groups. Plasma tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), liver hydroxyproline (Hyp) content, and liver oxidants and anti-oxidants were estimated. Urinary 8-hydroxyguanosine (8-OHdG) and erythrocyte membrane fatty acid fractions were estimated by high-performance liquid chromatography (HPLC). Also, histopathology studies were done to verify our hypothesis.

Results: It appeared that administration of EPA, in particular EPA loaded silica nanoemulsion, ameliorated the inflammatory response, increased the activity of the anti-oxidants, reduced levels of oxidants, and improved cell membrane structure compared to the hepatic inflammation induced by DEN group. Histopathological examination confirmed these results.

Conclusion: EPA, and notably EPA loaded silica nanoemulsion, is strongly recommended as a promising supplement in the management of hepatic inflammation.

Introduction
The liver is the largest organ in the body, responsible for nutrient metabolism and protein synthesis [1]; it is also involved in the biotransformation of food and drugs [2]. Liver cancer is considered one of the most common causes of cancer-related death [3]. Liver carcinoma is a multistep process involving numerous risk factors that promote damage to genes as well as molecular and cellular deregulation and hepatocyte transformation [4]. Hepatocellular carcinoma (HCC) represents more than 90% of liver cancers and constitutes a major worldwide health problem. HCC is the end stage of chronic liver diseases, starting from fibrosis and cirrhosis [5]. Numerous investigations have indicated that the most common causative events in liver diseases are oxidative stress and inflammation. An uncontrolled and prolonged imbalance between the generation of free radicals and their removal by defensive mechanisms causes serious damage to cells, with potentially serious consequences for the entire organism, resulting in a wide range of chronic diseases [6-8]. During liver damage, ROS can trigger the expression of pro-inflammatory genes, which is a risk factor for liver diseases [6]. The influx of neutrophils, monocytes, and lymphocytes, as inflammatory cells, to the site of stimulation is a fundamental component of inflammation. The activated inflammatory cells at the site of inflammation release chemical mediators such as chemokines, cytokines, eicosanoids and nitric oxide, causing elevation of superoxide, hydroxyl radical and hydrogen peroxide, and tissue damage [6,9,10].
Thus, the overexpression of the pro-inflammatory genes activates an intracellular signaling cascade generating additional ROS, which in turn increases oxidative stress and the inflammatory lesion, driving the progression of liver disorders [6]. Most therapeutics for liver disorders are designed to interact with proteins and nucleic acids; in addition, alterations in cell membrane lipid composition are associated with the functionality of cancer cells. Treatment with polyunsaturated fatty acids (PUFA) is becoming a very exciting alternative; above all, omega-3 fatty acids have been shown to exert a number of beneficial biological properties, including anti-cancer effects, and supplementation of n-3 polyunsaturated fatty acids (PUFA) is inversely associated with the risk of HCC [11]. Eicosapentaenoic acid (EPA), an omega-3 polyunsaturated fatty acid, can modify the cell membrane configuration as well as the equilibrium between ceramide and sphingomyelin, hence changing membrane properties such as permeability and fluidity [12]. In addition, it has an inhibitory effect on the growth of tumor cells with little or no cytotoxic effect on normal cells [11]. EPA is commonly found in fish oil, and its effectiveness undergoes an irreversible change during processing due to its unsaturated nature; so the formation of a nanocarrier for EPA is crucial for improving EPA's bioavailability and pharmacological properties, and for widening its use in biomedical fields.

Aim of the work
From this point of view, this study aimed to evaluate the efficiency of EPA alone and EPA loaded silica nanoemulsion in the management of hepatic inflammation induced by diethyl nitrosamine (DEN) in experimental rats, via the modulation of the erythrocyte membrane composition.

Materials
Eicosapentaenoic acid, diethyl nitrosamine (DEN), and HPLC standards for 8-hydroxyguanosine and fatty acid fractions were purchased from Sigma Chemical Company, St. Louis, MO, USA. All chemicals used were of HPLC grade.

Preparation of eicosapentaenoic acid (EPA) nanoemulsion
To prepare EPA in nanoemulsion form, tetraethyl orthosilicate (TEOS; 5 mL) was dissolved in water for 5 min at room temperature using magnetic stirring. After that, Cremophor (4 mL/25 mL H2O) containing 15 mL of EPA was added to the TEOS solution and kept under vigorous stirring using an ultrasonic homogenizer (15 min). At the end of homogenization, the colorless solution turned into a milky solution, confirming the emulsification of EPA.

Characterization
TEM images of EPA loaded silica nanoemulsion were acquired by a JEM-2200-FS field emission transmission electron microscope (JEOL, Japan) at an operating voltage of 200 kV. The grids were given hydrophilic treatment using a JEOL Datum HDT-400 hydrophilic treatment device. TEM samples were prepared by dropping the diluted nanoemulsion onto carbon coated hydrophilic copper TEM grids. The particle size distribution and polydispersity index of EPA loaded silica nanoemulsion (EPA-NE) were determined in triplicate by photon correlation spectroscopy (PCS) using a zetasizer (Malvern Zetasizer Nano ZS90, UK). Approximately 1 mL of EPA loaded silica nanoemulsion was diluted with 1 mL of deionized water. The diluted samples were sonicated for 30 min in an ice bath. The samples were placed in the zetasizer, and the particle size and polydispersity index were then recorded.

Experimental design
Animals
Male Wistar albino rats weighing 150 ± 10 g were obtained from the animal house of the National Research Centre (NRC), Giza, Egypt.
The animals were housed in stainless steel cages at a temperature of 22 ± 2 °C under a 12-h light/12-h dark cycle and were allowed to acclimatize for a period of 10 days before the experiment. The whole experiment was approved by the ethical committee of the NRC (ethical approval number 19 212).

Induction of hepatic inflammation
DEN was dissolved in 0.9% normal saline; hepatic inflammation was induced with a single intraperitoneal (i.p.) injection of DEN at a dose of 1 mg/kg b.wt. [13].

Experimental design
Forty-eight male Wistar albino rats were classified into six groups as follows (8 rats in each group): Negative control group: rats were orally administered 0.5 ml of vehicle solution (0.9% normal saline). EPA group: rats were administered EPA (500 mg/kg b.wt.) per day orally for 4 weeks [14]. EPA loaded silica nanoemulsion group (EPA-NE): rats were administered EPA loaded silica nanoemulsion (500 mg/kg b.wt.) per day orally for 4 weeks. DEN induced hepatic inflammation group: rats were treated with diethyl nitrosamine (DEN) (1 mg/kg b.wt.) in a single dose [13]. DEN induced hepatic inflammation treated with EPA group (treated I): rats were treated with DEN (1 mg/kg b.wt.) in a single dose and at the same time treated with EPA (500 mg/kg b.wt.) per day orally for 4 weeks. DEN induced hepatic inflammation treated with EPA-NE group (treated II): rats were treated with DEN (1 mg/kg b.wt.) in a single dose and at the same time treated with EPA loaded silica nanoemulsion (500 mg/kg b.wt.) per day orally for 4 weeks.

After the experimental period, 24-h urine samples were collected from each animal using metabolic cages for the determination of urinary 8-hydroxyguanosine. Rats were fasted for twelve hours and blood was withdrawn from the retro-orbital vein. The blood samples were collected with EDTA to isolate the erythrocyte membrane as described previously [15], and the separated plasma was used for the estimation of biochemical parameters. The liver was quickly excised from each rat and washed with ice-cold saline. The first part of the liver was homogenized in 0.1 M Tris buffer for biochemical estimations. The second part was used for the histopathological study.

Determination of liver functions
Plasma alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activities were estimated by a colorimetric method [16]. Also, albumin and total protein were determined colorimetrically [17].

Determination of plasma inflammatory markers
Plasma tumor necrosis factor alpha (TNF-α) and interleukin-1 beta (IL-1β) were estimated by enzyme linked immunosorbent assay using ELISA kits according to the manufacturer's procedures.

Determination of liver hydroxyproline content (Hyp)
To estimate the hydroxyproline content, a colorimetric assay was done using the Patiyal and Katoch technique. Briefly, liver sections (0.5 g) were hydrolyzed (20 h in 6 mol/L HCl at 100 °C), diluted in ultrapure water, and centrifuged to eliminate contaminants. At room temperature, samples were incubated for 10 min in 0.05 mol/L chloramine-T (Fisher, Fair Lawn, NJ, USA), followed by a 15-min incubation in Ehrlich's perchloric acid solution at 65 °C. Sample absorbance was measured at 561 nm, and the value for each sample was computed using a hydroxyproline standard curve [21,22].

Determination of erythrocyte membrane fatty acid fractions
Erythrocyte membrane fatty acid fractions including EPA, arachidonic acid (AA), linoleic acid (LA), and alpha linolenic acid (ALA) were estimated by high performance liquid chromatography (HPLC).
The cell membrane was homogenized in a 2% acetic acid/diethyl ether mixture (2:1 volume ratio). After filtering and centrifuging the solution at 500 ×g, the organic phase was evaporated to dryness. The extract was dissolved in acetonitrile (200 µL).

HPLC conditions
The HPLC system (Agilent Technologies 1100 series) was equipped with a quaternary pump (Quat Pump, G131A model) and a C18 column (260 × 4.6 mm, particle size 5 µm). As the mobile phase, an acetonitrile/water (70/30, v/v) mixture was utilized, provided by isocratic elution at a flow rate of 1 ml/min, with detection at a wavelength of 200 nm. The peak areas of the standards were determined after injection of serial dilutions. By graphing peak areas vs. concentrations, a linear standard curve was drawn. The standard curve was used to calculate the concentrations of the samples [23].

Determination of urinary 8-hydroxyguanosine (8-OHdG)
Rats were fasted and placed in metabolic cages for 24 h to collect urine. Urine samples were stored at −20 °C until analyzed. The concentration of 8-OHdG was determined using an HPLC system [24,25]. In brief, the 8-OHdG standard was dissolved in ultrapure water, and successive dilutions were then made and injected onto the HPLC to create a standard curve with varying concentrations.

Processing of samples
A Strata C18-E (55 µm, 70 Å) column was used to extract 8-OHdG from a 1 ml urine sample. The eluents were dried with a nitrogen gas stream and reconstituted in 5 mL of ultrapure water. Twenty µL of each sample was injected into the HPLC.

HPLC conditions
The mobile phase was a 25/10/965 (v/v/v) mixture of acetonitrile, methanol, and phosphate buffer. The phosphate buffer was prepared by dissolving 8.8 g of potassium dihydrogen phosphate (KH2PO4) in 1000 ml of ultrapure water and adjusting the pH to 3.5. The buffer was then filtered twice using a 0.45 µm pore size sterile membrane filter before passing over an HPLC reverse phase column and an electrochemical detector with a cell potential of 600 mV at a flow rate of 1 ml/min. The urinary 8-OHdG concentration was derived from the standard curve and divided by the urinary creatinine, which was obtained using Larsen's kinetic technique [26].

Liver histopathological examination
At the end of the experiment, sections from the liver were fixed in 10% buffered formalin for 24 h and then dehydrated with graded ethanol (70%, 80%, 90%, 95% and 100%). Dehydration was followed by clearing the samples in two changes of xylene. Samples were impregnated with two changes of molten paraffin wax, embedded, and blocked out. Sections (5 µm) were stained with hematoxylin and eosin and routinely processed [27-29].

Statistical analysis
SPSS 13.0 statistical software was used to examine the data (SPSS, Chicago, IL). Data were analysed using repeated-measures one-way ANOVA. A statistically significant probability was defined as one with a probability of less than 5% (P < 0.05).
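For completeness, the standard-curve step described above can be reproduced numerically as follows; the peak areas and concentrations in this Python sketch are invented placeholders, not data from this study.

import numpy as np

# Hypothetical calibration data: known standard concentrations (µg/mL)
# and the corresponding HPLC peak areas (arbitrary units).
conc = np.array([1.0, 2.5, 5.0, 10.0, 20.0])
area = np.array([12.1, 30.4, 60.8, 122.0, 243.5])

# Linear standard curve: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

def concentration_from_area(sample_area):
    """Invert the calibration line to estimate a sample concentration."""
    return (sample_area - intercept) / slope

print(concentration_from_area(90.0))  # roughly 7.4 µg/mL for this toy curve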
Results and discussion
Encapsulation of EPA into silica nanoemulsion was carried out using a water/oil nanoemulsion. Emulsifying agents such as Tween 80 and Cremophor were used to further disperse the formed nanoemulsion. Thus, our aim was to utilize the pores of the silica nanoemulsion as a holder for the EPA drug. The porous silica nanoemulsion was prepared using TEOS as precursor and Cremophor as surfactant to keep the particles of the formed silica nanoemulsion away from each other, with the aid of an ultrasonic homogenizer that achieved full dispersion of the formed nanoemulsion with no phase separation of its components. As is clear from Fig. 1a, the silica nanoemulsion particles are porous and formed as hollow spherical particles. The DLS of the silica nanoemulsion (Fig. 1b) showed that the particles formed with a very small size (61.58 nm) and a PdI of 0.05. From the value of the PdI, it can be concluded that the particles are monodisperse with no noticeable agglomeration. In comparison, in the EPA encapsulated silica nanoemulsion the particles are filled with EPA. On visual inspection of the silica nanoemulsion (Fig. 1a), the spherical particles appear porous, the pores appearing as dark regions, whereas after encapsulation of EPA (Fig. 1c) the particles appear filled. The average particle size of the EPA loaded silica nanoemulsion (Fig. 1d) increased to 90 nm, with a PdI equal to 0.074. The particle size was enlarged due to the encapsulation of EPA. Overall, the hydrodynamic average size is around 100 nm, which does not significantly affect the efficiency of EPA. Additionally, there is a difference in the diameter of the produced nanoemulsion when evaluated by TEM and DLS, which is mainly attributed to the difference between the techniques.

DEN is a secondary alkylating agent which generates ROS, leading to oxidation of DNA and/or RNA molecules. It is generally found in smoked and fried food items and is a confirmed hepatotoxin in rodent models [30]. ALT and AST, the liver biomarker enzymes, are suggestive of the onset of hepatocellular injury. These enzymes exist in the cytoplasm and are liberated into the blood upon hepatocellular damage [31]. It was found that rats administered DEN had significantly increased levels of serum ALT and AST (Fig. 2). Excessive generation of free radicals or depletion of antioxidant enzymes causes oxidative damage by disrupting the cellular redox balance between the production of oxidants and the antioxidant defense [32]. The parenchymal cells of the liver are the cells most vulnerable to oxidative stress [33], owing to the capacity of certain organelles found inside parenchymal cells (mitochondria, microsomes and peroxisomes) to produce free radicals that can cause fatty acid oxidation, making the liver a key target for ROS damage [34]. Oxidation of biological macromolecules, especially DNA, proteins, and lipids, is considered the hallmark of the oxidative stress induced by DEN [35].

Fig. 2 Levels of serum ALT and AST in the different studied groups. P a: significant difference compared to the control group. P b: significant difference compared to the DEN induced hepatic inflammation group. P c: significant difference compared to the treated I group.

The level of liver GSH, the activity of the antioxidant enzyme SOD, and the level of TBARS were measured during this investigation, and the results revealed that administration of DEN significantly elevated the extent of lipid peroxidation, as shown by the elevation of TBARS and the decreased level of GSH and SOD activity (Fig. 3). SOD and GSH are the first line of the antioxidant defense system in cells; they protect cells by scavenging free radicals, converting them to less toxic metabolites [36].
Results from this study showed a reduction in the concentration of GSH and the activity of the SOD enzyme after administration of DEN in the hepatic inflammation group. These findings are consistent with the observations of Adebayo and colleagues, who reported an increase in the levels of TBARS with a concomitant decrease in the antioxidant defense system in DEN-administered mice [31]. In the present investigation, supplementation with EPA, especially EPA loaded silica nanoemulsion, ameliorated the activities of serum ALT and AST, enhanced the anti-oxidative status (GSH, SOD), and blunted the oxidative stress level (TBARS) in the treated groups (Figs. 2 and 3). In support, in animal models of liver injury evoked by drugs, alcohol, or other agents, EPA terminated oxidative stress and mitochondrial dysfunction [37,38]. Further, EPA has been described to trigger reactions that regulate the expression of detoxifying/antioxidant genes and obstruct inflammation [39]. EPA reacts directly with Keap1, the negative regulator of Nrf2, and initiates the dissociation of Keap1 from Cullin 3, thereby inducing Nrf2-directed antioxidant gene expression [40,41]. Moreover, Tanaka et al. found that EPA administration to mice significantly reduces hepatic TBARS through enhanced expression of zinc-, copper-, and manganese-SOD [42].

Oxidative DNA damage is associated with the development of hepatocarcinogenesis. Overproduction of free radicals such as ROS is considered an important factor in genetic instability during liver inflammation [43]. 8-OHdG is another significant biomarker of ROS-induced oxidative DNA damage and is recognized as a risk factor for hepatocellular carcinoma in patients with chronic liver inflammation [44]. In this study, the 8-OHdG level in the urine of DEN-administered rats was significantly elevated compared with the control group (Fig. 4). Consistent with this finding, recent studies reported that cigarette smoke, which is rich in DEN, causes an accumulation of 8-OHdG in the lungs as a result of increased oxygen free radicals, leading to inflammatory responses, fibrosis, and tumor growth [45,46]. On the other hand, we found that EPA supplementation, particularly EPA loaded silica nanoemulsion, reduced urinary 8-OHdG levels compared to the DEN induced hepatic inflammation group (Fig. 4), which confirmed the ability of EPA to counteract oxidative modification of DNA. A previous study demonstrated that fish oil rich in EPA significantly decreased the levels of oxidative DNA damage (8-OHdG) in male cigarette smokers and attributed this beneficial effect of EPA to suppression of the generation of reactive oxygen species [47]. Furthermore, dietary fish oil protects against colon cancer in rats by reducing oxidative DNA damage, as measured by a quantitative immunohistochemistry study of 8-OHdG [48].

Chronic inflammation of the liver is a well-recognized risk factor for carcinogenesis and forms the molecular link between inflammation, hepatic fibrogenesis, and hepatocellular carcinoma [49]. In the DEN-induced hepatic inflammation group, concentrations of TNF-α and IL-1β were significantly elevated (Fig. 5). Consistent with our study, Ding et al. found that TNF-α and IL-1β were up-regulated during DEN exposure, causing hepatic inflammation [50]. These results were due to the de-alkylation of DEN to its active mutagenic metabolites, modulated by substances such as 3-methylcholanthrene and phenobarbital (PB), which in turn increase hepatic demethylase activity.
Besides, oxidative stress induced by DEN is well documented as a factor in the pathogenesis of hepatic carcinogenesis [51].

Fig. 4 Urinary 8-hydroxyguanosine levels in the different studied groups. P a: significant difference compared to the control group. P b: significant difference compared to the DEN induced hepatic inflammation group. P c: significant difference compared to the treated I group.

DEN activates the myeloid differentiation primary response 88 (MyD88) dependent and MyD88-independent signaling cascades [52]. The MyD88-dependent signal transduction activates NF-κB through degradation of its inhibitory protein IκBα, which allows NF-κB nuclear translocation and controls the expression of a multitude of pro-inflammatory cytokines and other immune-related genes, such as TNF-α, IL-1, IL-1β, IL-6, and IL-12 [49]. Liver inflammation is a hallmark of early-stage fibrosis, which can extend to severe fibrosis and cirrhosis [53]. Fibrosis is characterized by the accumulation of collagen and other extracellular matrix components [54]. One of the main approaches for fibrosis quantification, and also for the therapeutic assessment of new anti-fibrotic drugs, is to determine the hydroxyproline (Hyp) level of the liver [55]. In the current study, increased production of fibrillary collagens was confirmed by a significantly increased hydroxyproline level in the DEN-administered group (Fig. 5). A number of studies have also shown the rapid buildup of collagen during nitrosamine-induced liver fibrosis [56,57].

Fig. 5 Serum inflammatory markers and liver hydroxyproline content in the different studied groups. P a: significant difference compared to the control group. P b: significant difference compared to the DEN induced hepatic inflammation group. P c: significant difference compared to the treated I group.

Interestingly, our results showed a less severe inflammatory response in the groups administered EPA, essentially EPA loaded silica nanoemulsion, reflected by the reduction in serum levels of TNF-α and IL-1β compared to the DEN induced hepatic inflammation group (Fig. 5). Our findings are in line with Albracht-Schulte et al., who reported that EPA exhibits protective effects in liver steatosis and inflammation through decreased NF-κB and pro-inflammatory cytokines, such as TNF-α and Mcp-1, as well as increases in anti-inflammatory cytokines, such as IL-10, via up-regulation of miR-let-7 and inhibition of the MAPK/ERK/JNK pathways. Furthermore, EPA incorporates well into the liver and suppresses gene expression of the pro-inflammatory cytokines IL-1β, IL-6 and interferon-γ [37]. We have shown in this study that EPA, considerably EPA loaded silica nanoemulsion, represses DEN-induced nodule formation and suppresses subsequent fibrosis, as demonstrated by the decreased hepatic Hyp content in the treated groups compared to DEN induced hepatic inflammation (Fig. 5). These results are supported by the findings of Harada et al., who reported an inhibitory effect of EPA on hepatic fibrosis, lowering hepatic Hyp content and the gene expression of collagen and transforming growth factor-β1 in rats fed a methionine- and choline-deficient diet, by immediately reducing the level of ROS [42]. We evaluated the erythrocyte membrane fatty acid fractions including ALA, AA, LA, and OA, as shown in Table 1.
Our results showed a significant decrease in the erythrocyte membrane ALA content accompanied by a significant increase in the erythrocyte membrane AA, LA, and OA content in the DEN-induced hepatic inflammation group compared to the control group (Table 1). In the current investigation, DEN-induced liver inflammation resulted in oxidative stress, as demonstrated by a rise in liver TBARS and a decrease in antioxidant enzyme activity (GSH and SOD), resulting in a substantial reduction in the erythrocyte membrane ALA concentration. Rolo et al. proposed a link between low n-3 PUFA levels and oxidative stress, claiming that excessive reactive oxygen species formation owing to mitochondrial malfunction causes lipid peroxidation, which promotes inflammation and stellate cell activation, leading to fibrogenesis. In individuals with liver inflammation, higher levels of oxidative stress and lipid peroxidation have been found [58]. The increase in the erythrocyte membrane AA, LA, and OA content in the DEN-induced hepatic inflammation group may be related to the oxidative stress and associated inflammatory cascades observed in our study. We believe that raised AA levels are the initial stage in the progression of inflammation and, eventually, cell death, since the AA catabolism pathway, carried out by cyclooxygenase and lipoxygenase, produces pro-inflammatory lipid mediators [59]. Consistent with this, we observed increased expression of the pro-inflammatory cytokines IL-1β and TNF-α along with the increased level of AA (Table 1 and Fig. 5).

The cellular membrane, and our capability to modify its composition and function, constitutes a strong weapon in the treatment of hepatic inflammation and cancer. The cell membrane and its components must be considered as important aspects in cancer treatment, and novel therapeutic techniques should be developed [4]. From this point of view, we evaluated the potential effect of EPA alone and EPA loaded silica nanoemulsion in improving cell membrane efficiency upon administration of DEN. The data obtained showed that, along with the reduction of the oxidant and inflammatory markers after administration of EPA alone or EPA loaded silica nanoemulsion, a significant increase in the erythrocyte membrane content of ALA and a significant decrease in AA, LA, and OA were observed compared to the DEN induced hepatic inflammation group (Table 1 and Fig. 5). It is worth mentioning that there was a significant increase in the erythrocyte membrane content of ALA accompanied by a significant decrease in AA, LA, and OA in the EPA loaded silica nanoemulsion group compared to EPA alone (Table 1), confirming the effectiveness of the prepared EPA in nanoemulsion form. Besides, EPA is an omega-3 fatty acid characterized by its role in replacing AA in cell membranes, giving them more elasticity and flexibility [4]. Thus, in this work, EPA as an animal source of omega-3 effectively increased omega-3 fatty acids and decreased omega-6 and omega-9 fatty acids; this effect was more pronounced in the EPA loaded silica nanoemulsion treated group (Table 1). In agreement with our results, Giordano and Visioli reported that a moderate/appropriate amount of n-3 PUFA has an antioxidant effect. In addition, a recent study found that supplementing with LC n-3 PUFAs reduced hepatic oxidative stress and triglyceride accumulation in fatty liver caused by a high-fat diet [60].
In the current study, EPA, and in particular EPA loaded silica nanoemulsion, supplementation improved the antioxidant status by restoring antioxidant enzyme activity and GSH levels, preventing DEN-induced hepatic oxidative damage. Omega-3 fatty acids are promising candidates for treating the wide range of inflammatory responses accompanying many diseases such as atherosclerosis, diabetes [61], and asthma [62]. EPA is a biologically active polyunsaturated ω-3 FA [38].

Concomitant with the biochemical analysis, histological examination of liver sections of the negative control group showed the normal structure of the hepatic lobules. The hepatocytes radiated from the central vein, exhibited vesicular nuclei, some of them binucleated, and were separated by sinusoids (Fig. 6a). Liver sections of the negative control group also showed a normal portal tract with its structures: branches of the portal vein, hepatic artery and bile duct (Fig. 6b). Oral administration of EPA at a dose of 500 mg/kg/day for 4 weeks showed normal structure of hepatocytes and sinusoids (Fig. 6c) and portal tracts (Fig. 6d). On the other hand, administration of EPA loaded silica nanoemulsion at a dose of 500 mg/kg/day for 4 weeks displayed normal structure of hepatocytes and sinusoids (Fig. 6e). A normal structure of the portal tracts appeared, although some hepatocytes surrounding the portal tract exhibited necrotic features (Fig. 6f).

Fig. 8 a The structure of the hepatic lobule appeared more or less normal. b Congestion of the portal tract associated with mild inflammatory infiltration; hydropic degeneration was seen (H&E stain, scale bar: 5 µm). c and d Rat administered DEN and at the same time treated with EPA loaded silica nanoemulsion (treated II group), where c the hepatic lobule around the portal tract and d the portal tract appear nearly like the normal control (H&E stain, scale bar: 5 µm).

Examination of liver sections from rats orally administered DEN showed loss of the normal hepatic structures, including laminae and sinusoids, and a dilated portal tract (Fig. 7a). Formation of pseudoglands can be observed in high-grade macronodules (Fig. 7b). Complete destruction of the sinusoidal architectural pattern, along with a fibrotic stroma and lytic hepatocellular necrosis, was observed, with no congestion (Fig. 7c). Congestion in the portal tracts associated with necrotic features of the surrounding hepatocytes was noticed (Fig. 7d). Histopathological examination of liver sections from rats administered DEN and at the same time treated with EPA showed the structure of the hepatic lobules (Fig. 8a). In addition, Fig. 8b showed congestion of the portal tracts associated with mild inflammatory infiltration, and hydropic degeneration. Microscopic examination of liver sections from rats orally administered DEN and at the same time treated with EPA loaded silica nanoemulsion showed that the hepatic lobules (Fig. 8c) and the portal tracts (Fig. 8d) appeared nearly like the normal control. This study clearly exhibited that oral administration of EPA alone or EPA loaded silica nanoemulsion ameliorated hepatic inflammation induced by DEN in comparison to the DEN induced hepatic inflammation group in both macroscopic and microscopic examinations. Additionally, in the EPA nanoemulsion treated group, this effect was increased to become more or less near the control group.
We suggest that this effect is related to the role of the nanoemulsion, characterized by a smaller size that enhances the effect of EPA and facilitates its penetration into the cell membrane.

Conclusion
The cellular membrane is a highly ordered and sophisticated part of the cell that is responsible for cell structure and interactions with the outside environment. Understanding the lipid content, particularly the changes found in hepatic inflammation, offers significant potential to attenuate inflammation and oxidative stress and to reduce the risk of cancer. The anti-inflammatory and anti-oxidant effects of EPA in nanoemulsion form will be far better than those seen for EPA alone, given its excellent bio-physicochemical properties.
THE RELEVANCE OF THE HANLE EFFECT ON NA AND FE LIDARS

A laser resonant scattering process involves two steps, excitation and emission. That emission occurs spontaneously is well accepted. That the atoms involved in the emission are excited coherently by a laser beam, leading to a non-isotropic angular distribution of emission (an antenna pattern), is not well known. The difference between coherent and incoherent excitation leads to the Hanle effect. In this paper, I discuss the physics of the Hanle effect and its influence on the backward scattering intensity of Na, K, and Fe atomic transitions and on the associated Na and Fe resonant fluorescence lidar systems.

INTRODUCTION
It is well known that the dipole moment responsible for non-resonant Rayleigh scattering is coherently excited by a laser beam. In a resonant scattering process, the scattered photons are spontaneous emissions from atoms excited during the process. That such atoms, quite different from those involved in a fluorescence light, are also coherently excited is not as well known. Unlike incoherent excitation in the case of a fluorescence light, which leads to isotropic emission, the coherent excitation by a laser beam leads to an angular distribution of emitted (or scattered) light, an antenna pattern if you wish. The difference between the two leads to the Hanle effect. We discuss the physics and formulation of the Hanle effect, apply it to the backscattering of Na, K, and Fe atoms, and in turn treat the consequences for the associated resonance fluorescence lidar signals.

METHODOLOGY
A semiclassical/quantum mechanical treatment aimed at lidar applications was recently presented [1]. The resonant scattering cross-section from initial state |f⟩ via excited state |F⟩ to the final state |f′⟩ is given in (16) of that paper, in terms of, respectively, the transition wavelength, excited state degeneracy, ground state degeneracy, Einstein A coefficient from the excited to the initial state, fractional transition rate, and absorption line-shape. The backscattering strength factor C_fFf′(ê; ê′) is responsible for the angular distribution in question, and it depends on the incident and scattered polarizations ê and ê′. Referring to the basic scattering coordinate and geometry shown in Fig. 3 of She et al. [1], there are four cases, corresponding to the combinations of incident and scattered polarizations, as given in (B8a) and (B8c) of [1]. To allow for the possibility of more than one emission final state, we sum over all allowed emission channels and obtain the desired differential cross-section as Eq. (2). We denote the quantity in the square bracket as the distribution function of the fluorescence intensity. To make a complex discussion manageable, we simplify our discussion to a judiciously selected case with incident polarization along ẑ, as in the textbook discussion of classical scattering, giving the fluorescence intensity as Eq. (3). Here the coefficient is given in terms of the 6-j coefficient in (16) of [1]. With the above simplification, we now consider two cases, for pedagogical reasons and for lidar application.

Physics demonstrated in a simple system
We now apply these formulae to the simplest quantum structure, without spin-orbit or hyperfine interaction, substituting f = f′ = 0 and F = 1 into Eq. (3) and evaluating the 6-j coefficients for B^(2)_fFf′. As a result, we obtain g = 1/3, D_10 = 1 and B^(2)_101 = 5/9, leading to fluorescence intensity functions with an angular distribution agreeing with the classical results in textbooks [2].
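As a quick numerical illustration of the classical limit just recovered, the following sketch evaluates the textbook dipole radiation pattern for a z-polarized exciting beam and compares the backscattered intensity with that of an isotropic emitter; the normalization convention here is an assumption of this sketch, not taken from [1].

import numpy as np

def dipole_pattern(theta):
    """Normalized classical dipole radiation pattern for z-polarized
    excitation: I(theta) = (3 / 8*pi) * sin^2(theta), which integrates
    to 1 over the sphere. theta is measured from the dipole (z) axis."""
    return 3.0 / (8.0 * np.pi) * np.sin(theta) ** 2

isotropic = 1.0 / (4.0 * np.pi)
# A lidar looks back along the beam axis, i.e. perpendicular to the
# dipole for linear polarization: theta = pi/2.
enhancement = dipole_pattern(np.pi / 2) / isotropic
print(enhancement)  # 1.5: coherent excitation enhances backscatter by 3/2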
This agreement with the classical pattern clearly demonstrates that the Hanle effect is not of quantum origin; the difference from the isotropic spontaneous emission distribution arises because the excitation of the atomic system by a laser-like source coherently prepares the mixed excited sub-states before fluorescence emission. Indeed, the Hanle effect has a classical explanation and is most prominent when the excited sub-states are degenerate, i.e., when there is no Zeeman splitting at B = 0; thereby, it is an effective tool for measuring the lifetime of atomic excited states [3].

Hanle effect of Na, K and Fe transitions
In order to apply the formulation to atoms of practical interest, we need to evaluate this factor for the transitions of Na, K, and Fe atoms. Though tedious, this can be done in a straightforward manner. The results for Na and K are given in Table 2.

Radiation (antenna) patterns of laser induced fluorescence from Na and Fe atoms
Using the recipe presented for the simple case of a z-polarized incident beam, we compute and plot the radiation patterns for the Na atom in Fig. 3.1.a and for the Fe atom in Fig. 3.1.b below.

Relevance to resonance fluorescence lidars
We consider the relevance of the Hanle effect and apply the results outlined in Section 3.1 to Na and Fe fluorescence lidars. We do not discuss the K lidar, in part because of its similarity to the Na lidar and the smaller atmospheric abundance of K. The simplest lidar (often broadband) for measuring metal density uses a transmitting laser beam; in this case, the Hanle effect could amount to up to a 12% density error. For atmospheric temperature and wind measurements, intensity ratios are typically invoked. If the two or three frequencies in the ratio(s) are from the same atomic transition, as in the 3-frequency Na lidar [5], then the correction for the Hanle effect does not affect the temperature or wind retrieval. However, the accompanying Na density determination will still be affected. A different situation exists in the temperature determination with the iron Boltzmann lidar, in which the intensity ratio of the 374 nm and 372 nm transitions is used. In this case, unless the multiplicative factors on the intensities of the two lines due to the Hanle effect are the same, they should be included. Fortunately, in this case the multiplicative factors are nearly the same, 1.077 and 1.087 respectively, so ignoring the Hanle effect is justifiable. We have only discussed the Hanle effect at B = 0. This is also justifiable, not only because the Earth's magnetic field is small compared to the internal magnetic field of the atom [5], but also because the Hanle effect is largest at B = 0.

Note added in proof: A more extensive version of this paper has since been published [7].
Softening Bilevel Problems Via Two-scale Gibbs Measures

We introduce a new, and elementary, approximation method for bilevel optimization problems motivated by Stackelberg leader-follower games. Our technique is based on the notion of two-scale Gibbs measures. The first scale corresponds to the cost function of the follower and the second scale to that of the leader. We explain how to choose the weights corresponding to these two scales under very general assumptions and establish rigorous Γ-convergence results. An advantage of our method is that it is applicable both to optimistic and to pessimistic bilevel problems.

Introduction
Bilevel optimization is defined as mathematical programming where an optimization problem contains another one as a constraint. It consists of decision making problems with a hierarchical leader-follower structure and has a natural interpretation in game theory. Bilevel problems have a long history that dates back to von Stackelberg [18] and have been intensively studied from a theoretical point of view as well as in applications to various domains including traffic planning, security, supply chain management, principal-agent models, production planning, market deregulation, optimal taxation and parameter estimation; see the book [11], the recent surveys [7,9] and the references therein. In the context of Cournot duopolies, von Stackelberg investigated the leader-follower model where the leader firm maximizes profit under the constraint that the follower firm reacts with an optimal choice of the quantity, which is supposed to be unique. Later on, Leitmann [14] discussed the case where the optimal solutions of the follower's problem form a set that the leader has to take into account in order to solve her own optimization problem. In case the follower's program has several solutions, we see that there is some ambiguity even in the definition of the leader's program. In the literature, several concepts have been considered. The optimistic (or strong) Stackelberg solution assumes a cooperative-like behavior between the agents: the leader expects the follower to choose solutions leading to the best outcome for her. On the contrary, the pessimistic (or weak) Stackelberg solution assumes that the follower always breaks ties by choosing the worst actions for the leader, which corresponds to a security strategy for her, see [2,6]. Some intermediate cases can also be considered. In [1], a cooperation degree is assumed, leading to the optimization of a convex combination of the best and the worst payoff value for the leader, while, in [16], probabilistic information about the follower's behavior is assumed, resulting in the optimization of an average payoff. Both optimistic and pessimistic bilevel programs are challenging and often difficult to solve in practice. In the present paper, we present a new and quite simple (unconstrained) approximation scheme for such problems based on the notion of two-scale Gibbs measures. Our method is directly inspired by the classical Laplace method: Gibbs probability measures, which have a density proportional to e^{-λu} with respect to a reference measure with full support, concentrate on the set where u is minimal as λ → ∞. We refer to Hwang [13] for a fine study of the method and precise statements in smooth finite-dimensional situations.
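To make the Laplace-method intuition concrete, here is a small numerical illustration of our own (not from the original text): it samples a reference measure ν on [0, 1] and shows that Gibbs weights proportional to e^{-λu} concentrate the mass near argmin u as λ grows; the potential u below is an arbitrary example.

import numpy as np

rng = np.random.default_rng(0)
y = rng.uniform(0.0, 1.0, size=200_000)   # samples from nu = Lebesgue on [0,1]
u = (y - 0.3) ** 2                         # example potential, minimized at y = 0.3

for lam in [10, 100, 1000, 10000]:
    w = np.exp(-lam * (u - u.min()))       # Gibbs weights (shifted for stability)
    w /= w.sum()
    mass_near_argmin = w[np.abs(y - 0.3) < 0.05].sum()
    print(lam, round(mass_near_argmin, 3)) # mass near argmin u tends to 1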
In the context of bilevel optimization, we have to take into account the objectives of both the leader and the follower, and a single parameter λ is not enough to capture the nested structure of the leader-follower problem. This is why we introduce two-scale Gibbs measures, where the first scale (with weight λ) takes into account the follower's objective and the secondary one (with a smaller weight to be chosen properly) takes into account the leader's objective. We investigate in detail convergence issues (both in the pointwise and the Γ-convergence sense) and the choice of the secondary scale only in terms of the reference measure and the modulus of continuity of the leader and follower objective functions. Our method is flexible enough to cope with both the optimistic and the pessimistic case.

Of course, the idea of regularizing bilevel problems is not new. In particular, the fact that adding to the lower level objective function a small multiple of the upper level objective function can be used to select solutions is classical, see [10,15], and has been used in [17], where an efficient numerical scheme is introduced for convex bilevel problems. The more recent article [4] proposes a smoothing strategy combining logarithmic penalties and Tikhonov regularization for two-stage stochastic programming. However, the approximation approach we propose is different in nature: it consists in approximating the value function of the leader by an integral with respect to a Gibbs measure. In the Euclidean setting, this yields a smooth approximation of this value (which may not even be lsc in the pessimistic case) for which we establish convergence results for an arbitrary sampling measure with full support, provided the parameters are chosen appropriately. Even though the two-scale Gibbs measures we consider involve the exponential of the sum of the lower level objective function with a small multiple of the upper level objective function, the concentration of these measures on suitable solutions of the lower level depends in a nonlinear way on the parameters.

The paper is organized as follows: the setting of our analysis is introduced in Section 2. In Section 3, we recall some basic facts on Gibbs measures and analyze the convergence of two-scale Gibbs measures in a simple case. In Section 4, we give a construction for the weights which guarantees convergence of the corresponding two-scale Gibbs measures under general assumptions. We then establish Γ-convergence results, first for the optimistic case in Section 5 and then for the pessimistic case in Section 6. Finally, Section 7 concludes with some remarks and examples.

Setting
Throughout this paper, X (strategy set for the leader) and Y (strategy set for the follower) will be two compact metric spaces. We will also assume that the cost functions of both the leader and the follower are continuous; ϕ ∈ C(X × Y) and ψ ∈ C(X × Y) will denote the cost functions of the leader (who chooses x ∈ X) and of the follower (who chooses y ∈ Y), respectively. The Stackelberg problem is the program of the leader, which reads

inf { ϕ(x, y) : x ∈ X, y ∈ arg min ψ(x, ·) }.   (2.1)

Under our assumptions, it is obvious that (2.1) admits at least one solution, but finding such solutions in practice is a challenging task due to the constraint y ∈ arg min ψ(x, ·). Of course, one can rewrite (2.1) as a minimization problem with respect to x only:

inf_{x ∈ X} ϕ_*(x),   (2.2)   where   ϕ_*(x) := min { ϕ(x, y) : y ∈ arg min ψ(x, ·) }.   (2.3)

Problem (2.1) is usually referred to as the optimistic problem, since it assumes that, in case the follower has several optimal strategies, she will break ties by choosing one which is optimal for the leader.
The pessimistic problem consists, on the contrary, in assuming that the follower actually breaks ties by choosing strategies which are the worst for the leader. The corresponding bilevel pessimistic program therefore consists in

inf_{x ∈ X} ϕ*(x),   (2.4)   where   ϕ*(x) := max { ϕ(x, y) : y ∈ arg min ψ(x, ·) }.   (2.5)

In general, the pessimistic value ϕ* is not lower-semicontinuous (lsc), so (2.4) does not necessarily admit solutions, which makes the pessimistic problem more involved than the optimistic one and requires some suitable relaxation of ϕ*. Our goal is to approximate the bilevel problem (2.1) by a family of unconstrained ones (we will also address the approximation of the pessimistic bilevel problem (2.4) in Section 6). We shall indeed prove that the somewhat rough function ϕ_* can be approximated by a family of more regular ones, defined by an integral depending on a parameter. By approximated we mean both in the pointwise sense and in the sense of Γ-convergence, which we recall below (see [5] or [8] for an overview of Γ-convergence and its applications):

Definition 2.1 Let F : X → R and, for every λ > 0, F_λ : X → R; then F_λ is said to Γ-converge to F as λ → +∞ if the following two conditions are satisfied:
• for every x ∈ X and every family (x_λ)_{λ>0} converging to x as λ → +∞, one has the Γ-liminf inequality: liminf_{λ→+∞} F_λ(x_λ) ≥ F(x);
• for every x ∈ X, there exists a (so-called recovery) family (x_λ)_{λ>0} converging to x as λ → +∞ such that the following Γ-limsup inequality holds: limsup_{λ→+∞} F_λ(x_λ) ≤ F(x).

Our approximation is a variant of the celebrated Laplace method which, as far as we know, has not been investigated in the bilevel framework. First of all, we give ourselves a Borel probability measure ν on Y (which we will denote ν ∈ P(Y)). Recall that the support of ν, denoted spt(ν), is the smallest closed subset of Y having full mass for ν. We assume that ν has full support:

spt(ν) = Y.   (2.6)

For r > 0 and y ∈ Y, we denote by B_r(y) the closed ball of radius r centered at y and set, for every r ≥ 0,

α_ν(r) := inf_{y ∈ Y} ν(B_r(y)).   (2.7)

Note that the full support assumption (2.6) and the compactness of Y ensure that α_ν(r) > 0 for every r > 0. Note, however, that α_ν need be neither continuous nor strictly increasing (take Y finite, for instance). Given x ∈ X, λ > 0 and δ > 0, we consider the probability measure on Y

μ_{λ,δ}(dy|x) := Z_{λ,δ}(x) e^{-λ(ψ(x,y)+δϕ(x,y))} ν(dy),   (2.8)

where Z_{λ,δ}(x) is the normalizing constant which makes μ_{λ,δ}(·|x) a probability measure, i.e. Z_{λ,δ}(x) = (∫_Y e^{-λ(ψ(x,y)+δϕ(x,y))} ν(dy))^{-1}. Our main result is that one can choose the secondary scale δ = δ_λ with

lim_{λ→+∞} δ_λ = 0,   lim_{λ→+∞} λδ_λ = +∞   (2.9)

in such a way that the family ϕ_λ(x) := ∫_Y ϕ(x, y) μ_{λ,δ_λ}(dy|x) Γ-converges and converges pointwise to ϕ_* as λ → ∞. Our construction of δ_λ only depends on the function α_ν defined in (2.7) and a modulus of continuity of ϕ and ψ; it will be detailed in Section 4.

On standard Gibbs Measures
In this section, we temporarily leave the approximation of Stackelberg problems and focus on the asymptotic behavior of Gibbs measures. Given ν ∈ P(Y) with full support as in (2.6), λ > 0 and w ∈ C(Y), we define the Gibbs measure

ν_{λ,w}(dy) := e^{-λw(y)} ν(dy) / ∫_Y e^{-λw} dν.   (3.1)

Of course, ν_{λ,w} is unchanged if one adds a constant to w, so there is no loss of generality in normalizing w in some way, and the most natural way is to assume that its minimum is 0. The following elementary result (Lemma 3.1) will be used intensively in the sequel; its short proof consists in writing out the definition of ν_{λ,w} and putting everything together to obtain the basic estimate (3.2). Since, for every λ, ν_{λ,w} is a probability measure and Y is compact, there is a sequence λ_n → ∞ and ν̄ ∈ P(Y) such that ν_{λ_n,w} weakly star converges to ν̄ as n → ∞.
Since the set {w > ε} is open, it follows from Portmanteau's theorem (see [3]) that ν̄({w > ε}) ≤ liminf_n ν_{λ_n,w}({w > ε}). Hence, for every ε > 0, thanks to (3.2) and the fact that ν({w ≤ ε/2}) > 0 (because ν has full support and the minimum of w is 0), we get ν̄({w > ε}) = 0; letting ε → 0⁺ and using the fact that w is continuous, we conclude that ν̄ concentrates on the set where w = 0. We thus recover the well-known fact that Gibbs measures concentrate on the set where the potential is minimal:

Corollary 3.2 Let w ∈ C(Y) and let ν_{λ,w} be defined by (3.1); then any weak star cluster point of ν_{λ,w} as λ → ∞ has its support in argmin_Y w.

Convergence of two-scale Gibbs Measures in a Simple Case
To understand how to approximate bilevel problems with Gibbs measures, we first have to understand the following question. Given two functions u and v in C(Y), we want to find a weight δ_λ such that δ_λ → 0, λδ_λ → +∞ as λ → +∞, in such a way that the two-scale Gibbs measure ν_{λ,u+δ_λ v} concentrates, when λ → ∞, on the double argmin set:

argmin_{argmin_Y u} v.   (3.3)

Note that for fixed x ∈ X, and for u(y) = ψ(x, y), v(y) = ϕ(x, y), the set above corresponds to the solutions of the lower level problem which minimize the leader's cost. Throughout this paragraph, as well as in Section 4, we will focus on this subproblem and will therefore consider functions u and v that depend only on the variable y ∈ Y. The convergence analysis for the leader's value function is postponed to Section 5 (optimistic case) and Section 6 (pessimistic case). We will give a general constructive answer ensuring concentration on the double argmin set in Section 4 (depending on the modulus of continuity of u and v and the function α_ν in (2.7)). Yet, for now, we prefer to focus on a rather simple case where the explicit choice δ_λ := 1/√λ works (as well as many other simple ones, see Remark 3.4 below). This simple case corresponds to the extra assumptions that both u and v are Hölder continuous and that the function α_ν is bounded from below by a power function. Denoting by dist the distance on Y and by diam(Y) its diameter, these assumptions mean that there exist C > 0 and a ∈ (0, 1] such that

|u(y) − u(y′)| + |v(y) − v(y′)| ≤ C dist(y, y′)^a for all y, y′ ∈ Y,   (3.4)

and c > 0, q > 0 such that

α_ν(r) ≥ c r^q for every r ∈ [0, diam(Y)].   (3.5)

To shorten notations, let us set

ν_λ := ν_{λ, u + v/√λ},   (3.6)

which corresponds to the two-scale Gibbs measure with δ_λ = 1/√λ.

Proposition 3.3 Assume that u and v satisfy (3.4), that ν satisfies (3.5), and define ν_λ by (3.6); then any weak star cluster point of ν_λ as λ → ∞ has its support in the double argmin set argmin_{argmin u} v.

Proof To ease notations, let us normalize u and v in such a way that

min_Y u = 0 and min_{argmin u} v = 0.   (3.7)

Also define w_λ := u + v/√λ and observe that (3.7) implies that min_Y w_λ ≤ 0. Let then λ_n → ∞ and ν̄ ∈ P(Y) be such that ν_{λ_n} weakly star converges to ν̄. Let ε > 0; for λ large enough, it follows from the inequality (3.2) of Lemma 3.1 that ν_λ({u > ε}) tends to 0, so that, again thanks to Portmanteau's Theorem, ν̄({u > ε}) = 0 and ν̄ is supported by argmin u = {u = 0}. In particular, with (3.7), v ≥ 0 on spt(ν̄). To conclude, we thus have to show that, for every ε > 0, ν̄({v > ε}) = 0. Since u ≥ 0 and min_Y w_λ ≤ 0, we have {v > ε} ⊂ {w_λ > min_Y w_λ + ε/√λ}, so using Lemma 3.1 again we get an upper bound (3.8) on ν_λ({v > ε}). Let y_λ be a point where w_λ achieves its minimum; it then follows from (3.4) that, for λ ≥ 1, w_λ stays close to its minimum on a ball centered at y_λ whose radius is controlled by the Hölder bound, and whose ν-measure is bounded from below thanks to (3.5). With (3.8), this yields an estimate whose right hand side tends to 0 as λ → ∞ for every ε > 0; Portmanteau's Theorem again allows us to conclude that ν̄({v > ε}) = 0.
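The following numerical sketch, with made-up functions u and v of our own choosing, illustrates Proposition 3.3: with δ_λ = 1/√λ, the two-scale weights e^{-λ(u+δ_λ v)} select, among the minimizers of u, those that also minimize v.

import numpy as np

rng = np.random.default_rng(1)
y = rng.uniform(0.0, 1.0, size=400_000)    # reference measure nu on Y = [0,1]
u = ((y - 0.2) * (y - 0.8)) ** 2           # argmin u = {0.2, 0.8}
v = y                                      # leader's cost: prefers y = 0.2

for lam in [1e2, 1e4, 1e6]:
    delta = 1.0 / np.sqrt(lam)             # secondary scale delta_lambda
    w = u + delta * v
    g = np.exp(-lam * (w - w.min()))       # two-scale Gibbs weights
    g /= g.sum()
    near_02 = g[np.abs(y - 0.2) < 0.05].sum()
    near_08 = g[np.abs(y - 0.8) < 0.05].sum()
    print(int(lam), round(near_02, 3), round(near_08, 3))
# The mass drifts toward y = 0.2, the double argmin, as lambda grows.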
Remark 3.4 The choice δ_λ = 1/√λ above is just for illustrative purposes and is by no means the only possible one, nor optimal in any sense. It is indeed straightforward to check, with the same proof as above, that under the assumptions of Proposition 3.3 any choice of δ_λ satisfying condition (3.9) guarantees that the corresponding two-scale measures ν_{λ,u+δ_λ v} tend to concentrate on the double argmin set (3.3) as λ → +∞. In particular, any power choice for δ_λ, i.e. δ_λ = λ^{-γ} with γ ∈ (0, 1) (or much larger weights such as δ_λ = 1/log(λ), δ_λ = 1/log log(λ)), ensures convergence to the double argmin set. Note that smaller weights such as δ_λ = log(λ)/λ violate condition (3.9). Assumptions (3.4) and (3.5) are essential if one wishes to use power-like weights. In Section 7, we will consider examples where α_ν is much smaller than a power function. In such cases, the choice δ_λ = 1/√λ may rule out the desired convergence property (see Example 7.1). Even worse, it may be the case that no power-like weight converges to the double argmin set (see Example 7.2).

Choosing the Weights Under General Assumptions
Now we consider the general case where u and v are continuous and ν has full support. We wish to find secondary weights δ_λ satisfying (2.9) in such a way that, defining the two-scale Gibbs measures

ν_λ := ν_{λ, u + δ_λ v},   (4.1)

every weak star cluster point of ν_λ as λ → ∞ is supported by the double argmin set (3.3). Since u is continuous and Y is compact, u is uniformly continuous on Y; hence ω_u(t) → 0⁺ as t → 0⁺, where ω_u is a modulus of continuity of u in the sense that |u(y) − u(y′)| ≤ ω_u(dist(y, y′)) for all y, y′ ∈ Y. Hence ω₁ := 2 max(ω_u, ω_v) is a common modulus of continuity for u and v. Therefore, if ω₂ is the concave envelope of t ↦ ω₁(t) + t, we have ω₁ ≤ ω₂ and (4.3) holds. By construction, ω₂ is strictly increasing and concave (hence continuous) on the whole of R₊, and ω₂(t) → 0⁺ as t → 0⁺. From now on, we fix an increasing and concave modulus ω (possibly different from ω₂) such that (4.3) holds. We then denote by ℓ : R*₊ → R*₊ the inverse of ω, ℓ := ω⁻¹; by construction we thus have (4.4). Recalling that α_ν is defined by (2.7), we define for every t > 0 the function θ, as a left limit of a quantity involving α_ν and ℓ. Note then that θ is nondecreasing, lsc (this is why we define it as a left limit) and θ(t) → −∞ as t → 0⁺.

Proposition 4.2 Let δ_λ be defined as in Lemma 4.1 and ν_λ be defined by (4.1). Then any weak star cluster point of ν_λ as λ → ∞ has its support included in the double argmin set (3.3).

Proof Again we can normalize u and v so that (3.7) holds and set w_λ := u + δ_λ v; since δ_λ → 0, one can proceed as in the proof of Proposition 3.3 to show that, for every ε > 0, ν_λ({u > ε}) tends to 0 as λ → +∞, and thus deduce that any weak star cluster point of ν_λ as λ → ∞ has its support included in {u = 0} = argmin u. To conclude that such weak star cluster points are in fact supported by the double argmin set (3.3), it is enough to show that, for every ε > 0, ν_λ({v > ε}) tends to 0 as λ → +∞. To prove this, we observe that since u ≥ 0, {v > ε} ⊂ {w_λ ≥ δ_λ ε}, and since min_Y w_λ ≤ 0 (because of (3.7)), we have {v > ε} ⊂ {w_λ ≥ min_Y w_λ + δ_λ ε}. Since ν_λ = ν_{λ,w_λ}, our basic inequality (3.2) in Lemma 3.1 gives the bound (4.12). Choose now λ large enough so that δ_λ ≤ 1; doing so, ω is a modulus of continuity of w_λ. Hence, if y_λ is a minimum point of w_λ, the ball B_{ℓ(δ_λ ε/2)}(y_λ) is contained in {w_λ ≤ min_Y w_λ + δ_λ ε/2}; by the definitions of ℓ, α_ν and θ, we get (4.13). Replacing in (4.12) gives, for λ large enough that δ_λ ≤ ε, and using the monotonicity of θ, the bound (4.14); since the right hand side tends to +∞ as λ → +∞ thanks to (4.8), we obtain that ν_λ({v > ε}) tends to 0 for every ε > 0, and by construction, for every ε > 0, (4.15) holds. Now observe that the quantity in (4.15) only depends on u and v through the function θ, i.e. through the modulus ω.
So the speed at which ν_λ({v > min_{argmin u} v + ε}) converges to 0 is uniform with respect to u and v admitting ω as modulus of continuity. This uniform behavior will be useful in Section 6, devoted to the pessimistic case.

Remark 4.4 Let us also emphasize that what really matters for convergence is the fact that the left-hand side of (4.14) tends to 0 as λ → +∞ for every ε > 0, and not really the precise construction of δ_λ. Our construction based on Lemma 4.1 is just a cooking recipe which guarantees this property, and we are not claiming that it is optimal in any sense. In fact, the choice δ_λ = 2√(t_λ) in Lemma 4.1 is a bit arbitrary; taking δ_λ = t_λ^γ with γ ∈ (0, 1) or δ_λ = −t_λ log(t_λ) would have worked just as well.

As a consequence of the previous Γ-convergence, we immediately have:

Corollary 5.2 Let ϕ_* be defined by (2.3) and ϕ_λ be as above; then min_X ϕ_λ → min_X ϕ_* as λ → +∞, and if x_λ is a minimizer of ϕ_λ and x is a cluster point of (x_λ) as λ → +∞, then x is a minimizer of ϕ_*.

The Pessimistic Case
Let us now address the pessimistic case and the approximation via Gibbs measures of the pessimistic value function ϕ* defined in (2.5). Since ϕ* is not necessarily lsc, we have to relax it and consider its lsc envelope (that is, the largest lsc function lying below ϕ* on X), which we denote by ϕ̄* and which is given by (6.1). The relevance of the lsc envelope of ϕ* for pessimistic bilevel problems was already emphasized in [12] and [11]. We construct δ_λ exactly as in Section 5 and, for every x ∈ X, we consider the Gibbs measure μ⁺_{λ,δ_λ}(·|x), defined as in (2.8) but with the sign of the term involving ϕ reversed. We then consider the approximations ϕ⁺_λ(x) := ∫_Y ϕ(x, y) μ⁺_{λ,δ_λ}(dy|x). Our convergence result concerning the pessimistic value is the following:

Theorem 6.1 Let ϕ* be defined by (2.5) and ϕ⁺_λ be as above. Then, as λ → +∞, ϕ⁺_λ converges pointwise to ϕ* and Γ-converges to its lsc envelope ϕ̄*, defined in (6.1).

Examples and Remarks
In practice, our convergence results imply that the initial optimistic/pessimistic bilevel problems can be approximated by the (unconstrained) minimization of ϕ_λ/ϕ⁺_λ for large λ. Since ϕ_λ and ϕ⁺_λ are typically smooth, one can use, for instance, gradient descent methods to minimize these functions. If one wishes to implement our approximation on concrete problems (which, at the moment, we leave for future work), a certain number of issues have to be seriously addressed. Among them, one can think of the choice of the reference measure ν as well as of the numerical method to efficiently compute the integrals which appear in ϕ_λ. But the first question that naturally comes to mind is the choice of the weight δ_λ for the secondary scale. Since δ_λ captures the tradeoff between the upper and lower level, a small δ_λ will result in a good accuracy for the follower's optimizing behavior but might too slowly take into account the leader's objective. We already saw that δ_λ cannot be chosen too small for the convergence to be guaranteed, but choosing it too large may affect the speed of convergence to the value function of the leader. A universally good choice for δ_λ is certainly impossible, and the aim of this final section is precisely to illustrate on some particular examples the behavior of our approximations.

A Case where δ_λ Cannot be too Small
We first consider an example where the assumption (3.5) of a power-like lower bound on α_ν is relaxed. This example shows that one cannot hope for a universal choice of δ_λ; in particular, δ_λ = λ^{-1/2} does not guarantee convergence to the double argmin set if α_ν happens to be too small near 0.
δ^3 + e^{−λ^{3/4} δ} ‖f‖_∞. Since the right-hand side goes to 0 as λ → +∞, we deduce that there exists a neighbourhood of 0 which has zero measure for any weak cluster point of ν_λ; in particular, none of these cluster points can concentrate on {0}. In an example like this one, one typically has θ(t) ∼ −1/t^3, so applying the cooking recipe of Lemma 4.1 one finds t_λ ∼ λ^{−1/4}, and weights δ_λ which guarantee convergence are δ_λ = λ^{−γ} with γ ∈ (0, 1/4) or δ_λ = λ^{−1/4} log(λ).

A Case where δ_λ Cannot be Power-like

We now consider a variant of the previous example where any power choice for δ_λ fails to guarantee convergence of the two-scale measure to the double argmin set. Since γ < (1 + γ)/2, we reach the conclusion that ν_λ([−δ, δ]) tends to 0 as λ → ∞, ruling out the convergence of ν_λ to the Dirac mass at 0 (recall that {0} is the double argmin set in this example). In other words, no power-like secondary weight gives convergence. But using Lemma 4.1 and Remark 4.4, the (very slowly decaying!) weight δ_λ = log(log(log(λ)))/log(log(λ)) ensures the desired convergence. Of course, one may think that it is crazy to use such a pathological reference measure, but a rough behavior of u, v or of the boundary of the set Y in higher dimensions may generate similar pathologies as well.

A Case Where the Pessimistic Value is not lsc

The following explicit example illustrates the convergence of the approximations in a case where the pessimistic value is not lsc. Since ϕ and ψ are Lipschitz and ν is the Lebesgue measure, any choice of δ_λ of the form δ_λ = λ^{−γ} with γ ∈ (0, 1) ensures the validity of our convergence result. In the optimistic case, the leader minimizes the optimistic value x + y = x, and the solution is 0. The approximation scheme proposed in the paper can be explicitly computed: for a given λ > 0, consider the corresponding explicit formula for ϕ_λ. In the pessimistic case, the leader minimizes the pessimistic value, which is not lsc at 0, and the infimum of ϕ^*, which is 0, is not achieved. Note that the lsc envelope of ϕ^* coincides with the optimistic value ϕ_*. In this case, for a given λ > 0, consider the analogous formula for ϕ^+_λ. We know that ϕ^+_λ converges pointwise to ϕ^* and Γ-converges to ϕ̄^* as λ → +∞. The pointwise convergence is of course slower near 0, and we have tested various exponents for δ_λ (the square root as above, but also γ = 1/4, for which the convergence is even slower, and γ = 3/4, which seems to give more accurate approximations) (Figs. 1, 2, 3 and 4).

The Choice of δ_λ is Critical in Practice

Even in the case where argmin ψ(x, ·) is a singleton for every x ∈ X, so that optimistic and pessimistic solutions coincide, the optimistic and pessimistic λ-approximations converge, with different convergence speeds, to the common value ϕ_* = ϕ^*, as shown in the next example, in which X is two-dimensional and for which the choice of δ_λ seems to be crucial for practical convergence. The Stackelberg solution is x̄ = (0, 0) and ȳ = 0. Since ϕ and ψ are Lipschitz, we can choose any power function for δ_λ, δ_λ = λ^{−γ}. Our approximation is given by

ϕ_λ(x) = ( ∫_Y ϕ(x, y) e^{−λψ(x,y) − λ^{1−γ} ϕ(x,y)} dy ) / ( ∫_Y e^{−λψ(x,y) − λ^{1−γ} ϕ(x,y)} dy )

(the pessimistic approximation ϕ^+_λ is given by a similar formula, just changing the sign of the term involving ϕ), which converges as λ → +∞ to ϕ_* = ϕ^*, which achieves its minimum at (0, 0). We illustrate the convergence with various exponents and values of λ. The convergence turns out to be very bad for γ = 1/2 but very good for γ close to 1, as in the case γ = 9/10 (Figs. 5, 6, 7, 8, 9 and 10).
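As a hedged numerical companion to these examples, the sketch below evaluates the Gibbs approximation ϕ_λ displayed above by quadrature on a toy problem of our own devising (X = Y = [−1, 1], ν = Lebesgue, ψ(x, y) = (y − x)^2, ϕ(x, y) = (x − 0.3)^2 + y, so that ϕ_*(x) = (x − 0.3)^2 + x with minimizer x = −0.2) and probes the sensitivity to the exponent γ in δ_λ = λ^{−γ}. It is an illustration under these assumptions, not a reproduction of the paper's experiments.

import numpy as np

ygrid = np.linspace(-1.0, 1.0, 4001)
psi = lambda x, y: (y - x) ** 2                 # follower's cost (toy choice)
phi = lambda x, y: (x - 0.3) ** 2 + y           # leader's cost (toy choice)
phi_star = lambda x: (x - 0.3) ** 2 + x         # optimistic value, y*(x) = x

def phi_lam(x, lam, gamma):
    # Gibbs approximation with secondary weight delta_lam = lam**(-gamma),
    # i.e. exponent -lam*psi - lam**(1-gamma)*phi, as in the formula above.
    e = -lam * psi(x, ygrid) - lam ** (1.0 - gamma) * phi(x, ygrid)
    w = np.exp(e - e.max())                     # stabilized exponentials
    return np.trapz(phi(x, ygrid) * w, ygrid) / np.trapz(w, ygrid)

# sensitivity to gamma at fixed lambda, evaluated at the true minimizer -0.2
lam = 1e4
for gamma in (0.25, 0.5, 0.75, 0.9):
    err = abs(phi_lam(-0.2, lam, gamma) - phi_star(-0.2))
    print(f"gamma = {gamma:4}: |phi_lam - phi_*| = {err:.5f}")

# minimizing phi_lam on a grid recovers the optimistic solution x = -0.2
xs = np.linspace(-1.0, 1.0, 401)
print("argmin phi_lam ~", xs[np.argmin([phi_lam(x, lam, 0.9) for x in xs])])

On this toy problem, the error at fixed λ shrinks as γ grows toward 1, which echoes the observation above that γ = 9/10 behaves much better than γ = 1/2 in practice.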
6,580
2019-10-01T00:00:00.000
[ "Computer Science" ]
Critical Cosmology in Higher Order Gravity

We construct higher order terms of curvatures in Lagrangians of the scale factor for the Friedmann-Lemaître-Robertson-Walker universe which are linear in the second derivative of the scale factor with respect to cosmic time. It is shown that they are composed of the Lovelock tensors at the first step; iterative construction yields arbitrarily high order terms. The relation to former work on higher order gravity is discussed. Despite the absence of scalar degrees of freedom in cosmological models derived from our Lagrangian, it is shown that inflationary behavior of the scale factor can be found. The application to thick brane solutions is also studied.

I. INTRODUCTION

It is well known that higher-derivative gravity has a scalar degree of freedom in general [1-3]. In cosmological models of higher-derivative gravity, the scalar mode is expected to play an important role [4-6]. On the other hand, some cases are also known in which higher order terms in curvatures in a gravitational action do not affect the cosmological evolution of the scale factor. For example, it is known that terms consisting of contractions of Weyl tensors in a gravitational Lagrangian do not change the evolution equations for the scale factor in a model with homogeneous and isotropic space. Other special combinations of curvatures are also known. In a specific dimension, the Euler form as a Lagrangian does not produce any dynamics of gravity at all, because the action becomes a topological quantity in that case. The dimensionally continued Euler densities have also been studied [7-13], because of their relation to the effective Lagrangian of string theory, and are found to give no scalar mode, since the second derivative of the metric disappears from the action upon integration by parts. The absence of scalar modes is interesting for studying black holes in the theory, because scalar modes lead, in general, to singularities which spoil the expected horizons. In recent years, it turned out that there is a special case in which a scalar mode disappears in higher-derivative gravity. Originally, this fact was found in research on a three-dimensional theory of massive gravity [14], and an extended version in four dimensions was proposed [15]. The authors of those papers intended to study the renormalizability and unitarity of gravitation theory in a maximally symmetric spacetime. Thus, the absence of a massive scalar mode is at least a necessary condition of such theories, referred to as critical gravity. Until now, however, only cases with a limited number of curvature tensors have been investigated in higher dimensions [16,17]. We are interested in higher order theories of gravitation in which a scalar mode does not appear in general higher dimensions. In the present paper, we generalize the structure of the Lagrangian of critical gravity to models with higher order terms in curvature tensors in higher dimensions. We show that such extensions can be attained by use of the Lovelock tensors. In order to offer a systematic way to construct the required higher order terms, we take an explanatory approach by assuming the Friedmann-Lemaître-Robertson-Walker (FLRW) metric. In this approach, the absence of second derivatives of the scale factor from the Lagrangian, with appropriate total derivatives, is considered as a necessary condition for the disappearance of scalar modes.
It should be noted that the combination defined in D-dimensional spacetime is used in critical gravities in three and four dimensions [14,15]. This term can be considered as the trace of the product of the Einstein tensor with a linear combination of the Einstein tensor and its trace part, where G_{μν} = R_{μν} − (1/2) R g_{μν} is the Einstein tensor. Incidentally, R_{μν} − R g_{μν}/(2(D−1)) is known as the Schouten tensor, up to a factor (D−2)^{−1}. Now, if the FLRW metric is assumed, the time-time component of the Einstein tensor does not include the second time-derivative of the scale factor. The Schouten tensor appearing in the above term is constructed so that its spatial components contain no second time-derivative of the scale factor. Therefore, the trace of the product of the two tensors is linear in the second time-derivative of the scale factor, and if a surface term is suitably assigned, the Lagrangian is expressed only in terms of the scale factor and its first time-derivative. Thus, additional scalar modes do not appear. From this observation, we find that the construction can be extended by using the Lovelock tensor instead of the Einstein tensor when the dimension of spacetime is higher. In the present paper, we do not analyze the massive tensor modes in our models; thus, the genuine criticality as quantum gravity is left for future work. The FLRW geometry is known to be conformally flat [18], i.e., the Weyl tensor for the FLRW cosmological metrics vanishes. An extension of Lovelock gravity for conformally flat geometry was considered by Meissner and Olechowski [19]. They showed that the extension is possible for an (R)^n term in D dimensions provided n < D, whereas Lovelock gravity has (R)^n terms only for 2n < D (where (R)^n denotes a term of n-th order in a general curvature tensor). Oliva and Ray also constructed higher-derivative gravity with second order equations, utilizing the Weyl and Riemann tensors [20]. Their Lagrangian involves terms up to (R)^n with 2n < D because of the use of the generalized Kronecker delta, as in Lovelock gravity. In the present paper, we show that it is possible to continue the higher-curvature terms beyond the number of dimensions. The outline of this paper is as follows. In §2, we construct the candidate Lagrangian for cosmological models without scalar modes in tensorial form as the first step. The property of the model Lagrangian is confirmed in §3 by substituting the FLRW metric. In §4, we show that an extension to still higher order terms in curvatures can be obtained. In §5, using the higher order Lagrangians so obtained, we propose (toy) models in which the scale factor shows inflationary behavior. In §6, the application of our Lagrangian to constructing domain wall solutions is studied. In the last section, we offer some concluding remarks.

II. HIGHER ORDER TERM IN CRITICAL GRAVITY

In this section, we construct the higher order term in curvatures by generalizing that of critical gravity. We will verify the absence of scalar modes in cosmological models with these terms in the next section. First, we introduce the dimensionally continued Euler density

L^{(n)} = 2^{−n} δ^{σ₁τ₁···σ_nτ_n}_{λ₁ρ₁···λ_nρ_n} R^{λ₁ρ₁}_{σ₁τ₁} ··· R^{λ_nρ_n}_{σ_nτ_n},   (2.1)

where the generalized Kronecker delta is defined in (2.2). The dimensionally continued Euler density L^{(n)} consists of terms of n-th order in the curvature tensors ((R)^n). For example, for n = 1 we find the Einstein-Hilbert term, and for n = 2 we find the Gauss-Bonnet term, as is well known.
Here, we construct the new combination of the Lovelock tensor and the metric multiplied by the trace of the Lovelock tensor; that is, we define the generalized Schouten tensor S^{(n)μ}_ν, as in (2.10). For example, for n = 1, we obtain a tensor proportional to the Schouten tensor. Therefore, we obtain the combination which appears in critical gravities [14,15]. Now, we find that the natural generalization of this is given by L^{(n)(n′)}. It is worth pointing out that the expression is symmetric under the exchange of n and n′. In the next section, we confirm that this combination is suitable for an extension of critical gravity in higher dimensions, by utilizing the FLRW metric.

III. HIGHER ORDER TERM FOR FLRW METRIC

We consider the FLRW metric in D dimensions, ds² = −dt² + a(t)² dΩ²_{D−1}, where a(t) is the scale factor and dΩ²_{D−1} denotes the line element of a maximally symmetric space of (D−1) dimensions, whose scalar curvature is normalized in terms of the curvature constant k. First in this section, we examine the Lovelock tensors of order (R)^n. By explicit calculation of the curvatures, we find the components for n = 1; here and hereafter, we use the suffixes i, j = 1, 2, …, D−1 to denote the spatial dimensions. Also, for n = 2, we obtain the corresponding expressions. By the combinatorial property of the generalized Kronecker delta, we can find the Lovelock tensor for general n, with G^{(n)0}_j = G^{(n)i}_0 = 0. It should be noted that the 00 component of the Lovelock tensor does not include ä. Next, we calculate the generalized Schouten tensor S^{(n)μ}_ν for the FLRW metric, using the trace of the Lovelock tensor. Because this combination is manifestly linear in ä, the action including this term can be expressed as a functional of a and ȧ alone by means of integration by parts. Therefore, we conclude that there is no scalar mode in a cosmological setting. The combination L^{(n)(n′)} ≡ −4 G^{(n)}_{μν} S^{(n′)μν} can exist only for a limited set of pairs (n, n′), which depends on D. By the exchange symmetry, we may assume n ≤ n′; we then find the explicit form of L^{(n)(n′)}. At this point we become aware of a similarity to the Lovelock Lagrangian, which for the FLRW metric takes an analogous form. The equivalence up to an overall constant is obvious; that is, L^{(n)(n′)} ∝ L^{(n+n′)}. Incidentally, we can consider L^{(0)(n)} as the Lovelock Lagrangian L^{(n)}. We note, however, that the allowed spacetime dimension is the same for odd m, m < D. Therefore, our Lagrangian and theirs are almost equivalent. The variety with respect to the two integers in L^{(n)(n′)} is due to the use of Riemann tensors, as well as scalar curvatures and Ricci tensors, in our approach. It is notable that differences may occur if we consider black hole or non-conformally-flat solutions in theories governed by Lagrangians with such higher order terms. Later, we show that the restriction by the dimensions can be overcome. To make this discussion explicit, we examine the cosmological action of the present model again in the next section.

IV. FURTHER EXTENSION TO HIGHER ORDER IN CURVATURE (ESPECIALLY FOR m > D)

We consider the action S^{(m)} for m ≥ 2, built from the terms L^{(n)(m−n)} with arbitrary coefficients α_{(n)(m−n)}. Here, we regard L^{(0)(n)} as L^{(n)}.
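Before computing the action, here is a short symbolic aside (our own script, not part of the paper) verifying the §III statement that for the flat FLRW metric in D = 4 the 00 component of the Einstein tensor, i.e. the n = 1 Lovelock tensor, contains no second derivative of the scale factor, while the spatial components are linear in ä.

import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)
coords = [t, x, y, z]
g = sp.diag(-1, a**2, a**2, a**2)          # ds^2 = -dt^2 + a(t)^2 dx_i dx^i
ginv = g.inv()
dim = 4

def christoffel(l, m, k):
    # Gamma^l_{mk} = (1/2) g^{ls} (d_k g_{sm} + d_m g_{sk} - d_s g_{mk})
    return sp.Rational(1, 2) * sum(
        ginv[l, s] * (sp.diff(g[s, m], coords[k]) +
                      sp.diff(g[s, k], coords[m]) -
                      sp.diff(g[m, k], coords[s])) for s in range(dim))

Gamma = [[[sp.simplify(christoffel(l, m, k)) for k in range(dim)]
          for m in range(dim)] for l in range(dim)]

def ricci(m, k):
    # R_{mk} = d_l Gamma^l_{mk} - d_k Gamma^l_{ml} + Gamma Gamma terms
    return sp.simplify(sum(
        sp.diff(Gamma[l][m][k], coords[l]) - sp.diff(Gamma[l][m][l], coords[k]) +
        sum(Gamma[l][l][s] * Gamma[s][m][k] -
            Gamma[l][k][s] * Gamma[s][m][l] for s in range(dim))
        for l in range(dim)))

Ric = sp.Matrix(dim, dim, lambda m, k: ricci(m, k))
Rs = sp.simplify(sum(ginv[m, k] * Ric[m, k] for m in range(dim) for k in range(dim)))
G = sp.simplify(Ric - sp.Rational(1, 2) * Rs * g)   # Einstein tensor G_{mu nu}

print(sp.simplify(G[0, 0]))   # 3*Derivative(a, t)**2/a**2 : no a'' appears
print(sp.simplify(G[1, 1]))   # contains a''(t): spatial part is linear in a''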
Then, the action for the scale factor a(t) can be read off. Using a binomial expansion of the integrand, we rewrite the action, after partial integration, in a form involving only a and ȧ. Especially for k = 0, we find the simple action (4.6). Now, we define the Lagrangian L^{(m)}(a, ȧ) for the scale factor a(t) accordingly. Then the equation of motion for the scale factor is the Euler-Lagrange equation, (d/dt)(∂L/∂ȧ) − ∂L/∂a = 0. The Hamiltonian constraint is regarded as the variation equation ∂L/∂N = 0, where N is the lapse function defined by dt = N(t′)dt′; we set N = 1 after the manipulation. We thus find the Hamiltonian constraint equation. It is known that the variation of the lapse function corresponds to the variation of g_{00}, and the variation of the scale factor corresponds to the variation of g_{ii}, up to certain factors. From (4.9) and (4.8), we can find the generalized Lovelock tensor as the variation of the action with respect to the metric. The generalized Lovelock tensor has a tensorial form proportional, as seen from the construction, to the combination introduced above, but with arbitrary coefficients in the definition of the action S^{(m)}. In spite of this arbitrariness, the functional form of the Lagrangian is unambiguous once the conformally flat metric is substituted. In a similar manner, the corresponding generalized Schouten tensor is defined, with Ḡ^{(m)} ≡ g_{μν} Ḡ^{(m)μν}, and the trace of the product of these tensors can then be evaluated. Unfortunately, only for D = 4 our iterative method cannot give an (R)³ term, since the (R)² term becomes a total derivative for the conformally flat metric; in that case, we define the expression for L^{(3)} in the manner of Meissner and Olechowski. Another way to cross the dimensional limitation, also inspired by the work of Meissner and Olechowski [19], can be found: in the manner of their paper, a Lagrangian (modulo the volume form) is proposed, expressed here in our present notation, and we become aware of an extension of it in which m_i (i = 1, …, n) are arbitrary integers. For the FLRW metric, this term turns out to be proportional to 2ℓ ä/a, where ℓ = Σ_{i=1}^{n} m_i. This is also of the same form as the candidate Lagrangian. Many equivalent combinations exist for the higher order terms under the vanishing-Weyl-tensor condition. In the next section and after, we turn our attention to applying the higher order Lagrangians obtained here to cosmological models.

V. INFLATIONARY COSMOLOGICAL MODELS

In this section, we investigate a possible inflationary stage in the evolution of the universe in models with the higher order terms discussed in the previous sections. Let us consider the general cosmological action, with an appropriate total derivative term with respect to time which removes the second derivative of the scale factor. As usual, the energy-momentum tensor of matter is derived from the matter action S_matter. Its non-zero components are assumed to be T^0_0 = −ρ and T^i_j = p δ^i_j, where ρ is the energy density and p is the pressure of the FLRW universe. As seen in the previous sections, the equation of motion derived from the action (5.1) coincides in functional form with a linear combination of the components of the Lovelock tensors under the assumption of the FLRW metric. Thus, the energy conservation law ρ̇ + (D − 1)(ȧ/a)(ρ + p) = 0 holds generally. This fact is due to the absence of the scalar mode and to the fact that we did not need to rescale the metric. We here give a few models in four-dimensional spacetime, focusing on the possibility of inflationary growth of the scale factor.
Furthermore, we take k = 0, i.e., we assume flat space. Then, the action (5.1) takes an equivalent reduced form, and the 00 component of the equation of motion can be read off. If we can choose the coefficients β_ℓ freely, almost arbitrary equations relating ρ and H ≡ ȧ/a can be produced; that is, f(H²) = (8πG/3)ρ, where f(x) is a function which can be expressed as a series and does not include the x² term. The factor on the right-hand side is chosen for similarity with standard cosmology. We wish to call the cosmology of this model 'f(H²) cosmology'. Now, we specify the function f(x). For example, let us consider a correction characterized by a typical mass scale M. Solving the equation for H², we obtain the modified Friedmann relation. If the energy density is sufficiently low, ρ ≪ G⁻¹M², the relation of standard cosmology is valid: H² ≈ (8πG/3)ρ. On the other hand, in the era of high energy density, ρ ≫ G⁻¹M², we find that H² approaches a constant of order M², and we obtain an approximate de Sitter inflation. Another model can be chosen, in which the correction is a monomial of the higher order term. In the rapid-expansion phase, H² ≫ M², the correction is dominant. Further, if we assume dust matter, i.e., ρ ∝ a⁻³, we obtain a phase of power-law inflation. Even though we show only special toy models here, we find that the existence of higher order terms yields inflationary growth of the scale factor with ordinary matter, with no scalar mode and no redefinition of the metric.

VI. DOMAIN WALL

We shall apply the previous discussion on a conformally flat metric to solutions for domain walls ("thick branes"). The D-dimensional metric suitable for a codimension-one domain wall is given by (6.1), with warp function A(y), where x = (x¹, x², ···, x^{D−2}). In addition, we consider a neutral, minimally coupled, self-interacting scalar field as a classical matter field. It is known that the superpotential method [21-25] is available in this case to find BPS kink equations. Let us investigate the case with higher order terms in a similar manner. Substituting the metric (6.1), the gravitational Lagrangian takes a form in which the prime (′) denotes a derivative with respect to y. The term in A′′ can be removed by discarding a total divergence; thus we obtain (6.3). The action for the real scalar field φ(y) for static branes can then be written down. Since the coefficients β_ℓ can be arbitrarily chosen, we use an arbitrary function F(x) to represent the general action, and we can write the total action accordingly. The field equations can now be derived in the usual manner. The differential equation for the scalar follows, where V_φ = dV/dφ. The equation for A is obtained, where F_x = dF(x)/dx and F_xx = d²F(x)/dx². The reparametrization invariance of y leads to a first integral. Note that h(x), like F(x), does not possess the x^{D/2} term for even D. Using these, the equations can be rewritten as (6.9) and

−h(A′²) + (1/2)(φ′)² − V(φ) = 0.   (6.10)

Now, we take the BPS ansatz [21-26]

A′ = −W(φ).   (6.11)

Then, the equations reduce to the first order equation (6.12) and to an expression of the potential in terms of W(φ). If the function h(x) is a polynomial of at most quadratic order, a domain wall solution exists, for the potential V(φ) then has two minima [21-26]; the solution can be expressed as the kink solution of φ⁴ theory. We find here that potentials with many vacua correspond naturally to the general higher order gravity. For example, we try to reproduce the sine-Gordon equation. We take W(φ) = Bφ, with a positive constant B.
Further, we choose with constants α and C. This choice is possible if the spacetime dimension D is odd. Then the scalar obeys the sine-Gordon equation and an exact static solution is known as Then, the potential takes the form: The minima of the potential are located at αBφ/2 = π/2 ± nπ (n = 0, 1, 2, . . .). Although it is difficult to obtain exact solutions of other types, we can suppose multiple domain walls with distinct topological numbers in this and similar models. The model with many vacua may also serve an interesting mechanism to realize naturally a small cosmological constant in (thick) brane world. This possibility will be studied in future. VII. SUMMARY AND CONCLUSION In the present paper, we have attempted to show the possible quasi-linear second-order theory of gravity in conformally flat spacetimes. Models with arbitrary higher order of curvatures have been obtained. As long as we adopt an isotropic and homogeneous cosmological setting, the energy conservation holds in the models because there is no scalar mode and no requirement of frame rescaling. In spite of them, inflationary expansion can be found in the models. We have also found that the domain wall solution in the present type of the higher order gravity can be obtained naturally with the potential having many minima. Our work corresponds to the extension of the Lovelock higher-curvature gravity in arbitrarily higher order terms in higher dimensions. Our analysis, however, has been limited for the case with the conformally flat spacetime and the coefficient on the tensorial form of the action has been still ambiguous. The stability of the solutions obtained here is problematic for anisotropic perturbations or tensor modes. To study it, we should classify the tensorial form of the Lagrangian which is equvalent only if the Weyl tensor vanishes. It is known that the dimensionally continued Euler forms have the property of factorization in terms of those of lower orders when the spacetime is represented by the direct product of spaces [10]. Thus, compactification should be worth studying to seek some special combination of curvature tensors. It is expected that the hopeful 'critical' relation among the coefficients of different orders of curvatures will be selected by consideration on various background spacetimes. Nevertheless, we emphasize that our arbitrarilty high order gravity can be applied to many models in various contexts. The problem of initial singularity can be reconsidered by studying classical bouncing universes in our model. On the other hand, the Wheeler-DeWitt equation of the Lagrangian should lead to higher-derivative quantum cosmology. The equation must be difficult to treat with, but the study on it may shed new light on quantum gravity. The possible black hole solutions are interesting in both cases of asymptotically flat and asymptotically AdS spacetime. We shall return to some of the problems in future.
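As a closing numerical illustration of the f(H²) cosmology of §V: the paper's explicit correction is not reproduced above, so the sketch below assumes, purely for illustration, f(x) = x / (1 − x²/M⁴), which has no x² term in its series expansion and diverges as x → M², so that H² saturates at M² (a de Sitter phase) at high density. All numbers and the functional form are our assumptions.

import numpy as np
from scipy.optimize import brentq

G, M = 1.0, 1.0                              # units with G = M = 1 (hypothetical)
f = lambda x: x / (1.0 - (x / M**2) ** 2)    # assumed illustrative f, no x^2 term

def H2_of_rho(rho):
    # solve f(H^2) = (8 pi G / 3) rho for H^2 in the bracket (0, M^2)
    rhs = 8.0 * np.pi * G * rho / 3.0
    return brentq(lambda x: f(x) - rhs, 0.0, M**2 * (1.0 - 1e-12))

for rho in (1e-4, 1e0, 1e4, 1e8):
    print(f"rho = {rho:8.1e}   H^2 = {H2_of_rho(rho):.6f}")
# For rho << M^2/G this reproduces H^2 ~ (8 pi G/3) rho (standard cosmology);
# for rho >> M^2/G, H^2 saturates at M^2, an approximately de Sitter stage.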
4,946.2
2012-02-11T00:00:00.000
[ "Physics" ]
An enhanced two-dimensional hole gas (2DHG) C–H diamond with positive surface charge model for advanced normally-off MOSFET devices

Though complementary power field-effect transistors (FETs), e.g., metal-oxide-semiconductor FETs (MOSFETs) based on wide-bandgap materials, enable low switching losses and on-resistance, p-channel FETs are not feasible in any wide-bandgap material other than diamond. In this paper, we present the first investigation of the impact of a fixed positive surface charge density on achieving normally-off operation and controlling the threshold voltage of a p-channel two-dimensional hole gas (2DHG) hydrogen-terminated (C-H) diamond FET using nitrogen doping in the diamond substrate. In general, a p-channel diamond MOSFET demonstrates normally-on operation, but normally-off operation is a critical requirement of feasible electronic power devices in terms of safe operation. The characteristics of the C-H diamond MOSFET have been analyzed with the two demonstrated charge sheet models using the two-dimensional Silvaco Atlas TCAD. It is shown that the Fermi level in the bulk diamond is fixed at 1.7 eV (the donor level) from the conduction band minimum. Moreover, upward band bending is obtained at the Al2O3/SiO2/C-H diamond interface, indicating the presence of an inversion layer without gate voltage. The fixed negative charge model exhibits a strong inversion layer, giving normally-on FET operation, while the fixed positive charge model shows a weak inversion, giving normally-off operation. The maximum current density of the fixed positive interface charge model of the Al2O3/C-H diamond device is −290 mA/mm, which corresponds to the experimental result of −305 mA/mm for the Al2O3/SiO2/C-H diamond at a gate-source voltage of −40 V. Also, the threshold voltage V_th is relatively high at V_th = −3.5 V, i.e., the positive charge model can reproduce the normally-off operation. Moreover, we demonstrate that the V_th and transconductance g_m correspond to those of the experimental work.

Hydrogen-terminated (C-H) diamond is suitable for electronic device applications owing to its surface stability 3. In the case of MOSFETs, the hydrogen termination can effectively induce the conductive channel with the interface charge (fixed charge) at the device surface. This characteristic makes C-H diamond with p-channel conduction an emerging research topic for the development of feasible high-power/high-frequency devices, including high-power FETs for applications such as inverter systems 4. The positively charged hydrogen atoms of the surface C-H dipoles facilitate the adsorption of negatively charged adsorbates, which are attracted to the diamond surface from the atmosphere 5. The surface negative charge sheet induces the 2DHG layer, which is located near the interface with a high hole density of around 10^13 cm^-2 (10^20 cm^-3 near the surface) [6-8]. In contrast, when the surface of the diamond is terminated by oxygen, the surface conduction originating from the 2DHG disappears. When the crystal structure ends, unsatisfied bonds, called "dangling bonds", appear and the surface energy increases 9. As high surface energy is not desired, this excess surface energy is decreased by terminating the dangling bonds with H atoms. The negative electron affinity of diamond, at −1.3 eV, appears after H-termination, arising from the C-H dipoles 10. This distinctive property has a strong relationship with the chemisorbed species on the C-H diamond surface 11.
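A quick consistency check of the two densities quoted above (our own arithmetic, with an assumed confinement depth): a 2DHG sheet density of ~10^13 cm^-2 confined within roughly 1 nm of the surface corresponds to a volume density of ~10^20 cm^-3.

# sheet density -> volume density, assuming ~1 nm channel confinement
sheet_density = 1e13            # cm^-2, quoted 2DHG sheet density
channel_depth = 1e-7            # cm (~1 nm, assumed confinement depth)
print(f"{sheet_density / channel_depth:.1e} cm^-3")   # -> 1.0e+20 cm^-3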
Up to now, our research team has successfully reproduced the characteristics of C-H diamond FETs using the two-dimensional (2D) negative charge sheet model 3,4 without relying on the 2D acceptor model 12. Typically, these negatively charged sites act as scattering centers for carrier (hole) transport near the C-H surface 4. Also, the 2DHG layer can be obtained on the C-H diamond surface using a negative interface charge sheet to establish the depletion mode, called normally-on. The C-H diamond MOSFET device usually shows normally-on operation in this context, as identified and analysed by device simulation 4,13. However, normally-off operation is required for electronic power devices to ensure electric system protection from the perspective of safety and device feasibility. Kitabayashi et al. 14 achieved normally-off operation of a C-H diamond MOSFET with a partially oxidized channel under the gate. Using nitrogen ion implantation, a device can exhibit satisfactory normally-off operation depending on the nitrogen concentration 15. Saito et al. 16 achieved normally-off operation of high-voltage AlGaN/GaN high-electron-mobility transistors (HEMTs) for power electronic applications by reducing the 2DEG density. In addition, Liu et al. 17 confirmed normally-off device operation using HfO2-gate MOSFETs. Fei et al. 18 fabricated two kinds of diamond MOSFET electronic devices with an oxidized, Si-terminated (C-Si) diamond channel; in that study, the contact areas (source/drain) were either undoped or heavily boron-doped, and both MOSFET devices exhibited normally-off FET characteristics. However, no mechanism for the normally-off result has been reported. Here, we propose a positive interface charge model for the normally-off operation of C-H diamond FETs. Nitrogen is a deep donor whose level is 1.7 eV from the conduction band minimum in diamond 19. To achieve the enhancement mode, i.e., normally-off operation, we simulated a positive charge sheet inserted at the Al2O3/C-H diamond interface. We investigated this model for controlling the threshold voltage (V_th) to achieve normally-off operation; without an applied interface charge, V_th becomes almost zero. Also, we take the fixed positive charge as low as possible (1 × 10^11 cm^-2) and use nitrogen (donor) at a concentration of around 10^16 cm^-3 and boron (acceptor) at a concentration of around 10^15 cm^-3. Typically, nitrogen coexists with boron in the same crystal, because diamond cannot be doped with nitrogen only 20. In this work, the normally-off C-H diamond MOSFET has been investigated using a Fermi level fixed in the bulk and a positive interface charge model. When the nitrogen concentration is higher than the boron concentration, nitrogen atoms, as donors with an activation energy of 1.7 eV, fix the Fermi level at that energy. This technique is used to obtain a largely negative value of V_th, indicating normally-off operation of the diamond MOSFET devices under specific conditions, e.g., a fixed positive charge of SiO2 close to the surface and a Fermi level fixed by the deep donor level in the bulk.

Results

Analysis via experimental work. In this work, the DC operation of the 2DHG diamond MOSFET (001) device is characterized via the fabrication of a 2DHG diamond MOSFET using a SiO2 layer (2 nm) located under the gate, confirming normally-off operation (V_th < 0).
Figure 1a shows the cross-section of the MOSFET structure with a SiO2 layer, in which the source-gate distance is L_SG = 2 μm, the gate length is L_G = 4 μm and the gate-drain distance is L_GD = 2 μm. We used a low boron (acceptor) concentration of 2 × 10^15 cm^-3 and a nitrogen (donor) concentration of 2 × 10^16 cm^-3 in the (001) substrate. The fundamental operating mechanism of this structure is that the SiO2, considered as a source of positive charge, prevents holes from accumulating near the interface by cancelling the negative charge at the interface, i.e., achieving normally-off operation by shifting V_th to a negative value. The 2DHG is induced by the negatively charged sites of Al2O3, except for the channel under the SiO2 layer, where it is reduced by the positively charged SiO2 as mentioned. Figure 1b shows the V_th distribution of the Al2O3/SiO2 diamond MOSFET devices. The V_th range between 1 V and −5 V over the 32 devices indicates that in almost all cases no normally-on operation was observed, with one exception. The result confirms that normally-off operation was achieved in the Al2O3/SiO2 diamond devices. The MOSFET device with a SiO2 layer exhibits normally-off operation at V_th = −3.5 V, which is suitable for power device applications, as shown in Fig. 1c. The V_th value is determined as the gate voltage at which the drain current decreases by 6 orders of magnitude from the maximum drain current. The maximum drain current density is I_DS^MAX = −305.0 mA/mm at a drain voltage of V_DS = −30 V and a gate voltage of V_GS = −40 V, as illustrated in Fig. 1d. The drain current density distribution of the actual MOSFET devices (33 samples) shows that all devices achieved a high drain current density, as shown in Fig. 1e. Also, a breakdown voltage of 1275 V was achieved in the Al2O3/SiO2 diamond MOSFET device with a gate-drain distance of L_GD = 20 μm (Fig. 1f). Overall, we fabricated 2DHG Al2O3/SiO2 diamond MOSFETs and revealed that normally-off operation can be obtained without deteriorating the drain current density. This outcome enables the analysis of MOSFET operation by device simulation using a fixed positive interface charge sheet model.

Analysis via simulations. The DC operation of the 2DHG C-H diamond MOSFET is simulated with various interface charge sheet models. Normally-on operation is reproduced by the negative interface charge sheet model, whereas the positive interface charge sheet model with a deep donor is dedicated to achieving the normally-off operation corresponding to the experimental work illustrated in this paper. The third model is a neutral interface charge sheet model, which offers the possibility of controlling the hole mobility because there is no ion scattering (ion scattering being described as a Coulomb interaction of two particles). We considered the C-H diamond MOSFET, as depicted in Fig. 2a, with a gate length of L_G = 4 μm, a gate width of W_G = 25 μm, a passivation oxide of ALD-Al2O3 with a thickness of t_ox = 200 nm, a source-drain distance (channel length) of L_SD = 4 μm, and a C-H diamond substrate with a thickness of 4 μm and a doping thickness of 4 μm.

Fixed Fermi level position and band diagram. Nitrogen doping with an activation energy of 1.7 eV, at low concentration in the diamond substrate (bulk), is required to fix the position of the Fermi level close to the conduction band of the p-channel C-H diamond.
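(As a brief aside before continuing: the V_th extraction rule quoted above, six decades below the current maximum, is easy to state in code. The transfer curve below is a synthetic exponential-subthreshold model with assumed parameters, not the measured Al2O3/SiO2/C-H diamond data.)

import numpy as np

vgs = np.linspace(-40.0, 5.0, 4501)          # p-channel gate sweep (V)
i_max, ss, v_on = 305e-3, 0.5, -6.5          # A/mm, V/decade, onset voltage (assumed)
ids = i_max * np.minimum(1.0, 10.0 ** (-(vgs - v_on) / ss))

def extract_vth(vgs, ids, decades=6.0):
    # V_th = gate bias where |I_DS| sits `decades` below its maximum
    logi = np.log10(ids / ids.max())         # 0 at the maximum, negative in subthreshold
    return np.interp(-decades, logi[::-1], vgs[::-1])   # interp needs increasing x

print(extract_vth(vgs, ids))                 # -> -3.5 V with these assumed numbers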
Also, using a positive fixed interface charge is another requirement to achieve the normally-off device operation. We calculate the Fermi level position of the C-H diamond MOSFET with a low boron concentration of 2 × 10^15 cm^-3 and a low nitrogen concentration of 2 × 10^16 cm^-3 in the freeze-out region. These concentrations correspond to the real conditions of the experimental work conducted by our laboratory and described in this paper. Collins et al. 20 showed that donors in diamond behave differently from those in other materials, e.g., silicon or germanium. Typically, nitrogen in diamond is hard to ionize because its ground level is exceptionally deep, at 1.7 eV. It is therefore expected that the Fermi level is pinned at 1.7 eV from the conduction band minimum by the nitrogen doping (e.g. 10^17 cm^-3) when the nitrogen density is higher than that of the boron dopant (e.g. 10^16 cm^-3) at the same position. We apply the formula in Eq. (1) to calculate the Fermi position of carriers in the freeze-out region 20, where E_g is the bandgap, E_D is the donor ionization energy (activation energy), k_B is the Boltzmann constant, T is the temperature, and N_d and N_a are the donor and acceptor concentrations, respectively. As shown in Fig. 2b, the Fermi level position is close to the valence band maximum near the interface between Al2O3 and C-H diamond. In the bulk, however, the deep nitrogen donor pins the Fermi level at about 1.7 eV from the conduction band minimum, as calculated from Eq. (1). The reason for Fermi level pinning at this position is that the electrons bound to the deep neutral donors fix the Fermi level at 1.7 eV in the bulk 20. The band diagram near the C-H diamond surface bends upward (upward band bending) until the valence band maximum crosses the Fermi level, indicating that high hole accumulation due to an inversion layer is realized in the negative charge sheet model (Fig. 2b). From our prior study 4, the 2DHG layer is confirmed when the fixed negative interface charge exists not only on the C-H diamond surface but also in the passivation oxide layer Al2O3, which is usually formed by atomic layer deposition (ALD) on the C-H diamond surface. In the negative interface charge model, the bulk Fermi level position is likewise pinned close to 1.7 eV from the conduction band minimum, because the residual nitrogen concentration of 2 × 10^16 cm^-3 is higher than that of boron (2 × 10^15 cm^-3), as shown in Fig. 2b. Near the interface, the band diagram also bends upward in this case, and the valence band maximum crosses the Fermi level at the interface calculated by the negative interface charge model, as shown in Fig. 2b. Figure 2c shows the band bending diagram with a neutral fixed interface charge, where the band bending is weakened compared with that of the negative interface charge model. At the interface, however, the valence band maximum is located 1.8 eV from the Fermi level, indicating the presence of a weak inversion layer. A weak inversion layer still appears even when a positive interface charge sheet exists with an areal density of 1 × 10^11 cm^-2; the valence band maximum is then located 2.1 eV from the Fermi level. As a result, the C-H diamond MOSFET with a positive charge sheet becomes normally-off (enhancement mode), as shown in Fig. 2d. The reason is that the positive interface charge prevents the C-H surface from accumulating positive carriers (holes) to form a channel.
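(Another short aside: the bulk pinning can be checked with a textbook freeze-out expression for a compensated n-type semiconductor, E_C − E_F = E_D − kT ln((N_d − N_a)/(2 N_a)), with a donor degeneracy of 2 assumed. This is our stand-in for the paper's Eq. (1), which is not reproduced here.)

import numpy as np

kT = 8.617e-5 * 300.0          # eV at 300 K
E_D = 1.7                      # nitrogen donor depth below E_C, eV
N_d, N_a = 2e16, 2e15          # cm^-3, as in the simulated device

E_C_minus_E_F = E_D - kT * np.log((N_d - N_a) / (2.0 * N_a))
print(f"E_C - E_F = {E_C_minus_E_F:.3f} eV")   # ~1.66 eV: pinned near the 1.7 eV level
# Doubling N_d barely moves E_F (logarithmic dependence), consistent with the
# observation below that the pinning is insensitive to the exact nitrogen dose.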
Hence, the electron potential at the C-H surface becomes low due to the positive charge, which leads to a high barrier for holes that does not allow carrier flow from source to drain without an applied negative gate voltage. This indicates normally-off operation. We find that the Fermi level position does not change with different nitrogen concentrations as long as the nitrogen concentration exceeds that of boron. Hence, in this work, we focused on the nitrogen concentration to maintain the Fermi level position, which allows V_th to shift to a more negative value for achieving normally-off device operation. The negative shift of V_th is also obtained by hole recombination with the electrons of non-activated (neutral) donors. When holes enter the subsurface channel region doped with nitrogen (neutral donors), they can recombine with the electrons of non-activated donors; after recombination, these become ionized donors, which are positively charged. The presence of positive charge near the interface shifts V_th to more negative values, as mentioned above. With increasing nitrogen concentration, the device requires a higher applied voltage, which shifts the threshold voltage V_th further negative, confirming the enhancement mode. A recent study demonstrated that the shift of V_th to a more negative value with increasing nitrogen concentration corresponds to an increase in trap charge density due to the high nitrogen doping concentration in diamond 21. The operation is completely pinched off at a gate bias of 8 V and saturated in the Ohmic region when the applied negative gate bias is −40 V and the drain voltage is −30 V, with a voltage step of 4 V. This theoretical result shows that we can achieve a high current density as a function of the drain voltage of the MOSFET with nitrogen-doped bulk when the gate bias is negative, i.e., V_GS < 0. Specifically, Fig. 4a illustrates the calculated output I_DS-V_DS characteristic with the positive fixed interface charge, for which the maximum current density is I_DS^Max = −290 mA/mm when the channel length (distance between source and drain) is 4 μm and the overlapping gate length is 4 μm. This result is close to the experimental maximum current density of I_DS^Max = −305 mA/mm (Fig. 4b). The improvement of field-effect mobility is a requirement for confirming the high drain current density 23. Also, the gate threshold voltage is V_th = −3.5 V, as identified from the plot of I_DS-V_GS; i.e., normally-off operation (enhancement mode) is achieved, as shown in Fig. 4c. Figure 4d shows that the transconductance g_m of the device is a constant 0.4 mS/mm when the drain voltage V_DS is −0.5 V. The V_th value is controlled by increased adsorption of positive charge and/or decreased adsorption of negative charge. In addition, the main factor in achieving normally-off operation is the application of the positive interface charge as a mechanism of surface charge effects on the channel conductivity. Deep donor doping of the substrate using nitrogen at a density of 2 × 10^16 cm^-3 was applied to fix the Fermi level position and obtain the inversion channel. Figure 4b shows the I_DS-V_GS characteristic, which reveals that the gate threshold voltage V_th enables the fabrication of the enhancement mode device. This output result (Fig. 4a) of the simulated fixed positive interface charge model corresponds to the actual experimental work (Fig.
4b) using the SiO2 layer located at the interface between the gate insulator Al2O3 and the C-H diamond. In other devices, we also achieved a good fit of V_th and the I_DS-V_DS characteristics between simulated and experimental results. As can be observed from the I_DS-V_DS characteristics of the interface charge modeling using the Atlas TCAD device simulator in Fig. 5a,b, the drain current density I_DS of the C-H diamond MOSFET with the negative interface charge sheet model (−1 × 10^12 cm^-2 and −5 × 10^11 cm^-2) exceeds −331 mA/mm and −314 mA/mm at a drain bias of −30 V, respectively. The evaluation also shows saturation behavior in the ohmic region when the gate bias is greater than 0 V, and pinch-off is observed when V_GS is 8 V. Figure 5c,d shows that V_th is 5 V and 1 V at a drain voltage of −0.5 V, indicating normally-on operation when the negative interface charge is Q_f = −1 × 10^12 cm^-2 and Q_f = −5 × 10^11 cm^-2, respectively. In addition, Fig. 5e,f shows that the transconductance of the device is a constant 0.4 mS/mm at a drain voltage V_DS of −0.5 V. In the neutral charge model, the I_DS-V_DS characteristic exhibits saturation behavior with a maximum drain current density of −294 mA/mm, and the threshold voltage is already zero. Figure 6 shows the simulated output characteristics I_DS-V_DS and V_th in the neutral charge model (Fig. 6a,b, respectively). Figure 6c shows that the transconductance of the device is a constant 0.4 mS/mm at V_DS = −0.5 V. In this case, the absence of ion scattering at the interface offers the possibility of controlling the carriers' mobility; when the hole mobility is increased in the device, we observe a sharp increase in the drain current density saturation 13. In this context, the C-H diamond MOSFET with a positive interface charge meets the performance goal. The significant increase in I_DS is apparent compared to that of the partially nitrogen-ion-implanted MOSFET device when L_G and L_SD have the same values 14. However, the drain current remains in the saturation region even when we increase the gate voltage to a very high value. The interface charge modeling evaluation shows that the drain current behaves more linearly as the gate reverse bias is increased to −40 V; but when the reverse bias is increased beyond −40 V, up to the saturation voltage, the current stops increasing. This means that the conductance reaches its limit owing to the limited hole supply from the source contact, which prevents any further increase of the drain current, as shown in Fig. 7. The decrease in the maximum drain current in the positive interface charge model (compared to that of the negative interface charge model) corresponds to the shift of V_th to a negative value. Also, the main reason for the decreasing drain current density of the normally-off C-H diamond MOSFET is the high resistivity of the channel 14,15. Specifically, we determined that the 2DHG SiO2/diamond device has a transconductance of g_m = 0.89 mS/mm at a drain voltage of −0.5 V, while g_m = 0.4 mS/mm was obtained in the simulation with the positive interface charge model. The simulation work thus confirms that the normally-off operation achieved using a positive interface charge model corresponds to the case of the MOSFET device that used the SiO2 layer and the obtained experimental results.
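The transconductance values quoted above are simply the slope of the transfer curve, g_m = dI_DS/dV_GS. A minimal numerical-differentiation sketch, using a synthetic linear-region p-channel curve with assumed constants (not the simulated Atlas data):

import numpy as np

vgs = np.linspace(0.0, -40.0, 401)           # gate sweep toward negative bias
vth, k = -3.5, 0.4e-3                        # V, S/mm (assumed constants)
ids = -k * np.clip(vth - vgs, 0.0, None)     # |I_DS| grows linearly past V_th

gm = np.gradient(ids, vgs)                   # numerical dI_DS/dV_GS along the sweep
print(f"g_m beyond threshold: {abs(gm[-1]) * 1e3:.2f} mS/mm")   # -> 0.40 mS/mm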
Conclusion

In this research article, we simulate and discuss the characteristics of the MOSFET device with the deep donor (E_D = 1.7 eV) nitrogen at low and medium concentrations (10^16 and 10^17 cm^-3) in the diamond substrate using the 2D drift-diffusion model. The experimental maximum drain current density is −305 mA/mm; the simulation achieves a similar result of −290 mA/mm. These values are the highest among normally-off diamond FETs with complete pinch-off and a saturation region. However, this value is still lower than the current of the negative interface charge model in the saturation region, in which a high gate voltage causes the highest gate-drain resistivity. The gate threshold voltage can be controlled and shifted to the negative value of V_th = −3.5 V when the applied positive interface charge is close to the interface, i.e., normally-off operation (enhancement mode) is achieved. The simulated results correlate with the experimental work using the SiO2 layer located between Al2O3 and C-H diamond. We also show how the surface band can be controlled when substrate nitrogen doping is applied together with the selection of the interface charge. The saturation behavior of the current in this model could be improved by reducing the source resistance using related techniques, e.g., p+-type doping in the contact area. These promising results bring new insight into this research theme and demonstrate that the proposal can facilitate various applications of p-channel diamond MOSFET devices, e.g., complementary power MOSFETs with trench gates as vertical FETs, or smart inverter systems with bulk conduction, enabling high breakdown voltage and low on-resistance with less switching loss.

Methods

The device DC operation and gate threshold voltage V_th, together with the I_DS-V_DS characteristics of the C-H diamond MOSFET, are evaluated by simulation using Atlas TCAD, a two-dimensional (2D) device simulator (ATLAS User's Manual, version 5.24.1.R, 2017, https://www.silvaco.com/) 24. We used this simulator to obtain both normally-off and normally-on characteristics by considering three typical C-H diamond MOSFET devices with various fixed interface charge models: negative interface charge, neutral surface charge, and positive interface charge. The device is under thermal equilibrium conditions at 300 K. Figure 2 shows the C-H diamond MOSFET structure modeled with an incomplete ionization model. The key parameters used for modeling the devices include a diamond substrate with a thickness of 4 µm and an Al gate of length L_G = 4 µm, formed as an overlapping gate with a thickness of 100 nm. Also, Al2O3 is formed by ALD with a thickness of 200 nm, the channel length L_SD is 4 µm, and the source and drain contacts are formed using Au/Ti. Other key diamond material parameters are summarized in Table 1, in which the electron affinity of diamond is −1.3 eV. We also assumed the incomplete ionization of impurities model in the freeze-out region, given that nitrogen shows insulating behavior in diamond. We then perform nitrogen and boron doping in the diamond (4 µm) at concentrations of 2 × 10^16 cm^-3 and 2 × 10^15 cm^-3, respectively.
Also, the 2DHG diamond MOSFET (001) was fabricated using a SiO2 layer (2 nm) under the gate to confirm normally-off operation, with a source-gate distance of L_SG = 2 μm, a gate length of L_G = 4 μm and a gate-drain distance of L_GD = 2 μm. We used the values of boron and nitrogen concentration measured in the experimental work. We used the drift-diffusion model, which is the simplified form of the charge transport model in Atlas. The mechanism used in this work is the interface fixed charge, which enters the space charge of Poisson's equation together with the ionized donors and acceptors. The mathematical model is established using the fundamental semiconductor equations, including Poisson's equation, which follows from Maxwell's laws. Poisson's equation relates the electrostatic potential ϕ to the space charge density ρ:

div(ε ∇ϕ) = −ρ.

In general, the space charge contains the mobile and fixed charges (electrons, holes, and ionized impurities). We assumed three surface-charge models in this work: the negative, positive, and neutral charge models, with densities of −5 × 10^12 cm^-2, 5 × 10^11 cm^-2, and zero, respectively. The continuity equations for electrons and holes are given as

∂n/∂t = (1/q) div J_n + G_n − R_n,
∂p/∂t = −(1/q) div J_p + G_p − R_p,

where n is the electron concentration, p is the hole concentration, J_n and J_p are the electron and hole current densities, G_n and G_p are the electron and hole generation rates, R_n and R_p are the recombination rates of electrons and holes, respectively, and q is the magnitude of the charge on the electron. The carrier continuity equations in this model are then used to update the carrier densities as a result of transport, generation, and recombination processes, relevant here only for holes, given the wide bandgap of diamond and the p-channel unipolar device treated in the simulation.
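To make the role of Poisson's equation concrete, here is a minimal one-dimensional finite-difference solver of the kind that underlies the drift-diffusion model: it solves d²ϕ/dx² = −ρ/ε with a fixed interface sheet charge placed at one grid node. Geometry and numbers are illustrative assumptions, not the Atlas TCAD setup.

import numpy as np

q, eps0 = 1.602e-19, 8.854e-12
eps = 5.7 * eps0                        # diamond permittivity (relative ~5.7)
L, N = 200e-9, 401                      # 200 nm slab, grid points (assumed)
x = np.linspace(0.0, L, N); h = x[1] - x[0]

rho = np.zeros(N)                       # volume charge density (C/m^3)
sheet = 1e12 * 1e4 * q                  # +1e12 cm^-2 sheet density -> C/m^2
rho[N // 4] = sheet / h                 # smear the sheet over one grid cell

# assemble A phi = b for the discrete Laplacian, with phi(0) = phi(L) = 0
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
b = -rho / eps
A[0, :], A[-1, :] = 0.0, 0.0            # Dirichlet boundary rows
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0
phi = np.linalg.solve(A, b)
print(f"peak potential: {phi.max():.3f} V")   # positive sheet raises the potential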
6,287.4
2020-12-29T00:00:00.000
[ "Materials Science" ]
MicroRNA-34 family expression in bovine gametes and preimplantation embryos

Background Oocyte fertilization and successful embryo implantation are key events marking the onset of pregnancy. In sexually reproducing organisms, embryogenesis begins with the fusion of two haploid gametes, each of which has undergone progressive stages of maturation. In the final stages of oocyte maturation, minimal transcriptional activity is present and regulation of gene expression occurs primarily at the post-transcriptional level. MicroRNAs (miRNA) are potent effectors of post-transcriptional gene silencing, and recent evidence demonstrates that the miR-34 family of miRNA are involved in both spermatogenesis and early events of embryogenesis. Methods The profile of miR-34 miRNAs has not been characterized in gametes or embryos of Bos taurus. We therefore used quantitative reverse transcription PCR (qRT-PCR) to examine this family of miRNAs, miR-34a, -34b and -34c, as well as their precursors, in bovine gametes and in vitro produced embryos. Oocytes were aspirated from antral follicles of bovine ovaries, and sperm cells were isolated from semen samples of 10 bulls with unknown fertility status. Immature and in vitro matured oocytes, as well as cleaved embryos, were collected in pools. RNA was purified from gametes, embryos, and ovarian and testis tissues. Results All members of the miR-34 family are present in bovine spermatozoa, while only miR-34a and -34c are present in oocytes and cleaved (2-cell) embryos. MiR-34c demonstrates variation among different bulls and is consistently expressed throughout oocyte maturation and in the embryo. The primary transcript of the miR-34b/c bicistron is abundant in the testes and present in ovarian tissue but undetectable in oocytes and in mature spermatozoa. Conclusions The combination of these findings suggests that miR-34 miRNAs may be required in developing bovine gametes of both sexes, as well as in embryos, and that primary miR-34b/c processing takes place before the completion of gametogenesis. Individual variation in sperm miR-34 family abundance may offer potential as a biomarker of male bovine fertility.

Background Terminal maturation is the final series of nuclear and cytoplasmic events that bring a haploid gamete to a state where it has the capability to contribute to a fertilization event and establish an embryo. The maturing oocyte undergoes dynamic morphological and nuclear rearrangements as it approaches ovulation, progressively decreasing its transcriptional activities [1] together with the condensation of genetic material. At this time, the oocyte contains a repertoire of RNAs that will guide it through the early stages of embryogenesis until the resumption of transcription occurs at the time of embryonic genome activation [2]. Critical to this onset of zygotic transcription and further embryonic development is the destruction of a number of maternally deposited transcripts [3]. The formation of mature male gametes is equally complex. Diploid spermatogonia are formed from germline stem cells that undergo mitotic divisions and differentiation to produce primary spermatocytes [4]. Through meiosis, the chromosome number of these spermatocytes is reduced, resulting in haploid spermatids. During the final steps of spermiogenesis, round haploid spermatids take on the distinctive morphological characteristics of mature spermatozoa [5].
The transformation of round spermatids into spermatozoa involves nuclear elongation and a high degree of DNA compaction in the sperm head [6]. This reorganization results in the arrest of transcription during mid-spermiogenesis; however, a number of transcripts persist in ejaculated spermatozoa [7]. Paternal contributions to the zygote and its development are increasingly recognized as important elements of successful fertilization. Sperm are now known to provide proteins and RNAs that are critical to subsequent development. Some mRNAs present in sperm and early embryos are absent from non-fertilized oocytes, suggesting a unique role for these RNAs in post-fertilization events [8]. MicroRNAs (miRNAs) have, over the past several years, emerged as potent regulators of gene expression that are widely expressed in biological systems. These small non-coding RNAs are highly conserved among eukaryotes, and a growing body of evidence supports major roles in developmental, homeostatic and pathological processes. MiRNAs typically act by binding to complementary sequences in the 3′ untranslated region of target mRNAs and inhibiting gene expression by accelerated transcript decay or translational suppression through interactions with the multiprotein RNA-induced silencing complex (RISC) [9]. Their roles in numerous biological pathways have highlighted the complexity of post-transcriptional gene regulation. While many miRNAs are ubiquitously expressed, tissue-specific miRNA expression is common, suggesting unique requirements in different tissues and specific functional roles. The roles of miRNAs are likely to be particularly important in tissues where transcription is limited, resulting in an environment where post-transcriptional regulatory mechanisms predominate. MicroRNAs are transcribed by RNA polymerase II [10] as primary "pri-miRNA", processed to "pre-miRNA" by a multi-subunit microprocessor complex containing the RNaseIII enzyme Drosha [11], and subsequently processed into mature miRNA by the bidentate RNaseIII ribonuclease Dicer [12]. Mammalian miRNAs have an average decay rate in somatic cells of approximately 5 days and are up to ten times more stable than mRNAs [13]. Their known roles as potent silencers and their intrinsic stability make them strong candidates for the regulation of gene expression in gametes and embryos. Considerable numbers of miRNAs are present in oocytes, many of which exhibit dynamic expression profiles. A subset of miRNAs demonstrates increased abundance through oocyte maturation and embryo development. Conversely, some miRNAs that are abundant in the immature oocyte become depleted throughout maturation, while the abundance of others remains relatively stable [14]. The dynamic expression profiles of miRNAs throughout early embryogenesis strongly implicate them in the timely regulation of embryonic gene expression. The transcriptome of spermatozoa includes both long RNAs (tRNA, rRNA and mRNA) and small RNAs, including miRNA, endo-siRNA, piRNA [15,16], and the recently described short non-coding sperm RNAs (spRNA) [17]. In addition to their potential regulatory roles in early embryonic development, miRNAs are present in the testis, and profiling has revealed populations of miRNA that are either preferentially or exclusively expressed in this tissue [18]. These miRNAs may be required for the completion of spermatogenesis, and the loss of miRNA processing machinery in the testes results in severe disruptions to this process [19].
A survey of human sperm has uncovered a unique set of primary miRNA transcripts present in sperm cells that are essentially absent from the testis and are not observable in their short, mature form in the sperm. It has been suggested that these pri-miRNA precursors may be delivered to the oocyte upon fertilization and are cleaved into their mature form only in the embryo, where they begin targeting mRNAs [20]. The small RNA profiles of sperm may ultimately represent a useful parameter that is correlated with male fertility, as bulls of high versus low fertility status have been shown to exhibit different miRNA signatures [21]. The miR-34 family consists of three miRNAs (a, b, c) which contain identical seed regions and show variable tissue expression. Gametes of several species have been shown to contain miR-34 family members that appear important in both gametogenesis [22] and early embryo developmental events [23,24]. In somatic cells, miR-34 is an integral component of the p53 network, impeding cell cycle progression and proliferation by silencing oncogenic targets [25]. The abundance of the miR-34 family is transcriptionally regulated by p53 [26]. MiR-34b and -34c suppress proliferation and colony formation in neoplastic epithelial ovarian cells, and the binding of p53 to a conserved site on the miR-34b/c promoter increases the expression of these miRNAs in ovarian surface epithelium cells, an effect lost after the conditional inactivation of p53 [27]. The roles of miR-34 in reproduction are incompletely understood and may be p53 independent, as the testes of p53-deficient mice show only a slight difference in miR-34c compared to their normal counterparts [22]. Furthermore, miR-34 availability in the gametes may be more dependent on pri-miR-34 processing, as p53 regulates miR-34 at a transcriptional level. While miR-34a appears widely expressed, miR-34c is detected in only a small subset of tissues, particularly the gonads [28]. In the testes of mice it is found primarily in germ cells, where expression increases concomitantly with the differentiation of pachytene spermatocytes into round spermatids, in which the highest levels are present [22]. MiR-34b and miR-34c are found in zona pellucida-bound sperm, suggesting an association with fertilization capacity [23]. MiR-34c is also abundant in human sperm and present in the testis, but absent from the human ovary [15]. Small RNA profiling of human seminal plasma in normozoospermic versus vasectomized donors has revealed that miR-34c is one of a select group of miRNAs that is markedly reduced or undetectable in vasectomized individuals [29], suggesting that miR-34c present in semen is either preferentially or exclusively derived from the testis/epididymis and that its absence may be a diagnostic of impaired fertility. The simultaneous inactivation of miR-34b/c and its functionally related family member miR-449, which contains an identical seed sequence, results in male sterility in mice due to a severely altered epididymis as well as low sperm counts and deformed sperm with minimal motility [30]. The miR-34 profile in female gametes is highly variable between species. MiR-34b and -34c are found in mouse zygotes and appear to be required for the first zygotic cleavage event; their expression level is comparable to that of sperm, and they are not detected in unfertilized oocytes [23]. In contrast, a single miR-34 orthologue (to miR-34a) is present in the oocytes of Drosophila melanogaster.
The miR-34 family is present in oocytes of zebrafish and is maternally inherited in embryos of these two species [24]. To date, the miR-34 family has not been examined for its role in bovine reproduction. Based on the expression and characteristics of the miR-34 family members in the gametes of other species, we designed the present study to characterize this family of miRNAs in male and female bovine gametes in order to establish the dynamics of their processing and their potential as candidates for paternally contributed zygotic RNAs. Sperm preparation Semen samples were collected from 9 Holstein bulls and 1 Jersey bull owned by and housed at EastGen (Guelph, Ontario, Canada), and arbitrarily assigned Animal Identification numbers from 1-10. Frozen semen was thawed in water at 37°C and overlayed onto a discontinuous 45% and 90% density gradient of Percoll (Sigma-Aldrich, St. Louis, MO) diluted in HEPES-buffered Tyrode's albumin-lactate-pyruvate (TALP) medium (Caisson Labs, North Logan, UT) with 1.97 mM CaCl2, 0.39 mM MgCl2, 25.6 mM Na lactate and 25 mM NaHCO3. Sperm cells were purified from cryoprotectant and somatic cell contamination by centrifugation at ambient temperature for 30 minutes at 700 × g. After the careful removal of the Percoll solution, sperm pellets were washed in 5 mL HEPES-TALP and collected by centrifugation at 700 × g for 5 minutes. Purified, motile sperm were diluted 50-fold in nuclease-free water for counting by haemocytometer, and 3 pools of 4 × 10^6 sperm per animal were flash frozen in liquid nitrogen. Oocyte collection and in vitro fertilization Oocyte collection and in vitro embryo production were performed as described previously [31]. Bovine ovaries were obtained from a local abattoir (Cargill, Guelph, Ontario, Canada) and transported at 35°C. Within 2 hours of ovary collection, cumulus-oocyte complexes (COC) were aspirated from follicles greater than 6 mm in diameter using vacuum aspiration. Collected complexes were placed into 1 M HEPES-buffered Nutrient Mixture F-10 Ham (Sigma-Aldrich) collection media supplemented with 2% steer serum, Hepalene, penicillin and streptomycin. Immature oocytes were denuded of associated cumulus cells in 2 mg/mL Hyaluronidase from Streptomyces hyalurolyticus (Sigma-Aldrich), washed in phosphate-buffered saline plus 0.01% polyvinyl alcohol, collected into pools of 40 and immediately flash frozen in liquid nitrogen. Oocytes for maturation were placed in groups of 15 under mineral oil in 80 μL drops of maturation medium containing 0.5 μg/mL FSH, 1 μg/mL LH and 1 μg/mL estradiol (NIH, USA) in a humidified atmosphere at 38.5°C and 5% CO2. After 22 hours, COCs were either denuded and collected or fertilized. In preparation for fertilization, in vitro matured oocytes were washed in HEPES-TALP (Caisson Labs) and fertilized in groups of 15 under oil in 80 μL drops of TL-Fert (Caisson Labs) fertilization medium supplemented with 20 μg/mL heparin sodium salt (Sigma-Aldrich) and 0.96 μg/mL albumin from bovine serum (BSA) (Sigma-Aldrich). It is unknown whether miR-34 family miRNAs present in bovine sperm are transferred to the oocyte upon fertilization. To eliminate the potential for variation in miR-34 levels due to paternal contribution, all oocytes used for embryo production in this study were fertilized with cryopreserved bovine sperm from one bull that was not used in any analyses of miR-34 variation between bulls. Frozen semen was washed in modified HEPES-TALP (Caisson Labs) and prepared by conventional swim-up technique.
At 18 hours post fertilization, presumptive zygotes were denuded by vortexing and cultured in groups of 25 in 30 μL drops of Synthetic Oviduct Fluid (Caisson Labs) supplemented with 0.96 μg/mL BSA, 88.6 μg/mL sodium pyruvate, 2% non-essential amino acids (Sigma-Aldrich), 1% essential amino acids (Sigma-Aldrich), 0.5% gentamicin, and 2% serum. Cleaved (2-cell) embryos were collected at 36 hours post fertilization. Tissue collection Testicular tissue was obtained from the University of Guelph Abattoir (Guelph, Ontario, Canada) immediately after slaughter and was flash frozen on-site. Ovarian cortices were dissected on ice from bovine ovaries collected and transported from the abattoir (Cargill) at 35°C, and were immediately flash frozen in liquid nitrogen. RNA extraction and cDNA synthesis Total RNA including small RNAs was extracted from gametes and tissues using the miRNeasy Micro kit (Qiagen, Mississauga, Canada) according to the manufacturer's protocol, with the inclusion of DNase digestion performed on-column with the RNase-free DNase Set (Qiagen). RNA was extracted from 3 different pools of 4 × 10^6 sperm from each bull, from 3 pools of 40 oocytes or embryos at each of the stages studied, and from the testes and ovarian cortices of 3 male and female cattle. RNA concentration and quality were measured by Nanodrop 2000c (Thermo Scientific, Wilmington, DE). Quantitative PCR Quantitative RT-PCR analysis was performed on samples after reverse transcription using a CFX96 Touch Real-Time PCR Detection System (BioRad Laboratories, Inc., Hercules, CA). Pri-miRNAs were amplified using SsoFast EvaGreen SuperMix (BioRad). Mature miRNAs were amplified with PerfeCTa SYBR Green SuperMix, a specific miRNA Assay Primer, and PerfeCTa Universal PCR primer (Quanta BioSciences). All miRNA Assay Primers used in this study were purchased commercially (Quanta BioSciences) and are based on human miRNA and snRNA sequences; expression was validated in bovine samples. The 5′ arm (5p) primers were selected for bta-miR-34a, -34b and -34c by analysis of deep sequencing reads of 5′ and 3′ arm expression in bovine tissues available in the miRBase Registry [32]. Primer efficiencies were determined by standard curve. Relative miRNA expression was calculated by the efficiency-corrected ΔΔCt method, normalized to the endogenous control snRNA U6, which has previously been shown to be a stable internal control in embryo culture systems [33]. The amount of cDNA template used per reaction represents the equivalent RNA of one oocyte/embryo, or 3 ng of sperm or tissue RNA. Statistical analysis Differences in miRNA abundance between tissues and stages of oocyte development were analyzed with the Kruskal-Wallis test and Dunn's post hoc test for multiple comparisons using GraphPad Prism 6. Differences with a P-value <0.05 were considered significant. miR-34 family is expressed in bovine sperm and testis MiR-34 family miRNA levels were measured in sperm isolates purified from ejaculate samples of 10 individual bulls. Three pools of 4 × 10^6 sperm were prepared from cryopreserved sperm isolated from each bull, and miRNA extraction/qPCR quantification from the triplicate pools of each individual was carried out separately in order to assess the potential for technical variability within the experimental design. Sectioned testis tissue, which contains a large number of differentiating spermatocytes, was found to abundantly express miR-34a, -34b and -34c.
For this reason, a sample of testis cDNA was used as a calibrator against which all sperm samples were compared. Multiple samples of testis cDNA were tested, and the calibrator was chosen based on its similar Ct value to the sperm samples for the endogenous control snRNA U6. MiR-34a, -34b and -34c were detected in the sperm from all individuals tested (Figure 1A-C) at a median threshold cycle (Ct) among all animals of 30.9, 35.9 and 27.5, respectively. Mean miR-34a expression values in each bull ranged from 9-21% of the expression level in the testis calibrator, while mean miR-34b ranged from 2-15%, and miR-34c ranged from 16-55% of the calibrator expression value. Of the three genes in this family of miRNAs, miR-34c was the most readily detected in all animals tested and showed the widest range of expression between individuals. This blinded study was carried out with no prior knowledge of the underlying structure of miR-34c distribution or its relationship with bovine fertility. It is possible that the observed variation may arise from differential transcription, the proportion of motile, high-quality sperm in semen samples, or underlying genetics. The three members of the miR-34 family all map to single locations in the UMD3.1 genome assembly of the domestic cow. This most current release of the bovine genome includes 12,251 structural variants; however, a search of this assembly did not reveal any annotated copy number variants (CNVs) surrounding the miR-34a, -34b or -34c loci. For comparative purposes, the GRCh37.p13 release of the human genome was obtained from Ensembl [37] and revealed annotated gains and losses of copies of the miR-34a allele on human chromosome 1 and at the miR-34b and -34c loci on human chromosome 11. Copy number variation is often manifested phenotypically and is an important source of genetic diversity. CNVs have been detected among different cattle breeds [38] and between individuals of the same breed [39]. Variation in the copy number of miR-34 family genes represents a possible source of variable miRNA expression between individual bulls, and quantification of these alleles compared to a reference genome is a promising direction for further studies. miR-34a and -34c, but not miR-34b, are expressed in bovine oocytes and during in vitro embryo development Members of the miR-34 family are transferred from sperm to oocyte in the mouse and are essential for subsequent cleavage events [23]. To determine miR-34 family expression in female gametes and investigate the possibility that the presence of miR-34c in the zygote is of paternal origin, we quantified the abundance of miR-34a, -34b and -34c in immature/germinal vesicle (GV) oocytes, mature/metaphase II (MII) oocytes and cleaved (2-cell) in vitro produced embryos. MiR-34a and -34c were stably expressed from the germinal vesicle-stage oocyte to the 2-cell embryo, while miR-34b was not detected (Figure 2). Dynamic changes in the abundance of many miRNAs occur throughout oocyte meiotic maturation, suggesting active regulation of specific target transcripts in this final period of oocyte development. Since miR-34a and -34c persist from the immature oocyte to the cleaved embryo, it is possible that they are functionally active during this period, or that they may be required in later stages of pre-implantation embryo development if they are comparably present during in vivo development.
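The relative expression values reported above (e.g., mean miR-34a at 9-21% of the testis calibrator) follow from the efficiency-corrected ΔΔCt method described in the Methods. The following is a minimal sketch of that calculation; the Ct values and primer efficiencies used here are hypothetical placeholders for illustration only, not data from this study:

```python
# Efficiency-corrected ddCt (Pfaffl-style) relative quantification sketch.
# All Ct values and efficiencies below are hypothetical placeholders.

def relative_expression(e_target, ct_target_calib, ct_target_sample,
                        e_ref, ct_ref_calib, ct_ref_sample):
    """Expression of a target miRNA in a sample relative to a calibrator,
    corrected for primer efficiency and normalized to a reference gene
    (here, snRNA U6). An efficiency of 2.0 means perfect doubling per cycle."""
    target_ratio = e_target ** (ct_target_calib - ct_target_sample)
    ref_ratio = e_ref ** (ct_ref_calib - ct_ref_sample)
    return target_ratio / ref_ratio

# Hypothetical miR-34c measurement in one sperm pool vs. the testis calibrator.
fold = relative_expression(e_target=1.95, ct_target_calib=27.5, ct_target_sample=29.0,
                           e_ref=2.0, ct_ref_calib=22.0, ct_ref_sample=22.3)
print(f"Relative expression vs. calibrator: {fold:.2f}")
```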
Knockdown of miR-34 in zebrafish oocytes results in embryonic dysregulation of a number of known mammalian miR-34 target transcripts, including the antiapoptotic factor Bcl-2 as well as Notch and its ligand Delta-like 1, resulting in defective brain development [24]. The relationship between miR-34c and Bcl-2 is relevant to both spermatogenesis and embryogenesis. In mouse testes, miR-34c inhibition increases Bcl-2 expression and results in apoptosis-resistant germ cells. These effects may be mediated by the Bcl-2 transcription factor ATF-1, also a miR-34c target, which demonstrates increased expression as a result of miR-34c inhibition [40]. Bcl-2 itself is a direct miR-34c target with an antiproliferative function, and miR-34c inhibition in mouse zygotes results in Bcl-2 protein overexpression and an inability to complete the first zygotic cleavage [23]. The presence of miR-34c in the bovine oocyte and embryo suggests that similar regulation occurs, but that it does not likely depend on a paternal contribution of this miRNA. The bicistronic primary transcript of miR-34b/c is present in reproductive tissues but not in mature gametes In the bovine, the miR-34 family is transcribed from two intergenic genomic locations; miR-34a is found on the forward strand of Bos taurus chromosome 16 (chr16: 45,197,197,496), and miR-34b/c are co-transcribed from a single transcript on chromosome 15 (chr15: 22,134,725-22,134,808 and chr15: 22,135,426-22,135,502) [32,35]. Our finding that miR-34c but not -34b is found in bovine oocytes and embryos led us to investigate the presence of the miR-34b/c primary transcript in the ovary, testis, and gametes. An RT-PCR assay was designed to amplify the primary transcript that contains miR-34b and -34c, which was found in testis and in 1 of 3 ovarian tissue samples tested, but not in mature sperm or oocytes (Figure 3). The absence of primary miR-34b/c in mature gametes suggests that the processing of this precursor takes place earlier in gametogenesis, and underscores the need for functionally active miR-34c in the oocyte and both miR-34b and -34c in sperm. The findings in the present study differ from miR-34 processing dynamics in the mouse, where pre-miR-34c is present in mature sperm [23]. This demonstrates species-specificity not only of miRNA expression profiles in gametes and embryogenesis, but also unique differences in the stages at which miRNAs undergo processing into their functional form. Because the miR-34c found in the oocyte must originally have been derived from its primary transcript, the difficulty in detecting this transcript in the ovary was surprising. Ovarian cortices were dissected in an attempt to capture not only the somatic cells within the ovary, but also very small follicles that would not have been retrieved by follicular aspiration. It is possible that pri-miR-34b/c is uniquely transcribed in primordial oocytes and is not detectable in a sample that has not been enriched for these small, infrequent cells. It is important to note that the findings described here have been generated using in vitro produced embryos. This is clearly relevant to the wider issue of reproduction, as the utilization of bovine in vitro embryo production systems is essential for the genetic improvement of livestock globally. In vitro embryo development also represents a strong and well-recognized model of oocyte maturation and early embryogenesis in vivo.
However, differences in the pattern of gene expression between in vitro and in vivo embryo development have been recognized [41], suggesting that differences may also be present in miR-34 biology. This will require confirmation with embryos derived in vivo in future studies. The presence of primary miR-34b/c in the testis (containing many developing spermatocytes) and its absence from mature spermatozoa parallels the pattern found in the ovary. Pri-miR-34b/c processing is therefore likely associated with early bovine gametogenesis, and the stabilities of miR-34b and miR-34c are unequal in the oocyte, as miR-34b is not detectable by the time the follicle has reached ~6 mm (the time of follicular aspiration and collection), while miR-34c is present in its functional form and remains through to the first cleavage event. Many miRNAs are found in evolutionarily conserved clusters, and their transcription is initiated by a single promoter. As such, polycistronic miRNAs are often up- or down-regulated jointly in response to changing cellular conditions [42]. Though common, equal abundance of co-transcribed miRNAs is not universal. Inconsistent expression of miRNAs derived from a single cluster has been previously documented, and data suggest that the miR-34b/c transcript may be an example of an unequally expressed miRNA cluster [43]. Furthermore, miR-34b/c itself has recently been found to undergo alternative tissue-specific splicing that depends on competition between the spliceosome and miRNA-processing machineries [44]. It is possible that a similar competitive process may account for the differences we observe in mature miR-34b and miR-34c, which are derived from the same primary transcript. The miR-34 family is variably expressed in bovine reproductive tissues The levels of miR-34 family miRNAs vary between male and female reproductive tissues and sperm (Figure 4). While miR-34a is most abundant in the ovary, miR-34c is scarcely detectable in sections of ovarian tissue. Conversely, miR-34a is not highly expressed in the testis or in sperm, while miR-34c is abundant in both. Interestingly, while miR-34c expression in ovarian tissue is low, miR-34c is abundant in oocytes and embryos, suggesting that miR-34c is enriched in the oocyte compared to the somatic cells in the ovary. Of the miR-34 family, miR-34c was found to be the most interesting candidate in both male and female gametes with respect to individual variability and possible enrichment compared to the surrounding tissue. The presence of miR-34c in mature spermatozoa isolated from ejaculates suggests that the role of miR-34c in bovine reproduction may not only involve spermatogenesis, but may also extend to fertilization and subsequent embryo development. Human studies have demonstrated that miR-34c is highly abundant in ejaculates of donors with proven fertility [15], and this putative function of miR-34c in cattle is further emphasized by the observed variation of miR-34c among individuals. Although the fertility status of our samples was unknown, miR-34c presents itself as a prospective biomolecule for further study as a marker of male reproductive competence. The scope of this study was to perform the first characterization of miR-34 family miRNAs in bovine gametes and associated reproductive tissues. Through a large cohort study, these small RNA candidates could be examined for correlation with other parameters of sperm quality and with known fertility status.
Routine evaluations to assess the quality of bull semen are a largely qualitative physical inspection of sperm morphology, motility and concentration [45]. These parameters often fail to predict fertilization, and studies have shown that molecular defects correlated with fertilization failure can exist in morphologically normal sperm [46]. Our results suggest that miR-34c may have the potential to be used as a non-invasive and quantifiable measure to assess reproductive competency in the bovine, and may be used in conjunction with current predictors of sperm quality. Figure 4. MiR-34 family abundance in sperm and reproductive tissues. MicroRNA expression determined by qRT-PCR on ovary (Ov) and testis (Ts) tissues and sperm (Sp), normalized to U6. Values reported are the mean expression in ovary and testis tissues isolated from 3 individual cattle, and the mean expression of sperm from 10 bulls. Data are presented as relative fold-change against testis RNA. Error bars represent SEM; letters represent groups whose means differ significantly (P < 0.05) for each gene.
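The group comparisons reported in the figure were performed in GraphPad Prism; the same Kruskal-Wallis/Dunn's procedure can be reproduced with open-source tooling. A minimal sketch, assuming the scipy and scikit-posthocs packages are available and using made-up abundance values rather than the study's data:

```python
# Kruskal-Wallis test across tissue groups, followed by Dunn's post hoc test.
# Abundance values below are arbitrary placeholders.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp  # third-party package: scikit-posthocs

data = pd.DataFrame({
    "abundance": [1.00, 1.12, 0.95,   0.10, 0.14, 0.08,   0.20, 0.25, 0.18],
    "group": ["ovary"] * 3 + ["testis"] * 3 + ["sperm"] * 3,
})

groups = [g["abundance"].values for _, g in data.groupby("group")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Pairwise Dunn's test with correction for multiple comparisons.
    pairwise = sp.posthoc_dunn(data, val_col="abundance",
                               group_col="group", p_adjust="bonferroni")
    print(pairwise)
```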
Automated recognition of emotional states of horses from facial expressions Animal affective computing is an emerging new field, which has so far mainly focused on pain, while other emotional states remain uncharted territories, especially in horses. This study is the first to develop AI models to automatically recognize horse emotional states from facial expressions using data collected in a controlled experiment. We explore two types of pipelines: a deep learning one which takes as input video footage, and a machine learning one which takes as input EquiFACS annotations. The former outperforms the latter, with 76% accuracy in separating between four emotional states: baseline, positive anticipation, disappointment and frustration. Anticipation and frustration were difficult to separate, with only 61% accuracy. Introduction The prevailing consensus now acknowledges that animals experience not only negative emotions such as fear and distress [1], but also positive emotional states [2]. While the historical focus of animal welfare science centered on pain and suffering, a recent notable shift in perspective encompasses a broader evaluation of their overall quality of life [3]. This shift leads to increased interest in animal emotion research, and specifically in positive emotional states [4,5]. Facial expressions are an important information channel for affective states in animals. Charles Darwin famously expounded upon how facial expressions serve as manifestations of emotional states in both human and diverse non-human species [6]; however, while he proposed a commonality across species for a given emotion, this finding has recently been challenged. Thus, while mammals are known to produce facial expressions [7], the mechanistic rules governing this in relation to internal emotional states may vary between species. Consequently, facial expressions and their variability between species are getting increased interest as potential indicators of internal states in the domain of animal emotions and welfare research. The gold standard for the objective evaluation of the dynamics of facial expressions within the realm of human emotion research is the Facial Action Coding System (FACS) [8,9]. FACS has recently been adapted for different non-human species, including several non-human primates (e.g., orangutans [10], chimpanzees [11], macaques [12,13]), marmosets [14], dogs [15], cats [16] and horses [17]. The latter are of particular interest due to their lateral eye placement accompanied by face elongation. Horses are an understudied species in the context of emotion research. Being highly social animals, they have complex social frameworks [18,19]. They have well-developed communication through nuanced visual cues, including subtle shifts in eye direction, ear positioning, and facial expressions [17,20]. Wathan et al. [21] presented evidence for their ability to distinguish between distinct facial expressions when presented with images of fellow horses, such as those conveying aggressive, positively attentive, or relaxed states. As with many other species, facial expressions in horses have so far been investigated mainly in the context of pain. A number of grimace scales for assessing pain in horses have been developed, such as the Horse Grimace Scale (HGS) [22], the Equine Pain Face [23], the Equine Utrecht University Scale for Facial Assessment of Pain (EQUUS-FAP) [24] and the FEReq instrument for ridden horses [25]. Merkies et al. [26] studied eye blinks and eyelid twitches in relation to negative affective states. Hintze et al.
[27] further focused on the eye area, addressing eye wrinkles as a potential tool to evaluate emotional valence in horses, testing horses in different situations, two of them being food anticipation (positive emotional valence) and food competition inducing frustration (negative emotional valence). An important feature for separation between these positive and negative situations was found to be the change in the angle between the highest wrinkle and the line through the eyeball. Ears are another important facial part studied in the context of affective states in horses, found to be of importance in fear [28] and vigilance [29], as well as pain [22]. Ricci-Bonot and Mills [30] studied facial expressions in horses in a controlled experiment, testing n = 30 horses across three situations involving the potential availability of food: one positive situation, anticipation of a reward, and two negative situations, frustration at waiting for a reward and disappointment at the loss of the reward. Horse facial expressions were coded using the EquiFACS coding system. While the study could not identify facial markers to differentiate anticipation, a significant difference was found in the occurrence of 9 actions and behaviors between the two negative situations. The action units 'eye white increase' (AD1), 'ear rotator' (EAD104), and 'biting feeder' were more likely in the frustration phase, while 'blink' (AU145), 'nostril lift' (AUH13), 'tongue show' (AD19), 'chewing' (AD81) and 'licking feeder' were more likely in the 'disappointment' phase. Manual behavior analysis methods have many limitations, such as being prone to bias and error [31], as well as requiring rater agreement studies and extensive human training. Computer Vision-based approaches provide an attractive alternative. Broome et al. [32] provide a comprehensive review of state-of-the-art approaches of this type in the context of affect recognition in animals. As already indicated, the majority of these works focus on pain recognition, addressing species including rodents [33][34][35], sheep [36], and cats [37]. Several works have addressed automation of pain recognition in horses [38][39][40]. Lencioni et al. [38] presented a model, based on a Convolutional Neural Network (CNN), with an overall accuracy of 75.8% when classifying pain on three levels: not present, moderately present, and obviously present. When classifying between two categories (pain not present and pain present), the overall accuracy reached 88.3%. However, the validation method used did not leave one animal out, which is the gold standard in this context (see [32] for a discussion), and may have resulted in overly optimistic performance estimates. This work used only single frames as input. Another study focusing on horse facial pain expressions was presented by Pessanha et al. [41]. The presented pipeline automatically determines the quantitative pose of the equine head and localizes facial landmarks, on the basis of which classification is made. The manual scoring of pain was performed using the Equine Utrecht University Scale for Automated Recognition in Facial Assessment of Pain (EQUUS-ARFAP) [42]. This scale has not been validated; moreover, a significant disagreement between scorers was reported. The pain prediction is done for each region of interest separately; models for some regions had good performance in binary classification of pain/no pain (orbital tightening had an F1 score of 0.86, ears 0.72), while the majority had lower performance. Hummel et al.
[40] also focused on facial expressions for pain recognition in horses, presenting a hierarchical system for pose-specific automatic pain prediction on horse faces, and exploring its extension to donkeys. While F1 scores of 0.51-0.88 were achieved in pain recognition in horses, the transfer to donkeys was difficult. Another work on horse pain by Broome et al. addressed the whole body of the horse and used more sophisticated methods taking videos as input [39]. In follow-up studies, transfer from acute to low-grade orthopedic pain in horses was also addressed [43], as well as semi-supervised approaches with video [44]. To the best of our knowledge, only one study has so far addressed emotional state recognition in horses. Corujo et al. [45] addressed states including "alarmed", "annoyed", "curious", and "relaxed", defining each of them in terms of eye, ear, nose and neck behavior. However, these definitions are neither objective nor operationally defined and so are open to observer interpretation (using descriptions such as 'relaxed'), leading to low reliability of ground truth annotation. The study presented here is the first to explore automated recognition of horse emotional states from facial expressions, using a dataset collected from the carefully designed experimental protocol of Ricci-Bonot and Mills [30]. In the protocol, similar to the one developed in [46] for dogs, the context defines the emotional states of the horses, which were tested in three different scenarios involving the potential availability of food: anticipation of a reward, considered a positive emotional state; frustration at waiting for a reward; and disappointment at the loss of the reward, the latter two considered negative emotional states. Tests were conducted in a stable with a feeding device fixed outside the stable within reach of the horse. Analysis of video recordings of the facial expressions of the horses was undertaken using the Horse Facial Action Coding System (EquiFACS), an objective system for coding facial movements on the basis of the contraction of underlying muscles, as well as their behaviors. This dataset creates a unique experimental environment for exploring different machine learning approaches in the context of emotion recognition. Specifically, we explore two routes to automated emotion recognition. The first approach uses deep learning, taking videos as input, analyzing them frame by frame and then aggregating the frames for an emotional state prediction. The second approach takes as input the EquiFACS coding of the video and uses machine learning to make a prediction of an emotional state. Dataset The dataset used in this study was collected as part of a previous study by Ricci-Bonot and Mills [30]. The delegated authority of the University of Lincoln Research Ethics Committee approved this research (UoL2021_6910), and all methods were carried out in accordance with the University Research Ethics Policy and the ethical guidelines of ISAE [47]. Written informed consent was obtained from the owners of all horses used in the research. No further ethical approval was required for the current in silico work. All experiments were performed in accordance with relevant guidelines and regulations. The study is reported in accordance with ARRIVE guidelines. Videos were obtained from the 31 horses involved in the experiment conducted by Ricci-Bonot et al.
[30]. The horses belonged to different breeds, including Cob Normand, French Saddle, Haflinger, Hungarian, Pinto cross Trotter, and some of unknown breed. The age range of the horses was 2 to 23 years, with an average age of 11.5 years and a standard deviation of 6.6. The experiment included 1 entire male, 10 geldings, and 20 females. One horse failed the training phase for food anticipation and all its videos were consequently excluded from the experiment. The dataset included overall 296 video samples of 3-second length recorded at a frame rate of 60 frames per second; each frame's resolution is 1920×1080 pixels. Tests were conducted in a stable with a feeding device fixed outside the stable within reach of the horse, using the protocol fully described in Ricci-Bonot and Mills [30]. Each subject was tested and recorded once in the baseline condition and three times in each of the anticipation, frustration and disappointment conditions, resulting in a dataset comprising 87 recordings of anticipation states, 30 recordings of baseline states, 90 recordings of disappointment states and 89 recordings of frustration states. Some videos could not be used due to insufficient visibility. All the video samples were coded by a certified EquiFACS coder (C.R.B.) based on the EquiFACS manual. All action units, action descriptors and other variables were coded as present or absent; for the analysis, only EquiFACS variables shown to be reliable by a second EquiFACS coder and occurring in more than 10% of one of the four situations (baseline, anticipation, frustration and disappointment) were considered. In order to ensure the reliability of the coding, a second certified EquiFACS coder (N.J.) coded more than 10% of the video samples. Table 1 presents the EquiFACS variables that were eventually used in the analysis. AI pipelines overview For narrative purposes, we preface our results with essential and practical aspects to improve understanding for those less familiar with AI methods, presenting a high-level overview of the approaches used. We compare the classification performance of two different pipelines utilizing two different types of input. The first pipeline takes as input the 3-second-long video recordings; the second takes as input the EquiFACS coding information. Video classification pipeline The pipeline for video classification used in this study follows the approach presented in [48], making sophisticated use of the availability of video data in two ways: we integrate temporal information by using the Grayscale Short-Term stacking (GrayST) method [49] to encode movement between consecutive frames into one frame, and we also apply a frame selection technique to better exploit the availability of video data and improve performance. The input to the model is video. To remove background information, we crop the horse faces using the Yolov5 object detection model [50]. Then we apply the GrayST method to incorporate temporal information for video classification without augmenting the computational burden. This sampling strategy involves substituting the conventional three color channels with three grayscale frames obtained from three consecutive time steps. Consequently, the backbone network can capture short-term temporal dependencies while sacrificing the capability to analyze color. The next stage involves encoding each image into a 768-dimensional embedding vector employing a Visual Transformer (ViT [51]) trained in a self-supervised manner using DINO [52] with a batch size of 8.
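To make the GrayST step concrete, the following is a minimal sketch of the stacking just described; the luminance weights and the choice of overlapping frame triplets are our assumptions for illustration, not details taken from [48,49]:

```python
# Grayscale Short-Term stacking (GrayST) sketch: each "image" fed to the
# backbone replaces its three color channels with grayscale versions of
# three consecutive video frames.
import numpy as np

def grayst_stack(frames_rgb: np.ndarray) -> np.ndarray:
    """frames_rgb: (T, H, W, 3) video clip.
    Returns (T - 2, H, W, 3), where channel c of output t is the
    grayscale of input frame t + c."""
    # Standard luminance conversion; exact weights are an assumption.
    gray = frames_rgb @ np.array([0.299, 0.587, 0.114])            # (T, H, W)
    t = gray.shape[0] - 2
    # Stack three time-shifted copies along the channel axis.
    stacked = np.stack([gray[i:i + t] for i in range(3)], axis=-1)  # (T-2, H, W, 3)
    return stacked

clip = np.random.rand(180, 224, 224, 3)   # hypothetical 3 s @ 60 fps face crop
print(grayst_stack(clip).shape)           # (178, 224, 224, 3)
```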
We extract the output of the final layer as a 768-dimensional embedding vector that will be used for emotion classification. The embedding vectors are then fed to SVM models in a two-stage approach. First, an SVM model is trained on all sampled frames, and its confidence levels (how confident the model is of its classification of a frame, computed as the probabilities of the possible outcomes for samples in the dataset) are used to choose the top frames for each emotional class. Then, a second SVM model is trained using only the highest-confidence frames. Fig 1 shows a high-level overview of the pipeline. EquiFACS classification pipeline The EquiFACS data table for classification contains 296 rows (one for each video) and 25 columns: horse subject ID, emotional state, and presence or absence of the 23 different EquiFACS codes described above. The presence (absence) of a certain AU X in a specified video Y was marked as 1 (0) in the column of AU X and the row of video Y. Because part of the information was marked as not available, such entries were filled with 0.5. This data is then fed into a Decision Tree classifier. Fig 1 shows a high-level overview of the pipeline. Model performance For measuring the performance of the models, we use the standard evaluation metrics of accuracy, precision, recall and F1 (see, e.g., [37,38] for further details). As a validation method [53], we use leave-one-subject-out cross-validation with no subject overlap. Due to the relatively low number of horses (n = 30) in the dataset, following the stricter method is more appropriate [35,39]. In our case this means that we repeatedly train on 29 subjects and test on the remaining subject. By separating the subjects used for training, validation and testing respectively, we enforce generalization to unseen subjects and ensure that no specific features of an individual are used for classification. Results Table 2 presents our main results: the performance comparison between the video-based pipeline and the EquiFACS-based pipeline. We can see that the video-based pipeline outperforms the EquiFACS-based one, reaching 76% accuracy for separation between all the classes, as opposed to only 69% for the latter pipeline. It should be noted that this performance was reached in a process of two phases, described in Table 3. The advantage of the lower-performing EquiFACS-based classifier, however, is its explainability in the form of a decision tree. Confusion matrices for the video-based and EquiFACS-based pipelines can be found in Figs 3 and 4, respectively. It can be seen that separation between Anticipation and Frustration is difficult for both models. Thus, Table 2 also presents classification performance for three states, where Anticipation and Frustration are treated as one state, which greatly increases performance. The separation between the two 'difficult' states of Anticipation and Frustration reaches 61% accuracy with the video-based model, but had very low (46%) accuracy with the EquiFACS-based model.
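As an illustration of the EquiFACS-based pipeline and the leave-one-subject-out protocol described above, the following is a minimal scikit-learn sketch; the feature table is randomly generated with the shapes described above (296 videos, 23 codes, 30 horses), so the printed accuracy is meaningless and the tree depth is an arbitrary choice:

```python
# Decision Tree over per-video EquiFACS indicators (1 = present,
# 0 = absent, 0.5 = not available), evaluated with leave-one-subject-out
# cross-validation so that no horse appears in both training and test sets.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_videos, n_codes, n_horses = 296, 23, 30
X = rng.choice([0.0, 0.5, 1.0], size=(n_videos, n_codes), p=[0.6, 0.05, 0.35])
y = rng.integers(0, 4, size=n_videos)               # 4 emotional states
subject = rng.integers(0, n_horses, size=n_videos)  # horse ID per video

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(clf, X, y, groups=subject,
                         cv=LeaveOneGroupOut(), scoring="accuracy")
print(f"Leave-one-subject-out accuracy: {scores.mean():.2f}")
```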
Discussion The present study is the first to explore automated recognition of horse emotional states focusing on diverse facial expressions, based on a carefully designed controlled experimental setup for dataset creation and annotation. While facial muscular tone may decline with age [54] and facial morphology may vary with factors like sex and breed, central to the idea of emotional expression is that reliable changes can be predicted regardless of these factors. Thus, although factors like eye wrinkling might change with sex [55] and even be related to emotion, this expression cannot be used as a reliable general marker of emotion in horses because of this variability. Therefore, since we are interested in generic markers, we do not attempt to model the effect of factors such as breed, gender or age in our models. We presented classifier pipelines of two different types: deep learning video-based and EquiFACS-based. The former reaches 76% accuracy in separating the four emotional states, while the latter has lower performance (69%). This could be an indication that EquiFACS contains less information than raw video, and that there are subtle nuances not captured by the EquiFACS annotation system. This is further strengthened by the fact that the deep learning classifier also outperforms the EquiFACS-based one in separating the two "difficult" cases of Anticipation vs. Frustration, reaching 61% accuracy. An EquiFACS-based approach was used in the original study [30], which involved observer-based coding, and this approach, even in an automated process, has one crucial benefit: explainability. As discussed in [37,56], deep learning models have a 'black-box' nature, and it is important to understand how machines classify emotional states, exploring explainability (what is the rationale behind the machine's decision?) and interpretability (how is the model structure related to making such a decision?) [57]. These topics are fundamental in AI and are addressed by a huge body of research [58,59]. The EquiFACS-based Decision Tree presented in this study allows us to answer such questions by observing the tree structure, represented by 'if-then' rules. From the tree, one can infer, e.g., that the machine chooses "Baseline" when none of the action units AD19-Tongue_show, AD51-Head_turn_left or AD52-Head_turn_right are present (see the leftmost branch of the tree). When AD19-Tongue_show is present, whether or not the lower face part is visible (i.e., whether the VC72-Lower_face_not_visible indicator is present), the machine chooses "Disappointment" (see the rightmost branch of the tree). Otherwise, "Anticipation or Frustration" is derived. This suggests that the system is using more than facial expression alone, incorporating the wider movement of the head as part of the classification process. The risk of this being artefactual, arising from the design of the study (based on a food delivery system), needs to be carefully considered, and thus generalizations about emotional state should be made with care. A similar phenomenon could arise within the generally superior deep learning video-based approach, but we have no way of knowing this. Thus, replication studies examining these emotions in horses in other contexts are essential and will strengthen the database used for deriving solid conclusions.
For a possible explanation of why the states of anticipation and frustration could not be well separated in our study, despite the fact that they had a stronger separation in the study of dogs [46], we refer the reader to Ricci-Bonot and Mills [30]. Whilst the authors considered it possible that there was a lack of facial differences between positive anticipation and frustration in horses, or that feeding in the context of the experiment is a largely frustrating event, it was also thought that it might be an artefact of using a 1-0 sampling method, which meant the detail within a video may not have been captured [30]. The fact that the video-based deep learning model has 61% accuracy in this case indicates that some visual signal is there, and the explainability of this classifier should be explored in future research, with future experimental protocol designs aimed at better separation of these two emotional states. Fig 1 presents the two pipelines. S1 Appendix presents further technical details on the pipelines. Fig 2 displays examples of cropped horse faces (top) and GrayST stacked frames (bottom) for each of the emotional states, from left to right: 'Anticipation', 'Baseline', 'Disappointment', 'Frustration'. The bottom frames capture three consecutive frames: in the 'Baseline' case no movement of the horse is shown, while in the other three cases some movement is captured.
Global–Local Self-Attention Based Transformer for Speaker Verification : Transformer models are now widely used for speech processing tasks due to their powerful sequence modeling capabilities. Previous work determined an efficient way to model speaker embeddings using the Transformer model by combining transformers with convolutional networks. However, traditional global self-attention mechanisms lack the ability to capture local information. To alleviate these problems, we proposed a novel global–local self-attention mechanism. Instead of using local or global multi-head attention alone, this method performs local and global attention in parallel in two groups of heads, to enhance local modeling and reduce computational cost. To better handle local location information, we introduced locally enhanced positional encoding into the speaker verification task. The experimental results on the VoxCeleb1 test set and the VoxCeleb2 dev set demonstrated the effectiveness of our proposed global–local self-attention mechanism. Compared with the Transformer-based Robust Embedding Extractor Baseline System, the proposed speaker Transformer network exhibited better performance in the speaker verification task. Introduction Speaker verification determines whether the identity of the speaker of a test utterance is the same as that of the speaker of a reference speech on the basis of the enrollment utterances of the speaker. Research in this area has mainly focused on obtaining a fixed-dimensional vector representing an utterance, known as a speaker embedding. These speaker embeddings are then scored to verify the speaker's identity. Research on speaker embedding extractors aims to enhance inter-speaker variability and suppress intra-speaker variability. In general, extracting speaker embeddings is a crucial factor that largely determines the performance of speaker verification systems. In recent years, the Transformer model [1] has demonstrated excellent performance in natural language processing (NLP). Interest in the Transformer model for speech processing has exploded among researchers. Inspired by the successful application of the Transformer model in NLP, some studies [2][3][4] have tried to replace segment-level pooling layers or frame-level convolutional layers to apply the Transformer model to speaker recognition. However, these works on self-attention are still dominated by previous methods, such as Residual Networks (ResNets) [5][6][7], Time-Delay Neural Networks (TDNNs) [8,9], and Long Short-Term Memory (LSTM) networks [10]. Other works have used the Transformer as the backbone but still did not alleviate the limitations of the Transformer in speaker verification, such as s-vectors [11]. Since the input is a speech signal, the speaker verification task is different from NLP. There are two challenges in applying the Transformer to speaker verification: (1) Transformers are difficult to scale efficiently, since acoustic feature sequences are much longer than text sentences [12]; (2) compared with CNNs, the Transformer is insufficient in capturing local information. Due to the success of networks such as TDNNs that focus on local features, it is generally believed that local features will improve network performance, so we believe that enhancing the Transformer model's ability to capture local information will improve its performance in speaker recognition. In this work, we propose global-local self-attention to enable the Transformer model to model local features while maintaining the ability to model long-distance dependencies.
We divided attention heads into different groups and performed local or global self-attention operations for the different groups. This strategy of splitting attention heads does not introduce additional computational costs, and it enhances the ability to capture global dependencies and model local information. In the encoder and decoder, we used additional skip connections to aggregate features at different levels. Furthermore, we introduced locally enhanced positional encoding to further enhance the locality of the model. Without adding extra computation, we improved the performance of the Transformer in speaker verification tasks by combining multi-level features and enhancing local information by design. The paper is organized as follows: Section 2 reviews previous work related to the self-attention mechanism in speaker recognition. Section 3 presents and explains our proposed model. In Section 4, we discuss the experimental details and analyze the results. Section 5 concludes this paper. Related Work Convolutional neural networks dominate the field of speaker recognition and have achieved great success. Recently, due to the excellent performance of the Transformer in NLP and speech recognition, some works have studied the problem of applying the Transformer to speaker recognition. The attention mechanism is at the heart of the Transformer's excellent performance. A line of work applies attention mechanisms to pooling for speaker recognition as an alternative way of aggregating temporal information. Okabe et al. [9] proposed an attentive statistics pooling method that provided the importance of each frame. The attention mechanism was combined with a TDNN-based embedding extractor to assign different weights to different frames and generate weighted means and standard deviations. Cai et al. [5] and Zhu et al. [13] proposed a pooling layer incorporating a self-attention mechanism to obtain utterance-level representations. Wu et al. [14] improved it by adopting vectorial attention instead of scalar attention. India et al. [2] presented double multi-head attention pooling, which extended the previously proposed self-multi-head attention-based method. An additional self-attention layer, which enhanced the pooling mechanism by assigning weights to the information captured by each head, was added to the pooling layer. Wang et al. [15] proposed multi-resolution multi-head attention pooling, which fused the attention weights of different resolutions to improve the diversity of attention heads. Instead of utilizing multi-head attention in parallel, Zhu et al. [3] proposed serialized multi-layer multi-head attention, which aimed to aggregate and propagate attention statistics from one layer to the next in a serialized manner. Different from the above studies, some studies have focused on channel-wise attention. Yu et al. [16] proposed a dynamic channel-wise selection mechanism based on softmax attention, integrating information from multiple network branches with a channel-wise selection mechanism. Jiang et al. [17] introduced a gating mechanism to provide channel-wise attention by exploiting inter-dependencies across channels. These works extended the attention mechanism to the channel dimension to select more important channel information, which led to limited improvements in speaker recognition system performance. In recent years, some works have directly stacked attention layers as a part of the layers or the whole embedding extractor. Shi et al.
[4,18] applied attention layers and stacked Transformer encoders on frame-level encoders and segment-level encoders, respectively, to capture speaker information locally and globally. The study by Shi et al. [4] was an improvement on Shi et al. [18] that used Transformer encoders with memory to replace the attention layer, and it proposed the idea of using Transformer blocks to process acoustic features segmented into segments. However, it did not integrate the operation of the split window into the Transformer module. Desplanques et al. [19] further incorporated channel attention with a global context into the frame-level layers and the statistics pooling layer for better performance. These works, such as [8], are still dominated by sophisticated convolutional networks. Conversely, Safari et al. [20] proposed a serialized multi-layer multi-head attention. This work consisted of three main stages, namely a frame-level feature processor, a serialized attention mechanism, and a speaker classifier. The frame-level feature processor used a TDNN to extract high-level representations of the input acoustic features. The serialized attention mechanism was included in a concatenated self-attention encoding structure that stacked Transformer encoder blocks followed by an additional attention pooling. This structure was used to aggregate variable-length feature sequences into a fixed-dimensional representation to create discriminative speaker embeddings. Metilda et al. [11] proposed s-vectors, which replaced the TDNN of [8] with stacked Transformer encoder modules, followed by a statistical pooling layer and two linear layers. In order to better capture the speaker characteristics, that work used self-attention as the backbone of its architecture. Its advantage is that it was not restricted to a limited context and attended to all frames at each time step. These works show that Transformer models have the potential to be applied to speaker verification. However, Transformer-based embedding extractors suffer from inferior performance in speaker recognition due to a lack of capacity to capture local features. Wang et al. [12] proposed a multi-view attention mechanism that captured long-distance dependencies and modeled locality by controlling the self-attention receptive field for each head with a head-wise masking matrix. This work made some progress on this problem. It used a mask to realize the calculation of local self-attention, but masking the calculated results wastes computing resources. Proposed Architecture Using the original self-attention alone may not be sufficient to capture local contextual features of utterances. To better capture speaker features, we proposed global-local self-attention in our architecture. In this section, we introduce the structure of each module and explain how these designs were incorporated into our proposed model. The following subsections focus on the different submodules. Figure 1 presents the complete architecture of the model. BN stands for Batch Normalization [21]. Overall Architecture The overall architecture of our proposed method is shown in Figure 1. The input was 80-dimensional mel-filter banks, and a one-dimensional convolutional layer (kernel size 3, stride 1) was leveraged to obtain C × T outputs, where C and T refer to the channel and time dimensions, respectively. The convolution utilizes overlapping windows to form coarse features, which lay the foundation for extracting speaker-discriminative embeddings. We used the architecture with encoders and decoders as the embedding extractor. After the decoder, we employed an x-vector-like architecture consisting of attention pooling [9] followed by a fully connected layer to generate the final speaker-characterizing embedding. The whole system contains up to 25.2 million parameters.
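The attention pooling head referenced above follows [9]. As a sketch of the general attentive-statistics pooling idea (attention-weighted mean and standard deviation of frame-level features, followed by a fully connected layer), with illustrative layer sizes rather than this paper's exact configuration:

```python
# Attentive-statistics pooling sketch: learned frame weights produce a
# weighted mean and standard deviation, concatenated and projected to the
# speaker embedding. Dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class AttentiveStatsPooling(nn.Module):
    def __init__(self, channels: int, embed_dim: int, bottleneck: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv1d(channels, bottleneck, kernel_size=1),
            nn.Tanh(),
            nn.Conv1d(bottleneck, channels, kernel_size=1),
        )
        self.fc = nn.Linear(2 * channels, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        w = torch.softmax(self.attention(x), dim=2)      # per-frame weights
        mean = (x * w).sum(dim=2)
        var = (x * x * w).sum(dim=2) - mean ** 2
        std = var.clamp(min=1e-6).sqrt()
        return self.fc(torch.cat([mean, std], dim=1))    # (batch, embed_dim)

pooling = AttentiveStatsPooling(channels=256, embed_dim=192)
print(pooling(torch.randn(4, 256, 200)).shape)  # torch.Size([4, 192])
```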
Transformer Block

The overall topology of the Transformer block is illustrated in Figure 2a, with two differences from the original Transformer module [1]: we replaced the multi-head self-attention mechanism with our proposed global-local self-attention mechanism, and, to introduce a local inductive bias, locally enhanced positional encoding [22] was added as a parallel module to our proposed self-attention mechanism. The Transformer block maintained the size of the feature maps and was set with an MLP ratio of 3.2 and 8 attention heads. Following the pre-LN formulation of [22], the Transformer block is formally defined as

X̂^l = GL-Attention(LN(X^{l-1})) + X^{l-1},
X^l = MLP(LN(X̂^l)) + X̂^l,

where X^l denotes the output of the l-th Transformer block of the encoder or decoder and, at the beginning of each module, X^{l-1} denotes the output of the previous module.

Global-Local Self-Attention

Despite its strong ability to model global dependencies, the original full self-attention mechanism struggles to capture local information in utterances, which are longer than typical text inputs. In a recent study [12], local self-attention with a sliding window was applied to speaker recognition and achieved competitive performance. Inspired by [23], we proposed a novel global-local self-attention mechanism that improves the capability to capture local features while retaining the capability to model long-distance dependencies. As shown in Figure 2b, for half of the channels of the feature maps, the self-attention mechanism is implemented as local self-attention, with each attention head using a sliding window of the same size; for the other half of the channels, it is implemented as global self-attention without a sliding window. Similar to the original full self-attention mechanism, the input features X ∈ R^{C×T} are linearly transformed for K attention heads, and each attention head then performs local or global self-attention. For local self-attention, we used a non-overlapping sliding window to partition X into [X^1, · · ·, X^N] with an equal window size w; a suitable w gives the model better learning ability. Assuming that the query, key, and value matrices of the k-th attention head all have dimension d_k, the proposed local self-attention for the k-th head is defined as

head_k^i = Softmax(Q_k^i (K_k^i)^T / √d_k) V_k^i, with Q_k^i = X^i W_k^Q, K_k^i = X^i W_k^K, V_k^i = X^i W_k^V,
where W_k^Q, W_k^K, W_k^V ∈ R^{C×d_k} are the linear projection parameter matrices of the queries, keys, and values for the k-th attention head, respectively, and d_k is set to C/K. We divided the K attention heads equally into two distinct groups; K is usually an even number, so the heads split evenly. The first group of attention heads performs local self-attention, while the second group performs global self-attention. Global self-attention differs from standard multi-head self-attention in that it additionally incorporates the locally enhanced positional encoding, and the output of the k-th head is recorded as Global-Attention_k(X). Finally, the results of these two kinds of attention are concatenated together as the input of the MLP and denoted as GL-Attention(X):

GL-Attention(X) = Concat(head_1, · · ·, head_K) W, (7)

where W ∈ R^{C×C} is the projection matrix that projects the self-attention results into the target output dimension. The key design is to split the attention heads into two different groups and perform local and global self-attention operations in parallel. This enables local attention to function under the guidance of global attention, so that global information can better interact with local information. We adopted the window size that outperformed the other sizes we tried, to achieve the best performance for our proposed method.

Locally Enhanced Positional Encoding

The positional encoding mechanism plays a pivotal role in the Transformer model. Since the self-attention operation is permutation-invariant, it ignores location information within the input features. To add this information, we considered a straightforward way of adding position information to the linearly projected values. In addition, we wanted each input element to pay more attention to the location information of its local neighborhood. Therefore, we adopted the locally enhanced positional encoding (LePE) method. LePE is generated by applying a depth-wise convolutional layer [24] to the value V. Given the matrices Q, K, and V in the Transformer model, after adding LePE, the proposed self-attention mechanism can be formulated as

Attention(Q, K, V) = Softmax(QK^T / √d_k) V + DWConv(V).

In this way, LePE conveniently adds local contextual location information to input elements.
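To make the head split concrete, the following PyTorch sketch (our own illustration, not the authors' code) implements one global-local self-attention layer: half of the heads attend within non-overlapping windows of size w, the other half attend over the full sequence, and LePE is approximated by a single depth-wise convolution over the values. Where exactly LePE enters (per head or once on the shared values) is a simplification here, and T is assumed divisible by w:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalAttention(nn.Module):
    """Sketch of the head split described above: half of the K heads
    attend inside non-overlapping windows of size w, the other half
    attend over the whole sequence. LePE is approximated by a single
    depth-wise Conv1d over the values."""

    def __init__(self, dim=512, heads=8, window=25):
        super().__init__()
        assert heads % 2 == 0
        self.h, self.w, self.dk = heads, window, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)  # W in Equation (7)
        self.lepe = nn.Conv1d(dim, dim, 3, padding=1, groups=dim)

    def _attend(self, q, k, v):
        a = F.softmax(q @ k.transpose(-2, -1) / self.dk ** 0.5, dim=-1)
        return a @ v

    def forward(self, x):  # x: (B, T, C); T must be divisible by w
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        lepe = self.lepe(v.transpose(1, 2)).transpose(1, 2)
        split = lambda t: t.reshape(B, T, self.h, self.dk).transpose(1, 2)
        q, k, v = map(split, (q, k, v))      # (B, heads, T, dk)
        gh = self.h // 2
        # first half of the heads: global (full-sequence) attention
        g = self._attend(q[:, :gh], k[:, :gh], v[:, :gh])
        # second half: attention restricted to T/w windows of size w
        win = lambda t: t.reshape(B, gh, T // self.w, self.w, self.dk)
        l = self._attend(win(q[:, gh:]), win(k[:, gh:]), win(v[:, gh:]))
        l = l.reshape(B, gh, T, self.dk)
        out = torch.cat([g, l], dim=1).transpose(1, 2).reshape(B, T, C)
        return self.proj(out + lepe)         # Equation (7) plus LePE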
Encoder

In our work, the encoder layer consisted of N_i sequential Transformer encoder blocks. These deeper features are generally considered to be more complex and can effectively represent the speaker's identity. However, evidence in [19] suggested that shallow feature maps in hierarchical networks also contribute to more robust speaker embeddings. We argue that this also holds in our proposed Transformer model. After a sequence of Transformer blocks, we concatenated the outputs of each Transformer block via skip connections to generate new feature maps. Then, a fully connected layer (called a sub-block aggregation net) processed the aggregated features to output the features for the decoder layer. After concatenating the outputs of the Transformer blocks of different layers, the sub-block aggregation net processed the concatenated information and adjusted the dimension of the features to match the input of the next module. The decoder layer has the same architecture as the encoder layer, including several sequential Transformer blocks and a sub-block aggregation net. The difference between the encoder and the decoder is that the former is generally deeper than the latter. For both the encoder layer and the decoder layer, we applied layer normalization [25] to the aggregated information before the sub-block aggregation net. In this work, we used a 4-layer encoder and a 3-layer decoder. In our proposed architecture, the sub-block aggregation net in the encoder and decoder aggregates features at different levels, which prevents the model from consuming too much memory. The encoder extracted a fixed-length representation from the coarse speech features output by the convolutional layer, which was passed to the decoder as intermediate features to obtain utterance-level speaker embeddings. Finally, the output of the decoder was used by the pooling layer to generate the final speaker embedding.

Data and Features

For this work, we used the VoxCeleb dataset as our training and test sets. The dataset consists of two versions, VoxCeleb1 [26] and VoxCeleb2 [27]. VoxCeleb1 contains over 100,000 utterances from 1251 celebrities, while VoxCeleb2 contains over 1 million utterances from 6112 identities. There is no overlap between the two versions. All data preparation steps were performed using the SpeechBrain VoxCeleb recipe [28]. All our systems were trained in SpeechBrain and evaluated on the VoxCeleb1 test sets. The input features were 80-dimensional mel-filter banks with a 25 ms window and a 10 ms shift to represent the speech signal, while the feature maps inside the network had 512 channels. To make the length of the input divisible by the window size, we dropped the trailing frames. Data augmentation followed the SpeechBrain VoxCeleb recipe during training, in combination with the publicly available RIR dataset provided in [29]. Finally, we applied SpecAugment [30], which randomly masked 0 to 5 frames in the time domain and 0 to 10 channels in the frequency domain.

Experiment Setup

The AdamW optimizer with a weight decay of 0.1 was used. We used a mini-batch size of 64 and an initial learning rate of 5 × 10^−4. We used the CyclicLR scheduler with the AdamW optimizer, with a minimum learning rate of 1 × 10^−5, to train all models. The step size of one cycle was set to 80k iterations. All models were trained with AAM-softmax [31,32], with a margin of 0.2 and softmax prescaling of 30; one training cycle was applied over the VoxCeleb2 dev set. To make the size of the feature maps in the Transformer block divisible by w, we chose the optimal size w from 20, 25, and 30, following the evidence in [4]. Each window size was tested on the VoxCeleb1 test set, and the results are analyzed in the results section.

System Evaluation

We adopted the standard equal error rate (EER) and the minimum normalized detection cost (MinDCF) as evaluation metrics to compare our proposed system with previous work. For the MinDCF calculation, we assumed P_target = 10^−2 and C_FA = C_Miss = 1. The EER is the operating point at which the false acceptance rate and the false rejection rate are equal. The calculation of MinDCF takes into account the different costs of false rejection and false acceptance, as well as the prior probabilities of true speakers and impostors. We also show the DET curve of our proposed method. All our proposed models use a cosine similarity classifier as the backend. We analyze the proposed model architecture with a concise ablation study in the next section.
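For reference, EER and MinDCF as used above can be computed from a list of trial scores and labels roughly as follows (a numpy sketch under the stated P_target = 0.01 and C_FA = C_Miss = 1; a production evaluation would sweep thresholds over the official trial lists):

import numpy as np

def eer_mindcf(scores, labels, p_tgt=0.01, c_fa=1.0, c_miss=1.0):
    """EER and MinDCF from trial scores; labels: 1 = target, 0 = impostor."""
    order = np.argsort(-np.asarray(scores))      # accept highest scores first
    lab = np.asarray(labels)[order]
    n_tar, n_imp = lab.sum(), len(lab) - lab.sum()
    fa = np.cumsum(lab == 0) / n_imp             # false-acceptance rate
    fr = 1.0 - np.cumsum(lab == 1) / n_tar       # false-rejection rate
    i = np.argmin(np.abs(fa - fr))               # EER: where the rates cross
    eer = (fa[i] + fr[i]) / 2
    dcf = c_miss * fr * p_tgt + c_fa * fa * (1 - p_tgt)
    # normalize by the cost of the best trivial (accept-all/reject-all) system
    min_dcf = dcf.min() / min(c_miss * p_tgt, c_fa * (1 - p_tgt))
    return eer, min_dcf

# Toy usage with synthetic scores (real trials come from the VoxCeleb1 lists):
rng = np.random.default_rng(0)
s = np.concatenate([rng.normal(1, 1, 500), rng.normal(-1, 1, 500)])
y = np.concatenate([np.ones(500), np.zeros(500)])
print(eer_mindcf(s, y))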
Results

In this section, we compare the proposed method with several strong studies and analyze the results. An overview of system performance is shown in Table 1, including VGG [26], TDNN [8,9,14], ResNet [6], Transformer [11,12,20], and our proposed architecture. According to the results of the VoxCeleb2 speaker verification task shown in Table 1, our model performed better in EER and MinDCF than the baseline systems based on VGG, TDNN, ResNet, and Transformer feature extractors. This shows that enhancing the locality of the Transformer can effectively improve performance. The DET curve of our proposed method is shown in Figure 3.

When performing local self-attention, we used fixed-size, non-overlapping sliding windows to divide the input features into several equal-length segments. To study the effect of different window sizes on model performance, we set the experimental window sizes according to the experience of previous work, such as [4]. Table 2 presents the results at different window sizes. Among the window sizes tested, the best EER and MinDCF were obtained with a window size of 25 frames. Performance dropped when the window size became larger (30 frames) or smaller (20 frames). This shows that a reasonable window size improves performance; when the window size is too large or too small, the model may not capture the speaker information in the current segment well, resulting in performance degradation. We note that our work outperformed the numbers reported by all systems used for comparison. To investigate the impact of each part of the model, we performed an ablation study on the architecture introduced in Section 3. The results of the ablation experiments are given in Table 3.

Table 3. Ablation study of our proposed architecture.

We conducted experiment (a) by replacing the proposed attention with the original full self-attention and keeping everything else the same. The results showed that our method outperformed the original full self-attention, yielding relative improvements of 6.8% in EER and 4.1% in MinDCF. This suggests that enhancing the locality of the self-attention mechanism in the Transformer can improve the model's performance on the speaker verification task. Experiment (b) clearly demonstrates the importance of the sub-block aggregation net described in Section 3. Aggregating different levels of features through the sub-block aggregation net leads to a relative improvement of 23% in EER and 17.8% in MinDCF. The results showed that aggregating features at different levels through the sub-block aggregation net enables the model to obtain richer information, which is beneficial for obtaining more robust speaker embeddings in our proposed model. In experiment (c), we did not use LePE and kept the other configurations the same.
The results showed a relative improvement of 3.5% in EER and 11.5% in MinDCF from introducing LePE. This suggests that further enhancing the locality of the model through positional encoding can effectively improve performance.

Conclusions

In this work, we proposed a Transformer-based speaker embedding extractor for speaker verification with a novel global-local self-attention mechanism. The method balances the ability to model long-distance dependencies and the ability to capture local features by performing local and global attention in parallel. We aggregated features at different levels in the encoder and decoder to obtain more powerful speaker embeddings. The combination of these designs enables our proposed method to achieve excellent results compared with several strong baselines. Fine-tuning the parameters and more thorough training may further improve the results. In future work, we will further improve performance by combining this method with other techniques, such as pre-training, while exploring how to better apply Transformers to speaker recognition tasks.
6,091.4
2022-10-10T00:00:00.000
[ "Computer Science" ]
Building Extraction of Aerial Images by a Global and Multi-Scale Encoder-Decoder Network

Semantic segmentation is an important and challenging task in the aerial image community, since it can extract target-level information for understanding the aerial image. As a practical application of aerial image semantic segmentation, building extraction always attracts researchers' attention, as buildings are a specific land cover in aerial images. There are two key points for building extraction from aerial images. One is learning global and local features to fully describe buildings with diverse shapes. The other is mining multi-scale information to discover buildings at different resolutions. Taking these two key points into account, we propose a new method named global multi-scale encoder-decoder network (GMEDN) in this paper. Based on the encoder-decoder framework, GMEDN is developed with a local and global encoder and a distilling decoder. The local and global encoder aims at learning representative features from the aerial images for describing the buildings, while the distilling decoder focuses on exploring multi-scale information for the final segmentation masks. Combining them, the building extraction is accomplished in an end-to-end manner. The effectiveness of our method is validated by experiments conducted on two public aerial image datasets. Compared with some existing methods, our model can achieve better performance.

Introduction

The aerial image is a kind of high-resolution remote sensing image, which can provide diverse and high-definition information about land covers [1]. With the development of imaging techniques, a large number of aerial images can be collected. How to effectively use them to gain more useful knowledge for understanding our planet remains an open and difficult task. Many technologies can be adopted to mine the contents of aerial images, such as image classification [2], image semantic segmentation [3], and content-based image retrieval [4,5]. Among these techniques, image semantic segmentation is a basic and important research topic, as it can assign the corresponding semantics to each pixel within the aerial image [3]. Aerial image segmentation has been successfully applied to many remote sensing applications, such as land-use/land-cover classification [6] and points-of-interest detection [7][8][9][10].

1. How to explore local and global information is the first key point. Note that, in this paper, we define the detailed structure (e.g., buildings' outlines and shapes) as the local information. Meanwhile, the overall structure (e.g., buildings' context within an aerial image) is defined as the global information.

2. How to capture the multi-scale information is the second key point. Due to the specific characteristics of aerial images, the buildings within an aerial image differ in size. Bigger buildings may contain hundreds of pixels, while smaller buildings occupy dozens of pixels. In this paper, we refer to this issue as the multi-scale information contained in the aerial images.

Considering the two key points mentioned above, we propose a new building extraction network with an encoder-decoder structure for aerial images in this paper, and we name it the global multi-scale encoder-decoder network (GMEDN). To extract the local and global information, we design a local and global encoder in this paper.
The VGG16 network [29] is used to extract local information through several convolutional layers. Based on the feature maps with local information, a non-local block is introduced to capture global information by mining the similarities between all pixels. To learn multi-scale information, a distilling decoder is developed, which includes a de-convolution branch and a multi-scale branch. The multi-scale branch predicts the final segmentation results by aggregating multi-layer feature maps in the decoder. Our source code is available at https://github.com/smallsmallflypigtang/Building-Extraction. The main contributions of this method are summarized as follows:

1. The proposed GMEDN adopts the encoder-decoder framework with a simple skip connection scheme as the backbone. For the local and global encoder, a VGG16 is used to capture the local information from the aerial images. Based on this, the non-local block is introduced to explore the global information from the aerial images. Combining local and global information, buildings with diverse shapes can be segmented.

2. A distilling decoder is developed for our GMEDN, in which the de-convolution and multi-scale branches are combined to explore the fundamental and multi-scale information from the aerial building images. Through the de-convolution branch, not only the low-level features (e.g., edge and texture) but also the high-level features (e.g., semantics) can be extracted from the images for segmenting the buildings. We name these features the fundamental information. Through the multi-scale branch, the multi-scale information that is used to predict buildings of different sizes can be captured. Integrating the fundamental and multi-scale information, buildings of various sizes can be segmented.

The remainder of this paper is organized as follows. The related work on segmentation networks and aerial building extraction is summarized in Section 2. In Section 3, we describe our GMEDN method in detail. The experiments and discussion are presented in Section 4. Finally, the conclusion of this paper is given in Section 5.

Semantic Segmentation Architecture

Image semantic segmentation is a classical problem in the computer vision community. With the development of deep learning, CNNs have become important in this topic. Here, we roughly divide the current deep CNN-based segmentation networks into three groups. In the first group, the proposed methods are general-purpose [30,31]. In 2015, the FCN model [27] transformed the fully connected layers of a traditional CNN into a series of convolutional layers. It provided a basic approach to solving the semantic segmentation problem with deep learning. Apart from FCN, another group of successful semantic segmentation networks named DeepLab was proposed. There are three versions of DeepLab, published in [32][33][34], respectively. DeepLab-v1 uses a fully connected conditional random field (CRF) at the end of the FCN framework to obtain more accurate information. To control the receptive field of the network, it adopts dilated convolutions to replace the last two pooling layers. Based on DeepLab-v1, DeepLab-v2 adds atrous spatial pyramid pooling (ASPP) to the network. ASPP contains dilated convolutions with different rates, which can capture the context of the image at multiple scales. Moreover, DeepLab-v2 uses a deep residual network instead of the VGG16 [29] used in DeepLab-v1 to improve the performance of the model.
Based on DeepLab-v2, DeepLab-v3 refines the structure of the network so that the segmentation results are further enhanced. In the second group, researchers make full use of the CNN's hierarchical structure to extract rich information from the images, which is beneficial for segmenting complex contextual scenarios and small targets. For example, the pyramid scene parsing network (PSPNet) [35] aggregates the context information of different layers to improve the model's capacity to obtain global information. PSPNet embeds the multi-scale, global, and local information in the FCN-based framework through the pyramid structure. Furthermore, to speed up convergence, the authors add a supervisory loss function to the backbone network. Another popular method, the feature pyramid network (FPN), was introduced in the literature [36]. It uses feature maps of different resolutions to explore objects of different sizes. Through continuous up-sampling and a cross-layer fusion mechanism, both the low-level visual information and the high-level semantic information can be reflected in the output feature maps. In the third group, scholars develop segmentation networks based on object detection algorithms. Inspired by the region selection idea, the mask region-based CNN (Mask R-CNN) was presented in the literature [37]. It adds a mask prediction branch to the structure of Faster R-CNN [38] to complete the semantic segmentation. In addition, RoIAlign is used to replace RoI Pooling to remove the coarse quantization of regions of interest (RoI). Although the segmentation performance of Mask R-CNN is positive, the shared evaluation function would negatively influence the segmentation results. To address this issue, Huang et al. [39] developed the mask scoring R-CNN (MS R-CNN) model in 2019. It adds a MaskIoU block to learn to predict the quality of an instance mask, and it obtains good results.

Aerial Building Extraction

The semantic segmentation methods discussed above were proposed for natural images. Although they can be used to complete building extraction from aerial images, their performance may not be satisfactory. Therefore, many semantic segmentation approaches oriented toward aerial images have been introduced in recent years. Here, we review some popular aerial building extraction networks from the following two aspects. The methods within the first group can be regarded as variations of FCN. Based on the FCN architecture, several post-processing technologies have been developed to improve the performance of aerial building extraction. For example, in [40], the authors designed a patch-based CNN architecture. In the post-processing stage, they combine the low-level features of adjacent regions with CNN features to improve the performance. Shrestha et al. [41] proposed an enhanced FCN for building extraction. This model utilizes a CRF to refine the rough results obtained by the FCN, improving the performance of aerial building extraction. Another method was proposed in the literature [42]: an end-to-end trainable gated residual refinement network (GRRNet). By fusing high-resolution aerial images and LiDAR point clouds, GRRNet considerably improves the performance of building extraction. Besides the FCN family, some other methods have been developed with consideration of the characteristics of aerial images. To exactly extract the buildings from aerial images, high-quality features with multi-scale and detailed information are important.
Therefore, some researchers focus on feature learning. Yuan et al. [43] used a simple CNN to learn features and aggregated feature maps of multiple layers for building prediction. Furthermore, the authors introduce the signed distance function to classify boundary pixels, which helps obtain fine segmentation results. In [44], another feature-learning-based building extraction method was proposed, which uses a spatial residual inception (SRI) module to capture and aggregate multi-scale context information. This model can accurately identify large buildings while preserving global features and local detailed features. Besides the multi-scale schemes, there are some other approaches that can be used to obtain rich features for the semantic segmentation task. For example, Liu et al. [45] proposed a network with an encoder-decoder structure. By adding the skip connection, the information loss caused by the usual pooling can be noticeably reduced. Furthermore, spatial pyramid pooling is embedded in this model to ensure that rich contextual information can be learned. In [46], the authors combined a residual connection unit, an extended sensing unit, and a pyramid aggregation unit to complete the building extraction task. Due to the introduction of the form filter, the boundaries of the segmented results are accurate and smooth.

Overall Framework

The pipeline of our building extraction model GMEDN is shown in Figure 1. It consists of a local and global encoder and a distilling decoder. The local and global encoder is constructed from a basic feature extraction network (VGG16), a non-local block [47], and a connection block. It aims to explore the local and global information from the aerial images. By mining the object-level and spatial context information, diverse and abundant buildings can be fully represented. The details of the local and global encoder are discussed in Section 3.2. The distilling decoder contains a de-convolution branch and a multi-scale branch. The de-convolution branch captures the fundamental information (e.g., low-level visual features and high-level semantics), and the multi-scale branch explores multi-scale information; both are useful for accurately segmenting buildings with different sizes and distributions. The details of the distilling decoder are discussed in Section 3.3.

There is another point we want to touch on. As shown in Figure 1, there are five convolutional layers (i.e., convolution, max-pooling, and batch normalization) and five de-convolutional layers (i.e., up-sampling, convolution, and dropout) in the local and global encoder and the distilling decoder, respectively. Due to the max-pooling and up-sampling operations, some information within the images and feature maps would be lost. To reduce this information loss, the output of the i-th convolutional layer and the output of the i-th de-convolutional layer are summed. The operation mentioned above is the simple skip connection, which is widespread in semantic segmentation networks [27,48,49].

Local and Global Encoder

Before introducing the local and global encoder, we first describe the non-local block. The non-local block [47] is developed to obtain global information by capturing the dependencies among all pixels. The architecture of the non-local block is shown in Figure 2.
Suppose the input and output feature maps of the non-local block are X ∈ R^{H×W×C} and Z ∈ R^{H×W×C}, where H and W represent the height and width of the input feature maps and C indicates the number of channels. First, the input X is convolved by three groups of 1 × 1 convolutions with differently initialized weights to get U ∈ R^{H×W×C/2}, V ∈ R^{H×W×C/2}, and G ∈ R^{H×W×C/2}. Note that the 1 × 1 convolution has two functions [50]. First, it can enhance the non-local block's capacity for non-linear fitting due to the activation function that follows the convolution. Second, it can change the number of channels of the input data by adjusting the number of kernels. Here, according to the literature [47], we set the number of 1 × 1 convolution kernels within each group equal to C/2. Therefore, the channel dimensions of the outputs (U, V, and G) are C/2, which can be regarded as a channel reduction. Taking U as an example, the process can be expressed as

U = K ∗ X + b, i.e., U(i, j) = Σ_{(m,n)} k(m, n) · X(i − m, j − n) + b,

where X is the input feature maps, i and j are the coordinates of pixels of X, K denotes the convolutional kernel, k(m, n) indicates the value of K at the coordinate (m, n), b is the convolution bias, the sign "∗" represents the convolution operation, and the sign "·" denotes value multiplication. Second, to use the similarity relationships between pixels for mining complex contents, the non-local block constructs the similarity matrix R from U and V, and further uses R to obtain the global feature maps Y. To obtain the affinity matrix R, U and V are first reshaped to U_C ∈ R^{HW×C/2} and V_C ∈ R^{C/2×HW}. Then, R ∈ R^{HW×HW} is obtained by

R = U_C × V_C,

where HW is the number of pixels, C/2 indicates the number of channels of each pixel, and the sign "×" represents matrix multiplication. To get the global feature Y, the similarity relationships between pixels are assigned as weights to the feature maps G. Before that, a softmax is used to normalize the affinity matrix R, and a reshape operation is adopted to transform G into G_C ∈ R^{HW×C/2}. The process of getting Y can be represented by the following equation, where the sign "×" again denotes matrix multiplication:

Y = softmax(R) × G_C.

Third, the output of the non-local block, Z ∈ R^{H×W×C}, can be obtained by fusing the global feature Y and the input feature maps X. As a result, Z contains not only the initial characteristics of single pixels but also the resemblance among all of the pixels. To complete the feature fusion, Y is transformed into Y_C ∈ R^{H×W×C} through reshaping and 1 × 1 convolution operations. This process can be expressed as

Z = Y_C + X, with Y_C = Conv_{1×1}(reshape(Y)).

The non-local block has several advantages. First, since the number of channels is halved in the first step, the non-local block is a lightweight module that would not significantly increase the computational cost of the original network. Second, since the input and output have the same scale, the non-local block is a flexible module that can be embedded after any layer. In this paper, we put the non-local block on top of the basic feature extraction network.
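The computation described by these equations can be summarized in a short PyTorch sketch (our own naming; the activation mentioned after the 1 × 1 convolutions is omitted for brevity):

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Sketch of the non-local block described above ([47]); layer names
    are ours. Input X: (B, C, H, W); output Z has the same shape."""

    def __init__(self, channels):
        super().__init__()
        c2 = channels // 2                       # channel reduction to C/2
        self.theta = nn.Conv2d(channels, c2, 1)  # produces U
        self.phi = nn.Conv2d(channels, c2, 1)    # produces V
        self.g = nn.Conv2d(channels, c2, 1)      # produces G
        self.out = nn.Conv2d(c2, channels, 1)    # restores C channels

    def forward(self, x):
        B, C, H, W = x.shape
        u = self.theta(x).flatten(2).transpose(1, 2)  # U_C: (B, HW, C/2)
        v = self.phi(x).flatten(2)                    # V_C: (B, C/2, HW)
        g = self.g(x).flatten(2).transpose(1, 2)      # G_C: (B, HW, C/2)
        r = torch.bmm(u, v)                           # R: (B, HW, HW)
        y = torch.bmm(F.softmax(r, dim=-1), g)        # Y: (B, HW, C/2)
        y = y.transpose(1, 2).reshape(B, -1, H, W)    # reshape to C/2 maps
        return self.out(y) + x                        # Z = Y_C + X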
Based on the non-local block, we design our local and global encoder as follows. The local and global encoder is divided into three blocks, i.e., the basic feature extraction network, the non-local block, and the connection block. First, we adopt VGG16 as the basic feature extraction network. VGG16 is a classical CNN, which here contains five layers. The first, second, and third layers consist of the same components, i.e., one max-pooling operation, three convolutions, and one batch normalization (BN) operation. The fourth layer contains two convolutions, in contrast with the first layer, and the fifth layer contains two convolutions and one batch normalization (BN) operation. The kernel sizes of the convolutions in VGG16 are 3 × 3. The kernel sizes and strides of the max-pooling operations in VGG16 are 3 × 3 and 2, respectively. Suppose the input image is I ∈ R^{H0×W0×C0}; the output of the VGG16 block can be represented by X ∈ R^{H×W×C}. Here, H0, W0, and C0 represent the height, width, and number of channels of the input image. This process can be expressed as

X = F_VGG16(I; W_VGG16),

where F_VGG16(·) denotes the mapping function of the VGG16 block and W_VGG16 is the learnable weights of VGG16. Second, we embed a non-local block on top of VGG16, so that the input of the non-local block is X. According to the above description of the non-local block, the output of the non-local block is Z ∈ R^{H×W×C}. This process can be expressed as

Z = F_non-local(X; W_non-local),

where F_non-local(·) denotes the mapping function of the non-local block and W_non-local represents its learnable weights. The non-local block can capture global information (e.g., buildings' context within an aerial image), which can be used to distinguish the buildings from other objects; thus, global information is beneficial to the building extraction task. Third, we add a connection block on top of the non-local block to fuse the information from the VGG16 and the non-local block. It contains a 3 × 3 max-pooling with a stride of 2 and repeated operations, i.e., 3 × 3 convolution and dropout operations. Here, the convolution operation aims to fuse information through the weight matrix, while the dropout operation is used to avoid overfitting. Suppose the input of the connection block is Z ∈ R^{H×W×C}; the output can be represented by En ∈ R^{H1×W1×C1}, where H1, W1, and C1 denote the height, width, and channels of the feature maps. The connection process can be expressed as

En = F_connection(Z; W_connection),

where F_connection(·) denotes the mapping function of the connection block and W_connection represents its learnable weights.

Distilling Decoder

The distilling decoder is developed to obtain the final prediction mask, which has the same size as the input image. There are two branches in the distilling decoder, i.e., a de-convolution branch and a multi-scale branch. In the de-convolution branch, there are five de-convolutional layers, each constructed by 2× up-sampling, 3 × 3 convolution, and dropout operations. Here, the up-sampling and convolution operations together accomplish the de-convolution operation, and the dropout operation is adopted to prevent over-fitting. As shown in Figure 1, when En is input to the de-convolution branch, the outputs of the 1st, 2nd, 3rd, and 4th layers are D1 ∈ R^{H0/16×W0/16×C_D1}, D2 ∈ R^{H0/8×W0/8×C_D2}, D3 ∈ R^{H0/4×W0/4×C_D3}, and D4 ∈ R^{H0/2×W0/2×C_D4}, respectively. Furthermore, the output of the de-convolution branch is Dec ∈ R^{H0×W0×Cla}, where Cla denotes the number of classes. This process can be formulated as

Dec = F_de-convolution(En; W_de-convolution),

where F_de-convolution(·) denotes the de-convolution branch and W_de-convolution is its weights. The output of the de-convolution branch, Dec, only contains fundamental information, which is insufficient for accurately describing buildings of diverse sizes. Thus, the multi-scale branch is developed to fully explore the different scale information of the input aerial image.
Here, convolution and up-sampling operations are adopted to unify the sizes of the outputs of the different de-convolutional layers. In detail, 8×, 4×, and 2× up-sampling are used for the feature maps D2, D3, and D4, respectively. Meanwhile, a 1 × 1 convolution is applied to reduce their dimensions to the number of classes. In this way, we get the following feature maps: Mul1 ∈ R^{H0×W0×Cla}, Mul2 ∈ R^{H0×W0×Cla}, and Mul3 ∈ R^{H0×W0×Cla}. Then, we concatenate them with Dec to get Fin ∈ R^{H0×W0×4Cla}, which contains both fundamental and multi-scale information. This process is shown in the following equation, where concat(·) concatenates its inputs along the channel dimension:

Fin = concat(Dec, Mul1, Mul2, Mul3).

To reduce the channels of Fin ∈ R^{H0×W0×4Cla}, we transform Fin into out ∈ R^{H0×W0×Cla} through a 1 × 1 convolution operation. Finally, a mask ∈ R^{H0×W0×1} with the values of the semantic classes can be generated from out ∈ R^{H0×W0×Cla} using the argmax function. In this paper, the mask is a binary prediction that denotes which pixels belong to buildings and which to the background. There are some points we should further explain. First, in our distilling decoder, the 3 × 3 convolution is used to learn features from the inputs, while the 1 × 1 convolution is selected to reduce the number of dimensions by adjusting the number of convolutional kernels. Second, since the feature maps of the first de-convolutional layer have little high-level semantic information, we do not take them into account during the multi-scale fusion.
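The multi-scale branch described above can be sketched as follows (PyTorch; the decoder channel widths C_D2-C_D4 and the bilinear up-sampling mode are assumptions, since the text only specifies the up-sampling factors and the 1 × 1 convolutions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Sketch of the multi-scale branch; the decoder widths C_D2..C_D4
    and the bilinear up-sampling mode are assumptions."""

    def __init__(self, c_d2, c_d3, c_d4, n_classes=2):
        super().__init__()
        self.p2 = nn.Conv2d(c_d2, n_classes, 1)  # 1 x 1 conv -> Cla channels
        self.p3 = nn.Conv2d(c_d3, n_classes, 1)
        self.p4 = nn.Conv2d(c_d4, n_classes, 1)
        self.fuse = nn.Conv2d(4 * n_classes, n_classes, 1)

    def forward(self, dec, d2, d3, d4):
        # dec: (B, Cla, H0, W0); D2/D3/D4 at 1/8, 1/4, 1/2 of the input size
        size = dec.shape[-2:]
        up = lambda t: F.interpolate(t, size=size, mode='bilinear',
                                     align_corners=False)
        fin = torch.cat([dec, up(self.p2(d2)), up(self.p3(d3)),
                         up(self.p4(d4))], dim=1)   # Fin: (B, 4*Cla, H0, W0)
        out = self.fuse(fin)                        # out: (B, Cla, H0, W0)
        return out.argmax(dim=1)                    # binary prediction mask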
Datasets Introduction

In this paper, two benchmark datasets are chosen to verify our model, i.e., the Inria aerial image labeling dataset [1] and the Massachusetts building dataset [51]. Both of them have two classes: building and non-building. The buildings in these datasets are diverse in type, various in volume, and different in distribution. Therefore, these two datasets are suitable for validating whether our method is useful. The Inria dataset contains 360 aerial Red-Green-Blue (RGB) images collected from ten cities and covers an area of 810 km². Each city has 36 images, numbered 1 to 36. These images cover different urban settlements, from densely populated areas to sparsely populated forested towns. The sizes of these aerial images are 5000 × 5000 pixels, and their spatial resolution is 0.3 m. In this dataset, only 180 images (corresponding to five cities, i.e., Austin, Chicago, Kitsap, Western Tyrol, and Vienna) and their ground truth are published. Some examples, including images and their ground truth, are shown in Figure 3. As suggested by the authors who released this dataset [1], 31 images are selected randomly from each released city to construct the training set, and the rest of the images are used as the testing set in the following experiments.

Experimental Settings

All of the experiments are carried out on an HP Z840 workstation with a Xeon(R) CPU E5-2630, a GeForce GTX 1080, and 64 GB RAM. Our GMEDN is trained with the Adam optimizer and the sparse softmax cross-entropy loss function. Here, the parameters of the Adam optimizer are set as follows: β1 = 0.9, β2 = 0.999, ε = 1 × 10⁻⁸. In addition, we use a schedule decay of 0.004, a dropout rate of 0.5, and a small learning rate of 2 × 10⁻⁴ to complete the training. We initialize the weights of the encoder (the parts from VGG16) with weights pre-trained on ImageNet [52]. Other weights of our model are initialized with a Glorot uniform initializer that draws samples from a uniform distribution. The optimization is stopped at 40 epochs, and an early stopping scheme is adopted to avoid overfitting. Note that, due to the limitations of GPU memory, the original images are cropped into 256 × 256 image patches in this paper. For the Inria dataset, a non-overlapping grid scheme is used to generate the image patches. For the Massachusetts dataset, we use grids with a stride of 156 to augment the limited training patches. To evaluate the effectiveness of our model numerically, two assessment criteria are selected: overall accuracy (OA) and intersection over union (IoU). OA is the ratio of the number of correctly predicted pixels to the total number of pixels in the test set, defined as

OA = N(correct_pixels) / N(total_pixels), (10)

where N(correct_pixels) is the number of correctly predicted pixels and N(total_pixels) is the total number of testing pixels. IoU measures the overlap between the prediction and the target label, and it is widely used in binary semantic segmentation tasks [6,53]. Here, IoU is defined as

IoU = |A ∩ B| / |A ∪ B|,

where A is the prediction and B is the target label. IoU equals 0 when A and B do not overlap, and 1 when A and B are identical.
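Both criteria are straightforward to compute from binary masks; a minimal numpy sketch of Equation (10) and the IoU definition above:

import numpy as np

def oa_and_iou(pred, target):
    """OA and IoU for binary masks, where 1 = building and 0 = background."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    oa = (pred == target).mean()              # correct pixels / total pixels
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union if union else 1.0     # two empty masks are identical
    return oa, iou

# Toy check on a 4-pixel example:
print(oa_and_iou([1, 1, 0, 0], [1, 0, 0, 0]))  # (0.75, 0.5)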
Building Extraction Examples

The building extraction results of our GMEDN model are exhibited and studied in this section. For the Inria dataset, five images are selected from the published cities, and the extraction results are exhibited in Figure 5. For each block, the original image, the ground truth map, and the results of our model are located at the top, middle, and bottom, respectively. From these images, we can see that our method extracts the diverse buildings well. Taking Chicago as an example, although the buildings are dense in distribution and diverse in shape, our extraction result is similar to the ground truth map: the locations of buildings are accurate, and the boundaries of buildings are smooth. Taking Kitsap as another example, although the buildings are distributed sparsely and irregularly, our model also obtains promising extraction results compared with the ground truth. These encouraging results prove that our GMEDN is useful for the Inria dataset. For the Massachusetts dataset, two images are chosen randomly for building extraction by our model. The results are shown in Figure 6. As mentioned in Section 4.1, the resolution of these two images is 1 m. Thus, compared with the images from the Inria dataset, the buildings within these aerial images are smaller in size but larger in number, which increases the difficulty of extraction. Even so, by observing the extraction maps, we can easily find that the proposed method still achieves good performance. Buildings under the shadows of obstacles (such as trees and roads) are segmented well. These good visualization results indicate that our GMEDN is effective for the building extraction task.

Comparisons with Different Methods

To further validate our method, we compare our GMEDN with the following four popular methods.

• Fully convolutional network (FCN). This may be the first general-purpose semantic segmentation network; it was proposed in 2015 [27]. The network consists of a common classification CNN (e.g., AlexNet [54] or VGG16 [29]) and several de-convolution layers. The CNN aims to learn high-level features from the images, while the de-convolution layers are used to predict dense labels for pixels. Compared with traditional methods, its strong performance successfully attracted scholars' attention.

• Deep convolutional segmentation network (SegNet). SegNet, introduced in the literature [55], is an encoder-decoder model. SegNet first selects VGG16 as the encoder for extracting semantic features. Then, a symmetrical network is established as the decoder for transforming the low-resolution feature maps into full-input-resolution maps and obtaining the segmentation results. Furthermore, a non-linear up-sampling scheme is developed to reduce the difficulty of model training and to generate the sparse decoder maps for the final prediction.

• U-Net with pyramid pooling layers (UNetPPL). The UNetPPL model was introduced in [56] for segmenting buildings from high-resolution aerial images. By adding pyramid pooling layers (PPL) to the U-Net [48], not only the shapes but also the global context information of the buildings can be explored, which ensures the segmentation performance.

• Symmetric fully convolutional network with discrete wavelet transform (FCNDWT). Taking the properties of aerial images into account, the FCNDWT model [49] fuses deep features with textural features to explore objects from both spatial and spectral aspects. Furthermore, by introducing the DWT into the network, FCNDWT is able to leverage frequency information to analyze the images.

Note that all of the comparisons were run by ourselves, and their experimental settings are equal to ours for the sake of fairness. The OA and IoU scores of the different methods on the Inria dataset are displayed in Table 1. It is obvious that the proposed GMEDN performs best. Among all of the comparison methods, the performance of FCN is the weakest. This is because FCN is a general-purpose semantic segmentation model that does not consider the specific characteristics of aerial images. Compared with FCN, SegNet performs better thanks to its sufficient de-convolution layers and its specific up-sampling scheme. Due to its multi-scale aggregation scheme, UNetPPL is stronger than FCN and SegNet. However, its performance is still not as good as that of FCNDWT, since the UNetPPL model does not take into account the textural features, which are beneficial for distinguishing objects from the complex background. These encouraging results demonstrate that our model is useful for the building extraction task. The IoU and OA scores of the different models on the Massachusetts dataset are summarized in Table 2. Similar to the results on the Inria dataset, our GMEDN still achieves the best performance among all of the methods. The gains of our GMEDN in IoU/OA are 3.89%/1.12% (FCN), 6.96%/1.22% (SegNet), 4.74%/0.8% (UNetPPL), and 2.43%/0.48% (FCNDWT). Beyond this, a notable observation is that there is a distinct performance gap between GMEDN and the other methods in IoU. The reasons are summarized as follows. First, the details of buildings (e.g., shapes and edges) can be grasped by the backbone network. Second, by adding the non-local block, global information can be obtained, which can be used to distinguish objects from the complex background. Third, the multi-scale scheme helps our GMEDN model capture buildings of diverse sizes.
From those results, the following observations can be made. First, FCN and SegNet produce smooth boundary predictions, and they perform well on small buildings. Second, UNetPPL can extract more diverse buildings, since the PPL module can learn multi-scale information from aerial images. Third, FCNDWT achieves good behavior on building locations and boundaries, as frequency information is fused. Compared with these methods, the maps generated by our GMEDN have clear boundaries and precise locations. These positive visual extraction results prove the effectiveness of our method again.

Ablation Study

As mentioned in Section 3, GMEDN can be divided into four sub-models: the basic encoder-decoder network (i.e., VGG16, the de-convolution branch, and the skip connection), the non-local block, the connection block, and the multi-scale block. To study their contributions to the building extraction task, we design four networks: Net 1 (the basic encoder-decoder network only), Net 2 (Net 1 plus the non-local block), Net 3 (Net 2 plus the connection block), and Net 4 (all sub-models, i.e., the full GMEDN). Note that the experimental settings of the networks in this section are the same as those mentioned in Section 4.2. The results of these networks on the two datasets are shown in Figure 9, where Figure 9a shows the OA scores and Figure 9b displays the IoU scores. The first six groups of bars are the results on the Inria dataset, and the last group of bars gives the results on the Massachusetts dataset. From Figure 9, we can see that the performance of the different networks grows with the number of sub-models. In detail, Net 1 is the weakest among all compared networks, since it only consists of the basic encoder-decoder network. After adding the non-local block, the performance of Net 2 is stronger than that of Net 1, which proves the usefulness of the non-local block. Due to the fusion scheme, Net 3 outperforms Net 1 and Net 2, which confirms the contribution of the connection block. Integrating all sub-models, Net 4 achieves the best performance. Furthermore, the performance gap between Net 4 and the other networks is distinct. The results discussed above demonstrate that each sub-model makes a positive contribution to our GMEDN model.

Robustness Study of the Non-Local Block

In this section, the robustness of the non-local block is studied. To this end, we vary the position of the non-local block. Here, to study the influence of the non-local block on GMEDN, we construct three models by changing the non-local block's position:

• Model 1: embedding the non-local block after the third layer of VGG16;
• Model 2: embedding the non-local block after the fourth layer of VGG16;
• Model 3: embedding the non-local block after the fifth layer of VGG16.

To get the segmentation results, the three models are trained with the experimental settings discussed in Section 4.2. The OA and IoU scores on the Inria and Massachusetts datasets are exhibited in Figure 10, in which the first six sets of bars correspond to the Inria dataset, while the last set of bars corresponds to the Massachusetts dataset. It is easy to see that the performance differences between the three models are small. Taking Vienna within the Inria dataset as an example, the three models' OA scores are 94.43%, 94.32%, and 94.54%, while their IoU scores are 80.47%, 80.02%, and 80.72%. This indicates that our GMEDN is not sensitive to the position of the non-local block.

Running Time

Here, we discuss the time costs of our GMEDN from the training and inference aspects.
As mentioned in Section 4.2, the input data are the 256 × 256 aerial image patches cropped from the original aerial images. The volumes of the training sets are 1805 and 360 patches for the Inria and Massachusetts datasets, respectively. The training times for the two datasets are 5.71 h and 5.23 h, respectively. For inference, the time cost of predicting one patch is 0.26 s. Therefore, we need 98.7 s to segment an aerial image (5000 × 5000) from the Inria dataset, and 9.36 s to complete the building extraction from an aerial image (1500 × 1500) of the Massachusetts dataset.

Conclusions

With consideration of the characteristics of aerial images, a simple yet useful method (GMEDN) is proposed in this paper for building extraction, based on the encoder-decoder framework with a skip connection.

1. To extract the local and global information from the aerial images for fully describing buildings with various shapes, a local and global encoder is developed. It consists of a VGG16, a non-local block, and a connection block. VGG16 is used to learn local information through several convolutional layers, the non-local block aims at learning global information from the similarities of all pixels, and the connection block further integrates the local and global information.

2. To explore the fundamental and multi-scale information from the aerial images for capturing buildings of different sizes, a distilling decoder is developed. It contains a de-convolution branch and a multi-scale branch. The de-convolution branch focuses on learning the buildings' representations (low- and high-level visual features) at different scales through several de-convolutional layers, and the multi-scale branch aims at fusing them to improve the discrimination of the prediction mask.

3. To reduce the information loss caused by the max-pooling (encoder) and up-sampling (decoder) operations, a simple skip connection is added between the local and global encoder and the distilling decoder.

The introduced GMEDN accomplishes building extraction in an end-to-end manner, and its superiority is confirmed by the encouraging experimental results on two aerial image datasets. Compared with some existing methods, for the Inria dataset, the minimum improvements obtained by GMEDN are 0.26% in OA and 1.39% in IoU. For the Massachusetts dataset, the minimum gains achieved by GMEDN are 0.48% in OA and 2.43% in IoU. Although GMEDN achieves positive results in building extraction tasks, it is not a lightweight network, which limits its practicability in many realistic scenarios. To address this issue, we will focus on developing a lighter model for the building extraction task in the future.
8,440.6
2020-07-22T00:00:00.000
[ "Computer Science", "Environmental Science", "Engineering" ]
Benzyl 2-((E)-Tosyliminomethyl)phenylcarbamate

Benzyl 2-((E)-tosyliminomethyl)phenylcarbamate was prepared in good yield by the condensation reaction of benzyl 2-formylphenylcarbamate with p-toluenesulfonyl amine and was fully characterized. The structure of the newly synthesized compound was determined using 1H- and 13C-NMR, IR, and mass spectral data.

Introduction

The Schiff base, structurally known as an imine or azomethine, is a nitrogen analog of an aldehyde or ketone in which the C=O group is replaced by a C=N-R group after elimination of a water molecule [1]. Schiff bases are among the most widely used organic compounds, serving as pigments and dyes, catalysts, and intermediates in organic synthesis [2]. Schiff bases have also been shown to exhibit a broad range of biological activities, including antibacterial, antimalarial, anti-inflammatory, antiviral, and anticancer properties [3][4][5]. In continuation of our research interest in 2-aminobenzaldehyde for the synthesis of highly functionalized chiral heterocycles [6][7][8][9], we report here the preparation of a novel benzyl 2-((E)-tosyliminomethyl)phenylcarbamate. The synthesis of the title compound 3 was achieved in one step, as presented in Scheme 1, by the condensation reaction of benzyl 2-formylphenylcarbamate (1) [10] with p-toluenesulfonyl amine (2). The reaction was carried out in toluene in the presence of 2 mol% of boron trifluoride diethyl etherate as a catalyst and provided the desired product in good yield. The structure of compound 3 was confirmed by 1H- and 13C-NMR, IR, and mass spectral data, and all data are in accordance with the assumed structure.

General Information

All reagents were used as received without further purification. Organic solutions were concentrated under reduced pressure using a Büchi rotary evaporator. Chromatographic purification of the title compound 3 was accomplished using forced-flow chromatography on ICN 60 32-64 mesh silica gel. Thin-layer chromatography (TLC) was performed on EM Reagents 0.25 mm silica gel 60-F plates. Developed chromatograms were visualized by fluorescence quenching and anisaldehyde stain.
1H- and 13C-NMR spectra were recorded on a 400 MHz instrument as noted and were internally referenced to residual protio solvent signals. Data for 1H-NMR are reported as follows: chemical shift (δ, ppm), multiplicity (s = singlet, d = doublet, t = triplet, m = multiplet), integration, coupling constant (Hz), and assignment. Data for 13C-NMR are reported in terms of chemical shift. IR spectra were recorded on a Perkin-Elmer 1600 FT-IR spectrometer (Waltham, MA, USA) and are reported in terms of frequency of absorption (cm−1). High-resolution mass spectrometry data were recorded on a JEOL JMS-700 MStation mass spectrometer (JEOL, Tokyo, Japan).

Benzyl 2-((E)-Tosyliminomethyl)phenylcarbamate (3)

p-Toluenesulfonyl amine (2, 94 mg, 0.55 mmol) was added to a solution of BF3·Et2O (1 µL, 0.01 mmol) and benzyl 2-formylphenylcarbamate (1, 128 mg, 0.50 mmol) in toluene (2 mL) at room temperature. The resulting mixture was refluxed for 60 h until complete consumption of benzyl 2-formylphenylcarbamate (1) was observed, as determined by TLC. After being cooled to room temperature, water (2 mL) was added, and the products were extracted with dichloromethane (3 × 5 mL). The organic phase was washed with saturated aqueous NaCl solution (2 × 5 mL), dried over anhydrous MgSO4, and concentrated in vacuo. The crude residue was purified by flash silica gel column chromatography using EtOAc/hexane (1/10) as the eluent to afford the desired title compound 3 (154 mg, 64%).
1,026.2
2016-10-17T00:00:00.000
[ "Chemistry" ]
K + Λ and K + Σ 0 photoproduction with fine center-of-mass energy resolution T.C. Jude a,∗,1, D.I. Glazier a,2, D.P. Watts a,∗, P. Aguar-Bartolomé b, L.K. Akasoy b, J.R.M. Annand c, H.J. Arends b, K. Bantawa d, R. Beck e, V.S. Bekrenev f, H. Berghäuser g, A. Braghieri h, D. Branford a, W.J. Briscoe i, J. Brudvik i, S. Cherepnya j, B.T. Demissie i, M. Dieterle k, E.J. Downie b,i, L.V. Fil’kov j, R. Gregor g, E. Heid b, D. Hornidge l, I. Jaegle k, O. Jahn b, V.L. Kashevarov b,j, I. Keshelashvili k, R. Kondratiev m, M. Korolija n, A.A. Koulbardis f, S.P. Kruglov f, B. Krusche k, V. Lisin m, K. Livingston c, I.J.D. MacGregor c, Y. Maghrbi k, D.M. Manley d, Z. Marinides i, T. Mart o, M. Martinez b, J.C. McGeorge c, E.F. McNicoll c, D.G. Middleton l, A. Mushkarenkov h, B.M.K. Nefkens i, A. Nikolaev e, V.A. Nikonov f, M. Oberle k, M. Ostrick b, P.B. Otte b, B. Oussena b,i, P. Pedroni h, F. Pheron k, A. Polonski m, S. Prakhov i, J. Robinson c, G. Rosner c, T. Rostomyan h, A.V. Sarantsev e, S. Schumann b, M.H. Sikora a, D.I. Sober p, A. Starostin i, I. Strakovsky i, I.M. Suarez i, I. Supek n, M. Thiel g, A. Thomas e, M. Unverzagt b, D. Werthmüller k, L. Witthauer k, F. Zehr k Measurements of γ p → K + Λ and γ p → K + Σ 0 cross-sections have been obtained with the photon tagging facility and the Crystal Ball calorimeter at MAMI-C. The measurement uses a novel K + meson identification technique in which the weak decay products are characterized using the energy and timing characteristics of the energy deposit in the calorimeter, a method that has the potential to be applied at many other facilities. The fine center-of-mass energy (W) resolution and statistical accuracy of the new data result in a significant impact on partial wave analyses aiming to better establish the excitation spectrum of the nucleon. The new analyses disfavor a strong role for quark-diquark dynamics in the nucleon. Introduction Establishing the excitation spectrum of a composite system has historically been one of the most effective ways to determine the detailed nature of the interactions between its constituents. Establishing the excitation spectrum of the nucleon, a complex bound system of valence quarks, sea quarks and gluons, is currently one of the highest priority goals of hadron and nuclear physics. The spectrum is a fundamental constraint on our understanding of the nature of QCD confinement in light quark systems. Recent advances in theory have linked the excitation spectrum to QCD via lattice predictions [1] and holographic dual theories [2]. These complement the phenomenological QCD-based models such as constituent quark models [3] and soliton models [4]. Despite its importance, the spectrum of nucleon resonances remains poorly established, with the basic properties (electromagnetic couplings, masses, widths) and even the existence of many excited states uncertain (for a review see Ref. [5]). In an attempt to address this shortcoming, real photon beams have been used to excite nucleon targets, providing accurate data to constrain partial-wave analyses (PWA) and reaction models used to extract information on the excitation spectrum [6][7][8][9][10][11]. This is the method of choice for such studies, as the photon probe has a well-understood interaction (QED) and polarization degrees of freedom (linear and circular).
A major program of measurements utilizing polarized photon beams, polarized targets and final-state nucleon polarimeters is currently underway with the goal of achieving a "complete", model-independent measurement of photoproduction reactions. The process γ p → K + Λ has the lowest energy threshold for photoproduction reactions with final-state particles containing strange valence quarks. This is a crucial channel, as many models predict that some poorly established or "missing" resonances couple strongly to strange decay channels [12]. Isospin conservation demands that only N * and not ∆ resonances contribute to the reaction, simplifying the interpretation of the data. The weak decay of the Λ allows access to its polarization from the distribution of its decay particles and ensures that γ p → K + Λ will be the first photoproduction reaction measured with a complete set of experimental observables, providing a benchmark channel for PWAs. Recent measurements of γ p → K + Λ have been obtained with the SAPHIR [13,14] and CLAS detectors [15,17]. Unfortunately, the cross-section data have discrepancies that lead to significant differences in the PWA solutions when using either data set (see Ref. [18] for a review). Measurements of γ p → K + Σ 0 give better agreement, albeit with discrepancies at backward kaon angles. γ p → K + Λ data with fine center-of-mass energy (W) resolution would be an important constraint on the existence of narrow N * states [19,20]. A number of recent searches for a narrow N * near 1700 MeV (Ref. [21] for example) were motivated by the prediction of a non-strange, nucleon-like member of the anti-decuplet with strong photocoupling to the neutron [22]. In response to recent evidence, a speculative new N * state at 1685 MeV was included in the recent Particle Data Group listings [23]. However, alternative explanations for the narrow structures are also offered, based on interference structures arising from known resonances [24] or coupled-channel effects [25]. Disentangling the cause of the narrow structure in this mass region is likely to require accurate cross-section and polarization observables for a range of reaction channels. The experiment The data presented here were taken with the Crystal Ball detector [26] at the Mainz Microtron accelerator facility (MAMI-C) [27] in a beamtime of 430 hours. The energy-tagged photon beam of ∼10⁵ γ MeV⁻¹ s⁻¹ was produced by impinging the MAMI-C 1557.4 MeV electron beam on a thin copper radiator, with the photon energy (E γ) determined by momentum analysis of the recoil bremsstrahlung electrons in the Glasgow Photon Tagger [28]. Photon energy resolutions in the range of 3-4 MeV were achieved, corresponding to resolutions in the center-of-mass energy, W, in the range of 1.0-2.4 MeV. The photon beam was incident on a 10 cm long liquid hydrogen target comprising 4.2 × 10²³ protons per cm². The Crystal Ball (Fig. 1) is a segmented calorimeter of 672 NaI crystals covering 94% of 4π steradians. Each crystal has separate TDC and ADC readouts, giving a time resolution of 2-3 ns and a fractional energy resolution of ΔE/E ≈ 1.7%/(E γ /GeV)^0.4. A Particle Identification Detector (PID), consisting of 24 plastic scintillators forming a cylinder [29], surrounded the target and gave an energy signal for charged particles. The experimental trigger required a total energy deposit in the Crystal Ball crystals of 360 MeV and at least three of 45 geometric trigger sections to fire.
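The quoted W resolution follows from standard two-body kinematics for a photon on a proton at rest, W = sqrt(m_p² + 2 m_p E_γ), so a tagged-photon spread ΔE_γ maps onto ΔW ≈ (m_p/W) ΔE_γ. A minimal sketch of the conversion (the specific E_γ values in the loop are illustrative, within the tagged range):

```python
import math

M_P = 0.9383  # proton mass in GeV

def w_cm(e_gamma: float) -> float:
    """Center-of-mass energy W (GeV) for a photon of energy e_gamma (GeV) on a proton at rest."""
    return math.sqrt(M_P**2 + 2.0 * M_P * e_gamma)

def dw(e_gamma: float, de_gamma: float) -> float:
    """Propagate the tagged-photon energy spread to W: dW = (m_p / W) * dE_gamma."""
    return M_P / w_cm(e_gamma) * de_gamma

for eg in (0.95, 1.2, 1.5):  # illustrative photon energies (GeV)
    print(f"E_gamma = {eg:.2f} GeV -> W = {w_cm(eg)*1e3:.0f} MeV, "
          f"dW = {dw(eg, 0.0035)*1e3:.2f} MeV for dE_gamma = 3.5 MeV")
```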
K + identification in the Crystal Ball The extraction of the K + Λ and K + Σ 0 channels is complicated by the much larger yields from non-strange channels. This work pioneers a new method of identifying the K + in which its weak decay products are characterized by using the energy and timing characteristics of the detector hits in a segmented calorimeter. The two dominant decay modes are K + → μ + ν μ (muonic) and K + → π + π 0 (pionic), with branching ratios of 64% and 21%, respectively. The validity of the new technique was tested extensively by comparing a full Geant4 [31] simulation of the apparatus with the experimental data. The main results of these studies are presented in Figs. 2 and 3 and discussed below. Each cluster of hit crystals produced from a charged particle event in the Crystal Ball was separated into two "sub-clusters". The "incident-cluster" (IC) comprised those crystals having a timing coincidence within ±3σ of the timing of the photoreaction in the target, where σ is the achievable coincidence timing resolution (∼3 ns). Only events with a summed IC energy above 25 MeV and consisting of only one or two crystals were retained. The crystals with coincidence times at least 10 ns later than the photoreaction were assumed to be part of the "decay-cluster" (DC) from the decay of the stopped K +. (Fig. 3 caption, in part: (c) the decay cluster linearity and (d) the decay energy localization for experimental data (black) and simulated data for muonic (pionic) decays in blue (red); the simulated data have been scaled to the integral of the experimental data.) A minimum summed DC energy of 75 MeV and at least 4 crystals in the DC were required. A cluster pattern for a typical muonic decay event visualized in the Geant4 simulation is presented in Fig. 1. Fig. 2 shows the energy spectrum of the DC, exhibiting a peak at 150 MeV consistent with the energy of the μ + from K + → μ + ν μ decay at rest. A shoulder extends to 350 MeV, which is the maximum energy deposition for the pionic decay (K + → π + π 0). Fig. 2(b) shows the time difference between the IC and DC. An exponential fit gives a lifetime in agreement with the accepted K + lifetime of approximately 12 ns. To separate events into the two dominant K + decay modes, two parameters were used: the fractional energy in the furthest crystal in the DC with respect to the total energy in the DC (the decay energy localization), and the average difference in angle between each crystal in the DC and the IC (the decay cluster linearity). Fig. 3 shows these parameters plotted for both experimental and simulated data. Good agreement between data and simulation is observed, with small deviations only evident in regions where the pionic decay mode dominates. The pionic decays have an increased sensitivity to the systematics of the modeling of the low-energy thresholds in the CB crystals in the simulation. For this reason, and also because the shower shape gave an improved K + momentum reconstruction, only the dominant muonic decay events were retained for further analysis, by applying the two-dimensional selection cuts in Fig. 3(a). A small proportion of pionic decay events and other decay modes (such as K + → π 0 e + ν e with a branching ratio of 5%) was expected to remain in the yield.
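The lifetime extraction from the IC-DC time differences can be illustrated with a toy Monte Carlo: exponential decay times smeared by the ∼3 ns coincidence resolution, fitted with an exponential in the tail. This is a sketch of the generic procedure, not the authors' analysis code; the event count and binning are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
TAU_K = 12.38  # accepted K+ lifetime in ns

# Toy IC-DC time differences: exponential decay smeared by the ~3 ns
# coincidence timing resolution quoted in the text.
dt = rng.exponential(TAU_K, size=20000) + rng.normal(0.0, 3.0, size=20000)

counts, edges = np.histogram(dt, bins=60, range=(10.0, 70.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def expo(t, n0, tau):
    return n0 * np.exp(-t / tau)

# Fit the tail (t > 10 ns), where the Gaussian smearing leaves the slope intact.
(n0, tau_fit), _ = curve_fit(expo, centers, counts, p0=(counts[0], 10.0))
print(f"fitted lifetime: {tau_fit:.1f} ns (input {TAU_K} ns)")
```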
The IC summed energy was then utilized in a ΔE − E analysis, with the ΔE provided by the signal in the PID, and used to reconstruct the momentum of the K +. This new K + identification technique enables K + detection without the need for large-scale spectrometers or Cherenkov detectors and has wide applicability to other facilities. The technique has already been incorporated into the BGO-OD experiment [30] and will be the basis of a new online K + trigger at MAMI, significantly increasing future K + yields. The technique is also a viable method for identifying K + in fast timing environments such as in laser-plasma based accelerators. Extracting K + Λ and K + Σ 0 differential cross-sections A new technique to cleanly separate the γ p → K + Λ and K + Σ 0 yields was used, via the identification of the decay Σ 0 → Λγ in the Crystal Ball. Fig. 4(a) shows the energy of neutral particles detected in coincidence with the K +, boosted into the rest frame of the hyperon. The peak at 77 MeV is from the detection of the γ from the Σ 0 decay, having an energy corresponding to the Σ 0 -Λ mass difference. Events with energies between 55 and 95 MeV were selected as decay-γ candidate events for Σ 0 → Λγ. From Fig. 4(a) it is clear that a background of additional uncharged events is also present, arising from photons or neutrons from Λ decays. The decay-γ detection efficiency was determined with simulated data to behave linearly with E γ and to be approximately 60%. The false decay-γ detection efficiency from K + Λ events was approximately 9% (Fig. 4(b)). Fig. 4(c) shows the reconstructed missing mass of the system recoiling from the K + over a restricted kinematic range. The Σ 0 and Λ masses are clearly visible, with the relative contribution of the Σ 0 enhanced by the requirement of a decay-γ candidate. The yield of coincident decay-γ events (filled violet line) has been scaled according to the decay-γ detection efficiency. This efficiency-corrected yield was used to subtract the Σ 0 contribution from the data for each kinematic bin. The remaining yield, attributed to K + Λ after this subtraction, is shown by the filled shaded thick black line in Fig. 4(c). (Fig. 4 caption, in part: (a) neutral particle energies in the hyperon rest frame for experimental data (black), simulated K + Λ data (orange) and simulated K + Σ 0 data (green); (b) simulated decay-γ detection efficiency for Σ 0 → Λγ (green) and false decay-γ detection efficiency for K + Λ (orange); (c) experimental data for the missing mass recoiling from the K + for the interval 1.086 < E γ < 1.229 GeV and cos θ CM K = −0.1, used to extract the K + Λ yield, without (unfilled red lines) and with, efficiency corrected (filled violet lines), a decay-γ candidate, and the subtracted K + Λ yield (thick black line, shaded fill); (d) simulated K + Λ yield for the same scenario as (c); (e) experimental data for the missing mass recoiling from the K + for the same interval as in (c), used to extract the K + Σ 0 yield; (f) simulated K + Σ 0 yield for the same scenario as (e).) The yield of simulated K + Λ events for the same kinematic bin is shown in Fig. 4(d). The simulation reproduces the shape of the K + Λ distribution observed in the experiment.
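The hyperon separation rests on the missing mass recoiling against the K+, which follows from four-momentum conservation for a beam photon on a proton at rest: M_miss² = (E_γ + m_p − E_K)² − |p_γ − p_K|². A minimal sketch of this reconstruction (the kaon momentum and angle below are illustrative values chosen to be consistent with γ p → K + Λ, so the result lands at m_Λ ≈ 1.116 GeV):

```python
import math

M_P, M_K = 0.9383, 0.4937  # proton and K+ masses in GeV

def missing_mass(e_gamma: float, p_k: float, theta_k: float) -> float:
    """Mass recoiling against a K+ of momentum p_k (GeV) at lab angle theta_k (rad),
    for a beam photon of energy e_gamma (GeV) on a proton at rest."""
    e_k = math.sqrt(p_k**2 + M_K**2)
    e_miss = e_gamma + M_P - e_k
    # beam photon along z; subtract the kaon momentum components
    pz = e_gamma - p_k * math.cos(theta_k)
    pt = -p_k * math.sin(theta_k)
    return math.sqrt(e_miss**2 - pz**2 - pt**2)

# Illustrative kinematics consistent with gamma p -> K+ Lambda:
print(missing_mass(1.15, 0.268, math.radians(60)))  # ~1.116 GeV, the Lambda mass
```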
The relative contribution of misidentified decay-γ events to the yield, arising from the Λ decay products, is shown by the filled violet line under the Λ mass peak. These misidentified events reduce the extracted K + Λ yield by approximately 5%. This loss in yield, however, cancels out in the cross-section calculation, as the same analysis procedure is applied to the simulated data which are used to determine the detection efficiency. Fig. 4(e,f) shows the same experimental and simulated missing-mass data, but here the yield of non-coincident decay-γ events has been scaled according to the false decay-γ detection efficiency from K + Λ events (thin solid red line). This corrected yield was used to subtract the K + Λ contributions, leaving only contributions attributed to K + Σ 0 events (filled shaded thick black lines). To determine the systematic error in separating the K + Λ and K + Σ 0 yields, a method of fitting Gaussian functions to the total missing-mass spectra was also used, giving agreement with the above method to better than 4%. Detection efficiencies were obtained by analyzing Geant4-simulated K + Λ and K + Σ 0 events, including appropriate timing and energy resolutions and using angular distributions from the SAID PWA [9]. Experimental trigger conditions were implemented as described in Ref. [32]. A maximum detection efficiency of approximately 10% was achieved. The modeling of K + hadronic interactions in the Crystal Ball gave a maximum systematic uncertainty of 4% in the yield, increasing with E γ. This was assessed from comparisons of different physics models in the simulation, and by switching off hadronic interactions. Contamination from other channels passing the selection cuts (dominantly γ p → pπ + π −) gave an uncertainty typically less than 4% in the K + Λ yield, and only at very backward angles. The required identification of the decay-γ for K + Σ 0 rendered contamination in the K + Σ 0 yield from other channels negligible. (Fig. 5 caption, in part: SAPHIR data [13]; blue open diamonds are CLAS data of Bradford et al. [15]; cyan solid squares are CLAS data of Dey et al. [16]; the thin black line is the current BnGa 2011-2 solution [11] and the thick black line is the BnGa 2011-02 solution including the new K + Λ and K + Σ 0 data [33]; the SAPHIR data have cos θ CM K intervals shifted backwards by 0.05 relative to the given values.) Systematic effects from the modeling of the experimental trigger in the simulation (estimated from a 10 MeV variation of the Crystal Ball energy sum threshold) were typically 4% near threshold for K + Λ, reducing with E γ. For K + Σ 0, the uncertainty was 2-3% near threshold and only at very backward angles. Systematic errors from non-hydrogen components of the target cell, the target cell length and the PID efficiency were each less than 1%. Results and interpretation The quality of the new Crystal Ball data is illustrated in Figs. 5 and 6, where cross-sections for K + Σ 0 and K + Λ as a function of W are shown for selected center-of-mass K + polar angle bins (θ CM K), compared with previous SAPHIR [13] and CLAS [15][16][17] data. For clarity, the data are rebinned by a factor of two; the attainable W resolution of the new data is, however, a factor of 4 to 10 improvement over previous data. After normalizing for the different bin widths, the statistical accuracy of the new data
is typically a factor of 1.5 better than that of previous data, except at forward K + angles where the accuracy is comparable. (Fig. 6 caption, in part: SAPHIR data [13]; blue open diamonds are CLAS data of Bradford et al. [15] and green filled triangles are CLAS data of McCracken et al. [17]; the thin black line is the current BnGa 2011-2 solution [11] and the thick black line is the BnGa 2011-02 solution including the new K + Λ and K + Σ 0 data [33]; the thin red and blue lines are fits from the KM model to SAPHIR data and CLAS data [34], respectively; the SAPHIR data have cos θ CM K intervals shifted backwards by 0.05 relative to the given values.) The kinematic range covered by the new data is shown in Fig. 7. The new γ p → K + Σ 0 data (Fig. 5) are consistent with the world data over most of the angular range, demonstrating that the systematic errors of the new detection techniques are well understood. At backward angles, for W below 1.85 GeV, there is a divergence between the previous measurements, with the CLAS data showing higher cross-sections than SAPHIR. The new data give better agreement with the SAPHIR data [13]. The γ p → K + Λ data show general agreement with the previous data and confirm the existence of the broad peak-like structure centered around 1670 MeV at backward K + angles. The data are compared to predictions from PWAs in the Kaon Maid (KM) [8] and Bonn-Gatchina (BnGa) [10] frameworks, which are constrained by the various combinations of data sets indicated and have different resonance contributions and helicity couplings (for a detailed description see [10]). The addition of the new K + Σ 0 and K + Λ data discriminates between these solutions. Only BG2011-02 can fit the new data and the world dataset, with satisfactorily low χ² values of 1.3 and 1.2 for K + Σ 0 and K + Λ, respectively [35]. Before fitting to the new data, the BG2011-02 solution described the new data with a χ² of 1.9 for both reaction channels. The most significant difference between the BG2011-1 and BG2011-2 solutions is that the latter supports the need for two P 13 nucleon resonances close in mass: a P 13 (1900) and a P 13 (1975). Despite constituent quark models (CQM) predicting the existence of two 3/2 + nucleon states in the region 1850-2000 MeV, it is difficult to explain two such states in the framework of a quark-diquark model or under the assumption of chiral symmetry restoration. The new experimental data therefore provide new constraints on the dynamics of quarks within the nucleon [10]. The well-defined structure in the K + Λ cross-section around 1670 MeV at backward K + angles provides a valuable constraint on the existence and width of the disputed [36] P 11 (1710) resonance. To fit the structure, a 30% reduction in the resonance width was necessary in the BnGa analysis. Interestingly, this produces a width now consistent with the other sightings of this resonance [5]. The improved statistical accuracy and W resolution of the new data allow constraints on the existence of structures in the cross-section arising from narrow resonance states, interferences between resonances or coupled-channel effects. There are indications of structure between 1650-1700 MeV and around 1740 MeV which are not described by any of the PWA models. The total cross-section for γ p → K + Λ has been debated in the literature, where structure around 1900 MeV has been largely interpreted as evidence for a missing D 13 resonance (see for example [18] and references therein). Constraints from the new data lead to revised extrapolated total cross-sections, as shown in Fig. 8.
The cross-sections extrapolated using the BnGa 2011-02 solution are reduced below 1900 MeV with the inclusion of the new data. The cross-sections extrapolated from the KM model show a reduction mainly in the region around the first peak in the cross-section at 1700 MeV. The structure at 1900 MeV is still evident in the revised extrapolations. Conclusions Precision measurements of the γ p → K + Λ and γ p → K + Σ 0 differential cross-sections have been obtained with a new K + identification method which has wide applicability at other facilities. (Fig. 8 caption, in part: the thin black line is the current BnGa solution [11] and the thick solid black line is the BnGa 2011-02 solution including the new K + Λ and K + Σ 0 data [33]; the thin red dashed line is the current KM model constrained by SAPHIR data [34]; the thin blue dashed line is the current KM model constrained by CLAS data; the thick red line is the KM model constrained by CLAS data and the current K + Λ data [37]; the shaded bands show the estimated systematic errors.) The new γ p → K + Λ and γ p → K + Σ 0 data have significantly improved center-of-mass energy resolution compared with previous data and provide a significant new constraint for the world database of meson photoproduction. A combined analysis of both reaction channels resolves the largest quoted systematic error in determining the resonance spectrum within the BnGa PWA framework, resulting in a preference for a nucleon resonance spectrum which disfavors models assuming quark-diquark dynamics in the nucleon.
5,448.8
2014-07-30T00:00:00.000
[ "Physics" ]
Rate of gas absorption on a slippery bubble mattress We investigate the absorption of a pure gas into a liquid in laminar flow past a superhydrophobic surface consisting of alternating solid walls and micro-bubbles. We experimentally measure and numerically estimate the dynamic mass transfer of gas absorption at stable gas-liquid interfaces for short contacting times. We study the net rate of gas absorption experimentally by in situ measurements of dissolved oxygen concentration profiles in aqueous solutions flowing over oxygen bubbles by fluorescence lifetime imaging microscopy. We numerically analyze the dynamics of interfacial mass transfer of dissolved oxygen by considering (i) kinetic equilibrium conditions at bubble surfaces that are conventionally described by Henry's Law and (ii) non-equilibrium conditions at bubble surfaces using Statistical Rate Theory (SRT). Our experimental results show that kinetic equilibrium is not established for short contact times. Mass transfer of gas into liquid flow past micro-bubbles is well described by our simulations performed with the non-equilibrium theory for a short exposure time (∼180 µs) of the liquid with a microbubble, deviating from the commonly accepted Henry's Law. Introduction Traditional gas-liquid contacting equipment such as distillation columns, packed towers, tray/plate columns and bubble columns provide direct contact of gas and liquid and phase equilibrium-based absorption/desorption processes at these interfaces. The mass transfer in such processes has been studied extensively, and the mass transfer resistances have been reported to be determined by the transport of gaseous and dissolved species in the gas and liquid phases, respectively [1,4,5]. Under equilibrium conditions, the concentrations in both gas and liquid phases at the gas-liquid boundary are proportional to their partial pressures, which is commonly described by Henry's Law [6]. The overall mass transfer process [11][12][13][14][15] is reported to consist of four consecutive steps: (i) transport from the bulk gas phase to the outer surface of the membrane, (ii) diffusion through the membrane, (iii) dissolution of gas into liquid, and (iv) liquid phase transport. In these studies, the overall mass transfer coefficients are investigated. Analytical and experimental mass transfer correlations are obtained for developing/fully developed boundary layers using analogous heat transfer solutions of the Graetz-Lévèque type [14,15]. The mass transfer resistance in the liquid phase, in relation to operating parameters, and the mass transfer resistance in the membrane, in relation to the wetting phenomena, have been investigated in detail [12,13,15]. The mass transfer resistances in the gas phase and at the gas-liquid surfaces are commonly neglected due to the fast gas phase diffusion and the assumption of equilibrium at the gas-liquid boundaries [17][18][19][20][21][22][23][24][25]. The surface resistances have been investigated in many theoretical studies using both phase-equilibrium and non-equilibrium conditions. For example, the stagnant film theory [18], Higbie's penetration theory [19] and the film-penetration theory [20] are among the widely used interface models under equilibrium conditions. The dynamics of non-equilibrium interface transport has also been investigated in previous analytical and theoretical studies based on kinetic and thermodynamic considerations [22,24].
Moreover, the mass transfer resistance of gas dissolution at the gas-liquid interfaces has been reported as the rate-limiting step in gas absorption/desorption processes for short contacting times [22,25,26,29,30,31]. However, to the best of our knowledge, the mass transfer resistances of gas dissolution into a liquid under pressure-driven laminar flow past slippery hydrophobic substrates with a slip velocity have not been investigated. To generate hydrodynamic slippage, hydrophobic substrates containing gas bubbles have been used in recent studies [32][33][34], as the liquid flows over those shear-free bubble surfaces [33][34][35][36][37][38][39][40]. Such fluidic platforms with slippery interfaces have been shown to enhance liquid convection and have been suggested to increase the interfacial mass transport rates [33,39,41]. In a recent theoretical study [42], enhancements of up to two orders of magnitude in interfacially driven transport phenomena are predicted on slippery surfaces. These studies highlight the room for investigation of interfacial gas transport at bubble surfaces. In our previous study [33], we presented a hydrophobic microfluidic device that allows for the formation of stable, actively gas-pressure-controlled microbubbles at the boundary of microchannels. We experimentally and numerically examined the effect of micro-bubble protrusion angle θ and surface porosity on the slippage on our bubble mattresses. Our results revealed the dependency of hydrodynamic slip on the gas-liquid interface curvature and surface porosity. In this study, we examine experimentally and numerically the oxygen absorption into a water stream in a pressure-driven flow over a superhydrophobic surface with transversely embedded oxygen gas micro-bubbles (Fig. 1). The fabricated devices allow for tunable gas phase pressure, bubble geometry, and liquid phase flow [33]. We measure the detailed spatial dissolved oxygen concentration profiles in the aqueous phase with frequency-domain fluorescence lifetime imaging microscopy (FLIM). We numerically study the dynamics of interfacial mass transfer of dissolved oxygen in the liquid side microchannel embedded with curved oxygen microbubbles, using two models of the interfacial gas concentration at the (i) kinetic equilibrium state and (ii) non-equilibrium state using Statistical Rate Theory (SRT). Microfluidic setup Fig. 1a shows a representative scanning electron microscopy image of the silicon-glass based microfluidic devices. The microfluidic devices consist of two main parallel microchannels for separate aqueous solution and oxygen gas streams, connected by an array of side channels [33]. Silicon microchannels were fabricated by standard photolithography followed by a deep reactive ion etching process. The microchannels were sealed by anodic bonding to glass. To prevent the wetting of the connecting side channels, the originally hydrophilic silicon microchannels were hydrophobized. We adopted the protocol described in ref. 43 for the hydrophobization via ultraviolet (UV) light illumination of silicone oil. The width of the main microchannels H was 50 µm or 100 µm. The width of the gas-filled side channels L_g was kept constant at 20 µm and the length of the liquid-solid interface L_s was 20 µm or 30 µm. The periodic bubble unit length is defined as L = L_g + L_s and the surface porosity (shear-free fraction, φ = L_g/L) is 0.38 or 0.54 for the different L_s (Fig. 1). The depth of the microchannels was 100 µm.
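Two quick consistency checks on the stated geometry and flow conditions, using only numbers given in the text and the Fig. 1 caption (the water kinematic viscosity is an assumed standard value, not quoted in the text):

```python
# Geometry and flow sanity checks for the bubble mattress.
L_G = 20e-6          # gas-filled side-channel width (m)
L_S = 32.6e-6        # solid-wall length from the Fig. 1 caption (m)
H = 100e-6           # main-channel width (m)
NU_WATER = 1.0e-6    # kinematic viscosity of water (m^2/s), assumed standard value

phi = L_G / (L_G + L_S)  # shear-free fraction
print(f"surface porosity phi = {phi:.2f}")  # ~0.38, matching the text

# Reynolds number Re = u*H/nu for the quoted mean-velocity range 0.04-0.2 m/s
for u in (0.04, 0.075, 0.2):
    print(f"u = {u} m/s -> Re = {u * H / NU_WATER:.1f}")  # Re = 7.5 at u = 0.075 m/s
```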
The ow rate of the liquid side was controlled using a syringe pump (HARVARD Apparatus PHD 2000) and the pressure of the oxygen gas stream was controlled using a pressure controller (Bronkhorst, EL-PRESS, P-602C).The side-channels were lled with oxygen gas due to the hydrophobicity of the microchannel walls and sufficiently large gas pressure P g .The protrusion angles q of the oxygen micro-bubbles into the aqueous stream were controlled by the active control of the applied oxygen gas pressure.The protrusion angles of the oxygen micro-bubbles were varied in a range from 2 to 45 by changing the applied gas pressure ($0.6 to $1.2 bar) for varying liquid ow rates from 9 ml min À1 to 45 ml min À1 .During the measurements the microbubbles were stable and symmetrically pinned at the sharp corners of the side channels. FLIM experiments The local oxygen concentrations dissolved in the liquid side channels were quantied by using a Ru-based oxygen sensitive luminescent dye, ruthenium tris(2,2 0 -dipyridyl) dichloride hexahydrate (RTDP) (Sigma Aldrich).The frequency domain uorescence lifetime imaging microscopy (FD-FLIM) technique was employed to measure the lifetime domains of RTDP aqueous solutions in the microchannels. 44,45With the lifetime measurements, the dissolved oxygen concentrations can be calculated. 44For the FLIM experiments, a LIFA-X system (Lambert Instruments) was used on a Zeiss Axio Observer inverted microscope attached to a modulated LED light source and an intensied CCD camera (LI2CAM) in gated mode.During the measurements, the intensier MCP (micro-channel plate) voltage was kept at 650 V.The optical sensing was carried out at a xed modulation frequency of 100 kHz.We used 10 mM uorescein (Sigma Aldrich) solution as a reference uorophore to test any instrumentation phase shi at the operating frequency.Fluorescein is suitable as a reference phase shi uorophore due to its short lifetime s < 6 ns compared to RTDP ($600 ns).The data acquired were analyzed with the LI-FLIM soware package to resolve the lifetime elds. 400 mM aqueous solutions of RTDP were prepared.Lifetimes of the RTDP were measured in oxygen-free (N 2 saturated), aerated and oxygen-saturated aqueous solutions.The solutions were saturated with a certain gas by bubbling that gas through the solution for sufficiently long times.The dissolved oxygen contents of the solutions were conrmed using a ber optic oxygen sensor (FIBOX 3, PreSens).For these measurements using the microuidic platform, the microbubbles were established by nitrogen, air and oxygen gases for corresponding oxygen-free, aerated and oxygen-saturated conditions. During the mass transfer experiments, oxygen gas microbubbles were established at the boundary of the microchannels (Fig. 1c and 3).If otherwise not stated, deoxygenated RTDP aqueous solution was the working liquid owing past the transversely aligned oxygen bubbles.The experiments were performed at room temperature without further temperature regulation. Governing equations Numerically we study the mass transfer characteristics of oxygen gas dissolution at the surfaces of curved oxygen microbubbles aligned perpendicular to a pressure-driven microuidic laminar ow.We solve for coupled mass and momentum transport on our bubble mattress geometry (Comsol Multiphysics v4.3).The computational domain represents the experimental geometric parameters (Fig. 
1b) and we solve for velocity and concentration profiles in the liquid side microchannels with the same experimental operating settings. In our simulations, the gas side transport is not considered, since the effects of mass transfer limitations in the gas phase and of oxygen depletion during gas dissolution vanish due to the active supply of pure oxygen gas. The Navier-Stokes equations and the conservation of species equation (convection-diffusion) are solved for a steady pressure-driven flow of water in a microchannel consisting of 60 successive bubble units at the bottom surface (y = 0). In order to eliminate any developing flow effects and entrance/outlet effects, an entrance and outlet section of 2-bubble-unit-cell length was included at the front and end of the bubble mattress. The bubbles are non-deformed and pinned at the corners of the side channels, as observed in the experiments. We approximate the bubble profiles as circular arcs. We parametrize the bubble interface curvature by the protrusion angle θ (Fig. 1b). The pressure-driven laminar flow was produced by applying a mean velocity as the inlet condition. We varied the inlet mean velocities in the range of 0.04 to 0.2 m s⁻¹, consistent with our experiments. We applied shear-free boundary conditions (i.e. perfect slip) at the bubble surfaces and no-slip boundary conditions at the solid microchannel walls (at the bottom wall along each L_s at y = 0 and at the upper wall at y = H). The convective velocity (u) was obtained from the Navier-Stokes equations. The steady-state conservation of species equation (1), u·∇C_O2 = D∇²C_O2, was solved for the dissolved oxygen transport. The oxygen dissolves at the microbubble surfaces. Here D is the diffusion coefficient of oxygen in water (= 2.67 × 10⁻⁹ m² s⁻¹ at 25 °C (ref. 25)) and C_O2 is the dissolved oxygen concentration in water. The inlet boundary condition to the microchannel was set at a constant concentration value. If not otherwise stated, the inlet concentration is C_O2|(x=0) = 0, representing the deoxygenated solutions in the experiments. We applied no-flux boundary conditions (∂C_O2/∂y = 0) at the solid microchannel walls (at each L_s at y = 0 and at the upper wall at y = H), as the silicon microchannel walls are not permeable to oxygen. The mass transfer boundary conditions at the bubble surfaces represent the gas dissolution mechanism at the gas-liquid interface that determines the rate of gas absorption by the liquid. We examine two different boundary conditions along the gas protrusions, as described below (Fig. 2). Mass transfer boundary conditions The surface resistances are commonly described under phase-equilibrium conditions [17][18][19][20][21][22][23][24][25]. Higbie's penetration theory [19] and the film-penetration theory [20] are among the well-known mathematical models. It is important to note that these theories consider equilibrium conditions at the interface. The net exchange rate of absorbed and desorbed species at the surface determines the amount of accumulated species at the interface and thereby the interface concentration. Under equilibrium conditions, the amount of accumulated species is given by Henry's Law [6]. In kinetic interface models, the net exchange rate at the interface is described by the Hertz-Knudsen equation, where mass accommodation (condensation) coefficients are used as fitting parameters [22,23]. Besides the kinetic approaches, a statistical rate theory (SRT) has been suggested to predict the instantaneous rate of interfacial molecular transport [24,25].
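Before the SRT details below, the two classes of interface condition can be made concrete with a minimal marching solver for a boundary-layer form of the species equation, u ∂C/∂x = D ∂²C/∂y², with either a Dirichlet condition C = C_sat (Henry's law) or a finite-rate flux condition at the wall. This is a sketch, not the paper's Comsol model: the wall is treated as uniformly absorbing rather than as the alternating bubble/solid pattern, a plug-flow velocity is used for simplicity, and the linear Robin flux J = k(C_sat − C_i) with an illustrative rate constant k stands in for the full SRT expression, which is not reproduced in this text.

```python
import numpy as np

D = 2.67e-9      # O2 diffusivity in water (m^2/s), from the text
C_SAT = 1.3      # saturation concentration (mol/m^3), from the text
U = 0.075        # mean velocity (m/s), the Re = 7.5 case
H = 100e-6       # channel height (m)
NY, NX = 200, 4000
dy = H / (NY - 1)
dx = 1e-6        # streamwise step (m); D*dx/(U*dy^2) ~ 0.14, explicit scheme stable

def march(mode: str, k: float = 5e-4) -> np.ndarray:
    """March u dC/dx = D d2C/dy2 downstream with the chosen wall condition.
    mode='henry': C(y=0) = C_SAT;  mode='robin': -D dC/dy|0 = k*(C_SAT - C(0))."""
    c = np.zeros(NY)
    for _ in range(NX):
        lap = np.zeros(NY)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dy**2
        c[1:-1] += dx / U * D * lap[1:-1]
        c[-1] = c[-2]                       # no-flux at the top wall
        if mode == "henry":
            c[0] = C_SAT                    # equilibrium (Dirichlet) interface
        else:
            # ghost-node balance: diffusive flux into liquid = k*(C_sat - C_i)
            c[0] = (c[1] + dy * k / D * C_SAT) / (1 + dy * k / D)
    return c

for mode in ("henry", "robin"):
    c = march(mode)
    print(mode, f"interface C = {c[0]:.2f} mol/m^3")
```

The finite-rate run leaves the interface concentration below saturation, which is the qualitative signature the SRT boundary condition produces in the simulations discussed below.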
In SRT, the gas-liquid boundary is described as consisting of two interfaces, the gas-surface interface and the surface-liquid interface, with the consideration of the Gibbs dividing surface [26] (Fig. 2b). The molecular distributions in each phase (bulk gas, surface, and bulk liquid) are related to thermodynamic properties using the Boltzmann definition of entropy. The transition probability of molecules from one phase to another is derived via a first-order perturbation analysis of the Schrödinger equation. The calculated transition probability of gaseous species from the bulk gas phase to the bulk liquid phase has been used in approximating the instantaneous non-equilibrium exchange rate of absorbed and desorbed molecules at the surfaces. For long contacting times, the SRT predicts equilibrium exchange rates and equilibrium interface concentrations at the saturation value [26]. The theoretical interface models [16][17][18][19][20][21][22][23][24][25] would suggest that, on hybrid substrates with hydrodynamic slippage, the mass transfer resistance of gas dissolution at the phase boundary can become the rate-limiting step for short contacting times of the liquid with bubbles. Therefore, in our numerical simulations, in addition to the phase-equilibrium state (Fig. 2a), we also study the dynamics of interfacial mass transfer at curved oxygen microbubbles in the non-equilibrium state (Fig. 2b). Firstly, the saturation concentration of oxygen in water is applied as the boundary condition at the bubble surfaces in our phase-equilibrium based simulations (Fig. 2a). Under equilibrium conditions, the saturation value of the dissolved oxygen concentration at the interface, C_O2^i, is proportional to the oxygen partial pressure P_O2 and is described by Henry's Law [6], where K_H is the Henry's law constant, given as K_H = 7.7 × 10⁷ Pa l mol⁻¹ at 25 °C (ref. 1), and C_O2^sat is the saturation concentration of oxygen in water. For pure oxygen bubbles, the dissolved oxygen concentration at the interface is calculated to be C_O2^i = C_O2^sat = 1.3 mol m⁻³ at 1 atm and 25 °C. Secondly, the rate of molecular transport between the surface phase and the bulk liquid phase is predicted by SRT and applied as the flux boundary condition along the gas protrusions (coinciding with each L_g at y = 0) in our non-equilibrium based simulations. According to SRT, this instantaneous molecular rate of transport between the phases is described by eqn (3), where C_O2^sat is the fixed equilibrium concentration of the dissolved oxygen, hence the solubility at the given temperature and pressure, and C_O2^i is the interface concentration of oxygen in the liquid phase. At the bubble surface, the concentration is a function of the axial x location and, to some extent, of the bubble protrusion depth into the liquid in the y direction (Fig. 2). K_sl (mol m⁻² s⁻¹) is the equilibrium oxygen exchange rate between the bubble surface and the liquid phase and has a constant value for a system with given local thermodynamic properties.
Here P_w^g is the partial pressure of water in the gaseous phase, which is the vapor pressure of the flowing water. P_O2^g is the partial pressure of oxygen in the gaseous phase and is approximated, neglecting the Laplace pressure, as P_O2^g = P − P_w^g, where P is the total pressure at the bubble surface. C_w^l is the concentration of water in the liquid phase, which is 55.2 M under the experimental conditions. Here, r and m are the radius and molecular mass of the oxygen gas molecule, k is the Boltzmann constant, and T is the temperature. K_H′ is the "pressure-based" Henry's law constant, given as 4.85 × 10⁹ Pa at 25 °C [1]. Without any fitting parameters, with a known value of K_sl, we implemented the expression given in eqn (3) as the boundary condition defining the oxygen flux at the bubble surfaces. It is important to note here that for longer contacting times, the interface concentration approaches the saturation value (C_O2^i → C_O2^sat) and the rate of transport between phases (J_O2^s) vanishes, which is indicative of equilibrium conditions, i.e. a constant concentration, at the bubble surface. FLIM measurements Oxygen has been reported elsewhere [46] to be an effective collisional dynamic quencher of RTDP, and its effect on the RTDP fluorescence decay is described by the Stern-Volmer equation (5), I₀/I = τ₀/τ = 1 + K_q[O2], where I₀ and τ₀ are the fluorescent intensity and the lifetime of unquenched RTDP under oxygen-free conditions, respectively, and I and τ are the fluorescent intensity and the lifetime of RTDP for an arbitrary oxygen concentration of the solution. K_q is the Stern-Volmer constant. In previous studies, the fluorescence intensity and lifetime of RTDP were calibrated as a function of oxygen concentration and linear relationships were reported, in agreement with the Stern-Volmer relationship [2,44,45,47-50]. K_q values were reported in the range of 2.1 to 2.7 × 10⁻³ µM⁻¹ [44,45,48]. In most of these studies, the oxygen concentrations were in the physiologically relevant range from oxygen-free to air-saturated solutions. The linear response of RTDP, consistent with the Stern-Volmer equation, was also shown over the entire oxygen range up to the oxygen saturation concentration [47,51]. Based on our standard measurements for oxygen-free, aerated and oxygen-saturated aqueous solutions, we calculate the experimental K_q to be 1.9 × 10⁻³ µM⁻¹, consistent with the literature findings [44,45,48]. Rate of O2 absorption During the mass transfer experiments, oxygen gas microbubbles were established at the boundary of the microchannels (Fig. 3). We measured the lifetime of RTDP across the liquid side microchannel height (0 ≤ y ≤ H) at different axial locations x (Fig. 3). The bubble interface profiles and locations were experimentally determined by locating the minimum lifetime data measured near the hybrid wall. The bubble profiles were also calculated by circular arc estimation and are depicted by the dashed lines in Fig. 3. Fig. 3 shows the successive lifetime fields measured with FLIM at different axial locations along the same microchannel embedded with microbubbles of θ = 43 ± 2°. In Fig. 3c, the thickness of the diffusion boundary layer is ∼23% of the microchannel height H, and does not extend further into the microchannel due to the relatively large Reynolds number Re = 7.5. From the lifetime fields as shown in Fig.
3, the dissolved oxygen concentration gradients were obtained by correlating the lifetime data via eqn (5). When processing the data, no filtering or smoothing was applied to the measured lifetime fields. Due to the scattering of the experimental lifetime data, the obtained oxygen concentration values were averaged over the bubble surfaces (over ∼30 pixels in x, equivalent to L_g). The presented results are the average of independent experiments performed with the same operating fluidic settings. The statistical error bars are calculated from these experiments. Fig. 4a-c represent the local dissolved oxygen concentration profiles obtained from the lifetime fields shown in Fig. 3a-c, which are successive measurements at different axial x positions in the microchannel during the same experiment. The local profiles across the microchannel height, with respect to the y direction normal to the bubble surfaces, are obtained at x positions coinciding with the bubble surfaces that are indicated with dashed arrows in Fig. 3. Here the deoxygenated RTDP aqueous solution flows at a Reynolds number Re = 7.5. The experimental results are compared with numerical results obtained for phase-equilibrium based and SRT-based gas absorption. The bubble interface position is y ≈ 3.8 µm in Fig. 4 due to the large protrusion angle θ = 43 ± 2° of the bubbles into the water stream. The interface O2 concentration predicted by the phase-equilibrium based simulations converges to the saturation value of 1.3 mol m⁻³, as imposed by Henry's law, whereas the interfacial dissolved gas concentration predicted by the SRT-based simulations is ∼0.8 mol m⁻³ at x = 1.05 mm and slightly increases with increasing x. When y ≳ 3.8 µm, the oxygen concentration gradients are steep due to the relatively large Reynolds number. As can be clearly seen in Fig. 4a-c, the effect of the microbubbles on the concentration profiles is damped when y ≳ 25 µm at x = 1.05 mm and when y ≳ 35 µm at x = 2.30 mm. Beyond the boundary layers, the oxygen concentration drops to the inlet bulk concentration value. Our findings are consistent with previous studies [45,51] reporting local oxygen gradients resolved by in situ FLIM measurements; in those studies, oxygen diffuses from an oxygen-enriched liquid phase to an oxygen-free liquid phase. The FLIM measurements in Fig. 4a-c reveal very low concentrations near the bubble interfaces when y ≲ 10 µm. There is a sharp discrepancy between the measurements and both numerical results in the proximity of the bubble interfaces (3.8 µm ≤ y ≲ 10 µm). The standard errors calculated close to the bubble surface are also larger compared to those in the remainder of the microchannel. The deviation can be related to the systematic error in our FLIM measurements close to the gas-liquid interfaces (additional measurements are provided in the Appendix). The deviations near the bubble surfaces do not interfere with the results presented here, since we do not consider the experimental data obtained in the proximity of the bubble interfaces.
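Two conversions underpin these profiles: the measured lifetime maps onto an oxygen concentration through the Stern-Volmer relation of eqn (5), and the equilibrium interface value follows from Henry's law. A minimal sketch using the constants quoted in the text (the unquenched lifetime is the approximate ∼600 ns value given above):

```python
# Lifetime -> dissolved O2 via Stern-Volmer, and the Henry's-law saturation value.
K_Q = 1.9e-3      # experimental Stern-Volmer constant (1/uM), from the text
TAU_0 = 600.0     # unquenched RTDP lifetime (ns), approximate value from the text
K_H = 7.7e7       # Henry's law constant (Pa L/mol) at 25 C, from the text
P_O2 = 101325.0   # pure-oxygen bubble at ~1 atm (Pa)

def o2_from_lifetime(tau_ns: float) -> float:
    """Dissolved O2 (uM) from a measured RTDP lifetime: tau0/tau = 1 + Kq*[O2]."""
    return (TAU_0 / tau_ns - 1.0) / K_Q

c_sat = P_O2 / K_H * 1000.0  # mol/L -> mol/m^3
print(f"Henry saturation: {c_sat:.2f} mol/m^3")  # ~1.3 mol/m^3, as quoted

# An oxygen-saturated solution (1.3 mol/m^3 = 1300 uM) should quench the
# lifetime to roughly tau0 / (1 + Kq*1300) ~ 173 ns:
tau_sat = TAU_0 / (1.0 + K_Q * 1300.0)
print(f"expected lifetime at saturation: {tau_sat:.0f} ns, "
      f"recovered [O2] = {o2_from_lifetime(tau_sat):.0f} uM")
```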
Considering the oxygen concentration data for ≈10 µm ≤ y ≤ H, the measured oxygen concentration profiles are in good agreement with the simulations performed with the incorporated SRT boundary condition, whereas the numerical results computed for a constant interface concentration at the solubility limit of oxygen in water predict steeper profiles and faster gas absorption than those in the measurements. Due to the relatively large Re = 7.5, the water residence time within the hold-up volume of the microchannel embedded with the bubbles is as small as 96 ms. The inset in Fig. 4d shows the flow velocity calculated numerically by solving the Navier-Stokes equations (Section 3.1) with the same experimental parameters as in these sets of experiments, revealing a slip velocity of ∼25 mm s⁻¹, consistent with our previous study [33]. This slip velocity induces a very short contact time of the water stream passing a single bubble interface: the exposure time of a slippery bubble surface to water is calculated to be ∼830 µs. These short exposure times are insufficient for gas-liquid phase equilibrium. Our findings are indicative of an additional mass transfer resistance induced by the gas dissolution at the bubble surfaces at short exposure times in a laminar flow over a slippery bubble mattress. To further highlight our results indicative of interfacial mass transfer resistances, we present the local instantaneous convective flux distribution in Fig. 4d. The concentration profile shown in Fig. 4c and the numerical flow velocity profile (Fig. 4d, inset) are multiplied to calculate the local flux. The local flux near the bubble surfaces has low values up to y ≈ 8 µm due to the low local velocity in this region, even though the dissolved oxygen concentrations are at their maximum values. The flux then increases to a peak value for 8 µm ≲ y ≲ 11 µm. Above y ≈ 11 µm, the local flux decreases with a slope mainly determined by the decrease in the C_O2 concentration towards the end of the diffusion layer at y ≈ 35 µm. The differences in the experimental and numerical slopes are mainly attributed to the concentration profiles, as the coupled velocity profile is unique at the given settings. Here the differences between the fluxes obtained by the measurements and by the phase-equilibrium based simulations are more pronounced. The O2 fluxes are determined by integration of the local flux profiles. In Fig. 5, we present the O2 fluxes obtained from the measurements, from the simulations performed under phase-equilibrium conditions on the bubble surfaces, and from the simulations incorporating an interfacial mass transfer resistance using SRT. The experimental and numerical total convective fluxes are calculated for the entire microchannel height, i.e. for 3.8 µm ≤ y ≤ H, whereas the area under the local flux curves for 11 µm ≲ y ≲ 35 µm contributes most to the total convective flux values. It is important to note here that in this region the experimental values are not hampered by the systematic error in the measurements, which is confined to y ≲ 0.1H. The experimental oxygen fluxes revealed in Fig. 5 are in good quantitative agreement with the ones computed by considering an additional gas dissolution resistance using SRT. The flux values calculated with the assumption of a spontaneous phase-equilibrium state are higher, as in this case the interface dissolved oxygen concentration is at its maximum value. The phase-equilibrium flux obtained at x = 1.05 mm for Re = 7.5 is ∼1.8 times higher than the ones obtained from the measurements and the SRT-based numerics.
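The quoted exposure times follow directly from the bubble width and the local slip velocity, t_exp ≈ L_g/u_slip. A quick check with the numbers given in the text (the Re = 10 slip velocity appears in the following paragraph):

```python
# Exposure time of a liquid element sliding over one bubble: t = L_g / u_slip.
L_G = 20e-6  # bubble (side-channel) width in m

for re, u_slip in ((7.5, 25e-3), (10.0, 112e-3)):  # slip velocities from the text
    t_us = L_G / u_slip * 1e6
    print(f"Re = {re}: exposure time ~ {t_us:.0f} us")
# ~800 us and ~180 us, consistent with the ~830 us and ~180 us quoted in the text.
```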
5, the oxygen uxes obtained at Re ¼ 10 are also presented.The deviation from the phase equilibrium mass transfer is more pronounced for a higher Reynolds number Re ¼ 10.The phase equilibrium ux obtained at x ¼ 1.05 mm is $2.2 times higher than those obtained from the measurements and the SRT-based numerics.An increase in Re from 7.5 to 10 results in a decrease in water residence time within the hold-up volume of bubble mattress from 96 ms to 48 ms, with an increase in mean velocity.Furthermore for a higher Reynolds number Re ¼ 10, the slip velocity at the hybrid wall (y ¼ 3 mm for q ¼ 35 AE 3 ) has a much higher value $112 mm s À1 , thereby a much shorter exposure time $180 ms of a slippery bubble surface to the water stream compared to the exposure time $830 ms for Re ¼ 7.5 (Fig. 5).These ndings show that these short exposure times lead to larger deviations between the equilibrium based boundary conditions (Henry's law) and the non-equilibrium based boundary conditions (SRT). We compare the experimental and gas dissolution resistance incorporated-numerical O 2 uxes obtained at x ¼ 1.05 mm for Re ¼ 7.5 and Re ¼ 10.The experimental ux value for Re ¼ 10 is $2.3 fold higher than that for Re ¼ 7.5.In good agreement with the measurements, our simulations predict a $3 fold higher ux value for Re ¼ 10 compared to Re ¼ 7.5 (Fig. 5).This slight increase in the Reynolds number results in a larger decrease in exposure times due to a larger slip velocity near the bubble surface, and thereby a decrease in the dissolved oxygen concentration at the interface.Nevertheless, this enhanced hydrodynamics results in enhanced convection, and therefore an enhanced convective mass ux, as tripled in this case.Consistent with the literature ndings, 10,39,42,52 our ndings clearly demonstrate that the hydrodynamic slippage amplies the mass transport on hybrid substrates consisting of liquidsolid and liquid-gas interfaces. Conclusions We studied the interfacial mass transport accompanied by hydrodynamic slippage at gas-liquid interfaces in microuidic devices.The uidic conguration allows for the investigation of high-resolution gas absorption dynamics at curved gas-liquid interfaces.The local oxygen concentration elds resolved by FLIM measurements reveal slower gas absorption rates compared to those predicted by phase-equilibrium based simulations.The experimental results are in good agreement with the numerical results obtained from the simulations considering an additional mass transfer resistance at the gas-liquid boundary during gas dissolution.Our results indicate that the equilibrium state may not be established at short contacting times.Our ndings also reveal that even at such short exposure times, the hydrodynamic slippage enhances the total mass ux driven by the amplied ow throughput on the bubble mattresses. Appendix Additional measurements and near-bubble deviations We verify our FLIM results by the measurements of different inlet oxygen concentrations which were prescribed to certain values prior to microuidic mass transfer experiments by the aid of an external oxygen sensor during the bulk solution preparation.Fig. 6a and b show the dissolved oxygen concentration proles across the microchannel height, with respect to the y direction normal to the bubble surfaces for varying inlet concentrations of 0.5 and 0.2 mol m À3 .The experimental results are compared with numerical results obtained for phaseequilibrium based and SRT-based gas absorption. 
The bubble interface position is y ≈ 0.3 µm as shown in Fig. 6a, as the bubbles are nearly flat (θ = 2 ± 1°). At the bubble interface (y ≈ 0.3 µm), the oxygen concentration is measured to be ∼0.9 mol m⁻³, consistent with the numerical results with the SRT boundary condition, whereas the interface O2 concentration predicted by the phase-equilibrium based simulations converges to the saturation value of 1.3 mol m⁻³. The thickness of the diffusion boundary layer obtained by the FLIM measurements extends to ∼18 µm, consistent with the numerical results. Beyond the thickness of the diffusion boundary layer, the dissolved oxygen concentration drops to the inlet bulk concentration of 0.5 mol m⁻³. Similarly, the experimental oxygen concentration profile shown in Fig. 6b reaches the inlet bulk concentration of 0.2 mol m⁻³ beyond the diffusion boundary layer thickness at y ≈ 14 µm. In Fig. 6b, the measured O2 profile is in good agreement with the SRT-based simulations when y ≳ 4 µm. However, there is a discrepancy between the measurements and both numerical results in the proximity of the bubble interfaces, as in Fig. 4. The standard errors close to the bubble surface are larger compared to those in the remainder of the microchannel. The discrepancy between the measured and calculated values when y ≲ 4 µm is still larger than the standard error margins. The near-bubble deviations can be related to the systematic error in our FLIM measurements close to the gas-liquid interfaces. In the FLIM measurements, the normalized fluorescence lifetime signal decreases from 1 (oxygen-free state) to ∼0.3 (oxygen-saturated state), providing a signal level of ∼0.7 over the entire oxygen concentration range. We measure the noise level to be ∼0.1, hence a signal-to-noise ratio of ∼7. However, we observed reduced signal-to-noise ratios near the microchannel walls and bubble interfaces. Such deviations near gas-liquid interfaces have also been reported in previous reports [53][54][55][56] and are attributed to optical distortions and reflection effects near interfaces. Besides, we observed differences in the range of ≈7-15% between the lifetime data calculated from the phase shift and from the amplitude demodulation of the emitted signal near interfaces. Differences between the phase-shift (τ_p) and demodulation (τ_m) lifetimes have been reported to be indicative of complex quenching mechanisms and have been observed for long-lifetime probes such as Ru-based complexes in previous studies. Fig. 1 Microfluidic bubble mattress and oxygen dissolution at bubble surfaces. (a) Scanning electron microscopy image of the microfluidic device showing two main microchannels for oxygen gas (P_O2) and water (u_x(y)) streams connected by oxygen-filled side channels. (b) Numerical results for the dissolved oxygen concentration in the water side microchannel solved with identical operating settings as in (c). (c) Lifetime field resolved by FLIM superimposed on the brightfield microscopy image, showing bubbles protruding at 43 ± 2° into the water side microchannel with a height of H = 100 µm. L_g is the width of the oxygen-filled side channel (L_g = 20 µm), and L_s is the width of the solid boundary (L_s = 32.6 µm). The color bars refer to the lifetime of the RTDP aqueous solution, given in ns in (c), and to the oxygen concentration, given in mol m⁻³ in (b). In (b) and (c), surface porosity φ = 0.38 and Q_w = 45 µl min⁻¹.
Fig. 2 Schematic illustration of the applied boundary conditions and the effects of interface resistances on the concentration profiles in the presence of liquid flow. (a) The phase-equilibrium state, where the interface liquid concentration is fixed at the saturation value given by Henry's Law. The time (t) evolution results in increasing boundary layer thicknesses. (b) The non-equilibrium state, where the interface liquid concentrations are determined by the instantaneous exchange rates of absorbed and desorbed molecules at the surface-liquid boundary (sl). The time (t) evolution results in increasing boundary layer thicknesses and increasing interface concentrations approaching the saturation value. In (a) and (b) the mass transfer resistances in the bulk gas phase are neglected. Fig. 3 Successive lifetime fields in axial position x. Quantitative visualization of the increasing boundary layer thickness along the downstream flow with Q_w = 45 µl min⁻¹ at (a) 0.7 ≲ x ≲ 1.1 mm, (b) 1.4 ≲ x ≲ 1.8 mm, and (c) 1.9 ≲ x ≲ 2.3 mm. Here θ = 43 ± 2° and φ = 0.38. In (a-c) the flow direction is from left to right. The color bar refers to the lifetime, given in nanoseconds. The dashed arrows indicate the axial positions at which the local oxygen concentration profiles across the microchannel height are obtained. Fig. 4 Successive dissolved oxygen profiles at different axial x positions. (a) Local O2 profile at x = 1.05 mm; the inset is a zoom-in of the profile. (b) Local O2 profile at x = 1.80 mm; the inset is a zoom-in of the profile. (c) Local O2 profile at x = 2.30 mm; the inset is a zoom-in of the profile. (d) Local convective flux J_O2 profile at x = 2.30 mm; the inset shows the flow velocity profile obtained by simulations performed with the same experimental parameters. In (a-d) symbols represent the experimental results obtained by FLIM, the colored solid lines represent the numerical results solved with SRT, and the colored dashed lines represent the numerical results solved using Henry's law. Here the microchannel height H = 100 µm, θ = 43 ± 2°, φ = 0.38, and Re = 7.5. The black dashed lines in (a-c) depict the equilibrium saturation value of oxygen in water. Fig. 5 O2 fluxes J_O2 as functions of axial x position and Re, obtained by FLIM measurements and numerical calculations. Red filled circles, the red solid line and the red dashed line indicate the experimental, the SRT-based numerical and the equilibrium-based numerical results, respectively, for Re = 10 and θ = 35 ± 3°. Blue filled squares, the blue solid line and the blue dotted line indicate the experimental, the SRT-based numerical and the equilibrium-based numerical results, respectively, for Re = 7.5 and θ = 43 ± 2°. In (a) and (b), φ = 0.38. Fig. 6 Experimental and numerical local dissolved oxygen profiles. (a) With an inlet oxygen concentration of 0.5 mol m⁻³; here Re = 2.2. (b) With an inlet oxygen concentration of 0.2 mol m⁻³; here Re = 4. In (a) and (b), symbols represent the experimental results obtained by FLIM, the colored solid lines represent the numerical results solved with SRT, and the colored dashed lines represent the numerical results solved using Henry's law. Here H = 50 µm, θ = 2 ± 1° and φ = 0.54. The black dashed lines depict the equilibrium saturation value of oxygen in water.
A Substrate-Reclamation Technology for GaN-Based Light-Emitting Diode Wafers

This study reports on the use of a substrate-reclamation technology for gallium nitride (GaN)-based light-emitting diode (LED) wafers. There are many ways to reclaim the sapphire substrates of scrap LED wafers. Compared with the common substrate-reclamation method based on chemical mechanical polishing, the technology studied here exhibits simple process procedures, does not impair the surface morphology or thickness of the sapphire substrate, and offers the capability of an almost unlimited reclamation cycle. The optical performances of LEDs on non-reclaimed and reclaimed substrates were comparable, at 28.37 and 27.69 mcd, respectively.

Introduction

Gallium nitride (GaN) is a direct-bandgap semiconductor. It is commonly used for fabricating light-emitting diodes (LEDs) due to its wide bandgap, chemical stability, and high-temperature stability. These LEDs emit from the ultraviolet to the visible range [1-3]. Sapphire (Al2O3) has become a key material for fabricating GaN-based LEDs because it serves as an excellent epitaxial substrate due to its small lattice-constant mismatch with GaN. In particular, patterned sapphire substrates (PSSs) constitute an important epitaxial substrate for GaN-based LEDs because they enhance the light-extraction efficiency and internal quantum efficiency of LEDs [4-6]. The price of a 4-inch PSS is approximately $30-32 USD [7], so PSSs account for about 25% of the total cost of an LED chip. Thus, reclaiming the sapphire substrate of GaN-based LED wafers can significantly reduce the manufacturing cost of LED chips. A chemical mechanical polishing (CMP) method is currently used universally to reclaim substrates [8-10]. However, the disadvantages of CMP include process complexity, surface scrapes, and thinning of the sapphire substrate. Surface scrapes and thinning result in performance failure of the epitaxial film. Moreover, CMP makes the PSS pattern disappear. In the present work, we study a substrate-reclamation technology that avoids these disadvantages. In a non-vacuum state, we prepare a reclaimed substrate by thermally decomposing a GaN epitaxial film in a furnace system. We also investigate the properties of the decomposed GaN on PSSs and the regrown LED epilayers on the reclaimed PSS.
Experimental Section

The sample consisted of GaN-based LED epilayers grown on a PSS (GaN-epi/PSS), which we fabricated by metalorganic chemical vapor deposition. For the PSS, the cone-pattern diameter, spacing, and height were 4, 2, and 1.4 µm, respectively. The GaN-based LED epilayers (thickness = 3.5 µm) were produced using standard epitaxial procedures. All samples involved similar epilayers and process procedures, except for the substrate-reclaim conditions. The procedures involved in the substrate-reclaim process are as follows. The GaN-epi/PSS was loaded into a furnace, following which H2 was released into the furnace, as depicted in Figure 1. The process temperature, H2 flow rate, and process time for the GaN decomposition were varied in a non-vacuum state to determine the initial decomposition parameters. In process I, the GaN-epi/PSS is prepared at temperatures ranging from 800 to 1200 °C and at a gas-flow rate of 10 sccm for 30 min. In process II, an operation temperature of 1200 °C is imposed on the GaN-epi/PSS according to the results of process I, and the gas-flow rate is 25 sccm for 30, 60, and 90 min. The details of the substrate-reclaim conditions are given in Table 1. The sample is then unloaded, and the reaction products on the sample surface are removed using an alkaline solution to form an exposed PSS. Here, the exposed PSS is defined as the reclaimed PSS. Subsequently, GaN-based LED epilayers are regrown on the reclaimed PSS, and a 10 × 24 mil² chip is fabricated. This type of LED is called "LED/reclaimed PSS". The procedure for fabricating the chip devices in this study is as follows: an indium tin oxide transparent conductive layer and Cr/Au metal were deposited on the p- and n-type GaN as Ohmic contacts, respectively. Finally, to confirm the properties of the LED/reclaimed PSS device, GaN-based LED epilayers grown on a non-reclaimed PSS were also fabricated. Here, "LED/non-reclaimed PSS" denotes this type of LED.
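For orientation, the parameter matrix of processes I and II can be summarized programmatically. The sketch below only restates the conditions listed above and the finding, reported in the Results, that complete GaN removal required 1200 °C at 25 sccm for 90 min; the class names and the pass/fail criterion are ours, not the authors'.

```python
# Minimal sketch of the substrate-reclaim parameter matrix described above.
# The "fully_decomposed" criterion encodes the paper's reported outcome
# (complete GaN removal at 1200 C, 25 sccm H2, 90 min); it is a summary of
# the observations, not a general decomposition model.
from dataclasses import dataclass

@dataclass
class ReclaimRun:
    temperature_c: int   # furnace temperature
    h2_sccm: int         # H2 gas-flow rate
    minutes: int         # process time

    def fully_decomposed(self) -> bool:
        return (self.temperature_c >= 1200
                and self.h2_sccm >= 25
                and self.minutes >= 90)

process_i = [ReclaimRun(t, 10, 30) for t in (800, 1000, 1200)]
process_ii = [ReclaimRun(1200, 25, m) for m in (30, 60, 90)]

for run in process_i + process_ii:
    print(run, "complete GaN removal:", run.fully_decomposed())
```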
Results and Discussion

Figure 2 shows top-view and cross-section scanning electron microscope (SEM) images of GaN-epi/PSSs both exposed and not exposed to a gas-flow rate of 10 sccm for 30 min at various temperatures (800-1200 °C). Figure 2a shows the surface morphology and cross-section of the GaN-epi/PSSs (i.e., the original samples). In addition, the top-view SEM image of the original pristine sapphire substrate is shown in the inset of Figure 2a. Preparing the GaN-epi/PSSs at 800 °C changed the surface morphology of the GaN-epi film, as shown in Figure 2b. Upon increasing the process temperature to 1000 °C, sheet-shaped decomposition began in the GaN epilayers and the substrate cone patterns started to be exposed, as shown in Figure 2c. Further increasing the temperature to 1200 °C resulted in cone patterns appearing over the entire sample surface. Figure 2d,e show the top view, the cross-section, and an enlarged schematic of the PSS after exposure to a gas-flow rate of 10 sccm for 30 min at 1200 °C. The PSS cone diameter, spacing, and height were 3.33, 2.71, and 1.04 µm, respectively. Compared with the original cone pattern, these results indicate that the GaN-epi film still did not completely decompose under these process conditions. In addition, the GaN-epi decomposition reaction correlated positively with the process temperature, which is consistent with the results of Schmid-Fetzer et al. [11].

Table 1. Substrate-reclaim conditions of GaN-epi/patterned sapphire substrate (PSS).
Figure 3 shows the X-ray diffraction (XRD) spectra of the GaN-epi/PSS prepared and not prepared according to substrate-reclamation process I. The diffraction pattern from the untreated GaN-epi/PSS has a sharp GaN (002) peak near 2θ = 34.6°, several satellite peaks from the InGaN/GaN active layers, and a sapphire (001) peak near 2θ = 41.6°, as shown in Figure 3a. The grazing-angle XRD spectra of samples A, B, and C appear in Figure 3b-d, respectively. The diffraction peaks from the GaN (103) phase near 2θ = 63.6° gradually began to weaken with increasing temperature from 800 to 1200 °C. Moreover, when the operation temperature exceeded 1000 °C, a GaO2H (110) peak near 2θ = 21.47° appeared in the diffraction pattern. These results indicate that the GaN-epi film on the PSS begins to transform and to produce a GaO2H derivative. Note that the GaN-epi film still exists under the conditions of process I. Figure 4 shows the grazing-angle XRD spectra of the GaN-epi/PSS prepared using substrate-reclamation process II. The diffraction pattern reveals a GaO2H (110) peak at 2θ ≅ 21.47° and a sapphire (012) peak at 2θ ≅ 25.65°. As the process time increased from 30 min to 60 min and beyond, the diffraction peaks of the sapphire (113) phase appeared near 43.47°. Comparing these results with those shown in Figure 3 shows that no GaN peak appeared in the GaN-epi/PSS XRD pattern under the conditions of process II. The results indicate that the decomposition reaction of the GaN-epi film on the PSS is enhanced when the gas-flow rate increases from 10 to 25 sccm, which is consistent with the results of Koleske et al. [12]. Moreover, the exposed pattern morphology becomes more notable with increasing process time under a gas-flow rate of 25 sccm, as shown in the SEM image in the inset of Figure 4.
To ensure the purity of the exposed PSS surface, we analyzed the photoelectron spectrum of the GaN-epi/PSS prepared with a gas-flow rate of 25 sccm for 90 min followed by wet etching (see Figure 5). These results show the Al2p, Al2s, O2s, and O1s peaks located near 74.4, 118, 23, and 531 eV, respectively. The binding energies of O and Al correspond to those of O and Al in Al2O3 (i.e., the sapphire substrate) [13,14]. This result is consistent with an exposed PSS surface without Ga and demonstrates that no GaO2H is present on the surface of the exposed PSS. The C1s peaks are due to carbon adsorbed on the sample surface from the ambient atmosphere. The SEM image in Figure 5 indicates that exposure to a gas-flow rate of 25 sccm for 90 min followed by wet etching results in a pure PSS surface. Figure 6 shows the optical and electrical performance of both LED types at an injection current of 20 mA. The light intensity of the LED/non-reclaimed PSS and the LED/reclaimed PSS was 28.37 and 27.69 mcd, respectively. The forward voltage of the LED/non-reclaimed PSS and the LED/reclaimed PSS was 3.16 and 3.18 V, respectively. These results reveal that both types of LEDs have similar optical and electrical characteristics, which indicates that sapphire substrates may be successfully re-used in the epi-regrowth of GaN-based LEDs.
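As a quick consistency check on the quoted peak positions, Bragg's law reproduces the GaN (002) angle from the wurtzite GaN lattice constant c ≈ 5.185 Å, assuming Cu Kα radiation (λ = 1.5406 Å; the anode material is not stated in the text, so this is an assumption):

```python
import math

# Sketch: recover a 2-theta peak position from Bragg's law, 2 d sin(theta) = lambda.
# The Cu K-alpha wavelength is assumed (the diffractometer anode is not stated).
WAVELENGTH_A = 1.5406          # angstrom, Cu K-alpha (assumed)

def two_theta_deg(d_spacing_a: float) -> float:
    """2-theta (degrees) for a given d-spacing (angstrom)."""
    return 2.0 * math.degrees(math.asin(WAVELENGTH_A / (2.0 * d_spacing_a)))

# GaN (002): d = c/2 with c ~ 5.185 angstrom for wurtzite GaN.
print(two_theta_deg(5.185 / 2))   # ~34.6 deg, matching the quoted GaN (002) peak
```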
Conclusions

We have demonstrated a substrate-reclamation technology that allows an almost unlimited reclamation cycle. This technology will help reduce the manufacturing cost of chips and the stock of scrap wafers in the LED industry. In this research, the GaN-epi/PSS was found to have a GaO2H (110) reactant on the surface of the exposed PSS, and the GaN-epi film was completely dissociated after a gas-flow rate of 25 sccm for 90 min at 1200 °C. The GaO2H (110) reactant was removed with an alkaline solution to form a pure exposed PSS, and GaN-based LED epilayers were then successfully regrown on the reclaimed substrates.

Figure 1. Schematic diagram showing the substrate-reclaim setup for GaN-epi/patterned sapphire substrate (PSS) by furnace process in H2 gas.
Figure 2. Top-view and cross-section scanning electron microscope (SEM) micrographs of GaN-epi/PSSs prepared and not prepared according to substrate-reclamation process I. The enlarged schematic diagram of the PSS in panel (e) depicts the cone pattern size of sample C.
Figure 3. X-ray diffraction (XRD) spectra of GaN-epi/PSS with and without using substrate-reclamation process I.
Figure 5. Photoelectron spectrum of GaN-epi/PSS prepared with a gas-flow rate of 25 sccm for 90 min followed by wet etching.
Figure 6. (a) Optical and (b) electrical performance of both LED types with 20 mA injection current.
Twisted K-theory and Poincaré duality

Using methods of KK-theory, we generalize Poincaré duality to the framework of twisted K-theory.

Introduction

In [4], Connes and Skandalis showed, using Kasparov's KK-theory, that given a compact manifold M, the K-theory of M is isomorphic to the K-homology of TM and vice-versa. It is well known to experts that a similar result holds in twisted K-theory, although this is apparently written nowhere in the literature. In this paper, using Kasparov's more direct approach, we show that given any (graded) locally trivial bundle A of elementary C*-algebras over M, the C*-algebras of continuous sections C(M, A) and C(M, A^op ⊗̂ Cliff(TM ⊗ C)) are K-dual to each other. When A is the trivial bundle over M, we recover Poincaré duality between C(M) and C_τ(M) := C(M, Cliff(TM ⊗ C)) [8], which is equivalent to Poincaré duality between C(M) and C_0(TM), since C_τ(M) and C_0(TM) are KK-equivalent to each other.

Preliminaries

In this paper, we will assume that the reader is familiar with the language of groupoids (although this is not crucial in the proof of the main theorem concerning Poincaré duality). We just recall the definition of a generalized morphism (see e.g. [6]), since it is used in several places. Suppose that G ⇒ G^(0) and Γ ⇒ Γ^(0) are two Lie groupoids. Then a generalized morphism from G to Γ is given by a space P, two maps G^(0) ←τ P σ→ Γ^(0), a left action of G on P with respect to τ and a right action of Γ on P with respect to σ, such that the two actions commute and P → G^(0) is a right Γ-principal bundle. The set of isomorphism classes of generalized morphisms from G to Γ is denoted by H^1(G, Γ). There is a category whose objects are Lie groupoids and whose arrows are isomorphism classes of generalized morphisms; isomorphisms in this category are called Morita equivalences. Finally, we recall that any element of H^1(G, Γ) is given by the composition of a Morita equivalence with a strict morphism.

Graded twists and twisted K-theory

In this section, we review the basic theory of twisted K-theory in the graded setting, sometimes in more detail than some other references like [1,5,3]. This is a probably well-known and straightforward generalization of the ungraded case as developed e.g. in [1,10], hence we will omit most proofs.

Graded Dixmier-Douady bundles

Let M ⇒ M^(0) be a Lie groupoid (more generally, most of the theory below is still valid for locally compact groupoids having a Haar system). The reader who is not interested in equivariant K-theory may assume that M = M^(0) = M is just a compact manifold. A graded Dixmier-Douady bundle of parity 0 (resp. of parity 1) A over M ⇒ M^(0) is a locally trivial bundle of Z/2Z-graded C*-algebras over M^(0), endowed with a continuous action of M, such that for all x ∈ M^(0), the fiber A_x is isomorphic to the Z/2Z-graded algebra K(Ĥ_x) of compact operators on a Z/2Z-graded Hilbert space Ĥ_x (resp. to K(H_x) ⊗̂ Cℓ_1, where H_x is some Hilbert space and Cℓ_1 is the first complex Clifford algebra). Beware that H_x or Ĥ_x does not necessarily depend continuously on x. Of course, the action of M is required to preserve the degree. The usual theory of graded twists [1] corresponds to even graded D-D bundles (i.e. D-D bundles of parity 0), but our slightly more general definition allows us to cover Clifford bundles as well: if E → M is a Euclidean vector bundle of dimension d, then Cliff(E ⊗ C) → M is a graded D-D bundle of parity (d mod 2).
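Schematically, and with gradings and equivariance suppressed, the duality announced above can be phrased in the standard Kasparov form; the shorthand C_A, C_{A'} below is ours, and the display is a restatement rather than a verbatim formula from the paper.

```latex
% Schematic form of the duality (notation lightly adapted; gradings suppressed).
% Here C_A = C(M,A) and C_{A'} = C(M, A^{op} \hat\otimes \mathrm{Cliff}(TM \otimes \mathbb{C})).
\[
  \exists\,\theta \in KK\bigl(\mathbb{C},\, C_A \hat\otimes C_{A'}\bigr),
  \qquad
  D \in KK\bigl(C_A \hat\otimes C_{A'},\, \mathbb{C}\bigr),
\]
\[
  \theta \otimes_{C_{A'}} D = 1_{C_A},
  \qquad
  \theta \otimes_{C_A} D = 1_{C_{A'}},
\]
% so that cap product with theta and D gives, for auxiliary algebras B_1, B_2,
\[
  KK_*\bigl(C_A \hat\otimes B_1,\, B_2\bigr)\;\cong\;
  KK_*\bigl(B_1,\, C_{A'} \hat\otimes B_2\bigr).
\]
```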
Denote by Ĥ the graded Hilbert space …; Ĥ_M is the M-equivariant M^(0)-Hilbert module obtained from C_c(M) by completion with respect to the scalar product …. Two graded D-D bundles A and A′ are said to be Morita equivalent if (they have the same parity and) A ⊗̂ K(Ĥ_M) ≅ A′ ⊗̂ K(Ĥ_M). The set of Morita equivalence classes of graded D-D bundles forms a group Br*(M) = Br^0(M) ⊕ Br^1(M), the graded Brauer group of M. The sum of A and A′ is A ⊗̂ A′ (note that the parities do add up), and the opposite A^op of A is the bundle whose fibre at x ∈ M^(0) is the conjugate algebra of A_x. In other words, … in the even (resp. odd) case. Note that H^1(M, Û(Ĥ)) is not a monoid, since given two morphisms f_1, f_2 : Γ → Û(Ĥ) (with Γ Morita equivalent to M), the map f : g ↦ f_1(g) ⊗̂ f_2(g) is not a morphism, since …. On the other hand, if we restrict to degree-0 operators, i.e. if we consider Û(…), …. The sequence …, where the first map is the quotient map and the second is P ↦ P ×_{PÛ(Ĥ)} K(Ĥ), is canonically split-exact (the proof is analogous to [10]), and the splitting identifies …. There is a split exact sequence [1,5] …. Indeed, from the exact sequence …. Furthermore, in the decomposition …. This can be seen by direct checking using (1).

Graded S^1-central extensions

One defines the sum of two graded central extensions (Γ̃_1, δ_1) and (Γ̃_2, δ_2) as (Γ̃, δ), where δ(g) = δ_1(g) + δ_2(g) and Γ̃ = …. The multiplication for the groupoid Γ̃ is …. Note that the set of isomorphism classes of graded S^1-central extensions of Γ forms an abelian group. To see that the product is commutative, …. To see that (Γ̃, δ) has an inverse, let Γ̃^op be equal to Γ̃ as a set, but with the S^1-principal bundle structure replaced by the conjugate one; the product *^op in Γ̃^op is …, and … is an isomorphism (g̃ ∈ Γ̃ is any lift of g ∈ Γ). Let us define the group Ext(M, S^1): consider the collection of triples …, the K-theory of the reduced crossed product of the graded C*-algebra A by the action of M. If A corresponds to the graded central extension (Γ̃, δ), …. The C*-algebra C*_r(Γ̃)^{S^1} is considered as a Z/2Z-graded C*-algebra, using the grading automorphism …. Note that it suffices to study K_0 ….

Example of manifolds

Let M be a manifold. Elements of Ext(M, S^1) are given by an open cover (U_i)_{i∈I}, smooth maps c_{ijk} : U_i ∩ U_j ∩ U_k → S^1, …. Define a product on Γ̃ by (x_{ij}, λ)(x_{jk}, µ) = (x_{ik}, λµ c_{ijk}). Then Γ̃ is a groupoid, and there is a central extension S^1 → Γ̃ → Γ. The sum of (c_1, δ_1) and (c_2, δ_2) is …. Let us consider the particular case where c = 1 is the trivial cocycle. In that case, C*_r(Γ̃)^{S^1} ≅ C*_r(Γ). Let us compare this Z/2Z-graded C*-algebra to the Z/2Z-graded C*-algebra C_0(M̃), where M̃ → M is the double cover determined by the cocycle δ. Let P = (∐ U_i) ×_M M̃. Then P is a Z/2Z-equivariant Morita equivalence from Γ̃ to M̃ ⋊ Z/2Z [5, Remark A.13].

Twistings by Euclidean vector bundles

Suppose that E is a Euclidean vector bundle over M ⇒ M^(0). Then E is given by an O(n)-principal bundle over M ⇒ M^(0), hence by a morphism f : Γ → O(n) together with a Morita equivalence from Γ to M. Let E be the graded S^1-central extension …. We first need two lemmas.

Proof. Since G is compact, every S^1-central extension is of finite order. Let us recall the argument: given a central extension …, the representation of G̃ in W is a map G̃ → U(W) ≅ S^1 which is a splitting of nE, hence E is of order at most n. Therefore, the extension E comes from a central extension 0 → Z/nZ → G̃ → G → 1.
Since G is simply connected, the central extension must be trivial as a Z/nZ-principal bundle, i.e. G̃ = G × Z/nZ, and the product on G̃ is given by (g, λ)(h, µ) = (gh, λ + µ + c(g, h)), where c : G × G → Z/nZ is a 2-cocycle. Using the connectedness of G_0, c must factor through G/G_0 × G/G_0, i.e. the central extension is pulled back from a central extension of G/G_0, which must be trivial by assumption.

If E and E′ are S^1-central extensions whose restrictions to G_0 are isomorphic, then E and E′ are isomorphic.

Proof. After taking the difference of E and E′, we may assume that E′ is the trivial extension. Denote by S^1 → G̃ → G the extension E. Let g ↦ g̃ be a splitting over G_0. Choose a family (s_i) such that G = ∐_i s_i G_0, and for each i, choose a lift s̃_i of s_i. Then define the lift of s_i g to be s̃_i g̃. By construction, (γh)~ = γ̃ h̃ for all (γ, h) ∈ G × G_0. Next, define the 2-cocycle c : G × G → S^1 by g̃ h̃ = c(g, h) (gh)~. Let c_ij = c(s_i, s_j). For all j, let φ_j : G_0 → S^1 be such that s̃_j^{-1} g̃ s̃_j = φ_j(g) (s_j^{-1} g s_j)~. It is immediate to check that φ_j is a group morphism, hence φ_j is trivial by assumption, i.e. s̃_j^{-1} g̃ s̃_j = (s_j^{-1} g s_j)~. Multiplying on the right by h̃ and on the left by s̃_i s̃_j, we get s̃_i g̃ s̃_j h̃ = s̃_i s̃_j (s_j^{-1} g s_j)~ h̃ = c_ij (s_i s_j)~ (s_j^{-1} g s_j)~ h̃ = c_ij (s_i g s_j h)~, hence s̃_i g̃ s̃_j h̃ = c_ij (s_i g s_j h)~. It follows that c(s_i g, s_j h) = c_ij, i.e. that c factors through G/G_0 × G/G_0. Since Br(G/G_0) is trivial by assumption, c must be a coboundary. We conclude that E is a split extension.

Proof. The last equality follows from the fact that C_0(E) and C_0(M^(0), Cliff(E ⊗_R C)) are M-equivariantly KK-equivalent [7]. The second equality is just the definition. To prove the first equality, let us suppose for instance that n = dim E is even, the proof for n odd being analogous. We have to compare the graded D-D bundle A′_E with the graded central extension S^1 → Γ̃ → Γ which is pulled back from S^1 → Pin^c(n) → O(n). By naturality, we can just assume that Γ = M = O(n), and that E = R^n is endowed with the canonical action of O(n). Denote by α : O(n) → PÛ(Ĥ) the canonical action of O(n) on Cℓ_n = Cliff(E_C). To show that the central extension associated to the graded D-D bundle Cliff(E_C) → · is (S^1 → Pin^c(n) → O(n)), it suffices to prove that there exists a lifting α̃ : …. It follows that β(−1) = −Id, hence β induces a lift β̃ : Spin^c(n) → Û(Ĥ) which is S^1-equivariant. This means that the restriction of E to SO(n) is isomorphic to S^1 → Spin^c(n) → SO(n). To conclude that the restriction of E to O(n) is isomorphic to S^1 → Pin^c(n) → O(n), we apply Lemma 2.4 to G = O(n) and G_0 = SO(n).

Poincaré duality

3.1 Kasparov's constructions

Let M be a compact manifold (actually, Poincaré duality can be generalized to arbitrary manifolds [8], but in this paper we confine ourselves to compact ones for simplicity). We suppose that M is endowed with a Riemannian metric which is invariant under the action of a locally compact group G. Given any vector bundle A over any manifold M, we denote by C_A(M) the space of continuous sections vanishing at infinity. We will also write C_A whenever there is no ambiguity. We denote by τ the complexified cotangent bundle of M. In [8], Kasparov constructed two elements θ ∈ KK_{M⋊G}(C(M), C_τ(M)) and D ∈ KK_G(C_τ(M), C) (in this paper, we will use Le Gall's [9] notation KK_{M⋊G}(·, ·) for equivariant KK-theory with respect to the groupoid M ⋊ G, rather than Kasparov's RKK_G(M; ·, ·), but of course both are equivalent).
Let us recall the construction of θ and D. Let …. Let us explain the construction of θ. Denoting by ρ the distance function on M, let r > 0 be so small that for all (x, y) in U = {(x, y) ∈ M × M | ρ(x, y) < r}, there exists a unique geodesic from x to y. For every C(M × M)-algebra A, we denote by A_U the C*-algebra C_0(U)A. Then the element θ is defined as ….

Constructions in twisted K-theory

In this subsection, we construct an element θ_A ∈ KK_{M⋊G}(C(M), C_{A ⊗̂ A^op}) for any graded D-D bundle A over (M × G ⇒ M), i.e. for any G-equivariant graded D-D bundle over M. We may assume that A is stabilized, i.e. that A ≅ A ⊗̂ K(Ĥ ⊗ L^2(G)). First, let us denote by p_t(x, y) the point of the geodesic segment joining x to y at constant speed (0 ≤ t ≤ 1). Using p_t, we see that p_t : U → M is a G-equivariant homotopy equivalence. Unfortunately, this does not imply that Br(U ⋊ G) and Br(M ⋊ G) are isomorphic for arbitrary G, hence we will make the following

Assumption. In the sequel of this paper, and unless stated otherwise, G will be a compact Lie group acting smoothly on a compact manifold M.

In that case, …. As a consequence, there is a continuous, G-equivariant family of isomorphisms u_{t,x,y} : A_x → A_{p_t(x,y)}. Of course, the u_t's are not unique, but this will not be important as far as K-theory is concerned, as we will see. We then define θ_A as ….

Twisted K-homology

Given a C*-algebra A endowed with an action of a locally compact group G, the G-equivariant K-homology of A, K*_G(A), is defined by KK*_G(A, C). If A is a G-equivariant graded D-D bundle over M, we define K^{G,A}_*(M) by K*_G(C_A(M)).

Remark 3.3. The map µ does not depend on the choice of the isomorphisms u_{t,x,y}, hence ν doesn't either. The rest of the paper is devoted to the proof of Theorem 3.1.

Proof of ν ∘ µ = Id

For all α ∈ KK_{M⋊G}(C(M) ⊗ A, C_A(M) ⊗̂ B), we have …. Suppose it is shown that …. We postpone the proof of (b) until subsection 3.9.

3.9 Proof of (4)

Let us first recall the proof when A is trivial [8, Lemma 4.5]. We want to show that, for …. Write α = [(E, T)], where C(M, A)E = E and T is G-continuous. Then both products can be written as …, where F_i is of the form …. We want to show that α …. Let us just explain the homotopy between the two modules, the homotopy between the F_i's being obtained using Kasparov's technical theorem in the same way as in [8, Lemma 4.5]. The left-hand side is … and the right-hand side is …, where we recall that p_1 : U → M is the second projection (x, y) ↦ y. H′_1 is the Morita equivalence between C_0(U) and p_0^* C_A ⊗_{C_0(U)} p_1^* C_{A^op} obtained by composing the Morita equivalence p_0^* H between C_0(U) and p_0^*(C_{A ⊗̂ A^op}) with the isomorphism p_0^* A^op ≅ p_1^* A^op. Using the map p_t : U → M instead of p_1, consider (with obvious notation) the homotopy E ⊗̂_{C_A} σ_{M,C_A}(F_t) ⊗̂_{C_0(U)} H′_t ⊗̂_{C_{p_t^*(A ⊗̂ A^op)}(U)} p_t^* H^op. For t = 1, we get (6). For t = 0, we get E ⊗̂_{C_A} (C_A ⊗̂ C_τ)_U ⊗̂_{C_0(U)} p_0^* H ⊗̂_{C_{p_0^*(A ⊗̂ A^op)}(U)} p_0^* H^op, where the right C_{p_0^*(A ⊗̂ A^op)}(U)-module structure on E ⊗̂_{C_A} (C_A ⊗̂ C_τ)_U ⊗̂_{C_0(U)} p_0^* H is defined as follows: C_{p_0^* A} acts on (C_A ⊗̂ C_τ)_U by the obvious action, and C_{p_0^* A^op} acts on p_0^* H. In other words, it is the tensor product of (5) with β_A over C_A, where β_A is the C_A-C_A-bimodule …. In the expression above, the right C_{A ⊗̂ A^op}-module structure on C_A ⊗̂_{C(M)} H is defined as follows: for all a ∈ C_A, b ∈ C_{A^op}, and ξ ⊗ η ∈ C_A ⊗̂_{C(M)} H, (ξ ⊗ η) · (a ⊗ b) = (−1)^{|η| |a|} ξa ⊗ ηb.
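The sign in the last formula is the usual Koszul sign convention for graded tensor products, recalled below for the reader's convenience (a standard convention, not a construction specific to this paper):

```latex
% Koszul sign rule for graded tensor products of homogeneous elements:
\[
  (a \,\hat\otimes\, b)(c \,\hat\otimes\, d)
    \;=\; (-1)^{|b|\,|c|}\, (ac) \,\hat\otimes\, (bd),
\]
% of which the module action above, with b = \eta and c = a, is an instance:
\[
  (\xi \otimes \eta)\cdot(a \otimes b)
    \;=\; (-1)^{|\eta|\,|a|}\, \xi a \otimes \eta b .
\]
```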
Degradation of the EBR-CFRP strengthening system applied to reinforced concrete beams exposed to weathering

Abstract: Little is known about the behavior and durability of strengthening systems applied on concrete substrata and the possible loss of performance due to the degradation of the intervening materials caused by the structure's natural aging process and the exposure of the externally strengthened elements to aggressive environments. In this context, the present work presents an experimental analysis of the behavior of reinforced concrete beams strengthened with Carbon Fiber Reinforced Polymer (CFRP), applied according to the Externally Bonded Reinforcement (EBR) technique, maintained in a laboratory environment (indoor and protected) or exposed to weathering (outdoor exposure). In addition, specimens of the intervening materials were also molded and exposed to the same environmental conditions as the beams. The results indicate that weather-exposed epoxy adhesives present reductions of up to 70% in their mechanical properties after exposure, while the CFRP composite properties remain similar. It was also found that the strengthening system provided 50% and 28% increments in the load-carrying capacity and stiffness of the elements, respectively. However, the tests conducted after 6 months of weathering exposure showed a 10% reduction in the load-carrying capacity of the strengthened beams.

INTRODUCTION

Strengthening a structural element means restoring or increasing its supporting conditions relative to those for which it was initially designed. Carbon Fiber Reinforced Polymer (CFRP) strengthening systems can be applied to various reinforced concrete elements such as beams, columns, and slabs. Design errors, misinterpretation of project documents, changes in the structure's use, or the natural weathering of buildings are some of the main reasons for strengthening a structure [1]. Among the main materials used for this purpose, Fiber Reinforced Polymers (FRPs) are of high prominence because their mechanical properties are superior to those of the conventional materials used in strengthening techniques. Of the main characteristics of FRPs, high strength and stiffness and low weight are the properties that most favor their use in strengthening works [2]. CFRPs are the most widely used materials for structural strengthening [3]. One main advantage of CFRPs is their resistance to chemical agents and corrosion [4]. According to Barros [5], prefabricated and in-situ cured systems are the most used in strengthening techniques. Prefabricated systems are supplied in the form of laminates or circular-section bars, with their fibers oriented in the longitudinal direction of the element. In in-situ cured systems, matrix and fiber are supplied separately, and the CFRP composite is manufactured at the strengthening application site. In this case, the fibers may be either unidirectional or bidirectional in orientation and supplied in the form of sheets or fabrics. Of the various ways of applying strengthening systems, the Externally Bonded Reinforcement (EBR) technique is widely used. It originated in Europe, proposed and developed in Switzerland, and was first used to strengthen a bridge with CFRP sheets in the city of Lucerne in 1991. Since then, the EBR technique has continued to be used in strengthening interventions [6], [7] due to its advantages, such as ease of application and good mechanical performance [8].
Further, the EBR technique allows for flexural capacity increments by bonding the strengthening system to the tensile face of the beams, for shear strengthening by bonding to the lateral faces of the beams, or for both systems working simultaneously. According to Karbhari [9] and Gangarao et al. [10], composite materials used in civil construction can be exposed to various aggressive agents. Determining the effects of environmental aggressiveness on the concrete-adhesive-FRP bond, especially over prolonged periods, is extremely important [11]. There is a need for further knowledge of the long-term performance, durability, and design life of these systems under conditions such as moisture, seawater salinity, and thermal cycles, among others. The present work aimed to evaluate the long-term durability and mechanical behavior of the flexural strengthening system's constituent materials, as well as of reinforced concrete beams strengthened with CFRP, applied according to the EBR technique, maintained in a laboratory environment (internal and protected) or exposed to weathering (outdoor exposure).

Degradation of FRPs when exposed to weathering

All materials used in the construction industry are subject to degradation from chemical, physical, or mechanical sources. When applying an FRP strengthening system to a given concrete element, these degradation mechanisms should be considered to achieve better long-term behavior and durability [12]. Degradation mechanisms can act either individually or together in the degradation of the FRP composite material (Figure 1). It is generally known that degradation mechanisms directly affect the most sensitive part of the strengthened structure, i.e., the bond between the concrete substrate and the FRP composite. Bonding usually relies on epoxy-based resins, which are susceptible to degradation in harsh environments. These resins are susceptible to ultraviolet (UV) radiation, temperature, humidity, and acid rain [13]. Unless a protective layer is used, the FRP is directly exposed to the external environment and, consequently, to possible aggressive agents, as shown in Figure 2 [14]. Direct exposure of polymers to UV radiation can result in photodegradation, a mechanism in which the UV component of sunlight drives the decomposition or dissociation of chemical bonds between polymer chains. This radiation causes the degradation of polymers, leading to discolouration, surface oxidation, embrittlement, and microcracking of the polymer matrix [12]. Zhao et al. [15] evaluated the degradation of three types of polymeric resins (vinyl-ester, epoxy, and epoxy-ester) when exposed to UV radiation. In that study, specimens of 200 mm, 20 mm, and 4 mm in length, width, and thickness, respectively, were prepared and cured for seven days at room temperature. Subsequently, they were exposed to UV radiation (280-315 nm) at a temperature of 57-63 °C and humidity of 90-95%. The radiation cycles lasted 12 hours (8 hours of UV radiation and 4 hours of condensation) over a total period of 90 days (Table 1). The vinyl-ester resin presented the highest degradation, with reductions of 65% and 69% in tensile strength and modulus of elasticity, respectively. After exposure, the resin specimens presented significant changes in their colouration. Some fiber types also suffer degradation caused by UV radiation.
According to ISIS [12], carbon and glass fibers are generally not affected by UV radiation, while aramid fibers undergo slight degradation under UV exposure. Homam and Sheikh [16] studied the isolated effect of exposure to UV rays on FRP composites with epoxy resin. In a controlled environment, CFRP and GFRP specimens were exposed to temperatures of 22 °C to 38 °C, 40% relative humidity, and radiation from UV-A lamps at 156 W/m². The specimens were tested in uniaxial tension and in shear using the single-lap bond test. Slight increases in tensile strength and stiffness were observed in the specimens of both composites exposed to UV radiation, as compared to those maintained in the laboratory environment, for exposure periods of 1200 and 4800 hours. Notably, exposure to UV rays did not significantly affect the shear strength of the composites. The bond between the FRP composite and the concrete substrate was susceptible to degradation due to UV radiation. Kabir et al. [17] studied the concrete/adhesive/FRP bond in external environments. The authors report exposure to external environments as the most harmful situation for the concrete-adhesive-FRP bond.

Climate regions: the Köppen-Geiger classification system

The intensity of weathering effects depends on the climate and microclimate of the place where the structure is located. One way to categorize the climate is through the Köppen-Geiger climate classification system, which is used worldwide in different areas (such as climatology, meteorology, geography, bioclimatology, and ecology) and takes into account the average annual and monthly values of temperature and precipitation, in addition to climate seasonality. Climatic types are separated into large groups, types, and subtypes. Table 2 presents the nomenclature and climatic types, while Figure 3 shows the climate classification of Brazil [18]. Figure 3 indicates that a tropical climate (Zone A: average temperature equal to or higher than 18 °C every month and significant precipitation) is predominant in Brazil, covering 81.4% of the country's total area. A dry climate (Zone B: low precipitation throughout the year) is present in some parts of the northeastern states of Brazil, while subtropical climates are dominant in the South and Southeast [18]. According to the Brazilian Agricultural Research Company (EMBRAPA) [19], the city of São Carlos in the state of São Paulo (latitude 21°57'42" S, longitude 47°50'28" W, and altitude 860 meters above sea level), where this research was developed, has a local climate defined as humid subtropical with dry winter and hot summer (known as Cwa). Figure 4 shows other regions of the world that have the similar Cfa and Cwa climate types under the Köppen-Geiger classification, to which the results obtained in this research can also be extended.

Table 2. Köppen-Geiger classification system [18], organized by group (e.g., A: Tropical), type, and subtype (where applicable).

EXPERIMENTAL PROGRAM

The research aimed to evaluate the degradation of reinforced concrete beams strengthened with CFRP sheets and exposed to weathering in a humid subtropical climate with dry winter and hot summer (Cwa) in non-accelerated tests.
The beams and intervening materials (epoxy resins and CFRP composites) were exposed to two distinct environments: (a) a laboratory environment (internal, protected), inside a controlled temperature and humidity chamber, for a period of 6 months (reference), and (b) an external environment, wherein the materials were exposed to weathering for the same period of time.

Beams

The experimental program consisted of twelve simply supported beams of 120 × 200 × 2500 mm³ and 20 MPa concrete, with positive longitudinal reinforcement composed of two CA-50 steel bars (10-mm diameter; reinforcement ratio of 0.75%). To avoid shear rupture, CA-60 steel stirrups (5-mm diameter and 10-cm spacing) were used, together with two top longitudinal CA-50 reinforcement bars of 6.3-mm diameter (Figure 5). The beams were designed in deformation domain 2, based on NBR 6118 [21].

Strengthening system

Eight beams were strengthened with unidirectional carbon fiber sheets of 300 g/m² weight and 0.13 mm thickness. The sheet was cut into 110 × 2200 mm strips to fit the tension surface of the reinforced concrete beams (Figure 6). The CFRP sheets were applied externally to the beams when they were 186 days old. The substrate was ground with a diamond thinning disc grinder until the entire layer of cement paste was removed and the aggregates were exposed. Afterwards, compressed air was applied to the surface to remove any solid particles. A layer of the primer resin (Resin A) was applied to improve the adhesion conditions between the CFRP and the concrete substrate. The CFRP was bonded using the epoxy lamination resin (Resin B), which was applied 45 minutes after the primer. The primer and saturation resins were prepared in three steps: agitation of each component, addition of component A to component B in the proportion indicated by the manufacturer, and mechanical mixing until a uniform color was obtained. The primer resin was spread on the concrete surface using a brush; the CFRP sheet was then covered with lamination resin and applied to the concrete substrate. While strengthening, care was taken to ensure the alignment of the fibers in the longitudinal direction of the beam, to check for the non-formation of air bubbles behind the CFRP composite, and to avoid the accumulation of excess resin. The steps and procedures used to strengthen the reinforced concrete beams are depicted in Figure 7.

Degradation environments

Of the twelve beams produced, four had no strengthening, and the others were strengthened in flexure with one layer of CFRP sheet applied according to the EBR technique. Two exposure environments were adopted in this research (Figure 8), as given below.

• Laboratory environment (LAB): internal to a chamber, protected, and with monitoring of ambient temperature and humidity, which served as a reference for the other tests (Figure 8a).
• Exposure to weathering (WEA): external environment, free of obstacles that could shade the beams and specimens, and with monitoring of humidity, temperature, and radiation (Figure 8b).

Each beam was identified as Vx_y_z, where "x" is the number of the tested element, "y" indicates whether the element was used as reference (REF), maintained in the laboratory environment (LAB), or exposed to weathering (WEA), and "z" indicates the use or not of strengthening material (0 or 1 layer of CFRP sheet). The strengthening system was also analyzed, specifically the epoxy resins and the CFRP composites.
Table 3 presents a summary of the experimental program, wherein Vx_REF_0 and Vx_REF_CFRP refer to the beams without and with strengthening, respectively, maintained in the same environment and tested on the same date (200 days after concreting), which were considered as references for the other analyses. Vx_LAB_CFRP and Vx_WEA_CFRP refer to the beams strengthened with CFRP and maintained in the laboratory chamber environment or exposed to weathering, tested 6 months after the application of the strengthening. The other beams will be tested 2 years after strengthening and exposure to the environments previously mentioned. A self-reacting system was designed to allow testing the beams inside the universal testing machine. A double-corbel element (Figure 9a) was designed using 31.75-mm and 25.40-mm thick plates. The corbel was fixed to the testing machine base and supported a 250-cm long steel section (W200 × 26.6). The corbel was screwed to the base of the test machine using six M12 steel hexagon screws, and the metal profile was mounted on it with eight M10 steel hexagon screws. The concrete beams were supported on the metal profile at a roller and at a fixed support, as shown in Figure 9b. To measure the rotation of the supports and the displacement of the steel beam ends, two Linear Variable Differential Transformers (LVDTs) with 25-mm stroke and two potentiometers with 20-mm stroke were used, positioned as shown in Figures 9b and 9c. The loading was applied at a displacement rate of 0.50 mm/min. The applied load was recorded redundantly using an external 200-kN load cell (with a reading resolution of 0.01 kN) in addition to the test machine's load cell, which had a maximum capacity of 600 kN and a reading resolution of 0.1 kN. The displacements and the strains in the concrete, longitudinal reinforcement, and CFRP composite were recorded using an ADS-2000 data acquisition system (manufactured by LYNX). Beam instrumentation included a displacement transducer and six electrical strain gauges. A Vishay displacement transducer, with a linear range of 50 mm, was used to measure the vertical displacement of the beams; it was fixed to an external support and positioned at the midspan of the beams. The strains in the concrete were measured with an electrical strain gauge of type PA-06-1500BA-120 (resistance of 120 Ω and reading-grid length of 40 mm; produced by Excel Sensors), positioned at the midspan of the beams (SG1). The strains in the longitudinal reinforcement were measured with electrical strain gauges of type KFG-20-120-C1-11 (resistance of 120 Ω and reading-grid length of 20 mm; produced by KYOWA), positioned at the central section of one of the longitudinal reinforcement bars (SG2). For the CFRP composite strains, electrical strain gauges of type KFG-20-120-C1-11 (the same used on the longitudinal reinforcement) and type PA-06-250BA-120 (resistance of 120 Ω and reading-grid length of 6 mm; produced by Excel Sensors) were positioned along the strengthening material (SG3 to SG5). Figure 10 shows the instrumentation used.

Characterization of materials

The characterization of the concrete included an analysis of the axial compressive strength and modulus of elasticity. Molding and curing procedures were performed as prescribed by NBR 5738 [22], and cylindrical specimens 200 mm in height and 100 mm in diameter were molded. Twenty-four hours after casting, the specimens were demolded and positioned in the same environment as the beams.
The CFRP composite manufacturing system in which fiber and matrix are supplied separately is known as the in-situ cured system, and it was the system used in this experiment. A layer of unidirectional carbon fiber sheet weighing 300 g/m² was impregnated with epoxy resin. According to the supplier, the carbon fiber sheet has a minimum tensile strength of 3800 MPa, a modulus of elasticity of 240 GPa, and a rupture strain of 15.5‰. More information on the characterization of the CFRP composites and the experimental results can be found in Ferreira [23]. The adhesives used to bond the composite strengthening system to the concrete substrate were supplied by the same manufacturer as the carbon fiber sheet used in this work. Bi-component epoxy primer and lamination resins were used. The characterization tests of the epoxy resins and CFRP composites were carried out at the Polymer Laboratory of the Materials Engineering Department (DEMa) of UFSCar. More information on the characterization of the adhesives and the experimental results can also be found in Ferreira [23]. The mechanical properties of the steel were evaluated by axial tensile tests, according to the recommendations of NBR 6892-1 [24]. A minimum of three specimens, 50 cm in length and randomly chosen, were tested for each bar diameter used. Both the steel and concrete characterization tests were performed at the LSE.

RESULTS AND DISCUSSION

This section presents the results of the material characterization tests and the behavior of the concrete beams without strengthening and strengthened with one layer of CFRP sheet.

Environmental data

The average temperature and humidity readings in the laboratory environment were 23.3 °C (± 0.03%) and 36.5% (± 0.24%), respectively. The values in parentheses represent the Coefficient of Variation (COV). For the exposure to weathering (external environment), Figure 11 presents the temperature, humidity, and UV radiation data throughout the exposure period of the beams and concrete specimens, i.e., from May to November 2018. During this period, the maximum and minimum temperatures were 34.2 °C and 4.7 °C, and the maximum and minimum humidities were 95% and 16%, respectively. Regarding UV radiation, the surface weather station recorded a peak of 4112 kJ/m² and an average of 788 kJ/m² over the exposure period.

Concrete

The concrete properties were tested 28 days after concreting and also on the days of the beam tests, which were performed at the ages of 200 days (REF) and 380 days (LAB and WEA). The results showed mean compressive strengths of 22.0 MPa (± 3.50%) and 28.5 MPa (± 0.54%) and moduli of elasticity of 29.3 GPa (± 7.03%) and 31.6 GPa (± 5.32%) at 28 and 200 days, respectively. For the specimens kept in the laboratory environment (LAB) or exposed to weathering (WEA), mean values of 27.4 MPa (± 0.50%) and 25.2 MPa (± 6.94%) for compressive strength and 31.2 GPa (± 2.87%) and 28.5 GPa (± 3.06%) for modulus of elasticity, respectively, were obtained in the tests performed at the age of 380 days.

Steel

Regarding the characterization of the steel, the 5-mm diameter CA-60 bars showed an average yield stress (at 2.0‰) of 646.9 MPa (± 4.04%) and a maximum tensile stress of 670.6 MPa (± 3.62%) (Figure 12a). The 10-mm diameter bars showed the typical behavior of CA-50 steel, with a yield plateau (Figure 12b), an average yield stress of 547.4 MPa (± 2.13%), a mean yield strain of 2.8‰ (± 1.59%), and a maximum tensile stress of 591.5 MPa (± 6.25%).
The moduli of elasticity verified for the 10-mm and 5-mm bars were 196.9 GPa (± 1.58%) and 191.3 GPa (± 7.19%), respectively.

Epoxy resins

The characterization tests of the resins were performed at the ages of 7 and 14 days and 4 and 8 months after the application of the strengthening system, as presented in Figure 13. Taking the 14-day cure as reference for Resin A maintained in the laboratory environment (Figure 13a), the material presented small increases in maximum tensile stress and modulus of elasticity (5.7% and 4.5%, respectively) at 4 months. After 8 months of exposure, and still with the 14-day results as reference, there was no major change in the maximum tensile stress of the primer, which presented a small reduction of 2.9%; however, a 9.1% reduction in the modulus of elasticity was verified. There was also a small reduction in the ductility of the specimens after 8 months of exposure in the laboratory setting. Therefore, there were no significant changes in the rupture mode or the stress-strain behavior throughout the exposure period in the laboratory environment. Considering the exposure of Resin A to weathering (Figure 13b), and with the tests performed at 14 days as reference, reductions of 39.1% in tensile strength and 4.5% in modulus of elasticity were noted at 4 months. At 8 months of exposure, there was a sharper reduction in tensile strength (about 51.0%), while the modulus of elasticity showed a reduction of 9.1%. For both ages, a reduction in the ductility of the specimens was also verified, with the rupture mode changing from ductile to brittle, without a defined yielding interval. Further, Resin B maintained in the laboratory environment (Figure 13c) did not present major changes in its mechanical properties (a reduction of only about 3.1% in tensile strength) up to 4 months of exposure. However, after 8 months, there were considerable reductions of 23.8% and 19.2% in tensile strength and modulus of elasticity, respectively. These reductions did not alter the shape of the stress-strain diagram, and the maximum stress level continued to be reached at similar strains. At 7 days of cure, the tensile strength was already close to that obtained at the full cure indicated by the manufacturer (14 days). For Resin B exposed to weathering (Figure 13d), the stress-strain diagrams did not present major changes at 7 and 14 days, only a small increase in the maximum stress. After 4 months, there were remarkable losses in its mechanical properties, with reductions in tensile strength and modulus of elasticity of 63.3% and 9.1%, respectively. At 8 months, the maximum tensile strength had decreased sharply (by about 69.2%), while the modulus of elasticity showed a reduction of 18.2%. A reduction in ductility was also observed for both exposure ages, with a change in the rupture mode, which occurred abruptly and without any yielding stretch. Unlike Resin A, Resin B showed a considerable loss in its mechanical properties after 8 months of exposure to the laboratory environment. Another divergence found between the epoxy resins is related to their mechanical properties at 14 days of curing: both the tensile strength and the modulus of elasticity of Resin B were higher (by about 16%) than those of Resin A, despite the equivalent curing time. In a study carried out by Fernandes et al.
[25], epoxy resins were also kept in a laboratory environment for a period of 2 years. The results, as in the present study, also showed small reductions in their mechanical properties, specifically 5.5% and 6.9% in tensile strength and modulus of elasticity, respectively. The findings for the epoxy saturation resin (Resin B) exposed to weathering were also verified in a study conducted by Escobal [26]. The author evaluated the degradation of epoxy saturation resins exposed to weathering (an external environment with monitored temperature and humidity) for a total period of 4 months. The results showed losses of 44% in tensile strength after 4 months of exposure. Escobal [26] also evaluated the same epoxy resins exposed to accelerated aging cycles comprising UV radiation, a temperature of 60ºC, and water vapor at 50ºC. The results showed a considerable reduction in the tensile strength of the resins (about 60%), as verified in the present work for Resins A and B exposed to weathering. The results found by Zhao et al. [15] were also similar to those in this study. The authors evaluated epoxy resin specimens exposed to UV radiation cycles (simulating exposure in an external environment) for a period of 90 days. At the end of the trials, the authors verified a maximum reduction of 20.4% in the modulus of elasticity of the specimens. CFRP composites For the CFRP composites (Figures 14a and 14b), the specimens presented a linear elastic behavior until rupture, typical of brittle materials. Regarding their exposure to the laboratory environment (Figure 14a), a more significant variation was seen in the behavior of the specimens after 4 and 8 months. The greater reduction in the tensile strength of the specimens occurred after 4 months of exposure (a reduction of about 15%). During this same period, there was a small reduction of 1.73% in the modulus of elasticity. After 8 months of exposure, the CFRP composite specimens showed a smaller loss in tensile strength, of about 11%, while the modulus of elasticity was reduced by only 3.3%. The ultimate strain also showed reductions of 14% and 7% after 4 and 8 months of exposure to the laboratory environment, respectively; again, the greater loss occurred in the test performed at 4 months. For weathering exposure (Figure 14b), and taking the tests performed at 14 days as a reference, reductions in tensile strength, modulus of elasticity, and ultimate strain of 18%, 4.9%, and 13.3%, respectively, were verified after 4 months of exposure. At the age of 8 months, the CFRP composite specimens presented reductions of 1.4%, 1.3%, and 0.7% in tensile strength, modulus of elasticity, and ultimate strain, respectively. Once again, there was a significant variation in the behavior of the samples across the analyzed ages. Although the properties of the CFRP composites showed reductions over the exposure time, these oscillations do not necessarily mean that the material degraded; it is therefore important to consider the coefficient of variation of the sample properties, which, as mentioned before, is considerable. Cromwell et al. [27] also found that the exposure of CFRP laminates to aggressive environments, such as constant humidity and saline solution, does not cause major changes in their tensile strength, modulus of elasticity, or ultimate strain.
In another study, by Dalfré [28], no significant change in the tensile strength or modulus of elasticity of CFRP composite specimens exposed to constant humidity and humidity cycles was observed over a total period of 2 years. Concrete beams behavior The behavior of the reinforced concrete beams (with and without strengthening) exposed to the laboratory environment and to weathering was analyzed based on ductility, increase in load capacity, deformation of the materials (concrete, steel, and CFRP), and failure mode of the strengthening system. The stopping criterion adopted for the tests of the reference beams without strengthening was defined in terms of the strain in the flexural reinforcement (the test was stopped when the average steel strain reached 11‰), while for the strengthened beams (reference and those exposed to the laboratory environment and weathering) it was defined by the failure of the strengthening system, followed by a sudden loss of load and detachment of the material. Table 4 presents a summary of the average results obtained in the beam tests at the yielding of the steel reinforcement (ε_sy), at concrete crushing (ε_c,esm), and at the instant the beams reach maximum loading (F_max). The indicator of the effectiveness of the flexural strengthening system in terms of increased load capacity (IR) is also presented; it was obtained from the average maximum loads recorded for the beams without strengthening and for the strengthened beams (reference and exposed to the environments), respectively, as shown in the sketch after this paragraph. In order to evaluate the effectiveness of the strengthening system with CFRP sheets applied according to the EBR technique, Figure 15 presents a comparison of the load versus vertical displacement curves of the reference beams without strengthening (V1_REF_0 and V2_REF_0) and strengthened (V1_REF_CFRP and V2_REF_CFRP). From Table 4 and Figure 15, it is possible to verify the effectiveness of the CFRP EBR strengthening system and the increase in the load capacity of the strengthened beams relative to the beams without strengthening. The strengthening provided increases of 19.7% and 50% in the loads corresponding to the beginning of longitudinal steel yielding and to the maximum force, respectively. In addition, a 9.7% reduction in the average vertical displacement was observed for the strengthened reference beams compared to the reference beams without strengthening, owing to the reduction of cracking promoted by the strengthening system, which restricts the loss of element inertia. Regarding failure modes, the reference beams without strengthening, V1_REF_0 and V2_REF_0, designed in Domain 2, presented a ductile failure characterized by yielding of the longitudinal tensile reinforcement followed by crushing of the compressed concrete. All the strengthened beams (V1_y_CFRP and V2_y_CFRP) showed, after large deformation and high curvature of the elements, cracks appearing parallel to the strengthening material in the moments before failure. Therefore, it appears that the use of the CFRP EBR strengthening system alters the failure mode of the strengthened elements: the failure, previously ductile and governed by the deformation of the longitudinal reinforcement, became almost brittle, with detachment of the CFRP sheet bonded to the concrete substrate.
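The effectiveness indicator IR described above is simply the relative gain in average maximum load of the strengthened beams over the beams without strengthening. A minimal sketch of this calculation follows; the load values are hypothetical placeholders, since Table 4 is not reproduced here.

```python
def strengthening_effectiveness(f_max_unstrengthened, f_max_strengthened):
    """Return the load-capacity increase IR (in %) of the strengthened
    beams relative to the beams without strengthening."""
    return 100.0 * (f_max_strengthened - f_max_unstrengthened) / f_max_unstrengthened

# Hypothetical average maximum loads (kN), for illustration only.
f_ref_0 = 60.0     # reference beams without strengthening
f_ref_cfrp = 90.0  # strengthened reference beams
print(f"IR = {strengthening_effectiveness(f_ref_0, f_ref_cfrp):.1f}%")
# With these placeholder values, IR = 50.0%, echoing the reported gain in maximum force.
```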
Figure 16 compares the load versus vertical displacement curves of the reference beams without strengthening (V1_REF_0 and V2_REF_0) and strengthened (V1_REF_CFRP and V2_REF_CFRP) with those of the beams exposed to the laboratory environment (V1_LAB_CFRP and V2_LAB_CFRP) and to weathering (V1_WEA_CFRP and V2_WEA_CFRP), respectively. For the yielding of the steel reinforcement of the strengthened beams maintained in the laboratory environment or exposed to weathering, average increases in load capacity of 4.6% and 9.6%, respectively, were observed in relation to the yielding of the steel in the beams without strengthening. For the maximum force, average increases in load capacity of 40.7% and 35.1% relative to the reference beams without strengthening (REF_0), respectively, were observed. Analyzing the yielding of the steel reinforcement of the exposed beams against the results obtained in the tests of the strengthened reference beams (REF_CFRP), the elements maintained in the laboratory environment and those exposed to weathering presented reductions in the average load of 12.6% and 8.4%, respectively. It was also observed that the maximum forces recorded in the strengthened beams exposed to the laboratory environment and to weathering were reduced by 6.4% and 10.0%, respectively. Finally, taking into account the maximum force recorded in the tests of the strengthened beams (Table 4), exposure to weathering was more aggressive for the strengthening system, since the ultimate load was lower than that of the elements maintained in the laboratory environment. Concerning the failure mode of the strengthened elements exposed to the environments, no change was verified in comparison with the strengthened reference beams: failure was almost brittle, preceded by popping sounds and occurring at a lower load than that of the strengthened reference beams, with detachment of the CFRP sheet bonded to the concrete substrate (Figure 17). CONCLUSIONS This work reports the findings of an experimental program conducted to evaluate the efficiency of the EBR technique in increasing the load capacity of reinforced concrete beams strengthened in flexure with CFRP composites and also to evaluate the behavior of EBR-CFRP strengthening systems when exposed to weathering. Two exposure environments were adopted in this study (laboratory and weathering). Beams maintained in the laboratory condition for 200 days, with and without strengthening, were considered as reference (REF), while the remaining beams were exposed to the degradation environments and tested at the age of 380 days. The results obtained allowed the following conclusions to be drawn: • The efficiency of the EBR technique in increasing the load capacity of reinforced concrete beams strengthened in flexure with CFRP sheets was verified by means of significant increases in the load capacity of the strengthened elements. The strengthening provided increases of 19.7% and 50% in the loads corresponding to the beginning of longitudinal steel yielding and to the maximum force, respectively.
In addition, there was a 9.7% reduction in the average vertical displacement of the strengthened reference beams compared to those without strengthening, due to the reduction of cracking promoted by the strengthening system, which restricts the loss of the element's inertia; • Analyzing the beginning of steel yielding in the exposed beams against the results obtained in the tests of the strengthened reference beams (REF_CFRP) showed that the elements exposed to the laboratory environment and those exposed to weathering presented reductions in the average load at steel yielding of 12.6% and 8.4%, respectively. It was also observed that the maximum force recorded in the strengthened beams exposed to these environments was reduced by 6.4% and 10.0%, respectively. • Moreover, the evolution of the curing time from 7 to 14 days for the primer and saturation epoxy resins maintained in the laboratory environment resulted in statistically equivalent values; i.e., the curing time affected neither the mechanical properties of the specimens nor the behavior of the stress-strain diagram. • Also, in relation to the epoxy resins maintained in the laboratory environment, the saturation resin presented losses of 23.8% and 19.2% in tensile strength and modulus of elasticity, respectively, after 8 months of exposure, in addition to an alteration in the failure mode, while the primer resin's properties remained statistically unchanged. • For the primer epoxy resin exposed to weathering, the tests performed after 4 and 8 months of exposure showed reductions of 39% and 50% in tensile strength, respectively, in addition to a change in the failure mode of the specimens, while the modulus of elasticity remained statistically equivalent. • From the analysis of the saturation epoxy resin exposed to weathering for 4 and 8 months, it was found that the tensile strength was reduced by about 64% and 70%, respectively, relative to the reference condition at 14 days of cure, and the failure mode changed from ductile to brittle without a yield interval. Regarding the modulus of elasticity, the values were statistically similar up to 4 months of exposure, but after 8 months a reduction of 18.2% was verified compared to the reference condition. • The CFRP composites exposed to the laboratory environment and to weathering showed no increase in their mechanical properties with the evolution of the curing time from 7 to 14 days, and the test results were statistically equivalent.
Sun-tracking optical element realized using thermally activated transparency-switching material We present a proof-of-concept demonstration of a novel optical element: a light-responsive aperture that can track a moving light beam. The element is created using a thermally activated transparency-switching material composed of paraffin wax and polydimethylsiloxane (PDMS). Illumination of the material with a focused beam causes the formation of a localized transparency at the focal spot location, due to local heating caused by absorption of a portion of the incident light. An application is proposed in a new design for a self-tracking solar collector. Introduction Sun tracking is a critical issue in the development of concentrated photovoltaics (CPV), as it is necessary for all but low concentrations [1], and it therefore has a great impact on efforts to promote further CPV penetration. Concentration may enable more cost-effective solar energy conversion by allowing reduced use of photovoltaic material, or the inclusion of more efficient cells with reduced area, coupled with cheap optical elements. This potential has been realized, in the extreme, through the use of high-concentration collectors incorporating multijunction solar cells to achieve record efficiencies, with the present record standing at 36.7% module efficiency [2] and a record cell efficiency of 44.7% [3]. However, the potential of concentrating systems is limited by the requirement of sun tracking, which increases cost and creates a physical encumbrance, making CPV unsuitable for many small-scale applications. The origin of the tracking requirement is the thermodynamic concentration limit arising from the principle of étendue conservation, given by C ≤ n²/sin²θ_acc for a device with refractive index n accepting light from a cone with half-angle extent θ_acc and providing a concentration C [4]. As a consequence, increasing the concentration of a solar collector necessarily reduces the acceptance angle θ_acc and therefore the timespan during which the solar disk can be maintained within the acceptance cone of a stationary concentrator. At present, tracking is accomplished by mechanically rotating the collector to maintain its orientation towards the sun over the course of the day. Depending on the precision required, this may be accomplished using a single-axis or double-axis mechanical tracker [5], with increased precision generally requiring greater physical size and cost. However, there is no physical requirement that this 'rotational' approach to sun tracking is the only possible strategy. For example, it has been demonstrated that, with appropriate optical design, tracking over a wide angular range may be achieved via small lateral translations of a collecting element [6,7], or by incorporating an optically active element which can vary its optical properties in response to the changing solar angle to create a 'self-tracking' system that tracks the sun reactively with no external inputs [8,9]. It has been proposed that tracking may be achieved via a localized, light-activated change in transparency to create a concentrating light trap with a dynamic aperture that moves in response to the changing solar angle to admit light over a wide angular range (Fig. 1(a)) [10].
This is achieved by initially focusing light with a lens or set of lenses onto the receiving surface of a secondary concentrator, which is covered by a transparency-switching material; the local transparency formed as a result of this illumination then follows the focal spot as it moves across the surface due to the changing solar angle. Progress in developing this design has to date been limited mainly by the lack of an appropriate material from which to create the moving aperture element. Recently, a material satisfying many of the requirements for such an application has been identified in the form of a thermally responsive transparency-switching composite of paraffin wax and polydimethylsiloxane (PDMS) [11]. The material undergoes a shift at the melting point of the paraffin between a high-temperature transparent ('on') state and a low-temperature opaque ('off') state in which light is strongly scattered. The mechanism of the switching has been identified as a transformation of the microstructure, resulting from the paraffin phase transition, in which dispersed wax crystals that act as scattering centers for light reversibly form and dissolve in the PDMS matrix [12] (Fig. 1(b)). In the present work, we make use of this behavior to create a light-tracking aperture suitable for the desired application. Theory The potential value of this design can be demonstrated in principle by an idealized theoretical description of its behavior as a concentrator. For this theoretical discussion we can consider an ideal nonimaging concentrator, which could be approached in real life by, for example, a CPC. To provide a numerical result, we consider a concentrator with an acceptance angle θ_acc = 10°, containing a solar cell at its exit aperture. For an ideal concentrator, the concentration of rays within the acceptance cone is equal to the thermodynamic limit 1/sin²θ_acc (for a hollow concentrator), which in this case is 33x. In Fig. 2(a) the same concentrator is used in the configuration of ref. [10], with a transparency-switching covering over the entrance aperture and a primary optic to focus light onto this covering. Prior work by various authors provides sufficient information to estimate the performance of this ideal system analytically. For the primary optic, we consider a configuration described by Zagolla et al. [13]. This system, composed of two aspheric lenses with a diameter of 25 mm, was found to be effective at concentrating light onto a flat image surface, whose width is the same as the lens diameter, over a ±23° range of incidence angles, defining the tracking range of the system. The spot diameter was shown to be less than 1 mm over the whole tracking range. Based on the presented results of ref. [13], and our own corroborating ray trace simulations of a similar system, the angular divergence of the focused rays can be estimated. Using this information, the optical efficiency of the concentrator can be estimated using a statistical model which has previously been described by us and others [14,15] in the context of a similar light-trapping system [16]. We can consider the light absorbed by the solar cell in this way: when light initially enters the concentrator, some fraction is absorbed immediately and the remainder is rejected. This initial absorption can be expressed in terms of the diameters of the small input aperture and of the whole concentrator entrance surface.
This holds in the ideal case where transmission through the aperture is 100% and through the rest of the entrance surface is 0%. Carrying out the summation over all possible passes, as in refs. [15] and [14], leads ultimately to the expression given in Eq. (1). Using the parameter values listed in this section, our ideal system can, according to Eq. (1), achieve an optical efficiency of up to 95% (a concentration of ~31 suns; red line in Fig. 2(c)). The efficiency is strongly dependent on the escape probability, and it may be maintained over the full ±23° tracking range. A figure of merit for the tracker can be the ratio of the integrals of the concentration over the incidence angle. Experiment The material is prepared by mixing paraffin wax (Sigma Aldrich product 76228) and uncured PDMS (Dow Chemical SYLGARD 184) in a 1:9 ratio. To enable absorption, a small amount (0.1 wt%) of carbon black paint (Winton Oil Colour 24/Ivory Black from Winsor & Newton) is added to the material. The standard Sylgard curing agent is added in a 1:10 ratio, and the mixture is stirred manually until all components are fully distributed. The gel-like substance is poured into circular PLA molds of 20 mm radius fixed to a polycarbonate substrate and cured by heating in an oven for 2 hours at 90°C. The cured samples are solid rubbery disks which can be peeled off the substrate. They undergo transparency switching at about 45°C, which is the manufacturer-provided melting temperature of the paraffin. As has been established in previous work by us and others [11,12], the transition temperature of the composite can be controlled by using paraffins with different melting points. The sun-tracking demonstration was set up on the campus of the Masdar Institute in Abu Dhabi. A sample of the material ~1.5 mm thick was mounted on a stage at an angle facing the sun at noon. To prevent thermal conduction between the stage and the material, the sample was in contact with the stage only at the bottom edge and three support points. A converging lens (d = 25.4 mm, f = 50 mm) was placed parallel to the sample in line with the sun, with the sample at the focal point of the lens, creating a focal spot near the center of the sample (Fig. 3(a)). To document the tracking behavior, two sets of time-lapse photographs were taken of the sample at a rate of one frame per minute for one hour: one set with the lens covered, to show more clearly the location of the transparency, and the other with full illumination, to demonstrate that the light passes through the aperture unscattered. Results After the sample is positioned at the focal point of the lens, a transparent aperture forms at the spot location on a timescale of about 10 s. The aperture has a clearly defined border, and features can be seen clearly through it. If the sample is illuminated for only a short period of time (in this case, about one minute), the transparency remains highly localized around the focal spot location, as seen in Fig. 3(b). If the lens is removed following the aperture formation, switching back to the off state occurs quickly and the aperture closes. This is illustrated in Fig. 4. From the light spot on the white backing, one can observe the transition from the fully opaque state, characterized by the partial transmission of diffused light, to the locally transparent state with full transmission through the aperture (Fig. 4(a-d)).
After the lens is removed, a 'pinhole' spot created by transmission of direct sunlight through the aperture can be seen on the backing, gradually vanishing as the aperture closes (Fig. 4(e-h)). When the lens is left in place, over the first several minutes of illumination the aperture increases in size before stabilizing at a diameter of approximately 5 mm, as equilibrium is reached between absorptive thermal inputs and conductive losses into the surrounding material. During the following hour, the solar angle changes sufficiently that the focal spot moves from the center of the sample almost to its edge, as seen in Figure 4. The aperture position changes with the spot location, remaining centered on the focal point for the duration of the demonstration (Figure 5, top frames). The aperture size remains constant, with strong localization around the focal point (Figure 5, bottom frames). Discussion The results of this demonstration serve as a proof of concept for a light-responsive aperture based on a thermally activated transparency-switching material. While other uses may be found, we focus in this discussion on the application for which the concept was originally developed: a concentrating light trap with a moving aperture. The purpose of the material in this case is to admit light in concentrated form through a small aperture and then confine that light inside the trap by internally reflecting any rays on an escape trajectory. In the configuration proposed by Stefancich et al. in ref. [10], or indeed in any light-trapping configuration, the performance depends on limiting the optical losses associated with those reflections. These losses may come from absorption of light by the material, from transmission through the material to the outside, or from rays escaping through the aperture. Optimization of the material is focused on minimizing these three sources of loss. Absorptive losses result from absorption of trapped light by the pigment incorporated into the material. Absorption is essential to the operation of the element and therefore cannot be eliminated entirely; however, if a selectively absorbing pigment is used, the element may function by using the selected range of wavelengths to activate the tracking element while transmitting other wavelengths without absorption. If photovoltaic cells are employed, an appropriate pigment would absorb below the bandgap of the utilized PV material, or alternatively the short wavelengths that have a high photon energy and are most responsible for the thermal gains detrimental to the performance of the PV cell. Minimizing transmittance through the 'reflective' regions is equivalent to reducing the off-state transmittance, which is currently being approached with a two-pronged strategy of identifying new component materials that maximize the internal refractive-index contrast of the composite and manipulating the crystallization kinetics to increase the scattering-center density. Aperture losses are reduced by minimizing the aperture size, which is achieved by manipulating the thermal transport behavior so that the equilibrium size of the transparent region is as close as possible to the spot size. This may be achieved by attempting to control the thermal conductivity of the material, but it is also affected by the degree of pigmentation, which dictates the rate of thermal input. These optimization approaches are the subject of ongoing research.
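As a numerical companion to the theory section above, the short sketch below evaluates the thermodynamic concentration limit C = n²/sin²θ_acc and reproduces the ~33x figure quoted for an ideal hollow concentrator with a 10° acceptance half-angle. The function name is ours, chosen for illustration.

```python
import math

def concentration_limit(theta_acc_deg, n=1.0):
    """Thermodynamic (etendue) concentration limit C = n^2 / sin^2(theta_acc)
    for a concentrator of refractive index n accepting light from a cone
    of half-angle theta_acc. n = 1 corresponds to a hollow concentrator."""
    return (n / math.sin(math.radians(theta_acc_deg))) ** 2

# Ideal hollow concentrator with a 10 degree acceptance half-angle,
# as considered in the theory section.
print(f"C_max = {concentration_limit(10.0):.0f}x")  # prints "C_max = 33x"
```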
Research Article A Color Image Encryption Scheme Based on a Novel 3D Chaotic Mapping Introduction Chaos has always been a very active research topic in the field of nonlinear science. There are many characteristics common to digital chaotic systems and cryptography, such as sensitivity, aperiodicity, and pseudorandomness. These characteristics promote the application of digital chaotic systems in cryptographic algorithm design. Image encryption is one of the main means of information protection. An image has characteristics such as high correlation, large data size, and high data dimension, which should additionally be considered when it is encrypted. Because of their low efficiency in real-time processing, traditional encryption algorithms such as DES, 3DES, and AES are not suitable for digital images [1]. Therefore, many image encryption schemes based on different fields have been proposed in recent years, such as cellular automata [2,3], wavelet transform [4,5], compressive sensing [6], selective encryption [7], DNA coding [8,9], and chaos [10][11][12][13][14]. Because of their large key space, strong dynamic characteristics, complex attractors, and strong ergodicity, high-dimensional chaotic systems are more secure than low-dimensional ones. However, the high computational cost of high-dimensional chaotic systems makes them occupy more hardware and software resources, which makes them difficult to implement. By contrast, a low-dimensional chaotic system is a simple function, which is easier to implement. Due to the limited computation precision of software and hardware, the chaotic complexity of real-numbered chaos is often affected by dynamic degradation (the collapsing effect) [14,15]. As research has progressed, the characteristics of low-dimensional chaotic maps have been studied thoroughly [16,17]; in particular, some algorithms based on low-dimensional chaotic maps have been widely studied and cracked [18][19][20]. Therefore, when a low-dimensional chaotic mapping is applied in cryptography, it is necessary to enhance all of its chaotic characteristics. However, due to uncertain parameters, such a mapping still includes partial periodic regions which do not maintain long, stable synchronization. Each of the existing approaches has its own limitations and drawbacks [30]. Many image encryption algorithms have been proposed based on Shannon's theory of shuffling and diffusion [5,12,31,32]. Shuffling rearranges pixels by some rule [33], and diffusion substitutes pixels using a series generated by a chaotic map; the inherent features of a chaotic map are suitable for both security notions. Generally, most algorithms share some common limitations. Firstly, the key is independent of the original image; that is, if the key is unchanged, different original images that need to be encrypted will use the same key [34][35][36]. Secondly, encryption schemes based on low-dimensional chaotic maps are not suitable for practical use because of their small key space, short period, and low complexity [14][15][16][17]. Finally, the ciphertext is related only to the key and has no relationship with the plaintext or intermediate ciphertext, so such schemes cannot resist plaintext and ciphertext attacks. In this paper, we propose a novel color image encryption algorithm based on an efficient chaotic system coupling two low-dimensional chaotic maps. Firstly, the new chaotic system uses a combination method to formulate a 3D chaotic map.
It not only retains the complexity of its underlying maps but also enlarges the dimension from 2D to 3D, and it enlarges the key space. Thus, the chaotic map can exhibit highly chaotic behavior for all parameters. Secondly, the initial key is derived from the plaintext, so different images have different initial keys, which improves the security of the algorithm. Finally, the formulas of this algorithm are related not only to the key and plaintext but also to the intermediate ciphertext, which increases the complexity of the relationship among plaintext, ciphertext, and key. The other sections of this paper are organized as follows: Section 2 presents the proposed chaotic system, introduces the new 3D chaotic map, and then proves its chaotic properties. In Section 3, a detailed explanation of the proposed encryption algorithm is provided. An analysis of its security and performance is carried out in Section 4, and the conclusion is given in Section 5. 3D Piecewise-Henon Map With the continuous improvement of the computing speed and performance of modern computers, nonlinear systems have developed greatly. However, because the computational accuracy of computer software and hardware is limited, the complexity of chaotic mappings in the real-number field gradually declines. In addition, the orbit of a low-dimensional chaotic map will degenerate (the collapse effect) over some periods. Therefore, low-dimensional chaotic systems are rarely used alone [37,38]. In particular, the Henon and logistic mappings are quite different in theory when they are used alone [39,40]. Therefore, we combine the characteristics of the Henon mapping and the piecewise mapping to construct a new chaotic mapping with larger dimension and better chaotic characteristics, the 3D piecewise-Henon mapping (3D-PHM). The initial values x0, y0, and z0 are selected in the interval [0, 1]. From 3D-PHM, three state values are obtained at each integer k, denoted xk, yk, and zk; their range is confined to [0, 1]³. Because of its good simplicity, ergodicity, and sensitivity to initial conditions, the system has good cryptographic performance. Graphical Analysis. The control parameters c1 and c2 in 3D-PHM are very important. We drew a contour map of the approximate entropy of the system for different values, shown in Figure 1; c1 and c2 are varied from 0 to 15 with a step of 0.01. In Figure 1(a), the complexity of the system is good in most regions, and the approximate entropy increases with the parameters c1 and c2, except in some regions (Figure 1(b) is an enlargement of part (a), where the low-value positions can be seen). Therefore, when studying the characteristics of 3D-PHM, we set the control parameters of the system (formula (1)) to c1 = c2 = c. The Lyapunov exponent is an important parameter of chaotic systems. When one of the Lyapunov exponents is positive, the system is chaotic; provided that there are two or more positive Lyapunov exponents, the system is hyperchaotic. The eigenvalue method is used to calculate the Lyapunov exponents of 3D-PHM, which are (0.3055, 0.3158, 0.1529) when (x0, y0, z0) = (0.1, 0.2, 0.3) and c1 = c2 = c = 15. The Jacobian of equation (1) is as follows. The initial point is (x0, y0, z0), and the iteration points obtained from the iterative equation are (x0, y0, z0), (x1, y1, z1), ..., (xn, yn, zn), respectively.
The first (n − 1) Jacobian matrices are J0 = J(x0, y0, z0), J1 = J(x1, y1, z1), ..., and Jn−1 = J(xn−1, yn−1, zn−1), respectively. When the increment is small enough, its evolution satisfies the linear differential equation with J = Jn−1 Jn−2 ... J0. Let the three eigenvalues of the matrix J be λ1, λ2, and λ3, respectively. Figure 2 shows the chaotic attractor A, which has no obvious uneven distribution in the region; it exhibits very good ergodicity when c = 15. Comparative Analysis of the Permutation Entropy of the Henon, 3D-Henon, and 3D-PHM Maps. The complexity of a chaotic system indicates the uncertainty of a mapping, which can be measured by entropy. For a symbol sequence, the higher its entropy, the higher its complexity. Permutation entropy (PE) is an algorithm for measuring the complexity of time series, proposed by Bandt and Pompe in 2002. The permutation entropy is defined as follows [34], where e is the embedding dimension of the reconstructed sequence and Pk is the probability of occurrence of each symbol. In theory, when Pk = 1/e!, H(e) reaches the maximum ln(e!), but, in practice, H(e) ≤ ln(N − e + 1). In general, H(e) is standardized by ln(N − e + 1), giving the normalized value h(e). Obviously, the complexity of a series can be reflected by h(e): the smaller h(e) is, the stronger the regularity of the chaotic series; conversely, the larger h(e) is, the higher the complexity of the chaotic sequence. The comparisons of PE among the Henon mapping, 3D-Henon, and 3D-PHM are shown in Figure 3. As shown, the PE of 3D-PHM is higher than those of the 3D-Henon mapping and the Henon mapping, with the Henon mapping being the least complex. The PE of 3D-PHM is significantly higher than that of the Henon mapping, which means that the complexity of the Henon mapping is increased significantly. The NIST Statistical Test. For all 15 tests in the NIST suite, the significance level was set to 1%. If the P-value > 0.01, the binary sequence is accepted as random with a confidence of 99%; otherwise, it is considered nonrandom. To perform this battery of tests, we generated up to 10^6 points (xi, yi, zi), i > nt, with 3D-PHM. We converted these sequences into binary form, and the NIST tests were performed on them. The test results are shown in Table 1, and all the tests were passed. This shows that 3D-PHM has strong randomness and can resist many statistical attacks. Hence, the tested binary sequences generated by 3D-PHM are random with respect to all 15 tests of the NIST suite. Approximate Probability Density. The probability distribution function is a tool to measure whether a dynamical system is uniformly distributed [41]. In order to measure the probability density of the system, we divide the three-dimensional attractor A into a group of small cubes in B = [0, 1]³, the number of cubes being n³. Let nt be the number of iterations in the transition regime, large enough; after nt + n iterations of 3D-PHM, the proportion of visits to each box is computed, where χE is an indicator function of the set E′ defined as follows. In addition, in order to obtain the number of times the trajectory Cn(x0, y0, z0) passes through a particular cube Bi,j,k, we need to calculate the number of visits of the trajectory Cn(x0, y0, z0) to this box relative to n.
Given that n is large enough, the approximate probability density function is Pn(x, y, z) = Pn(Bi,j,k) for all 0 ≤ i, j, k ≤ n − 1 and all (x, y, z) ∈ Bi,j,k. For the 3D-PHM system to be considered random, it is further required that the variables defined above, and what mathematicians call the invariant measure, be independent of the starting point (x0, y0, z0). Moreover, when the trajectory Cn(x0, y0, z0) is evenly distributed throughout the space, the following formula holds for all 0 ≤ i, j, k ≤ n − 1, where μ(Bi,j,k) represents the natural Lebesgue measure of Bi,j,k [42]. For computing the approximate density function of 3D-PHM, the cube B is divided into n³ = 100³ boxes. We take nt = 10^4 and iterate mapping (1) for nt + n steps, with n = s × n³, where s = 1, ..., 30. The control parameter and the initial values are fixed at c = 15 and (x0, y0, z0) = (0.1, 0.2, 0.3). In Figure 5(a), the trajectory Cn(x0, y0, z0) visits more than 99% of the cubes Bi,j,k when n exceeds 5 × n³. A calculation using the mean square error (MSE) is carried out to measure the difference between the values of Pn(Bi,j,k) and μ(Bi,j,k); in this metric, ‖·‖ denotes the Euclidean norm. Obviously, in Figure 5(b), the MSE values decrease to zero as the number of iterations n increases, which means that the points of the trajectory Cn(x0, y0, z0) are distributed uniformly in the phase space. The histogram of the approximate density function Pn is displayed in Figure 6(b); it shows a standard normalized distribution almost perfectly. The series generated by 3D-PHM is distributed uniformly in the phase space without any concentration, as shown in Figure 6(a), which implies strong chaoticity and ergodicity. The Color Image Encryption Algorithm To thwart powerful attacks based on statistical analysis, Shannon suggested using confusion and diffusion for encryption [43]. Generally, the security of image encryption technology rests on shuffling and diffusion. Shuffling is a method of changing the positions of pixels in an image, which makes it difficult to predict the initial position of an encrypted pixel. The diffusion operation can be implemented by a chaotic map: a small change in the chaotic map changes the behavior of the whole system. The two operations can be applied to all kinds of pictures. Our proposed scheme also has two major steps. First, the plain image is scrambled by shuffling with 3D-PHM; the second step is to diffuse the shuffled image. The block diagram of the proposed scheme is shown in Figure 7. The details of the proposed encryption algorithm are outlined as follows: (1) Primary Stages. Convert the image of size m × n × 3 into three monochromatic images of size m × n, namely PR, PG, and PB, and then convert each matrix into a vector (1 × mn), represented by IR, IG, and IB. (2) Generate the Initial Conditions. Set the control parameter c = 15, let n0 be large enough, choose an initial point (x0, y0, z0) ∈ A, and calculate the sum of all pixels in every component of the plain image.
The formula is as follows: K represents the initial values of the scheme, and the formula is given with d = mn − 1, from which the integer key sequence is obtained. Here, the round-n encryption scheme is introduced as follows, where DR(0), DG(0), DB(0) ∈ [0, 255] are the parameters introduced when encrypting the first scrambled pixel; they can be used as the initial key in the encryption, and the outputs are the results of the nth round of encryption. Generally, n ≥ 2. The relationship among the ciphertext, the plaintext (or intermediate ciphertext), and the key is not only XOR but also includes a nonlinear modular operation; thus, the proposed algorithm can resist plaintext attacks. Then each component is converted into the matrices CR(n), CG(n), and CB(n), whose size is m × n. Finally, the color cipher CRGB is assembled. Decryption is the inverse of encryption, but decryption starts with the last pixel and goes back to the first. The process of decryption is given as follows: (1) Split the ciphered image CRGB into three separate components and then convert them into the encrypted pixel sequences CR, CG, and CB (1 × mn). (2) Apply the inverse diffusion to the sequences CR, CG, and CB to recover the shuffled vectors SR, SG, and SB (the accompanying Algorithm 1 box, whose inputs are the integers m and n and the vectors IR, IG, and IB and whose outputs are the shuffled image vectors SR, SG, and SB, describes the forward shuffling). (3) Use the vectors SR, SG, and SB as input to Algorithm 2 to generate the deshuffled vectors IR, IG, and IB. (4) Obtain the pixel sequences IR, IG, and IB, and combine them into the plain image P. Simulation and Experimental Analysis For all the analyses, the control parameters are c1 = c2 = c = 15, the round number is n = 2, the initial values are (x0, y0, z0) = (0.1, 0.2, 0.3), and {CR(0), CG(0), CB(0)} = {234, 15, 167}. The proposed color image cryptosystem is executed on the standard test image "Lena" with size 512 × 512 × 3. The ciphertext of the three color components of the color image is shown in Figure 8. In order to demonstrate that the proposed image encryption algorithm is secure against the most common attacks, key space and key sensitivity tests are analyzed, and security analyses such as chosen-plaintext attack, histogram analysis, correlation of the image, difference measurement, and information entropy are also carried out. Key Space Analysis. In order to resist brute-force attacks, it is necessary to make the key space as large as feasible. Generally, the security is accepted when the key space is greater than 2^128. The proposed algorithm takes the three initial states of formula (1) as the initial secret keys, expressed as double-precision real numbers, so the key space is 2^64 × 2^64 × 2^64 = 2^192. In addition, DR(0), DG(0), and DB(0) can also serve as initial keys, so the key space of the proposed algorithm is 2^216. Therefore, it is sufficiently large to prevent a brute-force attack [35,44,45]. The comparison of key space sizes between the proposed scheme and similar works is displayed in Table 2. Key Sensitivity Analysis. A secure image encryption scheme requires high secret-key sensitivity; that is, even a very small change in the initial keys should produce two completely different decrypted images. The plaintext and the ciphertext are shown in Figures 9(a) and 9(b). In Figure 9(c), the correct decryption is shown, and the wrong result is shown in Figure 9(d), where the initial value is K = (x0 + δ, y0, z0) with δ = 10^−16. It is easy to see that the last image is completely different from the original image; the proposed scheme is sensitive to the key. Image Histogram Analysis.
For the three color components of RGB images, the graph of the number of pixels at each gray level is the histogram. Histogram analyses of the original image and of the image encrypted by the proposed scheme are carried out in this paper. Histograms of the three components of the original and encrypted images are shown in Figure 10. From Figures 10(b), 10(d), and 10(f), the histograms are quite uniform and different from those of the plaintext, which means that the result of the algorithm can resist the known-plaintext attack [47]. Verifying the security of the encrypted image with histogram analysis is necessary but not sufficient. In order to further verify the uniform distribution of the ciphertext, the chi-square test is applied. It is as follows: oi are the occurrence counts for every gray level (0 to 255) of the cipher, and ei is the mean occurrence frequency of the uniform distribution, which is 1024 for an image of size 512 × 512. For a secure cryptosystem, the values of χ² for encrypted images must be less than the values of χ² for plain images. In Table 3, the values of χ² for the plain image are much larger than those for the encrypted image, which means that the proposed algorithm has a good encryption effect. Correlations of Adjacent Pixels. Besides histogram analysis, an analysis of the correlation between adjacent pixels in the plain and encrypted images is conducted. The correlation coefficients ρ of several adjacent pixel pairs (including horizontal, vertical, and diagonal pairs) selected from the image are computed. The formula is as follows: xi and yi represent the gray values of two adjacent pixels in the plain or encrypted image, M0 is the number of adjacent pixel pairs selected randomly from the plain or encrypted image, and x̄ and ȳ represent the means of the two adjacent-pixel samples. When ρ → 1, the adjacent pixels are highly correlated, and when ρ → 0, the adjacent pixels are weakly correlated. For the correlation analysis experiment, we randomly selected 10000 pairs of adjacent pixels from the plain and encrypted images in the horizontal, vertical, and diagonal directions. The experimental results are shown in Figure 11, where (i, j) represents the coordinates of pixels in an image. From Figure 11, the correlation of the plain image is significantly higher than that of the encrypted image. In addition, the correlation coefficients ρ of the original image are computed and listed in Table 4: the ρ values of the original image are close to 1, whereas the encrypted ρ values are close to 0. This means that the proposed algorithm removed the correlation between adjacent pixels of the original image. Then, comparisons are conducted among different algorithms in Table 4. The correlation coefficients of the encrypted image are extremely different from those of the plain image, the former being close to zero, and the encrypted ρ values of our scheme are closer to zero than those of the other schemes. The Difference Measurements. The degradation of the image after encryption can also demonstrate the effect of an encryption system. To measure the difference between the original and the encrypted images, we have the two following effective statistical tools. The Structural Similarity Index. The structural similarity (SSIM) index indicates the similarity between two images. The SSIM belongs to [−1, 1]; when the SSIM is 1, the two images are completely similar.
It is defined, from the perspective of image composition, as an attribute of the object structure in the scene that is independent of brightness and contrast. The SSIM value combines the brightness, contrast, and structure of the images: mean values are used to estimate brightness, standard deviations to estimate contrast, and the covariance to measure the similarity of structure. The formula used to calculate the SSIM of two images is as follows: uX and uY represent the means of the gray pixels of the two images; σX and σY are the standard deviations of the two images; σ²X and σ²Y are the variances of the two images; σXY represents the covariance of the two images; and C1 and C2 are two constants, where C1 = K1 × L and C2 = K2 × L. Generally, K1 = 0.01, K2 = 0.03, and L = 255. The SSIM values of different encryption schemes are listed in Table 5. The SSIM between the original "Lena" image and the outputs of the encryption schemes approaches zero; compared with the others, the proposed algorithm shows higher superiority. Peak Signal-to-Noise Ratio Analysis. The peak signal-to-noise ratio (PSNR) is the most common and widely used objective evaluation index for images. For a good image encryption scheme, the smaller the PSNR value (generally, less than 10), the better the scheme. The calculation is as follows: MSE is the mean square error of the image; C(i, j) is a pixel of the encrypted image and I(i, j) is a pixel of the original image, whose coordinates are (i, j); L is the range of pixel values in the image. We calculated the PSNR of each color component of the encrypted image (Lena) and found that it was less than 10. The PSNR values of this encryption scheme are compared with the results of other schemes in Table 6; from this comparison, we can see that the PSNR results of this scheme are better than those of other existing algorithms. Analysis of Antidifferential Attack. In an encryption algorithm, diffusion is an important property, proposed by Shannon in [43]. A good encryption system must have good diffusivity: if one pixel in the original image is changed, the encrypted image should change completely in an unpredictable way. The significance of diffusion lies in how complex it makes the algorithm, so that it can resist analysis by an attacker. The number of pixels change rate (NPCR) is usually used to test the effect of changing a single pixel in the encryption scheme; it calculates the percentage of differing pixels between two images. The average intensity difference of the two images is tested by the UACI. Here, C1 and C2 are two cipher images whose corresponding plain images differ in only one pixel, and D(i, j) is defined accordingly; the UACI is defined with m and n being the numbers of rows and columns of the image, respectively. Ideally, the mean values of NPCR and UACI are NPCR = (1 − 2^−n) × 100% and UACI = (1/2^(2n)) · Σ_{i=1}^{2^n − 1} [i(i + 1)/(2^n − 1)] × 100%. For a grayscale image with 256 levels, n = 8. The expected values of NPCR and UACI are NPCR_E = 99.6094070% and UACI_E = 33.463507%, respectively. In the experiment, 100 groups of images based on the Lena image were selected for encryption. Every group has two images: one is the original image, and the other is an image in which one pixel has been changed at random. The results of NPCR and UACI are shown in Figures 12 and 13.
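The NPCR and UACI measures defined above can be computed directly from two cipher images. The sketch below is a minimal NumPy implementation for 8-bit grayscale arrays; it follows the standard definitions rather than the authors' own code, which is not available.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI (both in %) between two equally sized 8-bit images.

    NPCR is the fraction of pixel positions that differ; UACI is the
    mean absolute pixel difference normalized by the maximum level 255.
    """
    c1 = np.asarray(c1, dtype=np.int16)  # widen to avoid uint8 overflow
    c2 = np.asarray(c2, dtype=np.int16)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

# Two random 512x512 "cipher images" for illustration; for an ideal
# cipher the expected values are about 99.61% and 33.46%.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (512, 512))
b = rng.integers(0, 256, (512, 512))
print(npcr_uaci(a, b))
```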
The results of the proposed algorithm are distributed near the ideal values (horizontal lines in the figures). The mean values of NPCR and UACI in the proposed scheme are 99.6214% and 33.4149%, respectively, which are very close to the ideal values. The comparison with other algorithms is shown in Table 7. The results show that the proposed algorithm can resist plaintext attacks, ciphertext attacks, and known-plaintext attacks well, and it is also superior to other algorithms. The reason is that, in other schemes, the ciphertext diffusion effect works in only one round of pixel substitution, in which a change in any one plaintext pixel can affect only the ciphertext after the changed pixel. In this paper, two or more rounds of diffusion are carried out; therefore, every pixel in the encrypted image is affected. Analysis of Chosen Plaintext Attack. The ability of a scheme to resist the chosen-plaintext attack should be tested in the following way. The formula is as follows [61], where O1 and O2 are two plain images and E1 and E2 are the two corresponding encrypted images produced with the same key. When the equation holds, the algorithm is highly vulnerable to the chosen-plaintext attack; the corresponding result for our scheme is shown in Figure 14. Analysis of Plaintext Attacks. In many methods of cryptanalysis, attackers try to identify the relationship between plaintext and ciphertext by searching for ways to reduce the key space or the equivalent key space. This kind of attack usually uses all-white or all-black pixels to generate ciphertext with the proposed algorithm; the key is then inferred from the corresponding ciphertext image. In order to resist this attack, the encryption scheme should eliminate any relationship between ciphertext and plaintext. The proposed scheme, even if a specific plaintext is selected, such as black and white images, does not generate recognizable patterns. This is because the encryption algorithm depends not only on the change of pixel positions but also on the high complexity of the novel chaotic map and the good multiple-diffusion characteristics of the encryption algorithm. Especially when the algorithm is used to encrypt adjacent columns/rows, the adjacent pixels will have little correlation. The results of our proposed algorithm are depicted in Figure 15. The size of the white and black images is 512 × 512, and they are encrypted by the proposed algorithm; the encrypted results are shown in Figure 15(b) and the following panels. The ability of the encryption system to resist noise attacks is tested by adding salt-and-pepper noise of different intensities when encrypting the image. Figure 16 shows the decrypted images where densities of 0.05 and 0.1 were added, respectively. The decrypted images become slightly blurred, but the contents of the images can still be recognized. Analysis of Gaussian Noise. Gaussian noise with a variance of 0.01 and zero mean was added during the encryption process, and the result is shown in Figure 17(a); with Gaussian noise of variance 0.5 and zero mean, the result is shown in Figure 17(b). Although the decrypted image is fuzzy, the basic image content can still be distinguished. The information entropy is defined with P(si) being the probability that the symbol si appears in the system; for a random signal consisting of 2^8 symbols, H(s) is equal to 8. The information entropy value of the proposed scheme is calculated.
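The information entropy H(s) = −Σ P(si) log2 P(si) over the 256 gray levels reaches its ideal value of 8 bits for a uniformly distributed cipher image. A minimal sketch of this computation is shown below; the random test image is a stand-in for an actual cipher image.

```python
import numpy as np

def information_entropy(image):
    """Shannon entropy (bits per pixel) of an 8-bit grayscale image."""
    counts = np.bincount(np.asarray(image, dtype=np.uint8).ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]  # ignore gray levels that never occur
    return -np.sum(p * np.log2(p))

# A uniformly random image should score close to the ideal 8 bits.
rng = np.random.default_rng(1)
print(information_entropy(rng.integers(0, 256, (512, 512))))  # ~7.999
```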
The results for the proposed algorithm and other schemes are shown in Table 8; our values are 7.9984, 7.9982, and 7.9979, respectively, which are higher than the others, ensuring that our scheme is more complex. Computational Complexity and Speed Test. The computational complexity of an algorithm is often used to describe the execution time of a program or the space occupied by the algorithm in memory or on disk, and it is often represented by the symbol O. The function T(n) represents the time of an algorithm (or its number of steps), where n is the size of the problem to be solved. Theoretically, T(n) = O(g(n)) means that there is a positive constant δ such that 0 ≤ T(n) ≤ δg(n). Given that the size of the image is m × n, the scrambling statements execute m × n times during the image encryption. Taking two rounds as an example, the diffusion statements execute 2m × n times, and decryption requires the same number of operations as encryption. So we can write T(n) = O(mn); therefore, the complexity of the algorithm is O(mn), and the proposed algorithm remains efficient while resisting different kinds of cryptanalysis. The speed of every scheme is shown in Table 9. We implemented our scheme and the other schemes using Matlab R2016b on an Intel Core i5-4590 @ 3.30 GHz processor with 4 GB RAM and the Windows 10 operating system. Compared with DES, AES, and a logistic-mapping encryption scheme, our chaotic encryption algorithm is quicker than the traditional ones. Conclusion This paper has proposed a new color image encryption scheme based on 3D-PHM. 3D-PHM shows better chaotic complexity and performance than the original mappings: it enlarges the dimensions of the Henon mapping, increases the complexity of the low-dimensional chaotic mapping, and the method can also be applied to other low-dimensional chaotic mappings. At the same time, the parameter range of 3D-PHM is also enlarged, which increases the key space of the original mapping. Based on the new chaotic mapping, this paper proposes a color image encryption scheme in which the cipher is associated with the key, the plaintext, and the intermediate ciphertext. Security and performance evaluations show that the proposed cipher has various desirable characteristics, such as efficiency, flexibility, and resistance against cryptanalytic attacks. Data Availability No data were used to support this study. Conflicts of Interest The authors declare no conflicts of interest.
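As a closing companion to the complexity analysis in Section 2, the sketch below implements the normalized permutation entropy of Bandt and Pompe for a scalar time series. The normalization by ln(N − e + 1) follows the description in the text; the input series is a stand-in (the logistic map), since the 3D-PHM iteration formulas themselves are not reproduced in this excerpt.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(series, e=4):
    """Normalized permutation entropy of a 1D series with embedding
    dimension e, normalized by ln(N - e + 1) as described in the text."""
    x = np.asarray(series)
    n_windows = len(x) - e + 1
    # Ordinal pattern (rank order) of each length-e window.
    patterns = Counter(tuple(np.argsort(x[i:i + e])) for i in range(n_windows))
    probs = np.array(list(patterns.values())) / n_windows
    h = -np.sum(probs * np.log(probs))
    return h / math.log(n_windows)

# Stand-in chaotic series: the logistic map at r = 4.
x = [0.1]
for _ in range(5000):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
print(permutation_entropy(x))
```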
Antimicrobial Diterpenoids of Wedelia trilobata (L.) Hitchc Continued interest in the metabolites of Wedelia trilobata (L.) Hitchc, a notoriously invasive weed in South China, led to the isolation of twenty-six ent-kaurane diterpenoids, including seven new ones, 1–7. Their structures and relative configurations were elucidated on the basis of extensive spectroscopic analysis, including 1D- and 2D-NMR experiments. The antimicrobial activities of all isolated diterpenoids were evaluated against a panel of bacteria and fungi. Introduction Wedelia trilobata is a notoriously invasive weed in a wide range of tropical and subtropical areas [1]. In southern China, this creeping, mat-forming perennial herb has caused significant damage to farmlands, forests, and orchards [2,3]. Studies have shown that W. trilobata has a strong allelopathic potential on neighboring native plants [4,5]. The major chemical constituents of W. trilobata are ent-kaurane diterpenes, sesquiterpene lactones, and triterpenes with a variety of biological activities, such as antibacterial, antitumor, hepatoprotective, and central nervous system depressant properties [6]. We previously reported ten eudesmanolides isolated from this plant as potential inducers of plant systemic acquired resistance [7]. As a continuation of that work, twenty-six ent-kaurane diterpenoids, including seven new ones, 1–7, were obtained from the whole plant of W. trilobata (Figure 1). All diterpenoids were evaluated against a panel of bacteria and fungi, and compounds 2, 4, 7, 10, 12, and 13 showed weak inhibitory activities against Monilia albicans with MICs of ca. 125 µg/mL. Herein, we report the isolation and structural elucidation of these compounds, as well as their antimicrobial properties.
Therefore, the structure of 2 was established as shown. Structure Elucidation of Compounds Compound 1 was obtained as a white amorphous powder, with a molecular formula determined as C 25 H 38 O 5 on the basis of HREIMS which indicated a molecular ion peak at m/z 418.2722 M + (calcd. for C 25 H 38 O 5 , 418,2719). The IR spectrum revealed absorption bands of hydroxyl (3431 cm´1) and carbonyl (1711 cm´1) groups. In the 1 H-NMR spectrum (Table 1), the downfield olefinic proton at δ H 6.02 (1H, q, J = 7.0 Hz) and two methyl signals at δ H 1.82 (3H, s) and 1.92 (3H, d, J = 7.0 Hz), indicated the presence of an angeloyloxy group in 1 [8]. C-1′ (δC 167.9, C), C-1 (δC 38.9, CH2), and C-18 (δC 24.7, CH3) as well as the correlations from Me-20, H-12, and H-15 to C-9, and from the methyl at C-4 (Me-18) to a downfield quaternary carbon (C-19) at δC 178.1. The ROESY correlations of H-3 with H-5 and H3-18 suggested that the angeloyloxy was α-orientated, and the hydroxy at C-16 was also assigned as α-orientated by the ROESY correlations of H3-17 with H2-11 and H-14β along with the ROESY correlations of H3-20 with H2-15. Consequently, the structure of 1 was finally determined as 3α-angeloyloxy-16α-hydroxy-ent-kauran-19-oic acid. Compound 2 had the molecular formula C25H38O6 as determined by the HREIMS, with 16 mass units more than 1. The 1 H-and 13 C-NMR data similarities between 2 and 1 (Tables 1 and 2) suggested that they were structural analogues. As compared with compound 1, the main differences were due to the presence of a hydroxymethyl group (δC 66.8) and the absence of a methyl group in 2. The hydroxymethyl group was assigned to C-17 by the HMBC correlations of H2-17 to C-14, C-15, and C-16. Therefore, the structure of 2 was established as shown. Compound 2 had the molecular formula C 25 H 38 O 6 as determined by the HREIMS, with 16 mass units more than 1. The 1 H-and 13 C-NMR data similarities between 2 and 1 (Tables 1 and 2) suggested that they were structural analogues. As compared with compound 1, the main differences were due to the presence of a hydroxymethyl group (δ C 66.8) and the absence of a methyl group in 2. The hydroxymethyl group was assigned to C-17 by the HMBC correlations of H 2 -17 to C-14, C-15, and C-16. Therefore, the structure of 2 was established as shown. 4). The NMR data suggested that compounds 3 and 4 possessed the same relative configuration as those of 1 and 2, respectively. Thus, compounds 3 and 4 were determined as 3α-tigloyloxy-16α-hydroxy-ent-kauran-19-oic acid and 3α-tigloyloxy-16α, 17-dihydroxy-ent-kauran-19-oic acid, respectively. Compound 5, a white powder, possessed the molecular formula C 29 H 38 O 4 , as determined by the HREIMS, 13 C-NMR (Table 2) and DEPT data. Comparison of the 1D-and 2D-NMR spectroscopic data of 5 with those of 3α-cinnamoyloxy-ent-kaur-16-en-19-oic acid (15) revealed that their structures were closely similar to each other. The only difference between them was that the double bond of the cinnamoyloxy group at C-3 in 15 was reduced in 5, which was supported by the molecular weights of 5, showing two mass units more than those of 15. This was further confirmed by the HMBC cross-peaks of H-2 1 and H-3 1 with C-1 1 and C-4 1 . The α-orientation of the 3-dihydrocinnamoyloxy group was apparent from the ROESY correlations of H-3β with H-5β and H 3 -18β. Thus, compound 5 was determined as 3α-dihydrocinnamoyloxy-ent-kaur-16-en-19-oic acid. 
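The HREIMS reasoning above is simple arithmetic on exact isotope masses, and it can be checked directly. Below is a minimal Python sketch (not from the original work; the isotope masses are standard reference values and the function is purely illustrative):

```python
# Monoisotopic mass check for the HREIMS assignments above.
# Exact isotope masses (u) are standard reference values.
MASS = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221}

def monoisotopic(formula):
    """formula given as a dict of element counts, e.g. C25H38O5."""
    return sum(MASS[el] * n for el, n in formula.items())

m1 = monoisotopic({"C": 25, "H": 38, "O": 5})   # compound 1
m2 = monoisotopic({"C": 25, "H": 38, "O": 6})   # compound 2

print(f"calcd for C25H38O5: {m1:.4f}")  # 418.2719, cf. observed m/z 418.2722
print(f"calcd for C25H38O6: {m2:.4f}")  # 434.2668
print(f"difference: {m2 - m1:.4f}")     # 15.9949: one additional oxygen in 2
```

Running it reproduces the calculated value of 418.2719 for 1 and shows that the 16-mass-unit difference between 1 and 2 corresponds to exactly one additional oxygen atom.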
Compound 7 was isolated as a white powder, and its molecular formula was determined by HREIMS. Further analyses demonstrated that compound 7 showed a closely similar NMR pattern to that of 6, indicating that compound 7 was a structural analogue of ent-kauran-19-oic acid. The double bond was located between C-15 and C-16 by the HMBC cross-peaks of H-15 with C-8, C-14, C-16, and C-17. Meanwhile, the O-bearing methylene group could only be connected to C-17, given the HMBC correlations from H2-17 to C-14, C-15, and C-16. Finally, the O-bearing quaternary carbon was attributed to C-9 on the basis of the HMBC correlations of H2-7, H2-11, and H3-20 to C-9. The relative configuration of 7 was shown to be identical with that of 6 by NMR analysis. Thus, compound 7 was determined as 3α-cinnamoyloxy-9β,17-dihydroxy-ent-kaur-15-en-19-oic acid.

Evaluation of Antimicrobial Activity
In summary, seven new and nineteen known ent-kaurane diterpenoid metabolites were obtained from the whole plant of W. trilobata, and some compounds exhibited weak antimicrobial activities. Moreover, we previously reported ten eudesmanolides, isolated from this species, as potential inducers of plant systemic acquired resistance [7]. It can therefore be concluded that diterpenes and sesquiterpenes are the main metabolites of W. trilobata and that they may be significant as chemical defenses, allowing this notoriously invasive weed to adapt to varying surroundings rapidly and effectively (Figure 3).

General Procedures
1D- and 2D-NMR spectra were recorded on an AM-400, a DRX-500, or an Avance III-600 spectrometer (Bruker, Karlsruhe, Germany) with TMS as an internal standard. Unless otherwise specified, chemical shifts (δ) are expressed in ppm. MS were measured on an HPLC-Thermo Finnigan LCQ Advantage ion trap mass spectrometer (Waters, Milford, PA, USA). Optical rotations were determined on a SEPA-300 polarimeter (Horiba, Tokyo, Japan). UV spectroscopic data were measured on a 210A double-beam spectrophotometer (Shimadzu, Kyoto, Japan). IR spectra were recorded on a Tensor-27 spectrometer (Bruker, Rheinstetten, Germany) with KBr pellets. Column chromatography (CC) was carried out on silica gel G.

Plant Material
The whole plant of Wedelia trilobata (L.) Hitchc was collected in Simao, Yunnan Province, China, in August 2011. The specimen was identified by Yu Chen of Kunming Institute of Botany (KIB), Chinese Academy of Sciences (CAS). A voucher specimen (H20110805) has been deposited in the State Key Laboratory of Phytochemistry and Plant Resources in West China, Kunming Institute of Botany.
2,297.4
2016-04-01T00:00:00.000
[ "Biology", "Chemistry", "Environmental Science", "Medicine" ]
The detection of flaws in austenitic welds using the decomposition of the time-reversal operator The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm.

Introduction
The non-destructive testing of austenitic welds using ultrasound is vital for the assessment of safety critical structures such as those found in the aerospace and nuclear industries [1]. The polycrystalline microstructure of these welds is highly scattering, making it difficult to detect and characterize internal defects [2-6]. To help overcome these difficulties, the use of ultrasound transducer arrays and the associated full matrix capture (FMC) data is becoming more widespread. FMC data are the complete set of N × N signals generated when each of the N array elements transmits in turn and the others record the scattered time domain signals. In order to characterize flaws within a structure, an image can be created by applying a delay-and-sum imaging algorithm to the FMC data. The total focusing method (TFM) [7-12] is such an algorithm, which uses the time domain signals from the FMC dataset to create an image of the inspection area by systematically focusing on each point in the imaging domain. Another branch of imaging techniques is those which use time-reversal principles [13-21]. These methods are based on the principle of last in, first out: the delay laws from the received signal are reversed and another ultrasonic wave is sent into the material using these laws to improve focusing on the defect. It has been shown that this process can be used for selective focusing and to iteratively focus on multiple scatterers within a medium. The decomposition of the time-reversal operator (DORT) method [18,22,23] uses the singular value decomposition (SVD) of time-frequency domain data extracted from FMC data. An image of a scatterer in an inhomogeneous medium can be generated by using the eigenvectors of the response matrices, which contain the phase laws that need to be applied in order to focus on the scatterer [24-27]. The method has been successfully applied in polycrystalline materials in [3,19,28], and it was shown that the DORT technique could successfully differentiate the contribution of the defect from noise arising from the microstructure.
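The TFM referenced above is, at its core, a delay-and-sum over every transmit-receive pair in the FMC dataset. The following minimal Python sketch illustrates that structure; it is not code from the paper, and the single homogeneous wave speed c and all names are assumptions:

```python
import numpy as np

def tfm_image(fmc, t, elem_x, c, grid_x, grid_z):
    """Basic delay-and-sum total focusing method.

    fmc    : (N, N, Nt) FMC data; fmc[i, j] is the A-scan transmitted by
             element i and received by element j
    t      : (Nt,) sample times of the A-scans
    elem_x : (N,) element x-positions (array assumed to lie along z = 0)
    c      : assumed homogeneous wave speed used for the delay laws
    """
    N = len(elem_x)
    image = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            tof = np.hypot(elem_x - x, z) / c   # one-way times element -> pixel
            acc = 0.0
            for i in range(N):
                for j in range(N):
                    # total delay for the path i -> (x, z) -> j
                    acc += np.interp(tof[i] + tof[j], t, fmc[i, j],
                                     left=0.0, right=0.0)
            image[iz, ix] = abs(acc)
    return image
```

Because every pixel is focused using delay laws computed from a single assumed wave speed, grain scattering in the weld shows up directly as clutter in the image, which is the problem the objective detection criterion below addresses.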
More recent work by Marengo et al. [29,30] explores detection (using acoustic, electromagnetic or optical data) of unknown scatterers embedded in unknown complex background media using the time-reversal mirror. A common factor among the classical delay-and-sum imaging algorithms discussed above is the requirement of a subjectively chosen imaging threshold. Previous work has been carried out in [31-33] to address this issue and size cracks objectively in the time-frequency domain. In this paper, an objective detection criterion specific to austenitic welds is proposed, based on the SVD of the response matrix. The distribution of these singular values has already been used as an indicator of multiple scattering in coarse-grained media [34]. To illustrate, the method is applied to data arising from both finite-element simulations and experiments. Having detected a defect, the DORT imaging method is then used to image a crack within an austenitic weld and the results are compared with those produced using the TFM.

The decomposition of the time-reversal operator method
The DORT method [18] is a detection technique which uses the SVD of FMC data to determine the time-reversal invariants. The method can be divided into two stages: the first stage determines the existence of a defect and the second stage concerns the imaging of the defect and its localization within the structure. To begin, the DORT method transforms the time domain FMC data, H = {H_ijp : i, j = 1, …, N, p = 1, …, N_T} (where N is the number of phased array elements and N_T is the number of time samples taken), into the time-frequency domain via a time-windowed discrete Fourier transform (DFT). For each start time T_p and time window of length ΔT, a windowed copy of H is given by

$$\hat{H}_{ijp}(T_p) = \begin{cases} H_{ijp}, & T_p \le t_p \le T_p + \Delta T,\\ 0, & \text{otherwise}, \end{cases} \quad (2.1)$$

where t_p denotes the sample times and

$$T_p = t_1 + \Delta t\,(p-1). \quad (2.2)$$

Here, t_1 is the start time of the signal being sampled (in practice, this will be large enough to exclude reflections from the front face of the structure being inspected). The DFT is then calculated to produce the set of N × N response matrices, K(T_p, f_q), at fixed times T_p (with p = 1, …, N_T) and fixed frequencies f_q = q/ΔT (with q = 1, …, N_f, where N_f is the total number of frequency samples). The SVD of the response matrix K(T_p, f_q) for each fixed time and frequency pair is calculated as

$$K(T_p, f_q) = U\,\Lambda\,V, \quad (2.3)$$

where Λ is a diagonal matrix containing the real, positive singular values λ_k, k = 1, …, N, the columns of U are the left singular vectors and the rows of V are the right singular vectors. For isotropic scatterers, each singular value is associated with one scatterer in the material. In the case of small, non-isotropic scatterers (where the diameter of the scatterer is much smaller than the wavelength), four singular values are associated with the scatterer (the largest is associated with the spherically symmetric part of the scattering amplitude and the other three are associated with the directional part). Where the scatterer is larger than the wavelength, there exist many singular values associated with it. Once the SVD for each time-frequency pair (T_p, f_q) is determined, the largest singular value, λ_1(T_p, f_q), is normalized using the quadratic mean of all the singular values at that time-frequency pair [35]:

$$\bar{\lambda}_1(T_p, f_q) = \frac{\lambda_1(T_p, f_q)}{\sqrt{\dfrac{1}{N}\sum_{k=1}^{N}\lambda_k^2(T_p, f_q)}}. \quad (2.4)$$

If there is a flaw present, the normalized first singular value will rise above a threshold value, τ:

$$\bar{\lambda}_1(T_p, f_q) > \tau. \quad (2.5)$$

This stage can be used for objective flaw detection where no a priori knowledge of the material being inspected is required.
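In code, this detection stage amounts to a windowed FFT followed by one SVD per time-frequency pair. The sketch below follows equations (2.1)-(2.5); it is an illustration rather than the authors' implementation, and details such as the non-overlapping windows and the use of a real FFT are simplifying assumptions:

```python
import numpy as np

def dort_stage1(H, dt, t1_idx, win_len, tau=3.7):
    """Sketch of the DORT detection stage (cf. equations (2.1)-(2.5)).

    H       : (N, N, Nt) time-domain FMC data
    dt      : sample period (so the window length is win_len * dt)
    t1_idx  : sample index of the start time t1
    win_len : number of samples per time window
    """
    N, _, Nt = H.shape
    freqs = np.fft.rfftfreq(win_len, dt)            # frequencies f_q
    lam1_norm = []
    for p0 in range(t1_idx, Nt - win_len, win_len): # window starts T_p
        # response matrices K(T_p, f_q): DFT of the windowed signals
        K = np.fft.rfft(H[:, :, p0:p0 + win_len], axis=2)
        row = []
        for q in range(K.shape[2]):
            s = np.linalg.svd(K[:, :, q], compute_uv=False)
            # eq. (2.4): normalise lambda_1 by the quadratic mean
            row.append(s[0] / np.sqrt(np.mean(s ** 2)))
        lam1_norm.append(row)
    lam1_norm = np.array(lam1_norm)                 # indexed by (T_p, f_q)
    return lam1_norm, freqs, bool((lam1_norm > tau).any())  # eq. (2.5)
```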
In this paper, a detection threshold, τ, specific to austenitic welds is calculated. The second stage in the DORT method concerns image reconstruction and requires the input of a homogenized material wave speed, c. The image is generated using back propagation, where the propagation operator is a time-harmonic spherical wave (Green's function), and the focusing is determined by the right singular vectors, V_1(T_p, f_q), associated with the singular values for which the normalized first singular value exceeds τ. The image domain is discretized by a grid, where the number of pixels in the vertical direction is dictated by the number of time samples (N_T) and the number of pixels along the horizontal axis, at positions x̂_l (l = 1, …, N_L), is a free parameter. For a fixed point in the image space, (x̂_l, z_p), the propagation operator is discretized into a 1 × N vector G^{lp}, the elements of which are given by

$$G_j^{lp} = \frac{\exp\left(2\pi i f_q r_{lj}/c\right)}{r_{lj}}, \qquad r_{lj} = \sqrt{(\hat{x}_l - x_j)^2 + z_p^2}, \quad (2.6)$$

where z_p = cT_p/2 is the depth in the material and x_j is the spatial position of array element j. Each value in the image, I(T_p, x̂_l), is calculated using the absolute value of the back-propagated wave, which is focused using the right singular vectors associated with the largest singular values that lie above the threshold τ. Hence,

$$I(T_p, \hat{x}_l) = \Bigg|\sum_{q\,:\,\bar{\lambda}_1(T_p, f_q) > \tau} G^{lp}\, V_1(T_p, f_q)\Bigg|. \quad (2.7)$$
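A sketch of this imaging stage is given below, again as an illustration only: the exact form of the spherical-wave propagator, in particular the 1/r amplitude decay used in the reconstructed equation (2.6), is an assumption, and all names are illustrative:

```python
import numpy as np

def dort_image_row(V1, f_q, elem_x, xs, T_p, c):
    """One row I(T_p, x_l) of a DORT image by back propagation (cf. eq. (2.7)).

    V1     : (N,) right singular vector for a (T_p, f_q) pair whose
             normalised first singular value exceeds the threshold tau
    elem_x : (N,) array element x-positions
    xs     : (N_L,) horizontal pixel positions
    """
    z_p = c * T_p / 2.0                      # depth associated with time T_p
    row = np.empty(len(xs))
    for l, x_hat in enumerate(xs):
        r = np.hypot(x_hat - elem_x, z_p)    # pixel-to-element distances
        # time-harmonic spherical wave propagator (1/r decay assumed)
        G = np.exp(2j * np.pi * f_q * r / c) / r
        row[l] = abs(G @ V1)                 # focus with the right singular vector
    return row
```

Summing such contributions over all frequencies whose first singular value lies above the threshold, row by row in T_p, yields the full image of equation (2.7).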
Detection of flaws in austenitic welds
The first stage of the DORT method investigated here uses the distribution of the largest singular values, {λ_1}_m, m = 1, …, N_T N_f, from the time-frequency response matrices K_ij(T_p, f_q). If the weld contains a defect, this distribution will give rise to values which are significantly larger than a specified detection threshold, τ. In this paper, a detection threshold specific to an austenitic weld is empirically calculated using a finite-element simulation. The configuration in this work differs from the multiple scattering regime in [36], as multiple scattering does not dominate the received signal here. The problems which arise are mainly due to the weld's grain structure, which causes the wave to scatter and refract as it passes through the weld.

(a) Finite-element simulated data
In order to run accurate finite-element simulations of waves propagating through an austenitic weld, it is imperative to have knowledge of the internal microstructure of the weld, as the anisotropic nature of the material has a marked effect on the passage of elastic energy through it. A simulation in the software package PZFlex which included the microstructure of an austenitic weld was generated in [37], where considerable effort was expended in fully characterizing the weld microstructure using Electron Backscatter Diffraction (EBSD) measurements taken from [5]. Although the weld comprises a single anisotropic material, grain boundaries are present within the weld: a consequence of the welding process. In fact, the internal microstructure can be viewed as a partitioning of the weld area into a large set of sub-regions, each one with an assigned crystal orientation. Within the finite-element simulation, the internal geometry is meshed with elements of dimension equal to λ/15 (where λ is the wavelength), approximately 200 µm in this case, which is below the Rayleigh scatterer limit of 300 µm and sufficient to model accurate wave propagation [37]. Each element is assigned a crystal stiffness and orientation, and groups of contiguous elements with the same stiffness and orientation form a grain within the weld.

For this work, new FMC datasets were generated in which a zero-volume crack and side-drilled holes of varying radii were inserted into this anisotropic geometry within the PZFlex software. The theoretical focusing width (equivalent to the lateral resolution) of the phased array transducer, coupled with the sample geometry modelled in the simulation, is approximately 3 mm according to the Rayleigh criterion [38], and the series of flaws inserted into the simulation represents cases where the flaw is less than (a 1 mm diameter side-drilled hole), commensurate with (a 2.5 mm side-drilled hole) and larger than (a 5 mm long crack) this measure of spatial resolution. A schematic demonstrating the set-up is shown in figure 1. A square grid was used within the simulation, so the crack is represented by a thin rectangular void (no stiffness) and behaves as a perfect reflector. The simulation also included a 64-element ultrasonic array (the parameters of which are given in table 1) placed directly above the weld microstructure. A 1.5 MHz single-cycle sinusoid was transmitted by one element and the time domain echo received by all 64 elements was recorded. The transmitting element was then systematically changed by moving along the array until the full matrix of time domain data was captured, for a total of 64 unique simulations per virtual inspection scenario. By applying the TFM algorithm to the collected FMC data, the known location of the back wall was used to estimate a constant longitudinal wave speed (the finite-element simulation does include both longitudinal and shear waves, but only the longitudinal wave speed was deemed necessary as it is associated with the propagation of the largest amplitude part of the wave front). The RMS longitudinal velocity was calculated as 5758 m s−1, with a standard deviation of 146.2 m s−1 (these were calculated using the known distances and corresponding times at which the echo from the back wall occurred in the A-scans where transmission and reception took place on the same element). In the 1.5 MHz inspection simulated, the correlation length [39] was estimated as approximately λ/8 (where λ is the wavelength). This simulated data and its associated parameters are used in all forthcoming sections to test the methods proposed in this paper.

[Tables 1 and 2 appear here in the original: Table 1 lists the array parameters, including the transducer centre frequency of 1.5 MHz; Table 2 lists the parameters used to generate the inter-element response matrix, K_ij(T_p, f_q), from the finite-element data, including the time window ΔT.]

(b) The distribution of the singular values
To examine the statistics of the singular values, they are collected into bins

$$b_s = sB, \qquad s = 1, \ldots, N_s, \quad (3.1)$$

where B is the bin width and N_s is the total number of bins. The total number of singular values contained within each bin is denoted D(b_s). The probability density distribution of the singular values is then estimated [35] by

$$\rho(b_s) = \frac{D(b_s)}{nB}, \quad (3.2)$$

where n = N × N_T × N_f is the total number of normalized singular values arising from all of the response matrices K_ij(T_p, f_q), with p = 1, …, N_T and q = 1, …, N_f. This distribution is compared to the quarter circle law (QCL) [40], which is given by

$$\rho_{QC}(\lambda) = \frac{1}{\pi}\sqrt{4 - \lambda^2}, \qquad 0 \le \lambda \le 2, \quad (3.3)$$

and gives the distribution of singular values of a square random matrix, derived from random matrix theory (RMT). The entries of the random matrix have to be independently and identically distributed for this law to be applied.
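The quarter circle law provides a concrete baseline against which an empirical singular value density can be compared. The following small, self-contained check uses synthetic iid Gaussian matrices as a stand-in for the weld response matrices K_ij(T_p, f_q) (all names and parameter values are illustrative):

```python
import numpy as np

def qcl(x):
    """Quarter circle law (eq. (3.3)): density of singular values of a large
    N x N random matrix with iid entries of variance 1/N, supported on [0, 2]."""
    return np.where(x <= 2.0, np.sqrt(np.maximum(4.0 - x**2, 0.0)) / np.pi, 0.0)

rng = np.random.default_rng(0)
N, trials = 64, 500
svals = np.concatenate(
    [np.linalg.svd(rng.normal(scale=1 / np.sqrt(N), size=(N, N)),
                   compute_uv=False) for _ in range(trials)])

B = 0.05                                   # bin width
D, edges = np.histogram(svals, bins=np.arange(0.0, 2.5, B))
rho = D / (svals.size * B)                 # eq. (3.2): density estimate
centres = edges[:-1] + B / 2
print(np.abs(rho - qcl(centres)).max())    # small: iid matrices follow the QCL
```

For truly iid matrices the empirical density hugs the quarter circle; the point made below is that singular values from a defect-free austenitic weld do not.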
Figure 2 shows the comparison between the QCL (green line) given by equation (3.3) and the distribution ρ(b_s) of singular values from the response matrices K_ij(T_p, f_q) (see equation (3.2) and figure 2, blue line) arising from the finite-element simulated data of an austenitic weld. It can be seen from this plot that the distribution of singular values from the austenitic weld does not fit the QCL. The overall shape of the distribution is not dissimilar to that shown by Shahjahan et al. [34] and is thus characteristic of multiple scattering in coarse-grained media. As no flaw is contained in these simulations, these large singular values must stem from scattered waves emanating from some of the larger grains in the weld. These are not to be classified as flaws, and so the detection criterion based on RMT and the QCL is not suitable for inspecting austenitic welds. The next subsection investigates this in more depth and arrives at a detection criterion for defects in austenitic welds using the distribution of the largest singular values from the response matrices K_ij(T_p, f_q) (p = 1, …, N_T and q = 1, …, N_f).

(c) A threshold for detection of flaws in austenitic welds using the largest singular value
The distributions of the largest singular values of the response matrices, K_ij(T_p, f_q), from the finite-element simulated data of an austenitic weld (as outlined in §3a), with and without a flaw inclusion, are analysed in this subsection. The aim is to determine a threshold specific to austenitic welds which can be used as an objective flaw detection method. Histograms of the normalized largest singular values for all time-frequency pairs calculated from the response matrix, K_ij(T_p, f_q), from the finite-element simulated data of an austenitic weld containing no flaw (blue bars) and with a 1.25 mm radius side-drilled hole flaw (red bars) are shown in figure 3. This figure shows that when there is no flaw included within the austenitic weld (blue bars), the highest concentration of largest singular values lies between 2 and 2.5, with the largest being 3.7. When the flaw (a 1.25 mm radius side-drilled hole) is included (red bars), there is still a large proportion of the first singular values lying between 2 and 2.5; this is to be expected as these correspond to the scattering arising from the grains within the weld structure. However, it is also clear that a significant proportion of the first singular values are greater than 3.7. These values are associated with the flaw, which motivates setting the detection threshold at τ = 3.7, the largest value observed in the flaw-free case.

(d) Results when the detection method is applied to finite-element simulated data
The detection algorithm summarized in §3c is applied here to other finite-element simulations. The time step (Δt = 159 ns) corresponds to a spatial step of 0.5 mm, and the time window ΔT = 4.5 µs is approximately 20% of the total time that the wave front takes to reach the back wall of the test piece. Figure 5 shows the distribution of the largest singular values across the time and frequency domain. The crack has clearly and objectively been detected, as there exists a cluster of singular values larger than the threshold (τ = 3.7). If one compares this plot with the equivalent when the weld microstructure is removed to create a homogeneous medium (not shown for brevity), then it is clear that the singular values are lower here. This is due to less energy reaching the flaw (and in turn being received back by the transducer), as some of the energy in the ultrasonic waves has been scattered by the grain boundaries within the weld.
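In code, the threshold selection of §3c reduces to taking the largest normalized first singular value observed in defect-free simulations. The helper below is illustrative and assumes the dort_stage1 sketch given earlier:

```python
import numpy as np

def empirical_threshold(lam1_norm_noflaw):
    """Threshold from defect-free weld simulations: the largest normalised
    first singular value seen with no flaw present (3.7 in this paper)."""
    return float(np.max(lam1_norm_noflaw))

# Illustrative use (names are placeholders, not data from the paper):
# lam1_ref, _, _ = dort_stage1(H_noflaw, dt, t1_idx, win_len)
# tau = empirical_threshold(lam1_ref)      # -> 3.7 for this weld model
# _, _, flaw = dort_stage1(H_test, dt, t1_idx, win_len, tau=tau)
```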
Figure 6 shows the largest singular values in time-frequency space when side-drilled holes of radius (a) 0.5 mm and (b) 2.5 mm are inserted into the finite-element simulations which include the heterogeneous weld material. The parameters used to create the corresponding time-frequency response matrices are summarized in table 2. Again, it is clear from these time-frequency distributions that in each case there exist singular values larger than the detection threshold, and it can be concluded objectively that there exists a flaw within the structure. As expected, as the radius of the side-drilled hole is increased, the values of the singular values associated with the flaw also increase.

(e) Results when the detection method is applied to experimental data
The detection algorithm presented in §3c is now applied to experimental data from a test piece which contains an Inconel 82/182 weld [5]. The parent material to the right of the weld is 316L stainless steel and to the left is carbon steel, with an Inconel 182 buttering layer between this and the weld (see table 3 for a summary of the array and material properties). An FMC dataset was collected from the sample with the array positioned off centre, directly above the weld (as shown in figure 7), so as to include scattering by the 12 mm vertical zero-volume flaw (crack) present in the centre of the weld, located 37 mm from the surface. Note that since the scenario considered is a linear phased array inspection, we are concerned with only two dimensions, and as the defect is a zero-volume crack, it effectively has only one dimension (length). Although the crack is perpendicular to the array, a planar phased array was chosen over an oblique incidence inspection as it represents a very difficult case in which the TFM struggles to detect anything. Such a scenario could arise in practice if there was limited access to the component of interest. In order to demonstrate the effect of a flaw inclusion on the largest singular value distribution, FMC data were also collected from an area within the weld where it was known that there was no flaw. The time-frequency response matrix was calculated in both cases with the parameters summarized in table 4. The normalized first singular value distributions, λ̄_1 (equation (2.4)), are shown in figure 8, where (a) there is no defect in the inspection area and (b) a 12 mm long, vertical, zero-volume crack is present within the inspection area. It is clear from figure 8b that there are singular values larger than the detection threshold (τ = 3.7) at the lower frequencies. These are associated with the crack and occur at the lower frequencies because the crack is long in comparison with the wavelength (the crack length to wavelength ratio is 10.4). In the no-flaw case (figure 8a), there are some significant singular values occurring at around 22 µs which can be attributed to scattering by the back wall. The histograms corresponding to these largest singular values are shown in figure 9 and, by comparing figure 9a,b, it is clear that there is a higher proportion of singular values exceeding the threshold τ = 3.7 when a flaw is present in the inspection area. Indeed, in figure 9, there is an extremely large singular value around 6.5.

[Table 4 appears here in the original: parameters used to generate the inter-element response matrix, K_ij(T_p, f_q), from the experimental ultrasonic data summarized in table 3, including the time window ΔT = 5.9 µs.]
Imaging flaws in austenitic welds using the decomposition of the time-reversal operator and total focusing methods
In this section, the imaging stage of the DORT algorithm is applied to data arising from both the finite-element simulation and the experimental inspection of an austenitic weld with defects. In addition, the TFM is applied to these FMC datasets to produce images for comparison. An image is created using the DORT method once the flaw has been detected using the largest singular value distribution. This highlights the time-frequency pairs whose corresponding singular vectors can be used to back-propagate and create an image of the defect (see equation (2.7)). It is important to note that within this work the most basic form of TFM has been used to generate the forthcoming images [10] and that more advanced versions of the method are available [7,8]. In the following sections, all the images are plotted on a decibel scale, I_dB = 20 log10(I/I_max), where I is the image matrix produced and I_max is its maximum.

(a) Image reconstruction of flaws within an austenitic weld using finite-element simulated data
In this section, the DORT method and the TFM are applied to the data arising from the finite-element simulation of the phased array inspection of an austenitic weld incorporating a side-drilled hole and a horizontal crack. The images from the DORT method can be compared with those generated using the TFM via the signal-to-noise ratio (SNR). In this work, the SNR is calculated by SNR = 20 log10(A_max/A_0), where A_max is the maximum amplitude in the image and A_0 is the maximum amplitude from a region in the image which does not contain any scattering from the flaw but does contain noise. Note that an aspect of subjectivity arises from the choice of the noisy region from which A_0 is determined. The imaging methods are first applied to data arising from the finite-element simulation of the inspection incorporating a 0.5 mm radius side-drilled hole within the weld microstructure. The resulting image using the DORT algorithm is shown in figure 10a. From this image, it can be seen that the DORT method has successfully found the side-drilled hole, but its shape and size are not recovered. The location of the imaged side-drilled hole is out by approximately 4 mm in the horizontal direction and 3 mm in the vertical direction (using the maximum amplitude of the point spread function from the image as a reference point). The SNR of this image is 23 dB, calculated using the noisy region enclosed by the black box. The TFM was also applied to this dataset to produce the image in figure 10b, which has been cropped to reduce the effects of the front face and back wall reflections. It can be observed that the scattering from the flaw is close to the order of the noise, and it is difficult to find and identify the flaw in this clutter. The SNR in this image, calculated using the maximum amplitude in the region enclosed by the black box, is 8 dB. In this case, the DORT has proved superior in terms of detection and demonstrates a marked advantage over the TFM in the detection of a sub-wavelength defect embedded in a noisy host medium. The next configuration considered is when a horizontal crack of length 5 mm is included within the simulation (see table 1 for the relevant parameters).
The image reconstructed using the DORT method is shown in figure 11a, where the white line indicates the actual location and length of the flaw. The SNR in figure 11a is 19 dB, where A_0 is taken to be the maximum amplitude in the region depicted by the black rectangle. The TFM was subsequently applied to these data to generate the image shown in figure 11b where, again, the white line indicates the true location and length of the crack. The SNR from the TFM image is 20 dB, where the estimate of noise was calculated within the region enclosed by the white rectangle. In this particular case, the TFM proves to be superior to the DORT algorithm. As discussed earlier, it has been shown that many eigenvalues can be associated with one scatterer depending on its size and characteristics [41], and it is demonstrated here that failure to account for all the relevant singular values means that the DORT algorithm is unable to characterize the nature and size of the flaw. A potential avenue for future work could entail further investigation of the singular value distribution when there is a crack-like defect present and subsequently developing the DORT method to include more than just the largest singular value in the image reconstruction.

(b) Image reconstruction of a flaw within an austenitic weld using experimental data
In this section, the DORT and TFM imaging algorithms are applied to a second experimental FMC dataset. The test sample under inspection was manufactured from 316L stainless steel and constructed from two welded austenitic plates. The defect of interest was a lack-of-fusion crack of 6 mm height, tilted at 50° with respect to the x-axis (figure 12).

[Figure 12 caption: This schematic shows the test sample used to collect the experimental data as summarized in table 5. A 6 mm lack-of-fusion crack orientated at 50° with respect to the x-axis is present on the boundary of the double-V weld between two austenitic plates of 22 mm depth.]

The DORT method was applied to this FMC dataset to produce the image shown in figure 13a. A high-amplitude region attributed to the flaw is reconstructed at a depth of approximately 17 mm (compared to the known position of the upper crack tip at 14 mm). The predicted location of the flaw along the horizontal axis is shifted to the left of the known location, as shown in the corresponding TFM image (figure 13b). It can be surmised that this is caused by the tilt angle of the crack; the strongest scattering is received by elements to the left of the flaw, and it is only this information that the DORT algorithm exploits. Although the crack-like nature of the defect is not recovered using the DORT algorithm (this information is not fully captured using only the largest singular value), the indication that a flaw exists is undeniable and an impressive SNR of 47 dB is achieved. For comparison purposes, a TFM image of the flaw was constructed using the experimentally derived pressure wave speed of 5820 m s−1. The overall location and characterization of the flaw are improved in this image; however, a less impressive SNR of only 16.8 dB is achieved. The higher SNR evident in the image constructed by the DORT algorithm also suggests that it has improved detection capabilities in coarse-grained media over the standard TFM and, indeed, further evidence to support this conclusion is shown in [42]. The overall conclusion is that, due to the limited information used by the DORT algorithm, characterization proves problematic.
However, for detection purposes, the DORT method presents a robust alternative to the TFM, which can sometimes fail in cluttered media.

Conclusion
In this paper, a flaw detection algorithm based on the first stage of the DORT method was outlined, within which a detection criterion specific to austenitic welds was proposed. This detection algorithm was then applied successfully to data arising from a finite-element simulation of the phased array inspection of an austenitic weld containing a side-drilled hole (of radius 0.5 mm) and a horizontal crack (of length 5 mm). In addition, the method was successfully applied to experimental FMC data from an austenitic weld with a 12 mm, vertical, zero-volume crack. In the latter part of this paper, the DORT algorithm was used to image the flaws. The data arising from the simulated inspection of a 0.5 mm side-drilled hole were interrogated by both the DORT method and the classical TFM imaging technique. In this case, the DORT showed an improved detection capability over the TFM, in which it was difficult to separate the defect from background noise. However, on examination of the simulated data incorporating a 5 mm crack parallel to the array, the TFM exhibited its superior characterization abilities. This was not surprising, as the data exploited by the DORT (the largest singular values of the time-frequency response matrices) do not contain information on the nature of the defect. To improve upon this aspect of the algorithm, the restriction to examination of only the largest singular values must be relaxed. The comparison between the DORT algorithm and the TFM was then carried out in application to the experimental data arising from the inspection of a lack-of-fusion crack between two welded austenitic plates. Similar discoveries were made; an increased SNR was achieved using the DORT algorithm, suggesting that it is more suitable for the detection of flaws within noisy media. The position of the crack along the horizontal axis was skewed in the DORT reconstruction, and it was suggested that this could be attributed to its 50° tilt, which causes the highest amplitude scattering to be received by elements to the left of the flaw. There was no such problem in the TFM reconstruction and the angled crack was reasonably well characterized; however, a lower SNR of 16.8 dB was achieved. From these scenarios, it is concluded that, in its current form, where only the largest singular value is exploited, the DORT method is better suited to the objective detection of flaws than to their characterization.
7,250.6
2016-04-01T00:00:00.000
[ "Materials Science" ]
Qualitative Demonstration of Spectral Diversity Filtering Using Spherical Beam Volume Holograms We investigate the feasibility of designing spectral diversity filters using spherical beam volume holograms. Our experimental results qualitatively show the separation of the information of different incident wavelength channels using spherical beam volume holograms. The major trade-off in using these holograms is between the degree of spatial spectral diversity and the number of allowed spatial modes (or the divergence angle) of the incident beam.

Introduction
Recently, there has been a lot of interest in designing compact and sensitive spectrometers, especially for bio and environmental sensing. The key element of every spectrometer is a wavelength-sensitive (or dispersive) device that allows for the separation of different wavelength channels for detection. Holograms (or gratings) are well-known candidates for this task due to their wavelength selectivity [1], which results in non-uniform diffraction of different wavelength channels of a collimated optical beam. Most of the optical spectrometers built based on this phenomenon exploit surface relief or thin film gratings, which primarily have single grating vectors. However, these spectroscopy techniques are not efficient for spatially incoherent light sources. The reason is that for an incoherent source with uniform spectrum in the input plane of such spectrometers, the output will be an ambiguous pattern with contributions from different wavelength channels overlapping each other. The problem has been solved in conventional spectrometers by limiting the angular range of the incident beam using spatial filtering. Unfortunately, spatial filtering drastically reduces the photon throughput for diffuse source spectroscopy. While such inefficiency might be tolerated in absorption spectroscopy (where a strong incoherent source can be used), it is a major limitation for weak diffuse sources, such as those generated in Raman spectroscopy. In such cases, the signal from the desired molecules is very weak and successful sensing requires a sensitive and efficient spectrometer.

In order to design more sensitive spectrometers, multimode multiplex spectroscopy (MMS) was recently proposed, based on using a weighted projection of multiple wavelength channels (i.e., multimode) of the incident signal [2]. In contrast to conventional spectrometers, the output signal in MMS is composed of multiple wavelength channels, and the information of each channel is separated by post-processing of the detected signal. The key element in MMS is a spectral diversity filter (SDF) that maps an incident optical signal with uniform spectrum over the input plane to an output pattern with non-uniform spatial-spectral information. By measuring the output light intensity over the output plane with a detector array (for example, a CCD camera) and performing an inverse filtering (as outlined in Ref. [2]), we can approximate the input spectrum.
A spectral diversity filter maps a homogeneous but diffuse spectral source onto a spatially encoded pattern. Inversion of the spectral-spatial mapping enables spectral estimation. Construction of spectral diversity filters is constrained by the constant radiance theorem [3]. According to the constant radiance theorem, it is not possible to produce spatial patterns from a diffuse source without increasing the mode volume or reducing the photon throughput. In contrast with conventional spectroscopy, however, throughput losses using spectral diversity filters may be independent of spectral resolution. Spectral diversity filters have been demonstrated using an inhomogeneous three-dimensional (3D) photonic crystal [2]. Under the photonic crystal approach, the input-output mode volume is fixed but a spatially structured fraction of diffuse incident light is reflected. While 3D photonic crystals are very attractive as super-dispersive elements, they are hard to fabricate based on an arbitrary design. Thus, other (more designable and manufacturable) schemes for the development of SDFs are needed.

This paper considers the feasibility of making SDFs using spherical beam volume holograms (SBVHs). The holograms are recorded by the interference pattern between a plane wave and a spherical wave inside a photopolymer. We will qualitatively show that during readout of these holograms with a white light source, the information of different wavelength channels of the incident beam has a different spatial distribution in the output plane. The details of our experiments are presented in section 2. Experimental results are presented in section 3 and further discussed in section 4. Final conclusions are drawn in section 5.

Experiments
To demonstrate the spectral diversity of SBVHs, we recorded several holograms using the interference of a spherical beam and a planar beam to obtain a range of grating vectors (in contrast to a plane wave hologram, which has only one grating vector). We read these holograms with monochromatic beams with different degrees of collimation (from a collimated beam to a completely diffuse beam) illuminating the hologram in the direction of the spherical recording beam. Figure 1 shows the basic schematic of the experimental setup for recording the SBVHs. In all experiments reported in this paper, our recording material was the Aprilis photopolymer [4] with L = 200 µm thickness. It is a photopolymer recording medium which uses the cationic ring-opening mechanism [5]. Our recording light source was a solid state laser operating at λ = 532 nm. We passed a plane wave through a lens with f = 2.5 cm to make a spherical beam. The distance from the focus of the spherical beam to the center of the hologram was d = 16 mm. When measured in air, the angles of the spherical beam axis and the plane wave with respect to the normal axis were θ1 = 10° and θ2 = 46°, respectively, as shown in Fig. 1. Both beams were TE polarized (E vector perpendicular to the plane of the figure). The size of the hologram is 8 mm by 8 mm. In some experiments, we changed the distance of the lens with respect to the recorded hologram to change the numerical aperture of the spherical beam. We also varied the angle of the plane wave using a 4-f system (not shown in Fig. 1).
We recorded both single holograms and multiplexed holograms in each spot of the recording material to investigate the effect of the complexity of the hologram on its spectral diversity. In the multiplexed hologram case, we used rotation multiplexing. This technique is implemented by rotating the sample with respect to the plane containing the center of the spherical beam, the center of the recording spot, and a line parallel to the recording plane wave (i.e., rotating the hologram about the z-axis in Fig. 1).

The hologram was probed using an approximately monochromatic signal generated by passing the light from a regular 50 W light bulb through a monochromator (Fig. 2). The full width at half maximum (FWHM) bandwidth of the output light from the monochromator was 8 nm. The hologram was far enough (d ≈ 70 cm) from the output slit of the monochromator to approximate a collimated reading beam at the hologram. A CCD camera with an imaging lens system was placed behind the hologram to capture the image of the transmitted light right at the hologram's back face. We then changed the transmission wavelength of the monochromator and grabbed the image of the transmitted light for different wavelengths to observe the spatial-spectral diversity. We also investigated the diffracted light from the hologram as it was illuminated by a collimated white light beam normal to the hologram face. The diffracted beam hit a white screen at a distance of about 2 cm and a picture of the screen was taken by a digital camera, as shown in Fig. 3. The diffracted beam was focused at some point on the screen and the location of the focus varied with wavelength, resulting in a colorful picture on the screen.

Fig. 3. Reading SBVHs with a collimated white light source. The diffracted light focuses on a white screen and a digital camera takes its picture.

Spectral diversity of SBVHs
When reading the SBVHs in the direction of the recording spherical beam using the experimental setup shown in Fig. 2, we observed a dark crescent in the middle of a uniform bright background in the output plane, as shown in Fig. 4. Figure 4 (which is a movie) shows the output pattern as the reading wavelength is continuously scanned from 600 nm to 900 nm. It is clearly seen from the movie that the dark crescent moves as the reading wavelength changes. This change in the output pattern corresponds to the spectral diversity that is required for MMS. It is known that when a hologram with multiple grating vectors (for example, one recorded by a plane wave and a modulated beam formed by passing a plane wave through a spatial light modulator) is read by a beam that is not exactly the same as one of the recording beams, only a portion of the hologram (i.e., a subset of grating vectors) is Bragg matched, resulting in the reconstruction of only a portion of the other recording beam [6-8]. When the SBVH is read by the collimated beam from the monochromator in Fig. 2 (i.e., by approximately a plane wave) instead of the spherical beam, only a portion of the SBVH is Bragg matched, which corresponds to a diffracted beam that has a crescent shape. Reading with a different wavelength results in Bragg matching another subset of grating vectors of the hologram, and thus another crescent diffracts. In other words, by scanning the reading wavelength, the position of the dark crescent in the output plane in Fig. 2 shifts, as shown in Fig. 4.
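The partial Bragg matching argument can be made concrete numerically: a grating vector K diffracts an incident wavevector k_in only if the diffracted wavevector k_in + K lies (approximately) back on the wavenumber sphere. The Python sketch below tests this condition for a sample of grating vectors; the refractive index, the detuning tolerance, and the random sample standing in for the SBVH's continuum of grating vectors are all illustrative assumptions:

```python
import numpy as np

def bragg_matched(K_vecs, k_in, tol=0.01):
    """Mask of grating vectors K that are approximately Bragg matched:
    the diffracted wavevector k_in + K must have the same magnitude as k_in.
    K_vecs : (M, 3) grating vectors; k_in : (3,) incident wavevector."""
    k0 = np.linalg.norm(k_in)
    detuning = np.abs(np.linalg.norm(k_in + K_vecs, axis=1) - k0) / k0
    return detuning < tol

n, lam = 1.5, 532e-9                      # illustrative index and wavelength
k0 = 2 * np.pi * n / lam
k_in = k0 * np.array([0.0, 0.0, 1.0])     # reading beam along the z-axis
rng = np.random.default_rng(1)
K_sample = rng.normal(scale=0.3 * k0, size=(1000, 3))  # stand-in for SBVH gratings
print(bragg_matched(K_sample, k_in).mean())  # fraction of gratings matched
```

Changing the reading wavelength changes |k_in| and hence selects a different matched subset of grating vectors, which is the origin of the moving crescent described above.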
In order to observe the effect of the reading wavelength on the partial Bragg matching, we also captured the diffracted light from the hologram using the setup in Fig. 3, with the hologram illuminated by a collimated white light source in the direction of the recording spherical beam. The resulting image is shown in Fig. 5. One can observe that the SBVH diffracts the entire visible spectrum, although it is a thick hologram. The range is in fact broader than just the visible range. This is due to the partial Bragg matching of different portions of the SBVH by different incident wavelengths, as explained above. This is not the case for a plane wave volume hologram (PWVH) that is used in conventional spectrometers. A PWVH either diffracts the entire collimated reading beam (and not a crescent) or does not diffract it at all, because the hologram has only one grating vector, which is either Bragg matched or mismatched by the reading beam. To make the output patterns even more diverse, several SBVHs can be multiplexed by the rotation multiplexing explained in section 2. Figure 6 (which is a movie) illustrates the output pattern of an 8-fold rotation-multiplexed SBVH set in the reading setup of Fig. 2, when the incident wavelength (controlled by the monochromator) is continuously scanned from 644 nm to 878 nm. In performing rotation multiplexing, the rotation angle of the recording material between two successive recordings is 45°. It is clear from Fig. 6 that the output pattern (composed of 8 crescents, each corresponding to diffraction from one SBVH) has different spatial distributions for different wavelengths. Thus, the spectral diversity here is better than that for a single SBVH. Note that the dynamic range parameter (or the M/# [9]) of the recording material limits the number of holograms that can be multiplexed. To obtain a dark crescent, a large diffraction efficiency for all holograms is required. This diffraction efficiency is given by η = (M/# / M)², with M being the number of multiplexed holograms [9,10]. The material used in our experiment has M/# = 5. That is why we used a maximum of M = 8 holograms in these experiments, giving η = (5/8)² ≈ 0.39 for each hologram.

Discussion
The results presented in section 3 demonstrate the potential of SBVHs for designing SDFs. However, all these results were obtained with collimated incident beams. In other words, we have demonstrated that these holograms might be used to separate the information of different wavelength channels of a collimated incident beam. For practical applications, we would like this diversity to be present for a spatially incoherent beam (or at least a beam with a reasonably large divergence angle). To evaluate the spectral diversity of SBVHs for non-collimated reading beams, we have read them both with spherical (i.e., non-collimated with finite divergence angle) and diffuse beams. For the spherical beam case, the reading experiments were performed using the experimental setup shown in Fig. 2 with an additional lens placed in front of the hologram to generate a spherical beam. The results for a collimated beam (with no additional lens) and a beam with a considerable divergence angle (full angle θ = 20° measured in air) are shown in Figs. 7(a) and 7(b). It is clear that the dark crescent becomes wider and brighter as the divergence angle increases. To study more extreme cases, we put a diffuser in the reading setup shown in Fig.
2, in front of the hologram. Figures 7(c) and 7(d) show the output of the CCD camera when the distance between the diffuser and the hologram is 27.5 cm and 2.5 cm, respectively. The latter implements the worst case by approximating a fully incoherent source. The smaller the distance between the hologram and the diffuser, the larger the divergence angle of the reading beam and the lower the spectral diversity in the output plane will be. Figure 7 clearly demonstrates the trade-off between the spectral diversity and the divergence angle of the incident beam (or the number of spatial modes that are included in the spectrometer). We expect that limiting the divergence angle of the incident beam to θ = 40° would allow for a reasonable diversity in the output plane. Furthermore, by recording more complex holograms (for example, between a plane wave and a modulated beam formed by a spatial light modulator), better diversity for a given divergence angle can be obtained. Note that the first demonstration of MMS using a 3D photonic crystal was done using an incident beam with a θ = 20° divergence angle.

Conclusion
We demonstrated qualitatively that an SBVH formed by the interference pattern of a plane wave and a spherical beam can act as a spectral diversity filter. We also showed that by multiplexing several spherical beam holograms using rotation multiplexing, we can obtain better output spectral diversity. There is a trade-off between the spectral diversity and the number of spatial modes (or the divergence angle, or the power) of the input beam that is allowed to pass through the hologram. A more collimated beam results in better spectral diversity.

Fig. 2. Reading SBVHs with a monochromatic collimated beam and imaging the back face of the hologram. The light source is far enough from the hologram aperture to approximate a collimated reading beam.
Fig. 4. (862 KB movie) Output pattern of a single SBVH illuminated by a collimated monochromatic beam as the wavelength of the incident beam is continuously scanned from 600 nm to 910 nm. The dark crescent moves as the wavelength is scanned. Both the presence of the crescent and its displacement with wavelength are due to the partial Bragg matching of the SBVH by the reading beam.
Fig. 5. Diffracted beam from a SBVH illuminated by a collimated white light source, measured using the experimental setup in Fig. 3.
Fig. 6. (707 KB movie) Output pattern of a complex volume hologram (formed by rotation multiplexing of 8 SBVHs) illuminated by a collimated monochromatic beam as the wavelength of the incident beam is scanned from 644 nm to 878 nm. The output multi-crescent pattern changes as the wavelength is scanned.
Fig. 7. Effect of the divergence angle of the reading beam on the spectral diversity of the SBVHs. A single SBVH is read at λ = 532 nm with (a) a collimated monochromatic beam, (b) a spherical beam with a divergence angle of 20° (full angle in air), (c) diffuse light where the diffuser is 27.5 cm from the hologram in the setup of Fig. 2, and (d) diffuse light where the diffuser is 2.5 cm from the hologram in the setup of Fig. 2.
3,599.4
2004-06-28T00:00:00.000
[ "Physics" ]
Enhanced XAO: the ontology of Xenopus anatomy and development underpins more accurate annotation of gene expression and queries on Xenbase Background The African clawed frogs Xenopus laevis and Xenopus tropicalis are prominent animal model organisms. Xenopus research contributes to the understanding of genetic, developmental and molecular mechanisms underlying human disease. The Xenopus Anatomy Ontology (XAO) reflects the anatomy and embryological development of Xenopus. The XAO provides consistent terminology that can be applied to anatomical feature descriptions along with a set of relationships that indicate how each anatomical entity is related to others in the embryo, tadpole, or adult frog. The XAO is integral to the functionality of Xenbase (http://www.xenbase.org), the Xenopus model organism database. Results We significantly expanded the XAO in the last five years by adding 612 anatomical terms, 2934 relationships between them, 640 synonyms, and 547 ontology cross-references. Each term now has a definition, so database users and curators can be certain they are selecting the correct term when specifying an anatomical entity. With developmental timing information now asserted for every anatomical term, the ontology provides internal checks that ensure high-quality gene expression and phenotype data annotation. The XAO, now with 1313 defined anatomical and developmental stage terms, has been integrated with Xenbase expression and anatomy term searches and it enables links between various data types including images, clones, and publications. Improvements to the XAO structure and anatomical definitions have also enhanced cross-references to anatomy ontologies of other model organisms and humans, providing a bridge between Xenopus data and other vertebrates. The ontology is free and open to all users. Conclusions The expanded and improved XAO allows enhanced capture of Xenopus research data and aids mechanisms for performing complex retrieval and analysis of gene expression, phenotypes, and antibodies through text-matching and manual curation. Its comprehensive references to ontologies across taxa help integrate these data for human disease modeling. Background The embryological literature for Xenopus, the African clawed frog, reaches back more than a century [1]. As many forms of human disease are associated with defects in genes involved in the earliest steps of embryonic development, studying the orthologous genes in Xenopus laevis and X. tropicalis as model systems to elucidate the molecular and cellular pathways through which these genes function has grown in strength in recent decades. Annotation and assembly of the Xenopus tropicalis genome demonstrated that it has long regions in which genes exhibit remarkable synteny with the human genome [2], establishing it as an important model for comparative genomics and modeling human gene function. Despite being allotetraploid, X. laevis also reflects genetic synteny to humans with full chromosomal duplication of X. tropicalis gene sequences. The X. laevis genome is currently being assembled and annotated [3]. In addition to being excellent genetic models, both frog species have large externally developing embryos with rapid embryogenesis, allowing easy study of early vertebrate development from fertilization through organogenesis and limb development. Likewise, large experimentally malleable oocytes, particularly from X. laevis, are a key tool in studies of ion channel physiology and toxicology and the cell cycle [4]. 
Oocytes and synchronously developing embryos are easily obtained in large numbers allowing researchers to quickly gather large amounts of data. Together these two Xenopus species accelerate our understanding of the mechanisms underlying human health and disease [5], yet a daunting challenge remains: to organize, integrate, and make accessible vast quantities of information as it emerges. Xenbase (http://www.xenbase.org), the Xenopus biology and genomics database [6,7], integrates diverse data from high-throughput screens, scientific literature, and other databases (such as NCBI) into a number of database modules, thus allowing researchers to investigate specific genes using well-defined terminologies that bridge different kinds of data. To this end, the Xenopus Anatomy Ontology (XAO) was developed as a structured, controlled terminology that 1) unites anatomy and development of the vertebrate embryo with the molecular and cellular research findings, 2) enables powerful data searches, and 3) facilitates accurate annotation of research findings. From its inception, we intended the XAO to be integral to the functionality of Xenbase, and as such the XAO acts as a platform to support automated and manual curation and to power the gene expression search feature. The XAO provides consistent terms for 1313 anatomical features and developmental stages and draws a detailed conceptual picture of the frog from unfertilized egg to adult. Thousands of relationships between terms describe which tissues are components of other tissues, structures, and anatomical systems, as well as articulating the tissues' developmental lineage. The timing of each feature's embryonic development is framed by references to the community-standard Nieuwkoop and Faber (NF) staging series [8]. The XAO is frequently updated and fine-tuned with an emphasis on completeness for each term, and in response to the evolving areas of Xenopus research. This is essential to making high-quality annotations and robust database queries of Xenopus data with maximal utility to the research community. The implementation of the XAO through Xenbase allows the capture of rich content from the scientific literature and it enables the retrieval and analysis of complex data that have been annotated using the ontology, principally gene expression. Curation of mutant and morphant phenotypes using the XAO is currently in development. The XAO has always been freely available to Xenopus researchers and biomedical ontology developers for use in their projects and we continue to encourage them to provide feedback. Interrelating the XAO with different bio-ontologies has been a key to making Xenopus data accessible to the broader scientific and biomedical communities [9], enabling researchers to query across orthogonal biological and human disease databases. We previously reported the initial development of the XAO [10], emphasizing strong representation of embryonic development and interoperability with established species-specific and gross-level anatomy ontologies. In 2009, the XAO was recognized as one of eight exemplar ontologies in the Open Biological and Biomedical Ontologies (OBO) Foundry [11]. We identified at that time several areas of the XAO to be targeted for improvement. It needed expansion to support accurate gene expression curation from the Xenopus literature [12] and many existing terms required descriptions and more comprehensive relationships to other terms. 
Here we report the progress in pursuit of these goals, which has made Xenbase, with its seamless integration of the XAO, a vital and growing biological and genomics database, and has made the ontology itself a useful resource for Xenopus researchers. Results and discussion Ontology organization and content Anatomical entities are organized in a single classification (is_a) framework in the XAO. Upper-level nodes (e.g., 'compound organ', 'organism subdivision') comprise a structural axis of classification cross-referenced to the Common Anatomy Reference Ontology (CARO) [13], providing interoperability with other model organism anatomy ontologies that use CARO, as well as a starting point for classifying Xenopus-specific features. The XAO reflects various aspects of biological organization with five other logical relationship types. Cell types, tissues, structures, and sub-systems are described as being part_of other tissues, structures, and systems. The lineage of tissues in the course of development is represented by develops_from relationships. The timing of their development is indicated by starts_during and ends_during relationships linking them to specific developmental stages based on the normal table of Xenopus development by Nieuwkoop and Faber [8]. This NF stage series, which has long been the standard in Xenopus research, exists as a sub-ontology within the XAO, with preceded_by relationships delineating the temporal ordering of the 66 component stages. The ontology's developmental tree begins with the specification of the classical vertebrate primary germ layers ('ectoderm', 'endoderm', and 'mesoderm') and branches into the tissues and structures comprising the growing embryo and tadpole. Ultimately, these features are placed within 19 major anatomical systems, from the 'alimentary system' to the 'urogenital system'. The XAO's latest release (October 9, 2013) contains 1313 anatomical and developmental stage terms, 5148 relationships, and 695 cross-references to other ontologies (Table 1). Throughout the course of our work we have taken care that the ontology adheres to the community conventions and best practices recommended by the OBO Foundry [14]. The entry for the 'brain' [XAO:0000010], for example, not only provides a consistent name for this feature and a logical classification as a 'cavitated compound organ'; its relationship to the term 'central nervous system' indicates where the brain functions, while other relationships indicate that the organ first appears at NF stage 22 from its precursor structure, 'anterior neural tube'. Subsequently, the 'forebrain', 'midbrain', and 'hindbrain' are part_of 'brain'. Further divisions relate more regions and structures as part_of these three main regions of the brain; for example, the 'forebrain' has 16 distinct terms, the 'midbrain' has 9 terms, and the 'hindbrain' has 13 terms defined by part_of relationships. This finer granularity allows the more precise gene expression curation demanded by modern research (the 'brain' entry is rendered as a small data sketch below). Expansion and improvements As we began to annotate gene expression reported in the literature and to develop a Xenbase expression search interface (released to the public in 2009), it became clear that usability and annotation quality depend on the ontology having comprehensive sets of terms, definitions, and relationships. Expression queries that include anatomical parameters are designed to draw on relationships in the ontology for their functionality.
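To make the 'brain' entry described above concrete, here is how it can be pictured as a small record. This is an illustrative Python sketch: only the identifier XAO:0000010 and the relationship targets named in the text come from the ontology, while the field layout, the end stage, and the helper function are assumptions.

```python
# Illustrative record for the XAO 'brain' entry discussed above.
# Only XAO:0000010 and the named relationship targets come from the text;
# the end stage and the field layout are hypothetical.
brain = {
    "id": "XAO:0000010",
    "name": "brain",
    "is_a": ["cavitated compound organ"],
    "part_of": ["central nervous system"],
    "develops_from": ["anterior neural tube"],
    "starts_during": "NF stage 22",
    "ends_during": "death",  # assumption: the brain persists through adulthood
}

def describe(term: dict) -> str:
    """Render a one-line summary of a term's relationships."""
    rels = "; ".join(
        f"{rel}: {val if isinstance(val, str) else ', '.join(val)}"
        for rel, val in term.items()
        if rel not in ("id", "name")
    )
    return f"{term['name']} [{term['id']}] -- {rels}"

print(describe(brain))
```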
The Xenbase in-house curation interface restricts the terms that can be used for annotations at particular stages, based on the developmental timing asserted for them in the ontology. Since describing the XAO five years ago, we have implemented major improvements, expanding it from 701 to 1313 anatomical and developmental stage entities and fleshing out much of its existing content (Figure 1). The ontology, which initially included textual definitions for only 292 terms, now has a definition for every term in the ontology. In its initial release, the ontology comprised to a large extent a "partonomy", with the principal hierarchical structure depending on which entity each anatomical feature is "a part of" rather than what it is "a type of." It lacked a single classification framework that fosters the development of good, clear definitions. Concurrent with our effort to be definition-complete, we ensured that every term has an is_a parent. Furthermore, the majority of embryonic structures previously lacked specific starts_during and ends_during stages. All anatomical features now have these relationships, following extensive surveys of literature describing various anatomical systems (e.g., the 'skeletal system' [15]).
[Figure 1: Growth of the Xenopus Anatomy Ontology. In the course of its major public releases since 2008, the number of terms in the ontology has grown by 87% and the number of part_of and develops_from relationships has substantially increased. The majority of terms in the initial release lacked definitions and is_a parents, while the latest release (October 9, 2013) is definition- and is_a-complete.]
The ontology has grown to contain 843 synonyms (originally 203), 859 part_of relationships (originally 363), and 490 develops_from relationships (originally 308). Now, every anatomical term, or one of its is_a ancestors, has at least one part_of and at least one develops_from relationship to another term. Information that curators glean from the literature has often led to adjustments of start and end stage relationships in the ontology. Ontology-building rules have ensured that the start and end stages associated with related features are consistent and make biological sense. For example, the stage range of 'pronephric mesenchyme', by rule, must be the same as or fall within that of 'mesenchyme', its is_a parent. Similarly, the validity of develops_from and part_of relationships is governed by the timing of the related terms; e.g., if 'pronephric mesenchyme' gives rise to 'pronephric kidney', the latter must appear sometime within or immediately after the former's range of NF stage 21-30 (Figure 2). By using the starts_during and ends_during developmental stage restrictions we can distinguish transient embryonic structures from tissues that are only present in the adult. Good examples of transient embryonic structures are the 'pronephric kidney', which starts_during NF stage 28 and ends_during NF stage 64, and the 'tail region', which starts_during NF stage 26 and which is resorbed at metamorphosis and thus ends_during 'NF stage 66'. An example of an adult structure is the 'mesonephric kidney', the kidney of the adult frog, which starts_during NF stage 39 and ends_during 'death'. Curators validate the temporal constraints of XAO terms as part of their ongoing annotation of the published literature and adjust start and end stages accordingly (a mechanical check of this kind is sketched below).
For example, during the recent XAO expansion curators found many papers describing the expression of the gene nkx2.1 as one of the earliest markers of lung fate [17-19]. As a result, the starts_during and ends_during stages for the term 'lung primordia' were revised. References to other ontologies enable cross-taxon comparisons without the need for complex term translators. The XAO currently contains 695 cross-references; the initial release had only 145. Its close integration with the cross-species Uber Anatomy Ontology (UBERON) [20] provides a bridge to cell types and other vertebrate anatomy ontologies. The XAO's rich content and referencing has enabled it to be utilized in projects outside of Xenbase, e.g., Bgee [21], which integrates gene expression data from several animal species. While addressing the XAO's overall condition, we focused on enhancing several specific aspects of the ontology relevant to Xenopus as an animal model: Musculoskeletal system Integration of amphibian limb phenotypes into the Phenoscape Knowledgebase [22], which links evolutionary phenotypes for vertebrates with data from model organisms [23], prompted improvements in this area. The representation of the 'skeletal system', which previously comprised only 35 features, grew to include 146 skeletal element and tissue types based on the Vertebrate Skeletal Anatomy Ontology (VSAO) [24]. Of these, 24 are cranial cartilage terms. The ontology now has 45 individual muscle terms, including many cranial muscles; only three were described in the original XAO release. In addition, we updated the nomenclature and definitions for limb segments ('autopod', 'stylopod', and 'zeugopod') and added terms for each digit segment and joint region in support of phenotype curation. Neural crest Xenopus has proven to be a very important model system for studying neural crest (NC) cell differentiation, stem cell properties, epithelial-mesenchymal transition, and cell migration [25,26]. In order to support the extensive Xenopus embryo research in this area, we have significantly expanded the XAO NC terms. First, Xenbase curators performed an extensive analysis of the NC literature. After drafting preliminary terms, we consulted domain experts from the chick, mouse, fish, and frog communities to improve and coordinate neural crest representation in the XAO. Then, in March of 2012, Xenbase staff participated in a cross-ontology RCN NC workshop [27] where we presented our results and made final term modifications to ensure that the XAO NC definitions were in agreement with other ontologies. This exemplifies our general approach to XAO improvement. Previously, the main 'neural crest' functional domains represented in the XAO included only the 'cranial neural crest', the migrating streams ('mandibular crest', 'hyoid crest', and 'branchial crest'), and the 'trunk neural crest'. We expanded this to include 'neural plate border' (with 'pre-chordal neural plate border' and 'chordal neural plate border' domains); 'cardiac neural crest', 'sacral neural crest', and 'vagal neural crest'; 'premigratory neural crest cell', 'migratory neural crest cell', and 'postmigratory neural crest cell'; and 'anterior branchial crest' and 'posterior branchial crest' domains. We built develops_from relationships between NC and the tissues and structures to which NC contributes, such as the 'craniofacial skeleton', 'glial cells', and 'enteric neurons' of the 'hindgut', and the 'outflow tract' of the 'heart'.
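The stage-consistency rules described above lend themselves to a mechanical check. The following minimal sketch is not Xenbase's actual implementation: NF stages are modeled as plain integers 1-66, and all term data are hypothetical except the 'pronephric mesenchyme' range of NF stage 21-30 quoted in the text.

```python
# Sketch of the two stage-consistency rules described in the text:
# (1) a term's stage range must equal or fall within that of its is_a parent;
# (2) if A develops_from B, A must start within or immediately after B's range.
TERMS = {
    "mesenchyme":            {"start": 10, "end": 45, "is_a": None},  # hypothetical range
    "pronephric mesenchyme": {"start": 21, "end": 30, "is_a": "mesenchyme"},
    "pronephric kidney":     {"start": 28, "end": 64, "is_a": None},
}
DEVELOPS_FROM = {"pronephric kidney": "pronephric mesenchyme"}

def check_is_a_range(term: str) -> bool:
    """Rule 1: the child's range must be the same as or within the parent's."""
    parent = TERMS[term]["is_a"]
    if parent is None:
        return True
    child, par = TERMS[term], TERMS[parent]
    return par["start"] <= child["start"] and child["end"] <= par["end"]

def check_develops_from(term: str) -> bool:
    """Rule 2: the term must appear within or immediately after its precursor."""
    precursor = DEVELOPS_FROM.get(term)
    if precursor is None:
        return True
    pre = TERMS[precursor]
    return pre["start"] <= TERMS[term]["start"] <= pre["end"] + 1

for name in TERMS:
    assert check_is_a_range(name), f"is_a stage violation: {name}"
    assert check_develops_from(name), f"develops_from stage violation: {name}"
print("all stage-range rules satisfied")
```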
Neurological structures and stem cells While adding 42 more terms for subdivisions and regions of the brain (e.g., 'rhombomere 1' to 'rhombomere 8' of the 'hindbrain') and 8 new neuron types ('motor neuron', 'interneuron', etc.), the XAO has also doubled the number of neurological placode terms from 10 to 20. These include the 'facial placode', 'glossopharyngeal placode', and 'vagal epibranchial placode' [28]. Stem cell populations of specific regions, such as the 'ciliary marginal zone' of the 'retina', have also been added. Pronephric kidney Embryonic kidneys are an important model system for investigating principles regulating multicomponent complex organs [29,30], and Xenopus is prized as a model because of its simplicity and experimental accessibility. Xenbase has a rich complement of annotated expression data for Xenopus 'pronephric kidney' development. The XAO now contains significant updates to the definitions, timing, relationships, and synonyms of all pronephric structures. Heart and vasculature Xenopus has been instrumental in studies of vertebrate heart development, and new transgenic lines are being used to investigate the molecular mechanisms and complex gene regulatory networks underlying human congenital heart defects and diseases [31]. As the anatomy of the developing heart and vasculature is described and imaged in finer detail [32], the XAO must expand to capture this detail. So far we have added 19 new heart terms, such as 'primary heart field', 'secondary heart field', and 'epicardial precursor cell', and this will be a continued focus for ontology improvement. Regenerative structures Xenopus has emerged as a leading model for tissue regeneration research [33]. Recent additions of 'blastema' terms specific to the fin, limb, tail, and eye allow curators to capture gene expression involved in normal growth and regeneration. Oogenesis stages With their large size, Xenopus oocytes are amenable to studies of ion channels and transporters [34] as well as maternal gene expression. The XAO staging series incorporates seven new unfertilized egg stages, from 'oocyte stage I' to the 'mature egg' stage, and oocytes have been added as cell types, enabling curation of gene expression during oogenesis. Implementation in Xenbase Integration of the XAO in Xenbase allows the research community to reap practical benefits without needing expert knowledge of ontologies and their design. It also provides important use cases for how the ontology can be used in research applications in general. The gene expression search interface on Xenbase allows researchers to constrain their queries by a range of stages, narrow queries to a single stage, and/or select anatomy search terms from an autocomplete menu, checkbox panel, or expandable tree. It provides an option to include developmental successor or precursor tissues as search parameters. For example, one can perform an expansive search including data for all derivatives of 'neural crest'. Behind the scenes, the database leverages the is_a, part_of, and develops_from assertions in the XAO to retrieve data annotated not only with the exact selected term(s) but with related ones as well. This is a significant improvement over a general text-matching search of Xenbase content, in which a search for 'heart', for example, would only look for that precise phrase and might fail to retrieve data annotated with a more specific heart component such as 'endocardium'.
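The ontology-aware retrieval just described can be pictured as a graph traversal. The sketch below is a hypothetical miniature, not Xenbase's production code; the triples are illustrative beyond the 'heart', 'endocardium', and 'cardiac mesoderm' examples mentioned in the text.

```python
# Sketch of ontology-aware query expansion: a search for 'heart' should also
# retrieve data annotated with terms linked to it by is_a, part_of, or
# develops_from. Triples beyond the text's examples are hypothetical.
from collections import deque

TRIPLES = [
    ("endocardium", "part_of", "heart"),
    ("heart", "develops_from", "cardiac mesoderm"),
    ("primary heart field", "part_of", "heart"),  # hypothetical edge
]

def expand_query(term: str) -> set:
    """Return the term plus everything reachable through relationship links,
    walking in both directions to mirror the precursor/successor option."""
    neighbors = {}
    for subj, _rel, obj in TRIPLES:
        neighbors.setdefault(subj, set()).add(obj)
        neighbors.setdefault(obj, set()).add(subj)
    seen, queue = {term}, deque([term])
    while queue:
        for nxt in neighbors.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(expand_query("heart")))
# -> ['cardiac mesoderm', 'endocardium', 'heart', 'primary heart field']
```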
Searching Xenbase with the term 'pronephros' provides another use case: of the 203 image records returned by an ontology-based search for 'pronephric kidney', 25 are not annotated with that exact term and are instead annotated with 'pronephric duct', 'nephrostome', etc. Synonyms in the ontology, furthermore, enable users to choose a search term based on their preferred nomenclature and reduce the chances of an empty result. A complementary gene expression tool called XenMARK is also available on the web [35]. XenMARK uses an ontology-free annotation system that displays expression images as heat map diagrams projected on embryo schematics. Gene expression queries in XenMARK use coarse-grained 19-term anatomy tags (e.g., 'eye' and 'retina'); thus in XenMARK end users need no detailed knowledge of anatomy terms, but can find genes expressed in the same 'geographical' area of an embryo. In contrast, the XAO contains over a thousand precise terms (e.g., 42 terms that are part_of the eye, with a subset of 23 terms that are part_of the retina), so Xenbase users can perform advanced fine-grained expression queries. Xenbase and XenMARK have data sharing agreements and reciprocal links to enable users to move between the sites easily. To this end, Xenbase provides anatomy and stage term searches [36], taking users to dedicated term summaries listing term metadata and links to related Xenopus information. The Expression tab on every anatomy term page (e.g., 'heart' [37]) contains links to data related to genes expressed in that tissue or structure, including images with captions, clones, and publication records (Figure 3).
[Figure 3: Links from the Xenopus Anatomy Ontology to Xenopus data in Xenbase. The XAO "search anatomy items" function returns the exact match on the query term 'heart'; anatomy features such as 'cardiac mesoderm' that have relationships to 'heart' in the ontology are also returned. Clicking on any term takes users to an XAO term summary page. The Expression tab on the 'heart' term page has links to data related to 4520 expressed genes. Shown schematically is a sample of genes (green fill) that most frequently appear in each of three principal data categories (images, clones, papers: indicated by solid lines). nkx2-5 expression appears in the greatest number of Xenbase images and is cited in the most publications. The gene acta1 is associated with the most clones from heart tissue. The genes actc1, myl3, and acta1 (bold outline) are associated with the greatest combined number of records as of May 20, 2013. The total number of records in each data category for all heart-expressed genes is shown. Heart image by Kolker, Tajchman & Weeks [32], used with permission, ©Elsevier (2000).]
Xenbase complements manually curated data with automated and semi-automated annotation processes [6]. The text-mining tool Textpresso [38] processes newly loaded research article metadata from PubMed [39] and captures terms that match anatomy features, thereby providing links from titles, abstracts, and figure captions to XAO terminology. Future directions In the next phase of XAO development we plan to employ the MIREOT (Minimum Information to Reference an External Ontology Term) [40] technique to import upper ontology terms and cell types, replacing our heretofore manual approach, and to make the XAO compliant with the Basic Formal Ontology [41].
We intend to import a "slim" version of the Gene Ontology (GO) [42] cellular component hierarchy, which will allow us to curate gene expression with cellular localization terms. We are currently testing phenotype curation using combined XAO, GO, and Phenotypic Quality Ontology (PATO) [43] terms. Conclusions Thanks to major expansions and improvements, the XAO allows capture of richer content from Xenopus-specific scientific literature and research data and provides an essential mechanism for performing complex data retrieval and analysis. We will continue to expand and refine it and to closely interrelate it with other ontologies. The XAO is already essential to gene expression annotation and searches, which will allow researchers to benefit from its further improvement. Xenbase, with an enhanced antibody search function and a phenotype database feature currently in development, will continue to drive ontology development. The XAO provides an integral component of entity-quality (EQ) and entity-quality-entity (EQE) annotations made in combination with PATO in order to describe phenotypes. While serving these many functions, the ontology's comprehensive references to bio-ontologies across taxa will help integrate data that provide insight into human disease, further enhancing the standing of Xenopus as an important animal model. Methods The Xenopus Anatomy Ontology is free and open to all users. It may be downloaded from the Xenbase FTP site [44] in file formats that allow it to be opened in two freely available tools, OBO-Edit [45] and Protégé [46]. An OBO-compliant version is available at the OBO Foundry [47] and in Google Projects [48]. Users may also browse the ontology at Xenbase [49] and at a variety of external informatics sites such as Ontobee [50]. Xenbase employs an in-house system of shared documents and spreadsheets where curators request new terms with metadata (definitions, relationships, and stages) and discuss wider structural changes. Public requests and feedback may be submitted at the XAO SourceForge [51] and Google Projects issue trackers. We regularly seek expertise and input from developers of other vertebrate anatomy ontologies (e.g., the Amphibian Anatomy Ontology [52] and the Zebrafish Anatomy Ontology [53]) and the Uber Anatomy Ontology. The XAO adheres to a comprehensive referencing (xref) scheme. This consists of CARO xrefs for upper-level terms and Cell Type Ontology (CL) [54] or UBERON xrefs for other applicable terms (the Amphibian Anatomy Ontology, an effort closely related to the Xenopus ontology, and the VSAO have recently been absorbed by UBERON, so unlike in previous releases, XAO terms now refer to the relevant UBERON entries and not the original ontologies). We strive to make definitions consistent with UBERON, other anatomy ontologies, and the CL, augmenting them as necessary to reflect their specificity to Xenopus.
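For working with the released files programmatically, a minimal loading sketch follows. It assumes the third-party obonet and networkx Python packages, which are not mentioned in the paper; 'xao.obo' is a stand-in for a file obtained from the download sites above.

```python
# Sketch: loading an OBO release of the XAO for programmatic use. Assumes the
# obonet package, which parses OBO files into a networkx MultiDiGraph;
# 'xao.obo' stands in for a file downloaded from the sites listed above.
import networkx as nx
import obonet

graph = obonet.read_obo("xao.obo")  # nodes are term ids, e.g. 'XAO:0000010'

term_id = "XAO:0000010"  # 'brain', per the text
if term_id in graph.nodes:
    data = graph.nodes[term_id]
    print(data.get("name"), "-", data.get("def"))
    # obonet edges point from child to parent, so networkx 'descendants'
    # of a node are its ancestors in the ontology sense.
    print(len(nx.descendants(graph, term_id)), "ancestor terms")
```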
5,667.2
2013-10-18T00:00:00.000
[ "Biology", "Computer Science" ]
Connectome of the Suprachiasmatic Nucleus: New Evidence of the Core-Shell Relationship Introduction It is well established that the suprachiasmatic nucleus (SCN) is a master brain clock, made up of many peptidergic neuronal types, and that the various SCN subregions are characterized by clusters of similar peptidergic cells (Antle and Silver, 2005; Yan et al., 2007). Among the best studied cell types of these SCN subregions are neurons synthesizing arginine vasopressin (AVP) in the shell or dorsal region, and those expressing vasoactive intestinal polypeptide (VIP) in the core or ventral region (Abrahamson and Moore, 2001; Moore et al., 2002; Antle and Silver, 2005). Also found in the shell are met-enkephalin (ENK)-expressing neurons, and in the core are calretinin (CALR)- and gastrin-releasing peptide (GRP)-producing neurons (Abrahamson and Moore, 2001). Possibly due to the small size of SCN neurons and their fine afferent and efferent fibers, relatively little is known about the connectivity among these diverse neuronal cell types and how it might be altered in arrhythmic mutant animals. Importance of networks As for other brain regions, it is generally accepted that the connectivity and characteristics of its neurons play an important role in the functional specialization of the SCN. Although individual SCN neurons are autonomous oscillators, they must function within a network to generate circadian rhythmicity within the nucleus (Welsh et al., 2010). There is limited information available on connectivity among SCN neurons. Early studies in rat using Golgi and Nissl staining in light and electron microscopic work demonstrated that the axons and dendrites of the great majority of its neurons terminate within the SCN (Güldner, 1976, 1983, 1984; van den Pol, 1980; van den Pol and Tsujimoto, 1985; Klein et al., 1991). Güldner (1976) estimated that each SCN neuron made 300-1000 synaptic contacts. Subsequent work confirmed this assessment and estimated that within the SCN, individual neurons make as many as 1000 synapses (Moore and Bernstein, 1989). There is evidence of substantial communication among neurons of a specific cell type, in that boutons immunoreactive for AVP synapse onto AVP-containing dendrites and, similarly, boutons containing GRP synapse onto GRP cells (van den Pol and Gorcs, 1986). Taken together, the weight of evidence indicates that network organization and communication among neurons is key to the functioning of the brain clock, although little work has been done on the connections among SCN cell types. Connections of SCN cell types There have been a few studies of contacts among SCN peptidergic cell types in the rat and hamster, but none in mice, to our knowledge. In rats, using double-label immunochemistry, reciprocal appositions have been described between neurons bearing the following peptides: AVP and VIP (Ibata et al., 1993; Jacomy et al., 1999); AVP, VIP, and GRP (Romijn et al., 1997). In hamsters, injection of tracers into the SCN indicates that the dorsal and medial regions project densely to most of the nucleus, but not to the core region, which had been identified by calbindin (CalB) neurons (Kriegsfeld et al., 2004). Interconnections among CalB and other peptidergic cells of the SCN have been examined using epi- and confocal microscopy and intra-SCN tract tracing (LeSauter et al., 2002). Dynamic aspects of network organization have also been described.
There is a rhythm in CalB expression in fibers, with many appositions seen between GRP and AVP at zeitgeber time (ZT) 14 but exceedingly few at ZT4. A similar finding has been reported in rats, where more CALR fibers are typically seen at ZT14 than at ZT2 (Moore, 2016). To explore in more detail the intra-SCN network in which GRP neurons participate, individual GRP neurons bearing green fluorescent protein (GFP) were filled with biocytin tracer (Drouyer et al., 2010). These neurons form a dense network of local circuits within the core, revealed by appositions on other GFP-containing cells and by the presence of dye-coupled cells. Dendrites and axons of GRP cells make appositions on AVP neurons, whereas adjacent biocytin tracer-filled, non-GRP cells have a less extensive fiber network, largely confined to the region of GRP-bearing cells. This work in rats and hamsters points to the highly specialized connectome of the SCN and to its temporal dynamics. Importance of core and shell communication The anatomic division of the SCN into core and shell regions, based on the location of AVP- and VIP-containing neurons, aligns well with its functional organization. VPAC2, the receptor for VIP, is abundant in the SCN (Cagampang et al., 1998) and is essential in maintaining intercellular and behavioral rhythmicity in the SCN. Mice bearing a null mutation of the VPAC2 receptor cannot sustain normal circadian activity rhythms (Harmar et al., 2002; Piggins and Cutler, 2003; Maywood et al., 2006). In accord with the implications of this finding, mice lacking VIP in the SCN have abnormal circadian activity, impaired cellular rhythmicity, and reduced synchrony among neurons (Aton et al., 2005; Brown et al., 2007). In arrhythmic mice lacking essential components of the circadian transcription-translation feedback loop, the introduction of VIP signaling is sufficient to coordinate gene expression and maintain rhythmicity (Maywood et al., 2011). Such findings support the hypothesis that VIP originating in the core SCN, acting through its receptor, is crucial for maintaining rhythmicity. The present study was designed to examine the connectome of the mouse SCN, as has previously been done in rats and hamsters. Given the importance of VIP and AVP in regulating circadian rhythmicity, a second goal was to assess changes in the network organization consequent to the loss of VIP. Thus, we examined sagittal and coronal sections of the SCN and used triple-label immunocytochemistry (ICC) and confocal microscopy to examine wild-type (WT) and VIP-KO mice. We stained for neurons of the core, namely GRP, VIP, and CALR, as well as those of the shell, specifically AVP and ENK. The contacts between the various peptidergic types were quantified by determining the number of appositions between fibers of one peptidergic type and the cell body of another. The results present a first analysis of the mouse SCN connectome network that leads to the generation of the circadian rhythm. The results also describe a vital role for VIP in determining AVP expression levels. Animals and housing Two genotypes of male animals were used in this study. C57BL/6NJ mice (RRID:IMSR_JAX:005304) were obtained at 6 weeks of age from The Jackson Laboratory. VIP/Peptide Histidine Isoleucine (PHI)+/+ (WT), heterozygous VIP/PHI+/-, and knock-out VIP/PHI-/- (VIP-KO) mice were derived from breeding pairs of heterozygous VIP/PHI+/- mice provided by Dr. C. S. Colwell (University of California, Los Angeles).
These mice had been raised on a C57BL/6 background (Colwell et al., 2003). Animals were housed in translucent propylene cages (48 × 27 × 20 cm) and provided with ad libitum access to food and water. They were maintained in a 12/12 h light/dark cycle at a room temperature of 21°C. Mice were sacrificed at ZT14, at three to four months of age. This time was selected because previous studies indicated a substantial time-of-day effect of calcium binding proteins on the SCN connectome, with high fiber expression at ZT14 (LeSauter et al., 2009; Moore, 2016). Mice were deeply anesthetized with sodium pentobarbital (200 mg/kg) and perfused intracardially with saline followed by 4% paraformaldehyde in 0.1 M phosphate buffer, pH 7.3. All handling of animals was done in accordance with the Institutional Animal Care and Use Committee guidelines of the University. Immunocytochemistry (ICC) Brains were postfixed for 24 h at 4°C and then cryoprotected in 20% sucrose in 0.1 M phosphate buffer (PB) overnight. Coronal or sagittal sections (50 μm) were cut on a cryostat. Both single- and triple-label ICC were performed. We note that coronal sections are more familiar to students of the SCN than are sagittal sections. That said, in the coronal view, the connections along the rostral-caudal plane are severed. Because there is evidence that indicates the importance of the network along the rostrocaudal axis (Hazlerigg et al., 2005; Yan and Silver, 2008; Sosniyenko et al., 2009; Buijink et al., 2016), we used sagittal sections to investigate the network in this plane. We use coronal sections as well to enable readers to relate familiar coronal views of the nucleus to the less familiar sagittal view. Comparison of AVP staining in VIP-KO and WT AVP cell counts in the SCN, supraoptic nucleus (SON), and paraventricular nucleus (PVN) in WT and VIP-KO littermates were studied in simultaneous immunostaining runs, using 50-μm coronal sections. For AVP cell counts in the SCN of WT and colchicine-treated VIP-KO littermates, 50-μm sagittal sections were used. Photomicrographs of these areas were captured with a Nikon Eclipse E800 microscope (Nikon) equipped with a cooled CCD camera (Retiga Exi; Q-Imaging), using Q-capture software (RRID:SCR_014432, Q-Imaging) with an excitation wavelength of 480 ± 20 nm for Cy2. Images were stored as TIFF files with a 1392 × 1040-pixel array and then imported into Adobe Photoshop CS6 (Adobe Systems, Inc., RRID:SCR_014199). Counts were done independently by two researchers blind to the experimental conditions on three sections for each region with six mice/group and are reported as cell number/brain section. Inter-observer reliability was ≥93%. AVP cell sizes in WT and VIP-KO littermates were measured in a series of confocal images using ImageJ (National Institutes of Health; RRID:SCR_003070). The perimeter was measured on 1-μm optical sections in the largest extent of the neuron where a distinct nucleus was seen. The area through this plane was calculated from the perimeter. Confocal microscopy Each triple-labeled section containing the SCN was observed under a Zeiss Axiovert 200 MOT fluorescence microscope with a Zeiss LSM 510 laser scanning confocal attachment (Carl Zeiss). The sections were excited with argon-krypton, argon, and helium-neon lasers using the excitation wavelengths of 488 nm for Cy2, 543 nm for Cy3, and 633 nm for Cy5. Each laser was excited sequentially to avoid cross talk between the three wavelengths.
Determination of appositions For visualization of the entire SCN, images were collected with a 20× objective as 8-μm multitract optical sections. For analysis of contacts among the various peptidergic cell types, images were collected with a 63× objective (in Fig. 1, but not for the data analysis, images were overexposed to enhance visualization of the appositions). SCN neurons were examined through their entirety in 1-μm increments (z-axis), using the LSM 3.95 software (Carl Zeiss). Each neuronal cell body was examined to evaluate whether it bore zero, one to two, or three or more (≥3) appositions, as in prior work (Jacomy et al., 1999; LeSauter et al., 2009). In addition, the total number of appositions made by AVP onto GRP, CALR, and ENK was assessed. There are caveats to the present methods of evaluating contacts: (1) dendrites and axons cannot be differentiated, (2) synapses cannot be evaluated, and (3) the number of appositions may be over- or underestimated due to the ICC procedure or (4) due to fibers cut during sectioning, as previously acknowledged (LeSauter et al., 2009). These caveats do not affect the comparisons germane to the present study. AVP in VIP-KO mice The number of AVP neurons was compared in colchicine-injected VIP-KO mice and WT littermates (N = 6/group). VIP-KO mice were anesthetized by intraperitoneal injection of 60 mg/kg ketamine and 5 mg/kg xylazine, placed into the stereotaxic apparatus (David Kopf Instruments), and prepared for aseptic surgery. A 10-μl Hamilton syringe (Hamilton) was used to inject 2 μl of colchicine (10 μg/μl in 0.9% saline; Sigma) into the lateral ventricle of VIP-KO animals, and they were perfused 72 h later. Stereotaxic coordinates relative to bregma were: flat skull; anteroposterior, +1 mm; mediolateral, +0.7 mm; dorsoventral, -3.0 mm from the top of the skull. Sagittal sections from VIP-KO and WT controls were stained for AVP, and cell counts were done as above with 90.3% interobserver agreement. Determination of colocalization To determine the colocalization of two peptides, neurons were examined in confocal scans and were considered double-labeled when coexpression was seen in at least three consecutive 1-μm scans. Statistical analysis One-way ANOVA followed by Tukey's post hoc test was used to compare the number of appositions made by each peptidergic cell type onto other peptidergic cell types (Fig. 4). Two-way ANOVA followed by Tukey's post hoc test was used to compare the number of AVP cells in the SCN, SON, and PVN of WT and VIP-KO littermate mice (Fig. 6), and to compare the number of AVP appositions onto other cell types in WT and VIP-KO mice (Fig. 7; Table 4); t tests were used for all other comparisons. All analyses were done using SigmaStat 2.03 (RRID:SCR_010285, SPSS Inc.). Appositions between peptidergic cell types in the SCN of WT mice Contacts between peptidergic neuronal cell types were examined in WT mice, with the goal of determining whether appositions were reciprocal or not between specific cell types. The appositions made by each peptidergic neuron type onto the others were scored where suitable antibodies were available. Low-power images of the entire SCN show the distribution of each peptide in neurons and fibers in sagittal sections of the SCN (Fig. 1A,B, top rows). The results indicate that AVP neurons are distributed
throughout the shell and AVP fibers extend throughout the shell and the dorsal core. VIP neurons are located in the ventral core, while GRP is mostly dorsal to the VIP cells. Both VIP and GRP fibers project throughout the SCN. CALR neurons lie primarily in the ventral SCN, and fibers project dorsally throughout the nucleus. ENK neurons are sparsely distributed in the dorsal SCN, and the fibers lie throughout the SCN with greatest density in the dorsal region. Figure 1, rows 2, shows examples of appositions of each immunoreactive fiber type studied onto AVP neurons, while Figure 1, rows 3, shows the reciprocal connections. To quantify the appositions, contacts made by each neuronal type were evaluated using the categories ≥3, 1-2, and 0, as previously (Jacomy et al., 1999; LeSauter et al., 2002, 2009). Examples of zero and of three or more appositions are shown in Figure 2.
[Figure 3: Number of neurons receiving three or more (≥3) appositions. Instances of statistically significant differences between contacts made and received are shown, while Table 1 indicates the statistical analysis for all appositions examined; **p < 0.001, *p = 0.02; #p = 0.06.]
[Figure 4 (see Tables 2, 3): A, In WT mice, there was a preferential unilateral direction of communication between AVP-VIP and GRP-CALR (blue). While the other connections indicated are all reciprocal, there are few AVP-to-VIP contacts and more GRP-to-CALR contacts than the converse (***p < 0.001, **p = 0.008). B, In contrast to WT mice, in VIP-KO mice there was no evidence of a preferential direction of communication between peptidergic cell types.]
The results indicate that all neuronal cell types except one made numerous and reciprocal appositions onto each other. The most impressive exception was the paucity of connections from AVP fibers onto VIP neurons. In contrast, VIP neurons made numerous contacts onto AVP neurons. There were also significantly more appositions from AVP to GRP than conversely, and a marginally greater number of appositions from VIP to CALR than conversely (Fig. 3). Appositions between peptidergic cell types in the SCN of VIP-KO mice Because previous reports indicate that VIP mRNA is reduced at all times of day in VPAC2-KO mice (Harmar et al., 2002), we sought to determine the state of connections between AVP and other peptidergic cell types in our mice. To our surprise, there were no significant differences in reciprocal connections detected between WT and VIP-KO mice. In the VIP-KO mouse, AVP-to-GRP appositions were somewhat more numerous than GRP-to-AVP, but unlike in the WT, this difference was not significant. We next asked whether a particular peptidergic SCN neuron made selective contacts or communicated equally with all other types [Fig. 4A,B; Tables 1 (WT), 2 (VIP-KO)]. This assessment of appositions shows that AVP fibers contact fewer VIP cells than GRP, CALR, or ENK cells in the WT mouse (F(3,29) = 20.2, p < 0.001). Also, there is a significantly greater number of appositions of GRP onto CALR neurons (F(2,23) = 6.2, p = 0.008) compared to AVP neurons, although this statistical difference did not hold for GRP onto VIP. In contrast, VIP, ENK, and CALR cells make a similar number of appositions with the other cell types. Unlike in the WT mouse, there were no significant differences in reciprocal connections between the GRP, AVP, and CALR cell types in the VIP-KO mouse. Comparison of WT and VIP-KO mice in the number of neurons receiving appositions from other neuronal cell types yielded no significant differences between strains (Table 3).
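The one-way ANOVA with Tukey's post hoc test described under Statistical analysis can be sketched as follows. The paper used SigmaStat; this scipy/statsmodels version is an illustration only, and the apposition counts are fabricated placeholders rather than the study's data.

```python
# Sketch of the one-way ANOVA + Tukey post hoc comparison of appositions made
# by AVP onto the other cell types. All counts below are fabricated.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "AVP->VIP":  rng.poisson(1, 8),  # few appositions, as in the WT result
    "AVP->GRP":  rng.poisson(6, 8),
    "AVP->CALR": rng.poisson(6, 8),
    "AVP->ENK":  rng.poisson(5, 8),
}

F, p = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.1f}, p = {p:.3g}")

counts = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(counts, labels))  # pairwise group differences
```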
Effect of VIP-KO on AVP When examining appositions, we noted that, compared to WT mice, AVP expression was reduced in the SCN neurons of VIP-KO mice, with no differences between groups in the size of SCN AVP cells (WT: 84.2 ± 1.6 μm²; VIP-KO: 86.7 ± 2.3 μm², t(133) = 0.88, p = 0.38). Therefore, AVP expression in littermates of the WT and VIP-KO mouse was compared. Examination of coronal and sagittal sections indicates that the expression of AVP in the VIP-KO mouse is severely reduced in the SCN. This can be seen throughout the extent of the nucleus in photomicrographs of sections stained for AVP (Fig. 5). Comparison of the number of AVP neurons in the SCN, SON, and PVN indicates that the reduction in AVP-ir cell number is restricted to the SCN and is not seen in the SON and PVN, nearby AVP-rich regions (ANOVA: WT vs VIP-KO: F(1,33) = 11.3, p = 0.002; brain regions: F(2,33) = 0.46, p = 0.63; interaction: F(2,33) = 2.8, p = 0.08; Fig. 6). AVP appositions onto other cell types, WT versus VIP-KO We were surprised to note that although VIP-KO mice had fewer AVP-ir neurons, the number of GRP, CALR, and ENK neurons receiving three or more AVP appositions did not differ between WT and VIP-KO mice. To reassess this surprising result, we revisited the finding by analyzing the absolute number of appositions made by AVP cells onto ENK, GRP, and CALR cells in WT and VIP-KO littermates (Fig. 7; Table 4). Here again, there were no differences between WT and VIP-KO mice in the number of AVP appositions made on each peptidergic type. Number of AVP neurons in the SCN of colchicine-treated VIP-KO mice The assessment of appositions indicated similar numbers of contacts between AVP and other neurons in the VIP-KO and WT animals, although we saw fewer AVP neurons in the VIP-KO. We sought to assess whether this was the result of fewer AVP neurons making more contacts, or of a similar number of neurons in the two genotypes but a reduction in detectable AVP peptide. To assess whether there is a decrease in the number of neurons producing AVP in the mutant mice or rather a deficit in AVP synthesis, the number of AVP neurons in colchicine-treated VIP-KO mice and WT littermates was compared (N = 6/group; Fig. 8). The results indicate that there was no significant difference in the number of AVP neurons in the SCN of WT and colchicine-treated VIP-KO littermate mice. Colocalization of peptides in WT and VIP-KO We next asked whether VIP-KO and WT mice differed in colocalization of major SCN peptides. In WT mice, there was colocalization of CALR with AVP, VIP, and GRP (Fig. 9). There was no colocalization detected of other peptides (AVP/GRP, AVP/VIP, VIP/GRP, and ENK/AVP, ENK/GRP, ENK/VIP, or ENK/CALR; data not shown). Finally, there were no differences between WT and VIP-KO littermates in colocalization of peptides (Fig. 10; Table 5). Appositions between SCN neurons Although individual neurons are oscillators, it is widely agreed that the network of the SCN is essential for normal circadian timekeeping (Hastings et al., 2014; Pauls et al., 2016; Herzog et al., 2017). This study addressed a caveat: much of the data on which this consensus rests derives from studies of mice, but the evidence on the nature of connections among SCN neurons derives from rats and/or hamsters. The general notion is that in rodents, core neurons communicate with those in the shell, while there is less communication in the reverse direction (Leak et al., 1999).
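The genotype-by-region comparison reported above (AVP cell counts in the SCN, SON, and PVN of WT vs VIP-KO mice) corresponds to a standard two-way ANOVA. Below is a statsmodels sketch with fabricated counts, shown only to make the design explicit; the paper's analysis was run in SigmaStat.

```python
# Sketch of the two-way ANOVA (genotype x region) on AVP cell counts.
# Counts are fabricated; the SCN-only reduction mirrors the reported result.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
rows = []
for genotype, scn_mean in [("WT", 30), ("VIP-KO", 15)]:
    for region, mean in [("SCN", scn_mean), ("SON", 40), ("PVN", 35)]:
        for _ in range(6):  # N = 6/group, as in the study
            rows.append({"genotype": genotype, "region": region,
                         "count": rng.normal(mean, 4)})
df = pd.DataFrame(rows)

model = ols("count ~ C(genotype) * C(region)", data=df).fit()
print(anova_lm(model, typ=2))  # genotype, region, and interaction effects
```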
The present study explores appositions between various core and shell peptidergic cell types in the mouse SCN. The results suggest that while approximately equal, reciprocal appositions occur between some of the neuron types examined, this was strikingly not the case for other, major core-shell connections. Specifically, for AVP to VIP, fewer than 10% of VIP neurons received three or more AVP contacts, while for VIP to AVP, 90% of AVP neurons received three or more VIP contacts. Some differential communication among core neurons was also detected, with GRP making more appositions onto CALR than onto VIP or AVP neurons. We also found that, in contrast to their WT littermates, there was a marked reduction in the number of detectable AVP neurons in VIP-KO mice, although a few intensely labeled neurons were seen in each animal. Surprisingly, these mice had the same numbers of appositions as WT mice. We assessed whether this could be due to sprouting of fibers as a consequence of the reduction in neuron number in these mice. However, when transport of AVP was blocked by administration of colchicine in VIP-KO mice, the numbers of AVP neurons were comparable to WT animals. Such results are consistent with a reduction of AVP synthesis and/or asynchronous AVP rhythms in the VIP-KO mice and with findings that VIP regulates the long-term firing rate of SCN neurons (Kudo et al., 2013). The SCN connectome The results of this study clarify core-to-shell communication in the mouse SCN and reexamine the general hypothesis put forth in previous work that the communication from core to shell is more extensive than the reverse (Daikoku et al., 1992; Leak et al., 1999). The results indicate that this general pattern does not apply to all neuronal subtypes of the core. More specifically, we find that there are far more contacts made by VIP core neurons onto AVP shell neurons than the converse. There was also evidence of specialization in that the core GRP neurons receive more appositions from AVP neurons than the converse. Unexpected was the trend suggesting that essentially all CALR neurons receive appositions from GRP, but there are fewer contacts in the reverse direction. Such results suggest important specializations of the network. The same general pattern of core-to-shell and intracore connections is seen in other species, with differences in the specific peptides involved. In hamster, CalB neurons receive numerous appositions from VIP and GRP fibers (LeSauter et al., 2002). Reciprocal connections are seen between VIP and GRP neurons in hamsters and in rats (Romijn et al., 1997). It is clear from numerous studies that, for AVP-to-VIP connections, AVP fibers are much more densely distributed in the AVP-rich shell area than in the VIP- and GRP-rich core areas, both in rats and in mice (van den Pol, 1986; Daikoku et al., 1992; Abrahamson and Moore, 2001; Moore et al., 2002). In close agreement with the present results, Jacomy et al. (1999) show that around 80% of VIP cells received few AVP appositions. With regard to the present results, we note that this work does not specify whether all appositions reported here originate from cells local to the SCN, as cross talk between the bilateral SCN (Michel et al., 2013) or input from other brain regions may also occur. Temporal variation of peptide expression complicates the task of defining the SCN connectome. Expression levels of some SCN peptides are under circadian regulation, with possible species differences in times of peak
expression (Moore, 2016). AVP receptor expression and AVP, VIP, and GRP content in the SCN are rhythmic (Okamoto et al., 1991; Inouye and Shibata, 1994; Kalsbeek et al., 2010).
[Figure 10: Colocalization of peptides in WT and VIP-KO mice. No differences were detected between genotypes. See Table 5 for statistics.]
VIP regulation of AVP Our observation that AVP protein is reduced in VIP-KO mice is consistent with previous related reports. VPAC2 is found in nearly all SCN cells, including AVP-containing cells (An et al., 2012). AVP mRNA is reduced in VPAC2-KO mice (Harmar et al., 2002), and AVP is induced by VIP or a VPAC2 agonist (Rusnak et al., 2007). We had been surprised to note that the number of AVP appositions on other neurons did not differ between VIP-KO and WT mice. Given the finding that in colchicine-treated VIP-KO mice the number of AVP neurons was similar to their WT littermates, it appears that the VIP-KO mice have a reduction in AVP protein synthesis, but the amount of the peptide produced is sufficient to be detected with the present protocols. VIP-KO alters other genes and proteins It appears that the altered ability of VIP-KO and VPAC2-KO mice to express rhythmic behavioral responses is due in part to disruption of the normal signaling not only of VIP but also of AVP. AVP not only augments the amplitude of rhythms within the SCN but also acts as an output signal to the rest of the brain. There is a circadian rhythm of AVP in the cerebrospinal fluid (Schwartz and Reppert, 1985). AVP, acting through its V1 receptor, is important in augmenting the electrical activity of SCN neurons (Ingram et al., 1998). AVP participates in coordinating oscillations through AVP V1a receptors, which extend widely in the SCN, including both core and shell regions (Li et al., 2009). AVP input to SCN neurons may contribute to its synchronizing effect within the SCN (Ingram et al., 1998; Bittman, 2009; Li et al., 2009), and this is likely part of the mechanism whereby AVP influences locomotor rhythms (Cormier et al., 2015). In summary, we conclude that the well-characterized arrhythmicity of mice lacking VIP or its receptor (Colwell et al., 2003; Brown et al., 2007; Hughes and Piggins, 2008) may be due in part to the disruption in AVP in these animals. Colocalization of peptides in the SCN is the rule and not an exception The present results indicate that VIP and CALR, as well as GRP and CALR, are coexpressed in some but not all SCN neurons. In rat, colocalization of VIP, PHI, and GRP in some but not all neurons has been reported (Okamura and Ibata, 1986; Kawamoto et al., 2003). In hamster, analysis of colocalization of peptides shows that 91% of the substance P cells, 42% of the GRP cells, and 60% of the VIP cells in the core coexpress CalB (LeSauter et al., 2002). The SCN is a dynamic network The work on appositions in the WT mouse indicates that the SCN network is highly specialized. While AVP contacts onto VIP neurons are very sparse, the numbers of contacts onto GRP neurons are somewhat augmented compared to communications in the reverse direction. For other neuronal types, the communications appear to be largely reciprocal. The connectome delineated here indicates that the peptidergic network in VIP-KO animals is similar to that of WT, although the VIP protein is absent. Furthermore, in VIP-KO animals, there is a deficit not only in the VIP protein but also in AVP. The consequences of compromised VIP-ergic signaling in the SCN have been seen in altered behavioral, cellular, and intercellular circadian activity.
The present study shows that AVP synthesis is also compromised in these KO mice. Taken together, this study characterizes the SCN network in the mouse and further highlights the interrelationship between VIP and AVP in maintaining circadian timekeeping.
6,363
2018-09-01T00:00:00.000
[ "Biology" ]
Super Riemann surfaces and fatgraphs Our goal is to describe superconformal structures on super Riemann surfaces (SRS), based on data assigned to a fatgraph. We start from the complex structures on punctured $(1|1)$-supermanifolds, characterizing the corresponding moduli and the deformations using Strebel differentials and certain \v{C}ech cocycles for a specific covering, which we reproduce from fatgraph data, consisting of a $U(1)$ graph connection and odd parameters at the vertices. Then we consider dual $(1|1)$-supermanifolds and related superconformal structures for $N=2$ super Riemann surfaces. The superconformal structures of $N=1$ SRS are computed as the fixed points of an involution on the supermoduli space of $N=2$ SRS. 1. Introduction 1.1. Some history and earlier results. The geometry of moduli spaces of (punctured) Riemann surfaces has been a central topic in modern mathematics for many years. Since the 1980s, string theory has served as a significant source of ideas in studying moduli spaces. For a proper description of string theory, one has to consider certain generalizations of moduli spaces related to the fact that strings, while propagating, should carry extra anticommutative parameters, thus generating what is known as a superconformal manifold, as introduced by M.A. Baranov and A.S. Schwarz [1], or a super Riemann surface (SRS), as independently introduced by D. Friedan [2] (see also [3], [4], [5], [6], [7], and [8] for a review). It turned out that the geometry of such spaces is quite involved, e.g., [9]. An important task is, of course, related to the parametrization of such supermoduli. There are several ways of looking at the parametrization problem. For example, one could deal with supermoduli spaces of punctured Riemann surfaces with negative Euler characteristic from the point of view of higher Teichmüller theory, as a subset in the character variety for the corresponding supergroup. In the case of the original moduli spaces, using methods of hyperbolic geometry, R. Penner described coordinates on the universal cover of moduli space, the Teichmüller space, as a subspace of the character variety of $PSL(2,\mathbb{R})$, so that the corresponding Riemann surfaces appear here, from the uniformization point of view, as quotients of the upper half-plane by the Fuchsian subgroups given by elements of the related character variety [10]. The action of the mapping class group in these coordinates is rational. It could be described combinatorially using decorated triangulations or dual objects, known as metric fatgraphs or ribbon graphs, for the corresponding Riemann surfaces. The coordinates thus constructed were generalized to the case of reductive groups [11]. The supergroup case yet remained a mystery until recently. In [12], [13], [14], such coordinates were constructed in the framework of the higher Teichmüller spaces associated to the supergroups OSp(1|2) and OSp(2|2), which correspond to the Teichmüller spaces of N = 1 and N = 2 SRS. The desired N = 1 and N = 2 SRS could be reconstructed from the elements of the character variety via the appropriately modified uniformization approach [6], [15].
There is a different, more "hands-on" approach to the moduli spaces of punctured Riemann surfaces, where one can see directly the transition functions for the corresponding complex structures, which we discuss in more detail below. One can start from the parametrization of moduli spaces by the so-called Strebel differentials, which again can be described using metric fatgraphs [21]. That approach made it possible (see [22]) to "glue" the Riemann surface explicitly by constructing transition functions. In this paper, we want to generalize this construction to the case of super Riemann surfaces. We start by describing the moduli space of (1|1)-supermanifolds. This result also describes the moduli space of N = 2 super Riemann surfaces. Finally, we study the moduli space of N = 1 super Riemann surfaces using the fact that this space can be obtained as the set of fixed points of the involution of the space of (1|1)-supermanifolds constructed in [7]. We would also like to mention recent progress in studying supermoduli spaces from various perspectives. While our approach deals with a real parametrization of supermoduli, in parallel to work on super-Teichmüller theory [12], [13], [14], a lot of exciting features of supermoduli are related to the algebro-geometric description. The main result of Donagi and Witten [16], that supermoduli space is not projected, led to a renewed interest in the subject in the modern era and in supergeometry in general. One can mention recent works of Felder, Kazhdan, and Polishchuk dealing with the Schottky approach for supermoduli [17], as well as the general treatment of supermoduli spaces as Deligne-Mumford stacks [18]. Some other recent results, which use both real and complex geometry points of view, are related to enumerative invariants connected with supermoduli [19], [20]. 1.2. The structure of the paper and main results. In Section 2 we review basic notions related to (1|1)-supermanifolds and N = 1 and N = 2 super Riemann surfaces (SRS). We devote special attention to the punctured N = 1 SRS with two puncture classes corresponding to the various spin structure choices: Ramond (R) and Neveu-Schwarz (NS). In Section 3, we define two instrumental objects which come from geometric topology. The first object is a fatgraph (or ribbon graph). This graph is homotopy equivalent to the punctured surface, with a cyclic ordering of half-edges at every vertex, which comes from the orientation of the surface, so that each puncture is associated with a particular cycle on the graph. The second object is a spin structure on the fatgraph, making it a spin fatgraph. We describe spin structures as classes of orientations on fatgraphs, based on the works [12], [13], [14], where N = 1, N = 2 SRS were studied from a uniformization perspective. This construction allows distinguishing boundary components of such a spin fatgraph, separating them into two sets based on comparing their orientation with the orientation induced by the surface. Those two sets correspond to NS and R punctures in the uniformization picture.
Section 4 is devoted to an important construction allowing us to relate the data assigned to fatgraphs to the theory of moduli of Riemann surfaces, following [22], [21]. Namely, we explicitly describe the moduli spaces of surfaces $F_c$ with marked points, using a special covering $\{U_v, V_p\}$, with one neighborhood $U_v$ for every vertex $v$ and $V_p$ for every puncture $p$. The set $\{U_v\}$ covers the punctured surface and has only double overlaps; $V_p$ overlaps with $U_v$ for all the vertices surrounding the puncture $p$. To construct the corresponding transition functions $w' = f_{v',v}(w)$, $y = f_{p,v}(w)$ on the overlaps, we consider the fatgraph with one positive number per edge, producing a metric fatgraph. Then we attach to each edge an infinite strip whose width is the positive parameter assigned to that edge; the transition functions arise from gluing these strips into the neighborhoods $U_v$. The key idea of this description, which is due to Kontsevich [21] and was further elaborated by Mulase and Penkava [22], lies within the theory of Strebel differentials. These are holomorphic quadratic differentials on a punctured surface satisfying certain extra conditions. For every Strebel differential one can reconstruct the metric fatgraph and the corresponding complex structure, so that its zeroes define the vertices of the fatgraph, with the order of each zero determining the valence of the corresponding vertex, while the punctures correspond to its double poles. All this can be summarized in the fact that Strebel differentials parametrize the trivial $\mathbb{R}^s_+$-bundle over the moduli space of Riemann surfaces with $s$ punctures.

In Section 5, we use this fatgraph description to characterize the moduli space of $(1|1)$-supermanifolds with punctures; we use the term "puncture" for marked points or for $(0|1)$-divisors assigned to marked points on the underlying Riemann surface. At first, we consider the split $(1|1)$-supermanifolds, which can be viewed as Riemann surfaces with a line bundle $L$ over them. The corresponding moduli space can then be described by flat $U(1)$ connections on the corresponding metric fatgraphs with zero monodromies around the punctures, accompanied by a fixed divisor at the punctures, one for every degree.

Next, we describe the deformations of this construction by expressing the odd part of the tangent space to the corresponding moduli space through Čech cocycles on the Riemann surface $F$. These cocycles lead to infinitesimal deformations of the transition functions, which can be continued beyond the infinitesimal level.

Parametrizing such Čech cocycles is a nontrivial problem, which, however, can be solved in the case when $\deg(L) = 1 - g - n - r/2$, where $n$ is the number of point punctures, $r$ is the (even) number of $(0|1)$-divisor punctures, and $g$ is the genus. In this case, the corresponding cocycles can be characterized by ordered sets of complex odd parameters at every vertex, where the number of parameters in each set depends on the valence of the vertex. This gives roughly twice as many parameters as needed, so there are equivalences between the complex structures constructed in this way. We characterize those equivalences explicitly using sections of the appropriate line bundles.

Thus the fatgraph description of the split case, together with the parametrization of cocycles, immediately leads to a complete parametrization of the complex structures of $(1|1)$-supermanifolds of such degree.
We note that on the level of uniformization, this is an important subclass of the supermanifolds obtained in [13], corresponding to flat connections with zero monodromies around punctures.

In Section 6, we use the results of Dolgikh, Rosly, and Schwarz [7], who explicitly described the equivalence between $N=2$ super Riemann surfaces and $(1|1)$-supermanifolds, expressing the transition functions of $N=2$ SRS through the transition functions for $(1|1)$-supermanifolds obtained in Section 5.

In Section 7, we first discuss the involution on the moduli space of $N=2$ SRS whose fixed points are $N=1$ SRS. We then describe the split case, characterizing the various choices of the corresponding line bundle using spin structures on the fatgraph, thus viewing the corresponding supermoduli space with a given assignment of R and NS punctures as a $2^{2g}$-fold covering space over the moduli space of punctured Riemann surfaces. We then apply the involution to the deformations, first on the infinitesimal level and then beyond it, using the superconformal condition. This eventually leads to our main Theorem 7.2, which describes deformations of $N=1$ SRS.

2. $(1|1)$-supermanifolds, $N=1$ and $N=2$ Super-Riemann Surfaces and Superconformal Transformations

2.1. Super Riemann surfaces and superconformal transformations. We recall that a complex supermanifold of dimension $(1|1)$ (see, e.g., [23]) over some Grassmann algebra $S$ is a pair $(X, \mathcal{O}_X)$, where $X$ is a topological space and $\mathcal{O}_X$ is a sheaf of supercommutative $S$-algebras over $X$, such that $(X, \mathcal{O}^{red}_X)$ can be identified with a Riemann surface (where $\mathcal{O}^{red}_X$ is obtained from $\mathcal{O}_X$ by quotienting out nilpotents) and for some open sets $U_\alpha \subset X$ and some linearly independent odd elements $\{\theta_\alpha\}$ we have an identification of $\mathcal{O}_X(U_\alpha)$ with $\mathcal{O}^{red}_X(U_\alpha)[\theta_\alpha]$. We will also refer to $(X, \mathcal{O}^{red}_X)$ as the base manifold. These open sets $U_\alpha$ serve as coordinate neighborhoods for the supermanifold, with coordinates $(z_\alpha, \theta_\alpha)$. The coordinate transformations on the overlaps $U_\alpha \cap U_\beta$ are given by the following formulas: $z_\alpha = f_{\alpha\beta}(z_\beta, \theta_\beta)$, $\theta_\alpha = \psi_{\alpha\beta}(z_\beta, \theta_\beta)$, where $f_{\alpha\beta}$, $\psi_{\alpha\beta}$ are even and odd functions respectively. A super Riemann surface (SRS) $\Sigma$ [6], [8] over some Grassmann algebra $S$ is a complex supermanifold of dimension $(1|1)$ over $S$ with one extra structure: there is an odd subbundle $\mathcal{D}$ of $T\Sigma$ of dimension $(0|1)$, such that for any nonzero section $D$ of $\mathcal{D}$ on an open subset $U$ of $\Sigma$, $D^2$ is nowhere proportional to $D$, i.e.
we have the exact sequence
$$0 \longrightarrow \mathcal{D} \longrightarrow T\Sigma \longrightarrow \mathcal{D}^2 \longrightarrow 0.$$
One can pick holomorphic local coordinates in such a way that this odd vector field takes the form $f(z,\theta)D_\theta$, where $f(z,\theta)$ is a nonvanishing function and
$$D_\theta = \partial_\theta + \theta\,\partial_z, \qquad D_\theta^2 = \partial_z.$$
Such coordinates are called superconformal. The transformation between two superconformal coordinate systems $(z,\theta)$, $(z',\theta')$ is determined by the condition that $\mathcal{D}$ should be preserved, namely
$$D_\theta = (D_\theta\theta')\,D_{\theta'}.$$
Locally one obtains
$$z' = f(z) + \theta\,\psi(z)g(z), \qquad \theta' = \psi(z) + \theta\,g(z),$$
so that the constraint on the transformation emerging from the local change of coordinates is
$$g^2 = f' + \psi\psi'.$$

2.2. $N=2$ super Riemann surfaces. An $N=2$ super Riemann surface ($N=2$ SRS) is a generalization of a super Riemann surface: it is a supermanifold of dimension $(1|2)$ with extra structure. Its tangent bundle has two subbundles $\mathcal{D}_+$ and $\mathcal{D}_-$, each of which is integrable, meaning that if $D_\pm$ are nonvanishing sections of $\mathcal{D}_\pm$, we have
$$D_+^2 = a\,D_+, \qquad D_-^2 = b\,D_-$$
for some functions $a$ and $b$. At the same time, the direct sum $\mathcal{D}_+ \oplus \mathcal{D}_-$ is nonintegrable, so that $[D_+, D_-]$ completes $D_+$, $D_-$ to a basis of the tangent bundle. Namely, for an $N=2$ super Riemann surface $\Sigma$ one has the following exact sequence:
$$0 \longrightarrow \mathcal{D}_+ \oplus \mathcal{D}_- \longrightarrow T\Sigma \longrightarrow T\Sigma/(\mathcal{D}_+\oplus\mathcal{D}_-) \longrightarrow 0.$$
As in the case of super Riemann surfaces, one can show that there exist superconformal coordinates $(z, \theta^+, \theta^-)$ in which locally $\mathcal{D}_+$ and $\mathcal{D}_-$ are generated by
$$D_\pm = \partial_{\theta^\pm} + \theta^\mp\,\partial_z.$$
It turns out that there is an equivalence between $(1|1)$-supermanifolds and $N=2$ SRS, as was established by Dolgikh, Rosly, and Schwarz [7]. One can notice that there is an involution $\theta^+ \leftrightarrow \theta^-$. The complex $(1|1)$-supermanifold constructed from the $N=2$ SRS after the involution is of course generally a different one, and it is called dual. In fact, such a dual supermanifold turns out to be the supermanifold of $(0|1)$-divisors of the original one. The self-dual $(1|1)$-supermanifolds are of course $N=1$ super Riemann surfaces. We will discuss these questions in more detail later in the text.

2.3. Punctures: Ramond and Neveu-Schwarz. Let us now discuss the types of punctures on an $N=1$ super Riemann surface. The NS puncture is a natural generalization of the puncture of an ordinary Riemann surface, and can be taken to be any point $(z_0, \theta_0)$ on the super Riemann surface. Locally one can associate to it a $(0|1)$-dimensional divisor of the form $z = z_0 - \theta_0\theta$, which is the orbit of the point with respect to the action of the group generated by $D$, and this divisor uniquely determines the point $(z_0, \theta_0)$ due to the superconformal structure.
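As a quick consistency check (our own computation, in the spirit of [8], not taken verbatim from the paper), the NS divisor is literally the orbit of the point under the flow of $D_\theta$: since $\varepsilon^2 = 0$ for an odd parameter $\varepsilon$,
$$e^{\varepsilon D_\theta}\colon\ (z,\theta) \longmapsto (z + \varepsilon\theta,\ \theta + \varepsilon),$$
and starting from $(z_0,\theta_0)$, eliminating $\varepsilon = \theta - \theta_0$ yields
$$z = z_0 + (\theta - \theta_0)\,\theta_0 = z_0 - \theta_0\theta,$$
which is exactly the divisor above.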
Let us consider the case when the puncture is locally at $(0,0)$. In its neighborhood let us pick a coordinate transformation
$$z = e^w, \qquad \theta = e^{w/2}\eta, \qquad (2.8)$$
such that the neighborhood (without the puncture) is mapped to a supertube with $w$ sitting on a cylinder $w \sim w + 2\pi i$, and $D_\theta$ becomes
$$D_\theta = e^{-w/2}\left(\partial_\eta + \eta\,\partial_w\right). \qquad (2.9)$$
Hence $(w,\eta)$ are superconformal coordinates, and we have the full equivalence relation given by
$$(w, \eta) \sim (w + 2\pi i,\ -\eta). \qquad (2.10)$$
The case of a Ramond puncture is a whole different story. On the level of super Riemann surfaces, the associated divisor is determined as follows. In this case, the condition that $D^2$ is linearly independent of $D$ is violated along some $(0|1)$-divisor. Namely, in some local coordinates $(z,\theta)$ near the Ramond puncture placed at $(0,0)$, $\mathcal{D}$ has a section of the form
$$D^*_\theta = \partial_\theta + \theta z\,\partial_z.$$
We see that its square $(D^*_\theta)^2 = z\,\partial_z$ vanishes along the Ramond divisor $z = 0$. One can map the neighborhood patch to the supertube using a different coordinate transformation
$$z = e^w, \qquad \theta = \eta; \qquad (2.11)$$
these coordinates on the supertube will be superconformal, since
$$D^*_\theta = \partial_\eta + \eta\,\partial_w. \qquad (2.12)$$
Notice that the identifications we have to impose on $(w,\eta)$ now become
$$(w,\eta) \sim (w + 2\pi i,\ \eta). \qquad (2.13)$$
To describe Ramond punctures globally, consider the subbundle $\mathcal{D}$ generated by such operators $D^*_\eta$ for the Ramond punctures $p_1, p_2, \ldots, p_{n_R}$. We have the exact sequence
$$0 \longrightarrow \mathcal{D} \longrightarrow T\Sigma \longrightarrow \mathcal{D}^2 \otimes \mathcal{O}(\mathcal{P}) \longrightarrow 0, \qquad (2.14)$$
where $\mathcal{P} = \sum_{i=1}^{n_R}\mathcal{P}_i$ is the divisor along which $D^2 = 0 \bmod \mathcal{D}$. In the split case $T\Sigma|_X = TX \oplus G$, dividing the tangent space $T\Sigma$ into even and odd parts, which one can identify with $\mathcal{D}^2 \otimes \mathcal{O}(\mathcal{P})$ and $\mathcal{D}$ respectively. Also, notice that reducing this to the base manifold produces a degree constraint on the corresponding line bundle, leading to the fact that there should be an even number of such punctures, known as Ramond or simply R punctures. We refer to Section 4.2.2 of [8] for more details.

3. Fatgraphs and spin structures

From now on we will consider Riemann surfaces of genus $g$ with $s$ punctures ($s > 0$) and negative Euler characteristic, which we will denote as $F^s_g$ or simply $F$. The corresponding closed version will be denoted as $F_c$.

Consider the fatgraph $\tau$ corresponding to an $s$-punctured surface $F$. This is a graph homotopically equivalent to $F$, with cyclic orderings on the half-edges at every vertex [10] induced by the orientation of the surface.

Let $\tau_0$, $\tau_1$ denote the sets of vertices and edges of $\tau$ respectively. Let $\omega$ be an orientation of the edges $\tau_1$ of $\tau$. As in [12], we define a fatgraph reflection at a vertex $v$ of $(\tau, \omega)$ to reverse the orientations of $\omega$ on every edge of $\tau$ incident to $v$.

Definition 3.1. We define $O(\tau)$ to be the set of equivalence classes of orientations on a trivalent fatgraph spine $\tau$ of $F$, where the equivalence relation is given by $\omega_1 \sim \omega_2$ iff $\omega_1$ and $\omega_2$ differ by a finite number of fatgraph reflections. It is an affine $H_1$-space, where the group $H_1 := H_1(F; \mathbb{Z}_2)$ acts on $O(\tau)$ by changing the orientation of the edges along cycles.

In [12, 13, 24], various realizations of the spin structures on the surface $F$, characterized by a trivalent fatgraph $\tau$, are described. Those results can easily be generalized to a fatgraph with vertices of any valence.

In fact, following [25], a spin structure can be characterized by a quadratic form $q: H_1(F; \mathbb{Z}_2) \longrightarrow \mathbb{Z}_2$, so that for any cycles $a$, $b$ one has $q(a+b) = q(a) + q(b) + a \bullet b$, where $a \bullet b$ denotes the intersection form.

The space of all orientation classes is an affine $H_1$-space. Indeed, fix a fatgraph $\tau$ and denote by $o_\omega(e)$ the orientation of the edge $e \in \tau_1$ in the orientation $\omega$. Given two orientations $\omega_1$, $\omega_2$, the set of edges on which $o_{\omega_1}$ and $o_{\omega_2}$ disagree defines an element in $H_1(F; \mathbb{Z}_2)$.
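To make the orientation-class picture concrete, here is a small illustrative sketch (our own code, with a hypothetical theta-graph example, not taken from [12]-[14]). It uses the boundary parity rule stated after Theorem 3.3 below: a boundary cycle is R or NS according to the parity of the number of edges traversed against the chosen orientation, and fatgraph reflections do not change this parity.

    # Classify boundary cycles of an oriented fatgraph as R or NS by the rule
    # q([gamma]) = (-1)^k, with k the number of edges of the cycle traversed
    # against the chosen orientation omega.

    def boundary_type(cycle, omega):
        """cycle: list of (edge, direction) pairs, direction = +1 when the edge
        is traversed along its reference direction; omega: dict edge -> +1/-1."""
        k = sum(1 for e, d in cycle if d != omega[e])
        return "R" if k % 2 == 0 else "NS"

    def reflect(omega, edges_at_vertex):
        """Fatgraph reflection at a vertex: reverse omega on all incident edges."""
        return {e: (-s if e in edges_at_vertex else s) for e, s in omega.items()}

    # Theta-graph: two trivalent vertices u, v; edges a, b, c all pointing u -> v.
    omega = {"a": +1, "b": +1, "c": +1}
    cycle_ab = [("a", +1), ("b", -1)]  # boundary cycle: a forward, b backward
    print(boundary_type(cycle_ab, omega))                            # NS (k = 1)
    print(boundary_type(cycle_ab, reflect(omega, {"a", "b", "c"})))  # NS again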
Proposition 3.2 ([12, 13]). The set of spin structures is isomorphic to the space of quadratic forms $Q(F)$ on $H_1(F; \mathbb{Z}_2)$, and is also isomorphic to $O(\tau)$, as affine $H_1$-spaces.

This leads to the following important consequence.

Theorem 3.3 ([13]). Given an oriented simple cycle $\phi \in \pi_1(F)$ homotopic to a path $\gamma$ on the fatgraph $\tau$ with orientation class $[\omega]$, the corresponding quadratic form is expressed through $L_\gamma$ (resp. $R_\gamma$), the number of left (resp. right) turns of $\gamma$ on the fatgraph $\tau$, and the numbers of edges of $\tau$ on which $\gamma$ and $\omega$ have the same (resp. opposite) orientation.

If we consider the paths corresponding to the boundary cycles on the fatgraph, then $q([\gamma]) = (-1)^k$, where $k$ is the number of edges with orientation opposite to the canonical orientation of $\gamma$. One can identify these cycles with R and NS punctures in the uniformization picture [14], so that $k$ is even for R punctures and odd for NS punctures.

In fact, there is another way of thinking about spin structures, using graph connections [26], [10], [27].

Definition 3.4 ([10]). Let $G$ be a group. A $G$-graph connection on $\tau$ is an assignment $g_e \in G$ to each oriented edge $e$ of $\tau$, so that $g_{\bar{e}} = g_e^{-1}$ if $\bar{e}$ is $e$ with the opposite orientation. Two assignments $\{g_e\}$, $\{g'_e\}$ are equivalent iff there are $t_v \in G$ for each vertex $v$ of $\tau$ such that $g'_e = t_v g_e t_w^{-1}$ for each oriented edge $e \in \tau_1$ with initial point $v$ and terminal point $w$.

Therefore, we obtain the following description of spin structures.

Corollary 3.5 ([13]). The space of spin structures on $F$ is identified with the space of $\mathbb{Z}_2$-graph connections on a given fatgraph $\tau$ of $F$.

We will refer to a fatgraph with an associated spin structure/$\mathbb{Z}_2$-graph connection as a spin fatgraph.

4. Complex structures and Strebel differentials

4.1. Gluing of Riemann surfaces. Consider the fatgraph $\tau$ corresponding to an $s$-punctured surface $F$ of genus $g$. Let us assign a positive real parameter $L_j$ to every edge $j$. We will refer to the resulting object as a metric fatgraph.

It is known from the works of Penner (the so-called convex hull construction) that fatgraphs with the valence of every vertex greater than or equal to 3, or dually, ideal cell decompositions of Riemann surfaces, describe the mapping class group-invariant cell decomposition of the decorated Teichmüller space (see, e.g., [10], [28]), a universal cover of $\mathbb{R}^s_+ \times M_{g,s}$, where $M_{g,s}$ is the moduli space of Riemann surfaces of genus $g$ with $s$ marked points. The trivalent fatgraphs correspond to the top-dimensional cells, of dimension $6g - 6 + 3s$.

An important problem is how to reproduce inequivalent complex structures based on the data of metric fatgraphs. An important work of Mulase and Penkava [22], based on earlier ideas of Kontsevich [21], allows one to construct the appropriate covering of a Riemann surface and the transition functions associated with a given fatgraph, thus exhausting all possible complex structures. Let us have a look at those in detail.
Fixing an orientation on $\tau$, we consider a neighborhood $U_v$ with coordinate $w$ corresponding to a fixed $m$-valent vertex $v$, so that the vertex is placed at the point $w = 0$. One can describe this neighborhood by gluing together the strips attached to the incident edges via the formula
$$w = c_j\, z_j^{2/m},$$
if all $m$ edges are pointing out from the vertex $v$, where $z_j$ is the strip coordinate on the $j$-th edge; in the case when one or more of them points towards the vertex $v$, we substitute $z_j \mapsto L_j - z_j$ in the above formula for the corresponding edges. One can construct such coordinate patches around every vertex. The overlaps $U_v \cap U_{v'}$ are described by the corresponding strips associated to the edge $j$ of the fatgraph running between $v$ and $v'$. Note that there are no triple intersections among the $U_v$ on such a punctured surface, and that the vertices of the fatgraph belong to the boundary of the intersections.

Let us look at the transition functions on the overlap between two such coordinate neighborhoods $U_v$, $U_{v'}$ around neighboring vertices $v$ and $v'$, assuming the edge is pointing from $v$ to $v'$. We note that both coordinates $w$ and $w'$ are expressed in terms of $z_j$ in the following way:
$$w = c_j\, z_j^{2/m}, \qquad w' = c'_j\,(L_j - z_j)^{2/m'},$$
where $c_j$, $c'_j$ are $m$th and $m'$th roots of unity. The resulting overlap coordinate transformation $f_{v'v}$ between the patches is given by the following formula:
$$w' = f_{v'v}(w) = \left(L_j - w^{m/2}\right)^{2/m'},$$
where $-i\pi/2 < \arg(w^{m/2}) < i\pi/2$. That completely describes the transition functions between charts for the punctured Riemann surface $F$.

Note that if the consecutive edges $L_1, L_2, \ldots, L_n$ correspond to the boundary piece of the fatgraph associated with a puncture $p$, one can glue the following coordinate neighborhood $V_p$ with coordinate $y$ covering the puncture:
$$y = e^{x}, \qquad x = \frac{2\pi i}{a_p}\,z, \qquad a_p = L_1 + \cdots + L_n,$$
so that one glues together the top or bottom parts of the strips based on the orientation, and the $x$-variable lives on a cylinder $x \sim x + 2\pi i$. Suppose the $k$-th strip is glued to the vertex $v_k$ with coordinate $w_k$ as above; then the transition function $f_{pv_k}$ is given by
$$y = f_{p v_k}(w_k) = \exp\left(\frac{2\pi i}{a_p}\left(L_1 + \cdots + L_{k-1} + w_k^{m_k/2}\right)\right). \qquad (4.7)$$
In the following we will adopt the following notation for the $L$-parameters: if the edge connects two vertices $v$, $v'$, we will denote the corresponding parameter as $L_{v,v'}$.

4.2. Strebel differentials. An important object in the constructions of [21], [22] are Strebel differentials, quadratic meromorphic differentials with special properties. A nonzero quadratic differential is a holomorphic section $\mu$ of $K^{\otimes 2}$, where $K$ stands for the canonical bundle on $F$. It defines a flat metric on the complement of the discrete set of its zeroes, written in local coordinates as $|\mu(z)|\,dz\,d\bar{z}$, where $\mu = \mu(z)\,dz^2$.

A horizontal trajectory of a quadratic differential is a curve along which $\mu(z)\,dz^2$ is real and positive. A Strebel differential is one for which the union of non-closed trajectories has measure zero. The non-closed trajectories of a given Strebel differential decompose the surface into maximal ring domains swept out by closed trajectories. These ring domains can be annuli or punctured disks. All trajectories from any fixed maximal ring domain have the same length, the circumference of the domain.

The following Theorem is due to Strebel.

Theorem 4.1 ([29]). For any connected closed Riemann surface $F_c$ with $s$ distinct points $p_1, \ldots, p_s$, $s > 0$, and genus $g$, such that $s > \chi(F_c) = 2 - 2g$, and $s$ positive real numbers $a_1, \ldots, a_s$, there exists a unique Strebel differential on $F = F_c \setminus \{p_1, \ldots, p_s\}$ whose maximal ring domains are $s$ punctured disks surrounding the $p_i$ with circumferences $a_i$.

The union of the non-closed trajectories of a Strebel differential together with its zeroes defines a graph embedded into the Riemann surface, thus giving it a fatgraph structure. Every vertex of the fatgraph corresponds to a zero of the Strebel differential of degree $m - 2$, where $m \geq 3$ is the valence of the vertex. The length of each edge gives the graph a metric structure.

For every such Strebel differential one can construct the covering associated to the corresponding fatgraph, described above, and vice versa, so that in the charts $U_v$ the Strebel differential $\mu$ has the following explicit form:
$$\mu|_{U_v} = w^{m_v - 2}\,dw^2.$$
It also has a pole of order 2 at the punctures, so that in the $y$-coordinate of each neighborhood $V_p$ the differential looks as follows:
$$\mu|_{V_p} = -\frac{a_p^2}{4\pi^2}\,\frac{dy^2}{y^2}.$$
One can then formulate the following Theorem.

Theorem 4.2 ([21]). Let $M^{comb}_{g,s}$ denote the set of equivalence classes of connected ribbon graphs with metric and with valence of each vertex greater than or equal to 3, such that the corresponding noncompact surface has genus $g$ and $s$ punctures numbered by $1, \ldots, s$. The map $M_{g,s} \times \mathbb{R}^s_+ \longrightarrow M^{comb}_{g,s}$ which associates to the surface $F_c$ and the numbers $a_1, \ldots, a_s$ the critical graph of the canonical Strebel differential from Theorem 4.1 is one-to-one.

In this paper we do not need more properties of Strebel differentials; we refer the reader to [22], as well as to the original source [29], for more information.
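As a short consistency check (our own computation, using the chart formulas above): in the puncture chart, writing $y = \exp(2\pi i z/a_p)$ for the flat strip coordinate $z$, the strip differential $\mu = dz^2$ indeed acquires the stated double pole:
$$\frac{dy}{y} = \frac{2\pi i}{a_p}\,dz \quad\Longrightarrow\quad \mu = dz^2 = -\frac{a_p^2}{4\pi^2}\,\frac{dy^2}{y^2},$$
so the coefficient of the double pole records the circumference $a_p$.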
5. Complex structures on $(1|1)$-supermanifolds

5.1. Split case. Let us consider the punctured Riemann surface glued as in the previous section, using a metric fatgraph and the overlapping neighborhoods $U_v$ corresponding to vertices. To construct the coordinate transformations for a split $(1|1)$-supermanifold $SF$ with such a base complex manifold, one has to consider a line bundle $L$ over $F_c$. Then the coordinate transformations for the coordinates $(w', \xi')$, $(w, \xi)$, $(y, \eta)$ corresponding to the neighborhoods $U_{v'}$, $U_v$, $V_p$ of vertices $v'$, $v$ and a puncture $p$ are given by the following formulas:
$$w' = f_{v',v}(w), \quad \xi' = g_{v',v}(w)\,\xi; \qquad y = f_{p,v}(w), \quad \eta = g_{p,v}(w)\,\xi, \qquad (5.1)$$
where $g_{v',v}$, $g_{p,v}$ are holomorphic functions serving as transition functions of the bundle $L$. The collection $\{g_{v',v}, g_{p,v}\}$ generates a Čech cocycle representing a class in the Picard group of $F_c$, i.e. in $\check{H}^1(F_c, \mathcal{O}^*)$, if the usual cocycle condition
$$g_{p,v_{k+1}}\,g_{v_{k+1},v_k} = g_{p,v_k} \qquad (5.3)$$
is imposed on $g_{v',v}$ and $\{g_{p,v}\}$ on the triple overlaps $V_p \cap U_{v_k} \cap U_{v_{k+1}}$ around the given puncture $p$. Then the following Proposition holds.

Proposition 5.1. When $L$ has degree 0 over $F_c$, the fatgraph data describing it is a $U(1)$-graph connection with trivial monodromy around every boundary piece.

Proof. Notice that one can choose the $g_{v'v}$ to be constant functions with values on the unit circle, which on the level of the fatgraph is described by a $U(1)$-graph connection, so that $g_{v,v'} = e^{ih_{v,v'}}$, where $h_{v,v'} \in \mathbb{R}$. Indeed, the corresponding holomorphic equivalences for the corresponding Čech cocycle reduce to constant $U(1)$ gauge transformations at the vertices. However, according to the condition (5.3) that we imposed, we must have $g_{v_1,v_2}\,g_{v_2,v_3}\cdots g_{v_{n-1},v_n}\,g_{v_n,v_1} = 1$, which is exactly the trivial monodromy condition.

In order to describe a line bundle of any degree $d$, one has to do the following. First, choose a fixed divisor of degree $d$, say a linear combination of puncture points. Then, multiplying it by an appropriate bundle of degree 0, one can reproduce the original bundle. Since we described the moduli spaces of degree 0 bundles in Proposition 5.1 above, we can now characterize split punctured supermanifolds by the following data:

• A metric fatgraph with a $U(1)$-graph connection with trivial monodromy around the boundary pieces.

• A fixed divisor $M$ of degree $d$, which is a linear combination of puncture points.

The data above determine the complex split $(1|1)$-supermanifold corresponding to the line bundle of degree $d$ on $F$. For a fixed divisor $M$, metric fatgraphs with $U(1)$ connections describe the moduli space of split $(1|1)$-supermanifolds.

5.2. Infinitesimal deformations and various types of punctures. As usual, the infinitesimal deformations of the above formulas leading to a generic non-split structure are described by $H^1(SF_c, ST)$, where $ST$ is the tangent bundle of $SF_c$, and $SF_c$ is the split $(1|1)$-supermanifold which we discussed in the previous subsection. Since we are deforming the split case, one can describe the infinitesimal deformations $\rho \in H^1(SF_c, ST)$ using Čech cocycles, i.e., in the coordinates $(w, \xi)$,
$$\rho_{v',v} = u_{v',v}(w)\,\partial_w + v_{v',v}(w)\,\xi\,\partial_\xi + \alpha_{v',v}(w)\,\xi\,\partial_w + \beta_{v',v}(w)\,\partial_\xi, \qquad (5.4)$$
where the indices $v,v'$ and $p,v$ here mean that the corresponding elements are Čech cocycles considered on the intersections $U_v \cap U_{v'}$ and $V_p \cap U_v$. Now we need to specify the behavior at the punctures to describe the cocycles $\rho$ leading to deformations of $SF$ in terms of cocycles on $F_c$.

There are two types of punctures we want to consider:

• a puncture as a $(0|1)$-dimensional divisor on $SF_c$; we denote the number of such punctures by $r$;

• a puncture as a $(0|0)$-dimensional divisor, or in other words just a point on $SF_c$; we denote the number of such punctures by $n$.
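A minimal sketch (our own, on a hypothetical theta-graph, not from the paper) of the trivial-monodromy condition of Proposition 5.1 for a constant $U(1)$-graph connection:

    import cmath

    def puncture_monodromy(cycle, h):
        """Monodromy of a constant U(1) graph connection around a boundary cycle.
        cycle: list of (edge, direction) pairs as traversed; h: dict assigning to
        each edge the real phase h_{v,v'} of g_{v,v'} = exp(i*h_{v,v'}) in its
        reference direction."""
        return cmath.exp(1j * sum(d * h[e] for e, d in cycle))

    # Theta-graph boundary cycle traversing edge a forward and edge b backward:
    h = {"a": 0.7, "b": 0.7, "c": -1.4}
    m = puncture_monodromy([("a", +1), ("b", -1)], h)
    print(abs(m - 1) < 1e-12)  # True: trivial monodromy, as required for deg L = 0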
Let $T$ be the tangent bundle of $F_c$, let $D_{n+r}$ be the divisor corresponding to the sum of all points on $F_c$ corresponding to punctures, and let $D_n$ be the sum of the points corresponding to point-punctures on $SF$. Let us look in detail at the components of (5.4): the even components $u$, $v$ and the odd components $\alpha$, $\beta$ are elements of $\check{Z}^1$, where $\check{Z}^1$ is the notation for Čech cocycles of degree 1, valued in the appropriate line bundles on $F_c$. Note that we need to impose constraints on the cocycles $s$ at the punctures, where $s = u, v, \alpha, \beta$. Here the $u$- and $v$-terms correspond to the deformations of the original manifold $F$, and notice that we have already incorporated the moduli for the base manifold $F$ and the line bundle $L$ in the formulas (5.1). The odd deformations, provided by the cocycles $\alpha$, $\beta$, give the following deformations for the upper line of (5.1):
$$w' = f_{v',v}(w) + \alpha_{v',v}(w)\,\xi, \qquad \xi' = g_{v',v}(w)\,\xi + \beta_{v',v}(w),$$
which describes (in the first order in complex odd parameters) all possible complex structures on the punctured supermanifold.

If we remove the infinitesimality condition, the formulas above get deformed. Let us formulate this in a precise form.

Theorem 5.3. (1) Consider the following data:

• A metric fatgraph with a $U(1)$-connection with trivial monodromy around the boundary pieces, and a fixed divisor of degree $d$ which is a linear combination of puncture points; this defines a split punctured $(1|1)$-supermanifold determined by the base Riemann surface $F$ and a line bundle $L$.

• Čech cocycles $\alpha$, $\beta$, with $\beta \in \Pi\check{Z}^1(F_c, L \otimes \mathcal{O}(-D_n))$ and $\alpha$ valued in the corresponding sheaf twisted by $T$, where $D_{n+r}$ is the divisor corresponding to the sum of all $s = n + r$ punctures on the closed surface $F_c$, $D_n$ is the sum of a certain subset of the set of punctures, and the cohomology classes of $\{b_k\}$, $\{a_k\}$ form a basis in the corresponding cohomology spaces.

This data gives rise to a family of complex structures on $SF$, the $(1|1)$-supermanifold with $n$ point punctures and $r$ $(0|1)$-divisor punctures, so that the transition functions on $SF$ are given by the following formulas on the overlaps $\{U_v \cap U_{v'}\}$:
$$w' = f^{(\sigma)}_{v',v}(w,\xi), \qquad \xi' = g^{(\sigma)}_{v',v}(w,\xi), \qquad (5.8)$$
where $f^{(\sigma)}$, $g^{(\sigma)}$ are holomorphic functions on the overlaps, depending on the parameters $\sigma^\alpha_k$ and $\sigma^\beta_k$, such that $\{f_{v'v}\}$, $\{g_{v',v}\}$ define the split supermanifold with the line bundle $L$ and $s$ punctures, and in the first order in $\{\sigma^\alpha_k\}$ and $\{\sigma^\beta_k\}$ the deformations are given by the cocycles $\alpha$, $\beta$ above.

(2) Let us fix the choice of transition functions in (5.8) for every metric fatgraph $\tau$ with the $U(1)$-connection, a divisor of degree $d$, and the odd data given by the cocycles $\beta$, $\alpha$ on $F_c$. The complex structures constructed in such a way are inequivalent to each other, and the set of such complex structures constructed by varying $\tau$ and the data on it forms a dense subset of maximal dimension in the moduli space of punctured $(1|1)$-supermanifolds with underlying line bundles of degree $d$.

Proof. Let us look at the formulas (5.8) as generic ones, for arbitrary holomorphic functions $\{\alpha_{v'v}\}$, $\{\beta_{v'v}\}$ on the overlaps. There is a finite number of odd parameters which parametrize all the $\{\alpha_{v',v}\}$, $\{\beta_{v',v}\}$ corresponding to inequivalent complex structures. Expanding the formulas (5.8) in terms of these parameters, we obtain in the linear order the same structure as in the infinitesimal case. Conversely, since $\alpha$, $\beta$ represent the tangent space to the moduli space of complex structures, the parameters $\sigma^\alpha$, $\sigma^\beta$ serve as coordinates there. Considering the corresponding 1-parametric subgroups generated by $\alpha$, $\beta$, we obtain the formulas (5.8). The fact that cohomologically equivalent cocycles lead to equivalent complex structures is justified by dimensional reasons.

It is, however, nontrivial to explicitly parametrize those cocycles $\alpha$, $\beta$. In the next subsection we will analyze the special case of supermanifolds with the line bundle $L$ of negative degree.
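A quick sanity check (our own) of the vertex-by-vertex parameter count used in the next subsection: a vertex of valence $m$ carries $m - 2$ odd parameters (a polynomial of degree at most $m - 3$), and the total over a fatgraph of a genus-$g$ surface with $s$ punctures should be $4g - 4 + 2s$, cf. (5.11) below.

    def odd_parameter_count(valences):
        """Total number of odd parameters: m - 2 per vertex of valence m."""
        return sum(m - 2 for m in valences)

    # theta-graph: two trivalent vertices, genus g = 0, s = 3 punctures
    assert odd_parameter_count([3, 3]) == 4 * 0 - 4 + 2 * 3  # both sides equal 2

    # trivalent spine of a once-punctured torus: two vertices, g = 1, s = 1
    assert odd_parameter_count([3, 3]) == 4 * 1 - 4 + 2 * 1  # both sides equal 2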
5.3. $(1|1)$-supermanifolds with $\deg(L) = 1 - g - k$. It is not easy to explicitly parametrize the cocycles $\alpha$, $\beta$ from fatgraph data if one does not fix a degree. From now on, we will be interested in the case when $\deg(L) = 1 - g - k$, where $s \geq k \geq 0$, on $F_c$. Assuming that the number of divisor punctures is even and setting $k = r/2$, both bundles involved in the odd data acquire the degrees relevant for the super Riemann surface case below.

Let us first be generic and characterize the cycles in $\Pi\check{Z}^1(F_c, L \otimes \mathcal{O}(-D_n))$, where $s \geq k = g - 1 - \deg L \geq 0$, using the data from the fatgraph. To do that, we define a cocycle $\rho$, a representative of $\Pi\check{H}^1(F_c, L \otimes \mathcal{O}(-D_n))$, as follows:
$$\rho_{v',v} = \sigma_{v'}(w') - \sigma_v(w) \quad \text{on } U_v \cap U_{v'}, \qquad (5.9)$$
where
$$\sigma_v(w) = \sum_{k=0}^{m_v-3} \sigma^k_v\, w^k$$
are the polynomials with odd coefficients, assigned to each fatgraph vertex $v$, of degree at most $m_v - 3$. Then the following proposition holds.

Proposition 5.4. (1) The cycles (5.9) are uniquely defined by the numbers $\sigma_v$ at the fatgraph vertices, thus forming a complex vector space of dimension $4g - 4 + 2s$.

Proof. To prove part (1) one just has to count the number of vertices and parameters at the vertices. An elementary Euler characteristic computation shows that
$$2g - 2 + s = \sum_{j\geq 3}\left(\frac{j}{2} - 1\right)V_j(\tau), \qquad (5.11)$$
where $V_j(\tau)$ is the number of $j$-valent vertices in $\tau$. Notice that for a $j$-valent vertex $v$, we have exactly $j - 2$ odd parameters from the expansion of $\sigma_v(w)$, which immediately leads to the necessary parameter count, giving $4g - 4 + 2s$.

To prove (2), recall that on each coordinate neighborhood $U_v$ the Strebel differential $\mu$ has the form $\mu|_{U_v} = w^{m_v-2}\,dw^2$, and $\mu|_{V_p} = -\frac{a_p^2}{4\pi^2}\,\frac{dy^2}{y^2}$, which means that one can rewrite the formula for the cocycle in terms of $\mu$ (5.12). Suppose that such a cocycle is exact, namely that $\rho_{v',v} = \gamma_{v'} - \gamma_v$ for a global section $\gamma$; at a vertex of valence $m$ this forces $\sigma_v(w) = \gamma^{(m-3)}(w)$, where $\gamma^{(m-3)}(w)$ is the Taylor expansion of $\gamma(w)$ up to order $m - 3$. Also, the identity $\gamma_p = a_p\mu + \gamma|_{V_p}$, i.e. $a_p\mu + \gamma|_{V_p} = 0$, is possible only if $\gamma|_{V_p}$ has poles of order not greater than 2 at $y = 0$, or, more precisely, $\gamma \in H^0(V_p, L \otimes K^2 \otimes \mathcal{O}(2D_{n+r}))$. Therefore, the cycles $\rho$ and $\tilde{\rho}$ are cohomologous to each other iff the relation between the parameters $\{\sigma_v\}$ and $\{\tilde{\sigma}_v\}$ on the fatgraph correspondingly parametrizing them is as follows:
$$\sigma_v(w) - \tilde{\sigma}_v(w) = \gamma^{(m-3)}(w),$$
where $\gamma \in H^0(F_c, L \otimes K^2 \otimes \mathcal{O}(2D_{n+r}))$ on $F_c$, such that $\gamma|_{U_v} = \gamma(w)$, with poles at the punctures of $F$ of order less than or equal to 2, and $\gamma^{(m-3)}(w)$ is the Taylor expansion of $\gamma(w)$ up to order $m - 3$.

Now, to prove part (3), we need to show that such classes of cocycles form a $(2g - 2 + k + n)$-dimensional complex space as elements of $\Pi\check{H}^1(F_c, L \otimes \mathcal{O}(-D_n))$. For a given section $\gamma$ of $L \otimes K^2 \otimes \mathcal{O}(2D_{n+r})$, the collection of the coefficients in $\gamma^{(m-3)}$ for each vertex $v$ forms a vector in our $(4g - 4 + 2s)$-dimensional space of $\sigma$-parameters. The space spanned by all such vectors is a complex $(2g - 2 + 2s - k - n)$-dimensional space. Indeed, it cannot be of smaller dimension, since we know that $\dim_{\mathbb{C}}\check{H}^1(F_c, L \otimes \mathcal{O}(-D_n)) = 2g - 2 + k + n$; at the same time it cannot be of greater dimension, since we know that the dimension of the space of such meromorphic global sections of $L \otimes K^2 \otimes \mathcal{O}(2D_{n+r})$ is $2g - 2 + 2s - k - n$ by the Riemann-Roch theorem.

Now we are ready to formulate a Theorem regarding the parametrization of complex structures via fatgraph data.

(1) Consider the following data associated to the fatgraph $\tau$:

• A metric structure and a $U(1)$-connection on $\tau$ with zero monodromy around the punctures, and a fixed divisor of degree $d = 1 - g - r/2$ at the punctures.

• Odd data: Čech cocycles $\alpha$, $\beta$, characterized as in Proposition 5.4 by ordered sets of odd parameters at the vertices.

We will call two sets of data from (1) associated to the fatgraph $\tau$ equivalent if the odd data are related as in Proposition 5.4.
Constructing the transition functions $f_{v',v}$ and $g_{v',v}$ from the even fatgraph data and the cocycles $\alpha$, $\beta$, corresponding to $r$ $(0|1)$-divisor punctures and $n$ point punctures, from the odd data, one obtains a family of complex structures on the $(1|1)$-supermanifold in the framework of Theorem 5.3.

(2) Fixing the transition functions in (5.8) and considering one such complex structure per equivalence class of data for every fatgraph $\tau$, we obtain a set of inequivalent complex structures, which is a dense subspace of odd complex dimension $4g - 4 + 2n + r$ in the space of all complex structures on $(1|1)$-supermanifolds with base line bundle of degree $d = 1 - g - r/2$ and $s = n + r$ punctures, where $n$ is the number of point punctures and $r$ is the number of $(0|1)$-divisor punctures.

Proof. The first part of the data allows one to construct the split $(1|1)$-supermanifold, as we know from the previous sections; the odd data from the second part allows one to construct the corresponding deformation. If we choose an orientation on the fatgraph, the formulas (see Theorem 5.3)
$$w' = f^{(\sigma)}_{v',v}(w,\xi), \qquad \xi' = g^{(\sigma)}_{v',v}(w,\xi) \qquad (5.15)$$
produce the transition functions on $U_v \cap U_{v'}$ for the edge oriented from $v$ to $v'$.

Remark. Note that the gauge equivalence for the $U(1)$ connection produces the following identification: if real numbers $h_{v,v'}$ parametrize the $U(1)$ connection, then constant gauge transformations at the vertices produce equivalent configurations of the infinitesimal parameters $\sigma$. In the paper [13] the uniformization version of the $N=2$ Teichmüller space was constructed (see also [3], [15]), which corresponds exactly to $(1|1)$-supermanifolds, and which serves as a universal cover for the space we use here in the case of $\deg(L) = 1 - g - r/2$. The above identifications played an instrumental role in that construction.

In the next two sections we will use the obtained results to describe the transition functions for the corresponding dual supermanifold and for the $N=2$ super Riemann surface, following [7].

5.4. Dual $(1|1)$-supermanifold. Finally, we give a description of the dual $(1|1)$-supermanifold: the supermanifold of $(0|1)$-divisors of $SF$. To describe the explicit coordinates and coordinate transformations on such an object one can use a very simple equation (see, e.g., [8]):
$$w = a + \zeta\xi, \qquad (5.17)$$
where $a$, $\zeta$ are the coordinates parametrizing such a $(0|1)$-divisor. Let us derive the formulas for the transformations of the $a$, $\zeta$ variables, i.e., for the transformation between the charts with coordinates $(a, \zeta)$ and $(a', \zeta')$, so that $w' = a' + \zeta'\xi'$. Substituting the first equation into the second, one obtains an identity in $\xi$, which leads to two equations (5.18) for $a'$ and $\zeta'$. The latter equation immediately gives the transformation for $\zeta$, which can then be simplified; substituting the result into the equation (5.18) for $a'$, one obtains the transformation for $a$ in a simpler form as well. One can see from the transformations obtained this way that the self-dual $(1|1)$-supermanifolds are indeed $N=1$ SRS. Let us combine all that in the following theorem.

Theorem 5.6. Given the coordinate transformations (5.15) for $SF$, the coordinate transformations for the dual manifold $\widetilde{SF}$ of $(0|1)$-divisors are given by the formulas (5.20).

Remark. Note that in the case of the dual manifold, $L$ is replaced by $L^{-1} \otimes T$.

6. $N=2$ Super Riemann Surfaces

In this section we write down the coordinate transformations for the punctured $N=2$ supermanifold $SF_{N=2}$ corresponding to $SF$, based on the equivalence between complex structures on $(1|1)$-supermanifolds and superconformal structures on $N=2$ supermanifolds discovered in [7].
Let us write down the transition functions between the chart with coordinates $(z, \theta)$ and the chart with coordinates $(u, \eta)$ on a $(1|1)$-supermanifold in the following way:
$$u = S(z) + \theta V(z)\varphi(z), \qquad \eta = \psi(z) + \theta V(z), \qquad (6.1)$$
where $S(z)$, $V(z)$ and $\varphi(z)$, $\psi(z)$ are respectively even and odd analytic functions. On the other hand, the superconformal coordinate transformations for an $N=2$ SRS between the charts with coordinates $(z, \theta^+, \theta^-)$ and $(z', \theta'^+, \theta'^-)$ are of the standard $N=2$ superconformal form (6.2). There is the following Theorem matching these transformations.

Theorem 6.1 ([7]). There is a one-to-one correspondence between $N=2$ SRS and $(1|1)$-supermanifolds. The explicit correspondence between the transition functions is given by the formulas (6.3).

Let us now describe how this works for the transition functions we introduced in the previous section. In our case, substituting the transition functions (5.15) into the correspondence (6.3), we can write down the transition functions of $SF_{N=2}$; these can be rewritten in a simpler way (6.5). Hence we obtain the following theorem.

Theorem 6.2. The formulas (6.5) produce the transition functions describing the superconformal structure on the $N=2$ SRS with punctures corresponding to the $(1|1)$-supermanifolds with transition functions (5.15). Namely, the transition function corresponding to the oriented edge $v$, $v'$ of the fatgraph, i.e., to the overlap $U_v \cap U_{v'}$, is described by the functions $\epsilon_\pm(w)$, $q_\pm(w)$ from (6.5).

7. Involution and $N=1$ Super-Riemann surfaces with NS and R punctures

7.1. Involution: R vs NS punctures. There is an involution $I$ on the moduli space of $N=2$ super-Riemann surfaces, such that
$$I:\ \mathcal{D}_\pm \longmapsto \mathcal{D}'_\mp,$$
where $\mathcal{D}'_\mp$ is the corresponding subbundle after the $N=2$ superconformal transformation. Such an involution takes an $N=2$ super Riemann surface to its dual, which on the level of $(1|1)$-supermanifolds produces the manifold of $(0|1)$-divisors, which we discussed earlier. The self-dual supermanifolds are known to be $N=1$ super-Riemann surfaces.

Note that the two examples of the action of the involution which we considered in this section are the only ones which preserve the base manifold.

7.2. Split $N=1$ SRS. Let us now discuss split $N=2$ SRS, which means that we set the cocycles $\alpha, \beta = 0$. The involution acts on the level of the transition functions (6.5) by exchanging the $\pm$ components. Therefore, for the fixed points of the involution we have
$$g^2_{v',v}(w) = \partial_w f_{v',v}(w). \qquad (7.5)$$
This means that $g_{v',v}(w) = \mathrm{sign}(v',v)\sqrt{\partial_w f_{v',v}(w)}$, where $\mathrm{sign}(v',v)$ is the notation for the sign of the square root, so that on the resulting $N=1$ SRS we have
$$w' = f_{v',v}(w), \qquad \xi' = \mathrm{sign}(v',v)\sqrt{\partial_w f_{v',v}(w)}\,\xi.$$
The choice of signs for such square roots is the same as the choice of a spin structure on the punctured surface. However, we already discussed that problem on the level of fatgraphs (Section 3), which leads to the following Theorem.

Theorem 7.1. Consider a metric fatgraph $\tau$ with a spin structure $\omega$ provided by an orientation as discussed in Section 3. This data defines the superconformal structure on the split $N=1$ SRS. For every boundary cycle on the fatgraph, corresponding to a puncture $p$, let $m_p$ be the number of oriented edges whose orientation is opposite to the one induced by the orientation of the surface. The corresponding puncture is Ramond or Neveu-Schwarz depending on whether $m_p$ is even or odd.

Proof. Let us consider the metric fatgraph $\tau$ with orientations on the edges. Our problem is to use the orientations to define the signs $\mathrm{sign}(v',v)$ of the square roots. To do that, for each overlap we will look at the $z$ coordinates on the strips, discussed in Section 4.
For given vertices $v$ and $v'$, the transformation between the $z$ and $z'$ coordinates is given by $z' = L_{v,v'} - z$, so that $\partial_z \tilde{f}_{v',v}(z) = -1$ and the square root $\sqrt{\partial_z \tilde{f}_{v',v}(z)} = \pm i$. We will define the value of this sign in the following way: if the orientation is from vertex $v$ to $v'$ we choose the positive sign ($\mathrm{sign}(v,v') = 1$), and the negative one otherwise ($\mathrm{sign}(v,v') = -1$). One can prove that such a choice does not depend on the choice of orientation for a given spin structure; namely, a different choice, corresponding to a fatgraph reflection, will just result in a reflection of the odd coordinate for the given vertex.

Regarding R and NS punctures, one can deduce immediately that the statement is correct just from the simple fact that there is a natural combinatorial constraint on the punctures with odd $m_p$ on a fatgraph (see Section 4), matching the one for Ramond punctures on a surface. Nevertheless, let us prove it directly.

For a given choice of spin structure, let us superconformally continue the transition functions $g_{p,v}$ to the puncture neighborhoods, where we remind that $z$ is the coordinate on the consecutive strip between $v_k$, $v_{k+1}$, and $v_1, \ldots, v_m$ are the consecutive vertices around the puncture. Now we obtain R and NS punctures by gluing the supertube with a proper twist of the odd variable. That will of course depend on the number $m_p$ of the edges $\{v_i, v_{i+1}\}$ which have orientation opposite to the orientation induced on the cycle by the one on the surface. Note that in terms of the $z$-variables $\partial_z \tilde{f}_{p,v}(z)$ is a constant, so one can again make the choice of signs explicitly.

We have an identity for the product of the signs around the puncture, with positive sign when the number of sign changes is even and negative otherwise. In the case of $m_p$ even, we can choose $\{\mathrm{sign}(p,v)\}$ so that for neighboring vertices $v$, $v'$ one has $\mathrm{sign}(v,v') = \mathrm{sign}(p,v)\,\mathrm{sign}(p,v')$, thus gluing the strips into a supertube. However, this is not possible in the case of odd $m_p$. In this case we have to assume that $\mathrm{sign}(v_n, v_1) = -\mathrm{sign}(p,v_1)\,\mathrm{sign}(p,v_n)$, thus gluing the strips into the twisted supertube corresponding to an NS puncture.

Remark. One can of course superconformally transform the twisted supertube in the NS puncture case into the disk, the same way we did in Section 2.3, thus making the corresponding cocycle $\{g_{v,p}(y)\} = \{\pm\sqrt{\partial_y f_{v,p}(y)}\}$. In the Ramond case this is of course impossible. We see that if $p$ is an R puncture, the square of the odd transition function acquires an extra zero at $p$. Therefore, for the bundle $L$ we have the condition $L^2 = T \otimes \mathcal{O}(-D_R)$, so that $2\deg(L) = 2 - 2g - n_R$, which is possible only when $n_R$ is divisible by 2.

7.3. $N=1$ SRS: non-split case. In order to construct non-split $N=1$ SRS, we will first do it on the infinitesimal level near the split $N=1$ SRS. So, let us look at the formulas (6.5) when $\alpha_{v',v}$, $\beta_{v',v}$ are infinitesimal, which gives the formulas (7.9).
Invariance under the simple involution $D_\pm \to D_\mp$ allows one to identify $\alpha_{v',v}$ and $\beta_{v',v}$, and as before $g^2_{v',v}(w) = \partial_w f_{v',v}(w)$; thus infinitesimally the transformations for the resulting $N=1$ SRS on the overlap $U_v \cap U_{v'}$ are given by
$$w' = f_{v',v}(w) + \partial_w f_{v',v}(w)\,\rho_{v',v}(w)\,\xi, \qquad \xi' = \pm\sqrt{\partial_w f_{v',v}(w)}\,\big(\xi + \rho_{v',v}(w)\big), \qquad (7.10)$$
so that the signs are prescribed as in Theorem 7.1, where $\rho \in \Pi\check{Z}^1(F_c, L \otimes \mathcal{O}(-D_{NS}))$ and $L^2 = T \otimes \mathcal{O}(-D_R)$, with $D_R$ and $D_{NS}$ the divisors corresponding to the sums of all R and NS punctures respectively. We described such cocycles using odd number decorations at the vertices of the fatgraph in Proposition 5.4. The formulas (7.10) are not hard to continue to full superconformal transformations for the transition functions (one can obtain them by applying the $D_\pm \to D_\mp$ invariance to the formulas (6.5) as well):
$$w' = f^{(\lambda)}_{v',v}(w) + \partial_w f^{(\lambda)}_{v',v}(w)\,\lambda_{v',v}(w)\,\xi, \qquad \xi' = \pm\sqrt{\partial_w f^{(\lambda)}_{v',v}(w)}\,\big(\xi + \lambda_{v',v}(w)\big). \qquad (7.11)$$
Combining the parametrization data for the cocycles $\rho$ from Proposition 5.4 with the results of this section, we obtain the following omnibus Theorem, describing a dense set of superconformal structures inside the moduli space of $N=1$ SRS.

Theorem 7.2. Consider the following data on a fatgraph $\tau$:

(1) A metric structure.

(2) A spin structure, as an equivalence class of orientations on the fatgraph. The cycles on the fatgraph encircling the punctures are divided into two subsets, NS and R, depending on whether the number of edges oriented opposite to the surface-induced orientation of the appropriate boundary piece of the fatgraph is odd or even, correspondingly. We denote the numbers of the corresponding boundary pieces by $n_{NS}$ and $n_R$.

(3) An ordered set $\{\sigma^k_v\}_{k=0,\ldots,m_v-3}$ of odd complex parameters for each vertex $v$, where $m_v$ is the valence of the vertex $v$.

Then the following is true:

(a) The data from (1) and (2) determine uniquely the split $N=1$ SRS with $n_R$ Ramond and $n_{NS}$ Neveu-Schwarz punctures, with the transition functions given by (7.11) (with $\lambda = 0$), one for each overlap $U_v \cap U_{v'}$. The sign of the square root is given by the spin structure on the fatgraph, making the odd coordinate a section of a line bundle $L$ on the corresponding closed Riemann surface $F_c$ such that $L^2 = T \otimes \mathcal{O}(-D_R)$, where $D_R$ is the divisor which is the sum of the points corresponding to the Ramond punctures.

(b) Part (3) of the above data allows one to construct Čech cocycles on the Riemann surface $F$ which are representatives of $\Pi\check{H}^1(F_c, L \otimes \mathcal{O}(-D_{NS}))$, where $D_{NS}$ is the divisor corresponding to the sum of the points corresponding to the NS punctures:
$$\rho_{v',v} = \rho_{v'} - \rho_v, \qquad (7.12)$$
where $\rho_v$, $\rho_{v'}$ are the meromorphic sections of $L \otimes \mathcal{O}(-D_{NS})$ on $U_v$, $U_{v'}$ correspondingly, determined by the polynomials $\sigma_v(w)$, where $m_v$ is the valence of the given vertex $v$. The cocycles defined by the configurations described by $\{\sigma_v\}$ and $\{\tilde{\sigma}_v\}$ are equivalent to each other if and only if
$$\sigma_v(w) - \tilde{\sigma}_v(w) = \gamma^{(m-3)}(w) \qquad (7.13)$$
for every vertex $v$, where $\gamma \in \Pi H^0(F_c, L \otimes K^2 \otimes \mathcal{O}(D_{NS} + 2D_R))$, $\gamma|_{U_v} = \gamma(w)$, and $\gamma^{(m-3)}(w)$ is the Taylor expansion of $\gamma(w)$ up to order $m - 3$.

We call two sets of data associated to the fatgraph $\tau$ equivalent if they are related as in (7.13).
(c) There exists a superconformal structure for an $N=1$ super Riemann surface $SF$ with $n_R$ Ramond punctures and $n_{NS}$ Neveu-Schwarz punctures, such that the superconformal transition functions for each overlap $U_v \cap U_{v'}$ are
$$w' = f^{(\sigma)}_{v',v}(w) + \partial_w f^{(\sigma)}_{v',v}(w)\,\lambda^{(\sigma)}_{v',v}(w)\,\xi, \qquad \xi' = \pm\sqrt{\partial_w f^{(\sigma)}_{v',v}(w)}\,\big(\xi + \lambda^{(\sigma)}_{v',v}(w)\big), \qquad (7.14)$$
where the deformed functions $f^{(\sigma)}_{v',v}$, $\lambda^{(\sigma)}_{v',v}$ depend on the odd parameters $\{\sigma^k_v\}$ characterizing the Čech cocycle $\{\rho_{v',v}\}$, with $f^{(0)}_{v',v} = f_{v',v}$, and in the first order in the $\{\sigma^k_v\}$ variables $\lambda^{(\sigma)}_{v',v}$ reduces to $\rho_{v',v}$.

(d) To describe the non-split SRS, we fix the choice of transition functions in (c) for every metric spin fatgraph $\tau$ with the odd data from (3). We consider the set of superconformal structures constructed by picking one superconformal structure per equivalence class of data for every fatgraph $\tau$. The points in this set represent inequivalent superconformal structures, and together they form a dense subspace of odd complex dimension $2g - 2 + n_{NS} + n_R/2$ in the space of all superconformal structures with $n_{NS}$ Neveu-Schwarz and $n_R$ Ramond punctures associated to $F$.
Making mathematics meaningful for freshmen students: investigating students' preferences of pre-class videos

Engaging students in university mathematics classes can be a challenge for professors. One pedagogical technique is the use of pre-class videos in a flipped classroom. The students are exposed to the concepts and theories before attending class so that class time can be devoted to interacting with the content to better understand it. Most of the research into the flipped classroom shows that the students generally like the idea and feel they benefit from the approach; but to date, there is no conclusive research showing that students improve their grades. This research is a precursor to a larger study on the flipped classroom in university mathematics classes and investigates the types of videos undergraduate students prefer, to help guide the development of a pre-class video library. Eighty-one students in three university mathematics classes in a private university in the United Arab Emirates were involved in the study.

Electronic supplementary material The online version of this article (doi:10.1186/s41039-015-0026-9) contains supplementary material, which is available to authorized users.

Flipped learning can be described as a pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. As noted by Hamdan et al. (2013), "there is an established body of research that supports … the shift from a teacher-centered to a student-centered approach to instruction" (2013, p. 6). In addition, flipped learning, by encouraging a shift to a learning paradigm in higher education and by moving direct instruction to the individual space, "addresses one challenge facing many instructors interested in creating dynamic learning environments: How to free up time during class" (Wallace et al. 2014, p. 254). Figure 1 depicts the movement of instruction from the group space to the individual space and the resulting change in classroom dynamics.

The benefits of the flipped classroom to the students are numerous and encourage high-impact practices. These practices include "students taking responsibility for their own learning, investing time and energy in practice, collaborating with classmates around challenging learning activities, receiving and responding to frequent and timely feedback from instructors, and seeking to connect their learning to real-life applications" (Kuh et al. 2010, cited in Wallace et al. 2014). For example, students can take responsibility for their own learning by accessing the content anytime, anywhere, on any device. In addition, students can pause, rewind, and review the video, or sections of the video, as often as they like. Interaction and active learning opportunities are increased during class time, and all students, regardless of ability, can be engaged and challenged to further their understanding of the material by their peers and their instructors.

The hallmark of the flipped classroom "consists of two parts: interactive group learning activities inside the classroom, and direct computer-based individual instruction outside the classroom" (Bishop and Verleger 2013). The use of pre-class videos is the most common technique associated with the computer-based instruction. In their study, Long et al.
(2014) found that 78 % of the students agreed or strongly agreed that they preferred learning via videos. The students stated that the videos helped them understand the material; the videos were easy to follow and convenient to view. Murray et al. (2015) also found that students were generally positive about the flipped classroom, particularly the convenience and flexibility of the videos.

The types of videos vary, from information displayed on the screen in bullet points with voice-over narration to full instructor presence working at the board. The length of the videos also varies. Ferrer and García-Barrera (2014) suggest adhering to the segmentation principle and breaking the information up into smaller segments. They also discuss three personalization principles to consider when preparing instructional videos. These include:

1. Informal language principle: it is better to use informal language (first and second person) than formal language.

2. Guide principle: incorporating characters on screen that fulfill a coaching role promotes learning.

3. Author visibility principle: when the author is an active and visible element in the video, personally involved in the narration, learning is more successful (p. 2610).

One of the main concerns about the flipped classroom is whether or not the students will do the pre-class preparation, that is, whether they will watch the video or complete other online activities assigned by the teacher. Since class time is generally devoted to group work or other tasks relying on the students having done the preparation work, this is a legitimate concern. However, in their ongoing large-scale study with engineering and mathematics students, Lape et al. (2015) have found that the students report they do watch the videos to prepare for the in-class activities.

Flipped learning in mathematics classes

Mathematics can be a challenging subject for many students, and it can be difficult for teachers at all levels to help the students reach their full potential. In higher education, mathematics courses can be a stumbling block toward majoring in STEM fields. Mathematics is required for the engineering fields, but even some of those students struggle with understanding and applying the higher-level concepts presented in a traditional lecture class. The flipped classroom allows students to apply what they have learned, usually in a group setting. The results, as demonstrated in two recent studies, can be a better understanding of the material and higher grades. For example, McGivney-Burelle and Fei (2013) conducted a study with two calculus classes. One was taught in a traditional lecture format and the other in a flipped classroom. They found that the students in the flipped calculus classroom received higher grades than the students in the traditional class. Syam (2014) also conducted a pilot study of the flipped classroom in a pre-calculus course with his students at Qatar University for 2 weeks, covering three sections of the syllabus. At the end of the trial period, students had to complete a quiz to assess their understanding of the material. Syam found that the results of this quiz were higher than those of previous quizzes for which students were not taught in a flipped classroom. In addition, a questionnaire was distributed to the students to probe student perceptions more deeply. The results suggest that the majority of students preferred the flipped classroom and would rather take future mathematics classes in this manner.
Methods

Although the flipped classroom is not without its critics, there is enough evidence to suggest that students can benefit from it and that those who are involved in a flipped class enjoy it. Although there is general consensus in the literature that students strongly prefer videos made by their own professors, before investing a lot of time and energy into preparing the pre-class videos, the lead author wanted to find out the students' attitudes towards the usefulness of pre-class videos and the type of videos they found the most useful. Thus, in this study, three videos from YouTube were used. The students were surveyed to identify their preferences, to guide the development of a future video library prepared by the Professor.

The study was conducted in Spring 2015 with three undergraduate mathematics classes at the American University of Sharjah (AUS). AUS is a coeducational, private institution in Sharjah, United Arab Emirates. It has a student body of approximately 6000 students representing over 90 nationalities. The students come from different educational systems with different prior learning environments, especially when it comes to mathematics. Permission to conduct the study with AUS students was granted by the AUS Internal Review Board.

Eighty-one students (46 females and 35 males) in two sections of MTH 103, Calculus for Engineers, and one section of MTH 111, Mathematics for Architects, participated in this study. MTH 103 is a fundamental course that builds the mathematical foundations for engineering and other scientific fields. This course covers limits of functions, differentiation, applications of derivatives, and the theory of integration with applications. For many students, this course is a stumbling block toward majoring in STEM fields. MTH 111 introduces the topics of geometry and calculus needed for architecture and reviews trigonometry, areas and volumes of elementary geometric figures, and the analytic geometry of lines, planes, and vectors in two and three dimensions.

Data collection

In preparation for the flipped class, three videos that are freely available on YouTube were reviewed and chosen by the Professor. The length of the videos, the presentation style, and the level of detail in the explanation varied, as shown in Table 1. These videos were chosen for their different presentation modes of the same topic. The students were given access to the three online videos through the AUS Course Management System. The Professor informed the students that their next class would be conducted differently and explained the concept of the flipped classroom. She showed them where to find the videos on the course management system and told the students that she would prefer that they watch all three, but that they were expected to watch at least one of the videos. The students were well aware that the Professor was not going to give a lecture on the Chain Rule and that they needed to watch the pre-class videos and be prepared to do some in-class exercises to better understand this rule.

Before starting the group exercises, the students filled out a short survey about the videos. The survey was deliberately designed to be short and clear to encourage the students to fill it out. They were under no obligation to do so and were neither rewarded nor punished for participating or not participating. The survey was designed to gather demographic information and both quantitative and qualitative data. It can be found in Additional file 1.
Results and discussion

One of the concerns expressed in the literature is whether or not the students will watch the pre-class videos. In spite of being well informed of the videos' availability and of how the class would be structured differently, a total of 43 % (37 % of females and 51 % of males) did not watch the videos. Reasons for not watching the videos included forgetting about the assignment and running out of time because of homework assignments for other classes. One female student noted that she did not watch the videos because she felt she already knew the information in the videos, and one male student did not watch because he "thought it will be explained in class and I understand from my professor perfectly".

Of the 57 % of the students who did watch the videos, there was no predominant video preference, as shown in Fig. 2 (the three videos drew 47 %, 24 %, and 29 % of the preferences). In addition, there was no statistical correlation between the students' majors (engineering and architecture) and their general video preference. However, the highest percentage was for the longer video with more details and the ability to see the teacher. There were, however, some gender differences in video preference. A higher percentage of the females viewed the videos, 57 versus 49 % of the males. As demonstrated in Figs. 3 and 4, the females had a preference for the longer video (A) compared to the males, who showed a preference for the shorter video (C). Some of the students watched all three videos, while others watched only one. The reasons for watching just one, for both males and females, were mainly centered on the lack of time. Comments from the students who watched all three videos explaining their preferences included:

I am a visual learner, so to understand very well, I must see examples. Also, watching video B after video C was useful because I had already learned the basic rules from video C. When it came to video A, I was bored and did not need more information. Video B was also effective because the basic rules (3) of the chain rule were always available on the right hand side, easy for me to relate back to. (Watched all 3)

Shorter videos are easier to focus on. The videos overall were useful.

They explained the chain rule in the same way over a long period of time which I preferred. All three were useful.

Students' attitudes towards the use of the pre-class videos

Overall, the students were positive about the videos and their usefulness. There were, however, two students, one female and one male, who expressed their dissatisfaction with the videos. The female noted: "I rather have my lecture in class with the teachers teaching style that I am comfortable with. Prefer videos after class than before". The male student wrote, "The flipped class is a bad idea. Normal classes are better." We can understand the female student's point, but the male student gave no reason for disliking the flipped classroom. His comment, however, can be seen to be consistent with Talbert's observation that in his mathematics classes, "many students in flipped classrooms are rebelling because they want professors to lecture to them and tell them exactly how to earn a good grade" (2015, p. 15). He recommends acknowledging in class that some students will feel anxious or uncertain about learning in a new, unfamiliar way, and taking time to explain how they may benefit from the flipped classroom. Both negative comments underscore the importance of uncovering students' preferences to best help them learn.
Further, as Berret notes, "the techniques all share the same underlying imperative: Students cannot passively receive material in class, which is one reason some students dislike flipping" (Berret 2014, p. 2).

The positive comments from the males who watched at least one of the videos included the following:

It really helped to understand the material better.

The videos were a great way to understand the material at home.

Nothing other than they are "very helpful". Keep posting more.

I like this idea since it is very convenient and enables better understanding because you get to watch it at your own pace.

Positive comments from females who watched at least one of the videos included:

We should have more of these videos and practice more during class. Thank you.

It's an effective way of learning.

I wish there was videos to watch for all my courses and subjects and I wish if we took more than the Chain Rule.

Observations from the professor

In the flipped class, out of the 50-min allocated class time, 35 min were dedicated to student active learning. The students were given a set of problems to solve, starting with very straightforward ones and gradually getting more challenging. As the students were working on problems and interacting with each other, the professor noticed that they were motivated, sharing their knowledge, and explaining concepts to their peers. Because the concepts, the theorems, and the conditions for applying these mathematical notions were covered in the pre-class videos, she was able to move freely from one group to another, answering questions and correcting solutions. She was able to help students at the right time and meet the needs of all ability levels. For example, the struggling learners needed more help than the rest of the class. Because some of them lacked very basic mathematics, having come from different educational systems with different prior learning environments, they needed one-on-one time to understand the basic concepts and fill their knowledge gaps. The middle-level students were able to solve the problems and often benefited from the interactions with the high achievers, anchoring their knowledge as they argued with their peers or assisted the lower-level students. The advanced students asked for higher-level thinking problems. They quickly solved the easy questions and had time to think about the in-depth problems, which allowed them the opportunity to reach their full potential. They were not limited or held back by basic explanations or easy questions from lower-level students. The opportunity to have one-on-one conversations with many students during class time was a real benefit of the flipped class. The professor had conversations with students she had never talked to before because they were too shy to talk in front of their peers or were afraid of making mistakes. During this experience, they were able to ask questions, clarify misconceptions, and be active participants in the classroom.

Conclusion

In spite of the fact that some students did not watch the videos, the overall purpose of this small-scale study was achieved. That is, the professor was able to make the mathematics class more meaningful for the students with the use of the pre-class videos. She was not constrained to delivering a "one standard lecture" to fit all.
Because of the nature of the group work assignments, even the students who did not watch the videos had the opportunity to interact with the content and learn from each other. All the students, regardless of ability level, were moving in the right direction, with each student progressing towards his/her full potential at his/her own pace. The positive feedback from the students and the professor's observation of the benefits to the students are the impetus for the next phase of implementing the flipped classroom. Although there was no clear preference for the type of video (long, short, detailed, etc.), the professor is now ready to prepare her own pre-class videos and continue with the flipped classroom to make her classroom a more enriching, rewarding learning experience for the students. Her efforts are not lost on the students, as one student noted: "Thank you Prof. You are the only one who is helping us to improve."
Compact Scrutiny of Current Video Tracking System and its Associated Standard Approaches

With the increasing demand for video tracking systems with object detection across a wide range of computer vision applications, it is necessary to understand the strengths and weaknesses of the present approaches. There are various publications on different techniques in visual tracking associated with video surveillance applications; however, it has been seen that there are only three prime classes of approaches, viz. point-based tracking, kernel-based tracking, and silhouette-based tracking. Therefore, this paper studies the literature published in the last decade to highlight the techniques used and to summarize the tracking performance they yield. The paper also highlights the present research trend towards these three core approaches and significantly highlights the open-ended research issues in these regards. The prime aim of this paper is to study all the prominent approaches to video tracking that have evolved to date in the literature. The idea is to understand the strengths and weaknesses associated with the standard approaches so that the findings can effectively act as a guideline for constructing new models. The prominent challenge in reviewing the existing approaches is that all of them are targeted towards achieving accuracy, whereas various other connected problems in the internal process have not been considered, e.g., feature extraction, processing time, dimensionality problems, and the non-inclusion of contextual factors; these form the outcome of the proposed review findings. The paper concludes by highlighting these as research gaps, which act as the contribution of this review work, and further states that there are good possibilities for new work if these issues are considered prior to developing any video tracking system. Overall, this paper offers an unbiased picture of the current state of video tracking approaches to be considered when developing any new video tracking system.

I. INTRODUCTION

With the advancement of computer vision and video surveillance systems, video tracking has gained immense popularity in both domestic and commercial applications [1]. Fundamentally, video tracking is a mechanism for identifying, recognizing, and tracking a mobile object over time [2]. Apart from its applicability to video surveillance systems, video tracking is now used in various applications, viz. video editing, medical imaging, traffic control, augmented reality, communication and video compression, and human-computer interaction [3][4][5]. Usually, the comprehensive mechanism of a video tracking system involves long processing times owing to its dependency on the massive amount of data within a video sequence [6]. Complexity in the operational process also exists in recognizing an object with accuracy in a video tracking system [7]. Essentially, the video tracking system aims to follow the mobile target object (or multiple objects) present over a sequence of video frames. This can be a highly difficult process, especially when the speed of the mobile object is relatively fast or uncertain with respect to the defined video frame rate. The uneven orientation of a mobile object is another complicated scenario in video tracking, which adds complexity to analyzing the presence of an object in a given scene over a sample of time.
In order to address this conventional issue, a motion model is adopted in the video tracking system [8]. The motion model is responsible for defining the relationship between the target object image and its mobility. Regarding the motion model, a homography or affine transformation is generally used as a two-dimensional model when tracking is performed over planar objects [9]. The motion model for a three-dimensional object is usually related to the position and orientation of the object [10]. When dealing with video compression, the macroblocks are divided into keyframes, and selected motions are modeled as disruptions of these frames using motion parameters [11]. In the case of a deformable object, the motion model generally considers the position of the target object over a mesh [12]. At present, there is a large body of literature on video tracking systems, which mainly evaluates sequential frames in a video, yielding an identified target object within the transition of frames [13][14][15][16]. However, considering the generalized classification, it is found that existing video tracking algorithms are of two types, i.e., representation along with localization of a target object, and filtering of data. The first kind of algorithm is generally known for its low computational complexity and is further classified into contour-based tracking and kernel-based tracking. The second kind of algorithm mainly deals with the dynamics of the target object and performs assessment based on multiple hypotheses. Thereby, such algorithms result in an enhanced capability for tracking mobile objects of complex form. However, these algorithms are also computationally complex and have dependencies on different parameters, e.g., stability, redundancy, quality, etc. The algorithms that fall under this category are the Kalman filter and particle filtering. Therefore, the prime research problem considered for this work is that, although there are various implementation- and discussion-based papers on video tracking systems, there is no global viewpoint to standardize the effectiveness of all the exercised approaches. It is still difficult to understand the actual scenario of existing video tracking approaches, as the taxonomies are not well discussed. Apart from this, it is also a known fact that there is an increasing demand for video surveillance systems with sophisticated features. Various approaches are also consistently evolving to assist in the internal processing of video frames in image processing. This is a motivating factor that makes this topic worth researching, owing to its abundant scope of application in upcoming days as well as the trade-off in finding any potential standardized model. Therefore, the significant objective and contribution of this paper is to discuss the techniques applied in major implementation work towards the significant classes of video tracking approaches, i.e., point-based tracking, kernel-based tracking, and silhouette-based tracking. The study also contributes a discussion of open research problems. The organization of this manuscript is as follows: Section II briefs the taxonomy of video tracking approaches, followed by a discussion of point tracking approaches in Section III. Existing kernel tracking and silhouette tracking approaches are discussed in Sections IV and V. A discussion of the findings of the study is carried out in Section VI.
A summary of the paper is given in Section VII.

II. TAXONOMY OF VIDEO TRACKING APPROACHES

Basically, the video tracking mechanism targets trajectory generation associated with an object by identifying the position of the object with respect to all the video sequences over time. The taxonomy of the existing video tracking approaches is pictorially shown in Fig. 1. Existing video tracking approaches are broadly classified into three forms, i.e., the point-based, kernel-based, and silhouette-based approach. The point-based approach makes use of points for tracking and is further classified into deterministic and probabilistic approaches. In the deterministic approach, a cost factor is evaluated for connecting each object across the video sequence considering motion constraints (i.e., common motion, small velocity changes, higher velocity, rigidity, and proximity). In the probabilistic approach, a state-space formulation is usually used to model the properties of an object and its associated parameters (i.e., acceleration, velocity, position, etc.). At present, studies on point-based tracking mainly use Kalman filtering, particle filtering, and hypothesis-based tracking systems.

The kernel-based tracking system uses the mobility of an object with respect to connected frames and is broadly classified into two forms, i.e., multi-view tracking and template-based tracking. The multi-view tracking system is modeled without assuming interactivity between the object and the background; the aggregated information corresponding to the objects in the given scene of the video sequence is represented by this model. The possibility of different shapes and sizes of the same object is quite high in this approach. It is further categorized into subspace-based and classifier-based forms of tracking. Template-based matching is used mainly for tracking single objects, and it ensures a minimal cost of computation. It has been found that kernel-based tracking dominantly uses the mean-shift approach, the support vector machine, and the template-based approach. Such approaches are also discussed as offering better occlusion-handling capability, while requiring more training to attain better performance.

The final class of video tracking system is the silhouette tracking mechanism, which is meant to overcome the inability of simplified geometric shapes to track body parts such as shoulders, heads, and hands. In this form of tracking, the region of the object is explored in each frame to offer a detailed description of the target object. The modeling in this form of tracking is feasible using contours of an object, edges of an object, or a color histogram. Fundamentally, there are two classes of silhouette tracking systems, i.e., shape-based and contour-based. The shape is also used to define the state-space of an object. In order to increase the posterior probability, the system always updates the state over a specific period of time. In such a case, the posterior probability depends upon the initial state and the likelihood of the current state, which generally uses the spatial distance between two contours. It is to be noted that the silhouette-based approach classification has remained in more or less the same form, i.e., contour- and shape-based, and no new forms have ever surfaced.

In all the approaches mentioned above, certain issues have consistently been under observation, i.e., occlusion and tracking using multiple cameras. There are three classes of occlusion, i.e.,
i) occlusion due to structures in the background scene, ii) inter-object occlusion, and iii) self-occlusion. Developing a uniform video tracking algorithm that considers all three occlusion cases is itself a major challenge. Similarly, variation in the shape and size of the same object is a big challenge when applying a multi-view tracking system. The next section briefs existing research work in this direction.

III. POINT TRACKING MECHANISM

Point tracking is the first form of object tracking method, where various forms of points are used across the target frame to represent the identified object over the video sequences. However, mapping a specific point to an identified object is quite a complex scenario, especially in the presence of misdetections or occlusion. Basically, this system is of two types, i.e., the deterministic approach and the probabilistic approach. A closer look into the trends of the approaches and methods used by point-based tracking systems shows that they mainly use the Kalman filter, particle filtering, and hypothesis-based tracking.

A. Kalman Filter-based Methods

In recent years, usage of the Kalman filter has significantly increased for object detection under various environmental conditions. In most work, the adoption of the Kalman filter is proven to offer significant accuracy when it comes to tracking at higher speeds. Even for complex forms of video (e.g., satellite videos), the Kalman filter is reported to offer better tracking performance (Guo et al. [17]). Global motion attributes characterize the moving object to offer a measurable score of tracking performance using the Kalman filter. The core part of that tracking system is developed using a correlation filter, which uses the original pixels and HOG to represent its features. The study formulates an objective function intended to reduce the squared error between the suitable response map and the correlated response. The study integrates the usage of the correlation filter and the Kalman filter to facilitate higher tracking speed with accuracy and fault tolerance. However, the limitation is that usage of this approach requires more robustness when performing dynamic tracking operations. A study towards achieving robustness is carried out by Gupta et al. [18], where the depth of interest is used to perform object tracking in a mobile environment. The study makes use of an unscented Kalman filter in an experimental approach to forecast the location of a mobile object. A similar direction of work towards tracking a mobile object's position is also discussed by del Rincon et al. [19]. In this work, the authors consider the use case of tracking different parts of the human body using a multiple-tracking strategy and a two-dimensional articulated model. The interesting part of this study is its support for identifying and tracking various rotational aspects of the human body. A study towards tracking multiple targets has been carried out by Wang and Nguang [20]. The uniqueness of this part of the study is the integration of the connected data using a probabilistic model with Kalman filtering (Fig. 2). A slightly unconventional mechanism of object tracking is carried out by Yang et al. [21], considering the use case of tracking aircraft.
The study used a deep learning mechanism to improve the accuracy of the tracking system, where the model integrates the Kalman filter and the extended Kalman filter to forecast the trajectory. Based on a region-based deep neural network, the presented scheme uses a shared convolutional structure, which is used to encode the data connected with the positional information of the flying object. The region-of-interest area is then attached to the pooling layers on top of the deep neural network, where the response at position (i, j) is defined by expression (1) of that work, in which the variable r_c represents the mapping score associated with the region of interest. Each class is further subjected to a softmax response in the deep learning stage. The mobility model of the presented deep learning mechanism is the standard linear state-space model:

x_k = A x_{k-1} + B u_k + w_k
z_k = H x_k + v_k    (2)

Expression (2) is used for tracking the mobile target, where the state vector x defines the system, z is the estimated measurement, and A is the state transition matrix. The variables B and u represent the control parameters, while H is the observation (transformation) matrix. The model also considers the presence of the noise terms w and v. The study outcome (Fig. 3) shows that the model is capable of identifying the flying object against different backgrounds. Irrespective of the direction of motion of the airborne object, the model can successfully perform identification. The study outcome was finally verified by comparison with other existing schemes, e.g., the mean strategy, cropping with correction, cropping with estimation, normal cropping, and estimation. Table I highlights that Kalman filtering with deep learning offers a higher capability to track dynamic mobile objects. The study also claims to reduce detection time; however, the tracking duration may differ with different background aspects of the scene, which is not discussed in the presented system and thereby acts as a limiting factor.
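To make the predict-correct cycle of expression (2) concrete, the following minimal sketch (a generic illustration, not the scheme of Yang et al. [21]) implements a 2-D constant-velocity Kalman tracker; the matrices, noise covariances, and detections are illustrative assumptions.

import numpy as np

# State: [x, y, vx, vy]; measurement: [x, y]. dt, Q, R are assumed values.
dt = 1.0
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # observation matrix
Q = 0.01 * np.eye(4)                        # process noise covariance (w_k)
R = 1.0 * np.eye(2)                         # measurement noise covariance (v_k)

def kalman_step(x, P, z):
    # Predict: x_k = A x_{k-1} (no control input, so B u_k = 0 here)
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correct: fuse the measurement z_k = H x_k + v_k
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)               # initial state and covariance
for z in [np.array([1.0, 1.0]), np.array([2.1, 1.9])]:  # detections per frame
    x, P = kalman_step(x, P, z)

Each call to kalman_step() performs one prediction with A followed by one correction with the detection z, directly mirroring expression (2).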
B. Particle Filtering Method

When there is a large set of information, a data sample is required for any further processing. This sample of data, also called a particle, is utilized to represent the data distribution associated with various stochastic processes. The particle filtering process is used for extracting such filtered samples of particles in the presence of noisy information. There is also a high possibility of the presence of partial information and non-linear states of varied form. From the perspective of the video tracking system, there is a need to track objects with various nonlinearities and uncertainties. Hence, the concept of particle filtering suits the design of a video tracking system well. One of the significant advantages of the particle filtering process is its accommodation of non-Gaussian distributions and nonlinearities. Apart from this, the particle filter can act as a better alternative to the existing Kalman filter, whose critical issue is that it assumes a normal distribution of the state variables, which is less practical in nature and is therefore its limitation. Such issues can be addressed using particle filtering, where the state density at a specific time t can be represented by a weighted sample set

S_t = {(s_t^(n), π_t^(n)) : n = 1, ..., N},

where n indexes the particles, with sampling probability π_t^(n) as the weight indicating the significance of the considered sample. This mechanism can also address computational complexity by storing the cumulative weight c_t^(n) for each tuple. The frequently used sampling process includes selection, prediction, and correction. In the selection step, a random sample s_t^(n) is selected from S_{t-1} by generating a random number r in the probability range [0, 1]; the idea is to find the smallest index j such that the cumulative weight satisfies c_{t-1}^(j) ≥ r, setting s_t^(n) = s_{t-1}^(j). In the prediction step, a new sample is generated as s_t^(n) = f(s_t^(n), W_t^(n)), where f is a non-linear function and W_t^(n) is zero-mean Gaussian noise. In the correction step, the weights π_t^(n) are estimated as π_t^(n) = g(z_t | x_t = s_t^(n)), where g is a Gaussian density function. Adoption of these steps offers more comprehensive tracking performance, even with different numbers of features. One recent work used a similar approach where multiple features are used to facilitate video tracking (Bhat et al. [22]). The authors note that the features are exclusive of environmental variables, and various attributes of the color distribution can be used as the feature space. This is highly application-specific; the study considered the KAZE feature, which is capable of blurring and smoothing the information along with the noise, and this challenge is addressed by using additive operator splitting to achieve sharpness. According to this model (Fig. 4), the system takes video sequences as input, followed by selecting a target from a specific frame. The particles are generated in the surroundings of the centroid of the blob, followed by updating the particles. This updating procedure is carried out using the motion model. Finally, the particles are weighted based on the spatial score, followed by resampling all such particles to obtain a new centroid, which leads to the generation of the filtered location of the target, thereby assisting in video tracking. Particle filtering is also used to address the issues of the appearance model, which suffers from various extrinsic limiting factors, e.g., clutter in the background, occlusion, and variation in illumination (Wang et al. [23]) (Fig. 5). That system uses unique particle filtering to generate information about the state of the target with respect to the current frame immediately after updating the template. It should be noted that the above study is also based on tracking using a matching mechanism, which has a dependency on interesting local points, thereby reducing robustness. This problem is sorted out by Zhang et al. [24], where basis matching is used as a substitute for point matching: a Gabor filter is used to learn the target model, while a particle filter identifies the object over the dynamic system. The study outcome was assessed in various test environments with occlusion, variation in illumination, alteration of object poses, and clutter in the background.
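The selection-prediction-correction cycle described above can be sketched numerically as follows. This is a generic one-dimensional illustration with assumed noise levels and measurements, not the method of any cited work: f is a random walk and g is a Gaussian likelihood.

import numpy as np

rng = np.random.default_rng(0)
N = 200
particles = rng.normal(0.0, 1.0, N)       # s_t^(n)
weights = np.full(N, 1.0 / N)             # pi_t^(n)

def condensation_step(particles, weights, z, sigma_w=0.5, sigma_g=1.0):
    # Selection: for each r, pick the smallest j with cumulative weight c >= r
    c = np.cumsum(weights)
    r = rng.random(N)
    s = particles[np.searchsorted(c, r)]
    # Prediction: s_t = f(s_t, W_t) with zero-mean Gaussian noise W_t
    s = s + rng.normal(0.0, sigma_w, N)
    # Correction: pi_t proportional to g(z_t | x_t = s_t), a Gaussian density here
    w = np.exp(-0.5 * ((z - s) / sigma_g) ** 2)
    w /= w.sum()
    return s, w

for z in [0.2, 0.5, 0.9]:                 # noisy measurements per frame
    particles, weights = condensation_step(particles, weights, z)
estimate = np.sum(weights * particles)    # posterior mean as the tracked state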
C. Hypothesis-based Method

The majority of video tracking systems involve two video frames, and there is less likelihood of inappropriate correspondence if the correspondence is created between two frames only. This facilitates effective tracking outcomes when the reading of further video frames is deferred. In this regard, the process of multiple hypotheses helps manage multiple such correspondence hypotheses associated with all the objects in the given instantaneous frame. This approach offers a higher likelihood of the last frame containing an object over a specific time period, with a capability to construct upcoming queued tracks for the next object while eliminating already existing track results. It should be noted that multiple-hypothesis-based approaches are essentially iterative processes that initiate from the set of current tracks, while multiple disjoint tracks are formulated to form a collection within the hypothesis. The system then carries out a prediction of the position of an object for each hypothesis over the consecutive frame. These predictive outcomes are compared with the original measurements by assessing a spatial measure. Depending on this spatial score, the system establishes hypotheses that in turn provide new hypotheses in the next rounds of iteration. However, owing to the iterative operation, this leads to a computational burden. This complexity can be addressed by probabilistic modeling, where the correspondences are random variables that are statistically independent of each other. Particle filtering can also be used to address this issue; however, it may offer a lower probability of enumerating all the possible correspondences. Hence, multiple hypotheses are a better option when it comes to the demand for checking all the possibilities. Another advantageous feature of multiple-hypothesis-based tracking systems is their ability to track smaller targets. However, in existing approaches this is associated with a large tree structure with a large number of branches. This issue can be sorted out by applying a certain optimization approach. Work towards this target is carried out by Ahmadi and Salari [25], where particle swarm optimization has been used to explore the optimal number of tracks from the video sequence. The implemented steps in this work are: i) exploring the preliminary tracks with the aid of a multiple-hypothesis approach, ii) fine-tuning and adjusting the observed track information using particle swarm optimization, and iii) merging all the collected track information that maps to a single target object. However, the limitation of this approach is its capability to track only a single object. This limitation is overcome in the work of Kutschbach et al. [26], which extends the tracking towards multiple objects using a Gaussian mixture probability hypothesis density with multiple hypotheses. The study also makes use of a kernelized correlation filter for better accuracy performance. It is to be noted that the iterative nature of this approach is also discussed in the existing literature for optimal outcomes.
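A full multiple-hypothesis tracker maintains many candidate correspondence sets in parallel. As a simplified, hedged illustration of the underlying frame-to-frame correspondence problem, the sketch below keeps only the single best hypothesis by solving a gated assignment between predicted positions and new detections; the positions and the gate value are made up for illustration.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Predicted object positions from the previous frame and new detections.
predicted = np.array([[10.0, 12.0], [40.0, 41.0]])
detections = np.array([[41.0, 40.0], [11.0, 12.5], [90.0, 90.0]])

cost = cdist(predicted, detections)            # spatial score between hypotheses
rows, cols = linear_sum_assignment(cost)       # best single correspondence hypothesis

GATE = 5.0                                     # reject implausible matches
matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < GATE]
unmatched = set(range(len(detections))) - {c for _, c in matches}
# Unmatched detections would seed new tracks (new hypotheses) in the next iteration.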
However, most of the existing approaches are found to lack any inclusion of the relevance between two video frames, which is one major limitation. Apart from this, there is no optimized approach to utilizing the preliminary information from the individual frames (Sheng et al. [27][28]). The optimization carried out in this approach is to use information about independent sets of maximum weight. The study constructs the hypotheses between consecutive video frames using a hypothesis transfer model. Also, the complexity associated with the iterative process has been addressed using a polynomial-time approximation algorithm. This process indirectly improves the efficiency of the system. An upper bound UB on the tracklet is also derived mathematically in this work. Fig. 6 highlights the visual outcomes, showing that this model can track different objects at the same time with different window sizes. However, this approach is limited to a single camera with multiple-object tracking. Further, the authors have developed a graph model with distance and time information connected to the trajectories. The model uses a temporal graph to assess the presented tracklet generation, resulting in connectivity among hypotheses and benchmarking. The video tracking operation is further improved when this model is integrated with tracking using network flow; at the same time, similar network-flow parameters are utilized to assess the validity of the model. The test environment used in this study is further extended, where multiple similar targets are subjected to tracking using multiple cameras (Yoo et al. [29]). The tracking is carried out over multiple tracks that are completely unknown and are obtained from time and distance relationships. The realization of the multiple tracks is carried out by solving a clique problem with a high degree of weights associated with each frame. The study makes use of feedback information obtained online from the result of the preliminary frame. With the adoption of the tracklet, the presented approach is capable of generating a much finer-tuned set of candidate tracks and filtering out all candidate tracks that are found to be unreliable. Hence, there are various point tracking systems at present for video tracking.

IV. KERNEL-BASED TRACKING MECHANISM

This is a typical mobile-object tracking system in which the object is represented using a primitive object region from one video sequence frame to another. Normally, the parametric motion is witnessed in the form of affine, conformal, and translational motion for all object motion. The computation of a dense flow field can also be used to represent the motion of an object. Various approaches in this methodology are constructed based on the techniques used for motion estimation of an object and the number of tracked objects. The existing literature mainly adopts three essential approaches under this method, viz. i) the mean shift method, ii) the support vector machine, and iii) the template-based method.

A. Mean Shift-based Method

Although this is one form of video tracking mechanism, its core principle is based on the segmentation of video sequences. The existing literature discusses techniques where the mean shift approach is used along with other associated techniques to improve tracking performance. The work carried out by Baheti et al. [30] used an enhanced Lucas-Kanade algorithm for effective control of the computational complexity of object tracking. The objective function stated for this purpose is to minimize

Σ_x [ I(W(x; p + Δp)) − T(x) ]²    (5)

In expression (5), the error for aligning the template with the reference is estimated by considering the input image I and the reference (template) image T(x). W is a warping function with warping parameters p = (p_1, p_2, ..., p_n)^T, while Δp is the increment of the warping parameters. The preliminary set of warping parameters is obtained from the RANSAC algorithm using its homography estimation.
The warped image is subtracted from the template, followed by computing the gradient information of the template image in a specific direction and extracting the related Jacobian. The steepest-descent image is computed using matrix multiplication, followed by computing the Hessian matrix and multiplying it by the error. The computed parameter variation Δp is fed to the objective function to minimize the error, leading to good tracking accuracy. Fig. 7 highlights the visual performance of both methods, showing that combining the Lucas-Kanade algorithm with mean shift offers more accuracy than conventional mean shift [30]. However, it should be noted that this approach does not place much emphasis on complex background environments, which is necessary for adaptive tracking. The existing approach reports that the usage of a kernel correlation filter can solve this issue of background complexity when used alongside mean shift (Feng et al. [31]). In this method, the trained image with its respective positional information is taken as input, followed by tracking based on the kernel correlation filter. Further, with the inclusion of the new frame, the confidence map of the filter is obtained, followed by assessing whether the mean shift is required to be used. For this purpose, the histogram feature of the mean shift is obtained, which finally leads to the tracking outcome (Fig. 8). However, the method does not include occlusion mitigation, which is required for cluttered-scene analysis. The problems of occlusion and complex background have a better chance of being solved if the emphasis is placed on data distribution with a multidimensional approach. The conventional mean-shift approach can be extended to three dimensions with more suitability for tracking dynamic objects (Liu et al. [32]). Such a mechanism performs two steps, viz. i) all the significant mobility points are tracked, and the appearance model is subjected to related fine-tuning as necessary, and ii) the detection process is initiated along with compensation of tracking errors owing to complete occlusion. The study outcomes show robust tracking performance compared to some of the existing variants of kernel correlation filters using colored videos. The technique involves preprocessing the infrared video sequence followed by target identification. For the first image, the detected target region is captured, followed by a multiscale transform and fusion of the target region. Subjecting the image to the multiscale transformation gives the outcome for part of the fused image. A similar sequence of processes is carried out for the second image: the background is captured from the identified target, followed by a similar set of processing as for the first image, giving a second set of fused images. Both fused images are further organized as a sequence to perform tracking. A recent work carried out by Peng and Zhang [33] has a unique implementation of mean shift, where detection and tracking of the target region are carried out using the mean shift method, while the root mean square error between two frames is estimated to assess the error score. Other associated studies on similar techniques with slight variations in using mean shift are seen in the work of Shu et al. [34], Tan et al. [35], Wang et al. [36], and Chen et al. [37].
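As a hedged sketch of the baseline mean-shift tracker that the above works extend, the OpenCV snippet below back-projects a hue histogram of the target region and iterates mean shift over it; the video file name and the initial window coordinates are placeholders.

import cv2

cap = cv2.VideoCapture("input.mp4")            # placeholder video source
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80                  # assumed initial target window

# Model the target by its hue histogram (the density that mean shift climbs).
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    ret, window = cv2.meanShift(back_proj, window, term_crit)  # shift to density mode
cap.release()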
B. Support Vector Machine-based Approach

In the area of learning algorithms, the Support Vector Machine (SVM) is considered a supervised model capable of performing both linear and non-linear classification. This characteristic makes it suitable for improving the accuracy of a video tracking system. The SVM approach has proven effective when it comes to object recognition and tracking. SVM, when combined with the Scale Invariant Feature Transform (SIFT), offers better performance (Dardas and Georganas [38]). The technique applies vector quantization to map keypoints to the training images, followed by applying k-means clustering and a bag-of-words model. However, better classification performance is seen when a one-class SVM is used with a Markov chain Monte Carlo implementation (Feng et al. [39]). The inclusion of dynamics in tracking is addressed using the probability hypothesis density. The enhancement of SVM in tracking is further proven when excluding the coupled label prediction using a kernelized structured SVM for adaptive tracking. The complexity owing to unbounded growth in the support vectors is controlled using a budgeting mechanism. This was also verified by Yuan et al. [40] using multiplicative kernels. Such approaches, however, avoid the inclusion of contextual modeling, which has otherwise been shown to offer better SVM predictability (Sun et al. [41]). Such an approach decomposes the spatial context into foreground and background to obtain a robust appearance model that deals with deformation and occlusion issues in video tracking. Sun et al. [42] have also used SVM for categorizing the scene from its sophisticated surroundings. Such an approach is proven to encode the perception of human vision using a gaze-shifting path. Fig. 9 highlights the process used, where an aggregated convolutional neural network is used along the gaze-shifting path and further subjected to an SVM for effective classification. The idea of a combined process, i.e., identifying an object, learning, and tracking, is also carried out by Yin et al. [43]. In this work, SVM is used for a dual purpose, i.e., performing linear classification and state-based structured classification, where applicability increases for complex video scenes. SVM is also proven to provide reliable modeling (Sun et al. [41][42]) where learning is carried out over multiple views, harnessing the geometrical structures of the tracked outcome. Overall, SVM performs fruitfully when it comes to video tracking from complex video sequences. However, the approaches do not offer many solutions towards the computational complexity associated with classification performance. Fig. 9. Categorization of the scene for tracking (Sun et al. [42]).
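As a hedged illustration of SVM-based tracking-by-classification (a generic pattern, not the specific method of any study cited above), the sketch below trains a linear SVM to separate flattened target and background patches and then scores candidate windows in a new frame; the patch data are synthetic stand-ins.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins: 16x16 grayscale patches, flattened to feature vectors.
fg = rng.normal(0.8, 0.1, (50, 256))     # patches sampled on the target
bg = rng.normal(0.2, 0.1, (50, 256))     # patches sampled from the background
X = np.vstack([fg, bg])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="linear").fit(X, y)     # appearance model as a discriminant

# At tracking time, score candidate windows and keep the most target-like one.
candidates = rng.normal(0.5, 0.3, (20, 256))
scores = clf.decision_function(candidates)
best = int(np.argmax(scores))            # new target location = best candidate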
C. Template-based Approach

The formation of the template is usually carried out using normal geometrical structures. It is capable of bearing both appearance information and spatial data from the given scene. However, one of the pitfalls of this approach is that the generated appearance of an object can only be encoded from a single view. This narrows its applicability to tracking video with less variation in object poses during tracking. Hence, there have been various recent attempts to circumvent this issue. Guo et al. [44] have used an adversarial network with the guidance of a generative task to perform dynamic learning. Templates are selected by online adaptation from the image sources with ground truth, along with an arbitrary vector. However, it does not perform dynamic matching of the template. This problem is solved by Huang et al. [45], where segmentation of an object is carried out using an aggregation network with a temporal attribute and a Hungarian matching scheme from a template bank (Fig. 10). Existing studies have also witnessed the template matching process being hybridized, where different methods from other categories of video tracking are used. Studies by Lin and Chen [46] and Mutsam and Pernkopf [47] have discussed the usage of the particle filter with template matching. A unique study carried out by Pei et al. [48] has used graph matching over the template to establish the connection between objects and trajectories. Rehman et al. [49] have used template matching and deep learning over multiple regions of interest to improve the scope of tracking and its accuracy. The work carried out by Su et al. [50] has integrated a color histogram with a template to maximize the performance of video tracking, along with the significance of the update process over selected regions of interest. Xiu et al. [51] have extracted differential information, where the initial region of the target is obtained using rough matching, followed by magnifying the search region. The study outcome is proven to offer better tracking performance in contrast to existing template matching algorithms. Apart from this, various artifacts associated with interference are also solved in such an approach, with a reduction in outliers to improve video tracking performance.

V. SILHOUETTE TRACKING MECHANISM

The silhouette is a simpler mechanism for tracking an object when it comes to the non-rigid nature of the object. In such a case, the region of the object is estimated in every frame to facilitate tracking. The information encoded within the region of the object is used for this form of tracking system. The possibilities for such information could be an edge map, or any shape model. Typically, contours and shape factors are used in the process of a silhouette tracking mechanism. Object tracking can be a complicated process when the video is multi-dimensional, as there is a proliferation of multidimensional video with the advancement of digital technologies. The work carried out by Kim et al. [52] has addressed this problem by performing contour tracking using the graph-cut method. The study considers the angular radial distance and its variation as an essential constraint in the presence of shape deformation. With the refinement of the contours, the ambiguous seeds are eliminated for precise segmentation using graph cut. Combined with a neural network, the performance of the contour-based tracking system can be enhanced (Kishore et al. [53]). This strategy uses the Horn-Schunck optical flow method for obtaining features for tracking, while the shape features are extracted from active contours. Different events are classified using the backpropagation approach in the form of words and then converted into signals for matching. The work of Luo et al. [54] has implemented a silhouette-based tracking system using a segmentation approach with a block-based technique. The motion information produced during video encoding is utilized for tracking purposes. The Kalman filter is also reported to enhance this form of video tracking system (Pokheriya and Pradhan [55]). The study makes use of an adaptive background subtraction method.
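As a hedged sketch of the adaptive background subtraction step that such silhouette trackers rely on (a generic illustration, not the specific method of [55]), the OpenCV snippet below maintains a MOG2 background model and extracts foreground contours as candidate object silhouettes; the video file name and thresholds are placeholders.

import cv2

cap = cv2.VideoCapture("input.mp4")              # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # adaptive background model update
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each sufficiently large contour is a candidate object silhouette to track.
    silhouettes = [c for c in contours if cv2.contourArea(c) > 100]
cap.release()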
Another unique mechanism for carrying out silhouette tracking uses the CamShift algorithm (Zou et al. [56]). This is mainly utilized for computing the color probability distribution, thereby facilitating a video tracking system. Apart from this, the inference system becomes quite simple with respect to the background. This algorithm is considered useful for dealing with occlusion and target deformation. Fig. 11 showcases the outcome, where the original image (Fig. 11(a)) is processed to obtain the color probability distribution (Fig. 11(b)), followed by extraction of the motion probability distribution (Fig. 11(c)) and the cumulative probability distribution (Fig. 11(d)).

VI. DISCUSSION

The core study findings of this paper are discussed with respect to existing research trends, followed by a briefing of open-ended research problems.

A. Existing Research Trend

To visualize existing research trends, this work collected the papers published in the IEEE Xplore digital library between 2010 and 2020. The findings are graphically shown in Fig. 12. From Fig. 12, it can be seen that there is little survey work in this area, and that more emphasis has been given to the conventional mechanisms of point-based and kernel-based tracking. Contributions towards silhouette-based tracking are very few. Apart from this, the number of journal publications on kernel-based tracking is significantly smaller than on point-based tracking. This shows that equal emphasis has not been given to all the taxonomies of the video tracking system.

B. Research Gap

Different variants of research work are being carried out on video tracking systems with a unique focus on accuracy. Every implementation offers a productive guideline towards adopting an effective methodology for addressing the problems, while also being associated with specific limitations and issues. The following is the list of open-ended research issues that demand attention:

• Less simplified feature extraction process: Apart from extracting unique features, it is essential to ensure cost-effective modeling. The majority of the existing approaches are highly inclined towards the extraction of local-level features, limiting their applicability in case of a change in visual and scene context. There is an emergent need to include a global level of features, inclusive of both low- and high-level attributes, towards facilitating effective modeling of the video tracking system.

• Less focus on processing time: An effective video tracking algorithm and system will demand an almost instantaneous response time. Without this, practicability cannot be defined precisely. The existing systems use iterative and complete modeling of the video tracking system due to their sole focus on achieving accuracy. With the inclusion of different challenges like different variants of occlusion, multi-view tracking, and the sophistication of the algorithm's operation, the system must offer an instantaneous response in the presence of any dynamic video sequence.

• Need to emphasize dimension reduction: While performing video tracking, the system extracts various informative contents, which are required to be stored and processed for improving tracking precision. This is mainly the case with learning-based algorithms, which demand a higher dimension of training data.
The inclusion of a higher dimension of training data will increase the memory complexity and the processing time needed to yield an appropriate response. Hence, there is a need to evolve an approach that can offer a better form of dimension reduction of the features, considering the cases of complex forms of imagery in the video sequence, e.g., aerial images. There is also less focus on optimization-based approaches, which have good potential to deal with this open-ended problem.

• The need to include contextual scene information: Existing approaches are built primarily on object detection followed by tracking. In the detection process, the emphasis is only on the foreground object and less on the contextual information of the object and the given scene. Without the inclusion of a context-based approach, video tracking will have a limited scope of operation when exposed to uneven and dynamic mobility of an object whose heuristics are not present in the ruleset, the ground truth, or even the training images. Hence, contextual information demands an enhanced scope.

VII. CONCLUSION

Irrespective of the volume of work carried out on video tracking systems, there is no evidence of any standardized model that acts as a benchmark. Therefore, this article presents a typical insight into the three identified video tracking classes that are frequently found to be used: point-based tracking, kernel-based tracking, and silhouette-based tracking. It also briefs all the standard methods that are witnessed to be implemented in these three standard classes of video tracking algorithms. A closer look into the existing approaches shows that they can all be further classified into three more classes of approach, i.e., contour-based tracking, tracking using a native geometric model, and representation of a target object. All these associated models and their accuracy strongly depend upon how accurate the process of object detection and recognition is over a challenging scene of a video sequence. The study also concludes that each of the three standard categories discussed in this paper has both advantages and limiting factors, which should be improved upon to come up with a novel and effective video tracking scheme. Therefore, our future work will emphasize addressing the open-ended research issues discussed in the prior section. To do this, the future direction of work will emphasize modeling global features for the extraction process, along with an emphasis on precision. The future work will also be in the direction of including a policy to balance the demands of higher accuracy and optimal processing time, which is lacking in existing approaches. Finally, an optimization-based approach could be implemented to address the issues connected with computational complexity.
Oscillations in an artificial neural network convert competing inputs into a temporal code

The field of computer vision has long drawn inspiration from neuroscientific studies of the human and non-human primate visual system. The development of convolutional neural networks (CNNs), for example, was informed by the properties of simple and complex cells in early visual cortex. However, the computational relevance of oscillatory dynamics experimentally observed in the visual system is typically not considered in artificial neural networks (ANNs). Computational models of neocortical dynamics, on the other hand, rarely take inspiration from computer vision. Here, we combine methods from computational neuroscience and machine learning to implement multiplexing in a simple ANN using oscillatory dynamics. We first trained the network to classify individually presented letters. Post-training, we added temporal dynamics to the hidden layer, introducing refraction in the hidden units as well as pulsed inhibition mimicking neuronal alpha oscillations. Without these dynamics, the trained network correctly classified individual letters but produced a mixed output when presented with two letters simultaneously, indicating a bottleneck problem. When introducing refraction and oscillatory inhibition, the output nodes corresponding to the two stimuli activate sequentially, ordered along the phase of the inhibitory oscillations. Our model implements the idea that inhibitory oscillations segregate competing inputs in time. The results of our simulations pave the way for applications in deeper network architectures and more complicated machine learning problems.

Revisions

For the revisions, the reviewers' original comments are marked in italic, our responses are marked in bold, and the added text passages are written in normal font. In the revised manuscript, we have marked the edits based on the comments by reviewer 1 in blue, reviewer 2 in green, and reviewer 3 in pink.

Reviewer 1

We thank the reviewer for their time reviewing our manuscript and their positive feedback. All edits related to the reviewer's comments are indicated in blue in the revised manuscript.

There is a broad range of temporal dynamics in the neural system, and these temporal dynamics are thought to have an important function in the representation of information, especially multiple competing pieces of information. In this manuscript, the authors trained the ANN to recognize letters, and with the addition of temporal dynamics in the hidden layer, the ANN was able to read out two letters presented at the same time sequentially. Moreover, the sequence of readout follows the phase of the inhibitory oscillations. The results of the study showed that the inhibitory oscillations help to segregate the information in time. Overall, this is a timely and insight-providing study that demonstrates the positive effects of oscillations on information readout. It was a pleasure to read such a well-designed and well-written study, and I think that both the neuroscience and machine learning fields will have interest in the results of this study, and I would like to see it published. I have 3 small suggestions that the authors might consider.

1. I would like the authors to discuss how the findings of this study contribute to our understanding of how the neural system works.
The main contribution of our algorithm is that we test whether inhibitory oscillations serve to support visual processing. Neuronal alpha oscillations have long been known to be inhibitory, but their relevance for neuronal computation is debated. Specifically, we test whether inhibitory oscillations allow a neural network to overcome computational bottlenecks when processing multiple inputs. Moreover, our algorithm is rooted in insights from neuroscientific and cognitive studies of the visual system, which have revealed that object recognition relies on both parallel and serial processes. Lastly, we outline predictions based on our simulations that can be tested using electrophysiological approaches. Please see below for details.

l. 343: A key contribution of our approach is that we translate conceptual ideas based on neuroscientific studies into a computational model. Our network embraces two key properties of visual perception: parallelisation and segregation. Prior work has shown that simple visual features can largely be processed in parallel [5,30,29], while object recognition has been demonstrated to be supported by serial processes [15,24]. This indicates a bottleneck problem, which has been argued to arise from the converging hierarchical structure of the visual system [3,11,24,29]. A similar bottleneck problem arises in our non-dynamic network (Figure 5).

Also see ll. 357: Our algorithm further draws from evidence for phase coding observed in recordings from the rodent and human hippocampus, whereby spiking activity has been shown to be modulated along the phase of ongoing theta oscillations [4-8,16,23,26,27]. The order in which a sequence of inputs has been experienced has further been proposed to be preserved in the spiking activity [13,23], but see [22]. We here demonstrate how the visual system may utilise a similar mechanism based on inhibitory alpha oscillations to support object recognition.

As outlined in 2 Methods, we tuned the hyperparameters in our simulations to resemble oscillatory dynamics observed in electrophysiological recordings. The rise time of the activation τ_h was chosen based on the membrane time constant of excitatory neurons (10-30 ms) [4]. The activation period of an individual letter within a temporal code was 23-30 ms (Figure 6b and c), i.e. 35 to 40 Hz. This corresponds to the period of gamma oscillations, which have been proposed to be involved in the feedforward processing of visual information [1,2,17]. As such, our algorithm is strongly linked to the idea that visual processing is modulated by an interplay of gamma and alpha oscillations [17,12]. The involvement of these oscillations in organising visual processing is backed by a rich body of literature.

And ll. 389: In addition to the explored computational benefit of inhibitory oscillations in visual processing, a key contribution of our study is that we demonstrate two testable predictions.

2. I would like to know if adding other frequencies of oscillations to the hidden layer by changing the time parameter would produce similar results to this study? In other words, could the authors discuss the similarities and differences between brain information processing and ANN information processing, especially on these time scales.

This is a great point that has prompted us to do further simulations, presented in Figures S2 and S3. We further explore the role of the frequency of the inhibition and refraction in the Discussion section.
ll. 261: We hypothesised that speeding up the refraction would allow an increase in the number of items within the temporal code. This was tested by repeating the simulations shown in Figure 6 with τ_r = 0.05. However, while a reduced time constant for the refraction did result in a faster activation of the two nodes corresponding to the letters in the image (Figure S2b), only the first inhibitory cycle showed three activations, whereby the attended letter is read out before and after the unattended one (Figure S2c). A faster refraction was further associated with an overall reduced amplitude and an occasional activation of the output node corresponding to a letter that was not presented in the image. Overall, decreasing the time course of the refraction did not seem to offer a stable solution for increasing the number of items in the temporal code. Reducing the frequency of the inhibitory oscillation to 5 Hz led to a robust temporal code with three activations per cycle of the inhibitory oscillation (Figure S3c). These simulations do, however, still show an activation for the letter not presented (Figure S3c, top and middle panels). For the presented algorithm, slowing down the inhibition appears to be more effective for including more items in the phase code than speeding up the refraction.

And ll. 363: As outlined in 2 Methods, we tuned the hyperparameters in our simulations to resemble oscillatory dynamics observed in electrophysiological recordings. The rise time of the activation τ_h was chosen based on the membrane time constant of excitatory neurons (10-30 ms) [4]. The activation period of an individual letter within a temporal code was 23-30 ms (Figure 6b and c), i.e. 35 to 40 Hz. This corresponds to the period of gamma oscillations, which have been proposed to be involved in the feedforward processing of visual information [1,2,17]. As such, our algorithm is strongly linked to the idea that visual processing is modulated by an interplay of gamma and alpha oscillations [17,12].

3. There is a small typo in the second line of 2.1 on page 3. Should it be Figure 2a?

Thank you, we have corrected that.

Reviewer 2

We thank the reviewer for their time and thorough evaluation of our manuscript. All revisions in the manuscript based on the reviewer's suggestions are marked in green.

The authors present an ANN to which they add biologically inspired dynamics to implement inhibition through alpha oscillations. They train their dynamical ANN to decipher between pairs of simultaneous stimuli (pairs of letters) to mimic the need for object-based attention when presented with several stimuli. That's a nice concept and it makes sense. However, my main concern is regarding the stimuli themselves: from what is displayed in the figures, they use three letters, A, E and T, printed in white over a black background, but the problem is that each letter seems to always be situated in the same, non-overlapping part of the total input image (i.e. A is in the top right corner, E in the bottom left and T in the top left). So the question is whether the ANN learns to decipher between the letters or just the region of the image that has some white pixels in it?

Also I am a bit confused about the training procedure...

In each epoch, the network was trained on 132 images, whereby each letter appeared at each location on the image.

Question 2: Could you introduce some noise on the inputs, i.e.
And ll. 363: As outlined in 2 Methods, we tuned the hyperparameters in our simulations to resemble oscillatory dynamics observed in electrophysiological recordings. The rise time of the activation τ_h was chosen based on the membrane time constant of excitatory neurons (10-30 ms) [4]. The activation period of an individual letter within a temporal code was 23-30 ms (Figure 6b and c), i.e. 35 to 40 Hz. This corresponds to the period of gamma oscillations, which have been proposed to be involved in the feedforward processing of visual information [1,2,17]. As such, our algorithm is strongly linked to the idea that visual processing is modulated by an interplay of gamma and alpha oscillations [12,17].

3. There is a small typo in the second line of 2.1 on page 3. Should it be Figure 2a?

Thank you, we have corrected that.

Reviewer 2

We thank the reviewer for their time and thorough evaluation of our manuscript. All revisions in the manuscript based on the reviewer's suggestions are marked in green.

The authors present an ANN to which they add biologically-inspired dynamics to implement inhibition through alpha oscillations. They train their dynamical ANN to decipher between pairs of simultaneous stimuli (pairs of letters) to mimic the need for object-based attention when presented with several stimuli. That's a nice concept and it makes sense. However, my main concern is regarding the stimuli themselves: from what is displayed in the figures, they use three letters, A, E and T, printed in white over a black background, but the problem is that each letter seems to always be situated in the same, non-overlapping part of the total input image (i.e. A is in the top right corner, E in the bottom left and T in the top left). So the question is whether the ANN learns to decipher between the letters or just the region of the image that has some white pixels in it? Also I am a bit confused about the training procedure.

In each epoch, the network was trained on 132 images, whereby each letter appeared at each location on the image.

Question 1: Could you show what happens when you change the stimuli's position on the image, and/or what happens if you repeat the same stimulus twice in different positions on the same input image? Also, what happens if you put one of the letters in the bottom left corner of the image? Because, to follow your example, maybe the apple and passionfruit are not always exactly at the same place on the supermarket shelves and yet you recognize them regardless... In case I misunderstood and the letters are not always in the same position and this is just for the example images you chose, you need to clarify that.

We should indeed have explained this better in the previous version of the manuscript. Each letter was presented in each of the four quadrants. We have updated Figure 2b to reflect this. Moreover, we have used different input images with the letters at different locations for the simulations in Figures 5 and 6 to clarify this. Also see l. 114.

Question 2: Could you introduce some noise on the inputs, i.e. not have exactly the same shape every time for the same letter but small variations? How does that affect your results? Without it, it is hard to know if your results can generalize at all...

Following the reviewer's suggestion, we have now added some (low amplitude) noise to the images for training and testing, as depicted in Figure 2b, Figure 5a and Figure 6a. As such, all simulations performed with the dynamical ANN used stimuli the network had not seen before.

Question 3: Could you quantify the accuracy of the results that you show for all the different experiments? I.e. give the read-out accuracy or how much overlap/no overlap for the stimuli distinction.

We have added the read-out accuracy to Figure S1. Please also see l. 254: The maximum read-out accuracy (activation) for each letter is indicated in each plot. As the activations were calculated using a softmax function, a value of 0.99 (e.g. for letter "A" in the top left panel) indicates that the network is 99% certain about the presence of the letter "A", while the remaining 1% is shared between letters "E" and "T". While the response to "E" in the combined "T" and "E" input is notably reduced compared to the other experiments (Figure S1, bottom right), the network still achieves a read-out accuracy of 0.59, well above the chance value of 0.33. The simulations show that the network is able to segregate all input combinations in the test set.

Question 4: (… fig. S1, but it would be nice to have it here). And also, what happens with the three simultaneous stimuli?

We have added a plot showing a combined input of A and T to Figure 6. We also explore a simulation with 3 simultaneous inputs in Figure S4, as described in the manuscript, ll. 273: Following these tests, we investigated whether the network could generate a temporal code representing all three stimuli. Figure S4a shows the exemplary input, which was generated by multiplying the attended letter ("E") by 1.2, the unattended letter ("T") by 0.8, and "A" by 1. After adding the noise, the image was scaled to the luminance range from 0 to 1. We used the original settings for the dynamics with c = 10, s = 0.1, τ_h = 0.01, and τ_r = 0.1. Indeed, the refraction without inhibition allowed the network to dynamically activate each letter in the input, albeit with varying amplitude and activation period (Figure S4b). Generating a temporal code with all three items, however, proved to be challenging. Introducing a 10 Hz inhibition resulted in "E" and "A" being read out in the first and second cycle of the inhibition, respectively, after which the network produced a code with two items ("E" and "A") in each cycle (Figure S4c). Slowing down the inhibition to 6 Hz resulted in a temporal code with three items in two out of the five cycles shown here; however, the network often activated the output node corresponding to "E" after reading out "A" (Figure S4d). In sum, while the network was able to produce a stable temporal code with two inputs, it was not trivial to produce a code with three stimuli. We will explore the biological relevance of increasing the number of items in the temporal code in the Discussion.
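For concreteness, the input construction described in the quoted passage can be sketched as follows. The attentional gains (1.2 / 0.8 / 1.0) and the 0-1 luminance scaling follow the text; the Gaussian noise parameterisation and the max-normalisation are assumptions.

```python
import numpy as np

def make_input(attended, unattended, neutral=None, rng=None):
    """Combine 56x56 letter images with attentional gains and noise.
    Gains (1.2 / 0.8 / 1.0) and the 0-1 luminance scaling follow the text;
    the Gaussian noise parameterisation below is an assumption."""
    rng = rng or np.random.default_rng()
    img = 1.2 * attended + 0.8 * unattended
    if neutral is not None:
        img = img + 1.0 * neutral
    sigma = rng.uniform(0.01, 0.25)               # noise amplitude per image
    img = img + rng.normal(0.0, sigma, img.shape)
    img = np.clip(img, 0.0, None)
    return img / img.max()                        # luminance scaled to [0, 1]
```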
In the Discussion, we outline that we were predominantly interested in generating a temporal code with two items, due to a hypothesised link between the multiplexing and saccadic previewing. This paragraph is marked in blue as it mainly addresses a comment by reviewer 1. ll. 380: In our model, the number of items in the temporal code could be increased by reducing the frequency of the inhibitory oscillation (Figure S3). This simulation suggests that the visual system may slow down alpha oscillations in anticipation of complex visual inputs. So far, however, only an increase in alpha frequency has been linked to visual detection and processing speed in temporal attention paradigms [7,25]. Moreover, multiplexing by alpha oscillations has been proposed to support saccadic previewing [11]. According to this model, the first item in the temporal code represents the stimulus that is currently fixated, while the second item may represent the goal of the next saccade [11]. Therefore, while changing the dynamics may be relevant for computational goals in the dynamical ANN, we believe that the temporal code with two items organised by inhibitory 10 Hz oscillations may capture the dynamics of visual cortex and associated conceptual models more accurately.

Question 5: Could you explain the training procedure in the methods section? Also, you should describe your training set clearly (cf. question 1). There is just a very brief explanation in the legend of figure 2 and nothing in the main text. Also, what does it mean that the MSE is "approaching 0"? How much is it? Is it on the training or on the test set? Is there a test set?

The reviewer is correct that we used the same stimuli for training and testing in the previous version of the manuscript. Now that we are using images with noise, we generated a new set of noisy stimuli that were not part of the training set. We have updated the section on network training in ll. 109: The training set consisted of three letters, presented in one of four quadrants in the image. After adding Gaussian-distributed noise ranging from 0.01 to 0.25, each input was normalised by its maximum value, such that the luminance in each image ranged between 0 and 1 (Figure 2b). The weights of the network were initialised according to a uniform distribution within the range [−x, x], where x = √(6/(n_in + n_out)), with n_in and n_out being the number of inputs and outputs of the current layer, respectively (Glorot initialisation) [9]. The Adam optimiser was chosen to minimise the cross-entropy loss using stochastic gradient descent [18]. In each epoch, the network was trained on 132 images, whereby each letter appeared at each location on the image. The network weights were learned by backpropagating the error through the network layers (as mentioned above, the bias term was fixed at b = −2.5). All experiments reported in 3 Results were conducted on a test set of noisy images with the letters "A", "E", and "T" that the network had not seen during training.
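As a minimal illustration of the initialisation described in the quoted passage (the layer sizes below are illustrative assumptions, not the manuscript's exact dimensions):

```python
import numpy as np

def glorot_uniform(n_in, n_out, rng=None):
    """Weights drawn uniformly from [-x, x] with x = sqrt(6 / (n_in + n_out)),
    as stated in the quoted passage."""
    rng = rng or np.random.default_rng()
    x = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-x, x, size=(n_out, n_in))

W1 = glorot_uniform(28 * 28, 64)   # layer sizes here are illustrative
b = -2.5                           # bias fixed during training (per the text)
```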
Minor comments:

- Typo: you always refer to figure 2 as figure 3...

Thank you, we have corrected this.

- In figure 2, I would change the label order to follow the order of the text: 2d->2b, 2b->2c and 2c->2d.

The labels in Figure 2 are now in accordance with the order in which they appear in the text.

- Figure 3 and results section 3.1 (bottom of page 6): can you make more explicit the parts without alpha inhibition vs. when there is? I.e. in the text specify explicitly that you first train without, and in the figure put some titles or something to differentiate the sides without and with alpha (similar to what you do in figure 4).

We have added a title to Figure 3a and f to make explicit which column shows the frequency and amplitude with and without inhibition.

- Figure 7: I'm not sure what it is but it's not very easy to read... Maybe you need to put an inset on time or something like that so it makes it easier to see the sequential activations of the neurons...

We have updated Figure 7 and the associated text and caption to describe the parallel representation and segmentation along the layers of the neural network. We further chose to plot only 400 instead of 600 ms, to increase the size of the line plots in the top panel. See l. 288: The top panels in Figure 7a and b indicate how strongly the hidden representations to the combined input correspond to the neural representations of both letters individually. And ll. 292: A similarity value of s_E(t) = 1 in Equation (8) indicates that all hidden nodes corresponding to the individual letter (E and T) are activated by the current image showing both letters simultaneously. In the first layer, the normalised dot product indicates that the nodes representing both "E" (orange trace) and "T" (green trace) activate in parallel, in anti-phase to the inhibitory oscillation (Figure 7a, top panel). The network also appears to activate the hidden representation of "A", albeit to a lesser extent (blue trace). Indeed, the time course of the activations in each node (Figure 7a, bottom panel) demonstrates that almost all nodes in the first layer activate during the excitatory cycle of the oscillation. This indicates that the first layer represents the two presented letters in parallel. In comparison, the activations in the second layer demonstrate that the nodes responding to each letter are activated in a sequence: the normalised dot product between the current representations and the activations to an individual letter "E" precedes the one corresponding to letter "T" (green trace, Figure 7b). The bottom panel in Figure 7b indicates that a smaller fraction of the network is activated at each time point, and the successive activation of the hidden nodes can be observed. Finally, Figure 7c shows the read-out in the output layer, confirming that the representations of "E" and "T" are fully separated during the excitatory cycle of the inhibition (also see Figure 6c). In sum, our simulations show how integrating dynamics driven by excitation and refraction enables a fully connected neural network to multiplex simultaneous inputs - a task it has not been trained on explicitly. This mechanism is further stabilised by pulses of inhibition at 10 Hz, akin to alpha oscillations in the human visual system.
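A similarity measure of this kind can be sketched as below. The exact normalisation of Equation (8) is not reproduced in this letter, so the form shown (dot product normalised by the template's squared norm, giving s = 1 when the current state reproduces the template) is an assumption.

```python
import numpy as np

def similarity(h_t, template):
    """Normalised dot product between the hidden activations at time t and
    the activation template for one letter; returns 1.0 when the current
    state fully reproduces the template. The exact normalisation in
    Eq. (8) may differ (assumption)."""
    denom = float(template @ template)
    return float(h_t @ template) / denom if denom > 0 else 0.0
```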
- Figure 8: this one is also a bit hard to read... Maybe make everything larger? Or I'm not sure what you should do exactly, but try to make it more digestible if you can...

We have changed the layout from 3 to 4 rows with 3 plots per row to increase the size of the plots.

- Also, generally for all figures: everything is very small (titles, legends etc.); maybe you can increase the fonts a little to make it easier for the reader.

We have increased the font sizes in all plots as well as the width of the line graphs in each figure.

- There's a typo in the intro, page 2, third paragraph: "object recognition has been argued *to* have a limited capacity".

This typo is now corrected.

Reviewer 3

We thank the reviewer for their time and effort reviewing our manuscript, and for the positive feedback. All revisions related to the reviewer's comments are marked in pink in the revised version of the manuscript.

The efficient processing of multiple, competing inputs is a fundamental task in computational neuroscience and machine learning. In this work, Duecker et al. implement a brain-inspired mechanism to address this task in the context of image classification in an artificial neural network. The mechanism consists of enforcing, after training, a dynamics of node activations that mimics alpha oscillations and rhythmic inhibition in the brain. When an image containing multiple stimuli is presented, these dynamics allow for an alternating representation of the competing inputs, effectively embedding a serial, multiplexed neural code. I find this mechanism a simple but elegant idea, well motivated by biological findings, and the exposition of the results is convincing in supporting its effectiveness. I do have some comments and suggestions that might help in improving the clarity of the exposition and the reproducibility of the results, but overall my impression is highly positive. I find that this paper is a great contribution to the growing literature of models at the interface between computational neuroscience and machine learning, and of potential interest for both communities.

Main suggestions: I found the explanation of the architecture of the deep neural networks employed a bit obscure:

- First, I guess there is a misprint in the Methods section introduction, where it is stated that a 2-layer architecture was employed, but then a three-layer one is discussed below and illustrated in the figure.

- Second, the sentence "A weight matrix of size 28 × 28 was applied to the input (56 × 56) with a stride of 28, such that each node in the first layer received 4 × 28 × 28 inputs, ensuring representational invariance across the quadrants in the input." is not clear to me. If I understood well from the code, the first layer is supposed to be a conv2d layer with 64 output features, effectively learning a single 28x28 kernel for each feature, followed by a sum operation. I suggest unpacking and clarifying this passage.

We have updated the paragraph in "2.1. Network architecture" to make clear that the network is indeed a network with only two hidden layers, whereby the convolutional kernel is only used to ensure weight sharing, meaning competition, between the quadrants. This was important, as the conventional approach to implementing a fully connected network, i.e. flattening the image and connecting each pixel in the input to the first hidden layer, would have resulted in the emergence of four networks, one for each quadrant. See ll. 90: The inputs to the network were images of size 56 × 56, each showing one of three letters ("A", "E", and "T"), presented in one of the image's quadrants (Figure 2b). We aimed to show that integrating oscillatory dynamics into the hidden layers would allow the network to overcome computational bottlenecks when processing an image presenting two letters at the same time. Therefore, we implemented competition between the quadrants by applying a weight matrix of size 28 × 28 to the input with a stride of 28. The results of the convolution between each quadrant and the weight matrix were then summed in each hidden node. To make the network dynamics tractable, we refrained from using a conventional CNN architecture, and instead used the convolutional kernel merely to implement weight sharing between the quadrants (but see 4 Discussion).
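A minimal sketch of this quadrant weight sharing, assuming the layer sizes discussed above (the number of hidden nodes, e.g. 64, is the reviewer's reading of the code, not something confirmed here):

```python
import numpy as np

def quadrant_shared_layer(image, kernels):
    """Apply each 28x28 kernel at stride 28 to a 56x56 image (i.e. once per
    quadrant) and sum the four responses per hidden node, mirroring the
    weight sharing described above. kernels: (n_hidden, 28, 28)."""
    quads = [image[:28, :28], image[:28, 28:],
             image[28:, :28], image[28:, 28:]]
    # Each hidden node therefore receives 4 * 28 * 28 inputs.
    return np.array([sum(float((k * q).sum()) for q in quads)
                     for k in kernels])
```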
- Below Eq. (1) it is written that each input is a linear sum of activations in the previous layer, but shouldn't it be h instead of z?

Thank you, we have updated this; see l. 101: The input z_j arises from the activation in the previous layer according to z_j = Σ_i w_ji h_i, with w_ji being the weight matrix connecting nodes i and j, and h_i being either the pixels in the image (for hidden layer 1) or the activations in the first hidden layer (for hidden layer 2).

- It is not well explained how the post-training dynamics is propagated across layers, especially because different types of propagation (e.g., with phase delay) are used later in the manuscript.

The dynamics are propagated from one layer to the next, as the output from layer 1 is dynamic, meaning layer 2 receives a dynamic input. Moreover, the hidden activations in layer 2 also change according to the ordinary differential equations defined in Equations 3 and 4, leading to a dynamic output. We have clarified this in 2.3 Network dynamics in the hidden layers, ll. 130: The input z_j to the ODEs in the second hidden layer was calculated from the dynamic outputs of the first layer, and the dynamic activations in the second layer were used to calculate z_j in the softmax activation, Equation (2). The result of this feedforward propagation from the first to the second hidden layer is explored in Figures 4 and 7.
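The propagation just described can be sketched as below, reusing the assumed dynamics from the single-node sketch earlier in this letter. The dense product W1 @ x stands in for the quadrant-shared first layer, and the update form is again an assumption, not the manuscript's Equations (3)-(4).

```python
import numpy as np

def layer_step(h, r, z, inhib, dt=0.001, tau_h=0.01, tau_r=0.1,
               c=10.0, s=0.1):
    """One Euler step for a whole layer (same assumed form as the
    single-node sketch above); h, r, z are arrays, inhib is a scalar."""
    drive = 1.0 / (1.0 + np.exp(-c * (z - r - inhib - s)))
    return h + dt / tau_h * (drive - h), r + dt / tau_r * (h - r)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run(x, W1, W2, W_out, steps=500, f_inhib=10.0, dt=0.001):
    """Dynamic layer 1 drives layer 2 via z_j = W2 @ h1; layer 2 drives
    the softmax read-out, as described in ll. 130."""
    h1, r1 = np.zeros(W1.shape[0]), np.zeros(W1.shape[0])
    h2, r2 = np.zeros(W2.shape[0]), np.zeros(W2.shape[0])
    outputs = []
    for k in range(steps):
        inhib = 0.5 * (1 + np.cos(2 * np.pi * f_inhib * k * dt))
        h1, r1 = layer_step(h1, r1, W1 @ x, inhib, dt=dt)
        h2, r2 = layer_step(h2, r2, W2 @ h1, inhib, dt=dt)
        outputs.append(softmax(W_out @ h2))
    return np.array(outputs)
```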
Relatedly, it would be interesting to check the robustness and generalizability of the results across a range of similar architectures. In particular, in the architecture described in the manuscript, few possible images are allowed (3 letters x 4 quadrants), the kernel is perfectly adapted to the size of the image (28x28), and a large stride is employed, effectively conveying independent input for each quadrant. This could potentially result in overfitting. I think it would be instructive to explore architectures with smaller, more common kernel sizes and strides (e.g., 3x3 or 5x5) to see how the results generalize when the first layer learns more elementary features. In particular, I expect the second-layer dynamics in Fig. 7 to exhibit less segregation, but this is just a guess.

We agree with the reviewer that an exploration of our ideas in a different architecture would be interesting. However, the scope of this manuscript was to present the idea of taking inspiration from the dynamics observed in electrophysiological recordings from the human and non-human primate visual system, to implement a multiplexed coding scheme in an ANN. We purposefully present a network with a reduced architecture that can solve a simple, tractable problem, such that we could thoroughly investigate the parameter space of the ODEs (Figure 3), as well as the dynamics in each layer (Figure 4) and the parallelisation and segregation of the representations in the network (Figure 7). For this simple model, we present a total of 9 different experiments (plus the hyperparameter tuning) in 13 figures. As such, we feel that an application in a deeper architecture would warrant a separate publication. The issue of overfitting has now been addressed by adding noise to the images (as per a request by reviewer 2) and using a newly generated test set, which the network had not seen before, for the simulations.

We understand that we may have raised the expectation that the manuscript would explore the presented ideas in a deep CNN, due to the strong focus on convolutional networks in the abstract and introduction and the application of a convolutional kernel for weight sharing. Therefore, we have clarified our objective and goals in the abstract, introduction, and methods, and hope that this addresses the reviewer's concern sufficiently. ll. 1: The field of computer vision has long drawn inspiration from neuroscientific studies of the human and non-human primate visual system. The development of convolutional neural networks (CNNs), for example, was informed by the properties of simple and complex cells in early visual cortex. However, the computational relevance of the oscillatory dynamics experimentally observed in the visual system is typically not considered in artificial neural networks (ANNs). l. 20: The inclusion of convolution in artificial neural networks (ANNs) was originally inspired by the feature detection properties of cells in early visual cortex, and marked a significant milestone in computer vision [20]. l. 26: Despite the success of embracing the spatial tuning properties of visual neurons for computer vision, there are only few examples of ANNs that have drawn inspiration from the temporal dynamics of cortical activity [8,28,22]. l. 36: We deliberately present a tractable network with a reduced architecture to demonstrate the computational benefit of oscillatory dynamics in computer vision. Our aim is to pave the way for applications in deep CNNs that can benefit from both the spatial tuning properties and temporal dynamics of the visual system. And the discussion, l. 415: While the simple nature of the network limits its computational abilities, it allowed a tractable implementation and comprehensive exploration of the imposed dynamics, as demonstrated in Figures 3, 4, and 6. As such, the presented work sets the stage for applying the presented principles to CNNs with a deeper architecture that can solve benchmark image classification problems such as (E)MNIST [6,21] and ImageNet [31].

Minor comments:

- If the task is classification, why use the mean squared error as the training objective (fig. 2c)?

We agree with the reviewer that cross-entropy loss would have been a more appropriate choice for image classification. In the previous version of the manuscript the network was trained and tested on the same, simple stimuli. For this simple problem, we found MSE loss to converge faster and to lead to the somewhat binary activations we were aiming to achieve for our dynamics. We have now changed our training and test set based on a request by reviewer 2 and added low-amplitude noise to each image. For this set, we found that cross-entropy loss performed well and are therefore using it for the new network (please see Figure 2b and c).

- There seems to be some inconsistency in figure referencing, e.g., on page 3 both citations should be Figure 2, and similarly on pages 5 and 6 (last reference before the last paragraph). On page 8, paragraph 3.3.1, the reference should be to Fig 5b.

Thank you, we have corrected these referencing mistakes.

- In Eq. (8), h_j seems to be a scalar; there is no need to define the Hadamard product.

Yes, we agree with the reviewer and have updated Equation (8).

- The words "refractory" and "relaxation" dynamics are used interchangeably; I would stick to one choice to avoid confusion.

We agree and now use the term "refraction" consistently throughout the manuscript.

- I really enjoyed reading the discussion about the possible future directions on the neuroscientific and machine learning side. In this case, the dynamics were imposed after training; I wonder whether the authors could instead comment on possible training mechanisms that would give rise to such dynamics?

We have addressed this comment in the final paragraph of our discussion. While we agree that introducing dynamics into the training process may be interesting, we feel that the current implementation may be more realistic in the context of object recognition. l. 436:
Moreover, Liebe et al. [22] have demonstrated emerging oscillatory dynamics when training an RNN to memorise a sequence. A key difference to our work is that the inputs and outputs in these previous studies were dynamic, which may have resulted in emerging dynamics with minimal intervention by the researcher. However, it would still be interesting to test whether training a network not only to classify images but also to convert simultaneous stationary inputs into a sequence results in rhythmic activations in the hidden layers. It should be emphasised, however, that we here provide an implementation of the idea that top-down control by inhibitory alpha oscillations supports multiplexing. This oscillatory top-down control has been proposed to reach the sensory systems of the human and primate brain through thalamo-cortical connections [14] or as a backwards travelling wave initiated in frontal regions [10]. The logic of imposing the dynamics after the training was based on the notion that learning to recognise different objects, i.e. the main task of the visual ventral stream, is different from learning to represent items in a sequence. As such, we argue that the current implementation, with external top-down control imposed on learned representations of visual objects, is more biologically realistic for the presented problem.
The Convenience of Single Homology Arm Donor DNA and CRISPR/Cas9-Nickase for Targeted Insertion of Long DNA Fragments

OBJECTIVE: CRISPR/Cas9 technology provides a powerful tool for targeted modification of genomes. In this system, a donor DNA harboring two flanking homology arms is mostly used for targeted insertion of long exogenous DNA. Here, we introduce an alternative design for the donor DNA by incorporation of a single short homology arm into a circular plasmid. MATERIALS AND METHODS: In this experimental study, the single homology arm donor was applied along with a single guide RNA (sgRNA) specific to the homology region, and either Cas9 or its mutant nickase variant (Cas9n). Using the Pdx1 gene as the target locus, the functionality of this system was evaluated in the MIN6 cell line and murine embryonic stem cells (ESCs). RESULTS: Both wild-type Cas9 and Cas9n could conduct the knock-in process with this system. We successfully applied this strategy with Cas9n for the generation of Pdx1-GFP knock-in mouse ESC lines. Altogether, our results demonstrate that a combination of a single homology arm donor, a single guide RNA and Cas9n is capable of precisely incorporating DNA fragments of multiple kilobase pairs into the targeted genomic locus. CONCLUSION: While taking advantage of the low off-target mutagenesis of Cas9n, our new design strategy may facilitate the targeting process. Consequently, this strategy can be applied in knock-in or insertional inactivation studies.

Introduction

Harnessing the clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated protein (Cas) system has provided a new means, CRISPR/Cas9 technology, for the introduction of targeted changes into a genome sequence (1,2). In this technology, a nucleoprotein complex consisting of the Cas9 protein and a single guide RNA (sgRNA) is used to generate a double-strand break (DSB) at a specific genomic target site, determined by the sgRNA sequence (1-3). Predominantly, DSBs are repaired through the error-prone non-homologous end joining (NHEJ) mechanism, which usually results in indel mutations (4). However, in the presence of a custom-designed homologous donor DNA, homology directed repair (HDR) can introduce customized changes into the DSB site (5,6). Concerns about CRISPR/Cas9-induced off-target mutations have led to the development of a mutant Cas9 variant, Cas9 nickase (Cas9n), which makes a single-strand break, or nick, in the target DNA (2). Despite its lower off-target mutation rate, Cas9n was shown to be less efficient than the wild-type variant (3,7). This issue has been addressed through a double-nicking strategy, which entails using a pair of sgRNAs along with Cas9n. This strategy has been applied successfully both for gene targeting in cultured cells (7,8) and for the generation of mutant organisms (9). However, designing two sgRNAs that satisfy the criteria of the double-nicking strategy and co-expressing them in target cells might reduce the simplicity and versatility of this method. Using appropriate donor DNA constructs along with the CRISPR/Cas9 system has led to the efficient introduction of a variety of subtle to multiple kilobase-pair (Kbp) modifications into eukaryotic genomes (1,2,10,11). The prevailing design strategy for targeted integration of long DNA fragments into a genome is based on the application of two flanking homology arms.
It has been demonstrated that 200-400 base pair (bp) homology arms can effectively be used with the CRISPR/Cas9 system (12,13), which are far shorter than the multiple-Kbp arms suggested for conventional gene targeting vectors (14). Here, we describe an alternative design for CRISPR/Cas9-mediated insertion of a long DNA fragment into the mammalian genome. In this design, we used a circular donor DNA that contained a single 318 bp homology arm, in combination with Cas9n and a single sgRNA. This approach was applied to insert a green fluorescent protein (GFP) coding sequence (CDS) into the genomic locus of the pancreatic and duodenal homeobox 1 (Pdx1) gene in the mouse insulinoma cell line MIN6 and in mouse embryonic stem cells (ESCs).

Transfection of MIN6 cells and flow cytometry

MIN6 cells were seeded at a density of 10^4 cells per cm^2 in 6-well cell culture plates 24 hours before transfection. Transfection was performed using Lipofectamine 3000 (Life Technologies, Germany) according to the manufacturer's instructions. Briefly, 1.5 µg of each plasmid DNA (donor plasmid and Cas9/Cas9n expressing construct) and 6 µL of Lipofectamine 3000 were used per well. The transfection medium was replaced with fresh medium after 12 hours. Forty-eight hours after transfection, transfected MIN6 cells were dissociated with trypsin and washed with phosphate-buffered saline. An untransfected sample was included as the negative control. Single-cell suspensions of live cells were transferred into flow cytometry tubes, where approximately 20,000 cells per sample were acquired by a Partec PAS flow cytometer (Partec, Germany) and analyzed using FlowJo 7.6.1 software (Tree Star Inc., USA). Transfection experiments were performed in three separate biological replicates.

Transgenesis and genotyping of mouse embryonic stem cells

To target the Pdx1 gene, we used the Royan B20 ESC line, previously evaluated in terms of pluripotency and germline transmission (15). Approximately 10^7 ESCs were co-transfected with 20 µg of pCas9n-sgPdx1 and 40 µg of pKI-Pdx1 by electroporation. Transfected cells were spread into two 10 cm cell culture plates and treated with 500 µg/mL of G418 (Sigma-Aldrich, USA) for two weeks. Antibiotic-resistant colonies were picked and cultured in multi-well plates. Genomic DNAs were extracted with a Genomic DNA extraction kit (Bioneer, Daejeon, Korea); genotyping polymerase chain reactions (PCRs) were performed with two sets of genotyping primer pairs (Table 1) and a Taq DNA Polymerase Master Mix (Ampliqon, Denmark). Each set of primers amplified the flanking genomic regions of the knock-in allele. PCR conditions were as follows: 95°C for 10 minutes, then 30 cycles of 95°C for 30 seconds, 62°C for 30 seconds, and 72°C for 1 minute. Clones positive for both genotyping PCRs were considered targeted clones, and their PCR products were purified with a PCR product purification kit (Roche, Germany) and sequenced (Pishgam, Iran) using the same primers. Electrophoresis of the PCR products was performed in an AgaroPower electrophoresis instrument (Bioneer, Korea) on a 1% agarose gel under a 7 V per cm electric field.

Quantification of transgene copy number

Real-time quantitative PCR (qPCR) was applied to quantify transgene copy numbers in ESC lines. For this purpose, we extracted genomic DNAs from each cell line as described above. Tenfold serial dilutions of each genomic DNA sample were prepared in nuclease-free water and applied as the template in qPCRs.
Three sets of qPCR reactions were performed using primer pairs (Table 1) specific for GFP (representing the transgene), Sry (single-copy endogenous target) and Fgf10 (double-copy endogenous target). qPCR was conducted with 2 μL of the diluted DNA, in duplicate, on a Rotor-Gene 6000 Real-time Thermal Cycler (Corbett Research Pty. Ltd., Australia). The acquired quantification cycles (Cqs) were used to calculate efficiencies and copy numbers as described previously (18-20). Briefly, we calculated amplification efficiencies to ensure that the method's requirements (amplification efficiency >90%) were met. The Cqs acquired in each dilution were normalised by the respective Cqs of Sry. GFP copy numbers were determined relative to the respective internal controls, Sry and Fgf10, using the comparative Cq method (ΔΔCq). Transgene copy numbers were estimated with seven qPCR replicates for each transgenic cell line.
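A minimal sketch of a comparative Cq estimate of this kind is shown below. The study's full procedure follows refs. (18-20) and uses serial dilutions; the simplified calibrator-based form here, and the assumption of ~100% amplification efficiency, are illustrative only.

```python
def copy_number(cq_gfp, cq_ref, cq_gfp_cal, cq_ref_cal, ref_copies=1):
    """Comparative Cq (delta-delta-Cq) estimate of transgene copy number.

    cq_gfp, cq_ref         : Cqs of GFP and of the reference target (e.g. Sry)
                             in the transgenic sample
    cq_gfp_cal, cq_ref_cal : the same Cqs in a calibrator of known copy number
    ref_copies             : copies of the reference target (Sry = 1, Fgf10 = 2)

    Assumes ~100% amplification efficiency (the study verified >90%).
    """
    ddcq = (cq_gfp - cq_ref) - (cq_gfp_cal - cq_ref_cal)
    return ref_copies * 2 ** (-ddcq)
```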
Statistical analysis

Statistical analysis was performed with GraphPad Prism 6 (GraphPad Software, Inc., San Diego, CA, USA) using two-way analysis of variance (ANOVA) and Tukey's multiple comparison test at the 5% level of significance. Data are presented as mean ± SD.

Pdx1 gene targeting in MIN6 cells

In order to evaluate the feasibility of a single homology arm for gene knock-in, we used a donor plasmid (pKI-Pdx1) that harbored a single homology region specific to the mouse Pdx1 locus and a GFP CDS. Introduction of an sgRNA-guided single- or double-strand break upstream of the Pdx1 CDS, and subsequent HDR within the homology region, were intended to result in the insertion of the whole donor vector into the Pdx1 locus, flanked by two identical copies of the homology arm sequence (Fig. 1A). We used insulinoma MIN6 cells, which constantly express Pdx1, for convenient detection of knock-in events. In these cells, targeted insertion of the GFP sequence into the Pdx1 locus is expected to result in GFP expression. After co-transfection of MIN6 cells with combinations of donor plasmid DNA and either sgPdx1-Cas9 or sgPdx1-Cas9n expressing plasmids, we observed GFP+ cells (Fig. 1B). A control experiment with a constitutively GFP-expressing plasmid showed the relatively low efficiency (approximately 6%) of the transfection procedure in MIN6 cells (Fig. 1C). However, both Cas9- and Cas9n-mediated targeting led to significantly (P<0.05) higher frequencies of GFP+ cells compared with the corresponding control groups that lacked sgPdx1 (Fig. 1C, D). Under these settings we did not observe a significant difference (P=0.53) between the efficiencies of Cas9 and Cas9n (Fig. 1D). Of note, the difference between Cas9 and Cas9n in the absence of sgPdx1 was also not significant (P=0.21). These results suggest that both Cas9 and Cas9n could mediate the knock-in process with the single homology arm donor plasmid at comparable efficiencies.

Generation of Pdx1-GFP knock-in embryonic stem cell lines

Given the utility of knock-in ESCs for the generation of transgenic animals and for differentiation studies, we aimed to evaluate the combination of the single homology arm donor and Cas9n for the generation of Pdx1-GFP knock-in mESC lines. The resulting antibiotic-resistant ESC colonies were screened with two PCR genotyping primer pairs specific to the desired targeted allele (Fig. 2A). Among 16 colonies subjected to PCR genotyping, 3 were positive for both PCR reactions (Fig. 2B). Further investigation of the knock-in alleles with DNA sequencing revealed that the sequences flanking the homology arm regions were identical to the predicted targeted allele in all three knock-in clones (Fig. 2C). PCR genotyping for the wild-type allele showed that all three knock-in cell lines were heterozygous at the Pdx1 locus (Pdx1-GFP/+), harboring a wild-type Pdx1 allele (Fig. 3A). Sequencing of the wild-type and targeted alleles revealed no mutation at the nicking site (Fig. 3B). A qPCR-based assay indicated that only one copy of the GFP gene existed in each knock-in cell line (Fig. 3C), which confirmed the absence of randomly integrated copies of the donor vector.

Discussion

Here we demonstrated the feasibility of a single homology arm design for CRISPR/Cas9-targeted insertion of a long DNA fragment into a mammalian genome. This strategy simplified the design and cloning procedure of the donor construct by using only a single, short homology arm. Our results showed that a single homology arm donor, along with a single sgRNA and Cas9n, can be applied for targeted insertion of long DNA fragments into the mammalian genome. We successfully applied this strategy to generate knock-in Pdx1-GFP ESC lines, the genomic sequences of which revealed the precise integration of the donor vector into the genomic target. Although the CRISPR/Cas9 system provides an efficient way for gene targeting, the off-target activity of the system remains an issue of concern. Nicking a single genomic target can improve safety at the expense of efficiency (3,7). Interestingly, we did not observe any significant difference between the efficiencies of Cas9 and Cas9n when used along with the single homology arm donor in MIN6 cells. However, these results might be affected by the low transfection efficiency in our experimental setting. Further investigations with different sgRNAs and cell types would be required to validate these findings. Nevertheless, we successfully applied Cas9n with our single homology arm donor for the generation of knock-in ESCs with a frequency of 18.75% (3 targeted out of 16 antibiotic-resistant colonies). Based on the performance of Cas9n in MIN6 cells, we used this nickase variant in ESCs, where it was expected to decrease the chance of off-target mutations. Consistent with this expectation, we observed indel mutations neither in the sgPdx1 target sites, nor in the targeted locus or the wild-type allele. Although extensive off-target analyses were not conducted in this study, Cas9n has repeatedly been shown to be less mutagenic than wild-type Cas9 in previous studies (2,3,7,8,12). Introduction of two nearby nicks with a pair of sgRNAs may increase both efficiency and specificity (7,8), but this process requires the design, construction and co-transfection of two sgRNAs that must meet a number of criteria for optimal activity (7,21). Our strategy, in contrast, minimizes the design complexity by using a single sgRNA and a short, single homology arm. Hypothetically, using a single homology arm not only simplifies the design, but may also increase efficiency by reducing the number of homologous recombination events required for vector integration. A similar principle has previously been applied to the design of conventional long homology arm insertional targeting vectors (14). However, more investigations are required to compare the single versus double homology arm designs in terms of efficiency.
A possible drawback of using a single homology arm is the insertion of the entire backbone of the donor vector into the genome. This can be remedied either by using minicircle vectors or by post-targeting deletion of the vector sequences using site-specific recombinase strategies such as the Cre-lox system. The single homology arm used in this study contained the sgPdx1 target sequence. Therefore, the homology arm could be subjected to Cas9 or Cas9n activity, which might lead to cutting or nicking of the donor vector. Further investigations would be required to confirm the occurrence of the nick in the donor vector and its impact on targeting efficiency. However, previous studies on long homology arms showed that both nicking and DSBs in the homology arm increase targeting efficiency (22).

Conclusion

The proposed design for the donor DNA provides a convenient means for RNA-guided gene targeting. Although further studies are required for a comprehensive evaluation of targeting efficiency and specificity, we have demonstrated here a proof of principle for the single homology arm design strategy. The simplicity and adequate efficiency for the derivation of knock-in cell lines may favor the use of this design strategy, particularly for clonal gene targeting in cells such as pluripotent stem cells. Authors certify no potential conflicts of interest.
Investigating the relationship between energy expenditure, walking speed and angle of turning in humans

Recent studies have suggested that changing direction is associated with significant additional energy expenditure. A failure to account for this additional energy expenditure of turning has significant implications for the design and interpretation of health interventions. The purpose of this study was therefore to investigate the influence of walking speed and angle, and their interaction, on energy expenditure in 20 healthy adults (7 female; 28 ± 7 yrs). On two separate days, participants completed a turning protocol at each of 16 speed (2.5, 3.5, 4.5, 5.5 km·h⁻¹) and angle (0, 45, 90, 180°) combinations, involving three-minute bouts of walking interspersed by three minutes of seated rest. Each condition involved 5 m of straight walking before turning through the pre-determined angle, with the speed dictated by a digital, auditory metronome. Tri-axial accelerometry and magnetometry were measured at 60 Hz, in addition to gas exchange on a breath-by-breath basis. Mixed models revealed a significant main effect for speed (F = 121.609, P < 0.001) and angle (F = 19.186, P < 0.001) on oxygen uptake (V̇O2) and a significant interaction between these parameters (F = 4.433, P < 0.001). Specifically, as speed increased, V̇O2 increased, but significant increases in V̇O2 relative to straight-line walking were only observed for 90° and 180° turns at the two highest speeds (4.5 and 5.5 km·h⁻¹). These findings therefore highlight the importance of accounting for the quantity and magnitude of turns completed when estimating energy expenditure, and have significant implications within both sport and health contexts.

Introduction

A high body mass index is a major risk factor for the incidence of numerous non-communicable diseases (NCDs), such as cardiovascular and kidney diseases, diabetes and some cancers [1-6]. Indeed, obesity has been recognised as a major public health challenge for the 21st century [7], with concerns regarding the health and economic burden leading to the identification of global targets to halt the rise in obesity prevalence by 2025 [8,9]. However, recent figures from the NCD Risk Factor Collaboration, which analysed the aggregated data of 19.2 million participants from 200 countries, suggest that if current, post-2000 trends continue, the probability of meeting these targets is almost zero. These findings highlight the critical need to develop and implement effective interventions for the prevention and treatment of excess body mass [10]. A fundamental principle in the development of such interventions is the assessment of the energy expenditure associated with activities of daily living; weight management strategies are most effective when individuals can accurately determine how much energy they have expended [11]. Indeed, a failure to assess energy expenditure appropriately may, at least in part, explain the inconsistent evidence regarding intervention effectiveness and sustainability. Walking represents a popular, convenient and relatively safe form of activity that can easily be incorporated into weight management programmes [12-14]. The energy expenditure associated with walking is reported to be either linearly or slightly exponentially related to speed [15]. However, the applicability of these findings is based on walking in a straight line, which does not align with everyday activities.
In particular, recent studies have suggested that the process of changing direction is associated with significant additional energy expenditure [16-18]. Wilson et al. [18] reported that the energy expenditure of turning is linearly related to the degree of turning angle at 6 km·h⁻¹, while Hatamoto et al. found that quadratic [17] or linear [16] functions best represented the relationship between running speed and the energy expenditure of a 180° turn. A failure to account for this additional energy expenditure of turning has significant implications for both sporting and health contexts. For example, most English Premier League footballers make more than 700 turns per match [19], while medical treatment effectiveness is often assessed using a six-minute walking test. While the latter is intended to be conducted over a standardised 30 m straight-line distance [20], some studies have used 20 m or 50 m straights [21,22], significantly influencing the number of turns completed and thus potentially confounding inter-study comparisons. Indeed, the difference in the number of turns completed, and thus overall energy expenditure, may explain why studies utilising shorter straights reported significantly shorter distances covered [23-25]. The purpose of the present study was therefore to investigate the influence of walking speed, angle, and their interaction, on energy expenditure. We hypothesised that 1) as walking speed increased, so would energy expenditure; 2) as angle increased, so would energy expenditure; and 3) angle and speed would demonstrate a synergistic effect on energy expenditure while walking.

Materials and methods

Participants

In total, 20 healthy adults (7 female, 13 male; 28 ± 7 yrs; 20.5 ± 4.1 kg·m⁻²) were recruited for the study. The participants were all recreationally active but none were highly trained. Prior to testing, participants were informed of the protocol and risks and provided written consent. All procedures were approved by a Swansea University ethics committee and were conducted in accordance with the Declaration of Helsinki. Participants were asked to arrive at the laboratory in a rested state, at least two hours postprandial, and to avoid strenuous exercise in the 24 hours preceding each testing session. Participants were also asked to refrain from caffeine and alcohol for 6 and 24 h before each test, respectively. All tests were performed at the same time of day (± 2 h). The first visit involved the determination of peak oxygen uptake (V̇O2peak) and the gas exchange threshold (GET). On each of the two subsequent visits, participants completed the turning protocol.

Incremental treadmill test

Following a three-minute warm-up at 6 km·h⁻¹, the treadmill speed increased by 1 km·h⁻¹ every minute, with the gradient set at 1% [26], until participants reached their maximal running speed, at which point the gradient subsequently increased by 1% every minute until volitional exhaustion. Participants were given strong verbal encouragement throughout the test.

Turning protocol

On subsequent visits to the indoor track, each participant was asked to complete repeated three-minute bouts of walking interspersed by three minutes of rest. In a randomised order, each participant walked at four different walking velocities (2.5, 3.5, 4.5, 5.5 km·h⁻¹) in combination with four different angles (0, 45, 90, 180°). Specifically, each of the sixteen conditions involved 5 m of straight walking interspersed with prescribed turns, with the speed dictated by a digital, auditory metronome, which sounded once halfway between turns and once on the turns. Each condition incorporated an equal number of left- and right-handed turns, as illustrated in Fig 1.
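Because a turn follows every 5 m straight, the number of turns per bout follows directly from the walking speed. A back-of-envelope sketch (not study code) makes this explicit:

```python
def turns_per_bout(speed_kmh, straight_m=5.0, bout_min=3.0):
    """Number of turns implied by the protocol: one turn per 5 m straight."""
    metres = speed_kmh * 1000.0 / 60.0 * bout_min   # distance per bout
    return metres / straight_m

for v in (2.5, 3.5, 4.5, 5.5):
    print(f"{v} km/h -> {turns_per_bout(v):.0f} turns per 3-min bout")
# 3.5 km/h gives 35 turns per bout, the figure quoted later in the text.
```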
Measurements

Throughout all the tests, gas exchange variables (MetaMax Cortex 3B, CORTEX Biophysik GmbH, Germany) were measured on a breath-by-breath basis and displayed online. Prior to each test, the gas analysers were calibrated using gases of known concentration and the turbine volume transducer was calibrated using a 3-litre syringe (Hans Rudolph, Kansas City, MO). The delay in the capillary gas transit and the analyser rise time were accounted for relative to the volume signal, thereby time-aligning the concentration and volume signals. Additionally, two combined tri-axial accelerometers and tri-axial magnetometers (SLAM Tracker, Wildbyte Technologies Ltd, Swansea, UK), measuring at 60 Hz, were worn by participants; one set was worn on the right mid-axilla line at the level of the iliac crest, and one set at the middle of the lower back.

Data analysis

The peak V̇O2 was defined as the highest 10 s stationary average during the incremental treadmill test. The GET was determined by the V-slope method [27] as the point at which carbon dioxide production began to increase disproportionately to V̇O2, as identified using purpose-written software developed in LabVIEW (National Instruments, Newbury, UK). The mean V̇O2 during each condition was taken as the first 45 seconds of the final minute of that bout. Subsequent analyses were based on the premise that the energy expenditure of turning was superimposed on the baseline energy expenditure of straight-line travel. Thus, the difference between the V̇O2 during straight-line walking (0°) at each speed and the V̇O2 engendered during walking with 45, 90 or 180° turns was attributed to the additional energy expenditure of turning. This V̇O2 was converted to gross energy expenditure in kJ using a conversion factor of 20.1 J per ml of oxygen and subsequently divided by the total number of turns per condition to provide an estimate of the energy expenditure of each angle and speed combination. The raw accelerometer data were converted to dynamic body acceleration (DBA) by first smoothing each channel with a 2 s running mean to derive the static acceleration [28], and then subtracting this static acceleration from the raw data [29]. The resulting values for dynamic acceleration were then all converted to positive values. These values for DBA were then vectorially summed to give

VeDBA = √(A_x² + A_y² + A_z²),

where VeDBA is the vectorial dynamic body acceleration and A_x, A_y and A_z are the derived dynamic accelerations at any point in time corresponding to the three orthogonal axes of the accelerometer [30]. Mean and summed VeDBA were derived for each individual turn and straight during the middle minute of each condition, and for the overall three-minute bout. Individual turns and straights were determined using custom-designed C++ software (DDMT, Wildbyte Technologies Ltd) written specifically for the SLAM Tracker devices, to visualise the accelerometry and magnetometry traces and identify the point at which each trace significantly deviated from the local mean.
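A minimal sketch of this processing chain (not the study's DDMT software; the convolution-based running mean and the function names are illustrative) could look as follows:

```python
import numpy as np

def vedba(acc, fs=60, win_s=2.0):
    """VeDBA from a raw tri-axial signal acc of shape (n, 3) sampled at fs Hz:
    a 2 s running mean estimates the static component, its removal gives the
    dynamic body acceleration, and the axes are combined vectorially."""
    win = int(fs * win_s)
    kernel = np.ones(win) / win
    static = np.column_stack(
        [np.convolve(acc[:, i], kernel, mode="same") for i in range(3)])
    dyn = np.abs(acc - static)              # convert to positive values
    return np.sqrt((dyn ** 2).sum(axis=1))  # VeDBA = sqrt(Ax^2+Ay^2+Az^2)

def energy_per_turn_kj(vo2_turning_ml, vo2_straight_ml, n_turns):
    """Gross energy cost attributed to each turn, using the 20.1 J per ml O2
    conversion stated above; the VO2 values are condition totals."""
    return (vo2_turning_ml - vo2_straight_ml) * 20.1 / 1000.0 / n_turns
```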
Statistics

Gaussian distributions in the data were confirmed by Shapiro-Wilk tests. To account for the repeated and correlated nature of the data, linear mixed-effects models were used to determine the influence of, and interaction between, walking speed and turn angle on energy expenditure and VeDBA (S1 File). All condition combinations were placed in one model, with covariates (sex, stature, peak V̇O2 and/or turning V̇O2 for VeDBA) added to subsequent adjusted models to determine their modulatory effect. Pearson product-moment correlation coefficients were used to analyse the degree of association between key variables. Statistical analyses were conducted using PASW Statistics 21 (SPSS, Chicago, IL). All data are presented as means ± SD. Statistical significance was accepted when P ≤ 0.05.

Results

Descriptive characteristics of the sample population are shown in Table 1. The male participants were significantly taller and demonstrated a higher peak V̇O2, both in absolute and relative terms (normalised per kg body mass). As shown in Table 2, there was a significant main effect for speed (F = 121.609, P < 0.001) and turn angle (F = 19.186, P < 0.001) on V̇O2, and a significant interaction between these parameters (F = 4.433, P < 0.001). Specifically, as speed increased, V̇O2 increased, but significant increases in V̇O2 relative to straight-line walking were only observed for 90° and 180° turns at the two highest speeds (4.5 and 5.5 km·h⁻¹; Table 2). Males demonstrated a significantly greater V̇O2 across all conditions (F = 25.322, P < 0.001), although this difference was reversed when stature was included in the model (sex: F = 16.77, P < 0.001; stature: F = 152.493, P < 0.001). V̇O2 during the turning protocol was dependent on peak V̇O2 (F = 100.970, P < 0.001), but once scaled to account for differences in body size, this relationship was no longer significant (F = 0.708, P > 0.05). The estimated energy expenditure associated with an individual turn is represented in Fig 2, showing the synergistic interaction between speed and turn angle in determining the energy expenditure.

Discussion

This is the first study to investigate the interaction between speed and turn angle in determining the energy expenditure associated with walking. In agreement with our first hypothesis, as speed increased for any given turning angle, the associated energy expenditure similarly increased. However, whether turning incurred a significant additional energy expenditure depended on the degree of turn angle. Specifically, irrespective of speed, 45° turns did not significantly increase energy expenditure, whilst 180° turns were always associated with a greater energy expenditure than straight-line walking. Speed and angle demonstrated a significant interaction; 90° turns were only associated with significantly increased energy expenditure relative to straight-line walking at 4.5 and 5.5 km·h⁻¹. This synergistic interaction was further supported by the exponential relationship found to best represent the relationship between speed and angle [15]. These findings therefore highlight the importance of accounting for the quantity and magnitude of turns completed when estimating energy expenditure, particularly at higher speeds and angles. In recent years there has been increasing recognition of the physiological demands engendered by turning 180° when running.
Dellal et al. [31] reported a greater heart rate, blood lactate and ratings of perceived exertion (RPE) during intermittent shuttle runs involving 180° turns compared to straight-line running at the same average running velocities, subsequently confirmed by Buchheit et al. [32]. Furthermore, Bekraoui et al. [33] found that covering the same distance at the same average speed resulted in a significantly greater physiological response when the course was 3.5 m compared to 7.0 m. These earlier findings were recently extended by Hatamoto et al. [17], who found that, even at running speeds as low as 3 km·h⁻¹, thirty 180° turns per minute elicited a similar metabolic demand as straight-line running at 6 km·h⁻¹. In the present study, a significant increase in total energy expenditure relative to straight-line walking was not observed at 2.5 km·h⁻¹, but was observed at 3.5 km·h⁻¹. Whilst these findings are largely in accord with those of Hatamoto et al. [17], it is pertinent to note certain methodological discrepancies, such as the training status of the sample population and the turning frequencies utilised, which limit inter-study comparisons. Specifically, there were considerable differences in the number of turns completed, with Hatamoto et al. [17] utilising up to 30 turns per minute compared to the 35 turns in 3 minutes at 3.5 km·h⁻¹ used in the present study. The greater energy expenditure associated with turning whilst walking is likely to be primarily attributable to the deceleration and subsequent acceleration required to make a turn, which necessitate eccentric and concentric muscle contractions, respectively [34]. Acceleration has been shown to engender a greater energy expenditure than travelling at a constant speed, with the energy expenditure dictated by the rate of acceleration [35]. A high acceleration rate requires a high degree of horizontal propulsion [36]; the change in acceleration is therefore greater when performing a 180° turn at higher running velocities, resulting in greater energy expenditure. The extent of the angle turned has also been shown to alter the biomechanical properties of running; a 90° turn exerts significantly higher vertical, braking and propelling forces than a 45° turn [37]. It might therefore be postulated that greater angles would also be associated with further increases in directional forces, and thus energy expenditure, during walking. In accord with this hypothesis, a linear relationship has been suggested between angle and energy expenditure when walking at 6 km·h⁻¹ [18]. However, the present study suggests a synergistic interaction between speed and angle, with the influence of increasing angle within a speed only evident at 4.5 km·h⁻¹ and above. This discrepancy may be attributable to differences in the walking velocities, the specific techniques used to turn, stature or training status [32,38]. Indeed, both stature and peak V̇O2, an indicator of aerobic fitness and training status, were significant predictors of energy expenditure in the present study. Hatamoto et al. [16] previously suggested that ball game players, who are likely to be mainly running rather than walking and who turn more frequently anyway, were likely to have a more efficient turning technique. However, the mean V̇O2 of an individual turn was reported to be 0.34 ± 0.13 ml·kg⁻¹ and 0.55 ± 0.09 ml·kg⁻¹ at 4.3 km·h⁻¹ and 5.4 km·h⁻¹, respectively [16].
These values are substantially higher than the values observed in the present study (Fig 2: 4.5 km·h⁻¹ = 0.07 ± 0.03 ml·kg⁻¹; 5.5 km·h⁻¹ = 0.13 ± 0.07 ml·kg⁻¹), despite the less trained status of the present participants. The reason for this discrepancy, and its contradiction of the postulated role of aerobic fitness and technique, is presently unclear, although it is perhaps pertinent to note the different methods of calculating the energy expenditure of an individual turn, and the recent findings of Zagatto et al. [39], who found a lower metabolic power to be associated with more frequent changes of direction. It is interesting to note the apparent dissociation between VeDBA and turning angle in the present study, whereby increasing the angle of the turn was not associated with any significant increase in VeDBA. This could be attributable to the short duration of the turns, although the high measurement resolution makes this unlikely, or to measurement error associated with the use of magnetometry to isolate the turn. However, whilst the magnitude of change in the signal decreased at lower turn angles, this is unlikely to entirely explain the present findings. Rather, this finding may largely be attributable to the complex and individual-specific interaction between the surge, heave and sway components of DBA, as well as to muscular effort that involves the generation of high forces without the dynamism typical of straight-line travel. Indeed, recent studies using force plates to investigate turn kinetics suggest that during a turn the surge (in-line) component of DBA is accompanied by a sway (perpendicular) component (Griffiths et al., in press). Furthermore, the surge component tends to 'average' zero over the straight sections (equal deceleration and acceleration phases), but during a turn section the surge component becomes negative on average, providing the deceleration required to enter and execute the turn. In addition, the heave (vertical) component of DBA may increase above and beyond normative walking values, but this may depend on the turning technique being employed; e.g. some participants may elect to turn using a 'stop and reverse direction' method while others may prefer a 'gradual cornering' approach. The authors are of the opinion that this is by far the most likely explanation for the lack of sensitivity of VeDBA to turns. The present findings have significant implications within both sporting and health contexts, given that few sporting, fitness or functional activities occur in a strictly linear fashion [37]. Whilst the present study only considered walking, and caution should be exercised when extrapolating the findings to speeds associated with running and team sports, the current findings closely mirror the physiological responses to turning in running discussed above.
However, not all studies have found a significant influence of turning on energy expenditure, with Zamparo et al. [40] reporting no change in V̇O2 with increasing turn angle from 0 to 180°. This discrepancy may be related to the use of maximal running velocity in that study, thereby minimising the potential for further increases in V̇O2 to be elicited with increasing turn angle. Nonetheless, we would concur with Hatamoto et al. [16] that the energy expenditure associated with turning should be considered when estimating total energy expenditure during a football game, in which more than 700 turns are typically completed per match [19]. From a health perspective, one important application of the present findings is in the design and interpretation of physical activity interventions. For example, the majority of energy expenditure prediction algorithms based on accelerometry data are derived from treadmill exercise. Such linear modes of locomotion are not cognisant of the additional metabolic costs associated with turning, and this may, to some extent, contribute to the poor accuracy associated with the derived models during free-living conditions [41,42]. Such inaccuracies are likely to be emphasised in certain populations, such as children, who are characterised by highly sporadic movements [43,44]. Furthermore, accounting for the energy expenditure of turning could also be important in the evaluation of clinical trial effectiveness. Whilst the six-minute walking test is designed to be conducted over a 30 m, straight line course with a 180° turn [20], reported course lengths range from 20 to 50 m [21,22] due to space and resource limitations. Such discrepancies, using reference values reported by Chetta et al. [45], could result in the number of turns ranging from 12 to 32, which, according to the present data, may be associated with an additional V̇O2 ranging from 118 ml·min-1 to 296 ml·min-1. Swank et al. [46] demonstrated that a 6% improvement in peak V̇O2 was associated with a 5% decrease in the risk of all-cause mortality in Congestive Heart Failure patients. Given the significantly lower peak aerobic capacity in patients, discrepancies arising from failing to account for the energy expenditure of turning, which could be as much as 20% of a patient's peak V̇O2, would considerably alter the interpretation of intervention efficacy. Future studies should seek to generate algorithms that account for distance and turns completed during a six-minute walk test, facilitating standardisation between centres.
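To make the turn-count arithmetic above concrete, the toy calculation below derives the approximate number of 180° turns from the course length and the total distance walked, and scales a per-turn oxygen cost by body mass. The reference distance, per-turn cost and body mass are assumed values chosen purely for illustration, so the printed figures will not reproduce the 118-296 ml·min-1 range reported above, which was derived from the present study's own regression data.

```python
# Illustrative arithmetic only (not the authors' calculation): how course
# length changes the number of 180-degree turns in a six-minute walk test,
# and how a hypothetical per-turn oxygen cost scales the additional VO2.
def six_minute_walk_turns(distance_m: float, course_m: float) -> int:
    """Approximate number of 180-degree turns: one turn per completed length."""
    return int(distance_m // course_m)

reference_distance = 600.0  # assumed distance walked in 6 min (m)
for course in (20.0, 30.0, 50.0):
    turns = six_minute_walk_turns(reference_distance, course)
    print(f"{course:.0f} m course -> ~{turns} turns")

# Additional oxygen cost: turns per minute x per-turn cost (ml/kg) x body mass (kg).
per_turn_cost = 0.13   # ml/kg per turn, assumed (cf. the Fig 2 values above)
body_mass = 75.0       # kg, assumed
turns = six_minute_walk_turns(reference_distance, 20.0)
extra_vo2 = (turns / 6.0) * per_turn_cost * body_mass
print(f"~{extra_vo2:.0f} ml/min additional VO2 on a 20 m course")
```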
There are certain limitations associated with the current study that should be acknowledged, such as the walking velocities utilised. Previous studies have employed higher running speeds, whereas we employed speeds more typical of habitual physical activity. Whilst this increased the generalisability of our findings to health contexts, caution should be taken when extrapolating these findings to a sporting context. Furthermore, although the controlled nature of the protocol was a strength that optimised the interpretation of our results, it limits ecological validity. Finally, although the walking speeds were associated with a moderate intensity of exercise for most of the participants, some may not have achieved a steady state V̇O2 within the 3-minute bout, thereby influencing the mean V̇O2 observed. In conclusion, the present study demonstrated a synergistic interaction between speed and angle in determining the energy expenditure associated with walking. Specifically, 90° and 180° turns are associated with significant additional metabolic costs at 4.5 km·hr-1 and above. These findings therefore highlight the importance of accounting for the quantity and magnitude of turns completed when estimating energy expenditure, and have significant implications within both sport and health contexts.
5,565
2017-08-10T00:00:00.000
[ "Physics" ]
Total and Phosphorylated Cerebrospinal Fluid Tau in the Differential Diagnosis of Sporadic Creutzfeldt-Jakob Disease and Rapidly Progressive Alzheimer's Disease Background: CSF total-tau (t-tau) became a standard cerebrospinal fluid biomarker in Alzheimer's disease (AD). In parallel, extremely elevated levels were observed in Creutzfeldt-Jakob disease (CJD). Therefore, tau is also considered as an alternative CJD biomarker, potentially complicating the interpretation of results. We investigated CSF t-tau and the t-tau/phosphorylated tau181 ratio in the differential diagnosis of sCJD and rapidly-progressive AD (rpAD). In addition, high t-tau concentrations and the associated tau ratios were explored in an unselected laboratory cohort. Methods: Retrospective analyses included n = 310 patients with CJD (n = 205), non-rpAD (n = 65), and rpAD (n = 40). The diagnostic accuracies of the biomarkers were calculated and compared. Differential diagnoses were evaluated in patients from a neurochemistry laboratory with CSF t-tau >1250 pg/mL (n = 199 out of 7036). Results: CSF t-tau showed an AUC of 0.942 in the discrimination of sCJD from AD and 0.918 in the discrimination from rpAD. The tau ratio showed significantly higher AUCs (p < 0.001) of 0.992 versus non-rpAD and 0.990 versus rpAD. In the neurochemistry cohort, prion diseases accounted for only 25% of very high CSF t-tau values. High tau ratios were observed in CJD, but also in non-neurodegenerative diseases. Conclusions: CSF t-tau is a reliable biomarker for sCJD, but false positive results may occur, especially in rpAD and acute encephalopathies. The t-tau/p-tau ratio may improve the diagnostic accuracy in centers where specific biomarkers are not available. Introduction Prion diseases are caused by the propagation and aggregation of the misfolded prion protein scrapie (PrPSc) in the brain [1]. Sporadic Creutzfeldt-Jakob disease (sCJD) is the most frequent form of human prion disease and accounts for around 90% of all cases, with an incidence of 1.5-2 per million person-years. It is clinically characterized by a rapidly-progressive encephalopathy, inevitably leading to death after a mean disease duration of 5-6 months [2]. The clinical phenotype is associated with distinct biochemical and morphological subtypes that are determined by the glycotype (type 1 or type 2) of the pathological prion protein (PrPSc) and by the polymorphism at Codon 129 of the prion protein gene (PRNP) involving valine (V) and methionine (M) [3]. The most common subtype is MM1, which represents "classical" sCJD with rapidly progressive dementia, cerebellar syndrome, myoclonus, and a very short disease duration. Other subtypes may show predominant movement dysfunction (MV2 and VV2) in early stages or a prolonged disease duration (MV2, MM2, VV1). For many years, the diagnosis of sCJD has been based on criteria that included EEG, elevated CSF proteins 14-3-3, and MRI as biomarkers to support a probable clinical diagnosis [4,5]. Recently, the real-time quaking-induced conversion (RT-QuIC) assay, which is able to detect PrPSc in CSF and other tissues with an excellent diagnostic accuracy, was included in revised consensus criteria [6]. Unfortunately, protein 14-3-3 and RT-QuIC analyses are usually only performed in specialized centers.
In this context, CSF total tau (t-tau), a microtubule-associated neuronal and glial protein [7], is considered as a valuable alternative biomarker with a good diagnostic accuracy [8] that might be improved by calculating a ratio with the phosphorylated tau181 protein (t-tau/p-tau ratio) [9,10]. However, the interpretation of test results is complicated by the lack of a unified cut-off for the diagnosis of sCJD. In addition, elevated CSF t-tau is also widely employed in the diagnostic process for Alzheimer's disease, indicating general neurodegeneration [11,12]. Although CSF t-tau values are much higher in sCJD than in AD, some studies reported that the discriminatory value versus clinically atypical AD may be reduced [13,14]. Further, highly elevated CSF t-tau concentrations were observed in patients with various non-neurodegenerative encephalopathies, such as acute ischemia, encephalitis, and after seizures [15]. The first aim of this study was to investigate the diagnostic accuracy of CSF t-tau in the differentiation of sCJD from AD and rapidly-progressive AD (rpAD), an AD subgroup that is defined by rapid cognitive decline [16] and altered biomarker profiles [17], and that potentially represents a disease entity with distinct beta-amyloid (abeta) strains [18]. In addition, we analyzed potential improvements of the diagnostic accuracy by calculating the t-tau/p-tau ratio. The second aim was to explore and describe the spectrum of differential diagnoses of patients with very high CSF t-tau values (above a pre-defined cut-off for sCJD) in a general neurochemistry laboratory cohort. Study Cohorts For this single-center study, a total number of n = 310 patients with sCJD (n = 205), non-rpAD (n = 65), and rpAD (n = 40) were included in the cohort for the evaluation of the diagnostic accuracy of t-tau, p-tau181, and the t-tau/p-tau ratio. Patients with sCJD were selected from a study of the National Reference Center for Transmissible Spongiform Encephalopathies (NRZ-TSE) on the epidemiology and biomarkers of prion diseases (ethical board number: 11/11/93). The selection criteria were the availability of the complete CSF tau biomarker dataset and neuropathological confirmation of definite CJD [4]. Patients with AD were selected from a prospective observational study on AD and rpAD (ethical board number: 6/9/08). The selection criteria were the availability of the complete CSF tau biomarker dataset, a clinical diagnosis of probable AD [11], and sufficient follow-up information to differentiate between non-rpAD and rpAD. Further, concomitant CNS pathologies, especially clinically relevant cerebrovascular disease, inflammatory CNS diseases, and other neurodegenerative diseases, were ruled out as far as possible based on the clinical syndrome and a complete diagnostic work-up, including CSF analyses and MRI. The rpAD group was defined by a loss of >5 points per year in each patient [16]. All analyzed CSF samples in both groups (AD and CJD) originated from lumbar punctures that had been performed during the diagnostic process (ante-mortem). The second cohort included patients from the general neurochemistry laboratory of the Göttingen University Medical Center. Cases were selected on the basis of the institution of treatment (only patients from the University Medical Center Göttingen were considered) and the availability of CSF t-tau data. Between 2004 and 2019, CSF t-tau was analyzed in n = 7036 patients.
Only patients above a previously defined CSF t-tau cut-off of >1250 pg/mL [19,20] were included for further evaluations (3%, n = 199). Diagnoses were evaluated based on information from the medical reports. All patients in both cohorts had given informed consent for the scientific evaluation of their anonymized data. Biochemical Analyses All CSF analyses were performed in the neurochemistry lab of the Göttingen University Medical Center during the diagnostic process, before conceptualization of this study; the technicians were blind to the final diagnosis. T-tau was measured using the INNOTEST hTAU Ag ELISA kit from Fujirebio. Tau phosphorylated at Thr181 was analyzed using the INNOTEST PHOSPHO-TAU (181P) ELISA kit from Fujirebio. Statistical Methods Multiple group comparisons were performed with univariate analyses of variance and Tukey HSD post hoc tests. The data were log-transformed to achieve normality. To calculate and demonstrate the discriminatory values of the biomarkers, Receiver Operating Characteristic (ROC) analyses were carried out. The area under the ROC curve (AUC) with the corresponding 95% confidence interval (95%CI) was considered as the measure of diagnostic accuracy. Optimal cut-offs were calculated using the Youden index. DeLong's tests [21] were performed to investigate the differences between the ROC curves of the biomarkers. In n = 7 sCJD cases, the ELISA kit had produced a t-tau value of >2200 pg/mL, and further dilution to determine an exact (higher) value was not performed. A value of 2200 pg/mL was assumed in these cases and used in statistical calculations to avoid favoring the hypothesis of higher values in sCJD. Statistical analyses were performed with Jamovi®, R®, and SPSS. Demographic Data and Biomarker Values in the Study Cohort The two major diagnostic groups showed similar age characteristics, with a median of 70 years (IQR 16.5) in AD and 68 years (IQR 14.5) in sCJD patients. Some sCJD subtypes showed younger age medians, especially MV2 (61 years, IQR 12) and VV1 (51 years, IQR 36.0). Regarding sex distribution, 56% of patients in the AD group and 46% of patients in the sCJD group were female. Interestingly, the sex distribution in the rpAD group differed substantially, with 68% of the patients being female (Table 1). The proteins 14-3-3 showed an intermediate (or "weak positive") Western blot signal in the CSF of four AD patients (7%), all of them belonging to the rpAD subgroup. CSF 14-3-3 was positive or intermediate in n = 188 patients from the sCJD group (92%). The median t-tau concentration was 4840 pg/mL (IQR: 6882.5) in sCJD patients and 546 pg/mL (IQR: 511) in AD patients. T-tau concentrations in sCJD and AD subgroups can be found in Table 1. Diagnostic Accuracy of CSF t-tau, p-tau181, and the t-tau/p-tau Ratio in the Study Cohort CSF t-tau discriminated sCJD from the whole AD group with an AUC of 0.942 (95%CI: 0.917-0.967) at an optimal cut-off of 1583 pg/mL. At this concentration, the sensitivity was 85% and the specificity 93%.
The AUCs in the differentiation of sCJD and non-rpAD (0.957, 95%CI: 0.934-0.979) and rpAD (0.918, 95%CI: 0.884-0.953) were also very high, but the optimal cut-offs differed to a rather great extent between non-rpAD (>990 pg/mL) and rpAD (>2045 pg/mL). CSF p-tau181 showed moderate to good diagnostic accuracy, with AUCs of 0.799 (95%CI: 0.748-0.849) vs. all AD, 0.776 (95%CI: 0.718-0.835) vs. non-rpAD, and 0.835 (95%CI: 0.761-0.090) vs. rpAD. The optimal cut-offs were <62 pg/mL vs. AD and non-rpAD, and <72 pg/mL vs. rpAD. The t-tau/p-tau ratio showed an excellent diagnostic accuracy in the discrimination of sCJD from AD as well as from all subgroups (each AUC ≥ 0.990), at similar cut-off values of >13 vs. AD and the non-rpAD subgroup and >14 vs. rpAD patients. Please see Table 2 for a summary of the data and Figure 2A-C for the corresponding ROC curves. Table 2. AUCs and best cut-offs of t-tau, p-tau181, and the t-tau/p-tau ratio. In a second step, we compared the obtained AUCs. Here, the t-tau/p-tau ratio performed significantly better than CSF t-tau alone in the discrimination of sCJD from non-rpAD (AUC difference: −0.036, 95%CI: −0.055 to −0.018, p < 0.001) and from rpAD (AUC difference: −0.071, 95%CI: −0.102 to −0.040, p < 0.001). In addition, we compared the AUCs of CSF t-tau in the discrimination of sCJD from non-rpAD and from rpAD. Although the AUC vs. rpAD was lower than vs. non-rpAD (AUC difference: −0.038, 95%CI: −0.080 to −0.003), the difference did not pass the significance threshold (p = 0.070). Regarding the t-tau/p-tau ratio, the AUC difference between non-rpAD and rpAD was marginal (AUC difference: −0.004, 95%CI: −0.016 to 0.008, p = 0.557). Please see Figure 2D for a summary of the data. Figure 2. (B) ROC curves in the discrimination of sCJD from non-rapidly-progressive Alzheimer's disease (non-rpAD), displaying t-tau (red line), p-tau181 (blue line), and the t-tau/p-tau ratio (green line). (C) ROC curves in the discrimination of sCJD from rapidly-progressive Alzheimer's disease (rpAD), displaying t-tau (red line), p-tau181 (blue line), and the t-tau/p-tau ratio (green line). (D) Comparison of the Areas Under the Curve (AUC) from the ROC analyses, with 95% confidence intervals (CI) and indication of significant differences. Diagnostic Accuracy in sCJD Subtypes Information on disease subtype, including PrPSc glycotype and Codon 129 PRNP genotype, was available in a subset of n = 106 patients. Regarding CSF t-tau, MM2C and MV2K sCJD showed lower concentrations than other subtypes (Table 1), in line with findings in the literature [22]. However, we did not statistically compare all biomarker values over all six observed groups because the case numbers, especially in MM2C, VV1, and mixed types, were very low. For the same reason, we concentrated evaluations of the diagnostic accuracy on the three most common subtypes: MM/MV1, VV2, and MV2K. CSF t-tau showed the best accuracy in the differentiation of MM/MV1 cases from AD cases at a cut-off of >2045 pg/mL (0.977, 95%CI: 0.944-1). In VV2 cases, CSF t-tau also showed high AUCs of >0.900 vs. all AD types, but in MV2K cases, especially vs. rpAD, the AUC was lower (0.792, 95%CI: 0.646-0.937). This was not the case for the t-tau/p-tau ratio, which showed AUCs > 0.985 in all sCJD subtypes vs. all AD types (Table 3).
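As an illustration of the ROC methodology used above (AUC, Youden-index cut-off, and the sensitivity/specificity at that cut-off), the sketch below runs the same style of analysis on simulated log-normal biomarker values. The group sizes mirror the study cohort, but the distribution parameters are assumptions, so the printed numbers are placeholders rather than the published results.

```python
# A minimal sketch (assumed data, not the study dataset) of the ROC analysis:
# AUC and Youden-index cut-off for CSF t-tau and for the t-tau/p-tau ratio.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
# Hypothetical log-normal biomarker values (pg/mL) for the two groups.
t_tau = np.concatenate([rng.lognormal(8.5, 0.8, 205),   # sCJD-like
                        rng.lognormal(6.3, 0.6, 105)])  # AD-like
p_tau = np.concatenate([rng.lognormal(4.2, 0.4, 205),
                        rng.lognormal(4.4, 0.4, 105)])
y = np.concatenate([np.ones(205), np.zeros(105)])       # 1 = sCJD, 0 = AD

for name, marker in [("t-tau", t_tau), ("t-tau/p-tau", t_tau / p_tau)]:
    auc = roc_auc_score(y, marker)
    fpr, tpr, thresholds = roc_curve(y, marker)
    youden = tpr - fpr                      # Youden index J at each cut-off
    best = np.argmax(youden)
    print(f"{name}: AUC={auc:.3f}, optimal cut-off >{thresholds[best]:.1f}, "
          f"sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")
```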
Exploration of High CSF t-Tau Values in a General Neurochemistry Laboratory The second part of the study evaluated the differential diagnoses in patients with high CSF t-tau values between 2004 and 2019 that had been referred to the Göttingen University Medical Center and analyzed in the institutional neurochemistry laboratory. Out of n = 7036 patients, we identified n = 199 with CSF t-tau >1250 pg/mL (3%). Their diagnoses and the associated numbers of cases are displayed in Figure 3A. About 25% of these patients were diagnosed with prion diseases. The second largest group were patients with AD (23%), followed by acute (stroke) and chronic vascular encephalopathy (16%), seizures (12%), inflammatory CNS disease (9%), and mixed neurodegenerative dementia in n = 12 (6%). Other conditions were present in 7% of the cases, and 3% of cases were unclassified according to the available clinical data. In cases with CSF t-tau ≥ 2200 pg/mL, prion diseases accounted for 41% of the cases, whereas the frequency of AD (7%) and MD (1%) was substantially lower (Figure 3B). Regarding the t-tau/p-tau ratio, data was available in n = 170 cases. As this cohort was preselected by a minimum CSF t-tau of 1250 pg/mL, and many t-tau concentrations above the maximum laboratory standard of 2200 pg/mL were present, we did not statistically analyze the data with group comparisons and ROC curves. Instead, we describe the distribution of the t-tau/p-tau medians over the diagnostic groups. Here, prion diseases showed the highest median (40.56) and AD the lowest (9.80). Non-neurodegenerative conditions such as vascular events (24.69), seizures (23.44), and inflammatory CNS diseases (35.06) showed t-tau/p-tau values higher than AD and lower than prion diseases (Figure 4). Discussion The results of our investigation validate the good diagnostic accuracy of CSF t-tau in the differential diagnosis of sCJD in the context of AD and rpAD. However, the optimal cut-off (>1583 pg/mL versus AD) was higher than what can be found in the literature. In previous publications, CSF t-tau cut-offs between >1072 pg/mL [23] and >1400 pg/mL [24] were identified using the heterogeneous control groups of non-CJD neurologic diseases or rapidly-progressive dementias of different etiologies [8,9,19,25,26]. Studies that used AD patients as controls showed differing results. AUCs varied between 0.93 with a cut-off of >2131 pg/mL [27] and 0.78 with a cut-off of >1200 pg/mL [14]. Another study showed a relatively high AUC of 0.92 at a rather low cut-off of >1128 pg/mL [13].
This study used only "typical AD" cases for the evaluation, and the results were similar to those from our subgroup analysis of non-rpAD cases (AUC 0.96, cut-off: >990 pg/mL). Lower diagnostic accuracies were reported when atypical forms of AD were included or focused on [13,14]. In our study, we defined rpAD by pre-existing criteria [16,28] as a distinct AD subgroup in the biomarker analyses. We partially validated previous observations that investigated so-called atypical AD, and showed that the AUC of CSF t-tau for the discrimination of sCJD was lower vs. rpAD than vs. non-rpAD (AUC difference: −0.038). However, the p-value from DeLong's test (0.070) stayed above the pre-defined threshold for statistical significance. In addition, the optimal cut-off to discriminate the rpAD group (>2045 pg/mL) was substantially higher. It was shown before that rpAD may be characterized by a distinct biomarker profile [17] and that a faster disease progression goes along with higher values of biomarkers of neuronal damage [29]. The latter has also been shown for sCJD [30]. Nonetheless, the difference between the diagnostic accuracy in non-rpAD and rpAD in our study was not as clear as the difference between typical and atypical AD in the aforementioned studies. A potential reason may be the selection and definition of the AD group. Whereas many studies investigated the diagnostic accuracy of CSF tau using patients with AD that had initially been suspected as sCJD, we analyzed CSF from rpAD patients that were part of an independent study on AD, reflecting the spectrum of AD in a non-specialized center. We could not identify a significant elevation of CSF t-tau in rpAD compared to non-rpAD patients (p = 0.096), but we observed significantly higher p-tau181 values in the rpAD group (p = 0.015). This comparison was not a main objective of the study, but the results are important and match findings in atypical AD [13,14]. However, unlike other studies [17], we could not find significant differences between the t-tau/p-tau ratios in rpAD and non-rpAD patients (p = 0.999). The t-tau/p-tau ratio is a major improvement over the use of CSF t-tau alone in the differential diagnosis of sCJD, as demonstrated by several studies [9,10,26,31-33]. Here, we validate those findings and were able to show significantly higher AUCs vs. non-rpAD as well as rpAD (p < 0.001). Most importantly, there is only a marginal and non-significant difference between the AUCs for the discrimination of sCJD from non-rpAD and from rpAD (p = 0.557). This indicates that the t-tau/p-tau ratio may be robust and less susceptible to false positive results in AD patients with very high CSF t-tau values.
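DeLong's test, used above to compare AUCs, has no implementation in scipy or scikit-learn. As a hedged stand-in, the sketch below compares the two correlated AUCs (t-tau versus the t-tau/p-tau ratio, measured in the same patients) with a paired patient-level bootstrap, which answers the same question by a different route; all values are simulated placeholders.

```python
# Paired bootstrap comparison of two correlated AUCs (a stand-in for
# DeLong's test; not the study data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = np.concatenate([np.ones(205), np.zeros(105)])          # 1 = sCJD, 0 = AD
t_tau = np.where(y == 1, rng.lognormal(8.5, 0.8, y.size),
                 rng.lognormal(6.3, 0.6, y.size))
p_tau = np.where(y == 1, rng.lognormal(4.2, 0.4, y.size),
                 rng.lognormal(4.4, 0.4, y.size))
ratio = t_tau / p_tau

n, n_boot = y.size, 2000
diffs = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n, n)                            # resample patients
    if np.unique(y[idx]).size < 2:                         # need both classes
        idx = np.arange(n)
    diffs[i] = roc_auc_score(y[idx], ratio[idx]) - roc_auc_score(y[idx], t_tau[idx])

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC(ratio) - AUC(t-tau): {diffs.mean():+.3f} (95% CI {lo:+.3f} to {hi:+.3f})")
```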
In addition, we could show that the good diagnostic performance remains constant not only over the AD groups, but also across the different sCJD subtypes. Whereas the AUCs of t-tau were rather low in the discrimination of rpAD and MV2 (0.792) and very high in MM/MV1 versus non-rpAD (0.979), the AUCs of the t-tau/p-tau ratio showed values > 0.980 in all ROC analyses (Table 3). Very high CSF t-tau values may not only occur in sCJD and a proportion of AD patients. It was shown that CSF t-tau is not markedly elevated in other neurodegenerative dementias [34], but, as a general marker of neuro-axonal damage, very high concentrations were observed after cerebral ischemia [35], hemorrhage, and seizures [36], as well as in encephalitis and other conditions [15]. In the cohort from the general neurochemistry laboratory, only about half of the patients with very high CSF t-tau values (>1250 pg/mL) were diagnosed with prion diseases (25%), AD (23%), or other neurodegenerative diseases (mixed or single pathologies: 7%). Studies with similar approaches have reported that the majority of patients with CSF t-tau >1000 pg/mL [37] and >1200 pg/mL [38] were diagnosed with AD (73% and 51%, respectively), followed by CJD. A potential reason for this discrepancy may be the slightly higher cut-off used in this study and the fact that the German NRZ-TSE is located at Göttingen University. The frequency of patients with prion diseases is higher in this institution due to second-opinion referrals. Unlike in AD and other secondary tauopathies [39], elevated t-tau may not go along with elevated p-tau181 in non-neurodegenerative encephalopathies [35]. This may potentially be a reason for a lower diagnostic accuracy in some CJD mimics. In our exploratory evaluation of patients with CSF t-tau >1250 pg/mL, values of the t-tau/p-tau ratio in AD patients were apparently lower than in inflammatory and vascular encephalopathies. These conditions showed a huge overlap with prion diseases (Figure 3B). This is of high importance, because inflammatory and neurovascular diseases belong to the most common differential diagnoses of sCJD and rapidly-progressive dementia [40]. The strengths of this study include the use of a well-characterized "real-life" AD control group with a rather impartial criterion for the definition of rpAD [16], the consideration of different subgroups in AD as well as in sCJD, and a high case number in the sCJD group. Focusing on only one disease as the differential diagnosis and on one CSF biomarker category (14-3-3 data was not available for the AD cohort) was also a limitation of the study. Unfortunately, the retrospective study design did not allow comparative evaluations with the 14-3-3 ELISA, beta-amyloid 1-42, or recent biomarker candidates for CJD such as neurofilament light chain, alpha-synuclein, or soluble triggering receptor expressed on myeloid cells 2 (TREM2). Future investigations should also consider potential improvements of the diagnostic accuracy by combinations of different biomarkers, as well as the consideration of clinical (e.g., disease stage) and demographic factors [41]. The lack of valid comparative data on the t-tau/p-tau ratio in the laboratory cohort is another important limitation. It emphasizes the need to take non-neurodegenerative dementia etiologies into account when performing future evaluations of the diagnostic accuracy of the t-tau/p-tau ratio and other biomarkers for prion diseases.
The current focus of biomarker research in CJD, AD, and other dementias lies in blood-based analyses, and the utility of plasma t-tau for the differential diagnosis of sCJD has already been validated [42]. In this context, an investigation of potential improvements through the application of a plasma t-tau/p-tau ratio, as well as other promising tau markers such as non-phosphorylated tau [43] and p-tau217 [44], should be considered in future research. Conclusions CSF t-tau is a valuable alternative biomarker for sCJD when specific tests such as RT-QuIC are not available. However, very high t-tau values may occur in other diseases, especially in rpAD and in the non-neurodegenerative etiologies of rapidly-progressive dementia. The t-tau/p-tau ratio is able to significantly improve the diagnostic accuracy for the discrimination of sCJD from AD and rpAD, but its utility in the context of ischemic and inflammatory encephalopathies has to be explored further. Although we and others reported excellent accuracies of CSF t-tau and the t-tau/p-tau ratio, the predictive values of these biomarkers are mainly determined by the extremely low prevalence of sCJD. Thus, CSF t-tau should not be used as a general screening test for sCJD, and incidental findings of very high concentrations in the diagnostic process of suspected AD, as well as of non-neurodegenerative encephalopathies, have to be interpreted with caution. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: J.W. has been an honorary speaker for Actelion, Amgen, Beijing Yibai Science and Technology Ltd., Janssen Cilag, Med Update GmbH, Pfizer, and Roche Pharma; has been a member of the advisory boards of Abbott, Biogen, Boehringer Ingelheim, Lilly, MSD Sharp and Dohme, and Roche Pharma; and receives fees as a consultant for Immunogenetics and Roboscreen. All other authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
5,642
2022-01-28T00:00:00.000
[ "Biology", "Medicine" ]
Understanding a Transcriptional Paradigm at the Molecular Level In yeast, the GAL genes encode the enzymes required for normal galactose metabolism. Regulation of these genes in response to the organism being challenged with galactose has served as a paradigm for eukaryotic transcriptional control over the last 50 years. Three proteins, the activator Gal4p, the repressor Gal80p, and the ligand sensor Gal3p, control the switch between inert and active gene expression. Gal80p, the focus of this investigation, plays a pivotal role both in terms of repressing the activity of Gal4p and allowing the GAL switch to respond to galactose. Here we present the three-dimensional structure of Gal80p from Kluyveromyces lactis and show that it is structurally homologous to glucose-fructose oxidoreductase, an enzyme in the sorbitol-gluconate pathway. Our results clearly define the overall tertiary and quaternary structure of Gal80p and suggest that Gal4p and Gal3p bind to Gal80p at distinct but overlapping sites. In addition to providing a molecular basis for previous biochemical and genetic studies, our structure demonstrates that much of the enzymatic scaffold of the oxidoreductase has been maintained in Gal80p, but it is utilized in a very different manner to facilitate transcriptional regulation. The GAL genetic switch (supplemental Fig. S1) in both the bakers' yeast Saccharomyces cerevisiae and the milk yeast Kluyveromyces lactis is composed of an activator (Gal4p), an inhibitor (Gal80p), and a ligand sensor (Gal3p in S. cerevisiae or Gal1p in K. lactis). These proteins in the two yeasts are, at least in part, interchangeable. For example, Gal4p from both S. cerevisiae (ScGal4p) and K. lactis (KlGal4p) will complement a gal4 mutation in either yeast (1,2) despite the two proteins sharing comparatively little overall sequence similarity (28% amino acid identity and 57% similarity over their entire length). The Gal80p proteins from the two yeasts are highly related (58% amino acid identity and 82% similarity) and will inhibit the transcriptional activity of either version of Gal4p. However, while KlGal1p can complement both a Scgal1 (galactokinase-defective) and a Scgal3 (ligand sensor-defective) mutation (3), ScGal3p cannot complement the non-inducible phenotype of a Klgal1 deletion mutant unless the KlGAL80 gene is also replaced by ScGAL80 (4). Studies have suggested that there are differences in the cellular locations of the three GAL regulatory proteins in the two yeasts and consequently potential differences in the mechanism of transcriptional activation. In both cases, Gal4p is presumed to be nuclear, while ScGal80p can be found in both the nucleus and the cytoplasm and is capable of shuttling between the two. ScGal3p is predominantly, if not exclusively, cytoplasmic (5,6). Recently, KlGal80p has been identified as an exclusively nuclear protein while KlGal1p is present both in the nucleus and the cytoplasm (7). On the basis of these and other data, it has been suggested that the S. cerevisiae GAL switch is activated when galactose and ATP bind to Gal3p in the cytoplasm. This traps Gal80p in the cytoplasm, thereby freeing Gal4p from its repressing effects and allowing transcriptional activation to occur (8). In K. lactis, the GAL switch appears to be controlled by competition in the nucleus between KlGal1p and KlGal4p for KlGal80p binding (7). In either model, Gal80p must interact with two very different proteins: Gal4p, via its carboxyl-terminal activation domain, and Gal3p (Gal1p in K.
lactis) when galactose and ATP are bound to the ligand sensor. Here, we describe the molecular structure of KlGal80p and propose a model for its interaction with the activator, Gal4p, and with the ligand sensor, Gal1p. EXPERIMENTAL PROCEDURES X-ray Structural Analysis-KlGal80p was cloned, expressed, and purified according to standard procedures (described in the supplemental "Experimental Procedures"). The protein utilized for crystallization contained a His-tag at the NH2 terminus with the following sequence: MGSSHHHHHHSSENLYFQGH. Crystals of both the wild-type and selenomethionine-labeled protein were obtained from 18 to 22% 2-methyl-2,4-pentanediol and 100 mM MES (pH 6.25) at 4 °C. They belonged to the space group P2₁2₁2 with unit cell dimensions of a = 112.2 Å, b = 137.1 Å, and c = 72.9 Å. The asymmetric unit contained one dimer. X-ray data from flash-cooled crystals were collected at the Structural Biology Center Beamline 19-BM (Advanced Photon Source, Argonne National Laboratory, Argonne, IL). These data were processed and scaled with HKL2000. The software package SOLVE (9) was used to locate the positions of 10 selenium atoms and to generate initial protein phases. Solvent flattening and averaging with RESOLVE (10) resulted in an interpretable electron density map for the selenomethionine-labeled protein at 3.0 Å resolution. The model based on this electron density map was subsequently used to refine against x-ray data collected from a wild-type crystal to 2.1 Å resolution with the software package TNT (11). X-ray data collection and model refinement statistics are presented in supplemental Tables S1 and S2, respectively. Light Scattering-Protein necessary for light scattering experiments was prepared according to standard procedures (described in the supplemental "Experimental Procedures"). Protein samples (200 µl at 20 °C containing 3.5 nM of each protein) were run on a Superdex 200 10/300 GL column (Amersham Biosciences) equilibrated and run (at 0.7 ml/min) in buffer containing 20 mM Tris-HCl (pH 8.0), 200 mM NaCl. Protein was detected using a DAWN EOS 18-angle laser photometer coupled to an Optilab rEX refractive index detector with a Wyatt QELS system. For the interaction between KlGal1p and KlGal80p, samples additionally contained 100 mM galactose and 2.5 mM ADP. RESULTS AND DISCUSSION The x-ray structure of KlGal80p was solved to 2.1 Å resolution and refined to an overall Rwork/Rfree of 19.5%/24.5%. Overall, the electron density for the protein was well ordered except for two short loop regions (Asp-245 to Gly-248 and Asp-309 to Ser-316 in Subunit I and Asn-247 to Arg-250 and Gly-311 to Ser-316 in Subunit II) and two larger regions that are located on the same side of the molecule (Gly-328 to Glu-362 and Leu-394 to Lys-413 in Subunit I and Gly-328 to Glu-361 and Gly-395 to Lys-413 in Subunit II). The Gal80p subunit adopts a two-domain architecture (Fig. 1a). The NH2-terminal domain is formed by Ala-1 to Leu-151 and consists of a classical Rossmann fold with six strands of parallel β-sheet flanked on one side by two α-helices and by three on the other. An additional α-helix from the COOH-terminal domain (Ser-376 to Phe-393) completes the helix/sheet packing of the Rossmann fold (Fig. 1a). There is a decided β-bulge in the second strand of the dinucleotide-binding motif at Val-50. A cis-alanine at position 124 lies in the random coil region connecting the fifth β-strand to the fifth α-helix of the Rossmann fold.
The COOH-terminal domain of Gal80p (Gln-152 to Ile-457) is dominated by a nine-stranded mixed β-sheet with an overall width of ~37 Å. This domain also contains a small three-stranded mixed β-sheet, four α-helices, and a helical turn (Fig. 1a). Gal80p packs in the crystalline lattice as a dimer with overall dimensions of ~55 × 75 × 110 Å (Fig. 1b). From light scattering experiments, we have demonstrated that KlGal80p in solution is exclusively dimeric, even in the absence of EDTA, as opposed to what was previously reported (7) (Table S3). The large mixed β-sheet of the COOH-terminal domain is intimately involved in the formation of the dimer, whereas the Rossmann fold motifs are distributed on opposite sides of the protein. The subunit:subunit interface is extensive, with a total buried surface area of 4400 Å². A search with DALI (12) reveals that the closest structural relatives of Gal80p include Zymomonas mobilis glucose-fructose oxidoreductase (13), rat biliverdin reductase (14), and Leuconostoc mesenteroides glucose-6-phosphate dehydrogenase (15). A comparison of the α-carbon traces for Gal80p and Z. mobilis oxidoreductase (Fig. 2a) demonstrates that these two proteins superimpose with a root-mean-square deviation of 2.0 Å for 307 structurally equivalent α-carbons, which is remarkable given their limited amino acid sequence identity of ~13% (16). It has been postulated that Lys-129 and Tyr-217 are involved in the catalytic mechanism of the oxidoreductase (13). These residues correspond to Trp-123 and His-214 in KlGal80p. In addition to the above-mentioned enzymes of known function, there are several putative oxidoreductases whose models have been deposited in the Research Collaboratory for Structural Bioinformatics (RCSB) Protein Data Bank with similar three-dimensional architectures. Most of these proteins have a cis-proline in the same position as the cis-alanine (Ala-124) in Gal80p. In those proteins with bound NAD(P), the cis-proline is typically preceded by a lysine residue whose ε-nitrogen forms a hydrogen bonding interaction with the 2′-hydroxyl of the nicotinamide ribose. There are other examples, such as porcine malate dehydrogenase (17), where the cis-proline is preceded by an asparagine residue whose side chain, again, interacts with the nicotinamide ribose. In Gal80p the corresponding residue is a tryptophan, which precludes a similar interaction. One of the characteristic signature sequences for a Rossmann fold is Gly-X-Gly-X-X-Gly/Ala, which connects the first β-strand to the first (or dinucleotide-binding) helix. It is the second glycine of this sequence that packs against the phosphoryl groups of the NAD(P). A superposition of these regions in the seven closest structural relatives of Gal80p is presented in Fig. 2b. In Gal80p, there is a three-residue insertion in this loop, which results in a markedly different conformation. Indeed, in Gal80p the second glycine of the signature sequence is replaced with Thr-26. Given that the side chain of Thr-26 is too bulky to lie against the phosphoryl groups of NAD(P), and that the typical lysine residue that hydrogen bonds to the nicotinamide ribose is replaced with Trp-123 in Gal80p, it is unlikely that Gal80p binds NAD(P), at least in the orientation observed in other family members. Several recent and insightful mutational analyses of ScGal80p have been performed (16,18). In one of these studies, variants of the protein from S. cerevisiae were uncovered that were defective only in Gal4p or in Gal3p binding (16).
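Stepping back to the structural comparison above: a superposition statistic such as "2.0 Å over 307 structurally equivalent α-carbons" can be reproduced generically with a Kabsch alignment. The sketch below is not the authors' software (the structural search itself used DALI); it applies the standard SVD-based algorithm to synthetic coordinates that stand in for matched α-carbon pairs.

```python
# Generic Kabsch superposition and RMSD for two matched coordinate sets.
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD of two (N, 3) coordinate sets after optimal rigid superposition."""
    P = P - P.mean(axis=0)                     # center both sets
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)          # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Synthetic stand-in for 307 matched alpha-carbons: rotate and add 0.5 A noise.
rng = np.random.default_rng(1)
ca_a = rng.normal(size=(307, 3)) * 20.0
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(q) < 0:                       # make it a proper rotation
    q = -q
ca_b = ca_a @ q.T + rng.normal(scale=0.5, size=(307, 3))
print(f"RMSD after superposition: {kabsch_rmsd(ca_a, ca_b):.2f} A")
```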
Given the amino acid sequence homology between the S. cerevisiae and K. lactis proteins, we have mapped these mutations onto the KlGal80p structure (Fig. 3a). Those mutations that give rise to defective Gal4p binding to ScGal80p and that are visible in the KlGal80p model are located at positions Gly-153, Gly-184, Arg-190, Asp-261, His-262, Gly-283, and Leu-320. Most of these are located at the subunit:subunit interface. The mutations at Gly-153 and Gly-184, however, are particularly interesting because they are separated by ~17 Å and lie on either side of a large cleft formed by the COOH-terminal end of the β-sheet in the Rossmann fold and an α-helix defined by Ser-211 to Ile-222. Two additional mutations have been identified in ScGal80p that result in defective Gal4p binding: A309T and G310D (16). The corresponding amino acids in KlGal80p, namely Ala-310 and Gly-311, reside in a disordered surface loop of six residues that connects two anti-parallel β-strands and that is situated at the top of the cleft (Fig. 3b). In glucose-fructose oxidoreductase there is a three-residue deletion in the loop, which folds in toward the nicotinamide ring of the NADP. Indeed, it is this cleft region that in glucose-fructose oxidoreductase and similarly related enzymes harbors both the NADP and the catalytic machinery. Strikingly, the cleft in the oxidoreductase is not nearly as wide as that observed in Gal80p. As an example, the α-carbons of Lys-41 and Ala-196 of the oxidoreductase are separated by ~6 Å, and the side chain of Asp-194 forms a salt bridge with Lys-41, which further closes off the cleft. In Gal80p that gap is much wider, with the α-carbons of Trp-31 and Ser-194, for example, being ~14 Å apart. Additionally, there are no apparent salt bridges to close the gap. Given the location of the Gal4p-only defective mutants in Gal80p and the three-dimensional characteristics of the cleft, we would suggest that this region forms the binding site for Gal4p. There has been considerable speculation concerning the structure of the activation domain of Gal4p, which is known to be coincident with the Gal80p interaction site (19-21). Experiments have demonstrated that in ScGal4p, the carboxyl-terminal 30 residues are recognized by Gal80p (22). Some structural predictions have suggested that these residues lie in an α-helix, and it was inferred that the activation domain of Gal4p, and hence the region that interacts with Gal80p, may be helical. Other studies, in marked contrast, have suggested that the activation domain of Gal4p is β-sheet at low pH and is essentially unstructured at physiological pH (23). Using current structure prediction algorithms for peptides, the corresponding residues in KlGal4p (amino acids 836-865: TNNFLNPSTQQLFNTTTMDDVYNYIFDNDE) are predicted to have α-helical character. Employing an α-helix in Gal4p for binding to Gal80p makes structural and chemical sense. In an α-helix, the hydrogen bonding capacity of the backbone carbonyl groups and amide nitrogens is mostly satisfied. On the other hand, in a β-hairpin motif, for example, the backbone hydrogen bonding pattern would not be completely satisfied if it were to bind into the type of cleft observed in Gal80p, which is devoid of β-sheet. On the basis of both the secondary structural predictions and the nature of the putative Gal80p binding cleft, we predict that the COOH-terminal 30 residues of KlGal4p most likely bind into the Gal80p cleft (Fig. 3b) as an α-helix.
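As a rough, sequence-only cross-check of the helical tendency discussed above, the sketch below applies Biopython's amino-acid-class heuristic to the quoted KlGal4p peptide. This is a crude stand-in, not the structure prediction algorithms the authors refer to; a dedicated predictor such as PSIPRED would be the appropriate tool for a real analysis.

```python
# Crude helix/turn/sheet tendency of the KlGal4p peptide (illustrative only).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

peptide = "TNNFLNPSTQQLFNTTTMDDVYNYIFDNDE"  # KlGal4p residues 836-865, from the text
analysis = ProteinAnalysis(peptide)
helix, turn, sheet = analysis.secondary_structure_fraction()
print(f"helix-favouring fraction: {helix:.2f}, turn: {turn:.2f}, sheet: {sheet:.2f}")
```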
Co-crystallization experiments with Gal80p and a peptide representing the COOH terminus of KlGal4p are in progress to address this issue. The mutations in ScGal80p that are defective in only Gal3p binding are clustered and correspond to Gly-302, Gly-324, Glu-367, and Val-368 in KlGal80p. These mutations map to the structure at the edge of the mixed β-sheet in the COOH-terminal domain and toward the surface (Fig. 3, a and b). Importantly, these residues are located near the large disordered region between Gly-328 and Glu-362. We propose that these residues mark the binding surface for KlGal1p and that the disordered region in Gal80p becomes ordered upon complexation with the ligand sensor. Note that both of the proposed binding sites for Gal4p and Gal1p (or ScGal3p) are on the same side of the Gal80p dimer (Fig. 3b), and this is in keeping with the recent and elegant experiments of Anders et al. (7), which suggest that the binding sites on Gal80p for Gal1p or Gal4p are overlapping. Both the crystal structure (Fig. 1b) and biochemical assays in solution (Fig. 4) indicate that KlGal80p is dimeric. The complex between KlGal80p and Gal4p (Fig. 4a) is most easily interpreted as dimers of each protein interacting with one another. From our model, we predict that the activation domains of Gal4p pack into the Gal80p clefts (Fig. 4c). Our data, in contrast to those previously published (7), suggest that KlGal1p may exist in both a monomeric and a dimeric state (Fig. 4b). [Figure 4 legend: the molar masses (Table S3) were determined on the basis of the light scattering data. The proposed compositions of the complexes are indicated by the color-coded schematics above each of the peaks. The SDS-PAGE of each collected fraction is also shown. a, a version of ScGal4p, comprising the DNA binding and dimerization domains (amino acids 1-93) fused to the activation and Gal80p interaction domain (amino acids 768-881), was used in conjunction with KlGal80p. The size of the complex between the two suggests a 2:2 stoichiometry. b, the interaction between KlGal80p and KlGal1p in the presence of both galactose and ADP suggests that a monomer of Gal1p is capable of interacting with the Gal80p dimer. c, a model for the interactions of Gal80p. KlGal80p is exclusively dimeric and interacts both with Gal4p and with Gal1p in this state. For its interaction with Gal4p, the dimer of KlGal80p interacts with a dimer of Gal4p. We predict that the extreme carboxyl-terminal ends of Gal4p fit into the groove in the Gal80p structure. To interact with Gal80p, KlGal1p requires the presence of both galactose and ATP and may be either monomeric (7) or dimeric. The effects of these interactions on the expression of the GAL genes are indicated.] However, the interaction between KlGal80p and KlGal1p would appear to occur via a 2:1 complex, although it is possible that, at high concentration, a dimer of Gal80p interacts with two molecules of Gal1p (7). The proposed binding sites for both Gal4p and Gal1p in KlGal80p are distinct but are located on the same side of the molecule. We believe that it is therefore possible that the binding of one partner excludes the binding of the other. It has become increasingly apparent in recent years that the molecular scaffolds employed by enzymes of sugar metabolism are ideally suited to function as transcriptional regulators as well. The structure of Gal80p, even with its limited amino acid sequence homology, is remarkably similar to glucose-fructose oxidoreductase. Likewise, Gal3p, the transcriptional inducer in S.
cerevisiae, demonstrates a ~90% sequence similarity to Gal1p, the bona fide galactokinase of the Leloir pathway that can, itself, function as a ligand sensor (24-30). The molecular architecture of UDP-galactose 4-epimerase, another enzyme in galactose metabolism (31), has now been observed in NmrA, a negative transcriptional regulator in Aspergillus nidulans (32), and in TIP30/CC3, a putative metastasis suppressor that promotes apoptosis (33). More than likely, other examples will be found where enzymatic scaffolds have been hijacked to serve in transcriptional regulation roles. The control of expression of the yeast GAL genes has been analyzed at the genetic and biochemical level for nearly 50 years. Indeed, the ability of an organism to respond to a variety of external conditions and signals at the transcriptional level is a fundamental cellular property. The structure presented here provides new molecular details regarding the GAL transcriptional switch, which, to date, is the best understood system for eukaryotic transcriptional control.
4,072.6
2007-01-19T00:00:00.000
[ "Biology" ]
Changes in the Trunk and Lower Extremity Kinematics Due to Fatigue Can Predispose to Chronic Injuries in Cycling Kinematic analysis of the cycling position is a determining factor in injury prevention and optimal performance. Fatigue caused by high volume training can alter the kinematics of the lower body and spinal structures, thus increasing the risk of chronic injury. However, very few studies have established relationships between fatigue and postural change, and those available are limited to 2D analyses or incremental intensity protocols. Therefore, this study aimed to perform a 3D kinematic analysis of pedaling technique during a stable power fatigue protocol. Twenty-three amateur cyclists (28.3 ± 8.4 years) participated in this study. For this purpose, 3D kinematics of the hip, knee, ankle, and lumbar joints, and of the thorax and pelvis, were collected at three separate times during the protocol. Kinematic differences at the beginning, middle, and end of the protocol were analyzed for all joints using one-dimensional statistical parametric mapping. Significant differences (p < 0.05) were found in all the joints studied, but not all of them occurred in the same planes or in the same phase of the cycle. Some of the changes produced, such as greater lumbar and thoracic flexion, greater thoracic and pelvic tilt, or greater hip adduction, could lead to chronic knee and lumbar injuries. Therefore, bike fitting protocols should be carried out in fatigue situations to detect risk factors. Introduction The kinematic analysis of a cyclist's position during pedaling has become one of the most studied fields in cycling [1]. The influence of the position adopted by the cyclist on sports performance has been investigated, either through the power produced [2,3] or through the aerodynamic position [4,5]. In addition, the lower body and trunk kinematics during pedaling have been analyzed [6,7] to detect different risk factors and reduce the most prevalent injuries in cycling [8]. The high training volume and intensity inherent to competition in cycling can generate high levels of fatigue in cyclists, triggering modifications in the kinematics of the lower body and trunk, which are the structures responsible for bicycle propulsion [9]. Kinematic changes cause alterations in the magnitude and direction of the forces applied on the pedal and, consequently, a loss of performance [10,11] and a higher probability of cyclist injury [9]. Especially relevant are the kinematic variations caused by fatigue in the rachis structures [12], since they are the main torso stabilizers during pedaling [13]. Keeping the rachis in a flexed position for long periods of time seems to be an influential factor in the occurrence of such a prevalent injury in cycling as low back pain (LBP) [14]. Moreover, a common strategy used to study pedaling kinematics has been to collect data within the 2D sagittal plane [1]. One primary problem with these analyses is that the frontal plane is mostly omitted [15], even though a combination of 3D kinematics including the frontal and transverse planes would provide information about the context surrounding the forces applied on the pedals. According to Pouliquen et al. [16], in order to maintain symmetric pedaling within the sagittal plane in fatigued situations, compensatory movements occur in other planes, which can lead to an increased risk of knee overuse injuries [17].
Therefore, it becomes essential to monitor these planes to understand more accurately the kinematic changes due to fatigue and the associated risk of injury. Some of those movements, such as a greater knee displacement (projection) toward the medial axis or a greater ankle flexion, which would generate a change in knee motion, may be related to knee injuries [18,19]. Knee injuries have received special attention in the literature in an attempt to prevent them, due to the higher number of injuries in this joint [20]. In this regard, to the best of our knowledge, only three studies have focused on 3D motion during a fatigue protocol [11,12,16]. Pouliquen et al. [16] studied lower body kinematics in 12 elite cyclists in an incremental test. Moreover, Sayers et al. [11] and Sayers and Tweddle [12] recorded lower body and trunk kinematics in 10 amateur cyclists for one hour at a constant intensity (88% onset of blood lactate accumulation). All three studies showed significant changes in the main joints involved in propulsion in cycling (hip, knee, and ankle). Another limitation of the literature, along with the paucity of studies, is that the studies that have analyzed the effect of fatigue on cycling kinematics have focused on the use of incremental tests to exhaustion [10,16,21]. In this regard, the changes in pedaling technique may be a consequence of an adaptation to applying greater power output [22]. Consequently, making use of maximal fatigue protocols of maintained intensity, such as the functional threshold power (FTP) test [23], seems to be a good alternative, since the possible kinematic changes will come exclusively from accumulated fatigue and not from the combination of fatigue and intensity as in incremental tests [21]. Finally, most of the kinematic analyses (bike fitting) that are currently carried out on cyclists are performed in short protocols that do not induce fatigue, or they are performed on only one side of the body, despite the fact that bilateral asymmetries are common in cycling [24]. In addition, the studies focused on analyzing cycling kinematics are usually based on the paradigm of extracting information only from some discrete points of the pedaling cycle, commonly the lower and anterior positions of the crank [25,26]. However, pedaling is a cyclic and continuous action, and therefore this approach is clearly limited when comparing pedaling conditions. Analyses such as statistical parametric mapping (SPM) can detect differences in whole curves rather than just discrete points [27], allowing for a better understanding of the differences between conditions. Hence, the objective of this study was to perform a continuous kinematic analysis of the pedaling technique in amateur cyclists during a maximal test of maintained intensity, in order to observe the kinematic variations in the spinal and lower body joints due to fatigue. Participants The sample consisted of 23 amateur cyclists, 16 men (27.4 ± 9.6 years old, 1.77 ± 0.08 m, 72.0 ± 8.2 kg) and 7 women (33.3 ± 8.0 years old, 1.63 ± 0.05 m, 59.7 ± 8.6 kg), with at least three years of cycling experience (8.1 ± 4.7 years) and at least an average practice of 6 h/week (587 ± 220 min). In order to be included in the study, cyclists had to present no pain or pathologies that could modify the movement pattern in the six months prior to the study.
All participants signed informed consent before their collaboration, based on the recommendations of the Helsinki declaration and approved by the institution's Office of Responsible Research. Procedure A three-dimensional motion capture system consisting of seven T10 cameras and a Vero 2.2 (Vicon MX, Vicon Motion Systems Ltd., Oxford, UK), operating at 200 Hz, was used for the kinematic analysis. To monitor pedaling conditions, the Wahoo KICKR Power Trainer potentiometer roller was used, with the Blue SC Wahoo cadence and speed sensor, validated by Zadow et al. [28], and the Wahoo® Fitness: Workout Tracker app on an Android system. A lower limb and trunk model consisting of 45 markers was used. External reflective markers were placed on the L1, T6, and C7 vertebrae, both acromions, the anterosuperior iliac spines, posterosuperior iliac spines, lateral and medial condyles of the femur, external and internal malleoli, calcaneus (lower and upper part), heads of the first and fifth metatarsals, and toe. Additionally, technical markers were placed on the lateral end of the iliac crests, and four marker clusters were placed on the lateral side of each thigh and leg [29,30] (Figure 1). The markers on the anterosuperior iliac spines, internal femoral condyles, and internal malleoli acted as calibration markers to locate the hip, knee, and ankle joint centers, respectively. Subsequently, a warm-up was performed by pedaling for 10 min at 100 W. This was followed by five steps of 1 min, with 25 W increments. In the last step, the aim was to reach an intensity close to that expected during the test. For this purpose, information was collected on previous FTP assessments of each cyclist. At the end of the warm-up, a 5 min rest was allowed. The warm-up was based on the recommendations of Bishop et al. [31], including low-intensity and high-intensity parts, followed by a rest before the start of the test. After the rest, a maximal 20 min FTP test was performed for the assessment of the cyclists' kinematics [23]. All participants used their own bicycles. Throughout the warm-up and the test, they were instructed to hold on to the middle of the handlebars so as not to vary the point of support and the kinematics of the spine and pelvis [32]. The pedaling cadence was monitored and maintained at around 90 rpm.
To control for the maximal intensity of the test, the rating of perceived exertion (RPE) was recorded at the end of the test. The motion of the markers was recorded for 15 s at four specific moments: after the start of the warm-up (WU), at the beginning of the test (INI), in the middle of the test at 10 min (MID), and just before the end (FIN) [33]. From the 15 s recorded, the first five pedal strokes were discarded, and the next 10 complete pedal cycles of each lower limb were used for the study. Environmental conditions were controlled for all subjects, with constant temperature and humidity [34].

Data Processing and Analysis

Hip joint centers were calculated using the equation proposed by Harrington et al. [35] from the markers of the posterosuperior and anterosuperior iliac spines, the latter reconstructed from the cluster formed by the markers of the posterosuperior spines and those of the iliac crests [36]. The knee joint centers were located using the transepicondylar method [37,38], and the ankle joint center was defined as the midpoint between the markers of the internal and external malleoli [39,40]. From the marker positions, hip, knee, and ankle joint angles were calculated three-dimensionally using conventional linear algebra procedures [41]. The angles of the pelvis and thorax segments were also calculated in 3D. Finally, the flexion angle of the lumbar region relative to the thorax was calculated in 2D. For the joint angles, a positive sign on the x-, y-, and z-axis means flexion, abduction, and external rotation, respectively. For the segments, a positive sign means posterior tilt, right lateral rotation, and axial rotation to the left, respectively (Figure 2).
Additionally, we calculated the crank angle from the fifth metatarsal markers. The crank angle was used to slice the continuous data into single pedaling cycles. Each cycle started when the ipsilateral pedal crossed the topmost position. The centered angles (lumbar, thorax, and pelvis) used the right pedal as a reference. All angles were time-normalized to 360 points with linear interpolation, each point representing 1° of a complete crank cycle.

Statistical Analysis

One-dimensional SPM was used to compare time conditions along the pedaling cycle. To determine the effect of fatigue, the INI, MID, and FIN periods were compared using an SPM repeated-measures ANOVA. When a statistical difference was found, post hoc pairwise t-tests with Bonferroni correction were calculated. Cohen's d [42] was also calculated to determine the magnitude of the paired differences. An alpha level of 0.05 was established for all analyses. To determine the effect of pedaling intensity, an SPM paired t-test was calculated between the WU and INI conditions. All analyses were conducted using custom scripts in Python 3.8, with the open-source module spm1D for Python (v.0.4.3, www.spm1d.org (accessed on 20 December 2020)) [27].
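As a concrete illustration of the two steps just described (time normalization of each pedal cycle to 360 points and the SPM comparison of conditions), the sketch below shows how such an analysis could look with spm1d. The array shapes, variable names, and the synthetic data are assumptions for illustration; only the spm1d calls reflect the analysis named above.

# Illustrative sketch (not the authors' script): cycle normalization and
# the SPM tests described above, using numpy and the spm1d module.
import numpy as np
import spm1d

def normalize_cycle(joint_angle, crank_angle):
    """Resample one joint-angle trace onto 360 points (1 degree per point)
    by linear interpolation over the crank angle."""
    crank = np.rad2deg(np.unwrap(np.deg2rad(crank_angle)))  # make monotonic
    crank -= crank[0]                                       # start cycle at 0 deg
    return np.interp(np.arange(360), crank, joint_angle)

# Y_ini, Y_mid, Y_fin: (subjects x 360) arrays of normalized angles.
# Synthetic data stand in for the real recordings here.
n_subj, rng = 23, np.random.default_rng(0)
Y_ini, Y_mid, Y_fin = (rng.normal(size=(n_subj, 360)) for _ in range(3))

Y = np.vstack([Y_ini, Y_mid, Y_fin])
A = np.repeat([0, 1, 2], n_subj)        # condition labels: INI, MID, FIN
SUBJ = np.tile(np.arange(n_subj), 3)    # subject labels (repeated measures)

Fi = spm1d.stats.anova1rm(Y, A, SUBJ).inference(alpha=0.05)
if Fi.h0reject:  # post hoc paired t-tests with Bonferroni-corrected alpha
    ti = spm1d.stats.ttest_paired(Y_fin, Y_ini).inference(
        alpha=0.05 / 3, two_tailed=True)

The output of each inference call is a statistical curve over the 360 crank positions, so a supra-threshold cluster directly indicates which portion of the pedal cycle differs between conditions.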
In the spinal area, differences were found in lumbar flexion-extension during the initial propulsion phase (approximately 0-90° of the crank) of each lower extremity, and in thoracic flexion-extension throughout the pedal cycle. Concerning lateral rotation, it increased in the thorax and pelvis segments, also corresponding to the beginning of the propulsion phase of each lower limb (0-90° of the crank). Finally, there was a rotation of the thorax during the initial part of the propulsion phase of the right lower limb (approximately 35° to 55° of the crank). Table 1 also shows the specific pairwise comparisons when statistical significance was found in the ANOVA. Some of the most important changes compared to the beginning of the test are the increase in ankle extension and knee flexion at the 0° crank position, and changes in both hips on the y-axis in the propulsion phase, resulting in increased adduction. In the spinal structures, the most important differences are the greater flexion of the lumbar region at the end of the test and a greater left lateral rotation of the pelvis when the left crank is at 180°.

Effects of Intensity

Finally, when comparing the two recordings of different intensity (WU and INI), significant differences were revealed in both hips on the y-axis, with greater adduction at the beginning of the propulsion phase at the initial moment (0-130°) (left between 340° and 120° of the crank, p = 0.001; right between 10° and 140° of the crank, p < 0.001), and on the z-axis, with greater external rotation at the initial moment, exclusively in the left hip during the final recovery phase (between 280° and 360° of the crank; p < 0.001) (Figure 4). We also found a greater thoracic flexion at the initial moment during the entire crank cycle (p = 0.001) and a greater homolateral pelvic tilt when the right crank was between 30° and 90° (p = 0.032) (Figure 4).

Discussion

This study aimed to analyze the kinematics of pedaling technique in amateur cyclists during a maximal test of maintained intensity. The main finding of this research was that there were significant changes in almost all the joints involved in pedaling throughout the maximal protocol. This confirms the limited validity of the kinematic protocols usually applied in the bike-fitting context, which are performed without fatigue. Differences in the lateral rotation of the thorax and pelvis segments (y-axis) may be the result of the cyclist pushing on the handlebars in the propulsion phase during pedaling [12].
On the one hand, the greater trunk lean found at the final moment of the test may be associated with less coactivation of the trunk musculature, which increases the load on the spinal region and consequently the injury risk [43]. A greater lateral rotation, as shown in the thoracic region at the end of the fatigue protocol in this study, has also previously been associated with the risk of low back pain (LBP) in cycling [44]. On the other hand, the greater lumbar and thoracic flexion at the end of the protocol supports the hypothesis that prolonged periods of holding a position produce spinal creep [45], which could partly explain the high number of injuries suffered by professional cyclists in the spinal areas [46]. Srinivasan and Balasubramanian [47] showed that increased fatigue in the trunk-stabilizing muscles during pedaling, such as the erector spinae, is often associated with cyclists presenting more severe LBP symptoms. Furthermore, people with LBP tend to have less trunk control [48]. The kinematic changes in the lumbar region occurred at the end of the test and not in the middle, which confirms the need to develop fatigue protocols to detect the risk of LBP [49]. Regarding the lower body joints, the changes produced in the ankles are in line with other articles that studied the relationship between kinematics and fatigue [11,16,21]. We can assume that an increase in fatigue of the ankle extensor and flexor muscles causes an increase in joint stiffness [50] and consequently a change in the range of motion. The changes observed in the knee in the sagittal plane, in line with the results of Pouliquen et al. [16], are derived from the changes in the ankle angle, as the two joints act in a closed kinematic chain [51]. However, this result differs from the findings of other studies [11,21] in which the joint affected by the changes in the ankle's sagittal plane was the hip. It has been suggested that the use of pedal cleats could modify the kinematics of the lower limb joints: their use tends to produce changes in the knee and ankle, whereas changes occur in the knee and hip when cleats are not used [52]. Bini and Diefenthaeler [21] did use cleats, while Sayers et al. [11] did not specify. Among cyclists the use of cleats is widespread, and we assume that most of the published studies were conducted with cleats. Another possible explanation is the participants' choice of pedaling technique (pushing or pulling [53]), which is not usually reported and can have different joint implications. Finally, to our knowledge, this is the first study to apply a continuous statistical comparison of the pedaling technique rather than one based only on discrete points, and therefore previous results may not be entirely comparable. Another key aspect is the change in the left knee, on both the y- and z-axes, with an increase in adduction and external rotation, together with the increase in hip adduction on the y-axis. These kinematic alterations in the transverse and frontal planes could be related to the appearance of common cycling injuries such as patellofemoral pain [9,16,18]. Although this difference was only found in the left knee, it should not be assumed that the movement is cyclical and similar in both lower extremities [24]. As other studies have shown, it is common to find deficits in strength and range of motion between lower extremities [54].
This confirms the need to analyze both legs biomechanically during kinematic analyses, an aspect that is overlooked in some of the studies that have previously linked kinematics and injury [17,55]. Finally, the changes in kinematics due to pedaling intensity (WU vs. INI conditions) seem to explain the greater thoracic flexion, pelvic lateral rotation, and hip adduction at the beginning of the protocol compared with the warm-up, as an adaptation to producing greater power output [22]. To the authors' knowledge, this is the first study to analyze 3D kinematic differences in a constant-intensity fatigue protocol. Furthermore, this study presents a larger sample size than previous studies conducted in 3D [11,12,16]. Therefore, the results obtained show the need to develop kinematic analysis protocols adapted to the specific fatigue demands of each cycling discipline. This allows the optimization of movement to avoid injuries and improve performance, and highlights the need to adopt preventive programs for the joints mainly affected by fatigue. Future research should investigate the relationship of core stability/strength with consistency in spine and lower body kinematics. According to Abt et al. [56], core stability reduces torso movement and helps maintain lower body alignment, and this could reduce the risk of injury in cyclists. Asplund and Ross [57] suggested that greater core stability would reduce the risk of suffering a spine injury, and this could be achieved through training focused on the trunk, performing dynamic and/or static exercises. One of the newest methods for quantifying stability in static exercises is accelerometry, which would allow the status of core stability to be monitored and the dose-response of trunk-focused programs to be improved [58]. The main limitations of this study have to do with the heterogeneity of the sample: even though all participants were amateur cyclists, they differed in weekly training volume and in experience, variables that could influence the kinematic changes, since an experienced cyclist may have greater consistency in kinematics [59]. Another important limitation is the absence of recordings of physiological variables associated with fatigue, such as heart rate or blood lactate, as the RPE may be insufficient in people unfamiliar with the scale.

Conclusions

A maximal fatigue protocol at stable intensity appears to modify the kinematic pattern of amateur cyclists' spinal, pelvic, and lower body structures. This modification in kinematics may increase the risk of injury to these structures and reduce performance. For this reason, it is important to carry out bike-fitting protocols in fatigue situations, which are close to the specific demands of competition and allow the detection of possible risk factors for injury. Additional trunk stability training is likely to reduce the movement associated with fatigue, thereby reducing the risk of injury.
Mutations in Domestic Animals Disrupting or Creating Pigmentation Patterns

The rich phenotypic diversity in coat and plumage color in domestic animals is primarily caused by direct selection on pigmentation phenotypes. Characteristic features are selection for viable alleles with no or only minor negative pleiotropic effects on other traits, and the frequent evolution of alleles by accumulation of several consecutive mutations in the same gene. This review provides examples of mutations that disrupt or create pigmentation patterns. White spotting patterns in domestic animals are often caused by mutations in KIT, microphthalmia transcription factor (MITF), or endothelin receptor B (EDNRB), impairing migration or survival of melanoblasts. Wild boar piglets are camouflage-colored and show a characteristic pattern of dark and light longitudinal stripes. This pattern is disrupted by mutations in melanocortin 1 receptor (MC1R), implying that a functional MC1R receptor is required for wild-type camouflage color in pigs. The great majority of pig breeds carry MC1R mutations disrupting wild-type color, and different mutations causing dominant black color were independently selected in European and Asian domestic pigs. The European allele evolved into a new allele creating a pigmentation pattern, black spotting, after acquiring a second mutation. This second mutation, an insertion of two C nucleotides in a stretch of six Cs, is somatically unstable and creates black spots after the open reading frame has been restored by somatic mutations. In the horse, mutations located in an enhancer downstream of TBX3 disrupt the Dun pigmentation pattern present in wild equids, a camouflage color in which pigmentation on the flanks is diluted. A fascinating example of the creation of a pigmentation pattern is Sex-linked barring in chicken, which is caused by the combined effect of both regulatory and coding mutations affecting the function of CDKN2A, a tumor suppressor gene associated with familial forms of melanoma in humans. These examples illustrate how the evolution of pigmentation patterns in domestic animals constitutes a model for evolutionary change in natural populations.

INTRODUCTION

Pigmentation phenotypes have been under strong selection in domestic animals throughout their evolutionary history, and variation in pigmentation is already described in ancient literature and illustrations. For instance, the Greek historian Herodotus described that the Persian emperor Xerxes (reigned 485 to 465 BC) kept sacred white horses, most likely white horses caused by the Graying with age mutation (Rosengren Pielberg et al., 2008). Coat color variation is also described in old Roman literature (Forster and Heffner, 1968). Pigmentation must have been one of the first traits that were altered after domestication was initiated, and extensive color diversity is a hallmark of domestic animals. The molecular characterization of the mutations underlying these changes has given insight into the mechanisms underlying pigmentation patterns. The evolution of pigmentation patterns in domestic animals constitutes a model for evolutionary change in natural populations. This review provides examples of mutations that disrupt pigmentation patterns and others that create pigmentation patterns.
In addition to the patterns described here, it is worth noticing the Himalayan pattern, which occurs in several species such as the cat, rabbit, mouse, and gerbil (Lyons et al., 2005), and is caused by temperature-sensitive mutations of tyrosinase producing white color but with dark pigmentation in cooler areas of the body, like the tips of the ears.

WHITE SPOTTING PATTERNS IN DOMESTIC ANIMALS

White spotting patterns occur frequently in domestic animals. The most common causes are mutations in KIT, encoding the KIT tyrosine kinase receptor, microphthalmia transcription factor (MITF), or endothelin receptor B (EDNRB), all with a crucial role in melanoblast migration and survival. Thus, a common reason for white spotting is a lack of pigment cells in the skin and/or in the hair/feather follicles. The majority of KIT mutations causing pigmentation patterns in domestic animals are structural rearrangements, and there are two reasons why these are common. One is that structural rearrangements that do not disturb the coding sequence may give a spectacular pigmentation pattern without causing negative pleiotropic effects; this matters because KIT function is also essential for the development of hematopoietic cells and germ cells. The second is that some regulatory elements affecting KIT expression are located hundreds of kb both upstream and downstream of the coding sequence, and disruption of these elements often gives spectacular spotting patterns. This is well illustrated by the domestic pig, where a 450 kb duplication encompassing the entire coding sequence and more than 100 kb upstream and downstream of it causes the Patch phenotype, characterized by large areas of the coat lacking pigmentation (Johansson Moller et al., 1996; Giuffra et al., 2002). Further, the Belt phenotype, characterized by a white belt across the forelegs, is associated with several duplications in non-coding regions of KIT (Rubin et al., 2012). The top dominant KIT allele, present in billions of pigs used for meat production worldwide, is Dominant white, causing complete or near complete absence of skin and hair pigmentation. The Dominant white allele carries multiple causal mutations, including the different duplications associated with the Patch and Belt phenotypes and, in addition, a splice mutation in one of the copies that leads to skipping of exon 17, encoding the tyrosine kinase domain. This results in a dominant negative receptor with normal ligand binding but inactivated tyrosine kinase signaling (Marklund et al., 1998; Rubin et al., 2012). The Dominant white allele thus affects pigmentation through the combined effect of regulatory mutations (the duplications) and a coding change (the splice mutation) in one of the copies. Due to this combination, it is the most dominant KIT allele as regards its effect on pigmentation in any mammal, with no or only very mild pleiotropic effects on hematopoiesis and fertility. Other examples of KIT structural rearrangements causing striking pigmentation patterns in domestic animals are Tobiano white spotting in horses, caused by a 40 Mb inversion where one of the inversion breakpoints is located about 100 kb downstream of KIT (Brooks et al., 2008), and color sidedness in cattle, caused by two serial translocations affecting KIT expression (Durkin et al., 2012). In contrast to pigs, where there is an allelic series at the KIT locus, white spotting in dogs is largely determined by an allelic series at the MITF locus (Karlsson et al., 2007).
This Spotting (S) locus was first described by Little (1957) and is composed of four alleles: Solid (S, wild-type), Irish spotting (S^i), Piebald (S^p), and Extreme white (S^w). Irish spotting occurs in breeds like the Bernese mountain dog, Collie, and Basenji, and is characterized by limited white spotting on the chest, often with a white ring around the neck. The Piebald phenotype occurs, for instance, in Beagles and Fox terriers and is characterized by more extensive white spotting across the body. Finally, Extreme white occurs in Dalmatians, white Boxers, and white Bull terriers and presents as a near total absence of pigmentation; however, the remaining spots show normal pigmentation, implying no defect in pigment production per se. In contrast to the situation in mice, where the majority of described alleles affect the coding sequence and are associated with severe negative pleiotropic effects in other tissues where MITF function is essential, none of the MITF alleles in dogs affect the coding sequence, and they have no or only mild negative effects (Karlsson et al., 2007); a fraction of Extreme white dogs show deafness. Furthermore, an interesting aspect of the dog MITF alleles is that the three mutant alleles do not represent three independent mutations but show haplotype sharing, strongly suggesting that the three alleles have evolved by consecutive accumulation of several causal mutations in the non-coding part of MITF. Functional characterization indicated that a simple repeat polymorphism in the MITF promoter is likely one of the causal variants affecting white spotting patterns (Baranowska Körberg et al., 2014). A non-coding variant in the 5' region of MITF is also associated with a white spotting pattern in cattle (Hofstetter et al., 2019). In horses, mutations in both MITF and PAX3 are associated with the Splashed white pigmentation pattern (Hauswirth et al., 2012). A missense mutation in EDNRB (Ile118Lys) causes the Overo white spotting pattern in horses and, in the homozygous condition, the Overo lethal white syndrome, in which lethality is caused by intestinal aganglionosis (Metallinos et al., 1998; Santschi et al., 1998; Yang et al., 1998). This horse syndrome corresponds to the form of Hirschsprung disease in humans caused by mutations in the same gene. A missense mutation in EDNRB2 is also associated with a feather pigmentation pattern in chicken (see below).

MC1R MUTATIONS IN PIGS BOTH DISRUPT AND CREATE PIGMENTATION PATTERNING

Melanocortin 1 receptor (MC1R) is one of the major coat color loci in the domestic pig. Wild boar piglets show a striking camouflage color composed of longitudinal dark- and light-colored stripes (Figure 1). In the great majority of pig breeds of the world, this camouflage color is disrupted by MC1R mutations, either dominant black or recessive red mutations. Fang et al. (2009) tested pigs from 68 different breeds from Europe and China and found that pigs from only one, the Hungarian Mangalica, were homozygous for the wild-type allele. Domestication of pigs occurred in parallel in Europe and Asia from two different subspecies of the wild boar, the European wild boar and the Asian wild boar, which separated from each other about one million years ago (Giuffra et al., 2000; Groenen et al., 2012). Two different missense mutations in MC1R causing dominant black color were selected in European and Asian domestic pigs, D124N and L102P, respectively.
A comprehensive screen of the MC1R coding sequences in wild boars and domestic pigs from both Europe and Asia led to the conclusion that there is purifying selection to maintain camouflage in wild boars and selection to disrupt camouflage in domestic pigs (Fang et al., 2009), a conclusion based on the observation that seven out of seven nucleotide substitutions among European and Asian wild boars were synonymous, whereas nine out of 10 nucleotide substitutions among domestic pigs were non-synonymous changes. The disruption of the camouflage pattern in pigs carrying dominant black or recessive red alleles at MC1R strongly suggests that a wild-type MC1R receptor, whose signaling activity is controlled by the relative abundance of melanocyte-stimulating hormone (MSH) and agouti (ASIP), is required for the development of this pattern. The most likely explanation is that differential expression of agouti causes the patterning, as recently reported for periodic feather patterning in juvenile galliform birds (Haupaix et al., 2018). Furthermore, differential expression of the transcription factor ALX3 is associated with the development of periodic dorsal stripes in the African striped mouse, which resemble the camouflage pattern in piglets (Mallarino et al., 2016). One of the MC1R alleles in pigs also creates a stochastic pigmentation pattern: the black-spotting E^P allele, which evolved from the European dominant black allele (E^D1), carrying the D124N missense mutation, by the insertion of two C nucleotides at codon 22, creating a mononucleotide repeat of eight Cs. A frameshift mutation at codon 22 is expected to result in a complete loss of function and a lack of black eumelanin in the coat. However, that is not the case: the most common phenotype is red with a more or less random distribution of black spots across the body, or a white coat with larger black spots; whether the black spots occur on a white or red background is determined by one or more other genetic factors that have not yet been identified. The black-spotting phenotype associated with this allele ranges from almost no spots at all, in particular on the red background as in Tamworth pigs, to an entirely black coat with six white points (tail, nose, and four white feet) as in Berkshire pigs. So, how is this possible? The explanation is that the 8C mononucleotide repeat is somatically unstable and may lose two nucleotides or gain one nucleotide and thereby restore the open reading frame. When that happens, constitutive MC1R signaling is reactivated due to the presence of the D124N missense mutation. This somatic instability of the mononucleotide repeat was confirmed by RT-PCR analysis. The phenotypic range associated with the black-spotting allele is most likely explained by sequence variants affecting the probability of somatic reversion as well as by loci affecting the proliferation of melanocytes after reversion has occurred. A similar stochastic pattern of pigmented spots occurs in white horses carrying the dominant Graying with age mutation. These horses are born normally colored but start to gray already during the first year of life and are usually completely white before they are 10 years of age. Graying with age is caused by a 4.6 kb tandem duplication in an intron of syntaxin 17 (Rosengren Pielberg et al., 2008). Many horses that are heterozygous for this mutation show a large number of small pigmented spots and are called flea-bitten gray.
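The stochastic logic of this reversion mechanism lends itself to a toy simulation. The sketch below is purely illustrative and not from the original study: the number of melanoblast clones, the reversion probability, and the spot-size distribution are invented parameters, chosen only to show how a higher somatic reversion rate shifts the phenotype from a nearly spot-free coat toward a largely black one.

# Toy Monte Carlo (illustrative only) of somatic reversion at the unstable
# 8C repeat: each melanoblast clone restores the MC1R reading frame with
# probability p; each revertant clone expands into a black spot.
import numpy as np

rng = np.random.default_rng(42)

def black_fraction(p_reversion, n_clones=500, mean_spot_area=0.002):
    """Fraction of the coat covered by black spots (capped at 1.0)."""
    reverted = rng.random(n_clones) < p_reversion
    # Spot size varies with clonal expansion after reversion (toy model).
    spot_areas = rng.exponential(mean_spot_area, size=int(reverted.sum()))
    return min(1.0, float(spot_areas.sum()))

for p in (0.01, 0.1, 0.9):
    print(f"reversion probability {p}: ~{black_fraction(p):.0%} of coat black")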
It appears plausible that this phenotype is caused by somatic loss of one of the duplicated copies, or that one copy is inactivated by an epigenetic mechanism. A third example of the stochastic generation of a pigmentation pattern in domestic animals is merle patterning in dogs, in which pigmentation is diluted by a retrotransposon insertion in PMEL (previously denoted SILV); somatic deletion of the retrotransposon restores normal pigmentation (Clark et al., 2006).

REGULATORY MUTATIONS IN TBX3 DISRUPT CAMOUFLAGE COLOR IN HORSES

The majority of domestic horses have a non-dun phenotype characterized by intense pigmentation and caused by homozygosity for a recessive allele at the Dun locus. The dominant Dun phenotype occurs in some horses, like Icelandic horses and the Norwegian Fjord horse, but this is in fact a wild-type color present also in the Przewalski's horse, a close relative of the ancestor of domestic horses. Dun causes a pattern of dilution on the flanks but leaves a dark dorsal stripe, and may be associated with other dark markings, which may include a facial mask, shoulder cross, and zebra-like stripes on the legs (Imsland et al., 2016). Many mutations described in the pigmentation literature cause pigment dilution through various defects in the pigment machinery; it is worth noticing that Dun in horses is, in contrast, a wild-type phenotype contributing to camouflage by reducing the intensity of pigmentation. Histological studies revealed that the difference between hairs from Dun and non-dun horses is that the latter have a symmetric deposition of pigment, whereas hair from Dun horses has an asymmetric deposition of pigment on the outward-facing side of the hair (Imsland et al., 2016). Thus, this is a pigmentation pattern affecting the individual hair. High-resolution genetic mapping combined with whole genome sequencing data from Dun and non-dun horses revealed that the non-dun phenotype is caused by cis-acting regulatory mutations affecting tissue-specific expression of the TBX3 transcription factor gene (Imsland et al., 2016). TBX3 had never before been associated with pigmentation, but it is important during development, and loss-of-function mutations cause the ulnar-mammary syndrome in humans, which involves defects in limb, apocrine gland, tooth, and genital development (Bamshad et al., 1997). Imsland et al. (2016) first showed that the majority of non-dun horses, including the reference horse used for the horse genome assembly, were homozygous for a 1609 bp deletion located about 5 kb downstream of TBX3, in a region showing high sequence conservation among mammals. The fact that not all non-dun horses were homozygous required further analysis, which revealed the presence of two different alleles: non-dun1 (lacking the deletion) and non-dun2 (carrying the deletion). The causal mutation for non-dun1 is a single nucleotide substitution within the region deleted in non-dun2! Furthermore, genotyping more than 1000 horses for these two causal variants explained a phenotypic heterogeneity among non-dun horses: horses homozygous for non-dun2 have the most intense pigmentation, whereas non-dun1 horses show an intermediate phenotype, often with a weak dorsal stripe (Imsland et al., 2016). Interestingly, non-dun1 is in fact also a wild-type allele, since it was found in two ancient horses (4400 and 42,700 years old). This implies that there existed two different color morphs of the ancestor of domestic horses, Dun and non-dun1, possibly adapted to different environmental conditions.
Imsland et al. (2016) also established a plausible molecular mechanism underlying camouflage color in Dun horses, which is disrupted in non-dun horses. In Dun horses, TBX3 shows an asymmetric expression in the hair follicle that matches the asymmetric deposition of pigment. KIT ligand (KITL) is downregulated in the area where TBX3 is expressed, which in turn means that pigment cells are not attracted to this part of the hair follicle, explaining the lack of pigment deposition. In contrast, TBX3 is not expressed in the hair follicle of non-dun horses, explaining the symmetric deposition of pigment. Thus, the results suggest that the deleted region contains an enhancer required for TBX3 expression in the hair follicle. The work on the Dun horse coat color revealed a previously unknown function for TBX3 and a previously unknown mechanism for the generation of camouflage color in mammals. This mechanism is present at least in all equids, including zebras. For instance, the Somali wild ass, the wild ancestor of the donkey, shows a very clear Dun phenotype with diluted pigmentation on the flanks, a black dorsal stripe, and zebra-like leg stripes. It is possible that this mechanism for camouflage patterning is also active in other mammals, including different species of deer and antelopes.

MUTATIONS IN THE CDKN2A TUMOR SUPPRESSOR GENE CREATE A PIGMENTATION PATTERN IN CHICKEN

Sex-linked barring is an iconic plumage phenotype present in breeds like Barred Plymouth Rock and Coucou de Rennes (Figure 2A). This phenotype shows sex-linked dominant inheritance and is characterized by feathers with periodic black and white bars. In the initial identification of the causal gene for this phenotype, Hellström et al. (2010) mapped the locus to a 12 kb region containing only the CDKN2A tumor suppressor gene, encoding the two transcripts INK4b and ARF. CDKN2A had never before been associated with a pigmentation phenotype but has a well-established link to pigment cell biology, because heterozygosity for loss-of-function mutations in this gene is a major risk factor for familial forms of malignant melanoma in humans (Hussussian et al., 1994). Sequence analysis of the 12 kb region across many breeds of chicken revealed three CDKN2A alleles: wild-type (N), Sex-linked barring (B1), and Sex-linked dilution (B2); the latter is a variant of Sex-linked barring causing a more diluted pigmentation, without as sharp a contrast between the black and white bars as in Sex-linked barring. Four sequence variants, all in or near the ARF transcript, were unique to the B1 and B2 alleles and not found in any of the sequenced wild-type chromosomes (Figure 2B). B1 carried a V9D missense mutation while B2 was associated with an R10C missense mutation, and both carried two SNPs in non-coding sequences. The two missense mutations were very strong candidates for being causal because they were both non-conservative and affected the MDM2-binding domain of the ARF protein. However, it was a mystery why both mutations occurred on the same very rare haplotype characterized by two non-coding changes not found among wild-type chromosomes. A second study (Schwochow-Thalmann et al., 2017) characterized a fourth allele, B0, that carried only the two non-coding changes (Figure 2B). The B0 allele was associated with the most extreme suppression of pigmentation, with only a weak barring pattern, and was therefore named Sex-linked extreme dilution.
Functional analysis revealed allelic imbalance associated with the B0, B1, and B2 alleles, with higher expression of the mutant allele compared with the wild-type allele in feather follicles but not in skin or liver. The results imply that one of the two non-coding changes, or the combined effect of the two, constitutes a cis-acting regulatory mutation. Furthermore, functional characterization using far-UV circular dichroism and isothermal titration calorimetry (ITC) of the different protein variants (N, B1, and B2) showed that the missense mutations impair the interaction between ARF and MDM2 (Schwochow-Thalmann et al., 2017) and thus counteract the effect of the upregulated expression of ARF caused by the regulatory mutation. The results are consistent with an evolutionary scenario in which the regulatory mutation occurred first, resulting in the B0 allele, followed by two independent missense mutations causing the B1 and B2 alleles. Schwochow-Thalmann et al. (2017) proposed a plausible mechanism of action of the mutant alleles (Figure 3), partially based on the finding by Lin et al. (2013) that Sex-linked barring is associated with premature differentiation of pigment cells. First, upregulation of ARF expression blocks MDM2 and leads to upregulation of the p53 tumor suppressor and a shorter lifespan of pigment cells; this is the opposite effect compared with the consequence of the CDKN2A loss-of-function mutations predisposing to human melanoma. Second, the missense mutations in the B1 and B2 alleles impair the interaction between ARF and MDM2 (Figure 3B). The Sex-linked barring phenotype is most likely caused by a repeated process in which melanocyte progenitor cells are recruited, differentiate, and produce pigment, resulting in a black bar, followed by exhaustion of pigment cells due to premature differentiation, resulting in a white bar; the process then starts again with the recruitment of new progenitor cells (Figure 3C).

[Figure 3. (A) The non-coding mutation(s) present in the B0, B1, and B2 alleles cause a tissue-specific upregulation of CDKN2A encoding the ARF protein. ARF inhibits MDM2-mediated degradation of p53. p53 will activate downstream targets, possibly initiating premature melanocyte differentiation resulting in a loss of mature pigment cells. (B) The amino acid substitutions associated with the B1 and B2 alleles impair the ARF/MDM2 interaction, which counteracts the consequences of upregulated ARF expression. (C) In wild-type feathers, melanocyte progenitor cells migrate up from the feather base and start expressing ARF in the barb region, leading to differentiation of melanocytes and pigment production without exhausting the pool of undifferentiated melanocytes. In Sex-linked barred feathers, upregulated ARF expression may lead to premature differentiation of pigment cells and a lack of undifferentiated melanocytes that can replenish the ones producing pigment. As the feather keeps on growing, no more melanocytes are available to produce pigment, resulting in the white bar. A plausible explanation for the periodic appearance of white and black bars is that new recruitment of melanocyte progenitor cells takes place after the undifferentiated melanocytes have been depleted. From Schwochow-Thalmann et al. (2017).]

In addition to Sex-linked barring, there is an extensive diversity of intra-feather pigmentation patterns in the domestic chicken.
Smyth (1990) described the following main types: stippling (wild-type), penciling, autosomal barring, single lacing, double lacing, spangling, and mottling. The difference between stippling and the other variants is caused by mutations at distinct loci or by the combined effect of multiple loci; some of these genes have been identified at the molecular level. Autosomal barring resembles Sex-linked barring, but the white bars in Sex-linked barring lack pigment, whereas the light-colored bars in Autosomal barring usually show pigment deposition. According to Smyth (1990), autosomal barring is caused by the combined effect of mutations at the Extension, Dark brown, Patterning, and Columbian loci. Extension corresponds to MC1R, in which an allelic series has been identified at the molecular level (Takeuchi et al., 1996; Kerje et al., 2003; Dávila et al., 2014). Dark brown is caused by an 8.3 kb deletion upstream of SOX10 (Gunnarsson et al., 2011). The SOX10 transcription factor plays an important role during melanocyte development, and SOX10 mutations cause some forms of Waardenburg syndrome, involving pigmentary disturbances of the hair, skin, and eyes (Pingault et al., 1998). The Patterning and Columbian loci have not yet been identified at the molecular level. Mottling (a white tip of the feather) has been associated with a missense mutation in EDNRB2, encoding endothelin receptor B2 (Kinoshita et al., 2014).

DISCUSSION

An extensive phenotypic diversity of pigmentation patterns is a characteristic feature of domestic animals. It has been speculated that this diversity, in particular white spotting patterns, is a by-product of selection for tameness (Wilkins et al., 2014). The argument is based on proposed pleiotropic effects of mutations in genes expressed in neural crest-derived cells affecting both brain function and pigmentation. However, to the best of my knowledge, there are no convincing empirical data supporting this hypothesis as a major explanation for coat color diversity in domestic animals. For instance, the white spotting locus in dogs (S/MITF) contradicts this hypothesis, because mutations affecting melanocyte migration with no or minor pleiotropic effects have been selected. Furthermore, some of the friendliest dogs, like Labradors and Golden Retrievers, tend to be homozygous for the Solid (wild-type) allele at this locus. In fact, a recent study confirms that the degree of white pigmentation does not covary with differences in behavior across dog breeds (Wheat et al., 2019). The most important reason for the extensive variation in pigmentation patterns in domestic animals is positive selection for variation (Fang et al., 2009). Selection against camouflage, like the striping pattern in piglets, may have facilitated animal husbandry when domestic piglets were roaming freely around early settlements. Furthermore, Columella, the Roman authority on agriculture, wrote almost 2000 years ago that shepherds preferred dogs with white color because it helped them distinguish dogs from wolves in low light conditions (Forster and Heffner, 1968). There has also been selection for pure beauty, exemplified by the phenotype of the white horses (Rosengren Pielberg et al., 2008) and Sex-linked barring in the domestic chicken (Schwochow-Thalmann et al., 2017). Finally, relaxed purifying selection, related to the importance of pigmentation for camouflage and mate choice in natural populations, has most certainly contributed to the observed diversity in domestic animals.
A characteristic feature of alleles affecting pigmentation in domestic animals is that alleles with strong effects on pigmentation but no or only mild negative pleiotropic effects have been preferred. This is well illustrated by the MITF white spotting alleles in dogs, and by the KIT alleles in pigs, where a drastic effect on pigmentation without a strong negative effect on hematopoiesis and fertility became possible after the gene was duplicated. The relatively long evolutionary history of domestic animals, compared with experimental organisms, has resulted in another characteristic feature of alleles affecting pigmentation in domestic animals, namely the common occurrence of alleles differing by more than one sequence change from the wild-type allele. This is illustrated by four examples given in this review, including KIT alleles in pigs and cattle, MC1R alleles in pigs, and CDKN2A alleles in chicken. In this respect, domestic animals are better models than experimental organisms for pigmentation phenotypes in natural populations. For traits that have been under selection for many generations, the presence of multiple causal variants is expected to be the rule rather than the exception. A possible example of this is the MC1R allele associated with white ornamental feathers in Satellite males of the ruff, which differs by four derived missense mutations from the wild-type allele (Lamichhaney et al., 2016).

AUTHOR CONTRIBUTIONS

LA wrote the manuscript.
In vivo temporal and spatial profile of leukocyte adhesion and migration after experimental traumatic brain injury in mice

Background: Leukocytes are believed to be involved in delayed cell death following traumatic brain injury (TBI). However, data demonstrating that blood-borne inflammatory cells are present in the injured brain prior to the onset of secondary brain damage have been inconclusive. We therefore investigated both the interaction between leukocytes and the cerebrovascular endothelium using in vivo imaging and the accumulation of leukocytes in the penumbra following experimentally induced TBI.

Methods: Experimental TBI was induced in C57/Bl6 mice (n = 42) using the controlled cortical impact (CCI) injury model, and leukocyte-endothelium interactions (LEI) were quantified using both intravital fluorescence microscopy (IVM) of superficial vessels and 2-photon microscopy of cortical vessels for up to 14 h post-CCI. In a separate experimental group, leukocyte accumulation and secondary lesion expansion were analyzed in mice that were sacrificed 15 min, 2, 6, 12, 24, or 48 h after CCI (n = 48). Finally, leukocyte adhesion was blocked with anti-CD18 antibodies, and the effects on LEI and secondary lesion expansion were determined 16 h (n = 12) and 24 h (n = 21) following TBI, respectively.

Results: One hour after TBI, leukocytes and leukocyte-platelet aggregates started to roll on the endothelium of pial venules, whereas no significant LEI were observed in pial arterioles or in sham-operated mice. With a delay of >4 h, leukocytes and aggregates also adhered firmly to the venular endothelium. In deep cortical vessels (250 μm), LEI were much less pronounced. Transmigration of leukocytes into the brain parenchyma only became significant after the tissue had become necrotic. Treatment with anti-CD18 antibodies reduced adhesion by 65%; however, this treatment had no effect on secondary lesion expansion.

Conclusions: LEI occurred primarily in pial venules, whereas little or no LEI occurred in arterioles or deep cortical vessels. Inhibiting LEI did not affect secondary lesion expansion. Importantly, the majority of migrating leukocytes entered the injured brain parenchyma only after the tissue had become necrotic. Our results therefore suggest that neither intravascular leukocyte adhesion nor the migration of leukocytes into cerebral tissue plays a significant role in the development of secondary lesion expansion following TBI.

Introduction

Leukocytes are believed to play an important role in secondary brain damage following acute brain injury such as stroke or brain trauma [1]. Following brain injury, blood-borne leukocytes begin to roll on, and subsequently stick to, the cerebrovascular endothelium, and finally migrate into the cerebral tissue, where they are believed to cause damage, e.g., by releasing reactive oxygen species [2]. Inhibiting the adhesion of leukocytes to the endothelium (for example, by blocking intercellular adhesion molecule 1 (ICAM-1) or MAC-1) resulted in a significant reduction in infarct volume in experimental models of focal and global cerebral ischemia [3-7]. A similar sequence of events also seems to occur following traumatic brain injury (TBI); however, the role of leukocyte invasion in secondary brain damage remains controversial. Several reports support a contributing role of leukocytes in secondary brain damage.
For example, an accumulation of polymorphonuclear leukocytes in the brain correlated with increased intracranial pressure (ICP) and reduced cerebral blood flow (CBF) following cold injury in rats [8]. Antibodies directed against leukocyte adhesion molecules (for example, anti-ICAM-1) reduced leukocyte accumulation in the tissue and led to improved neurological function following fluid percussion injury (FPI) [9]. Mice deficient in T and B cells and mice that were treated with T cell inhibitory agents had less traumatic brain damage following aseptic cerebral injury (ACI) than controls [10]. A 50% reduction in total post-FPI contusion volume was correlated with a reduction in the number of accumulating monocytes/macrophages in the medial cortex three days after injury [11]. Finally, neutrophil depletion reduced CCI-induced edema formation and contusion volume in mice [12]. On the other hand, several studies have reported that neither inhibiting leukocyte adhesion with anti-CD18 antibodies nor depleting neutrophils affected the permeability of the blood-brain barrier (BBB) following experimental TBI [13-15], and mice that were deficient in P-selectin and ICAM-1 exhibited neuroprotection without a change in leukocyte accumulation [16]. Additionally, most studies on the role of leukocytes in posttraumatic brain damage did not investigate intravascular leukocyte accumulation, nor did they correlate the spatial and temporal accumulation of leukocytes in traumatized brain tissue. Accordingly, it remains unclear whether leukocytes adhere to the cerebrovascular endothelium, migrate into damaged tissue, and cause additional damage, or whether they migrate into the damaged brain tissue only after secondary brain injury has occurred. To address these questions, we investigated both the time course and the effect on secondary contusion growth of a) leukocytes that accumulate in the tissue, and b) intravascular leukocytes and leukocyte-platelet aggregates that adhere to the cerebrovascular endothelium following traumatic brain injury, precisely in the region in which secondary brain damage occurs.

Animals

For this study, we used 6 to 8-week-old male C57Bl6 mice (23 to 26 g) that were obtained from either Charles River (Kisslegg, Germany) or Jackson Laboratories (Bicester, UK). The animals had free access to tap water and pellet food. Mice that were allowed to awaken between subsequent procedures within one experiment were housed individually throughout the experiment. All animal experiments were conducted in accordance with institutional guidelines and approved by the government of Upper Bavaria (license and ethical approval number 06/04), and by both the Ministry for Health and Children in Dublin, Ireland (license number B100/4169) and the Research Ethics Committee of the Royal College of Surgeons (REC number 467).

Controlled cortical impact

Traumatic brain injury was induced through a large craniotomy window over the right hemisphere using a controlled cortical impact (CCI) device optimized for use in mice [17,18]. For experiments involving intravital microscopy or histological assessment, the impact piston travelled at 6.0 or 8.0 m/s, with a penetration depth of 0.5 or 1.0 mm, respectively; the contact time with the tissue was 150 ms. Thereafter, the cranial bone was re-implanted and affixed with histoacrylic glue. Sham-operated animals were subjected to the same surgical procedure without the induction of a trauma.
Subsequently, the animals were immobilized in a stereotactic frame, and two cranial windows (one for TBI and one for IVM monitoring) were prepared over the right hemisphere. After two baseline recordings of selected cerebral arterioles and venules, the animals were subjected to either TBI or a sham operation (n = 6 mice per group); the animals were then transferred back to the intravital microscope, and the previously observed vessels were re-monitored 30, 60, 90, and 120 min after CCI (Figure 1A,B). To monitor LEI at later time points (4 to 5.5 h, 8 to 9.5 h, and 12 to 13.5 h post-CCI; n = 6 animals per time point), mice were anesthetized with 2% isoflurane in 65% N2O and 33% O2 and subjected to CCI as described above. Subsequently, the animals were allowed to awaken in a recovery chamber (heated to 33°C, 50% humidity). Upon the recovery of motor function, the mice were transferred to their respective cages. After 3, 7, or 11 h, the animals were prepared for in vivo imaging as described above, and vessels were observed four times every 30 min (see Figure 1C). At the end of each experiment, the animals were sacrificed by transcardiac perfusion with 4% PFA.

Intravital microscopy of superficial vessels

A square (2 mm × 2 mm) cranial window was prepared over the right fronto-parietal cortex; the dura mater was kept intact. The cerebral microcirculation was then investigated in an area 1.5 to 3.5 mm frontal to the primary contusion (Figure 1A), that is, in the region in which secondary brain damage occurs [18,21]. The animals were placed on a computer-controlled microscope stage for repeated analyses of the same vessels. Visualization of the microvessels was facilitated by an intravenous injection of fluorescein isothiocyanate-labeled dextran (FITC-dextran; 0.…; molecular weight 150,000; Sigma Chemical, St. Louis, Missouri, USA). Before each measurement, leukocytes were stained by repeated intravenous injections of the fluorescent dye rhodamine 6G (0.05 ml of a 0.01% solution; Merck, Darmstadt, Germany). The images were collected using a video camera and recorded on videotape.

[Figure 1, continued: (D) Following an injection of either anti-CD18 antibodies or control IgG, the animals were subjected to CCI; 13 h thereafter, the mice were prepared for IVM. All IVM experiments ended with perfusion-fixation with paraformaldehyde and the subsequent removal of the brain. (E) The animals received either control IgG or anti-CD18 antibodies and were then subjected to CCI; 15 min or 24 h later, their brains were removed for histological assessment.]

Analysis of leukocyte-endothelium interactions in pial vessels

A computer-assisted microcirculation analysis system (CapImage; Ingenieurbüro Dr. Zeintl, Heidelberg, Germany) was used to quantify the IVM images offline by frame-to-frame analysis [18]. The numbers of rolling and adherent leukocytes (7 to 12 μm in size) and aggregates (15 to 25 μm in size) were analyzed in the arterioles and venules by an investigator who was blinded with respect to the treatment of the animals. For each region of interest, a vessel segment 50 μm in length was studied for 30 sec in each measurement. Rolling leukocytes/aggregates were identified by multiple intermittent contacts with the vascular endothelium and by their significantly lower velocity compared with freely moving leukocytes/aggregates in the central flow of the vessel. Leukocytes/aggregates were categorized as adherent when they attached firmly to the vascular endothelium for longer than 30 sec.
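For concreteness, the counts described above (per 50-μm segment observed for 30 sec) must be rescaled to the rates reported in the Results (cells per 100 μm per minute). The helper below is a sketch of that arithmetic under the stated assumptions; the function name is illustrative, and the actual quantification was done in CapImage.

# Sketch: normalize raw IVM counts to cells / 100 um / min, the unit used
# in the Results. Names and defaults are illustrative only.
def lei_rate(n_cells, segment_um=50.0, duration_s=30.0):
    """Convert a raw count on a vessel segment into cells per 100 um per min."""
    return n_cells / (segment_um / 100.0) / (duration_s / 60.0)

# Example: 3 adherent leukocytes on a 50-um segment during a 30-s recording
print(lei_rate(3))  # -> 12.0 adherent leukocytes / 100 um / min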
Intravital microscopy of vessels in deeper regions of the brain

To investigate LEI in deeper regions of the brain (that is, at depths of 50, 150, and 250 μm), we used a Zeiss 2-photon imaging system based on an LSM 710 confocal microscope equipped with a Chameleon Vision II Ti:Sa laser (Coherent Scotland Limited, West of Scotland Science Park, Glasgow, Scotland). The principle of multi-photon microscopy is that there is only one point at which the excitation photons coincide and thus excite the dye. The great advantage of this technique is that this point can be placed at any chosen level down to several hundred μm within the tissue; the emitted light always comes from this one point and can be detected with high spatial accuracy [22]. The animals were subjected to CCI and prepared for in vivo imaging as described above. For 2-photon imaging, a 2 mm × 2 mm craniotomy window was prepared under continuous cooling with saline, the dura mater was carefully removed, and a custom-made cover glass (Schott Displayglas, Jena, Germany) was inserted and affixed with dental cement (Cyano Veneer, Hager & Werken, Duisburg, Germany). Eight to ten hours after CCI, animals were transferred to the 2-photon microscope, and IVM was performed according to a standardized protocol. The cranial window was divided equally into four parts, each containing one region of interest (ROI). In total, we monitored 12 regions (that is, four ROIs at three different depths). Each ROI covered a volume of 425 μm × 425 μm × 50 μm, and the layers covered 0 to 50 μm, 100 to 150 μm, and 200 to 250 μm below the surface (see Figure 1A). Z-stacks were acquired in 2-μm steps within 30 sec. For each ROI, two consecutive z-stacks were acquired. Vessels, leukocytes, and leukocyte-platelet aggregates were labeled as described above. We used an excitation wavelength of 830 nm and detected green and red fluorescence using two non-descanned detectors (NDD; BP500-550 and BP565-610).

Analysis of leukocyte-endothelium interactions in deep vessels

The data were analyzed in ImageJ (http://imagej.nih.gov/ij/) using the plugins Calculator Plus, Co-localization, and 3D Object Counter. Leukocytes and leukocyte-platelet aggregates were identified according to their respective sizes as described above. By acquiring two consecutive images within 60 sec, we were able to classify leukocytes and aggregates as either rolling or adherent.

Contusion volume

The animals were deeply anesthetized with 4% isoflurane in 33% O2 and 63% N2O and sacrificed by cervical dislocation 15 min or 2, 6, 12, or 24 h after CCI. Subsequently, the brain was removed, snap-frozen on powdered dry ice, and stored at −80°C. Coronal sections (10 μm thick) were collected every 500 μm through the entire brain using a cryostat, stained with cresyl violet, and digitally recorded. The areas of the contusion and of both hemispheres were quantified using an image analysis system (Olympus DP-SOFT; Olympus, Hamburg, Germany) by an investigator who was blinded with respect to the treatment conditions. Compared with healthy tissue, the nuclei in the contusion are pyknotic and densely stained, and the neuropil is very pale [21,23,24]. The size of the necrotic areas was corrected for brain swelling, and total contusion volume was calculated based on the contusion areas obtained from 15 sections [17,21].
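A minimal numerical sketch of this volumetry, under stated assumptions: the contusion volume is approximated by summing the contusion area on each of the 15 equidistant sections and multiplying by the 500-μm spacing, with a swelling correction based on the contralateral-to-ipsilateral hemisphere-area ratio. This correction formula is one common convention, not necessarily the exact procedure of [17,21]; all numbers are invented.

# Sketch of contusion volumetry from serial sections (assumed workflow).
import numpy as np

def contusion_volume(contusion_mm2, ipsi_mm2, contra_mm2, spacing_mm=0.5):
    """Swelling-corrected contusion volume (mm^3) from per-section areas."""
    contusion = np.asarray(contusion_mm2, dtype=float)
    # Swelling correction (one common convention): scale each section's
    # contusion area by the contralateral/ipsilateral hemisphere-area ratio.
    corrected = contusion * np.asarray(contra_mm2) / np.asarray(ipsi_mm2)
    # Cavalieri-type estimate: sum of corrected areas times section spacing.
    return corrected.sum() * spacing_mm

# Invented areas for 15 sections spaced 500 um apart
areas  = [0.0, 0.5, 1.2, 2.0, 2.6, 2.9, 3.0, 2.8, 2.4, 1.8, 1.1, 0.6, 0.2, 0.0, 0.0]
ipsi   = [22.0] * 15   # ipsilateral hemisphere areas (mm^2)
contra = [20.0] * 15   # contralateral hemisphere areas (mm^2)
print(f"contusion volume: {contusion_volume(areas, ipsi, contra):.2f} mm^3")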
The section that was selected for quantification was the one that contained the largest contusion among the 15 coronal sections cut through the contused brain. Leukocytes were quantified by counting all of the labeled cells in the injured and uninjured hemispheres. The analysis was performed by an investigator who was blinded with respect to the treatment of the animals. To assess post-trauma leukocyte accumulation in the brain, 48 animals were randomly assigned to the following six groups: 15 min and 2, 6, 12, 24, and 48 h following CCI. For each time point, paraffin and frozen sections were prepared from four animals.

Inhibition of leukocyte adherence using anti-CD18 antibodies

Animals were anesthetized with 2% isoflurane in 65% N2O and 33% O2, then received 1.2 μg/g bodyweight of either an antibody directed against CD18 (GAME-46; BD Biosciences Pharmingen, California, USA) or a control IgG delivered via the femoral vein [26]. Subsequently, the animals were subjected to CCI as described above. To investigate the antibody's effect on LEI, we performed IVM in both groups (n = 6 mice per group) at 16 and 16.5 h post-injury as described above (Figure 1D). To detect a potential effect of anti-CD18 treatment on secondary brain damage, animals were sacrificed 15 min or 24 h post-injury (n = 7 mice per group; Figure 1E), and contusion volume was assessed by histomorphometry as described above.

Statistical analysis

Sample size was calculated before the start of the study based on the following parameters: for the IVM experiments, a standard deviation of 25 to 30% of the mean, a minimally detectable difference of at least 50%, and a power of 0.8; for histological assessment, a standard deviation of 15%, a minimally detectable difference of at least 25%, and a power of 0.8. The Mann-Whitney U-test was used to analyze the differences between groups. The Friedman one-way analysis of variance on ranks followed by the Student-Newman-Keuls test was used to analyze differences over time. Data obtained from the histological analysis are presented as mean ± standard deviation (SD), and data acquired from the in vivo imaging experiments are presented as mean ± standard error of the mean (SEM), unless indicated otherwise. Differences with a P-value of <0.05 were considered statistically significant.

Animal physiology

The blood gases, blood electrolytes, and physiological parameters (mean arterial blood pressure and core body temperature) of the animals were monitored and maintained within physiological limits throughout all experiments. Tables 1 and 2 show representative examples taken from two experimental groups. There were no significant differences between the groups or between different time points within an individual group for any physiological parameter. Physiologically important parameters were monitored continuously throughout the experiments (n = 7 animals per group). End-tidal pCO2, mean arterial blood pressure (MABP), and ventilation frequency were recorded throughout every IVM recording experiment included in this study. The parameters remained within the physiological range, as shown in a representative table of values obtained in the experiments testing the effect of anti-CD18 antibodies or IgG control. None of the parameters showed a significant difference between groups. At the end of each IVM recording experiment, blood gases were analyzed. The pH, pCO2, and pO2 remained within physiological thresholds and showed no significant difference between groups. The observed metabolic acidosis is not unusual for laboratory rodents fed with standard chow.
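As a cross-check on the group sizes, the parameters stated under Statistical analysis above can be plugged into the standard normal-approximation formula for a two-group comparison; this is a sketch, since the authors do not state which formula they used.

```python
from math import ceil
from scipy.stats import norm

# n per group for detecting a difference `diff_frac` (fraction of the
# mean) given a standard deviation `sd_frac` (fraction of the mean).
def n_per_group(sd_frac, diff_frac, power=0.8, alpha=0.05):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z * sd_frac / diff_frac) ** 2)

print(n_per_group(0.30, 0.50))  # IVM: SD 30% of mean, delta 50% -> 6
print(n_per_group(0.15, 0.25))  # histology: SD 15%, delta 25%  -> 6
```

Both settings give six animals per group, matching the IVM groups; the n = 7 used for the histological comparison is consistent with adding a small correction (roughly 15%) for the lower efficiency of rank-based tests such as the Mann-Whitney U-test used here.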
Following CCI, the number of rolling leukocytes in the venules increased markedly within the first hours after trauma (Figure 2B; Figure 3A). This enhanced level of LEI was stable until the end of the observation time, that is, 13.5 h after trauma (P < 0.001 versus baseline and sham; Figure 3A). The increase in the number of adherent leukocytes in the venules only reached significance 4 h following CCI (5 ± 2.1 adherent leukocytes/100 μm/min), an increase of approximately ten-fold over baseline (P < 0.02 versus baseline and sham). After 8 h, this number reached 17.5 ± 3.2 adherent leukocytes/100 μm/min, and this rate was stable until the end of the observation time (P < 0.001 versus baseline and sham; Figure 3C). In arterioles, neither rolling nor adherent leukocytes were present in noteworthy numbers. We also observed the formation of leukocyte-platelet aggregates, which were defined by their size of 15 to 25 μm [18]. No rolling or adherent aggregates were evident under baseline conditions in either the venules or arterioles (Figure 2A). Following TBI, rolling aggregates appeared in the venules within the first two hours after injury (Figure 2B), although this did not reach significance (P = 0.053 versus baseline and sham; Figure 3B). In contrast, at 4 h post-injury, the number of rolling aggregates had reached highly significant levels (7.3 ± 1.4 rolling aggregates/100 μm/min; P < 0.001 versus baseline and sham). Finally, we measured 17.9 ± 3.3 rolling aggregates/100 μm/min at 13.5 h post-injury; the number of rolling aggregates had not reached a plateau by the end of the observation time (Figure 3B). Similar to adherent leukocytes, the number of adherent aggregates only became significant at 4 h post-CCI (1.6 ± 0.4/100 μm/min; P < 0.01 versus baseline and sham). The number of adherent aggregates continued to increase over time without reaching a plateau (13.3 ± 1.2 adherent aggregates/100 μm/min at 13.5 h post-trauma; P < 0.001 versus baseline and sham; Figure 3D). We observed no rolling or adherent aggregates in the arterioles.

Leukocyte-endothelium interactions at a depth of 0 to 250 μm 8 h post-controlled cortical impact

At depths of 0 to 50 μm and 100 to 150 μm, there was a trend towards an increasing number of rolling leukocytes after trauma relative to sham-operated animals (Figure 4A-D; Figure 5A,B, left graphs), which became significant in two ROIs. At a depth of 0 to 50 μm, in ROI 4 we counted 9.5 ± 2 rolling leukocytes after CCI compared to 4.7 ± 1.1 in the sham-operated animals (P < 0.02). At a depth of 100 to 150 μm, in ROI 3 we detected 7.7 ± 0.9 and 4.0 ± 0.6 rolling leukocytes after CCI and in sham-operated animals, respectively (P < 0.002). The average total intravascular volume in an ROI at a depth of 100 to 150 μm was 17,300,000 μm³ (Figure 5C, left graph). For all ROIs at all depths, the number of adherent leukocytes was less than the number of rolling leukocytes, and there were no significant differences between traumatized and sham-operated animals (Figure 5A-C, left graphs). We observed both rolling and adherent aggregates throughout all three levels; however, their numbers were smaller than the numbers of leukocytes, and there was no significant difference between animals subjected to CCI and sham-operated animals (Figure 5A-C, right graphs).
Leukocyte migration into the brain following traumatic brain injury quantified using immunohistochemistry

Leukocytes

The positive control (see Methods) exhibited staining inside the interfollicular spaces in the spleen. In contrast, we detected no labeled cells in either the negative control or in native brain slices. In the contralateral hemispheres of brains removed 24 h post-CCI, which served as controls, we counted 8 ± 5 leukocytes per hemisphere (Figure 6A; Figure 7B). At every post-TBI observation time, we measured a significant increase in the number of leukocytes in the injured (ipsilateral) hemisphere relative to the amount in the contralateral side assessed at 24 h post-injury (P < 0.03). At 24 and 48 h post-injury, a maximum of 571 ± 200 leukocytes was reached in the ipsilateral hemisphere (Figure 6C; Figure 7B).

B lymphocytes

In the positive controls, labeled cells were visible at the edge of the spleen's follicles, whereas both the vaginae periarterialis lymphaticae and the red pulp were unlabeled. Neither the negative control nor the native brain sections contained any stained cells. No B lymphocytes were detected in the brain up to 48 h post-CCI.

T lymphocytes

The positive controls exhibited staining of the vaginae periarterialis lymphaticae within the white pulp, whereas the peri-arteriolar spleen noduli and red pulp were unlabeled. Neither the negative control nor native brain sections contained any stained cells. No T lymphocytes were detected in the brain up to 48 h post-CCI.

Correlation between secondary brain injury and the migration of leukocytes into the brain

Six hours after CCI, leukocytes were found primarily within the contusion, whereas very few leukocytes were present in the pericontusional penumbra (Figure 6). Both the contusion volume and the number of inflammatory cells in the traumatized hemisphere increased over time, reaching maximum values 24 h after injury (Figure 7B,C). At all of the time points, the inflammatory cells were found predominantly at the center of the contusion (where the tissue was already necrotic), whereas virtually no inflammatory cells were found in the vulnerable penumbra, where the neurons were still alive (Figure 8A-D). This finding indicates that leukocytes accumulated in the tissue only after secondary lesion expansion had occurred. We detected virtually no leukocytes in the contralateral hemisphere (Figure 6A; Figure 7B).

Inhibition of leukocyte adherence using anti-CD18 antibodies

In light of the above results, we investigated the effect of anti-CD18 antibodies on leukocyte adhesion to the venular endothelium at a time at which pronounced LEI occurred, that is, 16 h post-injury. Following the administration of an anti-CD18 antibody (control: IgG), we observed a significant reduction of leukocyte-endothelial interactions in pial microvessels (Figure 9A-D). Quantification of the data revealed no significant difference with respect to rolling leukocytes (Figure 10A). In contrast, the administration of anti-CD18 antibodies reduced the number of adherent leukocytes in the venules by approximately two-thirds relative to animals that received control IgG (11.1 ± 1.9 versus 33.7 ± 7.9 adherent leukocytes/100 μm/min, respectively; P < 0.01; Figure 2C-F; Figure 10C). Similar to the results for leukocytes, we observed no significant difference in rolling aggregates following the injection of either anti-CD18 antibody or control IgG (Figure 10B).
However, the administration of anti-CD18 antibody reduced the number of adherent aggregates by two-thirds at 16 h post-CCI relative to the control IgG group (3.4 ± 0.7 versus 9.0 ± 1.9 adherent aggregates/100 μm/min, respectively; P < 0.02; Figure 2C-F; Figure 10D).

Contusion volume after injection of anti-CD18 antibody or control IgG

Fifteen minutes post-CCI, contusion volume was 6.7 ± 1.3 mm³; this volume is equivalent to 13% of the total volume of the contralateral hemisphere. After 24 h, secondary brain damage had increased the contusion volume to 10.3 ± 0.6 mm³ in mice that received control IgG (Figure 10E); a similar increase was observed in mice that received anti-CD18 antibody. Twenty-four hours following CCI, mice in both groups had similar contusion volumes, equivalent to 16% of the volume of the contralateral hemisphere (Figure 10E).

Discussion

It is well known that TBI can induce an inflammatory reaction [1,27,28]; nevertheless, data supporting a role for blood-borne white blood cells as a causal factor in secondary brain injury following head trauma are highly controversial. There are many possible ways in which leukocytes could contribute to secondary brain damage, including the release of free radicals, the activation of proteases, the production of pro-inflammatory chemokines and cytokines, alterations in cerebral blood flow, and/or increases in vascular permeability [28-31]. These mechanisms could be mediated either via the interaction of leukocytes with the endothelium or via the leukocytes that migrate into the tissue. In the current study, we investigated both possibilities by visualizing leukocytes both in the intravascular space, using in vivo microscopy, and in brain tissue, using immunohistochemistry. Of note, all data were obtained from the traumatic penumbra, which is the area of the brain in which secondary brain damage occurs within the first 24 h following brain trauma [18,21,24,32]. Our results demonstrate that both increased LEI and the formation of leukocyte-platelet aggregates are initiated in the microcirculation of the penumbra within the first few hours following TBI. Nevertheless, these effects seem to occur predominantly in superficial vessels, and only to a much lower degree in deeper microvessels. Additionally, leukocytes migrate into the post-TBI brain only after the tissue becomes necrotic. The inhibition of LEI had no effect on secondary lesion expansion following CCI.

Intravascular leukocyte-endothelium interactions in superficial vessels

Using intravital microscopy, we investigated and quantified LEI up to 13.5 h following CCI. Under normal physiological conditions, LEI was limited to some rolling leukocytes in venules, which is in line with observations published previously by our group [19] and others [33]. Immediately following trauma, however, the number of rolling leukocytes increased significantly and - even more importantly - leukocytes began to adhere to the venular endothelium. Most interestingly, these events occurred before secondary lesion expansion and hence could have potentially mediated secondary brain damage (for example, by disrupting the BBB or by initiating inflammatory cascades). Elegant studies have suggested a correlation between post-trauma leukocyte accumulation in the brain and secondary brain damage [11,12,29,34,35].
However, in those studies, it was unclear whether the detrimental effect was caused exclusively by leukocyte accumulation or by an associated phenomenon, such as leukocyte-endothelium adhesion initiating inflammatory cascades or an upregulation of the adhesion mediator ICAM-1 with subsequent brain edema formation. Moreover, post-trauma ICAM-1 expression has been correlated with increased permeability of the BBB despite being independent of leukocyte accumulation in the brain [13-16,30,36]. In view of the possibility that ICAM-1 might play a leukocyte-independent role in secondary brain damage [2,31], we used an antibody directed against a structure located on the leukocytes themselves to directly investigate the role of intravascular LEI following TBI. Hence, we used an anti-CD18 antibody directed against the β-subunit of the lymphocyte function-associated antigen 1 (LFA-1; β-chain CD18 and α-chain CD11a), which binds to ICAM-1 and mediates (among other effects) leukocyte adhesion to the endothelium [2,37,38]. Thus, by blocking the interaction between LFA-1 and ICAM-1, we inhibited leukocyte-endothelium interactions. Using this antibody, we reduced leukocyte adherence by approximately two-thirds compared to an IgG control antibody. However, this did not affect the progression of secondary lesion expansion, indicating that leukocyte adherence to the cerebrovascular endothelium does not play an important role in the pathophysiology of secondary lesion expansion following CCI within the first 24 h. We focused on leukocyte adherence, which has been shown to initiate intracellular signaling and disruption of the BBB [2,39,40]. In contrast, rolling leukocytes interact with the endothelium only very briefly and have not been assigned a role in either initiating inflammatory cascades or opening the BBB.

Figure 4 Representative images of leukocyte-endothelium interaction (LEI) at a depth of 100 to 150 μm. The vessel lumen is stained with fluorescein isothiocyanate-labeled (FITC) dextran (green), and leukocytes are labeled with rhodamine 6G (red). Eight hours after controlled cortical impact (CCI), leukocytes and leukocyte-platelet aggregates interact with the endothelium at a depth of 100 to 150 μm (lower panel). In contrast, there were few leukocyte-endothelium interactions under normal physiological conditions (that is, 8 h after a sham operation; upper panel). All three axes of the calibration bar are 50 μm.

Effect of aggregates on secondary brain damage following traumatic brain injury

To date, leukocyte-platelet aggregates have been reported to occur primarily in relation to endothelial stress, for example, due to increased levels of oxidized lipoprotein, inflammation, or diabetes [41-43]. Activated platelets up-regulate their expression of P-selectin, which then binds to its natural ligand, P-selectin glycoprotein ligand-1 (PSGL-1), on neutrophils and monocytes [44]. Using intravital microscopy, the formation of leukocyte-platelet aggregates has been observed both after subarachnoid hemorrhage (SAH) [45] and after TBI [18]. Following SAH, an antibody directed against P-selectin significantly reduced the formation of leukocyte-platelet aggregates and the adherence of aggregates to the endothelium [45].

Figure 5 Quantification of leukocyte-endothelium interaction (LEI) at various depths. (A-C) Number of rolling and adherent leukocytes (left) and aggregates (right) at the indicated depths 8 to 10 h after controlled cortical impact (CCI).
At each depth, LEI was measured in four standardized regions of interest (ROI). In ROI 4 at 0 to 50 μm and in ROI 3 at 100 to 150 μm, we observed significantly more rolling leukocytes than in sham-operated animals (A and B, left, indicated with an asterisk). In all other ROIs at all three depths, we observed no significant differences in LEI between CCI and sham-operated animals.

Similarly, in our study, inhibiting leukocyte adherence to the endothelium led to a reduction in the adherence of aggregates, which confirms that the aggregates were composed - at least in part - of leukocytes. Nevertheless, unlike the effect of inhibiting P-selectin following SAH, inhibiting LEI did not affect aggregate formation itself. Because aggregates were observed almost exclusively in venules, their effect on the cerebral microcirculation - and in particular, their contribution to vessel occlusion - might not be of primary importance. However, post-TBI microvessel occlusions in tissues outside of the brain have been reported, for example, in the lung [46]. Despite the fact that the reduction of the adherence of aggregates to the venular endothelium did not affect secondary lesion expansion, it remains unclear whether directly inhibiting aggregate formation would have a beneficial systemic/pulmonary effect following TBI. Accordingly, future studies of the role of P-selectin in aggregate formation and of the role of post-trauma aggregates both in the brain and in other organs would be needed to clarify these questions.

Figure 6 Following CCI, leukocytes accumulated in the ipsilateral hemisphere. These leukocytes migrated predominantly into the contusion core (C), whereas they were nearly absent from the pericontusional penumbra (B). The leukocytes in the penumbra are indicated by the red arrows. Leukocytes were labeled using an anti-CD45 antibody.

Leukocytes appeared in the brain only after neuronal cell death had occurred.

Intravascular leukocytes and aggregates in deeper brain levels

Although we also observed rolling and adherent leukocytes and aggregates at depths of up to 250 μm in the brain using 2-photon microscopy, these events were much less prevalent than in superficial vessels. This effect becomes even more prominent when the average vessel volume is taken into account: the total vessel volume investigated in superficial venules (approximately 85,500 μm³) was considerably smaller than that in the deeper regions of the brain (492,100, 173,200, and 125,800 μm³ at depths of 0-50, 100-150, and 200-250 μm, respectively). Several factors may account for this observation. First of all, the diameter of deep vessels (which are primarily capillaries with some arterioles and venules) is much smaller than the diameter of superficial venules. By the continuity principle underlying the Venturi effect, a decrease in vessel diameter at a given volumetric flow is accompanied by an increase in blood flow velocity. Therefore, the much faster blood flow velocity in deeper vessels might reduce LEI by increasing shear stress and minimizing cell-cell interactions. Secondly, the post-trauma inflammatory reaction is caused by the contusion itself and is therefore predominantly present in the vessels that drain blood from the site of injury. Hence, leukocyte activation and aggregate formation were scarcely present in arterioles but occurred mainly in superficial venules and, to a limited degree, in deeper tissue, most likely in draining capillaries and postcapillary venules.
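The hemodynamic argument can be made concrete with two textbook relations for Poiseuille flow; this is a sketch for a single cylindrical segment carrying a fixed volumetric flow Q (in vivo, flow is of course redistributed across the capillary network).

```latex
% Mean velocity and wall shear in a vessel of diameter d (viscosity \mu)
\bar{v} = \frac{Q}{A} = \frac{4Q}{\pi d^{2}}, \qquad
\dot{\gamma}_{w} = \frac{8\bar{v}}{d} = \frac{32Q}{\pi d^{3}}, \qquad
\tau_{w} = \mu \, \dot{\gamma}_{w}.
```

At fixed Q, halving d quadruples the mean velocity and raises the wall shear stress eightfold, which is the sense in which narrower vessels disfavor rolling and firm adhesion.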
Leukocyte migration into vulnerable tissue

Our second aim was to investigate whether leukocytes accumulate in the region of interest (that is, the penumbra) before the tissue becomes necrotic. Because we had previously quantified the expansion pattern of secondary brain damage surrounding the primary contusion at high temporal and spatial resolution [17,21], we could compare the progression of neuronal cell death and leukocyte accumulation within the tissue over time. Following CCI, leukocytes accumulated predominantly in the contusion core. Although significant numbers of granulocytes and monocytes migrated into the tissue during the first 48 h, neither B lymphocytes nor T lymphocytes appeared in the brain. The peak accumulation of leukocytes occurred at 24 to 48 h post-trauma, which is in agreement with previous studies, including a clinical report [47] and several animal experiments using either CCI [15,48] or a weight-drop paradigm [48-50]. Other white blood cells such as monocytes, macrophages, T cells, and B cells appear predominantly 5 to 6 days after trauma [47,50,51], which is in agreement with our results. In contrast, a different time course was reported in [10]: the authors described a clear correlation between CD4-positive T lymphocytes in the tissue and increased post-traumatic brain damage. This early accumulation of T cells is in contrast to both our results and the results of Holmin et al. [50] and might be explained - at least in part - by the differences in the trauma models and experimental methods used in the respective studies. In our experiments, leukocytes migrated into the tissue only after neuronal cell death had occurred; thus, their accumulation does not seem to play a role in secondary lesion growth following CCI.

The role of leukocytes following controlled cortical impact versus other brain injury models

Leukocyte-endothelium adherence and the subsequent migration of leukocytes are known to play a role in secondary brain injury following stroke. This role has been studied extensively, particularly with respect to LFA-1 and Mac-1, which both use the CD18 binding site [6,52-54]. The different result in our study can most likely be attributed to the post-injury differences in pathophysiology between stroke and TBI and to differences in the processes that are initiated selectively by the mechanical injury resulting from CCI [55]. Despite some similarities in the progression of secondary brain damage following both types of injury, the relative importance of individual factors seems to differ. For example, the inflammatory response, including leukocyte-endothelium interactions, clearly plays a more important role in secondary brain injury following stroke than it plays in the wake of TBI [1,55].

Figure 10 Effect of anti-CD18 antibodies on leukocyte-endothelium interaction (LEI) and contusion volume. (A) The number of rolling leukocytes in the cerebral venules 14 h after controlled cortical impact (CCI) was not affected by the administration of anti-CD18 antibodies compared to control IgG. (C) In contrast, the number of adherent leukocytes was reduced by approximately two-thirds by anti-CD18 antibodies relative to control IgG. (B,D) Qualitatively similar results were observed 14 h after CCI with respect to rolling and adhering aggregates in the cerebral venules after the administration of either anti-CD18 antibodies or control IgG. (E) Nevertheless, treatment with anti-CD18 antibodies had no effect on the expansion of the secondary lesion, measured 24 h after CCI.
Another study reported that leukocyte infiltration into the tissue was correlated with histological outcome after fluid percussion injury (FPI) [11]. The accumulation of monocytes and macrophages within the brain was significantly reduced three days after FPI by the administration of anti-CD11 antibodies, which bind to a fraction of the CD11/CD18 integrin and thereby inhibit leukocyte-endothelium adhesion; most importantly, this reduction in leukocyte accumulation was accompanied by a reduction in lesion volume. However, it remains unclear whether this beneficial effect was due to a reduction in leukocyte adherence and/or in leukocyte migration into the tissue. Again, this discrepancy with our results can be attributed primarily to differences in the kinetics initiated by the different experimental models [56]. Firstly, FPI leads to diffuse brain injury that includes hemorrhagic contusions and reaches brain regions distant from the primary impact site [56-58]. In contrast, CCI produces a relatively restricted contusion with rapidly developing central necrosis [56,59], in which the diffuse component [32] might be much less important. Hence, in TBI models that produce a much more extensive injury pattern (for example, CCI, in which the maximum lesion volume is reached within 24 h [17]), leukocytes may not contribute significantly to secondary lesion growth, compared to models in which the initial injury is less severe and develops over longer periods of time (for example, FPI or ischemic brain injury). This is also supported by the finding that post-CCI leukocyte accumulation in the vulnerable tissue peaks at 24 to 48 h (as shown both in the current study and by others [15,48]), which is 1 to 2 days earlier than the time of peak leukocyte accumulation following FPI [11,51].

Summary

Following CCI, leukocytes begin to migrate into the injured brain only after neuronal cell death has already occurred. Moreover, leukocytes accumulate predominantly in the contusion core (that is, in tissue that is already necrotic), but barely in the traumatic penumbra, where secondary injury occurs. Inhibiting the adhesion of leukocytes and aggregates to the cerebrovascular endothelium does not reduce the progression of secondary lesion growth. Consequently, our data suggest that blood-borne leukocytes do not mediate secondary lesion expansion following contusional TBI.
8,596
2013-02-28T00:00:00.000
[ "Biology", "Medicine" ]
STABILITY AND GLOBAL ATTRACTIVITY FOR A CLASS OF NONLINEAR DELAY DIFFERENCE EQUATIONS

where c ∈ [0,1) is a given constant, k is a positive integer, f : R → R is continuous, f(0) = 0, and f(u) ≠ 0 for u ≠ 0. Such an equation arises from some of the earliest mathematical models of the macroeconomic "trade cycle," and has attracted a great deal of attention (see, e.g., [1, 4, 5, 6, 7, 8, 9, 10] and the references cited therein). When k = 1, Sedaghat [9] obtained some sufficient conditions for permanence and boundedness by exploring the relationship between first-order equations and higher-order equations. Our main goal in this paper is to obtain sufficient conditions which guarantee that the equilibrium of (1.1) is a global attractor. We also investigate the stability of (1.1) and show that the stability properties, both local and global, of the equilibrium of the delay equation (1.1) can be derived from those of the associated nondelay equation

x(n+1) = f(x(n)). (1.2)

Introduction

Consider the nonlinear delay difference equation (1.1), where c ∈ [0,1) is a given constant, k is a positive integer, and f : R → R is continuous with f(0) = 0 and f(u) ≠ 0 for u ≠ 0. Such an equation arises from some of the earliest mathematical models of the macroeconomic "trade cycle," and has attracted a great deal of attention (see, e.g., [1,4,5,6,7,8,9,10] and the references cited therein). When k = 1, Sedaghat [9] obtained some sufficient conditions for permanence and boundedness by exploring the relationship between first-order equations and higher-order equations. Our main goal in this paper is to obtain sufficient conditions which guarantee that the equilibrium of (1.1) is a global attractor. We also investigate the stability of (1.1) and show that the stability properties, both local and global, of the equilibrium of the delay equation (1.1) can be derived from those of the associated nondelay equation (1.2), where f is the same function as in (1.1). This result is of considerable benefit to the study of delay difference equations of this type, since the stability properties of nondelay difference equations are better understood [2,3]. A point x̄ is called an equilibrium of (1.1) if x(n) = x̄ (n ≥ 0) is a solution of (1.1). It is obvious that (1.1) has the unique equilibrium x̄ = 0 under the above hypotheses. We say that the equilibrium x̄ = 0 of (1.1) is a global attractor if and only if, for arbitrary initial conditions, the corresponding solution x(n) of (1.1) satisfies lim n→∞ x(n) = 0. The region of attraction of the equilibrium x̄ = 0 is defined as the set of all initial points {x(−k), x(−k+1), ..., x(0)} such that lim n→∞ x(n) = 0. Without loss of generality, throughout this paper the norm is the one defined in (1.3). The rest of the paper is organized as follows. In Section 2, we derive a sufficient condition for global attractivity of the equilibrium of (1.1). In Section 3, we discuss the stability properties of (1.1).

Global attractivity of (1.1)

The objective of this section is to derive sufficient conditions which guarantee that the equilibrium of (1.1) is a global attractor. Let u(n) be defined by the substitution (2.1). Then (1.1) is reduced to (2.2). Noting that c ∈ [0,1), (2.2) has the unique equilibrium ū = 0. We first show the following proposition. The following theorem gives a sufficient condition for the equilibrium x̄ = 0 of (1.1) to be a global attractor.
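For reference in the proofs of Section 3, the prose definitions of stability and attraction given in the Introduction correspond to the standard ε–δ formulations below; this is a restatement of the prose, not recovered source text, with initial data φ = (x(−k), ..., x(0)) ∈ R^{k+1}.

```latex
\text{stability: } \forall\, \varepsilon > 0 \ \exists\, \delta > 0 :
  \|\phi\| < \delta \implies |x(n)| < \varepsilon \text{ for all } n \ge 0,
\qquad
\text{global attractivity: } \lim_{n \to \infty} x(n) = 0
  \text{ for every } \phi \in \mathbb{R}^{k+1}.
```

Asymptotic stability, as used in Corollary 3.4 below, is stability together with attractivity.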
In this section, we present the main results, which relate the stability properties of the delay equation (1.1) to those of the associated nondelay equation (3.5). First we establish a lemma which will be used in proving the main theorem.

Theorem 3.2. Assume that f satisfies condition (3.12) for all x, y ∈ R. If the equilibrium of (3.5) is stable, then the equilibrium of (1.1) is also stable.

Proof. It is sufficient to prove the stability of the equilibrium of (3.2) because of the equivalence of (1.1) and (3.2). Let ε > 0 be arbitrary. Since the equilibrium of (3.5) is stable, there exists a corresponding δ > 0; recall the definition of y given by (3.1). Hence, for all n ≥ −k, ..., which implies ... for all n ≥ −k. Therefore, for all n ≥ 0, by (3.1), .... Noting that f satisfies (3.12), we get ..., and from Lemma 3.1(b) and (3.17), .... Therefore, for arbitrary ε > 0, there exists δ > 0 such that ‖y(0)‖ < δ implies ‖y(n)‖ < ε for n ≥ 0, so the equilibrium of (3.2) is stable. This completes the proof.

Theorem 3.3. Assume that (3.12) holds. If there exists a constant m > 0 such that G(m) = {x ∈ R : |x| < m} is a subset of the attractive region of the equilibrium of (3.2), then G(m) is also contained in the attractive region of the equilibrium of (1.1).

Proof. Let ε > 0 be arbitrary. Since G(m) is a subset of the attractive region of (3.2), there exists .... Assume that y(0) ∈ R^{k+1} and ‖y(0)‖ < m; then we have |x(−k)| < m. So there exists ..., which implies, by (3.1) and (3.12), that .... Then ‖y(0)‖ < m implies ‖y(n)‖ < ε for n ≥ T₃. So G(m) is also a subset of the attractive region of the equilibrium of (1.1). This completes the proof.

Theorems 3.2 and 3.3 can be combined to give the following corollaries.

Corollary 3.4. Assume that condition (3.12) holds. If the equilibrium of (3.5) is asymptotically stable, then the equilibrium of (1.1) is also asymptotically stable.

Corollary 3.5. Assume that condition (3.12) holds. If the equilibrium of (3.5) is globally stable, then the equilibrium of (1.1) is also globally stable.
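As a concrete illustration of how these transfer results are applied, suppose f is a contraction toward the origin; this bound is assumed purely for illustration and is not the paper's condition (3.12).

```latex
% Assume |f(u)| \le \lambda |u| for all u \in \mathbb{R}, with
% 0 \le \lambda < 1. For the nondelay equation x(n+1) = f(x(n)):
|x(n)| \le \lambda\,|x(n-1)| \le \cdots \le \lambda^{n}\,|x(0)|
  \longrightarrow 0 \quad (n \to \infty),
```

so the zero equilibrium of the nondelay equation is globally asymptotically stable; Corollaries 3.4 and 3.5 then transfer the same conclusion to the delay equation (1.1) whenever f also satisfies (3.12).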
1,278.2
2005-01-01T00:00:00.000
[ "Mathematics" ]
E-commerce adoption in ASEAN: who and where?

As an economic bloc, the Association of Southeast Asia Nations (ASEAN) aims to leverage the usage of e-commerce for the benefit of all: governments, enterprises, and citizens of its member countries. However, the countries vary greatly in terms of economic development and cultural factors, which explains the uneven level of e-commerce adoption in the region. This paper seeks to provide empirical evidence by integrating individual and country-level characteristics to profile e-commerce users in ASEAN. Analysis of multi-source data from 5870 individuals in six countries in 2017 reveals that e-commerce adoption is more prevalent among female, younger, more educated, employed, and higher-income users. Also, the adoption of e-commerce is found to be stronger in societies that exhibit high individualism, low masculinity, and low uncertainty avoidance. This study proposes that e-commerce adoption should be explained not only by individual characteristics and formal institutions, but also by country-level variables and national culture.

Introduction

The ability to conduct business transactions via computer networks has existed since the 1960s, yet the emergence of two companies, Amazon and eBay, in 1995 enormously transformed the way today's businesses operate. E-commerce refers to the process of selling and purchasing goods and/or services through computer networks, following methods specifically designed for placing and acquiring orders, according to the Organisation for Economic Co-operation and Development [43]. In contrast to traditional retail, most e-commerce activities are handled virtually during the pre-purchase (i.e., information search), purchase, and post-purchase (i.e., feedback and after-sales service) stages. Therefore, the ability to and preference for adopting e-commerce is a privilege only available to computer-literate customers. Beyond that, the function of e-commerce has extended into other related activities, such as serving as a platform for information sharing among consumers and traders, which, while easily accessible, is considered risky by many [41,50]. Thus, the application of e-commerce is no longer restricted only by the quality of formal institutions, such as infrastructure and cyber law, but is also dependent on cultural aspects such as risk tolerance. In synthesizing the two levels of analysis, it is argued that the eventual online purchase behavior of customers is largely explained by their demographic characteristics and their country environment. Global e-commerce diffusion today remains uneven, and the digital divide is still apparent across countries [33,67]. Despite the fact that e-commerce is conceivably advantageous for all (governments, businesses, and customers), the prevalence of e-commerce adoption is greatly heterogeneous between nations [40]. This phenomenon demands extensive research to explore the individual and country-level factors contributing to the adoption of e-commerce, to ensure that the benefits offered can be fully embraced by all economies. Thus far, many researchers have attempted to develop frameworks proposing the drivers and deterrents of e-commerce growth in a country (e.g., [1]), in which the unit of analysis is either enterprises [67] or individual consumers [36,61,66]. Despite this advancement, there is a dearth of cross-country studies of e-commerce, mostly due to the lack of effort to integrate individual and macro-level factors for analysis in a single model.
Instead, existing work is mostly fragmented, focusing either on who e-commerce users are (i.e., the personal and cognitive-psychological attributes of consumers [66]) or on where e-commerce users live (i.e., the physical infrastructure and rule of law in a country [39,44]). This approach, though meriting acknowledgement, solves neither puzzle fully, because it does not consider individual and country drivers simultaneously to explain the rate of e-commerce adoption across nations. The deficit in knowledge is especially acute in the context of a regional bloc like the Association of Southeast Asia Nations (ASEAN), where the common goal is to narrow the digital gap while disseminating the benefits of e-commerce among consumers in all member countries. In an attempt to offer empirical and practical insights, this research integrates multilevel factors and examines their relationship to e-commerce adoption in ASEAN. Specifically, the empirical approach tests the effects of individual demographics (age, gender, education level, employment status, and income level) and national culture (power distance, individualism, masculinity, and uncertainty avoidance) on the propensity to buy things online among 5870 consumers from six ASEAN countries in 2017. The contributions of this study are notable for academics and practitioners in several ways. First, it advances the research on e-commerce adoption by incorporating and analyzing drivers at multiple levels in a single model. This method is robust as it allows for a cross-country analysis while controlling for the confounding effects of other individual and macro factors. Second, this study observes and further examines the role of national culture in the adoption of e-commerce, thus complementing existing studies that have focused extensively on formal institutions such as telecommunication infrastructure. Third, this research is among the very few studies that specifically look at a particular economic bloc. This approach is particularly vital for upholding the common aspirations of ASEAN in fully leveraging the benefits of the digital economy while some citizens and member states are still impeded by personal and cultural barriers.

E-commerce as a strategic measure of the ASEAN Economic Community (AEC)

ASEAN was established on 8 August 1967 in Bangkok, Thailand, and currently consists of ten countries: Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam. This regional bloc has seven goals, including the acceleration of economic growth, social progress, and cultural development in order to strengthen the foundation for a prosperous and peaceful community through active collaboration and mutual assistance among the members. The AEC is one of the three pillars comprising the ASEAN community vision. In particular, the AEC Blueprint 2025 provides a broad strategic roadmap toward a highly competitive and innovative region, focusing on five key areas, particularly the enhancement of connectivity and sectoral cooperation within the economic bloc. To achieve that, several measures have been planned, including the advancement of e-commerce adoption among both businesses and citizens of ASEAN. Acknowledging the potential contributions of e-commerce, ASEAN initiated the e-ASEAN Framework Agreement 2000 for promoting the adoption and usage of e-based business platforms and digital technology to enhance the competitiveness of the bloc.
However, these efforts are hindered by the socio-economic heterogeneity among the ten member countries. The differences are observed not only in terms of economic development, but also in cultural attributes. This issue has been seriously addressed in the Initiative for ASEAN Integration and the ASEAN Equitable Development Monitor 2014, which emphasize the need to narrow the development gap, particularly in countries like Cambodia, Laos, Myanmar, and Vietnam. As such, the prevalence of e-commerce varies across all ASEAN members.

Age

Age simply measures one's date of birth, yet strongly shapes the specific behavior of people at different life stages. Psychologically, idiosyncrasies in the purchasing behavior of consumers are influenced by the physical and cognitive aging processes, as well as by accumulated life experiences [53]. Each generational cohort varies in attitudes, preferences, and values, which eventually shape buying patterns, including preferred methods for shopping and buying [45,46]. Accordingly, empirical evidence supports age as a significant factor in explaining an individual's decision regarding the adoption of e-commerce [13,37,63]. Although most studies found that young consumers are the dominant users of e-commerce, in some countries like Israel [37] and Taiwan [63] a different pattern emerges. Yet, while older consumers today use the internet more often than before, the younger generation has comparatively more active users making online purchases [25,55]. Young people, especially those born after 2000, are considered the first high-tech generation [42]. They are more computer-literate and more exposed to cutting-edge technological applications. Also, young consumers are very consumption-oriented and sophisticated when shopping [32]. They do not simply consider shopping an act of buying things, but rather a decision that has to be made based on an evaluation of a set of information [18,37]. Thus, many of them choose to adopt e-commerce because the information provided by the internet is broader and richer than what could be acquired in physical stores. On the other hand, older people prefer the traditional way of seeking information before proceeding with a purchase [24]. They still want to hear an explanation of the product or service from a salesperson rather than relying on a description displayed on the internet [26]. In fact, older consumers are not bothered by the limited information available in the store because they are experienced shoppers who are capable of making buying decisions with less knowledge about the products [10]. Lastly, older people resist using e-commerce because they have stronger risk-avoidance tendencies than younger consumers [48].

Hypothesis 1: E-commerce usage is more prevalent among younger customers.

Gender

Another important demographic factor that explains willingness to make online purchases is gender. Indeed, men and women across cultures and nationalities have different attitudes toward the adoption of e-commerce [55,60]. Accordingly, empirical studies found that gender can distinguish several aspects of computer applications, including e-commerce, such as level of acceptance [23,25], perceived risk [19], and types of product purchased [52]. Yet, research has not reached a solid conclusion on whether men or women are more likely to shop online [9,66].
The principal barrier that inhibits this effort is that most studies do not control for other related variables, such as the education and income level of the respondents. Thus, in a study where men are found to be more active users, it might not be because of gender, but because of their higher income level, as men often earn more than women. This research, though, predicts that gender plays a significant role in explaining e-commerce behavior, even after controlling for the confounding variables of employment status, education, and income level. In brief, we follow the argument of Zhou et al. [66] that the psycho-social traits of men and women differ fundamentally, thus shaping their attitudes toward online shopping. Females and males have different attitudinal and behavioral orientations that, while rooted partly in their genetic makeup, are more importantly derived from their social experiences [49]. First, the purchasing behavior of women is largely driven by emotion and social interaction, while men opt for convenience. Hence, e-commerce is rather less attractive for women due to the absence of direct interaction with the sellers [15]. Second, the types of products sold on the internet are more suitable for men than for women [52,60]. Men are associated with "hard" products like computers, electronic gadgets, and sports apparel, which are widely available online. "Soft" products such as food and textiles for women are not only limited online but also require actual testing prior to purchase. In the same vein, women, more than men, appreciate the physical evaluation of products, including seeing, touching, and feeling the product, in order to make a purchasing decision [11,15]. This is another reason why many women find e-commerce less accommodating than conventional shopping. Lastly, since e-commerce activities are plagued by concerns over privacy and security [7], the platform is less favorable for women, who tend to avoid risk [19], than for men, who trust people more easily [60]. In short, in agreement with the majority of studies, we expect men to be more likely to adopt e-commerce than women [35,61,63].

Hypothesis 2: E-commerce usage is more prevalent among men.

Education

Classic beliefs contend that education is an essential quality that molds a person's value systems, cognitive preferences, learning capabilities, skills, and innovativeness [5]. Accordingly, one's level of education can predict whether one will be proactive or reactive toward cutting-edge technological applications in daily life [4]. In general, people with a higher education demonstrate greater knowledge, experience, and risk tolerance, making them eager to adopt online shopping [6,34]. Previous research supports the argument that a higher level of education has a positive effect on a person's tendency to use an e-commerce platform at both the individual [3,20,57,58] and firm level [12,17].

Hypothesis 3: E-commerce usage is more prevalent among customers with a higher education level.

Employment status and income level

Since education, employment status, and income level are positively related, some studies ascertain that a person's employment status is a significant factor in explaining e-commerce adoption. For example, Pérez-Hernández and Sánchez-Mangas [47] found that the probability of making an online purchase is higher among workers than among unemployed individuals. Perhaps a better explanation is that people with a job earn money that enables them to buy things online.
Thus, the following examination explores the relationship between income level and the probability of online shopping. Indeed, even before online shopping was well established, studies discovered that at-home shoppers are among the higher income earners [14]. Subsequently, empirical studies support similar findings that online shoppers possess more wealth than shoppers at traditional stores [3,38,57,58]. There are two possible reasons for this. First, wealthier people are more convenience-oriented, more risk tolerant, and less brand and price conscious [16], all of which is consistent with e-commerce adoption. Second, most items sold over the internet, like books, computer hardware, etc., are considered "normal goods," that is, goods for which demand increases as income increases [66]. Thus, we posit the following:

Hypothesis 4: E-commerce usage is more prevalent among working customers.

Hypothesis 5: E-commerce usage is more prevalent among customers with a higher income level.

E-commerce country-influencer: national culture

While most studies have emphasized the importance of formal institutions for the diffusion of e-commerce, they often neglect the underlying effect of socio-cultural norms and values [2]. Culture has long been recognized as a vital factor that influences consumer behavior, including the preference for adopting e-commerce [67]. In brief, culture can be defined as the characteristics of a human group that shares basic assumptions about the correct ways to perceive, think, and feel [51]. Although national culture is a macro-level phenomenon, its effect is reasonably observable at the individual level [54]. Thus, examining the culture of consumers' countries is both appropriate and meaningful for explaining individual behavior [31], as demonstrated in other research (e.g., [22]). One of the most prominent sets of dimensions of national culture is that of Hofstede [27]. He defines national culture as a set of assumptions, attitudes, behaviors, beliefs, expectations, and values shared by a group of people, called "the collective programming of the mind" distinguishing members of one group from another. Although a time-orientation dimension (long-term versus short-term) and a happiness dimension (indulgence versus restraint) were added later, this research focuses on the four original dimensions: power distance, individualism versus collectivism, masculinity versus femininity, and uncertainty avoidance.

Power distance

The power distance dimension generally measures the degree to which a society accepts inequality in the distribution of power [28]. In practice, high power distance societies accept the establishment of hierarchical order with no justification required. On the other hand, low power distance societies enjoy equality, egalitarianism, and neutrality, which removes the barriers between superiors and subordinates. In other words, there is stronger interaction between the two parties in the latter than in the former, further spurring trust among the members [65]. Yet this trust is a pre-condition for the adoption of e-commerce because of the great uncertainty inherent in online shopping, e.g., faceless transactions, virtual payment, etc. [2]. In contrast, the trust deficit in a high power distance society tends to create a prejudiced attitude toward other people on the internet, such as the belief that sellers are dishonest and unethical, which impedes the interest in using e-commerce [65].
Hypothesis 6: E-commerce usage is more prevalent among customers in countries with low power distance.

Individualism

The portrayed self-image dimension defines the number of social contacts that a person feels responsible for taking care of: either oneself and one's immediate family (individualism) or extended relatives and other ingroup members of the society (collectivism) [28]. The theoretical approach to understanding the relationship between individualism-collectivism and online shopping behavior emphasizes the emergence of trust in society [2]. Although trust is considered to be stronger in a collectivist society, it is restricted to ingroup members only [30]. In other words, this type of society does not trust new, strange, or unusual things [59]. Therefore, resistance to change is perceivably higher among customers in collectivist nations, causing the acceptance of new technology to be relatively slower [2]. On the other hand, individualistic countries uphold the principle of universalism, where citizens identify themselves within broader groups of society. These people are also good at meeting new members and are generally more willing to trust them [64]. Applying this understanding to the context of technology acceptance, we propose that the propensity to use e-commerce is higher among customers in individualistic countries.

Hypothesis 7: E-commerce usage is more prevalent among customers in high individualism countries.

Masculinity

Masculinity and femininity describe social gender roles that either emphasize achievement, heroism, assertiveness, and material rewards for success ("tough"-related values in the former) or appreciate cooperation, modesty, and caring ("tender"-related values in the latter) [28]. In fact, Hofstede [27] found that people in high masculinity societies consider most people to be untrustworthy. Instead, high femininity societies easily embrace harmonious relationships with other people. Srite and Karahanna [54] posit that e-commerce provides a medium for shopping that is more convenient and pleasant than traditional stores, and thus is likely preferred by people in a high femininity society.

Hypothesis 8: E-commerce usage is more prevalent among customers in low masculinity countries.

Uncertainty avoidance

Uncertainty avoidance (UA) describes the extent to which a society feels threatened by, as opposed to tolerant of, uncertainty and ambiguity [28]. In practice, people in low UA nations are more willing to accept risks when trying new things, and thus are more likely to venture into entrepreneurship and adopt modern technologies [21]. On the other hand, high UA societies are comfortable with the status quo and resistant to change, and thus act conservatively [56]. As a consequence, they are slower in embracing modern technology [64]. Another reason to suspect that low UA countries are more likely to adopt e-commerce is that trust is high among their citizens [31]. Since these people are open to accepting change even though it is potentially risky, they are ready to build trust over those uncertainties. Similarly, as a new form of trade, e-commerce is still considered more uncertain than traditional stores, and only customers with strong trust are willing to use it. In contrast, members of high UA cultures would simply ignore the existence of this new technology [8].

Hypothesis 9: E-commerce usage is more prevalent among customers in low uncertainty avoidance countries.

The framework of this research is presented in Fig. 1.
Methods and data

To test the hypotheses, we utilize individual-level data from the World Bank Global FINDEX database 2017. FINDEX is the world's most complete dataset on financial inclusion around the globe, sponsored by the Bill and Melinda Gates Foundation. In 2017, FINDEX collected survey data from over 150,000 national representatives in more than 140 countries. Although FINDEX 2017 covers 10,206 individuals in nine ASEAN countries (all except Brunei), data on Hofstede's national culture are available for only six countries: Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam. After integrating data from both sources, including World Bank data for the controls, the final sample comprises 5870 individuals.

Dependent variable

The e-commerce adoption behavior of an individual is measured as a binary variable: whether or not the respondent has personally bought things online in the past 12 months. This is a more accurate measure of e-commerce usage because, prior to the 2017 data, FINDEX asked whether respondents "... made payments on bills or bought things online using the Internet," where the use of the internet for purposes other than shopping is ambiguous.

Explanatory variables

As mentioned, this study hypothesizes on both individual demographic characteristics and country-level culture. The former draw on FINDEX, while the latter utilizes Hofstede's national culture scores, which are publicly accessible online. Details on each variable are presented in Table 1. While age is measured continuously, gender and employment status are dichotomous, and education and income level are ordinal. All dimensions of national culture are scored from 0 to 100 to describe the degree of individualism versus collectivism, masculinity versus femininity, etc.

Control variables

In order to improve the robustness of the model, we include individual and country confounding variables: account ownership and the economic status of a country. First, eventual e-purchase behavior is largely determined by the ownership of a means to make online payments [44]. The business-to-customer e-commerce index developed by the United Nations Conference on Trade and Development also acknowledges that the demand for e-commerce is justified when a person has an account at a bank or another type of financial institution, or a mobile money account. Second, we control for a country's general economic development: gross domestic product (GDP) per capita. Since there are only six countries for analysis, we are unable to include more country controls, because most of them are strongly correlated, potentially causing multicollinearity. For example, GDP per capita is correlated with the internet penetration rate at 0.794, with the Global Cybersecurity Index at 0.850, and with population size at −0.911. The correlations among the variables are shown in Table 2.

Descriptive analysis

Unlike advanced regional economic blocs such as the European Union, ASEAN integration is rather less extensive; most terms are confined to free trade agreements. The principal barrier to greater integration is the wide gaps between countries with regard to political ideology, economic development, and socio-cultural values. In the same vein, the national culture of these countries is also varied, particularly on the dimensions of power distance and uncertainty avoidance. For instance, Malaysia has the highest power distance score not only in the region but also in the world (100), while Thailand has the lowest in ASEAN (64).
Also, Thailand is the most risk averse, with an uncertainty avoidance score of 64, while Singapore is extremely risk tolerant (8). In sum, the heterogeneity across member states provides a unique context for researching how individual characteristics interact with informal institutions to influence the online purchasing behavior of customers. A graphical representation of the national culture scores is displayed in Fig. 2.

Results and discussion

Before running the model, we took adequate measures to check for common method bias (although this is less likely because our dataset is compiled from multiple sources), multicollinearity (using variance inflation factors), and outliers. The final dataset is, therefore, robust for analysis. To test the hypotheses, we employ ordinary least squares (OLS) regression for estimating the relationship between the explanatory variables and the online purchasing behavior of customers. Table 3 presents the results. For the individual predictors, Models 2 and 4 consistently support hypotheses 1, 3, 4, and 5: e-commerce is more prevalent among customers who are younger, have a higher level of education, are currently employed, and have a higher salary. However, hypothesis 2 is not supported, because female customers are found to be more active online shoppers than men. Although this finding contradicts our initial expectations, it is not totally surprising, because some earlier research has found similar results, and there are both theoretical and methodological reasons to support the finding. First, although men have long been recognized as active users of technology breakthroughs, recent trends show more women embracing online applications, including e-commerce platforms [25]. In certain contexts, e-commerce adoption among women has exceeded that of male shoppers [55]. In fact, global retailers now consider women to be one of the fastest growing segments to be served through online shopping platforms [60]. Second, the low e-commerce usage among men in this study cannot be entirely attributed to gender-related physiological characteristics, because e-commerce behavior is also largely explained by social experiences [49]. Since our model does not control for prior social experience, we could argue that some male customers may be reluctant to purchase online because their close relatives and friends had bad experiences with online platforms, e.g., fraud or technical errors causing double charges. In terms of national culture, Models 3 and 4 confirm hypotheses 7, 8, and 9: countries with strong individualism, low masculinity, and low uncertainty avoidance exhibit higher rates of e-commerce adoption among their citizens. However, no support is found for hypothesis 6. Our results suggest that power distance does not explain the discrepancy in e-commerce diffusion between ASEAN countries. Although prior research argued that the power distance dimension does explain the level of trust in a society [65], the effect on eventual e-commerce behavior is not apparent, presumably because the interactions on an online platform are virtual. Therefore, power gaps between parties are less visible in electronic interpersonal communication, regardless of the power distance of the society. Lastly, for the control variables, the results show that account ownership and a country's GDP are significantly and positively related to e-commerce adoption. Table 4 displays the profile of e-commerce users in ASEAN following the results of this study.
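To make the estimation strategy concrete, the sketch below shows how such a pooled model could be fit in Python. It is a hypothetical reconstruction: the column names, the merge of Hofstede scores by country, and the HC1 robust standard errors are our assumptions, not details reported by the authors.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# `df`: one row per FINDEX respondent, with Hofstede scores and GDP per
# capita merged in by country (all column names are hypothetical).
X_COLS = ["age", "female", "education", "employed", "income_quintile",
          "power_distance", "individualism", "masculinity",
          "uncertainty_avoidance", "account_owner", "log_gdp_pc"]

def fit_lpm(df: pd.DataFrame):
    X = sm.add_constant(df[X_COLS])
    # OLS on a binary outcome = a linear probability model, as in Table 3
    model = sm.OLS(df["bought_online"], X).fit(cov_type="HC1")
    # VIF screen for multicollinearity, as described above
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return model, vifs
```

Coefficients on the Hofstede scores are then read as the change in the probability of buying online per one-point change in the cultural dimension, holding the individual covariates and controls constant.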
Conclusions As an economic bloc, can ASEAN ensure that the strategic plans for promoting e-commerce platforms in the AEC Blueprint 2025 are equally beneficial to all member countries? Do demographic characteristics interact with national culture to influence the eventual online shopping behavior of customers? We attempt to answer these questions by arguing that e-commerce adoption is explained by the integration of factors at both levels: individual traits and country features (the regression results are reported in Table 3; ***p < 0.001, **p < 0.010, *p < 0.050). More importantly, we posit that national culture plays as important a role as formal institutions in predicting the prevalence of e-commerce adoption among the citizens of a country. Thus, the quest to identify online shopping trends requires exploring not only who the customer is, but also where they live. Our empirical approach draws on multiple sources of data to test the relationship of individual characteristics (age, gender, education, employment, and income) and national culture (power distance, individualism, masculinity, and uncertainty avoidance) to the propensity for online shopping among 5,870 individuals in six ASEAN countries. The results of this research highlight important contributions for academic and practical purposes. First, the profile of e-commerce users in ASEAN is rather homogeneous in terms of demographics. Online shopping platforms are found to be used mostly by customers who are female, young, better educated, currently employed, and higher earning. This research provides evidence that, although one of the core missions of AEC 2025 is to ensure that all citizens in the region benefit from the advantages offered by e-commerce, eventual adoption remains a privilege of young individuals with better education and higher salaries. Implicitly, e-commerce is prevalent only among those who have the access and ability to exploit the technology and, more importantly, can afford to pay. Second, this study establishes a conceptual link between two important yet isolated research themes: e-commerce behavior and institutions. Specifically, we shed light on the fact that national culture is as important as formal institutions in pushing customers toward adopting e-commerce. From this, we propose that policies designed to encourage individuals to use the internet for shopping should focus not only on formal institutions such as infrastructure and regulation, but also on cultivating favorable social values related to virtual business activities, such as trust. This does not mean that tangible aspects are less important: our controls show strong evidence that a prosperous economy is vital, as it helps a country provide better internet coverage and speed, as well as strong financial institutions for account ownership. In sum, policy makers, particularly in ASEAN member countries, should acknowledge that e-commerce platforms are still not fully exploited by all, but rather by a specific type of customer: younger, more educated, and wealthier. To a certain extent, this profile signals that the trend favors people in more developed countries, where the education system is better and disposable income is higher. This should be an alarming call for ASEAN committees to continuously and aggressively narrow the development gaps across member countries. Lastly, macro-level initiatives to promote e-commerce adoption should not neglect the important aspect of socio-cultural values.
Although national culture does not change visibly in the short or medium term, local governments should strive to nurture favorable values in society, such as trust and risk tolerance. Despite its contributions, this research should be viewed in light of several limitations, which in turn suggest directions for future research. First, the data cover only six ASEAN countries. Even if all ten members had been tested, this would still be few in comparison with other, larger regional blocs such as the 54 members of the African Union, the 28 members of the EU, and the 22 countries of the Arab League. This limitation concerns the generalizability of our findings, particularly as it prevents us from including more country-level controls. Second, our dataset relies solely on secondary data that do not capture the personalities of individuals. Although our theoretical argument is partly based on the relationship between demographic factors and personality, this study does not test that relationship directly to further examine its effect on online purchase behavior. Lastly, this research does not demonstrate causality. Rather, we hold to the strong assumption that individual behavior is determined by both the personal and the social environment, not vice versa. In other words, it is theoretically sound to argue that a customer's age determines his or her online purchase behavior, but it is extremely difficult to argue that shopping online can make someone attain a higher level of education. Abbreviations ASEAN: Association of Southeast Asian Nations; AEC: ASEAN Economic Community; GDP: gross domestic product; UA: uncertainty avoidance; OECD: Organisation for Economic Co-operation and Development; OLS: ordinary least squares.
6,794.2
2021-01-12T00:00:00.000
[ "Economics" ]
Domain walls without a potential We show that domain walls, or kinks, can be constructed in simple scalar theories where the scalar has no potential. These theories belong to a class of k-essence where the Lagrangian vanishes identically when one lets the derivatives of the scalar vanish. The domain walls we construct have positive energy and stable quadratic perturbations. As particular cases, we find families of theories with domain walls and their quadratic perturbations identical to the ones of the canonical mexican hat or sine-Gordon scalar theories. We show that canonical and non-canonical cases are nevertheless distinguishable via higher order perturbations or a careful examination of the energies. In particular, in contrast to the usual case, our walls are local minima of the energy among the field configurations having some fixed topological charge, but not global minima. I. INTRODUCTION Topological and non-topological solitons play an important role in various domains of physics, ranging from liquid crystals and fluid mechanics to cosmology (see e.g. [1][2][3][4][5]). The simplest and most canonical examples of such objects are certainly domain walls, or kinks, which are known to exist in particular in simple scalar theories where the vacuum manifold possesses several connected components. Considering such a theory, with a scalar φ and a potential V(φ), domain walls can exist if the potential has more than one minimum. The purpose of this work is to show that similar domain wall solutions exist in scalar theories with no potential, i.e. theories where the Lagrangian vanishes identically when the derivatives of the scalar vanish. Among such theories, we will concentrate here on Lorentz invariant theories where the Lagrangian depends both on the real scalar field φ and on its kinetic term X, defined by $X = -\frac{1}{2}\eta^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi$, (1) assuming space-time is endowed with a Lorentzian flat metric $\eta_{\mu\nu}$ (we will not consider here gravitating solutions). Hence we will consider Lagrangians L of the form $L = P(\phi, X)$, (2) where the dependence of P on X and φ is non-trivial and in particular not given by a sum of a free kinetic energy X and a potential energy V(φ). Such theories have been considered in many instances and are usually denoted as k-essence in the context of cosmology and gravitation [6][7][8][9]. They have second order equations of motion and can even be generalized to Lagrangians including up to second derivatives of the field, the so-called Horndeski theories [10,11]. Such theories can be used in particular to mimic dark matter via the MOND paradigm [6,12] or even possibly as dark matter itself [13], to generate inflation without a potential [9,14,15] or to get a late time accelerated expansion [8,16]. In this context, the possibility of finding solitonic configurations in theories with non-canonical kinetic terms was considered in several works, in particular in the Horndeski framework [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34], and the corresponding field configurations are sometimes dubbed "k-defects" [17]. Similar solutions also arose in the past in other contexts, for example in the well known Skyrme model [35]. The k-defects, in particular, were found to behave differently from standard defects due to the different nature of the kinetic terms [17,30]; however, at least in the single field case, all the existing k-defects are, despite their name, supported by a non-trivial potential in the action, just as the usual topological defects are. I.e.
in the solutions considered so far, P(φ, X = 0) has a non-trivial dependence on the field. Here we show that defect field configurations, specifically kinks, can be obtained in theories with no potential, i.e. theories where the Lagrangian vanishes identically if the kinetic term X is set to zero. This might not come as a surprise, considering that one is allowed to freely choose the function P to produce a given specified field profile; however, we will also show that the quadratic perturbation theory around these solutions can be made stable. In fact we will further show that simple models can be considered where both the kink solution and its perturbations are identical to those of the canonical theories usually considered. We will not attempt here a full classification of the theories allowing such kinks "without a potential" but will only exhibit some simple models as an existence proof and discuss some of the properties of these kinks in comparison with the usual ones. This work is organized as follows: in the next section II we recall some properties of kinks of usual scalar theories. We then introduce k-essence domain walls (section III) and show how one can obtain kinks which have a profile identical to that of the canonical mexican hat model, and discuss their stability and topological properties in a non-perturbative way. This is then generalized to other canonical profiles, including the one of the sine-Gordon model (section IV). In a following section, we discuss the perturbation theory around our wall solutions (section V) before concluding (section VI). Two appendices give technical details on some results introduced in the body of the text. A. Actions and field equations for canonical domain walls Canonical domain walls can be constructed in a fairly standard theory for a scalar field φ with a Lagrangian of the form $L = X - V(\phi)$, (3) where the field is assumed to live in a D-dimensional flat space-time with metric $\eta_{\mu\nu} = {\rm diag}(-1, 1, \cdots, 1)$, and V(φ) is the potential energy. In the canonical case, V is chosen so that it has two or more minima (with the same values of the potential V) at different values $\phi^k_{\rm min}$ of the field (where k indexes the different minima). Domain walls¹ are then obtained as static vacuum solutions φ(z) of the field equations which depend on only one space-like direction z (to simplify the discussion, one also usually assumes that the field lives in D = 2 dimensions) and interpolate between different adjacent minima $\phi^{-\infty}_{\rm min}$ at z = −∞ and $\phi^{+\infty}_{\rm min}$ at z = +∞. For the canonical models (3), a given vacuum profile φ(z) obeys the vacuum field equation $\phi'' = V_\phi(\phi)$, which has the first integral $\frac{1}{2}\phi'^2 = V(\phi) + J_0$, (4) where J₀ is a constant, and here and henceforth a prime means a derivative w.r.t. z. Note further that, when we want to stress that a given expression is valid only on shell for the background domain wall solution, we will replace the straight symbols (e.g. "=", designating off-shell relations) by curly symbols (e.g. "≈"). As a consequence, the kink profile obeys $\phi' \approx \pm\sqrt{2\,(V(\phi) + J_0)}$. B. Some energy considerations A standard trick due to Bogomolny [36] (that we write here in a slightly non-standard way) then allows one to easily discuss the total energy² H of such a configuration. Indeed, this energy (or the energy per unit length transverse to the direction z if D > 2) is given by the integral over z of the Hamiltonian density H(z) given by $\mathcal{H}(z) = \frac{1}{2}\phi'^2 + V(\phi)$, so that one has $H = \int dz\,\Big[\frac{1}{2}\Big(\phi' \mp \sqrt{2(V+J_1)}\Big)^2 \pm \phi'\sqrt{2(V+J_1)} - J_1\Big]$, (8) where J₁ is an arbitrary constant.
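Spelling out the completion of the square for the simplest case J₁ = 0 (our own transcription of the standard Bogomolny rearrangement, consistent with (8)):

```latex
H \;=\; \int_{-\infty}^{+\infty} dz\,
        \frac{1}{2}\Big(\phi' \mp \sqrt{2V(\phi)}\Big)^{2}
   \;\pm\; \int_{\phi(-\infty)}^{\phi(+\infty)} \sqrt{2V(\phi)}\,d\phi
 \;\;\geq\;\; \left|\,\int_{\phi(-\infty)}^{\phi(+\infty)} \sqrt{2V(\phi)}\,d\phi\,\right| ,
```

with saturation exactly when φ′ = ±√(2V), i.e. on the kink obeying (4) with J₀ = 0, reproducing the wall energy (10) below.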
Choosing J₁ = J₀, we see that the last bound is saturated for a solution of the field equations obeying (4), as the square appearing on the right hand side of (8) vanishes. Moreover, it is possible to make this energy finite for such a solution representing a domain wall. In this case, one takes J₀ = 0 and the domain wall energy H_dw is given by the simple expression $H_{\rm dw} = \int_{\phi^{-\infty}_{\rm min}}^{\phi^{+\infty}_{\rm min}} \sqrt{2V(\phi)}\;d\phi$. (10) We will later enforce this finiteness as well as demand that the energy density of the wall is locally finite. Thus we shall require that $\int \mathcal{H}(z)\,dz < +\infty$ (11) and $\forall z,\ |\mathcal{H}(z)| < +\infty$. (12) C. Changing variables The simple form of the first integral (4) can be used to enlighten the nature of the canonical domain wall solutions as well as to ease the finding of the solutions to be discussed thereafter. Indeed, for a generic φ, we define ψ as obeying $\frac{d\phi}{d\psi} = \sqrt{2V(\phi)}$, (13) i.e., the φ(ψ) solution of the above equation is given by the same functional dependence as the φ(z) solution of the domain wall profile equation (4) with J₀ = 0. Using the new variable ψ as field variable, the domain wall profile equation simply reads ψ′ = 1, and in the ψ variable the solution is then simply represented by³ ψ = z. Using the ψ variable, we see that the Lagrangian (3) simply reads $L = v(\psi)\,\big(2X_\psi - 1\big) \equiv v(\psi)\,w(X_\psi)$, where v(ψ) is defined simply by the relation v(ψ) = V(φ(ψ)), X_ψ is defined as in (1) replacing there φ by ψ, and the above equation also defines the function w(X_ψ). Considering the above Lagrangian as a starting point, and looking for a one dimensional profile ψ(z), we see that the part of the field equations deriving from this Lagrangian and not proportional to second derivatives of the field simply reads $v_\psi(\psi)\,\big(2X_\psi\,w'(X_\psi) - w(X_\psi)\big)$. Hence, looking for a profile of the form ψ = λz, and using that for such a profile one has obviously ψ″ = 0 and X_ψ ≈ −λ²/2, we see that we get a solution provided −λ²/2 is a root of the function y defined by $y(X_\psi) \equiv 2X_\psi\,w'(X_\psi) - w(X_\psi)$. In the canonical case, one has w(X_ψ) = 2X_ψ − 1 and hence y(X_ψ) = 2X_ψ + 1. Obviously λ = ±1 generates a solution irrespectively of the form of v (say, provided that v does not vanish as ψ varies over the real line). To get a proper domain wall, one should then check that the obtained profile has localized energy and is stable. The previous expression (10) yields the following form of the energy density $\mathcal{H}_{\rm dw}(z) = 2\,v(\psi(z))$, (18) yielding the total energy $H_{\rm dw} = \int_{-\infty}^{+\infty} 2\,v(\psi)\,d\psi$. (19) Hence, a necessary condition to have a domain wall is that the above integral converges. D. Some canonical models Among the most studied and well known cases which have these properties is the model with the mexican hat potential $V_{\rm mh}(\phi) = \frac{1}{2}\,\big(1-\phi^2\big)^2$. (20) The kink and antikink solutions are given by the profiles $\phi_{\rm mh}(z) = \pm\tanh(z)$ (21) and interpolate between the vacua $\phi^{\pm\infty}_{\rm min} = \pm 1$. This also yields the following relation between φ and ψ as defined in equation (13): $\psi = \tanh^{-1}\phi \;\Leftrightarrow\; \phi = \tanh\psi$. (22) Using the variable ψ, the Lagrangian reads $L_{\rm mh} = \frac{2X_\psi - 1}{2\cosh^4\psi}$, (23) the function v(ψ) is given here by $v_{\rm mh}(\psi) = \big(2\cosh^4\psi\big)^{-1}$ (25) and the energy of the solution is just found to be $H_{\rm dw} = 4/3$. (26) Another case of interest is the sine-Gordon potential $V_{\rm sG}(\phi) = 1 - \cos\phi$, (27) which obviously has the infinitely many minima V = 0 at the field values $\phi^k_{\rm min} = 2\pi k$. The kink profile which interpolates between the adjacent minima $\phi^k_{\rm min}$ and $\phi^{k+1}_{\rm min}$ is obtained to be $\phi_{\rm sG}(z) = 2\pi k + 4\arctan e^z$. (28) Remarkably, the sine-Gordon theory looks very similar to the mexican hat theory (23) when using the ψ variable.
Indeed, in that case, we get that the relation between ψ and φ is given by $\phi(\psi) = 2\pi k + 4\arctan e^\psi \;\Leftrightarrow\; \psi = (-1)^k \ln\tan\frac{\phi}{4}$ (29) and the function v is just obtained to be given by $v_{\rm sG}(\psi) = 2\cosh^{-2}\psi$. As a result, the sine-Gordon Lagrangian now reads $L_{\rm sG} = \frac{2\,\big(2X_\psi - 1\big)}{\cosh^{2}\psi}$. (31) Note that, of course, the above changes of variables φ(ψ) are so defined that they map the real line (the domain of variation of ψ) to a finite interval (the domain of variation of φ) which does not represent the full range of variation of the φ field of the original model; e.g. it does not cover the large values of φ in the mexican hat potential. Note also that the Lagrangians (23) and (31) are singular at the ends of the interval of definition of ψ. We will come back to this issue later. Given the similarity between the Lagrangians (23) and (31), we can easily generalize these canonical models to a larger set with Lagrangians of the form $L_k = \frac{K}{\cosh^{2k}\psi}\,\big(2X_\psi - 1\big)$, (32) where K is some positive constant and k an integer (an even larger family exists letting k be half integer). It is easy to see that ψ′ = ±1 provides a solution of the field equations of the kink type. The energy of this solution is finite and given by $H_{\rm dw} = 2K\,I_k$, (33) where I_k can be computed as $I_k = \int_{-\infty}^{+\infty} \frac{d\psi}{\cosh^{2k}\psi} = \frac{\sqrt{\pi}\;\Gamma(k)}{\Gamma\big(k+\frac{1}{2}\big)}$, (34) where the above expression holds in particular for integers⁴ and half integers k. Consider now the change of variable of the form $\frac{d\phi}{d\psi} = \sqrt{2K}\,\cosh^{-k}\psi$. When ψ varies over the whole real line, φ varies over a finite interval, and because cosh is a positive function, we see that the above defined φ[ψ] is invertible into a ψ[φ] on this interval. This change of variable puts the Lagrangian (32) in the standard form (3) with a specific potential V(φ) which, at this stage, is defined only on this finite interval. However, it is easy to see that dV/dφ vanishes at the ends of this interval (where ψ diverges), allowing one to extend the domain of variation of φ to the entire real line, either by making V periodic (which is always possible) or by using an analytic extension, possibly non-periodic. This latter possibility arises e.g. in the case of the canonical mexican hat model (20), which corresponds to k = 2. The k = 6 and k = 10 cases also yield analytical expressions for ψ[φ] (however not very enlightening), which in turn result in potentials having a similar shape to the mexican hat one. In turn, the sine-Gordon (k = 1) and the k = 1/2 cases have potentials which are periodic by analytic extension. We show these potentials on figures 1 and 2. The stability analysis of those models (and their natural generalisation to the k-essence framework) is presented later, in sec. IV E. E. Stability and topology The stability of the canonical domain walls can be addressed in several ways. Before recalling in the next subsection some standard results on perturbations of canonical domain walls, we first discuss here their non-perturbative stability, appealing to some "topological" arguments. We feel that this discussion is often obscured in the literature by an entanglement of "topological" and non-"topological" arguments, and we would like to clarify this below as it matters for the discussion of the stability of the non-standard domain walls to be introduced later. We recall first that the bound (9) on the total energy also holds for time dependent solutions, as the kinetic energy only adds a positive contribution to the right hand side of (6). More specifically, we can write the conserved total energy H(t) of any field configuration φ(t, z) as $H(t) = \int dz\,\Big[\frac{1}{2}\dot\phi^2 + \frac{1}{2}\Big(\phi' \mp \sqrt{2V(\phi)}\Big)^2 \pm \phi'\sqrt{2V(\phi)}\Big]$, (37) where a dot means a time derivative.
Separating the different contributions, we have $H(t) = H_{\rm kin}(t) + H_{\rm grad}(t) + H_\infty(t)$, where the terms appearing on the right hand side are given by $H_{\rm kin}(t) = \int dz\,\frac{1}{2}\dot\phi^2$, (38) $H_{\rm grad}(t) = \int dz\,\frac{1}{2}\Big(\phi' \mp \sqrt{2V(\phi)}\Big)^2$, (39) $H_\infty(t) = \pm\int dz\,\phi'\,\sqrt{2V(\phi)}$. (40) Obviously, H_kin and H_grad are positive, so any field configuration has a total energy larger than H_∞, which in turn depends only on the values of the field at z = ±∞ and is just given by H_dw for a canonical domain wall configuration. A standard statement is that the canonical domain walls are stable due to the topology of the vacuum manifold. More specifically, the idea is here that a given vacuum of a canonical theory (3) obeys X = 0 and φ = φ^k_min for some specific k, and is then indexed (classically) by the field value φ^k_min. In order to have a finite energy, a given domain wall solution must lie in a vacuum at z = ±∞, and the values of the field at ±∞ cannot change continuously while conserving the finite energy of this solution. This is usually related to the existence of a "topological charge" Q defined from the current $J^\mu = C\,\epsilon^{\mu\nu}\,\partial_\nu\phi$, (41) where C is a proper normalization constant and $\epsilon^{\mu\nu}$ is the fully antisymmetric Levi-Civita contravariant tensor. By construction, this current is conserved irrespectively of the field equations, and for a generic field configuration φ(t, z) one has $J^0 = C\,\phi'$. The topological (conserved) charge is then defined as $Q = \int_{-\infty}^{+\infty} J^0\,dz = C\,\big[\phi(t, +\infty) - \phi(t, -\infty)\big]$. (42) The domain wall total energy is, as can be seen from (10), related to Q. Note however that this argument on stability is not as clear as it may seem, and we would like to discuss it below in some detail. First we note that there is some arbitrariness in the definition of the "topological charge". Indeed, the conservation of the current J^μ as defined above is just an obvious and trivial consequence of the antisymmetry of $\epsilon^{\mu\nu}$, so that one could have replaced φ on the right hand side of (41) by any function of φ and obtained a different conserved current and a different associated charge. Given the form of the decomposition (37), an interesting choice of current $\tilde J^\mu$ is given by $\tilde J^\mu = \tilde C\,\epsilon^{\mu\nu}\,\sqrt{2V(\phi)}\,\partial_\nu\phi$, where $\tilde C$ and φ₀ (entering the superpotential below) are some constants, implying that $\tilde J^0 = \tilde C\,\sqrt{2V(\phi)}\,\phi'$, so that the conserved charge is now $\tilde Q = \tilde C\,\big[W\big(\phi(t,+\infty)\big) - W\big(\phi(t,-\infty)\big)\big]$. For a generic field configuration φ(t, z) one has now a clear identity between the topological charge $\tilde Q$ and H_∞, while this was not true using the topological charge Q, given that in general H_∞ does not depend only on the difference of the field values at z = ±∞. Note that the form of the charge $\tilde Q$ is associated with a superpotential W(φ) defined by $W(\phi) \equiv \int_{\phi_0}^{\phi} \sqrt{2V(u)}\;du$, (44) as observed by Bogomolny [36] (see also e.g. [5]). Let us then consider the issue of the stability of a given domain wall profile. To that end we consider a given field configuration φ(t, z) which, at a time t = t₀, differs from some given domain wall profile φ_dw(z) only in a bounded region. Obviously, because: (i) the static (and eternal) domain wall solution (given by φ_dw(t, z) = φ_dw(z) ∀t) has vanishing contributions H_kin and H_grad, (ii) the field profiles φ(t₀, z) and φ_dw(z) are assumed to differ only in a bounded region and hence have the same energy contribution H_∞, which is conserved, and (iii) the contributions H_kin and H_grad are always positive, we see that the domain wall is an absolute minimum of the energy for field configurations having the same conserved charge $\tilde Q$, and that no localized perturbation of it can change the topological charge $\tilde Q$. This shows that the wall configuration is stable, but we stress that this argument is unrelated to the topology of the vacuum manifold: it only relies on the form of the energy (37).
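As a concrete illustration of the superpotential construction (our worked example, taking φ₀ = 0 and the mexican hat potential (20)):

```latex
W(\phi) \;=\; \int_{0}^{\phi}\sqrt{2\,V_{\rm mh}(u)}\;du
       \;=\; \int_{0}^{\phi}\big(1-u^{2}\big)\,du
       \;=\; \phi-\frac{\phi^{3}}{3},
\qquad
\tilde Q \;\propto\; W(1)-W(-1) \;=\; \frac{4}{3},
```

which indeed matches the kink energy H_dw = 4/3 quoted in (26).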
F. Kinks perturbations The perturbative stability of the kinks can be checked by deriving the action for the second order perturbations around them, which is also the starting point for the quantization of these perturbations, using the kinks as vacua. By Fourier decomposing a given such perturbation ϕ as $\varphi = \varphi_k(z)\,e^{i\omega_k t}$, one sees that each mode then obeys $\big(Z^{zz}\,\varphi_k'\big)' + \omega_k^2\,\big(-Z^{00}\big)\,\varphi_k - M^2\,\varphi_k = 0$, (45) where Z^{00}, Z^{zz} and M² are z- and model-dependent (i.e. depend on the wall profile). The above equation is in Sturm-Liouville form⁵ and the modes obey an orthogonality relation with the measure dz(−Z^{00}) of the form (see e.g. [37]) $\int dz\,\big(-Z^{00}\big)\,\varphi_k\,\varphi_{k'} \propto \delta_{kk'}$. One can show that a generic kink always possesses a zero mode (i.e. a solution of the above (45) with ω_k = 0), $\varphi_0 \propto \phi'$, associated with the translation of the defect along z. In the canonical cases discussed here, with potentials (20) and (27), this zero mode is the lowest lying mode of the spectrum and belongs to a discrete part of the spectrum (in the case of potential (20), there is another discrete mode); it can be normalized with the above measure, and there is a continuum above (see e.g. [2,3]). The conditions $Z^{00} < 0 < Z^{zz}$ (47a) and $\int dz\,\big(-Z^{00}\big)\,\phi'^2 < +\infty$ (47b) are fulfilled, indicating stable perturbations.⁶ Indeed, the first condition makes sure that the perturbations are free from tachyonic instabilities, while together with the last condition it implies that the perturbations have positive energy and obey a hyperbolic equation. The last condition also implies that the zero mode ϕ₀ has a finite norm (as one has $\varphi_0^2 \propto \phi'^2 = -2X$). For the canonical models above, we find $Z^{00} = -1,\ Z^{zz} = 1,\ M^2 = 4 - 6\cosh^{-2}z$ for the mexican hat model (3)-(20), and $Z^{00} = -1,\ Z^{zz} = 1,\ M^2 = 1 - 2\cosh^{-2}z$ for the sine-Gordon model (3)-(27). Once again the two different models exhibit similar features. Both obey the conditions (47). Note that we can have stable perturbations even if the squared mass M²(z) is locally negative; indeed, this is what happens above around the origin z = 0. A. Generic features Starting from a model with a Lagrangian of the form (2), and restricting ourselves to a 1+1 dimensional space-time with metric $\eta_{\mu\nu} = {\rm diag}[-1, 1]$, we look for a kink solution φ(z) with stable quadratic perturbations. For such a static configuration, the field equations have the first integral⁷ $J \equiv 2X\,P_X - P \approx J_0$, (50) where J₀ is a constant. This relation is the equivalent of the canonical (4), up to a sign, and it is related to the field equations of the scalar, which for the static profile read $\big(P_X + 2X\,P_{XX}\big)\,\phi'' - 2X\,P_{X\phi} + P_\phi = 0$. (51) One also has a corresponding conservation relation which is valid for an arbitrary number of dimensions D. Note that in general (i.e. without assuming any special field configuration, so in particular without assuming that φ depends only on one coordinate z as in the domain wall case) a Lagrangian (2) has to obey some conditions in order for the theory to be consistent for arbitrary field configurations. These conditions read [9,[38][39][40][41][42]] $0 < P_X$ (53) and $0 < 2X\,P_{XX} + P_X$. (54) The first condition above is necessary in order to have a bounded from below Hamiltonian, while the two conditions together lead to hyperbolic equations of motion. In particular, note that the second one enters as the coefficient of the second derivative of the field in equation (51). We will come back to these conditions later. A domain wall being static, its energy density H(z) is simply given by minus the on-shell value of its Lagrangian, $\mathcal{H}(z) \approx -P$, (55) and in order to have a proper domain wall solution, we shall demand that the energy conditions (11) and (12) hold. We will also look for kink solutions where J₀ vanishes, as is the case for the kinks of the canonical models discussed in the previous section.
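The Sturm-Liouville problem (45) can be solved numerically; a minimal sketch (our illustration, not part of the paper) for the canonical mexican hat kink, whose two discrete modes are the standard analytic results ω² = 0 and ω² = 3, with a continuum starting at ω² = 4:

```python
# Finite-difference solve of  -phi_k'' + (4 - 6/cosh^2 z) phi_k = omega^2 phi_k,
# the fluctuation equation around the mexican hat kink (Z^00 = -1, Z^zz = 1).
import numpy as np

N, L = 1200, 20.0
z = np.linspace(-L, L, N)
h = z[1] - z[0]

M2 = 4.0 - 6.0 / np.cosh(z) ** 2            # squared mass term on the kink
lap = (np.diag(-2.0 * np.ones(N)) +
       np.diag(np.ones(N - 1), 1) +
       np.diag(np.ones(N - 1), -1)) / h**2  # 1D Laplacian, Dirichlet walls

H = -lap + np.diag(M2)                       # Schroedinger-like operator
omega2 = np.linalg.eigvalsh(H)
print(omega2[:3])  # approximately [0.0, 3.0, ~4.0]; continuum above 4
```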
The perturbations ϕ(t, z) around a given background configuration φ(z) have, at quadratic order, a Lagrangian of the form $L_2 = -\frac{1}{2}\,Z^{\mu\nu}\,\partial_\mu\varphi\,\partial_\nu\varphi - \frac{1}{2}\,M^2\,\varphi^2$, (56) where the kinetic matrix is diagonal. Its non-trivial components are $Z^{00} = -P_X$ and $Z^{zz} = P_X + 2X\,P_{XX}$, evaluated on the background, while the squared mass term M² is built from the first and second derivatives of P with respect to φ and X, also evaluated on the background. (57) Following the same path as in the previous section, we Fourier transform a perturbation as $\varphi(t,z) = \varphi_k(z)\,e^{i\omega_k t}$, so that every Fourier mode obeys equation (45). As in the canonical case, we can show that there is always a zero mode. Indeed, differentiating the equation of motion (51) of the background field with respect to z shows that φ′ solves (45) with ω_k = 0, so that the zero mode is given by ϕ₀(z) ∝ φ′(z). In order to have stable perturbations (and hence a stable solution) we shall demand that conditions (47) are fulfilled, as in the canonical case. Note in particular that, as ϕ₀(z) ∝ φ′(z), and as we will be looking for theories having the same domain wall profiles as in the canonical theory (e.g. φ ∝ tanh(z) or φ ∝ arctan e^z), this implies that the zero mode has no node, and hence, following a standard argument, is the lowest lying one. In addition, as we have Z^{00} = −P_X, the condition (47b), together with the hypothesis that J₀ vanishes, implies via equation (50) that the total energy of the wall obtained via (11) (and (55)) is finite and positive. This also shows that whenever J₀ vanishes, the normalizability of the zero mode implied by condition (47b) is just equivalent to having a wall with finite total energy. In fact, as seen from the definitions (57), conditions (47) are equivalent on the wall background to conditions (53) and (54). To summarize, in order to find a proper domain wall with stable perturbations (and assuming J₀ = 0, as we shall now do), it is enough to ask that conditions (47) hold, which in turn implies (11) and the normalizability of the zero mode. We will also check that (12) holds. We recall also that we will look for walls in theories with no potentials, i.e. in theories where the Lagrangian P(φ, X) vanishes identically at X = 0. B. Stability conditions Let us apply the conditions (47) to a general potential-free P(φ, X) case. We will assume that the function P can be power expanded in √−X as in $P(\phi, X) = \sum_{n\geq 2} \alpha_n(\phi)\,\big(-2X\big)^{n/2}$, (59) where we have set α₀ to zero in order to avoid having a potential, as well as α₁ to zero, as such a term would not contribute to the field equations when the profile depends only on one spatial direction z. Hereafter, we will denote the background (i.e. the domain wall) value of |φ′| as f, so that one has X ≈ −f²/2, and f is positive. We will further consider that f is either a constant or a non-trivial function of φ, f(φ) (which is always the case, at least implicitly, if φ(z) is locally non-constant). Note further that, as we consider here spatial profiles, X is negative, hence the chosen minus sign inside the powers appearing on the right hand side of (59). In a more general situation, should we want to keep fractional powers in (59), we would rather introduce an absolute value of X for terms with odd n in this expansion. No domain walls for P(X) theories Let us first investigate the simplest P(X) case (i.e. we assume that P_φ = 0). In this case, the on-shell conservation equation (50), viewed as a differential equation for P, can easily be integrated to yield $P(X) \propto \sqrt{-X}$ (a non-vanishing J₀ would just add a trivial constant on the right hand side). Such a theory does not fall in the class (59), as it just has a non-vanishing α₁; it does not yield domain walls of the kind we are after here, and hence will not be further considered. Separable theories We then focus on "separable" theories, i.e.
consider $P(\phi, X) = \alpha(\phi)\,\sum_{n\geq 2} \beta_n\,\big(-2X\big)^{n/2}$, (61) where {β_n}_{n≥2} is a collection of constant coefficients. In this case, one has simply $J = \alpha(\phi)\,\sum_{n\geq 2}(n-1)\,\beta_n\,f^n \approx 0$, (62) where the last equality holds for the sought-for domain wall, as we assumed J₀ = 0. Hence, leaving aside the case of a vanishing α, which would make the theory trivial, we must have f a constant f₀, root of the polynomial equation (62). In this case, the energy density is given by $\mathcal{H} = -\alpha(\phi)\sum_n \beta_n\,f_0^n$, so we have to impose that α is regular everywhere (or at least in the domain of variation of φ for the domain wall profile). The kinetic matrix of the domain wall perturbations follows from (57), and the conditions (47a) and (47b) then translate into algebraic conditions on the sign of α and on the coefficients β_n. Note that the case of separable theories (61) in fact also covers the canonical domain walls discussed in the previous section, as the corresponding canonical Lagrangians can be put in the separable form using the variable ψ (equations (23) and (31)). Using this variable, and not (we stress) φ, one finds indeed that the canonical domain walls are represented by f = |ψ′|, a constant equal to f₀ = 1. However, obviously, theories which are separable in the φ variable cannot support domain wall profiles of the type φ = tanh(z), as the corresponding f is not constant. This would not be true if one were to relax the no-potential hypothesis: e.g. one can exhibit a separable theory, with a suitable potential part surviving in the X → 0 limit, which admits a stable domain wall with a φ = tanh(z) profile (with, in this case, a non-vanishing J₀ = −P₀). Non-separable theories Let us now focus on the more general case of non-separable theories in the class (59) and define n₀ ≥ 2 as the smallest integer n for which α_n is non-vanishing. We can extract α_{n₀} from the first integral $J = \sum_n (n-1)\,\alpha_n\,f^n \approx J_0 = 0$. We find $\alpha_{n_0} = -\frac{1}{(n_0-1)\,f^{n_0}}\sum_{n>n_0}(n-1)\,\alpha_n\,f^n$, (67) where, again, we imply here that f can be locally expressed as a function of φ. So the energy constraint (12) is satisfied as long as f and the α_n(φ) do not blow up on the relevant range of variation of φ. The kinetic matrix of the perturbations around the wall profile is obtained by evaluating (57) on the background (68), so that the conditions (47a) and (47b) become respectively a positivity condition (69a) and a normalization condition (69b) on f and the α_n. In addition, one has of course to check that condition (12) holds. To proceed further, we will be looking in the next section for theories admitting walls with profiles identical to the one of the canonical mexican hat theory, and we will further show how this can be generalized. A. Static profiles We look for Lagrangians that can accommodate a hyperbolic tangent domain wall, φ = tanh(z), identical to the one of the mexican hat model (21). The interest of such a configuration is threefold: first, it will make our wall easy to compare with the usual ones; second, the zero mode $\varphi_0 \propto \phi' = \cosh^{-2}(z)$ is also the fundamental mode, as it bears no node; and third, the background value of X is easily expressed in terms of φ. Indeed, f = |φ′| obeys the functional relation $f \approx 1 - \phi^2$ (70) for the domain wall profile (background), which can be used to simplify the calculations. With this in mind, we can further assume that we can power expand the functions α_n as $\alpha_n(\phi) = \frac{1}{2(n-1)}\sum_p \beta_{n,p}\,\big(1-\phi^2\big)^p$, (72) where the β_{n,p} are some constants (and the factor 2(n−1) is introduced to simplify formulae below). Note that this expansion is even in z (and φ), as f(z) is. We could have added an odd part as well; however, this would drop out of the crucial normalization condition (69b), and we will not consider this possibility in this work, as we do not look for exhaustivity here. The first step is to check the existence of a domain wall solution in the equation of motion, or rather here using the first integral (50) with J₀ = 0.
Let us first further simplify the setting by considering the case where only 3 coefficients β_{n,p} do not vanish above, i.e. consider a Lagrangian P of the form (we will later come back to a more general form) $P_{n,m}(\phi, X) = -\frac{(-2X)}{2} + \frac{1-\kappa}{2(n-1)}\,\frac{(-2X)^{n/2}}{(1-\phi^2)^{n-2}} + \frac{\kappa}{2(m-1)}\,\frac{(-2X)^{m/2}}{(1-\phi^2)^{m-2}}$, (77) where we have set in addition α₂ = −1/2, so that we also have n₀ = 2. In order to get a finite energy, we must have n + p > 0 and m + p > 0, so that the integrals $\int f^{n+p}\,dz$ and $\int f^{m+p}\,dz$ converge in equation (67). Note that we have in particular, for the tanh profile, $\int f^{a}\,dz = I_a$ (74) (see equation (34)). Next, equation (67) imposes that all terms carry the same overall power of (1−φ²) on the wall, i.e. p = 2−n and p = 2−m respectively; assuming a non-vanishing β_{n,p}, we hence get (as f is not a constant here) $\beta_{2,0} + \beta_{n,2-n} + \beta_{m,2-m} = 0$, together with the relation $\beta_{n,2-n} = 1 - \kappa$, so that we are left with a family of theories parametrized by the one parameter $\kappa \equiv \beta_{m,2-m}$, with the Lagrangians (77) above. The convergence conditions (74) ensure that the energy of the solution is finite. Indeed, the energy density is found via (67) to be $\mathcal{H}(z) \approx \Big[\frac{1}{2} - \frac{1-\kappa}{2(n-1)} - \frac{\kappa}{2(m-1)}\Big]\cosh^{-4}z$, (78) which integrates into a total energy $H_{\rm dw} = \frac{4}{3}\Big[\frac{1}{2} - \frac{1-\kappa}{2(n-1)} - \frac{\kappa}{2(m-1)}\Big]$. Hence we get a strictly positive energy (density) provided that the bracket above is positive. Let us finally check the constraints (47)-(69). The coefficients of the kinetic matrix are found via eq. (68) to be independent of z and given by $-Z^{00} = 1 - \frac{n(1-\kappa)}{2(n-1)} - \frac{m\kappa}{2(m-1)}$ and $Z^{zz} = 1 - \frac{n(1-\kappa)}{2} - \frac{m\kappa}{2}$. As expected, we see that the positivity and finiteness of the energy is equivalent to the fulfillment of condition (47b), so we just need to check that the other condition (47a) is satisfied. As Z^{00} is strictly negative, this just amounts to checking that Z^{zz} is strictly positive, which implies that $\kappa < -\frac{n-2}{m-n}$ (for m > n). Hence, at this point, we have shown that the family of Lagrangians (77) does accommodate a hyperbolic tangent configuration φ = ±tanh(z) with stable perturbations as long as n and m are two distinct integers which, together with κ, verify the bounds $-\frac{(m-1)(n-2)}{m-n} < \kappa < -\frac{n-2}{m-n}$. (84) Note in particular that these bounds cannot be satisfied if κ = 0; hence we need at least two non-trivial terms of the form $(1-\phi^2)^{2-n}\,(-2X)^{n/2}$ in the Lagrangian P(φ, X). However, more terms are allowed, and we could have considered a larger family with Lagrangians of the form $P(\phi, X) = \sum_{n\geq 2} \kappa_n\,\big(1-\phi^2\big)^{2-n}\,\big(-2X\big)^{n/2}$, (85) where the κ_n are more than three non-vanishing, properly chosen constants. We will later derive the conditions the κ_n must obey. We see here in particular that the mexican hat potential appears in the explicit form of the functions α_n(φ). In fact the family (85) can even be generalized to $P(\phi, X) = \sum_{n\geq n_0} \kappa_n(\phi)\,\big(-2X\big)^{n/2}$, (86) where n₀ is an integer strictly greater than 1 (in principle, κ_{n₀}(φ) can vanish) and {κ_n(φ)} is a collection of functions obeying suitable regularity and balance conditions. In order to recover (85), it suffices to take n₀ = 2 and $\kappa_n(\phi) = (1-\phi^2)^{2-n}\,\kappa_n$, and the condition on the collection of constants {κ_n} discussed below is automatically satisfied provided the above conditions hold. If the family (85) is quite simple, it is not the only one to exhibit such features, and another one, inspired by the DBI action, is presented in appendix A. In the rest of this section, we will mostly focus on the family (77) and features of its domain wall solution, before discussing its perturbations in the next section. B. Changing variables In order to compare our walls to the canonical ones, and to better understand their existence, it is instructive to first use the variable ψ presented in a previous section in equation (13), where V is taken to be the mexican hat potential (20).
Namely, we set ψ = tanh⁻¹φ, so that the wall solution reads ψ ≈ z and the Lagrangian (77) now reads $P_{n,m}(\psi, X_\psi) = \frac{1}{\cosh^4\psi}\Big[-\frac{(-2X_\psi)}{2} + \frac{(1-\kappa)\,(-2X_\psi)^{n/2}}{2(n-1)} + \frac{\kappa\,(-2X_\psi)^{m/2}}{2(m-1)}\Big]$. (89) Comparing this form with (23), we see that the above family of theories and the canonical scalar with a mexican hat potential belong to the same family of theories, with Lagrangians of the form $P(\psi, X_\psi) = \frac{1}{\cosh^4\psi}\sum_{k\in\mathbb{N}} \kappa_k\,\big(-2X_\psi\big)^{k/2}$, (90) where the κ_k are constants, and, in order to avoid issues with fractional powers of negative expressions, we can restrict the discussion to even integers k. Note also that the more general form (85), once rewritten using the ψ variable, also reads as in the above (90). One difference between our theories (85) and the canonical one (23) is of course the presence in (23) of a pure potential, encoded in a non-vanishing κ₀ above. A generic theory (90) falls in the class of separable theories discussed in the previous section, and the expression of the first integral J then reads as in (62): $J = \frac{1}{\cosh^4\psi}\sum_k \kappa_k\,(k-1)\,\big(-2X_\psi\big)^{k/2}$. Hence we see that we can get a domain wall solution ψ = λz (leaving open, for the time being, the possibility that λ differs from ±1) provided that J₀ vanishes and that λ is a root of the polynomial $\lambda \mapsto \sum_{k\in\mathbb{N}} \kappa_k\,(k-1)\,|\lambda|^k$, i.e. verifies (using that −2X_ψ ≈ λ²) $\sum_{k\in\mathbb{N}} \kappa_k\,(k-1)\,|\lambda|^k = 0$. (92) This holds true with λ = ±1 both for the canonical theory (3)-(20), which has κ₀ = κ₂ = −1/2 (and the other κ_k vanish), and for the family (89), which has $\kappa_2 = -\frac{1}{2}$, (93) $\kappa_n = \frac{1-\kappa}{2(n-1)}$, (94) $\kappa_m = \frac{\kappa}{2(m-1)}$. (95) One worrisome aspect of the family of theories (77) (or (89)) is of course the fact that their Lagrangians appear singular at φ = ±1, i.e. at the minima of the mexican hat potential (20), which are reached at spatial infinity by the domain wall solution φ ≈ ±tanh(z). Note first that, as will be shown later, the quadratic Lagrangian for the perturbations around this solution is nowhere singular (including at z = ±∞), allowing a well defined perturbation theory around the "vacuum" represented by the domain wall. We also note that, once written with the ψ variable, both the canonical model (3)-(20) and the models of the family (77) appear singular at ψ = ±∞, which corresponds to the minima φ = ±1. However, going back to the φ variable for the canonical mexican hat model, one gets rid of this singularity. We now show that, similarly, a change of variable can be made in the models (77) (or (89)) in order to make the Lagrangian everywhere non-singular, and in fact to extend elegantly the models "beyond" ψ = ±∞ (or φ = ±1). To see this, it is convenient to define the variable ξ_p by $\frac{d\xi_p}{d\phi} = \big(1-\phi^2\big)^{\frac{2}{p}-1}$. (96) This can be explicitly integrated to yield $\xi_p = \phi\;{}_2F_1\Big(\frac{1}{2},\, 1-\frac{2}{p};\, \frac{3}{2};\, \phi^2\Big)$, (97) where ₂F₁(a, b; c; u) is the Gauss hypergeometric function (which is well defined on the unit interval for its fourth argument u, and whenever c > a + b, see e.g. [43]). Some special values of p however lead to nicer-looking forms: $\xi_4 = \arcsin\phi$ and $\xi_8 = 2\,F\Big(\frac{1}{2}\arcsin\phi,\, \sqrt{2}\Big)$, where F is the elliptic integral of the first kind (footnote 8). Note that ξ_∞ just equals the variable ψ defined in Eq. (22). The minima φ = ±1 of the mexican hat potential are mapped respectively to the following values $\xi^\pm_p$ given by $\xi^\pm_p = \pm\frac{\sqrt{\pi}\,\Gamma\big(\frac{2}{p}\big)}{2\,\Gamma\big(\frac{2}{p}+\frac{1}{2}\big)}$, (100) where we recall that Γ(0) = ∞ and Γ(1/2) = √π. As a consequence, one sees in particular that for 2 < p < ∞ the minima of the mexican hat potential are sent to finite values of the ξ_p variable. We also have ξ_p(0) = 0, and one can check that the mapping (97) is (monotonic and hence) one to one between φ ∈ [−1, 1] and ξ ∈ [ξ⁻_p, ξ⁺_p]. In addition, noticing that dξ_p/dφ diverges at φ = ±1, one sees that the inverse mapping φ = φ(ξ_p) can be naturally extended (for finite p > 2) to a periodic, everywhere smooth, non-singular function defined on the entire real line and of period 4ξ⁺_p.
In general, this inverse mapping, even though it exists, does not correspond to simple functions; however, this is not true for p = 4 and p = 8, for which we have $\phi = \sin(\xi_4)$ and $\phi = \sin\big(2\,{\rm am}(\xi_8/2)\big) = 2\,{\rm sn}(\xi_8/2)\,{\rm cn}(\xi_8/2)$, where am is the so-called amplitude of the elliptic integral F, and sn and cn are the so-called sine-amplitude and cosine-amplitude, and we now allow ξ_p to vary over the entire real line. Obviously the period of the first function is 2π. (Footnote 8: note that we use here the definition of [43], i.e. $F(\varphi, k) = \int_0^{\varphi} \frac{d\alpha}{\sqrt{1-k^2\sin^2\alpha}}$, which differs from the definition used e.g. in Mathematica [44].) Defining then X_ξ as in (1), replacing there φ by ξ, and choosing the value p = m (for which the last term carries no φ-dependent prefactor), we get that P now reads, with $w(\xi) \equiv 1 - \phi^2(\xi)$, $P = -\frac{w^{2-\frac{4}{m}}}{2}\,(-2X_\xi) + \frac{1-\kappa}{2(n-1)}\,w^{2-\frac{2n}{m}}\,(-2X_\xi)^{n/2} + \frac{\kappa}{2(m-1)}\,(-2X_\xi)^{m/2}$, (101) where φ is now considered as a function of ξ (i.e. φ = φ(ξ), which we can, but do not have to, consider as periodic in ξ). In this form the Lagrangian is no longer singular at the finite values φ = ±1 (corresponding to ξ±), even though the purely "kinetic" term of ξ has the non-standard form ∝ (−X_ξ)^{m/2}. For the family (101), the first integral J is found to be (while equations (50)-(52) hold, mutatis mutandis) $J = -\frac{w^{2-\frac{4}{m}}}{2}\,(-2X_\xi) + \frac{1-\kappa}{2}\,w^{2-\frac{2n}{m}}\,(-2X_\xi)^{n/2} + \frac{\kappa}{2}\,(-2X_\xi)^{m/2}$. Explicitly, one also finds the field equation operator E which, since we have 2 < n < m (implying m ≥ n + 1 > 3), is nowhere singular. The domain wall profile, solution of the above, is obviously given by $\xi(z) = \xi_m\big(\tanh z\big)$. Those profiles are shown in figure 3 for the cases m = 2 (the usual tanh), m = 6 and m = 8. C. Energy, Bogomolny and topological considerations Before writing out in the next section the explicit theory of perturbations around our kinks, we would like here to study their energy, making the link with Bogomolny's and Derrick's arguments. To that end we first consider the theory written in the ψ variable, and start with the general form (90), which encompasses the canonical mexican hat model (allowing for a non-vanishing κ₀). The total energy density H(t, z) of a given (arbitrary) field configuration is easily found to be $\mathcal{H}(t,z) = -\frac{1}{\cosh^4\psi}\sum_n \kappa_{2n}\,\big(\psi'^2-\dot\psi^2\big)^{n-1}\big(\psi'^2 + (2n-1)\,\dot\psi^2\big)$, (105) where, to simplify the discussion, we assume here and henceforth that only the κ_k with even k = 2n are non-zero. In the case of the canonical mexican hat model (recall that we then just have κ₀ = κ₂ = −1/2) we find an energy density $\mathcal{H}(t,z) = \big(1 + \psi'^2 + \dot\psi^2\big)/\big(2\cosh^4\psi\big)$. Using then the notation $x \equiv \psi'$ and $y \equiv \dot\psi$, we see that the Bogomolny trick and decomposition (38)-(40) amounts here to just writing the polynomial in x and y appearing in the numerator of $\mathcal{H} = (1+x^2+y^2)/(2\cosh^4\psi)$ as $1 + x^2 + y^2 = y^2 + (x \mp 1)^2 \pm 2x$, (108) where the first term on the right hand side yields the kinetic energy (after the proper division by 2cosh⁴ψ), the second one vanishes for the wall profile x = ±1, and the last one gives the equivalent of the "topological" charge (40); i.e., choosing here the upper signs (as would be appropriate for the kink, as opposed to the antikink, which would correspond to the solution x = −1 and the choice of the lower signs), it gives $H_\infty = \int dz\,\frac{\psi'}{\cosh^4\psi} = \int \frac{d\psi}{\cosh^4\psi}$, (109) where the last form indeed matches the expression (40) with the mexican hat potential (20). For the domain wall profile we find that the above expression yields 4/3 (see eq. (34)). We now show that a decomposition similar to (108) exists in general for our theories. Indeed, considering (105), we see that the polynomial equivalent to (108) reads in full generality $\Pi_0(x,y) = -2\sum_n \kappa_{2n}\,\big(x^2-y^2\big)^{n-1}\big(x^2+(2n-1)\,y^2\big)$, (111) so that the Hamiltonian density is just Π₀(x, y)/(2cosh⁴ψ), while, in order to have a domain wall with profile ψ = ±z, the coefficients κ_n must obey (see equation (92)) $2\,\Sigma_{\kappa,1} - \Sigma_{\kappa,0} = 0$, (114) where the Σ_{κ,k} are defined by $\Sigma_{\kappa,k} \equiv \sum_{n\in\mathbb{N}} \kappa_{2n}\,n^k$, (115) where we imply in particular that $\Sigma_{\kappa,0} = \sum_{n\in\mathbb{N}} \kappa_{2n}$ (using the convention that 0⁰ = 1).
At this stage, considering the form of Π₀, we can notice that the Hamiltonian cannot be bounded from below if the largest integer n for which κ_{2n} does not vanish, call it n_max, is even. In contrast, if n_max is odd, we see that at large x and y the dominant terms in Π₀(x, y) read $(-2\kappa_{2n_{\max}})\,\big(x^2 + (2n_{\max}-1)\,y^2\big)\,\big(x^2-y^2\big)^{n_{\max}-1}$, which shows that the Hamiltonian is bounded from below for negative κ_{2n_max} (and finite ψ). In fact it can further be shown (see below) that it is possible to find, for specific odd n_max and κ_{2n}, an everywhere positive Hamiltonian (the Hamiltonian vanishing only at (x = 0, y = 0)). Let us now expand Π₀ around x = ±1 and y = 0, corresponding to the domain wall solution. We find after some simple manipulations $\Pi_0(x,y) = \mp 2\,\big(2\Sigma_{\kappa,1}-\Sigma_{\kappa,0}\big)\,(x\mp1) + \Pi(x \mp 1,\, y) \mp 2\,\Sigma_{\kappa,0}\,x$, (116) where Π(a, b) is a polynomial in a and b which vanishes at (a = 0, b = 0) and in addition starts only at quadratic order when expanding around this point. We see that the first term on the right hand side of (116) vanishes by virtue of (114). Hence we can write the total energy of any field configuration in a theory of the family (90) which has a domain wall solution as $H(t) = H_{\rm kin,grad}(t) + H_\infty(t)$, (117) where the two contributions on the right hand side read, using (114), $H_{\rm kin,grad}(t) = \int dz\,\frac{\Pi\big(\psi'\mp1,\,\dot\psi\big)}{2\cosh^4\psi}$ and $H_\infty(t) = \mp\,\Sigma_{\kappa,0}\int dz\,\frac{\psi'}{\cosh^4\psi}$, where one sees that the last term is a topological conserved charge just identical (up to a constant factor) to the one of the canonical model (109) (see also (40)). In the ψ variable it is associated with the current $\tilde J^\mu \propto \epsilon^{\mu\nu}\,\frac{\partial_\nu\psi}{\cosh^4\psi}$. The above decomposition (117) generalizes the one of Bogomolny in our context, and one can check that with the choice of non-vanishing κ_{2n} given by κ₀ = κ₂ = −1/2 we find back exactly the form (37). It also allows one to check for the stability of the wall configuration within a class of field configurations sharing the same conserved charge H_∞. To that end we can look at the behaviour of the contribution H_{kin,grad}(t) by expanding Π around (0, 0). Specifically, taking into account the constraint (114), we find the following expansion of Π₀: $\Pi_0 \approx \mp 2\,\Sigma_{\kappa,0}\,x - \big(4\Sigma_{\kappa,2} - \Sigma_{\kappa,0}\big)\,(x\mp1)^2 - \Sigma_{\kappa,0}\,y^2 + \ldots$, (122) where the left over terms are at least cubic in (x ∓ 1) and y. This shows that the domain wall solution represents a local minimum of the energy in the class of all field configurations having the same topological charge, provided that the quantities Σ_{κ,0} and Σ_{κ,2} (defined above) verify $\Sigma_{\kappa,0} < 0$ and $4\,\Sigma_{\kappa,2} - \Sigma_{\kappa,0} < 0$. (123) For the domain wall one has Π = 0 (i.e. Π vanishes at x = ±1, y = 0), which means that the energy contains only the non-zero topological contribution H_∞. Note however that, in contrast to the canonical mexican hat domain wall, the domain wall "without a potential" (i.e. whenever κ₀ vanishes) cannot be a global minimum of the energy within the class of configurations with the same topological charge. Indeed, from the above discussion we see that Π = Π₀ ± 2xΣ_{κ,0}, but Π₀ vanishes at (x = 0, y = 0), where the dominant terms as x and y approach zero are quadratic in x and y. This means that Π has to change sign across (x = 0, y = 0) and must be negative somewhere, preventing the local minimum of Π at x = ±1, y = 0 (where Π vanishes) from being a global minimum. Conditions (123) are the ones the κ_n should obey in order to get a stable wall configuration. Setting κ₂, κ_n and κ_m as in equations (93), (94) and (95) for some specific even n and m, we can check that the above conditions (123) are equivalent to the conditions (84) for the set of models (89). To discuss a more explicit case, let us consider the simple model in the class (77) with n = 4 and m = 6.
Explicitly, the Lagrangian of this model reads in the ψ variable $P_{4,6}(\psi, X_\psi) = \frac{1}{\cosh^4\psi}\Big[-\frac{(-2X_\psi)}{2} + \frac{(1-\kappa)\,(-2X_\psi)^2}{6} + \frac{\kappa\,(-2X_\psi)^3}{10}\Big]$, (124) hence we have κ₂ = −1/2, κ₄ = (1 − κ)/6, κ₆ = κ/10, so that Σ_{κ,0} = −(5 + κ)/15 and 4Σ_{κ,2} − Σ_{κ,0} = 1 + κ; hence the constraint (123) is satisfied provided that $-5 < \kappa < -1$. (125) This range also corresponds to the allowed range for κ given in Eq. (84). Moreover, in line with the discussion following equation (114), one can show that, restricting further κ to be larger than $-(17+3\sqrt{21})/10 \simeq -3.07$, we get an everywhere positive Hamiltonian H(t, z). As further expected, we find in that case that Π vanishes at x = ±1, y = 0, which is a local minimum of Π, but Π is negative somewhere on the y = 0 line in the (x, y) plane, and hence x = ±1, y = 0 is not a global minimum of Π. This is shown in figure 4 for different values of κ, while figure 5 shows the shape of the polynomial Π along the x = 1 line. It is also interesting to see how the usual scaling argument due to Derrick [45] applies here. To that end, consider a rescaling of the domain wall solution ψ₀(z) = ±z as in ψ_ω = ψ₀(ωz). The total energy of the rescaled field configuration ψ_ω is easily obtained as $H_\omega = -\frac{4}{3}\sum_n \kappa_{2n}\,\omega^{2n-1}$. (126) Restricting ourselves here to the case of even n, we easily obtain the first and second derivatives of H_ω evaluated at ω = 1 as $\frac{dH_\omega}{d\omega}\Big|_{\omega=1} = -\frac{4}{3}\big(2\Sigma_{\kappa,1} - \Sigma_{\kappa,0}\big)$ and $\frac{d^2H_\omega}{d\omega^2}\Big|_{\omega=1} = -\frac{4}{3}\big(4\Sigma_{\kappa,2} - \Sigma_{\kappa,0}\big)$. (127) The first derivative above vanishes by virtue of the relation (114), thus confirming that the domain wall is indeed a solution. The second derivative is positive if condition (123) holds; thus Derrick's usual scaling no-go argument is evaded and the domain wall is stable against dilatations. D. Static and moving walls In the canonical mexican hat model (3)-(20), equation (92) has only the roots λ = ±1. However, considering the more general models (77) (or (89)), it is possible that equation (92), which now reads $-\frac{\lambda^2}{2} + \frac{(1-\kappa)\,|\lambda|^n}{2} + \frac{\kappa\,|\lambda|^m}{2} = 0$, (128) has some other roots λ different from ±1. This would yield a domain wall solution of profile $\phi = \tanh(\lambda z)$. (129) Note however that λ = ±1 is always a solution of equation (128), so that the standard domain wall profile always coexists with the profile (129). E.g., the Lagrangian (124) admits, beyond the "canonical" wall φ = tanh(±z), another wall solution of the kind (129) with $\lambda = \pm 1/\sqrt{-\kappa}$. However, while the properties of the solution (129) (with λ ≠ ±1) are given in appendix B, it is also shown there that both solutions cannot be stable simultaneously: the solution (129) can be made stable at the price of violating the bounds (84) on κ, which are in turn necessary for the stability of the solution with the canonical profile. However, having more than three terms in the Lagrangian (77) (or (89)) leads to the possibility of having more roots of the equation (92), and hence possibly more than one stable wall solution; this will be investigated elsewhere. Another possibility to extend the solutions discussed above is to let the walls move. In particular, using the ψ variable and considering for simplicity the models (89), it is easy to see that the part of the field equations that does not contain any second derivatives is, in full generality (and as a consequence of Lorentz invariance), proportional to $\sum_k \kappa_k\,(k-1)\,\big(-2X_\psi\big)^{k/2}$, which for a static wall is in turn proportional to the expression of the first integral J. This means that any static wall profile (129) extends (including the "canonical" case λ = ±1) to a moving solution of the form $\psi = \lambda\,\frac{z - \beta t}{\sqrt{1-\beta^2}}$, (131) where β < 1 is the dimensionless speed, and where one has $-2X_\psi = \psi'^2 - \dot\psi^2 \approx \lambda^2$.
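Given the explicit form of P_{4,6} reconstructed in (77) and (124) above, the wall condition and the root structure of (128) can be checked symbolically; a minimal sketch (ours, and it assumes those reconstructed expressions):

```python
# Check that J = 2 X P_X - P vanishes identically on the canonical wall
# phi = tanh(z), i.e. X = -(1 - phi^2)^2 / 2, and recover the extra root
# lambda = 1/sqrt(-kappa) on the boosted profile phi = tanh(lambda z).
import sympy as sp

phi, X, kappa = sp.symbols("phi X kappa")
P = (-sp.Rational(1, 2) * (-2*X)
     + (1 - kappa)/6 * (1 - phi**2)**(-2) * (-2*X)**2
     + kappa/10 * (1 - phi**2)**(-4) * (-2*X)**3)

J = sp.simplify(2*X*sp.diff(P, X) - P)
print(sp.simplify(J.subs(X, -(1 - phi**2)**2 / 2)))  # -> 0 for any kappa

lam = sp.symbols("lam", positive=True)
J_lam = sp.simplify(J.subs(X, -lam**2 * (1 - phi**2)**2 / 2))
print(sp.factor(J_lam / (1 - phi**2)**2))  # roots lam^2 = 1, lam^2 = -1/kappa
```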
E. Sine-Gordon-like and other walls The above discussion and construction can easily be extended to other kinds of kink profiles, such as the one of sine-Gordon or, more generally, the family of models (32). Indeed, consider Lagrangians of the form $P(\psi, X_\psi) = \frac{K}{\cosh^{2k}\psi}\sum_n \kappa_n\,\big(-2X_\psi\big)^{n/2}$. (132) As the discussion of sections IV B and IV C applies whatever the ψ-dependent factor in front of the Lagrangian, its conclusions also hold for the family (132). In particular, ψ = λz is a solution as long as λ obeys (92), and moreover the solution with λ = ±1 is stable provided conditions (114) and (123) hold. Turning back to the original φ variable, the corresponding Lagrangians are simply given by (85), where the powers of (1 − φ²) are replaced by powers of |φ′|, considered and expressed in terms of φ, and it is easy to get the corresponding domain wall profiles for the φ variable. In particular, for k = 1, we get stable domain wall profiles identical to the one of the sine-Gordon model, reading as in eq. (28), with the Lagrangians $P(\phi, X) = \sum_n \kappa_n\,\Big(2\sin\frac{\phi}{2}\Big)^{2-n}\big(-2X\big)^{n/2}$, using that $|\phi'| = 2\sin(\phi/2)$ on the sine-Gordon kink. One feature of the sine-Gordon model is its integrability, leading in particular to non-trivial solutions such as breathers or kink-antikink pairs (see e.g. [4]). It would be interesting to investigate whether some remnant of such solutions still exists in the kind of models considered here. V. WALL PERTURBATIONS We focus here on the properties of perturbations around the domain wall solutions discussed in the previous section. To be specific, we will concentrate on the set of theories (77)-(89)-(101). A. Quadratic perturbations We write a generic field configuration as φ(t, z) = φ(z) + ϕ(t, z), where φ(z) = ±tanh z, and the domain wall perturbations ϕ(t, z), once Fourier transformed w.r.t. time, obey equation (45). We give in Table I the relevant coefficients Z^{00}, Z^{zz} and M² appearing in this equation; we also indicate there the value of the energy density H(z) of the domain wall solution. These functions are given for the generic P_{n,m} Lagrangians of equation (77), for the specific choice (n, m) = (4, 6) corresponding to the Lagrangian P_{4,6}, and for the canonical mexican hat model (3)-(20), whose Lagrangian is denoted by P_can. The quantities relevant for this last model are henceforth indicated with an index "can". We first note that the perturbations of our "domain walls without a potential" obey an equation of the same form as the canonical one (the columns of Table I correspond to P_{n,m}, P_{4,6} and P_can), where the frequency of each mode is multiplied by a universal factor: $\tilde\omega_k^2 = \frac{Z^{zz}}{\big(-Z^{00}\big)}\,\omega_k^2$. (135) In order to find stable perturbations, we recall that we have to demand that conditions (47) are obeyed, which amounts to just demanding that Z^{00} is negative and Z^{zz} positive. In turn, this gives the bounds on κ given in equation (84). We can further note that our models allow one to find walls which have exactly the same profile and energy density as the canonical walls, by tuning to 1 the coefficient in front of cosh⁻⁴(z) in H(z), choosing $\kappa = n(m-1)/(m-n)$. These walls are thus perfect "Doppelgänger" walls, to use the terminology of [30]. However, for such walls the bounds (84) are violated, so in our case these perfect Doppelgänger walls are not stable. However, choosing $\kappa = \frac{n(n-2)(m-1)}{n(n-2)(m-1) - m(m-2)(n-1)}$, (136) which satisfies the bounds (84), we get $\tilde\omega_k^2 = \omega_k^2$, and so the theory has exactly the same spectrum as the canonical one. This corresponds explicitly to the family of Lagrangians (77) with κ given by (136), (137) which have stable domain walls with a profile identical to the one of the canonical mexican hat, and an energy density and kinetic matrix just rescaled by a common factor given by $\frac{(n-2)(m-2)}{2\,\big(2-n+m(n-1)\big)} = 1 - \frac{mn}{2\,\big(mn-(m+n)+2\big)}$.
In this class of models, which we will call here and henceforth mimickers, the simplest one is possibly obtained by choosing (n, m) = (4, 6) and κ = −5/4, yielding the simple Lagrangian $P = X + \frac{3}{8}\,\frac{(-2X)^2}{(1-\phi^2)^2} - \frac{1}{8}\,\frac{(-2X)^3}{(1-\phi^2)^4}$, (139) which has a domain wall solution φ = ±tanh(z), a Hamiltonian everywhere positive, as seen in the previous section, and an energy density and kinetic matrix just rescaled by a global factor 1/4 with respect to the canonical ones. Note that for this particular model we can compute $P_X = 1 - \frac{3\,(-2X)}{2\,(1-\phi^2)^2} + \frac{3\,(-2X)^2}{4\,(1-\phi^2)^4}$, so that we see that condition (53) is always fulfilled, in agreement with having an everywhere positive Hamiltonian, while condition (54) can be violated somewhere in field space. However, the latter condition is verified on the wall background and in its vicinity, in agreement with the local stability found above. B. Cubic perturbations and strong coupling As we saw in the previous section, the walls considered here are local minima of the energy in the class of field configurations with fixed boundary conditions at z = ±∞. This contrasts with canonical domain walls, which are global minima. As a consequence, one should be able to distinguish the two by looking at higher order perturbations, as we now show. Up to surface terms, for a generic theory of the kind (2), the third-order perturbed Lagrangian involves a set of coefficients (denoted Y, Y^{μν}, Y^{μνρ}, ...) given in terms of derivatives of P with respect to φ and X, evaluated on the background. For the canonical model (3)-(20), only $Y = V_{\phi\phi\phi} = 12\,\phi$ is non-vanishing. For the P_{n,m} models (77), as well as their mimicker subset (137), one finds these coefficients on the background given by the wall of canonical profile φ = ±tanh(z); some relevant coefficients are gathered in Table II. One can notice that for all P_{n,m} models, $Y^{00} = -\frac{2\phi}{3}\,Y^{00z}$ and $Y^{zz} = -2\phi\,Y^{zzz}$. Moreover, for the mimickers, all contributions containing time derivatives of the perturbations vanish at cubic order. However, the cubic interactions are found to diverge at large z, where, on the domain wall profile, 1/(1 − φ²) as well as φ/(1 − φ²) blow up. Hence the perturbation theory in the φ variable becomes strongly coupled at large z, off the wall. Note however that, as we have shown that the wall is a local minimum of the energy in the class of field configurations with fixed boundary conditions, one expects that there is a range of localized perturbations of the wall which are absolutely stable. To end, we also notice that one cannot mimic our models with a P(φ, X) of the form f(X) − V(φ), as Y^{μν} would then vanish. Note also that the generic properties of the perturbations found in this section using the φ variable (sound quadratic perturbations, off-the-wall strong coupling at cubic order) persist, e.g., if one trades the φ variable for ξ (once the quadratic perturbations are properly normalized). VI. CONCLUSION In this work we have studied domain walls in some k-essence theories. We have shown in particular that domain walls can be supported by non-canonical kinetic terms only, without the help of a potential. If pure P(X) theories cannot accommodate these unidimensional solitons, the class of Lagrangians (77) is an example of potential-free theories that can, and we have obtained an even larger set of theories sharing the same property. Moreover, we showed that theories can be found having domain wall profiles just identical to the ones of canonical field theories, such as a canonical scalar field with a mexican hat potential or sine-Gordon theories.
We have also shown that our walls are local minima of the energy in the set of field configurations with a given fixed topological charge; however, in contrast with the usual case, they are not global minima. We also studied the quadratic perturbations of these walls, showing in particular that these perturbations can be stable and even identical to the perturbations of the domain walls of canonical models. Canonical walls can nevertheless be distinguished from the ones discovered here by looking at the cubic vertices of the perturbations, which in our case become strong off the wall surface. This work raises various questions beyond the ones already mentioned in the main text. First, as our walls are only stable when subjected to small enough, localized perturbations (hence "perturbatively stable"), it would be interesting to study their classical or quantum decay. One could also imagine constructing similar objects in a more general setup, such as Horndeski theories, or studying the possibility of obtaining solitons with different topologies (such as strings or monopoles) and in higher dimensions along the lines considered here. On a more phenomenological note, k-essence is known to have interesting applications in the early Universe, e.g. during inflation (see e.g. [9,14,15,41,46,47]); an interesting question would hence be to look at the possible formation and decay of the kind of domain walls considered here at early times, and a related question would be to study the effects of turning on gravity. The solution is defined as long as (1 − φ²)²/c(φ) < n/(n − 2). The coefficients of the kinetic matrix are such that condition (47a) is automatically satisfied, while condition (47b) is again equivalent to the finiteness of the total energy. In order to fulfill it, it is easy to see that P_0 has to be negative and that c(φ) has to be chosen carefully. For example, the Lagrangian with P_0 a strictly negative constant, and n an integer strictly greater than 1, admits stable domain wall configurations.
Effects of water on pyridine pyrolysis: A reactive force field molecular dynamics study

The emission of nitrogen oxides (NOx) from coal combustion causes serious environmental problems. Fuel splitting and staging is a promising method for NOx control by combustion modification. In this process, nitrogen-containing compounds generated in the pyrolysis gas play an important role in regulating NOx generation. Water from coal could potentially change the reactions occurring during the coal pyrolysis process, so adjusting the content of water in coal may be an effective way to control coal pyrolysis reactions. This work aims to investigate the effects of water on the pyrolysis of pyridine (a main nitrogen-containing compound in coal) via reactive force field (ReaxFF) molecular dynamics (MD) simulations. The results indicate that the addition of water during the pyridine pyrolysis process increases the number of OH radicals in the system and accelerates the consumption of pyridine at the initial stage. However, at a later stage, water inhibits the consumption of pyridine, as it impedes the condensation reaction of pyridine molecules. Common and unique intermediates are identified and quantified under various water-content conditions. The results also suggest that water reduces the proportion of nitrogen atoms in the polycondensation product. Furthermore, the ring-opening processes of pyridine molecules are reproduced at the atomic level, and the changes in reaction pathways due to the presence of water are revealed. The new insights into the mechanisms of pyridine pyrolysis under water and water-free conditions provide a possibility to control nitrogen migration during the pyrolysis process, which is of great significance for emission reduction from coal combustion.

Introduction

The emission of nitrogen oxides (NOx) from coal combustion causes serious environmental problems, such as photochemical smog and acid rain [1]. In recent years, a variety of technologies have been developed for coal combustion to control NOx emissions. Fuel staging, or reburning, is a promising method for NOx control by combustion modification. The idea of fuel reburning is to recycle the NOx formed during combustion into nitrogen. The reburning reactor includes three zones [2]: a main reaction zone, where coal combustion under fuel-lean conditions takes place and NOx is generated; a reburning zone, where reburn fuel is injected and reacts with NOx to form N2; and a burnout zone, where air is added to ensure complete combustion of the fuel. Reburning fuels play a key role in NOx reduction during coal combustion. They can be divided into two categories: fossil fuels (such as natural gas, coal and oil) and pyrolysis gas. It is reported that pyrolysis gas performs better in NOx reduction than fossil fuels [3–5]. In a fuel staging (also termed fuel splitting and staging) process, coal is decomposed into pyrolysis gas and char; char and pyrolysis gas serve as the primary fuel and the reburning fuel, respectively. Previous studies [3–5] have identified that the nitrogen-containing compounds in pyrolysis gas are important for effective NOx reduction in the fuel splitting and staging process.
Water, an intrinsic component of coal, can accelerate the coal pyrolysis process and greatly alter the product distribution in the pyrolysis gas [6]. Therefore, adjusting the content of water in coal could be an effective way to control N migration during coal pyrolysis, which has the potential to improve NOx control performance during coal combustion. Previous studies have explored the chemical effects of water during coal pyrolysis by experiments and simulations. Ouyang and co-workers carried out experiments focusing on the effects of H2O during char pyrolysis [7]. They proposed that H2O reduced char generation, stabilized the char structure and increased the char reaction rate. Hu and co-workers investigated the effects of H2O on the pyrolysis of coal [8]; their results showed that the yields of tar and light tar decreased with increasing water content during coal pyrolysis. Liu and co-workers interrogated pyrrole pyrolysis with water using a density functional theory method [9]. Their computational research suggested that H2O molecules inhibited the formation of HCN but promoted the generation of NH3. Gou and co-workers explored the effects of water vapor on the pyrolysis products of coal [6]. They found that water promoted the generation of HCN, NH3, H2 and CO, which can restrain NOx formation during coal combustion [6]. Previous studies have made great contributions to understanding pyrolysis phenomena from a wide range of perspectives, such as the composition of products and the reaction rate. However, some fundamental questions remain unanswered. For example, the effects of water on the pyrolysis mechanisms of nitrogen-containing compounds in coal are still poorly understood. Further efforts are required to explore the atomic/molecular events therein and to reveal the reaction mechanisms. Current experimental techniques are unable to accurately detect the temporal evolution of the distributions of intermediates and products. Atomistic-scale computational techniques, like reactive molecular dynamics, which can capture the atomistic behaviors of the constitutive atoms and molecules [10], offer the possibility to reveal detailed reaction mechanisms and to obtain intermediate structures [11,12] that cannot be accessed by current measurement methods. Among the existing atomistic methods, ReaxFF MD is a promising method for simulating complex chemical reactions with reasonable computational cost and high accuracy. Recently, ReaxFF MD simulations have been applied to the pyrolysis of coal [13–19] and to chemical reactions of nitrogen-containing compounds [20,21]. However, due to the complexity and uncertainty of coal molecular structures, the low content of nitrogen, and the influence of other radicals or functional groups [22], it is difficult to build a complete coal molecular model to investigate nitrogen behavior during coal pyrolysis. Alternatively, nitrogen-containing compounds found in coal, such as pyridine [23,24], are used as surrogates for coal. In this study, a series of ReaxFF MD simulations is conducted to investigate the effects of water on pyridine pyrolysis. Firstly, the effects of water on the pyridine pyrolysis rate and on intermediates are studied. Secondly, ring-opening reactions and the proportion of polycondensation products are explored during pyrolysis. Finally, the reaction mechanisms of principal products such as H2, CO, HCN and NH3 are compared between conditions with and without water addition.
ReaxFF MD

ReaxFF is a force field MD method that lies between quantum chemical simulation and classical molecular dynamics simulation; it was originally developed by van Duin and co-workers [25] to study the kinetics of chemical reactions. ReaxFF employs a bond-order formalism in conjunction with polarizable charge descriptions to determine both reactive and non-reactive interactions between atoms [26]. The energy contributions to the ReaxFF potential are shown in Equation (1):

E_total = E_bond + E_pen + E_val + E_tors + E_vdW + E_Coulomb + E_specific,  (1)

where the terms are the total energy, bond energy, penalty energy, valence angle energy, torsion angle energy, van der Waals energy, Coulomb energy and system-specific energy, respectively. Further details of ReaxFF are given in Ref. [26].

Case set-ups

The initial parameters of the simulated systems are shown in Table 1. In each case, the computational domain is a periodic box. System 1 contains 20 pyridine molecules only. In systems 2 to 8, 20–500 H2O molecules are added to investigate the effects of water on pyridine pyrolysis. Fig. 1 shows the model configurations for pyridine pyrolysis without and with water. The parameter a is the ratio of the number of water molecules, n(H2O), to the number of pyridine molecules, n(C5H5N), as shown in Equation (2):

a = n(H2O) / n(C5H5N).  (2)

The density of each system is kept the same, at 0.3 g/cm3, by varying the size of the computational box.

Simulation details

In this paper, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) was used to carry out the ReaxFF MD simulations of pyridine pyrolysis, via its REAXC package. The reactive force field for the C/H/O/N system was chosen; its parameters have been trained against quantum chemistry calculations and carefully validated [27,28]. The time step was 0.1 fs and the bond-order cutoff value was 0.3. The NVT ensemble [29] was selected for all simulations. Due to excessive computational cost, MD typically adopts higher temperatures than in experiments in order to accelerate the simulations; this approach has been verified to reproduce the reaction mechanisms observed in experiments [30–32]. Before the "production" simulations, energy minimization and system equilibration were carried out, with the temperature kept constant at 1000 K for 50 ps. After that, the temperature of each system was increased to a final temperature of 3000 K at a heating rate of 100 K/ps and then kept constant. The total simulation time was 1000 ps. Three replicates with different initial positions of the reactants were simulated for every case.

Post-processing

The reaction pathways were obtained using the Chemical Trajectory AnalYzer (ChemTraYzer) scripts [33]. The dynamic trajectories were visualised using VMD [34]. Unless otherwise indicated, the data shown in the figures of this study are the averaged results of the three replicate simulations, and the error bars in all figures are the standard error (SE) of the three replicates.

Validation of simulations

The validation of the ReaxFF MD simulations is achieved by comparing the intermediate products obtained in this study with those from previous studies. The key intermediate species are HCN, CN, NH3, H2 and C2H2, which agrees with previous work [30]. The mechanisms of pyridine pyrolysis and the chemical effects of water during pyridine pyrolysis are analyzed in the following sections, for a ranging from 0 to 25 at 3000 K. At the initial stage, up to 600 ps, at least 90% of the C5H5N molecules are consumed in all cases.
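As a small illustration of the case set-up (our own sketch, not the authors' script; the molar masses are standard values, and the function name is ours), the edge of the cubic periodic box required to keep every system at 0.3 g/cm3 can be computed from a as follows:

# Sketch: size the periodic box so each system keeps the density of
# 0.3 g/cm^3 quoted in the paper, for 20 pyridine plus a*20 water molecules.
N_A = 6.02214076e23          # Avogadro's number, 1/mol
M_PYRIDINE = 79.10           # g/mol, C5H5N
M_WATER = 18.015             # g/mol, H2O
DENSITY = 0.3                # g/cm^3, fixed for all systems

def box_edge_angstrom(n_pyridine: int = 20, a: float = 0.0) -> float:
    """Edge of the cubic box (in Angstrom) for n_pyridine pyridine
    molecules plus a * n_pyridine water molecules at fixed density."""
    n_water = a * n_pyridine
    mass_g = (n_pyridine * M_PYRIDINE + n_water * M_WATER) / N_A
    volume_cm3 = mass_g / DENSITY
    return (volume_cm3 ** (1.0 / 3.0)) * 1e8  # cm -> Angstrom

for a in (0, 1, 5, 25):
    print(f"a = {a:>2}: box edge = {box_edge_angstrom(20, a):.1f} A")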
Effects of water on pyridine consumption rate

To study the influence of water on the consumption rate of pyridine, the number of pyridine molecules consumed at different stages was calculated, as shown in Fig. 2c. It is clear that water promotes the pyridine consumption rate during the first 200 ps. A similar phenomenon was also observed in previous studies, where water promoted reactions during ethanol and methane oxidation and char pyrolysis [7,31,35]. Under water-free conditions, pyridine molecules are consumed by reactions R1 to R3. With water addition, however, OH radicals are generated during pyridine pyrolysis by reactions R4 and R5, and new reactions are found during pyridine pyrolysis with water. The addition of water during the pyridine pyrolysis process thus brings OH radicals into the system and accelerates the consumption of pyridine. More details about the effects of water on intermediates are given in Table 2, which will be discussed in Section 3.3. However, as the pyrolysis goes on, water presents an obvious inhibitory effect on the consumption of pyridine. To explain this phenomenon, we investigate the effects of water on polycondensation compounds in Section 3.4. Moreover, the number of species in the cases with water is significantly higher than in the case without water during pyridine pyrolysis. This implies that H2O molecules take part in various intermediate reactions and generate additional intermediates during the pyrolysis process. This is also confirmed by the results in Section 3.2, which show that water molecules produce OH radicals during pyridine pyrolysis, promoting the consumption of pyridine. Besides, when the value of a is higher than 5, the number of species in the system remains more or less the same even as a increases.

Effects of water on intermediates

To further clarify the influence of water molecules on intermediates during pyridine pyrolysis, the intermediates are compared among cases under water-free and water conditions, as shown in Table 2.

Table 2: Main intermediates among cases with and without water addition. Different symbols are used to clarify the influence of water on intermediates.

Effects of water on polycondensation compounds

During coal pyrolysis, there are both decomposition and polycondensation reactions. The pyrolysis products are char (C40+), tar (C5–C40) and gas (C0–C5), in descending order according to the number of C atoms [13]. In this part, the influence of water on decomposition and polycondensation reactions during pyridine pyrolysis is explored. Fig. 4a–c presents the proportions of C, N and H elements in C5+ during pyridine pyrolysis. With the increase of water molecules in the system, the percentages of C, N and H in C5+ decrease greatly. When a is 25, few C5+ compounds are formed during pyridine pyrolysis. This phenomenon is in agreement with previous experimental studies [7,8]. The results show that water molecules greatly inhibit polycondensation reactions and modify the pathways to char, tar and gas, which is of great significance for controlling nitrogen migration during coal pyrolysis. In addition, according to the product analysis of pyridine under the water-free condition, the polycondensation reaction mainly occurs after 200 ps. This provides an explanation for the finding in Section 3.2 that water exerts obvious inhibitory effects on the consumption of pyridine after 200 ps. Fig. 4d shows snapshots of C5+ with increasing a. As the value of a ranges from 0 to 10, the number of C atoms contained in the polycondensation product is significantly reduced (from C21 to
C6). Besides, the increase of water molecules also promotes the presence of O atoms in the polycondensation products.

Effects of water on ring-opening reactions

According to previous studies [8,9], pyridine molecules first undergo ring-opening reactions during pyrolysis. Fig. 5a illustrates snapshots of ring-opening reactions during pyridine pyrolysis in all cases. Four types of ring-opening pathways were detected by MD during pyridine pyrolysis. Type A happens when o-C5H5N first reacts with an H atom to form o-C5H6N; the o-C5H6N then opens the ring to form a chain C5H6N. Type B occurs when C5H5N directly opens the ring to generate a chain C5H5N. Type C is the case in which a pyridine molecule loses an H atom and then undergoes a ring-opening reaction, in agreement with previous studies [31,36–38]. Type D occurs when C5H5N reacts with OH radicals in the system to form an oxygen-containing intermediate, after which the ring-opening reaction occurs. In all types, the H atom on the C atom adjacent to the N atom is transferred to the N. After that, the chain intermediates (C5) are pyrolyzed, and HCN, CN, C4H4 and C4H3 are generated. The effects of water on key species will be discussed in detail in Section 3.6. Fig. 5b shows the proportion of each type under different a values. As the content of water molecules in the system increases, the proportion of pyridine molecules that open rings through type A and type B decreases. Besides, the percentage of type C increases to a peak at a = 10 and then decreases with increasing a. Ring-opening reactions of type D only occur when the water content in the system is high. In pyridine pyrolysis without water addition, pyridine molecules convert to C5H6N and C5H4N through R1 to R3. Water addition brings about OH radicals via reactions R4 and R5, and the OH radicals promote the generation of C5H4N by R6. Thus, water suppresses ring-opening reactions via types A and B and promotes type C. However, when the value of a increases to 4, pathways appear that generate oxygen-containing intermediates (C5H6NO, C5H5NO, C5H4NO and C5H3NO). The results indicate that H2O accelerates the consumption of C5H4N and promotes the production of oxygen-containing intermediates. That is why a high concentration of water has an inhibitory effect on type C, and why type D only occurs in systems with a high water concentration.

Effects of water on products H2, CO, HCN and NH3

Pyridine molecules undergo ring-opening reactions and then pyrolyze to produce the main intermediates HCN, CN, C4H4 and C4H3, in agreement with previous results [31,36–38]. In this part, we explore the effects of water on those radicals as well as on the principal products H2, NH3 and CO during pyridine pyrolysis. Fig. 6 presents the effects of H2O on the generation of H2, CO, HCN and NH3. As the number of H2O molecules increases, the yields of H2, CO and NH3 show an upward trend, in good agreement with a previous study [6]. However, the influence of water on HCN is more complicated. When the value of a is in the range 0–3, the yield of HCN remains the same. As a increases further, a parabolic profile is observed, which peaks at a = 10. According to the findings in Section 3.4, water reduces the content of C, H and N in C5+, which accounts for the increasing trend of H2, CO and NH3.
To understand the trend of HCN, the influence of water on the transfer pathways of the main intermediates was interrogated, as shown in Fig. 7a and b. In pyridine pyrolysis under water-free conditions, H2 mainly comes from H atoms released in the pyrolysis process. Water adds a new pathway to H2 via R5 during the pyrolysis process. Fig. 7a describes the effects of H2O on the transfer pathways of nitrogen-containing intermediates. As pyrolysis goes on, HCN and CN convert to NH3 in all cases [30], with the transfer pathway HCN / CNH / NH / NH2 / NH3 [39]. However, because the conversion of HCN and CN to N2 occurs at higher temperatures [30], N2 is not observed in our simulations. New pathways, HCN / CH2NO and CH2NO / CHNO, are generated with water addition during pyridine pyrolysis. When a is greater than 2, reactions R17 to R20 are found during the pyrolysis process. Besides, R21 takes place in the range a = 4–25. Combining these findings with Fig. 6c, the yield of NH3 is promoted by water addition during pyridine pyrolysis. Fig. 7b describes the influence of water on the migration pathways of the main nitrogen-free intermediates during pyridine pyrolysis. In all cases, C4H4 and C4H3 were the major initial nitrogen-free species during pyridine pyrolysis [30], and C2H2 and C2H are mainly produced by thermal decomposition of C4H4 and C4H3. C4H2 is formed by the loss of one H atom from C4H3. In pyridine pyrolysis with water addition, OH reacts with the main intermediates (C4H3, C4H2, C2H2 and C2H) to form CO. However, there are large differences in the transfer pathways generating CO at various a values. When the H2O content in the system is low (a in the range 1–4), OH radicals mainly react with C2 compounds to generate oxygen-containing intermediates, and C2H3O, C2H2O and CHO are the key precursors forming CO for a ranging from 1 to 4. In the range a = 2–25, CO converts to CHO2 through R25, and CO2 is generated by decomposition of CHO2 through R26 for a = 4–25. When the value of a is 5–25, OH radicals also react with C3 and C4 compounds.

Discussion

In the present study, ReaxFF MD simulations were conducted to understand the influence of water on nitrogen-containing compounds (pyridine) in coal pyrolysis. We have uncovered new intermediates and reaction pathways that were not reported in previous studies [6,9]. Besides, the effects of water molecules on the consumption rate of pyridine and on the ring-opening processes of pyridine molecules are revealed at the atomic level. Based on the aforementioned analysis, we have demonstrated that the modification of pyrolysis by water addition can be applied to improve NOx control performance in the fuel splitting and staging process. In the fuel splitting and staging process, the nitrogen-containing species released from large N-containing compounds are beneficial for NOx reduction, as they can reduce nitrogen oxides selectively [4,5,40]. However, Greul et al. also proposed that small N-containing species in the pyrolysis gas react with O2 to form NOx, negatively impacting NOx control [40]. Hence, controlling the proportion of large N-containing compounds in the pyrolysis gas and the pyrolysis process of N-containing compounds is important for reducing NOx emissions.
The current results suggest that the addition of water molecules can modify the reaction pathways in the pyrolysis of N-containing compounds, thereby helping to achieve maximum NOx reduction. Though nitrogen-containing compounds show better NOx reduction performance than nitrogen-free compounds, nitrogen-free radicals can also convert NOx to N2. According to previous studies [41,42], the ability of non-hydrocarbon fuels, such as H2 and CO, to reduce NO to N2 is low compared with that of hydrocarbon radicals in the reburning process. According to the present research, the addition of water can promote the conversion of hydrocarbon compounds to small C-containing radicals, which is beneficial for NOx control. On the other hand, a high water concentration converts hydrocarbon compounds into CO, CO2 and H2, weakening NOx reduction in the reburning process. Thus, a proper water content is required if water is used to regulate NO generation in the reburning process. In general, the regulating effect of water on pyridine pyrolysis is monotonic, which is beneficial for control of the pyrolysis process. However, there are also non-monotonic behaviors with respect to water content for some intermediate species (C2O2, C3H2O and C3H3O) and for the consumption rate of pyridine. For the intermediate species C2O2, C3H2O and C3H3O, when the value of a is lower than 10, the process is controlled by condensation reactions (CO reacting with CO, C2H2 and C2H3, respectively). The yields of C2O2, C3H2O and C3H3O are found to be low, and their roles in the conversion of NOx to N2 are insignificant [41,42]; thus, their effects on NOx control can be neglected. The non-monotonic relationship between water content and pyridine consumption rate suggests that different strategies for NOx control are required as the reaction evolves through different stages.

Conclusions

In this study, pyridine pyrolysis without and with water was investigated via ReaxFF MD simulations. The effects of added water in different proportions on pyridine pyrolysis reactions were investigated in detail. It is found that the addition of water during the pyridine pyrolysis process facilitates the generation of OH radicals and accelerates the consumption of pyridine at the initial stage of pyrolysis. By contrast, as water greatly inhibits the condensation reaction of pyridine molecules, it exerts inhibitory effects on the consumption of pyridine as pyrolysis goes on. Furthermore, water has a significant influence on the total number of species during pyridine pyrolysis, and intermediates are identified and quantified under various conditions. In addition, water reduces the N content in the polycondensation product (C5+). This research provides new insights into the atomic-level mechanisms of pyridine pyrolysis under water and water-free conditions, and has implications for the control of N migration during the pyrolysis process and for the reduction of nitrogenous pollutant emissions from coal pyrolysis and combustion.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Model selection and signal extraction using Gaussian Process regression

We present a novel computational approach for extracting weak signals, whose exact location and width may be unknown, from complex background distributions with an arbitrary functional form. We focus on datasets that can be naturally presented as binned integer counts, demonstrating our approach on the CERN open dataset from the ATLAS collaboration at the Large Hadron Collider, which contains the Higgs boson signature. Our approach is based on Gaussian Process (GP) regression, a powerful and flexible machine learning technique that allows us to model the background without specifying its functional form explicitly, and to separate the background and signal contributions in a robust and reproducible manner. Unlike functional fits, our GP-regression-based approach does not need to be constantly updated as more data becomes available. We discuss how to select the GP kernel type, considering trade-offs between kernel complexity and its ability to capture the features of the background distribution. We show that our GP framework can be used to detect the Higgs boson resonance in the data with greater statistical significance than a polynomial fit specifically tailored to the dataset. Finally, we use Markov Chain Monte Carlo (MCMC) sampling to confirm the statistical significance of the extracted Higgs signature.

Introduction

Analyzing data from physical experiments or observations often involves fitting computational models in order to extract a signal in the presence of both background effects and random noise. For example, such a setting appears naturally in the analysis of X-ray diffraction patterns from crystalline samples, which contain contributions both from distinctive Bragg peaks and from diffuse background scattering [1,2], in the inference of transiting exoplanet parameters in astronomy [3,4], and in the discovery of the Higgs boson and the search for new physics at the Large Hadron Collider (LHC) at CERN [5]. The data from the LHC and similar experiments usually comes in the form of binned integer counts [6]. Traditionally, modeling such data is performed under the assumption of a Poisson distribution, by employing a parametric fit [6]. The fitted models are subsequently used to estimate the background contributions and extract the signal of interest. However, the choice of parametric functions is often ad hoc, and the degree of model complexity requires a delicate balancing act between overfitting and underfitting the data [7-9]. For data analyses of the type performed on the LHC bin counts, the optimal complexity of the model is usually evaluated by performing Wilks' tests [10]. Most analyses that employ this technique test it on a fraction of the full dataset, usually 10% or less. This process is called "blinding" and is meant to reduce biases in the analysis. However, the model selection process must be repeated periodically as more data becomes available, and often the functional form employed in the model has to be updated as well [11]. Using nonparametric methods such as Gaussian Process regression is an effective way of alleviating these concerns [7,12-15]. Gaussian Process (GP) regression is a well-established machine learning technique [7,13] commonly used in various fields such as astrophysics, gravitational wave detection, and high energy physics [16-19].
In particular, in high energy physics GP regression was used to model the smooth continuum background from quantum chromodynamics (QCD) in searches for dijet resonances in LHC data [19]. The authors argued that using GPs for background estimation was more robust with respect to increasing luminosity than parametric fitting methods. GP regression's advantages over more conventional methods, which employ a linear expansion over a fixed set of basis functions such as polynomials or Gaussians, are due to its non-parametric flexibility and its principled Bayesian framework. Instead of explicit basis functions, GP regression is defined in terms of kernel functions, which specify the degree of correlation between two points in the dataset. The GP approach allows us to perform inference using a much broader class of functions, including those which would otherwise require an infinite basis set [7]. GP regression is also robust with respect to the size of the dataset [19]. Nevertheless, the flexibility of GP regression can be a double-edged sword. In GP regression, kernel functions typically depend on several hyperparameters that are varied to fit the data, typically through non-Bayesian techniques such as maximizing the marginal likelihood of the observed data [7,13]. The hyperparameters describing the kernel function control the flexibility of the resulting model, while the type of the kernel function determines the success in capturing certain features in the data, such as periodic oscillations and long-term trends [13]. Thus, the universality and power of the GP approach may come at the cost of overfitting with respect to both the kernel type and the kernel hyperparameter choices. Therefore, a method is required that can constrain the flexibility of GP regression in a controlled manner. Previous work in this area has focused on "kernel learning" to address the issues of flexibility and robustness, with several techniques proposed that aim at constructing composite kernels for Support Vector Machines [20], Relevance Vector Machines [21], and GP regression [22,23] using a library of base kernels. Semiparametric regression attempts to combine the interpretability of parametric models with the flexibility of non-parametric models in a single framework [24]. However, none of the above approaches focus specifically on integer count data or on processes that are naturally viewed as localized signals superimposed on a smooth background. A previous application of GP regression to LHC data [19] employed both standard and custom-built kernels motivated by physical considerations. In contrast, in this work we have developed both a model selection procedure suitable for GP regression and an approach for estimating the statistical significance of the extracted signal. New signals in physical observations of particle resonances in LHC data often appear as localized features ("bumps") superimposed on a smooth background. Accurate modeling of the background spectrum is therefore essential both for extracting the signal and for assessing its statistical significance. In this paper, we present a rigorous approach to model selection in GP regression applied to binned integer data, which we expect to be a superposition of a localized signal and a smooth background of unknown functional form. We exploit the flexibility of GP regression by determining the kernel hyperparameters through a fit to background-only data, with the signal window masked out.
These parameters are subsequently used to extrapolate the background contribution across the signal window, enabling us to separate the background from the signal contribution. We describe procedures for kernel type selection based on both Bayesian and Akaike information criteria. We also propose a method for estimating the statistical significance of the signal by performing a hypothesis test, with data devoid of signal as the null hypothesis and data containing the signal as the alternative hypothesis. While similar in spirit to standard hypothesis-testing approaches, our significance test takes into account both the uncertainties inherent in the Bayesian nature of GP regression and the sampling noise related to generating integer bin counts from GP-predicted, real-valued Poisson rates. In this work, we illustrate our procedure by detecting the Higgs boson resonance in the open data collected by the ATLAS experiment at the LHC [25]. We show that using GP regression leads to extraction of the Higgs boson signature at a higher level of statistical significance than parametric fits. Our computational pipeline can be applied for background estimation and signal detection in any dataset where a localized signal is obscured by background processes.

2 Model selection and signal extraction procedure

Gaussian process regression

In regression, the observed data y = (y_1 . . . y_N) (in our case, integer counts in N bins) is modeled by z = (z(x_1) . . . z(x_N)):

y_k = z(x_k) + ε_k,  (2.1)

where k = 1 . . . N (N is the total number of datapoints), X = (x_1 . . . x_N) is a vector of input variables (in our case, the centers of the bins with integer counts), and ε_k is a random noise variable independently sampled from a Gaussian distribution N(ε|0, σ_i²) for each datapoint, where σ_i² is the noise variance in bin i. In the GP framework, the model z(x) is not represented as an explicit linear expansion over a set of pre-determined basis functions. Instead, we directly consider the marginal likelihood p(y|X), integrated over all possible models [7,26]:

p(y|X) = ∫ p(y|z) p(z|X) dz = N(y|m, C),  (2.2)

where m = (m(x_1) . . . m(x_N)) is a vector of values of the mean function m(x) for all datapoints, and the covariance matrix C = K + Σ, where K is the Gram matrix and Σ is a diagonal N × N matrix with Σ_ii = σ_i². The elements of the Gram matrix are the values of the kernel function k(x, x') evaluated for all pairs of input variables: K_ij = k(x_i, x_j). Note that, consistent with Eq. (2.1), p(y|z) = N(y|z, Σ), while p(z|X) = N(z|m, K) from the definition of the Gaussian Process. Thus, GP regression is defined by the mean function m(x) and the kernel function k(x, x') [7,13], which determines the degree of correlation between any two datapoints. In general, the kernel function depends on a set of n hyperparameters θ = (θ_1, θ_2, . . . , θ_n), whose number and meaning depend on the kernel type. The hyperparameters of a given kernel are usually optimized by maximizing the marginal likelihood in Eq. (2.2), a non-Bayesian procedure [7,13]. Ordinarily, the kernel hyperparameters would also include the σ_i², which represent the amount of experimental noise in each bin. However, since this would introduce too many hyperparameters and make their optimization difficult or impossible, we estimate the σ_i² directly from the data using Garwood intervals, which allow us to extract two-sided confidence intervals from the number of events in each bin under the assumption that the events are Poisson-distributed [27,28].
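For concreteness, here is a minimal sketch of the Garwood construction (our own illustration, not the paper's code; the chi-squared quantile form of the interval is standard, while the conversion of the interval into a per-bin variance via the squared half-width is our assumption):

import numpy as np
from scipy.stats import chi2

def garwood_interval(k, cl=0.6827):
    """Exact two-sided Poisson CI [lo, hi] for an observed integer count k."""
    alpha = 1.0 - cl
    lo = 0.0 if k == 0 else 0.5 * chi2.ppf(alpha / 2.0, 2 * k)
    hi = 0.5 * chi2.ppf(1.0 - alpha / 2.0, 2 * (k + 1))
    return lo, hi

counts = np.array([12, 47, 103, 0])
for k in counts:
    lo, hi = garwood_interval(int(k))
    # One possible (our own) way to turn the interval into a noise variance.
    sigma2 = ((hi - lo) / 2.0) ** 2
    print(f"k={k:4d}  CI=({lo:7.2f}, {hi:7.2f})  sigma^2 ~ {sigma2:7.2f}")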
Thus, the σ_i² are estimated independently and are not treated as hyperparameters in our approach. The strength of the GP approach stems from the fact that the joint marginal probability of observing a set of datapoints is Gaussian (Eq. (2.2)). Moreover, the predictive probability p(ỹ_i|y), the conditional probability distribution of observing a real-valued "count" ỹ_i in bin i given a dataset with N previous observations, is also Gaussian, with mean f(x_i) and variance V(x_i) given by

f(x_i) = m(x_i) + k̄ᵀ C⁻¹ (y − m),  V(x_i) = α − k̄ᵀ C⁻¹ k̄,  (2.3)

where k̄ = (k(x_1, x_i), . . . , k(x_N, x_i)) and α = k(x_i, x_i) + σ_i². In this work, we consider two types of GP regression: one with m(x_i) = 0, ∀i, for modeling the background-only distribution, and one with a Gaussian mean function for modeling signal+background datasets, where the signal component is represented by

m(x) = A (2πσ²)^(-1/2) exp[−(x − µ)²/(2σ²)].  (2.4)

Here, A defines the signal strength, while µ and σ represent the signal mean and width, respectively; the rounded value of A can be interpreted as the total number of signal events. Note that when the Gaussian mean function is introduced, the set of model hyperparameters θ needs to be augmented with {A, µ, σ}.

Model selection

A key issue in GP regression is the choice of a kernel and, given a kernel, the derivation of the optimal set of hyperparameters θ̂ for it. Typically, the optimal set of hyperparameters is obtained by maximizing the marginal log-likelihood log p(y|X, θ, K_i) [7,13], where p(y|X, θ, K_i) is given by Eq. (2.2), and its dependence on the set of hyperparameters θ and the kernel type K_i is made explicit for clarity. Since this step is non-Bayesian, a question of kernel selection arises which should take into account both the kernel complexity (i.e., the amount of signal smoothing provided by a given kernel) and the number of kernel hyperparameters. A standard way of carrying out model comparison is based on the Bayesian Information Criterion (BIC) for the marginal log-likelihood [7,29]:

BIC_naive = −2 log p(y|X, θ̂, K_i) + n log N ≈ −2 log p(y|X, K_i),  (2.5)

where p(y|X, K_i) is the model evidence (the likelihood marginalized over the hyperparameters), n is the number of model parameters, and N is the number of datapoints. Note that the second term penalizes model complexity, such that lower BIC scores are preferable. The derivation of BIC relies on a number of approximations whose validity depends on the details of the system under consideration. Specifically, the derivation employs the Laplace approximation to estimate the integral over the hyperparameters and assumes that N is so large (or the Gaussian prior distribution over the hyperparameters so broad) that the effects of the hyperparameter priors are negligible, resulting in

log p(y|X, K_i) ≈ log p(y|X, θ̂, K_i) − (1/2) log |H| + (n/2) log 2π,  (2.6)

where H = −∇_θ ∇_θ log p(y|X, θ, K_i)|_θ̂ is the Hessian in the model hyperparameter space, evaluated at the hyperparameter values that maximize the marginal log-likelihood. If N is large and the Hessian has full rank, the (1/2) log |H| term can be roughly approximated as (n/2) log N, yielding Eq. (2.5). An elegant alternative approach to model selection is based on the Akaike Information Criterion (AIC), which accounts for the fact that the log-likelihood computed on a training dataset provides an estimate of the prediction error that is too optimistic, because the same data is used both to fit the model and to assess its error [30,31]. To account for this optimism, a correction term is added which is based on the sum of covariances between the observed datapoint and a newly generated datapoint for each input variable x_i.
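The predictive equations (2.3) are straightforward to implement directly. The following sketch (ours, not the authors' code) mirrors the notation of the text, assuming an RBF kernel and a zero mean function for the background-only case:

import numpy as np

def rbf(x1, x2, sigma0=1.0, ell=10.0):
    # k(x, x') = sigma0^2 * exp(-(x - x')^2 / (2 ell^2))
    return sigma0**2 * np.exp(-0.5 * (x1 - x2) ** 2 / ell**2)

def gp_predict(x_star, sigma2_star, X, y, sigma2, m=None, kernel=rbf):
    """Predictive mean f(x*) and variance V(x*) as in Eq. (2.3):
    f = m(x*) + kbar^T C^-1 (y - m),  V = alpha - kbar^T C^-1 kbar,
    with C = K + Sigma and alpha = k(x*, x*) + sigma2_star.
    For simplicity m(x*) itself is taken to be 0 here."""
    if m is None:
        m = np.zeros_like(y, dtype=float)   # background-only case, m(x) = 0
    K = kernel(X[:, None], X[None, :])      # Gram matrix K_ij = k(x_i, x_j)
    C = K + np.diag(sigma2)
    kbar = kernel(X, x_star)
    alpha = kernel(x_star, x_star) + sigma2_star
    f = float(kbar @ np.linalg.solve(C, y - m))
    V = float(alpha - kbar @ np.linalg.solve(C, kbar))
    return f, V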
It can be shown that the sum of covariances is proportional to the number of degrees of freedom in the N → ∞ limit, resulting in the following expression for AIC:

AIC = −2 log p(y|X, θ̂, K_i) + 2d,  (2.7)

where d is the number of degrees of freedom in the model. Thus, AIC provides an estimate of the log-likelihood that would have resulted if another dataset were independently generated at the same values of the input variables (an in-sample estimate). In the case of GP regression, d needs to be replaced by d_eff in Eq. (2.7), where d_eff is the effective number of degrees of freedom for the GP regression with a given kernel type, which captures the amount of smoothing induced by the GP fit [13,32]:

d_eff = Tr[K(θ̂)(K(θ̂) + Σ)⁻¹],  (2.8)

where the dependence of the Gram matrix on the optimal kernel hyperparameters θ̂ is made explicit for clarity. Note that, similar to BIC, lower AIC values are preferable; however, unlike BIC, AIC is a non-Bayesian measure and thus provides an alternative approach to model selection. Choosing the appropriate kernel type is crucial to the success of GP regression, since different kernels emphasize different correlation structures in the data. In practice, kernels are often constructed manually using simple comparison metrics such as the marginal likelihood or BIC_naive. In some cases, composite kernels are constructed automatically using kernel engineering techniques (see e.g. Ref. [22]). Here, we propose a kernel selection technique based on the consensus between the AIC and BIC measures of model complexity. This framework allows us to compare models with different kernels and to choose a specific kernel type on the basis of both a Bayesian approach to model selection, which emphasizes the complexity of the kernel in terms of the number of kernel parameters, and a non-Bayesian approach, which is based on the amount of smoothing introduced by the GP fit.

Poisson likelihood

Since our data consists of integer counts in N bins, we have also employed a Poisson-type model to generate integer predictions in each bin. Specifically, we assume that the mean of the predictive probability, f(x_i) (Eq. (2.3)), provides the rate for the Poisson process in each bin [19]:

p(y_i|f(x_i)) = f(x_i)^(y_i) e^(−f(x_i)) / y_i!.  (2.9)

Note that f(x_i) implicitly depends on the kernel type and the optimized hyperparameter values θ̂. Eq. (2.9) can be used both to generate integer counts and to compute the log-likelihood of the observed counts. The Poisson log-likelihood can also be used instead of the GP marginal log-likelihood to compute BIC (Eq. (2.6)), BIC_naive (Eq. (2.5)), and AIC (Eq. (2.7)).

Gaussian process kernels

We used the GP package from scikit-learn (https://scikit-learn.org/stable/), augmenting the implementation to include custom kernels and kernel extraction features. In this paper, we have explored three kernels to model the continuum background distribution: the Radial Basis Function kernel (RBF), the Matérn kernel with ν = 5/2 (Matern), and the second-order polynomial kernel (Poly2). The three kernel functions are defined below:

k_RBF(x, x') = σ0² exp[−(x − x')²/(2l²)],

where σ0 is the amplitude and l is the length scale of the covariance function;

k_Matern(x, x') = σ0² (1 + √5 d(x, x')/l + 5 d(x, x')²/(3l²)) exp[−√5 d(x, x')/l],

where σ0 is the covariance amplitude as in k_RBF, l is a positive parameter characterizing the covariance, and d(x, x') is the Euclidean distance between datapoints x and x';

k_Poly2(x, x') = (x x' + σ0)²,

where σ0 sets the magnitude of the zeroth-order term in the polynomial expansion. Thus, the RBF, Matern and Poly2 kernels depend on 2, 2 and 1 hyperparameter, respectively.
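Since the text names the scikit-learn GP package, the kernel-scoring step might be sketched as follows (our own bookkeeping, not the paper's pipeline; the function name and starting length scales are illustrative, and here the marginal log-likelihood is plugged into Eqs. (2.5) and (2.7)):

# X: bin centers, y: integer counts, sigma2: per-bin noise variances
# (e.g. from the Garwood intervals sketched earlier).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, Matern

def score_kernel(kernel, X, y, sigma2):
    gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma2,
                                  n_restarts_optimizer=10)
    gp.fit(X[:, None], y)
    logL = gp.log_marginal_likelihood_value_      # log p(y|X, theta_hat)
    K = gp.kernel_(X[:, None])                    # Gram matrix at theta_hat
    d_eff = np.trace(K @ np.linalg.inv(K + np.diag(sigma2)))  # Eq. (2.8)
    n = gp.kernel_.theta.size                     # number of hyperparameters
    N = y.size
    bic_naive = -2.0 * logL + n * np.log(N)       # Eq. (2.5)
    aic = -2.0 * logL + 2.0 * d_eff               # Eq. (2.7) with d -> d_eff
    return bic_naive, aic

kernels = {"RBF": ConstantKernel(1.0) * RBF(length_scale=20.0),
           "Matern52": ConstantKernel(1.0) * Matern(length_scale=20.0, nu=2.5)}
# for name, k in kernels.items():
#     print(name, score_kernel(k, X_background, y_background, sigma2))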
Functional fit

For comparison, we also employ a fourth-order parametric polynomial fit with explicit basis functions, which is typically used to model the background distribution [25]:

f(x_i) = Σ_{p=0..4} w_p x_i^p + m(x_i),  (2.13)

where the w_p are the fitting coefficients, and m(x_i) is either set to 0 for background-only fits with the signal window masked out, or given by Eq. (2.4) for signal+background fits on the entire dataset. The fits were carried out using the ROOT data analysis software [33], by maximizing the Poisson log-likelihood in Eq. (2.9). For the background-only fit, the leading fitting coefficient is w_0 = 1.84 × 10^5 ± 1.60 × 10^2; for the background+signal fit, it is w_0 = 1.64 × 10^5 ± 6.16 × 10^4. All parameter uncertainties have been estimated via the Hessian analysis available in ROOT. The value of A and its uncertainty have been rounded to correspond to an integer number of events. The datasets on which the fits have been performed are described in more detail below.

Datasets

We use the di-photon sample from the open dataset made available by the ATLAS collaboration at the LHC [25]. We use the selection criteria documented in Ref. [25] to create a di-photon invariant mass distribution, m_γγ, that shows the Higgs decay. The Higgs decay appears as a localized bump on top of the smooth background distribution, traditionally modeled by a polynomial [19]. The di-photon distribution consists of integer event counts y_i in N = 30 bins. Since in this work we focus on datasets in which we expect to find a localized signal whose location is approximately known, we first mask out the region containing the signal. We expect the signal to be localized, with a characteristic width that is small compared to the characteristic length scale describing the background shape [19]. In new resonance searches, one typically scans for the signal at multiple points within the full range of the dataset, with a prior expectation for the signal width. To search for a signal in a specific window, we use the entire range of data, with the signal window masked out, to determine the optimal parameters of the background-only GP regression fit. In new resonance searches, this process could be repeated for multiple masked-out signal windows. Here, we model the signal using a simple Gaussian whose mean µ and width σ are approximately known to be 125 GeV and 2.5 GeV, respectively [5,25]. Thus, in the background-only fits we mask out a signal window of ±2σ around the signal mean µ; all data outside of this window are assumed to belong to the background distribution and are therefore fit using GP regression with m(x_i) = 0. Specifically, we optimize the parameters θ of a given kernel by maximizing the marginal log-likelihood (Eq. (2.2)), which yields the optimal set of hyperparameters θ̂. Given θ̂, the predictive distribution is then provided by Eq. (2.3).

Model selection for background-only fits

To determine which kernel type best represents our data, we have carried out model selection using the BIC and AIC model comparison measures, summarized in Table 1. We find that the marginal log-likelihood is much worse for Poly2 than for either RBF or Matern. This disadvantage is too substantial to be offset by the fact that Poly2 uses one less hyperparameter. As a result, both BIC_naive (Eq. (2.5)) and BIC (Eq. (2.6)) rank the kernels in the same way, giving a slight edge to RBF over Matern. Note that this ranking is the same as with the marginal log-likelihood without BIC corrections.
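The paper performs the Func4 fit in ROOT; purely as an illustration, the same Poisson maximum-likelihood fit can be sketched in Python (the function names, optimizer choice and starting values are ours):

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_poisson_loglike(w, x, y):
    rate = np.polyval(w[::-1], x)          # sum_p w_p x^p, w = (w0, ..., w4)
    if np.any(rate <= 0):
        return np.inf                      # Poisson rates must stay positive
    return -np.sum(y * np.log(rate) - rate - gammaln(y + 1))

def fit_func4(x, y):
    w0 = np.array([y.mean(), 0.0, 0.0, 0.0, 0.0])   # crude starting point
    res = minimize(neg_poisson_loglike, w0, args=(x, y),
                   method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x, -res.fun                 # coefficients, max log-likelihood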
However, Matern is slightly favored over RBF when considering Poisson log-likelihoods with rates provided by the mean of the GP predictive distribution (Eq. (2.9)). This preference for Matern holds when the Poisson log-likelihoods are augmented with complexity corrections to produce BIC_naive scores, while Func4 becomes strongly disfavored due to its larger number of fitting parameters. To investigate this matter further, we have considered the effect of the AIC penalty, which effectively accounts for the amount of smoothing effected by each kernel type [32,34]. We observe that RBF is favored over Matern when the AIC penalty is taken into account (Table 1). Furthermore, Poisson log-likelihoods slightly favor Func4 over the RBF or Matern GP fits. However, this slight advantage disappears when the AIC correction is taken into account, with the best score assigned to GP regression with the RBF kernel (Table 1). Overall, we conclude that with Poisson log-likelihoods there is a slight advantage for RBF over Func4 on the basis of AIC, and a distinct advantage for RBF or Matern over Func4 on the basis of BIC_naive. Considering all the evidence together, it appears that GP regression with the RBF kernel is the best way to model our data, although the preference for RBF over Matern is fairly slight. In addition to the AIC- and BIC-based model selection, we have carried out visual comparisons of the four different models, plotting the mean predictive distribution of the GP regression with the RBF, Matern and Poly2 kernels (f(x_i) in Eq. (2.3) with m(x_i) = 0) and the maximum-likelihood (ML) Func4 fit (Eq. (2.13)) against the event counts outside of the signal window (Fig. 1). It is clear from the upper panel of Fig. 1 that RBF, Matern and Func4 produce very similar fits, whereas Poly2 tends to underfit the data. This is also clear from the residuals (Fig. 1, lower panel), which are consistently larger for Poly2 than for the other three models. Interestingly, RBF, Matern and Func4 all show a slight spurious bump where the models have been extrapolated across the signal window; outside of the signal window, deviations from zero are almost always within the error bars. Thus, visual inspection rules out Poly2 but cannot be used to differentiate between RBF, Matern, and Func4.

Signal extraction

In order to extract the signal superimposed on top of the background distribution, we have carried out GP regression with the RBF kernel and m(x_i) given by Eq. (2.4), using the entire dataset (Fig. 2). Importantly, the kernel parameters were kept at the values θ̂ obtained via the previous fit to the background distribution with the signal window masked out. On the basis of the fitted Gaussian parameters, the correspondence both between the two models and between each model and the Higgs simulation (described in Ref. [25]) is overall very high (Fig. 2). However, we note that the GP approach with the RBF kernel extracts a clear Higgs signature consisting of A_RBF = 473 ± 123 events above the background, compared to A_Func4 = 443 ± 199 events from the Func4 fit. Thus, the mean number of predicted events is higher and the uncertainty is significantly lower with the GP RBF fit. The lower uncertainty of the prediction indicates that GP RBF is preferable to Func4.

Synthetic datasets for testing statistical significance of signal extraction

To investigate the statistical significance of the observed signal, we have created 500 toy datasets based on the GP fit with the RBF kernel and m(x_i) = 0 to the background-only data.
This fit generates an effective integer number of counts due to the background only, N_eff = [Σ_i f(x_i)], where the square brackets indicate the rounding operation. Next, we sampled from the GP predictive probability N(ỹ_i|f(x_i), V(x_i)), producing real-valued background "counts" ỹ_i in each bin i. Finally, we used ỹ_i / Σ_{i=1..N} ỹ_i as probabilities in a multinomial sampling process, generating a synthetic histogram of integer event counts. Each synthetic histogram was constrained to have N_eff counts, equal to the total number of events inferred to be due to the background. Note that our toy datasets include both the uncertainty inherent in GP regression and the uncertainty related to generating integer event counts from the underlying model. In order to create a full background+signal test set, we added signal counts from the Higgs simulations [25] to each of the 500 background datasets. Thus, the signal component is fixed, while the background component varies from dataset to dataset according to the background model uncertainties.

Test for biases in signal extraction

To test the robustness of the fit, we check for potential biases in our background estimation procedure. Namely, for each of the 500 background+signal toy datasets described above, we carry out a GP fit with the Gaussian mean function (Eq. (2.4)) on the entire dataset, while keeping the kernel hyperparameter values fixed at θ̂, the values found by the previously described fit to the background distribution with the signal window masked out. This procedure generates a set of predicted signal strength values, {A_pred}, which can be compared with the corresponding exact value, A_true, the sum of the event counts added to the background-only counts in order to create the combined background+signal toy datasets. Specifically, we compute a Z-score-like measure,

Z_W = (A_pred − A_true)/σ_A,  (5.1)

where A_pred and A_true are defined above and σ_A is the standard deviation of the {A_pred} values. Fig. 3 shows the resulting distribution of the Z_W scores. We observe that the empirical distribution is well described by a Gaussian with µ = 0.19 and σ = 1.00 (the latter is expected due to the normalization in Eq. (5.1)). The near-zero value of µ indicates that there are no substantial biases in our two-step background+signal reconstruction procedure. We conclude that the signal contribution can be deconvoluted correctly from the underlying smooth background distribution.

Figure 3: Test for potential biases in signal extraction. Shown is a normalized histogram of Z_W scores (Eq. (5.1)) obtained by generating 500 toy datasets and carrying out GP regression as described in the text (blue bars). Orange curve: a Gaussian fit to the histogram, which yields µ = 0.19, σ = 1.00.

Posterior distributions of signal parameters and significance analysis

We have investigated the posterior distributions of the signal-characterizing parameters by carrying out Markov Chain Monte Carlo (MCMC) sampling [35] of the Poisson log-likelihood (Eq. (2.9)). Routinely employed in Bayesian analysis, MCMC sampling of posterior probabilities is conceptually similar to studying model parameter sensitivity and estimating confidence intervals in frequentist statistics [36,37].
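A compact sketch of the toy-dataset generation just described (our own; clipping negative Gaussian draws to zero is our assumption, needed to keep the multinomial probabilities valid):

import numpy as np

rng = np.random.default_rng(0)

def make_toy_histogram(f, V):
    """f, V: GP predictive means and variances per bin (Eq. (2.3))."""
    n_eff = int(round(f.sum()))               # N_eff = [sum_i f(x_i)]
    y_tilde = rng.normal(f, np.sqrt(V))       # real-valued background draws
    y_tilde = np.clip(y_tilde, 0.0, None)     # probabilities must be >= 0
    p = y_tilde / y_tilde.sum()
    return rng.multinomial(n_eff, p)          # integer counts, fixed total

# toys = np.array([make_toy_histogram(f_bkg, V_bkg) for _ in range(500)])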
The Poisson rates f(x_i) depend on the hyperparameter values θ̂ obtained via the previously described background-only fit, and on the mean function m(x_i), whose parameters {A, µ, σ} were sampled from the following priors: the prior for A is uniform in the [0, +∞) range, while the prior for µ is Gaussian, with mean 124.7 GeV and standard deviation 0.02 × 124.7 GeV. The σ prior is also Gaussian, with mean 2.4 GeV and standard deviation 0.1 × 2.4 GeV. The mean values are consistent with the Higgs simulation [25] and with the fits presented in Fig. 2. The 0.02 and 0.1 scaling factors in the priors are motivated by the ATLAS studies [5]. MCMC was implemented using the Emcee package [38] (https://emcee.readthedocs.io), with 10^4 samples in each of 12 independent MC trajectories. Fig. 4 shows the MCMC posterior distributions of the three parameters characterizing the signal: the overall signal strength A, the mean position of the signal peak µ, and the width of the signal peak σ. In Fig. 4a, the MCMC sampling was based on a synthetic dataset without any signal added, chosen randomly among the 500 background-only test sets described above. As expected, P(A), the marginalized posterior probability for the signal strength, is highest when A is close to zero and falls off rapidly as A increases, while P(µ) and P(σ) appear Gaussian. Moreover, the correlations between all three parameter pairs appear to be weak. In contrast, when the real data, which contains both the background counts and Higgs events, is analyzed, the maximum posterior probability value of A is located around 500 counts, consistent with the earlier Hessian analysis of GP regression with the RBF kernel (Fig. 4b). Indeed, a Gaussian fit of P(A) in Fig. 4b yields 485 ± 121 Higgs events, very close to the 473 ± 123 Higgs events obtained earlier using the GP regression framework. Thus, there is a clear signature of Higgs counts in the real data. Interestingly, the joint probability P(σ, A) reveals a correlation between signal strength and signal width, with stronger signals tending to have larger widths. To provide a more quantitative estimate of the statistical significance of the signal strength observed in the real data, we have plotted a histogram of the 95% confidence levels for A for all 500 background-only toy datasets (Fig. 5). The value observed with the actual data is 3.02σ̃ above the median, where σ̃ is the distance between the median and the 84% quantile, and is larger than 99.4% of the values empirically observed in the histogram.

Figure 5: Shown is the histogram of the 95% quantiles (confidence levels, or CL) for A inferred from the 500 synthetic datasets with background-only counts using MCMC sampling. The black dashed line indicates the 50% quantile, or median, of the 95% CL distribution (i.e., the median number of Higgs events observed above background, reported at the 95% CL). From left to right, the yellow lines show the 2.5% and 97.5% quantiles and the green lines show the 16% and 84% quantiles, respectively. The red arrow indicates the 95% CL value obtained from the data (cf. Fig. 4b).

Summary

In this work, we have developed a procedure for using Gaussian Process (GP) regression to extract localized signals from smooth background distributions. Although this procedure is of interest in many areas of science, including astrophysics and crystallography, here we focus on extracting Higgs events from an ATLAS open dataset which consists of binned event counts.
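Given the priors and sampler settings quoted above, the MCMC step could be sketched with emcee as follows (our own illustration, not the paper's code; bkg_rate stands for the fixed GP background rates, and the bin-width factor in the signal normalization is omitted):

import numpy as np
import emcee
from scipy.special import gammaln
from scipy.stats import norm

def log_prob(params, x, y, bkg_rate):
    A, mu, sigma = params
    if A < 0 or sigma <= 0:
        return -np.inf                         # uniform prior on A >= 0
    lp = norm.logpdf(mu, 124.7, 0.02 * 124.7)  # Gaussian prior on mu
    lp += norm.logpdf(sigma, 2.4, 0.1 * 2.4)   # Gaussian prior on sigma
    rate = bkg_rate + A * norm.pdf(x, mu, sigma)
    if np.any(rate <= 0):
        return -np.inf
    ll = np.sum(y * np.log(rate) - rate - gammaln(y + 1))  # Eq. (2.9)
    return lp + ll

# nwalkers, ndim = 12, 3                       # 12 trajectories, as in the text
# p0 = np.array([450.0, 124.7, 2.4]) + 1e-2 * np.random.randn(nwalkers, ndim)
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
#                                 args=(x_bins, y_counts, bkg_rate))
# sampler.run_mcmc(p0, 10_000)                 # 10^4 samples per trajectory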
Despite its relatively small size, this is a challenging dataset, since the putative signal is masked by the background and the inference procedure used to analyze the data affects the statistical significance of the predicted signal. Traditionally, the background distribution is modeled using a polynomial fit, onto which a Gaussian signal is superimposed (Eq. (2.13)) [25]. Here we propose an alternative framework in which GP regression is used to model the background and the signal is modeled via the mean function, which affects both the marginal likelihood of the observed data (Eq. (2.2)) and the predictive probability, the conditional probability of a new datapoint given the previously observed data (Eq. (2.3)). As with the functional fits, the mean function is represented by a Gaussian with three free parameters (Eq. (2.4)), one of which, A, is especially relevant to us since it represents the total number of signal events found in the dataset.

The GP framework is more flexible than standard approaches which employ a fixed set of basis functions such as polynomials or Gaussians [7,13]. This flexibility comes from the focus of the GP-based approach on the correlation structures in the dataset, which are modeled using kernel functions. Although GP methods are not limited by the prior choice of a finite basis (indeed, some popular kernels correspond to infinite basis sets) and the GP approach is in principle fully Bayesian, most kernel functions depend on one or several hyperparameters, such as the characteristic length scale in the RBF kernel. A fully Bayesian treatment of the dependence of the model evidence on hyperparameters is usually impractical; however, simply maximizing the evidence with respect to hyperparameters may lead to overfitting for more complex kernels. In order to provide a more principled approach to the selection of the kernel type, we have considered two independent methodologies. One of them, BIC, is based on evaluating the model evidence under the Laplace approximation and the assumption that the effects of hyperparameter priors are negligible (Eq. (2.6)). With several additional approximations, notably the assumption that the Hessian matrix has full rank, BIC yields a simple correction which penalizes model complexity (Eq. (2.5)). The other approach, AIC, is non-Bayesian. Instead of concentrating on the model evidence, it focuses on the degree of smoothing that results from applying a given kernel to the dataset. Thus, the AIC and BIC approaches are complementary and reflect different kernel properties (the amount of data smoothing vs. the shape of the log-likelihood landscape as a function of hyperparameters). Using both criteria holistically, we have chosen the well-known RBF kernel for our GP regression models, although the results with the Matern kernel are only slightly worse.

We note that AIC yields approximately equal scores for the GP RBF fit and the traditional fit, which models the background using a fourth-order polynomial (Table 1). The results of the two fits are visually very similar when the GP mean predictive probability is compared with the ML curve produced by the functional fit, and both approaches are close to the Higgs simulation predictions (Figs. 1, 2). However, the total area A under the signal bump is somewhat higher with the GP RBF fit than with the functional fit, with 473 and 443 Higgs events, respectively. More critically, Hessian analysis reveals that the standard deviation is much smaller with the GP prediction, 123 vs.
199 in the functional fit. Thus, the GP approach is preferable, since it leads to a higher signal strength prediction with considerably less uncertainty. After ascertaining that our signal extraction procedure is not biased (Fig. 3), we proceeded to investigate the posterior probabilities of model parameters by MCMC sampling (Fig. 4). This computational approach is necessary since we have focused on the Poisson log-likelihood (Eq. (2.9)), which is more appropriate for modeling integer event counts. The Poisson log-likelihood depends on the kernel hyperparameters, which were kept fixed at their values θ̂ obtained by fitting to the background-only data (Fig. 1), and on the signal strength, mean and width, which were sampled from prior distributions. The prior for the signal strength A was uninformative, assigning equal weight to any non-negative value. The priors for the mean and the width were informative, modeled by Gaussians whose parameters were constrained by Higgs simulations (Fig. 2) and by studies of instrumental errors in the ATLAS detector [5]. The resulting posterior probability for signal strength shows a clear Higgs signature, with 485 ± 121 Higgs events (Fig. 4b). These numbers are consistent with the previous estimate obtained by Hessian analysis of the signal parameters in GP regression, which yielded 473 ± 123 Higgs events. When the MCMC sampling procedure is applied to synthetic datasets where no contributions from the signal are expected, the posterior distribution for signal strength is centered on zero and the typical predicted values are much smaller (Fig. 4a). The latter is clearly seen by combining the data from 500 independently generated background-only synthetic datasets into a histogram of 95% confidence levels for the signal strength A (Fig. 5). The corresponding confidence level obtained from the real dataset is larger than 99.4% of the histogram values and corresponds to 3.02 σ, where σ here denotes the distance between the median and the 84% quantile. Thus our signal strength prediction is also highly significant within the MCMC framework.

In summary, we have developed a novel GP regression framework for extracting localized signals from smooth background distributions of unknown functional form. This problem appears in many areas of science where a weak signal of interest is masked by background events due to light scattering, extraneous emission sources, etc. The location and the width of the signal can sometimes be guessed based on physical considerations; in other cases, consideration of multiple putative signal windows is necessary, as in LHC anomaly detection searches [40-45]. In both scenarios, only rough estimates of the position and the width of the signal window are required. Data outside of the signal window is assumed to belong to the background, and a GP model without the signal contribution is fitted to it. We carry out model selection using both BIC and AIC considerations, including an in-depth analysis of the BIC assumptions. The extrapolation of the model across the signal window then provides an estimate of the background, from which the signal can be separated in a second GP fit where only the signal parameters are allowed to vary, while all the background parameters remain fixed. This two-step procedure allows us to deconvolute the signal from the background in a robust and reproducible manner.
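To illustrate the two-step logic, the compressed numpy sketch below fits an RBF-kernel GP to the sidebands, extrapolates it across a hypothetical 121-129 GeV window, and then extracts a Gaussian signal amplitude from the excess. It is a simplification of the method described above: the signal is fitted here by plain least squares on the background-subtracted counts rather than through the GP mean function, and the stand-in data, window edges and initial hyperparameters are made-up values (no jitter or other numerical safeguards are included).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(105.0, 160.0, 56)                        # bin centres (GeV), illustrative
y = 4e4 * np.exp(-x / 20.0) + rng.normal(0, 30, x.size)  # stand-in for measured counts

def rbf(a, b, amp, ls):
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def neg_log_ml(logp, xs, ys):
    """Negative log marginal likelihood of a zero-mean GP (up to a constant)."""
    amp, ls, noise = np.exp(logp)
    K = rbf(xs, xs, amp, ls) + noise * np.eye(xs.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ys))
    return 0.5 * ys @ alpha + np.log(np.diag(L)).sum()

# Step 1: hyperparameters from the sidebands only (signal window masked out).
side = (x < 121.0) | (x > 129.0)
amp, ls, noise = np.exp(minimize(neg_log_ml, np.log([1e3, 10.0, 1e2]),
                                 args=(x[side], y[side])).x)

# Step 2: extrapolate the background across the window, then fit the signal
# amplitude with the background held fixed.
K = rbf(x[side], x[side], amp, ls) + noise * np.eye(int(side.sum()))
bkg = rbf(x, x[side], amp, ls) @ np.linalg.solve(K, y[side])
g = np.exp(-0.5 * (x - 124.7)**2 / 2.4**2)               # unit-height Gaussian template
a_hat = ((y - bkg) @ g) / (g @ g)                        # least-squares peak height
print("estimated signal events:", a_hat * g.sum())
```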
An application of our approach to the Higgs boson data from the ATLAS detector (known as the ATLAS open dataset) yields a highly significant prediction of the Higgs boson signature, outperforming the traditional approach based on fitting a polynomial function to the background distribution.
Leveraging Energy Harvesting and Wake-Up Receivers for Long-Term Wireless Sensor Networks

Wireless sensor nodes are traditionally powered by individual batteries, and a significant effort has been devoted to maximizing the lifetime of these devices. However, as the batteries can only store a finite amount of energy, the network is still doomed to die, and changing the batteries is not always possible. A promising solution is to enable each node to harvest energy directly in its environment, using individual energy harvesters. Moreover, novel ultra-low power wake-up receivers, which allow continuous listening of the channel with negligible power consumption, are emerging. These devices enable asynchronous communication, further reducing the power consumption related to communication, which is typically one of the most energy-consuming tasks in wireless sensor networks. Energy harvesting and wake-up receivers can be combined to significantly increase the energy efficiency of sensor networks. In this paper, we propose an energy manager for energy harvesting wireless sensor nodes and an asynchronous medium access control protocol which exploits ultra-low power wake-up receivers. The two components are designed to work together and especially to fit the stringent constraints of wireless sensor nodes. The proposed approach has been implemented on a real hardware platform and tested in the field. Experimental results demonstrate the benefits of the proposed approach in terms of energy efficiency, power consumption and throughput, which can be more than two times higher than with traditional schemes.

Introduction

Wireless Sensor Networks (WSNs) are today a mature technology enabling a large variety of cyber-physical system applications in environmental monitoring, healthcare, security and industrial domains. They are composed of multiple wireless sensor nodes that monitor an environment and wirelessly send data to one or more remote hosts called sinks. A wireless sensor node is made of several components: a processing unit, memory, sensors, a transceiver and an energy source [1]. Usually, these devices are battery-powered and therefore have a limited lifetime, making energy one of the most precious resources, especially in scenarios where the network is expected to work for several months or even years. To tackle this problem, a successful approach is Energy Harvesting (EH), which allows the nodes to be powered by environmental energy sources such as sunlight, wind, vibration, water flow, etc. Using EH, it is possible to increase the WSN lifetime by an order of magnitude with respect to traditional battery-powered approaches and to achieve the Energy Neutral Operation (ENO) state, i.e., a state in which the amount of harvested energy is greater than or equal to the amount of consumed energy over long periods of time. The main contributions of this paper are the following:

• A novel Energy Manager (EM) for EH-WSNs is proposed. Unlike most state-of-the-art EMs, the energy management strategy proposed in this work only requires the residual energy as an input, making it practically easy to implement on real hardware platforms.

• A novel MAC protocol, called SNW-MAC (Star Network WuRx-MAC), leveraging Ultra-Low Power Wake-up Receivers (ULP WuRx) for data-gathering star networks is proposed. SNW-MAC enables asynchronous communications, minimizes the cost of packet transmissions and allows error correction. SNW-MAC significantly reduces the energy-cost variability of packet transmissions, allowing the EM to accurately control the consumed energy. Moreover, an analytical study of SNW-MAC scalability is presented.
• SNW-MAC and the proposed energy management scheme were implemented and evaluated in the field, using a state-of-the-art ULP WuRx [13]. The proposed scheme was evaluated in the context of indoor light energy harvesting through exhaustive experimentation.

• In order to achieve a fair evaluation of the proposed scheme, two state-of-the-art MAC protocols were also implemented on the same hardware and application scenario. Results show that ULP WuRx allow improved communication efficiency, which is exploited by the EM to achieve a higher throughput (more than double in some cases) compared with state-of-the-art schemes. To rigorously measure this improved energy efficiency in the context of data-gathering WSNs, the Energy Utilization Coefficient (EUC) is defined and used as an evaluation metric.

The remainder of this paper is organized as follows. Section 2 describes the related work. Section 3 presents the energy management scheme and the EUC metric. Section 4 details the design of SNW-MAC and presents an analytical study of its scalability. Section 5 describes the experimental setup used to evaluate our approach, and Section 6 presents the experimental results. Finally, Section 7 concludes this paper.

Related Work

To the best of our knowledge, no previous work has jointly addressed energy management and communication with ULP WuRx. However, many papers deal with either one or the other topic. Hence, this section is split into two subsections covering the two topics. The related work regarding energy management for EH-WSNs is given first, followed by a presentation of state-of-the-art MAC protocols leveraging ULP WuRx.

Energy Management for EH-WSNs

Research on energy management for EH-WSNs has been very prolific in recent years, and many solutions have been proposed [2,23-31]. They can be classified based on their requirement for predicted information about the amount of energy that can be harvested in the future, i.e., prediction-based and model-free. Prediction-based schemes require that an energy predictor [32] supplies the EM with predictions of the future harvested energy. The first EM using the prediction-based approach was introduced in 2007 by Kansal et al. [2]. In this scheme, an exponentially-weighted moving average filter is used to predict the future amount of harvested energy, and the duty-cycle is computed according to the difference between predicted and observed energy inputs. Castagnetti et al. introduced the Closed-Loop Power Manager (CL-PM) in [24], which uses two distinct energy management strategies: one for periods during which environmental energy is available, and one for periods during which the harvested energy is below a fixed threshold, referred to as zero energy intervals. Le et al. proposed the Wake-up Variation Reduction PM (WVR-PM) [25], a variation of CL-PM that lets a node store more energy while environmental energy is available, so as to achieve during zero energy intervals a quality of service similar to that achieved when environmental energy is available. Renner et al. proposed a prediction-based algorithm [33] for energy harvesting sensor nodes using a supercapacitor as the energy storage device. The algorithm consists of two blocks: a prediction block, which provides the forecast of the supercapacitor energy reserve for a given consumption and a given harvest forecast, and a second block that implements an energy policy, which defines predicates to enforce the properties of the desired operation style.
The algorithm was implemented on a testbed of twelve energy harvesting wireless sensor nodes organized in a multihop topology, and the field test lasted four weeks. In a previous work [26], we introduced GRAPMAN, an EM for EH-WSNs powered by pseudo-periodic energy sources that aims to achieve a high average throughput while maintaining a consistent quality of service, i.e., one with low fluctuations over time. As the amount of energy that a sensor can harvest shows large fluctuations and is hard to predict, energy predictors suffer from significant errors, incurring overuse or underuse of the harvested energy [29].

Unlike prediction-based approaches, model-free schemes do not require any prediction or model of the energy source. LQ-Tracker [27] uses Linear-Quadratic Tracking, a technique from adaptive control theory, to adapt the duty-cycle considering only the state of charge of the energy storage device. Similarly, Le et al. proposed using a Proportional Integral Derivative (PID) controller [28]. With P-FREEN [29], Peng et al. designed an EM that maximizes the duty-cycle of a sensor node in the presence of battery storage inefficiencies. The authors formulated the average duty-cycle maximization problem as a non-linear programming problem and proposed a set of budget-assigning principles that maximize the duty-cycle by using the currently-observed energy harvesting rate and the residual energy. In a previous work [30], we proposed an approach that relies on fuzzy control theory to dynamically set the node's consumed energy. Fuzzy rules are used to make a decision about the allocated energy considering the current state of charge of the energy storage device and the amount of harvested energy. Yang et al. proposed AutoSP-WSN [4], a framework for Solar-Powered WSNs incorporating both energy management algorithms and communication protocols, which was evaluated in the field. However, this work differs from ours in two major ways. First, the authors introduce routing and link-rate control protocols, while we focus on MAC protocols leveraging emerging ULP WuRx. Moreover, AutoSP-WSN features a prediction-based EM, while a model-free EM is proposed in this work, which suits the stringent hardware constraints of WSN devices.

Most of the previously-proposed EMs have been evaluated only through simulations, without any implementation on real sensor node hardware. Therefore, many practical problems are considered out of the scope of these studies. For example, accurate energy spending mechanisms and detailed harvested energy tracking are difficult to implement, and their implementation incurs significant overhead [34]; yet many theoretical energy-harvesting adaptive algorithms assume the availability of these values. In this study, the proposed EM has been experimentally evaluated using real hardware platforms. To this aim, the proposed EM only needs the current residual energy, and it can be used with various MAC protocols.

MAC Protocols Leveraging Wake-Up Receivers

There has been a tremendous amount of research on the design and implementation of MAC protocols for WSNs [35]. WSN MACs can be classified into three paradigms: synchronous, pseudo-asynchronous and asynchronous. In the first approach, neighboring nodes are synchronized to wake up at the same time. However, in the context of EH-WSNs, environmental power sources provide energy that continuously varies over time and space, making synchronous approaches suboptimal for such application scenarios [36].
Indeed, nodes powered by energy harvesting must be able to dynamically adapt their duty-cycle, and pseudo-asynchronous and asynchronous schemes allow each node to choose its active schedule independently of other nodes. Traditionally, pseudo-asynchronous schemes rely on duty-cycling [7], in which nodes are periodically powered on and off according to their own specific schedule while establishing on-demand rendezvous using a beaconing approach. Pseudo-asynchronous schemes can be categorized as transmitter-initiated or receiver-initiated, depending upon who initiates the rendezvous [37]. In the transmitter-initiated scheme, the receiving node periodically wakes up to monitor the channel and goes back to sleep after a short wake-up duration if the channel is found to be clear. When a node has a packet to send, it transmits request-to-send signals to the destination node, each followed by a listening period. The destination node, upon waking up according to its regular schedule, acquires the request-to-send and answers the transmitting node with a clear-to-send message. After this rendezvous process, the data packet is sent. In the receiver-initiated scheme, the receiving node periodically wakes up and sends a clear-to-send beacon. It then monitors the channel for a short duration and goes back to sleep if no signal is detected. If a node needs to transmit data, it listens to the channel for the clear-to-send beacon from the receiver and, upon reception, starts sending its data packet. Many variations of these two approaches can be found in the literature [35], and because of the underlying wake-up scheme and rendezvous process, these approaches are referred to as pseudo-asynchronous.

Fully-asynchronous communication can be achieved with the use of ULP WuRx, which eliminates the energy waste due to the rendezvous process and the periodic wake-ups [38]. As ULP WuRx technology is still under development and relatively new, only a few research studies have been conducted on designing communication protocols leveraging ULP WuRx. WUR-MAC [39] was the first MAC protocol that took advantage of ULP WuRx. The ULP WuRx and the main transceiver use separate channels, and a request-to-send/clear-to-send handshake with channel assignment is done using the ULP WuRx. By receiving each incoming request-to-send and clear-to-send frame, a node therefore knows which channel is used by its neighbors. When it wants to communicate, it randomly chooses a free channel and sends a request-to-send frame containing the chosen channel. As our approach does not require a request-to-send/clear-to-send handshake or a similar mechanism, packet transmissions are energetically less expensive than with WUR-MAC. Zippy [18] is a flooding protocol that leverages WuRx and has been experimentally validated on real sensor nodes. However, in the context of data-gathering star networks, the flooding approach, in which each packet is sent to all neighboring nodes, is not well suited, as only the sink needs to receive the data sent by the nodes. In a previous work, we introduced OPWUM [40], an opportunistic forwarding MAC using timer-based contention for next-hop relay selection and leveraging ULP WuRx. This MAC is nonetheless specifically designed for multi-hop networks and is therefore not appropriate for star networks.
In this paper, we propose a protocol dedicated to star networks, and unlike most of the previously-cited protocols, this protocol was implemented on sensor nodes and evaluated in the field. Regarding EH-WSNs, Le et al. [38] compared the energy consumption of the TICER protocol [37] with and without ULP WuRx. Through simulations, they showed that using a ULP WuRx drastically reduces the energy cost of communications. The energy thus saved is used to increase the node throughput. This approach is nonetheless still energetically expensive, as it requires the nodes to send Wake-Up Beacons (WuBs), which is done at high transmission power and a low bitrate because of the limited sensitivity of current WuRx. Magno et al. proposed a power unit for WSN nodes [21], which features multi-source energy harvesting, multi-storage adaptive recharging, as well as wake-up capabilities using a ULP WuRx, enabling the control and configuration of the power unit in an energy-efficient way. In this paper, the proposed ULP WuRx-based communication scheme is fully coordinated by the sink of a star network. The risk of collisions is eliminated, and the energy cost of a transmission is therefore both reduced and constant. This allows the EM to accurately control the consumed energy.

Energy Management for an Energy-Harvesting Sensor Node

In this section, we introduce a new EM for EH-WSNs, whose task is to dynamically adjust the performance of the node, evaluated in this work by its throughput, according to the current residual energy. The proposed EM can be used in collaboration with various MAC protocols. Later, we show the benefits of combining this novel EM with the SNW-MAC protocol leveraging WuRx proposed in Section 4. We assume that time is divided into slots of equal duration T, and the EM is executed at the end of each slot to set the throughput of the node for the next slot k. At each execution, the EM measures the current residual energy, denoted by e_R, and sets the frequency at which the node performs sensing and sends the so-obtained data. The EM sets this frequency by adjusting the wake-up interval for the next slot, denoted by T_WI, i.e., the time between two consecutive sense-and-send operations.

Two submodules compose the proposed EM, as shown in Figure 1. The Energy Budget Computation (EBC) module evaluates the energy that the node can consume in the next time slot k while remaining sustainable. This amount of energy is called the energy budget and is denoted by e_B[k]. The inputs to the EBC are the residual energy at the end of slot k − 1 and the variation of residual energy, respectively denoted by e_R[k − 1] and ∆e_R[k − 1], the latter defined by: ∆e_R[k − 1] = e_R[k − 1] − e_R[k − 2]. (1) The second module is the Throughput Computation (TC) module, which calculates the wake-up interval T_WI[k] according to the energy budget e_B[k]. When the topology is a star network, the only task of a node is to perform a measurement and to send the so-generated data to the sink. In multi-hop networks, each node must also relay packets sent by other nodes. Only the TC is specific to star network applications, and the EM can therefore easily be adapted to multi-hop scenarios by designing a module replacing the TC that allocates the energy budget among the sensing and relaying tasks required by multi-hop networks.

EBC Design

Most of the EMs presented in the literature assume the availability of the harvested and consumed energy values [2,24,25,30]. However, precise tracking of these values is difficult, and their implementation incurs high overhead [34].
Therefore, an EM that only requires the current amount of residual energy is proposed in this work. The aim of the EBC is to keep the device in the ENO-MAX state, i.e., a state in which the amount of consumed energy equals the amount of harvested energy over a long period of time [27], by dynamically adapting the energy budget. Four residual energy levels of the energy storage device are defined and shown in Figure 2a. If the stored energy falls below E_R^fail, a power outage is incurred. On the other hand, if the energy storage device is full, i.e., the amount of stored energy is E_R^max, then the excess of harvested energy is wasted, as it cannot be stored. This situation is called a saturation of the energy storage device. To avoid saturation, the risk-of-saturation interval [E_ENI^up, E_R^max] is defined. It allows the EBC to avoid the waste of energy by overflow of the storage device, serving as a buffer when an increase of the harvested energy occurs. Moreover, the energy stock E_S = E_ENI^down − E_R^fail is defined as the amount of energy required to ensure the operation of the device during periods without energy intake from the harvester; it depends on the application and on the energy source characteristics. E_S is therefore the amount of energy that should be stored in the energy storage device to avoid a power outage during periods of energy scarcity. The aim of the EBC is thus to keep the state of charge of the energy storage device in the Energy Neutral Interval (ENI) [E_ENI^down, E_ENI^up] when environmental energy is available, thereby avoiding the waste of energy by saturation of the storage device, while storing enough energy to survive periods during which no energy is harvested.

We call "quality of service" the application requirements regarding the sensing period, bit error rate, etc. Ensuring the minimum quality of service required by the application necessitates a minimum energy budget per slot, denoted by E_B^min. At each execution of the EBC, the energy budget of the next slot k is computed as follows: e_B[k] = e_B[k − 1] + δe_B[k], (2) where δe_B[k] is the energy budget correction, calculated according to the current values of e_R and ∆e_R. The objective of the EM is two-fold: (i) to find an energy budget that achieves an adequate compromise between the discharging rate and the quality of service when little environmental energy is available; and (ii) when environmental energy is available, to avoid saturation or to find a good compromise between the charging rate and the quality of service. This is done by adjusting the value of the energy budget by an amount that depends on both the residual energy and its variation. The rule base shown in Figure 2b presents the EBC strategy; a schematic rendering of this rule base is sketched below. In this table, ∆e_B is a positive parameter of the EM and corresponds to the energy budget correction when the amount of stored energy is either in the ENI or in the risk-of-saturation interval. As most applications do not perform well under strong variations of the allocated energy budget, choosing ∆e_B requires a compromise between the reactiveness of the EBC and the variability of the allocated energy budget. Indeed, choosing a high value of ∆e_B leads to a highly reactive EM, but at the cost of strong variations of the energy budget, which may not be suitable for many applications. On the other hand, choosing a low value of ∆e_B leads to a less reactive EM and smooth energy budget variations. Therefore, the choice of ∆e_B depends on both the energy source and the application requirements.
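For concreteness, a schematic Python rendering of this rule base could look as follows. This is a sketch under our own naming and packaging assumptions (the EbcParams bundle and function names are not from the paper); the charging and discharging gain functions mu_c and mu_d follow [30] and are left abstract here, anticipating the corrections of Eqs. (3) and (4) discussed below.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class EbcParams:
    eni_down: float                  # E_ENI^down (J)
    eni_up: float                    # E_ENI^up (J)
    e_b_min: float                   # E_B^min, minimum budget per slot (J)
    delta_e_b: float                 # fixed correction step Delta-e_B (J)
    mu_c: Callable[[float], float]   # charging gain mu_C(e_R), increases with e_R [30]
    mu_d: Callable[[float], float]   # discharging gain mu_D(e_R), decreases with e_R [30]

def ebc_update(e_b_prev: float, e_r: float, de_r: float, p: EbcParams) -> float:
    """One EBC step: pick the budget correction delta-e_B[k] from the rule base."""
    if e_r > p.eni_up:                          # risk of saturation (R7-9)
        delta = p.delta_e_b
    elif e_r >= p.eni_down:                     # energy neutral interval (R4-6)
        delta = p.delta_e_b * np.sign(de_r)     # track ENO-MAX by following the drift
    elif de_r > 0:                              # charging state (R3)
        delta = p.mu_c(e_r) * de_r
    else:                                       # discharging state (R1-2)
        delta = p.mu_d(e_r) * de_r              # de_r <= 0, so the budget shrinks
    return max(p.e_b_min, e_b_prev + delta)     # never drop below the minimum QoS budget
```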
Four scenarios can be considered from Figure 2 and are detailed hereafter.

Risk of saturation (R7-8-9): A risk of saturation occurs when e_R > E_ENI^up. To avoid wasting energy through overflow of the energy storage device, the EBC increases the energy budget until the residual energy decreases to a value belonging to the ENI.

Energy neutral interval (R4-5-6): If the amount of residual energy belongs to the ENI, the goal of the EBC is to keep the node in the ENO-MAX state. The ENO-MAX state is achieved when the residual energy is kept constant with regard to time, and the EBC thus corrects the energy budget according to the sign of ∆e_R to keep the node in the ENO-MAX state.

Charging state (R3): The node is considered to be in the charging state when e_R < E_ENI^down and ∆e_R is positive. The node is thus re-filling its energy stock. In these conditions, the goal of the EBC is to keep the residual energy increasing until the amount of stored energy is greater than or equal to E_S, i.e., until the residual energy reaches the ENI in a reasonable time, while allocating a high enough energy budget to ensure a good quality of service. A trade-off must be made between the charging time and the quality of service. Indeed, at one extreme, a conservative policy is to allocate the minimal energy budget while the energy storage device is not fully charged, leading to a quick refill of the storage device at the cost of a low quality of service during the charging phase. At the other extreme, allocating almost all the harvested energy leads to a slow charging rate but to a good quality of service, relative to the currently-available environmental energy, during the charging phase. As the choice of an appropriate strategy depends on the application, a tunable strategy is proposed. The energy budget correction δe_B is set to a value proportional to the residual energy variation ∆e_R, the proportionality factor being a function of e_R denoted by µ_C(e_R) (Eq. (3)) [30], where M_C and K_C are positive parameters allowing the tuning of the charging strategy. µ_C increases with e_R: the more energy is stored, the less conservative we need to be. While M_C sets the maximum value of the proportionality factor, K_C sets the growth rate of µ_C. If K_C = 1, δe_B increases linearly with e_R. For values of K_C lower than one, the growth rate increases when the residual energy increases, while for values of K_C higher than one, the growth rate decreases when e_R increases.

Discharging state (R1-2): The node is considered to be in the discharging state when e_R < E_ENI^down and ∆e_R is negative. The node is thus consuming its energy stock. In this scenario, a trade-off must be made between the allocated energy budget and the lifetime of the node, i.e., the time it can last before running into a power outage. A conservative policy is to set the energy budget to the minimum required, hence maximizing the lifetime at the cost of a low quality of service. On the other hand, setting the energy budget to an arbitrarily high value leads to a high quality of service at the cost of a short lifetime. Similarly to what has been done for the charging state, a customizable energy management strategy, which can be tuned according to the needs of the application, is proposed.
The energy budget correction δe_B is set to a value proportional to the residual energy variation ∆e_R, the proportionality factor being a function of e_R denoted by µ_D(e_R) (Eq. (4)) [30], where M_D and K_D are positive parameters allowing the tuning of the discharging strategy. µ_D decreases with e_R: the less energy is stored, the more conservative we must be. The impacts of K_D and M_D on the discharging strategy are similar to those of K_C and M_C on the charging strategy.

TC Design

The TC aims to compute the throughput of the node over a time slot such that the node consumes the amount of energy specified by the EBC. As wireless communication is usually the most energy-consuming task, ahead of all other tasks such as sensing and computing [6], the throughput of the node for a given energy budget is strongly tied to the MAC protocol. To transmit a single packet, a given MAC protocol typically requires several steps, such as receiving/sending a beacon frame, sending a data frame, receiving an Acknowledgment (ACK) frame, etc. The number of states in which the node can be when communicating using a given protocol is denoted by N_S. Each state is defined by the combination of the states of the different components (MCU, radio chip, sensors). The time spent in state i ∈ {1, ..., N_S} during a single packet transmission is denoted by τ_i, and the corresponding power consumption of the node is denoted by P_i. The energy cost e_P of the whole process of performing a measurement and sending a single packet is therefore: e_P = Σ_{i=1}^{N_S} P_i τ_i, (5) and the energy consumed by the node over one time slot k is: e_C[k] = (T / T_WI[k]) e_P + P_S (T − (T / T_WI[k]) τ_T), (6) where P_S is the power consumption of the node when all the components are in the sleep state and τ_T is the total time required to perform a measurement and send a packet, equal to τ_T = Σ_{i=1}^{N_S} τ_i. Therefore, in order for the consumed energy e_C[k] to be equal to the energy budget e_B[k], the wake-up interval is set to the following value: T_WI[k] = T (e_P − P_S τ_T) / (e_B[k] − P_S T). (7) This equation is obtained by replacing e_C[k] by e_B[k] in Equation (6). The associated throughput, in packets per minute, is thus: Θ[k] = 60 / T_WI[k]. (8)

For low data-rate applications, typical of EH-WSNs, MAC protocols are usually based on pseudo-asynchronous approaches, which makes the estimation of the τ_i values challenging. Indeed, rendezvous schemes incur a high variability of the time spent in the idle and receive states across packet transmissions. As a consequence of an inaccurate estimation of these values, the energy consumed by the node can be significantly different from the energy budget calculated by the EBC, which can lead to power failures or energy waste. Therefore, in Section 4, a new protocol reducing the energy-consumption variability of packet transmissions is proposed.

Energy Utilization Coefficient

To evaluate the energy efficiency of different MAC protocols, the Energy Utilization Coefficient (EUC), denoted by ξ, is defined as the ratio of the throughput to the energy budget: ξ = Θ / e_B. (9) It is expressed in packets per minute and per Joule. For notational simplicity, the slot number indication "[k]" is omitted in the rest of this section, and all the following equations refer to a single time slot. The EUC quantifies the throughput achieved by a MAC protocol for the available energy budget and is similar to other energy efficiency metrics, e.g., [41,42]. By combining Equations (7)-(9), we obtain: ξ = 60 (e_B − P_S T) / (e_B T H), (10) where H (in Joules) is defined by: H = e_P − P_S τ_T = Σ_{i=1}^{N_S} τ_i (P_i − P_S). (11) H is a constant particular to a given hardware and MAC protocol. Indeed, the τ_i values depend on the MAC protocol, while the P_i values depend on the hardware.
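As a sanity check on these formulas, the short Python sketch below computes the wake-up interval, throughput and EUC for a given budget, together with the constant H and the upper bound ξ∞ discussed in the remarks that follow. The numbers are purely illustrative placeholders, not the measured PowWow values, and the equation numbering refers to the reconstruction above.

```python
import numpy as np

def wakeup_interval(e_b, P, tau, P_s, T):
    """T_WI (s) that spends exactly the budget e_b (J) over a slot of T seconds,
    per Eq. (7); P (W) and tau (s) describe one sense-and-send operation."""
    H = np.sum(tau * (P - P_s))        # Eq. (11): hardware+MAC constant, in Joules
    return T * H / (e_b - P_s * T)

def euc(e_b, P, tau, P_s, T):
    """Energy Utilization Coefficient (packets/min/J), Eqs. (8)-(10)."""
    return (60.0 / wakeup_interval(e_b, P, tau, P_s, T)) / e_b

# Illustrative numbers only (not the measured PowWow values):
P   = np.array([60e-3, 90e-3])         # per-state powers, W (e.g. MCU active, radio TX)
tau = np.array([5e-3, 4e-3])           # per-state durations of one packet, s
P_s, T = 10e-6, 120.0                  # sleep power (W) and slot duration (s)
H = float(np.sum(tau * (P - P_s)))
print("EUC at e_b = 50 mJ:", euc(50e-3, P, tau, P_s, T))
print("EUC upper bound xi_inf = 60/(T*H):", 60.0 / (T * H))   # Eq. (12)
```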
Two remarks can be made regarding Equation (10). First, the EUC is not constant for a given hardware and MAC, but increases with the energy budget e_B. Secondly, the EUC is bounded, as: ξ < lim_{e_B→∞} ξ = 60 / (T H) = ξ∞. (12) From Equation (12), it can be observed that the maximum EUC ξ∞ is higher for small values of H. Therefore, the smaller H is, the better. For the rest of this work, the P_i values are assumed to be fixed, and the power consumption in the sleep state, P_S, is supposed to be much smaller than the power consumption P_i of the other states. This assumption holds for typical WSN platforms. Hence, minimizing H amounts to minimizing the τ_i values. In order for H to be minimal, only the data frame should be sent at each packet transmission. However, most MAC protocols introduce an overhead to synchronize the nodes (e.g., the rendezvous process in pseudo-asynchronous MAC protocols) or for error control (e.g., ACK frames). As we will see in the next section, using ULP WuRx allows the minimization of H and hence the maximization of the EUC.

MAC Protocol Leveraging Wake-up Receivers

This section introduces SNW-MAC [43], a protocol for data-gathering star networks that uses ULP WuRx. Traditional protocols use the duty-cycling approach to reduce energy consumption; however, this scheme does not eliminate the energy waste incurred by idle listening and overhearing. Moreover, these protocols are subject to collisions, which reduce their scalability and increase their energy consumption. SNW-MAC leverages ULP WuRx to enable asynchronous communication, minimizing the energy required to transmit a packet and making collisions impossible between packets sent by nodes belonging to the same SNW-MAC-based network. It is assumed that a Physical layer (PHY) providing an error detection mechanism is used. For example, the widespread IEEE 802.15.4 PHY provides a Cyclic Redundancy Check (CRC) error-detecting code.

Design of SNW-MAC

SNW-MAC is an asynchronous scheme that uses the receiver-initiated approach to minimize the energy consumption of WSN nodes. As the power consumption of a ULP WuRx has to be orders of magnitude lower than that of the main radio, these devices are usually characterized by a low sensitivity and a low data rate [13,44]. For this reason, sending WuBs to a ULP WuRx can be costly energy-wise, as it is done at a low bitrate and a high transmission power to achieve the same range as the main radio. Packet transmission using SNW-MAC is illustrated by Figure 3a. The sink initiates a communication by sending a WuB containing the address of a specific sensor node and then listens to the channel to receive the data packet. The targeted sensor node is awoken by its ULP WuRx and starts sending the data packet. Each sensor node piggybacks its wake-up interval in its data packets. The sink keeps an updated table associating each node with its wake-up interval, so as to poll each node at the right time: for each node, the sink sets a timer to the wake-up interval piggybacked in the data packets and polls the node every time the timer expires. Sensing operations are performed by each node at any time between two sink polls, ensuring that data are ready to be sent when the sink sends the wake-up beacon. Compared to traditional receiver-initiated protocols, this approach reduces the energy consumption of the sink and of the nodes, as no rendezvous process is required. The sink energy consumption is further reduced as useless periodic WuB transmissions are avoided.
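A schematic rendering of this sink-side polling logic is sketched below; it is illustrative only, with send_wub and receive_data standing in for the radio driver, and the field names, the timeout value and the choice of a heap-based schedule being our own assumptions (node addresses are assumed to be plain integers).

```python
import heapq, time

class SnwSink:
    """Schematic sink-side polling loop for SNW-MAC (illustrative only)."""

    def __init__(self, nodes, default_twi=60.0):
        # Table associating each node address with its wake-up interval (s).
        self.twi = {n: default_twi for n in nodes}
        # Min-heap of (next poll time, node address): our scheduling choice.
        self.queue = [(time.monotonic() + self.twi[n], n) for n in nodes]
        heapq.heapify(self.queue)

    def run_once(self, send_wub, receive_data):
        due, node = heapq.heappop(self.queue)         # next node to poll
        time.sleep(max(0.0, due - time.monotonic()))
        send_wub(node)                                # WuB carrying the node address
        pkt = receive_data(timeout=0.1)               # then listen for the data frame
        if pkt is not None:
            self.twi[node] = pkt.twi                  # wake-up interval piggybacked in data
        heapq.heappush(self.queue, (due + self.twi[node], node))
```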
Because the wake-up interval is typically a 16-bit integer, the piggybacking of this information incurs minimal overhead. Moreover, the sink can use it to monitor the sensor node activity.

WuB format: The WuB format of SNW-MAC is shown in Figure 3b. A WuB is 19 bits long and is composed of three synchronization bits, the 8-bit address of the node to wake up, and an 8-bit sequence number of the expected data packet, used for error control as explained hereafter.

Error control and retransmission: By coordinating data packet transmissions at the sink, SNW-MAC eliminates the risk of collisions present in traditional pseudo-asynchronous schemes, as each node is specifically polled. However, wireless channel interference may lead to corrupted frames, and energy-efficient error control and packet retransmission is therefore an important issue. As the sink is entirely in charge of coordinating packet transmissions, it is responsible for detecting transmission errors and scheduling new attempts. Each WuB embeds an 8-bit sequence number of the expected data packet, as shown in Figure 3b. The sink keeps an updated table that associates with each node the next packet sequence number to poll. When a sensor node's ULP WuRx acquires a WuB, it reads both the address and the sequence number. Thanks to the capability of the ULP WuRx to recognize the address directly on board, it wakes up the node MCU only if the address is valid, and then sends it the sequence number over the serial port. All the packets that have a sequence number lower than the one received are considered either successfully received or dropped because of too many transmission attempts, and are thus erased from the transmission buffer. The data packet with the sequence number requested by the sink is then sent, piggybacking its sequence number. When a packet is successfully received by the sink from a given sensor node, the sink checks the data packet sequence number. If the sequence number of the data packet is the one expected by the sink, then the sink increments the sequence number associated with this node. When the sink detects a transmission failure, e.g., a corrupted data packet, it does not increment the sequence number and sets a random backoff. When the backoff expires, it initiates a new communication using the same sequence number, as illustrated by Figure 3a. Compared to traditional error-control schemes that require ACK frames, the energy overhead is significantly reduced for sensor nodes, as they do not need to listen for ACK frames after each data packet transmission. On the sink side, as no ACK frame is sent, energy is also saved. Nonetheless, this energy saving is counterbalanced by the longer WuBs sent by the sink due to the sequence number. Using SNW-MAC, only the data frame is sent by the nodes, thus minimizing the per-packet energy consumption and the H value introduced in Section 3.2. Moreover, the per-packet energy consumption variability is also minimized if the data frame length does not change; indeed, the only remaining cause of energy consumption variability is retransmissions. A low energy consumption variability is important to allow the EM to accurately control the energy consumption of the node.

Analytical Study of Scalability

In this section, the scalability of SNW-MAC and of traditional pseudo-asynchronous MAC protocols is evaluated in the context of star networks.
The emphasis is put on the sink, which is in charge of gathering the packets from all the sensors. This section compares the packet reception rates achievable with SNW-MAC and with pseudo-asynchronous MAC protocols, in order to study how sustainable the proposed approach is with regard to the sink compared to these protocols. The number of nodes that compose the network is denoted by N (not including the sink), and the packet generation rate of each node is modeled by a Poisson distribution of parameter λ packets per minute. Next, the expressions of the packet arrival rate at the sink are derived for SNW-MAC and for pseudo-asynchronous MAC protocols, under the assumption that the only cause of packet loss is collisions and that all collisions are destructive, i.e., lead to corrupted packets.

SNW-MAC: Collisions between packets sent by nodes belonging to the same SNW-MAC-based network are impossible, as the sink specifically polls each node. Nonetheless, as receiving a packet requires a non-null duration, the receiving rate is still bounded. The total time required to receive a packet is denoted by τ_R and is defined by: τ_R = τ_d + τ_o, (13) where τ_d is the time required to receive the data payload and τ_o is the overhead incurred by the hardware and the protocol at each packet reception (WuB sending, radio setup, turn-around time, software overhead). The maximum receiving rate in packets per minute is thus: Γ = ⌊60 / τ_R⌋, (14) where ⌊·⌋ is the floor function. We assume that the packet generation rates of the nodes are independent of each other and are modeled by Poisson distributions of mean λ packets per minute, and we denote by A the aggregate rate. As Poisson distributions are stable under summation, A follows a Poisson distribution of mean Nλ packets per minute. However, because the maximum receiving rate of the sink is Γ, the receiving rate of the sink, denoted by R, is modeled by the following distribution: P(R = k) = P(A = k) for 0 ≤ k < Γ, and P(R = Γ) = P(A ≥ Γ). (15) Indeed, the sink saturates when the receiving packet rate reaches Γ, and higher receiving packet rates are impossible, as the sink cannot poll the nodes quickly enough. The average packet rate is thus: γ_SNW-MAC = E[R] = Σ_{k=0}^{Γ−1} k P(A = k) + Γ P(A ≥ Γ). (16) As: P(A ≥ Γ) = 1 − Σ_{k=0}^{Γ−1} P(A = k), (17) we finally have: γ_SNW-MAC = Γ − Σ_{k=0}^{Γ−1} (Γ − k) ((Nλ)^k / k!) e^{−Nλ}. (18)

Pseudo-asynchronous MAC: Using a traditional pseudo-asynchronous MAC, the sink periodically wakes up to receive data packets, and the sink wake-up interval is denoted by T_S. Time is thus divided into slots of equal duration T_S. When a node generates a packet, it typically tries to send it at the sink's next wake-up. The number of packets, denoted by X, generated by a given node over a time slot can be modeled by a Poisson distribution of parameter λ T_S / 60. Therefore, the probability that a node generates packets in a time slot is: p_g = P(X ≥ 1) = 1 − e^{−λ T_S / 60}. (19) Let Y be the number of nodes that have generated packets over a time slot. Y can be modeled by a binomial distribution of parameters N and p_g. As the Y nodes will try to send a packet at the next sink wake-up, and since all collisions are assumed to be destructive, the number of packets received by the sink during a time slot is a function of Y, denoted by R(Y) and defined by: R(Y) = 0 if Y = 0, R(Y) = 1 if Y = 1, and R(Y) = 0 if Y ≥ 2, (20) the last case being the collision scenario. Furthermore: P(Y = 0) = (1 − p_g)^N, (21) and for k ≥ 1: P(Y = k) = C(N, k) p_g^k (1 − p_g)^{N−k}, (22) leading to: P(R(Y) = 1) = P(Y = 1) = N p_g (1 − p_g)^{N−1}. (23) Therefore, the average number of packets received during a time slot is: E[R(Y)] = N p_g (1 − p_g)^{N−1}, (24) and the average receiving rate γ_PAM in packets per minute is thus: γ_PAM = (60 / T_S) N p_g (1 − p_g)^{N−1}. (25)

Figure 4 shows γ_SNW-MAC and γ_PAM for values of N ranging from 0 to 100 and values of λ ranging from 0 to 300 packets per minute.
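A small Python sketch reproducing Eqs. (18) and (25) numerically might read as follows; the sample parameter values are illustrative, not those of Figure 4.

```python
import numpy as np
from scipy.stats import poisson

def gamma_snw(N, lam, tau_r):
    """Eq. (18): average receiving rate (pkts/min) of SNW-MAC,
    E[min(A, Gamma)] with A ~ Poisson(N*lam) and Gamma = floor(60/tau_r)."""
    Gamma = int(60.0 / tau_r + 1e-9)   # floor, guarding against float round-off
    k = np.arange(Gamma)
    return Gamma - np.sum((Gamma - k) * poisson.pmf(k, N * lam))

def gamma_pam(N, lam, T_s):
    """Eq. (25): average receiving rate (pkts/min) of a pseudo-asynchronous
    MAC when a slot delivers a packet iff exactly one node transmits."""
    p_g = 1.0 - np.exp(-lam * T_s / 60.0)   # Eq. (19): node has data in a slot
    return (60.0 / T_s) * N * p_g * (1.0 - p_g) ** (N - 1)

# Illustrative point: 50 nodes, 100 pkts/min each, tau_R = T_S = 40 ms.
print(gamma_snw(50, 100.0, 0.040), gamma_pam(50, 100.0, 0.040))
```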
In real scenarios, the wake-up interval T_S of pseudo-asynchronous protocols is usually set to a much higher value than τ_R to save energy. However, in order to compare SNW-MAC against the best-case scenario of pseudo-asynchronous protocols regarding scalability, both τ_R and T_S were set to 40 ms, leading to Γ = 1500 packets per minute for SNW-MAC. As we can see, γ_SNW-MAC increases until it reaches Γ. The sink then saturates, and the receiving packet rate stops increasing. On the other hand, γ_PAM first increases with λ and N, but decreases after reaching a maximum because of collisions, limiting its scalability. Moreover, the maximum reached by γ_PAM is 674 packets per minute, less than half of the maximum reached by γ_SNW-MAC. These numerical results show the better scalability of SNW-MAC, even when compared against the best-case scenario of pseudo-asynchronous MAC protocols. In the case where T_S is set to 250 ms and τ_R to 40 ms, which are typical real-scenario values, numerical results show that the highest packet rate achieved by pseudo-asynchronous MAC protocols is 300 packets per minute, five times lower than with SNW-MAC.

Experimental Setup

The experimental setup used to evaluate our approach is presented in this section. First, the architecture of the nodes that compose the testbed is presented. Then, details on the ULP WuRx implementation are given. Finally, the designs of the two state-of-the-art MAC protocols to which SNW-MAC is compared are presented.

Node Architecture

Multiple EH-WSN platforms have been proposed by academia and industry over the last decade. In this work, we consider a single-path architecture version of the Multiple Energy Source Converter (MESC) architecture proposed in [45]. In the single-path architecture, there is only one energy storage device, and all the harvested energy is used to charge the storage device, which directly powers the node through a DC-DC converter. Figure 5 shows the block architecture of MESC, which can be used with a variety of energy harvesters (e.g., photovoltaic cells, thermoelectric generators and wind turbines) using the appropriate energy adapter to normalize the output energy. Supercapacitors were chosen as storage devices, as they are more durable and offer a higher power density than batteries [46]. In this work, the PowWow platform [47], based on the MESC architecture and equipped with a Texas Instruments CC1120 radio chip, is used as the testbed. The energy storage device is a 0.9 F supercapacitor with a maximum voltage of 5.0 V, and the minimum voltage required to power the node is 2.8 V. PowWow embeds a voltage measurement chip, the INA3221 from Texas Instruments, which allows measurement of the supercapacitor voltage, denoted by V_C, with a precision of 0.1 mV. The residual energy e_R can thus easily be computed as follows: e_R = (1/2) C V_C^2, (26) where C is the supercapacitor capacitance. When SNW-MAC is evaluated, the ULP WuRx is added to the node; the so-obtained mote is shown in Figure 6. The EM introduced in Section 3 was implemented on PowWow, in addition to SNW-MAC introduced in Section 4 and the two other state-of-the-art MAC protocols presented in Section 5.3. The parameters used for the experiments are shown in Table 1. As the supercapacitor supplies the node via a DC-DC converter, as shown in Figure 5, and the efficiency of the DC-DC converter varies with the input voltage, the power consumed by the node depends on the charge of the supercapacitor.
Therefore, the power consumption of the node in each of the N_S states (introduced in Section 3.2) was measured for input voltages of the DC-DC converter ranging from 2.8 V to 5.0 V, and piecewise linear interpolation was used to obtain the P_i values as functions of the supercapacitor voltage. Figure 7 shows the so-obtained measurements and the corresponding interpolated functions for two states of the node. As can be seen, piecewise linear interpolation permits an accurate modeling of the node power consumption.

Ultra-Low Power Wake-Up Receiver

Each wireless sensor node is equipped with the ULP WuRx presented in [13] only when the proposed approach is evaluated. This ULP WuRx employs On-Off Keying (OOK) modulation, the simplest form of Amplitude-Shift Keying (ASK) modulation, in which digital data are represented by the presence or absence of the carrier wave. The analog front-end of the receiver is designed for the 868 MHz frequency band and has a measured sensitivity of −55 dBm at a bitrate of 1 kbps. The computational capabilities of the ULP WuRx are provided by a ULP 8-bit microcontroller, the PIC12LF1552 from Microchip, which was selected for its low current consumption (20 nA in sleep mode), its fast wake-up time (approximately 130 µs at 8 MHz), its serial port supporting I2C and SPI, allowing easy communication with the node MCU, and its sufficient computational capability for parsing data and commands. When a carrier is detected, the front-end wakes up the microcontroller, which reads the address embedded in the WuB and performs address matching. If the received address is not valid, the microcontroller goes back to the sleep state. If it is valid, it wakes up the node MCU using an interrupt.

State-Of-The-Art MAC Protocols Used for Comparison

The SNW-MAC protocol described in the previous section is compared to PW-MAC [9] and to UPMA-X-MAC from the Unified Radio Power Management Architecture (UPMA) [48], two well-known state-of-the-art pseudo-asynchronous MAC protocols. Figure 8a illustrates a packet transmission using the transmitter-initiated UPMA-X-MAC. The node initiates the communication by repeatedly sending the data packet until an ACK frame from the sink is received. On its side, the sink periodically wakes up and listens to the channel. If it detects activity, it waits until the incoming data packet is fully received and then sends the ACK frame. PW-MAC is a receiver-initiated protocol that focuses on energy efficiency at both the receiver and transmitter sides. A simplified version of the protocol is used in this study, as only downlink transmissions are considered. A packet transmission using PW-MAC is shown in Figure 8b. On the receiver side, the sink periodically wakes up and sends a Beacon (BCN) frame. On the transmitter side, each node accurately predicts the time at which the sink will wake up. If a packet needs to be sent, the node wakes up just before the next beacon is sent by the sink. Once the beacon is acquired, the node sends the data packet and waits for the ACK frame. At each packet transmission, a prediction error is computed, and the node updates its prediction time according to this error.

Experimental Results

This section starts by analyzing the WuRx power consumption.
Then, the results of the microbenchmarks performed to provide detailed insight into the energy cost of transmitting and receiving a packet with the evaluated protocols are presented, and these results are used to compute the H and ξ∞ values related to the EUC metric defined in Section 3. The energy consumption of the sink incurred by the evaluated protocols is then studied. Next, the benefits of SNW-MAC are shown by comparing it to the two state-of-the-art MAC protocols introduced in the previous section. Finally, our scheme is evaluated under variable energy-harvesting conditions to show the benefits of the EM working in collaboration with the MAC protocols, and the higher performance of the proposed approach.

Energy Consumption of the Wake-Up Receiver

One of the requirements of a ULP WuRx is a very low power consumption, as it is always active, even when all the other components are in the sleep state. The power consumption of the ULP WuRx was measured to be 1.83 µW when the radio front-end was active and the PIC was in the sleep state, and 284 µW when the PIC was active at 3.3 V, parsing the received data at 2 MHz. The ULP WuRx power consumption therefore becomes significant when the PIC is active. At each wake-up, the PIC is active for 19 ms to perform address matching. Hence, the energy consumed by the ULP WuRx at each wake-up of the PIC is 5.40 µJ. If we consider a typical node, not using a ULP WuRx but instead using the duty-cycling approach with a duty-cycle set to a typical value of 0.05% and consuming 100 mW when the transceiver is active, then the total energy consumed by this node over a period of 24 h is 4.32 J. This amount of energy corresponds to more than 8 × 10^5 wake-ups of the PIC. The number of "false" PIC wake-ups, i.e., wake-ups caused not by WuBs but by wireless channel noise, was measured over a period of 24 h in an indoor environment. Figure 9 shows the cumulative number of false wake-ups over time. It is not surprising to observe that most of the false wake-ups happen during the daytime. In total, 3110 false wake-ups were counted over the 24-h period, which is two orders of magnitude below the previously-considered scenario assuming a typical 0.05% duty-cycle. Moreover, no false wake-ups of the main MCU happened. These results show the importance of the microcontroller embedded in the ULP WuRx: performing address matching on a ULP microcontroller avoids numerous false wake-ups of the node MCU, whose power consumption is significantly higher.

Energy Microbenchmarks

To evaluate the energy efficiency of a MAC protocol, it is important to measure the energy consumption of the transmission and the reception of a single packet. Therefore, the energy traces of both operations were measured for the three evaluated protocols by capturing the voltage drop across a 10.2 Ω resistor in series with a 3.5 V power supply, using an Agilent Technologies MSO-X-3024A oscilloscope. In addition to allowing a detailed analysis of the energy consumption, these microbenchmarks were used to set the τ_i values introduced in Section 3.2 and to compute the H and ξ∞ values related to the EUC metric. The results of the measurements are shown in Figure 10, in which P_C is the power consumption of the node. Figure 10a shows that sending a data packet using the proposed SNW-MAC protocol achieves the lowest power consumption among the evaluated protocols, as it requires only the sending of the data frame (B).
The reception of the WuB does not appear in this figure, as the power consumption of the WuRx when decoding a WuB is 284 µW, too low to be visible at the scale of Figure 10a. Moreover, the energy cost of sending a packet is constant if the data payload length is fixed. Regarding the sink, the two stages of a packet reception, sending the WuB (A) and then receiving the data frame (B), can be seen in Figure 10b. As sending the WuB is done at a lower bitrate and a higher transmission power than for non-WuB frames, polling a node is energetically expensive for the sink. This result motivates the piggybacking of the wake-up interval of each node in data packets, allowing the sink to poll them only at the right time (see Section 4). Figure 10c,d shows, respectively, the energy cost of a packet transmission and of a packet reception using PW-MAC. Sending a packet with this protocol requires receiving a beacon (A) and an ACK frame (C), making the energy cost of sending a packet higher than with SNW-MAC. Moreover, the sender wakes up a short time before the sink transmits a beacon, to accommodate prediction errors. This time interval varies at each transmission, leading to a non-constant energy cost per packet transmission. The prediction error becomes significant due to clock drift, and when it exceeds a fixed threshold, an update of the prediction state is triggered, leading to an even higher energy consumption. Regarding the sink, receiving a data packet does not require the sending of a WuB and is thus less energetically expensive than with SNW-MAC. Nonetheless, SNW-MAC does not require the transmission of an ACK frame, which partially counterbalances the energy overhead incurred by the WuB transmission when compared to PW-MAC. Figure 10e shows the energy cost of sending a packet for a node using UPMA-X-MAC. In this case, the packet was successfully received by the sink at the sixth attempt. For each attempt, the two stages, sending the data packet (B) and listening for an ACK (C), can be seen. As shown in Figure 10f, the sink woke up during the fifth attempt (B) and thus did not receive the complete data packet. It stayed awake to receive it at the next attempt (B) and then sent an ACK (C). The cost of sending one packet with UPMA-X-MAC varies greatly across transmissions because of the randomness of the sink wake-up time relative to the node transmission starting time. On average, a node has to wait half of the wake-up interval of the sink before a data packet is successfully received. This makes sending a packet with this protocol energetically more expensive than with SNW-MAC or PW-MAC. On the sink side, the energy cost of a packet reception is also highly variable and requires, on average, listening to one and a half data packets in addition to the transmission of an ACK frame.

Using these microbenchmarks, the τ_i values introduced in Section 3.2 were measured, and H and ξ∞ were calculated for the different MAC protocols using the lowest measured values of the P_i, leading to the best achievable values of H and ξ∞. Table 2 presents the obtained results. It can be observed that SNW-MAC allows a significantly better use of the energy budget. Indeed, with SNW-MAC, H is more than twice as small (and ξ∞ more than twice as large) as with PW-MAC, and more than nine times smaller (respectively, larger) than with UPMA-X-MAC.

Energy overhead of the EM: The EM is periodically executed by each node and therefore incurs an energy overhead.
Using microbenchmarks, it was measured that each execution of the EM consumes at most 207.41 µJ. For the rest of this work, the duration T between two executions of the EM is set to 120 s, and the power consumption overhead incurred by the EM is thus equivalent to a constant power draw of 1.73 µW, which is similar to the power consumption of state-of-the-art electronic components for WSN nodes in the sleep state. Energy Consumption of the Sink To evaluate the energy consumption of the sink incurred by the evaluated MAC protocols, the energy cost per received packet was evaluated. Using SNW-MAC, the sink only wakes up to poll a sensor node to receive a packet, and therefore the energy cost per received packet is constant. However, using a pseudo-asynchronous MAC protocol, such as UPMA-X-MAC or PW-MAC, the sink must periodically wake up to check for incoming packets. If the average rate at which the sink receives packets is denoted by γ, the cost of waking up and receiving a packet by e_RX, and the cost of waking up without receiving any packet by e_W, then, with the sink waking up every T_S, the average power consumption of the sink incurred by packet reception is

P_sink = γ e_RX + (1/T_S − γ) e_W.

The average energy cost per received packet is therefore

E_packet = P_sink / γ = e_RX + (1/T_S − γ) e_W / γ.

Using the microbenchmarks presented previously, the values of e_RX and e_W were measured for UPMA-X-MAC and PW-MAC, and the energy cost of receiving a packet using SNW-MAC was also measured. Figure 11 shows the energy cost per received packet for the three evaluated protocols, for different values of γ and for T_S values of 125 ms and 500 ms. These results were plotted using Equation (28) for values of γ in the range [0, 1/T_S]. It can be seen that if γ is lower than 66 packets per minute, then SNW-MAC enables lower energy consumption on the sink side, despite the higher power needed to transmit WuBs, even when the wake-up interval is set to 500 ms. Moreover, when the wake-up interval of pseudo-asynchronous protocols is decreased, which is required if the packet rate is high (to reduce collision risk) or if low latency is required, the energy cost per received packet significantly increases for UPMA-X-MAC and PW-MAC, as can be seen in Figure 11. In monitoring applications, which are the focus of this work, low packet rates are expected, and in this case Figure 11 shows that SNW-MAC enables lower power consumption on the sink side. Evaluation on a Star Network The proposed EM and the evaluated MAC protocols were implemented on a testbed made of six PowWow nodes, including one sink, in a star topology. The nodes were deployed in a room with no windows and were exclusively powered by indoor fluorescent light providing an illuminance of 500 lx, allowing reproducibility of the experiments. Each node, except the sink, was equipped with a Sanyo AM-5913CAR-SCE solar panel of size 60.1 mm by 55.1 mm. The sink was battery powered. Moreover, the nodes were deployed under different lighting conditions, as shown in Figure 12. Nodes 1, 2 and 5 were located on desks, directly under the ceiling lights, while Node 3 was deployed in a more shadowed area, and Node 4 was located on a bookcase, close to the ceiling, thus receiving less light than the others. Each experiment lasted for 3 h and was performed during daytime, and the PowWow nodes were equipped with a ULP WuRx only when the SNW-MAC protocol was evaluated. Figure 13 shows the obtained results, where Figure 13a presents the throughput, in packets per minute, achieved with the different MAC protocols.
SNW-MAC significantly outperforms the two other protocols, allowing up to twice the throughput of PW-MAC for Node 2, due to the lower energy cost of packet transmissions. The performance of SNW-MAC is confirmed by Figure 13b, which shows that the EUC is much higher for SNW-MAC, revealing a better use of the energy budget. It is not surprising to notice that the results obtained for each node are strongly linked to the average energy budget allocated by the EBC, shown in Figure 13c. As the amount of harvested energy varies between nodes, the average allocated energy budget also differs. Indeed, Nodes 3 and 4 were placed in shadowed areas and therefore received less environmental energy than the others, leading to lower average energy budgets. Node 2 was positioned closer to the light source and therefore harvested more energy. Finally, Figure 13d shows the Packet Delivery Ratio (PDR) achieved by the three protocols. A PDR of 100% is achieved for all nodes that use SNW-MAC, while failed transmissions occur for the two other protocols. These results show that the use of a ULP WuRx enables the design of highly reliable protocols. Evaluation under Variable Light Conditions The benefits in terms of achievable throughput of the proposed approach have been evaluated under variable light conditions. The residual energy of a single sensor node was tracked while it was exposed to fluorescent lighting, typical of an indoor environment, then left without any available environmental energy (the lights were off) for 2 h, and finally exposed to indoor light again. The whole experiment lasted for 5 h, starting with the storage fully charged, and the results are shown in Figure 14. In this evaluation, only PW-MAC was compared to SNW-MAC, as it allows higher throughput than UPMA-X-MAC, as we have previously seen. Figure 14 shows that the EM successfully adapts the throughput of the node to keep it sustainable. In the first part of the experiment, i.e., before the light is turned off, and in the last part, i.e., after 4.3 h, the energy buffer is full and environmental energy is available. In that case, the EM aims at keeping the amount of residual energy in the ENI interval. When the residual energy is in the range [E_ENI^up, E_R^max], the energy budget is increased by an amount ∆e_B in order to avoid energy waste, which corresponds to rules (R7) and (R8) of Figure 2b. When the residual energy is in the ENI interval, the energy budget is either left unchanged or adjusted by ±∆e_B according to the sign of ∆e_R. These cases correspond to the risk-of-saturation and energy-neutral-interval scenarios introduced in Section 3.1. When no environmental energy is available, the energy budget is progressively decreased, following rule R1. One and a half hours after the beginning of the experiment, the minimal energy budget is reached, and the energy budget is no longer decreased, to ensure a minimum quality of service. However, the energy buffer keeps discharging, as no energy is harvested. This corresponds to the discharging-state scenario introduced in Section 3.1. When environmental energy becomes available again, 2.4 h after the beginning of the experiment, the energy buffer starts recharging, and the energy budget is progressively increased until the ENI interval is reached, which corresponds to rule R3 and to the charging-state scenario. Once the ENI interval is reached, the energy budget is further increased to avoid saturation, which corresponds to rules R7 and R8.
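The budget-adaptation behavior walked through above can be summarized in a short sketch. The branch structure mirrors the scenarios of Section 3.1 and the cited rules R1, R3, R7 and R8 of Figure 2b, but the threshold constants and the exact rule set are simplifying assumptions, not the EM's actual implementation.

```python
def update_budget(e_budget, e_res, delta_e_res, cfg):
    """One EM period: adapt the energy budget e_budget from the residual
    energy e_res and its recent variation delta_e_res (simplified sketch)."""
    if e_res >= cfg["E_ENI_UP"]:
        # Risk of saturation (R7/R8): raise the budget to avoid wasting energy.
        e_budget += cfg["DELTA_EB"]
    elif e_res >= cfg["E_ENI_LOW"]:
        # Inside the energy-neutral interval: adjust by +/- delta_e_B
        # according to the sign of the residual-energy variation.
        if delta_e_res > 0:
            e_budget += cfg["DELTA_EB"]
        elif delta_e_res < 0:
            e_budget -= cfg["DELTA_EB"]
    else:
        # Below the ENI interval: charging (R3) raises the budget again,
        # discharging (R1) progressively lowers it.
        e_budget += cfg["DELTA_EB"] if delta_e_res > 0 else -cfg["DELTA_EB"]
    # Never drop below the minimal budget that guarantees minimum QoS.
    return min(max(e_budget, cfg["E_B_MIN"]), cfg["E_B_MAX"])

cfg = {  # hypothetical constants (joules per EM period, T = 120 s)
    "E_ENI_LOW": 8.0, "E_ENI_UP": 9.5,
    "DELTA_EB": 0.05, "E_B_MIN": 0.1, "E_B_MAX": 2.0,
}
budget = update_budget(0.5, e_res=7.2, delta_e_res=-0.1, cfg=cfg)
```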
From this evaluation, we observe that the throughput using SNW-MAC is higher than with PW-MAC under all conditions, showing the better energy efficiency of SNW-MAC. In particular, the throughput of the proposed approach is up to 2.5 times that of PW-MAC in periods where harvested energy is available. These results demonstrate the ability of the EM to achieve energy neutrality with different MAC protocols and the benefits of its combination with the highly efficient SNW-MAC protocol, which exploits a ULP WuRx to enable asynchronous communication. Conclusions In this work, an energy manager combined with an asynchronous MAC protocol has been proposed for energy harvesting wireless sensor networks. The proposed solution leverages two complementary technologies, energy harvesting and ultra-low power wake-up receivers, to increase the energy efficiency of wireless sensor networks and to enable energy neutrality. This new scheme is designed to be implemented on real hardware and therefore only requires measuring the residual energy, incurring a negligible overhead. The proposed approach has been experimentally validated in the context of data-gathering sensor networks with a star topology and compared with two state-of-the-art MAC protocols. Experimental results show that a 2.5× gain in throughput can be achieved by SNW-MAC compared to PW-MAC. The energy efficiency was evaluated using a new metric introduced in this work, the energy utilization coefficient. Moreover, the better scalability of the proposed MAC protocol compared to traditional pseudo-asynchronous MAC protocols has been analytically demonstrated. Funding: This work was supported by "Transient Computing Systems", an SNF project (200021_157048), by the SCOPES SNF project (IZ74Z0_160481) and by the "POMADE" project, funded by CD22 and the Brittany region. Conflicts of Interest: The authors declare no conflict of interest.
Machine learning based decline curve analysis for short-term oil production forecast Traditional decline curve analyses (DCAs), both deterministic and probabilistic, use specific models to fit production data for production forecasting. Various decline curve models have been applied to unconventional wells, including the Arps model, the stretched exponential model, the Duong model, and the combined capacitance-resistance model. However, it is not straightforward to determine which model should be used, as multiple models may fit a dataset equally well but provide different forecasts, and hastily selecting a model for probabilistic DCA can underestimate the uncertainty in a production forecast. Data science, machine learning, and artificial intelligence are revolutionizing the oil and gas industry by utilizing computing power more effectively and efficiently. In this paper, we propose a data-driven approach to performing short-term predictions for unconventional oil production. Two state-of-the-art models were tested: DeepAR and the Prophet time series model, applied to petroleum production data. Compared with the traditional approach using decline curve models, the machine learning approach can be regarded as "model-free" (non-parametric) because the pre-determination of decline curve models is not required. The main goal of this work is to develop and apply neural network and time series techniques to oil well data without requiring substantial knowledge of the extraction process or of the physical relationship between the geological and dynamic parameters. For evaluation and verification purposes, the proposed method is applied to selected wells of the Midland field in the USA. By comparing our results, we can infer that both DeepAR and Prophet analysis are useful for gaining a better understanding of the behavior of oil wells, and can mitigate over/underestimates resulting from using a single decline curve model for forecasting. In addition, the proposed approach performs well in propagating model uncertainty to uncertainty in production forecasting; that is, we end up with a forecast which outperforms the standard DCA methods. Introduction Hydrocarbon production forecasting includes estimation of the ultimate recoveries and the lifetimes of wells, which are material factors for decision-making in the oil and gas industry because they can significantly impact economic evaluation and field development planning. Although mathematically richer forecasting models (e.g., grid-based reservoir simulation models) have been developed over the past decades, decline curve analysis (DCA) is still widely used because of its simplicity: the mathematical formulations of DCA models are simple with only a few parameters, and only production data are required to calibrate the parameters. The Arps model (Arps, 1945) has been used for DCA for more than 60 years and has been proven to perform well for conventional reservoirs. However, because of the complexity of flow behaviors in unconventional reservoirs, where several flow regimes are involved (Adekoya, 2009; Joshi, 2012; Nelson, 2009), the Arps model may not be ideal, and many other models have been proposed, e.g., the stretched exponential decline model (Valkó and Lee, 2010), the Duong model (Duong, 2011) and the combined capacitance-resistance model proposed by Pan (Pan, 2016). The Pan model is subsequently referred to as Pan CRM in this paper. Some researchers (e.g., Gonzalez et al., 2012) have attempted to identify a single "best" model among several DCA models.
However, Hong et al. (2019) have argued that selecting a single "best" model eliminates other potentially good models and exhibits overconfidence (i.e., trusting the single model 100%), which can cause significant over/underestimates. Thus, their proposed approach incorporates multiple models by using Monte Carlo simulation to assess the probability of each model and consequently provides a probabilistic forecast of production. Some limitations of Hong et al.'s approach are: (1) a collection of DCA models still needs to be predefined, and (2) the assessed probability of each model is only a measure of the model's goodness relative to the other models. If, for example, all the candidate models overestimate production, using Hong et al.'s approach will still result in an overestimated forecast. Thus, an approach that does not require the predefinition of DCA models is deemed preferable, i.e., a non-parametric model. Machine learning (ML) is still a relatively new technique in the oil and gas industry. Several researchers have discussed the applications of ML for DCA. For instance, Gupta et al. (2014) used neural networks (NNs), a ML technique, for DCA. They first trained the NNs using historical data to capture the decline in production in shale formations, and the trained model was then used for prediction. Their study also used the autoregressive integrated moving average (ARIMA) model (George et al., 2015), a time series analysis method, to analyze the historical data and identify the trends and relationships of historical and predicted data. Although they applied these two methods to a sample of around 30 wells, they did not quantify uncertainties in the forecasted results. Ma and Liu (2018) predicted oil production using a novel multivariate nonlinear model based on the traditional Arps decline model and a kernel method. Aditya et al. (2017) developed a novel predictive modeling methodology that linked well completion and location features to DCA model parameters. The objective of the methodology was to generate predicted decline curves at potential new well locations. Han et al. (2020) used Random Forest (RF) to develop a predictive model that can be used to predict productivity during the early phase of production (within 6 months). The required datasets were obtained from 150 shale gas wells in the Eagle Ford shale formation. Reservoir properties, well stimulation and completion were considered as key input parameters, while the cumulative gas production over a span of 3 years was identified as the target variable. Although the results of Aditya et al. (2017) and Han et al. (2020) were promising, the applicability of their methodologies depends heavily on the presence of specific geological, well stimulation and completion data; the quality and accuracy of the data have a large influence, and any anomaly in the data consequently makes their results less reliable. In the context of deep learning, Luo et al. (2019) built non-linear models using RF and Deep Neural Network (DNN) algorithms to forecast the cumulative production of oil over a span of 6 months. The whole dataset was obtained from around 3600 wells positioned in the Eagle Ford formation.
Key geological parameters such as structural depth, thickness of the formation, total organic carbon (TOC), number of calcite layers and average thickness of the layers (thickness of the formation divided by the total count of the layers) were identified as the input variables that impacted the productivity of wells in Eagle Ford. In the context of deep recurrent neural networks, Lee et al. (2019) used the Long Short-Term Memory (LSTM) algorithm to develop a model for forecasting future shale-gas production. The past gas production and shut-in periods were taken into account to derive the input features. The training dataset was collected from 300 wells located in the Duvernay formation in Alberta, Canada, and the model was tested on 15 wells from the same field. The trained model demonstrated the ability to predict production rates over a longer period (55 months). They found that the method can also be used to quickly forecast future production rates and to analyze the impact of additional attributes such as the shut-in period. It was highlighted that the approach would provide a more reliable and accurate forecast of shale gas production and that the method can be used in both conventional and unconventional scenarios. In terms of reliability and utilization, further tuning and improvement of the feature selection process would produce a system with improved predictive capabilities. Stimulation parameters, attributes derived from geological knowledge, and refracturing were proposed as possible additional features that have a critical impact on shale gas production and could improve the method. In those circumstances, the high-intensity drilling associated with unconventional hydrocarbon resources and the underperformance of DCA make this technique all the more valuable. Zhan et al. (2019) tested the ability of the LSTM method to predict the production of oil over two years or even further by using very little previous data acquired during the initial production phases. The required dataset was obtained from over 300 wells in unconventional onshore formations. Over the first few production years, it is possible to recover around 70% of the total EUR from shale wells, after which a rapid decline is observed. The steepness of the decline makes it hard to survey the trend, which causes over-estimation. In such a dynamic situation, they highlighted this methodology's value for forecasting production and assessing the reservoir. They found that the average difference between the estimated and observed accumulated production stayed within 0.2%, while the variance did not exceed 5%. Sagheer and Kotb (2019) tested the predictive efficacy of a deep LSTM (DLSTM) network, in which more LSTM layers were stacked to address shallow-structure limitations when operating with data from long-interval time series. They found that the proposed approach performed much better than other models used in the analysis, such as those based on ARIMA, the deep gated recurrent unit (GRU), and deep RNNs. Based on their applicability, the models were tested and validated in two real-field case studies, India's Cambay Basin oil field and China's Huabei oil field. An ensemble empirical mode decomposition (EEMD) based LSTM was suggested by Liu et al. (2020) to increase oil production forecasting speed and accuracy.
Two real-field cases, the JD and SJ oilfields in China, were used to test and verify the efficacy of the model. The EEMD-LSTM model was contrasted with EEMD-Support Vector Machine (SVM) and EEMD-neural network models. The EEMD-LSTM model was found to work much better than the other models, producing forecasts of substantially higher quality. Although several machine learning and deep learning models have been proposed to better handle multiple seasonal patterns in oil production data, to the best of our knowledge no studies have yet applied such probabilistic models to production forecasting. The novelty of this work is to improve upon the existing techniques used by petroleum engineers to analyze and appraise oil wells. Evaluating oil well potential is a lengthy investigation process. This is because the production profiles can be complex, as they are driven by reservoir physics and made even more challenging by a variety of operational events. Petroleum engineers analyze and evaluate the production profiles of oil wells, seek to understand their underlying behavior, forecast their expected production, and identify opportunities for performance improvements. The investigation process is, nevertheless, time-consuming. This introduces opportunities to optimize these processes. Thus, two state-of-the-art probabilistic machine learning methods are considered, DeepAR (David et al., 2019) and Prophet time series analysis (Taylor and Letham, 2007), which are known to be effective in pattern recognition and to outperform state-of-the-art forecasting methods on several problems. These two algorithms can be used to understand and predict the behavior of oil wells. Our objective is to determine the viability of these algorithms in predicting the distribution of future outcomes, specifically with time series data representing petroleum production, without requiring substantial knowledge of the extraction process or of the physical relationship between the geological and dynamic parameters. In the remainder of this paper, we first review the deep learning and time series analysis models that will be used to accomplish the task. Thereafter, we explain the evaluation metrics used to assess the quality of the forecast; finally, we present the experimental results and a discussion of our work. Time series analysis A time series is a sequence of data obtained at many regular or irregular time intervals and stored in successive time order; for example, a sequence of measured oil production rates over time. The objective of time series analysis is to extract useful statistical characteristics (e.g., trend, pattern, and variability) from a time series, to determine a model that describes these characteristics, to use the model for forecasting, and ultimately to leverage insights gained from the analysis for decision support and decision making. Traditionally, time series models can be classified into generative and discriminative models, depending on how the target outcomes are modeled (Ng and Jordan, 2002). The main difference between the two classes is that generative models predict the conditional distribution of the future values of the time series given relevant covariates, while discriminative models use the past values. In this study, we will use discriminative models, as they are more flexible and require fewer parameters and structural assumptions than generative models.
For more details about generative and discriminative models, see (David et al., 2019; Gasthaus et al., 2019; Ng and Jordan, 2002; Ruofeng et al., 2018). A critical aspect of discriminative models is the process of reconstructing a single sequence of data points to yield multiple response observations. To solve this, sequence-to-sequence (seq2seq) (Cho et al., 2014) and autoregressive recurrent network (David et al., 2019) approaches were used to feed and generate output from the time series prediction models. In seq2seq, the model is fed a sequence of time series values as input and produces a time series sequence as output, unlike the autoregressive model, which reduces the sequence prediction to a one-step-ahead problem. DCA DCA is a type of time series analysis applied to oil production data. DCA aims to predict the future production of a well or a field based on historical data. The prediction is useful for evaluating the economics of the future production and supporting decisions such as whether a well or a field should be abandoned. Pan's Combined Capacitance-Resistance Model (Pan CRM) is a DCA method. It is designed to capture the major flow regimes, the transient and semi-steady state flow regimes, relevant for an unconventional well. Pan (2016) proposed a model to capture the productivity index behavior over both linear transient and boundary-dominated flow. Its formula is given as

J(t) = J_1 + b/√t,

where J is the productivity index, J_1 is the constant productivity index that a well will eventually reach at boundary-dominated flow, and b is the parameter of the linear transient flow; b is related to the permeability in the analytical solution of linear flow into fractured wells presented by Wattenbarger et al. (1998). Pan obtained the empirical solution of rate over time by combining the previous equation and a tank material balance equation. The standard form is given as

q(t) = (J_1 + b/√t) ΔP exp(−(J_1 t + 2b√t)/(c_t V_p)),

where c_t is the total compressibility, V_p the drainage pore volume, and ΔP the difference between the initial reservoir pressure and the assumed constant flowing bottom-hole pressure. For small t, the Pan CRM may give an unrealistically high rate, as q(t) approaches infinity when t approaches 0. The Pan CRM is analytically derived, and all its parameters are associated with physical quantities of a reservoir system. In this study, for each single well, c_t, V_p, ΔP, b and J_1 are determined through history matching, with the goal of minimizing a predefined loss (or objective) function by adjusting the model parameters. Prophet forecasting model Prophet is a Bayesian nonlinear univariate generative model for time series forecasting, which was developed by the Facebook Research team (Taylor and Letham, 2007) for the purpose of creating high-quality multistep-ahead forecasts. This model tries to address the following difficulties common to many types of time series forecasting and modeling: • Seasonal effects caused by human behavior: weekly, monthly, and yearly cycles; dips and peaks on public holidays; • Changes in trends due to new products and market events; • Outliers.
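Before detailing the model components, it may help to see how little code a Prophet forecast requires in practice. The sketch below uses the open-source prophet package's documented interface (a dataframe with ds/y columns, make_future_dataframe, predict); the file name, the zero-rate filtering and the 24-month horizon are assumptions chosen to match the blind-test setup described later, not code from the paper.

```python
import pandas as pd
from prophet import Prophet  # pip install prophet

# Monthly oil rates; Prophet expects columns 'ds' (date) and 'y' (value).
df = pd.read_csv("well_id3_monthly.csv")  # hypothetical file name
df = df[df["y"] > 0]  # drop shut-in (zero) months, as in our preprocessing

m = Prophet(growth="linear", changepoint_prior_scale=0.05)
m.fit(df)

future = m.make_future_dataframe(periods=24, freq="MS")  # 24-month horizon
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

The changepoint_prior_scale value shown is the library default, which is revisited in the hyperparameter discussion below.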
The Prophet forecasting model utilizes an additive regression model, which comprises the following components:

y(t) = g(t) + s(t) + h(t) + ε_t,

where y(t) is the variable of interest, g(t) is the piecewise-linear or logistic growth curve modeling non-periodic changes in the time series, the seasonality s(t) represents periodic changes (e.g., weekly or yearly seasonality), h(t) reflects the effects of irregular holidays, and ε_t is the error term that accounts for any uncertain changes not accommodated by the model (usually, ε_t is modeled as normally distributed noise). We invoke the growth trend g(t) as a core component of the entire Prophet model. The trend illustrates how the entire time series evolves and how it is projected to evolve in the future. Prophet offers analysts two trend models: a piecewise-linear model and a saturating-growth model. Nonlinear, saturating growth is modeled using the logistic growth model, which in its most basic form is

g(t) = C / (1 + exp(−k(t − m))),

where m is an offset parameter, k is the growth rate, and C is the carrying capacity. However, the value of C is not necessarily a constant and usually varies over time; it is therefore replaced by a time-varying capacity C(t). Moreover, the growth rate k is not constant. Therefore, changepoints where the growth rate changes are incorporated, and the growth rate between two changepoints is assumed constant. The piecewise logistic-growth model is formulated as

g(t) = C(t) / (1 + exp(−(k + a(t)^T δ)(t − (m + a(t)^T γ)))),

where δ is the vector of rate adjustments, γ is the vector of offset adjustments at the changepoints, and k + a(t)^T δ is the growth rate at time t. The indicator vector a(t) is defined by

a_j(t) = 1 if t ≥ s_j, and 0 otherwise,

where s_j is the time point of the j-th change in the growth rate. Linear growth is modeled using a piecewise-constant growth rate, and its formula is given as

g(t) = (k + a(t)^T δ) t + (m + a(t)^T γ),

where a(t), k, δ, and γ are the same as in the nonlinear trend model. In the time series, seasonality reflects periodic changes: daily, weekly, monthly and yearly seasonality. To provide a versatile model of periodic effects, the Prophet forecasting model relies on a Fourier series. Its smooth fitting formula is given as

s(t) = Σ_{n=1}^{N} ( a_n cos(2πnt/P) + b_n sin(2πnt/P) ),

where P is a regular period that the time series may have (for example, P = 7 for weekly data or P = 365 for annual data) and N is the number of such cycles that we want to use in the model. When all seasonal time series components in s(t) are combined into a vector X(t), the final seasonal model appears as s(t) = X(t)β, where the prior β ~ Normal(0, σ²) is imposed on the seasonality to enforce smoothing. Holidays and events: To fully capture the effect of holidays or of other major events, such as a production shutdown for operations (for example, a workover), on a business time series, these constraints are explicitly set in the Prophet forecasting model. Recurrent neural network (RNN) Compared with the traditional artificial neural network (ANN), the structure of the RNN neuron differs from that of the ANN by the addition of a cyclic connection, which forms feedback loops in the hidden layers, so that information from the previous item can be transmitted to the current item in the RNN. The structure of the RNN neuron is shown in Figure 1. When the time series X = (x_1, x_2, x_3, ..., x_n) is input, the sequence of the hidden layer is H = (h_1, h_2, h_3, ..., h_n) and the sequence of the output layer is Y = (y_1, y_2, y_3, ..., y_n).
The relationships between X, H and Y are given by the following equations:

h_t = σ(W_xh x_t + W_hh h_{t−1} + b_h),
y_t = W_hy h_t + b_y,

where σ is the non-linear activation function, W_xh, W_hh and W_hy are the weight matrices from input to hidden layer, hidden layer to hidden layer, and hidden layer to output, respectively, and b_h and b_y are bias terms. Long short-term memory neural network (LSTM) The LSTM neural network model (Greff et al., 2017; Hochreiter and Schmidhuber, 1997) is a type of RNN structure which is widely used to solve sequence problems. An LSTM is able to learn long-term dependencies and mitigates the vanishing gradient problem (Grosse, 2017), an issue observed when training ANNs with gradient-based learning techniques and backpropagation algorithms. An LSTM allows the storage of information extracted from data over an extended time period, and shares the same parameters (i.e., network weights) across all timesteps. The structure of the LSTM, shown in Figure 2, consists of the long-term state (c_t) and three multiplicative gates, the input gate (i_t), the output gate (o_t), and the forget gate (f_t), which respectively write, read, and reset information within the model's cells. These three multiplicative gates enable the LSTM memory cells to store and access information over long time periods. The gates control the amount of information fed into the memory cell at a given timestep. Unlike traditional RNN methods that overwrite their content at each timestep, the LSTM state vector and weights are modified at each timestep to take into account any evolution of the input-output relation occurring over time and to carry that information over a long distance. The LSTM functions are listed as follows:

i_t = σ(W_xi x_t + W_hi h_{t−1} + W_ci c_{t−1} + b_i),
f_t = σ(W_xf x_t + W_hf h_{t−1} + W_cf c_{t−1} + b_f),
c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_xc x_t + W_hc h_{t−1} + b_c),
o_t = σ(W_xo x_t + W_ho h_{t−1} + b_o),
h_t = o_t ⊙ tanh(c_t),

where the input gate (i_t), the forget gate (f_t) and the previous cell state (c_{t−1}) control the current cell state (c_t), and the output gate (o_t) and current cell state (c_t) are used to control the hidden state (h_t) at time t. σ is the element-wise sigmoid function and ⊙ denotes the element-wise product operator; x_t is the input vector at time t, and h_{t−1} is the hidden state vector that stores all the useful information prior to time t. W_xi, W_xf, W_xc, and W_xo denote the weight matrices of the different gates for the input x_t; W_hi, W_hf, W_hc, and W_ho are the weight matrices for the hidden state; W_ci and W_cf denote the weight matrices of the cell state c_{t−1}; and b_i, b_f, b_c, and b_o denote the bias vectors. Gated recurrent unit (GRU) The GRU is similar to the LSTM, but with a simplified structure and fewer parameters. It was first introduced by (Cho et al., 2014). GRUs have been used in a variety of tasks that require capturing long-term dependencies (Junyoung et al., 2014). Similar to the LSTM, the GRU contains gating units that modulate the flow of information inside the unit. However, unlike the LSTM, the GRU does not include separate memory cells and contains only two gates, the update gate and the reset gate, as displayed in Figure 3. The update gate z_t decides how much the unit updates its activation; this takes a linear sum between the existing state and a newly computed state. The second gate within the GRU, the reset gate r_t, acts to forget the previously computed state. The update functions are listed as follows:

z_t = σ(W_z x_t + U_z h_{t−1}),
r_t = σ(W_r x_t + U_r h_{t−1}),
h̃_t = tanh(W x_t + U(r_t ⊙ h_{t−1})),
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t,

where the activation h_t of the GRU at time t is a linear interpolation between the previous activation h_{t−1} and the candidate activation h̃_t, W denotes the input weight matrices, x_t is the input vector at time t, and U denotes the weight matrices of the hidden state. DeepAR DeepAR is a generative, auto-regressive model.
It consists of a recurrent neural network (RNN) using Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) cells that takes the previous time points and covariates as input. In this study, we use the forecasting model from Salinas et al. (David et al., 2019). Unlike other forecasting methods, DeepAR learns jointly from all time series. In the publication of (David et al., 2019), DeepAR outperformed state-of-the-art forecasting methods on many problems. Let z_{i,t} be the value of time series i at time t. The objective is to model the conditional distribution P(z_{i,t_0:T} | z_{i,1:t_0−1}, x_{i,1:T}) of the future of each time series, [z_{i,t_0}, z_{i,t_0+1}, ..., z_{i,T}] := z_{i,t_0:T}, given its past, [z_{i,1}, ..., z_{i,t_0−2}, z_{i,t_0−1}] := z_{i,1:t_0−1}, where t_0 represents the time point from which z_{i,t} is assumed to be unknown at prediction time, and x_{i,1:T} are covariates that are presumed to be known for all time points. The time ranges [1 : t_0 − 1] and [t_0 : T] are the context range and the prediction range, respectively. The model is based on an autoregressive recurrent network, summarized in Figure 4. The model distribution Q_Θ(z_{i,t_0:T} | z_{i,1:t_0−1}, x_{i,1:T}) is assumed to be a product of likelihood factors:

Q_Θ(z_{i,t_0:T} | z_{i,1:t_0−1}, x_{i,1:T}) = ∏_{t=t_0}^{T} ℓ(z_{i,t} | θ(h_{i,t}, Θ)).

The network inputs at each step t are the covariates x_{i,t}, the target value at the previous step z_{i,t−1}, and the previous network output h_{i,t−1}. The network output h_{i,t} = h(h_{i,t−1}, z_{i,t−1}, x_{i,t}, Θ) is then used to compute the parameters θ_{i,t} = θ(h_{i,t}, Θ) of the likelihood ℓ(z|θ) that is used to train the parameters of the model. When z_{i,t} is unknown, a sample ẑ_{i,t} ~ ℓ(·|θ_{i,t}) is fed back to the next step instead of the true value (David et al., 2019). Here h_{i,t} is the autoregressive recurrent network output, which is fed as input to the next timestep to produce h_{i,t+1}; h(·) is a function implemented by a multilayer recurrent neural network with LSTM or GRU cells parametrized by Θ, and the likelihood ℓ(z_{i,t} | θ(h_{i,t}, Θ)) is a fixed distribution parametrized by the function θ(h_{i,t}, Θ). The initial state h_{i,t_0−1} contains the context-range information required to predict values in the prediction range. Given the model parameters, samples ẑ_{i,t_0:T} ~ Q_Θ(z_{i,t_0:T} | z_{i,1:t_0−1}, x_{i,1:T}) can be obtained directly by ancestral sampling: first, h_{i,t} is computed from the recurrent network for t = 1, ..., t_0 − 1; then we sample ẑ_{i,t} ~ ℓ(·|θ(ĥ_{i,t}, Θ)) for t = t_0, ..., T, where ĥ_{i,t} = h(ĥ_{i,t−1}, ẑ_{i,t−1}, x_{i,t}, Θ) is initialized with ĥ_{i,t_0−1} = h_{i,t_0−1} and ẑ_{i,t_0−1} = z_{i,t_0−1}. The use of these samples makes it possible to calculate quantities of interest, such as quantiles of the value distribution, at a particular time in the prediction range. Likelihood model. The likelihood ℓ(z|θ) should reflect the statistical properties of the data as well as possible. It can be chosen among many possibilities, for example Bernoulli, Gaussian, negative binomial, etc. For instance, in the Gaussian case the parameters are the mean and the standard deviation, θ = (μ, σ); μ is given by an affine function of the network output, while σ is obtained by applying a softplus activation to ensure σ > 0. Loss function. The model parameters Θ, which consist of the parameters of the RNN h(·) and of θ(·), can be learned by maximizing the log-likelihood

L = Σ_{i=1}^{N} Σ_{t=t_0}^{T} log ℓ(z_{i,t} | θ(h_{i,t})),

given the time series dataset {z_{i,1:T}}, i = 1, ..., N, and the associated known covariates x_{i,1:T}.
No inference is needed to compute this quantity, in contrast to state-space models with latent variables, as h_{i,t} is a deterministic function of the inputs. The loss can therefore be optimized directly with respect to Θ using stochastic gradient descent. Measures for evaluating forecast As previously mentioned, the purpose of this task is to predict several future timesteps of the target time series. Confidence intervals are provided in addition to predictions of the exact values (i.e., point forecasts). These are based on percentiles calculated from a probability distribution built from a fixed number of samples (e.g., for the DeepAR model). To evaluate the forecast accuracy, we use the mean continuous ranked probability score (mean CRPS), which quantifies both the accuracy and the precision of a probabilistic forecast (Hersbach, 2000). A higher value of mean CRPS indicates less accurate results. The CRPS can be defined as

CRPS = ∫ [P(x) − H(x − x_obs)]² dx,

where P(x) = ∫_{−∞}^{x} p(y) dy is the cumulative distribution of the quantity of interest, and H(x − x_obs) is the step function, i.e., H equals 0 for negative arguments and 1 otherwise. For N samples, the CRPS can be evaluated by replacing P(x) with the empirical cumulative distribution of the samples. Data collection and preparation procedure In this work, we use oil production data from wells in the Midland field. We selected 22 Midland wells with relatively smooth data, which indicates few significant operational changes. The selected Midland wells have been completed in a naturally fractured reservoir and measured monthly. However, there are some missing measurements (i.e., no recorded values) for a few months for each selected well. We simply ignore these missing values. Some measurements have recorded zero values, and we suspect they indicate temporary shutdowns for operations (e.g., a workover). The zero values may interfere with the training process, so we remove them from the data; the datasets are then rescaled with a standardization. The standardization is included in deep learning to improve the convergence of the neural networks. Table 1 lists the lengths of the production histories of the selected wells. The lengths range from 105 to 362 months. No matter how long a well's production history is, we use the data of the last 24 months (regarded as the short term) for the blind test. Taking Well-ID3 as an example: as shown in Figure 5, the data cover 108 months. The data from Month 1 to 84 are used for building, training and forecasting with the DeepAR and Prophet models, and the data from Month 85 to 108 are used for blind testing to assess the performance of the prediction results. The same procedure is applied to each of the 22 selected wells individually. Models implementation The two models considered, DeepAR and the Prophet time series model, are evaluated on the Midland datasets. The experimental setup, which is shared across the dataset evaluations, is first described. The shared training configuration is: 100 epochs, a batch size of 32, and 100 batches per epoch (see Tables 2 and 3). If necessary, the final models are trained with the best parameters, without early stopping or validation, on a new training set. For each selected well, the optimization is performed on the following parameters: • The context length and the learning rate: the context length is the number of prior timesteps used to make the most precise forecasts; the tested context lengths depend on the data provided. • Stacking layer number: the number of layers in the recurrent neural network. • The cell type: the cell type in the recurrent neural network (GRU or LSTM).
• The number of Gaussians: the number of Gaussians considered in the mixture that models the probability distribution of each timestep. • The dropout rate: the output of each LSTM cell is fed to a Zoneout cell which uses this dropout rate (David et al., 2017). Prophet time series implementation. An open-source implementation of the Prophet time series model that was published with the paper (Taylor and Letham, 2007) can be found in the online documentation. For each well, the main hyperparameters which can be tuned are: • Changepoint prior scale: this is likely the most impactful parameter. It determines the flexibility of the trend, in particular at the trend changepoints. If it is too large, the trend will overfit; if it is too small, the trend will underfit, and variation that should have been modeled with trend changes will be handled by the noise term instead. The default value of 0.05 works for many time series, but it can be tuned; the range is [0.001, 0.5]. • Seasonality prior scale: this parameter regulates the flexibility of the seasonality. Similarly, a large value allows the seasonality to follow large variations, while a small value shrinks the magnitude of the seasonality. The default value is 10, which applies practically no regularization. This is because overfitting occurs here very rarely (there is inherent regularization because the seasonality is modeled with a truncated Fourier series, so it is effectively low-pass filtered). [0.01, 10] would likely be a good tuning range. • Growth: the options are 'linear' and 'logistic'. Figure 6 demonstrates the forecast results for some selected wells using DeepAR, with the means of the forecasts (dashed steel blue curve) compared to the blind-test data (dashed red curve) and the Pan CRM model (black line curve). In general, the production forecasts seem reasonable; the DeepAR model can forecast both the upward and downward trends generally well and outperforms the Pan CRM model. It is observed that the prediction intervals mostly contain the correct values, except for Well-ID11; this could be explained by the model's inability to predict when changes in production are going to happen. We quantify the accuracy of the probabilistic forecast using the mean CRPS score, as listed in Table 4. The prediction accuracy results are quite satisfactory. In most cases, the mean CRPS decreases as the length of the production history increases. This indicates that a longer production history (i.e., more data) improves the DeepAR model forecast. A major drawback of DeepAR is that it has very little to no interpretability: we cannot derive any physical meaning from the trained DeepAR model parameters. Figure 7 shows forecasts from the trained Prophet models. The means of the forecasts (dashed steel blue curve) follow the blind-test data (dashed red curve in Figure 7) generally well. The P5-P95 prediction intervals (grey band in Figure 7) cover most of the blind-test data. However, for Well-ID8 the forecast significantly deviates from the blind-test data and fails to capture both the trends and the peaks and troughs reasonably; more specifically, the forecast underestimates the oil production rates. Results Compared to Prophet, the DeepAR models show distinct trends in the mean CRPS score, as listed in Table 5. This is possibly due to the DeepAR layers' capacity to "memorize" long-term patterns. In contrast, Prophet's predictions rely mostly on the pattern of the most recent historical data.
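To make the CRPS comparisons above concrete, the score can be estimated directly from the forecast sample paths that DeepAR (or Prophet, via simulation) produces. The sketch below uses the standard sample-based (energy-score) estimator CRPS ≈ E|X − x_obs| − ½ E|X − X′|; the sample arrays are synthetic placeholders, not the actual well forecasts.

```python
import numpy as np

def crps_from_samples(samples, obs):
    """Sample-based CRPS estimator: E|X - obs| - 0.5 * E|X - X'|.
    Lower values indicate a more accurate probabilistic forecast."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - obs))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

def mean_crps(sample_paths, observations):
    """Average CRPS over a forecast horizon: sample_paths has shape
    (n_samples, horizon), observations has shape (horizon,)."""
    return np.mean([crps_from_samples(sample_paths[:, t], y)
                    for t, y in enumerate(observations)])

# Placeholder example: 200 sampled 24-month paths vs. blind-test data.
rng = np.random.default_rng(0)
paths = rng.lognormal(mean=4.0, sigma=0.3, size=(200, 24))
obs = rng.lognormal(mean=4.0, sigma=0.3, size=24)
print(f"mean CRPS: {mean_crps(paths, obs):.2f}")
```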
In addition, DeepAR yields the lowest CRPS errors, although the differences in values are small and this statement is only valid for the 5th and 95th percentiles. This is explained by the better coverage achieved over the longer periods, which compensates for the lower accuracy of the 50th percentile. Limitation: in the previous sections, we presented DeepAR and Prophet results for a forecast horizon of 24 months (2 years). We also evaluate the performance of the two methods for a forecast horizon of 48 months, as displayed in Figures 8 and 9. It can be clearly seen that the two methods perform almost equally well when the production history of a well exceeds 300 months; for the most part, they capture the trends of the oil production rate in the blind tests well, and the predictions yielded by each of the models appear quite similar. The models were good at predicting trends and flat lines, but sometimes undershot/overshot the peaks and troughs, e.g., for Well-ID8. However, with a small historical data length, neither Prophet nor DeepAR matched the production data, including when quantifying uncertainty. Based on the previous results, we can highlight that the two methods enrich the family of time series analysis models by extracting the weighted differencing/trend features, contribute to better performance in short-term oil production forecasts, and can be an alternative way of forecasting oil production in practical applications. Discussion and conclusion The purpose of this work is to demonstrate a machine learning method that could replace or accelerate manual DCA for short-term oil and gas well forecasting. Probabilistic Prophet time series analysis and a more accurate deep learning model, DeepAR, were considered to solve this problem. These two were selected as they outperform state-of-the-art forecasting methods on many problems. Prophet is a Bayesian non-linear univariate generative model for time series forecasting proposed by Facebook. Prophet is also a structural time series analysis method that explicitly models the impact of patterns, seasonality, and events. For Prophet, the cyclical duration and event date parameters are set as in our model. In contrast, DeepAR is an auto-regressive model based on recurrent neural networks with GRU or LSTM cells. It learns the parameters of a given probability distribution for each forecast horizon. Then, by sampling several times from these probability distributions, one can forecast each horizon or compute confidence intervals. The model validation was carried out on 22 separate Midland reservoir field oil production datasets. Each had its outliers removed and missing data handled; the data were also standardized as a pre- and post-processing step to increase the models' accuracy. The performances were evaluated based on the mean CRPS metric. The prediction length was initially fixed to 24 months and then increased to 48 months. The models first went through a hyperparameter optimization to select the optimal parameters of each method for each well. The results showed that the deep learning approach and Prophet analysis yield satisfactory results in short-term forecasts, but they may fail to identify long-term trends in predictions unless the predictions are constantly adjusted. Both approaches rely on the volume and granularity of the data to develop the capability of predicting production over a long time horizon.
This approach can be regarded as "model-free" because, unlike traditional DCA, the selection of a specific decline curve model is not required. However, it is important to highlight some potential drawbacks of applying deep learning to time series for oil production prediction. Deep learning models may suffer significant errors when used for long-term forecasts, in addition to their limited interpretability. This is because the predictions are computed sequentially and depend on past predictions that have been appended to the data; thus, there is a gradual accumulation of error over time. Deep learning models have to be retrained periodically as more data are collected; otherwise, their predictions become highly inaccurate after a long period. Furthermore, another difficulty that may arise when applying deep learning is that an intermediate-to-expert level of knowledge may be required during model creation and training, as opposed to other out-of-the-box machine learning methods that can be trained easily by adjusting their hyperparameters. Therefore, general NNs may require some adjustments to their cell architecture. In conclusion, the prediction and learning performance presented in this paper suggests that both Prophet and DeepAR are suitable for the petroleum industry's non-linear short-term forecasting problems. Several steps could be taken to further improve forecasting performance over long time horizons, such as the application to spatiotemporal tasks or the use of a sequence-to-sequence encoder-decoder, where contextual data (static and dynamic) would be integrated into the model architecture. Additionally, physics constraints could be integrated during the training of a deep neural network; an advantage of such an approach is that physics can be introduced into ML methods, which could replace or speed up manual DCA to perform long-term forecasts of oil and gas wells.
Genome-wide inference of the Camponotus floridanus protein-protein interaction network using homologous mapping and interacting domain profile pairs Apart from some model organisms, the interactome of most organisms is largely unidentified. High-throughput experimental techniques to determine protein-protein interactions (PPIs) are resource intensive and highly susceptible to noise. Computational methods of PPI determination can accelerate biological discovery by identifying the most promising interacting pairs of proteins and by assessing the reliability of identified PPIs. Here we present a first in-depth study describing a global view of the ant Camponotus floridanus interactome. Although several ant genomes have been sequenced in the last eight years, studies exploring and investigating PPIs in ants are lacking. Our study attempts to fill this gap, and the presented interactome will also serve as a template for determining PPIs in other ants in the future. Our C. floridanus interactome covers 51,866 non-redundant PPIs among 6,274 proteins, including 20,544 interactions supported by domain-domain interactions (DDIs), 13,640 interactions supported by DDIs and subcellular localization, and 10,834 high confidence interactions mediated by 3,289 proteins. These interactions involve and cover 30.6% of the entire C. floridanus proteome. Results and Discussion Generating the interactome of the ant C. floridanus. PPIs are typically mediated by interactions between domains that are often evolutionarily conserved across species 15 and form stable interactions 16,17 . PPI (protein-protein interaction) maps from experiments on D. melanogaster were collected and augmented by PPI data from the DIP database (Database of Interacting Proteins). This provided a basis for interaction predictions based on interologs for C. floridanus: proteins conserved with respect to Drosophila should also be conserved in their interactions 6,18 (see Materials and methods for details). Optimally, several methods are combined for such predictions 19 (Fig. 1). We combined the orthology prediction methods InParanoid 20 and OrthoMCL 21 . This yielded a first estimate of the C. floridanus interactome with 6274 nodes and 51866 edges 22 . However, the preliminary ant PPI network could contain several false positive interactions acquired from the interologs of the template data, as shown previously in other similar studies 5,10,23,24 , including transfer to curated databases 25 . To reduce false predictions, we counter-checked all our data using domain-domain interactions (DDIs). DDIs are often used as an approach independent of sequence homology-based methods to predict protein-protein interaction networks and thus strongly reduce the number of false positives 7,26,27 . Generally, some PPIs are achieved via interactions between short motifs and are often transient 28 . On the other hand, conserved interactions are mediated by conserved interaction domains across species 6 . Moreover, many signals and processes in the cell rely on conserved interacting protein domains 16,29 . Of the 51866 interolog-based interactions, 20544 ant protein-protein interactions were also associated with DDI pairs, yielding a curated C. floridanus interactome with 4589 nodes and 20544 edges.
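The two pruning steps just described, interolog transfer followed by the DDI check, can be sketched as plain set operations. The toy inputs below (a template PPI set, an ortholog map, per-protein domain annotations and known DDI pairs) are hypothetical stand-ins for the Drosophila/DIP interactions, the InParanoid/OrthoMCL ortholog calls and the domain-interaction resources actually used.

```python
def transfer_interologs(template_ppis, orthologs):
    """Map template (e.g., Drosophila) PPIs onto ant proteins:
    an edge is transferred when both partners have an ortholog."""
    ant_ppis = set()
    for a, b in template_ppis:
        for ant_a in orthologs.get(a, ()):
            for ant_b in orthologs.get(b, ()):
                ant_ppis.add(tuple(sorted((ant_a, ant_b))))
    return ant_ppis

def filter_by_ddi(ant_ppis, domains, ddi_pairs):
    """Keep only interactions supported by at least one known
    domain-domain interaction between the two proteins."""
    kept = set()
    for a, b in ant_ppis:
        if any(tuple(sorted((da, db))) in ddi_pairs
               for da in domains.get(a, ())
               for db in domains.get(b, ())):
            kept.add((a, b))
    return kept

# Hypothetical toy inputs.
template = {("dmel1", "dmel2")}
orth = {"dmel1": ["Cflo_A"], "dmel2": ["Cflo_B"]}
doms = {"Cflo_A": ["PF00001"], "Cflo_B": ["PF00002"]}
ddis = {("PF00001", "PF00002")}
print(filter_by_ddi(transfer_interologs(template, orth), doms, ddis))
```

A subsequent localization filter would drop any surviving pair whose proteins share no predicted compartment, mirroring the next curation step described below.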
For the final curation of the interactome, we used the subcellular localization of ant proteins: interacting proteins have to share the same subcellular localization (summarized in Table 1), and predicted interactions between proteins not in the same location were removed. This led to a consolidated ant interactome consisting of 3914 nodes and 13640 edges. The highest proportion of interactions was identified in the cytoplasm, followed by the nucleus and the plasma membrane, respectively. A closer inspection of the interactions that were enriched across subcellular compartments (such as Golgi apparatus-cytoplasm) showed that in numerous cases at least one of the interacting proteins was alternatively localized to a compartment other than its major site of localization, and thus the interacting proteins did indeed share a common compartment. For instance, in 482 interaction pairs (Table 1) at least one protein showed both Golgi apparatus localization and cytoplasmic localization. It should be noted that these interaction partners are multiply localized proteins and may also appear in other cellular compartments. This is not an uncommon situation, as > 50% of the proteins of our final interactome network annotated with predicted subcellular localization information are, in fact, localized to two or more compartments. As a final step of network reduction, isoforms of proteins are shown as a single node. These steps of successive filtering ultimately reduce the complexity of the network and increase the confidence of the C. floridanus interactome. Figure 1 summarizes the C. floridanus protein-protein interaction databases, our workflow, the pruning steps and the resulting ant network. It consists of 3289 nodes and 10834 edges (more details in 22 ). The complete four networks are provided in Datasheets 1-4 in the Supplementary Material. We also identified several novel interactions predicted to be present in C. floridanus. For instance, an interaction was observed between S-phase kinase-associated protein 1 (SkpA, Cflo_N_g10272) and the immune receptor peptidoglycan-recognition protein LC (PGRP-LC, Cflo_N_g10272). As an important component of the ubiquitin-proteasome pathway, SkpA is involved in Immune Deficiency (IMD) pathway regulation in D. melanogaster 30 . Since PGRP-LC is also a regulator of the ant IMD pathway 3 , the interaction we identified suggests that SkpA can modulate the IMD pathway through its interaction with PGRP-LC. Not only the interaction between protein complex components such as laminin subunit beta-1 (Cflo_N_g14102) and laminin subunit gamma-1 (Cflo_N_g9869), but also the interaction between Cflo_N_g14102 and the C-type lectin precursor (Cflo_N_g765) was resolved (see Datasheet 3 in the Supplementary Material for all the interactions). To further supplement the proposed ant interactome, we performed a topology-based scoring of the network. The method CAPPIC 31 uses the intrinsic modularity of a PPI network to assess the confidence of individual interactions. 88.5% of the total interactions are of high confidence (Fig. 2), while 9.65% were assigned medium confidence and 1.8% low confidence. We applied the Mann-Whitney test to compare the average confidence scores of all four PPI networks and observed a significant increase of the confidence score across the first three steps, from the preliminary network through DDI-mediated filtering and localization-based filtering (Supplementary Fig. 1). The mean confidence score of the final interactome, after the isoform merging, did not change much.
This is because the merging in this last step also eliminated some high confidence PPIs mediated by isoforms. Nevertheless, the comparison of the proportions of high-confidence PPIs in the preliminary interactome and the final ant interactome indicates that the latter has a significantly increased number of high confidence interactions (78% in the preliminary network versus 89% in the final one; Fisher's exact test p-value < 2.2e-16). Note that the applied filtering steps also eliminated most of the low confidence PPIs (see the low confidence zone in Supplementary Fig. 1). To further confirm this elimination by the successive filtering steps, we compared the proportions of low confidence PPIs in all four interactomes in a pairwise way with Fisher's exact test and found a significant decrease in the number of low confidence PPIs between the preliminary, DDI-filtered and localization-filtered interactomes (4.5% in the preliminary, 2.2% in the DDI-filtered and 1.6% in the localization-filtered interactome, with a maximum p-value < 3.4e-05). These analyses clearly demonstrate the improvement of network quality after the filtering steps. Network analysis of the C. floridanus interactome and accuracy assessment. The resulting PPI network summarizes the whole interactome and reveals central connecting nodes. The final high confidence ant interactome showed a clustering coefficient of 0.094, a mean shortest path length of 4.359, a network diameter of 14 and an average degree of 6.970. As a typical biological network [32][33][34] it shows small-world connectivity and scale-free topology. We further tested whether the proposed interactome aligns with the properties of a real biological network. To assess this, we derived three independent datasets and compared their topological properties with those of the proposed network. The average z-statistic values (Datasheet 5 in the Supplementary Material) clearly indicate comparatively little deviation of the ant interactome from the Barabási-Albert scale-free model (z-statistic = 23.06, −5.28) in terms of clustering coefficient and mean shortest path. However, the differences were large when comparing with the Watts-Strogatz small-world graph model (z-statistic = −30.95, −58.49) and the Erdős-Rényi flat-random network model (z-statistic = 171.03, −52.72). Scale-free networks have often been observed in biological systems such as PPI and gene regulatory networks 35 ; therefore, the bias towards such a network is an indicator of the quality of the reconstructed ant interactome. As another factor, the degree distribution of the ant interactome was much closer to the Watts-Strogatz model (z-statistic = 0.49), although the difference from the Barabási-Albert model was not too high (z-statistic = 2.45). The nodes in the network obey a power-law distribution, indicating a typical biological small-world and scale-free network. Gene ontology (GO) enrichment analysis. The molecular function GO term over-representation analysis indicates enriched protein functions in the ant networks (FDR < 0.05; Table 2 and Datasheet 6 in the Supplementary Material). Over-represented functional categories include the term 'binding', as is to be expected from the PPI construction, which serves as a validation criterion. Out of 2804 proteins annotated with the GO term GO:0005488 'binding' in the C. floridanus proteome, 46.11% are present in the final interactome.
Gene ontology (GO) enrichment analysis. The molecular function GO term over-representation analysis indicates enriched protein functions in the ant networks (FDR < 0.05; Table 2 and Datasheet 6 in the Supplementary Material). Over-represented functional categories include the term 'binding', as expected from the PPI construction and serving as a validation criterion. Out of the 2804 proteins annotated with the GO term GO:0005488 'binding' in the C. floridanus proteome, 46.11% are present in the final interactome. In total, 64 binding-related GO terms were identified, constituting 34.97% of all over-represented GO terms. We found only two under-represented GO terms: GO:0003964 'RNA-directed DNA polymerase activity' and GO:0034061 'DNA polymerase activity'. This indicates that during the filtering we did not lose most of the functional proteins involved in molecular binding. We further compared the semantic similarity scores of the interacting pairs with random networks of non-interacting proteins. We first assigned level-4 GO annotations (for molecular function) to all proteins encoded by the ant genome using Blast2GO. 36 Next, we used the GOGO algorithm 37 to measure the semantic similarity scores of the high-confidence interacting pairs in the proposed ant interactome. We further generated 30 random networks, each with 100 random interactions among the proteins that were assigned level-4 molecular function GO annotations, using a custom-made Perl script that can be accessed from the GitHub repository (https://github.com/ShishirGupta-Wu/ant_ppi). (Fig. 2 caption: the distribution of the confidence levels was computed with CAPPIC; 31 score distributions were separated into low, medium and high confidence categories and the density of each category was plotted, with scores ranging between 0 and 0.3 for subset 1 [green, low confidence], 0.3 and 0.7 for subset 2 [blue, medium confidence] and 0.7 and 1 for subset 3 [red, high confidence].) We made sure the random networks did not contain any protein pairs apparent in the preliminary interactome. Using the GOGO algorithm, 37 semantic similarity scores were also assigned to the random networks (non-PPIs), and these scores were compared with those of the interacting proteins pairwise using the Mann-Whitney U test. We observed that the interacting protein set not only had the highest average score, 0.47; it was also well separated from, and significantly higher than, the average scores of all 30 non-PPI sets (Fig. 3). This comparison demonstrates that the interactions in our calculated ant interactome are functionally relevant and clearly different from random networks. C. floridanus interactome protein conservation compared with seven organisms. Proteins that perform essential functions are expected to be evolutionarily conserved, so we investigated the evolutionary conservation of the ant interactome proteins. Higher-degree proteins are generally better conserved evolutionarily; 38 some caveats are discussed in 39,40. To analyze this, the node degree and the fraction of proteins present in the ant interactome that are conserved in different model organisms were compared. It turns out that, in general, the interactions are conserved and supported by most species tested, not just by one (Fig. 4). There was a positive correlation between degree and conservation in the evolutionarily closest analyzed species, A. gambiae (Spearman's rank r = 0.62, p-value = 3.5e-09). Similar correlations are observed between ant and human (r = 0.60) and mouse (r = 0.51). Between ant and worm the correlation was weak (r = 0.33), while no significant correlation was observed between ant and A. thaliana, P. falciparum, or yeast. An ortholog table is provided in Datasheet 7 in the Supplementary Material.
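As a rough illustration of the degree-conservation correlation just described, the following sketch (with placeholder data) computes the conserved fraction per degree cutoff, using the cumulative "that degree and greater" definition spelled out in the Methods, and then Spearman's rank correlation with scipy.

```python
from scipy.stats import spearmanr

# Placeholder inputs: node degree per interactome protein, and whether it
# has an A. gambiae ortholog (stand-ins for the data in Datasheet 7).
degree = {"p1": 12, "p2": 3, "p3": 25, "p4": 7}
has_ortholog = {"p1": True, "p2": False, "p3": True, "p4": True}

# For each degree cutoff k, the conserved fraction among proteins of
# degree >= k (the cumulative definition used in the paper).
ks = sorted(set(degree.values()))
fractions = []
for k in ks:
    members = [p for p, d in degree.items() if d >= k]
    fractions.append(sum(has_ortholog[p] for p in members) / len(members))

rho, p = spearmanr(ks, fractions)
print(f"Spearman r = {rho:.2f}, p = {p:.2g}")
```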
Overall conservation and infection-induced hubs and bottlenecks in the ant interactome. We also evaluated the overall conservation of all ant proteins with respect to the seven model organisms and compared the relatedness of the ant interactome proteins using the chi-square test; the analysis indicated related proportions, with a p-value < 0.05 in each case. The differences in the numbers of orthologs are clearly visible (Fig. 5a) when comparing the ant with the protozoan parasite, yeast and plant: owing to the large phylogenetic distance to these three organisms there are fewer orthologs, but these are well conserved (chi-square test). The remaining set of four organisms, comprising insect, human, mouse and worm, together contains a higher number of orthologs of the ant proteins (Fig. 5b and Datasheet 7 in the Supplementary Material). 187 proteins of the ant interactome are ant-specific in this comparison: they do not have orthologs in any of the analysed organisms (Fig. 5b). The analysis of central topological properties of a PPI network helps to identify key multifunctional components of the network. 41 (Fig. 5 note: infection-induced proteins of C. floridanus are conserved in related organisms, including key interactions.) The degree of a node 42 and its betweenness centrality 43 were used to identify hubs and bottlenecks, respectively. We applied Fisher's exact test to compare the proportion of multi-localized proteins among hubs and bottlenecks to that among non-hubs and non-bottlenecks, respectively. Supplementary Fig. 2 shows differences between the localization of bottlenecks and hubs of the ant interactome: 70% of the bottleneck proteins were found to be multi-localized (versus 56% of non-bottleneck proteins; a significant difference, p = 9.6e-10), while 62% of the hub proteins had multiple localizations (versus 56% of non-hub proteins; a significant difference, p = 0.001575). Integration of the RNASeq data 3 with the ant interactome revealed differentially expressed infection-induced hubs and bottlenecks during the bacterial infection of C. floridanus (Fig. 5c). These also include well-known key proteins involved in the C. floridanus immune response, such as nuclear factor NF-kappa-B p110 (Relish, Cflo_N_g6082) and acidic mammalian chitinase (Cflo_N_g2277), as well as the stress-related protein cytochrome P450 6A1 (Cflo_N_g11706). 3 Given the high importance of hubs and bottlenecks in PPI networks and their differential expression during bacterial infection, all the identified proteins are expected to participate in the defense against bacterial pathogens, and hence can also be examined for decoding immune mechanisms. The insect peritrophic membrane (PM) imposes a protective physical barrier over the midgut epithelium. 44 PM-related proteins have shown potential as targets for pest control; 45,46 therefore, the important ant peritrophic membrane protein 1 (Cflo_N_g4555) (Fig. 5c), with no human homolog, could be further tested as a potential pest target. However, differential expression does not guarantee that a protein is the best target, 47,48 and therefore other topologically important proteins in the network without human homologs (Datasheet 7 in the Supplementary Material) should also be considered as potential pest targets in the future.
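A minimal sketch of the hub and bottleneck identification discussed above, using networkx instead of cytoHubba and assuming a placeholder edge-list file name; degree >= 5 defines hubs and the top 20% by betweenness centrality defines bottlenecks, mirroring the definitions in the Methods.

```python
import networkx as nx

# Load the final interactome; the file name is a placeholder.
G = nx.read_edgelist("ant_interactome_edges.txt")

# Hubs: proteins connecting to at least 5 partners.
hubs = {n for n, d in G.degree() if d >= 5}

# Bottlenecks: top 20% of nodes ranked by betweenness centrality.
betweenness = nx.betweenness_centrality(G)
ranked = sorted(betweenness, key=betweenness.get, reverse=True)
bottlenecks = set(ranked[: len(ranked) // 5])

print(len(hubs), "hubs;", len(bottlenecks), "bottlenecks;",
      len(hubs & bottlenecks), "proteins are both")
```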
Conclusions. Our curated ant interactome is the first large-scale PPI network of an ant. Besides numerous network-biology analyses, it allows studying how different cellular processes connect to each other, including hub proteins and different types of crosstalk, for instance in immunity. Similarly, the PPI maps of other sequenced ants can be reliably predicted using the interologs of the reconstructed high-confidence C. floridanus interactome. Moreover, detailed cross-validation, comparison with random networks, GO annotation, and conservation analysis support the high quality of the resulting ant interactome and its construction steps. The network analysis, including evolutionarily conserved network proteins, further suggests that topologically important proteins could be exploited as future pest targets. For instance, cytochrome P450 6A1 (Cflo_N_g11706), peritrophic membrane protein 1 (Cflo_N_g4555), flexible cuticle protein 12 (Cflo_N_g6859) and endocuticle structural glycoprotein SgAbd-1 (Cflo_N_g7775) were identified as topologically important, differentially expressed proteins with no human orthologs. Nevertheless, specific interactions highlighted by our global analysis will need individual follow-up by detailed investigations. Materials and Methods. Reconstructing the protein-protein interaction map of C. floridanus. We compiled the list of experimentally verified high-confidence PPIs available in the Database of Interacting Proteins (DIP) 49 and D. melanogaster PPIs from the DroID 50 database, which includes data from different studies: interactions from high-throughput Gal4 proteome-wide yeast two-hybrid (Y2H) screens, 32 LexA Y2H system screens, [51-53] PPIs from the fly protein interaction map, 54 interactions determined in large-scale co-affinity purification (co-AP)/MS screens, 55,56 and interactions from BIND, 57 BioGRID, 58 MINT, 59 IntAct, 60 and other databases available in DroID v2014_10. The C. floridanus interologs of the entire set of template PPIs were determined using orthology predictions from the software InParanoid 20,61 and OrthoMCL. 21 These were further customized using our own Perl and bash scripts. For DIP interactors we used the default parameters of InParanoid; for the fly data, orthology was determined using the stricter Blosum80 matrix. For the OrthoMCL-based interolog mapping, a BLAST e-value of 1e-05 was used and the MCL inflation index was set to 1.5. InParanoid distinguishes seed orthologs from co-orthologs, leaving fewer possibilities of mixing out-paralogs into orthologous clusters. Consensus predictions of InParanoid and OrthoMCL were added to the InParanoid seed orthologs to create the set of interologs. Pruning PPIs with domain-domain interactions. The amino acid sequences of the non-redundant preliminary PPIs were extracted and domains were assigned to them using Pfam version 27.0. 62 The list of non-redundant domain-domain interactions was prepared from the meta-databases Domine, 63 DIMA 3.0 64 and the IDDI database. 65 These use complexes available in the Protein Data Bank (PDB) 66 to identify interacting domains and the Pfam families containing these domains; these Pfam families are then predicted to interact. This list was used to parse the template PPIs. All interactions were categorized according to whether or not they are supported by domain-domain interactions (DDIs); supported ('good') interactions were used for the further filtering steps. Subcellular localization filtering.
The subcellular localization of C. floridanus proteins was determined by orthology to Swiss-Prot proteins and with the extended version of KnowPredsite 67 available at the UniLoc server (bioapp.iis.sinica.edu.tw/UniLoc/), a knowledge-based classifier for protein subcellular localization. If the two proteins of a binary interaction do not share the same localization, or at least one common compartment in the case of multi-localized proteins, the interaction was ruled out as probably not occurring. Isoform filtering. The information on C. floridanus protein isoforms and their function was extracted from our previous publication on C. floridanus re-annotation and transcriptome sequencing. 3,68 To reduce network complexity and noise, isoforms of any specific protein present in the network were represented as a single node. However, the data files for all the networks are provided in the Supplementary Tables (1-5), which allow interested readers to further analyze the network of their choice. The betweenness centrality of a node n was computed as $C_B(n) = \sum_{s \neq n \neq t} \frac{\sigma_{st}(n)}{\sigma_{st}}$, where s and t are network nodes different from node n, $\sigma_{st}$ is the number of shortest paths from s to t, and $\sigma_{st}(n)$ gives the number of shortest paths from s to t that pass through node n. Hubs and bottlenecks in the network were identified with cytoHubba. 75 Hubs were defined as proteins connecting to ≥5 proteins. Moreover, the top 20% of bottlenecks and hubs were considered for the mapping of the RNASeq expression data, which was collected from our previous publication. 3 Random networks. We generated random networks following the Erdős-Rényi model 76 and the Barabási-Albert model, 77 and randomized the proposed (final) ant interactome while preserving the total number of interactome nodes using the Network Randomizer plugin 78 of Cytoscape. 73 A total of 1000 random simulations were employed to generate the undirected random graphs. For all three network sets we computed the topological parameters mean shortest path, degree distribution and clustering coefficient, and compared their differences from the native ant interactome using the statistical Z-test. 79 Functional annotation. Blast2GO 36 was used to annotate the Gene Ontology (GO) terms of proteins involved in the reconstructed interactome. Over-representation analyses of GO terms were performed using the Gossip package 80 of the Blast2GO suite. A two-tailed Fisher's exact test followed by false discovery rate (FDR) correction for multiple testing 81 was applied to assess the functional difference between the ant interactome protein annotations (foreground set) and the full C. floridanus proteome annotations 3 (background set). Only differences with an adjusted p-value < 0.05 were considered significant. Orthology analysis. InParanoid 20 was used to identify the orthologs of topologically important nodes in seven model organisms: Anopheles gambiae, Arabidopsis thaliana, Caenorhabditis elegans, Homo sapiens, Mus musculus, Plasmodium falciparum, and Saccharomyces cerevisiae. Only orthologs with 100% bootstrap support were considered true orthologs. As a note of caution, conservation was calculated rather conservatively, demanding double orthology relations; hence the absence of an ortholog (Suppl. Datasheet 7) only indicates that this highly restrictive threshold was not met. Generally, a sequence-related protein may still be found by less restrictive algorithms (e.g. BLAST). For exact quantification of the degree of conservation of ant PPIs we did not check the possibly restricted conservation of the binary ant PPIs, but, more generally, the conservation of proteins that are present in the ant interactome and have orthologs in the seven other species. After calculating the orthology relationships between the ant and the other organisms, we determined, for every node degree, how many ant interactome proteins occur at that degree and how many of them have orthologs in the other species. For each organism, the fraction of proteins at a particular ant interactome degree is taken as the number of ant proteins with orthologs at that degree and greater, divided by the number of proteins in the set.
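The proportion comparisons used throughout this work (e.g. high-confidence PPIs in the preliminary versus the final interactome, or foreground versus background GO annotations) reduce to a 2x2 contingency test; a minimal sketch with scipy follows, where the absolute counts are placeholders, since the text reports only percentages.

```python
from scipy.stats import fisher_exact

# Hypothetical contingency table: counts of high- vs non-high-confidence
# PPIs in the preliminary and final interactomes. The paper reports only
# the proportions (78% vs 89%); the counts below are placeholders.
prelim_high, prelim_total = 15600, 20000   # placeholder (78%)
final_high, final_total = 9642, 10834      # ~89% of 10834 edges

table = [
    [prelim_high, prelim_total - prelim_high],
    [final_high, final_total - final_high],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.3g}")
```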
Data availability. All data generated and analysed during this study are included in this published article (and its Supplementary Information files). The dataset and codes for random network generation are also available at https://github.com/ShishirGupta-Wu/ant_ppi.
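To complement the Methods, here is a minimal sketch of the subcellular-localization filtering rule: an interaction survives only if the partners share at least one predicted compartment. The localization sets below are placeholders for the UniLoc/KnowPredsite predictions.

```python
# Placeholder localization predictions per protein.
localization = {
    "protA": {"cytoplasm", "nucleus"},
    "protB": {"cytoplasm"},
    "protC": {"plasma membrane"},
}
interactions = [("protA", "protB"), ("protA", "protC")]

# Keep an interaction only if the two partners share a compartment.
filtered = [
    (a, b) for a, b in interactions
    if localization.get(a, set()) & localization.get(b, set())
]
print(filtered)  # [('protA', 'protB')]: protA/protC share no compartment
```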
5,044.2
2020-02-11T00:00:00.000
[ "Biology", "Computer Science", "Environmental Science" ]
Design Project Using 45nm CMOS — this project aims to design two impedance-matching networks that allow a source-degeneration amplifier and a common-gate amplifier to achieve the desired characteristics. In the common-gate case the input resistance is small, so we use a down-converting matching network; since the source-degeneration stage presents a high resistance, we match it with an up-converting network. I. INTRODUCTION The objective is to minimize the input noise figure and S11 and maximize the voltage gain for a source-degeneration amplifier and a common-gate amplifier operating at 10 GHz from a 1.2 V supply voltage, using the 45 nm CMOS gpdk. Four specifications should be met: S11 at 10 GHz should be less than -40 dB, the voltage gain should be more than 20 dB, the noise figure should be less than 2 dB, and the DC power should be less than 1 mW. In the source-degeneration case, Lg and Ld are ideal inductors; the other components are all non-ideal, which means they may contribute noise and their values vary over a small range. Since the source-degeneration amplifier's gain is related to the impedance at the source node and the impedance at the drain node, we can estimate the gain as -Ld/Ls, and this helps us select the value of Ld: because Ls is a non-ideal inductor, we should select Ld to match the Ls value. In the common-gate amplifier, Ld is an ideal inductor but the other components are non-ideal. Knowing that the common-gate gain = gm*rds and Rin = gm*Ls/Cgs, we can decide how to select the width and length of the MOSFET to get the proper gm, rds and Cgs. A. Review Stage To finish the project, several steps should be followed. --First, analyze the circuit and build a proper test bench, then use the DC analysis in Cadence to bias the NMOS transistor in the saturation region. --Second, knowing that in the source-degeneration case we should use an up-converting matching network, we use a narrow-band matching network to match the test port's 50 Ω to the impedance seen from the gate; since the whole circuit should have a low noise factor, we use as few components as possible to complete the matching. --Third, we use ADE in Cadence with AC analysis, selecting port 1 and port 2 to calculate the S-parameters. --Fourth, we select the input port and output port to calculate the voltage gain. --Fifth, we change the test bench by removing the two ports and configuring the supply with noise sources, then use ADE to examine the noise factor. --Sixth, since the noise factor is related not only to the noise at the source but also to the amplifier's gain, we adjust the values of the inductors and the MOSFET's width and length to obtain a better noise factor. --Seventh, the common-gate amplifier follows the same procedure as the source-degeneration amplifier. III. FINAL DESIGN RESULT In the source-degeneration case, Figures 1 to 7 show the test bench and the values of S11, voltage gain and noise factor. In the common-gate case, Figures 8 to 14 show the test bench for calculating S11, voltage gain and noise factor. IV. DISCUSSION This project aims to teach us how to design an LNA from its desired characteristics. At first the professor told us to use non-ideal components to finish the design, which means a component cannot take an arbitrary ideal value, so we fixed the inductor's value and then selected the capacitor's value, because the available range of capacitors is larger than that of inductors.
Then we also learned that we should shift the matching networks' center frequency to 10 GHz so that S11 and the noise factor take their optimal values at 10 GHz. At first we wanted to calculate the noise factor, S11 and voltage gain in the same test bench; we then found that these three characteristics have trade-off relationships, because obtaining a good S11 value does not necessarily make the noise factor small at 10 GHz, which means we should use two test benches to obtain S11 and the noise factor. V. CONCLUSION In this project we learned how to select non-ideal component values, because ideal components cannot be used in a real circuit, and we also learned that there are trade-off relationships between the characteristics of an amplifier. APPENDIX The hand calculations required for the amplifiers have already been written down on A4 paper.
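As a rough companion to the matching-network discussion, the sketch below computes the component values of a narrow-band L-match at 10 GHz for two purely resistive terminations; the resistances are placeholders rather than the actual impedances of the amplifiers in this project, and the hand calculation in the Appendix remains the authoritative design.

```python
import math

# Narrow-band L-match design at the project's 10 GHz center frequency.
f0 = 10e9                      # design frequency [Hz]
R_small, R_large = 20.0, 50.0  # placeholder resistances to be matched

w0 = 2 * math.pi * f0
Q = math.sqrt(R_large / R_small - 1)   # required network Q

# Series L on the low-resistance side, shunt C on the high-resistance
# side (an up-converting L-match seen from the small resistance).
L = Q * R_small / w0
C = Q / (R_large * w0)
print(f"Q = {Q:.2f}, L = {L*1e9:.3f} nH, C = {C*1e15:.1f} fF")
```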
1,076.6
2022-01-13T00:00:00.000
[ "Engineering", "Physics" ]
Intra-Night Variability of OJ 287 with Long-Term Multiband Optical Monitoring. Wei Zeng 1,2, Qing-Jiang Zhao 3, Ze-Jun Jiang 2,4, Zhi-Hui Kong 4, Zhen Liu 1,2, Dong-Dong Wang 1,2, Xiong-Fei Geng 1,2, Shen-Bang Yang 1,2 and Ben-Zhong Dai 1,2,*. 1 Department of Physics, Yunnan University, Kunming 650091, China; 2 Key Laboratory of Astroparticle Physics of Yunnan Province, Yunnan University, Kunming 650091, China; 3 Department of Physics and Technology, Kunming College, Kunming 650214, China; 4 Department of Astronomy, Yunnan University, Kunming 650091, China; * corresponding author. Introduction BL Lacertae (BL Lac) objects, which have either very weak or no emission lines [1], and flat-spectrum radio quasars (FSRQs), with strong emission lines [2,3], form a subclass of radio-loud active galactic nuclei (AGNs) known as blazars. Blazars are characterised by non-thermal emission and by strong, rapid flux variability across the entire electromagnetic spectrum, from radio to gamma-rays. The emission is normally attributed to a relativistic jet oriented at a small angle to the line of sight [4]. Blazar flux variability timescales, extending from a few minutes to years and even decades, can be broadly divided into three classes: a large variation over hours to days is often known as intra-day or intra-night variability (IDV) or micro-variability [5,6]; variation on timescales of days to weeks, or even a few months, is considered short-term variability (STV); meanwhile, long-term variability (LTV) can have timescales from several months to years [7-10]. OJ 287 (α = 08:54:48.9, δ = +20:06:30.6, J2000) is a blazar at redshift z = 0.306 [11]. Sillanpää et al. [12] pointed out for the first time that there is a double-peak structure in the cyclic optical outbursts of OJ 287, using optical V-band observations starting from 1890; the light curve exhibited periodic outbursts at intervals of ∼12 years. A binary super-massive black hole system was invoked to explain this quasi-periodic light curve. Sillanpää et al. (1996a [13], 1996b [14]) reported the next double-peak outburst in 1994-1995, which occurred almost exactly at the predicted times, and a double-peaked outburst was also seen during the following recurrence in 2005-2008 [15]. These recurrences are usually interpreted as OJ 287 hosting a binary black hole system with a period of ∼12 years. Since the outbursts occur roughly every 12 years, it was predicted that OJ 287 should show a major outburst in 2015-2016 [16]. In our work, along with the confirmation of this predicted outburst during 2015-2016 reported by [10,17], we study the properties of variability and spectral variation on short timescales. Observations and Data Reduction Our photometric observations of the blazar OJ 287 were performed using the 1.02 m (YAO1.02) optical telescope at the Yunnan Astronomical Observatory. Since 2009, this telescope has been equipped with an Andor DW436 CCD camera (2048 × 2048 pixels) at the Cassegrain focus (f = 13.3 m). The readout noise and gain are 6.33 electrons and 2.0 electrons/ADU in the 2 µs readout-time mode, or 2.29 electrons and 1.4 electrons/ADU in the 16 µs readout-time mode. The FOV of the CCD frame is 7.3 × 7.3 arcmin² with a pixel scale of 0.21 arcsec/pixel.
We used standard Johnson broad-band filters with effective wavelength midpoints of V = 551 nm, R = 658 nm, and I = 806 nm, and performed optical observations in the V-, R-, and I-bands in cyclic mode. Exposure times from 50 to 300 s were chosen according to the seeing, the filter and the telescope. The time resolutions (the time interval between two adjacent data points in the same band) were less than 15 min, and approximately 10 min in most cases. Therefore, these data taken in cyclic mode were considered quasi-simultaneous measurements, which were used for analysing the inter-band correlation and the colour index. During our observation campaign (6 March 2010 to 3 April 2016), we observed for a total of 34 nights, obtaining 2255 CCD frames, as summarized in Table 1.

Table 1. A short summary of the observations in each band performed from 2010 to 2016. In total we observed for 34 nights and obtained 2255 CCD frames.

Year   Nights   V(N)   R(N)   I(N)
2010     10      183    196    197
2012      4      133    136    136
2013      7      142    153    156
2014      5       42     46     55
2015      2       19     25     39
2016      6      199    201    197

All images were reduced with standard Image Reduction and Analysis Facility (IRAF) procedures after bias and flat-field corrections. (IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.) Aperture photometry was performed on the source and comparison stars with APPHOT. Photometry of the source and comparison stars was performed with the same aperture, which was determined by the full width at half-maximum (FWHM) of the comparison stars and was the same for each observation. Each night, we performed aperture photometry with different aperture radii; comparing the photometric results, we found that an aperture radius of about 1.5 FWHM almost always provided the best S/N ratio, and we therefore concentrated on this aperture for our analysis. For each CCD frame, the instrumental magnitudes of OJ 287 and of the two comparison stars (listed in Table 2) were extracted. The brightness (magnitude) of OJ 287 was calculated as the average of the values derived with respect to comparison stars 4 and 10, and the corresponding standard deviation was treated as the error of OJ 287 within each CCD frame. The deviation of the average differential instrumental magnitude of comparison stars 4 and 10, delta (star 4 − star 10), was used to verify the stability of the comparison stars, and is also taken as the accuracy of the observations. Figure 1 shows the deviation of comparison stars 4 and 10 for the entire observational campaign; the deviations in the V-, R- and I-bands are mainly distributed within ±0.025 mag. Variability The V, R and I light curves from 6 March 2010 to 3 April 2016 are shown in Figure 2. The average magnitudes in each band are V = 14.728 ± 0.352, R = 14.351 ± 0.328 and I = 13.763 ± 0.309. The variability amplitudes (maximum minus minimum) in each filter are ΔV = 1.335 mag, ΔR = 1.237 mag and ΔI = 1.594 mag, respectively. Presently, a number of statistical tests, such as the C-test, the F-test, the χ²-test and one-way analysis of variance (ANOVA), have been proposed to assess quantitatively whether IDVs are present [6,9,19-23]. Here, we apply the F-test and the χ²-test to cross-check the intra-day light curves; we consider a light curve variable when it satisfies both criteria described below.
The F-test is regarded as a powerful distribution statistic to check for the presence of variability, as introduced by [21]. When comparing two sample variances, the F-statistic values are calculated as $F_1 = S^2_{BL-StarA} / S^2_{StarA-StarB}$ and $F_2 = S^2_{BL-StarB} / S^2_{StarA-StarB}$, where $S^2_{BL-StarA}$, $S^2_{BL-StarB}$ and $S^2_{StarA-StarB}$ are the variances of the differential instrumental magnitudes of the blazar and comparison star A, the blazar and comparison star B, and comparison stars A and B, respectively. The F-statistic is compared with the critical value $F^{(\alpha)}_{\nu_{BL},\nu_c}$. The numbers of degrees of freedom for each sample, $\nu_{BL}$ and $\nu_c$, are the number of measurements minus 1 (N − 1), and α is the significance level set for the test. In this paper, the F-test was performed at two significance levels, 0.99 and 0.999. If either F₁ or F₂ was larger than the critical value, the null hypothesis (no variability) was discarded. The χ²-test was also used: $\chi^2 = \sum_{i=1}^{N} (m_i - \bar{m})^2 / \sigma^2_{m_i}$, where $m_i$ are the individual magnitudes, $\sigma_{m_i}$ are their errors, and $\bar{m}$ denotes the mean magnitude for an observation night. This statistic is compared against the critical value $\chi^2_{\alpha,\nu}$ obtained from the χ² probability function, where ν is the number of degrees of freedom and α the significance level; $\chi^2 > \chi^2_{\alpha,\nu}$ provides evidence of variability. OJ 287 was said to be variable (V) if both test statistics satisfied the criterion at the 0.999 level; it was marked as non-variable (NV) if none of these criteria were met at the 0.99 level, and as probably variable (PV) otherwise. The results of the F- and χ²-tests and the IDV observations in the V-, R- and I-bands are presented in Table 3. As shown in Table 3, 17 light curves were marked as variable among 100 light curves (18 light curves met the criterion of the F-test and 57 that of the χ²-test). As an illustration, the significant variability on the nights of 6-8 January 2016, which were marked as variable in the V-, R- and I-bands, is presented (Figure 3). On January 6, 2016, the magnitudes showed a monotonically increasing trend. The magnitude changes of the V-, R- and I-bands were ΔV = 0. For each IDV, the variability amplitude (Amp) could be calculated as [24] $Amp = \sqrt{(A_{max} - A_{min})^2 - 2\sigma^2} \times 100\%$, where $A_{max}$ and $A_{min}$ are the maximum and minimum magnitudes, respectively, of the light curve for the night being considered, and σ is the corresponding standard deviation. Our observational data were obtained with different exposures and time intervals across 6 years. To avoid uncertainties caused by the different observational set-ups, the fractional variability amplitude ($F_{var}$) was calculated to show the intrinsic variability amplitude of the source by removing the effects of measurement noise; it is defined as [25] $F_{var} = \sqrt{(S^2 - \bar{\sigma}^2_{err}) / \bar{m}^2}$. Here $S^2 = \frac{1}{N-1}\sum_{i=1}^{N}(m_i - \bar{m})^2$ denotes the total variance of the light curve, $\bar{\sigma}^2_{err}$ is the mean squared error, and $\bar{m}$ is the mean magnitude. The error on $F_{var}$ is $err(F_{var}) = \sqrt{\left(\sqrt{\tfrac{1}{2N}}\,\tfrac{\bar{\sigma}^2_{err}}{\bar{m}^2 F_{var}}\right)^2 + \left(\sqrt{\tfrac{\bar{\sigma}^2_{err}}{N}}\,\tfrac{1}{\bar{m}}\right)^2}$. The results of our IDV analysis are shown in Figure 4 and Table 3. Both statistical distributions of the two variability amplitudes show that the IDV amplitudes of OJ 287 during the whole observational campaign were small.
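A minimal Python sketch of the two variability criteria above, using synthetic placeholder light curves; real inputs would be the differential magnitudes of the blazar against comparison stars A and B, and the per-point photometric errors.

```python
import numpy as np
from scipy.stats import f as f_dist, chi2

# Placeholder differential light curves (blazar-starA, blazar-starB,
# starA-starB), each with 60 points of a single night.
rng = np.random.default_rng(0)
bl_a = rng.normal(0.0, 0.03, 60)
bl_b = rng.normal(0.0, 0.03, 60)
a_b = rng.normal(0.0, 0.01, 60)

# F-test: variance ratios against the comparison-star pair.
F1 = np.var(bl_a, ddof=1) / np.var(a_b, ddof=1)
F2 = np.var(bl_b, ddof=1) / np.var(a_b, ddof=1)
F_crit = f_dist.ppf(0.999, len(bl_a) - 1, len(a_b) - 1)

# Chi-square test against a constant mean, given per-point errors.
m = bl_a
sigma = np.full_like(m, 0.01)  # placeholder photometric errors
chi2_stat = np.sum((m - m.mean()) ** 2 / sigma**2)
chi2_crit = chi2.ppf(0.999, len(m) - 1)

variable = (max(F1, F2) > F_crit) and (chi2_stat > chi2_crit)
print(f"F1={F1:.2f}, F2={F2:.2f}, chi2={chi2_stat:.1f}, variable: {variable}")
```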
We extracted the variability for candidate light curves (marked as PV or V states) with an asymmetric flare template [26], which describes flares as exponentially rising and decaying via the function $F(t) = F_0 + \sum_i F_i \left(e^{(t_i - t)/T_{r,i}} + e^{(t - t_i)/T_{d,i}}\right)^{-1}$, where $F_0$ is the constant background flux level (a constant or linear model), $F_i$ is the flare normalisation, $t_i$ is the time of maximum flare, $T_{r,i}$ is the flux rising timescale, and $T_{d,i}$ is the flux decaying timescale. Most of the candidate light curves showed a monotonically increasing or decreasing trend, which means the timescale of the variability is larger than the length of the data. Only two V light curves and four PV light curves showed complete flares. The results for these light curves are shown in Figure 5 and Table 4. Colour Index It is common for brightness variations to be associated with changes in spectral shape. Thus we also investigated the correlations between the colour indices and magnitude variations. Here we consider whether variations in the V − R colour index of OJ 287 changed with respect to variations in its brightness in the R-band on short- and long-term bases. These colour-magnitude plots of OJ 287 are displayed in Figures 6 and 7. We calculated the best linear fit, shown by the straight lines in Figures 6 and 7, for the colour index against magnitude; the slope, intercept, linear Pearson correlation coefficients and corresponding null-hypothesis probability values are listed in Table 5. A positive Pearson's r coefficient between the colour index and the apparent magnitude of the blazar implies a positive correlation, meaning the source tends to be bluer when it is brighter (BWB) or redder when fainter (RWF), while a negative Pearson's r coefficient suggests the opposite correlation: redder when brighter (RWB) or bluer when fainter (BWF) behaviour. We found a weak positive correlation between the colour indices and the R-band magnitude on long timescales during these observations of OJ 287 (shown in Figure 6). We then measured Pearson's r coefficient between the colour indices and the R-band magnitude on intra-night timescales, as plotted in Figure 7. Ten nights showed a significant negative correlation between the colour indices and the R-band magnitude on intra-night timescales, with probability values below 0.001; the other nights showed nearly achromatic (ACH) behaviour. Discussion and Conclusions Blazars, being dominated by a relativistic jet oriented at a small angle to the line of sight, display variability ranging from a few hundredths of a magnitude to more than several magnitudes. Their variability timescales range from minutes to years. Either factors intrinsic to the jet or changes in overall jet power could explain such variability. The shock-in-jet model can explain most of the variability in blazars [10,27-30]. Any change in the physical conditions of a blazar, such as the magnetic field, electron density or velocity, can trigger a shock that leads to flares when propagating along the relativistic jet. Turbulence behind the shock-in-jet can be a good way to explain the fast variability in the IDV of blazars [10,31-33]. However, when blazars are in a low state, the emission from the accretion disk dominates over the jet emission. Instabilities in the accretion disk, such as 'hot spots', can also produce IDVs [9,10,34-36]. Perturbations on the accretion disk transfer into the relativistic jet and, being Doppler amplified, might also produce fluctuations [9,10,37,38]. Previous colour index studies suggest an apparent dichotomy of responses: BL Lacs become BWB and FSRQs become RWB [9,10,39-44]. The BWB behaviour in BL Lacs can be interpreted as electrons being accelerated to preferentially higher energies before radiatively cooling, while the RWB behaviour in FSRQs is explained by the addition of redder, non-thermal jet emission to an already bluer, thermal disk component.
These results are often preferentially observed during flaring, which causes a selection effect whereby the two extreme ends of the colour-magnitude correlation are the most regularly reported. Ruan et al. [45] pointed out, using a sample of 604 variable quasars, that spectral variability could be the result of hot spots in the accretion-disk emission. Isler et al. [46] suggest that colour-magnitude variability should be a continuum rather than a dichotomy; they use empirical data from 3C 279 to explain how blazars can smoothly evolve from a jet-quiescent, disk-dominated colour profile to an actively jet-dominated state and associated colour profile, and back to a jet-quiescent state. Our colour-magnitude correlations on long and short timescales can be interpreted in a framework in which a relatively bluer accretion disk and a redder jet change in intensity. In addition, the jet intensity can change on short timescales, while the disk intensity changes more slowly. Thus the intra-night colour behaviour was alternately BWF, RWB or ACH, which can be described by variations in the contribution of the non-thermal relativistic jet in OJ 287. We carried out multi-band optical photometric monitoring of the blazar OJ 287 over 34 observation nights between 6 March 2010 and 3 April 2016, and searched for flux and colour-magnitude variations on IDV timescales. The IDV amplitudes of OJ 287 were small for the observed flux variabilities. The blazar was in a highly active and variable state during January 2016, showing relatively large-amplitude flux variations: F_var of 1.3-2.1% and Amp of 5.8-9.0%. OJ 287 showed a weak positive colour-magnitude correlation on long timescales, while negative correlations were found on IDV timescales. Our present analysis cannot distinguish between the possible physical mechanisms in detail; very dense and highly precise simultaneous multi-band observations are necessary. Thus OJ 287 should continue to be monitored whenever possible.
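For reference, the colour-magnitude analysis above amounts to a linear fit plus a Pearson correlation test; a minimal sketch with placeholder photometry follows.

```python
import numpy as np
from scipy.stats import pearsonr, linregress

# Placeholder simultaneous V and R magnitudes; real inputs would be the
# quasi-simultaneous photometry described in the text.
R = np.array([14.1, 14.3, 14.5, 14.7, 14.9])
V = np.array([14.45, 14.68, 14.90, 15.12, 15.33])
colour = V - R  # V-R colour index

fit = linregress(R, colour)   # slope and intercept of the best line
r, p = pearsonr(R, colour)
# r > 0: bluer-when-brighter / redder-when-fainter; r < 0: the opposite.
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, "
      f"r={r:.2f}, p={p:.3g}")
```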
3,648.4
2017-11-22T00:00:00.000
[ "Physics", "Environmental Science" ]
Japanese-Russian TMU Neural Machine Translation System using Multilingual Model for WAT 2019. We introduce our system submitted to the News Commentary task (Japanese↔Russian) of the 6th Workshop on Asian Translation. The goal of this shared task is to study extremely low-resource situations for distant language pairs. It is known that using parallel corpora of different language pairs as training data is effective for multilingual neural machine translation models in extremely low-resource scenarios. Therefore, to improve the translation quality of the Japanese↔Russian language pair, our method leverages other in-domain Japanese-English and English-Russian parallel corpora as additional training data for our multilingual NMT model. Introduction The News Commentary shared task of the 6th Workshop on Asian Translation (Nakazawa et al., 2019) addresses Japanese↔Russian (Ja↔Ru) news translation. It is a very challenging task considering: (a) the extremely low-resource setting, as the parallel data comprise only 12k sentence pairs; (b) how distant the given language pair is, in terms of writing system, phonology, morphology, grammar, and syntax; (c) the difficulty of translating news covering various topics, which leads to a large presence of unknown tokens in such an extremely low-resource scenario. Usually, neural machine translation (NMT) (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017) enables end-to-end training of a translation system but requires a large amount of parallel training data (Koehn and Knowles, 2017). Therefore, there are different techniques for involving other pivot languages to increase the accuracy of low-resource MT, such as pivot-based SMT (Utiyama and Isahara, 2007), transfer learning (Zoph et al., 2016; Kocmi and Bojar, 2018), and multilingual modeling (Firat et al., 2016). Recently, a simple multilingual modeling approach (MultiNMT) was proposed by Johnson et al. (2017), which translates between multiple languages using a single model and an artificial token indicating the target language, taking advantage of multilingual data to improve NMT for all languages involved. Imankulova et al. (2019) showed that incorporating MultiNMT (Johnson et al., 2017) provided better BLEU scores than unidirectional and pivot-based PBSMT approaches, and that domain mismatch has a negative effect on low-resource NMT. Therefore, we use MultiNMT modeling for extremely low-resource Ja↔Ru translation, involving English (En) as the pivoting third language (Utiyama and Isahara, 2007). Considering the importance of domain matching, we focus only on the news domain of the additional Ja↔En and Ru↔En auxiliary parallel corpora, which we will refer to as pivot parallel corpora, and we investigate how the translation results improve when in-domain pivot parallel corpora (Ja↔En and Ru↔En) are used in MultiNMT modeling. As a result, the in-domain pivot parallel corpora increase the coverage of the Ja and Ru vocabulary, and we show that the new tokens introduced from the in-domain pivot corpora can be translated successfully.
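As a minimal sketch of the artificial-token MultiNMT data preparation described above (the "<2xx>" token format and the tiny corpora are illustrative assumptions, not the exact preprocessing of this system), one can prepend a target-language token and oversample each pair to the size of the largest one:

```python
import random

# Placeholder corpora standing in for Global Voices / Jiji / News
# Commentary; each entry is a (source_sentence, target_sentence) pair.
corpora = {
    ("ja", "ru"): [("こんにちは", "привет")],
    ("ja", "en"): [("こんにちは", "hello"), ("ありがとう", "thanks")],
    ("ru", "en"): [("привет", "hello"), ("спасибо", "thanks"),
                   ("да", "yes")],
}

largest = max(len(pairs) for pairs in corpora.values())
train = []
for (src, tgt), pairs in corpora.items():
    # Oversample with replacement so every pair matches the largest one,
    # and add both directions with the matching target-language token.
    for s, t in random.choices(pairs, k=largest):
        train.append((f"<2{tgt}> {s}", t))
        train.append((f"<2{src}> {t}", s))

print(len(train), "training examples;", train[0])
```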
Related Work The existing state-of-the-art NMT model known as the Transformer (Vaswani et al., 2017) works well in different scenarios (Lakew et al., 2018; Imankulova et al., 2019). MultiNMT using the artificial-token approach (Johnson et al., 2017) is known to help language pairs with relatively less data (Lakew et al., 2018; Rikters et al., 2018) and to outperform bi-directional and uni-directional translation approaches (Imankulova et al., 2019). Similarly, we exploit the MultiNMT approach with the Transformer architecture. Our work is heavily based on Imankulova et al. (2019). They proposed a multi-stage fine-tuning approach that combines multilingual modeling and domain adaptation: they utilize out-of-domain pivot parallel corpora to perform domain adaptation on in-domain pivot parallel corpora and then perform multilingual transfer for the language pair of interest. Instead of utilizing out-of-domain pivot parallel corpora, however, we investigate the impact of other in-domain pivot parallel corpora. Pseudo-parallel data can be used to augment existing parallel corpora for training, and previous work has reported that such data generated by so-called back-translation can substantially improve the quality of NMT (Sennrich et al., 2016). However, this approach requires base MT systems that can generate somewhat accurate translations (Imankulova et al., 2017). Therefore, instead of creating noisy pseudo-parallel corpora, we take advantage of other in-domain pivot parallel corpora. Data To train the MultiNMT systems we used the news-domain data provided by WAT 2019. 1 More specifically, we used Global Voices 2 as training data for Ja↔Ru, Ja↔En and Ru↔En, and manually aligned, cleaned and filtered News Commentary data was used as development and test sets. 3 Additionally, we utilized Jiji 4 and News Commentary 5 data for Ja↔En and Ru↔En, respectively. (Footnote URLs: 1 http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2019/index.html; 2 https://globalvoices.org/; 3 https://github.com/aizhanti/JaRuNC; 4 http://lotus.kuee.kyoto-u.ac.jp/WAT/jiji-corpus/.) Table 1 summarizes the sizes of the train/development/test splits used in our experiments. We tokenized English and Russian sentences using tokenizer.perl of Moses (Koehn et al., 2007). 6 To tokenize Japanese sentences, we used MeCab 7 with the IPA dictionary. After tokenization, we eliminated duplicated sentence pairs and sentences with more than 100 tokens for all languages. Systems This section describes our system TMU and our baseline, which are based on the same MultiNMT architecture (Johnson et al., 2017) but trained on different training corpora (Table 1). Here, MultiNMT translates from multiple source languages into different target languages within a single model. To realize such translation, an artificial token is introduced at the beginning of the input sentence to indicate the target language the model should translate to. Since we have 3 language pairs, we concatenate all pairs in both directions, with oversampling to match the largest parallel data. We add a target-language token to the source side of each pair and treat it like a single language-pair case. We experiment with the following systems: • TMU: Our system, trained on a balanced concatenation of the Global Voices, Jiji and News Commentary corpora in 6 translation directions. • Only GV: A comparative model trained only on Global Voices, used to investigate the effect of the additional pivot corpora. Implementation We used the open-source tensor2tensor implementation of the Transformer model. 8 Table 2 contains some specific hyperparameters; the hyperparameters not mentioned in this table use the default values of tensor2tensor. We over-sampled the Ja→Ru and Ja→En training data so that their sizes match the largest Ru→En data for each model. However, the development set was created by concatenating those for the individual translation directions without any over-sampling. We also used tensor2tensor's internal sub-word segmentation mechanism.
The size of the shared sub-word vocabulary was set to 32k. By default, tensor2tensor truncates sentences longer than 256 sub-words to prevent out-of-memory errors during training. We incorporated early stopping, halting training if the BLEU score on the development set did not improve for 10,000 updates (10 checkpoints). At inference time, we averaged the last 10 checkpoints and decoded the test sets with a beam size and length penalty tuned by a linear search on the BLEU score for the development set. The length penalty was 1.0 for Ja→Ru and 1.1 for Ru→Ja; the beam size was set to 12 and 3 for Ja→Ru and Ru→Ja, respectively. Although we train our models on 6 translation directions, we only report the BLEU scores on the Ja→Ru and Ru→Ja test sets. Discussion We investigate the effect of adding the Jiji and News Commentary corpora as pivot parallel corpora to the original Global Voices training data. In extremely low-resource machine translation in the news domain, unknown tokens become a serious issue due to limited vocabulary coverage; adding the pivot parallel corpora to the training data can be expected to increase this coverage. Therefore, we investigate how much the vocabulary coverage was improved by using the pivot parallel corpora. For that purpose, we investigate the vocabulary sets A = T ∩ G and B = T ∩ (G ∪ P), where T is the set of unknown tokens from the test data not included in the direct Ja↔Ru 12k training data, G is the pivot Global Voices vocabulary set, and P is the Jiji and News Commentary training vocabulary set. A is thus the set of test-data unknown tokens covered by the pivot Global Voices training data, and B is the set of test-data unknown tokens covered when the vocabulary of the Jiji and News Commentary pivot parallel corpora is added to A. By comparing the numbers of tokens and types of distinct words in A and B, one can see how much the coverage has increased. In addition, we investigate how correctly the tokens added by the Jiji and News Commentary corpora are translated: if a token from vocabulary set A or B appeared in both the gold sentence and the system's translated sentence, it was counted as correctly translated. Table 4 shows token and type coverage and the numbers of correctly translated tokens and types of distinct words on the test data for A and B, respectively. It can be seen that for both Ru and Ja the coverage of B improved compared to A; in particular, the coverage of Ru improved greatly. Moreover, by adding the Jiji and News Commentary corpora to the training data, the number of correctly translated tokens increased. This shows that the vocabulary coverage increased and the translation accuracy improved. On the other hand, the number of correctly translated tokens is small compared to the coverage gained from the additional parallel data; this is considered to be due to the difficulty of directly learning Ja↔Ru translation from the added indirect Ja↔En and Ru↔En pivot corpora. Furthermore, to deepen our understanding of the tokens covered using the pivot corpora, we analyze cases where the tokens newly added by the Jiji and News Commentary corpora are translated correctly and incorrectly. We define the vocabulary set newly covered by adding the Jiji and News Commentary corpora as C = B \ A. Table 5 shows translation examples of the Only GV and TMU systems; the [unknown tokens] in each sentence belong to C. The first sentence is an example (a) where TMU was able to correctly translate "株主", in contrast to Only GV.
On the other hand, the second example shows that neither TMU nor Only GV could correctly translate the unknown token "表立っ", even though it is included in the pivot parallel corpora; we consider that it could not be translated because the whole sentence was translated incorrectly. Conclusion In this paper, we introduced our system submitted to the News Commentary task (Ja↔Ru) of the 6th Workshop on Asian Translation. The difficult part of this shared task is unknown tokens, owing to the difficult news domain covering various topics and the extremely limited parallel data available. To address this issue, we investigated the coverage of translatable tokens by training MultiNMT on in-domain pivot parallel corpora. As a result, we found that our system can translate more tokens by taking advantage of the additional pivot parallel corpora. In the future, we will explore whether translation results improve by using other Ja↔Ru (e.g. Tatoeba) and Ru↔En (e.g. UN) corpora. In the news domain, there is also the problem of completely new tokens, a type of unknown token that cannot be dealt with by simply increasing training-data coverage, since new information comes out every day. Therefore, we plan to tackle the problem of new tokens that cannot be introduced by using additional corpora.
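A minimal sketch of the vocabulary-coverage bookkeeping from the Discussion, with placeholder token sets; T, G, P, A, B and C follow the definitions given there.

```python
# Placeholder token sets standing in for the real vocabularies.
T = {"株主", "表立っ", "合意"}  # unknown test tokens (not in direct Ja-Ru data)
G = {"合意"}                    # pivot Global Voices vocabulary
P = {"株主", "表立っ"}          # Jiji + News Commentary vocabulary

A = T & G        # unknowns covered by the pivot Global Voices data
B = T & (G | P)  # unknowns covered after adding Jiji/News Commentary
C = B - A        # tokens newly covered by the added corpora

for name, s in [("A", A), ("B", B), ("C", C)]:
    print(name, f"coverage = {len(s) / len(T):.2f}", sorted(s))
```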
2,510.2
2019-11-01T00:00:00.000
[ "Computer Science" ]
Unorthodox dimensional interpolations for He, Li, Be atoms and hydrogen molecule We present a simple interpolation formula using the dimensional limits $D=1$ and $D=\infty$ to obtain the $D=3$ ground-state energies of atoms and molecules. For atoms, these limits are linked by first-order perturbation terms of the electron-electron interactions. This unorthodox approach is illustrated by the ground states of two-, three-, and four-electron atoms, obtaining fairly accurate results with modest effort. We also treat the ground state of H$_2$ over a wide range of the internuclear distance R, and the results compare well with the standard exact results from the Full Configuration Interaction method. Similar dimensional interpolations may be useful for complex many-body systems. Introduction Dimensional scaling, as applied to chemical physics, offers promising computational strategies and heuristic perspectives for studying electronic structure and obtaining energies of atoms, molecules and extended systems. [1-4] Taking a spatial dimension other than D = 3 can make a problem much simpler; one can then use perturbation theory or other techniques to obtain an approximate result for D = 3. Years ago, a D-scaling technique used in quantum chromodynamics 5 was adapted for helium. [2-4] The approach began with the D → ∞ limit and added terms in powers of δ = 1/D. It was arduous and asymptotic, but by summation techniques attained very high accuracy for D = 3. 6 Other dimensional scaling approaches were extended to N-electron atoms, 7 renormalization with 1/Z expansions, 8 random walks, 9 interpolation of hard-sphere virial coefficients, 10 resonance states 11 and the dynamics of many-body systems in external fields. 12,13 Recently, a simple analytical interpolation formula emerged that uses both the D = 1 and D → ∞ limits for helium. 14 It makes use of only the dimensional dependence of the hydrogen atom, together with the exactly known first-order perturbation terms, with λ = 1/Z, of the dimensional limits of the electron-electron 1/r₁₂ interaction. In the D = 1 limit, the Coulombic potentials are replaced by delta functions in appropriately scaled coordinates. 15 In the D → ∞ limit, the electrons assume positions fixed relative to one another and to the nucleus, with wave functions replaced by delta functions. 16 Then at D = 3, the ground-state energy of helium $\varepsilon_3$ can be obtained by linking $\varepsilon_1$ and $\varepsilon_\infty$ together with the first-order perturbation coefficients $\varepsilon_1^{(1)}$ and $\varepsilon_\infty^{(1)}$ of the 1/Z expansion. The first-order terms actually provide much of the dimension dependence. This article exhibits the applicability of this unorthodox formula, a blend of dimensions with first-order perturbations, to more complex many-body systems. We outline the following sections: in Sec. 2, the interpolation formula; in Sec. 3 we treat helium; in Sec. 4, lithium; in Sec. 5, beryllium; in Sec. 6, the hydrogen molecule. Each atom section (3-5) has four subsections: A for D = 1; B for D → ∞; C for $\varepsilon_D^{(1)}$, the first-order perturbation terms; and D for $\varepsilon_3$, the ground-state energy at D = 3 obtained from the interpolation formula. For the hydrogen molecule, Sec. 6, the subsections describe how the internuclear distance R varies in the D = 1 and D → ∞ dimensions and meshes into D = 3. Finally, in Sec. 7 we comment on prospects for blending dimensional limits to serve other many-body problems. Dimensional interpolation For the dimensional scaling of atoms and molecules, the energy erupts to infinity as D → 1 and vanishes as D → ∞.
Hence, we adopt scaled units (with hartree atomic units), whereby $\varepsilon_D = \left(\tfrac{D-1}{2}\right)^2 E_D$, so the reduced energy $\varepsilon_D$ remains finite in both limits. When expressed in a 1/Z perturbation expansion, the reduced energy is given by $\varepsilon_D(\lambda) = \varepsilon_D^{(0)} + \varepsilon_D^{(1)}\lambda + \varepsilon_D^{(2)}\lambda^2 + \cdots$ with λ = 1/Z, where Z is the total nuclear charge of the corresponding atom. The first-order perturbation coefficient is given in Eq. (1); 1,6 it represents the expectation value $\langle 1/r_{12} \rangle$ of the electron-electron repulsion evaluated with the zeroth-order hydrogenic wave function, exp(−r₁ − r₂). Accordingly, the interpolation takes the form $\varepsilon_3 \approx \tfrac{1}{3}\varepsilon_1 + \tfrac{2}{3}\varepsilon_\infty + \lambda\left[\varepsilon_3^{(1)} - \tfrac{1}{3}\varepsilon_1^{(1)} - \tfrac{2}{3}\varepsilon_\infty^{(1)}\right]$. We aim to illustrate the interpolation formula more fully, presenting results with modest calculations having respectable accuracy for two, three, and four electrons. For the hydrogen molecule, a different scaling scheme will be used and illustrated: the distances are rescaled dimension-dependently, and an approximation for D = 3 (where R = R₃) emerges on interpolating linearly between the dimensional limits, as developed by Loeser in Refs. [17-19]. Two-electrons: Helium The formula worked very well for D = 3 helium, with λ = 1/2. One-dimension: D = 1 We calculate the ground-state energy of the Hamiltonian operator using the variational principle. This is less accurate than Ref. 15, but much easier to handle for two and more electrons. 20 The Hamiltonian with delta-function interactions is $H = -\tfrac{1}{2}\left(\tfrac{\partial^2}{\partial x_1^2} + \tfrac{\partial^2}{\partial x_2^2}\right) - \delta(x_1) - \delta(x_2) + \lambda\,\delta(x_1 - x_2)$, such that, with the trial wave function $\psi(x_1, x_2) = \xi\, e^{-\xi(|x_1| + |x_2|)}$, the expectation value is $\langle H \rangle(\xi) = \xi^2 - 2\xi + \tfrac{\lambda}{2}\xi$; minimizing with respect to ξ gives ξ₀ = 1 − λ/4 = 0.875, which put into Eq. (15) gives the ground-state energy $\varepsilon_1 = -\xi_0^2 = -0.765625$. This result is found in Refs. [20-22], but it is 2.9% off, since the exact value is $\varepsilon_1 = -0.788843$. Infinite-dimension: D → ∞ In the large-D limit, the effective ground-state Hamiltonian for a two-electron atom, with inter-electronic correlation, can be written as $\mathcal{H}_\infty = \tfrac{1}{2}\left(\tfrac{1}{r_1^2} + \tfrac{1}{r_2^2}\right)\tfrac{1}{\sin^2\theta} - \tfrac{1}{r_1} - \tfrac{1}{r_2} + \tfrac{\lambda}{r_{12}}$, with $1/r_{12} = J(r_1, r_2, \theta) = (r_1^2 + r_2^2 - 2 r_1 r_2 \cos\theta)^{-1/2}$ for an inter-electronic angle θ. We minimize this effective Hamiltonian with respect to the parameters r₁, r₂, and θ, and obtain the corresponding ground-state energy $\varepsilon_\infty = -0.684442$ (see Table 1 in Refs. 14 and 22). First-order perturbations: $\varepsilon_D^{(1)}$ In a two-electron atom with nuclear charge Z, the exact Hamiltonian in D dimensions, using atomic units, is given in Eq. (19), where the Laplacian operator $\nabla_r^2$ in D dimensions is defined in Eq. (20). For helium-like atoms we consider the two electrons in a 1s-like state, with the spatial part symmetric (both electrons in the same state) and the spin part in the antisymmetric spin singlet. The spatial part of the electronic wave function can be written as $\varphi(r_1, r_2) = \chi_1(r_1)\chi_2(r_2)$, where the normalized wave functions χ₁(r₁) and χ₂(r₂) are defined in Eqs. (22) and (23); the normalization constant N is calculated in Eq. (24), with Ω_D, the surface area of a unit sphere in D dimensions, given in Eq. (25). In D dimensions, with the above wave functions, we obtain the first-order coefficient of Eq. (26). 14 As shown in Eq. (2), for D = 1, 3, ∞ respectively, $\varepsilon_1^{(1)} = \tfrac{1}{2}$, $\varepsilon_3^{(1)} = \tfrac{5}{8}$, and $\varepsilon_\infty^{(1)} = 2^{-1/2}$. In conventional quantum chemistry textbooks treating D = 3 helium, the electron-electron interaction 1/r₁₂ is evaluated by first-order perturbation theory; the result is $\varepsilon_3 = -0.687529$, with an error of 5.29%.
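A minimal numerical sketch of the helium numbers above: the D = 1 variational minimum and the dimensional interpolation, assuming the linear-in-1/D interpolation form reconstructed earlier and the first-order coefficients quoted in the text.

```python
from scipy.optimize import minimize_scalar

lam = 0.5  # lambda = 1/Z for helium

# D = 1 variational energy: <H>(xi) = xi^2 - 2*xi + (lam/2)*xi.
res = minimize_scalar(lambda xi: xi**2 - 2 * xi + 0.5 * lam * xi,
                      bounds=(0.1, 2.0), method="bounded")
xi0 = res.x    # should reproduce 1 - lam/4 = 0.875
eps1 = res.fun  # -xi0**2 = -0.765625

eps_inf = -0.684442  # large-D minimum quoted in the text
e1, e3, e_inf = 0.5, 5 / 8, 2**-0.5  # first-order coefficients

# Interpolation: eps3 = eps1/3 + 2*eps_inf/3 + lam*(e3 - e1/3 - 2*e_inf/3),
# the linear-in-1/D form assumed above.
eps3 = eps1 / 3 + 2 * eps_inf / 3 + lam * (e3 - e1 / 3 - 2 * e_inf / 3)
print(f"xi0 = {xi0:.4f}, eps1 = {eps1:.6f}, eps3 = {eps3:.6f}")
```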
Three-electrons: Lithium The ground state of the lithium atom was calculated long ago using the variational method with complicated wave functions. [23-25] Here we apply the interpolation formula, using the D = 1 and D = ∞ limits and the first-order perturbation terms. For the ground state of the lithium atom our formula gave $\varepsilon_3 = -0.839648$, a 1.04% deviation from the exact result $\varepsilon_3 = -0.830896$. 26 One-dimension: D = 1 In a three-electron atom with nuclear charge Z, the exact Hamiltonian in one dimension, using atomic units, can be written as $H = -\tfrac{1}{2}\sum_{i=1}^{3}\tfrac{\partial^2}{\partial x_i^2} - \sum_{i=1}^{3}\delta(x_i) + \lambda \sum_{i<j}\delta(x_i - x_j)$, with λ = 1/Z. In the lithium atom we consider two electrons in the 1s state and the third electron in a 2s state, with the spatial part of the 1s pair symmetric and the spin part antisymmetric. We write the spatial part of the electronic wave function as $\varphi(r_1, r_2, r_3) = \chi_1(r_1)\chi_2(r_2)\chi_3(r_3)$. The two normalized wave functions χ₁(r₁), χ₂(r₂) are described in Eqs. (9) and (10). We assume that the 1s wave functions are orthogonal to the 2s wave function. We calculate the ground-state energy of the three-electron atom using the variational principle: we optimize the parameter ξ, defined in the wave functions χ₁(r₁), χ₂(r₂), χ₃(r₃), and obtain the minimum value of the Hamiltonian expectation H_φ(ξ) defined in Eq. (31). We divide this Hamiltonian into five parts: the kinetic energy of the three electrons, the potential energy of the three electrons due to nuclear attraction, and the interaction energies of the inter-electronic repulsions in the system. We minimize H_φ(ξ) with respect to ξ and obtain ξ₀ = 0.697856, which put into Eq. (37) gives the ground-state energy $\varepsilon_1 = -0.693979$. Infinite-dimension: D → ∞ In the large-D limit the effective ground-state Hamiltonian for three-electron atoms, with correlation, can be written as in Eq. (38), with γ_ij = γ_ji = cos θ_ij, where θ_ij is the angle between r_i and r_j. The quantities Γ^(i) and Γ are called Gramian determinants; in Eq. (39) the quantity Γ^(i)/Γ is effectively defined by Eq. (40) (see page 111, Eq. (35) of Ref. 7 for more details). We minimize this effective Hamiltonian with respect to the parameters r₁, r₂, r₃ and θ₁₂, θ₁₃, θ₂₃, and obtain the ground-state energy $\varepsilon_\infty = -0.795453$. First-order perturbations: $\varepsilon_D^{(1)}$ As the electrons reside in two orbitals, 1s²2s, there are three electron-electron pairs: one $\langle 1/r_{12}\rangle$ from 1s², and two others, $\langle 1/r_{13}\rangle$ and $\langle 1/r_{23}\rangle$, from 1s2s. Each $\varepsilon_D^{(1)}$ coefficient thus comprises the three electron pairs. The D = 1 term is obtained via subsection 4.1, and the D = 3 term is taken from Ref. 27. Here we develop both D = 3 and D → ∞, bringing the third electron in line with the two-electron treatment of subsection 3.3. As the Hamiltonian is given in Eqs. (19) and (20), we start with the electronic wave function $\varphi(r_1, r_2, r_3) = \chi_1(r_1)\chi_2(r_2)\chi_3(r_3)$. The two normalized functions χ₁(r₁), χ₂(r₂) are taken care of in Eqs. (22)-(25). We assume that the 1s wave functions are orthogonal to the 2s wave function, with the normalization involving α = 3/(2D) [Eq. (47)]. To obtain the first-order terms for D = 3 and D → ∞ we assemble some integrals associated with the key f(D) function shown in Eqs. (2) and (26); the output is Eq. (48), and the hypergeometric function $F\!\left(\tfrac{1}{2}, \tfrac{3-D}{2}; \tfrac{D}{2}; y\right)$ enters in Eq. (26). The parent integral is Eq. (50), and from G_D(a, b) we compute the integral of Eq. (52). In the integrals we used the normalized wave functions χ₁(r₁), χ₂(r₂), and χ₃(r₃) already specified, giving a typical term such as Eq. (53); from Eq. (53) we see that we must put a = 2 and b = 1, so y = 1/9. In Eq. (55) the hypergeometric function is available in tabulations. 28 We computed up to D = 10⁶ to confirm that the function converges to its D → ∞ limiting value, and for D = 3 the function gives the value used in the interpolation. Interpolation for D = 3 Again we use the interpolation formula shown in Eq. (6), now with λ = 1/Z = 1/3.
The input was the D = 1 and D → ∞ limits and the first-order coefficients from the preceding subsections. Our interpolation gives the Li-atom ground-state energy with an error of about 1%: ε₃ = −0.839648, compared with the exact result ε₃ = −0.830896. 26

Four electrons: Beryllium

The electronic structure of the beryllium atom is of particular interest because of its implications in several areas of modern science, e.g., stellar astrophysics, plasmas, and high-temperature physics. The ground-state energy of the Be atom has been calculated by various methods, e.g., the Configuration Interaction (CI) method with Slater-type orbitals (STOs), 29 the Hylleraas method (Hy), 30 the Hylleraas-Configuration Interaction method (Hy-CI), 31 and the Exponentially Correlated Gaussian (ECG) method. 32,33 In this section we apply the dimensional interpolation formula, using the results from the D = 1 and D = ∞ limits, to obtain the ground-state energy of the four-electron atom. With dimensional interpolation we obtain the ground-state energy of the beryllium atom as ε₃ = −0.910325, compared with the exact energy ε₃ = −0.916709, a percentage error of 0.7%.

One dimension: D = 1

For a four-electron atom with nuclear charge Z = 1/λ, the exact Hamiltonian in one dimension, in atomic units, again has the delta-function form used above, extended to four electrons. In the beryllium atom we consider the first two electrons in 1s states and the other two in 2s states, each pair with a symmetric spatial part (both electrons in the same orbital) and an antisymmetric spin part. We write the spatial part of the electronic wave function as

φ(r₁, r₂, r₃, r₄) = χ₁(r₁) χ₂(r₂) χ₃(r₃) χ₄(r₄),

where the three normalized wave functions χ₁(r₁), χ₂(r₂), χ₃(r₃) are described in Eqs. (9), (10) and (30). We assume that the 1s wave functions are orthogonal to the two 2s wave functions χ₃(r₃) and χ₄(r₄). We calculate the ground-state energy of the four-electron atom with the variational principle: we optimize the parameter ξ, defined in the wave functions χ₁(r₁), χ₂(r₂), χ₃(r₃), χ₄(r₄), and obtain the minimum value of the Hamiltonian expectation H_φ(ξ). We divide the Hamiltonian into five parts: the kinetic energy of the four electrons, the potential energy of the four electrons due to nuclear attraction, and the interaction energies of the inter-electronic repulsions in the system.

Infinite dimension: D → ∞

In the large-D limit, the effective ground-state Hamiltonian for a four-electron atom, with inter-electronic correlation, takes the same Gramian form as for lithium, now with i, j = 1, …, 4, γᵢⱼ = cos θᵢⱼ, and θᵢⱼ the angle between rᵢ and rⱼ. The quantities Γ^(i) and Γ are the Gramian determinants, with Γ^(i)/Γ defined as in equation (70); see page 111, equation (35), in Ref. 7 for more details.

First-order perturbations: ε_D^(1)

As the electrons reside in two orbitals, 1s²2s², there are six electron-electron pairs: one ⟨1/r₁₂⟩ from 1s²; four others, ⟨1/r₁₃⟩, ⟨1/r₁₄⟩, ⟨1/r₂₃⟩, ⟨1/r₂₄⟩, from 1s2s; and a single ⟨1/r₃₄⟩ from 2s². Each ε_D^(1) is the sum of these pair contributions. The D = 1 term is obtained via subsection 5.1. Here we develop both D = 3 and D → ∞, bringing in the fourth electron along the lines of the three-electron treatment in subsection 4.3. With the Hamiltonian of equations (19) and (20), we start from the product electronic wave function; the two normalized 1s wave functions χ₁(r₁), χ₂(r₂) are taken care of in Eqs. (22), (23), (24) and (25).
We assume that the 1s wave functions are orthogonal to the 2s wave functions, defined as in Eq. (46), with normalization constant N₁ defined in Eq. (47). We take the same approach as in subsection 4.3 to calculate the first-order term (the 2s² electron-electron repulsion term) in the D → ∞ limit with the help of equations (50) and (52), with y = ((a − b)/(a + b))² and the f(D) function shown in Eqs. (2) and (26). This is the same functional expression as for the lithium atom, Eq. (53), but the arguments are different. To calculate the first-order perturbation coefficient ⟨1/r₃₄⟩ for beryllium we use the normalized wave functions χ₁(r₁), χ₂(r₂), χ₃(r₃) and χ₄(r₄) already specified, which give rise to a typical term, Eq. (79), with a = 1 and b = 1, so y = 0. In Eq. (78) the hypergeometric function then reduces to unity, and f(D) → 2^(−1/2) in the D → ∞ limit, which fixes the value of Eq. (78).

For D = 3 we use the corresponding formulas from Refs. 1 and 28. At D = 3 the 2s wave function has α = 1. To calculate the inter-electronic repulsion energy ⟨1/r₃₄⟩ from Eq. (85) we use integrals of the type G₃ᵏ(a, b) in Eq. (82) and K₃(i, j, k) in Eq. (83), with a = 1, b = 1, and k = 0. With the help of (82) and (83) we calculate the first-order coefficient (the 2s-2s part) for the beryllium atom in three dimensions.

Interpolation for D = 3

We again use the interpolation formula shown in Eq. (6).

Hydrogen molecule: H₂

One dimension: D = 1

In H₂, with nuclear charge Z on each atom, the electronic part of the one-dimensional Hamiltonian in atomic units can be written as in Refs. 20 and 39, with a = R/2, where R is the distance between the two nuclei located at r = ±a; also λ = 1/Z = 1. The Hamiltonian energy eigenvalues provide symmetric and antisymmetric states under exchange of the electrons; the symmetric state pertains to the ground-state potential energy. 20 The total binding energy is obtained by adding the nucleus-nucleus interaction term (1/R) to the electronic energy.

Infinite dimension: D → ∞

In the D → ∞ limit, the effective Hamiltonian is written in cylindrical coordinates (ρᵢ, zᵢ), with a = R/2 and the dihedral angle φ. It has two locations for the electrons: symmetric, with ρ₁ = ρ₂ and z₁ = z₂, and antisymmetric, with ρ₁ = ρ₂ and z₁ = −z₂. When R has the nuclei well apart, in the symmetric case both electrons cluster near one of the nuclei (H₂ → H⁻ + H⁺); in the antisymmetric case, each electron resides near just one of the nuclei (H₂ → H + H). Thus the antisymmetric case is much more favorable for the ground-state energy. We minimize the Hamiltonian (90) with respect to the ρ's and z's to obtain the ground-state energy ε_∞(R); we numerically evaluate the corresponding optimized parameters ρ₁*, ρ₂*, z₁*, z₂*, and φ* for different values of R. The total binding energy is obtained by adding to ε_∞(R) the internuclear interaction term (1/R).

Interpolation for D = 3

Unlike for the atoms, our interpolation must be modified for a molecule. An atom has only one nucleus, with the electrons orbiting about the positive charge, and there the interpolation based on first-order perturbation terms works well; for a molecule it does not. For a diatomic molecule the potential curve V(R) is fundamental, with R the distance between the nuclei.
As mentioned in Eqs. (4) and (5), our interpolation for H₂ uses a modified rescaling scheme built on the D = 1 and D → ∞ dimensional limits. The rescaled distances are: in D = 1, rᵢ → rᵢ/3 and R → R/3, for i = 1, 2; in D → ∞, rᵢ → 2rᵢ/3 and R → 2R/3. The rescaled Hamiltonians have distinct factors in the kinetic and potential energy parts: in D = 1, Hamiltonian (88) is rescaled accordingly, and in D → ∞, Hamiltonian (90), with a = R/2. We minimized these rescaled Hamiltonians (93) with respect to the rescaled distances (92).

Conclusion and prospects

The formula we used for the atoms is unorthodox, as it emerged only recently, 14 whereas other D-interpolations are of long standing. 44,45 The fresh aspect is that it links the limiting energies ε₁ and ε_∞ together with the first-order perturbation coefficients ε_D^(1). Those perturbations arise from the electron-electron pair interactions 1/rᵢⱼ, and they actually provide much of the dimension dependence. For H₂ we used a different scaling than for the atoms, since H₂ involves the distance R between the two nuclei; the rescaling is then R → R/3 for D = 1 and R → 2R/3 for D → ∞. Interpolating between the dimensional limits gave a fair approximation of the binding energy for D = 3 when compared with full configuration interaction (FCI).

In total, Sections 3, 4, and 5 treat He, Li, and Be, and Section 6 dealt with H₂. In subsections we describe the D = 1 limit, the D = ∞ limit, the first-order perturbations, and the interpolation output. The ingredients of the interpolation are well suited for computation, and we expect the method to hold for larger atomic, molecular, and extended systems. More than ground-state energies are accessible. There are also prospects for combining dimensional limits to serve other many-body problems. One is examining the dimensional dependence of quantum entanglement. 46,47 Another is the isomorphism between the Ising model 48 and two-level quantum mechanics. 49 Long ago the Ising model was solved in one, two, and infinite dimensions, 50-52 and there has been much activity near four dimensions. 53 The unknown solution at D = 3 remains a challenge, even for quantum computing. 54,55 More light on that solution might come from a blending of dimensions akin to our unorthodox interpolation formula.
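To make the atomic interpolation concrete, a minimal numerical sketch for helium follows. It assumes Eq. (6) takes the dimension-weighted form ε₃ ≈ (1/3)ε₁ + (2/3)ε_∞ + λ[ε₃^(1) − (1/3)ε₁^(1) − (2/3)ε_∞^(1)] (our reading of Ref. 14, stated here as an assumption), and uses the limiting values quoted in Section 3, with the exact D = 1 energy.

```python
# Sketch of the dimensional interpolation, Eq. (6), applied to helium (lambda = 1/2).
# Assumes the 1/3 - 2/3 weighted form of Ref. 14; inputs are the values quoted in Sec. 3.
lam = 0.5                                 # lambda = 1/Z
eps1, eps_inf = -0.788843, -0.684442      # exact D = 1 and D -> infinity limits
e1_1, e3_1, einf_1 = 0.5, 0.625, 2**-0.5  # first-order coefficients eps_D^(1)

eps3 = (eps1 / 3 + 2 * eps_inf / 3
        + lam * (e3_1 - e1_1 / 3 - 2 * einf_1 / 3))
exact = -0.725931                         # reduced exact helium energy, -2.903724/Z^2
print(eps3, 100 * abs(eps3 - exact) / abs(exact))  # ~ -0.72578, error ~ 0.02%
```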
Pharmacological targeting of a PWWP domain demonstrates cooperative control of NSD2 localization NSD2 is the primary enzyme responsible for the dimethylation of lysine 36 of histone 3 (H3K36me2), a mark associated with active gene transcription and intergenic DNA methylation. In addition to a methyltransferase domain, NSD2 harbors two PWWP and five PHD domains believed to serve as chromatin reading modules, but their exact function in the regulation of NSD2 activity remains underexplored. Here we report a first-in-class chemical probe targeting the N-terminal PWWP (PWWP1) domain of NSD2. UNC6934 binds potently (Kd of 91 ± 8 nM) to PWWP1, antagonizes its interaction with nucleosomal H3K36me2, and selectively engages endogenous NSD2 in cells. Crystal structures show that UNC6934 occupies the canonical H3K36me2-binding pocket of PWWP1 which is juxtaposed to the DNA-binding surface. In cells, UNC6934 induces accumulation of endogenous NSD2 in the nucleolus, phenocopying the localization defects of NSD2 protein isoforms lacking PWWP1 as a result of translocations prevalent in multiple myeloma. Mutation of other NSD2 chromatin reader domains also increases NSD2 nucleolar localization, and enhances the effect of UNC6934. Finally we identified two C-terminal nucleolar localization sequences in NSD2 that appear to drive nucleolar accumulation when one or more chromatin reader domains are disabled. These data support a model in which NSD2 chromatin engagement is achieved in a cooperative manner and subcellular localization is controlled by multiple competitive structural determinants. This chemical probe and the accompanying negative control, UNC7145, will be useful tools in defining NSD2 biology. Introduction Nuclear receptor-binding SET domain-containing 2 (NSD2, also known as MMSET and WHSC1) is a protein lysine methyltransferase that belongs to the NSD family, which also includes NSD1 and NSD3. Functionally, NSD2 is responsible for the bulk of H3K36me2 in diverse cell types. Dimethylation of H3K36 by both NSD1 and NSD2 recruits DNMT3a at intergenic regions to control DNA methylation and regulate development and homeostasis 8,9 . NSD2 is also required for efficient non-homologous end-joining and homologous recombination, two canonical DNA repair pathways 10,11 . In addition to its catalytic domain, NSD2 has multiple protein-protein interaction (PPI) modules with known or potential chromatin reading functions, including five PHD (plant homeodomain) and two PWWP (proline-tryptophan-tryptophan-proline) domains 2 , as well as a putative DNA-binding HMG-box (high mobility group box) domain (Fig. 1a). Mounting evidence suggests that these domains play important roles in NSD2 function, but the individual and/or collective roles of the NSD2 chromatin reader domains are still being elucidated 2 . Many PWWP domains are known H3K36me2,3 reading modules that engage methyl-lysine while simultaneously interacting with nucleosomal DNA adjacent to H3K36 12,13 . The isolated N-terminal PWWP domain of NSD2 (NSD2-PWWP1) binds H3K36 di-and trimethylated nucleosomes; this interaction presumably is mediated by a conserved aromatic cage and stabilizes NSD2 at chromatin 8 . Mutation of the aromatic cage residues abrogates NSD2-PWWP1 binding to nucleosomal H3K36me2, but has only modest effect on global H3K36 methylation level in cells 8 . However, H3K36 methylation has been shown to be abolished upon mutation of the second PHD domain (PHD2) 14 . 
Chromatin association is preserved with the PHD2 mutant but lost upon combined truncation of PWWP1, PHD1, PHD2 and part of PHD3 (i.e. the RE-IIBP isoform) due to cytoplasmic retention 14 . Conversely, nucleolar accumulation was observed in the case of NSD2-PWWP1 truncated variants 15,16 . Overall, these data suggest a complex regulatory system for NSD2 in which distinct combinations of structural modules contribute to subcellular localization, substrate engagement, and lysine methylation.

Due to their role in disease and biology, there has been much recent interest in targeting the NSD family of methyltransferases with small molecules. High-quality, cell-active, and selective inhibitors of NSD2 and NSD3 catalytic activity remain elusive; however, irreversible small molecule inhibitors of the NSD1 SET domain that demonstrate on-target activity in NUP98-NSD1 leukemia cells have recently been reported 17 . We have had recent success targeting the PWWP domains of NSD3 and NSD2, suggesting that PWWP domains are druggable, while the PHD domains have so far not been targetable. Specifically, we reported a potent chemical probe targeting the N-terminal PWWP domain of NSD3 that repressed MYC mRNA levels and reduced the proliferation of leukemia cell lines 18 . Additionally, we recently described the development of the first antagonist of NSD2-PWWP1, which binds with modest potency and abrogates H3K36me2 binding 19 . Therefore, we hypothesized that targeting the PWWP domain(s) of NSD2 with highly potent and selective chemical probes may be a strategy to modulate NSD2 engagement with chromatin, subcellular localization, and/or catalytic function.

In this study, we report a first-in-class chemical probe, UNC6934, that selectively binds in the aromatic cage of NSD2-PWWP1, thereby disrupting its interaction with H3K36me2 nucleosomes. UNC6934 potently and selectively binds full-length NSD2 in cells and induces partial disengagement from chromatin, consistent with a cooperative chromatin binding mechanism relying on multiple protein interfaces. UNC6934 promotes nucleolar localization of NSD2, phenocopying previously characterized PWWP1-disrupting mutations prevalent in t(4;14) multiple myelomas 15,16 . Furthermore, we identified two active nucleolar localization sequences in NSD2 and demonstrated cooperativity between multiple chromatin reader modules to prevent nucleolar sequestration. Our data demonstrate that UNC6934 is a potent and selective drug-like molecule suitable as a high-quality chemical probe to interrogate the function of NSD2-PWWP1.

(Fig. 1: optimization of the initial ligand 19 led to MRT866 and finally the chemical probe UNC6934; UNC7145 is a structurally similar negative control compound.)

Discovery of a potent ligand targeting NSD2-PWWP1

We recently described the use of virtual screening, target class screening, and ligand-based scaffold hopping approaches to identify ligands of the NSD2 PWWP domains as starting points for further development 19 . This initial effort led to compound 3f, which binds NSD2-PWWP1 with a K d of 3.4 ± 0.4 µM as determined by surface plasmon resonance (SPR). Based on the crystal structure of 3f in complex with NSD2 (PDB 6UE6), molecular docking simulations predicted that a benzoxazinone bicyclic ring would favorably replace the cyanophenyl group of 3f. We confirmed that compound MRT866 binds NSD2-PWWP1 with a K d of 349 ± 19 nM (Fig. 1b, Supplementary Fig. 1)
and occupies the aromatic cage of PWWP1 similarly to compound 3f, as determined by X-ray crystallography (PDB 7LMT, Supplementary Fig. 1 and Supplementary Table 1). The benzoxazinone ring of MRT866 makes more extensive van der Waals interactions with NSD2-PWWP1 than 3f and engages in an additional hydrogen bond with the side-chain of Q321. Further structure-based optimization focused on the replacement of the thiophene ring and resulted in UNC6934, which binds NSD2-PWWP1 with a K d of 91 ± 8 nM by SPR (Figs. 1b and 2a). Interestingly, conversion of the cyclopropyl group of UNC6934 to an isopropyl moiety (Fig. 1b) resulted in no appreciable binding up to 20 µM, and therefore UNC7145 is an ideal negative control compound. To confirm that UNC6934 binds in the methyl-lysine binding pocket of NSD2-PWWP1, we generated NSD2-PWWP1 with a key aromatic cage mutation (F266A). In contrast to the wild type protein, UNC6934 did not produce a significant thermal stabilization of the NSD2-PWWP1 F266A mutant (Supplementary Fig. 2). Furthermore, UNC6934 is selective for NSD2-PWWP1 over 14 other human PWWP domains as assessed by differential scanning fluorimetry (DSF) (Fig. 2b) and did not inhibit any of a panel of 33 methyltransferase domains, including the H3K36 methyltransferases NSD1, NSD2, NSD3, and SETD2 (Supplementary Fig. 3). UNC7145 was similarly inactive against all PWWP domains and methyltransferases tested (Fig. 2a, Supplementary Fig. 3).

NSD2-PWWP1 is postulated to stabilize the binding of NSD2 on chromatin, primarily through the recognition of H3K36me2 8 . Therefore, we next used an AlphaScreen-based proximity assay to investigate the effect of UNC6934 on the interaction of NSD2 with recombinant semi-synthetic designer nucleosomes (dNucs) 9 . We first evaluated His-tagged NSD2-PWWP1 binding to a range of lysine-methylated semi-synthetic dNucs (me0, me1, me2, and me3 at H3K4, H3K9, H3K27, H3K36, and H4K20), and confirmed that NSD2-PWWP1 binds di- and tri-methyl H3K36 with a preference for the former (Supplementary Fig. 4), as previously reported 8 . UNC6934 disrupted the interaction between NSD2-PWWP1 and nucleosomal H3K36me2 in a dose-dependent manner with an IC 50 of 104 ± 13 nM, while UNC7145 had no measurable effect (Fig. 2c). To determine whether the PWWP1 domain is necessary for mediating the interaction between full-length NSD2 (fl-NSD2) and nucleosomes, we similarly tested the effect of UNC6934 on fl-NSD2 binding to nucleosomal H3K36me2 in the AlphaScreen assay. Unlike with NSD2-PWWP1, we found that UNC6934 was unable to disrupt the interaction between fl-NSD2 and nucleosomal H3K36me2 (Fig. 2d). With H3K36me2 being proximal to nucleosomal DNA, we reasoned that electrostatic interactions between fl-NSD2 and DNA may prevent disengagement of the protein by UNC6934. To test this hypothesis, we repeated the experiment in the presence of an excess of salmon sperm DNA (SSD), a commonly used blocker of non-specific DNA interactions. Under these conditions, UNC6934 could effectively disengage fl-NSD2 from nucleosomal H3K36me2 (IC 50 = 78 ± 29 nM) (Fig. 2e). Together, these results indicate that NSD2 binding to nucleosomes is multivalent, and therefore UNC6934 can disengage fl-NSD2 from H3K36me2-modified nucleosomes only in the presence of excess competitive DNA.

UNC6934 occupies the H3K36me2 binding pocket of NSD2-PWWP1

PWWP domains have both a methyl-lysine-binding pocket and a DNA-binding surface 12 .
To better understand these interactions, we solved crystal structures of NSD2-PWWP1 in complex with UNC6934 (Supplementary Table 1). While DNA binds a basic surface area where the side-chains of K304, K309 and K312 are engaged in direct electrostatic interactions with the DNA phosphate backbone (Fig. 3a, PDB 5VC8), UNC6934 occupies the canonical methyl-lysine-binding pocket adjacent to the DNA binding surface, in an arrangement where the cyclopropyl ring is deeply inserted in the aromatic cage composed of Y233, W236 and F266 (Fig. 3b, c; PDB 6XCG). The extremely tight fit at the aromatic cage rationalizes the lack of binding of the corresponding negative control, UNC7145, where a bulkier isopropyl replaces the cyclopropyl group (Fig. 1b). The non-bonded carbons of an isopropyl group are separated by ~2.5 Å (e.g., PDB 1NA3), while the corresponding atoms are 1.5 Å apart in the cyclopropyl ring of UNC6934. The structure further explains the observed loss of binding of UNC6934 to the F266A NSD2-PWWP1 mutant (Supplementary Fig. 2). Unlike the DNA binding surface, the UNC6934 binding pocket is mildly electronegative, as would be expected for a site accommodating a positively charged methyl-lysine side-chain. Interestingly, the pocket is partly occluded in the apo structure and undergoes a conformational rearrangement of three side-chains (Y233, F266, E272) upon UNC6934 binding (Fig. 3d). Overall, our structural data confirm that UNC6934 competes directly with H3K36me2 for binding to NSD2-PWWP1 to disrupt high-affinity binding to H3K36me2 nucleosomes.

Our structural data also help to explain the exquisite selectivity of UNC6934 for NSD2-PWWP1 over other PWWP domains. Mapping the side-chains positioned within 5 Å of the bound ligand in our co-crystal structure onto a multiple sequence alignment of all human PWWP domains identified the degree of conservation among the binding pocket residues exploited by UNC6934. We find that the binding pocket of NSD3-PWWP1 is by far the closest to that of NSD2-PWWP1, as only three of the side-chains lining the binding pocket are not conserved between the two proteins (Supplementary Fig. 5). In NSD2, G268 is at the bottom of a cavity accommodating the benzoxazinone of UNC6934, and replacing this residue with the serine of NSD3 would be expected to occlude ligand binding. This is consistent with the ability of UNC6934 to stabilize NSD2-PWWP1, but not NSD3-PWWP1 or any of the other PWWP domains, in a thermal shift assay (Fig. 2b). These data strongly suggest that UNC6934 is selective for NSD2-PWWP1 versus any other human PWWP domain. Further supporting the overall selectivity of the compound, UNC6934 and UNC7145 were profiled against a set of 90 central nervous system receptors, channels, and transporters (Supplementary Table 2). Of the targets inhibited by UNC6934 by more than 50% at 10 µM (two in total), the human sodium-dependent serotonin transporter was the only protein inhibited with a measurable inhibitory constant (Ki = 1.4 ± 0.8 µM).

UNC6934 selectively engages NSD2-PWWP1 in cells

To profile ligand selectivity and target engagement in a cellular context, we synthesized UNC7096, a biotin-labeled affinity reagent containing a close analog of UNC6934, for chemical pulldown experiments (Fig. 4a). We first verified that UNC7096 is a high-affinity NSD2-PWWP1 ligand by SPR, measuring a K d of 46 nM, comparable to UNC6934 (Supplementary Fig. 6).
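For readers unfamiliar with how such Kd values are derived, steady-state SPR affinities are typically obtained by fitting a 1:1 binding isotherm to the plateau responses across a concentration series. The sketch below uses synthetic data and hypothetical variable names; it is an illustration of the general method, not the authors' analysis pipeline.

```python
# Sketch: fit a 1:1 steady-state binding isotherm, R_eq = Rmax * C / (Kd + C),
# to SPR plateau responses. Synthetic data; illustrative, not the authors' pipeline.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(conc, rmax, kd):
    return rmax * conc / (kd + conc)

conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500, 1000]) * 1e-9  # molar
rng = np.random.default_rng(0)
r_eq = isotherm(conc, 60.0, 91e-9) + rng.normal(0, 1.0, conc.size)  # RU, Kd = 91 nM

(rmax_fit, kd_fit), _ = curve_fit(isotherm, conc, r_eq, p0=[50.0, 1e-7])
print(f"Rmax = {rmax_fit:.1f} RU, Kd = {kd_fit * 1e9:.0f} nM")  # recovers ~91 nM
```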
UNC7096 efficiently enriched both the NSD2-Short (MMSET I) and NSD2-Long (MMSET II) isoforms from KMS11 whole cell lysates, as determined by western blotting (Fig. 4b). Chemiprecipitation of NSD2 by UNC7096 could also be blocked by pre-incubation of KMS11 lysates with 20 µM UNC6934, but not with the negative control compound UNC7145 (Fig. 4b). Label-free proteomic analysis of the UNC7096 pulldown experiments identified NSD2 as the only protein significantly depleted by competition with UNC6934 (Fig. 4c), whereas pre-incubation of lysates with UNC7145 did not considerably alter the enrichment profile.

Perturbation of NSD2's chromatin binding domains promotes localization to the nucleolus

Reader domains are critical for the recruitment and positioning of epigenetic proteins at defined loci across the genome, and small molecule antagonism of reader domains is known to alter the localization of chromatin-associated target proteins 21,22 . We therefore reasoned that PWWP1 antagonism by UNC6934 may affect NSD2 localization within the nucleus. To test this hypothesis, we used confocal microscopy to evaluate the localization of endogenous NSD2 in U2OS cells treated for four hours with 5 µM UNC6934 or the negative control UNC7145 (Supplementary Fig. 8). Upon treatment with UNC6934, we observed an increase in NSD2 signal within nucleoli, the sub-nuclear membrane-less organelles that house the ribosomal DNA for pre-ribosomal RNA transcription and processing. Interestingly, t(4;14) chromosome translocations in multiple myeloma, which juxtapose the IgH enhancer and NSD2, promoting overexpression of NSD2, can result in truncation/inactivation of PWWP1 and nucleolar enrichment, suggesting that the PWWP1 domain contributes to the exclusion of NSD2 from the nucleolus 15,16 . Therefore, UNC6934 appears to phenocopy NSD2 N-terminal PWWP1 truncations, while UNC7145 has no effect. To validate this result, we repeated the imaging experiments while co-staining for the nucleolar marker fibrillarin and measured the extent of NSD2 and fibrillarin co-localization using Pearson correlation (Fig. 5a, b). The Pearson correlation coefficient (PCC) is a common statistic used to describe co-localization; it measures the correlation in signal intensity between two fluorescent molecules (values range from -1 to 1, representing perfect anti-correlation and perfect correlation, respectively, with 0 indicating no correlation). We observed a significant increase in the correlation between NSD2 and fibrillarin signal in response to UNC6934, confirming an increase in the nucleolar localization of NSD2. These data support the engagement of endogenous NSD2 by UNC6934 in cells and suggest that loss of H3K36me2 binding by NSD2-PWWP1 promotes nucleolar accumulation of NSD2. While we found no significant effect on ribosomal RNA transcription in response to UNC6934 (Supplementary Fig. 9), we do observe a steady-state pool of nucleolar NSD2 that is sensitive to RNA polymerase I inhibition (actinomycin D; 50 nM) or genotoxic agents (doxorubicin; 1 µM), conditions known to significantly alter the protein composition of nucleoli 23,24 (Supplementary Fig. 10). Overall, these results indicate that UNC6934-mediated antagonism of PWWP1 leads to accumulation of endogenous NSD2 in the nucleolus.

To test whether subnuclear localization is exclusively mediated by the PWWP1 domain, we mutated GFP-tagged NSD2 at several critical sites within distinct chromatin-recruitment modules.
This included an N-terminal short linear motif that engages BET proteins (K125A) 21,25 , two key aromatic cage residues in the PWWP1 domain (W236A and F266A) 12 , two PHD2 mutants that disrupt recruitment to target loci and disable H3K36me2 methyltransferase activity in cells (H762R and H762Y) 14 , a presumptive inactivating aromatic cage mutation in PWWP2 (W894A), and a catalytic-dead mutant of the methyltransferase domain (Y1092A) 1 . We found that disruption of any one of the canonical reader domains (PWWP1, PHD2, and PWWP2) promoted enrichment of NSD2 in nucleoli (Fig. 6a, b). These observations suggest that it is the loss of chromatin binding that leads to the nucleolar retention of NSD2, and that multiple NSD2 reader domains cooperate to maintain appropriate nuclear sublocalization.

(Fig. 6c-e: (c) Domain cooperativity assessed by treating NSD2-GFP point mutants with DMSO control, 5 µM UNC7145 or 5 µM UNC6934 and measuring co-localization by PCC (n=5; significant p-values from a Welch's unpaired t-test against the DMSO control for each panel, in order: ** = 0.0059, **** = 8.7 x 10 -5 , ** = 0.0062). (d) Computational prediction of nucleolar localization sequences in NSD2 using the Nucleolar localization sequence Detector (NoD) algorithm 26 . (e) Representative fluorescent images of cells expressing GFP tagged with putative nucleolar localization sequences from NoD.)

We next used UNC6934 to test the cooperativity of the NSD2 reader domains towards nucleolar localization. To do so, we treated cells transfected with RFP-fibrillarin and GFP-tagged NSD2 (wild-type, W236A, H762R, or W894A) for four hours with 5 µM UNC6934 or UNC7145. In cells expressing wild-type NSD2-GFP we observed an increase in nucleolar signal upon treatment with UNC6934. Importantly, while the PWWP1 aromatic cage mutant (W236A) had a higher baseline nucleolar localization compared to WT, no further increase was observed upon UNC6934 treatment, again supporting PWWP1-dependent activity of the probe (Fig. 6c). However, in cells expressing the PHD2 and PWWP2 mutants, we observed an additive effect with UNC6934 treatment: in addition to higher baseline nucleolar localization due to their respective mutations, there was a further increase in nucleolar co-localization upon UNC6934 treatment compared to both the DMSO control and the negative control UNC7145. These observations support a model in which NSD2 reader domains act cooperatively in recruiting NSD2 to chromatin and preventing nucleolar sequestration. Antagonizing the interaction between NSD2-PWWP1 and H3K36me2 may therefore not be sufficient to fully disengage full-length NSD2 from chromatin. Indeed, we observed only a modest release of full-length NSD2 from chromatin upon treatment with UNC6934 in cell fractionation experiments, as well as no changes in global H3K36me2 levels or in the proliferation of KMS11 t(4;14) multiple myeloma cells grown on bone marrow stroma in response to UNC6934 (Supplementary Fig. 11). To further define the features of NSD2 that drive its nucleolar targeting, we used NoD (Nucleolar localization sequence Detector) 27 to computationally predict putative nucleolar localization sequences (NoLS) within NSD2 (Fig. 6d). Of the three NoLSs predicted with high confidence, we found that two were arginine-rich sequences within the C-terminus that could robustly target GFP to nucleoli (Fig. 6e).
These results suggest a competition between chromatin reader domains and NoLS sequences, and support a model in which perturbation of NSD2 chromatin-binding modules enables the activity of the C-terminal NoLSs to dominate, leading to nucleolar accumulation.

Discussion

Here we describe the discovery of UNC6934, the first chemical probe to target NSD2, a potent and well-characterized driver of both hematological malignancies and solid tumours. UNC6934 follows the recent discovery of BI-9321, which targets the closely related PWWP1 domain of NSD3 18 . UNC6934 and BI-9321 are chemically distinct and selective, establishing PWWP domains as tractable targets for future chemical biology and drug discovery efforts. Additionally, we found no significant off-targets by chemical proteomics and in vitro screening of functionally relevant proteins, including a large number of human PWWP domains, methyltransferases, and membrane proteins. Further demonstration of cellular activity and specific target engagement is evident from the robust changes in endogenous NSD2 localization in response to PWWP1 antagonism by UNC6934. We also show limited cytotoxicity of UNC6934 and its negative control counterpart UNC7145, signifying their suitability for cell biology experiments exploring the function of the NSD2 PWWP1 reader domain.

Nucleoli are dynamic membrane-less nuclear structures that not only act as the site of ribosomal RNA transcription and ribosome pre-assembly, but also serve as integral organizational hubs in the regulation of many non-canonical functions 31 . It is now widely appreciated that the shuttling of proteins between the nucleolus and nucleoplasm is a critical feature of nuclear biology, regulating many processes, including stress response, DNA repair, recombination, and transcription 31-33 . Here we define active nucleolar localization sequences in the NSD2 C-terminus, which likely drive nucleolar sequestration of NSD2 in response to modulation of its chromatin-binding domains. Altering the balance of nucleoplasmic versus nucleolar NSD2 through PWWP1 antagonism by UNC6934 did not have a significant effect on ribosomal RNA transcription or global levels of H3K36me2, suggesting instead that these features may provide a mechanism to rapidly tune the sub-nuclear localization of NSD2 in response to stimuli. Supporting this idea, a recent report showed that epigenetic proteins, including NSD2, are sequestered within the nucleolus in response to heat shock stress as a mechanism for subsequent rapid recovery and epigenome maintenance (data from Azkanaz et al. highlighting NSD2 shown as Supplementary Fig. 12) 34 . Given our observations of a steady-state pool of nucleolar NSD2, the question remains how the balance and control of nucleolar-nucleoplasmic NSD2 levels may influence NSD2 function in normal and disease biology. Cytoplasmic localization of an NSD2 variant lacking PWWP1 and the first three PHD domains was also reported 14 , suggesting that the sub-cellular compartmentalization of NSD2 is fine-tuned by its reader domains, which could be achieved by masking sub-cellular localization sequences or by engagement of compartment-specific substrates. Our data highlight the multivalent nature of NSD2 recruitment to chromatin, whereby NSD2 chromatin reader domains and DNA binding interfaces act cooperatively to coordinate its activity on chromatin. These findings highlight the utility of UNC6934 as a tool to interrogate the contributions of NSD2-PWWP1 in the interplay between reader domains.
Finally, because of its role in multiple myeloma and other cancers, NSD2 has long been a drug target of interest, but despite much community effort there is no selective, cell-active inhibitor of its catalytic activity. UNC6934 provides a clear starting point for the development of bifunctional molecules, like PROTACs, able to induce proteasomal degradation of NSD2 to antagonize its function in disease.

Acknowledgements

The Structural Genomics Consortium is a registered charity (no. 1097737) that receives funds from a number of organizations.

Competing interests

EpiCypher is a commercial developer and supplier of reagents and platforms used in this study: recombinant semi-synthetic modified nucleosomes (dNucs) and the dCypher® binding assay.

Expression and purification of biotinylated NSD2-PWWP1

Construct and expression: a DNA fragment encoding human NSD2 (residues 208-368) was amplified by PCR and sub-cloned into the p28BIOH-LIC vector, downstream of an AviTag and upstream of a poly-histidine coding region.

Molecular docking

The X-ray structure of the PWWP domain of NSD2 in complex with 3f (PDB ID: 6UE6) was prepared with PrepWizard (Schrodinger, New York) using the standard protocol, including the addition of hydrogens, the assignment of bond orders, assessment of the correct protonation states, and a restrained minimization using the OPLS3 force field. Receptor grids were calculated at the centroid of the ligand with the option to dock ligands of similar size, and a hydrogen bonding constraint with the backbone of A270 was defined. Over 6,000 commercially available chemical analogs of 3f were prepared with LigPrep (Schrodinger, New York). The resulting library was then docked using Glide SP (Schrodinger, New York) with default settings. The core docking option was also turned on, to allow only ligand poses with their core aligned within 1.0 Å of the reference core (the cyclopropyl and the amide group of 3f). Only 448 compounds fitted and were ranked by Glide. Finally, after a visual inspection, 20 compounds were ordered. Optimization and SAR leading from MRT866 to UNC6934 was guided by free energy perturbation and will be presented elsewhere.

dCypher binding assays

Recombinant semi-synthetic designer nucleosomes (dNucs) were from EpiCypher. Two phases of dCypher® testing on the PerkinElmer AlphaScreen® platform were performed as previously described 9 .

Selectivity assays

Selectivity of UNC6934 for NSD2-PWWP1 over 14 other PWWP domains was tested using differential scanning fluorimetry (DSF) as previously described 35 . The diffraction data for NSD2-PWWP1 + UNC6934 were collected at 100 K on a home-source Rigaku FR-E SuperBright, and the data set was processed using the HKL-3000 suite 38,46 .

Cell culture

Cell lines were cultured according to standard aseptic mammalian tissue culture protocols in 5% CO 2 . Proteomics data were normalized by variance stabilizing normalization and tested for differential enrichment relative to pulldowns competed with DMSO vehicle control.

Fluorescence microscopy

For immunofluorescence, cells were fixed with 2% formaldehyde in 1x phosphate buffered saline (PBS) for 10 minutes at room temperature, followed by 3 washes in 1x PBS and permeabilization with 0.25% detergent. For confocal microscopy, images were acquired with a Quorum spinning disk confocal microscope equipped with 405, 491, 561, and 642 nm lasers (Zeiss) and processed with Volocity software (Perkin Elmer) and ImageJ. For localization measurements of fluorescent fusion proteins, images were acquired with an EVOS™ FL Auto 2 Imaging System (Thermo Scientific™ Invitrogen™).
Co-localization measurements were quantified using a custom CellProfiler (v3.1.9) 47 analysis pipeline.

Western blotting for global H3K36me2

Cells were treated with compound for 72 hours before harvesting by centrifugation at 300 x g for 5 min. Proliferation was monitored by counting GFP-expressing cells over time using an IncuCyte live-cell imaging and analysis platform (Sartorius).

5-ethynyl uridine (5-EU) incorporation assay

5-EU incorporation assays to measure changes in nucleolar transcription were performed as previously described.

General chemistry procedures

Reactions were carried out using conventional glassware. All reagents and solvents were used as received unless otherwise stated. Reagents were of 95% purity or greater, and solvents were reagent grade unless otherwise stated. Any anhydrous solvents used were purchased as "anhydrous" grade and used without further drying. "Room" or ambient temperature varied between 20-25 °C. Analytical thin layer chromatography (TLC) was carried out using glass plates pre-coated with silica gel (Merck) impregnated with fluorescent indicator (254 nm). TLC plates were visualized by illumination with a 254 nm UV lamp. Analytical LCMS data for all compounds were acquired using an Agilent 1260 Infinity II system with the UV detector set to 254 nm. Samples were injected (<10 µL) onto an Agilent ZORBAX Eclipse Plus C18 column.

To a 50 mL flask equipped with a stir bar was added methyl 4-formylbenzoate (1.0 g, 1 Eq, 6.1 mmol) and methanol (10 mL), followed by cyclopropylamine (0.35 g, 0.43 mL, 1 Eq, 6.1 mmol). The flask was capped and stirred at room temperature overnight. The next day, the flask was cooled in an ice water bath and sodium borohydride (0.46 g, 2 Eq, 12 mmol) was added portionwise. Borohydride addition was accompanied by effervescence and heating of the solution. After 4 hours, at which time the reaction had come to room temperature, the reaction was quenched by addition of saturated sodium bicarbonate and extracted three times with ethyl acetate. The combined organic layers were washed once more with saturated sodium bicarbonate, once with brine, then dried over sodium sulfate and concentrated to an oil. Normal phase chromatography over silica (0-100% ethyl acetate in hexanes) provided the free base as a colorless free-flowing oil. The oil was dissolved in 25 mL of diethyl ether and cooled in an ice water bath, and trifluoroacetic acid (1.2 g, 0.80 mL, 1.7 Eq, 10 mmol) was added dropwise. A voluminous white solid formed, which was filtered and washed rigorously with diethyl ether to provide S1-P.

To a scintillation vial was added S3-P (18 mg, 1 Eq, 50 µmol), tert-butyl 4-aminobenzoate (19 mg, 2 Eq, 0.10 mmol), EDC (19 mg, 2 Eq, 0.10 mmol), DMAP (12 mg, 2 Eq, 0.10 mmol), and DMF (0.2 mL). The reaction was heated to 50 °C and stirred overnight. The next day, the reaction was partitioned between water and ethyl acetate. The layers were separated, and the aqueous layer was extracted twice more with ethyl acetate. The combined organic layers were washed twice with water, once with saturated sodium bicarbonate, and once with brine, then dried over sodium sulfate and concentrated to an off-white residue. The next day, the reaction was diluted with distilled water and purified directly by reverse phase chromatography (10-100% methanol in water + 0.1% TFA) and lyophilized to provide UNC7096 (6.51 mg, 5.26 µmol, 81%) as a white hygroscopic solid.
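Returning to the co-localization quantification noted at the start of this section: the PCC used throughout the imaging analysis reduces to a simple masked computation over the two channel intensities. The sketch below, on synthetic data, is illustrative only and is not the custom CellProfiler pipeline used in the study.

```python
# Sketch: Pearson correlation coefficient (PCC) between two fluorescence channels
# (e.g., NSD2-GFP and RFP-fibrillarin) within a nuclear mask. Illustrative only;
# the study used a custom CellProfiler pipeline.
import numpy as np

def pearson_colocalization(ch1, ch2, mask):
    """PCC over pixels where mask is True; ranges from -1 to 1, 0 = no correlation."""
    a = ch1[mask].astype(float)
    b = ch2[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())

# Toy example: two correlated random images and a circular "nucleus" mask.
rng = np.random.default_rng(1)
ch1 = rng.random((64, 64))
ch2 = 0.8 * ch1 + 0.2 * rng.random((64, 64))
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 24**2
print(pearson_colocalization(ch1, ch2, mask))  # close to 1 for co-localized signals
```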
Data availability

The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE 50 partner repository with the dataset identifier PXD017641. The structures of NSD2-PWWP1 in complex with MRT866 and UNC6934 were deposited to the Protein Data Bank with accession numbers 7LMT and 6XCG, respectively.
Heat Transfer Enhancement by Shot Peening of Stainless Steel

In heat exchange applications, the heat transfer efficiency can be improved by surface modifications. Shot peening is one of the most cost-effective methods of producing different surface roughness. The objectives of this study were (1) to investigate the influence of surface roughness on heat transfer performance and (2) to understand how the shot peening process parameters affect the surface roughness. The considered specimens were 316L stainless steel hollow tubes having smooth and rough surfaces. Computational fluid dynamics (CFD) simulation was used to observe the surface roughness effects. The CFD results showed that the convective heat transfer coefficients had linear relationships with the peak surface roughness (Rz). Finite element (FE) simulation was used to determine the effects of the shot peening process parameters. The FE results showed that the surface roughness increased at higher sandblasting speeds and sand diameters.

Introduction

Stainless steels have been used in a vast range of applications covering the construction, transportation, medical, nuclear, and chemical industries due to their excellent properties [1]. Mainly because of its ability to resist corrosion, this material has long been used in virtually all cooling waters and many chemical environments [2]. One of the most common uses of stainless steel is as a heat exchanger, because it works well in high-temperature conditions (resistance to corrosion, oxidation, and scaling). Generally, stainless steel surfaces are also easy to clean. Most importantly, this material is economical in terms of cost and long-term maintenance service. The heat transfer efficiency of exchangers can be improved by modifying the geometry of the heat exchange tube/plate and altering fluid flow patterns. Recently, surface modification or texturing has shown its potential in many technological developments, such as friction reduction [3-5], biofouling [6,7], artificial parts [8], and even stem cell research [9]. As a result, the trend of using surface modification/texturing for heat transfer enhancement has been on the rise, and some of the reviewed literature is presented below.

The influences of dimple/protrusion surfaces on heat transfer were investigated by Jing et al. (2018) [10]. Chen et al. (2012) found that an asymmetric dimple with skewness downstream was better than the symmetric shape in heat exchange [11]. Asymmetric flow structures were numerically evaluated by Turnow et al. (2018), and the heat transfer was found to be improved with the asymmetric vortex structures [12]. Du et al. (2018) discovered that the dimple location significantly affected the flow structure and heat transfer [13]. Enhanced heat transfer by dimples was also found in the work of Zheng et al. (2018) [14]. A bleed hole in a dimpled channel was observed to help improve the heat transfer [15], and the 3D turbulent flow and convective heat transfer of a dimpled tube were studied numerically [16]. Both dimples and protrusions were investigated by the same research group, who found that both dimpled and protruded surfaces promoted flow mixing, which provided a better heat transfer rate [17]. The authors went on to study the effect of a teardrop surface and again found flow mixing improvement [18]. Some other dimple and surface-texture shapes for heat transfer enhancement in various conditions can also be found in many research studies [19-21].
Looking at the effects of surface roughness on heat transfer, many studies have shown similar findings. Dierich and Nikrityuk (2013) observed that the roughness influenced the surface-averaged Nusselt number, and the authors also introduced a heat transfer efficiency factor [22]. Pike-Wilson and Karayiannis (2014) did not find a clear relationship between the heat transfer coefficient and surface roughness [23]. Ventola et al. (2014) proposed a heat transfer model taking into account the size of the surface roughness and turbulent fluid flow [24]. Tikadar et al. (2018) looked at heat transfer characteristics in a roughened heater rod and found an abrupt increase in the heat transfer coefficient at the transition region from the smooth to the rough surface area [25].

Several manufacturing processes can create surface textures on metals; for instance, laser texturing [26], rolling [27], elliptical vibration texturing [28], and extrusion forging and extrusion rolling processes [29]. Shot peening has been widely used to create surface irregularities (surface roughness) on metal parts [30]. Most of the research on the shot peening of stainless steels has focused on residual stress [31], fatigue and corrosion [32,33], surface characteristics [34,35], and tribology [36-38]. The main interest of this research is to determine the heat transfer efficiency of a shot-peened surface on a stainless steel tube. Most of the literature mentioned above [10-25] used the finite volume method (FVM) to determine the heat transfer characteristics of textured surfaces. In addition, many studies utilized the finite element method (FEM) to understand the effects of shot peening process parameters [39-47]. The FVM was used in this study to analyze the heat convection performance of the considered shot-peened surface in comparison with a smooth one. Then, the FEM was used to predict the shot peening parameters that would provide the enhanced heat transfer surface.

Heat Transfer of Pinned Fins

Since the heat exchanger of interest in this study was a 316L stainless steel tube, the heat transfer testing apparatus shown in Figure 1 was used. Table 1 presents the material properties of the considered tube. The primary purpose of the heat transfer experiment was to determine the heat convection performance of two tube surfaces: a smooth surface and a rough surface. The fin specimens were hollow tubes 21.34 mm in diameter, 100.00 mm in length, and 2.87 mm in thickness. Note that both ends of each fin were covered with thin circular plates. The smooth surfaces were prepared by machining and polishing to obtain a peak surface roughness (Rz) of 0.015 µm. The rough surfaces were prepared by machining and shot peening to obtain a peak surface roughness (Rz) of 25 µm, using a sand diameter of 350 µm. In each test, four fins were pinned to the heater. The fan was installed below the heater at the Air Inlet location. Three thermocouples were attached at the following locations: Inlet Temperature (Tin), Mid-Point Temperature (Tmid), and Outlet Temperature (Tout) at the Air Outlet. N-type thermocouples with ±1% accuracy over the measured temperature span were used to record the temperatures. The temperature of the tested fins could be varied by changing the Heat Input.
Once the desired temperature of the fins was set, the fan was turned on to provide the airflow passing the heated fins. The carried heat would flow past both Tmid and Tout, and the measured temperatures would be used to calculate the heat convection performance of the tube surfaces.

The finite volume method (FVM) was applied in this research to evaluate and predict the heat convection performance of the various surfaces. The main components and dimensions of the FVM simulation are illustrated in Figure 2. The descriptions, symbols, and units of the necessary parameters used in this research are presented in Table 2.

An equation for the convective heat transfer coefficient (hconv) must be developed to evaluate the heat transfer performance of the different surfaces. The derivation follows the energy balance between the heater, the fins, and the air. The total air mass flow follows from the air density, air velocity, and flow cross-section; the power of the heater is described by its input; and, since Tin = Tair, the temperature of the heater is obtained from the energy balance. The heat transfer rate carried by the air is proportional to the total air mass flow and the temperature rise (Tout − Tin), and the convective heat transfer coefficient then follows from Newton's law of cooling. Collecting the fixed quantities into a new constant C, the convective heat transfer coefficient can be rewritten in terms of the measured temperatures. In addition, Qin can be obtained from the heater power integrated over the heating time, ∫Qheater dt ≈ Qavg theat (9). As a result, the convective heat transfer coefficient of the pinned fins can be determined from Eq. (10). Note that the C value can be calculated using the variable values in Table 3. The variables Qavg, Vair, theat, and Tin are input variables. In addition, Tout = Tmid if the measured temperature point is at the Mid-Point Temperature (Tmid).

Computational Fluid Dynamics (CFD) Modeling

The common tool used with the FVM to analyze heat transfer performance is computational fluid dynamics (CFD). ANSYS CFX 2020 was the commercial software used to evaluate the heat transfer characteristics in this study, as shown in Figure 3. Although there were four fins in the experiments, only the two fins of the right half were modeled, because of the symmetric fluid flow along the horizontal direction. The half-geometry was modeled in two dimensions (Half 2D), and the surface roughness along the circumference of each fin was modeled as shown. Figure 4 illustrates the enlarged cross-sectional view of the circumference, which represented the actual surface roughness values.
The total number of meshing elements was in the 10 million range to accurately capture the fidelity of the surface roughness variations. The k-epsilon turbulence model was used to provide the airflow characteristics, the total energy assumption was adopted, and a transient analysis was performed.

In the actual heat transfer experiment, the heater was turned on for 20 min (theat) to establish a uniform initial temperature of the fins. Afterward, the fan was turned on according to the set Air Velocity (Vair), and the temperatures at Tin, Tmid, and Tout were recorded every minute over a 10-min period. The same process was set up in the CFD modeling, and the validation conditions of the CFD model are presented in Table 4. The average values of the recorded temperatures were calculated and compared to determine the validity of the CFD model.

Convective Heat Transfer Coefficients of Different Surface Roughness

The prediction of the convective heat transfer coefficients (hconv) of different surface roughness was then carried out using the validated CFD model. The primary factors considered here were the surface roughness (Rz) and the airflow speed (Vair). Table 5 presents the considered conditions for the prediction of the convective heat transfer coefficients. The CFD results were then used to calculate the hconv value for each condition.
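Since the paper's display equations for the energy balance are summarized in prose above, the following sketch shows one plausible way to compute hconv from the measured quantities. The variable names and the duct cross-section a_duct are illustrative assumptions, not the paper's exact symbols.

```python
# Sketch: convective heat transfer coefficient from measured temperatures,
# following a standard energy-balance reading of Eqs. (1)-(10). The duct
# cross-section a_duct and fin surface area a_fins are assumed inputs, not
# the paper's exact symbols.
RHO_AIR = 1.184   # kg/m^3, air density near room temperature (assumed)
CP_AIR = 1007.0   # J/(kg K), specific heat of air (assumed)

def h_conv(t_in, t_out, t_heater, v_air, a_duct, a_fins):
    """Convective heat transfer coefficient, W/(m^2 K)."""
    m_dot = RHO_AIR * v_air * a_duct             # air mass flow, kg/s
    q_dot = m_dot * CP_AIR * (t_out - t_in)      # heat carried by the air, W
    return q_dot / (a_fins * (t_heater - t_in))  # Newton's law of cooling

# Example with illustrative numbers (not measured values from the paper):
print(h_conv(t_in=25.0, t_out=28.0, t_heater=80.0, v_air=1.21,
             a_duct=0.01, a_fins=4 * 3.1416 * 0.02134 * 0.100))
```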
Shot Peening Finite Element (FE) Simulation

Once the relationships between the surface roughness and the heat transfer characteristics of 316L stainless steel were established, the tube surfaces had to be modified to provide precisely the desired surface roughness. Nevertheless, controlling the shot peening process to achieve such precision is generally difficult, because the influences of the process parameters are not well understood and quantified, particularly on tube (curved) surfaces. As a result, an FE simulation of the shot peening process was carried out to observe the effects of Sand Diameter (DS), Impact Angle (θ), and Impact Velocity (VI), as displayed in Figure 5. The commercial MSC.DYTRAN 2019 code was used in the FE analysis. The tube surface was modeled as a 3D hexahedral mesh (587,664 nodes and 566,272 elements). The geometry of the tube was based on the pinned fin from the heat transfer experiment; note that the tube surface was initially smooth. The tube model was deformable, with the properties shown in Table 1 and an elastic-plastic material model. The shot peening sand was modeled as a rigid circular (ball) shape having a 2D quadrilateral mesh (4648 nodes and 4646 elements). Initially, a sand ball was located on top of the tube surface. The ball was then blasted at a set impact angle and impact velocity, after which the sand ball was removed, revealing a deformed (dimpled) surface on the tube. Note also that this study considered only a single-shot sandblasting impact to determine the influences of the shot peening parameters. Table 6 presents the range and values of the considered shot peening parameters.
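The FE deformation fields are post-processed into Rz values, as described next. A minimal sketch of one plausible extraction is given below; the five-peak/five-valley averaging follows a common ISO-style definition of Rz and is an assumption, since the paper does not state its exact evaluation procedure.

```python
# Sketch: estimate peak surface roughness Rz from a radial-deformation profile.
# Uses the common five-highest-peaks / five-deepest-valleys definition; the
# paper does not specify its exact Rz evaluation, so this is an assumption.
import numpy as np

def rz_from_profile(z, n=5):
    """z: surface heights relative to the mean line, along the sampling length."""
    z = np.asarray(z, dtype=float) - np.mean(z)
    peaks = np.sort(z)[-n:]   # n highest points
    valleys = np.sort(z)[:n]  # n lowest points
    return peaks.mean() - valleys.mean()

# Toy profile: a dimple-like depression on an otherwise smooth surface (microns).
x = np.linspace(0, 1, 500)
profile = -25.0 * np.exp(-((x - 0.5) / 0.05) ** 2)  # 25 um deep dimple
print(rz_from_profile(profile))  # on the order of the dimple depth
```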
Effects of Surface Roughness on Heat Convection

The results of the CFD model validation against the heat transfer experiment are presented in Figure 6. Note that the results of the CFD calculations were mesh-independent. Figure 6a shows the temperature-velocity plot at T_out for the experimental and simulated surfaces. The errors between the experiments and simulations at T_out ranged from 1% to 7% (Figure 6b). The temperature-velocity plot at T_mid is displayed in Figure 6c, and the corresponding errors at T_mid in Figure 6d. The highest error value at T_mid was approximately 3%. Considering the error values at both T_out and T_mid, the CFD model was considered valid in this study.

In Figure 7, the temperature contour maps of the smooth and rough surfaces at varying flow speeds are shown. At V = 0 m/s, the condition was considered natural or free convection, because the fan was not running and the heated airflow passed the measured temperature points naturally. At air velocities of 1.21 or 2.42 m/s, the conditions were forced convection. In the free convection conditions, the rough surfaces produced higher temperatures at both T_mid and T_out than the smooth surfaces did. In the forced convection scenarios, the temperatures at T_mid and T_out of the smooth surfaces dropped significantly in comparison to those of the rough surfaces. These results implied that the rough surfaces provided better heat transfer performance. The velocity contour maps in Figure 8 could also be used to explain the phenomena. In the natural convection conditions (no influence of air velocity), the heated airflows over the rough surfaces were already higher than those over the smooth surfaces, leading to the higher temperatures at T_mid and T_out. The main reason was the increased surface area of the pinned fins, which allowed more air to exchange heat. Under the influence of air velocity (forced convection), the increase in airflow speed generally led to reduced temperatures, which was clearly noticeable for the smooth surfaces. At higher airflow velocities, the small vortices (air swirling) around the dimpled areas accumulated into a large swirl on the downstream side of the rough surfaces. This large air circulation area (vortex shedding) had a lower air velocity, allowing more air to exchange heat with the fins. As a result, more airflow could carry heat from the rough surfaces to the measured temperature points. On the contrary, the smooth surfaces did not develop the large vortex shedding area downstream; only a fraction of low-velocity airflow was generated downstream of the smooth fins. As a result, less air was available to exchange heat at high airflow velocities.

The predicted temperatures and the convective heat transfer coefficients (h_conv) for varying surface roughness are presented in Figure 9. The CFD results for the predicted temperatures were used to calculate the h_conv of each condition.
The higher the value of h_conv, the higher the heat transfer efficiency. According to the figure, the h_conv values of the smooth surfaces were equal to zero in the natural convection cases. In the forced convection cases, the h_conv values increased with airflow velocity and surface roughness. Comparing the h_conv values between T_mid and T_out, the values at T_out were lower, since the measured location was further away from the heat source. Increasing the airflow velocity from 1.21 to 2.42 m/s caused the h_conv values to double. If the surface roughness was increased up to 250 µm, h_conv could be increased by up to 155% at V = 1.21 m/s and 192% at V = 2.42 m/s. Most importantly, when the rough surface conditions were considered (R_z = 25 to 250 µm), linear relationships between the h_conv and R_z values were observed. Although the h_conv values increased linearly with R_z, the surface roughness could not be increased without limit to enhance the heat transfer efficiency further; the most critical issues were the manufacturability of such high-depth surfaces and the heat-transfer limits. In this study, the ratio of surface roughness (R_z = 250 µm) to pipe diameter (21.34 mm), or R_SD, was approximately 0.01. In the forced convection conditions, the ratio of the convective heat transfer coefficients between rough and smooth surfaces, h_conv-RS, ranged from 1 to 2.9. Thus, the ratio between h_conv-RS and R_SD ranged from 100 to 290, which could be used for comparison with other tubes having different diameters and surfaces.

The Reynolds (Re) numbers of the conditions considered in this study, used to characterize the fluid behavior, were calculated by Equation (11):

Re = d_air V_air D / ν_air, (11)

where d_air is the density of air, V_air is the velocity of air, D is the tube diameter, and ν_air is the dynamic viscosity of air. The Re numbers of the investigated surfaces at various airflow speeds are shown in Figure 10. In the forced convection conditions, higher surface roughness led to higher friction and higher pressure loss (a more turbulent flow), which generally reduced the Re numbers. In the free convection conditions, the increased surface roughness did not affect the Re numbers. Note that the Re numbers at 250 µm were not zero (approximately 32).
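For reference, Equation (11) can be evaluated directly; the air properties below are assumed room-temperature values (the text does not list them), while the diameter and velocities are those of the study.

```python
def reynolds(d_air: float, v_air: float, diameter: float, nu_air: float) -> float:
    """Equation (11): Re = d_air * V_air * D / nu_air."""
    return d_air * v_air * diameter / nu_air

D_AIR = 1.18       # kg/m^3, air density (assumed, ~25 C)
NU_AIR = 1.85e-5   # Pa*s, dynamic viscosity of air (assumed)
D_TUBE = 21.34e-3  # m, pipe diameter quoted in the text

for v in (1.21, 2.42):  # forced-convection air velocities of the study
    print(f"V_air = {v:.2f} m/s -> Re = {reynolds(D_AIR, v, D_TUBE, NU_AIR):,.0f}")
```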
Effects of Shot Peening Parameters on Surface Roughness

The effects of the shot peening parameters on the surface roughness are presented in Figure 11. Note that the results of the FE calculations were mesh-independent. The R_z values are plotted against the impact angles at various impact velocities in Figure 11a,b, and the R_a values in Figure 11c,d. The FE results demonstrated that increasing the sand diameter and impact velocity increased both the R_a and R_z values in all cases. Larger sand diameters and higher blasting speeds carried higher momentum, leading to higher impact energy and larger deformed (dimpled) areas. However, increasing the impact angle reduced the impact area and thus decreased the R_a and R_z values. Based on the results, it could be noticed that using sand diameters from 100 µm up to 350 µm produced R_a and R_z values ranging from 1 µm up to 18 µm, which would increase the h_conv values by up to 50%, depending on the airflow velocity. However, sand diameters larger than 200 µm would be recommended, due to their considerably larger impact on the dimpled areas. If only small sand diameters are available, increasing the sandblasting speed would help increase the surface roughness values.
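The reported linear h_conv vs. R_z relationship suggests a simple least-squares calibration from CFD results to any target roughness; the (R_z, h_conv) pairs below are invented placeholders that only illustrate the fitting procedure, not the paper's data.

```python
import numpy as np

# Hypothetical (Rz [um], h_conv [W/m^2 K]) pairs in the rough-surface regime (Rz = 25-250 um)
rz = np.array([25.0, 50.0, 100.0, 150.0, 200.0, 250.0])
h_conv = np.array([21.0, 24.0, 30.0, 36.0, 41.0, 47.0])

slope, intercept = np.polyfit(rz, h_conv, deg=1)  # linear model: h_conv = a*Rz + b
print(f"h_conv ~ {slope:.3f} * Rz + {intercept:.1f}")

# Predict the coefficient for a target roughness, e.g. a shot-peened Rz of 18 um
print(f"predicted h_conv at Rz = 18 um: {slope * 18.0 + intercept:.1f} W/m^2 K")
```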
Discussion

The results of this work were divided into two parts: (1) the effects of the surface roughness on the heat transfer efficiency (convective heat transfer coefficient), and (2) the effects of the shot peening parameters on the surface roughness. Together, these two parts connect the actual performance of the heat exchange surface to the processing parameters, a connection that had not been well established for heat exchanger manufacturers before this study. The linear relationship found between the convective heat transfer coefficient and the peak surface roughness (R_z) provides a general guideline for how a heat exchanger surface can be developed to meet higher performance requirements. The effects of the shot peening process parameters on the R_z values are helpful in determining manufacturability and production costs. In summary, this research work provides a clear link between the desired heat transfer performance and surface dimensional control and processing. Since the shot peening process considered in this study was single-shot sandblasting, it was intended only to observe the primary influences of the considered factors. The multiple-shot sandblasting process, investigating mixed sand diameters at controlled speeds, is currently being studied. The results of this future work should be more applicable to actual shot peening processes used to obtain desired surface roughness values, particularly on tube profiles. The ultimate impact of this continuing research would be greater benefits for energy-saving systems.

Conclusions

This project investigated the effects of surface roughness on the convective heat transfer coefficients of 316L stainless steel tubes, and the effects of the shot peening parameters on the surface roughness. A CFD simulation was carried out to study the influence of the surface roughness at varying airflow velocities. In the free convection conditions, the pinned fins' increased surface areas allowed more air to exchange heat. In the forced convection conditions, more airflow could carry heat from the rough surfaces to the measured temperature points. Linear relationships between the peak surface roughness (R_z) and the convective heat transfer coefficients were found. The influences of the shot peening process parameters (sand diameter, impact angle, and impact velocity) on the tube surfaces were investigated using FE simulation. The results showed that increasing the sand diameter and impact velocity increased the surface roughness. An increase in the convective heat transfer coefficient of up to 50% could be obtained by using sand diameters from 100 µm up to 350 µm, resulting in shot-peened surfaces with R_a and R_z values ranging from 1 µm up to 18 µm. The links established among the heat transfer efficiency, surface roughness, and shot peening parameters in this study can be used to enhance heat transfer efficiency.
Current and future constraints on Higgs couplings in the nonlinear Effective Theory

We perform a Bayesian statistical analysis of the constraints on the nonlinear Effective Theory given by the Higgs electroweak chiral Lagrangian. We obtain bounds on the effective coefficients entering in Higgs observables at the leading order, using all available Higgs-boson signal strengths from the LHC runs 1 and 2. Using a prior dependence study of the solutions, we discuss the results within the context of natural-sized Wilson coefficients. We further study the expected sensitivities to the different Wilson coefficients at various possible future colliders. Finally, we interpret our results in terms of some minimal composite Higgs models.

Introduction

Within the Standard Model (SM), the Higgs mechanism is essential to understand mass generation via electroweak symmetry breaking. The discovery at the Large Hadron Collider (LHC) of a Higgs-like particle by the ATLAS and CMS experiments [1,2] could therefore be seen as the experimental confirmation of the last ingredient of the SM. The relevant question in this situation is, however: is this particle the SM Higgs? Indeed, the Higgs is not only a key ingredient of the SM, but also the source of some of the main questions that motivate the belief that there must be new physics beyond the SM. Such new physics could leave its imprints in deviations of the Higgs couplings with respect to the SM predictions. The extraordinary performance of the LHC during its eight years of operation has brought us precise measurements of some of these Higgs properties [3][4][5], which can therefore be used to look for this kind of indirect new-physics effects. It is also important to note that, with current observables and a precision of at most ten percent in the couplings of the Higgs-like scalar, it is not yet clear whether the electroweak symmetry is broken by the minimal SU(2)_L doublet of the SM, or whether a different mechanism is at work. The absence of any direct observations of new states at the LHC motivates addressing this search for new physics in the Higgs sector in a model-independent way, and extrapolating these results to what we could learn at future Higgs factories. Both the SMEFT and the ewχL have been used in fits to experimental data to search for indirect signals of new physics. Apart from the experimental information, it is sometimes also useful to take theory considerations into account in such analyses. In particular, we are interested here in testing the ewχL hypotheses within the regime that is expected by the power-counting rules of the chiral Lagrangian, or, conversely, in clarifying to what extent current data is sensitive to "natural"-sized EFT contributions. This can easily be done within the framework of Bayesian statistics, where such theoretical information can feed into priors in the process of parameter estimation [63]. In Bayesian inference (see for example [64] for a review), probabilities are interpreted as degrees of belief and always depend on some background information. The central formula of Bayesian inference can then be obtained from Bayes' theorem,

prob(hypothesis|data, I) = prob(data|hypothesis, I) · prob(hypothesis|I) / prob(data|I),

i.e. posterior ∝ likelihood × prior, where we denote the probability of an event X, given the background information I, as prob(X|I). In this formalism, the relevant information to address the questions "Do we see indirect signs of new physics in current Higgs data?" or, conversely, "How large can new physics be, while still being consistent with Higgs data?", is contained in the posterior distributions for the Wilson coefficients. These directly feed from the information of the likelihood, where we include the most recent experimental data.
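As a toy numerical illustration of this posterior ∝ likelihood × prior logic for a single Wilson coefficient (all numbers below are invented for illustration; the actual fits use MCMC over the full parameter space):

```python
import numpy as np

c = np.linspace(0.0, 2.0, 2001)                   # grid for one Wilson coefficient

# Hypothetical Gaussian likelihood from a signal-strength measurement
likelihood = np.exp(-0.5 * ((c - 1.1) / 0.3) ** 2)

# Prior encoding "natural-sized" deviations around the SM expectation c = 1
prior = np.exp(-0.5 * ((c - 1.0) / 0.5) ** 2)

posterior = likelihood * prior
posterior /= np.trapz(posterior, c)               # prob(data|I) is this normalization

mean = np.trapz(c * posterior, c)
std = np.sqrt(np.trapz((c - mean) ** 2 * posterior, c))
print(f"posterior: c = {mean:.3f} +/- {std:.3f}")
```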
or, conversely, "How large can new physics be, while still being consistent with Higgs data?", is contained in the posterior distributions for the Wilson coefficients. These directly feed from the information of the likelihood, where we included the most recent experimental data. Bayesian methods are now widely-used in Higgs fits [45,52,61,[65][66][67], and it is also the framework that we will use in our EFT studies, so we can discuss the consistency of the experimental results with the EFT considerations. The paper is organized as follows. In section 2 we review the basics of the electroweak chiral Lagrangian, and present the actual parameterization we will be testing in our fits. These are performed using the HEPfit package [68,69], in which we implement the relevant signal strengths computed with the ewχL. We discuss HEPfit, the settings of the fits, and the experimental data set included in our analyses in section 3. Section 4 contains the main phenomenological results of this article. We discuss the constraining power of current data and the impact of different priors in testing the ewχL power counting. We present the final result of our fit, describing the uncertainties and correlations of the ewχL parameters. These results are extended in section 5, with the study of the projected uncertainties of the Wilson coefficients at future colliders. Both current and future results are then related in section 6 to the SO(5)/SO(4) minimal composite Higgs model. We conclude in section 7. Supplementary information is presented in three appendices. In appendix A we list the relevant formulas for the calculation of the Higgs signal strengths. We also compare the results of the ewχL fit with those obtained within the phenomenological approach provided by the κ-formalism [8,9] in appendix B. Finally, we collect some of the input used in the fits presented in Section 5 in appendix C. The Higgs electroweak chiral Lagrangian In this paper we use the bottom-up EFT that was derived in [70], to describe potentially large deviations from the SM in Higgs observables. It is based on the electroweak chiral Lagrangian [29-35, 38, 71-80]. Such large deviations in the Higgs sector are motivated in many scenarios of physics beyond the SM, like for example composite Higgs models [81][82][83][84]. As any bottom-up EFT, its Lagrangian is completely defined via the particle content and the symmetries of the low-energy theory, while the effective expansion is determined by power counting rules. The explicit assumptions that go into the construction of the specific EFT we consider are described in what follows: • Particles: We assume the SM particle content, but no relation between the Higgs scalar h and the three Goldstone bosons ϕ i of electroweak symmetry breaking. • Symmetries: We assume the SM gauge symmetry, and that the new physics conserves custodial symmetry. Therefore, the global symmetry breaking pattern in the scalar sector is We further assume conservation of baryon and lepton number, a SM-like flavour structure in the Yukawa interactions of the Higgs, as well as CP-symmetry in the Higgs sector. The latter is also motivated by current experimental constraints [85][86][87][88]. • The power counting of the electroweak chiral Lagrangian is given by a loop expansion, which equivalently can be expressed in terms of chiral dimensions [38]. 
With the assignments [bosons]_χ = 0 and [fermion bilinears]_χ = [derivatives]_χ = [weak couplings]_χ = 1, the total chiral dimension of a term in the Lagrangian equals 2L + 2, with L being the loop order and therefore the order of the EFT expansion. If the new physics is decoupled from the SM to some degree, it is useful to parametrize the deviations from the SM by the parameter ξ = v²/f², where v ≈ 246 GeV is the electroweak vacuum expectation value, and f is the scale of new physics. The latter could correspond, for example, to the scale of global symmetry breaking in composite Higgs models. If ξ ≪ 1, we can perform an expansion in ξ (and therefore in canonical dimensions) on top of the loop expansion. This yields a double expansion in ξ and 1/16π² [13]. The leading-order chiral Lagrangian, not expanded in ξ (i.e. for ξ = O(1)), is then given in [80], where U = exp(2iϕ^a T^a/v) collects the Goldstone bosons, T^a are the generators of SU(2), P_± = 1/2 ± T_3, and ⟨·⟩ denotes the trace. As already said, we do not assume a relation between h and the Goldstone bosons in U. This yields free coefficients for all Higgs couplings in V(h), F_U(h), and Y_ψ(h) for any fermion ψ. To allow for a possible strongly-coupled origin of h, we do not truncate the polynomials at any order in h. All terms in L_LO have chiral dimension two. The list of NLO operators (i.e. with chiral dimension four) is lengthy [80] and we will not list all the operators here, as only a few are important for our analysis. In particular, we will focus on single-Higgs production processes. (We briefly comment on double-Higgs production at the end of this section.) Working at the leading order in each process, we can therefore focus on operators with a single Higgs field. At tree level, this includes couplings of h to W⁺W⁻, ZZ, t̄t, b̄b, c̄c, τ⁺τ⁻, and μ⁺μ⁻. Normalized to their SM values, we expect the couplings to be 1 ± O(ξ) by the power-counting arguments from above. Couplings to lighter fermions have not been observed so far, and therefore we do not include them in our fit. Nevertheless, to illustrate what effect those couplings could have in the fit, we still include the effective charm coupling. There is also experimental information for loop-induced processes, involving Higgs couplings to gg, γγ, and Zγ. The amplitudes of these processes receive contributions of O((1 + ξ)/16π²) coming from the modified leading-order couplings that enter in the loop. In addition, there are operators in L_NLO that, when included at tree level, contribute at O(ξ/16π²) to these amplitudes. We therefore include these operators as well. Figure 1 shows schematically the contributions of L_LO and L_NLO to the example process h → γγ. The resulting Lagrangian, which we use for our fit, is then given in [52,70], together with the definitions of the Wilson coefficients.³ Note that the coefficients c_γ and c_Zγ are independent at the considered order. They can be induced by the three operators in (2.5), which contribute to the four interactions in (2.6), yielding one linearly dependent operator. However, corrections induced by the last two operators are subleading (at O(ξ/16π²)) compared to the leading-order contributions parametrized by c_V (at O(ξ)) and are therefore neglected. As indicated above, we focus our study on single-Higgs processes.
To describe double-Higgs production consistently within the electroweak chiral Lagrangian, we would need to include several more parameters in the fit (at least three more to describe gluon fusion, corresponding to the interactions h³, t̄th², ggh² [6,97,98]). Given the low current sensitivity of the ATLAS and CMS experiments to double-Higgs production (the best upper limits are of the order of 20-30 times the SM [99,100]) and that these parameters cannot be constrained by the other available measurements, we decided not to include double-Higgs production in our analysis. Finally, let us mention that the analysis with the leading-order electroweak chiral Lagrangian is closely related [70], but not identical, to the κ-framework [8,9], which was introduced by the LHC Higgs cross section working group. We discuss the relation and the differences in appendix B.

² Similar parametrizations have also been discussed, using phenomenological motivations, in Refs. [89-95].
³ While the assumptions that lead to these generic power-counting estimates hold for many models of new physics, there are also exceptions: if there are, for example, different sources of electroweak symmetry breaking for different generations of fermions, the Higgs couplings to the light fermions could be enhanced by larger factors, see [96].

Methodology

In this section we review the details of our phenomenological analysis. The fits are performed using the HEPfit package [68,69], a general tool designed to easily combine the information from direct and indirect searches and to test the SM and its extensions including new physics effects. The code is available under the GNU General Public License, and the current developers' version can be downloaded at [101]. The flexibility of the HEPfit framework allows one to easily introduce new physics models and observables as external modules to the main core of the code, which we have used to implement the Higgs electroweak chiral Lagrangian (see section 2) and its modifications of the Higgs sector. HEPfit includes a Markov-chain Monte Carlo implementation provided by the Bayesian Analysis Toolkit [102], which we use to perform a Bayesian statistical analysis of the model. More details on the code can be found in [69], and related phenomenological studies using it are presented in [57,103]. As an illustration of the performance of HEPfit, the global fit presented later in this paper (9 parameters, using both Gaussian and flat priors, and a total of 126 observables in the likelihood) was performed using a total of 24 Markov chains with 10⁷ iterations each. This can be done within a total of 1200 CPU hours, or roughly two days if parallelized over 24 cores.

On the experimental side, we include in the likelihood all available Higgs boson signal strengths measured by ATLAS and CMS, at both the LHC runs 1 and 2, as well as the experimental results obtained by the CDF and DØ collaborations at the Tevatron. The experimental results included in the fits are summarized in table 1, which contains the references to the experimental measurements used for each Higgs decay channel. Whenever available, we include in the likelihood directly the signal strengths measured per category in each experimental analysis. The signal strengths are defined by

μ = [Σ_i ε_i σ_i × BR] / [Σ_i ε_i^SM σ_i^SM × BR^SM],

with the sums running over all the different production mechanisms that contribute to the study of each final state. The SM predictions for the different production cross sections and branching ratios are taken from [9].
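A minimal sketch of this signal-strength definition, assuming SM-like efficiencies; the cross-section and branching-ratio numbers are invented placeholders:

```python
def signal_strength(sigma, sigma_sm, br, br_sm, eps=None, eps_sm=None):
    """mu = (sum_i eps_i sigma_i) BR / ((sum_i eps_i^SM sigma_i^SM) BR^SM)."""
    eps = eps if eps is not None else [1.0] * len(sigma)          # eps_i ~ eps_i^SM
    eps_sm = eps_sm if eps_sm is not None else [1.0] * len(sigma_sm)
    num = sum(e * s for e, s in zip(eps, sigma)) * br
    den = sum(e * s for e, s in zip(eps_sm, sigma_sm)) * br_sm
    return num / den

# Illustrative: two production modes (ggF, VBF) contributing to one final state
mu = signal_strength(sigma=[50.0, 4.1], sigma_sm=[48.6, 3.8],
                     br=2.4e-3, br_sm=2.27e-3)
print(f"mu = {mu:.2f}")
```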
The experimental efficiencies, ε_i, are assumed to be SM-like: ε_i ≈ ε_i^SM. This is a good approximation in the presence of small new physics effects, or if these do not modify significantly the kinematical distributions of the final states. The validity of this approximation must nevertheless be checked a posteriori, in light of the results of the fit pertaining to the new physics effects. In any case, none of the interactions considered here introduces vertices with tensor structures different from the SM ones and, based on the natural size expected for the EFT coefficients, one expects ε_i ≈ ε_i^SM to be a good approximation. Correlations between the observed signal strengths for the different categories are usually not provided by the experimental groups and are ignored here.⁴ In some cases, the references for the experimental analyses do not provide all the information needed to reconstruct the signal strengths per category. In such cases we use as observables the fit values of the signal strength per production mechanism (e.g. μ_ggF, μ_VBF, μ_Vh, μ_tth) or per decay channel. Finally, for the h → Zγ channel, only limits on the signal strengths are available. In that case the limit is transformed into a Gaussian contribution to the likelihood. (In any case, the constraining power of the h → Zγ channel is very small and, within the model-independent hypotheses we are testing, has almost no impact on the fits.)

Table 1. Higgs boson signal strengths included in our fits to the electroweak chiral Lagrangian, classified according to the final states and indicating the integrated luminosity, L, of the corresponding data sets. Multiple entries in a single cell refer to different production modes. We summarize 7 and 8 TeV data for brevity. We also include the CDF [141] and DØ [142] data sets in our fit.

On the theory side, we consider the leading order (LO) corrections from eq. (2.2) to the SM Higgs boson production cross sections and branching ratios. The explicit expressions for the different observables are provided in appendix A. In our analysis we work within a Bayesian statistical framework, where the information we know about the model parameters before the analysis can be encoded into a prior distribution. In this regard, all the SM parameters are taken as fixed parameters. (In the expressions for the new physics corrections in appendix A, the SM inputs have been fixed to the central values from the fit in [57].) For the coefficients of the EFT, this a priori information comes from the EFT power counting, eq. (2.3). To decide how to implement this information into a prior, we follow the principle of maximum entropy [143], which selects the prior that best reflects the current state of knowledge [144]. A flat prior only contains information in the boundaries of the region where it is non-vanishing; apart from that, it does not reflect a preference for a particular size of the model parameters. This is convenient to show (exclusively) all the information from the actual data included in the likelihood in the desired region. A Gaussian prior, on the other hand, is more convenient to implement the information in eq. (2.3), by choosing the SM expectation of the coefficients as the mean value and adjusting the standard deviation, σ, of the distribution to favour (penalize) effects within (beyond) the expected size of O(ξ). To a certain extent, the result of the fit is prior-dependent. As we will see in sec.
4.1, the result of the fit changes when we restrict the Wilson coefficients c_i to be around the SM solution only. However, the fit results should not depend on the form in which we put the "condition of natural-sized coefficients" into equations. This is why we study the prior dependence and its impact on the quality and consistency of our results using both flat and Gaussian distributions. The information in the prior can also help to prevent overfitting when fitting a theoretical model to experimental data [63]. The term overfitting refers to cases in which the parameter values are very fine-tuned to data. As we will explain, in our EFT this would for example correspond to an unnaturally large value of the Higgs-charm coupling. A reasonable choice of priors, ensuring the condition of "natural-sized Wilson coefficients in the EFT", reduces the risk of overfitting substantially [63].

Phenomenological results

As mentioned in the previous section, the use of flat priors is convenient to establish the constraining power of the experimental data contained in the likelihood in the absence of extra information about the model. We therefore start by presenting our results using flat priors for all EFT coefficients, and then move on to check the stability of such results under the hypothesis of natural-sized Wilson coefficients. The main results presented here are obtained with the full data from the Tevatron and LHC runs 1 and 2 (see table 1). We will also comment in section 4.2 on the comparison of the constraining power of run 1 and run 2 data separately.

Flat priors

From the equations for the production cross sections and decay widths in appendix A, the existence of several approximate symmetries in the EFT parameter space is apparent. This yields a certain degree of degeneracy in the posterior distributions. Indeed, all observables are unchanged under a simultaneous change of sign of all the Wilson coefficients (4.1). This follows from a more general reparameterization invariance of the electroweak chiral Lagrangian, in which each Higgs coupling is modified as c_i → (−1)^{n_h} c_i, with n_h the number of Higgs fields in each Lagrangian term. This change of sign can then be absorbed in the Higgs field, h → −h, leaving the action invariant. The invariance (4.1) is enhanced at tree level, since observables remain unchanged under c_i → −c_i for each coefficient independently. Only when we include loop-generated observables, e.g. gg → h or h → γγ, does the interplay between the different diagrams contributing to the effective hgg, hγγ and hZγ vertices make the fit sensitive to the relative signs of the coefficients. The description of loop-generated observables, however, also introduces three extra parameters that do not enter in the tree-level decay widths, namely c_g, c_γ and c_Zγ, each of which enters only in one effective vertex. In general, for a given point in the c_i parameter space, one can therefore flip the signs of c_ψ and c_V independently, and adjust the aforementioned local parameters to obtain exactly the same prediction as at the original point, and therefore the same likelihood. Note now that there is basically no direct sensitivity in current data to the charm coupling (e.g. via h → c̄c) or any of the other light quarks. (The most stringent bound limits the h → c̄c signal strength to be smaller than 110 [145].) From the point of view of the fit, the corresponding Wilson coefficients can therefore be used as compensating parameters to balance out the effects of deviations of the other c_i away from the SM.
Firstly, the absence of a significant handle on Br(h → q̄q) would allow one to use c_q ≫ 1 to effectively compensate a global enhancement of the decay widths for all the observed channels, leaving the corresponding branching ratios intact. Secondly, c_q could play a role similar to c_g, c_γ and c_Zγ, and be adjusted to cancel some of the contributions induced by the other couplings in all the one-loop effective vertices. (The couplings to light leptons, on the other hand, could only play a similar role in electroweak loops.) Note that these partial cancellations in the effective loop vertices are possible for several different patterns of deviations of the EFT couplings. However, since the SM branching ratios into light quarks are very small, the first effect (compensating enhancements in the observed decay widths) is only possible for couplings that are larger, in magnitude, than in the SM. Both effects also require very large values of the corresponding c_q. For the case of the charm, the heaviest quark after the bottom, c_c can still have a visible effect for O(1-10) values.⁵ Let us illustrate this with a simple example. If we switch off the one-loop parameters (c_g = c_γ = c_Zγ = 0), enhance the c_c coupling by a factor of 5 and set the other tree-level couplings to c_V = c_t = c_b = c_τ = c_μ = 1.25, we increase the total decay width of the Higgs by almost a factor of 2.3.⁶ With this knowledge, we can estimate the signal strengths for tree-level processes. For instance, the h → VV signal strengths in the VBF or Vh production channels now roughly scale with c_V⁴/2.3 ≈ 1.06. But signal strengths involving loop processes like gluon fusion or the diphoton decay do not feature strong deviations from the SM values either. The charm contribution to the loop functions is about 1% of the top terms, and its relative boost by a factor of 5/1.25 = 4 is not sufficient to contribute in a significant way to the total amplitude. Even if the experimental resolution were precise enough to measure these effects in the loop observables, we would always have the freedom to compensate the charm loop by the local terms. With a little bit of tuning, it is easy to bring all signal strengths to SM values within the desired accuracy.

⁵ While weak, the experimental limits in the h → μμ channel are still restrictive enough to prevent any compensating effect of c_μ in electroweak loops.
⁶ In this regard, let us comment that the existing bounds on the Higgs width, e.g. [146], do not apply in a straightforward manner to our analysis. Indeed, current experimental limits on Γ_h depend on certain theory assumptions, such as gluon fusion production being dominated by the effects of the top loops, while the ewχL hypotheses also allow extra contributions coming from c_g. It is because of this that we ignore such bounds on Γ_h in our fits. In any case, while the strongest experimental bound, Γ_h < 13 MeV at 95% C.L. [146], could alleviate the overfitting issue by preventing excessively large values of c_c, it does not completely avoid it. The example discussed is, in fact, consistent not only with the above-mentioned limit on h → c̄c, but also with Γ_h < 13 MeV. Disregarding the width measurements completely, much larger values of c_c are possible, with the other c_i adjusted accordingly.
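The arithmetic of this example is easy to reproduce; the branching ratios below are the usual approximate SM values for a 125 GeV Higgs, quoted here only to check the quoted factors:

```python
# Approximate SM branching ratios for a 125 GeV Higgs (rounded values)
BR_SM = {"bb": 0.58, "WW": 0.215, "gg": 0.082, "tautau": 0.063,
         "cc": 0.029, "ZZ": 0.026, "other": 0.005}

# Example couplings: c_c = 5, all other relevant couplings set to 1.25;
# partial widths scale as c_i^2 (loop widths approximated via the modified couplings)
scale = {ch: 1.25**2 for ch in BR_SM}
scale["cc"] = 5.0**2

width_ratio = sum(BR_SM[ch] * scale[ch] for ch in BR_SM)
print(f"Gamma_h / Gamma_h^SM = {width_ratio:.2f}")      # ~2.2-2.3, as quoted

# VBF (or Vh) production with h -> VV: mu = c_V^2 (production) * c_V^2 / (width ratio)
print(f"mu(VBF, h->VV) = {1.25**4 / width_ratio:.2f}")  # ~1.06-1.09
```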
The fact that c_c is a priori experimentally unconstrained with current data, together with the above-mentioned possibility of using it to partially cancel the effects of other couplings, introduces a problem in the global fits if we treat it as a floating parameter. Assuming unbounded flat priors for all c_i parameters, such a fit would suffer from a clear case of overfitting of the EFT hypothesis to data, in the sense that the EFT parameters are "pulled" towards unnatural values while preserving the quality of the fit to data. As illustrated in the example above, large absolute values of c_c offer more room for possible cancellations between this and the other parameters. This opens up the regions of the parameter space where large values of c_c and tuned combinations of the other parameters leave the production cross sections times branching ratios approximately unchanged with respect to the SM expectation, and are therefore consistent with the experimental data. Since the likelihood along these regions is approximately the same as the SM one, we would expect some degeneracy in the posterior for the corresponding parameters, in correlation with c_c. Instead, what we observe is a preference for scenarios with |c_c| ≫ 1 and absolute values for the other c_i above the SM expectation. This can be understood as follows: around the SM limit, all c_i must have very specific values in order to obtain SM-like signal strengths. If, on the other hand, |c_c| is large, the other parameters have more freedom to compensate this contribution through various different correlations. In other words, if we do not know anything about c_c, the larger c_c is, the more fine-tuned the SM point looks, and it is therefore less likely to be scanned by the MCMC, i.e. the SM neighborhood seems less probable. Once again, let us illustrate this with another example: the most precise measurement of a Higgs decaying to a τ pair leaves an uncertainty of around 35%. Leaving all other observables as in the SM, we can vary c_τ only between 0.81 and 1.16. But if the total width is scaled by 2.3 and the production enhanced by 1.25², the possible range for c_τ is between 0.98 and 1.41, and thus larger by a factor of ≈ 1.25. In general, if we allow all parameters c_i to vary independently, large values of c_c induce a global suppression of all signal strengths, via an enhancement of the total decay width with respect to the SM. This allows a wider range of different parameters to be consistent with data than in the case of a SM-sized decay width. The larger size of the allowed regions, together with the multiplicity of the solutions, thus leads to a preference for large c_c values. This overfitting issue for c_c is illustrated in fig. 2, where we show the 2D marginalized distributions in the c_b vs. c_c and c_g vs. c_c planes from a fit with flat priors for all parameters, allowing |c_c| values as large as 5. We observe how the large prior for c_c broadens the corresponding posteriors (for the range displayed in the figure this is clearly visible in the 68% probability regions), a clear indication of overfitting [63]. In the case of c_Zγ, a flat prior is less problematic, because this coupling only enters in the h → Zγ decay and does not modify gluon fusion. We find an upper bound of around 35 on its magnitude in this case. However, large values of the |c_i| are clearly disfavoured by the EFT interpretation.
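The widened c_τ range quoted in this example follows from inverting the signal-strength relation; a quick check (assuming the ±35% uncertainty maps directly onto the allowed μ interval):

```python
import math

mu_lo, mu_hi = 0.65, 1.35  # allowed h -> tautau signal-strength interval (+/-35%)

# SM-like production and total width: mu = c_tau^2
print([round(math.sqrt(m), 2) for m in (mu_lo, mu_hi)])          # -> [0.81, 1.16]

# Width scaled by 2.3, production enhanced by 1.25^2: mu = c_tau^2 * 1.25^2 / 2.3
factor = 1.25**2 / 2.3
print([round(math.sqrt(m / factor), 2) for m in (mu_lo, mu_hi)]) # -> [0.98, 1.41]
```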
If we are interested in observing the effects of natural deviations of c_c from 1 without running into this kind of technical and interpretational issues, one can do so by, instead of fixing c_c = 1, assuming a prior consistent with the expected EFT power counting. The results of such a fit are also shown in fig. 2 and, for the rest of the parameters, in fig. 3. The posterior distributions and correlations in those figures were obtained assuming the set of priors of eq. (4.3), denoted in what follows as P_0: wide flat priors for all the coefficients, except for Gaussian priors for c_c and c_Zγ centered at their SM values, with σ_c,Zγ = 0.5. The bounds on the flat priors in P_0 are chosen to ensure that all regions allowed by the data are sampled, consistently with the small deviations in c_c, c_Zγ permitted by their corresponding Gaussian priors. The exact choice of σ_c, σ_Zγ = 0.5 as the standard deviations of the Gaussian priors in eq. (4.3) will be justified in the next section. As expected, the result of the fit still shows multiple possible solutions.⁷ The approximate invariance c_ψ ↔ −c_ψ is apparent for all fermionic couplings tested by the tree-level observables. All the fermionic couplings are consistent, in magnitude, with SM values. This also includes c_μ, even though this coupling is still poorly constrained by data. The same applies to the coupling of the Higgs to the EW vector bosons. Here, too, couplings with the opposite sign with respect to the SM are allowed by data. There are several possible patterns of (multi-parameter) correlations between those and non-SM values of the gluon and photon couplings c_g (c_γ), allowing regions with values as large as ±2 (±10). All such regions are, while consistent with experimental observation, unnatural from the point of view of the EFT used in the fit. In other words, the set of available observations and the precision of the employed data are not sufficient to constrain the ewχL parameters in a consistent way without extra information.

Figure 3. We show the 68.3%, 95.4% and 99.7% probability regions in the c_i vs. c_j planes for i, j = V, t, b, τ, μ, g, γ. We do not assume that the c_i are close to the SM values, which are represented by the dashed black lines, but we use wide flat priors in order to find all possible regions compatible with the current h signal strength bounds. Only for the experimentally poorly constrained c_c and c_Zγ do we impose Gaussian priors with standard deviation σ = 0.5. These choices correspond to the set of priors P_0 described in eq. (4.3). See the text for details.

Fits around the Standard Model solution

Now we concentrate on the region of the parameter space that is expected by the EFT power counting. As a first step, we need to decide how to treat the poorly constrained Wilson coefficients c_c and c_Zγ. From the previous section we know that too much freedom for c_c leads to the problem that the other parameters are also "artificially" shifted away from the SM solution towards larger values in the fit. Nevertheless, we want to allow for the possibility that c_c and c_Zγ differ from their SM values. The suppression of the aforementioned overfitting regions can be achieved by assigning a Gaussian prior to both parameters [63,143]. In order to find a reasonable value for the standard deviation of these priors, we choose Gaussian priors for all c_i with a universal standard deviation σ.
We therefore denote this set of normal prior distributions, centered at the SM values of the coefficients with a universal standard deviation σ, as N_σ. This choice is also motivated by the naturalness argument: the c_i are defined as deviations from the SM limit, so one would expect them to have values around their SM values, in agreement with the principle of maximum entropy [143]. In fig. 4, we show how the posterior distributions change [63] as we vary the standard deviation of the priors N_σ from 10^(-1.4) to 10^0. For a better illustration, we explicitly highlight a case with a smaller (10^(-1.3) ≈ 0.05) and one with a larger (10^(-0.3) ≈ 0.5) standard deviation below and above each panel describing the dependence of the posteriors on σ. The central value of the fit remains constant throughout the scan for all c_i except c_μ and c_c. For instance, the central value for c_μ moves closer to the SM for smaller σ, as the likelihood is highly asymmetric: values around 2 are very unlikely, while values around 0 are not yet excluded. The largest differences arise in the size of the error bars in the posterior. One can see that for σ = 0.05 all posteriors simply reflect the priors, and that all parameters, except for the charm and Zγ couplings, become less prior-dependent and more dominated by data as we move to larger values of the universal standard deviation. However, for σ > 0.5, the upper 68% probability limit of the c_c posterior exceeds the upper 68% probability limit of its prior. Also, the posterior distributions of c_V and c_t tend to have a larger upper limit. Both effects signal the presence of overfitting. We therefore decided that σ = 0.5 is a good compromise, which allows as much freedom as possible for c_c and c_Zγ but at the same time keeps the overfitting error sufficiently small. Since we want to keep the prior dependence of our fit as small as possible, we refrain from using the universal Gaussian priors for all other parameters (c_V, c_t, c_b, c_τ, c_μ, c_g and c_γ) in the next step. We only decrease the range of the flat priors from the previous section and allow for a maximal deviation of 1 from the SM value. Furthermore, we assume that c_t + c_g > 0 in order to remove a second, non-SM-like solution in the c_g vs. c_t plane (see fig. 3). With all this, we eliminate all but the SM-like solutions in the fit. We denote this choice of priors as P_EFT in the following: flat priors within ±1 of the SM values for all coefficients except c_c and c_Zγ, which keep their Gaussian priors with σ_c,Zγ = 0.5. The result of a fit with these priors can be found in the background of the σ-dependent panels in fig. 4 and as the black dotted line in the posterior distribution planes. Comparing it to the fit with universal Gaussian priors with σ = 0.5 (upper panels), we observe that both fits are in good agreement for most parameters. The overfitting error is visible in the corresponding c_c plane, where the P_EFT posterior slightly deviates from the N_0.5 prior, preferring larger values for c_c. The muon coupling c_μ is the parameter which deviates most from its SM value of 1. Here, the discrepancy between the N_0.5 and the P_EFT posteriors is the largest, which justifies the choice of flat instead of Gaussian priors for all parameters except c_c and c_Zγ. In any case, with the exception of these parameters, the results of the fit are prior-independent. After finding the most suitable choice for the priors, we want to discuss in detail the resulting simultaneous fit to all c_i. From the posterior of the fit we compute the median for all parameters, as well as their 68% probability uncertainties.
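The qualitative behaviour of the σ-scan can be mimicked with a one-dimensional Gaussian conjugate update, where the posterior interpolates between the prior-dominated and data-dominated regimes; the likelihood numbers here are invented:

```python
import math

def gaussian_posterior(mu_prior, sig_prior, mu_like, sig_like):
    """Posterior of a Gaussian prior times a Gaussian likelihood (conjugate update)."""
    w_p, w_l = 1.0 / sig_prior**2, 1.0 / sig_like**2
    return (w_p * mu_prior + w_l * mu_like) / (w_p + w_l), math.sqrt(1.0 / (w_p + w_l))

# Toy likelihood for a poorly constrained coupling: c = 1.3 +/- 0.6 (invented)
for sig_prior in (0.05, 0.5, 1.0):
    mu, sig = gaussian_posterior(1.0, sig_prior, 1.3, 0.6)
    print(f"sigma_prior = {sig_prior:4.2f} -> posterior c = {mu:.2f} +/- {sig:.2f}")
```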
For each c_i, the latter are defined from the upper and lower boundaries of the 68% probability interval, after marginalizing over the other parameters. The numerical values of the results of the fit can be found in the second column of table 2. For comparison, we also list the results of individual fits to only run 1 data and to only run 2 data in the third and fourth columns.⁸ We observe that in the combined fit, the coupling to vector bosons, c_V, is now determined with a precision of 6%. The uncertainties of the Higgs couplings to third-generation fermions and gluons are around 10%. For the measurement of the Higgs coupling to coloured particles, the bounds from run 2 data are stronger. While after run 1 one could only extract an upper limit on |c_μ|, the run 2 constraints allow us to determine the muon Wilson coefficient with a precision of 40%. (As already mentioned, its central value is the only one which visibly deviates from SM expectations, but not at a significant level.) For the photon coupling, we observe that the results using run 1 and run 2 data individually feature small deviations from the SM limit. The results from the two data sets are, however, also in slight tension with each other, with run 1 (run 2) data preferring c_γ < c_γ^SM (c_γ > c_γ^SM). In the combined fit, both preferences average out to a central value close to the SM, with an uncertainty of ±0.20. Finally, for the c_c and c_Zγ posteriors we get essentially the prior distributions; only the central value of the charm coupling is shifted by 4% towards larger values, an effect of the above-mentioned tendency to overfitting. If we use σ_c = 1 instead of σ_c = 0.5 for c_c, the shift of the central value amounts to 16%. Apart from the median values and the 68% probability allowed ranges, we also want to address the two-dimensional correlations between all parameters. These numerical correlations are given in table 3 and illustrated in fig. 5. The matrix in fig. 5 also contains, on the diagonal, the information about the one-dimensional posterior distributions of our fits to the run 1, run 2 and combined data sets, as a graphical translation of table 2. The off-diagonal panels depict the corresponding two-dimensional posterior distributions in the c_i vs. c_j planes after marginalizing over all other parameters. As described above, the result for c_Zγ is completely dominated by its Gaussian prior. Also, from table 3 we see that there are no noticeable correlations of this parameter with the others in the results. Because of that, we do not include this parameter in fig. 5. Addressing the correlations between the parameters, we can see that, due to the interplay in the gluon fusion production mechanism, the top and gluon couplings are strongly anti-correlated. Also due to the loop couplings, but to a lesser extent, c_γ is anti-correlated with c_t and therefore correlated with c_g. Further correlations worth mentioning can be found between c_V, c_b and c_τ. Comparing the fits to only run 1 and only run 2 data, we find that all three c_t contours are fully contained in the chosen range for the latter, while c_t = 0 was allowed at 95% probability by run 1 signal strengths alone. The reason for this is the improvement in the experimental sensitivity to t̄th production. In particular, the latest ATLAS results show evidence of this mechanism consistent with the SM [113], which helps to resolve the flat direction in the c_g vs. c_t plane. This is therefore also correlated with the reduction of the c_g uncertainties.
The parameter c_μ must be smaller than 2 at 99.7% probability according to the run 2 measurements, while such a large value was compatible at the 95% probability level with run 1 data. Finally, as mentioned above, we can clearly see that run 1 and run 2 pull c_γ in opposite directions, whereas the combined contours are centered around the SM limit and well within the range between −1 and +1.

⁸ What we label run 1 here and in the following also contains two Tevatron analyses [141,142].

Figure 5. For the parameters c_i with i = V, t, b, c, τ, μ, g, γ we display the one-dimensional posterior distributions as well as their two-dimensional correlations. The regions allowed at 68.3%, 95.4% and 99.7% probability by current Higgs data are represented by the red, yellow and blue filled contours, respectively. Additionally, we show the individual contributions from pre-13 TeV run data (green) and LHC run 2 data (purple).

As a cross-check, we compared our run 1 results with those existing in the literature and, in particular, with previous independent work by one of the authors in Ref. [52], where a fit to run 1 data was discussed in the context of the electroweak chiral Lagrangian. Comparing the results presented there with our run 1 fit, we see some small differences. These are, however, understood from an improved treatment of the t̄th production channel and from differences in the treatment of the experimental information used as input in the fits: Ref. [52] used the fitted signal strengths per production mode, while here we directly use the experimental information from all the categories that enter in such fits. At any rate, all values found in [52] are within the 68% probability region of our fit.

Projections for future Higgs factories

After having scrutinized the current limits on the Higgs electroweak chiral Lagrangian parameters c_i in the previous section, we would like to address here the potential for improving such constraints at the end of the LHC life-cycle (see [53,60] for studies including also the information from differential Higgs distributions) and at future Higgs factories. We therefore consider several scenarios:

• The High-Luminosity upgrade of the LHC (HL-LHC), where the precision of those Higgs observables whose uncertainty is statistically dominated could be largely improved. We use the precisions for the signal strength modifiers corresponding to an integrated luminosity of 3 ab⁻¹ in Refs. [147-150].

• The International Linear Collider (ILC) is designed as an e⁺e⁻ Higgs factory. The current operation baseline assumes collisions at √s = 250 GeV. The possibility of running at 500 GeV and 1 TeV centre-of-mass energies has also been discussed, as well as the possibility of a luminosity upgrade, which we consider here. We use as ILC inputs the precisions detailed in [151]. We label the 250 GeV scenario, assuming full luminosity (1.15 ab⁻¹), as "ILC-250". We also consider a scenario corresponding to using the data accumulated during all three stages discussed in [151]: a total of 5.25 ab⁻¹ of data, distributed as 1.15 ab⁻¹ at √s = 250 GeV, 1.6 ab⁻¹ at 500 GeV and 2.5 ab⁻¹ at 1 TeV. We denote this scenario as "ILC-all".

• The Future Circular Collider project at CERN, and in particular the e⁺e⁻ collider option (FCC-ee). The projected working points include centre-of-mass energies of √s = 240 GeV and 350 GeV, where we could be sensitive to e⁺e⁻ → Zh and e⁺e⁻ → ννh production.
The FCC-ee envisions the largest luminosity of all projected future e⁺e⁻ machines (10 ab⁻¹ of data at 240 GeV and 2.6 ab⁻¹ at 350 GeV, assuming 4 interaction points). The projected experimental precisions have been extracted from [152].

• The Circular Electron Positron Collider (CEPC) is another e⁺e⁻ circular collider, which would be based in China. Like the FCC-ee, it contemplates a "Higgs-factory" run at √s = 250 GeV, but there is no information about a 350 GeV run in the current project design [153]. The total accumulated luminosity at 250 GeV is expected to be of the order of 5 ab⁻¹. All the precisions for the Higgs observables are taken from [153].

• Finally, we also consider the proposed design of the Compact Linear Collider (CLIC) at CERN. This is also an e⁺e⁻ linear collider, with a particular focus on exploration at high energies. The different CLIC runs would operate at √s = 380 GeV, 1.4 TeV and 3 TeV. As in the ILC case, we choose one scenario based on 0.5 ab⁻¹ of data from the lowest-energy run at 380 GeV ("CLIC-380")⁹, and one scenario adding also the 1.5 ab⁻¹ and 2 ab⁻¹ of data taken at 1.4 TeV and 3 TeV, respectively ("CLIC-all") [151,154].

For all future lepton machines it is expected that, apart from reconstructing the different decay channels, one could also measure the total e⁺e⁻ → Zh cross section using the distribution of the mass recoiling against the Z. This is the dominant production channel around 250 GeV. At 350 GeV, while still small compared to Zh, the cross section of e⁺e⁻ → ννh production via W boson fusion is already sizable enough to provide sensitivity to this mode. The energies attainable at future circular e⁺e⁻ colliders are, however, not enough to open the t̄th production mode, and hence to give direct sensitivity to modifications of the top Yukawa coupling. This is only possible at linear colliders. In addition to an increased precision on the couplings, the future collider data sets will also start to constrain further parameters that we neglected in our analysis because they are currently not accessible. An example is the Higgs self-coupling, see [98,155]. As in this study we are only interested in the comparison of present and future bounds on Higgs couplings, and in this regard double-Higgs production would not add a significant improvement, we also ignore such measurements in our study of the precisions at future colliders. In the fits presented in this section we use flat priors for all c_i, and we construct the likelihood assuming Gaussian distributions around the SM values for the future signal strength measurements, with errors given by the corresponding future experimental uncertainties. The experimental inputs for these uncertainties, as obtained from the corresponding references given above, are collected in appendix C. The numerical results for the sensitivities, defined as the 68% probability uncertainties on the fit parameters, are given in table 4. The results for the future lepton colliders are presented combined with the HL-LHC projections. To illustrate the individual constraining capabilities of each type of collider, we also show in parentheses the results obtained without the HL-LHC information.¹⁰ The corresponding one-dimensional and two-dimensional posterior distributions from each individual collider can be found in fig. 6, in which we also show the current distributions from fig. 5 in the background.
For the purpose of comparing the different collider options it is very important to keep in mind that the ILC results were derived assuming a high-luminosity upgrade, as detailed in [151], while for the other colliders only information about the baseline options is available. On a related note, for the comparison between the FCC-ee and CEPC results one must also take into account that the precisions on the Higgs observables for the former in [152] assume 4 interaction points, compared to only 2 for CEPC. Note however that, even rescaling the luminosities to 2 interaction points (thus equating the luminosities at 240 GeV to 5 ab$^{-1}$ for both circular colliders), the FCC-ee has an advantage due to the extra measurements that would be taken during the 350 GeV run.

From our results one can see that the coefficients $c_V$, $c_b$, $c_c$ and $c_\tau$ could be measured to a high precision at both linear and circular electron-positron colliders. The latter cannot disentangle the correlations of $c_t$, $c_g$ and $c_\gamma$, for which the strongest future bounds would come from the HL-LHC or linear colliders. What can also be extracted from the fits are upper limits on certain combinations of these three parameters. The strongest bound here is the one on $c_t + c_g$, which can be determined with a precision of $\sim 1\%$ at linear colliders. All future colliders provide limits on $c_\mu$ of the same order. Finally, weak bounds on $c_{Z\gamma}$ can be extracted from HL-LHC data and, with even lower precision, from all combined CLIC measurements. While the other machines could in principle also observe the $Z\gamma$ decay mode, there is no official information to assess their sensitivity to the corresponding coefficient.

Figure 6. Analogous to fig. 5, we illustrate the presumed impact of future colliders on the parameters $c_i$ with $i = V, t, b, c, \tau, \mu, g, \gamma, Z\gamma$. For the future projections, we only show the 95% probability regions; for the corresponding colours we refer to the legend and to table 4. (Note that, while current results for $c_c$ and $c_{Z\gamma}$ are obtained using the priors $P_{\rm EFT}$, we use flat priors for the calculation of future uncertainties.)

Application to minimal composite Higgs models

A popular solution to the hierarchy problem is provided by composite Higgs models (CHM). In these scenarios the Higgs emerges as a pseudo-Nambu-Goldstone boson of a global symmetry breaking at the scale $f \geq v$. Because of their strong dynamics, CHMs are best described by the electroweak chiral Lagrangian at low energies. In this section we therefore use the results previously obtained in terms of the general ewχL parameterization to estimate the current and expected constraints on these scenarios. The couplings of the Higgs to the fermions depend on the SO(5) representation in which the SM fermions are embedded, and are therefore model-dependent. The smallest representations are the 4 [81] and the 5 [82]. In these, the fermion-Higgs coupling becomes [77] $c_\psi^{(4)} = \sqrt{1-\xi}$ and $c_\psi^{(5)} = (1-2\xi)/\sqrt{1-\xi}$, respectively, with $c_V = \sqrt{1-\xi}$ and $\xi = v^2/f^2$. (Other cosets and representations may also exhibit a similar structure; see [157,158] for other examples and generalizations. For a recent dedicated analysis of the Higgs signal strengths within the context of minimal composite Higgs models see also [156], and see [13] for an operator matching to the ewχL.) Since in these two cases the couplings $c_V$ and $c_\psi$ depend only on the parameter $\xi$, the parameter space of these models corresponds to a line in the $c_\psi$ vs. $c_V$ plane. We show them labeled as CHM-4 (CHM-5) for third-generation fermions in the 4 (5) representation in fig. 7. Note that these lines do not exceed $c_i = 1$ because of the positivity of $\xi$; a numerical sketch of the resulting $\xi$ scan is given below.
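As a rough numerical illustration of how such $\xi$-lines are confronted with coupling bounds, the sketch below scans $\xi$ along the CHM-4 and CHM-5 lines against illustrative one-dimensional 95% lower bounds on $c_V$ and $c_\psi$; the numbers are stand-ins chosen only to mimic the size of the real contours, which are two-dimensional.

```python
import numpy as np

V = 246.0  # electroweak vev in GeV

def couplings(xi, rep):
    """Higgs couplings along the minimal-CHM lines: c_V = sqrt(1-xi), with
    c_psi depending on the fermion embedding (representation 4 or 5)."""
    cV = np.sqrt(1.0 - xi)
    cpsi = cV if rep == 4 else (1.0 - 2.0 * xi) / np.sqrt(1.0 - xi)
    return cV, cpsi

def xi_max(cV_min, cpsi_min, rep):
    """Largest xi whose (c_V, c_psi) point still satisfies both lower bounds."""
    allowed = [xi for xi in np.linspace(0.0, 0.4, 40_001)
               if couplings(xi, rep)[0] >= cV_min
               and couplings(xi, rep)[1] >= cpsi_min]
    return max(allowed) if allowed else 0.0

# Stand-in lower bounds on (c_V, c_psi); illustrative values only.
for rep, bounds in ((4, (0.88, 0.88)), (5, (0.88, 0.80))):
    xi = xi_max(*bounds, rep)
    print(f"CHM-{rep}: xi_max = {xi:.3f} -> f > {V / np.sqrt(xi):.0f} GeV")
```

With these stand-in bounds the scan lands close to the quoted limits, which illustrates that the constraint is driven by how fast the couplings depart from 1 along each line.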
A simple estimate of the allowed size of $\xi$ can therefore be obtained from the intersection of these $\xi$-lines with the corresponding contours in the $c_i$ parameter space. From the intersection with the 95% probability contours of current Higgs limits, we find that the parameter $\xi$ cannot exceed 0.22 (0.12) in the model CHM-4 (CHM-5). This bound stems from the Higgs coupling to $t$ and $b$ ($t$) and can be translated into a minimal new-physics scale $f$ of 530 (710) GeV, in agreement with [158]. Moving on to the expected sensitivities at future colliders, we can use the projected experimental limits on the parameters in fig. 6 to quantify the attainable impact on composite Higgs scenarios. In the top row of fig. 8 we compare the current limits from fig. 7 with the projected HL-LHC limits in the $c_\psi$ vs. $c_V$ planes for third-generation $\psi$. We find future HL-LHC bounds of $\xi < 0.10$ and $\xi < 0.042$ for the CHM-4 and CHM-5 scenarios, respectively. These limits stem from the 95.4% probability boundaries for $c_t$ and $c_b$ and can be translated into $f > 770$ GeV and $f > 1200$ GeV.

Table 5. Estimates of the size of the CHM ratio $\xi = v^2/f^2$ from the intersection of the $\xi$-lines with the 95.4% probability $c_i$-contours. We also translate the result into the corresponding value of the symmetry breaking scale $f$. The ILC, CLIC, CEPC and FCC numbers also include the final HL-LHC results; their individual limits are given in parentheses.

The second row shows the magnification of the grey frames from the first row, to focus on the expected precisions at future lepton colliders. The resulting estimates of the size of $\xi$ and $f$ are given in table 5, where we also show for comparison the current and future estimates from the LHC. As in table 4, the results for each future lepton collider are shown in combination with the HL-LHC, and their individual limits in parentheses. From that table we observe that all future lepton machines would be able to test values of $\xi$ up to $O(10^{-2})$ or, equivalently, $f$ scales of the order of 3 to 4 TeV.

Conclusions

The discovery of a 125 GeV mass scalar at the LHC immediately raised the question of whether the newly-discovered particle was the SM Higgs. To clarify this, precise measurements of the properties of the scalar particle are needed, in the same way that the precision tests of the properties of the $Z$ boson were crucial in establishing the validity of the SM description of the electroweak interactions. In order to fully determine the nature of the Higgs-like particle, i.e. whether it is a doublet or not, one needs, in particular, to measure the correlations between single-Higgs and multi-Higgs processes. Currently, however, only accurate information about the former can be experimentally accessed. This is still enough to at least address the question of whether new physics may be hiding in the basic single-Higgs couplings. In this paper we have addressed this particular question, using the general formalism of the Higgs electroweak chiral Lagrangian, and updated the current knowledge about single-Higgs interactions using a fit including the latest LHC results from run 2. We have performed a global fit of the electroweak chiral Lagrangian Wilson coefficients, $c_i$, to the currently available Higgs signal strength measurements.
First, we have discussed the overall constraining power of the data, explaining the importance of using priors to "regulate" the coefficients that are currently weakly bounded and that can compensate the effect of other interactions, leading to an overfitting problem. The charm Wilson coefficient is one of these couplings. Not only are the direct bounds on $h \to c\bar{c}$ poor, but $c_c$ can also have a sizable impact on the Higgs width and on important loop processes, e.g. gluon fusion. As we explained, a flat prior allowing large values, of $O(1-10)$, for this parameter would "artificially" pull all the ewχL couplings toward values larger than the SM without conflicting with Higgs observables. We therefore set a Gaussian prior for $c_c$ to contain it within the natural EFT region. Still, our results show that regions of the parameter space far away from the SM are allowed, arising from accidental symmetries in the signal strength formulas. These regions lie around $-1$ for the fermion and vector boson coupling parameters and around $c_g \approx \pm 1.5$, while the photon coupling can also take values of roughly $\pm 1.5$, $\pm 7$ or $\pm 8.5$. Such deviations are, however, not expected to be consistent with the size of deviations predicted by the EFT approach. In order to study the natural EFT region in more detail, we isolate it by using minimal priors for all the parameters. Apart from $c_c$, we also apply a Gaussian prior on $c_{Z\gamma}$. (In this case, the use of a flat prior is less problematic than for $c_c$ and would result in an upper bound $|c_{Z\gamma}| \lesssim 35$.) All the other parameters have a flat prior in the range $\pm 1$ around their SM values in order to cut away all solutions not consistent with the EFT power counting. The results of this fit are summarized in table 2. They illustrate to what extent the data is consistent with the SM hypothesis, and what the allowed size of new physics effects in the Higgs couplings is. Indeed, the SM limit of all individual $c_i$ is mostly compatible with the results of the fit at the 68% probability level. With this statistical significance, the vector boson coupling has an uncertainty of 6%, and the third generation fermion and gluon couplings can deviate at most by 7% to 13% from their SM limits. The muon coupling features the largest discrepancy of all parameters between the SM and the fit result, but its uncertainty of 40% is still sizeable. The photon coupling also has a rather large uncertainty of 20%. Again, we note that these results only have implications regarding how large new physics in single-Higgs couplings can be; this level of consistency between the data and the SM has no direct implications for the question of whether the Higgs is a singlet or a doublet of $SU(2)_L$.

After studying in detail the sensitivity to new physics of current data, we have used the future projections of the high-luminosity upgrade of the LHC to quantify how precisely the $c_i$ will be determined at the end of the LHC era. Applying flat priors around the SM values for all parameters in eq. (2.2), we find the following HL-LHC sensitivities: the vector boson coupling could be tested with a precision of 3%, while the third generation fermion couplings as well as the gluon and muon couplings will be known up to roughly 5%. The sensitivity to a direct Higgs coupling to two photons would be at the level of 7.5%, while the coupling to one photon and one $Z$ boson could be extracted with an uncertainty of 95%.
Apart from these bounds, we have also discussed the potential sensitivity to the ewχL Wilson coefficients at future lepton colliders. Linear collider concepts like the ILC and CLIC are each able to measure $c_V$ at the sub-percent level and $c_b$, $c_c$ and $c_\tau$ at the percent level during their low-energy runs. Adding all further data from potential runs at higher centre-of-mass energies, these bounds decrease to a few per mil for $c_V$ and $c_b$ and to the percent level for $c_c$ and $c_\tau$. A precision of a few percent would be attainable on $c_t$ and $c_g$, and of order 10% on $c_\gamma$ and $c_\mu$. We contrast this with the CEPC and FCC-ee projections based on circular accelerators. The latter could also reduce the uncertainty on $c_c$ below 1%, and to a few per mil for $c_V$, $c_b$ and $c_\tau$. The precision on $c_\mu$ would be similar to the one obtained at the HL-LHC. Taking into account the differences in the integrated luminosities, one can see that all future concepts would be able to measure the Higgs couplings to vector bosons, bottoms, charms and taus with similar precision. This is not the case, however, if linear colliders only run at the low-energy stage, in which case they cannot compete in precision with the circular colliders' sensitivities. On the other hand, the high-energy runs would also allow the linear options to constrain $c_t$, $c_g$ and $c_\gamma$ separately to a good accuracy. The absence of a direct handle on $t\bar{t}h$ production at the energies at which the circular colliders will operate, however, restricts them to constraining only certain linear combinations of these parameters.

Finally, we further analysed the implications of our model-independent results for two manifestations of the minimal composite Higgs models. Their characteristic parameter $\xi$ is found to be smaller than 0.22 or 0.12, depending on whether the fermions are embedded in the representations 4 or 5, respectively. This translates into lower bounds on the typical symmetry breaking scale $f$ around 530 GeV and 710 GeV. Such limits could be further extended to the multi-TeV range at future lepton colliders.

A Theoretical expressions for Higgs observables

In this appendix we list all the relevant formulae for the calculation of the Higgs signal strengths, $\mu_{X\to h\to Y}$, for the different production modes $X \in \{{\rm ggF}, Vh, {\rm VBF}, t\bar{t}h\}$ and decay channels $Y \in \{WW, ZZ, \gamma\gamma, Z\gamma, b\bar{b}, \tau^+\tau^-, \mu^+\mu^-\}$. We assume the narrow-width approximation holds for all the values of the ewχL Wilson coefficients we consider, and decompose the corresponding cross sections as $\sigma_X \times {\rm Br}_Y$. Using the Lagrangian in eq. (2.2), we find, at leading order,
$$\mu_{X\to h\to Y} = \frac{\sigma_X}{\sigma_X^{\rm SM}}\,\frac{{\rm Br}_Y}{{\rm Br}_Y^{\rm SM}},$$
where we use $\Gamma_Y$ to denote the partial width of the $h$ decay to $Y$. The tree-level rates of $h$ decays to massive gauge bosons $V = W, Z$ and to fermions $\psi = b, c, \tau, \mu$ get rescaled compared to the SM by factors of $c_V^2$ and $c_\psi^2$, respectively. For the decays into $\gamma\gamma$, $gg$ and $Z\gamma$ we use the loop expressions [161][162][163], which also include the tree-level contributions from $c_\gamma$, $c_g$ and $c_{Z\gamma}$. In the corresponding equations, $x_i = 4m_i^2/m_h^2$, $\lambda_i = 4m_i^2/m_Z^2$, and $Q_\psi$ is the electric charge of a fermion $\psi$. The $\eta^{x,Y}_{\rm QCD}$ are QCD corrections of $O(\alpha_s)$. We only take into account $\eta^{t,gg}_{\rm QCD} = 1 + 11\alpha_s/4\pi$ and $\eta^{t,\gamma\gamma}_{\rm QCD} = \eta^{t,Z\gamma}_{\rm QCD} = 1 - \alpha_s/\pi$. Other contributions [162][163][164] have a very small effect in our results and we neglect them. (See [160] for an example of finite-width effects on the measurements of Higgs on-shell rates.) The one-loop functions are expressed in terms of $T^3_\psi$, the third component of the weak isospin of the fermion $\psi$, the weak mixing angle $\theta_w$, and the standard loop functions $f(x)$ and $g(x)$. A minimal numerical sketch of these leading-order signal strengths is given below.
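The sketch computes the narrow-width signal strengths from rescaled partial widths. The rounded SM branching ratios are illustrative inputs, and the loop-induced $gg$, $\gamma\gamma$ and $Z\gamma$ channels are collapsed here into effective coupling rescalings instead of the full loop functions above.

```python
# Narrow-width signal strengths mu = (sigma_X/sigma_X^SM) * (Br_Y/Br_Y^SM).
BR_SM = {"bb": 0.58, "WW": 0.21, "gg": 0.082, "tautau": 0.063, "cc": 0.029,
         "ZZ": 0.026, "gamgam": 0.0023, "Zgam": 0.0015, "mumu": 0.0002}

COUPLING_OF = {"bb": "b", "WW": "V", "ZZ": "V", "tautau": "tau", "cc": "c",
               "mumu": "mu", "gg": "geff", "gamgam": "gameff", "Zgam": "zgameff"}

def width_ratio(c):
    """Gamma_h / Gamma_h^SM from the individually rescaled partial widths."""
    total_sm = sum(BR_SM.values())
    return sum(BR_SM[y] * c[COUPLING_OF[y]] ** 2 for y in BR_SM) / total_sm

def mu(prod, decay, c):
    """Signal strength for production coupling 'prod' and decay channel 'decay'."""
    return c[prod] ** 2 * c[COUPLING_OF[decay]] ** 2 / width_ratio(c)

# Example point: all couplings SM-like except a 5% downward shift of c_b.
c = {"V": 1.0, "b": 0.95, "c": 1.0, "tau": 1.0, "mu": 1.0,
     "geff": 1.0, "gameff": 1.0, "zgameff": 1.0}
print(f"mu(ggF, h->gamgam) = {mu('geff', 'gamgam', c):.3f}")  # rises: smaller width
print(f"mu(VBF, h->bb)     = {mu('V', 'bb', c):.3f}")         # falls: smaller Br
```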
B Relation to the κ-framework

The so-called κ-framework was introduced as a recommendation from the LHC Higgs cross section working group, to explore deviations of the couplings of a Higgs-like particle with respect to the SM [8,9]. While one of its goals is to avoid reference to specific models, one significant (simplifying) assumption is that the tensor structure of the Higgs couplings, and therefore the kinematic distributions of Higgs processes, are SM-like. In other words, in this framework only modifications of the SM coupling strengths are considered. These are parameterized via scale factors, denoted as $\kappa_i$. Moreover, these coupling modifiers are defined "phenomenologically", in the sense that each $\kappa_i$ is defined as a ratio of cross sections or decay widths,
$$\kappa_i^2 = \frac{\sigma_i}{\sigma_i^{\rm SM}} \quad {\rm or} \quad \frac{\Gamma_i}{\Gamma_i^{\rm SM}}, \qquad {\rm (B.1)}$$
so the SM is recovered for $\kappa_i = 1$. While eq. (B.1) may resemble some of the ewχL corrections described in the previous appendix, the two formalisms are fundamentally different. Indeed, the Wilson coefficients of the ewχL are introduced at the Lagrangian level, in a well-defined theory where one can compute predictions at any order in perturbation theory. In the κ-formalism, on the other hand, higher-order accuracy is lost for $\kappa_i \neq 1$. In this appendix, we briefly comment on the connection between the two approaches.

From the point of view of how both approaches describe new physics effects in the data, the main practical difference between the ewχL and the κ-framework is in the treatment of the one-loop induced processes. The κ-formalism allows one to express the couplings associated with loop-induced processes as functions of the $\kappa_i$ couplings of the particles running in the loops. However, in the general effective treatment which allows, e.g., new particle effects in the loops, $\kappa_\gamma$ and $\kappa_g$ are treated as independent parameters in the fits. This is the scenario we consider here. In the ewχL approach, on the other hand, contributions from modified couplings in the loops and new local corrections are parameterized separately. From this point of view, the ewχL is clearly a better way of parameterizing new physics effects in the data, as it provides a cleaner separation of the origin of new physics effects with the same number of parameters [70]. The mapping from the Wilson coefficients $c_i$ to the $\kappa_i$ parameters is well defined using the relations of appendix A. These relations can be written as
$$\kappa_i = \left|\frac{\mathcal{A}_i(c_j)}{\mathcal{A}_i^{\rm SM}}\right|, \qquad {\rm (B.2)}$$
where $\mathcal{A}$ is the corresponding transition amplitude of each process. The absolute value on the right-hand side is necessary, as the loop functions of the light fermions ($b, \tau, \mu, \ldots$) entering $\kappa_\gamma$ and $\kappa_g$ are complex. The inverse of eq. (B.2) is, however, not a well-defined function. We can still obtain an approximate inverse, to connect both formalisms in the opposite direction. This can be easily obtained if we assume that all the imaginary parts are negligible. While this is a good approximation for some of the coefficients in $f_i(c_j)$, for example for the coefficient of $c_t$, it is not the case for the coefficients of the light fermion loops, where real and imaginary parts are of similar size (a toy numerical check of this is given below).
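The toy check below illustrates this numerically for $\kappa_g$: with stand-in (not appendix-A) values for the top and bottom loop amplitudes, the exact modulus and the real-part-only approximation agree closely as long as the couplings stay near their SM values.

```python
# Stand-in complex loop amplitudes for h->gg (illustrative values only):
# the top loop is essentially real, while the bottom loop has comparable
# real and imaginary parts, as stated in the text.
A_top = 0.69 + 0.01j
A_bot = -0.03 - 0.035j

def kappa_g(ct, cb):
    """Exact modulus of the rescaled amplitude, in the spirit of eq. (B.2)."""
    return abs(ct * A_top + cb * A_bot) / abs(A_top + A_bot)

def kappa_g_real(ct, cb):
    """Approximation with all imaginary parts dropped, in the spirit of eq. (B.3)."""
    return abs(ct * A_top.real + cb * A_bot.real) / abs(A_top.real + A_bot.real)

for ct, cb in ((1.0, 1.0), (1.1, 0.9), (0.9, 1.2)):
    print(f"c_t={ct}, c_b={cb}: exact {kappa_g(ct, cb):.4f}, "
          f"real-only {kappa_g_real(ct, cb):.4f}")
```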
Nevertheless, as long as the Wilson coefficients stay relatively close to the SM value, neglecting the imaginary parts completely is still a good approximation, because in $\kappa_g$ ($\kappa_\gamma$) the real part of the top loop (top and $W$ loops) contribution dominates over all the other terms. With the assumption of vanishing imaginary parts, eq. (B.2) becomes eq. (B.3). We checked the validity of the approximation of vanishing imaginary parts by translating the central values of the $c_i$ coefficients from our fit into the $\kappa_i$ parameters. We used both eq. (B.3) and the exact expressions that can be derived from the equations in appendix A. As expected, the dominance of the top and $W$ loops results in both approaches giving the same result with negligible differences. The inverse of eq. (B.3) is given in eq. (B.4). With these relations one can translate the results of a $\kappa_i$ fit into the ewχL formalism and vice versa. In order to do so, however, it is important to have all the relevant information about the fits. In particular, the medians and errors of the parameters are not sufficient, since there may also be significant correlations between them. For instance, the results of the κ-fit to the data of table 1 are shown in table 6 and figure 9. From the figure, one can see the existence of significant correlations between, e.g., $\kappa_V$ and $\kappa_\gamma$. Ignoring this and using eq. (B.4) to translate the $\kappa_i$ results into the $c_i$ parameterization would result in a significant increase of the uncertainty on $c_\gamma$, compared to fitting directly in terms of the ewχL. This can be understood from the large (6,1) matrix element in eq. (B.4), and is illustrated in the last two columns of table 6. These columns show the results of the direct $c_i$ fit and the translation of the $\kappa_i$ one using eq. (B.4) while ignoring correlations. Using the full correlation information of table 6, one can reproduce the exact $c_i$ results to a good accuracy, with differences given only by the deviations from Gaussianity of the fits. The same considerations about the importance of providing all the necessary information to reconstruct the posterior of a fit (at least at the Gaussian level) apply if one wants to translate the $\kappa_i$ or $c_i$ results in terms of specific models.

Figure 9. For the parameters $\kappa_i$ with $i = V, t, b, \tau, g, \gamma$ we display the one-dimensional posterior distributions as well as their two-dimensional correlations. The regions allowed at 68.3%, 95.4% and 99.7% probability by current Higgs data are represented by the green, blue and purple filled contours, respectively. Additionally, we show the single contributions from pre-13 TeV run data (dark gray) and LHC run 2 data (orange).

C Projected uncertainties at future colliders

In this appendix we collect the different inputs used in the analysis presented in Section 5. For the HL-LHC projections, we report in table 7 the uncertainties on the Higgs signal strengths that could be measured in the different categories defined in the ATLAS Refs. [147][148][149]. From CMS we use the signal strengths per decay mode given in Ref. [150]. In both cases we use the numbers corresponding to the most optimistic scenario in terms of theoretical uncertainties. For the projections at future lepton colliders in table 8, the projected uncertainties are separated according to the main production mechanism: associated production with a $Z$ boson ($Zh$), $e^+e^- \to \nu\bar{\nu}h$ via $W$ boson fusion (WBF), or associated production with a $t\bar{t}$ pair ($t\bar{t}h$).

Table 8. Inputs for the Higgs signal strength uncertainties at the ILC [151], CLIC [151,154], CEPC [153] and FCC-ee [152].
SCA-Net: A Spatial and Channel Attention Network for Medical Image Segmentation

Automatic medical image segmentation is a critical tool for medical image analysis and disease treatment. In recent years, convolutional neural networks (CNNs) have played an important role in this field, and U-Net is one of the most famous fully convolutional network architectures among the many kinds of CNNs for medical segmentation tasks. However, the CNNs based on U-Net used for medical image segmentation rely only on a simple concatenation operation of multiscale features, so spatial and channel context information is easily missed. To capture the spatial and channel context information and improve the segmentation performance, in this paper a spatial and channel attention network (SCA-Net) is proposed. SCA-Net presents two novel blocks: a spatial attention block and a channel attention block. The spatial attention block (SAB) combines the multiscale information from high-level and low-level stages to learn more representative spatial features, and the channel attention block (CAB) redistributes the channel feature responses to strengthen the most critical channel information while restraining the irrelevant channels. Compared with other state-of-the-art networks, our proposed framework obtained better segmentation performance on each of the three public datasets. Compared with U-Net, the average Dice score improved from 88.79% to 92.92% for skin lesion segmentation, from 94.02% to 98.25% for thyroid gland segmentation and from 87.98% to 91.37% for pancreas segmentation. Additionally, the Bland-Altman analysis showed that our network had better agreement between automatically and manually calculated areas in each task.

I. INTRODUCTION

Medical image segmentation is an essential tool for current clinical applications, such as computer-aided diagnosis/detection (CAD) or treatment planning systems (TPSs) [1], [2]. Automation of medical segmentation can increase speed and efficiency and greatly reduce tedious and time-consuming work for doctors. In brief, the main target of medical image segmentation is to distinguish the target region of interest from the background effectively. However, it is a challenging task due to several factors. First, medical images are collected by different acquisition facilities and usually have low imaging quality, leading to incomplete or excessive segmentation. Second, some segmentation targets vary widely in shape and scale from patient to patient, making it difficult to achieve consistently excellent performance. Additionally, some targets of interest have a wide range of orientations and positions in the context of medical images, such as the pancreas in magnetic resonance imaging (MRI) [3], [4], [5]. In recent years, deep learning has become the mainstream research method in many fields, and deep convolutional neural networks (CNNs) have attracted much attention from researchers in the field of medical image segmentation because of their good performance. Compared with traditional medical image segmentation methods, the ability to extract features automatically helps CNNs learn from the obtained dataset. Many state-of-the-art works have achieved noticeable performance in medical image segmentation tasks. However, there are still some problems with CNNs. First, the weight-sharing design of CNNs between the same input feature layer and output feature layer easily weakens the learning ability of CNNs for complex textures and shapes.
At the same time, the increased number of channels causes redundant computation and memory consumption. Second, as CNNs grow deeper, the network becomes hard to train, and the risk of vanishing gradients increases. Third, continuous pooling operations cause important local and global context information to be lost. To efficiently enhance the segmentation performance of networks built from convolution operations, several ideas have been fused into CNNs and have shown progress in the medical image segmentation field. For instance, U-Net [6], one of the most popular architectures in the medical image segmentation field, employed a symmetrical U-shaped structure with skip connections to concatenate multiscale feature maps from low-level and high-level layers. [7] employed a dilated convolution operation with multiple different dilation rates to extract contextual feature maps. [8] applied a fully connected CRF to encourage similar pixels to share labels and to model the spatial contextual relationships between object classes. Although these approaches improve the feature extraction ability, the descriptive information for spatial and channel features, which is very useful for medical image segmentation, is still limited. To learn more locally related features and overlook irrelevant details in the feature maps, several variants of attention mechanisms have been proposed and have achieved better performance in computer vision tasks [9]-[13]. Attention U-Net [13] employed the attention gate (AG), which fuses multistage contextual information from the encoder and the decoder. AGs learn to suppress the irrelevant characteristic response in the background while focusing on target regions. SE-Net [14] employed the SE-block, which is a kind of channel attention mechanism. It recalibrates the channel feature maps, assigning more weight to important feature channels and restraining irrelevant channels. The semantic segmentation methods proposed in [15] and [16] utilized similar ideas to enhance network segmentation performance. [17] and [18] introduced an attention mechanism into the deep adversarial learning framework for capturing more contextual information. The results obtained by these works demonstrate the effectiveness of attention modules for segmentation tasks. Inspired by previous works on CNNs for medical image segmentation, this paper introduces multiscale spatial and channel information to achieve better segmentation performance for medical images. Based on the encoder-decoder architecture and the attention mechanism, we propose a spatial and channel attention network (SCA-Net) for medical image segmentation tasks, which is shown in Fig. 1. In SCA-Net, two novel attention blocks are constructed for capturing the spatial-wise and channel-wise relationships: the spatial attention block (SAB) and the channel attention block (CAB). The two blocks are integrated into the decoder. The SAB learns to focus on the target spatial regions and ignore the irrelevant background by reassigning each pixel's weight. The CAB emphasizes the relativity of different channels, redistributing the critical channel information and suppressing unrelated channel information. In summary, the main contributions of our work are organized as follows: 1) We propose two attention blocks: a spatial attention block (SAB) and a channel attention block (CAB).
The SAB serves to recalibrate the features with spatial context information, and the CAB serves to highlight the relevant channels and restrain the irrelevant channels. 2) The proposed blocks SAB and CAB are integrated into a novel network named SCA-Net. The ablation study shows that the proposed blocks can effectively capture the features of the targets of interest to be segmented. 3) Our proposed method was verified on three different medical image segmentation tasks. The experimental results show that SCA-Net has superior performance.

A. CNNs FOR IMAGE SEGMENTATION

Convolution is the core operation of CNNs. Without manually selected features or prior knowledge, CNNs express the ability to learn features from acquired datasets automatically. In recent research, CNNs have been widely applied to different tasks [19]-[21]. By deepening the CNN layers and using ReLU and dropout, AlexNet achieved the best classification results at that time [22]. By replacing the last fully connected layers of classification CNNs with convolution layers, fully convolutional network (FCN) architectures have made significant progress in natural semantic segmentation, such as DeepLab for semantic image segmentation [23]. Subsequently, SegNet [24] proposed the encoder and decoder architecture, which employed a CNN as the base unit and achieved state-of-the-art performance for semantic image segmentation. However, CNN performance is still limited by position-invariant convolutional kernels, which do not attend to the spatial and channel information that is very important for segmenting objects.

B. MULTI-SCALE INFORMATION FUSION

In computer vision tasks, rich contextual features extracted from multiscale information help the network achieve better segmentation performance. Many methods using multiscale information have been proposed and applied to 2D and 3D medical image segmentation. Similar to [24], the structure of U-Net [6] adopts a symmetrical encoder and decoder architecture with skip connections to perform 2D medical image segmentation. To date, many models have been proposed based on U-Net, including U-Net++ [25], DoubleU-Net [26], and DUNet [27]. They have been successfully applied to different 2D medical image segmentation tasks. At the same time, 3D U-Net [28] and V-Net [29] were proposed for 3D medical image segmentation tasks. To compensate for the feature details lost during the downsampling operation, dilated convolution [30] with different rates enlarges the receptive field to capture more contextual information [31]-[34]. For instance, CE-Net designed a context extractor module to learn contextual semantic information [31], generating more representative feature maps. [33] learned local geometric details using a cascaded pyramid architecture fused with dilated convolutions at different dilation rates. However, the scan area of dilated convolution is not continuous, and for small targets the gain is not worth the loss. Less attention has been paid to the interrelationship between spatial and channel characteristics.

C. ATTENTION MECHANISM

The attention mechanism has proven to be an efficient method for enhancing CNN performance [35]. It mimics the biological observation process of paying attention to more detailed information about the desired target while suppressing useless information. [36] was the first to propose an attention mechanism for natural language translation. [37] relied on self-attention to capture the dependencies of inputs for machine translation.
Meanwhile, the attention mechanism has also been used in the field of computer vision [38]-[40]. [38] and [39] used spatial attention for image classification and image captioning. [40] employed a dual attention mechanism to capture global features for semantic segmentation. In many image segmentation tasks, the attention mechanism has also been adopted for better performance. Generally, attention modules can be plugged into CNNs and help them focus on more effective features of the target using spatial regions and channel interrelationships. Based on U-Net [6], the attention gate (AG) [13] focuses on the salient feature shape and size of the target through multiscale information. SE-Net [41] employed the squeeze-and-excitation (SE) block to recalibrate relevant channel feature maps and suppress irrelevant features. CBAM [42] emphasized the meaningful features in space and channels, enhancing the feature representation of key regions related to the target. [43] designed an autofocus attention layer for semantic segmentation, employing multiple parallel attention branches with different scales of receptive fields to focus on the optimal scales. However, multiple branches increase the complexity of models and the difficulty of training. Inspired by previous methods, we hypothesize that the effective use of spatial information and channel-dependent features can improve the segmentation performance of our network.

III. MATH

Based on previous works, we use the effective encoder-decoder architecture as our backbone. As illustrated in Fig. 1, the architecture proposed in this paper has three major components: the residual block, the spatial attention block (SAB) and the channel attention block (CAB). The encoder transforms the input image into multidimensional feature maps and extracts the segmentation information, and the decoder generates spatial feature maps by aggregating multiscale information and redistributing the weights of the feature map channels. In the encoding stage, we use the residual block to retain more original information and extract the feature maps. In the decoding stage, the SAB redistributes the spatial pixel weights by aggregating pooled feature information from high-level and low-level stages. The CAB exploits the channel features, using global average pooling and global max pooling to excite more channel contextual information. It reassigns the relationship of every channel and its neighbors to highlight more important channel information. The details of these modules are described below.

A. RESIDUAL BLOCK

With increasing network depth, a model generally gains expressive power for its tasks. However, depth also increases the risk of gradient degradation and explosion during training. To solve these problems, [44] proposed the residual learning network, which employed the residual connection to ease the difficulty of network training and keep more learnable features. Inspired by the residual learning framework, we use two convolution blocks and one convolution block to generate the multidimensional feature maps, and a residual connection, which uses a convolution to adapt the number of channels, is employed to preserve the original feature information. A small batch size may cause training gradient degradation and decrease network performance; thus, we use group normalization [45] instead of BN throughout our network. The residual block avoids the risk of vanishing gradients and accelerates network convergence. The residual block used in this paper is shown in Fig. 2; a sketch of this block is given below.
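A minimal PyTorch sketch of such a residual block follows. The 3x3/1x1 kernel sizes and the number of normalization groups are our assumptions, since the exact values are specified only in Fig. 2 of the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block sketch: two conv layers on the main path (3x3 assumed),
    a 1x1 conv on the shortcut to adapt the channel count, and GroupNorm in
    place of BatchNorm to tolerate small batch sizes."""
    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.GroupNorm(groups, out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.GroupNorm(groups, out_ch),
        )
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)  # adapts the channel number
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.shortcut(x))

# quick shape check
x = torch.randn(2, 32, 64, 64)
print(ResidualBlock(32, 64)(x).shape)   # torch.Size([2, 64, 64, 64])
```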
B. SPATIAL ATTENTION BLOCK

Previous works [31], [43] show that a deep convolutional network with atrous convolutional blocks and multikernel branches can effectively extract contextual features from images. However, using these blocks consumes considerable memory and increases the complexity of the model. To use multiscale contextual information at lower cost, Attention U-Net [13] utilized the attention gate to capture spatial features from multiscale information. Motivated by these methods, we design the SAB to fuse adjacent high-level and low-level spatial feature maps from multiple stages. By extracting the relationship between spatial pixels, the SAB can focus on meaningful spatial features and highlight prospective information. The SAB is shown in Fig. 3. One input is the low-level feature map from the encoder, whose shape is given by the number of input channels and the height and width of the input. The other input is the high-level feature map upsampled from the previous decoder layer, which therefore has a higher spatial resolution than before upsampling. First, we concatenate the two inputs; we then feed the result into global average-pooling and global max-pooling functions and concatenate the two pooled maps along the channel dimension. One convolution kernel is employed to fuse the spatial features, and the activation function is applied to obtain a spatial-wise statistic. To calibrate the spatial feature maps, the concatenated features are subsequently multiplied by this statistic. To reuse the original features, we employ a residual connection, and the result is compressed by a convolution to the desired number of output channels. The number of output channels depends on the decoding stage; here it is 128, 64, 32 and 16 for the different dimensional stages.

C. CHANNEL ATTENTION BLOCK

The spatial feature maps produced by the SAB contain considerable spatial interpixel information, as shown in Fig. 3. However, the output of the SAB still contains unutilized channel feature information. To exploit critical features and suppress useless ones, we use the CAB to redistribute the channel feature responses and strengthen the important channel features provided by the SAB. The details of the CAB are shown in Fig. 4. SE-Net [41] shows the effectiveness of the squeeze-and-excitation block, which specifies the interchannel relationship. However, it only uses globally average-pooled information. Compared with the SE-block, we additionally use globally max-pooled information, which stores more channel contextual information. Taking the SAB output as input, global average pooling and global max pooling are separately applied to obtain two global channel descriptors. Inspired by ECA-Net [35], the CAB employs a one-dimensional convolution to preliminarily capture the nonlinear cross-channel interaction. To decrease the number of parameters and the complexity, the weights of the convolution kernels are shared between the two descriptors. The two sets of channel information are then fused, and the result is fed into the activation function to obtain the channel weights. Finally, the output of our channel attention module is the channelwise multiplication of these weights with the input feature maps. A sketch of both blocks is given below.
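The following PyTorch sketch shows one possible reading of the two blocks. The bilinear upsampling, the 7-tap spatial convolution, the 3-tap one-dimensional channel convolution and the sigmoid activations are assumptions on our side; the paper's Figs. 3 and 4 fix the exact choices.

```python
import torch
import torch.nn as nn

class SAB(nn.Module):
    """Spatial attention block sketch: fuse a low-level encoder feature with
    the upsampled high-level decoder feature, build a spatial weight map from
    channel-wise average and max pooling, and keep a residual path."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.attn = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # kernel size assumed
        self.proj = nn.Conv2d(low_ch + high_ch, out_ch, kernel_size=1)

    def forward(self, low, high):
        x = torch.cat([low, self.up(high)], dim=1)
        avg = x.mean(dim=1, keepdim=True)            # pool across channels
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.attn(torch.cat([avg, mx], dim=1)))
        return self.proj(x * w + x)                  # attention plus residual reuse

class CAB(nn.Module):
    """Channel attention block sketch (ECA-style): average- and max-pooled
    channel descriptors share one 1D convolution, are fused by addition, and
    gate the input channels through a sigmoid."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = x.mean(dim=(2, 3)).view(b, 1, c)       # pool across spatial dims
        mx = x.amax(dim=(2, 3)).view(b, 1, c)
        w = torch.sigmoid(self.conv(avg) + self.conv(mx)).view(b, c, 1, 1)
        return x * w                                  # channel-wise reweighting

low = torch.randn(2, 64, 64, 64)
high = torch.randn(2, 128, 32, 32)
print(CAB()(SAB(64, 128, 64)(low, high)).shape)      # torch.Size([2, 64, 64, 64])
```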
D. LOSS FUNCTION

Our proposed framework is an end-to-end trainable network. In our medical image segmentation tasks, we need to train the network to accurately predict the classification of each pixel. In recent years, the cross-entropy loss function has been broadly used in the medical image segmentation field. However, some medical segmentation targets vary strongly in scale and orientation within the region of interest, particularly the pancreas and skin lesions. Accordingly, we used the soft dice loss function to alleviate the above problem. The soft dice loss function uses the predicted probability maps directly instead of thresholding them into a binary mask, and we used it in both training and validation. It is computed from the overlap between the predicted probability map and the ground truth, in which the ground-truth point values and the predicted probability point values enter symmetrically.

A. DATASET

To assess the effectiveness of the proposed method, we applied our network to three medical image segmentation tasks: skin lesion segmentation from ISIC 2018, thyroid gland segmentation, and pancreas segmentation. Each task has its own challenges, and samples of the three datasets are shown in Fig. 5.

B. IMPLEMENTATION DETAILS

During our experiments, the input images were resized to a uniform size and normalized by the mean value and standard deviation. Fivefold cross-validation was employed to assess the performance of the proposed model. The dataset was randomly split at ratios of 70%, 10% and 20% for training, validation and testing, respectively. To reduce the risk of overfitting, we randomly rotated the training images within a fixed angle range, which increased the number of training images. Our framework was implemented on the PyTorch platform. The training batch size was 16, and the Adaptive Moment Estimation (Adam) optimizer with fixed initial learning rate and weight decay was employed to train the network. For each task, we trained the network for 300 epochs. The hardware used was one NVIDIA Tesla P100 with 16 GB of memory for all experiments. The soft dice loss function was used to train our network. During validation, we saved the best-performing model, i.e., the one with the smallest loss, and used it on the test dataset to evaluate model performance.

C. EVALUATION METHODS

To quantitatively evaluate the segmentation performance of the networks, we used the following evaluation metrics: overlap measures (Dice and IoU) computed between the region A of the predicted segmentation map and the region B of the ground truth binary image, and boundary measures defined over the set of predicted segmentation boundary points and the set of ground truth boundary points, using the shortest Euclidean distance between a point of one set and all points of the other. Additionally, the Bland-Altman plot, a commonly used method for analyzing the consistency of two technologies in medical statistics, is applied to visualize the potential bias between the areas segmented by the automatic method and in a manual manner. Sketches of the soft dice loss and the overlap metrics are given below.
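The sketch below implements the soft dice loss and the Dice/IoU overlap metrics. The common smooth form of the dice loss is used; the exact smoothing constant and reduction are not specified in the text and are our assumptions.

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft dice loss sketch: operates on predicted probabilities instead of a
    thresholded binary mask; 1 - 2*sum(p*g) / (sum(p) + sum(g)) per sample."""
    p = probs.reshape(probs.size(0), -1)
    g = target.reshape(target.size(0), -1)
    inter = (p * g).sum(dim=1)
    return (1.0 - (2.0 * inter + eps) / (p.sum(dim=1) + g.sum(dim=1) + eps)).mean()

def dice_and_iou(pred_mask, gt_mask, eps=1e-6):
    """Overlap metrics on binarized predictions:
    Dice = 2*|A and B| / (|A| + |B|),  IoU = |A and B| / |A or B|."""
    a, b = pred_mask.bool(), gt_mask.bool()
    inter = (a & b).float().sum()
    dice = (2.0 * inter + eps) / (a.float().sum() + b.float().sum() + eps)
    iou = (inter + eps) / ((a | b).float().sum() + eps)
    return dice.item(), iou.item()

probs = torch.sigmoid(torch.randn(4, 1, 256, 256))   # predicted probability maps
gt = (torch.rand(4, 1, 256, 256) > 0.5).float()      # binary ground truth
print(soft_dice_loss(probs, gt).item())
print(dice_and_iou(probs > 0.5, gt))
```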
D. ABLATION ANALYSIS

To prove the validity of the SAB and CAB in our proposed SCA-Net, we evaluated the proposed modules by ablation analysis. The performance of each module was tested by segmenting skin lesions from the ISIC 2018 dataset. U-Net [6] with all convolutional layers replaced by residual blocks serves as our backbone. In the subsequent ablation experiments, the residual block is used in the encoder path, and the decoder path integrates the SAB and the CAB separately to extract feature information from the feature maps. The skip connection is used for concatenating features between the encoder and the decoder as implemented in the U-Net architecture [6]. The results of the quantitative comparison of these methods are shown in Table 1. For the skin lesion segmentation task, the performance of the backbone improves with the SAB and with the CAB separately, and our proposed SCA-Net enhances the segmentation performance significantly further. Compared with the backbone, our proposed network improved the average Dice from 0.8944 to 0.9292. The visual segmentation results are shown in Fig. 6. We can see that the SAB highlights the target spatial region and that the CAB pays attention to the edge information. Our SCA-Net achieves the best segmentation results.

E. SKIN LESION SEGMENTATION

To assess the performance of our proposed SCA-Net, we first put it through its paces on the skin lesion segmentation dataset from ISIC 2018. The dataset contains 2594 images with their ground truth [46], [47]. The skin lesion boundaries vary in scale, shape and color, necessitating automated segmentation methods that are highly robust to these variations [48]. We present the comparison of our method with other state-of-the-art networks. The comparison was made with seven existing networks: U-Net [6], ResUNet [50], U-Net++ [25], CE-Net [31], Attention-UNet [13], FCA-Net [17] and Singh et al. [18]. All of them are adopted with their original implementations, and the soft dice loss function is used uniformly. The detailed results are displayed in Table 2. We calculated the means and standard deviations of the four assessed metrics in all experiments. Fig. 7 shows the visual segmentation results, and it is obvious that our framework outperformed the other state-of-the-art methods in skin lesion segmentation. Our SCA-Net achieved a Dice score of 0.9292, an IoU score of 0.8730, an ASSD of 0.5079 and an RAVD of -0.0061 for skin lesion segmentation. U-Net [6] has 20.96 M training parameters while our network has only 13.36 M; although its model complexity is higher than ours, our network performs better. From the sample performance images, the segmentation results show that the other state-of-the-art networks produce missegmentation due to color and hair interference. Fig. 8 depicts the Bland-Altman plots comparing the differences between the segmentation areas of the ground truth and the automatic segmentation methods. Compared with Singh et al. [18], our proposed method has a lower average deviation, which illustrates that our model is much more robust.

F. THYROID GLAND SEGMENTATION

The next evaluation task is thyroid gland segmentation [49], whose dataset consists of sixteen records of 3D volumes and their matching ground truth. To match our network input format, the 3D volumes and their corresponding ground truth were split into 4762 individual slices. According to the ground truth marked by the sonographer, the unmarked slices were removed from the dataset. Finally, 3999 images and the corresponding ground truth were retained. The main challenge of this task is the diversity of thyroid tissue size and morphology in the thyroid ultrasound images; the complexity of the peripheral tissue also affects the segmentation performance. The results shown in Table 3 demonstrate that our network is successful and achieves higher efficiency compared to other state-of-the-art networks.
Our proposed network outperformed U-Net with a Dice score of 0.9825, an IoU score of 0.9661, an ASSD score of 0.0508, and an RAVD score of 0.0021. U-Net can segment the general outline of thyroid glands; however, it lacks the ability to segment both blurred and prominent edges. ResUNet performs better than U-Net, as its residual connections enhance the segmentation ability. CE-Net, U-Net++, FCA-Net and Singh et al. show slight oversegmentation or undersegmentation. Compared with the other networks, our model can segment the details of the thyroid edge. We show some samples of segmentation results for visual comparison in Fig. 9. In Fig. 10, all automatic methods dealing with the thyroid gland segmentation task show consistency with manual segmentation, and our proposed method performs better, with a lower average deviation and smaller dispersion.

G. PANCREAS SEGMENTATION

Pancreas segmentation is the last experimental task. This dataset comes from the National Institutes of Health Clinical Center and consists of 82 abdominal contrast-enhanced 3D CT scans from 53 male and 27 female subjects. The pancreas ground truth was manually segmented slice-by-slice by a medical student and inspected by an experienced radiologist. The anatomical structure of the pancreas is complex: it is mainly located in the posterior peritoneum, with very high shape and volume variability among different slices. It is surrounded by adjacent tissues that appear close to the pancreas in CT images, which blurs the segmentation boundaries. Together with the noise of the CT images themselves, partial volume effects and the influence of tissue motion, pancreas segmentation is a very challenging problem. The results are displayed in Table 4. Our network performs better than the other state-of-the-art networks, obtaining the best Dice score of 0.9137, IoU score of 0.8530, ASSD of 0.3079 and RAVD of -0.0069. Samples of segmentation results for visual comparison are illustrated in Fig. 11, and the Bland-Altman plots of these methods are presented in Fig. 12. In comparison with the segmentation samples of the other state-of-the-art networks, we find that SCA-Net is slightly worse in complex boundary segmentation than CE-Net and Singh et al., which have more parameters and fit segmentation boundaries better. However, they are more complex than our network, and our model excels at focusing on specific target areas. Although our SCA-Net has a slightly wider confidence interval than FCA-Net in Fig. 12, the difference is not obvious, and our proposed network has a lower bias.

V. DISCUSSION

For medical image segmentation tasks, better segmentation results help clinicians make a more informed preliminary diagnosis and assist them in clinical treatment. The variety of shapes, sizes and target locations, such as in skin lesions, requires the network to have strong robustness. Earlier methods based on CNNs produced many channel feature maps and kept important features by relying on simple concatenation operations; however, the relevant information between multiscale spatial and channel features is not utilized efficiently in this way. Attention mechanisms explicitly model the relevant feature relationships, which improves the performance of segmentation tasks. Thus, we conceived a novel framework for medical image segmentation. The SAB connects the high- and low-level information from multiple stages to produce more representative contextual features.
Additionally, the CAB redistributes the channel feature responses and strengthens important channel features. To further verify the validity and robustness of the model, we conducted tests in three different medical image domains, including RGB images, MRI slices and ultrasound images. Compared with state-of-the-art networks, our SCA-Net shows a significant improvement on three representative datasets, which demonstrates that SCA-Net performs well across different medical image segmentation tasks. We are interested in applying our network to 3D data in the future. We also find that although our SCA-Net outperforms the other networks in the thyroid gland segmentation task, the segmentation differences are not significant. We believe the reason is that the boundary and the shape of the thyroid gland vary little and the distribution of locations is similar, so our proposed network can discern the boundary effectively. Compared with the skin lesion segmentation task, the ultrasound images are grayscale, which may make it easier to learn the characteristics of the thyroid gland. In the pancreas segmentation task, SCA-Net scored significantly higher than the other networks, showing the effectiveness of our segmentation approach. Compared with other state-of-the-art methods, SCA-Net has fewer parameters and higher efficiency.

VI. CONCLUSION

Medical image segmentation tasks are crucial for clinical analysis and diagnosis. Due to the large variation in the shape and texture of segmented targets, high demands are placed on the robustness and performance of medical image segmentation networks. We introduced a spatial and channel attention network (SCA-Net) in this study, aiming to enhance the performance of medical image segmentation methods. Specifically, we designed the SAB to exploit multiscale spatial information and the CAB to recalibrate the channel information. We trained our SCA-Net, and the results demonstrate the superiority of our method in different tasks, including skin lesion segmentation, thyroid gland segmentation and pancreas segmentation. Our model can be applied to a new task by fine-tuning on a new dataset with manual ground truth. In this paper, we conducted three experiments to verify the effectiveness of our network on 2D medical images. In future work, we will develop an extension to process 3D data.
Spatial Capacity of UWB Networks with Space-Time Focusing Transmission

Space-time focusing transmission in impulse-radio ultra-wideband (IR-UWB) systems exploits the large number of resolvable paths to reduce the interpulse interference as well as the multiuser interference and to simplify the receiver design. In this paper, we study the spatial capacity of IR-UWB systems with space-time focusing transmission where the users are randomly distributed. We derive the power distribution of the aggregate interference and investigate the collision probability between the desired focusing peak signal and interference signals. Closed-form expressions for the upper and lower bounds of the outage probability and the spatial capacity are obtained. The analysis reveals the connections between the spatial capacity and various system parameters and channel conditions, such as the antenna number, frame length, path loss factor, and multipath delay spread, which provide design guidelines for IR-UWB networks.

Introduction

Impulse-radio ultra-wideband (IR-UWB) signals have a large bandwidth, which can resolve a large number of multipath components in densely scattered channels. For communication links connecting different pairs of users, the correlation between multipath channel coefficient vectors is weak even when the user positions are very close [1,2]. Exploiting these characteristics, the time-reversal (TR) prefiltering technique was proposed for IR-UWB communications [3][4][5], which can focus the signal energy at a specific time instant and geometrical position. Space-time focusing transmission has been widely studied in underwater acoustic communications [6,7] and in the UWB radar and imaging areas [8][9][10]. In UWB communications, the TR technique is usually used to enable low-complexity receivers [3][4][5]. By prefiltering the signal at the transmitter side with a temporally reversed channel impulse response, the received signal will have a peak at the desired time and location; the physical channel behaves as a spatial-temporal matched filter. In the time domain, the focused peak is a low duty-cycle signal; thus, interpulse interference is reduced and a simple one-tap receiver can be used. In the space domain, the strong signal only appears at one spot; thus, mutual interference among coexisting users can be mitigated. This is exploited for multiuser transmission in [11], where different users employ time-shifted channel impulse responses as their prefilters. TR techniques were extended to multiantenna transmission in recent years. Applying the TR technique to multiple input single output (MISO) systems was investigated by experiments in [12][13][14], and to multiple input multiple output (MIMO) systems in [2,15,16]. With multiple antennas, the focused area is sharper in both the time and space domains [17], and thereby the interference is significantly mitigated. To achieve a better interference-suppressing capability than the TR prefilter, advanced preprocessors based on zero-forcing and minimum-mean-square-error criteria were used in [18,19]. To reduce the preprocessing complexity and the feedback overhead for acquiring the channel information, a precoder based on channel phase information was proposed in [20], where some performance loss is nevertheless unavoidable. A general precoding framework for UWB systems, where the codeword can take any real value, is considered in [21].
The detection performance is traded off against the communication and computational cost by adjusting the number of bits used to represent each codeword.

IR-UWB communications are favorable for ad hoc networks with randomly distributed nodes, where transmission links are built in a peer-to-peer manner. Although experimental results demonstrate that space-time focusing transmission leads to much lower sidelobes of the transmitted signal, the impact of this kind of interference on the accommodable user density and spatial capacity has not been studied, as far as the authors know. For a given outage probability, the spatial capacity is the maximal sum transmission rate of all users who can communicate peer-to-peer simultaneously in a fixed area. In a landmark paper on ad hoc network capacity [22], the authors showed that the throughput for each node vanishes as $1/\sqrt{n}$ when the channel is shared by $n$ identical randomly located nodes with a random access scheme. Some results on user capacity for direct-sequence code-division multiple-access (DS-CDMA) and frequency-hopping (FH)-CDMA systems were presented in [23,24]. Essentially, space-time focusing transmission in IR-UWB systems accesses the channel with a combined random time-division and random code-division scheme. On one hand, IR-UWB signals are low duty-cycle; after the prefiltering and multipath propagation, the cochannel interference signals are low duty-cycle as well if interpulse interference is absent. On the other hand, the cochannel interference has a random power and occupies a fraction of the pulse repetition period. The performance of the desired user degrades only when its focused peak collides with interference signals and the aggregate interference power exceeds its tolerance. The random propagation delay of the low duty-cycle signal leads to a random accessing time, and the random multipath response of the communication link induces a random "spreading code". A large number of multipath components provides a high "spreading gain", but may also lead to a large collision probability. The combined impact on the spatial capacity is still not well understood. In this paper, we model the aggregate interference power with two heavy-tailed distributions, that is, the Cauchy and Lévy distributions, for path loss factors of 2 and 4, respectively. These yield explicit expressions for the upper and lower bounds of the spatial capacity, which show clearly the connections between the spatial capacity and the frame length, multipath delay spread, pulse width, transmit antenna number, link distance, outage probability constraint, and so forth. We also obtain the optimal interference tolerance for each transmission link that maximizes the spatial capacity under different channel conditions.

The rest of this paper is organized as follows. Section 2 introduces the network setting and the UWB space-time focusing transmission system. Sections 3 and 4 derive the outage probability in additive white Gaussian noise (AWGN) channels and in multipath and multiantenna channels, respectively. Section 5 presents the closed-form expressions of the accommodable user density and the spatial capacity. Simulation and numerical results are provided in Section 6 to verify the theoretical analysis. The paper is concluded in Section 7.

System Description

We consider ad hoc networks without coordinators, where half-duplex nodes are distributed uniformly within a circle, as shown in Figure 1(a); a sketch of this node placement is given below.
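A short sketch of this node placement follows; the only subtlety is that uniformity over the disc requires the radius to be drawn as $R\sqrt{U}$ rather than $RU$.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_nodes(n, radius):
    """Drop n nodes uniformly over a disc. The radial coordinate must be
    radius*sqrt(U), not radius*U, or nodes would cluster near the center."""
    r = radius * np.sqrt(rng.random(n))
    theta = 2.0 * np.pi * rng.random(n)
    return r * np.cos(theta), r * np.sin(theta)

x, y = drop_nodes(100_000, radius=100.0)
# Uniformity check: a disc of half the radius holds ~1/4 of the nodes.
print(np.mean(np.hypot(x, y) < 50.0))   # ~0.25
```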
Each node is either a transmitter or a receiver. Without loss of generality, we regard the receiver at the center as the desired user and all transmitters except the desired one as interference users. This is an interference channel problem, whose equivalent model is shown in Figure 1(b). The link distance between the desired transmitter and receiver is r_D, while the link distances between the interference transmitters and the desired receiver are random variables whose values are less than a threshold distance r_T, where r_T is much larger than r_D. The weak interference from outside r_T is neglected. We will show in Section 5 that such a threshold distance is unnecessary when we consider the per-area user capacity. In IR-UWB systems, the transmitted signals are pulse trains modulated by the information data. For brevity, we only consider pulse amplitude modulation, since the spreading gain and collision probability of pulse position modulation will be the same with a random transmit delay. In AWGN channels, the channel response is h(t) = δ(t), so the TR prefilter is also δ(t). The transmitted signal of the kth user is then

s^(k)(t) = √(P_t) Σ_i x_i^(k) p(t − iT_s),

where P_t is the transmit power, x_i^(k) is the ith data symbol, p(t) is the UWB short pulse with width T_p and normalized energy, and T_s is the pulse repetition period, or the frame length in UWB terminology. In each frame, there are N_s = T_s/T_p time slots. In multipath channels, define the channel response between transmitter j and receiver k as

h_{j,k}(t) = Σ_{l=1}^{L(j,k)} a_l(j,k) δ(t − τ_l(j,k)),

where L(j,k) is the total number of specular reflection paths with amplitudes a_l(j,k) and delays τ_l(j,k). Since the channel response has no imaginary part in IR-UWB systems, the TR prefilter at the kth transmitter for the kth receiver is h_{k,k}(−t), and the transmitted signal is

s^(k)(t) = √(P_t) Σ_i x_i^(k) p(t − iT_s) * h_{k,k}(−t),

where "*" denotes the convolution operation. At the intended receiver k, the received signal is a summation of the signals from all N_u coexisting users, further filtered by the multipath channels, that is,

y^(k)(t) = Σ_{j=1}^{N_u} A_{j,k} s^(j)(t − τ_{j,k}) * h_{j,k}(t) + z(t),

where A_{j,k} and τ_{j,k} are the signal amplitude attenuation and the random propagation delay from transmitter j to receiver k, respectively, and z(t) is the AWGN. Since the prefilter h_{k,k}(−t) is matched to the channel response h_{k,k}(t), there will be a focused peak at t = iT_s + τ_{k,k} that carries the desired information from transmitter k. The unintended cochannel interference from the other transmitters behaves as random dispersion, since h_{j,j}(t) and h_{j,k}(t) are weakly correlated. When each transmitter is equipped with M antennas, the channel responses from each transmit antenna to the receive antenna are different. As a result, the prefilters at different transmit antennas are different. Denote the channel response and the propagation delay from the mth antenna of transmitter j to receiver k as h_{j,k,m}(t) and τ_{j,k,m}, and the average propagation delay from transmitter j to receiver k as τ_{j,k}, respectively. Define Δ_{j,k,m} = τ_{j,k,m} − τ_{j,k} as the transmit delay at the mth antenna; the transmitted signal at the mth antenna of transmitter k is then the TR-prefiltered data signal, delay-compensated by Δ_{k,k,m} and scaled in power by 1/M, and the received signal of the desired user is the corresponding superposition over all users and antennas, where the amplitude attenuation coefficient A_{j,k} reflects the large-scale fading between transmitter j and receiver k, and h_{j,k,m}(t) is the small-scale fading. From each antenna of transmitter k there is a focused signal; these M peaks all arrive at the time instant t = iT_s + τ_{k,k} and accumulate coherently, so an array gain M can be obtained. Assume that there is no intersymbol interference.
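To make the focusing mechanism concrete, the following is a minimal numerical sketch (not the authors' code) of TR prefiltering over a random discrete multipath channel; the tap counts and delay-spread value are illustrative assumptions.

```python
# Minimal sketch of time-reversal focusing over a random discrete multipath
# channel: the desired link yields a sharp autocorrelation peak, while an
# independent interference channel yields only a dispersed cross-correlation.
import numpy as np

rng = np.random.default_rng(0)

def multipath_channel(n_taps: int, tau_rms_taps: float) -> np.ndarray:
    """Random channel with an exponentially decaying power delay profile."""
    pdp = np.exp(-np.arange(n_taps) / tau_rms_taps)
    h = rng.standard_normal(n_taps) * np.sqrt(pdp)
    return h / np.linalg.norm(h)            # normalize channel energy to 1

h_desired = multipath_channel(200, 20.0)    # h_{k,k}
h_interf  = multipath_channel(200, 20.0)    # h_{j,k}, independent of h_{k,k}

prefilter = h_desired[::-1]                 # TR prefilter h_{k,k}(-t)

focus    = np.convolve(prefilter, h_desired)  # autocorrelation: sharp peak
disperse = np.convolve(prefilter, h_interf)   # cross-correlation: dispersed

print("desired peak power     :", np.max(focus ** 2))     # close to 1
print("interference peak power:", np.max(disperse ** 2))  # much smaller
```

Because the channel energies are normalized, the desired-link peak power is close to 1, while the interference cross-correlation spreads its unit energy over roughly twice the delay spread, which is exactly the dispersion exploited in the sequel.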
The receiver k can apply a pulse-matched filter and then simply sample the focused peak for detection. In these samples, the signal energy from the desired transmitter k is fully collected, while only part of the energy from the interferers is present, owing to the dispersion of the interference signals. This leads to a power gain, referred to as the spreading gain because of its similarity to the gain obtained in conventional spreading systems. The value of this gain depends on the delay spread and the cross-correlation of the channel responses h_{j,j,m}(t) and h_{j,k,m}(t). When the duration of h_{j,j,m}(−t) * h_{j,k,m}(t) is less than the frame length T_s, the signal from transmitter j may not collide with the focused peak and thereby does not degrade the detection performance of the desired user k. A long T_s will produce a low collision probability. This leads to another gain that mitigates the interference, referred to as the time-focusing gain. The value of this gain depends approximately on the ratio of the frame length to the multipath delay spread, as will be shown in Section 4. When multiple antennas are used at each transmitter, the array gain obtained is in fact a space-focusing gain. Since the focused signals from the M antennas arrive at the same time, the number of transmit antennas does not affect the collision probability between the desired signal and the interference. The value of this gain depends only on the antenna number, that is, G_A = M.

Outage Probability in AWGN Channels

Outage probability is an important measure of transmission reliability. In the considered system, the outage probability depends on the number of interference users. When the interference from other users collides with the focused peak signal and the aggregate interference power exceeds the tolerance of the intended receiver, an outage happens. The spatial capacity is obtained as the maximal accommodable user number multiplied by the single-user transmission rate, given the outage probability constraint. In this section, we derive the outage probability of IR-UWB systems in AWGN channels. We first study the distribution of the single-user interference and the aggregate interference; then the collision probability between the desired and interference pulse signals is derived. The outage probability is finally obtained by considering both the impact of the interference power and the impact of the collision probability. The benefit of using interference avoidance techniques will also be addressed. It should be noted that we consider different path loss factors here, which may be an abuse of the concept of an "AWGN channel". Although the AWGN channel is appropriate for modeling free-space propagation environments where the path loss factor is 2, the results in this section facilitate the derivation of the outage probability in multipath and multiantenna channels later. In AWGN channels, each pulse is assumed to occupy one time slot; thus the pulses of different users either collide completely or do not collide at all.

The Statistics of Single-User Interference. In AWGN channels, the received signals are the combined pulse trains from all users with different delays. When the pulses from different users fall in the same time slot, mutual interference appears.
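As a quick illustrative sketch (not from the paper), the slotted collision model just described can be checked numerically; the binomial form used here anticipates the collision probability derived later in this section, and the parameter values are assumptions.

```python
# Sketch of the slotted collision model: with N_s slots per frame and N_u
# interferers landing independently and uniformly in slots, the probability
# that the desired slot is hit by at least one interferer is
# 1 - (1 - 1/N_s)**N_u.
from math import comb

def collision_pmf(n: int, n_users: int, n_slots: int) -> float:
    """Probability that exactly n of the N_u interferers share the desired slot."""
    p = 1.0 / n_slots
    return comb(n_users, n) * p ** n * (1 - p) ** (n_users - n)

N_s, N_u = 10, 20                          # illustrative values
p_any = 1 - (1 - 1 / N_s) ** N_u
print(f"P(at least one collision) = {p_any:.3f}")
print(f"check via pmf             = "
      f"{sum(collision_pmf(n, N_u, N_s) for n in range(1, N_u + 1)):.3f}")
```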
Consider one interference user whose distance to the desired user is r. Since the interference users are uniformly distributed inside a circle of radius r_T, the PDF of r is

f_r(r) = 2r/r_T², 0 ≤ r ≤ r_T. (8)

The interference power depends on the propagation distance r and the path loss factor α, that is, [25]

P_r = P_0 r^(−α), (9)

where P_0 = P_t v_c² r_0^α/(4π f_c r_0)² is the received power at a reference distance r_0, f_c is the center frequency, and v_c is the speed of light. Note that the expression (9) is only exact in narrowband systems, since in UWB systems P_0 cannot be determined by the center frequency alone. Nonetheless, in the following analysis we normalize the received power by P_0, so this does not affect the derived outage probability. In free-space propagation the path loss factor is α = 2, while in urban propagation environments the path loss factor can be as large as 4. Other values of α between 2 and 4 reflect various propagation environments in suburban and rural areas. Knowing the PDF of the interference distance in (8), we can then obtain the PDF of the interference power as

f_{P_r}(x) = (2 P_0^(2/α)/(α r_T²)) x^(−2/α−1), x ≥ P_0 r_T^(−α). (10)

It shows that P_r has a heavy-tailed distribution, which means that its tail probability decays according to a power law instead of an exponential law [26]. To simplify the notation, we define a normalized interference power as

λ = P_r/(P_0 r_T^(−α)). (11)

Its PDF can be obtained as

f_λ(x) = (2/α) x^(−2/α−1), x ≥ 1. (12)

The Statistics of Aggregate Interference. When there is more than one interference user, the PDF of the aggregate interference power is the multifold convolution of (12). It is hard to obtain its closed-form expression. Observing (12), we find that the distribution of λ can be approximated by a Cauchy distribution when α = 2, and by a Lévy distribution when α = 4. The Cauchy and Lévy distributions are both heavy-tailed stable distributions, and their PDFs have explicit expressions. (Stable distributions generally do not have explicit expressions for their density functions, except in three special cases, i.e., the Gaussian, Cauchy, and Lévy distributions.) A random variable is stable when a linear combination of two independent copies of the variable has the same distribution, up to location and scale parameters [26]. Therefore, if we model the interference power from one user as Cauchy or Lévy distributed, the aggregate interference power from multiple users will also have a Cauchy or Lévy distribution. This allows us to obtain closed-form expressions of the outage probabilities. Furthermore, we can use the PDFs of the Cauchy and Lévy distributions as the lower and upper bounds of (12) to accommodate various values of α, that is, to investigate the impact of various propagation environments.
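The heavy-tailed behavior of (12) and the tightness of the two stable approximations can be checked with a short Monte Carlo sketch (an assumption-laden illustration, not the authors' code); it uses the single-sided Cauchy(0, π/2) and Lévy(1, π/2) parameterizations introduced in the sequel.

```python
# Monte Carlo sketch of the normalized single-user interference power
# lambda = (r/r_T)^(-alpha) for r uniform in a disk, compared against the
# Cauchy (alpha = 2) and Levy (alpha = 4) tail approximations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000
u = np.sqrt(rng.random(n))        # r/r_T for points uniform in the unit disk

x = 10.0                          # tail threshold
for alpha, name in [(2, "Cauchy"), (4, "Levy")]:
    lam = u ** (-alpha)
    emp_tail = np.mean(lam > x)
    if alpha == 2:
        # single-sided Cauchy(0, pi/2): P(X > x) = (2/pi) * arctan(pi/(2x))
        bound = (2 / np.pi) * np.arctan(np.pi / (2 * x))
    else:
        # Levy(1, pi/2): P(X > x) = 1 - erfc(sqrt(pi/(4(x - 1))))
        bound = 1 - stats.levy(loc=1, scale=np.pi / 2).cdf(x)
    print(f"alpha={alpha}: empirical tail {emp_tail:.4f}, {name} tail {bound:.4f}")
```

For α = 2 the empirical tail (0.1 at x = 10) slightly exceeds the Cauchy value, consistent with its role as a lower bound, while for α = 4 the Lévy tail slightly exceeds the empirical one.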
The Cauchy distribution has the PDF [26]

f(x; x_0, b) = (1/π) b/((x − x_0)² + b²), (13)

and the cumulative distribution function (CDF) [26]

F(x; x_0, b) = 1/2 + (1/π) arctan((x − x_0)/b), (14)

where x_0 is the location parameter indicating the peak position of the PDF, and b is the scale parameter indicating where the PDF decays to one half of its peak value. When n independent Cauchy random variables with the same location and scale parameters are added together, their sum still follows a Cauchy distribution, where the location parameter becomes n x_0 and the scale parameter becomes n b. When α = 2, the PDF of λ can be lower bounded by a Cauchy distribution with x_0 = 0 and b = π/2, that is,

f_λ(x) ≥ (2/π) (π/2)/(x² + (π/2)²), x ≥ 1, (15)

where the coefficient 1/π of the standard Cauchy distribution is replaced by 2/π because of the single-sided constraint λ ≥ 1, so that the integral of f_λ(x) over λ is still 1. The sum of n independent copies of λ, defined as Λ_n, still follows a Cauchy distribution if the constraint λ ≥ 1 is not considered. The CDF of Λ_n can then be obtained as

F_{Λ_n}(x; 0, nπ/2) = (2/π) arctan(2x/(nπ)). (16)

When the constraint is considered, the practical PDF of Λ_n has a heavier tail than that obtained from the Cauchy distribution, and thus the practical CDF of Λ_n is smaller than F_{Λ_n}(x; 0, nπ/2). However, we will see in the later simulations that (16) is a quite tight bound when few interference users exist. The Lévy distribution has the PDF [26]

f(x; x_0, c) = √(c/(2π)) exp(−c/(2(x − x_0)))/(x − x_0)^(3/2), x > x_0, (17)

and the CDF [26]

F(x; x_0, c) = erfc(√(c/(2(x − x_0)))), (18)

where x_0 is the location parameter, c is the scale parameter, and erfc(·) is the complementary error function, defined as erfc(x) = (2/√π) ∫_x^∞ e^(−t²) dt. When n independent Lévy random variables with the same location and scale parameters are added together, their sum still follows a Lévy distribution, where the location parameter becomes n x_0 and the scale parameter becomes n²c. When α = 4, the PDF of λ can be approximated by a Lévy distribution with x_0 = 1 and c = π/2, that is,

f_λ(x) ≈ f(x; 1, π/2), (19)

where the constraint λ ≥ 1 is satisfied by the definition of the Lévy distribution. Using this bound, the CDF of the sum interference power Λ_n can be obtained as

F_{Λ_n}(x; n, n²π/2) = erfc(n√(π/(4(x − n)))). (20)

Figure 2 shows the practical PDFs of the normalized interference power λ when α = 2, 3, 4, as well as the lower and upper bounds obtained from the Cauchy and Lévy distributions, respectively. We can see that the bounds are tight when the interference powers are strong.

Outage Probability. Similar to (11), we define the normalized signal power as

λ_D = P_D/(P_0 r_T^(−α)) = (r_T/r_D)^α. (21)

Assume that the required signal-to-interference-plus-noise ratio (SINR) for reliable transmission is β, that is, reliable communication requires

λ_D/(Λ + λ_N) ≥ β, (22)

where λ_N = P_N/(P_0 r_T^(−α)) is the normalized noise power. If the SNR of the desired user is given as γ, that is, λ_D/λ_N = γ, then the normalized interference power tolerance will be

λ_I = μ r_T^α/r_D^α, (23)

where μ = 1/β − 1/γ. The communication breaks down when the normalized interference power exceeds λ_I. We first consider that the pulses from n interference users arrive in the same time slot as that of the desired user; the outage probability of the desired user is then bounded as

P(Λ_n > λ_I) ≤ erf(n√(π/(4(λ_I − n)))), UB,
P(Λ_n > λ_I) ≥ (2/π) arctan(nπ/(2λ_I)), LB, (24)

where erf(x) = 1 − erfc(x) is the error function, "UB" stands for upper bound, and "LB" stands for lower bound. The upper bound is derived from the Lévy distribution and the lower bound from the Cauchy distribution. Since there are N_s time slots in a frame, if there are N_u interference users in total, the number of users that occupy the same time slot as the desired user is a random variable. The probability that n users collide with the desired user is

p_{N_u}(n) = C^n_{N_u} (1/N_s)^n (1 − 1/N_s)^(N_u−n), (25)

where C^n_{N_u} is the binomial coefficient for n out of N_u. It is apparent that increasing N_s will reduce the collision probability and thus the average outage probability. This is the benefit brought by the low duty-cycle characteristic of IR-UWB signals. The average outage probability is the sum over all the possibilities that n users generate interference and their aggregate power exceeds the designed tolerance, that is,

P_out(N_u) = Σ_{n=1}^{N_u} p_{N_u}(n) P(Λ_n > λ_I). (26)

Remarks 1. If the desired user can avoid the interference by transmitting in the slot with the minimal interference power, then an outage happens only when no time slot is available for transmission, that is, when the interference power is larger than the designed tolerance λ_I in all the N_s time slots. As a result, the outage probability is reduced to the N_s-th power of the per-slot outage probability,

P_out^IA(N_u) = [P_out(N_u)]^(N_s). (27)

This is the minimum outage probability that an uncoordinated IR-UWB network is able to achieve.
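The bounds (24) together with the collision probability (25) are straightforward to evaluate; the following is a minimal sketch under the numerical setup later used in Section 6 (SNR 10 dB, SINR target 4 dB, r_T/r_D = 10), with the user and slot counts as assumptions.

```python
# Sketch implementing the average outage probability (26) with the
# Cauchy/Levy bounds (24) and the slot-collision probability (25).
from math import comb, erf, atan, pi, sqrt

def outage_bounds(N_u: int, N_s: int, lam_I: float):
    """Lower (Cauchy) and upper (Levy) bounds on the average outage (26)."""
    ub = lb = 0.0
    for n in range(1, N_u + 1):
        p_n = comb(N_u, n) * (1 / N_s) ** n * (1 - 1 / N_s) ** (N_u - n)
        if lam_I > n:
            p_ub = erf(n * sqrt(pi / (4 * (lam_I - n))))   # Levy tail
        else:
            p_ub = 1.0    # the sum of n Levy(1, .) variables always exceeds n
        p_lb = (2 / pi) * atan(n * pi / (2 * lam_I))       # Cauchy tail
        ub += p_n * p_ub
        lb += p_n * p_lb
    return lb, ub

mu = 1 / 10 ** 0.4 - 1 / 10 ** 1.0       # mu = 1/beta - 1/gamma
lam_I = mu * (1000 / 100) ** 2           # tolerance (23) with alpha = 2
lb, ub = outage_bounds(N_u=10, N_s=10, lam_I=lam_I)
print(f"average outage probability in [{lb:.2e}, {ub:.2e}]")
```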
If all the users can further coordinate their transmit delays, the interference signals from all links may be aligned to occupy only part of the frame period, excluding the slot used by the desired user; interference-free transmission can then be realized. The transmission scheme design for interference alignment is outside the scope of this paper; see [27,28] and the references therein.

Outage Probability in Multipath and Multiantenna Channels

In multipath channels with TR transmission, a large multipath delay spread provides a high spreading gain but induces a high collision probability among users. In this section, we first derive the spreading gain and the collision probability, given the power delay profile of the multipath channels. Then the expressions of the outage probability in multipath channels, with and without multiple antennas at each transmitter, are developed.

Spreading Gain and Collision Probability. It is known that the small-scale fading of UWB channels is not severe. Therefore, it is reasonable to assume that the received signal power only depends on the path loss and the shadowing [1,29]. Assume that ∫_0^∞ |h_{i,j}(t)|² dt = 1, that is, the energy of the multipath channel is normalized, and that τ_max < T_s, that is, there is no ISI. Assume that the channel's power delay profile follows an exponential decay (for mathematical tractability, we employ here a simple UWB channel model without considering cluster features; the more realistic IEEE 802.15.4a channel model will be used in the simulations to verify the analytical results), that is,

P(τ) = (1/τ_RMS) exp(−τ/τ_RMS), τ ≥ 0, (28)

where τ_RMS is the root-mean-square (RMS) delay spread of the channel. From (4), we know that the composite response, that is, the convolution of the prefilter and the channel, of the desired link is h̃_{k,k}(t) = h_{k,k}(−t) * h_{k,k}(t), which has a focusing peak at t = 0, and the energy of the peak is ∫_0^∞ |h_{k,k}(t)|² dt = 1. The duration of the peak signal is 2T_p due to the pulse-matched filter, so its power is 1/(2T_p). Similarly, the composite response of the interference link is h̃_{j,k}(t) = h_{j,j}(−t) * h_{j,k}(t), which is a random process whose average power is obtained as

E[|h̃_{j,k}(t)|²] = (1/(2τ_RMS)) exp(−|t|/τ_RMS), (29)

where this result follows from the uncorrelated property of the two channels. We can see that the average interference channel power follows a double-sided exponential decay. To obtain explicit expressions of the spreading gain and the collision probability, we approximate the profile of the average interference power by a rectangle with the same area. The impact of this approximation will be shown through simulations in Section 6. Since the sum power of the interference channel is

∫ E[|h̃_{j,k}(t)|²] dt = 1, (30)

and the maximal value of (29) is 1/(2τ_RMS), the rectangle has length 2τ_RMS given the height 1/(2τ_RMS). The approximated interference channel power is thus always 1/(2τ_RMS) over a duration of 2τ_RMS. Since the desired channel has power 1/(2T_p) and the interference channel has power 1/(2τ_RMS), the spreading gain can be obtained as

G_S = τ_RMS/T_p, (31)

which reflects the interference-suppression capability of the TR prefilter in multipath channels.
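The rectangle approximation used above can be verified in a few lines; the sketch below uses the delay-spread and pulse-width values later assumed in Section 6.

```python
# Sketch verifying the rectangle approximation: the double-sided exponential
# profile (1/(2*tau)) * exp(-|t|/tau) has total area 1 and peak 1/(2*tau),
# so a rectangle of height 1/(2*tau) and width 2*tau matches both.
import numpy as np

tau_rms = 10e-9    # 10 ns RMS delay spread (value used in Section 6)
T_p     = 1e-9     # 1 ns pulse width

t = np.linspace(-10 * tau_rms, 10 * tau_rms, 200_001)
profile = np.exp(-np.abs(t) / tau_rms) / (2 * tau_rms)

print("profile area ≈", round(np.trapz(profile, t), 4))   # ≈ 1
print("profile peak =", profile.max(), "= 1/(2*tau_RMS)")

G_S = tau_rms / T_p                  # spreading gain (31)
print("spreading gain G_S =", G_S)   # 10 for these values
```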
Since the frame length is T_s and the approximated interference duration is 2τ_RMS, the probability that the signal of one interference user collides with the focused peak of the desired user is approximately

δ = 2τ_RMS/T_s. (32)

The reciprocal of δ is in fact the time-focusing gain, that is,

G_T = T_s/(2τ_RMS), (33)

which reflects the interference-mitigation capability of the TR prefilter through near-orthogonal sharing of the time resource, exploiting the low duty-cycle feature of IR-UWB signals. When N_u users exist in total, the probability that n users simultaneously interfere with the desired user is

p_{N_u}(n) = C^n_{N_u} δ^n (1 − δ)^(N_u−n). (34)

Outage Probability. Due to the spreading gain, the influence of interference on the decision statistics in multipath channels reduces to 1/G_S of that in AWGN channels when the same interference power is received. Consequently, when there are n interference signals, an outage happens when the sum power of the interference signals Λ_n exceeds G_S λ_I. The average outage probability in multipath channels is then

P_out(N_u) = Σ_{n=1}^{N_u} p_{N_u}(n) P(Λ_n > G_S λ_I). (35)

When each transmitter is equipped with M antennas, the output power at each antenna reduces to 1/M of that in the single-antenna case. At the receiver, the desired signal is increased by the array gain, while both the interference power and the collision probability between the interference and the desired signals remain unchanged. Considering the antenna gain G_A, the spreading gain G_S, and the collision probability p_{N_u}(n) in multipath channels, the average outage probability when using multiple antennas is obtained as

P_out(N_u) = Σ_{n=1}^{N_u} p_{N_u}(n) P(Λ_n > G_A G_S λ_I). (36)

This outage probability can also be reduced significantly if the desired user can choose the time slot with the lowest interference power for transmission; the corresponding expression is identical in form to (27).

Accommodable User Density. Given a required outage probability ε, the accommodable user number in the network can be expressed as

N_u(ε) = max{N_u : P_out(N_u) ≤ ε}. (37)

Observing (36), we find that the outage probability is associated with two terms, namely p_{N_u}(n) and P(Λ_n > G_A G_S λ_I). The second term involves an error function and an arctangent function in the upper and lower bounds, respectively. We can obtain much simpler expressions of these two functions by introducing approximations. The Maclaurin series expansions of erf(x) and arctan(x) are

erf(x) = (2/√π)(x − x³/3 + x⁵/10 − ···), (38)
arctan(x) = x − x³/3 + x⁵/5 − ···. (39)

When the outage probability is small, both the error function and the arctangent function can be approximated by linear functions, that is, erf(x) ≈ (2/√π)x and arctan(x) ≈ x. Using these approximations, (36) can be simplified. Recall from (23) that λ_I = μ r_T^α/r_D^α; it will be much larger than n when the threshold distance r_T approaches infinity. Therefore, in the following approximations, we replace the term G_A G_S λ_I − n with G_A G_S λ_I in the expression of the upper bound. Using the property Σ_{n=1}^{N_u} n p_{N_u}(n) = N_u δ and the relationship arctan(x) = π/2 − arctan(1/x) for x > 0, the upper and lower bounds of the outage probability become

P_out^UB(N_u) ≈ N_u/(G_T √(G_A G_S λ_I)),
P_out^LB(N_u) ≈ N_u/(G_T G_A G_S λ_I). (43)

Therefore, given the outage probability constraint P_out(N_u) = ε, the accommodable user number can be expressed as

N_u^UB = ε G_T G_A G_S λ_I,
N_u^LB = ε G_T √(G_A G_S λ_I). (44)

In contrast to the outage probability, the upper bound of the accommodable user number is obtained from the Cauchy distribution, which is approached when α = 2, and the lower bound is obtained from the Lévy distribution, which is approached when α = 4. It is shown by (44) that increasing the time-focusing gain, the space-focusing gain, and the spreading gain all lead to a higher accommodable user number. However, the growth rate differs between the upper bound and the lower bound.
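The closed forms in (44) are easy to exercise numerically; the following sketch evaluates them from the three gains, with all parameter values being illustrative assumptions rather than values from the paper's figures.

```python
# Sketch evaluating the closed-form accommodable user numbers in (44).
from math import sqrt

def accommodable_users(eps, M, tau_rms, T_p, T_s, lam_I):
    G_A = M                      # space-focusing gain
    G_S = tau_rms / T_p          # spreading gain (31)
    G_T = T_s / (2 * tau_rms)    # time-focusing gain (33)
    n_ub = eps * G_T * G_A * G_S * lam_I         # Cauchy side, alpha = 2
    n_lb = eps * G_T * sqrt(G_A * G_S * lam_I)   # Levy side,   alpha = 4
    return n_lb, n_ub

n_lb, n_ub = accommodable_users(eps=0.01, M=4, tau_rms=10e-9,
                                T_p=1e-9, T_s=100e-9, lam_I=30.0)
print(f"accommodable users between {n_lb:.1f} and {n_ub:.1f}")
```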
In fact, these three gains are not totally independent. The space-focusing gain can be provided by using more than one transmit antenna, but the spreading gain and the time-focusing gain both rely on the multipath channel response. As shown in (31) and (33), a large delay spread will introduce a high spreading gain but a low time-focusing gain. As a result, it can be observed from (44) that a longer channel delay spread will not lead to more coexisting users. Upon substituting (23) into (44) and dividing by the network area π r_T², we obtain the accommodable user density, that is, the per-area user number, as

ρ^UB = ε M μ T_s/(2π T_p r_D²),
ρ^LB = (ε T_s/(2π r_D²)) √(M μ/(τ_RMS T_p)). (45)

The auxiliary variable r_T, assumed at the beginning as an interference distance threshold, then vanishes.

Spatial Capacity. The expressions (44) and (45) tell us how many users can be accommodated in a given area. However, they do not fix the transmission rate of each user, so the sum rate of all users in a given area is not yet known. In IR-UWB systems, the symbol rate R_s is determined by the reciprocal of the frame duration T_s, and the number of bits modulated onto each symbol is determined by the SINR of the received signals. According to Shannon's channel capacity formula, the achievable transmission rate of each user will be

R = (1/T_s) log₂(1 + β), (46)

given the SINR of the desired user β as in (22). From (23) we know that, in an interference-limited environment, the impact of the cochannel interference is dominant and the impact of noise can be neglected; therefore, β can be approximated as 1/μ. The sum data rate of all users in a unit area can be obtained as

C^UB = (ε M/(2π T_p r_D²)) μ log₂(1 + 1/μ),
C^LB = (ε/(2π r_D²)) √(M/(τ_RMS T_p)) √μ log₂(1 + 1/μ). (47)

In the expression of the upper bound, the term μ log₂(1 + 1/μ) is an increasing function of μ that approaches its supremum 1/ln 2 ≈ 1.44 as μ approaches infinity. In the expression of the lower bound, the term √μ log₂(1 + 1/μ) has an interior maximum; its peak value, obtainable by numerical optimization, is 1.16, attained at μ = 0.255. Substituting these results into (47), we obtain the maximal value of the sum rate, that is, the spatial capacity, as

C^UB = 1.44 ε M/(2π T_p r_D²),
C^LB = 1.16 (ε/(2π r_D²)) √(M/(τ_RMS T_p)). (48)

Through this expression, we can observe the impact of the various parameters. In the following, we analyze this expression and provide some insights into the design of space-time focusing transmission UWB systems.

Impact of Single-User Transmission Rate. It is seen from (48) that the spatial capacity is independent of the two parameters μ and T_s. However, μ and T_s determine the single-user transmission rate, as shown in (46), (22), and (23). The spatial capacity depends on the single-user transmission rate in two ways. If the single-user transmission rate is increased by reducing T_s, the accommodable user number will be correspondingly decreased, and the spatial capacity will not change. This is why the spatial capacity does not depend on T_s. There are optimal values of μ that maximize the upper and lower bounds of the sum data rate. For α = 2, the optimal μ is infinity, which means the optimal SINR is infinitesimal. To ensure error-free communications, it would then be better to apply low-rate coding, low-level modulation, large spreading gains, and so forth. For α = 4, the optimal operating point is SINR = 6 dB (1/μ ≈ 4), which is a normal value for a nonspreading communication system [30].

Impact of Path Loss Factor. When the path loss factor differs, the relationship between the spatial capacity and the parameters M, τ_RMS, and T_p differs as well. Since τ_RMS/T_p = G_S, the upper bound is 1.24 √(M G_S) times larger than the lower bound. This indicates that a large path loss factor will reduce the spatial capacity. When the path loss factor is large, although both the desired signal power and the interference power attenuate faster, the aggregate interference power is more likely to exceed the interference tolerance for a given total user number.
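The two operating-point constants quoted above can be verified numerically; the following is a minimal sketch (the search interval is an assumption).

```python
# Numerical check of the constants appearing in the sum-rate expressions (47).
import numpy as np
from scipy.optimize import minimize_scalar

f_ub = lambda mu: mu * np.log2(1 + 1 / mu)            # -> 1/ln(2) ~ 1.44
f_lb = lambda mu: np.sqrt(mu) * np.log2(1 + 1 / mu)   # interior maximum

print("f_ub at mu = 1e6:", f_ub(1e6))   # approaches 1.4427 as mu -> infinity

res = minimize_scalar(lambda mu: -f_lb(mu), bounds=(1e-4, 10), method="bounded")
print(f"optimal mu = {res.x:.3f}, peak value = {f_lb(res.x):.3f}")
# prints approximately: optimal mu = 0.255, peak value = 1.160
```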
Impact of the Delay Spread. It can be observed that the delay spread does not affect the spatial capacity when α = 2, whereas the spatial capacity decreases in proportion to 1/√τ_RMS when α = 4. As analyzed earlier, a large delay spread will introduce a high spreading gain, while it will also increase the collision probability among users. It can be seen from (44) that, when α = 2, these two competing factors exactly balance. However, when α = 4, the effect of the spreading gain enters only through a square root, so it cannot compensate for the performance degradation caused by the collisions.

Impact of the Array Gain. We can see that the spatial capacity grows linearly with the antenna number M when α = 2 and grows sublinearly, with √M, when α = 4.

Impact of the Link Distance. The spatial capacity decreases with r_D², regardless of whether the path loss factor equals 2 or 4. As shown in (45), to guarantee a given outage probability, the user density must be reduced when the coverage of the single-hop link increases.

Remarks. We have seen that the spreading gain and the time-focusing gain inhibit each other in improving the spatial capacity. To break this balance, there are two possible approaches. The first is to apply an interference avoidance technique, which makes a user access the channel in a time slot with weaker interference. The collision probability is thereby reduced without altering the spreading gain. In a decentralized network, interference avoidance might be hard to implement, since the optimal transmit time slot of one user depends on the transmit time slots of the other users, and it will soon change when a user enters or leaves the network. Therefore, decentralized interference coordination schemes, such as the interference alignment technique [31,32], are worth studying for space-time focusing UWB transmission systems in future research. The second approach is to apply advanced prefilters instead of the TR prefilter, such as those introduced in [18,19]. With an enhanced interference mitigation capability, a larger spreading gain can be obtained for a given multipath channel delay spread, that is, for a given time-focusing gain.

Simulation and Numerical Results

In this section, we verify the outage probability expressions derived for AWGN and multipath channels through simulations. Since the spatial capacity is obtained from these outage probability expressions, it is thereby verified as well, albeit indirectly. In the simulations, we set the link distance of the desired user to r_D = 100 m and the threshold distance of the interference users to r_T = 1000 m. The SNR of the desired user is 10 dB and the required SINR is 4 dB, so the normalized interference power tolerance is λ_I = 0.3 λ_D. The statistics of the interference power derived above do not consider shadowing. Shadowing is often modeled as a log-normal distribution; with its impact, the PDF of the interference power no longer has an explicit expression, but it moves closer to the Lévy distribution, as will be shown in the simulations.

6.1. Outage Probability with α = 2. We first verify the outage probability obtained for the AWGN channel. The number of time slots in each frame is set to N_s = 10.
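As a reference point for the figures discussed next, the simulation setup just described can be reproduced with a short Monte Carlo sketch; this is an independent illustration under the stated assumptions, not the authors' simulation code.

```python
# Monte Carlo sketch of the AWGN-channel outage setup: r_D = 100 m,
# r_T = 1000 m, SNR = 10 dB, SINR target = 4 dB, N_s = 10 slots, with
# optional log-normal shadowing.
import numpy as np

rng = np.random.default_rng(2)

def sim_outage(N_u, N_s=10, alpha=2.0, shadow_db=0.0, trials=100_000):
    mu = 1 / 10 ** 0.4 - 1 / 10 ** 1.0       # 1/beta - 1/gamma
    lam_I = mu * (1000 / 100) ** alpha       # tolerance from (23)
    outages = 0
    for _ in range(trials):
        hit = rng.random(N_u) < 1 / N_s      # interferers in the desired slot
        r = np.sqrt(rng.random(hit.sum()))   # normalized distances r/r_T
        lam = r ** (-alpha)                  # normalized interference powers
        if shadow_db > 0:                    # log-normal shadowing
            lam *= 10 ** (shadow_db * rng.standard_normal(lam.size) / 10)
        outages += lam.sum() > lam_I
    return outages / trials

for s in (0.0, 3.0, 6.0):
    print(f"shadowing {s:>3} dB: P_out ≈ {sim_outage(N_u=10, shadow_db=s):.4f}")
```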
The outage probabilities obtained through numerical analysis and simulations are shown in Figure 3. The results for the Cauchy bound and the Lévy bound are obtained from (26). The curves labeled "0 dB", "3 dB", and "6 dB" are simulation results with the corresponding standard deviations of shadowing. We can see that the Cauchy bound is quite tight as a lower bound when the user number is less than 10 and the shadowing is low. When more users coexist in the network, the lower bound becomes loose. As mentioned, the Lévy bound is an upper bound. With increasing shadowing standard deviation, the outage probability gradually approaches the upper bound.

6.2. Outage Probability with α = 4. The numerical and simulated outage probabilities for this case of the AWGN channel are presented in Figure 4. We can see that the Cauchy bound is now loose, but the Lévy bound is quite tight. Although the simulated outage probabilities exceed the upper bound as the shadowing standard deviation increases, the differences between them are very small. The results shown in Figures 3 and 4 are consistent with our analysis in Section 3. Since the CDF of the standard Cauchy distribution is used in place of that of the single-sided Cauchy distribution with constraint λ ≥ 1, the lower bound has some bias when the user number is large.

Outage Probability with Interference Avoidance. When the desired user applies the interference avoidance technique, the numerical and simulation results in AWGN channels are shown in Figure 5, where N_s = 4 and shadowing is not considered. Here, the Cauchy bound and the Lévy bound are obtained with α = 2 and α = 4, respectively, and the simulations use these two path loss factors as well. Compared with the results in Figures 3 and 4, interference avoidance dramatically reduces the outage probabilities, as expected, even though using a smaller N_s increases the collision probability. Due to the power of N_s in the expression of the outage probability in (27), the bias of the Cauchy bound is amplified. Moreover, in this scenario, the Lévy bound is lower than the Cauchy bound. As can be seen from (23) and (26), this is because a different interference tolerance λ_I is used in calculating the outage probability when different values of α are used.

Outage Probability in Multipath Channels. The IEEE 802.15.4a channel model is used to generate the multipath channel responses [33], where the "CM3" environment is considered and the multipath delay spread is τ_RMS = 10 ns. In multipath channels, both the power and the duration of the interference signals are random variables across channel realizations. The numerical results are obtained from (35), using the rectangle approximation of the average interference power profile. Figure 6 shows both the numerical and simulation results, where the pulse width is T_p = 1 ns, the frame length is T_s = 100 ns, and the other conditions are the same as in the AWGN channels. Again, α = 2 and α = 4 are used for the Cauchy bound and the Lévy bound, respectively, and shadowing is not considered. The numerical results are shown to agree well with the simulation results. In this scenario, the Lévy bound is higher than the Cauchy bound. In addition to the influence of the different λ_I, the delay spread has a different impact on the two bounds. As indicated by (43), a longer delay spread leads to a higher Lévy bound, whereas the Cauchy bound is independent of the delay spread.
Conclusion

In this paper, the spatial capacity of IR-UWB networks with space-time focusing transmission has been analyzed. We derived the upper and lower bounds of the outage probability for different path loss factors and then developed closed-form expressions of the accommodable user density and the spatial capacity. The analysis showed that the spatial capacity is independent of the frame length and is attained at a specific optimal interference tolerance. The spatial capacity decreases with a larger path loss factor. Depending on whether the path loss factor is 2 or 4, the spatial capacity grows either linearly or sublinearly with the antenna number. Using more transmit antennas or shorter pulses is more efficient when the path loss factor is small. When the coverage of the UWB single-hop link extends, the accommodable user density must be reduced to guarantee a given outage probability, and thus the spatial capacity is also reduced. With TR prefiltering, a long channel delay spread provides a large spreading gain but also induces a high collision probability among users. As a result, the spatial capacity will not increase with a longer channel delay spread. Moreover, this leads to a lower efficiency in using the bandwidth and antenna resources when the path loss factor is large. To further improve the spatial capacity, one can employ advanced prefilters instead of the TR prefilter and apply interference avoidance or interference alignment schemes.
Tagged Back-translation Revisited: Why Does It Really Work?

In this paper, we show that neural machine translation (NMT) systems trained on large back-translated data overfit some of the characteristics of machine-translated texts. Such NMT systems better translate human-produced translations, i.e., translationese, but may significantly worsen the translation quality of original texts. Our analysis reveals that adding a simple tag to back-translations prevents this quality degradation and improves the overall translation quality on average, by helping the NMT system to distinguish back-translated data from original parallel data during training. We also show that, in contrast to high-resource configurations, NMT systems trained in low-resource settings are much less prone to overfitting back-translations. We conclude that the back-translations in the training data should always be tagged, especially when the origin of the text to be translated is unknown.

Introduction

During training, neural machine translation (NMT) can leverage a large amount of monolingual data in the target language. Among existing ways of exploiting monolingual data in NMT, the so-called back-translation of monolingual data (Sennrich et al., 2016a) is undoubtedly the most prevalent one, as it remains widely used in state-of-the-art NMT systems (Barrault et al., 2019). NMT systems trained on back-translated data can generate more fluent translations (Sennrich et al., 2016a) thanks to the use of much larger data in the target language to better train the decoder, especially in low-resource conditions where only a small quantity of parallel training data is available. However, the impact of the noisiness of the synthetic source sentences generated by NMT remains largely unclear and understudied. Edunov et al. (2018) even showed that introducing synthetic noise in back-translations actually improves translation quality and enables the use of a much larger quantity of back-translated data for further improvements in translation quality. More recently, Caswell et al. (2019) empirically demonstrated that adding a unique token at the beginning of each back-translation acts as a tag that helps the system during training to differentiate back-translated data from the original parallel training data, and is as effective as introducing synthetic noise for improving translation quality. It is also much simpler, since it requires only one editing operation (adding the tag), and is non-parametric. However, it is not fully understood why adding a tag has such a significant impact and to what extent it helps to distinguish back-translated data from the original parallel data. In this paper, we report on the impact of tagging back-translations in NMT, focusing on the following research questions (see Section 2 for our motivation). Q1. Do NMT systems trained on large back-translated data capture some of the characteristics of human-produced translations, i.e., translationese? Q2. Does a tag for back-translations really help differentiate translationese from original texts? Q3. Are NMT systems trained on back-translation for low-resource conditions as sensitive to translationese as in high-resource conditions?

Motivation

During training with back-translated data (Sennrich et al., 2016a), we can expect the NMT system to learn the characteristics of back-translations, i.e., translations generated by NMT, and such characteristics will consequently be exhibited at test time.
However, translating translations is a rather artificial task, whereas users usually want to translate original texts. Nonetheless, many of the test sets used by the research community for evaluating MT systems actually contain a large portion of texts that are translations produced by humans, i.e., translationese. Translationese texts are known to be much simpler, with a lower mean sentence length, and more standardized than original texts (Laviosa-Braithwaite, 1998). These characteristics overlap with those of translations generated by NMT systems, which have been shown to be simpler, shorter, and to exhibit a less diverse vocabulary than original texts (Burlot and Yvon, 2018). These similarities raise Q1. Caswell et al. (2019) hypothesized that tagging back-translations helps the NMT system during training to make some distinction between the back-translated data and the original parallel data. Even though the effectiveness of a tag has been empirically demonstrated, the nature of this distinction remains unclear. Thus, we pose Q2. The initial motivation for back-translation is to improve NMT for low-resource language pairs by augmenting the training data. Therefore, we verify whether our answers to Q1 and Q2 for high-resource conditions are also valid in low-resource conditions, answering Q3.

Data

As parallel data for training our NMT systems, we used all the parallel data provided for the shared translation tasks of WMT19 (http://www.statmt.org/wmt19/translation-task.html) for English-German (en-de), excluding the Paracrawl corpus, and WMT15 (http://www.statmt.org/wmt15/translation-task.html) for English-French (en-fr). After pre-processing and cleaning, we obtained 5.2M and 32.8M sentence pairs for en-de and en-fr, respectively. As monolingual data for each of English, German, and French to be used for back-translation, we concatenated all the News Crawl corpora provided by WMT and randomly extracted 25M sentences. For our simulation of low-resource conditions, we randomly sub-sampled 200k sentence pairs from the parallel data to train NMT systems and used these systems to back-translate 1M sentences randomly sub-sampled from the monolingual data. For validation, i.e., selecting the best model after training, we chose newstest2016 for en-de and newstest2013 for en-fr, since they are rather balanced on their source side between translationese and original texts. For evaluation, since most of the WMT test sets contain both original and translationese texts, we used all the newstest sets, from WMT10 to WMT19 for en-de and from WMT08 to WMT15 for en-fr. For WMT14, we used the "full" version instead of the default filtered version in sacreBLEU, which does not contain information on the origin of the source sentences. All our data were pre-processed in the same way: we performed tokenization and truecasing with Moses (Koehn et al., 2007).

NMT Systems

For NMT, we used the Transformer (Vaswani et al., 2017) implemented in Marian (Junczys-Dowmunt et al., 2018) with standard hyper-parameters for training a Transformer base model; the full list of hyper-parameters is provided in the supplementary material (Appendix A). To compress the vocabulary, we learned 32k byte-pair encoding (BPE) operations (Sennrich et al., 2016b) for each side of the parallel training data. The back-translations were generated by decoding the sampled monolingual sentences with Marian, using beam search with a beam size of 12 and a length normalization of 1.0. The back-translated data were then concatenated with the original parallel data, and a new NMT model was trained from scratch using the same hyper-parameters as for the model that generated the back-translations.
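To make the tagging operation concrete, the following is a minimal sketch (not the authors' released tooling) of how back-translated source sentences can be tagged before being concatenated with the genuine parallel data; the tag token <BT> and the file names are illustrative assumptions.

```python
# Minimal sketch of tagged back-translation data preparation.
# The tag token "<BT>" and the file names are illustrative assumptions.

TAG = "<BT>"

def tag_back_translations(bt_source_path: str, tagged_path: str) -> None:
    """Prepend a reserved tag token to every back-translated source sentence."""
    with open(bt_source_path, encoding="utf-8") as fin, \
         open(tagged_path, "w", encoding="utf-8") as fout:
        for line in fin:
            fout.write(f"{TAG} {line.strip()}\n")

# The tagged synthetic sources are then concatenated with the original
# parallel sources (targets concatenated correspondingly), and a model is
# trained from scratch on the union.
tag_back_translations("mono.bt.src", "mono.bt.tagged.src")
```

One practical point: the tag should be protected from BPE segmentation, for example by registering it in the vocabulary as a single reserved unit, so that it reaches the encoder as one token.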
We evaluated all systems with BLEU (Papineni et al., 2002) computed by sacreBLEU (Post, 2018). To evaluate only the part of a test set that has original text or translationese on the source side, we used the --origlang option of sacreBLEU with the value "non-L1" for translationese texts and "L1" for original texts, where L1 is the source language, and we report their respective BLEU scores. The sacreBLEU signatures are of the form BLEU+case.mixed+lang.L1-L2+numrefs, where "L1" and "L2" respectively indicate a two-letter identifier for the source and target languages of either de-en, en-de, fr-en, or en-fr, and "XXX" stands for the name of the test set.

Results in Resource-Rich Conditions

Our results with back-translations (BT) and tagged back-translations (T-BT) are presented in Table 1. When using BT, we consistently observed a drop in BLEU scores for original texts across all translation tasks, with the largest drop of 12.1 BLEU points (en→fr, 2014). Conversely, BLEU scores for translationese texts improved for most tasks, with the largest gain of 10.4 BLEU points (de→en, 2018). These results answer Q1: NMT overfits back-translations, potentially because of their much larger size compared with the original parallel data used for training. Interestingly, using back-translations does not consistently improve translation quality. We assume that the newstest sets may manifest somewhat different characteristics of translationese from one year to another. Prepending a tag (T-BT) had a strong impact on the translation quality for original texts, recovering or even surpassing the quality obtained by the NMT system without back-translated data and always beating BT. The large improvements in BLEU scores over BT show that a tag helps in identifying translationese (answer to Q2). In the supplementary material (Appendix B), we present additional results obtained using more back-translations (up to 150M sentences), showing a similar impact of tags. However, while a tag in such a configuration prevents an even larger drop in the BLEU scores, it is not sufficient to attain a BLEU score similar to the configurations that use fewer back-translations. Interestingly, the best NMT system was not always the same depending on the translation direction and the origin of the test sets. It is thus possible to select either of the models to obtain the best translation quality given the origin of the source sentences, according to the results on the validation set, for instance. Since this observation is rather secondary, we present results for best model selection in the supplementary material (Appendix C). Note also that these BLEU scores could potentially be further increased by using a validation set whose source side is either original text or translationese, respectively, to translate original texts or translationese at test time.
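For reference, the per-origin evaluation described at the start of this section can be sketched with sacreBLEU's public Python API; the file names below are hypothetical placeholders for the origin-filtered subsets (on the command line, the same filtering is done with the --origlang option described above).

```python
# Sketch of the per-origin BLEU evaluation. corpus_bleu is part of the
# public sacrebleu API; the file names are hypothetical placeholders for
# the subsets of a newstest set split by the origin of the source side.
import sacrebleu

def bleu_score(hyp_path: str, ref_path: str) -> float:
    with open(hyp_path, encoding="utf-8") as f:
        hyps = [line.strip() for line in f]
    with open(ref_path, encoding="utf-8") as f:
        refs = [line.strip() for line in f]
    return sacrebleu.corpus_bleu(hyps, [refs]).score

print("original texts :", bleu_score("hyp.orig.txt", "ref.orig.txt"))
print("translationese :", bleu_score("hyp.trans.txt", "ref.trans.txt"))
```

Results in Low-Resource Conditions

In low-resource conditions, as reported in Table 2, the translation quality can be notably improved by adding back-translations. Using BT, we observed improvements in BLEU scores ranging from 0.7 (fr→en, 2011) to 12.4 (de→en, 2010) BLEU points for original texts, and from 2.1 (en→de, 2011) to 21.1 (de→en, 2018) BLEU points for translationese texts.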
These results remain in line with one of the initial motivations for using back-translation: improving translation quality in low-resource conditions. In this setting, without back-translated data, the data in the target language is too small for the NMT system to learn reasonably good representations of the target language. Adding five times more data in the target language, through back-translation, clearly helps the systems, without any negative impact from the noisiness of the back-translations generated by the initial system. We assume here that, since the quality of the back-translations is very low, their characteristics are quite different from those of translationese texts. This is confirmed by our observation that adding the tag has only a negligible impact on the BLEU scores for all the tasks (answer to Q3).

Tagged Test Sets

A tag on back-translations helps identify translationese during NMT training. Thus, adding the same tag to the test sets should have a very different impact depending on the origin of the source sentences. If we tag original sentences and decode them with a T-BT model, we enforce the decoding of translationese. Since we mislead the decoder, translation quality should drop. On the other hand, by tagging translationese sentences, we help the decoder, which can now rely on the tag to be very confident that the text to decode is translationese. Our results presented in Table 3 are consistent with this expectation.

Discussions

We empirically demonstrated that training NMT on back-translated data overfits some of its characteristics, which are partly similar to those of translationese. Using back-translation improves translation quality for translationese texts but worsens it for original texts. Previous work (Graham et al., 2019; Zhang and Toral, 2019) showed that state-of-the-art NMT systems are better at translating translationese than original texts. Our results show that this is partly due to the use of back-translations, which is also confirmed by concurrent and independent work (Bogoychev and Sennrich, 2019; Edunov et al., 2019). Adding a tag to back-translations prevents a large drop in translation quality on original texts, while the improvements in translation quality for translationese texts remain and may be further boosted by tagging test sentences at decoding time. Moreover, in low-resource conditions, we show that the overall tendency is significantly different from the high-resource conditions: back-translation improves translation quality for both translationese and original texts, while adding a tag to back-translations has only a small impact. We conclude from this study that training NMT on back-translated data in high-resource conditions remains reasonable when the user knows in advance that the system will be used to translate translationese texts. If the user does not know this a priori, a tag should be added to back-translations during training to prevent a possibly large drop in translation quality. For future work, following the work on automatic identification of translationese (Rabinovich and Wintner, 2015; Rubino et al., 2016), we plan to investigate the impact of tagging translationese texts inside parallel training data, such as parallel sentences collected from the Web.

A NMT System Hyper-parameters

For training NMT systems with Marian 1.7.6 (1d4ba73), we used the hyper-parameters presented in Table 4, training on 8 GPUs, and kept the remaining hyper-parameters at their default values.
C Best Model Selection

As discussed in Section 3.3, among the original model, the one trained with back-translation (BT), and the one trained with tagged back-translation (T-BT), the best-performing model is not always the same depending on the translation direction. For de→en and en→de, the best model is always T-BT. However, for fr→en, the system that does not use any back-translation is the best for translating original texts, while T-BT is the best for translationese texts. For en→fr, the best system for translating translationese texts is BT, while the best system for translating original texts is T-BT. This selection is performed by evaluating the translation quality of each model on the original and translationese texts of the validation sets. By applying this selection strategy, we can significantly improve the overall translation quality for given test sets, as reported in Table 6 (BLEU scores for all the systems for en-fr on the overall test sets, where "selection" denotes that decoding is performed using the best model given the origin of the source sentence).
Placing Students at the Heart of the Iron Triangle and the Interaction Equivalence Theorem Models

A number of visual models have been proposed to help explain the interplay and interactions between specified components of higher education systems at different levels and to take account of emerging trends towards open education systems. At sector and institutional levels the notion of an iron triangle has been posited, linking firstly access, quality and cost, and latterly accessibility, quality and efficiency, in order to suggest means for widening access to higher education for the same or lower cost without compromising outcomes. At the level of teaching and learning, an interaction equivalence theorem was developed to explain the relative contributions to successful study of teachers, students and educational content in formal settings, and this has recently been extended to informal settings. However, both models deal mainly with the supply side of the educational systems they attempt to represent, namely the impacts of the availability and accessibility to more people of the elements in the models, and largely ignore the demand side in terms of the affordability and acceptability of the available and accessible provision to students and learners alike. Further, while stimulating debate, there have to date been few empirical studies undertaken to validate these models. Despite this lack of testing, this paper explores ways of extending these existing models, both visually and conceptually, by adding in the perspective of the prospective learner or student in respect of their organisational capacity to invest sufficient time for studying, the levels of preparedness and/or confidence that they hold before they engage in learning, and their level of motivation for undertaking those studies. It is argued that these modified models provide a new contextual framework with which to examine the capacity of more open education systems, at the national, institutional and individual learner level, to be expanded effectively and equitably. It is hoped that these extended models provide a new basis for undertaking empirical studies to test out the underlying assumptions.

Keywords: open educational resources; iron triangle; interaction equivalence theorem; diagrams; engagement; education systems

Introduction

There are many socio-economic factors involved in the demand for and engagement with higher education in general (e.g. Oxford Economics, 2014), and open education in particular (Lane, 2013a), which can make it difficult to understand and predict the individual and collective impacts of those factors. However, it is widely believed that greater participation rates in higher education impact upon the social and economic performance of nations. An OECD (2006) report is clear about the benefits of educational attainment:

A well-educated and well-trained population is important for the social and economic well-being of countries and individuals. Education plays a key role in providing individuals with the knowledge, skills and competencies to participate effectively in society and the economy. Education also contributes to an expansion of scientific and cultural knowledge. The level of educational attainment of the population is a commonly used proxy for the stock of "human capital" that uses the skills available in the population. (p7)

At the same time there is significant debate over the nature of teaching within higher education created by the increasing provision of, and seemingly demand for, online education and open education (e.g. Ilyoshi and Kumar, 2008). These debates touch upon how students might learn, how teachers could teach and what role educational content plays in both those processes. The complexity of these systems leads many to try and represent the key factors involved to focus discussions and actions. A number of visual models have been proposed to help explain the interplay and interactions between specified components of higher education systems at different levels and to take account of the emerging trends towards more open education systems involving open entry, open educational resources (OER) and Massive Open Online Courses (MOOCs). As with many such visual models, they are there to reinforce or help explain an argument or conceptual logic, but can equally conceal as much as they reveal unless tested out empirically (Lane, 2002; 2013b). In this paper I look at two major visual models that have been proposed and have gained a degree of attention but have been subject to varying degrees of empirical testing, and then add to both of them a greater emphasis on the nature of the student or learner body, in order to reveal some hidden assumptions that link them and provide a new perspective to stimulate further debates and decisions on empirical testing. These models are the iron triangle and the interaction equivalence theorem.

The iron triangle model

At sector and institutional levels the notion of an iron triangle for education has been posited, linking firstly access, quality and cost (and latterly accessibility, quality and efficiency) in order to suggest means of using open, distance and e-learning (ODeL) and/or OER for widening access to higher education for the same or lower cost without compromising outcomes (Immerwhar et al, 2008; Daniel and Uvalic-Trumbic, 2011; Mulder, 2013). Figure 1 shows the basic triangle as outlined by Daniel and Uvalic-Trumbic, with equal length sides representing the three factors of, in this model, scale, quality and cost. The assumption is that increases in one point of the triangle will inevitably lead to stresses in the other points. This is particularly assumed to be so because of the relatively fixed costs of the physical infrastructure of universities and the number of teachers they employ, due to the relatively small cohorts that each teacher can manage to teach successfully (there are many debates worldwide about optimum class sizes and effects on pedagogic quality, but the physical limitations of most existing classroom sizes in expensive buildings and their occupancy rates are universal). They go on to visualize changes within this triangle of inter-related factors (Figure 1-A). These changes make the basic point that with conventional teaching in classrooms there is little scope to alter these factors advantageously, because improving one factor will worsen the others. Pack more students into the class and quality will be perceived to suffer (Figure 1-A1).
Equally, try to improve quality by providing more learning materials or better teachers, and the overall cost will go up (Figure 1-A2). In effect, the area of the triangle does not change because of these physical limitations. From this basic position, Daniel and Uvalic-Trumbic assert that ODeL, because it is not so constrained by physical limits, is able to change the shape and size of the triangle, because it can provide quality in the educational experience (e.g. in the educational resources or support structures) at greater scale for a similar or even lower cost than place-based learning. This means giving learners more flexibility in their studies, such that they are not constrained to studying on campuses that are expensive to build and maintain but can study where they live and work, and where quality can be measured by their achievements and not by exclusivity of access. As Daniel et al (2009) conclude:

The aims of wide access, high quality, and low cost are not achievable, even in principle, with traditional models of higher education based on classroom teaching in campus communities. A perception of quality based on exclusivity of access and high expenditure per student is the precise opposite of what is required. One based instead on student achievement enable developing countries to scale up their higher education APRs [age participation rates] without breaking the bank or fatally compromising quality.

Interestingly, Mulder (2013) has recently modified this model from a 2-dimensional to a more 3-dimensional one, focussing on the accessibility, quality and efficiency of education as the three factors, the aim for all being maximisation of the factor rather than minimisation, as it is for cost in the original model. Mulder also postulates that a radical intervention such as OER, rather than just technology, can end up increasing all three factors, so enlarging the educational space represented by the triangle and thus increasing the number of people participating. To quote from Mulder (2013):

Figure 2(a) shows such a 3D representation of the performance of Dutch education at a certain moment with values along the three axes for accessibility, quality, and cost-efficiency. These are interconnected through a three-point plane. Suppose one wants to improve the performance in efficiency. In Figure 2(b) we see an example (in red), where indeed cost-efficiency is increasing, however at the cost of both quality and accessibility, which are decreasing. Figure 2(c) presents another example where the performance in quality is better, but this goes hand-in-hand with lower cost-efficiency and more or less equal accessibility. If circumstances and conditions do change, the pattern can look different. A radical system intervention with OER (see Figure 2(d)) is an example of an innovation, which can result in simultaneous performance improvement in all three dimensions. Indeed, the accessibility of the learning materials is at a maximum with their full and free online availability. And the quality is being served with OER, because many more experts and users are involved in the development of the learning materials, which moreover are evaluated, corrected, and reviewed. Finally, cost-efficiency is promoted since there is actually no rationale any more for multiple full-scale development of courses on the same subject with similar learning objectives by different educational institutions.
Whichever model is deemed the better representation of national or institutional education systems, the argument they both support is that if a suitable educational system is developed and supplied, then increasing numbers of people, and proportions of a country's population, can be suitably educated at tertiary level. However, while the iron triangle has been used to frame debates, there has not been any rigorous testing of these models through either the analysis of secondary data or the collection of primary data, possibly because it is difficult to agree on suitable units of measurement for the different dimensions, e.g. quality.

The interaction equivalence theorem model

At the level of teaching and learning within a course, and particularly within ODeL, an interaction equivalence theorem or EQuiv (Figure 3) was proposed and developed to explain the relative contributions to successful study of teachers, students and educational content in formal settings (Anderson, 2003; Miyazoe and Anderson, 2010); it has recently been extended to informal settings using OER and MOOCs, with passing mention of links to the original iron triangle model (Miyazoe and Anderson, 2013). Building on the original formulation of the primary forms of interaction (student-student; student-teacher; student-content) proposed by Moore (1989), the basic premise of the EQuiv is that: '… deep and meaningful learning is supported as long as one of the three forms of interaction (student-teacher; student-student; student-content) is at a high level. The other two may be offered at minimal levels, or even eliminated, without degrading the student experience' (p2). While there has been much theoretical development of the EQuiv, there have not been many empirical studies undertaken to support it, for either higher education or wider training. However, a recent doctoral study by …

While this supports the basic triadic model, Figure 3 shows a lot more than just the interactions between students, teachers and content. It also highlights the relationships between teachers and the content being used, the fact that teachers interact with other teachers, and that content can interact with other content, most notably in the case of dynamic digital content within educational software. I am not going to address these elements in this paper, as I intend to focus on developing a fuller representation of the student side of interaction. In particular I want to explore the implication that, just as with the iron triangle model, the basic interaction equivalence theorem implies that a suitably designed and delivered educational provision will inevitably lead to success by the students involved, and that even some failings in the design around interactivity can be compensated for by the other elements.

Supply side versus demand side

While there are more elements to the EQuiv models than presented here, and while Miyazoe and Anderson (2013) have also acknowledged that the 'ability to manage the cost and the time for learning is becoming extremely critical to formal students and lifelong learners' (p11-12), the theorem in itself does not fully address the wider range of capabilities of the prospective learner or student as conditioned by their intrinsic psychological characteristics and their extrinsic socio-economic context and/or status.
A recent review of student satisfaction with interactivity in online learning by Croxton (2014) has examined some of those intrinsic psychosocial characteristics and the role of interactivity, and concludes that 'Findings suggest that interactivity is an important component of satisfaction and persistence for online learners, and that preferences for types of online interactivity vary according to type of learner' (p314). This indicates the not unexpected finding that students vary in their capacity and capability to engage meaningfully with the educational provision on offer. As implied earlier, it is often a strategic governmental aim to widen access to and participation in higher education by as large a proportion of the adult population as is reasonably possible (Lane, 2012), to boost social and economic returns. However, when considering the scope for widening participation to people who would not traditionally attend higher education, because of low previous educational attainment or through suffering multiple deprivation, it can be useful to consider the availability, accessibility, affordability and acceptability of the provision to learners and their families (ibid). Thus both these models deal mainly with the supply side of the educational systems they attempt to represent, namely the impacts of the availability and accessibility to more people of the teaching or interaction elements in the models, and largely ignore the demand side: the affordability and acceptability of the available and accessible provision to students and learners alike, as seen from their own contexts and life experiences. In the next section I attempt to address this deficiency by adding to and modifying these two visual models.

Modifying the models

One of the strengths of diagrammatic models is that they let you test out your thinking: to do some thought experiments that may be supported by existing evidence or that provide suggestions for where further empirical or experimental research could be directed. What follows are my initial attempts at extending the representation of the students' or learners' contexts within the models, focussing on ODeL systems rather than traditional face-to-face educational systems.

Adding a circle of success to the iron triangle

A defining feature of many higher education systems has been that of selecting students based on prior educational experiences and achievements, thus ensuring that they are more likely to be well prepared and confident in their learning abilities (Lane, 2013). Where ODeL has been used, greater efforts are often made to accommodate less advantaged students (Lane, 2012). In extreme cases, such as The Open University UK, there are no formal entry requirements, meaning that up to 40% of undergraduate entrants do not have the school-level qualifications expected of entrants at other universities (while up to a third already hold a previous higher qualification). However, such open entry also means that retention rates are lower, with many students not completing either a module or their chosen qualification (Woodley, 2011). Nevertheless, Open University students consistently rate the quality of their education as very good in both internal and external surveys. Thus, while the iron triangle may be expanded, but not broken, by open and distance learning from the perspective of the sector and institution, there are apparently plenty more people to replace the ones that drop out (Woodley, 2011).
This expansion of opportunity does not, in itself, indicate what other measures of success might be, such as those from more of a student perspective. To do just that, I have firstly added a 'circle of success' to the iron triangle (Figure 4-A) to represent students who participate completing their chosen studies in good standing¹. In this case any changes in the triangle as noted before (e.g. increased cost, a drop in quality, fewer students) will inevitably breach this circle of success (Figure 4-A1 & 2), representing a lowering of the numbers left in good standing.

A student centred iron triangle

By itself, adding a circle of success does not add much to the existing iron triangle, as it is also difficult to see how that dimension could be effectively measured. So, for my next step, I modified the iron triangle itself to reflect the perspective of the prospective learner or student rather than that of the institution. From my knowledge of the literature around the factors influencing participation in or engagement with higher education (Lane, 2009; 2012), I chose three key factors that might be measurable through surveys: their organisational capacity to invest the time required to study, the levels of confidence and/or preparedness that they hold, and their motivations for undertaking those studies. It can reasonably be assumed that increased levels of all of these will benefit the student or learner, and it is likely that high levels of one factor can compensate for lower levels in the other two. It also implies that if all three are at a low level then the chances of success, in terms of persistence in engagement and levels of attainment, will be very low. This new triangle therefore captures and adds in key aspects of the learners' or students' own context and prior experiences (Figure 5-A). And, as in Figure 4, I have also added a circle of success, which can readily represent that a student's chances of completing their chosen studies will be compromised if, for example, they are low in preparedness (Figure 5-A1) or cannot devote sufficient time to their studies (Figure 5-A2), while still acknowledging that this is a difficult dimension to measure. One factor that I have left out is the ability to pay financially, as opposed to devoting adequate time, for the educational provision. Adding this affordability factor in, or using it to replace one of the other factors, does provide a direct link to the original iron triangle model; and just as Mulder (2013) has proposed cost-efficiency rather than cost as the driving factor, affordability could be seen as a key factor. However, fee levels, bursaries and student loans vary within countries, let alone across countries, making this factor quite complex. For educational institutions the costs associated with their educational provision have to be offset by the returns gained on the investment of time and money involved. It is this issue which dominates the discussions around the sustainability of OER (and now MOOCs). However, many educational institutions, particularly universities, also have broader social missions, in which OER might not just act as a recruitment vehicle for students or reduce the costs of developing and delivering educational content, but also act as a means of enhancing reputation or visibility. These returns are not direct monetary ones but a social return on investment that adds value to existing activities, particularly in the case of publicly funded educational institutions.
For learners, education can similarly provide both economic and social returns on the investment of time and money that they make. As noted at the beginning of the article, education provides such benefits in general, which is why there is substantive public investment in education systems. But equally, the cost, quality and access iron triangle means that expanding access leads to increased costs, and increasingly private funding is expected to support this aim, especially for higher education, where tuition fees are generally increasing. To justify the increases in tuition fees, many governments and other agencies highlight the personal economic returns on education, and particularly higher education. This usually relates to improved career prospects and higher lifetime earnings. However, researchers are now trying to widen the debate on returns on investment by trying to estimate the social returns on investment (SROI) for adult education in the UK (Fujiwara, 2012). The key findings of this study are:

Participating in adult learning is found to have significant positive effects on individual health, employability, social relationships, and the likelihood of participating in voluntary work. In turn these domains have positive impacts on individual well being. (p2)

A student centred Interaction Engagement Equivalence Theorem

Such concerns about the likely returns on investment, financial or social, are also likely to impact on how we might view interaction within educational provision. As already noted, just because high levels of content or interaction might be available and accessible does not mean that the provision is affordable (in terms of money or time) or acceptable (if the learner is ill prepared or poorly motivated), in which cases the student is unlikely to engage in deep and meaningful learning but is more likely to engage in shallow and meaningless learning and, at the extreme, to 'drop out' or withdraw from the educational system on offer because they are disillusioned and dissatisfied with the quality of the interactions. To understand this demand side of the education 'equation' I propose another model, an interaction engagement equivalence theorem (Figure 6). This replaces the simple notion of a student in the EQuiv with the new student centred iron triangle introduced above, changing the assumption of just a student to one of student engagement with the interactions on offer to them. It also aligns the two different sets of equivalences within the same conceptual framework. Thus, as seen with the earlier model, high levels of either motivation, 'organisedness'², or preparedness on the part of the student for engaging in the educational interactions on offer can offset lower levels in the others. For example, a highly motivated person with no previous qualifications and few study skills can succeed if they are able to engage fully with such study skills through the learning design and other support interventions. However, if all three engagement factors are low, then successful learning is also likely to be low, whatever the learning design and whatever efforts are put in by others to support and encourage greater engagement with their studies.
Interestingly, Croxton (2014) concluded from her literature review that 'Student-instructor interaction was also noted to be a primary variable in online student satisfaction and persistence', while Rodriguez concluded that delivery of the provision has to match the chosen type of interaction involved, suggesting that well-thought-through learning design and delivery are likely to improve success with online learning and/or open education. In that respect, the ideas represented in these extended models could also be applied to the teacher and content parts of the basic triadic model, but that is beyond the scope of this article.

Discussion

There is much debate as to whether and how OER and/or MOOCs will provide cheaper and more scalable solutions to increasing participation rates in higher education compared to the current face-to-face or ODeL solutions available from higher educational institutions. A logical examination of the iron triangle model indicates that a hidden constraint is the capabilities of the student. So, even if it is possible to improve one factor, such as a lower unit cost per student, as has been possible with ODeL and could be even more so with MOOCs, this may not increase successful student participation, owing to a lack of motivation or preparedness on the part of additional students from non-traditional backgrounds. This may also help explain why most MOOC participants who 'complete' their courses do not apparently lack preparedness, motivation or organisedness, as implied in the findings of many MOOC studies to date. The Interaction Equivalency Theorem model highlights the significance of high levels of interaction for successful learning, but it too ignores the capabilities of the students to engage with those interactions. The creation of an Interaction Engagement Equivalence Theorem visual model highlights once more that increases in OER and MOOCs, or even e-learning within formal education, may not in themselves increase meaningful learning without these engagement issues being addressed by some means or other. This paper argues that neither the iron triangle nor the interaction equivalence theorem model adequately reflects the influence that learners' personal attributes and circumstances have on the phenomena they are trying to account for. Through the thought experiments embodied in the revised models described above, it also argues that supporting and increasing the level of successful engagement and attainment by less privileged learners requires the use of extended visual models that address many of the tensions and opposing forces inherent in these two models. The modified visual models presented here provide a revised conceptual framework with which to examine the capacity of more open education systems, at the national, institutional and individual learner level, to be expanded effectively and equitably. They also indicate that such models need to be rigorously tested and evaluated against the particular contexts to which they might be applied. The challenge now is to gather and analyse secondary and primary data that can be used either to validate these models or to suggest further modifications, and in particular to focus on their contributions to widening access to, and success within, higher education.

Notes

1. There are separate debates to be had about what constitutes participation, completion and good standing, both within formal courses and also within informal MOOCs or OER.
2. I use this term rather than 'organisation' to imply that it is a property of the student, and one that can be difficult to change.
6,020
2014-12-23T00:00:00.000
[ "Economics", "Education" ]
Evidence for a finite-momentum Cooper pair in tricolor d-wave superconducting superlattices

Fermionic superfluidity with nontrivial Cooper pairing, beyond the conventional Bardeen-Cooper-Schrieffer (BCS) state, is a captivating field of study in quantum many-body systems. In particular, the search for superconducting states with finite-momentum pairs has long been a challenge, but establishing their existence has suffered from the lack of an appropriate probe to reveal the pair momentum. Recently, it has been proposed that nonreciprocal electron transport is the most powerful probe for finite-momentum pairs, because it couples directly to the supercurrents. Here we reveal such a pairing state through non-reciprocal transport measurements on tricolor superlattices with strong spin-orbit coupling and broken inversion symmetry, consisting of atomically thin layers of the d-wave superconductor CeCoIn5. We find that while the second-harmonic resistance exhibits a distinct dip anomaly at the low-temperature (T)/high-magnetic-field (H) corner of the H-T plane for H applied along the antinodal direction of the d-wave gap, such an anomaly is absent for H along the nodal direction. By carefully isolating extrinsic effects due to vortex dynamics, we reveal the presence of a non-reciprocal response originating from intrinsic superconducting properties characterized by finite-momentum pairs. We attribute the high-field state to the helical superconducting state, wherein the phase of the order parameter is spontaneously spatially modulated.

When the crystal structure lacks a center of inversion, the SOI may dramatically change the electronic properties, leading to nontrivial quantum states. The key microscopic ingredient in understanding the physics of such non-centrosymmetric materials is the appearance of an antisymmetric SOI of the single-electron states. Asymmetry of the potential in the direction perpendicular to the two-dimensional (2D) plane, ∇V ∥ (001), induces a Rashba-type SOI, α_R g(k)·σ ∝ (k × ∇V)·σ, where α_R is the Rashba coupling, k is the wave number, g = (−k_y, k_x, 0)/k_F with k_F the Fermi wave number, and σ is the vector of Pauli matrices [5]. The Rashba SOI splits the Fermi surface into two sheets with different spin structures. The energy splitting is given by α_R, and the spin direction is tilted into the plane, rotating clockwise on one sheet and anticlockwise on the other.

The Rashba SOI has profound consequences for the superconducting states [6,7]. For example, parity is generally no longer a good quantum number, leading to exotic states with a mixture of spin-singlet and spin-triplet components. When the Rashba splitting becomes sufficiently larger than the superconducting gap energy, it has been theoretically proposed that an even more fascinating superconducting state may emerge in 2D superconductors under strong parallel magnetic fields: a conventional BCS state with zero-momentum pairs (k↑, −k↓) formed within spin-textured Fermi surfaces (Fig. 1a) changes into a superconducting state with finite-momentum pairs formed within each spin-nondegenerate Fermi surface [1-4] (Fig. 1b).
Such a superconducting state appears as a result of the shift of the Rashba-split Fermi surfaces by external parallel fields. When the magnetic field is applied parallel to the x axis (H = H x̂), the centers of the two Fermi surfaces with different spin helicity are shifted along the ŷ axis in opposite directions. This state, referred to as a helical superconducting state, is characterized by the formation of Cooper pairs (k + q_R↑, −k + q_R↓), with a momentum shift q_R set by the applied field, the Fermi energy E_F and the quasiparticle mass m. This pair formation leads to a state in which the magnitude of the superconducting order parameter is constant, while its phase rotates in space with period π/|q_R| as Δ(r) = Δ₀ e^(2i q_R·r). We note that the helical state is essentially different from the Fulde-Ferrell (FF) and Larkin-Ovchinnikov (LO) states, in which finite-momentum Cooper pairs are formed between sections of the Zeeman-split Fermi surfaces [8,9] (Fig. 1c). A potential FF or LO state has been reported in several candidate materials, by showing a phase transition inside the superconducting state [10,11] through measurements of magnetization [12], specific heat [13-15], nuclear magnetic resonance [16-18], thermal conductivity [19], ultrasound [20,21], and scanning tunneling microscopy [22]. In the FF state, the finite-momentum Cooper pairs lead to a phase modulation of the superconducting order parameter, which is difficult to detect directly. In the LO state, the spatial modulation of the superconducting order parameter due to such pairs gives rise to periodic nodal planes in the crystal. However, it should be emphasized that no direct evidence showing such periodic nodes has been reported so far. This is mainly due to the inherent challenge of directly measuring the momentum of Cooper pairs within a superconducting state, calling for a novel probe of the Cooper-pair momentum.

Very recently, it has been theoretically proposed that superconducting states with finite-momentum Cooper pairs exhibit a current-direction-dependent critical current, namely the superconducting diode effect [23-26]. This diode effect appears due to the non-reciprocal nature of the pair-momentum dependence of the free energy. Notably, the diode effect is significantly enhanced upon entering the helical superconducting state, both in s-wave [24,26] and d-wave superconductors [27]. The enhancement naturally leads to characteristic behaviors of non-reciprocal electron transport (NRET) in general. Therefore, measurements of the NRET provide a powerful tool for revealing helical superconductivity. The resistance of a 2D film can be described as R = R₀(1 + γ μ₀ H × ẑ · I), where R₀ and I are the resistance in the zero-current limit and the electric current, respectively. The coefficient γ gives rise to different resistances for rightward and leftward electric currents and can be finite in non-centrosymmetric materials. Unless the resistive transition in magnetic fields is very sharp due to strong pinning, the NRET response can be obtained by measuring the second-harmonic resistance R_2ω. The comparison between R_2ω at low frequencies in the DC limit and the difference in the critical current has been well documented across various systems, and the general consensus is that if one is finite, the other will also be finite [28,29].
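To make the link between the nonreciprocal resistance and R_2ω concrete, the short sketch below simulates the lock-in measurement implied by the model R = R₀(1 + γ μ₀ H × ẑ · I): for an AC drive I = I₀ sin(ωt), the quadratic-in-current term produces a voltage component at 2ω whose amplitude corresponds to R_2ω = R₀ γ μ₀ H I₀ / 2. All parameter values are illustrative placeholders, not experimental values, and the geometry is reduced to the scalar case of in-plane H perpendicular to I.

import numpy as np

# Hedged sketch: second-harmonic signature of a nonreciprocal resistance,
# R = R0 * (1 + gamma * mu0H * I), for in-plane H perpendicular to I so the
# cross product reduces to a scalar. All numerical values are hypothetical.
R0 = 1.0                   # zero-current resistance (ohm)
gamma = 0.5                # nonreciprocal coefficient (1/(T*A)), placeholder
mu0H = 2.0                 # mu0 * H (T), placeholder
I0 = 1e-3                  # AC current amplitude (A)
omega = 2 * np.pi * 13.0   # drive frequency (rad/s), placeholder

t = np.linspace(0.0, 1.0, 200_000)          # exactly 13 periods of the drive
I = I0 * np.sin(omega * t)
V = R0 * (1.0 + gamma * mu0H * I) * I       # direction-dependent resistance

# Lock-in style demodulation over an integer number of periods:
R_1w = 2.0 * np.mean(V * np.sin(omega * t)) / I0        # first harmonic
R_2w = -2.0 * np.mean(V * np.cos(2 * omega * t)) / I0   # second harmonic

print(f"R_1w = {R_1w:.4f} ohm, expected R0 = {R0}")
print(f"R_2w = {R_2w:.3e} ohm, expected R0*gamma*mu0H*I0/2 = "
      f"{R0 * gamma * mu0H * I0 / 2:.3e}")

Because R_2ω is linear in the nonreciprocal coefficient γ while the first harmonic is not, the second harmonic isolates the direction-dependent part of the resistance, which is why it is the quantity tracked throughout the experiment.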
Non-reciprocal electron transport has been studied in several superconductors [28,30,31]. However, it remains an arduous task to discern whether an observed NRET response stems from intrinsic superconducting phenomena, such as exotic pairing states that contain finite-momentum Cooper pairs. This is because the NRET response can also arise from extrinsic effects such as asymmetric vortex pinning at edges, surfaces, and interfaces, the ratchet effect of pinning centers, and geometry-dependent Meissner shielding effects [32-35]. To overcome this challenge, we fabricated tricolor Kondo superlattices comprised of atomically thin CeCoIn5 layers and meticulously isolated the intrinsic superconducting effects, carefully eliminating the extrinsic ones.

CeCoIn5 is a well-known heavy-fermion superconductor with the highest bulk T_c of 2.3 K, in which d_{x²−y²} superconducting gap symmetry is well established [36,37]. Bulk CeCoIn5 possesses an inversion center. However, by fabricating tricolor superlattices with an asymmetric stacking that incorporates CeCoIn5 of atomic layer thickness, we can introduce global inversion symmetry breaking (Fig. 2a) [38-41]. Given that this superlattice comprises three distinct materials, it will be designated as 'tricolor' henceforth. This tricolor system provides an ideal platform for revealing the helical superconducting state for the following reasons. First, Ce atoms have a large SOI, and the condition that the Rashba SOI well exceeds the superconducting gap has been confirmed in various superlattices of CeCoIn5, including the present tricolor superlattice, by the upper critical field being highly enhanced over the Pauli-limited critical fields of the bulk (see SI and ref. 41). Second, Cooper pairs can be confined in the atomically thin CeCoIn5 layers, forming 2D superconductivity [40]. Third is the strong electron correlation effect in CeCoIn5: it has been theoretically pointed out that the correlation further strengthens the effect of the Rashba SOI [42]. Furthermore, the suppression of orbital pair-breaking effects promotes the appearance of helical superconducting phases. These features make the CeCoIn5 superlattice system unique and suitable for realizing helical superconductivity compared with weakly correlated systems. Finally, d-wave superconductors are expected to respond differently to in-plane magnetic fields directed along the nodal and antinodal directions, possibly allowing the intrinsic NRET to be extracted by changing the field direction.

The tricolor superlattices with c-axis-oriented structure are epitaxially grown on MgF2 substrates using the molecular-beam-epitaxy technique. The 3-unit-cell-thick (3-UCT) YbCoIn5, 8-UCT CeCoIn5 and 3-UCT YbRhIn5 are grown alternately, where YbCoIn5 and YbRhIn5 are conventional non-superconducting metals (Fig. 2a). In these tricolor superlattices, no layers are mirror planes, and broken inversion symmetry is thereby introduced along the stacking direction. Given the necessity for a precise in-plane application of the magnetic field in this study, we employed the 8-UCT CeCoIn5 tricolor superlattice sample previously characterized in ref. 40.
This referenced work extensively investigated the temperature and angular dependence of the upper critical field for this sample. Moreover, the presence of the strong Rashba SOI is confirmed by the suppressed Pauli limiting [40]. For the sake of achieving a high current density and ensuring meticulous control over the current orientation, the sample was patterned using a focused ion beam (FIB), as depicted in Fig. 2b. We note that both the T_c and the upper critical field of this sample changed only slightly before and after FIB patterning (see Fig. S1). The sample becomes superconducting at T_c = 0.83 K, defined as the temperature at which the dc resistance R_dc drops to 50% of its normal-state value at the onset (Fig. 2c). Non-reciprocal transport measurements are carried out by the standard lock-in technique (see Materials and Methods in SI). The R_2ω curves are anti-symmetrized with respect to the magnetic field. The misalignment of H from the ab plane is less than 0.05° (see the inset). For both configurations, finite R_2ω is detected only in the superconducting state, demonstrating that R_2ω originates from the Cooper pairs. Therefore, despite the broadening of the resistive transition that inhomogeneity may cause, only the superconducting response of R_2ω is extracted. For the nodal configuration, R_2ω increases with H at low temperatures, peaking at μ₀H ∼ 6 T and disappearing at high fields. It should be noted that such a single-peak structure as a function of H in the superconducting state has also been observed in NbSe2 [31] and ion-gated SrTiO3 [30], where it can be explained by vortex motion. On the other hand, for the antinodal configuration, while a similar peak is observed at high temperatures, the peak is suppressed around μ₀H ∼ 5 T at low temperatures, exhibiting a distinct dip anomaly.

Nonreciprocal responses can manifest even in the normal state in the presence of inversion symmetry breaking. However, no discernible nonreciprocal response is observed in the normal state of the present superlattices. Therefore, the observed response can be attributed to the superconducting properties. There are two possible sources for the NRET response in the superconducting state: one is extrinsic, such as Meissner screening currents and vortex motion, and the other is intrinsic, due to an exotic superconducting state with finite-momentum pairing. We first discuss extrinsic origins. A direction-dependent critical current can be induced by the combination of the Meissner screening effect and an asymmetric vortex surface barrier arising from the sample edges [43]. However, since such an effect is important only at very low fields around the lower critical field, it is negligibly small in the present field range, which significantly exceeds the Pauli limit.

Another extrinsic origin is asymmetric vortex motion in the present superlattices, discussed further below. In Fig. 4, the difference of R_2ω between the two configurations, normalized by R_n, is plotted in color; the gray area displayed in Fig. 3 corresponds to ΔR_2ω/R_n. In the light blue area at low fields in Fig. 4, while finite R_2ω is observed for both configurations due to extrinsic contributions from vortex motion, ΔR_2ω is negligibly small. In the red area at high fields, a finite ΔR_2ω appears due to the intrinsic contribution originating from the Cooper pairs. Note that, as shown by the arrows in Fig. 3
indicating H_c2∥ determined by R_dc, ΔR_2ω vanishes at around H_c2∥, while a small but finite R_2ω remains at H_c2∥, likely due to the superconducting fluctuation effect and inhomogeneity.

No NRET is observed in the bicolor superlattice (Fig. S5). In this system, as discussed above, the response arising from vortex motion is canceled out. The fact that the bicolor superlattice preserves global inversion symmetry leads us to conclude that the emergence of the NRET difference in the tricolor lattice, ΔR_2ω, is an intrinsic phenomenon arising from the Rashba SOI. The intrinsic NRET emerges as a direct consequence of a state with finite-momentum pairs, and such an effect is negligibly small in the BCS state. Therefore, the results of Fig. 4 provide evidence for the appearance of a high-field superconducting state at the low-T/high-H corner, distinct from the low-field BCS state. Although an anomalous upturn of H_c2∥ at low temperatures had been suggested in a previous study [40], the superconducting state at high fields had remained an unresolved issue, including the possible existence of a new phase. We note that we can rule out the possibility that the observed nonreciprocal phenomena are tied to the so-called Q-phase, in which the superconductivity may be intertwined with magnetic order in bulk CeCoIn5, for the following reasons. Firstly, in the basic Drude model, nonreciprocal transport is independent of spin; the primary effect of the Q-phase on nonreciprocal transport would then be Brillouin-zone folding, which has a negligible effect. In addition, in the Q-phase, where spatial modulations of the order parameter appear, electron scattering should increase, resulting in a suppression of the nonreciprocal response; our observations indicate the opposite in the present case. Furthermore, unlike the FFLO states, Cooper pairs in the Q-phase do not carry a finite momentum. Hence, even when the inversion symmetry is broken, the momentum of the Cooper pair remains unchanged.

There are two scenarios for finite-momentum pairing states other than the helical superconducting state: the FF and LO states [10] (Fig. 1c), where pairing between sections of the Zeeman-split Fermi surfaces results in Cooper pairs (k + q_Z↑, −k + q_Z↓) with momentum q_Z ≈ μ_B H/(ℏ v_F) (v_F is the Fermi velocity). In the FF state, the superconducting order parameter is described as Δ(r) ∝ exp(2i q_Z·r), with constant amplitude and spatially varying phase, while in the LO state, with Δ(r) ∝ cos(2 q_Z·r), the amplitude oscillates in space. However, it should be stressed that we can discard the possibility of both FF and LO states as the origin of the intrinsic NRET phenomena, because the Rashba spin-splitting energy is overwhelmingly larger than the superconducting gap energy in the present tricolor superlattices, as demonstrated in ref. 40 (see Fig. S2).
In this situation, FF- and LO-type pair formation cannot occur. In addition, since the LO state contains both q_Z and −q_Z, its NRET should cancel out. In the absence of inversion symmetry, the LO phase can be characterized by a more general order parameter, Δ(r) = a e^(2i q_Z·r) + b e^(−2i q′_Z·r) with a ≠ b. The phase defined by this order parameter is commonly referred to as the 'stripe phase'. However, theoretical predictions suggest that the stripe phase only emerges within a narrow region at low temperatures in the H-T phase diagram [2], whereas helical superconductivity manifests within a more expansive region surrounding it. Considering this, it seems plausible that our experimental observations represent helical superconductivity. It may be possible, however, to observe a stripe phase at even lower temperatures. Based on these results, we conclude that the high-field regime indicated by the red color in Fig. 4 represents the helical superconducting state, while the low-field regime in light blue corresponds to the BCS state.

The strong field-orientation dependence of the intrinsic NRET likely appears as a result of the direction-dependent Doppler shift of the quasiparticles in d-wave superconductors. When H is applied parallel to the nodal direction, quasiparticles around the nodes perpendicular to the magnetic field are excited. When the current is applied parallel to these nodes, the system exhibits more metallic behavior compared with that for the antinodal direction. Although such a simple interpretation should be scrutinized, the present results also point toward the importance of the nodal structure for the direction-dependent NRET. We note that a finite-momentum pairing state has been suggested for the pair-density-wave (PDW) state in the pseudogap phase of cuprates by scanning tunneling microscopy measurements [44]. It is therefore highly intriguing to apply the present direction-dependent NRET to the putative PDW state.

The NRET effect arising from the intrinsic superconducting response observed in the tricolor d-wave superconducting superlattice with strong Rashba interaction provides evidence for the emergence of a superconducting state with finite-momentum Cooper pairs at high fields, most likely a helical superconducting state. Such a unique state provides a platform to investigate novel fermionic superfluid systems beyond the BCS pairing states.
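The distinction between the gap textures invoked above (phase-modulated FF/helical, amplitude-modulated LO, and the asymmetric 'stripe' superposition) can be illustrated numerically. The sketch below evaluates the three order-parameter forms quoted in the text on a 1D cut; the momentum q and the coefficients a and b are arbitrary illustrative values, not fitted quantities.

import numpy as np

# Illustrative sketch of the finite-momentum gap textures named in the text,
# on a 1D cut r -> x. q stands for a generic pair momentum (q_R for the
# helical state, q_Z for FF/LO); all numbers are arbitrary.
x = np.linspace(0.0, 10.0, 2001)
q, D0 = 1.0, 1.0

ff_helical = D0 * np.exp(2j * q * x)     # Δ ∝ e^{2iqx}: constant |Δ|, winding phase
lo = D0 * np.cos(2.0 * q * x)            # Δ ∝ cos(2qx): |Δ| has periodic nodal planes
a, b = 1.0, 0.4                          # a != b: 'stripe' phase without inversion symmetry
stripe = a * np.exp(2j * q * x) + b * np.exp(-2j * q * x)

for name, d in [("FF/helical", ff_helical), ("LO", lo), ("stripe", stripe)]:
    amp = np.abs(d)
    print(f"{name:10s}  |gap| min = {amp.min():.3f}, max = {amp.max():.3f}")

# Expected: FF/helical 1.000/1.000 (pure phase modulation), LO ~0/1 (amplitude
# nodes), stripe 0.600/1.400 (modulated but node-free, because a != b).

The node-free but unequal superposition in the stripe case is what breaks the cancellation between +q_Z and −q_Z branches, which is why, unlike the symmetric LO state, it could in principle support an NRET response.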
Figure 3 depicts R_2ω normalized by the normal-state resistance R_n when both in-plane H and I are applied along the nodal (H ∥ [110], I ∥ [1̄10]) and antinodal (H ∥ [100], I ∥ [010]) directions.

Elliptical vortices may be formed within the CeCoIn5 layers in a parallel field, because the thickness of the CeCoIn5 layers is comparable to the c-axis coherence length. When both H and I (⊥ H) are applied in-plane, the vortices move in and out across the interfaces. If there is an asymmetric vortex pinning potential at the interface of the different materials, NRET may occur. In the tricolor superlattices, different vortex-thread pinning potentials on either side of the superconductor interface may induce NRETs of different amplitudes, leading to an asymmetric motion that can generate a net NRET signal. In fact, we measured the NRET response in bicolor ··· A/B/A/B ··· stacking superlattices, in which the contributions from the two sides of each interface cancel, and observed no NRET response (Fig. S5). This supports the presence of an NRET response arising from vortex motion perpendicular to the layers. To rule out the possibility that pancake vortices perpendicular to the layers, created by a small but finite misalignment of H out of the 2D plane, induce the NRET effect, we measured the NRET with H tilted about 4° from the ab plane and found no such effect in the tricolor superlattices (see Fig. S4). Additionally, no NRET effect was observed in the Lorentz-force-free geometry, H ∥ I. These results indicate that the extrinsic NRET response, if present, arises from asymmetric vortex motion perpendicular to the layers. The present tricolor superlattices with tetragonal crystal symmetry have no twin boundaries; it is then unlikely that the vortex motion perpendicular to the 2D plane depends on the in-plane H and I (⊥ H) directions. To separate the intrinsic contribution from the extrinsic one, we take the difference between the two configurations, as represented by the gray area in Fig. 3. Notably, except for the gray area at low temperatures, R_2ω from the two configurations nearly overlaps. This indicates that R_2ω for both configurations is dominated by extrinsic vortex motion, except in the regime where R_2ω exhibits a dip anomaly at low temperatures around μ₀H ∼ 5 T for the antinodal configuration. Therefore, the dip anomaly when both H and I are applied along antinodal directions is attributed to an intrinsic origin, arising from the Cooper pairs, superimposed on the extrinsic vortex contribution. To obtain further information on the origin of the dip, R_2ω was measured at different relative angles of H and I: H ∥ [100] and I ∥ [1̄10], and H ∥ [110] and I ∥ [010] (see SI for details). The results show that the appearance of the dip anomaly is determined by the field direction, not by the current direction, implying that the dip anomaly is related to the superconducting gap structure.

The NRET response provides pivotal information on the superconducting phase diagram of the tricolor superlattice displayed in Fig. 4. The solid line in Fig. 4 represents the upper critical field for H ∥ [100], H_c2∥, determined as the field where R_dc reaches 50% of R_n. We find that the H_c2∥ line for H applied along the nodal direction coincides well with that for the antinodal direction, indicating a similar H-T phase diagram (Fig. S1). The upper right area in Fig. 4 (colored in light brown)
represents the normal state. In the superconducting state below H_c2∥, the difference of R_2ω between the two configurations, ΔR_2ω ≡ R_2ω(H ∥ [110], I ∥ [1̄10]) − R_2ω(H ∥ [100], I ∥ [010]), is plotted.

FIG. 1. Schematic of the various types of Cooper pairings. a, Conventional BCS pairing state: zero-momentum pairing with (k↑, −k↓) occurs between electrons in states with opposite momenta and opposite spins. b, Helical superconducting state: arrows on the Rashba-split Fermi surfaces indicate spins. H parallel to the x axis shifts the centers of the small and large Fermi surfaces by q_R along the +y and −y directions, respectively. Pairs are formed within each Rashba-split Fermi surface between the states (k + q_R↑, −k + q_R↓), leading to a gap function with phase modulation Δ(r) ∝ exp(2i q_R·r); the Cooper pairs have finite center-of-mass momentum q_R. c, FF and LO pairing states: pairing with (k + q_Z↑, −k + q_Z↓) occurs between sections of the Zeeman-split Fermi surfaces, where q_Z ≈ 2μ_B H/(ℏ v_F); the Cooper pairs have finite center-of-mass momentum q_Z. In the FF state the order parameter varies as Δ(r) ∝ exp(2i q_Z·r), while in the LO state it varies as Δ(r) ∝ cos(2 q_Z·r).

FIG. 2. Tricolor d-wave superconducting superlattices. a, Schematic representation of the non-centrosymmetric tricolor Kondo superlattices with ··· A/B/C/A/B/C ··· structure. The sequence YbCoIn5(3)/CeCoIn5(8)/YbRhIn5(3) is stacked repeatedly 30 times, so that the total thickness is about 300 nm. The orange arrows represent the asymmetric potential gradient ∇V, which gives rise to a Rashba splitting of the Fermi surface with different spin structures. The crystal structure of Ce(Yb)Co(Rh)In5 is also illustrated. b, Scanning electron microscopy image of a tricolor superlattice patterned by focused ion beam (FIB). The black-line region corresponds to the area cut by the FIB. Red, blue and green lines indicate the current paths along [100], [010], and [110], respectively. The width of the current path is 20 ± 2 μm. c, Temperature dependence of the dc resistance R_dc for H ∥ [100] and I ∥ [010].
5,192.6
2024-03-25T00:00:00.000
[ "Physics" ]
Photothermal Porosity Estimation in CFRP by the Time-of-Flight of Virtual Waves

Porosity is an unavoidable defect in carbon fiber reinforced polymers and has noticeable effects on mechanical properties, since gas-filled voids weaken the epoxy matrix. Pulsed thermography is advantageous because it is a non-contacting, non-destructive and fast photothermal testing method that allows the estimation of material parameters. Using the Virtual Wave Concept for thermography data, ultrasonic evaluation methods become applicable. In this work, the pulse-echo method for Time-of-Flight measurements is used, whereby the determined Time-of-Flight is directly related to the thermal diffusion time of the examined material. We introduce a signal-to-noise-dependent approach, the optimum evaluation time, for evaluating only the time ranges which contain information about the heat diffusion. After validation of the method for heterogeneous materials, effective medium theories can be used for quantitative porosity estimation from the estimated diffusion time. This model-based approach for porosity estimation delivers more accurate results for transmission and reflection configuration measurements compared to thermographic state-of-the-art methods. The results are validated by X-ray computed tomography reference measurements on a wide range of porous carbon fiber reinforced plastic specimens with different numbers of plies and varying porosity contents.

Introduction

Porosity is an unavoidable defect in carbon fiber reinforced plastic (CFRP). It is caused by the formation of air-filled voids during the manufacturing process. Particularly, the autoclave molding of preimpregnated fibers (prepreg fabrics or unidirectional tapes) is a critical step in manufacturing: enclosed air during the lay-up, and insufficient hydrostatic pressure to keep any moisture or volatiles dissolved in the resin until gelation occurs, result in voids [1]. The aim is to reduce the porosity to a minimum using optimal process parameters. The presence of porosity affects the mechanical properties of the laminate, since voids weaken the epoxy matrix. Especially the matrix-dominated material properties, such as the transverse tensile strength and the interlaminar shear strength, decrease with increasing porosity [2,3]. In the production of safety-relevant CFRP components and structures, porosity has to be characterized quantitatively with sufficient accuracy; in the aerospace industry, for example, a porosity content of less than 2% is accepted for the most safety-critical parts.

Balageas et al. [4] have shown the ability of active thermography to detect defects via variations in thermal properties. Since porosity has a measurable influence on the thermal diffusivity, former studies based on the photothermal effect in transmission mode have been carried out for the estimation of the porosity distribution in CFRP [5]. Due to the strong dependence on the pore shape and distribution, the general properties of the microstructure (e.g. averaged pore shape and orientation) have to be known to allow the prediction of the effective thermal diffusivity [6]. Based on effective medium theories (EMT), a sophisticated model was introduced by Mayr et al. [7]. This model enables the estimation of porosity in CFRP without knowledge of the actual sample thickness; to this end, an apparent thermal diffusivity is introduced and expressed as a quadratic model.
Cernuschi [8] has shown the estimation of thermophysical properties by thermographic techniques for thermal barrier coatings (TBC) with an overall uncertainty of ±5% porosity. Currently, the precision of porosity estimation based on the photothermal measurement of the thermal diffusion time is only adequate for transmission measurements [6]. In this work, quantitative non-destructive porosity estimation is also demonstrated for reflection measurements, with a precision comparable to that in transmission mode. Due to the orthotropic stiffness of fabric CFRP, the specimen volume increases in the out-of-plane direction, orthogonal to the layers, as a consequence of porosity. To determine this porosity-dependent thickness increase and the simultaneously reduced thermal diffusivity, the Virtual Wave Concept (VWC) is applied. Former studies have shown reliable parameter estimation by the VWC on homogeneous isotropic material [9]. In this study, the VWC is applied to heterogeneous materials for the first time. The validation of the parameter estimation is shown on a CFRP step wedge, and the ability to estimate porosity is shown on different porous CFRP test specimens (5, 10 and 20 plies).

Heat diffusion causes entropy production that is equal to information loss. Hence, the transformation of the measured temperature signal into a virtual wave signal is a severely ill-posed inverse problem. To obtain an appropriate solution for this ill-posed inverse problem, we assume prior information in the form of positivity and sparsity to partly overcome the diffusion-based information loss [10,11]. In order to evaluate only the time ranges which contain information about the heat diffusion inside the sample, an SNR-dependent approach is introduced. Using this optimum evaluation time, the undesired influence of heat losses due to convection is additionally reduced. This enables a reliable quantitative porosity estimation for transmission and reflection measurements.

Virtual Wave Concept

In this work, we assume 1D heat diffusion for the following reasons:

- spatially homogeneous photothermal excitation of the surface by flash lamps
- plane-parallel CFRP test coupons with sufficient lateral expansion
- porous CFRP test coupons can be considered as an effective medium

To estimate the porosity-dependent thermal diffusion time t_d = L²/α, where α is the thermal diffusivity and L the sample thickness, the Virtual Wave Concept (VWC) is applied [12]. Through the VWC, ultrasonic evaluation methods become applicable to photothermal measurements. To this end, a virtual wave field T_virt(r, t′) is calculated by applying a local transformation between the different timescales t and t′ to the measurement data T(r, t). This transformation is a linear inverse problem and can be formulated as a Fredholm integral of the first kind for t > 0 (Eq. 1):

T(r, t) = ∫₀^∞ K(t, t′) T_virt(r, t′) dt′   (1)

In our case the temperature distribution T(r, t) is obtained from a thermographic experiment. The kernel K(t, t′) is given exactly by the mathematical model of the virtual waves (Eq. 2); α is the thermal diffusivity of the examined material, and the virtual speed of sound c can be chosen arbitrarily.
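Anticipating the discretization introduced formally in the next subsection, the sketch below assembles the transformation of Eq. 1 as a matrix-vector product. The Gaussian kernel shape used here is the form commonly quoted in the virtual-wave literature for Dirac-like heating; the grids, material values and quadrature weighting are placeholders, not the authors' exact settings.

import numpy as np

# Sketch of the discretized forward map T = K @ T_virt (cf. Eq. 1).
# Kernel shape assumed from the virtual-wave literature for h(t) = delta(t):
#   K(t, t') ~ c / sqrt(pi * alpha * t) * exp(-c^2 * t'^2 / (4 * alpha * t))
alpha = 4.28e-7            # thermal diffusivity (m^2/s), illustrative value
c = 1.0                    # virtual speed of sound, arbitrary by construction
n = 500                    # number of samples on both time scales (n = m)
dt = 1e-3                  # time step (s), placeholder (Delta_t = Delta_t')

t = np.arange(1, n + 1) * dt        # measurement times t_k (> 0)
tp = np.arange(n) * dt              # virtual times t'_j

K = (c / np.sqrt(np.pi * alpha * t[:, None])
     * np.exp(-(c * tp[None, :])**2 / (4.0 * alpha * t[:, None])) * dt)

# A sparse virtual wave (initial pulse + one back-wall echo) maps onto a
# smooth, diffusion-like temperature signal -- this smoothing is exactly
# what the regularized inversion below has to undo.
T_virt = np.zeros(n)
T_virt[[0, 120]] = 1.0
T = K @ T_virt
print("kernel:", K.shape, " T range:", float(T.min()), float(T.max()))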
Discretization

The sample surface temperature T(r, t) is measured with an infrared (IR) camera in reflection (z = 0) as well as in transmission mode (z = L). The temperature data T_k is recorded pixel-wise at many discrete time steps k, starting simultaneously with the thermal stimulation (t = 0). The discrete time scales are given by t_k = (k − 1)Δ_t and t′_j = (j − 1)Δ_t′ with the running variables k = (1, 2, …, n) and j = (1, 2, …, m). In this work, we assume that the two time scales have equal time resolution (Δ_t = Δ_t′) and equal numbers of time steps (n = m). After discretization, Eq. 1 can be written in matrix form (Eq. 3) [11]:

T = K T_virt   (3)

Considering a Dirac-delta-like heating function h(t) = δ(t), which is a good approximation for the flash-light excitation, the components of the discrete kernel follow (Eq. 4), parameterized by the dimensionless virtual speed of sound c̃ and the discrete Fourier number Δ_Fo [13] (Eq. 5), where Δ_z denotes the spatial resolution.

Regularization with Prior Information

Entropy production during heat diffusion causes information loss [14,15]. Hence the inversion of Eq. 3 is a discrete, severely ill-posed inverse problem, and regularization techniques are necessary to calculate an appropriate solution. To show the benefit of incorporating prior information, we compare the regularization techniques truncated singular value decomposition (T-SVD) and the alternating direction method of multipliers (ADMM). For T-SVD, we only incorporate knowledge about the noise of the IR-camera detector as prior information. To include prior information in the form of positivity and sparsity we use ADMM. Positivity is based on the assumption that the thermal wave, and thus also the virtual wave, propagates in 1D; in analogy to the 1D photoacoustic wave, the 1D virtual wave takes only positive values [16]. Because of the effective-medium assumption, no intermediate echoes are expected in the sample; consequently, virtual wave reflections occur only at the front-wall and back-wall surfaces. In prior work we have shown that a Dirac-delta-like heating causes a Dirac-delta-like impulse response for the virtual wave [9]. Consequently we have a sparse signal, which enables the incorporation of sparsity as prior information for regularization. Following ADMM, we split the objective function into two parts (Eqs. 6-8) [17]:

f(T_virt) = ½ ‖K T_virt − T‖₂²,  g(z) = λ ‖z‖₁,  subject to T_virt = z

λ denotes the regularization parameter, which is determined via the solution norm ‖T_virt‖₁ and the residual norm ‖K T_virt − T‖₂. A good estimate for λ is found at the corner of the L-curve [18], as schematically illustrated in Fig. 1.
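A compact sketch of this regularized inversion is given below, before turning to the per-pixel Time-of-Flight evaluation. It is a generic textbook ADMM for the stated splitting (least-squares data term, ℓ1 sparsity, positivity via projection), not the authors' implementation; λ, ρ and the iteration count are placeholders, with λ in practice selected at the corner of the L-curve as described above.

import numpy as np

# Generic ADMM sketch for: min 1/2 ||K x - T||_2^2 + lam ||z||_1
#                          s.t. x = z, z >= 0   (sparsity + positivity)
def admm_virtual_wave(K, T, lam=1e-2, rho=1.0, n_iter=200):
    n = K.shape[1]
    x = np.zeros(n)          # virtual wave iterate
    z = np.zeros(n)          # split variable carrying the priors
    u = np.zeros(n)          # scaled dual variable
    M_inv = np.linalg.inv(K.T @ K + rho * np.eye(n))   # cached for x-updates
    KtT = K.T @ T
    for _ in range(n_iter):
        x = M_inv @ (KtT + rho * (z - u))       # quadratic subproblem
        z = np.maximum(x + u - lam / rho, 0.0)  # soft threshold + positivity
        u = u + x - z                           # dual update
    return z

# Per-pixel usage: T_virt_rec = admm_virtual_wave(K, T_meas); the index of the
# back-wall peak in T_virt_rec then yields the Time-of-Flight evaluated next.

Because the positivity projection and the soft threshold are combined in the z-update, the returned signal is non-negative and sparse by construction, which is what makes the back-wall peak localization in the following subsections well defined.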
Estimation of the Thermal Diffusion Time

To estimate the thermal diffusion time t_d, we apply the Time-of-Flight (ToF) method from ultrasonic testing to the virtual wave data. After the absorption of optical radiation at the sample surface (z = 0), the calculated virtual wave originates and propagates into the specimen with velocity c. For the visualization of the virtual wave, we use an A-scan representation: the amplitude of the virtual wave field is displayed versus the time t = z/c [19]. With the arbitrarily chosen virtual speed of sound c, it is possible to estimate the thermal diffusion time by evaluating the time between the initial pulse δ(t₀) and the corresponding echo at the back surface δ(t_L). Figure 2 shows the principle of the Time-of-Flight evaluation for two different positions on the test specimen by A-scan representations. The point-wise evaluation allows a mapping of the diffusion time, which may depend on spatial variations of the microstructure, e.g. porosity, fiber density, or degree of cure. The mesh at z = 0 in Fig. 2 depicts the pixels, i.e. measurement points, of the IR camera. For each pixel we can calculate the ToF of the virtual wave signal for both the transmission and the reflection configuration.

The estimation of the thermal diffusion time via the virtual wave signal and ToF is based on the 1D analytical solution of the heat conduction equation for adiabatic boundary conditions [20]. Here, a Dirac-delta-like heating pulse δ(t) introduces the heat energy at t = 0 on the surface z = 0, yielding the initial condition (IC) T₀(z, t = 0) = q₀/(ρ c_p) δ(z). Via the IC and the Green's function solution we obtain the corresponding temperature function (Eq. 9):

T(z, t) = q₀/(ρ c_p L) [1 + 2 Σ_{n=1}^{∞} cos(nπz/L) exp(−n²π²αt/L²)]   (9)

where q₀ [J m⁻²] is the heat energy density absorbed in an infinitely thin surface layer δ(z), ρ [kg m⁻³] is the density, c_p [J kg⁻¹ K⁻¹] the specific heat, α [m² s⁻¹] the thermal diffusivity and L [m] the sample thickness. In Fig. 3, the temperatures T_Z22 calculated with Eq. 9, in reflection as well as in transmission configuration, applying the geometrical and physical properties of Table 1a, are shown. T_Z33 represents the solution of the 1D heat conduction equation for boundary conditions of the third kind, to demonstrate convective influences.

Optimum Evaluation Time

The optimum evaluation time is defined by the requirement that only time ranges are evaluated which contain information about the heat diffusion inside the sample. If the temperature change reaches the order of magnitude of the noise, the signal is truncated at the time t_end. This approach also reduces the undesired influence of heat losses due to convection on the measurement result (see T_Z33 in Fig. 3). The optimum evaluation time is defined via the dimensionless Fourier number Fo_end = α t_end/L². For its derivation, we use the signal-to-noise ratio SNR = T_a/σ, where T_a = q₀/(ρ c_p L) represents the adiabatic temperature. The simulated temperature data T_Z22 is distorted with additive white Gaussian noise (AWGN) with a standard deviation σ of 25 mK [21]. To derive an analytical expression for Fo_end as a function of the SNR, we utilize the approximation of Eq. 9 with n = 1, whose dimensionless representation is (Eq. 10)

Θ(z_D, Fo) = 1 + 2 cos(π z_D) e^(−π² Fo)   (10)

with the dimensionless thickness z_D = z/L. For Fo > 0.1 the n = 1 approximation shows sufficient agreement with the exact solution to define the cut-off value of the temperature Θ_c for the determination of the optimum evaluation time (Eq. 11). By substituting these limit values into Eq. 10, the optimum evaluation time can be determined (Eq. 12).

Virtual A-scan Representation

For the derivation of the virtual wave, we use the truncated temperature data T^end_Z22, corresponding to the optimum evaluation time (Eq. 12). In reflection configuration, c̃ has to be halved because of the two-way diffusion. The analytic virtual wave T^a_virt is derived by applying the method of images to solve the PDE of the virtual wave equation [9]. Owing to the two-way diffusion process, the back-wall echoes in reflection configuration have a lower amplitude than in transmission configuration. The values of the virtual waves based on T-SVD regularization oscillate around 0 K, which makes the evaluation more difficult, in contrast to ADMM, where only positive values are allowed. Additionally, the prior information in the form of sparsity reduces the full width at half maximum of the peaks of T^ADMM_virt, whereby an exact localization of the back-wall echoes can be achieved.
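The adiabatic solution and the SNR-based truncation can be reproduced in a few lines. The series below is the closed form of Eq. 9 in dimensionless variables; the explicit expression Fo_end = ln(2·SNR)/π² is a reconstruction from the n = 1 approximation with the cut-off 2·exp(−π²·Fo_end) = 1/SNR, and the numerical values (thickness, diffusivity, SNR) are placeholders rather than the tabulated experimental ones.

import numpy as np

def theta(z_D, Fo, n_terms=50):
    """Dimensionless temperature Theta = T/T_a from the series of Eq. 9."""
    n = np.arange(1, n_terms + 1)[:, None]
    return 1.0 + 2.0 * np.sum(np.cos(n * np.pi * z_D)
                              * np.exp(-n**2 * np.pi**2 * Fo), axis=0)

L = 2.0e-3               # sample thickness (m), placeholder
alpha = 4.28e-7          # thermal diffusivity (m^2/s), placeholder
t_d = L**2 / alpha       # thermal diffusion time (s)
SNR = 100.0              # T_a / sigma, placeholder

Fo = np.linspace(1e-3, 1.0, 1000)
T_rear = theta(1.0, Fo)      # transmission configuration (z = L)
T_front = theta(0.0, Fo)     # reflection configuration (z = 0)

# Reconstructed optimum evaluation time: truncate once the n = 1 deviation
# from the adiabatic plateau drops below the noise level 1/SNR.
Fo_end = np.log(2.0 * SNR) / np.pi**2
t_end = Fo_end * t_d
keep = Fo <= Fo_end          # samples that still carry diffusion information
print(f"Fo_end = {Fo_end:.3f}, t_d = {t_d:.2f} s, t_end = {t_end:.2f} s, "
      f"kept {keep.sum()} of {Fo.size} samples")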
Virtual Time-of-Flight Measurements

For the estimation of the thermal diffusion time t_d, virtual ToF measurements can be used. If the thermal diffusivity of the examined material is known, the kernel K can be calculated accurately. Since the thermal diffusivity α may be unknown, depending on spatial variations of the microstructure, literature-based values α_init then have to be assumed for the initial calculation of the kernel K. We determine the ToF, i.e. the position of the back-wall peak, in transmission configuration by measuring the time between the initial pulse and the time when the propagating virtual wave arrives at the back wall (Fig. 4a). In reflection configuration, the ToF is determined by measuring the time between the initial pulse and the corresponding echo at the back surface of the sample (Fig. 4b, Table 1). For the calculation of the virtual waves shown in Fig. 5, we used two different temperature data sets derived via Eq. 9 from the values given in Table 1. We assumed the same initial thermal diffusivity α_init = 4.28 × 10⁻⁷ m² s⁻¹ for both examinations, whereby in case (a) α = α_init and in case (b) α < α_init. The dashed lines in Fig. 5 depict the thickness L, and the dotted lines show the estimated thickness L_est = N Δ_z, where Δ_z denotes the spatial resolution and N is the index of max(T_virt), which corresponds to the estimated back-wall peak. The resulting ToF can only be determined correctly if the thermal diffusivity of the examined material is known (α = α_init). Substituting Δ_z²/α at α = α_init by Δ_{t_d} in Eq. 5 results in Eq. 14, whereby the ToF-related thermal diffusion time is given by t_d = L_est²/α_init (Eq. 15). For both examples in Fig. 5, independent of prior knowledge of the thickness L and the thermal diffusivity α, the thermal diffusion time t_d results as given in Table 2. The difference between the theoretical thermal diffusion time t_d in Table 1 and the experimental result (Table 2) is less than 1% and stems from the discretization.

Experimental Results

To demonstrate the applicability of the VWC for parameter estimation in CFRP, two different experiments were performed. The repeatability and accuracy of the VWC are shown with experiments on a step wedge, where the results are compared with state-of-the-art methods. Furthermore, the VWC is applied to different CFRP coupons for porosity estimation. Based on an effective medium theory (EMT), porosity values can be derived, which are validated with 3D X-ray computed tomography measurements.

Pulsed-Thermography Set-Up

The experimental setup for optically excited pulsed thermography experiments, in reflection as well as in transmission configuration, is shown in Fig. 6. Two flash lamps with an electrical energy of 12 kJ and a pulse duration of approx. 2 ms, driven by a signal generator, are used for thermal excitation. The absorbed heat energy density is approximately q₀ = 11.8 kJ m⁻², which was derived by adapting the theoretical to the experimental adiabatic temperature. The data acquisition is triggered by a PC and timed to the excitation signal, whereby the temperature measurements were carried out with an IR camera equipped with an indium antimonide (InSb) detector. The cooled 1280 × 1024 pixel focal-plane-array camera has an NETD of about 25 mK and is sensitive in a spectral range of 1.5-5.1 μm. The spatial resolution …

Test Specimen

For the validation of the VWC on heterogeneous material we examine two step wedges (each with 15 steps), made from plain epoxy-based woven-fabric CFRP (Fig. 7), with transversely isotropic material behaviour [25].
Experimental Results

To demonstrate the applicability of the VWC for parameter estimation in CFRP, two different experiments were performed. The repeatability and accuracy of the VWC are shown with experiments on a step wedge, where the results are compared with state-of-the-art methods. Furthermore, the VWC is applied to different CFRP coupons for porosity estimation. Based on an effective medium theory (EMT), porosity values can be derived, which are validated with 3D X-ray computed tomography measurements.

Pulsed-Thermography Set-Up

The experimental setup for optically excited pulsed thermography experiments in reflection as well as in transmission configuration is shown in Fig. 6. Two flash lamps with an electrical energy of 12 kJ and a pulse duration of approx. 2 ms, driven by a signal generator, are used for thermal excitation. The absorbed heat energy density is approximately q_0 = 11.8 kJ m⁻², which was derived by fitting the theoretical to the experimental adiabatic temperature. The data acquisition is triggered by a PC and timed to the excitation signal, whereby the temperature measurements were carried out with an IR camera equipped with an indium antimonide (InSb) detector. The cooled 1280 × 1024 pixel focal plane array camera has a NETD of about 25 mK and is sensitive in a spectral range of 1.5-5.1 µm.

Test Specimen

For the validation of the VWC on heterogeneous material we examine two step wedges (each 15 steps), made from plain epoxy-based woven fabric CFRP (Fig. 7), with transversely isotropic material behaviour [25]. The samples are not porous (Φ = 0); their thermophysical properties therefore depend only on the thermophysical properties of the matrix material (fibre and resin). Since there are no porosity-dependent microstructural variations, the thermal diffusivity α is assumed to be constant for all steps. The overall lateral size of the step wedges is (750 × 150) mm² and the size corresponding to one step is (50 × 150) mm². Five steps, with thicknesses from L = 1.59 mm up to L = 4.86 mm, are examined.

Measurement Parameters

For the experimental evaluation of the thermal diffusion time t_d, the width of the discrete time step Δt, respectively the frames per second (FPS), was chosen in dependence on the specimen thickness. The final measurement time is equal to the characteristic diffusive time scale t_N = t_d, corresponding to the Fourier number Fo = 1. The characteristic diffusive time scale t_d = L²/α describes a long-time conduction regime in which the temperature in the body has reached the adiabatic plateau T_a. Under the initial assumption of the same number of data points (N = 1000) for each step of the step wedge, the resulting discrete time steps Δt are given in Table 3. For further data processing, due to the truncation of the measurement data (Eq. 12), only time ranges are evaluated which contain information about the heat diffusion inside the sample. In addition, the influence of heat losses due to convection on the measurement results is reduced. The width of the discrete spatial steps was chosen such that the reconstruction area corresponds to four times the thickness of the component: Δz = 4L/(N · Fo_end). This ensures nearly the same number of data points (N_Fo_end = N · Fo_end) for each measurement, in dependence on the SNR. Since the truncated measurement data and the dimension of the kernel K ∈ R^(N_Fo_end × N_Fo_end) depend on the number of remaining discretizations, the computational costs are reduced because Fo_end < 1. The regularization technique used, ADMM, is designed to filter noise-based influences from the inverted solution [26]. We assume that the experimental noise at each single position r in every measurement is nearly constant. To accelerate the regularization operation we use a global regularization parameter, which is determined at position r(1, 1) and used at every position r. We take into account 72 pixels per step for the calculation of the mean value and standard deviation.
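The discretization rules just described can be summarized in a few lines. This is a sketch assuming Δt = t_d/N with t_d = L²/α and the Δz relation above; the default Fo_end and the example diffusivity are illustrative placeholders.

```python
def measurement_params(L, alpha, N=1000, fo_end=0.5):
    """Temporal/spatial discretization for one step (sketch).
    Measurement time equals the diffusive time scale t_d = L^2/alpha (Fo = 1);
    dz spans a reconstruction area of four sample thicknesses over the
    N*fo_end retained samples. fo_end should come from Eq. 12 for the
    measured SNR (0.5 is a placeholder)."""
    t_d = L ** 2 / alpha
    dt = t_d / N
    dz = 4.0 * L / (N * fo_end)
    return t_d, dt, 1.0 / dt, dz   # t_d [s], dt [s], FPS [1/s], dz [m]

# Thinnest examined step, with an illustrative diffusivity:
print(measurement_params(L=1.59e-3, alpha=3.74e-7))
```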
Results for Validation of the Method

The result of the inverse problem (Eq. 3) is directly dependent on the thermal measurement, so the truncated experimental temperatures T_meas are analysed and compared to the regularized temperatures for model validation purposes. In Fig. 8 the measured temperatures on the back side (Fig. 8a) and on the front side (Fig. 8b) are shown for five different steps # = [7, 11, 15, 19, 23], as these steps cover nearly the entire range of the step wedge; the measurement parameters are given in Table 3. The comparison of the measured temperature values T_meas (for one pixel per step) with the regularized temperatures T_reg^ADMM shows good agreement and demonstrates the filter effect due to regularization. Based on virtual ToF measurements we estimate the thermal diffusion time t_d in transmission (Fig. 9a) and reflection configuration (Fig. 9b), whereby the results are plotted versus the thickness L in Fig. 9. In both examination configurations the VWC is compared to the respective state-of-the-art methods, the Linear Diffusivity Fitting (LDF) method [27,28] and Thermographic Signal Reconstruction (TSR) [29,30]. The dashed lines show a quadratic fit, whereby the fitting parameter is the thermal diffusivity α. In reflection configuration a significant improvement of the standard deviation of the estimated thermal diffusion times is shown. Due to the large scattering of the thermal diffusion times estimated by TSR, where the experimental log-log thermogram is fitted by a logarithmic polynomial of degree n = 9 [30], no quantitative porosity estimation was possible, as also concluded by Mayr et al. [6]. For all TSR evaluations, the measurement data T_meas was not truncated and the full temporal data (Fo = 0.05 to Fo = 1) was used for evaluation. Despite the doubled diffusion process through the material in reflection configuration, the standard deviation of the thermal diffusion times estimated by the VWC is also less than 4% and therefore more accurate than the state-of-the-art method TSR. In Fig. 10 the A-scan representation of the virtual waves for the examined steps is shown. The solution of the inverse problem was calculated for every position r, where the depicted virtual waves represent the spatially averaged T_virt^ADMM over 72 pixels in transmission configuration (Fig. 10a) and in reflection configuration (Fig. 10b). In order to obtain the correct ToF, respectively the thermal diffusion time t_d, for the A-scan representation, we set the initial thermal diffusivity α_init to the previously determined spatially averaged thermal diffusivity α for each step. The given reference lines are based on the determined fitting parameter α (Fig. 9) and represent the nominal value of the thermal diffusion time for the examined step. The amplitude of the virtual wave decreases with increasing thickness of the sample. For large thicknesses the error in the estimation of the thermal diffusion time increases, due to the increasing peak width despite a fixed number of regularization iterations (transmission configuration I = 200 and reflection configuration I = 400). In heat diffusion the wavenumber k corresponds to the inverse of the thermal diffusion length μ; for μ = L a wavelength λ_PT = 2πμ follows, e.g., λ_PT = 1.26 × 10⁻² m for a thickness L = 2 mm. Since such large values of λ_PT correspond to the thickness of tens of plies, and thermal waves are strongly damped, no interlaminar interface echoes, known from ultrasonic testing (λ_UT ≈ λ_PT/100 for a testing frequency of 5 MHz), occur when investigating heterogeneous materials. In photothermal testing the specimen can thus be treated as a homogeneous material, as only the front- and back-wall echoes can be detected, and the prior information of sparsity is feasible for regularization. These validation measurements on CFRP step wedges demonstrate reliable results of high accuracy in estimating the thermal diffusion time t_d. This allows a suitable quantitative porosity estimation of heterogeneous materials, based on thermal diffusion time measurements in transmission as well as in reflection configuration.
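The quadratic fit of the diffusion times versus thickness can be reproduced along these lines. Only the end thicknesses 1.59 mm and 4.86 mm come from the text; the intermediate thicknesses and all t_d values are invented for illustration.

```python
import numpy as np

L = np.array([1.59, 2.40, 3.22, 4.04, 4.86]) * 1e-3  # step thicknesses [m]
t_d = np.array([6.8, 15.4, 27.7, 43.6, 63.2])        # estimated t_d [s] (illustrative)

# t_d = L^2/alpha is linear in L^2, so a least-squares slope yields 1/alpha.
slope = np.linalg.lstsq(L[:, None] ** 2, t_d, rcond=None)[0][0]
print(f"fitted thermal diffusivity: {1.0 / slope:.3e} m^2/s")
```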
Test Specimen

For quantitative porosity estimation in CFRP we examine 31 calibrated prepreg porosity coupons, made from plain epoxy-based woven fabric, with different numbers of plies N = [5, 10, 20] (Fig. 11). The size of the coupons in lateral direction is (40 × 20) mm² and the thickness of the specimen depends on the porosity Φ for a given number of plies N: L(Φ) = L_0/(1 − Φ) (Eq. 19), where L_0 = N l_x is the nominal specimen thickness without porosity (Φ = 0) and l_x = 0.216 mm represents the nominal ply thickness of one lamina. The nominal coupon thickness varies from 1.08 mm (5 plies) to 4.32 mm (20 plies). Cone beam X-ray computed tomography (XCT) and image analysis were carried out to obtain the microstructure in a representative volume of porous CFRP. Table 4 lists the measurement parameters for the pulsed thermography experiments in dependence on the porosity coupon thickness (5-, 10- and 20-ply material).

Measurement Parameters

The temporal discretization, respectively the frames per second (FPS), was determined by using the thickness information in Table 4 in exactly the same way as for the validation of the method in Sect. 4.2.2. For the 5-ply material, the desired measuring frequency of FPS = 373 1/s could not be reached due to hardware limitations related to the size of the observation window of the IR camera, so the maximal possible measuring frequency corresponding to the observation window size was chosen. To evaluate only time ranges containing heat diffusion information and to reduce the influence of heat losses due to convection, we truncate the measurement data corresponding to the optimum evaluation time (Eq. 12). For the correct temporal truncation of the porosity-affected samples in reflection configuration we used the diffusion times t_d already estimated in transmission configuration. To prevent errors due to edge effects, a region of interest (ROI) with a size of 11 × 31 pixels was processed for the calculation of the mean value and standard deviation of the thermal diffusion time. In contrast to the measurements on the step wedge, we do not use positivity as prior information for regularization when determining the virtual wave in reflection mode from porous samples. The reason for this is that high-frequency thermal waves also contribute to the measured temperature curves, especially in the short-term range. The high-frequency thermal waves are scattered and reflected at near-surface pores and thereby influence the resulting surface temperature. This means that the necessary scale separation for the assumption of an effective medium is no longer given, so a multidimensional heat flux must be considered and the restriction to only positive values of the virtual wave is no longer valid. In contrast, in transmission configuration the sample acts as a low-pass filter and only low-frequency components of the thermal waves contribute to the measured temperature data. Therefore, in transmission mode the assumption of positivity can also be used for measurement data of porous CFRP.

Effective Medium Theory

For a quantitative determination of porosity based on thermal diffusion times t_d estimated by ToF, it is imperative to use a material model based on EMT. Since both the thickness L (Eq. 19) and the effective thermal diffusivity α_eff are affected by the porosity Φ, we apply the model for the nominal thermal diffusion time derived in Appendix 1, t_dn(Φ) = L_0²/[(1 − Φ)² (α_0 + α_1 Φ)] (Eq. 20), which enables modeling without information on the actual sample thickness L(Φ). α_0 is the thermal diffusivity of the void-free matrix and α_1 is the sensitivity coefficient, which represents the change of the effective thermal diffusivity due to a change of the porosity in dependence on the averaged pore shape. The used value α_0 = 0.374 mm² s⁻¹ can be derived from a measurement of a void-free (Φ = 0) CFRP coupon, and α_1 = −0.673 mm² s⁻¹ is derived by a Mori-Tanaka approximation [25,31] using the thermophysical properties in Table 5.
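The following sketch implements this nominal diffusion time model and its numerical inversion for porosity. It assumes the reconstructed forms of Eqs. 19 and 20 given above; the helper names are hypothetical, and scipy's brentq is used because the model is monotonic in Φ.

```python
import numpy as np
from scipy.optimize import brentq

ALPHA0 = 0.374e-6   # void-free diffusivity [m^2/s], from the text
ALPHA1 = -0.673e-6  # porosity sensitivity [m^2/s], from the text

def t_dn(phi, L0):
    """Nominal thermal diffusion time (Eq. 20 as reconstructed above):
    L(phi) = L0/(1 - phi) and alpha_eff = ALPHA0 + ALPHA1*phi."""
    return L0 ** 2 / ((1.0 - phi) ** 2 * (ALPHA0 + ALPHA1 * phi))

def porosity(td_meas, L0, phi_max=0.4):
    """Invert the monotonically increasing model for the porosity."""
    return brentq(lambda p: t_dn(p, L0) - td_meas, 0.0, phi_max)

# Example: a 20-ply coupon (L0 = 4.32 mm) with a measured t_d of about 61 s
# maps to a porosity of roughly 5 % under these assumptions.
print(porosity(61.0, 4.32e-3))
```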
Results of Porosity Estimation

To demonstrate the statistically uniform distribution of the pores, diffusion time images (Fig. 12) for three 20-ply porosity coupons with different porosity values Φ = [1.55%, 5.62%, 10%] are processed first as examples. We used the truncated experimental temperatures T_meas and applied the pixel-wise, ToF-based estimation of the thermal diffusion time t_d. Line profiles of the determined thermal diffusion times along the dash-dot lines are depicted in Fig. 12, to give an expressive representation of the homogeneity of the samples. The region of interest for the further calculation of mean values and standard deviations is marked by a black rectangle (11 × 31 pixels). In total we examined 31 calibrated porosity coupons with porosity up to 10%. Previous cone-beam XCT measurements with a resolution of (10 µm)³ were used to characterize the microstructure for EMT modelling and to obtain a reference porosity value. The pores were determined by a segmentation method [34,35] that separates pore and matrix material, whereby only pores with a volume greater than 27 voxels (corresponding to a sphere-equivalent diameter of 37.2 µm) were considered. The resulting spatial mean values and standard deviations of the estimated thermal diffusion times t_d for each single coupon are illustrated logarithmically in Fig. 13 versus the porosity value derived by XCT (Φ = 0% to 10%). The model for the nominal thermal diffusion time (Eq. 20), based on EMT, is represented by dashed lines for 5-, 10-, and 20-ply material. With increasing thickness (number of plies) the thermal diffusion time increases, and for a given number of layers it increases for higher porosity values. The thermal diffusion times estimated from experimental data correspond quite well to the values predicted by the EMT. The overall uncertainty for a photothermal porosity estimation by thermal diffusion times from the VWC is less than ±0.8% porosity in transmission configuration. In addition to transmission configuration, a reliable porosity estimation based on the novel approach of virtual ToF measurements for thermographic data can also be performed quickly and with good accuracy for reflection configuration measurements. Due to the double diffusion process in reflection configuration the standard deviation is slightly larger, whereby the overall uncertainty is less than ±1.5% porosity.

Conclusion and Outlook

In this paper, we apply the Virtual Wave Concept on CFRP for the estimation of the porosity-dependent thermal diffusion time from flash-excited pulsed thermography experiments. Considering a novel, SNR-dependent approach for the temporal truncation of measurement data, only time ranges which contain information about the heat diffusion inside the sample are evaluated. In addition, this optimum evaluation time also ensures the reduction of undesired influences by convection. Due to the VWC, the pulse-echo and through-transmission methods are applicable for the Time-of-Flight determination of a virtual wave corresponding to the truncated photothermal measurements. For the calculation of the virtual wave field a local transformation is applied to the temperature data, whereby this transformation is an ill-posed 1D heat conduction problem.
To obtain an appropriate solution, the ADMM is used for regularization, which allows the inclusion of prior information. We assume positivity due to 1D heat diffusion and a sparse virtual wave field, based on the assumption of an effective medium. These assumptions lead to virtual waves containing only positive values and sharp peaks, which allow the exact localization of the back-wall echo. After reference measurements on a CFRP step wedge, where an improvement of the standard deviation compared to state-of-the-art methods in transmission and reflection configuration is shown, the VWC is applied for quantitative porosity estimation. A model for the nominal thermal diffusion time is derived from an effective medium theory based linear model for the effective thermal diffusivity. The model is validated with the aid of a large number of porosity coupons. It is shown that the results of the photothermal porosity estimation by the Virtual Wave Concept match the porosity determined by cone beam X-ray computed tomography reference measurements quite well. The overall uncertainty for photothermal porosity estimation from the VWC is highly improved for reflection configuration, since the standard deviation of the estimated thermal diffusion times is smaller than for state-of-the-art methods. Owing to these improved reflection configuration results, a future, fast and non-contacting quantitative porosity estimation from thermographic measurements will be possible even for complex shapes and hybrid structures.

Appendix: Nominal Thermal Diffusion Time Model

The model for the nominal thermal diffusion time, given in Eq. 20, is derived from the EMT-based linear model for the effective thermal diffusivity proposed by Mayr et al. [7],

α_eff(Φ) = α_0 + α_1 Φ, (A.1)

whereby in the case of porosity (Φ > 0) the effective thermal diffusivity decreases. It is defined as

α_eff = k_eff/(ρc)_eff, (A.2)

where k_eff is the effective heat conduction and (ρc)_eff the effective volumetric heat capacity, both describing a quasi-homogeneous material. The thermal diffusion time is given by t_d = L(Φ)²/α_eff, whereby both the thickness L(Φ) (Eq. 19) and the thermal diffusivity α_eff are affected by the porosity Φ. To determine the effective thermal diffusivity α_eff from the measured thermal diffusion time t_d, knowledge of the actual sample thickness is mandatory, and vice versa. A suitable way of modeling the porosity-affected thermal diffusion time results in the apparent thermal diffusivity

α_app = L_0²/t_d, (A.3)

for which there is no need to know the actual thickness of the sample. Taking into account the porosity-dependent thickness change (Eq. 19) and the linear model for the effective thermal diffusivity (Eq. A.1), the apparent thermal diffusivity is given by

α_app = (1 − Φ)² (α_0 + α_1 Φ). (A.4)

Substituting Eq. A.4 into Eq. A.3 and solving for t_d results in the nominal thermal diffusion time model

t_dn(Φ) = L_0²/[(1 − Φ)² (α_0 + α_1 Φ)]. (A.5)
7,840
2020-09-24T00:00:00.000
[ "Materials Science" ]
Uninephrectomy in rats on a fixed food intake results in adipose tissue lipolysis implicating spleen cytokines

The role of mild kidney dysfunction in altering lipid metabolism and promoting inflammation was investigated in uninephrectomized rats (UniNX) compared to Sham-operated control rats. The impact of UniNX was studied 1, 2, and 4 weeks after UniNX under mild food restriction at 90% of ad libitum intake to ensure the same caloric intake in both groups. UniNX resulted in a reduction of fat pad weight. UniNX was associated with increased circulating levels of beta-hydroxybutyrate and glycerol, as well as increased fat pad mRNA of hormone sensitive lipase and adipose triglyceride lipase, suggesting enhanced lipolysis. No decrease in fat pad lipogenesis, as assessed by fatty acid synthase activity, was observed. Circulating hormones known to regulate lipolysis, such as leptin, T3, ghrelin, insulin, corticosterone, angiotensin 1, and angiotensin 2, were not different between the two groups. In contrast, a select group of circulating lipolytic cytokines, including interferon-gamma and granulocyte macrophage-colony stimulating factor, were increased after UniNX. These cytokine levels were elevated in the spleen, but decreased in the kidney, liver, and fat pads. This could be explained by the anti-inflammatory factors SIRT1, a member of the sirtuins, and the farnesoid X receptor (FXR), which were decreased in the spleen but elevated in the kidney, liver, and fat pads (inguinal and epididymal). Our study suggests that UniNX induces adipose tissue lipolysis in response to increased levels of a subset of lipolytic cytokines of splenic origin.

Abbreviations: ASP, acylation stimulating protein; ATGL, adipose triglyceride lipase; FXR, farnesoid X receptor; GM-CSF, granulocyte-macrophage colony stimulating factor; HSL, hormone sensitive lipase; IFNγ, interferon-gamma; UniNX, uninephrectomy.

Introduction

Disease conditions such as the metabolic syndrome, diabetes, obesity, inflammation and infection are often associated with diminished kidney function. It is generally believed that this reduction in kidney function is a consequence of the progression of the disease. Recent evidence in both humans and animal models suggests that a primary reduction in kidney function may also play a role in altering metabolism (Odamaki et al., 1999; Zhao et al., 2011), inflammation and oxidative stress (Zheng et al., 2011), and hence in the pathogenesis of the disease. Previous studies of the consequences of uninephrectomy (UniNX) in Sprague Dawley rats have shown that there was no difference in body weight and no evident changes in metabolic profile and tissue pathology up to 3 months. Afterwards, pathologies start to appear, in particular deterioration of kidney function, fatty infiltration into various tissues (Zhao et al., 2008, 2011) and the progressive development of glucose intolerance (Sui et al., 2007). However, the mechanisms underlying these temporal changes, from subtle to chronic severe changes in metabolic and immune regulation, are not clearly defined. Angiotensin may play a role in the metabolic and immune changes observed in kidney disease (Amorena et al., 2001; Deferrari et al., 2002), but its contribution to early reduced kidney function remains to be determined.
More recently, studies have shown that other factors regulating metabolism and inflammation are modified by diminished kidney function in humans (Wu et al., 2006; Spoto et al., 2012) and in rodent UniNX models (Gai et al., 2014b), including the sirtuin SIRT1, the farnesoid X receptor (FXR), and inflammation and complement factors. Activation of SIRT1 and FXR can counter the metabolic syndrome by acting on lipid and glucose metabolism. We and others have recently shown that bile salts and their receptor FXR are modified by UniNX (Penno et al., 2013; Gai et al., 2014a,b; Chin et al., 2015). It has also been reported that SIRT1 may regulate FXR activity (Liu et al., 2014). Only a few studies have investigated the role of cytokines in UniNX-induced metabolic changes (Mak et al., 2006; Zhang et al., 2014). However, recent studies in mice suggest that cytokines and their signaling pathways are altered by UniNX (Zheng et al., 2011; Gai et al., 2014b). In a more severe form of reduced kidney function, 5/6 nephrectomy, cytokines have been shown to play a role in pathology (Gao et al., 2011). The role of cytokines in metabolic disease, especially concerning lipid metabolism, is complex, as the administered dose of the cytokine is important; different doses can produce different phenotypes (Feingold et al., 1992; Khovidhunkit et al., 2004). Furthermore, the source of cytokines in kidney disease (Spoto et al., 2012) may not be the same as in obesity or metabolic disease, where adipose tissues are believed to be a major source (Fruhbeck et al., 2001). In other inflammation/infection models, other tissues such as the spleen and liver can be a major source of cytokines (Arsenijevic et al., 1998; Park et al., 2010). It has been shown in chronic human kidney disease that there is an association between circulating cytokines and body weight (Pecoits-Filho et al., 2002). At both extremes of body weight perturbation, obesity and cachexia, it has been shown that cytokines can alter body composition and metabolic pathways, including protein, lipid and glucose metabolism (Johnson, 1997). Cytokines can act directly on tissue or indirectly via the brain to affect tissue metabolism (Johnson, 1997; Sanchez-Lasheras et al., 2010). In pilot studies we found that UniNX decreased fat pad weight and increased certain circulating cytokines. We therefore conducted studies to investigate whether UniNX induces changes in body composition, in particular body fat pad lipolysis and lipogenesis, under conditions of fixed food intake (90% of ad libitum intake), and whether those changes are associated with selected hormones or cytokines. We also investigated the tissue source of lipolytic cytokines and whether the anti-inflammatory tissue regulators FXR/SIRT1 were modified in tissues.

Animal Model

Male Sprague Dawley rats were purchased from Elevage Janvier (Le Genest-St-Isle, France) at 5 weeks of age with an average weight of 160 g/rat. They were housed individually in cages and given pellet food ad libitum. After a 1 week acclimation period, rats were either sham-operated or uninephrectomized (UniNX) by removal of the left kidney. One day prior to surgery, a group of eight non-operated rats was sacrificed (day 0 group).

Surgery

Rats were first anesthetized with sevoflurane and then placed on a heated mat. The left flank was shaved and swabbed with polyvidone-iodine (Braunoderm, Braun).
Anesthesia and analgesics were given i.p.: medetomidine hydrochloride (Domitor) 150 µg/kg, midazolam (Dormicum) 2 mg/kg, and fentanyl 5 µg/kg; anesthesia was reversed with atipamezole hydrochloride (Antisedan) 0.75 mg/kg, sarmazenil (Sarmasol) 0.2 mg/kg, and naloxone (Narcan) 120 µg/kg. A small incision was made in the left flank to gain access to the left kidney. The kidney was ligated with non-absorbable thread (Ethilon 4-0, Johnson-Johnson) and was then cut loose with surgical scissors. The incision sites were sutured with absorbable thread (Vicryl 3-0, Johnson-Johnson) and metal Michel suture clips (Provet, Switzerland) were applied to close the wound. Metal clips were removed after 14 days. Post-operative analgesic treatment with buprenorphine 0.05 mg/kg s.c. was given twice daily for 3 days to Sham and UniNX animals.

Diet

Most studies analyzing the impact of UniNX have been done under ad libitum fed conditions. We chose instead to put the rats under a fixed food intake (90% of the ad libitum fed diet) to ensure the same caloric intake between Sham and UniNX rats. Ad libitum feeding results in uncontrolled levels of nutrition, which can influence metabolites, hormones, inflammation and oxidative stress, the parameters of interest (Diamond, 1990). Dietary intake differences can also influence other variables such as locomotor activity and metabolic rate (Leveille and O'Hea, 1967). Therefore, fixed intake obviates some of these confounding factors encountered in ad libitum experiments. This fixed intake approach has previously been used successfully to study the mechanisms underlying body composition regulation during catch-up growth and energy balance in young Sprague-Dawley male rats (Summermatter et al., 2009). After surgery, animals were given a fixed intake of normal chow paste. Dry food powder (maintenance diet composed of 23.5% protein, 12.9% fat, and 63.6% carbohydrates as percentage of metabolisable energy; Cat. No. 3433, Provimi-Kliba, Cossonay, Switzerland) was mixed with an equal amount of tap water, prepared daily (equivalent to 90 kcal/rat), and given in food cups.

Experimental Protocol

Rats were kept in individual cages and had free access to water. The environmental temperature was maintained at 22 ± 1 °C in a room with a 12 h light/dark cycle (light 7.00 a.m.-7.00 p.m.). Body weight was measured daily before feeding (9.00-11.00 a.m.). Operated rats were sacrificed at 1, 2, and 4 weeks after surgery. At each time point, eight Sham rats and eight UniNX rats were sacrificed for collection of blood, tissue samples and animal carcasses. Rats were anesthetized with ketamine (70 mg/kg) for sacrifice, then decapitated for immediate blood collection. Animals were placed on ice for collection of peritoneal macrophages using pyrogen-free phosphate-buffered saline (see below), and small pieces of tissues were collected for analysis. All experimental protocols were approved by the Ethical Committee of the Veterinary Office of Fribourg, Switzerland.

Body Composition

For body composition analysis the rats were killed by decapitation. The skull, thorax and abdominal cavity were incised and the gut was cleaned of undigested food. The carcasses were dried in an oven maintained at 60 °C for 2 weeks, after which they were homogenized. Carcass fat content was measured by the Soxhlet fat extraction method using petroleum ether (Entenman, 1957). Body water content was determined by subtracting the weight of the animal after the 2 weeks in the oven from its weight prior to drying.
The fat-free dry mass (FFDM) was calculated as the fat mass subtracted from the dry homogenate.

RT-PCR in Epididymal/Inguinal White Adipose Tissue (EWAT/IWAT) and Liver

Total RNA was isolated as previously described (Arsenijevic et al., 1997). The RNA was then treated with DNase, after which it was reverse transcribed (Promega). Thereafter we ran RT-PCR (iQ cycler, Bio-Rad). Each sample was normalized to its cyclophilin value. For the list of primers used and their sources, see Table 2. Samples were incubated in the iCycler instrument (BioRad, iCycler iQ, Version 3.1.7050) for an initial denaturation at 95 °C for 3 min, followed by 40 cycles of amplification. Each cycle consisted of 95 °C for 10 s, 60 or 62 °C for 45 s, and finally 95, 55, and 95 °C for 1 min each. SYBR Green I fluorescence emission was determined after each cycle. The relative amount of each mRNA was quantified by using the iCycler software. Amplification of specific transcripts was confirmed by the melting curve profiles generated at the end of each run. Cyclophilin was used as the control for each study and the relative quantification for a given gene was normalized to cyclophilin mRNA values. Note that as representative of subcutaneous white adipose tissue (SWAT) we used inguinal fat (IWAT) for PCR, western blot and other analyses.

Western Blot Analysis

Western blots on protein extracts from pulverized tissue were performed as previously described (De Bilbao et al., 2009).

Lipogenic Enzyme Activity Assays

Fatty acid synthase (FAS) activity was measured according to a method described by Penicaud et al. (1991). The frozen white adipose tissue pads were homogenized on ice in four volumes of freshly prepared polyethylene glycol buffer, pH 7.3 (100 mmol/l KH2PO4, 5 mmol/l EDTA, and 1.5 mg/ml glutathione in reduced form). After centrifugation, these extracts were assayed using 15 µl of extract in 1.7 ml of FAS buffer (50 mmol/l K-phosphate stock solution, pH 6.8, and 0.1 mg/ml NADPH) and using a spectrophotometer set at 340 nm and 37 °C. The readings were performed by sequentially adding 15 µl of extract in 1.7 ml of FAS buffer to the cuvettes, followed by 10 µl of 7.5 mmol/l acetyl-CoA, and then 10 µl of 8 mg/ml malonyl-CoA.

Macrophage Intracellular ROS Production

Reactive oxygen species (ROS) production was measured in isolated macrophages by measuring their ability to reduce nitro blue tetrazolium. Peritoneal macrophages were isolated from the peritoneal cavity of Sham and UniNX rats (n = 8) with ice-cold pyrogen-free phosphate-buffered saline (PBS). After being centrifuged and washed with PBS three times, macrophages were counted, plated at 100,000 per well in 96 well plates, and allowed to adhere by incubating at 37 °C for 30 min. After this period a solution of nitro blue tetrazolium with 5% glucose in PBS was added and incubated for a further 3 h at 37 °C. The supernatant was removed and the cells gently washed with PBS 3 times. Cells were then fixed with 70% methanol and allowed to dry. Formazan was solubilized with 2 M KOH and dimethyl sulphoxide. The absorbance was determined at 630 nm (Arsenijevic et al., 2000).

Data Analysis

All data are presented as means ± SE. Statistical analyses were performed using Kruskal-Wallis one-way non-parametric ANOVA or Mann-Whitney (non-parametric) tests for two-sample comparisons. A value of p < 0.05 was considered significant. * p < 0.05, ** p < 0.01 and *** p < 0.001.
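For illustration, group comparisons of this kind can be reproduced with SciPy's non-parametric tests. This is a minimal sketch with randomly generated placeholder data (n = 8 per group, as in the study), not the authors' analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sham = rng.normal(1.0, 0.2, 8)    # placeholder values for one variable, n = 8
uninx = rng.normal(1.4, 0.2, 8)   # placeholder values, n = 8

# Two-sample comparison (Mann-Whitney U, two-sided)
u_stat, p_mw = stats.mannwhitneyu(sham, uninx, alternative="two-sided")
# Non-parametric one-way ANOVA across groups (Kruskal-Wallis)
h_stat, p_kw = stats.kruskal(sham, uninx)

print(f"Mann-Whitney p = {p_mw:.4f}; Kruskal-Wallis p = {p_kw:.4f}")
```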
Uninephrectomy Effect on Kidney Function

Left nephrectomy resulted in hypertrophy of the remaining right kidney (Figure 1A), which was 38% heavier than the right Sham kidney at week 4. A mild reduction in kidney function is reflected by the increased plasma cystatin C and urea levels (Figures 1B,C).

Body Weight, Body Composition, and Organ Weight

Over the 4 week period, there was no significant difference in body weight between the UniNX and Sham groups (Figure 2A). However, UniNX animals had a tendency to weigh less than Sham animals. Body composition analysis showed that body water, dry body weight and FFDM (Figures 2B-D) did not differ significantly between the two groups over the 4 week period. Total body fat was significantly reduced only by week 4 in the UniNX group (Figure 2E), which was reflected in the fat to FFDM ratio (Figure 2F). Fat pad weights were significantly decreased during the course of the 4 weeks. In general, the UniNX group had significantly lower epididymal, mesenteric, subcutaneous and retroperitoneal fat pad weights than their Sham counterparts (Figures 3A-D). These significant decreases in fat mass were not associated with a significant increase in FFDM, although there was a tendency for FFDM to be higher in the UniNX rats. This small increase may be explained by increases in non-muscle tissues such as the spleen (Sham 1.12 ± 0.14 g/rat and UniNX 1.44 ± 0.15 g/rat, p < 0.05) and the gastrointestinal tract (intestines: Sham 7.25 ± 0.83 g/rat vs. UniNX 8.50 ± 0.76 g/rat, p < 0.01; stomach: Sham 1.52 ± 0.17 g/rat and UniNX 1.82 ± 0.17 g/rat, p < 0.01). No significant differences between Sham and UniNX were seen for the liver weights.

Blood Lipid Metabolites

Plasma triglyceride concentrations showed a transient increase after UniNX, declining after 1 week, so that by week 4 UniNX triglycerides were similar to Sham levels (Figure 4A). Total blood cholesterol and high density lipoprotein (HDL) levels were not significantly different between the Sham and UniNX groups (Figures 4B,C). However, from week 2 to week 4, free fatty acids in the UniNX group were reduced compared to the Sham group (Figure 4D). Blood β-hydroxybutyrate, a product of fatty acid oxidation, showed a marked increase from week 2 to week 4 (Figure 4E). Circulating glycerol, a product of lipolysis, was persistently elevated over the 4 weeks (Figure 4F).

Lipid Metabolism Assessment by RT-PCR and Western Blots in Tissues

Hormone sensitive lipase (HSL) and adipose triglyceride lipase (ATGL) mRNA (Figures 5A-D) were elevated in the EWAT and IWAT fat pads over the 4 weeks in the UniNX group. Fatty acid synthesis, as determined by FAS activity, was similar in the Sham and UniNX groups in the EWAT and IWAT fat pads (Figures 5E,F). The free fatty acid transporter CD36 mRNA in the liver was higher from week 2 to week 4 in UniNX animals than in Sham animals (Figure 6A). In addition, we also observed increased CD36 mRNA in selected UniNX tissues (by 142% in the kidney and by 79% in the gastrocnemius muscle). Interscapular brown adipose tissue (IBAT) thermogenic uncoupling protein 1 (UCP1) protein levels showed no differences between Sham and UniNX at 4 weeks (Figure 6B).

Serum and Tissue Cytokines

IL1α, IL1β, IL1RA, IL6, IL10, ASP, CRP, EPO, TNFα, GM-CSF, IFNγ, and neopterin were measured in serum after UniNX. Serum IL1α, GM-CSF, EPO, IFNγ, and ASP were all higher in UniNX than in Sham controls from week 1 to week 4 (Figure 8).
Neopterin, a specific indicator of IFNγ-activated macrophages, was also higher over the 4 week period in the UniNX group than in Sham controls. CRP, an indicator of the liver inflammation state, was lower in the UniNX group from week 2 to week 4 (Figure 8). Four selected cytokines (TNFα, IL6, GM-CSF, and IFNγ) were measured in various tissues at week 4, as shown in Figure 9. In most tissues, UniNX decreased tissue cytokine protein levels compared to the Sham group. The only tested UniNX tissue that showed a marked increase in these cytokines was the spleen. Peritoneal macrophage ROS production was doubled in UniNX rats (Figure 9), reflecting immune activation.

Tissue SIRT1 and FXR Protein Levels

Since SIRT1/FXR have anti-inflammatory properties, we determined whether their levels were modified by UniNX. At week 4, SIRT1 and FXR protein levels in IWAT, EWAT, kidney, and liver were higher in UniNX animals than in Sham controls. In sharp contrast, SIRT1 and FXR were lower in the UniNX spleen (Figure 10).

Discussion

Compared to Sham controls, UniNX in young male rats resulted in a mild reduction in kidney function, as judged by the chronic elevation of circulating cystatin C and urea over a 4 week period. No significant differences in body weight were observed between the Sham and UniNX groups. However, UniNX reduced fat pad weight, and this decrease was also evident in total body fat content as determined by body composition analysis 4 weeks post UniNX. The decreased fat pad weight could not be attributed to differences in food intake, since we used fixed intake feeding; in addition, it could not be explained by reduced FAS activity, as no difference in the activity of this enzyme was found between the two groups in inguinal and epididymal fat pads. Since UCP1 protein levels were not different between the two groups, increased brown adipose tissue thermogenesis does not appear to be involved in the lower body fat content following UniNX. Analysis of plasma lipid metabolites revealed that glycerol was chronically elevated over the 4 weeks after UniNX, suggesting enhanced lipolysis. Indeed, increased ATGL and HSL lipase mRNA levels were found in IWAT and EWAT. Although one might have expected circulating triglycerides and fatty acids to increase in plasma in association with the increased lipolysis, this was not observed, possibly because of increased lipid clearance. Indeed, fatty acid transporter CD36 mRNA was elevated in the liver and in selected tissues (kidney and gastrocnemius). This may explain, at least in part, the previously reported findings (Zhao et al., 2008, 2011) that UniNX led to excessive fatty infiltration and lipid accumulation in tissues (as determined by histology), albeit at time points greater than 3-6 months. Since we did not observe fatty infiltration in liver and kidney in our shorter, 4 week study, intracellular lipids are likely handled in a different manner in our time frame. They may be metabolized more rapidly, but not completely oxidized, as suggested by the higher circulating β-hydroxybutyrate and the lack of increased UCP1 in brown adipose tissue. Our data showed that circulating levels of hormones that regulate energy expenditure and body fat, such as leptin, T3, insulin, ghrelin, angiotensin 1 and angiotensin 2, were not significantly different between the two groups over the 4 week period.
Hence, these hormones are unlikely to explain the activation of lipolysis (i.e., the elevated circulating glycerol levels and increased fat pad ATGL and HSL lipase levels). A similar lack of differences in hormones has been found after UniNX in ad libitum standard chow-fed rats in the first 6 months after UniNX (Zhao et al., 2011). Other potential candidates for increasing lipolysis are cytokines. Of the increased circulating cytokines, IFNγ is of particular interest since its elevation induces both lipolysis and increases in circulating ketone bodies in vivo (Khovidhunkit et al., 2004), which is what we observe in the UniNX group. Furthermore, our in vivo data reveal increased circulating neopterin and increased macrophage ROS production, which are both IFNγ-dependent. IFNγ has also been shown to increase lipid metabolism in vitro in adipocytes (Waite et al., 2001) and kidney mesangial cell culture (Hao et al., 2013), and in whole animal studies (Feingold et al., 1992). We showed that other circulating cytokines known to induce lipolysis, such as ASP, TNFα, and IL1α, were also elevated. GM-CSF and erythropoietin have not been shown to directly mediate lipolysis, but they can clearly regulate body weight and fat in rodent models (Reed et al., 2005; Lee et al., 2008; Meng et al., 2013; Alnaeeli et al., 2014). Interestingly, we showed that UniNX resulted in an anti-inflammatory state in most tissues, and this was associated with reduced cytokines in tissues such as liver, kidney and fat pads. Recently, it has been shown that in mouse UniNX models, tissues including fat pads show a reduced inflammatory state (Sui et al., 2010; Chin et al., 2015). In our study, in contrast, IFNγ and GM-CSF protein levels were increased in the UniNX spleen, suggesting that the increased circulating levels of these cytokines may arise from the spleen. A role for cytokine production by the spleen after kidney removal has been shown in mice (Andres-Hernando et al., 2012). Furthermore, nephrectomy can activate immune cells in the spleen (Lukacs-Kornek et al., 2008). In human kidney donors, activation of cytokine signaling pathways through STATs and SOCS has been shown to occur (Xu et al., 2014). Since we had previously shown increases in bile salts following UniNX, we chose to investigate whether the bile salt receptor FXR (Penno et al., 2013; Gai et al., 2014a) and its potential regulator SIRT1 (Garcia-Rodriguez et al., 2014) were altered in various tissues. Here we show that both were modified in tissues by UniNX. These two factors can regulate not only metabolism but also inflammation. We observed an inverse relationship between tissue cytokine levels and tissue anti-inflammatory FXR/SIRT1 protein levels. The higher the tissue cytokines (in spleen), the lower the FXR/SIRT1 protein levels, and conversely, the lower the cytokines (in adipose tissue, liver, kidney), the higher the FXR/SIRT1 levels.
Although we have previously shown that the bile salt receptor FXR showed a tendency to be elevated at the mRNA level in the liver (Gai et al., 2014a), we now provide evidence that UniNX may increase FXR protein levels in liver, kidney and IWAT. Whether these increases represent the active form of FXR warrants further studies. Interestingly, SIRT1 may affect the activity of various other signaling pathways by modifying the acetylation state of regulatory proteins, including STATs (Liu et al., 2014). The age-dependent fatty infiltration of tissue could also potentially be attributed to decreased SIRT1, which is known to be down-regulated with age and is considered responsible for age-related metabolic changes (Kitada et al., 2013). It would therefore be of interest to determine whether these age effects of UniNX, pathological fat infiltration and increased tissue inflammation, are associated with decreases in SIRT1. The increases in FXR and SIRT1 levels found here in non-immune tissues also support our finding of a leaner phenotype following UniNX. In summary, our study shows that, under conditions of a fixed intake of normal chow, young male rats that have undergone UniNX had lower body fat. This was associated with enhanced lipolysis and was paralleled by increases in subsets of circulating cytokines rather than changes in circulating hormone levels. Of the measured cytokines, IFNγ appears to be the best candidate for explaining the body composition changes after UniNX, based on the in vivo markers of IFNγ activation (increased circulating neopterin and β-hydroxybutyrate and increased macrophage ROS production). Further studies are required to determine whether these cytokines, and especially IFNγ, act directly on peripheral tissue or indirectly via the brain. Support for kidney-brain interactions has been shown in chronic kidney disease induced by 5/6 nephrectomy (Mak et al., 2006; Cheung and Mak, 2012), which results in wasting/cachexia. The altered body composition with loss of body fat and lean mass after 5/6 nephrectomy (Cheung et al., 2008; Cheung and Mak, 2012) implicates cytokines and central melanocortin 4 receptor (MC4R) pathways. However, the neuronal circuits involved, and whether these neurons have receptors for cytokines, remain to be demonstrated.
5,895.4
2015-07-10T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Bayesian Methods for Inferring Missing Data in the BATSE Catalog of Short Gamma-Ray Bursts

The knowledge of the redshifts of Short-duration Gamma-Ray Bursts (SGRBs) is essential for constraining their cosmic rates and thereby the rates of related astrophysical phenomena, particularly Gravitational Wave Radiation (GWR) events. Many of the events detected by gamma-ray observatories (e.g., BATSE, Fermi, and Swift) lack experimentally measured redshifts. To remedy this, we present and discuss a generic data-driven probabilistic modeling framework to infer the unknown redshifts of SGRBs in the BATSE catalog. We further explain how the proposed probabilistic modeling technique can be applied to newer catalogs of SGRBs and other astronomical surveys to infer the missing data in the catalogs.

Introduction

The discovery of the first Gravitational Wave Radiation (GWR) event in 2015 [1,2] and the first joint detection of gravitational and electromagnetic radiation from a binary neutron star merger in 2017 [3] have revolutionized multimessenger astronomy and resulted in the 2017 Physics Nobel Prize. These observations have given credibility to the hypothesis of binary neutron star (BNS) mergers as the primary progenitors of Short Gamma-Ray Bursts (SGRBs). Other channels of SGRB formation include neutron star-black hole or binary black hole mergers. Unlike Long Gamma-Ray Bursts (LGRBs), which are believed to be due to the collapse of supermassive stars and are frequently associated with core-collapse supernovae [4], the light curves of SGRBs comprise short, hard, intense gamma-ray pulses that generally last a few milliseconds to seconds. A question of great interest in the field of GWR astronomy concerns the cosmic rates of GWRs and the fraction of detectable events with the current and planned GWR detectors [5]. LGRBs and SGRBs are among the major sources of GWR events. Therefore, knowledge of the cosmic rates of GRBs can yield tight constraints on the cosmic rates of GWR events. The challenge, however, lies in quantifying the population properties and the cosmic rates of GRBs. Unraveling the physics and the intrinsic properties of GRBs requires knowledge of their distances from Earth, represented by the quantity known as the cosmological redshift (z). Despite significant progress made over the past two decades, most events detected by the existing gamma-ray observatories lack measured redshifts. For example, less than 4% of the entire catalog of >3000 GRBs detected by the Gamma-ray Burst Monitor (GBM) onboard the Fermi Gamma-ray Space Telescope have measured redshifts, and even fewer of those with measured redshifts belong to the class of SGRBs, a primary GWR formation channel. While there seems to exist a consensus on the range of the redshift distributions of LGRBs and SGRBs [6-9], the shapes of the distributions and the knowledge of the redshifts of individual events remain hotly debated [9] and elusive. Previous studies have already utilized the phenomenologically discovered prompt gamma-ray correlations to derive pseudo-redshifts for a certain fraction of events in the second-largest catalog of GRBs available to this date, homogeneously detected by the BATSE Large Area Detectors onboard the now-defunct Compton Gamma-Ray Observatory [10-13]. More recent studies have applied similar concepts and ideas to the problem of estimating the unknown redshifts of GRBs in the Fermi-GBM catalog [14,15].
These methods, however, can lead to highly biased estimates of the unknown redshifts of GRBs when the discovered high-energy correlations result from a small calibration sample. The calibration samples are typically the brightest detected GRBs, for which redshift measurements have been feasible. Such small samples are often collected from multiple heterogeneous surveys and potentially represent neither the unobserved intrinsic cosmic population of GRBs nor the complete observed sample (with or without measured redshifts). More importantly, the potential effects of the detector threshold and sample incompleteness on the proposed phenomenological correlations are poorly understood. These unknown effects and systematic biases manifest themselves in predicted redshift values that are highly inconsistent with the estimates obtained from other independent methods, examples of which are well studied in the literature [16-19]. In this article, we propose and lay out the details of a data-driven approach to reconstructing the missing redshift and other information in GRB catalogs. The proposed methodology is generic, is applicable to any astronomical or other type of dataset, and opens the avenue toward further quantification of the answers to some of the major questions in GRB research, including but not limited to: What are the cosmic rates of Short- and Long-duration GRBs? Can the observational properties of GRBs be used as cosmological standard candles? Is there a reliable indirect method of inferring the redshifts of GRBs in real time? Are there any deviations of the cosmic rates of LGRBs from the Star Formation Rate? To explain the proposed probabilistic framework, we particularly focus on the BATSE catalog of 565 SGRBs due to the simplicity of the BATSE triggering mechanism. Despite the extremely large uncertainties in the BATSE SGRBs' observational data, we show the feasibility of setting reasonable constraints on the individual redshifts of most BATSE SGRBs. Although we frequently refer to the Fermi-GBM and Swift-BAT catalogs of GRBs in discussions, we leave a complete treatment of these catalogs to future studies. This study is a continuation of a previous similar study of the BATSE LGRBs catalog [1].

Methodology

Our proposed approach to reconstructing the missing data in GRB catalogs comprises three major steps. Firstly, we describe the probabilistic foundations of our proposed Bayesian approach to inferring missing data from the available observational catalog(s) combined with any prior knowledge from independent resources. Such techniques should naturally incorporate all sources of uncertainty into the analysis. Secondly, we develop a detailed minimally biased model of the detection process of GRBs. Such careful mathematical modeling of gamma-ray photon detectors is crucial for an accurate study of the population properties of GRBs and the estimation of their cosmic rates. Lastly, we introduce a probabilistic framework for calibrating, validating, and selecting the most plausible model for the population properties of the prompt gamma-ray emission of GRBs, which can subsequently be used to infer the unknown missing data in GRB catalogs, most importantly the unknown redshifts of the individual GRBs. A more detailed explanation of this process is given in Sections 2.1-2.3.

Development of the Statistical Techniques

The essence of the proposed approach to estimating the missing data, specifically the unknown redshifts of GRBs, is summarized in Figure 1.
For illustration, consider a toy model where the observed properties of one GRB class (e.g., SGRBs) are collectively represented by the blue-colored lines in this figure. Each blue line represents one GRB event. If we knew the redshifts of these individual events, represented by the black lines in the middle subplot, we would be able to determine, with high precision, the intrinsic properties of all detected GRBs, represented by the red lines in the top subplot. The background shaded areas in the top and middle subplots represent the generating distributions of the corresponding lines in the foreground. Two approaches to inferring the cosmic rates of GRBs and individual redshifts can be taken, depending on the state of knowledge of individual GRB redshifts in a given GRB catalog.

Modeling Scenario 1

In the presence of some GRBs with redshifts, as is the case with the Fermi Gamma-ray Burst Monitor (GBM) [20] and Swift catalogs [21], we can use this partial knowledge of individual redshifts to infer the overall cosmic rates of GRBs (the gray distribution in Figure 1), the intrinsic population distributions of GRBs (the red distribution), and the posterior predictive distributions of the unknown redshifts of individual catalog GRBs (the black lines). This approach is particularly feasible for LGRB catalogs with a non-negligible number of events with measured redshifts. For example, there are currently >120 LGRBs with redshifts and >2350 LGRBs without redshifts in the Fermi catalog. Previous modeling attempts [8], based on only 120 Swift LGRBs with known redshifts and 205 LGRBs without redshifts, have successfully shed more light on the intrinsic cosmic rates of LGRBs. Therefore, one expects the order-of-magnitude-larger sample size of the Fermi catalog to lead to significantly more robust LGRB rate inferences and a more stringent comparison with the existing models of the Star Formation Rate (SFR). LGRBs with missing redshifts can be readily incorporated into such analysis via Bayesian marginalization [22].

Modeling Scenario 2

In the absence of any redshift knowledge, as is the case with the BATSE catalog [1,23] and most SGRB catalogs, we can still adopt a multilevel Empirical Bayesian methodology to estimate both the redshifts and the intrinsic properties of individual GRBs probabilistically. This novel approach may resemble magic, as it enables us to infer two unknowns (the red distribution and the black lines in the subplots of Figure 1) from a single known quantity (the blue-colored lines). However, the power of the method stems from our ability to include independent prior knowledge of the overall redshift distribution of GRBs in the analysis. This prior knowledge is represented by the gray distribution in the middle subplot of the figure. It can be chosen to be any plausible intrinsic GRB cosmic rate scenario. For LGRBs, the prior could be set to the most recent SFR models in the literature [24,25], or the hotly debated LGRB cosmic rate models that deviate from the SFR at low redshifts [9,26-28] or at high redshifts [8,29]. In the case of the Fermi catalog, the available 120 spectroscopically and photometrically measured redshifts can be compared with the corresponding predicted redshifts using this Empirical Bayesian approach. Finally, quantifying and tabulating the results of this comparison via simple linear correlation measures for different prior models enables us to identify the most plausible cosmic rate model for GRBs.
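A toy sketch of the scenario-2 (Empirical Bayesian) idea for a single burst: combine a rate-based redshift prior with a log-normal luminosity population to obtain a posterior over z from the observed peak flux alone. All numerical choices here (Planck18 as the ΛCDM stand-in, the prior shape, and the luminosity-function parameters) are illustrative assumptions, not the fitted model; a recent astropy is assumed.

```python
import numpy as np
from astropy.cosmology import Planck18
from scipy.stats import norm

z = np.linspace(0.05, 8.0, 400)
dl_cm = Planck18.luminosity_distance(z).to("cm").value

# Prior: comoving-volume-weighted rate with cosmological time dilation,
# a stand-in for an SFR-like cosmic rate density.
prior = Planck18.differential_comoving_volume(z).value / (1.0 + z)
prior /= np.trapz(prior, z)

# Likelihood of one burst's observed bolometric peak flux [erg/s/cm^2]
# under an assumed log-normal L_iso population (illustrative parameters).
p_bol = 1e-7
log_liso = np.log10(4.0 * np.pi * dl_cm ** 2 * p_bol)
like = norm.pdf(log_liso, loc=51.0, scale=0.8)

posterior = like * prior
posterior /= np.trapz(posterior, z)
```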
Modeling the Population and the Cosmic Rates of SGRBs

Except for a handful, most SGRBs in the BATSE, Fermi, Swift, and other SGRB catalogs lack redshifts. Therefore, the modeling approach of scenario 1 would lead to degenerate solutions for the rates of SGRBs due to data scarcity. The modeling approach of scenario 2 is nevertheless identically applicable to SGRBs. There is now strong observational and theoretical evidence for binary neutron star mergers as the primary progenitors of SGRBs. It is, therefore, reasonable to assume that the cosmic rate of SGRBs follows that of LGRBs or the SFR, convolved with appropriate merger delay time distributions (a sketch of this convolution is given below). Population synthesis simulations [30-34] already provide theoretical estimates of merger delay time distributions independently of observational data. This is the only additional complexity in modeling the rates and the population properties of SGRBs compared to LGRBs.
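As an illustration of this convolution, the sketch below folds a toy SFR-like rate density with a power-law delay-time distribution. The functional forms and parameters (minimum delay 20 Myr, index −1) are placeholder assumptions, not results from the cited population-synthesis studies.

```python
import numpy as np

t = np.linspace(0.05, 13.8, 1000)   # cosmic time [Gyr]
dt = t[1] - t[0]

def sfr_like(t):
    """Toy star-formation-like rate density versus cosmic time."""
    return (t / 3.0) ** 2 / (1.0 + (t / 3.0) ** 5)

def delay_pdf(tau, tau_min=0.02, index=-1.0):
    """Power-law merger delay-time distribution, normalized on the grid."""
    p = np.where(tau >= tau_min, tau ** index, 0.0)
    return p / np.trapz(p, tau)

# SGRB rate ~ SFR-like rate convolved with the delay-time distribution.
rate_sgrb = np.convolve(sfr_like(t), delay_pdf(t), mode="full")[:t.size] * dt
```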
These four observer-frame attributes are readily available for all BATSE and Fermi-GBM GRBs, and for many of the Swift-BAT GRBs, and can be mapped to the corresponding GRB rest-frame properties via

bolometric isotropic peak luminosity: L_iso = P_bol × 4π d_L(z)²,
total bolometric isotropic emission: E_iso = S_bol × 4π d_L(z)² / (z + 1),
intrinsic time-integrated spectral peak energy: E_pz = E_p × (z + 1),
intrinsic prompt-emission duration: T_90z = T_90 / (z + 1),

where d_L(z) represents the cosmological luminosity distance as a function of redshift z, which can be readily computed for a given choice of cosmology (e.g., ΛCDM). Taking the logarithm of both sides of all equations, we obtain a set of linear mappings from the rest-frame to the observer-frame properties of GRBs in logarithmic space. Therefore, the observed distributions of the four GRB properties result from the convolution of the distributions of the corresponding rest-frame GRB properties with the distributions of the logarithms of the mathematical terms that are exactly determined by z (i.e., d_L(z) and z + 1). The larger the variances of these redshift-dependent terms (relative to the variances of the distributions of intrinsic GRB attributes), the more the observed properties of GRBs are shaped by redshift rather than by the intrinsic properties. The joint 4-dimensional distribution of the observed properties of SGRBs and LGRBs in both the BATSE and Fermi catalogs strongly resembles a multivariate log-normal distribution severely censored by the gamma-ray detection thresholds [7,36,37]. Such an orderly distribution shape in the observer frame implies an even more orderly log-normal shape for the joint distribution of the intrinsic attributes of SGRBs and LGRBs in the rest frame. Therefore, we will consider the multivariate log-normal as our primary statistical model describing the intrinsic population distribution of the prompt-emission properties in both GRB classes. Nevertheless, it is common practice in astronomy to model the extensive properties (e.g., energetics and luminosity) of celestial objects with power-law distributions [9]. Therefore, we could also consider scenarios where the distributions of the GRB energetics (L_iso and E_iso) are jointly modeled as power laws, combined with log-normal distributions for the intensive GRB properties (E_pz and T_90z). This would be particularly feasible for modeling the population distribution of LGRBs, for which ample data and redshift information are available. Notably, we use the isotropically computed energetics of GRBs in this model. In reality, these isotropic values must be corrected by the beaming factor measured for each GRB. This observational information is seldom available. Nevertheless, we do not expect the lack of beaming-factor correction to have any significant effect on the accuracy of the final results, since the available theoretical and observational evidence points to narrow widths for the distributions of the beaming angles of the jets within each class of GRB [38-42]. In other words, such beaming-factor corrections to the GRB energetics, even if possible, would only shift the entire GRB dataset linearly in the logarithmic space of L_iso and E_iso. That is, the resulting variations due to the beaming corrections would be orders of magnitude smaller than the intrinsic variability in the energetics of LGRBs and SGRBs.
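As a concrete (though simplified) illustration of these mappings, the sketch below computes the four rest-frame properties with astropy, assuming the flat ΛCDM parameters quoted later in the text (h = 0.70, Ω_M = 0.27); units are indicated in the comments.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Flat LambdaCDM with the parameters quoted in the text (Omega_L = 0.73 implied).
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)

def rest_frame(P_bol, S_bol, E_p, T90, z):
    """Map observer-frame (P_bol, S_bol, E_p, T90) to rest-frame properties."""
    d_L = cosmo.luminosity_distance(z).to("cm").value   # luminosity distance [cm]
    L_iso = P_bol * 4 * np.pi * d_L**2                  # erg/s (P_bol in erg/s/cm^2)
    E_iso = S_bol * 4 * np.pi * d_L**2 / (1 + z)        # erg   (S_bol in erg/cm^2)
    E_pz = E_p * (1 + z)                                # keV
    T90z = T90 / (1 + z)                                # s
    return L_iso, E_iso, E_pz, T90z
```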
GRB Detector Threshold Model

An accurate estimation of the cosmic rates of GRBs also requires incorporating a detailed, minimally biased model of the detection threshold of the GRB detectors [37,43,44]. By design, some GRB detectors are significantly more difficult to simulate than others. For example, the Swift Burst Alert Telescope is well known for its immensely complex triggering algorithm. It comprises at least three separate detection mechanisms [45] that complement each other:

1. The first type of trigger is for short timescales (4 ms to 64 ms). These are traditional (single-background) triggers, for which about 25,000 combinations of time-energy-focal-plane subregions are checked per second.

2. The second type of trigger is similar to that of the HETE detectors [46]: fits to multiple background regions are made to remove trends for timescales between 64 ms and 64 s. About 500 combinations for these triggering mechanisms are checked per second. For these rate triggers, false triggers and variable non-GRB sources are also rejected by requiring a new source to be present in an image.

3. The third type of trigger works on longer timescales (minutes) and is based on routine images made of the field of view.

By contrast, the Fermi-GBM detection mechanism is relatively similar to that of its predecessor, the BATSE LADs. The GBM can trigger upon detecting GRBs at several independent timescales from 16 ms to 8.192 s. A naive approach to modeling the Fermi-GBM triggering mechanism would be to use the sensitivity measurements of the detector determined in laboratory settings by the Fermi team. The modeling of the BATSE and Swift catalogs, however, provides evidence against such an approach [7,8,37,44]. This is because the operational sensitivities of gamma-ray photon counters tend to differ from the sensitivities measured in isolated laboratory environments. Additionally, GRB catalogs do not form a complete sample with respect to the detection thresholds of the relevant gamma-ray detectors. Such simple methods of detector threshold modeling can easily lead to biases in GRB rate estimates that show surprising deviations from the expected GRB rates [1,47]. As an alternative, the effects of the gamma-ray detectors can be accounted for by modeling the detection threshold as part of the modeling of the GRB properties [7,37]. For example, to remove the strong dependence of the detection threshold of Fermi-GBM on the timescale used for the definition of a GRB's peak flux, we can define an effective, timescale-free peak flux for all BATSE and Fermi GRBs. The existence of such an effective peak flux is noticeable in the plot of the ratio of GRB peak fluxes at different timescales as a function of the observed durations. Figure 3 illustrates the strong dependence of the peak flux ratios at the 64 ms and 1024 ms timescales on the durations of GRBs, measured by T_90, in the BATSE and Fermi catalogs. The two GRB classes are segregated via fuzzy classification methods [37,48,49] applied to the E_p and T_90 of the events in both catalogs [50]. We have already shown [7,37] the utility of this relationship in removing the effects of varying-timescale detector thresholds from the BATSE catalog. A similar procedure can be adopted for modeling the Fermi-GBM and Swift-BAT gamma-ray detector thresholds. In the case of Fermi-GBM, the difference is minimal and amounts only to a wider range of triggering timescales (0.016 s to 8.192 s) compared to that of BATSE (64 ms to 1.024 s).
Furthermore, a realistic modeling of the detection threshold requires modeling the inherent fluctuations in the background gamma-ray photon counts as a Poisson process [22,37], leading to a fuzzy detection threshold at any given triggering timescale. This approach properly includes all GRBs in any catalog, down to the faintest events.

Model Calibration, Validation, and Selection

Once we implement the mathematical modeling approach described in Section 2.1 and the detection threshold model laid out in Section 2.2, we can combine them to obtain probabilistic models for the observed rates of LGRBs and SGRBs in the catalog of interest. The two LGRB and SGRB world models collectively yield the probability of observing the entire GRB catalog for a given set of input parameters. The most plausible parameters can be obtained by maximizing the likelihood of observing the catalog. The last challenge lies in the maximization of the multivariate likelihood function resulting from the probabilistic models. This is due to the complex dependencies of the photon-counting detection mechanism of gamma-ray detectors, which operates in a limited energy window at particular timescales, on the intrinsic peak luminosity (L_iso), hardness (E_pz), duration (T_90z), and redshift (z) of each GRB. Notably, the BATSE LADs and Fermi-GBM have a nominal triggering energy window of 50-300 keV and trigger only on particular timescales, as noted previously. Furthermore, the intrinsic fuzziness of the detection threshold (due to background fluctuations) creates a nontrivial 5-dimensional fuzzy cut through the constructed probabilistic GRB world models. Thus, computing the probability of detection of GRBs for each input parameter set requires recomputing the probabilistic model's normalization factor. In other words, calculating the likelihood of each parameter set of the models requires solving a 5-dimensional integration in the space of GRB properties and gamma-ray detector characteristics. Previous similar studies with the BATSE catalog data indicate that each computation of this multidimensional integral takes on the order of 100-1000 milliseconds [1,7,22,37]. The incorporation of data uncertainties in catalogs larger than the BATSE catalog (such as Fermi-GBM) into this analysis, via the Bayesian methods detailed above, will likely increase this computational cost by 1-2 orders of magnitude. As such, sampling the posterior probability density of the parameters of the GRB world models requires parallel Monte Carlo sampling algorithms. Existing software, such as the ParaDRAM and ParaNest algorithms of the ParaMonte library [51-54] or MultiNest [55], is capable of distributing multiple simultaneous calculations of the objective function across processors in parallel. In the presence of multiple competing GRB world models, Bayesian probability theory offers a natural method of comparing and ranking competing probabilistic world models for GRBs. This can be achieved by computing the plausibility of each model according to the Bayes rule. Consider, for example, the Bayesian problem of selecting the best model from a set of m competing models M = {M_1, ..., M_m}, all capable of describing the available data D. For each model M_i in the set, the posterior distribution of the parameters can be written as

π(θ | D, M_i) = π(D | θ, M_i) π(θ | M_i) / π(D | M_i) ,    (1)

where π(·) denotes a probability density function and θ represents the set of unknown parameters of the model M_i.
One can integrate the Bayes rule (Equation (1)) over the entire parameter space Θ of the model and rearrange the equation to obtain

π(D | M_i) = ∫_Θ π(D | θ, M_i) π(θ | M_i) dθ .    (2)

Equation (2) provides a method of computing the denominator of the Bayes rule in Equation (1), which, by definition, is the likelihood of observing data D under a given model M_i. It is sometimes called the marginal likelihood, since it is calculated by marginalizing the likelihood function over the entire parameter space of the model. More frequently, however, it is known as the Bayesian evidence, the model evidence, or simply the evidence. The utility of the evidence goes beyond serving as a normalizing constant in the Bayes rule, as it can be used for the calculation of the Bayesian plausibility in yet another rewriting of the Bayes theorem, this time for the set of models M:

π(M_i | D) = π(D | M_i) π(M_i) / Σ_{j=1}^{m} π(D | M_j) π(M_j) .    (3)

Equation (3) gives the posterior probability of the ith model M_i in the set of all rival models M, given the prior probability knowledge π(M_i) about model M_i being correct. It is called the Bayesian plausibility, since it provides a measure of the plausibility of the model assumptions in light of the available data and prior knowledge about the model. In the case of complete prior ignorance about all competing models, the Jaynes principle of maximum entropy [56] dictates the assignment of equal uniform prior probabilities to each of the competing models. Despite the mathematical simplicity of Equation (3), its computation is frequently a challenging task. Nevertheless, the computation of the Bayesian plausibility can be greatly simplified via analytical approximations to Equation (3) that are valid only under certain assumptions and asymptotic behaviors, such as in the limit of large datasets or when the posterior distribution of the parameters of the models can be well approximated by a multivariate normal distribution. In such cases, approximate methods such as the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) offer simple, elegant solutions to model comparison [50,57]. In the special case of modeling GRB catalogs, however, the full numerical computation of the Bayesian plausibility for the competing models might be necessary. The aforementioned Monte Carlo sampling and integration tools can handle such intricate, computationally expensive numerical integrations.

Application to the BATSE Catalog of SGRBs

We now present an implementation of the Bayesian probabilistic approach to reconstructing the missing data (e.g., redshifts) of BATSE SGRBs.

Observational Data

We follow the same fuzzy clustering procedure applied to the BATSE catalog data as described in Shahmoradi and Nemiroff [7] to segregate SGRBs from LGRBs in the BATSE catalog. The resulting sample of 565 SGRBs that we obtain and use in this study is identical to that of Shahmoradi and Nemiroff [7]. Following Shahmoradi [37], Shahmoradi and Nemiroff [7], and Osborne et al. [1], we use the four observed prompt gamma-ray emission properties of SGRBs available in the BATSE catalog to constrain the population properties and the unknown redshifts of individual BATSE SGRBs (Figure 2).

Model Construction

The crucial step in modeling the population properties of BATSE SGRBs is to realize that one can use the existing prior knowledge about the overall cosmic redshift distribution of SGRBs to integrate over all possible redshifts for each observed SGRB in the BATSE catalog, and thereby infer a range of plausible values for the intrinsic properties of the corresponding SGRBs.
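Where full evidence integrals are affordable, Equation (3) reduces to a normalization of (log-)evidences; the sketch below implements that normalization along with the AIC and BIC shortcuts mentioned above. It is an illustrative helper, not part of any published pipeline.

```python
import numpy as np

def plausibilities(log_evidences, log_priors=None):
    """Equation (3): posterior plausibility of each model from its log-evidence."""
    log_ev = np.asarray(log_evidences, dtype=float)
    if log_priors is None:                 # complete prior ignorance: uniform priors
        log_priors = np.zeros_like(log_ev)
    log_post = log_ev + log_priors
    log_post -= log_post.max()             # stabilize the exponentials
    post = np.exp(log_post)
    return post / post.sum()

# Large-sample shortcuts when the full evidence integral is too expensive:
def aic(max_loglike, k):                   # k = number of free parameters
    return 2 * k - 2 * max_loglike

def bic(max_loglike, k, n):                # n = number of data points
    return k * np.log(n) - 2 * max_loglike
```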
These individually computed probability density functions (PDFs) of the intrinsic properties can then be used to infer the unknown parameters of the joint population distribution of the intrinsic properties of SGRBs. Once the SGRB world model parameters are constrained, we can use the inferred population distribution of the intrinsic SGRB properties, together with the observed properties, to estimate the redshifts of individual BATSE SGRBs independently of each other. The estimated redshifts can again be used to further constrain the intrinsic properties of SGRBs, which will then result in even tighter estimates of the individual redshifts of BATSE SGRBs. This recursive modeling can theoretically continue until convergence to a fixed set of individual redshift estimates occurs, although practical considerations frequently limit the procedure to a single cycle. The lack of knowledge of the cosmic rate of SGRBs proves to be the largest source of uncertainty in SGRB population studies. At first glance, the above simple semi-Bayesian mathematical approach may sound like magic and perhaps too good to be true. Sometimes it is. However, as explained in the previous sections, it can also lead to reasonably accurate results if certain conditions regarding the problem and the observational dataset are satisfied. Let D^lc_obs,i represent the ith SGRB event in the BATSE catalog, characterized by the four main SGRB prompt-emission properties,

D^lc_obs,i = { P_bol,i , S_bol,i , E_p,i , T_90,i } .

Here, the superscript lc indicates that the data are extracted exclusively from the light curves of the events. These are essentially the values reported both in the BATSE catalog and in Shahmoradi and Nemiroff [58]. The entire BATSE dataset of 565 SGRB event attributes can then be represented as the collection of these events,

D^lc_obs = { D^lc_obs,i : i = 1, ..., 565 } .

The peak brightness, P_bol, is included in our GRB world model because it, along with E_p, determines the peak photon flux, P_ph, in the 50-300 keV range (the BATSE nominal detection energy window). Given the available observed BATSE dataset, D^lc_obs, the primary goal now is to constrain the probability density functions of the redshifts of individual BATSE SGRBs. To do this, the process of SGRB observation is modeled as a nonhomogeneous Poisson process whose mean rate parameter is the 'observed' cosmic SGRB rate, R_obs. Each SGRB can be described by its intrinsic properties in the 5-dimensional attribute space, Ω(D_int), of the redshift (z), the 1024 ms isotropic peak luminosity (L_iso), the total isotropic emission (E_iso), the intrinsic spectral peak energy (E_pz), and the intrinsic duration (T_90z), as a function of the parameters, θ_obs, of the observed SGRB rate model, R_obs. The probability of these SGRBs occurring with the given properties is then

π(D_int,i | R_obs, θ_obs) ∝ R_obs(D_int,i, θ_obs) ,    (8)

where R_obs represents the BATSE-censored rate of SGRB occurrence in the universe. This can also be rewritten in terms of the intrinsic cosmic SGRB rate, R_int, along with the BATSE detection efficiency function, η_eff, as

R_obs(D_int, θ_obs) = η_eff(D_int, θ_eff) × R_int(D_int, θ_int) ,    (9)

for a given set of input intrinsic SGRB attributes, D_int, with θ_obs = {θ_eff, θ_int} as the set of parameters of our models for the BATSE detection efficiency and the intrinsic cosmic SGRB rate, respectively.
Assuming that there is no systematic evolution of SGRB characteristics with redshift, the intrinsic SGRB rate itself can be written as

R_int(D_int, θ_int) = R^lc_int(D^lc_int, θ^lc_int) × ζ̇(z, θ_z)/(1 + z) × dV/dz ,    (10)

where R^lc_int is a statistical model, with θ^lc_int denoting its parameters, that describes the population distribution of SGRBs in the 4-dimensional attribute space of D^lc_int = [L_iso, E_iso, E_pz, T_90z]; the term ζ̇(z, θ_z) represents the comoving rate density model of SGRBs with the set of parameters θ_z; and the factor (1 + z) in the denominator accounts for cosmological time dilation. The comoving volume element per unit redshift, dV/dz, is given by [59,60]

dV/dz = 4π (C/H_0) d_L²(z) / [ (1 + z)² √(Ω_M (1 + z)³ + Ω_Λ) ] ,    (11)

with Ω_M and Ω_Λ representing the matter and dark energy densities, respectively, and d_L standing for the luminosity distance,

d_L(z) = (1 + z) (C/H_0) ∫_0^z dz' / √(Ω_M (1 + z')³ + Ω_Λ) ,    (12)

where C represents the speed of light and H_0 the Hubble constant. The cosmological parameters in Equation (12) are set to h = 0.70, Ω_M = 0.27, and Ω_Λ = 0.73 [61]. If the three rate models, (ζ̇, η_eff, R^lc_int), and their parameters were known a priori, one could readily compute the PDFs of the set, Z, of unknown redshifts of all BATSE SGRBs as

π(Z | D^lc_obs, R_obs, θ_obs) ∝ R_obs(Z, D^lc_obs, θ_obs) .

For a range of possible parameter values, the redshift probabilities can be computed by marginalizing over the entire parameter space, Ω(θ_obs), of the model:

π(Z | D^lc_obs, R_obs) = ∫_{Ω(θ_obs)} π(Z | D^lc_obs, R_obs, θ_obs) × π(θ_obs | D^lc_obs, R_obs) dθ_obs .    (15)

The problem, however, is that neither the rate models nor their parameters are known a priori. Even more problematic is the circular dependency of the posterior PDFs of Z and θ_obs on each other: the redshift PDF in Equation (15) requires the posterior of θ_obs, which itself depends on the unknown redshifts Z. To break this circular dependency, we can adopt the empirical Bayes approach described earlier to estimate the redshifts of BATSE SGRBs. First, we propose models for (ζ̇, η_eff, R^lc_int), whose parameters have yet to be constrained by observational data. Given the three rate models, we can then proceed to constrain the free parameters of the observed cosmic SGRB rate, R_obs, based on the BATSE SGRB data. The most appropriate fitting approach should take into account the observational uncertainties and any prior knowledge from independent sources. This can be achieved via the multilevel Bayesian methodology [22] by constructing the likelihood function and the posterior PDF of the parameters of the model while taking into account the uncertainties in the observational data (see Equation (61) in [22]):

π(θ_obs | D^lc_obs, R_obs) ∝ π(θ_obs) × ∏_{i=1}^{565} π(D^lc_obs,i | R_obs, θ_obs) ,    (17)

where Equation (17) holds under the assumption that BATSE SGRBs are independent and identically distributed (i.i.d.) and that there is no measurement uncertainty in the observational data, except for the redshift (z), which is completely unknown for BATSE SGRBs. Once the posterior PDF of the model parameters is obtained, it can be plugged into Equation (15) to constrain the redshift PDFs of individual BATSE SGRBs at the second level of modeling.

The SGRB Redshift Prior Knowledge

The main assumption in this work is that SGRBs are due to the coalescence of binary neutron stars or the merger of a neutron star and a black hole. It is widely believed that binary mergers require significant cosmological time to occur after the deaths of the parent stars and the formation of the neutron stars. In this scenario, the cosmic rate of SGRBs follows the Star Formation Rate (SFR) convolved with a distribution of the delay time between the formation of a binary system and its coalescence due to gravitational radiation.
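A minimal numerical rendering of Equations (11) and (12) under the quoted cosmology is given below; scipy's quad handles the one-dimensional distance integral, and the speed of light is taken in km/s so distances come out in Mpc.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S, H0, OM, OL = 299792.458, 70.0, 0.27, 0.73   # quoted cosmology

def E(z):                                   # dimensionless Hubble function
    return np.sqrt(OM * (1 + z)**3 + OL)

def d_L(z):                                 # Equation (12): luminosity distance [Mpc]
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1 + z) * (C_KM_S / H0) * dc

def dV_dz(z):                               # Equation (11): comoving volume element [Mpc^3 per unit z]
    return 4 * np.pi * (C_KM_S / H0) * d_L(z)**2 / ((1 + z)**2 * E(z))
```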
There is currently no consensus on the statistical moments and shape of the distribution of the delay time between the deaths of the massive progenitor stars and the subsequent coalescence that forms an SGRB, based solely on observations of individual events and their host galaxies. The median delays vary widely in the range of ∼0.1-7 billion years, depending on the assumptions involved in the estimation methods or in the dominant binary formation channels considered. Recent results from population synthesis simulations, however, favor very short delay times of a few hundred million years, with long but negligible tails towards several billion years [7]. The extreme computational expenses imposed on this work by the complex mathematical models strongly limit the number of scenarios that could be considered for the cosmic rate of short GRBs. Thus, in order to approximate the comoving rate density ζ̇(z) of SGRBs, we adopt the SFR model, ζ̇_SFR, described in [7] in the form of a piecewise power-law function, with parameters adopted from the same work. This SFR model is then convolved with a log-normal model of the delay time distribution [7,62], with parameters [μ, σ] = [log(0.1), 1.12] in units of billion years (Gyr) adopted from [7], such that the comoving rate density of SGRBs is calculated as

ζ̇(z) ∝ ∫_z^∞ ζ̇_SFR(z') p_delay( t(z) − t(z') ) (dt/dz') dz' ,

with the universe's age t(z) at redshift z given by

t(z) = ∫_z^∞ dz' / [ (1 + z') H(z') ] ,  where H(z) = H_0 √(Ω_M (1 + z)³ + Ω_Λ) .

The SGRB Properties Rate Model: R^lc_int

As for the choice of the statistical model for the joint distribution of the four main intrinsic properties of SGRBs, D^lc_int, a multivariate log-normal distribution, R^lc_int ≡ LN, is assumed in this work, whose parameters (i.e., the mean vector and the covariance matrix), θ^lc_int = {μ, Σ}, have to be constrained by data. The justification for the choice of a multivariate log-normal as the underlying intrinsic population distribution of SGRBs is multifold. First, the observed joint distribution of BATSE SGRB properties strongly resembles a log-normal shape [36] censored close to the detection threshold of BATSE. Second, unlike the power-law distribution, which has traditionally been the default model for the luminosity function of SGRBs, log-normal models provide natural upper and lower bounds on the total energy budget and luminosity of SGRBs, eliminating the need to set artificial sharp bounds on the distributions to properly normalize them. Third, the log-normal and Gaussian distributions are among the most naturally occurring statistical distributions in nature, whose generalizations to multiple dimensions are also well studied and understood. This is a highly desirable property, especially given the overall mathematical and computational complexity of the model proposed and developed here.

The BATSE Detection Threshold: η_eff

Compared to Fermi-GBM [63] and Swift-BAT [64], BATSE has a relatively simple triggering algorithm. The BATSE detection efficiency and algorithm have already been extensively studied by the BATSE team as well as by independent authors [7,37,44,58]. However, a naive implementation of the known BATSE trigger threshold for modeling the BATSE catalog's sample incompleteness can lead to systematic biases in the inferred quantities of interest. Of the 2702 GRBs that BATSE triggered on, only 2145, approximately 79%, have been consistently analyzed and reported in the current BATSE catalog, with the remaining 21% either having a low accumulation of count rates or missing full spectral/temporal coverage [7].
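Returning to the comoving rate density construction above, the following sketch convolves an SFR-like formation history with the quoted log-normal delay-time model (median 0.1 Gyr, σ = 1.12 in ln-space). The `sfr(z)` shape is a generic stand-in, not the exact piecewise power law of [7].

```python
import numpy as np
from scipy.stats import lognorm
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)
delay = lognorm(s=1.12, scale=0.1)           # delay time distribution in Gyr

def sfr(z):                                  # placeholder comoving SFR density shape
    return (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def sgrb_rate(z, z_grid=np.linspace(0.0, 20.0, 2000)):
    """Unnormalized SGRB comoving rate density: SFR convolved with the delay PDF."""
    t_z = cosmo.age(z).to("Gyr").value       # age of the universe at merger redshift z
    zf = z_grid[z_grid > z]                  # binary formation must predate the merger
    t_f = cosmo.age(zf).to("Gyr").value
    dt_dz = np.abs(np.gradient(t_f, zf))     # |dt/dz| along the formation-redshift grid
    integrand = sfr(zf) * delay.pdf(t_z - t_f) * dt_dz
    return np.trapz(integrand, zf)
```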
Thus, the extent of sample incompleteness in the BATSE catalog is likely not fully and accurately represented by the BATSE triggering algorithm alone. The BATSE LADs generally triggered on a GRB if the number of photons per 64, 256, or 1024 ms arriving at the detectors in the 50-300 keV energy window, P_ph, reached a certain threshold in units of the background photon count fluctuations, σ. This threshold was typically set to 5.5σ during much of BATSE's operational lifetime. However, the naturally occurring fluctuations in the average background photon counts effectively lead to a monotonically increasing BATSE detection efficiency as a function of P_ph, instead of a sharp cutoff on the observed P_ph distribution of SGRBs. Although the detection efficiency of most gamma-ray detectors depends primarily on the observed peak photon flux in a limited energy window, the quantity of interest that is most often modeled and studied is the bolometric peak 'energy' flux (P_bol). This variable depends on the observed peak photon flux and the spectral peak energy (E_p) for the class of LGRBs [37], and also on the observed duration (e.g., T_90) of the burst for the class of SGRBs [7]. The effects of GRB duration on the peak flux measurement are well illustrated in the left plot of Figure 3, which shows that for BATSE GRBs with T_90 ≲ 1024 ms, the timescale used for the definition of the peak flux does indeed matter. This is particularly important in modeling the triggering algorithm of the BATSE Large Area Detectors, where a short burst can potentially be detected on any of the three different peak flux timescales used in the triggering algorithm: 64 ms, 256 ms, and 1024 ms. Therefore, the detection modeling approach of Shahmoradi and Nemiroff [7] is adopted to construct a minimally biased model of the BATSE trigger efficiency for the population study of short-hard bursts.

Results

Now, with a statistical model at hand for the observed rate of short GRBs, we proceed by first fitting the proposed censored cosmic SGRB rate model R_obs to the 565 BATSE SGRB data under the redshift distribution scenario prescribed in the previous section. The posterior PDF of the parameters of the cosmic rate model of SGRBs is explored by the Parallel Delayed-Rejection Adaptive Metropolis-Hastings Markov Chain Monte Carlo algorithm (the ParaDRAM algorithm), part of the larger Monte Carlo simulation package ParaMonte, available in C, C++, Fortran, MATLAB, Python, and other programming languages online (as of 31 March 2022) at https://github.com/cdslaborg/paramonte [52,53,65,66]. However, due to the complex truncation imposed on the SGRB data and the world model by the BATSE detection threshold, the maximization of the posterior distribution of the parameters of the cosmic rate model of SGRBs is not only analytically intractable but also computationally extremely demanding. Calculation of the posterior distribution as given by Equation (17) requires a multivariate integral over the four-dimensional space of SGRB variables at any given redshift. In addition, due to the lack of redshift (z) information for BATSE SGRBs, the probability of the observation of each SGRB given the model parameters must be marginalized over all possible redshifts, adding another layer of integration to the four-dimensional integration. These numerical integrations make sampling from the posterior distribution of the parameters of the SGRB cosmic rate model an extremely difficult task.
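The smoothly rising detection efficiency described above can be mimicked with a toy Gaussian approximation to Poisson background fluctuations; all parameter values below are illustrative assumptions, not BATSE calibration numbers.

```python
import numpy as np
from scipy.stats import norm

def detection_efficiency(p_ph, area_cm2=2000.0, dt=1.024, bkg_rate=1500.0,
                         n_sigma=5.5):
    """Toy 'fuzzy' trigger: probability that the source excess exceeds
    n_sigma background standard deviations on one triggering timescale."""
    src = p_ph * area_cm2 * dt                 # expected source counts in the window
    bkg = bkg_rate * dt                        # expected background counts
    sigma = np.sqrt(bkg + src)                 # Poisson fluctuation (Gaussian approx.)
    # Observed excess ~ N(src, sigma); trigger if it exceeds n_sigma * sqrt(bkg):
    return norm.sf(n_sigma * np.sqrt(bkg), loc=src, scale=sigma)
```

Plotted against p_ph, this function rises monotonically from 0 to 1 rather than cutting off sharply, which is the qualitative behavior the text attributes to background fluctuations.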
Therefore, the inclusion of the measurement uncertainties, which would make the computations far more complex, is not considered in this work. The joint posterior distribution of the model parameters is then obtained by iterative sampling using a variant of Markov Chain Monte Carlo (MCMC) techniques known as Adaptive Metropolis-Hastings [67]. To improve the efficiency of the MCMC sampling, we implement all algorithms in Fortran [68,69] and approximate the numerical integration in the definition of the luminosity distance, Equation (12), by the analytical expressions of Wickramasinghe and Ukwatta [70]. This integration is encountered on the order of one billion times during the MCMC sampling of the posterior distribution. The computations were performed on 96 processors in parallel on two Skylake compute nodes of the Stampede 2 supercomputer at the Texas Advanced Computing Center. We performed extensive tests to ensure a high level of accuracy in the high-dimensional numerical integrations involved in the derivation of the posterior distribution of the parameters of the censored cosmic rate model for SGRBs, as given in Equation (17). The resulting best-fit parameters of the cosmic SGRB rate model are summarized in Table 1, and the marginal distributions of the parameters are compared with each other in Figure 4. Once the parameters of the censored cosmic rate model, Equation (9), are constrained, we use the calibrated model at the second level of the analysis to further constrain the PDFs of the unknown redshifts of individual BATSE SGRBs according to Equation (15). This iterative process can continue until convergence to a specific set of redshift PDFs occurs. However, given the computational complexity and expense of each iteration, the iterative refinement is stopped after the first round of estimates.

Table 1. Mean best-fit parameters of the SGRB world model compared to the LGRB world model of Shahmoradi [37].

The mean redshifts, together with 50% and 90% prediction intervals for the three rate density scenarios, are also reported in Table 2. On average, the redshifts of individual BATSE SGRBs can be constrained to within a 50% uncertainty range of 0.51. At the 90% confidence level, the prediction intervals expand to a wider uncertainty range of 1.31. Figure 5 shows the derived probability density functions (PDFs) for a subset of the 565 BATSE SGRBs. As illustrated, the redshifts of BATSE SGRBs are generally better constrained at lower redshifts.

Discussion and Concluding Remarks

In this work, a semi-Bayesian data-driven methodology was proposed to infer the unknown redshifts of 565 BATSE catalog SGRBs. Towards this goal, the two populations of BATSE LGRBs and SGRBs were first segregated using the fuzzy C-means classification method, based on the observed durations and spectral peak energies of 1966 BATSE GRBs with available spectral and temporal information. Then, the process of SGRB detection was modeled as a nonhomogeneous spatiotemporal Poisson process whose rate parameter was modeled by a multivariate log-normal distribution as a function of the four main intrinsic SGRB attributes: the 1024 ms isotropic peak luminosity (L_iso), the total isotropic emission (E_iso), the intrinsic spectral peak energy (E_pz), and the intrinsic duration (T_90z).
To calibrate the parameters of the rate model, a fundamental assumption was made: SGRBs trace the cosmic Star Formation Rate (SFR) convolved with a model for the binary neutron star merger delay distribution. The resulting posterior probability densities of the model parameters were then used to compute the probability density functions of the redshifts of individual BATSE SGRBs. Although sample incompleteness may strongly affect an observational dataset, the proposed semi-Bayesian modeling framework enables us to overcome the limitations of the observational samples and missing data via reasonable prior distributions and appropriate modeling of the potential biases [71] present in observational data. The proposed methodology differs from, and offers a parametric alternative to, the existing nonparametric methods for quantifying the impact of missing data [72,73]. While generic and applicable to a wide range of research problems and datasets, this parametric probabilistic method requires certain assumptions to be met regarding the observational data in order to yield reasonably accurate, unbiased constraints on the missing data. Most importantly, the effectiveness of the method correlates strongly and positively with the quality of the data and the impact of the unknown components of the data (e.g., redshift) on the known (observed) data. These two factors together explain the significant difference between the tight constraints that Osborne et al. [1] obtain on the individual redshifts of BATSE LGRBs and the looser redshifts inferred for BATSE SGRBs in this work, as illustrated in Figure 5. We expect the Swift-BAT and particularly the Fermi-GBM catalogs to yield significantly tighter constraints on the unknown individual redshifts of Swift and Fermi LGRBs and SGRBs, due to the higher quality of the data and the availability of redshift information for a significant number of events in these catalogs. This will ultimately lead to better independent estimates of the cosmic rates of LGRBs and SGRBs and to their improved utility in constraining the rate of gravitational-wave events. It will also enable the implementation of our proposed probabilistic framework for validating the inferred redshifts in these catalogs, an issue that remains untouched in the present work due to the lack of any measured redshifts in the BATSE (SGRB) catalog. Nevertheless, Osborne et al. [1] show that our proposed methodology is capable of constraining the redshifts of GRBs in the presence of sufficient high-quality data.
Deep Learning for Large-Scale Real-World ACARS and ADS-B Radio Signal Classification

Radio signal classification has a very wide range of applications in the fields of wireless communications and electromagnetic spectrum management. In recent years, deep learning has been used to solve the problem of radio signal classification and has achieved good results. However, the radio signal data currently used are very limited in scale. In order to verify the performance of deep learning-based radio signal classification on real-world radio signal data, in this paper we conduct experiments on large-scale real-world ACARS and ADS-B signal data with sample sizes of 900,000 and 13,000,000, respectively, and with 3143 and 5157 categories, respectively. We use the same inception-residual neural network model structure for ACARS signal classification and ADS-B signal classification to verify the ability of a single basic deep neural network model structure to process different types of radio signals, i.e., communication bursts in ACARS and pulse bursts in ADS-B. We build an experimental system for radio signal deep learning experiments. The experimental results show that the signal classification accuracy of ACARS and ADS-B is 98.1% and 96.3%, respectively. When the signal-to-noise ratio (with injected additive white Gaussian noise) is greater than 9 dB, the classification accuracy is greater than 92%. These experimental results validate the ability of deep learning to classify large-scale real-world radio signals. The results of the transfer learning experiment show that a model trained on large-scale ADS-B datasets is more conducive to the learning and training of new tasks than a model trained on small-scale datasets.

I. INTRODUCTION

Radio signal classification has a very wide range of applications in the fields of wireless communication and electromagnetic spectrum management [1]-[3]. In adaptive modulation and coding communication, the receiver can recognize the modulation and coding mode used by the transmitter and then demodulate and decode the received signal using the corresponding demodulation and decoding algorithms, which helps reduce the protocol overhead. In the field of spectrum management, cognitive radio [4] can detect the primary user signal by identifying the radio signal in the sensing band, thereby avoiding harmful interference to the primary user. Furthermore, by identifying various illegal users and interference signals, the security of the physical layer of wireless communication can be improved and the legal use of the spectrum can be guaranteed [5][6].
In recent years, with the development of deep learning [7][8], the technology has been widely used in the fields of image recognition [9][10], speech recognition [11], natural language processing [12], and wireless communications [13]-[20]. In the past two years, deep learning has also been used to solve the problem of radio signal classification [21][22] and has obtained superior performance over traditional feature-based methods. However, the existing datasets used in studies of deep learning-based radio signal classification are relatively limited in scale, and many of the data are generated by USRPs. There is little research on the real-world signal data radiated by existing commercial radio transmitters. In order to verify the performance of the deep learning-based radio signal classification method on real radio signal data, we conduct experiments with large-scale real-world ACARS (Aircraft Communications Addressing and Reporting System) [23] and ADS-B (Automatic Dependent Surveillance-Broadcast) [24] signal data. Furthermore, we use the same convolutional neural network (CNN) model to realize ACARS signal classification and ADS-B signal classification, which verifies the ability of a single basic deep neural network model to deal with different types of radio signals (communication signal bursts and pulse bursts).

A. RELATED WORK

The research on deep learning-based radio signal classification mainly focuses on two aspects: automatic modulation classification and radio frequency (RF) fingerprinting. In deep learning-based automatic modulation classification, the authors of [25] used a simple CNN for modulation classification, and the experimental results show that the method obtained performance close to a feature-based expert system. They further exploited a deep residual (ResNet) network to improve the classification performance [21]. In addition to CNN and ResNet, the performance of six neural network models is compared in [26], and the training complexity is reduced by reducing the input signal dimension. In most cases, the input size is fixed in CNN-based modulation classification. The authors of [27] proposed three fusion methods to improve the classification accuracy when the signal length is greater than the CNN input length. The above studies are all based on the classification of raw in-phase and quadrature (IQ) data. In addition to IQ, other forms of signal input are also considered. For example, the authors of [28] used the instantaneous amplitude and instantaneous phase of the signal as the input of a long short-term memory (LSTM) network for modulation classification. Others convert IQ data into images by transformation and then use deep learning methods for image classification to classify radio modulations [29][30]. In addition, there are many works employing generative adversarial networks (GANs) to analyze the security of the classification network [31] or for data augmentation [32]. In terms of modulation classification, the size of the signal dataset in [21] is relatively representative. The dataset includes 24 modulation types, and the signals are transmitted and received through USRPs.
In addition to automatic modulation classification, RF fingerprinting has also begun to adopt deep learning methods. A CNN was used in [22] for fingerprinting identification of five ZigBee devices. The bispectrum of the received signal was calculated as a unique feature in [33], and a CNN was used to identify specific emitters. A CNN was also used in [34] to identify five USRP devices. The deep learning-based method obtained better performance than SVM- and logistic regression-based methods. Sixteen static USRP devices placed in a room were considered in [35] for CNN-based emitter classification. In order to consider a much more realistic setting where the topology evolves over time, a series of datasets gathered with 21 emitters were considered in [36] for emitter classification, and state-of-the-art performance was obtained.

In [2], we used a CNN and commercial real-world ACARS signal data to identify aircraft. The number of aircraft classified is 2,016, and the total sample size is 60,480. Table 1 summarizes the data used in current deep learning-based radio signal classification. It can be seen that, except for the dataset used in our prior work [2], the radio signal datasets used in studies of deep learning-based radio signal classification are very limited in scale.

In order to further verify the performance of deep learning on real-world large-scale radio signal classification, in this paper we expand the radio signal dataset scale and carry out experimental verification on two datasets, ACARS and ADS-B. From the perspective of the sample size and the number of classification categories, the dataset used in this paper is the largest real-world dataset used for radio signal classification so far.

B. CONTRIBUTIONS AND STRUCTURE OF THE PAPER

In summary, the contributions of the paper are as follows:

• We test the deep learning-based radio signal classification method on large-scale real-world datasets. The datasets used include ACARS and ADS-B. Large scale is reflected in the sample size and the number of categories. The sample sizes of ACARS and ADS-B are as high as 900,000 and 13,000,000, respectively. The number of ACARS classification categories is 3,143, and the number of ADS-B classification categories is 5,157. To the best of our knowledge, this is the first time that a deep learning method for radio signal classification has been tested on such large-scale real-world data.

• We use the same CNN model structure for both ACARS signal classification and ADS-B signal classification. The ACARS signal is in the form of a communication burst, while the ADS-B signal is in the form of a pulse burst. In this paper, the same CNN model is used for ACARS classification and ADS-B classification, which verifies the ability of a single basic deep neural network model to deal with different types of radio signals.

• We build an experimental system for radio signal deep learning experiments. The experimental system operates in the 30 MHz-3 GHz frequency band and supports large-scale radio signal acquisition, distribution, storage, training, and real-time inference. The classification experiments of ACARS and ADS-B signals in this paper are conducted on this experimental system.

The rest of this paper is organized as follows. In Section II, we introduce ACARS and ADS-B signals and the datasets used in the rest of the paper. In Section III, we present the CNN model. In Section IV, we discuss the experimental results, and finally, in Section V, we summarize the paper.
A. INTRODUCTION OF RADIO SIGNALS

There are many different types of radio signals. In order to facilitate data acquisition and analysis, the radio signals used in this paper are ACARS and ADS-B. ACARS is a digital data link system that transmits short messages between aircraft and ground stations via radio or satellite. The VHF ground-to-air data link of the ACARS system can realize real-time two-way communication of ground-air data with the transmission protocol ARINC618. This paper uses the downlink signal. Each ACARS message contains a maximum of 220 bytes. The data frame format is shown in Fig. 1. ADS-B has been widely used worldwide as a next-generation air traffic management surveillance technology. An ADS-B message consists mainly of two parts, the preamble pulse part and the data pulse part, as shown in Fig. 2. The ADS-B mode considered in this paper is the 1090 MHz Extended Squitter (1090ES) mode. There is also an S-mode long response signal at 1090 MHz, which also conforms to the pulse specification shown in Fig. 2 but carries specific data bit information. The ADS-B dataset in this paper contains both ADS-B signals and S-mode long response signals.

B. THE LARGE-SCALE DATASETS

We use an acquisition system to receive and label the VHF-band ACARS signals and the 1090 MHz ADS-B and S-mode response signals. The system components are described in Section IV. After a long period of collection, samples are selected to form the two datasets described below. Some of the samples are shown in Fig. 3.

• ACARS: The total sample size is 900,000. The number of categories (number of aircraft) is 3,143, and the sample size of each category ranges from 60 to 1,000. The sample rate is 16 ksps, and the length of each sample is 13,500. The labels are the aircraft IDs. Forty samples from each category are selected to form the test set, and the remaining samples are used as training samples. Three of these samples are shown in Fig. 3(a).

• ADS-B: The total sample size is 13,000,000. The number of categories (number of aircraft) is 5,157, and the sample size of each category varies from 150 to 9,400. The sample rate is 100 Msps, and each sample is 13,500 in length. The labels are the aircraft IDs. For each category, 50 samples are selected to form the test set, and the remaining samples are used as training samples. Some of the samples are shown in Fig. 3(b). It can be seen that some samples contain external interference pulses.

III. THE CNN MODEL

CNNs are a widely used type of deep neural network. Most CNN structures are inspired by LeNet. Classic CNNs often contain four basic layers: the convolutional layer, the normalization layer, the nonlinear activation layer, and the pooling layer. In recent years, with the development of research, various special CNN models have emerged, including ResNet [37], Inception CNN [38], DenseNet [39], and so on. The CNN structure considered in this paper is the Inception-residual network structure. As the depth of a traditional CNN increases, the training error becomes larger; that is, the degradation problem occurs. In [37], the authors proposed the residual network to solve the training problem of deep networks. The basic component of the residual network is shown in Fig. 4. The channels of each layer of the Inception network are provided with convolution kernels of different sizes, and the respective channels are then concatenated and passed to the next layer. Therefore, the Inception network is characterized by multi-resolution analysis. The basic module is shown in Fig. 5. Since there are convolution kernels of many sizes in each layer, the learning ability is improved, which is beneficial to network performance. Based on the Inception module and the residual module, we construct a deep Inception-residual network structure for radio signal classification, as shown in Fig. 6.
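A schematic PyTorch rendering of one such block is given below; the kernel sizes and channel split are illustrative choices, and the paper's exact block layouts are those in its Appendix.

```python
import torch
import torch.nn as nn

class InceptionResBlock1d(nn.Module):
    """Toy 1-D Inception-residual block: multi-size kernels concatenated
    ("depth cat") plus an identity shortcut, as described for Fig. 6."""
    def __init__(self, channels):
        super().__init__()
        branch_ch = channels // 4
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(channels, branch_ch, k, padding=k // 2),  # "same"-size conv
                nn.BatchNorm1d(branch_ch),
                nn.ReLU(),
            )
            for k in (1, 3, 5, 7)               # multi-resolution kernel sizes
        ])
        self.relu = nn.ReLU()

    def forward(self, x):
        out = torch.cat([b(x) for b in self.branches], dim=1)  # concatenate channels
        return self.relu(out + x)               # residual shortcut

# Example: a batch of 8 sequences with 64 channels and length 1024.
y = InceptionResBlock1d(64)(torch.randn(8, 64, 1024))
```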
Details of the Inception-res blocks are given in the Appendix. The network input is the original sample sequence of the received signal. In this paper, we use this same model structure to achieve signal classification for both the ACARS and ADS-B datasets.

FIGURE 6. Deep Inception-residual network layout. In the figure, "conv" represents the convolutional layer; the number before "conv" represents the size of the convolution kernel, and the following number indicates the number of convolution kernels; "S" indicates that the convolution contains padding so that the input and output are of the same size, and "/2" indicates that the downsampling factor is 2, i.e., the output size is reduced to half of the input size; "maxpool" represents maximum pooling; "Global avgpool" represents global average pooling; "fc" represents the fully connected layer, and the number after it represents the number of neurons; "depth cat" indicates the concatenation layer; M is the number of categories; the final output is the category. All layers are activated by ReLU, and a batch normalization layer is included between each convolutional layer and nonlinear activation layer, which is not shown in the figure for simplicity. Details of Inception-res Block1, Inception-res Block2, and Inception-res Block3 are given in the Appendix.

EXPERIMENTAL SYSTEM

The experimental system is an upgrade of our previous big data processing system [2] for radio signals. The physical composition is shown in Fig. 7. It is mainly composed of an antenna, an RF receiving and sampling module, a storage system, a computation system, and a data exchange network. Details are as follows:

• Antenna. The antenna used is an omnidirectional antenna with vertical polarization, a typical gain of 0 dB, and an operating frequency range of 30 MHz to 3 GHz, which completely covers the ACARS and ADS-B bands.

2) BASIC CLASSIFICATION RESULTS

The test accuracy on the ACARS data and the ADS-B data is 98.1% and 96.3%, respectively. For ACARS, the classification accuracy of 3,022 of the 3,143 categories is greater than 90%. For ADS-B, the classification accuracy of 4,863 of the 5,157 categories is greater than 90%. Fig. 9 and Fig. 10 show the classification accuracy of each category of ACARS and ADS-B, respectively. It can be seen that the classification accuracy of a few categories is much lower than that of the others.

3) PERFORMANCE WITH INJECTED WHITE GAUSSIAN NOISE

To measure the classification performance at different noise levels, we added white Gaussian noise to the test data for the experiments.
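The noise-injection procedure can be sketched as follows, treating the recorded signal as noise-free when setting the target SNR (as the next paragraph cautions, the true SNR is therefore lower than the nominal value).

```python
import numpy as np

def add_awgn(x, snr_db, rng=np.random.default_rng(0)):
    """Add white Gaussian noise so that signal power / added-noise power
    equals the target SNR (in dB), for real or complex samples."""
    p_sig = np.mean(np.abs(x) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    if np.iscomplexobj(x):
        n = (rng.normal(scale=np.sqrt(p_noise / 2), size=x.shape)
             + 1j * rng.normal(scale=np.sqrt(p_noise / 2), size=x.shape))
    else:
        n = rng.normal(scale=np.sqrt(p_noise), size=x.shape)
    return x + n
```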
Fig. 11 and Fig. 12 show the classification performance of ACARS and ADS-B at different noise levels. It should be noted that the signal-to-noise ratio (SNR) here refers to the SNR after noise is added, treating the original signal as a pure, noise-free signal (in fact, the original signal itself also contains noise). Therefore, the actual SNRs are smaller than those shown in the figures. It can be seen that, although these noisy samples were never trained on, the model classifies them well. When the SNR is greater than 9 dB, the classification accuracy on both datasets is greater than 92%. The figures also show the average confidence of the correctly classified samples under different SNRs. It can be seen that the lower the SNR, the lower the confidence. In addition, comparing the classification accuracy of ACARS and ADS-B at different SNRs, the ADS-B classification is more robust to noise, which may be because the ADS-B signal is a burst of pulses.

4) PERFORMANCE OF TRANSFER LEARNING

To further illustrate the importance of large-scale data in deep learning, we also conducted a transfer learning (TL) experiment using ADS-B classification as an example. Consider a new learning task that categorizes 190 categories of ADS-B signals. The signal data of these 190 categories are not in the dataset introduced in Section II. The number of samples per category is between 50 and 100. The total number of samples is 14,000. For performance comparison, we selected data of 50 and 500 categories from the dataset discussed in Section II, with total sample sizes of 360,000 and 2,850,000, respectively. The networks trained with these two datasets are denoted Net50 and Net500. The network trained with the entire dataset is denoted NetAll. We conduct the training in two ways: without TL and with TL. The non-TL method trains directly on the data of the 190 categories of ADS-B signals, while the TL method performs fine-tuning on the three network models that have already been trained, i.e., Net50, Net500, and NetAll. Fig. 13 shows the experimental results. It can be seen that the TL method converges faster than non-TL training and obtains higher final classification accuracy. Specifically, the final classification accuracy of the non-TL method is 81.24%, and the classification accuracy rates of the three TL methods are 88.56%, 90.42%, and 92.55%, respectively.
FIGURE 9. ACARS classification accuracy of the test set. (a) Classification accuracy of each category and (b) histogram of classification accuracy.

FIGURE 10. ADS-B classification accuracy of the test set. (a) Classification accuracy of each category and (b) histogram of classification accuracy.

FIGURE 11. ACARS classification accuracy of the test set under different noise levels.

FIGURE 13. Transfer learning of ADS-B classification.

Fig. 6 shows the basic CNN framework used in this paper. The structures of the three Inception-res blocks are shown in Fig. 14, Fig. 15, and Fig. 16.

TABLE 1. Radio signal datasets used in some related works.

Table 2 further shows the number of training iterations required to achieve the target classification accuracy. It can be seen that as the size of the data used to train the TL network increases, the time required for TL training decreases. In particular, with NetAll TL, the number of training iterations needed to achieve 80% classification accuracy is reduced by more than 80 times. These results indicate that the larger the dataset, the faster and better the trained model transfers to new tasks.

TABLE 2. Number of iterations in training.
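A minimal sketch of the fine-tuning setup behind these TL results: replace the output layer of a pretrained network with a new 190-way head and continue training. The attribute name `fc` for the final layer is an assumption made for illustration, not the paper's actual code.

```python
import torch.nn as nn

def make_tl_model(pretrained, num_new_classes=190, freeze_backbone=False):
    """Adapt a pretrained classifier (e.g., a model like NetAll) to a new task."""
    if freeze_backbone:
        for p in pretrained.parameters():
            p.requires_grad = False              # train only the new head
    in_features = pretrained.fc.in_features      # assumes the last layer is `fc`
    pretrained.fc = nn.Linear(in_features, num_new_classes)  # new output head
    return pretrained
```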
Intelligent recognition algorithm for social network sensitive information based on classification technology

In social networks, the recognition of sensitive information suffers from low accuracy. To effectively improve the accuracy of intelligent identification of sensitive information, an intelligent recognition algorithm for sensitive information based on an improved fuzzy support vector machine is proposed in this paper. First, the information is collected: the trajectory of the best movement of the information node is found in the low-energy cache, and, within a limited time, the performance of information acquisition is improved by exploiting the mobility of the information nodes. According to the DFS criterion, features are added to the feature subset or eliminated from it. A multi-label feature selection algorithm is applied to the collected information, so that the information gain between information features and the label set can be used to measure their importance. The improved support vector machine classification algorithm is used to classify the information selected by feature selection, select effective candidate support vectors, reduce the number of training samples, and improve the training speed. A new membership function is defined to enhance the effect of support vectors on the construction of the fuzzy support vector machine. Finally, the nearest-neighbor sample density is applied to the design of the membership function to reduce noise and achieve intelligent recognition of sensitive information in the social network. Experimental results show that the accuracy rate of intelligent recognition of sensitive information can be effectively improved by using the proposed algorithm.

In social networks, a huge amount of information is generated every day. In order to obtain and use this information conveniently and effectively, it needs to be classified [7,12]. Along these lines, artificial intelligence technology has been developed. Target recognition is widely applied in the field of computer vision. It plays an important role in industrial applications, aerospace, and the military field [20]. For the perception of the world, most information comes from vision. The description and recognition of object features is the key to cognition. Current computer visual perception is not yet able to meet human needs. Therefore, simulating the learning processes of the human brain to accomplish target learning is a major challenge in the field of recognition, and it is also a research hotspot [4,21]. In social networks, the target recognition of sensitive information is also called visual pattern recognition. The purpose is to process the information using the theory and methods of information processing and pattern recognition, in order to determine whether it contains sensitive information, extract useful information, determine the location of the target, and realize the description, analysis, judgment, and recognition of sensitive information. Pattern recognition is the discipline of classifying and describing information or physical processes. However, research results show that classification algorithms built on different theories will obtain different results on the same data. In practice, in order to achieve the best effect, researchers need to experiment continuously [3,9].
At present, a target recognition algorithm for convolutional neural networks based on unsupervised pre-training and multi-scale partitioning has been proposed. A sparse autoencoder is trained on unlabeled images to obtain a filter set that fits the characteristics of the dataset and provides good initial values. The resulting features are used for information classification, and feeding the features into a classifier achieves information target recognition. However, the classification performance of this method is poor. 2. Intelligent recognition algorithm for social network sensitive information based on classification technology. 2.1. Social network sensitive information collection. In a social network, when data is collected in a data cache at distance $l = \sqrt{2(R^2 - 2r^2)}/2$ from the network center, the total energy consumption of the whole network is low [11]. Randomly select a small region on a circular ring centered at O; in polar coordinates its area is $\rho\,d\rho\,d\theta$, and the number of nodes in the region is $n\rho\,d\rho\,d\theta/(\pi R^2)$. When $\rho < l - r$, each information node needs at least $(l-\rho)/r - 1$ hops to reach the cache area; when $l + r < \rho \le R$, it needs at least $(\rho-l)/r - 1$ hops. The energy consumption for transmitting or receiving unit data is $e$, and the amount of data collected by each node is $q$. The total energy consumption of the information nodes in the cache area is related to the amount of data at each node [14,16]. Assume the data of all information nodes are transmitted to the sensitive information node, and let the total energy consumed by the nodes to transmit and receive data be $p_t$ and $p_r$, respectively; the total energy consumption in the cache area then follows. Outside the cache area, the total energy consumption of the information nodes along the shortest data transmission path can be approximated similarly. When $f'(l) = 6l^2 + 6r^3 - 3R^2 = 0$, i.e., $l = \sqrt{2(R^2 - 2r^2)}/2$, $f(l)$ attains its minimum; that is, the total energy consumption of the network is lowest when the information node moves in the cache area at distance $l$ from the network center. Constrained by the mobility of information nodes, traversing all nodes in the cache area sequentially takes a long time and a long moving distance, and the more frequently the information node moves, the worse the network stability becomes. To further shorten the moving distance of the information node, different access probabilities are assigned to the nodes in the cache area, which further reduces the number of nodes the information node visits directly [5,8]. When the information node accesses one node, all of that node's neighbors can communicate directly with the information node, so the information node does not need to move to the actual location of the neighbor nodes to collect their data. To ensure that all nodes have the opportunity to communicate directly with the information node while it moves in the cache area, the width of the cache area is set to $r$. When the cache area width is $r$ and the moving step of the social network information node is less than $\sqrt{3}r$, every node in the network is guaranteed an opportunity to communicate directly with the sensitive information node. As shown in Fig. 1, the current location of the sensitive information node is O.
The communication range of the next node to be visited must cover both points A and B to ensure that the sensitive information node can communicate directly with all nodes in the network. Taking A and B as centers and $r$ as radius, the resulting circles intersect at point C. Only when OC is less than $\sqrt{3}r$ are both A and B guaranteed to lie within the communication range, as shown by the shaded part of Fig. 1. (Figure 1. Information node access probability.) Let $SS = \{SS(1), SS(2), \ldots, SS(n_s)\}$ be the set of all nodes and $NS(i)$ the number of neighbor nodes of $SS(i)$. Assume the first node visited by the sensitive information node is $SS(i)$. After collecting this node's data, the node with the maximum transfer probability is selected as the next access target [18,23]. The transfer probability between nodes is defined such that, where $\alpha$ and $\beta$ are constants between 0 and 1, $d(i,j)$ is the distance between $SS(i)$ and $SS(j)$, and $NS(j)/n_s$ is the ratio of the number of neighbor nodes of $SS(j)$ to the total number of nodes in the cache area, a node with more neighbors is more likely to be accessed by the sensitive information node. Once a node has been accessed, the access probability of all its neighbor nodes is set to 0. The constraint on $d(i,j)$ ensures that all nodes have the opportunity to communicate directly with the sensitive information node, preventing data loss; bounding $d(i,j)$ by $\sqrt{3}r$ makes the nodes in the shaded area far from the current location of the sensitive information node more likely to be the next access target. The moving step of the sensitive information node is thereby increased, reducing the number of times it must move [22]. A probability matrix describes the sensitive information node's movement between nodes. The matrix $VS(i)$ denotes the optimal access-point sequence starting from $SS(i)$; $pID$ denotes the ID of the node currently accessed by the sensitive information node, $nID$ the ID of the next node to be accessed, and $D(i)$ the moving distance of the sensitive information node on a tour starting from $SS(i)$ within one sampling cycle. To minimize the moving distance, the path with the smallest distance among the $n_s$ candidate paths is selected as the optimal movement strategy [6,13]; the optimal access-node set of the sensitive information node is then $VS(\arg\min_i D(i))$. Within a given network delay time, the pause time can be adjusted according to the total cached data of the nodes within communication range of the sensitive information node, improving data collection performance [17]. Assume the time limit is $T$ and the maximum moving speed of the sensitive information node is $v_m$; the total moving time is then $D(x)/v_m$. The total pause time and the pause time at each accessed node follow, where $a_k$ is the total number of nodes communicating with the sensitive information node at the $k$th access point and $q_{kj}$ is the amount of data at the $j$th member node of the $k$th access point [1].
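To make the access rule concrete, the sketch below implements one plausible reading of the transfer-probability selection described above. The exact functional form of the paper's probability (and how α and β enter it) is not recoverable from the text, so the weighted combination, the normalization, and all function and variable names are assumptions for illustration only.

```python
import numpy as np

def next_node_probabilities(current, positions, neighbor_counts, visited,
                            r, alpha=0.5, beta=0.5):
    """Hypothetical transfer-probability rule for picking the next node.

    Candidates beyond the sqrt(3)*r step limit or already covered get
    probability 0; within the limit, farther nodes (longer steps, fewer
    moves) and nodes with more neighbors are favored, following the
    qualitative description in the text.
    """
    n_s = len(positions)
    probs = np.zeros(n_s)
    for j in range(n_s):
        if j == current or j in visited:
            continue                          # covered nodes are excluded
        d = np.linalg.norm(positions[j] - positions[current])
        if d > np.sqrt(3) * r:                # step limit keeps all nodes reachable
            continue
        probs[j] = alpha * (d / (np.sqrt(3) * r)) + beta * neighbor_counts[j] / n_s
    total = probs.sum()
    return probs / total if total > 0 else probs
```

The greedy tour then repeatedly moves to `np.argmax` of these probabilities, marking the chosen node and its neighbors as covered.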
If multiple nodes transmit data to the sensitive information node at the same time, collisions occur and information data is lost. To avoid conflicts, a TDMA mechanism is used in this paper. The sensitive information node generates TDMA rules based on the pause time and the number of member nodes of each node, and sends these rules to the nodes within communication range. After receiving the rules, a node stays in the sleep state and transmits data only in its assigned time slot. This not only reduces the data loss rate and improves the efficiency of data acquisition, but also reduces the energy consumption of individual nodes and improves energy utilization. The data collection rate of the $i$th node is defined as the ratio of its collected data amount $q_i$ to its buffered data amount $q_{buffer}$, as in Eq. (7): $p_i = q_i/q_{buffer}$, where $0 < p_i \le 1$; $p_i = 1$ means all buffered data of the node is transmitted to the sensitive information node. The greater the value of $p_i$, the stronger the data collection ability. Sensitive information collection on the social network is achieved by the above process. 2.2. Social network sensitive information feature selection. Consider a two-class classification problem in $m$-dimensional real space $R^m$. The size of the training sample set of sensitive information is $n$, and the numbers of positive and negative samples are $n_+$ and $n_-$. The discernibility of feature subsets (DFS) of the sensitive information over the first $i$ $(i = 1, 2, \ldots, m)$ features is given by
$$DFS_i = \frac{\left\|\bar{x}^{(+)} - \bar{x}\right\|^2 + \left\|\bar{x}^{(-)} - \bar{x}\right\|^2}{\frac{1}{n_+ - 1}\sum_{k=1}^{n_+}\left\|x^{(+)}_k - \bar{x}^{(+)}\right\|^2 + \frac{1}{n_- - 1}\sum_{k=1}^{n_-}\left\|x^{(-)}_k - \bar{x}^{(-)}\right\|^2}, \quad (8)$$
where $\bar{x}$, $\bar{x}^{(+)}$, and $\bar{x}^{(-)}$ are the mean vectors of the feature subset over the whole dataset, the positive class, and the negative class, respectively, $x^{(+)}_{k,j}$ is the value of the $j$th feature of the $k$th positive sample, and $x^{(-)}_{k,j}$ is the value of the $j$th feature of the $k$th negative sample. In Eq. (8), the numerator is the sum of the squared distances between the class mean vectors of the feature subset with $i$ features and the mean vector of the same subset over the whole sample set; the denominator is the sum of the within-class variances of the feature subset for the positive and negative classes. A larger numerator means the classes of the feature subset are farther apart, and a smaller denominator means each class is more tightly clustered [2,15]; a larger DFS value therefore indicates stronger discernibility. For an $l$-class $(l \ge 2)$ classification problem, assume the sensitive information training set has size $n$ and the sample space has dimension $m$, with training set $\{(x_k, y_k) \mid x_k \in R^m, y_k \in \{1, \ldots, l\}\}$, where the number of samples of the $j$th class is $n_j$. $DFS_i$ over the first $i$ $(i = 1, \ldots, m)$ features is defined by
$$DFS_i = \frac{\sum_{j=1}^{l}\left\|\bar{x}^{(j)} - \bar{x}\right\|^2}{\sum_{j=1}^{l}\frac{1}{n_j - 1}\sum_{k=1}^{n_j}\left\|x^{(j)}_k - \bar{x}^{(j)}\right\|^2}, \quad (9)$$
where $\bar{x}$ and $\bar{x}^{(j)}$ are the mean vectors of the feature subset over the whole dataset and the $j$th class, respectively, and $x^{(j)}_k$ is the feature vector of the first $i$ features of the $k$th sample in the $j$th class. When $i = 1$, $DFS_i$ reduces to a criterion for the interclass discernibility of a single feature, the improved F-score criterion:
$$F_i = \frac{\sum_{j=1}^{l}\left(\bar{x}^{(j)}_i - \bar{x}_i\right)^2}{\sum_{j=1}^{l}\frac{1}{n_j - 1}\sum_{k=1}^{n_j}\left(x^{(j)}_{k,i} - \bar{x}^{(j)}_i\right)^2}, \quad (10)$$
where $\bar{x}_i$ and $\bar{x}^{(j)}_i$ are the means of the $i$th feature over the whole dataset and the $j$th class, respectively, and $x^{(j)}_{k,i}$ is the value of the $i$th feature of the $k$th sample in the $j$th class. In Eq. (10), the numerator is the sum of the squared distances between each class center of the $i$th feature and the center over the whole sample set, and the denominator is the within-class variance of the $i$th feature over the classes [19].
Therefore, $F_i$ represents the ratio of the between-class distance to the within-class variance of the $i$th feature; a larger value indicates stronger discernibility. Given a sensitive information feature $x$ and a label set $L = \{l_1, l_2, \ldots, l_m\}$, let $IG(l_i \mid x)$ be the information gain between the feature $x$ and the label $l_i$. The information gain between the feature $x$ and the label set $L$ is then $IGS(L \mid x) = \sum_{i=1}^{m} IG(l_i \mid x)$. Since $IG(l_i \mid x) \ge 0$, we have $IGS(L \mid x) \ge 0$. From information theory, if the sensitive information feature $x$ and every label in the label set $L = \{l_1, l_2, \ldots, l_m\}$ are independent, the information gain attains its minimum: by the nonnegativity of information gain, independence gives $IG(l_i \mid x) = 0$ and hence $IGS(L \mid x) = 0$. Conversely, if every label $l_i$ is fully determined by the feature $x$, the information gain attains its maximum, $IG(l_i \mid x) = H(l_i)$, so $IGS(L \mid x) = \sum_i H(l_i)$. From $IGS(L \mid x)$, the features that have no effect on the label set $L$ can be identified, and a reasonable threshold can be set to remove the features with little correlation to the label set. To compute the threshold, the information gains of the different features $x$ with respect to the label set $L$ are transformed. Assuming the distribution of the sensitive information gain follows a normal distribution, it can be transformed to the standard normal distribution by $IGZ(L \mid x) = (IGS(L \mid x) - \mu)/\sigma$, where $\mu$ is the mean of the sensitive information gains and $\sigma$ is the standard deviation [10]. Given a threshold $\delta > 0$, if the absolute value of the information gain satisfies $|IGS(L \mid x)| \ge \delta$, the feature $x$ is related to the label set; otherwise, they are unrelated. So that the information gains of different features and labels share the same measurement range, the information gain is normalized. From the definition of $IG(l_i \mid x)$, the information gain depends on the number of labels, and since the number of labels differs across datasets, a single fixed threshold is not suitable. To address this problem, a method is needed that automatically calculates the threshold of the sensitive information gain according to the application [24]. Assume the number of candidate labels is $m = 20$, and for two different features $x_1$ and $x_2$, $IGS(L \mid x_1) = 0.5$ and $IGS(L \mid x_2) = 1.6$. The importance of $x_2$ is then greater than that of $x_1$, so $x_2$ is preferentially retained. Based on the standard normal transformation, the threshold is set as
$$\delta = E\left(\left|IGZ(L \mid x_i)\right|\right), \quad (14)$$
where $|IGZ(L \mid x_i)|$ is the absolute value of the transformed sensitive information gain $IGZ(L \mid x_i)$. In Eq. (14), the mathematical expectation of the absolute value of the transformed sensitive information gain is used as the current threshold, which increases the adaptive ability of the algorithm. This completes the selection of the sensitive information features of the social network.
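As a concrete reading of the criteria above, the sketch below computes the single-feature improved F-score of Eq. (10) and the adaptive information-gain threshold of Eq. (14). The function names, the sample-variance convention, and the assumption that the per-label information gains have already been computed are illustrative, not taken from the paper.

```python
import numpy as np

def f_score(X, y, feature_idx):
    """Improved F-score of Eq. (10) for one feature: between-class scatter
    of the class means over the summed within-class variances. Extending
    to a feature subset (Eqs. (8)-(9)) replaces means with mean vectors."""
    f = X[:, feature_idx]
    overall_mean = f.mean()
    num, den = 0.0, 0.0
    for c in np.unique(y):
        fc = f[y == c]
        num += (fc.mean() - overall_mean) ** 2
        den += fc.var(ddof=1)          # assumes >= 2 samples per class
    return num / den if den > 0 else np.inf

def adaptive_ig_threshold(ig_values):
    """Eq. (14): standardize the information gains IGS(L|x_i) and use the
    mean absolute standardized gain as the adaptive cut-off delta."""
    ig_values = np.asarray(ig_values, dtype=float)
    z = (ig_values - ig_values.mean()) / ig_values.std(ddof=1)
    return np.abs(z).mean()
```

Features whose standardized absolute gain falls below the returned threshold would then be discarded before classification.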
2.3. Classification and recognition of social network sensitive information. Combined with feature selection, this paper considers the two-class problem with positive-class and negative-class training samples. Because the training speed of a support vector machine is related to the size of the training sample set, and the support vectors distributed on the class boundary play the decisive role in the decision, the number of sensitive information training samples is reduced by preselecting effective candidate support vectors, improving the training speed. A sample whose distance to the disparate class center (the center of the opposite class) is less than the distance between the two class centers is selected as an effective candidate support vector. (1) Linearly separable case. Given the known sample set, the mean feature vector of the samples of a class of sensitive information is called the class center $m$:
$$m = \frac{1}{n}\sum_{k=1}^{n} x_k. \quad (15)$$
(2) Nonlinearly separable case. Given two vectors $x$ and $y$ mapped into the feature space $H$ by a nonlinear function $\phi$, the Euclidean distance between the two vectors in feature space is
$$d_\phi(x, y) = \sqrt{K(x,x) - 2K(x,y) + K(y,y)}, \quad (16)$$
where $K(\cdot)$ is the kernel function. (Figure 2. Preselected effective support vectors.) The center vector $m_\phi$ of the samples in feature space is
$$m_\phi = \frac{1}{n}\sum_{k=1}^{n} \phi(x_k). \quad (17)$$
According to Eq. (15) or Eq. (17), the class centers of the two classes are obtained: the positive class center $m_+$ and the negative class center $m_-$. The distance between the two class centers is
$$D = \left\|m_+ - m_-\right\|. \quad (18)$$
Using Eq. (19), the distance from every sample to its disparate class center is calculated, and the samples whose distance is less than $D$ are selected as effective candidate support vectors; the sensitive information samples satisfying $D_i < D$ are retained, shown in the arc region of Fig. 2. After this preprocessing of the sensitive information samples, the number of training samples is reduced and the training speed is improved. To reduce the influence of noise points on the construction of the support vector machine, a fuzzy membership function for sensitive information based on the class center is considered. Such a membership function exploits the fact that noise points and outliers lie far from the class center, so their influence can be reduced by assigning smaller memberships to samples far from the class center. However, this design ignores the fact that support vectors are also far from the class center; suppressing noise points and outliers therefore also suppresses the support vectors, as shown in Fig. 3. A new membership function is therefore designed so that the membership of a sample increases with its distance from the class center, giving the support vectors larger memberships. The distance of each positive sample from the positive class center is $d^+_i = \|x^+_i - m_+\|$, and the distance of each negative sample from the negative class center is $d^-_i = \|x^-_i - m_-\|$. Assume that after preselecting support vectors the positive-class sensitive information sample set is $X^+$ and the negative-class sample set is $X^-$. The designed membership function then follows, where $\delta$ is a small enough positive number to avoid $s(x_i) = 0$. The membership function is based on the measurement of the farthest distance from the class center, which gives the sample farthest from the class center the greatest membership; in this way, the role of the support vectors in constructing the optimal classification surface is ensured. However, since noise points also lie in the farthest region, they too would be enhanced. The membership function is therefore weighted by the nearest-neighbor sample density to distinguish noise points from normal samples: a sample $x_1$ with many samples within its nearest-neighbor radius $e$ has a large nearest-neighbor density, while a noise point $x_2$ with few such samples has a small one.
Therefore, weighting the membership function suppresses the noise points. The nearest-neighbor sample density function of each sample is calculated to quantify the sample density of the nearest-neighbor sensitive information. For each sample $x_i$, the nearest-neighbor sample subset $X_i$ is formed from the samples $x_j$ whose distance satisfies $d_{ij} \le e_1$, where $d_{ij}$ is the distance between the two samples $x_j$ and $x_i$, $e_1$ is the nearest-neighbor radius with $\min(d_{ij}) \le e_1 < \max(d_{ij})$, and $numX$ is the number of samples in the sample set. Sample density is defined by the distances between nearest neighbors: assuming there are $k$ nearest-neighbor samples in the subset $X_i$, the nearest-neighbor sample density function follows, where $a$ is a small penalty constant. The density is then normalized, where $z_i$ is the nearest-neighbor sample density of the $i$th sample $x_i$; the more samples in the nearest neighborhood, the larger $w_i$. Because nearest-neighbor samples of different classes affect the category of a sample differently, the nearest-neighbor sample density can be adjusted. If the nearest-neighbor subset contains only same-class samples, the sample is not confused with the disparate class and the density is kept constant. If the subset includes disparate-class samples, the sample may be confused with the disparate class and the membership is decreased. If the subset consists only of disparate-class samples, the membership is set to 0, to remove its effect on the construction of the support vector machine. The final membership function, given by Eq. (24), combines the class-center-based membership with the nearest-neighbor sample density, which enhances the support vectors and reduces the noise. The membership function obtained with Eq. (24) is used to train the fuzzy support vector machine for classification, and the intelligent recognition of social network sensitive information is achieved from the classification result.
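A minimal sketch of the two ingredients just described: preselecting candidate support vectors by class-center distances (the linearly separable case of Eqs. (15) and (18)), and a density-weighted membership. The paper's exact formula in Eq. (24) is not fully recoverable from the text, so the product combination, the radius parameter `e1`, and all names below are assumptions.

```python
import numpy as np

def preselect_candidates(X_pos, X_neg):
    """Keep only samples closer to the opposite class center than the
    distance D between the two class centers (Eqs. (15), (18))."""
    m_pos, m_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
    D = np.linalg.norm(m_pos - m_neg)
    keep_pos = X_pos[np.linalg.norm(X_pos - m_neg, axis=1) < D]
    keep_neg = X_neg[np.linalg.norm(X_neg - m_pos, axis=1) < D]
    return keep_pos, keep_neg

def density_weighted_membership(X, center, e1, delta=1e-3):
    """Membership grows with distance from the class center (so support
    vectors matter more) and is damped by a low nearest-neighbor density
    (so noise points matter less). The multiplicative combination is an
    assumed reading of Eq. (24)."""
    d = np.linalg.norm(X - center, axis=1)
    s = d / (d.max() + delta)                    # far samples -> larger membership
    pair = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    density = (pair <= e1).sum(axis=1) / len(X)  # fraction of samples within e1
    w = density / density.max()                  # normalized density weight
    return s * w
```

The memberships returned here would then serve as the per-sample weights of the fuzzy SVM training problem.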
3. Experimental results and analysis. To verify the proposed algorithm, simulation experiments are carried out and compared with the current algorithm. In the experiments, 4 artificial data sets and 5 UCI data sets are used for classification. The experimental environment is an Intel i5 CPU at 2.60 GHz with 4 GB RAM, 64-bit Windows 8, and MATLAB 2017. The sensitive information training sample set consists of two randomly generated classes of two-dimensional normally distributed samples, a positive class and a negative class, with 2% random noise added; a random two-dimensional sample set with 1% added noise is used as the test set. The parameters of the two algorithms are set identically ($C = 100$). The choice of $e_1$ varies with the sample and is set to 3–5 times $\min(d_{ij})$. As the data set grows, the classification results of the two algorithms for the case of 200 positive and 200 negative samples are shown in Fig. 4 and Fig. 5, where triangles represent positive samples, circles represent negative samples, and squares represent the samples deleted after preselecting candidate support vectors. From Fig. 4 and Fig. 5, it can be seen that, as the number of sensitive information training samples increases, the proposed algorithm improves in both training time and classification accuracy compared with the current algorithm. Support vector machine training involves a large number of kernel matrix operations and much storage, which makes training slow, and the size of the matrix is related to the number of training samples. Since the current method does not reduce the number of samples, its training time is longer; moreover, the current algorithm adds a class centripetal degree when setting memberships, which further slows training. The proposed algorithm, by contrast, preselects candidate support vectors and deletes some samples, reducing the number of training samples. Experiments show that the time cost of the membership-setting step in the proposed algorithm is less than the time saved in training the reduced sample set. In addition, the proposed algorithm gives the support vectors greater membership to enhance their role, and weights the membership to account for the smaller nearest-neighbor sample density of noise points and outliers, which effectively suppresses noise points and outliers and improves classification accuracy. Let $S$ be the set of all sensitive information related to information identification in the social network and $R$ the set of recognized sensitive information; let $s$ be the number of recognized related sensitive information items in one detection, $m$ the number of recognized unrelated items, and $n$ the number of unrecognized related items. The recall ratio is then $s/(s+n)$ and the precision ratio is $s/(s+m)$. High recall and precision ratios indicate good performance, but the two are generally contradictory: when the precision ratio is high, the recall ratio is low, and vice versa. A comparison of the recall and precision ratios of the proposed algorithm and the original algorithm is shown in Fig. 6. As the amount of test sensitive information increases, the recall ratio trends upward while the precision ratio trends downward. Because the support vector machine in the proposed algorithm eliminates some erroneous samples that would otherwise affect the result, both the precision and the recall ratios are improved compared with the original algorithm without added location information. The results show that the information recognition rate of the proposed algorithm is high and that it can effectively improve the security of network operation.
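Given the counts defined above (s recognized related items, m recognized unrelated items, n missed related items), the two evaluation ratios follow directly; a small sketch:

```python
def recall_precision(s, m, n):
    """Recall = s/(s+n): share of related items that were found.
    Precision = s/(s+m): share of recognized items that were related."""
    recall = s / (s + n) if s + n else 0.0
    precision = s / (s + m) if s + m else 0.0
    return recall, precision

# e.g. 90 related items found, 10 false alarms, 15 related items missed
print(recall_precision(90, 10, 15))   # (0.857..., 0.9)
```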
6,494.6
2019-01-01T00:00:00.000
[ "Computer Science" ]
Construction and Simulation of an Injury Early Warning Model for Retired Athletes Based on an Improved Self-organizing Neural Network. With the progress of science and technology, interdisciplinary and comprehensive development, and the gradual integration of advanced technologies into the field of sports, it has become possible to study how to reasonably prevent sports injuries, minimize the risk of sports injuries, and maintain the best physical condition of retired athletes. Because of the long-term high-load exercise of retired athletes during their sports careers, their physical functions have been damaged to varying degrees, resulting in frequent injuries. Given that many factors must be considered when predicting retired athletes' injuries, this paper puts forward an improved self-organizing neural network (SOM) method to predict retired athletes' injuries. An early warning analysis model of retired athletes' susceptibility to injury based on the SOM is proposed, which screens the state of retired athletes' physical function variables at each stage, treats physical function data whose standard deviation is higher than the injury-susceptibility limit specification as susceptible injury data, quickly judges all susceptible injury data, and completes high-speed early warning analysis of retired athletes' susceptibility to injury. Introduction. Nowadays, the professionalization of competitive sports has been accelerating: competitions have become more and more frequent, competition has become more intense and tense, the difficulty requirements of technical movements have risen, training times have become longer, and the sports load on athletes has kept increasing [1]. Injuries pose a great threat to the health of athletes. Some serious injuries not only prevent athletes from participating in competition and training, but also cause physical harm and even disability [2]. In actual sports training, many factors cause athletes' sports injuries, including internal factors such as an athlete's age, physical quality, and health status, and external environmental factors such as weather and equipment. Many injuries occur in the course of athletes' long-term sports activity [3]. To improve the recovery quality of retired athletes' physical function, early warning analysis of athletes' vulnerability to injury should be carried out using their physical function data, followed by targeted treatment to enhance the health care quality of the athletes' physical function [4]. To effectively prevent injuries to retired athletes and reduce their probability of occurrence, it is necessary to clarify the relationship between the above factors and athletes' injuries by collecting and mining relevant data [5]. Many factors should be screened and the important indicators determined, so as to realize effective control and early warning of retired athletes' injuries. In sports training, the training load acts on the human body as a stimulus, and the body produces a corresponding response or adaptation [6]. When athletes, as organisms, are stimulated, the relative stability of their bodies is broken; when the load exceeds an athlete's maximum bearing capacity, the athlete's body deteriorates. Severe cases can also lead to excessive fatigue [7], decline of sports function, sports injury, and so on.
Therefore, it is very necessary to carry out real-time early warning of athletes' training adaptation. The more indicators used in the early warning process and the wider the fields involved, the more comprehensive and reliable the evaluation will be, but also the greater the workload and the more scientific researchers are needed [8]. The uneven quality and professional level of scientific researchers and the limited level and depth of the analysis conclusions drawn from test indicators have become difficult problems in the early warning of athletes' training adaptation [9]. The traditional athlete injury analysis model usually analyzes each target feature separately and does not comprehensively analyze the correlation between the physical function characteristics of different athletes, resulting in large deviations in the deep mining of injuries, which greatly reduces the accuracy of the analysis of athletes' vulnerability to injury and imposes clear limitations [10]. If a neural network is used to integrate all the test results and artificial intelligence is used to warn about athletes' training and competition status, the above problems can be effectively solved. Real-time early warning of athletes' training adaptation status, and timely understanding of that status, is the premise for coaches to implement the training plan [11]. With the progress of science and technology, as disciplines develop in an interdisciplinary and comprehensive way and advanced technologies are gradually integrated into the field of sports, it has become possible to study how to reasonably prevent sports injury, minimize the risk of sports injury, and maintain the best physical state of retired athletes. In judging athletes' injuries, there are many index factors, and different index factors affect athletes' injuries differently [12]. To find the main related indicators, the indicators in the sports data must be screened scientifically. If the risk indicators that can directly or indirectly cause sports injury can be mined from the large number of internal and external causal factors, together with the potential relationship between each indicator and sports injury, and if these are controlled and attended to in future actual sports training, the occurrence of athletes' sports injuries will be greatly reduced, playing a genuinely preventive role [13]. This paper presents an early warning analysis model of retired athletes' vulnerability to injury based on a self-organizing neural network (SOM). The SOM is used to screen the state of retired athletes' physical function variables at each stage, and physical function data whose standard deviation is higher than the vulnerability limit specification are regarded as vulnerable injury data, quickly judging all vulnerable injury data and completing high-speed early warning analysis for retired athletes. The innovative contribution of this paper is to propose an improved self-organizing neural network (SOM) method to predict the injuries of retired athletes.
This paper puts forward an early warning analysis model of the injury susceptibility of retired athletes based on the SOM: it selects the state of physical function variables of retired athletes at each stage, takes physical function data whose standard deviation is higher than the injury-susceptibility limit specification as susceptible injury data, quickly judges all susceptible injury data, and completes high-speed early warning analysis of the injury susceptibility of retired athletes. The paper is divided into five parts. The first and second parts describe the relevant research and development background. The third part, materials and methods, expounds the self-organizing characteristics of sports training adaptation, the SOM, and the SOM model learning algorithm. The fourth part discusses the results and compares the SOM before and after improvement: compared with the improved algorithm, the standard SOM algorithm requires more iterations and has lower convergence accuracy. Finally, the full text is summarized. This method can efficiently and accurately extract the key indicators of injury factors of retired athletes and evaluate their early warning level, so as to effectively assess athletes' injuries and reduce the probability of injury recurrence among retired athletes. Related Work. At present, many scholars have studied early warning for sports training from different angles. Wang et al. [14] give early warning for sports training through special indicators such as blood lactic acid, blood urea, HRV, urinary protein composition, immunity, and psychology. To effectively prevent athletes' injuries and reduce their probability, relevant data must be collected and mined to clarify the relationship between the many factors and retired athletes' injuries. This requires screening many factors and determining the important indicators, so as to realize effective control and early warning of retired athletes' injuries. Ma et al. [15] established a dynamic etiological model of sports injury to vividly describe the relationship among internal risk factors, external risk factors, stimulation events, and sports injury in the process of sports training. Yan et al. [16] believe that internal risk factors will not directly injure athletes but only create a tendency toward injury; combined with the action of external risk factors, they are likely to make athletes vulnerable, and if a stimulating or inducing event occurs at that moment, injury results. The research of Li et al. [17] shows that the injuries of retired athletes are affected by many factors with complex links among them, which is difficult to explain with a structural causal model, and this interdependence between data is the most important and useful characteristic of the research object. With the progress of science and technology, as disciplines develop in an interdisciplinary and comprehensive way and advanced technologies are gradually integrated into the field of sports, it has become possible to study how to improve sports performance, reasonably prevent sports injury, minimize the risk of sports injury, and maintain the best competitive state of athletes. Song et al. [18] established five subsystems of sports training early warning and constructed the theoretical system of sports training early warning. Peng et al.
[19] systematically monitored the implementation of athletes' training processes over a long period and diagnosed athletes' physical function, technical characteristics, and psychological state. Ma et al. [20] take retired athletes as the mining object and combine an association rule model to predict and analyze the injuries and illnesses of retired athletes. Hassan et al. [21] mine the data in an injury management system and give some mining results and rules. Tian et al. [22], combining relevant algorithms, proposed a potential identification method for athlete injury based on the SOM, and the results show that the algorithm improves the accuracy of athlete injury prediction. Compared with traditional statistical methods, the SOM method can better reflect the epidemic law from the transmission mechanism of a disease, so that some global characteristics of the epidemic process can be understood [23]. Sun et al. [24] show that, compared with traditional statistical methods, the SOM method can better reflect the injury law of retired athletes from the injury mechanism. In applications of the SOM network, there are shortcomings: the classification results depend on the sample input order, and the network easily falls into local optima. Therefore, this study uses an attribute reduction algorithm to improve the classification accuracy and generalization performance of the SOM algorithm, so as to better reflect the law governing the injuries of retired athletes. The improved SOM makes the weight learning rate and neighborhood size of each neuron change with the affinity of the neurons, ensuring that the network converges to the global optimum with high probability and overcoming the deficiency that the classification effect of the SOM is affected by input order. Self-Organizing Characteristics of Sports Training Adaptation and the SOM. Sports training is a process in which the organism receives external stimuli and imports negative entropy, and the tissues, organs, and systems in the organism organize and order themselves through cooperation and competition. The human body is an open, nonlinear, and highly complex giant system that constantly exchanges matter, energy, and information with the outside world. When the internal and external environment changes, its subsystems fluctuate, and microfluctuations near the critical point are amplified into giant fluctuations by nonlinear mechanisms and change suddenly. Through the nonlinear interaction between the subsystems and the overall synergistic effect, the original relative balance is broken and transformed into another ordered state in time, space, or function. The SOM can find rules and relationships in the input data samples, and the network automatically classifies the input patterns through its own training. SOM Model Learning Algorithm. The SOM imitates the thinking of the human brain: signals pass through neurons and then through weight functions to produce different outputs [25][26][27]; according to the error between the outputs and the actual expected results, the parameters are optimized backwards, and learning and adjustment continue until the goal of optimal learning is reached. The SOM is a double-layer network in which each input node is connected by a weight w, realizing a nonlinear dimension-reducing mapping of the input signals [28][29][30].
Topological invariance is maintained in the mapping; that is, inputs that are similar in the topological sense are mapped to the nearest output node. The typical structure of the SOM is shown in Figure 1. The SOM network is composed of an input layer and a competition layer, and the neurons of the two layers are fully connected. The model learning samples consist of samples with n classification indexes. Assuming that samples of the same category, or with similar characteristics, lie relatively close together in n-dimensional space, these samples form a class and a cluster in that space. When the input samples belong to multiple classes, the n-dimensional space shows a multi-cluster distribution. Each cluster represents a type, and the center of the cluster is the cluster center of that type; the distance between samples of a class and the cluster center of that class is small. This distance can be measured by the Euclidean distance
$$D_j = \sqrt{\sum_{i}\left(x_i - W_{ij}\right)^2},$$
where $x_i$ is the classification index, $W_{ij}$ is the cluster center of the $j$th dynamic type, and $D_j$ is the Euclidean distance. Based on an analysis of the related SOM algorithms, and according to the specific environment of the application field, the algorithm is modified to meet the requirements of the system. The algorithm steps are as follows. (1) Give the threshold $\beta$, which controls the coarseness of the classification: the larger $\beta$, the coarser the classification and the fewer the types; the smaller $\beta$, the finer the classification and the greater the number of types. The value of $\beta$ therefore requires trial calculation. (2) Let the initial number of neurons in the output layer be 1, i.e., $j = 1$, and choose a learning sample as the initial value of the connection weight $W_{ij}$. (3) Enter a new learning sample and calculate the Euclidean distance $D_j$ between it and the cluster center $W_{ij}$ of each dynamic type; find the minimum Euclidean distance. (4) If $D_j < \beta$, the current input sample is considered to belong to the dynamic type represented by that output neuron, and the connection weight $W_{ij}$ is adjusted accordingly, where $\Delta W_{ij}$ is the adjustment of $W_{ij}$ and $h_j$ is the current number of samples of the $j$th dynamic category; then return to step (3). (5) If $D_j > \beta$, then although the output neuron wins the competition, the current input sample cannot be regarded as belonging to the dynamic type represented by that neuron and should instead belong to a new type; the input sample is therefore used as the initial value of $W_{i(j+1)}$, and the procedure returns to step (3). This cycle repeats until all samples have been learned. Finally, the number of output neurons of the network model is the number of types among all samples, and the connection weights are the cluster center values of each dynamic type.
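A compact sketch of the modified learning steps above. The running-mean interpretation of the weight adjustment, $\Delta W_{ij} = (x - W_{ij})/h_j$, is an assumption consistent with "the connection weight is the cluster center value"; the function name and data layout are illustrative.

```python
import numpy as np

def som_threshold_clustering(samples, beta):
    """Steps (1)-(5): a sample joins the nearest existing cluster if its
    Euclidean distance to that cluster center is below beta; otherwise it
    seeds a new output neuron. `samples` is a float array of shape (n, d)."""
    centers, counts = [samples[0].astype(float).copy()], [1]
    labels = [0]
    for x in samples[1:]:
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))                     # winning output neuron
        if d[j] < beta:
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]   # Delta W = (x - W)/h_j
            labels.append(j)
        else:                                     # new dynamic type
            centers.append(x.astype(float).copy())
            counts.append(1)
            labels.append(len(centers) - 1)
    return np.array(centers), labels
```

As the text notes, a larger `beta` yields fewer, coarser clusters, so in practice the threshold is tuned by trial calculation.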
Attribute Reduction Algorithm Model. The attribute reduction algorithm has a strong data classification ability. Without any prior knowledge or additional data information, it uses knowledge reduction to reveal the relevance and decision rules hidden among the data and can comprehensively analyze and identify the injuries and illnesses of retired athletes [31,32]. When computing the sports attribute reduction algorithm, the mathematical model of the attribute reduction algorithm must be built and the early warning model of athletes' injuries established. The data model construction steps are shown in Figure 2. Let $C$ be the collection of data attributes that cause athlete injury and $c_j$ an attribute in it, so that $c_j \in C$. Let $v_j$ be the measured value of attribute $c_j$, $v'_c$ the dimensionless value of attribute $c_j$, and $v^{\max}_j$ and $v^{\min}_j$ the maximum and minimum values of $c_j$. Using these definitions, a mathematical model quantifying the quantitative attributes through the dimensionless value $v'_c$ of attribute $c_j$ can be constructed as
$$v'_c = \frac{v_j - v^{\min}_j}{v^{\max}_j - v^{\min}_j}.$$
Suppose the index evaluation set is denoted by $A$ and $r_j$ denotes the membership degree vector, with $r_j = (r_{j1}, r_{j2}, r_{j3}, r_{j4}, r_{j5})$, where $r_j$ is the membership degree vector corresponding to the index evaluation set $A$. Assume further that $B_i$ is the scale element corresponding to the $i$th evaluation in the data set $B$. Through the data set $B$, the membership vector representing the athlete's injury can be effectively integrated into a scalar $V$, the quantitative value of the qualitative evaluation index of the athlete's injury data under the given scale $B$.
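Both the dimensionless quantification above and the [−1, +1] mapping used later before SOM training are simple min-max transforms; a sketch under that reading (function names are illustrative):

```python
def dimensionless(v, v_min, v_max):
    """Min-max quantification of attribute c_j into [0, 1], the assumed
    form of the dimensionless value v'_c built from v_j, v_j_min, v_j_max."""
    return (v - v_min) / (v_max - v_min) if v_max > v_min else 0.0

def to_pm1(v, v_min, v_max):
    """Map an index value into [-1, +1], as done in the later
    normalization step so indexes of different magnitudes are comparable."""
    return 2.0 * dimensionless(v, v_min, v_max) - 1.0
```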
Acquiring the Injury Complexity Model Based on the Improved SOM. The data basis of the SOM is that a sequence of polynomials of increasing complexity on the n-dimensional compact set $C$ of retired athletes' body function data can approximate any point of $C$ with arbitrary precision, shaped as the Kolmogorov-Gabor polynomial in the variables $(x_1, x_2, \ldots, x_m)$:
$$y = a_0 + \sum_{i=1}^{m} a_i x_i + \sum_{i=1}^{m}\sum_{j=i}^{m} a_{ij} x_i x_j + \cdots, \quad (8)$$
where $a$ denotes the qualified coefficients of the physical function of the retired athletes, $i$ and $j$ index different physical function variables, and $m$ is the change threshold of the physical function. Equation (8) shows that, as the number of independent variables and the polynomial complexity grow, polynomial sequences can fit arbitrary data with high precision. The observed physical function sample data of retired athletes are divided into a training subset and a test subset: intermediate candidate injury models are generated on the training subset through internal specifications, and the intermediate candidate injury models are selected on the test subset through external specifications. In the modeling process, the SOM algorithm screens the input physical function variables of retired athletes at each level using the relevant specifications, and then combines them to obtain the screening model of the next level, until the best injury complexity model is finally obtained. Divide the retired athletes' physical function sample data set $W$ into a training set $A$ and a test set $B$, so that $W = A \cup B$; if a prediction model is built, a prediction subset $C$ must also be split off, so that $W = A \cup B \cup C$. The general functional relationship between the output $y$ of the injury model and the inputs $x_1, x_2, \ldots, x_n$ is again shaped as a Kolmogorov-Gabor polynomial (9). Treating each of the monomials as one of the $m$ input models in the original structure of the modeling network (10), the self-organizing process adaptively forms the first-level intermediate models (11). In the training set $A$, the parameter prediction method is used to estimate the coefficients of $z_k$; in the test set $B$, the competing models $z_k$ are filtered through the external specifications, and the intermediate candidate injury models $w_k$ are collected and used as the input of the second layer of the network. Continuing processes (10) and (11), the second and third layers are formed successively until the minimum of the external specification is reached and the best injury complexity model is obtained. The mining accuracy of different methods is shown in Figure 3. As the signal-to-noise ratio continuously increases, the mining accuracy of traditional methods shows a significant downward trend, while that of this method declines only gently; the mining efficiency of this method is much higher than that of traditional methods, which shows its superiority and achieves satisfactory results.
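The layer-wise screening just described is essentially GMDH-style self-organizing modeling. The sketch below fits candidate pairwise quadratic models on the training subset A and ranks them by an external criterion on the test subset B; the quadratic form, the MSE criterion, and the `keep` width are assumptions, since the paper does not give its internal and external specifications in closed form.

```python
import numpy as np
from itertools import combinations

def _design(X, i, j):
    """Columns for z = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2."""
    xi, xj = X[:, i], X[:, j]
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_train, y_train, X_test, y_test, keep=4):
    """One self-organizing layer: fit every pairwise candidate model on A,
    score it on B (external criterion), keep the best candidates as inputs
    z_k / w_k for the next layer."""
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        a, *_ = np.linalg.lstsq(_design(X_train, i, j), y_train, rcond=None)
        mse = np.mean((_design(X_test, i, j) @ a - y_test) ** 2)
        candidates.append((mse, i, j, a))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]
```

Stacking such layers until the external criterion stops decreasing mirrors the "second and third layers ... until the minimum of the external specification" step in the text.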
Normalization Processing. Studies of the brain have shown that it is composed of a large number of neurons working together; the brain's neural network is a very complex feedback system containing various feedback effects, including global feedback, local feedback, and chemical interactions. Clustering is an extremely important function in the brain's information processing: the brain recognizes external signals through a clustering process and produces a self-organizing process. The training adaptation state data are given as input, including the brain information entropy value, the ratio of the brain main sequence parameters, the parameter competition changes at 8 Hz, 9 Hz, and 10 Hz, the optimization state level of athletes' sports skill, brain function state, central tension, central fatigue score, average score, best score in each test stage, and so on. The data are normalized so that index data of different orders of magnitude are mapped into $[-1, +1]$. Result Analysis and Discussion. Comparing the SOMs before and after improvement, Figure 4 shows the training error trend of the standard SOM and Figure 5 that of the improved SOM. Comparing the curves of the two figures, it is easy to see that, to reach the training target error, the standard SOM had not converged to the expected value after 3000 iterations, while the improved SOM converges quickly and meets the requirements after 12 iterations. Compared with the improved algorithm, the standard SOM algorithm requires more iterations and has lower convergence accuracy. An important reason for the slow convergence of the standard SOM algorithm is that the learning rate is difficult to choose: too large and the network oscillates, too small and it converges slowly. The learning rate of the improved SOM algorithm adjusts adaptively, which accelerates convergence. Figures 6 and 7 show the training error curves of the standard and improved SOM, respectively. The error of the standard SOM when training 140 sets of sample data is still very large, while the error of the improved SOM is already very small when training 100 groups of sample data and hovers around 0 as the training samples increase from 100 to 140 groups, so the training of the improved SOM is quite successful. A multilayer SOM can improve the recognition rate of failure modes, because the clustering areas built in the feature map converge as the number of SOM layers increases, but the number of SOM layers cannot be determined in advance. Fault pattern recognition can also be approached by dividing the fault pattern clustering areas in the SOM feature map in pairs. The SOM feature map is a discrete two-dimensional lattice composed of 0/1 elements, which can be classified by the binary image segmentation method. Figures 8 and 9 show the fitting curves of standard and improved SOM training, respectively. It is easy to conclude from the two figures that the simulated output of the improved SOM after training is basically consistent with the actual output, which shows its high accuracy. When the pattern areas are divided in the discrete two-dimensional plane, there are many pattern points on the boundary of a region, and these boundary points cannot be assigned a pattern category. This is because the region boundary is determined by a discrete generation algorithm: between any two pattern regions there is at least one boundary point in a row or column, sometimes two, and in multimode region division there are even more boundary points in a row or column. To reduce the number of points on the boundary of a pattern region, a region division method using continuous variables as the judgment condition for boundary points is used: the boundary of the pattern area is taken to be a curve in continuous space, and only a few points lie on the boundary in the discrete state. All the above results show that the improved SOM is more effective than the standard SOM in the early warning of retired athletes' injuries. Applying the attribute reduction algorithm and neural network to the early warning of retired athletes' injuries can not only improve the performance of the neural network, reduce its complexity, and shorten its training time, but also, to a certain extent, prevent track and field sports injuries efficiently, conveniently, and in real time, which has practical value in the actual training process. For the injury warning of retired athletes, the SOM does not express the result with one neuron but with several neurons at the same time. The memorization of a learning pattern is not completed at once but through repeated learning, which distributes the statistical characteristics of the input patterns across the connection weights and gives strong anti-interference ability. If a neuron is damaged or fails completely for some reason, the remaining neurons still guarantee that the corresponding memorized information does not disappear; this redundancy is reflected in superior stability in practical applications. The global topology also avoids the problem that many neural network methods fall into local optima. Conclusions. In this paper, an early warning method of injury based on self-organization and restrictive norms is proposed. The self-organizing neural network screens the physical function variables of retired athletes at each level according to the relevant norms, and the best injury complexity model is obtained; the best injury complexity model is then analyzed by mining the restrictive norms.
In this study, according to the various factors in the early warning of retired athletes' injuries, the idea of using the SOM to predict the injury occurrence law is put forward, and the attribute reduction algorithm is introduced into the adjustment of the neuron neighborhood range and the learning of weights in the SOM network, so as to overcome the shortcomings of the SOM network algorithm, namely that its classification results depend on the sample input order and that it easily falls into local optima. The learning algorithm of the self-organizing competitive neural network is improved, and the attribute reduction algorithm is used for data mining. The classification of the input learning samples is known; they have already been classified at input time. Training the network model with such samples shortens the learning time and improves the classification accuracy, compared with other algorithms such as monarch butterfly optimization (MBO), the earthworm optimization algorithm (EWA), elephant herding optimization (EHO), moth search (MS), the slime mould algorithm (SMA), and Harris hawks optimization (HHO). The mining efficiency and accuracy of this method are higher than those of traditional methods; it has strong anti-interference ability and can achieve satisfactory results. After the network is trained, prediction becomes very simple, and the evaluation results can be obtained quickly by inputting the monitoring data of the samples to be evaluated. The method can efficiently and accurately extract the key indicators of the injury factors of retired athletes and evaluate their early warning levels, so as to effectively assess athletes' injuries and reduce the probability of injury recurrence among retired athletes. Data Availability. The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
6,448.4
2021-10-18T00:00:00.000
[ "Engineering", "Computer Science" ]
Pulse Dispersion in Phased Arrays. Phased array antennas cause pulse dispersion when receiving or transmitting wideband signals, because phase shifting the signals does not align the pulse envelopes from the elements. This paper presents two forms of pulse dispersion that occur in a phased array antenna. The first results from the separation distance between the transmit and receive antennas and impacts the definition of far field in the time domain. The second is a function of beam scanning and array size. Time delay units placed at the elements and/or subarrays limit the pulse dispersion. Introduction. The demand for high data rates in wireless systems has pushed the technology for wideband communications systems. Figure 1 shows a plot of the bandwidth and data rates associated with the digital wireless standards over time [1]. Until now, the signals of interest have been relatively narrowband, but future systems must process signals with very wide instantaneous bandwidths. These large bandwidths equate to high data rates that significantly impact the design of phased array antennas. There are usually two definitions of a wideband phased array antenna [2][3][4]. The first and most widely used is the operational bandwidth: the array components are wideband, but the array only processes narrowband (low data rate) signals that lie within a wide frequency range. The second definition is based on the signal bandwidth. In this definition, the bandwidth is a function of the array size and beam scan angle [5]:
$$BW\,(\%) < \frac{\lambda}{N d \sin\theta_s} \times 100, \quad (1)$$
where $N$ is the number of elements, $d$ the element spacing, $\lambda$ the wavelength, and $\theta_s$ the scan angle. Large phased arrays with wide scan angles have very narrow bandwidths. Phased array bandwidth limits are based on two related factors [6]. First, a phased array scans the main beam using a linear phase shift across the aperture that is calculated at the center frequency. Frequencies above and below the center frequency cause the main beam to squint toward or away from broadside, respectively. The array bandwidth is defined by a maximum beam squint bounded by the 3 dB beamwidth of the center-frequency main beam, as in (1). An alternative factor that can be used to define array bandwidth is pulse dispersion, a widening of the pulse width. For the purposes of this paper, we assume that one pulse represents one bit. In this definition, the signal pulse width must be greater than the time the signal takes to traverse the array, traditionally defined as the aperture fill time [7]; this assumption also leads to the definition in (1). This paper starts with an overview of the pulse dispersion that occurs in a phased array antenna. Pulse dispersion has long been a problem in optical fibers, but until recently phased array designers rarely worried about wideband signals unless the array was very large with a large field of view. In Section 3, we explain two causes of pulse dispersion in a phased array: the first occurs in the near field and impacts antenna measurements; the second results from the aperture size and the scan angle. The next section outlines the need for time delay units in array systems with wide instantaneous bandwidth signals. Time delay aligns the signal envelopes and minimizes pulse dispersion. Pulse Dispersion. Pulse dispersion [8][9][10] results when the received pulse has a longer duration than the transmitted pulse. It occurs due to the following: (i) Multipath: a transmitted signal arrives at the receiver via more than one path, and the different path lengths cause different signal delays.
(ii) Polarization: two orthogonal polarizations in an optical fiber travel at different speeds. (iii) Intramodal: also known as chromatic dispersion, when the index of refraction changes with frequency inside the material. (iv) Intermodal: modes travel at different speeds. (v) Array: arrival times at the elements are different. The extended pulse width increases intersymbol interference (ISI) in a communications system, which in turn increases the bit error rate (BER) [11]. This paper addresses only the pulse dispersion created by antenna arrays due to the positions of the elements. Figure 2 shows a signal $s(t)$ with a pulse of length $\tau_p$ incident at an angle $\theta$ on an $N$-element, equally spaced linear array lying along the $x$-axis. The pulse arrives at element 1 first and then sequentially at all the elements up to the last one. The signal from each element is weighted by $w_n$, time-delayed, and summed to get the array output
$$y(t) = \sum_{n=1}^{N} w_n\, s\!\left(t - \delta_n\right). \quad (2)$$
If the time delay at element $n$ is
$$\delta_n = \frac{(n-1)\, d \sin\theta_s}{c}, \quad (3)$$
then the signal maximum corresponds to the main beam pointing at $\theta_s$. Steering the main beam in this manner is known as time delay steering. The output of a linear array that receives a single-frequency signal is given by the array factor
$$AF = \sum_{n=1}^{N} w_n\, e^{\,j\left[(n-1) k d \sin\theta + \phi_n\right]}. \quad (4)$$
Time is ignored in this steady-state scenario, because there is no signal envelope. If
$$\phi_n = -(n-1)\, k d \sin\theta_s, \quad (5)$$
then the array output is a maximum at $\theta = \theta_s$, which corresponds to the main beam pointing in the direction of the signal. Steering the main beam in this manner is known as phase steering. Note that the phase shift needed to point a beam maximum at $\theta_s$ increases with frequency. A phased array uses a phase shifter at each element to align the signal phases so that all the pulses add coherently. Phase shifters have a constant phase shift across their operational bandwidth. As a result, a constant phase shift over frequency in (5) means that the scan angle changes with frequency. Figure 3 shows the relative timing of the signals arriving at each of the 20 elements of a linear array (aperture $L$ = 30 cm) operating at 10 GHz with the signal arriving from $\theta = 30^\circ$. The pulse width is set to 0.95 ns to ensure the aperture fill time is satisfied for all elevation scan angles. Phase steering the main beam to $\theta_s = 30^\circ$ results in the coherent addition of the 20 signals, as shown by the plot at the bottom of Figure 3: the 0.95 ns transmitted pulse spreads into a 1.43 ns received pulse. Pulse dispersion is a function of the arrival and scan angle, as shown in Figure 4. At broadside, the received pulse has the same shape and duration as the transmitted pulse. When the pulse arrives at 30 degrees and the array is phase scanned to 30 degrees, the pulse expands by about 1.5 times. Increasing the angle of arrival and the phase scan to 60 degrees expands the pulse even further: at this angle the 0.95 ns pulse has spread to 1.78 ns, 1.87 times its original length. This plot shows that ISI between consecutive pulses will increase as the scan angle increases, and the increased ISI will result in a higher BER.
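The phase-steered example above (20 elements, 10 GHz, 0.95 ns pulse, 30° arrival and scan) can be reproduced with a few lines of simulation. This is a sketch rather than the authors' code: uniform weights (instead of the tapers used later), an ideal rectangular envelope, and 1.5 cm half-wavelength spacing are assumptions.

```python
import numpy as np

c = 3e8                        # speed of light, m/s
f0 = 10e9                      # carrier frequency, Hz
N, d = 20, 0.015               # 20 elements at half-wavelength (1.5 cm) spacing
theta = np.radians(30)         # arrival angle = phase-steered scan angle
tau = np.arange(N) * d * np.sin(theta) / c    # envelope delay at each element
t = np.arange(0.0, 4e-9, 1e-12)
pulse = lambda u: ((u >= 0) & (u < 0.95e-9)).astype(float)

# A phase shifter removes the carrier phase 2*pi*f0*tau_n at each element,
# but NOT the envelope delay tau_n, so the summed envelopes smear in time.
out = sum(pulse(t - tn) * np.cos(2 * np.pi * f0 * (t - tn) + 2 * np.pi * f0 * tn)
          for tn in tau) / N
width = np.ptp(t[np.abs(out) > 0.1])          # rough duration of the summed pulse
print(f"received pulse ~ {width * 1e9:.2f} ns vs 0.95 ns transmitted")
```

The staggered envelopes span (N−1)·d·sinθ/c ≈ 0.48 ns on top of the 0.95 ns pulse, reproducing the roughly 1.4 ns spread quoted in the text.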
In antenna measurements, the far field starts when the distance from the transmit antenna to the receive antenna edge (R + Δ) exceeds the distance to the center of the antenna (R) by no more than Δ ≤ λ/16, i.e., a phase error of π/8 radians. Using this value for Δ, the IEEE antenna standard defines the far field in terms of the receive antenna diameter (D) [12]:

R ≥ 2D²/λ.    (6)

This approximation manifests itself as errors in the sidelobes of the far-field pattern close to the main beam, as shown in Figure 6 for several values of R. Decreasing R results in phase errors that distort the nulls and sidelobes close to the main beam.

The far field definition in (6) is for the antenna pattern at a single frequency, so it applies to narrowband signals that do not exhibit pulse dispersion. Using the diagram in Figure 5, a time domain version of (6) can be derived by assuming that the arrival-time spread Δ/c across the aperture must remain a small fraction of the pulse width rather than of the RF period. The resulting equation is independent of wavelength. Instead, the separation distance depends on the pulse width as well as the antenna size.

In order to quantify the effects of pulse dispersion on the separation distance between a transmit antenna and a receive array, assume that the receive antenna is stationary and the transmit antenna moves in a circle centered on the receive antenna, as shown in Figure 7 [14]. For 0 ≤ θ ≤ 90°, the time delay between when the pulse hits the first element and when it hits the last element is given by

δt = (Rlong − Rshort)/c,    (8)

where Rlong and Rshort are the longest and shortest paths the pulse takes from the transmitter to the array elements. From the geometry,

Rlong = sqrt(R² + R D sin θ + (D/2)²),    (9)

Rshort = R cos θ when R sin θ ≤ D/2 (the foot of the perpendicular lies on the array), and Rshort = sqrt(R² − R D sin θ + (D/2)²) otherwise.    (10)

Note that at broadside (θ = 0°) this gives Rshort = R and Rlong = sqrt(R² + (D/2)²), while at endfire (θ = 90°) it gives Rshort = R − D/2 and Rlong = R + D/2. The longest distance is always from the transmit antenna to the element on the far edge. The closest element, however, depends upon θ. At broadside, the closest element is at the center of the array, while at endfire, the closest element is at the nearest edge. Figure 8 has plots of the total time delay given by (8) across an array of diameter D at a distance R from the transmitter for several values of θ. At broadside, the shortest delay occurs from the transmitter to the center of the array, with the largest delay from the transmitter to the edges. The time delay increases with increasing D and decreases with increasing R. As θ increases, the time delay due to the separation distance between the transmitter and the receive array decreases until, at θ = 90°, it disappears. The near-field time delay dominates at angles close to broadside, while the scan-angle time delay dominates everywhere else.

Pulse Dispersion versus Scan

The last section introduced pulse dispersion in linear phased array antennas due primarily to the separation distance between the transmit and receive antennas. This section presents results for pulse dispersion due only to the angle of incidence on a receive phased array when the transmitter is at R = ∞ [15,16]. In general, the spatial delay across the aperture is many wavelengths, which corresponds to a phase greater than 2π. This delay is not a problem for narrowband signals, because phase shifters easily align the signals at the elements. The compensating phase provided by the phase shifter is at most one phase cycle, or period, which is insufficient for wideband signals.

Consider a 0.95 ns rectangular pulse centered at 10 GHz incident on a 20-element linear array with a 25 dB Taylor taper. Pulse dispersion is zero at broadside, because the signal hits all the elements at once, as shown in Figure 9(a).
Off boresight, the signal no longer coherently adds and the pulse spreads out in time. When the pulse enters the sidelobes, it experiences increased dispersion the further it is from broadside. Scanning the beam to 30 and 60 degrees to receive the pulse at those same angles shows that the received pulse entering the main beam experiences significant dispersion that intensifies as the scan angle increases (Figures 9(b) and 9(c)).

Time Delay Steering

Time delay units compensate for the pulse dispersion and beam squint experienced by an antenna array. Replacing the phase shifters at the elements with time delay units allows the array to steer the beam in a way that aligns the signal envelopes as well as the signal phases, as predicted by (2).

Figure 10 shows the output of a 20-element linear array with a 25 dB Taylor taper when a 0.95 ns rectangular pulse centered at 10 GHz is incident at three different angles. The broadside case is identical to the broadside case for phase steering, because no time delay or phase shift is needed to coherently add the signals at each element. No pulse dispersion occurs when the signal and time-delay steering correspond to θs = 30° and θs = 60°. Signals entering the sidelobes do experience a large spreading; since these signals are not important, their dispersion is not a problem.

To gain a better understanding of how time delay differs from phase shift, we show the phase-shifted and time-delayed signals for a center element (element number 10) in the 20-element array example. Figure 11 compares how these two beam steering techniques operate. With the phase shift approach, the signal is aligned in phase within its envelope. With time delay, on the other hand, the envelope of the signal is shifted, which resolves the issues associated with pulse spreading. Time delay units are needed to steer the beam of a large, wideband array.

If all the time delay bits are placed at the element, then the beam pointing and sidelobe levels are minimally perturbed. Time delay units become larger and more expensive as the number of bits and the physical size of the bits increase [17,18]. Some of the time delay bits must be placed at the subarray levels in the corporate feed network to minimize the cost of the time delay units (Figure 12) [19]. Also, time delay units that exceed a few nanoseconds of delay occupy a large physical area and do not easily fit behind an element [20]. When their area exceeds an element unit cell, they are placed at the subarray ports. As the bits are moved back in the feed network, time delay quantization increases and produces errors in the array factor [21]. Large bits need to be carefully placed in the subarray structure in order to avoid large quantization lobes that approach the magnitude of the main beam. At least one bit must appear at each level between the element and the highest level, or quantization lobes appear.

In order to lower costs without sacrificing much performance, phase shifters are used at the element level and time delay is distributed at the subarray levels.
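The contrast between phase steering and time delay steering is easy to reproduce numerically. The sketch below is our own illustration, not code from the paper: it uses uniform weights rather than the 25 dB Taylor taper and assumes half-wavelength element spacing (a 28.5 cm aperture, close to the 30 cm array in the examples). Each element's contribution is modeled as a delayed rectangular envelope; phase steering aligns only the carrier phase, so the envelopes stay staggered, while time delay steering shifts the envelopes as well:

```python
import numpy as np

C, F0 = 3e8, 10e9                 # speed of light, carrier frequency
LAM = C / F0
N, D = 20, LAM / 2                # 20 elements at half-wavelength spacing
TAU = 0.95e-9                     # transmitted pulse width (s)
x = np.arange(N) * D              # element positions along the array

t = np.linspace(-1e-9, 4e-9, 40001)
rect = lambda tt: ((tt >= 0) & (tt < TAU)).astype(float)

def received_envelope(theta_deg, time_delay_steering):
    """|array output| for a pulse arriving from theta, beam steered to theta.

    With phase steering the carrier phases add coherently but the envelopes
    remain staggered by the arrival delays; with time delay steering the
    envelopes are shifted into alignment as well."""
    th = np.radians(theta_deg)
    out = np.zeros_like(t)
    for xn in x:
        tn = xn * np.sin(th) / C                       # arrival delay at element
        out += rect(t - tn + (tn if time_delay_steering else 0.0))
    return out / N

def width(env, frac=0.02):
    """Extent of the envelope above frac * peak: the dispersed pulse length."""
    above = t[env > frac * env.max()]
    return above[-1] - above[0]

for th in (0, 30, 60):
    ps = width(received_envelope(th, False)) * 1e9
    td = width(received_envelope(th, True)) * 1e9
    print(f"theta = {th:2d} deg: phase steering {ps:.2f} ns, time delay {td:.2f} ns")
```

Under these assumptions the phase-steered pulse widths come out near 1.43 ns at 30° and 1.77 ns at 60°, in line with the values quoted for Figures 3 and 4, while time delay steering returns the original 0.95 ns pulse at every scan angle.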
Conclusions

Pulse dispersion is a serious issue when designing antenna arrays for receiving and transmitting wideband signals. The elements of a phased array receive the transmitted signal at different times depending upon the distance from the transmitter, the angle of incidence, and the size of the array. When the antenna is in the near field, the spherical wavefront from the transmitting source causes time-of-arrival differences at the array elements, which results in pulse dispersion. Normally, the far field is defined in terms of a phase error for narrowband antennas; a similar time domain far-field definition is possible as well. A wideband pulse coming from the far field at an angle off broadside arrives at the elements at different times based upon the angle of incidence and the size of the aperture. Contour plots of the pulse dispersion for scanning arrays were presented that show the extent of the dispersion over the main beam and sidelobes. Time delay units are needed to correct pulse dispersion. This added expense can be mitigated by placing them at the subarray levels.

Figure 1: Increase in the bandwidth of the wireless standards over time [1].
Figure 2: Plane wave incident on a linear array.
Figure 6: Far field patterns as a function of separation distance between the transmit and receive antennas.
Figure 7: The transmit antenna moves in a circle about the receive array.
Double Negation Semantics for Generalisations of Heyting Algebras

This paper presents an algebraic framework for investigating proposed translations of classical logic into intuitionistic logic, such as the four negative translations introduced by Kolmogorov, Gödel, Gentzen and Glivenko. We view these as variant semantics and present a semantic formulation of Troelstra's syntactic criteria for a satisfactory negative translation. We consider how each of the above-mentioned translation schemes behaves on two generalisations of Heyting algebras: bounded pocrims and bounded hoops. When a translation fails for a particular class of algebras, we demonstrate that failure via specific finite examples. Using these, we prove that the syntactic version of these translations will fail to satisfy Troelstra's criteria in the corresponding substructural logical setting.

Introduction

Schemes for translating classical logic into intuitionistic logic have been studied since the 1920s and are important for understanding the computational content of classical logic. These so-called negative translations or double negation translations, such as those proposed by Kolmogorov, Gödel, Gentzen and Glivenko, are generally presented as syntactic translations and are studied by mainly syntactic methods (e.g., see [9,11]). In this paper we use an algebraic framework for investigating proposed double negation translations. The arguments justifying the syntactic Kolmogorov and Gödel translations do not need the rule of contraction and hence we develop our framework in the context of two generalisations of Heyting algebras: bounded pocrims and bounded hoops. In logical terms these correspond to the conjunction-implication fragment of intuitionistic affine logic and what we call intuitionistic Łukasiewicz logic, respectively. We view a translation as a variant semantics for the logical language and we give a semantic formulation of Troelstra's criteria for a satisfactory translation. The algebras that correspond to classical logic are called involutive (i.e., they satisfy ¬¬x = x). We associate with each bounded pocrim A two involutive pocrims:

• a bounded pocrim A_C, called the involutive core of A, whose universe is a subset of the universe of A, and
• a bounded pocrim A_R, called the involutive replica of A, whose universe is a quotient of the universe of A.

A generalisation of the first construction (the involutive core) was studied in [20], where it is called a c-retraction. The injection ι : A_C → A and the projection π : A → A_R are not necessarily homomorphisms when A is a general bounded pocrim, but they are homomorphisms when A is a bounded hoop. The involutive core and the involutive replica turn out to be naturally isomorphic via the composite π ∘ ι. The two constructions give complementary ways of viewing the double negation operation δ(x) = ¬¬x.

Using the involutive core and the involutive replica, we show that the Kolmogorov and Gödel translations satisfy our algebraic formulation of Troelstra's criteria for a satisfactory negative translation in any reasonable class of bounded pocrims. We also show, by explicit finite examples, that the Gentzen and Glivenko translations fail to satisfy our algebraic formulation of Troelstra's criteria in general. The proofs that the Gentzen and Glivenko translations fail are based on specific finite classes of finite bounded pocrims.
Using these counterexamples we can prove that the syntactic versions of these translations fail to satisfy Troelstra's formulation of his criteria. For bounded hoops, the situation is much simpler. The double negation operation is a homomorphism, implying that all reasonable double negation translation schemes are equivalent and hence satisfy our formulation of Troelstra's criteria. The results for bounded hoops are dependent on certain algebraic identities, some of which are not easy to derive from the axioms for this class of algebras. We use an indirect semantic method to verify the harder identities (see Section 4.2).

Related work

Cignoli and Torrens [8] investigate Glivenko's negative translation scheme in the setting of bounded BCK algebras, the algebraic models of the implicative fragment of intuitionistic affine logic. They study an analogue for BCK algebras of what we call the involutive core of a bounded pocrim, and discuss extensions of their results on the Glivenko translation to bounded pocrims and bounded hoops. In the present paper, we are interested in negative translation schemes in general and give a framework for comparing different translations.

Galatos and Ono [14] look at the Glivenko and Kolmogorov translations for substructural logics over the full Lambek calculus, taking again an algebraic approach studying involutive substructures of residuated lattices. In particular, they show that every involutive substructural logic has a minimal substructural logic that contains the first via a double negation interpretation. Commutativity is not assumed, so the paper has to deal with two forms of negation. A proof-theoretic presentation of the results in [14] for the Glivenko translation is then given by Ono [20], looking at the weakest extension of full Lambek calculus needed to derive the Glivenko theorem for classical logic.

The work that is perhaps closest to ours is that of Farahani and Ono [10], where they also study various negative translations, analysing the role of the double negation shift principle in the treatment of the quantifiers in predicate logic. In their final section on "algebras" they discuss a construction (c-retraction), which can be viewed as a generalisation of our involutive core construction. In the present paper our goal is to create a general framework for negative translations, enabling us to identify situations where particular translation schemes fail to have the required algebraic properties for a negative translation. In our study we also introduce an alternative to the c-retraction/involutive core construction, the involutive replica, which turns out to fit more naturally in some cases.

Syntactic Negative Translations

As mentioned above, we are studying here classes of algebras that capture the semantics of some well-known logics. A formula is provable in the conjunction-implication fragment of intuitionistic affine logic iff it is valid in all bounded pocrims. Similarly, provability in the conjunction-implication fragment of GBL (the fragment that we call intuitionistic Łukasiewicz logic) is captured by validity in the algebraic class of hoops. The classical counterparts of these logics, i.e. the extensions of these logics with the double negation elimination (DNE) principle ¬¬A → A, can also be captured by the subclass of involutive pocrims/hoops, i.e. bounded pocrims/hoops which satisfy ¬¬x = x.
Negative translations provide a way to eliminate DNE from classical proofs of a formula A, turning these into intuitionistic proofs of the translation of A. Although various negative translations have been proposed in the literature [15,16,17,19], it is well known that all negative translations which satisfy Troelstra's criteria [22, Section 1.10] are intuitionistically equivalent. Formally, Troelstra calls a formula translation A ↦ A^N a negative translation if:

(i) A and A^N are classically equivalent;
(ii) if A is provable classically then A^N is provable intuitionistically;
(iii) A^N is equivalent to a formula in the negative fragment (negated atomic formulas, implication and conjunction).

The point behind (iii) is that, for this negative fragment, classical and intuitionistic provability coincide, and in particular (A^N1)^N2 is intuitionistically equivalent to A^N1. Assume then that two translations A^N1 and A^N2 satisfy the above. By (i), we have that A^N1 → A holds classically. Hence, by (ii), (A^N1 → A)^N2 is intuitionistically valid. With the further assumption that these translations are modular (see [11]), we also have that (A^N1 → A)^N2 is intuitionistically equivalent to (A^N1)^N2 → A^N2, so that, by the equivalence noted above, A^N1 → A^N2 is provable intuitionistically; exchanging the roles of N1 and N2 gives the converse implication, so A^N1 and A^N2 are intuitionistically equivalent.

Pocrims

The most general class of algebras we consider is the class of pocrims: partially ordered, commutative, residuated, integral monoids [3]. Pocrims provide the natural algebraic models for the fragment of intuitionistic logic known as minimal affine logic, whose connectives are implication (φ ⇒ ψ) and a form of conjunction (φ ⊗ ψ) that is not required to be idempotent (so that the law of contraction need not hold). The underlying ordered set of a pocrim is bounded above but not necessarily below; bounded pocrims, i.e., those in which the order is bounded below, provide the context for our study of negation.

Definition 2.1 (Pocrim). A pocrim is a structure for the signature (⊤, ·, →) of type (0, 2, 2) satisfying the following laws, in which x ≤ y is an abbreviation for x → y = ⊤: the monoid laws [m_i], which make (P; ⊤, ·) a commutative monoid; the order laws [o_j], which make ≤ a partial order; the law [t]: x ≤ ⊤; and the residuation law

[r] x · y ≤ z iff x ≤ y → z.

We will refer to the operations · and → as conjunction and residuation respectively. We adopt the convention that residuation associates to the right and has lower precedence than conjunction. So the brackets in x · ((x → y) → y) are all necessary while those in (x · z) → (y → z) may all be omitted.

Throughout this paper, we adopt the convention that if P is a structure then P is its universe. If P is a pocrim, the laws [m_i], [o_j] and [t] say that (P; ⊤, ·; ≤) is a partially ordered commutative monoid with the identity as top element. Law [r], the residuation property, says that for any y and z the set {x | x · y ≤ z} is non-empty and has supremum y → z. It is an easy exercise in the use of the axioms to show that x → y is monotonic in y and antimonotonic in x.

A pocrim is said to be bounded if it has a (necessarily unique) annihilator, i.e., an element ⊥ such that for every x we have:

[ann] x · ⊥ = ⊥.

Note that any finite pocrim P is bounded, the annihilator being given by the product of all the elements of P. In a bounded pocrim P, we have that ⊥ = x · ⊥ ≤ x · ⊤ = x for any x, so that (P; ≤) is indeed a bounded ordered set. We write ¬x for x → ⊥ (and give ¬ higher precedence than the binary operators). The basic properties of ¬ collected in Lemma 2.2 have proofs that are easy exercises in the use of the bounded pocrim axioms.

An element x of a bounded pocrim is said to be regular if it satisfies the double-negation identity:

[dne] ¬¬x = x.

For example, ⊤ and ⊥ are regular in any bounded pocrim. A bounded pocrim is said to be involutive if all its elements are regular.
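These definitions are easy to animate on small finite examples. The sketch below is our own illustration (the Python encoding is ours, not the paper's): it represents a finite bounded pocrim by its operation tables, checks the residuation law [r], and lists the regular elements, using the three-element Łukasiewicz chain that reappears later as L3 in Examples 3.7 and 4.4:

```python
from itertools import product

class FinitePocrim:
    """A finite bounded pocrim given by its universe and operation tables."""
    def __init__(self, elems, top, bot, conj, imp):
        self.elems, self.top, self.bot = elems, top, bot
        self.conj, self.imp = conj, imp          # dicts: (x, y) -> element

    def le(self, x, y):                          # x <= y iff x -> y = T
        return self.imp[x, y] == self.top

    def neg(self, x):                            # ¬x = x -> ⊥
        return self.imp[x, self.bot]

    def check_residuation(self):                 # [r]: x·y <= z iff x <= y -> z
        return all(self.le(self.conj[x, y], z) == self.le(x, self.imp[y, z])
                   for x, y, z in product(self.elems, repeat=3))

    def regular_elements(self):                  # the x with ¬¬x = x
        return [x for x in self.elems if self.neg(self.neg(x)) == x]

# The 3-element Lukasiewicz chain on {0, 1/2, 1}:
elems = [0.0, 0.5, 1.0]
conj = {(x, y): max(x + y - 1.0, 0.0) for x in elems for y in elems}
imp = {(x, y): min(1.0 - x + y, 1.0) for x in elems for y in elems}
L3 = FinitePocrim(elems, 1.0, 0.0, conj, imp)

print(L3.check_residuation())    # True: [r] holds
print(L3.regular_elements())     # [0.0, 0.5, 1.0]: every element is regular
```

Since every element of this chain is regular, it is involutive; it will serve as the involutive core of the pocrim P4 introduced in Example 3.7.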
This class of algebras corresponds to the (⊤, ⊥, ⇒, ⊗)-fragment of classical affine logic. See [21] for further information about pocrims in general and involutive pocrims in particular. We will often write δ(x) for ¬¬x.

Lemma 2.3. The following are valid in all bounded pocrims; we will use in particular part 3 (if x ≤ y then δ(x) ≤ δ(y)), part 5 (x · δ(y) ≤ δ(x · y)) and part 6 (δ(x → y) ≤ x → δ(y)).

Proof. Let us prove part 6: using [r] several times, we have that (*) x · ¬y ≤ ¬(x → y), whence δ(x → y) = ¬¬(x → y) ≤ ¬(x · ¬y) = x → ¬¬y = x → δ(y). The proofs of the other parts are similar exercises in the use of the bounded pocrim axioms together with Lemma 2.2 and the monotonicity properties of · and → as necessary.

Example 2.4. There is a unique pocrim B with two elements. It is involutive and provides the standard model for classical Boolean logic.

Given bounded pocrims C and D, the sum C ⊕ D is formed by stacking D on top of C \ {⊤}. Thus the order type of C ⊕ D is the concatenation of the partial orders (C \ {⊤}; ≤) and (D; ≤).

Remark 2.6. As alluded to in Section 1.2, the equational theory of pocrims can be viewed as a logical theory, where a term t is viewed as a formula that holds in a pocrim P iff t = ⊤ under all assignments of variables in t to values in P. Conversely, as x = y in a pocrim iff (x → y) · (y → x) = ⊤, the equational theory can be recovered from the logical theory. In the sequel, we concentrate on the case of bounded pocrims. If C is a class of bounded pocrims, we write Th(C) for the logical theory of C, i.e., the set of all terms t over the signature (⊤, ⊥, ·, →) of a bounded pocrim with variables drawn from the set Var = {v1, v2, . . .}, such that t = ⊤ under any assignment Var → P taking values in a member P of C. It can be shown that a deductive system called intuitionistic affine logic, which we will refer to as AL_i, is sound and complete for the logical theory of all bounded pocrims. AL_i is essentially the usual intuitionistic propositional logic IL without the rule of contraction.

Involutive pocrims

In general, the set N of regular elements of a bounded pocrim is not closed under conjunction and hence is not a subpocrim, and δ does not respect either · or →.

Example 2.7. There is a bounded pocrim U with elements ⊤ > a > b > c > ⊥ whose operations ·, → and δ exhibit both failures. However, in this example, if we define x ·' y = δ(x · y), we find that N = (N; ⊤, ·', →, ⊥) is an involutive pocrim whose residuation agrees with that of U. Dually, we find that the quotient of U by the equivalence relation x θ y iff δ(x) = δ(y) is an involutive pocrim where ·' is induced from · by the monoid congruence. Using the following lemma, we will see that these constructions generalise to all bounded pocrims.

Lemma 2.8. Let the relation θ be defined on P by x θ y iff δ(x) = δ(y). Then θ is a congruence on the monoid (P, ⊤, ·).

Lemma 2.8 justifies the following definition:

Definition 2.9. Given a bounded pocrim P we define the following structures over the signature of a bounded pocrim:

• P_C, the involutive core of P, is (P_C; ⊤, ⊥, ·', →'), where P_C = im(δ) ⊆ P, where ⊤ and ⊥ are as in P and where ·' and →' are defined by x ·' y = δ(x · y) and x →' y = x → y. We write ι : P_C → P for the inclusion.

• P_R, the involutive replica of P, is (P_R; ⊤, ⊥, ·', →'), where P_R is the quotient P/θ of P by the equivalence relation θ defined by x θ y iff δ(x) = δ(y) and where, writing [x] for the equivalence class in P_R of x ∈ P, we define ·' and →' by [x] ·' [y] = [x · y] and [x] →' [y] = [δ(x) → δ(y)]. We write π : P → P_R for the projection.

We will write ≤' for the order relations on P_C and P_R.

Theorem 2.10. Let P be a bounded pocrim. Then:

1. P_C is an involutive pocrim and the inclusion of (P_C, ≤') in (P, ≤) is strictly monotonic (x ≤' y iff x ≤ y).
2. P_R is an involutive pocrim and the projection of (P, ≤) onto (P_R, ≤') is monotonic.
3. P_C and P_R are isomorphic bounded pocrims via the composition of the inclusion ι : P_C → P and the projection π : P → P_R.

Proof. 1.
Noting that →' is the restriction to P_C of →, the claim about strict monotonicity is clear and we can write ≤ for ≤'. The bounded pocrim axioms are then easily proved, with the exception of [m1] (associativity of ·') and [r] (residuation). For associativity, note first that δ(x · δ(w)) = δ(x · w) (by Lemma 2.3, parts 3 and 5, together with w ≤ δ(w)). So x ·' (y ·' z) = δ(x · δ(y · z)) = δ(x · y · z) and similarly (x ·' y) ·' z = δ(x · y · z), giving us the associativity of ·'. For residuation, the right-to-left direction is clear: if x ≤ y →' z = y → z, then x · y ≤ z and then x ·' y = δ(x · y) ≤ δ(z) by Lemma 2.3, part 3. But z = δ(z) since z ∈ P_C = im(δ). Hence, x ·' y ≤ z. For the converse, assume x ·' y ≤ z, i.e. δ(x · y) ≤ z. By Lemma 2.3, part 5, we have that x · δ(y) ≤ δ(x · y), and hence x · δ(y) ≤ z. But y = δ(y) since y ∈ P_C = im(δ), so we have x · y ≤ z, which by residuation in P gives x ≤ y → z. To conclude the proof of part 1, we must show that P_C is involutive, but this is clear since negation in P_C is the restriction to P_C of the negation in P and all the elements of im(δ) are regular, as ¬¬¬x = ¬x holds in any bounded pocrim.

2. The bounded pocrim laws for P_R follow from those for P via Lemma 2.8. Finally, we must show that P_R is involutive. We have ¬[x] = [x] →' [⊥] = [δ(x) → ⊥] = [¬x] (using ¬δ(x) = ¬x), so negation, and hence also double negation, commutes with the projection of P onto P_R. As, by construction, [δ(x)] = [x], P_R is indeed involutive.

3. We must show that π ∘ ι is one-to-one, onto and respects the pocrim operations. To see that π ∘ ι is one-to-one, let x, y ∈ P_C, so that x = δ(x) and y = δ(y), and assume π(ι(x)) = π(ι(y)); then δ(x) = δ(y) and hence x = y. Surjectivity holds because [x] = [δ(x)] = π(ι(δ(x))) for any x ∈ P. As for the operations, the case of residuation amounts to the claim that π(ι(x →' y)) = π(ι(x)) →' π(ι(y)), i.e., that δ(x →' y) = δ(x → y), which holds by definition.

Remark 2.11. For any bounded pocrim P, ι : P_C → P is a homomorphism of the (⊤, ⊥, →)-reduct of P, and π : P → P_R is a homomorphism of the (⊤, ⊥, ·)-reduct of P. In general, however, neither map is a pocrim homomorphism (see the discussion of the bounded pocrim U in Example 2.7). As P_C and P_R are isomorphic pocrims, one could focus attention on one of the two constructions, and several authors work solely with their analogue of P_C. We prefer to have both constructions available, since in some contexts it is convenient for the (⊤, ⊥, →)-structure to be respected, while in other contexts it is more convenient for the (⊤, ⊥, ·)-structure to be respected (cf. the proofs of Theorems 3.5 and 3.6).

Generalised and Double Negation Semantics

Beginning with Kolmogorov [19], logicians have studied double negation translations (or negative translations) that represent classical logic in intuitionistic logic. Kolmogorov's translation inductively replaces every subformula of a formula by its double negation. Other authors have devised more economical translations: Gödel's translation [17] applies double negation to the right-hand operands of implications and at the outermost level; Gentzen's translation [15] applies double negation to atomic formulas only; and Glivenko's translation [16] is the most economical of all and just applies double negation once at the outermost level. In this section we undertake an algebraic study of these translations.

Generalised semantics

We wish to undertake an algebraic analysis of translations such as the various double negation translations. We will view the translations as variant semantics and so we need a framework to compare different semantics. Typically, these translations are defined by recursion over the syntactic structure of a term, sometimes composed with an additional top-level transformation. See, for example, [11], where top-level transformations are handled by redefining the provability relation.
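Before the categorical formulation, it may help to see the four translations written out as recursions over terms. The sketch below is our own illustration: the clause-by-clause definitions follow the informal descriptions above, and since the paper itself works semantically, this should be read as one plausible syntactic rendering rather than the paper's official definitions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Bot:
    pass

@dataclass(frozen=True)
class Conj:
    left: object
    right: object

@dataclass(frozen=True)
class Imp:
    left: object
    right: object

def neg(a):                      # ¬A abbreviates A -> ⊥
    return Imp(a, Bot())

def dneg(a):                     # the double negation δ(A) = ¬¬A
    return neg(neg(a))

def kolmogorov(a):
    """Kolmogorov: double-negate every subformula."""
    if isinstance(a, (Var, Bot)):
        return dneg(a)
    if isinstance(a, Conj):
        return dneg(Conj(kolmogorov(a.left), kolmogorov(a.right)))
    return dneg(Imp(kolmogorov(a.left), kolmogorov(a.right)))

def godel_aux(a):
    """Auxiliary map for Goedel: double-negate right operands of ->."""
    if isinstance(a, (Var, Bot)):
        return a
    if isinstance(a, Conj):
        return Conj(godel_aux(a.left), godel_aux(a.right))
    return Imp(godel_aux(a.left), dneg(godel_aux(a.right)))

def godel(a):
    """Goedel: the auxiliary map plus one outermost double negation."""
    return dneg(godel_aux(a))

def gentzen(a):
    """Gentzen: double-negate atomic formulas only."""
    if isinstance(a, Var):
        return dneg(a)
    if isinstance(a, (Bot,)):
        return a
    if isinstance(a, Conj):
        return Conj(gentzen(a.left), gentzen(a.right))
    return Imp(gentzen(a.left), gentzen(a.right))

def glivenko(a):
    """Glivenko: a single double negation at the outermost level."""
    return dneg(a)

print(glivenko(Var("v1")))
# Imp(left=Imp(left=Var(name='v1'), right=Bot()), right=Bot())
```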
Here, rather than working with syntax, we prefer to think of a syntactic term t as its denotation viewed as a family of maps α ↦ x, where x ranges over the universe of a bounded pocrim P and α is an assignment of values in P to the free variables of t. The modularity properties of a translation scheme which are needed for our proofs (see, for example, Theorem 3.11) are then captured by the following definition:

Definition 3.1. Let Poc⊥ be the category of bounded pocrims and homomorphisms and let Set be the category of sets. Given any set X, let H_X : Poc⊥ → Set be the functor that maps a pocrim P to Hom_Set(X, P), i.e., the set of all functions from X to P, and maps a homomorphism h : P → Q to f ↦ h ∘ f : Hom_Set(X, P) → Hom_Set(X, Q). Now let Ass = H_Var and Sem = H_L, where L is the set of all terms over the signature (⊤, ⊥, ·, →) of a bounded pocrim with variables drawn from the set Var = {v1, v2, . . .}. We define a semantics to be a natural transformation µ : Ass → Sem.

So given a bounded pocrim P, Ass(P) denotes the set of assignments α : Var → P, while Sem(P) denotes the set of all possible functions s : L → P. A semantics µ is a family of functions µ_P indexed by bounded pocrims P such that µ_P : Ass(P) → Sem(P) and such that for any homomorphism f : P → Q the evident naturality square commutes.

The standard semantics µ^S is the one that simply uses the given assignment α : Var → P to give values to the variables in a term in L and then calculates its value, interpreting the operations in the obvious way. The Kolmogorov translation corresponds to a semantics µ^Kol defined like µ^S, but applying double negation to everything in sight. The Gödel translation corresponds to a semantics that applies double negation to the right operands of residuation and at the outermost level; we define it using an auxiliary semantics µ*. The Gentzen and Glivenko translations correspond to semantics obtained by composing the standard semantics with double negation:

µ^Gen = µ^S ∘ δ^Var  and  µ^Gli = δ^L ∘ µ^S,

where δ^X denotes the natural transformation from H_X = Hom_Set(X, ·) to itself with δ^X_P = f ↦ δ ∘ f.

Double negation semantics

Definition 3.2 (Double negation semantics). Let C be a class of bounded pocrims. We say that a semantics µ is a double negation semantics for C if the following conditions hold:

(DNS1) If P ∈ C is involutive, then µ_P = µ^S_P.
(DNS2) Given a term t, if, for every involutive Q ∈ C and every β : Var → Q, we have µ^S_Q(β)(t) = ⊤, then, for every P ∈ C and every α : Var → P, we have µ_P(α)(t) = ⊤.
(DNS3) δ^L_P ∘ µ_P = µ_P, for every P ∈ C.

Note that these conditions are trivially true if C is empty. If C is nonempty but does not contain any involutive pocrim, the conditions only hold if µ_P(α)(t) = ⊤ for every P ∈ C, assignment α : Var → P and term t.

Remark 3.3. Subject to one proviso, the above definition can be seen to agree with the usual syntactic definition of a double negation translation due to Troelstra, as summarised in Section 1.2. The proviso is that we must have Th(I) = Th(C) + [dne], where I comprises the involutive pocrims in C and where Th(C) + [dne] denotes the smallest set of terms that contains Th(C) and is closed under rewriting with equations that either hold in every member of C or have one of the forms ¬¬x = x or x = ¬¬x.

Definition 3.4. We say a class C of bounded pocrims is inv-closed if whenever P ∈ C, then there is Q ∈ C such that Q is isomorphic to the involutive core, or equivalently the involutive replica, of P.

Theorem 3.5.
The Kolmogorov semantics, µ^Kol, is a double negation semantics for any inv-closed class C of bounded pocrims.

Proof. (DNS1) and (DNS3) are easy to verify. As for (DNS2), let P ∈ C and let t be a term such that µ^S_Q(β)(t) = ⊤ for every assignment β : Var → Q whenever Q is involutive. Then, if α : Var → P, it is easy to see by induction on the structure of any term s that the Kolmogorov semantics of s in P under the assignment α agrees with the standard semantics of s on P_C, the involutive core of P, under the assignment δ ∘ α:

µ^Kol_P(α)(s) = µ^S_{P_C}(δ ∘ α)(s).

(For the inductive step for residuation use the identity δ(δ(x) → δ(y)) = δ(x) → δ(y), which follows from Lemma 2.3, parts 3 and 6.) Now δ ∘ α is an assignment into the involutive pocrim P_C, which by assumption is isomorphic to some Q ∈ C via some isomorphism φ : P_C → Q. Hence, using our hypothesis on involutive members of C and the fact that µ^S is a natural transformation, we have

φ(µ^S_{P_C}(δ ∘ α)(t)) = µ^S_Q(φ ∘ δ ∘ α)(t) = ⊤,

so that µ^Kol_P(α)(t) = µ^S_{P_C}(δ ∘ α)(t) = ⊤, completing the proof of (DNS2).

Theorem 3.6. The Gödel semantics, µ^Göd, is a double negation semantics for any inv-closed class C of bounded pocrims.

Proof. We follow a similar line to the proof of Theorem 3.5, using the involutive replica in place of the involutive core. Again (DNS1) and (DNS3) are easy. For (DNS2), given a bounded pocrim P, we see by induction on the structure of a term s that for any assignment α : Var → P, we have

π(µ^Göd_P(α)(s)) = µ^S_{P_R}(π ∘ α)(s),

where π : P → P_R is the natural projection onto the involutive replica. Hence if µ^S_Q(β)(t) = ⊤ for every assignment β : Var → Q where Q is involutive, then, using our hypothesis on involutive members of C and the fact that µ^S is a natural transformation, we have

φ(π(µ^Göd_P(α)(t))) = µ^S_Q(φ ∘ π ∘ α)(t) = ⊤,

where φ : P_R → Q is an isomorphism of P_R with some involutive Q ∈ C. Now π(x) = π(y) iff δ(x) = δ(y), so δ(µ^Göd_P(α)(t)) = δ(⊤) = ⊤; but clearly δ^L ∘ µ^Göd = µ^Göd, and we have proved (DNS2).

We will now exhibit classes of bounded pocrims where the Gentzen and Glivenko semantics fail to give double negation semantics. These classes involve the pocrims defined in the following examples.

Example 3.7. The pocrim P4 comprises the chain ⊤ > p > q > ⊥. In P4, δ(q) = p, so P4 is not involutive. However, the involutive core of P4 is actually a subpocrim: namely the subpocrim with universe {⊥, p, ⊤}, which (in anticipation of Example 4.4) we will refer to as L3.

Example 3.8. Consider the pocrim Q6 with six elements ⊤ > p > q > r > s > ⊥ and with ·, → and δ as shown in the following tables:

·  | ⊤  p  q  r  s  ⊥
⊤  | ⊤  p  q  r  s  ⊥
p  | p  p  r  r  s  ⊥
q  | q  r  r  r  ⊥  ⊥
r  | r  r  r  r  ⊥  ⊥
s  | s  s  ⊥  ⊥  ⊥  ⊥
⊥  | ⊥  ⊥  ⊥  ⊥  ⊥  ⊥

→  | ⊤  p  q  r  s  ⊥
⊤  | ⊤  p  q  r  s  ⊥
p  | ⊤  ⊤  q  q  s  ⊥
q  | ⊤  ⊤  ⊤  p  s  s
r  | ⊤  ⊤  ⊤  ⊤  s  s
s  | ⊤  ⊤  ⊤  ⊤  ⊤  q
⊥  | ⊤  ⊤  ⊤  ⊤  ⊤  ⊤

x    | ⊤  p  q  r  s  ⊥
δ(x) | ⊤  ⊤  q  q  s  ⊥

Q6 is not involutive, as δ(x) = x fails for x ∈ {p, r}. In Q6, double negation is an implicative homomorphism: ¬¬x → ¬¬y = ¬¬(x → y) for all x, y. Double negation is not quite a conjunctive homomorphism in Q6: ¬¬x · ¬¬y = ¬¬(x · y) unless {x, y} ⊆ {q, r}, in which case ¬¬x · ¬¬y = r < q = ¬¬(x · y). The involutive replica of Q6 turns out to be a quotient pocrim: as indicated by the block decomposition of the above operation tables, there is a homomorphism h : Q6 → Q4, where Q4 is the involutive replica of Q6 and comprises the chain ⊤ > u > v > ⊥ with operation tables as follows:

·  | ⊤  u  v  ⊥
⊤  | ⊤  u  v  ⊥
u  | u  u  ⊥  ⊥
v  | v  ⊥  ⊥  ⊥
⊥  | ⊥  ⊥  ⊥  ⊥

→  | ⊤  u  v  ⊥
⊤  | ⊤  u  v  ⊥
u  | ⊤  ⊤  v  v
v  | ⊤  ⊤  ⊤  u
⊥  | ⊤  ⊤  ⊤  ⊤

The kernel congruence of h has equivalence classes {⊤, p}, {q, r}, {s} and {⊥}, which are mapped by h to ⊤, u, v, ⊥, respectively, in Q4.

Theorem 3.9. (i) The Gentzen semantics µ^Gen is not a double negation semantics for any class of bounded pocrims that contains the pocrim Q6 of Example 3.8.
(ii) The Glivenko semantics µ^Gli is not a double negation semantics for any class of bounded pocrims that contains the pocrim P4 of Example 3.7.

Proof. By the remarks after Definition 3.2, we can assume that the class of bounded pocrims contains at least one involutive pocrim in both cases.

(i): We show that (DNS2) does not hold for µ^Gen in Q6. Let x, y ∈ Var and let t be the formula δ(x · y) → x · y. Clearly, µ^S_P(α)(t) = ⊤ for any involutive pocrim P and any α : Var → P. Thus (DNS2) requires µ^Gen_{Q6}(α)(t) = ⊤ for any α : Var → Q6. However, if α(x) = α(y) = r, we have δ(r) = q, so

µ^Gen_{Q6}(α)(t) = δ(q · q) → q · q = δ(r) → r = q → r = p ≠ ⊤.

(ii): we argue as in the proof of (i), but taking t to be δ(x) → x. Then, if α(x) = q, we have

µ^Gli_{P4}(α)(t) = δ(δ(q) → q) = δ(p → q),

and since q ≤ p → q < ⊤, p → q is either p or q, so that δ(p → q) = p ≠ ⊤ in either case.

Theorem 3.10. Let C1 comprise the two bounded pocrims P4 and L3 of Example 3.7 and let C2 comprise the two bounded pocrims Q6 and Q4 of Example 3.8. Then:

(i) The Gentzen semantics, µ^Gen, is a double negation semantics for C1, but the Glivenko semantics, µ^Gli, is not.
(ii) The Glivenko semantics, µ^Gli, is a double negation semantics for C2, but the Gentzen semantics, µ^Gen, is not.

Proof. (i): By Theorem 3.9, µ^Gli is not a double negation semantics for C1. As for µ^Gen, (DNS1) is easily verified. For (DNS3) and (DNS2), note that for any α : Var → P4, we have

µ^Gen_{P4}(α) = µ^S_{P4}(δ ∘ α) = µ^S_{L3}(δ ∘ α),

where in the last expression we have identified L3 with the bounded subpocrim of P4 whose universe is im(δ). Thus evaluation under µ^Gen with an assignment in any bounded pocrim in C1 is equivalent to evaluation under the standard semantics, µ^S, with an assignment in the involutive pocrim L3.

(ii): By Theorem 3.9, µ^Gen is not a double negation semantics for C2. As for µ^Gli, (DNS1) and (DNS3) are immediate from the definition of µ^Gli. For (DNS2), let t be a formula such that µ^S_{Q4}(α)(t) = ⊤ for any assignment α : Var → Q4. As Q4 is the only involutive pocrim in C2, we must show that µ^Gli_P(α)(t) = ⊤ for P ∈ C2 under any assignment α : Var → P. This is easy to see for P = Q4, since the Glivenko semantics is the double negation of the standard semantics and Q4 is involutive. As for P = Q6, let α : Var → Q6 be given. As discussed in Example 3.8, there is a quotient projection h : Q6 → Q4, so, as µ^S is a natural transformation, we have

h(µ^S_{Q6}(α)(t)) = µ^S_{Q4}(h ∘ α)(t) = ⊤,

by the assumption on t. So µ^S_{Q6}(α)(t) ∈ h^{-1}(⊤) = {⊤, p}. As δ(⊤) = δ(p) = ⊤, we can conclude µ^Gli_{Q6}(α)(t) = δ(µ^S_{Q6}(α)(t)) = ⊤.

Theorem 3.11. There are extensions of intuitionistic affine logic AL_i in which the syntactic Gentzen translation meets Troelstra's criteria for a double negation translation but the syntactic Glivenko translation does not, and vice versa.

Hoops

If x and y are elements of a pocrim, x · (x → y) is a lower bound for x and y, as is y · (y → x). Pocrims in which the two lower bounds coincide (so that x · (x → y) is the meet of x and y) turn out to have many pleasant properties, motivating the following definition.

Definition 4.1 (Hoop). A hoop is a pocrim satisfying the law

[cwc] x · (x → y) = y · (y → x).

The following lemma provides some useful characterizations of hoops.

Lemma 4.2. For any pocrim P, the following are equivalent:

1. P is a hoop;
2. P is naturally ordered, i.e., for every x, y ∈ P such that x ≤ y, there is z ∈ P such that x = y · z;
3. for every x, y ∈ P such that x ≤ y, x = y · (y → x);
4. for all x, y ∈ P, x · (x → y) ≤ y · (y → x).

Proof. 1 ⇒ 2: Assume that P satisfies x · (x → y) = y · (y → x) and that x, y ∈ P satisfy x ≤ y, i.e., x → y = ⊤. Taking z = y → x, we have x = x · ⊤ = x · (x → y) = y · (y → x) = y · z.

2 ⇒ 3: Assume that P is naturally ordered and that x, y ∈ P satisfy x ≤ y. Then x = y · z for some z. By the residuation property, we have z ≤ y → x, hence x = y · z ≤ y · (y → x) ≤ x and so x = y · (y → x).
3 ⇒ 4: Assume that P satisfies x = y · (y → x) whenever x, y ∈ P and x ≤ y. Given arbitrary x, y ∈ P, let w = x · (x → y). Then w ≤ y, so by the assumption w = y · (y → w). Since also w ≤ x, monotonicity gives y → w ≤ y → x, whence x · (x → y) = w ≤ y · (y → x), as required.

4 ⇒ 1: exchange x and y in 4 and use the fact that ≤ is antisymmetric.

The axiom [cwc] is often referred to as the axiom of divisibility in the literature, for reasons which become clear if one uses the alternative notation x/y for y → x, so that the formula of part 3 of Lemma 4.2 reads x = y · (x/y).

Involutive hoops

Example 4.3. We write I for the involutive hoop whose universe is the unit interval [0, 1] and whose operations are defined by

⊤ = 1,  x · y = max(x + y − 1, 0),  x → y = min(1 − x + y, 1).

I provides an infinite model of classical Łukasiewicz logic (which we refer to as LL_c).

Example 4.4. For n ≥ 2, let L_n be the subhoop of I generated by 1/(n−1). It is easy to see that the universe of L_n is L_n = {0, 1/(n−1), 2/(n−1), . . . , (n−2)/(n−1), 1}. The hoops L_n are involutive and provide natural finite models of classical Łukasiewicz logic LL_c.

A hoop H is said to be Wajsberg (see [1,13]) if it satisfies

(x → y) → y = (y → x) → x.

Lemma 4.5. A bounded hoop is Wajsberg if and only if it is involutive.

Proof. In a bounded Wajsberg hoop H we have ¬¬x = (x → ⊥) → ⊥ = (⊥ → x) → x = ⊤ → x = x, therefore H is involutive. For the other direction, assume H is an involutive hoop and let x, y ∈ H. Since H is involutive, it is enough to show that ¬((x → y) → y) is symmetric in x and y, which one may prove using Lemma 4.2 and Lemma 2.2, part 3, to obtain

¬((x → y) → y) = (x → y) · (y → x) · ¬(y · (y → x)),

where the application of Lemma 4.2 uses that ¬y ≤ y → x. By [cwc], the last expression is symmetric in x and y.

There are, however, unbounded Wajsberg hoops, for instance:

Example 4.6. Let O be the unbounded hoop whose universe is the half-open interval (0, 1] and whose operations are

⊤ = 1,  x · y = xy,  x → y = min(y/x, 1).

O is easily seen to be a Wajsberg hoop because (x → y) → y = max(x, y).

Example 4.7. Apart from L3 there is one other pocrim with 3 elements, namely G3 = B ⊕ B. G3 is the first non-Boolean example in the sequence of idempotent pocrims defined by the equations G2 = B and G_{n+1} = G_n ⊕ B. G_n can be taken to be a set of n real numbers {1, x1, x2, . . . , x_{n−2}, 0} with ⊤ = 1 > x1 > x2 > . . . > x_{n−2} > 0 = ⊥ and with operations defined by

x · y = min(x, y),  x → y = 1 if x ≤ y, and x → y = y otherwise.

The G_n are finite Heyting algebras. They were used by Gödel to prove that intuitionistic propositional logic requires infinitely many truth values [18]. In G_n, ¬x = ⊥ unless x = ⊥, so for n > 2, G_n is not involutive. It is easy to check from the definitions that C ⊕ D is a hoop iff both C and D are hoops.

Example 4.8. It can be shown that there are 7 pocrims with 4 elements: B × B, L4, G4, B ⊕ L3, L3 ⊕ B, P4 and Q4, where P4 and Q4 are as described in Examples 3.7 and 3.8 respectively. P4 and Q4 are the smallest pocrims that are not hoops: P4 is not a hoop since it is not naturally ordered: there is no z with p · z = q. Likewise Q4 is not a hoop, because there is no z with u · z = v.

De Morgan identities in hoops

In this section we prove two De Morgan identities for conjunction and residuation in bounded hoops. The proof of the identity for conjunction is elementary. The identity for residuation is proved using an indirect method captured in the following lemma.

Lemma 4.9. Here in each case S is subdirectly irreducible, Wajsberg and generated by the x_i ∈ S. S is not necessarily bounded in cases (i) and (ii).

Proof. The proof uses Birkhoff's theorem (e.g., see [6, Theorem II.8.6]) to show that H is isomorphic to a subdirect product of subdirectly irreducible hoops and then uses the characterization of subdirectly irreducible hoops due to Blok and Ferreirim [1, Theorem 2.9]. Details are left to the reader.
Note that in case (i) of the lemma H is isomorphic to S and so is a bounded Wajsberg hoop and hence involutive.

Example 4.10. If 0 < k ∈ N, the identity ¬x^k → δ(x) → x = ⊤ clearly holds in any involutive hoop. It also holds in any hoop of the form B ⊕ S (since in such a hoop, either x = ⊥ or ¬x^k = ⊥). This covers cases (i) and (iii) in Lemma 4.9. As the identity has only one variable, there is nothing to prove in case (ii). Hence, ¬x^k → δ(x) → x = ⊤ holds in any bounded hoop.
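To make Example 4.10 concrete, here is a small executable check of the identity in the two families of finite hoops defined above (our own illustration; the function names and the Python encoding are ours):

```python
def lukasiewicz_hoop(n):
    """The involutive hoop L_n of Example 4.4, on {0, 1/(n-1), ..., 1}."""
    elems = [i / (n - 1) for i in range(n)]
    return elems, (lambda x, y: max(x + y - 1.0, 0.0)), (lambda x, y: min(1.0 - x + y, 1.0))

def godel_chain(n):
    """The idempotent Heyting chain G_n of Example 4.7."""
    elems = [i / (n - 1) for i in range(n)]
    return elems, min, (lambda x, y: 1.0 if x <= y else y)

def identity_holds(elems, conj, imp, k=2):
    """Check ¬x^k -> (δ(x) -> x) = ⊤ for every x (Example 4.10, with ⊤ = 1)."""
    neg = lambda x: imp(x, 0.0)
    delta = lambda x: neg(neg(x))
    def power(x, k):
        r = 1.0
        for _ in range(k):
            r = conj(r, x)
        return r
    return all(imp(neg(power(x, k)), imp(delta(x), x)) == 1.0 for x in elems)

for n in (3, 4, 5):
    print(n, identity_holds(*lukasiewicz_hoop(n)), identity_holds(*godel_chain(n)))
# Each line prints "n True True": the identity holds both in the involutive
# chains L_n and in the non-involutive Heyting chains G_n (n > 2).
```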
Do Monetary Incentives Have a Stronger Influence on Workers' Productivity Than Other Forms of Motivational Incentives?

Lilian Daramola, Human Resources Officer, Oxpeach Strategy LTD, Lagos, Nigeria <EMAIL_ADDRESS>

Abstract

An incentive is a reward given to a person to stimulate his or her actions in a desired direction. It has motivational power and is widely utilized by small and large organizations to motivate employees. These incentives can be either monetary or non-monetary. The aim of this study is to find out whether monetary incentives have a stronger influence on workers' productivity than other forms of motivational incentives, using a case study of BORBDA. To achieve this, a questionnaire was designed, administered, and the responses analyzed using the Chi-Square test. The study revealed that monetary incentives do not exert a stronger influence on workers' productivity than other motivational factors. In view of this, money is not the only motivating factor with a strong influence on workers' productivity, as there are other forms of motivational incentives for employees. Heads of organizations should look inward for better incentives to motivate their employees without necessarily using monetary incentives.

Background to Study

Motivation can be defined as the set of factors that cause people to behave in certain ways (Schwartz, 2006; Daramola, 2019). It is derived from the term motive, which is a reason for doing something (Armstrong, 2008); the reason can be either internal (intrinsic) or external (extrinsic) (Herzberg et al., 1957). Motivation is among the key concerns of organizations in the modern business environment, as it has been identified as critical in achieving business goals and objectives. Among the factors that determine employee motivation are satisfaction, recognition, appreciation, inspiration and compensation (Bowen, 2000). Clegg and Birch (2002) argue that the thought of an incentive is in itself motivational; in fact, most motivation comes from the anticipation of the incentive rather than its delivery.

An incentive is a reward given to a person to stimulate his or her actions in a desired direction. It has motivational power and is widely utilized by small and large organizations to motivate employees. These incentives can be either monetary or non-monetary. Monetary incentives are financial incentives used mostly by employers to motivate employees towards meeting their targets. Money is a symbol of power, status and respect; it plays a crucial role in meeting the social, security and physiological needs of a person. However, money seems not to be a motivating factor once the psychological and security needs are met. This is when it becomes a maintenance factor, as put by Herzberg.

Certain problems of inadequate motivation do, however, arise concerning individuals who come into the work situation with differences in expectation, behaviour and outlook. These problems of inadequate individual motivation may be divided into two categories. Firstly, the inability of certain individuals to be motivated may stem from a deficiency in their personality. For such people, the desire to avoid failure may be too strong while, paradoxically, the motive to produce positive results may be too weak. This could produce a general resistance to achievement-oriented activity that would need to be overcome by other extrinsic modes of motivation if there is to be any spur to achievement-oriented activity at all.
Secondly, even when the achievement motive is relatively strong, the challenges before the individual worker may prove to be inadequate or too difficult; whichever of these applies to the individual worker will usually manifest itself in different ways, such as lack of enthusiasm or premature surrender (Bryans and Crouin, 2005). In spite of all these apparent attendant problems of motivation and productivity, every organisation necessarily seeks means of ensuring continuous productivity geared towards the accomplishment of organisational goals. The organizational system under study cannot be said to be different in any way in terms of producing the result for which it was set up. In all these processes, public organisations, and indeed the Benin Owena River Basin Development Authority (BORBDA), have a significant impact in Ondo State. The aim of this study is to find out whether monetary incentives have a stronger influence on workers' productivity than other forms of motivational incentives, using a case study of BORBDA.

2.1 Monetary Incentives

When creating a reward program to motivate employees, decision makers and company owners need to understand that the reward or incentive neither guarantees quality output nor loyalty; it is just a bonus that encourages workers to meet their goals without compromising on quality. Bhasin (2017) explains some of the common examples of monetary incentives:

Piece Rates: This is mostly used in production industries where employees are given a certain amount of money for each produced piece. Piece rates motivate employees to work harder and more quickly to produce more pieces, as each has a monetary incentive attached to it. However, when issuing piece rates, production supervisors must ensure quality is not compromised.

Pay Raise: These are mostly offered to employees who have worked in a company for a considerably longer period of time. Some companies also give pay rises to employees who have reached a certain level of production or those who have completed the required training programs. Some offer annual salary increments to loyal workers.

Bonuses: Another good form of monetary incentive is the issuance of bonuses. These might be bonuses to individuals who have met their sales quotas, or bonuses to teams that have completed their projects on time or have surpassed their production targets. Some companies give yearly Christmas bonuses to long-serving employees as a way of rewarding loyalty.

Sharing Profits: This is another excellent way of rewarding employees, in which a small portion of profit is shared with employees based on their position, duration with the company and input in attaining the overall set goals. Profit sharing is preferred by most companies since it gives employees a sense of belonging and ownership.

Contests: These are mostly offered to sales and production personnel. An additional prize or bonus is given to the employee or team with the highest production level. Employers can also offer cash rewards to employees with the best suggestions, to encourage more input in terms of positive ideas that improve sales, production or performance.

Apart from the above listed forms of monetary incentives, others may include retirement and education funds, off-duty payments and payments for different employee training programs, among others.
Models Supporting Employee Motivation

The fields of employee motivation and employee performance are solidly grounded in the research of Maslow, Taylor, and Herzberg, to name just a few. The concepts of motivation and performance are constructs within the larger organizational behaviour model. While each of these constructs can be reviewed on its own, employee motivation is linked closely to employee performance. By conducting the search in this manner, the resultant articles were specific case studies of employee motivation in various organizations. The resultant case studies looked at a range of topics on both employee motivation and employee performance and how these constructs can be connected. One particular study looked specifically at "the followers" of an organization and what key factors a leader needs to know about the various types of followers. The case studies in this review expand upon the work of Maslow, in brief, and Herzberg.

In 'Beyond the Fringe', Simms discusses how various organizations utilize tailored versions of "non-cash rewards" as employee incentives. Simms suggests that Herzberg's view of salary as not being a motivator holds. The ability to hold up an incentive that doesn't get absorbed by the employee's monthly bills has a larger effect on employee motivation. He also suggests it may be more acceptable to boast about a special award or party than about a salary raise. Simms then goes on to expand the discussion of non-cash rewards such as flex time, employee of the month, and tailored goal incentives. Simms argues it is important for employers to communicate these benefits to employees because many employees don't understand their total compensation package. By communicating the total package, the employer reinforces its commitment to the employees and helps to motivate them. This motivation leads to greater employee satisfaction and performance (Simms, 2007).

The case study of the Harrah's Entertainment sales teams lays out the use of team incentives to increase sales across the various branches of the Harrah's Entertainment family of products. However, at the core of the incentive packages that Jakobson discusses is the use of merchandise awards. Jakobson states that merchandise awards are even more effective than top seller trips. Harrah's also uses simple employee motivation tactics such as recognition of the top sales teams at weekly and monthly sales meetings (Jakobson, 2007).

Whiteling (2007) looks at the cases of Reuters and supermarket giant Sainsbury's to show how important it is to create a culture where employees become directly involved in suggestions for change. By creating a culture where employee input is valued and utilized, the changes faced by the organization are better understood and receive the support of the employees. This also has the side effect of creating employee motivation to support and accomplish the organization's goals and change efforts (Whiteling, 2007).

Silverman utilizes a similar strategy to create a high-performance workforce. Silverman suggests keeping employees engaged by working with storytelling. Employers can systematically ask employees to tell their stories about good or not-so-good situations. In this way, an employee/employer relationship can be forged which can help foster mutual support and idea sharing (Silverman, 2006). Similar to Whiteling, Silverman suggests that the organization's culture needs to be developed around the concept of storytelling.
Employees need to feel their stories are being heard, understood, and valued by those requesting the stories. By forging these relationships, the employee feels valued by the employer, supervisor, and organization as a contributor. This value translates into higher work performance and a greater stake in the organization (Silverman, 2006; Whiteling, 2007).

Sharbrough's (2006) study looks at the correlations between a leader's use of Motivating Language (ML) and employee job satisfaction and the perception of a supervisor's effectiveness. In both cases, there was a statistically significant correlation in this study between a leader's use of ML and employee job satisfaction and the perception of a supervisor's effectiveness. This correlation can be utilized by organizations to measure a leader's use of ML and determine levels of employee satisfaction, as well as the perceived effectiveness of a supervisor.

Kellerman (2007) has expanded the work of Zaleznik, Kelley, and Chaleff to create what he calls a level of engagement to classify the followers of an organization. This employee continuum ranges from "feeling and doing absolutely nothing" to "being passionately committed and deeply involved." In this way, a leader can assess their subordinates and tailor a leadership approach to maximize the effect a particular effort will have on employee motivation.

Abraham Maslow's Theory

Abraham Maslow (1954) attempted to synthesize a large body of research related to human motivation. Prior to Maslow, researchers generally focused separately on such factors as biology, achievement, or power to explain what energizes, directs, and sustains human behavior. Maslow posited a hierarchy of human needs based on two groupings: deficiency needs and growth needs. Within the deficiency needs, each lower need must be met before moving to the next higher level. Once each of these needs has been satisfied, if at some future time a deficiency is detected, the individual will act to remove the deficiency.

Maslow's theory is difficult to test empirically and has been subject to various interpretations by different writers. Reviews of the need hierarchy model suggest little clear or consistent support for the theory and raise doubts about the validity of the classification of basic human needs. However, it is important to stress that Maslow himself recognized the limitations of his theory and did not imply that it should command widespread empirical support. He suggested only that the theory should be considered as a framework for future research and points out: 'it is easier to perceive and to criticize the aspects in motivation theory than to remedy them.'

Although Maslow did not originally intend that the need hierarchy should necessarily be applied to the work situation, it still remains popular as a theory of motivation at work. Despite criticisms and doubts about its limitations, the theory has had a significant impact on management approaches to motivation and the design of organizations to meet individual needs. It is a convenient framework for viewing the different needs and expectations that people have, where they are in the hierarchy, and the different motivators that might be applied to people at different levels. The work of Maslow has drawn attention to a number of different motivators and stimulated study and research. The need hierarchy model provides a useful base for the evaluation of motivation at work.
Frederick Herzberg's Theory of Motivation

Frederick Herzberg (1923–) had close links with Maslow and believed in a two-factor theory of motivation. He argued that there were certain factors that a business could introduce that would directly motivate employees to work harder (motivators). However, there were also factors that would de-motivate an employee if not present but would not in themselves actually motivate employees to work harder (hygiene factors).

Motivators are more concerned with the actual job itself, for instance, how interesting the work is and how much opportunity it gives for extra responsibility, recognition and promotion. Hygiene factors are factors which 'surround the job' rather than the job itself. For example, a worker will only turn up to work if a business has provided a reasonable level of pay and safe working conditions, but these factors will not make him work harder at his job once he is there. Importantly, Herzberg viewed pay as a hygiene factor, which is in direct contrast to Taylor, who viewed pay, and piece rate in particular, as the prime motivator.

Herzberg believed that businesses should motivate employees by adopting a democratic approach to management and by improving the nature and content of the actual job through certain methods. Some of the methods managers could use to achieve this are:

Job enlargement: workers are given a greater variety of tasks to perform (not necessarily more challenging), which should make the work more interesting.

Job enrichment: workers are given a wider range of more complex, interesting and challenging tasks surrounding a complete unit of work. This should give a greater sense of achievement.

Empowerment: delegating more power to employees to make their own decisions over areas of their working life.

Incentives as Motivational Tools

In order to keep workers motivated, their needs must be addressed as project goals are reached. Satisfying workers' needs can be viewed as distributing incentives when certain objectives are achieved. Employees have needs that they want met and employers have goals that they want to reach, and they can work together as a team to satisfy the wants of both the employees and their employers. Workers who are motivated to help reach the goals of the employer, and do so, should be recognized with an incentive or reward. When considering what type of incentives to use, there are two types to be aware of: extrinsic and intrinsic. Extrinsic rewards are external rewards that occur apart from work, such as money and other material things. On the other hand, intrinsic rewards are internal rewards that a person feels when performing a job, so that there is a direct and immediate connection between work and reward. The power of incentives is immense and pervasive, which is all the more reason they require careful management (McKenzie and Lee, 1998).

Heap (1987) has summarized a list of the advantages and disadvantages associated with financial incentives. Many construction companies have already considered that there can be advantages and disadvantages to developing an incentive program. A study by Sanders and Thompson (1999) showed that companies that keep their program simple, with the main objective of the program in mind (to benefit the project in reference to cost, schedule, customer service, environment and quality), also determine the success of any incentive program. Incentives are usually defined as tangible rewards that are given to those who perform at a given level.
Such rewards may be available to workers, supervisors, or top managers. Whether the incentive is linked directly to such items as safety, quality or absenteeism, the reward follows successful performance (McKenzie and Lee, 1998). Many companies feel that pocket money is no longer a good motivator; others contend that small rewards such as toasters and blenders do not motivate. Many companies therefore offer profit-sharing plans, or have abandoned monetary rewards and instead offer lavish trips to such places as Europe and some Caribbean islands. Because of the expense, these programs require careful monitoring. Some companies merely reward good producers with an extra day off with pay; others reward top performers with better working conditions. Since incentive programs aim to increase workers' performance levels, the measure used to decide whether a reward has been earned should be carefully set. The performance level must be attainable or workers won't try to reach the goal. That fact underscores the usefulness of having workers themselves contribute their ideas about what constitutes a reasonable level of performance. An incentive scheme may also fail if the measure of success ignores quality or safety, and an obvious problem exists when an incentive is applied to work that is machine-paced. Incentives should be clearly linked to performance, but not all incentives can be clearly tied to objective criteria. Some incentive rewards are issued on the basis of a subjective assessment by a superior of the merit of particular workers. This method, in particular, may cause conflicts between workers, especially those who do not win rewards (Turkson, 2002). Every organization is concerned with what should be done to achieve sustained high levels of performance through its workforce. This means giving close attention to how individuals can best be motivated through means such as incentives, rewards and leadership, and to the organizational context within which they carry out their work (Armstrong, 2006). The study of motivation is concerned basically with why people behave in a certain way. In general it can be described as the direction and persistence of action: it is concerned with why people choose a particular course of action in preference to others, and why they continue with the chosen action, often over a long period and in the face of difficulties and problems (Mullins, 2005). Motivation can therefore be said to be at the heart of how innovative and productive things get done within an organization (Bloisi, 2003). It has been established that motivation is concerned with the factors that influence people to behave in certain ways.

Effects of Motivation on Productivity
Productivity in general has been defined in the Cambridge International and Oxford Advanced Learner's dictionaries as the rate at which goods are produced with reference to the number of people and the amount of materials necessary to produce them. Productivity has also been defined as the utilization of resources in producing a product or service (Gaissey, 1993), and further as the ratio of output (goods and services) to input (labour, capital or management). The definition of productivity is used by economists at the industrial level to determine the economy's health, trends and growth rate, while at the project level it applies to areas of planning, cost estimating, accounting and cost control (Mojahed, 2005).
Several factors affect labour productivity, prominent among them basic education for an effective labour force, the diet of the labour force, and social overheads such as transportation and sanitation (Heizer and Render, 1999). Furthermore, motivation, team building, training and job security have a significant bearing on labour productivity. Coupled with these factors, labour productivity cannot be achieved without maintaining and enhancing the skills of labour and sound human resource strategies. Better-utilized labour with stronger commitment, working on safe jobs, also contributes to labour productivity (Wiredu, 1989).

Effects of Motivation on Performance
The performance of employees will make or break a company; this is why it is important to find a variety of methods of motivating employees. "Motivation is the willingness to do something," wrote Stephen Robbins and David A. DeCenzo in their book Supervision Today. "It is conditioned by this action's ability to satisfy some need for the individual." The most obvious form of motivation for an employee is money; however, there are other motivating factors that must be considered. Every employee within a company is different and, therefore, is motivated to perform well for different reasons. Because of these differences, it is important for a manager to get to know her employees and understand what motivates their performance. "If you're going to be successful in motivating people, you have to begin by accepting and trying to understand individual differences," Robbins and DeCenzo report. While money is often considered the most important motivator for employee performance, it is important for companies to find other ways to motivate as well. This involves getting to know their employees and what drives them, then making sure managers use appropriate motivational techniques with each employee. When appropriate motivation techniques are used, employee performance will improve.

Research Design
A cross-section of the staff of BORBDA comprising 45 subjects drawn from every class and cadre of the organization was sampled. For the purpose of this study, the workers were divided into three major groups: contract staff, permanent staff officers, and management staff officers. From the analysis, 9% are management staff, 55.5% are permanent staff, and the remaining 35.5% are contract staff. A questionnaire was used for collecting responses from the subjects selected for the study, and all 45 respondents completed and returned the questionnaires for analytical purposes.

Data Analysis Technique
The Chi-Square test was employed for the analysis. The Chi-Square distribution is a theoretical or mathematical distribution which has wide applicability in statistical analysis. The term 'Chi-Square' (pronounced with a hard 'ch') is used because the Greek letter χ defines this distribution; the elements on which the distribution is based are squared, so the symbol χ² is used to denote it (Adeniran, 2018). The Chi-Square statistic is commonly used for testing relationships between categorical variables. The null hypothesis of the Chi-Square test is that no relationship exists between the categorical variables in the population; that is, they are independent.
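To make the test concrete, here is a minimal sketch of a Chi-Square test of independence in Python. The cross tabulation is hypothetical: the row totals (4, 25, 16) mirror the study's reported staff split of 9%, 55.5% and 35.5% of 45 respondents, but the agree/disagree counts are invented for illustration and are not the study's data.

```python
# Chi-Square test of independence on a hypothetical cross tabulation.
# Rows: staff category; columns: agree/disagree with an incentive statement.
from scipy.stats import chi2_contingency

observed = [
    [ 2,  2],   # management staff (4 respondents)
    [15, 10],   # permanent staff  (25 respondents)
    [ 9,  7],   # contract staff   (16 respondents)
]

chi2_stat, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2_stat:.3f}, dof = {dof}, p = {p_value:.3f}")
# The null hypothesis of independence is rejected when p < 0.05
# (equivalently, when the statistic exceeds the critical table value).
```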
The Chi-Square statistic is also commonly used to evaluate tests of independence when using a cross tabulation (also known as a bivariate table). Cross tabulation presents the distributions of two categorical variables simultaneously, with the intersections of the categories of the variables appearing in the cells of the table. The test of independence assesses whether an association exists between the two variables by comparing the observed pattern of responses in the cells to the pattern that would be expected if the variables were truly independent of each other (Stephanie, 2018). In the same vein, the χ² statistic appears quite different from other statistics because it can be used both for the goodness-of-fit test and for the test of independence. For both of these tests, the data obtained from the sample are referred to as the observed numbers of cases; these are the frequencies of occurrence for each category into which the data have been grouped. In Chi-Square tests, the null hypothesis makes a statement concerning how many cases are to be expected in each category if this hypothesis is correct. The Chi-Square test is based on the difference between the observed and the expected values for each category.

Issues regarding the Chi-Square test
There are a number of important considerations when using the Chi-Square statistic to evaluate a cross tabulation:
1. The Chi-Square value is extremely sensitive to sample size; when the sample size is too large (approximately 500), almost any small difference will appear statistically significant.
2. It is sensitive to the distribution within the cells, and statistical software will give a warning message if cells have fewer than five cases. This can be addressed by using categorical variables with a limited number of categories (e.g., by combining categories if necessary to produce a smaller table) (Adeniran, 2018).
In order to test the association between variables, Chi-Square tests are used. Chi-Square is suitable for nominal data, i.e., data that are put into classes, such as gender (male, female) or type of job (skilled, semi-skilled, unskilled), to determine whether the classes are associated. Nominal data, as the word connotes, are data to which the researcher nominates values when coding. A Chi-Square result is said to be significant if there is an association between the two variables, and non-significant if there is not. As put by Adeniran (2018), Chi-Square is said to be useful as a test when the population is small, because a small population requires no statistical estimation of sample size, which is needed for significance testing; on this view it is not suitable when the population is large, because accurately representing a large population in a sample requires a random sampling technique, i.e. probability sampling, and hence non-probability rather than probability sampling is what works for Chi-Square. The statistic measures the association between two variables, or indicates whether the variables are related (Stephanie, 2018; Adeniran, 2018).

Study Area
The Benin Owena River Basin Development Authority (BORBDA), together with the other Basins in Nigeria, was established through a decree that charged them to enhance:
i. the increase in production of food and other raw materials to meet the needs of the country's growing population and expanding industries, and to attain self-sufficiency in food production; and
ii. the expansion of employment opportunities at the rural level and the need to develop underground water for domestic use (FRN Gazette, 1976; Akindele and Adebo, 2004).
The River Basin Authorities were charged with the following functions:
a) to undertake comprehensive development of both surface and underground water resources for multi-purpose use;
b) to provide water from reservoirs and lakes under the control of the Authority for irrigation purposes to farmers and recognized associations, as well as for urban water supply to the Authority concerned;
c) to control pollution in rivers, lakes, lagoons and creeks in the Authority's area in accordance with nationally laid-down standards;
d) to resettle persons affected by the works and schemes specified under special resettlement schemes;
e) to develop fisheries and improve navigation on the rivers, lakes, reservoirs, lagoons and creeks in the Authority's area;
f) to undertake the mechanical clearing and cultivation of land for the production of crops, livestock, etc.;
g) to undertake large-scale multiplication of improved seeds, livestock and tree seedlings for distribution to farmers and for afforestation schemes;
h) to process crops, livestock products and fish produced by farmers in the Authority's area in partnership with state agencies and any other person;
i) to assist the state and local governments in the implementation of rural development works (construction of small dams, provision of power for rural electrification schemes, establishment of grazing reserves, training of staff) in the Authority's area (RBDA Decree 87, 1979; Akindele and Adebo, 2004).
In 1984, under the Buhari regime, the eleven River Basin Authorities metamorphosed into eighteen Authorities and were redesignated as "River Basin and Rural Development Authorities", with one serving each state and one for Ogun and Lagos States combined. It was as a result of this metamorphosis that ORBRDA gained autonomy for independent existence in 1984, when it was excised from the old Benin-Owena River Basin and Development Authority (BORBDA) to cater exclusively for Ondo State (ORBRDA, 1987; Akindele and Adebo, 2004; Daramola, 2019).

Discussion of Findings
To test the hypothesis that monetary incentives and rewards exert a stronger influence on workers than any other form of motivational incentive, the Chi-Square (χ²) analytical method was employed. The following statements were considered: fat salaries are the best tools with which to motivate workers; and only monetary rewards can bring out the best in workers. The analysis revealed that the calculated value of χ² (20.7) exceeds the table value of χ² (9.49); hence the null hypothesis was rejected and the alternative hypothesis accepted. As such, it was concluded that monetary incentives and rewards do not exert a stronger influence on workers' productivity than any other form of motivational factor. In practice, it has been contended that although merit pay does not by itself motivate, in principle it could reinforce high performance, extinguish low performance, increase instrumentalities, satisfy safety needs, achieve equity, and so forth. The reason it often does not work has to do with implementation and practices that violate these principles. Most often, performance measures are not valid or accurate, and the budget is usually small without much flexibility.
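The decision rule reported above can be checked directly: a table value of 9.49 is the 5% critical value of the χ² distribution with 4 degrees of freedom (the significance level and degrees of freedom are inferred here from the table value, since the text does not state them), and the calculated statistic of 20.7 clearly exceeds it.

```python
# Verify the critical value behind the decision rule reported in the text.
from scipy.stats import chi2

alpha, dof = 0.05, 4                 # inferred from the table value of 9.49
critical = chi2.ppf(1 - alpha, dof)  # ~9.488
statistic = 20.7                     # calculated chi-square value in the study

print(f"critical value = {critical:.2f}")
print("reject H0" if statistic > critical else "fail to reject H0")  # reject H0
```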
Directors are reluctant to give small raises that are insulting or lower than the cost of living, and they do not want to make enemies or be accused of favouring their friends.

Conclusion and Recommendation
From the study's findings, it was concluded that monetary incentives and rewards do not exert a stronger influence on workers' productivity than any other form of motivational factor. In view of this, money is not the only motivating factor with a strong influence on workers' productivity, as there are other forms of motivational incentives for employees. This could justify Herzberg's view that money is a maintenance factor rather than a motivator once psychological and security needs are met. It is prudent to affirm that the reward system or incentive scheme of an organization should be a well-thought-out plan that is inclusive and satisfactory. Employees' performance should be monitored and objectively measured against set goals, after which a direct link should be created between the actions of employees and the eventual reward. Heads of organizations should look inward for better incentives to motivate their employees without necessarily relying on monetary incentives.
Infused virtue as virtue simply: the centrality of the Augustinian definition in Summa theologiae I/2.55–67

Abstract: 'Virtue is a good quality of the mind, by which one lives rightly, which no one uses badly, which God works in us without us.' Thomas Aquinas quotes this 'Augustinian' definition near the beginning of his treatment of virtue in general. Because it fails to apply to acquired virtues, some conclude that Aquinas presents this definition only to set it aside. Against such interpretations, I demonstrate that Thomas' use of the definition is the key to understanding the treatment of virtue at Summa I/2.55–63. First, I show why Thomas places the definition where he does, at the end of question 55. Second, I show that the definition is not peripheral but rather discloses the inner logic of his treatment of virtue. Finally, I show that for the reader who grasps this inner logic, the conclusion drawn explicitly at question 65 – that only infused virtue is virtue simply – is revealing but not surprising.

At the centre of Summa theologiae I/2, the reader finds a comprehensive but sometimes vexing treatment of virtue in questions 55-67. Among its puzzles is the handling of a definition of virtue taken from Peter Lombard's Sentences. Assembled mostly from Augustinian texts, particularly the nineteenth chapter of De libero arbitrio, the definition reads: 'Virtue is a good quality of the mind, by which one lives rightly, which no one uses badly, which God works in us without us' (55.4 co).1 As a definition, it seems inadequate, since it covers only some instances of virtue - those that are infused rather than acquired. Accordingly, it seems reasonable to conclude with Martin Rhonheimer that the definition is one which Thomas 'presents' but 'then sets aside', because it is 'only useful for theology'. Attention to the treatment of virtue as a whole, Rhonheimer claims, shows Thomas to engage in the substantive 'rejection of this definition'.2 It is not difficult to see why someone would interpret Thomas in this way. Given the poor fit between the definition and the acquired virtues, the easiest manner of resolving the tension is to dismiss the definition. But easy resolutions do not make for good readings. In what follows, I will show that, contrary to Rhonheimer's claim, the 'Augustinian' definition is not mentioned as a pious nod to the tradition and then set aside. Rather, close attention to the definition and its deployment is the key to discerning more clearly Thomas's intent in the questions on virtue that appear in Summa I/2.55-63.

To this end, I will proceed in four steps. In an opening section, I will propose an account of just why Thomas places the definition where he does - that is, at the end of question 55. Then I will go on to argue that the definition, far from being peripheral, discloses the inner logic of the treatment of virtue that runs from questions 55 to 63. Finally, I will show that, for the reader who grasps this inner logic, the conclusion drawn explicitly at question 65 - that only infused virtue is virtue simpliciter - is revealing but not surprising. The conclusion will not seem anomalous or unduly theological - as it is bound to seem to those for whom Thomas' ethics must proceed mainly along Aristotelian lines.3 Instead, it will appear as the manifest image of what is latently present in the prior questions.
The strategic placement of the Augustinian definition

Thomas places the 'Augustinian' definition near the beginning of Summa I/2's treatment of virtue. To see why, it is necessary to track the dialectical motion of its first question. The aim of question 55 is to say what virtue is, to disclose its essence. Thomas starts by indicating its genus (article 1). Virtue is a habitus, a disposition good for a human being to have. After identifying habitus as the genus of virtue, he proceeds to isolate two specific differences (articles 2-3). Only at the end of the question does Thomas consider the definition (article 4).

The primal signification of virtus, as Thomas well knows, is 'power'. The etymology's relevance should not be dismissed. Unless virtue is understood in relation to power, its character as a habitus is bound to be misunderstood. To see just how fundamental the virtue/power relation is for Thomas, consider the opening sentence of the body in each of question 55's first three articles: 'Virtue names a certain perfectio of a power' (55.1 co). 'Virtue, from the very definition of its name, denotes a certain perfectio of a power' (55.2 co). 'Virtue denotes a certain perfectio of a power' (55.3 co).4

To grasp virtue as a habitus, one must probe what Thomas means by the 'perfectio of a power'. In the term perfectio, the reader should simultaneously hear notes sounded by 'perfection' and 'completion'. Thomas considers the core notion of completion inseparable from the notion of an end: 'the completion of anything whatever consists chiefly in the order to its end' (55.1 co). The powers of which human virtue is the completion are powers that produce 'works of reason (opera rationis), which are proper to a human being' (55.2 ad 2). If virtue is the completion of such powers, its end must consist in productive activity. Virtue, then, cannot be just any habit, but an operative habit. Lest this conclusion seem insufficient, since not every operatio enhances a person's real strength, article 3 adds a necessary clarification: since 'every evil is an enfeeblement', as Dionysius says, the operation that stands as virtue's end must be 'good operation' (bona operatio).

Thomas' aim is to disclose the core notion of human virtue - its ratio. Articles 2 and 3 are partial symmetrical disclosures of the ratio. Their symmetry can be glimpsed by inspecting the videtur quod of both articles. For each it seems that a certain description 'does not belong to the ratio of human virtue'. In both cases, the description in question - habitus operativus in article 2, habitus bonus in article 3 - does belong to the ratio. Articles 2 and 3 thus constitute a diptych, a balanced pair whose members seem jointly to disclose the whole ratio of virtue. Were the diptych sufficient to disclose the whole ratio, the fourth article would be unnecessary; yet Thomas proceeds in article 4 to assess the Augustinian definition, judging that it perfecte complectitur the tota ratio of virtue.

Each of the key terms chosen by Thomas in this assessment - the adverb perfecte, the verb complecitur, the adjective tota - stresses the definition's utter completeness. What the first portion of the question had been striving for - an articulation of virtue's essence - the definition provides. It captures not only this or that fragment; it discloses the whole ratio. Because the definition succeeds in capturing the whole account of virtue, the article in which the definition appears is rightly interpreted as the culmination of question 55, rather than an afterthought or appendix.
Why suppose that the 'Augustinian' definition captures the 'whole account' of virtue? Thomas answers: 'The complete account (perfecta ratio) of anything whatever is gathered from all of its causes. Now the aforementioned definition comprehends all the causes of virtue' (55.4 co). Virtue's formal cause is that it is a 'good quality' (bona qualitas). Its material cause corresponds, first of all, to that which bears virtue: it is 'of the mind' (mentis) rather than the body. Its final cause is its good. Virtue is that 'by which one lives rightly, which no one uses badly' (qua recte vivitur, qua nullus male utitur). Finally, the definition gives its efficient cause by indicating what brings about the existence of virtue in human beings. Virtue is a quality 'which God works in us without us' (quam Deus in nobis sine nobis operatur).

The definition deployed: virtue's material cause

Thomas makes use of an Aristotelian scheme - namely, that of the 'four causes' - to explicate the Augustinian definition. That use, however, does not imply that he interprets the definition's content as Aristotelian. Before considering the definition's non-Aristotelian import, however, I want to establish decisively - against Rhonheimer and others who suppose that the definition is mentioned and then quietly disappears - that the definition in fact serves as the structuring principle of questions 55-63.

The elements of the Augustinian definition correspond to the four causes in a particular order: formal, material, final, efficient. That Thomas adopts this very order in the sequencing of questions 55-63 can be seen in his decision to begin with the formal cause - that is, the essence of virtue, the topic of question 55. Likewise, question 63 - the end of the sequence of questions on virtue itself, before the turn to its 'properties' - plainly concerns the efficient cause. What needs to be shown is that questions 56-62 correspond to the material and the final cause. In this section, I will show that questions 56-60 correspond to the material cause. In the next, I will show that questions 61-2 correspond to the final cause.

Because virtue belongs to the mind, Thomas holds, there is no 'matter from which' (materia ex qua) it is made. Nonetheless, it may be said to have a material cause in two ways. First, every virtue has a 'matter about which' (materia circa quam) it is concerned, its own proper field or domain. Second, there is the 'matter in which' (materia in qua) the virtue resides, which Thomas identifies with its 'subject' (subiectum). Question 56, in which Thomas expressly addresses the subject of virtue, corresponds to the second of these two senses of material cause. The notion of a virtue as a 'completion of a power' demands that it be seated somewhere. The completion must be 'in that of which it is the completion' (56.1 co). Any virtue, therefore, must be subjected in a power of the soul. In which powers are human virtues seated? Question 56's primary task is to address this issue. Some virtues have the intellect as their materia in qua. These are the five 'intellectual virtues' named by Aristotle. Except for prudence, they are not virtues simply speaking, because they do not satisfy the Augustinian definition's requirement that virtues cannot be badly used. By contrast to the intellectual virtues, virtues subjected in the appetitive powers are virtues, properly speaking. Any habitus that orders an appetitive power to its end and thereby completes it will be a virtue not secundum quid but simpliciter.
The same contrast between two materiae, intellect and appetite, informs question 57 on the intellectual virtues. Three of the intellectual virtues - wisdom (sapientia), knowledge (scientia) and insight (intellectus) - are seated in the speculative intellect and have no necessary relation to the appetitive powers. Art likewise has no concern with the appetite as such. It cares only about 'how the work is, which it accomplishes'. It 'does not look to appetite' (57.4 co). Prudence, by contrast, does 'look to appetite' (57.4 co) and so differs from the other intellectual virtues. Beneath the art/prudence distinction is the deeper contrast between the intellective and the appetitive powers. Though prudence's subject is the intellect - otherwise it would not be an intellectual virtue - it differs from the rest of the intellectual virtues in that it essentially concerns the appetitive powers. Question 58's distinction between moral and intellectual virtue is similarly grounded in the difference between reason and appetite. 'Just as appetite is distinguished from reason, so moral virtue is distinguished from intellectual virtue' (58.2 co). Questions 56-8 form a unity in the general treatment of virtue. What holds these questions together is their focus on virtue's 'material cause', construed as materia in qua.

Questions 59 and 60 move to Thomas's second sense of material cause: the 'matter about which' (materia circa quam) a virtue is concerned. To identify the material cause in this sense is the same as to discern a virtue's object: 'the materia circa quam is the object of a virtue' (55.4 co). It is the virtue's field or domain, what the virtue is 'about'. Because the passions supply a large part of the matter for moral virtue, Thomas writes question 59 as a preliminary to question 60. Question 59 seeks to clarify the 'relation' (comparatio) between the moral virtues and the passions, fixing the scope of the materia circa quam. Question 60 proceeds to distinguish the moral virtues from one another, according to their particular matter. Just as questions 56-8 are governed by 'material cause' in the sense of materia in qua, so questions 59 and 60 correspond to 'material cause', understood as materia circa quam.

Taken as a whole, questions 56-60 correspond to the material cause. This is the second of the four causes from which Thomas says the 'whole account' (tota ratio) of virtue is gathered. Let us now move to the third cause.

The Augustinian definition and virtue's final cause

If the Augustinian definition of virtue as read by Thomas - that is, as pointing to the formal, material, final and efficient causes of virtue, in that order - is the key to the inner logic of questions 55-63, then we can expect questions 61-2 to correspond to the final cause, and question 63 to the efficient cause. The third element of the Augustinian definition holds that virtue is a quality 'by which one lives rightly, which no one uses badly' (qua recte vivitur, qua nullus male utitur). Thomas reads this element as pointing toward virtue's final cause.
To grasp the link between questions 61-2 and the final cause, it is helpful to understand both questions as attempts to illuminate the virtues by placing them in closer relation to the ends of human life. The topic of question 61 is the cardinal or principal virtues. It begins by distinguishing between two kinds of habits according to the relation of each to usus - a clear echo of the Augustinian definition's qua nullus male utitur. Some habits confer the 'capacity (facultas) for acting well', but do not of themselves 'cause the use (usus) of a good work' (61.1 co). Others not only confer the capacity, but also bring about something good. Habits of the first type, Thomas argues, can be called virtues only by analogy. Those of the second type, by contrast, are virtues 'according to the complete ratio of virtue'. Their power to preserve the rightness of appetite, along with the direction given to them by prudence, ensures their good use, and thus secures their connection to the final cause, which consists in good operation.

Each of the cardinal virtues is directed toward a good. But are they directed to the highest good? Thomas introduces this query in question 61's closing article. Up to this point, he declares, we have considered the virtues only 'as they exist in a human being according to the condition of his nature' (61.5 co). Such virtues, notwithstanding their apparent conformity to the complete ratio of virtue, are virtues at the lowest level. They are appropriately called 'political' virtues, since they essentially belong to a person considered as animal politicum (61.5 co). In this article, Thomas introduces a passage from Macrobius that quotes Plotinus. The passage evokes a ladder of virtues, starting from the political virtues, 'of which we have spoken until now', moving through 'cleansing virtues' (virtutes purgatoriae) and virtues in souls that have 'already been cleansed' (iam purgati animi), and culminating in 'exemplar virtues'.

Without reference to a transcendent final cause, the cardinal virtues are not fully intelligible. If one remains on the Aristotelian plane, as it were, one will have only truncated versions of the cardinal virtues - and therefore of the moral virtues as a whole, since all the other moral virtues are 'contained' under the cardinal virtues (61.3 co). Only by heeding non-Aristotelian auctoritates about the final cause are we able to perceive the full range of cardinal virtues. Thomas starts with a 'horizontal' consideration of eleven moral virtues, distinguished by their materia in an Aristotelian fashion. As question 61 enacts the reduction of these dispositions to four principal virtues, he seems to remain on the horizontal plane. But the motion cannot remain on this plane indefinitely. It gives way to a 'vertical' motion, reaching upwards to the final cause that transcends the polis. This vertical motion opens the way to question 62's treatment of the theological virtues.
Both question 61 and question 62 are essentially about virtue's final cause, but in different ways. If question 61 most directly corresponds to the part of the Augustinian definition which stipulates that virtue is a quality 'which no one uses badly', question 62 deepens the theme that virtue is that 'by which one lives rightly' (qua recte vivitur). What it means to live rightly can for Thomas be understood only in relation to the end of human life. Question 62 begins on this very note. 'By virtue a human being is completed in relation to the acts by which she is directed to blessedness' (62.1 co). But the blessedness or happiness of a human being, Thomas adds, is duplex: 'One is proportioned to human nature, namely to that which a human being can reach by the principia of her nature. The other, however, is a blessedness that surpasses the nature of a human being, to which a human being can reach only by divine virtue, according to a certain participation in divinity' (62.1 co).

Even in their most complete state, Thomas argues, the cardinal virtues are able to make a human being blessed only in the first sense. They are completions of 'the natural principia by which a person is directed to the connatural end - but not, however, without divine help' (62.1 co). They are utterly incapable of moving a person toward blessedness in the second sense, which Thomas calls 'supernatural blessedness' (beatitudo supernaturalis).

If a human being is to be directed toward supernatural blessedness, additional principia beyond the natural principia (and the cardinal virtues which complete them) are required. Such principia, Thomas says, are the 'theological virtues'. They have God for their object, 'so far as by them we are rightly directed (recte ordinamur) toward God' (62.1 co). An attentive reader will hear the phrase recte ordinamur as an echo of the Augustinian definition's recte vivitur. The virtues that conform most fully to this requirement of the definition - and which appear at the very centre of Summa I/2's questions on virtue - are the theological virtues.

The twofold character of the final cause is the deepest ground of the distinction between question 61 and question 62. It is also what unifies them. Questions 61 and 62 should be seen as a not-quite-symmetrical pair. Both consider the highest virtues in relation to the final cause. But the double nature of beatitudo ensures that different things will need to be said about each group of virtues. Though 'principal' in relation to the other moral virtues, the cardinal virtues are teleologically subordinated to the theological virtues. Article 2's argument sed contra signals as much: what is secundum naturam hominis is lower than what is supra naturam hominis (62.2 sc). The object of the theological virtues, Thomas argues in the next article, is 'God himself, who is the ultimate end of things, as he surpasses the knowledge of our reason' (62.3 co). The end of the theological virtues remains bona operatio. But to the extent that good productive activity is directed toward God, the other virtues will now be directed by charity, the highest of the theological virtues.
To be sure, the other virtues retain a relative autonomy. They are not, Thomas argues, turned into so many versions of charity, as though each of them were 'essentially' charity (62.2 ad 3). But they depend on charity, because charity directs their particular goods to the highest good. Thomas does not assert the reorientation of the virtues by charity as an abstract claim - as if it were to leave a person just as she was before. On the contrary, he says, the will is now directed to the end 'as to a certain spiritual union, by which it is in a way transformed into that end, which happens by charity' (62.3 co). From the perspective of question 62, anodyne phrases like 'human fulfilment' are impoverished descriptions of virtue's final cause. For a person infused with charity, the end is an ever-increasing participation in blessedness - one that asymptotically approaches deification.

The definition's explicit reappearance: virtue's efficient cause

Up to this point, I have argued that Thomas's reading of the 'Augustinian' definition of virtue - placed strategically as the culmination of question 55 on virtue's essence - informs the deep structure of the eight questions that follow. This suffices to raise a serious doubt about Rhonheimer's claim that Thomas cites the definition only to set it aside. Nonetheless, Rhonheimer could reply that the definition itself is not quoted within these questions. Any such appeal, however, crashes upon the reef of question 63, where Thomas again invokes the definition explicitly.

Virtue is a quality 'which God works in us without us' (quam Deus in nobis sine nobis operatur). This last clause of the Augustinian definition, Thomas tells us, points to the efficient cause. Addressing the 'cause of virtue' (causa virtutis), question 63 begins with a topic from Aristotle: are the virtues in us by nature? After defending an answer drawn from book 2 of the Ethics as 'more true' than competing responses given by Avicenna and the Platonists (article 1), Thomas proceeds to ask whether virtues are caused by the 'habituation of works' (article 2). Some of them, he says, are - namely, those directed to a good measured by the rule of human reason. But what about virtues directed to a good whose measure is divine law? Are such virtues limited to the theological virtues treated in question 62? Thomas addresses this question by asking whether any of the moral virtues are in us by divine infusion (article 3). If they are, Thomas asks, are such infused moral virtues of the same kind or species as the virtues that we acquire from works (article 4)?
This survey of question 63 is enough to show that its topic is virtue's efficient cause. If further evidence is needed, one may observe that the last clause of the Augustinian definition (quam Deus in nobis sine nobis operatur) reappears twice within the question (63.2 co, 63.4 sc). Before considering these reappearances, we must notice two significant claims that Thomas makes in question 55 about the clause's proper interpretation. First, the clause should be understood to imply that while God causes virtues in us without our action, God does not do so without our consent (55.4 ad 6). Divine infusion is not violent insertion. Second, the clause applies only to the infused virtues. 'If this little part (quae particula) be withdrawn, the rest of the definition is common to the virtues, both acquired and infused' (55.4 co). Question 55's concluding remarks leave open a series of perplexities. How can Thomas hold both that the Augustinian definition covers only some of the cases and that it discloses the 'whole notion' (tota ratio) of virtue? If the definition fails to cover all the cases, should it not be regarded as faulty? If it is faulty, why does he decide to reproduce and defend it? Why does he choose to structure his general treatment of virtue around it?

In question 63, Thomas begins to construct an answer to these questions. The acquired virtues are in us by nature, but only 'according to aptitude and inchoation' (63.1 co). What is naturally in us and naturally known, Thomas holds, is not virtue itself, but 'certain beginnings' (quaedam principia) of virtue. These beginnings may be compared to 'certain seminalia' - seeds or nurseries from which the virtues grow through 'habituation' (assuetudo), from human works and actions. Because of their greater naturalness, Thomas asserts, the principia are 'nobler' (nobilior) or 'higher' (altior) than the virtues that grow out of them (63.2 ad 3). But notice also that the acquired virtues are proportional to the principia, as the lower is proportional to the higher. The proportion, Thomas adds, 'does not extend beyond nature' (63.3 ad 3).
Here Thomas has begun to set up an analogy between the acquired virtues and infused moral virtues. If humans are ultimately directed to a goal beyond nature, a supernatural end, other beginnings are required. The natural seminalia are not nearly enough, precisely because they are natural. These other beginnings, 'added from above' (superaddita, 63.3 ad 3), are the theological virtues. 'In the place of these natural principia, the theological virtues are bestowed on us by God, virtues by which we are directed to the supernatural end' (63.3 co). On the supernatural plane, the theological virtues occupy the place of the natural principia. Do we find anything that is below the theological virtues, but that proportionally corresponds to them? We do, Thomas says: 'So it is necessary that to these theological virtues, there correspond proportionally other habits divinely caused in us, which other habits are related to theological virtues, as the moral and intellectual virtues are related to the natural principia of the virtues' (63.3 co).

The 'other habits divinely caused in us' are infused moral virtues. Though sufficient to direct their possessor to God without mediation, Thomas says, the theological virtues are nonetheless a 'kind of beginning' (quaedam inchoatio). The full completion of their work, which he calls their consummatio, requires other virtues that are proportioned to them. Since the direction of the human being to God does not characteristically occur without mediation, 'it is necessary that the soul is completed by other infused virtues that are about other things - yet as ordered to God' (63.3 ad 2). The Augustinian definition, it turns out, covers not only the theological virtues, but the entire range of virtues by which a person is effectively directed toward the supernatural end.

We are now in a position to judge the analogy - that is, a similarity inscribed within a deeper difference - between the acquired virtues and infused moral virtues. Both kinds of virtue exhibit a distinction between inchoatio and consummatio. Both feature principia that are noble in themselves, but require other habits for their completion. Such lower habits are required, so that the 'whole structure of good works rises up' - as Gregory remarks about the cardinal virtues (61.2 sc). But the structural similarities between acquired and infused virtues cannot efface a deeper difference between them. Infused virtues, Thomas argues in article 4, differ in species from acquired virtues, because both the act and end of virtues infused by God differ from those of virtues that are humanly acquired.
Why Thomas decides to use the Augustinian definition as a map for the 'whole account' of virtue is now clear. Precisely and unproblematically, the definition covers the virtues that conform to virtue's 'complete account' (perfecta ratio). These are the infused virtues. Other virtues are also covered by the definition, but problematically, since its 'little part' (particula) about the efficient cause does not apply. The definition's problematic coverage of the acquired virtues is not a flaw in the definition, but a sign of a deeper judgement on Thomas's part. Acquired virtues, though rightly called virtues, are nonetheless virtues in a problematic sense. No matter how splendid it appears, their consummatio seems paltry when compared to the consummatio of infused virtues whose end is supernatural blessedness. If one allows Thomas' view to emerge gradually, observing the course charted by his 'four causes' reading of the Augustinian definition, one must simply reject the claim that he regards the definition as faulty. The fault lies rather in the 'incomplete' virtues that approach the perfecta ratio of virtue but do not conform to it fully.

Infused virtue as virtue simpliciter

Structured by the Augustinian definition, which Thomas defends as giving the 'whole account' (tota ratio) of virtue, Summa I/2.55-63 may be supposed to give an adequate treatment of virtue in general. But the general treatment is not quite done, since Thomas proceeds to a consideration of 'certain properties' of virtue in questions 64-67. The consideration is distributed across four questions, which treat the mean of virtues (64), their connection (65), their equality (66) and their duration (67). Questions 64-67 do not contain any startling reversals of the teachings that appear within questions 55-63. But they do offer some revealing extensions and clarifications. Unlike some later commentators who find Aristotle's theory of the mean virtually useless, Thomas defends the notion. But his decision to treat it as one of the 'properties' of virtue should not be taken for granted. He might, for example, have included the Aristotelian definition of virtue as a 'habit marked by choice, residing in the mean relative to us, a characteristic defined by reason and as the prudent person would define it' (Ethics 2.6, 1106b36-1107a2), within the sequence of questions that correspond to the 'whole account' of virtue. But he chooses not to do this. Instead, he relegates the mean to the consideration of virtue's properties.
In the next question on the connection of the virtues, Thomas directly confirms a suspicion that arises from reading question 63 on the cause of virtue. As we have seen, Thomas concludes that question not by rejecting the Augustinian definition as inadequate, but rather by suggesting that humanly acquired virtues are not virtues in the full sense. What question 63 quietly suggests receives emphatic articulation in question 65, which distinguishes 'complete moral virtue' (perfecta moralis virtus) from 'incomplete moral virtue' (imperfecta moralis virtus). Incomplete moral virtues are not necessarily connected; they may be little more than natural or customary inclinations. Taken in this sense, no one virtue implies any other virtue. But if we are speaking of complete virtues, we have reason to believe that the cardinal virtues are connected. Some version of the connection, Thomas claims, is 'set down by nearly everyone' (65.1 co). It can be justified, he adds, either by reflecting on the 'general conditions of the virtues', or by examining the reciprocal interdependence of prudence and the moral virtues (65.1 co).

To assert the connection of habits that are complete moral virtues, as article 1 does, is to raise some obvious questions: Just what is a complete moral virtue? What is required of a moral virtue for its completion? Article 2 addresses these questions by pulling together some hints and clues that have been scattered throughout the treatment of virtue. Acquired moral virtues are 'productive of a good as directed to an end that does not surpass the natural capacity of humanity' (65.2 co). Such virtues, Thomas holds, can be acquired without charity - as they were, he adds, by some pagans. But 'according as virtues are productive of a good as directed to the last supernatural end, they completely and truly (perfecte et vere) have the ratio of virtue; and they cannot be acquired by human actions, but are infused by God' by divine charity (65.2 co). 'Only infused virtues', Thomas now says explicitly, 'are complete (perfecta) and are called virtues simpliciter, since they direct a person well to the last end simpliciter.' 'The other virtues, namely the acquired, are virtues secundum quid' (65.2 co).

What Thomas had said earlier about the intellectual virtues (56.3) now applies to the entire range of acquired virtues, including the moral virtues. They are virtues only secundum quid. Commentators such as Rhonheimer, who are inclined to dismiss Thomas' quotation of the Augustinian definition, might find this result surprising or anomalous. But the reader who grasps Thomas's deployment of the Augustinian definition as a reliable guide to virtue's perfecta ratio will not be surprised by 65.2's conclusion that humanly acquired virtues are virtues by analogy.5 To understand Thomas's treatment of virtue, one must carefully attend to his handling of the Augustinian definition, rather than crudely dismiss it.6 Without such attention, Summa I/2 on virtue is all too likely to appear, as Andrew Pinsent observes, as an 'heterogeneous assemblage of materials from disparate sources held together in a kind of constructive tension … a vast but imperfect aggregation of Christian materials'.7
The reading given here concurs with Pinsent's judgement, but goes beyond it by specifying more precisely the internal coherence of Summa I/2.55-63. What gives these questions their coherence, I have argued, is their tight correspondence to the four parts of the Augustinian definition, read as corresponding to the formal, material, final and efficient causes (in that order). Questions arise, of course, from taking seriously the claim that only infused virtue is virtue simpliciter. How far does the priority of infused virtue alter our understanding of acquired virtue? If someone possesses the infused virtues, to what extent does she need the acquired virtues, and why? Readers may become frustrated, understandably, with Summa I/2's apparent lack of clear answers to such questions. As a partial antidote to such frustration, one may recall that Thomas writes Summa I/2 as a preface to Summa II/2. The point of a more attentive reading of Summa I/2.55-63 is not to answer perplexing questions about the virtues, but to enable their adequate formulation.

2 Martin Rhonheimer, The Perspective of Morality: Philosophical Foundations of Thomistic Virtue Ethics, tr. Gerald Malsbary (Washington, DC: Catholic University of America Press, 2011), pp. 197-8, n. 18.
3 Five years ago, Eleonore Stump remarked that 'scholars discussing Aquinas's ethics typically understand it as largely Aristotelian' ('The Non-Aristotelian Character of Aquinas's Ethics: Aquinas on the Passions', in Sarah Coakley (ed.), Faith, Rationality and the Passions (Malden, MA: Wiley-Blackwell, 2012), p. 91), citing Irwin, McInerny and Kenny as examples. Her remark still rings true today. One should note, however, that some readers had laid stress on the non-Aristotelian aspects of his ethics years before (e.g. Mark Jordan, On the Alleged Aristotelianism of Thomas Aquinas (Toronto: Pontifical Institute of Medieval Studies, 1992); John Inglis, 'Aquinas's Replication of the Acquired Moral Virtues: Rethinking the Standard Philosophical Interpretation of Moral Virtue in Aquinas', Journal of Religious Ethics 27 (1999), pp. 3-27). Others have since amplified and extended the non-Aristotelian reading of Thomas' ethics, particularly Andrew Pinsent, The Second-Person Perspective in Aquinas's Ethics: Virtues and Gifts (London: Routledge, 2012).
5 To say that humanly acquired virtues are virtues by analogy is to say that such virtues fall short of the complete or total ratio of virtue. It does not imply that such virtues are 'fake' or 'counterfeit' or 'false'. Thomas does not regard them as such.
6 Another possibility is simply to ignore the Augustinian definition. David Decosimo provocatively attributes a version of this strategy to those whom he labels 'public reason Thomists', so-called from their desire to 'find a way of safely and justly navigating our pluralistic world and democratic politics by the deliverances of reason alone' (Ethics as a Work of Charity (Stanford, CA: Stanford University Press, 2014), p. 7). With Decosimo, I agree that using Thomas to find 'a solution that bears the imprimatur of Nature, the promise of Neutrality, and the unassailability of Reason' (p. 8) is bound to result in serious misreadings or distortions of the Summa. Decosimo's alternative reading, the details of which I cannot enter into here, explores the possibility that Thomas strives in multiple ways 'to be Aristotelian by being Augustinian and vice versa' (p. 9). The present argument about the structure of Summa I/2.55-63 is not, I think, entirely incompatible with Decosimo's strategy, though my reading judges Thomas to engage in a more thoroughgoing subordination of Aristotle to Augustine than he wants to allow. One can draw a contrast between 'coordinating' interpretations such as Decosimo's, where an acknowledged 'either/or' occurs within a more comprehensive 'both/and', and 'subordinating' interpretations that acknowledge Thomas's desire to discover (or construct) 'both/and' relations between Aristotle and Augustine, while insisting that all such relations are inscribed within a more comprehensive 'either/or'.
7 Pinsent, The Second-Person Perspective in Aquinas's Ethics, p. 30.
Risky Multi-Attribute Decision-Making Method Based on the Interval Number of Normal Distribution

Abstract: Focusing on risky decision-making problems taking the interval number of normal distribution as the information environment, this paper proposes a decision-making method based on the interval number of normal distribution. Firstly, a normalized matrix based on the decision maker's attitude is obtained through analysis and calculation. Secondly, according to the properties of the standard normal distribution, the risk preference factors of the decision makers are considered in order to determine the possibility degree of each scheme. The possibility degree is then used to establish a possibility degree matrix and, consequently, all schemes are ranked according to existing theory on the meaning of the possibility degree and its value. Finally, the feasibility and validity of this method are verified through the analysis of a calculation example.

Introduction
Decision-making exists generally in politics, economics, technology, and the daily life of humans. With the continuous development of the social economy, decision-making with a single target and attribute is applied less and less in actual economic and management activities. In real multi-attribute decision-making, the circumstance of an uncertain natural state occurs frequently, namely risky multi-attribute decision-making. Since risky multi-attribute decision-making has an extensive practical background in the fields of new product development, investment project selection, and engineering project development, the question of how to solve risky multi-attribute decision-making problems is an important topic with academic research value and practical significance. In regard to multi-attribute decision-making problems in which the attribute value is an interval number, scholars generally agree that the distribution rule of the interval number is the uniform distribution [1,2]; however, there are also studies [3-6] that consider the distribution rule of the interval number to be the normal distribution, which is more reasonable in many settings: for example, the distribution of students' examination results, the life distribution of a species, and the height distribution of people all follow a normal law. In recent years, research into the interval number of normal distribution has attracted the attention of experts and scholars, and has been widely used in the field of multi-attribute decision-making problems, but the research is not yet mature or fully optimized. For example, Liu et al. [3] analyzed the possibility measure of interval numbers, and studied the interval number complying with the normal distribution rule. They focused on the limitation that the interval number of uniform distribution was adopted to describe fuzzy evaluation values in decision-making, and proposed a method of conducting multi-attribute decision-making by applying the interval number of normal distribution. Wang and Xiao [4] provided several aggregation operators, and proposed a multi-attribute group decision-making method with incomplete information based on the interval number of normal distribution in group decision-making situations.
Concentrating on the situation in which two interval numbers intersect, Xu and Lv [5] put forward the concept of the comparability of the possibility degree between interval numbers of normal distribution and a comparative method for interval numbers; according to the principle of maximum deviation, they obtained a method of ascertaining attribute weights and thus provided a multiple attribute decision-making method. Yang et al. [6] were mainly concerned with the effective supplementation of incomplete information and the full utilization of uncertain information in grey number sequence prediction, and conducted the random realization of true values of interval grey numbers of normal distribution under effective numerical coverage. Focusing on multi-attribute decision-making problems with interval numbers of normal distribution, Ding and Mao [7] proposed an aggregation method for the interval number of normal distribution, established a possibility degree matrix by using the possibility degree, and obtained the optimal decision by using an ordering vector method. For multi-attribute decision-making problems taking the interval number of normal distribution as the information environment, Mao et al. [8] provided the concept and related properties of cross entropy for the interval number of normal distribution, and proposed a decision-making method based on cross entropy and a score function. Zhang et al. [9] used the Choquet integral to propose a Choquet ordered averaging operator for normal distribution interval numbers. Chiranjibe and Madhumangal [10] attempted to lay a foundation for a new approach using a single-valued neutrosophic soft tool which addresses many problems that contain uncertainties; at the time of that study, aggregation operators of single-valued neutrosophic soft numbers had not yet been applied to the ranking of alternatives in decision-making problems. Song et al. [11] identified both primary strategic and operational elements that aid managers in evaluating and making risky multi-criteria decisions on green capacity investment projects. In related research fields, Liu and Ren [12] noted that existing intuitionistic fuzzy entropy considers only the deviation between the membership degree and the non-membership degree, excluding the hesitation information contained in the intuitionistic fuzzy set, and proposed a new class of multiple attribute decision method based on intuitionistic fuzzy entropy. Shao and Zhao [13] studied the multi-criteria decision-making (MCDM) problem with completely unknown weights and evaluation information in the form of interval-valued intuitionistic fuzzy numbers (IVIFNs). Considering the influence of the hesitancy degree, a vector representation derived from alternative schemes, positive ideal schemes and negative ideal schemes was proposed, and a method of vector projection measure was put forward for interval intuitionistic fuzzy information. Fu et al. [14] discussed the multi-attribute decision-making (MADM) problem for attribute values in the form of IVIFNs carrying incomplete attribute weight information, comprehensively considered the correlations among attributes, and proposed a decision method to address such problems. Novak Zagradjanin et al. [15] considered a cloud-based multi-robot system with a high level of autonomy, intended for the execution of tasks in a complex and crowded environment.
The proposed concept uses a multirobot path-planning algorithm that can operate in an environment that is unknown in advance; with the aim of improving the efficiency of path planning, the implementation of multi-criteria decision-making using the full consistency method is proposed. In summary, this study considers that the attribute value is the interval number of normal distribution, together with the occurrence probability of each natural state, and, in order to solve multi-attribute decision-making problems, puts forward a risky decision-making method based on the interval number of normal distribution. In theoretical research, the interval number of normal distribution accords better with social and natural laws than the interval number of uniform distribution, but it has been researched less, so this paper helps to strengthen research in this field. In terms of practical application, the method presented here can normalize the decision-making matrix according to the decision makers' different degrees of recognition of the attributes, which makes it more widely applicable. The results of this study will help decision makers make more reasonable decisions in social and economic activities, and thus bring greater economic and social benefits. Interval Number Let a^- , a^+ ∈ R with a^- ≤ a^+, where a^- represents the lower limit value and a^+ represents the upper limit value; the closed interval on the real number axis ã = [a^-, a^+] is referred to as an interval number. In addition, if a^- = a^+, ã degenerates into a certain number; that is, an ordinary real number can be regarded as a special interval number [2]. Interval Number of Normal Distribution In the multi-attribute decision-making process, the attribute value provided by decision makers is usually stable and tends towards a certain point, namely an attribute value with maximum possibility. In this case, the interval number of normal distribution can be used to represent the attribute value [3]. According to the 3σ principle of the normal distribution, the mean μ and standard deviation σ in the above definitions can be determined from the interval endpoints as μ = (a^- + a^+)/2 and σ = (a^+ - a^-)/6 [4]. Sequencing for the Interval Number of Normal Distribution For the comparison of two interval numbers of normal distribution without intersection, the sequencing can be conducted according to Definition 2; when an intersection exists between two interval numbers of normal distribution, the sequencing can be conducted according to the following method [16]: p(ã ≥ b̃) represents the possibility degree that ã ≥ b̃; if p(ã ≥ b̃) > 0.5, scheme ã is superior to scheme b̃ and, otherwise, scheme b̃ is superior to scheme ã [21]. Problem Description Consider a risky multi-attribute decision-making problem. For convenience, let X = {x_1, x_2, ..., x_m} denote the set of alternative schemes, where x_i represents the i-th alternative scheme; U = {u_1, u_2, ..., u_n} denotes the set of n attributes, where u_j represents the j-th attribute; ω = {ω_1, ω_2, ..., ω_n} denotes the weight vector of the attributes, where ω_j is the weight or importance degree of attribute u_j, satisfying ω_j ≥ 0 and Σ_j ω_j = 1; S = {s_1, s_2, ..., s_t} denotes the set of natural states, where s_k represents the k-th state and p_k represents the occurrence probability of state s_k, satisfying p_k ≥ 0 and Σ_k p_k = 1. Suppose the interval number [a^-_ij(s_k), a^+_ij(s_k)] represents the attribute value of scheme x_i under attribute u_j in natural state s_k; the risky decision-making matrix is thus established [22].
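Because the possibility-degree formulas above were lost in extraction, the following is a minimal sketch of the comparison and ranking step under stated assumptions: an interval [a^-, a^+] is read as N(μ, σ²) with μ = (a^- + a^+)/2 and σ = (a^+ - a^-)/6 (the 3σ principle above), the possibility degree p(ã ≥ b̃) is taken as Φ((μ_a - μ_b)/√(σ_a² + σ_b²)), and schemes are ranked by the row sums of the possibility degree matrix, a standard ordering-vector convention. All function names are illustrative, and the formulas are not necessarily the paper's exact ones.

```python
import numpy as np
from math import erf, sqrt

def normal_params(lo, hi):
    # 3-sigma reading of an interval number: mu = midpoint, sigma = width / 6
    return (lo + hi) / 2.0, (hi - lo) / 6.0

def possibility_degree(a, b):
    # p(A >= B) for independent A ~ N(mu_a, s_a^2), B ~ N(mu_b, s_b^2):
    # A - B ~ N(mu_a - mu_b, s_a^2 + s_b^2), so p = Phi(diff / sd)
    (ma, sa), (mb, sb) = normal_params(*a), normal_params(*b)
    sd = sqrt(sa ** 2 + sb ** 2) or 1e-12     # guard: two point intervals
    return 0.5 * (1.0 + erf((ma - mb) / (sd * sqrt(2.0))))

def rank_schemes(intervals):
    # intervals: one aggregated [lo, hi] value per scheme
    n = len(intervals)
    P = np.array([[possibility_degree(intervals[i], intervals[j])
                   for j in range(n)] for i in range(n)])
    order = np.argsort(-P.sum(axis=1))        # larger row sum -> better
    return P, order

# toy usage: three schemes with aggregated interval scores
P, order = rank_schemes([(0.2, 0.5), (0.4, 0.6), (0.1, 0.7)])
print(np.round(P, 3), order)
```

Note that p(ã ≥ b̃) + p(b̃ ≥ ã) = 1 under this convention, so P is a complementary judgment matrix, which is what the possibility-degree ranking in the text relies on.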
Normalization of the Decision-Making Matrix In solving risky multi-attribute decision-making problems, the attributes can generally be divided into benefit-type and cost-type: a larger benefit-type attribute value is better, while a smaller cost-type attribute value is better. When both appear in the same decision-making process, the attribute values should be normalized in order to eliminate the influence of the different dimensions of the attributes on the decision-making results. There are numerous methods for normalizing interval-valued attributes, each with its own limitations; for example, in the range transformation method the extreme values significantly influence the normalized results. This study adopts the range transformation method to normalize the decision-making matrix; Formulas (3) for benefit-type attributes and (4) for cost-type attributes rescale the interval endpoints linearly into [0, 1] using the minimum and maximum endpoints observed for each attribute. Concentrating on risky multi-attribute decision-making problems, and in view of the limitations of common normalization methods, the normalization result can be amended according to the preference of the decision makers: according to Formula (5), the results of the above steps are transformed again to obtain the final normalization results, where the value of the parameter λ_j is based on the judgment of the decision makers, taking λ_j = 1, 2, 3, and the index j is the sequence number of attribute u_j, 1 ≤ j ≤ n. A larger value of λ_j corresponds to a lower degree of recognition for its attribute [23,24]. Decision-Making Steps For the above problem, the specific steps of the risky multi-attribute decision-making method based on the interval number of normal distribution are as follows (a normalization sketch is given after this list): Step 1 According to Formulas (3) and (4), normalize the risky decision-making matrix by the range transformation method and obtain the decision-making matrix B. Step 2 In view of the limitations of the common normalization method, transform the attribute values again according to Formula (5) based on the preference of the decision makers, and obtain the decision-making matrix C. Step 3 For the normalized decision-making matrix, weight the attribute values by the occurrence probabilities p_k of the different natural states s_k according to Formula (6), and obtain the decision-making matrix D [25]. Step 4 Combining the weight vector ω of the attributes, calculate the deviation of each scheme according to Formula (7). Step 5 According to the sequencing method for interval numbers of normal distribution derived above, establish the possibility degree matrix P of pairwise comparisons, and then rank all schemes.
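As a rough illustration of Steps 1 and 2, the sketch below implements the normalization stage under stated assumptions: the range transform rescales the interval endpoints linearly into [0, 1] (a standard reading of the lost Formulas (3) and (4)), and the attitude adjustment is modeled as raising the normalized values to the power λ, which reproduces the stated behavior that a larger λ lowers the recognition of an attribute. This power form is an assumption standing in for the garbled Formula (5), and all names are illustrative.

```python
import numpy as np

def range_transform(col, benefit=True):
    # col: interval endpoints for one attribute, shape (k, 2) -> scaled to [0, 1]
    lo, hi = col.min(), col.max()
    span = hi - lo or 1.0
    if benefit:
        out = (col - lo) / span          # bigger is better
    else:
        out = (hi - col) / span          # smaller is better; endpoints swap,
        out = out[:, ::-1]               # so restore [lower, upper] ordering
    return out

def attitude_adjust(norm_col, lam):
    # Stand-in for the garbled Formula (5): a larger lam compresses the
    # normalized values, i.e. lowers the recognition of that attribute.
    return norm_col ** lam

# toy usage: 4 schemes, one cost-type attribute given as intervals
cost = np.array([[3.0, 4.0], [2.0, 5.0], [1.0, 2.0], [4.0, 6.0]])
n = range_transform(cost, benefit=False)
print(attitude_adjust(n, lam=2))
```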
Calculating Example Analysis Consider a selection problem for a new product development project [1]. A company proposes to develop an electronic product; there are 5 schemes (x_1, x_2, ..., x_5) available, and the main attributes to be considered are the development cost u_1, the sales volume of the product u_2, and the rate of return u_3. Of these three attributes, u_1 is a cost-type attribute, u_2 and u_3 are benefit-type attributes, and the attribute values of all attributes are interval numbers. Suppose the values located in the intervals comply with the normal distribution and, furthermore, that three states (s_1, s_2, s_3) exist in the future market environment, representing excellent, average, and poor states, respectively, with occurrence probabilities p = (0.3, 0.4, 0.3). Suppose the attribute weight vector provided by the decision makers is ω = (0.35, 0.25, 0.4); the risky decision-making matrix is shown in Table 1. (1) According to Formulas (3) and (4), the range transformation method is adopted to normalize the risky decision-making matrix and obtain the decision-making matrix B, shown in Table 2. (2) According to Formula (5), the attribute values are transformed again based on the preferences of the decision makers; supposing the decision makers attach comparable importance to each attribute, take λ_1 = 1, λ_2 = 1, λ_3 = 1, and obtain the decision-making matrix C, shown in Table 3. From this, the possibility-degree ranking of the five schemes is obtained; when the value of each λ_j is 1, the top-ranked scheme is optimal. (6) To examine the influence of the parameter λ on the decision-making result, suppose the values λ_1, λ_2, λ_3 correspond to the three attributes, respectively; the resulting rankings of the schemes are shown in Table 4. (Table 4: ranking of the schemes for each combination of λ values.) According to the decision-making method considering the regret aversion mentality in Reference [1], its final ranking is the same as the result of this research with λ_1 = 3, λ_2 = 3, and λ_3 = 1. From Table 4, when the decision makers are more concerned about cost and rate of return, Scheme 3 is superior to Scheme 5; when the decision makers are more concerned about sales volume and rate of return, Scheme 5 is superior to Scheme 3. The characteristic of the method in [1] is that it considers the regret aversion mentality and behavior of decision makers and obtains the ranking by calculating the utility value of each attribute as well as the regret and delight values between schemes. From Table 4, it can be seen that this method tends toward schemes with lower cost and higher rate of return; for a scheme with higher cost and higher rate of return, the ranking is lower due to higher risk and the regret aversion mentality. From the original data, both the cost and the sales value of Scheme 5 are higher than those of Scheme 3. In conclusion, the optimal scheme obtained by the method of that reference tends to be conservative. The method proposed in this paper can be adapted to the requirements of various decision makers, since λ can take different values: conservative decision makers can take a smaller λ value for the cost-type attribute u_1, while optimistic decision makers can take a smaller λ value for the benefit-type attributes. Therefore, the decision-making method proposed in this paper is closer to actual production and life in the application of multi-attribute decision-making problems, can serve decision makers with different personalities, and can be more widely applied. Conclusions For interval-number multi-attribute decision-making problems, the interval number of normal distribution undoubtedly has great practical application value, yet the distribution rule of interval numbers has received little attention. This paper utilizes the basic theory of the normal distribution, transforms the ranking of decision-making schemes into the comparison of possibility degrees, and thus obtains the optimal decision after normalizing the initial matrix based on the attitude of the decision makers.
As a result, it provides a multi-attribute decision-making approach based on the interval number of normal distribution, verifies the feasibility and effectiveness of the method through a calculation example analysis, and enriches the application of decision-making methods [26]. The method presented in this study is appropriate for fuzzy decision-making environments in view of its superiority in a number of respects: not only does the interval number of normal distribution describe fuzzy evaluation values better than the interval number of uniform distribution, but the decision-making problems described by the interval number of normal distribution are also closer to real life, and the influence of risk preference factors on decision-making behavior is taken into full consideration. Therefore, this method can be widely applied, and has both promotional and practical decision-making value.
3,753.4
2020-02-08T00:00:00.000
[ "Mathematics" ]
Operational properties of fluctuation X-ray scattering data X-ray scattering images collected on timescales shorter than rotational diffusion times using a (partially) coherent beam result in a significant increase in the information content of the scattered data. In this communication, an intuitive view of the nature of fluctuation scattering data and their properties is provided, the effect of such data on the derived structural models is highlighted, and generalizations of the Guinier and Porod laws that can ultimately be used to plan experiments and assess the quality of experimental data are presented. Introduction In biology, materials science and the energy sciences, structural information provides important insights into the understanding of matter. The link between a structure and its properties can suggest new avenues for designed improvements of materials, nanoparticles and proteins. For samples without long-range order, such as solutions of biological macromolecules, disordered organic polymers or magnetic domains, as well as (partially) ordered materials, such as self-assembled block copolymers, liquid crystals or assemblies of nanoparticles, structural information can be obtained efficiently using traditional small- and wide-angle X-ray scattering (SAXS/WAXS) techniques (Gann et al., 2012; Dyer et al., 2014). Samples lacking long-range order typically display angularly isotropic X-ray scattering patterns, where the mean intensity as a function of scattering angle is directly related to the average shape and local organization of the material investigated (Feigin et al., 1987; Glatter & Kratky, 1982). The isotropic nature of these SAXS/WAXS diffraction patterns is a result of orientational averaging of the scattering species, due to the fact that the timescale of X-ray exposure exceeds that of rotational diffusion. The advent of coherent X-ray sources (Emma et al., 2010; Ishikawa et al., 2012; Vartanyants et al., 2007; Feldhaus et al., 2013; Borland, 2013) such as free-electron lasers (FELs) and ultra-bright synchrotron light sources allows one to reduce the exposure timescale below that of rotational diffusion such that the non-isotropic intensity fluctuations (or speckle) in the scattering pattern can be resolved. The first experimental demonstration of this technique, termed by its inventor (Kam, 1977) fluctuation X-ray scattering (FXS), was provided by Kam et al. (1981) on frozen tobacco mosaic virus in the early days of synchrotron-based small-angle scattering. Subsequently, fluctuation scattering has been used to detect hidden symmetries in colloids (Wochner et al., 2009) and magnetic domains (Su et al., 2011), for the structure determination of two-dimensional particles (Pedrini et al., 2013; Chen et al., 2012; Saldin, Poon et al., 2010), and for the characterization of liquid crystals (Kurta et al., 2013) and glasses (Cowley, 2001). XFEL-based fluctuation (X-ray) scattering data and structure determination have been demonstrated from single and multiple inorganic nanoparticles (Liu et al., 2013; Mendez et al., 2014) and single polystyrene dumb-bells (Starodub et al., 2012). Information is extracted from the experimental speckle patterns by computing in-frame angular intensity correlations (Kam, 1977; Saldin, Poon et al., 2010; Saldin, Poon, Bogan et al., 2011; Saldin et al., 2009).
These angular intensity correlation curves, the FXS data, can be used for structure determination, either via reciprocal-space techniques (Saldin, Poon, Schwander et al., 2011) or via real-space methods (Chen et al., 2012; Liu et al., 2013). In earlier studies, FXS has been presented as a method for overcoming experimental and theoretical hurdles in single-particle imaging (Kam, 1977; Saldin et al., 2009). In contrast with this viewpoint, we demonstrate here that FXS is a natural extension of SAXS/WAXS. Despite the increased attention paid to fluctuation scattering due to newly constructed and future light sources, there is a significant lack of understanding of the basic properties of such data. The absence of a basic grasp of the general nature and characteristics of the data makes assessment, validation and proper use of the experimental data a challenge. This communication will provide an in-depth view of the nature of fluctuation X-ray scattering data, resulting in the derivation of Guinier and Porod relations and other operational properties. We furthermore present the effect of the progressive inclusion of FXS data when reconstructing three-dimensional models, demonstrating the superior quality of models that can be obtained from limited FXS data. The benefits of FXS data apply not only to low-resolution shape or structure determination, but extend to model-based structural refinements as well, allowing one to determine structural changes due to ligand binding or other externally induced perturbations. Results and discussion 2.1. FXS extends traditional small- and wide-angle X-ray scattering The diffraction pattern of an ensemble of molecules frozen in space and time will contain the signature of many particles, combining effects from the shape and internal structure of the particles, the so-called form factor, and their mutual arrangement in space, the structure factor. In the case of an ideal dilute solution, one can show that the mean angular intensity correlation function, C_2(q, Δφ), averaged over a large number of independent multiple-particle shots, is equivalent to that obtained from single-particle data (Kam, 1977; Saldin et al., 2009), assuming no interparticle interactions (Kam, 1977; Saldin et al., 2009; Kirian et al., 2011; Altarelli et al., 2010) and the presence of a flat X-ray wavefront during the scattering process (Lehmkühler et al., 2014; Schroer et al., 2014). The potential effects of the coherence properties of the X-ray beam on the resulting angular correlations will be discussed elsewhere. The angular correlation function can be obtained from the experimental data by averaging a large number of in-frame intensity correlation functions, C_2(q, Δφ) = (1/N) Σ_{j=1}^{N} (1/2π) ∫_0^{2π} I_j(q, φ) I_j(q, φ + Δφ) dφ, where I_j(q, φ) denotes the intensity as recorded on the j-th diffraction pattern at polar coordinate (q, φ) [q = (4π/λ) sin θ, where θ is half the scattering angle and λ is the wavelength of the incident radiation]. Note that additional cross-resolution and n-point correlations can be derived as well (Kam, 1977) but are not considered at this point. The function C_2(q, Δφ) can be further decomposed into orthogonal components, C_2(q, Δφ) = Σ_l B_l(q) F_l(Δφ), where B_l(q) are resolution-dependent weights and F_l(Δφ) is given by F_l(Δφ) = P_l(cos²θ_q + sin²θ_q cos Δφ). Here, P_l(·) is a Legendre polynomial and sin θ_q = q/(2κ), where κ is equal to the wavenumber 2π/λ, with λ the wavelength of the incident radiation. Note that, due to Friedel's law, B_l(q) terms for odd l are equal to 0 (Kam, 1977).
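As a concrete illustration of the averaging just described, the sketch below computes C₂(q, Δφ) from a stack of detector frames; it assumes the intensities have already been interpolated onto a regular polar (q, φ) grid and ignores detector-mask and geometry corrections. The circular autocorrelation over φ is evaluated with FFTs (Wiener-Khinchin theorem), and the average over frames is taken last. All names are illustrative.

```python
import numpy as np

def c2_from_frames(frames):
    # frames: array (n_frames, n_q, n_phi) of intensities on a polar grid.
    # In-frame angular autocorrelation, averaged over frames:
    # C2(q, dphi) = < (1/n_phi) * sum_phi I(q, phi) I(q, phi + dphi) >_frames
    f = np.fft.fft(frames, axis=-1)
    corr = np.fft.ifft(f * np.conj(f), axis=-1).real / frames.shape[-1]
    return corr.mean(axis=0)            # shape (n_q, n_phi): C2(q, dphi)

# toy usage: random speckle stand-in, 100 frames, 50 q-rings, 360 angular bins
rng = np.random.default_rng(0)
c2 = c2_from_frames(rng.random((100, 50, 360)))
print(c2.shape)
```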
The set of resolution-dependent expansion coefficients B_l(q), as obtained from the experimental data, is related to the three-dimensional structure ρ(x) (Kam, 1977; Saldin et al., 2009). Although the derivation relating the three-dimensional structure to the expansion coefficients B_l(q) is relatively straightforward, it does not provide an intuitive insight into the nature of the data. Traditionally, fluctuation scattering data are presented starting from the Fourier transform of the real-space structure of the sample (Kam, 1977). Additional insights are obtained when following the route typically used to derive standard relations in small-angle X-ray and neutron scattering. A graphical depiction of fluctuation scattering and how it is related to standard SAXS is shown in Fig. 1, in which the mathematical relations outlined below are referenced. Starting from the real-space structure ρ(x), the Patterson function γ(u) can be obtained via a self-convolution, γ(u) = ∫ ρ(x) ρ(x + u) dx. By switching to a spherical coordinate system and expressing the Patterson function as a spherical harmonics series, we obtain γ(r, ω_r) = Σ_{lm} γ_lm(r) Y_lm(ω_r), where γ_lm(r) are the expansion coefficient curves of the real-space autocorrelation function and Y_lm(·) is a spherical harmonic function. Given that the scattered intensity is proportional to the Fourier transform of the real-space autocorrelation function, one has I(q) ∝ ∫ γ(u) exp(iq·u) du. Expressing this intensity function as a spherical harmonics series, I(q, ω_q) = Σ_{lm} I_lm(q) Y_lm(ω_q), one obtains (Baddour, 2010) I_lm(q) ∝ ∫ γ_lm(r) j_l(qr) r² dr, where j_l(·) is a spherical Bessel function of order l. These intensity function expansion coefficients are related to the fluctuation scattering curves B_l(q) via B_l(q) = Σ_m |I_lm(q)|² (Kam, 1977; Saldin et al., 2009). From the above equations and Fig. 1, it is clear that fluctuation scattering is a natural extension of small-angle X-ray scattering. In the analysis of traditional SAXS data, the system is assumed to be statistically isotropic, resulting in the assumption that coefficients I_lm(q) for l > 0 are not experimentally accessible. The term I_00(q) is of course equal to SAXS data, as it models the mean intensity as a function of momentum transfer q. Upon further inspection of equation (9), the zero-order term reduces to a form in which γ_00(r)r² can be recognized as the pair distance distribution function P(r) (Feigin et al., 1987; Glatter & Kratky, 1982). (Figure 1 caption: The 'magic square' of scattering is expanded to show the relation between the real-space electron density ρ(r), the associated autocorrelation function γ(r) and their Fourier transforms, F(q) and I(q), respectively. When expressing γ(r) and I(q) in a spherical coordinate system, Hankel transforms relate the associated expansion coefficients. Orientation-averaged quantities, such as SAXS data and the radial distance distribution, can be obtained by selecting curves for which l = 0. The numbers in parentheses relate key operations to the corresponding equations given in the text.) Whereas SAXS data only provide experimental information about the zero-order polar Fourier transform of the real-space autocorrelation of the real-space object [equation (11)], fluctuation scattering extends the data into higher-order descriptors of the sample. Given that both SAXS and fluctuation scattering data can be described as l-th order spherical Hankel transforms of radial expansion coefficients, it should come as no surprise that certain operational properties from SAXS data can easily be expanded into the fluctuation scattering framework.
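To make the chain of transforms concrete, the sketch below evaluates the l-th order spherical Hankel transform of equation (9) by direct quadrature and assembles B_l(q) following equation (10); constant prefactors (factors of 4π and powers of i) are dropped, and the toy γ_lm(r) curve is invented purely for illustration.

```python
import numpy as np
from scipy.special import spherical_jn

def hankel_l(gamma_lm_r, r, q, l):
    # l-th order spherical Hankel transform of a radial expansion
    # coefficient curve gamma_lm(r): I_lm(q) ~ int gamma_lm(r) j_l(qr) r^2 dr
    jl = spherical_jn(l, np.outer(q, r))        # shape (n_q, n_r)
    return np.trapz(jl * gamma_lm_r * r ** 2, r, axis=-1)

def b_l(curves_lm, r, q, l):
    # B_l(q) = sum_m |I_lm(q)|^2, up to constant prefactors
    return sum(np.abs(hankel_l(c, r, q, l)) ** 2 for c in curves_lm)

# toy usage: a single Gaussian-shaped gamma_2m(r) curve
r = np.linspace(0.0, 50.0, 500)
q = np.linspace(1e-3, 0.3, 100)
gamma = np.exp(-((r - 20.0) / 5.0) ** 2)
print(b_l([gamma], r, q, l=2)[:3])
```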
Guinier and Porod laws for FXS data As is the case for SAXS data, the low-resolution behaviour of fluctuation scattering data can provide insights into the structural parameters in a model-free fashion and can be used to check the general quality of the data. Using an infinite series expression for spherical Bessel functions (Bowman, 1958) in equation (9) and truncating the series to the second term, as done when deriving the standard Guinier relation, one quickly obtains a low-q expansion of the form I_lm(q) ∝ q^l [Î_lm − q² Î_lm R²_lm / (2(2l+3))], with Î_lm = Q⁰_lm and R²_lm = Q²_lm / Q⁰_lm. The quantities Q^n_lm = ∫ P_lm(r) r^(l+n) dr are the n-th order multipole moments of the autocorrelation function, with P_lm(r) = γ_lm(r) r²; Y*_lm(ω_r) denotes complex conjugation of the spherical harmonic Y_lm(ω_r). Note that, in general, I_lm(q), Î_lm and R²_lm are complex quantities unless l = 0. Equation (12) can be substituted into equation (10), ultimately resulting in a generalized Guinier law for B_l(q), where R²_l is equal to the mean real part of R²_lm (−l ≤ m ≤ l) and B*_l is related to the average absolute value of Î_lm for a fixed value of l. Linearizing this expression, i.e. plotting ln[B_l(q)/q^(2l)] against q², yields a generalized Guinier plot whose slope and intercept provide information on the sample-dependent properties B*_l and R²_l. From this general formulation of the Guinier equation, it now becomes evident that B*_l and R²_l represent the average amplitudes of the zero- and second-order multipole moments, Q^n_lm. For l = 0, i.e. a monopole, this is synonymous with the square of the total scattering length, I(0)², and the squared radius of gyration, R_g², of the particle. For l > 0, these two quantities likewise describe the higher-order moments (quadrupoles, hexadecapoles etc.) of the particle shape. The relative magnitudes of these invariants for different values of l are influenced by the symmetry of the particle, leading to systematic absences of B_l(q) (Saldin, Poon, Schwander et al., 2011). A generalized Guinier plot from synthetic data is shown in Fig. 2, using satellite tobacco mosaic virus (STMV) as an example. The generalized Guinier equation also allows one to estimate the location of the first local maximum q̂_l in B_l(q), as well as its height B̂_l. Although B*_l and B̂_l are related, the latter quantity is on a similar numerical scale to the total scattering length, making its use more intuitive. B̂_l can be made scale-invariant by normalizing the data such that B_0(q) = 1, which is assumed in the following paragraphs. (Figure 2 caption: Model B_l(q) coefficients from STMV for l = 0 (black), 2 (red), 4 (green) and 6 (blue). The inset depicts generalized Guinier plots with linear fits. The location of the first maximum q̂_l in B_l(q), as obtained from the Guinier analysis, is indicated. The Porod behaviour of the data, characterized by an asymptotic fall-off proportional to q⁻⁸, is shown by dotted lines.) The values B̂_l and R_l can be used as model-free shape classifiers beyond what is provided by the radius of gyration (R_g = R_0) as obtained from a standard Guinier analysis. This is exemplified for l = 2 in Fig. 3, where the B̂_2 values and R_2/R_g ratio have been computed for a number of different sized cylinders, ellipsoids and a representative set of 6709 protein assemblies from the Protein Data Bank (Berman et al., 2000) (see Appendix A for details). From the cylinder and ellipsoid data, it is evident that the combination of R_2/R_g and B̂_2 provides a set of unique shape classifiers that allows one to distinguish prolate from oblate structural features.
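A sketch of the generalized Guinier analysis described above: at low q one fits ln[B_l(q)/q^(2l)] against q², reading B*_l from the intercept and R_l from the slope. The constant relating the slope to R_l² is an assumption here, chosen so that the l = 0 case reduces to the familiar SAXS limit B_0(q) ≈ I(0)² exp(−2q²R_g²/3); the paper's equation (18) fixes this constant exactly.

```python
import numpy as np

def guinier_fit(q, Bl, l, qmax):
    # Generalized Guinier: at low q, ln(B_l(q) / q^(2l)) ≈ a + b * q^2.
    # Intercept a gives B*_l; slope b gives R_l via b = -2 R_l^2 / (2l + 3)
    # (assumed prefactor, consistent with the l = 0 SAXS Guinier law).
    m = (q > 0) & (q <= qmax)
    y = np.log(Bl[m] / q[m] ** (2 * l))
    b, a = np.polyfit(q[m] ** 2, y, 1)
    return np.exp(a), np.sqrt(-b * (2 * l + 3) / 2.0)

# toy usage: synthesize l = 2 data obeying the assumed low-q law
q = np.linspace(1e-3, 0.05, 60)
Bl = 1e3 * q ** 4 * np.exp(-2 * 30.0 ** 2 * q ** 2 / 7.0)   # R_2 = 30 A
print(guinier_fit(q, Bl, l=2, qmax=0.05))                   # ~ (1e3, 30.0)
```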
The value of R_2/R_g indicates whether a shape has prolate or oblate characteristics, while the value of B̂_2 measures the extent or strength of the anisotropy, as large values of B̂_2 indicate significant deviations from sphericity. Higher-order moments can be used to expand this formalism further to provide a more fine-grained shape classification. The above generalized Guinier analysis characterizes fluctuation scattering curves at low resolution. For SAXS/WAXS data, high-resolution data trends are described by Porod's law, I(q) ∝ q⁻⁴. This trend holds for well defined three-dimensional particles. Following Porod's derivation (Feigin et al., 1987; Glatter & Kratky, 1982), but using the l-th order spherical Hankel transform and an asymptotic approximation for j_l(·) at large q (Bowman, 1958), one can show that, for large q values, Porod's law extends to fluctuation scattering data as B_l(q) ∝ q⁻⁸. An illustration of this trend for STMV is depicted in Fig. 2. The Porod behaviour of shapes such as discs [B_l(q) ∝ q⁻⁴] and rods [B_l(q) ∝ q⁻²] also displays the same characteristic fall-off (Feigin et al., 1987; Glatter & Kratky, 1982) as expected for squared SAXS intensities (Fig. 4). A practical use of the predicted Porod behaviour is to use the expected fall-off as an inverse weight when fitting molecular or bead models to data, as is done in SAXS studies (Svergun, 1999). This combination of Guinier and Porod analyses provides a set of model-independent tools to characterize and validate the quality of the experimental data, in the same way that Guinier and Porod analyses are used in biological small- and wide-angle scattering (Feigin et al., 1987; Glatter & Kratky, 1982). The tools presented here provide straightforward guidelines for the evaluation of experimental FXS data or can be used to plan FXS experiments. An example of the use of the generalized Guinier analysis is the prediction that the first maximum in B_2(q), q̂_2, is expected to lie between 2.2/R_g and 1.6/R_g (see Appendix A). If an R_g estimate is available from standard synchrotron SAXS studies, its value can be used in the experimental design of FEL-based experiments, to ensure that high-quality low-angle FXS data can be obtained. (Figure 3 caption: (a) Example shape descriptors for l = 2. The ratio R_2/R_g is plotted against the anisotropic ratio for ellipsoids (black dots) or cylinders (red squares), allowing the identification of prolate or oblate features. (b) Including the use of B̂_l as a shape classifier, normalized against B̂_0, provides further discriminative power between shapes. Small values of B̂_2 represent approximately spherical particles, while large values represent either prolate or oblate particles. The density in part (b) represents the empirical distribution of (R_2/R_g, B̂_2) pairs, as obtained from known PDB structures (see Appendix A for details).) (Figure 4 caption: The Porod behaviour of FXS data for one-dimensional rods (red) and two-dimensional discs (blue) follows the same fall-off trends as seen in SAXS/WAXS data. Curves for l = 2 are shown; similar trends for higher-order curves exist.) Increased information content The derivation of the basic properties of FXS data allows one to characterize and evaluate the quality of the experimental data, but fails to explain the reason why these types of experiment are beneficial. The principal advantage of FXS, as shown in Fig. 1, is the additional data made accessible in a fluctuation scattering experiment.
This increase in experimental information, even in limited q and l ranges, allows the recovery of more structural detail compared with using B_0(q), i.e. the SAXS data alone, in the same q range. This effect is illustrated in Fig. 5, in which average ab initio reconstructions obtained from both SAXS and FXS data are shown. The reconstructions are compared with the reference density from which the calculated data were obtained (see Appendix A for details). The reconstructions and analyses here are limited to a relatively low order of l, since these curves are experimentally more easily accessible, and thus provide a conservative overview of the benefit of FXS data compared with standard SAXS data. As is clear from Fig. 5, the addition of limited higher-order scattering information already provides a spectacular increase in reproducible details in the proposed models. One of the reasons why we do not recover the target structure in an error-free fashion is that the optimization problem is still under-constrained (Elser, 2011). However, the main benefit is that FXS is able to reconstruct or derive structural details with greater confidence than can be accomplished from the SAXS data alone, ultimately leading to a better understanding of the structure-related properties. A similar view of the use of FXS data is obtained when we consider model-based refinement techniques for SAXS/WAXS data (Petoukhov & Svergun, 2005; Gorba & Tama, 2010). Given the stark differences in results obtained in ab initio modelling (Fig. 5), the further addition of geometric restraints from a known molecular model could resolve structural ambiguities to such a level that physiologically relevant conformational changes in macromolecules could be confidently deduced from FXS data. For example, when assuming that the structure of a resting state is known, an FXS experiment on the perturbed molecule can provide significantly more data than can be obtained from a SAXS experiment alone. This is illustrated in Fig. 6, in which data from carbon monoxide-bound haemoglobin are compared with its unligated intermediate. (Figure 5 caption: The reference density (a) shows significant detail in the core of the virus, which is largely absent when only SAXS data are used (b) but which is reproduced, with increasing quality, when terms up to l = 6 (c) and l = 12 (d) are considered. Another striking improvement is the distinctly non-spherical outer boundary of the particle when fluctuation scattering data are used. The second row [parts (f)-(h)] displays the associated standard deviations in the electron density as obtained from the ten independent aligned reconstructions. The black bar [parts (a)-(d) and (f)-(h)] represents 10 Å. The bottom row [parts (i)-(k)] shows the agreement between the data (black circles) and the MOSA-refined (multi-objective simulated annealing; see Appendix A) expansion coefficients [B_0(q) red, B_2(q) green, B_4(q) blue, B_6(q) magenta, B_8(q) orange, B_10(q) cyan and B_12(q) yellow] for SAXS [part (i)] and for fluctuation scattering [parts (j)-(k)]. The error bars represent the standard deviation from the ten reconstructions.) The relative difference in the data at the Shannon sampling points (Feigin et al., 1987) is depicted as well, indicating that the sensitivity of B_l(q) is enhanced for larger values of l. If high-order l data up to larger scattering angles are available, difference maps can be obtained as well.
It is worth noting that the extraction of a difference FXS signal will require optimal instrumental and sample conditions, as well as fine-tuned data-processing routines. This increased information content of FXS data compared with SAXS can play an important role in determining the structural foundation of dynamic processes in biology. As shown earlier (Chen et al., 2013), FXS from a mixture can be described as the component-weighted sum of curves from the individual species. By performing time-resolved FXS experiments, one can obtain B_l(q) curves for intermediate short-lived structural species, akin to standard practices in the analysis of time-resolved WAXS data at synchrotrons (Cammarata et al., 2008; Andersson et al., 2009) or, as recently demonstrated, at an FEL (Arnlund et al., 2014). Thus, the use of fluctuation scattering will ultimately lead to a more accurate depiction of the structural dynamics of macromolecules in solution. Conclusions In conclusion, we have shown that fluctuation scattering is a natural extension of traditional small-angle X-ray scattering, and that a number of operational properties translate from SAXS/WAXS into fluctuation scattering. Given the increased detail that can be obtained from fluctuation scattering data and the ever-increasing availability of X-ray sources at which these experiments can be performed, we expect that these experiments will become routine in the future. The extended standard Guinier and Porod methods can be used to validate data and characterize samples rapidly in a model-free fashion. APPENDIX A Additional details The cylinder and ellipsoid models used in Fig. 3 were obtained by generating voxelized representations of these shapes on a 41 × 41 × 41 voxel cubic grid with 40 Å edges. The set of shapes was obtained by varying their radii and lengths (cylinders) or their major and minor axes (ellipsoids) while keeping a constant volume. The anisotropy ratio as used in Fig. 3 is defined as 1 − I_z/I_x, where I_z is the moment of inertia along the axis of revolution for ellipsoids or the cylindrical axis for cylinders, and I_x is the moment of inertia perpendicular to the z axis. The anisotropy ratio is 1 for a perfect one-dimensional rod and −1 for a two-dimensional disc. Fig. 3 indicates that R_2/R_g is expected to lie between 1.25 and 1.65. Using equation (19), it follows that the first maximum in B_2(q), q̂_2, lies between 2.2/R_g and 1.6/R_g. When determining R_l and B̂_l from FXS curves for l > 0, one can either use interpolation, peak picking and equation (19), or use the generalized Guinier transform, equation (18). The empirical distribution P(R_2/R_g, B̂_2), as shown in Fig. 3(b), was obtained from 6709 PDB files with low (<30%) sequence identity. The distribution shown contains 98% of the density. A small number of structures displayed R_2/R_g ratios below 1.25 or above 1.65, with B̂_2 typically close to zero. All FXS data were computed from either the atomic coordinates (PDB models), the electron density (real-space reconstructions) or voxelized representations (cylinders and ellipsoids) using the three-dimensional Zernike polynomial expansion method (Liu et al., 2012). The maximum expansion order, n_max, was determined such that n_max ≥ q_max r_max, which resulted in n_max = 30 for STMV and n_max = 40 for haemoglobin. The B_l(q) coefficients were evaluated to a maximum momentum transfer, q_max, of 0.3 Å⁻¹ (~20 Å) for STMV (PDB code 1a34) and 0.6 Å⁻¹ (~10 Å) for haemoglobin (PDB codes 1bbb and 2hbb).
The ab initio reconstructions were obtained without the use of symmetry or connectivity restraints via a multi-objective simulated annealing (MOSA) (Smith et al., 2008) adaptation of our reverse Monte Carlo procedure (Liu et al., 2013), which has been shown to be less prone to local minima than when using an aggregated χ²-based target function. The starting model for the reconstruction was a hollow sphere with a radius of 96 Å on a 61 × 61 × 61 voxel cubic grid. (Figure 6 caption: (a) FXS data calculated from the two haemoglobin crystal intermediates 1bbb (CO-haemoglobin; green and red cartoon and dotted lines) and 2hbb (deoxy-haemoglobin; blue cartoon and solid lines) in the Protein Data Bank. The average root mean-square difference between the two intermediates was approximately 2 Å and data were computed for l ≤ 4 (black, red and green curves). (b) The relative differences, |ΔB_l(q)|/B_l(q), between the two states at the Shannon sampling points (multiples of π/d_max = 0.044 Å⁻¹) (black squares, red circles and green triangles), indicate the average increased sensitivity of B_l(q) for l > 0, as illustrated by the dotted lines. This additional sensitivity, combined with the independent nature of the higher-order curves, ultimately results in a more precise determination of macromolecular structures in solution.)
5,734.4
2015-03-20T00:00:00.000
[ "Physics" ]
A Bayesian Estimator of the Intracluster Correlation Coefficient from Correlated Binary Responses : Clustered binary samples arise often in biomedical investigations. An important feature of such samples is that the binary responses within clusters tend to be correlated. The Beta-Binomial model is commonly applied to account for the intra-cluster correlation – the correlation between responses within the clusters – among dichotomous outcomes in cluster sampling. The intracluster correlation coefficient (ICC) quantifies this correlation or level of similarity. In this paper, we propose Bayesian point and interval estimators for the ICC under the Beta-Binomial model. Using Laplace's method, the asymptotic posterior distribution of the ICC is approximated by a normal distribution. The posterior mean of this normal density is used as a central point estimator for the ICC, and 95% credible sets are calculated. A Monte Carlo simulation is used to evaluate the coverage probability and average length of the credible set of the proposed interval estimator. The simulations indicate that when the number of clusters is above 40, the underlying mean response probability falls in the range [0.3; 0.7], and the underlying ICC values are ≤ 0.4, the proposed interval estimator performs quite well and attains the correct coverage level. Even for a number of clusters as small as 20, the proposed interval estimator may still be useful in the case of a small ICC (≤ 0.2). Introduction The intraclass correlation coefficient (ICC) ρ has wide applications in biology, epidemiology and medical research. In family studies, it is used to measure the degree of intra-family resemblance with respect to characteristics such as blood pressure, weight and height. Moreover, it is used as a means of investigating the heritability of certain traits, whether such traits are continuous or dichotomous. An extensive literature, summarized in Shoukri and Ward (1985) and Donner (1986), exists for the statistical analysis of the ICC for continuous response variables using the frequentist approach. Recently, Bayesian techniques have been developed; for example, Spiegelhalter (2001). For binary traits, statistical analysis of the ICC was extensively discussed by Mak (1988), Donner, Klar and Eliasziw (1995), and Gao, Klar and Donner (1997). Less developed is the Bayesian statistical analysis of the ICC for binary outcomes, which is of considerable practical importance in many toxicological and psychological studies. Within the Bayesian paradigm, the application of multivariate normal theory for continuous outcomes to binary variables is not strictly valid, and an appropriate modeling strategy is needed. Turner, Omar and Thompson (2006) obtain the posterior distribution of ρ under the hierarchical logistic model and the beta-binomial model using MCMC (Markov Chain Monte Carlo) methods. The posterior median is used as a point estimator and 95% interval estimates are calculated. In this paper we develop a Bayesian estimator of the ICC under the Beta-Binomial model of correlated binary responses. In particular, we derive the approximate/asymptotic posterior distribution of the ICC, and use it to obtain the point and interval estimators. The remainder of the paper is organized as follows: Section 2 describes the underlying Beta-Binomial model. The proposed Bayesian estimator and its approximate posterior distribution are presented in Section 3.
To study the coverage probability and average length of the credible set of the proposed interval estimator, a simulation study is given in Section 4. This is followed by an example in Section 5, and a discussion in Section 6. The Beta-Binomial Model Consider a random sample of k clusters, each of size n_i (i = 1, 2, ..., k). We assume further that all subunits within the ith cluster, Y_ij (j = 1, 2, ..., n_i), are binary, taking one of two possible values, success or failure (coded as one and zero, respectively). The Y_ij's are conditionally independent given p_i, with Pr(Y_ij = 1 | p_i) = p_i. Let Y_i• = Σ_{j=1}^{n_i} Y_ij denote the total number of successes in the ith cluster. Thus, the conditional distribution of Y_i• | p_i is Binomial(n_i, p_i). In addition, it is assumed that the p_i's are independently and identically distributed as Beta(a, b). The correlation between any pair of responses within the same cluster, corr(Y_ij, Y_il), j ≠ l, is given by the intracluster correlation coefficient ρ = 1/(1 + a + b) (Moore, 1987). Prentice (1986) showed that the Beta-Binomial distribution is valid for ρ values falling in an extended range whose negative lower bound depends on π and on n_max, the size of the largest cluster. A positive correlation means that responses within a cluster are more alike. An example of positive correlation arises in toxicological experiments designed to study the teratogenic effects of chemical compounds on animals: fetuses within a litter tend to respond more similarly than fetuses from different litters, a phenomenon known as the litter effect. Less commonly, the observations within a cluster are negatively correlated. This occurs, for example, in family studies where children may be competing for maternal care. The focus of this paper is on applications where the ICC is positive. Following the logic of Moore (1987), the distribution of the p_i's may be reparameterized in terms of π and ρ, i.e. p_i ∼ Beta(π, ρ), with a = π(1 − ρ)/ρ and b = (1 − π)(1 − ρ)/ρ. A Bayesian Estimator of the ICC The Beta-Binomial model in (2.1) is a popular example of a hierarchical model. In the first stage, Y_i• | p_i are independent Binomial(n_i, p_i). In the second stage, p_i | π, ρ are i.i.d. Beta(π, ρ). For a full Bayesian analysis, the hyperparameters, π and ρ, are in turn drawn from a hyperprior distribution. In this paper, it is assumed that they are distributed as Uniform(0, 1), and that they are - a priori - independent of each other. The joint posterior distribution is obtained via Bayes' theorem (Carlin and Louis, 2000, p. 19):
The marginal posterior distribution of ρ is then derived by integrating π out. The evaluation of the resulting integrals is intractable and computationally difficult. An alternative is to use asymptotic techniques to obtain approximations of the posterior density. In this paper, when the number of clusters k gets large and the cluster size n remains small, g(π, ρ | y) is approximated using Laplace's method. Following the logic of Kass and Steffey (1989), under mild regularity conditions, g(π, ρ | y) is asymptotically Bivariate Normal, with mean given by the posterior mode of λ = (π, ρ) and covariance given by the inverse of the negative Hessian of the log posterior evaluated at the mode, where the prior is the one introduced on λ. When the Uniform prior on λ is adopted, which is the case here, the mode and covariance can be replaced by the MLEs, π̂ and ρ̂, and the inverse of the observed information matrix. That is, asymptotically, g(π, ρ | y) ∼ Bivariate Normal(λ̂, Σ̂), where λ̂ = (π̂, ρ̂) and Σ̂ = (Î_obs)⁻¹, the inverse of the observed information matrix evaluated at π̂ and ρ̂; its elements are the negative second derivatives of the log posterior, Î_ij = −∂² log g/∂λ_i ∂λ_j, evaluated at λ̂. By the properties of the Bivariate Normal distribution, the marginal posterior distribution g(ρ | y) is asymptotically Normal(ρ̂, σ²*), where σ²* is the corresponding diagonal element of Σ̂. Griffiths (1973) describes an approach for ML estimation of the Beta-Binomial distribution with equal cluster sizes, n = n_i (i = 1, 2, ..., k). The ML estimators of π and θ, where θ = ρ/(1 − ρ), are the solution of two likelihood equations (Griffiths, 1973) involving the cumulative frequencies S_i = Σ_{y=0}^{i} f_y (i = 0, 1, 2, ..., n), where f_y is the observed frequency of clusters with y successes (y = 0, 1, 2, ..., n). Equations (3.1) may be solved iteratively using numerical algorithms such as the Newton-Raphson method (Press et al., 2007, chap. 9) or the Jenkins-Traub algorithm (Jenkins and Traub, 1970). The second derivatives can be easily obtained to find the information matrix of π and θ, Î(π, θ). By the Delta method (Kendall and Stuart, 1986, p. 324), the asymptotic posterior distribution of ρ follows. The posterior mean, mode or median can be used as a point estimator of ρ. In this paper, a squared error loss function is adopted; therefore, the Bayes rule (the point estimator that minimizes the posterior risk) is the posterior mean. A 100(1 − α)% credible set (Bayesian confidence interval) for ρ is given by the 2.5 and 97.5 per cent quantiles of this posterior distribution.
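The Delta-method step invoked above can be made explicit. Since θ = ρ/(1 − ρ), i.e. ρ = θ/(1 + θ),

```latex
\[
\frac{d\rho}{d\theta} = \frac{1}{(1+\theta)^{2}},
\qquad
\widehat{\mathrm{Var}}(\hat\rho)
  \approx \left(\frac{d\rho}{d\theta}\bigg|_{\hat\theta}\right)^{\!2}
          \widehat{\mathrm{Var}}(\hat\theta)
  = \frac{\widehat{\mathrm{Var}}(\hat\theta)}{(1+\hat\theta)^{4}} ,
\]
```

so the approximate posterior is ρ | y ∼ N(θ̂/(1 + θ̂), Var̂(θ̂)/(1 + θ̂)⁴). With the values of the example below (Var̂(θ̂) = 0.190609, θ̂ = 1.917357), this gives Var̂(ρ̂) ≈ 0.190609/2.917357⁴ ≈ 0.00263, matching the value reported there.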
Simulation Study In order to evaluate the accuracy of the Bayesian credible set, we need to conduct a large-scale Monte Carlo simulation. Since there are many parameters involved (n, k, π, ρ), a theoretical evaluation is difficult to conduct; therefore, the simulation approach is adopted to study the coverage probability and average length of the credible set of the proposed estimator. For the case of cluster size n = 2, the situations were considered where the number of clusters k equals 20, 40, 100 and 200, the underlying mean response probability π equals 0.1, 0.2, 0.3, 0.4, 0.6, and 0.7, and the ICC, ρ, takes one of four values: 0.1, 0.2, 0.4, 0.6. A fully factorial combination of these three factors was used, giving a total of 96 combinations. For each combination, 2000 valid samples were generated. In situations where the parameter π was near the edge, i.e., outside the range [0.3; 0.7], the percentage of samples that gave invalid solutions was quite high, sometimes reaching 70%. Whenever a generated sample led to an invalid point estimate of π or ρ (i.e., when either parameter fell out of range), the sample was discarded and replaced by a new one until a total of 2000 valid samples was obtained. As for the limits of the interval estimates, if they exceeded one or fell below the minimum possible value given in equation (2.2), they were replaced by the appropriate extreme value. For each combination of the simulation factors, the coverage probability and the average length of the credible set were calculated. A SAS program (SAS Institute Inc., 2004, Version 8.2, Cary, NC, USA) was used to generate the data from the Beta-Binomial distribution. The random numbers were generated in two steps: first, the probabilities of success, p_i, were generated from the Beta distribution; in the second step, the y_i's were generated from a Binomial(2, p_i). The NSolve function for numerically solving sets of simultaneous equations in Mathematica was used to find the roots of the ML equations. Table 1 shows, for 95% nominal credible sets, the estimated coverage probability, the average length of the resulting credible sets, and the percentage of invalid samples. First, note that the estimated coverage probability is above or below the nominal level by no more than 3% in 67 out of the 96 cases, i.e., 69.79% of the time. This ratio reaches 97.22% for the cases where, simultaneously, the number of clusters is large (k ≥ 40), the ICC values are ≤ 0.4, and π is in the interval [0.3; 0.7]. Even for a number of clusters as small as 20, the proposed interval estimator may still be useful in the case of a small ICC (≤ 0.2). On the other side, the worst situation occurs when ρ = 0.6; here, the coverage probability does not attain its nominal level. Examining the average length of the confidence interval, it can be seen that, with all other conditions fixed, the average confidence interval length tends to decrease as either the number of clusters or π increases. For π ≤ 0.4, all invalid samples in this simulation study are the result of the case when f_2 = 0. The number of invalid samples increases as π decreases. Recall the following relationship between Pr[y_i1 = 1, y_i2 = 1] and p_i, and consequently π (Shoukri and Pause, 1998, p. 66): Pr[y_i1 = 1, y_i2 = 1] = π² + ρπ(1 − π). (4.1)
A small π will result in a small Pr[y_i1 = 1, y_i2 = 1], and thus the chance of obtaining f_2 = 0 will increase. For any fixed π, the percentage of invalid samples decreases as either the value of ρ increases or the sampled number of clusters, k, increases. This can also be explained by equation (4.1), where it is clear that there is a proportional relationship between ρ and Pr[y_i1 = 1, y_i2 = 1]. It also makes sense that as more clusters are sampled, the chance of obtaining two positive responses within some cluster will increase; therefore, as k increases, the probability of obtaining f_2 = 0 will decrease. By a similar logic, for π ≥ 0.6, the invalid samples are the result of the case when f_1 = 0. Based on this simulation study, it is recommended to use the proposed estimator when the sampled number of clusters is at least 40, when it is believed that the ICC values are ≤ 0.4, and when the probability of a positive response π is in the range [0.3; 0.7]. A limitation of this simulation study is that it is restricted to the case of cluster size 2. Example To illustrate the application of the proposed estimator, an example is considered. In ophthalmologic studies, the eye is the unit for statistical analysis rather than the individual. Typically, an individual contributes two eyes' worth of information whose values might be correlated; each person is a cluster of size two. To obtain valid inference, the statistical analysis must account for the effect of the intracluster correlation. Berson, Rosner and Simonoff (1980) and Rosner (1982) describe an ophthalmologic study conducted at the Massachusetts Eye and Ear Infirmary from 1970 to 1979 of an outpatient population of 216 persons aged 20-39 with retinitis pigmentosa (RP). This population was classified into the genetic types of autosomal recessive RP (AR), autosomal dominant RP (DOM), sex-linked RP (SL), and isolate RP (ISO) for a study of differences among these four groups on certain measurements. The details of the study design and the procedures for genetic classification are given by Berson, Rosner and Simonoff (1980). In the current paper, the binary outcome of interest is the best corrected Snellen visual acuity (VA). An eye is considered affected if the VA is 20/50 or worse, and normal if the VA is 20/40 or better. The number of affected eyes for the given sample is presented in Table 2. Hence, S_0 = f_0 = 92, S_1 = 129, and S_2 = 216. The goal is to estimate the ICC for this outpatient population using the Bayesian estimator described in Section 3.
The MLE estimators of π and θ are the solution of the two likelihood equations (5.1). Equations (5.1) can be easily solved using a root-finding routine in mathematical or statistical computer software such as Mathematica. Using the NSolve function of Mathematica, the solution of equations (5.1) is found to be π̂ = 0.488426 and θ̂ = 1.917357. From the covariance matrix, var(θ̂) = 0.190609, and the MLE of ρ is ρ̂ = θ̂/(1 + θ̂) = 0.657224. By the Delta method, var(ρ̂) = 0.00263139. The asymptotic posterior distribution of ρ | y is N(0.657224, 0.00263139). The proposed point estimate of the ICC is given by the posterior mean, which equals 0.657224. A 95% credible set for the ICC is [0.55668148, 0.7577665]. Rosner (1982) analyzed the same data set using the classical approach, and similar results were obtained. The effective number of units within a cluster, denoted by e, is used in his analysis. Easy manipulation shows that the ICC is related to e through the function ρ = 2/e − 1. The MLE of e obtained by Rosner is ê = 1.207; therefore, the MLE of the ICC is ρ̂_Rosner = 0.657001. Discussion We have presented a Bayesian estimator for the ICC under the Beta-Binomial model. The approach is based on Laplace's method to approximate the posterior distribution and moments of the ICC given the data. The technique is asymptotic, that is, valid for a sufficiently large number of clusters. An advantage of adopting a Bayesian perspective is the possibility of incorporating prior beliefs on likely values of the ICC. The approach is flexible enough to accommodate informative and noninformative priors for π and ρ; in the current study a Uniform prior has been adopted. When a non-uniform prior on π and ρ is used, the mean and variance of the Normal approximation are given by the posterior mode and the inverse of the negative Hessian of the log posterior evaluated at the mode (Kass and Steffey, 1989). As noted by Turner, Omar and Thompson (2006), ICC values are generally reported unaccompanied by confidence intervals, which makes them of limited value, as it would in the estimation of any other parameter. The current paper provides point and interval estimators for a single ICC. The proposed approach is simple, relatively easy to implement, and is not computer intensive, which makes it useful for practitioners in the biomedical fields. This is in contrast to existing Bayesian approaches that depend on simulations and MCMC methods. It is clear that the technique used here to find the MLEs of the Beta-Binomial distribution (Griffiths, 1973) assumes equal cluster sizes. When the clusters are not of equal sizes, two nonlinear equations need to be solved numerically to obtain the posterior estimators of ρ and π. Table 1: Performance of the proposed credible set based on Monte Carlo simulation: the estimated coverage probability for the ICC at nominal level 0.95, average length of the credible set (in parentheses), and the proportion of invalid samples [in brackets]. Table 2: Distribution of the number of affected eyes.
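As a numerical cross-check of the worked example, here is a sketch that reproduces the reported estimates directly from the frequencies implied by Table 2 (f₀ = 92, f₁ = 37, f₂ = 87). It maximizes the Beta-Binomial (n = 2) log-likelihood in the (π, θ) parameterization with a = π/θ and b = (1 − π)/θ, approximates the observed information with finite differences instead of the closed-form second derivatives of equations (3.1), and applies the Delta method. Function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

f = np.array([92.0, 37.0, 87.0])     # clusters with 0, 1, 2 affected eyes

def nll(x):
    # negative Beta-Binomial(n = 2) log-likelihood in (pi, theta)
    pi, theta = x
    a, b = pi / theta, (1.0 - pi) / theta    # theta = 1/(a+b), pi = a/(a+b)
    s = a + b
    p = np.array([b * (b + 1), 2 * a * b, a * (a + 1)]) / (s * (s + 1))
    return -np.sum(f * np.log(p))

res = minimize(nll, x0=[0.5, 1.0], method="L-BFGS-B",
               bounds=[(1e-4, 1 - 1e-4), (1e-4, 10.0)])
pi_hat, th_hat = res.x                       # ~0.48843, ~1.91736

# observed information via a central-difference Hessian at the MLE
h, H = 1e-4, np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        ei, ej = np.eye(2)[i] * h, np.eye(2)[j] * h
        H[i, j] = (nll(res.x + ei + ej) - nll(res.x + ei - ej)
                   - nll(res.x - ei + ej) + nll(res.x - ei - ej)) / (4 * h * h)
var_th = np.linalg.inv(H)[1, 1]              # ~0.1906

rho_hat = th_hat / (1.0 + th_hat)            # ~0.65722
var_rho = var_th / (1.0 + th_hat) ** 4       # Delta method, ~0.00263
lo, hi = rho_hat + 1.96 * np.sqrt(var_rho) * np.array([-1.0, 1.0])
print(rho_hat, (lo, hi))                     # ~0.657, ~(0.557, 0.758)
```

The point estimate and interval agree with the values quoted above, 0.657224 and [0.55668148, 0.7577665].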
3,997
2021-07-10T00:00:00.000
[ "Computer Science", "Mathematics" ]
ZnO Nanogenerator Prepared from ZnO Nanorods Grown by Hydrothermal Method In this study, a zinc oxide (ZnO) film was deposited by sputtering on an indium tin oxide (ITO) glass substrate. ZnO nanorods were then grown on the film by the hydrothermal method and assembled with a gold electrode to fabricate a nanogenerator. The ZnO nanostructure and nanogenerator were analyzed by field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD), and the measurement of current–voltage characteristics. The FE-SEM results show that the length of the ZnO nanorods increased with the growth time, and the optimal dimensions of the ZnO nanorods were a length of 2 μm and a diameter of 130 nm at a growth time of 6 h. In the XRD pattern, ZnO (002) and (103) peaks were observed at 2θ = 34.45 and 62.51°, respectively, confirming that the ZnO nanorods were grown on the substrate. The nanogenerator was driven by an ultrasonic wave to measure its voltage and current. The highest average current and voltage were 3.46 × 10⁻⁶ A and 5.63 × 10⁻² V, respectively. These results indicate that ZnO nanorods prepared by the hydrothermal method are suitable for the fabrication of a nanogenerator. Introduction Zinc oxide (ZnO) is a II-VI semiconductor material with a direct band gap of 3.37 eV, corresponding to a wavelength in the ultraviolet region, and it also has a large exciton binding energy (~60 meV). In addition, ZnO has low resistivity and high transparency; therefore, it is considered a promising material for applications in optoelectronics. Recently, ZnO materials with the characteristics of one-dimensional (1D) nanomaterials have been realized, which can exhibit different nanostructures depending on the fabrication method. Over the past ten years, the synthesis of ZnO nanostructures such as nanowires, nanocycles, nanobelts, and nanocombs has been successfully achieved; the most representative structure is nanowires. Nanotechnology involves the use of nanomaterials with various dimensions to assemble the desired structure. Nanostructures such as quantum dots and quantum wells can be used for illumination or metering. ZnO nanomaterials not only can be used for the basic theoretical study of physical properties, such as light, electricity, magnetism, and mechanics, but also have great potential as nano-optical components, such as light-emitting diodes (LEDs), (1) field emission elements, (2) surface coatings of conductive materials, (3) laser diodes, (4)(5)(6) solar cells, gas sensors, photonic crystals, field-effect transistors (FETs), and photodetectors. (7) In addition, ZnO is also a piezoelectric semiconductor material when prepared as a film, and some types of ZnO film can be applied to surface acoustic wave (SAW) devices. (10) In this study, a ZnO film was deposited by sputtering on an indium tin oxide (ITO) glass substrate, and ZnO nanorods were then grown on the film by the hydrothermal method to fabricate a nanogenerator. The characteristics of the ZnO nanorods and nanogenerator were investigated by field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD), and current–voltage measurements, and the results reveal that a ZnO nanogenerator can be fabricated successfully by this low-cost hydrothermal method.
Experimental Procedure

A ZnO thin film was prepared by sputtering from a ZnO target (3 inch, 99.9%) onto an ITO glass substrate as a seed layer. The thickness of the ZnO seed layer was 500 nm. ZnO nanorods were grown on the seed layer in zinc nitrate hexahydrate [Zn(NO₃)₂, 0.03 M] and hexamethylenetetramine (HMTA, 0.03 M) for 3, 6, 9, or 12 h at 90 °C by the hydrothermal method; the grown ZnO nanorods were then removed from the solution, rinsed with distilled water, and dried in air. The surface morphology of the ZnO nanorods was observed by FE-SEM. The crystallinity of the ZnO nanorods was analyzed by XRD. To assemble the nanogenerator, a gold film was deposited on the etched ITO glass substrate as an electrode. The microcurrent of the nanogenerator was measured in DI water by driving it with ultrasonic waves with a frequency of 42 kHz and a power of 100 W. The ZnO nanogenerator generated an output current and demonstrated a Schottky-like current–voltage characteristic.

Results and Discussion

Nanorods with four different lengths and widths were grown on the seed layer for 3, 6, 9, and 12 h at 90 °C by the hydrothermal method. Figure 1 shows top-view and cross-sectional SEM images of the ZnO nanorods. The lengths of the ZnO nanorods grown on the seed layer for 3, 6, 9, and 12 h were 800 nm, 2 μm, 2 μm, and 2 μm and the widths were 80, 130, 150, and 170 nm, respectively. These results reveal that the length and width of the ZnO nanorods increase with the growth time, and that the length of the ZnO nanorods hardly changes when the growth time is longer than 6 h. Figure 2 shows the energy-dispersive X-ray spectroscopy (EDS) spectrum of the ZnO nanorods grown for 6 h. The result indicates that the atomic ratio of Zn to O is nearly 1 and that there are no other elements in the ZnO nanorods. Figure 3 shows the XRD pattern of the ZnO nanorods grown for 6 h. As shown in Fig. 3, a strong ZnO (002) diffraction peak was observed at 2θ = 34.45°, which indicates that the ZnO nanorods on the substrate are mainly oriented along the c-axis. In addition, a (103) peak also appears, since not all the ZnO nanorods have c-axis orientation and some of them are skewed.

To assemble the nanogenerator, a gold film was deposited on the etched ITO glass substrate as an electrode. After cleaning the substrate with acetone, isopropanol, and DI water, the etched ITO glass was placed in an oven to dry. The gold film was coated by evaporation, and its thickness was about 100 nm. Figure 4 shows a diagram of the deposition of the gold electrode on the ITO glass substrate. Figures 5–7 show the current–voltage characteristics of the ZnO nanogenerators fabricated with ZnO nanorods grown for 3, 6, and 9 h, respectively. The nanogenerators were driven by the fluctuation generated with an ultrasonic oscillator. The current–voltage characteristics show Schottky contact behavior between the metal and the oxide layer; the average currents and voltages are 9.52 × 10⁻⁸ A and 7.04 × 10⁻² V for 3 h, 3.46 × 10⁻⁶ A and 5.63 × 10⁻² V for 6 h, and 4.75 × 10⁻⁷ A and 3.38 × 10⁻² V for 9 h, respectively, and the output powers are 6.7 × 10⁻⁹ W for 3 h, 1.95 × 10⁻⁷ W for 6 h, and 1.6 × 10⁻⁸ W for 9 h. From the viewpoint of total power, the best nanogenerator is fabricated with the ZnO nanorods grown for 6 h. Note that because thinner ZnO nanorods can be bent more easily, they can generate more output power than thicker ZnO nanorods.
From Fig. 1, the lengths of the ZnO nanorods grown for 3, 6, and 9 h are 800 nm, 2 μm, and 2 μm and the widths are 80, 130, and 150 nm, respectively. Because the length of the ZnO nanorods hardly changes after 6 h, the optimal ZnO nanogenerator is fabricated with ZnO nanorods grown for 6 h. This may also be due to the ZnO nanorods grown for 6 h having a suitable length of 2 μm while having a smaller diameter than the ZnO nanorods grown for 9 h, resulting in the highest output power when driven by the fluctuation generated by the ultrasonic oscillator.

Fig. 4. (Color online) Diagram of deposition of gold electrode on ITO glass substrate.

Figs. 5–7. Current–voltage characteristics of the ZnO nanogenerators fabricated with ZnO nanorods grown for 3, 6, and 9 h, respectively.
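As a consistency check, the quoted output powers are the products of the average currents and voltages, P = I·V; for example, for the 6 h sample, (3.46 × 10⁻⁶ A) × (5.63 × 10⁻² V) ≈ 1.95 × 10⁻⁷ W, in agreement with the value given above.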
Shear Bonding Strength and Thermal Cycling Effect of Fluoride Releasable/Rechargeable Orthodontic Adhesive Resins Containing LiAl-F Layered Double Hydroxide (LDH) Filler

This study aims to investigate the shear bonding strength (SBS) and thermal cycling effect of orthodontic brackets bonded with a fluoride-releasable/rechargeable LiAl-F layered double hydroxide (LDH-F)-containing dental orthodontic resin. 3% and 5% of LDH-F nanopowder were gently mixed into the commercial resin-based adhesives Orthomite LC (LC, LC3, LC5) and Transbond XT (XT, XT3). A fluoroaluminosilicate-modified resin adhesive, Transbond color change (TC), was selected as a positive control. Fifteen brackets in each group were bonded to bovine enamel, and the SBS was tested with/without thermal cycling. The adhesive remnant index (ARI) was evaluated at 20× magnification. The fluoride release/recharge ability and cytocompatibility were also evaluated. The SBS of LC, LC3, and LC5 was significantly higher than that of XT and TC. After thermal cycling, the SBS of LC, LC3, and LC5 did not decrease and was significantly higher than that of TC. The changes in ARI scores indicate that failure occurred not only by cohesive but also by semi-cohesive fracture. The 30-day accumulated daily fluoride release of LC3, LC5, and TC without recharge is higher than 300 μg/cm². The LDH-F-containing resin adhesive possesses a higher SBS than the positive control TC. Fluoride release and rechargeability can be achieved for preventing enamel demineralization, without cytotoxicity.

Introduction

Wearing braces makes cleaning the mouth difficult, and food debris and plaque often accumulate around the structurally complex brackets in orthodontic patients [1,2]. The number and proportion of Streptococcus mutans in dental plaque increase during orthodontic treatment [3,4]. These bacteria also metabolize food to produce organic acids, causing enamel demineralization (white spots) in 60.9% of cases in just one month [5]. In addition to administering fluoride (such as toothpaste, gel, varnish, and mouth rinses) to maintain the patient's oral hygiene [6], some studies have tried to reduce the increased risk of tooth decay during the orthodontic period by changing the surface roughness and surface energy of the bracket [7], doping silver nanoparticles into orthodontic adhesives [8], or using an orthodontic adhesive that releases fluoride ions [9]. Bacteria accumulate mostly at the junction of the adhesive and the tooth surface [1]; thus, the development of orthodontic adhesives with antibacterial properties can be an effective way to control the growth of microorganisms as well as enamel demineralization [10,11]. Fluoride shows promising ability to prevent dental caries through three major mechanisms: it inhibits bacterial metabolism, inhibits demineralization, and enhances remineralization [12,13]. Systematic reviews have shown that the application of fluoride-releasing orthodontic adhesives appreciably reduces demineralization of enamel around brackets during orthodontic treatment [14,15]. Although traditional glass ionomer cements (GICs) release fluoride and prevent enamel demineralization, they have limited adhesive strength and are not recommended for clinical use [14,16]. The resin adhesive is often selected for its higher adhesive strength than the GIC adhesives; however, it does not release fluoride. Recently, we developed a LiAl layered double hydroxide (LDH) with a beneficial anion-exchange feature [17].
The LiAl LDH intercalated with fluoride ions was prepared with different particle sizes; it contains monovalent (z = 1) and trivalent matrix cations, with the general formula $[\mathrm{Li}^{+}_{1-x}\mathrm{Al}^{3+}_{x}(\mathrm{OH})_{2}]^{(2x-1)+}(\mathrm{A}^{n-})_{(2x-1)/n}\cdot m\mathrm{H_2O}$ [18]. In this study, we used the LDH-F powder as a fluoride reservoir filler in a commercial resin-based orthodontic adhesive. The shear bond strength (SBS)/thermal cycling effect, fluoride release/recharge capability, and cytotoxicity of LDH-F-containing orthodontic resin adhesives were compared with those of a commercial fluoride-releasable orthodontic adhesive.

Preparation of LDH-F Contained Orthodontic Adhesives

The synthesized LDH-F powder [17], at mass fractions of 3% and 5%, was weighed and dispersed into the non-fluoride-releasing resin-based adhesives Orthomite LC (Sun Medical Co. Ltd., Moriyama, Japan) or Transbond XT (3M Unitek, Monrovia, CA, USA) and manually stirred for 3 min. A fluoroaluminosilicate-modified resin adhesive, Transbond color change (a compomer), was selected as a positive control to compare with the LDH-F-containing orthodontic adhesives. The tested orthodontic adhesives and their codes are summarized in Table 1. For the fluoride-releasing and cytotoxicity tests, the mixtures were prepared as disks (6 mm diameter, 2 mm thick) in Teflon molds. Light curing of both sides was performed using a light-cure unit (Litex 696 LED Cordless Curing Light, Dentamerica, City of Industry, CA, USA), which has an intensity at 400–500 nm of about 1200 mW/cm², for 40 seconds.

Shear Bonding Strength (SBS) Test

One hundred bovine anterior incisors were disinfected, stored in distilled water, and tested within 3 months. Each incisor was embedded in epoxy resin, and the buccal enamel was exposed by grinding on sandpaper (#400, #600, #1000) before the shear bonding strength (SBS) test. Fifteen brackets (MicroArch, Roth type, 0.018 slot, TOMY®, Tokyo, Japan) in each group were seated, applying etchant or primer to the bovine enamel first according to the manufacturer's recommendations. A 350 g weight was applied to each bracket perpendicular to the exposed enamel for 60 seconds. After scraping off excess adhesive, each side of the specimen was photopolymerized at a 45-degree angle using a light-curing machine (Litex 696 LED Cordless Curing Light, Dentamerica, City of Industry, CA, USA) at 1200 mW/cm² for 20 seconds. After being kept in 37 °C distilled water for one day, the bracket-adhered tooth was tested using a desktop testing machine (JSV-H1000, Japan Instrumentation System, Nara, Japan) at a constant rate of 1 mm/minute according to the standard ISO 29022:2013 [19], as illustrated in Figure 1. The SBS value was calculated using the following formula:

Bond strength (MPa) = force required to debond the bracket (N) / area of the bracket base (mm²).   (1)
Figure 1. Illustration of SBS sample preparation and testing procedures. (a) Each incisor was embedded in epoxy resin and the buccal enamel was exposed by grinding on sandpaper. (b) Etchant or primer was applied to the bovine enamel according to the manufacturer's recommendations. (c) A 350 g weight was applied to each bracket perpendicular to the exposed enamel for 60 seconds. (d) The adhered bracket was de-bonded, via a metal jig with a "V"-shaped groove and a testing rod connected to the machine, at a constant displacement rate of 1 mm per minute.
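As a purely illustrative application of formula (1) — per-specimen debonding forces are not reported, so these numbers are hypothetical — a debonding force of 120 N on a bracket base of 10 mm² gives SBS = 120 N / 10 mm² = 12 MPa, using 1 N/mm² = 1 MPa.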
Thermal Cycling Test

The International Organization for Standardization (ISO) TR 11450 standard indicates that a thermocycling regimen comprising 500 cycles in water between 5 and 55 °C (dwell time ≥ 20 s) is an appropriate artificial ageing test [20]. The present thermal cycling test was performed using a hot/cool condition cycle motion system (Chung Chiao Technology Co., Taichung, Taiwan) for 700 cycles between 5 °C and 55 °C with a dwell time of 30 seconds. Incisor resin blocks with adhered brackets were prepared by the same methods as in the SBS test.

Analysis of Residual Adhesives

After the SBS test, the amount of remaining adhesive was evaluated according to the adhesive remnant index (ARI) originally developed by Artun and Bergland [21]. The debonded surfaces of the enamel blocks were observed with an optical microscope at 20× (Olympus BX40, Olympus Optical Co. Ltd., Japan). ARI scores were used as a means of defining the sites of bond failure between the enamel, the adhesive, and the bracket base. The ARI was scored "0" to "3" as follows: score "0" means no adhesive left on the tooth, score "1" means less than half of the adhesive left on the tooth, score "2" means more than half of the adhesive left on the tooth, and score "3" means almost all the adhesive left on the tooth with the mesh pattern visible. The chemical composition of the residual adhesives on the surfaces of the brackets was analyzed by energy-dispersive spectroscopy on a scanning electron microscope (JSM-6300, JEOL Ltd., Tokyo, Japan) at 500×.

Fluoride Release/Recharge Assay

In the fluoride release assay, specimens were made in triplicate; the concentrations of fluoride ions were averaged and expressed as mean ± SD in µg/cm². The specimens were immersed and stored individually in centrifuge tubes with 3 mL deionized (DI) water at 37 °C. For the fluoride measurement, the specimens were removed from the centrifuge tubes and placed into new centrifuge tubes with fresh 3 mL DI water. The remaining solution was analyzed using a fluoride ion selective electrode (Orion 9609 BNWP, Thermo Fisher Scientific, Waltham, MA, USA) after the addition of a total ionic strength adjustor and a buffer solution (TISAB-III, Thermo Fisher Scientific, Waltham, MA, USA). The measurements were made each day and lasted for 90 days. During this period, fluoride recharging was carried out daily during days 30–60 by immersing the specimens in a 1000 ppm fluoride-containing solution for 4 min. After the immersion, the specimens were immediately washed for 1 min with DI water.

Cytocompatibility

Specimens (6 mm in diameter and 2 mm thick, n = 6) were ultrasonically cleaned in DI water for 10 min and disinfected with ultraviolet light. Each specimen was then immersed in a centrifuge tube with 6 mL medium (α-MEM with 10% horse serum (Gibco, Grand Island, NY, USA) and 1% penicillin-streptomycin) and incubated at 37 °C for three days. L929 cells (5 × 10⁴ cells/well, ATCC catalog No. CCL-1, mouse fibroblasts) were treated with the conditioned medium for one day and three days, and cytotoxicity tests (in quintuplicate) were conducted using a Cell Counting Kit-8 (Sigma-Aldrich, St Louis, MO, USA) [17]. The results were compared with a blank control group and 5% dimethyl sulfoxide (DMSO, Sigma-Aldrich, St Louis, MO, USA).

Statistical Analysis

We analyzed the data by overall one-way analysis of variance (ANOVA) followed by the Bonferroni test for individual between-group comparisons using software (Origin 8.0, Microcal Software Inc., Northampton, MA, USA).
The SBS and ARI scores of the bonding-material groups after thermal cycling were compared by two-way ANOVA. The two factors for the ANOVA were adhesive and thermal cycling.

Before Thermal Cycling

The SBS values are significantly different among the groups (Figure 2, P ≈ 0). LC3 and LC5 showed higher SBS than the LC group (P = 0.036, P = 0.002), and the SBS of TC and XT is significantly lower than that of LC, LC3, and LC5 (P < 0.05). ANOVA shows that the ARI scores are significantly different (Table 2, P = 1.8 × 10⁻⁸). The ARI scores of XT, XT3, and TC are mainly "3", while the scores are more diverse for LC and LC3. Most of the LC5 ARI scores fall in "2" (86.7%). The chemical compositions of the residual adhesives on the de-bonded brackets are summarized in Table 3. The F concentration in the LDH-F-containing resin adhesives increased with the doping percentage. However, the F concentration in the residuals of the 5 wt.% LDH-F adhesive sample (2.71 at.%) is still much lower than that of TC (8.83 at.%).

The SBS of LC, LC3, LC5, and TC was significantly different (Figure 3, P = 5.5 × 10⁻¹⁶). After the thermal cycling test, the Bonferroni test shows that adding 3% or 5% LDH powder to LC does not affect the SBS; although the mean SBS of LC3 and LC5 decreases after thermal cycling, there are no statistical differences. On the contrary, the SBS of TC decreases after the thermal cycling test (P = 0.0094) and is also lower than the SBS of LC, LC3, and LC5 after thermal cycling (P < 0.0001). By two-way ANOVA, both thermal cycling and adhesive type significantly affect the SBS (for thermal cycling, P = 0.04; for different adhesives, P = 1.1 × 10⁻¹¹). The Bonferroni test shows that the SBS is affected by the thermal cycling test (P = 0.04). The post hoc comparison also demonstrated that the SBS of TC is lower than that of LC, LC3, and LC5 considering both before and after the thermal cycling test (P < 0.0001).

The residual resins on the bases of the brackets after thermal cycling were observed by SEM (Figure 4). The meshes of the stainless steel bracket base were partially covered with residual adhesives, and the fracture surface of the adhesives was irregular (Figure 4a–c). This type of fracture occurs when cracks propagate both at the bracket/adhesive interface and within the adhesive, which leaves some adhesive on the enamel, so the ARI score falls into "1" or "2". On the contrary, Figure 4d shows that the meshes are completely visible, which indicates that the adhesive was left on the enamel (ARI "3").
As shown in Table 4, except for the TC group, the ARI scores of LC, LC3, and LC5 all increased after thermal cycling, and the percentage of score "3" became dominant. However, the Bonferroni test indicated that the ARI scores of LC-T, LC3-T, LC5-T, and TC-T are not statistically different. By two-way ANOVA, thermal cycling changes the ARI scores significantly (P = 0.006), and the differences between adhesives are significant (P = 1.1 × 10⁻⁶). The ARI scores of LC and LC5 are higher than that of TC considering both before and after the thermal cycling test (P = 2.42 × 10⁻⁶, P = 1.83 × 10⁻⁴).

Fluoride Release and Recharge-Ability

The positive control compomer (TC) presents the highest fluoride ion release in all test periods, as shown in Figure 5a. The releasing burst in the initial stage (~7 days) is obvious in the first release period (Figure 5b) for all groups except LC, which is a non-fluoride-releasing resin adhesive. During the fluoride recharge period, the daily released fluoride ion concentration of LC3 and LC5 rose to a stable plateau (~20 µg/cm²) (Figure 5c). In the secondary release period, the daily fluoride ion release gradually decreases to the initial level, although the fluoride release of LC3 and LC5 is still slightly higher than that of LC (Figure 5d). The accumulated fluoride ion release in the fluoride recharge period is 410.8 µg/cm², 550.9 µg/cm², and 513.4 µg/cm² for LC, LC3, and LC5, respectively. However, only LC3, LC5, and TC can release over 300 µg/cm² in the secondary release period (Table 5).
Figure 6 shows the cytocompatibility of LC, LC3, LC5, and TC compared with the blank control and DMSO at day 1 and day 3. The L929 cells cultured with the conditioned media have similar viability among the experimental groups, higher than that with DMSO, on the first day. After three days, there are significant differences between the experimental groups (P = 4.09 × 10⁻⁹). The L929 cells in the TC group present the lowest viability compared with LC, the LDH-F-containing LC, and the blank control. Although the cell viability of LC5 was slightly lower than that of LC3 (P = 0.044), it is still above 70% of the blank control, which indicates acceptable biocompatibility according to the ISO 10993 standard [22].
Discussion

During orthodontic treatment, the bracket must be firmly adhered to the teeth through the adhesive to withstand the forces generated by the orthodontic devices. Previous studies indicated that, to meet clinical needs, the SBS must be above 5.88–7.84 MPa [23,24]; some believe the SBS should be at least 8–9 MPa [25]. In this study, all groups met these criteria before the thermal cycling test. Stress concentration occurs between the bracket, adhesive, and teeth due to differences in thermal expansion coefficients, eventually inducing crack propagation and reducing the SBS significantly [26–28]. Our results demonstrate that adding LDH-F to the resin does not significantly reduce the SBS after thermal cycling, but TC-T has a significantly lower SBS (7.6 ± 3.0 MPa), which is at the acceptable margin. The SBS of the resin adhesive increased after the addition of LDH-F, which may be attributed to a dispersive strengthening mechanism. However, due to the abundant hydrophilic hydroxyl groups in its layers, LDH-F intrinsically has a poor affinity with hydrophobic resins [29]. In our previous study, the addition of 3% or 5% LDH-F nanoparticles did not strengthen the resin composite [17], but in this study the SBS of LC3 and LC5 was significantly increased, by 22% and 17% compared with LC. The possible reason is that the monomers in the previously used resin composite (Esthet-X Flow, containing modified bisphenol A diglycidyl dimethacrylate (BisGMA), its ethoxylated version (BisEMA), and triethylene glycol dimethacrylate (TEGDMA)) are hydrophobic. Instead, in order to infiltrate the demineralized matrix, the primers and resin adhesives contain hydrophilic monomers such as 2-hydroxyethyl methacrylate (2-HEMA) or 10-methacryloyloxydecyl dihydrogen phosphate (10-MDP). The higher affinity between LDH-F and the hydrophilic moiety results in good dispersion of LDH in the polymer, which limits the mobility of the polymer chains and enhances the strength/toughness of this nanocomposite [30]. Although a previous study did not find a correlation between the ARI score and SBS [31], in our findings the changes in ARI scores reflect changes in the fracture modes of different adhesives and the effect of the thermal cycling test. Table 2 shows that the ARI scores of XT and TC are 100% "3", which means the crack runs between the metal bracket and the adhesive and de-bonding occurs in a completely non-cohesive mode. This result is consistent with a previous study by Sharma et al. [24]. On the contrary, the ARI scores shift to lower values ("0" to "2") in the LC and LDH-F-containing resin adhesives, which demonstrates that a semi-cohesive or cohesive mode participates in the de-bonding process.
The ARI scores are similar between the adhesives after thermal cycling; notably, the percentage of score "3" increased in the LC and LDH-F-containing resin adhesives (Table 4). Removing the residual adhesive from the teeth after removing the brackets requires a very delicate technique to avoid damaging the teeth, and reducing the amount of residual adhesive on the teeth can shorten the clinical operation time [8,32]. Previous studies have suggested that inhibiting the growth of oral streptococci requires at least 100–200 µg/mL of fluoride ion per day [23,33]. During the recharge period, the daily fluoride ion release of the LDH-F-containing resin composite is around 20 µg/mL. Even with compomer adhesives, the daily fluoride ion release cannot reach the proposed level after the initial burst (>200 µg/mL/day occurs only in the first two weeks). Featherstone et al. concluded that the mineralizing ions in saliva are sufficient, in that (1) when the fluoride ion concentration is greater than 0.03 ppm (equal to 9.5 µg/cm² in this study), remineralization is activated, and (2) the most ideal concentration for remineralization is 0.08 ppm (equal to 89.2 µg/cm² in this study) [12,34]. In addition, studies have shown that when fluoride ions are continuously present in saliva, plaque, or enamel, the demineralization of the teeth can be reduced, hence halting the progression of caries and preventing secondary caries [7,12,13]. Dijkman et al. suggested that an accumulation of 200–300 µg/cm² of fluoride ions within one month completely inhibits the formation of cavities [23]. In the present study, the accumulated fluoride ion concentrations of the LDH-F-containing resin adhesives were higher than 300 µg/cm² in all three test periods, but the LC group was able to accumulate to this concentration only during the recharge period. The extract medium of TC, containing the burst of fluoride ions, appears to be toxic to L929 cells after culture for 3 days. However, adding 3% or 5% LDH-F to the LC resin gives the same good biocompatibility as the LC resin itself. In the present study, the medium for the SBS tests with thermal cycling and for the fluoride release/recharge tests was distilled water (pH 6.8). In practice, however, the dental orthodontic resin functions in a saliva environment, which possesses complex ingredients (buffer electrolytes, enzymes, and cells) as well as a pH that fluctuates while eating. The effect of salivary pH on the SBS between adhesive resins and orthodontic brackets has been studied [35]. After samples were immersed in saliva with different pH values (pH 3.8, 4.8, 5.8, and 6.8) for two months, the mean SBS value in the pH 3.8 group was significantly lower than that in the other groups, and the differences between the other groups were not significant. That is, if the oral environment stays at a relatively neutral pH, the bonding of the bracket should not be influenced. The second issue regarding the limitation of the testing medium (distilled water) is its effect on fluoride release. A previous study found that the test medium indeed affects fluoride release in some GIC restorations [36]. They concluded that the difference between the fluoride ions released in distilled water and in saliva was probably due to the formation of CaF₂ precipitates on the surface of the material. We have conducted an experiment to understand the difference in fluoride release from LDH-F between distilled water and saliva.
The preliminary result shows that when LDH-F was at a high concentration (15 mg in 15 mL), an initial burst of fluoride ions was obvious in the water group (but not in the saliva group). However, at a low concentration (1.5 mg in 15 mL), the real-time fluoride ion concentration and accumulated fluoride release curves were similar between the water group and the saliva group. Therefore, the results of this study can be applied to the normal oral environment, but the acidic environment and the application of LDH to other dental resins must be studied further.

Conclusions

The addition of LDH-F significantly increases the shear bonding strength of the orthodontic resin adhesive and additionally provides effective fluoride ion release and rechargeability. The de-bonding mode of the LDH-F-containing resin adhesives changed slightly after the thermal cycling test, but the shear bonding strength was not significantly reduced. The cytocompatibility test demonstrated that the addition of 5% LDH-F to the dental orthodontic resin adhesive provides acceptable biocompatibility.
The Effects of Harvesting on the Dynamics of a Leslie–Gower Model

Introduction

In this paper, we consider a Leslie–Gower predator–prey model with a harvesting effect, with x₁(0) ≥ 0 and x₂(0) ≥ 0, where x₁(τ) and x₂(τ) are the prey and predator population densities, respectively; r, s, a₁, a₂, n, e₁, e₂ > 0; and τ is the time. Note that a₂x₂/(n + x₁) is the Leslie–Gower term, in which the carrying capacity of the predator's environment is a linear function of the prey size, (x₁/a₂) + (n/a₂). The term a₁x₁x₂/(n + x₁) is the number of prey consumed by the predator in unit time, which shows that when the prey x₁ is severely scarce, the predators can switch over to other populations as food. The constants r and s are the intrinsic growth rates of the prey and predator, respectively, and e₁ and e₂ denote the harvesting efforts for the prey and predator, respectively. Since the first prey–predator dynamical model, the Lotka–Volterra model, was built in the 1920s by the mathematicians Lotka and Volterra, more and more researchers have become interested in such issues; they approach the problem from different angles, and many important results have been obtained [1–10]. In particular, in 2003, Aziz-Alaoui and Daher Okiye [11] considered the following Leslie–Gower predator–prey model, where x₁ is the number of prey and x₂ is the number of predators. The existence and stability of the fixed points were studied by using a Lyapunov function. In 2006, Lin and Ho [12] discussed the local and global stability of system (2) by using the Poincaré–Bendixson theorem and Dulac's criterion. Harvesting is an effective way for humans to control the sizes of predator and prey populations so that the populations continue to develop healthily and produce good economic benefits [13–16]. Academically, researchers often consider only the harvesting of prey in order to control the size of the population.
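The displayed equations of [11] are not reproduced in this copy; for reference, that model is commonly written as (with r₁, r₂, b₁, k₁, k₂ denoting that paper's growth, intraspecific-competition, and saturation constants, which are not the symbols defined above):

$$\frac{dx_1}{dt}=\left(r_1-b_1x_1-\frac{a_1x_2}{x_1+k_1}\right)x_1,\qquad \frac{dx_2}{dt}=\left(r_2-\frac{a_2x_2}{x_1+k_2}\right)x_2.$$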
In 2010, Zhu and Lan [17] investigated Leslie–Gower predator–prey systems. In 2013, Gupta and Chandra [18] discussed a Leslie–Gower predator–prey model with harvesting on the prey and with the environment providing the same protection to both the predator and the prey. For ecological balance and healthy economic development, for fisheries, wildlife resources, etc., we need to consider harvesting not only of the prey but also of the predator. Therefore, in this paper, we study the Leslie–Gower predator–prey model (1) with harvesting on the prey and the predator.

Lemma 1. For the comparison equation referenced below, where α is a positive constant, t ≥ 0, and x(0) > 0, the stated asymptotic bound holds.

Lemma 2 (see [20,21]). Consider the system Ẋ = f(X, α) and suppose that f(X₀, α₀) = 0, the n × n Jacobian matrix J ≡ Df(X₀, α₀) has a simple eigenvalue s = 0 with eigenvector V, and the transpose Jᵀ of the Jacobian matrix has an eigenvector W corresponding to the eigenvalue s = 0. Then the system Ẋ = f(X, α) experiences a transcritical bifurcation at the equilibrium point X₀ as the control parameter α passes through the bifurcation value α = α₀ if the stated transversality conditions are satisfied.

The rest of this paper is organized as follows. In Section 2, we study the boundedness of solutions. In Section 3, we discuss the existence of equilibrium points. In Section 4, we discuss the stability of the equilibrium points.

Boundedness of Solutions

In this section, we prove that every solution of system (5) is positive and uniformly bounded for the given initial conditions.

Theorem 1. For any given initial conditions (x₀, y₀) ∈ R²₊, the solution (x(t), y(t)) of system (5) exists, is unique, is positive, and is ultimately bounded.

Proof. The function f(x(t), y(t)) is continuously differentiable for (x, y) ∈ R²₊, so for any given initial conditions (x₀, y₀) ∈ R²₊ the solution (x(t), y(t)) of system (5) exists and is unique. Furthermore, the x-axis and the y-axis are solutions of system (5); by the uniqueness of solutions, the solutions (x(t), y(t)) of system (5) with initial values x₀ > 0, y₀ > 0 cannot cross the x-axis or the y-axis. Next, we show that the solutions (x(t), y(t)) of system (5) with initial values x₀ > 0 and y₀ > 0 are ultimately bounded. From system (5), combining with Lemma 1, inequality (10) is established, and applying Lemma 1 again yields the stated bound.

Existence of Equilibria

In order to find the equilibrium points of system (5), we set f(x(t), y(t)) = 0. It is clear that equation (12) has a trivial solution E₀ := (0, 0). Furthermore, by calculation, we find the other solutions of equation (12); the solution set (13) is shown in Figure 1. Therefore, we have the following results.

Theorem 2. System (5) admits x-axial and y-axial equilibria under the following conditions: (i) the x-axial equilibrium E₁ is a boundary equilibrium of system (5) if and only if the corresponding condition holds; (ii) the y-axial equilibrium E₂ is a boundary equilibrium of system (5) if and only if the corresponding condition holds.

Theorem 3. System (5) admits a unique positive equilibrium E₃ if and only if the corresponding condition holds.

Remark 2. The existence regions for the equilibrium points of system (12) are shown in Figure 2. Equilibrium points E₁ and E₂ exist but the positive equilibrium point E₃ does not in region II, and equilibrium points E₁, E₂, and E₃ coexist in the remaining region.

Stability of Equilibria

Proof. First, we show that the equilibrium E₀ is unstable; the Jacobian matrix at E₀ is given by the corresponding expression. Second, we show that the equilibrium E₁ is unstable; the Jacobian matrix at E₁ is given by the corresponding expression.

Proof.
The Jacobian matrix at E₂ is given by the corresponding expression; therefore, E₂ is locally asymptotically stable. If the stated conditions hold, then the positive equilibrium E₃ is locally asymptotically stable. Furthermore, if α < ρβ, then the positive equilibrium E₃ is globally asymptotically stable.

Proof. The Jacobian matrix at E₃ is given by the corresponding expression, with its trace and determinant computed by a direct calculation; under the stated condition, we obtain that the positive equilibrium E₃ is unstable.

Local Bifurcation

If the stated conditions hold, then system (5) undergoes a Hopf bifurcation with respect to the bifurcation parameter m around the equilibrium point E₃ = (x∞, y∞). Furthermore, the direction of the Hopf bifurcation is subcritical and the bifurcating periodic solutions are orbitally asymptotically stable if condition (41) holds; the direction of the Hopf bifurcation is supercritical and the bifurcating periodic solutions are unstable if condition (42) holds.

Proof. From (39), we have m_H > 0. The Jacobian matrix of system (5) evaluated at the point E₃ is given by the corresponding expression. The trace z = Tr(J_E₃) and the determinant D = Det(J_E₃) of the Jacobian matrix J_E₃ are computed accordingly. In addition, we have (∂z/∂m)|_{m_H} < 0; this guarantees the existence of a Hopf bifurcation around E₃. We translate the equilibrium E₃ to the origin by the translation x̄ = x − x∞ and ȳ = y − y∞; for the sake of convenience, we still denote x̄ and ȳ by x and y, respectively. System (5) then becomes (47), which we rewrite as (48). Denote the eigenvalues of J_E₃ by φ ± iω with φ = z/2, where G = ω(m + x∞)/(αx∞). By the transformation (53), in order to determine the stability of the periodic solution, we need to calculate the sign of the coefficient b(m_H), where all partial derivatives are evaluated at the bifurcation point (0, 0, m_H). Evaluating ω₀, G₀, and N₀ at m = m_H and combining with (41), we have b(m_H) < 0; therefore, according to the Poincaré–Andronov Hopf bifurcation theory, the direction of the Hopf bifurcation is subcritical and the bifurcating periodic solutions are orbitally asymptotically stable. In addition, combining with (42), we have b(m_H) > 0; therefore, the direction of the Hopf bifurcation is supercritical and the bifurcating periodic solutions are unstable.

Numerical Illustrations

In this section, we perform numerical simulations of system (5). Figure 3 shows that E₀ is an unstable node, E₁ is a saddle point, E₃ does not exist, and E₂ is asymptotically stable, with every orbit tending to it. Figure 4 shows that E₀ is an unstable node, E₁ is a saddle point, E₂ is unstable, E₃ is unstable, and there is a limit cycle around E₃ to which every orbit tends. Figure 5 shows that E₀ is an unstable node, E₁ is unstable and E₂ is also unstable, but E₃ is asymptotically stable and every orbit approaches this equilibrium.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
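Since the displayed equations of system (5) are lost in this copy, the snippet below only illustrates how phase portraits like those in Figures 3–5 are typically produced: a classical fourth-order Runge–Kutta integration of a harvested Leslie–Gower-type system. The right-hand sides f and g, the extra logistic coefficient b, the parameter values, and the initial state are illustrative assumptions assembled from the terms described in the Introduction, not the paper's actual system (5).

```kotlin
fun main() {
    // Hypothetical parameter values, chosen only for illustration.
    val r = 1.0; val s = 0.5; val a1 = 1.0; val a2 = 1.0
    val n = 1.0; val e1 = 0.1; val e2 = 0.1; val b = 0.2 // b: assumed self-limitation

    // Assumed harvested Leslie-Gower-type right-hand sides (not the paper's system (5)).
    fun f(x: Double, y: Double) = x * (r - e1 - b * x) - a1 * x * y / (n + x) // prey
    fun g(x: Double, y: Double) = y * (s - e2 - a2 * y / (n + x))            // predator

    var x = 0.8; var y = 0.5      // initial densities
    val h = 0.01                  // RK4 step size
    repeat(5000) {                // integrate to t = 50
        val k1x = f(x, y);                         val k1y = g(x, y)
        val k2x = f(x + h / 2 * k1x, y + h / 2 * k1y); val k2y = g(x + h / 2 * k1x, y + h / 2 * k1y)
        val k3x = f(x + h / 2 * k2x, y + h / 2 * k2y); val k3y = g(x + h / 2 * k2x, y + h / 2 * k2y)
        val k4x = f(x + h * k3x, y + h * k3y);         val k4y = g(x + h * k3x, y + h * k3y)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    }
    println("state after integration: x=%.4f, y=%.4f".format(x, y))
}
```

Recording (x, y) at each step instead of only printing the final state yields the orbits whose limiting behavior the figures classify.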
Optical Character Recognition Mobile App for Address Matching in Integrated Social Welfare Data Verification Process

The Ministry of Social Affairs of the Republic of Indonesia maintains Integrated Social Welfare Data, called Data Terpadu Kesejahteraan Sosial (DTKS), and uses it as the basis for the distribution of Social Fund Assistance, or Bantuan Sosial (BANSOS). In practice, many BANSOS recipients were not impoverished and did not qualify as targets of this program. One of the reasons is that weaknesses in the system leave the potential for data manipulation during the verification and validation processes. Therefore, a system improvement is needed to minimize the possibility of the data being manipulated. This study proposes a digital verification system using Optical Character Recognition (OCR) and reverse geocoding to make sure that a registrant provides their own citizen ID card and a house address that meets the qualifications. These technologies in the developed mobile app perform address matching between the address extracted from the citizen ID card and the address obtained from reverse geocoding. Trials of this application achieved a success rate of 95.7%.

Introduction

The impoverished and homeless in Indonesia are managed by the state, in accordance with the 1945 Constitution of the Republic of Indonesia, by fulfilling their fundamental needs for human well-being. The management of the impoverished is regulated by Law Number 13 of 2011. This law states that the management of the impoverished can be done through community institutional empowerment; capacity enhancement of the poor to develop basic skills and abilities to do business; security and social protection to ensure a sense of safety for the poor; partnerships and cooperation between stakeholders; and/or coordination between ministries/agencies and local governments.

In order to manage the impoverished, the Ministry of Social Affairs of the Republic of Indonesia operates the Social Fund Assistance program (BANSOS). Through this program, the government provides support in the form of money or goods given to recipients. The distribution of BANSOS is selective and not continuous [1]. The recipient selection process is conducted by performing a series of data collection, verification, and validation procedures determined by the Ministry of Social Affairs of the Republic of Indonesia [2].

The data collection of the impoverished is organized by the institution that handles government affairs in the field of statistics. Human resources in the field of social welfare in the sub-district or village conduct a verification and validation process of the recorded data of the impoverished. The results of the verification and validation are reported to the regent or mayor, then submitted to the governor and forwarded to the minister [2]. The data that has been verified and validated must be integrated and technology-based, under the responsibility of the minister. This integrated data is used by relevant ministries or agencies as the basis for distributing BANSOS. This integrated data owned by the Ministry of Social Affairs is called Integrated Social Welfare Data (DTKS) [3].
The facts in the field differ from what should happen. Many recipients of BANSOS were identified as people who should not have been targets of the program. The Minister of Social Affairs found several cases, such as the registration of a BANSOS recipient who actually had a large house. There was also a case where a village head entered his own name as a recipient of BANSOS [4].

The Social Affairs Department needs to strictly ensure that registrants in the DTKS are the right targets of the BANSOS program. This assurance can be provided during the verification and validation processes. This study developed an application that helps the social affairs department during the verification stage of DTKS registrant data. The Ministry of Social Affairs already has an application called the Social Welfare System Next Generation (SIKS-NG) to manage the Integrated Social Welfare Data (DTKS). However, verification and validation of DTKS using this application are still not optimal [5]. The authorized officer inputs the required data through the SIKS-NG application, but the verification and validation processes are still performed manually. As a result, there is still the possibility of manipulating the data.

The existing DTKS verification system requires technological enhancements to reduce the possibility of data manipulation. This study develops an Optical Character Recognition (OCR) and reverse geocoding application that supports the DTKS data input verification process by matching the address extracted from the Indonesian citizen ID card (KTP) with the address obtained from reverse geocoding at the time the photo of the KTP is taken. The OCR implemented in this application helps the officer from the department of social affairs input the registrant's personal information automatically by taking an image of the registrant's KTP. The reverse geocoding implemented in this application retrieves the address of the device used to take the KTP image during the household visit. The address extracted from the KTP is then compared with the address retrieved from the reverse geocoding process. This address matching process is needed to ensure that the registrant provides their own KTP and a house address that meets the qualifications. If the address from the KTP extraction matches the address from reverse geocoding, then the registrant information is considered valid data input. If this condition is met, the verification process proceeds to the next stage, which is not discussed here because it falls outside the scope of the research problem addressed in this paper.

This application was developed for mobile devices running Android OS. The reasons for choosing mobile as the platform for this application are portability and the camera requirement. The application is intended to be used during household visits in the DTKS data input verification process and needs a camera attached to the device to capture images for OCR; hence, a mobile device is the most convenient to use. The results of this development are expected to help the Ministry of Social Affairs and related parties minimize the possibility of registering non-targeted BANSOS recipients. This study is organized into several sections. Section 2 elaborates on the works related to this study. Section 3 describes the research methodology. Section 4 discusses the results of the study. The last section concludes the research.
Related Works

This research uses OCR and reverse geocoding to perform address matching for the DTKS data input verification process. Several previous studies relate to the topic of this study. One of these studies covers verification and validation for two kinds of BANSOS, namely the Family Hope Program (PKH) and Non-Cash Food Support (BNPT), to check registrant qualifications. The verification process in that study was done using the Social Affair Geographic Information System (SAGIS), an application developed by the Ministry of Social Affairs of the Republic of Indonesia to collect registrant data [6]. This system has goals similar to those of the mobile application developed in this study. The difference is that its data input and verification are performed manually, whereas in this study the DTKS data input verification process is performed automatically using OCR and reverse geocoding.

Another study implements reverse geocoding for site visit documentation. In that study, the application performs a reverse geocoding process to obtain the address and then attaches it to images taken during the site visit. The result of this process ensures that the images were actually taken at the site location. This is similar to a feature in the application developed in this study: the developed DTKS data input verification application ensures that the KTP image is taken at the same location as the KTP owner's home. If the addresses match, then the data is considered verified and proceeds to the next verification step [7].

Another study, with the Ministry of Social Affairs Integrated Welfare Data as a case study, also developed a mobile application for validating field data surveys. That study developed a mobile application for the overall DTKS data input verification process. The application can be divided into two major parts, namely the input system and the recommendation system. The input system of that study is enhanced by the mobile application developed here for the DTKS data input verification process. The previous study did not use Artificial Intelligence (AI) for verification, whereas in the mobile application developed in this study, OCR is involved in the DTKS data input verification process [8].

Research Method

The main objective of this research is to develop an OCR and reverse geocoding mobile app that helps the Ministry of Social Affairs with DTKS data input verification. The initial stage of this study was a literature review to support the research. The literature used in this study is related to DTKS, especially the procedures for DTKS registration and verification. At this stage, information was also collected about the OCR and reverse geocoding libraries and their use in application development.

The next stage is defining the system overview and limitations. The overview of this application is to perform address matching between the address extracted from the KTP and the address obtained from the reverse geocoding results. The result of the address matching is used to verify that the owner of the registered KTP data is the owner of the house visited during the household visit. The application development in this study is limited to the Android operating system. In this stage, the user of this application is also defined: an officer from the department of social affairs who conducts household visits during the verification process. The overall system flowchart is shown in Figure 1.
To develop the mobile application in this study, system requirement analysis and system design are needed. This stage was conducted through an interview with the local Social Welfare Department. From the information gathered, the functional and non-functional requirements were defined. Functional requirements are system requirements describing what activities the system will perform in general; they include the processes and information that must exist and be generated by the system [9]. The functional requirements defined for the DTKS data input verification application are registering accounts, logging in, adding verification logs for registrant data, taking KTP images, extracting KTP data, viewing KTP extraction results, reverse geocoding, matching the address from the KTP extraction results with the address from the reverse geocoding results, and saving the address matching results. The non-functional requirements for this application are usability and compatibility. The system design in this study was done by creating activity diagrams, sequence diagrams, a class diagram, and a test design. The activity diagram represents the flow of activities between actors, the system, and the database. The sequence diagrams of the KTP extraction and reverse geocoding features represent the flow of interaction between the classes involved in a task. The class diagram defines the classes involved in the system. The test design in this research covers testing the success of system processes, black box testing, SUS (System Usability Scale) testing, and compatibility testing.

The development of the mobile application in this study used Kotlin 1.5.31 as the programming language and Android Studio 4.2.1 as the IDE. The database used in this study is Firebase Realtime Database, a cloud-based database developed by Google [10]. Four entities are involved in the database implementation: users, KTP extraction results, reverse geocoding results, and address matching results.

The DTKS data input verification application in this study uses OCR technology and reverse geocoding. The OCR in this application was developed using the Text Recognition library from Google Vision [11]. This library is used for extracting text from images [15], which in this study are KTP images. The KTP extraction process begins with obtaining the image bitmap. The Text Recognition library provides a builder to construct the recognizer object. After the recognizer is built successfully, the application builds a frame object as a container for the image to be extracted. The recognizer detects several text blocks in the frame, and the text is assembled by a string builder. The extraction result is then saved to the database and passed to the KTP extraction result page of the application. The workflow of OCR in this application can be seen in Figure 2.
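As an illustration of this workflow, the following is a minimal Kotlin sketch of the extraction step using the Google Mobile Vision Text Recognition API named above; the function name extractKtpText and the plain-string return format are our own assumptions, not part of the paper's implementation.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.google.android.gms.vision.Frame
import com.google.android.gms.vision.text.TextRecognizer

// Hypothetical helper: builds a recognizer, wraps the KTP bitmap in a frame,
// and concatenates the detected text blocks, mirroring the workflow in Figure 2.
fun extractKtpText(context: Context, ktpBitmap: Bitmap): String {
    val recognizer = TextRecognizer.Builder(context).build()
    if (!recognizer.isOperational) {
        // Detector dependencies may still be downloading on first use.
        recognizer.release()
        return ""
    }
    val frame = Frame.Builder().setBitmap(ktpBitmap).build()
    val blocks = recognizer.detect(frame) // SparseArray<TextBlock>
    val sb = StringBuilder()
    for (i in 0 until blocks.size()) {
        sb.appendLine(blocks.valueAt(i).value)
    }
    recognizer.release()
    return sb.toString()
}
```

In the actual application the result would additionally be parsed into the KTP's labeled fields (NIK, name, address) before being stored.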
The reverse geocoding process in this application was developed using the LocationManager and Geocoder classes from the Android framework. The LocationManager class fetches the latitude and longitude of the device [12]. The application then passes the latitude and longitude to the Geocoder class, which transforms them into an address [13]. The address resulting from the reverse geocoding process is saved to the database and passed to the reverse geocoding result page. The workflow of reverse geocoding in this application can be seen in Figure 3. The verification process is performed by matching the address from the KTP with the address from the reverse geocoding process. The KTP extraction result contains the address information; the application retrieves it and stores it in the KTP address variable. From the reverse geocoding result, the application retrieves the street name and house number and stores them in the reverse geocoding address variable. These two variables are matched against each other, and the result is shown on the verification result page of the application.

Result and Discussion

The OCR and reverse geocoding mobile application developed in this study was tested with several types of tests. The first test concerns the success of the system's processes. The KTP extraction process is tested by comparing the address stated on the KTP with the address from the KTP image extraction; if the address shown on the KTP extraction result page has more than 85% similarity to the address written on the KTP, the process is considered successful. An example of the KTP extraction process is shown in Figure 4. The reverse geocoding process is tested by comparing the current location address with the address from reverse geocoding; if the address from the reverse geocoding result has more than 85% similarity with the current location address, the process is considered successful. An example of the reverse geocoding process is shown in Figure 5. The address matching test compares the address from the KTP extraction with the address from reverse geocoding: if the two have more than 85% similarity, the result page displays "Match"; otherwise, it displays "Doesn't Match!". This process is considered successful if the address matching result page shows the correct result. An example of the address matching result page is shown in Figure 6. The testing of this application examined the success of KTP extraction, reverse geocoding, and address matching in 10 trials; the results are shown in Table 1. The testing code column identifies each test conducted. The address in the KTP extraction result column shows the address on the KTP extraction result page, and the address in the reverse geocoding result column shows the address on the reverse geocoding result page. The address matching result column shows the comparison between the two. A test is marked as successful if the KTP extraction result has more than 85% similarity with the address written on the KTP, the reverse geocoding result has more than 85% similarity with the current location address, and the address matching result page shows the correct result.
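A minimal Kotlin sketch of the reverse geocoding step is given below, assuming the ACCESS_FINE_LOCATION permission has already been granted; the function name currentAddressLine and the use of the last known GPS fix (rather than an active location request) are illustrative assumptions, not the paper's code.

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.location.Geocoder
import android.location.LocationManager
import java.util.Locale

// Hypothetical helper mirroring the workflow in Figure 3: fetch the device
// coordinates with LocationManager, then resolve them to an address with Geocoder.
@SuppressLint("MissingPermission") // location permission assumed granted by the caller
fun currentAddressLine(context: Context): String? {
    val lm = context.getSystemService(Context.LOCATION_SERVICE) as LocationManager
    val location = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER) ?: return null
    val geocoder = Geocoder(context, Locale("id", "ID"))
    val addresses = geocoder.getFromLocation(location.latitude, location.longitude, 1)
    return addresses?.firstOrNull()?.getAddressLine(0)
}
```

Note that the synchronous getFromLocation call can block and may throw an IOException when the network is unavailable, so production code would run it off the main thread.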
The trials in Table 1 yielded a 100% success rate. System validation testing was done using black box testing. This test was performed on each function defined in the requirements analysis. All nine functionalities of this application produced output in accordance with the results defined in the use case scenarios.

The SUS questionnaire is used to measure the usability of the system. SUS testing is conducted by asking respondents the 10 SUS questions [14]. In this study, five respondents filled out the questionnaire on the usability of the application: 1 Social Welfare Department officer, 2 UI/UX experts, and 2 heads of neighborhood associations. Of the 5 respondents, only 3 gave valid responses; the other two were considered invalid because their answers to several questions were inconsistent with their answers to the other questions. The final SUS score of the application in this study is 92.5. The SUS results are shown in Table 3.

The last test of the application developed in this study is compatibility testing. This test was conducted by executing the application on devices with operating system versions different from the device used in development. The Android versions used in this test were Android 10.0, 11.0, and 11.0. The application ran well on those three versions of the Android operating system. The DTKS data input verification application developed in this study has thus been subjected to several types of tests, namely process testing, system validation testing, usability testing, and compatibility testing. The application's successes and limitations can be identified through these tests.

KTP extraction is executed by capturing the KTP image. The system successfully extracted the KTP image and offers to retake the image if the captured KTP image is considered unclear. A limitation of this function is that the user must crop the KTP image so that the final image shows only the KTP owner's personal data, without the KTP owner's photograph; the extraction result is best when the final image is focused solely on the personal data fields. The TextRecognizer library is used to extract text from the KTP image. Overall, this library performs the extraction well, but there are occasional character recognition errors: for example, the letter I on the KTP is recognized as the letter E, and the number 0 is recognized as the letter O.

The reverse geocoding function also runs well, but the result depends on Google Maps data. There are sometimes differences between the location recognized by the GPS system and the location recognized by locals, which limits the system's house number accuracy. Address matching is therefore done by taking the address from the KTP and comparing it with the address from the reverse geocoding result without the street number.
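The paper does not specify which string-similarity measure backs the 85% threshold; a normalized Levenshtein ratio is one plausible choice, sketched below in Kotlin. The function names, the case/whitespace normalization, and the normalization by the longer string are our assumptions.

```kotlin
// Hypothetical similarity check for the 85% address-matching threshold,
// based on a normalized Levenshtein edit distance.
fun levenshtein(a: String, b: String): Int {
    val dp = Array(a.length + 1) { IntArray(b.length + 1) }
    for (i in 0..a.length) dp[i][0] = i
    for (j in 0..b.length) dp[0][j] = j
    for (i in 1..a.length) {
        for (j in 1..b.length) {
            val cost = if (a[i - 1] == b[j - 1]) 0 else 1
            dp[i][j] = minOf(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
        }
    }
    return dp[a.length][b.length]
}

fun addressesMatch(ktpAddress: String, geoAddress: String, threshold: Double = 0.85): Boolean {
    val a = ktpAddress.lowercase().trim()
    val b = geoAddress.lowercase().trim()
    val maxLen = maxOf(a.length, b.length)
    if (maxLen == 0) return false
    val similarity = 1.0 - levenshtein(a, b).toDouble() / maxLen
    return similarity >= threshold
}
```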
Conclusion

The OCR and reverse geocoding mobile application in this study was developed to help the Ministry of Social Affairs with DTKS data input verification. OCR is used to extract the DTKS registrant's personal data from the KTP and input it automatically into the application. The reverse geocoding process transforms the device's current latitude and longitude into an address. The address from the KTP extraction result and the address from reverse geocoding are then compared. If the addresses match, the system verifies that the owner of the registered KTP data is the owner of the house visited during the household visit. These technological enhancements can reduce the possibility of data manipulation in the DTKS data input verification system. The overall success rate of the system processes in this application is 95.7%.

Figure 6. Address matching result page when (a) the addresses do not match and (b) the addresses match.

Table 1. Application testing results.
Reduced seismic activity after mega earthquakes

Mainshocks are often followed by increased earthquake activity (aftershocks). According to the Omori-Utsu law, the rate of aftershocks decays as a power law over time. While aftershocks typically occur in the vicinity of the mainshock, previous studies have suggested that mainshocks can also trigger earthquakes in remote locations. Here we examine the earthquake rate in the days following mega-earthquakes (magnitude ≥ 7.5) and find that the rate is significantly lower beyond a certain distance from the epicenter compared to surrogate data. However, the remote earthquake rate after the strongest earthquakes (magnitude ≥ 8) can also be significantly higher than the rate based on surrogate data. Comparing our findings to the global ETAS model, we find that the model does not capture the earthquake rate found in the data, hinting at a potential missing mechanism. We suggest that the diminished earthquake rate is due to the release of global energy/tension subsequent to substantial mainshock events. This conjecture holds the potential to enhance our comprehension of the intricacies governing post-seismic activity.

Introduction

Earthquakes pose a significant and perilous threat to humanity, given their highly destructive nature. While they are complex spatiotemporal phenomena, several empirical laws govern their behavior. Notable among them are the Gutenberg-Richter law [1], which describes the exponential decay of the earthquake magnitude distribution, and the Omori-Utsu law [2,3], which elucidates the power-law decay of aftershock rates over time. Additionally, various scaling and power laws have been established concerning the distribution of waiting times between earthquakes [4-8]. Extensive research has been conducted on these laws, providing insights into earthquake activity models such as the Epidemic-Type Aftershock Sequence (ETAS) model [9].

Researchers have been unable to identify definitive precursors that can be used to forecast the occurrence of large earthquakes in advance [10]. However, the clustering of aftershocks suggests that earthquake timing is not entirely random [3,11]. Moreover, a previous study demonstrated that consecutive interevent earthquake intervals exhibit correlated behavior rather than randomness: shorter (longer) interevent intervals have a higher likelihood of being followed by shorter (longer) interevent intervals [12]. Furthermore, the application of Detrended Fluctuation Analysis (DFA) to interevent interval time series has revealed the presence of long-range (power-law) correlations and other memory measures within earthquake catalogs [13,14]. In their research, Zhang et al. [7,8] studied the lagged interevent times and distances and found significant correlations for short time lags, but weaker correlations for longer time lags. From a different perspective, previous studies have indicated that the occurrence rate of seismic events (foreshocks) tends to increase prior to a mainshock [15-22], following the so-called "inverse Omori law", although this observation is not as robust as the Omori-Utsu law.
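For reference, the two empirical laws mentioned above are conventionally written as follows; these are the standard textbook forms, and the symbols are not taken from this paper's own notation.

```latex
% Gutenberg-Richter law: cumulative number of events with magnitude >= M
\log_{10} N(\geq M) = a - b\,M
% Omori-Utsu law: aftershock rate at time t after the mainshock
n(t) = \frac{K}{(c + t)^{p}}
```

Here b ≈ 1 for global catalogs (consistent with the slope of -1.07 ± 0.02 reported below), and K, c, and p are empirical constants.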
Another crucial aspect of earthquakes pertains to the mechanisms underlying their spatial propagation. Strong earthquakes often trigger a series of subsequent aftershocks in their vicinity due to increased static Coulomb stress [23-25]. However, the phenomenon of remote triggering, where seismic waves from a mainshock trigger earthquakes thousands of kilometers away, has been observed and explained by dynamic stress triggering [26-28]. For instance, following the 8.6 Mw East Indian Ocean mainshock on April 11, 2012 (accompanied by a powerful 8.2 Mw aftershock), there was a significant global increase in the rate of earthquakes in remote areas [29]. Nevertheless, other studies have found no evidence supporting such remote triggering [30,31]. Moreover, a different study noted a decrease in earthquake activity in remote locations a few hours after a mainshock, attributing this decline to reduced detection capabilities caused by seismic waves [32]. From a physical standpoint, the global occurrence rate beyond the immediate aftershock zone could decrease following large earthquakes due to energy and stress release. This relaxation effect can be likened to the suppression of short waiting times observed in the Abelian sandpile model during a massive avalanche [33].

The objective of this study is to examine how the earthquake rate varies as a function of distance and time following mega-earthquakes. To achieve this, we have developed a statistical approach that measures the real occurrence of earthquakes after a mega-shock against a distribution of earthquake rates starting from randomly selected times, while keeping the same location, time window, and distance from the epicenter as for the mega-earthquake. We find that the earthquake rate following mega-earthquakes, beyond a critical distance of about 100 km from the epicenter, is significantly lower than the mean rate at the location of the mega-earthquakes. We propose that this critical distance can serve as a measure for determining the extent of aftershocks. We also find that the strongest mega-earthquakes are followed either by reduced remote earthquake activity or by enhanced remote earthquake activity.

Results

We utilized the comprehensive global earthquake catalog between May 1979 and May 2023, setting a minimum magnitude threshold of 5.1 (for this value, the catalog can be regarded as complete; for details, see the Methods section). The distribution of earthquake magnitudes closely conforms to the Gutenberg-Richter law, exhibiting a slope of -1.07 ± 0.02 (Fig. S1). The spatial distribution of the events included in the catalog is depicted in Fig. 1a. Predominantly, seismic events occur along tectonic plate boundaries, propelled by the intricate interplay of Earth's crustal movement and compression.

We next consider earthquakes of exceptional magnitude, above or equal to 7.5, the common definition of mega-earthquakes [34]. Fig. 1b depicts the geographical distribution of these mega-earthquakes, with prominent active regions including South America, Indonesia, and Japan. We demonstrate our analysis using the Chiapas earthquake, an 8.2 mega-earthquake that occurred on September 7, 2017; Fig. 1c shows its location.
We analyzed the global earthquake occurrences (with magnitude m ≥ 5.1) subsequent to this mega-earthquake within a five-day temporal window denoted as T, while excluding earthquakes within a radius R of 500 km from the mega-earthquake's epicenter. We found a total of eight such earthquakes that followed the Chiapas mega-earthquake; see Fig. 1c.

We next study whether the earthquake rate after this mega-earthquake is larger than, smaller than, or equal to the mean earthquake rate associated with the mega-earthquake's location. We show the results in Fig. 1d, which depicts the Probability Density Function (PDF) of the number of earthquakes that occurred farther than 500 km from the epicenter of the 8.2 Chiapas mega-earthquake and within a time window of five days from randomly selected times. This PDF is used to construct a null hypothesis for the rate observed after a mega-earthquake. If the observed rate falls within the PDF's confidence interval, typically between the 10% and 90% quantiles, the null hypothesis is not rejected and the rate is considered "normal." However, if the observed rate falls outside the confidence interval, it is classified as either a low or a high rate; see the Methods Section. Fig. 1d depicts the 10% and 90% quantiles (the dashed black vertical lines), as well as the observed rate (the solid red vertical line), indicating that the observed subsequent rate of the 2017 Chiapas mega-earthquake is low, below the 10% quantile. This procedure helps to prevent possible geographical biases.

We next consider all the mega-earthquakes and study the earthquake rate that follows them. Fig. 2a depicts, for each mega-earthquake, the actual count of ensuing earthquakes spanning a five-day window (T) and exceeding a distance of 500 km (R) from the epicenter. Also plotted is the 10%-90% interval based on the surrogate data (Fig. 1d). Notably, certain mega-earthquakes exhibit counts falling below the 10% quantile, indicating reduced activity, while others surpass the 90% quantile, denoting increased activity (Fig. 2a). The spatial distribution of these statistically significant mega-earthquakes is displayed in Fig. 2b.
It is apparent that the majority of these significant events are concentrated in the vicinity of active seismic zones. Additionally, mega-earthquakes followed by reduced activity outnumber those followed by increased activity. Quantitatively, the ratio r of mega-earthquakes followed by reduced activity is r = 0.194 (uncertainty between 0.176 and 0.212), while those followed by increased activity hold a ratio of r = 0.078 (see Methods). Under the null hypothesis, we would anticipate a ratio close to 0.1, as set by our chosen 10% and 90% quantiles. The discernible departure from this expected ratio highlights the statistical significance of the ratio pertaining to reduced activity.

Following the above, we study the dependence of the ratio r on the magnitude of the mega-earthquake. Our analysis (Fig. S2) reveals that no definitive correlation exists between the mega-earthquake's magnitude and the resultant ratio. For most magnitudes, the earthquake rate after mega-earthquakes is reduced, i.e., the ratio of mega-earthquakes that fall below the 10% quantile is significantly higher than 0.1, while the ratio of mega-earthquakes that fall above the 90% quantile is significantly lower than 0.1. Yet, the strongest mega-earthquakes (m ≥ 8.0) result in both ratios being significantly larger than the expected 0.1 (Fig. S2), suggesting that such mega-earthquakes can either trigger long-distance worldwide earthquakes or lead to reduced worldwide seismic activity; see Table 1 for details regarding mega-earthquakes that were followed either by significantly reduced or by significantly increased earthquake activity.

Table 1 The significant mega-earthquakes (by region/location) with magnitudes m ≥ 8.0, for a time window of T = 5 days and distances from the epicenter farther than R = 500 km. The last column shows the percentile of the number of events for the surrogate data smaller (smaller and equal) than the real number.

Next, we explore the dependence of the ratio of mega-earthquakes that fall below (above) the 10% (90%) quantile as a function of both the distance from the epicenter R (from 10 to 8000 km) and the time window T (from 3 to 60 days). We first show these ratios as a function of the distance from the epicenter R of the mega-earthquakes for different time windows T (Fig. 3a-d).

The effect of aftershocks is noticeable close to the mega-earthquakes' epicenters (R ≲ 250 km), where the ratio is well off the expected 0.1, indicating significantly increased earthquake activity after the mega-earthquakes; i.e., the ratio of mega-earthquakes that fall below (above) the 10% (90%) quantile is much smaller (larger) than 0.1. For distances R > 500 km, the ratios are more or less stable, indicating reduced earthquake activity after mega-earthquakes (a ratio of ∼0.15 for ratios falling below the 10% quantile, compared with the expected 0.1). Notably, the ratio of mega-earthquakes that fall below the 10% quantile (red symbols in Fig. 3a-d) is more significant than the ratio of mega-earthquakes that fall above the 90% quantile (blue symbols in Fig. 3a-d).

The ratios of mega-earthquakes that fall below the 10% quantile and above the 90% quantile as functions of the distance R from the epicenter and the time window T are presented in Fig. 3e and f, respectively.
It is noticeable from Fig. 3e that the ratio of mega-earthquakes that fall below the 10% quantile is not very sensitive to the time window T: this ratio is smaller than 0.1 for distances smaller than ∼100 km and higher than 0.1 otherwise, indicating enhanced earthquake activity after mega-earthquakes close to their epicenters (aftershocks) and reduced earthquake activity at farther distances. Fig. 3f depicts the ratio of mega-earthquakes that fall above the 90% quantile. Here the ratio is far above the expected 0.1 for close distances (R ≲ 300 km) and short time windows (T ≲ 30 days), indicating aftershock activity that decays with distance and time [11,15]. For large distances (R > 1000 km), the ratio drops below the expected 0.1, indicating reduced earthquake activity after mega-earthquakes for distances greater than ∼1000 km. The transition across the 0.1 ratio occurs at different distances in Fig. 3e (∼100 km) and Fig. 3f (∼1000 km), suggesting an aftershock extent for mega-earthquakes of about 500 km.

To substantiate the robustness of our findings, we evaluate the seismic activity significance not only with the 10% and 90% quantiles but also with alternative metrics such as the median, average, and multiples of the average, as detailed in Supplementary Figs. S3-S6. Notably, all analyses consistently indicate reduced earthquake activity after the occurrence of mega-earthquakes. Moreover, the results for different magnitude thresholds, 5.0 and 5.2, show similar behavior (Figs. S7 and S8).

The global ETAS model [9,11,35,36] was proposed to model the statistical properties (the spatiotemporal clustering) of the global earthquake catalog; see the Methods Section. We next compare the results described above, which are based on the real global catalog, to the synthetic catalogs of the ETAS model. The ETAS model reproduces the generation of a sequence of aftershocks following a major seismic event, which serves as the foundation for interlinking earthquakes in the model. Notably, from a mechanistic standpoint, the ETAS model lacks any intrinsic process that models instances of reduced seismic activity. In Fig. 4a and c (Fig. 4b and d), we juxtapose the ratio of mega-earthquakes that fall below (above) the 10% (90%) quantile for the real global earthquake catalog and for the synthetic global ETAS model catalogs. As seen, the global ETAS model does not reproduce the reduced earthquake rate after mega-earthquakes at distances beyond several hundred km; see Fig. 4. This disparity arises from the ETAS model's inability to replicate long-distance suppression of earthquake activity. However, for distances smaller than 500 km, the results based on the ETAS model synthetic catalogs align more closely with those based on the real global catalog, owing to the ETAS model's proficiency in representing aftershock clustering dynamics. The ETAS model's ratio of mega-earthquakes that fall above the 90% quantile slightly surpasses 0.1 for distances greater than several hundred km, particularly for T = 5 days. This discrepancy arises because long-distance triggering is possible in the ETAS model, as outlined in Eq. (4) in the Methods Section; see [9,11,35,36].

We quantify the fraction of ETAS model realizations with a ratio exceeding that of the real data in a spatiotemporal context, as depicted in Fig. 4e and f.
This fraction should be around 0.5 when the ETAS model results are comparable to the results of the real earthquake catalog. At short distances, nearly 90% of the ETAS realizations exhibit a ratio of reduced activity after mega-earthquakes surpassing that of the real data (Fig. 4e). This discrepancy signifies that the ETAS model underestimates the rate of aftershocks in the proximity of the mega-earthquakes' epicenter. However, this underestimation does not significantly impact the results at greater distances. For extended distances, the fraction below 10% in Fig. 4e underscores the substantial difference in the ratio of reduced activity between the real data and the ETAS model. In contrast, the ratio of mega-earthquakes that fall above the 90% quantile for the real data is considerably lower than the corresponding ratio from the ETAS model, as shown in Fig. 4f. Following the above, we can conclude that the ETAS model does not reproduce the observed decrease in earthquake activity after mega-earthquakes.

Discussion and Conclusion

In this study, we investigated the earthquake rate after the occurrence of mega-earthquakes (with a magnitude larger than or equal to 7.5). We developed several statistical methods to investigate this question, all of which indicate reduced earthquake activity beyond a distance of several hundred km from the mega-earthquakes' epicenter. For smaller distances, increased earthquake activity is observed after the occurrence of mega-earthquakes and is attributed to aftershocks. The transition distance of about 500 km is suggested to be a length scale of aftershocks following mega-earthquakes with a magnitude larger than or equal to 7.5. We compare the results based on the real global catalog to the results obtained from the global ETAS model catalogs and find that the ETAS model fails to reproduce the results based on the real global earthquake catalog. This suggests that some key processes are missing in the ETAS model.
For the strongest mega-earthquakes (with magnitude m ≥ 8), we find that some were followed by significantly reduced remote earthquake activity while others were followed by significantly increased remote earthquake activity; the latter can be associated with remote triggering, as reported in previous studies. This observation should be verified more thoroughly, as the total number of such mega-earthquakes is not large (37 events). The stress on faults could be transferred to more remote areas along tectonic plate boundaries or fault systems after a mega-earthquake occurs [37]. We conjecture that this stress transfer can change the stress state of remote faults, either promoting or inhibiting seismic activity in remote areas. These areas may experience a period of seismic quiescence, characterized by reduced seismic activity, following a mega-earthquake. This quiescent period can be a natural response to the redistribution of stress. Our finding provides researchers with an opportunity to study in greater detail the geological processes between remote areas responsible for stress accumulation and release. This can enhance our understanding of seismic hazards and improve earthquake prediction models.

Data

We analyzed seismic events with a magnitude of 5.1 or higher (mW, mb, and ms) from May 1979 to May 2023 (44 years), using the United States Geological Survey (USGS) catalog (https://www.usgs.gov/). There were 57,085 earthquakes, with 193 events having a magnitude of 7.5 or higher and 37 events with a magnitude of 8 or higher. The earthquakes included in the catalog follow the Gutenberg-Richter law (Fig. S1).

Significance test

It is widely recognized that mega-earthquakes have the potential to initiate a cascade of aftershocks within a condensed temporal window and spatial region. Here we examined post-mega-earthquake seismic activity, focusing specifically on regions located beyond a defined distance from the epicenter and within a designated time interval. The metric used is the count of earthquakes, denoted n(T, R), occurring within a time window of T days and at a distance exceeding R km from the epicenter. Notably, earthquakes occurring within the spatial confines of the immediate aftershock region are disregarded.

The null hypothesis posits that the count n(T, R) and the timing of the mega-earthquake are statistically independent. Consequently, surrogate data are generated under this null hypothesis. For each mega-earthquake, a random initial time is selected from within the time span of the global catalog, while the geographical coordinates remain those of the original mega-earthquake; the latter constraint avoids possible spatial biases. The random selection of times is repeated many (10^4) times, yielding the surrogate earthquake count denoted n′(T, R). We then construct the PDF characterizing the distribution of n′(T, R). Using this PDF, we establish the significance levels at 10% for the lower bound and 90% for the upper bound. Should the actual count n(T, R) fall below (above) the 10% (90%) quantile, the mega-earthquake is considered a significant event, signifying that it was followed by reduced or increased seismic activity, respectively.
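To make the procedure concrete, the following is a minimal sketch of the surrogate test. Kotlin is used here only for consistency with the other code sketches in this collection, and the data class, the haversine distance, and all names are illustrative assumptions rather than the authors' code.

```kotlin
import kotlin.math.*
import kotlin.random.Random

// Illustrative event record: time in days since the start of the catalog.
data class Quake(val timeDays: Double, val lat: Double, val lon: Double, val mag: Double)

// Great-circle distance in km (haversine formula, Earth radius 6371 km).
fun distanceKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
    return 2 * 6371.0 * asin(sqrt(a))
}

// n(T, R): events within T days after t0 and farther than R km from (lat, lon).
fun countRemote(catalog: List<Quake>, t0: Double, lat: Double, lon: Double,
                tDays: Double, rKm: Double): Int =
    catalog.count {
        it.timeDays > t0 && it.timeDays <= t0 + tDays &&
                distanceKm(lat, lon, it.lat, it.lon) > rKm
    }

// Classify one mega-earthquake against the surrogate distribution:
// -1 = reduced activity (below the 10% quantile), +1 = increased (above the 90%), 0 = normal.
// Unoptimized sketch: each surrogate count rescans the whole catalog.
fun classify(catalog: List<Quake>, mega: Quake, catalogSpanDays: Double,
             tDays: Double = 5.0, rKm: Double = 500.0, nSurrogates: Int = 10_000): Int {
    val observed = countRemote(catalog, mega.timeDays, mega.lat, mega.lon, tDays, rKm)
    val surrogate = IntArray(nSurrogates) {
        val t0 = Random.nextDouble(0.0, catalogSpanDays - tDays) // random initial time
        countRemote(catalog, t0, mega.lat, mega.lon, tDays, rKm) // same epicenter location
    }.sorted()
    val q10 = surrogate[(0.10 * nSurrogates).toInt()]
    val q90 = surrogate[(0.90 * nSurrogates).toInt()]
    return when {
        observed < q10 -> -1
        observed > q90 -> +1
        else -> 0
    }
}
```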
The ratio of significant events

In the context of a sequence of mega-earthquakes, we determine the ratio r = N_s / N_m, where N_s is the count of significant mega-earthquakes (i.e., mega-earthquakes that fall below/above the 10%/90% quantile) and N_m is the overall count of mega-earthquakes. Notably, the 10% and 90% quantile values may not be integers, necessitating the use of either the floor or the ceiling of these quantiles for comparison with the actual counts. Consequently, the count N_s may vary when the floor or ceiling integers are applied, introducing uncertainty into the derived ratio. This uncertainty is especially large for short time windows, which result in a low number of counts after the mega-earthquakes. To mitigate this uncertainty, we adopt the average of the counts obtained using the floor and ceiling integers for the calculation of the ratio. The associated error bar is the difference between these two counts.

The global ETAS model

The ETAS model, a space-time stochastic point process, is employed to simulate synthetic earthquake catalogs [9,11]. This model simulates seismic activity based on a rate function λ, defined at a location (x, y) and time t, conditioned on the prior history H_t:

\[ \lambda(x, y, t \mid H_t) = \mu(x, y) + \sum_{t_i < t} k(M_i)\, g(t - t_i)\, f(x - x_i,\, y - y_i,\, M_i). \tag{1} \]

Here, t_i indicates the time of past events, and M_i represents their magnitudes. The magnitude of each event (≥ M_0, where M_0 = 5.1 is the magnitude threshold of the catalog) is generated independently following the Gutenberg-Richter distribution with a magnitude-frequency parameter of b = 1. The background intensity µ(x, y) = µ_0 u(x, y) at location (x, y) is governed by the spatial PDF of background events denoted by u, estimated using the approach outlined in Refs. [35,36]; µ_0 stands for the background rate of seismic events on a global scale.

The magnitude-dependent triggering capability is formulated via k(M_i), where A represents the rate of earthquakes at zero time lag and α denotes the productivity parameter. The temporal decay of triggered events is described by the Omori law g(t), where c and p are Omori law parameters. Spatial clustering of aftershocks is introduced through the spatial kernel function f(x − x_i, y − y_i, M_i) [36]; see the standard forms quoted below. Here, ζ = D exp[γ_m(M_i − M_0)] signifies that the distances between triggering and triggered events depend on the magnitudes of the triggering events. The parameters q, D, and γ_m are estimated parameters. We note that the ETAS model exhibits minimal long-range spatial correlation (≫ D). The parameter values are estimated through the expectation-maximization algorithm, as detailed in Refs. [38,39], and summarized in Table 2.
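The kernel expressions referred to above are quoted here in their standard ETAS forms. These, up to normalization, are an assumption consistent with the parameters A, α, c, p, q, D, and γ_m named in the text, not a verbatim reproduction of the paper's equations:

```latex
k(M_i) = A\, e^{\alpha (M_i - M_0)}, \qquad
g(t) \propto \left(1 + \frac{t}{c}\right)^{-p}, \qquad
f(x - x_i,\, y - y_i,\, M_i) \propto
\left(1 + \frac{(x - x_i)^2 + (y - y_i)^2}{\zeta}\right)^{-q},
```

with ζ = D exp[γ_m(M_i − M_0)] as defined in the text.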
Fig. 1 Demonstration of the proposed statistical method. a Spatial distribution of global earthquakes with a magnitude larger than or equal to 5.1 between May 1979 and May 2023. b Same as a, but only for mega-earthquakes with a magnitude larger than or equal to 7.5. c An example of the 2017 Chiapas earthquake with magnitude 8.2 (red triangle) and its following events (circles) beyond a certain distance (i.e., 500 km, marked by the shaded dashed circle) and within a time window of five days. d Probability Density Function (PDF) of the number of events at distances longer than 500 km from the location of the Chiapas earthquake and within 5 days, for 10^4 realizations of the surrogate data (i.e., 10^4 realizations of randomly selected initial times). The dashed black vertical lines represent the 10% and 90% quantiles. The red line shows the observed number of earthquakes following the real 2017 Chiapas earthquake, which is below the 10% quantile. The solid black vertical line represents the median.

Fig. 2 Significance of mega-earthquakes (m ≥ 7.5) and their spatial distribution. a The number of earthquakes n(T, R) in a time window of T = 5 days and beyond the distance R = 500 km from the epicenter, following mega-earthquakes (colors represent their magnitudes). The grey shading indicates the 10% to 90% interval of the PDF of the number of events for the surrogate data (see Fig. 1d). b The spatial distribution of the mega-earthquakes shown in a that fall below the 10% quantile (red triangles) and above the 90% quantile (blue triangles).

Fig. 3 The ratio of mega-earthquakes that fall below the 10% quantile (red) and above the 90% quantile (blue) as a function of the distance from the epicenter, R, and the time window T. a T = 3 days, b T = 5 days, c T = 25 days, and d T = 60 days. The dashed black horizontal line represents the null-hypothesis ratio of 0.1 and its uncertainty (grey shaded area). The dashed red vertical line indicates the transition distance of 500 km. The error bars reflect the uncertainty of the ratio as described in the Methods Section. e, f The ratio of mega-earthquakes that fall below the 10% quantile (e) and above the 90% quantile (f) as a function of the distance from the epicenter, R, and the time window T. The black contoured line represents the ratio 0.1.

Fig. 4 Comparison of the ratio based on the real global earthquake catalog and the synthetic ETAS model catalogs. The ratio of mega-earthquakes that fall below the 10% quantile for a T = 5 days and c T = 25 days as a function of the distance from the epicenter R, for the real catalog (red symbols) and for the ETAS model synthetic catalogs (cyan symbols). The ratio of mega-earthquakes that fall above the 90% quantile for b T = 5 days and d T = 25 days as a function of the distance from the epicenter R.
The dashed black horizontal line represents the null-hypothesis ratio of 0.1. The dashed red vertical line indicates the transition distance of 500 km. The ratio of the ETAS model is averaged over 50 independent realizations, and the shading indicates the standard deviation. e, f The fraction of ETAS model realizations with a ratio exceeding that of the real data, for the ratio of mega-earthquakes that fall below the 10% quantile (e) and above the 90% quantile (f), as a function of the distance from the epicenter, R, and the time window T. A value of 0.5 indicates similarity of the ETAS model to the real catalog; it is apparent that the ratios based on the ETAS model catalogs differ from those of the real catalog for the vast majority of R and T.

Table 2 Estimated parameters of the global ETAS model.
Three Dimensional Imaging of the Nucleon and Semi-Inclusive High Energy Reactions

We present a short overview of studies of transverse momentum dependent parton distribution functions of the nucleon. The aim of such studies is to provide three dimensional imaging of the nucleon and a comprehensive description of semi-inclusive high energy reactions. By comparing with the theoretical framework that we have for inclusive deep inelastic lepton-nucleon scattering and the one-dimensional imaging of the nucleon, we summarize what needs to be done to construct such a comprehensive theoretical framework for semi-inclusive processes in terms of three dimensional gauge invariant parton distributions. After that, we present an overview of what has already been achieved, with emphasis on the theoretical framework for semi-inclusive reactions at leading order in perturbative QCD but with leading and higher twist contributions. We summarize in particular the results for the differential cross section and the azimuthal spin asymmetries in terms of the gauge invariant transverse momentum dependent parton distribution functions. We also briefly summarize the available experimental results on semi-inclusive reactions and the parameterizations of transverse momentum dependent parton distributions extracted from them, and give an outlook for future studies.

I. INTRODUCTION

As the study of nucleon structure deepens, three dimensional imaging has become the frontier and a hot topic in recent years. It is commonly recognized that three dimensional imaging contains much more abundant physics on the nucleon structure and the properties of quantum chromodynamics (QCD). The study was initially triggered by the experimental finding of striking single-spin asymmetries (SSA) in inclusive hadron production in hadron-hadron collisions with a transversely polarized hadron [1]. Gradually it has grown into a field aiming at a comprehensive three dimensional description of nucleon structure, including spin and transverse momentum dependences.

The one dimensional imaging of the nucleon is provided by the Parton Distribution Functions (PDFs), such as the number densities q(x), the helicity distributions ∆q(x), and the transversities δq(x), for quarks of different flavors in the nucleon. These one dimensional PDFs can be studied in inclusive high energy reactions and are necessary for the description of such inclusive processes. In the three dimensional case, i.e., where the parton transverse momentum is also considered, not only the direct extensions of these distribution functions to include transverse momentum dependences are involved, but also many other correlation functions that describe, in particular, the correlations between transverse momenta and spins, such as the Sivers function, the Boer-Mulders function, the pretzelosity, etc. They are generally called transverse momentum dependent (TMD) PDFs. Moreover, higher twist effects also become important and need to be considered consistently. The content of the studies is therefore much more abundant and more interesting. These TMD PDFs can be studied in semi-inclusive reactions and are necessary for the description of such processes.
The study of the three dimensional imaging of the nucleon is in a rapidly developing phase, and it is not easy to make a comprehensive overview of all the different aspects of these studies. Here, we choose to arrange the review in the following way. First, we briefly review what was done in the one dimensional case with inclusive deep inelastic lepton-nucleon scattering (DIS); in this way, we hope to identify the main line of what needs to be done in the three dimensional case. After that, we summarize the progress already achieved along this line and what needs to be done next. Such a brief review of the one dimensional case is presented in Sec. 2. In Sec. 3, we give a short summary of TMDs defined via the quark-quark correlator. In Sec. 4, we present a brief overview of what we have for constructing the theoretical framework of semi-inclusive processes. In Sec. 5, we give a short summary of the available experimental results and the TMD parameterizations extracted from them. Finally, we summarize this review in Sec. 6. This overview article is an extended version of a plenary talk at the 21st International Symposium on Spin Physics (Spin2014) [2]. As can be imagined, the simplest and most basic picture is the one we have at leading order in perturbative QCD (pQCD) and at leading twist. Hence, there are two major directions in theoretical developments towards a comprehensive description of semi-inclusive processes: one is to take higher order pQCD into account, and the other is to consider higher twist contributions. These contributions are important not only for higher accuracy but also for consistency. The major progress made in recent years is also along these two directions separately, i.e., either at leading twist but leading and higher order in pQCD, or at leading order in pQCD but leading and higher twists. The talk [2] concentrated mainly on the second direction. For higher order pQCD contributions, where evolutions of PDFs are involved, an overview talk was also presented by Daniel Boer at the same conference [3]. There are also many other reviews and monographs (e.g., [4,6,7]). The study at higher order in pQCD and higher twists seems to be rather difficult, and even the factorization properties are unclear [5]. In this article, we follow the same line as in the talk [2] but briefly summarize the progress in the studies of QCD evolution and refer the interested reader to those reviews.

II. INCLUSIVE DIS & THE ONE DIMENSIONAL IMAGING OF THE NUCLEON

Our studies of the structure of a fast moving nucleon started with inclusive DIS, such as e⁻ + N → e⁻ + X. We recall that, under the one photon exchange approximation, the differential cross section is given by the Lorentz contraction of the well-known leptonic tensor L^{µν}(l, l′, λ_l) and the hadronic tensor W_{µν}(q, p, S), i.e.,

\[ d\sigma = \frac{2\alpha_{em}^2}{sQ^4}\, L^{\mu\nu}(l, l', \lambda_l)\, W_{\mu\nu}(q, p, S). \tag{2.1} \]

The leptonic tensor is calculable and, for an incoming lepton of helicity λ_l, takes the standard form

\[ L^{\mu\nu}(l, l', \lambda_l) = l^\mu l'^\nu + l^\nu l'^\mu - g^{\mu\nu}\, (l \cdot l') + i\lambda_l\, \varepsilon^{\mu\nu\rho\sigma} l_\rho l'_\sigma. \tag{2.2} \]

Information on the structure of the nucleon is contained in the hadronic tensor defined as

\[ W_{\mu\nu}(q, p, S) = \frac{1}{2\pi} \sum_X \langle p, S |\, j_\mu(0)\, | X \rangle \langle X |\, j_\nu(0)\, | p, S \rangle\, (2\pi)^4 \delta^4(p + q - p_X). \tag{2.3} \]
Here, l and p denote the 4-momenta of the lepton and the nucleon, respectively; those with a prime are for the final states; λ stands for the helicity, and S for the polarization vector of the nucleon. We use light-cone coordinates and define the light-cone unit vectors as n̄ = (1, 0, 0⊥), n = (0, 1, 0⊥), and n⊥ = (0, 0, n⊥), so that a general four-vector can be decomposed as A^µ = A⁺ n̄^µ + A⁻ n^µ + A⊥^µ, with A^± = (A⁰ ± A³)/√2 and A⊥ = (0, 0, A⊥). We work in the center of mass frame of the γ*N system and choose the nucleon's momentum as the z-direction, so that p and S are decomposed as

\[ p^\mu = p^+ \bar n^\mu + \frac{M^2}{2p^+} n^\mu, \qquad S^\mu = \lambda \frac{p^+}{M} \bar n^\mu - \lambda \frac{M}{2p^+} n^\mu + S_\perp^\mu. \]

The Bjorken variable is defined as x_B = Q²/(2p·q), and we also define y = p·q / p·l.

The theoretical framework for inclusive DIS has been constructed in the following steps. First, we studied the kinematics and obtained the general form of the hadronic tensor by applying the basic constraints from general symmetry requirements such as Lorentz covariance, gauge invariance, parity conservation, and Hermiticity, e.g.,

\[ q^\mu W_{\mu\nu}(q, p, S) = 0, \qquad W_{\mu\nu}(\tilde q, \tilde p, \tilde S) = W^{\mu\nu}(q, p, S), \tag{2.6} \]

where Ã denotes the result of A after space reflection, i.e., Ã^µ = A_µ. The general form of the hadronic tensor is given by the sum of a symmetric and an antisymmetric part, W_{µν} = W^{(S)}_{µν}(q, p) + W^{(A)}_{µν}(q, p, S). We found that the hadronic tensor is determined by four independent structure functions F₁, F₂, g₁ and g₂, where the first two describe the unpolarized case and the latter two are needed for the polarized cases.

Our knowledge of the one dimensional imaging of the nucleon starts with the "intuitive parton model", which is very nicely formulated, e.g., in [8]. Here, it was argued that, in a fast moving frame, because of time dilation, quantum fluctuations such as vacuum polarizations can exist quite long; in the infinite momentum frame, such fluctuations exist forever. In this case, a fast moving nucleon can be viewed as a beam of free "partons". The probability of the scattering of an electron off a nucleon is taken as the incoherent sum of the probabilities of scattering off each individual parton, more precisely, a convolution of the number density of the parton in the nucleon with the probability of the scattering off the parton, where f_q(x) is the number density of partons of flavor q in the nucleon. In this way, one obtains the famous result [8]

\[ F_2(x_B) = 2x_B F_1(x_B) = \sum_q e_q^2\, x_B f_q(x_B). \tag{2.13} \]

Here, we would like to point out that, with this intuitive parton model, we are doing nothing other than the impulse approximation often used in describing a collision process, where we make the following approximations:

• during the interaction of the electron with the parton, interactions between the partons are neglected;
• the electron interacts with only one single parton each time;
• the scatterings of the electron off different partons are added incoherently.

Although the physical picture of the intuitive model is very clear and the model is elegant and practical, the formulation is partly qualitative or semi-classical, so it is not easy to control its accuracy. A proper formulation should be based on quantum field theory (QFT) and is obtained by starting with the Feynman diagram in Fig. 1(a). From this diagram, we obtain immediately

\[ W^{(0)}_{\mu\nu}(q, p, S) = \frac{1}{2\pi} \int d^4k\, \mathrm{Tr}\bigl[\hat H^{(0)}_{\mu\nu}(k, q)\, \hat\Phi^{(0)}(k, p, S)\bigr], \]

where k is the 4-momentum of the parton and the hard part Ĥ^{(0)}_{µν} is calculable. The matrix element Φ̂^{(0)} is known as the quark-quark correlator describing the structure of the nucleon. By taking the collinear approximation, i.e., taking k ≈ xp, and neglecting the power suppressed contributions, i.e.,
the o(M/Q) terms, we obtain exactly the same result as that obtained from Eq. (2.12) based on the intuitive parton model. At the same time, we obtain the QFT operator expression of f_q(x) defined via the quark-quark correlator as

\[ f_q(x) = \int \frac{dz^-}{2\pi}\, e^{ixp^+z^-} \langle p |\, \bar\psi(0)\, \frac{\gamma^+}{2}\, \psi(z^-)\, | p \rangle. \]

By inserting the expansion of the field operator ψ(z) in terms of plane waves and creation and/or annihilation operators, we see clearly that f_q(x) is indeed the number density of partons in the nucleon. However, from this expression, we also immediately see a severe problem: this expression is not (local) gauge invariant! A physical quantity has to be gauge invariant, so a solution must be found.

The gauge invariant formulation is obtained by taking into account the multiple gluon scattering shown by the diagram series in Fig. 1(a-c). This is natural, since (local) gauge invariance implies the existence of the gauge interaction, which needs to be taken into account. In this way, we obtain a sum of terms W^{(j)}_{µν}(q, p, S), where W^{(j)}_{µν} represents the contribution from the diagram with the exchange of j gluon(s). They are all expressed as a trace of a calculable hard part and a matrix element depending on the structure of the nucleon. For example, corresponding to Fig. 1(b), we have j = 1, and W^{(1)}_{µν}(q, p, S) is given by a sum of terms W^{(1,c)}_{µν}(q, p, S), where c in the superscript represents the different cuts (left or right) in the diagram. Similarly, corresponding to Fig. 1(c), we have the contributions W^{(2,c)}_{µν}(q, p, S). The matrix elements involved are now quark-j-gluon(s)-quark correlators, and we immediately see that none of these quark-j-gluon(s)-quark correlators is gauge invariant.

To get the gauge invariant form, we need to apply the collinear expansion proposed in Refs. [9-11], which is carried out in the following four steps.

(1) Make Taylor expansions of all the hard parts at k_i = x_i p, where ω_{ρ′}^ρ is a projection operator defined by ω_{ρ′}^ρ = g_{ρ′}^ρ − n̄_{ρ′} n^ρ.

(2) Decompose the gluon field into longitudinal and transverse components, i.e., A^ρ(y) = A⁺(y) n̄^ρ + ω^ρ_{ρ′} A^{ρ′}(y).

(3) Apply the Ward identities relating the derivatives of the hard parts to those with an extra gluon.

(4) Add all terms with the same hard part together.

We then obtain

\[ \tilde W^{(0)}_{\mu\nu}(q, p, S) = \frac{1}{2\pi} \int d^4k\, \mathrm{Tr}\bigl[\hat H^{(0)}_{\mu\nu}(x)\, \hat\Phi^{(0)}(k, p, S)\bigr], \]

and analogous expressions for W̃^{(1)}_{µν} and W̃^{(2)}_{µν}, where the Φ̂^{(j)}'s are the gauge invariant un-integrated quark-quark and quark-j-gluon(s)-quark correlators. Here D(y) is the covariant derivative, defined as D_ρ(y) = −i∂_ρ + gA_ρ(y). The factor L(0; y), obtained when summing the different contributions with the same hard part, is given by

\[ L(0; y) = P \exp\Bigl(-ig \int_{0}^{y^-} d\xi^-\, A^+(\xi^-)\Bigr), \]

where P stands for path ordering. L(0; y) is nothing other than the well-known gauge link that makes the quark-quark and quark-j-gluon(s)-quark correlators, and hence also the PDFs defined via them, gauge invariant. In this way, we have constructed a theoretical framework for calculating the contributions to the hadronic tensor at leading order (LO) in pQCD but with leading as well as higher twists in a systematic way. The results are given in terms of gauge invariant parton distribution and correlation functions (generally referred to as PDFs).
We would like to emphasize in particular the following two further points derived directly from these expressions. First, we note that after the collinear expansion, the hard parts contained in the expressions for the W̃^{(j)}_{µν}'s, such as those given by Eqs. (2.37-2.39), are functions of the longitudinal component x only; they are independent of the other components of the parton momentum k. We can therefore carry out the integration over these components of the k's and simplify the expressions, so that only the x_i-dependences of the quark-quark and/or quark-j-gluon-quark correlators are involved. This means that only the one dimensional imaging of the nucleon is relevant in inclusive DIS. Second, due to the existence of the projection operators ω_{ρ′}^ρ, the hard parts can be simplified a great deal: they reduce to products of γ-matrices independent of the x_i's. Inserting them into Eqs. (2.45-2.47) yields simplified expressions for the hadronic tensor (for explicitness, we omit p and S in the arguments of the correlators). We see explicitly that all the involved components of the quark-j-gluon-quark correlators depend only on one single parton momentum; this means that only quark-j-gluon-quark correlators that depend on one single parton momentum are relevant in inclusive DIS. We emphasize in particular that the results given by Eqs. (2.37-2.39) and their simplified forms given by Eqs. (2.55-2.62), including the gauge links, are derived in the collinear expansion. They are just the sum of the contributions from the diagram series shown in Fig. 1. This formalism provides a basic theoretical framework for describing inclusive DIS at LO pQCD with leading and higher twist contributions in terms of gauge invariant PDFs.

The PDFs are defined in terms of QFT operators via these quark-quark correlators by expanding them in terms of γ-matrices and basic Lorentz covariants. The basic Lorentz covariants are constructed from p^α, n^α, S^α, and ε^{αβρσ}, with ε_⊥^{ρσ} ≡ ε^{αβρσ} n̄_α n_β. The scalar coefficient functions, the f(x)'s, g(x)'s, and h(x)'s, are the corresponding PDFs. There are in total 12 such functions: 3 of them, i.e., f₁(x), g_{1L}(x) and h_{1T}(x), contribute at leading twist and have clear probability interpretations; 6 of them contribute at twist-3, and the other 3 contribute at twist-4. We further note that the three time-reversal odd terms e_L(x), f_T(x), and h(x) in fact vanish in the one dimensional case. We keep them in Eqs. (2.65-2.68) for later comparison with fragmentation functions.

We also see that the PDFs involved here are all scale independent. This is because we have so far considered only the LO pQCD contributions, i.e., the tree diagrams. To go to higher orders of pQCD, we take the loop diagrams, gluon radiations, and so on into account. After proper handling of these contributions, we obtain the factorized form [6], in which the PDFs acquire a scale (Q) dependence governed by the QCD evolution equations. In practice, PDFs are parameterized and provided in PDF libraries (PDFlib).

In summary, for studying the one dimensional imaging of the nucleon with inclusive DIS, we take the following steps.
• A general symmetry analysis leads to the general form of the hadronic tensor and/or the cross section in terms of four independent structure functions.

• The parton model without QCD interaction leads to LO pQCD, leading twist results for the structure functions in terms of Q-independent PDFs without (local) gauge invariance.

• The parton model with QCD multiple gluon scattering, after collinear expansion, leads to LO pQCD, leading and higher twist contributions in terms of Q-independent but gauge invariant PDFs.

• The parton model with QCD multiple gluon scattering and "loop diagram contributions", after collinear approximation, regularization, and renormalization, leads to leading and higher order pQCD, leading twist contributions in factorized form in terms of Q-evolved and gauge invariant PDFs.

In the following, we will follow these four steps and summarize what has been achieved in the three dimensional case. As in [2], we will mainly focus on the theoretical framework at LO pQCD while taking leading and higher twist contributions into account consistently. Before that, we would like to highlight two historical developments that may be helpful in constructing the theoretical framework for the TMD case.

First, as mentioned, the study of the three dimensional imaging of the nucleon was triggered by the experimental observation of single-spin left-right asymmetries (SSA) in inclusive hadron-hadron collisions with a transversely polarized projectile or target. It was known that pQCD leads to a negligibly small asymmetry for the hard part [12], but the observed asymmetry can be as large as 40% [13]. The hunting for such large asymmetries lasted for decades, with the following milestones:

• In 1991, Sivers introduced [14] the asymmetric quark distribution in a transversely polarized nucleon that is now known as the Sivers function.

• In 1993, Boros, Liang and Meng proposed [15] a phenomenological model that provides an intuitive physical picture showing that the asymmetry arises from the orbital angular momenta of quarks and what they called the "surface effect" caused by initial or final state interactions.

• In 1993, Collins published [16] his proof that the Sivers function has to vanish due to parity and time reversal invariance.

• In 2002, Brodsky, Hwang and Schmidt calculated [17] the SSA for SIDIS using an explicit example in which they took the orbital angular momentum of the quark and the multiple gluon scattering into account.

• In 2002, immediately after [17], Collins pointed out [18] that the multiple gluon scattering is contained in the gauge link and that the conclusion of his 1993 proof was incorrect because the gauge link had been overlooked. He further showed that, taking the gauge link into account, the same proof leads to the conclusion that the Sivers function for DIS and that for Drell-Yan have opposite signs. Belitsky, Ji and Yuan resolved [19,20] the problem of defining the gauge link for a TMD parton density in light-cone gauge, where the gauge potential does not vanish asymptotically.

The second historical development that we would like to mention concerns the azimuthal asymmetry study in SIDIS. It was shown by Georgi and Politzer in 1977 [21] that final state gluon radiation leads to azimuthal asymmetries and could be used as a "clean test of pQCD". However, soon after, in 1978, it was shown by Cahn [22] that similar asymmetries can also be obtained if one includes the intrinsic transverse momenta of partons. The latter, now named the Cahn effect, though power suppressed, i.e.,
The latter, now known as the Cahn effect, though power suppressed, i.e. at higher twist, can be quite significant and cannot be neglected, since the values of the asymmetries themselves are usually not very large.

The lessons that we learn from these historical developments are in particular the following two points: when studying TMDs,
• it is important to take the gauge link into account;
• higher twist effects can be important.
Both of them demand that, to describe SIDIS in terms of TMDs, we need the proper QFT formulation rather than the intuitive parton model.

III. TMDS DEFINED VIA QUARK-QUARK CORRELATOR

The TMD PDFs of quarks are defined via the TMD quark-quark correlator Φ^(0)(x, k⊥; p, S) given by Eq. (2.40) (after integration over k⁻). A systematic study has been given in [23] and a very comprehensive treatment can also be found in [24]. Here, we first expand it in terms of γ-matrices and obtain a scalar, a pseudo-scalar, a vector, an axial-vector and an antisymmetric, space-reflection-odd tensor part. The operator expressions of the corresponding coefficients are given by traces of the quark-quark correlator with the corresponding Dirac matrices; for the vector component, for example, the coefficient is (1/2)Tr[γ_α Φ^(0)]. We then analyze the Lorentz structure of each part by expressing it in terms of the possible "basic Lorentz covariants" and scalar functions; from Φ^(0)(x, k⊥; p, S) we obtain the results of [23]. These scalar functions are known as TMD PDFs. There are 32 such TMD PDFs in total. Among them, 8 contribute at leading twist and all have clear probability interpretations, such as the number density f_1(x, k⊥), the helicity distribution g_1L(x, k⊥), the transversity h_1T(x, k⊥), the Sivers function f⊥_1T(x, k⊥), and the Boer-Mulders function h⊥_1(x, k⊥); 16 contribute at twist-3 and the other 8 at twist-4. We emphasize that they are all scalar functions of x and k⊥, i.e., they depend on x and k²⊥. If we integrate over d²k⊥, the terms whose basic Lorentz covariants are odd in k⊥ vanish, and Eqs. (3.3-3.7) reduce to the corresponding Eqs. (2.64-2.68). At leading twist, only 3 of the 8 survive, i.e. the number density f_1(x), the helicity distribution g_1L(x) and the transversity h_1T(x).

We show the leading twist TMD PDFs in Table I, and the twist-3 TMD PDFs in Table II. In these tables, we also show the results for the case L = 1, i.e. if we neglect the multiple gluon scattering and simply take the nucleon as an ideal gas system consisting of quarks and anti-quarks (see e.g. [24]). We also note that the conventions used here have the following systematics: f, g, and h stand for unpolarized, longitudinally and transversely polarized quarks; the subscript L or T stands for a longitudinally or transversely polarized nucleon; functions with subscript 1 are leading twist, those without a number are twist-3 and those with subscript 3 are twist-4; the ⊥ in the superscript denotes that the corresponding basic Lorentz covariant is k⊥-dependent.

Higher twist TMD PDFs are also defined via quark-j-gluon(s)-quark correlators such as those given by Eqs. (2.59-2.62). Many of them are, however, not independent, since they are related to those defined via the quark-quark correlator through the QCD equation of motion γ·D(z)ψ(z) = 0, which yields relations connecting x times the components of Φ^(0) to components of the quark-gluon-quark correlators. It is interesting to see that [35], although not generally proved, all the twist-3 TMD PDFs that are defined via the quark-gluon-quark correlator ϕ^(1)_ρ and are involved in SIDIS can be replaced by those defined via the quark-quark correlator Φ^(0).
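Since the explicit equations are not reproduced above, the following is a minimal sketch of this standard γ-matrix expansion and of the operator form of the vector coefficient, in commonly used conventions (the names of the coefficient functions and the gauge link notation L are assumptions, not the review's own notation):

```latex
\Phi^{(0)} = \frac{1}{2}\Big[\,\Phi^{(0)}_{S}
  + i\,\tilde\Phi^{(0)}_{P}\,\gamma_5
  + \Phi^{(0)}_{\alpha}\,\gamma^{\alpha}
  + \tilde\Phi^{(0)}_{\alpha}\,\gamma_5\gamma^{\alpha}
  + \tfrac{i}{2}\,\Phi^{(0)}_{\alpha\beta}\,\sigma^{\alpha\beta}\gamma_5 \Big],
\qquad
\Phi^{(0)}_{\alpha} = \tfrac{1}{2}\,\mathrm{Tr}\big[\gamma_{\alpha}\,\Phi^{(0)}\big],
```
```latex
\Phi^{(0)}_{\alpha}(x,\vec k_\perp;p,S)
  = \int \frac{d\xi^-\, d^2\xi_\perp}{(2\pi)^3}\,
    e^{\,i x p^+ \xi^- - i \vec k_\perp\cdot\vec\xi_\perp}\,
    \langle p,S|\,\bar\psi(0)\,\gamma_\alpha\,\mathcal{L}(0;\xi)\,\psi(\xi)\,|p,S\rangle .
```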
We would like to emphasize that fragmentation is simply the conjugate of parton distribution. A systematic study of the general structure of fragmentation functions (FFs) defined via the corresponding quark-quark correlator is presented in [26]. There is a one-to-one correspondence between TMD PDFs and TMD FFs. E.g., corresponding to the quark-quark correlator Φ^(0)(k, p, S) given by Eq. (2.40) and its expanded form Eq. (3.1), we have the fragmentation correlator with an analogous γ-matrix expansion. For a spin-1/2 hadron, we have a perfect one-to-one correspondence with the results given by Eqs. (3.3-3.7) for parton distributions in the nucleon. Comparing the resulting decompositions with Eqs. (3.3-3.7), we see clearly the one-to-one correspondence between FFs and PDFs.

As an example, we show the 8 leading twist components in Table III. We do not show the results for the case L = 1 for FFs. This is because, even if we neglect the multiple gluon scattering that leads to the gauge link, final state interactions can still exist between h and X. In this case, time reversal invariance does not force the T-odd amplitudes to vanish.

For spin-1 hadrons, the polarization is described by the polarization vector S and also the polarization tensor T (see e.g. [25] and [26]). The tensor polarization part has five independent components. They are given by a Lorentz scalar S_LL, a Lorentz vector S^μ_LT = (0, S^x_LT, S^y_LT, 0) and a Lorentz tensor S^μν_TT that has two independent non-zero components, S^xx_TT and S^xy_TT, in the rest frame of the hadron. These polarization parameters can be related to the probabilities for the particle to be in the different spin states [25]. In this case, the TMD quark-quark correlator Ξ^(0)(z, k_F⊥; p, S) is decomposed into a spin independent part, a vector polarization dependent part and a tensor polarization dependent part, i.e. Ξ^(0)(z, k_F⊥; p, S) = Ξ^U(0)(z, k_F⊥; p, S) + Ξ^V(0)(z, k_F⊥; p, S) + Ξ^T(0)(z, k_F⊥; p, S). The spin independent and vector polarization dependent part Ξ^U+V(0)(z, k_F⊥; p, S) takes exactly the same decomposition as that for a spin-1/2 hadron given by Eqs. (3.12-3.16). The tensor polarization dependent part is presented in [26]. We see that, for the vector polarization dependent part, similar to the nucleon TMD PDFs, there are 32 components in total, of which 8 contribute at leading twist, 16 at twist-3 and the other 8 at twist-4. For the tensor polarization dependent part, there are 40 components in total, of which 10 contribute at leading twist, 20 at twist-3 and the other 10 at twist-4. In Table IV, we list the twist-2 components of the tensor polarization dependent part.
If we integrate over d²k_F⊥, we have, corresponding to Eqs. (3.12-3.16), the results for the spin independent and vector polarization dependent part, and analogous results for the tensor polarization dependent part. We see that, for the spin independent and vector polarization dependent parts, 12 components survive: 3 of them contribute at twist-2, 6 at twist-3 and the other 3 at twist-4. This is exactly the same as for the PDFs of the nucleon, and we have an exact one-to-one correspondence between the results given by Eqs. (3.22-3.26) and those given by Eqs. (2.64-2.68). For the tensor polarization dependent part, only 8 components survive: 2 of them contribute at twist-2, 4 at twist-3 and the other 2 at twist-4. This corresponds to the situation for the PDFs of vector mesons. We should therefore have a one-to-one correspondence between the tensor polarization dependent FFs for the production of a spin-1 hadron and the PDFs of spin-1 hadrons. The twist-2 components are also listed in Table IV.

IV. ACCESSING THE TMDS IN HIGH ENERGY REACTIONS

The TMDs can be studied in semi-inclusive high energy reactions such as SIDIS e⁻ + N → e⁻ + h + X, semi-inclusive Drell-Yan h + h → l⁺ + l⁻ + X, and semi-inclusive hadron production in e⁺e⁻ annihilation, e⁺ + e⁻ → h₁ + h₂ + X. With SIDIS, we study TMD PDFs and TMD FFs together, while with Drell-Yan and e⁺e⁻ annihilation we study TMD PDFs and TMD FFs separately. We now follow the same steps as those for inclusive DIS and briefly summarize what we already have in constructing the corresponding theoretical framework.

(I) The general forms of the hadronic tensors: For all three classes of processes, the general forms of the hadronic tensors have been studied and obtained. For SIDIS, this has been discussed in [27-30], where it has been shown that one needs 18 independent structure functions for spinless h. For Drell-Yan, a comprehensive study was made in [31]; the number of independent structure functions is 48 for hadrons with spin 1/2. For e⁺e⁻ annihilation, the study was presented in [32]; one needs 72 structure functions for spin-1/2 h₁ and h₂. The results are systematically presented in these papers and we will not repeat them here. As an example, however, we recall the general form of the differential cross section for e⁻N → e⁻hX: it is a sum of the 18 structure functions F multiplied by the corresponding azimuthal modulations, with kinematic factors involving ε = (1 − y − γ²y²/4)/(1 − y + y²/2 + γ²y²/4) and γ = 2Mx/Q; the azimuthal angle ψ is that of the outgoing lepton l′ around the incident lepton beam with respect to an arbitrary fixed direction, which in the case of a transversely polarized target is taken as the direction of S_T. In the deep inelastic limit, neglecting power suppressed terms, dψ = dφ_S.

From Eqs. (4.1-4.7), we see explicitly that the 18 structure functions F are determined by the different azimuthal asymmetries in the different polarization configurations. These azimuthal asymmetries are just the average values of the corresponding trigonometric functions, e.g. ⟨cos φ_h⟩_UU = ∫ cos φ_h dσ_UU / ∫ dσ_UU. We also emphasize that these are general forms, independent of the parton model, valid at leading and higher twist and at leading and higher order in pQCD.

(II) LO pQCD, leading twist parton model results: These are the simplest parton model results and can be obtained easily. E.g., for SIDIS, the result takes the form of Eq. (4.10), in which each structure function is given by a convolution C[w_i f D] of a TMD PDF f and a TMD FF D weighted by w_i; the weights w_i are built from the transverse momenta and the unit vector p̂_hT = p_hT/|p_hT|. The result can be obtained from those given e.g. in [30] by neglecting all the power suppressed contributions.
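Since the explicit convolution formula is not reproduced above, a minimal sketch of its standard definition, in conventions such as those of [30] (the variable names and the sign convention inside the δ-function are assumptions), is:

```latex
\mathcal{C}\big[\,w\,f\,D\,\big]
 = x \sum_q e_q^2 \int d^2\vec k_\perp\, d^2\vec k_{F\perp}\;
 \delta^{2}\!\Big(\vec k_\perp - \vec k_{F\perp} - \vec p_{hT}/z\Big)\,
 w(\vec k_\perp, \vec k_{F\perp})\, f^{\,q}(x, k_\perp)\, D^{q}(z, k_{F\perp}),
```

with the sum running over quark and antiquark flavors.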
From Eqs. (4.10-4.16), we see in particular that, at leading twist, there exist 6 non-zero azimuthal asymmetries in the different polarization configurations, among them the cos 2φ_h and sin 2φ_h modulations. We would like to emphasize that the results given by Eqs. (4.10-4.21) are the complete parton model result at LO pQCD and leading twist. They can be used to extract the TMDs at this order. Any attempt to go beyond LO pQCD or to include higher twists needs to go beyond these expressions.

(III) LO pQCD, leading and higher twist results: For the semi-inclusive processes where only one hadron is involved, either in the initial or the final state, it has been shown [33-37] that the collinear expansion can be applied. Such processes include semi-inclusive DIS e⁻ + N → e⁻ + q(jet) + X and e⁺e⁻ annihilation e⁺ + e⁻ → h + q(jet) + X. By applying the collinear expansion, we have constructed the theoretical frameworks for these processes, with which leading as well as higher twist contributions can be calculated in a systematic way at LO pQCD. The complete results up to twist-3 have been obtained in Refs. [35-37]. For polarized e⁻ + N → e⁻ + q(jet) + X, the simplified expressions for the hadronic tensor are very similar to those for inclusive DIS given by Eqs. (2.55-2.58), i.e. traces of the hard parts ĥ^(j) with the correlators, and the complete results up to twist-3 involve the kinematic functions B(y) = 2(2 − y)√(1 − y) and D(y) = 2y√(1 − y). For unpolarized e⁻ + N → e⁻ + q(jet) + X, the results up to twist-4 have also been obtained [34]. These results are expressed in terms of the gauge invariant TMD PDFs or FFs and can be used as the basis for measuring these TMDs via the corresponding processes at LO pQCD.

We would like in particular to draw attention to the results for e⁺ + e⁻ → h + q(jet) + X for h with different spins [37]. For the hadronic tensor we again obtain very similar formulae, e.g. corresponding to Eqs. (4.28-4.30). Complete twist-3 results for the differential cross sections, azimuthal asymmetries and polarizations have been obtained for hadrons with spin-0, 1/2 and 1 in [37]. We see in particular that for spin-1 hadrons tensor polarization is involved. Even at the leading twist level we have, for e⁺e⁻ annihilation at the Z⁰ pole,

S^(0)_LL(y, z, p_T) = Σ_q T^q_0(y) D_1LL(z, p_T) / [2 Σ_q T^q_0(y) D_1(z, p_T)],   (4.44)
S^n(0)_LT(y, z, p_T) = −(2|p_T|/3zM) Σ_q P_q(y) T^q_0(y) G^⊥_1LT(z, p_T) / Σ_q T^q_0(y) D_1(z, p_T),   (4.45)

and analogous expressions (4.46)-(4.48), the last involving G^⊥_1TT(z, p_T), where n and t denote the two transverse directions of the produced vector meson, one normal to and the other inside the production plane. The coefficient T^q_0(y) is built from the electroweak couplings, with c^e_3 = 2c^e_V c^e_A, and y in this reaction is defined as y ≡ l₁⁺/k⁺. P_q(y) = T^q_1(y)/T^q_0(y) is the polarization of the quark produced in the Z⁰ decay.

This is a situation that is much less explored up to now and is worthwhile for many further studies.

For the above-mentioned three kinds of semi-inclusive processes, there are always two hadrons involved. It has not been proved whether the collinear expansion can be applied to such processes.
It is unclear how one can calculate leading and higher twist contributions in a systematic way. Nevertheless, twist-3 calculations have been carried out for these processes [38-41], in practice following these steps: (i) draw the Feynman diagrams with multiple gluon scattering up to the order of one gluon exchange, (ii) insert the gauge link in the correlator wherever needed to make it gauge invariant, and (iii) carry out the calculations to order 1/Q. Although not proved, it is interesting to see that the results obtained this way reduce exactly to those obtained in the corresponding simplified cases where the collinear expansion is applied, if we take the corresponding fragmentation functions as δ-functions.

V. AVAILABLE DATA AND PARAMETERIZATIONS

Experiments have been carried out for all three kinds of semi-inclusive reactions. The results are summarized e.g. in a number of plenary talks at Spin2014 by Marcin Stolarski and Armine Rostomyan [64,65]. Here, we briefly summarize the main data available and then try to sort out the TMD parameterizations that we already have.

At DESY, the first measurement of single-spin asymmetries in SIDIS with a longitudinally polarized target was carried out by HERMES [66] for the production of charged pions; measurements with a transversely polarized target followed for the first time in [67]. They found non-zero Sivers and Collins asymmetries ⟨sin(φ_h − φ_S)⟩_UT and ⟨sin(φ_h + φ_S)⟩_UT. Measurements have also been carried out for π⁰ and kaons [68,69], as well as for the azimuthal asymmetries ⟨cos φ_h⟩_UU and ⟨cos 2φ_h⟩_UU in the unpolarized case [70].

At JLab, CLAS has carried out measurements [79,80] of ⟨sin 2φ_h⟩_UL for pions of different charges and of ⟨sin φ_h⟩_LU for π⁰. The Hall A Collaboration has measured [81-84] the Collins and Sivers asymmetries for π± and K±, ⟨cos(φ_h − φ_S)⟩_LT for π±, and ⟨sin(3φ_h − φ_S)⟩_UT. These measurements are summarized in Table V.

Although the data are still far from abundant enough to give precise control of the TMDs involved, several sets of TMD parameterizations have already been extracted from them. We briefly sort them out in the following.

(1) Transverse momentum dependence: This is usually taken [96-100] as a Gaussian in a factorized form, independent of the longitudinal variable z or x (Eqs. (5.1) and (5.2)). The widths have been fitted, and the form and flavor dependence etc. have been tested. Typical values of the fitted widths are e.g. [96] ⟨k²⊥⟩ = 0.25 GeV², ⟨k²_F⊥⟩ = 0.20 GeV². Roughly speaking, this is a quite satisfactory fit. However, it has also been pointed out, e.g. in [99] for the TMD FF, that the Gaussian width seems to depend on the flavor and even on z, which means that the factorized Gaussian is only a zeroth order approximation.

(2) Sivers function: All the available data from HERMES [67-69], COMPASS [71-74, 76, 77], and JLab Hall A [81,82,84] on Sivers asymmetries in SIDIS for pions and kaons have been used in the parameterizations. The Sivers function is usually parameterized [96, 101-106] in the form of the number density f_q(x, k⊥) multiplied by an x-dependent factor N_q(x) and a k⊥-dependent factor h(k⊥), where N_q(x) is taken as a binomial-like function of x and h(k⊥) is taken to be Gaussian (Eq. (5.5)). The Sivers function ∆^N f_q(x, k⊥) defined in this way is related to the Sivers function f⊥_1T(x, k⊥) defined in Eq. (3.5) by ∆^N f_q(x, k⊥) = −(2|k⊥|/M) f⊥_1T(x, k⊥).
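As a concrete illustration of the forms just described, a sketch in Torino-style conventions (the parameter names N_q, α_q, β_q and M_1 are assumptions, since the explicit formulas are not reproduced above):

```latex
f_q(x,k_\perp) = f_q(x)\,\frac{e^{-k_\perp^2/\langle k_\perp^2\rangle}}{\pi\,\langle k_\perp^2\rangle},
\qquad
D_{h/q}(z,k_{F\perp}) = D_{h/q}(z)\,\frac{e^{-k_{F\perp}^2/\langle k_{F\perp}^2\rangle}}{\pi\,\langle k_{F\perp}^2\rangle},
```
```latex
\Delta^N f_q(x,k_\perp) = 2\,\mathcal{N}_q(x)\,h(k_\perp)\,f_q(x,k_\perp),\quad
\mathcal{N}_q(x) = N_q\,x^{\alpha_q}(1-x)^{\beta_q}\,
\frac{(\alpha_q+\beta_q)^{\alpha_q+\beta_q}}{\alpha_q^{\alpha_q}\,\beta_q^{\beta_q}},\quad
h(k_\perp) = \sqrt{2e}\,\frac{k_\perp}{M_1}\,e^{-k_\perp^2/M_1^2}.
```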
(3) Transversity and Collins function: A simultaneous extraction of these from the SIDIS data of the HERMES Collaboration [67-70] and COMPASS [71-77] on Collins asymmetries, together with the e⁺e⁻ data of Belle [85-87], has been carried out by the Torino group [97,107]. A form similar to that for the Sivers function has been used, and it has been found that the Collins function is non-zero and has different signs for, e.g., u → π⁺ and d → π⁺, as shown in Fig. 3. Here, in full analogy to the Sivers case, the Collins function ∆^N D_{h/q}(z, k_F⊥) is defined relative to the unpolarized FF.

(4) Boer-Mulders function: It was pointed out [111] that the HERMES and COMPASS data on the cos 2φ asymmetry [70,78] provide the first experimental evidence of the Boer-Mulders effect in SIDIS. Studies in this direction have been made in [110,111] to extract the Boer-Mulders function from the SIDIS data [70,78], and in [108,109,112] to extract it from Drell-Yan data [90-95]. A fit to the first moments of the Boer-Mulders function of the u and d quarks is shown in Fig. 4. The form was again taken similar to the Sivers function, simply multiplying the Sivers function by a constant, e.g. h^⊥q_1(x, k⊥) = λ_q f^⊥q_1T(x, k⊥) (5.15). However, we would like to point out that the cos 2φ asymmetry receives twist-4 contributions due to the Cahn effect [22]. A proper treatment of such twist-4 effects involves twist-4 TMDs, as shown in Eq. (4.39) and in [34]. Because of the multiple gluon scattering shown in Fig. 1, the twist-4 effects could be very different from those given in [22], whose results correspond to the case L = 1. A careful check might therefore change the conclusions obtained in [108-112].

Attempts to parameterize other TMDs, such as the pretzelosity h⊥_1T, have also been made [113]. Although there are not enough data to provide high accuracy constraints, the qualitative features obtained are also interesting.

The second part concerns the QCD evolution of the TMDs. As mentioned earlier, this is a topic that has developed very fast recently; a partial list of recent dedicated publications is [50-63]. QCD evolution equations have been constructed in particular for unpolarized TMD PDFs and also for polarized TMDs such as the Sivers function. The numerical results obtained from the evolution equations show explicitly that QCD evolution is very significant for TMDs. Not only the form of the k⊥-dependence but also the width of the Gaussian evolves with Q. More precisely, at small k⊥ a Gaussian parameterization can be used, but its width evolves with Q; at larger k⊥, the form of the k⊥-dependence is determined mainly by gluon radiation, deviates greatly from a Gaussian, and also evolves with Q. Fig. 5 shows an example of the evolution of the Gaussian parameterization at small k⊥, and Fig. 6 the evolution of the shape at large k⊥. It is also important to use the comprehensive TMD evolution rather than separate evolutions of the transverse and longitudinal dependences; an example is shown in Fig. 7.

The last item concerning TMD parameterizations that we would like to mention is the TMD library (TMDlib). We are happy to see that a first version was created [114] in 2014 and has been updated recently.
VI. SUMMARY AND OUTLOOK

In summary, by comparing with what we did in studying one dimensional imaging of the nucleon with inclusive DIS, we have presented a brief overview of our studies of three dimensional imaging of the nucleon with semi-inclusive DIS and other semi-inclusive reactions. We summarized in particular the general form of the TMDs defined via quark-quark correlators, both for TMD PDFs and for FFs. We emphasized the theoretical framework for semi-inclusive reactions at LO pQCD with leading and higher twist contributions treated consistently. This framework is obtained by applying the collinear expansion technique, developed in the 1980s for inclusive DIS, to these semi-inclusive processes. We emphasized that it now applies to all processes in which one hadron is involved. The results obtained in this framework should be used as starting points for studying TMDs experimentally.

Finally, we would like to emphasize that three dimensional imaging of the nucleon has been a hot and fast developing topic in recent years. Much progress has been made and many questions remain open. We see in particular that the LO pQCD, leading and higher twist framework for processes where one hadron is involved can be constructed using the collinear expansion. The factorization theorem at leading twist with LO and higher order pQCD contributions, and the QCD evolution equations for unpolarized TMD PDFs and for the Sivers function, have also been established. Especially in view of the running and planned facilities, such as the electron-ion colliders, we expect even more rapid development in the coming years.

This overview is far from complete. We apologize for the many aspects that we did not cover, such as generalized parton distributions, the Wigner function, model calculations of TMDs, nuclear dependences, and hyperon polarization.

FIG. 2: Example of the parameterizations of the Sivers functions for u and d flavors at Q² = 2.4 (GeV/c)² by the Torino group. The figure is taken from [104].

FIG. 3: Example of the Torino parameterizations of the transversity and Collins function. In the left panel, we see the transversities x∆_T q(x) = xh_1q(x) for q = u, d; in the right panel, we see the first moments of the favored and disfavored Collins functions. The figure is taken from [107].

FIG. 5: Example showing the TMD evolution of the Gaussian parameterization in the low k⊥ region. The curves show the evolved Bochum Gaussian fits of the up quark Sivers function at x = 0.1. This figure is taken from Ref. [54].

FIG. 6: Example showing the evolved k⊥ dependence in the large k⊥ region. Here we see the up-quark Sivers function at Q = 5 GeV and Q = 91.19 GeV compared with the corresponding Gaussian fits in the low k⊥ region at x = 0.1. This figure is taken from Ref. [54].

FIG. 7: Example showing the difference between the results of the full TMD evolution and a DGLAP evolution of the x-dependence only, for an unpolarized TMD PDF. This figure is taken from Ref. [55].

TABLE I: The 8 leading twist TMD PDFs defined via the quark-quark correlator. A × means that the corresponding term disappears upon integrating the quark-quark correlator over d²k⊥.

TABLE II: The 16 twist-3 TMD PDFs defined via the quark-quark correlator. A × means that the corresponding term disappears upon integrating the quark-quark correlator over d²k⊥.
TABLE III: The 8 leading twist TMD FFs for spin-1/2 hadrons defined via the quark-quark correlator. A × means that the corresponding term disappears upon integrating the quark-quark correlator over d²k_F⊥.

TABLE IV: The 10 tensor polarization dependent TMD FFs for spin-1 hadrons defined via the quark-quark correlator. A × means that the corresponding term disappears upon integrating the quark-quark correlator over d²k_F⊥.

TABLE V: Available measurements of azimuthal asymmetries in SIDIS, e + N → e + π± X: A^{sin φ_h}_{UL}, A^{sin 2φ_h}_{UL} [66].
A Transparent Ultrasound Array for Real-Time Optical, Ultrasound, and Photoacoustic Imaging

Objective and Impact Statement. Simultaneous imaging of ultrasound and optical contrasts can help map structural, functional, and molecular biomarkers inside living subjects with high spatial resolution. There is a need to develop a platform that facilitates this multimodal imaging capability to improve diagnostic sensitivity and specificity. Introduction. Currently, combining ultrasound, photoacoustic, and optical imaging modalities is challenging because conventional ultrasound transducer arrays are optically opaque. As a result, complex geometries are used to coalign both optical and ultrasound waves in the same field of view. Methods. One elegant solution is to make the ultrasound transducer transparent to light. Here, we demonstrate a novel transparent ultrasound transducer (TUT) linear array fabricated using a transparent lithium niobate piezoelectric material for real-time multimodal imaging. Results. The TUT-array consists of 64 elements and is centered at ~6 MHz. We demonstrate quad-mode ultrasound, Doppler ultrasound, photoacoustic, and fluorescence imaging in real time using the TUT-array directly coupled to tissue-mimicking phantoms. Conclusion. The TUT-array successfully demonstrated multimodal imaging capability and has potential applications in diagnosing cancer, neurological, and vascular diseases, including image-guided endoscopy and wearable imaging.

Introduction

Ultrasound and optical imaging modalities are nonionizing, portable, and affordable, and can be realized in various forms, from tabletop systems to miniaturized endoscopes or wearable devices [1,2]. Ultrasound (US) imaging provides deep tissue structural information based on differences in acoustic impedance, and complementary functional blood flow information through Doppler ultrasound [3]. Pure optical imaging methods such as fluorescence imaging capture biochemical information of targeted cells and tissue (e.g., autofluorescence from the metabolic cofactors NAD/NADH: nicotinamide adenine dinucleotide) and therefore allow high diagnostic sensitivity and specificity [4-7]. Optical imaging provides the best spatial resolution (submicrons to a few microns) when probing superficial depths (<1 mm). However, strong scattering of optical photons inside deep tissue severely limits the spatial resolution of pure optical imaging, which is typically in the range of 1/5th to 1/10th of the imaging depth [8]. Photoacoustic (PA) imaging, as a hybrid imaging modality, maps the optical absorption contrast of deep tissue with ultrasonic spatial resolution. For example, hemoglobin-absorption-based label-free imaging of vascular anatomy and functional oxygen saturation has been shown to be useful in diagnosing cancer, neurological, and vascular diseases [9-11]. In PA imaging, light undergoes only one-way scattering inside the tissue medium, i.e., from the skin surface to the target location. At the target location, such as a blood vessel, light is converted to ultrasound waves by light-absorbing chromophores. Because the generated ultrasound waves are about 100-fold less scattered than light waves, PA imaging provides higher imaging depth and better spatial resolution (scalable with ultrasound parameters, typically 1/100th of the imaging depth, i.e., 0.5 mm spatial resolution at 5 cm depth) compared to deep tissue optical imaging [8,12,13].
While PA imaging provides rich optical contrast from a wide range of light-absorbing particles (e.g., proteins, small molecules, and nanoparticles), it is to be noted that the penetration depth and spatial resolution of PA imaging are still lower than those of conventional ultrasound imaging. Therefore, a synergistic integration of optical, US, and PA imaging technologies into a single multimodal imaging platform will provide complementary contrasts, penetration depths, and spatial resolutions. Such platforms are desired in many biomedical applications to simultaneously image a set of structural, functional, and molecular biomarkers.

Different combinations of optical and ultrasound imaging systems have been reported for different clinical applications. In cancer imaging, Fatakdawala et al. demonstrated in vivo imaging of oral cancer in a hamster model using a bench-top combination of fluorescence lifetime imaging (FLI), PA, and US imaging techniques [14]. FLI revealed biochemical (NADH) changes on the tissue surface, with a lower fluorescence lifetime for the oral cancer tissue compared to the surrounding tissue. US imaging provided the underlying tissue morphology and microstructure, and PA imaging detected high vascularization within the cancerous tissue. Similarly, Tummers et al. performed multimodal US, PA, and fluorescence imaging of a surgically removed pancreatic specimen obtained from a pancreatic ductal adenocarcinoma (PDAC) patient [15], who had been intravenously administered a near-infrared (NIR) fluorescent agent, Cetuximab-IRDye800, that binds to the epidermal growth factor receptor. In this case, fluorescence imaging provided the surface projection of the targeted Cetuximab-IRDye800 agent, PA imaging showed the depth-resolved optical absorption contrast from the IRDye800 and the surrounding vasculature, and ultrasound imaging revealed the underlying tissue anatomy. For imaging atherosclerosis, a cardiovascular disease characterized by the accumulation of lipid plaques and several fibrous and cellular constituents, intravascular ultrasound (IVUS) and optical coherence tomography (OCT) technologies are commonly used in the clinic [16-18]. Recently, intravascular PA (IVPA) imaging has also been actively studied for mapping deep tissue atherosclerosis based on the high optical absorption contrast of plaque lipids in the NIR-IIb (1.5 μm-1.7 μm) optical window [19-23]. Similarly, neuroscience studies also require high-resolution multiparametric hemodynamic information (cerebral blood flow, blood volume, and oxygen saturation) obtained from optical and photoacoustic imaging for mapping resting state brain connectivity [24-26], studying neuromodulation [27], neurovascular coupling [28-30], and neurological diseases [31-33]. For this purpose, functional ultrasound (fUS) imaging, which provides high resolution images of microvascular blood flow, has recently been integrated with hemoglobin-absorption-based PA vascular imaging [34]. However, the current experimental setups integrating fluorescence, US, and PA technologies are limited to raster scanning the imaging device over the tissue sample, one imaging mode at a time [14]. Since real-time US imaging (e.g., IVUS and fUS) is performed using an ultrasound transducer array, the most viable approach for real-time multimodal imaging is to integrate fluorescence (or other optical technologies) and PA imaging into the US imaging array-based platform.
However, the optical opacity of conventional ultrasound transducers hinders coaxial and compact integration of the ultrasound transducer array with optical illumination and detection fibers. For example, real-time B-mode US and PA (USPA) imaging devices are built by simply assembling optical fiber bundles around a conventional ultrasound transducer probe (Figure 1(a)). Due to the physical separation between the two optical fiber bundles, optical illumination is not available below the surface of the ultrasound transducer up to a depth of 1-2 cm (see Figures 1(a) and 1(b)) [35,36]. To partially offset this problem and achieve coaligned optical and ultrasound fields on the tissue surface, USPA devices are operated with long working distances (>1 cm), visible as a dark region (Figure 1(b)), using water or ultrasound gel as the coupling medium between the tissue and the probe surface [37,38]. This limits the miniaturization of multimodal imaging devices and longitudinal in vivo imaging capabilities, introduces artifacts, and increases ultrasound attenuation as well as ultrasound scattering if any bubbles form in the coupling medium. The requirement for a long working distance also limits the imaging speed, because redundant data corresponding to the non-illuminated region are captured and processed during both US and PA data acquisition. For example, the additional working distance required for real-time PA imaging would preclude its integration with power Doppler ultrasound-(PDUS-) based microvasculature imaging, which needs high-frame-rate (>10,000 frames per second) plane wave ultrasound imaging [3,39].

The above challenges can be overcome by employing transparent ultrasound transducers (TUTs) that allow light delivery through the transducer, as shown in Figure 1(c). By doing so, the ultrasound transducer becomes a part of the optical system instead of an obstruction to the optics. This will not only significantly reduce the beam engineering challenges but also lead to the development of more compact, portable, wearable, and versatile multimodal systems. For this purpose, both conventional piezoelectric materials [40-43] and capacitive micromachined ultrasound transducers (CMUTs) [44-46] have been studied for developing TUTs. Ilkhechi et al. reported a transparent CMUT array for ultrasound imaging of a small tissue phantom [45] and for photoacoustic imaging [46] of pencil leads submerged in an oil tank. Transparent CMUTs have not yet been demonstrated for deep tissue US and real-time dual-modality USPA imaging. Although CMUTs have unique advantages, such as wide bandwidth and ease of fabrication in 1D and 2D array forms with different shapes, sizes, and frequencies, they require complex clean room fabrication processes, large bias voltages, and custom-developed integrated circuits for operation, leading to their incompatibility with current clinical ultrasound systems [10,47-49]. All-optical photoacoustic detectors, such as transparent optical ring resonators [50] and Fabry-Pérot etalons [51], have the ability to transmit light through them and into the tissue. However, these systems require an additional laser and optical detectors to detect the generated photoacoustic waves, and as such are not compatible with commercial ultrasound machines [52]. Moreover, these detectors cannot be used for the ultrasound excitation/imaging required for dual-modality USPA imaging applications.
While prior studies demonstrated the potential of transparent lithium niobate-(LN-) based single element TUTs for high sensitivity PA imaging [40-43], TUT-arrays are required for real-time multimodal imaging. To address the above-mentioned limitations, in this work we introduce a one-dimensional (1D) linear TUT-array based on a transparent LN piezoelectric material and demonstrate its feasibility for real-time multimodal deep tissue imaging. To the best of our knowledge, this is the first TUT-array that uses a transparent bulk piezoelectric material. We characterized the TUT-array using electrical and acoustic methods. The TUT-array enabled coalignment of the acoustic and light pathways with minimal acoustic coupling. Imaging of tissue-mimicking phantoms validated the quad-mode US, PA, Doppler ultrasound, and fluorescence imaging capabilities of the TUT-array for providing the respective structural, functional, and molecular information of the tissue without introducing any shadow regions. In the future, TUT-arrays can have broad biomedical applications, such as compact multimodal endoscopy or wearable imaging, and also photo-mediated ultrasound therapy for deep vein thrombosis or wound healing.

TUT-Array Design and Fabrication. The schematic of the proposed TUT-array is shown in Figure 2(a). The Krimholtz, Leedom, and Matthaei (KLM) model-based simulation software (PiezoCAD, Sonic Concepts, Woodinville, WA, USA) [53] was used to study the electrical impedance, pulse-echo response, and corresponding bandwidth of the array element, while the MATLAB Ultrasound Toolbox (MUST) was used to simulate the beam profile of the 16-element synthetic aperture of the array for different steering angles [54]. A center frequency of 6.5 MHz was chosen to match commonly used diagnostic ultrasound devices; this can be achieved with a 0.5 mm thick LN plate. Double-side indium tin oxide-(ITO-) coated LN was selected as the piezoelectric material due to its high optical transmission (>80% at NIR wavelengths) and good electromechanical coupling coefficient (49%). The element width of 0.2 mm was chosen to be less than 0.6 × (element thickness) and greater than λ/2 to avoid spurious resonance modes, where λ is the ultrasound wavelength in the tissue medium. The element pitch needs to be within the range λ/2 to 3λ/2 to avoid grating lobes; therefore, a pitch of 0.3 mm was chosen for the 6.5 MHz linear array [55], with a total of 64 elements and an element height of 5 mm. The 64 elements were created by dicing 400 μm deep into the 500 μm LN wafer, leaving 100 μm for shorting all elements as the common ground. A 1 mm thick conductive glass slide was bonded to the LN, serving as the first backing layer as well as the ground connection. An additional backing layer of transparent epoxy was placed on top of the glass slide to further reduce acoustic reverberation. To individually address each element, a custom fabricated cable was anisotropic-conductive-film (ACF) bonded to the edge of the array, as shown in Figures 2(a) and 2(b). To improve the ultrasound energy transmission, a quarter-wavelength thick matching layer of Parylene C (not shown in the Figure 2(a) schematic) was deposited for acoustic impedance matching and waterproofing. The acoustic properties of the stacking materials used in the TUT-array fabrication are summarized in Table 1 and the design parameters of the array are summarized in Table 2.
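The design rules quoted above can be checked numerically. The following sketch assumes a tissue sound speed of 1540 m/s, which is not stated in the text:

```python
import math

c_tissue  = 1540.0   # m/s, assumed speed of sound in tissue
f_c       = 6.5e6    # Hz, design center frequency
thickness = 0.5e-3   # m, LN plate thickness
width     = 0.2e-3   # m, element width
pitch     = 0.3e-3   # m, element pitch

lam = c_tissue / f_c                       # ultrasound wavelength in tissue
print(f"lambda = {lam*1e6:.0f} um")        # ~237 um

# Rule 1: lambda/2 < width < 0.6 * thickness (avoid spurious lateral modes)
print(lam/2 < width < 0.6*thickness)       # True: 118 um < 200 um < 300 um

# Rule 2: lambda/2 <= pitch <= 3*lambda/2 (grating-lobe bound quoted in the text)
print(lam/2 <= pitch <= 1.5*lam)           # True: 118 um <= 300 um <= 355 um
```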
Further, the detailed step-wise fabrication of the TUT-array is presented in Section 4.1. Figure 2(b) shows a picture of the fabricated TUT linear array on top of a "Penn State" logo. Although discontinuities were observed between the elements due to the dicing kerf, the letters are clearly readable throughout the TUT-array. The zoomed-in image in Figure 2(b) shows proper alignment and bonding between the flexible cable traces and each LN element.

Figure 3(a) shows a typical pulse-echo result obtained from the center element, #32, of the array. Due to the mass-loading effect of the attached glass slide, a dual-frequency behavior was observed, similar to previously reported articles [59,60], and it agreed well with the PiezoCAD simulation, as shown in Figure 3(b). The center frequencies of the element were found to be 5.94 MHz and 7.69 MHz, with -6 dB fractional bandwidths of 6.2% and 7.6%, respectively. These results were similar to the characteristics found in the simulation, with 5.96 MHz and 7.20 MHz center frequencies and respective fractional bandwidths of 6.58% and 5.34%. To investigate the consistency across all 64 array elements, the center frequencies and corresponding bandwidths of each element are plotted in Figure 3(c). The plotted center frequency indicates only the more dominant frequency, i.e., the one exhibiting the higher magnitude in the frequency response (0 dB after normalization). The two-way pulse-echo peak-to-peak amplitudes for each array element are plotted in Figure 3(d), and the corresponding B-scan images from all elements are plotted in Figure 3(e). These data can be categorized into three subgroups: subgroup 1, elements #1 to #9; subgroup 2, elements #10 to #41; and subgroup 3, elements #42 to #64, with center frequencies of 6.65 MHz, 5.93 MHz, and 7.41 MHz, respectively, and corresponding averaged bandwidths of 8.1%, 6.45%, and 9.12%. Subgroup 1 showed a significantly higher peak-to-peak amplitude than the other two groups, while subgroup 2 showed the lowest peak-to-peak amplitude. We hypothesize that these differences were due to the uneven residual bonding epoxy thickness between the LN (Figure 2(a), L2) and the backing conductive glass (Figure 2(a), L3). To further confirm this, we performed PiezoCAD simulations of a TUT transducer element with varying residual epoxy thicknesses and compared them with the experimental pulse-echo waveforms from the three subgroups. The summary of these comparison results is plotted in Supplementary Figure S1; it shows that the simulated pulse-echo and frequency responses for epoxy thicknesses of 0 μm, 15 μm, and 30 μm closely match the experimental pulse-echo waveforms of elements number 32 (E32), 55 (E55), and 4 (E4), from subgroups 2, 3, and 1, respectively. For example, the simulated center frequencies were found to be 6.90 MHz, 5.99 MHz, and 7.23 MHz for residual epoxy thicknesses of 30 μm, 0 μm, and 15 μm, respectively, closely matching the averaged center frequencies of subgroups 1, 2, and 3.
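The center-frequency and -6 dB bandwidth extraction described above can be sketched as follows. This is an assumed implementation, not the authors' actual processing code; for the dual-frequency elements reported here, each spectral peak would be windowed separately before applying it:

```python
import numpy as np

def center_freq_and_bandwidth(waveform, fs, f_lo=3e6, f_hi=11e6):
    """Estimate the dominant center frequency and -6 dB fractional
    bandwidth of a pulse-echo waveform sampled at fs (Hz),
    restricted to the band [f_lo, f_hi]."""
    n = len(waveform)
    spec = np.abs(np.fft.rfft(waveform * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spec, freqs = spec[band], freqs[band]
    spec_db = 20.0 * np.log10(spec / spec.max() + 1e-12)
    f_c = freqs[np.argmax(spec_db)]        # dominant (0 dB) peak, as plotted in Fig. 3(c)
    in_band = freqs[spec_db >= -6.0]       # note: merges peaks if both exceed -6 dB
    bw_pct = 100.0 * (in_band.max() - in_band.min()) / f_c
    return f_c, bw_pct
```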
Crosstalk Measurements. Due to the subdicing of the TUT linear array in this work, higher crosstalk between the elements was expected. To quantify the combined acoustic and electrical crosstalk, the TUT-array was placed in a tank of deionized water facing a high frequency acoustic absorber (Aptflex F28, Precision Acoustics, Dorchester, UK). Element #32 was fired by a function generator with a 10 Vpp, 10-cycle burst, with the frequency swept from 3 MHz to 11 MHz. The voltages received at the first, second, and third adjacent elements were measured and referenced to the excitation voltage to assess the combined electrical and acoustic crosstalk [61]. As shown in Figure 3(f), the highest measured crosstalk was -29.6 dB, -32.9 dB, and -34.49 dB for the first, second, and third adjacent elements, respectively, at 6 MHz. These crosstalk values are not significantly higher than the <-33 dB crosstalk reported for linear arrays at similar frequencies [62], which could possibly be due to the lower sensitivity of the array elements. Therefore, the crosstalk at element #4 was measured across the same frequency range and is shown in Supplementary Figure S2. Due to the higher sensitivity of subgroup 1 compared to subgroup 2 (Figure 3(d)), the crosstalk increased to -26 dB at 3 MHz. Interestingly, no significant fluctuations of the crosstalk were observed near the resonance frequency (~6.65 MHz for this subgroup), which may be due to the increased residual epoxy thickness providing better acoustic absorption (1 dB/cm/MHz acoustic attenuation for Epotek 301 versus 0.06 dB/cm/MHz for glass).

Electrical Impedance Measurements. Electrical impedance measurements were conducted for each element of the TUT-array; the characterization method is described in our previous work [41]. A calibrated electrical impedance analyzer (Agilent 4990A, Keysight Technologies, Inc., Santa Rosa, CA, USA) was used to determine the phase and electrical impedance of each linear array element. Figure 4(a) shows the measured input impedance magnitude and phase for the center element, #32. Due to the dual-frequency behavior of these elements, two pairs of resonance (F_r) and antiresonance (F_a) frequencies were observed. The first pair has resonance (F_r1) and antiresonance (F_a1) frequencies of 5.75 MHz and 6.08 MHz, respectively, and the second pair has resonance (F_r2) and antiresonance (F_a2) frequencies of 7.2 MHz and 7.68 MHz, respectively. The electromechanical coupling coefficients were then calculated to be 0.325 and 0.348 for the two pairs, according to the IEEE standard on piezoelectricity [63]. These resonance and antiresonance frequencies agree well with the PiezoCAD simulation, as shown in Figure 4(b), although discrepancies in the impedance values were observed due to the limitations of simulating the system electrical resistance, primarily from the high resistivity of ITO, within PiezoCAD. To examine the uniformity of the electrical impedance across all elements, the two pairs of F_r and F_a for each array element are plotted in Figure 4(c). Interestingly, the variations seen in the pulse-echo measurements were not present in the electrical impedance measurements. PiezoCAD simulations of the electrical impedance of array elements for residual epoxy thicknesses of 0 μm, 15 μm, and 30 μm, and the corresponding experimental results from the three subgroups, are shown in Supplementary Figure S3. The simulation results for the 0 μm and 15 μm thick residual epoxy closely match both the magnitude and phase impedance curves of the typical elements E32 and E55 from subgroups 2 and 3. However, the 30 μm epoxy simulation result exhibits slight differences in the F_r and F_a values in comparison with the experimental E4 element from subgroup 1, which can be attributed to a non-homogeneous residual thickness on the element that could not be simulated.
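The coupling-coefficient and crosstalk numbers above follow from simple relations that can be checked directly. A sketch, using the common IEEE-standard effective-coupling definition, which reproduces the quoted values:

```python
import math

def k_eff(f_r, f_a):
    # IEEE-standard effective electromechanical coupling coefficient
    return math.sqrt(1.0 - (f_r / f_a) ** 2)

print(round(k_eff(5.75e6, 6.08e6), 3))   # 0.325, first resonance pair of element #32
print(round(k_eff(7.20e6, 7.68e6), 3))   # 0.348, second pair

def crosstalk_db(v_received, v_excited):
    # Combined electrical + acoustic crosstalk, referenced to the drive voltage
    return 20.0 * math.log10(v_received / v_excited)
```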
Furthermore, the same impedance analyzer was used to measure the capacitance of each element; the results are shown in Figure 4(d). Overall, the capacitance across the 64 elements ranged from 40 pF to 80 pF, and the observed variations could be largely due to the uneven bonding thicknesses across the array. The first 9 elements showed higher capacitance than the rest of the elements, which may be attributed to the thicker residual epoxy.

Beam Profile Mapping. As the current TUT-array was fabricated using the subdicing method, with a higher chance of grating lobe artifacts, we used a focused synthetic aperture beam transmission strategy with a 16-element effective aperture at 0-degree beam steering and a focus 15 mm away from the transducer surface to generate quality B-mode ultrasound images. To evaluate the performance of the proposed linear array with respect to side lobes and grating lobes, the beam profile from the 16 elements centered around element #32, steered at 0, -10, and 10 degrees, was experimentally measured using a scanning hydrophone and compared with the corresponding simulated beam profiles generated using the MUST software package. The hydrophone measurement procedure is similar to previously reported literature [64] and is described in detail in Section 4. The results in Figure 5 show that the simulated and corresponding experimental beam profiles in the top and bottom rows agree well overall for the three angles: no grating lobes are observed when the beam is transmitted at 0 degrees (Figures 5(a) and 5(d)), but strong grating lobes are observed when the beam is steered at an angle (Figures 5(b), 5(c), 5(e), and 5(f)). The deteriorated focusing capability seen in the experimental results, especially at 10 and -10 degrees, can primarily be attributed to the subdicing of the TUT-array.

Quad-Mode Imaging Validation. Using the fabricated TUT-arrays, we demonstrated quad-mode US, PA, Doppler US, and optical fluorescence imaging capabilities. 6 MHz was selected as the imaging frequency, as it was consistently one of the dual frequencies exhibited across all elements, despite the different residual epoxy thicknesses (Figures 3, S1, S3). Three phantoms were used to validate the TUT-array's multimodal imaging capability. The schematic in Figure 6(a) represents a deep tissue phantom for validating US and PA imaging capabilities, the schematic in Figure 6(b) represents a blood flow phantom for Doppler US imaging, and Figure 6(c) shows a fluorescence bead phantom used for demonstrating optical fluorescence imaging through the TUT-array.

2.3.1. USPA Imaging. The TUT-array was connected to the Vantage 256 ultrasound data acquisition system to perform real-time interleaved US and PA imaging of a tissue phantom prepared from a solution mixture of agarose and silica powder. The B-mode US imaging used the focused synthetic aperture beam transmission strategy with a 16-element effective aperture at 0-degree beam steering, with each transmitted beam focused 15 mm away from the transducer surface. The US and PA imaging sequence is detailed in Materials and Methods: Imaging System and Data Acquisition Sequence.
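For reference, the transmit-focusing delays implied by this description can be sketched as follows. This is a minimal sketch: the sound speed of 1540 m/s and referencing the delays to the outermost elements are assumptions, since the acquisition script itself is not reproduced here:

```python
import numpy as np

# Transmit-focusing delays for the 16-element synthetic aperture
# (focus at 15 mm, 0-degree steering), as described for the B-mode sequence.
c, pitch, n_sub, z_f = 1540.0, 0.3e-3, 16, 15e-3
x = (np.arange(n_sub) - (n_sub - 1) / 2) * pitch   # element x-positions (m)
path = np.sqrt(z_f**2 + x**2)                      # element-to-focus distances
delays = (path.max() - path) / c                   # fire outer elements first
print(delays * 1e9)   # delays in ns; ~109 ns edge-to-center spread
```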
The phantom and imaging schematic are shown in Figure 6(a). The phantom consisted of four metal wire targets, each with a diameter of 50 μm and dyed with India ink to generate strong photoacoustic contrast. Figure 6(a) shows the approximate positions of the 4 black wire targets along the imaging depth of the phantom, with approximately 5 mm distance between the targets. The tissue phantom also contained two ultrasound-only targets (H1 and H2) of cylindrical shape, filled with agar solution to mimic hypoechoic regions in the tissue medium (see Methods: Multimodal Imaging Phantom Preparation). The TUT-array was placed directly on top of the phantom and the laser light irradiated the phantom through the TUT-array, demonstrating the advantage of using the TUT-array for dual-modality USPA imaging with minimal coupling. To compare the US imaging performance of the TUT-array with a commercial ultrasound linear array, the same USPA phantom was also imaged with a linear probe (ATL L7-4, Philips) operated at the same 6 MHz frequency. The US image from the L7-4 is shown in Figure 6(d) and the USPA imaging results acquired by the TUT-array are shown in Figures 6(e) and 6(f). Both US images in Figures 6(d) and 6(e) clearly show the four micro metal wire targets (~5 mm apart in the axial plane and ~3 mm apart in the lateral plane), as these targets have a different acoustic impedance from the background tissue-mimicking medium. The US image from the commercial probe L7-4 showed stronger contrast for the wire targets and the background than that from the TUT-array at the same dynamic range. The wires were broadened along the lateral axis in the deeper regions (>20 mm). Due to the better sensitivity of the commercial probe, the hypoechoic targets showed stronger contrast than in the TUT-array US image. The PA imaging result in Figure 6(f) shows the depth-resolved optical absorption contrast from the four metal wires dyed with India ink. The locations of all four wires are clearly displayed at the expected positions, with sufficient PA contrast over the background. By measuring the FWHM at W1, the axial and lateral resolutions of the PA-imaged wires were found to be 583.6 μm and 363.1 μm, respectively. The two hypoechoic targets are not observable in the PA image due to the absence of significant light absorption in the transparent agar-only medium, as expected.

Doppler Ultrasound. To demonstrate that the fabricated TUT-array is sensitive enough to map microparticle-motion-induced ultrasound frequency changes, Doppler ultrasound imaging was performed using a phantom consisting of a polyethylene tube carrying circulating blood-mimicking fluid (BMF, particles of 5 μm diameter). The schematic of this phantom is shown in Figure 6(b) and details are provided in Methods: Multimodal Imaging Phantom Preparation. A peristaltic pump was used to circulate the BMF in a loop through the tube (hence the two parallel tubes in the field of view), while the TUT-array was directly coupled to the phantom at an angle of 60° to the two tubes. The coregistered US and color Doppler images were acquired and processed by the Vantage 256 (see Methods: Imaging System and Data Acquisition Sequence). Figure 6(g) shows the US speckle contrast from the two tubes in grayscale, with an overlaid color Doppler image showing the opposite flow directions of the BMF in the two tubes in blue and red color scales. The measured size of these colored regions agreed well with the tube diameter of ~2 mm.
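As a sanity check on the expected Doppler signal from this phantom, the standard Doppler-shift relation can be evaluated for these settings. This is a sketch; the flow speed is an assumed value, since the pump's flow rate is not stated:

```python
import math

# Expected color Doppler shift: f_D = 2 * f0 * v * cos(theta) / c
f0    = 6.0e6                  # Hz, imaging frequency
theta = math.radians(60.0)     # beam-to-flow angle from the text
c     = 1540.0                 # m/s, assumed sound speed in the phantom
v     = 0.10                   # m/s, assumed flow speed from the peristaltic pump

f_D = 2.0 * f0 * v * math.cos(theta) / c
print(f"{f_D:.0f} Hz")         # ~390 Hz, easily resolvable by the Doppler ensemble
```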
Fluorescence Imaging. In the final step, we validated the feasibility of fluorescence imaging through the TUT-array. For this purpose, ultraviolet-(UV-) reactive fluorescence beads of 50 μm diameter (Ultraglow, Techno Glow, Ennis, Texas, USA) were shaped to form a "PSU" pattern within a rectangular area of 12 mm × 3 mm. The TUT-array was then placed on top of the pattern. Under 120 W UV light excitation, the captured fluorescence emission signal from the "PSU" could be easily distinguished from the background in Figure 6(c). The discontinuities in the fluorescence image are due to the translucent kerfs in the TUT-array.

Discussions

In this paper, a transparent lithium niobate-based TUT-array was fabricated and validated for multimodal optical and ultrasound imaging applications. It is, to the best of our knowledge, the first transparent ultrasound linear array using a bulk piezoelectric material. We successfully demonstrated the feasibility of the TUT-array for quad-mode US, PA, Doppler US, and fluorescence imaging in real time. Dual-modality USPA imaging results using the TUT-array demonstrated its potential to acquire both US (30 frames per second) and PA (10 frames per second, limited by the laser firing rate) images with a bare minimum of acoustic coupling between the array and the imaging subject. The ability to deliver light through the TUT-array and into the imaged object without any additional optical components not only reduces the complexity of building a multimodal ultrasound and optical imaging platform but also helps eliminate the shadowed-illumination problems commonly observed in conventional B-mode dual-modality USPA imaging systems. Further, the experiments on blood-flow-mimicking phantoms demonstrated that the TUT-array is also capable of mapping the direction of blood flow using color Doppler ultrasound. In addition, the high optical transparency of the TUT-array was exploited for imaging fluorescent objects. Together, these experiments demonstrate the feasibility of developing a multimodal optical and ultrasound imaging platform based on the TUT-array technology, in particular for space-constrained miniaturized multimodal endoscopy devices. For example, the proposed array could be made in an endoscopic form to integrate the current standard optical and ultrasound endoscopies [65-67] into one platform for image-guided biopsy for early cancer detection. With such a platform, the patient would undergo only one endoscopy procedure while providing rich multimodal information: US-based structural information, Doppler-US-based blood flow information, PA-based blood oxygenation information, and fluorescence-enhanced tumor metabolism (e.g., NADH). Such comprehensive information is needed to assess tissue function and pre- and post-treatment efficacy [68]. However, the current TUT-array needs further optimization with respect to the challenges mentioned below before it becomes clinically applicable.

Backing layer bonding: The uneven bonding epoxy thickness between the conductive glass slide and the piezoelectric material for the ground connection led to variations in the pulse-echo responses, resulting in the three major frequency responses observed across the 64 array elements. Our simulations confirmed that the epoxy thickness between the LN and the conductive glass changes the electrical impedance and acoustic response of the element. One way to overcome this issue in the future is to replace the conductive glass slide with a transparent electrode-coated epoxy block serving as a homogeneous backing layer.
Sensitivity and bandwidth: Comparing the ultrasound imaging capabilities of the current TUT-array with a commercial linear probe showed that the TUT-array imaged the phantom targets with much lower contrast and lower axial resolution, due to its lower sensitivity and deteriorated bandwidth. In the future, the TUT-array sensitivity can be improved by optimizing each acoustic stacking layer in the TUT fabrication. First of all, a transparent piezoelectric material with higher piezoelectricity can be employed, such as alternating current (AC) poled lead magnesium niobate-lead titanate (PMN-PT), which exhibits a d_33 of 2200 pC/N compared to 350 pC/N for LN [59,69]. The bandwidth and sensitivity are also reduced by the glass backing layer. The glass bonded to the back side of the linear array induced the mass-loading effect that resulted in the dual-frequency behavior. Additionally, the conductive glass slide is not an ideal backing material for the ultrasound transducer because its acoustic impedance (~11 MRayl) is not matched to that of the LN piezoelectric material (~34 MRayl), and glass is not a good acoustic absorbing or damping material (a numerical estimate of this mismatch is sketched after this section). These factors contributed to the significant ringing in the detected ultrasound pulse-echo waveform, thereby reducing the bandwidth and deteriorating the axial resolution. The above-mentioned conductive epoxy block used as backing may also reduce the significant mass-loading effect. Additionally, a novel translucent matching layer is needed to improve the sensitivity and bandwidth of the transducer, such as our recently reported matching layer that uses translucent glass beads [70].

Dicing method: In order to maintain a common ground electrode, a subdicing method was employed in the current TUT-array fabrication. As a result, however, no beam steering or focusing capabilities could be exploited for ultrasound image formation, owing to the strong grating and side lobes when the beam is steered at an angle (Figures 5(e) and 5(f)). This subdicing method limited the ultrasound transmit beamforming to 0 degrees, and therefore a synthetic aperture beamforming method was employed to generate the B-mode ultrasound images. In the future, a fully diced TUT-array geometry will not only help reduce the crosstalk but also allow beam steering.

System electrical resistance: Lastly, the current TUT-array also suffered from high electrical resistance, mainly contributed by the transparent ITO electrodes. The high resistivity of ITO may increase losses and make electrical impedance matching with the driving circuit challenging. Strategies to improve the conductivity, such as Cr/Au or Cr/Cu coatings around the ITO [71], or new transparent electrodes with lower resistivity such as strontium niobate (SrNbO₃) [72], will be investigated in the future to improve the transducer performance.

Despite these limitations, the new TUT-array fabricated using transparent LN demonstrated clear advantages for realizing an integrated multimodal optical, US, and PA imaging device providing complementary structural, functional, and molecular contrasts and spatial resolutions. The TUT platform can be scaled to develop multimodal devices of different length scales, such as miniaturized endoscopes or wearable devices, and may therefore open new avenues for combined optical, ultrasound, and photoacoustic imaging in preclinical and clinical studies.
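To quantify the backing mismatch mentioned under "Sensitivity and bandwidth" above, a short estimate of the pressure reflection at the LN/glass interface, using the impedance values quoted there:

```python
# Pressure reflection at the LN / glass-backing interface, using the
# acoustic impedances quoted in the text (Z_LN ~ 34 MRayl, Z_glass ~ 11 MRayl).
Z1, Z2 = 34.0, 11.0                        # MRayl
R = (Z2 - Z1) / (Z2 + Z1)                  # amplitude reflection coefficient
print(f"R = {R:.2f}, |R|^2 = {R*R:.2f}")   # R ~ -0.51: ~26% of the energy reflects
# back into the LN instead of being absorbed, consistent with the observed ringing.
```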
The TUT-array was fabricated by dicing a rectangular double-side-polished transparent lithium niobate (LN) piece. The step-by-step fabrication process is illustrated in Figure 7(a). Step (1): a 0.5 mm thick 36° Y-cut LN wafer (Precision Micro Optics, Burlington, MA, USA) was used for the designed center frequency of 6.5 MHz. Step (2): 200 nm of ITO was deposited on both sides of the LN as a transparent and conductive electrode. Step (3): to form a common ground electrode, a one-side ITO-coated glass slide was hard-pressed onto the ITO-coated LN using a small drop of transparent epoxy (EPO-TEK 301, Epoxy Technologies Inc., Billerica, MA, USA) as the bonding agent. The custom-made press-bonding platform was lapped and polished to ensure a flat surface during the bonding process. Care was taken to ensure that the bonding pressure was enough to squeeze out excess epoxy without damaging the LN or the conductive glass slide. Step (4): a high-precision dicing machine (K&S 982-6, Giorgio Technology sales/service, Mesa, AZ, USA) with a 70 μm thick blade was used to dice out 64 elements on the LN substrate with 0.3 mm pitch. The dicing depth was kept at ~80% of the LN thickness (400 μm) so that the ground electrode remained intact. Step (5): we custom designed and fabricated a 50 μm thick polyimide-based flexible cable with 3 μm thick and 100 μm wide copper traces (70 channels with 0.3 mm pitch at the array end and 0.5 mm pitch at the other end, connected to the Vantage 256). The flexible cable was bonded to the TUT-array by anisotropic conductive film (ACF) bonding. The ACF bonding procedure is similar to that in a previously reported article [73]. In brief, an 18 μm thick and 1.5 mm wide ACF tape (AC-7206 U ACF, Hitachi, Tokyo, Japan) was tacked onto the cable at 1 MPa pressure and 90 °C for 10 seconds and then aligned to the transparent array elements. To maximize the transparent aperture and minimize the acoustic mismatching effect on the array elements, the polyimide flex cable was bonded to the elements with minimal overlap (~2 mm). Next, the TUT-array/ACF tape/flex cable assembly was pressed at 1 MPa and 180 °C for 25 seconds to allow trapping of the conductive particles between the conductors. Step (6): ground wires were connected to the ITO-coated glass plate electrode with the help of a small blob of silver epoxy (E-solder 3022, Von Roll Isola Inc., New Haven, CT, USA). Step (7): for protection and electromagnetic shielding purposes, a rectangular brass housing (12.7 mm × 38.1 mm × 10 mm) was used to surround the device and was connected to the ground electrode. A transparent epoxy (EPO-TEK 301, Epoxy Technologies Inc., Billerica, MA, USA) was then poured inside the brass housing to fill the kerfs and serve as the second backing layer. A glass slide was placed on top of the brass tube to form a leveled epoxy layer. (Figure 7 caption: Step-wise fabrication process of the TUT-array. Step 2: indium tin oxide was deposited on two sides of the lithium niobate as a conductive electrode. Step 3: the coated lithium niobate was bonded to a conductive glass slide. Step 4: the elements were created by dicing the lithium niobate to 80% depth. Step 5: the custom-made flexible cable was bonded to the array by anisotropic conductive film bonding. Step 6: wires were connected to the conductive glass slide as ground. Step 7: the array was then placed inside a brass housing and filled with transparent epoxy.)
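As a rough cross-check of these dimensions, the sketch below recomputes the thickness-mode resonance and the array geometry from the quoted numbers; the longitudinal sound speed of 36° Y-cut LN (~7,340 m/s) is an assumed literature value, not taken from the paper, and electrode/backing loading pulls the operating frequency toward the 6.5 MHz design.

```python
# Sketch: back-of-the-envelope design numbers for the TUT-array, derived
# from the dimensions given in the text. The LN sound speed is an assumption.
C_LN = 7340.0          # assumed longitudinal sound speed in LN, m/s
THICKNESS = 0.5e-3     # LN plate thickness, m
PITCH = 0.3e-3         # element pitch, m
KERF = 70e-6           # dicing blade width, m
N_ELEMENTS = 64

f0 = C_LN / (2 * THICKNESS)    # half-wavelength thickness resonance
element_width = PITCH - KERF   # element width left after dicing
aperture = N_ELEMENTS * PITCH  # total active aperture

print(f"Thickness-mode resonance : {f0 / 1e6:.2f} MHz")             # ~7.3 MHz
print(f"Element width            : {element_width * 1e3:.2f} mm")   # 0.23 mm
print(f"Active aperture          : {aperture * 1e3:.1f} mm")        # 19.2 mm
```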
Step (8): lastly, an 80 μm thick Parylene C film was deposited on the full device to serve as the matching layer and waterproofing layer. The matching layer improves the acoustic wave transmission efficiency, and the waterproofing layer is critical for this device because the ACF bonding tape is susceptible to humidity and can easily detach from the bond. The fabricated linear array was then connected to a 70-pin 0.5 mm pitch commercial interface board (FPC050P070, Chip Quik Inc., Ancaster, ON, USA) by ACF bonding the interface end of the custom-designed flexible cable (same process as described above, Figure 5(d)). The time allotted for each focused beam acquisition was 150 μs, leading to a total B-mode frame acquisition time of 9.6 ms. The data corresponding to this frame were then transferred to the host for reconstruction and display. After every three US frame acquisitions, the control sequence waits for the laser trigger, which occurs every 100 ms because a laser with a 10 Hz pulse repetition frequency was used. One PA frame is acquired at the laser trigger event with all 64 elements active in receive mode, and the received PA data are transferred to the host for reconstruction and display of (1) the PA frame, (2) the latest US frame in the buffer, and (3) the coregistered image of the latest US and PA frames. The overall US frame rate achieved was 30 frames per second (FPS), and the PA imaging frame rate was 10 FPS, limited by the laser pulse repetition frequency. A function generator was used in master mode to synchronize both the Vantage and the laser system by setting the required time delays, thus allowing proper interleaved, coregistered US + PA image formation. The corresponding timing diagram is shown in Supplementary Figure S5. The transmit pulses used for Doppler ultrasound acquisition consisted of three complete cycles, in contrast to the one cycle used for B-mode acquisition, to obtain higher Doppler sensitivity. All 64 transmit and receive channels were active for each plane wave acquisition of the Doppler ensemble. The velocity and power Doppler processing was asynchronous with respect to the Doppler ensemble acquisition and was performed using the Doppler processing routines provided by the Verasonics platform.
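The frame rates quoted above follow directly from the per-beam timing and the laser repetition rate; the sketch below reproduces that arithmetic. The actual scheduling is handled by the Verasonics Vantage sequencer, so this is only an illustration with names of our own choosing.

```python
# Sketch of the interleaved US + PA timing budget described above.
N_BEAMS = 64             # focused transmit beams per B-mode frame
T_BEAM = 150e-6          # time allotted per beam acquisition, s
LASER_PRF = 10.0         # laser pulse repetition frequency, Hz
US_FRAMES_PER_LASER = 3  # US frames acquired between laser triggers

t_us_frame = N_BEAMS * T_BEAM             # 9.6 ms per B-mode frame
laser_period = 1.0 / LASER_PRF            # 100 ms between PA events
us_fps = US_FRAMES_PER_LASER * LASER_PRF  # 30 FPS ultrasound
pa_fps = LASER_PRF                        # 10 FPS photoacoustic

# Time left per laser period for data transfer, reconstruction, and waiting.
slack = laser_period - US_FRAMES_PER_LASER * t_us_frame
print(f"US frame time: {t_us_frame*1e3:.1f} ms, US rate: {us_fps:.0f} FPS, "
      f"PA rate: {pa_fps:.0f} FPS, slack per laser period: {slack*1e3:.1f} ms")
```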
Multimodal Imaging Phantom Preparation

4.4.1. USPA Imaging Phantom. Four 50 μm diameter micrometal wires (W1-W4) were dyed using India ink to generate both ultrasound and photoacoustic contrasts. The micrometal wires were placed 5 mm apart from each other in the axial plane and 3 mm apart from each other in the lateral plane in an acrylic tank filled with a solution mixture of agar and silica beads. The silica beads and agar were mixed with water at 1% and 1.5% weight ratios, respectively. Here, the silica beads were used to mimic the background ultrasound speckle contrast. Then, a 1% agar solution was filled inside the 4.75 mm diameter cylindrical columns, next to the pencil leads, to serve as hypoechoic targets inside the tissue phantom for US imaging validation.

Blood Flow Doppler Phantom. A blood-mimicking fluid (BMF-US, Shelley Automation; nylon particles of 5 μm diameter, 1548 ± 5 m/s speed of sound, 1037 ± 2 kg/m³ fluid density, and 1.82% concentration) was circulated inside a polyethylene tube (outer diameter: 2.08 mm, inner diameter: 1.57 mm) using a peristaltic pump (model 3386, Cole-Parmer, Vernon Hills, IL, USA). The tube was partially submerged inside a tank filled with 1.5% agar solution. The tube was placed at 60 degrees to the imaging plane of the TUT-array, as shown in Figure 6(b).

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Supplementary Materials

Figure S1: comparison of experimental pulse-echo waveforms of typical elements of the array with simulated pulse-echo waveforms of a TUT-array element with different residual epoxy thicknesses. Figure S2: combined acoustic and electrical crosstalk measurement at frequencies between 3 MHz and 11 MHz for element #4. Figure S3: comparison of experimental electrical impedance results of typical elements of the array with simulated impedance analysis results of a TUT-array element with different residual epoxy thicknesses. Figure S4: schematic of the TUT-array connection to the Vantage 256 ultrasound data acquisition system.
Computational Identification and Characterization of New microRNAs in Human Platelets Stored in a Blood Bank

Platelet concentrate (PC) transfusions are widely used to save the lives of patients who experience acute blood loss. MicroRNAs (miRNAs) comprise a class of molecules whose biological role is relevant to the understanding of storage lesions in blood banks. We used a new approach to identify miRNAs in normal human platelet sRNA-Seq data from the GSE61856 repository. We identified a comprehensive miRNA expression profile, in which we detected 20 of these transcripts potentially expressed in PCs stored for seven days, whose expression levels were analyzed with computational biology simulations. Our results identified a new collection of miRNAs (miR-486-5p, miR-92a-3p, miR-103a-3p, miR-151a-3p, miR-181a-5p, and miR-221-3p) that showed an expression pattern sensitive to biological platelet changes during storage, confirmed by additional quantitative real-time polymerase chain reaction (qPCR) validation on 100 PC units from 500 healthy donors. We also identified that these miRNAs, like members of the let-7 family, could transfer regulatory information in platelets, for example by regulating the YOD1 gene, which encodes a deubiquitinating enzyme highly expressed in platelet hyperactivity. Our results also showed that the target genes of these miRNAs play important roles in signaling pathways, the cell cycle, the stress response, platelet activation, and cancer. In summary, the miRNAs described in this study have a promising application in transfusion medicine as potential biomarkers to measure the quality and viability of PCs during storage in blood banks.

Introduction

Platelet concentrate (PC) transfusions are widely used to save the lives of patients suffering from acute blood loss and are often used in supportive prophylactic therapy for patients with various hematological diseases [1]. This blood component requires special storage in blood banks, normally being stored for a maximum of five days at a temperature of 22 ± 2 °C with gentle and continuous agitation, because even under ideal storage conditions the PC can undergo modifications or degradations known as platelet storage lesion (PSL), a term introduced by Murphy et al. in 1971 [2] to describe the multifactorial mechanisms of this problem, which include the methods of collection, processing, storage, handling before or after collection, and expiration date [3][4][5]. In Brazil, the validity of the PC is three to seven days.

We performed the read mapping pipeline in genome and library mode, in which both modes used a common preprocessing step, mapping, expression profiling, and miRNA prediction [28]. For this, the 3' adapter sequence was trimmed and the sequence length distribution was analyzed. Sequences with a read length between 15 and 27 nucleotides (nt) were aligned with the human miRNA precursor sequences (pre-miRNAs) of miRBase, version 22 [29]. For this analysis, we applied the standard sRNAbench parameters as follows: (i) minimum length of adapter that needs to be detected, 10; (ii) alignment type, Bowtie seed alignment (GRCh38_p10_mp), seed length for alignment 20, minimum read count 2, minimum read length 15, allowed number of mismatches 1, maximum number of multiple mappings 10; (iii) MEAN quality filtering, 20; and (iv) annotation using miRNAs for species hsa (Homo sapiens).
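To illustrate the preprocessing just described (3' adapter trimming, the 15-27 nt length window, and the minimum read count of 2), below is a minimal Python sketch; it is not the sRNAbench implementation, and the adapter sequence shown is only an example.

```python
# Minimal illustration of the small-RNA preprocessing filter described above.
from collections import Counter

ADAPTER = "TGGAATTCTCGGGTGCCAAGG"  # example 3' adapter sequence (illustrative)

def trim_adapter(read, adapter=ADAPTER, min_overlap=10):
    """Clip the read at the first occurrence of >= min_overlap adapter bases."""
    for k in range(len(adapter), min_overlap - 1, -1):
        idx = read.find(adapter[:k])
        if idx != -1:
            return read[:idx]
    return read

def length_filter(reads, min_len=15, max_len=27, min_count=2):
    """Keep trimmed sequences of 15-27 nt observed at least min_count times."""
    counts = Counter(trim_adapter(r) for r in reads)
    return {seq: n for seq, n in counts.items()
            if min_len <= len(seq) <= max_len and n >= min_count}

reads = ["TGAGGTAGTAGGTTGTATAGTT" + ADAPTER] * 3  # a let-7-like 22 nt insert
print(length_filter(reads))  # {'TGAGGTAGTAGGTTGTATAGTT': 3}
```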
MicroRNA Expression and Quality Analysis between Different PCs

We assessed the level of miRNA expression by counts per million reads (CPM) using the following formula: CPM = (number of reads of one miRNA)/(total reads mapped to all annotated miRNAs) × 10^6. We defined a miRNA as expressed in a PC when its CPM was > 1 in more than 50% of the six PC samples. A PC-specific miRNA was defined as a miRNA expressed exclusively in one PC. The PC from the first day of storage (PC-1) was used in this study as the platelet quality control. Because PCs are stored in a blood bank for a maximum period that varies between three and five days, depending on the plasticizer in the conservation bag [6], our analyses considered the first day as the high-quality control of platelets, while the seventh day corresponded to the low-quality control of platelets [20]. In our computational results obtained with sRNA-Seq, we analyzed miRNA expression according to the methodology described by Pontes et al. [20] to check which PC bags were in good condition for transfusion. This method compares the relative expression of miR-127 and miR-320a [30]. When miR-127 presented lower expression (<80%) compared with miR-320a, storage lesions in this blood component were considered, suggesting that the PC bag be blocked from transfusion. The PC bag was considered suitable for transfusion when it met one of the following conditions: (i) expression of miR-127 > miR-320a, (ii) equal expression of miR-127 and miR-320a, or (iii) a difference of less than 20% in expression between these two miRNAs. In this study, the results generated for the expression levels of miRNAs were analyzed using computational biology simulations, employing unsupervised clustering and principal component analysis (PCA).

Validation of microRNAs by qPCR on 100 PC Units

Each PC used contained platelets from five healthy donors; therefore, validation was performed on material from 500 donors (250 men and 250 women in the young-adult age group, between 18 and 40 years old). All biosafety policy guidelines were applied in the laboratories involved, with the approval of the Ethics Committee (#194, 196, approval 17 October 2012). Seven days of storage were analyzed, with the first day used as the control for the following days, totaling five comparisons of expression levels for each of the six miRNAs identified, accounting for 30 analyses. As the validation occurred in 100 PC units, this totaled 3000 analyses. miR-191 was selected as an internal control for miRNA input and reverse transcription efficiency because it was the most highly expressed miRNA across the seven different days of storage [20]. All real-time quantitative PCR (qPCR) reactions were performed in triplicate for both miRNAs. miR-486-5p, miR-92a-3p, miR-103a-3p, miR-151a-3p, miR-181a-5p, and miR-221-3p were isolated from the 100 PC units with the mirVana™ miRNA Isolation Kit (Thermo Fisher Scientific, Waltham, MA, USA).
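A minimal Python sketch of the CPM normalization and the miR-127/miR-320a suitability rule described above follows; function names and read counts are illustrative.

```python
# Sketch of the CPM formula and the PC-bag quality rule described above.
def cpm(counts):
    """counts: dict miRNA -> reads mapped; returns counts per million."""
    total = sum(counts.values())
    return {mir: n / total * 1e6 for mir, n in counts.items()}

def pc_bag_suitable(expr, ref_low="miR-127", ref_high="miR-320a"):
    """Bag passes when miR-127 expression is at least 80% of miR-320a."""
    return expr[ref_low] >= 0.8 * expr[ref_high]

counts = {"miR-127": 7_500, "miR-320a": 10_000, "miR-486-5p": 120_000}
expr = cpm(counts)
print(pc_bag_suitable(expr))  # False: miR-127 < 80% of miR-320a -> flag the bag
```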
IsomiR Annotation Analysis

In this analysis, we detected miRNA sequence variants called isomiRs [31]. To detect isomiRs, we applied the following steps of the sRNAbench pipeline: (i) mapping the reads to the genome or pre-microRNA sequences using the Bowtie seed option [32], (ii) determining the coordinates of the mature microRNA, (iii) clustering all reads that mapped within a window of the canonical sequence of the mature microRNA (miRBase), and (iv) applying a hierarchical classification of the variants [27]. A subsequent analysis was then performed to detect multiple NTA sequence variants, which involve non-templated additions (the enzymatic addition of a nucleotide to the 3′ end, e.g., adenylation and uridylation): NTA(A), the number of reads with a non-templated A (adenine) addition; NTA(U), the number of reads with a non-templated U (uracil)/T (thymine) addition; NTA(C), the number of reads with a non-templated C (cytosine) addition; and NTA(G), the number of reads with a non-templated G (guanine) addition. The second class, length variants, includes 5′ and 3′ trimming and extension in the following forms: lv3pE, the number of reads with a 3′ length extension (longer than the canonical sequence); lv3pT, the number of reads with 3′ length trimming (shorter than the canonical sequence); lv5pE, the number of reads with a 5′ length extension (longer than the canonical sequence); lv5pT, the number of reads with 5′ length trimming (shorter than the canonical sequence); and mv, the number of reads classified as multiple length variants [28]. We organized the results of this analysis by ranking the isomiRs identified in PCs and the most expressed mature miRNAs according to individual read counts (CPM expression).
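As a simplified illustration of these isomiR classes, the sketch below labels a read relative to its canonical mature sequence. sRNAbench's actual hierarchical classification also checks the genomic template to distinguish non-templated additions from templated extensions, which this toy version omits.

```python
# Illustrative (simplified) isomiR labelling against a canonical sequence.
def classify_isomir(read, canonical):
    if read == canonical:
        return "canonical"
    if read.startswith(canonical):
        tail = read[len(canonical):]          # extra 3' bases
        if set(tail) == {"A"}:
            return "NTA(A)"
        if set(tail) <= {"T", "U"}:
            return "NTA(U)"
        if set(tail) == {"C"}:
            return "NTA(C)"
        if set(tail) == {"G"}:
            return "NTA(G)"
        return "lv3pE"                        # mixed-base 3' extension
    if canonical.startswith(read):
        return "lv3pT"                        # 3' trimming
    return "mv"                               # multiple/other length variants

canon = "TGAGGTAGTAGGTTGTATAGTT"              # let-7a-5p-like canonical sequence
print(classify_isomir(canon + "T", canon))    # NTA(U)
print(classify_isomir(canon[:-2], canon))     # lv3pT
```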
For the target prediction analysis, we applied an information retrieval feature known as target mining in the miRWalk predictor, version 3.0 (http://mirwalk.umm.uni-heidelberg.de/), which hosts the three aforementioned predictors, to obtain miRNA-gene interactions. These were organized in a table with various predictive metrics, considering binding probability (p > 0.95) [36], sites preferentially conserved within the 3' UTR (untranslated region), and validated interactions [37].

Construction of the microRNA-Gene Interaction Network

The subset files containing the predicted and validated interactions identified in the previous analysis were used to construct miRNA-gene interaction networks. For that, we filtered only the predicted and validated interactions of interest, removing genes whose symbols were annotated with more than one RefSeq identifier. We then constructed two miRNA-gene interaction networks from the predicted and validated interaction files (1) and (2), respectively, which were loaded and viewed in Cytoscape, version 3.8.0 (https://cytoscape.org/). The two networks were merged according to the tutorial (http://manual.cytoscape.org/en/stable/Merge.html) with an intersection operator to obtain a single miRNA-gene interaction network common to the two previous networks, formed exclusively by interactions that were both predicted and validated. The central region of the network with the densest connections was detected with CytoHubba [38]. We applied the maximal clique centrality (MCC) method to identify miRNA-gene interaction clusters.

Functional Enrichment Analysis

The target genes of the miRNA-gene interaction network from the previous analysis were used for functional enrichment with the Database for Annotation, Visualization and Integrated Discovery (DAVID, version 6.8) [39]. These target genes were organized in a list containing only the RefSeq identifiers for Homo sapiens. To avoid redundant terms, high-stringency classifiers with a similarity threshold and multiple linkage threshold equal to 0.50 were applied. The most significant terms for each gene were obtained from GO (Gene Ontology) functional annotation clusters [40] and from KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways [41], with significance noted as log10 p-value < 0.05.

Statistical Analysis

All statistics were performed using R (https://www.r-project.org). The averages obtained with the non-parametric tests were calculated using the compare_means function. Principal component analysis (PCA) with the FactoMineR package [42] was used for exploratory investigation of the multivariate miRNA data, using the prcomp function. The unsupervised clustering and heatmaps were built with the heatmap.2 function. The correlation coefficients of the isomiRs were calculated with the cor and rcorr functions. Correlations with p-value > 0.01 were considered not significant. The ANOVA test was used to compare means among the miRNA variants.

Data Preprocessing and Abundance of microRNAs in Platelet Concentrate

The sRNA-Seq analysis of PCs from the GSE61856 repository, performed with sRNAbench, showed that after data preprocessing more than 95% of the reads were recovered, of which 81-85% mapped to unique regions of the genome, with genomic coverage of 95% (Table 1). We observed that the number of read counts (RC) detected for miRBase hairpins decreased from the fourth to the fifth day, increasing only on the last day. The same pattern was observed for mature miRNAs, with more than 35% of miRNAs expressed on the last day of storage (PC-7) (Table 1). The abundance of miRNAs in stored PC bags described in Table 1 shows an increase in the number of reads until the third day of storage (PC-3), followed by a decrease in the PCs of the fourth (p = 0.023) and fifth (p = 8.9E-08) days, with an increase in the median expression exceeding the average normalized expression of 4.8 CPM (Figure 1A). As a direct consequence of two more days of storage, the PC on the seventh day (PC-7) presented the largest number of reads and 939 miRNAs, representing an increase of 2.5% observed after seven days of blood collection, confirmed by the decrease in the median. These results confirm that storage for more than five days in a blood bank causes a decrease in the levels of miRNA expression in the PC. In Figure 1(A), boxplots span the 25th to the 75th percentile; the vertical lines above and below each box mark the maximum and minimum values, the dots indicate outliers, and the horizontal line inside each box represents the median. The Kruskal-Wallis test (p-value < 0.001) was applied to compare the means among the six groups (** p < 0.01, **** p < 0.0001). Figure 1(B) shows a heatmap of the expression profile defined by the most abundant miRNA families in all PCs; the Z-score was the metric applied to infer the best clustering between miRNA families, with red-tending gradients representing families with lower Z-scores and blue-tending gradients those with higher Z-scores. Figure 1(C-H) shows donut charts ranking the miRNA family positions in all PCs. PC, platelet concentrate. The main columns of Table 2 list the gene family, coordinate string, and pre-microRNA.
UR, number of unique reads; RC, read count; RPM (lib), reads per million normalized by the total number of reads mapped to known microRNAs in the library; RPM (total), reads per million normalized by the total number of genome-mapped reads (genome mode) or the total number of reads in the analysis (sequence library mode); mature microRNA sequence, miR 5p-/3p-arms, arm sequence as defined by the miRBase annotation. The miRNAs are ranked by RC. We provide a panoramic view of the distribution of these miRNA families in all PCs (Figure 1C-H), where the predominance of the mir-486, let-7, mir-25, and mir-191 families can be seen, except on PC-3, where mir-423 was the most abundant. From this information, we also identified the 20 most expressed miRNAs. Table 2 shows the detailed information on the precursors (pre-microRNAs) and the mature miRNAs identified in this study from the annotation files. It shows a decrease in the expression level of these miRNAs until PC-5 and an increase after two more days, on PC-7. In Supplementary Materials File S1, we present detailed information on several canonical miRNAs, including specimens whose functions in platelets have been elucidated, in addition to a wide variety of miRNAs that have not been described in the literature. In all PCs, high expression levels of miR-486-5p and miR-191-5p were identified, with let-7i-5p the third most expressed miRNA until PC-3, followed by miR-92a-3p, miR-181a-5p, and the other miRNAs that also occupied prominent positions listed in Table 2. We show the ranking of miRNAs with normalized expression on the log2 CPM scale in two separate groups in the graphs depicted in Figure 2A,B. These two groups of miRNAs had their expression levels analyzed with the analytical method of Pontes et al. [20], which compares the relative expression of miRNAs in a PC bag. Our results show that miR-486-5p, miR-92a-3p, miR-103a-3p, miR-151a-3p, miR-181a-5p, and miR-221-3p decrease from the fourth to the fifth day of PC storage (Figure 3). We compared the average expression of these miRNAs in 100 PC units on the fourth day (PC-4), confirming that the storage time caused the decrease of these miRNAs, most accentuated for miR-486-5p, the most expressed miRNA in the sRNA-Seq data (Table 2). All six miRNAs increased their expression levels on PC-7 (Figure 3). To understand how increased storage time changed the miRNA profiles, we applied computational biology simulations to the sRNA-Seq and qPCR data. The results showed miRNAs grouped with similar expression profiles, both increasing and decreasing in expression (Figure 4A,C); these miRNAs may or may not share the same function in PCs with activated platelets. The PCA analysis (Figure 4B,D) was able to identify miRNA groups whose expression varied with increased storage time, as we observed miR-486-5p, miR-92a-3p, and miR-151a-3p at extreme points of the ellipses and miR-103a-3p, miR-181a-5p, and miR-221-3p within overlapping ellipses. The first principal component (PC1) explained 64.1% of the total miRNA expression variation in the qPCR experiment (Figure 4D). The hierarchical clustering (Figure 4A) shows the expression pattern of 20 miRNAs in six PCs identified with sRNA-Seq. The Z-score was the metric applied to infer miRNAs with similar expression levels.
Gradients with a red tendency represent miRNAs with a lower Z-score and gradients with a blue tendency those with a higher Z-score. The PCA graph (Figure 4B) shows the grouping of miRNAs with expression levels <80% (in red, relative to the miR-127-3p expression reference) and miRNAs with expression levels ≥80% in the PC (in blue, relative to the miR-320a-3p expression reference). The hierarchical cluster (Figure 4C) and PCA (Figure 4D) identify the associations of the qPCR-validated miRNAs miR-486-5p, miR-92a-3p, miR-103a-3p, miR-151a-3p, miR-181a-5p, and miR-221-3p related to storage lesions. In the PCA graphs, ellipses were predicted with a probability of 0.95. The X- and Y-axes show principal components 1 and 2. The first principal component (PC1) explained 94.5% and 64.1% of the total miRNA expression variation in the two experiments, respectively. In both PCAs, the divergences in the first two principal components reflect the differences in miRNA profiles, with a particularly distinct division between groups.

IsomiR Quantification

We identified a dominant expression pattern of mature miRNAs in the 5p-arm and 3p-arm, which was investigated systematically. Our results show that non-activated platelets (PC-1) presented more than 88.73% of specific miRNAs in the 5p-arm and only 11.27% in the 3p-arm (Figure 5A). We extended this count across all PCs in an attempt to find significant differences in 5p-arm and 3p-arm expression dominance. The data indicate that, under storage, the distribution shifted to 85.75% in the 5p-arm and 14.25% in the 3p-arm (PC-5) as compared with the previous PCs (Figure 5A). We measured the density of miRNA expression by the log2 ratio (5p-arm/3p-arm). A significant difference was found in the expression level, tens or hundreds of times greater in the 5p-arm than in the 3p-arm (Figure 5B). A large number of different miRNA sequence variants were identified in all PCs in a reproducible way (Table 3). In Table 3(1) we report the total isomiRs expressed, in read counts, in the form of NTA and length variants. In Table 3(2) we report the mean and standard deviation of these variants, in read counts, detected relative to the canonical sequences of the 20 miRNAs. The means were tested with ANOVA (p < 0.001) and were significant. On the one hand, the amount of the NTA(U) variant increased from PC-1 to PC-7 to 67.31%; on the other hand, the NTA(A) variant decreased from PC-1 to PC-7 to 30.42%. The NTA(C) and NTA(G) variants showed less relevant increases and decreases. Figure 5(C) shows a correlogram calculated to highlight the miRNAs most correlated with the NTA variants, with emphasis on miR-486-5p, miR-92a-3p, miR-320a-3p, miR-127-3p, let-7i-5p, let-7b-5p, miR-103a-3p, miR-151a-3p, miR-221-3p, miR-26a-5p, miR-423-3p, and miR-423-5p. Figure 5(D) shows a correlogram calculated to highlight the miRNAs most correlated with the length variants, highlighting miR-486-5p, miR-127-3p, let-7i-5p, miR-103a-3p, miR-423-3p, miR-181a-5p, miR-22-3p, let-7d-5p, miR-423-5p, and let-7g-5p. The miRNAs with a positive correlation for both variant types at the same time were miR-486-5p, miR-127-3p, let-7i-5p, miR-103a-3p, miR-423-3p, and miR-423-5p. In the correlograms, scales with a blue gradient are positively correlated, while gradients in red are negatively correlated. Blank cells indicate miRNA correlations with p-value > 0.01, considered not significant.
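As a concrete illustration of the arm-dominance measure above, the sketch below computes the 5p-arm percentage and the log2(5p/3p) ratio from illustrative read counts.

```python
# Sketch of the 5p-/3p-arm dominance measure described above.
import math

def arm_dominance(arm5p_reads, arm3p_reads):
    """Return the 5p-arm percentage and the log2(5p/3p) expression ratio."""
    total = arm5p_reads + arm3p_reads
    pct5p = 100 * arm5p_reads / total
    log2_ratio = math.log2(arm5p_reads / arm3p_reads)
    return pct5p, log2_ratio

# Illustrative counts reproducing a PC-1-like 88.73%/11.27% split.
pct, ratio = arm_dominance(arm5p_reads=887_300, arm3p_reads=112_700)
print(f"5p-arm share: {pct:.2f}%  log2(5p/3p): {ratio:.2f}")
# 88.73% and ~2.98: the 5p-arm is expressed roughly 8x more than the 3p-arm.
```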
The most abundant length variants were lv3pT, which decreased from PC-1 to PC-7 to 63.64%, and lv5pE, which increased from PC-1 to PC-7 to 30.11%. The lv3pE, lv5pT, and mv variants were less abundant. Sequence variants were correlated with miRNAs by calculating the correlation matrix (Figure 5C,D), which highlighted the following miRNAs: miR-486-5p, miR-127-3p, let-7i-5p, miR-103a-3p, miR-423-3p, and miR-423-5p (Table 3). These showed a positive correlation for both variant types at the same time. More detailed information on these variants is available in Supplementary Materials File S2.

Functional microRNA-Gene Interaction in the PC

The results of miRNA target gene prediction showed that the subsets of predicted and validated interactions share 275 (23.3%) of the total interactions (Figure 6A). The first miRNA-gene interaction network, constructed from file (1), presented 339 nodes and 1070 edges, while the second network, constructed from file (2), presented 127 nodes and 560 edges. The final network, merged from the first two, presents 108 nodes and 220 edges (Figure 6B). The network topology presents a central region formed by denser connections of miRNA-gene interactions, ordered by MCC mainly for let-7d-5p, let-7a-5p, let-7i-5p, let-7b-5p, let-7f-5p, and let-7g-5p, in addition to the YOD1 gene and compositions formed by miR-92a-3p, miR-423-5p, and miR-103a-3p. Most of the genes in the network in Figure 6B were enriched in GO, mainly in the molecular function category for protein binding (GO: 0005515), the cellular component category for nucleoplasm (GO: 0005654), and the biological process category for negative regulation of transcription, DNA-templated (GO: 0045892) (Figure 6C). These genes were also enriched with DAVID [39], generating a wide pathway panel with an important functional repertoire for the p53 signaling pathway, associated with signs of oxidative stress including DNA damage, activation of oncogenes, cell cycle arrest, senescence, and apoptosis. Pathways with an impact on cancer, the cell cycle, and platelet activation, and other pathways directly associated with cellular stimuli, signal transduction, cell signaling, and the stress response, were predicted (Figure 6D). The results of these analyses are in Supplementary Materials File S3.

Discussion

Aging is characterized by a functional decline in many physiological systems that can be triggered by environmental and endogenous stress, including telomere attrition, genomic instability, epigenetic changes, and loss of proteostasis, favoring cell damage and the progression of physiological aging [43]. Platelets are small anucleated cells that essentially originate from the fragmentation of pseudopods of the megakaryocyte cytoplasmic membrane in the bone marrow [44]. In a healthy person, platelets circulate in the blood for about seven to ten days and are then removed from circulation and destroyed in the spleen [45,46]. Similar aging also occurs in vitro: in blood banks in most countries, the widely used blood components supplied in the form of PCs are discarded after five days of storage [47]. Platelet storage in blood banks causes a decrease in the abundance of miRNAs due to shear stress and platelet activation, the two main factors responsible for the release of miRNA-rich microparticles (MPs) [11][12][13].
Studies of this nature have shown a relationship between miRNA profiles and subsequent platelet reactivity, suggesting an important role in post-transcriptional regulation during storage [7,24,48]. In this study, we identified the 20 most expressed miRNAs in PCs stored for seven days, with a genomic coverage of 95%. Specifically, in the PC with high-quality platelets (PC-1), we identified a total of 916 miRNAs (Table 1). However, we confirmed a 22.4% decrease in miRNA levels from the first to the fifth day, much more accentuated than the value found in our previous study [20], with the number of miRNAs increasing only after seven days of blood collection, at a rate of 2.5%, which is most likely a response to inhibit protein translation induced by the stress caused by aging beyond seven days of storage [20]. On the basis of these results, we emphasize that the use of different bioinformatics pipelines to analyze the same sequencing data generates results that can differ substantially. Indeed, some studies have highlighted the urgent need to ensure that the bioinformatics pipelines used for next-generation sequencing (NGS) analysis undergo better validation, especially for applications in translational genomic medicine [49]. In our data, we observed the influence of storage on the unequal distribution of the abundance of miRNA families across all PCs (Figure 1). For example, the largest family, mir-486, showed declines in expression, losing the most abundant position on PC-3 to the mir-423 family. The mir-191 family was replaced by members of the let-7 family on PC-2, which are very common in platelets [50]. Probably, post-transcriptional changes influenced the biogenesis and stability of miRNAs during storage, as has been shown in studies that used molecular changes in DICER1 to reduce the number of miRNAs that strongly regulate platelet reactivity [51,52]. The measurement of miRNA expression in PCs is a variable that we have demonstrated to be associated with the quality of these blood components. When we evaluated the expression levels of miRNAs in PCs using computational methodologies, we concluded that, in clinical practice, miRNAs are a very useful tool for testing PC bags that are close to their expiration date during storage in a blood bank. In this study, we found a large number of candidate storage-damage biomarker miRNAs that could replace the miR-127, miR-191, and miR-320a miRNAs found in our first study in clinical trials, especially miR-191, which has been used as an internal control for qPCR validation analysis [20]. In addition, we selected six new miRNAs among the most expressed for further validation by qPCR. Relative quantification indicates decreased expression levels of miR-486-5p, miR-92a-3p, miR-103a-3p, miR-151a-3p, miR-181a-5p, and miR-221-3p from the fourth to the fifth day (Figure 3). We also highlight that this decrease was most accentuated for miR-486-5p, the most expressed miRNA in the sRNA-Seq data (Table 2). Additionally, all six miRNAs increased their expression levels on PC-7 (Figure 3). We applied computational biology simulations (hierarchical clustering and PCA) to the data generated by sRNA-Seq and qPCR, which revealed how the increase in platelet storage time caused the changes in miRNA profiles confirmed in the validation. These computational methodologies have resulted in more accurate identification of miRNAs located in different groups based on the days of storage [53].
In our previous study, we used these methodologies to identify changes in the expression profiles of 14 miRNAs that were associated with PSL [21]. In the current study, we confirmed that the changes in the profiles of the new miRNAs correlate with the instability of the half-life of these transcripts on the fourth day, which coincides with the time of onset of PSL. For example, if a tested PC bag confirms that the expression of the miRNAs miR-151a-3p, miR-103a-3p, and miR-221-3p is <80% of the expression of miR-486-5p, miR-92a-3p, and miR-181a-5p, storage lesions are indicated. miRNA stability varies widely, with half-lives of ~1.5 h, more than 13 h, and up to 48 h in human biofluids [54,55]. Measuring the relative levels of miRNAs in PCs is subject to some challenges that need to be taken into account, because the relative stability of miRNAs has implications for their ability to transfer regulatory information: they are very short and have highly divergent sequences, with a wide variation in GC content that can favor different hybridization properties among different miRNA sequences [55]. In a blood bank, the analytical method for testing PCs can be implemented quickly and at low cost to test PC bags stored for more than four days which still contain physiologically normal platelets. The durability of platelet physiology depends on the donor's health status; suboptimal health status (SHS) is considered a subclinical and reversible stage of chronic disease. Individuals with SHS can have a progressive accumulation of senescent cells and a relative shortening of the telomeres that produces early biological aging of platelets [56][57][58]. The increase of miRNA levels in the PCs suggests that after platelet activation the miRNAs stabilize within the circulating MPs for transport to their sites of action [7,13,59], undergoing changes in their profiles and using a selective platelet packaging pathway for MPs [13,60]. For example, we explain the expression changes in several PCs based on the dominant expression pattern of the 5p-arm over the 3p-arm (Figure 5B). The density measurement confirmed a significant difference in expression level, tens or hundreds of times greater in the 5p-arm than in the 3p-arm, as has been observed in other studies [61][62][63]. Our global estimates of miRNA expression across all PCs pointed to a decline in this dominant pattern from PC-2 to PC-5, which increased only on PC-7 (Table 3). We found several non-templated and length variants positively correlated with the newly reported miRNAs (miR-486-5p, miR-92a-3p, miR-103a-3p, miR-151a-3p, miR-181a-5p, and miR-221-3p), which have not yet been reported in other platelet miRNomes (complete sequencing of miRNAs) and which add quality and innovation to the analytical test mentioned in this study. Our analysis of functional miRNA-gene interactions pointed to the existence of a molecular regulation mechanism in platelets, inferred from the topology of the interaction network formed by the newly reported miRNAs (miR-103a-3p, miR-423-5p, and miR-92a-3p) and conserved miRNAs of the let-7 family interacting with the YOD1 gene, which encodes a deubiquitinating enzyme that is highly expressed in platelet hyperactivity [56]. We also identified the functional roles of significant target genes in signaling pathways, the cell cycle, the stress response, platelet activation, and cancer.
In the future, our investigations should be repeated with a larger number of samples, including the sixth day of storage (not covered by the current data). Fresh platelets (PC-0) should also be studied to obtain a broader profile of expression variation across storage days. The option to extend the study beyond seven days of storage would also be an advantage. In summary, our results have a promising application in transfusion medicine, because we describe a new collection of miRNAs (miR-486-5p, miR-92a-3p, miR-103a-3p, miR-151a-3p, miR-181a-5p, and miR-221-3p) that show an expression pattern sensitive to biological platelet changes during storage. These miRNAs could be applied in blood banks as potential biomarkers to measure the quality and viability of PCs during storage.
Deep Learning Based Real-Time Body Condition Score Classification System

The number of animals worldwide is increasing day by day to meet growing animal protein needs. As animal production grows, the yield obtained per unit area can be increased by increasing the number of animals. In dairy cattle farms, it is necessary to group the animals according to their body condition score (BCS) and to care for and feed them appropriately at certain times. Under normal conditions, these processes are conducted by animal caregivers or by experts visiting the enterprise. BCS ratings conducted by experts on farms based on visual examination may give unreliable results and may include misinterpretations. Therefore, technology-supported systems are required. This study aims to predict BCS, the most important indicator of proper feeding in dairy cattle. In addition, by adapting the designed system to simple, fast, and user-friendly mobile software, tests can be performed in enterprise environments in a shorter time. Deep learning models, which have been used frequently in computer science in recent years, were used in the design of the system. The CNN model, trained on these data with a 94.69% success rate, was converted into a mobile-friendly format for real-time tests. The designed mobile software aims to enable real-time tests and provide easy access for dairy producers. In order to increase the success of the CNN architecture, pre-trained networks were utilized. The VGG19 pre-trained network, whose performance has been demonstrated in previous studies in the literature, was used in the model design. The 78.0% performance obtained in the study indicates that pre-trained CNN architectures based on deep learning are successful for the real-time BCS classification problem.

I. INTRODUCTION

In dairy cattle, care and nutritional requirements may change through the early, mid, and late lactation periods and also during the dry period. Therefore, animals should be grouped according to these stages in dairy cattle farms. Many enterprises have dairy cattle that are too fat or too thin from birth to the dry period (cessation of milk production). Failure to identify these animals at the right time increases the expenses of dairy cattle enterprises due to disease treatments, losses in milk production, and decreased fertility rates. In order to eliminate these problems, the animals are inspected regularly by experts and scored. This scoring system involves visual inspection of each animal and is called the body condition score (BCS). BCS is a method based on evaluating the fatness or thinness of cattle according to a five-point scale ranging from 1 to 5. Here, a score of 1 indicates very thin cattle, while a score of 5 indicates very fat cattle [1]. At birth, a score of 1 is classified as very thin, 2 as thin, and 5 as very fat, while 3 is the expected average score. Figure 1 shows the explanation of the BCS scores [2]. BCS offers many advantages for dairy cattle enterprises. Since the cows are scored between 1 and 5 in different periods, those with the desired BCS values exhibit a lower prevalence of metabolic diseases. In addition, reproductive disorders that may arise among animals that are too fat or too thin can be prevented.
Unnecessary feed consumption can be avoided and an increase in yield can be achieved with BCS, a unique indicator of whether the animals' needs are fully met [3]. BCS also indicates the energy balance of dairy cattle. It is sometimes necessary to bring animals with undesirable BCS levels to the desired levels through targeted care. This process is carried out by grouping the animals through observation, in line with the individual skills of experts or staff experienced in this field. In this sense, the execution of these processes on farms may be neglected from time to time, given the cost of permanently retaining expert staff in the enterprises and the lack of experience of the staff [4]. Kellogg separated the body condition score in dairy cattle according to the animal's periods. According to Table 1, the BCS, which is between 3.5 and 4 in the dry period, is 2.5 to 3 at one month postpartum [5]. Kellogg stated that BCS is a useful tool in managing the herd's nutrition and daily productivity, and that improving the nutritional level can improve animal health, reproductive performance, and milk productivity. However, thin cows in negative energy balance in the herd are unable to perform at maximum capacity, and cows that are too fat are prone to metabolic problems. Kellogg stated that BCS scoring of the herd could allow dairy producers to adjust nutritional status more accurately [5]. In a different study published by Heinrichs and Ishler, it was stated that the BCS value should be 3+ to 4− in the calving period, and scores below 3+ indicate that the cows receive an inadequate energy supply in late lactation and/or the dry period. The researchers indicated that failure to replenish energy reserves during this period would limit milk production during lactation. They also stated that scores above 4− indicate that the energy intake in late lactation and/or the dry period is very high. They proposed separating such cows from the milking herd in the dry period and feeding them an adequate but not excessive protein, mineral, and vitamin additive and a low-energy ration [6].

II. RELATED WORKS

In recent years, various initiatives have been proposed to estimate BCS automatically with technological advances. In the early 2000s, researchers tried to determine BCS with classical image processing methods, whereas deep learning-based approaches have been used since 2015. The images used to address the BCS problem can be categorized as 2D and 3D; almost all of them are professionally taken from the back of the animal. In addition, real-time solutions have been presented in the literature, albeit in limited numbers. The systems designed in studies conducted until 2016 were based on classical image processing pipelines, generally determining anatomical and shape-based surfaces and turning them into feature sets. In some studies, feature reduction was also conducted on these features using various methods. Real images taken from the back of the animals were used as datasets in these studies. As deep learning methods began to be preferred by researchers, the design methods of automatic BCS detection systems also changed.
A literature review is given in Table 2. When the literature on computer-aided automatic or semi-automatic BCS estimation systems is examined, it is seen that images are generally obtained from the back region of the animal, owing to its large number of anatomical regions and shape features, and BCS estimates are made from these images. In addition, many of the images in these studies were taken using 3D imaging devices, and in the studies using 2D imaging, the devices were intended for professional imaging. Therefore, most of these applications are costly, difficult, and inflexible in practice. In this study, the developed mobile program is intended for animal husbandry applications, and care was taken to ensure that the imaging method and devices are accessible so that the study can be applied in daily life. For this reason, images showing the back of the animal, obtained with a mobile device that everyone can access, were used as the dataset for BCS determination. In addition, a system using trained pre-trained CNN networks, supported by mobile application software, was implemented for testing. The study addresses the needs of dairy producers both in theory, by measuring the effect of CNN architectures on the BCS problem, and in practice. The study consists of four parts, and the rest of the paper is organized as follows: the second section presents the dataset and the pre-trained CNN architectures used in the study; the third section describes the designed software and its usage; the last section discusses the obtained results and their contribution to the literature.

III. MATERIAL AND METHOD

In the study, the performance of pre-trained CNN architectures, a sub-branch of deep learning, was tested in determining BCS automatically in dairy cattle. In addition, to facilitate the implementation of the system, a mobile application that users can easily access was developed, and the tests were carried out in this environment. The stages of the designed system are shown in Figure 2. Machine learning is used today in many areas, such as identifying objects in images, converting speech to text, matching news items, posts, or products with users' interests, and selecting relevant search results. Increasingly, a class of methods called deep learning is utilized in these applications. Since conventional machine learning is limited in processing the raw form of data, various data preprocessing, segmentation, feature extraction, and feature selection and reduction procedures are required before it can be applied. In contrast, deep learning methods learn from the data through computational models composed of multiple processing layers with multiple levels of abstraction. To update the parameters in each layer, the backpropagation algorithm is used to determine how much the internal parameters should change relative to the representation in the previous layer. Thus, with deep learning, the complex structure in large datasets can be discovered. Deep convolutional networks have led to breakthroughs in processing images, video, speech, and audio, while recurrent networks have helped to solve sequential data such as text and speech [17].
Although the concept of deep learning architectures goes back to the first general learning algorithm for supervised deep feed-forward multilayer perceptrons, published by Ivakhnenko and Lapa in 1965, the first successful deep learning architecture in the literature is the ''LeNet'' architecture developed by Yann LeCun in 1989 [18], [19]. LeCun worked on classifying handwritten digits (MNIST) using the LeNet architecture in his studies until 1998. In the LeNet network, the lower layers consist of alternating convolution and max-pooling layers, and the top layers correspond to a traditional fully connected MLP [19]. Figure 3 shows a standard LeNet architecture. Although recurrent neural networks and the LSTM models of Hochreiter and Schmidhuber were proposed around the time of the LeNet studies, deep neural networks were not preferred between 1990 and 2000 due to the high cost of computing. Instead, researchers preferred presenting input data, processed to a certain level, to models such as SVMs and standard ANNs, which allowed problems to be solved faster [20]. With the increase in CPU performance and the emergence of GPUs after 2000, the applicability of deep neural networks started to be discussed. In 2006, Geoffrey Hinton showed how to train a deep multi-layer feed-forward network, which triggered the modern idea of deep learning [22]. After that study, interest in deep architectures increased and research focused in this direction. In the following years, many deep architectural designs were realized, and the next milestone was the high classification success of Hinton and his team in the ImageNet competition in 2012. With the network now known as AlexNet, the classification error on ImageNet, which had been 26.1%, was reduced to 15.3% by Hinton and his team [23]. With the architectures developed in the following years, error rates have been reduced to much lower levels. Figure 4 shows the distribution of ImageNet classification performance across architectures by year. After 2015, pre-trained deep architectures surpassed human-level performance on the ImageNet dataset. Owing to this performance, researchers and companies have increased the use of deep architectures in their studies, and today deep architectures are used widely in almost all fields. The developed architectures have had different success rates on various problems over the years. Therefore, it has become possible for developers to adapt different architectures to their own problems. The process of taking a pre-trained architecture in this manner and training it with developer-specific data is called transfer learning in the literature. Since transfer learning builds upon the success of an existing architecture, it is more likely to yield successful results than a CNN trained from scratch. Transfer learning process steps are shown in Figure 5 [25]. The current major CNN models in the literature can be listed as follows: LeNet (1998) [19], AlexNet (2012) [23], GoogleNet (2014) [26], VGGNet (2014) [27], ResNet (2015) [28], Inception (2016) [29], NasNet (2017) [30], etc. When choosing a network to apply to a problem, it is necessary to consider the different features of the pre-trained networks. The most important parameters in this choice are network accuracy, speed, and size; choosing a network often requires balancing these features. A.
CONVOLUTIONAL NEURAL NETWORK

A Convolutional Neural Network (CNN) is designed to process data that come in the form of multiple arrays, such as a color image composed of 2D arrays containing pixel intensities in three color channels [22]. Figure 6 shows the structure of a typical CNN architecture. The layers included in standard CNN architectures are convolutional, non-linearity, pooling, flattening, and fully connected layers.

Convolutional Layer: This layer generates feature maps by sliding kernels of different sizes (feature detectors) across the input image. The kernel is smaller than the input image, and parameters such as weights and biases are shared between adjacent pixels of the image [31].

Non-Linearity Layer: This layer is applied to introduce non-linearity into the system, and various nonlinear activation functions such as logistic, tanh, and ReLU are used. The weighted sum of the linear net input is passed through an activation function for nonlinear transformation [32].

Pooling Layer: This layer is used to combine data from the convolution layer. There are various types, such as max, min, and average pooling, for data reduction. Depending on the type of problem and the amount of data, the number and type of pooling operations can be changed [33].

Flattening Layer: It ensures that the meaningful data obtained from the network are arranged before entering a classifier. When data in 2D form are used as input to classifiers such as ANNs or SVMs, they must be converted to a one-dimensional vector; this is why the layer is called flattening [34].

Fully Connected Layer: It is a standard machine learning layer. It can include various classification methods according to the structure and type of the network, and it has the same number of outputs as the number of classes in the problem. It can include models such as artificial neural networks (ANN) and support vector machines (SVM) [35].

B. VGG NET

VGGNet is a homogeneous architecture that takes its name from the developer group (Visual Geometry Group, VGG) at Oxford University. According to the number of convolution layers, there are two variants: VGG16 (41 layers, 16 of which are convolution layers) and VGG19 (47 layers, 19 of which are convolution layers). It accepts a 224 × 224 × 3 input image and uses 3 × 3 kernels. There are two fully connected layers with 4096 outputs each. In the final layer, there is a fully connected layer with 1000 channels for the 1000 classes of the ImageNet classification problem, followed by SoftMax elements showing the class suitability values. When performing transfer learning, it is enough to make changes to this final layer [27]. An average of 73% success is achieved in ImageNet classification using this architecture. The architecture of the model is shown in Figure 7.
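To make the transfer-learning step of Section III-B concrete, below is a minimal Keras sketch in which the 1000-class ImageNet head of VGG19 is replaced with a five-class BCS head. The layer sizes follow the VGG19 description above; the exact head used in the paper (Table 3) may differ.

```python
# Sketch: VGG19 transfer learning with a replaced 5-class BCS head.
import tensorflow as tf

base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),                       # flattening layer
    tf.keras.layers.Dense(4096, activation="relu"),  # fully connected layers
    tf.keras.layers.Dense(4096, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # BCS scores 1-5
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```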
IV. EXPERIMENTAL STUDIES ON THE DEVELOPED APPLICATION

In the study, a mobile application was developed to determine and automate BCS values, the most important indicator of whether an animal's requirements are met in different periods (early, mid, and late lactation and the dry period), in hybrid black pied dairy cattle. BCS estimation from real-time images on mobile devices was carried out with the help of CNN architectures, one of the deep learning models frequently used nowadays. In addition, pre-trained CNN architectures were used to increase the performance of the system, and the VGG19 network was preferred among the pre-trained networks [4], [16]. In the design of the system, the Python programming language and TensorFlow deep learning libraries were used, and a user-friendly application that can operate on mobile devices with the Android operating system was designed to test and implement the system. In the first stage of the study, the dataset to be used for the BCS estimation problem was prepared. The images were evaluated by an expert zootechnician. The images of the dairy cattle (black pied and hybrid cattle) were taken in different periods at 10 different dairy cattle farms in the Nigde and Adana regions from a distance of approximately 1.5-2 m. In total, 505 different images were obtained with a mobile device, with resolutions ranging from 1024 × 768 to 3042 × 4032 pixels (the following image preprocessing techniques were performed automatically by TensorFlow: normalization, quantization, image cropping, resizing to 224 × 224, rotation, and basic filters). In this way, resolution limitations in the system were avoided, and the system can operate at any resolution. The BCS scores of the animals were determined by the expert using these images, and this dataset was used in the training of the CNN architectures. However, before training, the images were preprocessed and the regions of interest (ROI) to be used in BCS estimation were cropped. The dataset was divided into classes at a granularity of 1.0 score units. Of the 505 images in total, 40 were Score 1, 161 were Score 2, 221 were Score 3, 63 were Score 4, and 20 were Score 5. The obtained images and their preprocessed versions are shown in Figure 8 for each score value. In the second stage of the study, the dataset was trained. In preliminary training on the dataset, the VGG19 network achieved a higher success rate; therefore, the VGG19 network was used in training [4], [16]. The Python programming language and the TensorFlow and Keras libraries were used to train the network, since they are widely used in the literature, have a mature infrastructure, and are easy to deploy to mobile systems. Table 3 shows the VGG19-based CNN architecture used in the study. The computer on which the training was performed has a GTS450 GDDR5 1 GB 128-bit Nvidia GeForce DX11 GPU with an Intel i7-2600 3.40 GHz processor and 18 GB RAM, running under the 64-bit Windows 10 operating system. The designed CNN was trained for 200 epochs with the preprocessed data (the 505 original images were augmented to 2020 images using techniques such as rotation, horizontal flip, and vertical flip), and this process lasted 2 hours and 45 minutes in total (using the following training parameters: optimizer = 'adam', loss = 'categorical_crossentropy', max_epochs = 200, metrics = 'accuracy', BATCH_SIZE = 32). The accuracy and loss values during the training phase are shown in Figure 9. A standard confusion matrix (CM) is shown in Table 4 [37]; the values in the table are True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). The CM resulting from the training of the system with the VGG19 pre-trained network is shown in Table 5.
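The training configuration quoted above (adam optimizer, categorical cross-entropy, 200 epochs, batch size 32, rotation and flip augmentation) can be sketched as follows. The directory layout and rotation angle are illustrative assumptions, and `model` refers to the sketch in the previous section.

```python
# Sketch of the stated training configuration with the stated augmentations.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,    # normalization
    rotation_range=20,    # rotation (angle is an illustrative choice)
    horizontal_flip=True,
    vertical_flip=True)

train = datagen.flow_from_directory(
    "bcs_dataset/train",  # hypothetical folder with score_1 ... score_5 subdirs
    target_size=(224, 224), batch_size=32, class_mode="categorical")

model.fit(train, epochs=200)  # `model` from the previous transfer-learning sketch
```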
For this reason, mobile software was developed for real-time tests in the enterprise environment. The trained CNN was integrated into the software and a user-friendly design was realized. Sample screenshots of the developed mobile software are given in Figure 10. The software was used for the tests in an enterprise by the expert zootechnician. The expert used the mobile application to score the animals in the enterprise and then recorded the score information based on his own expertise. Of the 50 images in total, 5 were Score 1, 10 were Score 2, 25 were Score 3, 10 were Score 4, and 5 were Score 5. Table 6 presents the confusion matrix obtained from the expert evaluation and the software evaluation during the test process. V. RESULTS AND DISCUSSION A review of the literature shows that researchers have conducted many studies on the BCS estimation problem, using images of the back, pins and rump of cows and a variety of models. Classical image processing methods were used between 2000 and 2015, while deep neural networks have been used in the last five years. Researchers have tried to improve performance on the problem, especially through the cameras and imaging methods used in the image acquisition phase, but the practical adoption of such studies has been very limited. In addition, studies have generally used back and profile images of the animals; since this requires taking images in a very controlled way, it is not very useful in practice, and these studies have not gone beyond academic work. In several studies conducted in recent years, attempts have been made to determine BCS using deep architectures with a single image taken from the back of the animal. In this study, computer-aided software was developed to estimate BCS values using real-time images, in order to reduce the error that may arise from the expert's subjective interpretation when determining BCS in dairy cattle. Considering the circumstances of animal husbandry enterprises, the software was designed to operate on mobile devices for ease of use. The deep neural network architecture proposed in the study was designed using a pre-trained VGG19 network. Unlike other studies in the literature, a single image of the animal was taken from the back in real time and the score value was estimated from this image. In addition, thanks to mobile support, the study can readily be adapted to field applications. In the study, animal images taken from different enterprises were first sorted into classes by the expert zootechnician: 505 images in total were assigned to classes in the range 1-5. The regions of these images used by the expert to determine BCS were cropped and labeled with the corresponding BCS value. Afterwards, transfer learning was carried out and the data were trained on the VGG19 network. At the end of training, the system achieved a classification success of 94.69%. The trained network was then transferred to the mobile software designed for real-time testing. For the tests, the expert zootechnician used the mobile software in enterprises with animals that had not previously been introduced to the system. As a result of the tests, a 78% success rate was achieved.
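For running the trained network inside an Android application, a common route is to export the Keras model to TensorFlow Lite; the paper does not state the exact export procedure used, so the sketch below, including the model variable and file name, is illustrative only:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # trained Keras model
converter.optimizations = [tf.lite.Optimize.DEFAULT]         # optional quantization for mobile
tflite_model = converter.convert()

with open('bcs_vgg19.tflite', 'wb') as f:  # hypothetical file name
    f.write(tflite_model)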
When the test results are examined, it is seen that Score 3 is successfully distinguished in the tests. In addition, the misclassifications in the other scores also tend toward Score 3. This is one of the limitations of the system, and the likely reason is the uneven distribution of data among the classes in the training set. If the amount of data is increased for each class, the classification success in the testing phase is also likely to increase. In addition, it was found that the user-friendly mobile software was easily used by the expert. With widespread use of this system in enterprises, it will be possible to provide good care and feeding conditions and to take precautions against potential care- and feeding-related problems, so that animal health, milk performance, fertility rate, etc. improve, increasing the profitability of the enterprise. The results of combining deep architectures with mobile-based designs for BCS determination from real-time images are quite promising. The success of the system could be increased by changing the pre-trained CNN architectures used; however, considering that the CNN must run on a mobile device, the accuracy of networks with more layers may be inversely proportional to the real-time operating performance of the system. The size of the training data, which directly affects classification performance, is also important. In future studies, tests can be performed using different CNN architectures, different mobile platforms, and larger training sets. ACKNOWLEDGMENT The author would like to thank Dr. Mustafa Boğa for his contribution in classifying the images required for training the system and in performing the tests of the system.
Deep Level Saturation Spectroscopy We review "Deep Level Saturation Spectroscopy" (DLSS) as a nonlinear method to study deep local defects in semiconductors. The essence of the method is determined by the processes of sufficiently strong laser modulation (up to saturation) of the quasistationary two-step absorption of probe light via deep levels (DLs). DLSS is based on nonequilibrium processes of optically induced population changes of deep levels, which lead to changes in impurity absorption. The method allows the separation of the spectral contributions from different deep centers (even in the case of their full spectral overlap) on the basis of differences in their optical activity (photon capture cross-sections) and in their electroactivity (carrier capture coefficients). As shown, DLSS allows direct determination of the main set of phenomenological parameters of deep local defects (cross-sections, concentration, binding energy, etc.), their content and their energy position in the band gap. Some further important aspects of DLSS are also shown: the possibility of connecting the measured data directly to the local centers participating in radiative recombination, and the possibility of directly studying the phonon relaxation processes in the localized states of deep defects. Introduction The optoelectronic properties of semiconductors, and accordingly their applications, are significantly determined by defects created by deviations from stoichiometry and by the presence of impurities, including uncontrolled ones. The well-known "Deep Level Transient Spectroscopy" (DLTS) [1] is a sensitive method to study deep levels (DLs) and is widely used for measurements of electrically active defects in semiconductors. It is based on temperature scanning of the capacitance transient of a reverse-biased barrier and allows determination of the activation energy and concentration of the traps, as well as the thermal emission rates and the capture cross-sections. There are many varieties, such as "Deep Level Optical Spectroscopy" (DLOS) [2] or "Acousto-electric DLTS" [3]. The resolution of DLTS can be substantially increased when the so-called "Laplace DLTS" is used [4,5]. The development of nonlinear optics has opened unique possibilities, particularly for two-photon spectroscopy [6-10], as a tool for investigating the energy structure of bands, elementary excitations, and local defects in crystals. Intrinsic interband two-photon absorption (TPA) is known to be a fundamental process involving the native band states of the crystal as initial and final states, as well as intermediate virtual states of a TPA event [6-10]. It is coherent and inertialess and does not saturate. Many defects that are resonant with the exciting radiation, which changes their population, give rise to incoherent two-step absorption via such deep levels (DLs) as real intermediate states [11-15]. Laser modulation of TSA consists of saturable and inertial processes, and their spectral signatures vary specifically with the competition between different DLs.
Below we present the theoretical basis of another nonlinear spectroscopic technique, which we have named "Deep Level Saturation Spectroscopy" (DLSS). It is based on nonequilibrium processes of optically induced changes in impurity absorption and offers unique opportunities for the study of deep levels in semiconductors [11-15]. Related studies were also carried out in [16,17], but were limited to qualitative interpretation of the experimental data. In [18,19] we reported a direct investigation of local electron-phonon interaction effects by the DLSS method in ZnS:Cu crystals. It was shown experimentally that DLSS is an exceptionally efficient technique for direct studies of the phonon relaxation processes in the localized states of deep defects. The DLSS method can be divided into two complementary techniques, differing in the character of the modulation of the deep level population. First, in Section 3, we discuss the effects of "direct modulation" by photoionization of DLs, where defects are characterized on the basis of their optical activity. Then, in Section 5, we consider the effects of "indirect modulation" of the DL population through the capture of nonequilibrium carriers generated by interband two-photon absorption. As a result, additional parameters of deep levels, based on their electroactivity, are determined. The key idea of the method is based on the "light impact" effect: the influence of short and sufficiently powerful laser light pulses, acting as a "blow of photons" on the ensemble of DLs, completely emptying or filling their states (population saturation) during the modulation (Figure 1). The term "light impact" was introduced earlier in studies of defects via photoconductivity dynamics [20,21]. It is natural to assume that during the moments of the laser light "impact" (with photon flux density I_L(t)), only photoionization of the DLs (with cross-section σ_cD(ω_L)) by the laser photons ℏω_L is essential, changing their population m(t) according to a simple rate equation, written here for the "donor → conduction band" transitions. From it we obtain the depopulation of the donor states and, accordingly, the induced changes in the absorption of the probing light flux, shown by blue arrows in Figure 1 (see Eq. (6)). Thus, at certain intensities I_L of the modulation, determined by the laser-photon capture cross-section σ_cD(ω_L), further intensity growth does not increase the induced absorption of probe light ℏω, since the centers are emptied completely by the intense modulation. This is illustrated by the fan of saturating intensity dependences (Eq. (3)) shown in the inset of Figure 1. The character of the saturation in Eq. (3) is determined only by the value of the DL photoionization cross-section σ_cD(ω_L), that is, by the "optical activity" of the centers: the larger this parameter, the more quickly full emptying of the DLs is reached, and consequently the saturation of the induced absorption Δα. This phenomenon enables direct measurement of this initial parameter σ_cD(ω_L) for DLs. Below we show that the discussed saturation effects enable us to determine the full set of DL parameters. This specificity also motivates our choice of abbreviation: DLSS, deep level saturation spectroscopy.
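One natural explicit form of this rate equation and its solution, a sketch consistent with the quantities defined above (only σ_cD(ω_L) and I_L(t) enter in this regime), is

\[
\frac{dm}{dt} = -\,\sigma_{cD}(\omega_L)\,I_L(t)\,m(t)
\quad\Longrightarrow\quad
m(t) = m_0 \exp\!\left[-\,\sigma_{cD}(\omega_L)\int_0^{t} I_L(t')\,dt'\right],
\]

so that at sufficiently high pulse fluence the exponent becomes large, m → 0, and the centers are emptied completely, which is exactly the "light impact" saturation described above.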
The essence of the DLSS method consists in the following. During two-step absorption (TSA) of probe quanta from the transparency region (ℏω < E_g) of the medium, there is a consecutive excitation of electrons from the valence band to the conduction band through deep local states in the forbidden gap. This can be classified as quasistationary TSA (v → D → c) for a probe light beam with sufficiently long illumination (Δt > τ, where τ is the lifetime of the localized carriers), and as nonequilibrium TSA (Δt_L < τ) for the mixed "probe + laser" two-step transitions v → D → c. For the combination of a long probe and short laser light pulses (Figure 2), the so-called laser modulation of TSA (LM TSA) takes place [11,14,22-26], which possesses the necessary spectroscopic capabilities. Unlike coherent two-photon absorption (TPA) [23,26-28] via virtual states, each step of TSA is formed by a combination of one-photon transitions that change the DL population and hence the probe absorption measured over the sample length d. In Section 5 it is shown that if the laser quantum energy is increased so that interband two-photon excitation appears (ℏω_L > E_g/2), this additionally produces photoinduced absorption due to changes in the deep level population caused by the capture of nonequilibrium free carriers. These signals (γ-DLSS) are determined by the carrier capture coefficients γ and compete with the (σ-DLSS) signals, discussed above, arising from σ-photoionization of DLs. This specificity expands the opportunities toward a complete metrology of the optical and electrical activity of DLs in semiconductors. As a result, two complementary DLSS methods, differing in the character of the DL population modulation, may be realized. DLSS Theory Let us now outline the theoretical basis of the presented method, which is defined by the phenomenon of deep laser modulation, up to saturation, of the quasistationary two-step absorption of the probe light beam via the deep centers in the bandgap. Two-Step Processes via Deep Centers and Their Laser Modulation. Let us consider optical transitions (Figure 3) through the deep donor states D with energy E_D, concentration M, and quasistationary electron population m_0 ≡ m(0), established by TSA of the probe light ℏω before the action of the laser modulation (LM). The latter value m_0 defines the initial absorption of the probe light; after the action of the pulsed modulation I_L(t) we obtain the same equation for α(ω, t), but now with m(t). Here σ_vD(ω) and σ_cD(ω) are the capture cross-sections of probe photons ℏω for the transitions "valence band → donor" (photoneutralization) and "donor → conduction band" (photoionization), respectively. It follows that the laser-induced absorption Δα(ω) of the probe light, counted from the moment of laser switch-on t = 0 (m(t = 0) = m_0), is defined by Eq. (6). The rate equation for the dynamics of the DL population m(t), under the transitions shown in Figure 3, involves the lifetime τ_Σ of the nonequilibrium DL population, which is determined by the capture coefficients of the free electrons, γ_cD, and of the holes, γ_vD, and by their concentrations n(t) and p(t). In general, it is necessary to write down the corresponding equations and neutrality conditions for the concentrations of free carriers n(t) and p(t) as functions of the illumination. The carrier dynamics will then be determined by the competition of a number of processes (capture, recombination, etc.)
governed by a set of unknown centers that are not necessarily responsible for the TSA. The solution of the full system of equations is practically unrealistic and would require multivariate modeling that hides the essence of the processes. All these problems disappear if we take into account the "differential" specificity of the discussed experimental setup (Figure 2). First, the quasistationary population is preserved (n(0), p(0) = const.) during the measurements of Δα(ω, I_L, t), because the influence on the crystal of the "white" probe illumination, monochromatized only after passing through the crystal, is invariant. Second, the induced absorption, according to Eq. (6), is registered as a small differential signal on a strong "background." As a rule, the duration of the modulation is substantially shorter than the DL population relaxation times, Δt_L < τ_Σ, which is monitored in the measurements through the signal kinetics. Thus, the measured response of the centers is unable to react to the changes of the carrier numbers in the bands, and it is natural to take τ_Σ = const. > Δt_L, since if the condition Δt_L < τ_Σ holds at the maximal modulation intensity I_L, it will also hold at any smaller I_L (see Eq. (8)). This condition can be checked experimentally by observing the kinetics of the induced absorption signal. Thus, the need to solve for the free-carrier relaxation is removed, while the population of the centers is solved for easily. Proceeding from this modeling, two DLSS methods, σ- and γ-DLSS, differing in the character of the laser influence on the DL population and giving complementary information, are realized experimentally. Regime of Direct Modulation by Laser Photoionization (σ-LM TSA) Direct laser modulation of the DL population is realized by their photoionization, when ℏω_L > E_M and ℏω_L < E_g/2. The second condition ensures the absence of two-photon excitation of the crystal (Δn, Δp ≈ 0; τ_Σ = const.) and also the absence of photoneutralization of the centers by laser quanta (σ*(ω_L) = 0). The scheme of optical transitions under such conditions (σ-DLSS) is presented in the left part of Figure 3. Generalizing Eq. (7) to any type of center (donors D or acceptors A), we obtain Eqs. (9)-(10), where new designations are introduced: the index * denotes the processes of carrier exchange with the band that is energetically more distant from the local levels, and τ = 1/(γn) is the localized-carrier lifetime with respect to capture with coefficient γ. The influence of an additional stationary illumination I_0, controlling the initial DL population m_0 according to the quasistationary solution of Eq. (9) at I_L(0) = 0, is introduced in Eq. (11). The solution of Eq. (9) gives the dynamics of the DL population m(t), Eq. (12). The important point here is that during the action of the laser pulse all other perturbing factors can be disregarded: owing to the short duration and high intensity of the laser pulses, the main mechanism of the DL occupation changes is the emptying of the centers by laser quanta. This greatly simplifies the calculations and is used in Eqs. (1)-(3). Thus, the development of the induced absorption of the probe light under laser modulation is defined by Eq. (6) and by the solution of Eq. (7) with the dominating term σ(ω_L)I_L [11,12,15,26], Eq. (14). This expression defines the basic regularities of σ-DLSS, illustrated by Figure 4, where the formation of the induced absorption spectra is shown.
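Combining the pulse-emptying solution with Eq. (6), the induced absorption is expected to take a saturating form such as

\[
\Delta\alpha_\sigma(\omega,t) = m_0\big[\sigma_{vD}(\omega)-\sigma_{cD}(\omega)\big]
\left\{1-\exp\!\left[-\,\sigma_{cD}(\omega_L)\int_0^{t} I_L(t')\,dt'\right]\right\},
\]

a sketch reconstructed from the definitions above rather than a quotation of Eq. (14) itself; it reproduces both the limiting amplitude Δα(∞) = m_0[σ_vD(ω) − σ_cD(ω)] and the integrated, envelope-following time development discussed below.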
(a) The spectra of DLSS are determined by the difference of the photoneutralization and photoionization cross-section spectra, Δα(ω) ∼ σ_vD(ω) − σ_cD(ω). It follows that the Δα(ω) spectra give the DL energy location in the forbidden gap relative to both the c and v bands. It is determined (Figure 10) from the long-wavelength thresholds of the induced bleaching band (E_cD = E_M) and of the induced absorption band (E_vD = E_g − E_M + Δ_S), where Δ_S is the Stokes loss. Let us discuss the formation of the σ-DLSS spectra (Figure 4). Under weak illumination, deep levels are manifested in absorption spectroscopy by transitions to only one of the bands, either the valence or the conduction band [18]. The type of transition (ionization or neutralization) is determined by the steady-state occupation of the DLs, or by the degree of compensation of the crystal, that is, by the position of the Fermi level with respect to the energy levels of the defects. Consequently, the same centers can appear differently in different crystal samples and never reveal their "full" spectra. In the case of Figures 4(a) and 4(d), the DLs are compensated and appear only in the photoneutralization process. Thus, it is hardly possible to investigate the full absorption spectrum by traditional methods of steady-state spectroscopy. By the word "full" we mean the possibility of simultaneously detecting both the neutralization (v → D) and the ionization (D → c) transitions. During the transitions, optical charge exchange of the local centers, that is, lattice relaxation in the vicinity of the defect, proceeds simultaneously. In other words, the localized carriers generate phonons. This is described by the Franck-Condon rule of Stokes losses. In [18,19] it was shown that this specificity of the DLSS method is exceptionally effective for direct studies of the phonon relaxation effects of local states. The "full spectrum" of DLs may be measured only by transient spectroscopy techniques using additional pulsed illumination, such as DLSS. Then a nonequilibrium partial occupation of the defect states is created. Figure 4 illustrates the formation of the photoinduced spectrum Δα_σ(ω) (Eq. (14)) in the presence of compensated deep donors D. Here, Figures 4(a) and 4(b) demonstrate the appearance of the two-step transitions, and Figure 4(c) their laser modulation; the initial part of the photoneutralization spectrum is given in Figure 4(d) and the full TSA spectrum in Figure 4(e); the reaction of the two-step transitions (4(b)) and of their spectrum (4(e)) to the additional modulation by a laser pulse ℏω_L is shown in 4(f) as the changes in the full absorption spectrum α(ω) and in 4(g) as the measured induced absorption spectrum Δα(ω). As a result of long (Δt ≫ τ) probe-light TSA, a quasistationary filling of the DLs with electrons, m_0 = const., is established (Figure 4(b)). It is determined by the ratio of the transition intensities of the two TSA steps and by the reverse capture of the carriers (Figure 4(e)). It corresponds to the introduction of quasi-Fermi levels E_F for holes and electrons. Thus, the DLs appear in both TSA steps, photoionization and photoneutralization (Figure 4(e)). Because the DL serves as the initial state for one TSA step and as the final state for the other, the TSA bands react differently to the population changes produced by the modulation (Figure 4(f)). The photoionization steps (∼ m_0σ(ω)) are characterized by induced transparency (Δα < 0), and the photoneutralization steps (∼ (M − m_0)σ*(ω)) by induced absorption (Δα > 0). In the spectral region E_M < ℏω < E_g − E_M we have σ*(ω) = 0, and only bleaching of the crystal is observed. At shorter wavelengths, ℏω > E_g − E_M, there is an actual overlap and competition of the induced bleaching (broken line in Figure 4(g)) and induced darkening signals (see also Figure 10).
(b) The light-intensity dependences (LIDs), Δα(I_L), have a saturating character (Δα at I_L → ∞ does not exceed Δα(∞) = m_0[σ_vD(ω) − σ_cD(ω)], according to Eq. (14)), which is related to the complete emptying (filling) of the donor (acceptor) centers under intense modulation. With the increase of the modulation intensity, the signals Δα(I_L) pass into saturation through complete laser bleaching of the DLs. The saturation character is determined only by the value of the DL photoionization cross-section σ_cD(ω_L), that is, by the "optical activity" of the centers: the higher σ(ω_L), the more effective the DL emptying, and Δα_σ(I_L) saturates at lower I_L. The LIDs Δα(I_L) in Figure 5(a) are shown for different photoionization cross-sections σ_i(ω_L); the sequence of curves 1-3 corresponds to a reduction of the σ_i(ω_L) values by an order of magnitude, demonstrating rather high accuracy in the determination of the σ_i(ω_L) parameters. The amplitude of Δα_σ in saturation is determined by the quasistationary DL occupation m_0. Curves 1′, 2′, and 3′ in Figure 5(a) show the LID changes with growth of the initial population m_0 of the centers. Thus, the parameter σ_i(ω_L) defines the character of the DL saturation, while the initial population of the centers influences only the amplitudes of the signals. As a result, we have a unique opportunity to determine σ_i(ω) and m_0 separately, which is impossible in the linear optics of defects because there they enter only as the product α = σm_0, equal to the optical losses. Therefore, we transform Eq. (14) into a form more convenient for the analysis of experimental data, expressing the laser intensity through its effective duration. Then the total PIA coefficient β_Σ, reduced to the power density of the modulating radiation (β_Σ ≡ Δα_Σ/I_L) and corresponding to the competition of TPA and several channels of σ-DLSS, can be expressed as Eq. (16), where β_TPA(ω) is the TPA constant (ω + ω_L). The additivity property, valid for weak effects (ΔI ≪ I, I_L), is used here. Critical intensities I_σ are also introduced; they describe the saturation effects and are determined by the σ_i(ω_L) values. Plots of Eq. (16) are shown in Figure 5(b). The initial slope β_i(0) of the light-intensity dependences (broken line in Figure 5(a)) is also determined by the DL parameters and is obtained by extrapolating the experimental LIDs to zero intensity I_L. It is important to note that Figure 5 illustrates the technique of separating the spectral contributions to the absorption from DLs of different nature on the basis of the difference of their cross-sections σ_i (Figures 5(a) and 5(b)). (c) The time development of the signal Δα(t) during the laser illumination has an integrated character with respect to the envelope of the laser pulse; it reflects the accumulation dynamics of the localized carriers. However, at saturating intensities the centers are already emptied at the initial stage of the signal development Δα(t), as illustrated by Figure 6(a), where calculations of the σ-DLSS kinetics at different I_L according to Eq. (12) for a Gaussian laser pulse (Eq. (19)) are presented. (d) The relaxation of the induced absorption after the end of the modulating pulse, from Eqs. (6) and (12), is an exponential decay determined by the lifetimes τ_Σ (Eq. (8)) of the nonequilibrium DL population; here t_k is the moment of the laser pulse termination. It follows that the condition τ_Σ = const., mentioned above, can be checked experimentally from the induced absorption kinetics, as the independence of the relaxation times from the illumination intensity.
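In practice, the parameters Δα(∞) and I_σ (and through it σ_i(ω_L)) can be extracted by fitting a measured LID with a saturating function. A minimal sketch in Python, assuming the single-exponential saturation form implied by the solution above; the data values are illustrative, not measured:

import numpy as np
from scipy.optimize import curve_fit

def lid(i_l, dalpha_inf, i_sigma):
    # Saturating light-intensity dependence; the initial slope is dalpha_inf / i_sigma.
    return dalpha_inf * (1.0 - np.exp(-i_l / i_sigma))

i_l = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # modulation intensity, illustrative units
dalpha = np.array([0.08, 0.15, 0.25, 0.35, 0.40, 0.42])  # induced absorption, cm^-1, illustrative

(dalpha_inf, i_sigma), _ = curve_fit(lid, i_l, dalpha, p0=(0.4, 2.0))
# i_sigma plays the role of the critical intensity I_sigma ~ 1/(sigma(omega_L)*dt_eff):
# the larger the photoionization cross-section, the lower the intensity at which
# the induced absorption saturates.
print(dalpha_inf, i_sigma)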
Excitation Spectroscopy of σ-DLSS: Identification of the Centers as Acceptors or Donors Let us show how the centers observed in DLSS can be categorized as donor or acceptor type. For this purpose, measurements of the excitation spectra of DLSS [15] were based on the fact that the induced bleaching signals depend on the spectral composition of the probe beam acting before the laser modulation. The explanation is given in Figure 7. If the short-wavelength part ℏω_exc of the multichromatic probe beam, which causes photoneutralization σ*_cA of the compensated acceptors in n-type crystals (Figure 7), is blocked, these acceptors remain completely populated by electrons. Then the DLSS signal for the quanta ℏω_probe is absent, since the laser pulse ℏω_L cannot change their population. In the presence of such quanta ℏω_exc > E_cA in the probe beam (transitions A → c), the centers are partially emptied and bleaching σ-DLSS signals appear. Thus, for the compensated acceptors the response depends strongly on the spectral composition of the preceding illumination, while for the donors it does not. These effects are described by Eq. (11) for the initial filling m_0 of the centers. We consider the maximal value of the PIA amplitude, α_max(ω) = Δα(ω, σ, I_L → ∞), using Eqs. (11) and (14), which defines the PIA dependence on the conditions of the quasistationary illumination I_0. Assuming a small probe intensity I, Eq. (21) for the bleaching part of the spectra becomes Eq. (22). Let us consider two extreme cases. (a) A weak influence of capture from the neighboring band, that is, σ*(ω)I_0(ω) ≪ τ^(-1). This is realized for DLs with a repulsive potential (γ ≪ γ*) or at their strong compensation (n_0 ≪ p_0, for acceptors). From Eq. (22) it then follows that, in the absence of illumination with ℏω_exc > E_g − E_M neutralizing the centers, the bleaching signals in the region E_M < ℏω < E_g − E_M are absent altogether, since these centers are vacant. Thus, the excitation spectrum of DLSS, Δα(ω) = f(ℏω_exc), detected by changing the illumination quantum energy ℏω_exc at a fixed probe frequency ℏω_probe, reproduces the spectrum of the photoneutralization cross-section σ*(ω) (curve 1 in Figure 8) in the bleaching region E_M < ℏω_probe < E_g − E_M. (b) Otherwise, when capture from the nearest band dominates (uncompensated centers), the PIA is independent of the presence of illumination photoneutralizing the centers, and of the spectral composition of the probe light (curve 3 in Figure 8). The corresponding excitation spectra and intensity dependences Δα(I_ω) of DLSS are shown schematically in Figure 8. Thus, from the analysis of the DLSS excitation spectra it is possible to draw conclusions about the relation between the capture times τ_cM and τ_vM and, furthermore, when the type of the dominating conductivity is known, about the nature of the DLs (acceptor or donor).
Indirect Modulation of TSA through the Capture of Two-Photon Generated Carriers (γ-LM TSA) Two-photon excitation is volumetric and nonlinear and leads to absorption effects by the generated nonequilibrium free carriers [12,28]. The presence in a crystal of centers with fast carrier capture becomes evident in the photoinduced absorption (PIA) through the change of their population under TPA excitation. This is shown schematically in Figures 9 and 3 (right part). Such a γ-DLSS process is also described by the rate equation (7), where the carrier concentration n(t) = n_0 + Δn(t) is now determined, in contrast to Eq. (13), by two-photon excitation if 2ℏω_L > E_g. For sufficiently short laser pulses Δt_L < τ_r, the generated concentration is given by Eq. (24), where β_2ωL is the TPA constant for laser quanta ℏω_L. The carrier concentration achievable at such excitation, up to optical breakdown, does not exceed 10^17-10^18 cm^-3. Neglecting the influence of the probe light, the balance equation (7) simplifies, and its solution, together with Eq. (6), gives the PIA coefficient. It follows that the form of the γ-DLSS spectrum is determined by the ratio of the coefficients of carrier capture from the nearest (γ) and distant (γ*) bands (see Figure 3). As a rule, DLs are characterized by predominant capture from the near band, that is, γ ≫ γ* or γ_cD ≫ γ_vD (Figure 3). Substitution of Eq. (24) into Eq. (26) then gives Eq. (27), which reflects the basic properties of γ-DLSS. (a) The spectral appearance of γ-DLSS is formed according to the scheme of Figure 9. Part of the reasoning concerning Figures 9(a) and 9(b) remains the same as for σ-DLSS. The main difference is that the modulation of the DL population occurs indirectly, through the capture of carriers injected optically into the crystal volume by the TPA excitation. The difference of the Δα_γ(ω) spectra (Figure 10) consists in the fact that the capture of carriers from the nearby band leads to an increase of the DL population; therefore, the γ-DLSS spectrum of Figure 9(b) is expected to be inverted in comparison with the σ-DLSS spectra (Figure 9(a)). This is illustrated by the experimental data for ZnSe [24] in Figure 11, where the mentioned DLSS features are presented for laser modulation with ℏω_L < E_g/2 (spectrum 1) and ℏω_L > E_g/2 (spectrum 2). Curve 3 shows the calculated spectrum. The intensity dependences Δα(I_L) are compared with the theoretical ones according to Eqs. (16) and (28) in Figures 11(d) and 11(e), respectively. Curves 2 and 3 differ in the value of σ_vA by an order of magnitude and demonstrate the precision of its determination. At ℏω_L > E_g/2, TPA excitation takes place, and the spectra (2) invert in sign without changing their shape. However, at a higher efficiency of carrier capture from the more remote band (e.g., dominating hole capture by donors), the DLs are emptied, and the γ- and σ-DLSS spectra may also have an identical appearance. (b) The light-intensity dependences of σ- and γ-DLSS are essentially different (Figure 5), which also enables the separation of the contributions of different centers and the determination of their parameters. At small intensities of the modulating radiation, the PIA has a square-law LID (dotted line in Figure 5(c)), following the concentration (Eq. (24)) of the two-photon generated carriers. With the growth of the excitation, when the number of vacant centers becomes low, the LID tends to saturation (Figure 5(c)), the more quickly the greater the γ values.
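The ingredients of γ-DLSS can be sketched explicitly under the stated assumptions (short pulses, dominant capture from the near band); the forms below are reconstructions consistent with the definitions in the text, not quotations of Eqs. (24)-(27):

\[
\Delta n(\Delta t_L) \approx \frac{\beta_{2\omega_L}\, I_L^{2}}{2\hbar\omega_L}\,\Delta t_L,
\qquad
\frac{dm}{dt} = \gamma\,\Delta n(t)\,\big[M-m(t)\big],
\]

so that Δα_γ(ω) ∝ Δm[σ_cD(ω) − σ_vD(ω)] grows quadratically with I_L at low intensity (following Δn) and saturates when the vacant centers M − m_0 are exhausted; the sign of Δm (filling instead of emptying) is what inverts the γ-DLSS spectrum relative to σ-DLSS.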
It is important that γ-LM TSA gives information on the processes of carrier capture by the centers, that is, on their electroactivity. Moreover, the amplitude Δα_γ(∞) is determined by the value M − m_0, whereas for σ-modulation the amplitude Δα_σ(∞) is determined by the value m_0. Therefore, carrying out the experiments in both modulation modes allows us to determine directly the concentration of the centers M (Eq. (32)) and also their quasistationary population m_0/M (Eq. (31)). (The experimental conditions for Figure 11 were Δt_L = 30 ns and E_g = 2.72 eV, from [24].) Let us transform Eq. (27) in the same way as Eq. (16), introducing experimentally determined quantities that depend on the DL parameters. The parameters g_i(ω, 0) and I_γ are, respectively, the initial slope of the LID (at I_L → 0) and a certain critical intensity of the modulation at which the saturation effects begin (see Figure 5(d)). The dependences of Eq. (28) are shown in Figure 5(d). Figure 5 thus compares the LIDs of Eqs. (16) and (28) for the σ- and γ-DLSS modes and their variation with the DL parameters. It is important to note that Figure 5 illustrates the technique of separating the spectral contributions of different DLs to the induced absorption spectra on the basis of the difference of the photon capture cross-sections σ_i, that is, of the photoactivity of the centers. (c) Concentration and population of DLs. We note that in both techniques the "binding to the center" is made through the characteristic energies of the DLs (see Figure 10). The conditions of quasistationary excitation of the crystal (the initial population m_0 of the DLs before the modulation) are identical for the two techniques. Thus, the values of the PIA coefficients Δα at low modulation, I_L → 0, are determined by the initial population m_0 (see Figure 5(a)) and by the donor vacancy M − m_0 (see Figure 5(d)), respectively. These dependences also define the form of the spectra in Figure 10. From here it is easy to determine the initial degree of DL population established during the quasistationary TSA of the probe light, before the action of the modulating laser pulse, and also the concentration of the centers, where Δα_∞ is the limiting value of the induced absorption of the i-th DLSS component at I_L → ∞, determined from the LID (Figure 5). Correlation between DLSS and Luminescence Dynamics: Identification of Radiation Centers The metrological opportunities of DLSS in determining the basic set of phenomenological DL parameters were shown above. Let us show one more aspect of DLSS: the opportunity to connect the experimental data directly with the participation of the centers in radiative recombination and to identify the luminescence band caused by specific DLs. Until now, such opportunities had been realized only in the methods of optically detected magnetic resonance (ODMR) [31-34]. Besides information on the microscopic properties of the local centers and their physical-chemical identification, ODMR also makes it possible to tie defects to their photoluminescence (PL) bands. In DLSS, for these purposes, the dual influence of a laser pulse on the population of the centers under conditions of TPA excitation, together with the associated PL dynamics, can be used. This is manifested in the "excitation-quenching" effect in the PL, illustrated schematically by Figures 12 and 14(b).
The essence of the phenomenon consists in the fact that the same laser pulse generates free carriers, which are captured by the centers (γ-DLSS), while simultaneously emptying the centers via photoionization (σ-DLSS). If for a given center the second step of the recombination is radiative, the laser pulse thus simultaneously enhances and damps the corresponding PL band. The competition of these phenomena develops in time in a complicated manner, but if the carrier lifetime exceeds the duration of the excitation, the PL flares up after the termination of the laser pulse. The light-intensity dependences (LIDs) of the PL and of the DLSS are equally determined by the dynamics of the local-center population changes Δm, which at moderate excitation intensities follow a square law in I_L, since they follow the carrier concentration. Then, with the growth of the excitation, owing to the finite number of local centers, Δm approaches their concentration, and the PL and DLSS signals saturate in the same manner. Let us consider the dynamics of the discussed processes (Figure 12) from the balance equation (7) for the acceptor population changes m(t) at I_L ≫ I, γ_vA ≫ γ_cA, and σ_Ac(ω_L) = 0. The solution of Eq. (7) can then be expressed as Eq. (33), where m_0 is defined by the quasistationary occupation of the acceptor states due to the probe illumination. The concentration of two-photon generated holes p(t) is determined from Eq. (24). The dynamics of the impurity PL for n-type monopolar crystals is described by Eq. (35), and the induced absorption under the competition of TPA with σ- and γ-DLSS by Eq. (36), where β ≡ β_(ω+ωL) is the TPA coefficient. The changes of the center population m(t) (Eq. (33)) with the growth of the excitation I_L are presented in Figure 13(b). As we see, during the action of the laser illumination the impurity luminescence can be quenched completely, flaring up sharply upon the termination of the laser pulse, and then saturating with the growth of the excitation. Such an analysis was used to explain the behaviour of the G-band of photoluminescence in CdS [29] as annihilation of excitons trapped by DA-centers. Effects of the same nature were shown in laser-induced self-diffraction experiments [35]. Conclusion "Deep level saturation spectroscopy" (DLSS), based on incoherent nonequilibrium processes of impurity absorption optically modulated up to saturation [11-15,22-26], allows the separation of the spectral contributions from deep centers of different nature, even when they are spectrally overlapped and unresolvable, and the direct determination of their basic phenomenological parameters, their composition, and their role in the formation of the photoelectric properties of crystals [29,35]. The opportunities of DLSS are uniquely expanded when the two types of laser modulation of TSA, illustrated schematically by Figures 14(a) and 14(b), are realized: (I) direct modulation of the DL population by photoionization (σ-DLSS) by the laser pulse, and (II) "indirect" modulation of the DL population by the capture of nonequilibrium carriers (γ-DLSS) two-photon generated by the laser pulse. Under the conditions of two-photon crystal excitation, both competing σ- and γ-DLSS processes may be active (Figure 14(c)). Also possible is the appearance of the effects of "enhancement-damping" of the impurity luminescence by the laser radiation (Figure 14(b)) for defects involved in radiative recombination. The metrology opportunities of DLSS give the full set of phenomenological parameters of the deep centers and their content: (i) the energy positions E_vM and E_cM in the forbidden gap, (ii) the values and spectra of both the photoionization and the photoneutralization cross-sections, σ_vM and σ_cM, (iii) the coefficients γ_M of the dominating carrier capture, (iv) the quasistationary population of the centers, m_0/M, (v) the lifetimes of the localized carriers, or the times of restoration of the quasistationary population, and (vi) the concentration of the deep centers, M. Additional features of the DLSS method are the following opportunities. (i) "Voluminosity" of the phenomena. In DLSS the working light beams are from the crystal transparency region. This allows studying defects in the volume of a crystal, including the possibility of spatially selective 3D scanning; the problem of diagnosing the volumetric homogeneity of crystals with respect to the content and parameters of local defects may thus be solved. (ii) The opportunity to measure the "full spectra" of DLs, that is, to register simultaneously the interconnected spectra of both the photoionization and the photoneutralization. This property was used in [18,19] for the direct study of the local electron-phonon interaction in ZnS crystals.
(iii) Separation of the spectral contributions of optically competing centers on the basis of their difference in optical activity (photon capture cross-sections; Figure 14(a); σ-DLSS). (iv) Separation of the spectral contributions of the centers on the basis of their difference in electroactivity (carrier capture coefficients; Figure 14(c); γ-DLSS). (v) Separation of the spectral components of defects by changing the modulation quantum energy, that is, by consecutively changing the set of levels participating in the formation of the induced response. (vi) Identification of the nature of the centers as donor or acceptor (their localization in the forbidden gap) from measurements of the σ-DLSS excitation spectra [22]. (vii) Special selectivity to the type of primary carrier capture by the centers, from the spectral form of the γ-DLSS components: the induced responses to carrier capture from the near and distant bands, relative to the DL level, differ in sign, so the measured spectra are inverted [22]. (ix) Correlation between the luminescence and the induced absorption. The study of the photoluminescence (PL) and its dynamics (the effect of laser "excitation-quenching" of the PL; Figure 14(b)) and of their correlation with the induced absorption under the same conditions of two-photon excitation allows the participation of defects in the formation of the PL bands to be specified [14,29]. As a result, there is an opportunity not only for defect metrology, but also for revealing their participation in the radiative recombination. The discussed technique was used earlier for CdS [10,36,37], ZnSe [11,13], and ZnO [15,30] crystals to study changes in the deep-level composition arising from the crystal growth technology. Figure 3: Scheme of the optical transitions for the DLSS calculations in the case of donor (D) states. Thick arrows: DL photoionization σ_cD(ω_L) transitions with laser quanta absorption, and interband two-photon (TPA) excitation of the crystal β(2ω_L). Thin arrows: probe light absorption. Broken lines: free carrier capture. Figure 4: Schemes of the formation of the stationary α(ω) and induced Δα(ω) impurity absorption spectra and of the optical transitions, explaining σ-DLSS: the regime of laser modulation of TSA by photoionization, in the absence of TPA generation (2ℏω_L < E_g). (a) and (d) At small intensity, with "dark" filling of the compensated donors, only the photoneutralization band is observed. (b) and (e) At probe light intensities breaking the dark filling of the DLs, the photoionization band appears. (f) Changes of the TSA spectra as a result of the laser modulation. (c) and (g) Induced absorption spectrum.
Figure 5: Comparison of the light-intensity dynamics of σ-DLSS (laser photoionization of DLs, Eq. (16)) and of γ-DLSS (capture of nonequilibrium two-photon generated carriers, Eq. (28)). Separation of the components of different DL absorption bands: (a) and (b) based on the difference of the laser photon capture cross-sections σ, that is, on the photoactivity of the centers; (c) and (d) based on the difference of the carrier capture coefficients γ, that is, on the electroactivity of the centers. The sequence of curves 1, 2, and 3 corresponds to the reduction of the capture cross-sections σ(ω_L) (σ-DLSS) or of the carrier capture coefficients γ (γ-DLSS). The sequence of curves 1′, 2′, and 3′ reflects the influence of the initial population m_0 and of the "vacancies" M − m_0 of the centers, respectively. Figure 9: Schemes of the formation of the stationary α(ω) and induced Δα(ω) impurity absorption spectra and of the optical transitions, explaining γ-DLSS as the consequence of the laser modulation of the quasistationary TSA by the capture of two-photon generated carriers. (a) and (c) Energy scheme and TSA spectrum α(ω) before the modulation [same as in parts (b) and (e) of Figure 4]. (b) and (d) Energy scheme and changes of the TSA spectra as a result of the laser modulation. (e) Resultant induced absorption spectrum. Figure 12: Scheme of the processes considered in the calculations of the γ-DLSS dynamics and of the impurity luminescence.
Experimental hut evaluation of a novel long-lasting non-pyrethroid durable wall lining for control of pyrethroid-resistant Anopheles gambiae and Anopheles funestus in Tanzania A novel, insecticide-treated, durable wall lining (ITWL), which mimics indoor residual spraying (IRS), has been developed to provide prolonged vector control when fixed to the inner walls of houses. PermaNet® ITWL is a polypropylene material containing non-pyrethroids (abamectin and fenpyroximate) which migrate gradually to the surface. An experimental hut trial was conducted in an area of pyrethroid-resistant Anopheles gambiae s.l. and Anopheles funestus s.s. to compare the efficacy of non-pyrethroid ITWL, long-lasting insecticidal nets (LLIN) (Interceptor®), pyrethroid ITWL (ZeroVector®), and non-pyrethroid ITWL + LLIN. The non-pyrethroid ITWL produced relatively low levels of mortality, between 40–50% for An. funestus and An. gambiae, across all treatments. Against An. funestus, the non-pyrethroid ITWL when used without LLIN produced 47% mortality but this level of mortality was not significantly different to that of the LLIN alone (29%, P = 0.306) or ITWL + LLIN (35%, P = 0.385). Mortality levels for An. gambiae were similar to An. funestus with non-pyrethroid ITWL, producing 43% mortality compared with 26% for the LLIN. Exiting rates from ITWL huts were similar to the control and highest when the LLIN was present. An attempt to restrict mosquito access by covering the eave gap with ITWL (one eave open vs four open) had no effect on numbers entering. The LLIN provided personal protection when added to the ITWL with only 30% blood-fed compared with 69 and 56% (P = 0.001) for ITWL alone. Cone bioassays on ITWL with 30 min exposure after the trial produced mortality of >90% using field An. gambiae. Despite high mortality in bioassays, the hut trial produced only limited mortality which was attributed to pyrethroid resistance against the pyrethroid ITWL and low efficacy in the non-pyrethroid ITWL. Hut ceilings were left uncovered and may have served as a potential untreated refuge. By analogy to IRS campaigns, which also do not routinely treat ceilings, high community coverage with ITWL may still reduce malaria transmission. Restriction of eave gaps by 75% proved an inadequate barrier to mosquito entry. The findings represent the first 2 months after installation and do not necessarily predict long-term efficacy. Background Most malaria-endemic countries have adopted policies to promote universal distribution of long-lasting insecticidal nets (LLINs) free of charge across all age groups, and an estimated 49% of the population in sub-Saharan Africa had access to at least one LLIN in their household in 2013 [1]. However, resistance to pyrethroid insecticides used in all LLINs is now widespread across vector populations and may reduce the level of community protection [2,3]. Another challenge is maintaining effective year-round protection as during the hot dry seasons when transmission may still occur, some individuals are deterred from sleeping under nets [4]. The development of large holes through wear and tear of net fabrics during normal household use may compromise their protective efficacy despite LLINs retaining insecticidal potency for three years [5]. In areas of hyper-endemic malaria transmission, even when universal coverage (UC) of LLINs is achieved and nets are in good condition, malaria prevalence can remain relatively high unless additional control tools are implemented [6]. 
Indoor residual spraying (IRS) is a proven vector control method that has been used since the Second World War and was the central feature of the Global Malaria Eradication Campaign between 1955 and 1969, which successfully eliminated malaria from several countries and significantly reduced disease incidence in others [7,8]. In 2005 the US President's Malaria Initiative (PMI) revived IRS in sub-Saharan Africa by funding an initial $1.2 billion programme in 15 countries [9]. IRS coverage in sub-Saharan Africa increased substantially, from <2% of the at-risk population protected in 2005 to 11%, or 78 million people, by 2010 [1]. A major challenge facing IRS programmes is how to sustain such gains in the face of operational problems such as vector resistance to insecticides, a lack of affordable alternative insecticides, and limited resources for recurrent annual campaigns. In addition to the labour costs associated with spraying, high levels of pyrethroid resistance among mosquito populations have necessitated the use of more expensive non-pyrethroid formulations [10]. The commodity cost of PMI-supported IRS campaigns with the organophosphate Actellic 300 CS (pirimiphos-methyl CS) has been estimated to be more than four times that of using the pyrethroid Icon 10 CS (lambdacyhalothrin CS) to cover the same area [11]. A new product has been developed which mimics the effect of IRS but is designed to control insecticide-resistant mosquitoes for a minimum of three years. Insecticide-treated durable wall lining (ITWL) is a material that can be fixed to the inner walls and ceilings of houses. The principle is the same as for IRS: to kill mosquitoes that land on the ITWL either before or after blood-feeding. If the coverage of ITWL is high enough, the population density and longevity of mosquitoes in the area become substantially reduced, together with malaria transmission. There is also the possibility that ITWL could be used to block the entry of mosquitoes if the material is extended from floor to ceiling, thereby covering the eave spaces. While ITWL, like IRS, could be used to reduce transmission by itself, it is more likely to be an adjunct to LLINs, with the LLIN providing additional personal protection through its barrier and excito-repellent effects. ZeroVector® is a first-generation ITWL containing deltamethrin incorporated into high-density polyethylene shade cloth; it has been evaluated in several countries in sub-Saharan Africa and Asia, has consistently received high levels of household acceptability, and has provided prolonged insecticidal activity of greater than 12 months [12,13]. With pyrethroid resistance now widespread throughout sub-Saharan Africa, attention has switched to the development of a new generation of non-pyrethroid ITWL. Initial experimental trials of ITWL + LLINs were conducted using plastic sheeting that had been spray-treated with a non-pyrethroid (organophosphate) insecticide [14,15]. These studies produced differing results in Côte d'Ivoire and Burkina Faso, which were attributed to variation in phenotypic resistance to organophosphates and pyrethroids among the respective vector populations [14,16].
A newer factory-produced product (PermaNet® Lining) has been developed that consists of thin nonwoven sheets of cloth made from high-density polypropylene containing a non-pyrethroid insecticide mixture of abamectin (an avermectin) and fenpyroximate (a pyrazole), which are released slowly together and migrate to the surface of the fibre; neither insecticide has been used in malaria control before. Abamectin is a macrocyclic lactone that acts through chloride channel activation and was discovered in 1981 [17]. Contact bioassays have shown that abamectin is efficacious against house flies [18], cockroaches [19] and fire ants [20] in terms of mortality, and there is evidence of oviposition suppression in blowflies [21]. There are limited data for use against mosquitoes, but ivermectin (also an avermectin), used as a cattle parasiticide, is highly effective in terms of both mortality and oviposition suppression [22]. Abamectin is widely used in mixtures for control of crop and ornamental pests associated with greenhouse and nursery operations, e.g., abamectin + trifosine (a fungicide) is used to control the two-spotted spider mite [23]. Fenpyroximate is a pyrazole in the mitochondrial complex 1 electron transport inhibitor (METI) group of insecticides, which disrupt insect respiration and are in widespread use globally. METI acaricides are extensively used to control Tetranychus spp. (spider mites) [24]. To date, there are no published data demonstrating efficacy of these proprietary insecticides against mosquitoes. Despite the promise of ITWL, the only existing data to support the efficacy of this new product are small-scale, unpublished studies conducted by the manufacturer. In response to the increasing problem of insecticide resistance, PMI has funded a large-scale, cluster-randomized controlled trial (CRT) in Muheza, Tanzania to investigate whether ITWL combined with UC of LLINs provides added protection against malaria compared with LLINs alone [25]. To aid interpretation of the CRT results, an experimental hut trial of the ITWL with or without LLIN was conducted in Muheza against wild free-flying populations of Anopheles funestus sensu stricto (s.s.) and Anopheles gambiae sensu lato (s.l.). Insecticide treatments The following insecticide treatments were tested in the experimental hut trial: (1) untreated control; (2) non-pyrethroid ITWL (PermaNet® Lining) with all four eaves open; (3) non-pyrethroid ITWL with eaves partially blocked (one eave open); (4) pyrethroid ITWL (ZeroVector®); (5) pyrethroid LLIN (Interceptor®); and (6) non-pyrethroid ITWL + LLIN. Insecticide safety With any new vector control product, it is essential that rigorous mammalian and environmental tests are undertaken to determine whether it is safe to use at the proposed dosages. Both active ingredients have low mammalian toxicity and good safety profiles [25]. The proprietary combination formula in the non-pyrethroid ITWL has passed an initial environmental examination (IEE) conducted by an independent regulatory agency; hazard quotients for continuous habitation in a residence with non-pyrethroid ITWL were far below the acceptable threshold [25]. Baseline insecticide susceptibility of the local vector populations was documented previously [26]. By 2011 there were signs of pyrethroid resistance, with mortality of 75% for permethrin and deltamethrin [27]. In 2014, mortality of An. gambiae s.l. collected from experimental huts was 74% when exposed to deltamethrin, 51% for lambdacyhalothrin and 81% for permethrin [28]. Also in 2014, An. funestus s.s. were found to be resistant to deltamethrin and alphacypermethrin, with mortality rates of 75 and 60%, respectively [28]. There is currently no resistance to carbamates or organophosphates.
Experimental huts were constructed to a design described by the WHO [29] and based on the original veranda-hut developed in Tanzania in the 1960s [30,31] (Fig. 1). In the modern design the eave gap is reduced to 5 cm, the ceiling board is lined with hessian sack cloth, similar to thatch, and the concrete floor is surrounded by a water-filled moat [31]. In this trial the veranda traps were not used and the verandas were left unscreened. This allowed mosquitoes to enter the huts freely through all four eave spaces, so the impact of partial eave blocking with ITWL (only one eave left open) could be assessed for some treatments. Inwardly directed eave baffles were installed to prevent egress of mosquitoes that had entered the hut; the only exit route was through the window traps [32,33] (Fig. 1). LLINs were deliberately holed with six 4 × 4-cm holes to simulate wear and tear [29]. Experimental hut trial Six experimental huts were used in total. An adult volunteer slept in each hut nightly from 20:30 to 06:00. All were provided with chemoprophylaxis and instructed on its use. The six volunteers were rotated between huts on successive nights to reduce any bias due to differences in individual attractiveness to mosquitoes. ITWL treatments were attached to wall boards using Velcro and rotated between huts after every seventh night (six trial nights and one non-trial night for cleaning and aeration with no treatment) for a total duration of 63 nights (49 trial nights). Mosquito collections were conducted using mouth aspirators between 06:30 and 08:00 each morning by trained field assistants. White sheets were laid on the concrete floor of the room so that dead mosquitoes were more easily visible. Dead and live mosquitoes were collected inside the hut and from the two window traps (not from the verandas, as these were left unscreened). Live mosquitoes were transferred to 150-ml paper cups and provided with 10% glucose solution for scoring delayed mortality after 24, 48 and 72 h at the NIMR laboratory. Gonotrophic status was recorded as unfed, blood-fed, semi-gravid, or gravid. All members of the An. gambiae species complex and the An. funestus species group identified by morphological characteristics were assumed to be An. gambiae s.l. and An. funestus s.s., respectively, based on recent PCR identification [27]. The entomological impact of each treatment was expressed relative to the untreated control in terms of the following: 1. Induced mortality: percentage of dead mosquitoes in the treated hut, at the time of collection and after a 72-h holding period, relative to the control hut; 2. Deterrence: percentage reduction in the number of mosquitoes caught in the treated hut relative to the number caught in the control hut; 3. Induced exiting (repellency) due to any irritant effect of the treatment, expressed as the percentage of mosquitoes collected from the window traps of treated huts relative to the percentage caught in the window traps of the control hut; 4. Inhibition of blood-feeding: reduction in blood-feeding rate relative to the control, calculated as 100 × (Bf_u − Bf_t)/Bf_u, where Bf_u is the proportion of blood-fed mosquitoes in the untreated control hut and Bf_t is the proportion of blood-fed mosquitoes in the huts with insecticide treatments.
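As a minimal sketch, the four outcome measures can be computed from nightly hut counts as follows (Python; the simple percentage forms below are one direct reading of the definitions above, whereas the trial itself used regression models for inference, and the numbers are illustrative):

def induced_mortality(dead_treated, total_treated, dead_control, total_control):
    # Percentage mortality in the treated hut relative to the control hut,
    # shown here as a simple difference of percentages.
    return 100.0 * (dead_treated / total_treated - dead_control / total_control)

def deterrence(entered_treated, entered_control):
    # Percentage reduction in numbers entering the treated hut.
    return 100.0 * (entered_control - entered_treated) / entered_control

def induced_exiting(exited_treated, total_treated, exited_control, total_control):
    # Difference in the percentage of mosquitoes recovered from exit traps.
    return 100.0 * (exited_treated / total_treated - exited_control / total_control)

def blood_feeding_inhibition(bf_treated, total_treated, bf_control, total_control):
    # 100 * (Bf_u - Bf_t) / Bf_u, exactly as defined in the text.
    bf_u = bf_control / total_control
    bf_t = bf_treated / total_treated
    return 100.0 * (bf_u - bf_t) / bf_u

# Example: 15/50 blood-fed in a treated hut vs 30/50 in the control -> 50% inhibition.
print(blood_feeding_inhibition(bf_treated=15, total_treated=50,
                               bf_control=30, total_control=50))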
Supplementary bioassays

WHO cone and cylinder tests

To evaluate the efficacy of non-pyrethroid ITWL, standard WHO cone and cylinder bioassays were performed, based on the WHO protocol for IRS monitoring [29].

Irritability tests

To characterize any excito-repellent properties of the non-pyrethroid ITWL, 50 non-blood-fed, 2- to 5-day-old, insectary-reared, pyrethroid-susceptible An. gambiae Kisumu or F1 generation wild An. gambiae s.l. collected from Mkanyageni were individually introduced into plastic cones containing either a piece of untreated netting, new non-pyrethroid ITWL or Interceptor® LLIN, allowed to settle for 60 s, and the time elapsed between the first landing and the next take-off of the mosquito was recorded, up to 360 s.

Effective exposure time

To determine the minimum exposure time required to kill 100% of mosquitoes exposed to the non-pyrethroid ITWL, WHO cylinder tests were conducted using progeny of wild An. gambiae s.l., which were exposed to new non-pyrethroid ITWL for incremental exposure times.

Data analysis

Data were entered into an Excel database and transferred to Stata 11 for processing and analysis (Stata Corp LP, College Station, TX, USA). For the experimental hut trial, the principal aim was to compare the efficacy of the different treatments relative to the untreated control. The outcomes of interest were the proportions of mosquitoes blood-fed, dead (i.e., total number of mosquitoes dead by morning plus delayed mortality after holding for a total of 72 h) and exiting on successive nights. Logistic regression for grouped data was used to estimate these outcomes, comparing results for treatments and the untreated control, adjusting for clustering by day and for variation between individual sleepers and huts. Negative binomial regression was used to analyse the numbers entering the huts (% deterrence); an illustrative sketch of these models is given below. Bioassay data were summarized using proportions, means and binomial confidence intervals, where applicable.

Results

Mortality results (24 and 72 h) are presented in Fig. 2 for An. funestus s.s. and Fig. 3 for An. gambiae s.l. Results showing deterrence, insecticide-induced exiting and reduction in blood-feeding are presented in Table 1 (An. funestus s.s.) and Table 2 (An. gambiae s.l.).

Anopheles funestus s.s.

Mortality 72 h after mosquito collections was relatively low across all treatments (Fig. 2). Interceptor® is a WHOPES-recommended pyrethroid LLIN but produced only 29% mortality. The non-pyrethroid ITWL when used without LLIN produced 47% mortality (eaves partially blocked), but this was not significantly different from that of the LLIN (29%, P = 0.306), non-pyrethroid ITWL + LLIN (35%, P = 0.385), or pyrethroid ITWL (45%, P = 0.306). Mortality with the non-pyrethroid ITWL increased between 24 and 72 h after mosquito collections, and this delayed mortality accounted for 38% of total mortality. However, there was also an unexpected 32% delayed mortality for the pyrethroid LLIN. Levels of mortality with the non-pyrethroid ITWL were consistently low across the 7-week trial, demonstrating no significant decline in bioefficacy (38, 44 and 35% 72 h mortality of An. funestus s.s. after one to two, two to four and >4 weeks, respectively, for non-pyrethroid ITWL with open eaves). The trial was conducted at the end of the rainy season and the numbers of An. funestus s.s. collected were relatively few (n = 77 in the control). There was no significant difference in the numbers that entered huts with non-pyrethroid ITWL when eaves were partially blocked or open (P = 0.9562) (Table 1).
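As a rough illustration of the grouped logistic and negative binomial models described under Data analysis, the following Python sketch fits comparable models with statsmodels. The data frame, variable names and factor coding are invented for illustration; the trial itself used Stata 11, and sleeper and hut effects are represented here only as simple categorical covariates.

```python
# Illustrative regression models for experimental hut data (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_nights = 49
df = pd.DataFrame({
    "treatment": rng.choice(["control", "llin", "itwl"], n_nights),
    "hut":       rng.choice(["1", "2", "3", "4", "5", "6"], n_nights),
    "sleeper":   rng.choice(["A", "B", "C", "D", "E", "F"], n_nights),
    "caught":    rng.poisson(10, n_nights) + 1,
})
df["dead"] = rng.binomial(df["caught"], 0.3)   # dead by end of 72 h holding
df["alive"] = df["caught"] - df["dead"]

# Design matrix with treatment, hut and sleeper as categorical covariates.
X = sm.add_constant(
    pd.get_dummies(df[["treatment", "hut", "sleeper"]], drop_first=True).astype(float)
)

# Logistic regression for grouped data: (dead, alive) counts per hut-night.
logit = sm.GLM(df[["dead", "alive"]].to_numpy(), X,
               family=sm.families.Binomial()).fit()
print(logit.summary().tables[1])

# Negative binomial regression for numbers entering the huts (deterrence).
nb = sm.GLM(df["caught"], X, family=sm.families.NegativeBinomial()).fit()
print(nb.summary().tables[1])
```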
The only treatment which produced any measurable deterrent effect was the non-pyrethroid ITWL + LLIN, which reduced entry by 48% (P = 0.044). There was also a significant increase in the proportion of An. funestus s.s. that had exited into window traps by morning for the pyrethroid LLIN (P = 0.001) and pyrethroid ITWL (P = 0.001) treatments compared with the untreated hut. The non-pyrethroid ITWL produced no significant increase in exiting (P = 0.082). The LLIN provided added personal protection over the non-pyrethroid ITWL, with only 24% blood-fed compared with 69% (P = 0.001) and 56% (P = 0.001) for the two ITWL treatments (eaves open and partially closed). The blood-feeding rate in the untreated hut was surprisingly low, and significantly less than in the huts with ITWL.

Anopheles gambiae sensu lato

The numbers of An. gambiae s.l. collected over the duration of the trial were few (n = 53 in the control), meaning that the sample size was generally too small to assess differences between treatments (except where the effect was particularly large). Mortality levels 72 h after mosquito collections were uniformly low across all treatments, with the non-pyrethroid ITWL producing 43% mortality (eaves partially closed) compared with just 26% for the LLIN (Fig. 3). The mortality levels were similar to those for An. funestus s.s. and there were no significant differences between treatments. There was a degree of delayed mortality between 24 and 72 h, as observed previously with An. funestus s.s. The LLIN (57%, P = 0.007), non-pyrethroid ITWL + LLIN (59%, P = 0.002) and pyrethroid ITWL (68%, P = 0.002) all resulted in significant deterrence of An. gambiae s.l. entering indoors (Table 2). However, the non-pyrethroid ITWL treatment alone did not produce such an effect (P = 0.093 for eaves partially closed, and P = 0.393 for eaves open). As observed with An. funestus s.s., there was no significant difference in entry between huts with eaves open or partially blocked.

Table 1 Effect of insecticide-induced deterrence, exiting and blood-feeding of Anopheles funestus s.s. in the experimental hut trial. If the superscript in a column is the same, there was no significant difference between treatments (P > 0.05).

WHO cone and cylinder tests

WHO cone and cylinder bioassays with non-pyrethroid ITWL samples from five batch productions, conducted with an exposure time of 30 min, showed no significant difference in mortality between ITWL pieces (95 and 99% mortality after 72 h in cone and cylinder assays, respectively). When replicates were pooled, a direct comparison of cone versus cylinder bioassays demonstrated that overall immediate (after 24 h) and delayed (after 72 h) mortality was significantly lower in cone than in cylinder bioassays (23-49 vs 80-100% mortality at 24 h, and 91-97 vs 99-100% mortality at 72 h for cones vs cylinders, respectively; P = 0.001 for all) for all mosquito strains tested (Fig. 4).

Irritability tests

Time to first take-off in response to non-pyrethroid ITWL exposure was measured among individual An. gambiae Kisumu and F1 wild An. gambiae s.l. mosquitoes in comparison to untreated netting and a pyrethroid LLIN (Interceptor® LLIN). By 6 min of exposure to non-pyrethroid ITWL, 34 and 44% of An. gambiae Kisumu and wild mosquitoes had taken flight, respectively, compared to 78 and 76% in response to Interceptor® LLIN and 20 and 22% in untreated controls (Fig. 5).
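The cone-versus-cylinder mortality contrast above is, in essence, a comparison of two binomial proportions. The sketch below shows one way to run such a comparison in Python; the replicate counts are invented to roughly resemble the reported percentages, and the trial's own analysis may have pooled or adjusted the data differently.

```python
# Two-proportion comparison of cone vs cylinder bioassay mortality
# (illustrative counts only, chosen to resemble the reported percentages).
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

dead = [35, 90]       # mosquitoes dead at 24 h: cones, cylinders
exposed = [100, 100]  # mosquitoes exposed in each assay type

stat, p_value = proportions_ztest(count=dead, nobs=exposed)
print(f"z = {stat:.2f}, P = {p_value:.4f}")

# Binomial (Wilson) confidence intervals, as summarized for bioassay data.
for label, d, n in zip(["cone", "cylinder"], dead, exposed):
    lo, hi = proportion_confint(d, n, alpha=0.05, method="wilson")
    print(f"{label}: {100 * d / n:.0f}% mortality (95% CI {100 * lo:.0f}-{100 * hi:.0f}%)")
```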
No mosquito irritability or excito-repellency was detected in response to non-pyrethroid ITWL exposure, as evidenced by no significant difference in the numbers of mosquitoes taking flight over the 6 min in response to the non-pyrethroid ITWL compared to the untreated control (P = 0.232 for An. gambiae Kisumu and P = 0.425 for wild An. gambiae s.l.).

Effective exposure time

Exposure times of 7.5 and 15 min to new non-pyrethroid ITWL in WHO cylinder tests killed 88 and 100% of F1 wild An. gambiae s.l. mosquitoes, respectively, after a 72 h holding period (Fig. 6). An exposure time of 15 min was sufficient to kill 100% of mosquitoes within 48 h.

Discussion

At present there are no recognized WHO standards for ITWL products. Before progressing to community trials, candidate IRS and LLIN products are evaluated in Phase II experimental huts against an existing gold-standard positive control, and comparative performance is assessed [29, 34]. In this trial the positive control was the WHOPES-recommended pyrethroid LLIN, which produced levels of mortality equivalent to the non-pyrethroid ITWL. However, due to pyrethroid resistance in both of the major vector species, the level of mortality for the LLIN was lower than in previous reports from the same site [28]. In 2006, when the vector species were fully susceptible to pyrethroids, Interceptor® LLIN (alphacypermethrin) killed 92% of An. gambiae s.l. when unwashed [35], while Olyset® LLIN (permethrin) produced 61% mortality for An. gambiae s.l. and 72% for An. funestus s.s. [36]. The mortality rate with Interceptor® LLIN in this trial was considerably lower, at just 26% for An. gambiae s.l. and 29% for An. funestus s.s., and was attributable to the pyrethroid resistance that has since developed at the site [26]. In this study the non-pyrethroid ITWL also elicited relatively low levels of mortality, between 40 and 50% for An. funestus s.s. and An. gambiae s.l. The level of mortality in free-flying mosquitoes remained consistent over the 7 weeks of the trial.

WHO cone and cylinder bioassays exposing laboratory-reared susceptible and wild resistant An. gambiae s.l. to new pieces of non-pyrethroid ITWL for 30 min produced 90-100% mortality, demonstrating the inherent toxicity of the ITWL insecticides, with no evidence of cross-resistance to pyrethroids. In both assays, mosquito mortality remained low at 24 h, reaching the highest levels after 72 h. While new formulations would ideally produce more immediate mortality, a similar phenomenon has been reported for chlorfenapyr, a pyrrole insecticide which has demonstrated promise as an IRS and net treatment [37, 38]; historically, some organochlorines used successfully as IRS (e.g., dieldrin) were also characterized by delayed mortality.

To help interpret the low levels of mortality observed in the main trial, WHO cylinder assays with incremental exposure times were performed to assess the duration of contact time required to kill the field strain (F1 progeny) of An. gambiae s.l. Only 7.5-15 min of contact with the non-pyrethroid ITWL induced 80-100% mortality. In all assays, levels of mosquito mortality were comparable between different rolls of ITWL, excluding differences in manufacturer batch production as a potential confounder in the main trial. Given the demonstration of high mortality in the bioassays, the lower levels in the main trial may be attributable to wild, free-flying mosquitoes spending less time in contact with treated wall surfaces. Shorter resting times are expected when insecticides induce a degree of repellency or irritancy.
To further characterize mosquito behaviour in relation to ITWL exposure, time to first flight was measured for An. gambiae Kisumu and F1 wild An. gambiae s.l. in comparison to untreated netting and a pyrethroid LLIN (Interceptor® LLIN). No significant irritability was observed on exposure to the non-pyrethroid ITWL, unlike the pyrethroid LLIN, which is known to have excito-repellent properties. These results were supported by the low exiting rates of both An. funestus s.s. and An. gambiae s.l. in non-pyrethroid ITWL huts compared to the pyrethroid treatments (Interceptor® LLIN and ZeroVector® ITWL) in the main trial. In the main trial, only treatments containing pyrethroid interventions significantly reduced vector entry. By comparison, the low levels of deterrence in the non-pyrethroid ITWL huts and the supporting irritability bioassays indicate that the ITWL was not influencing mosquito entry.

One possible explanation for the low mortality, exiting and deterrence in the non-pyrethroid ITWL huts is that vectors were instead resting on the hessian sack cloth ceilings, which remained uncovered throughout the trial, and were not contacting the walls for sufficient time to obtain a lethal insecticide dose. During the trial no data were documented on the location of mosquitoes within the room. While few studies have characterized exactly where African malaria mosquitoes reside within experimental huts, a recent trial of pirimiphos-methyl IRS on wooden-panelled walls performed at the same study site also reported unexpectedly low levels of mortality, and mosquitoes were noted to be resting on the sack cloth-lined ceiling (M. Rowland, unpublished data).

Fig. 5 Cumulative percentage of pyrethroid-susceptible Anopheles gambiae Kisumu, and F1 offspring of field-collected Anopheles gambiae s.l., taking off over time following exposure to untreated netting, Interceptor® LLIN or non-pyrethroid ITWL

Earlier experimental hut trials of insecticide-treated wall lining materials have demonstrated that efficacy is strongly correlated with intervention surface area, with increasing coverage affording higher rates of vector mortality, deterrence and blood-feeding inhibition [39, 40]. If in this trial mosquitoes secured refuge on the ceiling above the non-woven polyethylene wall lining material, this could partly explain the low mortality in the huts. During IRS campaigns in Tanzania, ceilings are not usually sprayed, being too high and inaccessible to spray men, and yet such campaigns have had a major effect on An. gambiae population density and malaria transmission rates [6]. It has been suggested that ITWL may also impact malaria transmission by functioning as a method of housing improvement, if used to cover eave gaps, preventing mosquito ingress [25]. However, in the present trial there were no significant differences in vector entry between huts with partially blocked or open eaves. It is possible that host-seeking mosquitoes are able to compensate for a partial restriction of entry points if host odour is concentrated through those that remain. ITWL material is not currently designed to act as an eave seal or to withstand strong winds, and house improvements would better look to other means of restricting access.

Conclusions

PermaNet® ITWL is a new malaria vector control product, containing two non-pyrethroid insecticides, which is designed to function as a long-lasting form of IRS when fixed to the inner walls and ceiling of houses.
An experimental hut trial was performed in Muheza, Tanzania, to evaluate performance during the first 2 months after installation, in comparison with a WHOPES-recommended LLIN (Interceptor®) and a pyrethroid ITWL (ZeroVector®). The level of mosquito mortality was lower than expected: for the pyrethroid LLIN and pyrethroid ITWL this was explained by insecticide resistance; for the non-pyrethroid ITWL it could not be explained by low intrinsic toxicity, and since the ceiling was uncovered, some mosquitoes may have secured refuge on this untreated surface. A series of novel, supplementary bioassays characterized the toxicity and mode of action of the non-pyrethroid ITWL product, and its effect on mosquito behaviour. The findings represent the first 2 months after installation and do not necessarily predict the longer-term residual efficacy of non-pyrethroid ITWL.

Authors' contributions

RM, RMO, LAM, FWM, MWR, and WK designed the study and were responsible for data analysis and interpretation. RM, BB, WS, BE, and SW led the entomology field activities and laboratory assays and participated in data collection. GM and JPM were responsible for oversight, management and delivery of the trial. LAM, RMO and MWR drafted the manuscript, which was revised by the coauthors. All authors read and approved the final manuscript.
Karyotype analysis of Aeluropus species (Poaceae)

Aeluropus, a member of Poaceae subfam. Chloridoideae, includes six species, three of which occur in Iran. They are perennial halophytes of the deserts and coastal marshlands of Iran. The genus is considered a rich genetic source for gene manipulation and for use in crop improvement. Previous studies showed that members of Chloridoideae have small chromosomes and the base chromosome number n = 10. There are few chromosome records for Aeluropus species. Somatic metaphases of seven populations of three Aeluropus species were studied. The first chromosome counts (2n = 20) based on Iranian material for the three species, A. macrostachyus, A. littoralis and A. lagopoides, are concordant with previous records from outside Iran; the mitotic number for A. macrostachyus is recorded here for the first time.

Introduction

Aeluropus Trin. (Chloridoideae Kunth ex Beilschm., Cynodonteae Dumort., Poaceae Barnhart) includes three species in Iran and six in the world (Bor, 1970; Watson et al., 1992; POWO, 2021). They are halophytic elements of central and tropical Asia, Africa, and Europe (Bor, 1970). Aeluropus littoralis (Gouan) Parl., a perennial halophyte, is native to the central desert and coastal marshlands of Iran. This species is considered a rich genetic source for gene manipulation and crop improvement (Modarresi et al., 2012). Aeluropus lagopoides (L.) Trin. ex Thwaites is a salt-secreting, rhizomatous perennial halophyte distributed on the dry and salty soils of Iran. This species survives in harsh environments owing to vigorous seed production, an epicuticular wax layer, salt glands, and small leaves (Mohsenzadeh et al., 2006). Aeluropus macrostachyus Hack. is of potential fodder value in Southwest Asia (Öztürk et al., 2019). Due to the presence of hybrids, subspecies, and ecotypes in the studied taxa, identification of Aeluropus species is somewhat difficult (Abivardi et al., 2010). Variation in species chromosomes provides useful data for biosystematic and breeding studies. In this project, the chromosome counts and karyotype parameters for the Iranian species of Aeluropus are studied for the first time.

Material and methods

In this project, seven accessions of the three Aeluropus species in Iran were collected from nature (Table 1). Vouchers are deposited in the herbarium of Alzahra University (ALUH) and the herbarium of the Research Institute of Forests and Rangelands (TARI). The plants were collected during the years 2007-2017 in their natural habitats (Table 1). For the somatic chromosome study, seeds were germinated on moist filter paper in the laboratory (ca. 21-24 ºC). To vernalize the seeds, they were kept at 4 ºC in a refrigerator (48-72 h) and then transferred to room temperature. Growing root tips ca. 0.7-1.0 cm long were cut, pre-treated in a saturated 0.002 M aqueous solution of 8-hydroxyquinoline at 4 ºC in a refrigerator (2-4 h), and fixed in a cold mixture of ethanol and acetic acid (3 : 1) for 24 h. Root tips were macerated in two ways: 1) cold hydrolysis, using 1N HCl for 3 h at room temperature; 2) hot hydrolysis, using 1N HCl for 6 min in a 60 ºC bath. Temporary slides were made by squashing the segments and staining in 2% aceto-orcein for 30-45 min. Good metaphase plates were photographed with an Olympus microscope equipped with a DP12 digital camera and measured using IdeoKar software ver. 1.2. Chromosomes were identified based on Levan et al. (1964).
For karyotype symmetry, the coefficient of variation of chromosome length (CVCL) following Paszko (2006) and the mean centromeric asymmetry (MCA) following Peruzzi and Eroğlu (2013) were determined. The A2 index of Zarco (1986) and the coefficient of variation of chromosome size (CV) were also calculated.

Results

Chromosome counting in seven accessions of the different Aeluropus species in Iran showed only one ploidy level (2n = 20). The best time to catch the highest number of somatic metaphases at root tips was 10 a.m. to 1 p.m. in the species studied. A summary of the karyotype features is shown in Table 2. The size of the shortest chromosome varied from 0.54 to 0.91 µm, while the size of the longest varied from 1.19 to 1.67 µm. The ranges of total complement length for each species are given in Table 2 (Figs. 1-3).

Discussion

Aeluropus and Odyssea Stapf are the members of subtribe Aeluropodinae P. M. Peterson, of Eurasian and African origin (Soreng et al., 2015, 2017). Aeluropus is a small genus with three species in Iran. They are mainly found as members of halophytic vegetation. These species have very small chromosomes and the basic chromosome number x = 10. The most common basic chromosome numbers in Poaceae are x = 7, 9, 10, 12 (Stebbins, 1982). It is assumed that the original basic chromosome number for Poaceae species is x = 7, and that the larger numbers are its derivatives. Another assumption is that the ancestral genome had x = 5, which varied by duplications and inter-chromosomal translocations to produce an intermediate ancestral genome with x = 12 (Shchapova, 2012). The basic chromosome number is constant in some tribes of Poaceae. The two main and most frequent basic chromosome numbers are x = 7 (29.5 %) and x = 10 (31.9 %). Avdulov (1931) considered x = 12 as the ancestral basic chromosome number, giving rise to other basic chromosome numbers by reduction due to aneuploidy (Hilu, 2004), but others, such as Stebbins (1982), considered x = 6 and 7 as ancestral in primitive genera, and this discussion is still continuing (Hilu, 2004; Shchapova, 2012). Chloridoideae has x = 9 and 10 as the common basic numbers (Hilu, 2004); x = 9 is found in 86 % of the species and x = 10 in 13 % (de Wet, 1987). The basic chromosome number x = 5 has not been reported for this subfamily. Hilu and Alice (2001) believed that x = 10 is a plesiomorphic character in Chloridoideae. It is evident that aneuploid reduction from x = 12 to x = 10 and x = 9 in some genera appeared in the early stages of the evolutionary history of Chloridoideae (Hilu, 2004). Variation in basic chromosome number, the frequency of polyploidy, and hybridization are common in the grass family, but there is a kind of homogeneity in the basic chromosome number of Aeluropus. In this study, all the species were found to be diploid. Two species, A. littoralis and A. lagopoides, showed the mitotic number 2n = 20, in agreement with previous results (Murin, Chaudhri, 1970; Nagabhushana, 1980; Tarnavschi, Lungeanu, 1982; Kožuharov, Petrova, 1991). In A. macrostachyus, 2n = 20 was also found, which is the first record of the mitotic number for this species. Previously, Khatoon and Ali (1993) reported the meiotic number x = 10 + 1B for A. macrostachyus. The inter-chromosome asymmetry index varied from 0.20 to 0.26, indicating similar chromosome lengths among the taxa. Based on this index, A. littoralis had more variation in chromosome length. Aeluropus littoralis and A. lagopoides showed more similarities in karyotype characters.
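For readers unfamiliar with these karyotype asymmetry indices, the sketch below computes them from measured long- and short-arm lengths. The arm lengths are invented example values; the formulas follow the published definitions (CVCL = 100 × SD/mean of chromosome lengths; MCA = mean of 100 × (L − S)/(L + S) per chromosome; A2 = SD/mean), and this is an illustration rather than the IdeoKar implementation.

```python
# Karyotype asymmetry indices from long (L) and short (S) arm lengths (µm).
# Arm lengths below are invented example values for a haploid set of n = 10.
import numpy as np

long_arms  = np.array([0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.5, 0.45])
short_arms = np.array([0.7, 0.65, 0.6, 0.6, 0.55, 0.5, 0.45, 0.4, 0.35, 0.3])

lengths = long_arms + short_arms

# Inter-chromosomal asymmetry: variation in chromosome length.
a2 = lengths.std(ddof=1) / lengths.mean()      # A2 index (Zarco, 1986)
cv_cl = 100 * a2                               # CVCL (Paszko, 2006)

# Intra-chromosomal asymmetry: mean centromeric asymmetry.
ca = 100 * (long_arms - short_arms) / lengths  # per-chromosome CA
m_ca = ca.mean()                               # MCA (Peruzzi & Eroglu, 2013)

# Arm-ratio classification after Levan et al. (1964): m <= 1.7 < sm <= 3.0.
r = long_arms / short_arms
types = np.where(r <= 1.7, "m", np.where(r <= 3.0, "sm", "st/t"))

print(f"A2 = {a2:.2f}, CVCL = {cv_cl:.1f}, MCA = {m_ca:.1f}")
print("chromosome types:", list(types))
```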
The close relationship of these taxa has also been confirmed in morphological and anatomical studies (Abivardi et al., 2010). The chromosomes are of the metacentric (m) and submetacentric (sm) types. The preponderance of metacentric chromosomes in the cytotypes studied suggests that the karyotypes of these species tend to be stable. Aeluropus macrostachyus, with Stebbins's 3A symmetry class, had more symmetrical chromosomes than the two other species. Among the species studied, A. macrostachyus had the lowest inter-chromosomal (CVCL) and intra-chromosomal (MCA) asymmetry, while A. littoralis had the highest values of these parameters. For a better understanding of the variation in this genus, a cytogenetic study considering the behaviour of the chromosomes during meiosis is suggested. There are some hybrid populations of Aeluropus species in Iran (Abivardi et al., 2010). Studying the chromosome counts of these populations is highly recommended.
The role of the transcription factor Sp1 in regulating the expression of the WAF1/CIP1 gene in U937 leukemic cells.

The Waf1/Cip1 protein induces cell cycle arrest through inhibition of the activity of cyclin-dependent kinases and proliferating cell nuclear antigen. Expression of the WAF1/CIP1 gene is induced in a p53-dependent manner in response to DNA damage but can also be induced in the absence of p53 by agents such as growth factors, phorbol esters, and okadaic acid. WAF1/CIP1 expression in U937 human leukemic cells is induced both by phorbol ester, a protein kinase C activator, and by okadaic acid, an inhibitor of phosphatases 1 and 2A. Both of these agents induce the differentiation of these leukemic cells toward macrophages. We demonstrate that phorbol esters and okadaic acid stimulate transcription from the WAF1/CIP1 promoter in U937 cells. This transcription is mediated by a region of the promoter between −154 and +16, which contains two binding sites for the transcription factor Sp1. Deletion or mutation of these Sp1 sites reduces the WAF1/CIP1 promoter response to phorbol ester and okadaic acid, while a reporter gene under the control of a promoter containing only multiple Sp1 binding sites and a TATA box is induced by phorbol ester and okadaic acid. The WAF1/CIP1 promoter is also highly induced by exogenous Sp1 in the Sp1-deficient Drosophila Schneider SL2 cell line. These results suggest that phorbol ester and okadaic acid activate transcription of the WAF1/CIP1 promoter through a complex of proteins that includes Sp1 and basal transcription factors.

Treatment of the human myeloid leukemic cell line U937 with phorbol esters such as phorbol myristate acetate (PMA), an activator of protein kinase C, leads to macrophage/monocyte-like differentiation over a 72-h period (1, 2). This process involves changes in cell-substrate adherence, growth arrest in late G1, and increased expression of monocyte markers (3, 4).
Similarly, treatment of U937 cells with okadaic acid, a natural product isolated from the black sponge and a potent inhibitor of protein phosphatases 1 and 2A, also induces differentiation of these cells (5), cell cycle arrest, and eventual (72-h) apoptosis (6). Both PMA and okadaic acid induce expression of the cyclin-dependent kinase inhibitor WAF1/CIP1 (7, 8). WAF1/CIP1 expression is induced by the p53 protein following irradiation of cells (9, 10), but p53-independent expression of WAF1/CIP1 is associated with differentiation of myocytes (11, 12), of HL 60 leukemic cells (13), and of a number of other cells. WAF1/CIP1 is expressed in a number of tissues over the course of murine development, and expression in most tissues is not dependent on the presence of p53 (14). p53-independent expression of the WAF1/CIP1 gene can be induced in cultured cells by a number of agents including, besides PMA and okadaic acid, platelet-derived growth factor, fibroblast growth factor, and transforming growth factor β (15, 16). Preliminary analysis of the WAF1/CIP1 promoter suggests that the elements mediating response to serum in fibroblasts are located at least 1.9 kb upstream from the transcription start site (14), while responsiveness to transforming growth factor β is mediated by elements located somewhere in the promoter sequences 1.3 kb upstream of the transcription start site (17). We now report that two sites that bind the transcription factor Sp1, located approximately 115 and 65 base pairs upstream from the transcription start site of the WAF1/CIP1 gene, are necessary for normal levels of basal, PMA-induced, and okadaic acid-induced transcription. These sites are also necessary for induction of the WAF1/CIP1 promoter by exogenous Sp1 in the Sp1-deficient Drosophila Schneider SL2 cell line. Finally, our observation that a reporter plasmid containing multiple Sp1 binding sites and a TATA box shows transcriptional induction in response to PMA and okadaic acid in U937 cells suggests that the activity of Sp1 is sufficient for induced transcription of the WAF1/CIP1 gene.

MATERIALS AND METHODS

Cell Culture and Conditions: U937 human myeloid leukemic cells obtained from ATCC (Rockville, MD) and Dr. D. Kirkways (East Carolina University, Greenville, NC) were passaged in Dulbecco's modified Eagle's medium (Life Technologies, Inc.) supplemented with 10% heat-inactivated bovine calf serum (Life Technologies, Inc.) and antibiotics at 37 °C in 5% CO2. Drosophila Schneider SL2 cells were grown at room temperature in Schneider's Drosophila medium (Life Technologies, Inc.) supplemented with 20% fetal calf serum.

RNA Isolation and Northern Blot Analysis: Total cellular RNA was extracted by the guanidinium thiocyanate/CsCl ultracentrifugation method. RNA was separated on 1.2% agarose gels containing formaldehyde and transferred to a membrane. To detect specific transcripts, 32P-labeled cDNA probes prepared by random priming were hybridized to the membranes. The probes used were a 2.1-kb fragment containing an approximately full-length WAF1/CIP1 cDNA and a 1.5-kb tubulin cDNA.

Plasmids: The CAT reporter plasmid containing 2.3 kb of WAF1/CIP1 promoter sequence was a gift from T. Waldman and B. Vogelstein (Johns Hopkins University, Baltimore, MD). Smaller regions of the WAF1/CIP1 promoter were obtained by PCR amplification using the larger promoter construct as template and primers containing the desired 5′ and 3′ promoter sequences. Mutations introduced into 5′ primers were incorporated into the subsequent constructs.
PCR products were first cloned into the pCRII vector (Invitrogen) and then moved into the CAT reporter plasmid pJFCAT1 modified by excision of the SV40 trimer cassette (18). The −122/−61 deletion construct was generated by excising sequences between SmaI sites at −125 and −63 from the construct containing promoter sequences from −154 to +16. 2.2 kb of upstream WAF1/CIP1 promoter sequence was inserted into this −122/−61 deletion construct using an ApaI site at −129 and a vector HindIII site at the 5′ end of the promoter sequence. The −81/−62 deletion was made by cloning the −154/+16 fragment of the promoter into the single-stranded vector M13 mp18 and then using a primer containing sequences from either end of the desired deletion for site-directed mutagenesis. To construct the Sp1-dependent luciferase reporter plasmid pGAGC6, oligonucleotides containing consensus Sp1 binding sites flanked by BamHI and BglII restriction sites were synthesized. A multimer of this oligonucleotide containing six Sp1 binding sites was obtained by BamHI/BglII digestion followed by ligation. The DNA fragment containing the Sp1 multimer was then cloned into a luciferase reporter plasmid upstream of the adenovirus major late initiator TATA box, which had been previously cloned into the luciferase plasmid. The plasmid pGAM, containing only the adenovirus TATA box, was used as a control.

Transfection and Reporter Gene Assays: U937 cells were transfected by electroporation, and CAT assays were performed as described previously (19). 24 µg of WAF1/CIP1 promoter construct and 1 µg of cytomegalovirus/β-galactosidase plasmid as a transfection control were used per 10 million cells. After electroporation, cells were divided into three culture dishes and allowed to recover for 6 h in medium containing 10% bovine calf serum. One dish was then treated with 200 nM PMA and one dish with 100 nM okadaic acid. 24 h later cells were harvested and subjected to freeze-thaw lysis, and 75 µg of total cell protein was used for CAT assays. For SL2 cells, 5 million cells in 60-mm dishes containing 1 ml of plain medium were transfected with 4 µg of reporter plasmid using Lipofectin. After 6 h, the Lipofectin was removed and replaced with medium containing 20% fetal bovine serum. For luciferase assays, cells were transfected as above, then lysed in 125 mM Tris, pH 7.8, 10 mM dithiothreitol, 10 mM trans-1,2-diaminocyclohexane-N,N,N′,N′-tetraacetic acid, 50% glycerol, and 5% Triton X-100. 125 µg of total cell protein in 100 µl of lysis buffer was mixed with 100 µl of luciferin buffer (20 mM tricine, 1.07 mM magnesium carbonate, 2.67 mM MgSO4, 0.1 mM EDTA, 33.3 mM dithiothreitol, 530 µM ATP, 270 µM coenzyme A, and 470 µM luciferin, pH 7.8), and light intensity was measured.

Gel Mobility Shift: For gel mobility shift experiments, oligonucleotides containing the desired promoter sequences were synthesized, annealed, and labeled using T4 polynucleotide kinase. After polyacrylamide gel purification, the labeled oligonucleotides were incubated with 5 µg of U937 nuclear extract in a buffer containing 70 mM KCl, 20 mM Hepes, pH 7.6, 1 mM dithiothreitol, 0.1 mM EDTA, 0.01 mM ZnCl2, 60 µg/ml poly(dI-dC), and 30 µg/ml bovine serum albumin at 4 °C for 30 min. Nuclear extracts were made as described (20) in a buffer containing 25% glycerol. For experiments using antibodies, 1 µl of anti-Sp1 polyclonal antiserum (21) or preimmune serum was added to the protein extract plus buffer, and a 15-min preincubation on ice was carried out before the addition of radiolabeled oligonucleotide.
FIG. 1. Accumulation of WAF1 mRNA in response to treatment of U937 cells with PMA or okadaic acid. U937 cells were treated with 100 nM PMA, 100 nM okadaic acid, or 500 nM okadaic acid as indicated above the lanes. At the indicated times, cells were collected and used to prepare RNA for Northern blot analysis. 20 µg of total RNA was loaded in each lane, and filters were first probed with radiolabeled WAF1/CIP1 cDNA and then stripped and reprobed with tubulin cDNA as a loading control.

FIG. 2. Promoter sequences between −122 and +16 mediate phorbol ester and okadaic acid induction of the WAF1/CIP1 promoter. A, diagrammatic representation of CAT constructs containing between 2.3 kilobases and 170 bases of WAF1/CIP1 promoter sequence. All constructs shown have a 3′ terminus at +16, where +1 is the transcription start site. B, constructs were electroporated into U937 cells; one-third were treated with 200 nM PMA, one-third were treated with 100 nM okadaic acid, and one-third were untreated controls, as indicated below the panels. 24 h later CAT activity was assayed; for each construct, one assay is shown, and the average induction in response to PMA or OK, calculated from three independent transfection experiments, is listed below each lane. All transfections included the cytomegalovirus/β-galactosidase plasmid; cell extracts were assayed for galactosidase activity to ensure equal transfection efficiency (galactosidase assays not shown). C, U937 cells were cotransfected with the indicated WAF1/CIP1 promoter constructs and vectors expressing either wild-type p53 (WT) or mutant p53 (Mut). CAT activity was assayed 24 h after transfection.

For competition experiments, unlabeled competitor oligonucleotides were also added to protein extracts for a 15-min preincubation. The sequence of the control oligonucleotide lacking an Sp1 site used in Fig. 6 is 5′-AGCCAGGAGCCTGGGCCCCGGGGAG.

RESULTS

Treatment of U937 cells with either PMA or okadaic acid induces accumulation of WAF1/CIP1 mRNA (Fig. 1). Treatment with 100 nM PMA results in maximum levels of RNA by 2-4 h, while cells treated with 100 nM okadaic acid do not accumulate maximal levels of WAF1/CIP1 mRNA until 8 h (Fig. 1). This difference in the rate of induction between the two agents reflects the concentration of activator employed, since a higher concentration of okadaic acid, 500 nM, results in increases in WAF1/CIP1 mRNA levels at earlier time points (Fig. 1). To determine whether PMA and okadaic acid stimulate transcription from the WAF1/CIP1 promoter, constructs containing varying lengths of the WAF1/CIP1 promoter in front of a CAT reporter gene were transfected into U937 cells. Each set of transfected cells was split into equal aliquots, which were treated with PMA or okadaic acid or left untreated as controls. The smallest construct, containing 170 base pairs of promoter sequence (from 154 bases upstream of the transcription start site at +1 to +16), was fully inducible by both PMA and okadaic acid (Fig. 2). When compared with the −2320/+16 construct (Fig. 2), the −154/+16 construct was more strongly induced by PMA (19.5- versus 7.3-fold) and okadaic acid (16.1- versus 9.8-fold), suggesting that upstream elements that repress transcription may be found between −2320 and −154. A series of intermediate constructs displayed a gradual increase in response as promoter sequence was deleted from −2320 to −154, but all constructs were induced by PMA and okadaic acid.²
The two p53 binding sites identified at positions −2.3 kb and −1.4 kb do not play any role in the induction by these two agents, since they are deleted in the smaller constructs without any effect on transcription. Further analysis of promoter sequences downstream from −154 revealed that a deletion of promoter sequences between −122 and −61 eliminated both basal and inducible promoter activity (see Fig. 3). When the −122/−61 deletion is introduced into a construct containing 2.3 kb of upstream WAF1/CIP1 promoter sequence, induction of transcription in response to PMA or okadaic acid is lost (Fig. 2), demonstrating that these sequences are important for inducible transcription and cannot be replaced by upstream elements. These elements are also important for induction in response to p53. Although U937 cells contain no wild-type p53, cotransfection of a wild-type p53 expression vector induces transcription from the undeleted 2.3-kb WAF1/CIP1 promoter (Fig. 2). The −122/−61 deletion, however, abolishes induction in response to p53. Similar results were obtained when the two promoter constructs were compared in the GM glioma cell line, which contains a wild-type p53 gene under the control of a steroid-inducible promoter (data not shown). These results indicate that WAF1/CIP1 promoter elements within the −122/−61 region are necessary for both p53-dependent and p53-independent induction.

The WAF1/CIP1 promoter sequence between −122 and −61 does not contain consensus binding sites for factors such as AP-1, Egr-1, or NF-κB, which are known to activate other genes in response to phorbol esters or okadaic acid (22, 23). However, the region does contain several consensus binding sites for the transcription factor Sp1 (7). DNase I footprint analysis of this region indicated that at least two Sp1 consensus binding sites, centered around −115 and −67, fell within protected regions.² PMA treatment of cells did not cause any noticeable change in the footprints, suggesting that the binding of factors to this region of the WAF1/CIP1 promoter is not enhanced by PMA. To verify that these potential Sp1 binding sites were important for induction of transcription in response to PMA and okadaic acid, a series of CAT constructs were generated with deletions or mutations of the Sp1 consensus sequences found in the −122/−61 region of the WAF1/CIP1 promoter (diagrammed in Figs. 3 and 4).

² J. Biggs, unpublished observations.

FIG. 3. Regions containing Sp1 consensus binding sites mediate promoter response to PMA and okadaic acid. A diagram of CAT constructs with WAF1/CIP1 promoter sequence deleted between −154 and −61 is shown in the upper part of the figure. The lower part shows CAT assays from U937 cells transfected with the WAF1/CIP1 promoter construct indicated above each panel and then split into three aliquots and treated with 200 nM PMA, treated with 100 nM OK, or left untreated (−) as indicated beneath the panels. The basal transcription and fold activation numbers shown below are the averages of three independent transfection experiments. All experiments included one transfection with the −154/+16 WAF1/CIP1 promoter construct as a standard for basal activity. Fold activation for each construct was calculated based on the basal activity of that particular construct.

FIG. 4. A consensus Sp1 binding site between bases −117 and −112 is necessary for WAF1/CIP1 promoter activation in response to PMA or okadaic acid. Mutated PCR primers were used to introduce base pair changes to the wild-type WAF1/CIP1 promoter sequence, which are indicated by underlining.
Consensus Sp1 binding sites are shown above the sequence. U937 cells were transfected as described in Fig. 3; the numbers represent an average of three independent transfections and were calculated as in Fig. 3.

These constructs were transfected into U937 cells and analyzed for basal activity and for transcriptional response to PMA and okadaic acid (Figs. 3 and 4). As mentioned above, deletion of the −122/−61 sequence, which includes several Sp1 consensus binding sites, markedly decreased both basal and induced transcription. All transcription experiments included one transfection using the −154/+16 promoter construct; the basal transcription of this construct was set at 1.0, and basal transcription of all other constructs was normalized to this value. For each construct, the percentage conversion of chloramphenicol to the acetylated form in PMA- or okadaic acid-treated cells was divided by the percentage conversion in untreated cells to obtain values for induced transcription. Deletion of sequences between −131 and −117, immediately 5′ to the upstream Sp1 consensus binding site, decreased PMA induction by a small amount and decreased okadaic acid induction approximately 50%. A further deletion of sequences from −117 to −100, which eliminates the Sp1 binding sites entirely, markedly decreased basal transcription and induction in response to PMA and okadaic acid. In comparison, deletion of the downstream region between −81 and −62 knocked out the okadaic acid response while having little effect on the PMA response. PMA induction appears to require only the upstream element, while okadaic acid induction requires both the upstream and downstream elements. To more precisely examine the role of the upstream Sp1 consensus site in mediating WAF1/CIP1 transcription, mutations were introduced into the −131/+16 promoter construct (Fig. 4). Mutation of three bases in the first Sp1 site decreased both basal and induced transcription by PMA and okadaic acid; the reduction in activity was approximately the same as that observed when the region was deleted. This result confirms that the Sp1 binding site is necessary for induction. Mutation of bases outside the Sp1 consensus sequence had little effect on promoter activity (Fig. 4).

To test for binding of Sp1 (or other factors) at these sites, double-stranded oligonucleotides containing WAF1/CIP1 promoter sequence from −128 to −99 or from −86 to −57 were used for gel mobility shift experiments with nuclear extracts from U937 cells (Fig. 5). Both oligonucleotides bind a set of three proteins or protein complexes which closely resemble the set of proteins previously observed to bind to both Sp1 sites and retinoblastoma control elements (24, 25). These proteins are usually designated 1A, 1B, and 2 (see Fig. 5); 1A has been identified as the Sp1 gene product based on interactions with anti-Sp1 antibodies (25).

FIG. 5. Sp1 and related proteins bind to the WAF1/CIP1 promoter at the sites protected from DNase I digestion. Gel mobility shift experiments were performed using either radiolabeled −128/−99 oligonucleotide or radiolabeled −86/−57 oligonucleotide, containing the sequences from the WAF1/CIP1 promoter protected from DNase I digestion and necessary for PMA/okadaic acid induction, as shown in Figs. 3 and 4. Oligonucleotides were incubated with U937 cell nuclear extract and preimmune serum or anti-Sp1 serum (first four lanes on the left).
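The normalization just described (basal activity scaled to the −154/+16 construct, fold induction as treated over untreated conversion) is simple arithmetic, illustrated in the sketch below. The percentage-conversion values and function names are invented for illustration and are not data from the paper.

```python
# Illustrative CAT-assay normalization (invented percentage conversions).

def normalized_basal(pct_untreated: float, pct_reference: float) -> float:
    """Basal activity relative to the -154/+16 reference construct (set to 1.0)."""
    return pct_untreated / pct_reference

def fold_induction(pct_treated: float, pct_untreated: float) -> float:
    """Induced transcription: treated conversion divided by untreated conversion."""
    return pct_treated / pct_untreated

reference_untreated = 5.0  # % chloramphenicol conversion, -154/+16 construct
constructs = {
    # name: (% conversion untreated, % conversion PMA-treated)
    "-154/+16": (5.0, 97.5),
    "-131/+16": (4.5, 80.0),
    "-117/+16": (1.0, 3.5),
}
for name, (untreated, pma) in constructs.items():
    print(f"{name}: basal {normalized_basal(untreated, reference_untreated):.2f}, "
          f"PMA induction {fold_induction(pma, untreated):.1f}-fold")
```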
Nuclear extracts were also incubated with radiolabeled −128/−99 WAF1/CIP1 promoter oligonucleotide and with cold competitor oligonucleotides (right). The competitor oligonucleotides used are indicated above the lanes: the −128/−99 oligonucleotide itself, a WAF1/CIP1 promoter oligonucleotide containing no Sp1 sites, and an oligonucleotide containing the SV40 Sp1 binding site. The set of proteins that bind to the Sp1 consensus sites are commonly designated 1A, 1B, and 2, as indicated on the left.

The other bands are postulated to be Sp1-related proteins (26). As shown in Fig. 5, preincubation of U937 nuclear extract with anti-Sp1 antibodies (21) disrupts binding of the 1A protein to both WAF1/CIP1 promoter oligonucleotides. All three proteins (1A, 1B, and 2) can also be competed off the WAF1/CIP1 −128/−99 promoter oligonucleotide with excess unlabeled −128/−99 oligonucleotide or an oligonucleotide containing the SV40 Sp1 binding site (Promega), but not by an oligonucleotide that does not contain an Sp1 consensus sequence (see "Materials and Methods" for the sequence), suggesting that the proteins that bind to both footprinted regions are Sp1 or Sp1-related factors.

To verify that Sp1 activates transcription of the WAF1/CIP1 promoter, WAF1/CIP1 constructs were transfected into the Sp1-deficient Drosophila Schneider SL2 cell line either in the presence or in the absence of the Sp1 expression vector pPacSp1 (27). In the Sp1-deficient Drosophila cells, WAF1/CIP1 constructs containing sequence from −117 to +16 are highly induced in response to exogenous Sp1 expression (Fig. 6). Deletion of either the upstream or the downstream Sp1 binding sites individually has a partial effect on the level of expression in response to Sp1, but deletion of both sites results in a much greater reduction in promoter activity induced by cotransfecting the Sp1 expression vector (Fig. 6). This result is similar to the pattern observed in U937 cells for response to PMA or okadaic acid, although deletion of individual sites can have a more severe effect on response to PMA or okadaic acid in the U937 cells. Sp1 has been shown to interact with the TATA box binding protein (TBP)-associated protein TAF110 and, in collaboration with another TBP-associated protein, TAF250, to activate transcription (28, 29). If Sp1 and the complex of TBP-associated proteins are sufficient for induction of transcription in response to PMA or okadaic acid, it is predicted that transcription of a reporter plasmid containing only multiple Sp1 binding sites and a TATA box would be stimulated by PMA or okadaic acid treatment of U937 cells. To evaluate this possibility, the plasmid pGAGC6, containing six Sp1 binding sites and an adenovirus TATA box, was transfected into U937 cells, and the cells were treated with PMA or okadaic acid. Both PMA and okadaic acid induced transcription from this vector, while a control vector not containing Sp1 sites was not induced (Fig. 7).

DISCUSSION

The association of WAF1/CIP1 expression with differentiation in many types of cultured cells suggests that p53-independent induction of WAF1/CIP1 may have a role in cell differentiation in vivo; the pattern of WAF1/CIP1 expression during mouse embryogenesis also correlates with terminal differentiation of skeletal muscle, cartilage, skin, and nasal epithelium (12).
Identification of the WAF1/CIP1 promoter elements that mediate p53-independent induction of transcription should help to identify the signal transduction pathways that stimulate expression of WAF1/CIP1 during cell differentiation. Since WAF1/CIP1 expression is stimulated by a number of agents, including platelet-derived growth factor, fibroblast growth factor, interleukin 2, transforming growth factor β, phorbol esters, okadaic acid, retinoic acid, and vitamin D3 (15, 30), it is possible that a number of elements in the WAF1/CIP1 promoter function together to precisely regulate the level of WAF1/CIP1 expression. Such multiple signals might be necessary to generate sufficient WAF1/CIP1 expression for growth inhibition, since there is some evidence that the ratio of WAF1/CIP1 protein to target molecules, such as cyclin-dependent kinases, determines its effect (31). This idea is supported by the fact that the WAF1/CIP1 promoter element responsible for induction in response to serum in fibroblasts appears to be located between −1,817 and −4,699 bases upstream of the transcription start site (14), while induction in response to transforming growth factor β in SW480 cells is mediated by elements between −61 base pairs and −1.1 kb upstream (17). These results suggest that there are at least two p53-independent pathways for induction of WAF1/CIP1 transcription.

We now report that induction of the WAF1/CIP1 promoter in U937 leukemic cells by PMA and okadaic acid involves Sp1 binding sites located approximately 115 and 80 bases upstream of the transcription start site; loss of these sites results in a lack of response to PMA and okadaic acid, as well as loss of WAF1/CIP1 promoter induction in response to exogenous Sp1 in the Sp1-deficient Drosophila Schneider SL2 cell line. The upstream (−115) Sp1 site is vital for the full response of the promoter to PMA in U937 cells, but all potential Sp1 binding sites between −122 and −61 must be deleted to abolish induction by exogenous Sp1 in the SL2 cells. The requirements for utilization of an Sp1 binding site in U937 cells may be more stringent due to the presence of Sp1-related proteins (26) or Sp1-inhibitory proteins (25) that are not present in the Drosophila SL2 cells. The fact that the reporter plasmid pGAGC6 (containing multiple Sp1 binding sites upstream of a TATA box) is induced by PMA and okadaic acid in U937 cells suggests that Sp1 may be sufficient for increased transcription of the WAF1/CIP1 promoter in response to PMA or okadaic acid treatment.

The Sp1 transcription factor is found in glycosylated and phosphorylated forms, but little is known about how these modifications affect function (32). Interactions between Sp1 and the retinoblastoma protein have also been reported; a 20-kDa inhibitor of Sp1 (Sp1-I) was identified that also bound to Rb, and it was proposed that Rb binds and inactivates Sp1-I, leading to transcriptional activation by Sp1 (24). A 74-kDa protein that binds to the transactivation domain of Sp1 and inhibits Sp1-mediated transactivation has also been identified (33). These reports suggest that there are multiple inhibitors of Sp1, which could inhibit the interaction of Sp1 with the transcription factor IID complex (28). It is also possible that proteins of the transcription factor IID complex are modified in response to PMA, or that the composition of the transcription factor IID complex is altered by gain or loss of TBP-associated factors (TAFs).

FIG. 7. Transcription of a reporter plasmid containing six Sp1 binding sites is induced by PMA and okadaic acid in U937 cells.
Plasmid pGAGC6, containing six Sp1 binding sites and an adenovirus TATA box upstream of a luciferase reporter gene, was transfected into U937 cells. Cells were treated with PMA or okadaic acid as in Fig. 2; cell extracts were then assayed for luciferase activity. Fold induction is an average calculated from at least four independent transfection experiments. A vector without Sp1 sites (pGAM) was used as a control.

Recent studies have shown that some TAFs may serve as coactivators to mediate transcriptional regulation (34). Signal transduction pathways that activate or inactivate such Sp1 inhibitors may serve to regulate Sp1 transcriptional activation and may therefore regulate WAF1/CIP1 and other genes involved in cell differentiation. A number of signal transduction pathways composed of kinase cascades have been described in the literature (35). However, a variety of signals that activate either the mitogen-activated protein kinase pathway, the stress-activated protein kinase pathway, or the p38 kinase pathway fail to induce WAF1/CIP1 transcription in U937 cells. These signals include UV irradiation, activated mitogen-activated protein kinase kinase, and osmotic shock.² This suggests that the signals that induce WAF1/CIP1 transcription via Sp1 may be part of an as yet unidentified pathway.
An Adaptive Authenticated Model for Big Data Stream SAVI in SDN-Based Data Center Networks

With the rapid development of data-driven and bandwidth-intensive applications on the Software Defined Networking (SDN) northbound interface, big data streams are dynamically generated at high growth rates in SDN-based data center networks. However, a significant issue faced in big data stream communication is how to verify its authenticity in an untrusted environment. Big data stream traffic has the characteristics of security sensitivity, data size randomness, and latency sensitivity, putting high strain on the SDN-based communication system during larger spoofing events. In addition, the SDN controller may be overloaded under big data stream verification conditions on account of the fast increase of bandwidth-intensive applications and quick response requirements. To solve these problems, we propose a two-phase adaptive authenticated model (TAAM) by introducing Source Address Validation Implementation (SAVI)-based IP source address verification. The model realizes real-time data stream address validation and dynamically reduces the redundant verification process. A traffic-adaptive SAVI that utilizes a robust localization method followed by the Sequential Probability Ratio Test (SPRT) has been proposed to ensure differentiated execution of big data stream packet forwarding and spoofing packet discarding. The TAAM model can filter out unmatched packets with better packet forwarding efficiency and fundamental security characteristics. The experimental results demonstrate that spoofing attacks under big data streams can be directly mitigated by it. Compared with the latest methods, TAAM can achieve desirable network performance in terms of transmission quality, security guarantee, and response time. It drops 97% of spoofing attack packets while consuming only 9% of controller CPU utilization on average.

Introduction

Big data streams have the characteristics of being security-sensitive, having data size randomness, and being latency-sensitive [1]. Current data-driven and bandwidth-intensive applications in data center networks [2, 3] are becoming increasingly complex. SDN simplifies application management by utilizing various policy-based controls over SDN-enabled access layer switches. With the rapid development of applications on the northbound interface of SDN [4], big data streams are dynamically generated at high growth rates in SDN-based data center networks. SDN-based policies are enforced through flow tables, specified by various flow entries and match fields [5]. However, the cloud servers in SDN-based data center networks may be attacked and return incorrect flow table query results. Spoofed source addresses can be used to prevent tracking, attack flow tables, and circumvent security checks [6, 7]. In addition, the recent growth of big data streams, which transfer huge quantities of data between thousands of servers [8, 9], makes spoofed address verification more complicated. Thus, how to ensure the integrity of big data streams without affecting normal communications has taken a crucial role when performing policy-based communications in SDN-based data center networks [10]. The Internet Engineering Task Force (IETF) SAVI working group has standardized access layer source address validation mechanisms. They designed a binding-validation model to prevent IP spoofing and policy confusion.
Specifically, the SDN controller maintains the source address binding table in the binding-validation model centrally [11]. With this feature, SAVI is widely adopted to validate big data streams in SDN-based data center networks. However, big data stream traffic may put high strain on the SDN-based communication system during larger spoofing events. Such unique characteristics raise new drawbacks and room for improvement in the implementation of big data stream SAVI:

(1) The data stream size generated by data-driven and bandwidth-intensive applications is unpredictable. It is significant to determine the authenticated flow size in the big data stream in case the SAVI devices work separately and statically. Otherwise, the authentication performance will be inferior under big data streams in SDN-based data center networks.

(2) Big data streams are security-sensitive. However, binding relationships in the existing binding-validation model cannot be exchanged between devices dynamically. It is crucial to provide a robust security verification service under spoofing attacks to maintain the security level provided by dynamic SAVI (D-SAVI) [12].

(3) The previous D-SAVI scheme is unable to provide differentiated execution of normal packet forwarding and spoofing packet discarding. Policy-based controls over SDN-enabled switches under the big data streams delivered by those bandwidth-intensive applications will seriously reduce packet forwarding efficiency.

In brief, all of the aforementioned limitations might incur additional overhead, thereby degrading SAVI performance under big data streams in SDN-based data center networks. In this study, we propose an adaptive authenticated model for big data stream source address validation, which optimizes SDN-based network communication performance while maintaining the source address security level. Our main contributions in this paper are summarized as follows:

(i) We propose a controller-based model for big data stream SAVI management and provide an architectural design of a security mechanism that permits attack detection and source address validation implementation under big data streams. Our proposed model eases collaboration not only between the forwarding entities, but also between networks. Therefore, spoofing attacks under big data streams can be directly mitigated.

(ii) The SPRT-based model controls big data stream SAVI by changing the validation implementation according to the anomaly classification, which is expected to effectively balance the flow validation overhead against the security level.

The paper is structured as follows. Section 2 describes the adaptive authenticated model for big data stream SAVI. Section 3 evaluates the proposed approach, TAAM, by conducting various simulations and experiments. Section 4 describes related work, Section 5 discusses limitations and future work, and Section 6 concludes the paper.

Adaptive Authenticated Model

The two-phase adaptive authenticated model (TAAM) for big data stream source address validation in SDN-based data center networks includes a data stream collector model, a flow classifier model, and an SPRT-based SAVI model (Figure 1). The sFlow-based data stream collector model can collect real-time network performance information on bandwidth-intensive applications.
Depending on the global view of the SDN controller, TAAM can monitor the access/core switches and deploy the conditional entropy-based flow classifier model in the first phase, which gives the primary classification of a big data stream. In the second phase, we adopt a statistical tool named SPRT to realize a differentiated SAVI model and obtain an extensive analysis of the spoofing possibility, i.e., whether a candidate stream requires urgent validation. Table 1 presents the description of each notation.

sFlow-Based Data Stream Collector. The data stream collector model (Figure 2) is responsible for big data stream gathering in the access layer. Following RFC 3176, sFlow is a method for monitoring and collecting real-time traffic in a typically switched topology. Sampling, detection, and evaluation are performed by the distributed sFlow agents deployed in switches or routers to obtain sample statistical data in the access layer. We then obtain continuous, network-wide big data stream information for the SDN-based data center from the collector. The data stream collector model combines an already proposed native OpenFlow approach to gather statistical data stream information in the SDN-enabled switches. Initially, the OpenFlow-based controller periodically encapsulates the OFPT message in Table 2 into a Packet-In packet (controller communication message). The periodic FEATURES_REQUEST operations are lightweight and easily integrated with the existing data stream collection architecture, separating the statistical data stream information gathering process from the controller by utilizing the sFlow agents [13]. In detail, TAAM leverages the packet sampling capability of the sFlow agent, which entirely decouples the statistical data stream information gathering process from the forwarding logic, accompanied by the FEATURES_REPLY message (Figure 3). Consequently, the sFlow-based agent is mainly responsible for acquiring the necessary SDN-based data center network information. Additionally, the sFlow-based data stream collector samples big data stream packets. According to the characteristics of data collection, a multithreaded gathering method was used to calibrate the model. The data are generated by Open vSwitch itself by setting the corresponding parameters obtained from the PORT_STATUS message on the sFlow agent. Then, the controller sends the view of the current global network topology to the collector. After obtaining the big data stream information and issuing response actions to the related access layer switches, the formatted packet results can be sent to the subsequent models on the controller over UDP. As the sFlow data stream collector receives big data stream packet samples, it updates the corresponding statistical counters inside the sFlow agent. Out of concern for controller CPU utilization, there is no need to constantly obtain detailed data stream information for each access layer switch over consecutive sampling time windows. Such an approach can reduce the data collection complexity and the redundant process of the flow collection algorithm in a certain sampling time window. Meanwhile, it requires fewer CPU resources, especially when traffic is growing rapidly. Consequently, the flow features from the sFlow agent are periodically collected and extracted in a big data stream.
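As a rough illustration of the window-based collection loop just described, the snippet below aggregates sampled flow records into per-window counters and exports them over UDP to the controller-side models. The record format and the socket endpoint are assumptions for illustration, not the paper's actual wire format.

```python
import json
import socket
from collections import Counter

# Minimal sketch of a window-based sampled-flow aggregator, assuming each
# sFlow sample has already been parsed into a (sip, dip, dport, psize) tuple.
# The UDP endpoint and JSON payload format are illustrative assumptions.

COLLECTOR_ADDR = ("127.0.0.1", 6343)

def aggregate_window(samples):
    """Update per-feature counters for one sampling time window."""
    counters = {"sip": Counter(), "dip": Counter(),
                "dport": Counter(), "psize": Counter()}
    for sip, dip, dport, psize in samples:
        counters["sip"][sip] += 1
        counters["dip"][dip] += 1
        counters["dport"][dport] += 1
        counters["psize"][psize] += 1
    return counters

def export(counters, sock):
    """Send the formatted window statistics to the controller-side models."""
    payload = {k: dict(c) for k, c in counters.items()}
    sock.sendto(json.dumps(payload).encode(), COLLECTOR_ADDR)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    window = [("10.0.0.1", "10.0.0.9", 443, 1500),
              ("10.0.0.2", "10.0.0.9", 443, 1500)]
    export(aggregate_window(window), sock)
```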
Entropy-Based Flow Classifier. Describing normal and anomalous behaviors is one of the difficulties an anomaly detection system faces. Shannon's entropy measures the information content associated with a random variable; generally, higher entropy means the random variable in a given system has greater randomness. Table 2 summarizes the OFPT messages involved: FEATURES_REPLY (switch to controller) replies with the list of flow table match fields, ports, port speeds, supported tables, and actions; PORT_STATUS (switch to controller) enables the access layer switch to inform the SDN controller of changes to port speeds or connectivity. The data stream feature distribution will fluctuate vastly when elephant traffic or a spoofing attack occurs. According to prior studies, the launch of a spoofing big data stream attack is usually accompanied by exploding traffic at the access layer switch [14]. Therefore, it is theoretically possible, as a preliminary step, to classify the flows and the corresponding binding relationships in the flow table with suspected spoofed source addresses. In particular, the threshold value depends on the traffic situation and the distance from the anomalous host to its nearest SDN checkpoint. Given collected flow events f_1, f_2, ..., f_n, the entropy of a discrete random variable E(f) in the model can be defined as equation (1), the standard Shannon entropy E(f) = -Σ_{i=1}^{n} p(f_i) log p(f_i), where p(f_i) is the empirical probability of flow event f_i. Conditional entropy is typically defined as the Shannon entropy under conditional circumstances [15]. Four kinds of conditional entropy are defined. Accordingly, four kinds of entropy-based methods for flow classification in rough set data analysis are proposed, namely the conditional entropy (equation (2)) of the match fields source IP address (sip), destination IP address (dip), destination port (dport), and packet size (psize) during a time window. Generally, such entropy changes significantly once anomalous conditions occur. As Figures 5 and 6 depict, long-lived flows created by bandwidth-intensive applications incur a large number of flow rules. Because particular addresses become concentrated under big data streams, a continuous significant increase in the normalized conditional entropy values (2a) and (2b) can be observed (Figure 5). Consequently, the entropy value changes can significantly reduce the flow table match rate for packets arriving at OVS switches. Additionally, the corresponding data counters in the flow tables are also highly likely to contain unpaired addresses [16]. Only a few Packet-In messages are generated in the SDN-based network, and there is a sudden decrease in the normalized conditional entropy values (2c) and (2d). The real-time conditional entropy changes provide noteworthy evidence for suspecting that a big data stream is anomalous and for deciding whether a host or port needs to be validated repeatedly [17]. The suspected flow and corresponding binding relationship can be classified by persistently monitoring the variation of the conditional entropy value. In terms of model implementation, the cache component records the value and flow events with a timestamp and their timeout. We adopt a successive entropy test component that uses the information stored in the cache to compare the conditional entropy value with a baseline model depicted in equation (3), which is a determined confidence interval of the corresponding standard deviation. Let e_{i,n} denote the conditional entropy of flow f_n at two consecutive time units of the interval to be calculated, and let E_flow denote the threshold used to classify a big data stream.
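The following is a minimal sketch of this screening step, assuming flows are given as (sip, dip, dport, psize) tuples. The baseline mean/standard-deviation test stands in for the confidence-interval test of equation (3), and the specific conditioning choice H(dip | sip) is one of several the paper allows; all numbers are illustrative.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Empirical Shannon entropy of a sequence, normalized to [0, 1]."""
    counts = Counter(values)
    n = len(values)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

def conditional_entropy(pairs):
    """H(Y|X) for (x, y) pairs, e.g., (sip, dip), weighted per condition."""
    by_x = {}
    for x, y in pairs:
        by_x.setdefault(x, []).append(y)
    n = len(pairs)
    return sum(len(ys) / n * shannon_entropy(ys) for ys in by_x.values())

def is_suspicious(flows, baseline_mean, baseline_std, k=3.0):
    """Flag a window whose conditional entropy leaves the baseline interval
    (a stand-in for the confidence-interval test of equation (3))."""
    e = conditional_entropy([(f[0], f[1]) for f in flows])  # H(dip | sip)
    return abs(e - baseline_mean) > k * baseline_std

flows = [("10.0.0.1", "10.0.0.9", 80, 64)] * 90 + \
        [("10.0.0.5", "10.0.0.9", 80, 64)] * 10
print(is_suspicious(flows, baseline_mean=0.6, baseline_std=0.05))
```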
Considering that the collected f_{i,1}, f_{i,2}, ..., f_{i,n} within a given window are independent, candidate flows can be classified as in equation (4).

SPRT-Based Source Address Validation. To realize the source address validation of RFC 7513, the original SAVI designs a binding-validation model and determines the validation rules in the control plane. Current binding-validation mechanisms in D-SAVI [12] are enforced by flow rules in the access layer switches, as in Figure 7, which bind source addresses to switch ports in the <Address, Switch, Port> format. Since the validation rules coexist with the OpenFlow forwarding rules generated by those switches, the latest SDN-based SAVI prototype can use the "multiple tables" feature of OpenFlow v1.3 to perform policy-based communications. Therefore, we designed an SPRT-based differentiated SAVI model to verify suspected spoofing big data streams, combining the requirements of SAVI efficiency optimization and flow table quantity reduction on the original SAVI basis. Any flow in a normal big data stream originating from the legacy switches will be revalidated to verify its binding relationship state, except when the corresponding flow rules explicitly exist [18]. On the implementation side, we adopt a statistical tool named SPRT. The SPRT mechanism samples the candidate flows in big data streams and calculates the corresponding likelihood ratio; when confident, it terminates with a spoofed/normal decision. Unlike current machine learning approaches, which urgently need feature selection, the SPRT-based method is lighter, uses fewer features, and is simpler to scale. In particular, the SPRT-based algorithm is easy to implement because it does not depend on a predefined correlation knowledge base or advance training of a correlation model. We divide the probability of false positives and false negatives into two types, as in equations (5) and (6). Naturally, the anomaly candidates are mixed with the normal flows. Let H_{N_i} denote the hypothesis that candidate i is observed as a normal candidate, and let H_{A_i} denote compromised binding relationships with anomaly flows. Let H_C, for C ∈ {A_i + N_i, N_i}, denote the hypothesis that the measurements correspond to all normal flow and anomaly flow events. A false positive means accepting H_{A_i+N_i} when H_{N_i} is true, while a false negative means accepting H_{N_i} when H_{A_i+N_i} is true. To bound these two errors, α and β are defined as the balance parameters of the false positive and false negative rates. Comparing λ_0 with λ_1, λ_1 is naturally bigger, because a compromised binding relationship is more likely to be injected into the anomaly flows. The approximate maximum log-likelihood function L(f_{i,1}, f_{i,2}, ..., f_{i,n} | H_C) represents the probability ratio of all normal flow and anomaly flow events tested for candidate i. Therefore, we consider l_{A_i,N_i,n} as the SPRT statistic based on dynamic validation at f_{i,n}, as in equation (7). In particular, the SPRT-based detection model can be considered a one-dimensional random walk [19]. By utilizing the entropy-based flow classifiers, we obtain the classification of normal flows and anomaly flows. Therefore, assuming each flow event f_{i,x} is independent and identically distributed, when a normal flow |e_{i,n}| ≤ E_flow is tested, we walk upward one step; otherwise, we walk downward one step. According to equations (5)-(7), we obtain equation (8).
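A minimal sketch of this random walk follows, assuming Bernoulli flow events with parameters λ0 (normal) and λ1 (anomalous) and using the classical Wald boundaries that the decision rules in the next paragraph make explicit. The numeric parameter values are illustrative only, not the paper's estimates.

```python
import math

# Minimal SPRT sketch for the per-candidate random walk described above.
# Each observation is 1 if the flow event violates the entropy threshold
# (|e_{i,n}| > E_flow) and 0 otherwise. lam0/lam1 are the assumed Bernoulli
# parameters under the normal and anomaly hypotheses; values are illustrative.

def sprt(observations, alpha=0.01, beta=0.02, lam0=0.1, lam1=0.7):
    low = math.log(beta / (1 - alpha))    # accept "normal" below this
    high = math.log((1 - beta) / alpha)   # accept "anomaly" above this
    llr = 0.0                             # log-likelihood ratio (random walk)
    for n, x in enumerate(observations, start=1):
        if x:   # step toward "anomaly"
            llr += math.log(lam1 / lam0)
        else:   # step toward "normal"
            llr += math.log((1 - lam1) / (1 - lam0))
        if llr <= low:
            return "normal", n
        if llr >= high:
            return "anomaly", n
    return "undecided", len(observations)

print(sprt([1, 1, 0, 1, 1, 1, 1, 1]))   # reaches "anomaly" after a few tests
```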
The intervals of the packets entering the switches are fed to this module at a periodic time window. From the above equations, we can conclude that four parameters α, β, λ_0, λ_1 are required by the SPRT-based detection method. Among them, α and β limit the false positive error rate and false negative error rate and give the two boundaries (A_i and N_i) if the model is considered a one-dimensional random walk. λ_0 and λ_1 are the probability distribution parameters for the flow events f_1, f_2, ..., f_n and affect the number of observations required for the dynamic validation to reach a decision. Above all, the TAAM model averages over all corresponding past estimates to calculate l_{A_i,N_i,n} if the one-dimensional random walk terminates at the i-th global data sample. Notably, when multiple samples in different flow tables are considered, the calculations become rather complex. We can utilize all the terminated data samples in the next symbol detection to calculate the likelihood ratio more accurately. The statistic l_{A_i,N_i,n} can be used to classify the candidates with parameters α and β as follows: (1) If l_{A_i,N_i,n} < β/(1 − α), declare a normal big data stream and reduce the D-SAVI intensity, namely H_{N_i}. (2) Else if l_{A_i,N_i,n} > (1 − β)/α, declare that an anomaly big data stream is detected and maintain the D-SAVI intensity, that is, H_{A_i+N_i}. (3) Otherwise, declare that TAAM has insufficient evidence to make a classification and continue collecting additional statistical access layer switch data.

Experiments and Analysis

TAAM is a controller-based model for big data stream SAVI management that provides an architectural design of a security mechanism. In this section, we implement a simulated SDN-based data center network to demonstrate TAAM's feasibility and effectiveness. We then compare our proposed TAAM with the latest D-SAVI model [12] in terms of performance optimization tests and security assurance tests.

Implementation of Experiments. The simulated experimental platform consists of four servers: two serve as host nodes, and the other two serve as simulated edge switches. In the simulation, we use Open vSwitch as the core switches in the data center networks and Floodlight as the SDN controller. According to the network resource requirements, simulations are carried out on three typical data center network topologies (Table 3): Abilene (Figure 8), GEANT (Figure 9), and Fat-Tree (Figure 10). The background flow datasets are provided by the TOTEM project, Uhlig [20], and Fat-Tree [21] for the three network topologies, respectively (Table 3). Mininet 2.3.0, which supports the OpenFlow v1.3 standard, is applied to simulate the topologies and links of the SDN-based data center networks. An Internet traffic and workloads generator [22] is used to generate the legitimate big data stream. We created a large number of bots with Python and Scapy to launch the attacks, which were carried out with the parameters in Table 4. We then calculate the number of packets and flows in normal and anomaly traffic on the victim host. As Table 4 depicts, the spoofing attack intensities are divided into three levels. Consequently, the differentiated attack levels of this experimental system offer scientific quantitative analysis and appraisal of the source address validation model. Table 5 provides the detection rate of the two-phase algorithm applied in TAAM. Initially, the benchmark of the two-phase algorithm applied in TAAM was calculated.
To evaluate the performance of the SPRT-based source address validation algorithm, we compared it with six other classic machine learning algorithms, including the XGBoost used in D-SAVI [23]. The choice of entropy-based and SPRT-based algorithms has an advantage in model training time while retaining most of the performance benefits, including precision and recall. In particular, the SPRT-based big data stream classification algorithm realizes differentiated verification with no additional machine learning model training time and does not occupy real-time controller memory.

Parameter Tests. Aiming at choosing the upper and lower thresholds, as shown in Figure 11, we further tested the influence of the threshold values on the number of successive tests in the aforementioned SPRT-based differentiated validation model (Section 3.3). Basically, the greater the difference between λ_0 and λ_1, the smaller the number of successive tests required for our method to reach a detection. Furthermore, the detailed values of λ_0 and λ_1 in our evaluation are shown in Figure 12, and it can be concluded that our method can detect a compromised candidate after 6 to 8 successive tests. There is a similar trend for the average number of required validations. However, such a test mechanism is not exactly implementable, since the upper and lower threshold values depend on the random parameters in different data center network topologies. We then summarize the normal candidates' proportion among all the binding relationships for the training data under background big data stream traffic (Figure 13) for the three different SDN-based data center topologies. α and β are naturally small values limiting the false positive rate and false negative rate, usually between 0.01 and 0.05. Since the OVS flow table update operation is hard and very slow when the flow table load is too large, α and β are defined as the balance parameters of the false positive and false negative rates to avoid these two errors. Here, we set α = 0.01 and β = 0.02 in the evaluation and estimate λ_0 and λ_1. Afterwards, we calculated the average abnormal candidates' proportion (Figure 13) to estimate λ_0 and λ_1. As Table 6 depicts, different network environments yield different λ_0 and λ_1 values. In Section 3.4, we mentioned that the likelihood function L(f_{i,1}, f_{i,2}, ..., f_{i,n} | H_C) represents the probability ratio of all normal flow and anomaly flow events tested for candidate i. The results show that, in the different topologies of Abilene, GEANT, and Fat-Tree, λ_1 is indeed bigger, because an anomaly candidate under a big data stream is more likely to be injected into the anomaly flows. Another important observation is that the difference between λ_0 and λ_1 grows with the topology scale.

Performance and Overhead Tests. Regarding performance and overhead tests, we provide results under different network types and topologies. A recent set of comparisons in Table 7 implies that D-SAVI has the edge on performance for now, but that our proposed TAAM promises better performance across the board due to the reduction in redundant source address validation processes.
These results also imply that the TAAM model is more effective than other existing methods at reducing the average packet forwarding delay with limited anomalies. Compared with original OpenFlow, the experimental results demonstrate the effectiveness of SAVI deployment in TAAM and show an obvious optimization in high-bandwidth, large-traffic environments. Consequently, our proposed TAAM model ensures more significant average packet forwarding delay optimization as the scale of the simulated data center network gradually expands. Considering that the flow table update rate is effectively reduced (Table 7), it also has the advantages of stability and reliability. TAAM is superior in terms of average packet forwarding delay, because the differentiated SPRT-based source address validation can withstand big data streams and simplify the execution of binding relationship verification to reduce the traffic cost. However, this apparent advantage may narrow because the random polling module starts its security guarantee measures under attack Type C (80% spoofing packets). Furthermore, the SPRT-based dynamic validation model is the main factor affecting SAVI system performance. For different topology sizes, the average packet forwarding delay of TAAM outperforms the other techniques. The reason is that the differentiated validation model optimizes the original polling mechanism to reduce latency and the candidate validation frequency. Additionally, the TAAM model overhead was evaluated by measuring the controller's CPU utilization (Figure 14). A serial CPU utilization rate effect was found when the entropy-based methods were used. The corresponding results show that the average controller CPU utilization is 8% without TAAM and 10.7% with TAAM deployed, which indicates that TAAM incurs little overhead under normal conditions. Consequently, TAAM ensures reliable identification of incoming packets, substantially reducing the risk of accidentally filtering a normal host. The average controller CPU utilization shows that our proposed model enables safe, secure, and effective SDN-based data center network management.

Related Works

Reliable source address validation is a significant prerequisite for big data stream communication and authentication in an SDN-based data center network. Aiming at the spoofed source address attack problem, the IETF working group was the first to standardize the prototype system of source address validation implementation (SAVI). SAVI sets up Layer-2 switches and filters spoofing packets by establishing a binding-validation model for data transmission and communication [24]. Such a binding-validation mechanism increases the filtering granularity in SAVI. Meanwhile, SDN simplifies application management by utilizing various policy-based controls over SDN-enabled switches. The granularity of the authenticated model implementation is single hosts rather than IP prefixes of source addresses, which is much more accurate than conventional methods. Filtering IP spoofing traffic with agility (VASE) and Virtual Source Address Validation Edge (VAVE) are both implementations under SAVI [25]. The purpose of VASE and VAVE is to protect users in communication systems from being spoofed by attackers within the same network domain.
VASE and VAVE establish a SAVI protection zone comprising all of the communication devices, including Layer-3 OpenFlow switches and Layer-2 SAVI switches. However, few of these papers discuss how to implement SAVI technology in SDN-based data center networks. On the other hand, based on the convenience provided by the SDN northbound programmable interface, the BGP-based Anti-Spoofing Extension (BASE) [6] is an anti-spoofing protocol based on SDN specifications, and its implementation can build on existing authentication technology for hybrid SDN-based networks. Source Address Validation in Software Defined Networks (SDN-SAVI) [26] was the preliminary design to enable IPv6-based SAVI functionality and implementation under SDN deployment. Benefiting from the global view of the SDN controller, SDN-SAVI deploys authentication technology through IPv6-based flow tables installed in the access layer switches without redundant settings. SDN-SAVI is excellent for externalizing configuration settings that may need to be changed by the network manager. To ensure source address authenticity and security in Content Delivery Networks (CDN), the authors in [27] proposed a mechanism, CDNi, that can detect spoofed source addresses and thereby create a robust defense against spoofing attacks launched with spoofed IPs. However, one obvious shortcoming is that the current CDNi is unable to prevent users in the corresponding communication systems from forging source addresses within the same network domain. Source Address Validation for SDN Hybrid networks (SAVSH) [28] can locate spoofed nodes for OpenFlow switch replacement and deploy filtering flow rules and authentication code onto them with a desirable, fine-grained administrative security level. SAVSH uses as few SDN authentication devices as possible but trades away desirable extensibility of the authentication tools and of the development workbench. The SDN-based Integrated IP Source Address Validation Architecture (ISAVA) [29] computes the forwarding path in a local scope of the network for the problem of source address anonymity protection. To ensure that the source address generated after the execution of communication systems can uniquely identify the current big data stream, ISAVA proposes a privacy-preserving authenticated data structure based on a longer verification path. Such paths enable a series of changes of the source address in the packets of a big data stream, which makes it suitable for SDN-based big data stream scenarios. SDN-Ti [30] is proposed for tracing back and identifying spoofing attackers in SDN-based data center networks. Switches applying SDN-Ti extend the functionality of the SDN switch and controller, and SDN-Ti can also be used in routers of access networks. SDN-Ti is intelligent at recognizing spoofing devices and quick at snooping address configuration packets. However, some drawbacks still exist in big data stream scenarios.

Limitations and Future Works

To the best of our knowledge, this study is the first work to discuss how to validate source addresses using a differentiated SAVI mechanism under big data streams. However, TAAM still has limitations when used in an SDN-based data center network. In this section, we discuss some of them. First, to defend against source address spoofing attacks in big data streams, the SPRT-based source address validation model must last for a time window and test different approaches to maintain the security level.
However, setting up a conditional time window in the access switches means that every new flow in the big data stream has to be classified repeatedly, which could put extra pressure on the adaptive authenticated model. Furthermore, the topology of the SDN-based data center can affect the flow classifier. For instance, an SDN controller in the data center network may be directly attacked by several sophisticated spoofing attackers in a coordinated fashion, making the entropy-based classifier less effective. Alternatively, the adaptive authenticated model could make much greater use of the SDN controller's global view to calculate an entropy-based threshold for this limitation. While the present results of the model overhead and performance tests have verified the efficiency of the adaptive authenticated model, some flaws in the construction of the adaptive authenticated model prevent it from completely meeting specific traffic requirements. As a next step, we will attempt to apply the analytical model to a real data center network as a more practical model. Besides, it is important to find adaptive methods to differentiate the spoofing packets in a big data stream from legitimate ones in real time, before they enter further into the control layer, thereby fundamentally solving the IP source address validation problem in SDN-based data center networks. Our future work will mainly focus on further optimization of the anomaly detection techniques and on deploying them in other actual SDN-based data center networks.

Conclusion

This paper presented an adaptive authenticated model for big data stream SAVI in SDN-based data center networks. We propose a two-phase adaptive authenticated model by introducing SAVI-based IP source address verification. Our proposed TAAM model realizes real-time verification of data streams and dynamically reduces the redundant big data stream SAVI process. The model overhead and performance test results demonstrate that the two-phase adaptive authenticated model achieves desirable security, efficiency, and stability.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
7,219.8
2021-09-21T00:00:00.000
[ "Computer Science" ]
Fullerene modification of WO3 electron transport layer toward high-efficiency MA-free perovskite solar cells with eliminated light-soaking effect

In perovskite solar cells (PSCs), the light-soaking effect, whereby device performance changes appreciably under continuous light illumination, is potentially harmful to deployed devices and hampers accurate assessment of their efficiency. Herein, chemically stable tungsten trioxide (WO3) with high electron mobility is used as the electron transport material in methylamine (MA)-free PSCs. However, the light-soaking effect is clearly observed in our devices. A fullerene derivative, C60 pyrrolidine tris-acid (CPTA), is introduced to modify the interface between the WO3 and perovskite (PVK) layers; it can bond with WO3 and PVK simultaneously, leading to defect passivation and suppression of trap-assisted nonradiative recombination. What is more, the introduction of CPTA can enhance the built-in electric field between the WO3 and PVK layers, thereby facilitating electron extraction and inhibiting carrier accumulation at the interface. Consequently, the light-soaking effect of WO3-based PSCs has been eliminated, and the power conversion efficiency has been boosted from 17.4% for the control device to 20.5% for the WO3/CPTA-based PSC, with enhanced stability. This study gives guidance for the design of interfacial molecules to eliminate the light-soaking effect.

| INTRODUCTION

During the past decade, perovskite solar cells (PSCs) have drawn great attention due to their low-cost solution processing and the prominent features of perovskite (PVK) materials, which include an adjustable band gap, a large absorption coefficient, high carrier mobility, long carrier diffusion length, and small exciton binding energy. [1][2][3] The certified power conversion efficiency (PCE) of PSCs has been rapidly boosted to 25.7% in only a dozen years, [4][5][6][7][8] showing tremendous potential in a wide range of applications. However, there are still many issues to be addressed before their scalable commercialization. [9] The normal n-i-p structured PSC is the most studied cell system among PSCs. As a key building block of PSCs, an electron transport layer (ETL) located between the transparent electrode and the PVK layer is applied, which can effectively transport the photogenerated electrons from the PVK layer to the cathode. [10] As an electron transport material (ETM), titanium dioxide (TiO2) has been widely applied in n-i-p structured PSCs because of its superior electron extraction and transport, well-aligned band energy, and relatively mature manufacturing techniques. [11] However, TiO2 is sensitive to ultraviolet radiation and shows high photocatalytic activity toward degrading the PVKs, which is detrimental to the stability of PSCs. [12,13] Zinc oxide (ZnO) was then proposed as an alternative to TiO2 due to its high carrier mobility and lack of photocatalytic activity. [14] However, the chemical instability of the ZnO interface causes decomposition of the hybrid PVK film into PbI2 under extended heat treatment. [15] Recently, tin dioxide (SnO2) has been widely recognized as an excellent ETM for PSCs with impressive device performance, owing to its high electron mobility and favorable band alignment with PVKs. [16,17] Nevertheless, exploring new types of ETMs remains important for further advancing the study of PSCs.
Tungsten trioxide (WO3) is another promising ETM for PSCs, with a wide bandgap (~3.4 eV), good chemical stability, and high electron mobility. [18] Mahmood et al. first explored WO3 as an ETM for PSCs. [19] However, the resulting device with only WO3 as the ETL displayed a very low PCE of 3.8%, which could be attributed to a rich defect density and fast charge recombination at the WO3/PVK interface. Introducing another n-type metal oxide (such as TiO2 or SnO2) to construct a double-layer ETL with WO3 can suppress the charge recombination and improve cell efficiency. [19][20][21] For instance, You et al. designed a TiO2/WO3 bilayer ETL and achieved an enhanced PCE of 20.14%, much better than the single WO3 ETL-based device (with an efficiency of 17.04%). [22] Eze et al. used fullerene C60 to modify WOx and obtained a PCE of 16.07%. [23] Chen et al. employed a hexanol-assisted, low-temperature-processed WOx nanocrystalline film as the ETL. The increased conductivity and reduced trap density of the resulting WOx nanocrystalline film boosted the efficiency to 20.77%, the highest efficiency of WOx-based PSCs to date. [24] Nevertheless, much room remains for further improvement of WOx as a candidate ETM for PSCs. The light-soaking effect, whereby continuous light illumination gradually changes the performance of PSCs, has raised concerns about the unstable power output of these devices. [25,26] Therefore, it is crucial to eliminate such light-soaking effects so that the efficiency and stability of PSCs can be assessed accurately. It is reported that the light-soaking instability of devices is mainly due to trap-assisted recombination or charge accumulation at the ETL/PVK and/or hole transport layer/PVK interfaces. [27,28] Wang et al. introduced an n-type helical perylene diimide (PDI2) as an effective interfacial layer between TiO2 and PVK, which promoted charge transport and passivated surface defects, thereby resulting in suppressed interfacial recombination, improved cell efficiency, and reduced light-soaking instability. [29] Zhang et al. used an interfacial modifier, [6,6]-phenyl-C61-butyric acid 2-((2-(dimethylamino)ethyl)(methyl)-amino)-ethyl ester, to modify the TiO2 ETL of planar PSCs. Both the electron extraction capability of the ETL and the quality of the PVK film deposited above it were enhanced; thus, the PCE and light-soaking stability of the resulting devices were significantly improved. [30] Shao et al. employed a fullerene derivative, fulleropyrrolidine functionalized by a side chain of triethylene glycol monoethyl ether (PTEG-1), as the ETL in p-i-n structured PSCs. The PTEG-1 could effectively alleviate the trap-assisted nonradiative recombination occurring at the PVK/ETL interface owing to the electron-donor property of the side chains, resulting in increased efficiency and an eliminated light-soaking effect. [31] The light-soaking effect was also found in WOx-based PSCs, and it could be suppressed if the WOx layer was modified by a self-assembled monolayer (C60-C6-phosphoric acid). [26] However, an in-depth explanation was not provided in that work. Herein, we employed a WO3 film as the ETL of PSCs, which showed an obvious light-soaking effect. C60 pyrrolidine tris-acid (CPTA) was therefore introduced as an interface layer between WO3 and PVK. Surprisingly, the light-soaking effect is completely removed after CPTA modification.
The introduction of CPTA can simultaneously bond with W6+ in WO3 and Pb2+ in PVK, which reduces the defect density and suppresses trap-assisted nonradiative recombination. Also, the work functions of both the ETL and the PVK layer change simultaneously upon CPTA modification. The increased built-in electric field between the ETL and the PVK layer can facilitate electron extraction from PVK to ITO and restrain carrier accumulation at the interface. Consequently, the WO3/CPTA-based PSCs achieve an impressive PCE of 20.5% along with the elimination of the light-soaking effect. In addition, the unencapsulated WO3/CPTA-based PSCs exhibit excellent stability: 90% of the initial PCE is maintained after 500 h of continuous 1-sun light irradiation, and 77% of the original PCE is retained after 1000 h of thermal aging at 85°C. All of these metrics are significantly superior to those of the pristine WO3-based cells. This work affords a viable way to eliminate the light-soaking effects of PSCs.

| RESULTS AND DISCUSSION

Considering the instability of MA+ under high-temperature thermal aging, [32] we employed a methylamine (MA)-free, Cs0.15FA0.85Pb(I0.95Br0.03Cl0.02)3-based PVK (CsFA-based PVK) as the light absorber in this study. Figure 1A schematically displays the n-i-p device configuration of ITO/WO3 (Figure S1)/CPTA/CsFA-based PVK/Spiro-OMeTAD/Au, and the film preparation process is described in detail (Figure S2 and Supporting Information Experimental Section). Interestingly, the PCE of the device formed with the pristine WO3 ETL progressively increased from 14.5% to 15.8%, 16.4%, and 16.8%, and peaked at 17.1%, as the light illumination time was extended from 0 to 2, 4, 10, and 20 min (Table S1). The characteristic current density-voltage (J-V) curves are displayed in Figure 1B, and the evolution of the detailed performance parameters is depicted in Figure S3. The PCE of the WO3-based device no longer improves when the light-soaking time is further extended to 30 min, indicating that ~20 min should be the optimal light-soaking time for the PSC with a WO3 ETL. By contrast, modification with CPTA results in significantly higher and more stable device performance (20.1%), which is almost unchanged regardless of the light illumination time, indicating that the light-soaking effect is effectively eliminated once CPTA modifies the WO3 ETL. To understand in depth the mechanism by which CPTA modification eliminates the light-soaking effect, the photoelectric properties of the ETL were tested first. We measured and calculated the direct conductivity (σ) of the WO3 films before and after CPTA modification from the current-voltage (I-V) characteristic curves (shown in Figure 2A). The conductivity of the WO3/CPTA film is estimated to be 3.78 × 10⁻³ mS·cm⁻¹, significantly larger than that of the pristine WO3 film (2.28 × 10⁻³ mS·cm⁻¹). This might be due to electron injection through Lewis coordination between CPTA and W6+, resulting in enhanced charge transport. [33] Atomic force microscopy was performed to compare the surface roughness of the WO3 and WO3/CPTA films. As shown in Figure 2B,C, the root-mean-square roughness of the WO3 film decreases from 5.38 to 4.26 nm after CPTA modification, which provides a favorable interfacial contact between the ETL and PVK and is conducive to uniform growth of the PVK. [34] We further find that the CPTA layer remains on the WO3 surface even after the washing procedure with the mixed solvent of DMF/DMSO (Figures S4 and S5).
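The direct conductivity quoted above follows from the slope of a linear I-V sweep via σ = G·d/A, with G the conductance, d the film thickness, and A the contact area. A minimal numerical sketch follows; the thickness, area, and current values are illustrative placeholders, not the paper's measurement geometry.

```python
import numpy as np

# Rough sketch of extracting direct conductivity from a linear I-V sweep:
# sigma = G * d / A, where G is the fitted slope (conductance).
# Thickness, area, and the synthetic I-V data are illustrative only.

def conductivity(voltages, currents, thickness_cm, area_cm2):
    slope, _ = np.polyfit(voltages, currents, 1)   # conductance G = dI/dV (S)
    return slope * thickness_cm / area_cm2          # sigma in S/cm

V = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # volts
I = 3.0e-6 * V                                       # amps (synthetic linear I-V)
sigma = conductivity(V, I, thickness_cm=5e-6, area_cm2=0.04)
print(f"sigma = {sigma * 1e3:.2e} mS/cm (illustrative numbers)")
```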
Moreover, to determine the distribution of CPTA in the film, we carried out time-of-flight secondary ion mass spectrometry. [35] Since C60 might be decomposed into multicarbon fragments by the high-energy ion beams during testing, we attribute the signal of the multicarbon fragments (C ≥ 8) to the CPTA molecule. As illustrated in Figure S5, although most CPTA molecules are detected at the interface between WO3 and PVK, some CPTA molecules diffuse into the PVK layer owing to the solubility of CPTA in DMF. The PVK films with and without CPTA modification exhibit similar morphology, thickness, UV-Vis absorption, and crystallization intensity (Figure S6). The Urbach energy (E_u) of the PVK films was used to evaluate their structural quality; it was obtained from the UV-Vis absorption spectra (Figure S6G) according to the equation α = α₀ exp(hν/E_u), where α is the absorption coefficient of the PVK films and hν is the photon energy. The E_u of the WO3/CPTA-based PVK film is 48.78 meV, lower than that of the control film (64.64 meV, Figure 2D), demonstrating that the WO3/CPTA-based PVK film has better quality and fewer defects. [36,37] Previous studies have pointed out that carrier accumulation at the PVK/electrode interface can result in a large capacitance effect owing to interfacial electronic dipole polarization. [38] To assess this, frequency-dependent capacitance (C-f) measurements were performed to compare the capacitance properties of the control and target samples (Figure S7). According to a previous report, the capacitance change in the low-frequency region (region I) is attributed to polarization at the electrode interfaces. [39] The capacitance of the WO3/CPTA-based device is significantly lower than that of the WO3-based device in region I, indicating significant suppression of carrier accumulation at the ETL/PVK interface after CPTA modification, which is beneficial for eliminating the light-soaking effect. To gain further insight into the underlying reason for the eliminated light-soaking effect, we employed X-ray photoelectron spectroscopy (XPS) to study the interaction between WO3 and CPTA. The W 4f spectra of the WO3 and WO3/CPTA films are shown in Figure 2E. The binding energies located at 37.54 and 35.38 eV are identified as the 4f5/2 and 4f7/2 characteristic peaks of W6+, respectively. The two peaks shift positively to 37.74 and 35.58 eV after CPTA modification, which reveals the formation of chemical bonds between WO3 and CPTA. [40] Moreover, we also explored the chemical interactions between CPTA and PVK. As illustrated in Figures 2F and S8, the Pb2+ signals move to lower binding energies upon CPTA modification, confirming again that the CPTA molecule diffuses into the PVK layer due to the solubility of CPTA in DMF (Figures S4 and S5), and hence the oxygen (O) atoms in CPTA donate their lone electron pairs to the empty 6p orbital of Pb2+ through coordination bonds. [41,42] The strong chemical interactions of CPTA with WO3 and PVK passivate the defects at the interface, which is conducive to suppressing trap-assisted nonradiative recombination and beneficial for eliminating the light-soaking effect. [28,39] High-resolution synchrotron radiation photoelectron spectroscopy (HR-SRPES) was further carried out to evaluate the energy band levels [43,44] of the ETL and the corresponding PVK films before and after CPTA treatment (Figure 3I,II).
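Since E_u enters through α = α₀ exp(hν/E_u), taking logarithms gives ln α = ln α₀ + hν/E_u, so E_u is the reciprocal slope of ln α versus photon energy in the band-tail region. A minimal sketch of that fit with synthetic data follows; the numbers are illustrative, not the paper's spectra.

```python
import numpy as np

# Urbach-energy extraction: alpha = alpha0 * exp(h*nu / E_u), so a linear fit
# of ln(alpha) vs photon energy gives slope = 1/E_u in the band-tail region.
# The synthetic spectrum below assumes E_u = 50 meV for illustration.

hv = np.linspace(1.50, 1.58, 20)                  # photon energy (eV)
alpha = 1e3 * np.exp(hv / 0.050)                  # synthetic absorption tail

slope, _ = np.polyfit(hv, np.log(alpha), 1)       # slope = 1/E_u (1/eV)
E_u_meV = 1000.0 / slope
print(f"E_u = {E_u_meV:.1f} meV")                 # recovers ~50 meV
```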
The work function (W_F) of the WO3/CPTA film is −4.05 eV, higher than that of the WO3 film (−4.25 eV). Moreover, the W_F of the PVK film also increases from −4.00 eV to −3.73 eV upon CPTA modification, which might be attributed to the chemical interactions between CPTA and PVK. The energy-level alignment diagram of the films is shown in Figure 3III, and the corresponding parameters are listed in Table S2. Furthermore, the calculated built-in electric field (Δμ_u) increases from 0.25 eV for the WO3-based sample to 0.32 eV for the WO3/CPTA-based sample, consistent with the results of Kelvin probe force microscopy (KPFM) (Figure S9 and Table S3; Δμ_k increases from 0.27 eV for the WO3-based device to 0.46 eV for the WO3/CPTA-based device). Therefore, the CPTA modification can facilitate electron extraction and suppress the accumulation of charge carriers at the buried interface. [45,46] Combining the XPS, HR-SRPES, and KPFM results, CPTA can bond with the W6+ of WO3 and the Pb2+ of the PVK layer, resulting in effective defect passivation and alleviation of trap-assisted nonradiative recombination at the interface. Furthermore, the built-in electric field increases significantly after CPTA modification, which promotes interfacial carrier transport and inhibits interfacial carrier accumulation. All these factors contribute to the elimination of the light-soaking effect. Subsequently, we investigated the effect of CPTA on the electron extraction and electron-hole recombination dynamics at the WO3/PVK interface through photoluminescence (PL) and time-resolved PL spectroscopies. [47] The steady-state PL peak of the PVK film on WO3/CPTA is quenched much more strongly than that of the film formed on the pristine WO3 ETL (Figure 4A), indicating more efficient electron transfer from the PVK layer to the WO3/CPTA layer than to the pure WO3 layer, which is beneficial for suppressing the light-soaking effect. [34] Moreover, the PL peak blue-shifts from 790 nm for the control sample to 785 nm for the CPTA-modified one, suggesting effective defect passivation of the PVK and ETL upon CPTA modification. [42,44] This conclusion is reinforced by the time-resolved PL spectra in Figure 4B and Table S4. The reduction of the carrier lifetime from 167 ns for the WO3-based PVK to 12.2 ns for the WO3/CPTA-based PVK demonstrates efficient carrier transfer from the PVK to the CPTA-modified WO3. To further understand the charge recombination mechanism, we measured the short-circuit current density (J_SC) and open-circuit voltage (V_OC) of the devices under different incident light intensities (I). According to the relation J_SC ∝ I^α, the linear dependence between J_SC and I in logarithmic form is shown in Figure 4C. The slopes of both devices are close to 1, which implies that bimolecular recombination in both devices is negligible. [16] Besides, the dependence of V_OC on log(I) is presented in Figure 4D. The slope for the WO3/CPTA-based device is 1.43 kT/q, lower than that of the WO3-based device (1.61 kT/q), indicating that trap-assisted monomolecular recombination is significantly reduced in the WO3/CPTA-based devices. [48] The reduction in trap-assisted monomolecular recombination might be attributed to trap passivation through chemical interactions between WO3 and CPTA and/or between PVK and CPTA. The defect density was assessed by the space-charge-limited current (SCLC) method.
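The slopes quoted above (1.43 vs. 1.61 kT/q) are the light-ideality factors obtained from V_OC versus the natural logarithm of light intensity. A short sketch of that extraction, with synthetic data assuming an ideality factor of 1.4:

```python
import numpy as np

# Extract the light-ideality factor n from Voc = (n*k*T/q) * ln(I) + const.
# The synthetic data below assume n = 1.4; the fitted slope is in kT/q units.

kT_q = 0.02585                                   # thermal voltage at 300 K (V)
I = np.array([0.1, 0.2, 0.5, 1.0])               # suns (relative intensity)
Voc = 1.0 + 1.4 * kT_q * np.log(I)               # synthetic Voc data (V)

slope, _ = np.polyfit(np.log(I), Voc, 1)
print(f"slope = {slope / kT_q:.2f} kT/q")        # recovers ~1.40
```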
[49,50] We fabricated electron-only devices with the configuration ITO/WO3 (or WO3/CPTA)/CsFA-based PVK/PCBM/Au; the corresponding I-V curves are presented in Figure 4E. The trap-filled limit voltage (V_TFL) values of the devices based on WO3 and WO3/CPTA are 0.47 and 0.23 V, respectively. The calculated trap density (N_t) for the WO3/CPTA-based device is 5.4 × 10¹⁵ cm⁻³ (Table S5), roughly half that of the WO3-based device (1.1 × 10¹⁶ cm⁻³). The obvious reduction of trap density indicates the effective passivation of defects by CPTA. Electrochemical impedance spectroscopy was further employed to unveil the interfacial charge transport behavior. [51] The Nyquist plots of the two devices tested in the dark are shown in Figure 4F, and the fitting results are summarized in Table S6. The WO3/CPTA-based device exhibits a larger recombination resistance (R_rec) in the low-frequency range and a smaller transfer resistance (R_tr) in the high-frequency range in comparison with the WO3-based cell, indicating more efficient charge transport, which alleviates the accumulation of carriers at the interface and eliminates the light-soaking effect. To verify the influence of CPTA modification on the photovoltaic performance, we comparatively studied the photovoltaic properties of the WO3- and WO3/CPTA-based PSC devices. As shown in Figure 5A, the cell with the pristine WO3 ETL exhibits a stabilized PCE of 17.4%, along with a V_OC of 0.93 V, J_SC of 24.7 mA·cm⁻², and fill factor (FF) of 75.9%. In contrast, the device with CPTA modification delivers a much higher PCE of 20.5%, together with a V_OC of 1.03 V, J_SC of 24.8 mA·cm⁻², and FF of 80.4% under identical conditions, which is among the highest efficiencies of PSCs with WO3-based ETMs (Table S7). The improvements in V_OC and FF might be attributed to better contact between the ETL and the PVK layer after CPTA modification. The external quantum efficiency was also measured (Figure 5B), and the integrated J_SC values of the WO3- and WO3/CPTA-based devices are 23.50 and 23.63 mA·cm⁻², respectively. The stabilized power output was tested under continuous AM 1.5 G illumination (Figure 5C). The stabilized PCEs are 16.3% for the control cell and 19.6% for the CPTA-modified one, which match well with the performance deduced from the J-V curves. To confirm the reproducibility, 50 individual PSCs of each type were fabricated, and the statistical histograms of their PV parameters are shown in Figures 5D and S10. The WO3/CPTA-based PSCs have a narrower PCE distribution, highlighting the improved reliability of the device with CPTA modification. Finally, we comparatively studied the stability of the unencapsulated devices based on the WO3 and WO3/CPTA ETLs. Figures 6A and S11 show the dark stability in an N2 atmosphere. About 81% of the original PCE is maintained for the WO3/CPTA-based device after 3000 h of storage, while the PCE of the WO3-based device drops to ~53% of its initial value. The light stability of the devices was also evaluated under continuous light irradiation (white-light LED, 1 sun) for 500 h. As illustrated in Figures 6B and S12, the device based on the WO3/CPTA ETL maintains ~90% of its original PCE, versus only ~82% for the WO3-based device. The thermal stability of the devices was further examined under continuous heating at 85°C for 1000 h in an N2 atmosphere.
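The trap densities quoted above follow from the standard SCLC relation N_t = 2εε₀V_TFL/(qL²). A quick numerical sketch follows; the relative permittivity and film thickness are illustrative assumptions rather than the paper's values, so the outputs match the reported densities only in order of magnitude.

```python
# Trap density from the trap-filled limit voltage via the standard SCLC
# relation N_t = 2 * eps0 * eps_r * V_TFL / (q * L**2). The permittivity and
# film thickness below are illustrative assumptions, not the paper's values.

EPS0 = 8.854e-12      # vacuum permittivity (F/m)
Q = 1.602e-19         # elementary charge (C)

def trap_density(v_tfl, eps_r=30.0, thickness_m=500e-9):
    n_t_per_m3 = 2.0 * EPS0 * eps_r * v_tfl / (Q * thickness_m**2)
    return n_t_per_m3 * 1e-6            # convert to cm^-3

for label, v in [("WO3", 0.47), ("WO3/CPTA", 0.23)]:
    print(f"{label}: V_TFL = {v} V -> N_t ~ {trap_density(v):.2e} cm^-3")
```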
We adopted poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine] as the hole transport material, [52] and the evolution of the normalized performance parameters is displayed in Figures 6C and S13. The PCE of the WO3/CPTA-based device remains at 77% of its initial efficiency, while that of the WO3-based device decreases to 57%. The enhanced stability of the WO3/CPTA devices might be attributed to the reinforced interface upon CPTA modification and/or the reduction of defect density in the bulk PVK films. (Note to figures: all data of the PSCs were collected after illumination stabilized for 20 min. Abbreviations: CPTA, C60 pyrrolidine tris-acid; EQE, external quantum efficiency; ETL, electron transport layer; FF, fill factor; PCE, power conversion efficiency; SPO, stabilized power output.)

| CONCLUSION

In summary, CPTA has been introduced to modify the interface between the WO3 ETL and the PVK layer. Chemical bonds form between CPTA and both WO3 and PVK, markedly decreasing the trap density and thus effectively suppressing trap-assisted nonradiative recombination. Moreover, the introduction of CPTA boosts the built-in electric field, which facilitates charge transfer from the PVK layer to the WO3 ETL and significantly inhibits the accumulation of charge carriers at the WO3/PVK interface. These effects jointly lead to the elimination of the light-soaking effect in the present PSCs. Hence, a decent PCE of 20.5% along with excellent stability has been achieved for the WO3/CPTA-based MA-free PSCs. Our work paves the way for the realization of highly efficient and stable PSCs.
5,054
2023-05-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics", "Chemistry" ]
A unified approach to weighted Hardy type inequalities on Carnot groups

We find a simple sufficient criterion on a pair of nonnegative weight functions $V(x)$ and $W(x)$ on a Carnot group $\mathbb{G}$, so that the general weighted $L^{p}$ Hardy type inequality
\[
\int_{\mathbb{G}} V(x)\, |\nabla_{\mathbb{G}} \phi(x)|^{p}\, dx \;\geq\; \int_{\mathbb{G}} W(x)\, |\phi(x)|^{p}\, dx
\]
is valid for any $\phi \in C_{0}^{\infty}(\mathbb{G})$ and $p>1$. It is worth noting here that our unifying method may be readily used both to recover most of the previously known weighted Hardy and Heisenberg-Pauli-Weyl type inequalities and to construct other new inequalities with an explicit best constant on $\mathbb{G}$. We also present some new results on two-weight $L^{p}$ Hardy type inequalities with remainder terms on a bounded domain $\Omega$ in $\mathbb{G}$ via a differential inequality.

On the Euclidean space $\mathbb{R}^{n}$, the classical Hardy inequality asserts that
\[
\int_{\mathbb{R}^{n}} |\nabla \phi|^{p}\, dx \;\geq\; \left(\frac{n-p}{p}\right)^{p} \int_{\mathbb{R}^{n}} \frac{|\phi|^{p}}{|x|^{p}}\, dx \tag{1}
\]
holds for every $\phi \in C_{0}^{\infty}(\mathbb{R}^{n})$ if $1 \leq p < n$, and for every $\phi \in C_{0}^{\infty}(\mathbb{R}^{n}\setminus\{0\})$ if $n < p < \infty$. Here the subscript zero signifies compact support. It is also known that the positive constant on the right-hand side of (1) is sharp but, for $p > 1$, that equality is only possible if $\phi = 0$ a.e. In the critical case $n = p$ an inequality of type (1) fails for every positive constant on the right-hand side, while the sharp inequality
\[
\int_{B_{1}(0)} |\nabla \phi|^{n}\, dx \;\geq\; \left(\frac{n-1}{n}\right)^{n} \int_{B_{1}(0)} \frac{|\phi|^{n}}{\left(|x| \log \frac{1}{|x|}\right)^{n}}\, dx \tag{2}
\]
is valid for all $\phi \in C_{0}^{\infty}(B_{1}(0))$, where $B_{1}(0)$ is the unit ball in $\mathbb{R}^{n}$ centered at the origin; see Edmunds and Triebel [12]. A stronger version of (2) was then presented by Adimurthi and Sandeep in [2]. On the other hand, in the case of sub-Riemannian spaces, especially on Carnot groups $\mathbb{G}$, Hardy type inequalities have also been intensively investigated; see [11], [22], [34], [10], [27], [26], [29], [36]. For instance, D'Ambrosio in [11] and Goldstein and Kombe in [22] established, among other things, the following $L^{p}$ Hardy type inequality on polarizable Carnot groups $\mathbb{G}$:
\[
\int_{\mathbb{G}} |\nabla_{\mathbb{G}} \phi|^{p}\, dx \;\geq\; \left(\frac{Q-p}{p}\right)^{p} \int_{\mathbb{G}} |\nabla_{\mathbb{G}} N|^{p}\, \frac{|\phi|^{p}}{N^{p}}\, dx \tag{3}
\]
for all $\phi \in C_{0}^{\infty}(\mathbb{G}\setminus\{0\})$, provided that $Q \geq 3$ and $1 < p < Q$. Here, $Q$ is the homogeneous dimension of $\mathbb{G}$, $N: \mathbb{G} \to [0,\infty)$ is the homogeneous norm associated with the fundamental solution for the sub-Laplacian, $\nabla_{\mathbb{G}} = (X_{1}, \ldots, X_{m})$ is the horizontal gradient on $\mathbb{G}$, and $X_{1}, \ldots, X_{m}$ are the generators of $\mathbb{G}$ (see Section 2 for definitions and preliminaries). Later, in [27], Kombe discovered the sharp weighted Hardy inequality for the case $p = 2$ on general Carnot groups $\mathbb{G}$, having the form
\[
\int_{\mathbb{G}} N^{\alpha} |\nabla_{\mathbb{G}} \phi|^{2}\, dx \;\geq\; \left(\frac{Q+\alpha-2}{2}\right)^{2} \int_{\mathbb{G}} N^{\alpha} |\nabla_{\mathbb{G}} N|^{2}\, \frac{|\phi|^{2}}{N^{2}}\, dx \tag{4}
\]
where $\phi \in C_{0}^{\infty}(\mathbb{G}\setminus\{0\})$ and $\alpha \in \mathbb{R}$, $Q \geq 3$, $2 < Q+\alpha$. Niu and Wang [34] then extended the inequality (4) to the $L^{p}$ case on polarizable Carnot groups $\mathbb{G}$ and showed that an $L^{p}$ analogue of (4), labeled (5), holds for any $\phi \in C_{0}^{\infty}(\mathbb{G}\setminus\{0\})$ whenever $\alpha \in \mathbb{R}$, $1 < p < Q+\alpha$, $\gamma > -1$. We note that all the constants appearing in (3), (4) and (5) are sharp but are never achieved. We also mention that Jin and Shen [26] recently proved a weighted $L^{p}$ Hardy inequality on general Carnot groups $\mathbb{G}$ by using a special class of weighted $p$-sub-Laplacians and the corresponding fundamental solution. More recently, Lian obtained a similar result on the same groups with a sharp constant; see [29].
All of these works motivate us to investigate a constructive method for deriving Hardy type inequalities with different weights on $\mathbb{G}$. In this direction, we provide an approach that recovers and improves most of the Hardy type inequalities that have appeared to date. More precisely, we verify that if $V \in C^{1}(\mathbb{G})$ and $W \in L^{1}_{loc}(\mathbb{G})$ are nonnegative functions and $\Phi \in C^{\infty}(\mathbb{G})$ is a positive function such that the differential inequality
\[
-\nabla_{\mathbb{G}} \cdot \left( V\, |\nabla_{\mathbb{G}} \Phi|^{p-2}\, \nabla_{\mathbb{G}} \Phi \right) \;\geq\; W\, \Phi^{p-1}
\]
holds almost everywhere in a general Carnot group $\mathbb{G}$, then the inequality
\[
\int_{\mathbb{G}} V\, |\nabla_{\mathbb{G}} \phi|^{p}\, dx \;\geq\; \int_{\mathbb{G}} W\, |\phi|^{p}\, dx
\]
is valid, with a nonnegative remainder term, for every $\phi \in C_{0}^{\infty}(\mathbb{G})$, where $p \geq 2$ and $c_{p} > 0$. It is worth emphasizing here that one can readily obtain as many weighted Hardy type inequalities as one can construct a weight function $V$ and a function $\Phi$ satisfying the above hypotheses (see the applications of Theorem 3.1). We remark that a similar inequality with a different nonnegative remainder term also exists for the case $1 < p < 2$ (see Theorem 3.1). We also give new results on two-weight $L^{p}$ Hardy type inequalities with remainders on a bounded domain $\Omega$ in polarizable Carnot groups $\mathbb{G}$. The primary tool which we employ in constructing these types of inequalities is a differential inequality involving a nonnegative general weight function $V$, the homogeneous norm $N$, and a positive smooth function $\delta$ (see Theorem 4.1). We show some concrete examples by specializing the functions $V$ and $\delta$ (see the applications of Theorem 4.1).

Preliminaries and notations. We first give an account of some of the basic definitions, terminology and background results of analysis on Carnot groups $\mathbb{G}$ that will be used throughout the article. For further details on this topic we refer the interested reader to [3], [4], [7], [15], [17], [32], and the references therein. A Carnot group is a connected, simply connected, nilpotent Lie group $\mathbb{G} \equiv (\mathbb{R}^{n}, \cdot)$ whose Lie algebra $\mathcal{G}$ admits a stratification; that is, there exist linear subspaces $V_{1}, \ldots, V_{s}$ of $\mathcal{G}$ such that
\[
\mathcal{G} = V_{1} \oplus \cdots \oplus V_{s}, \qquad [V_{1}, V_{i}] = V_{i+1} \ (1 \leq i \leq s-1), \qquad [V_{1}, V_{s}] = \{0\}.
\]
This defines an $s$-step Carnot group, and the integer $s \geq 1$ is called the step of $\mathbb{G}$. Via the exponential map, it is possible to induce on $\mathbb{G}$ a family of automorphisms of the group, called dilations, $\delta_{\lambda}: \mathbb{R}^{n} \to \mathbb{R}^{n}$ ($\lambda > 0$), such that
\[
\delta_{\lambda}(x_{1}, \ldots, x_{n}) = (\lambda^{\alpha_{1}} x_{1}, \ldots, \lambda^{\alpha_{n}} x_{n}),
\]
where $1 = \alpha_{1} = \cdots = \alpha_{m} < \alpha_{m+1} \leq \cdots \leq \alpha_{n}$ are integers and $m = \dim(V_{1})$. The group law can be written in the form
\[
x \cdot y = x + y + P(x, y),
\]
where $P: \mathbb{R}^{n} \times \mathbb{R}^{n} \to \mathbb{R}^{n}$ has polynomial components and $P_{1} = \cdots = P_{m} = 0$ (see [32], Chapter 12, Section 5). Note that in these exponential coordinates the inverse $x^{-1}$ of an element $x \in \mathbb{G}$ has the form $x^{-1} = -x$. Let $X_{1}, \ldots, X_{m}$ be a family of left invariant vector fields that form an orthonormal basis of $V_{1} \equiv \mathbb{R}^{m}$ at the origin, that is, $X_{j}(0) = \partial_{x_{j}}|_{0}$. The vector fields $X_{j}$ have polynomial coefficients and can be assumed to be of the form
\[
X_{j} = \partial_{x_{j}} + \sum_{i=m+1}^{n} a_{ij}(x)\, \partial_{x_{i}},
\]
where each polynomial $a_{ij}$ is homogeneous with respect to the dilations of the group, that is, $a_{ij}(\delta_{\lambda}(x)) = \lambda^{\alpha_{i}-1} a_{ij}(x)$. Then, the Carnot-Caratheodory distance $d_{cc}(x, y)$ between two points $x, y \in \mathbb{G}$ is defined to be the infimum of the lengths $\int_{a}^{b} \langle \dot{\gamma}(t), \dot{\gamma}(t) \rangle^{1/2}\, dt$ of all horizontal curves $\gamma: [a, b] \to \mathbb{G}$ such that $\gamma(a) = x$ and $\gamma(b) = y$. Notice that $d_{cc}$ is a homogeneous norm, satisfies the left invariance property $d_{cc}(z \cdot x, z \cdot y) = d_{cc}(x, y)$ for all $x, y, z \in \mathbb{G}$, and is homogeneous of degree one with respect to the dilation $\delta_{\lambda}$. The Carnot-Caratheodory balls are defined by $B(x, R) = \{ y \in \mathbb{G} \mid d_{cc}(x, y) < R \}$. The $n$-dimensional Lebesgue measure, $L^{n}$, is the Haar measure of the group $\mathbb{G}$, and
\[
Q = \sum_{i=1}^{s} i \, \dim(V_{i})
\]
is the homogeneous dimension of $\mathbb{G}$. The nonlinear operator
\[
\Delta_{\mathbb{G},p}\, u = \nabla_{\mathbb{G}} \cdot \left( |\nabla_{\mathbb{G}} u|^{p-2}\, \nabla_{\mathbb{G}} u \right), \qquad 1 < p < \infty,
\]
is the $p$-sub-Laplacian on the Carnot group $\mathbb{G}$. If $p = 2$ then we have the linear sub-Laplacian $\Delta_{\mathbb{G}} = \sum_{i=1}^{m} X_{i}^{2}$, where $\nabla_{\mathbb{G}} = (X_{1}, \ldots, X_{m})$ is the horizontal gradient on $\mathbb{G}$ and $X_{1}, \ldots, X_{m}$ are the generators of $\mathbb{G}$.
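As a concrete illustration of these definitions (standard material, not taken from the paper), the first Heisenberg group $\mathbb{H}^{1}$ can be written out explicitly; the normalization of the group law and vector fields below follows one common convention.

```latex
% Illustrative standard example: the first Heisenberg group H^1 = (R^3, .)
% with coordinates (x, y, t), a 2-step Carnot group.
\[
(x,y,t)\cdot(x',y',t') = \bigl(x+x',\; y+y',\; t+t'+2(x'y-xy')\bigr),
\]
\[
X = \partial_x + 2y\,\partial_t, \qquad Y = \partial_y - 2x\,\partial_t, \qquad
[X,Y] = -4\,\partial_t,
\]
% so V_1 = span{X, Y}, V_2 = span{\partial_t}, the dilations are
% \delta_\lambda(x,y,t) = (\lambda x, \lambda y, \lambda^2 t), and the
% homogeneous dimension is Q = 1*2 + 2*1 = 4. The Kaplan norm
\[
N(x,y,t) = \bigl((x^2+y^2)^2 + t^2\bigr)^{1/4}
\]
% is, up to a multiplicative constant, the homogeneous norm associated with
% Folland's fundamental solution of the sub-Laplacian on H^1.
```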
The fundamental solution $u$ for $\Delta_{\mathbb{G}}$ is defined to be a weak solution to the equation $-\Delta_{\mathbb{G}} u = \delta_{0}$, where $\delta_{0}$ denotes the Dirac distribution with singularity at the neutral element $0$ of $\mathbb{G}$. In [14], Folland proved that in any Carnot group $\mathbb{G}$, there exists a homogeneous norm $N$ such that $u = N^{2-Q}$ is harmonic in $\mathbb{G}\setminus\{0\}$ and is a positive multiple of the fundamental solution for $\Delta_{\mathbb{G}}$. We now start with $u$, set $N = u^{1/(2-Q)}$, and recall that a homogeneous norm on $\mathbb{G}$ is a continuous function $N: \mathbb{G} \to [0,\infty)$, smooth away from the origin, which satisfies the conditions $N(\delta_{\lambda}(x)) = \lambda N(x)$, $N(x^{-1}) = N(x)$ and $N(x) = 0$ iff $x = 0$. Using the homogeneous norm $N$, we define the $N$-ball $B_{N}$ in $\mathbb{G}$ with center zero and radius $R$ by $B_{N} = \{ x \in \mathbb{G} \mid N(x) < R \}$. A Carnot group $\mathbb{G}$ is called polarizable if the homogeneous norm $N = u^{1/(2-Q)}$, associated to Folland's solution $u$ for the sub-Laplacian $\Delta_{\mathbb{G}}$, satisfies the $\infty$-sub-Laplace equation
\[
\Delta_{\mathbb{G},\infty} N := \tfrac{1}{2}\, \nabla_{\mathbb{G}} |\nabla_{\mathbb{G}} N|^{2} \cdot \nabla_{\mathbb{G}} N = 0 \quad \text{in } \mathbb{G}\setminus\{0\}.
\]
This class of groups was introduced by Balogh and Tyson [3] and admits the analogue of polar coordinates. It is known that Euclidean space, the Heisenberg group $\mathbb{H}^{n}$ and Kaplan's H-type groups are polarizable Carnot groups. In [3], the same authors also proved that for every $1 < p < \infty$, $p \neq Q$, the function $u_{p} = N^{(p-Q)/(p-1)}$ is $p$-harmonic in $\mathbb{G}\setminus\{0\}$. Moreover, for each $1 < p < \infty$ there exists a constant $l_{p} > 0$ such that $-\Delta_{\mathbb{G},p} u_{p} = l_{p} \delta_{0}$ in the sense of distributions.

Weighted Hardy type inequalities. Here is the main result of this section.

Theorem 3.1. Let $V \in C^{1}(\mathbb{G})$ and $W \in L^{1}_{loc}(\mathbb{G})$ be nonnegative functions and $\Phi \in C^{\infty}(\mathbb{G})$ be a positive function such that
\[
-\nabla_{\mathbb{G}} \cdot \left( V\, |\nabla_{\mathbb{G}} \Phi|^{p-2}\, \nabla_{\mathbb{G}} \Phi \right) \;\geq\; W\, \Phi^{p-1} \tag{6}
\]
almost everywhere in a general Carnot group $\mathbb{G}$. There exists a positive constant $c_{p}$ depending only on $p$ such that, if $p \geq 2$, then
\[
\int_{\mathbb{G}} V\, |\nabla_{\mathbb{G}} \phi|^{p}\, dx \;\geq\; \int_{\mathbb{G}} W\, |\phi|^{p}\, dx + c_{p} \int_{\mathbb{G}} V\, \Phi^{p} \left| \nabla_{\mathbb{G}} \!\left( \frac{\phi}{\Phi} \right) \right|^{p} dx, \tag{7}
\]
and if $1 < p < 2$, then the analogous inequality with the remainder term coming from (10) below holds, for all $\phi \in C_{0}^{\infty}(\mathbb{G})$.

Proof. We recall the following elementary inequalities that will be used in this article (see, for example, [30]). For any $1 < p < \infty$ there exists a positive constant $c_{p}$ depending only on $p$ such that for all $a, b \in \mathbb{R}^{n}$ we have
\[
|a+b|^{p} \geq |a|^{p} + p\, |a|^{p-2}\, a \cdot b + c_{p}\, |b|^{p}, \quad \text{for } p \geq 2, \tag{9}
\]
and
\[
|a+b|^{p} \geq |a|^{p} + p\, |a|^{p-2}\, a \cdot b + c_{p}\, \frac{|b|^{2}}{(|a| + |b|)^{2-p}}, \quad \text{for } 1 < p < 2. \tag{10}
\]
Let $\varphi$ be a new variable, $\varphi := \phi/\Phi$, where $0 < \Phi \in C^{\infty}(\mathbb{G})$ and $\phi \in C_{0}^{\infty}(\mathbb{G})$. Applying the inequality (9) with $a = \varphi \nabla_{\mathbb{G}} \Phi$ and $b = \Phi \nabla_{\mathbb{G}} \varphi$, we obtain a pointwise lower bound (11) for $|\nabla_{\mathbb{G}} \phi|^{p}$. Multiplying the inequality (11) by $V(x)$ on both sides and integrating by parts over $\mathbb{G}$ yields an integral inequality in which the middle term becomes $-\int_{\mathbb{G}} \nabla_{\mathbb{G}} \cdot \left( V \Phi\, |\nabla_{\mathbb{G}} \Phi|^{p-2} \nabla_{\mathbb{G}} \Phi \right) \varphi^{p}\, dx$. As a next step, by using the weighted $p$-Laplacian inequality (6), we conclude that
\[
\int_{\mathbb{G}} V\, |\nabla_{\mathbb{G}} \phi|^{p}\, dx \;\geq\; \int_{\mathbb{G}} W\, (\Phi \varphi)^{p}\, dx + c_{p} \int_{\mathbb{G}} V\, \Phi^{p}\, |\nabla_{\mathbb{G}} \varphi|^{p}\, dx.
\]
Making the change of variable $\varphi = \phi/\Phi$ in the above integrals, we obtain the desired result (7). Note that Theorem 3.1 also holds for $1 < p < 2$; in this case we use the inequality (10) with the same choices of $a$ and $b$ as in the above derivation. This finishes the proof of Theorem 3.1.

Applications of Theorem 3.1. Let $\epsilon > 0$ be given. To make the following arguments rigorous we should replace the function $N$ with its regularization $N_{\epsilon} := (u + \epsilon)^{1/(2-Q)}$ and, after the computation, take the limit as $\epsilon \to 0$. However, for the sake of simplicity we will proceed formally. As we have already mentioned, most of the known Hardy type inequalities on polarizable Carnot groups $\mathbb{G}$, such as (3), (4) and (5), as well as other new results, can be obtained via the above approach by making suitable choices for $V$ and $\Phi$. As a first example, note that a suitable power-type choice of the pair $(V, \Phi)$ easily yields the following important result due to J. Wang and P. Niu [34].

Corollary 1. Let $\mathbb{G}$ be a polarizable Carnot group with homogeneous norm $N = u^{1/(2-Q)}$ and let $\alpha \in \mathbb{R}$, $1 < p < Q+\alpha$, $\gamma > -1$. Then the weighted $L^{p}$ Hardy type inequality (5) holds.

With another choice of $(V, \Phi)$ on $B_{N}$, we recover the weighted $L^{p}$ Hardy type inequality (3.40) presented in [11].
Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q and let Q = p > 1, α < −1. Then the inequality is valid for all φ ∈ C ∞ 0 (B N ). One can however apply the Theorem 3.1 to obtain other new inequalities on G. For instance, let us take then we readily get the following result. Corollary 3. Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q and let α ∈ R, Q + α > p > 1. Then the inequality On the other hand, by considering the functions we obtain Carnot version of the inequality (5.1) proved in [31] for the Euclidean context. Corollary 4. Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q and let 1 < p < Q, α > 1. Then for every φ ∈ C ∞ 0 (G), one has Another application of Theorem 3.1 with the special functions leads us to the subsequent improved Carnot analogue of the inequality (42) established in [19] for the Euclidean setting. We now take the units on B N , and we have the following Hardy type inequality (12) that was first proved in [24] for the Heisenberg group H n and then in [11] for polarizable Carnot groups G by slightly different methods. Corollary 6. Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q and let Q = p > 1. Then for every φ ∈ C ∞ 0 (B N ), one has Uncertainty Principle Inequalities. The first and most famous uncertainty principle goes back to Heisenberg's seminal work, which was developed in the context of quantum mechanics [25]. The mathematical details of this principle were provided by Pauli and Weyl [35] and hence it is sometimes referred to as the Heisenberg-Pauli-Weyl inequality. In the Euclidean setting, the uncertainty principle inequality with sharp constant can be stated as where φ ∈ C ∞ 0 (R n ) . There exists much literature devoted to deriving various uncertainty principle type inequalities in the Euclidean and other settings (see [16], [27]). For instance, in polarizable Carnot groups G, Kombe [27] showed that for any φ ∈ C ∞ 0 (G) , the following inequality is valid We should mention that Theorem 3.1 does not only give us weighted Hardy inequalities but also gives the Heisenberg-Pauli-Weyl type inequalities with the best constant. For instance, we now consider the pair where α > 0, and we immediately obtain Then the above inequality takes the form Aα 2 + Bα + C ≤ 0 for every α ∈ R which implies that B 2 − 4AC ≤ 0. In other words, we have the inequality (14). Now we make the following special choices of functions V and Φ in Theorem 3.1 where α > 0. We get Arguing as above, we have the following version of the Heisenberg uncertainty principle inequality. Corollary 7. Let G be a polarizable Carnot group with homogeneous norm N = u Finally, let us consider the pair V ≡ 1 and Φ = e −αN , α > 0 then we get following inequality. Corollary 8. Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q . Then for every φ ∈ C ∞ 0 (G), one has WEIGHTED HARDY TYPE INEQUALITIES ON CARNOT GROUPS 2017 4. Two-Weight Hardy type inequalities with remainders. We now prove an improved two-weight L p Hardy type inequality via a differential inequality involving a general nonnegative weight function V , the homogeneous norm N and a positive smooth function δ. Theorem 4.1. Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q and let Ω be a bounded domain with smooth boundary ∂Ω in G. Assume V is a nonnegative C 1 −function and δ is a positive C ∞ −function such that Proof. For any φ ∈ C ∞ 0 (Ω) we set ϕ := N −γ φ with γ < 0, a constant that will be chosen later. 
By direct computation we have Applying the inequality (9) with a = γN γ−1 ϕ∇ G N and b = N γ ∇ G ϕ yields Multiplying both sides of (16) by V (x) N α and then using integration by parts gives Taking into account that ∆ G N = (Q − 1) |∇ G N | 2 N and ∆ G,∞ N = 0 we obtain on a bounded domain Ω with smooth boundary in G, where R > sup x∈Ω N (x) . It is obvious that they fulfill all hypotheses in the Theorem 4.1, hence we have the weighted L p Hardy type inequality containing a logarithmic remainder. Corollary 9. Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q and let Ω be a bounded domain with smooth boundary ∂Ω in G. Then for all φ ∈ C ∞ 0 (Ω), we have where Q + α > p ≥ 2, α ∈ R, c p > 0 and R > sup x∈Ω N (x) . Remark 3. In the Abelian case, when G = R n , with the ordinary dilations, one has G = V 1 = R n so that Q = n. Now it is clear that the above inequality with the homogeneous norm N (x) = |x| and α = 0 recovers the inequality (1.4) proved by Adimurthi et al. in [1]. We now apply Theorem 4.1 with the pair V ≡ 1 and δ = log(log R N ), R > e sup x∈Ω N (x) , and we obtain the following result including a different logarithmic remainder. Corollary 10. Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q and let Ω be a bounded domain with smooth boundary ∂Ω in G. Then for all φ ∈ C ∞ 0 (Ω), we have where Q + α > p ≥ 2, α ∈ R, c p > 0 and R > e sup x∈Ω N (x) . On the other hand, by making the choices V = e N and δ = e −N , we derive the subsequent two-weight L p Hardy type inequality involving two nonnegative remainders. Corollary 11. Let G be a polarizable Carnot group with homogeneous norm N = u 1 2−Q and let Ω be a bounded domain with smooth boundary ∂Ω in G. Then for all φ ∈ C ∞ 0 (Ω), we have where Q + α > p ≥ 2, α ∈ R and c p > 0. Another consequence of the Theorem 4.1 with the special functions V ≡ 1 and δ = R − N on the N -ball B N in G is the following inequality. Remark 4. The lack of regularity on the above choices can be readily handled by replacing the function N with a suitable N and then passing to the limit as −→ 0.
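The proofs of Theorems 3.1 and 4.1 both rest on the elementary vector inequality (9), whose standard form for p >= 2 (see [30]) is |a+b|^p >= |a|^p + p|a|^{p-2} a.b + c_p |b|^p for some c_p > 0. The following sketch is only a numerical sanity check of that inequality, not a proof, and the printed value is an empirical lower bound on c_p rather than the sharp constant.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3.0  # any exponent p >= 2

worst = np.inf
for _ in range(100_000):
    n = int(rng.integers(1, 6))
    a, b = rng.normal(size=n), rng.normal(size=n)
    na, nb, nab = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(a + b)
    if nb < 1e-12:
        continue
    # Rearranged inequality (9): c_p <= (|a+b|^p - |a|^p - p|a|^{p-2} a.b) / |b|^p
    lhs = nab**p - na**p - p * na**(p - 2) * np.dot(a, b)
    worst = min(worst, lhs / nb**p)

print(f"empirical c_p lower bound for p={p}: {worst:.4f}")  # stays positive
```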
4,522
2016-12-01T00:00:00.000
[ "Mathematics" ]
Magneto-Optical Characteristics of Streptavidin-Coated Fe3O4@Au Core-Shell Nanoparticles for Potential Applications on Biomedical Assays Recently, gold-coated magnetic nanoparticles have drawn the interest of researchers due to their unique magneto-plasmonic characteristics. Previous research has found that the magneto-optical Faraday effect of gold-coated magnetic nanoparticles can be effectively enhanced because of the surface plasmon resonance of the gold shell. Furthermore, gold-coated magnetic nanoparticles are ideal for biomedical applications because of their high stability and biocompatibility. In this work, we synthesized Fe3O4@Au core-shell nanoparticles and coated streptavidin (STA) on the surface. Streptavidin is a protein which can selectively bind to biotin with a strong affinity. STA is widely used in biotechnology research including enzyme-linked immunosorbent assay (ELISA), time-resolved immunofluorescence (TRFIA), biosensors, and targeted pharmaceuticals. The Faraday magneto-optical characteristics of the biofunctionalized Fe3O4@Au nanoparticles were measured and studied. We showed that the streptavidin-coated Fe3O4@Au nanoparticles still possessed the enhanced magneto-optical Faraday effect. As a result, the possibility of using biofunctionalized Fe3O4@Au nanoparticles for magneto-optical biomedical assays should be explored. Iron oxide MNPs are the most commonly used MNPs due to their superparamagnetic stability and biocompatibility 16 . The properties of Fe 3 O 4 MNPs, such as size and shape, can be altered with different synthesis methods 17 and as a result the MNPs can be specialized for different applications. Several synthesis routes have been reported including thermal decomposition 18 , co-precipitation 19 , and hydrothermal synthesis 20 . Each technique fits the demands of different biomedicine applications. Generally, the particle size of Fe 3 O 4 MNPs is affected by pH variation 21 , temperature 22 , and stirring rate 23 during the synthesis process. To increase the stability and biocompatibility, the surface of the MNP generally needs to be modified with some noble metal or polymer. Several methods to modify MNPs made of iron oxide have been reported 3,4,24,25 . Gold, a noble metal with good biocompatibility, is commonly used in biomedical applications. Due to their biocompatibility gold coated MNPs have been developed and widely studied 26,27 . Moreover, gold coated MNPs simultaneously possess magnetic and plasmonic characteristics. Jain et al. reported that the magneto-optical Faraday effect in gold-coated iron oxide nanocrystals was enhanced due to surface plasmon resonance enhanced magneto-optics (SuPREMO) 28 . However, the nanoparticle surface generally needs to be modified with biomaterials or proteins for applications in biomedicine. It is well known that surface plasmon resonance (SPR) is very sensitive to the surface state of the nanoparticle. The SPR of a nanoparticle is highly responsive to small changes in the local refraction index 29 . Hence, the surface modification of biomaterials certainly alters the characteristics of the SPR. In this work, we synthesized Fe 3 O 4 @Au core/shell magnetic nanoparticles and coated their surface with streptavidin (STA) to study how the addition of STA impacted the magneto-optical Faraday effect. STA is a widely used biomaterial for developing new biomedical methods because the conjugation of STA and biotin is very strong. It is commonly used to investigate the quantification process with biotin. 
We experimentally demonstrated that Fe 3 O 4 @Au core-shell MNPs were still able to enhance the magneto-optical Faraday rotation even after surface modification with STA. This result suggests that SuPREMO is a promising effect to exploit in biomedical assay techniques based on the magneto-optical effect, such as the Faraday immunoassay system 30 . In our previous work 30 , we have demonstrated that the Faraday magneto-optical measurement with biofunctionalized magnetic nanoparticles (BMNs) results in a simple, convenient, and sensitive tool for assaying biomarkers. Due to the antibody-antigen interactions, BMNs conjugated with the biotargets to form large magnetic clusters over time. The magnetic characteristics of the BMNs reagent are altered as well. The Faraday rotation angle varies as a function of the size of the MNP. Therefore, we aim to observe the clustering process by measuring the Faraday effect of MNPs. Since SuPRMO MNPs possess the special characteristic of Faraday rotation enhancement, biofunctionalized SuPRMO MNPs are a potential reagent for increasing the sensitivity of the magneto-optical Faraday immunoassay technique. Figure 1a shows the powder X-ray diffraction (XRD) patterns of Fe 3 O 4 and Fe 3 O 4 @Au core-shell MNPs. The diffraction angle of the (311) peak of the raw MNPs occurs at 35.46°, which means that the composition of the MNPs is magnetite before reducing the Au shell 31 . X-ray diffraction (XRD) & UV-Vis spectrum. The XRD data showed that the synthesized particles are Fe 3 O 4 with good crystallinity. After coating the MNPS www.nature.com/scientificreports www.nature.com/scientificreports/ with an Au shell, the XRD signals of the Fe 3 O 4 core were shielded by the gold layer because of the heavy atom effect 24 . The absorbance of the synthesized particles was measured using ultraviolet-visible spectroscopy (UV-Vis) (U-2800A, HITACHI). The UV-Vis spectra (Fig. 1b) shows that the absorbance of pure Fe 3 O 4 MNPs monotonically decreased with the wavelength of light. However, the Fe 3 O 4 @Au core-shell MNPs exhibited an absorption peak at a wavelength of approximately 538.5 nm due to the SPR effect of the gold layer. After the bonding of STA, the wavelength of the absorption peak of the Fe 3 O 4 @Au-STA MNPs increased to around 550 nm. The result clearly shows that the STA modification induces a red shift of the UV-Vis spectrum. This means that the refractive index of the STA does indeed alter the SPR condition of the Fe 3 O 4 @Au MNPs. Dynamic light scattering. Figure 2 shows the hydrodynamic sizes of the Fe 3 O 4 , Fe 3 O 4 @Au core-shell, and Fe 3 O 4 @Au-STA MNPs measured by dynamic light scattering (DLS) (SZ-100Z, HORIBA). Table 1 Figure 3b shows a magnified HRTEM image near the surface of a Fe 3 O 4 @Au core-shell MNP. The crystal structure of the gold shell on the Fe 3 O 4 core can be clearly seen. The TEM image simultaneously shows the Au lattice near the particle surface and the Fe 3 O 4 lattice at the core. The Fe 3 O 4 lattice is relatively blurry because the electron beam has difficulty penetrating to the center of particle. The analysis of the energy-dispersive X-ray spectroscopy (EDS) (JEM-2010, JEOL Co. Ltd) for the Fe 3 O 4 @Au core-shell MNPs clearly revealed that the synthesized particles contained the elements Fe, O, and Au (Fig. 3c). The results of characteristic analysis proved that the Fe 3 O 4 @Au core-shell MNPs were successfully synthesized. High-resolution transmission electron microscopy (HRteM). Magnetization curve. 
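The quoted (311) diffraction angle can be converted to a lattice spacing with Bragg's law. The sketch below is our illustrative check, assuming CuKα radiation (λ ≈ 1.5406 Å, consistent with the diffractometer named in the Methods) and the literature cubic lattice constant of magnetite, a ≈ 8.396 Å, which is an added reference value, not a number from the paper.

```python
import math

lam = 1.5406         # CuK-alpha wavelength in angstroms (assumed)
two_theta = 35.46    # measured (311) diffraction angle in degrees

# Bragg's law: lambda = 2 d sin(theta)  =>  d = lambda / (2 sin(theta))
theta = math.radians(two_theta / 2.0)
d_meas = lam / (2.0 * math.sin(theta))

# Expected (311) spacing for magnetite: d = a / sqrt(h^2 + k^2 + l^2)
a_magnetite = 8.396  # literature lattice constant in angstroms (assumed)
d_ref = a_magnetite / math.sqrt(3**2 + 1**2 + 1**2)

print(f"measured d(311) = {d_meas:.3f} A, magnetite reference = {d_ref:.3f} A")
# Both come out near 2.53 A, supporting the magnetite assignment.
```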
Figure 4a shows the magnetization curve of the Fe 3 O 4 @Au MNP reagent measured by a SQUID magnetic property measurement system (MPMS, Quantum Design, Inc) at 300 K. The inset in Fig. 4a shows that there was no hysteresis in the magnetization curve even under a small magnetic field. Fe 3 O 4 @Au MNPs exhibited the characteristics of superparamagnetic material; noting that the magnetization shown is the magnetization of the MNP reagent, not the magnetization of the MNP powder. A 3-D Nanometer Scale Raman PL Microspectrometer (Tokyo Instruments, INC.) was used to determine whether STA was successfully coated www.nature.com/scientificreports www.nature.com/scientificreports/ on the surface of Fe 3 O 4 @Au MNPs. Figure 4b shows the Raman spectra of the Fe 3 O 4 @Au MNPs before and after coating STA. After STA was coated, peaks emerged in the region of 1200-1600 cm −1 with respect to the Raman signals of Fe 3 O 4 @Au MNPs. The marked peaks in Fig. 4b showed the presence of STA. Peaks at 1254 and 1279 cm −1 represent the amide III region and the peak at 1447 cm −1 represents the δ-CH 2 and δ-CH 3 bands. The Trp10, Trp7, Trp5, and Trp2 signals are at 1243, 1341, 1461, and 1580 cm −1 , respectively 32 . All these peaks are the characteristic Raman signals of STA 32 indicating that STA was successfully coated on the surface of the Fe 3 O 4 @ Au MNPs. www.nature.com/scientificreports www.nature.com/scientificreports/ faraday rotation measurement. To confirm the Faraday rotation enhancement of the Fe 3 O 4 @Au-STA MNPs, we checked the magneto-optical characteristic of the pure STA reagent. Figure 5a is the Faraday rotation spectrum of pure STA reagent (100 μg/mL) as a function of the applied magnetic field. Clearly the magneto-optical Faraday effect of the pure STA reagent was extremely weak when the applied magnetic field was less than 100 gauss. Recalling that Fig. 1b revealed that the STA modification does indeed alter the SPR condition of the Fe 3 O 4 @ Au NPs; Fig. 5b shows the Faraday rotation spectra of the Fe 3 O 4 @Au-STA and Fe 3 O 4 MNPs reagents as a function of the applied magnetic field. To exclude the influence of magnetization on the Faraday rotations, the saturation magnetizations of the measured samples were controlled (M s = 6.6 × 10 −3 emu/g for both). Figure 5b shows that the Faraday rotation of the Fe 3 O 4 @Au-STA was larger than that of Fe 3 O 4 for an applied magnetic field larger than 30 gauss. The gold layer of core shield MNPs can be seen as an optical cavity with multiple resonance modes. When the light at a corresponding frequency illuminates the cavity, the energy of that light is stored inside the cavity. The result is that the MNP inside cavity senses a stronger electromagnetic field than the MNP without a cavity. The enhanced interaction between the MNPs and light results in the larger Faraday rotation. This result proves that the Fe 3 O 4 @Au-STA MNPs still possessed the SuPREMO effect which enhances the Faraday rotation even after the STA coating was applied. The biomaterial modified magneto-plasmonic nanoparticle is promising for applications based on the magneto-optical Faraday effect. In our previous work 30 , we successfully developed a Faraday immunoassay technique based on the magneto-optical Faraday effect and biofunctionalized MNPs. Now, these experimental results suggest that biofunctionalized magneto-plasmonic nanoparticle can be exploited to improve sensitivity using the Faraday immunoassay technique. 
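An anhysteretic, superparamagnetic M(H) curve like the one in Fig. 4a is commonly summarized by fitting a Langevin function. The sketch below is a generic illustration of that analysis, not the authors' procedure: the saturation value 6.6e-3 emu/g is taken from the text, while the particle moment, field range, and synthetic noise are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

kB, T = 1.380649e-23, 300.0  # Boltzmann constant (J/K), temperature (K)

def langevin_m(B, Ms, mu):
    """Superparamagnetic magnetization: M = Ms*(coth(x) - 1/x), x = mu*B/(kB*T)."""
    x = mu * B / (kB * T)
    x = np.where(np.abs(x) < 1e-8, 1e-8, x)  # avoid division by zero at B = 0
    return Ms * (1.0 / np.tanh(x) - 1.0 / x)

# Hypothetical SQUID data: field in tesla, magnetization in emu/g
B = np.linspace(-1.0, 1.0, 201)
M_obs = langevin_m(B, 6.6e-3, 1.5e-20) \
        + np.random.default_rng(1).normal(0.0, 5e-5, B.size)

(Ms_fit, mu_fit), _ = curve_fit(langevin_m, B, M_obs, p0=[5e-3, 1e-20])
print(f"Ms = {Ms_fit:.2e} emu/g, particle moment = {mu_fit:.2e} J/T")
```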
In summary, we synthesized the Fe 3 O 4 @Au core-shell MNPs and coated particle surfaces with STA. We observed that the Fe 3 O 4 @Au-STA MNPs still possess the Faraday rotation enhancement after conjugating the biomaterial on the surface of the gold layer. The experimental results imply that the biofunctionalized Fe 3 O 4 @Au core-shell MNPs still had the effect of SuPREMO and are promising for magneto-optical biomedical applications. Methods Synthesis of the bio-functionalized core-shell Fe 3 o 4 @Au nanoparticles. In this study, iron oxide nanoparticles were prepared by co-precipitation of Fe(II) and Fe(III) first. An iron salt aqueous solution was combined with Ferric chloride (FeCl 3 .6H 2 O) and ferrous chloride (FeCl 2 .4H 2 O) at a ratio of 2:1. The iron 33 . The mixed colloid was continuously stirred during each iteration. In total 10 iterations were executed and every iteration took 20 minutes. Fe 3 O 4 @Au MNPs were obtained by centrifuging (6000 rpm, 30 min) and were then washed with DI water. The precipitate Fe 3 O 4 @Au MNPs were dispersed in ethanol (2 mL). Surface modification was needed to bind STA onto the gold surface 34 . 11-mercaptoundecanoic acid (11-MUA) can be self-assembled on the gold surface of Fe 3 O 4 @Au MNPs and provides a carboxyl group for bioconjugation. www.nature.com/scientificreports www.nature.com/scientificreports/ Finally, a liquid phase reagent containing Fe 3 O 4 @Au-STA MNPs was produced. Figure 6 shows the synthesis processes of Fe 3 O 4 @Au-STA MNPs. the faraday rotation measurement setup. The Faraday rotation measurement was performed using AC magnetic fields and lock-in technique 35,36 . The light source was a diode-pumped solid-state laser with a wavelength of 532 nm. The frequency of the AC magnetic field was set at 813 Hz of which the environment noise was relatively lower. More details of the Faraday rotation measurement can be found in 30 . The measurement samples were prepared in liquid phase and encapsulated in sample holders made of glass. X-ray diffraction (XRD) was performed using BRUKER D8 SSS diffractometer with CuKα radiation.
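The 2:1 Fe(III):Fe(II) ratio given in the synthesis above is molar. As a worked example of the arithmetic, the sketch below computes the required salt masses; the 100 mL batch volume and 0.2 mol/L total iron concentration are our illustrative assumptions, not values reported in the paper.

```python
# Molar masses of the hydrated salts (g/mol)
M_FeCl3_6H2O = 270.30
M_FeCl2_4H2O = 198.81

V = 0.100        # batch volume in liters (assumed)
c_total = 0.2    # total iron concentration in mol/L (assumed)

# 2:1 molar ratio of Fe(III) to Fe(II), as in the co-precipitation recipe
n_Fe3 = c_total * V * 2.0 / 3.0
n_Fe2 = c_total * V * 1.0 / 3.0

print(f"FeCl3.6H2O: {n_Fe3 * M_FeCl3_6H2O:.2f} g")   # ~3.60 g
print(f"FeCl2.4H2O: {n_Fe2 * M_FeCl2_4H2O:.2f} g")   # ~1.33 g
```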
2,836.4
2019-11-11T00:00:00.000
[ "Materials Science", "Medicine" ]
Nonthermal correction to black hole spectroscopy The area spectrum of black holes has been obtained via various methods such as quasinormal modes, adiabatic invariance and angular momentum. Among those methods, calculations were done by assuming black holes in thermal equilibrium. Nevertheless, black holes in asymptotically flat space usually have negative specific heat and therefore tend to stay away from thermal equilibrium. Even for those black holes with positive specific heat, the temperature may still not be well defined in the process of radiation, due to the back reaction of decreasing mass. In view of these facts, it is very likely that Hawking radiation is nonthermal and the area spectrum is no longer equidistant. In this note, we would like to illustrate how the area spectrum of black holes is corrected by this nonthermal effect. A finite-size system often displays a discrete energy spectrum due to quantum fluctuations. It was suggested that since the dynamics of a black hole is uniquely determined by its charge(s), which are closely related to the finite region enclosed by the horizon, one also expects the mass or area spectrum to display similar discreteness [1,2]. There have been many proposals to obtain the area spectrum of various black holes since then. Most earlier methods of quantizing the horizon area are based on the real or imaginary part of quasinormal modes [3][4][5][6][7][8][9][10]. Recently, the application of the adiabatic invariant action variable did not use the quasinormal modes [11,12], and the idea of quantizing angular momentum to obtain the area spectrum first appeared in the study of non-extremal RN black holes [13]. The various methods of quantization have settled down to a spectrum of equidistant spacing, ∆A = cℏ. In particular, one obtained c = 8π for various kinds of black holes in different spacetime dimensions. Nevertheless, this universal result is closely related to the assumption that the black hole is in the thermal equilibrium state where the Hawking temperature is well defined. Realistic black holes are more likely to be in a nonequilibrium state due to their negative specific heat. Even for those black holes with positive specific heat, the temperature may still be ill-defined in the process of radiation, due to the back reaction of decreasing mass. A universal logarithmic correction to the Bekenstein-Hawking area law has been predicted in various theories of quantum gravity and modified general relativity, such that S_BH = A/(4ℏ) + α ln(A/ℏ) + · · · . The corresponding correction to the area spectrum was computed for α = −3/2 in the context of the adiabatic invariance approach for a constant surface gravity, and uneven discreteness was observed [11]. While the above logarithmic correction in (2) can be regarded as the consequence of the loop quantum correction of surface gravity [14,15], we are looking for another correction due to back reaction from the Hawking radiation. Among various models of black hole radiation, the tunneling model proposed by Parikh and Wilczek [16] has provided useful insights in the effort to resolve the Information Loss Paradox [17], black hole evolution [18,19] and black hole remnants [20]. The Parikh-Wilczek model regards the Hawking radiation as a tunneling process in some stationary vacuum. The potential barrier is dynamically established due to the back reaction, which enforces energy conservation. The emission rate in the tunneling model has a universal result given the black hole entropy change ∆S BH due to radiation.
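The displayed equation referred to at the end of the passage above was lost in extraction. For a Schwarzschild black hole with S_BH = 4πM²/ℏ (in units G = c = k_B = 1), the entropy change under emission of a quantum ω reads, as our reconstruction consistent with the surrounding text,

```latex
% Reconstruction (ours): emission rate and Schwarzschild entropy change
\Gamma \sim e^{\Delta S_{BH}}, \qquad
\overline{\Delta S}_{BH} = S(M-\omega) - S(M)
 = -\frac{8\pi M \omega}{\hbar} + \frac{4\pi \omega^{2}}{\hbar}
 + 2\alpha \ln\!\Big(1 - \frac{\omega}{M}\Big).
```

The first term is the thermal Boltzmann factor −ω/T_H once T_H = ℏ/(8πM) is identified, the second is the nonthermal back-reaction term, and the α-dependent term descends from the logarithmic correction in (2).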
The back reaction constantly changes the surface gravity during the tunneling process, therefore the black hole is never in thermal equilibrium. In the following, we would like to use the Schwarzschild black hole as an example to argue that the back reaction effect could produce another correction to the area spectrum of order O(A⁻¹). In the case of a Schwarzschild black hole of mass M, we have the change of entropy ∆S̄_BH = −(8πMω − 4πω²)/ℏ plus α-dependent logarithmic terms, where the first term is nothing but the thermal spectrum if the inverse Hawking temperature T_H⁻¹ = 8πM/ℏ is identified, and we regard the second term as the nonthermal correction due to back reaction. Those terms with α inside are the series expansion of the logarithmic correction with respect to large black hole mass, and we regard them as the quantum correction to the spectrum. To see how the area spectrum also receives corrections from those nonthermal and quantum effects, let us recall the derivation of (4) and then divert to the quantization of area. The tunneling process happens at the horizon in the Painlevé-type metric [16], ds² = −(1 − 2M/r) dt² + 2√(2M/r) dt dr + dr² + r² dΩ², in units G = c = k_B = 1, which can be obtained from the static Schwarzschild black hole metric by a coordinate transformation of the Painlevé type. The WKB approximation states that the emission rate is Γ ∼ e^{−2 Im S/ℏ}, where the imaginary part of the action reads Im S = Im ∫ p_r dr for the Hamiltonian H = M − ω′, and the trajectory of emission is given by the radial null geodesic ṙ = 1 − √(2(M − ω)/r). We remark that the emitted mass ω has been subtracted from M, so that the back reaction is accounted for. (One cannot further set ℏ = 1 as in natural units: in units G = c = 1 one has ℏ = l_p².) This brings us down to a simple quadratic equation for ω, 8πMω − 4πω² = 2πℏ, up to the α-dependent terms. Were both the nonthermal and quantum effects ignorable, one could drop the ω² term and obtain the quantum of mass ω = ℏ/(4M) by solving (9). The area discreteness can then be computed as ∆A = 32πMω = 8πℏ. We remark that ℏ = l_p² in our choice of units. The universal prefactor 8π agrees with that obtained from previous methods [21]. Now we would like to include the nonthermal and quantum effects by solving (9) honestly and obtain ω = M − √(M² − ℏ/2), where we choose the smaller root so that ω < M and do the Taylor expansion as long as M ≫ l_p. At last, we have the corrected area discreteness (12). Due to the nonthermal correction, the area spacing gets larger as the horizon area shrinks when α < π/2, but gets smaller vice versa. This can be regarded as an important signature of the Parikh-Wilczek tunneling model of Hawking radiation if the area discreteness were ever to be detected in the future. The area discreteness can be easily generalized to the Schwarzschild black hole in arbitrary dimension D [22], where A = r₀^{D−2} Ω_{D−2} for horizon radius r₀. We remark that in D = 4 the nonthermal correction is comparable to the quantum correction; however, the former becomes less and less important as D increases. To obtain corrections for black holes with more charges or different topology, one can in principle solve the following algebraic equation as a generalization of (9), given the change of black hole entropy as a function of the black hole charges Q_i and emitted charges q_i. In the following examples, we will solve (14) in the large-mass limit and focus on the nonthermal correction while ignoring the quantum correction: • For a Reissner-Nordström black hole of mass M and electric charge Q, the metric function is f(r) = 1 − 2M/r + Q²/r², with outer horizon r₊ = M + √(M² − Q²). It is convenient to define the extremality Γ = Q/M and the charge-mass ratio of the emitted particle γ = q/ω.
The area discreteness in general involves two functions a(Γ, γ) and b(Γ, γ), which are complicated but can be computed perturbatively; for instance, they simplify in the near-Schwarzschild limit (Γ ≪ 1). On the other hand, in the near-extremal limit where Γ, γ → 1, we obtain an expansion in terms of x, where Γ ≡ 1 − 2x². • For a BTZ black hole in three dimensions [23,24], the area spectrum has been discussed in [25,26] and the tunneling rate was discussed in [27]. We begin with the BTZ metric. The entropy function is known, where r₊² = (Ml² + √(M²l⁴ − J²l²))/2. Following (14), one obtains the spacing for the nonrotating BTZ black hole (J = 0). For J ≠ 0, the area spacing in general depends on the black hole angular momentum J and the spin of the emitted particle j. If one defines the extremality Γ ≡ J/M and the emitted particle's spin-mass ratio γ ≡ j/ω, then the spacing involves a function a(Γ, γ) that can be solved by Taylor expansion at small Γ and γ. The constant leading term in (22) agrees with that found in [25,26], and moreover we observe that the nonthermal correction depends on Γ and γ in general. • For a D-dimensional AdS black hole of different horizon topologies: for simplicity, we first examine the one with planar horizon, that is, k = 0. The horizon can be analytically solved as r₊ = (al²M)^{1/(D−1)}. We find a leading-order equidistant spectrum whose leading term agrees with the universal factor 8π for any D > 3, but differs from that obtained in [28]. The nonthermal correction takes the same form as that in the Schwarzschild black hole (13) but with the opposite sign. We remark that the correction implicitly depends on the AdS radius of curvature l via the horizon area A. This result cannot be simply compared with (13) in the flat limit l → ∞ due to the different horizon topology k = 0. For the spherical near-horizon topology, k = 1, we find that the correction to the area spectrum explicitly depends on l. In particular, in the limit of large mass and weak curvature (but keeping M/l small), one obtains the D = 4 result. We remark that the result of (13) can be reproduced in the flat limit l → ∞. • For a D-dimensional Schwarzschild-de Sitter black hole: first, we would like to examine the case of D = 3, where one obtains the exact solution ∆A = 8πℏ (28). Since there is in fact no black hole in three-dimensional de Sitter space, this should be identified as the area spectrum of the dS₃ space itself. (As discussed in [29], if the additional contribution due to the volume change is included, we would obtain twice the spacing, ∆A = 16πℏ.) For D > 3, one receives an area spectrum correction; for instance, in D = 4 for large M and l. • For a D-dimensional AdS topological black hole [30]: we obtain the D = 4 spectrum for large M and l, which is the same as (29). For the massless topological black hole, where M → 0, we obtain the universal result ∆A = 8πℏ, which again is half of what was found in [31], where the volume change is included. • For a D-dimensional Gauss-Bonnet black hole: the tunneling model of the (AdS) Gauss-Bonnet black hole has been studied in [32,33], and the emission rate agrees with that in (4), where the entropy is given in terms of the horizon radius r₊. The area spectrum was discussed in [34], where the conclusion that the entropy spectrum is equally spaced agrees with our assumption (14). In particular, the coefficient α vanishes for D = 4, so that the spectrum receives no effect from the Gauss-Bonnet term.
For D = 5, the area spectrum correction can be expressed via a Taylor expansion in α_GB/M. In summary, we have investigated the nonthermal correction to the area spectrum of various kinds of black holes using the quantization rule (8). This semiclassical approximation usually works better for highly excited states, that is, large black hole mass (charges), and the leading term reproduces the universal coefficient 8π. However, if the equidistant spectrum for entropy, ∆S_BH = 2π, could persist through the lifetime of a black hole, (8) predicts an increasing correction to the area spectrum toward the end of evaporation. To estimate the nonthermal correction to the lifetime of a Schwarzschild black hole, we observe in (12) that the nonthermal correction contributes like a quantum correction with α = −π/2, entering through an overall factor (1 − 1/(8M²))⁻¹ (37). In Figure 1, we plot both the thermal and nonthermal radiation for the Schwarzschild black hole. It is expected that the black hole speeds up its evaporation under nonthermal radiation, thanks to the increasing spacing of the area spectrum.
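Neglecting the α-dependent terms, the quantization condition ∆S_BH = 2π for the Schwarzschild case reduces to the quadratic discussed above. The following sympy sketch (our illustration, in units G = c = k_B = 1) recovers ω = ℏ/(4M) and the leading spacing ∆A ≈ 8πℏ.

```python
import sympy as sp

M, hbar, omega = sp.symbols('M hbar omega', positive=True)

# Quantization condition Delta S_BH = 2*pi with the log (alpha) terms dropped:
#   (8*pi*M*omega - 4*pi*omega**2)/hbar = 2*pi
eq = sp.Eq(8 * sp.pi * M * omega - 4 * sp.pi * omega**2, 2 * sp.pi * hbar)

w = M - sp.sqrt(M**2 - hbar / 2)                 # smaller root, omega < M
assert sp.simplify(eq.lhs.subs(omega, w) - eq.rhs) == 0

print(sp.series(w, hbar, 0, 3))                  # hbar/(4*M) + hbar**2/(32*M**3) + ...

# Linearized area change: A = 16*pi*M**2  =>  dA = 32*pi*M*dM
print(sp.series(32 * sp.pi * M * w, hbar, 0, 2)) # 8*pi*hbar + O(hbar**2)
```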
2,793
2014-11-14T00:00:00.000
[ "Physics" ]
Integrative Analysis of miRNA and mRNA Expression Profiles in Mammary Glands of Holstein Cows Artificially Infected with Staphylococcus aureus Staphylococcus aureus- induced mastitis is one of the most intractable problems for the dairy industry, which causes loss of milk yield and early slaughter of cows worldwide. Few studies have used a comprehensive approach based on the integrative analysis of miRNA and mRNA expression profiles to explore molecular mechanism in bovine mastitis caused by S. aureus. In this study, S. aureus (A1, B1 and C1) and sterile phosphate buffered saline (PBS) (A2, B2 and C2) were introduced to different udder quarters of three individual cows, and transcriptome sequencing and microarrays were utilized to detected miRNA and gene expression in mammary glands from the challenged and control groups. A total of 77 differentially expressed microRNAs (DE miRNAs) and 1625 differentially expressed genes (DEGs) were identified. Gene Ontology (GO) annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis showed that multiple DEGs were enriched in significant terms and pathways associated with immunity and inflammation. Integrative analysis between DE miRNAs and DEGs proved that miR-664b, miR-23b-3p, miR-331-5p, miR-19b and miR-2431-3p were potential factors regulating the expression levels of CD14 Molecule (CD14), G protein subunit gamma 2 (GNG2), interleukin 17A (IL17A), collagen type IV alpha 1 chain (COL4A1), microtubule associated protein RP/EB family member 2 (MAPRE2), member of RAS oncogene family (RAP1B), LDOC1 regulator of NFKB signaling (LDOC1), low-density lipoprotein receptor (LDLR) and S100 calcium binding protein A9 (S100A9) in bovine mastitis caused by S. aureus. These findings could enhance the understanding of the underlying immune response in bovine mammary glands against S. aureus infection and provide a useful foundation for future application of the miRNA–mRNA-based genetic regulatory network in the breeding cows resistant to S. aureus. Introduction Bovine mastitis compromises the health and welfare of dairy cattle, as well as decreases the quality and quantity of milk production, causing huge economic losses in the global dairy industry [1]. Staphylococcus aureus is a major etiological pathogen of bovine mastitis, especially subclinical mastitis, causing a persistent and chronic infection, and antibiotic therapies are largely ineffective [2][3][4]. The infectivity and antibiotic resistance of S. aureus and other causative agents make bovine mastitis more difficult to control, which is also a risk of public health [5][6][7][8][9]. By breeding dairy cattle resistance to udder diseases, the risk of mastitis may be reduced in the dairy cow population [10]. Therefore, the identification of specific genes related to mastitis susceptibility or resistance can provide a new way to control mastitis through genetic selection [11,12]. In recent years, numerous studies have shown that bovine mammary epithelial cells (BMECs) respond to the invasion of bacteria or bacterial products by altering the expression levels of several genes involved in inflammation and immunity in vitro [13][14][15]. However, one limitation of these studies is that the conclusions drawn at cellular levels are not necessarily consistent with those of individuals [16]. Although some transcriptomewide association studies have been carried out on S. 
aureus-induced mastitis in vivo, these studies always analyzed the expression levels of mRNAs or microRNAs (miRNAs) separately [17][18][19][20][21]. Few studies used a comprehensive approach based on the integrative analysis of miRNA and mRNA expression profiles to improve the understanding of the underlying molecular mechanism of cow mastitis caused by S. aureus. To investigate various interaction networks and regulatory modes of mRNAs and miRNAs, we constructed a S. aureus-type bovine mastitis model and integrated the analysis of miRNAs and mRNAs between the S. aureus-infected quarters and the control ones. These findings will provide new insights into the mechanism of S. aureus-induced cow mastitis. The Establishment of Bovine S. aureus-Induced Mastitis Model Indicators of the three cows were measured and recorded after bacterial infection. At 48 h post inoculation, the dairy cattle suffered from obvious pain and had a drastic reduction (25.8% on average) in milk yield. In addition, the temperature of the cows rose (by 1.7 °C on average), and their mammary glands and lymph nodes were swollen and hard. At the same time, an alteration of the biophysical properties of milk (grey-white color) was observed. There were significant increases in the somatic cell count (SCC) of the milk from inoculated quarters (A1: 1,790,000/mL; B1: 1,920,000/mL; and C1: 2,080,000/mL), while those from the controls remained below 100,000/mL. Differential Expressed miRNA Identification A total of 21,293,853 and 18,588,177 raw reads were generated from the control and S. aureus-inoculated groups, respectively, by miRNA sequencing (Table S1). After the raw reads were processed, there were 20,847,000 and 18,504,775 clean reads for length distribution assessment. The assessment results revealed that 78.76% and 71.79% of clean reads were 20-24 nucleotides in length in the two groups (Figure S1). Principal component analysis (PCA) showed that the miRNAs in the challenged and control groups could be classified into different clusters, indicating that the sequencing data were qualified for further analysis (Figure 2A). A total of 77 DE miRNAs, including 30 up-regulated and 47 down-regulated miRNAs (p ≤ 0.05 and |log2FC| ≥ 1), were identified in the S. aureus-inoculated group, compared with the control group (Figure 3A). (Figure 3 caption: the up-regulated and down-regulated miRNAs are shown in red and green dots, respectively, while the miRNAs with no significant difference between the two groups are shown in black dots. (B) DEGs in bovine mammary gland between the control group and S. aureus-inoculated group; the up-regulated and down-regulated mRNAs are indicated by red and green dots, respectively, while the mRNAs with no significant difference are indicated by black dots.) Differential Expressed mRNA Identification The values of the Agilent 2100 RIN and the 28S/18S ratio were between 7.5-8.9 and 1.3-2.1, respectively (Table S2), indicating that the RNA quality met the requirement and could be used for microarray hybridization. In this study, the CV values of all samples ranged from 3.389% to 4.821% (Table S3), indicating that the detection results of the microarray are reliable.
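The DE calls reported above come down to a fixed threshold rule (p ≤ 0.05 and |log2FC| ≥ 1). Before turning to the mRNA PCA below, here is a minimal pandas sketch of that filter; the column names and the example table are our own placeholders, with the log2 fold changes back-computed from the fold changes quoted later in the paper.

```python
import pandas as pd

# Hypothetical results table, one row per miRNA; log2FC values correspond to
# fold changes 0.450, 0.223 and 0.397 quoted in the Discussion.
res = pd.DataFrame({
    "mirna":  ["miR-664b", "miR-23b-3p", "miR-19b", "miR-x"],
    "log2fc": [-1.15,      -2.17,        -1.33,     0.40],
    "pvalue": [0.0008,      0.0001,       0.0009,   0.30],
})

de = res[(res["pvalue"] <= 0.05) & (res["log2fc"].abs() >= 1)]
up, down = de[de["log2fc"] > 0], de[de["log2fc"] < 0]
print(f"{len(up)} up-regulated, {len(down)} down-regulated")
```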
The PCA was also performed to evaluate the sample distribution. Two separate clusters were found, representing the S. aureus-inoculated and control groups, respectively (Figure 2B). The transcriptional sequences of the same group were assembled in the same cluster, indicating that the main differences in the mRNA expression profiles occurred between different groups. A total of 1030 up-regulated genes and 595 down-regulated genes (p ≤ 0.05 and |log2FC| ≥ 1) were identified in the S. aureus-inoculated group versus the control group (Figure 3B). Interaction Analysis of the miRNAs and mRNAs Three up-regulated and ten down-regulated DE miRNAs (p ≤ 0.05 and |log2FC| ≥ 2) were selected for the miRNA-mRNA interactive analysis. Among all potential target genes predicted by TargetScan, 143 up-regulated and 63 down-regulated genes identified in this study were employed for the construction of miRNA-mRNA interaction networks (Figure 4). Functional Analysis of Differentially Expressed Genes The Gene Ontology (GO) annotation based on three categories (biological process (BP), molecular function (MF) and cellular component (CC)) was performed to explore the biological functions of DEGs regulated by DE miRNAs, in which there were 721 up-regulated and 381 down-regulated genes. The 721 up-regulated genes were significantly enriched in 174 BP terms, 31 MF terms and 25 CC terms. Among them, 68 up-regulated genes of 19 terms were involved in inflammation and immune response (Table 1). The 381 down-regulated genes were significantly enriched in 199 BP terms, 23 MF terms and 37 CC terms.
Among them, 21 down-regulated genes of 25 terms were involved in inflammation and immune response. Only the top 10 up-regulated and down-regulated terms in each category are listed in Figure 5. Features of DEGs enriched in the top 9 significant GO terms are shown in Figure 6. The 721 up-regulated genes were significantly enriched in 65 KEGG pathways, in which 22 pathways containing 119 up-regulated genes were involved in inflammation and immune response (Table 2). The 381 down-regulated genes were significantly enriched in 26 KEGG pathways, in which 10 KEGG pathways containing 51 down-regulated genes were involved in inflammation and immune response (Table 2). The top 30 up-regulated and down-regulated pathways are listed in Figure 7. Features of DEGs enriched in the top 9 significant KEGG terms are shown in Figure 8. Validation of DE miRNAs and DEGs by qRT-PCR To verify the accuracy of RNA sequencing and microarray, qRT-PCR was performed to detect the expression levels of miRNAs and DEGs. The results showed that the relative expression levels of the selected miRNAs and mRNAs identified by qRT-PCR were consistent with the RNA sequencing and microarray results, respectively (Tables S4 and S5), indicating a high reliability of the study. Discussion To date, more than 150 pathogenic bacteria have been identified in dairy cows with mastitis; among them, Escherichia coli, Streptococcus spp. and S. aureus are most frequently isolated from cows with clinical or subclinical mastitis [9,32]. In this study, the S. aureus-type bovine mastitis model was constructed to explore interaction patterns of mRNAs and miRNAs in the S. aureus-infected quarters and the control ones. One quarter of the mammary gland of each cow received the inoculation of S. aureus, and the remaining quarters with the inoculation of PBS served as the control group. In this way, systematic errors could be well minimized when we analyzed and compared the expression levels of mRNAs and miRNAs between the inoculated and control groups [33,34]. In total, 77 DE miRNAs and 1625 DEGs were identified in the S. aureus-challenged quarters, compared with the healthy ones (Figure 9). A previous study showed that miR-664b is a promising candidate involved in response to pathogen infection, which was down-regulated in S. aureus-infected quarters (0.450-fold change, p < 0.001) [35].
Accordingly, CD14 Molecule (CD14), a lipopolysaccharide-binding protein enriched significantly in several inflammation-related terms (cellular response to organic substance/oxygen-containing compound/biotic stimulus/biotic stimulus/molecule of bacterial origin terms), which was identified as a predicted target of miR-664b, was upregulated in S. aureus-infected quarters (2.151-fold change, p = 0.002) (Table S6). This result is consistent with previous studies, in which CD14 was measured as an up-regulated trend as an early innate immune response gene in bacterial infections of mammary gland [13,36,37]. This finding potentially supports that miR-664b negatively regulates its target gene, CD14, to mediate inflammation in mammary gland of dairy cattle infected by S. aureus. G protein subunit gamma 2 (GNG2), another target gene of miR-664b, was up-regulated in S. aureus-inoculated quarters (3.246-fold change, p = 0.020), which is significantly enriched in three significant terms (cellular response to organic substance term, cellular response to oxygen-containing compound term and cellular response to acid chemical term) and four significant pathways (PI3K-Akt signaling pathway, chemokine signaling pathway, Kaposi sarcoma-associated herpesvirus infection pathway and Ras signaling pathway) (Table S6). These terms and pathways are mainly involved in inflammation response. Previous studies mainly focused on functional analysis of GNG2 in human malignant melanoma cells [38][39][40]. However, there is no direct evidence to prove the association between the up-regulation of GNG2 and the infection of S. aureus in mammary glands. The highly expressed GNG2 may also be associated with the down-regulation of miR-23b-3p (0.223-fold change, p < 0.001), which was identified to be associated with various cancers, such as cervical cancer, renal cancer and pancreatic cancer [41][42][43][44]. Other up-regulated DEGs regulated by miR-23b-3p in the S. aureus infection group were collagen type IV alpha 1 chain (COL4A1) (2.272-fold change, p = 0.007), microtubule associated protein RP/EB family member 2 (MAPRE2) (5.500-fold change, p = 0.001) and member of RAS oncogene family (RAP1B) (2.548-fold change, p = 0.008). Although COL4A1, MAPRE2 and RAP1B are respectively enriched in various inflammation-related terms and pathways, to our knowledge, there is no evidence to prove that they have a bearing on bovine mastitis infected by S. aureus. The down-regulation of miR-664b has a potential association with the extremely significant up-regulation of interleukin 17A (IL17A) (18.584-fold change, p < 0.001) in S. aureusinoculated quarters, which plays a crucial role in the defense of Gram-positive bacterial infection and inflammation development [45][46][47]. IL17A is significantly enriched in the terms of cellular response to organic substance, leukocyte migration and inflammatory response and the pathways of IL-17 signaling and rheumatoid arthritis, which indicated that IL17A potentially acts as a functional gene in the defense of S. aureus infection in bovine mammary glands. Generally known, the expression level of a single gene can be regulated by multiple miRNAs [48]. As shown in this study, miR-331-5p, which targets IL17A, was down-regulated in S. aureus-inoculated quarters (0.273-fold change, p < 0.001). At the same time, LDOC1 regulator of NFKB signaling (LDOC1), the target gene of miR-331-5p, was up-regulated in the infected group (2.114-fold change, p = 0.002). 
LDOC1 is significantly enriched in cellular response to organic substance term, cellular response to oxygen-containing compound term, cellular response to biotic stimulus term, cellular response to lipopolysaccharide term, response to lipopolysaccharide term, cellular response to molecule of bacterial origin term and response to molecule of bacterial origin term. Previous studies have suggested that LDOC1 regulated the expression of nuclear factor kappa-B (NF-κB), which plays a significant role in cellular inflammatory and immune responses [49]. Additionally, multiple studies have shown that LDOC1 can induce apoptosis [50][51][52]. Thus, it remains to be clarified the role of LDOC1 in S. aureus-induced apoptosis. The down-regulation of miR-19b (0.397-fold change, p < 0.001) is potentially responsible for the up-regulation of LDOC1 in S. aureus-induced mastitis, which has been identified to be the candidate marker for lung cancer and diabetes [53,54]. The down-regulation of miR-19b is also observed to account for the down-regulation of low-density lipoprotein receptor (LDLR) (2.976-fold change, p = 0.024), which was significantly enriched in cellular response to organic substance term, cellular response to oxygen-containing compound term, cellular response to acid chemical term, inflammatory response term and toxoplasmosis pathway and can develop inflammatory atherosclerosis [55]. S100 calcium binding protein A9 (S100A9) is a kind of pro-inflammatory factor, and the protein from exosomes in follicular fluid causes inflammation by NF-κB pathway activation in polycystic ovary syndrome [56,57]. In this study, the up-regulated S100A9 (10.631-fold change, p = 0.006) and down-regulated predicted target miRNA-2431-3p (0.459-fold change, p = 0.005) were screened in S. aureus-inoculated quarters. S100A9 was enriched in multiple significant inflammatory and immune-related pathways, including positive regulation of hydrolase activity pathway, leukocyte migration pathway, neutrophil chemotaxis pathway and inflammatory response pathway. Ethics Statement and Animals Selection All experimental protocols in this study were reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University (ZZCX2019-SYXY-056). All methods in this study were carried out in accordance with the Administration of Affairs Concerning Experimental Animals published by the Ministry of Science and Technology of China. Three apparently half-sib, healthy and mastitis-free Holstein dairy cattle (A, B and C) were chosen from a dairy farm in Yangzhou, China. All the three cows were in the middle lactation term of first parity with a consistent history of milk somatic cell count (SCC) below 100,000/mL. In particular, the employed cows were detected to be in absence of Mycobacterium bovis, Brucella abortus, Anaplasma spp., Babesia spp., Theileria spp., bovine leukemia virus, bovine herpesvirus-1, bovine viral diarrhea virus and bovine respiratory syncytial virus with commercial or in-house molecular diagnostic kits [58][59][60][61]. Then, the experiment was performed after one week in quarantine. Mastitis Model Construction For challenge infection study, aliquots from frozen stock cultures (S. aureus, ATCC29213) were plated on sheep blood agar and incubated at 37 • C for 18 h under 10% CO2-enriched conditions. 
Bacterial suspensions for each pure culture were diluted in sterile phosphate buffered saline (PBS) (Biosharp, Hefei, China) to 1 × 10 7 Colony-Forming Units (CFU)/mL, using a spectrophotometer (Eppendorf, Germany) with a wavelength of 600 nm. For challenged group, one quarter (A1, B1 and C1) of the mammary gland of the three individuals received a dose of 5 × 10 7 CFU of S. aureus, and one of the remaining quarters (A2, B2 and C2) not administered with the S. aureus inoculation served as control group that received 5 mL of sterile PBS [20,62]. The milk yield, SCC (Shanghai DHI Test Center, Shanghai, China) and temperature of cows were recorded before and at 24 h post-inoculation. Sample Collection and Total RNA Extraction The mammary tissues (1-2 g per quarter) were collected by sterile surgery from two quarters per dairy cattle at 48 h post-inoculation. Samples from challenged (A1, B1 and C1) and control (A2, B2 and C2) quarters were immediately frozen in liquid nitrogen before RNA extraction or stored in 10% formalin for hematoxylin and eosin (HE) staining. Total RNA was extracted from 250 mg mammary tissues with mirVanaTM RNA Isolation Kit (Applied Biosystems, Carlsbad, CA, USA) and purified with QIAGEN RNeasy ® Kit (QIAGEN, Dusseldorf, Germany). The RNA quality was assessed using Agilent Bioanalyzer 2100 (Agilent Technologies, Santa Clara, USA) and NanoDrop spectrophotometer (Thermo Fisher, USA). Total RNA samples were stored at −70 • C. A total of 10 µg per RNA sample was sent to a commercial sequencing laboratory (Oebiotech, Shanghai, China) for evaluating the expression levels of miRNA with HiSeq 2000 System (single-end) (Illumina, San Diego, CA, USA) and mRNA with microarray (G2519F-023647, Agilent Technologies, Santa Clara, CA, USA). Pathological Tests After 48 h of soaking, the samples were rinsed with water for 12 h and subjected to gradient alcohol dehydration, wax impregnation and embedding. Hematoxylin-eosin (HE) staining was performed for 15 min after dewaxing and adequate washing. The pathological changes were visualized with a microscope (M152, Mshot, Guangzhou, China) at different magnifications. The miRNA counts were normalized as transcript per million (TPM) with the formula (number of reads per miRNA alignment) / (number of reads from the total sample alignment) × 10 6 [64]. The differentially expressed (DE) miRNAs in each sample were calculated with DEseq R package (1.18.0), with p ≤ 0.05 and fold change ≥2 as the threshold. mRNA Analysis and Data Process The 2100 RNA Integrity Number (RIN) and 28S/18S values were detected to evaluate the quality of RNAs. The GeneSpring software (version 12.5, Agilent Technologies, Santa Clara, CA, USA) was utilized to evaluate the coefficient of variation (CV) of each sample. Feature Extraction software (version 10.7.1.1, Agilent Technologies Santa Clara, CA, USA) was employed to extract and analyze raw data from array images. Briefly, the raw data was normalized with the quantile algorithm, and the resultant flag value of any probe was assigned as "Detected" only if there were no "Compromised" or "Not Detected". DEGs were identified with p ≤ 0.05 and |log 2 FC| ≥ 1 as the threshold. miRNA-mRNA Interaction Network Construction With the online software TargetScan (www.targetscan.org, accessed on 6 November 2020), the potential target genes of DE miRNAs with more significant expression levels (p ≤ 0.05 and |log 2 FC| ≥ 2) were predicted and intersected, with the DEGs identified by microarray test (p ≤ 0.05 and |log 2 FC| ≥ 2). 
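As a side note before the network construction continues below, the TPM normalization quoted above is a one-line per-sample scaling. A minimal sketch follows; the count vector is a made-up example, not data from the study.

```python
import numpy as np

# Raw alignment counts per miRNA in one sample (illustrative numbers)
counts = np.array([1500, 320, 87, 4093], dtype=float)

# TPM = (reads mapped to this miRNA) / (total mapped reads in the sample) * 1e6
tpm = counts / counts.sum() * 1e6
print(tpm.round(1), tpm.sum())  # the values sum to 1e6 by construction
```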
Then, the miRNA-mRNA interaction networks were constructed and visualized with the DE miRNAs and screened genes by Cytoscape (v3.7.2) [65]. Functional Analysis of Differentially Expressed Genes DEGs regulated by DE miRNAs were screened to further understand their biological and metabolic pathways. Gene ontology (GO) annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis were respectively performed with the DAVID 6.8 (https: //david.ncifcrf.gov/, accessed on 6 November 2020) and KOBAS 3.0 (http://kobas.cbi.pku. edu.cn/index.php, accessed on 6 November 2020) using R based on the hypergeometric distribution [65]. Then, the GO terms and KEGG pathways with adjusted p ≤ 0.05 were significantly enriched in DEGs or the miRNA target genes. Statistical Analysis Data were analyzed using GraphPad Prism 8 (GraphPad, San Diego, CA, USA) with Student's t-test and presented as mean ± standard deviation (SD). The resulting p-values were adjusted using the Benjamini and Hochberg's approach for controlling the false discovery rate (FDR). Adjusted p < 0.05 indicated a significant difference. Conclusions In the present study, we comprehensively analyzed the changes in miRNA and mRNA profiles of the mammary gland of dairy cattle under S. aureus inoculation. Overall, 77 DE miRNAs and 1625 DEGs were identified in the S. aureus-challenged quarters. Among them, the predicted integrated regulatory network was constructed with the miRNAs (miR-664b, miR-23b-3p, miR-331-5p, miR-19b and miR-2431-3p) and the mRNAs (CD14, GNG2, COL4A1, MAPRE2, RAP1B, IL17A, LDOC1, LDLR and S100A9), which were significantly associated with inflammation and immunity. These findings could enhance the understanding of underlying immune response in bovine mammary glands against S. aureus infection and provide a useful foundation for the future application of the miRNA-mRNA-based genetic regulatory network in the breeding of cows resistant to S. aureus. Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/pathogens10050506/s1. Table S1: Statistics of miRNA sequencing. Table S2: The quality control of mRNAs. Table S3: The variation coefficient of samples used for microarray test. Table S4: Comparison of the expression levels of seven miRNAs detected by transcriptome sequencing and qRT-PCR. Table S5: Comparison of the expression levels of eight mRNAs detected by microarray and qRT-PCR. Table S6: Functional annotations of key DEGs and their potential target miRNAs. Table S7: The primers used for qRT-PCR to validate the small RNA sequencing. Table S8: The primers used for qRT-PCR to validate the microarray test. Figure Institutional Review Board Statement: All experimental protocols in this study were reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University (ZZCX2019-SYXY-056). All methods in this study were carried out according in accordance with the Administration of Affairs Concerning Experimental Animals published by the Ministry of Science and Technology of China. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available in the main text and supplementary material of this article. Conflicts of Interest: The authors declare no conflict of interest.
5,727.4
2021-02-01T00:00:00.000
[ "Biology", "Medicine", "Agricultural and Food Sciences" ]
Learning Identity-Consistent Feature for Cross-Modality Person Re-Identification via Pixel and Feature Alignment RGB-IR cross-modality person re-identification (ReID) can be seen as a multicamera retrieval problem that aims to match pedestrian images captured by visible and infrared cameras. Most of the existing methods focus on reducing modality differences through feature representation learning; however, they ignore the huge difference between the two modalities in pixel space. Unlike these methods, in this paper we utilize a pixel and feature alignment network (PFANet) to reduce modal differences in pixel space while aligning features in feature space. Our model contains three components: a feature extractor, a generator, and a joint discriminator. As in previous methods, the generator and the joint discriminator are used to generate high-quality cross-modality images; however, we make substantial improvements to the feature extraction module. Firstly, we fuse batch normalization and global attention (BNG), which attends to channel information while enabling information interaction between channels and spatial positions. Secondly, to alleviate the modal difference in feature space, we propose the modal mitigation module (MMM). Then, by jointly training the entire model, our model is able not only to mitigate the cross-modality and intramodality variations but also to learn identity-consistent features. Finally, extensive experimental results show that our model outperforms other methods, achieving state-of-the-art Rank-1 accuracy and mAP on the SYSU-MM01 dataset. Introduction. Person ReID can be viewed as a cross-camera image retrieval problem, which aims at matching individual pedestrian images in a query set to ones in a gallery set captured by different cameras. Its main challenge lies in the interclass and intraclass variations caused by different lighting, poses, occlusions, and views. Most existing methods [1][2][3][4][5] mainly focus on matching RGB images captured by visible cameras, which can be formulated as an image matching problem under a single modality. However, these methods cannot be applied to images taken in poor lighting conditions, because a visible camera cannot then capture pictures with discriminative features. In practical application scenarios, however, the camera should ensure all-weather operation. Since visible cameras are of limited use for security work at night, cameras that can switch to an infrared mode are being widely used in intelligent monitoring systems. In visible mode and infrared mode, RGB images and infrared images are collected, respectively, and they belong to two different modalities. RGB images have three channels but IR images have only one channel, so the ReID problem in a cross-modality setting becomes extremely challenging: it is essentially a cross-channel retrieval problem. First, infrared images of different identities are often difficult to distinguish even when the corresponding visible images are easy to tell apart. In addition, the appearance of the same person varies greatly across modalities; this is known as modality discrepancy. To address visible-infrared person ReID, several approaches [6][7][8][9][10] have been proposed, aiming to mitigate modal differences by aligning features or pixel distributions. Feature alignment methods [6,8,10] mainly focus on bridging the gap between RGB and IR images through features. However, it is difficult to match RGB and IR images in a shared space due to the large cross-modality differences between the two modalities.
Different from existing methods that directly match RGB and IR images, we use generative adversarial networks to generate fake IR images based on real RGB images and then match the generated images through a feature alignment network. The generated fake IR images are used to reduce the modality difference between the RGB and IR images. Although the generated fake IR images are very similar to real images, there are still intraclass differences due to pose variations, viewpoint changes, and occlusions. Motivated by the above discussion, in this paper we propose a pixel and feature alignment network (PFANet) that simultaneously mitigates cross-modality differences in pixel space and intramodality variation in feature space. As shown in Figure 1, to reduce the modal difference, we apply a generator (G_I) to generate fake IR images. Then, to alleviate the intramodality variation, a feature extraction module (F) is designed to encode fake and real IR images into a shared feature space by exploiting identity-based classification and triplet losses. The batch normalization and global (BNG) attention is added to the feature extraction network (F), which lets the network learn which channels are more important and enables interaction between channels and spatial positions. Furthermore, to mitigate the modal difference in the feature space, a modal mitigation module (MMM) is proposed, which can significantly mitigate the difference between the two modalities. Finally, to learn identity-consistent recognition, a joint discriminator (D) is utilized; its input is an image-feature pair. Figure 1: Framework of the proposed model. It consists of an image generation module (G), a joint discriminator module (D), and a feature extraction module (F). The G can generate fake IR images X_ir′ to mitigate the cross-modality variation, and the F can alleviate the intramodality variation. The F module contains ResNet-50, the BNG attention, and the MMM module. The BNG module can focus on channel and spatial information, and the MMM module can reduce modality differences. The major contributions of this work can be summarized as the PFANet framework that aligns both pixels and features, the BNG attention module, the modal mitigation module (MMM), and extensive experiments on SYSU-MM01 validating each component. RGB-IR Person ReID. RGB-IR cross-modality person ReID can be seen as a multicamera retrieval problem that aims to match pedestrian images captured by visible and infrared cameras, which are widely used in video surveillance, public security, and smart cities. Compared with RGB-RGB single-modality person ReID, which only deals with RGB images, the key challenge in this work is to mitigate the large differences between the two modalities. To address the challenge caused by differences in modality distributions, a variety of approaches to cross-modality person re-identification have been proposed. Some early work focused on solving the channel mismatch between RGB images and IR images, since RGB images have three channels whereas IR images have only one. Wu et al. [10] proposed a deep zero-padding network and contributed a new ReID dataset, SYSU-MM01. In [11], a dual-path network with a bi-directional dual-constrained top-ranking loss was introduced to learn modality-aligned feature representations for RGB-IR ReID. Feng et al. [12] proposed a framework for solving heterogeneous matching problems using modality-specific networks. Ye et al. [13] proposed a dual-stream network with feature learning and metric learning to convert two heterogeneous modalities into a consistent space where the modalities share a metric. Dai et al.
[6] introduced a cross-modality generative adversarial network (cmGAN) to reduce the distribution differences between RGB and IR features. Most of the above approaches focus on reducing intermodality differences by feature alignment, while ignoring the large cross-modality differences in pixel space. Unlike these approaches, the model proposed in this paper combines feature alignment and pixel alignment, effectively reducing intramodality and cross-modality variations; by training the model jointly, it is able to learn identity-consistent features. GAN in Person ReID. A generative adversarial network (GAN) consists of a generator and a discriminator and draws on game theory: the generator tries to generate an image that deceives the discriminator, and the discriminator tries to discriminate whether an image is real or generated. Through repeated adversarial training, generative adversarial networks are able to learn deep representations of data in a self-supervised manner. GANs can generate high-quality images, perform image enhancement, generate images from text, and convert images from one domain to another [14,15]. The GAN was first proposed in 2014 [16]. Since then, researchers have proposed a variety of task-specific GAN structures, such as CycleGAN [14], Pix2Pix [17], and StarGAN [15]. Many works in the field of pedestrian re-identification also apply GANs to improve accuracy. Li et al. [18] proposed a network that allows query images of different resolutions, to handle cross-resolution person ReID. Wang et al. [19] designed an end-to-end alignment generative adversarial network (AlignGAN) for the RGB-IR ReID task. JSIA-ReID [20] implemented a two-layer alignment of pixels and features in a unified GAN framework. In our work, we apply a GAN to generate cross-modality images that mitigate modal differences between RGB-IR image data in pixel space. Attention Mechanisms. An important feature of the human visual system is that it allows people to selectively focus on things of interest in order to capture valuable information. Inspired by the human visual system, many works have attempted to employ attention mechanisms to improve the performance of CNNs. Attention mechanisms enable the network to focus on areas of interest on the human body and better extract useful information. SENet [21] integrated spatial information into the channel-level feature responses and computed the corresponding attention with two MLP layers. Later, the bottleneck attention module (BAM) [22] built independent spatial and channel submodules in parallel and embedded them into each bottleneck block. Considering the relationship between any two positions of the feature map, non-local feature attention [23] was proposed to capture the relationship between them. The convolutional block attention module (CBAM) [24] sequentially cascaded channel attention and spatial attention. However, these works ignore the information carried by the weights adjusted during training; we therefore highlight salient features by using the variance of the trained model weights, which also amplifies cross-dimensional interactions and captures important features across all three dimensions. We propose a new attention module (BNG) to solve these problems, and we design a modal mitigation module (MMM) that mitigates the modal distribution by using channel attention to guide the learning of instance normalization (IN), so as to reduce modal differences while preserving identity information.
The Proposed Method. In this section, we introduce the proposed PFANet in detail. Our network is presented in three parts: (1) the RGB-IR image generation module, (2) the BNG attention module, and (3) the modal mitigation module. To reduce cross-modality variation, we apply generative adversarial networks to convert RGB images into fake IR images, which have an IR style while maintaining their original identities. Then, the features of the two modalities are extracted for feature alignment. The BNG attention is designed to make the network focus on channel and spatial information. In addition, the modal mitigation module (MMM) is proposed to mitigate the differences between the two modalities. The main output of PFANet during testing is the feature used for person ReID. RGB-IR Image Generation Module. There is a large cross-modality difference between RGB and IR images, which significantly increases the difficulty of cross-modality pedestrian re-identification. To reduce cross-modality variation, we apply generative adversarial networks to convert RGB images X_rgb into fake IR images X_ir′, which have an IR style while maintaining their original identities. The generated fake IR images X_ir′ can mitigate the modality differences between RGB and IR images. The module consists of a generator G_I that generates a fake IR image from an RGB image and a joint discriminator D_I that discriminates whether an image is real or generated. The input of the generator is the real image X_rgb, and its output is the fake IR image $X_{ir}' = G_I(X_{rgb})$. The input of the discriminator is the generated fake IR image X_ir′; if the image is real, its output is one, and if the image is generated, its output is zero. The goal of the generator is to make the generated image as similar as possible to the real image, and the goal of the discriminator is to discriminate as well as possible whether the input image is real or generated. Unlike ordinary discriminators, the input to our discriminator is a pair consisting of an IR image and a ReID feature map. The generator and discriminator play the min-max game of [16], so that the model can make the fake IR image X_ir′ as realistic as possible. The adversarial losses for generating IR images are defined as follows:

$L_{G_I} = \mathbb{E}_{X_{rgb}}[\log(1 - D_I(X_{ir}', f^{X_{ir}'}_{map,R}))]$, (1)

$L_{D_I} = L^{real}_{D_I} + L^{fake}_{D_I}$, (2)

$L^{real}_{D_I} = -\mathbb{E}_{X_{ir}}[\log D_I(X_{ir}, f^{X_{ir}}_{map,R})]$, (3)

$L^{fake}_{D_I} = -\mathbb{E}_{X_{rgb}}[\log(1 - D_I(X_{ir}', f^{X_{ir}'}_{map,R}))]$, (4)

where $f^{X_{ir}}_{map,R}$ is the extracted image feature of X_ir and $f^{X_{ir}'}_{map,R}$ is the extracted image feature of the generated image X_ir′. Equation (1) is used to train the generator; under the constraint of this loss function, the generator will generate more realistic IR images. Equations (3) and (4) are used to train the discriminator, which differs from traditional discriminators in that its input is an image-feature pair. This has two advantages: firstly, the fake IR image X_ir′ gets closer to the real IR image X_ir through the min-max game [16], and the distribution of the features $f^{X_{ir}'}_{map,R}$ of the fake IR image becomes more similar to the real image features $f^{X_{ir}}_{map,R}$; secondly, $f^{X_{ir}'}_{map,R}$ is able to maintain identity consistency through the constraint of the corresponding image X_ir′. Although the $L_{G_I}$ loss can ensure that the fake IR image X_ir′ resembles the real IR image X_ir, there is no guarantee that the generated fake IR images retain the structure and content of the original RGB images X_rgb. To deal with this problem, we introduce a generator G_R that translates IR images back into RGB images, together with the corresponding discriminator D_R.
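To make the joint adversarial objective concrete, the following is a minimal PyTorch-style sketch, assuming a binary cross-entropy instantiation of the min-max game of [16]; the MLP-based JointDiscriminator, its input shapes, and the helper names are illustrative stand-ins, not the exact architecture of the paper.

```python
import torch
import torch.nn as nn

class JointDiscriminator(nn.Module):
    """Scores an (IR image, ReID feature map) pair: ~1 for real, ~0 for fake.
    An MLP on flattened inputs stands in for the real convolutional network."""
    def __init__(self, img_dim=3 * 64 * 32, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + feat_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img, feat):
        pair = torch.cat([img.flatten(1), feat.flatten(1)], dim=1)
        return self.net(pair)

bce = nn.BCELoss()

def d_loss(D, x_ir, f_ir, x_fake, f_fake):
    # Eqs. (2)-(4): real pairs pushed toward 1, generated pairs toward 0.
    real = D(x_ir, f_ir)
    fake = D(x_fake.detach(), f_fake.detach())
    return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))

def g_loss(D, x_fake, f_fake):
    # Eq. (1): the generator tries to make D score its fake pairs as real.
    fake = D(x_fake, f_fake)
    return bce(fake, torch.ones_like(fake))
```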
Also, we introduce a cycle-consistency loss, defined as follows:

$L_{cyc} = \mathbb{E}_{X_{rgb}}[\|G_R(G_I(X_{rgb})) - X_{rgb}\|_1] + \mathbb{E}_{X_{ir}}[\|G_I(G_R(X_{ir})) - X_{ir}\|_1]$. (5)

The $L_{cyc}$ loss enables the IR image generated by G_I to remain consistent with the input real RGB image. We use the L1 norm instead of the L2 norm because the L1 norm allows the generator to produce better image edges. Specifically, we input the real RGB image X_rgb into the generator G_I to generate the fake IR image X_ir′ and then use the generator G_R to generate the reconstructed RGB image from the fake IR image; we do the same, in reverse, for IR images. Now, the loss of the generator can be defined as

$L_G = L_{G_I} + L_{G_R} + \omega L_{cyc}$, (6)

where ω is the weight of the cycle loss and is set to 10, as in [14]. By using this loss during adversarial training, we can generate high-quality IR images. The BNG Attention Module. Our proposed BNG attention is an efficient and lightweight attention mechanism. The BNG attention can be embedded at the end of any convolutional neural network; for the residual network ResNet-50, it can be embedded at the end of each residual structure. The structure of BNG is shown in Figure 2. BNG attention consists of two submodules. As shown in Figure 2(a), the channel attention submodule uses the weight information of the trained model to highlight salient features. We obtain its scale factors from batch normalization (BN [25]),

$BN(x) = \gamma \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} + \beta$,

where $\mu_B$ and $\sigma_B$ are the mean and standard deviation of the mini-batch B, and γ and β are trainable parameters used to fit the data distribution. The channel attention weights are then obtained as $W_i = \gamma_i / \sum_j \gamma_j$, where $\gamma_i$ is the scale factor of the i-th channel. We measure the importance of each channel by applying the BN scale factor to the channel dimension and suppressing insignificant features. Since channel attention only focuses on channel information, there is no global space-channel information interaction; to solve this problem, we design a global attention module, which can reduce information attenuation and amplify the features of global dimension interactions. Inspired by CBAM [24], the channel attention and spatial attention are connected in sequence; the main structure is shown in Figure 2(b). Given the input feature map $F_1 \in \mathbb{R}^{C \times H \times W}$, the intermediate state $F_2$ and output $F_3$ are defined as

$F_2 = M_c(F_1) \otimes F_1, \qquad F_3 = M_s(F_2) \otimes F_2$,

where $M_c$ and $M_s$ are the channel and spatial attention maps, respectively, and ⊗ denotes element-wise multiplication. The channel attention submodule uses a 3D arrangement to preserve information across the three dimensions and then uses a two-layer MLP that amplifies the channel-spatial dependencies across dimensions; it is illustrated in Figure 3. In the spatial attention submodule, two convolutional layers with 7 × 7 kernels are used to fuse spatial information in order to focus on it. Since max-pooling reduces information and has a negative influence, we remove the max-pooling operation to retain more features. The same reduction ratio r as in the channel attention submodule is adopted, as in BAM. The spatial attention submodule, without group convolution, is shown in Figure 4.
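A minimal PyTorch sketch of the two BNG submodules follows; the sigmoid gating, the use of the absolute value of the BN scale factors, and the module names are assumptions made for illustration, and the exact layer arrangement of Figures 2-4 is simplified.

```python
import torch
import torch.nn as nn

class BNChannelAttention(nn.Module):
    """Channel weights from BN scale factors: W_i = gamma_i / sum_j gamma_j,
    so channels whose learned gamma is small are suppressed."""
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = self.bn(x)
        gamma = self.bn.weight.abs()          # per-channel BN scale factors
        w = gamma / gamma.sum()               # normalized channel weights W_i
        return torch.sigmoid(out * w.view(1, -1, 1, 1)) * x

class SpatialAttention(nn.Module):
    """Two 7x7 convolutions fuse spatial information (no max-pooling)."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels // r, 7, padding=3),
            nn.BatchNorm2d(channels // r), nn.ReLU(),
            nn.Conv2d(channels // r, channels, 7, padding=3),
        )

    def forward(self, x):
        return torch.sigmoid(self.conv(x)) * x

class BNG(nn.Module):
    """Sequential attention: F2 = Mc(F1) * F1, then F3 = Ms(F2) * F2."""
    def __init__(self, channels):
        super().__init__()
        self.channel = BNChannelAttention(channels)
        self.spatial = SpatialAttention(channels)

    def forward(self, x):
        return self.spatial(self.channel(x))
```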
Modal Mitigation Module (MMM). To mitigate the modal distribution, a modal mitigation module (MMM) is designed. For an input image X, we denote the features extracted by the convolution block as $M \in \mathbb{R}^{h \times w \times c}$ and input them into the MMM, where h, w, and c represent the height, width, and number of channels of the feature map M, respectively. Instance normalization (IN) is used to mitigate modal differences on a single instance [27]: IN computes the mean and variance within a single instance and reduces the difference between the two data distributions. However, using IN directly may have a negative impact on the ReID task, because the distribution of the image data changes significantly and some identity information may be lost. To overcome these shortcomings, we use channel attention to guide the learning of IN, which mitigates modal differences while preserving identity information. Specifically, we feed the feature into a two-layer MLP that downsamples and then upsamples the channels, and apply an activation function to produce a mask that supervises the IN operation; here, $m_C$ is the channel mask, representing the identity-related channels, and $\tilde{M}$ is the instance-normalized result of the input M. Similar to SENet [21], the channel mask is generated as

$m_C = \sigma(W_2\,\delta(W_1\,g(M)))$,

where $W_1 \in \mathbb{R}^{c/r \times c}$ and $W_2 \in \mathbb{R}^{c \times c/r}$ are the learnable parameters of the two bias-free fully connected (FC) layers, which are followed by the ReLU activation function δ(·) and the sigmoid activation function σ(·), and g(·) denotes global average pooling of the features. To balance performance and the number of parameters, the downsampling ratio is set to r = 16. Instance normalization is defined as

$\tilde{M}_j = \frac{M_j - E[M_j]}{\sqrt{Var[M_j] + \epsilon}}$,

where E[·] computes the mean and Var[·] the variance of each dimension; to avoid division by zero, ϵ is added to the denominator, and $M_j \in \mathbb{R}^{h \times w}$ is the j-th channel of the feature map M. Loss Function. In this section, we introduce the losses used when training the generator to produce a fake IR image X_ir′. On the one hand, X_ir′ should be classified into the same identity class as the corresponding X_rgb; on the other hand, X_ir′ should satisfy the triplet loss [28] under the corresponding X_rgb identity constraint. We define these two losses as

$L^{gan}_{cls} = L_{cls}(X_{ir}'), \qquad L^{gan}_{tri} = L_{tri}(X_{ir}')$,

where p(·), the predicted probability of belonging to the ground-truth identity, is evaluated with the ground-truth identity of the fake IR image X_ir′ set to that of the original RGB image X_rgb. Although the generated images can reduce cross-modality differences, there are still large intramodality differences caused by lighting, human pose, and view. We therefore pull the fake IR image X_ir′ and the real IR image X_ir together in a shared space via identity-based classification and triplet losses,

$L^{feat}_{cls} = L_{cls}(X_{ir}' \cup X_{ir}), \qquad L^{feat}_{tri} = L_{tri}(X_{ir}' \cup X_{ir})$,

where p(·) represents the predicted probability that the input belongs to the ground-truth identity, and ∪ denotes the union of the two sets. In summary, the overall loss of our module combines these classification and triplet losses with $L_G$ and $L_{D_I}$, which are calculated from equations (2) and (6).
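A compact PyTorch sketch of the MMM follows. The SE-style mask m_C is as defined above; the blending rule, by which identity-related channels (mask near 1) bypass instance normalization while the remaining channels are normalized, is an assumption for illustration.

```python
import torch
import torch.nn as nn

class MMM(nn.Module):
    """Modal mitigation: an SE-style channel mask m_C decides, per channel,
    how much of the instance-normalized feature to keep. The blending rule
    m*x + (1 - m)*IN(x) is an assumption: identity-related channels
    (mask near 1) bypass IN, while the remaining channels are normalized."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.inorm = nn.InstanceNorm2d(channels)
        self.pool = nn.AdaptiveAvgPool2d(1)                         # g(.)
        self.fc1 = nn.Linear(channels, channels // r, bias=False)   # W1
        self.fc2 = nn.Linear(channels // r, channels, bias=False)   # W2

    def forward(self, x):
        b, c, _, _ = x.shape
        m = self.pool(x).view(b, c)                                 # global average pooling
        m = torch.sigmoid(self.fc2(torch.relu(self.fc1(m))))        # channel mask m_C
        m = m.view(b, c, 1, 1)
        return m * x + (1.0 - m) * self.inorm(x)
```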
Datasets and Settings. We evaluate our model on SYSU-MM01 [10], a very popular RGB-IR ReID dataset. It contains pedestrian images captured by six cameras, including two infrared cameras (camera 3 and camera 6) and four visible-light cameras (cameras 1, 2, 4, and 5). For each pedestrian, there are at least 400 RGB and IR images with different poses and viewpoints. Among the identities, 296 IDs are used for training, 99 for validation, and 96 for testing. Following [29], there are two test modes, i.e., all-search mode and indoor-search mode. For the all-search mode, all images are used; for the indoor-search mode, only indoor images from the 1st, 2nd, 3rd, and 6th cameras are used. Both modes employ single-shot and multishot settings, in which 1 or 10 images of a person, respectively, are randomly selected to form the gallery set. Both modes use IR images as the probe set and RGB images as the gallery set. Evaluation protocols: we use cumulative matching characteristics (CMC) and mean average precision (mAP) as evaluation metrics. Following [29], the results on SYSU-MM01 are evaluated with the official code, averaging over 10 repeated random splits of the gallery and probe sets. Implementation details: we use a ResNet-50 [30] pretrained on ImageNet as the CNN backbone, use the output of its pool5 layer as the feature map M, and apply average pooling to obtain the feature vector V. We add the BNG attention to each layer of residual blocks in ResNet-50 and the MMM module after the third and fourth layers. For the triplet loss, we use an FC layer to map the feature vector V into a 256-dimensional embedding vector. For the classification loss, the classifier takes the feature vector V as input and comprises a 256-dim fully connected (FC) layer, followed by batch normalization [25], dropout, and ReLU as the middle layers, and an FC layer with the identity-number logits as the output layer. The dropout rate is set to 0.5. We implement the model in PyTorch; the images are augmented by horizontal flipping, and the batch size is set to 72 (9 people, each with 4 RGB images and 4 IR images). The learning rate of the generation and discriminator modules is set to 0.0002, optimized with Adam; the learning rates of the classifier and the embedder are set to 0.2 and that of the CNN backbone to 0.02, optimized by SGD. Comparison with Other Methods. In this section, we compare our method with several different cross-modality person ReID methods, including: (1) methods with different structures and loss functions (two-stream [10], one-stream [10], zero-padding [10], BCTR [13], BDTR [13], D-HSME [26], and DGD + MSR [12]) that learn modality-invariant features and align them in feature space, and (2) cmGAN [6] and JSIA [20], which use generative adversarial networks (GANs) to generate cross-modality IR images and mitigate modal differences in pixel space. The experimental results are shown in Table 1, which covers several evaluation protocols, i.e., all-search/indoor-search and single-shot/multishot. Firstly, for the same method, indoor-search performs better than all-search, because the images have less background variation in indoor mode and matching is easier. Secondly, the rank scores of the single-shot setting are lower than those of the multishot setting, but the mAP scores of single-shot are higher than those of multishot. This is because, in multishot mode, there are ten images of a person in the gallery set, while in single-shot there is only one; consequently, under the multishot mode it is much easier to hit one image but difficult to hit all images, and the situation is reversed under the single-shot mode. R1, R10, and R20 denote Rank-1, Rank-10, and Rank-20 accuracy (%), and mAP denotes the mean average precision score (%); our model shows good performance. Compared with JSIA, our model gains over 2.7% in Rank-1 and 2.49% in mAP in the single-shot setting of all-search mode.
In the single-shot setting of indoor-search mode, our model achieves a Rank-1 accuracy of 44.0% and an mAP of 52.96%. In the multishot setting of indoor-search mode, our model achieves a Rank-1 accuracy of 53.40% and an mAP of 44.35%, which are higher than those of JSIA by 0.7% and 1.65%, respectively. Ablation Study. In this section, we design ablation experiments to test the effectiveness of the BNG module and the MMM module. Our ablation experiments are performed on the SYSU-MM01 dataset using the single-shot setting of all-search mode. Influence of the BNG module: the results of the ablation experiments for the BNG attention are shown in Table 2. Compared with the baseline model (B), adding the BNG attention improves the Rank-1 accuracy and mAP by 5.57% and 4.39%, respectively, proving the effectiveness of the BNG attention. Influence of the MMM module: as shown in Table 2, the model with MMM (B + MMM) achieves a Rank-1 accuracy of 39.97% and an mAP of 39.52%, which are higher than those of the baseline (B) by 5.84% and 5.98%, respectively. This proves that our proposed MMM module performs well. Visualization of Generated Images. For a more intuitive understanding of the generator model, we show the learned fake IR images in Figure 5. As shown in Figure 5, the first row contains the real RGB images, the middle row the fake IR images produced by the generator, and the last row the real IR images. We can observe that the fake IR images have similar content (e.g., pose and view) to, and maintain the identity of, the corresponding real RGB images while having an IR style. Therefore, the generated fake IR images can bridge the gap between RGB and IR images and reduce cross-modality variation in pixel space. Conclusion. In this paper, we proposed a new pixel and feature alignment network (PFANet) for the RGB-IR ReID task. The model consists of a feature extractor, a generator, and a joint discriminator. The BNG attention and the MMM module were designed within the feature extraction module; through these two modules, the model not only mitigates modality differences but also attends to channel and global information. The cross-modality IR images generated by the generator bridge the gap between RGB and IR images and reduce cross-modality variation. Ablation experiments verified the effectiveness of each module, and extensive experiments on the SYSU-MM01 dataset showed that our model achieves state-of-the-art performance. Data Availability. The SYSU-MM01 data used to support the findings of this study have been deposited in the "Rgb-infrared cross-modality person re-identification" repository (http://isee.sysu.edu.cn/project/RGBIRReID.html). Conflicts of Interest. The authors declare that they have no conflicts of interest.
6,182.4
2022-10-11T00:00:00.000
[ "Computer Science" ]
Hyper-Spectral Imaging Technique in the Cultural Heritage Field: New Possible Scenarios The imaging spectroscopy technique was introduced in the cultural heritage field in the 1990s, when a multi-spectral imaging system based on a Vidicon camera was used to identify and map pigments in paintings. Since then, with continuous improvements in imaging technology, the quality of the spectroscopic information in the acquired imaging data has greatly increased. Moreover, with the progressive transition from multispectral to hyperspectral imaging techniques, numerous new applicative perspectives have become possible, ranging from non-invasive monitoring to high-quality documentation, such as the mapping and characterization of polychrome and multi-material surfaces of cultural properties. This article provides a brief overview of recent developments in the rapidly evolving applications of hyperspectral imaging in this field. The fundamentals of the various strategies that have been developed for applying this technique to different types of artworks are discussed, together with some examples of recent applications. Introduction. In order to provide curators, scholars, conservators, archaeologists and conservation scientists with efficient tools for gaining knowledge of artifacts and archaeological objects, it is important to study the materials and artists' techniques used in creating the artworks and to understand what restoration materials have subsequently been adopted in their preservation [1,2]. Presently, both invasive and non-invasive approaches can be used in the study of these materials; both have specific advantages and problems [2,3]. When dealing with cultural properties, the use of invasive techniques is discouraged, since they require samples or micro-samples from the investigated objects; however, these techniques offer more detailed information for identification purposes than non-invasive techniques. The latter, instead, are performed without sampling operations, and can be implemented as spot techniques or as imaging techniques. Although they provide preliminary information on the materials and their distribution on the object's surfaces, in most cases non-invasive techniques need a complementary multimodal approach in order to exhaustively characterize or identify all the materials. Usually, scientific investigation programs encompass a hierarchical use of analytical techniques, starting from non-invasive imaging techniques that are used for a preliminary screening and extensive evaluation of the surface, followed by non-invasive analytical spot techniques and, only when needed, as the last step, a complementary phase with micro-invasive techniques focused on investigating a few suitable, selected points. Although not all these steps are always necessary, a preliminary non-invasive analysis should always be recommended before starting any conservation or restoration procedure, in order to assist curators and conservators in their decision-making process. Non-invasive techniques, as such, do not alter the surface under examination and are suitable for long-term or periodic monitoring; they also offer very flexible optical configurations, which have been used for the inspection of outdoor targets at distances of several meters. Interestingly enough, very recently another research front has developed in the opposite direction: applying HSI to the study of small-size and finely detailed art and historical objects, such as photographic materials, including contemporary color negative and positive films.
One such occasion presented itself when a corpus of photographic materials, which had been considerably damaged by a flood, was studied, restored, and digitized within the Italian Tuscan regional project "Memoria Fotografica" (Photographic Memory). The measurements acquired in this project were performed with a modified version of the high-spatial- and high-spectral-resolution HSI scanner made by IFAC-CNR, which allows non-invasive operations on surfaces of variable dimensions. This innovative IFAC-CNR HSI prototype, used to study 35 mm photographic negatives and positives, was optimized for the inspection of details featuring millimetric (or sub-millimetric) sizes [33]. Thanks to the excellent spatial and spectral resolution of the developed system, this novel methodology is now suitable for use in this specific sector to support the conservation and restoration of negative and positive films. To the best of the authors' knowledge, this was the first time such a technique was applied to the analysis of negative films. Another important application of the HSI technique is related to archival, high-quality documentation of works of art. In this case, matching the requirements of curators and conservators involves dedicated HSI instrumentation and experimental protocols. The data have to be acquired with an illumination/observation geometry that follows the Commission Internationale de l'Eclairage (CIE) recommendations, such as the 2 × 45°/0° or d/0° configurations, in order to provide calibrated RGB images and colorimetric values (i.e., CIEXYZ, CIEL*a*b*, sRGB, etc.) [34][35][36][37][38]. The application of the HSI technique to the colorimetric analysis of paintings before and after restoration is still rare in the field, due to the difficulty of obtaining accurate, reliable, and reproducible data suitable for the colorimetric calculations required by the CIE. In addition to these two pioneering approaches, some significant examples of HSI applications in the CH field are presented in the following sections, with a focus on the most recent developments that point toward new applicative directions. State-of-the-Art of HSI Systems. Imaging spectrometers collect data over three dimensions: two spatial and one spectral. For this reason the acquired data-set is called a file-cube (or data-cube, image-cube), as each item/datum is associated with two spatial coordinates (x, y), locating the pixel position, and with a spectral coordinate (λ), providing the reflectance intensity at each wavelength. The most common way of categorizing the various types of imaging spectrometers is by the method of collecting the data-cube in a single detector readout; they are categorized as: (a) push-broom spectrometers; (b) staring systems; (c) Fourier transform imaging spectrometry; (d) whisk-broom spectrometers; (e) snapshot imaging spectrometers [39]. To acquire the data set, the imaging system is pushed across the object under investigation (HSI scanner), or vice versa, and the image-cube is built up one spatial line at a time. Alternatively, the data can be acquired by using an internal scan mirror, without moving the HSI device in front of the object (or vice versa) [12]. Push-broom spectrometers use a focal plane array (FPA) detector, and thus collect a vertical slice of the data-cube at once, so that only one spatial dimension needs to be scanned to fill out the cube.
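As a toy illustration of push-broom acquisition, the following numpy sketch assembles a (y, x, λ) cube one spatial line at a time; the read_line function is a stand-in for a real spectrograph readout, and the array sizes are arbitrary.

```python
import numpy as np

N_LINES, N_PIXELS, N_BANDS = 200, 512, 400  # y, x, lambda (arbitrary sizes)

def read_line(i: int) -> np.ndarray:
    """Stand-in for one spectrograph readout: a (N_PIXELS, N_BANDS) slice
    holding one spatial line of the scene at all wavelengths."""
    rng = np.random.default_rng(i)
    return rng.random((N_PIXELS, N_BANDS))

# Build the data-cube one spatial line at a time, as the scan head
# (or the target) is moved across the object.
cube = np.stack([read_line(i) for i in range(N_LINES)], axis=0)  # (y, x, lambda)

# Each pixel carries a full reflectance spectrum ...
spectrum = cube[100, 256, :]   # spectrum at spatial position (y=100, x=256)
# ... and each band is a monochromatic image of the whole scene.
band_image = cube[:, :, 50]    # image at one wavelength channel
print(cube.shape, spectrum.shape, band_image.shape)
```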
In staring systems, a filtered camera is constructed by placing a filter wheel or a tunable spectral filter in front of a camera. This device collects several images at different wavelengths and needs to take several images of the same detail to complete the data set [40][41][42]. Fourier transform imaging spectrometry (FTIS) uses an internal interferometer, usually a Sagnac interferometer, which images the interference pattern onto the camera sensor [43,44]. This type of imaging system is not common, particularly in the Vis-NIR regions. Whisk-broom spectrometers, which use a linear array of detectors, collect a single column of the data-cube at a time and thus scan across both spatial dimensions of the data-cube [12]; to the knowledge of the authors, however, this type of system is not commonly used for CH applications. Snapshot imaging spectrometers collect the entire three-dimensional (3D) data-cube in a single integration period, without scanning. This kind of imaging system is faster than the previous ones, as it can take spectral images in real time (several frames per second), but to date its spatial and spectral resolutions are poorer than those of, for instance, push-broom systems [45,46]. HSI Application on Photographic Materials. Within the "Memoria Fotografica" (Photographic Memory, 2018-2019) project, recently funded by the Tuscan Region (Italy) and IFAC-CNR Florence (Italy), the HSI technique was applied for the first time to photographic materials, namely to contemporary negative and positive films. This project set as its primary objective the definition of a methodology for the recovery, stabilization and restoration of the photographic collection of the "Dainelli archive", which was damaged by the flood that hit the territory of Leghorn, Italy, in September 2017. This archive included different classes of photographic material (color negatives, color slides, prints with chromogenic and inkjet development) that were severely compromised by this extreme event. The HSI technique was selected as an innovative scientific approach that could simultaneously address multiple needs, including: (a) analysis of the photographic materials (dyes and emulsions); (b) digitization; (c) tentative digital restoration and/or recovery of lost parts. Since the use of HSI for the investigation of photographic negatives was unexplored until this pilot project, the instrumentation had to be re-designed for this specific purpose. At the IFAC-CNR laboratories, a re-adapted version of the HSI scanner [47,48], operating both in transmission and in reflectance mode, was devised to inspect negative and positive photographic films. The hyper-spectral scanner for measurements in transmission on photographic material consists of a linear spectrograph connected to a camera, a mechanical stage that moves the photographic film, and a projector that uniformly illuminates a diffusing screen positioned almost in contact with the sample to be measured. The assembled instrument is connected to a computer that manages it and provides data acquisition and display. With this instrumental configuration, the scanner operates in the 400-900 nm range. The measuring head is equipped with an Opto-Engineering TC23024 telecentric lens, a prism-grating type transmission spectrograph (Specim ImSpector™, Oulu, Finland, mod. V10E) and a Hamamatsu ORCA-ERG digital CCD camera. Scanning is obtained by moving the target (negative/positive film) in horizontal strips up to about 25 cm long.
Prior to each horizontal scan, a calibration is performed on the illuminant itself. Lighting is provided by a 150 W tungsten-quartz halogen (QTH) lamp with a color temperature of approximately 3200 K. The instrument operates with a spatial sampling step of about 37 microns, resulting in the acquisition of approximately 27 points per millimeter, equivalent to almost 700 ppi; the obtained spectral images thus have, at half of the maximum contrast, a spatial resolution of 5 lp/mm. The spectral sampling is performed in steps of about 1.2 nm, which produces 400 bands and results in a spectral resolution of approximately 2.8 nm over the operating range. Presently, scientific analyses applied to photographic films and their constituent materials for conservation purposes are quite fragmentary, and there are hardly any publications available dealing with this matter [49][50][51][52]. From a material point of view, the film system, as far as color is concerned, can be considered relatively simple, with a three-layer structure. Each layer contains a specific dye, namely a yellow, magenta or cyan dye, to which two further correcting dyes have to be added. However, due to the complex and extensive gamut of color offered by negative and positive color films, the identification of these colorants in the 400-900 nm range is far from easy. In fact, when working in transmittance, the three layers all contribute to the final spectra, producing complex spectral features (absorption bands) that are quite similar in all the pixels [52]. This makes it difficult to attain analytical information and to identify the dye formulation (e.g., based on its commercial brand). However, it is possible to process the HSI data in order to infer some knowledge of the state of conservation of the frames, and thus to support the digitization phase and the subsequent digital restoration of the photographic material with accurate spectroscopic information. In some cases, even partially lost parts of a scene can be reconstructed by recovering the information on colors from preserved details. As an example of the potential of this type of analysis, the results obtained on a color negative photogram, showing dented and inhomogeneous areas, are reported in Figure 1.
Figure 1a shows the entire sRGB image, reconstructed from the HSI data, of a color negative that had remained attached to its glassine envelope following the flood (after being in the water and mud for nearly two weeks) and was detached during the restoration intervention. To optimize the spectroscopic study of these frames, automated classification methods were applied to the image-data in order to obtain a separation into homogeneous areas, or rather, to group pixels into classes sharing similar spectral trends. This phase of the research focused on testing a series of algorithms and procedures to define these areas. Among the various algorithms tested, a new statistical approach was proposed based on a mathematical model called UMAP (Uniform Manifold Approximation and Projection for Dimension Reduction) [25]. UMAP is a scalable algorithm for data dimension reduction that favors the preservation of local distances over global distances. It is built upon mathematical foundations related to the work of Belkin and Niyogi on Laplacian eigenmaps [53,54]. In the UMAP procedure, the elements (pixels) that make up the region of interest (ROI) are grouped based on the similarity of their spectral trends (Figure 1b). In this map, the lightest areas correspond to the most represented areas. In the distribution map there is a rather compact central body (reported in white), while a peninsula and two detached areas are distinguishable at its periphery. To recognize, on the sRGB image, the pixels grouped by spectral similarity in these areas, an arbitrary color has been assigned to these zones (or classes). In Figure 1b five classes of pixels are highlighted in red, green, blue, cyan, and yellow. The corresponding average transmittance spectra, representative of these classes, are shown in Figure 1c. It can be observed that the classes represented with the red, green, and blue colors have similar spectral trends, mainly dominated by the spectral features of the magenta color present in part of the tram body (Figure 1d).
The spectral differences among these three classes are instead related to the darker tone of a portion of the tram (blue curve), to the areas partially overshadowed by the trees (green curve) and to the remaining part in direct sunlight (red curve). The red area in Figure 1d could also be related to the portion of the frame that was damaged by water. The other two classes, cyan and yellow, map the distribution of the frame sections that are lightly and severely damaged, respectively. The yellow class, in particular, groups pixels whose spectra show only weak spectral features, originating from the washing out of the chromatic emulsions of the film during and after the flood. The strengths and weaknesses of the UMAP algorithm are summarized in [25]. In particular, in Section 6 of that work it is remarked that, whenever the interpretability of the reduced-dimension results is of critical importance, algorithms like PCA should be preferred over t-SNE and UMAP. Nevertheless, methods like t-SNE and UMAP tend to preserve the local structure of the data rather than its global structure, which in HSI applications means that pixels contributing to different clusters do not necessarily have spectra as distant as the distance between the clusters may suggest. Here, to the authors' knowledge, UMAP proved more scalable than t-SNE, meaning that its results depend less on the data dimensions. On IFAC-CNR data, UMAP worked well up to about 7,000,000 spectral pixels. To give an idea of the speeds, UMAP processes 429,165 pixels with 399 bands in 857.47 s, while the t-SNE version optimized for multicore CPUs by Dmitry Ulyanov (https://github.com/DmitryUlyanov/Multicore-TSNE) takes 2905.28 s. As regards digitization and digital restoration, HSI can help detect whether the dyes of the emulsions have maintained their chromatic characteristics or have undergone alterations, based on the transmittance spectra extracted from the cube-file. These data need to be complemented with the technical data reported by the producers of the photographic film. To this aim, HSI data analysis based on PCA (Principal Component Analysis) proved to be effective; specifically, PCA was successfully applied to reconstruct the absorbance curves of each individual dye from their mixtures [52]. Using this procedure, the absorption curves of the three basic layers could be obtained for each frame, or group of frames, to be compared with the data provided by the manufacturer when the films were produced/purchased. After an iterative process, the reported procedure converges toward an output data set that can be used to guide the digitization of the degraded, altered frames and their subsequent digital restoration. These preliminary results indicate that HSI can play a prominent role in the long-term preservation and development of photo inventories in archives and image collections [55].
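A minimal Python sketch of this UMAP-based grouping of spectrally similar pixels is shown below; the cube is synthetic, and the k-means step used here to cut the embedding into five classes is an illustrative assumption, since the article groups pixels directly by their position in the UMAP map.

```python
import numpy as np
import umap                        # from the umap-learn package
from sklearn.cluster import KMeans

# Synthetic stand-in for a transmittance cube: 64 x 64 pixels, 100 bands.
cube = np.random.rand(64, 64, 100).astype(np.float32)
h, w, bands = cube.shape
X = cube.reshape(-1, bands)        # one spectrum per row

# Project each spectrum to 2D; UMAP favors local spectral similarity.
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

# Cut the embedding into five classes, as in the Figure 1 example;
# k-means is used here only to make the grouping explicit.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embedding)
class_map = labels.reshape(h, w)   # assign an arbitrary color per class for display
print(class_map.shape, np.bincount(labels))
```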
HSI Application for Colorimetric Analysis of Paintings. Non-invasive diagnostics as well as accurate color acquisitions on paintings are possible with the HSI scanner developed at IFAC-CNR. Its orthogonal pair of linear motion actuators moves a compact line-spectrograph in a vertical plane; the actual scan is performed in adjacent, slightly overlapping vertical strips. The Specim Ltd. spectrographs ensure geometrical deformations that can be considered negligible, while additional filters compensate for any internal stray-light. The system's spectral range is 400-1650 nm, with two separate spectrographic heads covering the VNIR range (400-900 nm), which is also used for colorimetric applications, and the SWIR range (950-1650 nm), with spectral resolutions of 2.5 nm and 8 nm, respectively. The devices are push-broom HSI models based on PGP spectrographic heads; the technical details have been described previously in [47,48]. A fiber-optic illuminator (Schott-Fostec), which directs the radiation of a 3300 K QTH lamp onto a fiber-optic bundle, forms the basis of the illumination module, which terminates in a pair of light-lines with cylindrical lenses. The geometry of the illumination system conforms to the CIE standards for colorimetric measurements, projecting the beams around the viewed line segment, symmetrically at 45° with respect to the surface [47]. Internal stray-light is one of the main causes of colorimetric error; it can be compensated only by a flat subtraction, from all wavelengths, of the signal measured on the spectral channels below 400 nm. The combination of high color accuracy and high spatial sampling (0.1 mm) obtained from the image-cubes makes it possible to magnify details without compromising quality; it is thus possible to visualize, for instance, the pattern of paint cracks (craquelure), a detail of great interest in conservation documentation [38]. As a case study, the analysis of the "Polittico dell'Intercessione" (ca. 1425), a panel with five partitions (97 cm × 222 cm) painted by Gentile da Fabriano (c. 1370-1427), in the Church of San Niccolò Oltrarno in Florence, is reported. The HSI measurement was mainly focused on a color evaluation of the paint surface before and after the challenging restoration operations. The five partitions depict Saint Louis of Toulouse, the Resurrection of Lazarus, the scene of the Intercession of Christ and the Holy Virgin with God, Saints Cosmas and Damian, and Saints Julian and Bernard [56]. It was considered a cryptic, unreadable work until an important restoration project made it comprehensible. The first documentation of this work, dated 1862, reports that the painting belonged to the Florentine church of San Niccolò Oltrarno. In 1897, the panel was badly damaged by fire, which magnified the alterations induced by previous restorations. The polyptych was therefore considered irrevocably damaged and, consequently, was stored away for decades, until 1995, when it was selected for a new restoration project carried out by the Opificio delle Pietre Dure (OPD) in Florence, Italy [56]. Thanks to a standardized experimental acquisition protocol, which includes reproducibility of the illumination, data acquisition, repositioning, etc., measurements with the IFAC-CNR HSI device guarantee comparability between different measurement sessions. In this specific case, it was possible to colorimetrically compare the images collected before and after the restoration procedures on the painting. These data were stored as CIEL*a*b*76 TIFF image files; it was therefore possible to visualize the three color parameters (L*, a*, b*) as three separate grey-level maps, in which high values of the three parameters correspond to brighter pixels in the resulting image file. In Figure 2, the reconstruction of a detail of the scene of the Resurrection of Lazarus is reported.
Here, the sRGB image reconstructed from the HSI data at the end of the restoration procedure is reported together with the images of the b* colorimetric values calculated in the CIEL*a*b*1976 color space before and after the restoration, and with their color-difference image. In the analysis of this detail, since the gamut of the colors used by the artist had been dramatically reduced by the fire of 1897 and by the previous restorations, it is evident that the color parameters changed to different extents depending on the area of the painting. As an example, three spots were highlighted and their colorimetric values calculated from the data-cube (Table 1). They were chosen to show the chromatic effect of the restoration procedure on three colors that are affected differently by the presence of a yellowed, aged varnish; this effect, which can be appreciated visually, has to be documented using a CIE colorimetric approach. To better visualize the results on the painting after the removal of the aged varnish, the CIE b* value can be used; it represents the blue-yellow component, with yellow in the positive and blue in the negative direction. In this case, as expected, the bluish area (point 1) lost part of its yellow component, while the red and flesh-tone areas (points 2 and 3, respectively) increased this value, appearing after the restoration more yellow (and red) than before. In addition, the a* and L* values increased to different degrees once the work was finished. Figure 2. From left to right: sRGB image of the Lazarus detail reconstructed from the HSI data at the end of the restoration procedure, with the spots whose colorimetric values are reported in Table 1; grey-scale images of the b* colorimetric values before and after restoration; difference image of the ∆b* colorimetric values (after − before). Table 1. Colorimetric values at three selected spots (average of 5 × 5 pixels, spectral acquisition sampled by 2) calculated from the spectra extracted from the cube-files acquired before (L*_b, a*_b, b*_b) and after (L*_a, a*_a, b*_a) the restoration process; CIE standard 2° observer and D65 illuminant, CIEDE2000 color-difference formula [57]. The visualization of these colorimetric variations through a newly elaborated difference image is an important tool provided to conservators and art historians to understand the extent to which the restoration has changed the work.
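A short Python sketch of this kind of before/after comparison is given below, using scikit-image's CIEDE2000 implementation; the L*a*b* image-cubes are synthetic placeholders, and perfect registration between the two acquisitions is assumed.

```python
import numpy as np
from skimage.color import deltaE_ciede2000

# Synthetic CIEL*a*b* image-cubes (H x W x 3) before and after restoration,
# assumed registered so the same pixel indexes the same point on the painting.
rng = np.random.default_rng(0)
lab_before = rng.random((100, 100, 3)) * [100.0, 60.0, 60.0]
lab_after = rng.random((100, 100, 3)) * [100.0, 60.0, 60.0]

# Per-pixel Delta-b*: positive where the surface became more yellow.
delta_b = lab_after[..., 2] - lab_before[..., 2]

# Per-pixel CIEDE2000 color difference, as used for Table 1 [57].
delta_e = deltaE_ciede2000(lab_before, lab_after)

# Average over a 5 x 5 spot centered at (row, col), as for the selected points.
row, col = 50, 50
spot_de = delta_e[row - 2:row + 3, col - 2:col + 3].mean()
print(float(delta_b.mean()), float(spot_de))
```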
Application of a New Compact and Portable HSI System. The relatively new Specim IQ HSI camera differs from other HSI devices available on the market in that it integrates the hyper-spectral sensor, additional color cameras, and a flash-memory card for data storage into a single compact and portable housing. Furthermore, this system works like a reflex camera, with an interactive touch display on the back that allows the entire set of data acquisition and processing operations to be controlled (Figure 3). Moreover, it has been designed as a mobile, hand-held and stand-alone HSI camera for applications in different kinds of environments [31,58]. The integrated color camera supports the use of the spectral camera by allowing visualization of the scene and adjustment of the spectral camera's manual focus. The camera can be used equally well indoors or outdoors, and it works with both natural and artificial light. Like the majority of hyper-spectral cameras, the Specim IQ is a push-broom system that performs a line-scan over the target area; the scanning is carried out by internal mechanisms. The acquisition process usually takes from a few seconds upwards, so the camera has to be mounted on a tripod. The Specim IQ technical characteristics are the following: 400-1000 nm working range; 7 nm spectral resolution; 3.5 nm spectral sampling; 204 spectral bands; 512 × 512 pixel resulting image; the unprocessed and processed data of each acquisition are saved directly by the camera and occupy approximately 300 MB. The hyper-spectral data are visualized immediately after the measurement, and the user can add metadata to them. "Saint Catherine carried by the Angels on Mount Sinai" (49.6 cm × 67 cm, private collection, 19th century, oil painting on canvas) by the Austrian artist Karl von Blaas (1815-1894) is used here as an example of the way in which the Specim IQ can be applied to the study of artworks. The HSI data served to identify the pigments the artist had used and to map them over a particular area with the Spectral Angle Mapper (SAM) algorithm, a spectral classification tool included in the camera's software. The results revealed, for instance, that the artist had used Thénard's blue (also known as cobalt blue, an important synthetic pigment used since the 19th century) in the blue and bluish areas. The image that appeared on the camera's screen can be seen in Figure 3b, in which the graph on the right shows the spectrum of a pixel extracted from the blue right sleeve of the angel. The identification of the blue pigment as cobalt blue was based on the entire spectral shape and on its absorption features in the 500-700 nm range [59]. Subsequently, its areal distribution was mapped with the SAM tolerance-thresholding option, selected from the camera's SAM mask tool (Figure 3c). This map was obtained by using a spectrum selected from the blue area of the imaged scene as the reference.
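The spectral angle mapping used here reduces to a simple cosine-angle computation between each pixel spectrum and a reference spectrum; a minimal numpy sketch follows, with a synthetic cube and an arbitrary tolerance threshold standing in for the camera's SAM mask tool.

```python
import numpy as np

def spectral_angle_map(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Angle (in radians) between each pixel spectrum and a reference spectrum;
    smaller angles mean more similar spectral shapes, regardless of brightness."""
    norm_cube = np.linalg.norm(cube, axis=-1)
    norm_ref = np.linalg.norm(reference)
    cos = cube @ reference / (norm_cube * norm_ref + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Synthetic reflectance cube (H x W x bands) and a reference spectrum
# picked from a blue area of the imaged scene.
cube = np.random.rand(64, 64, 204)
reference = cube[10, 20, :]

angles = spectral_angle_map(cube, reference)
mask = angles < 0.10   # tolerance threshold in radians, tuned per material
print(int(mask.sum()), "pixels classified as matching the reference")
```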
Conclusions

In the almost two decades since HSI was first introduced into the CH field, the technique has proven its versatility and utility for investigating different typologies of artworks, providing elements to answer a variety of conservation issues. HSI has demonstrated enormous potential, which is partly still unexploited, in particular in the examination of paintings and works on paper, and the technique offers great scope for future developments. In recent years, new application trends have been emerging, including applications to photographic materials and accurate color analysis as a support for restoration interventions. Moreover, new models of user-friendly HSI devices featuring portability and compactness are spreading the use of HSI techniques to several contexts and different environments. All of this can contribute to creating other innovative HSI methods and applications and can lead to new directions in the study of CH objects. Finally, HSI is finding its place among other techniques as a preliminary approach to the study of artworks, specifically because it can analyze the entire surface of any nearly flat object.
Funding: This research was partially funded by Regione Toscana (Memoria Fotografica project).
7,494.2
2020-05-01T00:00:00.000
[ "Materials Science", "Art" ]
European Economic Ethics Research - A Diagnosis

The purpose of the European Economic Community's founders was not only 'mercantilist', but 'economic', in the broader sense of the term 'economics'. If there has been a specific model of Europe, it has been the social market economy. But the crisis of the welfare state has raised doubts about key features of that model. Does Europe have anything particular to offer in the economic realm? The approaches of economic ethics that have been developed in Europe have a lot to say in the formation of a 'Euroethos'. The article tries to show the main European approaches and to delineate the traits of a European proposal.

1. Is the European Union also an ethical-economic project?

There is a long history behind the establishment of the European Union, although it was only after the Second World War that Robert Schuman took the first step towards creating the European Community: integrating and jointly managing Franco-German coal and steel production in order to increase wealth and, above all, lay the foundations for harmony. It was thought that a European Federation could gradually be formed from this position. The European Coal and Steel Community was set up in 1951 with the Treaty of Paris. In 1957 the 'Six' (Germany, France, Italy, Belgium, the Netherlands and Luxembourg) signed the Treaties of Rome, by means of which the European Economic Community (EEC) and the European Atomic Energy Community (EURATOM) were created. From this point the process of building the European Union went on in several different stages. It is undeniable that the European Union's creation as an economic community stemming from the Coal and Steel Community led voices from the left to criticise the fact that the European Union had first and foremost been born as a 'Europe of Merchants', which only gradually would insist on also becoming a 'Europe of Politicians' and, later on, a 'Europe of Citizens'. In the meantime, the first target has been achieved to a greater or lesser extent; the second is still a long way off, and the third even further away. But the fact is - as scholars and citizens quite rightly say - that without a Citizens' Europe it will be very hard to achieve a political and economic Europe, which is why it is urgent to build that Europe of Citizens. On the other hand, it is also true that the purpose of the European Economic Community's founders was not only 'mercantilist', in the exclusively monetary sense of the term, but 'economic', in the broader sense of the term 'economics'. If economics has the aim of helping to create a good society, as we will defend in this paper, then one must acknowledge that the founders of the European Community were setting out to build a peaceful Europe, based on creating wealth and on common tasks calling for the peoples' cooperation rather than bringing them into conflict. The economic workings should have a positive-sum outcome and help to build peace. The founders of the European Community were implicitly agreeing with Kant's view that striving for peace is an ethical-juridical duty, as practical reason utters its irrevocable veto: "[t]here ought to be no war (…), because this is not the way one should seek their right" (Kant 1968b: 354). But they also agreed with Kant that there are likewise reasons for seeking peace based on the commercial instinct, because the commercial spirit cannot coexist with war (Kant 1968c: 368).
Thus it was necessary to foster trade, taking the regulative idea of perpetual peace as a guide. The path taken by the Community and the European Union has clearly been a winding one, and the different referenda on the Draft of the Constitutional Treaty produced such discouraging results that it was necessary to draw up a new Reform Treaty. One of the crucial points of discussion has precisely been the economic realm: leftist sectors quite rightly have pointed out that the first part of the Draft of the Constitutional Treaty covered social rights, amongst other fundamental rights, but that the following parts failed to provide the mechanisms required to protect them. In fact, it seems that the social Europe has given in to neoliberalism. In any case, political experts consider it vital for the Union to thrive and grow in order to become an increasingly relevant reference in the world system alongside the United States, China or India. The European Union, as a transnational union, constitutes a pioneering experiment which cannot be ruled out. It is nevertheless quite certain that going ahead with it requires designing the traits of a certain European êthos, a 'Euroethos', and the way economics is conceived and practised has a central role in this êthos. If there has been a specific, albeit non-exclusive, European model, it has been the social market economy, which has attained a much higher level of equity in the economic sphere than other models. In this respect, Michel Albert talked of Rhenish capitalism as opposed to the Californian version in the nineteen-eighties, and Jeremy Rifkin continues to insist in the twenty-first century on the particular nature of the "European dream" as compared to the North American dream (Albert 1991; Rifkin 2004). Nevertheless, the crisis of the welfare state has raised doubts about key features of that model, which would appear to be beating an all-out retreat. Does Europe have anything particular to offer? The approaches of economic ethics that have been developed in Europe doubtlessly have a lot to say in the formation of a Euroethos.

2. The origins of economic ethics in Europe

What is economic ethics? In the nineteen-seventies 'business ethics' was born in the United States, reaching Europe in the eighties and then, little by little, the rest of the world. The changes in Eastern Europe and the reinforcement of the European Union led Europeans to develop their own approaches, no longer depending on the North Americans (McMahon 1997: 317). At that time experts did not distinguish between economic ethics and business ethics. They often talked of 'Wirtschaftsethik' and 'Unternehmensethik', of 'éthique économique' and 'éthique de l'entreprise', and also of 'ética económica' and 'ética empresarial', as well as 'ética de los negocios', without making any distinction between them. Both were linked in the bibliography on the subject, and not incorrectly so, because business models depend to a large extent on economic systems.
It is nevertheless also possible to distinguish between economic ethics and business ethics, insofar as economic ethics would preferentially - though not exclusively - deal with reflection on economic systems and on the diverse economic orders, and consider the place of business organisations and institutions from these standpoints, whilst business ethics would, above all, address the action of business organisations and individual actors within the framework of the codes which are at the same time a source of possibility and of constriction. Economic ethics - as Conill puts it - refers either to the whole field of the relations between economics and ethics in general or to the ethical reflection on economic systems (Conill 2004: 17). Arnsperger and Van Parijs propose a typical characterisation of a European êthos: for them, economic ethics is the part of social ethics dealing with the behaviour patterns and the institutions in this domain (Arnsperger/Van Parijs 2000: 14-15). The importance of considering economic ethics in the context of social ethics as a whole is explicitly expressed in the title of their book, Éthique économique et sociale, which insists on social justice being one of the essential dimensions in this context. For his part, Karl Homann states that "Wirtschaftsethik (Unternehmensethik) deals with the question: how can moral norms and ideals be brought to bear [zur Geltung bringen] in the conditions of modern economics?" (Homann 1993: 1287).

The origins of economic ethics in Europe

The relations between economics and ethics in the European traditions go back at least as far as Aristotle's reflections on economics and chrematistics. However, attributing the emergence of modern economics to the protestant ethic has been a very widespread opinion since Weber and Tawney: the protestant ethic would have an influence on fostering production, on the saving and investment which got capitalism under way (Weber 1904/1905; Tawney 1926). We should not forget that beliefs are vital for social life, but also for economic life. We live, move and exist in beliefs, and that is why it is important to modulate beliefs and not only change the rules of the game. However, Weber's theses have been criticised from at least two standpoints: the first questions whether it was Protestantism and not Catholicism which fostered capitalism; the second does not set out to correct Weber but to complete his view, by attempting to show that Protestantism was the initiator not only of the capitalist form of production but also of the modern form of consumption which made capitalism possible. Without a rise in consumption, production does not increase either, and Protestantism stimulated both. As for the first point, some authors point out that part of Catholic thought supported the obtaining of profit (Robertson 1973). Not only was the 'spirit of capitalism' present in Catholic spheres, such as Florence and Venice in the fifteenth century, and the south of Germany and Flanders, but traits supporting the birth of capitalism can be gleaned in Catholic thought. For example, sixteenth-century Spanish scholastics, very particularly the 'Salamanca School' (Vitoria, Soto, Molina or Valencia), proposed doctrines which already contained a rudimentary form of assumptions now considered to be liberal or capitalist - in the field of private property, public finances, monetary theory, value theory, the theory of prices, salaries and profits.
Late scholastics also recognised the importance of trade for a community's peace and for seeking the common good amongst the different regions of the earth (Grice-Hutchinson 1995; Chafuén 1991; del Vigo 2006). In this same vein, it has even been affirmed that the Jesuits "favoured the company spirit, the freedom to speculate and the expansion of trade as a social benefit. It can be asserted that the religion underlying the capitalist spirit is more Jesuitism than Calvinism" (Robertson 1973: 164). As for the second criticism of Weber's doctrine, the sociologist of religion Colin Campbell wrote his book The Romantic Ethic and the Spirit of Modern Consumerism precisely with the aim of extending Weber's thesis on the influence of Protestantism on the birth of capitalism. Campbell thinks that if the Industrial Revolution was possible due to an ethics of production, which gave moral approval to the production and accumulation of wealth, there also had to be some ethics of consumption in order to give consumption a moral identity (Campbell 1987: ch. 6). If the question that Weber brought up as regards the accumulation of wealth was "how could an activity directed toward profit, tolerated at best from the Christian standpoint, become a vocation?", the question should now be asked in relation to consumption, that is, "how could pleasure-seeking, ethically tolerated in the best of cases, become an acceptable goal for citizens of the ascetic society?" (Campbell 1987: 100). If ascetic rationalism promoted production, the sentimental side of pietism fostered consumption: together these contributed to the development of the modern economy. The ascetic ethics of vocation and predestination would promote production and the accumulation of wealth; the sentimental ethics of love and pleasure would promote consumption and thus increase demand. Advertising and fashion would not have successfully taken hold in consumers' spirits had they not harboured an insatiable desire for novelty, whose satisfaction would be ethically justified. Reflecting on an ethics of consumption, and not only of exchange and distribution of goods, is one of the great tasks of European economic ethics at the present time (Knobloch 1994; Cortina 2002). The next step in the formation of an economic ethics was taken by Adam Smith. In his work we find an ethics which opens up to economics through the demands of the development of modern reality, thus constituting an economic ethics, and an economics which maintains its ethical nucleus according to the specific social and political contract, thus constituting an ethical (or political-ethical) economics. The central traits of Smith's economic ethics would be the following, according to Jesús Conill: (1) Self-love and self-interest, however important they may be in the system of trade, are not the only motives. Furthermore, they are not opposed to sympathy, sympathy instead being opposed to egotism. (2) What moves us to a great extent is the economics of esteem. (3) Sympathy enables the approving and disapproving of conducts. (4) A critical study of the role of markets has to be made, establishing authorities for control, so that the "system of freedom" is realised in the "commercial society". (5) Smith professes a modern economic republicanism which establishes the connection between the public and the private sphere on the basis of freedom (also economic freedom) and of virtue (also public virtue).
Neither the invisible hand (which tends to be identified with the market) nor the visible one (which tends to be identified with the state) is enough - there has to be the "intangible hand" of virtue and social capital, which generates civility (Conill 2004: 103-113). This economic republicanism is extended in a large number of the current trends in economic ethics in Europe. Only later would the autonomisation of the economic sciences lead to the separation between ethics and economics.

The profile of European economic ethics from the nineteen-eighties

Economic and business ethics in their present form came into Europe in the nineteen-eighties as an academic discipline at universities and business schools and also in the world of companies. In 1984 the first chair of business ethics was founded at Nijenrode University, the Netherlands Business School, and ten years later the number of chairs had risen to fifteen, including the prestigious Dixons Chair for Business Ethics and Corporate Responsibility at the London Business School. In 1992 Business Ethics: A European Review came out; in 1994 the collection of essays edited by B. Harvey, Business Ethics: A European Approach; in 1995 a bilingual magazine in French and English; and in 1987 the European Business Ethics Network was founded (van Luijk 1997). In this context there were pioneering works by Oswald von Nell-Breuning, the late dean of Catholic Social Thought in Germany, and Arthur Rich, the late Protestant leader of business ethics in Switzerland, working in the eighties until the new movement started (Nell-Breuning 1956-1960; Rich 1984-1990). At this time hardly any distinction was made between economic ethics and business ethics, though economic ethics seemed to refer to the macro-level in its connection with the meso-level, whilst business ethics covered the micro-level in its relationship with the meso-level. One of the characteristics of economic ethics in Europe consisted in dealing, above all, with the macro- and meso-levels. There is obviously a great similarity between the economic ethics developed in Europe and the form developed in the United States, but in the nineteen-nineties some works set out to stress the differences between the two (Enderle 1996; van Luijk 1997), which is of great use for establishing the central traits of European economic ethics from the standpoint of research. We will attempt to describe these traits, which are not exclusive in the least, but may indeed provide a profile of these ethics. (1) A first characteristic of European economic ethics is the diversity of both languages and cultural traditions. For instance, there is a difference between Great Britain and Ireland in comparison with continental Europe, and the islands are more closely linked with the United States and Anglo-Saxon culture. But the sort of joint work that one might wish for also fails to exist among continental European specialists, and this is something that has unfortunately endured until the present day.² (2) The dominance of the United States and the strength of English as a language mean that Europeans depend to a large extent on the Anglo-Saxon countries as regards journals and publishers with the greatest impact. In spite of there being high-quality publishers and journals in continental Europe, it is the Anglo-Saxon ones which gain the greatest recognition. (3) The origins of economic ethics have conditioned their development in the United States and Europe.
In the United States it was business scandals which aroused reflection in the nineteen-seventies and the need to improve business practices in order to generate trust. In continental Europe, the situation was different. Since the nineteen-seventies the debate on the models of political economy (Hayekian liberalism, social liberalism, liberal socialism, democratic socialism, historical materialism) has been a central focus of discussion in both the academic world and in political life. The fall of the Berlin wall in 1989 and the fact that only the market economy remained in force revived debates on economic models. But the question was no longer the alternative 'liberalism or planned socialism', but that of hybrids, of very different 'ethics of capitalism'. In this debate the reflection of economic ethics on the frameworks proved essential (Koslowski 1986; Conill 2004). Consequently, economic ethics in continental Europe consisted - and still consists - above all in reviewing economic systems. (4) Indeed, one characteristic of European economic ethics is that it deals, above all, with systemic and foundational questions, preferring to handle theoretical matters. This reflection gave rise to some excellent models of economic ethics, which have gradually been perfected over these twenty-five years without changing substantially. However, there has increasingly been an attempt to link the economy and business, to bind regulatory frameworks to concrete experiences. (5) This concern for frameworks rests both on epistemological reasons and on the influence of religious traditions. On the one hand, there is a close connection between European economic ethics and the social sciences and philosophy. Economic ethics arose as a field of social ethics, concerned with matters such as the ethical aspects of privatisation, the moral basis of employees' codetermination rights, the ethics of investment policies and the moral properties of the market economy (van Luijk 1997). This is quite understandable, given the relevance of the European models of the Theory of Society. But the influence of Protestant and Catholic moral theology and of the Catholic Church's social doctrine is also decisive, in Central European and Nordic countries as well as in the Mediterranean countries.³ In Europe as a whole, a large number of theories, centres or schools concerned with economic ethics have, or have had, a religious identity. This is the case with the influential works by Arthur F. Utz (Utz 1994) and all those that follow this approach, such as Alcalá de Henares University (S. García Echevarría), but also with the influence of Louvain-la-Neuve University, the "Économie et Humanisme" centre and so many others, including business schools.⁴ (6) In the context of this research work, there is a wide representation of liberal trends based on an individualist paradigm, both the followers of Hayek and the theories of rational choice. But one of the peculiarities of European economic ethics is the relevance of models which lay the basis for a social market economy.

________________________
² A pleasant exception was the international conference organised by the Berlin Forum in cooperation with the Heidelberg Academy of Sciences and Humanities in Heidelberg, 2007, which dealt with the question "European Business and Economic Ethics: Diagnosis - Dialogue - Debate".
Free-market opportunities are combined with the acceptance of a quota in the promotion of the common good on the part of corporations, governmental agencies, trade unions, professional groups and other interest groups. According to van Luijk, these traits enable us to talk of a "European version of business ethics" (van Luijk 1997: 76). (7) Closely connected with these characteristics, one of the central features of European research into economic ethics is the reluctance of the majority of the new models of economic ethics to accept individualism as the core of social life, as we will see later on in this article; instead, there is an inclination towards recognising intersubjectivity as the core of everyday life. According to the majority of these models of economic ethics, this intersubjectivity ought to be materialised in the economy, either on the institutional level (reflexive functionalism) or on the level of interpersonal relations. Kant's legacy is unquestionable, specifically the formulation of the categorical imperative of the 'end in itself', which enables the hypothetical imperative of the 'people of devils' to be structured with the categorical one of the 'kingdom of ends'. Indeed, the formulation of the imperative of the 'end in itself' - "Act in such a way that you treat humanity (…) always at the same time as an end and never simply as a means" - orders people to be treated unconditionally as ends in themselves, but at the same time allows them to be taken as means to realise one's own interests (Kant 1968a: 429). Because even a people of devils - on condition that they are intelligent - would prefer rules of cooperation on the institutional level, meaning that the rules wanted by all can regulate individual acts of exchange moved by the 'strongest interest'; but a kingdom of ends realises that people also constitute the unconditioned moment of economic life (Kant 1968c: 366). It is nevertheless necessary to recognise here that Hegel leads us beyond Kant, not only because he demands embodying morality in political-economic institutions but, above all, because he discovers that reciprocal recognition is the core of social life: the relationship between subjects - the intersubjective relation - and not only the one established by individuals with contracting capacity. This approach is the one taken in the work of a large number of European authors, both in economic ethics (Kolm, Ulrich, Steinmann, Zamagni, Bruni, the Valencia School) and in the spheres of ethics as a whole (Apel, Habermas, Honneth, Ricoeur, Cortina). (8) The fulfilment of intersubjectivity entails that levels need to be discerned in economic life: that of interpersonal relations in the sphere of exchange, and that of citizens who have to consent to the rules of normative frameworks. One of the central categories of reflection is that of economic citizenship, in a more republican than liberal sense.

________________________
³ The influence of the Catholic Church proved decisive through two statements: the 1864 Syllabus condemned modernism and economic liberalism, and the 1891 Rerum Novarum marked the start of the apogee of the Catholic Church's social doctrine, so influential in different sectors' positive valuation of the social market economy and in matters of social justice. In Spain, this influence led to the emergence of groups such as Acción Social Empresarial or Fomento Social and the Cooperative Movement, which is in an excellent state of health in these times of globalisation.
These characteristics are to be found pervading the most relevant models of economic ethics over the last twenty-five years.

4. The great models: the dispute of rationalities

The central problem in the sphere of economic ethics in Europe involves the relations between economic rationality and ethical rationality, in cases in which ethics is acknowledged to have any rationality. At least three positions can be assumed in this respect: (1) Ethics is irrelevant for economics. This is the standpoint asserted by positivism and the Theory of Systems, in approaches such as the one taken by Niklas Luhmann. (2) Economics is an axiologically neutral science, as acknowledged by Max Weber's famous Wertfreiheit postulate. This is the position upheld by a large number of economists. (3) There is a relationship between economics and ethics enabling some sort of economic ethics to be developed. This paper will concentrate on the models standing within this last position, but above all deal with the new models. In effect, from the nineteen-eighties onwards the traditional models linking economics and ethics were still being developed in Europe. There was doubtlessly a major presence of utilitarianism in its different versions; of analytical Marxism, taking methodological individualism as its basis (Elster, Cohen, Ovejero); of economic liberalism, taking the approach marked by Hayek and sometimes extending to a 'democratic capitalism' like the one put forward by M. Novak (Schwartz, Rubio de Urquía, Rodríguez Braun); of the iusnaturalist ethics successfully developed by Peter Koslowski; or of the proposals based on the social doctrine of the Catholic Church, as is the case with Utz or García Echevarría. All these proposals continued to be of great influence in Europe. From the nineteen-eighties, however, new models of economic ethics, acting as a basis for different schools of thought, began to come forward. I intend to refer to some of these models, taking the emblematic authors of each of them as a reference. I shall start by looking at the 'radical liberalism' of Van Parijs.

Radical liberalism (Philippe Van Parijs and the Chaire Hoover)

In a 'liberal' line there is the solidarity-based (not ownership-orientated) liberalism of Van Parijs and the Hoover Chair. In a Rawlsian approach they opt for a 'radical liberalism', which assumes a basic citizens' income as a central tenet. Basic income makes real freedom possible for all, insofar as it enables people to take on jobs which may prove gratifying. Basic income is a modest income in cash, sufficient to cover basic necessities; it is regularly received and not subject to any condition other than citizenship or residency. It can be defined as an "income paid by the government to each full member of society (a) even if she is not willing to work; (b) irrespective of her being rich or poor; (c) whoever she lives with; (d) no matter what part of the country she lives in" (Van Parijs 1995: 35). This is one way to implement economic citizenship, understood as the right to enjoy a part of a country's economic goods. In this respect it coincides with the stake proposed by Bruce Ackerman and Anne Alstott but, unlike them, the quantity is modest and regularly given, whereas Ackerman and Alstott propose handing over a large one-time lump sum (Ackerman/Alstott 1999). The reason given by those who opt for a basic income is that citizens do not have equal opportunities when handling a large sum of money, and it is better to give a modest but regular amount.
This same position is defended by the Basic Income European Network (Raventós, Domenech, Pinilla, Bertomeu).

Reflexive functionalism: the economic theory of morality (the Munich School: Karl Homann, Ingo Pies, Christoph Lütge)

According to Homann, economic ethics deals with the question: "how can moral rules and ideals have any authority under the conditions of modern life?" (Homann 1993: 1287). In modern societies it is not the goals and principles of morality, such as the promotion of individuals' dignity or solidarity, that are questioned, but the means by which these goals are attained. Since the fall of the Berlin wall it has been openly acknowledged that modern societies are market economies in which activities are coordinated through competition, and competition forces those who wish to remain on the market to calculate economically and seek a profit. They are thus moved by incentives such as profit-seeking and the avoidance of penalties. But, according to Homann, traditional ethics, above all the thought rooted in Kant, claims that it is immoral to allow oneself to be drawn along by incentives: it would seem that competitiveness and morality exclude each other. However, Homann states that it is necessary to implement moral values through the modern system of economic competition. That is why the maxim for modern ethics should be as follows: "regulatory ideals and demands assert themselves only through the modern economy, and not against it" (Homann 1993: 1295). It is therefore necessary to distinguish between the structural order (the constitution, laws, the economic order, the order of competition) and the measures within that order (innovations, market strategies, price policy etc.). It is also necessary to organise the rules so that the results desired by all arise from one's own interest (Homann/Blome-Drees 1992). Rules must nevertheless be effective if they are not to wear away, and effectiveness is only achieved through their realisability. To determine the rational possibilities of the observance of rules, economic rationality must be resorted to. In the economic model of action, 'rationality' means that human beings follow incentives which arise from situations. If patterns of conduct are to be changed, the situation and the incentives which arise from it have to be changed. Homann proposes to distinguish between two perspectives on morality: the interior one, which refers to interpersonal relations, and the systemic one. From the systemic angle the complex of principles, norms, actions and virtues is construed as strategies to solve social problems, that is, they are assigned social functions. This involves (1) a functional determination of morality in the framework of a theory of society; (2) a positive calculation of the consequences of institutional arrangements; (3) establishing the institutionally appropriate incentives so that the results sought after morally stem from one's own interest, that is, as unintended consequences of intentional actions; (4) making a positive analysis of the aggregated consequences of alternative rules and proposing as compulsory norms the ones that have to be wished for by means of a legitimating act that can only be made from democratic consensus (Homann 1997: 146). Thus, it can be said: if norms oblige, it is because the consequences of their general fulfilment are wished for. That is also why rules will only oblige if their fulfilment is sufficiently assured.
Only from this paradigm can the non-intended conditions of intentional actions, universal dilemmatic structures and the meaning of incentives for society's morality bear any fruit. The thesis of this functionalism will be: the systematic place of morality in the market economy - though not at all the only one - is the Rahmenordnung, that is, the economic order (Homann 1997: 152). But the reflexive functionalist proposal leaves some problems unsettled: (1) It conceives ethics as an ethics of conviction (Gesinnungsethik) and disinterest, not as an ethics of responsibility and universalisable interest. This second type of ethics must also take incentives into account. (2) It conceives economic rationality as exclusively motivated by incentives arising from situations. But in fact the homo oeconomicus, as a heuristic method, conceals a major parcel of reality, because economic rationality also covers other motivations for exchange, production, distribution and consumption (the wish for identity, fellowship, traditions etc.).

Integrative economic ethics (Peter Ulrich and the St. Gallen School)

In spite of the accusations of falling into dualism that have been made against this model, the fact is that it does not attempt to oppose economic and communicative rationality but to transform economic rationality from the inside. As Peter Ulrich states, a critical economic ethics sets out to find the normative and axiological assumptions of economic rationality and thus provides the social economy with a normative-discursive foundation, neither utilitarian nor contractualist but discursive, understood as democratic control by those affected. The integrative economic ethics approach considers the following three institutional levels of socio-economic rationalisation: (1) the order of understanding (that of the social contract), which corresponds to a normative social integration and a communicative ethical rationality; (2) the economic system (controlled by the market and the state), which corresponds to a functional direction of the system, by means of which complexity is reduced, and to strategic rationality; (3) personal action, that is, the realm of contracts of trade, pertaining to the effective use of resources and to an instrumental, calculating rationality. The integrative approach includes these three levels. Thus the market neither represents morality nor constitutes its counterpart: whether the market works efficiently towards ends that are valuable for vital praxis depends on the democratically determined political-economic structural order. That is why the action of economic citizens is important from the standpoint of an active republicanism (Ulrich 1997). A critical economic ethics of this sort will attempt to intertwine the teleological element of economic rationality and the deontological aspect of ethical-practical reason. It will do this in three ways: (1) As a normative foundation, the ethics of discourse ensures a deontological minimum, such as the unconditional value of the person and the reciprocal recognition of emancipated interlocutors, which leads to effective problem-solving, because people who realise that they are well treated in their work are also more efficient. Indeed, modern morality reduces transaction costs, because it generates better responses regarding dignity and justice, insofar as people see their rights protected. (2) A critique of given individual preferences which uncovers self-interests in the strict sense. In contrast to methodological individualism, it is important to remember that preferences can be modified.
(3) The integrative economic ethics approach states that it is vital to consider the economy not only from the standpoint of the system but also from the Lebenswelt, from the non-systemic presuppositions of the rational economy. The external effects of the economy force one to take into account all those affected. It is therefore necessary to institutionalise a political-economic communication order constituted by economic citizens. If we wished to apply a test to verify the validity of economic institutions or actions, one which can also act as a regulative idea, we could say that "any action or institutional regulation which could have been determined to be 'productive' by free and emancipated citizens is socio-economically rational" (Ulrich 1993: 237). The approach taken by integrative economic ethics nevertheless raises certain problems: (1) By distinguishing three levels and assigning a type of rationality to each of them, it masks the fact that communicative, strategic and calculating rationality act together on all these levels. (2) On the action level it does not seem to take into account that there are motives for exchange other than calculation. (3) Its trust in public opinion as the place for morality requires an in-depth analysis of public opinion, which is the place for forming judgments, but not for making decisions.

Economic and business ethics based on dialogical praxis: a "culturalist" strategy (the Erlangen School, especially Horst Steinmann and Albert Löhr)

Steinmann asserts that the new social horizon, that of globalisation, brings up the demand for economic ethics once more. Globalisation increases the shortcomings of the governance of national law and requires getting private actors from the economy and civil society involved in ethical-political processes which enable creating and assuring the normative foundations of a peaceful world economy (Steinmann 2004: 34). Only citizens' will for peace provides the historical-cultural assumption without which one cannot distinguish between right and wrong: the culture of peace is a vital a priori condition of ethical political theory. But globalisation reveals at the same time a cultural fragmentation. Hence, instead of following a universalist strategy ('from the top down'), Steinmann proposes a 'culturalist strategy' which goes 'from the bottom up': it starts from a specific historical situation and gradually, transsubjectively, achieves compatible ways of living. The priority task of ethical-political theories will consist in improving argumentative praxis in education so as to prepare a form of argumentation which makes peace possible. The culturalist strategy will then consist in finding local solutions for specific problems: solutions which may nevertheless be universalised through dialogical networks. This enables a process of reciprocal learning, respectful of specific cultural practices, which in turn enables international planning processes to be devised, taking specific peculiarities into account. Against Homann, Steinmann states that one must recognise that the economic order is not the "systematic place of morality" (Steinmann 2004: 35), and neither can the company's role be reduced to conduct compatible with incentives. The rules of the game are not only presuppositions, means to guide business conduct, but also consequences of it: there is a reflexive integration of the elements.
Against Ulrich, Steinmann holds that the ideal presuppositions of discourse are merely explanations of a historical experience, which has taken place in vital praxis, in the peaceful settlement of conflicts, and which has enabled the distinction between an argued solution of conflicts and a force-induced solution. Universally valid norms are only the result of learning through verified experience (Steinmann 2004). Steinmann's culturalist strategy nevertheless also has some serious limitations in my opinion: (1) The ideal presuppositions of discourse doubtlessly belong to an 'impure reason' and not to a 'pure reason' alien to the experience of life (Conill 2006). But precisely because they enable us to distinguish between the procedures valid for solving conflicts (the argued ones) and the invalid ones (force or deceit), they are a priori suppositions - though not 'pure' ones - in concrete conflict situations. (2) One might think that the search for peace is one of those a priori presuppositions, but in any case what we must recognise is that the value assigned to a rationally valid rule is justice: what is important for norms that serve to solve conflicts is that these norms are just. Otherwise peace may be obtained at the expense of freedom and at the expense of the poor, who have to adapt to specific situations - such behaviour refers to adaptive preferences.

4.5 Civil economy: the relational paradigm (Serge-Christophe Kolm, Luigino Bruni, Carmelo Vigna, Stefano Zamagni)

The Civil Economy line, represented by Zamagni, Bruni or Vigna, also proposes replacing the holistic and individualistic paradigms in economics with a relational one. In their opinion, methodological holism has been discarded and only individualism would seem to be left, but individualism does not explain economic reality sufficiently. It is time to bring intersubjectivity to the foreground, though economics has not taken this into account except in the short stage of the civil economy (fifteenth century). However, to understand the 'new' paradigm, it is vital to make certain distinctions. First of all, it is necessary to distinguish between social interactions and interpersonal relations. The former may be anonymous and impersonal, while, in the case of the latter, the identity of the subjects is a constituent part of the relationship itself and the power of the 'inter' is a central matter. Economics traditionally takes into account only the former. Secondly, it is also necessary to understand human sociability in two senses: (1) the propensity to fellowship, in Smith's sense, which belongs to the expressive dimension of the subject; (2) the utility obtained from coexisting with others, which belongs to rational calculation. Modern science, going by its utilitarian statute, separates these two elements and exalts the exchange of equivalents over reciprocity. The economist is interested in studying market mechanisms and does not analyse the human quality of the results. That means that the economist is not interested in socially orientated motivations, and as a consequence the utilitarian subject has preferences, not desires. A third distinction is the one between extrinsic motivations (an expression of acquisitive passions), which are the maximisation of profit for the businessman and of utility for the consumer, and the intrinsic ones (which arise from the passion for the other as something with which to bolster one's own identity).
The economist insists on incentive schemes for the subject to attain maximum efficiency, while it is true that intrinsic motivations have enormous power. This is why it is necessary to take into account terms such as identity, reciprocity, gratuity, relational goods or happiness, terms that the neoclassical paradigm has neglected, because it does not realise that the influences at work are not always objectively determinable, or that whether a change in the conditions under which the economic action is undertaken represents an objective influence or not depends on the moral constitution of the subject and its reference context. "The aim", Zamagni affirms, "is to think of a subject capable of combining freedom of choice and relations, because if it is true that considering the sole relationship ends up giving rise to an ambiguous communitarianism (according to which the individual is a derivative of the social sphere), freedom of choice alone will not get us any further than the individualist fusion (for which the social sphere is the simple product of individual interactions)" (Zamagni 2006: 46). In this context gratuity has its full meaning, understood not as philanthropy ('doing for others') but as building fraternity, doing with others in a personal relationship. This time it is not the gift which one in turn wishes to receive, as in the essay by Mauss (Mauss 1950), nor concern for the other, but instead a question of 'giving so that you in turn can give'. What erodes the social bond is a market reduced to the exchange of equivalents, an uncivil market, not the civil market founded on the principle of reciprocity. That is why an ethics of civic virtues founded on gratuitous action has to be cultivated: if economic agents no longer preferentially uphold in their structure the values whose affirmation is being sought, any external coercion (incentives and legal norms) is powerless. The solution to the problem of the agents' moral motivation does not consist in setting them constraints or giving them incentives for them to act against their interest, but in offering them a fuller understanding of their good. The Valencia School's proposal is also to use the realisation of intersubjectivity through the economy as a key, because intersubjectivity is the core of social life (neither the individual nor the hólon), but from a critical hermeneutic stance (Cortina et al. 2008; Cortina 2003; Conill 2004, 2006; García-Marzá 2004; Lozano 2004). The characteristics of this proposal are as follows: The philosophical method implemented should be that of a critical hermeneutics, attempting to look into the economic activity itself ('from inside') to discover: (a) the goals which endow this economic activity with social legitimacy and meaning,⁵ (b) the ethical norms that this activity must follow, and (c) the philosophical foundation of such norms, which endows them with rational validity.

________________________
⁵ This will bring to light the actions of personal agents or those of organisations and institutions, but the activity as a whole pursues goals which must be socially legitimated, for not only must political activity be socially legitimated but other social activities too, including the economic domain. Hence, ethical reflection cannot be made from outside the economic activity but must take place from inside.
As regards the goals of economic activity, the main currents of opinion understand that the goals of economics are economic growth and satisfying the preferences of parties presenting a 'solvent' demand. However, it is reasonable to consider that economic growth and GDP are means serving the goal which gives the economy meaning and social legitimacy: the satisfaction of people's needs, starting with the basic ones, and the empowerment of their capabilities, so as to enable them to pursue the lifestyle that they have reasons to value - in short, the creation of a good society (Sen 1999; Conill 2004). In this respect, an economic ethics has to be closely linked to an ethics of human development, and even more so in a global world (Goulet 2004; Gasper 2004). It is precisely globalisation that stresses the interdependence amongst all the countries in the world, and economic activity must have the aim of developing all of them, taking 'development' to mean the empowerment of individuals. As regards the ethical norms that have to be followed in economic activity, they must consist in the modulation of the civic ethic of this society in the economic realm. And finally, the philosophical foundation of the norms is a cordial version of discourse ethics (a 'herzliche Version der Diskursethik'), which considers not only the procedural side of practical reason but also its cordial dimension (cf. Cortina 2007 for details). This hermeneutical approach has significant consequences for the economic structure: (1) On the micro-level of personal action, it should be recognised that preferences are not data but are formed and modulated from agents' motivations; and also that there are both extrinsic and intrinsic motivations (in agreement with Zamagni). Not only is calculating rationality at work on this level but also communicative rationality (the search for identity, the influence of traditions etc.). This is why it is important to foster civic virtues enabling virtuous circles of good practice to be generated, in accordance with the republicans, who point out the necessity of civic virtue for a good society. Civic virtue works as the 'intangible hand' of society, whose result is harmony (Pettit 1997). (2) On the meso-level of business organisations, calculating and communicative rationalities work together. In order to be competitive, the company needs to use the modern world's own mechanisms (the market, competition and profit-seeking), but at the same time it needs to generate trust by using its 'moral resources', especially dialogue with the stakeholders (García-Marzá 2004). This means that corporate social responsibility may be extremely useful, on condition that it is not understood as a science but as a part of business ethics, that is, as a management tool, as a means of prudence and as a demand of justice to take into account those affected on local and global levels; but also that a company should behave as a 'citizen company'. (3) On the meso- and macro-levels of economic-political institutions and in the frameworks of national and global rules, representatives and experts have to work together with NGOs and especially with those affected by the economic activity. The constitutions of the respective countries and the international economic order, which contain the economy's rules of the game, require reforms which must be subject to consensus. This already contains a moral momentum, as proposals such as those of Homann and Ulrich lead us to understand.
How to implement these reforms is an open question, but it is undeniable that those affected by the norms must be taken into account in this consensus so that the Rahmenordnung can be steered by universalisable interests, and they must be taken into account in a dual sense: (a) public policies and economic norms have to be designed to empower those affected so that they can participate in decisions; (b) those affected must take part in decision-making. The places in which those affected can participate in decision-making must be institutionalised - on the local level, on a national scale, and in transnational and global spheres. A critical public opinion is not enough: the presence of those affected is required in the institutions at which decisions are taken.

5. The new challenges on the globalisation horizon

As stated above, in the nineteen-eighties economic ethics aroused expectations in Europe that have not been wholly fulfilled. In the early twenty-first century the globalisation horizon again brings up the demand for economic ethics, for at least three reasons. First of all, if globalisation is to benefit all the peoples of the earth, the aid of national states is required, but also that of two new players: companies and solidarity organisations. These three protagonists should help to structure a fair world order precisely when volatile financial markets are fluttering, when companies are offshoring to countries with cheaper labour, non-existent labour law, a lack of environmental regulations and asymmetric rules of trade. Secondly, the existence of public goods increasingly calls for a global governance going beyond national governments. Some of these goods are international stability and security, an international global order, the shared commitment to combat pockets of lawlessness and settle regional conflicts, but also an open and inclusive economic world system that meets the needs of all, and global welfare. Questions of global justice are taking on very special relevance. In this respect, one can begin to see some implementation of the Global Compact put forward by Kofi Annan as Secretary General of the United Nations, as well as the European Union's Green Paper on CSR. Thirdly, the crises of the welfare state, which started to emerge in the nineteen-seventies and have only heightened, are threatening the economic model of the social market economy, which at a certain point led Europe to create the most egalitarian society on the planet and to meet the requirements of social citizens, in the T. H. Marshall sense (Marshall 1992). Universalising social citizenship is a demand of justice on which the European Union, as a social Europe, should work if it is to be faithful to its moral identity. The models of European economic ethics thus come up against both old and new questions against which they are called to show their worth. These questions are mainly: the construction of a global economic ethics (St. Gallen School, Homann, Koslowski, Lütge), the possibility of a global governance (London School of Economics), the ethics of human development (Desmond Gasper, Sabina Alkire, Onora
11,207
2008-01-01T00:00:00.000
[ "Economics", "Philosophy" ]
Multi stratagem analysis of sentiments on twitter data using partial phrase harmonizing

Sentiment analysis is useful in business intelligence applications and recommender systems because it is a very convenient medium for the two ends of the supply chain to communicate. Numerous strategies and schemes have been used in sentiment analysis, such as natural language processing, polarity lexicons, machine learning, and psychometric scales, which support diverse types of sentiment analysis in terms of the assumptions made, the outputs a scheme reveals, and the validation datasets. Since the internet has turned into a commanding resource of reviews, the field is also referred to as Sentiment Analysis or Opinion Mining. It has seen an enormous boost in academia over the decades. Analyzing sentiment at different levels, such as word, sentence, and document, yields the sentiment polarities of articles. Consumers' sentiments are articulated in sentences as opinions, yet customary machine learning schemes cannot faithfully mirror the views of writers. This paper proposes a scheme called multi-strategy sentiment analysis with semantic resemblance to disentangle the topic with partial phrase matching. Additionally, Naïve Bayes classification is applied to estimate the probability distribution of documents over the different categories of the dataset.

Introduction

Sentiment analysis is commonly employed in opinion mining for identifying sentiments, subjectivities, and emotional states in online texts. The process has been applied to product reviews by organizing the products' attributes. At the present time, sentiment polarity analysis is utilized in an extensive range of domains, such as finance. It concentrates on examining direction-based text, that is, text that contains statements or opinions. The process of sentiment classification investigates whether a specific text is subjective or objective, and whether the text expresses positive or negative feelings. This classification method has a number of essential characteristics, including the various processes, tasks, techniques, attributes, and application domains involved. There exist many tasks in the classification of sentiment polarity. Three major characteristics of this classification are the classes, the level, and the assumptions made with respect to sentiment sources as well as targets. The typical two-class problem involves the categorization of sentiments as positive or negative; further variants include classifying messages as subjective or objective. Sentiment analysis concentrates on the specification of a user's point of view with respect to a specific area. Analyzing sentiment is contextual text mining that recognizes and extracts subjective knowledge from a source and allows a company to understand its brand and products when tracking online discussions. However, social media analysis is often confined to basic sentiment metrics; this merely scratches the surface and loses important knowledge, so the creative use of state-of-the-art AI techniques is an important method for detailed analysis. It is important for a brand to identify the following elements in the customer dialog: 1. the key product and service aspects of the brand that concern customers; 2. the fundamental interests of users and their responses to these problems.
When used in combination, these basic concepts become a really important tool for analyzing many brand conversations with human precision. Intent analysis goes further by evaluating a message's intention and figuring out whether it expresses views, news, marketing, concerns, feedback, gratitude, or inquiries. Currently, the internet is a forum for expressing opinions and exchanging experiences, not merely a source of information. Feedback about a product is normally gathered from the network of customers tweeting about it. Since it is an incredibly convenient communication platform for both ends of supply, sentiment analytics are useful in the setting of commercial intelligence and recommender systems. Various methods and techniques, such as machine learning, polarity lexicons, natural language processing, and psychometric scales, have been used in sentiment analysis, supporting different kinds of analysis, such as assumptions made, system reveals, and validation datasets. Research generally takes place at three levels (word, sentence, and document), and the majority of recent studies use the sentence and document levels. However, the word level is the fundamental one, and it is seldom considered even though it is more important and demanding. In fact, short phrases of one or two characters, as in Chinese, are the most delicate, and this feature cannot be mirrored in traditional machine learning schemes. This study therefore proposes a new hybrid sentiment analysis that fully combines the fuzzy set theory of machine learning with the polarity lexicon approach. Western thinkers began to study emotions early. First they address the propensity of words or phrases to convey feeling and quantify it as real values, which can then be used for deciding the sentiment propensity of phrases or paragraphs. Patterns of feeling have been examined with NB (Naive Bayes), ME (MaxEnt, or Maximum Entropy), and SVM (Support Vector Machine), three key machine learning algorithms for sentiment analysis; for simplicity of analysis, we choose NB and SVM. Sentiment analysis is a complex method that consists of five important phases for examining sentiment data; the sequence of the process (including iii. sentiment detection and iv. displaying output) and the various levels of sentiment analysis are depicted in Figure 1. System for Sharing Recommendations Loren Terveen [1] stated that empirical findings support the feasibility of automatic recommendation recognition. First, Usenet messages are an overwhelming source of web-resource recommendations: 23% of Usenet messages relate to web resources, and 30% of these are recommendations. Second, machine-recognized instances of recommendations have almost 90% accuracy. Third, quite a few resources are suggested by only one person, yet the reported recommendations tend to be valuable resources for the respective community. Finally, a reasonable indicator of resource quality is the number of independent recommenders of a resource: the more distinct recommenders a resource has, the more often it appears in FAQs (lists of Frequently Asked Questions compiled by human subject specialists), which serve as a comparison set for the suggested resources. Two main design principles, specialization and reuse, differentiate PHOAKS from other recommender systems. What is recommended? What is important? The fundamental principle of collaborative filtering is that people recommend objects to one another.
Usenet news readers know that this is also a convention. You can tell what a page is good for and how useful it is: PHOAKS searches for site references (URLs) and counts a mention as a recommendation only if it passes a number of tests. First, the message must not be cross-posted to too many newsgroups; messages sent to a large number of groups are so generic that they are not thematically related to any of the groups. Second, if the URL is part of a signature or signature file, it is not a recommendation. Third, if the URL occurs in the quoted portion of a previous message, it is not included. Fourth, if the textual context of the URL contains word markers indicating that it is recommended, and does not contain markers indicating that it is being marketed or promoted, it is listed as a recommendation. The categorization rules became quite complex and extended this fundamental technique to identify the different roles of web resources. The future work mentioned by the author includes the following. First, they continue the study of the relationship between the tools suggested in Usenet messages and the resources in FAQs; the temporal dimension is of particular interest to them, so that they can, for instance, assess the degree to which Usenet messages are a good predictor of FAQ content. Second, FAQs are used to boost the recommendation data system; for example, one would be prepared to use the references to a resource in FAQs from a database. They plan to combine properties of recommendations that are currently disregarded (for example, timeliness) with others (for example, long-term significance and quality). Exploiting Microblogging Social Ties for Sentiment Analysis Xia Hu and Lei Tang [2] note that microblogging services such as Twitter have become a popular medium of human expression that allows users to easily comment on news, public events, or items. The mass feelings and thoughts about different topics contained in the vast amount of microblogging data can be a valuable resource. Most existing work constructs a feature space to manage brief and short messages while ignoring the fact that microblogs are networked content. Their approach brings theories of emotional contagion into a supervised learning process and uses sparse learning to address noisy microblogging texts. An observational analysis of two real-world Twitter data sets reveals the high performance of their system for handling short noisy tweets. Microblogging sites are commonly used in various fields for exchanging knowledge and opinions. As a medium with an increasing abundance of opinion, microblogging attracts a great deal of interest from those who seek to understand individual views or to measure the overall feeling of mass populations. For example, marketers may target users who start actively discussing a brand or product in social media, and agencies around the world track developments before, during, and after a crisis to facilitate recovery and provide disaster relief. The sheer volume of knowledge in microblogs poses both opportunities and difficulties for studying such short and noisy texts. Sentiment analysis for product and film reviews, which differ significantly from microblogging data, has been extensively studied. In microblogging, a text is a few phrases or 1-2 sentences, as opposed to regular text with many terms that help collect statistics. Users also employ and invent novel acronyms, rarely used in traditional documents, when writing a microblogging post.
Consider, for example, messages such as "It's cooool" and "OMG", which are expressive and common on microblogs even though they are not well-formed words. The semantic meanings of such messages are difficult for machines to recognize precisely, but they afford people quick and instant communication. One distinct feature of microblogging is that posts are connected via user connections, which may contain useful semantic cues that cannot be found by purely text-based methods. Existing approaches do not use social-relationship information when applied directly to microblogging data. It is well known in social science that emotions and feelings play an imperative role in our social lives, including social media. When you feel emotions, you do not normally hold them in; you prefer to express them. Through verbal and postural cues, a phenomenon known in social science as emotional contagion, people often appear to pick up emotions from others. In personal relationships this can be significant, because emotional contagion "promotes behavioural synchrony and the tracking of the feelings of others, even if people do not directly attend to the details." Fowler and Christakis documented such emotional contagion by recording the spread of happiness in a social network. Figures 2.1 and 2.2 explain the phenomenon via two social processes, selection and influence: people befriend those who are similar, or become similar to their friends over time. Both explanations imply that connected individuals are likely to express similar behaviors or opinions. Inspired by these sociological findings, the authors use social-media information to support sentiment research in the context of microblogging. The purpose of their paper is to provide a supervised approach to the study of microblogging sentiment that captures the essence of a message by learning from information related to social relations. In particular, they investigated whether microblogging data supports these social theories, and then discussed how social relations can be modeled and used for supervised sentiment analysis. Proposed Method In the proposed system, as in the existing system, the data set is taken as records from an Excel worksheet with the category in the second column. Preprocessing is carried out first. Then word combinations are found and valid phrases are gathered. The conditional probability of these phrases across all categories is computed, which constitutes the Naïve Bayes classification step. In addition, synonym-word replacement is also performed. Moreover, partial phrases, for example two words in one sentence and three words in another, are treated as the same phrase during Naïve Bayes classification. The study of emotions is very critical, and the task at the word level is more difficult. The first step was therefore to construct a sentiment lexicon from which the polarities of words and phrases can be inferred. Two types of emotional sentences should be distinguished: fundamental and compound, as specified below. Simple phrases have two characters and no negation or modifiers; compound emotion phrases are sentences with more than two characters, or with negation or modifiers. The Naïve Bayes (NB) algorithm is widely used as a classifier in document categorization. In sentiment analysis, Naïve Bayes first addresses a labeled training corpus in which the sentiment polarity of every document is known.
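As a concrete illustration of this phrase-based Naïve Bayes scoring and of the partial-phrase ("missed word") matching modules detailed below, the following is a minimal Python sketch; since the paper's own code is not included here, the column layout of 'sentimentvalues.csv', the skip-gram treatment of missed-word phrases, and the add-one smoothing are all assumptions.

    import csv
    from collections import defaultdict

    def load_lexicon(path="sentimentvalues.csv"):
        # Assumed record layout: sentiment phrase, category (laptop/tablet/mobile),
        # sentiment value in [-5, +5], as described in the modules below.
        lexicon = {}
        with open(path, newline="") as f:
            for phrase, category, value in csv.reader(f):
                lexicon[(tuple(phrase.lower().split()), category)] = int(value)
        return lexicon

    def candidate_phrases(tokens):
        # Adjacent two-word and three-word phrases, plus "missed word" phrases
        # (first and third word of each trigram) to approximate partial matching.
        for i in range(len(tokens) - 1):
            yield (tokens[i], tokens[i + 1])
        for i in range(len(tokens) - 2):
            yield (tokens[i], tokens[i + 1], tokens[i + 2])
            yield (tokens[i], tokens[i + 2])

    def score_tweets(tweets, lexicon, categories=("laptop", "tablet", "mobile")):
        pos, neg = defaultdict(int), defaultdict(int)
        counts, totals = defaultdict(int), defaultdict(int)
        for tweet in tweets:
            for phrase in candidate_phrases(tweet.lower().split()):
                for cat in categories:
                    value = lexicon.get((phrase, cat))
                    if value is None:
                        continue
                    counts[(phrase, cat)] += 1
                    totals[cat] += 1
                    (pos if value > 0 else neg)[cat] += value
        # Naive-Bayes-style conditional probabilities with add-one smoothing.
        cond_prob = {key: (counts[key] + 1.0) / (totals[key[1]] + len(counts))
                     for key in counts}
        return pos, neg, cond_prob

The pos and neg totals correspond to the per-category positive and negative scores displayed after each module below, and cond_prob to the reported conditional probabilities of the phrases in the three categories.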
The classifier then analyses the probability that a document corresponds to the different classes, given the feature labels, and assigns it to the highest-probability class. Every article is parsed and sentiment words are taken from the training corpus. Then the probability of each sentiment word is determined according to Equation (1) and recorded in a probability table. Download Twitter Data Twitter data is downloaded using the 'twitter' package, in which two or more search words, such as tablet, mobile, and laptop, are given. The contents are saved in the files 'laptoptwitter.csv', 'tablettwitter.csv', and 'mobiletwitter.csv', containing the laptop, tablet, and mobile tweet posts, respectively. Preprocess Twitter Data In this phase the Twitter data is preprocessed using the 'tm' package, in which stemming, stop-word removal, and URL-link removal are carried out, and all words are converted to lower case. Sentiment Words File Creation Here, a .csv file is created in which sentiment phrase, category, and sentiment value are added as records. The category is one of laptop, tablet, and mobile, and the sentiment value runs from -5 to +5 based on importance. Two Adjacent Word Phrase Combination First, the Twitter data is converted into two-word phrases: the first and second words form one phrase, the second and third words the next phrase, and so on for all tweets. These phrases are checked against the sentiment value records taken from 'sentimentvalues.csv' created in the previous module. If a phrase matches a sentiment phrase, the sentiment value of the corresponding category is taken and added. For all tweets, the positive and negative scores of the mobile category are computed and displayed; the tablet and laptop categories are prepared likewise. Then the conditional probabilities of these phrases in all three categories are computed and displayed. Three Adjacent Word Phrase Combination The Twitter data is also converted into three-word phrases, namely the first, second, and third words, for all tweets; the next phrase consists of the second, third, and fourth words, and so on. These phrases are checked against the sentiment value records taken from 'sentimentvalues.csv' created in the previous module. If a phrase matches a sentiment phrase, the sentiment value of the corresponding category is taken and added. For all tweets, the positive and negative scores of the mobile category are computed and displayed; the tablet and laptop categories are prepared likewise. Then the conditional probabilities of these phrases in all three categories are computed and displayed. Missed Word Phrase Combination Here phrases are formed by deleting the middle word from the phrases of the previous module. These phrases are checked against the sentiment value records taken from 'sentimentvalues.csv' created in the previous module. If a phrase matches a sentiment phrase, the sentiment value of the corresponding category is taken and added. For all tweets, the positive and negative scores of the mobile category are computed and displayed; the tablet and laptop categories are prepared likewise. Then the conditional probabilities of these phrases in all three categories are computed and displayed. Conclusion This paper has proposed a new approach for measuring the polarities and strengths of sentimental sentences, which can also be used with partial sentence matching to evaluate the semantic similarity of sentences. In contrast with traditional approaches, which use a single nominal value for the polarity of sentimental sentences, it uses a probability value.
It proposes a multi-strategy sentiment analysis scheme based on the polarities and strengths of individual words, and it handles adversative conjunctions, particularly in the NB-based scheme. The system can be used to evaluate the sentiment of documents, and the approach was shown to be feasible and efficient. Future work will address how emoticon images and Unicode characters can be mapped to their closest matching sentiments.
3,817.8
2021-02-01T00:00:00.000
[ "Business", "Computer Science" ]
Multi-rogue wave solutions for a generalized integrable discrete nonlinear Schrödinger equation with higher-order excitations In this paper, we construct the discrete higher-order rogue wave (RW) solutions for a generalized integrable discrete nonlinear Schrödinger (NLS) equation. First, based on the modified Lax pair, the discrete version of the generalized Darboux transformation is constructed. Second, the dynamical behaviors of the first-, second-, and third-order RW solutions are investigated with respect to the unique spectral parameter, the higher-order term coefficient, and the free constants. The differences between the RW solutions of the higher-order discrete NLS equation and those of the Ablowitz-Ladik (AL) equation are illustrated in figures. Moreover, we carry out numerical experiments, which demonstrate that strong-interaction RWs are more stable than weak-interaction RWs. Finally, the modulation instability of continuous waves is studied. Introduction Rogue waves have been found in many fields, such as nonlinear optics, fluid mechanics, and even finance [1][2][3]. Many nonlinear evolution equations, including the NLS equation, the Kundu-Eckhaus equation, the Hirota equation, the Sasa-Satsuma equation, the nonlinear wave equation, and so on, can describe RW phenomena [4][5][6][7][8][9][10]. As a basic model describing optical soliton propagation in Kerr media, the NLS equation possesses multi-soliton solutions, breather solutions, and RW solutions [5,11,12,13]. However, in the regime of ultra-short pulses, the NLS equation cannot accurately describe the phenomena, and higher-order nonlinear and dispersive terms must be taken into account [6][7][8][9][14]. In discrete integrable systems, the RW solutions of the AL equation, the coupled discrete NLS equation, and the discrete Hirota equation have also been discussed based on the generalized Darboux transformation (DT) and the Hirota bilinear method [15][16][17][18]. There are great differences in RWs between continuous and discrete integrable systems. Ohta and Yang point out that RWs can exist in the defocusing Ablowitz-Ladik equation [17]. As is well known, the higher-order NLS equation known as the Lakshmanan-Porsezian-Daniel (LPD) equation [19], iq_t + q_{xx} + 2|q|^2 q + \gamma\left(q_{xxxx} + 8|q|^2 q_{xx} + 6q^* q_x^2 + 4q|q_x|^2 + 2q^2 q^*_{xx} + 6|q|^4 q\right) = 0, (1) is the third member of the NLS hierarchy. Here q is a varying wave-packet envelope, q^* denotes the complex conjugate of q, and γ is a real parameter that stands for the strength of the higher-order linear and nonlinear effects. Equation (1) can also describe the dynamics of higher-order alpha-helical proteins with nearest and next-nearest neighbour interactions [20,21]. This equation has attracted great attention. In Refs. [19,22], the authors establish the relation between the higher-order NLS equation and one-dimensional Heisenberg ferromagnetic chains when higher-order spin-spin exchange interactions (of biquadratic type) and the effect of discreteness are considered. The integrability of Eq. (1), including its singularity structure and the construction of its Lax pair and Bäcklund transformation, has been discussed in detail in Ref. [22]. The one-soliton solution of Eq. (1) has been constructed [20] by using the Hirota method. Multi-soliton solutions obtained using the DT are presented in [23]. Rogue waves for the three-coupled fourth-order NLS system are studied in [24]. Besides, Eq. (1) can be regarded as a special case of an integrable three-parameter fifth-order nonlinear Schrödinger equation [25,26].
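A quick symbolic check of Eq. (1) is to substitute a plane wave q = c e^{i(kx+ωt)} and recover the continuous-wave dispersion relation. The following minimal sympy sketch assumes the standard LPD grouping of the γ terms shown above; it is an illustration, not the authors' derivation.

    import sympy as sp

    x, t, k, w, c, g = sp.symbols("x t k omega c gamma", real=True)
    q = c * sp.exp(sp.I * (k * x + w * t))      # plane-wave ansatz
    qs = sp.conjugate(q)                        # q*
    qx = sp.diff(q, x)

    # Left-hand side of Eq. (1) with the gamma bracket as written above.
    lhs = (sp.I * sp.diff(q, t) + sp.diff(q, x, 2) + 2 * q * qs * q
           + g * (sp.diff(q, x, 4) + 8 * q * qs * sp.diff(q, x, 2)
                  + 6 * qs * qx**2 + 4 * q * qx * sp.conjugate(qx)
                  + 2 * q**2 * sp.diff(qs, x, 2) + 6 * (q * qs)**2 * q))

    dispersion = sp.solve(sp.simplify(sp.expand(lhs / q)), w)[0]
    print(sp.expand(dispersion))
    # Expected (up to term ordering): 2*c**2 - k**2 + gamma*(6*c**4 - 12*c**2*k**2 + k**4);
    # its k = 0 value, 2*c**2 + 6*gamma*c**4, is the frequency of the constant background.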
Rational solutions, breather solutions, rogue waves, and the modulation instability of this integrable three-parameter fifth-order nonlinear Schrödinger equation have been analytically studied based on the DT and the robust inverse scattering transform [27,28]. The corresponding rational solutions and breather solutions of Eq. (1) can be obtained under certain constraints. In this article, we focus on a spatial discretization [20] of the integrable higher-order NLS equation (1), denoted Eq. (2), which can govern the discrete α-helical protein chain model with several higher-order excitations and interactions. Under a continuum transformation, the higher-order integrable discrete NLS equation (2) yields the integrable fourth-order NLS equation (1). Reference [20] investigates the integrability of Eq. (2), including its Hamiltonian, discrete Lax pair, discrete solitons, and gauge equivalence. However, as far as we know, there is little work on the rogue wave solutions and breather solutions of this higher-order integrable discrete NLS equation (2). This is the main motivation for us to investigate the higher-order RWs of the discrete integrable NLS equation (2) with higher-order excitations in this paper. Moreover, it is very meaningful to study other integrable properties of the higher-order integrable discrete NLS equation (2). We shall give an insight into the continuous-limit theory of the higher-order integrable discrete NLS equation (2), including its discrete DT, discrete rational solutions, discrete breather solutions, and gauge equivalence, in the future. The paper is organized as follows. In Sect. 2, by using the modified discrete Lax pair, we apply the generalized (1, N−1)-fold Darboux transformation [5,15] to construct the higher-order discrete RW solutions of Eq. (2). The dynamical behaviors of these discrete RWs are discussed in Sect. 3, which exhibit interesting wave structures. Finally, in Sect. 4 the modulation instability of continuous-wave states of the higher-order discrete NLS equation (2) is investigated. Lax pair and generalized discrete DT The higher-order discrete NLS equation (2) admits a discrete modified Lax pair in which the shift operator E is defined by Eφ_n = φ_{n+1}, and φ_n = (φ_{n,1}, φ_{n,2})^T is the vector eigenfunction. The matrices U_n and V_n depend on the spectral parameter λ. One can directly verify that the discrete zero-curvature condition U_{n,t} = (EV_n)U_n − U_nV_n of the linear spectral equations (4) yields the generalized integrable discrete NLS equation (2). Following the idea in [29], the Darboux transformation of the higher-order discrete NLS equation (2) can be obtained. Under the gauge transformation, the linear spectral problem (4) changes to a new one, and the matrices Ũ_n and Ṽ_n satisfy relations of the same form. The relation between the new potential q̃_n[N] and the potential q_n then follows, where the expressions of Δ_1[N] and Δ_2[N] can be derived by substituting the corresponding column vector for the first and second columns of Δ[N], respectively. Next, we construct the generalized (1, N−1)-fold DT for the higher-order discrete NLS equation (2). The generalized (1, N−1)-fold DT involves a single spectral parameter λ = λ_1 and eigenfunction derivatives up to order N−1. Using a method similar to that of Refs. [5,18], we obtain a generalized (1, N−1)-fold DT for the higher-order discrete NLS equation (2).
In particular, we consider the eigenfunction solution of the Lax pair (4) with seed solution q_0(n, t) = c e^{iφt}, where C_j (j = 1, 2) are arbitrary complex parameters (here C_1 = 1, C_2 = 0), d_k and f_k are free real parameters, and ε is a small parameter. We fix the spectral parameter λ = λ_1 + ε^2 with λ_1 = √(1 + c^2) ± c in Eq. (11) and expand the eigenfunction φ(λ) into a Taylor series at ε = 0. We then obtain a generalized DT for the higher-order discrete NLS equation (2), in which Δ_1[N] and Δ_2[N] are described by Δ[N] with the first and second rows changed to (λ^{N+1}φ_1, ..., φ_1), respectively. RW solutions and dynamic behaviors Case 1: first-order RW solutions For N = 1, the solution (13) reduces to the first-order form. For convenience, choose h = 1 and c = 3/4, corresponding to λ_1 = 1/2; we then obtain the first-order RW solution (15) of Eq. (2). Note that q_{n+n_0}[1] is also a solution for an arbitrary real shift n_0, and this translational property carries over to the higher-order RW solutions below. Next we illustrate the properties of the first-order RW solution (15). By analyzing the explicit formula of q_{n+n_0}[1], we find that the parameter γ has no effect on the amplitude of the first-order RW solution (15). The maximum amplitude of |q_n[1]| is 63/16 at the point (n_1, t_1) = (0, 0) with the shift n_0 = 1/3, which is an on-site RW (see Fig. 1a); the minimum amplitude attains 0 at two sites. Moreover, we find that a lower peak amplitude of the first-order RW is reached at two adjacent lattice sites when n_0 = 5/6, which is called an inter-site RW (see Fig. 1b). Through detailed calculation, we find that, compared with the fundamental RW solution of the AL equation [17], the first-order RW of the higher-order discrete NLS equation (2) has an identical amplitude but different center points on the same background wave plane. Next, we consider the effect of the higher-order term coefficient γ and the spectral parameter λ on the RWs. Figure 2a, b, c show that the first-order RWs become narrower as the nonlinear term parameter γ increases, but the peak does not change. As γ → ∞, the first-order RWs concentrate their energy; conversely, when γ → 0, Eq. (2) reduces to the AL equation, and the RWs approach the fundamental RWs of the AL equation. Altering the parameter λ, we see that the amplitudes of the first-order RWs increase as the spectral parameter λ increases (see Fig. 2d, e). Numerical simulation with random noise is an effective method to test the stability of the system [30,31]. In what follows, we study the dynamical behaviors of the first-order RW solutions by numerical simulation of Eq. (2) with exact and perturbed initial conditions. Figure 3a is the exact first-order RW solution (15). Figure 3b, c are the profiles of the numerical simulation, which exhibit the time evolution of the RWs at t ∈ (−1, 1) using, respectively, the exact initial condition and the initial solution perturbed by random noise of 2% amplitude. The corresponding results show that the numerical simulations of the first-order RW solution agree well with the exact RW solution (15), apart from a little weak oscillation near the edges in the perturbed case. Case 2: second-order RW solutions For the second-order RW solutions (17), two regimes arise: • For the case d_1 = f_1 = 0, strong interaction occurs: the RWs have four minimum points and five local maxima, including the highest peak at the center of the wave packets (see Fig. 4a).
• For the case d_1 ≠ 0 or f_1 ≠ 0, the second-order RWs split into three first-order RWs whose centers form a rotating triangle, and the whole profile has three local maxima and six minimum points (see Fig. 4b, c). Moreover, adjusting the parameters freely, we find that the area of the triangle increases as |d_1| or |f_1| increases, and |f_1| can control the rotation of the triangular RWs. Next, we examine the dynamical properties of the second-order RWs by numerical simulation. Figure 5a, d are exact second-order RW solutions with different parameters d_1 and f_1. Figure 5b, c show that the numerical simulation of the strong interaction (i.e., d_1 = 0, f_1 = 0) agrees well with the exact solution except for weak oscillations at t > 0.4 (see Fig. 5c). For the weak-interaction case (i.e., d_1 = 10, f_1 = 0), we find that the wave propagation can also match the exact solution well. However, if we add random noise (2%) to the initial solution, the weak interaction displays serious oscillations after the time exceeds 0.2, which may be due to its more dispersed energy distribution [17]. Case 3: third-order RW solutions When N = 3, using formula (13) with the special spectral parameter λ = 7/4 and c = 33/56, the third-order discrete RW solution (18) is obtained, in which Δ[3]_1 and Δ[3]_2 follow from Δ[3] with the first and second rows replaced by the corresponding eigenfunction-derivative vectors (φ_1[4,0], ...), respectively. (Fig. 5 caption: The second-order RW solutions (17). a and d: exact solutions with d_1 = f_1 = 0 and d_1 = 10, f_1 = 0, respectively; b and e: numerical simulations using the exact solutions (17) at t = −1 as initial conditions; c and f: numerical simulations obtained by adding random noise with amplitude 2% to the exact solutions (17) as initial conditions.) The exact expression of the third-order RW solution is so cumbersome that we omit it here; we give only its structural analysis corresponding to the four parameters (d_{1,2}, f_{1,2}). • For the case d_1 = 10, d_2 = f_{1,2} = 0, the third-order RWs split into six first-order RWs, which form a triangular pattern (see Fig. 6b); • For the case d_2 = 10, d_1 = f_{1,2} = 0, the third-order RWs also split into six first-order RWs, which array into a rotating pentagon pattern with one first-order RW located at the center (see Fig. 6c). Now we study the dynamical behaviors of the third-order RWs (18) by numerical simulation. Here, we consider only the strong-interaction case (see Fig. 6a) and the weak-interaction case (see Fig. 6b). Figure 7a, b show that the strong interactions of the third-order RWs almost agree with the exact solution (18). If a small noise is added to the exact solution (18) in the strong-interaction case, the wave propagation behaves well except for a small bulge around the edges (see Fig. 7c), but the amplitude is obviously lower than those of the exact solution and the unperturbed simulation (see Fig. 7a, b). On the other hand, whether or not we add noise to the initial condition, the wave propagation of the weak-interaction third-order RWs displays strong oscillations (see Fig. 7d-f). We infer that the dispersed energy of the third-order RWs can more easily lead to disorder than in the strong-interaction case. Modulation instability Assume that the system (21) admits a complex solution of the form (q_{1,n}(t), q_{2,n}(t)) = (q^{(0)}_{1,n}, q^{(0)}_{2,n}) e^{gt+ikn}, where g is the MI gain and k is an arbitrary real wavenumber. We point out that MI takes place when expression (24) is positive: the MI condition g^2 > 0 holds when cos k > (1 − c^2)/(1 + c^2) and k ≠ 2mπ, m ∈ ℤ.
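The noise-perturbation experiments used throughout this section, and the way MI amplifies small perturbations of the plane wave, can be reproduced qualitatively with a simple integrator. The sketch below uses the focusing Ablowitz-Ladik lattice, i.e. the γ → 0 limit of Eq. (2) mentioned earlier, as a stand-in, since Eq. (2) is not reproduced in full in this excerpt; the plane-wave initial state, the 2% noise level, the RK4 stepping, and the periodic boundaries are assumptions.

    import numpy as np

    def al_rhs(q):
        # Focusing Ablowitz-Ladik lattice:
        # i dq_n/dt + (q_{n+1} + q_{n-1} - 2 q_n) + |q_n|^2 (q_{n+1} + q_{n-1}) = 0
        qp = np.roll(q, -1)   # q_{n+1} (periodic boundary, an assumption)
        qm = np.roll(q, 1)    # q_{n-1}
        return 1j * (qp + qm - 2.0 * q + np.abs(q) ** 2 * (qp + qm))

    def rk4_step(q, dt):
        k1 = al_rhs(q)
        k2 = al_rhs(q + 0.5 * dt * k1)
        k3 = al_rhs(q + 0.5 * dt * k2)
        k4 = al_rhs(q + dt * k3)
        return q + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Plane-wave background of amplitude c perturbed by 2% random noise,
    # mimicking the stability tests around Figs. 3, 5, and 7.
    rng = np.random.default_rng(0)
    N, c, dt, steps = 101, 0.75, 1e-3, 2000
    q = c * (1.0 + 0.02 * rng.standard_normal(N)).astype(complex)
    for _ in range(steps):
        q = rk4_step(q, dt)
    print("max |q_n| at t = %.1f: %.3f" % (steps * dt, np.abs(q).max()))

Growth of the maximum amplitude well above c signals the instability of the background; repeating the run with the noise term removed leaves the plane-wave amplitude essentially unchanged.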
Figure 8 shows that the growth rate g(k) becomes larger as the amplitude c increases; meanwhile, for the fixed amplitude c = 2, g_max(k) also becomes larger as the parameter γ increases. Conclusions In this paper, we have studied a higher-order integrable discrete NLS equation by means of the generalized discrete (1, N − 1)-fold Darboux transformation. The discrete higher-order RW solutions are given in terms of determinants. We have analytically studied the dynamical behaviors of the discrete RW solutions, which exhibit abundant patterns, including on-site, inter-site, rotating-triangle, and pentagon structures. Compared with the discrete NLS (AL) equation, the first-order RWs of the integrable higher-order discrete NLS equation have identical peaks but different center points for the same plane-wave amplitude. We also find that the nonlinear term parameter γ can control the energy density of the RWs. By means of numerical simulation of the time evolution of the RW solutions, we reveal that strong-interaction RWs are more stable during wave propagation than the weak-interaction case. Finally, the modulation instability condition of the background wave solutions is given.
3,591
2021-06-18T00:00:00.000
[ "Mathematics" ]
Effect of a novel piperazine compound on cancer cells Many drugs have been developed for anticancer chemotherapy. However, more anti-cancer drugs should be developed from potential chemicals to circumvent the disadvantages of existing drugs. Most anti-cancer chemicals induce apoptosis in cancer cells. This study tested the efficiency of a new chemical, the piperazine derivative 1-[2-(Allylthio)benzoyl]-4-(4-methoxyphenyl)piperazine (CB01), on glioblastoma (U87) and cervical cancer (HeLa) cells. CB01 was highly cytotoxic to these cells (IC50s < 50 nM) and induced the traditional apoptotic symptoms of DNA fragmentation and nuclear condensation at 40 nM. Western blot analysis of the cell lysates revealed that intracellular apoptotic marker proteins, such as cleaved caspase-3, cytochrome c, and Bax, were highly upregulated in the CB01-treated cells. Furthermore, increased activities of caspase-3 and -9, but not caspase-8, were observed. Therefore, these results suggest that CB01 can act as an anticancer chemotherapeutic by stimulating the intrinsic mitochondrial signaling pathway to induce cytotoxicity and apoptosis in cancer cells. Introduction Piperazine is a chemical compound in which two of the six carbons facing each other in a hexagonal ring are replaced with nitrogen. It is an important compound widely used to develop therapeutics against various diseases owing to its availability, simple processing, and high yield [1][2][3]. Currently, there are several piperazine-based therapeutics, such as anti-depressants [4,5] as well as anti-viral [6,7], anti-inflammatory [8][9][10][11], and antioxidant [12,13] agents. Given this high performance of piperazine, several academic and industrial research institutes have developed new piperazine derivatives as therapeutic agents [14,15]. Recently, piperazine backbones synthesized via rational drug design, such as piperazine-linked bisanthrapyrazole, 5-hydroxychromenone piperazine, and quinazoline-linked substituted piperazine, have shown excellent performance as anti-cancer drugs [1,16,17]. These compounds induce apoptosis and suppress the proliferation of cancer cells in the erythroleukemic A562 cell line, epidermal cervical cancer cells, and lung cancer cells, respectively [1,18]. Apoptosis is a natural and necessary process that maintains homeostasis in diverse organisms. Defects in apoptosis cause immunodeficiency and genetic and autoimmune problems, eventually leading to cancer. Apoptosis is known to occur via two different pathways: the extrinsic pathway (also called the "death receptor pathway") and the intrinsic pathway (which occurs through mitochondria). However, these two pathways are connected, and the factors of one pathway can affect the other. The morphological changes associated with apoptosis include cell contraction, chromatin condensation, and nuclear and DNA fragmentation. Many currently used chemotherapeutics suppress cancer-cell survival by inducing apoptosis via the two main signaling pathways mentioned above [19,20]. This study assessed the anti-cancer effect of a recently synthesized piperazine compound, 1-[2-(Allylthio)benzoyl]-4-(4-methoxyphenyl)piperazine (CB01) (Fig. 1). This compound was identified as toxic to non-small cell lung cancer by Dr. SH Hong. MTT assay The MTT assay was performed using a common protocol [21,22]. Briefly, cells were seeded in 96-well plates (50 µl of 4 × 10^4 cells/well). After 4 h, 50 µl of fresh medium containing CB01 at the indicated dose was added.
After 48 h, MTT solution at 0.1 mg/ml was added to each well, and the samples were incubated for 4 h. After removing the supernatant, 200 µl of 100% DMSO was added to each well, and the samples were incubated at room temperature (25 °C) for 10 min. Finally, the absorbance of the samples at 595 nm was measured using a microplate reader. The experiment was repeated three times, each in triplicate. LDH cytotoxicity assay The LDH assay was performed using the D-Plus™ LDH Cell Cytotoxicity Assay Kit (Dongin LS Biotech, Seoul, Korea). Briefly, 50 µl of 2.5 × 10^4 cells per well were inoculated into a 96-well plate. After 24 h of culture, 50 µl of medium with CB01 at the indicated concentration was added to each well. After 48 h, the cells floating in the culture were precipitated via centrifugation at 600g for 5 min. The control group was treated with 10 µl of the lysis buffer from the kit before the centrifugation, and 10 µl of each supernatant was transferred to a different well in a new 96-well plate. Afterward, 100 µl of the LDH reaction mixture, composed of LDH buffer and water-soluble tetrazolium salt (WST) substrate at a 1:50 ratio, was added. The plate was incubated at 20-30 °C for 30 min, and the absorbance of the samples at 450 nm was measured using a microplate reader. The experiment was repeated three times, each in triplicate. Apoptotic DNA-fragmentation assay Total DNA of low molecular mass was extracted from the cells as described previously [23]. First, the cells were grown in 60 mm plates. After 24 h, CB01 was administered at 10 or 40 nM, and the cells were cultured for 48 h. Afterward, they were washed with phosphate-buffered saline (PBS) and lysed using ice-cold lysis buffer [0.2% Triton X-100, 10 mM Tris (pH 7.5), and 10 mM EDTA]. The lysates were incubated on ice for 40 min. A positive control was prepared using 5 µM camptothecin (CPT). The lysates were centrifuged at 10,000g at 4 °C for 30 min, and the harvested supernatants were treated with buffered phenol and buffered chloroform-isoamyl alcohol (24:1, v/v). DNA was ethanol-precipitated and then dissolved in TE buffer [10 mM Tris (pH 7.5) with 1 mM EDTA] containing 50 µg/ml RNase A. The DNA samples were analyzed by electrophoresis on 1.2% agarose gels. Evaluation of nuclear morphology The morphological changes in the nuclei of the cells treated with CB01 were examined by DAPI staining as described before [21]. U87 and HeLa cells were seeded in 35 mm plates (1.6 × 10^5 cells) and treated with 40 nM CB01. After 48 h of culture, the medium was removed, and the cells were washed three times with PBS. Next, they were fixed for 20 min at 25 °C with 4% formaldehyde containing 0.1% Triton X-100. The cells were then stained for 1 h at 37 °C with 300 µM DAPI diluted in PBS (1:100, v/v). The stained cells were observed using a Nikon fluorescence microscope (TE 2000 U; Tokyo, Japan) with ultraviolet (UV) excitation at 300-500 nm. Caspase activity assay The activities of caspase-3, -8, and -9 were examined using caspase-3, -8, and -9 colorimetric assay kits (Promega; Biovision, USA) as described before [21,23]. Cells were treated with 40 nM CB01 for 0, 24, or 48 h. They were then lysed using 50 µl of lysis buffer, and the supernatants of the lysates were collected by centrifugation at 10,000g for 5 min. Subsequently, the total protein concentration of each supernatant was quantified using the Bradford assay.
Then, a sample (3 µl) of each lysate was mixed with the 2× buffer from the assay kit to reach a total volume of 50 µl, and 4 mM DEVD-pNA substrate from the assay kit was added. After incubating the samples at 37 °C for 1.5 h, their absorbance at 405 nm was measured. Western blotting Western blotting was performed as described before [24]. Cells seeded in 60 mm plates were treated with 40 nM CB01 and incubated for 48 h. The cells were then lysed using radioimmunoprecipitation assay (RIPA) buffer [0.1% SDS, 50 mM Tris-HCl (pH 7.4), 0.5% sodium deoxycholate, and 150 mM NaCl]. The lysates were centrifuged at 20,000g for 15 min at 4 °C, and the total protein concentration of each supernatant was measured using the Bradford assay. The proteins in each lysate sample were separated via sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) on a 12.5% gel at 130 V for 1.5 h and then transferred onto nitrocellulose membranes (GE Healthcare UK Ltd., Hammersmith, UK) at 32 mA for 1.5 h using semi-dry transfer equipment (Hoefer, Inc., Holliston, MA, USA). The membranes were blocked with blocking agent [5% (w/v) non-fat dry milk and 0.1% (w/v) Tween 20 in PBS] for 2 h at 4 °C. Lastly, the membranes were probed overnight with monoclonal antibodies against apoptosis-associated proteins [1:1,000 dilution in PBS with Tween 20 (PBST)]. After washing the membranes three times with PBST, they were incubated for 1 h at 25 °C with goat anti-mouse IgG conjugated to horseradish peroxidase (1:5,000 dilution in PBST; Abcam). Afterward, the membranes were washed again with PBST and developed using a developer kit (BioFACT, Daejeon, Korea). As an internal control, β-actin was probed with a mouse monoclonal antibody (1:5,000 dilution; Thermo Fisher Scientific, Waltham, MA, USA). Statistical analysis All the data are presented as mean ± SEM. Statistical significance was assessed using the paired-sample t test; *p < 0.05, **p < 0.01, and ***p < 0.001. CB01 induces the apoptotic symptoms of DNA and nuclear fragmentation To determine whether the cytotoxicities observed in Fig. 2 are related to apoptosis, CB01-treated cells were evaluated for DNA and nuclear fragmentation, which are indicators of apoptosis. Specifically, DNA fragmentation was observed in HeLa and U87 cells treated with 40 nM CB01 (Fig. 3A). In this experiment, 5 µM camptothecin (CPT), an alkaloid well known to cause apoptosis by selectively inhibiting DNA topoisomerase type I, was used as a positive control [26]. CB01 induced DNA fragmentation very clearly at 40 nM. In addition, DAPI staining revealed that CB01 caused nuclear fragmentation in U87 and HeLa cells (Fig. 3B). CB01 upregulates the major apoptotic proteins Caspases are known as the core enzymes of apoptosis [27]. The levels of apoptotic markers in CB01-treated cells were assessed to determine whether the CB01-induced cytotoxicity is associated with caspases. Western blot analysis was used to assess apoptotic marker proteins in cells treated with 40 nM CB01 for 48 h. Three proteins that play important roles in initiating apoptosis were examined: activated Bax, which permeabilizes the mitochondrial outer membrane; cytochrome c, which is released from mitochondria; and the caspases, the key enzymes of apoptosis, which become activated. Elevated levels of these core apoptotic proteins, namely cytochrome c, Bax, and cleaved caspase-3 (the active form), were observed in the cells treated with 40 nM CB01 for 48 h (Fig. 4).
These data demonstrate that CB01 causes cytotoxicity through the induction of apoptosis. The caspase inhibitor Z-VAD-FMK suppressed the CB01-induced apoptosis The two known apoptosis pathways commonly lead to the activation of caspase-3, and thus caspase-3 activation serves as direct evidence of apoptosis [23]. We observed significantly elevated caspase-3 activities in the lysates of CB01-treated U87 and HeLa cells compared with the levels in the untreated controls. In addition, these CB01-induced increases in caspase-3 activity were suppressed by the pan-caspase inhibitor Z-VAD-FMK, which binds irreversibly to the cleavage site of a caspase so that the caspase cannot be cleaved into its activated form [28]. Collectively, our results suggest that CB01 selectively induces the activation of caspase-3 in U87 and HeLa cells (Fig. 5). CB01 induces apoptosis via the intrinsic pathway Apoptosis occurs through either an extrinsic or an intrinsic pathway. The extrinsic pathway is activated through ligand-binding interactions at extracellular surface receptors, whereas the intrinsic pathway, also called the mitochondrial pathway, is activated through intracellular signals within the mitochondrial inter-membrane space [29]. The caspase-3, -8, and -9 activities were investigated using a colorimetric assay to determine which apoptotic pathway is induced in CB01-treated cells. The activities of caspase-3 and -9 in cells treated with 40 nM CB01 increased over time; however, the caspase-8 activities in these cells were unaffected (Fig. 6). Thus, CB01 appears to cause apoptosis via the intrinsic pathway. Discussion Several anticancer drugs are currently used in chemotherapy. Chemotherapy has the benefit of being applicable irrespective of the cancer stage. Over the years, tremendous medical advances have been made in comprehending cancer biology and devising targeted chemotherapy [30]. Novel therapeutic chemicals and delivery approaches with powerful effects on tumors relative to healthy tissues are continuously being adopted in clinics [31]. Although the efficacy of existing chemotherapeutics is not without controversy, there is a growing consensus that their anti-cancer effects are partly due to their ability to induce apoptosis. The clinical use of these drugs is time-consuming and expensive; thus, economical and efficient anti-cancer drugs are needed [32]. Piperazine derivatives are a class of chemical compounds that contain piperazine as the key functional group and possess many pharmacological properties. Several studies on many piperazine derivatives have been reported, and the results of several relevant clinical trials are encouraging [4,6,8]. Because the piperazine skeleton can easily be combined with various structures, promising and economical anticancer drugs may be developed from it in the future [1]. The present study found that the synthesized piperazine derivative CB01 is effective in killing U87 and HeLa cells by inducing apoptosis accompanied by mitochondrial changes. This study is the first to explore, discover, and reveal the mechanism of action of CB01, a new anticancer substance, and is expected to contribute greatly to the discovery of new anticancer substances and the promotion of cancer research in Korea. Furthermore, it is expected that a new cancer treatment strategy will be established based on the mechanism of the DNA damage response to CB01: administering CB01 alongside radiation therapy and chemotherapy, treatments that induce DNA damage, could increase the efficiency of chemotherapy. Given the modifiability of piperazine, its basic structural properties can be used to devise a supplementary alternative for the treatment of solid tumors, particularly of the breast, pancreas, colon, and lung. Fig. 4 Increased expression of apoptotic marker proteins in CB01-treated cells. U87 and HeLa cells were cultured in 60 mm plates with 0 or 40 nM CB01. After 48 h, cell lysates were prepared and analyzed via western blotting using antibodies against apoptotic marker proteins, such as cleaved caspase-3, cytochrome c, and Bax. β-Actin was used as the internal loading control; equal quantified amounts of protein were loaded, so the expression of the three key apoptotic proteins in the presence or absence of CB01 was confirmed against the same amount of protein used to confirm β-actin expression. Fig. 5 Suppression of CB01-induced caspase-3 activity by the pan-caspase inhibitor Z-VAD-FMK. U87 and HeLa cells were co-treated with 50 µM Z-VAD-FMK and 40 nM CB01 or 5 µM CPT for 48 h. Then, cell lysates were prepared, and their total protein concentrations were quantitated using the Bradford assay. The caspase-3 activity in each lysate was assessed using a colorimetric assay kit. Fig. 6 Comparison of the caspase-3, -8, and -9 activities. U87 and HeLa cells were treated with 40 nM CB01. Cell lysates were prepared after 24 h or 48 h, and their total protein concentrations were quantitated using the Bradford assay. The caspase-3, -8, and -9 activities of each lysate were measured using colorimetric assay kits. The samples were incubated at 37 °C for 2 h, and their absorbance at 405 nm was measured. All the data are expressed as mean ± SEM. Statistical significance was evaluated using the paired-sample t test; ns = p > 0.05, *p < 0.05, **p < 0.01, and ***p < 0.001
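For reference, the statistical treatment described under Statistical analysis (mean ± SEM with paired-sample t tests and significance stars) can be reproduced in a few lines of Python; the triplicate readings below are invented purely for illustration.

    import numpy as np
    from scipy import stats

    def mean_sem(values):
        # Mean +/- SEM, as reported throughout the figures.
        values = np.asarray(values, dtype=float)
        return values.mean(), values.std(ddof=1) / np.sqrt(len(values))

    def stars(p):
        # Significance convention used in the paper.
        if p > 0.05:
            return "ns"
        return "*" if p > 0.01 else ("**" if p > 0.001 else "***")

    # Hypothetical triplicate absorbance readings (control vs 40 nM CB01).
    control = [0.82, 0.79, 0.85]
    treated = [0.41, 0.38, 0.44]

    t_stat, p_value = stats.ttest_rel(control, treated)   # paired-sample t test
    m, sem = mean_sem(treated)
    print("treated: %.3f +/- %.3f (mean +/- SEM)" % (m, sem))
    print("paired t test: t = %.2f, p = %.4f (%s)" % (t_stat, p_value, stars(p_value)))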
3,466.4
2021-11-17T00:00:00.000
[ "Biology", "Chemistry" ]
Structural basis for voltage-sensor trapping of the cardiac sodium channel by a deathstalker scorpion toxin Voltage-gated sodium (NaV) channels initiate action potentials in excitable cells, and their function is altered by potent gating-modifier toxins. The α-toxin LqhIII from the deathstalker scorpion inhibits fast inactivation of cardiac NaV1.5 channels with IC50 = 11.4 nM. Here we reveal the structure of LqhIII bound to NaV1.5 at 3.3 Å resolution by cryo-EM. LqhIII anchors on top of voltage-sensing domain IV, wedged between the S1-S2 and S3-S4 linkers, which traps the gating charges of the S4 segment in a unique intermediate-activated state stabilized by four ion-pairs. This conformational change is propagated inward to weaken binding of the fast inactivation gate and favor opening of the activation gate. However, these changes do not permit Na+ permeation, revealing why LqhIII slows inactivation of NaV channels but does not open them. Our results provide important insights into the structural basis for gating-modifier toxin binding, voltage-sensor trapping, and fast inactivation of NaV channels. Introduction Eukaryotic voltage-gated sodium (NaV) channels generate the inward sodium current that is responsible for initiating and propagating action potentials in nerve and muscle 1,2. The sodium current is terminated within 1-2 milliseconds by fast inactivation 1,2. A wide variety of neurotoxins bind to six distinct receptor sites on NaV channels and modify their function 3,4. α-Scorpion toxins and sea anemone toxins bind to Neurotoxin Receptor Site 3, dramatically inhibit fast inactivation of NaV channels, and cause prolonged and/or repetitive action potentials [3][4][5]. Scorpions utilize these toxins in their venoms to immobilize prey by inducing paralysis and causing cardiac arrhythmia 4,6-8. Because of their high affinity and specificity, scorpion toxins are used widely to study the structure and function of NaV channels. α-Scorpion toxins bind to the voltage sensor (VS) in domain IV (DIV), which is important for triggering fast inactivation [9][10][11][12][13]. Therefore, structures of the high-affinity complexes of α-scorpion toxins and NaV channels will provide critical information for understanding the structural basis for toxin binding, voltage-sensor trapping, and fast inactivation. Eukaryotic NaV channels contain four homologous, nonidentical domains composed of six transmembrane segments (S1-S6), organized into a voltage-sensing module (VS, S1-S4) and a pore module (PM, S5-S6) with two intervening pore helices (P1 and P2) 14,15. The S4 segments contain four to eight repeats of a positively charged residue (usually Arg) flanked by two hydrophobic residues. These positively charged residues serve as gating charges, moving outward upon depolarization to initiate the process of activation 14,15. Chemical labeling and voltage clamp fluorometry suggest that DI-VS and DII-VS are primarily responsible for activation of the channel, whereas DIV-VS induces fast inactivation 14,15. A triple hydrophobic motif, Ile-Phe-Met (IFM), located in the DIII-DIV linker, serves as the fast inactivation gate 14,15. Mutation of the IFM motif can completely abolish fast inactivation 14,15. Determination of the structures of prokaryotic [16][17][18] and eukaryotic [19][20][21] NaV channels has remarkably enriched our understanding of their structure and function. Those structures revealed that NaV channels share similar key structural features 22.
The central pore is formed by the four PMs with the four VSs arranged in a pseudosymmetric square array on their periphery. The four homologous domains are organized in a domain-swapped manner, in which each VS interacts most closely with the PM of the neighboring domain. The four S6 segments come together at their intracellular ends to form the activation gate [16][17][18]. Intriguingly, in the structures of mammalian NaVs, the IFM motif binds in a receptor site formed by the DIII S4-S5 linker and the intracellular ends of the S5 and S6 segments in DIV, which suggests a local allosteric mechanism for fast inactivation of the pore by closing the intracellular activation gate [19][20][21]. The α-scorpion toxins bind to DIV-VS in its resting state, trap it in an intermediate activated conformation, and inhibit fast inactivation, providing an attractive target for studying the coupling of DIV-VS to pore opening and fast inactivation [9][10][11][12][13],23,24. Strong depolarization can reverse voltage-sensor trapping and drive the α-scorpion toxin off its receptor site, providing direct evidence for a toxin-induced conformation of the VS 9,24,25. The cryo-EM structure of the α-scorpion toxin AaHII was resolved bound to two different sites on a nonfunctional chimera of the cockroach sodium channel NaVPas, which contained 132 amino acid residues of the DIV-VS of the human neuronal sodium channel NaV1.7 embedded within 1449 residues of NaVPas 26. These results revealed structures of AaHII bound to the voltage sensors in both DI and DIV but did not resolve whether AaHII bound to either of these sites was functionally active in the chimera 26. Therefore, the precise structural mechanism by which α-scorpion toxins bind to the DIV-VS in a native sodium channel and block fast inactivation still remains elusive. LqhIII from the 'deathstalker scorpion' Leiurus quinquestriatus hebraeus (also known as the Israeli yellow scorpion and the North African striped scorpion) is classified as an α-scorpion toxin and shares the common βαββ scaffold containing four pairs of Cys residues that form disulfide bonds 7. Most scorpion toxins paralyze prey by targeting the sodium channels in nerve and skeletal muscle specifically 7. In contrast, LqhIII binds with highest affinity to the human cardiac sodium channel, with an estimated EC50 of 2.5 nM 27,28. It prevents fast inactivation efficiently, and it dissociates at an extremely slow rate 27,28, making it exceptionally potent. In this work, we elucidate the molecular mechanisms of voltage-sensor trapping and block of fast inactivation by α-scorpion toxins in the context of a functional native toxin-receptor complex by determining the cryo-EM structure of the rat cardiac sodium channel NaV1.5 in complex with the α-scorpion toxin LqhIII at 3.3 Å resolution. Our experiments provide important insights into the structural basis for gating-modifier toxin interaction, voltage-sensor trapping, electromechanical coupling in the VS, and fast inactivation of the pore. Results Voltage-sensor trapping of NaV1.5 by LqhIII. For our structural studies, we took advantage of the fully functional core construct of the rat cardiac sodium channel NaV1.5 (rNaV1.5C), which can be isolated with high yield and high stability 21. Expression of rNaV1.5C in the human embryonic kidney cell line HEK293S GnTI− and recording from single cells in whole-cell patch clamp mode (see Methods) yields inward sodium currents that activate rapidly and inactivate within 6 ms (Fig.
1a, black trace; inward current is plotted as a negative quantity by convention). Perfusion of increasing concentrations of LqhIII progressively slows the fast inactivation process and makes it incomplete (Fig. 1a, colored traces). We measured the sodium current remaining 6 ms after the depolarizing pulse as a metric of LqhIII toxin action (Fig. 1a, dotted line), because the unmodified sodium current has declined to nearly zero by this time, whereas substantial toxin-modified sodium current remains. The EC50 value for the increase in sodium current remaining at 6 ms following the stimulus is 11.4 nM (Fig. 1a). This effect of LqhIII and other α-scorpion toxins is achieved by trapping the voltage sensor in DIV of sodium channels in a conformation that allows sodium channel activation but prevents coupling to fast inactivation 4,9,10. Voltage-sensor trapping develops slowly and progressively over more than 20 min, with a half-time of 11.3 min at 100 nM (Fig. 1b). As expected from previous work 4,9,10, strong depolarizing pulses to +100 mV cause dissociation of the toxin and loss of its blocking effect on fast inactivation (Fig. 1c). The molecular mechanism for this long-lasting voltage-dependent block of fast inactivation of NaV1.5 sodium currents by LqhIII is unknown. Structure determination of the rNaV1.5C/LqhIII complex by cryo-EM. We analyzed the structural basis for the potent voltage-sensor trapping effects of LqhIII by cryogenic electron microscopy (cryo-EM). LqhIII was incubated with purified rNaV1.5C for 30 min. The regulatory proteins FGF12b and calmodulin were added to stabilize the isolated protein, and the toxin/channel complex was further purified by size-exclusion chromatography (SEC). A symmetric peak of the toxin/channel complex was collected from the second SEC run (Supplementary Fig. 1a, b). Detailed descriptions of protein expression, purification, cryo-EM imaging, and data processing are presented in Methods. Cryo-EM data were collected on a Titan Krios electron microscope and processed using RELION (Supplementary Fig. 1c, d; Supplementary Fig. 2a-c). A 3D reconstruction map of the rNaV1.5C/LqhIII complex was obtained at an overall resolution of 3.3 Å, based on the Fourier Shell Correlation (FSC) between independently refined half-maps (Fig. 2a). Strong density specifically localized near the extracellular side of DIV-VS shows that there is only one LqhIII molecule bound to rNaV1.5C (Fig. 2b; purple), as expected from previous biochemical studies of scorpion toxin binding to sodium channels 9. The local resolution for the PM core region is ~3.0-3.5 Å, whereas the four peripheral VSs have local resolutions of ~3.5-4.0 Å (Fig. 2c). The resolution for the toxin is lower than that of the channel protein (~4.0-5.0 Å, Fig. 2c). However, the interacting surface of the toxin that binds to DIV-VS has a resolution of ~3.5-4.0 Å for the amino acid side chains that form the complex, as they are tightly bound (Supplementary Fig. 2d and e). The 3D structure of the tightly disulfide-crosslinked toxin is well known from previous studies (Supplementary Fig. 3a)29, allowing it to be accurately fit into the observed density. No significant density was observed at high resolution for the C-terminal domain (CTD), FGF12b, or calmodulin (Fig. 2b), indicating that these components of the purified protein complex are mobile. Overall structure and LqhIII binding site. The high-resolution cryo-EM density map allowed us to build an atomic model for the rNaV1.5C/LqhIII complex (Fig. 3a and b).
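As an aside on the functional measurements above: EC50 values such as the 11.4 nM figure are typically obtained by fitting the remaining-current fraction against toxin concentration with a Hill-type dose-response curve. The Python sketch below is illustrative only; the data points are invented, and the authors' exact fitting procedure is given in their Methods rather than in this excerpt.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, top, ec50, n):
        # Fraction of sodium current remaining 6 ms after the pulse,
        # as a function of toxin concentration (Hill equation).
        return top * conc**n / (ec50**n + conc**n)

    # Hypothetical mean remaining-current fractions at each LqhIII concentration (nM).
    conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
    frac = np.array([0.03, 0.10, 0.24, 0.38, 0.47, 0.50])

    (top, ec50, n), _ = curve_fit(hill, conc, frac, p0=(0.5, 10.0, 1.0))
    print("EC50 = %.1f nM, Hill coefficient = %.2f" % (ec50, n))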
The overall structure of the rNaV1.5C/LqhIII complex is very similar to our previous apo-rNaV1.5C structure 21, with a minimum RMSD of 0.78 Å over 1164 residues. However, local conformational differences give many important insights. The structure of LqhIII is rigidly locked by disulfide bonds, except for the β2β3 loop and C-terminal region, which are highly flexible in solution as revealed by NMR analyses (Fig. 3c). Remarkably, LqhIII uses these two flexible regions to bind to the extracellular side of DIV-VS by wedging its β2β3 loop and C-terminus into the aqueous cleft formed by the S1-S2 and S3-S4 helical hairpins (Fig. 3b). These features are in close agreement with previous molecular-mapping studies of neurotoxin receptor site 3 11 and with the structure of the AaHII/NaVPas-NaV1.7 chimera 26 (see Discussion). The toxin may attack Neurotoxin Receptor Site 3 in the DIV-VS using its most flexible regions to allow it to dock in a stepwise manner that results in a tight induced-fit complex. The close interactions of the C-terminus and the β2β3 loop of LqhIII with rNaV1.5C are illustrated in Fig. 3b and d. At the C-terminus of LqhIII, Glu63 interacts with the Asn329-linked glycan from DI-PM, and Lys64 dips into the aqueous cleft and interacts with Gln1615 (Fig. 3b and d). The end of the β2β3 loop inserts into the DIV-VS cleft and partially unwinds the last helical turn of the S3 segment. Mutagenesis studies mapping Neurotoxin Receptor Site 3 revealed a negatively charged residue in the extracellular S3-S4 linker that is conserved among NaV channels and is critical for α-scorpion toxin binding 4,9. In agreement with those studies, the conserved negatively charged residue Asp1612 at an equivalent position in NaV1.5 mediates this crucial interaction with the bound toxin. His43 and His15 wrap around Asp1612 like pincers, forming a hydrogen bond (~2.5 Å) and a potential salt bridge (~4.0 Å), respectively (Fig. 3d). Moreover, we note that the backbone carbonyl of His43 engages the backbone carbonyl of Thr1608 at a distance of 2.8-3.5 Å, which may contribute to the affinity or specificity of interactions with the β2β3 loop (Fig. 3d)30. The complementary interacting surfaces of LqhIII and Neurotoxin Receptor Site 3 are depicted in a space-filling model in Fig. 3e (left), and the functionally important interacting residues are highlighted in yellow with embedded sticks and displayed in an 'open-book' format in Fig. 3e (right). The interacting surface area of Neurotoxin Receptor Site 3 covers ~836 Å², located on an arc stretching from the S3-S4 linker to the S1-S2 linker (Fig. 3e, right). The LqhIII toxin latches onto that arc, gripping it between the β2β3 loop and the C-terminus (Fig. 3e). It is likely that the flexibility of these regions of the toxin in solution is important for its initial approach and final tight grip on its target site. An intermediate activated state of DIV-VS trapped by LqhIII. Fast inactivation of NaV channels requires activation of DIV-VS 9,10,12,13. Because there is no membrane potential during solubilization and purification, the VSs of published NaV structures are usually in partially or fully activated states. In our apo-rNaV1.5C structure, four of the six gating charges of DIV-VS pointed outward on the extracellular side of the hydrophobic constriction site (HCS), as expected for an activated state 21. As a result, the fast inactivation gate in the apo-rNaV1.5C structure binds tightly in a hydrophobic pocket next to the activation gate 21.
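Superpositions like the 0.78 Å RMSD alignment above, and the state comparisons that follow, rest on least-squares fitting of corresponding atoms, commonly done with the Kabsch algorithm; whether the authors' software uses exactly this routine is not stated, so the numpy sketch below (with random coordinates standing in for real Cα atoms) is illustrative only.

    import numpy as np

    def kabsch_rmsd(P, Q):
        # Optimally superpose P onto Q (both (N, 3), corresponding atoms)
        # and return the resulting RMSD.
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        U, S, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation
        diff = P @ R.T - Q
        return np.sqrt((diff ** 2).sum() / len(P))

    # Sanity check: a rigidly rotated copy superposes back to RMSD ~ 0.
    rng = np.random.default_rng(1)
    P = rng.standard_normal((1164, 3))
    a = 0.7
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    print(kabsch_rmsd(P, P @ Rz.T))   # ~ 0 after optimal superposition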
α-Scorpion toxins bind to NaV channels in the resting state with higher affinity and trap the channel in a partially activated state, in which both the rate and extent of transition to the inactivated state are impaired (Fig. 1) 9,10. Because of its high affinity and specificity, LqhIII is able to bind to the purified rNaV1.5C protein in its activated state and chemically induce voltage-dependent structural changes that partially deactivate the VS. Remarkably, LqhIII binding drives DIV-S4 approximately two helical turns inward to form an intermediate, partially activated structure (Fig. 4a and b). Each gating charge Arg in the intermediate activated DIV-VS is positioned ~10-12 Å further inward than in the fully activated DIV-VS (Fig. 4a and b). Importantly, in the toxin-bound intermediate activated state reported here, R1 to R4 adopt a 3₁₀-helix conformation, with the last helical turn of the S4 segment relaxing R5 into α-helical form. In contrast, in the fully activated state, the region between R2 and R6 is in 3₁₀-helical form, but R1 is α-helical. As a consequence of the 3₁₀-helix conformation from R1-R4 in the toxin/channel complex, the residues between R1-R2 and R3-R4 bridge the HCS such that R1-R2 and R3-R4 share the same vertical plane in their interactions with the negative side chains of the extracellular negative cluster (ENC) and intracellular negative cluster (INC), respectively. This unique linear voltage-sensor-trapped conformation would be strongly stabilized by these simultaneous gating charge interactions outside and inside the HCS, which may provide the chemical energy required for potent voltage-sensor trapping against the force of the transmembrane electrical field and therefore for effective modification of sodium channel gating. The potential gating charges R5 and R6 translocate completely to the intracellular side of the VS. These charged residues were proposed to interact with the CTD in the structure of the NaVPas/NaV1.7 chimera 26. However, the CTD was not resolved in our structure, preventing visualization of the potential binding positions of R5 and R6. Superposition of the fully activated state (grey) and the toxin-induced intermediate activated state (blue) of the DIV-VS revealed a remarkable conformational difference (Fig. 4c). From S1 through most of S3 there is little or no structural change, whereas the final two helical turns of S3 and the entire S4 segment undergo dramatic conformational shifts. Notably, Gly1607 serves as a pivot point for S3 rotation, and the rotation of the upper S3 in turn moves S4 downward ~11 Å, such that R1 and R2 in the intermediate activated state are approximately in the positions of R3 and R4 in the fully activated state (Fig. 4c). This toxin-induced conformational change in the S3-S4 linker is further documented by our fit to the cryo-EM density, which is illustrated in Extended Fig. 3c-e. At the intracellular end of S4, an elbow-like bend is formed between S4 and the S4-S5 linker, which pushes the S4-S5 linker ~4.6 Å inward at its N-terminal end (Fig. 4c). Intriguingly, our previous resting-state structure of NaVAb showed that a similar elbow pushes the S4-S5 linker and its connection to S4 strikingly inward and twists this segment in order to close the intracellular activation gate 31. This conformational change in the S4-S5 linker is further supported by the close fit of our structural model to the cryo-EM density (Supplementary Fig. 4a).
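The displacements quoted above (~10-12 Å for the gating charges, ~11 Å for S4, ~4.6 Å for the S4-S5 linker) come from superposing the two states; a minimal sketch of such a measurement with Biopython is given below. The file names, chain ID, and residue numbers are hypothetical and would have to be adapted to the real models.

```python
# Hedged sketch: align two states on their static region, then measure
# Calpha displacements of selected residues. All selections are hypothetical.
import numpy as np
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
activated = parser.get_structure("act", "fully_activated.pdb")[0]
trapped = parser.get_structure("trap", "toxin_trapped.pdb")[0]

def ca_atoms(model, chain, first, last):
    return [model[chain][i]["CA"] for i in range(first, last + 1)]

# Superpose on S1 through most of S3 (little or no structural change there).
sup = Superimposer()
sup.set_atoms(ca_atoms(activated, "A", 1520, 1600),
              ca_atoms(trapped, "A", 1520, 1600))
sup.apply(list(trapped.get_atoms()))

for res_id in (1623, 1626, 1629, 1632):   # hypothetical R1-R4 positions
    d = np.linalg.norm(activated["A"][res_id]["CA"].coord
                       - trapped["A"][res_id]["CA"].coord)
    print(f"residue {res_id}: Calpha displacement = {d:.1f} A")
```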
Superposition of the intermediate activated DIV-VS structure (blue) upon the resting-state NaVAb-VS structure (orange) further illuminates these conformational differences (Fig. 4d). The connecting S3-S4 loop of the intermediate activated state of the LqhIII/rNaV1.5C complex is not located as deeply inward as that of the resting state of NaVAb and is not as tightly twisted (Fig. 4d). Moreover, the R1 and R2 gating charges are both located fully outward from the HCS in the partially activated S4 segment in the LqhIII/rNaV1.5C complex, whereas R1 is positioned only partially outward from the HCS in the resting state of NaVAb (Fig. 4d). In addition, the S4-S5 linker in the intermediate activated state has not moved as deeply into the cytosol as in the resting state (Fig. 4d). These differences suggest that the toxin-induced intermediate activated state of the NaV1.5 VS is indeed an intermediate state between the resting state and the fully activated state. A hallmark feature of the action of α-scorpion toxins is strongly voltage-dependent dissociation from their receptor site, which correlates with the voltage dependence of activation of sodium channels (Fig. 1c) 9,24,25. The structure of the rNaV1.5C/LqhIII toxin complex reveals the molecular basis for this important aspect of scorpion toxin action. In the complex of the toxin with the partially activated state of the DIV-VS, the positive charge of the ε-amino group of K64 on LqhIII interacts with the same negatively charged side chain in the ENC that interacts with R1 and R2 in the activated conformation of the VS (Fig. 4a and b). In light of these structures, it seems likely that outward movement of the S4 segment during activation of the DIV-VS creates a clash with K64 (Fig. 4b and c).

Considering that the two proteins were purified following the same procedures, it is likely that the same set of lipid and detergent molecules would be available for binding. Therefore, we believe the larger lipid or detergent molecule is able to bind in the lumen of the activation gate of the intermediate activated structure because of its larger diameter rather than because of a change in lipid or detergent concentrations between the two protein preparations. However, even this more dilated activation gate does not appear to be open enough to conduct hydrated Na+ 32,33. To test this hypothesis, we used molecular dynamics methods similar to those we previously applied to the NaVAb structure 32,33 in order to investigate the effect of LqhIII on pore hydration and dilation of the intracellular activation gate (Fig. 6). The inner pore of rNaV1.5C is depicted lying from right (extracellular) to left (intracellular) with the surrounding S5 and S6 helices illustrated in orange (Fig. 6a). Water molecules (red) fill the inner part of the central cavity on the right and the intracellular exit from the pore on the left. However, in this snapshot, there is a gap in hydration in the intracellular activation gate itself (white), where the S6 segments come together in a bundle (orange helices, Fig. 6a). In fact, statistical analysis of the conformational ensemble shows that the average probability density of water molecules in the intracellular activation gate (purple band) is near zero in simulations of both rNaV1.5C (black) and rNaV1.5C/LqhIII (red; Fig. 6b). Accordingly, Na+ did not permeate through the dehydrated activation gate in any of the simulations, suggesting that the pore is functionally closed.
Not only is the activation gate the least hydrated region of the pore on average, but it is also nearly always dehydrated (Fig. 6a and b). Even when a pathway connecting the central cavity to the intracellular space is transiently present, water molecules are usually excluded from entering this region due to the hydrophobic effect (Fig. 6a). As such, the activation gate is predominantly dehydrated (dewetted) and occupied by 4 water molecules on average (Fig. 6c and d). However, the activation gate is not significantly more likely to be wetted in simulations of rNaV1.5C with LqhIII than in simulations without LqhIII (Supplementary Fig. 5b), consistent with the fact that toxin binding does not open the gate sufficiently for passage of Na+. Nevertheless, the analysis of fluctuations in the diameter and hydration of the intracellular activation gate provides an initial suggestion that LqhIII binding may facilitate the transition to the open state of NaV1.5, which requires activation of the voltage sensors in domains I, II, and III for completion.

Discussion
We determined the structure of rat NaV1.5C in complex with the α-scorpion toxin LqhIII by single-particle cryo-EM. Biochemical and biophysical studies support only a single Neurotoxin Receptor Site 3 per sodium channel, located in the VS in DIV, at which α-scorpion toxins, sea anemone toxins, and related gating-modifier toxins bind 9,10. Consistent with this expectation from functional studies, we found a single molecule of LqhIII bound to the VS in DIV. The toxin binds at the extracellular end of the aqueous cleft formed by the S1-S2 and S3-S4 helical hairpins in the VS through its β2β3 loop and its C-terminus. Many conserved amino acid residues that are important for α-scorpion toxin binding and its functional effects on sodium channels are located in key positions in the toxin-receptor binding interface (Fig. 3e). These results provide convincing evidence that we have correctly identified the pharmacologically important Neurotoxin Receptor Site 3. Our complex structure provides an excellent model for investigating the coupling between gating charge transfer and fast inactivation. The cryo-EM structure of the AaHII/NaVPas-NaV1.7-DIV-VS chimera suggested that R5 and K6 of NaV1.7-DIV-VS were stabilized by interaction with the NaVPas CTD. By contrast, in our fully functional LqhIII/rNaV1.5C structure, the CTD was not observed. In fact, the CTDs were also not observed in the high-resolution structures of human NaV1.2, 1.4, and 1.7 channels 19-21,40. These results suggest that the CTDs of native mammalian NaV channels are disordered and/or mobile and differ substantially from the cockroach CTD in the NaVPas-NaV1.7 chimera, whose amino acid sequence is not similar to the CTDs of mammalian NaV channels. Based on this comparison, it seems likely that the CTD plays a secondary or regulatory role in fast inactivation in mammalian NaV channels.

In previous work, the structure of a chimera of the cockroach sodium channel NaVPas with the AaHII toxin bound was determined by cryo-EM 26. The functional significance of this sodium channel in the cockroach is unknown, and this chimera containing a segment of the DIV-VS of human NaV1.7 was nonfunctional; therefore, it is difficult to precisely compare this prior work to the structures we present here.
Unexpectedly, AaHII bound to the NaVPas chimera in two positions, one on the VS in DI of NaVPas and one on the DIV-VS contributed in part by NaV1.7, and it was not shown whether either of these sites was functional in the chimera 26. In contrast, we found only a single toxin binding site, as expected from previous structure-function studies 9,10. The Neurotoxin Receptor Site 3 identified in our study is generally similar to the AaHII binding site found in DIV of the AaHII/NaVPas-NaV1.7-DIV-VS chimera structure 26, but we found an important difference in the binding poses of the two toxins. Compared with AaHII bound to the NaVPas-NaV1.7-DIV-VS chimera, LqhIII bound to rNaV1.5C is rotated ~26° downward, further away from the glycan and DI of the channel (Supplementary Fig. 7). This difference may reflect alteration in the position of the receptor site within the NaVPas-NaV1.7-DIV-VS chimera tertiary structure caused by artifactual constraints from formation of the chimera 26. Alternatively, the structure of the functionally active LqhIII/rNaV1.5C complex described here may be characteristic of the cardiac sodium channel, which has numerous distinct features compared to neuronal sodium channels like NaV1.7. Thus, the toxin-bound state we have characterized here may have broad significance for voltage-sensor trapping by a wide range of gating-modifier toxins from hundreds of species of spiders, scorpions, mollusks, and coelenterates, which all use this universal mechanism to immobilize their prey.

Methods
Electrophysiology. All experiments were performed at room temperature (21-24 °C) as described previously 21. Human HEK293S GnTI− cells were maintained and infected on cell culture plates in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% FBS and glutamine/penicillin/streptomycin at 37 °C and 5% CO2 for electrophysiology. Unless otherwise mentioned, HEK293S GnTI− cells were held at -120 mV and 100-ms pulses were applied in 10 mV increments from -120 mV to +60 mV. Leak currents were subtracted using a P/-4 protocol with a holding potential of -120 mV.

Cryo-EM grid preparation and data collection. Three microliters of purified sample were applied to glow-discharged holey gold grids (UltrAuFoil, 300 mesh, R1.2/1.3) and blotted for 2.0-3.5 s at 100% humidity and 4 °C before being plunge-frozen in liquid ethane cooled by liquid nitrogen using an FEI Mark IV Vitrobot. All data were acquired using a Titan Krios transmission electron microscope operated at 300 kV, equipped with a Gatan K2 Summit direct detector and a Gatan Quantum GIF energy filter with a slit width of 20 eV. A total of 4,222 movie stacks were automatically collected using Leginon 49 at a nominal magnification of 130,000× with a pixel size of 0.528 Å (super-resolution mode). The defocus range was set between -1.2 and -2.8 μm. The dose rate was adjusted to 8 counts/pixel/s, and each stack was exposed for 8.4 s over 42 frames, giving a total dose of 60 e−/Å².

Cryo-EM data processing. The movie stacks were motion-corrected with MotionCor2 50, binned 2-fold, and dose-weighted, yielding a pixel size of 1.056 Å. Defocus values of each aligned sum were estimated with Gctf 51. A total of 3,805 micrographs with CTF fitted better than 6 Å were used for particle picking, and a total of 1,817,940 particles were automatically picked in RELION 3.0 52.
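As a quick consistency check on the acquisition parameters above, the quoted total dose follows from the dose rate, the exposure time, and the physical pixel size at the specimen (1.056 Å, i.e., two 0.528 Å super-resolution pixels):

\[
D \;=\; \frac{8\ \mathrm{e^-\,pixel^{-1}\,s^{-1}} \times 8.4\ \mathrm{s}}{(1.056\ \text{\AA})^{2}} \;\approx\; \frac{67.2\ \mathrm{e^-}}{1.115\ \text{\AA}^{2}} \;\approx\; 60\ \mathrm{e^-}/\text{\AA}^{2}.
\]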
After several rounds of 2D classification, 882,608 good particles were selected and subjected to one-class global-angular-search 3D classification with an angular search step of 7.5°, in which a low-pass-filtered cryo-EM map of rNaV1.5C was used as the initial model. Each of the last five iterations was further subjected to four-class local-angular-search 3D classification with an angular search step of 3.75°. After combining particles from the best 3D classes and removing duplicate particles, 570,843 particles were subjected to per-particle CTF estimation by Gctf followed by Bayesian polishing. The polished particles were subjected to a final round of three-class multi-reference 3D classification. The best class, containing 267,595 particles, was auto-refined and sharpened in RELION 3.0. Local resolution was estimated by ResMap in RELION 3.0. A diagram illustrating our data processing is presented in Supplementary Fig. 2.

Model building and refinement. The structures of rat rNaV1.5C (PDB code: 6UZ0) and LqhIII (PDB code: 1FH3) were fitted into the cryo-EM density map in Chimera 53. The model was manually rebuilt in COOT 54 and subsequently refined in Phenix 55. The model-vs-map FSC curve was calculated by phenix.mtriage. Statistics for cryo-EM data collection and model refinement are summarized in Supplementary Table 1.

Molecular dynamics model. The cryo-EM structure of rNaV1.5C/LqhIII, lacking the DI-DII and DII-DIII linkers, is composed of three chains which correspond to DI, DII, and DIII-DIV. The MODELLER software (ver. 9.22) was used to insert missing residues and sidechains within the polypeptide chains, followed by quick refinement using MD with simulated annealing 56. Neutral N- and C-termini were used for the three polypeptide chains in our refined model of rNaV1.5C. The N-termini of chains DII and DIII-IV were acetylated, and a neutral amino terminus (-NH2) was used for DI. Neutral carboxyl groups (-COOH) were used for all C-termini. Disulfide bonds linking residues 327-342, 909-918, and 1730-1744 were included in our model of the channel, as they were present in the cryo-EM structure; however, no glycans were added to the protein. Charged N- and C-termini were used for LqhIII, and disulfide bonds linking residues 12-65, 16-37, 23-47, and 27-49 were included.

Molecular dynamics simulations. Molecular models of rNaV1.5C with and without the bound LqhIII toxin were prepared using the Membrane Builder input generator 57-61 from CHARMM-GUI 59. The rNaV1.5C/LqhIII model was embedded in a hydrated DMPC bilayer, with approximately 150 mM NaCl. The protein was translated and rotated for membrane embedding using the PPM server 62. The lipid bilayer was assembled using the replacement method, and solvent ions were added at random positions using a distance-based algorithm. A periodic rectangular cell with approximate dimensions of 14 × 14 × 13 nm was used, which comprised ~240,000 atoms. The CHARMM36 all-atom force field 63-65 was used in conjunction with the TIP3P water model 66. Non-bonded fixes for interactions of Na+ with backbone carbonyl oxygen atoms 67 and with lipid head groups 68 were imposed. Electrostatic interactions were calculated using the particle-mesh Ewald algorithm 69,70, and chemical bonds were constrained using the LINCS algorithm 71. The energy of the system was minimized with protein position restraints on the backbone (4000 kJ/mol/nm²) and side chains (2000 kJ/mol/nm²), as well as lipid position and dihedral restraints (1000 kJ/mol/nm²), using 5000 steps of steepest descent.
The simulation systems were pre-equilibrated using multi-step isothermal-isovolumetric (NVT) and isothermal-isobaric (NPT) conditions for a total of 10.35 ns (see Table MDS1 for parameters). Unrestrained "production" simulations of approximately 300 ns were then generated with a 2 fs time integration step. The first 100 ns of all production simulations were considered part of equilibration based on RMSD analyses of Cα atoms (Supplementary Fig. 6) and were excluded from subsequent data analysis. Thirty independent replicas (ten of them 400 ns long and twenty of them 300 ns long) were generated for each system using random starting velocities, yielding a total simulation time of approximately 10 μs per system.

After performing the spatial transformations, the z-axis of the simulation box was used as the pore axis of NaV1.5 and the transformed positions were used for subsequent analyses. The axial distribution of water was computed by counting the number of water O-atoms within a cylinder of radius 8.5 Å centered on the pore axis. The probability distribution of water was calculated for each replica by counting the number of water molecules in uniform cylindrical slices along the pore axis and normalizing the counts by the slice with the highest number of water molecules (solvent slice). The average and SEM of the probability distribution were computed across replicas. Pore hydration analysis indicated a dehydrated region located at the intracellular activation gate (ICAG; -2.8 nm < z < -1.5 nm). The number of water molecules in the gate was counted for each frame and normalized by the total number of frames to obtain the probability distribution. The average and SEM were computed across replicas. To measure the size of the intracellular activation gate, the positions of residues at the ends of the S6 helices were analyzed.

Table MDS1. MD parameters used for each pre-equilibration and production step. The ensemble (ENS), time integration step, and total time/replica are indicated. Berendsen (B) (1) or Nosé-Hoover (NH) (2, 3) thermostats and B or Parrinello-Rahman (PR) (4, 5) barostats with their respective coupling time constants (CTC) are shown. Protein backbone and sidechain position restraints as well as lipid position and dihedral restraints are also specified.
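The gate-hydration analysis described above can be sketched as follows; this is a minimal illustration assuming MDAnalysis is available, that the topology/trajectory file names below are placeholders, that the pore axis coincides with z and is centered at x = y = 0 after the spatial transformations, and that water oxygens carry the CHARMM TIP3P atom name OH2.

```python
# Hedged sketch: count water molecules in the intracellular activation gate
# (cylinder radius 8.5 A, -28 A < z < -15 A) for each trajectory frame.
import numpy as np
import MDAnalysis as mda

u = mda.Universe("system.psf", "traj.dcd")     # placeholder file names
water_o = u.select_atoms("name OH2")           # CHARMM TIP3P water oxygens

counts = []
for ts in u.trajectory:
    pos = water_o.positions                    # Angstrom, pore axis on z
    radial = np.hypot(pos[:, 0], pos[:, 1])
    in_gate = (radial < 8.5) & (pos[:, 2] > -28.0) & (pos[:, 2] < -15.0)
    counts.append(int(in_gate.sum()))

counts = np.array(counts)
print(f"mean waters in gate = {counts.mean():.1f}; "
      f"dewetted frames (<2 waters) = {(counts < 2).mean():.1%}")
```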
7,322
2020-12-28T00:00:00.000
[ "Biology" ]
Investigation into deep hole drilling of austenitic steel with advanced tool solutions
Deep hole drilling processes for high-alloyed materials are characterised by worn guide pads and chatter vibrations. In order to increase feed rates, process stability and bore quality in STS deep hole drilling, various investigations were carried out with adjustments to the tool. First, a new process chain for the production of tribologically optimised guide pads and their effects on the guide pad shape is described in detail. The results of these studies show that the shape change in the area of the axial run-in chamfer through a micro finishing process leads to a better bore hole quality. Furthermore, the influence of guide pad coating and cooling lubricant on the deep hole drilling process was investigated. In addition, the machining of the austenitic steel AISI 304 is analysed using a conventional steel boring bar and an innovative carbon fibre reinforced plastic (CFRP) boring bar. While the conventional drill tube oscillates with different eigenfrequencies, the CFRP-boring bar damps chatter vibrations of the drill head and stabilises the process. Even at higher feed rates up to f = 0.3 mm, it is possible to machine austenitic, difficult-to-cut materials with significantly reduced vibrations.

Introduction
Modern machining technology strives to combine economical productivity and increased quality. The technology of deep hole drilling makes it possible to produce deep bore holes and thus responds to these challenges. Compared to the competing drilling processes, conventional deep hole drilling with an asymmetrical tool structure is effective in achieving very large length-to-diameter (l/D) ratios and is especially used for deep bore holes with a ratio larger than 10 [1]. During deep hole drilling processes, the bore hole wall is formed by the supporting guide pads, whereby bore holes with very low surface roughness depths Rz can be produced [2]. During the machining process, high mechanical loads stress the guide pads, which leads to extensive wear of these components [3]. Depending on the materials to be machined, such as austenitic steels, these loads increase and can exceed the initial state several times over [4]. These steel grades in particular cause high abrasive and adhesive tool wear [5]. Previous research at the Institute of Machining Technology has already focused on the wear of guide pads in single tube system (STS) deep hole drilling of austenitic steels [4,6]. Special attention was paid to the shape of diamond-like carbon (DLC)-coated guide pads, to tool wear behaviour and to the quality of the drilled bore holes. Furthermore, ta-C coated guide pads have been successfully applied to increase tool life and bore hole quality [6,7]. Due to their high hardness and low coefficient of friction, ta-C coatings proved to be a good solution to compensate the process-related high force and wear stress on the guide pads under lubricated conditions [8]. In this study, deep drilling oil was used, as is usual in STS drilling.

1.1 Increasing bore quality and tool life in single-lip drilling of austenitic steels
When twist drills are used for drilling austenitic steels, intense mechanical loads and tool wear occur at the outer corner [9]. Alternatively, single-lip drills are used in industrial applications as tools for deep hole drilling of austenitic steel.
The implementations of these drilling processes are performed on deep hole drilling machines as well as on machining centres, which both use emulsions as coolants [10]. Industrial companies using central lubrication systems mostly run their deep hole drilling machines with emulsion lubricant [11]. Because the cooling effect is prioritised over the lubricating effect, water-miscible lubricants are primarily used. In deep hole drilling, a minor part of the heat is transferred to the cooling lubricant [1]. Moreover, in deep hole drilling, the coolant's lubricating effect on the guide pads has a significant influence on the wear and thus on the bore hole quality [7]. Biermann et al. first studied the effects of microfinished guide pads on bore surface quality in SLS drilling. They found that a friction-reducing coating (e.g. ta-C) is appropriate to reduce abrasive wear effects [4,6]. In order to evaluate the guide pads' characteristics and estimate the cutting performance, an analysis of microstructures and mechanical properties was performed [12]. In experimental tests, the tribological qualities were determined in the form of tool wear, workpiece surface roughness and mechanical loading [13]. Consequently, the results for single-lip deep hole drilling of the austenitic steel AISI 304 with tribologically optimised guide pads under variation of the coolant are presented and discussed in the first part of the article.

Suppression of chatter vibrations in STS drilling
Another characteristic of deep hole drilling processes is the vibration tendency of the tool. Torsional and bending vibrations often accompany the processes. Bending vibrations are the result of deficient alignment of the machine components or insufficient support of the boring bar used. To minimise vibrations, there are active damping systems in machine tools, which apply a reaction force to the components [14]. Dynamic vibration absorbers in boring bars for turning and the guide pad structure in deep hole drilling are also appropriate for reducing chatter in tools [15,16]. Furthermore, the rotational speed is a decisive variable, as it can trigger a resonance of the machine tool, which can cause bending vibrations. In contrast, torsional vibrations are primarily influenced by the feed rate and the workpiece material [17]. Therefore, high-strength materials promote chatter vibrations. Moreover, tough materials with a high adhesion tendency lead to a stick-slip effect at the guide pads, which can also have an influence on tool vibration [18]. For this reason, STS deep hole drilling processes of difficult-to-cut materials are accompanied by chatter vibrations, independent of boring bar length and cutting parameters. In order to reach an acceptable tool life, it is necessary to reduce the feed rate and to use a passive damping system [18]. In the second part of the article, the potential of a new type of boring bar made of carbon fibre reinforced plastic (CFRP) is shown and analysed as a solution for reducing process vibrations and improving feed rates.

2 Guide pad wear during deep hole drilling of austenitic steel
Experimental setup and method
The experiments were carried out on a deep drilling machining centre, type Ixion TLF 1004, and on a Heller FT4000 five-axis machining centre. Isocut T 404 deep drilling oil from Petrofer Chemie H.R. Fischer GmbH & Co. KG is used on the deep drilling machining centre.
With a viscosity of ν = 10.1 mm²/s (at T = 313 K), it is classified as a low-viscosity drilling oil and has been specially developed for single-lip deep hole drilling of small diameters. For the deep hole drilling tests with emulsion, the water-miscible concentrate Avantin 4409 from Carl Bechem GmbH was used in a volume ratio of 8% on the five-axis machining centre. Both coolants are designed for steel cutting and are alloyed with appropriate additives. Fig. 1a shows the deep hole drilling tool used, with a diameter of d_tool = 13 mm. The wear elements (indexable insert and guide pads) can be replaced on this tool. The guide pads used are made of carbide grade P20 and are coated differently with TiN, TiAlN and ta-C (tetrahedral amorphous carbon) in order to increase tool life [19]. Additionally, specific pre-treatments were applied. Fig. 1b shows the three differently coated guide pads. Conventionally, the guide pads undergo a polishing process before and after the coating. In order to optimise the shape of the uncoated guide pads, a micro finishing process was realised by a superfinish attachment of type Supfina 202. The micro finishing is characterised by an interaction of two overlaid relative movements. The typical finished surface structure results from this specific process kinematic: the rotating shaft on the one hand and the oscillating press roll on the other. In this special case, a shaft is designed with grooves for fixing the guide pads, as shown in Fig. 1c. Furthermore, Fig. 1d shows the contact situation between the flexible pressure roller and a guide pad. With regard to the high hardness and wear resistance of the guide pad coatings, diamond was used as the cutting material [7]. In order to analyse the effects of the cooling lubricant as well as of the coatings on the machinability of AISI 304 austenitic steel, various methods are common in the scientific context. Li et al. used a Spike-T system to evaluate mechanical tool loads in the context of tool wear experiments. Wear signs were additionally recorded with a light microscope integrated into the machine tool and process [20]. Arif et al. applied a drilling dynamometer of type Kistler 9125 in their study to provide high-quality measurement of mechanical tool loads. They showed in their research that piezoelectric measuring systems are applicable for determining cutting forces [21]. In this study, wear signs were analysed with a light microscope of type Keyence VHX 5000 in the field, in comparison to Martinho et al., who used SEM imaging ex situ for detailed microscopic analysis of cutting performance [22]. In order to compare macroscopic as well as microscopic wear signs between the lubrication types, a light microscope was used in this study. Nickel et al. used a testing method based on Barkhausen noise in the context of single-lip drilling, non-destructive methods and surface integrity [23]. When lubrication and coating qualities are in focus, tactile measurement of surface quality is appropriate. In the experimental investigations performed, bore holes were analysed with a MahrSurf XR20 device. The methodology of this study is represented in Fig. 2. Fig. 3a shows the achieved surface quality of the tribologically optimised guide pads. During the finishing process, the shaft rotated at a speed of up to v_p = 25 m/min, while the press roll oscillated with a frequency of f_os = 20 Hz (shown in Fig. 3b).
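To get a feeling for the two overlaid movements, a rough kinematic estimate can be made from the stated parameters; note that the oscillation amplitude of the press roll is not given in the text, so the value used below is purely an assumption for illustration.

```python
# Hedged sketch: superposition of rotation and oscillation in micro finishing.
# The oscillation amplitude a is an assumed value, not from the study.
import math

v_rot = 25.0 / 60.0 * 1000.0   # shaft surface speed in mm/s (v_p = 25 m/min)
f_os = 20.0                    # press roll oscillation frequency in Hz
a = 1.0                        # assumed oscillation amplitude in mm

v_osc_peak = 2.0 * math.pi * f_os * a          # peak axial speed of the roll
alpha = math.degrees(math.atan2(v_osc_peak, v_rot))
print(f"peak oscillation speed ~ {v_osc_peak:.0f} mm/s, "
      f"max cross-hatch angle ~ {alpha:.1f} deg")
```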
In addition to the choice of the specific cutting parameters, there are several possibilities to influence the tool and process characteristics. On the one hand, the hardness of the press roll can be varied. On the other hand, the surface quality can be improved by using different grain sizes in subsequent process steps. These variations and their influences have been investigated in different studies [6,7]. A very short process was designed for these guide pad sizes. During the process time of approx. t = 30 s, a 90 Shore pressure roller was used.

The finishing not only improves the surface quality but also changes the shape of the guide pad [6]. In the detailed illustration of the guide pad corner, Fig. 3c shows that the edges after polishing were rounded by the finishing process. As shown in Fig. 1d, the pressure roll wraps elastically around the guide pad in the area of the run-in chamfer, which results in this rounding. A measuring microscope of type Alicona InfiniteFocus G5 was used to measure the profiles in the radial inlet/outlet area and in the axial inlet/outlet area. The material removal in the radial inlet and outlet area is relatively low (approx. 8 μm), but the transition to the ground chamfer is completely rounded. The material removal in the axial inlet area and the resulting rounding are significantly higher. The oscillating movement of the ductile pressure roll is perpendicular to this edge, which significantly increases the removal rate. This removal causes the axial contact point of a profiled guide pad to be shifted backwards by approx. 60 μm from the tip of the cutting edge compared to a conventional guide pad. This changes the equilibrium of forces at the tool head to a small extent.

Analysis of the micro finished guide pads and the subsequent coating process
To improve the tribological properties of the cemented carbide guide pads, three PVD coatings were applied and investigated [24,25]. On the one hand, conventional Ti-based nitride coatings such as TiN and TiAlN were selected due to their high wear resistance and well-known tribological properties [26]. On the other hand, a ta-C coating from the group of DLC coatings offers low friction when sliding against steel counter-bodies and possesses a high hardness at the same time [27]. Cross-section images of the cryogenically fractured coated guide pads, taken by SEM (type JEOL FE JSEM 7001, Japan), are shown in Fig. 4. The TiN coating shows a thickness of 1.69 μm, whereas the TiAlN and ta-C coatings show smaller thicknesses of 0.68 μm and 0.52 μm, respectively. However, the Ti-based nitrides have a surface roughness depth of Rz ≈ 1 μm, whereas the roughness profile of the ta-C coating has values of Rz = 0.5 μm. These inherent properties of the coatings can be explained by their different phase compositions. Amorphous coatings have a lower roughness profile in comparison to crystalline thin films. Additionally, the high residual stresses of ta-C coatings result in the increased hardness and low coating thickness. The mechanical properties of the coatings were investigated with a G200 nanoindenter (Agilent Technologies, USA) according to the method suggested by Oliver and Pharr [28-30]. The TiN and TiAlN coatings showed almost similar mechanical properties, with hardness values above 20 GPa. In contrast to this, the hardness of the ta-C coating was determined to be approx. 40 GPa.
The difference in the mechanical properties is a result of the phase composition and the different types of bonding. In particular, the covalent bonding of carbon and the hybridisation of the orbitals (sp³/sp² ratio) lead to high hardness [31]. With respect to the interactions of the coatings with the cemented carbide substrate, the residual stresses of the coated WC-Co substrate were measured by X-ray diffraction, as presented in a former study of the authors [32]. It is noticeable that for all substrates a slightly anisotropic compressive stress state was found in the WC phase.

Fig. 3 (a) Finishing process sequence and results; (b) finish parameters; (c) detailed view of the guide pad shape

The difference in the orientation of the stresses is caused by the kinematics of the finish pre-treatment of the guide pads prior to the deposition process. The highest compressive stresses were determined for the substrate of the ta-C coating (σ = −2034 ± 34 MPa). This directly correlates with the mechanical properties determined by nanoindentation and the interaction between coating and substrate. Summarising the presented properties, ta-C coatings are a highly promising option for the field of deep hole drilling. The polished guide pads, with an axial chamfer transition to the support area, show linear contact marks independent of the variation of the lubricant. The finished guide pads, on the other hand, show a more pronounced contact area in the direction of rotation. The rounding in the area of the run-in chamfer means that, with the finished pads, a larger part of the workpiece material comes between the support area and the bore wall [2,33]. The high tendency of the austenitic steel to strain harden leads to considerable material adhesion to the guide pad and has a significant influence on the bore hole surface quality.

Results of experimental drilling tests
Accordingly, the achievable bore hole qualities are lower when using emulsion than with deep drilling oil. The oil improves the contact conditions between the friction partners in the area of the axial inlet, which reduces the friction moment. The frictional heat generated at this point is reduced at the same time. The tendency of adhesive wear increases with higher temperature, which is why the wear under oil is lower at this point. The rear support area between the bore wall and the guide pad is sufficiently flushed by the coolant and thus well cooled. The higher cooling capacity of the emulsion has a positive effect on wear development. The more pronounced tendency to adhesion and the greater frictional torque on the guide pads and in the area of the chip formation zone lead to a higher mechanical tool load during the process when using emulsion. Fig. 6 shows the results for guide pad wear when drilling austenitic steel with varied coatings. In industrial practice, both emulsion and deep drilling oil are used. Therefore, this influence has also been investigated. Process execution with deep drilling oil produces better drilling qualities compared with the use of emulsion. The low lubrication effect in the axial inlet area clearly promotes the adhesion tendency, which is why wear develops more strongly than with the use of oil. Compared to the other two coatings, the ta-C coating has an improved friction behaviour and a higher hardness.
Fig. 4 (a) SEM pictures of the cross section and topography of the PVD coatings; (b) hardness of the coatings; (c) residual stresses in the substrate layer

As a result, there is no wear in the rear part of the support area. This area has the last contact with the bore wall, which gives it a significant influence on the surface quality. The ta-C coating has a lower temperature resistance than titanium-based coatings, which is why the advantages of high hardness and good friction properties are not maintained in the front contact area. The wear marks of the ta-C and TiN coatings are comparable. During the tests with the titanium-based coated guide pads, a shrill whistle generated by vibrations was registered acoustically. No such sound occurs in the process with ta-C coated guide pads. The reason for this is the lower tendency of adhesion between the coating and the workpiece material. The TiAlN coating shows the highest wear irrespective of the cooling lubricant used and consequently produces the lowest bore hole quality. The wear patterns show clear defects on the supporting surface. At these points, strong adhesion between the workpiece material and the coating has occurred, whereby the coating has been completely removed from the carbide substrate. Once initiated, this process repeats itself, so that delamination progresses over an increasing drilling path. When deep drilling austenitic steel, switching from oil to emulsion changes the mechanical tool load by approx. 43% with regard to the feed force F_z and by approx. 33% with regard to the drilling torque M_B.

3 CFRP-boring bar with improved damping characteristic for STS deep hole drilling of austenitic steels
Experimental setup and method
The investigations have been carried out on an STS deep hole drilling machine of type Giana GGB 560 (see Fig. 7). This machine tool enables the machining of bore holes with a maximum diameter of D = 300 mm and a maximum drilling length of l_t = 3000 mm. A workpiece spindle and a tool spindle enable different process strategies with a rotating tool as well as a rotating workpiece. The presented experimental tests have been carried out with a rotating workpiece and a non-rotating tool. This strategy was essential to measure the mechanical loads on the boring bar with resistance strain gauges. Fig. 7 shows the machine tool and the arrangement of the strain gauges on the boring bar. The measurement of the torsional moment was carried out with two strain gauges connected as a full bridge. The strain gauges have a linear design with a measuring grid length of l_mg = 6 mm and a measuring resistance of R = 120 Ω. The passive sensors are glued to the boring bars next to the clamping position. Mechanical loads during the process lead to deformation of the measuring grid, which causes a change in the electrical resistances. Chatter can be detected and analysed by means of the Wheatstone bridge circuit and a measuring system. In order to increase process stability, cutting force data were analysed using NI DIAdem software. By digitally filtering the signals and applying the fast Fourier transform, chatter vibrations in the form of oscillatory cutting forces were determined [34]. In this study, a drill head from the tool manufacturer botek Präzisionsbohrtechnik GmbH with a diameter of d = 60 mm was used. This tool is equipped with two cutting inserts and three guide pads. The guide pads were adapted to STS deep hole drilling of AISI 304 as described in the previous chapter.
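The chatter identification described above (digital filtering plus fast Fourier transform of the strain gauge signal) can be sketched as follows; this minimal illustration uses synthetic data with an assumed sampling rate and assumed frequencies, not the signals recorded in the study or the NI DIAdem workflow.

```python
# Hedged sketch: detecting chatter frequencies in a torque signal via FFT.
# Sampling rate, frequencies, and amplitudes are hypothetical.
import numpy as np

fs = 5000.0                                  # sampling rate in Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
torque = (900.0                              # mean drilling torque in Nm
          + 400.0 * np.sin(2 * np.pi * 120.0 * t)
          + 150.0 * np.sin(2 * np.pi * 310.0 * t)
          + 80.0 * np.sin(2 * np.pi * 520.0 * t)
          + 20.0 * np.random.randn(t.size))  # measurement noise

spectrum = np.abs(np.fft.rfft(torque - torque.mean())) * 2.0 / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

for i in np.argsort(spectrum)[-3:][::-1]:    # three dominant components
    print(f"chatter component at {freqs[i]:6.1f} Hz, amplitude ~ {spectrum[i]:.0f} Nm")
```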
This includes a ta-C coating and the pre-treatment of the uncoated guide pads by micro finishing [6]. To analyse the process dynamics, two different boring bars were used. In reference tests, a conventional steel boring bar (AISI 4140) was installed, while an innovative CFRP-boring bar was developed for stabilising the deep drilling process. Fig. 7c, d shows the drill head and the CFRP-boring bar, which was wound with a ±45° fibre direction. The wall thickness is about s = 7 mm, consisting of inner and outer CFRP windings (each with a thickness of s = 2 mm) and a GFRP layer of s = 3 mm. A thread is required to attach the drill head to the boring bar. For this reason, a metallic adapter featuring a specific STS thread was adhered into the bar over a length of l = 135 mm. Both boring bars share the same dimensions, with an inner diameter of d_i = 37 mm and an outer diameter of d_o = 51 mm. The inner diameter of the adapter is about d_Ad = 27 mm. The tests were carried out for two different workpiece materials. As a reference, the tempering steel AISI 1060 was used. Beyond that, the austenitic steel AISI 304, which belongs to the difficult-to-machine materials, was drilled [35,36]. Moreover, friction can lead to tribocorrosion [37]. For these reasons, STS deep hole drilling of AISI 304 is characterised by adhesive guide pad wear as well as strong chatter vibrations [6]. In particular, the elongation of the two materials differs significantly: that of the austenitic steel is five times higher than that of the ferritic/pearlitic steel. This leads to longer chip forms, which can reduce process reliability. In comparison to AISI 304, the higher tensile strength and different grain structure of AISI 1060 generally cause small, fragmented chips. Fig. 7e shows the mechanical properties of the investigated materials. The methodology of this study is summarised in Fig. 8.

Results and discussion
Machining of AISI 1060 is characterised by reliable chip breaking and high process stability. Thus, STS deep hole drilling is not accompanied by vibrations of the boring bar when low cutting parameters are chosen. Fig. 9 points out the influence of the feed rate on torsional vibrations of the boring bar. These tests were carried out without the use of the Lanchester damper of the deep hole drilling machine. The analysis of the drilling torque shows a stable process without chatter vibrations when using feed rates up to f = 0.2 mm. In contrast, an increased feed rate of f = 0.3 mm causes intense vibrations of the tool, which can be identified by the oscillation of the drilling torque with M_B ≈ ±475 Nm. According to the FFT in Fig. 9, the oscillation of the boring bar is characterised by three frequencies, and it can be noted that the lowest frequency has the largest amplitude. In contrast to AISI 1060, STS deep hole drilling of the austenitic steel AISI 304 is more challenging. In general, this material tends toward long chip formation as well as strong chatter vibrations, even at moderate cutting parameters. Fig. 10 shows the measured drilling torque depending on varying feed rates. The cutting parameters and experimental setup are identical to the previous STS deep hole drilling of AISI 1060 (Fig. 9). In comparison to the machining of AISI 1060, drilling of AISI 304 initiates strong chatter vibrations even at low feed rates of f = 0.1…0.2 mm.
Whereas the amplitudes of the vibrations are comparable, the analysed frequencies differ.

A possibility for process stabilisation in STS deep hole drilling of austenitic steel is the application of a damper system, specifically a Lanchester damper. The vibration-reducing effect of these systems depends on the chatter frequencies as well as on the position of the damper. During a deep hole drilling process, these frequencies depend, among other factors, on the chosen cutting parameters and the drilling depth l_f. Thus, the position of the damper has to be changed to obtain an enhanced damping effect. For these reasons, a self-damping CFRP-boring bar was developed to increase the dynamic process stability independent of the drilling depth. The purpose of the development is a stable drilling process with significantly reduced chatter vibrations and constant mechanical tool loads on the cutting edges and guide pads. The innovative boring bar was applied in STS deep hole drilling of AISI 304 with varying feed rates. Fig. 11 shows the measured drilling torques depending on the applied feed rates. At feed rates of f = 0.1…0.3 mm, the oscillation of the drilling torque is reduced significantly. The self-damping properties of CFRP enable a stable process without the application of an additional damper system [38]. This reduces the maximum mechanical load on the cutting edges and guide pads of the tool. Moreover, the required power output of the machine tool was decreased. A further increase of the feed rate to f = 0.4 mm induces torsional vibrations of the boring bar with a drilling torque amplitude of ΔM_B ≈ ±250 Nm, which is significantly lower than the amplitude of the drilling torque when drilling with a conventional steel boring bar (Fig. 9; ΔM_B ≈ ±400…±450 Nm). Consequently, a CFRP-boring bar can help reduce the likelihood of tool failure. To reduce the amplitude of the torsional vibrations, it is appropriate to use the Lanchester damper of the machine tool. When using conventional steel boring bars, process reliability and bore hole quality can be increased by using one or more dampers. Fig. 12 compares the amplitudes of the measured drilling torque for both investigated boring bars when drilling austenitic steel. The effectiveness of the Lanchester damper is clearly shown by the reduction of the oscillation. The occurring mechanical loads result in strong vibrations of the steel boring bar, whereas the CFRP-boring bar oscillates with a significantly smaller amplitude. The damping system enables a reduction of vibrations for both boring bars. It can be pointed out that the range of the drilling torque is about ΔM_B ≈ ±40 Nm when using the steel bar. In comparison, the amplitude when using the CFRP-boring bar is already half as large without the application of an additional damper (ΔM_B = ±20 Nm). It should be noted that the drilling torque was recorded on the boring bar behind the damper. Consequently, larger amplitudes are likely to occur between the drill head and the damper. The oscillation of the CFRP-boring bar can also be reduced by activating the damper system. According to Fig. 11, deep hole drilling with a feed rate of f = 0.4 mm causes strong chatter vibrations. Therefore, the effectiveness of the damper was analysed for this feed rate. The influence of a damper system on the drilling torque when using a CFRP-boring bar is shown in Fig. 13.
The results show that, also when using a CFRP-boring bar, the process can be further stabilised by a damper system. The amplitude of the drilling torque is about ΔM_B ≈ ±25 Nm, which results in constant mechanical tool loads. This emphasises the potential of the carbon fibre boring bar to increase productivity in the machining of austenitic steels by enabling a stable process.

Conclusion
In experimental tests on deep hole drilling, the influence of tribologically optimised guide pads on the process was analysed. It was shown that the modification of the axial run-in chamfer shape by a micro finishing process can increase the bore quality, independent of coating and lubricant. The results apply in particular to the conventional guide pad coatings TiN and TiAlN. The potential of the ta-C coating is still limited to deep hole drilling using oil. The lower lubricating effect of an emulsion causes a high thermal load on the guide pads, which is particularly challenging with the ta-C coating. Uddin et al. established that TiN coatings provide decreased tool wear while simultaneously lowering surface quality in the machining of adhesive aluminium alloys [39]. In general, drilling AISI 304 causes stronger wear on the guide pads when using emulsion compared to drilling with oil. This inevitably leads to lower bore hole quality as well as higher mechanical tool loads. In comparison to more eco-friendly lubrication techniques like MQL, significantly higher surface qualities were achieved [40]. The studies show that, to achieve high bore quality and tool life, deep drilling of austenitic steel should be carried out under oil (Fig. 14).

The use of a CFRP-boring bar for STS deep hole drilling has great potential to increase the productivity of industrial deep hole drilling processes. The damping characteristic of CFRP leads to a reduction of chatter vibrations during the drilling process. In particular, the drilling of high-alloyed materials typically requires moderate cutting parameters to avoid highly fluctuating mechanical loads on the cutting inserts and guide pads. In experimental tests with CFRP-boring bars, a more constant drilling torque was measured. This makes it possible to apply more productive cutting parameters even without the use of an additional damper system. The results are briefly summarised in Fig. 15. In future research projects, the maximum load on CFRP-boring bars has to be analysed to evaluate the process limits for industrial applications. In order to improve the damping qualities, the proven effects of structural and technological parameters of CFRP material on machining should be investigated [41]. Moreover, the variation of bore diameter and boring length should be analysed in future tests.

Funding Open Access funding enabled and organized by Projekt DEAL. The investigations described in this paper were carried out with the support of the German Research Foundation (DFG) within the special research project 'Entwicklung von tribologisch optimierten Führungsleisten für das Tiefbohren' ('Development of tribologically optimised guide pads for deep hole drilling', Project No. 276395118).

Data availability Not applicable.

Code availability Not applicable.

Declarations
Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
7,158.2
2021-09-13T00:00:00.000
[ "Materials Science" ]
An Exact Formula for Calculating Inverse Radial Lens Distortions
This article presents a new approach to calculating the inverse of radial distortions. The method presented here takes the model of radial distortion, currently expressed as a polynomial, and proposes another polynomial expression for its inverse, where the new coefficients are a function of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, using another polynomial expression that can be deduced from the first one, can be interesting in terms of performance, reuse of existing software, or bridging between different existing software tools that do not consider distortion from the same point of view.

Introduction
Distortion is a physical phenomenon that in certain situations may greatly impact an image's geometry without impairing quality or reducing the information present in the image. Applying the projective pinhole camera model is often not possible without taking into account the distortion caused by the camera lens. This phenomenon can be modelled by a radial distortion, the most prominent component, and a second component with a lesser effect, a decentering distortion, which has both radial and tangential components. Radial distortion is caused by the spherical shape of the lens, whereas tangential distortion is caused by the decentering and non-orthogonality of the lens components with respect to the optical axis [1,2]. It is important to note that radial distortion is highly correlated with focal length [3], even if, in the literature, it is not modelled within the intrinsic parameters of the camera [4]. This is due to the fact that the radial distortion model is not linear, in contrast to the other intrinsic parameters. We can see in Figure 1 the displacement applied to a point caused by both radial and tangential distortion. Decentering distortion was modelled by Conrady in 1919 [5], then remodelled by Brown in 1971 [6], and a radial distortion model was proposed by Brown in 1966 [7]. These distortion models have been adopted by the Photogrammetry as well as the Computer Vision communities for several decades. Most photogrammetric software such as PhotoModeler (EOS) uses these models (see Equations (1) and (2)) to correct observations visible on the images and provide ideal observations. Roughly, radial distortion can be classified into two families, barrel distortion and pincushion distortion. Regarding the k_1 coefficient in Formula (1), barrel distortion corresponds to a negative value of k_1 and pincushion distortion to a positive value of k_1, for an application of the distortion and not a compensation.

Figure 2. In the center, a painting from Piet Mondrian [12] (which is now in the public domain since 1 January 2016); on the left, the painting with a barrel effect; and on the right, the same image with pincushion distortion.

Using these models to compensate the observations is now well known, and much software dealing with images or panoramas offers plugins dedicated to distortion correction (mainly radial distortion only) [13]. However, although we have the equations to compensate the distortion, how to compute the inverse function in order to apply such a distortion is not obvious.
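For reference, the radial part of the Brown model referred to here as Equation (1) is commonly written as

\[
x_u = x_d\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \dots),\qquad
y_u = y_d\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \dots),\qquad
r^2 = x_d^2 + y_d^2,
\]

where (x_d, y_d) are the distorted image coordinates relative to the distortion center and (x_u, y_u) are the corrected (undistorted) ones.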
For example, when an image of a known 3D point is computed using a calibrated camera, the 2D projected point can be easily computed, but we then need to apply the distortion to the image point in order to obtain an accurate projection of the original 3D point. This first application justifies the present work: how can we determine the inverse of a closed-form solution of the distortion model equations? The second reason is the merging of the work of the two communities involved, photogrammetry and computer vision. While they worked separately for years, the situation between the two communities has changed drastically over almost a decade [14]. This is also visible in the form of new commercial or open-source software dealing with photogrammetry or computer vision, for example PhotoScan (from Agisoft), MATLAB with its camera calibration toolbox, or, on the open-source side, the OpenCV toolbox, which provides a solution for multi-view adjustment. These three software packages use the same Equations (1) and (2) to manage distortion, but the mathematical model here is used to apply distortion and not to compensate it. Determining an exact formula to calculate inverse lens distortion, which allows using the same software to apply and compensate distortion with two sets of k_n parameters, can be very useful and is, in fact, the purpose of this work. This paper is organized as follows: in the next section, a derivation of an exact formula for inverse radial distortion is presented. This approach gives a set of k'_1...k'_n coefficients computed from the original polynomial distortion model. In Section 3, several applications of this formula are presented. First of all, an experiment on this inversion formula is done using only the k_1...k_n coefficients. Then, the computed inverse radial lens distortion formula is applied to an image coming from a metric camera (Wild P32) with a 3-micron distortion at the edge of the frame. The next experiment is done on a calibration grid built with black disks. Finally, a discussion on converting the distortion model between several photogrammetric software packages, PhotoModeler (EOS [15]), PhotoScan (Agisoft [16]) and OpenCV [17], is proposed.

Calibration Approach
Radial distortion is mainly considered in the camera calibration process. Since Duane Brown's first publication, a large quantity of work has been performed in the field of camera calibration, opening the way for new methods. Several techniques were proposed, from orthogonal planes and 2D objects with planar patterns up to self-calibration with unknown 3D points. Interesting reviews were published on both the photogrammetry and computer vision sides by Fraser [18], Zhang [19] and more recently by Shortis [20]. When Brown [6] proposed a radial distortion model in 1971, he also proposed a way to calibrate cameras using a set of plumb lines. The idea of using a set of straight wires to compute a distortion model in a camera calibration process remains in use 45 years later in the fields of photogrammetry and computer vision; Hartley in 1993 [21,22], then Faugeras and Devernay [23], and recently Nomura [24], Clauss [25], Tardif [26], and Rosten [27].

Inverse Radial Distortion
After Conrady and Brown, a lot of work was done to deal with removing distortion from images. As the problem is shared by the photogrammetry community as well as computer vision, we can refer to many books and papers on this topic.
These include the famous Manual of Photogrammetry [28]; Atkinson also gives an overview of these problems for the two communities [29]. Nevertheless, the problem of inverse distortion remains somewhat the poor relation among distortion problems. As mentioned by Heikkilä and Silvén [30], "only a few solutions to the back-projection problem can be found in literature, although the problem is evident in many applications", and, in the same paper, "we can notice that there is no analytic solution to the inverse mapping". In the particular case of high distortion, as in wide-angle and fish-eye lenses, some non-polynomial (and invertible) models have been proposed; for example, Basu and Licardie introduced the Fish-Eye Transform (FET) in [31], and Faugeras and Devernay [23] propose another invertible model based on the field of view. A complete description of these models can be found in the review written by Hughes [32] and also in [33]. Regarding the polynomial model, several solutions have been tested to perform inverse radial distortion, and the solutions can be classified into three main classes (even if other approaches can be found, such as the use of a neural network [34]):

• Approximation. Mallon [35], Heikkilä [30,36] and then Wei and Ma [37] proposed inverse approximations of a Taylor expansion including first-order derivatives. According to Mallon and Whelan, "This is sometimes assumed to be the actual model and indeed suffices for small distortion levels." [35]. A global approach, inverse distortion plus image interpolation, is presented in a patent held by Adobe Systems Incorporated [38].
• Iterative. Starting from an initial guess, the distorted position is iteratively refined until convergence [39-41].
• Look-up table. All pixel mappings are precomputed and stored in a look-up table (as, for example, in OpenCV).

All these methods involve restrictions and constraints on accuracy, processing time, or memory usage. Nevertheless, some very good results can be obtained. For example, the iterative approach gives excellent results, although the processing time is drastically increased. The method, given in Peter Abeles' blog [40], is easy to implement; results are shown in Figure 3.

Figure 3. Iterative method applied to a Nikon D700 camera with a 14 mm lens. Along the frame diagonal, the points are first compensated, the normal processing of the radial distortion using Equation (1), and then the distortion is applied with the iterative process [40] and the result compared to the original point. The Y-axis gives the distance between the original point and the computed reverse point.

The iterative solution works by first estimating the radial distortion magnitude at the distorted point and then refining the estimate until it converges. Algorithm 1 shows an implementation of this approach.

Algorithm 1. Iterative algorithm to compute the inverse distortion.
Require: point Pn
Pc = Pn
repeat
  r = ||Pc||
  dr = 1 + k1 r^2 + k2 r^4 + ...
  Pc = Pn / dr
until convergence of Pc
return Pc

Only a few iterations are necessary. The results presented in Figure 3 are in pixels. The camera used here is a Nikon D700 with a 14 mm lens. The calibration was done with PhotoModeler (EOS), and the results are k1 = 1.532 × 10^-4, k2 = -9.656 × 10^-8, k3 = 7.245 × 10^-11. The coefficients are expressed in millimeters. The center of autocollimation and the center of distortion are close to the image center.
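As a concrete illustration, here is a minimal Python sketch of Algorithm 1, using the PhotoModeler coefficients quoted above. The convergence tolerance, the iteration cap, and the function names are our own assumptions, not part of the original method; note also that whether this loop "applies" or "removes" the distortion depends on the convention of the calibration software.

import numpy as np

# PhotoModeler calibration of the Nikon D700 + 14 mm lens quoted above
# (coefficients in millimeters, origin at the center of distortion).
K = (1.532e-4, -9.656e-8, 7.245e-11)

def distort(p, k=K):
    # Apply the polynomial radial model: p * (1 + k1 r^2 + k2 r^4 + k3 r^6).
    r2 = p[0]**2 + p[1]**2
    dr = 1.0 + k[0]*r2 + k[1]*r2**2 + k[2]*r2**3
    return np.asarray(p) * dr

def undistort_iterative(pn, k=K, tol=1e-12, max_iter=50):
    # Algorithm 1: fixed-point refinement of the inverse distortion.
    pn = np.asarray(pn, dtype=float)
    pc = pn.copy()                       # initial guess: the distorted point itself
    for _ in range(max_iter):
        r2 = pc[0]**2 + pc[1]**2
        dr = 1.0 + k[0]*r2 + k[1]*r2**2 + k[2]*r2**3
        pc_new = pn / dr
        if np.max(np.abs(pc_new - pc)) < tol:   # convergence test (assumed tolerance)
            return pc_new
        pc = pc_new
    return pc

p = np.array([10.0, 8.0])                # point in mm in the camera frame
print(distort(undistort_iterative(p)))   # round-trips back to p

At the magnitudes of the coefficients above, the fixed-point map is strongly contracting, which is why only a few iterations are needed in practice.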
Only a few iterations are necessary to compute the inverse distortion. In this case, with a calibration made using PhotoModeler [15], the inverse of the distortion represents the application of the distortion to a point projected from 3D space onto the image. This iterative approach is very interesting when the processing time is not an issue, as, for example, in the generation of a look-up table. Note, however, that a good initial value is needed. Given these existing methods, we now want to obtain a formula for the inverse radial distortion when modelled by a polynomial form as described by Brown. The inverse polynomial form will be an expression in the coefficients k1...k4 of the original distortion.

Lens Distortion Models

We consider the general model of distortion correction or distortion removal, which can be written in the following form by separating radial and tangential/decentering components:

Radial Distortion

In the following we will consider only radial distortion.

General Framework

Given a model of distortion or correction with parameters (k1, k2, k3, ...), our general objective is to find the inverse transformation. A natural assumption is to express the inverse transformation in the same form as the direct transformation, i.e., with parameters (k'1, k'2, k'3, ...). Therefore we want to express each k'i as a function of all the kj.

Radial distortion. Let us assume that there exist two transformations T1 and T2, where r = sqrt(x^2 + y^2), r' = sqrt(x'^2 + y'^2), and P and Q are power series, with a0 = 1, a1 = k1, ..., an = kn (renamed so that k can be used as an index in the calculus of the Appendix; starting the series at n = 0 also simplifies those calculations). We can rescale r' as r' = αr in order to have the same domain of definition for P and Q, so that Q reads as a series with coefficients b'n = bn α^{2n}. In the following, bn is used instead of b'n, but we keep this change of variable in mind. Given the definition of r' and by using transformation T2 in Equation (4), we obtain r = r'|Q(r')|, and similarly with Equation (3): r' = r|P(r)|. P and Q are positive, which allows removing the absolute values. Hence, by injecting the last equation into the first, we get r = r P(r) Q(r P(r)), and at the end:

P(r) Q(r P(r)) = 1.

It is possible to derive a very general relation between the coefficients an and bn, but it is not exactly adapted to real situations, where P is a polynomial of finite order. Therefore we derive a slightly simpler relationship in the case where only a1, ..., a4 are given. It is summarized in the following result:

Proposition 1. Given the sequence a1, ..., a4, the coefficients bn satisfy a recursive relation (Equation (9)), expressed with intermediate coefficients q(n) and p(m, k); the precise formulas are derived in Appendix A.

We derive this expression in Appendix A and show how the coefficients b1, ..., bn can be computed with both symbolic and numeric algorithms in Appendix B. Several remarks can be made about this result:

Remark 1. The problem is symmetric in terms of P and Q, so the relations found for the an can of course be applied in the reverse order.

Remark 2. For any n, the coefficient bn can be computed recursively. In Equation (9), the first summation is obtained from a0, ..., a4 and q(n − 1), ..., q(n − 4), which depend only on the sequence an. Similarly, the second summation involves b0, ..., b(n−1) and the values p(j, 2k), which depend only on the given sequence an and previously computed terms. Therefore the recursive formula for bn can be implemented at any order n. We provide the first four terms (they can also be obtained as in the symbolic sketch below); all formulas up to b9 are summarized in Appendix C.
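The defining identity P(r) Q(r P(r)) = 1 can itself be solved order by order for the bn. The following sketch does this symbolically with sympy; it is not the paper's recursive formula of Proposition 1, only a brute-force resolution of the same identity, and the truncation order N = 4 is our own choice.

import sympy as sp

r = sp.symbols('r')
a = sp.symbols('a1:5')                    # a1..a4 (a0 = 1)
N = 4
b = sp.symbols('b1:5')                    # unknown inverse coefficients b1..b4

P = 1 + sum(a[i] * r**(2 * (i + 1)) for i in range(4))
rp = r * P                                # r' = r * P(r)
Q = 1 + sum(b[i] * rp**(2 * (i + 1)) for i in range(N))

# Expand P(r) * Q(r P(r)) - 1 and cancel each power of r^2 in turn:
poly = sp.Poly(sp.expand(P * Q - 1), r)
sol = {}
for n in range(1, N + 1):
    coeff = poly.coeff_monomial(r**(2 * n)).subs(sol)
    sol[b[n - 1]] = sp.expand(sp.solve(coeff, b[n - 1])[0])

for bn, expr in sol.items():
    print(bn, '=', expr)    # e.g. b1 = -a1, b2 = 3*a1**2 - a2, ...

Each coefficient of r^{2n} is linear in the new unknown bn once b1, ..., b(n−1) have been substituted, which is what makes the order-by-order elimination, and the paper's recursion, possible.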
Results and Experimental Section

In this section we propose three experiments to test the inverse formula for radial distortion, in order to evaluate the relevance of such an approach.

• First, we test the accuracy of the inverse formula by applying the forward/inverse formula recursively within a loop: the inverse of the inverse radial distortion is computed 10,000 times and compared to the original distortion coefficients.
• Then, for a given calibrated camera, we compute the residual after applying and compensating the distortion across the camera frame. A residual curve shows the results of the inverted camera model over the whole frame.
• The last experiment uses the inverse distortion model on an image made with a distortion-free metric camera built with a large eccentricity (a film-based Wild P32 camera). We apply a strong distortion, then compensate it, and finally compare the result to the original image.

Inverse Distortion Loop

In this experiment, since the formula gives an inverse for a radial distortion, we apply it twice and compare the final result with the original one. In a second step, we iterate this process 10,000 times and compare the final result with the original distortion. Table 1 shows the original radial distortion and the computed inverse parameters. The original distortion is obtained by using PhotoModeler to calibrate a Nikon D700 camera with a 14 mm lens from Sigma. The results for this step are presented in Table 2. In the columns 'Delta Loop 1' and 'Delta Loop 10000' we can see that k1 and k2 did not change, and the deltas on k3 and k4 are small with respect to the corresponding coefficients: the error is about a factor 1E-10 smaller than the corresponding coefficient. Note that k4 was not present in the original distortion; as the inverse formula is a function of k1...k4 only, the loop is computed without the coefficients k5...k9, which influences the results, as visible from k4.

Table 2. Radial distortion inverse loop and residuals between the coefficients of the original distortion and after n inversions (n = 2 and n = 10,000).

This first experiment shows the inverse property of the formula, not, of course, the relevance of an inverse distortion model; but it also shows the high stability of the inversion process. However, even if the coefficients k1...k4 are sufficient to compensate the distortion, the coefficients k1...k9 are important for the stability of the inversion. The next two experiments show the relevance of this formula for the inverse radial distortion model.

Inverse Distortion Computation onto a Frame

This second experiment uses a Nikon D700 equipped with a 14 mm lens from Sigma. This camera has a full-frame format, i.e., a 24 mm × 36 mm frame size. The camera was calibrated using PhotoModeler, and the inverse distortion coefficients are presented in Table 1, where Column 1 gives the calibration result for the radial distortion and Column 2 the computed inverse radial distortion. Note that the distortion model provided by the PhotoModeler calibration gives as a result a compensation of the radial distortion, in millimeters, limited to the frame. The way to use these coefficients is to first express a 2D point of the image in the camera reference system, in millimeters, with the origin at the CoD (Center of Distortion), close to the center of the image. Then the polynomial model is applied from this point.
The inverse of this distortion is the application of such a radial distortion to a point theoretically projected onto the frame. In all the following experiments, the residuals are computed as follows: a 2D point p is chosen inside the frame, its coordinates expressed in millimeters in the camera reference system with the origin at the CoD. Then p1 is p compensated by the inverse of the distortion. Finally, p2 is p1 compensated by the original distortion. The residual is the value dist(p, p2). The following results show the 2D distortion residual curve. For a set of points on the segment [O, max X/2], the residuals are computed and presented in Figure 4 on the Y-axis; the X-axis represents the distance from the CoD. These data come from the calibration process and are presented in Table 1. The results shown in Figures 4 and 5 are given in pixels. In Figure 4a, below, we present the residuals using only the coefficients k1..k4 of the inverse distortion. The maximum residual is close to 4 pixels, but residuals are less than one pixel until close to the frame border. This can be useful with non-configurable software where it is not possible to use more than 4 coefficients for radial distortion modeling. In Figure 4, below, we present the residual computed from 0 to the maximum X of the frame using the coefficients k1...k9 for the inverse distortion. The results are very good, less than 0.07 pixel at the frame border along the 0X axis, and the performance is about the same as for compensating the original distortion. In almost all of the image the residuals are close to the ones presented in Figure 5. Nevertheless, we can observe in Figure 4 higher residuals in the corners, where the distance to the CoD is the greatest. Here follows a brief analysis of the residuals: these two experiments show that the results are totally acceptable, even if the residuals are higher in the zone furthest from the CoD, i.e., along the diagonal of the frame. As shown in Table 3, only 2.7% of the frame has residuals > 1 pixel.

Inverse Distortion Computation on an Image Taken with a Metric Camera

This short experiment used an image taken with a Wild P32 metric camera in order to work on an image without distortion. The Wild P32 terrestrial camera is a photogrammetric camera designed for close-range photogrammetry, topography, architectural and other special photography and survey applications. This camera is film-based: the film is pressed onto a glass plate fixed to the camera body, on which 5 fiducial marks are incised. The glass plate prevents any film deformation. The film format is 65 mm × 80 mm and the focal length, fixed, is 64 mm. Designed for architectural survey, the camera has a high eccentricity, and the 5 fiducial marks were used in this paper to compute the CoD. In Figure 6a, four fiducial marks are visible (the fifth is overexposed in the sky). The fiducial marks are organized as follows: one at the principal point (PP), three at 37.5 mm from the PP (left, right, top) and one at 17.5 mm (bottom). This image was taken in 2000 in the remains of the Romanesque Aleyrac Priory, in northern Provence (France) [42]. Its semi-ruinous state gives a clear insight into the constructional details of its fine ashlar masonry, as witnessed by this image taken using a Wild P32 during a photogrammetric survey. As this image did not have any distortion, we used a polynomial distortion coming from another calibration and adapted it to the P32 film format (see Table 4).
The initial values of the coefficients have been conserved, and the distortion polynomial, expressed in millimeters, gives the compensation due at any point of the film format. The important eccentricity of the CoD is used in the image rectification: the CoD is positioned on the central fiducial mark visible on the images in Figure 6a,b.

Figure 6. Distortion-less image taken using a small-format Wild P32 metric camera and application of an artificial distortion. (a) On the left, the original image taken with the Wild P32 metric camera; (b) on the right, pincushion distortion applied to this original image without interpolation. As the images do not have the same pixel size, some vacant pixels are visible as black lines (see Hughes [32]). These lines surround the distortion center, here located on a fiducial mark, strongly shifted from the image center.

After scanning the image (the film was scanned by Kodak, and the resulting file is a 4860 × 3575 pixel image), we first measure the five fiducial marks in pixels on the scanned image and then compute an affine transformation to pass from the scanned image in pixels to the camera reference system in millimeters, where the central cross is located at (0.0, 0.0). This is done according to a camera calibration provided by the vendor, which gives the coordinates of each fiducial mark in millimeters in the camera reference system. This operation is called internal orientation in photogrammetry, and it is essential when using scanned images coming from film-based cameras. The results of these measurements are shown in Table 5, which highlights the high eccentricity of this camera built for architectural survey. In Figure 6a we can see the original image taken in Aleyrac, while in Figure 6b we can see the result of the radial distortion inversion. Figure 7 shows the original image in grey and, in green, the image computed after a double inversion of the radial distortion model. We can observe no visible difference in the image. This is consistent with the previous results of the second experiment, see Figure 4.

Table 4. Radial distortion compensation and then application of the inverse, used with the image taken with the P32 camera.

Conclusions and Discussion

The experiments presented in this article show the relevance of the proposed methodology and the reliability of the result. However, a significant difference exists depending on whether the set of coefficients k1...k4 or k1...k9 is used. For large distortions, the number of parameters should be significant; see Figures 4 and 5 for the influence of the number of coefficients. We can note that, since the formulation by Brown, the number of coefficients used to characterize the distortion has increased. In 2015 the Agisoft company added k4 to their radial distortion model, while at the same time many programs still use only k1, k2, and in 2016 they added p3 and p4 to the tangential distortion model. Even when k1...k4 are sufficient for compensating the radial distortion, it is nevertheless necessary to increase the degree of the polynomial to correctly compute the inverse.

A Bridge between PhotoModeler and Agisoft for Radial Distortion

One application of such a formula, computing the inverse distortion coefficients as functions of k1...k4, is to convert distortion models between two software programs that use the distortion model in opposite directions, as for example PhotoModeler and PhotoScan from Agisoft.
Indeed, PhotoModeler uses the Brown distortion model to compensate observations made on images and so to obtain theoretical observations without the distortion effect. On the contrary, PhotoScan from Agisoft uses a similar model, but it adds the distortion to a point projected onto the image. To convert a distortion model from PhotoModeler to PhotoScan, or vice versa, we need to compute the inverse distortion model.

Acknowledgments: The authors thank … for his involvement in the iterative inverse distortion method, and Jean-Philip Royer for his implementation of the inverse distortion in Python in the PhotoScan software, in order to import and export camera distortion toward other photogrammetric software.

Author Contributions: Pierre Drap designed the research, implemented the inverse distortion method and analyzed the results. Julien Lefèvre proved the mathematical part. Both authors wrote the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

In Equation (8), for fixed k, P(r)^k can be rewritten as a product:

P(r)^k = (∑_{n1=0}^{4} a_{n1} r^{2n1}) ··· (∑_{nk=0}^{4} a_{nk} r^{2nk}),

which gives the more compact expression

P(r)^k = ∑_{m=0}^{4k} r^{2m} ∑_{n1+...+nk=m, 0≤ni≤4} a_{n1} ··· a_{nk} = ∑_{m=0}^{4k} r^{2m} p(m, k).

Note that p(m, k) = 0 as soon as m > 4k. Let us call t(k) the coefficient in the resulting sum; t(k) will turn out to be q(k), defined in Proposition 1. Last, we can express the initial equality P(r) Q(r P(r)) = 1 to obtain:

∑_{k=0}^{l} a_k t(l − k) = 0 for l ≥ 1.

We decompose this sum, and since a0 = 1 we conclude that t(l) satisfies the same recurrence relationship as q in Proposition 1. The two quantities are therefore equal, since they have the same initial value. We also have an alternative expression for q, given by the definition of t depending on bn and p(m, 2n). By remarking that p(0, 2k) = 1, we obtain Proposition 1.

Appendix B

There are two ways of implementing the result of Proposition 1. One can be interested in having a symbolic representation of the coefficients bn with respect to a1, a2, a3, a4; but one can also simply obtain numeric values for bn given numeric values for the an. The main ingredient in both cases is to compute the coefficients p(j, k) efficiently. One can start from the trivial observation that if n1 + ... + nk = j, then n1 + ... + n(k−1) = j − nk. But nk can take only 5 values, between 0 and 4 (there are no other coefficients an in our case). Therefore one can easily derive the recursive identity:

p(j, k) = p(j, k − 1) + a1 p(j − 1, k − 1) + a2 p(j − 2, k − 1) + a3 p(j − 3, k − 1) + a4 p(j − 4, k − 1).

To compute bn we need the coefficients p(j, 2k) with the constraints j + k = n, 0 ≤ k and 1 ≤ j ≤ 8k. A table of size (n − 1) × 2(n − 1) is defined, and its coefficients are progressively filled by varying the index k from 1 to 2(n − 1), thanks to dynamic programming. This step is summarized in Algorithm B1.

Algorithm B1. Computation of p(j, k).
Require: coefficients a1, a2, a3, a4, integer N
Define an array p of size N × (2N − 1)
for k = 0 : 2(N − 1) do
  p(0, k) = 1
end for
for k = 1 : 2(N − 1) do
  for j = 1 : 2(N − 1) do
    p(j, k) = p(j, k − 1) + a1 p(j − 1, k − 1) + a2 p(j − 2, k − 1) + a3 p(j − 3, k − 1) + a4 p(j − 4, k − 1)
    (with p(j, k) = 0 as soon as j < 0 or k ≤ 0)
  end for
end for
return p

The trickiest aspect is to manipulate formal terms in a1, a2, a3, a4. For that, it is useful to remark that the coefficients bn are made of terms a1^{n1} a2^{n2} a3^{n3} a4^{n4} such that n1 + 2n2 + 3n3 + 4n4 = n.
We therefore have an a priori bound on each exponent: n, n/2, n/3 and n/4, respectively. Given that, each multinomial term can be represented as a coefficient in a 4D array of bounded size. It is also very convenient to use a sparse representation for it, due to the many vanishing terms. Additions of terms are simply additions of 4D arrays of size bounded by n^4. Multiplication requires shifting operations on the dimensions of the array: basically, multiplying by a1^{n1} corresponds to a translation by n1 along the first dimension.
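As a companion to Algorithm B1, here is a minimal numeric sketch in Python. The table bounds and the handling of the p(j, 0) boundary are our own reading of the pseudocode above, so treat this as a sketch rather than the authors' reference implementation.

import numpy as np

def p_table(a, N):
    # Numeric variant of Algorithm B1: table of p(j, k) for coefficients
    # a = (a1, a2, a3, a4), with p(0, k) = 1 and p(j, k) = 0 for j < 0 or k <= 0.
    J = 2 * (N - 1)                      # extent needed for b_1 ... b_N
    p = np.zeros((J + 1, J + 1))
    p[0, :] = 1.0                        # p(0, k) = 1 for all k (and p(0, 0) = 1)
    for k in range(1, J + 1):
        for j in range(1, J + 1):
            # p(j, k) = sum over n = 0..4 of a_n * p(j - n, k - 1), with a_0 = 1
            s = p[j, k - 1]
            for n, an in enumerate(a, start=1):
                if j - n >= 0:
                    s += an * p[j - n, k - 1]
            p[j, k] = s
    return p

# Example with the PhotoModeler coefficients quoted earlier (k4 = 0 assumed):
print(p_table((1.532e-4, -9.656e-8, 7.245e-11, 0.0), N=5)[:3, :3])

The double loop fills the table column by column, exactly the dynamic-programming order described above, so each p(j, k) only ever reads entries from the previous column.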
Kinetics of the permanent deactivation of the boron-oxygen complex in crystalline silicon as a function of illumination intensity

Based on contactless carrier lifetime measurements performed on p-type boron-doped Czochralski-grown silicon (Cz-Si) wafers, we examine the rate constant R_de of the permanent deactivation process of the boron-oxygen-related defect center as a function of the illumination intensity I at 170 °C. While at low illumination intensities a linear increase of R_de with I is measured, at high illumination intensities R_de seems to saturate. We are able to explain the saturation by assuming that R_de increases proportionally with the excess carrier concentration Δn, and by taking into account the fact that at sufficiently high illumination intensities the carrier lifetime decreases with increasing Δn, so that the slope of Δn(I) decreases, leading to an apparent saturation. Importantly, on low-lifetime Cz-Si samples no saturation of the deactivation rate constant is observed for the same illumination intensities, proving that the deactivation is stimulated by the presence of excess electrons and not directly by the photons.

Since the 1970s it has been known that the power output of solar cells fabricated on boron-doped Czochralski-grown silicon (Cz-Si) degrades under prolonged illumination at room temperature [1]. This reduction in power output was related to a degradation of the carrier lifetime in the silicon bulk [1]. In the past two decades, this light-induced degradation (LID) effect has been extensively investigated, and it was shown that the simultaneous presence of boron and oxygen in the silicon material is a necessary prerequisite for the LID [2,3]. Hence, defect models were developed which attribute the lifetime degradation to the activation of a boron-oxygen-related defect (BO defect) [4]. Furthermore, in 2006 it was discovered that the lifetime degradation, and hence also the degradation in solar cell efficiency, can be permanently reversed by illumination at elevated temperature [5], a process referred to as 'permanent deactivation' of the BO defect. While the impact of temperature on the deactivation rate constant R_de follows an Arrhenius law [6,7], only very limited data has been published concerning the dependence of R_de on the illumination intensity during the process of permanent deactivation. Measurements by Herguth et al. [8] suggested a linear increase up to illumination intensities of 1 sun (100 mW/cm²) at the solar cell level.
In this contribution, we analyze the impact of the illumination intensity I on R_de over a broad intensity range from 0.5 to 3 suns. Our experiments show that the R_de(I) dependence does indeed follow a linear dependence at low illumination intensities, whereas at high intensities above ∼2 suns an apparent saturation of R_de is observed. We discuss our experimental findings based on the assumption that R_de is proportional to the excess carrier concentration Δn independent of the illumination intensity, which fully explains the observed R_de(I) behavior.

We use p-type boron-doped Cz-Si wafers with resistivities of 1.0 Ωcm and 2.2 Ωcm, respectively. All wafers receive a damage etch step and an RCA cleaning [9] of the surfaces, and a subsequent phosphorus diffusion in a quartz-tube furnace using POCl3 at 850 °C, resulting in n+-regions on both surfaces with a nominal sheet resistance of 100 Ω/sq. We then split the wafers into two groups of samples: for the samples of group I, the phosphorus silicate glass (PSG) as well as the n+-regions are removed. The remaining sample thickness is then about 144 µm for the 1.0-Ωcm material and 133 µm for the 2.2-Ωcm material. The samples of group II retain the phosphorus-diffused n+-regions through the complete subsequent process sequence and the measurements; only the PSG is removed. These samples have thicknesses of about 148 and 136 µm, respectively.

Subsequently, all samples pass through an RCA cleaning before receiving a 10-nm thick Al2O3 layer deposited on both wafer surfaces using plasma-assisted atomic layer deposition (plasma-ALD) [10], plus a 70-nm thick SiNx capping layer on top (refractive index n = 2.05 at λ = 632 nm) deposited via plasma-enhanced chemical vapor deposition (PECVD). The passivation of the surfaces by the Al2O3/SiNx stack reduces the surface recombination velocity (SRV) to well below 10 cm/s [11]. All samples then undergo a rapid thermal annealing (RTA) step in order to accelerate the permanent deactivation using fast cooling ramps [12]. We perform the RTA step in a commercially available belt firing furnace (DO-FF-8.600-300, Centrotherm). As previously shown [12,13], the RTA process has a crucial impact on the kinetics of the permanent deactivation. The highest deactivation rate constant R_de after RTA treatment is obtained for a fast belt speed of v_belt = 7.2 m/min and a set-peak temperature of ϑ_peak = 850 °C to 870 °C [12]. However, in order to enable us to determine the deactivation rate constant also at high illumination intensities, we have chosen a slightly slower belt speed of v_belt = 6.0 m/min in this study, which results in slightly lower deactivation rate constants and facilitates the experimental extraction of the deactivation rates. After the RTA treatment, the initially 15.6 × 15.6 cm² wafers are laser-diced into samples of 5 × 5 cm² size.

We examine the deactivation process starting from the state after annealing at 200 °C in darkness, leading to a non-permanent deactivation of the BO complex and a corresponding lifetime τ0 [3,14]. On the other hand, we denote by τ0p the lifetime after complete permanent deactivation. Note that the actual value of τ0p depends strongly on the details of the thermal history of the sample and in particular on the previous RTA treatment [12].
We vary the light intensity of the halogen lamp used for the deactivation process between 0.5 and 3 suns by changing the lamp-sample distance. The measurement uncertainty of the light intensity is within 2%. During deactivation the samples are placed on a hotplate at an effective sample temperature of (169.5 ± 3.5) °C for all illumination intensities. We monitor the lifetime τ(t) during the permanent deactivation process by removing the samples at defined intervals from the hotplate and measuring the lifetime using the photoconductance decay (PCD) technique [15] or the quasi-steady-state photoconductance (QSSPC) method [16], both at ∼31 °C (Sinton Instruments Lifetime Tester WCT-120). Unless otherwise stated, we report τ at a fixed injection level of Δn/p0 = 0.1, with Δn being the excess carrier concentration and p0 the hole concentration in darkness, which in our samples equals the boron doping concentration N_dop. From the measured lifetimes we extract the effective defect concentration N_t* with N_t* = 1/τ(t) − 1/τ0. The rate constant R_de of the permanent deactivation process is then determined by fitting the evolution of N_t*(t) using a single-exponential decay function of the form N_t* = A × exp(−R_de × t) + B.

Figure 1 shows the measured R_de values as a function of the illumination intensity I during the permanent deactivation treatment performed at 170 °C. The rate constant R_de is determined in a light intensity range between 0.5 and 3 suns for Cz-Si materials with two different resistivities, 1.0 and 2.2 Ωcm, respectively. We observe an approximately proportional increase of R_de(I) for light intensities between 0.5 and 2 suns for both materials. The rate constant R_de of the 1.0-Ωcm material increases from a value of ∼37 h⁻¹ at 0.5 suns to (110-130) h⁻¹ at 2 suns. However, surprisingly, at illumination intensities above 2 suns we do not observe any significant further increase of the rate constant, and R_de virtually saturates at (120 ± 20) h⁻¹. On the 2.2-Ωcm material, we measure a saturation value comparable to that of the 1.0-Ωcm material for intensities above 2 suns. Note that the extracted rate constants show a significant scatter, which could be related to background effects of hitherto unknown nature; however, our data unambiguously show a weaker-than-linear dependence of R_de(I) at high intensities (> 2 suns).

For the low-lifetime samples of Fig. 2, in contrast to the dependence shown in Fig. 1, we find a linear increase of R_de with increasing light intensity I during deactivation over the entire measured intensity range. Importantly, we do not observe any saturation of R_de up to an illumination intensity of 3 suns. The black line represents a linear fit of R_de(I). The experimental results shown in Figs. 1 and 2, the observed saturation of the rate constant as well as the absence of any saturation in low-lifetime samples, clearly support the assumption of an electronically stimulated deactivation process, in which the photons are not directly involved.
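As an illustration of the R_de extraction described above, the following Python sketch fits the single-exponential model N_t*(t) = A exp(−R_de t) + B to lifetime data. The time points and lifetime values are invented placeholders, not measured values from this study.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical illustration data: times (h) and lifetimes tau(t) during the
# deactivation treatment; tau0 is the lifetime in the annealed state.
t = np.array([0.0, 0.01, 0.02, 0.04, 0.08, 0.16])                 # assumed times
tau = np.array([150e-6, 210e-6, 300e-6, 450e-6, 700e-6, 900e-6])  # assumed, in s
tau0 = 1000e-6

Nt = 1.0 / tau - 1.0 / tau0          # effective defect concentration N_t*(t)

def decay(t, A, Rde, B):
    return A * np.exp(-Rde * t) + B  # single-exponential model from the text

(A, Rde, B), _ = curve_fit(decay, t, Nt, p0=(Nt[0], 30.0, 0.0))
print(f"R_de = {Rde:.1f} 1/h")

The offset B absorbs any residual defect concentration left after complete permanent deactivation, which is why the fit does not force N_t* to decay to zero.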
In a recent theoretical work by Voronkov and Falster [18], it was assumed as most likely that R_de is proportional to the electron concentration Δn during the permanent deactivation of the BO defect. Due to the increased temperature of the deactivation process, it was furthermore conjectured that the bulk lifetime is no longer limited by the BO defect, but by some background defects, and the lifetime might hence be constant during the deactivation treatment. The latter assumption was in particular needed to explain the strictly exponential decrease of N_t*(t) during permanent deactivation [18], which is generally observed experimentally, also in this study, and which is otherwise not consistent with a strict proportionality of R_de to Δn. To verify in particular the latter assumption experimentally, we have performed in-situ lifetime measurements during the permanent deactivation process using the dynamic infrared lifetime mapping (dynamic-ILM) technique [17]. During the measurement, the sample is placed on a hotplate at a set temperature of 140 °C and illuminated by an LED array of 950 nm wavelength. Figure 3 shows the measured lifetime evolution τ(t) during permanent deactivation at 140 °C and an illumination intensity corresponding to 0.92 suns, clearly showing that the lifetime is not constant at the set deactivation temperature of 140 °C. Nevertheless, we find an exponential decrease of N_t*(t) at 140 °C deactivation temperature. A further development of the BO deactivation theory seems to be necessary to explain this experimental finding. However, our experimental results shown in Figs. 1 and 2 clearly prove that the permanent deactivation must be related to the injected carriers during deactivation. We hence choose a pragmatic approach here and replot our measured R_de data as a function of the excess carrier concentration Δn. Based on the aforementioned, the question arises which Δn value would be the most meaningful in this plot. We follow the pragmatic approach of Wilking et al. [19] and determine an average excess carrier concentration Δn_av = ½ (Δn_min + Δn_de), where Δn_min is extracted right before the deactivation starts, i.e., in the minimum-lifetime state, and Δn_de is measured in the fully deactivated state, where the lifetime and hence Δn is maximal. Note that the excess carrier concentrations were extracted not at the deactivation temperature, but at the temperature of the lifetime measurements, which were performed at ∼31 °C, because the Δn values during the deactivation process were not experimentally accessible to us. However, due to the similar lifetime trend at 140 °C and 31 °C, we assume here that the lifetime at the deactivation temperature of 170 °C behaves in a similar way; this, however, needs to be confirmed in future experiments.

FIG. 3. Lifetime τ of a sample of group I (i.e., n+-regions removed) measured during permanent deactivation at ϑ_de,set = 140 °C. The lifetime measurement was performed using the dynamic infrared lifetime mapping (dynamic-ILM) technique [17], where the sample was placed on a hotplate and illuminated by an LED array of 950 nm wavelength at an intensity of I = 0.92 suns. Each measurement took 20 s.

Figure 4 shows the measured deactivation rate constant R_de as a function of Δn_av. The data in Fig. 4(a), recorded on 1.0 Ωcm, and in Fig.
4(b), recorded on 2.2 Ωcm p-type Cz-Si samples, can be fitted using proportionality functions (black solid lines). The deviation from this linear behavior may be due to inaccuracies in determining the lifetime in the minimum during the deactivation process, since we determine the first lifetime data point after 10-20 s of illumination at 170 °C, and the actual minimum may be reached even faster than within this time span.

With the observed proportional dependence of the deactivation rate constant R_de on the excess carrier concentration averaged between the lifetime minimum during the deactivation process and the permanently recovered state, the apparent saturation behavior observed in Fig. 1 can now be understood as follows: at relatively low illumination intensities, during the deactivation process, the excess carrier concentration Δn increases in proportion with the illumination intensity I. At higher intensities, however, the lifetime starts to decrease with increasing injection level, e.g., by recombination via a shallow Shockley-Read-Hall center or by Auger recombination, and in this injection range Δn increases only marginally with increasing illumination intensity. Hence, assuming that R_de increases proportionally with Δn_av explains the apparent saturation behavior observed in Fig. 1. Note that this conclusion also holds if we plot R_de as a function of Δn_min or as a function of Δn_de, showing that, despite the fact that we cannot explain the exponential decay of N_t*(t) during the permanent deactivation process at this point in time, we are nevertheless able to conclude that R_de ∼ Δn seems to be a generally valid law of the BO deactivation process.

In this letter, we have shown that the deactivation rate constant R_de depends linearly on the excess carrier concentration Δn during the deactivation of the BO-related defect center. As a consequence, R_de increases linearly with the illumination intensity at low intensities, but shows an apparent saturation at high illumination intensities, when Δn becomes a weaker-than-linear function of the illumination intensity. This results in an apparent saturation of R_de when plotted versus the illumination intensity, which we varied between 0.5 and 3 suns in this study. On our investigated 1.0 and 2.2 Ωcm high-lifetime Cz-Si samples, we have observed the apparent saturation for intensities above ∼2 suns, resulting in a maximum R_de of (120 ± 20) h⁻¹. Note that the actual lifetime of the sample under investigation determines the intensity above which R_de saturates, and that in low-lifetime samples much higher intensities are required to reach the saturation of R_de than in high-lifetime samples. The latter is a direct consequence of the R_de ∼ Δn dependence and also clearly shows that the deactivation is an electronically stimulated process in which photons are not directly involved.
FIG. 2. Rate constants R_de as a function of light intensity for samples with significantly lower lifetime after deactivation (τ0p < 350 µs) than the ones shown in Fig. 1 (τ0p > 1 ms). The applied illumination intensity ranges between 1 and 2.5 suns for the 2.2-Ωcm material and between 1 and 3 suns in the case of the 1.0-Ωcm material. Filled circles show the measurements on the 1.0-Ωcm material, open circles those on the 2.2-Ωcm material. The error bars give the uncertainty of the mono-exponential fit to N_t*(t). The deactivation rate constant R_de increases from (42.5 ± 2.5) h⁻¹ at 1 sun to (140.8 ± 9.4) h⁻¹ at 3 suns in the case of the 1.0-Ωcm samples. The deactivation rate constants of the 2.2-Ωcm samples are comparable, with (38.5 ± 6.8) h⁻¹ at 1 sun to (104.0 ± 3.2) h⁻¹ at 2.5 suns.

FIG. 4. Deactivation rate constant R_de as a function of the excess carrier concentration Δn_av for (a) 1.0 Ωcm and (b) 2.2 Ωcm p-type Cz-Si samples. Cz-Si samples of group I (i.e., n+-regions removed) with high lifetimes (> 1 ms) are shown as filled circles; low-lifetime (< 350 µs) samples, also of group I, as open triangles. Open squares show data measured on Cz-Si samples with the n+-regions still present (group II) and hence reduced lifetimes. Error bars show the uncertainty according to the exponential fit. The black line is a proportional fit of all data points.
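The "proportional fit of all data points" in FIG. 4 is a one-parameter least-squares problem with a closed-form solution. The sketch below illustrates it with invented placeholder values; the actual (Δn_av, R_de) pairs are only shown in the figure.

import numpy as np

# Hypothetical (Delta_n_av, R_de) pairs mimicking Fig. 4; units: cm^-3 and 1/h.
dn_av = np.array([2e14, 5e14, 1e15, 2e15, 3e15])       # assumed values
Rde   = np.array([9.0, 22.0, 41.0, 85.0, 121.0])       # assumed values

# Least-squares fit of the proportionality R_de = c * Delta_n_av (no intercept):
c = np.dot(dn_av, Rde) / np.dot(dn_av, dn_av)
print(f"c = {c:.3e} h^-1 cm^3")

Forcing the line through the origin is what distinguishes a proportional fit from an ordinary linear fit, and is exactly the R_de ∼ Δn law the data are being tested against.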
Generalized squared remainder minimization method for solving multi-term fractional differential equations

Mir Sajjad Hashemi (Department of Mathematics, Basic Science Faculty, University of Bonab, P.O. Box 55517-61167, Bonab, Iran), Mustafa Inc (Department of Mathematics, Firat University, 23119 Elazig, Turkey; Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan), Somayeh Hajikhah (University of Bonab)

Introduction

Fractional integration and differentiation are generalizations of integer-order calculus to noninteger orders. It has been demonstrated in the literature that fractional calculus can play a justifiable and beneficial role in the modeling of various phenomena in science and engineering [2,3,6,25,29,30,35-37,42]. The classical differential operators are local operators, whereas the fractional differential operators are nonlocal. This significant property makes the study of fractional differential equations an active area of research. There are several definitions of fractional derivatives involving kernels of special functions, such as the Mittag-Leffler function, the Prabhakar function, and so on. Some important ones are the Riemann-Liouville and Caputo fractional derivatives [14,41] and the Caputo-Fabrizio fractional derivative [8,13]; furthermore, Atangana and Baleanu suggested another version of fractional-order derivative, which uses the generalized Mittag-Leffler function, with strong memory, as a nonlocal and nonsingular kernel [5].

Multi-point fractional differential equations appear in different types of visco-elastic damping [40]. The multi-point boundary conditions in fractional differential equations can be understood as controllers at the end points that dissipate or add energy according to sensors located at intermediate points. Model equations proposed so far are almost always linear. Therefore, we concentrate on the following multi-term fractional differential equation, where γj, j = 1, ..., k+1, are real constant coefficients and 0 < β1 < β2 < ··· < βk < α. Here, we consider this equation with the most usual fractional derivative, in the Caputo sense. Equation (1) permits us to describe the model more accurately than the classical integer-order equation. The nonlocality of the Caputo fractional derivative means that the next state of a system depends not only upon its current state but also upon all of its historical states.

Bhrawy et al. [11] applied a spectral algorithm based on the generalized Laguerre tau (GLT) method with generalized Laguerre-Gauss (GQ) and generalized Laguerre-Gauss-Radau (GRQ) quadrature to Eq. (1). The shifted Chebyshev spectral tau (SCT) method, based on the integrals of shifted Chebyshev polynomials, has been utilized to construct approximate solutions of such problems [12]. A spectral tau method combined with shifted Chebyshev polynomials was proposed to solve Eq. (1) in [15]. Three alternative decomposition approaches were introduced for the approximate solution of Eq. (1) by Ford et al. [16]. More discussion of the approximate solution of Eq. (1) can be found in [27,34,38].

The paper is organized as follows: in Section 2, we briefly discuss some necessary definitions and mathematical preliminaries of fractional calculus, which will be needed in the forthcoming sections. The third section deals with the generalization of the squared remainder minimization (GSRM) method to multi-term fractional differential equations.
Section 4 is devoted to the study of the convergence analysis of the proposed method. Some illustrative examples are investigated in Section 5. Concluding remarks form the final section.

Preliminaries and notations

In this section, we review some preliminaries and properties of the well-known fractional derivatives and integrals, in order to provide the necessary fractional calculus background. Among the various definitions of fractional derivatives, e.g., [13,19], Atangana-Baleanu [7,17,18] and the conformable fractional derivative [20,21,26], we introduce the two most commonly used definitions, namely the Riemann-Liouville and Caputo derivatives [9,14,33]. Various analytical and numerical methods have been utilized to treat fractional differential equations, e.g., [1,10,22-24,31,32,39].

Definition 1. Let α ∈ R+. The operator J^α is called the Riemann-Liouville fractional integral operator.

Now, we discuss the interchange of the Riemann-Liouville fractional integration and the limit operation, which is useful in the convergence analysis of the method discussed below. The operator D^α is called the Riemann-Liouville fractional differential operator, where ⌈α⌉ denotes the smallest integer greater than or equal to α. The operator D_*^α is called the Caputo fractional differential operator, for which the usual composition property with J^α holds.

The GSRM method

For problem (1), we specify the linear operator O. Then there exist unknown constants c0, c1, ..., cn such that the approximate solution u_n(x) can be written in the form (3). Obviously, the approximate solution u_n(x) needs to satisfy the conditions (4). Substituting (3) into Eq. (2) yields Eq. (5), from which the nth-order remainder terms can be given.

Remark 2. If the linearly independent functions are chosen as in (6), then Eq. (5) takes the corresponding polynomial form.

The important point to note here is the minimization problem in the GSRM method, constrained by the conditions (4). That is, the problem is to minimize (O u_n)(x) in such a way that the conditions (4) hold. To do this, we introduce the real functions I_0(C), ..., I_{⌈α⌉−1}(C) encoding these constraints. Therefore, in order to find the unknown vector C = (c0, c1, ..., cn), we have to solve the minimization problem (7).

Remark 3. If we set the polynomials (6) as basis functions, then the constraints in the minimization problem (7) take an explicit algebraic form.

We use the Lagrange-multiplier method [28] to minimize problem (7). By this method, we solve the following system of algebraic equations, where Λ^T = (λ0, λ1, ..., λ_{⌈α⌉−1}) and I^T(C) = (I_0(C), I_1(C), ..., I_{⌈α⌉−1}(C)). With simplified notation, the linear system of equations (8) can then be written in the abstract form (9), with coefficient matrix

[ 2⟨ω0, ω0⟩  2⟨ω0, ω1⟩  ···  2⟨ω0, ωn⟩ ]
[ 2⟨ω1, ω0⟩  2⟨ω1, ω1⟩  ···  2⟨ω1, ωn⟩ ]
[     ⋮           ⋮        ⋱       ⋮     ]

Convergence analysis

This section discusses the convergence of the GSRM method for multi-term fractional differential equations of the form (1).

Theorem 1. Suppose that u(x) is the exact solution of the fractional differential equation (1) defined on [x0, xf], and u_n(x) is the corresponding approximate solution of the problem given by the GSRM method. If there exist polynomials p_n(x) ∈ P_n[x0, xf] with p_n(x) → u(x), then u_n converges to u.

Proof. One can easily conclude that for any n ∈ N the GSRM minimizer performs at least as well as p_n. Moreover, from the continuity of norms and p_n(x) → u(x), the desired convergence follows from the preceding equations.

Numerical results

In this section, we present the results of the GSRM method on five test problems. We perform our computations using Maple 18 software with 30 digits.

Example 1.
Consider the equation, with initial condition u(0) = 0, whose exact solution is given by u(x) = x^4 − x^3/2. Figure 1 shows the absolute errors for various α obtained by the GSRM method with the polynomial basis (6) and n = 4. In Table 1, the absolute errors obtained by the present method are compared with the GLT method [11] with N = 50 and the SCT method [12] with N = 64. From this table it may be concluded that the GSRM method is more accurate than the mentioned approaches. The CPU time used in this example is 0.842 s, and the condition number of the coefficient matrix in (9) for α = 0.01 is 2.61.

Example 2. Let us consider the following fractional equation.

Example 3. Consider the Bagley-Torvik equation. By solving the resultant system, we get c0 = c1 = 0 and c2 = 1; therefore, we obtain the exact solution for this example by using the GSRM method. The best maximum absolute errors by GLT(GQ) and GLT(GRQ) are reported in [11]. With initial conditions u(0) = u'(0) = 0, the exact solution of this equation is given by u(x) = x^3; numerical results will not be presented, since the exact solution is achieved by choosing n = 3.

Regarding Example 4, the best result in [16] is attained with 512 steps, and the maximum absolute errors are 6.93E-05, 1.18E-04 and 3.10E-06 by using method 1, method 2 and method 3, respectively. The results obtained by the GLT(GQ) and GLT(GRQ) methods [11] with N = 64 are 1.43E-05 and 1.80E-05, respectively. Moreover, in [38], the absolute error 1.86E-09 is reported for the HWCM method, and 3.39E-13 is reported for the SCT method in [15]. In [27], the maximum absolute error by the Haar wavelet operational matrix method is 1.12E-02, and the error reported in [38] is 2.91E-03, w.r.t. N = 64, whereas by the GSRM method we obtain 4.99E-28. The CPU time used in this example is 0.811 s, and the condition number of the coefficient matrix in (9) is 849.54.

Conclusion

In the present paper, the squared remainder minimization method is extended to multi-term fractional differential equations. A minimization problem is formulated and treated by the Lagrange-multiplier method. Convergence of the GSRM method is theoretically proved. Five test problems are investigated; for some of the given examples, exact solutions are recovered by the present method. The accuracy and reliability of the GSRM method are revealed by the reported figures and tables.
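To make the GSRM recipe concrete, here is a minimal numerical sketch in Python for a hypothetical one-term test problem (not one of the paper's five examples, whose equations are not reproduced above): the Caputo equation D^{1/2} u(x) + u(x) = f(x) on [0, 1] with u(0) = 0 and exact solution u(x) = x^2. The grid-based approximation of the squared-remainder integral, the monomial basis, and the degree n = 4 are our own choices; the paper itself works with exact inner products in Maple.

import numpy as np
from math import gamma

alpha = 0.5
n = 4                                    # degree of the polynomial trial solution
xs = np.linspace(1e-6, 1.0, 200)         # grid approximating the remainder integral

def caputo_monomial(i, x, a=alpha):
    # Caputo derivative of x^i for 0 < a < 1: zero for constants, otherwise
    # Gamma(i+1)/Gamma(i+1-a) * x^(i-a).
    if i == 0:
        return np.zeros_like(x)
    return gamma(i + 1) / gamma(i + 1 - a) * x**(i - a)

# Hypothetical test problem: D^alpha u + u = f, u(0) = 0, exact u(x) = x^2.
f = gamma(3) / gamma(3 - alpha) * xs**(2 - alpha) + xs**2

# Operator O applied to each monomial basis function omega_i(x) = x^i.
M = np.column_stack([caputo_monomial(i, xs) + xs**i for i in range(n + 1)])

# Minimize ||M C - f||^2 subject to the initial condition c_0 = u(0) = 0 via
# the Lagrange-multiplier (KKT) system; the 2 M^T M block mirrors the
# 2<omega_i, omega_j> entries of the coefficient matrix in (9).
G = np.zeros((1, n + 1)); G[0, 0] = 1.0
KKT = np.block([[2.0 * M.T @ M, G.T], [G, np.zeros((1, 1))]])
rhs = np.concatenate([2.0 * M.T @ f, [0.0]])
C = np.linalg.solve(KKT, rhs)[:n + 1]
print(np.round(C, 6))                    # expect approximately [0, 0, 1, 0, 0]

Because the exact solution lies in the trial space and satisfies the constraint, the minimizer reproduces it up to round-off, which mirrors how the GSRM method recovers exact solutions in some of the paper's examples.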