A global view on star formation: The GLOSTAR Galactic plane survey

Context. While over 1000 supernova remnants (SNRs) are estimated to exist in the Milky Way, fewer than 400 have been found to date. In the context of this apparent deficiency, more than 150 SNR candidates were recently identified in the D-configuration Very Large Array (VLA-D) continuum images of the 4-8 GHz global view on star formation (GLOSTAR) survey, in the Galactic longitude range −2° < l < 60°.
Aims. We attempt to find evidence of nonthermal synchrotron emission from 35 SNR candidates in the Galactic longitude range 28° < l < 36°, and also to study the radio continuum emission from the previously confirmed SNRs in this region.
Methods. Using the short-spacing corrected GLOSTAR VLA-D+Effelsberg images, we measure the ∼6 GHz total and linearly polarized flux densities of the SNR candidates and the previously confirmed SNRs. We also attempt to determine their spectral indices by measuring flux densities from complementary Galactic plane surveys and from temperature-temperature plots of the GLOSTAR-Effelsberg images.
Results. We provide evidence of nonthermal emission from four candidates that have spectral indices and polarization consistent with a SNR origin, and, considering their morphology, we are confident that three of these (G28.36+0.21, G28.78-0.44, and G29.38+0.10) are indeed SNRs. However, about 25% of the candidates (8 out of 35) have spectral index measurements that indicate thermal emission, and the rest are still too faint for a good constraint on the spectral index.
Conclusions. Additional observations at longer wavelengths and higher sensitivities will shed more light on the nature of these candidates. A simple Monte Carlo simulation reiterates the view that future studies must persist with the current strategy of searching for SNRs with small angular size to solve the problem of the Milky Way's missing SNRs.

Introduction

The structure formed from the expelled material and the shockwave of a supernova explosion interacting with the surrounding interstellar medium (ISM) is known as a supernova remnant (SNR). The interactions of expanding SNRs and the ISM are important feedback mechanisms that may trigger star formation or, on the contrary, disperse gas and thus suppress the star formation rate in a galaxy. Gas can be blown out of the Galactic plane, and turbulent pressure is produced and maintained on both small (molecular cloud) and large (galaxy-wide) scales (e.g., Efstathiou 2000; Ostriker & Shetty 2011; Dubner & Giacani 2015; Bacchini et al. 2020). In order to fully understand and quantify the impact SNRs have on the dynamics of star formation in the Milky Way from an observational point of view, having a complete catalog of Galactic SNRs is highly desirable. The most recent Galactic SNR catalogs (Ferrand & Safi-Harb 2012; Green 2019) contain fewer than 400 objects. This number is, however, significantly smaller than the expected ∼1000 discussed by Li et al. (1991), who arrived at this estimate using statistical arguments primarily based on our knowledge of the distances to SNRs in the Milky Way.
Ranasinghe & Leahy (2022), using a similar statistical analysis but with improved distances to the currently known SNRs, estimate that the number quoted by Li et al. (1991) must be a lower limit, and that there could be over 3000 SNRs in the Galaxy. This apparent discrepancy is believed to arise simply because SNRs that are fainter and smaller than the currently known sample have not yet been detected, rather than from insufficient knowledge of the local universe (Brogan et al. 2006; Anderson et al. 2017, hereafter A17). In an attempt to improve this situation, recent radio Galactic plane surveys have been carried out with good sensitivity to both compact and extended emission, leading to the identification of well over one hundred SNR candidates (A17; Hurley-Walker et al. 2019; Dokara et al. 2021, D21 from here on). These studies used the radio and mid-infrared (MIR) anti-correlation property of SNRs (Fürst et al. 1987b; Haslam & Osborne 1987). While H II regions emit brightly at both radio and MIR wavelengths, SNRs are typically bright radio emitters but weak MIR emitters. Fürst et al. (1987b) found that the ratio (R) of 60 µm MIR to 11 cm radio flux density is much higher for H II regions than for SNRs (R_HII ∼ 1000 and R_SNR ∼ 15). Subsequently, multiple other studies confirmed this anti-correlation property (Broadbent et al. 1989; Whiteoak & Green 1996; Pinheiro Gonçalves et al. 2011). Most of these SNR candidates are yet to be confirmed as genuine SNRs with clear nonthermal radio emission. In addition, some objects in the Galactic SNR catalogs either do not have good radio measurements (such as G32.1-0.9 and G32.4+0.1) or, worse, the evidence that they emit nonthermal synchrotron radiation is rather weak (e.g., G31.5-0.6; Mavromatakis et al. 2001). It is not uncommon for H II regions, which emit thermally, to be mistaken for SNRs due to their similar radio morphology (e.g., A17 and D21). The presence of nonthermal synchrotron radio emission is thus vital to determine whether an object is truly a SNR. Synchrotron radiation is linearly polarized and typically has a negative spectral index at frequencies where synchrotron losses do not occur (typically over 1 GHz; Wilson et al. 2013), where the spectral index α is determined via a power-law fit to the flux density spectrum, S_ν ∝ ν^α, with S_ν the flux density and ν the frequency. In this work, we focus on confirming the status of the SNR candidates and of the sample of objects that were catalogued as SNRs (hereafter called 'known SNRs') in the region of Galactic longitude 28° < l < 36° and latitude |b| < 1° (hereafter called 'the pilot region') by measuring linearly polarized flux densities (LPFD) and spectral indices. The rest of the paper is organized as follows: Sect. 2 contains the descriptions of the data and the methods used for this study. The results for known and candidate SNRs are presented in Sects. 3 and 4, respectively. Their implications are discussed in Sect. 5, and we provide a summary of this work in Sect. 6.
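To make the spectral index convention concrete, the following minimal Python snippet (ours, not the paper's; the flux densities are hypothetical) computes α from two measurements:

```python
import numpy as np

def spectral_index(s1, s2, nu1, nu2):
    """Spectral index alpha in the convention S_nu ∝ nu^alpha,
    from flux densities s1 at frequency nu1 and s2 at nu2."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# Hypothetical source: 1.0 Jy at 1.4 GHz falling to 0.5 Jy at 5.85 GHz
print(f"alpha = {spectral_index(1.0, 0.5, 1.4e9, 5.85e9):.2f}")  # ~ -0.48
```

A negative value of this size would be consistent with synchrotron emission, whereas values near zero or strongly positive point to optically thin or optically thick thermal emission, respectively.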
The GLOSTAR survey

The global view on star formation in the Milky Way (GLOSTAR) survey is a sensitive, unbiased, large-scale 4-8 GHz continuum and spectral line survey of the first quadrant of the Galaxy, covering the region bounded by Galactic longitudes −2° < l < 60° and Galactic latitudes |b| < 1°, in addition to the Cygnus X star-forming complex (76° < l < 83° and −1° < b < 2°). The observations were done using the Karl G. Jansky Very Large Array (VLA) in B- and D-configurations, as well as the 100 m Effelsberg telescope. Full details of the observations and data reduction are presented in Medina et al. (2019) and Brunthaler et al. (2021). The catalogs of continuum sources in the GLOSTAR pilot region (28° < l < 36° and |b| < 1°) from the VLA images, which contain 1575 sources in the D-configuration images (including several supernova remnants) and 1457 in the B-configuration images, are discussed in Medina et al. (2019) and Dzib et al. (2023), respectively. An overview of the survey and initial results are described in Brunthaler et al. (2021). Further results are presented in D21 (SNRs), Ortiz-León et al. (2021, Cygnus X methanol masers), Nguyen et al. (2021, Galactic center continuum sources), and Nguyen et al. (2022, methanol maser catalog). Here, we give a brief overview of the data that we use for this study. We focus on the GLOSTAR pilot region, which contains numerous extended and compact sources overlapping with a strong Galactic background (see Fig. 1 and also Medina et al. 2019). The W43 mini-starburst complex located at l ∼ 30° (where the bar of the Milky Way meets the Scutum-Centaurus Arm, e.g., Zhang et al. 2014) and the W44 supernova remnant at l ∼ 35° are among the brightest objects observed in this region. In our previous work (D21), we detected over 150 SNR candidates in the D-configuration VLA images of the full survey, with the pilot region containing 35 of them. We use only the D-configuration VLA and the Effelsberg continuum images in this work. Since the D-configuration images do not completely recover emission on scales larger than a few arcminutes, they are not suitable for accurately measuring the total flux densities and spectral indices of extended objects such as many SNRs. For this purpose, we use images from the single-dish 100 m Effelsberg telescope and their combination with the VLA D-configuration images. The Effelsberg images of this survey do not contain information on the very large scales (>1°) due to the baseline subtraction and limited coverage in Galactic latitude (|b| < 1°). The large-scale information had been 'restored' using the Urumqi 6 cm survey images (Sun et al. 2007, 2011b) to produce the GLOSTAR-Effelsberg maps with the correct intensities (see Brunthaler et al. 2021, for details). However, all the objects we study are smaller than half a degree, and since we need to filter out the large-scale background in any case, we use the original maps directly (i.e., without restoration using the Urumqi maps) to avoid a source of uncertainty. The calibration and imaging of the VLA and the Effelsberg data, along with their feathering for the total power Stokes I images, are described by Brunthaler et al. (2021). Feathering is a method to combine two images with emission on different angular scales, where the two images are co-added in the Fourier domain (uv-space) with weighting functions that account for the instrumental response (e.g., Vogel et al. 1984; Cotton 2017). In this work, we exclusively use the combination of VLA-D and Effelsberg data, and hereafter these images are called 'the combination images'. Since the frequency coverage is not exactly the same for the VLA and the Effelsberg telescope, producing the combination images is not straightforward.
The final VLA continuum data from the GLOSTAR survey are binned into nine subbands centered on frequencies from ∼4.2 to ∼7.5 GHz, whereas the Effelsberg continuum maps have two subbands centered at frequencies f_E,lo ∼ 4.9 GHz and f_E,hi ∼ 6.8 GHz (see Brunthaler et al. 2021, for more details). The procedure that we followed to combine the D-configuration VLA and the Effelsberg data for the different Stokes parameters is described below.

Image combination: Stokes I

We average the VLA images from the first five subbands at lower frequencies and the next three subbands at higher frequencies separately to form two VLA images, I_V,f_V,lo and I_V,f_V,hi respectively. We discard the ninth subband since it is mostly corrupted by radio frequency interference. The first five subbands have an average frequency f_V,lo ∼ 4.7 GHz and the next three subbands have f_V,hi ∼ 6.9 GHz, which are already close to the central frequencies of the Effelsberg continuum data, f_E,lo ∼ 4.9 GHz and f_E,hi ∼ 6.8 GHz respectively, but they are not exactly equal. To bring the two VLA images (I_V,f_V,lo and I_V,f_V,hi) to the frequencies of the Effelsberg images, we use a pixel-by-pixel VLA spectral index α_pix to scale the intensities of each pixel:

I^pix_V,f_E,lo = I^pix_V,f_V,lo (f_E,lo / f_V,lo)^α_pix and I^pix_V,f_E,hi = I^pix_V,f_V,hi (f_E,hi / f_V,hi)^α_pix,

where I^pix_V,f_E,lo and I^pix_V,f_E,hi are the VLA pixel values estimated at the Effelsberg central frequencies. For pixels with intensities above a signal-to-noise threshold of three, α_pix is measured from the two Stokes I images I_V,f_V,lo and I_V,f_V,hi. For pixels below the threshold, we take a spectral index of zero, i.e., we use the average intensity. After bringing the VLA images to the central frequencies of the Effelsberg images, we feather the VLA and the Effelsberg maps, I_V,f_E,lo + I_E,f_E,lo to produce the low-frequency combination image, and I_V,f_E,hi + I_E,f_E,hi to produce the high-frequency combination image. Finally, the low- and high-frequency images are averaged to form the 5.85 GHz GLOSTAR combination image. These combination images will be made available on the GLOSTAR image server before the publication of this work.
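The per-pixel scaling step can be illustrated with a short numpy sketch. This is a simplified stand-in, assuming plain image arrays, a single constant noise level, and α_pix = 0 below the signal-to-noise threshold; the subsequent uv-plane feathering with the Effelsberg maps is not reproduced here, and none of the names below come from the actual survey pipeline:

```python
import numpy as np

def scale_vla_to_effelsberg(i_v_lo, i_v_hi, noise,
                            f_v_lo=4.7e9, f_v_hi=6.9e9,
                            f_e_lo=4.9e9, f_e_hi=6.8e9, snr=3.0):
    """Scale the two averaged VLA subband images to the Effelsberg
    central frequencies using a per-pixel spectral index alpha_pix."""
    alpha = np.zeros_like(i_v_lo)
    good = (i_v_lo > snr * noise) & (i_v_hi > snr * noise)
    alpha[good] = (np.log(i_v_hi[good] / i_v_lo[good])
                   / np.log(f_v_hi / f_v_lo))
    # alpha stays 0 for faint pixels, so their intensities are left unscaled
    i_at_e_lo = i_v_lo * (f_e_lo / f_v_lo) ** alpha
    i_at_e_hi = i_v_hi * (f_e_hi / f_v_hi) ** alpha
    return i_at_e_lo, i_at_e_hi
```

Because f_V and f_E differ by only a few percent, the correction factor (f_E/f_V)^α_pix stays close to unity for any plausible α_pix, which is why the method is robust.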
Image combination: Stokes Q and U

Similar to the Stokes I procedure, we make the low- and high-frequency VLA images by averaging the first five and the next three subbands. We then directly feather each of these averaged images with their respective Effelsberg maps: P_V,f_V,lo + P_E,f_E,lo and P_V,f_V,hi + P_E,f_E,hi, where P represents Stokes Q or U. Unlike for Stokes I, no intensity scaling is applied to bring them to exactly the same frequency. This is because Stokes Q and U have both positive and negative features, and a direct spectral index calculation is not possible. The Stokes Q and U images at each of the two frequencies are then combined to form the linearly polarized intensity maps, √(Q² + U²). The low- and high-frequency maps are then averaged to form the 5.85 GHz GLOSTAR combination image of linearly polarized intensity. We note that this method may introduce a bias in the measured polarized intensities and the polarization vectors due to the different central frequencies. However, we find that this bias is negligible since the frequencies are quite close (f_V,lo ≈ f_E,lo and f_V,hi ≈ f_E,hi). Assuming a spectral index of −0.7 for synchrotron emission, the different central frequencies of the feathered VLA and Effelsberg images of linearly polarized emission introduce an error of approximately 4%, which is close to the calibration uncertainty. For the polarization vector to change by just five degrees from f_V,lo to f_E,lo, the rotation measure must be greater than about 2500 rad m⁻², which is unlikely to be seen in any typical Galactic source. Nonetheless, to account for this bias in the uncertainties of the flux densities, and also for the instrumental polarization (≲2% in both VLA and Effelsberg data), we adopt a conservative 10% error that is added in quadrature to the usual uncertainty we obtain from the measurement of the flux density of an extended source. In addition, we observe that the LPFD measured in the combination images may be lower than the values measured in the VLA D-configuration-only images. This can happen due to the depolarization that occurs when the polarization vectors in the small-scale structure detected by the VLA are misaligned with the polarization vectors measured from the Effelsberg data. It is worth noting that, in this study, the exact degree of polarization is not especially important except insofar as it establishes whether or not a source is polarized, i.e., we only use it as a tool to identify nonthermal emission.

Supernova remnant catalogs

In D21, we presented the list of the SNR candidates that are detected in the D-configuration VLA images of the GLOSTAR survey. It contains 77 objects that were noted as potential SNRs by earlier studies (their Table 3), as well as 80 new identifications (their Table 4). These candidates were identified using the MIR-radio anti-correlation property of SNRs, as discussed earlier. In the GLOSTAR pilot region, there are 21 candidates discovered in the 1-2 GHz HI/OH/Recombination line survey (THOR; A17) and 14 from the GLOSTAR survey (D21). These 35 candidates, in addition to the 12 confirmed SNRs in the Galactic SNR catalogs of Ferrand & Safi-Harb (2012) and Green (2019), are the targets of this study.

Ancillary data

In addition to the GLOSTAR survey continuum images previously described, we also use other complementary radio surveys that are able to recover emission on scales of several arcminutes: the 1-2 GHz HI/OH/Recombination line survey (THOR; Beuther et al. 2016; Wang et al. 2020) combined with the VLA Galactic Plane Survey (VGPS; Stil et al. 2006), which is called THOR+VGPS; the 80-230 MHz GaLactic and Extragalactic All-sky Murchison Widefield Array survey (GLEAM; Hurley-Walker et al. 2019); the Effelsberg 11 cm (∼2.7 GHz) survey of the Galactic plane by Reich et al. (1984); and the 3 cm (10 GHz) survey of the Galactic plane with the Nobeyama telescope by Handa et al. (1987).

Flux density and spectral index measurements

We use the GLOSTAR combination images to measure the flux densities at 5.85 GHz, in addition to the other surveys mentioned earlier (Sect. 2.3). We note that we do not measure the flux densities from the two subbands to derive an 'in-band' GLOSTAR spectral index, since each of those images depends, though only partly, on the pixel-by-pixel spectral index from the VLA data, which suffers from the problem of undetected large-scale flux density (see Sect. 2.1.1). The presence of background emission may bias the value of the measured spectral index. This is particularly true for extended objects in the Galactic plane, since the nonthermal Galactic background is strong and ubiquitous at low radio frequencies. In addition, the intensity of this background depends on frequency and position (e.g., Paladini et al. 2005).
The method of 'unsharp masking' (Sofue & Reich 1979) is generally used to filter out the large-scale Galactic emission, but it is not appropriate for smaller-scale background emission across an object with a size of a few arcminutes. In this work, we fit a 'twisted plane' that removes the background contribution up to a first-order variation. Points are chosen around an object such that they represent the background emission in that area, and a two-dimensional least-squares linear fit is performed on the pixel intensities to measure the background variation. The uncertainty from this background subtraction operation is determined by choosing multiple sets of vertices. We subtract the local background in both the total intensity and the polarized intensity images, and we mask pixels typically below a 3σ level, where the noise is determined locally by a sigma-clipping algorithm. While several objects we discuss in this paper already have their low-frequency flux densities derived in multiple previous studies, for the sake of consistency in the spectral index determination, we make our own measurements of the flux densities of these objects using the images directly from their survey data, performing background subtraction in the same manner as we do for the GLOSTAR images. We also mask the point sources that are clearly unrelated (e.g., Tian & Leahy 2005) to keep the measurement as accurate as possible. In addition, since radio interferometric artefacts such as radial 'spokes' are common near bright sources, we do not measure the total or linearly polarized flux densities if we are unable to disentangle such effects from the emission of an object. Due to such artefacts, polarization measurements are not possible for about a third of the objects studied in this work. The spectral index of an object is usually measured by fitting a straight line to the relation between flux densities and frequencies in logarithmic space:

log S_ν = α_FD log ν + constant.

However, the values determined in this manner are sensitive to the presence of background emission. Turtle et al. (1962) introduced the concept of temperature-temperature (TT) plots, in which a spectral index is extracted from the slope of a straight line fit to the pixel intensities at one frequency against the pixel intensities at another frequency. In essence, we integrate over the whole area to measure the flux density spectral index (α_FD), whereas the TT-plot spectral index (α_TT) is calculated by measuring the variation of each pixel at different frequencies. The intensities on TT-plots can be represented by brightness temperatures in Kelvin, or pixel intensities in Jy beam⁻¹. In this work, we exclusively use pixel intensities, and the spectral index is calculated using

α_TT = log(m_S) / log(ν₂/ν₁),

where m_S is the slope of the line that is fit to the pixel intensities at frequency ν₂ against those at ν₁. This is a more reliable measurement of the spectral index of an extended object because the flux density bias introduced by a constant large-scale background emission moves all the points equally, and hence does not affect the slope of the fit. Since the combination images are produced using the spectral index derived from the D-configuration GLOSTAR-VLA images, they are not suitable for measuring the TT-plot spectral index (α_TT). We only use the GLOSTAR-Effelsberg images for this purpose. We also measure the flux density spectral index (α_FD); this serves as a useful consistency check since we subtract the background regardless, as described above.
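The two estimators just described can be summarized in a few lines of numpy. The sketch below is illustrative only: the actual analysis selects background points around each object by hand, repeats the fit for multiple vertex sets to estimate the uncertainty, and applies sigma-clipped masking, none of which is shown, and the function names are ours:

```python
import numpy as np

def subtract_twisted_plane(image, bg_x, bg_y, bg_vals):
    """Fit a first-order plane z = a*x + b*y + c to background samples
    taken around the source and subtract it from the whole image."""
    A = np.column_stack([bg_x, bg_y, np.ones_like(bg_x)])
    (a, b, c), *_ = np.linalg.lstsq(A, bg_vals, rcond=None)
    yy, xx = np.indices(image.shape)
    return image - (a * xx + b * yy + c)

def tt_plot_index(pix_nu1, pix_nu2, nu1, nu2):
    """TT-plot spectral index: fit a straight line to pixel intensities
    at nu2 against those at nu1, then alpha = log(m_S) / log(nu2/nu1)."""
    m_s, _ = np.polyfit(pix_nu1, pix_nu2, 1)
    return np.log(m_s) / np.log(nu2 / nu1)
```

A constant background offset shifts every point in the TT plot by the same amount, changing the intercept but not the slope, which is exactly why the TT-plot index is more robust than the flux density index for extended objects.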
We note that, at low radio frequencies such as the regime of the GLEAM survey (≲200 MHz), absorption effects become important, either via synchrotron self-absorption or free-free absorption (e.g., Wilson et al. 2013; Arias et al. 2019). This lowers the emitted flux at low frequencies and increases the power-law spectral index compared to the values determined at higher frequencies. Such a 'spectral break' effect has been noted in several SNRs before (e.g., Sun et al. 2011a). Spectral breaks are also observed in pulsar wind nebulae due to the central pulsar's time-dependent energy injection (Reynolds & Chevalier 1984; Woltjer et al. 1997). When we calculate the flux density spectral index in this work, if we clearly see a break, we split the spectrum into two and calculate two spectral indices; if the break is not obvious, we calculate only a single spectral index. The nonthermal emission from the Galactic disk is polarized, and it may have structure on small scales that is not filtered out by an interferometer. While this is more significant at longer wavelengths, it might affect the GLOSTAR images too (see D21). When measuring the LPFD, we verify that the objects we study in this work show no polarized features without Stokes I counterparts. In addition, a Ricean polarization bias might introduce a positive offset. This occurs because the LPFD is the square root of the sum of squares, √(Q² + U²), so any positive or negative noise in Q and U will always add up and result in a non-zero LPFD. We find that this effect is an order of magnitude smaller than the flux densities we report, and in fact there is no need to correct for this bias, owing to the background subtraction procedure and the 3σ-level mask we use (see Wardle & Kronberg 1974). Nonetheless, the twisted-plane background subtraction procedure is applied to the linearly polarized intensity images as well, which accounts for the Galactic plane polarized background and also any possible Ricean polarization bias.

Known SNRs

The Galactic SNR catalogs of Green (2019) and Ferrand & Safi-Harb (2012) list 12 confirmed SNRs in the region we study. In Table 1, we present the GLOSTAR 5.8 GHz integrated flux densities of these SNRs along with their spectral indices. Where possible, overlapping H II regions and clearly unrelated point sources are masked while measuring the flux densities, taking special care in crowded regions. If it is unclear whether a particular region of emission belongs to the SNR, we do not remove that region. We find that the flux densities and spectral indices are generally consistent with previous studies. We present the GLOSTAR combination images of some interesting known SNRs and discuss them below. The total intensity images and the linearly polarized intensity images of all the known SNRs studied in this work are shown in Appendix A.

3.1. G29.6+0.1

While we had already detected linear polarization in the SNR G29.6+0.1 using the VLA D-configuration images (in D21), the emission in the combination images appears to be depolarized due to the addition of large-scale information from the GLOSTAR-Effelsberg data. We do not measure any polarized emission above a 1σ upper limit of ∼0.1 Jy. The flux densities we measure (see Table 1) appear to be lower than what is expected from the lower limits reported by Gaensler et al. (1999): ∼0.41 and ∼0.26 Jy at 5 and 8 GHz, respectively. The reason for this inconsistency is unclear.
Nonetheless, the broadband spectral index we derive from our measurements (approximately −0.5) is in line with the TT-plot spectral indices derived by Gaensler et al. (1999). We show the GLOSTAR combination image of the SNR G29.6+0.1 in Fig. 2. The spectrum of this SNR shows that it might be falling more rapidly from 1.4-5.8 GHz than from 0.2-1.4 GHz, suggesting the presence of a spectral break around 1 GHz. However, given the uncertainties, we reserve judgment on the changing spectral index.

3.2. G31.5-0.6

Earlier studies suggested that this is a SNR-H II region complex. The Stokes I flux densities we measure are consistent with those given by Fürst et al. (1987a) within uncertainties, and we also find a morphology similar to their image. However, the spectral index we derive from 200 MHz to 10 GHz is essentially zero, which is consistent with our TT-plot result (Fig. 2), but in slight tension with the value of approximately −0.2 given by Fürst et al. (1987a). Even after separating from the region the thermal emission that they reported, we find no evidence for synchrotron emission. In the 24 µm images of MIPSGAL (Multiband Infrared Photometer for Spitzer Galactic plane survey; Carey et al. 2009), we find weak emission following the radio morphology, hinting that the emission may be thermal. Based on sulfur and Hα optical lines, Mavromatakis et al. (2001) also suggest that this may be an H II region instead of a SNR. Deeper high-resolution observations at lower frequencies will shed more light on the nature of the emission from this object, but the evidence so far suggests that G31.5-0.6 is not a SNR.

3.3. G32.1-0.9

Folgheraiter et al. (1997) discovered the SNR G32.1-0.9 in the X-ray regime, and they found a possible faint radio counterpart in the 11 cm Effelsberg images. A17 reported a possible detection in the THOR+VGPS data too, but no radio spectral index was ever determined. While we cannot confidently identify any counterpart in the GLOSTAR data, the 200 MHz GLEAM image shows a shell that corresponds to the 11 cm Effelsberg and THOR+VGPS detections (Fig. 3). Using these three images, we derive a radio spectral index for this unusually faint SNR for the first time: α_FD = −0.68 ± 0.12. Its average 1 GHz surface brightness is approximately 3 × 10⁻²² W m⁻² Hz⁻¹ sr⁻¹, which makes it one of the faintest radio SNRs currently known: it is only three times brighter than the faintest SNR known in the Milky Way (G181.1+9.5; Kothes et al. 2017).
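As a quick cross-check of these numbers (our own arithmetic, not part of the paper), the Rayleigh-Jeans relation T_B = Σ c² / (2 k_B ν²) converts the quoted surface brightness into a brightness temperature:

```python
C = 2.998e8      # speed of light [m s^-1]
K_B = 1.381e-23  # Boltzmann constant [J K^-1]

def brightness_temperature(sigma, nu):
    """Rayleigh-Jeans brightness temperature [K] for a surface
    brightness sigma [W m^-2 Hz^-1 sr^-1] at frequency nu [Hz]."""
    return sigma * C**2 / (2.0 * K_B * nu**2)

# G32.1-0.9 at 1 GHz: ~1 K, i.e., roughly three times the ~0.33 K
# of the faintest known SNR, G181.1+9.5 (Kothes et al. 2017)
print(f"{brightness_temperature(3e-22, 1e9):.2f} K")
```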
3.4. G32.4+0.1

G32.4+0.1 was discovered in the X-ray regime by Yamaguchi et al. (2004), who also noted a possible counterpart in the images of the 1.4 GHz NRAO VLA Sky Survey (Condon et al. 1998). The radio emission from this SNR is faint but clearly visible in the GLEAM, the THOR+VGPS, and the GLOSTAR combination images, allowing us to measure, for the first time for this SNR, a spectral index of −0.21 ± 0.07 (from brightness values) to −0.39 ± 0.10 (from a TT-plot). The GLOSTAR combination image and the plots for the spectral index determination are shown in Fig. 2. As noted in Sect. 2.4, the low-frequency emission detected in GLEAM may be self-absorbed, which brings the spectral index close to zero; hence we favor the TT-plot spectral index (approximately −0.4) for higher frequencies, where the effects of synchrotron self-absorption are not important. Linear polarization is undetected, with an upper limit on the linearly polarized flux density of ∼0.3 Jy.

3.5. G32.8-0.1

Green (2019) lists the SNR G32.8-0.1 with an uncertain spectral index of −0.2 based on the work of Caswell et al. (1975), who report a flux density of 12.8 Jy at 408 MHz. Unfortunately, no uncertainties were quoted, but they noted that their error might be large. Later, Kassim (1992) observed this SNR with the VLA at a similar frequency of 330 MHz, and their results conflict with those of Caswell et al. (1975): they measured a significantly higher flux density of ∼32 Jy and consequently a more negative spectral index of approximately −0.5, but again no uncertainties were quoted. This SNR is clearly visible in the GLOSTAR survey, in addition to the GLEAM and the THOR surveys (Fig. 2), which helps us resolve the tension. Our measurements of the flux density (16.3 ± 1.7 Jy at 200 MHz) and spectral index (α_FD = −0.27 ± 0.04) are consistent with the values given by Caswell et al. (1975), and this is confirmed by our TT-plot spectral index (α_TT = −0.32 ± 0.05).

3.6. G35.6-0.4

The nature of the emission from G35.6-0.4 had been a subject of discussion for a long time. It was included in the early SNR catalogues (e.g., Downes 1971; Milne 1979), but the detection of a radio recombination line by Lockman (1989), among other studies, had cast doubt on the nonthermal nature of the emission (see Green 2009, for an overview). Finally, using higher quality radio continuum data, Green (2009) 're-identified' this as a SNR with a spectral index of approximately −0.5. This object is clearly visible in the GLOSTAR survey, where we also unambiguously detect linearly polarized emission (Fig. 4). Its spectrum appears to be broken; from 200 MHz to 1.4 GHz the flux density shows no significant change (α ∼ 0), and from 1.4 to 10 GHz it falls with a spectral index of α = −0.31 ± 0.07. This is confirmed by the GLOSTAR-Effelsberg TT-plot spectral index as well (Fig. 4). This result is consistent with the spectral index derived by Rennie et al. (2022): α = −0.34 ± 0.08 from 2.7-30 GHz. Green (2009) derives a slightly steeper spectral index of −0.47 ± 0.07. This is probably because of the different choice of polygons used for measuring the flux density and also the subtraction of background emission in this complex region, but we note that the values are consistent within 2σ. Given the presence of radio recombination lines that indicate thermal emission and a spectral index of approximately −0.3, this region appears to be a complex of thermal and nonthermal emission. Paredes et al. (2014) suggest that there may be two circularly shaped extended objects present in this complex (marked by two red dotted circles in Fig. 4), one with thermal and the other with nonthermal emission. We find that MIR emission is detected from the southern part in the GLIMPSE and MIPSGAL images (Churchwell et al. 2009; Carey et al. 2009), providing further evidence of thermal emission from this region. The linearly polarized emission detected in the GLOSTAR combination data (see Fig. 4) also hints at the presence of two shells, one centered at G35.60-0.40 and the other at G35.55-0.55, similar to those reported by Paredes et al. (2014). However, since we find polarization from both these regions, it is likely that the emission from these regions has both thermal and nonthermal components.

Candidate SNRs

In the pilot region, we had discovered 14 new candidate SNRs from the GLOSTAR survey in our previous work (D21), in addition to the 21 candidates discovered by A17 using THOR+VGPS images. The continuum flux densities of these candidates are presented in Table 2 (from THOR+VGPS) and Table 3 (from GLOSTAR). We derived flux density spectral indices whenever possible, and these are plotted in Fig. 5.
We discuss five objects for which there is good evidence of nonthermal emission in detail in the following sections. We also find that 14 other candidates possibly have a negative spectral index. However, since they are quite faint and their morphology is not clear (see Figs. B.1 and B.2), we do not discuss them further.

4.1. G28.36+0.21

We detect this object in the Stokes I images of our GLOSTAR survey (Fig. 6), with the same morphology as observed in the THOR+VGPS images. Its fractional polarization is about 2%, which is not unusual for SNRs (e.g., Sun et al. 2011a). The linearly polarized intensity map from GLOSTAR shows a faint structure, close to the noise level in this region, that resembles the total intensity of the shell of this object. From the Effelsberg images of our survey, we made a TT-plot and obtained a spectral index of −0.33 ± 0.14. By measuring the background-subtracted flux densities in the GLOSTAR combination, THOR+VGPS, and GLEAM images, we obtain a brightness spectral index of −0.28 ± 0.11. These measurements and the morphology we observe in the total and linearly polarized intensity images provide ample evidence of nonthermal emission from this object, and hence we conclude that G28.36+0.21 is indeed a SNR.

4.2. G28.78-0.44

The candidate SNR G28.78-0.44 (Fig. 7) had previously been identified in the MAGPIS and the THOR+VGPS surveys (Helfand et al. 2006; A17). Hurley-Walker et al. (2019) derive a spectral index of approximately −0.7 in their GLEAM survey (70-230 MHz), consistent with the spectral index from the TIFR GMRT Sky Survey and the NRAO VLA Sky Survey (de Gasperin et al. 2018; Dokara et al. 2018). While the polarization of this object was already clearly visible in the VLA images of the GLOSTAR survey (D21), the addition of the Effelsberg data allows us to measure its flux densities at 5.8 GHz. The fractional polarization we measure in the combination images is about 4%. We also detect this object in the Effelsberg 11 cm survey (Reich et al. 1984) and the Nobeyama 10 GHz survey (Handa et al. 1987). These give us a broadband flux density spectral index of −0.42 ± 0.04, which is consistent with the TT-plot spectral index from the Effelsberg images of the GLOSTAR survey alone (−0.52 ± 0.12, see Fig. 7). Thus we find strong evidence that this filled-shell object is a SNR.

4.3. G29.38+0.10

This source appears to have a complex structure with a bright pulsar wind nebula (PWN) and a faint SNR shell in the GLOSTAR combination image (Fig. 8). The central structure of this complex is bright and highly polarized in the combination images, with the degree of linear polarization reaching as high as 30% in some pixels. For the whole complex, this value is 5.5 ± 0.8%. We had also detected strong linear polarization from this object in our previous work (D21), which was based only on the D-configuration VLA images. Its low-frequency spectral index, measured using the GLEAM images by Hurley-Walker et al. (2019) for the whole complex, and for the central source by Dokara et al. (2018) using the TGSS-NVSS spectral index map (de Gasperin et al. 2018), is approximately zero, which is typical of PWNe. We calculate a similar spectral index using the THOR+VGPS and GLEAM images as well. However, between the THOR+VGPS and the GLOSTAR combination images, the flux density falls with a spectral index of α_FD ∼ −0.34. Constructing a TT-plot using images from the two bands of the GLOSTAR-Effelsberg data, we measure a value of α_TT ∼ −0.46.
This implies that there is a break in the spectrum of this source near 2 GHz, or a gradual turnover. Such a varying spectral index at these frequencies is once again typical of PWNe (see Pacini & Salvati 1973; Reynolds & Chevalier 1984; Sun et al. 2011a). These facts provide further evidence that G29.38+0.10 is a PWN+SNR shell complex.

4.4. G031.256-0.041

A17 cataloged G31.22-0.02 as a shell-shaped SNR candidate based on the THOR+VGPS images. It lies in a crowded field with a strong background, due to which the determination of the TT-plot spectral index from the Effelsberg images (α_TT) was not possible. This region is better resolved in the GLOSTAR combination images, in which we identify the brightest part of the supposed shell of G31.22-0.02 (at l ∼ 31.26°, b ∼ −0.02°) to be inside another shell (Fig. 9). We believe that this is a PWN+shell complex, and we named it the GLOSTAR SNR candidate G031.256-0.041 in our previous work (D21). The flux densities we measured in the THOR+VGPS and the GLOSTAR combination images are similar within uncertainties (S ∼ 0.35 Jy), giving a spectral index close to zero between 1.4 and 5.8 GHz. In the 200 MHz GLEAM images, what we believe is the center of the PWN (at l ∼ 31.26°, b ∼ −0.02°) is barely resolved, with a peak brightness of nearly 0.8 Jy beam⁻¹. The background level in this region is about 0.4 Jy, implying that the flux density of the peak is ∼0.4 Jy, similar to the flux densities from the GLOSTAR combination and the THOR+VGPS images. Unfortunately, the linearly polarized intensity images from GLOSTAR in this region are contaminated with sidelobe artefacts of nearby bright sources, preventing us from measuring its degree of polarization. The morphology and the estimated spectral index are, however, consistent with our PWN+SNR shell interpretation.

4.5. G034.524-0.761

We discovered the SNR candidate G034.524-0.761 in our previous GLOSTAR work, where we had identified clear linear polarization in the VLA data (see Fig. 11 of D21). With the addition of the Effelsberg data to the VLA images, we now obtain a degree of polarization of ∼10% for this candidate. In addition, we obtain a TT-plot spectral index of approximately −0.6 using the Effelsberg images, although with a large uncertainty of ∼0.5. We measured flux densities in the 200 MHz GLEAM and the 1.4 GHz THOR+VGPS images, which give us a spectral index of approximately −0.9. While all these facts point to a nonthermal origin of the emission from this region, the morphology of this candidate (Fig. 10) indicates that it might be a filament. For this reason, we cannot conclude that this object is a SNR.

Discussion

It is evident from Fig. 5 that the spectral indices of several SNR candidates are not well constrained yet. Most of them have a small angular size and a low surface brightness, and they lie in crowded regions with a strong background; these conditions result in large uncertainties in the measurement of their spectral indices. Moreover, the polarization signals from several SNRs may remain undetected because of limited sensitivity (the linearly polarized flux density is typically only a few percent of the total flux density, e.g., Sun et al. 2011a). Deeper observations of these candidates across the radio band are necessary to better constrain their spectral indices and linear polarization.
However, the current results do not look very promising, since the rate of confirmation appears to be quite low, and we are forced to reconsider the strategy used to identify new SNRs. Since most of the bright SNRs are likely to have been discovered already, it might get progressively more difficult to find the remaining fainter ones. H II regions are more numerous in the Galaxy, and there is a chance that the fainter H II regions contaminate the sample of faint SNRs. However, the SNR candidates identified by A17 and D21 do not have any significant coincident MIR emission detected in the Spitzer MIR surveys, which can detect H II regions anywhere in the Galaxy (Anderson et al. 2014). Hence, we believe that, if the SNR candidates do not turn out to be SNRs, the confusion must be due to radio emitters other than H II regions, although it is unclear what kind of objects these might be. A17 and D21 suggest that the remaining undetected SNRs must be faint and also have a small angular size. We turn our attention toward these properties of the sample of the SNR candidates.

Angular radius

One question that needs to be answered before starting the search for the remaining SNRs is whether most of them are indeed small, since that would determine what resolution is necessary to detect the 'missing' SNRs. To estimate their apparent angular extents, we ran a simple Monte Carlo simulation of the evolution of SNRs in the Milky Way (a toy numerical sketch is given at the end of this subsection). SNRs are evolved in a locally uniform ISM using the expressions from Draine (2011), which are based on the four classical stages proposed by Woltjer (1972):
1. The earliest part of the evolution is known as the free-expansion or ejecta-dominated phase. We assume that the mass of the swept-up ISM (m_sw) is negligible compared to the mass of the SN ejecta (m_ej) in this stage.
2. The Sedov-Taylor phase begins when the shocked and swept-up mass is comparable to the ejecta mass (m_sw ∼ m_ej), during which the explosion can be approximated as a point source injecting only energy.
3. The snowplow phase begins when radiative cooling losses become important and the matter behind the SNR shock cools rapidly to form a cold and dense shell. In the hot and tenuous medium interior to the shock, however, the energy losses do not yet play a role, and the pressure from this hot central volume drives the momentum of the dense outer shell.
4. The final phase is 'dispersion', as the SNR merges into the surrounding ISM and fades away when the shock speed drops to the ambient velocity dispersion levels.
We derive the radius of each SNR based on the time since the explosion and the position in the Galaxy. The main parameters and inputs of the simulation are the following:
- A Galactic supernova rate of one per 40 yr, with the core-collapse and thermonuclear types being 85% and 15%, respectively (Tammann et al. 1994; Reed 2005).
- The three-dimensional gas density model of the Milky Way from Misiriotis et al. (2006).
- A random Monte Carlo model of the two-dimensional distribution of supernova events in a disk with a central hole and a two-arm spiral following Li et al. (1991). The central hole accounts for the dearth of massive star formation, and by extension SNRs, near the Galactic center (see Nguyen et al. 2021; Ranasinghe & Leahy 2022, for example).
- Core-collapse SN events, which trace massive star formation, are chosen to have a scale height of 80 pc, the same as the scale height of the molecular gas (from Misiriotis et al. 2006).
- Type Ia SNe arise due to mass accretion onto old degenerate stars; accordingly, we use the thick disk scale height of 0.7 kpc from Kordopatis et al. (2011).
- The maximum lifetime of SNRs is fixed at 80 000 yr (Frail et al. 1994).
- For a Type Ia supernova, the kinetic energy of the ejecta is fixed at 10⁵¹ erg and the ejecta mass is normally distributed over 0.8-1.8 M⊙ (following Scalzo et al. 2014).
- For the more numerous core-collapse supernova events, the ejecta mass (8-11 M⊙) and the kinetic energy (0.2-1.3 × 10⁵¹ erg) are randomly drawn from distributions adapted from the results of Martinez et al. (2022).
There are, however, some caveats to consider:
- Realistically, the properties of the ISM are not smoothly varying functions of position as in the model of Misiriotis et al. (2006). The ISM number density can change drastically depending on the environment, especially in the case of previous mass-loss events such as stellar winds. These affect the evolution of SNRs in a crucial and nontrivial manner (e.g., Yasuda et al. 2021).
- The distribution of supernova events follows the model of Li et al. (1991), which is quite simplistic. But similar to their findings, we also observe that the results are insensitive to the parameters of the disk and the spiral arms. The inverse dependence of angular radius on distance makes our result even more robust than that of Li et al. (1991).
- The distributions of ejecta mass we used (from Scalzo et al. 2014; Martinez et al. 2022) may not hold accurately for the Milky Way, since those results come from supernovae in several galaxies in the nearby universe. However, we find that even if the ejecta mass for core-collapse supernovae were only 1 M⊙ instead of 8-11 M⊙, the results are mostly the same.
- There is evidence that the explosion energies of supernovae can have a range wider than the one we have taken, for both Type Ia and core-collapse, from ∼10⁴⁹ to ∼10⁵² erg (e.g., Benetti et al. 2005; Fisher & Jumper 2015; Pejcha & Thompson 2015; Murphy et al. 2019; Leahy et al. 2020). Even with a wider range, we find that the resultant radius distribution does not change significantly.
- We do not take into account the effects of clustering. This is the main drawback of this simulation. A significant fraction of massive star formation, and by extension of SN events, happens in clusters (e.g., Krumholz 2014). Ferrière (2001) estimates that ∼60% of O stars probably remain in their natal group, while the rest end up in the 'field'. If multiple supernovae occur in succession in such clusters, this might result in the formation of a superbubble (e.g., Ehlerová & Palouš 2013).
We ran the simulation for two million years, which spans several generations of SNRs. A snapshot at a time of 1.8 Myr is presented in Fig. 11, and a movie of the whole two million years is available online at https://www.aanda.org. Given that the lifetime of a SNR and the SN rate are fixed at 80 000 yr and one for every 40 yr respectively, about 2000 SNRs exist at the end of the simulation. It is clear that most of the SNRs are quite small, with angular radii of only a few arcminutes, similar to the THOR and GLOSTAR SNR candidates. Even if the lifetime of a typical SNR is longer than the 80 000 yr that we used, the resultant distribution does not shift to larger angular scales significantly. This is because the expansion is considerably slower in the later stages of SNR evolution.
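To give a feel for the kind of calculation involved, here is a deliberately reduced toy version of such a simulation. It keeps only the Sedov-Taylor scaling, R ≈ 5 pc (E/10⁵¹ erg / n₀)^(1/5) (t/10³ yr)^(2/5) (cf. Draine 2011), assumes a uniform ambient density n₀ = 1 cm⁻³ and a smooth exponential disk in place of the Li et al. (1991) spiral model, and ignores the free-expansion and snowplow phases; all numerical choices are ours, and this is not the code behind Fig. 11:

```python
import numpy as np

rng = np.random.default_rng(42)

# One SN per 40 yr with an 80 kyr SNR lifetime -> ~2000 SNRs alive,
# with ages uniformly distributed over that lifetime.
n_snr = 2000
ages_kyr = rng.uniform(0.0, 80.0, n_snr)
e51 = rng.uniform(0.2, 1.3, n_snr)  # kinetic energy in units of 1e51 erg

# Sedov-Taylor radius in a uniform n0 = 1 cm^-3 medium
radii_pc = 5.0 * e51**0.2 * ages_kyr**0.4

# Thin exponential disk (3 kpc scale length); Sun at 8.2 kpc from the center
r_gal = rng.exponential(3.0, n_snr)
phi = rng.uniform(0.0, 2.0 * np.pi, n_snr)
d_kpc = np.hypot(r_gal * np.cos(phi) - 8.2, r_gal * np.sin(phi))

# Small-angle apparent radius in arcminutes
theta_arcmin = np.degrees(radii_pc / (d_kpc * 1e3)) * 60.0
print(f"median angular radius: {np.median(theta_arcmin):.1f} arcmin")
```

Even this crude version puts the typical apparent radius at the arcminute scale, in qualitative agreement with the conclusion drawn from the full simulation.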
While this simulation serves only as a first approximation, since we do not consider several effects such as those mentioned above, it is nevertheless useful for giving us an idea of what to expect. The result reiterates the views of A17 and D21 that SNR searches must focus on objects of small angular size to make the most gains.

Radio surface brightness

In the simulation described above, we also measured the area of overlap of SNRs. We find it to be typically less than 10% of the total sky area covered by SNRs, suggesting that confusion due to SNRs overlapping each other may not be important. However, the SNRs originating from core-collapse events are located near massive star-forming complexes, which also contain other extended structures emitting at radio wavelengths. H II regions are the most likely sources of positional overlapping confusion: there are probably over 8000 of them (Anderson et al. 2014), and the range of their radio surface brightness values is similar to that of SNRs. Currently, the faintest SNR known has a brightness temperature of about 0.33 K at 1 GHz (Kothes et al. 2017), and, by extrapolating to 1 GHz assuming a nonthermal spectral index, we find that the SNR candidates from A17 and D21 are at a similar or lower surface brightness. On the other hand, the background emission from the diffuse gas in the Milky Way is at a level of a few Kelvin in the inner Galactic plane at 1 GHz (e.g., Reich et al. 1990), and it is even higher in regions such as the mini-starburst W43, where one expects many SNRs due to recent massive star formation activity. This implies that the diffuse background emission is a critical source of confusion, and finding new SNRs will probably become more difficult from now on. Interferometric surveys at lower frequencies, such as those with MeerKAT, appear promising in the search for new SNRs (e.g., Heywood et al. 2022), but the nonthermal Galactic background emission is also stronger at lower frequencies and may contribute to the confusion.

Summary and conclusions

We derived spectral indices of previously confirmed SNRs in the Galactic longitude range 28° < l < 36°, using the VLA-D+Effelsberg combination images of the 4-8 GHz GLOSTAR survey in addition to other complementary and archival survey data. These include the first radio spectral index determinations for the SNRs G32.1-0.9 and G32.4+0.1, along with the first reported spectral break for the SNR G35.6-0.4. We showed that G31.5-0.6 may not be a SNR, and we provided further evidence of nonthermal emission from the SNR candidates G28.36+0.21, G28.78-0.44, G29.38+0.10, and G034.524-0.761. We find that G28.36+0.21 and G28.78-0.44 are typical SNR shells, and that G29.38+0.10 is a PWN+shell complex. Based on a simple Monte Carlo simulation of SN events in the Milky Way, we find that most of the SNRs yet to be discovered must have angular sizes smaller than half a degree. Hence, despite the low rate of confirmation, we believe that future studies must focus on objects of small angular size, such as the THOR and GLOSTAR SNR candidates. The forthcoming Effelsberg images from the GLOSTAR survey for the rest of the coverage will be analyzed in the coming months, which will undoubtedly help us study more SNRs and candidates in the near future.
A recombinant scFv antibody-based fusion protein that targets EGFR associated with IMPDH2 downregulation and its drug conjugate show therapeutic efficacy against esophageal cancer

Abstract

The present study aimed to evaluate the anti-tumor efficacy of the epidermal growth factor receptor (EGFR)-targeting recombinant fusion protein Fv-LDP-D3 and its antibody-drug conjugate Fv-LDP-D3-AE against esophageal cancer. Fv-LDP-D3, consisting of the fragment variable (Fv) of an anti-EGFR antibody, the apoprotein of lidamycin (LDP), and the third domain of human serum albumin (D3), exhibited a high binding affinity for EGFR-overexpressing esophageal cancer cells, inhibited EGFR phosphorylation, and downregulated inosine monophosphate dehydrogenase type II (IMPDH2) expression. Fv-LDP-D3 was taken up by cancer cells through intensive macropinocytosis; it inhibited the proliferation and induced the apoptosis of esophageal cancer cells. In vivo imaging revealed that Fv-LDP-D3 displayed specific tumor-site accumulation and long-lasting retention over a 26-day period. Furthermore, Fv-LDP-D3-AE, the corresponding antibody-drug conjugate prepared by integrating the enediyne chromophore of lidamycin into the Fv-LDP-D3 molecule, displayed highly potent cytotoxicity, inhibited migration and invasion, induced apoptosis and DNA damage, arrested cells at the G2/M phase, and caused mitochondrial damage in esophageal cancer cells. More importantly, both Fv-LDP-D3 and Fv-LDP-D3-AE markedly inhibited the growth of esophageal cancer xenografts in athymic mice at well-tolerated doses. The present results indicate that Fv-LDP-D3 and Fv-LDP-D3-AE exert prominent antitumor efficacy associated with EGFR targeting, suggesting their potential as promising candidates for targeted therapy against esophageal cancer.

Introduction

Esophageal cancer (EC) is one of the most common cancers worldwide, with a high mortality and a relatively low overall 5-year survival rate. EC is divided into two pathological subtypes, namely esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma (EAC) (Pennathur et al., 2013; Fatehi Hassanabad et al., 2020; He et al., 2021). In the Chinese population, ESCC is the main EC subtype and has a high incidence (Pennathur et al., 2013; Lu et al., 2021). EAC is less common in China, while being more prevalent in Western countries, and has a short median survival time (Fujihara et al., 2017). Currently, chemotherapy represents the main treatment for EC, but it is associated with considerable toxicity (Liu et al., 2019). In recent years, targeted therapy has shown promising anti-tumor efficacy. EGFR is a member of the ERBB (erythroblastic leukemia viral oncogene homolog) receptor tyrosine kinase family of transmembrane receptor proteins, which include an extracellular ligand-binding domain, a transmembrane domain, and an intracellular kinase domain (Ciardiello & Tortora, 2003). EGFR is overexpressed in most esophageal tumors (Hanawa et al., 2006). In general, EGFR overexpression is more common in ESCC than in EAC (Kawaguchi et al., 2007). Cetuximab, an EGFR-inhibiting monoclonal antibody, has a significant therapeutic effect against EGFR-overexpressing ESCC (Zhu et al., 2018). Gong et al. used pingyangmycin and cetuximab to treat EC xenografts in athymic mice, reporting enhanced therapeutic efficacy under combination treatment as opposed to monotherapy (Gong et al., 2012).
In addition, the combination of cetuximab and trastuzumab has shown a synergistic antitumor effect against EC both in vitro and in vivo (Yamazaki et al., 2012). In general, antibody-drug conjugates (ADCs) consist of three parts: an antibody, a linker, and a small-molecule cytotoxic drug. The antibody provides the targeting, the linker connects the antibody to the small-molecule cytotoxic drug, and the latter exerts cytotoxicity against tumor cells. Hu et al. previously prepared an EGFR-targeting ADC (LR004-VC-MMAE), which exhibited favorable anti-tumor activity in mouse models of EC (Hu et al., 2019). Therefore, EGFR represents a promising target for the treatment of EC. Human serum albumin (HSA) can be internalized into cancer cells, where its degradation contributes to the supply of free amino acids, representing a major energy source for tumors (Davidson et al., 2017). This altered state of extracellular protein metabolism can be exploited for the targeted delivery of anti-tumor drugs. HSA domain III (D3)-modified single-chain variable fragments (scFvs) have an extended serum half-life while retaining their specific binding efficacy (Andersen et al., 2013). Targeted delivery of anti-tumor drugs via albumin has shown anti-tumor potential in previous studies (Kratz, 2014; Shan et al., 2018). Furthermore, IMPDH2 is a rate-limiting enzyme in the de novo biosynthesis of guanine nucleotides. It is overexpressed in various types of cancer, and its upregulation is related to poor prognosis, promoting tumor formation and development (Ying et al., 2018; Kofuji et al., 2019; Sahu et al., 2019). It has been suggested that IMPDH2 can be used not only as a biomarker for tumor diagnosis, but also as a potential therapeutic target for the treatment of malignant tumors. However, few studies have examined IMPDH2 as a potential therapeutic target for esophageal cancer. Lidamycin (LDM, also known as C-1027) is an anti-tumor antibiotic currently undergoing a phase II clinical trial. LDM is composed of an active enediyne chromophore (AE), with extremely potent cytotoxicity, and a non-covalently bound apoprotein (LDP). LDP can bind to AE and stabilize the enediyne structure through hydrophobic interactions. In particular, AE and LDP can be separated and reconstituted in vitro (Shao & Zhen, 1995; Tanaka et al., 2001; Shao & Zhen, 2008). Thus, LDM can be used as an effective 'warhead' agent for the construction of antibody-drug conjugates through a unique process of DNA recombination and molecular reconstitution. scFvs, with their good penetration and distribution in tumor tissues, have been used for the targeted delivery of anti-tumor drugs (Trebing et al., 2014). In previous studies, we constructed a novel recombinant fusion protein (Fv-LDP-D3) and its antibody-drug conjugate (Fv-LDP-D3-AE). In the recombinant fusion protein, Fv is an anti-EGFR single-chain variable fragment, LDP is the apoprotein of LDM, and D3 is domain III of HSA; the antibody-drug conjugate (Fv-LDP-D3-AE) was prepared by integrating the active enediyne chromophore AE of LDM into the fusion protein. That study demonstrated the efficacy of the antibody-drug conjugate (Fv-LDP-D3-AE) against K-Ras-mutated pancreatic cancer. However, whether Fv-LDP-D3 and Fv-LDP-D3-AE have anti-cancer activity against EC, and their underlying molecular mechanism, had not yet been studied.
In this study, we evaluated the anti-tumor activity of Fv-LDP-D3 and Fv-LDP-D3-AE against EC in vitro and in vivo, exploring their potential mechanisms of action. Based on the current findings, the constructed recombinant fusion protein and its antibody-drug conjugate may be potential candidates for EGFR-targeted EC therapy.

Reagents

The Annexin V FITC Apoptosis Detection Kit, Cell Cycle Assay Kit-PI/RNase Staining Kit, and Fluorescein Labeling Kit-NH2 were all purchased from Dojindo (Japan). The DyLight 680 Antibody Labeling Kit and BCA protein assay kit were purchased from Thermo Fisher Scientific (Waltham, MA, USA). The Cell Counting Kit-8 was purchased from NCM Biotech (Suzhou, China). Immobilon Western Chemiluminescent HRP Substrate was purchased from Millipore (Burlington, MA, USA). Matrigel® Basement Membrane Matrix was purchased from Corning (Glendale, AZ, USA).

Cell culture

The human ESCC cell lines KYSE150 and KYSE520 were purchased from Creative Bioarray, Inc. The Eca109 cell line was provided by our laboratory. The KYSE150, KYSE520, and Eca109 cell lines were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin. The mouse embryonic fibroblast cell line NIH3T3 was purchased from the Cell Center of Peking Union Medical College (Beijing, China) and cultured in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% FBS and 1% penicillin-streptomycin. All cells were cultured in a 37 °C, 5% CO₂ incubator.

Binding affinity of Fv-LDP-D3 in vitro

2.3.1. Enzyme-linked immunosorbent assay (ELISA)

KYSE150, KYSE520, and NIH3T3 cells were seeded in 96-well plates at 2 × 10⁴ cells per well, placed in a 37 °C, 5% CO₂ incubator, and cultured for 24 h. After fixing with 4% paraformaldehyde and blocking with 5% skim milk in PBS, different concentrations of Fv-LDP-D3 were added to the experimental wells and incubated for 2 h. An anti-His tag monoclonal antibody was then added, and the plate was incubated at 37 °C for 1 h, followed by the addition of horseradish peroxidase-conjugated goat anti-mouse IgG and incubation for 40 min. After each incubation, the 96-well plates were washed with PBST or PBS. Next, 3,3′,5,5′-tetramethylbenzidine was added, followed by 2 M H₂SO₄ to stop the reaction. The absorbance of the reaction wells was measured at 450 nm using a microplate reader (Thermo Fisher Scientific, Franklin, MA, USA).

Immunofluorescence

KYSE150 and KYSE520 cells were seeded on 24-well plates at 1 × 10⁵ cells per well and cultured in the incubator for 24 h. Cells were fixed with 4% paraformaldehyde, followed by three washes with PBS. Fv-LDP-D3 (400 ng/mL) was then added to the experimental wells of the 24-well plates, followed by incubation for 2 h at room temperature and washing with PBS three times for 3 min each. The anti-His tag monoclonal antibody was added, incubated at 37 °C for 2 h, and washed off with PBS three times, 3 min each. Alexa Fluor® 488-conjugated goat anti-mouse IgG was then added, and the cells were incubated in the incubator for 1 h and washed thrice with PBS. After adding anti-fade mounting medium (containing DAPI), cells were observed and photographed under a fluorescence microscope (200×).

Internalization of Fv-LDP-D3

KYSE150 and Eca109 cells were seeded in an 8-well chamber slide and cultured for 24 h.
After adding fluorescein-labeled Fv-LDP-D3 recombinant fusion protein (0.2 mg/mL) alone or in combination with the macropinocytosis inhibitor EIPA (80 µM), cells were incubated in the dark and then washed with PBS thrice. The cells were fixed with 4% paraformaldehyde for 0.5 h and washed thrice with PBS for 4 min each time. An anti-fluorescence attenuation sealing agent (containing 4′,6-diamidino-2-phenylindole, DAPI) was then added, and the cells were kept in the dark at room temperature for 15 min. The cells were then observed and photographed under a fluorescence microscope. Apoptosis and cell cycle arrest assays The pro-apoptotic effects of Fv-LDP-D3 and Fv-LDP-D3-AE on KYSE150 and Eca109 cells, as well as the effect of Fv-LDP-D3-AE on cell cycle arrest, were analyzed via flow cytometry. For apoptosis analysis, the cells were seeded in 6-well plates at a density of 2 × 10⁵ cells/well and cultured in a 37 °C incubator for 24 h. Different concentrations of Fv-LDP-D3 or Fv-LDP-D3-AE were then added to the wells, and untreated cells were used as controls. Cells were collected according to the instructions of the Annexin V FITC Apoptosis Detection Kit (Dojindo). For cell cycle analysis, cells were seeded at the same density and then collected as per the instructions of the Cell Cycle Assay Kit-PI/RNase Staining Kit (Dojindo). Apoptosis and cell cycle arrest were detected via flow cytometry. In vitro cytotoxicity assay The cytotoxicity of Fv-LDP-D3 and Fv-LDP-D3-AE was analyzed via the Cell Counting Kit-8 (CCK-8) method. Briefly, EC cell lines were seeded in 96-well plates at 3 × 10³ cells/well and placed in a 37 °C incubator for 24 h. The cells were then treated with either Fv-LDP-D3 (1 mg/mL) or different concentrations of Fv-LDP-D3-AE. After incubation for 24 h, 10 µL of CCK-8 reagent was added to each well and incubated for 1 h. Absorbance was measured at 450 nm using a microplate reader, with untreated cells as controls. Cell survival rate (%) was calculated as the absorbance of treated wells relative to that of untreated control wells × 100%. IC₅₀ values were calculated using SPSS software. Western blot Briefly, cells were lysed on ice using RIPA tissue/cell lysis buffer for approximately 30 min, and the protein concentration was quantified using the BCA protein assay kit. Equal amounts of protein were separated by SDS-PAGE and transferred to PVDF membranes. After blocking, the membranes were incubated with specific primary antibodies overnight and subsequently with HRP-conjugated anti-rabbit or anti-mouse IgG secondary antibodies. Protein bands were visualized with Immobilon Western Chemiluminescent HRP Substrate and captured using a FluorChem E imaging system. 5-Ethynyl-2′-deoxyuridine (EdU) assay The experimental procedure was performed according to the EdU kit instructions (Beyotime, China). Briefly, KYSE150 and Eca109 cells were seeded in an 8-well chamber slide and cultured at 37 °C for 24 h. Fv-LDP-D3-AE (12.5 ng/mL) was then added, and the cells were cultured for a further 24 h. EdU staining buffer was then added, and the cells were fixed with 4% paraformaldehyde. Nuclei were stained with Hoechst 33342. Cells were observed under a fluorescence microscope and photographed. Migration and invasion assay Transwell experiments were performed to evaluate cell migration and invasion.
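As a rough illustration of how the CCK-8 readout translates into a survival curve and an IC₅₀ estimate, the following minimal Python sketch computes percent survival from hypothetical absorbance values and interpolates the half-maximal dose; the dose series, the readings, and the log-linear interpolation shortcut are illustrative assumptions, not the study's SPSS procedure.

```python
import numpy as np

def survival_rate(od_treated, od_control):
    """Percent survival relative to untreated controls (CCK-8 readout)."""
    return 100.0 * np.asarray(od_treated) / od_control

# Hypothetical absorbance readings at 450 nm for one dose series
doses = np.array([0.1, 1.0, 10.0, 100.0])        # ng/mL, illustrative
od_treated = np.array([0.92, 0.78, 0.41, 0.12])  # treated wells
od_control = 0.95                                # untreated wells

surv = survival_rate(od_treated, od_control)

# Crude IC50 estimate: log-linear interpolation to 50% survival
# (survival decreases with dose, so reverse the arrays for np.interp)
log_ic50 = np.interp(50.0, surv[::-1], np.log10(doses)[::-1])
print(surv, 10 ** log_ic50)
```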
For the migration assay, KYSE150 and Eca109 cell suspensions of 1 × 10⁵ cells in serum-free culture medium were added to the upper wells of transwell chambers (Corning, USA), while 500 µL of culture medium containing 20% fetal bovine serum was added to the lower wells. Different concentrations of Fv-LDP-D3-AE were added to the upper wells, and the plates were cultured in a cell incubator for 24 h. Cells were fixed with 4% paraformaldehyde and stained with 0.1% crystal violet; cells remaining on the inner side of the chamber were gently removed with a cotton swab, and the membranes were then observed and photographed under an inverted microscope (100×). 33% acetic acid was then added for decolorization, and the absorbance at 570 nm was determined using a microplate reader. For the invasion assay, 45 µL of diluted Matrigel was added to each transwell chamber and placed in a 37 °C incubator for 2 h to solidify. The rest of the experiment followed the procedure described for the migration assay. Transmission electron microscopy After treating KYSE150 and Eca109 cells with Fv-LDP-D3-AE for 24 h, the cells were fixed with 2.5% glutaraldehyde and post-fixed with 1% osmium tetroxide, followed by dehydration through graded alcohol and 100% acetone. After uranyl acetate and lead citrate staining, the cells were observed and photographed using a transmission electron microscope (TEM). In vivo fluorescence imaging assay In vivo fluorescence imaging experiments were performed using KYSE150 and KYSE520 nude mouse xenograft models. The Fv-LDP-D3 recombinant fusion protein was labeled as per the DyLight 680 Antibody Labeling Kit instructions. When tumor sizes reached approximately 180-400 mm³, Fv-LDP-D3 was injected intravenously into the mice at a dose of 20 mg/kg. Mice were anesthetized with 2% isoflurane, and the in vivo distribution of the fusion protein was evaluated using the IVIS Imaging System (Xenogen). In addition, at the end of the experiment, the mice were euthanized by CO₂ inhalation, and the tumor tissues and main organs of the nude mice were excised for ex vivo fluorescence imaging. Imaging signals were analyzed using Living Image software (Xenogen, Alameda, CA, USA). 2.12. In vivo therapeutic efficacy 2.12.1. Effect of Fv-LDP-D3-AE in the KYSE150 xenograft model Female BALB/c-nu mice (18-22 g) were purchased from SPF (Beijing) Biotechnology Co., Ltd. KYSE150 cells (5 × 10⁶ cells/200 µL) suspended in PBS were inoculated subcutaneously into the right armpit of athymic mice. When tumors reached 400-500 mm³, the tumor blocks were removed from the nude mice and dissected in sterile saline. Tumor tissue fragments (2 mm³) were then transplanted into the right armpit of nude mice with a trocar, and the wound was sealed with celloidin. When the tumor volume reached about 80 mm³, the nude mice were randomized into two groups (n = 6 per group): a control group and an Fv-LDP-D3-AE (0.2 mg/kg) group. The treatment group received the drug by tail vein injection once a week for two consecutive weeks. During the experiment, the long and short diameters of the tumors were measured every 3 days, and the mice were weighed. The tumor volume was calculated as V = AB²/2, where A is the longest diameter of the tumor and B is the shortest diameter perpendicular to A.
The tumor growth inhibition rate was calculated as: [1 − (terminal tumor volume in the treatment group − initial tumor volume in the treatment group)/(terminal tumor volume in the control group − initial tumor volume in the control group)] × 100%. 2.12.2. Effects of Fv-LDP-D3 and Fv-LDP-D3-AE in the KYSE520 xenograft model When the tumor volume reached about 100 mm³, the nude mice were divided into groups according to tumor volume and body weight, with five mice per group: a control group, an Fv-LDP-D3 (20 mg/kg) group, an Fv-LDP-D3 (40 mg/kg) group, an Fv-LDP-D3-AE (0.25 mg/kg) group, and an Fv-LDP-D3-AE (0.5 mg/kg) group. As for the administration schedule, Fv-LDP-D3 (20 mg/kg and 40 mg/kg) was administered twice a week, while Fv-LDP-D3-AE (0.25 mg/kg and 0.5 mg/kg) was administered once a week. Treatment continued for two weeks, with administration via tail vein injection. The mice were euthanized at the end of the experimental period. Tumors and various organs were collected and fixed in 10% formalin for hematoxylin and eosin staining. In addition, the expression level of Ki-67 in tumor tissues was detected via immunohistochemistry. Statistical analysis Statistical analyses were performed using SPSS 17.0 (SPSS Inc., Chicago, IL) and GraphPad Prism 6 software (GraphPad Software, Inc., San Diego, CA, USA). All data are presented as the mean ± standard deviation. Statistical differences between groups were determined via Student's t-test or two-way ANOVA. p < .05 was considered to indicate significance (*p < .05; **p < .01; ***p < .001). Binding of Fv-LDP-D3 to EC cells The binding ability of Fv-LDP-D3 was evaluated via ELISA and immunofluorescence analysis. ELISA-based binding assays indicated that the Fv-LDP-D3 recombinant fusion protein bound to KYSE520 and KYSE150 cells in a concentration-dependent manner, whereas only very weak binding of Fv-LDP-D3 to the mouse embryonic fibroblast cell line NIH3T3 was observed (Figure 1(A)). Furthermore, KYSE520 and KYSE150 cells exhibited green fluorescence in the cell membrane and cytoplasm, indicative of Fv-LDP-D3 binding to EC cells in vitro (Figure 1(B)). Results are reported as means ± standard deviation (SD) (n = 3). NS: no significance; *p < 0.05, **p < 0.01, ***p < 0.001 versus the control group. Macropinocytosis-mediated uptake of Fv-LDP-D3 in EC cells The uptake of Fv-LDP-D3 by EC cells was examined under a fluorescence microscope. The recombinant fusion protein Fv-LDP-D3 was labeled with fluorescein, and the nuclei were stained with DAPI. As shown in Figure 2, green fluorescence was observed around the DAPI-stained nuclei, indicating that KYSE150 and Eca109 cells can take up the Fv-LDP-D3 protein. Upon addition of EIPA, a specific inhibitor of macropinocytosis, the green fluorescence intensity around the nuclei weakened, indicating that the Fv-LDP-D3 protein was taken up by EC cells via macropinocytosis. Inhibition of EC cell proliferation by Fv-LDP-D3 and Fv-LDP-D3-AE The CCK-8 assay was used to evaluate the cytotoxicity of Fv-LDP-D3 and Fv-LDP-D3-AE in KYSE150, KYSE520, and Eca109 cells. As shown in Figure 3(A), compared to the control group, Fv-LDP-D3 significantly inhibited the growth and proliferation of EC cells (*p < .05). In order to characterize the molecular mechanism of Fv-LDP-D3-related growth suppression, we first examined the expression levels of EGFR and IMPDH2 in different EC cell lines. The highest EGFR expression was detected in KYSE520 cells, and the highest IMPDH2 expression was detected in KYSE150 cells (Figure 3(B)).
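The caliper-volume and inhibition-rate formulas above are simple enough to script; the following minimal Python sketch computes them with made-up diameters rather than data from the study.

```python
def tumor_volume(a_mm, b_mm):
    """V = A * B^2 / 2, with A the longest diameter and B the
    perpendicular shortest diameter (both in mm)."""
    return a_mm * b_mm ** 2 / 2.0

def inhibition_rate(v_treated_end, v_treated_start,
                    v_control_end, v_control_start):
    """Tumor growth inhibition rate as defined in the text:
    [1 - dV(treated) / dV(control)] * 100%."""
    dv_treated = v_treated_end - v_treated_start
    dv_control = v_control_end - v_control_start
    return (1.0 - dv_treated / dv_control) * 100.0

# Illustrative numbers only (not data from the study)
v0_t, v1_t = tumor_volume(6.0, 5.0), tumor_volume(10.0, 8.0)
v0_c, v1_c = tumor_volume(6.0, 5.0), tumor_volume(14.0, 11.0)
print(inhibition_rate(v1_t, v0_t, v1_c, v0_c))  # ~68% for these inputs
```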
As shown in Figure 3(C), Fv-LDP-D3 inhibited EGFR phosphorylation and reduced IMPDH2 expression. In all tested cell lines, Fv-LDP-D3-AE exhibited strong cytotoxicity (Figure 3(D); Table 1). In KYSE150 cells, EGFR phosphorylation was downregulated, and the downregulation was more pronounced at higher concentrations (Figure 3(E)). In Eca109 cells, Fv-LDP-D3-AE also inhibited EGFR phosphorylation (Figure 3(E)). These findings indicate that the anti-proliferative effect of Fv-LDP-D3-AE may be mediated via suppression of EGFR phosphorylation. Cell proliferation was further analyzed via EdU staining. A lower proportion of EdU-positive cells was observed in the presence of Fv-LDP-D3-AE than among control cells, further suggesting that Fv-LDP-D3-AE inhibits cancer cell proliferation (Figure 3(F,G)). Inhibition of EC cell migration and invasion by Fv-LDP-D3-AE We evaluated the effect of Fv-LDP-D3-AE on the migration and invasion ability of KYSE150 and Eca109 cells via transwell assays. The experimental results are shown in Figure 4(A,B): Fv-LDP-D3-AE inhibited the migration of EC cells in a concentration-dependent manner. We added Matrigel to the transwell chamber to assess the influence of Fv-LDP-D3-AE on EC cell invasion. Fv-LDP-D3-AE inhibited the invasion of EC cells compared to the control treatment (Figure 4(C,D)). DNA damage, cell cycle arrest, and apoptosis of treated EC cells To determine the pro-apoptotic effects of Fv-LDP-D3 and Fv-LDP-D3-AE on EC cells, KYSE150 and Eca109 cells were incubated with different concentrations of the experimental formulations for 24 h. Higher drug concentrations induced greater apoptosis, with late apoptosis in particular being induced in a concentration-dependent manner (Figure 5(A,B)). Western blot experiments were performed to determine the effects of Fv-LDP-D3 and Fv-LDP-D3-AE on apoptosis-related protein expression. Fv-LDP-D3 suppressed caspase 3 and Bcl-2 expression in a concentration-dependent manner, while cleaved caspase 3 levels increased. Induction of ultrastructural changes by Fv-LDP-D3-AE in EC cells Transmission electron microscopy was used to study the effect of Fv-LDP-D3-AE on the ultrastructure of EC cells. In Fv-LDP-D3-AE-treated KYSE150 cells, the mitochondria were severely swollen, the mitochondrial membranes were disintegrated, the matrix was uneven, and the cristae were broken and reduced, indicative of prominent mitochondrial damage compared to the control. Similar changes were found in Eca109 cells (Figure 6). Taken together, these results indicate that Fv-LDP-D3-AE can damage the mitochondria of EC cells. In vivo and ex vivo fluorescence imaging of Fv-LDP-D3 The tissue distribution and tumor-targeted accumulation of Fv-LDP-D3 in KYSE520 and KYSE150 xenograft mouse models were evaluated using an optical molecular imaging system. As shown in Figure 7(A), the fusion protein Fv-LDP-D3 was successfully labeled with DyLight680. Fv-LDP-D3 exhibited superior tumor-targeting capability in KYSE520 xenograft models compared to KYSE150 xenograft-bearing mice. The fluorescence intensity observed in nude mice subcutaneously inoculated with KYSE520 cells peaked at 72 h after intravenous administration. Notably, DyLight680-labeled Fv-LDP-D3 targeted the tumor site in vivo, and the fluorescence showed long-lasting retention over a period of 26 days (Figure 7(B)).
Moreover, DyLight680-labeled Fv-LDP-D3 localized to the tumors excised from the nude mice, whereas the other organs did not display detectable fluorescence, indicating that Fv-LDP-D3 was specifically distributed to the tumor site (Figure 7(C); in the ex vivo fluorescence images, labels 1-7 denote the tumor, heart, liver, spleen, kidney, small intestine, and femur taken from the dissected xenograft-bearing mice, respectively). In vivo therapeutic efficacy We evaluated the therapeutic effect of Fv-LDP-D3-AE against KYSE150 xenografts in athymic mice. During the experimental period, Fv-LDP-D3-AE suppressed tumor growth. The tumor inhibition rate of Fv-LDP-D3-AE at 0.2 mg/kg was 64% compared with the controls (p < .05) (Figure 8(A)). No prominent toxicity was observed in the treated mice: none of them died, and weight loss was less than 10% (Figure 8(B)). To further study the therapeutic effects of Fv-LDP-D3 and Fv-LDP-D3-AE, we evaluated their anti-tumor activity in the KYSE520 xenograft model. The tumor inhibition rates of Fv-LDP-D3 (20 mg/kg) and Fv-LDP-D3 (40 mg/kg) were 73% and 81%, respectively, and those of Fv-LDP-D3-AE (0.25 mg/kg) and Fv-LDP-D3-AE (0.5 mg/kg) were 69% and 89%, respectively (Figure 8(C,D)). The daily activity and food intake of the treated mice were normal, and the observed weight loss was within 10% (Figure 8(E)). Compared with the control group, Fv-LDP-D3 and Fv-LDP-D3-AE inhibited the expression of Ki-67 in tumor tissue (Figure 8(F)). At the end of the experiment, the mice were euthanized, and specimens of various organs were collected and processed for the preparation of hematoxylin and eosin (H&E)-stained sections; the tissue was then examined under a microscope. No histopathological changes were found in the heart, liver, spleen, lung, kidney, stomach, or small intestine of mice in the Fv-LDP-D3 (40 mg/kg) and Fv-LDP-D3-AE (0.5 mg/kg) treated groups or in the controls, indicating that the administered doses were well tolerated (Figure 9). Discussion EC remains associated with high morbidity and mortality rates. While chemotherapy enables local control of tumors and may prevent distant metastasis, patients often develop resistance against chemotherapeutics (Vrana et al., 2018). Although many new therapeutic and diagnostic methods have been introduced, disease recurrence rates remain high, prognosis remains poor, and there is still no curative treatment. In addition, research into the treatment of ESCC has revealed a number of potential therapeutic targets, such as Orai1-mediated calcium signaling and PFKL (phosphofructokinase, liver type); however, further research is still needed (Cui et al., 2018; Zheng et al., 2022). EGFR is implicated in tumor development, progression, and metastasis via downstream signaling cascades involving AKT, MEK, and ERK (Yewale et al., 2013). In our previous studies, we constructed a recombinant EGFR/CD13 dual-targeting fusion protein, which exhibited anti-tumor efficacy (Guo et al., 2010; Sheng et al., 2017). Therefore, EGFR-targeting ADC formulations may represent a promising therapeutic approach for the treatment of EC. An albumin-LDM conjugate previously constructed by our research group exhibited anti-tumor efficacy both in vitro and in vivo (Li et al., 2018). Further, other albumin-based drugs have already been approved for the treatment of cancer (Kratz, 2008). Taken together, these findings indicate that albumin is an effective carrier for the specific delivery of drug molecules to tumor sites.
In our previous research, a recombinant fusion protein and an ADC based on the third domain of albumin and an anti-EGFR scFv were prepared via recombinant technology. In the present study, we demonstrated that the recombinant fusion protein Fv-LDP-D3 has high affinity for EGFR-overexpressing EC cells, ensuring localization to the tumor and reducing toxicity to normal cells. NIH3T3 cells lack EGFR overexpression (Guo et al., 2014), and Fv-LDP-D3 showed only very weak affinity for them. Fv-LDP-D3 was taken up by EC cells through macropinocytosis. In vivo imaging studies demonstrated that Fv-LDP-D3 selectively accumulated at tumor sites in the KYSE520 xenograft model, where it was retained for a long period of time, suggesting that the properties of human serum albumin may prolong the half-life of the drug. We observed superior tumor targeting in the KYSE520 xenograft model compared to the KYSE150 xenograft model, which may be due to the higher expression of EGFR in KYSE520 cells. The present findings support the feasibility of using EGFR as the therapeutic molecular target and albumin as the carrier for targeted cancer therapy. Fv-LDP-D3 inhibited the proliferation of EC cells at relatively high concentrations, which was attributed to the suppression of EGFR phosphorylation and IMPDH2 signaling. IMPDH2 is overexpressed in a variety of tumors and can promote cell invasion as well as migration (Duan et al., 2018). Thus, IMPDH2 overexpression is closely related to tumor progression (Zou et al., 2015). Shikonin, a selective IMPDH2 inhibitor, was reported to suppress the growth of the triple-negative breast cancer cell line MDA-MB-231 (Wang et al., 2021). The recombinant fusion protein Fv-LDP-D3 suppressed IMPDH2, which suggests a new avenue for the development of esophageal cancer drugs. It should be noted that targeted inhibition of p-EGFR/IMPDH2 signaling may improve the efficacy of esophageal cancer treatment, but whether IMPDH2 and EGFR signaling are mechanistically linked requires in-depth experimental exploration. Fv-LDP-D3 induced the apoptosis of EC cells in vitro by upregulating apoptotic factors (including cleaved PARP and cleaved caspase-3) and suppressing the expression of anti-apoptotic proteins (such as Bcl-2). Various anti-cancer drugs cause DNA damage, leading to the activation of cell cycle checkpoints and subsequent proliferation arrest (Montano et al., 2012). Fv-LDP-D3-AE treatment induced DNA damage, as indicated by the increased expression of p-Histone H2A.X, while suppressing Chk1 expression. In addition, Fv-LDP-D3-AE inhibited the migration and invasion of EC cells in vitro, induced apoptosis, induced cell cycle arrest at G2/M, and compromised mitochondrial structure. Fv-LDP-D3-AE also suppressed EGFR phosphorylation. Further, Fv-LDP-D3-AE (0.2 mg/kg) exhibited a tumor inhibition rate of 64% in the KYSE150 mouse xenograft model. Additionally, in the KYSE520 tumor model, both the recombinant fusion protein Fv-LDP-D3 and its antibody-drug conjugate showed strong therapeutic effects; in particular, the antibody-drug conjugate exhibited improved therapeutic efficacy compared with the fusion protein alone. Taken together, our research suggests that Fv-LDP-D3 and Fv-LDP-D3-AE may be promising agents for the treatment of EGFR-positive esophageal tumors.
Conclusions In summary, the present study demonstrated that the recombinant fusion protein Fv-LDP-D3, which consists of the Fv fragment of an anti-EGFR antibody, the apoprotein of lidamycin (LDP), and the third domain of human serum albumin (D3), and its drug conjugate Fv-LDP-D3-AE, prepared by integrating the active enediyne chromophore (AE) into the fusion protein, both exhibited strong anti-tumor activity against EC cells in vitro and effectively inhibited the growth of cancer xenografts in vivo. The current findings highlight the therapeutic potential of Fv-LDP-D3 and Fv-LDP-D3-AE for esophageal cancer.
Chiral visible light metasurface patterned in monocrystalline silicon by focused ion beam High refractive index makes silicon the optimal platform for dielectric metasurfaces capable of versatile control of light. Among the various modifications of silicon, its monocrystalline form has the weakest visible light absorption but requires a careful choice of the fabrication technique to avoid damage, contamination, or amorphization. The presently prevailing chemical etching can shape thin silicon layers into two-dimensional patterns consisting of strips and posts with vertical walls and equal height. Here, the possibility to create silicon nanostructures of truly three-dimensional shape by means of focused ion beam lithography is explored, and a 300 nm thin film of monocrystalline epitaxial silicon on sapphire is patterned with a chiral nanoscale relief. It is demonstrated that exposing silicon to the ion beam causes a substantial drop of the visible transparency, which, however, is completely restored by annealing with oxidation of the damaged surface layer. As a result, the fabricated chiral metasurface combines high (50-80%) transmittance with a circular dichroism of up to 0.5 and an optical activity of up to 20° in the visible range. Being also remarkably durable, it possesses crystal-grade hardness, heat resistance up to 1000 °C, and the inertness of glass. For mono-c-Si metasurface fabrication, the choice of patterning technique is critically important, as it can easily generate structural defects and convert mono-c-Si into poly-c-Si or even a-Si, which boosts the optical absorption loss to inappropriately high levels [18,19]. Electron beam lithography followed by chemical etching is widely employed, as it can produce regular vertical grooves and wells in mono-c-Si films without noticeably affecting the quality of the remaining material. However, the technique is incapable of creating complex submicrometer shapes, and the presently known more sophisticated etched structures have sizes of at least a few micrometers [20]. Accordingly, the most common lithographically cut optical silicon metasurfaces consist of nanoridges [6], nanorods [12], nanodisks [16,21], nanopillars [11,22], nanoposts [5], nanofins [23], etc., which are all effectively two-dimensional shapes with vertical walls and a height bound to be equal to the thickness of the initial mono-c-Si layer. As an alternative, the simplest 3D shapes, nanospheres, consisting of poly-c-Si have been produced and then converted into the mono-c-Si form by subsequent femtosecond laser pulse irradiation [24]. Recently, poly-c-Si nanoparticles of more complex chiral crescent shapes have been fabricated by means of the gradient mask transfer technique [25]. Since its early days, focused ion beam (FIB) lithography has been recognized as a convenient tool for the nanofabrication of emerging optical materials [26,27]. The technique produces a minor effect on the optical response of metals, which is dominated by the conduction electrons, and has been applied for shaping the metallic parts of numerous plasmonic metasurfaces and metadevices [28-30]. Current dual-beam electron-ion microscopy systems allow fast processing of relatively large areas. The advanced options of controlling the FIB scanning sequence by programmable digital templates enable the fabrication of arrays with truly three-dimensional (3D) unit cell shapes [31-33].
Although technically capable of versatile silicon patterning [34], FIB lithography is known to produce 30-50 nm thick damaged layers [35]. The related significant increase of the optical absorption loss [18] has prevented the technique from being applied to the fabrication of Si-based optical structures. Although some early works suggested the possibility of restoring the functionality of FIB-exposed silicon waveguides by annealing [19], the particular prospects remained unexplored for a decade. Our recent electron microscopy studies have confirmed that FIB patterning of a silicon-on-sapphire (SOS) film generates a 50 nm thick damaged surface layer, which is also heavily contaminated with implanted gallium atoms [36]. Optical transmission measurements confirmed this layer to have an extremely adverse effect on the optical properties. High-temperature annealing with oxidation was found to transform it into an oxide glass-like coating and to restore the transparency. Intriguing indications of polarizing features of the patterned SOS layer, otherwise overshadowed by the strong light absorption, were observed. In this paper, we perform a comprehensive study of optically functional, complex-shaped, mono-c-Si-based dielectric metasurfaces produced by fast and precise FIB patterning followed by appropriate high-temperature annealing. To illustrate the merit of the advance, we fabricate a chiral, 4-fold rotationally symmetric mono-c-Si metasurface. Optical chirality critically depends on the lack of mirror symmetry planes [37], and although formally the symmetry of planar 2D structures can be broken by the presence of substrates [38,39], in optical experiments such structures are clearly outperformed by those of proper 3D-chiral shape [40]. As the widely accepted chemical etching is restricted to producing flat Si blocks, the chiral Si-based metasurfaces known so far rely on the symmetry breaking by the substrate. Due to the lack of rotational symmetry axes, such metasurfaces also do not possess circularly polarized optical eigenmodes [12,41]. We demonstrate that applying the FIB technique provides a valuable opportunity to handle sophisticated 3D shapes and to create chiral silicon metasurfaces free of such disadvantages, combining high transparency with strong optical chirality and notable durability. We describe the fabrication routine and precisely reconstruct the final composition of the unit cell. We perform comprehensive optical studies and show that they are in line with the results of full-scale electromagnetic simulations. We also formulate a simple semi-phenomenological model that points at a chiral guided-mode resonance as the origin of the strong optical chirality. Results and Discussion Fabrication and structure reconstruction. Symmetry aspects are of key importance for the design of chiral metasurfaces. While chirality by itself requires the absence of mirror symmetry planes, rotational axes of order 3 and higher ensure that circularly polarized waves are the true eigenmodes and that all polarization transformations during the transmission of normally incident light are caused by the metasurface chirality [42]. Accordingly, we select a 4-fold symmetric square lattice of twisted crosses, shown in Fig. 1(a). In order to avoid light diffraction in the visible range, the lattice period is set to 370 nm. This way, circular areas of 68 μm in diameter are patterned by the FIB (see Methods).
Samples differently oriented with respect to the crystallographic axes of the sapphire substrate are fabricated, along with a few smaller test samples for reconstruction purposes. As seen in Fig. 1(b), the patterning results in a smooth, complex-shaped relief of the mono-c-Si surface. Its precise reconstruction (see Methods) by means of atomic force microscopy (AFM) yields the averaged unit cell relief map presented in Fig. 1(c), where the colour specifies the height as measured from the flat Si/Al₂O₃ interface. The corresponding digital 3D model of the patterned SOS presented in Fig. 1(d) shows that the FIB creates 150 nm deep and ∼100 nm wide curved grooves. Note that the maximum SOS height exceeds the initial 300 nm thickness of the SOS layer, which indicates substantial redeposition of silicon from the grooves onto the adjacent regions unexposed to the FIB. It has been established that shaping mono-c-Si with FIB damages its structure and creates layers which strongly absorb visible light [18]. The optical studies of the fabricated metasurface presented below confirm this general rule. Our recent transmission electron microscopy studies [36] have revealed that processing with FIB gives rise to a 50 nm thick layer of damaged silicon, in which the crystalline structure is altered and Ga atoms are implanted. In order to restore the transparency in the visible range, the samples are subjected to annealing with surface thermal oxidation (see Methods). This produces significant changes to the surface relief, as illustrated by the typical SEM view shown in Fig. 2(a), where the complex chiral pattern is unobservable beneath the almost flat top SiO₂ surface. AFM studies reveal a relief with a root mean square roughness of 0.5 nm and in-plane correlation lengths in the range between 15 and 20 nm. On the larger scale, the surface has a weak periodic modulation, and its average unit cell map of heights measured from the Si/Al₂O₃ interface is shown in Fig. 2(c). At the same time, sequences of FIB cross sections of the test samples (see Fig. 2(b)) reveal that the Si/SiO₂ interface underneath the SiO₂ layer has a truly chiral complex shape. By means of the FIB 3D reconstruction, we resolve this shape and create the average unit cell map of heights measured from the flat Si/Al₂O₃ interface presented in Fig. 2(d). Combining all the microscopic data, we produce the overall 3D model of the three-layered structure of the annealed chiral metasurface shown in Fig. 2(e). Apparently, the oxidation consumes a substantial part of the patterned SOS layer, but the remaining mono-c-Si retains a pronounced shape chirality. Optical properties. Microspectrometric transmission measurements (see Methods) demonstrate the efficiency of annealing with oxidation for restoring the metasurface transparency in the visible range. As shown in Fig. 3, the transmission spectrum of the unprocessed plain SOS exhibits classical Fabry-Perot oscillations, which indicate the homogeneity and transparency of the epitaxial mono-c-Si layer. After the FIB patterning, the same measurements reveal a dramatic increase of the light absorption, as the transmission decreases by an order of magnitude. Upon annealing, the overall level of transparency is restored, while the transmission spectrum apparently differs from that of the initial plain SOS layer.
From the symmetry point of view, for normally incident light the optical properties of a chiral 4-fold rotationally symmetric metasurface are identical to those of a layer of an isotropic chiral medium [42]. The corresponding key optical chirality parameters, circular dichroism (CD) and optical activity (OA), can be resolved by different methods and are independent of the sample orientation with respect to the incident linear polarization as well as, due to the Lorentz reciprocity, of the side of incidence [37]. Therefore, in order to confirm that the annealed metasurface retains the translational and rotational symmetries and to unambiguously characterize its chirality, we combine two different optical techniques. In the first method, hereinafter denoted as transmission ellipsometry, OA-spectropolarimetry and CD-spectrophotometry of the transmitted light are performed simultaneously (see Methods). Preliminarily, for the linearly polarized light incident normally from the sapphire substrate, a pair of mutually orthogonal polarization directions are identified as those preserved during the transmission through the unprocessed SOS areas. Next, for those linear polarizations, the polarization state of the light transmitted through the metasurfaces is analysed. In this approach, the metasurface OA is defined as the angle of rotation of the polarization azimuth of the transmitted light. In the second method, which we call CD-spectrometry, the left circularly polarized (LCP) and right circularly polarized (RCP) transmittances, T_L and T_R correspondingly, are resolved by means of microspectrometric measurements with the circularly polarized light incident directly onto the metasurface samples from the air. In this technique, CD is evaluated as CD = (T_R − T_L)/(T_R + T_L). Generally, we observe an encouraging consistency of the CD and OA spectra measured for differently oriented samples and using different incident polarizations. Averaging the optical data over the samples and polarizations allows improving the ellipsometry data quality in a broad wavelength range. Comparison of the CD spectra resolved by means of the ellipsometry and the spectrometry in Fig. 4 confirms this consistency; the ellipsometric data shown in Fig. 4(c) exhibit a higher noise level at longer wavelengths due to the lower spectral efficiency of the ellipsometer spectrometer, resulting in lower consistency of the spectra of different samples and for different incident polarizations. For a final check, we employ the Kramers-Kronig relations between OA and CD. Being well known for natural weakly chiral materials [37,43], the relations can be generalized for strongly chiral metasurfaces [32]. Accordingly, we evaluate the spectrum of OA from the broadband spectrum of CD measured by the transmission ellipsometry. As seen in Fig. 4(c), the obtained dataset accurately follows the experimental OA values. Altogether, the thorough analysis of the collected optical data unambiguously proves that the metasurface samples retain their rotational symmetry and combine strong optical chirality with high transparency. Numerical simulations and discussion. For the full-scale electromagnetic modelling (see Methods), we take the metasurface unit cell obtained by combining the microscopic data (see Fig. 2(e)) and assume the tabulated permittivity of mono-c-Si [44], ε = 2.25 for the SiO₂ layer, and ε = 3.1 for the sapphire substrate. As shown in Fig. 4(a), the simulations quantitatively reproduce the overall trend of the metasurface transmittance in a very broad wavelength range.
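As a minimal illustration of the CD-spectrometry bookkeeping, the following Python sketch computes the CD spectrum from circularly polarized transmittance arrays and averages it over samples, as described above; the spectra here are randomly generated placeholders, not measured data.

```python
import numpy as np

# Hypothetical measured spectra: rows = samples, columns = wavelengths
wavelengths = np.linspace(400, 1050, 651)                 # nm
T_R = np.random.uniform(0.4, 0.8, (4, wavelengths.size))  # RCP transmittance
T_L = np.random.uniform(0.4, 0.8, (4, wavelengths.size))  # LCP transmittance

# CD as defined in the text, per sample and per wavelength
CD = (T_R - T_L) / (T_R + T_L)

# Averaging over samples (and, in the ellipsometric method, over the two
# incident linear polarizations) improves the data quality
CD_mean = CD.mean(axis=0)
```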
Importantly, the simulated transmittance is very sensitive to the permittivity values assigned to Si within the multilayered structure shown in Fig. 2(e). For example, taking exactly the same structure but with the elevated silicon parts consisting of amorphous a-Si [10] lowers the simulated transmittance by several times everywhere below the 800 nm wavelength. Therefore, the consistency of the observed and simulated data indicates a good optical quality of the silicon remaining after the oxidation. The calculated transmittance spectrum exhibits a number of sharp dips down to rather low values, some of which (see, e.g., the vicinity of the 965 nm wavelength) are resolved by the experiment, although as considerably smoother and shallower ones. Sharper transmission dips are either observed as merged into a single broader one (as those at the 642 and 656 nm wavelengths) or remain unobserved (e.g., the one at 830 nm). According to the simulated absorption spectrum, the transmission dips are accompanied by a sharp resonant increase of the losses, which otherwise stay at a background level below 10% in the red and near-infrared ranges. Comparison of the simulated and measured optical chirality parameters in Fig. 4(b) and (c) indicates the overall adequacy of our numerical model, which reproduces most of the complex features of the OA and CD spectra. One can see that the simulated CD spectrum has a number of sharp resonances at the wavelengths exactly corresponding to the transmission dips, while the simulated OA spectrum exhibits sharp antiresonances at those wavelengths. In the experiment, some of these sharp features are observed as smoother CD resonances and OA antiresonances. Sharp spectral anomalies indicate the presence of high-quality-factor resonances, which are generally typical for dielectric metasurfaces. In a metasurface consisting of a patterned layer of a highly refracting transparent material, such resonances appear in the form of so-called guided-mode resonances, which occur when the incident plane waves diffract into modes guided by the layer. Some years ago, it was shown how a periodic chiral patterning of a TiO₂ layer gives rise to very similar narrow CD resonances and OA antiresonances [45]. In our case, this occurs in a layer of mono-c-Si with a 1.5 times higher refractive index, which makes the grid of resonances noticeably denser. At shorter wavelengths, this causes their merging into a hardly comprehensible grid of peaks and dips in the spectra of the optical observables. At longer wavelengths, one can isolate and analyze a single resonance, as illustrated in Fig. 5 by a detailed view of the simulated absorption peak, transmission dip, CD resonance, and OA antiresonance around the 755 nm wavelength, also observed in the experiments. One can formulate a relatively simple analytical model describing the emergence of optical chirality due to a guided-mode resonance, similar to the recent chiral coupled-mode (COM) model developed for metallic nanostructures with extreme optical chirality due to multiple plasmon resonances [42]. Although the general formulation of the chiral COM model implies the introduction of many phenomenological parameters, their number can be substantially reduced by applying the basic principles of symmetry and reciprocity. In particular, reciprocity enforces a degeneracy of the conjugated resonances, which have to possess equal frequencies and half-widths [42].
Note that it is even easier to apply this methodology to the chiral silicon metasurface, as its resonances possess higher quality factors and one can neglect their coupling. Accordingly, for an isolated resonance as in Fig. 5, we apply a reduced single-resonance version of the formalism developed for plasmonic metasurfaces [42]. The solution of the COM model equations yields a Lorentzian dispersion of the absorption, Eq. (1): A′,″_R,L(ω) = α_0 + |α′,″_R,L|² γ²/((ω − ω_0)² + γ²), which is different for the RCP and LCP polarized incident waves (subscripts R and L) and for the two sides of incidence (superscripts ′ and ″). The transmission amplitudes possess a Fano-type dispersion, Eq. (2): t_R,L(ω) = τ_0 + τ_R,L γ/(γ + i(ω − ω_0)), and are independent of the side of incidence, in accordance with the Lorentz reciprocity. Here the parameters α_0 and τ_0 describe the direct achiral non-resonant absorption and transmission, respectively, ω_0 is the resonance frequency, and γ is its half-width. The partial absorption and transmission amplitudes, α′,″_R,L and τ_R,L, are determined by the parameters of the coupling of the conjugate resonances to the free-space circularly polarized plane waves. The metasurface optical chirality is then explained as a result of the chirality of this coupling. The observable transmission characteristics, expressed as T_R,L = |t_R,L|², OA = ½(arg t_L − arg t_R), and CD = (|t_R|² − |t_L|²)/(|t_R|² + |t_L|²), are all determined by a few model constants: the resonance frequency ω_0 and half-width γ, and the partial transmission amplitudes τ_0 and τ_R,L. Since only the difference between the phases of the complex amplitudes t_R and t_L is relevant, the observables are fully characterized by 7 real parameters. As illustrated in Fig. 5, this simple approach allows us to reproduce very precisely all the data in the vicinity of an isolated resonance obtained by the full-scale FDTD simulations. In accordance with Eq. (1), the absorption peaks in Fig. 5(a) have the Lorentzian shape, and their equal resonance frequency and half-width yield a resonance quality factor as high as ω_0/γ ≈ 190. For comparison, this value exceeds by an order of magnitude that typical of plasmon resonances hosted by metallic chiral metasurfaces: the quality factors of the resonances of relatively transparent arrays of complex-shaped particles [40], as well as of more opaque arrays of chiral holes [42], do not exceed 20. Although narrower resonances with even higher quality factors are predicted by the FDTD simulations, they remain unobserved experimentally. Such guided-mode resonances rely on an extremely weak coupling of the leaky guided modes with the free-space plane waves and require a high degree of the metasurface translational order. In our case, the scattering loss caused by the structure imperfections overshadows the weak coupling and suppresses the excitation of the narrower resonances. We believe also that the scattering is generally responsible for the considerable widening and shallowing of all observed resonances compared to those in the simulated spectra. As seen in Fig. 5(a), the 3D-chiral patterning of the Si layer gives rise to a strong selectivity of the coupling of the sharp resonances to the free-space circularly polarized plane waves. This allows achieving strong OA and CD in a range of relatively high transmittance. As the electromagnetic fields of optical resonances of dielectric nanostructures are generally confined in the parts with a high refractive index, the shape of the Si layer is more important than its environment.
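To make the single-resonance model concrete, the following minimal Python sketch evaluates the Fano-type transmission amplitudes of Eq. (2) for RCP and LCP waves and derives the observables defined above; the coupling amplitudes and the normalized frequency scale are illustrative assumptions, chosen only to reproduce the qualitative CD resonance and OA antiresonance shapes, not fitted values from the study.

```python
import numpy as np

# Illustrative single-resonance chiral COM parameters (assumed, not fitted)
w0 = 1.0              # resonance frequency (normalized units)
gamma = w0 / 190      # half-width giving the quoted quality factor Q ~ 190
tau0 = 0.85 + 0.0j    # direct non-resonant transmission amplitude
tau_R = -0.20         # resonant coupling amplitude, RCP
tau_L = -0.60         # resonant coupling amplitude, LCP

w = np.linspace(w0 - 10 * gamma, w0 + 10 * gamma, 2001)
res = gamma / (gamma + 1j * (w - w0))  # common resonant factor of Eq. (2)

# Fano-type transmission amplitudes for the two circular polarizations
t_R = tau0 + tau_R * res
t_L = tau0 + tau_L * res

T_R, T_L = np.abs(t_R) ** 2, np.abs(t_L) ** 2
CD = (T_R - T_L) / (T_R + T_L)              # CD resonance
OA = 0.5 * (np.angle(t_L) - np.angle(t_R))  # OA antiresonance, radians
```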
To illustrate this, we also simulate the light transmission through a similarly arranged 2D-chiral silicon structure, in which the pattern shown in Fig. 1(a) consists of twisted crosses of 70 nm wide slits with vertical walls cut through a 150 nm thin SOS layer. From the formal symmetry point of view, the optical chirality of such a structure appears only due to the mirror symmetry breaking by the presence of the sapphire substrate. Physically, the 2D-chiral Si blocks host sharp dielectric resonances, while the substrate perturbs them and provides a chiral selectivity to their coupling to the free-space light. This effect, however, appears to be very weak, and we are able to achieve substantial optical chirality only upon a strong resonant suppression of the transmission, i.e., with both LCP and RCP transmittances at the level of a few percent. One should also note the key positive role of the regular light absorption in mono-c-Si in the build-up of the optical chirality. It is known that the CD of rotationally symmetric metasurfaces appears solely due to the difference of the RCP and LCP light absorption [42]. On the other hand, the generalized Kramers-Kronig relations imply that the CD resonances are accompanied by proportionate OA antiresonances [32]. Therefore, in the range of weak light absorption in mono-c-Si in the near-infrared, even strong resonances producing a pronounced impact on the observed transmittance do not result in substantial CD or OA (see, for instance, Fig. 4 in the vicinity of the 965 nm wavelength). The simulated spectra are a good reference point to discuss the optical capabilities of 3D-chiral silicon metasurfaces. As illustrated by Fig. 5, one can realize an efficient circular polarizer transmitting about 45% of the RCP and only 1-2% of the LCP light in a narrow resonant range. Although the current structure lacks broadband chiral effects, its high-quality-factor resonances are remarkably similar to those of the chiral metallic metasurfaces in the microwave [47] and terahertz [48] ranges. There, it is possible to achieve broadband OA by exploiting specific combinations of the resonances that give rise to the so-called Blaschke contribution to the OA [32,48]. A chiral silicon metasurface with similarly arranged resonances would operate as a broadband omnidirectional polarization rotator for visible light. More generally, the fabricated metasurface illustrates the feasibility of creating arbitrarily shaped 3D silicon nanostructures for the visible range using commonly available FIB laboratory equipment. While currently the theoretical design of silicon metasurfaces is primarily focused on the available lithographically cut structures, nanopatterned silicon layers are also attracting attention, and, for instance, recent simulations have shown that their guided-mode resonances facilitate nonreciprocity of the nonlinear transmission regime [49]. Conclusion We show that FIB lithography followed by annealing with oxidation of the damaged layer can produce complex-shaped mono-c-Si metasurfaces. As a particular example, we apply a 4-fold symmetric chiral patterning to a mono-c-Si layer on sapphire and fabricate a metasurface which combines high transparency with strong optical chirality. Light transmission experiments verify the metasurface rotational chiral symmetry and suggest that its performance is limited by the scattering on the imperfections arising during the annealing.
Adjusting the fabrication routine to improve the translational order should allow achieving even higher levels of the optical chirality. Methods FIB patterning. FIB patterning of the commercial 300 nm epitaxial mono-c-Si films on monocrystalline sapphire is performed using an FEI Scios DualBeam system. For the patterning, a beam of Ga⁺ ions with a current of 0.1 nA accelerated to an energy of 30 keV is used. The beam path is controlled by digital templates which specify the subsequent milling of square unit cells, and within each cell the FIB follows the twisted cross paths starting from the center outwards, as illustrated in Fig. 1(a). Annealing with oxidation. Preceded by sample cleaning in an ex-situ argon-oxygen plasma for one hour in a Fischione 1070 NanoClean system, the annealing is carried out in a custom furnace for 30 minutes in dry air at 1100 °C, which, according to the Massoud silicon oxidation model [50], oxidizes the damaged surface layer. AFM 3D reconstruction. All obtained AFM images are processed with a specific numerical routine that includes a subtraction of the tip curvature radius, noise reduction, and a two-step averaging over all unit cells and their 4-fold rotations [42,51]. FIB 3D reconstruction. Reconstruction of the oxidized annealed metasurface is preceded by the deposition of a 200 nm thick Pt protection layer using the FEI Scios gas injection system. The initial 20 nm thick Pt layer is deposited by electron-beam-induced deposition, while the rest is deposited using the much faster FIB-induced deposition. To perform the 3D reconstruction, 120 equidistant consecutive sections with a pitch of 20 nm and a depth of 1.5 μm are milled with a FIB of Ga⁺ ions with a current of 0.1 nA accelerated to an energy of 30 keV. SEM images of each nanostructure cross section are acquired using the T2 in-lens detector, which enhances the contrast between the constituent materials. The images are then combined in a pool and post-processed with the FEI Avizo 9 software. The relative alignment of successive images is determined by an ordinary least squares algorithm. The material layer segmentation is performed using a gray-scale-based threshold filtering, which allows preserving the small shape features. The obtained profiles of the interfaces between the layers are subjected to the same averaging routine [51] as the AFM data above. Microspectrometry. Light transmission measurements are performed in a wide spectral range (400-1050 nm) using a setup with an Olympus CX31-P trinocular polarized-light microscope equipped with a fiber-optic Avantes AvaSpec-2048-USB2-UA spectrometer. Measurements are done with a 40× microobjective, which allows collecting the light transmitted through a round region with an area of about 30 μm². The microscope halogen lamp is used as a broadband unpolarized light source. Transmission ellipsometry. OA and CD are measured by using linearly polarized normally incident light and characterizing the polarization state of the transmitted light with a Horiba Jobin-Yvon UVISEL 2 spectroscopic ellipsometer. The obtained ellipsometric angles are recalculated by custom scripts into the polarimetric values corresponding to OA and CD. The rectangular light spot with an area of 50 × 70 μm² is sufficiently small to study individual metasurface samples. The surface of the R-cut sapphire substrate forms an angle of 57.6° with the optical axis of the crystalline sapphire [52], and the substrate birefringence also contributes to the measured polarizing properties.
To identify the metasurface contribution and eliminate that of the substrate, two specific linear polarizations of the light incident on the metasurface from the sapphire side are used. They are chosen empirically so that no polarization transformation occurs during the transmission through the unprocessed SOS areas, which corresponds to the light polarized along or perpendicular to the projection of the sapphire optical axis onto the R-cut surface. CD-spectrometry. CD spectrometry is performed using a CLSM FV1000 confocal laser scanning microscope based on an IX81 inverted microscope equipped with an Olympus scanning unit with a spectral-type detection system. An external Schott ACE halogen light source is used with the internal IR interference filter removed to expand the spectrum. The LCP and RCP light states incident from the air onto the metasurface are obtained by a Moxtek UBB01A ultra-broadband wire-grid polarizer combined with an achromatic Thorlabs AQWP05M-600 quarter-wave plate. A 10× microobjective is used, and the spatial resolution of at least 3 μm is regulated by the size of the confocal aperture. Spectra in the range 400-800 nm are recorded as CLSM rasters and post-processed with the CLSM software. Numerical modeling. Full-scale FDTD modeling is performed using the commercial package Speag SEMCAD X v14.8 with the Acceleware CUDA GPGPU acceleration library running on a high-performance workstation equipped with a pair of 10-core Intel Xeon processors and an Nvidia Tesla K40 GPU. Periodic boundary conditions are applied to the side faces of the reconstructed unit cell (Fig. 2(e)), while the top and bottom faces of the simulation box are set to light-absorbing perfectly matched layers. The simulated plane light wave is incident upon the patterned silicon surface from the vacuum. An FDTD grid step of 4 nm is used, as it has been checked that further decreasing the step does not affect the optical observables. Arrays of field monitors are placed above and below the metasurface to resolve the contributions from the incident, transmitted, and diffracted waves. The absorption is evaluated as the deficit of the light energy between the incident and all the outgoing waves.
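The energy-deficit bookkeeping of the last sentence can be written down directly; the following minimal Python sketch, with made-up monitor readings, computes the absorbed fraction from the incident power and all outgoing (specular and diffracted) contributions.

```python
import numpy as np

def absorption(p_incident, p_transmitted, p_reflected, p_diffracted):
    """Absorbed fraction as the energy deficit between the incident
    wave and all outgoing (specular and diffracted) waves."""
    outgoing = p_transmitted + p_reflected + np.sum(p_diffracted)
    return (p_incident - outgoing) / p_incident

# Hypothetical monitor readings at one wavelength (arbitrary power units)
print(absorption(1.0, 0.62, 0.25, np.array([0.03, 0.02])))  # -> 0.08
```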
Epstein-Barr Virus BART9 miRNA Modulates LMP1 Levels and Affects Growth Rate of Nasal NK T Cell Lymphomas Nasal NK/T cell lymphomas (NKTCL) are a subset of aggressive Epstein-Barr virus (EBV)-associated non-Hodgkin's lymphomas. The role of EBV in the pathogenesis of NKTCL is not clear. Intriguingly, EBV encodes more than 40 microRNAs (miRNAs) that are differentially expressed and largely conserved in lymphocryptoviruses. While miRNAs play a critical role in the pathogenesis of cancer, especially lymphomas, the expression and function of EBV-transcribed miRNAs in NKTCL are not known. To examine the role of EBV miRNAs in NKTCL, we used microarray profiling and qRT-PCR to identify and validate the expression of viral miRNAs in SNK6 and SNT16 cells, two independently derived NKTCL cell lines that maintain the type II EBV latency program. All EBV BART miRNAs, but not the BHRF-derived miRNAs, were expressed, and some of these miRNAs are expressed at higher levels than in nasopharyngeal carcinomas. Modulating the expression of BART9 with antisense RNAs consistently reduced SNK6 and SNT16 proliferation, while antisense RNAs to BARTs-7 and -17-5p affected proliferation only in SNK6 cells. Furthermore, EBV LMP-1 oncoprotein and transcript levels were repressed when an inhibitor of the BART9 miRNA was transfected into SNK6 cells, and overexpression of the BART9 miRNA increased LMP-1 protein and mRNA expression. Our data indicate that BART9 is involved in NKTCL proliferation, and one of its mechanisms of action appears to be the regulation of LMP-1 levels. Our findings may have direct application for improving NKTCL diagnosis and for developing possible novel treatment approaches for this tumor, for which current chemotherapeutic drugs have limited effectiveness. Introduction EBV is a member of the herpesvirus family and is a preeminent human oncogenic virus with a causal relationship to several malignancies, including endemic Burkitt's lymphoma (eBL), nasopharyngeal carcinoma (NPC), a proportion of gastric carcinomas (GC), NK/T-cell lymphomas (NKTCL), Hodgkin disease (HD), post-transplant lymphoma-like disease (PTLD), and leiomyosarcomas [1,2]. Within the context of AIDS, EBV is associated with a proportion of non-Hodgkin lymphomas, almost all HD, and leiomyosarcomas. The EBV genome contains over 170,000 bp encoding more than 80 genes. EBV gene expression during latency and tumorigenesis consists of distinct combinations of six nuclear proteins (EBNAs), three membrane proteins (LMPs), and multiple noncoding RNAs, including over 40 miRNAs [3,4,5,6,7]. While the EBV latent proteins have been investigated intensively for some time, the contribution of EBV-encoded miRNAs or altered cellular miRNA expression to EBV-induced cancers has not been fully explored. The EBV miRNAs were the first virally encoded miRNAs discovered [6]. MiRNAs are ~22 nt transcripts that form imperfect duplexes with target mRNAs and thereby inhibit their expression. MiRNAs typically target the 3′ UTR of mRNAs, and the average magnitude of repression of the encoded protein is ~30% [8]. EBV miRNAs are primarily derived from a group of alternatively spliced RNAs transcribed from the BamHI-A region of the genome (BamHI-A rightward transcripts, or BARTs) [3,4,5,6]. The BARTs encode a large number of miRNAs, and, with the exception of miR-BART2, the majority are derived from two clusters. A cluster of 3 miRNAs has also been identified which is derived from the BHRF1 gene.
The sum total of at least 40 EBV-encoded miRNAs dramatically increases the complexity of the potentially biologically active molecules encoded by EBV during latent infection [9]. Like many of the miRNAs discovered to date, the functions of the EBV-encoded miRNAs remain poorly understood. It has been hypothesized that herpesvirus miRNAs, including those encoded by EBV, cytomegalovirus (CMV), and Kaposi's sarcoma-associated herpesvirus (KSHV), may facilitate the viral life cycle by blocking innate or adaptive immune responses or by interfering with the appropriate regulation of apoptosis, cell growth, or DNA replication in infected cells [9]. Herpesvirus miRNAs might also target mRNAs of viral genes that regulate the productive lytic cycle, thus having a role in maintaining latency or modulating productive lytic infection. EBV-encoded miRNAs can target both viral and cellular genes. EBV miR-BART2 targets the EBV DNA polymerase mRNA for degradation [10], which inhibits lytic replication, and miRNAs from BART cluster 1 may target the viral LMP-1 transcript [11]. In addition, miR-BART5 targets the pro-apoptotic factor PUMA, and miR-BHRF1-3 targets the chemokine/T-cell attractant CXCL11 [12,13]. Dysregulation of cellular miRNAs following B cell infection has also been described [11,14,15,16,17]. The cellular miRNAs 146a and 155 regulate lymphocyte signaling and gene expression pathways in this context. Three general patterns of viral gene expression have been identified in EBV-associated cancers [1,2]. Latency I is characterized by expression of EBNA-1, while latency II is characterized by expression of EBNA-1 along with LMP1 and LMP2. Latency III is characterized by expression of all EBNAs and LMPs and is typically associated with B cells infected with EBV in vitro or with lymphomas in the immunosuppressed. EBV miRNAs are expressed in all EBV-infected tumor cells, although they are differentially expressed in some tumors [3,4,5,6,7,18,19,20,21]. The context in which miRNA functions are investigated may be particularly important, since the ubiquitous and powerful activities of all the latent proteins expressed in latency III could mask some of the activities contributed by miRNAs. Recently, several cell lines have been isolated from EBV-associated NK/T cell lymphomas, which appear to retain latency II both in primary tumor tissues and in the cell lines [22,23]. Thus, NKTCL cell lines may be a powerful model system to investigate the functions of EBV gene products within the context of latency II and may lead to insights into miRNA functions in EBV-associated HD, GC, and NPC, for which few practical cell culture systems are available. Nasal NK/T cell lymphomas (NKTCL) are a heterogeneous group of tumors, so named because some tumors have an NK phenotype (CD3−, CD56+) and some have a T cell phenotype (usually CD4+/CD3+, but sometimes CD8+/CD3+, and sometimes CD3+, CD4−, CD8− γδ) [24,25]. NKTCL is a distinct clinical entity characterized by necrotic lesions in the nasal cavity, nasopharynx, and palate. These are generally aggressive tumors with a poor prognosis [24,25]. A universal feature of these tumors is the consistent and strong association with EBV, although the precise role of the virus in this disease remains poorly understood. Analysis of primary tumor tissue has shown a latency II pattern of EBV gene expression [22,23]. At least 7 cell lines of both NK and T cell-like phenotypes have been derived from primary tumors.
These include NK-like (CD3 2 , CD56 + ) SNK1, -6, -10 and T-celllike (CD3 + , CD56 + , TCRc/d + ) SNT 8, -13, 15, -16 cell lines [22,23]. The cell lines, like the primary tumor tissues from which they were derived, retain latency II EBV expression patterns and the EBV genome is clonal. EBV expresses more than 40 miRNAs, but which ones are expressed in NKTCL remains unknown. We hypothesized that specific viral and cellular miRNAs are likely to play a role in the genesis and maintenance of NKTCL. To address this, we utilized microarrays and quantitative PCR to identify EBV miRNAs that are expressed in established NKTCL cell lines. Transfection of antisense oligonucleotides to some of the abundantly expressed EBV miRNAs revealed that at least one of them, BART9, contributes significantly to NKTCL proliferation. The results provide new information about the expression pattern of EBV encoded miRNAs in NKTCLs and identified a novel function for the EBV-encoded BART9 miRNA. NKTCL stably maintain the EBV Type II latency program In tumors, EBV displays latency programs characterized by specific patterns of viral gene expression. In Burkitt's lymphoma, Type I latency is seen while Type II latency is observed in nasopharyngeal carcinoma, gastric carcinoma, and Hodgkin's disease. Type III latency is often restricted to B lymphomas in immunodeficient patients [26]. Although previous studies have found that NKTCLs have a type II latency phenotype, it is common for some EBV positive cell lines to drift towards type III latency in culture. To confirm the latency phenotype under our culture conditions, we tested five NKTCL cell lines for latent and lytic gene expression. We found that the two SNK (SNK6 and SNK10) and three SNT (SNT8, SNT15 and SNT16) cell lines expressed EBNA1 and LMP1 ( Figure 1A). These cell lines did not express the other EBNA proteins, EBNA-LP, EBNA3C and EBNA2 ( Figure 1B). We also found that there was no expression of Zta lytic protein ( Figure 1C). These data indicate that the NKTCL cell lines stably exhibit a Type II latency program. miRNA microarray profiling of NKTCL Nasal NK/T cell lymphomas (NKTCLs) have been demonstrated to be consistently associated with Epstein-Barr virus (EBV) as all cases are EBV positive [27,28,29]. EBV encodes at least 40 microRNAs (miRNA) [3] and there is increasing evidence for the role that miRNAs play in malignant transformation of cells [30]. Therefore, to investigate the role of EBV miRNAs in NKTCL oncogenesis, we first carried out miRNA microarray profiling. We isolated total RNA from two representative NKTCL cell lines, SNK6 and SNT16. Both SNK6 and SNT16 express cellular and EBV proteins that are consistent with prototypical NK and T-cell like NKTCLs respectively. These cell lines were also chosen for microarray profiling and further analysis because of their robust growth and viability in cell culture relative to other known SNK or SNT cell lines. A miRHumanVirus microarray chip was used to examine the expression levels of 1100 mature miRNAs that included human (875) and viral (225) miRNAs. The probes also included 44 EBV miRNAs in the microarray chip. We used the criteria and statistical parameters described in the Methods to analyze the EBV miRNA expression patterns in the two NKTCL cell lines. Using a median expression value cut-off of 500, we identified 19-21 EBV miRNAs that were present at relatively high levels in SNK6 and SNT16 cell lines ( Fig. 2A). 
To verify the reliability of the microarray data, we selected seven EBV miRNAs whose expression in the microarray ranged from high to low and carried out Taqman PCR on total RNA extracted from SNK6 and SNT16 cells. To more easily compare the relative expression levels of these miRNAs to previous studies, miRNA levels are shown normalized to either 10 pg total RNA or as copy numbers per cell. As shown in Figure 2B and C, the relative expression level of EBV miRNAs BART17-5p, BART7, BART1-3p, BART9, and BART10 was at least one log higher than EBV miRNA BART2-3p in both SNK6 and SNT16 cells. The levels of the miRNAs were also higher in SNT16 cells than SNK6 cells. This is in agreement with the microarray data which also showed a higher expression level of the selected miRNAs in SNT16 cells compared to SNK6 cells (Fig. 2B). Notably, BHRF1 derived miRNAs were nearly undetectable ( Fig. 2A-C). These data indicate that the microarray profiling data are generally reliable and this analysis has therefore determined the set of EBV miRNAs which are expressed in the SNK6 and SNT16 cell lines. Reducing EBV miRNA levels affect SNK6 and SNT16 growth rate miRNAs regulate many genes including, those involved in cell growth [31]. We first investigated the consequences of blocking EBV miRNA function on the growth rate of SNK6 and SNT16 NKT-cell lines. Based on the miRNA microarray profile (Fig. 2), we chose six EBV miRNAs that were expressed at high levels in both cell lines. The six EBV miRNAs were individually inhibited by transfection LNA-modified antisense oligonucleotides. Samples were collected every 24 hours for three days and cell numbers and viability analyzed. The anti-EBV-miR-BART9, anti-EBV-miR-BART7 and anti-miR-BART17-5p showed a statistically significant reduction in SNK6 cell growth (,19%, ,20% and ,29%, respectively) (Fig. 3A). EBV-miR-BART1-5p and EBV-miR-BART16 antisense oligonucleotides did not have statistically significant effects on SNK6 growth rate. Also there was no significant effect on the viability of the SNK6 cells upon inhibition of any of the EBV miRNAs shown in Figure 3A (Fig. 3B). In SNT16 cells, only anti-EBV-miR-BART9 showed a statistically significant decrease (,34%) in cell growth. Anti-EBV-miR-BART16, anti-EBV-miR-BART17-5p, anti-EBV-miR-BART7, anti-EBV-miR1-5p affected cell growth by ,25%, ,3%, ,10% and ,7%, respectively, but these differences were not statistically significant in a paired t-test (Fig. 3C). We noted that there was a decrease of ,20% in viability of SNT16 cells when the levels of EBV miRNAs were reduced (data not shown). Anti-EBV-miR-BART1-3p showed an increase in SNK6 and SNT16 cell growth, but this difference was not statistically significant ( Fig. 3A and C). A scrambled control miRNA had no detectable effect on proliferation or viability on either cell line (Fig. 3). This data suggests that the expression levels of some EBV miRNAs may play a role in cell proliferation. Inhibiting EBV miR-BART9 reduces LMP1 protein and mRNA expression in SNK6 cells Because reducing BART9 levels affected growth rate in SNK6 and SNT16 cells (Fig. 3), we focused further experiments on the BART9 miRNA. EBV-encoded LMP1 triggers multiple cellular signaling pathways that influence cell growth [32]. We carried out immunoblot analysis to investigate if the effect on growth rate upon reduction of EBV BART miRNA was a result of altered LMP1 expression. 
SNK6 cells were transfected with either control miRNA (Scramble) or anti-EBV-miR BART9 and cells were lysed 96 hours post-transfection. Immunoblots were performed and probed for LMP1 levels. We found that inhibiting EBV miR-BART9 reduced LMP1 protein expression by almost 50% when normalized to the Hsp70 loading control and relative to the control miRNA (Fig. 4A). In these experiments, we also probed for lytic protein Zta in order to examine if the activation of EBV lytic program was the reason for reduced SNK6 cell growth rate and found no detectable expression of Zta protein (data not shown). We also investigated the kinetics of anti-EBV-miR-BART9 effect on LMP1 level by carrying out a time-course experiment. We observed that there was a ,46% reduction of LMP1 protein level 96 hours post-transfection of anti-EBV-miR-BART9 compared to control miRNA (Fig. 4B). We next examined if the ,50% reduction in LMP1 protein level following inhibition of BART9 was a consequence of reduced LMP1 mRNA expression. SNK6 cells were transfected with anti-EBV-miR-BART9 and total RNA extracted from the cells 96 hours post-transfection. cDNA was synthesized and Q-PCR was performed using LMP1 specific primers. We found that there was ,2-fold decrease in the LMP1 mRNA levels when BART9 was inhibited (Fig. 4C). This data suggests that BART9 miRNA functions as a positive factor for LMP1 at the level of mRNA accumulation. EBV miR-BART9 has a positive effect on LMP1 protein and mRNA expression in SNK6 cells If BART9 does indeed have a positive influence on LMP1 expression, then increasing its level might be predicted to increase LMP1 expression, in contrast to the effect of the antisense BART9 miRNA (Fig. 3, 4). To test this prediction, we transfected a precursor for miRNA-BART9 (pre-EBV-miR-BART9) into SNK6 cells and performed immunoblots and Q-RT-PCR to examine levels of LMP1 protein and mRNA, respectively. We found that increasing BART9 levels increased LMP1 protein expression by ,33% relative to cells transfected with the precursor negative control miRNA (pre-NegCtrl) when normalized to the loading control b-actin (Fig. 5A). We also found that over expressing BART9 increased the LMP1 mRNA level by a factor of 1.7 (Fig. 5B). Increase in EBV-miRNA-BART9 level modestly affects SNK6 cell growth The role of LMP1as the oncoprotein of EBV is dependent on its expression level. While LMP1 has been reported to promote cellular transformation, increased expression of LMP1 can inhibit cell growth [33]. We next investigated whether increasing BART9 levels would inhibit SNK6 cell growth. Precursor BART9 was transfected into SNK6 cells and samples were collected every 24 hours for three days and cell numbers and viability analyzed. We found that increasing BART 9 levels modestly (,8%) affected SNK6 cell growth rate ( Figure 6A) without affecting viability ( Figure 6B). Although the data for reduction in growth rate of SNK6 following over-expression of BART9 did not show a statistically significant difference in a paired t-test, the results of three independent experiments showed a clear and reproducible trend of reduced growth. This suggests that the level of LMP1 needs to be regulated stringently as an increase ( Figure 6) or decrease below a threshold point (Figure 3 and 4) has an inhibitory effect on SNK6 cell growth. 
To determine whether the effects of BART9 miRNA on LMP1 expression are directed through the 39UTR of the LMP1 mRNA, we cotransfected a BART 9 miRNA precursor with an LMP1 expression plasmid containing the natural 39UTR and a plasmid lacking this element in HeLa cells. Under these conditions, BART9 had no effect on LMP1 expression from either expression plasmid, suggesting that the effects of BART9 on LMP1 expression are indirect (data not shown). Discussion In this study we show that ,20 EBV miRNAs are abundantly expressed in Nasal NK/T cell lymphomas (NKTCL). We also provide evidence that modulating EBV miRNA levels impacts NKTCL growth rate. Furthermore, we found a direct correlation between levels of EBV-miR-BART9 and LMP1 protein and mRNA expression. Together, these observations suggest that BART9 miRNA positively modulates expression of LMP1 and one manifestation of perturbing this regulation is a retardation of NKTCL cell growth. A number of studies have characterized EBV miRNA expression and their roles in nasopharyngeal carcinomas [4,11,19]. Other studies have focused on the role of cellular miRNAs in NKTCLs [34] and other human cancers [35]. While EBV miRNAs have been found to be conserved evolutionarily [3], they are differentially expressed in different cell types [3,7]. However, to our knowledge this is the first study to determine the expression of EBV miRNAs in NKTCLs. We found that at least 19 EBV miRNAs are abundantly expressed in NKTCLs. Indeed, these 19 miRNAs appear to be expressed at levels 2-3 logs higher than their expression in NPCs [19]. We note that a limitation of our study is that only two NKTCL cell lines and no primary tumors were profiled. Nevertheless, our data suggest that even though EBV viral gene expression might be similar to NPC and Hodgkin disease [36], miRNA expression could vary greatly between these two tumors. What role might these EBV miRNAs play in NKTCLs? Cancer is a disease where cell proliferation is dysregulated. A number of studies have demonstrated a connection between miRNAs and cellular differentiation and in many instances miRNAs act as oncogenes by down-regulating tumor suppressors [30,37]. In this study, we found that inhibiting EBV miRNAs slowed the growth rate of NKTCLs. This reduction in proliferation was not because of loss of cell viability. These EBV miRNAs may be functionally analogous to cellular miRNAs like miR-106b that targets the cell cycle inhibitor p21 Cip1 [38] or miR-221 and miR-222 that regulate p27 kip1 [39]. Since EBV miRNAs are evolutionarily conserved [3], it is also possible that they target viral proteins as is the case with EBV-miR-BART17-5p that has been reported to regulate LMP1 [11] or EBV-miR-BART2 that down-regulates BALF5 viral DNA polymerase [10]. In the SNK6 cell line we observed that inhibiting EBV-miR-BART9 reduced the level of LMP1 mRNA and protein. Furthermore, over-expression of BART9 miRNA resulted in increase of LMP1 protein and transcript levels. This suggests that in NKTCLs, BART9 miRNA likely regulates LMP1 mRNA expression and we favor the hypothesis that this is an indirect regulation as a reporter plasmid containing the 39 UTR of the LMP1 mRNA was not responsive to BART9 in transient expression experiments (data not shown). BART9 may indirectly up-regulate LMP1 by targeting a repressor of its expression. When BART9 is inhibited, the level of this putative repressor may be increased resulting in decrease of LMP1 transcript and protein levels. 
Alternatively, BART9 may be involved in maintaining the stability of the LMP1 mRNA, which has been reported to have a long half-life [40]. In this scenario, BART9 may stabilize LMP1 mRNA, and inhibition of BART9 thus renders the LMP1 mRNA susceptible to degradation. Tight regulation of LMP1 expression is beneficial to EBV and its survival in infected cells. For instance, if LMP1 expression is consistently high, it can either result in cell growth arrest [41], inhibit viral and cellular promoters [42] or enhance epitope presentation to cytotoxic T cells [43]. However, under certain conditions, it might be beneficial for EBV to induce LMP1 expression for a short period of time. Notably, a recent study reported the transient upregulation of LMP1 by the p38 signaling pathway [44]. In summary, we have shown that 19 EBV miRNAs are abundantly expressed in NKTCLs and their levels are likely to be important in maintaining cell growth. Our data also indicate that EBV BART9 is involved in regulating LMP-1 expression in these cells. This has implications for the mechanisms of lymphomagenesis, and future experiments could be directed at investigating the role of EBV miRNAs and their regulation of cellular targets.

Cell lines
The NK/T cell lymphoma (NKTCL) cell lines SNK6, SNK10, SNT8, SNT15 and SNT16 were obtained from Norio Shimizu (Tokyo Medical and Dental University). The cells were cultured in RPMI1640 supplemented with 10% fetal bovine serum, 1% Penicillin-Streptomycin, 250 ng/ml Fungizone (Amphotericin B; Invitrogen) and 600 IU of IL-2.

Figure 2 (partial legend). Taqman qPCR for selected EBV miRNAs in SNK6 and SNT16 cells. The indicated EBV miRNAs were quantified using a stem-loop PCR protocol described by Chen et al. [46] for detecting miRNAs. The copy number of each of the miRNAs was determined by reverse transcription and amplification of synthetic miRNAs. The graph represents miRNA expression as molecules per 10 picograms of RNA. (C) Taqman qPCR for the indicated EBV miRNAs in SNK6 and SNT16 cells as described in (B). The data are presented as copy number of EBV miRNA per cell. doi:10.1371/journal.pone.0027271.g002

MiRNA microarray analysis and validation
Total RNA was isolated from SNK6 and SNT16 cells using the miRNAeasy kit (Qiagen) as per the manufacturer's protocol. RNA was analyzed by LC Sciences (Houston, TX) with miRNA microarrays using the mParaflo microfluidic chip technology and all data are MIAME compliant. The detailed process can be found at http://www.lcsciences.com. Briefly, photogenerated reagent chemistry probes for miRNAs were synthesized in situ on chips with three repeats for each probe to allow for statistical analysis. MiRHumanViruses version 13 arrays were used to detect a total of 1100 unique mature miRNAs comprising 875 human miRNAs and 225 virus miRNAs. The virus miRNAs included 44 EBV miRNAs. RNA samples from SNK6 and SNT16 cells were labeled with Cy3 for hybridization. The chips included 50 control probes based on Sanger miRBase Release 13 with four to sixteen repeats. The control probes were used for quality control of chip production, sample labeling and assay conditions. Included in the control probes were PUC2PM-20B and PUC2MM-20B, which are the perfect match and single-base mismatch detection probes, respectively, of a 20-mer RNA positive control sequence that is spiked into the RNA samples before labeling.
For a transcript to be listed as detectable, three conditions had to be met: (1) signal intensity had to be greater than three times the background standard deviation; (2) the spot coefficient of variation (CV), defined as the ratio of the standard deviation to the signal intensity, had to be less than 0.5; (3) the signals from at least 50% of the repeating probes had to be above the detection level. Data were normalized using a cyclic LOWESS (locally weighted regression) method [45] to remove system-related variations such as sample amount variations, dye labeling bias, and signal gain differences between scanners, in order to reveal biologically relevant variations. A t-test was performed on the signals obtained for the repeating probes and a p-value calculated. MiRNAs were defined as differentially expressed if they had a p-value < 0.01. Clustering analysis was performed with a hierarchical method using average linkage and a Euclidean distance metric. The clustering data were represented as a heat map using TIGR MeV (Multiple Experiment Viewer; The Institute for Genomic Research). The microarray data have been deposited in the GEO database with accession number GSE30695.

Validation of miRNA microarray
Selected EBV miRNAs were quantified using a PCR protocol described by Chen et al. [46] for detecting miRNAs. Briefly, stem-loop primers complementary to specific EBV miRNAs were designed as described by Cosmopoulos et al. [19]. For each miRNA assayed, 100 ng of total RNA was reverse transcribed using a TaqMan MicroRNA RT kit as described by the manufacturer and a specific stem-loop primer at a final concentration of 50 nM. RNA was prepared using an RNeasy kit (Qiagen) from exponentially growing tissue culture cells. Each 20 µl PCR reaction contained 1 µl of RT product, 1× TaqMan Universal master mix, 1.5 µM forward primer, 0.7 µM reverse primer, and 0.2 µM probe. The reactions were incubated in a 48-well plate at 95 °C for 3 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 30 s. The copy number of each of the miRNAs was determined by reverse transcription and amplification of synthetic miRNAs that were identical to the published sequences.

SNK6 and SNT16 cells were seeded at 1 × 10^6 cells in 24-well tissue culture plates and transfected with antisense or precursor miRNAs using Oligofectamine or Lipofectamine RNAiMAX according to the manufacturer's protocol. Transfection efficiency of the miRNAs in SNK6 and SNT16 cells was determined with FAM-labeled EBV-BART16 miRNA and was found to be nearly 98% as determined by flow cytometry (data not shown). Cell viability following transfections was measured by Trypan Blue exclusion and found to be ~95%.

RT-real time PCR for EBV mRNAs
Independent transfections of anti-EBV-miR-BART9 or pre-EBV-miR-BART9 were performed in SNK6 cells. The controls transfected were either Scramble-miRNA (Exiqon) or Precursor-Negative Control (Pre-NegCtrl) (Applied Biosystems), respectively, as described above. Total RNA was isolated using the RNeasy mini kit (Qiagen) and RT-real-time-PCR assays were carried out for quantification of LMP1 and α-tubulin levels using the Bio-Rad MyIQ single-color detection system. Briefly, 10 ng of cellular RNA was reverse transcribed into cDNA using the iScript cDNA synthesis kit (Bio-Rad) in a 20 µl reaction following the manufacturer's protocol. Quantitative real-time PCR was performed using 3 µl of the synthesized cDNA and the iQ SYBR Green Supermix (Bio-Rad). PCR reactions were carried out in 96-well format using a Bio-Rad iCycler.
Analysis was done with the MyIQ software program (Bio-Rad) and fold-changes were calculated using the ΔΔCt method as previously described [47], with α-tubulin as the housekeeping gene control. The primer sequences used for LMP1 have been described previously [48] and were: LMP1 forward, 5′-AGCCCTCCTTGTCCTCTATTCCTT-3′; LMP1 reverse, 5′-ACCAAGTCGCCAGAGAATCTCCAA-3′. The primers for α-tubulin were: α-tubulin forward, 5′-CCTGACCACCCACACCACAC-3′; α-tubulin reverse, 5′-TCTGACTGATGAGGCGGTTGAG-3′.

Figure 3. Inhibiting EBV BART miRNA levels affects NKTCL growth rate without affecting cell viability. (A) SNK6 cells were transfected with antisense to the indicated EBV miRNAs and cell numbers counted every 24 hours for three days. Cell growth rate was calculated as the difference in cell numbers between the 24 hour and 72 hour time points and compared with cells transfected with control Scramble miRNA. Data shown are the average ± SD from three independent experiments. (* represents p ≤ 0.05 in a paired t-test). (B) SNK6 cells transfected with antisense EBV miRNAs were analyzed for viability by Trypan blue exclusion in a Vi-CELL counter every 24 hours for three days. The data presented are the cell viability at 72 hours post-transfection and are the average ± SD from three independent experiments. (C) SNT16 cells were transfected with antisense to the indicated EBV miRNAs and cell growth rate analyzed as described above. Data shown are the average ± SD from three independent experiments. (* represents p < 0.05 in a paired t-test). doi:10.1371/journal.pone.0027271.g003

Cell proliferation functional assay
SNK6 and SNT16 cells were seeded in 96-well plates at 8 × 10^5 cells in 100 µl/well and transfected with 100 pmol of the indicated anti-EBV-miRNAs, pre-EBV-miRNAs, control Scramble-miRNA or Pre-Neg-Ctrl. In some experiments, the cells were seeded at 1 × 10^6 cells/well. After overnight incubation, the cells were transferred into 24-well tissue culture plates. Cells were collected every 24 hours and analyzed for cell number and viability using the Becton-Dickinson Vi-CELL counter at the Baylor College of Medicine Flow Cytometry Core. The cell counter uses trypan blue exclusion to automatically stain and count cells, as well as assay cell size and viability.

Figure 4 (partial legend). When compared to cells transfected with control miRNA and normalized to the β-actin loading control, quantification of immunoblots showed that BART9 inhibition reduced LMP1 protein levels by ~50%. (B) SNK6 cells were transfected with control or anti-EBV-BART9 miRNA and samples collected every 24 hours in a time-course experiment. Cell lysates were prepared and immunoblot analysis carried out to determine LMP1 expression. Quantification of LMP1 levels using ImageJ as described above showed that LMP1 protein levels are reduced only at the later time point. (C) SNK6 cells were transfected with either anti-EBV-BART9 or control miRNA and cells collected 96 hours post-transfection. Total RNA was extracted and cDNA synthesized using the iScript cDNA synthesis kit. Using LMP1-specific primers, Q-PCR was carried out and data analyzed using the ΔΔCt method. Data shown are the average ± SD from three independent experiments. (** represents p < 0.005 in a paired t-test). doi:10.1371/journal.pone.0027271.g004
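As a concrete illustration of the ΔΔCt fold-change calculation used for the LMP1 Q-PCR described above, the minimal sketch below applies the standard 2^-ΔΔCt formula to hypothetical Ct values; the numbers and function name are illustrative and the calculation assumes approximately 100% PCR efficiency.

```python
# Minimal sketch of the delta-delta-Ct (DDCt) fold-change calculation described
# above. The Ct values below are hypothetical, not measurements from the study.

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Fold change of the target gene (e.g. LMP1) in treated vs control cells,
    normalised to a housekeeping gene (e.g. alpha-tubulin)."""
    dct_treated = ct_target_treated - ct_ref_treated   # normalise to housekeeping gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                    # treated relative to control
    return 2 ** (-ddct)                                 # assumes ~100% PCR efficiency

# Hypothetical example: LMP1 Ct rises by one cycle after anti-BART9 transfection
# while alpha-tubulin is unchanged, giving roughly 2-fold lower LMP1 mRNA.
print(fold_change_ddct(ct_target_treated=26.0, ct_ref_treated=18.0,
                       ct_target_control=25.0, ct_ref_control=18.0))  # ~0.5
```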
Immunoblotting
Cells following treatment were lysed with EBCD buffer (50 mM Tris-HCl, pH 8.0, 120 mM NaCl, 0.5% NP-40, 5 mM dithiothreitol) containing protease inhibitor cocktail (Sigma). Immunoblotting was performed as described previously [49]. Monoclonal antibodies used in this study include LMP1 (S12), EBNA-LP (JF186), EBNA3C (A10), and EBNA2 (R3). Other antibodies obtained commercially included EBNA1 (1EB12, Santa Cruz), Zta (Argene), β-actin (Sigma) and α-tubulin (Sigma). HRP secondary antibodies were obtained from Jackson Immunolaboratories and Western blots were developed using the SuperSignal West Pico kit (Thermo Scientific). Immunoblots were quantified using ImageJ software [50].

Figure 5 (partial legend). Quantification of immunoblots showed a ~33% increase in LMP1 protein levels in cells transfected with EBV miRNA compared to control miRNA transfected cells when normalized to the loading control. Data shown are a representative immunoblot from three independent experiments. (B) Total RNA was extracted from SNK6 cells transfected with precursor EBV-BART9 or control miRNA. Following cDNA synthesis, LMP1 mRNA levels were analyzed by Q-RT-PCR. Data presented are the average ± SD from three independent experiments. (* represents p < 0.05 in a paired t-test). doi:10.1371/journal.pone.0027271.g005

Figure 6. Increasing EBV-BART9 miRNA level has a subtle effect on SNK6 growth rate. (A) Precursor EBV-BART9 miRNA or control miRNA were transfected into SNK6 cells and samples collected every 24 hours for three days. Growth rate of SNK6 cells was determined by calculating cell numbers. When normalized to cell numbers in control miRNA transfected cells, there was a ~8% reduction in SNK6 growth rate. The data shown are the average ± SD from three independent experiments. (B) In the experiments described above, SNK6 cells were analyzed for viability by Trypan blue exclusion in a Vi-CELL counter at every time point. The data shown are the cell viability at 72 hours post-transfection and are the average ± SD from three independent experiments. doi:10.1371/journal.pone.0027271.g006
Double vision: 2D and 3D mosquito trajectories can be as valuable for behaviour analysis via machine learning Background Mosquitoes are carriers of tropical diseases, thus demanding a comprehensive understanding of their behaviour to devise effective disease control strategies. In this article we show that machine learning can provide a performance assessment of 2D and 3D machine vision techniques and thereby guide entomologists towards appropriate experimental approaches for behaviour assessment. Behaviours are best characterised via tracking—giving a full time series of information. However, tracking systems vary in complexity. Single-camera imaging yields two-component position data which generally are a function of all three orthogonal components due to perspective; however, a telecentric imaging setup gives constant magnification with respect to depth and thereby measures two orthogonal position components. Multi-camera or holographic techniques quantify all three components. Methods In this study a 3D mosquito mating swarm dataset was used to generate equivalent 2D data via telecentric imaging and a single camera at various imaging distances. The performance of the tracking systems was assessed through an established machine learning classifier that differentiates male and non-male mosquito tracks. SHAPs analysis has been used to explore the trajectory feature values for each model. Results The results reveal that both telecentric and single-camera models, when placed at large distances from the flying mosquitoes, can produce equivalent accuracy from a classifier as well as preserve characteristic features without resorting to more complex 3D tracking techniques. Conclusions Caution should be exercised when employing a single camera at short distances as classifier balanced accuracy is reduced compared to that from 3D or telecentric imaging; the trajectory features also deviate compared to those from the other datasets. It is postulated that measurement of two orthogonal motion components is necessary to optimise the accuracy of machine learning classifiers based on trajectory data. The study increases the evidence base for using machine learning to determine behaviours from insect trajectory data. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s13071-024-06356-9. that can return valuable insights into their flight behaviour and has already led to significant advances in disease prevention.For instance, early studies on mosquito trajectories led to the development of an improved insecticide-treated net (ITN) design that provides better protection against disease transmission [2].Further research on mosquito behaviour is likely to lead to other such improvements. 
Previously, many tracking studies involved manual processing to capture behaviours, with a number of examples concerning mosquitoes [3][4][5].However, advancements in high-resolution cameras, computational power and computer vision technology have enabled automated tracking of behaviour [6].Typically, this involves using cameras to capture videos or images that are subsequently processed to identify the mosquitoes or objects of interest.To facilitate accurate tracking, experiments rely on a clear contrast between the insect and background, achieved by illumination control.Appropriate lighting can be achieved using front, back or side illumination with artificial sources where the wavelength is normally selected in an insect blind region of the spectrum; in some cases natural lighting from the sun can be used effectively [7,8].Tracking individual insects entails analysing the contrast differences within the images.By applying appropriate thresholds, the objects of interest are accurately segmented from the background. Insect behaviour can be quantified using twodimensional (2D) or three-dimensional (3D) tracking systems.Three-dimensional tracking provides full quantitative measurement of the three orthogonal components of an object's position and movement in 3D space.This is at the expense of a more complex imaging setup and hence higher cost.The most widely used approach for 3D tracking is stereo vision with a pair of rigidly coupled cameras (Fig. 1) [7].The camera separation is one of the main factors that determines the resolution of the depth information with respect to the cameras, increased separation giving improved resolution at the expense of less correspondence between the camera views, a larger setup and needing a more rigid mechanical coupling between the cameras.Camera calibration is crucial, particularly when attempting to construct 3D trajectories from stereo cameras.This process involves establishing a relationship between the 2D coordinates obtained from each camera and the 3D coordinates of markers in a known pattern from a set of calibration frames.Typically, stereo camera calibration has to be performed in situ and also compensates for lens distortion [6].In contrast, 2D tracking recovers the motion of a body from the projection of its position onto the 2D image plane of a single camera and some information is lost (Fig. 2).The two-component information obtained, in general, is a combination of the three orthogonal position components due to perspective projection.The field of view has an angular limit, determined by the camera lens.Hence, a specific mosquito movement at the front and back of the measurement volume will give differing results in pixels on the camera.Fortunately, several software packages are available that facilitate automated tracking.These packages provide functionalities for image preprocessing, object identification, and trajectory analysis, The imaged volume is determined by the angular field of view, θ, and hence increases with distance from the camera streamlining the tracking process and reducing the manual effort required [9][10][11]. Telecentric imaging was introduced for single-camera, 2D measurement applications as an object appears at the same size irrespective of its position along the optical axis (Fig. 3) [12].It employs a lens with aperture matching the field of view, and Fresnel lenses enable large, metrescale applications (see inset in Fig. 
3).The telecentric arrangement is achieved by spacing the two lenses on the camera side by a distance equal to the sum of their focal lengths.This geometry removes the perception of depth and eliminates perspective distortion [12,13].Wideangle LED sources with a large aperture Fresnel lens for illumination makes telecentric imaging well suited for indoor recordings. In recent studies, researchers have explored 2D and 3D trajectories, shedding light on their respective merits and limitations.A notable investigation focused on zebrafish behaviour, where a comparison was made between 3D and 2D tracking [14].To capture the zebrafish movements, two cameras were positioned to view orthogonal planes within a large water tank.Videos were processed into frames and analysed with a 3D multi-target tracking algorithm [15] resulting in the quantification of a range of essential behavioural characteristics.Intriguingly, the analysis revealed consistent underestimation of these behavioural features when relying solely on 2D views.This discrepancy can be attributed to the lack of the extra dimension provided by 3D tracking, which offers a more comprehensive understanding of the zebrafish's rich behavioural repertoire.Consequently, it was concluded that collecting and analysing 3D trajectories was a necessary overhead, despite the use of multiple cameras and an increased computational load.Furthermore, an additional finding emerged, indicating that a 3D approach requires fewer subjects compared to a 2D approach to obtain comparable statistical results.More recently, stereo-based 3D tracking has been instrumental in understanding moth behaviour in attraction to artificial light revealing that dorsal tilting is responsible for the seemingly erratic flight of the moth around a light source [16]. Tracking techniques have greatly advanced our understanding of mosquito behaviours.Butail et al. [8] (2012) used a stereo camera system to construct and validate 3D trajectories of wild Anopheles gambiae.This research revealed insights into male mosquito motion [17].Building upon these findings, a more recent study [18] focused on classifying the disparities in mosquito behaviour between male and non-male (females and mating couples).By utilising explainable artificial intelligence (XAI), the study explored the dissimilarities among these classes, reinforcing existing knowledge about the behaviour of male mosquitoes within mating swarms.XAI showed that females and mating couples (non-males) tend to exhibit extreme, high and low, values for velocity and acceleration features (kinematic characteristics) perhaps reflecting the increased energy availability in females through blood feeding and the more chaotic movement of mating couples.The paper shows the utility of machine learning, and XAI techniques in particular, to extract behaviour insights from 3D trajectory information.Parker et al. [19], examined mosquito behaviours around human baited bednets in the field using 2D imaging of Anopheles gambiae mosquitoes.Here, a pair of identical recording systems were used 'side by side' to expand the field of view and telecentric imaging was utilised (as shown in Fig. 
3) to produce an accurate projection of two orthogonal components of motion onto the image plane.This research identified four distinct behavioural modes-swooping, visiting, bouncing, and resting-using bespoke algorithms based on entomologist expertise.Furthermore, it was observed that mosquitoes possess the ability to detect nets, including unbaited untreated ones.These findings contributed to the understanding of mosquito interaction with ITNs. Tracking in combination with AI techniques has also been used to examine behaviours of other insects.Machraoui et al. [20] used 2D imaging, tracking and feature extraction with supervised learning models to differentiate sandflies from other insects with accuracies of circa 88% for support vector machine and artificial neural network models on an optimised feature set. In this article, we explore the relative merits of 2D and 3D mosquito tracking when classifying and interpreting behaviours via machine learning.We present a comparative analysis among 3D trajectories, 2D telecentric (removing one orthogonal component) and 2D single-camera data with perspective distortion, all derived from the same dataset, to assess the advantages and limitations of these tracking approaches.Analogous features are determined for each of these datasets, and the accuracy of the machine learning classifier provides a useful quantitative metric to assess the outcomes and XAI enables interpretation of behaviours.We hypothesise that 3D tracking and 2D telecentric tracking will return similar results, despite the loss of the additional information in the third dimension.We further hypothesise that a single-camera tracking system will return lower performance due to perspective effects and lens distortion.A deeper understanding of the strengths and weaknesses of 2D and 3D mosquito tracking will enable researchers to make informed decisions regarding experiment design.Overall, our research endeavours to advance the field of mosquito tracking and behaviour analysis via XAI, ultimately aiding in the development of more efficient and targeted mosquito control measures, leading to significant public health benefits. Methods A machine learning classifier has been established to classify male to non-male mosquitoes using 3D trajectories from mating swarms [18].From this 3D dataset, corresponding 2D telecentric and 2D angular field of view information is derived to simulate the data obtained from these tracking systems.The sections below detail how the single-camera 2D telecentric and 2D angular field-of-view trajectories are determined and the corresponding features derived for the 2D data. Dataset description The trajectories of the mosquitoes utilised in this investigation were produced by Butail et al. and were provided as 3D tracks following the processing steps outlined in [8].The data were collected in Doneguebogou, Mali, for the years 2009-2011, during which wild Anopheles gambiae mosquito swarms were observed. The dataset contained 191 male mosquito tracks over 12 experiments as well as 743 mating couple tracks (where male and female mosquitoes mate in flight and are tracked together) over 10 experiments (Table 1).The male mosquito tracks were captured in swarms where no females were present, whereas couple tracks were generated from swarms that contained mating events.Prior to analysis, tracks were filtered based on duration, excluding those < 3 s.This decreased the size of the dataset but effectively eliminated tracks with low information content. 
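A minimal sketch of the duration filter described above is given below; it assumes each track is an array of positions sampled at 25 frames per second, and the example tracks are synthetic placeholders rather than data from the study.

```python
# Minimal sketch of the track-duration filter: tracks shorter than 3 s are
# discarded. Assumes each track is an (N, 3) array of positions sampled at
# 25 frames per second; names and example data are illustrative.
import numpy as np

FPS = 25.0
MIN_DURATION_S = 3.0

def filter_short_tracks(tracks, fps=FPS, min_duration_s=MIN_DURATION_S):
    """Keep only tracks whose duration (frames / fps) is at least min_duration_s."""
    min_frames = int(round(min_duration_s * fps))
    return [t for t in tracks if len(t) >= min_frames]

# Illustrative use with random "tracks" of varying length.
rng = np.random.default_rng(0)
tracks = [rng.normal(size=(n, 3)) for n in (40, 80, 200)]  # 1.6 s, 3.2 s, 8 s
kept = filter_short_tracks(tracks)
print(len(kept))  # 2 tracks of >= 3 s survive the filter
```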
The experiments used to track the mosquitoes utilised a stereo-camera setup using phase-locked Hitachi KP-F120CL cameras at 25 frames per second. Each camera captured 10-bit images with a resolution of 1392 × 1040 pixels. On-site calibration of the cameras was performed using a checkerboard and the MATLAB Calibration Toolbox [21]. The relative orientation and position of the cameras were established through extrinsic calibration, which involved capturing images of a stationary checkerboard in multiple orientations and positions. The camera's height, azimuth, and inclination were recorded to establish a reference frame fixed to the ground.

Two-dimensional projection of 3D trajectory data
To conduct a comparative analysis between 3D and 2D trajectories, two methods were employed to convert the 3D dataset to a 2D one.

1. The first involves the omission of depth information, resulting in the plane of view parallel to the camera (YZ for this dataset). This method emulates a well-calibrated 2D setup that uses telecentric imaging [19], i.e. the separation of the two lenses on the imaging side generates the telecentric condition and any lens distortion effects have been removed by appropriate calibration.
2. The second transformation method utilises a single-lens camera model placed a distance away from the swarm to project the trajectories onto a 2D plane (the camera detector plane), simulating the transformation that occurs through a single-camera setup, including perspective and lens distortion.

To perform the second transformation, the camera was modelled using OpenCV [22], requiring the focal length, principal points, distortion coefficients, and the camera location and rotation. The 3D trajectories were projected onto the image plane using a perspective transformation, utilising the projectPoints function (https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html), represented by the distortion-free projection equation (Eq. 1):

$s\,\mathbf{p} = \mathbf{A}\,[\mathbf{R}\,|\,\mathbf{t}]\,\mathbf{P}_w \qquad (1)$

where $\mathbf{P}_w$ is a four-element column vector in 3D homogeneous coordinates representing a point in the world coordinate system, $\mathbf{p} = (u \; v \; 1)^{T}$ is a three-element column vector in 2D homogeneous coordinates defining the corresponding position (u, v) of a pixel in the image plane, $\mathbf{R}$ and $\mathbf{t}$ refer to the rotation and translation transformations between the world and camera coordinate systems, s is a scaling factor independent of the camera model, and $\mathbf{A}$ is the camera intrinsic matrix given by Eq. 2:

$\mathbf{A} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \qquad (2)$

with $f_x$ and $f_y$ the focal lengths expressed in pixel units, and $c_x$ and $c_y$ the principal points on the detector in pixel units. Under these definitions the coordinates of the imaged point on the camera (u, v) are in pixels. Radial, tangential, and prism distortions are included by modifying the 3D point in camera coordinates, given by $[\mathbf{R}\,|\,\mathbf{t}]\,\mathbf{P}_w$ [22]. The camera intrinsic matrix values and distortion coefficients were based on the specifications provided by one of the camera models employed during the dataset generation process. These include the focal lengths ($f_x$ = 1993.208 and $f_y$ = 1986.203), principal points ($c_x$ = 705.234 and $c_y$ = 515.751) and distortion coefficients ($k_1$ = −0.088547, $k_2$ = 0.292341, and $p_1$ = $p_2$ = 0) [8]. To ensure an accurate representation of the swarm, the translation vector was adjusted such that the optical axis aligns with the centre of a cuboid enclosing the swarm, while the camera model was positioned at a predetermined distance from the swarm centre.
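The sketch below illustrates the projection just described using OpenCV's projectPoints, with the intrinsic and distortion values quoted in the text; the example track, rotation and translation are illustrative (and use OpenCV's default convention with the optical axis along Z), not the actual geometry of the field setup.

```python
# A minimal sketch of the perspective projection described above, using
# OpenCV's projectPoints. The intrinsic matrix follows Eq. (2) with the focal
# lengths, principal points and distortion coefficients quoted in the text;
# the track coordinates, rotation and translation here are illustrative.
import numpy as np
import cv2

# Camera intrinsic matrix A (Eq. 2), in pixel units.
A = np.array([[1993.208, 0.0, 705.234],
              [0.0, 1986.203, 515.751],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.088547, 0.292341, 0.0, 0.0])  # k1, k2, p1, p2

# Hypothetical 3D track segment (metres) in world coordinates.
track_3d = np.array([[0.10, 0.05, 0.02],
                     [0.12, 0.04, 0.03],
                     [0.15, 0.02, 0.05]], dtype=np.float64)

rvec = np.zeros((3, 1))                  # camera aligned with world axes (assumption)
tvec = np.array([[0.0], [0.0], [2.0]])   # optical axis through the swarm, 2 m away

# s * p = A [R|t] P_w  (Eq. 1), with lens distortion applied by OpenCV.
pixels, _ = cv2.projectPoints(track_3d, rvec, tvec, A, dist)
print(pixels.reshape(-1, 2))             # (u, v) pixel coordinates used as 2D data
```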
As detailed by Butail et al. [8], the camera was positioned between 1.5 m and 2.5 m away from the swarm. Therefore, in our simulated experiment, the camera model was positioned at 2 m from the swarm centre. Simulations were conducted with and without the lens distortion terms, which showed that the vast majority (> 98%) of the distortion observed in the image was due to perspective at this range (for the camera intrinsic matrix values given above and a cuboid object extending 1 m in each axis). For the single-lens 2D camera model, the coordinates of the image points in pixels (from Eq. 1, corresponding to the 3D trajectory coordinates) were used directly for feature calculation and classification.

To investigate the impact of different distances between the camera and the swarm on classifier performance from a single-lens 2D measurement, adjustments in the focal length of the camera model were accounted for such that the swarm occupied the same extent in the image. The thin lens equation (Eq. 3) was used to approximate the distance from the lens to the image plane as the object distance is varied. This equation relates the focal length, f, to the distance of the object to the camera lens, u, and the distance of the camera lens to the image plane, v:

$\frac{1}{f} = \frac{1}{u} + \frac{1}{v} \qquad (3)$

Subsequently, by applying the magnification equation (Eq. 4), the magnification factor, M, was determined [23]:

$M = \frac{v}{u} \qquad (4)$

Based on the new distance between the object and the lens, the corresponding focal length was calculated and utilised in the camera intrinsic matrix (Eq. 2). Thereby, datasets for 3D, 2D telecentric, and 2D single camera at varying object distances were derived; an example is provided (Fig. 4).

Machine learning framework
This study employs an anomaly detection framework, as detailed in [18], to classify male and non-male mosquito tracks. Track durations are unified by splitting them into segments of equal duration, and flight features are extracted per segment. In [18], tracks shorter than double the segment length were removed. However, this restriction is removed here, as filtering was unified to exclude tracks < 3 s in duration. This unifies the datasets and makes the downstream comparison like-for-like. Features are selected using the Mann-Whitney U test and highly correlated features are removed. Classification is performed using a one-class support vector machine (SVM) model trained on a subset of the male class. The model forms predictions on track segments, and then a voting method is employed to return the final class prediction of whole tracks (Fig. 5).

Fig. 5 Diagram outlining the machine learning pipeline used to classify male and non-male mosquito tracks
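A minimal sketch of this segment-then-vote scheme is shown below, using scikit-learn's one-class SVM with ν = 0.2 (the value used in this study); the feature matrices are synthetic placeholders and the scaling step is an assumption rather than a detail taken from the published pipeline.

```python
# Minimal sketch of the anomaly-detection classifier described above: a
# one-class SVM is trained on feature vectors from male track segments only,
# predicts each segment of an unseen track, and a majority vote over segments
# gives the whole-track label. Data are synthetic stand-ins.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
male_segments = rng.normal(0.0, 1.0, size=(300, 10))        # training class only
unseen_track_segments = rng.normal(0.5, 1.2, size=(8, 10))   # segments of one track

scaler = StandardScaler().fit(male_segments)
model = OneClassSVM(kernel="rbf", nu=0.2, gamma="scale")
model.fit(scaler.transform(male_segments))

# +1 = consistent with the male class, -1 = anomalous (non-male).
segment_votes = model.predict(scaler.transform(unseen_track_segments))
track_label = "male" if np.sum(segment_votes == 1) > len(segment_votes) / 2 else "non-male"
print(segment_votes, track_label)
```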
The 3D trajectory feature set is detailed in [18]. For 2D trajectory data, an equivalent feature set was employed, resulting in 136 features of flight, with most feature calculations remaining consistent, albeit with the exclusion of the third axis. For instance, straightness (also referred to as tortuosity) is computed as the ratio between the actual distance travelled and the shortest path between the start and end positions. For 3D trajectories, this was calculated as:

$S_{3D} = \dfrac{\sum_{i=2}^{N}\sqrt{(x_i-x_{i-1})^2+(y_i-y_{i-1})^2+(z_i-z_{i-1})^2}}{\sqrt{(x_N-x_1)^2+(y_N-y_1)^2+(z_N-z_1)^2}}$

However, in the 2D trajectory case, this was now calculated as:

$S_{2D} = \dfrac{\sum_{i=2}^{N}\sqrt{(y_i-y_{i-1})^2+(z_i-z_{i-1})^2}}{\sqrt{(y_N-y_1)^2+(z_N-z_1)^2}}$

The calculations for the remaining features were originally devised for 2D trajectories. However, in the context of 3D trajectories, projections onto the X-Y, Y-Z, and X-Z planes were computed, resulting in the derivation of a single value. An example of this is the calculation of curvature, which requires a single plane.

The study employed K-fold cross-validation. Two male trials were reserved for testing, while the remaining trials were used in training. All remaining classes (couples, females, and focal males) were used in testing. In the K-fold cross-validation process, different combinations of male trials were systematically rotated into the training set in each iteration, which is referred to as a 'fold'. Performance metrics such as balanced accuracy, ROC AUC (area under the receiver operator curve), precision, recall, and F1 score were calculated, with males and non-males each considered in turn as the positive class for metric computation.

The framework had various parameters that can be tuned, including the machine learning model hyperparameters and the window size used to split tracks into segments. These were tuned together in a cross-validated grid search attempting to maximise balanced accuracy. An independent tuning set containing three male trials and two couple trials, distinct from the dataset used to report the classification performance (named the modelling set), was used to obtain the best parameters. The grid search utilised in this study encompassed a more refined range of values with a smaller step size compared to [18], which is detailed in the supplementary material. The hyperparameter ν, described as "an upper bound on the fraction of training errors and a lower bound of the fraction of support vectors", was set to 0.2. This value was chosen to regularise the model strongly, allowing large errors on the male class (the only class seen during training) in order to reduce overfitting.

Evaluation of transformed data
Various methods were used to assess the different datasets. The machine learning pipeline provides quantitative metrics for evaluating performance on the 3D and 2D trajectory feature sets. Analysing feature correlations between the 3D and 2D datasets can reveal insights into the preservation of flight features within 2D trajectories. Correlations were computed by calculating the average absolute Pearson's correlation coefficient across features between two datasets. Even though each dataset has specific window parameters that are identified during hyperparameter tuning, a fixed segment size and overlap were used to determine the correlation matrix and generate paired samples. An alternative technique for analysing and comparing features is to visualise them through an embedding.
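Before turning to the embedding just mentioned, the correlation summary described above can be sketched as follows; the feature matrices are synthetic stand-ins for paired segments from two datasets rather than the study's real features.

```python
# Minimal sketch of the dataset comparison described above: the same features
# are computed for paired segments in two datasets (e.g. 3D and 2D telecentric),
# each feature is correlated across segments, and the average of the absolute
# Pearson coefficients summarises how well the features are preserved.
import numpy as np
from scipy.stats import pearsonr

def mean_abs_feature_correlation(features_a, features_b):
    """features_a, features_b: (n_segments, n_features) arrays with paired rows."""
    coefficients = [
        abs(pearsonr(features_a[:, j], features_b[:, j])[0])
        for j in range(features_a.shape[1])
    ]
    return float(np.mean(coefficients))

rng = np.random.default_rng(2)
feat_3d = rng.normal(size=(500, 12))
feat_2d = feat_3d * 0.9 + rng.normal(scale=0.3, size=feat_3d.shape)  # correlated copy
print(mean_abs_feature_correlation(feat_3d, feat_2d))  # close to 1 when features are preserved
```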
Here, an embedding is a lower dimensional space that condenses the information content from a higher dimensional space.Uniform manifold approximation and projection (UMAP) [24] creates a visualisation that shows how the 2D/3D datasets cluster within the embedded feature space.Notably, UMAP is a dimensionality reduction technique that preserves the local relationships and global structure of the data, making it particularly suitable for this purpose. Most importantly, it is necessary to deduce whether the machine learning models are utilising features correctly and behavioural insights gathered are consistent with those from 3D trajectories.By using SHapley Additive exPlanations (SHAP) values [25], it was possible to visualise and explain how the model made its predictions. From [18], classification of male and non-male trajectories based on 3D trajectory features was demonstrated, alongside XAI to interpret the machine learning model.The SHAP plots have increased noise due to using field data and may exhibit a slight skew in the colour scale.To ensure robust interpretations, SHAP scatter plots were also used to visualise the SHAP value distribution as a function of feature value. Results The 3D dataset was transformed into 2D telecentric and single-camera datasets at various distances from the swarm.Evaluating the machine learning framework's performance at these distances (Fig. 6), the single-camera model closely matches the telecentric dataset as the camera moves farther from the object.For each distance, the tuned pipeline returns differing segment sizes and overlaps, which are also displayed. Comprehensive results for the performance when using tuned pipeline parameters of the 3D dataset, 2D telecentric dataset, and 2D single-camera datasets with the camera placed at 2 m and 15 m are provided (Table 2).Across all datasets, the best performance obtained was from the 3D tracks with a balanced accuracy and ROC AUC score of 0.656 and 0.701.This performance may seem low, but the classifier is attempting to distinguish small differences in features of flight over segments of a few seconds and the data were captured in the field under various conditions; hence, such performance in this application is notable.Generally, the single-camera model performs worse than the telecentric and 3D methods in both cases.At 2 m, the single-camera model fares 6.8% and 6.9% worse in balanced accuracy and ROC AUC compared to the 3D dataset, primarily because of perspective distortion.The telecentric and 3D methods exhibit similar performance with absolute percentage differences of 2.1% and 1.0% in balanced accuracy and ROC AUC, respectively.This indicates preserved tracking accuracy with a 2D telecentric dataset, i.e. two orthogonal displacement components quantified, despite a loss of depth information.Similarly, when the singlecamera model is placed farther away, its performance closely mirrors that of the 2D telecentric dataset with absolute percentage differences of 0.7% and 1.3% for balanced accuracy and ROC AUC respectively.Note that the performance metrics for the female and focal male classes are not conclusive as these results are based on a limited number of tracks. 
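For reference, the two headline metrics in Table 2 can be computed with scikit-learn as sketched below; the labels and scores are synthetic, standing in for whole-track predictions and decision-function scores from the one-class SVM.

```python
# Minimal sketch of the metrics reported above: balanced accuracy and the area
# under the ROC curve, computed with scikit-learn on synthetic labels/scores.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, roc_auc_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=200)                      # 1 = male, 0 = non-male
scores = y_true * 0.8 + rng.normal(scale=0.5, size=200)    # imperfect classifier scores
y_pred = (scores > 0.4).astype(int)                        # thresholded track predictions

print(balanced_accuracy_score(y_true, y_pred))
print(roc_auc_score(y_true, scores))
```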
A closer analysis of individual fold performance across the datasets revealed additional understanding. As reported in [18], poorly performing folds were those that were tested on abnormal trials where the mosquito type was different (Mopti form instead of Savannah form) and the swarm location differed (over bundles of wood rather than bare ground). These conditions could alter mosquito trajectory features, potentially causing them to fall outside the decision boundary of the single-class model. Conversely, folds including abnormal trials in training consistently performed best. This indicates potential overfitting to the variability within their features, leading to accurate classifications for male mosquitoes but reduced accuracy for non-males. This trend held for both the 2D telecentric and single-camera models at 15 m. However, the 2 m single-camera model displays the opposite behaviour, with the best performance on folds containing abnormal trials in testing. This implies that the perspective distortion introduced by the camera at this distance is affecting the feature values and their variability, resulting in unexpected performance variations across different trials.

The performance of these models can be visualised through confusion matrices (Fig. 7) and receiver-operator characteristic (ROC) curves (Fig. 8). The confusion matrices display the predictions of all folds, with the percentage of predictions labelled in each section of the matrix. The ROC curves depict the performance of each of the models across the folds.

Analysing the correlation between features from different datasets can reveal insights into the preservation of flight features in 2D trajectories. Datasets were generated at various distances using the same segment size and overlap, such that correlation can be computed between paired samples. To compute the correlation, each of the datasets was pairwise correlated to produce the matrix (Fig. 9). Overall, utilising a 2D telecentric setup preserves features well when compared to the 3D dataset, with an average correlation of 0.83. Shape descriptors show the lowest correlation because of depth loss, which is expected. Conversely, a single-camera setup compromises tracking accuracy, resulting in lower feature correlation compared to a 3D stereoscopic system. The average correlation of the 2D single camera at 2 m with the 3D dataset and with the 2D telecentric system is 0.72 and 0.87, respectively. However, positioning the single camera at 9 m significantly improves correlation. Average correlation values increase to 0.80 and 0.96 compared to the 3D and 2D telecentric datasets, respectively. These results are expected, as increasing the camera-swarm distance reduces the perspective distortion effect, thereby resembling telecentric setup data and enhancing feature preservation.

The UMAP representation (Fig. 10) provides a clear visualisation of the disparities between the datasets. The SHAP plots of the best performing folds for each model were generated and are provided in the supplementary material. This includes SHAP summary plots for the best performing folds for the 3D, 2D telecentric, 2D single-camera model at 2 m, and 2D single-camera model at 15 m datasets, respectively (Additional file 1: Figs. S1-S4). The supplementary material also includes SHAP summary plots where only the common features across each model are selected and sorted alphabetically (Additional file 1: Figs. S5-S8).
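A minimal sketch of how SHAP values of this kind can be obtained for a one-class SVM with the model-agnostic KernelExplainer is given below; the model, data and sampling settings are illustrative assumptions, not the configuration used to produce the figures in the supplementary material.

```python
# Minimal sketch: SHAP values for a one-class SVM via KernelExplainer, applied
# to the model's decision function. All data here are synthetic placeholders.
import numpy as np
import shap
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
X_train = rng.normal(size=(200, 5))       # stand-in for male training segments
X_segments = rng.normal(size=(20, 5))     # stand-in for segments to explain
model = OneClassSVM(nu=0.2, gamma="scale").fit(X_train)

background = shap.sample(X_train, 50)                          # background distribution
explainer = shap.KernelExplainer(model.decision_function, background)
shap_values = explainer.shap_values(X_segments, nsamples=200)  # (20, 5) contributions

feature_names = [f"feature_{i}" for i in range(5)]             # hypothetical names
shap.summary_plot(shap_values, X_segments, feature_names=feature_names)
```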
SHAP scatter plots for the third quartile of the angle of flight feature are provided for each dataset (Fig. 11). This feature was chosen as an example to illustrate the impact that each camera system has on SHAP and feature values. In this figure, each point represents a segment, with its corresponding normalised feature value on the x-axis and its SHAP value on the y-axis. A histogram of the segment feature values is provided as a grey shadow.

The feature selection process for each dataset selects slightly different types of features. Among the datasets, the numbers of selected features are as follows: 61 for the 3D dataset, 34 for the 2D telecentric dataset, 42 for the 2D single-camera dataset at 2 m, and 35 for the 2D single-camera dataset at 15 m. Notably, the 3D dataset contains more features as it includes some feature calculations projected onto the X-Y, Y-Z, and X-Z planes which are not present with 2D data. Despite these differences, a significant portion of the features is shared between them. Specifically, 85% of the features are common between the 2D telecentric and 2D single-camera datasets at 2 m, while 97% of the features are common between the 2D telecentric and 2D single-camera datasets at 15 m. These observations further reaffirm that the 2D single-camera dataset at 15 m can effectively emulate a 2D telecentric system. It is important to note that across all datasets, only a few shape descriptors are selected, consistent with the findings from [18].

Fig. 9 Pairwise correlation matrix between each dataset. The Pearson correlation between the same features for each pair of datasets is computed, with the average of the correlations taken to return a final value for the dataset pairs
Fig. 10 UMAP representation of each of the datasets

Discussion
This study compares 3D and 2D trajectory datasets simulating various imaging techniques. Performance metrics were obtained via a one-class machine learning classifier on field data of male and non-male mosquitoes in a mating swarm. Generally, the 3D and 2D telecentric datasets performed best, with the exception of some metrics from the 2D single-camera model at 15 m. Performance with a single camera at a great distance (with a suitable focal length lens) approached that of the telecentric dataset. However, at a typical distance for insect tracking of around 2 m, performance showed an average decrease of about 0.05 across all metrics on the test datasets.

Earlier, we hypothesised that 2D telecentric imaging data would perform similarly to stereoscopic 3D data despite the loss of one axis of information. We anticipated that a single-camera model would be less effective at short distances compared to larger distances, where trajectory data align more closely with telecentric imaging (with larger focal length imaging lenses). The machine learning classifier performance metrics confirm both hypotheses. The implication of the first hypothesis is that the features necessary to differentiate the behaviour of male compared to non-male mosquitoes are present in two orthogonal components of motion as well as in a complete three-dimensional measurement. This is different from what we normally consider to be the accuracy of a measurement. In terms of metrology, accuracy is the difference between a measurement and the true value. The speed of a mosquito requires all three velocity components for accurate determination. The findings demonstrate that features extracted from 2D orthogonal, i.e. independent axes, measurements can characterise behaviour comparably to 3D measurements (Table 2).
Single-camera 2D data are typically obtained without calibrating for geometric distortions introduced by the imaging lens. Distortion increases linearly with radial distance from the optical axis and follows a power law with respect to numerical aperture [26], and the angular field of view increases as the camera is moved closer to the scene of interest. Hence, close-range imaging yields higher distortion than distant imaging for the same field of view. Perspective effects at close range mean that the two components of position measured at a detector are also a function of object position along the optical axis, with a magnitude that increases with radial distance (as for lens distortion). Hence, it appears that classifier performance is impacted by perspective and distortion aberrations, particularly noticeable at closer distances. Conversely, positioning the camera further away reduces perspective distortion, leading to more reliable interpretations akin to 2D telecentric data. However, by tuning the parameters of the machine learning pipeline at each distance, the pipeline partially compensates for the distortion effects introduced at smaller distances. The changing segment size at each distance, as determined by the tuning dataset, thus plays a strong role in the classification performance, leading to some of the variations in balanced accuracy and ROC AUC as distance increases. Figure 6 captures these variations and also depicts the tuned segment size and overlap at each distance. Differing segment sizes capture different scales of behaviour and would lead to variations in feature values and thus differences in classification performance. Intriguingly, this study found that comparable performance between a single-camera 2D measurement and the corresponding 2D telecentric assessment is achieved at a range of 9-10 m. The pipeline parameters beyond 9 m remain consistent and are equivalent to those of the telecentric system, displaying its effectiveness at emulating a telecentric camera system.

The correlation analysis highlights differences between the camera systems. The single-camera model at 15 m correlates strongly with 2D telecentric data (0.96), while 2D telecentric data correlate well with the 3D dataset (0.83). In the UMAP representation (Fig. 10), features from the 2D single camera at 2 m cluster towards the upper left corner, suggesting less reliable and inconsistent object tracking at close range. SHAP scatter plots for the third quartile of the angle of flight feature (Fig. 11)
corresponding to the four different imaging setups demonstrate similarity among the 3D, telecentric, and single-camera-at-15-m datasets, whereas the single camera at 2 m shows increased noise and overlap between the classes across some feature values. This feature describes the upper quartile of the change in angle of flight distribution within a track segment, where high values indicate a large deviation. It can be argued that this feature for the 3D dataset shows the clearest separation between the male and non-male classes, while overlap occurs in the other setups. In the SHAP scatter plot for the single camera at 2 m, the histogram displays a distorted distribution of normalised feature values compared to the other histograms, further illustrating the distortion that camera systems at close distance introduce. SHAP summary plots in the supplementary information confirm these trends, indicating subtle differences in feature contributions and a slight skew towards male predictions with close-range single-camera models. This phenomenon can be attributed to the perspective distortion introduced in trajectories constructed by a single-camera model, resulting in highly variable features across all classes. Consequently, the distinct separation between classes diminishes for 2D imaging at close range.

The study primarily focused on the Y-Z view directly imaged on the camera detector, but the other two orthogonal views were also assessed (Additional file 1: Figs. S12-S13). Notably, the machine learning model performance of these additional views was higher than that of the original view discussed here. Specifically, the overhead X-Y view, which captures the distinctive circular motions of swarming male mosquitoes and the more erratic behaviours of mating couples, likely owes its higher effectiveness to these cues. The X-Z plane, observing the swarm from the other side view, may perform better because of the increased uncertainty of X-positional data in combination with perspective, which may amplify the depth information (e.g. through increased variability in certain features). Mating couple tracks move less in the depth plane and thus lead to bias towards one of the classes. Both these views utilise the depth axis which, while derived, introduces significant noise, rendering these findings less reliable. During the generation of the dataset, the camera system was placed 1.5-2.5 m away from the swarm with a baseline of 20 cm [8], meaning the angle subtended by the cameras at the swarm in the stereoscopic setup varies between 4.6 and 7.6 degrees. According to [27], for a related stereoscopic imaging setup with an angle of 5 degrees, the uncertainty in depth displacements is > 11 times the uncertainty parallel to the detector plane; with an angle of 7.5 degrees, this uncertainty is > 7 times the uncertainty parallel to the detector plane. As a result, the accuracy of the depth component (X) is 7-11 times worse than that of the other measurement components, and thus the results from the other views are unreliable.
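The quoted 4.6-7.6 degree range follows directly from the stereo geometry. The following minimal Python calculation assumes the swarm sits on the perpendicular bisector of the 20 cm baseline:

```python
from math import atan, degrees

def stereo_angle_deg(baseline_m: float, distance_m: float) -> float:
    """Angle subtended at the swarm by a stereo camera pair."""
    return degrees(2 * atan((baseline_m / 2) / distance_m))

print(stereo_angle_deg(0.20, 2.5))  # ~4.6 degrees at 2.5 m
print(stereo_angle_deg(0.20, 1.5))  # ~7.6 degrees at 1.5 m
```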
Overall, the 3D dataset demonstrates superior performance, followed by the telecentric dataset. Both setups can be configured in a small experimental footprint compatible with experimental hut trials in sub-Saharan Africa. Stereo 3D setups require alignment of the two cameras on the same field of view and in situ calibration. Two-dimensional telecentric setups require large-aperture optics, typically achieved with plastic Fresnel lenses [19] the same size as the required field of view, and careful alignment of the separation between the camera and the large-aperture lenses. Single-camera 2D imaging is experimentally simpler and can be done with a lower camera-to-object distance within the size of a typical experimental hut, but it then generates the distortions described above and lower machine learning classification performance, and hence difficulties in behaviour interpretation. Two-dimensional imaging at longer range becomes problematic for practical reasons: the image path would extend outside a typical dwelling, and it is difficult to prevent occlusion by people and animals during recordings that can take several hours. Also, with large focal length lenses, outdoor implementation in low-light conditions can be particularly problematic as the optical efficiency reduces, an effect that has not been investigated here. It is also recognised that the calibration process for stereoscopic imaging naturally means that trajectories are obtained in physical distance units, e.g. mm; telecentric setups can also be relatively easily calibrated, as position data parallel to the camera detector remain the same irrespective of an object's position along the optical axis. The resulting machine learning models can therefore be applied to the results of other, equally well calibrated experiments that attempt to elicit similar behaviours. Two-dimensional single-camera measurements are obtained in pixels from the detector; whilst known artefacts could be placed in the field of view for calibration, a manual assessment of whether trajectories are in the appropriate depth plane would need to be made. Hence, the machine learning models from 2D single-camera measurements are less useful than those from the calibrated data of stereoscopic 3D or telecentric 2D setups.
There are certain limitations of this study that should be acknowledged. First, the datasets used for comparing the performance of different tracking systems were all simulated, except for the 3D dataset. The 3D data used for simulating the other tracking systems were gathered from mosquito swarms, where movement revolves around a central point, resulting in generally symmetric trajectories (especially in both horizontal axes). As a result, these findings may not be applicable to studies involving unsymmetrical movements (e.g. mosquito flight around bednets [19]). The 2D datasets are oriented primarily to capture the vertical axis, with respect to the ground, and one horizontal axis; it is probably important for 2D datasets to include the effect of gravity and one other orthogonal axis. Were a trajectory to lie along a linear axis not captured by a 2D imaging system, then clearly it would fail to provide useful information. However, in the mating swarm data used here [8], in data from field tests tracking mosquitoes around human-baited insecticide-treated nets [28], and in odour-stimulated wind tunnel tests [29], mosquitoes do not exhibit straight-line flight behaviours. The 3D data themselves were gathered from wild mosquito swarms, and as such the trajectories may already contain noise that may reduce performance across all tracking simulations. To further validate these findings, future trials of the various tracking systems should generate new experimental data from each system in diverse scenarios and compare their trajectories to determine whether the same behaviours and trends between the 3D and 2D datasets are observed.

Conclusions

Accurately tracking mosquitoes, or more generally insects, is a difficult task that requires care to be taken at many stages. This includes considering the experimental conditions, the video recording equipment, and the software used to identify insects from videos. Nonetheless, accurate tracking of mosquitoes could lead towards improved understanding of their behaviours that may influence disease transmission intervention mechanisms. The results of this study imply that 2D telecentric and 3D stereoscopic imaging should be the preferred imaging approaches to adequately capture mosquito behaviour for machine learning analysis. Both of these approaches are compatible with laboratory and field-based studies, but it should be recognised that 2D telecentric imaging is less complex and the data more straightforward to process. Single-camera 2D imaging over a large, metre-scale field of view, although experimentally easier and needing less expensive equipment, should be avoided because of the distortion in the results and the subsequent difficulty in interpretation. Nonetheless, if a single camera is placed at a considerable distance from the object of interest, achieving accurate interpretations of behaviour may be feasible. However, this demands expensive long-focus lenses and a strong light source to effectively record trackable mosquitoes.

Fig. 1 Schematic of stereo camera setup for 3D mosquito tracking, illustrating the boundaries of the space imaged by both cameras where 3D measurements are possible.
Fig. 3 Schematic of a single-camera telecentric setup for measuring two orthogonal components of mosquito movement. The mosquitoes are back-lit by the LED on the left-hand side and are observed as shadows on the camera.
Fig. 4 Plots displaying the effect of the transformation methods on a single mosquito trajectory. a Original 3D track. b Two transformation methods applied to the trajectory: the 2D telecentric transformation with depth information ignored (blue) and the 2D camera model developed in OpenCV at 2 m (orange) and 15 m (green), respectively, whilst utilising the distortion coefficients from Butail et al. [8].
Fig. 6 Classification performance of the 2D single-camera model as the distance varies. Solid lines: 2D single-camera model; dashed lines: data from the 2D telecentric model. a Balanced accuracy as distance varies. b ROC AUC score as distance varies. Both graphs also display the optimised segment size and overlap from hyperparameter tuning at each distance.
Fig. 7 Confusion matrices of each dataset: a original 3D dataset, b 2D telecentric dataset, c 2D single-camera model at 2 m, and d 2D single-camera model at 15 m.
Fig. 8 Receiver operating characteristic (ROC) curves of each dataset. The dark blue line displays the average ROC curve across all folds, the light blue lines show the ROC curve at each fold, and the grey shadow depicts the standard deviation. Within the figure, (a) displays the original 3D dataset, (b) the 2D telecentric dataset, (c) the 2D single-camera model at 2 m, and (d) the 2D single-camera model at 15 m.
Fig. 11 SHAP scatter plots for the third quartile of the angle of flight feature. Within the figure, (a) displays the original 3D dataset, (b) the 2D telecentric dataset, (c) the 2D single-camera model at 2 m, and (d) the 2D single-camera model at 15 m.
Table 1 Numbers of experiments and tracks for each class of mosquito.
Table 2 Performance metrics of each 3D/2D dataset when passed into the machine learning pipeline, with the 95% confidence interval provided in brackets.
A Very Fast Decision Tree Algorithm for Real-Time Data Mining of Imperfect Data Streams in a Distributed Wireless Sensor Network

Wireless sensor networks (WSNs) are a rapidly emerging technology with great potential in many ubiquitous applications. Although these sensors can be inexpensive, they are often relatively unreliable when deployed in harsh environments characterized by a vast amount of noisy and uncertain data, such as urban traffic control, earthquake zones, and battlefields. The data gathered by distributed sensors, which serve as the eyes and ears of the system, are delivered to a decision center or a gateway sensor node that interprets situational information from the data streams. Although many other machine learning techniques have been extensively studied, real-time data mining of high-speed and nonstationary data streams represents one of the most promising WSN solutions. This paper proposes a novel stream mining algorithm with a programmable mechanism for handling missing data. Experimental results from both synthetic and real-life data show that the new model is superior to standard algorithms.

Introduction

It is anticipated that wireless sensor networks (WSNs) will enable the technology of today to be employed in future applications ranging from tracking, monitoring, and spying systems to various other technologies likely to improve aspects of everyday life. WSNs offer an inexpensive way to collect data over a distributed environment that may be harsh in nature, such as biochemical contamination sites, seismic zones, and terrain subject to extreme weather or battlegrounds. The sensors employed in WSNs, which are miniature embedded computing devices, continue to produce large volumes of streaming data obtained from their environment until the end of their lifetime. It is known that when the battery power in such sensors is exhausted, the likelihood of erroneous data being generated grows rapidly [1]. Both uncertain environmental factors and the low cost of the sensors may contribute to intermittent transmission loss and inaccurate measurement. Even when they seldom occur, errors and noise in data streams sensed by a large number of sensors may be misinterpreted as outliers; they frequently trigger false alarms that might either lead to undesirable consequences in critical applications or reduce measurement sensitivity. Data classification is a popular data mining technique used to determine the predefined classes (verdicts) to which unseen data freshly obtained from a WSN map, thereby providing situational information about current events in an environment covered by a dense network of sensors. At the core of the classification technique is a decision tree constructed by a learning algorithm that uses tree-like graphs to model the underlying relations of attributes, characterized by the output signals of the sensors, to predefined classes. Other alternative algorithms include support vector machine, neural network, and Bayesian network algorithms, which offer about the same ability to model nonlinear relations between inputs and outputs. However, decision trees have been widely used in WSNs because of their simplicity and the interpretability of their rules, which can easily be derived from the structure of the tree. The huge volume and imperfect quality of data streams pose two specific issues applicable to data mining applications, especially for decision trees used in WSNs: problems surrounding model induction and predictive accuracy.
A decision tree is constructed by learning from a set of training data, a process in which a local greedy partitioning method is normally used. The training data have to be stationary and bounded in size throughout the learning process. Should new learning data arrive, the learned model must be trained again by processing the whole dataset to update the underlying relations. However, although a single WSN includes a huge number of sensor nodes, each of them has only a limited storage capacity, and it is difficult to accommodate all the training data of the whole network. This implies that data mining can be carried out only at a backend base station that meets storage and computation complexity requirements. Centralized data aggregation gives rise to problems of data synchronization and data consistency, given that the data may come from different sensors randomly distributed over the whole network. Most importantly, retraining a decision tree model incurs an ever-increasing degree of latency due to the tremendous volume of data needed. Even if only the latest data are used and old data are discarded, evolving data streams are nonstationary, and very frequent updates in which the model is repeatedly retrained are therefore needed to keep up with the level of prediction accuracy for the current trend. The second issue is the imperfect quality of the data stream, which clearly affects the prediction accuracy of the decision tree. Noisy data confuse the decision tree with false relations of attributes to classes; such false relations effectively mislead the training algorithm into producing an enormous number of pseudopaths and nodes in the decision tree. Not only do such pseudopaths and nodes degrade accuracy and blunt predictive power, but they also result in problems of tree-size explosion. Though decision tree pruning is a technique commonly employed to remove redundant tree branches and nodes, it surely adds to overall computational complexity and overheads. Given the scarcity of memory space and computational power in WSNs, finding appropriate solutions to alleviate these problems has become an urgent task. This paper proposes an alternative type of decision tree, the very fast decision tree (VFDT), to be used in place of traditional decision tree classification algorithms. The VFDT is a new data mining classification algorithm that offers a lightweight design and can progressively construct a decision tree from scratch while continuing to embrace new inputs from running data streams. The VFDT can effectively perform a test-and-train process each time a new segment of data arrives. In contrast with traditional algorithms, the VFDT does not require the full dataset to be read as part of the learning process, but adjusts the decision tree in accordance with the latest incoming data and accumulated statistical counts. As a preemptive approach to minimizing the impact of imperfect data streams, a data cache and missing-data-guessing mechanism called the auxiliary reconciliation control (ARC) is proposed to function as a sidekick to the VFDT. The ARC is designed to resolve data synchronization problems by ensuring data are pipelined into the VFDT one window at a time. At the same time, it predicts missing values, replaces noise, and handles slight delays and fluctuations in incoming data streams before they even enter the VFDT classifier.
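To make the test-and-train cycle concrete, here is a minimal prequential loop in Python: each arriving window is first used to test the current model and then to train it. The predict_one/learn_one interface is an assumption borrowed from common stream-learning libraries, not the authors' implementation.

```python
def test_then_train(model, stream):
    """Prequential evaluation of an incremental learner such as the
    VFDT: every sample is tested before it is trained on, so accuracy
    reflects performance on data the model has never seen."""
    correct = seen = 0
    for window in stream:                  # one segment per sliding window
        for x, y in window:
            if model.predict_one(x) == y:  # test first...
                correct += 1
            seen += 1
            model.learn_one(x, y)          # ...then train
    return correct / seen
```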
To the best of our knowledge, this novel data mining model is the first attempt to alleviate problems of imperfect data in WSNs using a stream mining algorithm and an auxiliary control. This paper makes two key contributions to the literature: it applies stream mining techniques to WSNs, and it provides an ARC-cache combination to deal with imperfect data streams. The remainder of this paper is organized as follows. Section 2 reviews the technological background to data mining in WSNs and discusses existing methods of handling missing values. An imperfect data stream problem is formulated in Section 3. Section 4 gives a detailed description of our novel VFDT and ARC method and how it can be applied to WSNs. Section 5 details a set of simulation experiments performed using the VFDT and ARC method for both synthetic and real-world datasets. Section 6 concludes the paper.

Data Mining in WSNs. Mining WSN data is said to be constrained by certain limitations and characteristics of WSNs [2]. It depends first on the topology by which a WSN is connected and the purpose of such a setup. Three main topologies have been introduced: (1) the star topology, where a central node is connected to a number of surrounding sensors; (2) the cluster topology, where different stars are interconnected by a central node and the outermost network can be expanded by adding additional stars, like a cluster of clusters; and (3) the mesh topology, known as an ad hoc topology, where nodes and sensors are arbitrarily added and mixed without following any specific pattern. In the WSN literature, a central node known as the sink is the network component that gathers all sensor measurements. The sink usually has greater computational resources than the sensor nodes. As shown in Figure 1(a), Bahrepour et al. [3] proposed a simple "local type" star network characterized by a single sink function that serves as a gateway where collected data are aggregated and data mining is usually performed. The output of a local-type WSN is the set of data mining classification results based on measurements collected from all directly connected sensors. In this type of WSN, it is possible to perform all data mining at the sink. The other type of sink, shown in Figure 1(b), is known as a fusion-type sink and resembles a hierarchy of clusters. Ensemble-style data mining is often carried out at each intermediate gateway; a voting method is used to select the best classification result with the highest level of accuracy. Alternatively, the data mining result from each intermediate gateway serves as a local optimum or answer representing its own branch of clusters; the result is then fed into another downstream cluster as one of the inputs. From the WSN perspective, data mining is conducted at the root of the fusion-type network, with each input taken from each connected cluster that offers a representative output. This paper focuses on the important WSN task of classification. It is applicable to almost all kinds of WSN applications, for example, in detecting whether a monitored biomedical patient is suffering from an illness, tracking whether a herd of cattle is moving along the normal route, determining whether a large machine is operating normally, estimating whether a rainforest is growing in balance, or ascertaining whether an anomaly of any kind has arisen in any other type of environment.
A decision tree classifier makes predictions or classifications according to predefined classes based on test samples by traversing a tree of possible decisions. WSNs commonly adopt the decision tree method because the trees that represent relations between attributes and classes are informative and intuitively understood. Each path through a decision tree is a sequence of conditions that describe a class. Rules can be derived from such decision tree paths and can be used in a WSN to distinguish an outcome or phenomenon based on measurements observed from sensed data. The simplicity of a decision tree offers useful insights due to the transparent model learning process it follows. The model is learnt by first observing a complete set of training samples. Each sample has several attributes, each of which may be represented by a signal given by a sensor. A sample record may take the form (X, y), where X = (x_1, x_2, ..., x_n) is a vector of n attributes and y is a class; the classification problem is to construct a model that defines a mapping function f : {X} ⇒ {y}. In contrast, it is difficult to interpret the inner workings of alternative classification methods such as neural network, support vector machine, and regression model approaches [4]. However, one major drawback of the decision tree method is the risk of overfitting, that is, the situation in which a large number of insignificant tree paths grow, usually as a result of the model being mistrained with many contradicting instances due to noise in the training data. The tree grows tremendously in size, and incorrect tree paths adversely confuse the classification results.

Review of Methods for Handling Missing Values. It is known that a major cause of overfitting in a decision tree is the inclusion of contradicting samples in the model learning process. Noisy data and data with missing values are usually the culprits when contradicting samples appear. Unfortunately, such samples are inevitable in distributed communication environments such as WSNs. Two measures are commonly employed to define the extent of values missing from a set of data [5]: the percentage of predictor values missing from the data set (the value-wise missing rate) and the percentage of observation records that contain missing values (the case-wise missing rate). A single value missing from the data usually indicates a transmission loss or the malfunctioning of a single sensor. A record with missing values may result from a broken link between clusters, as in the fusion-type WSN in Figure 1(b). Three different patterns of missing values can occur: missing completely at random (MCAR), missing at random (MAR), and not missing at random (NMAR) [6, 7]. It is only in the MCAR case that the analysis of the remaining complete data could yield a valid classifier prediction, according to the assumption of equal distributions [8]. The MCAR pattern occurs when the distribution of an example with a missing value for an attribute does not depend on either the observed data or the missing data. In other words, the assumption made when the MCAR pattern occurs is that the missing and complete data follow the same distribution. In this paper, we are concerned only with the value-wise MCAR type of missing data. The simplest, but not the ideal, way to deal with missing values is to discard sample instances with missing values. Alternatively, when the missing values represent only a small percentage of the data set, they can be converted into a new variable.
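For concreteness, value-wise MCAR missingness can be simulated by masking each cell independently with a fixed probability, as in this short sketch (the rate corresponds to the value-wise missing rate defined above; names are illustrative):

```python
import numpy as np

def inject_mcar(X: np.ndarray, rate: float, seed=None) -> np.ndarray:
    """Blank out cells completely at random (MCAR): each value is
    masked with probability `rate`, independently of both the
    observed and the missing data."""
    rng = np.random.default_rng(seed)
    X = X.astype(float)             # returns a fresh array; NaN marks missing
    X[rng.random(X.shape) < rate] = np.nan
    return X
```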
A more commonly adopted method, known as imputation, is to substitute missing values with analyzed or predicted values. Previous studies have compared the performance of different imputation methods in replacing missing values for a decision tree classifier. To the best of our knowledge, few studies examine how to handle missing data for stream mining types of decision tree classifiers. An unresolved problem in stream mining research is how to detect concept changes due to noise-infested data and missing values. A streaming ensemble algorithm (SEA) [9] utilizes an ensemble approach in which a majority vote prevails, similar to bagging, to detect and avoid concept changes in a data stream subject to noise. Another approach is the weighted classifier ensemble (WCE) method [10], in which the weights on different data partitions are carefully adjusted according to the voting decision of the ensemble. Under the WCE approach, previous data are divided into sequential chunks of a fixed size, with a classifier being built from each chunk to improve classification accuracy for the most recent chunk. A third approach, the flexible decision tree algorithm (FlexDT) [11], facilitates robust data mining under concept drift and guards against noise-carrying data streams by using fuzzy logic and a sigmoidal function. The drawback of this approach is clearly the longer run time and slower speed due to the use of fuzzy functions. These methods largely require complex and fundamental changes to the central decision tree algorithm; bagging requires that many other trees are grown so that the one yielding the best result can be selected from the population of trees. Taking into consideration the restrictions on resources and computational power in WSNs, a lightweight approach is likely to be preferred to ensemble-type methods.

Formulation of Imperfect Data Stream Problem

This section presents a mathematical model to address the problem of missing values in data stream mining for a sensor network. The assumptions of the model are as follows.
(i) The model is defined from the perspective of a centralized data stream mining engine or base station in the WSN, where aggregated sensed data are sent to a single classification tree for classification. The mining process runs continuously as the data stream in segments. The length of each data segment is equal to the width of the sliding window. A whole segment enters the VFDT during each window of time.
(ii) The data stream is characterized by a train of data records. Each record has one or more attributes. Each attribute is assumed to hold a value given by a sensor in the case of a local-type WSN, and the value can come from the sink of a cluster of sensors in the case of a fusion-type WSN. The values for the attributes of a record are assumed to arrive in synchronization across a number of different sensors and/or sinks of clusters. A unique time stamp is added to each record. The time stamps increase in uniform intervals.
(iii) All values for the attributes of the same record are used at the same time (including missing values) for each iteration of the test-and-train process at the VFDT. The names record and instance are used interchangeably.
(iv) The attributes of data records take data of the following formats: nominal, numeric, binary, or mixed.
The original mathematical model minimizes the overall number of missing values influencing the VFDT within a window of timeout I.
Because the existence of all X_ij depends on I_{i-p}, the mathematical model can be simplified subject to the following constraints. The objective function (2) represents minimization of the influence of missing values on data stream mining performance, which arises chiefly from unstable network deliveries of data. Constraints (3), (4), and (5) capture the information entropy contributed by the missing values to the VFDT. The importance of a single attribute is represented by its entropy, and the importance of all attributes in a sink comprising several sensors is the sum of the importance of each single attribute. Importance is constrained to be nonnegative. Constraints (5), (6), and (7) logically restrict the existence of missing values in the same sensor. Formulae (9), (10), (11), and (12) are parts of the VFDT learning process that use information gain (9) as a model updating method by checking the attribute splitting criteria. The Hoeffding bound (12) restricts the instances in which updating occurs; it is only when a particular pattern occurs with sufficient statistical regularity and the Hoeffding bound is exceeded that the decision tree splits on a node, thereby expanding the tree by growing new branches. Condition (13) concerns the total number of VFDT attributes collected from distributed sensors. Constraints (14) and (15) are the conditions by which the model is updated in the heuristic evaluation. Constraints (16) and (17) are functions for estimating the average arrival rate of the data stream in segments of X collected by the distributed sensors at timestamp p. An average data rate r̄ is given to estimate the percentage change in the data rate. Constraint (18) indicates the overall error rate of the ARC.

Design of VFDT and ARC in WSN

In stream-based classification, the VFDT decision tree is built incrementally over time by splitting nodes using a small amount of the incoming data stream. How many samples have to be seen by the learning model before it expands a node depends on a statistical method called the Hoeffding bound, or additive Chernoff bound. This bound is used to decide how many samples are statistically required before each node is split. As the data arrive, the tree is evaluated and its nodes can be expanded. The following equations essentially depict the building blocks of the stream mining model using the Hoeffding bound. The tree they represent is generally known as the Hoeffding tree (HT), which grows by holding to the Hoeffding bound as a yardstick. The heuristic evaluation function is used to judge when to convert a leaf at the bottom of the tree into a conditional node, thereby pushing it up the tree. Given that a node split occurs when there is sufficient evidence that a new conditional node is needed, replacing the terminal leaf with the relevant decision node better reflects the current conditions represented by the tree rules. In (19), G(·) denotes the heuristic evaluation function for building a decision tree based on the information gain of an attribute, Info(A_j). The Info(A_j) function measures the amount of information sufficient to classify a sample at a node according to information gain theory. The merit of a discrete attribute is computed from the counts n_ijk, each of which represents the number of samples of class k that reach the leaf with attribute j taking the value i, estimated by collecting sufficient statistics.
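As a schematic rendering of the splitting machinery in equations (9)-(12), and not the paper's own code, the sketch below computes information gain from the sufficient statistics n_ijk and applies the Hoeffding test to the two best attributes:

```python
import numpy as np
from math import log, sqrt

def entropy(counts: np.ndarray) -> float:
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_gain(n_ik: np.ndarray) -> float:
    """Gain of one attribute at a leaf; n_ik[i, k] counts samples of
    class k reaching the leaf with attribute value i."""
    total = n_ik.sum()
    before = entropy(n_ik.sum(axis=0))   # class entropy at the leaf
    after = sum((row.sum() / total) * entropy(row) for row in n_ik)
    return before - after

def hoeffding_split(g_best, g_second, n, num_classes, delta=1e-7):
    """Split when the top two attributes differ by more than epsilon,
    with R = log2(#classes) as the range of the information gain."""
    R = log(num_classes, 2)
    eps = sqrt(R * R * log(1.0 / delta) / (2.0 * n))  # Hoeffding bound
    return (g_best - g_second) > eps
```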
In (20), P_i is the probability of observing value i of the attribute, and P_{i,k} is the probability of observing value i of the attribute given class k, derived from (9). Let us assume that we have a real-valued random variable r with a bounded range R, of which n independent observations have arrived. Equation (12) shows how the Hoeffding bound is computed: with a confidence level of 1 − δ, the true mean of r is at least r̄ − ε, where r̄ is the observed mean of the samples and ε = √(R² ln(1/δ) / (2n)). For information gain, the range is R = log₂(number of classes). The core of the algorithm is the use of the Hoeffding bound to choose a split attribute as the decision node. Let x_a be the attribute with the highest G(·) and x_b the attribute with the second-highest G(·), such that the difference between the pair of top-quality attributes is defined as ΔG = G(x_a) − G(x_b). The VFDT is operated according to a simultaneous test-and-train process, meaning that when a new data segment arrives, the attribute values of the segment pass down the tree from the root to one of the most likely leaves. In this way, the tree engages in a testing process, also known as a classification or prediction exercise, based on the sample data. In the same pass (traversing the tree), if the sample data carry a known class y, the model estimates whether the inclusion of the new sample has resulted in the accumulation of sufficient statistics and decides whether a new tree node should be split. This action amounts to lightweight model learning, where the decision tree uses heuristic methods (as in (9) to (14)) to estimate whether the current structure of the tree, representing current knowledge, needs to be updated. The workflow of the VFDT building process is shown in Figure 2. The model starts from scratch and progressively shapes the tree as new data arrive. The tree building process is essentially different from that followed by the traditional type of decision tree, which requires repeated scanning of the whole bounded database for training and is inappropriate in a data stream mining environment. The VFDT learns by simply reading through the data stream and checking whether it should further expand its tree nodes. This unique model learning feature makes it a suitable candidate for implementing an autonomous decision maker in WSNs.

ARC Design. The ARC is a set of data preprocessing functions used to solve the problem of imperfect data streams before they enter the VFDT. The ARC can be programmed as a standalone program which may run in parallel and in synchronization with the test-and-train VFDT operation. Synchronization is facilitated by using a sliding window that allows one segment of data to arrive at a time at regular intervals. When no data arrive, the ARC and the VFDT simply stand still without any action. The operational rate of the sliding window should be no greater than the speed at which the VFDT is operated and faster than the speed at which the WSN sensors transmit data. When data segments arrive as a stream, one segment at a time is initially cached. The sliding window closes for a brief moment. While the window is closed, the ARC attempts to correct four different types of imperfect data (if any) in the cache: missing values, noise, delayed data, and data fluctuations. The correction methods employed for each type are described in the following sections.
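One sliding-window cycle as just described might be orchestrated as follows; every object and method name here is a hypothetical placeholder for the ARC and VFDT components detailed in the surrounding text.

```python
def process_window(bucket, arc, vfdt):
    """One sliding-window cycle: cache the arriving segment, let the
    ARC correct imperfections, then run the VFDT's test-and-train
    pass and emit both outputs."""
    cache = bucket.unload()                  # window closes briefly
    cache = arc.fill_missing(cache)          # guess missing values
    cache = arc.replace_noise(cache)         # outliers -> normal mean
    report = arc.failure_anomaly_report()    # stats on stream quality
    predictions = vfdt.test_and_train(cache) # simultaneous test and train
    bucket.clear()                           # window reopens
    return predictions, report
```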
After the data have been manipulated, with missing values guessed on a best-efforts basis and noise eliminated, the processed data enter the VFDT for instant testing and training. A class prediction/classification output and a failure and anomaly report are then generated by the VFDT and the ARC, respectively. The end user could employ the VFDT output for subsequent decision making if it is implemented at a final base station, or could feed it into a further cluster of the WSN as an intermediate classification result derived from its own cluster. The failure and anomaly report contains statistics on variables such as the percentage of missing values, noise, delay, and data fluctuations as additional information about the quality of current data traffic. This information could be used as a reference indicator to gauge the reliability of the classification result based on the current quality of the data stream. It could also be used as an alarm signal to alert the network administrator to initiate repairs to the network infrastructure should the statistics in the report show a recurring problem over time. The sliding window opens again once the output results are sent, the data cache has been cleared, the VFDT has been incrementally trained, and the gateway sensor node is ready to receive the next incoming segment of data. Only statistics and accumulative counts remain at the ARC and VFDT throughout this continuous operation, thus providing a lightweight operating environment. No historical data need to be stored anywhere at this node. Figure 3 is a block diagram showing the major operating features of the decision center.

Missing Data and ARC Noise Estimation. To tackle the problem of missing values in a data stream, a number of prediction algorithms are commonly used to guess approximate values based on past data. Although many algorithms can be used in the ARC, the one deployed should ideally achieve the highest level of accuracy while consuming the least computational resources and time. Some popular choices we use here for the simulation experiments include, but are not limited to, mean, naïve Bayesian, and C4.5 decision tree algorithms for nominal data, and mean mode, linear regression, discretized naïve Bayesian, and M5P algorithms for numeric data. Missing value estimation algorithms require a substantial amount of past data to function. For example, before using a C4.5 decision tree algorithm as a predictor for missing values, a classifier must be built using statistics from a sample of sufficient size. To further lighten the workload induced by the ARC at the gateway sensor node, the estimation algorithm kicks in only when two conditions are met. First, sufficient statistics must be obtained from past data. Second, the trained classifier (regardless of which algorithm is used) retrains itself only when the prediction error reaches a certain threshold. The ARC therefore registers an error rate e, 0 ≤ e ≤ 1, which is the average probability of the ARC misclassifying a randomly drawn test sample. On the other hand, a missing value predictor is needed for each individual attribute of the data segment. To reduce processing time, a missing value predictor is built for qualified (significant) attributes only, and their missing values are predicted on the run. As a rule of thumb, the 50% most significant attributes account for the bulk of the VFDT's classification accuracy.
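The two retraining conditions can be paraphrased in a few lines of Python; the predictor interface and the error threshold are illustrative assumptions rather than the authors' design.

```python
class ArcPredictor:
    """Wraps one attribute's missing-value predictor and retrains it
    only when its running error rate e crosses a threshold."""

    def __init__(self, build_model, threshold=0.2):
        self.build_model = build_model   # e.g. trains a C4.5-style tree
        self.threshold = threshold
        self.model, self.errors, self.n = None, 0, 0

    def update(self, x, true_value, history):
        if self.model is None:           # first condition: enough past data
            self.model = self.build_model(history)
            return
        self.n += 1
        if self.model.predict(x) != true_value:
            self.errors += 1
        if self.errors / self.n > self.threshold:    # second condition
            self.model = self.build_model(history)   # retrain
            self.errors = self.n = 0
```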
In line with this rule of thumb, about half of the attributes are selected for the missing value treatment, with missing values for the remaining attributes recorded as blank to speed up the overall ARC process. The feature selection method based on principal component analysis is known to be both efficient and effective and is therefore used to rank the most important attributes. The mechanism adopted for handling noise in the data stream is similar to that used to estimate missing values. Noise values are considered to be values far outside the range of normal values. A surge or interruption in radio signals along a wireless communication link will push such values up or down to an extreme. However, because this rarely happens in practice, noise has a low-probability occurrence distribution. In our model, we can safely assume that noise is equivalent to an outlier in our data samples, because both noise and outliers share the same statistical characteristics. The ARC therefore uses an outlier detection algorithm instead of a missing value prediction algorithm to handle noise. However, as argued in [12], traditional outlier detection techniques are not directly applicable to WSNs because of the multivariate nature of sensor data. Janakiram et al. [13] suggest using a Bayesian belief network (BBN) model to identify local outliers in streaming sensor data. In this model, each node trains a BBN at its ARC to detect outliers based on the behavior of its neighbors' readings and its own readings. An observation is considered an outlier if it falls beyond the expected range. Given that the model was shown to work well, it is applied here to detect and replace noise. When a segment of data arrives and fills the cache, the ARC conducts a quick scan to find any outliers. A normal mean is used as a substitute for the values of any outliers in the cache. Figure 4 is a flow chart that shows how the ARC works and retrains its predictor model for the significant attributes only.

Handling Delay and Fluctuations in Data Caches. Additional buffer space is required to overcome delay and fluctuation problems in data mining in WSNs. The additional buffer is called a bucket, which is a preceding space in front of the cache. The bucket can be implemented in the same gateway sensor node at the outermost interface position between the cache and the sink connector. The function of the bucket is essentially that of a synchronized bulk transfer receptacle that regularly shuttles between the data stream inlet and the data cache. It must operate at specific intervals, the frequency of which must be no lower than that of the sliding window. The concept of bucket transfer is analogous to that of a cable car or a lift in a building that carries multiple passengers in bulk. Although passengers (data) can walk into a lift (bucket) asynchronously, they must do so within a certain time limit that has to be shorter than the real-time operational requirement of the lift (VFDT). A slight time latency caused by different sensors can therefore be tolerated. Figure 5 is a visual representation of a bucket. We assume that the ARC that controls the cache is a single entity, the ARC-cache; the bucket is the outermost buffer space that first receives incoming data until it is full. The filled bucket is then loaded into the ARC-cache to fix noise and missing values.
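As a stand-in for the BBN outlier detector of [13], the following sketch scrubs one cached window with a simple z-score test and substitutes the normal mean, as described above; the threshold value is an assumption for illustration.

```python
import numpy as np

def scrub_noise(window: np.ndarray, z: float = 3.0) -> np.ndarray:
    """Replace outlying values in each attribute column of a cached
    window with the mean of the remaining (normal) values."""
    window = window.astype(float)
    for j in range(window.shape[1]):
        col = window[:, j]
        mu, sd = np.nanmean(col), np.nanstd(col)
        if sd == 0:
            continue                           # constant column: nothing to scrub
        bad = np.abs(col - mu) > z * sd
        if bad.any():
            col[bad] = np.nanmean(col[~bad])   # substitute a normal mean
        window[:, j] = col
    return window
```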
To address the issue of data fluctuations, which is one of the requirements of our imperfect data stream handling model, we propose the use of a lower bound β_low on the data rate change that prevents the data rate from dropping suddenly [14]. Meanwhile, because any dramatic rise in the data rate may cause sensor nodes to become congested and the VFDT to time out, any sudden surge should be suppressed. An upper bound β_up on the data rate change is therefore proposed to prevent timeouts. The upper-bound value is related to the real-time constraint T_R. If the data rate is lower than β_low, an empty pocket in the bucket must be replenished. For this reason, we reviewed previous research on dealing with missing data. Six methods are commonly used to deal with the problem of missing data in data analysis: probabilistic splits, the complete case method, grand mode/mean imputation, separate classes, surrogate splits, and complete variation [5]. In addition to the missing data methods listed above, we have also formulated our own solution whereby, when the data rate changes beyond β_up, excess data are saved in a cache and used to replenish missing data when the data rate falls below β_low. A snapshot of the traffic-smoothing data in the bucket is taken from our experiment and shown in Figure 6. Figure 6(a) illustrates the process of the ARC-cache, which uses a window to cache the data collected from four sensors. With a defined window size (size = 5), the ARC caches every five collected records and then feeds them in to build the decision tree. Each record arrives with a timestamp. In Figure 6(b), we give an example of synchronizing delayed arrival data. A cache is like a bucket responsible for collecting the data arriving within a period, which is also referred to as the window size. For example, if a piece of data is delayed and the bucket's window is still open in that period, the cache will wait for the delayed data to arrive before the bucket expires. The bucket with a full data load is then used to update the decision tree as per usual. The corresponding pseudocode of the traffic fluctuation-smoothing algorithm is shown in Pseudocode 1.

Simulation Experiments

Simulation experiments are conducted to validate our theoretical model comprising the ARC-cache and the VFDT. The aim of this section is to evaluate the performance of our proposed methods in dealing with missing values in data streams. Several different types of data streams are used in the experiments to facilitate a thorough comparison, including streams generated synthetically from data generators and real-life data. For the simulation, we implement a VFDT program and extend it by incorporating the functions of an ARC model for guessing missing values in data streams. Though the estimation method employed should be generic, the following methods are used in our experiments: the mean, naïve Bayesian, and C4.5 decision tree methods for nominal data, and the mean mode, linear regression, discretized naïve Bayesian, and M5P approaches for numeric data. The software platform is WEKA 3.6, and the computing platform is a Windows 7 64-bit workstation with an Intel quad-core 2.83 GHz CPU and 8 GB of RAM.

Synthetic Data Stream. An MOA stream generator is used to create a synthetic data stream comprising one million data records.
We wrote a customized Java program which randomly adds missing values according to a parameter set by the user: the missing data percentage (MDP). Another optional control allows the user to place missing values at either the beginning or the end of the data stream. It is well known that decision trees in stream mining models are unstable in the initial stage of learning; inserting missing values at this early stage only lengthens the process of training the model to maturity. It may make more sense to observe the impact of missing values after the VFDT model is established and to see how it responds to the imperfect stream. The MOA stream generator generates the four different synthetic datasets shown in Table 1. The two LED datasets represent a scenario whereby a WSN has a refined resolution of output classes (10 of them). For example, the data collected by the sensors can reflect a wide range of categorical phenomena. The SEA dataset demonstrates a relatively simple scenario whereby binary outputs such as true or false are derived from ternary types of sensed data.

[Plot: accuracy (correct %) versus number of instances.]

LED7 is a data stream that is simpler than the rest in that it has only 7 nominal attributes. In this experiment, we configure the MDP at 20%, 40%, 60%, 80%, and 100%, with missing values randomly inserted at defined positions in the data stream. As a result, in comparison with a perfect data stream where MDP = 0%, a higher MDP comes with a lower level of VFDT classification accuracy. Figure 7 presents the VFDT accuracy comparison when missing data are added to the middle of the data stream, soon after the model learning process is completed; Figure 8 presents the same comparison but with missing values added at the end. We use this set of experiments to show the impact of missing values on data stream mining. Missing values lead to a dramatic reduction in accuracy if the algorithm does not have an ARC mechanism to deal with imperfect data. In other words, the results of these experiments are one of the motivations for this study, suggesting that stream mining algorithms should be robust to missing values, or accuracy will suffer. LED24 is a more complicated data stream with 24 nominal attributes and a total of one million instance records. We add an MDP of 50% to this dataset. To handle the imperfect data, the ARC-cache is applied together with the VFDT at different window sizes: 250, 500, 750, and 1000. A C4.5 decision tree function in WEKA is chosen as the ARC construction method in this case. The experimental results shown in Figure 9 indicate little difference in performance between window sizes in a very large data stream. However, it is also observed that a smaller window size gives a faster ARC-cache and VFDT computing speed. Moreover, we compare the ARC-cache and VFDT results to those of a common missing value solution in which means are used to replace the same missing values; we find that our proposed method yields better data stream mining performance. The ARC-cache provides a function whereby the missing values replacement performance is checked through a statistical report output.
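The two replacement strategies being compared can be paraphrased as follows; the decision tree stands in for the ARC's C4.5 imputer, and the sketch assumes integer-coded nominal attributes with at least some observed values in the treated column (all names are illustrative).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def impute(X: np.ndarray, j: int, model_based: bool = True) -> np.ndarray:
    """Fill missing values in attribute j either with the column mean
    (the WEKA-style baseline) or with a tree trained on the other
    attributes (ARC-style)."""
    X = X.copy()
    miss = np.isnan(X[:, j])
    if not miss.any():
        return X
    if not model_based:
        X[miss, j] = np.nanmean(X[:, j])             # mean baseline
        return X
    others = np.nan_to_num(np.delete(X, j, axis=1))  # crude guard: zero-fill NaNs
    tree = DecisionTreeClassifier().fit(others[~miss], X[~miss, j].astype(int))
    X[miss, j] = tree.predict(others[miss])
    return X
```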
In comparison with the non-missing-values data stream, the report indicates how many correctly and wrongly classified instances arise when different methods are used to replace the missing values (see Table 2). SEA is a data stream consisting of 3 numeric attributes, two nominal classes, and one million instances. We also add an MDP of 50% of missing values to this dataset. To handle such an imperfect data stream, we use the ARC-cache and VFDT with different window sizes of 500, 750, and 1000. The missing value predictor in the ARC is initially trained using the M5P function in WEKA. The experimental results shown in Figure 10 show that the ARC-cache and VFDT method yields unsatisfactory results, possibly due to the processing of purely numeric attributes. Nevertheless, the level of accuracy achieved is still slightly higher than that yielded by the mean method. The results are loaded into a margin curve chart visualized in WEKA to evaluate the model generated by each data stream (as shown in Figure 11). Margin is defined as the difference between the probability predicted for the actual class and the highest probability predicted for the other classes. We observe that the non-missing data stream has the best performance in (a), while the data stream where MDP = 50% yields the worst performance, including some negative prediction results in (b). Comparing ARC missing value replacement with mean-based replacement, the latter produces some negative predictions, whereas the ARC does not. As a result, we find that the ARC-cache and VFDT may have an MDP bottleneck: when the MDP is higher than a certain threshold, the ARC-cache and VFDT method may perform less well. This conclusion makes sense, because when the MDP is relatively high (e.g., close to 50%), the VFDT decision tree can simply no longer determine which parts of the data stream are meaningful and which are not; the same can be said for the missing value predictor in the ARC-cache.

Real-World Data Stream. In this experiment, we use a set of real-world data streams downloaded from the 1998 KDD Cup competition provided by the Paralyzed Veterans of America (PVA) [15]. The data comprise information concerning human localities and activities measured by monitoring sensors attached to patients. We use the learning dataset (127 MB in size) with 481 attributes, originally in both numeric and nominal form. Of the total of 95,412 instances, more than 70% contain missing values. In common with the previous experiment, we compare the ARC-cache and VFDT method with the standard missing value replacement method found in WEKA using means. The results of the comparison are shown in Figure 12. Considering that the number of attributes is very large, we apply a moderate window size (W = 100) for the ARC to operate. A complete dataset given by PVA (115 MB) is used to test the ARC-cache and VFDT method. The experimental results demonstrate that using WEKA mean values to replace missing data yields the worst level of VFDT classification accuracy.
Although using the ARC-cache and VFDT method to deal with missing values in the dataset does not yield results as accurate as the complete dataset without any missing values, the ARC-cache and VFDT performance is much better than that achieved using WEKA means to replace missing values. The enlarged chart in Figure 12 shows that the WEKA replacement approach has very little effect in maintaining the level of performance because of the very high percentage of missing data (70%) in this extreme example. Our ARC model also takes a long time to lift the level of accuracy to that required. Another real-world dataset is introduced to the experiment. This dataset, which can be downloaded from the UCI Machine Learning repository [16], is called "Localization Data for Posture Reconstruction." Through the observation of tag identification sensors for body location activities, the learning objective is to classify different human activities. The original dataset has 7 attributes: sequence name (nominal), tag identifier ID (nominal), timestamp (numeric), date (DD.MM.yyyy HH:mm:ss:SSS), x-coordinate of the tag (numeric), y-coordinate of the tag (numeric), z-coordinate of the tag (numeric), and the activity label. The dataset comprises 164,860 instances. It is a typical example of a scenario in which mixed types of data are sensed. We choose linear regression as the method for predicting missing values for numeric attributes in the ARC and employ the naïve Bayesian method to do the same for nominal attributes. A large number of missing values is deliberately added to the dataset. The missing completely at random (MCAR) method is chosen for use in this scenario. Forty percent of the total number of instances (records) is affected, meaning missing values are added to 65,944 instances completely at random. The distributions of missing values across the different attributes are 8,056 (person sequence name), 8,165 (tag identifier sensor), 7,921 (timestamp), 8,051 (date), 8,092 (x-coordinate), 7,943 (y-coordinate), 7,995 (z-coordinate), and 8,141 (activity target). We observe two significant phenomena from the results shown in Figure 13. The performance gain follows approximately the same trend as the line where no missing values are applied (which is the ideal case). The other interesting finding is that, apart from the person-sequence-name and tag-identifier attributes, the accuracy rises for all the remaining attributes. This result essentially means that ARC predictions for missing values have little effect on identifier-type data, which basically carry no meaning other than indexing the records in the dataset. Time-series-type data with date and timestamp attributes show relatively significant improvements under ARC missing value prediction, because the regression method chosen as the missing value predictor is best at predicting nonexistent values in a time series of existing data. In summary, it may be advisable to choose an appropriate missing value predictor for each different type of attribute to obtain the best results. Regarding the learning speed of the proposed method, we compared the ARC learning speed for each missing value method in the same test. Figure 14 shows that there is a linear relationship between the learning speed and the number of samples.
Regarding the learning speed of the proposed method, we compared the ARC learning speed for each missing value in the same test. Figure 14 shows that there is a linear relationship between the learning speed and the number of samples: as the sample size grows, the time spent on VFDT learning increases linearly. To verify the suitability of our proposed model for real-time applications with respect to learning speed, we investigate the learning time of ARCs coupled with feature selection. In Figure 15, suppose the cache size is 10^4, such that X1 = X2 = X3 = X4 = X5 = X6 = X7, and suppose Y is the learning time taken to process each ARC-cache in the VFDT. From the geometry of the diagram, it is easy to observe that Y1 ≈ Y2 ≈ Y3 ≈ Y4 ≈ Y5 ≈ Y6 ≈ Y7 (Figure 15: an example showing the stability of processing time for the ARC-cache). In other words, the processing speed of the ARC-cache in this example remains stable, and because the increase in total learning time is linear, the method is suitable for some real-time applications. This holds true as long as the time-critical requirement allows at least time Y as the processing time for the proposed method; in that case, the computing speed will satisfy the real-time requirement.

Conclusion

The complex nature of incomplete and infinite streaming data in WSNs has escalated the challenges faced in data mining applications concerning knowledge induction and time-critical decision making. Traditional data mining models employed in WSNs work mainly on the basis of relatively structured and stationary historical data and may have to be updated periodically in batch mode. The retraining process consumes time, as it requires repeated archiving and scanning of the whole database. Data stream mining is a process that can be undertaken at the front line in a manner that embraces incoming data streams. We propose using a Very Fast Decision Tree (VFDT) in place of traditional data mining models employed in WSNs because of its lightweight operation and its lack of a data storage requirement. To the best of our knowledge, no prior study has investigated the impact of imperfect data streams or solutions related to data stream mining in WSNs, although the preprocessing of missing values is a well-known step in the traditional knowledge discovery process. This paper proposes a holistic model for handling imperfect data streams based on four problems that riddle data transmitted among WSNs: missing values, noise, delayed data arrival, and data fluctuations. The model has a missing value predicting mechanism called the auxiliary reconciliation control (ARC). A bucket concept is also proposed to smooth traffic fluctuations and minimize the impact caused by late-arriving data. Together with the VFDT, the ARC-cache facilitates data stream mining in the presence of noise and missing values. To prove the efficacy of our model, a simulation prototype was implemented based on the ARC-cache.
10,796.8
2012-12-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Numerical investigation of turbulent mixed convection in a round bottom flask using a hybrid nanofluid

In this study, we conducted a numerical investigation of the turbulent mixed convection of a hybrid nanofluid (HNF) in a flask equipped with an agitator, which is commonly used in organic chemistry synthesis. The bottom wall and the middle section of the flask were maintained at a constant high temperature Th, while the upper, left, and right walls up to the middle of the flask were kept at a low temperature Tc. The HNF consisted of graphene (Gr) and carbon nanotube (CNT) nanoparticles (NP) dispersed in pure water. The governing equations were formulated using the Boussinesq approximation and solved numerically using the finite volume approach. The effects of the NP volume fraction φ (ranging from 0% to 6%) and the Rayleigh number Ra (ranging from 10^4 to 10^6) on the Nusselt number (Nu) were investigated in this simulation. The results indicated that the heat transfer is noticeably influenced by the Ra number and by the increase in the φ ratio. Additionally, the agitator rotation speed had only a slight effect on the heat transfer.

Introduction

Numerous theoretical and experimental studies have focused on the investigation of mixed convection in cavities containing nanofluids and hybrid nanofluids. Selimefendigil and Öztop [1] analyzed the mixed convection and fusion behavior of a CNT-water nanofluid in a horizontal annulus under the influence of an oriented magnetic field and a rotating surface. In this context, the authors analyzed the fusion behavior and mixed convection characteristics by varying factors such as the volume fraction of nanoparticles, the inclination angle of the magnetic field, the Hartmann number (Ha), and the rotational speed of the inner wall. Their results show that the melted fraction and the Nu value increased with increasing volume fraction of CNT nanoparticles, whereas an inverse effect was seen for the Ha number. It can also be observed that the heat transfer rate is diminished at elevated rotational speeds of the inner wall. Moreover, they demonstrated that the melt front propagation can be controlled by means of the rotation of the inner surface. Akhter et al. [2] studied the mixed convection of an Al2O3-Ag/water hybrid nanofluid in a square cavity under the effect of an oriented magnetic field and two rough rotating cylinders. Akhter et al. examined the influence of factors such as the volume fraction of the hybrid nanoparticles, the rotation speed of the cylinders, the inclination angle of the magnetic field, and the Ha number on the flow and thermal fields. The results obtained from this simulation indicate that the mixed convection flow accelerates with the rotation speed of the cylinders, while it decelerates at elevated values of the magnetic field and the volume fraction of hybrid Al2O3-Ag nanoparticles. They also found that the enhancement in heat transfer reached 261.29% at the maximum rotating speed of the rough cylinders. Ali et al. [3]
carried out a numerical study of magneto-mixed convection in a partially heated rectangular enclosure filled with an Al2O3-water nanofluid and containing a rotating flat plate. The authors examined the effect of the length and speed of the rotating flat plate on the flow and thermal fields. They found that a longer plate and an elevated rotational speed generate the highest heat transfer. Additionally, the optimal heat transfer performance was achieved at a concentration of 5% Al2O3 nanoparticles, which is 123.02% higher than for pure water. Elevated values of the magnetic field attenuate the fluid motion and consequently reduce the heat transfer rate significantly. Jiang et al. [4] numerically studied the entropy production and mixed convection heat transfer of an Fe3O4/MWCNT-water hybrid nanofluid confined in a 3D cubic porous enclosure with a wavy wall and two rotating cylinders. The effects of flow parameters such as the rotation directions of the two cylinders and their positions in the enclosure, the angular speeds of both cylinders, the Darcy number (Da), and the Hartmann number (Ha) were examined. The results showed that heat transfer increases with higher values of the angular speed of the cylinders and the Da number and with lower values of Ha. The entropy generation, on the other hand, is mainly related to heat transfer. In addition, work on convective phenomena within ventilated cavities [7][8][9][10] has mainly concerned simple and regular geometries (square, rectangular, etc.); few studies have dealt with more complex geometries, such as that of Tmartnhad et al. [11], who numerically analyzed the heat transfer in a ventilated trapezoidal cavity crossed by air. The effect of the rotational speed and magnetic field on mixed convective heat transfer in diverse cavities has also been extensively studied. Alsabery et al. [12] investigated the problem of transitory magnetohydrodynamic (MHD) combined convection inside a wavy-heated rectangular enclosure containing two internal rotating cylinders, applying a heterogeneous nanofluid system. The effects of the volume fraction of nanoparticles and the Hartmann number on the flow and heat transfer were also examined in this study. Their results reveal that the augmentation of the Nu number is a consequence of elevating the rotational speed of the two cylinders. In addition, Jabbar et al. [13] examined the mixed convection of a Cu-water nanofluid in an inclined porous enclosure combined with a dynamic rotating cylinder. Jabbar et al. studied the effects of parameters such as the volume fraction of the nanofluid, the conductivity ratio, the inclination angle, the Darcy number, the rotational speed, and the Richardson number. The results obtained indicate that increasing the rotation speed of the cylinder can augment the Nu number by as much as 223%, whereas increasing the thermal conductivity ratio tenfold augments the Nu number by 136%. Furthermore, an opposite effect of the Richardson number (Ri) on the Nu number was observed. In another study, Alsabery et al. [14] conducted a numerical study of the mixed convection and entropy generation of an Al2O3-water nanofluid inside a wavy-walled enclosure including a rotary cylinder. In their work, Alsabery et al.
studied the effect of several physical and geometrical factors, such as the angular rotational speed, the number of undulations, the length of the heat source, the Ra number, and the concentration of Al2O3 nanoparticles. The results obtained illustrate that increasing the rotation speed of the cylinder improves the rate of heat transfer for Ra numbers below 5 × 10^5. The Nu number is also affected by the number of undulations, and increasing the volume fraction of Al2O3 nanoparticles and the length of the heated wall improves the rate of heat exchange.

The effectiveness of such processes is often limited by the thermophysical properties of the fluids used [15]. Research on nanofluids aims to significantly improve heat transfer by introducing a low concentration of nanoparticles (smaller than 100 nm) into a pure fluid [1,16,17]. Several studies have been carried out on the mixed convection of nanofluids [18][19][20]; one such numerical study focused on the heat transfer within a ventilated square cavity crossed by an Al2O3-water nanofluid, varying the location of the discharge opening. It appears that the average Nusselt number increases with increasing Reynolds and Richardson numbers and volume fraction. Shahi et al. [21] and Talebi et al. [22] carried out numerical studies of the mixed convection of a Cu-water nanofluid in a ventilated square cavity, a portion of whose base is subjected to a heat flux. Their results indicate that the addition of nanoparticles increases the average Nusselt number [24][25][26][27][28][29]. In this context, Kalidasan and Rajesh Kanna [30] considered in their numerical study the case of a ventilated square cavity with an adiabatic obstacle at its center. The authors were interested in the contribution of a hybrid nanofluid (consisting of diamond and cobalt oxide nanoparticles in water as the suspending fluid) to the thermal performance of the cavity.

The study of convective heat transfer in the round bottom flask used for organic chemistry synthesis is of great importance for the study of the mechanisms of chemical reactions, given advantages such as the ease of isolation after the reaction, the low cost, and the simplicity of operation [31]. Thus, the objective of the present work is to characterize the turbulent mixed convective heat transfer of a hybrid nanofluid in a round bottom flask containing an agitator, one of the laboratory flasks used in organic chemistry synthesis. The bottom wall, up to the middle of the flask, is maintained at a constant high temperature Th, with the agitator rotation speed (v) fixed at 250 rpm, while the upper, left, and right walls up to the middle of the flask are maintained at a low temperature Tc. The HNF comprises Gr and CNT nanoparticles suspended in pure water. The governing equations are formulated using the Boussinesq approximation and solved numerically using the finite volume approach. In this simulation, we investigated the effects of the NP volume fraction φ (from 0% to 6%), the Rayleigh number Ra (from 10^4 to 10^6), and the rotation speed of the agitator on the Nusselt number (Nu).
Novelty, originality, and practical applications of this work

For most chemical reactions, the reaction rate increases with increasing temperature, so it is always useful to work at the optimum temperature. Likewise, the use of catalysts increases the reaction rate. In this context, graphene (Gr) and carbon nanotubes (CNTs) are used as catalysts for certain chemical reactions (nanocatalysis), and they have been studied for several years as supports for metal nanoparticle catalysts. Adding Gr and CNT nanoparticles to the reaction medium while increasing the temperature will therefore increase the reaction rate. The new findings presented in this study are: the use of nanoparticles enhances the thermal properties for heat transfer and heat storage of the reaction medium inside the round bottom flask and also enhances the rate of chemical reactions (thermal activation of chemical reactions); the use of graphene (Gr) and carbon nanotube (CNT) nanoparticles as catalysts in organic reactions forms complex interactions and creates nano-regions that enhance the reactivity and selectivity of chemical reactions (nanocatalysts). The practical application of this study is the optimization of the operating conditions of industrial processes, especially improving the efficiency of the synthesis reactions of various industrial products. Nanocatalysts allow rapid and selective chemical transformations, with the benefits of excellent product yield and ease of catalyst separation and recovery. New applications of nanocatalysts are developing, covering: nanocatalysis for various organic transformations in fine chemical synthesis; nanocatalysis for oxidation, hydrogenation, and other related reactions; nanomaterial-based photocatalysis and biocatalysis; nanocatalysts for producing non-conventional energy such as hydrogen and biofuels; and nanocatalysts and nano-biocatalysts in the chemical industry.

Problem description and mathematical formulation

The simplifying assumptions used in our study are as follows: the base fluid is Newtonian, incompressible, and satisfies the Boussinesq hypothesis; the nanofluid is assumed to be incompressible, and the flow is turbulent, stationary, and two-dimensional; the thermophysical properties of the hybrid nanofluid are constant, except for the density variation, which is estimated using the Boussinesq approximation. The mathematical formulation and the numerical solution procedure are described as follows. Continuity equation: the continuity equation is a fundamental principle in fluid dynamics that expresses the conservation of mass within a fluid flow. It states that the rate of mass entering or exiting a control volume must equal the rate of change of mass within the control volume. Momentum equations in the x- and y-directions: the momentum equations, also known as the Navier-Stokes equations, describe the conservation of momentum for fluid flow in a particular direction. They represent the balance between the rate of change of momentum, the pressure forces, and the viscous forces acting on the fluid [32]. Energy equation: the energy equation, also known as the heat transfer equation or the first law of thermodynamics, is a fundamental equation in thermodynamics and fluid dynamics that describes the conservation of energy in a fluid flow [32].
Turbulent kinetic energy: turbulent kinetic energy refers to the energy associated with the chaotic and irregular motion of fluid particles in a turbulent flow; it represents the fluctuating component of kinetic energy in a turbulent flow field [32]. Rate of energy dissipation: the rate of energy dissipation, also known as the dissipation rate, refers to the rate at which mechanical energy is converted into internal energy or heat within the fluid flow [32]. The closure of the turbulence model involves the stress production term, the buoyancy term, the turbulent Prandtl number, and the eddy viscosity. The thermophysical properties of the hybrid nanofluid were predicted using the following models [33][34][35][36]. Density of the hybrid nanofluid: the density of a hybrid nanofluid, the mass per unit volume of the mixture of a base fluid and dispersed nanoparticles, was calculated from the volume fractions of the components [33]. Heat capacitance of the hybrid nanofluid: the heat capacitance is calculated from the mass and specific heat capacity of the nanofluid mixture, taking into account the volume fractions and properties of the base fluid and nanoparticles [34]. Thermal conductivity of the hybrid nanofluid: the thermal conductivity of a hybrid nanofluid is influenced by the properties of the base fluid and the dispersed nanoparticles, as well as their volume fractions. It is calculated from an effective thermal conductivity model (with M = 3) that accounts for the contributions of both the base fluid and the nanoparticles; the thermal conductivity of the hybrid nanofluid plays a significant role in its heat transfer characteristics and performance [36]. Viscosity of the hybrid nanofluid: the viscosity of a hybrid nanofluid is affected by several factors, including the viscosity of the base fluid, the volume fraction and size of the dispersed nanoparticles, and their interactions with the base fluid [36]. The Nusselt number: the Nusselt number is a dimensionless parameter used in fluid mechanics and heat transfer to quantify the convective heat transfer from a surface. It is defined as the ratio of convective to conductive heat transfer across a boundary, expressed through the heat transfer coefficient and the thermal conductivity; an average Nusselt number is obtained by integrating the local value along the heated wall [17]. Boundary conditions: the round bottom, up to the middle of the flask, is held at the constant high temperature Th; the upper, left, and right walls from the middle of the flask to the top flask neck are held at the low temperature Tc; the agitator is adiabatic, with v = 250 rpm.

Code validation and grid testing

When performing numerical simulations, the accuracy and reliability of the results are influenced by the discretization of the computational domain into smaller elements or nodes. Increasing the number of nodes allows a finer representation of the flow and temperature fields, capturing more detailed features and resolving smaller-scale phenomena. Based on the numerical simulations of the Gr-CNT-water mixture (φ = 0.02, Ra = 10^5) presented in Table 1 and the grid shown in Figure 2, it can be concluded that the chosen node number of 17,139 is sufficient for accurately capturing the simulated phenomena within acceptable computational limitations. Figure 3 presents a comparison of the mean Nusselt number between the results of the present study and those of Guiet et al. [37]. It is clear that the results of our code are in good agreement with those of Guiet et al.
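None of the display equations in the formulation above survived text extraction. As a hedged reconstruction, not the paper's originals, the standard steady, two-dimensional, incompressible forms consistent with the descriptions, together with the commonly used single-phase mixture correlations (the exact correlations of refs. 33-36 may differ; CNTs, for instance, are often treated with a Xue-type conductivity model rather than a spherical-particle one), can be written with φ = φ_Gr + φ_CNT as:

```latex
% Hedged reconstruction of standard forms only, not the paper's equations.
\begin{align}
  &\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0
    &&\text{(continuity)}\\
  &u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}
    =-\frac{1}{\rho_{hnf}}\frac{\partial p}{\partial y}
    +\frac{\partial}{\partial x_j}\!\left[(\nu+\nu_t)
      \frac{\partial v}{\partial x_j}\right]
    +g\beta_{hnf}\,(T-T_c)
    &&\text{(y-momentum, Boussinesq)}\\
  &u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}
    =\frac{\partial}{\partial x_j}\!\left[\left(\alpha_{hnf}
      +\frac{\nu_t}{\sigma_t}\right)\frac{\partial T}{\partial x_j}\right]
    &&\text{(energy)}\\
  &\rho_{hnf}=(1-\varphi)\rho_f+\varphi_{Gr}\rho_{Gr}+\varphi_{CNT}\rho_{CNT}
    &&\text{(density)}\\
  &(\rho c_p)_{hnf}=(1-\varphi)(\rho c_p)_f
    +\varphi_{Gr}(\rho c_p)_{Gr}+\varphi_{CNT}(\rho c_p)_{CNT}
    &&\text{(heat capacitance)}\\
  &\mu_{hnf}=\mu_f\,(1-\varphi)^{-2.5}
    &&\text{(Brinkman viscosity)}\\
  &Nu=\frac{hL}{k_f},\qquad
    \overline{Nu}=\frac{1}{S}\int_{S} Nu\,\mathrm{d}s
    &&\text{(local and average Nusselt number)}
\end{align}
```

Here ν_t is the eddy viscosity, β the thermal expansion coefficient, α the thermal diffusivity, and σ_t the turbulent Prandtl number; the x-momentum equation takes the same form as the y-momentum equation without the buoyancy term.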
Results and discussion

The main purpose of this study was to determine the effect of different parameters, such as the volume fraction of mixed nanoparticles (0 ≤ φ ≤ 0.06), the rotational speed of the agitator (250, 275, 300, and 350 rpm), and the Rayleigh number (10^4 ≤ Ra ≤ 10^6), on the flow behavior and convective heat transfer of the resulting CNT-Gr-water hybrid nanofluid in a round bottom flask. The thermophysical properties of water and the tested nanoparticles are listed in Table 2. Figure 4 shows the isothermal contours; as expected, in the absence of nanoparticles, the heat distribution is much lower than in their presence. Figure 4 also shows that the isotherms conform to the shape of the round bottom flask. This outcome is unsurprising, since weak circulation structures are formed under low Rayleigh number conditions, resulting in minimal heat transfer. However, at high Rayleigh numbers (Ra = 10^6), the strength of the circulation structure within the cavity intensifies, and heat transfer within the cavity is predominantly controlled by it. Moreover, the isothermal lines are predominantly clustered near the round bottom portion of the hot wall and the left and right portions of the cold circular walls, as expected from the Rayleigh-Bénard configuration. Thus, substantial temperature gradients arise in the bottom portion of the hot wall. In addition, we can clearly see from Figure 4 that the intensity of the buoyancy effect rises as the volume fraction of the hybrid nanofluid increases, which is confirmed by the growth of the circulation pattern inside the cavity. The streamline contours are depicted in Figure 5. We clearly observe from Figure 5 that the streamlines differ from one case to another according to the proportion of nanoparticles and the value of the Rayleigh number. Generally, we notice an evident difference in the angle and shape of the vortex rotation system.

The analysis of Figure 5 reveals important characteristics of the streamlines within the studied system. It is observed that the streamlines generally exhibit symmetrical behavior, indicating a balanced flow pattern. One significant finding is that increasing the Rayleigh number leads to higher values of the stream function. The Rayleigh number serves as a measure of the balance between buoyancy and viscous forces, with higher values indicating stronger natural convection. This stronger convection enhances the flow patterns and results in higher values of the stream function. At lower Rayleigh numbers (specifically Ra = 10^4), the stream function reaches its lowest value, indicating a relatively weaker flow. In this regime, secondary vortexes are formed, which are additional flow structures created within the system. These vortexes contribute to the complexity of the flow pattern. In contrast, at higher Rayleigh numbers (specifically Ra = 10^6), the vortexes expand both horizontally and vertically, increasing in size and intensity. This expansion suggests a more vigorous flow characterized by stronger fluid mixing and heat transfer. Additionally, the presence of nanoparticles and their volume fraction influences the thickness of the thermal boundary layer adjacent to the heated wall. The thermal boundary layer represents the region where heat transfer occurs between the heated surface and the fluid. The nanoparticles in the hybrid nanofluid contribute to the vorticity and flow behavior in these regions, leading to the emergence of vortexes.
The vorticity, or rotational motion, induced by the hybrid nanofluid particles plays a key role in the formation and intensification of these vortexes. The interaction between the fluid and nanoparticles in these regions contributes to the observed streamlines and flow patterns. Figure 6 provides insight into the distribution of isotherms under different rotational speeds of the mixer, both in the absence and in the presence of nanoparticles. Figure 6 reveals variations in heat distribution depending on the rotation speed, and a significant difference in temperature distribution is observed upon the addition of nanoparticles. When examining the isotherms, we can observe distinct changes in the heat distribution patterns as the rotational speed of the mixer is altered. The presence of nanoparticles further amplifies these changes: the addition of nanoparticles introduces new heat transfer pathways, altering the heat distribution throughout the system. Moving on to Figure 7, we explore the impact of the agitator's rotational speed on the velocity streamlines in the absence and presence of nanoparticles. Without nanoparticles, pure water exhibits tangential flow along the agitator toward the wall of the round bottom flask. Due to the influence of reflection and gravity, the liquid follows a tangential path along the normal direction of the agitator, resulting in a circulating water flow from the bottom to the top. Increasing the agitator speed induces the formation of three non-symmetrical vortexes, and the rotation velocity significantly affects the maximum value of the stream function. With the introduction of nanoparticles, the flow exhibits four symmetrical vortexes. Moreover, it is evident that the presence of nanoparticles slightly increases the maximum values of the stream function, indicating enhanced flow velocity. Based on these findings, a rotational speed of 250 rpm was selected, as it maximized the stream function value in the presence of nanoparticles (φ = 0.04). The observed changes in flow patterns and stream function values highlight the influence of rotational speed and the presence of nanoparticles on the fluid dynamics. These insights are essential for optimizing mixing efficiency and heat transfer in systems employing mixers and nanofluids. Figure 8 provides valuable insight into the variation of the Nusselt number (Nu) with the volume fraction (φ) of nanoparticles for different Rayleigh numbers (Ra). Previous studies have established that the addition of nanoparticles to the base fluid has two opposing effects on intracavity convective heat transfer. The first effect is an increase in Nu, which is attributed to the enhanced thermal conductivity of the mixture resulting from the presence of nanoparticles. These nanoparticles possess higher thermal conductivity than the base fluid, facilitating better heat transfer within the system; this increase in thermal conductivity leads to improved convective heat transfer and a higher Nu value. On the other hand, the second effect arises from the increase in viscosity caused by the addition of nanoparticles. The viscosity of the fluid is influenced by the presence and concentration of nanoparticles, and this higher viscosity tends to suppress convective motion within the system. As a result, the convective heat transfer is reduced, leading to a lower Nu value.
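The competition between these two effects can be made concrete with two classical correlations: the Maxwell conductivity model (strictly valid for dilute spherical particles, so only indicative for Gr/CNT) and the Brinkman viscosity model. Both are assumptions here, not the property models used in the paper, and the particle conductivity is only an order-of-magnitude placeholder:

```python
import numpy as np

def maxwell_k_ratio(phi: float, k_p: float, k_f: float) -> float:
    """Maxwell model: effective/base conductivity of a dilute suspension."""
    num = k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)
    den = k_p + 2.0 * k_f - phi * (k_p - k_f)
    return num / den

def brinkman_mu_ratio(phi: float) -> float:
    """Brinkman model: effective/base viscosity of a dilute suspension."""
    return 1.0 / (1.0 - phi) ** 2.5

k_f, k_p = 0.613, 3000.0   # water vs a carbon nanostructure (rough scale)
for phi in (0.0, 0.02, 0.04, 0.06):
    print(f"phi={phi:.2f}  k_eff/k_f={maxwell_k_ratio(phi, k_p, k_f):.3f}"
          f"  mu_eff/mu_f={brinkman_mu_ratio(phi):.3f}")
```

At φ = 0.06 both ratios land near 1.2: the conductivity gain and the viscosity penalty are of comparable size, which is exactly why the net effect on Nu hinges on the flow strength and the property models chosen.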
The dominance of either effect depends on various factors, including the type of nanoparticles used, the strength of the convective flow (Ra), and the specific model employed to estimate the viscosity and thermal conductivity of the mixture. The findings presented in Figure 8 of this study demonstrate important trends. It is observed that, for a given Ra value, the introduction of nanoparticles leads to a monotonic increase in Nu. This suggests that the increase in thermal conductivity outweighs the viscosity-induced suppression of convective motion, resulting in enhanced heat transfer. Furthermore, it is demonstrated that Nu increases monotonically with increasing Ra for a given φ value. This observation aligns with the general understanding that elevating the Rayleigh number intensifies the convective flow within the system. The increased convective strength promotes more effective heat transfer, leading to higher Nu values. Figure 9 presents the relationship between the mean Nusselt number (Nu) and the Rayleigh number (Ra) for different types of nanofluids (NFs), specifically graphene (Gr), carbon nanotubes (CNT), and a hybrid nanofluid consisting of graphene and carbon nanotubes (Gr-CNT). The graph provides insight into the impact of the Rayleigh number on the Nusselt number and the total heat transfer within the cavity under the given conditions. Furthermore, the type of nanofluid employed influences the average Nusselt number and the heat transfer characteristics within the cavity. Different types of nanoparticles, such as graphene and carbon nanotubes, have varying thermal properties and interactions with the base fluid. These variations in thermal conductivity and fluid-nanoparticle interactions impact the overall heat transfer performance of the nanofluid. In particular, it is observed that the hybrid nanofluid consisting of graphene and carbon nanotubes (Gr-CNT) in water exhibits the highest mean Nusselt number among the studied nanofluids at a Rayleigh number of 10^6. This finding suggests that the combination of graphene and carbon nanotubes provides synergistic effects, leading to enhanced heat transfer performance within the cavity. The information provided in Figures 8 and 9 enhances our understanding of the relationship between the Rayleigh number, the type of nanofluid, and the mean Nusselt number. These findings contribute to the optimization of nanofluid-based heat transfer systems, allowing for the selection of appropriate nanofluids and operating conditions to achieve desired heat transfer rates.
Conclusion

In this study, the impact of the Rayleigh number, the volume fraction (φ) of nanoparticles, the rotational speed, and the type of nanofluid on the flow streamlines, isotherm distribution, and mean Nusselt number was investigated. The main findings are summarized as follows. The heat transfer performance is enhanced by 13% when increasing the Rayleigh number to Ra = 10^5 and by 40% when increasing the volume fraction to φ = 0.06; these findings suggest that higher volume fractions, combined with higher Rayleigh numbers, improve the efficiency of heat transfer in the system. The study revealed that at a rotation speed of 275 rpm, distinct flow circulation cells were observed and the system exhibited the highest heat transfer performance. The influence of nanoparticle type and volume fraction on the Nusselt number is more significant than the impact of a high Rayleigh number. The Gr-CNT hybrid nanofluid offers a significantly superior enhancement in thermal performance compared to graphene nanoparticles or carbon nanotube nanoparticles alone, primarily owing to their exceptional thermal conductivity and low densities.

Figure 1 illustrates the round bottom flask cavity considered in the present study. As shown, the left and right walls have straight and circular surfaces, while the top wall is straight. The cavity has a height H = 115 mm, diameters D0 = 64 mm and d1 = 32 mm, a flask neck d2 = 32 mm, and an agitator with d3 = 3 mm and L = 10 mm.

Figure 1. Geometry of the studied configuration.
Figure 2. Schematic of the numerical grid.
Figure 3. Comparison of the average Nusselt number between the present work and that of Guiet et al.
Figure 4. Isotherms for different Ra and volume fraction.
Figure 5. Velocity streamlines for different Ra and volume fraction.
Figure 6. Isotherms for different speeds of rotation of the agitator.
Figure 7. Velocity streamlines for different speeds of rotation of the agitator, φ = 0.
Figure 8. Variation of the average Nusselt number Nu with HNP volume fraction φ at different Rayleigh numbers Ra.
Figure 9. Variation of the Nusselt number with volume fraction of Gr, CNT, and Gr-CNT NPs for Ra = 10^6.
Table 1. Values of the stream function for different node numbers.
Table 2. Thermophysical properties of the fluid and nanoparticles.
6,141.8
2023-09-01T00:00:00.000
[ "Materials Science", "Engineering", "Chemistry" ]
Thermochemical compatibilization of reclaimed tire rubber/poly(ethylene-co-vinyl acetate) blend using electron beam irradiation and an amine-based chemical

Waste tire rubber is commonly recycled by blending with other polymers. However, the mechanical properties of these blends are poor owing to the lack of adhesion between the matrix and the waste tire rubber. In this research, the use of electron beam irradiation and (3-aminopropyl)triethoxysilane (APTES) to enhance the performance of a 50 wt% reclaimed tire rubber (RTR) blend with 50 wt% poly(ethylene-co-vinyl acetate) (EVA) was investigated. Preparation of the RTR/EVA blends was carried out in an internal mixer. The blends were then exposed to electron beam (EB) irradiation at doses ranging from 50 to 200 kGy. The APTES loading was varied between 1 and 10 wt%. The processing, morphological, mechanical, and calorimetric properties of the blends were investigated. The stabilization torque and total mixing energy were higher in compatibilized blends. The mechanical properties of the RTR/EVA blends were improved owing to the efficiency of APTES in further reclaiming the RTR and compatibilizing the blends. APTES improved the dispersion of embedded smaller RTR particles in the EVA matrix and the crosslinking efficiency of the blends. Calorimetric studies showed increased crystallinity in compatibilized blends, which corresponds to the improved mechanical properties. However, the ductility of the blend decreased because of the increased interaction between EVA and APTES. The presence of APTES increased the efficiency of electron beam irradiation induced crosslinking, as shown through gel content analysis and the Charlesby-Pinner equation.

Introduction

Tires can be counted among the toughest polymeric materials to recycle. Tires are designed with a complex structure to provide good mechanical properties that ensure safety and durability during service life [1]. However, this complex structure and these excellent mechanical properties render tires non-degradable and painstaking to recycle after their life on the road [2]. The main constituents of a tire are rubber and carbon black fillers, which are common materials used in polymeric composites [3,4]. At disposal, a tire still contains 67% functional, valuable material, which further strengthens the case for recycling waste tires. Bulky waste tires are shredded, pulverized, or granulated to obtain waste rubber powder, which is then incorporated into polymeric materials as a means of recycling [5]. This powder is commonly known as ground tire rubber (GTR). GTR has been widely incorporated into thermoplastic matrices but has not found much success, owing to a distinct loss of matrix ductility and toughness [6]. Thus, the reclaiming method was developed and applied to crosslinked rubber to partially devulcanize it and increase its plasticity and processability [5]. Reclaimed tire rubber (RTR) happens to be the second most commonly used type of waste tire rubber in the polymer industry. Weak interfacial adhesion between the polymer and GTR is reported as one of the factors contributing to the loss of ductility and toughness in GTR/thermoplastic compounds [7][8][9]. Very few studies have reported successful methods for improving the interfacial properties of GTR/thermoplastic compounds [10][11][12][13][14]. RTR is likely to exhibit improved interfacial properties compared to GTR, owing to the partially devulcanized RTR surface, which enables enhanced interaction between the thermoplastic and the RTR.
However, it is still important to further enhance the interfacial properties of the RTR to produce blends with superior performance. In this study, (3-aminopropyl)triethoxysilane (APTES) was employed as a compatibilizer for RTR/EVA blends via a reactive compatibilization method. High energy ionizing radiation, such as electron beam and gamma irradiation, has been actively used to enhance the properties of polymeric materials, owing to its ability to crosslink, degrade, graft, or cure polymeric materials upon exposure [15]. Property enhancement and improved compatibilization in blends of recycled materials such as GTR subjected to high energy radiation have been reported previously. Studies on gamma irradiation of GTR/polyethylene [16] and electron beam irradiation of GTR/EVA [17] showed improved interfacial adhesion between GTR particles and the enveloping thermoplastic matrix due to the ionizing radiation. This study explored the feasibility of using electron beam irradiation to enhance the properties of APTES compatibilized RTR/EVA blends.

Materials

Poly(ethylene-co-vinyl acetate) (EVA), grade EVA N8045, was supplied by TPI POLENE Public Limited Company, Thailand. EVA N8045 contains 18% vinyl acetate, with a melt flow index (MFI) of 2.3 g/10 min and a density of 0.947 g/cm3. Reclaimed tire rubber (RTR), grade RECLAIM Rubberplas C, was supplied by Rubplast Sdn. Bhd., Malaysia. The RTR was derived from waste heavy duty tires and contains 48% rubber hydrocarbon, 25% carbon black fillers, 15% acetone extract, and 5% ash, with a density of 1.3 g/cm3. (3-Aminopropyl)triethoxysilane (APTES), supplied by Sigma Aldrich, was used as the compatibilizer in the RTR/EVA blends. APTES has a specific gravity of 0.95 g/cm3 and a boiling point of 217 °C.

Melt blending of RTR/EVA

An internal mixer, model Brabender Plasticorder PL2000-6, supplied by Brabender GmbH & Co., was used to melt blend the RTR and EVA. A schematic of the internal mixer is shown in Fig. 1. The blending temperature and rotor speed were fixed at 120 °C and 50 rpm, respectively. The RTR to EVA blending ratio was fixed at 50:50 for all the blends. EVA was discharged first into the mixing chamber, followed by the addition of RTR and APTES after 2 min. All the prepared blends were subjected to a total mixing time of 10 min, and the torque reading during mixing was recorded. The melt mixed blends were collected and compression moulded. Table 1 shows the formulation of the prepared blends. Blends collected after melt mixing were compression moulded to form appropriate test pieces. The blends were separately moulded into sheets with thicknesses of 1, 2, and 5 mm at 130 °C. Each moulding cycle included 3 min of pressure-less preheating, venting for 20 s, and then compression under a pressure of 14.7 MPa for 3 min using an LP-S-50 Scientific Hot and Cold Press supplied by LabTech Engineering Company Ltd. The hot-pressed sheets were directly cooled at 20 °C for 2 min in a cold press equipped with a chiller.

Electron beam irradiation

The compression moulded sheets were irradiated using a 3 MeV electron beam accelerator (model NHV-EPS-3000) at doses ranging from 0 to 200 kGy. The acceleration energy, beam current, and dose rate were 2 MeV, 5 mA, and 50 kGy per pass, respectively.

Physical and mechanical characterization

The gel content of the samples was determined using the ASTM D2765 method.
Each irradiated sheet was cut into small pieces of approximately 0.5 g and inserted into a stainless steel wire mesh pouch of 120 mesh size. The samples were soaked in toluene and extracted by boiling the toluene for 24 h in a Soxhlet apparatus. After extraction, the samples were dried in an oven at 70 °C until a constant weight was obtained. The gel content was calculated as per Eq. 1:

Gel content (%) = (W1 / W0) × 100    (1)

where W0 and W1 are the dried weights of the sample before and after extraction, respectively. The ratio of chain scission to crosslinking (p0/q0) in the blend upon exposure to irradiation was obtained quantitatively from plots of S + S^(1/2) versus 1/D based on the Charlesby-Pinner equation, Eq. 2:

S + S^(1/2) = p0/q0 + 10/(q0 u1 D)    (2)

where S is the sol fraction, u1 the number-averaged degree of polymerization, D the radiation dose (in kGy), and p0 and q0 the fractions of ruptured and crosslinked main-chain units per unit dose, respectively. RTR contained a substantial amount of gel before irradiation; hence, the sol fraction used in this study was derived from the absolute yield of gel fraction upon irradiation (gel content after irradiation minus gel content before irradiation). Gel permeation chromatography (GPC) characterization was performed on a pair of RTR samples to determine the molar mass distribution of the soluble rubber fraction. First, low molecular weight moieties/additives in the RTR samples were removed through 24 h of boiling acetone extraction in a Soxhlet apparatus. The RTR samples were then washed with toluene and extracted again for 24 h in boiling toluene to further remove the soluble fraction of the RTR. Filter paper was used to separate the non-dissolving RTR components from the toluene. The soluble rubber fraction of the RTR was obtained by evaporating the toluene in an oven at 50 °C for 6 h. GPC evaluation was carried out on the dried soluble rubber by dissolving it in tetrahydrofuran (THF). The solute was passed through a column packed with porous material that separates polymer chains based on molecular size. The GPC test was done against polystyrene standards. Tensile tests were carried out in accordance with the ASTM D412 standard using dumbbell-shaped specimens with 25 mm gauge length, 6 mm width, and 1 mm thickness. A Wallace die cutter was used to punch the tensile test specimens from the compression moulded sheets. The test was done at room temperature using a Toyoseiki universal testing machine, supplied by Toyo Seiki Seisaku-sho, Ltd., Japan, equipped with a 10 kN load cell. All samples were pulled under tension at a crosshead speed of 50 mm/min. Tensile strength, modulus at 100% elongation, and elongation at break were recorded. A minimum of 7 specimens was used for each set of blends, and the average result was taken as the resultant value. Tear testing was conducted as per the ASTM D624 standard. Test specimens of 102 mm length, 10 mm width, and 1.5 mm thickness were cut manually with a sharp blade from the compression moulded sheets according to the ASTM D624 Type C (right angle) test piece specification. The tear test was carried out at room temperature with the Toyoseiki universal testing machine, equipped with a 10 kN load cell. All samples were tested at a crosshead speed of 50 mm/min. Six specimens were used for each set of blends, and the average tear strength was taken.
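As an illustration of how Eq. 2 is used in practice, the sketch below fits the Charlesby-Pinner plot with NumPy. The dose and sol fraction values are hypothetical, chosen only to show that the intercept of S + S^(1/2) versus 1/D estimates p0/q0 (the slope bundles q0 and u1); the paper's own values appear in Table 4.

```python
import numpy as np

# Hypothetical (dose, sol fraction) pairs for a Charlesby-Pinner plot
doses = np.array([50.0, 100.0, 150.0, 200.0])   # D in kGy
sol = np.array([0.60, 0.42, 0.33, 0.28])        # sol fraction S

y = sol + np.sqrt(sol)          # S + S^(1/2)
x = 1.0 / doses                 # 1/D
slope, intercept = np.polyfit(x, y, 1)
print(f"p0/q0 ≈ {intercept:.2f}")   # chain scission / crosslinking ratio
```

An intercept below 1 signals that crosslinking dominates chain scission; values above 1, as reported for the uncompatibilized 50RTR blend, signal the opposite.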
Hardness testing was conducted according to the ASTM D2240 (Type Shore A) standard, with samples produced directly from compression moulding with dimensions of 100 mm length, 100 mm width, and 5 mm thickness. The testing was carried out at room temperature with a Durometer hardness blunt indenter, model Zwick 7206, supplied by Zwick Roell GmbH & Co. A minimum of 9 hardness readings was recorded for each sample, and the average was taken as the resultant value.

Morphological and thermal characterization

The tensile fractured surfaces were observed with a field emission scanning electron microscope (FESEM), model FEI Quanta 400, supplied by Thermo Fisher Scientific, to understand the nature of the tensile failure. The fractured surface samples were first sputter coated with gold to avoid electrostatic charging and poor image resolution, and then stuck to the sample holder using carbon tape prior to imaging. The morphology of the blends was observed using a transmission electron microscope (TEM), model Jeol JEM-2100, supplied by JEOL Ltd., Japan. Ultrathin sections with an approximate thickness of 80 nm were cut from the compression moulded sample sheets with an ultramicrotome, model LEICA Ultracut UCT, supplied by Leica Microsystems Ltd., at a temperature of -100 °C achieved using liquid nitrogen. The sections were placed on a carbon coated copper grid and observed under the TEM at a voltage of 200 kV. The crystallization and melting temperatures, heat of fusion, and degree of crystallinity of the blends were studied using a differential scanning calorimeter (DSC), model Mettler Toledo DSC 1/32, equipped with the STARe System supplied by Mettler Toledo. About 5 to 10 mg of sample was analysed under a continuous 50 ml/min flow of nitrogen gas. First, the thermal history of the blends was erased by heating the samples from room temperature to 140 °C at 50 °C/min, followed by cooling to 0 °C at a cooling rate of 10 °C/min. Subsequently, a second heating was performed at a rate of 10 °C/min up to 140 °C, followed by cooling at a rate of 10 °C/min down to 0 °C, to obtain the melting temperature (Tm), crystallization temperature (Tc), and heat of fusion ΔHf. The first heating scan was omitted from the data analysis. The results were analysed using the STARe System software. The percentage crystallinity, Xc, of EVA in the blends was calculated as per Eq. 3:

Xc (%) = ΔHf / (wEVA × ΔHf°) × 100    (3)

where ΔHf is the heat of fusion of the sample, ΔHf° is the heat of fusion of 100% crystalline polyethylene, which is 295 J/g, and wEVA is the weight fraction of EVA in the blends. Thermogravimetric analysis (TGA) was used to determine the thermal stability of the samples using a computerized thermogravimetric analyzer (Mettler Toledo TGA/DSC 1, equipped with the STARe System supplied by Mettler Toledo). The weight loss versus temperature thermogram was obtained by heating the samples from room temperature to 600 °C. All analyses were carried out using 5 to 10 mg of sample in a nitrogen atmosphere (flow rate 50 ml/min) at a heating rate of 10 °C/min. The results were analyzed using the STARe System software. The normalized weight loss versus temperature curve was smoothed using a least-squares averaging technique before analysis.

Processing characteristics

The torque-time patterns of the APTES containing blends at different compatibilizer loadings are shown in Fig. 2. The peaks of the torque-time pattern at the twentieth second (peak A) and the second minute (peak B) represent the introduction of EVA and RTR into the mixer, respectively.
The torque-time pattern showed a stabilization plateau zone from 4 min until the end of mixing. Additionally, the absence of a rising torque indicates that the blends achieved homogeneous mixing. All APTES containing blends showed a similar torque-time pattern at the EVA addition point (peak A) and at the complete melting of EVA. This is caused by the same loading of EVA being used in all formulated blends. However, the RTR introduction torque (peak B) decreased with increasing APTES loading. APTES is a liquid, providing a lubricating effect and thereby reducing the torque reading as the APTES loading increases. The recorded torque of APTES containing blends at the end of mixing was higher than that of the control blend, 50RTR. This could be due to increased interaction between EVA and RTR through the compatibilization effect offered by APTES. Figure 3 shows the influence of compatibilizer loading on the torque readings at EVA loading, EVA melting, RTR loading, and stabilization of the blends. The torque readings at the introduction and melting of EVA in the APTES containing blends remained similar to those of the 50RTR blend, again because of the similar amount of EVA in all the blends. Conversely, the RTR loading and stabilization torque readings are clearly affected by the presence of APTES. The RTR loading torque of the blends decreased with increasing APTES loading. Apparently, this is caused by the reduction in viscosity resulting from the capability of APTES to further reclaim the RTR; more details on this mechanism are discussed in the following sections. An increase in stabilization torque was noted with increasing APTES loading, due to the enhanced interaction between RTR and EVA arising from the efficient compatibilization by APTES, which increases the viscosity of the compatibilized blends. In comparison to the control blend, 50RTR, all APTES compatibilized blends recorded a higher total mixing energy, as shown in Table 2. The blends with 3 and 5 wt% APTES loading recorded 8.23 kNm total mixing energy, which corresponds to a 6.6% increase in total mixing energy compared to 50RTR. This is expected, given the enhanced interaction between EVA and RTR resulting from the efficiency of APTES as a compatibilizer [18].
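The "total mixing energy" in Table 2 is reported by the mixer software as an integral over the recorded torque trace. A rough way to recover a comparable figure from a logged trace is to integrate torque times angular speed over the mixing time. The sketch below is an assumption-laden illustration only (constant 50 rpm rotor speed, a hypothetical plateau torque, output in joules); the Brabender's own kNm figure is a torque-time integral, so units must be reconciled against the instrument manual:

```python
import numpy as np

def total_mixing_energy(torque_nm: np.ndarray, t_s: np.ndarray,
                        rpm: float) -> float:
    """Approximate mixing energy E = integral of torque(t) * omega dt,
    assuming a constant rotor speed, via trapezoidal integration."""
    omega = 2.0 * np.pi * rpm / 60.0            # rad/s
    return np.trapz(torque_nm * omega, t_s)     # joules

# Hypothetical 10-min torque trace sampled once per second at 50 rpm
t = np.arange(0, 600.0)
torque = np.full_like(t, 13.0)                  # steady ~13 N*m plateau
print(total_mixing_energy(torque, t, rpm=50.0) / 1e3, "kJ")
```

Whatever the unit convention, the comparison the text draws is relative: a higher plateau torque at fixed speed and time directly implies a higher integrated mixing energy.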
Gel content analysis

Figure 4 shows the effect of the APTES loadings and irradiation doses on the gel content values of 50RTR blends. Interestingly, the gel content before irradiation (0 kGy) of all APTES containing blends is lower than that of the control 50RTR blend. The gel content of the 50RTR blend at 0 kGy arises from the partially devulcanized structure of the RTR [19]. A systematic reduction of the gel content before irradiation with increasing APTES loading in the blends was observed, as a result of the capability of APTES to further devulcanize or reclaim the RTR. Amines have been used as reclaiming agents since the introduction of rubber reclaiming by the pan and digester processes [20]. Amines are reported to function as reclaiming agents in rubber, being capable of cleaving the crosslinks in vulcanized rubber via a nucleophilic reaction; this is made possible by the amine lone pair of electrons, which gives amines a strongly nucleophilic nature. Amines have been shown to successfully act as reclaiming agents for EPDM rubber [21]. APTES can likewise function as a reclaiming agent for tire rubbers, as it contains an aliphatic primary amine group. A mixture of 3 wt% APTES with RTR (RTR/3APTES) was prepared in an internal mixer to evaluate the feasibility of APTES acting as a reclaiming agent for RTR. The molar mass distribution of the soluble fraction of both RTR and RTR/3APTES was obtained through GPC evaluation, as shown in Table 3. The weight average molecular weight (Mw), number average molecular weight (Mn), and polydispersity index (PDI) of RTR decreased by 33%, 11%, and 24.5%, respectively, with the addition of APTES. These evaluations validate that APTES plays an important role in the scission of crosslink networks in RTR. The 50RTR blend showed a delay in crosslinking yield up to a 50 kGy irradiation dose, owing to the radical scavenging and stabilizing additives readily present within RTR [19]. The crosslinking yield efficiency of APTES containing blends increases with increasing irradiation dose. The use of 10 wt% APTES completely resolved the delayed crosslinking yield observed in 50RTR. It is hypothesized that the APTES interacts with the additives in the tire and with residual radical scavenging reclaiming agents within the RTR, which enables the crosslinking process to take place in the blends. Compared to the equivalent control, the 50RTR blend, the 5 wt% APTES containing blends showed improvements in gel content of 39% and 16% at 100 kGy and 200 kGy irradiation doses, respectively. Table 4 shows the ratio of chain scission to crosslinking, p0/q0, for the blends. The control blend, 50RTR, recorded a p0/q0 value of 1.27, suggesting the prominence of chain scission over crosslinking in this blend. Macroradicals formed during irradiation are stabilized or scavenged by additives such as stabilizers, fillers, antioxidants, and residual reclaiming agent freely present in RTR. This reduces the likelihood of macroradicals combining to form crosslinks and increases the possibility of polymer chain scission [22]. The p0/q0 value decreases with the addition of APTES, suggesting improved crosslinking efficiency. At 5 wt% APTES loading, the p0/q0 value was reduced to half that of the control (50RTR). This is due to the combined effect of efficient reclaiming or devulcanization of RTR by APTES and the interaction of APTES with the radical scavenging additives readily present in the RTR. These effects increased the efficiency of radiation-induced crosslinking.

Mechanical properties

The effects of irradiation doses and APTES loading on the tensile strength of APTES containing 50RTR blends are presented in Fig. 6. In comparison to the control blend, an increase of 15.26% in the tensile strength of 10 wt% APTES containing blends before irradiation (0 kGy) was noted. The tensile strength increases before irradiation in APTES containing blends, even though the gel content decreases, owing to effective interphase formation between the RTR and EVA components in the presence of APTES. The materials can withstand a higher failure stress thanks to the good interphase, which promotes effective stress transfer [23] between the RTR and EVA components. Moreover, the use of APTES facilitated the dispersion of smaller rubber particles in the blends, as evidenced by the SEM micrographs in Fig. 14, which suggests an increase in the effective surface area of interaction between the EVA and RTR components. A more elaborate discussion of these changes is given in the morphological analysis section. The effect of electron beam irradiation on the tensile properties of APTES containing blends can be inferred from Table 5 and Fig. 6.
The tensile strength of APTES containing blends (at 1, 3, 5, and 10 wt% loadings) showed an upward trend with increasing irradiation dose. Nevertheless, Table 5 shows that the effectiveness of the irradiation induced enhancement in tensile strength of APTES containing blends decreases from 1 wt% up to 5 wt% APTES loading and stabilizes thereafter. The efficiency of APTES as a reclaiming agent for RTR causes a considerable increase in tensile strength before irradiation. In addition, an improvement in the gel content yield upon irradiation with increasing APTES loading was noted. These results suggest that higher loadings of APTES in the blends promote the formation of more irradiation induced crosslinks. Figure 7 shows the effect of irradiation dose and APTES loading on the elongation at break (EB) of the blends. A drop of 42% in the EB of 10 wt% APTES containing blends was noted in comparison to the 50RTR blends before irradiation. Values of EB before irradiation dropped systematically with increasing APTES loading. This observation might be caused by the improved adhesion between RTR and EVA, resulting in increased restriction of matrix flow. However, a prominent decrease in EB was noticed with the addition of APTES. From a previous study [19], it was apparent that the EVA matrix mainly contributes to the EB of RTR/EVA blends. This suggests that the EVA matrix properties could have been altered by the interaction between the amine group of APTES and the vinyl acetate group of EVA. To confirm this assumption, a mixture of 3 wt% APTES and EVA was prepared and tensile tested. The EVA/3APTES mixture showed a prominent 39% decrease in EB value in comparison to pure EVA. This indicates changes in the microstructure of EVA due to the chemical interaction between EVA and APTES, resulting in a deterioration of EB. In general, EB shows a downward trend with increasing irradiation dose [24] in APTES containing blends. Table 5 shows that at lower APTES loadings (1 and 3 wt%), no substantial change in the percentage change of EB was detected with increasing irradiation dose, whereas at 5 and 10 wt% APTES loading the decrease in the percentage change of EB was similar to that of 50RTR. A higher number of irradiation induced crosslinks is formed in blends containing higher loadings of APTES, as suggested by the gel content analysis. At higher APTES loadings, the prominent difference between the gel content before and after irradiation leads to a bigger difference in the EB of the blends. These observations are supported by the Charlesby-Pinner analysis, whereby crosslinking is more dominant than chain scission in APTES containing blends in comparison to the 50RTR blend. Figure 8 exhibits the effect of APTES loading and irradiation on the modulus at 100% elongation (M100) of 50RTR blends. A 30% increase in M100 was observed at 10 wt% APTES loading compared to the control blend. As mentioned earlier, APTES functions as a reclaiming agent, breaking down the three-dimensional network in RTR, which should have reduced the M100 value of the blend, as the rubber becomes softer. Nevertheless, the opposite trend was observed, owing to the dominance of the microstructural modification of EVA through its interaction with APTES. Increasing the irradiation dose increases the M100 values of APTES containing blends, suggesting increased stiffness caused by irradiation induced crosslinking [24,25]. The influences of APTES loading and irradiation dose on the tear strength of 50RTR blends are shown in Fig. 9. Tear strength increased up to 5 wt% APTES loading and stabilized thereafter.
An improvement of 20% in the tear strength of the 5 wt% APTES blend at 0 kGy was noted compared to the control blend, 50RTR. This observation can be attributed to the increased reclamation of RTR particles and the enhanced adhesion between RTR and the EVA matrix. Tear strength beyond 5 wt% APTES loading showed a plateau, suggesting that the compatibility of 50RTR/5APTES had become sufficient to enable the maximum tear strength. In the earlier work of Han and Han [26], an increment in intrinsic strength was reported due to enhanced energy dissipation and/or deviation of the tear path, which contributes to the increase in tear energy of filled systems. The tear strength of the APTES containing blends increased because cracks that initiate and propagate through the bulk are efficiently arrested or deflected, owing to the improved dispersion of smaller RTR particles. The dispersion of RTR at 5 wt% APTES loading might have been optimal, resulting in the plateau in tear strength thereafter. Irradiation only improved the tear strength of the 1 wt% APTES blend, with minimal or no improvement noted at higher APTES loadings. In Fig. 10, at 0 kGy, APTES containing blends showed an increase in hardness values up to 5 wt% APTES loading, with saturation thereafter. The microstructural change of EVA following the interaction with APTES could have caused this observation. Upon irradiation, all APTES containing blends showed hardness improved by a meagre 1 to 3%.

Chemical mechanism of reclaiming and compatibilization

In this study, we used APTES, which contains a silicon atom connected to a propyl spacer bearing the primary amine as the organofunctional group, together with three ethoxy groups. Figure 11a represents the readily occurring hydrolysis process in which moisture present in the atmosphere reacts with APTES. In this process, more reactive silanol groups are produced from the hydrolysis of the ethoxysilanes [27]. Subsequent to the formation of silanols, polysiloxane network structures are formed by the self-condensation of silanols [28,29]. The combination of mechanical shearing and chemical aid is preferred for reclaiming crosslinked rubber, as the resulting reclaimed rubber has superior properties in comparison to rubber reclaimed by mechanical shearing only. In the chemically aided method used for reclaiming vulcanized natural rubber, chemicals such as thiols, phenols, and disulphides have the capability to scavenge the radicals formed during mechanical shearing [5,20]. EPDM rubber is more commonly reclaimed by a nucleophilic mechanism using amines. An amine is an excellent reclaiming agent and a strong nucleophile, specifically owing to the ability of primary and secondary amines to cleave cyclic octasulfur [20]. Previous studies have shown the capability of amines to reclaim rubber via a nucleophilic mechanism, as shown in Fig. 12a. Waste tire rubber can also be reclaimed using APTES, which contains a primary amine as its organofunctional group and causes the decreases in molecular weight and gel content discussed in the earlier sections. Carboxylic groups are known to be present on degraded material such as RTR [30]. The amine group of APTES can react with these carboxylic groups to form a stable covalent bond, as shown in Fig. 12b [31,32]. The 110 °C processing temperature further aided this process. The addition of APTES altered the properties of the EVA matrix, changing the EVA microstructure through the interaction between EVA and APTES. The amine group of APTES can interact with the carboxylate ester of the vinyl acetate group within EVA, thereby converting the vinyl acetate group to an alcohol, as shown in Fig. 13a.
The processing temperature of 110 °C used in this study could easily facilitate the proposed reaction. This reaction produces EVA macromolecules with some hydroxyl (-OH) groups along the backbone, which could further react with the silanol groups of hydrolysed APTES and of condensed APTES (polysiloxanes) [33]. These reactions are presented in Fig. 13b, c. Reactive compatibilization of EVA and RTR using APTES can thus provide strong interfacial adhesion in the blends. Covalent attachment between RTR and EVA, via a combination of the APTES reaction with RTR (Fig. 12b) and with EVA (Fig. 13b, c), facilitates the formation of a strong interphase. This good interphase is one of the reasons for the improved mechanical properties of the APTES-compatibilized blends before irradiation.

Morphological study

Figure 14a reveals the fracture surface of the 50RTR blend with clean, uncoated rubber particles (indicated by arrows). Empty voids are present on the surface due to large-particle pull-out, which indicates a lack of adhesion between the EVA matrix and the RTR particles.

Fig. 14 SEM micrographs of 0 kGy a) 50RTR and b, c, d) 3 wt% APTES-containing 50RTR blends, with an overview at 1000x (a, b) and a focused RTR particle in the APTES-containing blend at 5000x (c, d) magnification

The 3 wt% APTES-containing blend in Fig. 14b showed fully embedded RTR particles dispersed within the EVA matrix. Smaller RTR particles were observed in the APTES-containing blend than in the 50RTR blend. The dispersion of smaller rubber particles supports the claim that APTES functions as a reclaiming agent for RTR: the softer rubber produced by APTES reclamation, together with the shearing during compounding, makes it easier to break the rubber into smaller particles, leading to an EVA matrix with well-dispersed, smaller RTR particles. The RTR particles were fully embedded and interlocked with EVA by fibril formation, as shown in Fig. 14c, d. Notably, the rubber particles also supported the applied stress, as evident from the broken or ruptured rubber particle [34,35] shown in Fig. 14d. This observation supports the effective formation of interfacial adhesion between EVA and RTR in the presence of APTES, allowing enhanced tensile properties before irradiation. The EVA matrix in APTES-containing blends shows no sign of fibrillation on the matrix surface, in contrast to the 50RTR blend. This supports the discussion of EB, whereby the EVA microstructure was expected to be altered by interaction with APTES.

TEM micrographs at 5000x magnification in Fig. 15a, c show darker grey RTR domains distributed in the lighter grey EVA matrix. The RTR domain sizes before irradiation were measured to be around 0.5 to 2 µm. Upon irradiation, less contrast was noted between the RTR domains and the EVA phase, with the RTR harder to distinguish and scattered throughout the EVA matrix. The RTR domain sizes in the irradiated 50RTR blend were measured to be around 0.1 to 1 µm. These observations indicate that irradiation helped to improve the dispersion of RTR within the EVA phase; the decreased RTR domain size upon irradiation evidences an effective increase in the surface area of RTR in contact with EVA. Migration of nano-sized fillers or substances (indicated by arrows) from RTR into the EVA phase was also noted both before and after irradiation.
Some studies have shown that nanofillers such as nanoclay [36] and carbon nanotubes [37,38] can lower the crosslink network density in samples at lower irradiation doses. Figure 15b, d show the EVA phase entrapped within the RTR phase, as indicated by the arrows. An amorphous state can be formed by the entrapped EVA intermingling with free chains on the surface of RTR.

TEM micrographs of APTES-containing 50RTR blends before and after irradiation are shown in Fig. 16. As in the 50RTR blend, domains of RTR (darker grey) distributed in the EVA matrix (lighter grey) were also observed in the APTES-containing blends. At 0 kGy, the RTR domain sizes in APTES-containing blends were measured to be between 0.3 and 1.0 µm, approximately 50% smaller than in the 50RTR blend. This observation supports the earlier claim that APTES is able to further reclaim RTR, allowing smaller RTR domain sizes within the EVA matrix. The reduced contrast between the EVA matrix and the RTR domains also suggests enhanced compatibility in the APTES-containing blends. At a 200 kGy irradiation dose, the RTR domain sizes were further reduced, to 0.1 to 0.5 µm. Similar to 50RTR, higher contrast between the RTR domains and the EVA matrix was noted in APTES-containing blends upon irradiation. These observations indicate heterogeneity, or distinct phase separation, in the APTES-containing blends upon irradiation, which can be attributed to the enhanced formation of irradiation-induced crosslinks in these samples. Figures 16b, d show the difference in the RTR domains of APTES-containing blends before and after irradiation, respectively. The differently shaded grey patches within the RTR domains before irradiation signify the partially devulcanized state of the rubber: the darker grey indicates crosslinked rubber regions, while the lighter grey shades indicate amorphous/free rubber chains. At 200 kGy, the patches changed distinctly, showing overlapping spherical regions of different grey shades. A similar observation was not noted in the control 50RTR blend and could therefore be due to efficient crosslink formation in the APTES-containing blends.

Calorimetric analysis

DSC thermograms of 50RTR and APTES-containing blends before and after 200 kGy irradiation are presented in Fig. 17. A distinct narrowing of the melting and cooling peaks was observed in the compatibilized blends, whereas irradiation caused a shift of the melting and cooling peaks toward lower temperature along with a reduction in peak height, especially prominent in the APTES-containing blend. To aid the discussion, Fig. 18 was charted using raw data collected from the DSC analysis. Figure 18a shows that the crystallization temperature (Tc) of APTES-compatibilized blends at 0 kGy increased only minutely compared to the 50RTR blend: 50RTR and the APTES-compatibilized blends recorded Tc values of 65.0 and 65.5 °C, respectively. Adhesion between RTR and EVA increased as compatibilization effectively reduced the interfacial tension; hence the nucleation effect rendered by RTR is enhanced, requiring a higher amount of energy to form crystals [14]. As in the 50RTR blend, a decrease in the Tc of APTES-containing blends was noted with increasing irradiation dose. The net amount of EVA chains available for rearrangement into crystals decreases as irradiation-induced crosslinks form in the blends, leading to a drop in Tc with rising irradiation dose.
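The crystallinity values discussed in the following paragraphs are conventionally derived from the DSC heat of fusion. A minimal sketch of that calculation is given below; the EVA weight fraction (0.5 for a 50RTR blend) and the reference heat of fusion of 100% crystalline polyethylene (277.1 J/g) are assumptions made for illustration, not values stated in this work.

```python
# Minimal sketch of the usual crystallinity calculation from DSC data.
# Assumed: only the EVA phase crystallizes, the EVA weight fraction is
# 0.5 (50RTR blend), and the heat of fusion of 100% crystalline
# polyethylene is 277.1 J/g -- all illustrative, not from this paper.
def crystallinity(delta_h_m, w_eva=0.5, delta_h_100=277.1):
    """Percent crystallinity from the measured heat of fusion (J/g of blend)."""
    return 100.0 * delta_h_m / (w_eva * delta_h_100)

print(f"{crystallinity(15.0):.1f} %")  # e.g. 15 J/g measured -> ~10.8 %
```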
APTES-containing blends have a melting temperature (Tm) lower than the control 50RTR blend, suggesting that imperfect and/or fewer crystals are formed compared to the 50RTR blend [39]. At 0 kGy, the 50RTR and 50RTR/5APTES blends showed Tm values of 84.2 and 82.8 °C, respectively. This observation supports good interfacial interaction [40] between EVA and RTR, which hinders the growth of EVA crystals. Increasing the irradiation dose also decreased the Tm of APTES-containing blends; however, at 200 kGy the extent of the decrease in Tm of APTES-containing blends (1.4%) was smaller than that of the control 50RTR blend (3.0%). Before irradiation, compatibilization leads to smaller and/or fewer crystals in the blends, while in the irradiated 50RTR blend the greater freedom of EVA chain movement (due to lower crosslinking efficiency) leads to increased crystal formation compared to the compatibilized blends.

Figures 18c, d show the effect of irradiation dose on the crystallinity and heat of fusion of the compatibilized blends, respectively. Intriguingly, at 0 kGy, APTES-containing blends recorded higher crystallinity and heat of fusion than the 50RTR blend; similar observations were reported for compatibilized PP/NBR blends [41] and PA6/SEBS blends [42]. The increased crystallinity of APTES-containing blends is aligned with the improvement in the tear and tensile strength of the blends. Improved interfacial adhesion enhanced the nucleating capability of the blends, resulting in improved crystallinity and heat of fusion [43]. The improved crystallinity of APTES-containing blends could also be caused by microstructural changes in EVA. Increasing the irradiation dose caused a drop in the crystallinity and heat of fusion of the APTES-containing blends. This contrasts with the control, 50RTR, which showed higher crystallinity and heat of fusion in irradiated blends than in the un-irradiated 50RTR blend. In the 50RTR blend, greater flexibility for EVA chain rearrangement and recrystallization was possible owing to the redistribution of RTR and the lower crosslinking efficiency. Gel content analysis also confirmed the increased crosslinking efficiency in APTES-containing blends: upon irradiation, more crosslinks formed in APTES-containing blends than in 50RTR blends. Hence, a lower degree of EVA chain rearrangement into crystals was possible in the APTES-containing blends, causing a greater decline in crystallinity upon irradiation [39].

Fig. 17 Influence of irradiation dosage on DSC thermograms of 50RTR and APTES-containing 50RTR blends

Thermal degradation properties

The thermal degradation of 50RTR and APTES-compatibilized blends was studied using thermogravimetric analysis. The weight-loss curves with respect to temperature are presented in Fig. 19. All samples underwent a two-step degradation process. Table 6 lists the temperatures corresponding to specific and maximum weight losses. The maximum temperatures for the first and second degradation steps were recorded at around 360 °C and 475 °C, respectively. As reported in a previous communication [25], the first degradation step corresponds to the intensive loss of vinyl acetate (a component of EVA) and the depolymerization of natural rubber (a component of RTR), whereas the second degradation step corresponds to the intensive depolymerization of the polyethylene chain (a component of EVA) and of butadiene rubber (a component of RTR). The weight loss observed below 300 °C is associated with the loss of volatile organic compounds such as additives, oils and oligomers [44].
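The step temperatures collected in Table 6 are conventionally read as the peak temperatures of the derivative thermogravimetry (DTG) curve. A minimal sketch using a synthetic two-step weight-loss trace is given below; the sigmoid parameters are placeholders chosen to mimic steps near 360 °C and 475 °C, not the measured thermograms of this study.

```python
import numpy as np

# Synthetic two-step TGA trace standing in for exported instrument data.
temp = np.linspace(30.0, 600.0, 1141)               # temperature, deg C
weight = (100.0
          - 40.0 / (1.0 + np.exp(-(temp - 360.0) / 12.0))
          - 45.0 / (1.0 + np.exp(-(temp - 475.0) / 10.0)))

dtg = -np.gradient(weight, temp)                    # DTG curve, %/deg C
# T_max of each degradation step = local maxima of the DTG curve
# (a small threshold suppresses noise-level wiggles).
is_peak = (dtg[1:-1] > dtg[:-2]) & (dtg[1:-1] > dtg[2:]) & (dtg[1:-1] > 0.1)
print("T_max (deg C):", temp[1:-1][is_peak])        # ~360 and ~475
```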
Non-irradiated APTES-compatibilized blends showed slightly enhanced thermal stability, of about 3 to 7 °C, up to the onset of the second degradation step. The use of APTES facilitated the dispersion of RTR in smaller domain sizes within the EVA matrix, increasing the effective interfacial area. Additionally, the APTES compatibilization mechanism indicated that the vinyl acetate groups responsible for the first degradation step might have been consumed during interphase formation. Both factors contributed to the enhanced thermal stability observed in APTES-compatibilized blends. In contrast, the second degradation step was not affected by compatibilization, suggesting that compatibilization had no significant effect on the polyethylene backbone of EVA or on the butadiene rubber component of RTR. Irradiation enhanced the thermal stability of the 50RTR blend by about 10% up to 350 °C; however, irradiation had no influence on the thermal stability of the APTES-compatibilized 50RTR blend. This was an interesting finding, because in both cases the RTR domain sizes decreased in the irradiated samples. The gel contents recorded at 200 kGy for the 50RTR and 50RTR/5APTES blends were 61.5% and 71.6%, respectively, indicating the formation of crosslinks. Indeed, complex molecular changes may occur in an irradiated sample that influence its thermal stability.

Conclusion

The results revealed that APTES plays a dual role in RTR/EVA blends: as a compatibilizer and as a reclaiming agent. Compatibilization and reclamation led to improved mechanical properties in the APTES-compatibilized 50RTR blend, and 5 wt% APTES was found to be the optimal loading. The crystallinity of the 5 wt% APTES-compatibilized blends was higher in the un-irradiated state and at low irradiation doses, resulting in better mechanical properties. The use of APTES also ensured improved crosslinking efficiency in the electron-beam-irradiated blends. However, the interaction between EVA and APTES resulted in decreased ductility in the compatibilized blends.

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s10965-021-02748-y.

Fig. 19 Influence of compatibilization and irradiation on the thermal stability of 50RTR blends
EMERGING EMITTERS AND GLOBAL CARBON MITIGATION EFFORTS

International efforts to avoid dangerous climate change have historically focused on reducing energy-related CO2 emissions from countries with the largest economies, including the EU and U.S., and/or the largest populations, such as China and India. However, in recent years, emissions have surged among a different, much less-examined group of countries, raising the issue of how to address a next generation of high-emitting economies that need strong growth to reduce relatively high levels of poverty. They are also among the countries most at risk from the adverse impacts of climate change. Compounding the paucity of analyses of these emerging emitters, the long-term effects of the COVID-19 pandemic on economic activity and energy systems remain unclear. Here, we analyze the trends and drivers of emissions in each of the 59 developing countries whose emissions over 2010-2018 grew faster than the global average (excluding China and India), and then project their emissions under a range of pandemic recovery scenarios. Although future emissions diverge considerably depending on responses to COVID-19 and subsequent recovery pathways, we find that emissions from these countries nonetheless reach a range of 5.1-7.1 Gt CO2 by 2040 in all our scenarios, substantially in excess of emissions from these regions in published scenarios that limit global warming to 2 °C. Our results highlight the critical importance of ramping up mitigation efforts in countries that to this point have played a limited role in contributing to the stock of atmospheric CO2, while also ensuring the sustained economic growth that will be necessary to eliminate extreme poverty and drive the extensive adaptation to climate change that will be required.

Can Cui, Dabo Guan, Daoping Wang

Introduction

Fossil fuel carbon dioxide (CO2) emissions are the largest contributor to global warming. Going back to the 1990s and before, analyses of fossil emissions and energy-emissions models (IAMs) have focused on a handful of regions that include industrialized economies where emissions have been high (US, EU) and rapidly industrializing countries such as China and India (Raupach et al. 2007; Fernández González, Landajo, and Presno 2014; Hubacek, Guan, and Barua 2007; Cantore and Padilla 2010). To the extent that other countries are included, they are typically heavily aggregated, often literally into a "Rest Of World (ROW)" group, or "other developing countries" in the non-Annex I country list of the UNFCCC (Winkler, Brouns, and Kartha 2006). Yet, since 2010, most of the growth in global emissions has been among these non-Annex I, "ROW" countries. Over 2010-2018, all the countries with an emissions growth rate higher than the world average were developing economies, including countries currently on the lists of least developed countries (LDCs) and/or landlocked developing countries (LLDCs) (IEA 2018a; United Nations 2020). In contrast to large emitters such as the United States, China and India, these developing countries individually have small emissions, but collectively their emissions are comparable with those of the top emitters and have a large potential to dominate global emissions in the future. These countries face multiple daunting challenges. Many of these emerging emitters face high costs in adapting to climate change while having the weakest adaptation capacity. At the same time, they need to sustain economic growth to generate jobs and lift people out of poverty.
It is this growth that is accelerating the rise of their CO2 emissions and creates the challenge of implementing their intended nationally determined contributions (INDCs) toward climate change mitigation. A key issue is the increasing demand for oil and rising coal-related CO2 emissions, which may lock in emission-intensive energy-use patterns among Asian (Steckel, Edenhofer, and Jakob 2015) and African countries (IEA 2019a; Steckel et al. 2020; Lucas et al. 2015). Following the outbreak of the COVID-19 pandemic, countries applied various lockdown measures to avoid the rapid spread of the virus. As these lockdown strategies limit production and consumption activity, they are having a significant impact on energy consumption and CO2 emissions (Le Quéré et al. 2020; Liu et al., n.d.). For example, the world's energy demand in the first quarter of 2020 declined by 3.8% relative to Q1 of 2019 (IEA 2020), and global CO2 emissions were over 5% lower in Q1 2020 than in Q1 2019 (IEA 2020). The severity of the pandemic and the strictness of the lockdown measures vary among the emerging emitting developing countries. For example, Peru has experienced severe outbreaks, with over 835,000 cases and over 33,000 deaths as of October 2020 (World Health Organization 2020), despite a prolonged lockdown. Vietnam applied strict lockdown measures early in the outbreak and largely succeeded in controlling the spread of COVID, with 1,100 cases by October 2020 (World Health Organization 2020). Since COVID is projected to re-emerge and have impacts for nearly four years (Kissler et al. 2020), it might reshape the emission patterns of the emerging emitters and substantially influence their future emissions. Therefore, understanding the driving forces behind previously growing emissions, the impact of COVID, and the likely future trajectories of CO2 emissions of these developing countries is essential in the context of measures to achieve global emissions reduction. Although, even prior to COVID, the emissions of some major economies were declining (US, EU) (Le Quéré et al. 2019) or had at least flattened (Guan et al. 2018), few studies have focused outside those main regions, on countries where growth rates are high but emissions remain relatively low for now. Here, we comprehensively assess the situation of these emerging emitters, develop country-specific emissions scenarios that capture the impact of COVID, and finally discuss potential measures for global emission reduction.

In this study, we use index decomposition to analyze the drivers of emissions of 59 fast-growing developing countries. We then develop country-specific emission scenarios for a range of future energy and development trajectories using an Adaptive Regional Input-Output (ARIO) model. These capture region-specific COVID effects combined with more aggregate shared socioeconomic pathways (SSPs) generated by large integrated assessment models and an assumed application of low-carbon technologies in the emerging emitters. Located in Asia, Africa, and Latin America, these 59 developing countries individually discharged CO2 emissions of between 0.7 Mt (Eritrea) and 542.9 Mt (Indonesia), much smaller than those of the emission giants of North America and Europe. However, the 59 emerging emitters as a group accounted for growth of CO2 emissions from 2.7 Gt in 2010 to 3.8 Gt in 2018. These countries combined now emit 65% more than India, which has the 3rd-largest emissions at 2.3 Gt.
Specifically, the 1.1 Gt of increased emissions from these countries contributed almost 40% of the world's emissions growth over this period, signaling a new generation of emerging emitters.

Figure 1 | Map of countries with fast-growing CO2 emissions. Note: The depth of purple reflects the volume of emissions in 2018, and the size and color of the bubbles represent the annual growth rate of emissions (2.5% is the global average): green for declining, yellow for slow-growing, and red for fast-growing.

The average annual emissions growth rate of the 59 countries was 6.2%, much higher than the global average of 2.5%, while their average annual GDP growth rate was 4.6%. Over half (34 out of 59) of this group of developing countries experienced emissions growth faster than GDP growth over the past decade, and 12 saw emissions grow at twice the rate of GDP growth. These countries are at diverse stages of development, ranging from LDCs, such as Ethiopia and Uganda, to economies in transition (EIT) (United Nations 2020), including Georgia. For most LDCs and LLDCs, the increase in emissions appears to be strongly coupled with GDP growth (mapped as bubbles above the lines of slope 1 and 2 in Figure 2, which represent a one-to-one and a two-to-one ratio of emissions growth to GDP growth, respectively). These countries include Laos, Zambia, Myanmar, Ethiopia, Georgia, Uganda, and Vietnam. There are another 25 countries for which CO2 emissions have grown more slowly than GDP, including Mongolia, Ghana, and Peru. We now move on to discuss how the diverse natural and economic situations of these countries have shaped different patterns and drivers of CO2 emissions growth.

Figure 2 | Relative increase of CO2 emissions and GDP in 2018 over 2010. Notes: Each bubble represents a country, plotted by GDP increase in 2018 relative to the level of 2010 on the horizontal axis and CO2 emissions increase on the vertical axis. The bubbles of countries with below 100% emissions increase (within the black dotted box) on the left are zoomed in on the right. The size of the bubbles represents the amount of CO2 emissions in 2018. The colors represent the development stages of the countries (United Nations 2020): red for developing economies (DE), cyan for economies in transition (EIT), green for least developed countries (LDC), and purple for landlocked developing countries (LLDC). The two grey lines with slopes of 1 (lower) and 2 (upper) mean the CO2 emissions growth rate is the same as or twice the rate of GDP growth. The figure includes 57 countries (South Sudan and Eritrea lack GDP data).

Emerging emitters carbonizing together, but in diverse ways.
To understand the driving forces behind these fast-growing emissions, we decompose emissions growth (C) over 2010 to 2018 into contributions from six factors: $\Delta C_P$ from population (P) growth; $\Delta C_G$ from economic growth measured by GDP per capita (G); $\Delta C_{IS}$ from industrial structure (IS), as reflected by changes in the shares of primary, secondary, and tertiary industry in GDP; $\Delta C_{EI}$ from energy intensity (EI), defined as energy consumption (E) per unit of GDP; $\Delta C_{ES}$ from energy structure (ES), the shares of coal, oil, natural gas, and other sources in energy consumption; and $\Delta C_{CI}$ from CO2 emissions intensity (CI), the emissions per unit of energy consumption, as follows:

$$C = \sum_{i}\sum_{j} C_{ij} = \sum_{i}\sum_{j} P \cdot G \cdot IS_{i} \cdot EI_{i} \cdot ES_{ij} \cdot CI_{ij},$$

where i refers to the ith industry among primary, secondary, and tertiary industry, and j refers to the jth energy type among coal, oil, natural gas, and other types. The change in C from time 0 to time T can be divided into six parts using the logarithmic mean Divisia index (LMDI) method as follows:

$$\Delta C = C^{T} - C^{0} = \Delta C_{P} + \Delta C_{G} + \Delta C_{IS} + \Delta C_{EI} + \Delta C_{ES} + \Delta C_{CI}, \qquad \Delta C_{X} = \sum_{i}\sum_{j} \frac{C_{ij}^{T} - C_{ij}^{0}}{\ln C_{ij}^{T} - \ln C_{ij}^{0}} \, \ln\frac{X_{ij}^{T}}{X_{ij}^{0}},$$

where $X_{ij}$ refers to the driving factors, i.e. P, G, $IS_i$, $EI_i$, $ES_{ij}$, and $CI_{ij}$. We select Myanmar, Ethiopia, Vietnam, Uganda, Mongolia, and Peru as case countries to reveal the different drivers of emissions growth. The results are shown in Figure 3 and are summarized below for each country (a numerical sketch of the decomposition itself is given below the country cases).

Myanmar had GDP growth, the oil-oriented shift in energy consumption, and energy intensity as the top three contributors. GDP growth has been the main driving force of emission increments. Next, the energy structure of Myanmar has become oil-oriented, especially in the construction, power generation, manufacturing, and household sectors. Further, shifts in the structure of industry towards manufacturing, a key factor behind Myanmar's success in poverty reduction, have also contributed to increasing CO2 emissions. While economic development has progressed in Myanmar, substantial potential remains for sustained economic growth. On the one hand, this will contribute further to growing CO2 emissions; on the other hand, it is essential to continue to take people out of poverty, including the more than three-quarters of a million people still in extreme poverty. At the same time, Myanmar is ranked 160 out of 181 countries on the ND-GAIN index, which measures a country's vulnerability to climate change together with its capacity to improve resilience. Myanmar is also second on the Germanwatch list of countries most impacted by climate change over the past 20 years.

Figure 3 | Drivers of emissions growth in case study countries.

In Uganda, carbon emissions also increased slightly more than GDP, with an annual average growth rate of 6%, rising to 5 Mt in 2018. Increasing population and oil consumption were the main impetus behind the growth of emissions in Uganda, contributing 37.0% (1.1 Mt) and 27.6% (0.8 Mt) of the CO2 emissions increment, respectively. Uganda saw a 3.6% annual average population growth rate over 2010-2018, with a net population increase of 10.29 million. Over the period, oil consumption increased at an annual growth rate of 7.1%, which also contributed substantially to the increase in CO2 emissions. The increasing contribution of services to GDP, and in particular transport services, also added to the growth of CO2 emissions. On the other hand, investments in renewable energy led to a lower energy intensity that reduced the growth of emissions by 1 Mt over the period. Uganda has yet to achieve the sustained inclusive growth necessary to drive poverty reduction.
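Picking up the forward reference above, the following is a minimal Python sketch of one LMDI factor effect; the 2x2 industry-by-fuel emissions arrays and the population figures are placeholders, not the study's data. Summing the six factor effects reproduces the total change $C^T - C^0$ when the factors multiply out to C.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

def lmdi_effect(c0, cT, x0, xT):
    """Contribution of one factor X to the emissions change, summed over i, j.

    c0, cT: emissions C_ij by industry i and fuel j at times 0 and T.
    x0, xT: the factor values (scalars for P and G, arrays for IS, EI, ...).
    """
    return float(np.sum(logmean(cT, c0) * np.log(np.asarray(xT) / np.asarray(x0))))

# Placeholder 2x2 (industry x fuel) emissions and a population factor:
c0 = np.array([[10.0, 5.0], [8.0, 2.0]])
cT = np.array([[14.0, 6.0], [9.0, 4.0]])
print(lmdi_effect(c0, cT, x0=30e6, xT=35e6))  # Mt CO2 attributable to P
```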
In Peru, the absolute increase in population was due in part to the young population structure, but was also the result of higher immigration, especially from Venezuela: by 2018, more than three million immigrants had been officially accepted by the Peruvian government. Both industrial structure and changes in CO2 intensity contributed to substantial reductions in emissions over the period, amounting to over 14 Mt and offsetting to a large degree the increases driven by population growth and rising GDP per capita.

Future emissions: COVID-19 delay and potential mitigation

Economic development will continue to be a strong driver of emissions growth in these developing countries and is essential to eliminate extreme poverty. Nevertheless, as the experiences of some of the countries discussed above have shown, structural changes in these economies towards less carbon-intensive activities, such as shifting away from coal to less carbon-intensive sources of energy, and improvements in the energy intensity of GDP can significantly dampen the growth in emissions from these emerging emitters. An additional factor affecting the trajectory of emissions from these and all other countries is the COVID-19 pandemic and the economic slowdowns resulting from the lockdown strategies countries applied to limit the spread of the virus. We now move on to model the impact of COVID-19 scenarios on the emissions growth of these developing countries, and how this may affect overall emissions by 2040 and the objective of limiting global warming to 2 °C. We use the projections from GAINS and an ARIO model (see Annex 2 for a description of the modelling approach) to explore a scenario for the impact of COVID and then, after 2024, scenarios in which COVID is assumed to be contained.

To investigate the short-term (next five years) changes in emissions brought about by the sudden shock of COVID-19, we assume that production efficiency and economic structures are unlikely to change significantly within such a short time, so that the relationship between emissions and GDP (the emission intensity) is not altered. Therefore, we simply estimate the emissions of countries from their sectoral emission intensities and economic outputs:

$$C_t = \sum_{s} \frac{C_{s,0}}{Y_{s,0}} \, Y_{s,t},$$

where $C_{s,0}/Y_{s,0}$ is the base-year emission intensity of sector s and $Y_{s,t}$ its projected output. For the longer term, we assumed the countries reach the regional average emission intensity (emissions per unit of GDP); based on that emission intensity and the country-level GDP forecast, the region-level emissions are allocated to the country level. Low-carbon technologies (LCT) are applied from 2025 (linearly increasing in application and fully applied by 2040), including carbon capture and storage (CCS), renewable energy for the production of newly demanded electricity, and electric vehicles replacing the newly added oil-fueled automobiles from 2030. More specifically, countries are categorized into two groups, thermal-powered and non-thermal-powered, according to whether the share of thermal-power emissions in total emissions is greater than 50%. For the thermal-powered countries, the low-carbon technology for the power sector is set as CCS, and for the non-thermal-powered countries it is set as renewable energy for power generation. Both are applied to newly demanded electricity, linearly increasing in coverage from 0 in 2025 to 100% in 2040.
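A minimal sketch of these two projection steps (fixed sectoral intensities in the near term, then a linear LCT coverage ramp applied to newly added emissions) is given below; all numbers are illustrative placeholders, not the study's calibrated inputs.

```python
import numpy as np

years = np.arange(2020, 2041)

# (1) Near-term: fixed sectoral emission intensities x projected output.
intensity = np.array([0.8, 0.3])        # kt CO2 per unit output, per sector
output0 = np.array([100.0, 200.0])      # assumed base-year sectoral output
growth = np.array([0.04, 0.05])         # assumed sectoral growth rates
output = output0[:, None] * (1 + growth[:, None]) ** (years - 2020)
emissions = (intensity[:, None] * output).sum(axis=0)

# (2) LCT: linear coverage ramp (0 in 2025 -> 1 in 2040), applied only to
# emissions growth above the 2025 level, i.e. the "newly demanded" part.
coverage = np.clip((years - 2025) / 15.0, 0.0, 1.0)
new_demand = np.maximum(emissions - emissions[years == 2025][0], 0.0)
with_lct = emissions - coverage * new_demand
print(f"2040: baseline {emissions[-1]:.0f} kt, with LCT {with_lct[-1]:.0f} kt")
```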
We compare a new policy scenario (NPS), in which CCS or renewable energy is applied to power generation and electric vehicles replace oil-fueled automobiles, with a weak policy scenario (WPS) in which no low-carbon technologies are applied (i.e. the SSP2 scenario from the GAINS model, a middle-of-the-road scenario in terms of mitigation and adaptation). As shown in Figure 4, if large-scale adoption of low-carbon technologies were applied from 2025, including CCS and renewable energy for power generation and electric vehicles in place of oil-fueled automobiles, the aggregated emissions of these countries could be reduced by 611 Mt CO2, 9% of the 2040 emissions of 6.7 Gt under the baseline scenario. We also modelled a more extensive scenario for the application of low-carbon technologies, with carbon capture and storage and renewable energy applied to all production of newly demanded electricity from 2025, and electric vehicles replacing all newly added oil-fueled automobiles from 2030. In this case, the aggregated emissions of these emerging emitting countries would be 5.9 Gt, which is 811 Mt (12%) lower than the baseline scenario. Even with the rapid application of low-carbon technologies, the emissions growth rate is higher than that under SSP1. This suggests that while new technologies can help to reduce the emissions of these countries, alone they are insufficient to enable them to achieve a "sustainable pathway".

Uganda (Figure 4e) is also projected to experience slower growth from 2020 to 2024, even without the impact of COVID. Emissions under the default lockdown assumption fall by about 25% and recover to 96% of the baseline level by the end of the COVID period. There has been some spread of COVID since August 2020, following the loosening of the strict lockdown in July 2020. Nevertheless, the situation is much better than the African average, and the effectiveness of the initial lockdown strategy means that Uganda may achieve a mild-lockdown pathway, at least no worse than the default lockdown scenario. In this situation, and if the country continues increasing its consumption of oil without applying low-carbon technologies, post-COVID emissions will be around 15.4 Mt CO2 in 2040, compared to 13.7 Mt with LCT.

Peru (Figure 4f) is projected to experience a steady increase in growth from 2020 to 2024 in the absence of COVID. However, emissions under the default lockdown assumption fall considerably, by about 40%, although they recover to 98% of the baseline level by 2024. Despite applying strict lockdown measures since March 2020, Peru has been fighting one of the worst COVID outbreaks, with over 820 thousand confirmed cases and more than 32 thousand deaths. By October 2020 the spread had yet to be controlled, and if the trend continues, Peru will probably maintain its strict lockdown strategies and its future CO2 emissions may decline substantially. In such circumstances, Peru's emissions will reach 68 Mt in 2040 under the NPS, which is 4 years behind the baseline and mildest-lockdown assumptions. With low-carbon technologies, emissions could be reduced by 5.7% in 2040.

Discussion and Conclusion

Emerging emitters among developing countries have collectively contributed very little to the overall stock of CO2 in the atmosphere. However, they have come to the forefront of the growth of CO2 emissions over the past decade and will likely increasingly be so. Strong and sustained economic growth, crucial for poverty reduction, together with increases in population and carbon-heavy energy consumption, will drive significant emissions growth.
Taking energy structure as an example: over the period 2010-2018, among the 34 countries in our sample of developing countries that use coal, 23 show a rising share of coal in the energy mix and 29 increased their absolute consumption of coal. The impact of COVID is hitting developing countries hard and setting back progress on poverty reduction. The World Bank predicts that the COVID-19 pandemic could result in between 71 and 100 million people being pushed into extreme poverty. There will, therefore, be a need to quickly revive growth in these economies, and their current dependence on traditional fossil fuels is likely to result in significant carbon emissions. These countries are confronting the massive challenges of achieving inclusive economic development, contributing to climate change mitigation, and adapting to rising global temperatures, changing precipitation, and more extreme weather events. Indeed, these emerging emitters are the most vulnerable to climate change and the least prepared to adapt to it. For these countries, climate change will undermine their ability to drive poverty reduction, as it will constrain productivity growth, especially in agriculture, and require scarce resources to be redirected towards adaptation. Costinot et al. (2016), for example, compute that the impact of climate change on agricultural productivity alone will result in a decline in welfare equivalent to almost 4% of GDP in Uganda and over 6.5% of GDP in Vietnam, assuming that trade and production patterns adjust to dampen the impact. If adjustment is constrained, losses could amount to more than 7.5% of GDP in Uganda and over 11% in Vietnam. These countries' actions on emissions reduction will significantly influence the global effort to mitigate climate change. There is a large degree of diversity across these countries in the size of national absolute CO2 emissions, the relationship between GDP growth and increases in CO2 emissions, the drivers of emissions growth, the response to the COVID impact, and the effects of alternative post-COVID pathways. This calls for country-specific assessments and responses rather than common strategies defined for all countries. Many of these countries are also already at the forefront of mitigation efforts in terms of enhancing the ambition of their Nationally Determined Contributions under the Paris Agreement. In the post-COVID era, the outcomes of different pathways could lead to a difference of over 1 Gt in emissions from these countries. Hence, to limit global warming to well below 2 °C, the world needs to reduce emissions by 25% relative to 2018 levels, and the emerging emitters have a significant role to play. This would be facilitated by measures in emerging emitters to adopt lower-carbon development pathways, including progress on accelerating changes in industrial structure, energy transformation, and the adoption of new production technologies. For example, in countries where industry drives emissions growth, efforts could focus on accelerating structural transformation, which in turn is essential for economic diversification and job creation; countries where increasing energy consumption is the major driver of emissions growth can explore ways to lower their emission intensity, through both technologies that improve energy efficiency and shifts towards low-carbon sources of energy, such as clean oil, gas, and renewable energy.
Global adoption of a "low-carbon lifestyle" would lessen the carbon intensity of production in developing countries. Low-carbon technologies are a crucial means of limiting the surging emissions. In our scenario analyses, adoption of low-carbon technologies can have a considerable influence on future emissions reduction: with early application of CCS and renewable energy in the power sector, and electric vehicles replacing oil-fueled automobiles, the emerging emitters could reduce emissions by 600 Mt CO2 by 2040. The challenge for the global community is to facilitate these economic transformations in ways that support sustained growth and poverty reduction. This can be achieved by improving access to the finance and knowledge necessary to support the adoption of new technologies and the shift towards a lower carbon intensity of growth. More advanced countries could assist developing countries by sharing energy-saving technologies and knowledge about renewable energy. Climate clubs have been identified as one solution for delivering coordinated climate mitigation (Nordhaus 2015; Paroussos et al. 2019), facilitating, for example, the technology diffusion that would lower the cost of mitigation in developing economies. More generally, simultaneously addressing the challenges of ending extreme poverty, achieving inclusive growth throughout the world, and meeting climate goals will require cooperative solutions that integrate both the development needs and the emission realities of developing countries.
THE RELATIONSHIP BETWEEN JUSTICE AND RATIONALITY IN ANCIENT, MEDIEVAL AND MODERN ERA: A CRITICAL INVESTIGATION OF MACINTYRE'S CONCEPTION

The present study aims to explore the nature of justice and rationality and the relationship between them: how this relationship has become a basis for society and culture in the ancient, medieval and modern ages, how different thinkers present rival and compatible views about justice and rationality, and how both affect our society. Any society benefits from having justice as a prevailing virtue; this helps ensure that wrongs will be ended and rights upheld, thereby leading to a safer society for everyone. Justice's strong relation with the virtues means that it cannot be upheld without their presence, and the most basic virtue is rationality, without which no justice is possible. Different thinkers in ancient, medieval and modern times give different views about the relationship between justice and rationality. Macintyre, however, holds that there is no neutral conception of justice; rather, there are different standards of justice and rationality in every society.

Introduction

Justice is a broad notion based on a concept of moral rightness. Justice has a different conception in every culture; it is as a result of distinct methodologies, religions and histories that justice has rival and incompatible conceptions. Rationality, on the other hand, makes an individual aware that the instinctive beliefs and judgments he holds are either justified or not. The British philosopher Alasdair Macintyre presented the conceptions of four traditions (Aristotelian, Augustinian, Humean and Thomist) about justice and its relationship with practical rationality. The differing accounts of justice and rationality presented by Aristotle, Augustine, Aquinas and Hume differ in their conceptual schemes. 1 These traditions have drawn much criticism. In Whose Justice? Which Rationality?, Macintyre defines a tradition as "A concept of rational inquiry ... according to which the standards of rational justification themselves emerge from and are part of a history in which they are vindicated by the way in which they transcend the limitations and provide remedies for the defects of their predecessors within the history of that same tradition". 2

Ancient Era and Criticism

We start from the ancient tradition of Plato, who followed Socrates in concluding about justice that "A just soul and a just man will live well, and an unjust one badly … a just person is happy, and an unjust person wretched." 3 Macintyre holds that, according to Plato, justice is nothing but what the strong make it. Those who lack justice not only fail in excellence by the standard of virtue, but also fail in respect of effectiveness, a failure which they are unable to understand correctly. 4 Macintyre maintains that, on Aristotle's view, one cannot become just without the ability to reason practically. 5 Frank De Vita said that Plato's concept of justice is based on emotion itself, while Aristotle's rests on action and virtue: to be Platonically just, an agent should be in rational control of their desires, whereas Aristotle maintains that to attain the state of justice an individual must have practical wisdom. Through practical wisdom an individual attains the balance between spirited emotions and appetitive desires needed to develop the Aristotelian virtues.
Thus the Platonic and Aristotelian accounts of the just individual are consistent and can be described in harmony with each other. 6 David Johnston (1941) explains that, according to Plato, the purpose of justice is the accomplishment of wisdom and an explanation of the accurate associations of power and duty. Aristotle, in contrast, argued that above all the thought of justice concerns associations between men who are free and equal and who have different abilities, which allow them to contribute to the political society in diverse ways. 7 Another thinker, Karl Popper, maintains that, according to Plato, a society is just if its individuals sacrifice their needs for the sake of the city; he holds that for Plato justice is the health, stability and unity of the collective body, and nothing more 8. This is extremely dangerous in Popper's perspective. According to Popper, the state and other social institutions designed by humans are subject to rational inquiry and should serve the interests of individuals. He was against Plato's organicist view that justice consists in the proper functioning of the state, arguing instead that proper justice is based on the equal treatment of individuals. Popper emphasized libertarianism to a large extent, focusing on individual freedom as the most significant political value. He said that the pursuit of equality can lead to totalitarianism; his main focus was freedom, whose value he held to be greater than that of equality, because equality puts freedom in danger 9. Karl Popper argued that Plato's concept of an ideal state is totalitarian, while Popper's own view is libertarian. It can thus be concluded that Plato's theory of justice is idealistic in nature, and that both Plato and Aristotle think a just person would have a much more righteous and successful life. Yet there are many successful people who attained their achievements through unjust means.

Medieval Era and Criticism

A.Y. "Fred" Ramirez explains that, like Aristotle, Aquinas defines justice as a rendering-to-each-his-due: "Now each man's own is that which is due to him according to equality of proportion. Therefore the proper act of justice is nothing else than to render to each one his own" (ST, SS, Q. 58, Art. 11). Aquinas names justice a cardinal virtue, of which mercy, liberality, and pity are secondary in the sense that justice encompasses them all. He distinguishes "commutative justice" from "distributive justice" in the following way: the former refers to the manner in which one individual interacts with another, privately, whereas the latter refers to the manner in which a community acts towards a single person in the way it distributes, proportionately, common goods such as titles, resources, rights, and opportunities (Q61, Art. 1). Aquinas goes on to argue that favoritism is opposed to distributive justice, that justice should be meted out according to the merits of a cause, and he illustrates this with the example of offering a professorship: it would be unjust to offer a position to someone simply because he is a particular man (e.g., Peter); rather, justice requires, by the nature of the cause in question, that the position be offered on the merits of the person's knowledge. 10
Gottfried Wilhelm Leibniz (1646-1716) was impressed by Aquinas and Aristotle, and to practical rationality he adds a pluralism of values and a theory of estimating the possible outcomes of recommended actions. According to him, the distinctive aspect of practical rationality is that the best rational choice, the best among all possible worlds' choices, is the divine rational choice, the choice of God. In kind, divine and human thought are parallel; in degree they differ. It is natural for men to make mistakes in approaching goods, since for a man the apparent good is that which is in front of him, whereas God chooses the real good. Thus, in good moral conduct, correct moral reasoning is important; if a moral act is based on reasoning within the limits of the abilities of the person concerned, it will be rational. In other words, men are able to deliberate rationally about whether or not their moral judgments are accurate. 11

David Berge argued that Augustine and Aristotle differ on several points: Augustine holds that the city of God (Civitas Dei) is all-embracing, while for Aristotle whoever is not a citizen of a specific polis is excluded from that polis. Augustine believed that the chief virtues are humility and charity, while Aristotle denies a notion of virtue based on humility and charity. Augustine believes in the authority of the will, which does right acts, while Aristotle maintains that there is no will as such. And Augustine's telos is God himself, while Aristotle's telos exists in the city of man. Thus their schemes were dissimilar to a large extent; it was Thomas Aquinas in the 13th century who brought the thoughts of Aristotle and Augustine together. 12

Thomas' view holds that the laws of God alone are the source of justice: without virtue there is no justice, and without grace there is no virtue. According to him, a society can be called rational and just only if it does not go beyond the comprehension of God. 13 Macintyre expresses that, for both Aristotle and Aquinas, rational justification is a matter of deducibility from first principles in the case of derived assertions, and of the self-evidence, as necessary truths, of those same first principles. 14 He holds that the position of Aquinas is logical and justifiable, as it emerges from complex arguments that go beyond those of Augustine and Aristotle. 15 Paul J. Weithman 16 argues that the central difference between the two thinkers lies in the dissimilar values each assigns to citizens' attachment to the common good. Thus, the Thomist holds that it is through our moral development that we are able to apply the first principle.

Another thinker, Hans Kelsen, elaborates the utilitarian view in his theory of justice: according to the utilitarians, the goal of each person is chiefly the elevation of the agent's own happiness. Macintyre holds that: "All utilitarian doctrines which prescribe that everyone is to count for one, that everyone's desires are to be weighed equally in calculating utility and what is right, from Bentham to contemporary utilitarians and egalitarians, are deeply incompatible with Aristotle's standpoint. For what those deprived of the justice of the polis want cannot provide any measure for justice". 17 Kelsen believed that law can be separated from what is just or morally right: a law is still a law and should be obeyed even if it is completely immoral.
He rightly concludes that it is impossible to define justice scientifically, and that if any concept of justice reasonably emerges, it will be established on emotive premises. He holds that justice is an irrational ideal that highlights the preferences and values of individuals.

Modern Era and Criticism

In modern perspectives, philosophy is not moral but technical. John Rawls (1921-2002) opposed the utilitarian conception, arguing that utilitarians compromise the freedom and political rights of individuals while maintaining that by doing so they help to promote society. He gives two principles of justice for forming a better society. The "First Principle" maintains that society should guarantee every individual equal fundamental rights and the freedom to achieve those rights, to which all individuals can have equal access; 18 it is also called the principle of liberty. The other is called the difference principle, which introduces the fair distribution of institutions, opportunities, income and possessions. Rawls' description of distributive justice is criticized by Nozick in his defense of libertarianism. According to Nozick, means are created by people, and people have full rights over the things they produce. Thus, redistribution to the least advantaged will be unjust, because people often do not work willingly for others, since they do not want to share equally the things they produce with great struggle. 19 Michael Sandel criticized Rawls, arguing that Rawls asks people to reason about justice while detached from the values and aims that define who they are as persons and that allow people to discover what justice is. 20 Amartya Sen objected that we should focus not only on the distribution of possessions, but also on how well individuals are able to utilize those possessions to achieve their goals. 21 Against Rawls, Cohen argued that any unequal distribution of possessions is unjustified. He held that the rich are already gifted with talent, advantaged childhoods, and objectives; so, Cohen said, if we gratify them further with material possessions, we are gratifying them twice, which is not just 22.

Hume and Hutcheson

In Stephen Darwall's view, Hutcheson holds that a person makes efforts to achieve a goal, and that achieving the goal which an action requires is practical truth. He argued that rational beings are those that have concern for the good of others as well as for their own good. 23 According to Richard, Hutcheson was religious in his beliefs and held that it is through human reason and understanding that we can apprehend the reality of the world in which we live and judge the condition of institutions. He said that reason urges us to find out the purpose and mission of God in creating human beings and the other creatures on earth; thus he believed that man has an intuitive moral sense as a hidden potentiality. 24 About "reason", Hume argued that it is a faculty through which we discover the truths of the world; in Hume's perspective, reason is not practical. 25 Millgram argued that while Hume maintains that reason is not practical, his account has a "null hypothesis": rational and irrational actions cannot exist.
Elijah Millgram, Christine Korsgaard and Jean Hampton argued that Hume merely denies practical rationality, which does not make him an anti-rationalist 26. Hume's account, as Macintyre holds, requires a very different kind of context, in which the goal of a society is the fulfillment of desires and reciprocal arrangements are organized for people's interactions. 27 According to MacIntyre, Reid criticizes David Hume on the ground that the use of rationality, whether practical or theoretical, requires no specific kind of social order. We spend our lives in a world in which practical rationality is embodied in social contexts that are continuously subject to conflicts and disagreements 28.

Joseph Grcic said that, according to Hobbes, justice means acting upon the contract, and injustice is any violation of the contract. Hobbes notes that some political philosophers, for example Aristotle, put forward that the ruler's power can be divided into dissimilar authorities; for Hobbes, a separation between legislative, judicial and executive power is impossible, because resolving clashes between them requires a supreme authority, which will be the authority of the ruler. Hobbes' concept of the contract maintains that the contract generates lawful and moral obligations. He said that no contract exists between a ruler and the individuals, and that for the supreme ruler there are no moral or legal boundaries. He maintains that a ruler has the power to take the possessions of the people, to tax, to make law, to appoint judges, to reward or punish, to make war and peace, and to do anything he wants in his state, while the people have no rights except the right of self-preservation. The liberty of self-preservation is the one liberty the people hold against the ruler: people will press for another ruler if the existing ruler fails to protect them or does them injustice. 29

Paul Weirich (1946) argues that the evaluation of the rationality of an act depends upon the circumstances and abilities of the individual; its demands adjust to them. When circumstances are not in the agent's favor, rationality decreases its demands. For example, when the time for a decision is short, it may overlook top options. In common language, rationality is bounded, and it is attainable on its own terms. In facing every decision problem, some options are rational: a rational option is sustained by reasons as good as those for any other option, but not all options are equally rational or equally sustained by reason. At least one option is rational, because rationality is attainable and accords with all appropriate doctrines of rationality. In an ideal decision, the rule of maximizing utility applies, while the principle of satisficing may apply in a non-ideal problem. Some options maximize utility and are therefore rational, but some options are merely satisfactory and are rational as a result, even if they fail to maximize utility. For example, a chess player with imperfect command of strategic reasoning may rationally make a satisfactory but non-optimal move. A rational individual follows the goals of rationality in a reasonable way even when obstacles make their achievement difficult. Thus, in pursuing the primary goal of successful action, rationality helps greatly and puts forward the best ways. 30
Immanuel Kant, a German idealist, holds that all individuals have rights and can act freely as they wish, and that according to the universal law everyone has freedom; but this does not mean that we should hurt others or destroy their rights. In this way Kant's universal law of justice, the categorical imperative, emerged: an action is just if the freedom of one individual can coexist with the freedom of every other according to a universal law. According to Chloe Doss and Kara Delemeester, Kant follows the libertarian view of justice, on which a distribution of possessions is just if it comes from the free exchange of goods. Kant rejects utilitarianism, maintaining that the aim of a just institution is to harmonize each person's freedom with that of others. 31

Alasdair Macintyre is a major critic of the Enlightenment and modernity, particularly as they appear in Kant's philosophy. He presents Kant as holding a negative view of the relation between practical rationality and human interests and attachments: in Kant's view, human beings should adopt a neutral stance in identifying moral duties, using pure practical rationality. Macintyre, on the contrary, holds that human beings have no capacity for practical reasoning independent of their training and upbringing in moral traditions; they must acquire the moral virtues by following moral exemplars before being able to reason about them.

Macintyre's Conception of Justice and Rationality

For moderns, our thoughts are not based on traditions, and every culture has its own criteria of justice and rationality; they hold that there is no place for tradition as something essential to moral and political investigation. Macintyre argued that these traditions of course differ from each other over much more than their contending accounts of justice and practical rationality: they differ in their catalogues of virtues, in their conceptions of selfhood, and in their metaphysical cosmologies. 32 For Macintyre, no argument or enquiry is possible without a specific type of tradition; this is what he calls "the rationality of traditions" 33. He distinguishes two related challenges to his position: the "relativist challenge" and the "perspectivist challenge." According to him, relativism is the view that rationality and ethics are valid within a specific tradition and cannot be evaluated outside that tradition, since no tradition is superior to any other. Perspectivism, by contrast, is the view that the rationality and ethics of a specific tradition are valid only from the perspective of that tradition. Both challenges hold that tradition-independent validity and truth are impossible: the criteria of rationality are available only within traditions, and nothing can be considered right or wrong without reference to some tradition. Professor Macintyre argues that: "No tradition can claim rational superiority to any other. For each tradition has internal to itself its own view of what rational superiority consists in, regarding such topics as practical rationality and justice, and the adherents of each will judge accordingly". 34
34 Thus there is no single conception of rationality but rationalities, and no single conception of justice but justices. 35 According to MacIntyre, there is no idea of practical rationality or justice that stands outside every tradition, and on this point the moderns are confused. 36 He holds that ethical concepts are unchangeable and absolute and that they are linked with practical rationality. Hence the world in which we move is continuously in conflict with the circumstances in which practical rationality and justice are exemplified.
Conclusion
MacIntyre presents four traditions regarding justice and rationality. After analyzing these traditions, it is concluded that the Thomist tradition fares better than the others. The Aristotelian tradition treats equals equally and unequals unequally, which is not a good arrangement, because individuals who do not act justly can often achieve more than those who do. In this way a great gap opens between people and society, for the value placed on hard work will not last long if people can get what they want effortlessly. The views of Mill, Rawls, and Kant are likewise unsatisfactory. Rawls focuses on equality at the level of the whole, which is unjust, because on his scheme the poor merely satisfy their basic needs while the rich grow richer when they are granted twice as much; accepting his view opens a wide gulf between a society and its members. For the utilitarians, justice consists in promoting happiness at the maximum level; but on this view the minorities of a society may be excluded, which is unjust and irrational, and since utilitarians focus on the welfare of individuals rather than of society, their account is not just either. Kant accepts the person's freedom in his rational judgments, and his view is liberal; but this freedom sometimes leads us to wrong decisions by which we intentionally or unintentionally hurt others, and to choices that will not serve us well in the future. In the Humean tradition, as we saw earlier, rationality is not accepted: Hume adopts from Hutcheson the view that we are under the control of our passions. It follows that where there is no rationality there is no justice, because through the passions alone we cannot make valid or true judgments. The Thomist tradition, by contrast, holds that good reasoning is not possible without the moral virtues (prudence, courage, justice, temperance): when these are in harmony we can reason well and judge well, and when they are in contradiction we can do neither. It is therefore concluded that we should follow the Thomist tradition in order to build a good society and promote justice within it, for these virtues help us determine what we ought and ought not to do.
5,374.6
2019-01-01T00:00:00.000
[ "Philosophy" ]
Fractionation of Lignin for Selective Shape Memory Effects at Elevated Temperatures. We report a facile approach to control the shape memory effects and thermomechanical characteristics of a lignin-based multiphase polymer. Solvent fractionation of a syringylpropane-rich technical organosolv lignin resulted in selective lignin structures having excellent thermal stability coupled with high stiffness and melt-flow resistance. The fractionated lignins were reacted with rubber in the melt phase to form a partially networked elastomer enabling selective programmability of the material's shape either at 70 °C, a temperature high enough for rubbery matrix materials, or at an extremely high temperature, 150 °C. Utilizing appropriate functionalities in the fractionated lignins, tunable shape fixity with high strain and stress recovery, and particularly high stress tolerance, was maintained. Detailed studies of lignin structures and chemistries were correlated to the molecular rigidity, morphology, and stress relaxation, as well as the shape memory effects, of the materials. The fractionation of lignin enabled the enrichment of specific lignin properties for efficient shape memory effects that broaden the materials' application window. Electron microscopy, melt rheology, dynamic mechanical analysis, and ultra-small-angle neutron scattering were conducted to establish the morphology of acrylonitrile butadiene rubber (NBR)-lignin elastomers prepared from the solvent-fractionated lignins.
The key design aspect for such materials is the utilization of different molecular structures having different thermal responsiveness or contrasting phase-change behavior under thermal activation [20]. This mechanism has been tailored to synthesize diverse molecular structures having various shape memory effects. For example, Zarek and colleagues [5] employed the thermal transition temperature of a semi-crystalline polymer, polycaprolactone (PCL), to build three-dimensional (3D) shape memory structures.
The fractionated S-lignin and I-lignin were vacuum-dried for 48 h. The collected lignin was used to melt-react with NBR as reported in our previous study [22]. A selected composition with high lignin content (50 wt.%) in NBR was synthesized using a Brabender Plasti-Corder equipped with a 30-cc mixing chamber and high-shear twin roller blades. First, NBR was loaded and pre-mixed for 2 min at 90 rpm and 180 °C; then lignin was added and mixed for a total of 60 min. After melt-reacting, the samples were collected and stored at ambient temperature for further characterization. The 50 wt.% NBR-S-lignin and NBR-I-lignin composites were pressed and molded into films between two Teflon sheets at 190 °C for 20 min.
The fractionated lignins, S-lignin and I-lignin, were characterized by 1H and 13C nuclear magnetic resonance (NMR) spectroscopy in DMSO-d6 at 23 °C on a Varian VNMRS 500 MHz spectrometer (Palo Alto, CA, USA) [23]. The samples for 13C NMR contained chromium(III) acetylacetonate at 0.01 M to shorten the relaxation time for integration purposes. The two lignin samples were also analyzed by Fourier transform infrared (FTIR) spectroscopy on a PerkinElmer Frontier spectrometer (Waltham, Massachusetts, USA) in attenuated total reflection (ATR) mode. All measurements were performed at ambient temperature over a wavenumber range of 500-4000 cm−1 using a force gauge of 70 (a.u.) at a speed of 1 cm/s and 4 cm−1 resolution, with a total of 32 scans per sample. The background was subtracted and corrected to obtain signals from the samples only.
Thermal stability of the samples was investigated with a thermogravimetric analyzer (TGA; Q500, TA Instruments, New Castle, Delaware, USA) using a sample weight of ca. 15 mg and a ramp rate of 10 °C/min. The TGA measurements were carried out in an air atmosphere. To remove residual moisture, the samples were first loaded at ambient temperature, ramped to 100 °C, and held isothermally for 20 min before heating to 800 °C. The thermal transition characteristics of the samples were investigated using a differential scanning calorimeter (DSC; Q2000, TA Instruments) from −85 °C to 230 °C, with a sample weight of ca. 4 mg, a ramp rate of 10 °C/min, and hermetic pans. Data analysis was performed using the TRIOS software (version 5.0.0.44608) provided by TA Instruments.
Morphological characteristics of the samples were studied by scanning electron microscopy (SEM; Hitachi S-4800, Tokyo, Japan). The samples were fractured in liquid nitrogen, and the cryo-fractured cross-sections were imaged at different magnifications with an accelerating voltage of 10 kV and a working distance of ca. 10 mm. The samples were also examined by ultra-small-angle neutron scattering (USANS) within a q range of 0.0001-0.002 Å−1 at beamline BL-1A of the Spallation Neutron Source at Oak Ridge National Laboratory.
Rheological characteristics of the samples were studied using a Discovery Hybrid Rheometer (DHR-3, TA Instruments). Flow tests of different solutions of I-lignin and S-lignin in dimethyl sulfoxide (DMSO) were performed to determine their intrinsic viscosity [33]. In these tests, 40 mm parallel Peltier plates were used with precise temperature control at 25 °C; a solvent trap and silicone oil were used to avoid moisture absorption or solvent evaporation during the measurements. Additionally, strain sweeps and frequency sweeps at high temperatures were carried out to investigate the melt behavior of both S-lignin and I-lignin. Parallel plates of 25 mm were used for all S-lignin measurements because of its very low viscosity, whereas I-lignin, which exhibited very high melt viscosity, was measured with 8 mm parallel plates to avoid overloading the rheometer torque limit. All measurements were performed within the linear viscoelastic region.
Dynamic mechanical analysis and shape memory characterization of the materials were performed following our previous method [22] with a dynamic mechanical analyzer (Q800, TA Instruments). The temperature-dependent elastic modulus was studied at a ramp rate of 3 °C/min and a frequency of 1 Hz. To determine the shape memory effects, the sample was first ramped to a selected temperature and held isothermally for 5 min (described specifically in the Results and Discussion section). After reaching equilibrium, the sample was stretched to an applied strain at a rate of 10%/min, then cooled to the fixing temperature and held isothermally for 10 min. Next, the applied force was set to 0.001 N to release the stress. After fixing, the sample was heated back to the deforming temperature at a ramp rate of 10 °C/min and held isothermally for 30 min to record the strain recovery of the material after deforming and fixing. The deforming-fixing-recovery process was repeated 3 times.
Results and Discussion
The chemical characteristics of the acetone-fractionated lignin were carefully investigated by NMR and FTIR.
S-lignin and I-lignin showed distinguishable molecular structures, evidenced by the dominance of lignin functional groups and representative linkages in S-lignin. The integration of the 1H NMR peaks was set relative to the methoxy region (peak near 3.75 ppm) being 3 (±5%). The 1H NMR results (Figure 1a) demonstrated the dominance of aromatic structure in S-lignin at 5.6-7.8 ppm, with a peak integration relative to methoxy of 1.64 ± 7%, higher than the corresponding integrated peak of I-lignin (1.29 ± 7%). In S-lignin, aldehyde and phenol groups as well as -COOH were detected at ~7.8-10.6 ppm (integrated peak 0.57 ± 7%) and ~11-13 ppm (0.07 ± 7%), respectively [34-36]. Comparatively, no carboxyl groups were observed in I-lignin, and the -CHO and phenol groups determined in I-lignin amounted to only 0.35 ± 7%. The structural characteristics of the fractionated lignins were further examined by 13C NMR, as shown in Figure 1b [23]. The integral values reported here are relative to the integrated methoxy peak in the ~55 ppm region being set to 1; the uncertainty in each integration is around ±5%. The results also exhibited a higher amount of carbonyl groups in S-lignin: the integrated peaks from ~168-184 ppm of S-lignin and I-lignin are 0.16 and 0.02, respectively. It has been demonstrated that oxygen-containing functional groups such as carbonyl, hydroxyl, methoxy, aldehyde, and ether are key factors affecting lignin reactivity [22,29,37,38]. Structure-property relationships of the lignins are discussed further in the following sections. Noticeably, the amount of branched aromatic structure in I-lignin was considerably higher than in S-lignin: the integration of I-lignin's branched structure was ca. 2.35, whereas the corresponding integrated peak of S-lignin was only 1.83. The ether bonds, non-branched aromatic, and aliphatic structures of the two samples did not differ much; for example, the non-branched aromatic structure of I-lignin and S-lignin was 1.12 and 0.95, respectively, as measured from the integrated peaks from 98 to 122 ppm (see Figure 1b) [23,24,39]. The high aromatic content of I-lignin could result in a very rigid structure and high melt viscosity, as discussed further below. The integrated peaks of the ether bonds and aliphatic structure of S-lignin were 0.47 and 0.64, respectively; the corresponding areas for I-lignin integrated to 0.57 and 0.57. Note that the aromatic structures of S-lignin and I-lignin contain predominantly tertiary (-CH, non-branched) carbons, as evidenced by the chemical shifts at ~100-130 ppm (see Figure 1c,d, positive peaks); the branched aromatic carbons appear as negative peaks in the ~130-160 ppm region. The substantial structural differences between S-lignin and I-lignin were further examined by FTIR. Overall, the signals measured from S-lignin were more intense than those of I-lignin, as shown in Figure 1e,f; the steric hindrance and rigid structure of I-lignin could reduce the degrees of freedom and restrict molecular vibration [29,40]. For example, the -C-H aromatic deformation was observed at 831 cm−1 [41], and a sharp C-O deformation signal of primary alcohols was determined at 1030 cm−1 in the S-lignin data. Noticeably, a wide and intense stretching peak of the C-O guaiacyl ring appeared at 1214 cm−1, and various aromatic vibration peaks were determined at 1426, 1461, 1515, and 1600 cm−1 [41-43].
A considerably intense stretching peak for the carbonyl and carboxylic groups in S-lignin was observed at 1700 cm−1. Additionally, S-lignin showed a large amount of C-H stretching in methyl and methylene groups at 2850 and 2930 cm−1 (Figure 1f), and the O-H stretching peak at 3400 cm−1 indicates a higher degree of hydrogen bonding in S-lignin.
In this study, the molecular weight (Mw) difference between S-lignin and I-lignin was determined indirectly through their intrinsic viscosity in DMSO; the detailed method was reported elsewhere [33]. The measured rheological data and their analysis showed that I-lignin has the higher molecular weight. Figure 2a,b presents the measured flow characteristics of the lignin-DMSO solutions at different solid weight contents, from 0.8 to 5.0 wt.%, in the dilute regime. The I-lignin solutions exhibited distinguishably higher shear stress. The measured stress and shear rate data were fitted with the Newtonian model to obtain the viscosity (η) of the solutions. Figure 2c presents the specific viscosity (ηsp), computed using our previous method [33], as a function of lignin content (C, g/mL). A linear fit of the data revealed a slope of ca. 1, indicating no molecular interactions [44]; these solutions were truly in the dilute regime. The specific viscosity was then used to determine the intrinsic viscosity of I-lignin and S-lignin (Figure 2d): the extrapolated value of ηsp/C as C approaches 0 gives intrinsic viscosities of 19.1 and 12.7 for I-lignin and S-lignin, respectively. The molecular weight is proportional to the intrinsic viscosity of the materials [44].
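As a minimal illustration of this extrapolation (a sketch with made-up data points, not the measured values reported above), the reduced viscosity ηsp/C can be fit linearly against C, and the intercept at C = 0 read off as the intrinsic viscosity:

```python
import numpy as np

# Hypothetical dilute-solution data (NOT the measured values):
# lignin concentration C in g/mL and specific viscosity eta_sp.
C = np.array([0.008, 0.016, 0.030, 0.050])    # g/mL
eta_sp = np.array([0.16, 0.34, 0.68, 1.25])   # dimensionless

# Reduced viscosity eta_sp / C; in the dilute regime it is
# approximately linear in C (Huggins-type extrapolation).
eta_red = eta_sp / C

# Linear fit eta_red = [eta] + k * C; np.polyfit returns the
# slope first, then the intercept, which is the intrinsic viscosity.
k, intrinsic = np.polyfit(C, eta_red, 1)
print(f"intrinsic viscosity [eta] ~ {intrinsic:.1f} mL/g")
```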
Thermal and rheological analyses of both S-lignin and I-lignin were carried out to further understand the effects of their structures and molecular weights. I-lignin demonstrated much better thermal stability than S-lignin (Figure 3a). The first and second degradation peaks of S-lignin occurred at ca. 139 °C and 258 °C, whereas I-lignin exhibited the corresponding peaks at 181 °C and 291 °C, approximately 30-40 °C higher. The degradation of lignin occurs over a broad range of temperatures; these degradation peaks arise mostly from the release of remaining moisture, cleaved oxygenated functional groups, and small-molecular-weight fractions [45]. Interestingly, the first and second derivative weight-loss curves (dotted lines) of S-lignin showed sharp, well-defined peaks. This behavior could be due to S-lignin's larger low-molecular-weight fraction and higher content of oxygenated functional groups, as demonstrated in the NMR data discussed above. For example, 5% weight loss (dashed line, Figure 3a) of S-lignin and I-lignin occurred at 229 °C and 253 °C, respectively. The high concentration of aliphatic moieties and low aromatic content in S-lignin also contributed to its low thermal stability. Additionally, I-lignin exhibited a much higher thermal transition temperature than S-lignin. The glass transition temperature (Tg) of S-lignin is ca. 100 °C (Figure 3b), whereas I-lignin did not show any thermal transition up to 200 °C; only an upturn of the heat flow was observed above 200 °C, where the glass transition and decomposition of I-lignin could occur simultaneously and were not distinguishable. Similarly, the decomposition of S-lignin was also observed above 150 °C. The DSC data were consistent with the TGA results. We anticipate that the higher thermal transition temperature of I-lignin most likely comes from its high molecular rigidity (higher molecular weight and highly aromatic structure).
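The 5% weight-loss temperatures quoted above are typically read off the TGA curve by interpolation; a minimal sketch with placeholder data (not the measured curves) might look like this:

```python
import numpy as np

# Placeholder TGA trace (NOT the measured data): temperature in
# degrees C and residual weight in percent of the dry mass.
temperature = np.array([100, 150, 200, 225, 250, 300, 400])   # deg C
weight_pct = np.array([100, 99.5, 97.8, 95.6, 92.0, 80.0, 55.0])

# T at 5% weight loss = temperature where the weight first crosses
# 95%. np.interp needs an increasing x-axis, so interpolate T as a
# function of the cumulative weight loss (100 - weight).
loss = 100.0 - weight_pct
t5 = np.interp(5.0, loss, temperature)
print(f"T(5% weight loss) ~ {t5:.0f} deg C")
```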
To better understand the materials' thermal characteristics, the rheological properties of softened S-lignin and I-lignin were examined (Figure 3c-e). The two fractionated lignin samples revealed contrasting rheological behavior. Specifically, the storage modulus (G') of I-lignin as a function of oscillatory strain (collected at 230 °C) was more than 3 orders of magnitude above the measured G' of S-lignin (collected at 150 °C). Note that the rheological properties of I-lignin were collected at a much higher temperature than those of S-lignin because of I-lignin's high thermal rigidity; the I-lignin measurements were conducted at 230 °C to avoid torque overload. The phase angle of S-lignin was over 80°, indicating liquid-like behavior (Figure 3c). A similar trend was observed in the frequency-dependent modulus (Figure 3d). I-lignin exhibited extremely high melt-flow resistance (Figure 3e); its complex viscosity was ca. 2-3 orders of magnitude higher than that of S-lignin.
In this study, S-lignin and I-lignin were separately melt-reacted with NBR at 50 wt.% loading. The different molecular structures and molecular weights resulted in selective thermal and mechanical characteristics of the lignin/NBR mixes. Strong molecular interactions of S-lignin with NBR led to considerable thermal stability enhancement: the NBR-S-lignin composite showed thermal stability similar to that of the NBR-I-lignin composite, with first and second degradation peak temperatures differing by only ca. 10 °C (Figure 4a), whereas the pristine lignins (S-lignin and I-lignin) showed a degradation peak temperature difference of ca. 40 °C (Figure 3a). Strong intermolecular interactions of S-lignin with NBR were also evident in the DSC results (Figure 4b): S-lignin significantly increased the Tg of the NBR matrix, from −16.5 °C (neat NBR) to 2.6 °C in the compound (a shift of ca. 19 °C), whereas the addition of I-lignin raised the matrix Tg by only ca. 4 °C. The lower concentration of oxygen-containing functional groups, thermally cleavable linkages, and hydroxyl content in I-lignin (compared with S-lignin) could cause poor interfacial interactions and mild melt-reactivity between NBR and I-lignin [22,29]. As a result, dynamic mechanical analysis showed a high modulus (E') for NBR-S-lignin around ambient temperature (zone I, see Figure 4c,d). In contrast, the NBR-I-lignin mix showed high E' in zones II and III (Figure 4c), mostly due to I-lignin segments retaining high molecular rigidity at extremely low and at elevated temperatures.
These distinct dynamic mechanical characteristics of the NBR-S-lignin and NBR-I-lignin mixes enable different shape memory effects at ambient and even at elevated temperatures.
The large phase separation and poor interfacial molecular interactions of I-lignin with NBR are shown in Figure 5. A variety of large domains, 5 to 10 µm across, were observed in NBR-I-lignin (Figure 5a(a1)); yellow arrows indicate the very sharp interface of the phase-separated lignin domains in the NBR matrix. The SEM images also showed some small voids and defects within the samples, possibly due to lignin degradation during sample processing. The very large micro-phase separation of I-lignin is consistent with the USANS data in Figure 5c: after desmearing, the NBR-I-lignin sample revealed a straight line with a power law of ~−4, indicating a sharp interface between the phase-separated domains and the surrounding matrix [46]. In contrast, NBR-S-lignin showed very good dispersion of small phase-separated lignin domains within the NBR matrix, with domain sizes of only 1-2 µm (Figure 5b(b2)). The interface between the S-lignin domains and the NBR matrix was not sharply defined, suggesting better interfacial interaction within the composite. Interestingly, the USANS data agreed very well with the SEM results (Figure 5c): a wide shoulder indicating a characteristic length of ca. 1.5 µm was observed.
To demonstrate the shape programmability and shape memory characteristics of NBR-I-lignin and NBR-S-lignin, selected examples programmed under different conditions are shown in Figure 6. The samples were programmed at 70 °C or 150 °C and fixed at ambient temperature, ca. 20 °C. The digital images in Figure 6a-d denoted 1 and 2 show the initial and programmed shapes of the materials. Figure 6c is an example of NBR-S-lignin programmed by simple stretching, indicating that the sample did not yield even at high temperature. At ambient temperature the shape fixity was excellent, and the programmed shape was maintained very well. To recover the initial shape, heat was applied; for example, Figure 6d (images 3-6) shows the shape recovery at different times after heating the temporarily programmed shape at 150 °C for only 60 s.
To quantitatively evaluate the shape memory effects of the materials, a detailed study was carried out and is presented in Figure 7 (see the experimental section for more details). Figure 7 shows: (a,b) examples of the programming process, where step (2) marks increasing stress during the fixing process, indicating low stress fixity and stability of the studied material; (c) strain recovery after multiple deformations of NBR-S-lignin (green profile) and NBR-I-lignin (yellow profile) at 70 °C, where ∆ε is the strain loss during the fixing process; (d) strain recovery after multiple deformations of NBR-S-lignin (green) and NBR-I-lignin (yellow) at 200 °C; and (e,f) stress profiles of the two studied samples during the programming process at 150 and 200 °C, respectively (∆σ is the stress increase during fixing). NBR-S-lignin is shown in green, NBR-I-lignin in yellow.
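For reference, the strain fixity and recovery implicit in these cycles can be computed directly from the recorded strains; a minimal sketch with illustrative numbers (not the measured data) is:

```python
# Shape-memory cycle metrics from recorded strains
# (illustrative values only, not the measured data).
def fixity_ratio(strain_applied: float, strain_fixed: float) -> float:
    """Rf = fraction of the programmed strain retained after unloading."""
    return strain_fixed / strain_applied

def recovery_ratio(strain_fixed: float, strain_residual: float) -> float:
    """Rr = fraction of the fixed strain recovered on reheating."""
    return (strain_fixed - strain_residual) / strain_fixed

# Example: 50% applied strain, 45% retained after fixing (~5% loss,
# comparable to NBR-S-lignin), 2% residual strain after reheating.
rf = fixity_ratio(50.0, 45.0)
rr = recovery_ratio(45.0, 2.0)
print(f"Rf = {rf:.0%}, Rr = {rr:.0%}")
```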
In this study, NBR-S-lignin demonstrated excellent shape fixity at ambient temperature when programmed at a low temperature, 70 °C. In contrast, NBR-I-lignin revealed excellent shape recovery and stress stability at an elevated temperature, 150 °C, or even up to 200 °C. The data in Figure 7a,b are examples of a programming process showing the shape fixity and stress fixity of NBR-I-lignin and NBR-S-lignin, respectively. In Figure 7a, step 1 is the deformation process: the sample was stretched to 50% strain at 70 °C. After stretching, the sample was held at the selected strain and the temperature was decreased to 20 °C to fix the temporary shape (step 2); the applied stress was then removed (step 3). Note that the strain dropped in step 3 after removing the applied stress, owing to the low fixity of NBR-I-lignin. Steps 4 and 5 are the ramp and isothermal steps used to recover the material's shape (see the experimental section). An example of stress stability during the programming process is given in Figure 7b. Step (1) is the stress profile applied to the NBR-S-lignin sample during the deforming and fixing processes. During the isothermal step (2), however, the material exhibited very low stress stability: the applied stress increased significantly, from ca. 0.2 MPa to 0.4 MPa. The low stress stability of this material is most likely due to shrinking of the molecular network in the NBR matrix and the low molecular rigidity of S-lignin. Nevertheless, NBR-S-lignin demonstrated very good shape recovery and good strain fixity at ambient temperature: after fixing, the sample still maintained the programmed strain with only ~5% strain loss (see ∆ε1, Figure 7c). NBR-I-lignin had poor strain fixity at ambient temperature; after fixing, its strain loss was more than 20% (see ∆ε2, Figure 7c). The low strain fixity of this composite is mostly due to the low glass transition temperature of the material and the poor molecular interaction between I-lignin and the NBR matrix: the polymeric chains relaxed during fixing, and the strain loss resulted from the elastic recovery of the NBR segments. However, the presence of rigid I-lignin resulted in excellent shape recovery and stress fixity at elevated temperatures, such as 150 and 200 °C, as demonstrated in Figure 7d-f. We anticipate that the interactions between I-lignin and NBR are more favorable at high temperatures.
Conclusions
We have demonstrated that, by selecting appropriate lignin through solvent fractionation, composites of NBR with lignin can be tailored with distinct thermal and mechanical characteristics under selected conditions for preferred shape memory effects. The structural characteristics of the fractionated lignins were examined and correlated with their thermomechanical properties. The acetone-soluble lignin fraction had abundant aliphatic moieties and little branched aromatic structure, along with plentiful functional groups, whereas the insoluble lignin fraction had a higher molecular weight than the acetone-soluble fraction. Both fractionated lignin samples exhibited distinctive properties that can be utilized for specific shape memory applications.
The initial results obtained from acetone fractionation of organosolv hardwood lignin are very promising. Further investigation of selectively tailored lignin structures, using different solvents combined with multiple-stage fractionation processes, will open a new avenue of lignin valorization for applications in stimuli-responsive materials.
6,855.8
2020-04-01T00:00:00.000
[ "Materials Science" ]
Innovative Practice of Music Education in the Universities in the Context of 5G Network. Music education is among the most significant subjects covered in providing high-quality education in Chinese universities and colleges. It is critical to providing high-quality education to students and contributes significantly to the development of students' creative motivation, inventive capacity, and personality. Music education produces excellent outcomes in music instruction, fosters students' original thinking and comprehensive abilities, and therefore supports the overall development of high-quality education. With the development of the educational system, it is becoming more vital to teach students a high level of musical literacy. With the advent of 5G mobile communication, 5G will become one of the core technologies in Chinese music education, providing an innovative framework for it. In this study, a novel music education model is proposed using the GTZAN dataset, which comprises 100 distinct specimens for each of ten genres of music. The dataset is normalized to prepare it for further processing, and the characteristics of each song are retrieved using spectrum-based feature extraction (SBF). Bidirectional recurrent neural networks (Bi-RNN) are used for classification. An improved TCP congestion control algorithm (ITCCA) is proposed for efficient data transmission across 5G networks, and the honey bee optimization algorithm is employed to optimize the performance of the transmission protocol. The performance of the proposed model is examined and contrasted with that of currently used approaches; the proposed model shows high performance in terms of throughput, average delay, and packet delivery ratio. The model has the potential to successfully integrate 5G technologies and music education and to provide students with rich and diversified teaching materials and flexible instructional formats.
Introduction
A significant push has been given to contemporary teacher learning by recent scientific and technical discoveries, particularly the rapid growth of computer technology as represented by "music education + 5G." Modern curriculum reform has raised the bar for teachers' ability to teach, resulting in increased expectations for them [1]. Traditionally, education has been delivered via schools, and individuals have had only a few options for learning. However, as "music education + 5G" continues to develop, more channels and methods of disseminating knowledge are becoming available, and the methods of education have altered significantly. In recent years, the continual growth and strengthening of "music education + 5G" has steadily eroded schools' stronghold on information distribution, facilitating the shift from a limited to an accessible educational system. Once "music education + 5G" is developed, people will receive education both on campus and on the 5G platform [2]. As a result, the positive growth trend of actual learning, complemented and expanded by online education, will be realized. Music education and 5G are currently being developed together, and the overall trend is positive. With the ongoing growth of "music education + 5G," new types of music training, including online catechism courses and mobile applications, have helped provide the groundwork for future growth and research in music education [3].
It is expected that 5G technology, also known as next-generation mobile cellular technology, will significantly impact our everyday lives in the near future. The introduction of 5G will bring significant improvements over present network technologies in terms of increased capacity, more dependable service, and a greater density of devices [4]. Because of these unique qualities, 5G can affect or even change a wide variety of human activities, from business to entertainment, by improving presently accessible services and introducing entirely new ones. Therefore, it is believed that 5G will influence educational experiences as well. In music education, large bandwidth is critical for exchanging high-quality multimedia streams, and it is essential to keep the latency of bidirectional connections to a minimum, ideally in the millisecond range [5]. During the Fifth Mobile 5G Expo in China, the concept of "5G + music education" was introduced to integrate 5G, as a resource, with all walks of life, boldly innovating and promoting the fast growth of China's economy and benefiting people via this platform. "5G Plus" is a hub that unites people from all areas of life via network information and communication technology, establishing a new field and allowing new developments in that sector [6]. With the arrival of the 5G era, 5G mobile communication will play a significant role in future education, and 5G will become one of the most important educational technologies. 5G can connect music teachers with their students while also enabling new music technologies such as virtual reality, augmented reality, and improved streaming. Students may take part in events even if they cannot attend in person, and they can immerse themselves more fully in an artist's vision through multiple camera angles and virtual environments created to match the music.
In this study, a new music education model is presented using the GTZAN dataset. The dataset is preprocessed, and the discriminant features of the songs are retrieved using the spectrum-based feature extraction method. Bidirectional recurrent neural networks are used for classification, and an improved TCP congestion control algorithm is employed for fast data transmission across 5G networks. The honey bee optimization algorithm is utilized to optimize the performance of the transmission protocol. The rest of the manuscript is organized as follows: Section 2 reviews related work in the field of 5G and music education; Section 3 presents a detailed description of the proposed Bi-RNN algorithm and the honey bee optimization technique; Section 4 covers the performance analysis; and the conclusion is given in Section 5.
Related Work
The Internet is used in all fields of education, learning, and life, and many new music teaching models have emerged through Internet platforms and information and communication technologies. Many scholars have presented advanced models for the development of music education. Barate et al. [7] employed activities based on virtual reality and augmented reality technology to outline the major features of 5G and proposed an application in music education as an example.
The authors in [8] used multimedia networked communications to connect professors and students and to deliver course contents, recommending that the development of modern distance education is essential for building a lifelong learning system for individuals in the information age. Law and Hou [9] examined the legitimacy of values in music education for preparing students to enter China's new "knowledge society," illustrating how values education connects to the teaching of musical and nonmusical meanings in the twin contexts of nationalism and globalization, together with some of the difficulties that values education encounters in school music classes. Zhang [10] worked on the musical and cultural traditions of Chinese ethnic minorities in government-designed national K1-9 music textbooks; to realize speech and music recognition, that work applied the distinct principles of the magnitude of new information in the signal sequences of speech and music, as well as their range of variation, to accomplish the intended effect. Lin [11] investigated the importance of assessing music teaching ability, summarized the use of neural network and deep learning technologies in the evaluation of music teaching ability, created an assessment model using a compensating fuzzy neural network technique, and tested its correctness. In [12], a power-iteration-based technique of system resource allocation for 5G music education is proposed: the throughput of the offloading system is formulated as an optimization problem, with the optimal allocation of transmit power achieved via an iterative method, and a heterogeneous network based on the network edge is constructed to compensate for the edge servers' efficiency loss and resource consumption and to find the best technique for ensuring a Nash equilibrium point. Sun [13] used multidimensional connection feature fusion and clustering algorithms to present an optimal fusion approach for an "offline and online" mixed music education model in the context of 5G. The author in [14] provided a historical overview of popular music in China throughout the twentieth century, both in the community and in school music, and examined how, since the beginning of 2000, reforms in music education have included popular songs in the curriculum; the developed system is divided into three sections: the multiple data mechanism, the server manager module, and the database administration module [14]. In [15], a genetic algorithm is used to integrate the data after synchronized multimodal vocal education information processing of the multimodal voice learning data. By combining big data with customized recommendation algorithms based on collaborative filtering (CF), a hybrid recommendation algorithm is developed; acquiring the user assessment matrix requires a large amount of data. Li [16] used the Pearson correlation coefficient to determine user similarity and constructed the nearest-neighbor set, the k-nearest set of target users, and the subscriber recommendation set. As part of this process, a questionnaire was created to gather each user's actual assessment ratings of the audio contents.
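As a rough sketch of the Pearson-similarity step attributed to [16] (the variable names and toy ratings below are our own illustration, not code from that work):

```python
import numpy as np

def pearson_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Pearson correlation between two users' rating vectors
    (here we assume fully observed vectors for simplicity)."""
    du, dv = u - u.mean(), v - v.mean()
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    return float((du * dv).sum() / denom) if denom else 0.0

# Toy rating matrix: rows = users, columns = audio items.
ratings = np.array([[5, 3, 4, 4],
                    [3, 1, 2, 3],
                    [4, 3, 4, 3]])

# k-nearest-neighbor set of user 0 by Pearson similarity (k = 2).
target = 0
sims = [(other, pearson_similarity(ratings[target], ratings[other]))
        for other in range(len(ratings)) if other != target]
neighbors = sorted(sims, key=lambda s: s[1], reverse=True)[:2]
print(neighbors)
```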
Cao [1] examined the possibility of using 5G for music education with big data in light of rapid scientific and technological improvement, aiming to guide music teachers toward a spontaneous and conscious awareness of new media, to fully apply new science and technology in the information society to future music classroom teaching, and to inspect the mode, method, trend, characteristics, advantages, and disadvantages of using 5G for music education. Although many researchers and analysts have investigated the role of 5G in the development of music education, the application of 5G communication technologies in the music field is still in its infancy and requires in-depth investigation and improvement. This study presents a 5G music education model that uses machine learning to classify musical sounds, together with an improved TCP congestion control algorithm for fast data transmission across 5G networks.
Proposed Work
The introduction of the 5G network in education will bring a significant change to music education [17]. Compared with existing network technologies, 5G will provide considerable gains in bandwidth, service dependability, and device density. This study focuses on music education, a field in which a large network capacity is vital for sharing high-quality multimedia streams and two-way communication latencies should be kept within the millisecond range. A simplified illustration of the proposed technique is shown in Figure 1.
Dataset. In this study, the GTZAN dataset is used, since it has appeared in much earlier research and enables a more accurate assessment of the model. The GTZAN dataset contains 100 distinct specimens for each of ten genres of music [18]. Since the input data have not been processed, they may contain duplicated sequences and incomplete records; in-depth cleaning and high-level processing were performed to remove recurring and duplicate occurrences as well as missing data. Because of the large number of features in this database, feature extraction approaches are required to exclude characteristics that are not important.
During the preprocessing stage, the dataset is normalized. In the first step, a z-score is generated as z = (x − ω)/α, where ω is the mean and α is the standard deviation of the samples. The sampling errors are required to be independent of one another, and a moment-scaling deviation and a coefficient of variance are estimated from the expected value of the random variable. Feature scaling then maps all parameter values into the range 0 to 1, completing the normalization; the range and irregularity of the input may remain unchanged after normalization. This step is performed to reduce delay, and the normalized data are then used as input to the following phases of the procedure.
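A minimal sketch of the two normalization steps just described, a per-feature z-score followed by 0-1 feature scaling (the array names and toy values are illustrative, not from the paper):

```python
import numpy as np

def zscore(features: np.ndarray) -> np.ndarray:
    """z = (x - mean) / std, computed per feature column."""
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    return (features - mean) / np.where(std == 0, 1.0, std)

def minmax(features: np.ndarray) -> np.ndarray:
    """Scale every feature column into the [0, 1] range."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (features - lo) / span

# Toy feature matrix: rows = audio clips, columns = raw features.
x = np.array([[120.0, 0.31], [98.0, 0.55], [143.0, 0.42]])
print(minmax(zscore(x)))
```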
Spectrum-Based Feature Extraction
Spectral Centroid. The spectral centroid (also known as the centroid of the frequency spectrum) is a statistic used in digital signal processing to describe a frequency band, and it has a close link with the intensity of the sound source [19]. Because the spectral centroid represents the brightness of a sound, it is widely used in digital audio and musical signal analysis as a tool for evaluating a piece of music's timbre. Mathematically, it can be defined as
centroid = Σ_a a · |B_s[a]| / Σ_a |B_s[a]|,
where a indexes the frequency bins (the location of the "centroid" of the frequency range) and B_s[a] denotes the Fourier transform magnitude.
Spectral Flux. Spectral flux is a broad term that refers to the pace at which the signal spectrum changes [19]. It is determined by comparing the spectrum of the current frame with that of the previous frame, and it is often computed, more exactly, as the 2-norm between two normalized spectra. Because the spectra are normalized, the spectral flux computed in this way does not depend on the overall signal level; to compare two signals, only their normalized amplitudes are needed. Spectral flux is commonly employed to identify the timbre of an audio source or to decide whether a frame is voiced.
Spectral Contrast. Spectral contrast is a property used to categorize different types of music [20]. It is defined as the difference in decibels (dB) between the ridges (peaks) and valleys of a frequency range, which can capture the relative spectral features of different types of music and sound.
Cepstral Coefficients at Mel-Scale Frequencies. Because the cochlea of the ear has filtering qualities, it maps different frequencies to different places on the basilar membrane, allowing accurate frequency mapping; as a result, the cochlea is often described as a filter bank [21]. Through psychoacoustic research, psychologists obtained a set of filter banks comparable to the cochlear effect, which they named the Mel-frequency filter bank. Because the pitch experienced by the human ear is not linearly proportional to the frequency of the sound, researchers developed the notion of Mel frequency to account for this; the Mel frequency scale follows the acoustic qualities of the human ear better than a linear frequency scale. The relationship between the Mel frequency μ_mel and the linear frequency u is given by the standard conversion μ_mel = 2595 log10(1 + u/700). To compute the coefficients, the audio signal is first separated into frames, pre-emphasized, and windowed; a short-time Fourier transform is then conducted to obtain the frequency spectrum of the signal. Next, an L-channel Mel filter bank is placed on the Mel-frequency axis, with each triangular Mel filter equally spaced in Mel frequency; the number of coefficients N is usually between 12 and 16, up to the most significant frequency of the signal. The relationship between the frequencies of neighboring triangular filters is specified by the center frequency d(l), the upper frequency limit h(l), and the lower frequency limit o(l). The outputs of the Mel filter bank are obtained by weighting the power spectrum with each triangular filter's frequency response; taking the natural logarithm of each filter's output and then applying the discrete cosine transform yields the MFCCs.
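A compact sketch of these descriptors using librosa (assuming librosa is available; the feature functions below are from its documented API, and the audio file path is a placeholder):

```python
import numpy as np
import librosa  # assumed available; GTZAN clips are ~30 s audio files

# Placeholder path, not a file shipped with this paper.
y, sr = librosa.load("blues.00000.wav", sr=22050)

# Spectral centroid, contrast, and MFCCs from librosa's feature API.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Spectral flux: 2-norm of the frame-to-frame change of the
# normalized magnitude spectrum, as described above.
S = np.abs(librosa.stft(y))
S = S / (S.sum(axis=0, keepdims=True) + 1e-10)  # normalize each frame
flux = np.linalg.norm(np.diff(S, axis=1), axis=0)

print(centroid.shape, contrast.shape, mfcc.shape, flux.shape)
```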
Classification Using the Bi-RNN (Bidirectional Recurrent Neural Network). Over time, an RNN can detect the inherent structure hidden in a sequence, and an audio signal can be thought of as a time sequence in its own right, so the temporal dependency of the audio signal can be captured by using an RNN to process the music. The sound spectrum likewise extends along the temporal dimension: because the feature map after a one-dimensional convolution can be treated as a temporal feature sequence, the RNN can also be used to analyze sound spectrum information in this way [18]. This research employs a Bi-RNN to describe the music sequence, in order to represent the multidirectional dependency in the time dimension and come closer to the brain's perception of music. The Bi-RNN takes into account both previous and subsequent inputs, which aids data modeling. The architecture of the Bi-RNN is shown in Figure 2. In the forward (future-estimating) pass, the hidden state z→_a is computed from z→_{a−1} and the current input; in the backward pass, z←_a is computed from z←_{a+1} and the current input; together they describe the hidden layer's current condition at step a. To obtain the final network output, the forward and backward states of each step are combined.
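A minimal sketch of such a classifier in PyTorch (our own illustration of a bidirectional GRU over MFCC frames, not the authors' exact network):

```python
import torch
import torch.nn as nn

class BiRNNGenreClassifier(nn.Module):
    """Bidirectional GRU over a sequence of MFCC frames; the final
    forward and backward hidden states are concatenated, mirroring
    the combine step described above."""
    def __init__(self, n_features=13, hidden=64, n_genres=10):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_genres)

    def forward(self, x):           # x: (batch, time, n_features)
        _, h = self.rnn(x)          # h: (2, batch, hidden)
        combined = torch.cat([h[0], h[1]], dim=1)
        return self.head(combined)  # genre logits

model = BiRNNGenreClassifier()
dummy = torch.randn(4, 130, 13)     # 4 clips, 130 MFCC frames each
print(model(dummy).shape)           # torch.Size([4, 10])
```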
Improved TCP Congestion Control Algorithm. A protocol designed for on-demand access to an extensive music collection would be out of place for live broadcasts. If a client wants to upload a track, it needs the whole track on its machine, which simplifies matters by removing the need to indicate which portions of a track a client owns; the drawbacks are minimized by the small size of the tracks. Instead of UDP, the most common transport protocol for streaming applications, Spotify uses TCP. First, a dependable transport protocol makes the protocol design easier to implement. Second, TCP suits the network because its congestion management behaves fairly, and stateful firewalls benefit from explicit connection signaling. Finally, since streamed content is shared through a peer-assisted network, resending missing packets is beneficial to the application [22]. A single TCP connection is utilized between two hosts, and messages are multiplexed over it via the protocol specification. A client maintains a TCP connection to a Spotify server while it is active. Application-layer messages are buffered and sorted in priority order before being delivered to the operating system's TCP buffers; messages required for interactive browsing, for example, are prioritized above bulk traffic.
Honey Bee Optimization Algorithm. For both functional and combinatorial optimization, the honey bee method combines random search and neighborhood search. The fundamental goal of this method, as illustrated in Figure 3, is to identify an optimal solution by imitating honey bees' natural foraging activity. In general, the method requires scout bees (n), selected sites among the visited sites (m), a stopping criterion, the best sites among the sampled locations (e), an initial patch size (covering the network's size and its surroundings), bees for the selected sites, and bees for the remaining sites. The fitness of the bees is assessed after they are randomly placed in an area. The honeybees with the best fitness levels are chosen, and the bees that visit those places are selected for the neighborhood search. Bees are then recruited and their fitness is assessed at the desired locations, and the fittest bee from each patch is chosen. The fitness of the remaining bees is evaluated after they are allocated to a search area at random, and these stages are repeated until the halting condition is fulfilled. The bees method is utilized in various applications, including clustering techniques, neural network pattern matching, and construction. In sensor networks, nodes near the sink must transfer both their own data and data received from nodes farther away, depleting the energy of the nodes near the sink; the resulting network isolation issue, also known as the HOT SPOT problem, is caused by the energy depletion of these surrounding nodes. Using sink mobility significantly alleviates this issue, since the energy consumption of neighboring nodes is balanced. In this study, these biologically inspired methods are also utilized to improve the packet delivery ratio, throughput, and delay.
Performance Analysis. In this section, we analyze the performance of the proposed method and compare it with existing methods. Figure 4(a)-4(d) shows that the real-data and simulated-data curves agree, indicating a good model fit. The amount of negative emotion decreases, whereas the neutral and positive mood indexes increase, showing that after a year of practice and investigation the population is less suspicious and concerned than when "Music education + 5G" was first introduced. 5G users began to think more critically about the issues that arose throughout the "Music education + 5G" process, expressed a favorable attitude toward it, and described it more accurately. Figure 5 demonstrates that pop music is more popular among college students because it is closer to their lives, though plenty of students also like classical, instrumental, and traditional music. The majority of students have a rudimentary understanding of music, can read pentatonic and short scores, and learn music and related basic knowledge through many channels. Figure 6 shows that 71.14% of students feel that music electives are required, while just 5.67% say they are not, and the remainder are undecided. This is because the majority of students feel that music education can help them develop emotion and control their mood (55.20% and 39.28%, respectively), and that it can aid the development of intellect, provide entertainment, and improve life (35.16%). Students fully comprehend and agree with the uses and benefits of music, yet many have qualms about the various music learning activities at school. On the other hand, as shown in Figure 7, universities are increasingly focusing on 5G music education and introducing new music classes and activities; however, due to a lack of publicity, the 5G music teaching method is not well known and educators' overall participation is low, so the majority of students do not participate in school music activities.
Throughput. The throughput is defined as the number of data packets received by the destination in a given time. Figure 8 shows the throughput comparison between the present and proposed methods. We compared the throughput of the proposed system with that of TCP [19], SACK [20], TCP Vegas [21], and SCTP [23].
Music Performance. In this section, we analyze the performance of the proposed method and compare it with existing methods. Figures 4(a)-4(d) show that the real-data and simulated-data curves agree, indicating a good model fit. The amount of negative emotion decreases, whereas the neutral and positive mood indexes increase, showing that after a year of practice and investigation the population is less suspicious and concerned than when "Music education + 5G" was initially introduced. 5G users started to think more critically about the issues that arose throughout the "Music education + 5G" process, and they expressed a favorable attitude toward "Music education + 5G" and described it more accurately. Figure 5 demonstrates that pop music is more popular among college students because it is closer to their lives; however, plenty of students like classical, instrumental, and traditional music. The majority of pupils have a rudimentary understanding of music, can read pentatonic and short scores, and learn music and associated basic information via many methods.

Figure 3: Honey bee optimization.

Figure 6 shows that 71.14% of students feel that music electives are required, while just 5.67% say they are not, and the remainder are undecided. This is because the majority of pupils feel that music education may help them develop emotion and control their mood (55.20% and 39.28%, respectively), as well as the fact that music education may aid in the development of intellect, provide entertainment, and improve life (35.16%). Students fully comprehend and agree with the uses and benefits of music, yet many have qualms about music's various learning activities at school. On the one hand, as shown in Figure 7, universities are increasingly focusing on 5G music education and introducing new music classes and activities; however, due to a lack of publicity, the 5G music teaching method is not well known, and educators' overall participation is low, resulting in the majority of students not participating in school music activities.

Throughput. The throughput is defined as the number of data packets received by the destination in a given time. Figure 8 shows the throughput comparison between the existing and proposed methods. We compared the throughput of the proposed system with that of TCP [19], SACK [20], TCP Vegas [21], and SCTP [23]. The graph clearly shows that the recommended strategy achieves a higher throughput of 700 for 80 nodes than the traditional techniques.

Average Delay. This is the amount of time it takes a packet to travel from its origin to its destination. Figure 9 depicts a side-by-side comparison of the average delay for the existing and proposed techniques. We compared the average delay of the proposed system with that of TCP [19], SACK [20], TCP Vegas [21], and SCTP [23]. The graph shows that the suggested method transmits data with the least latency compared to conventional methods.

Average Packet Delivery Ratio. This is calculated by dividing the number of received data packets by the number of transmitted data packets and is used to determine routing effectiveness. The comparison of packet delivery ratios for the proposed and existing methods is shown in Figure 10. The average packet delivery ratio of the proposed system is compared with that of TCP [19], SACK [20], TCP Vegas [21], and SCTP [23]. It is evident that the proposed technique has a higher average packet delivery ratio than the existing methods, which confirms the superiority of the proposed method.

Conclusion. The Internet is used in all aspects of our lives, including learning, education, and entertainment. Through the Internet platform and information and communication technology, many new models of education have evolved. With the arrival of high-speed Internet and 5G, all disciplines will undergo an unprecedented shift. This study employed the GTZAN dataset, which contains ten genres of music with 100 individual samples per genre, to propose a new music education model for the advancement of music education. The dataset was normalized, and each song's features were extracted using spectrum-based feature extraction. For classification, a bidirectional recurrent neural network was used. For effective data transfer over 5G networks, an improved TCP congestion control algorithm (ITCCA) was employed, and the honey bee optimization algorithm was used to improve the transmission protocol's performance. The proposed model showed high performance in terms of throughput, average delay, and packet delivery ratio compared to the existing models. The model can successfully combine 5G technology with music education, as well as provide students with a wide range of teaching materials.

Data Availability. The data underlying the results presented in the study are available within the manuscript.
5,821.2
2022-06-14T00:00:00.000
[ "Education", "Computer Science" ]
Determination of the Influence of the Oil Production Factor on the Russian Economy. Macroeconomic indicators characterizing the development of Russia have been analyzed during the study. The position of the Russian Federation in the world arena in terms of oil production and export has also been determined. The article describes the results of statistical analysis and forecasting of the oil market of the Russian Federation. An analysis of the dependence of monetary aggregates on the factors of oil production, export, and the price of oil has been conducted using a three-factor model. A close relationship between the oil factors and the monetary aggregates has been revealed. The obtained results of the study are of interest for forecasting the economic state of the country and monitoring oil production volumes.

Introduction. At the present time of economic instability and political difficulties in the world, it is necessary to study the most important macroeconomic indicators of the development of our state and of some developed foreign countries. The main purpose of such a study is to identify effective methods of overcoming the crisis. The most important macroeconomic indicators used for comparison are oil production and the money supply in the state. The circulation of money in any state develops in accordance with its historical economic internal and external factors. Internal factors include monetary policy at each specific stage of development, as well as the development of the market for goods and services. External factors include the participation of states in military activities, the extent of monetary relations with other countries, and the level of development of trade relations. The conditions of the Russian economy, both external and internal, are subject to constant change and are extremely unstable.

Relevance. The situation in the national economy of any state is determined by a set of specific macroeconomic parameters. Each of these parameters characterizes the economic situation of the country in its own way. The main macroeconomic indicators of a country's socio-economic development include gross domestic product, gross national product, national income, the inflation index, the unemployment rate, external and domestic public debt, foreign investment, and others. The study of the main macroeconomic indicators makes it possible to evaluate the main processes that occur in the socio-economic life of any country. Understanding the obtained macroeconomic results, which reflect the trend and dynamics of the functioning of the economic system, contributes to more efficient and reasonable decision-making and improves the planning and forecasting of economic development. The purpose of the study is to identify the main possible conditions for the dependence of the Russian economy on its natural wealth factors, i.e. the dependence of the availability of the money supply on oil production.

Literature Review. A number of studies by Russian and foreign scientists and specialists are devoted to substantiating optimal methods for monitoring the development of oil fields, for example, the works of M.M. Ivanova, L.F. Dementyev, I.P. Cholovsky, and I.T. Mishchenko. The ways of using graphical methods (construction and analysis of graphs and development maps) are described in detail. Hydrodynamic research is examined in detail in the works of D.
However, the known approaches have a number of disadvantages, which include the narrow focus of the tasks being solved and the lack of consideration of the complex multi-factor influence of geological and technological indicators on the implementation of economic processes.

Problem Statement. Analysis of scientific papers has shown that the main problems in substantiating the interdependence of the geological and economic indicators of the state have not been solved. In this regard, the use of multivariable statistical modelling is relevant, as it makes it possible to consider the influence of the oil production indicator on the money supply in the state.

Aim, Objectives, and Hypothesis of the Study. The purpose of the study is to identify the main possible conditions of dependence of the Russian economy on the factors of its natural wealth, i.e. the dependence of the availability of the money supply on oil production. The objectives of the study are:
- to analyze the state of Russia's macroeconomic situation in dynamics;
- to identify factors determining the interdependence of geological and economic indicators;
- to build a multivariable regression and justify its reliability.
The hypothesis can be formulated as follows: based on the results of multiple regression, it is possible to estimate the degree of influence of each factor and its relationship in combination with factors of different composition.

Methods. Materials from various sources relevant to the topic have been selected with a view to writing this paper. On the basis of these, an analysis of the problem has been carried out, ways of solving it have been identified, and conclusions have been drawn. Empirical-theoretical research methods have been used, in particular the methods of analysis, analogy, comparison, and induction.

Results. The study of the macroeconomic indicators of the countries of the world is relevant and requires further work in connection with the rapidly changing operating environment, that is, the change of internal and external factors affecting the dynamics of economic development. The actions associated with the stabilization of monetary circulation, as well as the containment of the monetary system, have their own peculiarities. One of the main characteristics of Russia's macroeconomic development is the availability of oil reserves, as well as oil production and export. Russia is among the five richest countries in the world in terms of oil production. The Russian Federation is considered the richest country in terms of reserves not only of "black gold" but also of other minerals. Oil is produced not only for export but also for the production of fuel. The total volume of its proven reserves amounts to more than 14 billion tons. More than 10 million barrels are extracted from fields every day, and this volume is constantly increasing (Figure 1). More than 12% of the world's total oil production comes from Russia (Figure 2). Therefore, a comprehensive and balanced analysis of Russia's macroeconomic indicators will make it possible to find solutions for the recovery of its economy from its unstable state and to improve the socio-economic situation.
It is advisable to consider Russia's share in world exports (Figure 1), oil production (Figure 2), and the oil price (Figure 3) in the study, in order to further determine the welfare of the state from the macroeconomic factors of the country's development. Table 2 shows that in recent years the Russian economy has not been developing at a pace that would ensure its stability:
1. The population of Russia in the period under review (from 2015 to 2022) increased slightly (0.1%).
2. GDP per capita in Russia has been growing steadily. GDP per capita increased from 6,844.00 USD in 2015 to 11,931.00 USD in 2020, a rise of about 74%, with a slight decline in 2021 and 2022.
3. Accordingly, the country's GDP grew steadily from 2015 to 2020 (by approximately 43%), with some decline over the past two years.
4. There is a noticeable decline in the unemployment rate, by 1.4%.
5. The inflation rate in Russia behaves non-linearly; in 2021-2022 there was a significant increase compared to previous periods.
6. The trade balance of Russia is relatively stable: in 2015 it was 9.6 billion USD, and in 2020 it was 12.27 billion USD. However, in 2016 and 2017 it fell to 9.1 billion USD, and in 2019 it reached 19.7 billion USD. The decline in the trade balance in 2020 is due to the global COVID-19 pandemic. Rapid growth is observed during 2021-2022.
7. Exports and imports over the period under review show relatively stable and moderate growth; only in 2022 is there a slight decrease in imports to Russia.
8. The corruption index is kept at 28-29 points throughout the period.
9. The indicator of industrial production in Russia has a positive trend, from -3.7% in 2015 to 17.7% in 2020, although today it has decreased to 9.0%.
The stability of the economy is ensured primarily by the financial sphere, in particular by monetary circulation and credit activity. Cash flows permeate all industries, the non-production segment, and foreign economic relations, in the form of cash outside the banking system and in the form of transferable and other deposits. Having considered the main economic indicators of Russia, it is advisable to analyze, evaluate, and identify the trend of the monetary aggregates in dynamics (Table 3); these aggregates are a tool for influencing inflation and, to a large extent, the result of changes in the operating conditions of the state. To do this, it is necessary to analyze how oil production affects the monetary aggregate M2 by constructing graphs of polynomial dependence for the period from 2015 to 2022 (Figure 4). The studied monetary aggregates exhibit a polynomial dependence with a very high degree of data reliability, as evidenced by the R² values. The polynomial dependence of the monetary aggregates in Russia shows a steady growth trend, which is a negative phenomenon and entails an increase in the corruption index. The polynomial nature of the dependence of the monetary aggregates on oil production represents a more natural form of general models of country development in the world. It is possible to stabilize the level of macroeconomic development in Russia under certain conditions: money from the sale of oil should not be invested in foreign businesses or lost to fraud, but should be used to develop one's own country and the citizens living in it. According to the World Bank, the macroeconomic factors of the country's development are its share in world exports, oil production, and the oil price. This set of factors characterizes Russia's position in the global space, which is in a difficult situation.
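For illustration, the polynomial trend fitting described above can be reproduced as follows. The M2 series here is a placeholder standing in for the paper's Table 3 data, so the coefficients and R² are not the paper's results.

```python
# Fit a polynomial trend to a monetary-aggregate series and report R^2.
# The M2 values are hypothetical placeholders, not the paper's data.
import numpy as np

years = np.arange(2015, 2023)
m2 = np.array([35.8, 38.4, 42.4, 47.1, 51.7, 58.7, 66.3, 82.4])  # hypothetical

coeffs = np.polyfit(years - 2015, m2, deg=2)     # degree-2 polynomial trend
trend = np.polyval(coeffs, years - 2015)

ss_res = np.sum((m2 - trend) ** 2)
ss_tot = np.sum((m2 - m2.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)              # reliability of the fit
```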
The authors propose that the dynamics of oil production in Russia be considered as a factor determining the development of the country's economy alongside the monetary aggregates. This study was based on the hierarchy analysis method, a mathematical tool for a systematic approach to decision-making in complex situations. The objective is to build a hierarchy of factors influencing the financial condition of the state. The authors propose a system of variables for inclusion in multivariate regression models (Table 4). To determine the dependence of the monetary aggregates on the oil production factor in the country, a multiple linear regression model (without multicollinearity) is built based on the analysis of the matrix of paired correlation coefficients. This requires the following steps.
I. Build a matrix of paired coefficients including all variables (Table 5). The matrix indicates that the greatest influence on the monetary aggregates (M2) in Russia is exerted by the oil price factor (0.79), while the oil production factor has a weak effect on the monetary aggregates of the state (-0.36).
II. Using the selected variables, build a regression model for a more reliable analysis of the macroeconomic factors of the country's development (Table 6). When analyzing the obtained data of the regression model in Table 6, it can be noted that the closeness of the relationship (R²) between the variable Y and the three variables X is ≈0.72. This indicates that the M2 variable is 72% explained by oil production, export, and price. However, when analyzing each factor in detail, it can be seen that the existing dependence is provided by the oil price factor, which has a coefficient of ≈0.79, whereas the oil production factor has a negative influence (≈-0.37) on the M2 monetary aggregate. The average approximation error in the three-factor model is 13.71%.
As a result of the quantitative and qualitative analysis, the factors influencing the economic situation of Russia have been evaluated, and the weight of each of them is represented in absolute value. Based on the results of the multiple regression, it has been possible to estimate the degree of influence of each factor and its relationship in combination with factors of different composition. We check the quality of the regression model graphically by visually comparing the actual and predicted data for the variables X (Figure 5). Here the regression model is supported by the empirical evidence, and it can be used successfully both for further analysis and forecasting and for practical application.
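A minimal sketch of the three-factor model itself: ordinary least squares of M2 (Y) on oil production (X1), oil export (X2), and oil price (X3), together with R² and the average approximation error. The yearly figures below are illustrative stand-ins for the paper's Tables 4-6 data, not the actual series.

```python
# Three-factor OLS model M2 ~ production + export + price, with R^2 and
# mean absolute percentage (approximation) error. Data are hypothetical.
import numpy as np

prod  = np.array([534., 548, 547, 556, 561, 513, 524, 535])   # X1
expo  = np.array([244., 254, 257, 260, 267, 239, 230, 242])   # X2
price = np.array([52.,  44,  54,  71,  64,  42,  69,  95])    # X3
m2    = np.array([35.8, 38.4, 42.4, 47.1, 51.7, 58.7, 66.3, 82.4])  # Y

X = np.column_stack([np.ones(len(m2)), prod, expo, price])
beta, *_ = np.linalg.lstsq(X, m2, rcond=None)    # OLS coefficients

pred = X @ beta
r2 = 1 - np.sum((m2 - pred)**2) / np.sum((m2 - m2.mean())**2)
mape = np.mean(np.abs((m2 - pred) / m2)) * 100   # average approximation error
print("beta =", beta, "R^2 =", round(r2, 3), "error % =", round(mape, 2))
```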
2,858.8
2023-01-01T00:00:00.000
[ "Economics" ]
Mathematical Attack of RSA by Extending the Sum of Squares of Primes to Factorize a Semi-Prime. The security of RSA relies on the computationally challenging factorization of the RSA modulus N = p × q, with N being a large semi-prime consisting of two primes p and q, for the generation of RSA keys in commonly adopted cryptosystems. The property of p and q both being congruent to 1 mod 4 is used in Euler's factorization method to theoretically factorize them. While this caters to only a quarter of the possible combinations of primes, the rest of the combinations, congruent to 3 mod 4, can be found by extending the method using Gaussian primes. However, based on the Pythagorean primes that are applied in RSA, the semi-prime has only two sums of two squares in the range of possible squares from √N − 1 to N/2. As N becomes large, the probability of finding the two sums of two squares becomes computationally intractable in the practical world. In this paper, we apply Pythagorean primes to explore how the number of sums of two squares in the search field can be increased, thereby increasing the likelihood that a sum of two squares can be found. Once two such sums of squares are found, even though many may exist, we show that it is sufficient to find only two solutions to factorize the original semi-prime. We present the algorithm, showing the simplicity of steps that use rudimentary arithmetic operations requiring minimal memory, with search cycle time being a factor for very large semi-primes, which can be contained. We demonstrate the correctness of our approach with practical illustrations for breaking RSA keys. Our enhanced factorization method is an improvement on our previous work, with results compared to other factorization algorithms, and continues to be an ongoing area of our research.

Introduction. The RSA (Rivest-Shamir-Adleman) public-key primitive, named after its inventors, has been the most widely used in cryptography since its introduction in 1977 [1]. The generation of RSA keys in public-key cryptosystems is based on the modulus N = pq, where N is a semi-prime, the product of two large primes p and q [2,3]. The computationally intractable factorization of semi-primes when they are large has been the fundamental premise for using RSA keys in computer security in several applications [4-6]. Modern cryptosystems make wide use of RSA encryption and digital signatures for secure message exchange and communication over different types of networks within government as well as various industry sectors [7-9]. These include popular transactions in various applications, including the Internet of Things (IoT), such as mobile banking, online shopping, smart card payments, e-health, and e-mail communications that are available to the common man over the Internet [10,11]. Despite the difficulty of breaking RSA keys (i.e. semi-prime factorization), cybercrimes and RSA key attacks are still on the rise [12-14]. A cryptanalytic attack on a short RSA key by M. J. Wiener, established in 1990, was the first of its kind [15,16]. Hence, increasing the difficulty of factoring the RSA modulus by choosing strong prime factors p and q was considered a solution to address these attacks. Since then, it has become common practice to employ 512-bit RSA to provide the required strong primes for many cryptographic security protocols [17].
Even though a 512-bit RSA modulus was first factored in 1999, even with the high computational power of today there is still difficulty with 512-bit factorization in real-time applications [18]. Therefore, 512-bit RSA keys are still actively used by popular protocols for email authentication, such as DomainKeys Identified Mail (DKIM); for retrieving messages from email servers, such as Post Office Protocol 3 (POP3), Simple Mail Transfer Protocol (SMTP), and Internet Message Access Protocol (IMAP); for data exchange on the web, such as Hypertext Transfer Protocol Secure (HTTPS); for connecting two computers, such as Secure Shell (SSH); and for providing privacy, such as Pretty Good Privacy (PGP) [19-21]. Enthusiastic amateurs have tried to factor 512-bit RSA keys: in 2012, Zachary Harris factored the 512-bit DKIM RSA keys used by Google and several other major companies in 72 hours per key, using distributed cloud services [22-24]. The Factoring as a Service (FaaS) project demonstrates that a 512-bit RSA key can be factored reliably within four hours in a public cloud environment [19]. Factorization of a 768-bit RSA key has also been demonstrated, using methods such as the number field sieve, and is several thousand times more difficult than for 512-bit RSA keys. The higher the key size, the more difficult it is to factor the RSA key. Hence, with new methods and progress in computing power over time, there are risks and implications for the future of RSA. This forms the motivation for studying the mathematical principles of semi-prime factorization and proposing a novel method to increase the likelihood of breaking an RSA key. Euler's factorization of a semi-prime N = pq is based on the property of p and q both being congruent to 1 mod 4 [25]. These constructions are based on the Pythagorean primes that are applied in RSA [26-28]. These contribute to only a quarter of the possible combinations of primes, and the rest of the combinations, congruent to 3 mod 4, are found using the Gaussian prime extension method [29]. The semi-prime has only two sums of two squares in the range of possible squares from √N − 1 to N/2, and therefore we extend previously established methods to increase the likelihood that a sum of two squares can be found [30,31]. Even though many sums of squares may exist, once any two of them are found, we show that it is sufficient to find only two solutions to factorize the original semi-prime. Our enhanced factorization method is an improvement of our previous work. We apply our method to case scenarios and provide the necessary conjectures. Our algorithm is practically simple and is implemented using rudimentary arithmetic operations that require minimal computational memory, with search cycle time being a factor to be considered for very large semi-primes. Further, we demonstrate the successful breaking of RSA keys such as 768-bit RSA, verified through the implementation of our algorithm in Java. We provide results highlighting the complexity of our enhanced factorization algorithm and comparing its performance with other factorization algorithms, laying out the scope for future research as well. The rest of the paper is organized as follows. Section 2 discusses related work and the uniqueness of our work. Section 3 provides our enhanced semi-prime factorization by extending the sum of squares method and specifies our algorithm with implementation steps. In addition, the performance of the algorithm, its complexity, and a comparison with other factoring methods are described.
Section 4 demonstrates the correctness of our algorithm when applied to break a 768-bit RSA key. Section 5 discusses the results, and finally, conclusions along with future research directions are given in Section 6.

Background. The difficulty of the semi-prime factorization problem is essential to the security of an RSA cryptosystem. Revisiting RSA cryptography [1,3,17], assuming that p and q are two primes used to generate a semi-prime N = pq, Euler's totient function is given by φ(N) = (p − 1)(q − 1). In RSA-based public-key cryptography, two different keys, known as the public and private keys, are used to perform the encryption and decryption of data or messages [32,33]. Any sensitive information is encrypted with a public key, and a matching private key is required to decrypt it. The public key is chosen as a pair K = (e, N), where e is an integer, not a factor of N, with 1 < e < φ(N). The private key is based on the tuple (d, p, q, φ(N)), such that, given K = (e, N), the exponent d is determined using the extended Euclidean algorithm from e · d ≡ 1 (mod φ(N)). The public key is used to encrypt a message m into a ciphertext c, such that c = m^e mod N. To retrieve the original message, the corresponding private key is used to decrypt the encrypted message, such that m = c^d mod N. RSA cryptosystems make use of these public and private key generation techniques for the security of data and the end-to-end security of information transmission, so that it cannot be understood by anyone except the intended recipients. These techniques are employed to authenticate the sender and the receiver of a message and to ensure the integrity of the data or message received, i.e. that it has not been tampered with. However, the problem of determining d from K = (e, N) is equivalent to finding the factors of the RSA modulus N. Hence, choosing strong primes for p and q is very important, so that the factorization of the semi-prime N becomes computationally infeasible and nontrivial for an adversary. In practical applications, a smaller private key may be used for a faster decryption algorithm, to improve the computational speed of online transactions. Once the private key is found, it can result in a total break of RSA, posing a great security risk. Hence, an enhanced prime factorization method that attacks a small decryption exponent could pose serious security challenges to the RSA cryptosystems that are widely adopted even today.
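For concreteness, the key generation, encryption, and decryption steps just described can be walked through with toy, completely insecure parameters; the tiny primes below are chosen only for illustration (they also happen to be the factors of the paper's later example, 27161).

```python
# Toy RSA walk-through (insecure; real RSA uses primes of hundreds of digits).
from math import gcd

p, q = 157, 173                 # two Pythagorean primes (both = 1 mod 4)
N = p * q                       # public modulus, 27161
phi = (p - 1) * (q - 1)         # Euler's totient (p-1)(q-1)

e = 5                           # public exponent, 1 < e < phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)             # private exponent: e*d = 1 (mod phi)

m = 4242                        # message, m < N
c = pow(m, e, N)                # encryption: c = m^e mod N
assert pow(c, d, N) == m        # decryption: m = c^d mod N recovers m
print(N, e, d, c)
```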
Several studies have attempted a general survey of attacks on the RSA cryptosystem since its introduction [16,19,34]. In 1990, Wiener introduced the first RSA cryptanalytic attack, showing that the system can be broken if the decryption exponent is small, with an upper bound of d < N^0.25, using the continued fractions method [35]. Subsequently, Boneh and Durfee [36] proposed an attack on short decryption exponents with an improved upper bound of d < N^0.292, while the RSA attack by Blomer and May [37] demonstrated an upper bound of d < N^0.29 with lattices of smaller dimension. Coppersmith's technique used a lattice reduction approach for finding small solutions of modular bivariate integer polynomial equations [38]. Most recent research has also been focusing on extending the upper bounds for d and e in the RSA private and public keys by working on the limitations of the Wiener and Coppersmith methods and approaching the problem differently. One recent work [39] considers the prime sum p + q, using sublattice reduction techniques and Coppersmith's methods for finding small roots of modular polynomial equations, achieving a slight improvement in the upper bound with reduced lattice dimension. Another work [17] uses a small prime difference p − q, which is then developed into a continued fraction as per Wiener's original method. While these extend the range of Boneh's original limit, the Lenstra-Lenstra-Lovász (LLL) method is still seen as the state of the art [40]. A survey of the history of number theory reveals that it has been explored widely by mathematicians to establish different representations of integers, in particular prime numbers, with a view to arriving at more efficient methods of deriving them [41-47]. Until about a decade ago, there had been a strong mathematical interest in polynomials which generate sums of squares. Recently, however, there has been a reviving interest in their application to practical experimentation, establishing their rudimentary computations and implications for cryptography [29,48-52]. In line with these approaches, it was proved in our previous work that semi-primes can be represented as a sum of four squares [30]. A new factorization method, a faster alternative to Euler's method [25], was proposed by establishing the relationship among the four squares. Our interest is to apply this method in a novel way to the semi-prime factorization problem as part of our ongoing research. This paper aims to explore further new findings of the method, namely that once one sum of squares is known, it can be used to find the other.

Proposed Method. We consider the basis of earlier work [25,42] which showed that a semi-prime N, constructed from two primes p and q that are each congruent to 1 (mod 4), is itself congruent to 1 (mod 4). Further, in [30] it was established that a sum of four squares could be reduced to two sums of two squares using the Brahmagupta-Fibonacci identity given in [49]. We note that Gaussian primes are of the form p ≡ 3 (mod 4) and cannot be represented as the sum of two squares [29]. On the other hand, Pythagorean primes are of the form p ≡ 1 (mod 4) [26-28,53]. In accordance with Fermat's Christmas theorem, an odd prime p can be represented as the sum of two squares of integers x and y, p = x² + y², if and only if p ≡ 1 (mod 4) [42,54]. This property was useful for determining which numbers can be represented as the sum of two squares, and was later proved by Euler [25]. In earlier work [30], it was proved that if a semi-prime is constructed using two Pythagorean primes of the form p ≡ 1 (mod 4), then Euler's factorization can be used to find two representations as the sum of two squares. Finding these two representations is non-trivial and computationally intensive for large numbers, even with high-performance computers. We make use of a previously established property that all Pythagorean triples can be represented as N = m² + (m + 2n − 1)² [28]. This equation provides a computationally simpler search using increments of n and fine convergence using m. In this paper, we extend our previously reported related work to show that once one sum of squares is known, it can be used to find the other. We present our proposed method in a step-by-step fashion. First, we apply our method to the semi-prime factorization of two simple case examples and arrive at a conjecture. Following this, we apply the proposed method to two more large semi-primes as case examples. Finally, in the next section, we demonstrate the breaking of 768-bit RSA using our proposed method as the final result achieved.
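One plausible reading of this search, sketched below as our own interpretation rather than the authors' code: increment n coarsely, and for each fixed odd offset 2n − 1 converge on m by bisection, since m² + (m + 2n − 1)² is monotonically increasing in m.

```python
# Search for N = m^2 + (m + 2n - 1)^2: coarse increments of n, fine
# convergence on m by integer bisection. Our interpretation, for illustration.
from math import isqrt

def two_square_rep(N):
    n = 1
    while (2 * n - 1) ** 2 <= N:
        k = 2 * n - 1                      # fixed odd offset for this pass
        lo, hi = 0, isqrt(N)
        while lo <= hi:                    # fine convergence using m
            m = (lo + hi) // 2
            v = m * m + (m + k) ** 2
            if v == N:
                return m, m + k            # N = m^2 + (m + k)^2
            lo, hi = (m + 1, hi) if v < N else (lo, m - 1)
        n += 1                             # coarse increment of n
    return None                            # no representation exists

print(two_square_rep(27161))               # (100, 131): 100^2 + 131^2 = 27161
```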
The first three sums of two squares can easily be found by decrementing from 403 such that a perfect square remains. By applying Equations (1)-(5), we arrive at the corresponding sequence of steps. In this way, using the modified Euler factorization from previous work [30], the factors of a compound number can be found. This can be extended to compound numbers whose factorization is not known.

Factorizing a Semi-Prime. With the increasing application of number theory in information security, researchers have been drawn to the interesting problem of factorizing a semi-prime, a positive integer N = pq with exactly two prime factors p and q. Encryption algorithms such as RSA and other public-key cryptosystems rely on special large prime factors of a semi-prime for encoding a sender's message and decoding it at the receiver's end. Since the prime factors of the semi-prime are kept private, even if the semi-prime is revealed, an interceptor must know both prime factors before the message can be decoded. Hence, with the evolution of information and communication technologies, a fast and efficient factorization method for very large semi-primes is of much interest to mathematicians and information security researchers. With the current state of knowledge, we apply our proposed method of using the sum of squares (SoS) to factorize a semi-prime and illustrate our algorithm with case examples. In broad terms, our proposed algorithm for factorizing a semi-prime consists of three parts, as given below:
1. Part 1, the new sum of squares (N-SoS1), proposed in this paper;
2. Part 2, the polynomial-based sum of squares (P-SoS2), leveraging another work [53]; and
3. Part 3, a modified Euler-based sum of squares (E-SoS3) factorization using an earlier work [30].
In the above algorithm, we take advantage of a multiplier F(m, n) with special attributes which, when known, allows for the recovery of N-SoS1 using simple algebra:
F(m, n) = (2m² + 2mn + 2m + n + 1)² + n².
This has special properties, to be discussed in a future paper. In short, it generates the product of two numbers of the form
F(m, n) = (m² + (m + 1)²)((m + n)² + (m + n + 1)²).
Numbers whose component squares are one apart, as per m² and (m + 1)², enable special factorizing properties to be maintained. Hence, multiplying a semi-prime N to be factored by such a number allows for the recovery of N-SoS1 for N. It is conjectured in this paper that multiplying N by F(m, n) leads to the determination of N-SoS1 more quickly. The algebra takes the form of 8 possible equations, as explained below: we posit that once an SoS for the product F × N is found, N-SoS1 for N can be recovered using one of the 8 equations. This is the essence of the N-SoS1 algorithm we propose in this work. Once an N-SoS1 for F × N becomes known, a simple GCD test determines which of the 8 equations will yield N-SoS1 for N. Only a simple division is then required to yield the N-SoS1 solution for N. Once N-SoS1 is known, it is used to find P-SoS2. Further, once N-SoS1 and P-SoS2 are known, a modified Euler factorization using the sum of squares (E-SoS3) is able to yield p and q, and hence factorization is achieved. A Java implementation of N-SoS1 uses simple arithmetic operations such as +, -, * and /, as well as GCD, and is provided as Supplementary Resources.
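The final Euler-style step (E-SoS3) can be illustrated independently of N-SoS1 and P-SoS2: given two distinct representations N = a² + b² = c² + d², a nontrivial factor of N is gcd(N, |ad − bc|), since (ad − bc)(ad + bc) = a²N − c²N is divisible by N. The sketch below (not the authors' Java code) finds the representations by brute force and applies this to the paper's example N = 27161.

```python
# Euler-style factorization from two sum-of-two-squares representations.
from math import gcd, isqrt

def all_two_square_reps(N):
    reps = []
    for a in range(isqrt(N // 2), isqrt(N) + 1):   # a is the larger root
        b2 = N - a * a
        b = isqrt(b2)
        if b * b == b2 and b <= a:
            reps.append((a, b))
    return reps

N = 27161                                    # the paper's example, 157 * 173
# assumes N has at least two essentially distinct representations
(a, b), (c, d) = all_two_square_reps(N)[:2]  # 131^2+100^2 and 155^2+56^2
p = gcd(N, abs(a * d - b * c))               # nontrivial factor of N
print(p, N // p)                             # 157 173
```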
It is possible to avoid P-SoS2 once the first square becomes known, as shown in Case Example 3 (below), by decrementing the square from F(m, n). However, Case Example 4 illustrates that, even for small numbers, not using P-SoS2 makes finding the second SoS unpredictable. The RSA case illustrates that, as per Case Examples 3 and 4, a solution exists, but poor selection of F leads to an intractable result for RSA768. Hence, from the above, the factorization is N = 27161 = (157)(173).

Results. We employ our proposed method to demonstrate the factorization of a large RSA key. The commonly adopted 512-bit RSA key has been attacked by successfully completing factorization several times with different factorization methods [22-24]. To take the challenge of RSA key attacks further, in this work we consider a higher RSA key size, the 768-bit RSA key, to apply our proposed semi-prime factorization method with the enhanced sum of squares of primes. The RSA case illustrates that a solution exists, but poor selection of F leads to an intractable result for RSA768. The selection of m, n for F(m, n) can be better determined. As presented in this paper, as n is incremented, a range of m values is searched through. This is continued until a perfect square is found. The attributes of F have been determined via experimentation. The authors are currently characterizing F(m, n) with a view to reducing the search field for a suitable F. This is an area of ongoing research.

Performance and Comparison. In Table 1, we summarize the cost performance of our proposed extended sum of squares approach to factorize semi-primes of different lengths (key sizes). For the various case examples of semi-primes, including RSA768, that were computed in this work, we provide the cost factors of memory used and the non-optimized search time taken, in terms of decrement loops of the square, for completing the semi-prime factorization using our method. If a linear search is used from the square root of N = pq, the number of iterations grows accordingly; however, it can be significantly reduced by using N = m² + (m + 2n − 1)², congruent to 1 (mod 4). (* Decrement loops of the square, non-optimized search.) Table 1 demonstrates that the memory required for our algorithm is minimal, and the search cycle time is the main factor that needs to be contained for large RSA keys. It is important to note that each search cycle involves only six steps of rudimentary algebra, consisting of multiplication, addition, subtraction, and division operations with very low processing time. Next, we elaborate on the complexity of our algorithm and compare it with other factoring algorithms.

Complexity of the Algorithm. We summarize the complexity of our proposed semi-prime factorization algorithm in terms of memory and computational time. These complexity measures follow similar lines to existing methods reported in the literature [55]. The memory requirement of our algorithm is very small, with most computations operating on the accumulator (using BigInteger arithmetic). The numbers of memory variables used for each major part and step of the algorithm are given below: the manipulation of F(m, n) requires three variables, the resultant SoS requires two variables, and the recovery step requires two variables. Hence, these requirements occupy 7 BigInteger values in memory. Studying the time complexity of our proposed algorithm, we find that each of the algorithm steps is rudimentary.
Steps 3 and 5 use a square root function, which is the most expensive operation in this algorithm. Only the integer part is needed, so a Newton-Raphson algorithm is used here. Step 6 uses a GCD function. The remaining parts are simply multiplication, addition, subtraction, and division operations that require minimal processing time. Continued research is underway to characterize F(m, n) so that a reduced stochastic search can be undertaken in a reduced search field. Our proposed semi-prime factorization method is simple to understand and to implement. It uses rudimentary algebra of multiplication, addition, subtraction, and division operations that require minimal processing time, unlike other algorithms that focus on lattice reduction (LR) techniques requiring time-consuming operations. While this work is limited to an empirical study of our proposed method, formal proofs are quite extensive and are reserved for future work.

Discussion. By factoring the modulus of an RSA private key, an attacker can compute the corresponding public key, or vice versa. Various studies have surveyed RSA key sizes and limitations across public key infrastructure, and new methods for attacking commonly used 512-bit and 768-bit RSA keys continue to interest researchers [19,56]. Most previous studies have examined attacks on RSA cryptosystems with a specific focus. For instance, partial key exposure (PKE) attacks were studied in [57,58]. A PKE attack on the low public-exponent RSA key of a variant of the RSA public-key cryptosystem was found to be less effective than for standard RSA; however, large public-exponent RSA keys were reported to be more vulnerable to such attacks than standard RSA. Some generalized studies have also been conducted comprehensively, at a practical Internet scale, to survey vulnerabilities of RSA key generation for various protocols such as SSH, HTTPS, and DKIM [19-21,59]. In this paper, however, we have considered the number-theoretic properties of semi-primes, which form the underlying principle behind any RSA key generation, to be the fundamental cause of any RSA attack. For instance, by choosing an RSA modulus with a small difference between its prime factors, private key attacks could be improved [60]. Some early studies considered polynomial equations to study low-exponent RSA vulnerabilities and their variants [61,62]. Recent number-theory-based studies in this domain have focused on lattice reduction (LR) techniques combined with prime number theory to study RSA key attacks [31,38,39]. Such LR-based techniques come under the general category of Coppersmith attacks. However, the uniqueness of our contribution is that we propose factorization using the sum of squares, more in line with Euler's sum of squares approach than with Coppersmith's approach. While there is active research in LR methods, the sum of squares remains an area of interest and continues to yield surprises, and it has been the motivation of this work. The subtleties in the value of F are an area of interest that we explored in this study. The value of F in this paper has special properties, which are described briefly. F was constructed from two semi-primes. Each of the primes involved has special properties: they are all congruent to 1 (mod 4), and each is a sum of two squares whose roots are one apart. In the case of both semi-primes, the highest square of the smaller prime is the lowest square of the larger prime. Both semi-primes become perfect squares when 1 is subtracted.
Multiplying the two semi-primes together created 8 sums of two squares, three of which are very close together (402, 401, 399). When such a number is multiplied with a semi-prime to be factored, 32 sums of two squares are available. As was shown previously, only two sums of two squares are required to factor the semi-prime in question; this increases the likelihood of finding two such sums of squares. Equations (4)-(8) describe the general form of F(m, n). In this case F = (65)(2501) = 162565. It would appear counter-intuitive to make the semi-prime to be factored (N) larger; however, this is offset by the increased likelihood of finding two sums of two squares near the square root of F × N. In the case of RSA768, the first solution was some way off from the square root. The search can be restricted by considering only congruent co-prime sums of squares, consistent with a previous study [30]. In this way, only sums of two squares which approximate the target need be tested, and of those, only the ones equal to F × N yield a valid solution. This substantially reduces the search cost. Since a linear search is conducted, the search field can easily be divided up, and this lends itself very well to parallel processing. The mathematics in our approach is rudimentary, consisting of multiplications, additions, and subtractions. The perfect square test does require a square root, which is computationally costly, but this too can be avoided if the approach in [30] is used to find solutions equivalent to N = m² + (m + 2n − 1)², all of which are congruent to 1 (mod 4). Overall, the search area to find the first sum of squares is key to finding the second; hence, the search is focused on finding the first sum of two squares. The relationship between the two squares of the sum of two squares of the semi-prime determines the spread of, and the likelihood of finding, the first sum of two squares. Table 1 shows the search cycle time for the different key sizes explored in this work. The search area can be limited by √N − 1 and N/2. This can be narrowed by considering the properties of F and the closeness of its sums of squares (402, 401, 399). The smearing of these solutions across the search range by F determines the search field. This field is then divided over a number of parallel processes, each running a very simple (but fast) m² + (m + 2n − 1)² algorithm. This search field for finding the first sum of two squares is an area of ongoing research.

Conclusion and Future Work. We studied semi-prime factorization and its impact on the security of the RSA cryptosystem. We proposed a novel extension of our previously established methods of semi-prime factorization using a sum of squares approach for cryptanalytic attacks on the RSA modulus N = pq, with N being a large semi-prime constituting two primes p and q. We gradually developed our proposed technique by providing illustrations of semi-prime factorization for small and large numbers. We arrived at the conjecture that a composite consisting of k unique primes ≡ 1 (mod 4) has 2^(k−1) sums of squares. We provided the detailed steps involved in the implementation of our enhanced semi-prime factorization algorithm. We then applied our proposed factorization algorithm to attack 768-bit RSA successfully. Finally, we discussed the strengths and limitations of our algorithm by providing its complexity in terms of memory and search cycle time, as well as a comparison to other factorization algorithms.
In the future, a more extensive comparison of various RSA keys adopted in cryptosystems will be studied for key attacks using our method against other contemporary methods, such as Shor's algorithm. Our ongoing research will also focus on characterising a reduced search field for arriving at the semi-prime factorization result, and on the formal theoretical proofs.
6,505.8
2020-09-28T00:00:00.000
[ "Mathematics", "Computer Science" ]
Multipass cell post-compression at 515 nm as an efficient driver for a table-top 13.5 nm source. We present a table-top, efficient, and power-scalable scheme enabling the effective generation of extreme ultraviolet radiation up to 100 eV photon energy. To this end, ultrashort pulses (<20 fs) in the visible spectral range (515 nm) are used to drive high harmonic generation (HHG) in helium. This allows for a significant efficiency boost compared to near-infrared (NIR) drivers, enabled by the favourable λ⁻⁶ scaling of the single-atom response [1]. We report the experimental realization of a multipass cell delivering 15.7 fs pulses with a peak power close to 25 GW at 515 nm and an overall efficiency (IR to compressed green pulse) of >40%. In conjunction, preliminary HHG results are presented, paving the way for mW-class HHG sources at 13.5 nm.

High flux high harmonic sources. High harmonic generation (HHG) allows for the creation of coherent radiation in the extreme ultraviolet (XUV/EUV) spectral region. Access to this spectral region through a lab-scale setup has enabled a plethora of applications, such as the investigation of electron dynamics in matter [2], spectroscopic analysis [3], and coherent diffractive imaging (CDI) [4]. In recent years, the spectral region around 92 eV has gained particular interest due to its use in EUV lithography [5]; explicitly, actinic metrology of the masks used remains an open question. Here, 13.5 nm ptychography (a form of scanning CDI) poses an attractive solution. All of the aforementioned applications would benefit from a high photon flux, e.g. to decrease measurement times or increase signal-to-noise ratios. This can be achieved by an increase in driving laser power, which improves the XUV flux linearly but quickly runs into practicability and cost constraints. However, by simultaneously increasing the HHG efficiency, the generated photon flux can be increased dramatically. This can be achieved by using short-wavelength drivers, exploiting the λ⁻⁶ boost of the single-atom response [1]; e.g. frequency doubling of the driving laser in principle yields a factor of 64. Furthermore, shorter driving pulse durations enable phase matching at higher intensities, resulting in an increased cut-off and efficiency [9]. Combining these scaling laws, we believe a high-power ultrafast source in the visible spectral range (VIS) is the optimal driver for high harmonic generation with cut-off energies up to 100 eV.

Multipass cell post-compression and XUV generation. The scheme we present consists of four building blocks: (1) the driving laser, (2) a nonlinear frequency conversion stage, (3) a consecutive post-compression stage, and finally (4) the generation of high harmonic radiation. A first experimental realization of this setup is published in [10] and is briefly covered in the following.
Fig. 1. Schematic of the experimental setup.

The driving laser is an ytterbium fiber laser system delivering 55 W of average power at 50 kHz, corresponding to a pulse energy of 1.1 mJ. The spectrum is centered around 1030 nm with a close-to-transform-limited pulse duration of 280 fs. Frequency doubling takes place in a 1.5 mm long BBO crystal cut for type-I phase matching, yielding 29 W at 515 nm, corresponding to a conversion efficiency of more than 52%. The 240 fs pulses are guided into a gas-filled Herriott-type multipass cell, where they undergo spectral broadening through self-phase modulation. In 19 focal passes, 0.6 bar of krypton provides a sufficient amount of nonlinearity to fully cover the supported bandwidth of the employed dielectric mirrors. The cavity mirrors are 50 mm in diameter, separated by 1.95 m in a concentric configuration. The transmission through the cell is close to 95%, in line with the expected linear losses of the optics used. Temporal recompression is realized through 28 dispersive mirror reflections, each providing -100 fs² of group delay dispersion (GDD). The shortest pulse duration is achieved by fine-tuning the dispersion with an additional 4 mm of fused silica, resulting in a total GDD of approximately -2525 fs². After compression, 22.4 W of average power and 0.44 mJ of pulse energy are available. Temporal characterization of a sample, split off via two fused silica wedges, is carried out with a commercially available TIPTOE device [11]. The retrieved pulse and spectrum can be seen in Fig. 2, in combination with a grating-based spectrometer measurement and the respective calculated Fourier-limited pulse duration. The retrieved pulse duration of 15.7 fs is within 5% of the transform-limited pulse duration of 14.9 fs. The retrieved pulse shows a high temporal contrast, with more than 93% of the energy in the main feature, resulting in a peak power of 24.9 GW. Additionally, an M² measurement of the compressed and sampled beam reveals a high beam quality, with M² < 1.2 in both axes. Overall, the nonlinear frequency conversion and the post-compression stage operate at a total efficiency of more than 40%. In the following, these pulses are used to generate high harmonic radiation in a helium gas jet, targeting the spectral region around 90 eV. In a preliminary study, we were able to push the phase-matched cut-off energy beyond 100 eV through the use of helium as the interaction medium (see Fig. 3). The achieved conversion efficiency at the 39th harmonic (94 eV) was determined to be 1 × 10⁻⁸, which is comparable to state-of-the-art systems around 90 eV driven by NIR few-cycle lasers [12].
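The headline figures quoted above are mutually consistent, as the following back-of-the-envelope check shows. This is our own arithmetic, not from the paper; the Gaussian-pulse peak-power factor of ~0.94 is an assumption.

```python
# Sanity-check the quoted numbers: lambda^-6 gain from frequency doubling,
# photon energy of the 39th harmonic of 515 nm, and compressed peak power.
hc_eV_nm = 1239.84

boost = (1030 / 515) ** 6                 # single-atom response gain: 64x
e39 = 39 * hc_eV_nm / 515                 # 39th harmonic energy: ~93.9 eV

energy_J = 0.44e-3 * 0.93                 # 93% of 0.44 mJ in the main pulse
tau_s = 15.7e-15                          # FWHM pulse duration
peak_W = 0.94 * energy_J / tau_s          # assumed Gaussian-like pulse shape

print(f"{boost:.0f}x, {e39:.1f} eV, {peak_W/1e9:.1f} GW")  # 64x, 93.9 eV, ~24.5 GW
```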
Summary and Outlook. In summary, we report a nonlinear frequency conversion stage followed by a multipass-cell-based post-compression delivering close to 25 GW of peak power at an overall efficiency of more than 40%. The combination of ultrashort pulses (<10 optical cycles) and high peak power enables the phase-matched generation of EUV radiation. We believe the presented approach will allow for unprecedented amounts of XUV photon flux up to 100 eV photon energy. The inherent advantage of this scheme is the maturity and power scalability of every building block utilized within this demonstration. Ultrafast Yb-fiber lasers have been demonstrated with up to 10 kW of average power [13]. Second harmonic generation to the green spectral range with ultrafast lasers has been demonstrated up to 1.4 kW [14], albeit with picosecond pulses. Multipass cells equally show power scalability to kilowatt-level average power [15]. Based on this, power scaling to 100 W and beyond is possible and is currently being investigated. Furthermore, an improvement of the generation conditions (e.g. an optimized gas target and focusing geometry) should allow for a boost in conversion efficiency of up to two orders of magnitude. In combination, the improvement in driving-source average power and the boost in HHG efficiency should enable a photon flux of more than 100 µW per harmonic line at around 90 eV, an increase in EUV average power of two orders of magnitude compared to the state of the art [12,16].

Fig. 2. Temporal characterization of the compressed output pulses via the TIPTOE technique. (Left) Retrieved temporal pulse profile and Fourier limit calculated from the measured spectrum, normalized to the output pulse energy. (Right) Measured and retrieved spectrum and corresponding spectral phase.
1,584.8
2023-01-01T00:00:00.000
[ "Physics" ]
Targeted editing of tomato SPEECHLESS cis-regulatory regions generates plants with altered stomatal density in response to changing climate conditions. Flexible developmental programs enable plants to customize their organ size and cellular composition. In leaves of eudicots, the stomatal lineage produces two essential cell types, stomata and pavement cells, but the total numbers and ratio of these cell types can vary. Central to this flexibility is the stomatal-lineage-initiating transcription factor SPEECHLESS (SPCH). Here we show, by multiplex CRISPR/Cas9 editing of SlSPCH cis-regulatory sequences in tomato, that we can identify variants with altered stomatal development responses to light and temperature cues. Analysis of tomato leaf development across different conditions, aided by newly created tools for live-cell imaging and translational reporters of SlSPCH and its paralogues SlMUTE and SlFAMA, revealed the series of cellular events that lead to the environmental-change-driven responses in leaf form. Plants bearing the novel SlSPCH variants generated in this study are powerful resources for fundamental and applied studies of tomato resilience in response to climate change.

Introduction. In response to changing climates, plants adjust their physiology and growth. Clear examples of plant responses are the activity and production of stomata. Stomata are cellular valves in the epidermal surfaces of aerial organs; a pair of stomatal guard cells (GCs) flank a pore whose aperture they modulate through turgor-driven cell swelling. Stomatal opening and closing responses occur within minutes of a change in light, water availability, carbon dioxide concentration ([CO2]), or temperature, and many of the signal transduction pathways for perception and response have been described in detail (reviewed in (1)). Over longer timescales, stomatal responses to environmental change are detected as changes in stomatal density (SD, stomata per unit area) or stomatal index (SI, stomata per total epidermal cell number). Because stomata are preserved in fossils and herbaria, these developmental responses have been used as evidence of past climates (2,3). Changes to SI and SD can also occur during the development of individual leaves of plants subjected to changes in light (4,5) or temperature (6-8). Genes responsible for stomatal production and pattern have been identified, typically first in Arabidopsis thaliana, with subsequent studies showing that a core set of transcription factors, receptors, and signaling peptides are broadly used among plants (e.g. (9-16)). Among angiosperms, the plants that comprise the majority of natural and cultivated ecosystems, the transcription factors SPEECHLESS (SPCH), MUTE, and FAMA play key roles in the specification of stomatal precursors and in the differentiation of stomatal guard cells and subsidiary cells (reviewed in (17,18)). EPIDERMAL PATTERNING FACTORS (EPFs) can promote or repress stomatal production and are attractive targets for genetic engineering. Broad overexpression of EPFs can modulate stomatal production, and this has measurable effects on important agronomic traits like water use efficiency (WUE) (15,19).
The stomatal-lineage-initiating factor SPCH sits at the nexus of environmental and developmental patterning and, as might be predicted by this position, is subject to extensive regulation at the transcriptional and post-translational levels. Direct regulation of SPCH transcription in response to warm temperature is mediated by PHYTOCHROME INTERACTING FACTOR4 (PIF4) (6), and SPCH protein is phosphorylated by MAPK, GSK3, and CDK kinases (20-22). There is also indirect evidence that SPCH transcript levels and/or protein stability change when stimuli like light intensity or auxin availability change (23-26). Advances in precision genome editing have enabled new ways of modulating gene activities, with the potential to also reveal endogenous regulatory mechanisms. Multiplex CRISPR/Cas9-mediated editing of tomato gene cis-regulatory regions, for example, revealed enhancers responsible for architecture and fruit traits that mimic those bred into specialized commercial tomato lines, as well as generating novel fruit morphologies (27). Given these technical innovations and the placement of SPCH in stomatal and leaf networks, SPCH emerged as an ideal candidate for a CRISPR/Cas9-enabled cis-regulatory region dissection for elements that could drive responsiveness to environmental inputs. Here we show that SlSPCH cis-regulatory alleles can exhibit unique and differential "stomatal set points", as well as responsiveness to environmental change. To characterize the stomatal response to environmental change at a cellular level in tomato, we created translational reporters of SlSPCH, SlMUTE, and SlFAMA, and used a genetically encoded plasma membrane reporter to track cell division patterns in the leaf epidermis over time. Together, these new genetic tools revealed that tomato adjusts stomatal production by enabling cells that would typically become stomata to divert into a non-stomatal fate. This adds to fundamental knowledge of plant responses and generates potentially useful lines for agronomic use in future climates.

Stomatal lineage establishment, expansion, and maturation are marked by expression of the bHLH transcription factors SlSPCH, SlMUTE, and SlFAMA in tomato. Tomato orthologues of the stomatal lineage cell-type-specific bHLH transcription factors SPCH, MUTE, and FAMA (SlSPCH, SlMUTE, and SlFAMA) can complement loss-of-function mutations in their Arabidopsis counterparts when expressed in Arabidopsis (28), but the expression pattern and role of these tomato genes in their native context have not been described. Knowing these genes' distribution in tomato is important because, although the epidermal cell type distributions in Arabidopsis and tomato leaves look superficially the same, recent live-cell imaging studies that tracked stomatal lineages over several days identified that different modes of cell division in these two plants lead to each species' final distribution of stomata (Figure 1a) (29).
We designed native promoter-driven translational reporters and created stable transgenics in tomato cultivar M82 (details in methods). Compared to similar translational reporters in Arabidopsis, SlSPCHpro::NeonGreen-SlSPCH, SlMUTEpro::mScarlet-SlMUTE, and SlFAMApro::mTurquoise-SlFAMA were found in roughly the same cell types and patterns (Figure 1b and S1a-d). In newly emerging cotyledons, 0 days after germination (dag), SlSPCH is present in many epidermal cells (Figure S1a), but one day later it becomes restricted to the smaller daughters of asymmetric cell divisions (ACDs, Figure S1b). SlMUTE is expressed in cells with the round morphology of guard mother cells (GMCs) (Figure S1c). SlFAMA is expressed in both young guard cells (GCs) before they form a pore (31/48 stomata) and mature GCs with pores (56/69 stomata) (Figure S1d). Unlike Arabidopsis AtFAMA, however, SlFAMA was not detected in GMCs (0/22 GMCs). Under standard long-day growth conditions of 26°C and 700 µmol m⁻² s⁻¹ light intensity, there were no obvious overexpression phenotypes generated by the extra copies of SlSPCH, SlMUTE or SlFAMA provided by the transgenes in genetic backgrounds that retained functional copies of the respective endogenous genes.

Mutagenesis of the SlSPCH cis-regulatory region generates small deletions that display a range of stomatal phenotypes.

In other species, SPCH has a primary role in lineage flexibility, so we chose this gene for targeted mutagenesis of its cis-regulatory region. We expected that the complete loss of SlSPCH activity would eliminate stomata and be lethal, so we employed a CRISPR/Cas9-based multiplex strategy (27) to make a series of small deletions in the 5' regulatory region of SlSPCH. The nearest protein-coding gene lies more than 7.5 kb upstream of the SlSPCH start site (Figure S1e), but ATAC-seq profiles of open chromatin suggested that for most tomato genes the region ~3 kb upstream of the start site is likely to contain key regulatory regions (30), and indeed in this smaller region we find numerous predicted target sites for transcription factors with roles in developmental and environmental regulation (Figure S1e). We designed 11 sgRNAs, as evenly distributed as the sequence would allow, and generated a series of SlSPCH cis-regulatory deletion alleles ranging from 2 bp to ~2.5 kbp (Figure 1c and Supplemental file 1), as assayed by genotyping of T1 plants. Plants with deletions spanning more than ~100 bp were selected and self-pollinated to obtain homozygous lines for all subsequent experiments.

We chose five deletion lines to test for phenotypic response to altered light or temperature conditions. We selected these lines to optimize coverage of the SlSPCH cis-regulatory region, and we required that, when grown under standard growth chamber or greenhouse conditions, the lines were fertile and their overall size and leaf morphology were not substantially different from M82 (Figure 1d-e). Because CRISPR/Cas9 mutagenesis can lead to a spectrum of mutations, we also recovered some plants where deletions disrupted the SlSPCH coding region (line #13, Figure 1d-e and S1e-f). These plants were pale, lacked stomata and arrested as seedlings, like Arabidopsis spch null mutants (31). SlSPCH null plants were not characterized further, but served as confirmation that SlSPCH is essential for stomatal lineage initiation.

SlSPCH cis-regulatory mutants display altered responsiveness to changes in light intensity.
As the main organs of photosynthesis, leaves have finely tuned responses to changes in light quality and intensity. Along with increasing their mesophyll and chloroplast production, most plants respond to elevated light with an increase in stomatal index (SI) and in total numbers of stomata per leaf. In Arabidopsis, light-mediated promotion of stomatal development is accompanied by higher levels of SPCH. Dissection of the pathway from light perception to stomatal response showed that the core light response factor ELONGATED HYPOCOTYL 5 (HY5) upregulates transcription of the mobile signal STOMAGEN in mesophyll cells. STOMAGEN then moves to the epidermis and suppresses the signaling pathway that normally degrades SPCH, ultimately resulting in higher accumulation of SPCH and more stomata (4). It has been suggested, though not experimentally confirmed, that SPCH transcription is also regulated by light (26).

To establish a baseline light response for tomato line M82 (with intact SlSPCH), we grew M82 plants at 26°C and two light intensities, ~130 µmol photons m⁻² s⁻¹ and ~1300 µmol photons m⁻² s⁻¹. We quantified SI, SD and leaf area (Figure 2a-b and S2a-b) as measures of developmental response. As expected, M82 plants increased their SI and SD in the higher light condition. We then tested our SlSPCH cis-regulatory deletion mutants in the same conditions and found a variety of responses (Figure 2a). Lines #2, #3 and 1-6#10 have SIs similar to M82 at low light but exhibit a dampened response to higher light intensity. Line #20 has a lower SI at low light but is similar to M82 at high light, and therefore has an exaggerated response. Line 1-6#4 exhibits both a lower SI at low light and no response to an increase in light intensity. Similar trends are seen when SD is calculated (Figure 2b and S2b), except that 1-6#4 does show an increase in SD; this measurement, however, is sensitive to changes in overall leaf size, which typically changes when plants are grown in high light (Figure S2a). Taken together, these phenotypes suggest that alteration of the cis-regulatory region of SlSPCH can render stomatal production differentially sensitive to light.
We wanted to know how tomatoes alter SlSPCH and the stomatal lineage to change SI in response to light. In Arabidopsis, a shift in the relative frequency of amplifying to spacing asymmetric divisions leads to an increase in SI (32). However, in tomato, spacing divisions are nearly non-existent (29), so it was unclear what cellular mechanisms were available for tomato stomatal lineages to alter SI. We therefore imaged plants expressing the epidermal plasma membrane reporter ML1p::RCI2A-NeonGreen (29) every two days to trace asymmetrically dividing precursor cells to their final fate outcome (Figure 2c). From these timecourses, we found that the major mechanism responsible for altering SI was a shift from asymmetric divisions that yield one stoma and one pavement cell to asymmetric divisions that produce two pavement cells (Figure 2d). Thus, tomatoes appear to use a "lineage exit" strategy (29) to lower SI in low light.

SlSPCH cis-regulatory mutants display altered responsiveness to changes in temperature.

Plants also respond to elevated temperature by changing stomatal behavior and production (7, 33). In Arabidopsis, there is evidence for regulation of SPCH transcription through repression by the warm-temperature activated PIF4 protein (6). Adapting the temperature-shift protocols in Lau et al. (2018) to account for the different standard growth temperatures of Arabidopsis and tomato (22°C and 26°C, respectively), we measured the M82 response to elevated (34°C) temperature, again incorporating lineage tracing to identify the cellular mechanisms responsible for the observed changes in SI and whole-leaf attributes. Tomato responses to growth at elevated temperature differed from those in Arabidopsis, most strikingly in the direction of response. In Arabidopsis, higher temperatures suppressed SI, whereas in M82 they led to higher SI and SD (Figure 3a and S2c-d).

The opposite responses of tomato and Arabidopsis to increased temperature intrigued us. While there are numerous reports of different plants exhibiting stomatal opening and closing responses to changing temperature (34-36), stomatal production or pattern responses are less clear. The Arabidopsis accession used in Lau et al. (2018) was Col-0, a plant from a cool, moist climate. Because there are Arabidopsis accessions from diverse climates and with diverse life history traits, we repeated the Arabidopsis temperature-shift experiments with Col-0 and accessions Bur-0 (Ireland) and Kz-9 (Kazakhstan). As before, high temperature results in significantly lower SI in Col-0, but this reduction is not seen in Bur-0 or in Kz-9 (Figure 3b). Additional measurements of SD and leaf area indicate that there is a diversity of responses, and Col-0 alone is not sufficient to fully encompass the "Arabidopsis" response (Figure S2e-f).
Returning to tomato: under the temperature-shift regime, the SlSPCH cis-regulatory variants again showed diverse responses. SI and SD responses in lines #2, #3 and 1-6#10 to a shift from 26°C to 34°C were slightly dampened relative to those in M82, line #20 was essentially insensitive, and line 1-6#4 showed a dramatic decrease in SI at 34°C (Figure 3a and S2c-d). The higher temperature resulted in an overall decrease in leaf area in all lines (Figure S2). Sequencing of line 1-6#4 revealed that a deletion extended into the coding region of SlSPCH, so this response cannot be attributed solely to alteration of the cis-regulatory region; this allele is investigated in more detail below (Figure 4). Response to changing temperature could be separated from responses to light, in that line #20 was hypersensitive to light but insensitive to temperature, and 1-6#4 only showed a strong change in response to temperature (Figure 2a vs. 3a). Overall, these results show that, in addition to the insights CRISPR/Cas9 editing of tomato cis-regulatory regions has generated for morphological diversity (37-39), the approach can reveal sites for environmental inputs.

We used live-cell imaging to track how temperature affected stomatal lineage divisions and fates, and found, similar to the light responses, that the primary cellular mechanism underlying an increase in SI was an increase in the proportion of asymmetric precursor divisions producing a stoma and a pavement cell (rather than two pavement cells) at higher temperature (Figure 3c). We also took advantage of our SlSPCH, SlMUTE and SlFAMA translational reporters (Figure 1b and S1c-d) to refine how the stomatal lineage responded to changing environments. Due to the broad expression of SlSPCH and the guard cell-restricted expression of SlFAMA (Figure S1), we did not gain any additional insights from these reporters. The SlMUTE translational reporter, however, produced a striking phenotype in response to changing temperature (Figure S3). At 34°C, the higher SlMUTE dose provided by the transgene leads to a strong stomatal overproduction and stomatal clustering phenotype (Figure S3a, b and d). We did not see the same phenotype in response to higher light (Figure S3a-c), despite our light-shift regime inducing an increase in SI at least as great as our temperature-shift regime (Figures 2, 3 and S3). Several models of gene activity are consistent with this phenotype (Figure 3d), and we consider ways in which extra SlMUTE creates a sensitized background that reveals differential behaviors of SlSPCH and asymmetric division programs in the discussion.
Deletion of coding sequences at the SlSPCH N-terminus generates a temperature-sensitive allele of potential agronomic use.

The striking and unexpected reduction of SI and SD at high temperature in line 1-6#4 led us to characterize the effects of this mutation on the sequence and function of the resultant SlSPCH protein, as well as on cellular and whole-plant phenotypes (Figure 4). The mutation disrupts the N-terminal region of SlSPCH, resulting in a protein that lacks the basic residues, including the putative HER motif that mediates DNA binding, but retains the helix-loop-helix domain that is highly conserved among SPCH homologues (Figure 4a). At 26°C, both stomatal production and whole-plant phenotypes are indistinguishable from M82, but at 34°C, stomata numbers drop precipitously (Figure 4b-e) and overall growth is stunted (Figure 4f). To assess how a SlSPCH protein missing its N-terminus might behave, we created NeonGreen-tagged reporters and tested expression using a transient expression system in N. benthamiana. We infiltrated N. benthamiana with a plasmid co-expressing ML1p::RCI2A-mScarletI (control for transformation) and either 35Sp::slspch1-6#4-NeonGreen or 35Sp::SlSPCH-NeonGreen at three temperatures: 22°C, 28°C and 34°C. At all temperatures, 35Sp::SlSPCH-NeonGreen was nuclear, although at higher temperature fewer cells had detectable signal (Figure S4a, panels 1-3, quantified in S4b). At 22°C, 35Sp::slspch1-6#4-NeonGreen was also predominantly nuclear, though fewer transformed cells showed expression relative to the full-length SlSPCH (Figure S4a, panel 4). At 28°C and 34°C, very little nuclear 35Sp::slspch1-6#4-NeonGreen signal could be detected (Figure S4a, panels 5-6, quantified in S4b). The loss of nuclear-localized SlSPCH correlates with the severity of the stomatal phenotype; we therefore conclude that we have generated a temperature-sensitive SlSPCH allele whose phenotype is linked to its ability to act in the nucleus as a transcription factor. Although we can only speculate about the mechanism underlying the temperature sensitivity of this SlSPCH allele, the ability to create a form of SPCH whose activity can be controlled by temperature opens up new possibilities for testing tomato lines of varying stomatal densities for photosynthetic and water use efficiency phenotypes.

Discussion

Plants perceive and interpret environmental cues and effect developmental changes. Understanding the signaling, transcriptional and cellular responses that bridge input and output can reveal fundamental mechanisms of information flow and provide materials for creating climate-resilient crops. Here, using CRISPR/Cas9-enabled genome editing of the cis-regulatory region of the stomatal lineage regulatory factor SPCH in tomato, we showed that stomatal production could be made more or less sensitive to light and temperature cues. By combining molecular data on alleles with cellular tracking data, we identified a cell fate-switching mechanism that underlies adjustments made in response to light and to temperature. These alleles and new tomato lines bearing reporters for SlSPCH, SlMUTE and SlFAMA will enable finer dissection of stomatal lineage behaviors under many conditions and in other mutant backgrounds. Finally, the identification of a SlSPCH allele whose strong temperature-sensitive activity response appears linked to the subcellular localization of the protein provides both insights into intrinsic regulation of SPCH and a tool for manipulating stomatal production.
Cis-regulatory elements dictate cell-type specific expression and/or can tune expression in response to the environment. We showed that deletions of the 5' cis-regulatory region of SlSPCH result in altered stomatal responses to environmental change while leaving overall stomatal patterning (e.g. spacing) intact, suggesting that we identified mainly tuning elements. This outcome was by design, given that our initial screen was for CRISPR/Cas9-edited plants with overall wildtype appearance under standard growth conditions. Within the 3 kb region our sgRNAs target, there are predicted sites for the light response-mediating transcription factor HY5 (including A-boxes), two bHLH target sites (E-boxes) of the type used by the temperature-mediating transcription factor PIF4, and abscisic acid response elements (ABREs). Line #20, which was insensitive to temperature but hypersensitive to light, carries a deletion that disrupts a predicted HY5 site and two E-boxes (Figure 1c and S1e). Line #10, a relatively small deletion which exhibits a dampened light response, interrupts one of the A-boxes (Figure 1c and S1e). Future experiments may reveal the roles of tomato HY5 and PIF genes (40, 41) in stomatal regulation. It would be particularly interesting if HY5 turned out to be a direct transcriptional regulator of SlSPCH, because in Arabidopsis there is an intermediary between HY5 and regulation of SPCH protein (4). Sequence predictions in the SlSPCH upstream region, and alignment with SPCH 5' regions from other Solanaceae and from Arabidopsis, reveal a number of conserved non-coding sequences (CNS, Figure S1e). Some of the Solanaceae CNS are included in our characterized deletion lines (Figure 1c and S1e), but the CNS conserved between tomato and Arabidopsis (red in Figure S1e) lie further upstream. Generation of additional cis-regulatory alleles that specifically target CNS may be a productive strategy to identify additional regulatory elements, including both those that drive cell-type specific expression and those that affect SlSPCH production in response to additional environmental inputs.
The behavior of the SlSPCH1-6#4 allele, which eliminates the N-terminal region including the predicted DNA-binding region of the bHLH domain, challenges our assumptions about which regions of the SlSPCH protein are essential, and for which activities. At 26°C, SlSPCH1-6#4 lines produce slightly fewer stomata than M82 (Figure 3a), but the stomata appear morphologically normal and well-patterned (Figure 4b and c); therefore this version of SlSPCH must be capable of performing the general SlSPCH functions of inducing asymmetric cell divisions and promoting eventual guard cell identity. Although it may seem surprising to ascribe transcriptional regulator activity to a bHLH missing the motif that typically mediates DNA contacts, there are classes of transcription factors that can function without sequence N-terminal of the HLH domains (42), and previous deletion and domain-swap experiments showed that AtMUTE, and to a lesser extent AtSPCH, can function when three putative DNA-binding residues (HER) are replaced (43). In other plants, SPCH is an obligate heterodimer working with INDUCER OF CBF EXPRESSION 1 (ICE1) and paralogous Class IIIb bHLHs (11, 16, 44). It is possible that, as a heterodimer, it is the ICE1 residues that make contact with DNA, and SPCH provides some other essential activities for transcriptional activation (45). Loss of the SlSPCH N-terminal region is also predicted to eliminate a nuclear localization signal, and we see that, at high temperatures, SlSPCH1-6#4 is not retained in the nucleus (Figure S4). That SlSPCH1-6#4 is ever nuclear we attribute to its heterodimerization with SlICE. We speculate that, given ICE1's activity promoting gene expression at low temperatures (46, 47), it may be less present or active at warm ones, and low ICE1 would be insufficient to retain the N-terminally deleted SlSPCH.
Ultimately, how do the manipulations we made to the SlSPCH locus lead to environment-dependent changes in stomatal production? Cell tracking data in M82 suggest that SlSPCH-induced ACDs normally resolve into either a stoma and a pavement cell or two pavement cells, and that light and temperature modulate SlSPCH production and/or activity to tip this balance (model in Figure 3d). One hypothesis is that a quantitative shift in SlSPCH in cells about to undergo ACDs defines whether one or no cells acquire stomatal fate after the ACD. We, however, prefer a second explanation that also takes into account that, in response to higher temperature, pairs or small clusters of stomata result from the presence of SlMUTEpro::mScarlet-SlMUTE. Broad overexpression of MUTE in Arabidopsis leads to the formation of extra stomata by inducing non-stomatal cells to take on stomatal fate (48). The ectopic stomata produced by the extra copy of SlMUTE, however, are morphologically normal, and the orientation of the GC pairs suggests that they do not originate from extra divisions of GCs (Figure S3, compared to (49)). Therefore, we hypothesize that the extra SlMUTE dosage reveals more of the range in the post-ACD toggle between pavement cell and GC identity. As diagrammed in Figure 3d, this suggests that tomato ACDs produce a continuum of cell fates. Under conditions optimal for productive photosynthesis and leaf growth, ACDs result in one GC and one pavement cell. The ACDs are induced by SlSPCH, but the resulting choice of fate is dictated by SlSPCH targets (or SlSPCH in complex with a target), and among those targets is SlMUTE. Under limiting environmental conditions, SlSPCH levels or activity are insufficient to induce fate-promoting targets, and both products of an ACD become pavement cells. In high-temperature conditions, elevated SlSPCH and/or its direct targets would induce both endogenous and transgene-derived SlMUTE, and this higher expression would push the ACDs to yield two stomata. Alternatively, SlMUTE itself could be temperature responsive, and an additional copy of SlMUTE could cause SlMUTE to accumulate to levels high enough to pass a cell fate threshold in both daughters of an ACD.

Nearly all future climate predictions suggest that temperatures and atmospheric [CO2] will increase, and that severe weather events (drought, flooding) will become more prevalent (50, 51). Stomata, with their roles in capturing atmospheric carbon and regulating plant transpiration, which has both water transport and leaf cooling components, must integrate and prioritize potentially conflicting signals to maintain plant health. While the increase in stomatal production in response to increasing light intensity appears to occur in most plants, the response to increased temperature can vary, as seen comparing tomato and Arabidopsis, and even among Arabidopsis accessions. These diverging responses to warm temperature may reflect different priorities for conserving water and leaf cooling. It is therefore hard to predict, but important to test, the combinatorial effects of predicted future climate conditions on plants. Genetic tools that can alter sensitivity to specific inputs, such as the SlSPCH lines generated in this work, will be instrumental in deciphering complex responses and may also be useful to assay the effects of tomato plants with different stomatal numbers for growth in the large-scale or urban agriculture systems of the future.
Figure 1. Overall progression of stomatal development in tomato and generation of SlSPCH cis-regulatory mutations. (a) Schematic of stomatal development in tomato leaves based on observations of divisions and expression of reporters. (b) Confocal images showing cell-type specific expression of stomatal reporters; larger fields of view are in Figure S1a-d. (c) Diagram of the SlSPCH cis-regulatory mutagenesis scheme and resultant deletions. (d) Seedling morphology and (e) cotyledon epidermis phenotypes in wildtype M82, SlSPCH cis-regulatory deletion line 1-6#10 and SlSPCH coding region deletion line #13. Stomata are false-colored purple in (e). Scale bars: 10 µm in (b), 20 mm in (d), and 50 µm in (e). See also Figure S1.

Figure 2. Cellular basis of the stomatal response to changes in light intensity and alteration of this response in SlSPCH cis-regulatory mutants. (a) Plot of stomatal index (SI) response to low and high light conditions in M82 and cis-regulatory mutants. (b) Plot of stomatal density (SD) response to low and high light conditions in M82 and cis-regulatory mutants. (c) Lineage tracing of asymmetrically dividing cells and their fate outcomes from confocal images of M82 cotyledons expressing the epidermal plasma membrane reporter ML1p::RCI2A-NeonGreen. Red arrows mark ACDs that yield stomata (purple) and black arrows indicate ACDs that lead to two pavement cells (symmetric differentiation). Scale bars represent 20 µm. (d) Plot of the shift in the number of ACDs that yield two pavement cells in low and high light. Data in (a), (b) and (d) are represented as mean ± 95% confidence interval. Bonferroni-corrected p values from Mann-Whitney U tests are *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001; n.s.: P > 0.05, not significant. Sample sizes in (a) and (b) are n = 8-21 0.4 mm² fields from 3-5 cotyledons. Sample sizes in (d) are n = 8-14 0.4 mm² fields from 3-4 cotyledons. See also Figure S2.

Figure 3. Stomatal response to changes in temperature and alteration of this response in SlSPCH cis-regulatory mutants and among Arabidopsis accessions. (a) Plot of the shift in SI of tomato SlSPCH cis-regulatory mutants. (b) Plot of the SI response of Arabidopsis accessions to a change in temperature. (c) Plot of the shift in the number of tomato stomatal lineage ACDs that produce two pavement cells instead of one stoma and one pavement cell (symmetric differentiation) in plants grown at 26°C and 34°C. (d) Scheme of environmental and genetic influence on the balance between pavement cells and stomata as mediated by SlSPCH and its targets inducing ACDs and then influencing the fates of the daughters. Data in (a-c) are represented as mean ± 95% confidence interval. Bonferroni-corrected p values from Mann-Whitney U tests are *P < 0.05; ***P < 0.001; ****P < 0.0001; n.s.: P > 0.05, not significant. Sample sizes in (a) are n = 20-30 0.4 mm² fields from 4-6 cotyledons. Sample sizes in (b) are n = 15-16 cotyledons, each from an individual plant. Sample sizes in (c) are n = 8-9 0.4 mm² fields from 4-6 cotyledons. See also Figures S2 and S3.
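The figure legends above report Bonferroni-corrected p values from Mann-Whitney U tests. As a hedged illustration of that procedure (the data values and the number of comparisons below are invented placeholders, not the study's measurements):

```python
# Sketch: Mann-Whitney U test with Bonferroni correction, as in the legends.
from scipy.stats import mannwhitneyu

low_light_si  = [0.18, 0.20, 0.19, 0.21, 0.17, 0.20]   # per-field SI values
high_light_si = [0.24, 0.26, 0.25, 0.27, 0.23, 0.26]

n_comparisons = 6  # e.g., one test per genotype; adjust to the real design
u_stat, p_raw = mannwhitneyu(low_light_si, high_light_si, alternative="two-sided")
p_bonferroni = min(1.0, p_raw * n_comparisons)  # cap corrected p at 1
print(f"U = {u_stat}, raw p = {p_raw:.4f}, corrected p = {p_bonferroni:.4f}")
```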
Author Contributions: IN designed research, performed research, analyzed data, and wrote the paper; AB and NKS performed research; JE performed research and analyzed data; DCB designed research, analyzed data and wrote the paper. Funding: IN was funded by BARD Fellowship no. FI-583-2019 and a Koret postdoctoral fellowship at Stanford University. JE is supported by the Cellular and Molecular Biology training grant (National Institutes of Health, T32GM007276). DCB is an investigator of the Howard Hughes Medical Institute. Competing Interest Statement: The authors declare no competing interests.
Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits, and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures, and therefore enable applicability of the RPU approach to a large class of neural network architectures.

INTRODUCTION

Deep neural network (DNN) (LeCun et al., 2015) based models have demonstrated unprecedented accuracy, in cases exceeding human-level performance, in cognitive tasks such as object recognition (Krizhevsky et al., 2012; He et al., 2015; Simonyan and Zisserman, 2015; Szegedy et al., 2015), speech recognition, and natural language processing (Collobert et al., 2012). These accomplishments are made possible thanks to advances in computing architectures and the availability of large amounts of labeled training data. Furthermore, network architectures have been adjusted to take advantage of data properties such as spatial or temporal correlation. For instance, convolutional neural networks (CNNs) provide superior results for image recognition, and recurrent neural networks (RNNs) for speech and natural language processing. Therefore, the application space of the traditional fully connected deep learning network is apparently diminishing. In a recent paper we have introduced the concept of a resistive processing unit (RPU) as an architecture solution for fully connected DNNs. Here we show that the RPU concept is equally applicable to CNNs.

Training large DNNs is an extremely computationally intensive task that can take weeks even on distributed parallel computing frameworks utilizing many computing nodes (Dean et al., 2012; Le et al., 2012; Gupta et al., 2016). There have been many attempts to accelerate DNN training by designing and using specialized hardware such as GPUs (Coates et al., 2013; Wu et al., 2015), FPGAs (Gupta et al., 2015), or ASICs that rely on conventional CMOS technology. All of these approaches share the common objective of packing more computing units into a fixed area and power budget by using optimized multiply-and-add hardware, so that acceleration over a conventional CPU can be achieved.
Although various microarchitectures and data formats are considered for different accelerator designs (Arima et al., 1991; Lehmann et al., 1993; Emer et al., 2016), all of these digital approaches use a similar underlying transistor technology, and therefore the acceleration factors will eventually be limited by scaling limitations. In order to achieve even larger acceleration factors beyond conventional CMOS, novel nano-electronic device concepts based on non-volatile memory (NVM) technologies (Burr et al., 2017), such as phase change memory (PCM) (Kuzum et al., 2013), resistive random access memory (RRAM) (Chi et al., 2016), and memristors (Prezioso et al., 2015; Soudry et al., 2015; Merced-Grafals et al., 2016), have been explored for implementing DNN training. Acceleration factors ranging from 25X to 2,000X (Xu et al., 2014; Burr et al., 2015; Seo et al., 2015) compared to the conventional CPU/GPU based approaches, and significant reductions in power and area, have been predicted. However, for these bottom-up approaches the acceleration factors are still limited by device non-idealities that are fundamental to their application as non-volatile memory (NVM) elements. Instead, using a top-down approach it is possible to develop a new class of devices, so-called Resistive Processing Unit (RPU) devices (Gokmen and Vlasov, 2016), that are free from these limitations and can therefore promise ultimate acceleration factors of 30,000X while still providing a power efficiency of 84,000 GigaOps/s/W.

The concept of using resistive cross-point device arrays (Chen et al., 2015b; Agrawal et al., 2016b; Gokmen and Vlasov, 2016; Fuller et al., 2017) as DNN accelerators has been tested, to some extent, by performing simulations for the specific case of fully connected neural networks. The effect of various device properties and system parameters on training performance has been evaluated to derive the required device and system level specifications for a successful implementation of an accelerator chip for compute-efficient DNN training (Agrawal et al., 2016a; Gokmen and Vlasov, 2016). A key requirement is that these analog resistive devices must change conductance symmetrically when subjected to positive or negative pulse stimuli. Indeed, these requirements differ significantly from those needed for memory elements and therefore require a systematic search for new physical mechanisms, materials and device designs to realize an ideal resistive element for DNN training. In addition, it is important to note that these resistive cross-point arrays perform the multiply and add operations in the analog domain, in contrast to the CMOS based digital approaches. Optimizing machine learning architectures that employ this fundamentally different approach to computation requires careful analysis and trade-offs. While this has been done for the specific case of fully connected DNNs, it is not clear whether the proposed device specifications for that case generalize to a broader set of network architectures, and hence their applicability requires further validation.

Fully Connected Neural Networks

Deep fully connected neural networks are composed by stacking multiple fully connected layers such that the signal propagates from the input layer to the output layer by going through a series of linear and non-linear transformations (LeCun et al., 2015). The whole network expresses a single differentiable error function that maps the input data onto class scores at the output layer.
In most cases the network is trained with simple stochastic gradient descent (SGD), in which the error gradient with respect to each parameter is calculated using the backpropagation algorithm (Rumelhart et al., 1986). The backpropagation algorithm is composed of three cycles (forward, backward and weight update) that are repeated many times until a convergence criterion is met. For a single fully connected layer where N input neurons are connected to M output (or hidden) neurons, the forward cycle involves computing a vector-matrix multiplication (y = Wx), where the vector x of length N represents the activities of the input neurons and the matrix W of size M × N stores the weight values between each pair of input and output neurons. The resulting vector y of length M is further processed by applying a non-linear activation to each of its elements and is then passed to the next layer. Once the information reaches the final output layer, the error signal is calculated and backpropagated through the network. The backward cycle on a single layer also involves a vector-matrix multiplication, on the transpose of the weight matrix (z = W^T δ), where the vector δ of length M represents the error calculated by the output neurons and the vector z of length N is further processed using the derivative of the neuron non-linearity and then passed down to the previous layers. Finally, in the update cycle the weight matrix W is updated by performing an outer product of the two vectors used in the forward and the backward cycles, usually expressed as W ← W + η(δx^T), where η is a global learning rate.

Mapping Fully Connected Layers to Resistive Device Arrays

All of the above operations performed on the weight matrix W can be implemented with a 2D crossbar array of two-terminal resistive devices with M rows and N columns, where the stored conductance values in the crossbar array form the matrix W. In the forward cycle, the input vector x is transmitted as voltage pulses through each of the columns and the resulting vector y can be read as current signals from the rows (Steinbuch, 1961). Similarly, when voltage pulses are supplied from the rows as an input in the backward cycle, a vector-matrix product is computed on the transpose of the weight matrix, W^T. Finally, in the update cycle, voltage pulses representing the vectors x and δ are simultaneously supplied from the columns and the rows. In this configuration each cross-point device performs a local multiplication and summation operation by processing the voltage pulses coming from the column and the row, and hence achieves an incremental weight update. All three operating modes described above allow the arrays of cross-point devices that constitute the network to be active in all three cycles and hence enable a very efficient implementation of the backpropagation algorithm. Because of their local weight storage and processing capability these resistive cross-point devices are called RPU devices (Gokmen and Vlasov, 2016). An array of RPU devices can perform the operations involving the weight matrix W locally and in parallel, and hence achieves O(1) time complexity in all three cycles, independent of the array size.

Here, we extend the RPU device concept toward CNNs. First we show how to map the convolutional layers to RPU device arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. Next, we show that the RPU device specifications derived for a fully connected DNN hold for CNNs.
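As a concrete summary of the three cycles described above, the following minimal NumPy sketch performs the forward, backward and update operations of a single fully connected layer exactly as they are defined here (in floating point, ignoring analog noise and bounds; all sizes and values are illustrative):

```python
# Minimal sketch of the three backpropagation cycles on one fully connected
# layer, as executed on an RPU array: forward y = Wx, backward z = W^T delta,
# and the rank-one update W <- W + eta * delta x^T.
import numpy as np

N, M, eta = 4, 3, 0.01
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(M, N))   # conductance values stored on the array

x = rng.normal(size=N)                   # input neuron activities
y = W @ x                                # forward cycle (read from the rows)

delta = rng.normal(size=M)               # error signal from the layer above
z = W.T @ delta                          # backward cycle (transpose read)

W += eta * np.outer(delta, x)            # update cycle (outer product, in place)
```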
Our study shows, however, that CNNs are more sensitive to noise and bounds (signal clipping) due to the analog nature of the computations on RPU arrays. We discuss noise and bound management techniques that mitigate these problems without introducing any additional complexity in the analog circuits, and that can be addressed by the associated digital circuitry. In addition, we discuss digitally-programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of these techniques enables a successful application of the RPU concept for the training of CNNs. Furthermore, a network trained with RPU devices, including imperfections, can yield a classification error indistinguishable from a network trained using conventional high-precision floating point arithmetic.

Convolutional Layers

The input to a convolutional layer can be an image or the output of the previous convolutional layer and is generally considered as a volume with dimensions (n, n, d), with a width and height of n pixels and a depth of d channels corresponding to different input components (e.g., red, green, and blue components of an image), as illustrated in Figure 1A. The kernels of a convolutional layer also form a volume that is spatially small along the width and height but extends through the full depth of the input volume, with dimensions (k, k, d). During the forward cycle, each kernel slides over the input volume across the width and height, and a dot product is computed between the parameters of the kernel and the input pixels at each position. Assuming no zero padding and single-pixel sliding (i.e., a stride of one), this 2D convolution operation results in a single output plane with dimensions (n − k + 1, n − k + 1, 1) per kernel. Since there exist M different kernels, the output becomes a volume with dimensions (n − k + 1, n − k + 1, M) and is passed to the following layers for further processing. During the backward cycle of a convolutional layer, similar operations are performed, but in this case the spatially flipped kernels slide over the error signals that are backpropagated from the upper layers. The error signals form a volume with the same dimensions as the output, (n − k + 1, n − k + 1, M). The results of this backward convolution are organized into a volume with dimensions (n, n, d) and are further backpropagated for error calculations in the previous layers. Finally, in the update cycle, the gradient with respect to each parameter is computed by convolving the input volume with the error volume used in the forward and backward cycles. This gradient information, which has the same dimensions as the kernels, is added to the kernel parameters after being scaled with a learning rate.

Mapping Convolutional Layers to Resistive Device Arrays

For an efficient implementation of a convolutional layer using an RPU array, all the input/output volumes as well as the kernel parameters need to be rearranged in a specific way. The convolution operation essentially performs a dot product between the kernel parameters and a local region of the input volume, and hence can be formulated as a matrix-matrix multiply (Gao et al., 2016). By collapsing the parameters of a single kernel to a vector of length k²d and stacking all M different kernels as separate rows, a parameter matrix K of size M × k²d is formed that stores all of the trainable parameters associated with a single convolutional layer, as shown in Figure 1B.
After this rearrangement, in the forward cycle the outputs corresponding to a specific location along the width and height can be calculated by performing a vector-matrix multiplication y = Kx, where the vector x of length k²d is a local region of the input volume and the vector y of length M contains all of the results along the depth of the output volume. By repeating this vector-matrix multiplication for different local regions, the full volume of the output map can be computed. Indeed, this repeated vector-matrix multiplication is equivalent to a matrix-matrix multiplication Y = KX, where the matrix X with dimensions k²d × (n − k + 1)² holds the input neuron activities (with some repetition) and the resulting matrix Y with dimensions M × (n − k + 1)² holds all the results corresponding to the output volume. Similarly, using the transpose of the parameter matrix, the backward cycle of a convolutional layer can also be expressed as a matrix-matrix multiplication Z = K^T D, where the matrix D with dimensions M × (n − k + 1)² holds the error signals corresponding to an error volume. Furthermore, in this configuration the update cycle also simplifies to a matrix multiplication, where the gradient information for the whole parameter matrix K can be computed using the matrices X and D, and the update rule can be written as K ← K + η(DX^T).

The rearrangement of the trainable parameters into a single matrix K by flattening the kernels enables an efficient implementation of a convolutional layer using an RPU array. After this rearrangement, all the matrix operations performed on K can be computed as a series of vector operations on an RPU array. Analogous to the fully connected layers, the matrix K is mapped to an RPU array with M rows and k²d columns, as shown in Figure 1B. In the forward cycle, the input vector corresponding to a single column of X is transmitted as voltage pulses from the columns and the results are read from the rows. Repeating this operation for all (n − k + 1)² columns of X completes all the computations required for the forward cycle. Similarly, in the backward cycle the input vectors corresponding to single columns of D are serially fed to the rows of the array. The update rule shown above can be viewed as a series of updates, each involving the computation of an outer product between two columns from X and D. This can be achieved by serially feeding the columns of X and D simultaneously to the RPU array. During the update cycle each RPU device performs a series of local multiplication and summation operations and hence calculates the product of the two matrices.

We note that for a single input the total number of multiplication and summation operations that need to be computed in all three cycles for a convolutional layer is Mk²d(n − k + 1)², and this number is independent of the method of computation. The proposed RPU mapping described above achieves this number as follows: due to the inherent parallelism of the RPU array, Mk²d operations are performed simultaneously for each vector operation performed on the array. Since there are (n − k + 1)² vector operations performed serially on the array, the total number of computations matches the expectation. Alternatively, one can consider that there are Mk²d trainable parameters and that each parameter is used (n − k + 1)² times due to the parameter sharing in a convolutional layer. Since each RPU device in an array can perform a single computation at any given time, parameter sharing is achieved by accessing the array (n − k + 1)² times.
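The rearrangement described above is straightforward to express in code. The sketch below (an illustration under the stated assumptions of stride one and no zero padding; the array names are ours) builds the parameter matrix K and the input matrix X, and computes the forward cycle as Y = KX:

```python
# Sketch of the kernel-flattening mapping: K is M x k^2*d, local input
# regions are stacked into X of size k^2*d x (n-k+1)^2, and the forward
# cycle of the convolutional layer becomes the matrix product Y = K X.
import numpy as np

n, d, k, M = 8, 3, 5, 16
rng = np.random.default_rng(1)
inp = rng.normal(size=(n, n, d))                 # input volume
kernels = rng.normal(size=(M, k, k, d))          # M kernels of size (k, k, d)

K = kernels.reshape(M, k * k * d)                # parameter matrix on the RPU array

out = n - k + 1
cols = []
for i in range(out):                             # slide over the height
    for j in range(out):                         # slide over the width
        patch = inp[i:i + k, j:j + k, :]         # local (k, k, d) region
        cols.append(patch.reshape(-1))           # same flattening order as K
X = np.stack(cols, axis=1)                       # shape (k^2*d, (n-k+1)^2)

Y = K @ X                                        # shape (M, (n-k+1)^2)
# Each column of X is one serial vector operation on the array; each column
# of Y holds all M outputs along the depth of the output volume.
```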
For fully connected layers each weight is used only once, and therefore all the computations can be carried out using single vector operations on the array. The end result of mapping a convolutional layer onto the RPU array is very similar to the mapping of a fully connected layer and therefore does not change the fundamental operations performed on the array. We also emphasize that the convolutional layer described above, with no zero padding and single-pixel sliding, is only used for illustration purposes. The proposed mapping is more general and can be applied to convolutional layers with zero padding, strides larger than a single pixel, dilated convolutions, or convolutions with non-square inputs or kernels. This enables the mapping of all of the trainable parameters of a conventional CNN, within convolutional and fully connected layers, to RPU arrays.

RESULTS

In order to test the validity of this method we performed DNN training simulations for the MNIST dataset using a CNN architecture similar to LeNet-5 (LeCun et al., 1998). It comprises two convolutional layers with 5 × 5 kernels and hyperbolic tangent (tanh) activation functions. The first layer has 16 kernels while the second layer has 32 kernels. Each convolutional layer is followed by a subsampling layer that implements the max pooling function over non-overlapping pooling windows of size 2 × 2. The output of the second pooling layer, consisting of 512 neuron activations, feeds into a fully connected layer consisting of 128 tanh neurons, which is then connected to a 10-way softmax output layer. Training is performed repeatedly, using a mini-batch size of unity, for all 60,000 images in the training dataset, which constitutes a single training epoch. A learning rate of η = 0.01 is used throughout the training for all 30 epochs. Following the mapping proposed above, the trainable parameters (including the biases) of this architecture are stored in 4 separate arrays with dimensions of 16 × 26 and 32 × 401 for the first two convolutional layers, and 128 × 513 and 10 × 129 for the following two fully connected layers. We name these arrays K1, K2, W3, and W4, where the subscript denotes the layer's location and K and W are used for convolutional and fully connected layers, respectively. When all four arrays are implemented as simple matrices and the operations are performed with floating point (FP) numbers, the network achieves a classification error of 0.8% on the test data. This is the FP-baseline model that we compare against the RPU-based simulations for the rest of the paper. We assume all activation and pooling layers are implemented in the digital circuits for the RPU-based simulations.
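As a quick sanity check of the array dimensions quoted above, each convolutional layer maps to an M × (k²d + 1) array and each fully connected layer to an M × (N + 1) array, with the extra column holding the bias. A minimal sketch:

```python
# Sketch checking the quoted array dimensions for the LeNet-like network.
def conv_array_shape(M, k, d):
    return (M, k * k * d + 1)    # +1 column stores the bias

def fc_array_shape(M, N):
    return (M, N + 1)

# MNIST 28x28 -> conv 5x5 -> 24x24x16 -> pool 2x2 -> 12x12x16
#             -> conv 5x5 ->  8x8x32  -> pool 2x2 ->  4x4x32 = 512 activations
print(conv_array_shape(16, 5, 1))    # K1: (16, 26)
print(conv_array_shape(32, 5, 16))   # K2: (32, 401)
print(fc_array_shape(128, 512))      # W3: (128, 513)
print(fc_array_shape(10, 128))       # W4: (10, 129)
```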
RPU Baseline Model

The influence of various RPU device properties, variations, and non-idealities on the training accuracy of a deep fully connected network is discussed in Gokmen and Vlasov (2016). We follow the same methodology here, and as a baseline for the RPU models discussed below we use the device specifications that resulted in an acceptable test error on the fully connected network. The RPU-baseline model uses the stochastic update scheme, in which the numbers that are encoded from the neurons (x_i and δ_j) are implemented as stochastic bit streams. Each RPU device performs a stochastic multiplication (Gaines, 1967; Poppelbaum et al., 1967; Merkel and Kudithipudi, 2014) via simple coincidence detection, as illustrated in Figure 2. In this update scheme the expected weight change can be written as

E(Δw_ij) = BL w_min (C_x x_i)(C_δ δ_j),    (1)

where BL is the length of the stochastic bit stream, w_min is the change in the weight value due to a single coincidence event, and C_x and C_δ are the gain factors used during the stochastic translation for the columns and the rows, respectively. The RPU-baseline has BL = 10, C_x = C_δ = √(η/(BL w_min)) = 1.0, and w_min = 0.001. The change in weight values is associated with a conductance change in the RPU devices; therefore, in order to capture device imperfections, w_min is assumed to have cycle-to-cycle and device-to-device variations of 30%. Actual RPU devices may also show different amounts of change for positive and negative weight updates (i.e., inherent asymmetry). This is taken into account by using a separate w_min^+ for the positive updates and w_min^− for the negative updates for each RPU device. The ratio w_min^+/w_min^−, averaged over all devices, is assumed to be unity, as this can be achieved by a global adjustment of the voltage pulse durations/heights. However, device-to-device mismatch is unavoidable, and therefore a 2% variation is assumed for this parameter. To take conductance saturation into account, which is expected in actual RPU devices, the bounds on the weight values w_ij are assumed to be 0.6 on average, with a 30% device-to-device variation. We did not introduce any non-linearity in the weight update, as this effect has been shown to be insignificant as long as the updates are reasonably balanced (symmetric) between up and down changes (Agrawal et al., 2016a; Gokmen and Vlasov, 2016).

During the forward and backward cycles, the vector-matrix multiplications performed on an RPU array are prone to analog noise and signal saturation due to the peripheral circuitry. The array operations, including the input and output signals, are illustrated in Figure 2. The output voltage (V_out) is determined by integrating the analog current coming from the column (or row) during a measurement time (t_meas) using a capacitor (C_int) and an op-amp. This approach will have noise contributions from various sources. These noise sources are taken into account by introducing an additional Gaussian noise, with zero mean and standard deviation σ = 0.06, to the results of the vector-matrix multiplications computed on an RPU array. This noise value can be translated to an acceptable input-referred voltage noise following the approach described in Gokmen and Vlasov (2016). In addition, the results of the vector-matrix multiplications stored at V_out are bounded to a value |α| = 12 to account for signal saturation on the output voltage corresponding to the supply voltage of the op-amp.

FIGURE 2 | Schematics of an RPU array operation during the backward and update cycles. The forward cycle operations are identical to the backward cycle operations except that the inputs are supplied from the columns and the outputs are read from the rows.

Table 1 summarizes all of the RPU-baseline model parameters used in our simulations, which are also consistent with the specifications discussed in Gokmen and Vlasov (2016).
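To make the stochastic update scheme concrete, the following sketch simulates coincidence detection between two stochastic bit streams and checks the expected weight change of Equation (1). Positive x_i and δ_j are used for simplicity (in hardware, signs are handled by pulse polarity), and all parameter values follow the baseline model above:

```python
# Sketch of the stochastic update: x_i and delta_j are encoded as bit streams
# of length BL, and the weight changes by w_min at every pulse coincidence,
# so on average E[dw_ij] = BL * w_min * (C_x * x_i) * (C_d * delta_j).
import numpy as np

BL, w_min = 10, 0.001
C_x = C_d = 1.0                         # sqrt(eta / (BL * w_min)) with eta = 0.01
x_i, delta_j = 0.6, 0.3                 # positive values for simplicity

rng = np.random.default_rng(2)
trials = 200_000
x_bits = rng.random((trials, BL)) < C_x * x_i        # stochastic bit streams
d_bits = rng.random((trials, BL)) < C_d * delta_j
coincidences = (x_bits & d_bits).sum(axis=1)         # pulse overlaps per trial
dw = w_min * coincidences

print(dw.mean())                                     # empirical average, ~0.0018
print(BL * w_min * C_x * x_i * C_d * delta_j)        # Equation (1): 0.0018
```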
The CNN training results for various RPU variations are shown in Figure 3A. Interestingly, the RPU-baseline model of Table 1 performs poorly and only achieves a test error between 10 and 20% (black curve). Not only is this value significantly higher than the FP-baseline value of 0.8%, but it is also higher than the 2.3% error rate achieved with the same RPU model for a fully connected network on the same dataset.

Our analysis shows that the larger test error is mainly due to contributions of the analog noise introduced during the backward cycle, and of the signal bounds introduced in the forward cycle on the final RPU array, W4. As shown by the green curve, the model without analog noise in the backward cycle and with infinite bounds on W4 reaches a respectable test error of about 1.5%. When we eliminate only the noise while keeping the bounds, the model exhibits reasonable training up to about the 8th epoch, but then the error rate suddenly increases and reaches a value of about 10%. Similarly, if we only eliminate the bounds while keeping the noise, the model, shown by the red curve, performs poorly and the error rate stays around the 10% level. In the following, we discuss the origins of these errors and methods to mitigate them.

Noise and Bound Management Techniques

It is clear that the noise in the backward cycle and the signal bounds on the output layer need to be addressed for a successful application of the RPU approach to CNN training. The complete elimination of analog noise and signal bounds is not realistic for a real hardware implementation of RPU arrays. Designing very low noise read circuitry with very large signal bounds is not an option because it would introduce unrealistic area and power constraints on the analog circuits. Below we describe noise and bound management techniques that can easily be implemented in the digital domain without changing the design considerations of the RPU arrays and the supporting analog peripheral circuits.

During a vector-matrix multiplication on an RPU array, the input vector (x or δ) is transmitted as voltage pulses with a fixed amplitude and tunable durations, as illustrated in Figure 2. In a naive implementation, the maximal pulse duration represents unity (t_meas → 1), and all pulse durations are scaled accordingly depending on the values of x_i or δ_j. This scheme works optimally for the forward cycle with tanh (or sigmoid) activations, as all x_i in x, including a bias term, are between [−1, 1]. However, this assumption becomes problematic for the backward cycle, as there are no guarantees for the range of the error signals in δ. For instance, all δ_j in δ may become significantly smaller than unity (δ ≪ 1) as the training progresses and the classification error gets smaller. In this scenario the results of a vector-matrix multiplication in the backward cycle, as shown by Equation (2) below,

z = W^T δ + σ,    (2)

are dominated by the noise term σ, as the signal term W^T δ does not generate enough voltage at the output. This is indeed why the noise introduced in the backward cycle brings the learning to a halt at around a 10% error rate, as shown by the models in Figure 3A. In order to get a better signal at the output when δ ≪ 1, we divide all δ_j in δ by the maximum value δ_max before the vector-matrix multiplication is performed on the RPU array. We note that this division operation is performed in the digital circuits and ensures that at least one signal of unit amplitude exists at the input of the RPU array. After the results of the vector-matrix multiplication are read from the RPU array and converted back to digital signals, we rescale the results by the same amount δ_max. In this noise management scheme, the results of a vector-matrix multiplication can be written as

z = [W^T (δ/δ_max) + σ] δ_max.

The result, z = W^T δ + σ δ_max, effectively reduces the impact of the noise significantly for small error signals (δ_max ≪ 1). This noise management scheme enables the propagation of error signals that are arbitrarily small and maintains a fixed signal-to-noise ratio independent of the range of values in δ.
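A minimal sketch of this noise management scheme, under a simplified noise model in which a single Gaussian term of standard deviation σ is added to the analog result (sizes and values are illustrative):

```python
# Sketch of noise management: rescale the error vector by its maximum before
# the analog multiply, then rescale the digital result, so the effective
# noise becomes sigma * delta_max instead of sigma.
import numpy as np

rng = np.random.default_rng(3)
M, N, sigma = 10, 129, 0.06
W = rng.normal(scale=0.1, size=(M, N))
delta = 1e-4 * rng.normal(size=M)              # tiny late-training error signal

def analog_matvec_T(W, v):                     # analog read: result plus noise
    return W.T @ v + sigma * rng.normal(size=W.shape[1])

naive = analog_matvec_T(W, delta)              # noise of size sigma swamps signal

d_max = np.abs(delta).max()
managed = analog_matvec_T(W, delta / d_max) * d_max   # noise scaled by d_max

exact = W.T @ delta
print(np.abs(naive - exact).max())             # ~sigma: signal lost in noise
print(np.abs(managed - exact).max())           # ~sigma * d_max: much smaller
```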
In addition to the noise, the results of a vector-matrix multiplication will be strongly influenced by the |α| term, which corresponds to the maximum allowed voltage during the integration time. The value |α| = 12 does not strongly influence the activations of hidden layers with tanh (or sigmoid) non-linearity, because the error introduced during the calculation of tanh(z) (or sigmoid(z)) due to the bound is negligible for an input value z that is otherwise much larger. However, for the output layer with softmax (or ReLU) activations, the error introduced due to the bound may become significant. For instance, if there are two outputs that are above the bound value, they would be treated equally, and the classification task would choose between the two classes with equal probability, even if one of the outputs is significantly larger than the other. This results in a significant error (major information loss) in estimating the class label and hence limits the performance of the network. As with the noise, the bounded signals start to become an issue in the later stages of training, as the network approaches an optimal configuration and the decision boundaries between classes become more distinct. As shown by the blue curve in Figure 3A, at the beginning of the training the network learns successfully, and test errors as low as 2% are achieved; however, around the 8th epoch signal bounding forces the network to learn unwanted features, and hence the error rate suddenly increases.

In order to eliminate the error introduced by bounded signals, we propose repeating the vector-matrix multiplication after reducing the input strength by half whenever signal saturation is detected. This guarantees that after a few iterations (n) the unbounded signals can be read reliably and properly rescaled in the digital domain. In this bound management scheme, the effective vector-matrix multiplication on an RPU array can be written as

y = 2^n [W (x/2^n) + σ],

with a new effective bound of 2^n |α|. Note that the noise term is also amplified by the same factor; however, the signal-to-noise ratio remains fixed (at only a few percent) for the largest numbers, which contribute most to the calculation of the softmax activations.

In order to test the validity of the proposed noise management (NM) and bound management (BM) techniques, we performed simulations using the RPU-baseline model of Table 1 with and without enabling NM and BM. A summary of these simulations is presented in Figure 3B. When both NM and BM are off, the model using the RPU baseline of Table 1, shown as the black curve, performs poorly, similar to the black curve in Figure 3A. Similarly, turning on either NM or BM alone (as shown by the red and blue curves) is not sufficient, and the models achieve test errors of about 10%. However, when both NM and BM are enabled, the model achieves a test error of about 1.7%, as shown by the green curve. This is very similar to the model with no analog noise and infinite bounds presented in Figure 3A and shows the success of the noise and bound management techniques. By simply rescaling the signal values in the digital domain, these techniques mitigate both the noise and the bound problems inherent to the analog computations performed on RPU arrays.
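The bound management loop described above can be sketched as follows (again with a simplified clipped-and-noisy model of the analog read; the retry limit is our own illustrative choice):

```python
# Sketch of bound management: when the analog output saturates at the bound
# alpha, halve the input and repeat; after n halvings rescale by 2^n in the
# digital domain, for an effective bound of 2^n * alpha.
import numpy as np

alpha, sigma = 12.0, 0.06
rng = np.random.default_rng(4)

def analog_matvec(W, x):                       # noisy, clipped analog read
    y = W @ x + sigma * rng.normal(size=W.shape[0])
    return np.clip(y, -alpha, alpha)

def bounded_matvec(W, x, max_halvings=8):
    scale = 1.0
    for _ in range(max_halvings):
        y = analog_matvec(W, x / scale)
        if np.abs(y).max() < alpha:            # no saturation detected
            return y * scale                   # rescale in the digital domain
        scale *= 2.0                           # halve the input strength, retry
    return y * scale

W = rng.normal(scale=2.0, size=(10, 128))
x = rng.normal(size=128)
print(np.abs(W @ x).max())                          # may well exceed alpha = 12
print(np.abs(bounded_matvec(W, x) - W @ x).max())   # small relative error
```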
The additional computations introduced in the digital domain by NM and BM are not significant and can be addressed with a proper digital design. For the NM technique, δ_max needs to be determined from δ, and each element in δ (and z) needs to be divided (and multiplied) by δ_max. All of these computations require additional O(M) comparison, division, and multiplication operations performed in the digital domain. However, given that the very same circuits need to compute O(M) error signals using the derivative of the activation functions, performing these additional operations does not change the complexity of the work the digital circuits must perform. The combination of all of these operations can essentially be viewed as computing a slightly more complicated activation function. Therefore, with proper design these additional operations can be performed with only a slight overhead, without causing a significant slowdown of the digital circuits. Similarly, BM can be handled in the digital domain by performing O(N) computations only when a signal saturation is detected. However, BM may require additional circuitry that detects signal saturation and feeds it as a control signal to the digital circuits to trigger the repeated vector-matrix multiplication. Sensitivity to Device Variations The RPU-baseline model with NM and BM performs reasonably well and achieves a test error of 1.7%; however, this is still above the 0.8% value achieved by the FP-baseline model. In order to identify the remaining factors contributing to this additional classification error, we performed simulations while selectively eliminating various device imperfections from different layers. The summary of these results is shown in Figure 4, where the average test error achieved between the 25th and 30th epochs is reported on the y-axis, along with an error bar that represents the standard deviation over the same interval. The black data points in Figure 4 correspond to experiments where the device-to-device and cycle-to-cycle variations of the parameters w_min, w_min^+/w_min^−, and w_ij are completely eliminated for different layers while the average values are kept unaltered. FIGURE 4 | Average test error achieved between the 25th and 30th epochs for various RPU models with varying device variations. Black data points correspond to simulations in which the device-to-device and cycle-to-cycle variations of the parameters w_min, w_min^+/w_min^−, and w_ij are all completely eliminated from different layers. Red data points correspond to simulations in which only the device-to-device variation of the imbalance parameter w_min^+/w_min^− is eliminated from different layers. Green points correspond to simulations in which multiple RPU devices are mapped per weight for the second convolutional layer K_2. The RPU-baseline model with noise and bound management as well as the FP-baseline model are included for comparison. The model that is free from device variations in all four layers achieves a test error of 1.05%. We note that most of this improvement comes from the convolutional layers, as a very similar test error of 1.15% is achieved by the model that has no device variations for K_1 and K_2, whereas the model without any device variations for the fully connected layers W_3 and W_4 remains at the 1.3% level.
Among the convolutional layers, it is clear that K_2 has a stronger influence than K_1, as test errors of 1.2 and 1.4% are achieved for the models with device variations eliminated for K_2 and K_1, respectively. Interestingly, when we repeated a similar analysis by eliminating only the device-to-device variation of the imbalance parameter w_min^+/w_min^− from different layers, the same trend is observed, as shown by the red data points. These results highlight the importance of device asymmetry and show that even a few percent of device imbalance can significantly increase test error rates. It is clear that reducing device variations in some layers can further boost the network performance; however, for realistic technological implementations of crossbar arrays the variations are controlled by the fabrication tolerances of a given technology. Therefore, complete or even partial elimination of any device variation is not a realistic option. Instead, in order to get better performance, the effects of device variations can be mitigated by mapping more than one RPU device per weight, which averages out the device variations and reduces the variability (Chen et al., 2015a). Here, we propose a flexible multi-device mapping that can be realized in the digital domain by repeating the input signals going to the columns (or rows) of an RPU array, and/or summing (averaging) the output signals generated from the rows (or columns). Since the same signal propagates through many different devices and the results are summed in the digital domain, this technique averages out device variations in the array without physically hardwiring the lines corresponding to different columns or rows. To test the validity of this digitally controlled multi-device mapping approach, we performed simulations using models in which the mapping of the most influential layer, K_2, is repeated on 4 or 13 devices. We find that the multi-device mapping approach reduces the test error to 1.45 and 1.35% for the 4- and 13-device mapping cases, respectively, as shown by the green data points in Figure 4. The number of devices (#_d) used per weight effectively reduces the device variations by a factor proportional to √#_d. Note that the 13-device mapping of K_2 effectively reduces the device variations by a factor of 3.6, at the cost of an increase in the array dimensions to 416 × 401 (from 32 × 401). Assuming RPU arrays are fabricated with equal numbers of columns and rows, multi-device mapping of rectangular matrices such as K_2 does not introduce any operational (or circuit) overhead as long as the mapping fits within the physical dimensions of the array. However, if the functional array dimensions become larger than the physical dimensions of a single RPU array, then more than one array can be used to perform the same mapping. Independent of its physical implementation, this method enables flexible control of the number of devices used while mapping different layers and is therefore a viable approach for mitigating the effects of device variability. Update Management All RPU models presented so far use the stochastic update scheme with a bit length of BL = 10 and amplification factors that are equally distributed between the columns and the rows, with values C_x = C_δ = √(η/(BL w_min)) = 1.0. The choice of these values is dictated by the learning rate, which is a hyper-parameter of the training algorithm; the hardware should therefore be able to handle any value without imposing restrictions on it.
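A minimal simulation of this averaging effect (our own sketch, with an assumed device-to-device variation magnitude) shows that replicating a weight across #_d devices with independent variations shrinks the effective spread roughly as 1/√#_d:

```python
import numpy as np

np.random.seed(2)

def multi_device_spread(x, w_target, n_devices, dev_var=0.3, trials=10000):
    """Map one logical weight onto n_devices physical devices, each with an
    independent device-to-device deviation, and average their outputs."""
    w = w_target * (1.0 + dev_var * np.random.randn(trials, n_devices))
    return (w.mean(axis=1) * x).std()  # spread of the averaged output

for n_d in (1, 4, 13):
    print(n_d, multi_device_spread(x=1.0, w_target=0.5, n_devices=n_d))
# the printed spread shrinks roughly as 1/sqrt(n_d), e.g., ~3.6x for 13 devices
```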
The learning rate for the stochastic update scheme is the product of four terms: w_min, BL, C_x, and C_δ. w_min corresponds to the incremental conductance change on an RPU device due to a single coincidence event; therefore the value of this parameter may be strongly restricted by the underlying RPU hardware. For instance, w_min may be tunable only by shaping the voltage pulses used during the update cycle, which requires programmable analog circuits. In contrast, the control of C_x, C_δ, and BL is much easier and can be implemented in the digital domain. To test the effect of C_x, C_δ, and BL on the training accuracy, we performed simulations using the RPU-baseline model with the noise and bound management techniques described above. For all models we used the same fixed learning rate η = 0.01 and w_min = 0.001. The summary of these results is shown in Figure 5. FIGURE 5 | Average test error achieved between the 25th and 30th epochs for various RPU models with varying update schemes. Black data points correspond to updates with amplification factors that are equally distributed between the columns and the rows. Red data points correspond to models that use the update management scheme. The RPU-baseline model with noise and bound management as well as the FP-baseline model are included for comparison. For the first set of models we varied BL, with both C_x and C_δ fixed at √(η/(BL w_min)). Interestingly, increasing BL to 40 did not improve the network performance, whereas reducing it to 1 boosted the performance, and a test error of about 1.3% is achieved. These results may seem counterintuitive, as one might expect the larger-BL case to be less noisy and hence to perform better. However, for the BL = 40 case the amplification factors are smaller (C_x = C_δ = 0.5) in order to satisfy the same learning rate on average. This reduces the probability of generating a pulse, but since the streams are longer during the update, the average update (the number of coincidences) and its variance do not change. In contrast, for BL = 1 the amplification factors are larger, with a value of 3.16, and pulse generation therefore becomes more likely. Indeed, for cases in which the amplified values are larger than unity (C_x x_i > 1 or C_δ δ_j > 1), a single update pulse is always generated. This makes the updates more deterministic, but with an earlier clipping of the x_i and δ_j values encoded by the periphery. Also note that during a single update cycle a weight can change by at most BL × w_min, so for BL = 1 the weight value can only move by a single w_min per update cycle. However, the convolutional layers K_1 and K_2 receive 576 and 64 single-bit stochastic updates per image, respectively, due to weight reuse (sharing), while the fully connected layers W_3 and W_4 receive only a single one-bit stochastic update per image. The interaction of all of these terms and the trade-offs are nontrivial, and the precise mechanism by which BL = 1 performs better than BL = 10 is still unclear. However, the empirical data clearly show that there is an advantage for the above CNN architecture, which favors BL = 1, whereas the DNN used in Gokmen and Vlasov (2016) favored BL = 10. These results emphasize the importance of designing flexible hardware that can control the number of pulses used in the update cycle. We note that this flexibility can be achieved seamlessly for the stochastic update scheme without changing the design considerations for the peripheral circuits generating the random pulses.
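The stochastic update scheme can be sketched as follows (a simplified single-array model of our own, ignoring device asymmetry and bounds): each side generates BL random pulses with probabilities C_x·x_i and C_δ·δ_j, and every pulse coincidence moves the corresponding weight by ±w_min.

```python
import numpy as np

np.random.seed(3)

def stochastic_update(W, x, delta, eta=0.01, w_min=0.001, BL=10):
    """Parallel stochastic update on an RPU array: weights change by w_min
    at pulse coincidences; the expected update approximates -eta * x * delta^T."""
    C = np.sqrt(eta / (BL * w_min))          # amplification, split equally
    px = np.clip(np.abs(C * x), 0, 1)        # pulse probabilities, columns
    pd = np.clip(np.abs(C * delta), 0, 1)    # pulse probabilities, rows
    for _ in range(BL):
        x_pulse = np.random.rand(len(x)) < px
        d_pulse = np.random.rand(len(delta)) < pd
        coincide = np.outer(x_pulse, d_pulse)
        sign = -np.outer(np.sign(x), np.sign(delta))  # gradient descent step
        W += w_min * sign * coincide
    return W

W = np.zeros((8, 4))
x, delta = np.random.randn(8), np.random.randn(4) * 0.1
print(stochastic_update(W, x, delta))
```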
In addition to BL, for the second set of simulations the amplification factors C_x and C_δ used during the update cycle are also varied, to some extent, while keeping the average learning rate fixed. The models above all assume that equal values of C_x and C_δ are used during updates; however, it is possible to use different values for C_x and C_δ as long as their product satisfies η/(BL w_min). In our update management scheme, we choose C_x and C_δ such that the probabilities of generating pulses from the columns (x) and the rows (δ) are of roughly the same order. This is achieved by rescaling the amplification factors by the ratio m = √(δ_max/x_max), so that they can be written as C_x = m √(η/(BL w_min)) and C_δ = (1/m) √(η/(BL w_min)). Although for BL = 10 this method did not yield any improvement, for BL = 1 an error rate as low as 1.1% is achieved, showing that the proposed update management scheme can yield better performance. This update scheme does not alter the expected change in the weight values, and therefore its benefits may not be obvious. Note, however, that toward the end of training it is very likely that the ranges of values in the columns (x) and rows (δ) are very different; i.e., x may have many elements close to 1 (or −1), whereas δ may have elements very close to zero (δ ≪ 1). In this case, if the same C_x and C_δ are used, the updates become row-wise correlated: although the generation of a pulse for a given δ_j is unlikely, when it does occur it results in many coincidences along row j, since many x_i values are close to unity and hence many column pulses are generated. Our update management scheme eliminates these correlated updates by shifting the probabilities from the columns to the rows, simply by rescaling the values used during the update. This can be viewed as using rescaled vectors (mx and δ/m) for the updates, which are composed of values of roughly the same order. This update management scheme relies on a simple rescaling performed in the digital domain, and therefore does not change the design of the analog circuits needed for the update cycle. The additional computations introduced in the digital domain are not significant, requiring only O(M) (or O(N)) extra operations, similar to the overhead associated with the noise management technique. Results Summary The summary of the CNN training results for the various RPU models that use the above management techniques is shown in Figure 6. When all management techniques are disabled, the RPU-baseline model can only achieve test errors above 10%. When the noise and bound management techniques are implemented, this large error rate is reduced significantly, to about 1.7%. When the update management scheme with a reduced bit length during updates is additionally enabled, the model achieves a test error of 1.1%. Finally, the combination of all of the management techniques with the 13-device mapping of the second convolutional layer (K_2) brings the model's test error to 0.8%. The performance of this final RPU model is almost indistinguishable from the FP-baseline model, which demonstrates the successful application of the RPU approach to training CNNs. We note that all of these mitigation methods can be turned on selectively, simply by programming the operations performed in the digital circuits; they can therefore be applied to any network architecture beyond CNNs without changing the design considerations for realistic technological implementations of the crossbar arrays and analog peripheral circuits.
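A sketch of this rescaling (our own illustration; the x and δ values are made up) shows how the uneven split equalizes the column and row pulse probabilities while preserving the learning rate:

```python
import numpy as np

def managed_amplification(x, delta, eta=0.01, w_min=0.001, BL=1):
    """Update management: split the amplification unevenly so that the
    column and row pulse probabilities are of the same order, while the
    product C_x * C_delta still equals eta / (BL * w_min)."""
    base = np.sqrt(eta / (BL * w_min))
    m = np.sqrt(np.max(np.abs(delta)) / np.max(np.abs(x)))
    return m * base, base / m  # (C_x, C_delta)

x = np.array([0.9, -0.8, 0.95])       # activations near unit range
delta = np.array([1e-3, -2e-3])       # error signals near zero
C_x, C_d = managed_amplification(x, delta)
print(C_x * np.abs(x).max(), C_d * np.abs(delta).max())  # now comparable
```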
We note that for all of the simulation results described above we do not include any nonlinearity in the weight update, as this effect has been shown to be unimportant as long as the updates are symmetric in the positive and negative directions (Agrawal et al., 2016a; Gokmen and Vlasov, 2016). In order to check the validity of this behavior for the above CNN architecture, we performed simulations using the blue model of Figure 6 while including a weight-dependent update rule with different functional forms w_min(w_ij) that have a linear or a quadratic dependence on the weight value. Indeed, this additional nonlinear weight update rule does not cause any additional error, even when w_min varies by a factor of about 10 within the weight range. DISCUSSION AND CONCLUSIONS The application of the RPU device concept to training CNNs requires a rearrangement of the kernel parameters, and only after this rearrangement can the inherent parallelism of the RPU array be fully utilized for the convolutional layers. A single vector operation performed on an RPU array takes constant time, O(1), independent of the array size; however, because of the weight sharing in convolutional layers, the RPU arrays are accessed several times, resulting in a series of vector operations performed on the array for all three cycles. These repeated vector operations introduce interesting challenges and opportunities for training CNNs on RPU-based hardware. The array sizes, weight sharing factors (ws), and the number of multiply-and-add (MAC) operations performed at different layers of a relatively simple but respectable CNN architecture, AlexNet (Krizhevsky et al., 2012), are shown in Table 2. This architecture won the large-scale ImageNet competition by a large margin in 2012. We recognize that there has been significant progress since 2012, and we choose the AlexNet architecture only for its simplicity and to illustrate the interesting possibilities that RPU-based hardware enables when designing new network architectures. When the AlexNet architecture runs on conventional hardware (such as a CPU, GPU, or ASIC), the time to process a single image is dictated by the total number of MACs; therefore, the contributions of the different layers to the total workload are additive, with K_2 consuming about 40% of the workload. The total number of MACs is usually considered the main metric determining the training time, and practitioners therefore deliberately construct network architectures to keep the total number of MACs below a certain value. This constrains the choice of the number of kernels, and their dimensions, for each convolutional layer, as well as the size of the pooling layers. Assuming a compute-bound system, the time to process a single image on conventional hardware can be estimated as the ratio of the total number of MACs to the performance metric of the corresponding hardware (Total MACs/Throughput). In contrast to conventional hardware, when the same architecture runs on RPU-based hardware, the time to process a single image is not dictated by the total number of MACs. Rather, it is dominated by the largest weight reuse factor in the network. For the above example, the operations performed on the first convolutional layer K_1 take the longest among all layers because of its large weight reuse factor of ws = 3,025, although this layer has the smallest array size and comprises only 10% of the total number of MACs.
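For concreteness, a back-of-the-envelope comparison of the two timing models (our own sketch; the total MAC count and throughput below are hypothetical placeholders, not values from Table 2, while ws = 3,025 and t_meas = 80 ns are taken from the text and the ws × t_meas estimate discussed next):

```python
# Conventional hardware: per-image time scales with the total MAC count.
total_macs = 1.1e9          # hypothetical total MACs per image (AlexNet-scale)
throughput = 10e12          # hypothetical sustained throughput, 10 TMAC/s
print(f"conventional: {total_macs / throughput * 1e6:.0f} us per image")

# RPU hardware: per-image time is set by the largest weight reuse factor,
# not by the MAC count (ws * t_meas, using layer K_1 values).
ws_k1, t_meas = 3025, 80e-9
print(f"RPU (pipelined): {ws_k1 * t_meas * 1e6:.0f} us per image")
```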
Assuming an RPU-based accelerator with many RPU arrays and pipeline stages between them, the average time to process a single image can be estimated as ws × t_meas using the values from layer K_1, where t_meas is the measurement time corresponding to a single vector-matrix multiplication on the RPU array. First, this metric emphasizes the constant-time operation of RPU arrays, as the training time is independent of the array sizes, the number of trainable parameters in the network, and the total number of MACs. This would enable practitioners to use increasing numbers of kernels, with larger dimensions, without significantly increasing training times. Such network configurations would be impractical on conventional hardware. However, the same metric also highlights the importance of t_meas and of the ws factor of layer K_1, which represents a serious bottleneck. Consequently, it is desirable to devise strategies that reduce both parameters. In order to reduce t_meas, we first discuss designing small RPU arrays that can operate faster. It is clear that large arrays are favored in order to achieve a high degree of parallelism for the vector operations. However, the parasitic resistance and capacitance of a typical transmission line with a thickness of 360 nm and a width of 200 nm limit the practical array size to 4,096 × 4,096, as discussed in Gokmen and Vlasov (2016). For an array of size 4,096 × 4,096, a measurement time of t_meas = 80 ns is derived considering the acceptable noise threshold, which is dominated by the thermal noise of the RPU devices. Using the same noise analysis described in Gokmen and Vlasov (2016), the following inequality can be derived for the thermal-noise-limited read operation during the forward/backward cycles of an array of size N × N: t_meas ≳ 4 k_B T N β² R_device / V_in², where R_device is the average device resistance, β is the resistance on/off ratio of an RPU device, and V_in is the input voltage used during the read. For the same noise specification, it is clear that for a small array with 512 × 512 devices t_meas can be reduced to about 10 ns for faster computations, assuming all other parameters are fixed. It is not desirable to build an accelerator chip composed entirely of small arrays: for a small array, power and area are dominated by the peripheral circuits (mainly the ADCs), and therefore a small array has worse power and area efficiency metrics than a large array. However, a bimodal design consisting of large and small arrays achieves better hardware utilization and provides a speed advantage when mapping architectures with significantly varying matrix dimensions. While the large arrays are used to map fully connected layers or large convolutional layers, for a convolutional layer such as K_1 using a small array would be the better solution, providing a reduction in t_meas from 80 to 10 ns. In order to reduce the weight reuse factor of K_1, we next discuss allocating two (or more) arrays to the first convolutional layer. When more than one array is allocated to the first convolutional layer, the network can be forced to learn separate features on different arrays by directing the upper (left) and lower (right) portions of the image to separate arrays and by computing the error signals and the updates independently. Not only does this allow the network to learn independent features for separate portions of the image without requiring any weight copying or synchronization between the two arrays, but it also reduces the weight reuse factor of each array by a factor of 2.
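As a quick consistency check of the linear N-scaling in this inequality (a sketch of our own; only the proportionality to N is taken from the text, and β, R_device, and V_in are assumed placeholder values):

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def t_meas_limit(N, beta=10.0, R_device=24e6, V_in=1.0, T=300.0):
    """Thermal-noise-limited integration time for an N x N array,
    t_meas >~ 4*k_B*T*N*beta**2*R_device / V_in**2 (reconstructed form;
    beta, R_device, and V_in are hypothetical placeholder values)."""
    return 4 * k_B * T * N * beta**2 * R_device / V_in**2

print(t_meas_limit(4096) / t_meas_limit(512))  # -> 8.0, cf. 80 ns vs 10 ns
```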
This reduces the time to process a single image while making the architecture more expressive. Alternatively, one could synchronize the two arrays by randomly shuffling the portions of the images processed by the different arrays. This approach would force the network to learn the same features on the two arrays, with the same factor-of-2 reduction in the weight reuse factor. These subtle changes in the network architecture provide no speed advantage on conventional hardware, which highlights the interesting possibilities that an RPU-based architecture offers. In summary, we show that the RPU concept can be applied beyond fully connected networks and that RPU-based accelerators are a natural fit for training CNNs as well. These accelerators promise unprecedented speed and power benefits and hardware-level parallelism as the number of trainable parameters increases. Because of the constant-time operation of RPU arrays, RPU-based accelerators enable interesting network architecture choices without increasing training times. However, all of the benefits of an RPU array are tied to the analog nature of the computations performed, which introduces new challenges. We show that digitally programmable management techniques are sufficient to eliminate the noise and bound limitations imposed on the array. Furthermore, their combination with the update management and device variability reduction techniques enables a successful application of the RPU concept to training CNNs. All the management techniques discussed in this paper are implemented in the digital domain, without changing the design considerations for the array or for the supporting analog peripheral circuits. These techniques make the RPU approach suitable for a wide variety of networks beyond convolutional or fully connected networks. AUTHOR CONTRIBUTIONS TG conceived the original idea; TG, MO, and WH developed the methodology, analyzed and interpreted the results, and drafted and revised the manuscript.
Pure and Sb-doped ZrO2 for removal of IO3− from radioactive waste solutions Radioactive 129I, with a long half-life (1.57 × 10^7 y) and high mobility, is a serious radiohazard and one of the top-risk radionuclides associated with accidental and planned releases to nature. The complex speciation chemistry of iodine makes its removal a complicated task, and usually a single method is not able to remove all iodine species. In particular, its oxidized form, iodate (IO3−), lacks a selective and effective removal method. Here, granular aggregates of hydrous zirconium oxides with and without antimony doping were tested for IO3− removal, and the effects of contact time, competing anions in different concentrations, and pH were examined. The materials showed high selectivity for IO3− (Kd up to 50,000 ml/g) in the presence of competing ions and relatively fast uptake kinetics (equilibrium reached in < 1 h). However, B(OH)4− and SO42−, as competing ions, lowered the iodate uptake significantly in basic and acidic solution, respectively. The suitability of the materials for practical applications was tested in a series of column experiments, where the materials showed remarkably high apparent capacities for IO3− uptake (3.2–3.5 mmol/g). Introduction Iodine is a vital element for mammals, as it is a critical component of the hormones produced in the thyroid gland. An insufficient supply of iodine has several harmful health effects, collectively described as the iodine deficiency disorders (Zimmermann and Andersson 2012; Zimmermann and Boelaert 2015). In order to eliminate the adverse health effects of iodine deficiency, serious efforts have been made worldwide to supply populations with the vital iodine (Andersson et al. 2010). However, in addition to the stable isotope 127I, the thyroid gland absorbs the radioactive iodine isotopes generated by nuclear fission. This increases the absorbed radiation dose of the gland, which leads to an elevated risk of thyroid cancer. Iodine has several radioactive isotopes, of which the short-lived isotopes, e.g., 131I (t½ = 8 d) and 133I (t½ = 20.8 h), pose an acute risk in the case of a nuclear accident, whereas the long-term significance comes from 129I with its extremely long half-life (1.57 × 10^7 y). Indeed, the nuclear accidents at Chernobyl and Fukushima released large amounts of 129I to the environment (Aldahan et al. 2007; Hou et al. 2009, 2013). To minimize the internal dose to populations, the authorities have set national or global guidance levels for the concentration of 129I in drinking water (WHO: 1 Bq/L (World Health Organization 2017); DWS in the USA: 0.04 Bq/L (Kaplan et al. 2014)). In the case of elevated concentrations, purification of the water is required (Li et al. 2018). Because of the potential radiation dose to populations, efficient treatment materials and methods are required for iodine-containing wastes. Vast volumes of iodine waste waters are stored at various places worldwide, e.g., at the Hanford site (a former plutonium production site in Washington, USA), because of the lack of a suitable treatment method. The geosphere and the conventional synthetic materials used for the purification of other radioactive contaminants do not efficiently adsorb the different species of iodine (Li et al. 2018). The complex chemistry of iodine makes waste treatment difficult. Iodine has multiple stable redox states with different chemical and physical properties.
For example, iodine (I2) can occur as a gas, whereas iodide (I−) and iodate (IO3−) are usually present as dissolved ions in solution (Kaplan et al. 2014). IO3− is the main species in aqueous solution under oxidizing conditions, whereas I− prevails in more reducing environments. Molecular iodine, I2, is a major species only at low pH and over a narrow Eh range (Moore et al. 2019). In addition to the inorganic species, iodine also reacts readily with organic compounds, producing organo-iodine compounds with a wide range of chemical characteristics (Andersen et al. 2002). The diversity of iodine speciation, and the fact that iodine can be present in several redox states simultaneously, make the development of a universal iodine removal material difficult or even impossible. Therefore, different removal strategies are required for the different iodine species. Zirconium oxide (ZrO2, also known as zirconia) and its doped (e.g., Y, Ca, Ce or Sb) derivatives have a versatile range of favourable properties, such as toughness, high physical and chemical stability, low solubility in water, and high ionic conductivity, which are utilized in a wide range of applications, including catalysts (Shao et al. 2010), gas sensors (Borhade et al. 2018), ceramics, and fuel cells (Malolepszy et al. 2015). ZrO2 has different crystalline and non-crystalline structures: the three crystalline phases (monoclinic, tetragonal, and cubic) and the amorphous phase are stable at low pressures. The stability and prevalence of the different structures depend on many factors, such as the concentration and nature of dopants, the crystallite size, and the synthesis conditions (Graeve 2008). ZrO2 is also known for both cation- and anion-exchange properties (Singh and Tandon 1977; Veselý and Pekárek 1972). Antimony as a dopant can affect the adsorption performance of ZrO2 in several ways. It may be involved in redox reactions, depending on the adsorbate element and the oxidation state (III or V) of the antimony used as a dopant. Most importantly, the introduction of a trivalent dopant ion, such as antimony, into the crystal structure of ZrO2 creates crystal defects, increasing the number of oxygen vacancies (Graeve 2008). The latter could enhance the interaction of the material with oxygen-containing substances. Iodine, like antimony and technetium, is present as an oxyanion under oxidizing conditions, but its adsorption properties on zirconium oxides have not been extensively studied. This study aims to fill that gap. The effect of the solution matrix (different competing ions in different concentrations, pH) and the uptake kinetics were studied by batch experiments. In addition, the application of the ZrO2-based materials in water treatment was demonstrated using 125IO3− and 127IO3− spiked simulant solutions in dynamic column experiments. The main laboratory experiments of this study were performed at the Radiochemistry unit of the Department of Chemistry, Faculty of Science, University of Helsinki, Finland, between 2019 and 2020. The synchrotron experiments were performed at the BL22-CLAESS beamline of the ALBA synchrotron light source in Barcelona, Spain. Chemicals All reagents were of analytical grade (Alfa Aesar, Sigma-Aldrich, Riedel de Haën) and used without further purification. The radioactive Na125I tracer was purchased from PerkinElmer. Solutions were made by dissolving solids in grade 1 deionized water (18.2 MΩ cm at 25 °C, Milli-Q® Merck Millipore).
The test solutions used in the experiments consisted of simple solutions containing a single competing anion, prepared by dissolving an appropriate amount of H3BO3 in the case of borate, or the sodium salts of the other corresponding anions (NaCl, NaNO3, Na2SO4), in water. In addition to the simple solutions, a more complex simulant solution representing chloride-containing process water (Table 1) was used in the batch experiments and in a single column experiment. This solution was prepared by dissolving the Na, Ca or Mg salts of the anions, and the pH of the solution was finally adjusted to 7.0 with 1 M NaOH (Na+ from this step is also included in the values given in Table 1). Synthesis of materials ZrO2 with and without antimony doping was synthesized by a precipitation method. First, in the case of ZrO2, 100 g of zirconium basic carbonate (Alfa Aesar) was carefully dissolved in 1 L of 6 M HNO3 under vigorous stirring using a mechanical stirrer. In the case of Zr(Sb)O2, 45 g of ZrCl4 (Riedel de Haën) and 2 g of SbCl3 (Sigma-Aldrich) were used as the starting materials and were dissolved in 2 L of 3 M HCl. Next, approximately 1200 ml of 6 M NH3 solution was slowly added to both beakers until the pH reached 7.8 and a white precipitate was formed. The precipitate was left to settle overnight, and the white precipitate and clear supernatant were separated. The precipitate was washed with deionized water until the conductivity of the supernatant was less than 4.0 mS/cm. Finally, the supernatant was discarded, and the slurry was dried in an oven at 70 °C for three days. The dried sample was ground to a fine powder and sieved to a particle size of 74–149 μm. Characterization The surface morphologies of the materials, coated with 4 nm Au-Pd by sputtering, were examined using a Hitachi S4800 FE-SEM (field-emission scanning electron microscope). The crystallinity was studied by powder X-ray diffraction (XRD) in the Bragg-Brentano geometry using a Panalytical X'pert PW3710 MPD diffractometer, a PW3020 vertical goniometer, and a Cu Kα radiation source (λ = 1.54056 Å, 40 kV and 40 mA). The zeta potential of the materials was measured with a Malvern Panalytical Zetasizer Nano ZS. In these experiments, 20 mg of finely ground ZrO2 or ZrSbO2 powder and 10 ml of 10 mM NaNO3 were mixed in a rotary mixer for 24 h before the measurement of the equilibrium pH and zeta potential. Separate samples were prepared, and their pH was individually adjusted using diluted HCl or NaOH solutions. Measurement of iodate concentration Two different iodate probes were used in the experiments of this study: radioactive 125IO3− and stable 127IO3−. In the experiments with the radioactive 125IO3− tracer, 5 ml of filtered solution in polyethylene vials was measured with a Wallac 1480 Wizard 3″ automated NaI-scintillation γ-detector. For the experiments with non-radioactive 127IO3−, 1 ml of filtered solution in a glass HPLC vial was analysed for iodide/iodate concentrations with an anion-exchange chromatography column (Dionex AS11 4 × 250 mm analytical column and AG11 4 × 50 mm guard column) attached to an Agilent 1260 Infinity quaternary pump and autosampler HPLC system connected to an Agilent 7800 ICP-MS via a direct connection between the column and the ICP nebulizer. 50 mM sodium hydroxide (NaOH) was used as the eluent with a flow rate of 0.8 ml/min. The ICP-MS was operated in no-gas mode. Dissolution of CO2 from the ambient laboratory air was prevented by constant Ar gas bubbling through the eluent container.
The two main species of iodine, I− and IO3−, were separated based on their retention times, first determined with standard solutions prepared from KI and KIO3, and the concentrations were calculated from the chromatogram peak areas using external standards over the concentration range 0–200 μg L−1 for both iodine species. The stability of the instrument was followed using quality control samples (5 + 5 and 50 + 50 μg L−1 of total iodine, I− + IO3−) and blank samples. The retention times were 120 s for IO3− and 360 s for I−, and they remained stable during the analysis period. The limits of detection (LOD) of the HPLC-ICP-MS system were 0.5 μg L−1 and 0.2 μg L−1 for I− and IO3−, respectively. A repeated-injection method was used for the LOD determination by measuring 8 replicates of a 1 μg L−1 I− and IO3− calibration standard and using a calculation procedure described in the literature (Wells et al. 2011). The speciation of iodine was confirmed with two methods, depending on whether radioactive 125IO3− or non-radioactive 127IO3− was used. For experiments with non-radioactive iodine, commercial K127IO3 was used as the tracer without any treatment. In addition, the speciation was confirmed using HPLC-ICP-MS. However, because the radioactive tracer was in the form of Na125I, oxidation of iodide to iodate was needed. In order to obtain 125IO3− solutions, an appropriate volume of NaOCl solution was added to obtain a 2 × 10−4 M NaOCl concentration in 10 mM NaOH, and the solution was left to stand for at least 24 h before use. The speciation of 125IO3− was confirmed with repeated simple batch experiments on the tracer solutions, in which silver-impregnated activated carbon (Silcarbon Aktivkohle GmbH, Germany) was used to quantify the iodate fraction. The procedure has previously been validated for the alteration and separation of the oxidation states of iodine. Batch ion exchange experiments To assess the potential of the zirconium oxides for iodate uptake, the combined effect of different competing ions at different pH values and concentrations was studied in batch experiments. The sample preparation in all batch experiments was similar: 20 ± 1 mg of ground material was weighed into a polyethylene vial, 10 ml of the appropriate test solution was pipetted in, the pH was adjusted by adding an appropriate volume of NaOH or HCl, and finally either 125IO3− or 127IO3− was added as the tracer. The samples were equilibrated for 24 ± 2 h, and the solid and liquid phases were separated by centrifugation (2100 g, 10 min) followed by filtering with a 0.2 μm filter (PVDF LC, Acrodisc, Gelman Sciences). The filtered solution was transferred either to a 10 ml polyethylene vial (125I) or to a glass HPLC vial (127I). The equilibrium pH of the remaining supernatant was measured using a Ross combination electrode. The determination of the iodine concentration was done with the NaI-scintillation γ-detector (125I) or HPLC-ICP-MS (127I), depending on whether the radioactive or the stable isotope of iodine was used in the particular experiment. The effect of competing ions on iodate adsorption was tested using 10 mM solutions of the different anions (Cl−, NO3−, B(OH)4−, SO42−) as a function of pH in the range 3–10. The effect of the ion concentration was screened at a single pH value using different concentrations (1, 10 and 100 mM). The experiments with a single competing ion were done separately with the radioactive 125IO3−.
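As an illustration of the repeated-injection LOD estimate (our own sketch; the replicate values are made up, and the 3 × SD convention shown here is one common choice — the study follows the procedure of Wells et al. 2011):

```python
import statistics

# Hypothetical 8 replicate injections of a 1 ug/L IO3- standard (ug/L)
replicates = [0.96, 1.04, 1.01, 0.93, 1.07, 0.99, 1.02, 0.95]

sd = statistics.stdev(replicates)   # sample standard deviation
lod = 3 * sd                        # common 3-sigma detection limit
print(f"SD = {sd:.3f} ug/L, LOD ~ {lod:.2f} ug/L")
```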
In addition, a pH series with the simulant solution (see Table 1) containing several components was done with non-radioactive 127IO3−. The latter was done to assess the effect of a more complex matrix, as well as the potential difference between 125IO3− and 127IO3− possibly caused by concentration or speciation differences. The radioactivity of carrier-free 125I was between 100 and 250 Bq per 10 ml sample, corresponding to concentrations on the scale of ~10−13 M. With non-radioactive 127I, the concentration was 8 × 10−7 M. The distribution coefficients, Kd, in the batch experiments were calculated by Eq. 1: Kd = ((ci − cf)/cf) × (V/m), (1) where ci is the initial concentration of iodate in the solution, cf the final concentration of iodate in the solution, V the volume of the solution in ml, and m the mass of the material in g. The error calculations are presented in the SI. Column experiments Dynamic column experiments were done to test the iodate uptake properties of the zirconium oxide materials in a more practical setup. Three sets of column experiments were done: the first with a solution containing only 10 mM K127IO3, to determine the maximum apparent capacity; the second with 10 mM NaNO3, 1 mM KIO3, and 125IO3− as a tracer, to see the effect of a weakly competing anion; and the last with the more complex simulant solution (Table 1), demonstrating the uptake properties under conditions more realistic for waste management. The first two experiments were done for both ZrSbO2 and ZrO2, the last experiment only with ZrSbO2. Approximately 0.5 g (1 g in the case of the third experiment with the simulant solution) of the sieved material (74–149 μm) was loaded, by pipetting with a small volume of 10 mM NaNO3, into a low-pressure borosilicate glass column with a diameter of 0.7 cm and a porous polymer bed support at the bottom (Econo-Column®, Bio-Rad Laboratories, Inc.). Two different approaches were used for probing the breakthrough: the non-radioactive 127IO3− tracer and the radioactive 125IO3−. The specifications of the individual column experiment sets are described in detail in Table 2. The 125IO3− or 127IO3− spiked simulant solution was pumped through the column bed (volume ~0.6 ml for experiments 1 and 2, ~1.2 ml for column experiment 3 with the simulant solution) at a flow rate of 10 bed volumes per hour. For column experiment 1, only the total concentration of iodate and the pH of the effluent and eluent were measured, i.e., no individual fractions were collected. In column experiments 2 and 3, an automated fraction collector was used to collect samples at intervals of one to two hours, and the samples were measured for iodate concentration and pH. In addition, the feed solutions were inspected regularly for iodate concentration and pH to make sure that no changes, for example in speciation, had occurred before contact with the materials. The I K-edge XANES (X-ray absorption near edge structure) spectra of the IO3− loaded ZrO2 and ZrSbO2 (from column experiment 1) were measured at the BL22-CLAESS beamline (Simonelli et al. 2016) of the ALBA synchrotron light source, Barcelona, Spain. The loaded column beds were transferred to separate vials and washed twice by mixing with 5 ml of deionized water, followed by centrifugation (2100 g, 10 min) and removal of the supernatant. After the wash, the samples were dried at 70 °C overnight.
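A direct transcription of Eq. 1 (a small helper of our own, with illustrative numbers matching the batch setup of 10 ml of solution on 20 mg of material):

```python
def distribution_coefficient(c_i, c_f, volume_ml, mass_g):
    """Kd = ((c_i - c_f) / c_f) * (V / m), in ml/g (Eq. 1)."""
    return (c_i - c_f) / c_f * (volume_ml / mass_g)

# Illustrative batch sample: 98% of the iodate removed (c_f = 0.02 * c_i)
print(distribution_coefficient(c_i=1.0, c_f=0.02, volume_ml=10.0, mass_g=0.020))
# -> 24500 ml/g
```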
The samples and the reference materials were homogenized with a mortar and pestle, and approximately 50 mg of sample (less in the case of the references) was mixed with 150 mg of cellulose and pressed into pellets. The samples were measured in transmission mode inside a He cryostat. The collected spectra were analysed and merged with the Athena software (Ravel and Newville 2005). The XANES spectra of the samples were compared with KIO3, I2 and KI as reference materials. Characterization The synthesized materials were examined using XRD for their crystallinity (Fig. 1a), SEM for their morphology (ZrSbO2: Fig. 1b; see Supplementary Information (SI) Fig. 1 for ZrO2), and electrophoretic light scattering for the isoelectric point (see SI Fig. 2). XRD of the zirconium oxide materials shows a broad peak located at a 2θ of 30°, making the determination of the crystal structure impossible. For both materials, the XRD and SEM measurements indicate large aggregates of poorly crystalline zirconia. This kind of structure is optimal for practical column use due to the low flow resistance and the fast adsorption on the edge surfaces. Similar ZrO2 syntheses in our previous studies have yielded materials with high surface areas. The isoelectric points of the materials were determined by measuring the zeta potential in the pH range 4–10 (see SI Fig. 2). Adsorption kinetics The kinetics of iodate uptake was studied for both materials to see how fast the uptake reaction proceeds to equilibrium (see SI for the graph). From the start, both materials showed a similar trend, and the adsorption reached over 99.5% after 60 min. We suspect that the reaction kinetics is fast because IO3− does not have to diffuse into the crystalline structure; instead, the adsorption occurs on the crystallite surfaces. Effect of pH and competing ions on the adsorption The effect of pH was studied in the range between 3 and 10 with a series of batch experiments done with different competing ions (Cl−, NO3−, SO42− and B(OH)4−) at a concentration of 10 mM. The results are shown as the mean of parallel samples (n = 2); the error bars represent the standard deviation of the parallel samples (n = 2) and are included in the figure, but are not visible for all of the data points as they are smaller than the markers. Both ZrSbO2 and ZrO2 show a similar uptake trend for IO3− as a function of pH (Fig. 2): the Kd values are highest at low pH (10,000 ml g−1) and decrease with increasing pH (to 100 ml g−1). A clearly visible decrease in uptake is observed for all the ions at pH 8. This is most likely associated with the isoelectric points of ZrO2 and ZrSbO2, which lie within this range (see SI Fig. 2). Above this point, the surface charge of the materials turns negative, causing repulsion of negatively charged ions and inhibiting the adsorption process. Of all the ions, SO42− has the most dramatic effect on IO3− uptake. The uptake of IO3− in SO42− solution was notably lower at low pH compared with the other competing ions. At low pH, the divalent charge of sulphate strongly affects the iodate uptake. At pH values closer to and above the isoelectric points, the differences in uptake are significantly smaller, apart from borate, which has changed its speciation from neutral B(OH)3 to negatively charged B(OH)4−. It is suggested that the uptake of iodate at high pH proceeds by a mechanism other than traditional anion exchange (rather, uptake at defect sites on the material surface).
In addition, SO42− may react with the hydroxyl groups in zirconia, causing sulphonation of the surface (Chen et al. 1993; Deshmane and Adewuyi 2013). Compared with SO42−, the uptake of IO3− is significantly higher in the case of NO3− and Cl− as competing ions. In borate solution, the IO3− uptake is at the same level as the latter at low pH (< 6), but at higher pH a significant drop in the IO3− uptake was observed, most likely due to the speciation change of borate. The anionic B(OH)4− becomes thermodynamically stable above pH 7 and competes more with IO3− than does the neutral B(OH)3, which is the only species at low pH. The effect of the concentration (1–100 mM) of the competing ions Cl−, NO3− and SO42− was tested (Fig. 3) for both ZrO2 and ZrSbO2, but only at pH 4, where the IO3− uptake should be close to optimal based on the pH effect studies conducted in this study (Fig. 2). Figure 3 shows the distribution coefficients calculated as the mean of the parallel samples (n = 3). The error bars representing the standard deviation of the parallel samples are included in the figure but are not visible, as they are smaller than the markers. At the tested conditions, the difference in Cl− and NO3− concentrations did not show any effect on IO3− uptake, as the Kd values remained high (> 10,000 ml/g) over the whole concentration range. In the case of borate, pH 8 was used, because the anionic species of borate are not thermodynamically stable at pH 4. For B(OH)4− at pH 8, the decrease of the IO3− uptake as a function of the borate concentration is dramatic: 100-fold, from 10,000 to 100 ml/g for ZrSbO2, and even larger in the case of ZrO2. The adsorption behaviour of IO3− in the presence of B(OH)4− resembles a typical uni-uni ion exchange reaction, but it should be noted that the pH values are close to the PZC and were measured with a pH electrode and are therefore not accurate enough for a precise determination of the exchange reaction. Compared with borate, the rise of the SO42− concentration affected the uptake less: the Kd values drop ten-fold, from 10,000 ml/g at 1 mM to almost 1,000 ml/g at 100 mM. This does not agree with a typical 1:2 ion exchange between SO42− and IO3−, and the role of other uptake processes, such as surface complexation, has to be considered for the IO3− uptake besides ion exchange. As the IO3− adsorption is unaffected by increasing ionic strength in the case of NO3− or Cl−, this would indicate that the adsorption is not, at least not solely, due to ion exchange. The rising SO42− and B(OH)4− concentrations have a strong effect on IO3− adsorption, which would indicate that they compete for the same sites with IO3− at the corresponding conditions. Column experiments A series of column experiments was performed to assess the suitability of ZrO2 and ZrSbO2 in a more practical column set-up. Three sets of column experiments were done (Table 3): the first with 10 mM K127IO3 (to determine the maximum apparent capacity), the second with 10 mM NaNO3 and 1 mM KIO3 traced with 125IO3− (as an iodate tracer for accurate and simple measurements), and the third with the more complex simulant solution (Table 1), demonstrating the performance under more realistic conditions. In the simple KIO3 solution, both materials showed similar apparent capacities for iodate: ZrSbO2 (3.5 mmol/g) performed slightly more efficiently than ZrO2 (3.2 mmol/g). The feed pH was 6.0, and the final pH was 4.7 and 4.3 for ZrSbO2 and ZrO2, respectively.
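The apparent column capacities quoted here can be estimated from the breakthrough data as the iodate fed until breakthrough per gram of bed (our own sketch with illustrative numbers; the reported values come from the full breakthrough integration):

```python
def apparent_capacity_mmol_per_g(feed_mM, bed_volume_ml, bv_to_breakthrough, mass_g):
    """Iodate fed until breakthrough divided by the bed mass (mmol/g)."""
    fed_mmol = feed_mM * bed_volume_ml * bv_to_breakthrough / 1000.0
    return fed_mmol / mass_g

# Illustrative: 1 mM KIO3 feed, ~0.6 ml bed, breakthrough at ~500 bed volumes,
# 0.5 g of ZrSbO2 (cf. the 0.64 mmol/g reported for column experiment 2)
print(apparent_capacity_mmol_per_g(1.0, 0.6, 500, 0.5))  # -> 0.6 mmol/g
```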
The HPLC-ICP-MS analysis (see the SI for example chromatograms) did not show any changes in the iodine oxidation state, as only IO3− was observed in the eluent and no I− or other iodine species were detected in the effluent. In addition, the measured iodine K-edge XANES spectra of both ZrO2 and ZrSbO2 (Fig. 4) show the features characteristic of iodate, confirming that no redox changes occurred during the adsorption and that iodine adsorbed to the materials as iodate. In 10 mM NaNO3 solution, the IO3− uptake performance was reduced for both materials. As in the first set of column experiments, ZrSbO2 (0.64 mmol/g) performed more efficiently than ZrO2 (0.58 mmol/g). The breakthrough curves of IO3− in these experiments are presented in Fig. 5. The breakthrough curve has a highly symmetrical shape for ZrSbO2, with the breakthrough starting at approximately 500 BVs. For ZrO2, the breakthrough started at an earlier stage, after approximately 300 BVs. Complete breakthrough was reached at the same stage for both materials, at 700 BVs. The pH evolutions of the effluents were similar for both materials (Fig. 6): the pH rose steadily from the initial 2.5 to 4.0, and at the point where the actual breakthrough started (600 BVs) the pH rose quickly to 5.0. In general, the pH was slightly lower for ZrSbO2 than for ZrO2 at the corresponding points of the experiment. The performance of ZrSbO2, which showed the higher apparent capacity for IO3− in the previous experiments, was also tested in the more complex solution (Table 1) to better assess its suitability for real applications (Fig. 7). The breakthrough of IO3− appeared after 1250 BVs, and complete breakthrough was reached at 2500 BVs. The apparent capacity in these conditions is dramatically lower (approximately 5 μmol/g) compared with the simpler solutions in the previous column experiments. Most probably, this is because of the competition from other ions, especially divalent sulphate, which was also observed in the batch experiments. A steep rise in pH occurred at the same time as the breakthrough. Altogether, the column experiments demonstrate that the apparent IO3− adsorption capacity is lowered from the ideal conditions due to competing anions. However, even in the complex solution (Fig. 7), with a high excess of other ions (SO42− to IO3− and Cl− to IO3− ratios of 700 and 14,000, respectively), a selective removal of IO3− is achieved. The lower uptake in the column experiments with higher ionic strength can be explained by IO3− being taken up on the zirconium oxide materials by different adsorption mechanisms. This is supported by the results of the batch experiments ("Effect of pH and competing ions on the adsorption"), as the rising concentrations of Cl− and NO3− do not affect the IO3− uptake at all, whereas higher SO42− concentrations reduce it strongly. It is suggested that under ideal conditions, without competing anions, IO3− is adsorbed at both selective and non-selective sites of the zirconium oxide materials. The introduction of NO3− or Cl− lowers the removal performance, because IO3− is then adsorbed mostly or only at the selective sites. In the presence of both Cl− and SO42−, the IO3− uptake is further decreased, because the selective sites are shared between IO3− and SO42−. As suggested by the batch experiment results, SO42− seems to compete for the IO3−-selective sites, indicating a similar adsorption mechanism.
However, more detailed experiments are needed to verify the exact mechanism behind the adsorption of IO3− on the zirconium oxide materials. Conclusion Altogether, this study demonstrates the potential of zirconia-based materials for the selective removal of radioactive IO3−, which is currently an unresolved task. The ZrO2 and Sb-doped ZrO2 exhibited high apparent capacities of 3.2–3.5 mmol/g for IO3− under ideal conditions. More importantly, the materials showed high selectivity for IO3−, as the uptake performance of the materials was unaffected by a large excess of competing NO3− or Cl− ions (in batch). However, SO42− and B(OH)4− lowered the iodate adsorption significantly in acidic and basic conditions, respectively. In addition, the apparent capacity was reduced from the ideal in more complex solutions due to the competition of the other anions, of which the most important is SO42−. This is most likely explained by its divalent charge and by sulphonation of the material surface, which is a known phenomenon observed in catalysis applications with zirconia. Although the apparent capacity was lowered significantly from the ideal in the presence of competing anions, the zirconia materials show potential for the remediation of radioactive wastes, where the contaminants are often at trace concentration levels and the most important performance factor is the selectivity of the materials. The pure and Sb-doped ZrO2 both performed efficiently in IO3− removal, but the latter showed an approximately 10% higher apparent capacity. It is likely that antimony disturbs the formation of the ZrO2 structure and produces more defects in the structure, which are typically active adsorption sites for IO3− uptake. However, the exact role of antimony and the detailed uptake mechanism of IO3− require further investigation of the structure at the sorption site (preferably by synchrotron-based XAS measurements) in order to optimize the removal performance of the materials. Consequently, competent zirconia adsorbent materials suitable for direct disposal could be developed for the present and forthcoming radioactive IO3− waste, which at the moment lacks a specific removal method. Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s13762-021-03487-9. Funding Open access funding provided by University of Helsinki including Helsinki University Central Hospital. This work was funded by the Doctoral Programme in Chemistry and Molecular Sciences (CHEMS) at the University of Helsinki. Data availability The additional data are contained in the supplementary material. Conflict of interest The authors declare that they have no conflicts of interest. Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Brain-derived neurotrophic factor in primary headaches Brain-derived neurotrophic factor (BDNF) is associated with pain modulation and central sensitization. Recently, a role of BDNF in migraine and cluster headache pathophysiology has been suspected due to its known interaction with calcitonin gene-related peptide. A bi-center prospective study was conducted enrolling four diagnostic groups: episodic migraine with and without aura, episodic cluster headache, frequent episodic tension-type headache, and healthy individuals. In migraineurs, venous blood samples were collected twice: outside and during migraine attacks, prior to pain medication. In cluster headache patients, serum samples were collected inside and outside cluster bouts. The analysis of BDNF was performed using an enzyme-linked immunosorbent assay technique. Migraine patients revealed significantly higher BDNF serum levels during migraine attacks (n = 25) compared with headache-free intervals (n = 53, P < 0.01), patients with tension-type headache (n = 6, P < 0.05), and healthy controls (n = 22, P < 0.001). There was no significant difference between patients with migraine with aura and those without aura, neither during migraine attacks nor during headache-free periods. Cluster headache patients showed significantly higher BDNF concentrations inside (n = 42) and outside cluster bouts (n = 24) compared with healthy controls (P < 0.01 and P < 0.05, respectively). BDNF is increased during migraine attacks and in cluster headache, further supporting the involvement of BDNF in the pathophysiology of these primary headaches. Introduction Migraine is a neurologic disorder affecting about 11% of the population [1]. The mechanisms that have been implicated in the pathophysiology of migraine include neurogenic inflammation, cortical spreading depression, central sensitization, and vascular involvement [2]. Activation of the trigeminovascular system [3] is thought to play a pivotal role in both migraine and cluster headache pain processing [4]. Various peptides, such as calcitonin gene-related peptide (CGRP), vasoactive intestinal peptide, and substance P, have been shown to mediate nociceptive effects during headache pain generation and central sensitization. Brain-derived neurotrophic factor (BDNF) is the most abundant neurotrophin in the central and peripheral nervous systems [5]. In addition to its effects on neuronal development and differentiation, it has been shown to exert a pivotal role in the modulation of pain signaling [6]. It affects the plasticity of synapses in trigeminal nociceptive pathways and is expressed in trigeminal ganglion neurons [7–9]. Trigeminal release of BDNF is induced by inflammatory stimuli such as tumor necrosis factor-alpha and by vasoactive mediators of neurogenic inflammation, including CGRP [8, 10]. CGRP is an important regulator and player in the pathogenesis of migraine and cluster headache through the modulation of nociceptive transmission in the trigeminovascular system [3]. Animal studies show a co-expression of CGRP with BDNF in trigeminal ganglion neurons [7]. Additionally, BDNF release has been shown to be induced by CGRP [9]. Hence, interactions between CGRP and BDNF have been suggested to increase migraine susceptibility [11]. This prospective bi-center study aimed to analyze serum levels of BDNF in patients with primary headache disorders, i.e., migraine, cluster headache, and tension-type headache.
Study sites and design Three groups of headache patients were enrolled in this prospective study: patients with episodic migraine with and without aura, episodic cluster headache, and frequent episodic tension-type headache according to the current criteria of the International Headache Society (ICHD-II [12]). All patients were diagnosed by a headache specialist and recruited in two headache outpatient clinics: the headache outpatient clinic at Innsbruck Medical University, Innsbruck, Austria, and the West German Headache Center at the University of Duisburg-Essen, Essen, Germany. In migraineurs and patients with cluster headache, blood samples were collected twice: during migraine attacks and during the interictal phase, and inside and outside cluster bouts, respectively. In patients with tension-type headache (outside attack) and healthy controls, a single blood sample was taken. Migraine patients were contacted by phone 3 days after venipuncture during headache-free periods to rule out blood sampling during the premonitory phase of an upcoming migraine attack. Hence, samples of patients who experienced a migraine attack within 72 h after blood sampling were excluded from the study. Standard protocol approvals, registrations, and patient consents The study protocol was approved by the institutional review boards at Innsbruck Medical University and the University of Duisburg-Essen (Innsbruck AM3793a, Essen 10-4345). All patients and controls gave written informed consent to study participation prior to study-related procedures. Inclusion and exclusion criteria Patients, males and females, were included if they consented to study participation and were aged between 18 and 50 years. Migraine patients were included if they suffered from at least one, but no more than eight, migraine attacks per month, had not taken any pain medication including non-steroidal anti-inflammatory drugs (NSAIDs), triptans or opioids 48 h prior to blood sample collection, and if phone contact was possible 3 days after blood sampling. Patients taking any preventive medication for prophylaxis of migraine or tension-type headache were excluded. Further exclusion criteria were a diagnosis of medication overuse headache (according to ICHD-II [12]), more than ten headache days per month in migraineurs and patients with tension-type headache, and more than 10 days per month with reported intake of medication for the acute treatment of headache. Patients were not included if they had a history of cardiovascular or cerebrovascular disease, diabetes, cancer, or severe renal or hepatic disease. Patients with more than one diagnosis of a primary headache, i.e. coexistent migraine and tension-type headache, were excluded from study participation. Pregnancy and breastfeeding were additional exclusion criteria. Blood samples and BDNF analysis All blood samples were processed strictly following an identical predefined protocol in both participating centers. Venous cubital blood samples were centrifuged after a clotting time of 30-60 min at 1,500 rcf for 15 min to obtain serum. Immediately after centrifugation, serum samples were divided into aliquots of 300 µl each and stored at -20 °C until use. Repeated freeze-thaw cycles were avoided. Measurement of BDNF was performed using a sandwich enzyme-linked immunosorbent assay according to the manufacturer's instructions (R&D Systems, Minneapolis, Minnesota).
Precision of the assay was verified by determination of the inter- and intra-assay coefficients of variation, which were <15 % and <10 %, respectively. Patient and control samples were run on the same plates with identical standard curves. Statistical analysis The Kolmogorov-Smirnov test was used to test for normal distribution of continuous variables. To analyze repeated measurements of serum levels, the paired t test for parametric data was performed, with Dunn's correction for multiple comparisons. The Mann-Whitney U test was performed for comparisons between two groups; for more than two comparisons, the Kruskal-Wallis test was used. In order to test for important covariates (age, gender, hormonal contraception and smoking status), generalized estimating equation (GEE) models were calculated. The Pearson correlation was used to test the association between headache days per month/attack frequency and BDNF serum levels. Data are presented as median (interquartile range, IQR) unless otherwise stated. A two-sided P value of <0.05 was considered statistically significant. All statistical analyses were performed using the PASW 18.0 package (SPSS Inc., Chicago, IL, USA) and the GraphPad Prism 5 software (GraphPad Prism Software Inc., San Diego, CA, USA). Baseline characteristics Fifty-nine patients with episodic migraine were included in the study. Of those, 23 patients were diagnosed as migraine with aura (ICHD-II [12]). In 19 patients, paired samples during and outside migraine attacks were available. A total number of 52 cluster headache patients were included in the analysis. Paired samples, i.e. in-bout versus out-of-bout, were collected in 14 patients with episodic cluster headache. Six patients with tension-type headache in a pain-free interval as well as 22 healthy controls were included. Age, gender, smoking, and hormonal contraception did not have significant effects on peripheral BDNF serum levels (data not shown). Demographic and clinical data of the study population are given in Table 1. BDNF serum concentrations are listed in Table 2. Migraine In migraineurs, BDNF was significantly elevated during migraine attacks compared with headache-free periods (P < 0.01), tension-type headache (P < 0.05) and healthy controls (P < 0.001, Fig. 1a). When comparing paired samples in the 19 patients for whom sera were available during and outside migraine attacks, no statistically significant difference in BDNF concentrations was evident (Fig. 1b). This effect is mainly driven by three patients. When excluding these three patients, there is a statistical trend towards higher interictal BDNF levels compared with inside-attack BDNF concentrations (P = 0.05). However, both outside-attack as well as ictal BDNF levels were significantly higher compared with controls (P < 0.001). A subgroup analysis for migraine with and without aura did not reveal a different distribution of BDNF levels (Fig. 2). However, in both groups, migraine with and without aura, BDNF was again significantly elevated during migraine attacks compared with interictal levels (P < 0.05, respectively). No statistically significant correlation between migraine attack frequency/headache days per month (r = 0.006) or headache duration (r = 0.14) and serum concentrations of BDNF was found. Cluster headache BDNF was increased outside cluster bouts as well as during cluster bouts compared with healthy control subjects (P < 0.05 and P < 0.01, respectively, Fig. 3a).
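As an aside for readers who want to reproduce the kind of group comparisons described in the statistical analysis section above, the following is a minimal, illustrative Python/SciPy sketch. It is not the authors' workflow (the study used PASW/SPSS and GraphPad Prism), and the BDNF serum values are hypothetical placeholders generated at random.

```python
# Minimal sketch of the nonparametric group comparisons described above
# (Mann-Whitney U for two groups, Kruskal-Wallis for more than two).
# The BDNF serum values below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bdnf = {
    "migraine_attack":     rng.normal(30, 5, 25),   # ng/ml, hypothetical
    "migraine_interictal": rng.normal(25, 5, 53),
    "tension_type":        rng.normal(20, 5, 6),
    "healthy_controls":    rng.normal(20, 5, 22),
}

# Two-group comparison: migraine attack vs. healthy controls
u_stat, p_two_group = stats.mannwhitneyu(
    bdnf["migraine_attack"], bdnf["healthy_controls"], alternative="two-sided"
)
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_two_group:.4f}")

# Omnibus comparison across all four groups
h_stat, p_omnibus = stats.kruskal(*bdnf.values())
print(f"Kruskal-Wallis H = {h_stat:.1f}, P = {p_omnibus:.4f}")

# Median (IQR), the summary statistic reported in the text
for name, values in bdnf.items():
    q1, med, q3 = np.percentile(values, [25, 50, 75])
    print(f"{name}: median {med:.1f} (IQR {q1:.1f}-{q3:.1f})")
```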
When comparing paired blood samples in patients with cluster headache, BDNF was significantly elevated inside and outside bouts compared with patients with tension-type headache and healthy controls (P < 0.01, P < 0.05; Fig. 3b). In three patients with cluster headache, serum was obtained during cluster attacks. In those patients, BDNF concentrations did not differ from in-bout levels (data not shown). There was no significant difference in BDNF levels between patients with tension-type headache and healthy controls (P = 0.84). Discussion In this prospective bi-center study, BDNF serum levels in patients with episodic migraine with and without aura, episodic cluster headache, episodic tension-type headache, and healthy control subjects were analyzed. Our main findings were: (1) BDNF is significantly elevated in patients with migraine and cluster headache compared with patients with tension-type headache and healthy controls. (2) During migraine attacks, BDNF serum levels are significantly higher compared with headache-free periods and controls. This effect also holds true in both migraine subpopulations, with and without aura. (3) Patients with cluster headache revealed significantly increased BDNF during cluster bouts, but also showed elevated out-of-bout levels of BDNF. BDNF is a member of the neurotrophin family and has been recognized as an important modulator of nociceptive pathways [13]. However, the effects of BDNF within the nociceptive system seem to be manifold. An anti-nociceptive effect is suggested in central pathways [14][15][16]. This is supported by results from animal studies showing analgesia after intracerebroventricular administration of BDNF [14]. By contrast, allodynia is mediated by BDNF in experimental models of neuropathic pain [17]. These contradictory results are partly explained by dose-dependent effects of BDNF, with low doses causing hyperalgesia, whereas higher doses may lead to analgesia, an effect that might be induced by the activation of different intracellular pathways [13]. The trigeminal system is supposed to play a central role not only in migraine but also in cluster headache pathology [2,3]. In vitro studies have demonstrated the expression of BDNF in trigeminal ganglion neurons [6]. BDNF release is induced by trigeminal stimulation and nociceptive inputs [18]. Interestingly, BDNF is co-expressed with CGRP in trigeminal ganglion neurons [9]. CGRP is one of the key molecules in migraine and cluster headache pathogenesis [4,19]. It is released after activation of perivascular sensory trigeminal nerve fibers [20]. Levels of CGRP measured in the external jugular vein have been found to be elevated during migraine attacks and cluster headache [21]. Interactions between neurotrophic factors and neuropeptides such as CGRP are manifold: BDNF release from trigeminal ganglion neurons has been shown to be induced by CGRP in vitro, an effect that was reversible in the presence of a CGRP-receptor antagonist [8]. Interestingly, CGRP gene expression is increased by the nerve growth factor NGF via activation of CGRP promoter enhancers [22]. The application of an anti-BDNF antibody decreased both CGRP and BDNF in rat dorsal root ganglia [23]. Results of a recent case-control study suggest an interaction of CGRP and BDNF polymorphisms, contributing to migraine susceptibility [11]. Neurogenic inflammation is thought to play a pivotal role in migraine pathology [24,25].
Interestingly, BDNF is up-regulated in primary sensory neurons by inflammation [10,26]. Peripheral and cerebrospinal fluid levels of pro-inflammatory cytokines have been found to be elevated in migraineurs [27][28][29]. Cytokine-induced release of pain modulators such as BDNF from trigeminal neurons might underline the interaction of inflammatory and neuronal pathways leading to neurogenic inflammation. This study is limited by the fact that we could not find a statistically significant difference between interval and attack BDNF values in the subgroup of the paired samples. When interpreting these findings, one has to keep in mind that this result is driven by three patients only. Thus, excluding these three patients again yields statistical significance, confirming the results of the whole-group analysis. Still, this cannot simply be regarded as a selection bias and should be further investigated in a future study. Our study supports the hypothesis that BDNF has an important role in migraine pathophysiology. A significant elevation during migraine attacks could be shown even in serum samples obtained by peripheral cubital venipuncture. A BDNF increase might be interpreted as a general reaction to pain. However, the elevation of BDNF even outside migraine attacks (in the subgroup analysis of patients with paired samples) and also outside cluster bouts, without the presence of pain, in contrast to normal levels of BDNF in tension-type headache, suggests an exclusive effect in headaches with trigeminal involvement. Cortical spreading depression, characterized by a wave of oligemia spreading along the cortex of the brain, has been considered as the pathophysiological equivalent of migraine aura [30,31]. Up-regulation of BDNF mRNA has been observed after cortical spreading depression in experimental in vivo studies [32,33]. However, we did not find a difference in BDNF serum concentrations between patients with migraine with and without aura. Our findings are in contrast to the results of Blandini et al. [34], who reported decreased platelet levels of BDNF in patients with migraine with and without aura as well as patients with cluster headache. By contrast, circulating BDNF serum levels measured in our study are elevated in migraineurs. These conflicting results might be attributable to platelet activation during migraine attacks, with immediate release of BDNF upon activation leading to decreased platelet levels [35]. In a prospective pilot study, BDNF serum levels were analyzed in nine migraineurs during migraine attacks and the interictal phase [27]. Consistent with our results, Tanure et al. [27] report an elevation of BDNF during migraine attacks compared with the headache-free period. The present study is the first to report an increase of BDNF in patients with cluster headache. Surprisingly, elevated BDNF levels were detected not only during cluster bouts but also outside bouts. Increased CGRP levels were reported in cluster headache patients in jugular venous blood samples, indicating trigeminal activation [36]. Since BDNF is coexpressed with CGRP in trigeminal ganglion neurons, increased BDNF levels might indicate trigeminal activation [8,9]. In a study by Di Piero et al. [37], pain stimulation in cluster patients outside bouts revealed a pathological response in cerebral blood flow compared with healthy controls, suggesting that alterations in pain processing are also present outside active cluster periods.
Elevation of BDNF during and outside cluster bouts might result from continuous trigeminal activation in cluster headache patients, which should be investigated in future trials. This study is limited by the small size of the patient group with frequent episodic tension-type headache. However, all patients were recruited from two specialized headache outpatient clinics, where the patient population is focused on chronic rather than episodic tension-type headache. Moreover, many patients with tension-type headache seen in specialized headache centers suffer from medication overuse headache and were thus not eligible for study participation according to the predefined study protocol. We were not able to include a higher number of patients with cluster headache during cluster attacks. This was mainly caused by the fact that cluster attacks showed an average duration of 75 min and face-to-face patient contact was often not possible within that time period. Various studies suggest the involvement of BDNF in pain processing and peripheral as well as central sensitization [38,39]. This is the first study to show elevation of BDNF in both migraine and cluster headache patients. Our results underline the important role of this neurotrophic factor in nociceptive pathways.
APPLICATION OF DATA MINING ALGORITHM TO INVESTIGATE THE EFFECT OF INTELLIGENT TRANSPORTATION SYSTEMS ON ROAD ACCIDENTS REDUCTION BY DECISION TREE Due to the large amount of data available in this study, the authors have utilized data mining algorithms, especially the decision tree, to process these data and obtain information that would help increase road safety and determine the causes affecting it and the patterns leading to traffic accidents. The effective use of this tool and its role in controlling the number of driving accidents is the subject of this study, with the help of data mining algorithms. The results show that once the number of roadside assistance stations increases beyond 41, the number of fatal driving accidents does not differ significantly; hence, one proposed strategy is to make the roadside relief stations intelligent and to coordinate them with the available intelligent transportation tools. The intelligent transportation system utilities comprise monitoring, guidance and enforcement tools, plus service tools such as rescue, driver assistance and road improvement. Introduction Annually, more than twenty thousand people die on the roads of Iran because of car accidents. This issue is a social health problem not only for Iran but also for other countries. For example, more than 40,000 people die on European roads annually. Every year, road accidents impose expenses of 200 million euros on the European economy [1]. The expenses of road accidents in Iran are unprecedented in their kind and account for about 7 % of the country's GDP [2][3][4][5][6]. Presence and monitoring by police at accident-prone locations, modifying the geometric scheme of these places, using warning signs, increasing vehicle safety and using seat belts are some of the solutions that can reduce road casualties. Besides these solutions, applying intelligent transportation systems is another method that the majority of developed countries and many developing countries have followed to date, and by applying this system they have been able to increase the road safety level [4][5][6][7][8][9]. Currently, the road conditions in Iran are such that, in spite of the efforts implemented by relevant organizations and bodies over the past few years, this country is still considered one of the most unsafe countries in terms of traffic problems. Despite the new telecommunication and communication technologies, the intelligent transportation system in Iran still requires further development. In Semnan province, the highest share of casualties up to 2010 was caused by vehicle-overturning accidents, followed by collisions between vehicles. During these years, deaths due to vehicle overturning accounted for approximately 59 % of the total casualties in suburban road accidents in Semnan province, a statistic more than twice the national average. Semnan province ranked first in the country in 2011 for vehicle-overturning casualties. It is worth mentioning that, according to the regulations of the country's roads and transportation organization, points or road sections with an accident-rate index (p number) equal to or greater than 30 are considered high-priority accident points, and approaches should be provided for resolving them over two time horizons: short term and medium term. The changes after installing the intelligent transportation systems are as follows.
Currently, 29 points on the province's roads have online surveillance cameras, 40 critical and important axes have online traffic counting systems, 6 sections of the province have variable message signs (VMS), 28 points have variable speed limits (VSL) and 34 sites have speed control cameras. Site selection for the installation of weigh-in-motion (WIM) systems in two important sections of the province (Semnan-Sarkhe and Shahrud-Sabzevar) was completed. The results of the mentioned research show that, considering the installation of the intelligent transportation systems and the casualty and traffic statistics on the Shahrud-Sabzevar road, the number of casualties on this road during 2011 to 2013 (after installing the intelligent systems) showed a significantly decreasing trend; moreover, although traffic increased by 12 % from 2011 to 2013, the number of casualties on the Shahrud-Sabzevar road still decreased in spite of this increase in traffic [1]. Another case study, related to one of the intelligent transportation systems, was implemented in the city of Karaj. In that study, besides introducing the parameters effective in selecting the locations of variable message signs, with the purpose of maximizing the efficiency of these tools in distributing traffic across city passages, ARC GIS software was used. With a simulation of the Karaj city passages in the mentioned software and the definition of the parameters involved in location selection, some points were proposed for siting the signs that provided favorable visibility in the passages, considering traffic volumes as well as the decision making of users in selecting other routes after passing the sign. Then, after defining the favorable passages at the level of the urban passage network, the variable message signs (VMS) were located along these passages using visual and geometrical parameters from the existing regulations, taking into account sight distances and changes of direction [5]. Data mining has been applied in extensive research in the road safety field, for example in studying the effects of different variables; a summary of this research is presented and discussed in the following. Finally, the decision tree is described as an effective method for analyzing road transportation accidents. Reviewing the references In one of these studies, an effort was made to introduce some effective intelligent systems in road safety and to review some case studies on this issue. The results of that study show that these systems can significantly decrease the number of accidents or the time needed for roadside assistance. Video surveillance systems are the main tools of traffic management, and their advantage is to provide video information for decision making and law enforcement. Speed control cameras have a significant role in reducing the speed of passing vehicles, and this speed decrease reduces accidents and, finally, road casualties [3][4][5][6][7][8][9]. In this regard, the siting of this tool is essential for making it more practical and for optimizing its performance. In that research, after identifying the layers effective in locating speed control cameras, the desired information and maps were first collected; then, by weighting these layers, they were overlaid in the ARCGIS environment with the aid of the hierarchical analysis technique, and maps of zones appropriate for installing speed control cameras were extracted [10][11][12][13][14][15][16][17].
Another study is based on a descriptive-analytical method; it sought to determine the correct localization of intelligent road transportation equipment on the Shahrud-Sabzevar road in Semnan province by analyzing and identifying the most accident-prone places, and, by means of the GIS localization method, the siting of the intelligent transportation systems to be installed in the Shahrud area was carried out. ARC GIS, Google Earth, RMG and MAP Source software were used for optimal localization when installing the mentioned systems. The results of this research can provide a scheme for installing intelligent systems in all of the provinces and approaches for the optimal usage of intelligent systems [4]. The empirical Bayes (EB) method is commonly used by transportation safety analysts for different safety analyses, such as before-and-after studies and hotspot analysis. To date, most implementations of the EB method have used the negative binomial (NB) model. Recent studies have shown that a finite mixture of NB models with varying mixing proportions (GFMNB-g) can be used to model crash data with overdispersion and, overall, provides better statistical performance in comparison with the NB models [9][10][11][12][13]. A group of authors, in a case study in Semnan province, evaluated the conditions of the Shahroud-Sabzevar road before and after the installation of the intelligent systems. The study of the share of different accident types on the roads shows that the highest number of casualties was due to vehicle overturning, followed by collisions between vehicles [14]. In studies before and after the installation of surveillance cameras in Jordan by Naqvi et al. (2018), it was described that in the period between 2011 and 2014, when speed control cameras were installed at eight locations, traffic accidents decreased by an average of 59 to 63 %, but at one location traffic accidents increased by 35 % [10][11][12][13]. A summary of the research related to the subject is presented in Table 2. Research method and data gathering For analyzing the data in this research with the SPSS software, a statistical pattern from the decision tree was plotted. There are different techniques for plotting a decision tree utilizing different algorithms; regarding the analyzed database, selecting an appropriate algorithm to apply to the data is essential. In another study using different parameters, the effects of sewage cap covers on the behavior of motorcyclists were analyzed by a decision tree; as a result of the mentioned research, it is pointed out that more than 50 % of motorcyclists make a dangerous sudden change of position when they are faced with these cap covers [9][10][11][12][13]. Table 1 shows the comparisons of the effectiveness of intelligent transportation systems in the country and in different areas, as proposed by researchers, Sheikh Zeinodin et al. [2]. Based on the analyses performed by Niazi et al.
(2018) [14], in reviewing the references it became clear that installing the speed control cameras decreased accidents by 29 %; others report that 19 % of all accidents and 44 % of dangerous accidents in the UK decreased after the installation of speed control cameras as a tool of intelligent transportation systems, and that, with the practical implementation of intelligent transportation tools in China, the death statistics decreased from 2.17 % in 2004 to 2.10 % in 2007. Table 1 presents the comparison of the effectiveness of intelligent transportation systems in road safety [2], listing, by tool type, the comparative safety performance of intelligent transportation system tools in different places. Table 2 gives an overall summary of the studies related to the current research [3][4][5][6][7][8][9]:
1. The severity of driver injury in single-car accidents: the safety of new cars influences the driver's reaction.
2. Analyzing the effectiveness of human factors in predicting and categorizing accident severity in Iran: seat belt use, age and gender are the human factors that influence road accident severity in Iran.
3. Defining the relation between the injury severity of accidents and driver behavior, vehicle and environment: the vehicle-type variable has the highest effect.
4. Investigating the effect of vehicle weight and size on the severity of accidents: differences in the weight and axle spacing of the vehicles involved increase accident severity.
5. Analyzing the factors affecting accident severity: the speed factor is very important in increasing injury severity, so much so that at high speeds the importance of age and gender is less than that of speed.
The decision tree is a supervised classification algorithm: the data are divided into training and test sets, and in this kind of algorithm the classification performance and accuracy in data mining correspond to a lower entropy index [15][16][17][18]. The display of the resulting knowledge in the shape and structure of a tree makes it comprehensible. Each branch of a tree includes a combination of different variables that have similar properties [15]. The optimum number of clusters in each model is calculated using the Dunn index, Equation (6); the purpose of this index is to maximize the distances between clusters and minimize the distances within clusters [15][16][17][18]: $D = \min_{i \neq j} \dfrac{d(c_i, c_j)}{\max_m \operatorname{diam}(c_m)}$ (6), where $d(c_i, c_j)$ is the minimum distance between the records in branches i and j, calculated from Equation (7): $d(c_i, c_j) = \min_{x \in c_i,\, y \in c_j} d(x, y)$ (7), and $\operatorname{diam}(c_j)$ is the maximum distance between the records within branch j, calculated from Equation (8): $\operatorname{diam}(c_j) = \max_{x, y \in c_j} d(x, y)$ (8), where $d(x, y)$ is the distance between records in a branch [15][16][17][18]. For evaluating the performance of the applied algorithm, convergent validity is used. This index measures the correct performance of the clustering algorithm: if a record is placed correctly it is counted as a "positive" case, and if it is placed in an unrelated branch it is counted as a "negative" case; finally, the convergent validity is calculated from Equation (9) [15][16][17][18]: $\text{convergent validity} = \dfrac{\text{positive data}}{\text{positive data} + \text{negative data}}$ (9), where positive data are the correctly classified records and negative data are the incorrectly classified records. Equation (9) shows the degree of reliability of the data segmentation and classification. The convergent validity parameter denotes the ratio of the validly classified data to the total data.
The larger this parameter is, the more reliable the decision tree algorithm is. Nevertheless, if a large amount of data is not properly segmented (negative data), it reduces the level of reliability. The decision tree is one of the non-parametric methods of data classification; among data analysis methods, tree classification is related to logistic regression. In this research, the decision tree algorithm has been used. The advantages of the decision tree are:
• use of human knowledge;
• easy interpretation of the training and test data;
• easy interpretation of the categories;
• applicability to training data with low volume and to simple problems [15][16][17][18][19];
• ease of understanding the relationships between variables; despite its simplicity, it can work with complex data and support decisions based on it;
• the ability to use a massive amount of data and a large number of variables;
• the possibility of combining it with other decision-making techniques to achieve better results.
For determining the appropriate branches in the categorization of the data, a criterion for calculating the amount of impurity in each group is defined. The entropy impurity is obtained from Equation (1): $i(N) = -\sum_j P(\omega_j)\log_2 P(\omega_j)$ (1), where $i(N)$ is the entropy impurity criterion and $P(\omega_j)$ is the proportion of samples at node N that belong to group j. The amount of $i(N)$ is maximal when the samples are uniformly distributed over all categories, meaning that all groups or sub-branches are equally probable [15]. Another criterion for categorization in the decision tree is the variance impurity, which for two groups is obtained from Equation (2): $i(N) = P(\omega_1)P(\omega_2)$ (2). The Gini impurity is the generalization of the variance impurity to the multi-group state and is obtained from Equation (3): $i(N) = \sum_{i \neq j} P(\omega_i)P(\omega_j) = 1 - \sum_j P(\omega_j)^2$ (3). Another impurity criterion is the misclassification impurity, which gives the minimum probability of misclassification and takes its highest value when all the groups are equally likely; it is obtained from Equation (4): $i(N) = 1 - \max_j P(\omega_j)$ (4). For a multi-way split with a branching ratio of B, the number of possible branches in the classification can be used, and the decrease in impurity is normalized according to Equation (5). Introduction of the statistical population The statistical population comprises the complete collection of recorded data, or of possible measurements of a qualitative characteristic or property, over the complete collection of units under consideration, about which inferences are made. This research seeks to link safety variables with the applications of intelligent transportation system tools; the summary statistics were extracted from the statistical yearbook of the Road Transport Organization and are introduced in Table 4 in the form of descriptive statistics from the statistical software [20]. To check the normality of the data, the skewness of the data is examined first; its histogram is drawn in Figure 1. In Table 4, the skewness and kurtosis descriptive statistics indicate a normal distribution, with values in the (-2, +2) range. Decision tree model A model is a mental, non-physical representation of the internal components and connections of a phenomenon, here the pattern of traffic accidents, that shows the relationships between the data and the different variables of that phenomenon. Models are used for projecting future phenomena and future situations. In the following, two types of statistical model are used: the multiple linear model and the data mining model of the decision tree. In the decision tree's Chi-squared Automatic Interaction Detector (CHAID) algorithm, a node may be split into more than two nodes, whereas the Classification and Regression Tree (CRT) algorithm is used for binary trees (each node is split into exactly two other nodes).
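To make the node-impurity criteria and the tree-growing idea described above concrete, the following is a minimal Python sketch. It is not the SPSS (CHAID/CRT) workflow used in the study: the impurity functions follow the standard definitions given above, the tree is a binary CART-style regression tree rather than a multi-way CHAID tree, and all variable names and figures are hypothetical placeholders.

```python
# Minimal sketch of node impurity and a CART-style tree, illustrating the
# criteria described above. The data are hypothetical placeholders; the
# study itself used SPSS (CHAID/CRT) on yearbook statistics.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

def entropy_impurity(p):
    """Entropy impurity i(N) = -sum_j P(w_j) log2 P(w_j)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gini_impurity(p):
    """Gini impurity i(N) = 1 - sum_j P(w_j)^2."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def misclassification_impurity(p):
    """Misclassification impurity i(N) = 1 - max_j P(w_j)."""
    return 1.0 - np.max(np.asarray(p, dtype=float))

# A node with three equally likely groups has maximal impurity.
print(entropy_impurity([1/3, 1/3, 1/3]), gini_impurity([1/3, 1/3, 1/3]))

# Hypothetical province-level records: roadside assistance stations,
# road network length (km), traffic volume (million vehicle-km).
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.integers(10, 80, 120),       # roadside assistance stations
    rng.integers(500, 5000, 120),    # network length
    rng.uniform(1, 50, 120),         # traffic volume
])
# Hypothetical dependent variable: fatal accidents per year
y = 0.02 * X[:, 1] + 2.0 * X[:, 2] - 0.5 * X[:, 0] + rng.normal(0, 10, 120)

# A binary (CART-style) regression tree; SPSS CHAID allows multi-way splits.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10).fit(X, y)
print(export_text(tree, feature_names=[
    "assistance_stations", "network_length", "traffic_volume"]))
```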
Introduction of research variables In this study, the research variables include the number of accidents (deaths and injuries) as the dependent variable, and independent variables covering the intelligent control and surveillance of the road police and other intelligent transportation systems (ITS) in the provinces from 2008 to 2017 (10 years of statistics). First, using the available data from the summary of road and transportation management statistics for 2018 [20], the statistics were entered into Excel software; the next step was modeling the decision tree to reach an appropriate graphical display of the relationship between the independent and dependent data. The input data for the research are presented in Table 3. The graphical tool for displaying the classification and clustering of data from decision support tools has the following advantages:
• training samples are placed in the correct category as far as possible;
• neglected samples are also categorized with high accuracy;
• when new training samples become available, the tree can easily be updated;
• it has as simple a structure as possible.
In the decision tree, the first leaf is divided into two leaves with different samples and different predictions. The pattern used in this modeling is the CHAID technique. Each category, except for a terminal leaf, is divided into further categories with different numbers of samples and different predictions. These steps continue until the final nodes, called leaves, are reached; the best branch and its final result, i.e. the leaf, are selected according to the percentage importance of the independent variables. Then, the constructed branches of the tree are removed or pruned until the stopping criteria, or the desired level of complexity, are reached.
The first sample of a constructed decision tree, built from the 13 variables in this study, is shown in Figure 2 based on the CHAID pattern; as can be seen, this decision tree is based on a binary classification pattern and has 5 main leaves, or data groups, that describe the variables in these categories, the first of which is roadside assistance (emergency). It is observed that in this node, when the number of roadside assistance bases increases to more than 41, the number of fatal accidents changes dramatically. Increases in the number of vehicles are likewise reflected in the accidents: as the number of passages and the volume of vehicles increase by 50 %, the number of fatal accidents increases. Based on this graph, the importance of the existence of the relief stations, and of attention to the nature of their services, is emphasized, especially the possibility of making this relief service equipment intelligent so that it can be present at traffic accidents in time. Another measure for relatively reducing the effect of traffic volume is the widening of road lanes, which, in proportion to the passing traffic, reduces the intensity and number of accidents. In addition, it is recommended that higher priority be given to locating intelligent traffic and transportation equipment on routes with higher passing traffic. Figure 3 shows the diagram of the extended decision tree model with the accidents of 2015 as the dependent variable; this pattern is based on the CHAID data clustering model. An important variable affecting the data clustering in the first open node of these statistical data is the number of roadside assistance stations. The effectiveness of the roadside assistance parameter in this collection is such that, in the first node, three separate leaves are produced for the data categorization. The second factor in the separation and categorization of the data is the road network: of course, as the length of the route network increases, the number of accidents obviously increases, but this result emphasizes the correct construction of this graphical model. Summary of results Analysis of accidents occurring on suburban roads, with the aim of identifying the parameters affecting the increase in the frequency of accidents, can support the decisions of those involved in improving road safety in order to reduce road accidents. The purpose of this study was holistic; in other words, minor indicators such as speed were not included in the study. In this study, according to the published general data of the last ten years on the status of the road network and intelligent transportation system tools, and using statistical modeling techniques, clustering models were applied. In summary, the main results of this study include:
• Variables such as network length and volume of passing traffic were of great importance in constructing the accident prediction model, which, given the nature of this issue, is evidence of the accuracy of the constructed models.
• Compared with the more important variables, such as passing volume and route length, the variables related to intelligent transportation did not have a significant impact on the model construction; hence, it is essential to pay attention to developing the appropriate primary network infrastructures.
• The influential factor in the accident-number models obtained with the tree-drawing techniques was the relief situation: equipping the relief stations and coordinating them with the other equipment of the intelligent transportation system can be effective in reducing the severity of accidents.
Recommendations for increasing traffic safety According to the research results, general approaches are provided for increasing the effectiveness of the application of intelligent road and road-transport tools. Finally, in order to complete the present study, several study horizons are suggested as follows:
• Providing an appropriate data mining framework to implement suitable models for determining the influence of the effective parameters on the performance of intelligent transportation tools.
• Using other data mining and neural network models, such as the CART and K-MEANS methods and fuzzy neural networks, to calculate the effectiveness of intelligent transportation tools on the severity and number of events.
• Despite the widespread use of smart devices in road transport, there are still many shortcomings; for example, timely notification of accidents and getting rescue forces to the scene within the golden time of rescue could decrease a significant proportion of the number of accidents and casualties and the severity of injuries.
• Decentralization and proportional distribution of the intelligent transportation system infrastructures between the provinces and the accident-prone places with high transit volumes, together with providing the related tools, are among the essentials for increasing the efficiency of this system.
• Completing the primary infrastructures and then equipping them with intelligent transportation is an obligation; but along with it, the development of the telecommunication and internal electronic systems of the cars used in the country, such as vehicle-to-vehicle and vehicle-to-infrastructure systems, needs more attention.
Functional annotation of the 2q35 breast cancer risk locus implicates a structural variant in influencing activity of a long-range enhancer element A combination of genetic and functional approaches has identified three independent breast cancer risk loci at 2q35. A recent fine-scale mapping analysis to refine these associations resulted in 1 (signal 1), 5 (signal 2), and 42 (signal 3) credible causal variants at these loci. We used publicly available in silico DNase I and ChIP-seq data with in vitro reporter gene and CRISPR assays to annotate signals 2 and 3. We identified putative regulatory elements that enhanced cell-type-specific transcription from the IGFBP5 promoter at both signals (30- to 40-fold increased expression by the putative regulatory element at signal 2, 2- to 3-fold by the putative regulatory element at signal 3). We further identified one of the five credible causal variants at signal 2, a 1.4 kb deletion (esv3594306), as the likely causal variant; the deletion allele of this variant was associated with an average additional increase in IGFBP5 expression of 1.3-fold (MCF-7) and 2.2-fold (T-47D). We propose a model in which the deletion allele of esv3594306 juxtaposes two transcription factor binding regions (annotated by estrogen receptor alpha ChIP-seq peaks) to generate a single extended regulatory element. This regulatory element increases cell-type-specific expression of the tumor suppressor gene IGFBP5 and, thereby, reduces risk of estrogen receptor-positive breast cancer (odds ratio = 0.77, 95% CI 0.74-0.81, p = 3.1 × 10−31). Introduction Over the last 15 years, genome-wide association studies have transformed our ability to map genetic variation underlying complex traits. 1 The vast majority of variants identified in genome-wide association studies are non-coding and are thought to influence transcriptional regulation, 2,3 a process which can be highly cell type and tissue specific. 4 Our ability to translate these findings into a greater understanding of the mechanisms that influence an individual woman's risk will require the identification of causal variants (as opposed to correlative variants), the targets of these functional variants (the genes or non-coding RNAs that mediate the associations observed in genome-wide association studies) and an understanding of the disease causal cell types and processes. 1 Genome-wide association studies of breast cancer coupled with large-scale replication and fine-mapping studies have led to the identification of approximately 200 breast cancer risk loci; 3,5-9 two of these loci, annotated by rs13387042 10 and rs16857609, 5 map to a gene desert at chromosome 2q35. Fine-scale mapping, combined with in silico annotation, reporter gene assays, and allele-specific qRT-PCR, led to the identification of a putative causal variant (rs4442975) at the rs13387042 locus. 11,12 rs4442975, which is highly correlated with the tag SNP rs13387042 (r2 = 0.92, D′ = 0.96), maps to a consensus binding site for the transcription factor (TF) forkhead box A1 (FOXA1 [MIM: 602294]), with the alternative T-allele promoting binding of FOXA1. 11,12 To date, no putative causal variant at the rs16857609 locus has been reported. Chromatin interaction methods implicate IGFBP5 (MIM: 146734) as the target gene at both loci [11][12][13] and, for the rs13387042 locus, eQTL analyses demonstrated association of the protective T-allele with slightly increased IGFBP5 levels in normal breast tissue 11 and estrogen receptor-positive (ER+) breast cancers. 12
Taking a functional approach based on chromosome conformation capture (3C) assays that were anchored at the IGFBP5 promoter, Wyszynski and colleagues identified a putative regulatory element centered on a structural variant (SV; esv3594306) that maps approximately 400 kb telomeric to IGFBP5. 14 Allele-specific expression analyses and follow-up genotyping identified 14 highly correlated variants (all r2 > 0.8 with the top SNP, rs34005590) associated with breast cancer risk, which represent a third risk signal (OR = 0.82, p = 5.6 × 10−17). 14 In this analysis we report fine-scale mapping of the 2q35 region in European and Asian individuals with breast cancer and control subjects from the Breast Cancer Association Consortium. We confirm three independent, high-confidence signals at 2q35 annotated by rs13387042 (signal 1), rs138522813 (signal 2), and rs16857609 (signal 3). We carry out functional annotation of credible variants at signals 2 and 3 and implicate the deletion variant (esv3594306) at signal 2 as causally associated with increased IGFBP5 expression and reduced breast cancer risk. Material and methods Fine-scale mapping of the 2q35 breast cancer risk locus Fine-scale mapping of the 2q35 breast cancer risk locus was carried out as part of a large collaborative project; full details have been published. 3 Briefly, for the current analysis we accessed data from 94,391 individuals with invasive breast cancer and 83,477 control subjects of European ancestry, and 12,481 individuals with invasive breast cancer and 12,758 control subjects of Asian ancestry, from 87 studies participating in the Breast Cancer Association Consortium. All participating studies were approved by their appropriate ethics review board and all subjects provided informed consent. Directly genotyped or imputed (info score > 0.8) calls for 10,314 SNPs mapping to a 1.4 Mb region at 2q35 (chr2:217,405,832-218,796,508; GRCh37/hg19) were available for analysis. At this threshold, the proportions of common variants (MAF ≥ 0.05), low-frequency variants (0.01 ≤ MAF < 0.05), and rare variants (0.001 ≤ MAF < 0.01) 3 that could be analyzed were 89.7%, 68.5%, and 3.6%, respectively, for OncoArray and 64.2%, 40.5%, and 0.8%, respectively, for iCOGS. Analysis of the association between each SNP and risk of breast cancer was performed using unconditional logistic regression assuming a log-additive genetic model, adjusted for study and up to 15 ancestry-informative principal components. p values were calculated using Wald tests. Forward stepwise logistic regression was used to explore whether additional loci in the fine-mapping region were independently associated with breast cancer risk. We carried out stratified analyses to determine whether each of the independent associations differed according to estrogen receptor (ER) status; heterogeneity between stratum-specific estimates was assessed using Cochran's Q-test. All statistical analyses were carried out using R version 3.6.1. Cloning of reporter assay constructs All reporter assay plasmids were derived using the pGL4 reporter vector (Promega). Reporter vectors were constructed using a restriction digest-based cloning approach. The IGFBP5 promoter and putative regulatory element regions (containing WT alleles) were synthesized as gBlocks (Integrated DNA Technologies, full details in Table S2).
Double restriction digests of plasmid or gBlock were performed using BglII and XhoI (for the IGFBP5 promoter) or SalI and BamHI (for the putative regulatory element regions) according to the manufacturer's instructions (New England Biolabs [NEB]). Ligations were performed at a 3:1 insert:vector ratio using T4 DNA ligase (NEB), according to the manufacturer's instructions. Correct cloning was validated by Sanger sequencing using a commercially available service (Eurofins Genomics). Alternative (ALT) alleles of each variant were introduced into reporter vectors using the QuikChange Lightning Site-directed Mutagenesis kit (Agilent Technologies), according to the manufacturer's instructions. Accurate mutagenesis was confirmed by Sanger sequencing (Eurofins Genomics). All reporter gene constructs are shown in Figure S1. Reporter assays Reporter assays were performed in T-47D, MCF-7, 293T, HCT116, and HepG2 cell lines. Antibiotics were removed from standard growth media 24 h before transfection to improve viability. For assays performed under standard conditions, approximately 16,000 cells were seeded per well of a 96-well plate for T-47D, MCF-7, and HepG2, and approximately 8,000 cells were seeded per well of a 96-well plate for 293T and HCT116. Transfection was performed upon reaching 70% confluency (~24 h after cell seeding). For assays performed after 17β-estradiol treatment, cells were first hormone starved for 48 h. Approximately 10,000 cells (T-47D) and 8,000 cells (MCF-7) were seeded, per well of a 96-well plate, in standard growth media and cultured for 24 h. The media was then replaced with phenol red-free media (GIBCO) supplemented with 10% charcoal-stripped FBS (GIBCO), 100 U/mL penicillin with 100 µg/mL streptomycin, 10 nM fulvestrant (I4409, Sigma), and 10 µg/mL human insulin (T-47D only). After 48 h, growth media was replaced with phenol red-free media supplemented with 10% charcoal-stripped FBS and 10 µg/mL human insulin (T-47D only), with the addition of either (1) 10 nM 17β-estradiol (E2758, Sigma) or (2) vehicle (ethanol). Transfection was performed upon reaching 80% confluency (6 h after 17β-estradiol or vehicle treatment). Transfection was performed using X-treme GENE HP DNA transfection reagent (Roche). Equimolar amounts of the test pGL4-based firefly luciferase vector and the pRL-TK renilla luciferase control (Promega) were combined at a 3:1 reagent:DNA ratio in OptiMEM (Fisher Scientific). After a 30 min incubation at room temperature, 10 µL of transfection mixture was added per well. Each biological replicate was performed in technical triplicates with non-transfected, mock-transfected, and pEGFP-transfected controls (Takara Bio Inc). Cells were screened for luciferase activity 48 h after transfection using the Dual-Glo Luciferase Assay System (Promega) according to the manufacturer's instructions. Confirmatory genotyping and sequencing of putative regulatory element 2 (PRE2) Four of the five variants mapping to PRE2 (rs72951831, rs199804270, rs138522813, and esv3594306) are highly correlated based on 1000 Genomes data (1KGP), with the ALT alleles of rs72951831, rs199804270, and rs138522813 all predicted to occur in combination with the ALT (deletion) allele of esv3594306. However, rs572022984 (hg19, chr2:217955897) theoretically maps within the esv3594306 deleted region (chr2:217,955,891-217,957,273), casting doubt on whether the (imputed) rs572022984-del allele could occur in combination with the esv3594306 deletion allele.
To clarify this, we genotyped all five variants in 300 randomly selected women participating in the Generations Study 18 using MassARRAY (Agena Bioscience; full details of primers available on request). The number of carriers of the alternative (A>-) allele at rs572022984 (MAF = 0.035) was 0 (expected number = 21; p = 0.00002). To confirm our genotyping, we carried out Sanger sequencing (Eurofins) of a 2.4 kb region spanning chr2:217,955,586-217,958,000 in two individuals who were heterozygous at the linked PRE2 SNP rs138522813. Primers were: forward 5′-CGCTTCCCCTTCATCACTTG-3′ and reverse 5′-TCTCTCAGGCCAAGTCACAG-3′. Sequencing confirmed the presence of REF and ALT alleles of esv3594306, rs72951831, and rs199804270 (rs138522813 maps just outside the amplified region) but only REF alleles at rs572022984; on this basis we excluded rs572022984 from further analyses. Cloning of guides for CRISPR-based enhancer perturbation Guides were designed using the online design tool CHOPCHOP (http://chopchop.cbu.uib.no). Guides were selected based on their proximity to variants of interest and specificity scores. Full details are provided in Table S3. Cloning was performed essentially as described in Ran et al. 19 Briefly, guides were produced as two complementary oligonucleotides with overhangs to facilitate cloning. Oligos were annealed with T4 Polynucleotide Kinase (NEB). The expression vector pKLV-U6gRNA(BbsI)-PGKpuro2ABFP (Addgene #50946) was digested using BbsI (NEB), and ligation was performed using T4 DNA ligase (NEB). Cloning was validated by sequencing (Eurofins Genomics). Statistical analysis of reporter gene assays and CRISPR-based enhancer perturbation Firefly luciferase activity was internally normalized to renilla luciferase activity, and each test condition was normalized to the "IGFBP5 promoter-alone" (IGFBP5-PROM) construct. Setting IGFBP5-PROM to 1.0, for each putative enhancer-containing reporter gene construct we used t tests to test (1) H0: the mean dual luciferase ratio does not differ from 1.0 and (2) H0: the ALT construct does not differ from the REF construct. To compare mean dual luciferase ratios for each combination of SNP and SV at PRE2, we used three-way analysis of variance, adjusting each variant for all other variants. To account for multiple testing, we used a Bonferroni corrected p value of 0.0056 (individual constructs, Figure 2; 9 tests) and 0.017 (PRE2 combinations, Figure 3; 3 tests). Relative gene expression was calculated using the ΔΔCT method. For the negative control sgRNAs (TAG-1 and TAG-2), we used t tests to test H0: the relative gene expression does not differ from 1.0. To maximize the power of subsequent analyses, we then combined the negative control data and, for each of the other sgRNAs, we tested H0: relative gene expression does not differ from the combined negative control relative gene expression. To account for multiple testing, we used a Bonferroni corrected p value of 0.017 (PROM sgRNAs, Figure 4A; 3 tests). Ethics approval and consent to participate All participating studies were approved by their appropriate ethics review board and all subjects provided informed consent. Results Fine-scale mapping of a 1.4 Mb region at 2q35 (chr2:217,407,297-218,770,424; GRCh37/hg19; Figure 1A) in combined data from up to 109,900 individuals with breast cancer and 88,937 control subjects of European ancestry from the Breast Cancer Association Consortium confirmed the presence of three independent signals (p < 5 × 10−8; Figure S2) at this region. 3
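For readers unfamiliar with the relative quantification referred to in the statistical methods above, the ΔΔCT calculation can be written out as follows. This is the standard Livak formulation rather than anything specific to this study, and the use of a housekeeping gene as the reference is an illustrative assumption (the manuscript does not name the reference gene here):

\[
\Delta C_T = C_T^{\,\mathrm{IGFBP5}} - C_T^{\,\mathrm{reference}}, \qquad
\Delta\Delta C_T = \Delta C_T^{\,\mathrm{sgRNA}} - \Delta C_T^{\,\mathrm{negative\ control}}, \qquad
\text{relative expression} = 2^{-\Delta\Delta C_T}.
\]

Under this convention, a ΔΔCT of −1 corresponds to an approximately two-fold increase in IGFBP5 expression relative to the negative control guides.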
After conditioning on the top SNP at each of these three signals (signal 1, rs4442975; signal 2, rs138522813; signal 3, rs5838651), there were no additional high-confidence signals (defined as signals for which p < 1 × 10−6). 3 Defining credible causal variants at each signal as variants with conditional p values within two orders of magnitude of the index variant, there were 1, 5, and 42 credible causal variants at PRE1, PRE2, and PRE3, respectively (Table S4). Fine-scale mapping of this region in women of Asian ancestry (12,481 affected individuals and 12,758 control subjects) did not identify any population-specific signals (all associations p > 5 × 10−8; Figure S3). None of the credible causal variants at signal 2 was present in women of Asian ancestry. The published causal variant at signal 1 (rs4442975) and all of the signal 3 credible causal variants (Table S5) were nominally associated with breast cancer risk in Asian women (p < 0.05). At signal 3, the index variants differ between Europeans and Asians (rs5838651 and 2:218265091:G:<INS:ME:ALU>:218265367, respectively) but none of the European credible causal variants could be excluded on the basis of the Asian data. Prioritization of credible variants for functional follow up Fachal and colleagues 3 used a Bayesian approach (PAINTOR) that combines genetic association, linkage disequilibrium, and enriched genomic features to determine variants with high posterior probabilities of being causal (Table S4). 20 rs4442975, the only credible causal variant at signal 1 (posterior probability = 0.84), has previously been proposed to have a functional effect on breast cancer risk. 11,12 Four of the five variants at signal 2 had posterior probabilities ≥ 0.20 (combined posterior probability 0.997); none of the variants at signal 3 had posterior probabilities > 0.15. To further prioritize putative causal variants at signals 2 and 3, we aligned the 47 credible variants at these signals with markers of open chromatin (DNase I), active transcription (P300), active enhancers (H3K27Ac, H3K4me1), and breast-relevant TFs (FOXA1, GATA3, ERα) generated in T-47D and MCF-7 breast cancer cells [15][16][17] (Table S4). Consistent with the PAINTOR posterior probabilities, four variants at signal 2 colocalized with at least one of these features. In addition, we identified two variants at signal 3 that colocalized with one of these features. These six variants were prioritized for further functional annotation. Reporter gene assays of prioritized variants For SNPs, we generated reference (REF) and alternative (ALT) constructs in which the putative regulatory element, defined in the first instance as a 500 to 700 bp region centered on the SNP or SNP pair (PRE2A rs572022984; PRE2B rs199804270 and rs72951831; PRE3 rs12694417 and rs12988242, Table S2; Figures 1B and 1C), was cloned upstream of a luciferase reporter gene, driven by the IGFBP5 promoter (Figure S1). To test these constructs for cell type specificity, we used HepG2 (hepatocyte carcinoma), 293T (embryonic kidney), and HCT116 (colorectal carcinoma) cells; the only construct that influenced transcription from the IGFBP5 promoter in these nonbreast cells was PRE2DEL-ALT/ALT in 293T cells, with an effect size that was an order of magnitude lower (FC = 1.9, p = 0.002; Figure S4).
Repeating these assays in cells that were grown in the presence of low-dose estradiol did not alter these results; both PRE2B and PRE3 were responsive to low-dose estradiol (Figures S5A and S5B) but only PRE2 showed a difference between alleles, with the protective PRE2DEL-ALT/ALT allele once again being associated with significantly greater activity than the insertion allele (Figure S5A). The PRE2DEL-ALT/ALT construct comprises a haplotype of three tightly linked variants: the ALT alleles of the two SNPs (rs199804270:GA:G, rs72951831:G:T) with the ALT (deletion) allele of the structural variant (esv3594306) that brings two separate ERα, FOXA1, GATA3, and P300 ChIP-seq peaks into juxtaposition (Figure 1B). To differentiate individual effects, each allele of each SNP was introduced onto esv3594306 insertion and deletion backgrounds separately using site-directed mutagenesis. The PRE2A SNP (rs572022984) was not considered further due to technical issues (Material and methods). In a combined analysis, adjusting each variant for the other two variants, there was evidence that deletion constructs consistently showed greater activity than insertion constructs (MCF-7: DEL FC = 43.4; Figure 3). CRISPR-based perturbation of PRE2 Reporter gene assays do not reflect the "normal" genomic context of a regulatory element. Specifically, the assay tests whether the putative regulatory element can influence expression in an episomal context 21 and from a distance of a few kilobases; in vivo, PRE2 maps approximately 400 kb from the IGFBP5 promoter. To determine whether PRE2 acts as an enhancer element in a cellular context, we used a systematic CRISPR-based enhancer perturbation approach. We hypothesized that if PRE2 acts as an enhancer in vivo, targeting a catalytically inactive Cas9 (dCas9) fused to a repressive (KRAB) domain to regions within PRE2 would result in lower levels of expression of IGFBP5 (CRISPR interference; CRISPRi); by contrast, targeting dCas9 fused to an activating VPR domain would result in higher levels of expression of IGFBP5 (CRISPR activation; CRISPRa). 22,23 We designed CRISPR single-guide (sg) RNAs to the ERα ChIP-seq peak at the centromeric breakpoint of the deletion (guides PRE2-1 and -2), within the esv3594306 deletion region (guides PRE2-3 to -6) and to the ERα ChIP-seq peak at the telomeric breakpoint of the deletion (guides PRE2-7 to -9; Figure 1B). As positive controls we designed sgRNAs to target the IGFBP5 promoter (guides PROM-1 to -3; Figure S6A) and the previously characterized causal variant (rs4442975, guide PRE1-1; Figure S6B). As negative controls we designed sgRNAs to the published genome-wide association study signal 1 tag SNP (rs13387042, guides TAG-1 and -2; Figure S6B). We used MCF-7 cell lines engineered to stably express (1) dCas9 with a repressive KRAB domain and (2) dCas9 with an activating VPR domain; as an additional control we used MCF-7 cells that expressed dCas9 without the KRAB or VPR domains. Discussion Fine-scale mapping at the 2q35 breast cancer locus in women of European ancestry 3 confirmed rs4442975 as the probable causal variant at signal 1 and reduced the number of credible causal variants at signal 2 from 14 to 5; 3,14 at signal 3, however, there remained 42 credible causal variants that could not be excluded as causal on statistical grounds alone in either the European or the Asian data.
Low-throughput functional approaches used to investigate putative causal variants, including reporter gene assays and CRISPR screens, become prohibitive with large numbers of credible causal variants, and most single-locus studies have therefore focused on small numbers of variants.11,14,24-38 By aligning the credible causal variants with genomic features (Table S4), we were able to prioritize 4 of the 5 credible causal variants at PRE2 and 2 of the 42 credible causal variants at PRE3 for follow-up studies. By taking this approach there is, inevitably, the possibility that we have excluded one or more causal variants from our follow-up analyses. For PRE2 this seems unlikely, as we selected four out of the five credible causal variants for further follow-up studies. For PRE3 it is entirely possible, or even probable, that we failed to prioritize one or more causal variant(s); improving our ability to discriminate more accurately between potentially functional variants and large numbers of correlated variants will require genome-wide datasets with functional outputs21,39,40 generated in more relevant cellular disease models and taking advantage of single-cell technologies.1 Using reporter gene assays, we have demonstrated that both the distal region of PRE2 (PRE2B) and the entire PRE3 region can enhance transcription from the IGFBP5 promoter in a cell-type-specific manner. Despite co-localizing with multiple markers, we found no evidence that the proximal region of PRE2 (PRE2A) acts as an independent enhancer element. The ChIP-seq peaks at this region are, however, relatively weak (Figure 1B); combining data from both PRE2A alleles in both breast cancer cell lines to increase our power (i.e., using 12 replicates rather than 3), the overall mean fold change for PRE2A was 1.14 (1.03-1.26, p = 0.01), consistent with the presence of a very modest enhancer element. Comparing REF constructs with ALT constructs, we found no evidence that either of the credible causal variants at PRE3 (rs12694417, rs12988242) altered the activity of the PRE. This does not exclude these SNPs as functional; as above, modest effects on enhancer activity may be difficult to detect, and variants that, for example, influence chromatin accessibility may not be detectable in transient assays.11 However, without preliminary in vitro evidence to suggest that one of these variants alters cell-type-specific transcription from the IGFBP5 promoter, pursuing further functional studies that are predicated on this very assumption seems unlikely to be fruitful. By contrast, one comparison that was consistent and significant between constructs and across the two breast cancer cell lines was that PRE2 deletion alleles had stronger enhancer activity than PRE2 insertion alleles.

Figure 5. Increasing the local density of activator TF domains with dCas9-VPR or by juxtaposition of two ChIP-seq peaks is associated with increased expression of IGFBP5. (A) Introducing dCas9 fused to a VPR activator domain at the ERα, FOXA1, GATA3 ChIP-seq peak at the centromeric end of the deletion breakpoint (PRE2-1 and PRE2-2), or proximal to, or at, the ERα, FOXA1, GATA3 ChIP-seq peak at the telomeric end of the deletion breakpoint (PRE2-5 and PRE2-8, respectively), increases expression of IGFBP5 in MCF-7 cells. (B) Deletion of 1.4 kb on the ALT allele of esv3594306 juxtaposes these two ERα, FOXA1, GATA3 ChIP-seq peaks. In each case (A and B), this increases the density of activating TF domains in the region and is associated with increased expression of IGFBP5.
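The pooled PRE2A estimate of 1.14 (1.03-1.26) is a mean fold change with an interval computed across the 12 replicates. A minimal sketch of one standard way to obtain such numbers, assuming a geometric mean with a t-interval on the log scale (the text does not state the exact method), is given below with invented replicate values.

```python
# Illustrative sketch (not the authors' analysis code) of pooling reporter
# replicates: geometric-mean fold change with a t-based interval on the log
# scale, as in the pooled PRE2A estimate of 1.14 (1.03-1.26). The 12 fold
# changes below are invented placeholders.
import math
from statistics import mean, stdev

fold_changes = [1.05, 1.20, 1.10, 1.25, 1.02, 1.18,
                1.12, 1.22, 1.08, 1.15, 1.11, 1.19]
logs = [math.log(fc) for fc in fold_changes]
n = len(logs)
t_crit = 2.201  # two-sided 95% t critical value for 11 degrees of freedom
se = stdev(logs) / math.sqrt(n)
center = math.exp(mean(logs))
lo, hi = math.exp(mean(logs) - t_crit * se), math.exp(mean(logs) + t_crit * se)
print(f"fold change {center:.2f} ({lo:.2f}-{hi:.2f})")
```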
The purpose of our CRISPR-based enhancer perturbation was twofold: specifically, to interrogate the PRE2 region within its normal genomic context and, more generally, to evaluate CRISPRi and CRISPRa approaches for interrogating long-range regulatory elements that harbor credible causal variants. As none of our PRE2 sgRNAs impacted IGFBP5 expression significantly in the CRISPRi setting, our analysis raises questions as to the utility of this approach for characterizing long-range regulatory elements (PRE2 maps approximately 400 kb telomeric to the IGFBP5 promoter). This is at odds with the results of a systematic CRISPRi screen to identify enhancer elements in K562 cells, which demonstrated CRISPRi-mediated repression of c-MYC expression by sgRNAs targeting sequences mapping up to 1.9 Mb downstream of c-MYC.22 In that analysis, however, CRISPRi-mediated repression by these distal elements was modest compared to CRISPRi-mediated repression by more proximal elements and, even based on 12 biological replicates, of borderline statistical significance.22 By contrast, using CRISPRa we were able to confirm that one or more elements within PRE2 can act as a long-range regulatory element that specifically targets IGFBP5 (rather than IGFBP2 or RPL37A). Four of the nine guide RNAs targeting dCas9-VPR to sequences at PRE2 increased expression of IGFBP5; three of these colocalized with ERα, FOXA1, and GATA3 ChIP-seq peaks (PRE2-1, -2, and -8) and a fourth (PRE2-5) mapped within the esv3594306 deleted region (Figure 5A). There were also two guides that targeted dCas9-VPR to sequences mapping close to the distal ERα, FOXA1, and GATA3 ChIP-seq peak (PRE2-6 and -7) but did not increase IGFBP5 expression; this may reflect the very variable efficiency of different guide RNAs.22 We present a theoretical model in which we hypothesize that all of the PRE2 guides that increased expression of IGFBP5 did so by increasing the local density of activating TF domains, bringing a VPR domain into the proximity of a cluster of TF ChIP-seq peaks; one implication of the increase in IGFBP5 expression we observed with PRE2-5, which maps approximately 450 bp from the center of the nearest cluster of ChIP-seq peaks (Figure 5A), is that these regulatory elements may extend over relatively large (>1 kb) regions. This should not, perhaps, be surprising; at a subset of strongly activated E2-responsive enhancers, it has previously been shown that ERα recruits DNA-binding transcription factors in trans to form a large (1-2 MDa) complex.41 It has previously been suggested that sequences mapping to PRE2 act as a repressor element which, in the presence of low-dose estradiol, acts to reduce IGFBP5 expression.14 By contrast, our data support PRE2 acting as a powerful enhancer element, with the deletion allele increasing expression of IGFBP5 over and above that of the insertion allele with or without estradiol stimulation. Overall, our data are consistent with a hypothetical model in which the juxtaposition of the two ERα, FOXA1, GATA3 binding sites at PRE2 by deletion of approximately 1.4 kb of intervening sequence generates a single extended binding region (Figure 5B) that is causally associated with increased enhancer activity, higher levels of expression of the putative tumor suppressor gene IGFBP5,42 and a reduction in breast cancer risk (OR = 0.77, p = 2.2 × 10⁻²⁹) that is largely restricted to ER+ disease. In conclusion, we have identified putative enhancer elements at two additional 2q35 breast cancer risk loci.
One of these, mapping approximately 400 kb telomeric to IGFBP5, enhances transcription from the IGFBP5 promoter 30- to 40-fold. For this element, we provide evidence that a deletion of 1.4 kb is causally associated with increased enhancer activity, and we suggest a mechanism for this increased activity. Data and code availability Summary results for all variants genotyped by the Breast Cancer Association Consortium (BCAC), including rs45446698, are available at http://bcac.ccge.medschl.cam.ac.uk/. Requests for data can be made to the corresponding author or the Data Access Coordination Committee (DACC) of the Breast Cancer Association Consortium via email to BCAC@medschl.cam.ac.uk.
6,039.6
2021-06-16T00:00:00.000
[ "Biology", "Medicine" ]
Livelihood changes matter for the sustainability of ecological restoration: A case analysis of the Grain for Green Program in China's largest Giant Panda Reserve Abstract Payments for ecosystem services (PES) are expected to promote ecological restoration while simultaneously improving human livelihoods. As an adaptive management tool, PES programs should be dynamic and adjusted according to changing natural and socio-economic contexts. Taking the implementation of China's famous ecological restoration policy known as the Grain for Green Program (GGP) in the Wolong National Nature Reserve as an example, we analyzed changes in the livelihood capitals and strategies of local households that had participated in the GGP over a 10-year period and discussed the implications of these changes for the next stage of the program's implementation. Data were collected from a locally implemented questionnaire in both 2004 and 2015. We found that local livelihood capitals and strategies had experienced dramatic change over the 10-year period. Natural capital decreased and was unequally distributed among local respondents. In terms of financial capital, although agricultural and nonagricultural income increased, compensation from the GGP decreased and did not keep pace with the increasing cost of cropland, rising household income and, more broadly, national economic development and inflation. Regarding human capital, the local labor force is facing huge transformational pressures. In particular, there has been an increase in the supply of local labor aged between 21 and 40, and the implications of this for the future of the GGP should be given more attention. The findings demonstrate that some changes in participants' livelihoods were expected by the GGP but were not evenly distributed among the participants, and that PES programs are embedded in changing and multi-dimensional socio-economic contexts, so their design and implementation must be coordinated with other related policies if they are to achieve long-term success. Consideration of contextual changes is necessary when designing and adapting a PES program. Such changes may or may not be induced by a program itself, and they can advance or impede its further implementation. Our aim in this study was to analyze the livelihood changes of participants in the GGP and discuss the implications of these changes for the next stage of the program. Our research objectives were to analyze the quantity and distribution of the changes to the participants' livelihood capitals and strategies; explore the reasons for and significance of these changes; and identify how these changes could inform the design and implementation of the next stage of the GGP. | Description of the study area The research was conducted in the Wolong National Nature Reserve, which is located in Wenchuan County, Sichuan Province, southwest China (102°2′ to 102°24′ E, 30°45′ to 31°25′ N). The reserve has an area of 2,000 km² and is famous for supporting the conservation of the giant panda. The Chengdu-Xiaojin Road is the only main road connecting the local people to areas outside of the reserve. There are six administrative villages within the reserve, each with a committee responsible for management. The GGP has been implemented in the reserve since 2000. By the end of 2003, approximately 370 ha of cropland had been converted to forest. Since then, the program has been implemented in the reserve in two stages.
In the first stage, for each hectare of cropland converted to forest, the local people were compensated with 2,250 kg of grain and 300 Yuan in cash once a year for 8 years. In the second stage, only cash compensation was provided, in the amount of 3,600 Yuan per hectare of cropland converted to forest once a year for 8 years. A third stage of the program commenced in the reserve in 2015. While researchers have devoted significant attention to the levels of enrollment and farmers' re-converting their previously forested land back to cropland associated with the GGP (Chen, Lupi, et al., 2009; Chen, Zhang, et al., 2009; Chen et al., 2012) and the program's sustainability in the study area (Xu, Chen, Lu, & Fu, 2007), no attention has been given to understanding the changes that have resulted from the first two stages of the GGP in the reserve and how these changes could influence the design and implementation of future stages of the program. | Data collection Research data were obtained by Participatory Rural Appraisal (PRA). PRA is a conventional method for learning from farmers and is widely used to gather research information from rural people. Local livelihood capitals in 2004 and 2015 were chosen to discern changes over this 10-year period. The 2004 livelihood capital values were derived from a questionnaire used to evaluate the sustainability of the GGP (Xu et al., 2007). In 2015, a questionnaire was again used to collect relevant information about local livelihood capitals and local perceptions of the GGP. A total of 137 and 182 households were randomly selected for investigation in 2004 and 2015, respectively. One adult person (>18 years old) from each household was interviewed in his or her residence. To avoid potential bias, we made it clear to the participants that the survey was for academic research only and that we were not affiliated with the GGP management authority or any of the local administration authorities. | A sustainable livelihood analysis framework Following previous studies (Bremer et al., 2014; Chen et al., 2013; Liang, Li, Feldman, & Daily, 2012), we used a livelihood approach as an organizing framework to better understand changes in the local socio-economic context. The Sustainable Livelihood Framework (SLF) was used. A livelihood is defined by Chambers and Conway (1992) as "the capabilities, assets (including both material and social resources) and activities required for a means of living. A livelihood is sustainable when it can cope with and recover from stresses and shocks, maintain, or enhance its capabilities and assets, while not undermining the natural resource base." The SLF encompasses five components: (1) vulnerability context; (2) livelihood capitals; (3) transforming structures and processes; (4) livelihood strategies; and (5) livelihood outcomes (DFID, 1999, pp. 1-8). These components are intended to be dynamic due to both external interventions and the activities of the rural residents. The SLF is often used in evaluating rural development projects as it helps to organize complex data into a form that summarizes and analyzes "core influences and processes" and the interactions between different factors that impact on people's livelihoods. It provides a holistic framework for understanding and exploring the dynamics of rural livelihood outcomes and strategies associated with rural development interventions. According to the SLF, livelihoods depend on five types of capitals: natural, human, financial, material, and social.
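Returning to the compensation figures quoted at the start of this section, a back-of-the-envelope comparison of the nominal per-hectare value of the two stages can be sketched as follows; the grain price used to monetize the stage-1 in-kind component is an assumed placeholder, not a figure from the study.

```python
# Back-of-the-envelope sketch (not from the paper) comparing per-hectare GGP
# compensation across stages. The grain price used to convert stage-1 grain
# to cash is an assumed placeholder, not a figure from the study.
GRAIN_PRICE_YUAN_PER_KG = 1.4  # assumed market price; adjust as needed

stage1_value = 2250 * GRAIN_PRICE_YUAN_PER_KG + 300  # grain + cash, per ha/yr
stage2_value = 3600                                  # cash only, per ha/yr

print(f"stage 1 ~ {stage1_value:.0f} Yuan/ha/yr, stage 2 = {stage2_value} Yuan/ha/yr")
print(f"nominal change: {100 * (stage2_value - stage1_value) / stage1_value:+.1f}%")
```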
These capitals are affected by the "vulnerability context" and "transforming structures and processes." Thus, we use livelihood capitals to characterize local livelihood changes. Livelihood strategies, meaning the choices rural residents employ in the pursuit of income, security, well-being, and other productive purposes, are used to reveal local responses to livelihood changes. In order to relate the SLF to the GGP and our research objectives, livelihood capitals were measured at the household level, and indicators were selected relating to both the capitals and the GGP to provide insight into factors that could influence the sustainability of the GGP. Indicators were selected as follows: cropland holding indicating natural capital, household income indicating financial capital, and the number and educational level of household members indicating human capital. For household material capital, the numbers of pigs and cattle owned by the household were chosen as indicators because pig-feeding and cattle-feeding have a close relationship with both biodiversity conservation and agricultural practice (An, Lupi, Liu, Linderman, & Huang, 2002). | Changes in financial capital There were dramatic changes in household financial capital in recent years. Table 3 shows that both average total income per household and per capita income increased greatly, by 433% and 511%, respectively. Although average agricultural income also increased substantially (i.e., by 245%), its proportion of total household income decreased by approximately 35%. In contrast, local nonagricultural income increased by 805%, and its proportion of total household income increased to 70%. In the first stage of the GGP, cash and grain were provided to the local people as compensation for their cropland conversion, while in the second stage only cash was provided as compensation. Average total income per household also became more unevenly spread. | Changes in human capital Changes in household human capitals are outlined in Table 4, which suggests that the potential local labor force will be reduced in future years. The potential adult labor force (i.e., household members aged between 15 and 50) increased to varying degrees, with the fastest growth in potential workers in the 41-50 years age bracket. In terms of the numbers of people in the potential labor force, most were aged between 21 and 30, followed by those aged between 31 and 40. These two age groups of the potential labor force comprised 42% of the total respondents and became the largest stakeholders for off-farm employment. | Changes in material capital Changes in household material capitals are outlined in Table 5. | Changes in livelihood strategy The GGP aimed to reduce farmers' dependence on agriculture and encourage redundant rural laborers to find other nonagricultural employment. We defined the labor force as those household members aged 18 or older. Every family member of the surveyed households was asked about his/her occupation, and so we were able to determine the number of laborers employed in nonagricultural industries. The results showed that nonagricultural employment has increased substantially since 2004 (Table 6). The proportions of the local population and the potential local labor force employed in nonagricultural industries increased by 250% and 153%, respectively. The local workers were categorized as either regular employees or temporary employees.
Regular employees had stable employment with good social welfare such as hospital cover and social security. Temporary employees had unstable employment with associated high mobility and uncertainty, and limited social security. Ordinarily, temporary employees retain a strong connection to agriculture and would be likely to return to rural areas and agricultural employment if they lost their jobs or when they were older. Our results showed that more respondents were engaged in temporary employment than regular employment. The proportions of temporary employees in the local population and the potential local labor force were approximately 27% and 61%, respectively. These were higher than the respective percentages of regular employees (approximately 9% and 19%, respectively). | DISCUSSION There have been dramatic changes in livelihood capitals and their distribution among households in our study area over the past 10 years. In general, it is encouraging to have found a dramatic decrease in local cropland holdings and a dramatic increase in households' nonagricultural income, as these outcomes indicate that the locals' dependence on agriculture has decreased and that redundant rural workers have shifted away from growing crops. These changes align with the initial objective of the GGP. Although the GGP is not the only driver of these changes, it is now confronted with new socio-economic conditions in our study site and will have to be adjusted if its planned next stage is to be implemented effectively and smoothly. We highlight the following three challenges for the sustainability of the GGP's planned next stage in our study site. | Challenge 1: Heterogeneity in agricultural dependence There was a distinctly uneven distribution of cropland holdings and agricultural income among the respondents. At least two typical groups were evident among the respondents: landless respondents and land-secure respondents. In the study area, it has been established that households with more cropland are less willing to re-enroll in the GGP due to their dependence on farming (Xu, Kong, Liu, & Wang, 2017). Thus, the uneven distribution of cropland holdings deserves consideration in the next stage of the program. Yin, Liu, Yao, and Zhao (2013) have suggested a need to target disadvantaged households and communities because they are more likely to reconvert some of their afforested cropland back to farming once the program expires. Liu et al. (2010) also remind us that certain households with limited engagement in off-farm activities that have continued to depend on farming for their livelihood are significant stakeholders in the GGP. These two significant groups bring about distinct issues for the next stage of the program. For the landless respondents, most have had to transfer to off-farm industries due to their lack of access to cropland. For the land-secure respondents, their high dependence on cropland could impede their re-enrolling in the next stage of the program. Therefore, we advise GGP designers and managers to take local household heterogeneity into consideration and implement targeted measures aimed at encouraging local re-enrollment in the next stage of the GGP. | Challenge 2: Decreasing ecological compensation and increasing value of cropland Compared to the rapid growth in household agricultural and nonagricultural income between 2004 and 2015 (245.2% and 805.7%, respectively), compensation paid to households for their cropland conversion decreased substantially (−46.9%).
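The growth and decline rates just quoted combine as in the following simple illustration; the 2004 baseline values are hypothetical placeholders, and only the reported rates come from the text.

```python
# Illustrative arithmetic for the reported 2004-2015 changes: agricultural and
# nonagricultural income grew by 245.2% and 805.7%, while GGP compensation fell
# by 46.9%. Baseline 2004 values below are hypothetical placeholders (Yuan).
baselines_2004 = {"agricultural income": 6_000.0,
                  "nonagricultural income": 3_000.0,
                  "GGP compensation": 1_500.0}
rates = {"agricultural income": +2.452,
         "nonagricultural income": +8.057,
         "GGP compensation": -0.469}

for item, base in baselines_2004.items():
    value_2015 = base * (1 + rates[item])
    print(f"{item}: {base:,.0f} -> {value_2015:,.0f} Yuan ({rates[item]:+.1%})")
```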
The proportion of this ecological compensation to household total income and agricultural income decreased even more (−89.9% and −84.0%, respectively). This is a result of the continued decline in compensation from the first stage to the third stage of the GGP (http://tghl.forestry.gov.cn/portal/tghl/s/2166/content-846817.html) and the obvious increase in local incomes. In the first stage of the program, the absolute value of compensation was lower than cropland income, because most local croplands were under cabbage cultivation (Xu et al., 2007). At present, the compensation for converting cropland to forest remains far below the actual revenue potential of cropland. In contrast to the rapid decline in ecological compensation, the value, or cost, of local cropland has been increasing because of the emergence of agricultural cooperatives, which lease local cropland for commercial, market-oriented cultivation. The amount these organizations pay to lease the cropland is higher than the GGP compensation (Xu et al., 2017). It was also evident that some respondents were changing their livelihood strategy from subsistence farming to market-driven farming. Pig and cattle breeding, which in the past served as a means of supplying meat for family consumption and draft power, has increasingly turned to a focus on commercial sales. In addition, there have been obvious rises in the costs of food, housing, and other living-related items in China over recent years (Guo, Li, Yu, & Hao, 2011). As market-based instruments, PES programs should at least cover the opportunity costs of the changes to land management necessary for suppliers to provide the sought-after environmental services. Thus, decreasing compensation, together with the increasing costs of cropland and living items, will challenge the next stage of the GGP. The fact that decreasing compensation from the GGP might induce participants to abandon the program and return to farming has been a concern expressed by other researchers (Cao, Chen, Chen, & Gao, 2007; Zhen et al., 2014). In the study area, croplands were converted to ecological forests that, aside from the ecological compensation, provide no direct productive value (such as from wood or fruit) for the local people. Hence, the emergence of market-driven farming has provided the local households with a better understanding of the opportunity cost of cropland conversion and how this compares with the ecological compensation they could receive from the GGP. This ecological compensation needs to be adjusted to better reflect local expectations. | Challenge 3: Pressures on labor supply The long-term success of a PES program depends on whether the local labor force can find a livelihood alternative to growing crops. In the study area, a positive finding was that the level of education among respondents had substantially increased. This may be conducive to their re-enrolling in the next stage of the GGP (Xu et al., 2017). Relating the observed changes in human capital and livelihood strategies to the next stage of the program, we found two key challenges for the program: one is that the potential adult labor force aged between 21 and 40 has increased and is presently at a peak; the other is that most off-farm workers are engaged in temporary, high-mobility nonagricultural jobs that have few technical requirements and little social security, such as the provision of a pension. This means that these off-farm workers are not completely divorced from agriculture.
As we found in the survey, most temporary employees stated that they would return to work on the farm if they lost their job or when they were older. Together, the high pressure on the local labor force and the high potential for nonagricultural employees to return to the farm will challenge the next stage of the GGP. The GGP itself actually has a small positive effect on nonfarm employment (Kelly & Huo, 2013). The program should therefore not be limited to its own stipulations, but should resort to other related policy measures (Yin, Xu, Li, & Liu, 2005). This is important because all policies concerning rural land and the labor market can alter the opportunity cost of land use and eventually influence the incentive for farmers to participate in a cropland conversion program and the likelihood that they will convert their afforested land back to cropping (Yao, Guo, & Huo, 2010). Yin et al. (2013) have also emphasized the need to explore the interactions of different policies to achieve program effectiveness. | CONCLUSIONS It is well known that PES or restoration programs should be adaptable. As both society and the environment are not static, a constant PES will lag behind the socio-economic and environmental context at its targeted locations. Ensuring a PES is dynamic according to its targeted context will contribute to its long-term success. Taking
4,061.4
2018-03-14T00:00:00.000
[ "Economics", "Environmental Science" ]
Normal forms of dispersive scalar Poisson brackets with two independent variables We classify the dispersive Poisson brackets with one dependent variable and two independent variables, with leading order of hydrodynamic type, up to Miura transformations. We show that, in contrast to the case of a single independent variable, for which a well-known triviality result exists, the Miura equivalence classes are parametrised by an infinite number of constants, which we call numerical invariants of the brackets. We obtain explicit formulas for the first few numerical invariants. Introduction Let A be the space of differential polynomials in the variable u, i.e. formal power series in the variables ∂^{k_1}_{x^1}∂^{k_2}_{x^2}u, k_1 + k_2 > 0, with coefficients which are smooth functions of u:

A = C^∞(U)[[∂^{k_1}_{x^1}∂^{k_2}_{x^2}u, k_1 + k_2 > 0]],

for U ⊂ R. The standard degree deg on A counts the number of derivatives ∂_{x^1}, ∂_{x^2} in a monomial, i.e. it is defined by deg(∂^{k_1}_{x^1}∂^{k_2}_{x^2}u) = k_1 + k_2. In this paper, we classify, up to Miura transformations, the dispersive Poisson brackets with one dependent variable u and two independent variables x^1, x^2 of the form

{u(x^1,x^2), u(y^1,y^2)} = {u(x^1,x^2), u(y^1,y^2)}_0 + Σ_{k>0} ε^k Σ_{k_1,k_2 ≥ 0} A_{k;k_1,k_2} δ^{(k_1)}(x^1−y^1) δ^{(k_2)}(x^2−y^2),   (1)

where A_{k;k_1,k_2} ∈ A and deg A_{k;k_1,k_2} = k − k_1 − k_2 + 1. The leading term {u(x^1,x^2), u(y^1,y^2)}_0 is a (scalar, two-dimensional) Poisson bracket of Dubrovin-Novikov (or hydrodynamic) type [12,13]; in other words, it is of the form

{u(x^1,x^2), u(y^1,y^2)}_0 = Σ_{i=1,2} [ g^i(u(x)) δ'(x^i−y^i) + b^i(u(x)) u_{x^i} δ(x^i−y^i) ] Π_{j≠i} δ(x^j−y^j),

which we assume to be non-degenerate. The conditions imposed on the functions g^i(u) and b^i(u) by the requirement that { , }_0 is skew-symmetric and satisfies the Jacobi identity have been studied by several authors [15,21,22]. We require the additional condition that the bracket is non-degenerate, namely that the bracket does not vanish for any value of the function u(x). In the specific case considered here, where there is a single dependent variable and two independent variables, such conditions guarantee the existence of a change of coordinates in the dependent variable (a Miura transformation of the first kind) to a flat coordinate, which we still denote by u, in which the bracket assumes a constant-coefficient form. We can moreover perform (see [2]) a linear change in the independent variables x^1, x^2 such that the Poisson bracket assumes the standard form

{u(x^1,x^2), u(y^1,y^2)}_0 = δ(x^1−y^1) δ^{(1)}(x^2−y^2).   (2)

The Miura transformations (of the second kind [18]) are changes of variable of the form

u ↦ ũ = u + Σ_{k≥1} ε^k F_k,   (3)

where F_k ∈ A and deg F_k = k. They form a group called the Miura group. We say that two Poisson brackets which are mapped to each other by a Miura transformation are Miura equivalent. As follows from the discussion so far, the classification of dispersive Poisson brackets of the form (1) (with non-degeneracy condition) under Miura transformations (3), diffeomorphisms of the dependent variable, and linear changes of the independent variables reduces to the problem of finding the normal forms of the equivalence classes, under Miura transformations of the second kind (3), of the Poisson brackets (1) with leading term (2). We solve this problem in our main result: Theorem 1 The normal form of the Poisson brackets (1) with leading term (2) under Miura transformations of the second kind is given by

{u(x^1,x^2), u(y^1,y^2)} = δ(x^1−y^1) δ^{(1)}(x^2−y^2) + Σ_{k≥1} ε^{2k} c_k δ^{(2k+1)}(x^1−y^1) δ(x^2−y^2)   (4)

for a sequence of constants c = (c_1, c_2, . . .). Remark 1 By "normal form", in the main theorem, we mean that: i. for any choice of constants c_k formula (4) defines a Poisson bracket which is a deformation of (2); ii.
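The degree bookkeeping behind the normal form can be made explicit (our own consistency check, using only the conventions stated above):

```latex
% Our own consistency check on the conventions above: a dispersive term
%   \epsilon^m \, c \, \delta^{(k_1)}(x^1-y^1)\, \delta^{(k_2)}(x^2-y^2)
% with a constant coefficient c requires
\[
  \deg A_{m;k_1,k_2} = m - k_1 - k_2 + 1 = 0
  \quad\Longrightarrow\quad m = k_1 + k_2 - 1 .
\]
% For (k_1,k_2) = (2k+1,0) this gives m = 2k: the invariant terms in (4)
% carry even powers of \epsilon, one for each constant c_k.
```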
two Poisson brackets of the form (4) are Miura equivalent if and only if they are defined by the same constants c_k; iii. any Poisson bracket of the form (1) can be brought to the normal form (4) by a Miura transformation. We call the constants c_k the numerical invariants of the Poisson bracket. Example 1 (Hamiltonian structure of the KP equation, [8]) The Kadomtsev-Petviashvili (KP) equation describes two-dimensional shallow water waves, being a generalisation of the KdV equation. In its standard form, it is a (2+1)-dimensional PDE for a scalar field [24]; it is generally treated as the compatibility condition of an integrable (1+1)-dimensional hierarchy, where both the t and y coordinates play the role of times. However, it is possible to cast the equation in evolutionary form, with the introduction of the inverse derivative operator ∂_x^{-1}; up to rescalings of the dependent and independent variables, the KP equation can then be written as u_t = u u_x + ε² u_{xxx} + ∂_x^{-1} u_{yy}. The KP equation is Hamiltonian and integrable; it is Hamiltonian with respect to a Hamiltonian functional H and the Poisson bracket

{u(x,y), u(x',y')} = δ^{(1)}(x−x') δ(y−y').   (5)

The Poisson bracket (5) is of the form (4) for c_k ≡ 0 and the relabeling of the independent variables (x^1 → y, x^2 → x). The deformation theory of Hamiltonian (and, albeit not addressed in our paper, bi-Hamiltonian) structures plays an important role in the classification of integrable Hamiltonian PDEs [10,14]. Most results in this field have been obtained for (1+1)-dimensional systems, namely the ones that depend only on one space variable. The main result in this line of research is the triviality theorem [9,14,16] for Poisson brackets of Dubrovin-Novikov type. Together with the classical results by Dubrovin and Novikov [12], this allows one to conclude that the dispersive deformations of non-degenerate Dubrovin-Novikov brackets are classified by the signature of a pseudo-Riemannian metric. Similarly, deformations of bi-Hamiltonian pencils [1,20] are parametrised by functions of one variable, the so-called central invariants [10,11]; in a few special cases, the corresponding bi-Hamiltonian cohomology has been computed, in particular for scalar brackets [4,5,19] and in the semi-simple n-component case [3,6]. The (2+1)-dimensional case is much less studied: the classification of the structures of hydrodynamic type has been completed up to the four-component case [15], and the non-triviality of the Poisson cohomology in the two-component case has been established [7]. In our recent paper [2] we computed the Poisson cohomology for scalar (namely, one-component) brackets. Since such a cohomology is far from being trivial, the actual classification of the dispersive deformations of such brackets is a highly complicated task. We address and solve it in the present paper. The outline of the paper is as follows: in Sect. 1 we quickly recall basic definitions and facts related to the theta formalism. In Sect. 2, we specialise some results from our previous work [2] to the D = 2 case to obtain an explicit description of the second Poisson cohomology. In Sect. 3, we prove our main result. The proof is split into three steps corresponding to the three parts in Remark 1. In Sect. 3.4, we prove some technical lemmas that are required in the proof of Proposition 2. Finally, in Sect. 4 we give an explicit expression for the first few numerical invariants of the Poisson bracket. Theta formalism We present here a short summary of the basic definitions of the theta formalism for local variational multivector fields, specialising the formulas to the scalar case with two independent variables, i.e. N = 1, D = 2.
We refer the reader to [2] for the general N, D case. Let A be the space of differential polynomials

A = C^∞(R)[[u^{(s,t)}, s + t > 0]],

where we denote u^{(s,t)} = ∂^s_x∂^t_y u, and C^∞(R) denotes the space of smooth functions in the variable u. The standard gradation deg on A is given by deg u^{(s,t)} = s + t. We denote by A_d the homogeneous component of degree d. Using the standard derivations ∂_x and ∂_y on A, we define the space of local functionals as the quotient

F = A / (∂_x A + ∂_y A),

and the projection map from A to F is denoted by a double integral, which associates to f ∈ A the element ∫∫ f dx dy in F. Moreover, we will denote by the partial integrals ∫ dx, ∫ dy the projections from A to the quotient spaces A/∂_x A, A/∂_y A. The variational derivative of a local functional F = ∫∫ f dx dy is defined as

δF/δu = Σ_{s,t≥0} (−∂_x)^s (−∂_y)^t ∂f/∂u^{(s,t)}.

A local p-vector P is a linear p-alternating map from F to itself of the form

P(I_1, . . . , I_p) = ∫∫ Σ P^{(s_1,t_1),...,(s_p,t_p)} ∂_x^{s_1}∂_y^{t_1}(δI_1/δu) · · · ∂_x^{s_p}∂_y^{t_p}(δI_p/δu) dx dy,

where P^{(s_1,t_1),...,(s_p,t_p)} ∈ A, for arbitrary I_1, . . . , I_p ∈ F. We denote the space of local p-vectors by Λ^p ⊂ Alt^p(F, F). Clearly an expression of the form (1) defines a local bivector by the usual formula

P(I_1, I_2) = ∫∫∫∫ (δI_1/δu(x_1,y_1)) {u(x_1,y_1), u(x_2,y_2)} (δI_2/δu(x_2,y_2)) dx_1 dy_1 dx_2 dy_2.

The theta formalism, introduced first in the context of the formal calculus of variations in [16], can be easily extended to the multidimensional setting [2] and allows one to treat the local multivectors in a more algebraic fashion. We introduce the algebra Â of formal power series in the commutative variables u^{(s,t)} and anticommuting variables θ^{(s,t)}, with coefficients given by smooth functions of u, i.e.

Â = C^∞(R)[[u^{(s,t)}, θ^{(s,t)}, s + t ≥ 0]].

The standard gradation deg and the super-gradation deg_θ of Â are defined by setting deg u^{(s,t)} = deg θ^{(s,t)} = s + t and deg_θ θ^{(s,t)} = 1. We denote by Â_d, resp. Â^p, the homogeneous components of standard degree d, resp. super-degree p, while Â^p_d = Â^p ∩ Â_d. The derivations ∂_x and ∂_y are extended to Â in the obvious way. We denote by F̂ the quotient of Â by the subspace ∂_x Â + ∂_y Â, and by a double integral ∫∫ dx dy the projection map from Â to F̂. Since the derivations ∂_x, ∂_y are homogeneous, F̂ inherits both gradations of Â. It turns out, see Proposition 2 in [2], that the space of local multivectors Λ^p is isomorphic to F̂^p for p ≠ 1, while Λ^1 is isomorphic to the quotient of F̂^1 by the subspace of elements of the form ∫∫ (k_1 u^{(1,0)} + k_2 u^{(0,1)})θ dx dy for two constants k_1, k_2. Moreover, F̂^1 is isomorphic to the space Der'(A) of derivations of A that commute with ∂_x and ∂_y. The Schouten-Nijenhuis bracket is given by

[P, Q] = ∫∫ [ (δP/δθ)(δQ/δu) + (−1)^p (δP/δu)(δQ/δθ) ] dx dy,   P ∈ F̂^p, Q ∈ F̂^q,

where the variational derivative with respect to θ is defined as

δ/δθ = Σ_{s,t≥0} (−∂_x)^s (−∂_y)^t ∂/∂θ^{(s,t)}.

It is a bilinear map that satisfies the graded symmetry

[P, Q] = (−1)^{pq} [Q, P]

and the graded Jacobi identity

(−1)^{pr} [[P, Q], R] + (−1)^{qp} [[Q, R], P] + (−1)^{rq} [[R, P], Q] = 0

for arbitrary P ∈ F̂^p, Q ∈ F̂^q and R ∈ F̂^r. A bivector P ∈ F̂^2 is a Poisson structure when [P, P] = 0. In such a case, d_P := ad_P = [P, ·] squares to zero, as a consequence of the graded Jacobi identity, and the cohomology of the complex (F̂, d_P) is called the Poisson cohomology of P. The Miura transformations of the second kind [18] are changes of variable of the form (3). They form a subgroup of the general Miura group [14], which also contains the diffeomorphisms of the variable u. The action of a general Miura transformation of the second kind on a local multivector Q in F̂ is given by the exponential of the adjoint action with respect to the Schouten-Nijenhuis bracket, Q ↦ e^{ad_X} Q, where X ∈ F̂^1_1 is a local vector field such that e^{ad_X} u = ũ. Poisson cohomology In our previous paper [2], we gave a description of the Poisson cohomology of a scalar multidimensional Poisson bracket in terms of the cohomology of an auxiliary complex with constant coefficients.
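Before proceeding, the two gradations just introduced can be illustrated with a toy bookkeeping sketch (our own illustration, not part of the formalism):

```python
# Toy bookkeeping sketch (illustrative only): represent a monomial in the
# algebra A-hat by its u^{(s,t)} and theta^{(s,t)} factors and compute the
# standard degree (total derivative count) and super-degree (theta count).
def degrees(u_factors, theta_factors):
    """Each factor is an (s, t) pair of x- and y-derivative counts."""
    deg = sum(s + t for s, t in u_factors) + sum(s + t for s, t in theta_factors)
    deg_theta = len(theta_factors)
    return deg, deg_theta

# p_3 ~ theta * theta^{(3,0)}: standard degree 3, super-degree 2 (a bivector)
print(degrees([], [(0, 0), (3, 0)]))  # -> (3, 2)
```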
Our aim here is to give an explicit description of a set of generators of the Poisson cohomology in the D = 2 case, which will be used in the proof of the main theorem in the next section. Let us begin by recalling without proof a few results from our paper [2], specialising them to the case D = 2. Consider the short exact sequences of differential complexes (6) and (7), where the differential is induced in all spaces by

D = Σ_{s,t≥0} θ^{(s,t+1)} ∂/∂u^{(s,t)}.

On F̂ such a differential coincides with ad_{p_1}, where p_1 = ½ ∫∫ θθ^{(0,1)} dx dy. In the long exact sequence in cohomology associated with (6), the Bockstein homomorphism vanishes; therefore, the cohomology of the middle term splits into the direct sum of the cohomologies of the outer terms. Moreover, the cohomology classes in H(Â) can be uniquely represented by elements of the polynomial ring Θ generated by the anticommuting variables θ^{(s,0)}, s ≥ 0, with real coefficients. The map induced in cohomology by the map ∂_y in the short exact sequence (7) vanishes; therefore, we get the following exact sequence

0 → (Θ/∂_x Θ)^p → H^p(F̂) → (Θ/∂_x Θ)^{p+1} → 0,   (8)

where the third arrow is the Bockstein homomorphism. This sequence allows us to write the Poisson cohomology H^p(F̂) as a sum of two homogeneous subspaces of Θ/∂_x Θ in super-degree p and p+1, respectively, where the first one is simply injected, while the second one has to be reconstructed via the inverse of the Bockstein homomorphism. The Bockstein homomorphism assigns to the cocycle ∫∫ a dx dy an explicit representative B(a), which clearly commutes with ∂_x and therefore induces a map B from Θ/∂_x Θ to F̂. We have that ΔB = ∂_y, and consequently B defines a splitting map for the short exact sequence (8). We have therefore shown that H^p_d(F̂) ≅ (Θ/∂_x Θ)^p_d ⊕ B((Θ/∂_x Θ)^{p+1}_d). We remark that this lemma gives an explicit description of representatives of the cohomology classes in H^p_d(F̂). In particular, the only non-trivial classes in Θ/∂_x Θ in super-degree p = 2 are given by θθ^{(2k+1,0)} for k ≥ 1 and correspond to the deformations of the Poisson brackets in Theorem 1. The following reformulation of this observation will be useful in the proof of Proposition 2: Corollary 1 For even d, the cohomology H^2_d(F̂) consists only of elements coming from the Bockstein homomorphism, i.e. of B((Θ/∂_x Θ)^3_d); for odd d = 2k+1 with k ≥ 1, it is spanned by the class of θθ^{(2k+1,0)} together with B((Θ/∂_x Θ)^3_d). Moreover, we can define an explicit basis of (Θ/∂_x Θ)^3_d, where we use the notation θ_k = θ^{(k,0)}. Proof More generally, we can prove that a basis of (Θ/∂_x Θ)^p_d can be chosen among the standard monomials, as follows. A basis of Θ^p_d is given by monomials θ_{i_1} · · · θ_{i_p} with i_1 > i_2 > · · · > i_p ≥ 0 and i_1 + · · · + i_p = d. We arrange such monomials in lexicographic order, that is, we say that θ_{i_1} · · · θ_{i_p} > θ_{j_1} · · · θ_{j_p} if i_1 > j_1, or if i_1 = j_1 and i_2 > j_2, and so on. For an element a = θ_{i_1} · · · θ_{i_p} of the basis of Θ^p_{d−1}, we have that the leading term (in lexicographic order) of ∂_x a is given by θ_{i_1+1}θ_{i_2} · · · θ_{i_p}; hence, working modulo ∂_x Θ^p_{d−1}, we can express all the monomials of the form (11) in terms of combinations of monomials of strictly lower lexicographic order. It follows that a basis can be chosen in the form (10). By specialising to the case p = 3, and spelling out the allowed sets of indexes, we obtain the statement of the lemma. It follows that a basis of B((Θ/∂_x Θ)^3) is given by the images B(θ_a θ_b θ_c) for indices a, b, c chosen as in the basis above. Proof of the main theorem Let us first reformulate our main statement in the θ-formalism. The Poisson bracket of Dubrovin-Novikov type of the form (2) corresponds to the element

p_1 = ½ ∫∫ θθ^{(0,1)} dx dy.   (12)

Therefore, the normal form (4) in the θ-formalism corresponds to the element

p(c) = p_1 + Σ_{k≥1} ε^{2k} c_k p_{2k+1},   p_{2k+1} = ½ ∫∫ θθ^{(2k+1,0)} dx dy.   (13)

The proof of Theorem 1 reduces to proving the three statements listed in Remark 1. Proof of the statement in Remark 1.i Our first observation is: Proposition 1 For any choice of the constants c = (c_1, c_2, . . .), formula (13) defines a Poisson bivector. Proof The Poisson bivectors p_k do not depend on u and its derivatives; therefore, the variational derivatives w.r.t. u appearing in the definition of the Schouten-Nijenhuis bracket vanish. It clearly follows that p(c) is a Poisson bivector for any choice of the constants c = (c_1, c_2, . . .).
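Returning to the basis lemma above: the standard monomials with strictly decreasing indices are easy to enumerate mechanically. The following illustrative helper (ours, not from the paper) lists the basis monomials θ_{i_1}θ_{i_2}θ_{i_3} of Θ^3_d in decreasing lexicographic order.

```python
# Illustrative helper (not from the paper): enumerate the standard monomials
# theta_{i1} theta_{i2} theta_{i3} of Theta^3_d, i.e. strictly decreasing
# indices i1 > i2 > i3 >= 0 with i1 + i2 + i3 = d, in decreasing lex order.
def theta3_basis(d):
    basis = []
    for i1 in range(d, -1, -1):
        for i2 in range(min(i1 - 1, d - i1), -1, -1):
            i3 = d - i1 - i2
            if 0 <= i3 < i2:
                basis.append((i1, i2, i3))
    return basis

print(theta3_basis(6))  # [(5, 1, 0), (4, 2, 0), (3, 2, 1)]
```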
Proof of the statement in Remark 1.ii Next we show that for any distinct choice of the constants c = (c_1, c_2, . . .) the corresponding bivector p(c) belongs to a different equivalence class under Miura transformations. Suppose that p(c̃) = e^{ad_X} p(c) for some X ∈ F̂^1_1. This identity can be rewritten as

p(c̃) − p(c) = (e^{ad_X} − 1) p(c) = [(e^{ad_X} − 1)/ad_X] (ad_X p(c)).   (14)

The operator inside the brackets has the form (e^{ad_X} − 1)/ad_X = 1 + ½ ad_X + · · · and therefore we can invert it. We obtain

ad_X p(c) = (1 + ½ ad_X + · · ·)^{−1} (p(c̃) − p(c)).

By assumption the two sequences c and c̃ are not identically equal, and hence there exists a smallest index k for which c_k ≠ c̃_k. It follows that

p(c̃) − p(c) = ε^{2k} (c̃_k − c_k) p_{2k+1} + · · ·,

where the dots denote terms of standard degree greater than 2k + 1. We conclude that ad_X p(c) has to vanish in standard degree less than or equal to 2k, i.e.

(ad_X p(c))_d = 0 for d ≤ 2k.   (15)

So, the leading-order term in the standard degree in (14) is

(c̃_k − c_k) p_{2k+1} = (ad_X p(c))_{2k+1}.   (16)

The key point of the proof is to show that the left-hand side is an ad_{p_1}-coboundary, which leads to a contradiction, since we know that p_{2k+1} is a non-trivial class in H^2_{2k+1}(F̂, p_1). Notice that the ad_X side of (16) can be written as ad_{p_1} X_{2k} plus a sum of commutators of the homogeneous components of X with the bivectors p_{2s+1}; hence it is sufficient to prove that this sum is in the image of ad_{p_1}. Equation (15) gives a sequence of constraints on X. Let us consider in particular the constraints with odd degree,

(ad_X p(c))_{2s+1} = 0, s = 1, . . . , k − 1,   (17)

which can be written, expanding into homogeneous components, in the form (18). This equation for s = 1 simply says that X_2 is a cocycle w.r.t. ad_{p_1}. By the vanishing of the Poisson cohomology H^1_2(F̂, p_1), X_2 is necessarily a coboundary, i.e. X_2 = ad_{p_1} f_1 for some f_1 ∈ F̂^0_1. More generally, we have that for each s = 1, . . . , k − 1 the component X_{2s} can be written in the form (19), in terms of ad_{p_1}-exact expressions involving elements f_{2l−1} ∈ F̂^0_{2l−1}, l = 1, . . . , 2s − 1. We can prove this by induction. Let us therefore assume that (19) holds for s = 1, . . . , t − 1 for t ≤ k − 1, and show that it holds for s = t too. Substituting the inductive assumption in (18) for s = t, we get that the expression inside the brackets is a cocycle, which has to be a coboundary due to the triviality of H^1_{2t}(F̂, p_1), i.e. equal to ad_{p_1} f_{2t−1} for some f_{2t−1} ∈ F̂^0_{2t−1}. This gives (19) for s = t. Substituting (19) in (17), we get (20), up to a term that clearly vanishes. Equation (20) leads to the sought contradiction. The Lemma is proved. Proof of the statement in Remark 1.iii Finally, we prove that any Poisson bivector with leading order p_1 given by (12) can always be brought to the form (13) by a Miura transformation of the second kind. Proposition 2 Let P ∈ F̂^2_1 be a Poisson bivector with degree-one term equal to p_1. Then there is a Miura transformation that maps P to a p(c) for a choice of constants c = (c_1, c_2, . . .). Proof The Poisson bivector P ∈ F̂^2_1 has to satisfy [P, P] = 0. We want to show by induction that, taking into account this equation, it is possible, by repeated application of Miura transformations, to put all terms in normal form and to remove all terms that come from the Bockstein homomorphism. Let us denote by p^{(s)}(c_1, . . . , c_{⌊s/2⌋−1}) a Poisson bivector whose expansion consists, up to the indicated order, of the leading term p_1, the normal-form terms c_l p_{2l+1}, and Bockstein terms Q_l ∈ B((Θ/∂_x Θ)^3_l), with first non-normalized term P_s ∈ F̂^2_s, for s respectively even or odd; the dots denote higher-order terms, and (21) denotes the identity [p^{(s)}, p^{(s)}] = 0. The inductive hypothesis is valid for s = 2; indeed, p^{(2)} is exactly of the required form. Let us now show that by a Miura transformation a Poisson bivector of the form p^{(s)} can be made of the form p^{(s+1)}. When s = 2k is even, in degree 2k + 1, Eq. (21) gives a sum of two terms. The first observation is that both terms need to be separately zero. This follows from the fact that the first term has nonzero degree in the number of derivatives w.r.t. y, while the second term has degree zero.
By Corollary 1, the cohomology H^2_{2k}(F̂) is given only by elements coming from the Bockstein homomorphism, and therefore there exists Q_{2k} ∈ B((Θ/∂_x Θ)^3_{2k}) such that P_{2k} + ad_{p_1} X_{2k−1} = Q_{2k} for some X_{2k−1} ∈ F̂^1_{2k−1}. Acting with the Miura transformation e^{ad_{X_{2k−1}}} on p^{(2k)}, we get a new Poisson bivector in which the terms of degree less than or equal to 2k − 1 are unchanged, the term P_{2k} has been replaced with the term Q_{2k}, and the terms of higher order are in general different. We have therefore that p^{(2k+1)} = e^{ad_{X_{2k−1}}} p^{(2k)} is of the form above, as required. When s = 2k + 1 is odd, in degree 2k + 2, from (21) we get an equation (22). As in the previous case, the first term has to vanish; hence, P_{2k+1} is an ad_{p_1}-cocycle. The cohomology H^2_{2k+1}(F̂) decomposes into two parts; therefore, there is a constant c_k and an element Q_{2k+1} ∈ B((Θ/∂_x Θ)^3_{2k+1}) such that P_{2k+1} + ad_{p_1} X_{2k} = c_k p_{2k+1} + Q_{2k+1} for some X_{2k} ∈ F̂^1_{2k}. The second and third terms in (22) also have to be both zero. This follows from the fact that they have different degrees in the number of the u^{(s,t)}: as we have seen in Sect. 2, the elements Q_k are linear in the variables u^{(s,t)}, while the elements p_k do not contain them. From the vanishing of the last term, [Q_{2k+1}, Q_{2k+1}] = 0, we finally derive that Q_{2k+1} is zero. This is guaranteed by Lemma 4, whose proof, being quite technical, is given in Sect. 3.4. Taking into account this vanishing, the action of the Miura transformation e^{ad_{X_{2k}}} on p^{(2k+1)} gives exactly the term p^{(2k+2)}. By induction, we see that we can continue this procedure indefinitely; therefore, we conclude that we cannot have any non-trivial deformation coming from (Θ/∂_x Θ)^3 via the Bockstein homomorphism, and that the Miura transformation · · · e^{ad_{X_2}} e^{ad_{X_1}}, given by the composition of the Miura transformations defined above, sends the original Poisson bivector P = p_1 + · · · to a Poisson bivector of the form p(c) for a choice of constants c_1, c_2, . . .. The Proposition is proved. Some technical lemmas In this section, we prove Lemma 4, the statement that is essential in the proof of Proposition 2. The argument rests on a computation in which the second and third equalities follow from simple identities; since the map involved is injective, the vanishing of (23) implies that (δχ/δθ)² belongs to the image of ∂_x. From this fact, it follows that χ = 0, as we prove in the remaining part of this section. Let sq : Θ^2_k → Θ^4_{2k} be the map that sends an element α ∈ Θ^2_k to α² ∈ Θ^4_{2k}. In the rest of this section, we will use the notation θ_d = θ^{(d,0)}. Lemma 5 If the square of an element α ∈ Θ^2_k belongs to ∂_x Θ^4_{2k−1}, then α is a multiple of a single monomial θ_i θ_{k−i}, and hence α² = 0. Proof A basis of Θ^4_{2k−1} is given by standard monomials θ_{i_1}θ_{i_2}θ_{i_3}θ_{i_4} with total degree i_1 + i_2 + i_3 + i_4 = 2k − 1. By standard monomial, we indicate a monomial where the indices are ordered as i_1 > i_2 > i_3 > i_4 ≥ 0, to avoid duplicates. We can write Θ^4_{2k−1} = V_1 ⊕ V_2, where a basis for V_1 is given by standard monomials with the restriction i_1 + i_4 ≤ k − 1, and a basis for V_2 is given by standard monomials with i_1 + i_4 ≥ k. It is convenient to define also the subspace W of Θ^4_{2k} which is spanned by all monomials that appear in ∂_x V_1; more explicitly, W is generated by the standard monomials of total degree 2k with j_1 + j_4 ≤ k. We denote by Θ^2_k · Θ^2_k the subspace of Θ^4_{2k} spanned by standard monomials θ_{i_1}θ_{i_2}θ_{i_3}θ_{i_4} with i_1 > i_2 > i_3 > i_4 ≥ 0 and i_1 + i_2 + i_3 + i_4 = 2k, with i_1 + i_4 = k and i_2 + i_3 = k. It is indeed the subspace given by the products of two arbitrary elements of Θ^2_k. Clearly, both ∂_x V_1 and Θ^2_k · Θ^2_k are subspaces of W. Let us now prove that ∂_x V_2 has zero intersection with W.
Let v = Σ_γ v_γ γ be an element of V_2, where γ runs over the standard basis of V_2 described above. We have already seen that the elements ∂_x γ are linearly independent; the image ∂_x γ of a basis element equals θ_{i_1+1}θ_{i_2}θ_{i_3}θ_{i_4} plus lexicographically lower terms. The lexicographically leading-order term is therefore a standard monomial θ_{j_1}θ_{j_2}θ_{j_3}θ_{j_4} with j_1 + j_4 ≥ k + 1. But all basis elements in W are standard monomials with j_1 + j_4 ≤ k. It follows that, if γ is the lexicographically highest term in v, we must have v_γ = 0. By induction, v vanishes. The two facts ∂_x V_1 ⊆ W and ∂_x V_2 ∩ W = (0) imply at once that the preimage ∂_x^{-1}(W) in Θ^4_{2k−1} is contained in V_1, and the same holds for Θ^2_k · Θ^2_k since it is a subspace of W, i.e. we have ∂_x^{-1}(Θ^2_k · Θ^2_k) ⊆ V_1. Since sq(Θ^2_k) ⊆ Θ^2_k · Θ^2_k, our original problem reduces to finding the intersection of sq(Θ^2_k) and ∂_x V_1. Consider now an element α = Σ_i α_i θ_i θ_{k−i} of Θ^2_k whose square is in ∂_x V_1. We want to show that at most one of the coefficients α_i is nonzero. We therefore assume that at least two such coefficients are nonzero and show that this leads to a contradiction. Let s be the highest index for which α_s ≠ 0 and t < s the second-highest index for which α_t ≠ 0. Denote by W^{(j)} the subspace of Θ^2_k · Θ^2_k spanned by monomials of the form θ_i θ_j θ_{k−j} θ_{k−i}, and denote by W̃ the space spanned by the basis monomials in W which are not in any of the W^{(j)}. Observe that a monomial θ_i θ_j θ_{k−j} θ_{k−i} in W^{(j)} can appear in the ∂_x-image of four different monomials in Θ^4_{2k−1}, but only two of them are elements of V_1, namely θ_{i−1}θ_j θ_{k−j}θ_{k−i} and θ_i θ_j θ_{k−j}θ_{k−i−1}, so we only need to consider these two. Notice that a monomial in V_1 of such form, i.e. θ_l θ_j θ_{k−j} θ_{k−l−1}, is mapped by ∂_x to the sum of four monomials, two of which are in W^{(j)}, namely θ_{l+1}θ_j θ_{k−j}θ_{k−l−1} and θ_l θ_j θ_{k−j}θ_{k−l}, and two are in W̃. Since α² ∈ Θ^2_k · Θ^2_k, it can be decomposed into its components (α²)_j ∈ W^{(j)}, and we have in particular that (α²)_t is proportional to α_s α_t θ_s θ_t θ_{k−t} θ_{k−s}, since we have assumed that α_i = 0 for i > s and t < i < s. All these observations imply that there must be an element β of V_1 of the form β = Σ_l β_l θ_l θ_t θ_{k−t} θ_{k−l−1} such that its image through ∂_x gives (α²)_t plus some element in W̃. The lexicographically highest term in β, i.e. the one with l = k − 1, is sent by ∂_x to a term proportional to θ_k θ_t θ_{k−t} θ_0, which does not appear in (α²)_t; therefore β_{k−1} = 0. Proceeding like this, we set to zero all the constants β_{k−1}, . . . , β_s. Similarly, we can proceed from the lower part of the chain and set to zero all the remaining constants β_{t+1}, . . . , β_{s−1}. But then β = 0, therefore α_s α_t = 0 and we are led to a contradiction. We have proved that at most one of the constants α_i can be nonzero. In such a case, α² = 0. The Lemma is proved. Lemma 6 Consider an arbitrary element of the basis of (Θ/∂_x Θ)^3_d given in Lemma 2, and the basis of monomials θ_i θ_{d−i} in super-degree two. For this choice of bases, the map δ/δθ has a two-step triangular structure. In order to explain this, let us consider the two cases of odd and even d separately. Consider first the d = 2k + 1 case. One can check that the variational derivative δ/δθ of a basis element θ_{k−l+1}θ_{k−l}θ_{2l}, with 3l < k, is equal to a nonzero multiple of θ_{d−2l}θ_{2l} plus a nonzero multiple of θ_{d−2l−1}θ_{2l+1}, plus terms which are of lower lexicographic order. Observe that δ/δθ (θ_{k+1}θ_k θ_0) contains the monomials θ_d θ_0 and θ_{d−1}θ_1, while the variational derivatives of all other basis elements with l ≥ 1 cannot contain θ_d θ_0 or θ_{d−1}θ_1. Thus, if δχ/δθ = c · θ_i θ_{d−i} for some i, then the coefficient of θ_{k+1}θ_k θ_0 in χ has to be equal to zero. We can continue this process by induction. Assume that we have already proved that the first l elements of the basis cannot appear in χ.
Then the variational derivative of the basis element θ_{k−l+1}θ_{k−l}θ_{2l} is the only one that contains θ_{d−2l}θ_{2l} and θ_{d−2l−1}θ_{2l+1}. It follows, for the same reason as above, that such a basis element cannot appear in χ. In the case d = 2k, we can apply the same reasoning. In this case, the variational derivative δ/δθ of a basis element θ_{k−l}θ_{k−l−1}θ_{2l+1}, with 3l < k − 2, is equal to a nonzero multiple of θ_{d−2l−1}θ_{2l+1} plus a nonzero multiple of θ_{d−2l−2}θ_{2l+2}, plus terms of lower lexicographic order. Notice that θ_d θ_0 never enters the image of any basis element in this case. Since the coefficients of the two monomials above are non-vanishing, we can apply the same argument as in the case of odd d, mutatis mutandis. Now let us consider an arbitrary element χ ∈ Θ^3_d such that (δχ/δθ)² belongs to the image of ∂_x. From Lemma 5, it follows that δχ/δθ = c · θ_i θ_{d−i} for some i = 0, 1, . . . , ⌊d/2⌋. Then Lemma 6 implies that δχ/δθ = 0; hence, χ belongs to the image of ∂_x. We have proved that χ = 0 as an element of (Θ/∂_x Θ)^3_d. Lemma 4 is proved. The numerical invariants of the Poisson bracket In principle, all the numerical invariants of a Poisson bracket of the form (1), namely the sequence (c_1, c_2, . . .), can be extracted iteratively, solving order by order for the Miura transformation which eliminates the coboundary terms. Providing a general formula for the invariants of a Poisson bivector is hard, since the elimination of each coboundary term affects in principle all the higher-order ones, and it is necessary to give an explicit form for the Miura transformation. However, the lowest invariants can be computed as follows. Proposition 3 Consider a Poisson bracket of the form (1), where A_{k;k_1,k_2} ∈ A and deg A_{k;k_1,k_2} = k − k_1 − k_2 + 1. Then the first numerical invariants of the bracket, giving the normal form of Theorem 1, are

c_1 = A_{2;3,0},   (24)

c_2 = A_{4;5,0} − c_1 A_{2;2,1}.   (25)

Notice that A_{2;3,0} is implied to be a constant. Proof We recall that, given a Poisson bracket P of the form (1), it can be expanded according to its differential order. For notational compactness, we expand P = p_1 + ε P_2 + ε² P_3 + ε³ P_4 + · · ·, with P_k ∈ F̂^2_k. In this proof, we replace (x^1, x^2) with (x, y), as we did in the previous sections; moreover, with a slight abuse of notation, we identify the derivatives of the Dirac delta with the corresponding elements of F̂ previously used. Using this notation, the Schouten identity [P, P] = 0 reads as the sequence of equations (26), one in each degree k ≥ 2. The first equation is [p_1, P_2] = 0; we solved it in [2], finding the general solution for P_2, parametrised by an arbitrary function f(u). Since H^2_2(F̂) = 0, we have P_2 = [X_1, p_1], and the Miura transformation that eliminates P_2 from P is e^{−ad_{X_1}}. The evolutionary vector field X_1 has a characteristic expressed in terms of f(u). We also observe that ad^m_{X_1} p_1 = 0 for m > 1. We apply the Miura transformation generated by −X_1 to P and get

P̃ = e^{−ad_{X_1}} P = p_1 + ε² P_3 + ε³ (P_4 − [X_1, P_3]) + · · ·.

The first equation of the system (26) for P̃, and the results used in the proof of Lemma 2, give us P̃_3 = c_1 p_3 + [X_2, p_1]. [X_2, p_1] is a bivector whose degree in the number of derivatives w.r.t. x^2 is at least 1; notice that x^1 corresponds to x and x^2 corresponds to y, in the notation of Sects. 3 and 4. Hence, we can write

P̃_3 = A_{2;3,0}(u) p_3 + A_{2;2,1}(u) δ^{(2)}(x−x') δ^{(1)}(y−y') + · · ·.

This equation immediately gives A_{2;3,0}(u) = A_{2;3,0} = c_1, as in (24). Moreover, we can solve it for X_2; the characteristic of the evolutionary vector field is a differential polynomial whose top-degree part w.r.t. the x-derivatives is ½ A_{2;2,1}(u) ∂²_x u + Ã(u)(∂_x u)². Here we are interested only in the first summand, because it is the one that gives the highest number of x-derivatives in [X_2, p_r], for any r.
We apply to P̃ the Miura transformation e^{−ε² ad_{X_2}} to eliminate the coboundary term of P̃_3 and are left with a bracket of the form (27), p_1 + ε² c_1 p_3 plus higher-order terms. We now use the fact that H^2_4(F̂) = 0 to write the ε³-coefficient as ad_{X_3} p_1 for some homogeneous vector field X_3 of degree 3. This allows us to replace P_4 in (27) and to apply the Miura transform e^{−ε³ ad_{X_3}} to it, to get rid of the term of order ε³ in the expansion. The terms of order < 3 are left unaffected by this transformation, while the coefficient of ε⁴ becomes an expression which, by our results about H^2_5(F̂) and the proof of Lemma 2, is cohomologous to c_2 p_5. The invariant c_2 must be read off by taking the coefficient of p_5 on the left-hand side of the equation: this coefficient cannot come from summands whose y-degree is greater than or equal to 1. Thus we focus on the summands ad²_{X_1} p_3 and [X_2, p_3]. A direct computation shows that in ad²_{X_1} p_3 the term p_5 does not appear, while it does appear in [X_2, p_3]. Using the form of X_2 we have previously derived, we find

P̃_5 = A_{4;5,0}(u) p_5 + · · · = (c_2 + c_1 A_{2;2,1}(u)) p_5 + · · ·,

from which we get (25). Example 2 We can compute all the numerical invariants when the Poisson bracket is particularly simple. Let us consider the bracket

{u(x,y), u(x',y')} = δ(x−x') δ^{(1)}(y−y') + ε² [ δ^{(3)}(x−x') δ(y−y') + δ^{(2)}(x−x') δ^{(1)}(y−y') ].

Proposition 3 immediately tells us that c_1 = 1 and c_2 = −1. Let us denote for brevity by p_{s,t} the bivector corresponding to ½ θθ^{(s,t)}. The bivector corresponding to the bracket then reads P = p_1 + p_3 + p_{2,1}, and p_{2,1} = ad_{X_2} p_1. It is very easy to derive X_2 = ½ u_{2x} θ. We have ad_{X_2} p_{s,t} = p_{s+2,t}. The Miura transformation e^{−ad_{X_2}} applied to P gives an expansion in the bivectors p_{s,t}. Notice that the term n = 0 in the first sum gives the only contribution of order 3, giving c_1 = 1. The further p_1-coboundary term should be read from the n = 1 term of the second sum, namely −½ p_{4,1} = ad_{X_4} p_1. The next Miura transformation leads to an expansion in the bivectors p_{2n+4m,1} with coefficients of the form (−1)^{n+2m}/(2^m m! n!). The procedure goes on, always requiring us to find the vector field cancelling the lowest-order term of the form p_{s,1}; at each step, we will need vector fields X_{2s+2} such that ad_{X_{2s+2}} p_1 cancels that term. Example 3 (Two-dimensional Euler equation) The vorticity ω of an incompressible two-dimensional fluid evolves according to a Hamiltonian equation with the Poisson bracket

{ω(x,y), ω(w,z)} = ω_x δ(x−w) δ^{(1)}(y−z) − ω_y δ^{(1)}(x−w) δ(y−z).   (30)

It should be noticed that also in this case, as for Example 1, the Hamiltonian functional is not local in the field ω. Indeed, the incompressibility of the fluid in two dimensions allows us to introduce the stream function ψ(x,y) such that u = ψ_y and v = −ψ_x, for which Δψ = −ω and δH/δω = −ψ. The deformation is a coboundary in the Poisson cohomology of p_1, which follows from H^2_2(p_1) = 0. In particular, it is obtained as [p_1, X_1] with X_1 = −u u_x θ. Moreover, a simple computation shows that [X_1, [X_1, p_1]] = 0. This means that, with a Miura transformation, exp(−ad_{X_1}) P = p_1, and the normal form of the bracket (30) has numerical invariants c_k ≡ 0.
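As a consistency check on the reconstructed formulas (24)-(25) (our own computation, not text from the paper), the invariants of the bracket of Example 2 follow directly:

```latex
% Our own check of the reconstructed formulas (24)-(25) against Example 2,
% where P = p_1 + p_3 + p_{2,1}: the only nonzero dispersive coefficients are
% A_{2;3,0} = 1 and A_{2;2,1} = 1, while A_{4;5,0} = 0.
\[
  c_1 = A_{2;3,0} = 1 , \qquad
  c_2 = A_{4;5,0} - c_1 A_{2;2,1} = 0 - 1 \cdot 1 = -1 ,
\]
% in agreement with the values c_1 = 1, c_2 = -1 stated in Example 2.
```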
9,122.4
2017-07-12T00:00:00.000
[ "Mathematics" ]
Conformal Parallel Plate Waveguide Polarizer Integrated in a Geodesic Lens Antenna

Here, we propose a low-profile polarizing technique integrated in a parallel plate waveguide (PPW) configuration, compatible with fully metallic geodesic lens antennas. The geodesic shape of the antenna is chosen to resemble the operation of a Luneburg lens. The lens is fed with 11 waveguide ports with 10° separation, producing 11 switchable beams in an angular range of ±50°. Two metallic polarizing screens are loaded into the aperture of the antenna to rotate the electric field from a vertical linear polarization, which is the polarization of the transverse electromagnetic (TEM) mode supported in the lens, to a +45° linear polarization. Since the polarizing unit cells are integrated into the aperture of the antenna, the final design is compact. In addition, the size of the polarizing unit cells is about 0.55λ at the central frequency of operation, making the antenna suitable to produce an array formed of stacked lenses. A prototype of the antenna in the Ka-band was manufactured and tested, verifying the performance obtained in simulations.

The feeding network of electronically scanned arrays becomes complex and lossy [10]. Losses in the feeding network can be avoided by having a fully mechanically scanned array; this, however, results in a slower scanning speed and a bulky design [8]. Lens antennas have received attention due to their wideband behavior and their ability to produce high-gain switchable beams over a wide angular range without requiring a complex feeding network [11]. Lens antennas were broadly studied in the 1940s-1960s [12], [13] but deemed impractical at low frequencies due to their large size and high manufacturing cost. Nevertheless, when the frequency increases, the physical size of the lenses is reasonably small while being large in terms of wavelength, meaning they can provide high-gain beams [14]. The Luneburg lens [15] is an interesting case for antenna designs since it can be used to produce multiple beams in a large angular range and a wide frequency bandwidth. The planar version of the Luneburg lens is a compact device with a wide scanning range in one plane [16]. The required graded refractive index of the Luneburg lens can be achieved using dielectrics [17], metasurfaces [18], [19], [20], [21] or geodesic surfaces [22], [23], [24], [25], [26]. Geodesic surfaces can mimic the refractive index of rotationally symmetric lenses using a profiled surface [22], [27], [28]. By making use of the direction orthogonal to the beamforming plane, a parallel plate region can be deformed into a geodesic surface [29]. The resulting surface is a device that can be easily manufactured and eliminates the need for inhomogeneous materials to achieve the required refractive index of the lens. However, this is done at the cost of an increased height of the lens, hence sacrificing compactness. Solutions to this problem have been addressed in [23], [24], and [30].
Investigating ways to implement dual-polarized lens antennas is of special interest for both satellite and terrestrial communication systems, where it is necessary to transmit or receive dual-polarized signals to or from the antennas [31], [32]. Polarization manipulation is indeed a challenge in fully metallic parallel plate waveguide (PPW) beamformers due to the supported fundamental modes. This is less challenging in 3-D lenses, since the polarization can be readily implemented in the feed or before the lens [33], [34]. The typical solution in PPW beamformers is to have an array of polarizing unit cells at some distance away from the beamformer. One such design is proposed in [35], where the polarizer array transformed the linear polarization of the PPW beamformer to circular polarization. In addition, the polarizer array acted as a reflector which could be mechanically steered. However, for some antenna applications it is not suitable to use big reflectors for polarization transformation, and hence compact solutions are needed [36]. Using septum polarizers integrated within a square waveguide is one method to achieve circular polarization for a PPW beamformer in a compact way [37]. In septum polarizers, a square waveguide is fed by two rectangular waveguides, each of which supports one propagating mode. The square waveguide, on the other hand, supports two orthogonal propagating modes. The polarization of the incident signal is converted by adjusting the phase between these two orthogonal propagating modes. In [38], septum polarizers integrated within a square waveguide were used to generate circular polarization in a PPW multiple-beam quasi-optical beamformer operating in the Ka-band. This was done by discretizing the aperture of the beamformer, which results in a reduced scanning range. In [39], a modification of the standard septum is used where the vertical walls added to create square waveguides are removed, and the stepped septum is replaced with a saw-like design based on periodic teeth. This results in a continuous aperture, avoiding the problem of the limited scanning range in [38]. However, this concept comes at the cost of limited bandwidth, because the TEM mode is maintained in the teeth area, while the orthogonal mode is a TE mode. The two modes have very different propagation characteristics, meaning the required phase shift between the two modes will only be obtained at the design central frequency. In this article, we propose a low-profile solution to obtain a linear-to-linear polarization transformation in a fully metallic geodesic Luneburg lens operating in the Ka-band. The concept is based on a PPW polarizer with wideband matching to free space and polarization conversion. The implementation of the presented solution relies on thin polarizing sheets that may be bent to adapt to the curvature of the aperture (e.g., in combination with a Luneburg or geodesic lens), hence supporting wide scanning. The polarizing screen provides linear polarization rotation (±45°) to enable polarization diversity from a stack of PPW beamformers. Even though this concept is demonstrated using a geodesic Luneburg lens, the method can be applied to any kind of PPW beamformer, such as a shaped parallel plate delay lens [40] or a pillbox antenna [11], [41]. This article is organized as follows.
In Section II, the geodesic surface of the lens antenna is introduced, and a flare and feeding design are presented. This section also reports the simulation results of the whole lens antenna. Section III is dedicated to the polarizing unit cell design and a discussion on how to integrate the polarizer together with the antenna. Section IV includes the experimental results of the integrated antenna, and finally, the main conclusions are drawn in Section V.

A. Geodesic Lens Design

The Luneburg lens is a rotationally symmetric graded-index lens [15]. The refractive index of the lens varies from √2 in the center to 1 at the border of the lens. When the Luneburg lens is excited with a cylindrical source on its boundary, it produces a wave with a planar phase front in the diametrically opposite direction of the lens. Due to the rotational symmetry, when this lens is integrated in an antenna, the scan losses are low and scanning is readily achieved by feeding the lens at different locations along its boundary, eliminating the need for a complex feeding network. In [29], a surface equivalence of the Luneburg lens is proposed and implemented with a PPW which is deformed to mimic the gradient refractive index of the Luneburg lens. The resulting PPW lens is referred to as a geodesic Luneburg lens. The main advantage of the geodesic lens is that it can be realized using a homogeneous material, such as vacuum or air. One disadvantage of the geodesic lens is the required height of the equivalent surface, which is a third of its diameter. This issue is addressed in [30], where it was proposed to fold the geodesic surface to achieve a more compact device while preserving the optical characteristics of the lens. Such a structure was not studied again in the literature until the recent implementation presented in [23], where a height reduction by a factor of 2.5 compared to the conventional geodesic Luneburg lens is reported. In [24], a rigorous design procedure for implementing a compact geodesic lens is described. The resulting lens is referred to as a water drop lens. We have followed such a design procedure in the current investigation. The shape of the designed water drop lens used in this work is shown in Fig. 1. The dashed red lines show the surface profile of the water drop lens. The distance between parallel plates is 2 mm, which is about 0.2λ at the central frequency. This distance was chosen so that only the TEM mode can propagate within the frequency range of interest. In Fig. 1, the solid blue lines show modifications applied to the initial profile, in terms of cuts, to improve the matching at the folding points. These cuts are C1 = 0.5 mm and C2 = 0.35 mm. Since they are small, the beamforming capability of the lens is not affected. This lens is designed to operate from 25 to 31 GHz. The diameter of the lens is chosen as 107 mm, which is approximately 10λ at the center frequency. The inset of Fig. 1 shows how the lens transforms a cylindrical wave excited at its border to a planar wavefront in the opposite direction.

B. Flare and Feeding Design

To feed the water drop lens, we use a standard WR28 waveguide. Since the distance between parallel plates in the water drop lens does not match the height of a WR28, a stepped waveguide transition is needed, as illustrated in Fig. 2. To reduce the computational time of the simulations, only a portion of the lens is included in the optimization of the stepped waveguide, as shown in Fig. 3(a).
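As a quick numerical cross-check of the quoted dimensions, the short Python sketch below evaluates the standard Luneburg index law n(r) = sqrt(2 − (r/R)²) and expresses the 107 mm diameter and 2 mm plate spacing in wavelengths at 28 GHz. It is illustrative only and not part of the published design flow.

```python
import numpy as np

C0 = 299_792_458.0          # speed of light, m/s
f0 = 28e9                   # central frequency, Hz
lam = C0 / f0               # free-space wavelength, ~10.7 mm

D = 107e-3                  # lens diameter quoted in the text, m
h = 2e-3                    # parallel-plate spacing quoted in the text, m

def luneburg_index(r, R):
    """Standard Luneburg refractive-index law: n(r) = sqrt(2 - (r/R)^2)."""
    return np.sqrt(2.0 - (r / R) ** 2)

R = D / 2
r = np.linspace(0.0, R, 5)
print("n(r):", np.round(luneburg_index(r, R), 3))   # sqrt(2) at the center -> 1 at the rim
print(f"diameter = {D / lam:.1f} lambda")           # ~10 lambda, as stated in the text
print(f"plate spacing = {h / lam:.2f} lambda")      # ~0.19 lambda, i.e. about 0.2 lambda
```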
On the other side of the lens, a flare design is required to allow efficient radiation to free space. The flare has an exponential tapering from the height of the parallel plate of the lens, 2 mm, to a final aperture height of 10 mm, which is approximately one wavelength at the central frequency. The reflection coefficients for the designed feeding and flare are depicted in Fig. 3(c). The flare has a reflection coefficient below −20 dB in the frequency band of interest, and the feeding reflection coefficient is below −18 dB. The simulated scattering parameters of the complete antenna are shown in Fig. 5(a), demonstrating that all the reflection coefficients are below −16 dB and port-to-port coupling is below −19 dB in the frequency band of interest. The simulated 2-D radiation patterns for ports 1, 6, and 11 at 25, 28, and 31 GHz are shown in Fig. 5(b). The antenna produces directive beams, and the performance is stable over the scanning range and the frequency band. The sidelobe levels are as low as −20 dB at the highest frequency, 31 GHz, and never higher than −14 dB at the lowest frequency, 25 GHz. Fig. 5(c) shows the maximum directivity of all 11 feeding ports at 25, 28, and 31 GHz. A low directivity variation is observed between pointing directions, with a maximum of 0.2 dB.

A. Polarization Diversity in PPW Beamformers

Polarization transformation in fully metallic PPW beamformers introduces challenges, mainly due to the fundamental modes supported in such devices. The height of the PPW used to form the required geodesic surface must be chosen so that only the TEM mode is allowed. Introducing the second mode, the TE01 mode, would not result in the desired performance of the lens due to the dispersive relationship between the two modes. This means that any polarization transformation must be done after the beamforming stage of the antenna. Techniques to achieve dual-polarized PPW antennas are of special interest for both satellite and terrestrial communication systems, as polarization diversity doubles the link capacity for a given frequency bandwidth allocation. Here, we want to investigate a way to transform the nominal vertical polarization of the TEM mode supported in the geodesic Luneburg lens to a +45° linear polarization, so that polarization diversity may be achieved by stacking two lenses with polarization transformations of +45° and −45°. In this article, we propose a compact polarizer that does not limit the bandwidth or the beam scanning capabilities of the antenna.

B. Complementary Split-Ring Resonators

In [42], it is shown that twist-symmetric complementary split-ring resonators (CSRRs), perforated from a sheet of aluminum, can be used to construct a flat fully metallic lens. This type of perforated sheet with CSRRs can also be used to generate polarization transformation with a wideband performance, as demonstrated in [43]. In the aforementioned studies, the CSRRs were used in a 2-D array configuration. In this work, we load the CSRRs in a PPW with a 1-D design for polarization transformation in a geodesic Luneburg lens. The CSRR unit cell is shown in Fig. 6(a). The unit cell is characterized by a rectangular section with dimensions Pz and Px, a center bar of width g, and a slot of width w made by two concentric circles with radii S and S − w. The center bar is rotated by an angle δn, so the electric field traveling through the CSRR rotates as well. This rotation of the electric field is smoothly achieved in steps, as illustrated in Fig. 6(b), where the center bar of every consecutive unit cell is rotated by an incremental value.
This polarization conversion in steps improves the matching between screens when rotating the center bar. This means that the fewer the screens, the larger the reflections. A study to determine the minimum required number of polarizing screens to achieve a good matching to free space must therefore be carried out.

C. Design Procedure for the Polarizer

In [42] and [43], the CSRRs were studied at a unit cell level, as commonly done when designing a 2-D transmit array. That design environment does not fit the purpose of this work, since the polarizer will not be used in a planar configuration but rather in a PPW environment. The CSRRs will be repeated in the x-direction, but there is only one CSRR unit cell in the z-direction. To mimic this environment, the polarizer is analyzed directly in its PPW environment, as shown in Fig. 7, where a PPW is loaded with n = 4 polarizer screens. For each screen, the unit cell is repeated 16 times in the x-direction. In this way, when the PPW is excited at the waveguide port with a TE10 mode, the excited wave will behave like a quasi-TEM wave. Furthermore, with 16 unit cells, the width of this simulation model is roughly the same as the lens diameter. The PPW height is tapered from the height of the PPW of the lens to the height of the CSRR unit cells. As can be seen in the cross-sectional view of the loaded PPW in Fig. 7(b), the polarizer screens are placed in grooves for assembly purposes. Two, three, and four polarizer screens are considered for the study, as shown in Fig. 8(a). In all the cases, the outer radius and the slot width are kept constant as S = 2.65 mm and w = 1 mm, respectively. The lateral dimensions are Px = Pz = 5.7 mm, roughly 0.55λ at the central frequency. These lateral dimensions were chosen to be small enough so that the final design is suitable for a vertically stacked antenna array. It must be noted that with this lateral dimension, the cutoff of the TE01 PPW mode is around 26 GHz, which means that this mode (horizontal polarization) is operating below cutoff. Therefore, in order not to suppress this mode in the PPW region between the screens, they must be closely spaced. The thickness of the perforated metal sheets, t, is chosen as 0.3 mm, since at this thickness aluminum is still quite pliable and the sheets can thereby be bent into a circular shape without damage. For n = 2, 3, and 4, the rotation of the last polarizing screen is set to 45°. The rest of the parameters are then tuned to get the best matching over the frequency range. The reflection coefficients for the different numbers of screens are shown in Fig. 8(b). The best result is achieved with the highest number of screens, n = 4. However, decreasing the number of screens down to two still yields acceptable results, with a reflection coefficient below −15 dB over the frequency band of interest. Therefore, we continue the design process using only two polarizer screens. To estimate the performance of the polarizer screens, we study the far field using Ludwig's third definition for co-polarization (co-pol) and cross-polarization (x-pol), which is suitable for a linear directional radiation pattern [44]. The inset of Fig. 9 defines the co-pol and x-pol. The parameters of the unit cell and screens were optimized, and their values are given in Table I. Fig. 9 shows a parametric sweep of the rotation angle δ2 at 45°, 47°, 49°, and 50° while keeping δ1 = 29°.
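The quoted 26 GHz cutoff can be sanity-checked from the unit-cell dimensions: for a guide of transverse dimension a, the first higher-order TE mode cuts off at fc = c/(2a). The sketch below evaluates this textbook estimate with a = 5.7 mm; it is not the full-wave result used in the paper.

```python
C0 = 299_792_458.0   # speed of light, m/s

def te_cutoff(a_m: float) -> float:
    """First higher-order (TE1-type) cutoff of a guide of transverse size a: fc = c / (2 a)."""
    return C0 / (2.0 * a_m)

a = 5.7e-3  # lateral unit-cell dimension (Px = Pz) from the text, m
print(f"fc = {te_cutoff(a) / 1e9:.1f} GHz")  # ~26.3 GHz, matching the ~26 GHz quoted above
# Hence the horizontally polarized TE01 mode is below cutoff at the lower band edge,
# which is why the polarizer screens must be closely spaced.
```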
The bar needs to be rotated more than 45°, since the metallic plates in the z-direction suppress the TE01 mode, especially at the lowest frequency, where this mode is below cutoff. By rotating δ2 beyond 45°, it is possible to get a lower level of x-pol. Therefore, the chosen δ2 is 50°. The numerical results clearly indicate that the performance of the polarizer improves as the frequency increases. This is because the modal response becomes less dispersive when we move further away from the cutoff of the TE01 PPW mode. The overall x-pol level over the design frequency band could therefore be improved by increasing the unit cell height so that the TE01 PPW mode operates further away from cutoff. However, this would result in increased spacing between the antenna elements if the design is implemented in a vertically stacked antenna array, limiting beam steering in elevation.

D. Polarizer Under Oblique Incidence

The polarizer is intended to be installed conformally in the aperture of the PPW beamformer. As a result, the performance of the polarizer is not dependent on the scanning angle and is maintained when scanning to any direction. On the other hand, this implies that the polarizer partially operates under oblique incidence for all the scan angles, which may lead to reduced performance [45], [46]. To evaluate the performance under oblique incidence, we simulate the truncated planar polarizer rotated relative to the waveguide feed. The model used for the study is illustrated in Fig. 10(a). Specifically, we evaluate how rotation affects the polarization isolation under incident angles, β, from 0° to 40° in 10° steps. The polarization isolation is defined as the difference between the peak gains of the co-pol and x-pol in the −3 dB beamwidth of the main beam. The results of the study are shown in Fig. 10(b). We observe that the polarizer maintains its performance under incident angles up to β = 20°. At β = 30°, the performance at higher frequencies deteriorates, as we observe that the polarization isolation is above −15 dB. At β = 40°, the polarizer is no longer performing as intended, with the polarization isolation above −15 dB over the whole frequency range. With this study, we conclude that the angular stability of the polarizer is approximately ±30°. Fig. 11 illustrates the electric field distribution in the lens designed in Section II when fed with port 6 at 28 GHz. We sample the power density on the dashed line as shown in Fig. 11(a). The dashed line corresponds to the position where the polarizer screens will be integrated in the final antenna design. We plot the result in Fig. 11(b). It is interesting to note that approximately 85% of the total power is confined within the ±30° range, corresponding to the shaded area in Fig. 11(b). This means that the majority of the power is incident at the polarizer with angles lower than 30°. This analysis provides further confidence in the operation of the conformal polarizer design prior to integration with the lens.

E. Comparison Between Antenna With a Flare and Polarizer Screens

In this section, we compare the simulation results of the antenna with a flare and the antenna with the polarizer in the aperture. As noted above, the vertical aperture size in the lens with the polarizer is smaller than that in the lens without the polarizer. To provide a fair comparison of the two antennas, we therefore compare the normalized radiation patterns. Fig. 12 shows a cross-sectional view of the two lenses, and Fig. 13 shows the simulation model and integration of the polarizer in the lens antenna.
As illustrated in Fig. 13, the polarizer screens are bent to the shape of the lens and then placed between the top and bottom plates of the antenna. Fig. 14(a) presents normalized radiation patterns for port 6 in the beamforming plane (H-plane) and the plane orthogonal to the beamforming plane (E-plane) at 28 GHz. As can be observed, the beamwidth in the H-plane is roughly the same for both antennas. However, since the antenna with the radiating flare has a larger aperture in the E-plane than the antenna with polarizer screens, it has a narrower radiation pattern in this plane. In Fig. 14(b), it can be seen that the radiation efficiency for both antennas is above 95%. In the same figure, we plot the 3 dB beamwidth in the H-plane of the two antennas when exciting the center port. The beamwidth of the two lenses is similar up to 30 GHz, where we observe an increase in beamwidth for the antenna with polarizer screens. This is most likely due to phase aberrations caused by the polarizer screens.

IV. EXPERIMENTAL RESULTS OF THE FULLY METALLIC INTEGRATED ANTENNA AND POLARIZER

In this section, we present the simulations and experimental verification of the integrated lens antenna. The antenna is simulated using the time-domain solver of CST Microwave Studio. The polarizing screens are curved into a circular shape with a radius that matches the placement grooves allocated in the flare of the antenna. The manufactured prototype is shown in Fig. 15. Fig. 15(a) shows the top and bottom plates of the geodesic lens antenna before assembly, with the polarizer screens placed in the bottom plate. The two plates that form the geodesic surface were manufactured using CNC milling, and the polarizing screens were manufactured by water-jet cutting aluminum sheets. Fig. 15(b) illustrates the assembled antenna during measurements in the anechoic chamber at KTH Royal Institute of Technology. The measured and simulated reflection coefficients are presented in Fig. 16(a). The measured reflection coefficients are below −10 dB in the frequency band of interest, but are a bit higher than the simulated results, which are below −15 dB. The measured and simulated port-to-port coupling between selected ports are depicted in Fig. 16(b). Similar levels of port-to-port coupling are obtained from the measurement and simulation, with most of the values below −20 dB. The measured and simulated normalized radiation patterns of the co-pol in the scanning plane at 25, 28, and 31 GHz are depicted in Fig. 17. There is a good agreement between the simulated and measured results, especially at 25 and 28 GHz. In the measurements, the beamwidth at 31 GHz slightly increases. The sidelobe levels in both the simulations and measurements are below −15 dB for all the ports at 25 and 28 GHz. Slightly higher sidelobes are observed at 31 GHz, but always below −13 dB. The measured and simulated peak gains of ports 5, 6, 8, and 11 are shown in Fig. 18(a). Although omitted here, the other ports had a similar performance. The gain is stable between ports, although some differences are noticeable between the measured and simulated results. This discrepancy is due to the surface roughness of the materials. A simulation with a root mean square (Rq) surface roughness of 5 μm is illustrated in Fig. 18(a), demonstrating a good agreement with the measurements. Port 6 experienced more losses than port 1, especially at 31 GHz.
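The attribution of the gain discrepancy to surface roughness is consistent with a simple skin-depth estimate: at these frequencies the skin depth of aluminum is roughly half a micrometer, so a 5 μm Rq roughness is an order of magnitude larger and noticeably lengthens the current path. The Python sketch below computes the skin depth; the resistivity value is a standard handbook number, not taken from the paper.

```python
import math

MU0 = 4e-7 * math.pi     # vacuum permeability, H/m
RHO_AL = 2.65e-8         # resistivity of aluminum (handbook value), ohm*m

def skin_depth(f_hz: float, rho: float = RHO_AL) -> float:
    """Skin depth of a good conductor: delta = sqrt(2 rho / (omega mu0))."""
    return math.sqrt(2.0 * rho / (2.0 * math.pi * f_hz * MU0))

for f in (25e9, 28e9, 31e9):
    print(f"{f/1e9:.0f} GHz: delta = {skin_depth(f)*1e6:.2f} um")
# ~0.52 um at 25 GHz down to ~0.47 um at 31 GHz: a 5 um Rq roughness is about
# 10x the skin depth, so extra conductor loss (and lower measured gain) is expected.
```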
The main difference between these two ports is in the waveguide length, which for port 6 is substantially longer. The longer the waveguide, the higher the undesired leakage and material losses due to surface roughness. It is important to note that the waveguides are only for testing purposes. In a real implementation, these waveguides are not necessary, since the lens will be directly connected to the PCB. Fig. 18(b) presents the polarization isolation of the integrated antenna. The isolation obtained with the truncated planar polarizer (Fig. 9) is included as a reference. The polarization isolation of the antenna is defined in the same manner as in Section III. Good agreement is observed between the x-pol of the truncated planar polarizer and the simulated and measured x-pol levels in the integrated lens antenna. The polarization isolation is below −18 dB over the whole frequency range and mostly below −20 dB in both the simulations and measurements.

V. CONCLUSION

A fully metallic and low-profile method to achieve polarization transformation in a geodesic Luneburg lens antenna has been proposed and validated experimentally. The polarization transformation is achieved by placing two metallic screens of CSRRs in the aperture of the antenna. The polarizing screens rotate the electric field from a vertical linear polarization to a linear +45° polarization while providing a good impedance matching to free space. Since the screens are placed conformally in the aperture of the antenna, the antenna performance is unaffected when the beam is steered in the angular range of ±50°. The proposed concept can be extended to design PPW antennas that generate circular polarization by replacing the linear-to-linear polarizer with a linear-to-circular polarizer similar to the designs in [32], [47], and [48]. To the authors' best knowledge, no polarizer design applicable to a PPW beamformer has been reported that transforms the polarization while providing impedance matching to free space. In addition, the reported designs on low-profile polarization transformation in PPW beamformers suffer from either limited bandwidth or limited scanning range [38], [39], which is not the case for the proposed design. The integrated antenna operates at a center frequency of 28 GHz with over 20% bandwidth, having measured reflection coefficients below −10 dB, a port-to-port coupling below −18 dB, and a measured peak gain of 17.6 dBi. Good agreement is achieved between the measured and simulated radiation patterns, apart from a slightly lower gain observed in measurements. The lower gain is attributed to the surface roughness of the metal, which was not considered in the simulations. The antenna supports a beam scanning of ±50°, which is demonstrated with 11 discrete beams with 10° separation. By vertically stacking the proposed integrated antenna, polarization diversity can be exploited by mirroring the polarizing screens in the antennas. One of the antennas would then radiate a +45° polarized wave, while the other one a −45° polarized wave. Furthermore, the presented design has an aperture height of 0.55λ, and as a result, 2-D beam steering (switching in azimuth and scanning in elevation) is enabled by stacking antennas with the same polarization. For the 2-D beam steering antenna stack, the polarization diversity can be exploited using two independent systems, one for each polarization.
Although the polarization technique proposed in this article was only applied to a geodesic Luneburg lens, it is general and it can be applied to both flat PPW lenses and other versions of conformal beamformers.
6,598.2
2022-11-01T00:00:00.000
[ "Physics" ]
INVESTIGATION OF THE HERDING AND CONTRARIAN BEHAVIOUR OF INVESTORS

Introduction

Investor behavior is playing an increasingly important role in analyzing changes in financial markets. Much research has been done to find out which key factors and causes influence investors' financial decisions, and how decisions based on investors' mood or opinion affect their individual investment performance and the overall financial market atmosphere. Earlier research shows that investor behavior is an important element in understanding financial market volatility and predicting its causal links. It has been observed that one of the main factors influencing investors' decisions is the opinion formed by investors about specific financial instruments. Most investors are exposed to prevailing market views and make investment decisions by monitoring what the majority of investors are going to do. Such market participants ignore other market signals and try to replicate the actions of the majority of investors. This phenomenon is called herding behavior. However, there is another side, where investors disagree with the majority and make decisions based on an approach different from that of the majority of investors, in order to achieve the set investment goals. These two paradigms of investor behavior are widespread and have a significant impact on financial market fluctuations. For this reason, the research performed here focuses on examining the behavioral patterns of these two groups of investors. In order to make efficient use of their available financial resources, every investor should be able to accurately predict future changes in the financial market. Given that both the methods and the market itself are constantly changing, including the fact that investor behavior in financial markets is also constantly changing, the search for the most appropriate forecasting method is a never-ending process and still very relevant. Therefore, monitoring investor behavior and the investor perception that determines it can be one of the tools that contribute to the development of an effective forecasting method. Investor opinion, or in other words, investor sentiment, can be analyzed in a variety of ways. For this study, one of the most accurate approaches has been selected: artificial intelligence algorithms based on the functions of artificial neural networks. The use of these methods in the analysis of investor sentiment has not been extensively studied; for this reason, the study of investor behavior is practically useful. The aim of the research is to determine which pattern of investor behavior is more in line with real changes in the prices of financial instruments. To achieve the goal of the research, the following tasks have been set: 1. To perform the analysis of scientific literature on the topic of investor behavior; 2. To examine the main methods of measuring and analyzing the herding behavior of investors; 3. To estimate price forecasts for selected financial instruments using the deep learning method; 4. To perform the classification of social network records related to the financial instruments in question, using the sentiment classification algorithm; 5. To determine how the forecasts, investor sentiment data and the opinion formed on social networks correspond to the real changes in the prices of the selected financial instruments; 6.
To determine which pattern of investor behavior better corresponds to the real changes in the prices of selected financial instruments. The following research methods have been applied: analysis and generalization of scientific literature, prediction using a deep learning algorithm, the sentiment classification method, data mining using the Tweet Sentiment Visualization tool, data systematization and graphical representation of results, and comparative analysis of results. The structure of the paper is as follows. Section 1 presents the literature review on the topics of investor behaviour, decision-making, investor sentiment, and herding behaviour. Section 2 deals with the methodology of the research based on technical indicators, deep learning algorithms, and sentiment scores. Section 3 describes the results of the research. The Conclusions of the paper provide a generalization of the obtained results and future research directions.

Literature review

Classical financial theories are based on the assumption that investor behavior in the market is rational. However, research confirms that investors feel various emotions, misunderstand information and are affected by other irrational factors that determine their choices when making investment decisions under conditions of uncertainty and risk. With the expansion of financial markets and interest in investing, many theories have emerged that help the investor to develop successful investment strategies, but most of them are based on the assumption that the investor's decisions in the market are rational. Much modern research provides reason to believe that classical financial models and theories cannot reliably explain market price dynamics and the behavior of financial market participants. Classical financial theories cannot be applied effectively because the behavior of investors is irrational, influenced by various psychological factors, and does not meet the assumptions of classical theories. Studies have shown that an investor in an uncertain situation is not able to analyze all the information and all the factors, so he is forced to use different methods or simply behaves randomly due to the limited time to make a decision (Barber & Odean, 2013). One of the main factors in making a random decision is that an individual uses insufficient information to draw conclusions or ignores important information, knowing that he or she does not have enough time to analyze all available and relevant information. The investor understands and evaluates the result obtained in each case differently, based on the level of risk acceptable to him. This means that he makes decisions based on a separate assessment of each action rather than assessing the overall risk and result of the actions taken (Baker & Ricciardi, 2014). Thus, the loss of funds in a single operation overshadows the threat of a greater loss in a series of transactions, and the investor thus attaches greater importance to a single action that is less important than their totality. One of the main principles of the prospect theory is that the investor focuses more on the value of the amounts gained or lost than on the final result, and that investors experience a greater emotional burden from incurring losses than from making the same amount of profit (Lim et al., 2013; Ryu et al., 2017). According to this theory, investors focus more on the amounts they can gain or lose than on the end result of the investment.
The prospect theory is also related to declining marginal utility, according to which the first large amount earned will give the investor a much greater sense of pleasure than a second, identical one (Mak & Ip, 2017). Some examples of investors' irrational behavior were described as early as the 19th century. The herding effect was one of the first factors of irrational behavior that researchers observed, investigating its influence on decision-making. Gustave Le Bon's book The Psychology of the Crowd, published in 1895, highlighted the fact that, as part of a group, individuals begin to think collectively, and this thinking forces them to behave not as individuals but as other people do. Under conditions of uncertainty, when a limited amount of information is available, the only good option for an investor to make a decision may seem to be to monitor and imitate the behavior of other market participants, i.e. to follow the example of the majority. Such investor behavior in the financial market leads to speculative information bubbles, various anomalies and mass hysteria (Lapanan, 2018). The analysis of the works of various authors (Danarti et al., 2020; Shankar & Shashidhara, 2019) and their research results revealed that all the factors that determine the irrational behavior of investors can be divided into two groups:
- Cognitive factors. These factors are related to the investor's misperception of reality and incorrect assessment of the available information and the real situation. The stereotypical thinking thus formed leads to wrong decisions.
- Emotional factors. Such factors occur in most investors because they are determined by the characteristics of human nature.
As the situation becomes more complex and its uncertainty grows, it becomes difficult for investors to adequately understand and assess reality, and they begin to look for ways to simplify the current situation. Such simplifications created by investors hinder adequate decisions, as useful information is discarded and irrelevant information is valued. Kahneman and Tversky in 1974 found that investors predict the values of unknown variables based on available information, whereas when making a decision under conditions of uncertainty it is important to determine what changes are likely in the future (Barber & Odean, 2013). The application of probability theory requires an understanding of complex mathematical formulas, certain knowledge and skills, and the ability to process large amounts of data. A person instead begins to rely on a basic set of heuristic methods that help him make a decision using only the limited information he has, and simplifies the whole process. He begins to use his experience, intuition, and stereotypes. Many authors (Barber & Odean, 2013; Bikas & Kavaliauskas, 2010; Kariofyllas et al., 2017; Pascual-Ezama et al., 2014) single out the main situations that lead to incorrect evaluation of information and, subsequently, irrational decision making. The reason for such situations is prejudice, errors and superficial evaluation of information. The first situation arises when investors overestimate the information available. The second situation is when an investor misuses probability theory models and mathematical statistics formulas to assess the reliability and significance of new information (Mak & Ip, 2017). The third situation is when the description of an event, or the way information about it is provided, affects its assessment.
It can be observed that if the investor has a preconceived notion about the assessed event, he gives insufficient weight to, or even ignores, newly received information that contradicts his formed perception. In the last decades of the 20th century, financial professionals increasingly had to come to terms with a situation in which market phenomena could not be explained on the basis of classical theories alone. It has been found that individuals are influenced by preconceived notions, stereotypes, various perceptual illusions, information processing errors and emotions when making investment decisions. Clearly, these facts are inconsistent with rational behavior as assumed by classical models, and therefore the science of behavioral finance emphasizes the following key features of irrational behavior that characterize modern investors (Barber & Odean, 2013; Ryu et al., 2017; L. Zhou & Huang, 2019):
- Investors often do not choose a passive portfolio management strategy; they actively buy and sell securities, sometimes using outdated or insignificant information, relying on unreliable expert advice, misusing methods for forecasting future market prices, and not paying enough attention to diversifying existing investment portfolios.
- Investors do not apply the principle of maximum expected utility when valuing risky transactions. Investors are often biased in their assessment of the specifics of results that meet their expectations, as they face the fear of losing, incurring losses and overestimating their goals.
- Investors engage in forecasting expected results of indefinite size and form probabilistic and statistical models in the short term, which cannot be the basis for the use of mathematical statistics or probability theory.
- Investors can make any financial decisions depending on what tasks they have set and according to what strategy they are investing. When choosing between long-term and short-term investments, investors decide at random whether to invest in stocks or bonds, and no classical theory can predict this.
- Investors sometimes under-react to changes in the market or, vice versa, give them too much weight due to their inherent conservatism or exposure to representativeness. This determines changes in the amount of profits received by investors and changes in the prices of financial assets.
Irrational investor behavior becomes more pronounced as the uncertainty of the situation and the risk of action increase, regardless of the investor's age, education or experience. When the situation is not clear and there is uncertainty, they all behave in the same way and make the same mistakes, which are called investor biases. The investor's behavior in the financial market is greatly influenced by the emotional characteristics of his character, which are unavoidable in certain situations, especially when a decision has to be made in a very short time. Such psychological factors are widely studied in the scientific literature (Alrabadi et al., 2018; Andersen et al., 2020; Baker & Ricciardi, 2014; Rezaei & Elmi, 2018) and are identified as investor biases. They seek to explain why an investor behaves in one way or another in different situations. The main biases influencing investor decisions are:
- Overconfidence. A situation where an investor overestimates his abilities, has too much confidence in his opinion and creates the illusion that he knows best and controls the consequences of decisions.
This bias usually occurs when an investor manages to make a large profit, which then inspires more confidence in his own strength, because he thinks that the result is due to his knowledge and abilities (Baker & Ricciardi, 2014).
- Cognitive dissonance. This phenomenon is described as a mismatch between two investor cognitions, where a cognition can be any element of knowledge, such as an attitude, emotion, belief, or behavior. It occurs when a person receives information that clearly contradicts the information he already has; he then feels discomfort and tells himself a new truth in order to avoid it (Feldman & Lepori, 2016).
- Illusion of control. This bias appears when an investor believes he can influence uncontrollable phenomena. The more information an investor has, the greater the illusion of control he experiences (Chhapra et al., 2018).
- Conservatism. This bias occurs when investors are reluctant to adjust or revise their estimates even after receiving new and significant information (Rezaei & Elmi, 2018).
- Representativeness. Investors tend to give certain changes, reports or information more importance or a greater degree of probability than they are actually worth. An investor may not notice truly noteworthy events and continue to waste time on irrelevant information, which will negatively affect his investment decisions (Andersen et al., 2020).
- Confirmation. A tendency for the investor to interpret the available evidence in a way that confirms or is consistent with his views or beliefs. The investor pays less attention to evidence that refutes his view or is inconsistent with it (Rezaei & Elmi, 2018).
- Anchoring. A human tendency to attach to certain reference points, called anchors. When the investor focuses on such reference points, he may give less importance to long-term trends (Pahlevi & Oktaviani, 2018).
- Framing. An investor expecting some specific framing creates the illusion of certainty, thus making choices that cannot be explained on the basis of rationality. Investors are more likely to use positive frames than negative ones, as the latter are always associated with losses (Rezaei & Elmi, 2018).
- Optimism. Excessive optimism stems from overconfidence and includes the belief that future events are more likely to be positive than is realistic. When an investor is overly optimistic, he does not evaluate forecasts and the situation realistically; he thinks the result will only be positive (Chhapra et al., 2018; Pahlevi & Oktaviani, 2018).
In addition to the above-mentioned biases, others such as hindsight, loss aversion, self-control, endowment, self-attribution, and ambiguity aversion are distinguished. Other authors (Tekce et al., 2016; Ye et al., 2020; L. Zhou & Huang, 2019) mention further deviations in investor behavior in financial markets in addition to the biases listed above. One of them is the lack of diversification: risk is poorly distributed and invested in a small number of assets, which increases their riskiness. This is due to recklessness and underestimation of investments. Another factor is that investors tend to sell profitable positions too quickly and keep loss-making positions too long. This is a common phenomenon in financial markets, as investors avoid taking losses, so they hold unprofitable positions and expect their value to rise. Also, investors in profitable positions are in a hurry to sell, for fear of losing profits.
Another factor is related to the orientation of investors towards short-term investments and the prioritization of already known assets (Feldman & Lepori, 2016). They can make deals at a very unfavorable moment or react too sensitively to bad or good news. Investors are more likely to choose stocks that are in the home market because they are better known. The authors (Beer & Zouaoui, 2012; Yang & Wu, 2019; G. Zhou, 2017) mention in their works that irrational decisions by investors and changes in their moods cause market price bubbles, unusual price fluctuations, and market anomalies. Thus, the mood of investors is also a very significant factor that influences their actions. The more complex the situation, the stronger the influence of emotions and moods on actions. The dependence of investor behavior on their mood is, in behavioral finance, called investor sentiment. It is important to emphasize that investor behavior in the financial market is greatly influenced not only by the biases listed above, but also by an individual's personal characteristics, such as age, gender, and relationships with other people (Mak & Ip, 2017; Ryu et al., 2017). Over time, the investor's personal characteristics change, so his decisions and their nature also change; as investors gain more experience, their expectations and goals change, as do their risk perception and tolerance. All financial market participants have a significant influence on each other, thus shaping the behavior of a group of investors or, in other words, a herd. Herding behavior is a common phenomenon in financial markets and is defined in quite different ways. Chhapra et al. (2018) describe the behavior of a group of investors as a process in which investors imitate each other's actions and make financial decisions based on the opinion of other investors, the actions taken and market trends. As a result of this imitation of each other's actions, investors behave in the financial markets in a similar way, and this is called herding. Arjoon and Bhatnagar (2017) believe that herding behavior arises from an investor's inherent desire to belong to some group or community. According to Park and Kim (2017), such behavior is driven by deliberate copying of the actions of investors who achieve better investment results; Satish and Padmasree (2018) explain that the reason is investors seeking to reduce uncertainty or compensate for missing knowledge; and Calderon (2018) suggests that investors do so in an effort to reduce investment risk, expecting that following the example of other investors will reduce the likelihood of potential losses. Rezaei and Elmi (2018) identify the lack of experience as the main reason for the emergence of herding behavior. The financial behavior of a group of investors is divided into two theoretical models (Economou et al., 2018). The first model is when herding results from the deliberate actions of investors in replicating the market. This means that investors monitor the prevailing market trends and the actions taken by other investors, analyze the views and opinions of the majority of investors, and make decisions based on them. The second model is when herding results from similar information received by investors. In this case, investors use the same sources of information, examine the conclusions and recommendations of specific analysts, and perform similar analyses which, giving the same results, form certain principles of investor behavior.
Other authors (Park & Kim, 2017; Yao et al., 2014) divide herding behavior into rational and irrational. Herding has a major impact on investors' decisions, but at the same time it also affects the entire financial market very strongly. Calderon (2018) argues that herding behavior may be a major cause of adverse market phenomena such as growth in speculative investment, the formation of market bubbles, or anomalies. Lim et al. (2013) emphasize that herding can lead to the ignoring or loss of valuable information and the accumulation of irrelevant information, complicate investor decision-making, and increase the likelihood of irrational actions. Chakravarty and Ray (2020) note that, on the positive side, herding behavior can help the market grow. The pace of this market volatility is determined by the strength of herding behavior, which usually depends on direct communication between market participants. This is highly dependent on the quality of the information available to the investors whose actions are being tracked. However, a review of the scientific literature has shown that authors are more likely to highlight the negative consequences of herding behavior. For this reason, identifying and analyzing herding in financial markets is a crucial task to avoid these negative effects. One of the tools for identifying herding behavior is measuring and analyzing investor sentiment. Investor sentiment in behavioural finance is examined as a separate theory determining investors' decisions. Investor sentiment theory explains how investors' beliefs are formed, how investors' sentiment influences their investment decisions, and how this affects the entire financial market (He et al., 2019). People are emotional by nature, so their moods and emotions often become the basis for taking particular actions in all life situations, including investing. When an investor is in a bad mood, he is more prone to a pessimistic view of the future, while those who are in a good mood are, on the contrary, optimistic, that is, more likely to see positive market developments and expect favorable events. When an investor is optimistic, he will be more willing to take risks when investing than when he is in a bad mood. Investors cannot completely distance themselves from emotions; even if they try to make decisions based on models of technical analysis, they still have to make certain assumptions to determine a specific value (Bouteska, 2019; Ren et al., 2019). According to Pandey and Sehgal (2019), the overall level of investor optimism and pessimism, or social sentiment, changes over time. When the market is bullish, investors are more inclined to think optimistically, and when the market is bearish, they are more pessimistic. Kaustia and Rantapuska (2016) note that a pessimistic or optimistic mood can prevail in the financial market at any point in time. Since changes in optimism and pessimism affect changes in the market price of financial instruments, it can be argued that the market price of an asset does not reflect its value, but rather the psychological state of the market as a whole, which constantly fluctuates. The fluctuating mood of investors in the financial market is defined as investor sentiment. The positive sentiments held by investors reflect their optimistic attitude towards changes in the market, while the negative sentiments of investors express pessimistic forecasts and negative attitudes towards market volatility.
Understanding and analyzing investor sentiment when the market reaches its peak and when it reaches the bottom can help predict changes in the prices of financial instruments and identify herding behavior. The main methods of measuring investor sentiment, which help to analyze herding behaviour in the financial market, are the calculation of open position ratios, textual analysis of various sources, and the classification of sentiments.

Methodology

The aim of this study is to link investor behavior to possible numerical real and projected market data. The weekly change in the prices of 10 financial instruments was chosen as the measure of the financial market. Predictions for the same instruments were obtained using technical indicators and a deep learning algorithm; these predictions represent the rational thinking of the investor. The herding behavior of investors in this study is represented by the ratio of positions held by investors and sentiments on social networks (Figure 1). Different financial instruments from different markets have been selected for the study, such as the foreign exchange market, the cryptocurrency market, the commodities market and the stock market, in order to be able to evaluate investor behavior as widely as possible. The selected instruments are: the exchange rates of the Euro and the United States Dollar (EUR/USD), the British Pound and the US Dollar (GBP/USD), the Euro and the British Pound (EUR/GBP), the Australian Dollar and the Japanese Yen (AUD/JPY), and the Euro and the Japanese Yen (EUR/JPY) currency pairs; gold (XAU/USD); silver (XAG/USD); the French stock index (FRANCE 40); the United States stock index (US 500); and the BITCOIN cryptocurrency. All evaluations are performed twice: on 26 October 2020 and on 02 November 2020.

Technical indicators. In most cases, investors make decisions using technical indicators. For the study, we selected a simple moving average with a standard 14-day period, and Bollinger bands with a 14-day period and 2 standard deviations in both directions to limit the band. Technical indicators analyze past data and apply a variety of statistical econometric methods to describe signals.

Deep learning prediction. The Matlab deep learning algorithm based on LSTM was chosen for the research (MathWorks, n.d.-a). Deep Learning (DL) is a field of machine learning that is the innermost and most complex learning method in the overall hierarchy of Artificial Intelligence (AI) (Tang et al., 2018). During the operation of the LSTM algorithm, it is selected which of the analyzed data should be kept, learned and saved, and which should be removed from the process. In this way, important information is transmitted over a long sequence and a final result is obtained, which in this work is understood as a prediction of the analyzed data. To assess the accuracy of this forecasting model, the RMSE (Root Mean Squared Error) measure is calculated. The advantage of this criterion is that, by squaring, large deviations are further highlighted. Consequently, it is important that the resulting error is as small as possible in order to obtain the most accurate forecasting model. If the RMSE is zero, the actual and predicted values coincide completely (Boulemtafes et al., 2019). The algorithm provides forecasts of financial instrument prices several steps ahead. The first forecast and the trend for the whole week are used for the further analysis of the study (Figure 2).
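The indicators and the error measure described above are standard; a minimal sketch, assuming a daily closing-price series, is given below. The synthetic data and column names are illustrative only, not taken from the study.

```python
import numpy as np
import pandas as pd

def sma_bollinger(close: pd.Series, window: int = 14, n_std: float = 2.0) -> pd.DataFrame:
    """14-day simple moving average with Bollinger bands at +/- 2 standard deviations."""
    sma = close.rolling(window).mean()
    std = close.rolling(window).std()
    return pd.DataFrame({"sma": sma,
                         "upper": sma + n_std * std,
                         "lower": sma - n_std * std})

def rmse(actual, predicted) -> float:
    """Root Mean Squared Error; zero means actual and predicted values coincide."""
    a, p = np.asarray(actual), np.asarray(predicted)
    return float(np.sqrt(np.mean((a - p) ** 2)))

# Illustrative usage with a synthetic price series (e.g. EUR/USD-like levels).
rng = np.random.default_rng(0)
close = pd.Series(1.18 + 0.002 * rng.standard_normal(60).cumsum())
print(sma_bollinger(close).tail(3))
print("RMSE of a naive one-step forecast:", rmse(close[1:], close[:-1]))
```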
Open position ratios. Investor sentiments, or open position ratios, in the financial market are assessed and measured in various ways. Each investment platform has its own methodology for measuring what proportion of its clients hold buying positions and what proportion hold selling positions. This ratio is expressed as a percentage and therefore reflects the views of the majority and minority of investors. Such data is constantly updated, and the results change periodically depending on the opinion of investors. The ratio of the positions held by investors is provided for each financial instrument separately, in order to measure the sentiment around each specific instrument. Investor holdings are taken from the freely accessible websites of OANDA, Daily FX and Finance Attitude. OANDA (OANDA, n.d.) has a large number of clients, both individual investors and legal entities from around the world, so the results of the investor position ratio are extensive and updated every 20 minutes. OANDA provides a table of positions held by investors in 16 currency pairs. Daily FX (Daily FX, n.d.) is a free news and research website that is one of the leading sources for the currency, commodities, and stock index trading community. Daily FX provides IG Client Sentiment data, which shows the percentage of investors in buy and sell positions and provides market signals for a specific financial instrument. The signal provided has three types: bull, bear or mixed market. Finance Attitude (FinanceAttitude, n.d.) is a website that provides the latest financial, economic and business knowledge and the results of various analyses performed by qualified analysts or independent experts. Finance Attitude, like the sources mentioned above, provides data on investors' positions in the financial market, updated every 30 minutes. This page shows the sentiments of investors in as many as 55 currency pairs.

Sentiment scores from social media. The prevailing opinion on social media about individual financial instruments can influence investor behavior. The purpose of the textual analysis is to determine the sentiments of the records in question, i.e. to determine which investors evaluate a particular financial instrument positively and which negatively. The data from people's posts selected for textual analysis is collected from the social network Twitter using the Tweet Sentiment Visualization tool, which is freely available online. The Matlab classification algorithm, which is based on machine learning, is used to classify the sentiments of posts from the social network Twitter. The basic principles according to which the algorithm performs the classification (MathWorks, n.d.-b) are:
1. A dictionary of English word sentiments is used, which is divided into two sets: positive words and negative words.
2. The algorithm compares the words of each Twitter post with the words in the sentiment dictionary and gives each post a score from -1 to 1, which reflects which part of the words has a positive rating and which part a negative rating.
3. The average of the scores for the whole set of records about one financial instrument is calculated; it is called the sentiment score.
All of the methods listed above were selected for their suitability for the investor behavior survey. The methods of technical analysis, deep learning, investor positions, and textual analysis are very different in nature, but their properties and functions complement each other and are used to achieve the purpose of the study and to obtain the expected results.
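A minimal re-implementation of the scoring principles just listed (in Python rather than Matlab, and with a toy two-set lexicon standing in for the English sentiment dictionary) could look as follows; it is a sketch of the described logic, not the MathWorks code.

```python
POSITIVE = {"bullish", "gain", "strong", "buy", "up"}      # toy stand-ins for the
NEGATIVE = {"bearish", "loss", "weak", "sell", "down"}     # English sentiment lexicon

def post_score(post: str) -> float:
    """Score one post in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def sentiment_score(posts: list) -> float:
    """Average of the per-post scores: the instrument's sentiment score."""
    return sum(map(post_score, posts)) / len(posts)

tweets = ["EURUSD looking strong, time to buy",
          "weak data, expecting a sell off",
          "gold up again, bullish momentum"]
print(round(sentiment_score(tweets), 2))  # 0.33 for this toy sample
```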
Research Understanding investor behavior and being able to anticipate investors' future actions is an integral part of successful investment in all financial markets. The study focuses on two paradigms of investor behavior: herding behavior, where investment decisions are made by following the views of the majority and acting on that basis, and contrarian behavior, where decisions are made contrary to those of the majority. The study aims to link the manifestation of these different paradigms in investor behavior with the change in prices of different investment instruments and the opinion of investors in social networks. Data on investor behavior for the selected instruments are taken from three Internet sources: OANDA, Daily FX and Finance Attitude. These data measure, as percentages, how many investors hold long positions and how many hold short positions. The investor behavior data are compared with real price changes over time, price changes projected through deep learning, and public opinion pulled from the social network. The study covers two different time periods. The overall results of the investor behavior survey are presented in Table 1. The results show that the prediction and classification results obtained during the study are very diverse and ambiguous. Forecasting the prices of the selected financial instruments using deep learning showed that the forecasting method used is not reliable, as only half of all forecasts corresponded to real changes. The classification of social network posts related to the selected financial instruments showed that, in most of the cases examined, the sentiment of the majority of posts corresponded to the real direction of the change in the prices of the selected financial instruments; at the same time, an equal proportion of the posts with the opposite sentiment, which accounted for the smallest share of all posts, also corresponded to the real price changes. This means that, just like the forecasting method mentioned above, this classification method cannot pinpoint the direction of a price change and does not indicate which side of investor opinion is more reliable. Repeating the analysis in the next time period (Table 2) did not provide any new information. The different sources assessing investors' positions present quite different results for the financial instruments in question. In both periods, according to the OANDA source, most investors predicted an increase in the prices of all the financial instruments in question. According to all sources of investor positions, that is, according to OANDA, Daily FX and Finance Attitude, there were more cases in which the investors opposed to the majority were right about future price changes. Therefore, when assessing the sentiment results overall (Figure 3), it can be said that the forecasts of the investors who are contrarian to the majority are more accurate. Summarizing the results of the research on investor behavior, it can be stated that making investment decisions based on the forecasts of the majority of investors is not the more reliable approach. Based on the forecasting of financial instrument prices and the classification of social network posts, it can be stated that the methods used for these analyses cannot provide accurate forecasts of price changes, but they can serve the investor very well when combined with other forecasting methods.
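The statement that only half of the deep learning forecasts corresponded to real changes amounts to a directional hit rate. A minimal sketch of that comparison, with invented signs standing in for the ten instruments' weekly moves:

```python
# Signs of the weekly price changes for 10 instruments (illustrative only):
# +1 means the price rose over the week, -1 means it fell.
actual    = [+1, -1, +1, +1, -1, +1, -1, -1, +1, -1]
predicted = [+1, +1, -1, +1, -1, -1, -1, +1, +1, +1]

def directional_accuracy(actual, predicted):
    """Fraction of instruments where the forecast got the direction right."""
    hits = sum((a > 0) == (p > 0) for a, p in zip(actual, predicted))
    return hits / len(actual)

print(f"hit rate: {directional_accuracy(actual, predicted):.0%}")  # 50%
```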
Conclusions The chosen research methodology allowed us to combine and compare different approaches to the financial market: - market changes can be estimated from numerical historical data, - market changes can be attributed to the prevailing opinion, - market changes are driven by investor behavior. The numerical methods of technical analysis showed different rates of coincidence with price changes: the moving average only 20%, and the Bollinger bands 70%. The best match between the changes in the prices of financial instruments and the forecast prices was shown by the deep learning algorithm (80%), apparently because this algorithm is designed specifically for forecasting. Investor behavior is a key factor in examining financial market volatility. One of the main phenomena of investor behavior that has a significant impact on investors and their decisions is herding behavior. There are many reasons for this investor behavior, but the consequences are usually negative for both investors and the financial market. For this reason, there is a need to be able to identify and measure herding behavior. The main unit of measurement that helps to describe the attitude of a group of investors is the ratio of investor positions. The analysis of the data on the positions held by investors confirmed that herding can have a negative impact on investment results, as the opinion of the majority of investors is less often in line with the changes in the prices of financial instruments in the market. The classification of social network posts showed that the expression of opinion in social networks reflects the opinion of the social networking community, but it cannot be related to the opinion of investors. The research can help investors to analyze and apply data on investor sentiment, and it provides knowledge about the relationship between investor behavior and financial instruments in financial markets. The study could be extended with other forecasting methods in addition to those used here, by tuning the deep learning algorithm for more accurate forecasting and by including more sources of investor position data and social network posts to make the results even more accurate and specific. Disclosure statement The authors declare no conflict of interest.
Gut microbiome signatures linked to HIV-1 reservoir size and viremia control Background The potential role of the gut microbiome as a predictor of immune-mediated HIV-1 control in the absence of antiretroviral therapy (ART) is still unknown. In the BCN02 clinical trial, which combined the MVA.HIVconsv immunogen with the latency-reversing agent romidepsin in early-ART treated HIV-1 infected individuals, 23% (3/13) of participants showed sustained low levels of plasma viremia during 32 weeks of a monitored ART pause (MAP). Here, we present a multi-omics analysis to identify compositional and functional gut microbiome patterns associated with HIV-1 control in the BCN02 trial. Results Viremic controllers during the MAP (controllers) exhibited a higher Bacteroidales/Clostridiales ratio and lower microbial gene richness before vaccination and throughout the study intervention when compared to non-controllers. Longitudinal assessment indicated that the gut microbiome of controllers was enriched in pro-inflammatory bacteria and depleted in butyrate-producing bacteria and methanogenic archaea. Functional profiling also showed that metabolic pathways related to fatty acid and lipid biosynthesis were significantly increased in controllers. Fecal metaproteome analyses confirmed that baseline functional differences were mainly driven by Clostridiales. Participants with a high baseline Bacteroidales/Clostridiales ratio had increased pre-existing immune activation-related transcripts. The Bacteroidales/Clostridiales ratio as well as host immune-activation signatures inversely correlated with HIV-1 reservoir size. Conclusions The present proof-of-concept study suggests the Bacteroidales/Clostridiales ratio as a novel gut microbiome signature associated with HIV-1 reservoir size and immune-mediated viral control after ART interruption. Video abstract Supplementary Information The online version contains supplementary material available at 10.1186/s40168-022-01247-6.

Subjects that initiate ART within the first weeks after HIV-1 acquisition may temporarily achieve HIV-1 viremia suppression after ART interruption (ATI) [5]. Understanding the mechanisms behind immune-mediated viremia control after ATI is key to progress towards a functional HIV cure. Broader and higher-magnitude CTL (cytotoxic T-lymphocyte) responses against less diverse HIV-1 epitopes [6,7], in the context of favorable HLA class I genotypes [8], and a smaller HIV-1 reservoir size [9] have all been related to such post-treatment HIV-1 control. There is indirect evidence that the gut microbiome might also contribute to immune-mediated control of HIV-1 replication [10,11]. Vaccine-induced gut microbiome alterations, consisting of lower bacterial diversity and a negative correlation between richness and CD14+ DR− monocytes in colorectal intraepithelial lymphocytes, have recently been associated with HIV/SIV (SHIV) protection in a non-human primate challenge study after mucosal vaccination with HIV/SIV peptides, modified vaccinia Ankara-SIV and HIV-gp120-CD4 fusion protein plus adjuvants through the oral route [12]. In the HVTN 096 trial [13], where the impact of the gut microbiota on HIV-specific immune responses to a DNA-prime, poxvirus-boost strategy in human adults was assessed, baseline and vaccine-induced gp41-reactive IgG titers were associated with different microbiota community structures, in terms of richness and composition [14].
In particular, co-occurring bacterial groups, such as Ruminococcaceae, Peptoniphilaceae, and Bacteroidaceae, were associated with the vaccine-induced IgG response and inversely correlated with pre-existing gp41-binding IgG antibodies, suggesting that the microbiome may influence the immune response and vaccine immunogenicity [14]. Evidence from another study has also shown that HIV vaccine-induced CD4+ T and B cell responses can be imprinted by prior exposure to cross-reactive intestinal microbiota-derived antigens [15]. Further evidence emerged from studies of typhoid Ty21 [16], rotavirus [17] and oral polio virus, tetanus-toxoid, bacillus Calmette-Guérin, and hepatitis B immunization strategies [18], in which specific gut microbiome signatures (Bifidobacterium, Streptococcus bovis, and Clostridiales, respectively) positively correlated with the vaccine-induced immune response. In the absence of immune correlates of viral control, HIV cure trials usually incorporate an ART interruption phase to address the efficacy of a therapeutic intervention [19]. Data on the role of gut microbiome composition in the responsiveness to a curative strategy and its relationship with viral control after ART interruption are lacking. The BCN02 study [20] was a single-arm, proof-of-concept "kick and kill" clinical trial evaluating the safety and the in vivo effects of the histone deacetylase inhibitor romidepsin, given as a latency-reversing agent [21], in combination with a therapeutic HIV vaccine (MVA.HIVconsv) in a group of early-ART treated HIV-1-infected individuals [22,23]. During a monitored ART pause (MAP), 23% of individuals showed sustained viremia control up to 32 weeks of follow-up. Here, we aimed to identify salient compositional and functional gut microbiome patterns associated with control of HIV-1 viremia after ART interruption in the "kick and kill" strategy used in the BCN02 study.

Study design This was a sub-study derived from the BCN02 clinical trial (NCT02616874). BCN02 was a multicenter, open-label, single-arm, phase I, proof-of-concept clinical trial in which 15 HIV-1-infected individuals with sustained viral suppression who started ART within the first 6 months after HIV transmission were enrolled to evaluate the safety, tolerability, immunogenicity, and effect on the viral reservoir of a kick and kill strategy consisting of the combination of HIVconsv vaccines with romidepsin, given as a latency-reversing agent (LRA) [20] (Additional file 1: Fig. S1a). Romidepsin is a histone deacetylase inhibitor (HDACi), developed as an anti-cancer drug, which has been shown to induce HIV-1 transcription both in vitro and in vivo [21,24]. The HIVconsv immunogen was constructed by assembling 14 highly conserved regions derived from the HIV-1 genes Gag, Pol, Vif, and Env, alternating, for each domain, the consensus sequence of the four major HIV-1 clades A, B, C, and D, and was delivered by the non-replicating poxvirus MVA vector [20]. The fifteen individuals enrolled in the BCN02 trial (procedures for recruitment and eligibility criteria are detailed elsewhere [20]) were immunized with a first dose of MVA.HIVconsv (MVA1, 2 × 10^8 pfu intramuscularly), followed by three weekly doses of romidepsin (RMD1-3, 5 mg/m^2 BSA intravenously) and a second boost of MVA.HIVconsv (MVA2, 2 × 10^8 pfu intramuscularly) 4 weeks after the last romidepsin infusion (RMD3).
To assess the ability for viral control after ART interruption, participants underwent a monitored ART pause (MAP), starting 8 weeks after the second vaccination (MVA2), for a maximum of 32 weeks or until any ART resumption criterion was met (plasma viral load > 2000 copies/ml, CD4+ cell counts < 500 cells/mm^3 and/or development of clinical symptoms related to an acute retroviral syndrome [20]). The study was conducted between February 2016 and October 2017 at two HIV-1 units of university hospitals in Barcelona (Hospital Germans Trias i Pujol and Hospital Clínic) and a community center (BCN-Checkpoint, Barcelona). The microbiome sub-study concept, design, and patient information were reviewed and approved by the institutional ethical review board of the participating institutions (Reference Nr AC-15-108-R) and by the Spanish Regulatory Authorities (EudraCT 2015-002300-84). Written informed consent was provided by all study participants in accordance with the principles expressed in the Declaration of Helsinki and the local personal data protection law (LOPD 15/1999).

Sample disposition and data analysis Fourteen participants from the BCN02 trial consented to participate in the BCN02-microbiome study; 1 was excluded due to a protocol violation during the MAP, and 13 were included in the multi-omics analyses. Twelve of the thirteen participants who finalized the "kick and kill" intervention completed the MAP phase (n = 3 controllers and n = 9 non-controllers), and one subject (B07) did not enter the MAP period due to predefined immune-futility criteria and the absence of protective HLA class I alleles associated with natural HIV-1 control (Additional file 1: Fig. S1b). Based on the similarity of the gut microbiome with non-controllers at study entry and over the "kick and kill" intervention, participant B07 was included in the non-controller arm to increase the statistical power of this microbiome sub-study. Fecal specimens were collected longitudinally during the BCN02 intervention period at study entry (pre-Vax), 1 week after the 1st vaccination (MVA1), 1 week after RMD3 (RMD) and 4 weeks after the 2nd vaccination (MVA2). Samples were also collected over the MAP period (from 4 to 34 weeks after ART interruption) and 24 weeks after ART resumption (Additional file 1: Fig. S1a). All samples were processed for shotgun metagenomics analysis. Taxonomic classification, microbial gene content and functional profiling were inferred using MetaPhlAn2 [25], the IGC reference catalog [26], and HUMAnN2 [27], respectively. Sequencing analysis and quality control of the metagenomics data are provided in Additional file 1: Supplementary results. To facilitate interpretation, the longitudinal time points were schematically grouped into three phases (Additional file 1: Fig. S2a). Fecal material, peripheral blood mononuclear cells (PBMC), and plasma samples were also collected at baseline to assess the fecal metaproteome, host transcriptome profiles and soluble inflammation biomarkers, respectively (Additional file 1: Fig. S2b). Microbial proteins from fecal samples were measured by mass spectrometry, and protein identification was performed using the Mascot search engine (v2.4, Matrix Science) and Scaffold Q+ software (v4.9.0, Proteome Software) [28]. PBMC transcriptomes were evaluated using RNA sequencing, and sequence reads were aligned to the human reference genome with STAR v2.5.3a [29]. Read count estimation was inferred using RSEM v1.3.0 [30] and differential expression analysis performed with DESeq2 [31].
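As a concrete illustration of the two simplest microbiome summaries used downstream, the sketch below computes the Bacteroidales/Clostridiales ratio and the Shannon alpha-diversity index from an order-level relative-abundance table; the abundances and sample names are hypothetical, not BCN02 data.

```python
import numpy as np
import pandas as pd

# Hypothetical relative abundances (%) at the order level for three samples.
taxa = pd.DataFrame(
    {"S1": [35.0, 40.0, 5.0, 20.0],
     "S2": [55.0, 20.0, 2.0, 23.0],
     "S3": [25.0, 50.0, 8.0, 17.0]},
    index=["Bacteroidales", "Clostridiales", "Methanobacteriales", "Other"],
)

# Bacteroidales/Clostridiales ratio, computed per sample.
bc_ratio = taxa.loc["Bacteroidales"] / taxa.loc["Clostridiales"]

def shannon(abundances):
    """Shannon diversity H = -sum(p_i * ln p_i) over the nonzero taxa."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

alpha = taxa.apply(shannon, axis=0)
print(pd.DataFrame({"B/C ratio": bc_ratio, "Shannon H": alpha}))
```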
Plasma proteins were estimated using the Proximity Extension Assay based on the Olink Inflammation Panel [32]. Correlations between 'omic' datasets were computed using Spearman's correlation coefficients, and the integrative multi-omics analysis was based on the mixOmics R package [33]. A detailed description of the wet-lab procedures, bioinformatic methods and statistical analysis of the metagenome, metaproteome, transcriptome, soluble plasma markers and multi-omics data is available in Additional file 1: Supplementary methods.

Patient characteristics In this microbiome sub-study, we evaluated 13 participants of the BCN02 study. Three had sustained low-level HIV plasma viremia (< 2000 copies/ml) during 32 weeks of MAP (viremic controllers), whereas 9 developed HIV-1 RNA rebound (> 2000 copies/ml) during the MAP (non-controllers). One additional subject (B07) did not qualify for the MAP due to pre-specified immune futility criteria and the absence of protective HLA alleles, and therefore was also considered a non-controller in this microbiome study (Additional file 1: Fig. S1b). Study participants were predominantly MSM (92%) of Caucasian ethnicity (92%), with a median age of 42 years and a median body mass index of 22.9 kg/m^2 (Table 1). The median baseline CD4+ T cell count was 728 (416-1408) cells/mm^3 and the median CD4/CD8 T cell ratio was 1.4 (0.97-1.9). All subjects had been on integrase strand-transfer inhibitor-based triple ART for > 3 years, begun during the first 3 months after HIV-1 infection. Median baseline HIV-1 proviral DNA was 140 copies/10^6 CD4+ T cells, being numerically lower in controllers than in non-controllers (65 vs 165 copies/10^6 CD4+ T cells, p = 0.29).

Baseline gut-associated Bacteroidales/Clostridiales ratio and lower microbial gene richness discriminate between viremic controllers and non-controllers Viremic controllers had significantly higher Bacteroidales levels than non-controllers at study entry (pre-Vax, p = 0.007) and during the whole intervention phase (MVA1, p = 0.049; RMD, p = 0.049; and MVA2, p = 0.014) (Fig. 1a), as well as lower baseline Clostridiales abundance (p = 0.014) (Fig. 1b). Accordingly, the Bacteroidales/Clostridiales ratio remained significantly higher in controllers at study entry and throughout the intervention (pre-Vax, p = 0.007 and MVA2, p = 0.028) (Fig. 1c). Controllers were also significantly depleted in archaeal members of the methanogenic order Methanobacteriales (Additional file 1: Fig. S4). More detailed analyses at lower taxonomic levels within the orders Bacteroidales and Clostridiales showed that controllers were mainly depleted in Clostridiales species, such as Eubacterium spp. and Subdoligranulum spp., whereas the Bacteroidales species Prevotella copri was significantly higher after the three romidepsin doses (Additional file 1: Fig. S5), with such trends maintained throughout the intervention (Additional file 1: Figs. S5 and S6). Viremic controllers also had lower microbial gene richness than non-controllers at study entry (p = 0.028) and MVA1 (p = 0.049), although these differences lost statistical significance in the RMD and MVA2 assessments (Fig. 2a). Alpha diversity also remained numerically lower in controllers, but the differences were not statistically significant (Fig. 2b). Controllers exhibited lower beta-diversity, particularly at study entry (p = 0.004; Additional file 1: Fig. S7), and showed less intra-host longitudinal evolution than non-controllers (p = 0.001) (Fig. 2c).
Whereas the gut microbiome composition of controllers was significantly different from that of non-controllers (p = 0.001), no significant longitudinal differences were observed across time points (p = 0.815), suggesting that the combined intervention did not significantly alter the gut microbiome composition (Fig. 2c). Of note, the results did not change after removing B07 from the non-controller arm (Additional file 1: Fig. S8). No differences in Bacteroidales and Clostridiales abundances or microbial diversity were observed during the MAP and after ART reinitiation (Additional file 1: Supplementary results). Furthermore, longitudinal profiling of metabolic pathways associated with Bacteroidales, Clostridiales, and archaea showed that controllers were mainly enriched in functions related to fatty acid biosynthesis, whereas functions related to methanogenesis and carbohydrate biosynthesis were overrepresented in non-controllers (Additional file 1: Supplementary results and Figs. S9-S12). Taken together, these data showed that the differences between controllers and non-controllers mainly emerged from the resident microbial communities, before any intervention was started in the BCN02 study. Subsequent analyses were thus focused on further characterizing discriminant signatures at study entry.

Increased Bacteroidales/Clostridiales ratio in viremic controllers negatively correlated with longitudinal HIV-1 viral reservoir size The Bacteroidales/Clostridiales ratio inversely and significantly correlated with longitudinal total CD4+ T cell-associated HIV-1 DNA measured at study entry (rho = − 0.6, p adj = 0.03) and over the intervention, whereas an opposite trend was observed for gene richness (rho = 0.65, p adj = 0.01 at study entry) (Fig. 3a). A similar trend was observed for cell-associated (CA) HIV-1 RNA (Bacteroidales/Clostridiales: rho = − 0.7, p adj = 0.01; gene richness: rho = 0.61, p adj = 0.0 at study entry), although stronger correlations were found at RMD and MVA2 (Fig. 3b). In both assessments, alpha-diversity (Shannon index) exhibited a weak positive correlation with the viral reservoir, with the correlations not being significant. Moreover, the baseline Bacteroidales/Clostridiales ratio and gene richness showed a strong negative correlation (rho = − 0.87, p adj = 0.0001) (Fig. 3a, b), in line with the trends observed in the microbiota characterization. In the longitudinal comparison, controllers tended to display a lower viral reservoir size (Fig. 3c, d), although the differences were statistically significant only for CA HIV-1 RNA at RMD and MVA2 (p = 0.03) (Fig. 3d). An additional set of clinical and vaccine-response variables was screened for association with the gut microbial signatures. The absolute CD4+ T cell count before ART initiation was the only factor significantly associated with the Bacteroidales/Clostridiales ratio (rho = 0.65, p adj = 0.01) and gene richness (rho = − 0.62, p adj = 0.02), whereas a strong and inverse correlation was found between the Shannon index and the CD4/CD8 ratio at BCN02 study entry (rho = 0.9, p adj = 2.83e−05) (Additional file 1: Fig. S13).

Increased baseline immune activation and inflammatory response transcripts in viremic controllers Full-PBMC gene expression analysis detected a total of 27,426 transcripts at baseline, after filtering for low-expressed genes (Additional file 2: Dataset S3).
Using DESeq2 [31], a total of 31 differentially expressed genes (DEGs) were identified (log2 FoldChange = 0 and p adj < 0.1), of which 15 and 16 were upregulated in controllers and non-controllers, respectively (Fig. 5a and Additional file 3: Table S1). Hierarchical clustering based on the transcriptional DEG profiles showed that controllers grouped together, while non-controllers separated into two distinct expression groups (Additional file 1: Fig. S14a). The genes upregulated in non-controllers included 11 transcripts with unknown function (Additional file 3: Table S2), which were excluded from downstream analyses. The genes upregulated in controllers (Fig. 5b and Additional file 1: Fig. S14b-c), such as myeloperoxidase (MPO), defensin alpha 1 and 4 (DEFA1, DEFA4), and neutrophil elastase (ELANE) (Additional file 3: Table S2), are known to be implicated in immune response signaling and the regulation of inflammatory processes [34,35]. Gene Ontology (GO) analysis confirmed that the genes upregulated in controllers were enriched in functions related to immune system activation, such as neutrophil-mediated immunity, leukocyte degranulation and the antimicrobial humoral response (Fig. 5c and Additional file 3: Table S3). Moreover, a group of inflammation-related plasma proteins was significantly increased in controllers at baseline (Additional file 1: Supplementary results and Fig. S15).

Integration analysis between Bacteroidales/Clostridiales ratio, host immune activation transcripts, bacterial proteins, and HIV-1 reservoir size The baseline Bacteroidales/Clostridiales ratio positively correlated (p adj < 0.05) with differentially expressed genes involved in the inflammatory response and immune system activation, including DEFA1, DEFA4, TOP1MT, CTSG, MPO, AZU1 and ELANE (Fig. 6a; Spearman rho and adjusted p values are given in Additional file 2: Dataset S5). Additional correlation and enrichment analyses extended to the full set of host transcripts supported this observation (Fig. 6b), also showing significant associations with the baseline viral reservoir (Additional file 1: Supplementary results and Fig. S16). In the integrated analysis of the metagenomic, transcriptomic, and metaproteomic data for the identification of discriminating signatures between controllers and non-controllers, Bacteroidales and Clostridiales were clearly separated along the components (Additional file 1: Fig. S17a). While Bacteroidales clustered with and positively associated with immune activation transcripts (MPO, AZU1, ELANE, TCN1, DEFA1, BPI, DEFA4) as well as proteins from Ruminococcus, Blautia, and Prevotella, the order Clostridiales inversely correlated with such features (Additional file 1: Fig. S17a-b). Multi-omics correlations at a lower taxonomic scale, including the viral reservoir data, confirmed that Bacteroidales species (B. dorei and B. eggerthii) inversely correlated with HIV-1 DNA levels, whereas members of Clostridiales (S. unclassified, D. formicigenerans, and E. siraeum) positively correlated with both HIV-1 DNA and CA HIV-1 RNA (Fig. 6c). In turn, the viral reservoir size negatively correlated with genes involved in neutrophil-mediated immunity and host defense (Fig. 6c and Additional file 2: Dataset S6). Together, these data showed positive associations between Bacteroidales taxa and host transcripts related to immune system activation, and in turn a negative correlation with the HIV-1 viral reservoir, whereas members of the order Clostridiales showed the opposite trend.
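The correlation analyses above pair Spearman's rho with Benjamini-Hochberg (BH) adjustment of the p values. The following sketch reproduces that procedure on simulated data: the stand-ins for the Bacteroidales/Clostridiales ratio and the four transcripts are randomly generated, so the printed rho and adjusted p values are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p values (false discovery rate)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    # Enforce monotonicity, working back from the largest p value.
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty_like(p)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

rng = np.random.default_rng(1)
n = 13                                  # participants in this sub-study
bc_ratio = rng.lognormal(0.0, 0.5, n)   # stand-in Bacteroidales/Clostridiales
genes = {g: rng.normal(size=n) for g in ["MPO", "ELANE", "DEFA1", "AZU1"]}

results = {g: spearmanr(bc_ratio, x) for g, x in genes.items()}
p_adj = bh_adjust([r.pvalue for r in results.values()])
for (g, r), q in zip(results.items(), p_adj):
    print(f"{g}: rho = {r.correlation:+.2f}, adjusted p = {q:.3f}")
```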
The observations above, which emerged from analyzing either the ratio or individual taxa, further support potential pre-existing interactions between intestinal Bacteroidales species, host immune activation, and reservoir size in viremic controllers.

Discussion In this proof-of-concept study, a longitudinal multi-omics analysis identified the Bacteroidales/Clostridiales ratio as a novel gut microbiome signature associated with HIV-1 reservoir size and viremic control during a monitored ART pause. Individuals with a high Bacteroidales/Clostridiales ratio showed gene expression signatures related to immune activation, particularly neutrophil-mediated immunity and the antimicrobial humoral response, which negatively correlated with the viral reservoir size. Our findings largely arise from unsupervised analyses in which many other signatures could have emerged, especially given the relatively low number of individuals analyzed. However, they are internally coherent and consistent with a theoretical framework in which increased inflammation might contribute to immune-mediated HIV-1 control. They also suggest a putative biomarker for safer ART interruptions in HIV cure studies. The gut microbiome of controllers was enriched in pro-inflammatory species, such as P. copri [36], and depleted in bacteria traditionally associated with the maintenance of gut homeostasis through the production of SCFAs [37], including R. intestinalis and Subdoligranulum spp. The lower microbial diversity and gene richness found in controllers were consistent with previous work from our group in people living with HIV [38], as well as with other studies [39], in which higher gene richness was associated with increased levels of butyrate-producing bacteria and methanogenic archaea. The microbial functional enrichment in lipid and fatty acid biosynthesis in controllers might reflect mechanisms of lipopolysaccharide biosynthesis and the production of inflammatory mediators [40] mediated by members of Bacteroidales [41]. No discernible longitudinal variations were observed in the gut microbiome of the BCN02 participants, in line with previous evidence from oral typhoid immunization [16]. Of note, the gut microbiome in healthy populations has been described as generally resilient to perturbations [42]. Taken together, these observations suggest a trend toward the maintenance of relative stability in the gut microbiome composition upon vaccination. Baseline metaproteome analysis confirmed that the functional differences between controllers and non-controllers were mainly driven by Clostridiales, which were actively producing microbial proteins in both groups, albeit in distinct functional contexts. Further discriminating baseline signatures, linked to increased immune system activation and inflammatory response in controllers, emerged from the PBMC transcriptome and inflammation-related plasma protein profiling. It also emerged that the Bacteroidales/Clostridiales ratio inversely correlated with the viral reservoir size in terms of HIV-1 DNA and CA HIV-1 RNA. Although controllers did not display a significantly lower viral reservoir size compared to non-controllers, these associations were consistent with previous studies suggesting a role of a low viral reservoir in ART interruption outcomes [9]. The results of our independent analyses (i.e., the different omics approaches), as well as their integration, suggest that baseline immune activation, potentially associated with a microbial shift toward pro-inflammatory bacteria and a lower viral reservoir, may contribute to sustained post-ART-interruption HIV-1 control.
Fig. 5 Baseline functional enrichment in levels of immune activation and inflammatory response in viremic controllers. a Volcano plot of differentially expressed genes between controllers and non-controllers at baseline (pre-Vax), with adjusted p value < 0.1 (violet dots), adjusted p value < 0.05 (red dots), and log2 (fold change) > 2 with adjusted p value < 0.05 (green dots). Gray dots represent genes not displaying statistical significance (BH-adjusted p value > 0.1). The log2 fold change on the x-axis indicates the magnitude of change, while the −log10 (adjusted p value) on the y-axis indicates the significance of each gene. b Violin plots showing the relative expression levels (rlog, regularized log transformation) of differentially expressed genes with functional annotation. c Gene Ontology (GO) enrichment analysis of upregulated genes in controllers. On the y-axis, only representative enriched GO terms (biological process) are reported (terms obtained after redundancy reduction by REVIGO). The x-axis reports the percentage of genes in a given GO term, expressed as 'gene ratio'. The color key from blue to red indicates low to high Bonferroni-adjusted log10 p values. Dot sizes are based on the 'count' (genes) associated with each GO term. Significantly enriched GO terms, the number of genes associated with each GO term, and adjusted p values are provided in Additional file 3: Table S3.

While there is evidence suggesting a strong impact of the gut microbiota composition on the host immune system and inflammatory status [43], the mechanistic basis of how microbial communities may interact with the viral reservoir and, in turn, exert immunomodulatory effects on HIV-1 control during ART interruption remains to be delineated. We speculate that a pre-existing, altered balance of 'beneficial' gut microbial groups, such as Clostridiales, and a concomitant overabundance of pro-inflammatory bacteria would boost host immune system activation, thus triggering prompt control of the rebounding virus, as observed in controllers.

Fig. 6 Integrative analysis of baseline gut microbial signatures, immune activation-related transcripts, bacterial proteins and HIV-1 reservoir. a Spearman's correlations between the Bacteroidales/Clostridiales ratio and DEGs (annotated transcripts). The color and size of the circles indicate the magnitude of the correlation. White asterisks indicate significant correlations (*p < 0.05; **p < 0.01; ***p < 0.001, Benjamini-Hochberg adjustment for multiple comparisons). b Network visualizing significant Spearman's correlations between the Bacteroidales/Clostridiales ratio and the transcripts involved in the enrichment analysis described in Additional file 3: Table S4. Transcripts are represented as vertices, and border width is proportional to transcript expression (log2 |cpmTMM_w0 + 1|) in controllers. Edge width indicates the magnitude of the correlation. c Network showing Spearman's correlations between the viral reservoir (CA HIV-1 RNA and HIV-1 DNA), bacterial species within Bacteroidales and Clostridiales, host transcripts correlated with the Bacteroidales/Clostridiales ratio, and bacterial proteins (p ≤ 0.025). Features are shown as vertices and colored by 'omic' dataset. Edge width indicates the magnitude of the correlation coefficient. Protein-associated bacterial genera are reported in parentheses. In all panels, positive and negative correlations are indicated in blue and red, respectively.
Abbreviations: DEGs, differentially expressed genes between controllers and non-controllers; R, Ruminococcus; Ps, Pseudoflavonifactor; pre-Vax, baseline timepoint (1 day before the first MVA vaccination).

In support of this hypothesis, an increased abundance of members of Clostridiales was previously associated with neutrophilia and lower poliovirus and tetanus-hepatitis B vaccine responses [18]. Moreover, baseline transcriptional pro-inflammatory and immune activation signatures have been suggested as potential predictors of increased influenza [44], systemic lupus erythematosus [45] and hepatitis B [46] vaccine-induced immune responses, with weaker responses in the elderly [44,46]. It is then reasonable to postulate that immune activation prior to vaccination, together with microbiome-associated factors, may affect vaccine outcomes. The results obtained in this exploratory study are intended to generate hypotheses and potentially contribute to establishing a coherent framework for subsequent confirmatory studies. In this context, several studies have suggested plausible mechanisms by which the microbiota could modulate immune responses to vaccination in humans [47]. Although the causal links remain to be fully deciphered, the ability of the microbiota to interact directly with immune cells in the gut, as well as to regulate the systemic availability of critical metabolites in distal locations, is well established [47]. A few studies have also attempted to investigate the exact role played by the gut microbiota in HIV control. Despite the persistence of gut dysbiosis in HIV-1 infected patients after ART initiation (reduced gut microbial richness and depletion of anti-inflammatory bacteria [48]), microbial richness may be restored by prolonged ART treatment [49]. Also, elite controllers, who spontaneously control HIV replication without ART, showed higher gut microbial richness, with a metabolic profile resembling that of HIV-uninfected adults [50], and increased Th17 cells in the gut mucosa compared to ART-treated patients [51]. In line with our results, a recent study showed enriched levels of Bacteroidetes genera, such as Bacteroides and Prevotella, in patients with sustained HIV control during ART interruption after receiving a dendritic cell-based HIV-1 immunization [52]. Moreover, additional evidence from HIV virologic controllers receiving a TLR7 agonist before ART interruption, with baseline enrichment in P. copri and a negative association between the abundance of Ruminococcus gnavus and the time to viral rebound, suggested a potential impact of certain microbiome species on HIV persistence [53]. Together, such observations illustrate the exceptional challenge of deconvoluting the complex dynamics between specific microbial species or byproducts and HIV-1 control. Our study has several limitations. Due to the eligibility criteria of the parental BCN02 study, the sample size was limited, and we were unable to include a control arm without the intervention. Our considerations were therefore narrowed to three individuals showing viremic control during ART interruption. In addition, given the small sample size and the limitations of non-parametric methods coupled with multiple testing correction [54], we used a reasonable threshold of a 10% false-discovery rate (adjusted p value < 0.1) to prevent false negatives. Another fundamental issue to consider is that this sub-study was largely restricted to a Caucasian/MSM population.
Large-scale studies have revealed broad differences in gut microbiome composition [55] and modes of HIV transmission [56] across distinct geographical populations, leading to dramatic variations in the characteristics of study participants at study entry [57]. MSM sexual behavior itself has been associated with a Prevotella-rich/Bacteroides-poor microbiota profile and increased microbial diversity [58], suggesting that failure to control for this factor could confound data interpretation. Yet most of the HIV-associated microbiome studies published to date, including this one, have largely focused on single populations or individuals [57], raising the question of the generalizability of the results to different population settings. Ensuring diversity among study participants (i.e., in ethnic group, sexual behavior, or transmission mode) in future microbiome studies is therefore critical both to identify population-specific features and to achieve broader-reaching biological conclusions. Bearing these limitations in mind, our results should be interpreted with caution, emphasizing the need for independent validation in randomized and placebo-controlled trials to assess potentially unmeasured confounders and provide further perspectives on the factors that might induce gut microbial shifts. Upcoming analyses in larger longitudinal trials, including the recently reported AELIX-002 trial [59], where fecal samples have been stored longitudinally, are expected to validate our results. These preliminary findings might have important implications for the design of HIV-1 cure intervention trials that include ART interruption. As proposed for other therapeutic areas [60], microbiome-associated predictive patterns could help optimize patient stratification, resulting in more targeted studies and higher efficacy of HIV-1 interventions. In addition, if a given resident microbial community is indeed found to be predictive of viral control during ART interruption, then modulating participants' gut microbiota before immunization might potentially impact vaccine responsiveness and, ultimately, clinical outcomes. While host genetics and other vaccine-associated factors are less amenable baseline predictors, the gut microbiome is potentially modifiable and even transferable to another host. Strategies manipulating the gut microbiota composition and its by-products via prebiotic and/or probiotic administration [61] or microbiota engraftment following fecal microbiota transplantation [62] are under intense evaluation [63], albeit with several limitations.

Conclusions In this exploratory study, we identified pre-existing gut microbial and immune activation signatures as potential predictors of sustained HIV-1 control in the absence of ART, providing a potential target for future treatment strategies and opening up new avenues for a functional HIV cure.

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1186/s40168-022-01247-6. Figure S2. Overview of sample disposition for multi-omic analysis. Figure S3. Taxonomic classification of fecal samples at the order level. Figure S4. Differential abundance in Methanobacteriales between controllers and non-controllers. Figure S5. Linear Discriminant Analysis (LDA) effect size (LEfSe) at the species level. Figure S6. Longitudinal feature-volatility analysis of Bacteroidales and Clostridiales species. Figure S7. Bray-Curtis dissimilarity index between controllers and non-controllers. Figure S8.
Gut microbiome profiling excluding the B07 participant from the non-controller arm. Figure S9. Differentially abundant metabolic pathways from Bacteroidales and Clostridiales at study entry. Figure S10. Differential metabolic pathways between controllers and non-controllers. Figure S11. Longitudinal variation of differentially abundant pathways over the trial. Figure S12. Differential archaeal metabolic pathways between controllers and non-controllers. Figure S13. Spearman's correlation between clinical data, vaccine response and gut microbial variables. Figure S14. Differentially expressed PBMC host genes between controllers and non-controllers at baseline. Figure S15. Protein inflammation markers from controllers and non-controllers at baseline. Figure S16. Functional enrichment of transcripts correlated with the Bacteroidales/Clostridiales ratio and the viral reservoir. Figure S17. Integrated analysis of microbiome, metaproteome and transcriptome data. Additional file 3: Table S1. List of differentially expressed genes between controllers and non-controllers at baseline (adjusted p-value < 0.1 and log2 FoldChange = 0). Table S2. Detailed information on differentially expressed genes (DEGs) between controllers and non-controllers at baseline (adjusted p-value < 0.1 and log2 FoldChange = 0). Table S3. GO terms from the enrichment analysis (biological process) based on upregulated genes in viremic controllers (n = 15 transcripts). Table S4. Enriched GO biological process terms of host transcripts significantly correlated with the Bacteroidales/Clostridiales ratio. Table S5. Enriched GO biological process terms of transcripts (n = 453) significantly correlated with the Bacteroidales/Clostridiales ratio and the viral reservoir (CA HIV-1 RNA and HIV-1 DNA). Table S6. Summary of the shotgun metagenomics sequencing yield from the longitudinally collected fecal samples. Statistics of reads mapped to the integrated gene catalog (IGC) are also shown.
Developing a multi-functional sensor for cell traction force, matrix remodeling and biomechanical assays in self-assembled 3D tissues in vitro Cell-matrix interactions, mediated by cellular force and matrix remodeling, result in a dynamic reciprocity that drives numerous biological processes and disease progression. Currently, there is no available method for the direct quantification of cell traction force and matrix remodeling in 3D matrices as a function of time. To address this long-standing need, we recently developed a high-resolution microfabricated sensor 1 that measures cell force and tissue stiffness and can apply mechanical stimulation to the tissue. Here the tissue self-assembles and self-integrates with the sensor. With primary fibroblasts, cancer cells and neurons, we demonstrated the feasibility of the sensor by measuring single/multiple cell forces with a resolution of 1 nN, and tissue stiffness 1 due to matrix remodeling by the cells. The sensor can be translated into a high-throughput system for clinical assays such as patient-specific drug and phenotypic screening. In this paper, we present the detailed protocol for manufacturing the sensors, preparing the experimental setup, developing assays with different tissues, and imaging and analyzing the data.

Introduction Cell-matrix interaction is the most important component of mechanotransduction, which plays vital roles in various physiological and pathological processes, such as wound healing 2,3, fibrosis 4, angiogenesis 5, migration 6,7, and metastasis 8-12. Communication between cells and the surrounding extracellular matrices (ECM) is primarily mediated through cellular forces, which provide the link between physical cues and chemical signaling and hence create a dynamic reciprocity 8,13-17 between cells and the surrounding microenvironment (ME). One of the most critical aspects of cell-ECM interactions is the feed-forward relationship between cell contractility and matrix remodeling. The dynamics of cell force and ECM remodeling have been implicated in numerous biological processes and disease progression; thus, the measurement of traction force in association with matrix remodeling is extremely important. As a result, methods to quantify cell traction on 2D substrates have been developed and advanced over the past decades. However, cells in vivo are in a 3D environment with extracellular matrices (ECM) around them. There is a gap in the literature for methods to directly quantify cell traction and cell-induced matrix remodeling in 3D. We have recently developed a novel sensor 1 for the direct measurement of single cell forces and the determination of matrix remodeling in 3D ECM as a function of time. In this paper, we present the detailed methodology for manufacturing the sensors, preparing the experimental setup, forming different types of tissues, and imaging and analyzing the data.
Besides stiffness, cells transduce other mechanical stimuli, such as stretch/contraction, into electrical and/or biochemical signals for functions relevant in many organs, such as the lung alveoli, bladder, or heart 18-23. To study the role of cellular stretching in signaling pathways, devices have been developed to apply mechanical stretch to cells and determine their stress-strain relationship 24-27. However, most of these devices rely on elastomeric membranes for applying stretch to 2D substrate-adhered cells. Even if the cells are embedded in 3D ECM on top of these membranes, they sense the rigidity of the membranes, and thus the setup falls short of representing the in vivo environment. The sensor presented herein eliminates this limitation by creating a self-organized tissue that does not require support from such stiff membranes. Hence, the cells in the specimen are in a truly 3D scaffold that better mimics the in vivo microenvironment.

The ultra-sensitive sensor we describe here is designed such that a tissue sample can self-assemble and self-integrate with the sensor. The tissue can have a single cell or multiple cells embedded in a three-dimensional extracellular matrix (ECM). The sensor is prepared by casting polydimethylsiloxane (PDMS) in silicon molds micro-fabricated with a standard photolithographic process. With a resolution of ~1 nN, the sensor is capable of directly quantifying single cell forces in collagen (ECM) using the force equilibrium law, which allows circumventing complicated constitutive relations. In addition, the sensor can be used as an actuator to measure the change in ECM stiffness due to remodeling as a function of time, as well as to apply a prescribed stretch or compression to the cell-ECM matrix to explore the cell response to mechanical deformation in 3D. Hence, the novel sensor offers a platform with a range of applications for biophysical investigations of cells and tissues. In our previous paper 1, we presented the details of the sensor and the experimental results that established its novelty, applicability and versatility. Here, we describe an elaborate protocol that provides systematic and thorough guidance for the successful employment of the sensor in a diverse set of experiments.

Basic concept: The simplest sensor consists of three components: a soft spring (spring constant Ks), a stiff spring (Kr) and two grips connected to the springs, as shown in Fig. 1. The soft spring is the force-sensing element, while the stiff spring stabilizes the specimen that is self-assembled between the grips. The specimen can be formed by dispensing a precursor solution (e.g. Matrigel, a cell-collagen mixture, etc.) on the grips and allowing it to polymerize and self-assemble in situ (Fig. 1A).

Protocol development: The first step is to design the sensor and fabricate the mask. The sensors can be designed as single units or as an array of connected sensors for high-throughput applications. The array of sensors on the mask can be drafted such that a 4-inch wafer can accommodate about 100 sensor units. The next step is to microfabricate the silicon molds in a cleanroom, as illustrated in Fig. 2A.
A standard photolithography process can be employed to spin-coat photoresist, expose it to UV, and develop the wafer, which is then etched to the desired depth by the deep reactive-ion etching (DRIE) technique. The final step in the preparation of the molds is coating the etched wafer with polytetrafluoroethylene (PTFE), which facilitates removal of the sensors from these templates. Such silicon molds can be used many times if carefully handled and kept clean. Finally, the sensors can be prepared by pouring liquid polydimethylsiloxane (PDMS) into the casts, polymerizing it and removing it from the molds, as shown in Fig. 2A.

The sensors are extremely adaptable and can be designed for diverse biophysical assays. By altering the dimensions of the beams (springs), high sensitivity can be achieved for the measurement of single cell forces in 3D matrices (a back-of-the-envelope stiffness sketch follows below). With increasing force resolution, the beams become very soft and susceptible to collapse (i.e. buckling, twisting, or sticking to each other) due to the meniscus forces (surface tension) they are subjected to at different stages of operation. We have developed an innovative protocol to circumvent this challenge. In essence, the soft elements of the sensor are immobilized (as illustrated in Fig. 1) by a sacrificial gelatin layer that can be removed after the completed test assembly is immersed in culture media. This novel technique is one of the key contributions enabling the successful employment of supersensitive micro-electro-mechanical systems (MEMS) sensors, which are required to overcome surface energy challenges. Fig. 2B presents a step-by-step procedure for the preparation of the experimental setup. First, a thin gelatin layer is prepared to prevent the sensor from touching and sticking to the bottom glass. Next, the sensor is positioned on site and additional gelatin is added to immobilize the springs. After that, the tissue precursor solution is dispensed and allowed to polymerize into the final specimen. Finally, culture media is added to the dish; the gelatin dissolves and gets washed out in ~30 mins at 37.5 °C. The sensor is thus released and activated for force measurement. Boxes 1-3 further illustrate methods for constructing different tissue configurations: Type I) tissue with a single cell, a few cells, or a co-culture of multiple cell types; Type II) tissue with similar cells in the grips only, with the central region constructed of cell-free ECM; Type III) tissue with two types of cells in the two grips, keeping the central portion free of cells.
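Because force is read out directly from the deflection of the soft spring, a back-of-the-envelope stiffness estimate is useful when sizing the beams. The sketch below treats the sensing beams as fixed-guided flexures (stiffness 12EI/L^3 per beam), which is our modeling assumption; the PDMS modulus, beam dimensions and deflection are illustrative values, not the dimensions of the published sensor.

```python
def beam_stiffness(E, L, w, t, n_beams=2):
    """In-plane stiffness of n parallel fixed-guided flexure beams.

    k = n * 12 * E * I / L**3, with I = t * w**3 / 12 for bending across
    the in-plane width w (t is the etch depth of the mold).
    """
    I = t * w**3 / 12.0
    return n_beams * 12.0 * E * I / L**3

# Illustrative numbers: PDMS modulus ~1 MPa, 2 mm long beams,
# 20 um wide, 200 um deep (the assumed etch depth).
E = 1.0e6                                   # Pa
k = beam_stiffness(E, L=2e-3, w=20e-6, t=200e-6)

# Force then follows from equilibrium: F = k * measured grip displacement.
delta = 1.0e-6                              # 1 um deflection from the images
print(f"k = {k:.2e} N/m -> F = {k * delta * 1e9:.1f} nN")
```

With these illustrative numbers the two-beam spring has k of roughly 4 × 10^-4 N/m, so a 1 um deflection corresponds to ~0.4 nN, which shows how sub-micrometer image resolution translates into nanonewton-scale force resolution.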
Limitations and advantages of the protocol over current methods: It is well established that cell traction force is at the center of most biophysical processes, and consequently extensive research has gone into the investigation of cellular forces. However, most of the methods developed for measuring cell-generated forces work on 2D substrates, e.g., traction force microscopy (TFM) 28,29, Förster resonance energy transfer (FRET) 30, and micro-pillar arrays 31. These techniques are widely used at present, but all of them are limited to 2D culture. While 2D cell culture is convenient and has provided important insights into biophysical processes, cells in 3D matrices may respond very differently. Hence, quantification of cell force in 3D matrices is of utmost importance. Despite the great necessity, a method for evaluating cell forces in 3D is not available, largely due to the unique challenges that bio-matrices present. Most natural fibrous matrices (e.g. collagen, Matrigel, and fibrin) in 3D are heterogeneous at cellular scales and hence do not exhibit simple constitutive relations. In addition, cells continuously remodel the scaffolds through ECM deposition, crosslinking and force-induced plastic strains. Hence, the elasticity of the ECM cannot be used to determine cell forces. The protocol presented here describes a novel approach to determine cell-generated forces in 3D by direct measurement from force balance, bypassing such spatio-temporal variations in the mechanical properties of the ECM. Compared to other procedures, this is the most significant advantage of our protocol.

There are a few computational methods, such as kinematics-based mean deformation metrics (MDM) 32 and finite element based approaches 33, that can estimate cell-generated deformation and thus approximate forces in 3D. However, limitations of these methods include the assumption of constitutive equations (stress-strain relations), computationally expensive analysis and/or the use of fluorescent light at damaging intensities 34. Our protocol is free from these drawbacks. In fact, by analyzing phase-contrast images using simple ImageJ plugins, it is possible to calculate the forces in real time. Moreover, the absence of fluorescent light enables monitoring of force for a long duration.

In addition to cell forces, the sensor is capable of 'direct' measurement of the stiffness of the specimen/tissue at different time points. This unique ability helps track the dynamics of ECM remodeling and traction force simultaneously. While there are methods that can provide 'indirect' quantification of remodeling 35, there is no other technique currently available to directly assess cell force and matrix remodeling dynamics in the same sample. Our previously developed silicon-based sensor 36 can measure the stiffness of larger tissues; however, dissolved silicon in the culture media may have a toxic effect on the cells 37.

The sensor also has a few limitations. First, the sensor can only measure the total cell force; local traction stresses within the matrix cannot be determined. However, the addition of fiducial beads to the matrix may facilitate reconstruction of the 3D deformation field and computation of an approximation of the local stresses.
Another limitation is bright-field or phase-contrast imaging. While these imaging methods are sufficient for force measurement and 2D projected visualization, tomographic imaging, e.g. confocal or two-photon microscopy, is necessary for accurate 3D spatial correlations. The next section describes how the sensor's limitations can be overcome and how to complement the sensor by incorporating other techniques for enhanced performance.

Potential applications: The greatest strength of the sensor is its adaptability and versatility. As a result, the sensor and the self-assembled specimen can be modified for multi-functionality without major investments. Table 1 presents several prospective applications of the sensor and the protocol. For example, by simply adding 1 um polystyrene (PS) beads to the scaffold and using confocal imaging, we can detect local deformations in addition to the total force exerted by the cells. The local deformation field can provide vital information pertinent to specific cellular activities such as polarization and migration. Also, by precise placement of cancer and stromal cells at a distance, it is possible to investigate the role of both biochemical and biophysical crosstalk within the tumor microenvironment. Furthermore, it is conceivable that the sensor platform can be scaled up for translational applications such as personalized drug screening with patient-derived primary cells. By commercially mechanizing a few steps of the process, it is possible to develop a high-throughput system for clinical assays that can leverage biophysical outputs (i.e. traction force, ECM remodeling, effects of drugs) from the sensors.

In addition to measuring cell-generated forces and remodeling of the ECM, the sensor provides a unique platform for material testing and manipulation at the micro-scale. For example, the sensor can be employed to investigate the emergent behavior of active matter systems (e.g. the F-actin-myosin II network). The sensor leverages the self-organization of such material systems in the formation of the sample. It is virtually impossible to construct samples of such small scale and perform tensile or compressive tests otherwise. Furthermore, by applying a prescribed amount of strain, we can possibly manipulate micro-structures in the sample, e.g., fiber orientation, pore sizes, and anisotropy, and thus study the effects of microstructural changes on the dynamics of their macroscopic properties. We demonstrate a few prospective applications below. They serve as a guide to customize and utilize the sensor for transformative applications.

Reagents

Fibroblast growth media: Prepare fibroblast growth media by mixing 500 ml FBM Basal Medium, 0.50 ml insulin, 0.50 ml hFGF-B, 0.50 ml gentamicin sulfate-amphotericin (GA-1000) and 10 ml FBS (all provided with the FGM-2 BulletKit) and filtering through the Stericup filtration system.

FET cancer cell culture media: Prepare FET culture media by mixing 222.5 ml DMEM, 222.5 ml Ham's F-12 medium, 50 ml FBS and 5 ml penicillin-streptomycin and filtering through the Stericup filtration system.

Gelatin solution preparation: Prepare 10% gelatin solution by mixing 10 g bovine skin gelatin powder with 100 ml DI water (or DPBS) and keeping the mixture in a warm water bath at 37 °C overnight. Check that the gelatin has dissolved completely and the solution is homogeneous. If needed, the solution can be left in the warm water bath for 12 more hours. This solution can be stored in the fridge at 4 °C and warmed up to 37 °C before use.
Oxaliplatin solution: Prepare a 5.0 mM stock solution of the Oxaliplatin chemotherapy drug by dissolving 5 mg Oxaliplatin powder in 2.5 ml DPBS. The solution may need to be heated to 37 °C and sonicated for fast dissolution. The stock solution was diluted 1:1000 with culture media to get a final concentration of 5 µM.

Staining reagents:
Fixing solution: Prepare 4% PFA solution in PBS from a stock solution of 16% PFA.
Permeabilization solution: Prepare a stock solution of 10% (v/v) Triton X-100 by dissolving 5 ml Triton X-100 in 50 ml PBS. The solution may need to be sonicated for fast dissolution. Dilute the stock solution by 1:50 with PBS to prepare 0.2% Triton X-100 solution.
Blocking solution: Prepare a stock solution of 10% (w/v) BSA by dissolving 5 g BSA in 50 ml PBS. Dilute the stock solution by 1:2 with PBS to prepare 5% BSA solution. Then add the NGS stock solution (100%) to get a dilution of 1:20. The final blocking solution will contain 5% BSA and 5% NGS.
Staining solution: Dilute the primary and secondary antibodies at different ratios in blocking solution.

Procedure
Fabrication of the mold (timing 2-2.5 hours)
!!CAUTION: All steps in this procedure must be carried out in a microfabrication cleanroom. Chemicals and instruments required for this process are usually available in a standard fabrication cleanroom.!!
CRITICAL: The design photomask must be available before starting this procedure. The mask can be made by commercial facilities, if supplied with the design drawing.
1. Clean a 4-inch single-side-polished 500 um thick silicon wafer with acetone, IPA and DI water. Then blow dry the wafer using a nitrogen gun. CRITICAL: This step is performed in a chemical fume hood.
2. Bake the wafer for 120 s on a hotplate at 110 °C for further dehydration.
3. Place the wafer on the spinner, and spin coat adhesion promoter AP8000 using a standard recipe (e.g., a 500 rpm spread cycle for 10 s followed by a 30 s spin at 3000 rpm for solvent drying).
4. Spin coat SPR 220-4.5 positive photoresist (PR) using the same spinning recipe. Note: Negative PR can also be used, if the mask is designed accordingly.
5. Soft bake the wafer for 180 s (120 s at 60 °C, then 60 s at 110 °C) on hotplates.
6. Properly place the design mask and the PR-coated wafer in the mask aligner and expose to UV light with a dose of 160 mJ/cm². CRITICAL: Choose the appropriate contact mode between the mask and the wafer, based on the minimum dimension in the design.
7. Develop the exposed wafer with AZ 400K (diluted 1:5 with DI water) for ~45 s. Rinse with DI water and blow dry with nitrogen.
8. Hard bake the wafer for 120 s at 110 °C on a hotplate. The wafer is now ready for deep reactive ion etching (DRIE).
9. Etch the wafer in a DRIE machine (e.g., STS Pegasus ICP-DRIE) to achieve an etching depth of ~200 um. Etching time is controlled to achieve the desired depth of the mold. CRITICAL: Etching depth may vary based on the machine, as well as the proportion of etching area on the wafer. Some iteration may be necessary to achieve the desired depth.
10. Strip the remaining PR off the wafer using Microposit Remover 1165, acetone and DI water. Blow dry with nitrogen.
11. (Optional) Clean the wafer by a reactive ion etching (RIE) process using oxygen and argon plasma in an RIE-March Jupiter III. Note: this step is optional, but highly recommended.
12. Deposit a layer of polytetrafluoroethylene (PTFE) on the wafer in a standard machine (e.g., Plasmatherm SLR 770). The mold should be ready to use. A 4-inch wafer can contain as many as ~100 sensors, or even more. Note: The thickness is not very critical, since this layer is to facilitate removal of the PDMS sensors from the mold. !!CAUTION: DO NOT use IPA or acetone on the mold after PTFE deposition, since they will strip the coating off.!!

Fabrication of the sensors (timing 1-2 days)
13. In a plastic dish, pour PDMS (Sylgard 184) base and cross-linker at a 10:1 ratio by weight, and vigorously mix with a spatula. CRITICAL: The mixing should be done thoroughly so that the mixture becomes homogeneous. Bubble formation is normal at this step.
14. Remove the bubbles from the mixture by applying negative pressure for 30 mins in a vacuum desiccator. CRITICAL: As the bubbles grow in volume in the desiccator, they may overflow the container if it is not large enough.
15. Very carefully pour small amounts of liquid PDMS into the molds on the silicon wafer using a fine pipette tip, e.g., a 10 ul pipette tip.
CRITICAL: Add the first droplet on the base side of the mold and allow the liquid PDMS to spread on its own. As the liquid reaches the beam channels, capillary tension will drive it towards every part of the mold. Add very small incremental amounts to the molds until the total depth is filled. Very carefully visualize the level of PDMS after each drop is added. Try tilting the wafer and looking from different angles to get a better view. Take special care to avoid overflow.
16. After filling all the molds in the wafer with liquid PDMS, place the wafer in an oven at 60 °C for about 4-12 hours. CRITICAL: Place the wafer on a flat surface that is perfectly horizontal. A bubble level can be used to achieve this goal.
17. After curing and polymerization, the PDMS sensors are ready to be taken off the molds. Using a pipette, drop small amounts of IPA on the sensors in the molds, and let sit for ~10 mins. CRITICAL: IPA helps in peeling the sensors off the molds. Removal of the sensors without using IPA is also possible; however, this may result in breakage of thin elements in a few specimens.
18. Using fine tweezers, gently start lifting off the sensors from the base side and slowly work up to the spring beams and tissue grips. The whole sensors should come off the molds without any tears or damage. CRITICAL: This step should be conducted slowly and gently; otherwise, several parts can break, which may change the configuration of the sensor. However, the sensors are very robust and are only broken when excessive force is applied.
19. To remove unreacted PDMS monomers, submerge the sensors in 70% IPA and leave overnight.
20. Wash the sensors with DI water, and then autoclave to remove remaining IPA and sterilize. Note: Autoclaving can be done multiple times to ensure no toxic substance is present in the PDMS.
21. After sterilization, the sensors can be stored in DI water or 70% IPA for a long time at room temperature or in the refrigerator. CRITICAL: If stored in IPA, the sensors must be cleaned before use. Perform cleaning by keeping the sensors in DI water at 60 °C in an oven for 2-3 days, changing the water every day. Also, autoclave multiple times, if needed.

Preparation of the experimental setup (timing 2.5-3 hours)
22. Prepare a 10% (w/v) gelatin solution and leave in a warm water bath at 37 °C overnight. Note: Gelatin can be very slow to dissolve and hence may take hours to make a homogenized solution. The solution can be stored in the refrigerator and warmed up for repeated use.
23. Take the sensors out of DI water, place in a petridish with tweezers, and let dry in a biosafety cabinet.
24. While the sensors are drying, stick a double-sided tape to the bottom glass of a petridish, pour gelatin as shown in Fig. 2B and let it gel in the refrigerator for ~30 mins. CRITICAL: During the gelation process, gelatin also dries and shrinks. Hence, slightly overfill with liquid gelatin so that a flat substrate (as thick as the tape) can be achieved after gelation.
25. Stick the sensor's base to the tape, and place the beams and grips on the gelatin substrate as shown in Fig. 2B. Use a syringe needle to straighten the beams and position the grips at the correct locations. Note: A microscope, or at least a magnifying glass, is required to perform this step.
27. This step has to be carried out with the petridish under the microscope. Based on the desired tissue configuration, proceed with one of the following three options:
A. Type I: for a single cell, a number of cells, or a mixed co-culture in all parts of the tissue:
i. Using a needle, carve out two blocks of gelatin from the substrate and place them by the sides of the grips, as shown in Box 1A.
ii. Using a 20 ul pipette, pour liquid gelatin on the sensor beams and other parts of the sensor away from the grips (Box 1B). Use a needle to guide gelatin into every corner and pay close attention so that there are no air pockets. CRITICAL: This is the most critical step in the process. Pour a very small amount of gelatin at a time, so that the grips are not inundated. The gelatin blocks previously placed help stop liquid gelatin from flowing into the grips. This gelatin layer solves a number of potential problems such as air bubbles, tissue rupture, and stiction between beams. CRITICAL: Lab temperature should be at ~20-25 °C to allow enough time for maneuvering the gelatin. Lower temperatures cause high viscosity and quick gelation, making it difficult to fill all the corners. At higher temperatures, low viscosity makes gelatin flow faster, increasing the possibility of flooding the tissue site.
B. Type II: for placing the cells in the grips and keeping the center free of cells:
i. Using a needle, carve out a block of gelatin from the substrate and place it between the grips, as shown in Box 2A.
ii. Proceed as directed in step 27.A.ii.
C. Type III: for two types of cell in the two grips, keeping the central portion devoid of cells:
i. Proceed as directed in step 27.B.i.
ii. Proceed as directed in step 27.A.ii.
iii. Carve out another block of gelatin, and place it on top of one of the grips as shown in Box 3B.

Preparation of cell-ECM mixture and assembly of the tissue (timing 1.5-2 hours)
28. Prepare a neutralizing solution (NS) by mixing 1N sodium hydroxide, 10X PBS, and DI water following the Corning recommended protocol 58. For example, to prepare 500 ul of a 2 mg/ml final collagen solution from a stock solution with a concentration of 3.8 mg/ml, make NS by mixing 7.6 ul NaOH, 50 ul 10X PBS and 113.5 ul DI water. This 171.1 ul of NS is mixed with 328.9 ul of collagen stock solution to prepare 500 ul of working solution with a 2 mg/ml final concentration.
29. Fill up a crystallizing dish with crushed ice, and place the collagen stock solution bottle, the vial of neutralizing solution and a few empty vials in the ice. Leave the container in the refrigerator until needed. CRITICAL: Most of the steps with collagen must be performed on ice, since collagen polymerizes very quickly at higher temperatures.
30. Trypsinize cells in the culture flask, centrifuge and aspirate the supernatant to get the cell pellet at the bottom of the tube. Note: Use cell-specific passaging protocols during this step.
31. Put the cell-containing tube in ice, and mix collagen stock solution with NS on ice.
32. Mix the cells with the collagen solution by pipetting vigorously. For a single cell in the tissue, suspend the cells at a density of ~150 ×10³ cells/ml. Cell density can be increased for a higher number of cells in the tissue. Note: Take care not to form bubbles when mixing cells with collagen.
33. Based on the tissue structure, proceed with one of the following three options:
A. Type I: for cells in all parts of the tissue:
i. Using a 20 ul pipette, add a small droplet of cell-collagen mixture on the tissue site and grips, as shown in Box 1B. CRITICAL: Remember to perform these steps with all the materials on ice; from this point onward, all remaining activities involving the cell-collagen mixture should be finished in ~5-7 mins.
ii. Apply negative pressure for about 20 seconds in a vacuum desiccator. This step facilitates removal of air pockets in the grips and allows the cell-collagen mixture to fill the space. Note: Keep ice packs in the desiccator to keep the temperature low.
iii. Add some cell culture media close to the dish walls, away from the sensor, so that the tissue does not dry up while polymerizing. Note: Adding media is very important to keep the humidity in the petridish, maintain the integrity of the tissue structure and ensure good health of the cells. CAUTION: Do not add media to the tissue at this step.
iv. Keep the petridish in a biosafety hood for 10-15 mins at room temperature for collagen polymerization and tissue formation. CRITICAL: It is not possible to raise the temperature to 37 °C for fast polymerization, since gelatin melts at such a temperature and causes collapse of the setup.
B. Type II: for cells in the grips, with the central section free from cells:
i. Proceed as directed in step 33.A.i. Refer to Box 2B.
ii. Proceed as directed in step 33.A.ii.
iii. Using the edge of a kimwipe, remove excess cell-collagen mixture from the vicinity of the tissue location. Note: Removal of excess cell-collagen mixture will not remove the collagen and cells that occupy the inside of the grips, since they are locked inside. CRITICAL: This step should be finished as quickly as possible, since the collagen has already started to polymerize.
iv. Using a 20 ul pipette, add a small droplet of collagen (without cells) at the tissue site and grips, as shown in Box 2C.

Tension-compression testing for stiffness measurement (timing 10-30 mins)
41. Attach a needle to the XYZ linear stage and place it on the microscope stage, as shown in Box 1G. Note: The micrometers in the XYZ linear motorized stage can either be manual or piezo-actuated. A piezo-actuated stage allows more control and precision; however, the manual stage also works.
42. Using the stage and microscope, guide the needle through the hole in the stiff beam.
43. For the compression test, move the needle towards the tissue at a controlled rate, so that the stiff spring compresses the tissue. At the same time, keep imaging at a frequency commensurate with the rate of displacement.
44. Unload the sample by moving the needle away from the tissue.
45. For the tension test, keep moving the needle away from the tissue at a controlled rate, so that the stiff beam starts to extend the tissue. Keep imaging during the whole process.
46. Unload the sample by moving the needle back to the initial position, and then retract the needle.
47. Analyze the images to calculate the displacement of both grips. The data should provide tissue force and deformation, which can be used to determine the tissue stiffness.

Image analysis with ImageJ (timing 5-10 mins)
48. Install the Template Matching and Slice Alignment plugin 59 in ImageJ.
49. Create a stack with all the images.
50. Rotate the images, if necessary, so that the grips and the tissue align with either the X or the Y axis.
51. Align the slices in the stack with respect to the grip connected to the stiff beam. Select the analysis settings (e.g., matching method: normalized correlation coefficient; subpixel registration; bicubic interpolation for subpixel translation) and choose the stiff grip as the region of interest (ROI). All the slices should now be aligned with the stiff grip in a fixed location. Save the translation values in pixels and convert to microns.
52. Align the slices with respect to the soft grip by selecting that grip as the ROI. The analysis settings can be the same as before. Save the translation values, which give the spring displacement in pixels.

Anticipated Results
Here we demonstrate that, following the protocol, we can construct tissues with various configurations and measure cell/tissue force and stiffness. Fig. 3 shows three distinct types of tissues for different assays. We define these three types as follows. Type I: tissue with single or multiple cell(s) in the tissue (Box 1); Type II: similar cells inside the grips with the central region free of cells (Box 2); and Type III: different types of cells in the two grips, keeping the central portion without cells (Box 3).
Fig. 3A-F show a cancer model with cancer and stromal cells in the tissue (type I). The confocal images show that the model consists of one FET (human colon cancer cell line) cluster, a few CAF05 (human colon) fibroblasts and collagen as the ECM. Confocal z-stack images of F-actin/nuclei-labeled cells and two-photon second harmonic generation (SHG) images of collagen were used to reconstruct the 3D tissue structure (Fig. 3D-F). Fig. 3G-H show models for type II tissues, where mouse primary neurons and glial cells are placed in the grips and the central region is ECM without cells. Confocal immunofluorescence images show the nuclei, astrocytes and neurites (axons and/or dendrites) of the neurons (Fig. 3G). Live imaging of F-actin (labeled with SiR-Actin) shows that the cells extend neurites through the tissue and create connections (possibly synapses) (Fig. 3H). Fig. 3I-J present two examples of type III tissues, with cancer cells in one grip and CAFs in the other. In Fig. 3I, we demonstrate a low density of cells, and Fig. 3J exhibits a tissue with a high number of cells. Interestingly, by keeping cancer and stromal cells at a distance, this type of tissue allows the creation of a gradient of secreted factors and also of physical influence. This feature makes the sensor a convenient tool for investigating biophysical activities in 3D. Fig. 3K shows a tissue with polystyrene micro-beads for tracking ECM deformation.

Fig. 4A shows force data from single fibroblasts (CAF05). Phase-contrast images of the cells were also collected during the experiments. Representative images of the cells at different time points are also shown. The data present the force dynamics of each CAF and highlight the common trends and heterogeneities between cells. For example, sample 3 shows a large increase in force at the ~16th hour, and this event was accompanied by excessive elongation of the cell (see phase-contrast images). Also, the force curves from samples 1 and 2 show two different trends. The cell in sample 1 gradually increased its force without major relaxation at any point, while cell 2 shows a periodic increase and decrease in force. It can be anticipated that the cells exhibited different functions and signaling corresponding to the cellular force.

To establish clinical relevance, we showed that the sensor can host tissue with human patient-derived primary cells that can be used for personalized drug screening. The cancer-associated fibroblasts were extracted and sorted from a primary colon tumor that was clinically diagnosed as an invasive, moderately differentiated adenocarcinoma. Fig. 4B shows data with PrCAFs. Interestingly, these cells are highly contractile compared to the CAF05 cell line (human colon cancer-associated fibroblasts). The traditional chemotherapy drug Oxaliplatin was administered to the cells at the 17th hour at a concentration of 5 µM 60. Evidently, the drug was not effective in reducing force generation by the cells. This result might indicate that the drug does not affect the stromal cells' contractility. By assembling the sensors in a high-throughput array, we could possibly perform clinical assays, with a range of drugs and concentrations, to assess the efficacy and optimal dosage for a particular patient. Most importantly, the data from the sensor are free from the artifacts of 2D culture. Therefore, this protocol shows prospects of developing into a novel method for personalized drug/phenotypic screening.

We also created tissues with multiple CAFs (CAF05) and cancer cells (FETs) to show collective force evolution (Fig. 4C).
As expected, CAFs are more contractile than the cancer cells. Also, CAFs migrate and generate force as individual entities, while the FETs exhibit epithelial behavior by coalescing into large clusters and pulling the ECM to generate force. CAF force was inhibited with the Y-21632 drug, which inhibits Rho-associated kinases (ROCK). This confirms that the force reported by the sensor is indeed generated by cell contractility. In addition to force, we measured ECM remodeling by the cells from stiffness tests performed at the start and end of the experiments. For both CAFs and FETs, the force-strain curves show nonlinearity. The tensile loading data were fitted to the Mooney-Rivlin model and tangent tensile stiffnesses were measured from the model (Fig. 4C). The CAF sample had a very small increase in stiffness, but the FET sample shows substantial stiffening of the tissue.

It is known that cell-induced matrix remodeling has two mechanistic sources: (i) fiber alignment due to cell force and (ii) chemical cross-linking and ECM deposition by cells. We wanted to show that the sensor is capable of detecting remodeling from cross-linking alone. To this end, we chemically modified collagen tissue in two steps: first by Riboflavin-UV treatment (RT) and then by glutaraldehyde treatment (GT). RT was performed following the Dresden protocol 61,62 (used for corneal collagen cross-linking as a treatment for keratoconus) and GT was done with a 0.5% (w/v) concentration for 24 hours 63. Fig. 4D shows the stiffness of collagen before and after treatment. Interestingly, for the pre-treatment (PT) and RT conditions, the sample showed a linear force-strain relationship and constant stiffness. However, RT increased the stiffness by ~2-fold. Remarkably, GT resulted in a major transformation. The sample exhibited non-linear behavior with substantial strain hardening. At low strains, the stiffness is similar to the PT or RT sample, but at higher strains, the stiffness is significantly higher. For example, at 50% strain, the stiffness was increased by ~7-fold compared to RT.

Utilizing the sensor, we also applied stretch and compression to the tissue by applying a prescribed motion to the supporting spring using a piezo stage. The ECM is thus subjected to tensile and compressive strains, which are transferred to the cell. Fig. 4E presents phase-contrast images of cells under stretch and contraction, and the relationship between cell and ECM deformation. The data suggest that the cell strain is about 93% of the tissue strain in tension and 52% in compression. One explanation is that the cell gets stretched by the collagen due to cell-ECM adhesion.
Under compression, fiber buckling reduces force and strain transmission to the cells. Readers should be careful in the measurement and interpretation of cell strains from phase-contrast or bright-field images.

(A) First, the springs of the sensor are immobilized using gelatin. A tissue is formed by dropping the cell-ECM mixture and allowing the ECM to polymerize between the grips. With time, the cell(s) activate, engage with the ECM fibers and generate contractile force that is transferred to the springs and deforms the soft spring (spring constant = Ks). Using a microscope, we track the movement of the grips, measure the deformation of the springs and observe cellular activities. Cell force is quantified as the product of the spring constant and the deformation (Ks·δc). (B) Technique for measurement of the stiffness of the tissue on the sensor. The stiffness of the tissue can be measured by applying axial compression and/or tension to the tissue. Using a linear stage, the stiff beam is pushed or pulled, while continuously monitoring spring deformation and grip movement to measure force and strain. A simple design of the sensor shows the different components. The thin and wide beams represent the soft and stiff springs, respectively. The circular hole provides access to probes for manipulating the tissue for stiffness measurement.

Figure 4. Application of the sensor for measurement of cell force, tissue stiffness and cell stretching. (A) Traction force evolution of single CAF05 fibroblasts. The inset shows an enlarged portion of the sample 3 data, with points taken every 5 mins. Phase-contrast images show the cell in sample 3.
(B) Cell force dynamics for PrCAF cells with Oxaliplatin (chemotherapy drug) treatment. Phase-contrast images show that the cells are more contractile when elongated (sample 1). As the cells retract filopodia, the cell force tends to decrease. Also, the drug apparently does not affect cellular traction generation for stromal CAFs. (C) Force and ECM remodeling by multiple CAF05 and FET cells. CAF05 cells generate higher force compared to FET cancer cells. Stiffness tests were performed for the specimens at the initial (2 hrs) and final time points (24 hrs for CAF05s; 40 hrs for FETs). Force-strain curves are shown for both the initial and final tests. Tangential tensile stiffnesses were measured from Mooney-Rivlin fitting of the force-strain curves. The FET specimen exhibits substantial stiffening, while the CAF05 specimen shows a slight increase in stiffness. (D) The sensor detects the stiffness change of collagen with chemical crosslinking by Riboflavin-UV (RT) and glutaraldehyde (GT). RT increased the stiffness by 2-3 fold, while GT stiffened the tissue by ~7-fold at 50% strain. (E) Cell stretch-contraction test results on the sensor. The orange and blue markers represent tension and compression data points, respectively. Linear regression lines for the data show the relationship between applied tissue strain and the corresponding cell strain. The phase-contrast images and the insets show the cells at different stages of the experiment: compressed, at rest and elongated. Arrows (black or white) indicate the location of the cells in the tissues. Scale bars: 100 µm.

v. Use a needle to quickly remove the gelatin block between the grips. The central portion of the tissue will now be occupied by collagen without cells.
Gently add an adequate amount of cold (4-8 °C) culture media so that the sensors with assembled tissues are submerged in the media, and place the dish in the incubator. CRITICAL: The temperature of the media must be below 20 °C. Otherwise, if the media is warm, it will dissolve the gelatin quickly, damaging the assembled tissue.
Place the petridish containing the sensors on a motorized stage in an environment-controlled chamber enclosing an optical microscope (e.g., Olympus IX81) mounted on a vibration isolation table.
37. Set the temperature at 37 °C, CO2 at 5% and humidity at 70%.
38. Set the locations and focal distances for all the tissues in the dish in the software (e.g., Metamorph) that controls the microscope and the stage. Also set the time interval for imaging.
x. Proceed as directed in step 33.A.iii.
39. Start image acquisition in phase-contrast or bright-field mode using an objective with 20x magnification or higher. CRITICAL: The initial 2 hours of imaging can have problems with focusing, since the setup adjusts to the temperature changes. Keep an eye on the images and correct the focusing until it becomes stable.
40. For calculating spring displacements (and force), analyze images of the sensor gauges (or the tissue grips) using the template matching plugin in ImageJ with sub-pixel resolution.
Table 1: Potential applications
9,042.6
2021-06-08T00:00:00.000
[ "Biology", "Engineering" ]
Commodity Prices and the Stock Market in Thailand This study aims to investigate the association between Thai stock market and the commodity markets using 20-year historical monthly data from January 2000 to January 2020. Commodity prices used in the research consist of the prices of crude oil, natural gas, liquified natural gas, commodity agricultural raw materials, and gold. The traditional VAR is used in analyzing the relations between the commodity prices and stock index. The findings show how changes in each commodity prices had significant influence on the stock market. Both evidences of a long-run and a short-run impacts are examined to determine if the past values of changes in the prices of energy, agricultural raw materials, and gold are important in predicting the developments in the stock market or not. The results from the study provide the evidence that Thai stock market is responsive to energy-related, precious metal, and commodity-related indicators. INTRODUCTION With an increasing demand in terms of consumptions and investments in commodities such as crude oil, gold, and agricultural products, the knowledge about price behavior of commodity prices and the volatility transmission mechanism between commodity markets and the stock markets are important and can be applied in decision making process for different groups of participants including governments, traders, portfolio managers, consumers, and producers. Commodities play a vital role in supporting the economic and social development since they are important resources for the nations. Commodity prices are usually considered key drivers for changes in stock prices and the linkage between stock prices and commodity prices have been studied among researchers in the past decade (Kilian and Park, 2009;Creti et al., 2013;Mensi et al., 2013;Broadstock and Filis, 2014;Caporale et al., 2015;Du and He, 2015;Pan et al., 2016;Degiannakis et al., 2017;Joo and Park, 2017;Reboredo and Ugolini, 2018;Alio et al., 2019;Sun et al., 2019). However, there is no consensus about the association between equity prices and energy prices among researchers (Alio et al., 2019). Some literatures suggested that oil price risk impacts stock price returns in both developed and emerging markets including Chang et al. (2010), Hamma et al. (2014), Caporale et al. (2015), Gupta (2016), Tian (2016), Ulussever (2017) while others suggested that the results were mixed or there was no significant influence of oil price risk on stock markets. (Alom, 2015;Bastianin et al., 2016;Degiannakis et al., 2018;Yıldırım et al., 2018;Alio et al., 2019;Lv et al., 2019;Singhal et al., 2019) Some evidence regarding the latter case is presented in the following paragraph. Bastianin et al. (2016) investigated the impact of oil price shocks on the stock market volatility in the G7 countries and found that the stock market volatility did not respond to oil supply shocks, however, demand shocks had significant impact on the stock market volatility. Yıldırım et al. (2018) also reported mixed results in BRICS countries. Singhal et al. (2019) have studied the association between gold prices, oil prices, and equity market in Mexico and found that although both gold prices and energy prices have significant impact on equity markets, gold prices have a more positive effect on equities than oil. Moreover, in the Chinese markets, Lv et al. 
(2019) employed the GARCH-M model to discuss the influence of oil prices and stock prices in sub-industries such as wind and solar energy. In addition, the spillover effects between oil risk and stock volatility in these sub-industries are also explored. The results showed that there was no statistically significant relationship between the oil prices and the energy stocks in the Chinese equity market. An extensive review of the theory and empirical evidence between oil prices and stock markets was well summarized in Degiannakis et al. (2018). This paper reviewed related studies on the oil price and stock market relationship and found that the significance of these causal effects depends heavily on several factors, including whether the data used were aggregate stock indices or firm-level, whether the stock markets were in countries which are oil importers or exporters, and whether the data on changes in oil price used in the study were symmetric or asymmetric. The following section provides more details about the results and summarizes empirical evidence from related previous literature.

LITERATURE REVIEW
The impact of fluctuations in commodity prices such as oil prices, agricultural prices, and gold prices on stock returns has continued to draw substantial interest and has come under empirical investigation in numerous recent studies. Ding et al. (2017) conducted principal component analysis and an SVAR model to analyze the Chinese stock markets and presented evidence of a significant causal relationship between investor emotion in the Chinese stock market and fluctuations in international crude oil prices. The results also revealed that oil price fluctuations significantly Granger-cause stock market investor sentiment and that the crude oil price has negative contagion effects on stock markets. In view of the impact of fluctuations in international oil prices on the sentiment of investors in the Chinese stock market, this study suggested that the government can adopt emergency response measures to stabilize investor sentiment and reduce the risk to stockholders, such as fighting for pricing power over crude oil to avoid fluctuations in crude oil prices, and also protect national energy security by striving to independently establish an oil storage system, advocate energy conservation, reduce dependence on oil in the international market, accelerate the development of new energy, and recommend subsidies to purchase new energy vehicles. The impact of oil price shocks on China's stock market was explored in Wei and Guo (2017) by using the VAR model. They found an unstable relationship between the stock market and oil price shocks in the sample, and there was a structural break in December 2006. In particular, the impact of the oil price shock was positive during the period before the structural break, but it turned negative during the period after the structural break. Moreover, these results were confirmed by the impulse response analysis and variance decomposition analysis. Yıldırım et al. (2018) investigated the relationship between crude oil prices and stock market prices in five countries, including South Africa, China, India, Russia, and Brazil, by employing the VAR model. The analysis of the impulse responses showed that the stock markets' response to oil price shocks varies from country to country.
Specifically, the unpredicted positive shocks in oil prices led to an increase in stock prices in Brazil and Russia. Overall, the results showed an existing relationship between oil prices and stock markets in most of the countries under study, except for China. However, the results also found that stock market responses to oil prices disappeared within five months in all countries. The analysis of the co-integration relationship in Wei et al. (2019) suggested that oil prices had a significantly negative impact on the Chinese stock market. The volatility in oil prices severely interfered with the relationship between crude oil prices and China's oil prices, which led to the dramatic decline in Chinese stock prices. They also reported two structural breakpoints. The first structural break point was set in March 2008, followed by the second break point in 2012 due to the global financial crisis. This study noted that Brent crude futures prices were significantly co-integrated with the Chinese stock market and that there was a significant negative correlation between the Chinese stock market and the crude oil price. In addition, the correlation was stronger during the crisis period. Kathiravan et al. (2019) investigated the relationship between the crude oil price and airline stock prices by using the Granger causality test. Their empirical results showed that the crude oil price triggered volatility in most of the airline stock returns. However, the relationship was not significant during a certain subperiod. The impact of gold and oil prices, and their volatilities, on stock prices was explored in recent studies such as Raza et al. (2016), Jain and Biswal (2016), Arfaoui and Rejeb (2017), Chen and Wang (2017), Wen and Cheng (2018), and Coronado et al. (2018). The results in Raza et al. (2016) concluded that the gold price has a significant and positive impact on the stock prices of emerging markets in BRIC and ASEAN, while the oil price has a significant and negative impact on the stock prices of these markets. In addition, Jain and Biswal (2016) revealed that there existed dynamic linkages among the oil price, gold price, exchange rate, and stock market in India. Arfaoui and Rejeb (2017) reported significant interdependencies among oil, gold, and stock markets. Chen and Wang (2017) suggested that there were some dynamic relationships between the gold and stock markets in China. Their results showed that Chinese investors could use gold as a hedge for stock investment during periods of bear markets. Similarly, Wen and Cheng (2018) also found that, in China and Thailand, gold can be used as a hedging tool for stock investment. However, the results showed that the US dollar had more of an advantage as a hedging instrument relative to gold. Coronado et al. (2018) studied the direction of causality between gold, oil, and the US stock market. They proposed that the three markets were interrelated. For the agricultural commodity market, the dependence linkage between the commodity and stock markets in China was examined in the study of Hammoudeh et al. (2014). Their findings provided evidence that there was a significant linkage between the two markets. The results for commodity prices and US stock prices in the water industry led to a similar conclusion in the study conducted by Vandone et al. (2018) for the US markets. Furthermore, Creti et al.
(2013) showed that the recent financial crisis of 2007-08 strengthened the link between equity and commodity markets; during the financial turmoil, a high correlation was generally observed between the two markets. Mixed results were found in Nordin et al. (2020), which studied the relationship between commodities and the Malaysian stock market. It was found that the impact of palm oil prices on the stock market index is positive and significant, both long-term and short-term. However, the impact of oil and gold prices on stock market performance is not significant. In conclusion, previous studies showed mixed results, although the majority of results support a significant linkage between commodities and stock markets. The following section describes the data and methodology employed in this study.

DATA AND METHODOLOGY
The energy price and stock market dynamics are analyzed using monthly data obtained from the Stock Exchange of Thailand, the World Bank, World Gas Intelligence, and the International Monetary Fund, spanning the period from January 2000 to January 2020. The energy components in the analysis are the monthly change in crude oil, the average spot price of Brent, Dubai and West Texas Intermediate, equally weighted, in US dollars per barrel (COP); the natural gas price in US dollars per million metric British thermal units (NGS); and the Indonesian liquefied natural gas monthly price in US dollars per million metric British thermal units (LNG). The other commodity prices in the analysis are the commodity agricultural raw materials index based on timber, cotton, wool, rubber, and hides price indices (CAI) and gold (99.5% fine, London afternoon fixing, average of daily rates per troy ounce) (GP). The response variable is the stock market return from the Stock Exchange of Thailand index (SET). The Vector Autoregressive Model (VAR) and Granger's causality test are implemented to test the relationships among multiple variables in the time series and to measure precedence and information content among these time series. The analysis is performed in R version 3.6.3 (2020-02-29). According to the general VAR model, which is employed to analyze the direction of causality between energy prices and stock market returns, the multivariate time series can be explained by a VAR of order P:

ω_t = c + γ_1 ω_{t-1} + γ_2 ω_{t-2} + ... + γ_P ω_{t-P} + ε_t    (1)

where ε_t is an error vector of random variables with zero mean and covariance matrix Σ = E[ε_t ε_t']. Consequently, the time series of each variable COP, NGS, LNG, CAI, GP, and SET enter the model endogenously, and the general VAR(P) model can be rewritten compactly as:

ω_t = c + Σ_{i=1}^{P} γ_i ω_{t-i} + ε_t    (2)

where ω_t, c, γ_i, and ε_t denote a vector of jointly determined variables, a vector of constants, a matrix of coefficients to be estimated, and a vector of error terms, respectively. In addition, the unit root of each variable is investigated by the Augmented Dickey-Fuller (ADF) test in order to ensure the stationarity of each time series employed in the VAR of Equation (2).

RESULTS AND DISCUSSIONS
The results of the Augmented Dickey-Fuller unit root test are presented in Table 2. It was performed to confirm the stationarity of the monthly dataset and to ensure that the data do not have a unit root and are stationary. The alternative hypothesis is that the true delta is less than zero. Since the series are all integrated of order zero [i.e., I(0)], it is more appropriate to employ the unrestricted VAR cointegration rank test method than the Johansen method in evaluating the long-run association between the variables under study.
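As a rough illustration of this workflow (the authors performed the analysis in R; the Python/statsmodels sketch below is only an equivalent outline, and the file name, column names and lag settings are assumptions), the ADF test, VAR estimation and Granger causality test could be run as follows.

# Illustrative Python/statsmodels version of the workflow described above;
# the data file, column names and lag choice are hypothetical.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.api import VAR

# monthly log-changes of COP, NGS, LNG, CAI, GP and the SET return,
# indexed by month (hypothetical file name)
data = pd.read_csv("thai_commodities_monthly.csv", index_col=0, parse_dates=True)

# 1) ADF unit-root test for each series (alternative hypothesis: stationary)
for col in data.columns:
    stat, pvalue, *_ = adfuller(data[col].dropna())
    print(f"{col}: ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")

# 2) Fit a VAR(P); the order is selected by an information criterion
model = VAR(data.dropna())
results = model.fit(maxlags=12, ic="aic")
print(results.summary())

# 3) Granger causality: do the commodity series help predict SET returns?
commodities = [c for c in data.columns if c != "SET"]
gc_test = results.test_causality("SET", commodities, kind="f")
print(gc_test.summary())

Here statsmodels selects the VAR order with the chosen information criterion, which mirrors the unrestricted VAR estimation described above; the causality test can be repeated with SET as the causing variable to examine the reverse direction.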
Consequently, the breakpoint test is employed to analyze the structural break in the stock market returns and in the changes in the natural logarithm of all the other independent variables. The breakpoint test revealed that both SET and COP have a structural break in September 2008, while the break date for changes in NGS was in December 2005. Besides, the break dates for changes in CAI, LNG, and GP were October 2011, March 2015, and July 2011, respectively. All the series demonstrate strong signs of volatility over the sample period, as illustrated in Figure 1. In addition, Table 3 reports the analysis of the effect of fluctuations in commodity prices on the stock market. The data were investigated by using the VAR method in order to examine whether the energy prices, agricultural prices, and gold price have a significant effect on the stock market or not. The VAR estimate suggested that only the LNG had a significant effect on the stock market. The other energy prices, the agricultural price, and the gold price were found to have an insignificant effect on the stock market in the sample period. Since there is no evidence of cointegration, the causal influence is further analyzed using the VAR Granger causality test. Table 4 presents the results of the cointegration rank test. A minimum of 4 cointegrating equations are identified, which seems to indicate that there is a long-run association between the response variable and commodity prices. In addition, the null hypothesis that stock prices do not Granger-cause commodity prices was tested, and the results shown in Table 5 suggest the rejection of the null hypothesis, which implies that there is significant instantaneous causality between the response variable and commodity prices. Particularly, changes in the prices of crude oil, natural gas, the agricultural index, liquefied natural gas, and gold were critical in forecasting the stock market. In the same vein, the stock market was instrumental in anticipating changes in the crude oil price, agricultural index price, and gold price, but not in the natural gas and liquefied natural gas prices, during the period under study. Figure 2 reports the inverse roots of the characteristic polynomial. The estimated VAR is stable (stationary) if all roots have modulus less than one and lie inside the unit circle. If the VAR is not stable, certain results (such as impulse response standard errors) are not valid. The results based on Figure 2 illustrate that no root lies outside the unit circle, which implies that the estimated VAR model in this study is stationary.

CONCLUSION
In this study, the association between the Thai stock market and the international commodity markets was investigated. Empirical evidence in the prior studies mostly focused on the crude oil price, for commodity markets in countries other than Thailand. For a wider representation of the commodity market, this study included the prices of crude oil, natural gas, the commodity agricultural raw materials index, liquefied natural gas, and gold in the existing models, based on historical monthly data from January 2000 to January 2020. The findings revealed that changes in commodity prices did not have a significant influence on the stock market. However, there was evidence of a significant long-run relationship between the commodity markets and the stock market. In addition, a significant causal relationship was found to exist between the stock market and some commodity markets.
These results indicate that historical prices of crude oil were the most significant factor, among the commodities under study, in predicting fluctuations in the stock market. Furthermore, lagged values of the stock market index were not influential on the movements in crude oil, agricultural, and gold prices. However, they were vital in the prediction of movements in the other energy products, namely the natural gas price and the liquefied natural gas price. The results support the previous literature in that the significance of the interrelations among these markets depends on the degree to which the country is an exporter or importer of the commodity and also on how important a role the commodity plays in the portfolios of investors and in the economy of the country.
3,938.2
2020-12-01T00:00:00.000
[ "Economics" ]
The Use of High-Speed Cameras as a Tool for the Characterization of Raindrops in Splash Laboratory Studies: Measuring the characteristics of raindrops is essential for the study of different processes. Many methods have been used throughout history to measure raindrops. In recent years, automatic image recognition and processing systems have been used with high-speed cameras to characterize rainfall by obtaining the spectrum of droplet sizes and their speeds, and thus this technology can be used to calibrate rainfall simulators. In this work, two phases were carried out: in the first one, individual drops of different sizes falling at terminal velocity were measured and processed, both in speed and in shape, with a high-speed camera; in the second phase, a calibration procedure was designed for multidrop images, determining the characteristics of the drops produced by a rain simulator. According to the results, the real shape of each drop was determined as a function of its size, from round to ovaloid shapes, and the terminal velocity of water drops of different sizes was measured. Based on the rain images used to calibrate a rainfall simulator, it was observed that, with a higher intensity of rain, the drops produced were smaller, which contrasts with real rain, in which just the opposite happens. This calibration makes it possible to evaluate how closely rainfall simulators resemble reality, to calculate the real kinetic energy of the rain they produce, and to see whether they can be used to model events in nature.

Introduction
Measuring the characteristics of raindrops is essential for different processes and has recently attracted increased attention in meteorological research worldwide [1]. It is well known that the size of the raindrops is related to the capacity of the rainfall to clean the atmosphere of aerosols [2,3], and it also has an influence on the attenuation of radio waves [4-6], changing the propagating radio frequency. There are other important applications for the determination of drop-size distributions (DSD), some more related to human health, such as the influence on lung cancer, allergies or other problems [7], and others more related to the health of the planet, such as the influence on herbicide application [8,9] or the analysis of splash erosion effects [10,11]. Indeed, this last topic is receiving a lot of scientific attention due to the great importance of water as a main erosive agent for the soil [12]. Splash erosion occurs at the initial stages of soil water erosion [13-16]. The first stage of water erosion is the splash phenomenon, when raindrops falling on the soil surface cause the loosening and ejection of soil particles, which are displaced over different distances [17,18]. Splash erosion is directly related to the breakdown of soil aggregates [8,9,19-21], the enhancement of aggregate dispersion and transport [22-26], and surface crusting [27], resulting in changes to soil infiltration parameters [28]. It should be noted that the effects of splash also contribute to the transportation of microorganisms [29,30] and pollutants [31], along with the ejected particles. The magnitude of splash effects, including the displaced mass [32,33] and the distances over which particles are ejected [34-36], is determined by the different drop-size distributions, which control the kinetic energy of the impacts of raindrops on soil [20,37].
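Since the drop-size distribution enters splash studies mainly through the kinetic energy it carries, it may help to make the size-speed-energy link explicit. The following minimal Python sketch is our own illustration, not part of the original study; it assumes spherical drops, and the function names are invented for clarity.

import numpy as np

RHO_WATER = 1000.0  # density of water, kg/m^3

def drop_kinetic_energy(diameter_mm, velocity_ms):
    """Kinetic energy (J) of a single spherical drop of the given
    diameter (mm) falling at the given speed (m/s)."""
    d = diameter_mm / 1000.0                      # mm -> m
    mass = RHO_WATER * (np.pi / 6.0) * d**3       # sphere volume times density
    return 0.5 * mass * velocity_ms**2

def rainfall_kinetic_energy(diameters_mm, velocities_ms):
    """Total kinetic energy (J) of a measured drop population, e.g. all
    drops recorded over one minute in the sampling area."""
    return float(sum(drop_kinetic_energy(d, v)
                     for d, v in zip(diameters_mm, velocities_ms)))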
The rainfall characteristics such as rainfall kinetic energy [38-40], intensity [41] and raindrop diameter [21,42,43] have been assessed. In fact, the determination of the size of the drops has been one of the parameters that has attracted the most attention from researchers, due to the errors that it can introduce into the estimates of the other characteristics [20]. There have been many methods used throughout history to measure raindrops: from the most precarious, such as using containers with flour exposed to the rain for a short and defined time [44], in which the drops fell and became small pellets of dry flour from which the size of the drops could be determined, to the use of absorbent papers or covered fabrics [45] with soluble paints, on which the drops fell, so that their sizes could be measured from the spots left on the paper. This last system can also be arranged with a dry, invisible ink that reacts on contact with water, leaving with each drop a circular spot related to its size that can be measured later. To calculate the size of the drop, the following formula is used: D = a × S^b, with D the diameter of the drop, S the diameter of the stained spot, and a and b constants that are established by calibrating the paper used in the laboratory [45]. More recently, more complex methodologies have been developed, such as using a wind tunnel to measure the terminal velocity of the drops and observe their behavior during the fall [46], or the use of disdrometers, which are specialized instruments for measuring drops. Two types of disdrometers exist: optical disdrometers [47,48], which measure droplet sizes by measuring the interruptions produced by water droplets in a wave emission, and impact disdrometers [49,50], which consist of a sensor that transforms the momentum associated with the impact of the drop into an electrical pulse, whose amplitude is a function of the diameter of the drop. Today, the most widely used are optical disdrometers [47,51], consisting of a transmitter and a laser receiver, which are exposed to the air. When it rains, the drops pass through the laser beam, and the instrument registers with each drop a decrease in the power received from the transmitter, which is proportional to the size of the drop; the use of different wavelengths allows the distribution of drop sizes to be evaluated, as well as their speed and shape. Besides that, these instruments allow the kinetic energy of the drops to be determined, one of the most relevant parameters in water erosion studies. However, in addition to the instruments named so far, in the age of the image, we cannot forget the possibilities of integrating photography with automatic image recognition and processing systems, which are ways to improve the determination of drop sizes and velocities. In fact, in recent years, some work has been carried out with high-speed cameras to characterize rainfall by obtaining the spectrum of droplet sizes and their speeds, and thus this technology can be used to calibrate rainfall simulators. This method has new and high potential, such as the evaluation of the influence of the wind on the formation of the drops, the detection of animal pollen or other objects in the sampling area, or the determination of the influence of the oscillations of any drop.
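As a toy illustration of what such automatic image processing can look like (the present work uses ImageJ and MATLAB; the Python/scikit-image sketch below is only a rough stand-in, and the scale constant and noise cutoff are placeholders), a single back-lit frame can be thresholded and each dark drop shadow measured as follows.

import numpy as np
from skimage import io, filters, measure

PIXELS_PER_MM = 3.06102  # placeholder scale obtained from a ruler calibration

def measure_drop_shadows(image_path, min_area_px=5):
    """Detect dark drop shadows in one back-lit frame and return the
    circle-equivalent diameter (in mm) of every detected drop."""
    frame = io.imread(image_path, as_gray=True)

    # drops appear as dark shadows on a bright background
    shadows = frame < filters.threshold_otsu(frame)

    diameters_mm = []
    for region in measure.regionprops(measure.label(shadows)):
        if region.area < min_area_px:        # ignore single-pixel noise
            continue
        diameter_px = 2.0 * np.sqrt(region.area / np.pi)  # circle-equivalent diameter
        diameters_mm.append(diameter_px / PIXELS_PER_MM)
    return diameters_mm

Real measurements are considerably harder than this sketch suggests, for the reasons discussed next.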
However, it also implies great difficulties when using this method for the determination of speed and size, such as the need to carry out preliminary calibrations of the devices, because they are not specific for this purpose; the requirement for robust computer programs that allow the treatment of the photos; the need for correct lighting to avoid drops that are not detected and shadows of the raindrops; and, of course, the need to control the background by using different focal planes. If these errors are not taken into account, there could be different depths in the image, leading to distant drops that are confused with smaller drops, or nearby drops that appear larger than they really are. Therefore, it is necessary to calibrate, which means working with patterns that must be made from drops of known sizes released from known heights, in order to determine both the speed they reach and what shape they have and how they are detected by the camera. Speed calibration has not been carried out up to now, due to its difficulty, and is usually solved by applying the theoretical model of Gunn and Kinzer [52], which only serves for work in very controlled situations, without the influence of wind currents. Therefore, the theoretical exponential and gamma relations of the Marshall and Palmer droplet size distributions [53] and the Gunn and Kinzer terminal velocity model [52] cannot represent the situations of all the natural precipitation events that take place, much less those of existing rainfall simulators. This is why it is necessary to establish a methodology that defines the complete calibration procedure for the use of a high-speed camera to determine the sizes and velocities of water droplets. To be able to carry this out, in this work the tasks were divided into two phases: in the first phase, individual drops of different sizes that reached terminal velocity were produced and were measured and processed, both in speed and in shape, with a high-speed camera. In the second phase, the knowledge gained in the first part was applied to design a calibration procedure for multidrop images, determining the characteristics of the drops produced by a rainfall simulator (the spectra of the number of drops per minute for the different sizes, at different simulated rainfall intensities) through the systematic taking of photographs and the use of software for their characterization.

Materials
For the initial experiments, two high-speed cameras were used: a Smart High-Speed Camera ProcImage250 (monochrome version) and a FASTCAM-APX RS model 250K that allowed taking pictures under water. The Smart High-Speed Camera has a resolution of 640 × 480 pixels, 252 frames per second (fps) at full resolution and up to 54,000 fps, a ROI configurable in size and position, and an I/O connector (trigger input, sync input/output, strobe output). The camera was controlled by the EyeMotion software. The FASTCAM-APX RS model 250K camera has the following specifications: global electronic shutter up to 2 µs; camera control interface: High-Speed Gigabit Ethernet; National Instruments DAQ support; Photron FASTCAM Viewer: simple and easy-to-control software for the Ultima camera; Photron FASTCAM Analysis (PFA): entry-level analysis software for displacement, velocity, and acceleration measurements. The camera configuration used was: definition 256 × 256, 563.4 fps, exposure 40.1 µs. After taking the average of the thirty measurements taken, the following equivalence was achieved: 10 mm = 30.6102 pixels.
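Using the two numbers just quoted (the 563.4 fps frame rate and the ruler-based equivalence of 30.6102 pixels per 10 mm), the conversion from tracked pixel positions to fall velocity reduces to a few lines. The helper below is our own minimal Python sketch; the function name and the example positions are invented for illustration.

MM_PER_PIXEL = 10.0 / 30.6102   # ruler calibration reported above
FRAME_RATE = 563.4              # frames per second used with the FASTCAM-APX RS

def fall_velocity(y_positions_px, frame_rate=FRAME_RATE, mm_per_pixel=MM_PER_PIXEL):
    """Mean fall velocity (m/s) of one drop tracked over consecutive frames.

    y_positions_px: vertical position of the drop centroid (in pixels),
    one value per frame, with y increasing downwards.
    """
    n_intervals = len(y_positions_px) - 1
    if n_intervals < 1:
        raise ValueError("need at least two frames to estimate a velocity")
    displacement_mm = (y_positions_px[-1] - y_positions_px[0]) * mm_per_pixel
    elapsed_s = n_intervals / frame_rate
    return (displacement_mm / 1000.0) / elapsed_s

# example: a drop moving 120 px downwards over 10 frame intervals
# corresponds to roughly 2.2 m/s
print(round(fall_velocity([100 + 12 * i for i in range(11)]), 2))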
A metal tripod was used to hold the cameras, with which the appropriate height for the camera was adjusted. In addition, a photography spotlight with a continuous light source was used, of a type widely used when performing studies like this: an 800 W tungsten lamp, 3200 K color temperature, with adjustable visors to direct the light (16 × 17 cm). A screen was added to the spotlight to achieve a homogeneous light background and facilitate the measurements.

For the treatment of the images with few drops (individual, or with a maximum of 5 drops), the free program ImageJ was used, which is a digital image processing program, programmed in Java and developed at the National Institutes of Health [54]. It was designed with an open architecture that provides extensibility via plugins and macros, which allow various image processing and analysis problems to be solved and custom image scans to be developed, among other things. For the treatment of the multidrop images, the MATLAB program was used to process and treat different files such as images; for this reason, it was decided to use this program to carry out the analysis of the images collected by the camera. MATLAB is able to modify images pixel by pixel, which allows the process to be automated and thus thousands of images to be treated in a short time.

First Phase: Calibration and Study of Individual Drops
Work began with the Smart High-Speed Camera.
It is important to note that in the videos, the shadows of the drops generated by the light source are the most clearly reflected marks; the optimal placement of the screen is therefore important to capture these shadows, since they are easier to analyze. The first step is thus the calibration of the system: drops of known sizes are produced and allowed to fall through the sample area to evaluate whether the size reflected in the image is adequate. Distance adjustment is very important for this reason, and installing a ruler at the same distance at which the drops were produced made it possible to focus directly on the drops in that plane while comparing their sizes with the ruler as a reference pattern. To convert from pixels to mm, the ImageJ program was used to obtain the conversion factor with which the different drop sizes were measured. A known measurement was selected (one cm on the ruler), and that actual size was matched to the number of pixels it covered in the photograph. To make this value more accurate, the measurement was repeated thirty times for each size and averaged.

At first, drops were generated manually using several pipettes. The camera recorded while individual drops were released in a controlled way, so that the first drops could be studied separately, which is important when starting with their automatic detection. Different tests were carried out with the pipette, increasing the number of recorded drops and modifying the (manual) drop generation rate. Tests were also made with an increased number of frames per second, since shape changes and vibrations were detected in the largest drops (Figure 1).

Subsequently, in order to produce several drops of different sizes at the same time, the drop formation system was modified. Brushes of 5 different sizes began to be used as a means of generating drops with different diameters, to observe how the shape of the drop changed during the fall (Figure 2). The brushes were immersed in water and left to drip, and 10 of those drops per brush were collected on a plate to calculate the average mass of the drops produced with each brush. Once the different sizes had been checked, the brushes were hung on a rope and placed in water, and the recording began, with the camera and the light set on both sides to record the drops (Figure 3). Although drops produced with this method took a long time to precipitate, since they did not fall until enough water had accumulated on the tip of the brush, it was possible to measure drops of different sizes falling simultaneously.

Once the controlled production of drops of different sizes in a chain had been achieved, it had to be ensured that these drops could reach terminal velocity. To this end, the drops had to be produced at a height of more than 10 m above the camera.
For this reason, the brush system was placed at a height of 12 m, with the camera in the lower part (Figure 3). In this way, it was possible to measure the time (in number of frames) a drop took to travel the field of the photograph, from which the characteristic terminal velocity of each drop size produced was calculated. Terminal velocity is reached when the forces acting on the drop are in balance: the Stokes friction resistance, the force of gravity, and the buoyancy force.

As the measurements are made on the projected shadows of the generated drops, which have rounded shapes, the figure detection option of the EyeMotion program was selected and the detection of color changes was used; in this specific case, black was detected (Figure 4). When the program was ready, the drops were detected automatically, either over the entire video or over a selected range of frames.

The use of the automatic markers generated by the program can cause errors: when the drop to which a marker is associated disappears from the screen because it leaves the frame during its fall, the program automatically searches for another drop, and the new drop to which the marker becomes associated is not recognized as a new individual drop. To avoid these problems, this process should always be reviewed manually. In addition, there may be noise in the images, such as backgrounds that are not perfectly smooth, drops that also undergo horizontal displacement, or even insects; it is therefore necessary to perform a manual follow-up, adding the markers individually to each drop in each of the frames in which it appears. Otherwise, the marker jumps between the drop and the noisy area, and the resulting measurements are wrong. Each tracking marker was assigned a number, making it unique for each drop, so that its acceleration and velocity graphs could be produced; these data were exported to an Excel file containing the time, the position on the x and y axes, and the displacement velocity and acceleration along these axes.
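A minimal sketch of how such exported tracking data translate into fall velocities, assuming the 563.4 fps configuration and the ruler scale given earlier; the function name and the example numbers are hypothetical:

```python
FPS = 563.4                    # frame rate reported for the FASTCAM-APX RS
MM_PER_PX = 10.0 / 30.6102     # scale from the ruler calibration

def fall_velocity_ms(y_start_px: float, y_end_px: float, n_frames: int) -> float:
    """Mean vertical velocity (m/s) of a drop whose shadow moved from
    y_start_px to y_end_px over n_frames consecutive frames."""
    dy_mm = abs(y_end_px - y_start_px) * MM_PER_PX   # distance in mm
    dt_s = n_frames / FPS                            # elapsed time in s
    return (dy_mm / 1000.0) / dt_s

# Illustrative: a drop crossing 460 px of image height in 25 frames.
print(f"{fall_velocity_ms(10, 470, 25):.2f} m/s")
```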
Second Phase: Calibration of Rain Events Formed by Sets of Drops

Once the characteristic sizes and speeds of the drops were known, an analysis of several drops at the same time could be carried out. For this experiment, frames taken in a large rainfall simulator, "The Wageningen Rainfall Simulator" [55], located in the Netherlands, were used. Tests were carried out in it under various rain intensities: 30, 60, 90 and 125 mm/h. These trials were conducted in 2017 and had not been studied until now. In this case, it is important to distinguish events according to the different rainfall intensities, since they involve different numbers of drops. The photographs were therefore analyzed by dividing them into the 4 intensities, counting the numbers of raindrops of the different sizes, so that a spectrum of the number of drops per minute for each size could be produced.

Rainfall Simulator

The rain simulator used is part of the Kraijenhoff van de Leur laboratory for water and sediment dynamics at Wageningen University in the Netherlands. The simulator (Figure 5) is 6 m long and 2.5 m wide, with a height of 2.8 m. The sides of the simulator are covered by a plastic curtain to prevent the surroundings from getting wet. In addition, the simulator plot can be tilted up to 15.5° with a hydraulic lift. There are four nozzles, as described in detail by Lassu [55].
Recording of the Drops

To carry out the experiments, the high-speed camera FASTCAM-APX RS model 250K was used, with which recordings were made at two points of the rain simulator, below nozzles A and B. For each of the rainfall intensities at which the experiment was carried out, there are therefore recordings of the drops at both points, so the results could also be compared between different areas of the simulator. For each rain intensity, one camera calibration was carried out using a ruler, as in the first part of the work; in this way, the different sizes of the drops could be determined, and the focus of the camera could be adjusted to better distinguish, and thus study, the drops that were at the same distance as the ruler.

Analysis of Images

An important procedure in this phase of the work was to adjust the contrast of the images: when they were compiled, they were compressed so that they appeared almost black, and a MATLAB routine had to be applied that adjusts the contrast (saturating the lowest 1% and the highest 1% of all pixel values), thereby increasing the contrast of the output image (Figure 6).
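This contrast adjustment can be reproduced outside MATLAB; the sketch below implements the same idea (saturating the darkest and brightest 1% of pixel values) in Python, with random data standing in for a real frame:

```python
import numpy as np

def stretch_contrast(img: np.ndarray, low_pct: float = 1.0, high_pct: float = 99.0) -> np.ndarray:
    """Linearly rescale a grayscale frame so that the darkest 1% and the
    brightest 1% of pixel values saturate, increasing the contrast of the
    output image."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

# An almost-black frame, standing in for one of the compressed recordings:
frame = (np.random.rand(256, 256) * 20.0).astype(np.uint8)
enhanced = stretch_contrast(frame)
```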
The first aspect to take into account was that these were no longer individual drops; different drops were falling at the same time in a volume that included different distances from the camera and therefore different depths of field. The selection of the drops and of the sample space was based on the sharpness and color of the drops in the image. In this way, an algorithm could be defined that selected only the darkest and most sharply defined drops, which were those occupying the sample space. The images were scanned with MATLAB, and an Excel file was created with the following variables: the position of the drop on the vertical and horizontal axes, its major and minor axes, and its area. The areas of the drops in the images were compared with previously chosen size classes, and finally, the number of drops of each size was counted. Although large drops deform during their fall and only the smallest retain their spherical shape [20], the vertical and horizontal measurements of the drops allowed an approximate volume to be assigned to each drop located in the photo.

The droplet size ranges were decided upon after considering that in natural rain it is very rare to see drops larger than 5 mm in diameter. Following the studies of other researchers [11,17,56], very large droplets are those that exceed 5.1 mm in diameter; large drops are those between 3.6 and 5.1 mm; medium drops are those between 1.7 and 3.6 mm; small drops are those between 0.85 and 1.7 mm; and very small drops are those that do not reach 0.85 mm (Table 1). Furthermore, it was necessary to make a selection of frames when counting the number of drops, since the same drop appears at different positions in a number of consecutive photos that depends on its terminal velocity. For this reason, a specific study was conducted on the number of frames that each drop travels through according to its size (Table 1); a counting sketch based on these size classes is given below.
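The following sketch shows one way the counting could be organized around the Table 1 classes; the volume-equivalent diameter formula and the frames-in-view bookkeeping are assumptions of this illustration, not the authors' code:

```python
from collections import Counter

# Size classes from Table 1 (equivalent diameter in mm).
CLASSES = [("very small", 0.0, 0.85), ("small", 0.85, 1.7),
           ("medium", 1.7, 3.6), ("large", 3.6, 5.1),
           ("very large", 5.1, float("inf"))]

def equivalent_diameter_mm(major_mm: float, minor_mm: float) -> float:
    """Volume-equivalent sphere diameter for a flattened (oblate) drop,
    assuming axes (major, major, minor); an assumption for this sketch."""
    return (major_mm ** 2 * minor_mm) ** (1.0 / 3.0)

def classify(d_mm: float) -> str:
    for name, lo, hi in CLASSES:
        if lo <= d_mm < hi:
            return name
    return "very large"

def drops_per_minute(diameters_mm, frames_in_view, n_frames, fps):
    """Drops per minute per size class. Raw detections are divided by the
    number of frames a drop of that class stays in view (Table 1), so the
    same drop is not counted once per frame."""
    counts = Counter()
    for d in diameters_mm:
        counts[classify(d)] += 1.0
    minutes = n_frames / fps / 60.0
    return {cls: counts[cls] / frames_in_view[cls] / minutes for cls in counts}
```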
Measuring Accelerating Drops

With the markers correctly added to the drops, a table was obtained with the values needed to analyze the position, velocity, acceleration, and trajectory of the drops. The most relevant variables for this study are the time, the position on the y-axis, and the velocity and acceleration along that axis. In the first moments the speed increases, approaching the stable value that corresponds to the terminal velocity. Because the photographs were taken of drops that did not have enough falling distance to reach their terminal speeds, these drops were still in their acceleration phase, with the speed still increasing over time. The last variable is the acceleration, d²y/dt², expressed in pixels per millisecond squared. It is difficult to know the exact acceleration of the drops with this method, as there can be small margins of error due to the precision limits of the markers.

Another relevant observation of this experiment was the deformation of the drops during their fall. During the acceleration phase, the drops deform as they fall, owing to the competition between the force exerted by the air on the drop and the cohesive forces holding the drop together. If a drop is too large, it ends up breaking because of the air (Figure 7). As a general rule, drops larger than 5 mm in diameter are very difficult to see in nature. This phenomenon could also be seen in the tests carried out with the brushes. The deformation of the drops can be seen in Figure 1: a drop does not maintain the same shape during its fall but undergoes oscillations, and it breaks into several drops when it exceeds 5 mm.
If drops are released from a height greater than 10 m, they can be produced in large sizes; if the intention is to launch them by gravity from lower heights, it is recommended that the drops be generated using very fine brushes (less than 1 mm), so that drops of different sizes, but similar in size to natural rain, are available, since the drops generated are always larger than the tips of the brushes.

Terminal Velocity

For drops falling at terminal velocity, the velocities were analyzed according to drop size, resulting in three different speeds for the three sizes of drops that were generated: 2, 3 and 4 mm. The drops generated by the 2 mm tube all had similar speeds of 15-16 px/ms (4.90 and 5.22 m/s); the 3 mm drops had speeds of 18-19 px/ms (between 5.88 and 6.20 m/s); and the 4 mm drops registered speeds of 21-22 px/ms (6.87 and 7.18 m/s). In this experiment, it was also possible to observe the behavior of the drops with respect to their shape: the largest drops flattened, acquiring an "ovaloid" shape (the drop is not round but appears flattened at its upper and lower ends).
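These px/ms values convert directly to m/s through the ruler scale (3.06102 px per mm), since 1 mm/ms equals 1 m/s; for example:

```latex
v\,[\mathrm{m/s}] = \frac{v\,[\mathrm{px/ms}]}{3.06102\ \mathrm{px/mm}},
\qquad
\frac{15\ \mathrm{px/ms}}{3.06102\ \mathrm{px/mm}} \approx 4.90\ \mathrm{m/s},
```

and the remaining values quoted above follow in the same way.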
Calibration of Simulator Based on the Analysis of Pictures

In the first place, an attempt was made to measure the intensity of the rain through the Christiansen coefficient of uniformity [58], which indicated a precipitation uniformity of 71-82%. However, this method only indicates the amount of water collected per unit time, without specifying the sizes of the raindrops or their velocities. The calibration was therefore complemented with the high-speed camera study, taking photos both under nozzle A and under nozzle B (Figure 5). Subsequently, the data from the two nozzles were integrated in order to infer the distribution of droplet sizes throughout the entire area of the rain simulator. In addition, this study was repeated with the four intensities (30, 60, 90 and 125 mm/h).

From the analysis of the drop-size distributions taken at both nozzles at the different intensities, it was observed that in all cases nozzle A presents a greater number of drops. Furthermore, an increase in the intensity of rain entails a direct increase in the number of registered drops. Likewise, with an intensity of 30 mm/h, a higher number of medium and large drops was detected at nozzle A than of the other sizes, with the majority concentrated around 4 mm in diameter (Figure 8). The integration of the two nozzles shows that the mean drop size is lower when the intensity of the rainfall simulator increases (Figure 9). The number of drops registered at nozzle B is considerably smaller than at nozzle A, which is due to the fact that nozzle B is located closer to the pump; there is therefore less probability of blockages in the network due to impurities and also a lower pressure loss.

With an intensity of 60 mm/h, there is a considerable increase in the number of small drops at nozzle A, especially drops with diameters of 1 mm, although there are still more medium drops (Figure 5). In the case of nozzle B, a general decrease in the number of drops is once again seen, especially of the small ones. Once the data from nozzles A and B at 60 mm/h are added together, far more drops are found than in the measurements at 30 mm/h, with a quite considerable increase in the drops of 1 mm in diameter.

With an intensity of 90 mm/h, there are many more drops of 1 mm diameter at nozzle A than in the previous measurements, while at nozzle B the number of medium drops is still much higher. When added together, two peaks are clearly visible: one at 1 mm and the other at 4 mm. In the recordings of nozzle A in the 125 mm/h experiment, a clear increase in the number of drops of all sizes is seen, particularly noticeable in the medium and small ones. Nozzle B recorded a very different drop distribution at this intensity compared to the others, with a very high number of small drops and a completely changed spectrum of the number of drops per size. In the sum of the numbers of drops registered by the two nozzles at the rain intensity of 125 mm/h, the number of drops is much greater than at the other intensities, especially the small 1 mm drops, which undergo very large growth.

In the graphs, we can see that as the intensity of rain increases, there is a greater number of drops per minute, especially drops with sizes close to 1 mm in diameter. This may be because, in the simulator, the drops are produced with a pump, and increasing the pressure to produce greater intensity produces smaller drops. It is very striking to observe the great difference between the drop-size distribution (DSD) of the simulated rainfall and the natural rain spectra obtained by Fernández-Raga [59] (Figure 10), which adopt an exponential or gamma form [47,56,60]. For sizes smaller than 1 mm in diameter the number of drops is higher in natural rainfall, but for drops bigger than 2 mm in diameter the numbers are much more comparable.
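For reference, the exponential benchmark against which such spectra are usually compared is the Marshall and Palmer distribution [53]; the small sketch below evaluates it at the four simulated intensities. The parameters N0 = 8000 m⁻³ mm⁻¹ and Λ = 4.1 R⁻⁰·²¹ mm⁻¹ are from the original formulation, not from this paper's data:

```python
import numpy as np

def marshall_palmer(d_mm, rain_mm_h):
    """Marshall-Palmer exponential DSD, N(D) = N0 * exp(-lambda * D),
    with N0 = 8000 m^-3 mm^-1 and lambda = 4.1 * R**-0.21 mm^-1 (R in mm/h).
    Returns concentrations per unit diameter (m^-3 mm^-1)."""
    n0 = 8000.0
    lam = 4.1 * rain_mm_h ** (-0.21)
    return n0 * np.exp(-lam * np.asarray(d_mm, dtype=float))

diameters = np.array([0.5, 1.0, 2.0, 4.0])   # mm
for rate in (30, 60, 90, 125):               # the simulated intensities
    print(f"{rate:>3} mm/h:", marshall_palmer(diameters, rate).round(0))
```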
This means that the simulator can be used to evaluate the erodibility of soils, even though in erosion processes not all droplet sizes are equally important: droplet sizes below 1 mm hardly have an impact [11,17], as they are able to pull up very few aggregates from natural terrain, because their terminal velocity is also related to their size, resulting in a very small kinetic energy [61]. This kinetic energy is what is truly responsible for the splashed particles (it is the total energy transferred to the particles in order to eject and displace them). In any case, the form of the DSD is very different under natural and simulated rainfall, and consequently, the kinetic energy associated with natural and simulated events also differs. According to the available literature, the energy of the ejected particles relative to that of the falling drops lies between 0.2% and 45%, as demonstrated by diverse authors using various techniques and experimental conditions. It should be noted, however, that previous research in this field has been based primarily on experiments in which the splashed material was treated only as water droplets, with the impacting drop hitting the surface of a liquid of different thicknesses [62-64] or solid-phase particles, mostly grains of sand [34].
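A short sketch of the kinetic energy bookkeeping implied here, combining the size/terminal-velocity pairs measured in the first phase (the per-drop energies printed are illustrative outputs, not values reported in the paper):

```python
import math

RHO_WATER = 1000.0  # kg/m^3

def drop_kinetic_energy_j(d_mm: float, v_ms: float) -> float:
    """Kinetic energy (J) of a single spherical drop of diameter d_mm
    falling at v_ms: KE = (1/2) * rho * (pi/6) * d^3 * v^2."""
    volume_m3 = math.pi / 6.0 * (d_mm / 1000.0) ** 3
    mass_kg = RHO_WATER * volume_m3
    return 0.5 * mass_kg * v_ms ** 2

# Size and terminal velocity pairs from the first phase of this work:
for d, v in [(2, 5.0), (3, 6.0), (4, 7.0)]:
    print(f"{d} mm at {v} m/s -> {drop_kinetic_energy_j(d, v) * 1e6:.0f} microjoules")
```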
Conclusions

A study was conducted on how water drops deform during their fall. It was seen that the drops do not keep the same shape along the entire fall path: only the smaller drops, once they have sufficient speed, present a round shape, while the larger ones are flattened into an "ovaloid" shape. An experiment was successfully carried out in which the terminal velocity of water drops of different, previously established sizes, falling from more than 6 m in height, was measured, finding that free-falling drops of 2 mm in diameter have a speed of 5 m/s, drops of 3 mm a speed of 6 m/s, and drops of 4 mm a speed of 7 m/s.

Regarding the second phase, rain images taken in a rain simulator in the Netherlands were used, and it was observed that, with a greater intensity of rain, there were more drops and the peak of the distribution moved toward the smaller sizes. This is because, with simulated rain, increasing the intensity leads to a higher nozzle pressure, so the drops produced are smaller, in contrast with real rain, in which just the opposite happens. This is why it is necessary to calibrate rainfall simulators: to evaluate their resemblance to reality, calculate the real kinetic energy of the rain they produce, and see whether they can be used to model events in nature.

In addition, several aspects must be taken into account when selecting the drops at the indicated distance so that they can be studied within a controlled volume, allowing values per unit area to be given for the images, since the camera records drops that pass too close to or too far from the camera's calibration point, making these droplets appear larger or smaller than they really are. This was solved by eliminating those droplets that appear out of focus. Finally, the use of the high-speed camera makes this type of study more convenient and cheaper, while achieving results that are in line with reality.
Recharge–Discharge Relations of Groundwater in Volcanic Terrain of Semi-Humid Tropical Highlands of Ethiopia: The Case of Infranz Springs, in the Upper Blue Nile

The major springs in the Infranz catchment are a significant source of water for Bahir Dar City and nearby villages, while sustaining the Infranz River and the downstream wetlands. The aim of the research was to understand the hydrogeological conditions of these high-discharge springs and the recharge–discharge relations in the Infranz catchment. The Infranz catchment is covered by highly pervious and young Quaternary volcanic rocks, consisting of blocky, fractured, and strongly vesicular scoriaceous basalt. At the surface, these rocks crop out as lineaments forming ridges, delimiting closed depressions in which water accumulates during the rainy season without causing surface runoff. Geology and geomorphology thus combine to produce very favorable conditions for groundwater recharge. Three groundwater recharge methods were applied to estimate groundwater recharge, and the results were compared. Groundwater recharge was calculated to be 30% to 51% of rainfall. Rapid replenishment raises the groundwater level during the rainfall period, followed by a rapid decline during the dry season. Shallow local flow paths discharge at seasonal springs and streams, while more regional and deeper flow systems downstream sustain the high-discharge springs and the perennial Infranz River. The uptake of 75% of the spring water for the water supply of Bahir Dar City, local extraction for domestic and small-scale irrigation use from springs, rivers, and hand-dug wells, encroaching farming, and overgrazing are exacerbating wetland degradation.

Introduction

Groundwater is the major source of fresh water across the globe, on which human and ecological communities depend [1]. Groundwater can become critical to survival when surface water resources run dry during a severe drought. The dynamics of groundwater, mainly its recharge-discharge conditions, are essential for the sustainable use of groundwater, which affects groundwater-dependent streams, wetlands, and ecosystems [2]. The recharge-discharge mechanism of groundwater differs greatly from place to place and depends on different variables. Groundwater occurrence and dynamics in Ethiopia, and specifically in the Tana basin, are complex, which makes it challenging to assess the hydrological system of the catchment. The demand to quantify groundwater replenishment of the aquifer system is increasing and has become crucial for water resource management. In this study, three recharge estimation methods, chloride mass balance (CMB), soil moisture balance (SMB), and water table fluctuation (WTF), have been used to estimate the groundwater recharge of the Infranz catchment. The purpose of this study was to assess the groundwater recharge-discharge conditions in volcanic rock formations, with particular emphasis on young Quaternary highly vesicular basalt, and to understand the hydrogeological settings of the major springs of the Infranz catchment. Finally, the development of a conceptual groundwater flow model was also an important objective.

Location and Climate of the Study Area

The Infranz catchment is part of the Lake Tana basin in north-western Ethiopia, Amhara region. The catchment is situated to the south of Lake Tana, which is known as the source of the Blue Nile. The climate of the Lake Tana basin is categorized as temperate, although some parts are sub-tropical to sub-afro-alpine.
The rainfall season mainly extends from June to September. The annual average precipitation of the Bahir Dar area is estimated at 1415 mm. The air temperature varies from place to place, with an annual average of about 20 °C [27]. At Bahir Dar Station (the nearest meteorological station to the study area), the mean daily maximum temperature is about 27 °C, while the daily minimum is 13 °C.

Geology and Hydrogeology of the Study Area

The Lake Tana basin, in which the study area is situated, comprises three types of geologic units: (1) Trap Series (Oligocene to Miocene) volcanic flows and edifices, mainly found in the highlands and escarpments; this unit encompasses the Ashange, Aiba, Alaje, and Tarmaber basalts; (2) Quaternary basalt, locally found in the lowlands of the basin (old Quaternary and recent Quaternary), and occurring in particular to the south of the lake, including the Infranz catchment; and (3) Quaternary sediments comprising alluvial lake deposits found in the floodplains of the major rivers and at the shore of Lake Tana [25,28-31].

The geology of the Infranz catchment was studied in detail by field mapping during several campaigns between 2013 and 2018. The geology was linked to the specific landscape of the catchment, as observed in the field (documented by photographs) and on Google Earth, and a geological cross-section was established. The recent Quaternary lava flows form vesicular and highly pervious basalts, which gives them a high hydraulic conductivity; this can be increased further at the surface by slight weathering, as observed in the field. This hydrogeologic unit comprises two types of porosity: primary porosity, related to the vesicles, and secondary porosity, related to fractures and fissures. Large and small volcanic tubes are common in this formation, contributing relatively high transmissivity and storage capacity. Generally, the transmissivity of the Quaternary basalts is high, varying from 100 to 200 m²/day [32]. Most of the boreholes drilled in the recent Quaternary basalt unit yield more than 20 L/s [19].

A group of important springs emerges from this recent Quaternary basalt in the Infranz catchment (Figure 1). The discharges of the Areke and Lomi springs are estimated by [13] at 210 L/s and 90 L/s, respectively. The Tikur Wuha spring discharges more than 40 L/s [19]; the Infranz twin springs have a discharge of more than 60 L/s [19]. In addition to these high-discharge springs, there are other low-discharge and intermittent springs emerging in the same formation. The occurrence and appearance of the springs are assumed to be linked to the contact with the underlying old Quaternary basalt and the less pervious Tertiary rhyolitic formations that are mapped in the spring area [24]. The Infranz wetland is found immediately downstream from the springs and stretches to the river mouth at Lake Tana. This wetland is formed by groundwater seepage and the outflow of water from the springs before joining the Infranz River, and it is sustained by the river discharge. However, this wetland is becoming degraded and is drying up, as most of the spring water is being taken away, leading to encroaching farming and overgrazing. Since groundwater recharge occurs over the entire catchment, the water chemistry may be influenced by activities in the whole catchment area.
The concentration of specific constituents of the water chemistry is also influenced by the length of the flow path, or the residence time in contact with the various rock types, along with anthropogenic activities. The geological setting of the Infranz catchment was assessed based on drilling descriptions and field observations, and this information was included in the geological map. The hydrogeological conditions of the springs and wetlands were deduced on this basis and are presented in a cross-section showing the conceptual model of groundwater flow in the catchment.

Data Collection

To understand the hydrologic situation and the sources of water for the wetland, different field campaigns were conducted in and around the Infranz catchment. The field trips were performed in 2013 and from 2015 to 2018 in all seasons: the dry season (February to May), the slightly wet season (September to January), and the wet season (June to August). During the fieldwork, spring inventories were set up in accessible areas. Monitoring wells were selected based on accessibility, geology, and physiography. Key informant interviews were undertaken with local people, questioning them about the wetland, stream, and spring conditions, how the wetland conditions are changing with time, and whether they have observed informal encroaching farming affecting the wetland system. In the different field campaigns, we explored the intermittent streams upstream and the perennial streams downstream with reference to the emergence points of the high-discharge springs (Areke, Lomi, Tikur Wuha, and Infranz) and the change of spring discharge with time. The first author was part of the group involved in a wetland restoration study conducted in 2012. During repeated fieldwork, hand-dug wells were dug to study the interaction of the wetlands and the shallow groundwater. On the basis of field observations of geology and geomorphology, informant interviews, existing geological and hydrogeological maps, and the interpretation of Google Earth satellite images, we investigated the hydrological system of the catchment. The riverine system of the Infranz catchment was investigated, and the sources of water in the river system and wetlands were studied. Based on these field observations, the geological map of the area was compiled as a prerequisite for understanding groundwater recharge under the specific conditions of the Infranz catchment, which contrast with the regional conditions of the Tana basin.

Meteorological data from 2012-2016 were collected from Bahir Dar Meteorological Station, a first-order meteorological station providing minimum and maximum temperature, wind speed, relative humidity, and sunshine hours. These data were used to estimate groundwater recharge with the three methods. Groundwater samples in the catchment and rainwater samples (in and around the catchment) were collected in different years (2015-2016) to determine the chloride concentration. Water samples for chemical analysis were collected in clean polyethene bottles, which were rinsed three times with the sample water before being filled. The analysis of the water samples was carried out at the water chemistry laboratory of the Laboratory for Applied Geology and Hydrogeology, Ghent University, Belgium. The limit of detection for the chloride concentration was 0.1 mg/L. Before using the water chemistry data, we tested the ionic balance of the major ions; the error on the ionic balance is less than 5%.
Groundwater Flow System and Groundwater Level Monitoring

Three groundwater flow systems can be identified in a small basin: local, intermediate, and regional [33]. The groundwater system of a basin is mainly influenced by soil type, hydrology, hydrogeological conditions, and topography in the recharge areas (upland areas) and in the discharge areas, such as wetlands, springs, and rivers. Physical observations related to natural groundwater flow and well logging data were used to infer the local and regional flow at the catchment scale. On this basis, we outlined the local and regional flow lines in the Infranz catchment. To understand the groundwater flow system of the Infranz catchment, shallow groundwater-level measurements were conducted in eight community-owned hand-dug wells. The measurements were conducted weekly and, occasionally, bi-weekly between 10/05/2016 and 6/03/2017 using electric-sensor water level meters. The monitoring wells were selected based on the hydrogeomorphic and geologic situation and on accessibility (Figure 1).

Groundwater Recharge Estimation

Groundwater recharge is defined as the downward movement of water reaching the water table and joining the groundwater reservoir [4-6,34,35]. Even though there is evident uncertainty in current recharge estimation methods, accurate recharge estimation remains important [36,37] in water resources studies. To estimate groundwater recharge, three commonly used and globally accepted estimation techniques were applied to increase the reliability of the estimation, and the results were compared. Each method is explained in detail in the following sub-sections.

Groundwater Recharge Estimation by Chloride Mass Balance (CMB) Method

The chloride mass-balance method was developed by [38]. It compares chloride in precipitation with chloride in groundwater, where recharge from precipitation is the only source of chloride [39]. The increased chloride concentration in groundwater is attributed to evapo-concentration. To calculate the chloride mass balance for recharge estimation, water samples were collected from springs and from rainwater. Under the assumptions that chloride is a conservative tracer, that the source of chloride ions in the soil zone or groundwater is precipitation, and that no recycling or concentration of chloride occurs within the aquifer [36], the following formula can be used to estimate a spatially averaged recharge flux to the aquifer [8,36-39]:

R = P · Cl_p / Cl_gw

where P is the mean annual effective precipitation in mm (annual precipitation minus surface runoff), R is the total annual recharge (mm), Cl_p is the mean chloride concentration in precipitation (mg/L), and Cl_gw is the chloride concentration in shallow groundwater (mg/L). The chloride ion is used in chemical recharge studies because of its conservative nature, being neither leached from nor adsorbed by the soil [39]. Groundwater recharge to the regional system was estimated based on the chloride concentrations of eight rainwater samples and 18 groundwater samples, all taken in 2015/2016. The groundwater samples were fairly distributed spatially, and the rainfall was sampled in different months of the rainy season to account for the possible temporal variation of rainfall composition. The low chloride concentrations found in the sampled springs provide confidence in the absence of other sources of chloride.
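A minimal sketch of the CMB computation, using the catchment values reported later in the Results (effective precipitation of 1467 mm/year with runoff set to 0, and mean Cl of ~1.8 mg/L in rain and ~6.1 mg/L in groundwater):

```python
def cmb_recharge_mm(p_eff_mm: float, cl_rain_mg_l: float, cl_gw_mg_l: float) -> float:
    """Chloride mass balance: R = P * Cl_p / Cl_gw."""
    return p_eff_mm * cl_rain_mg_l / cl_gw_mg_l

r = cmb_recharge_mm(1467.0, 1.8, 6.1)
print(f"R = {r:.0f} mm/year ({100.0 * r / 1467.0:.0f}% of rainfall)")
# ~433 mm/year with these rounded means; the paper reports 436 mm/year,
# presumably computed from the unrounded concentrations.
```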
Groundwater Recharge Estimation by Soil Moisture Balance (SMB) Method

Estimation of recharge in a variety of climatic conditions is possible using a daily soil moisture balance based on a single soil store [40]. The method was first developed by [41] and was later applied for estimating groundwater recharge in different studies (e.g., [40,42-46]). In this method, groundwater recharge is calculated on a daily time scale by the following formula:

Recharge = P − Runoff − PET − ΔSM

where ΔSM is the difference between the soil moisture content (SM) of the calculated day and that of the preceding day, which is zero if the moisture content remains at field capacity (i.e., SM on both days equals the plant available water (PAW) value, as is usual in the wet season), and P, PET, and Runoff are the daily precipitation, potential evapotranspiration, and overland flow, respectively. The PET was estimated using the meteorological data of the nearby Bahir Dar Meteorological Station; the Penman-Monteith formula as modified by [47] was used to calculate daily PET. Precipitation data of the Bahir Dar Meteorological Station were used. There was no river discharge monitoring station on the Infranz River; hence, the surface runoff was estimated based on expert judgement, i.e., by taking the geological, topographical, and human-activity factors into account. The PAW was estimated using the hydraulic properties calculator of the SPAW computer program, developed by the USDA in cooperation with Washington State University in 2006 and later revised in April 2019. In addition, the table of [41], reporting PAW values for different land use and soil combinations, was consulted to refine the estimated value.

Groundwater Recharge Estimation by Water Table Fluctuation (WTF) Method

Groundwater recharge estimation based on groundwater level data is among the most widely applied techniques [48], and the method has been used by many researchers (e.g., [46,48-50]). The effectiveness of the technique depends strongly on how closely the assumptions of the method are met and on how accurate the estimate of the specific yield is. The principle behind the WTF method is that a water level rise is caused only by precipitation. The method works well under the assumption that the aquifer shows sharp water level rises and declines in the presence and absence of precipitation, respectively. The aquifer system needs to be unconfined with a shallow water table; otherwise, it may not display sharp rises, because wetting fronts tend to disperse over long distances [48]. The influence of pumping must be nil, so that the fluctuations are attributed only to rainfall events. The groundwater recharge is then calculated by the following formula:

R = S_y · Δh / Δt

where Δh is the head change through the recharging period, S_y is the specific yield, and Δt is the recharging time period.
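Both methods lend themselves to short numerical sketches. The SMB loop below is a minimal single-store version (it omits the reduction of actual evapotranspiration when the store runs dry), and the WTF line reproduces the 400-800 mm/year range obtained later from the 8 m rise in Well 1 with a specific yield of 5%-10%:

```python
def smb_daily_recharge(p, pet, runoff, paw_mm=200.0, sm0_mm=0.5):
    """One-store daily soil moisture balance: water that raises the store
    above field capacity (the PAW value) drains down as recharge."""
    sm, recharge = sm0_mm, []
    for p_d, pet_d, q_d in zip(p, pet, runoff):
        sm = max(sm + p_d - q_d - pet_d, 0.0)   # wet, then dry, the store
        r = max(sm - paw_mm, 0.0)               # surplus over field capacity
        sm -= r
        recharge.append(r)
    return recharge

def wtf_recharge_mm(dh_m: float, specific_yield: float, dt_years: float = 1.0) -> float:
    """Water table fluctuation: R = Sy * dh / dt, in mm per year."""
    return specific_yield * dh_m * 1000.0 / dt_years

print(wtf_recharge_mm(8.0, 0.05), "to", wtf_recharge_mm(8.0, 0.10), "mm/year")
```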
Geology of the Infranz Catchment

In the Infranz catchment, the Tertiary volcanics are largely covered by Quaternary volcanics, particularly by young Quaternary volcanic rocks (Figure 2). The Alaji formation, part of the Tertiary Trap Series, is exposed near the major springs area. This formation mainly consists of flood basalts associated with rhyolite and subordinate trachyte plugs that contain transitional to tholeiitic basalts [24]. Rhyolite crops out near the major springs area. The Tertiary volcanics have a relatively modest average permeability. The old Quaternary volcanic rock is exposed in the eastern part of the catchment, in the Meshenti area. It consists of olivine basalts and subordinate phonolitic lavas and covers much of the southern Tana basin ([51], as cited in [52]). The aphanitic basalt is a dark grey to black, fine-grained, compact rock. The old Quaternary basalt is mainly composed of 60% plagioclase, 30% pyroxene, 8% opaque minerals, and 2% olivine [24].

The young Quaternary volcanic rocks in the Infranz catchment are characterized as scoriaceous basalt flows: dark gray to greenish-gray, strongly vesicular to scoriaceous, porphyritic olivine-pyroxene-plagioclase, zeolite-rich phyric, thin basalt flows. They are mainly composed of 50% plagioclase, 20% pyroxene, 15% opaque minerals, and 15% olivine [24]. They are scarcely weathered, and it is possible to recognize the original "pahoehoe" and "aa" lava flow structures. The basalts are highly vesicular, with rounded to elongated vesicles filled, in most cases, by zeolite, calcite, quartz, or chalcedony amygdales of up to 6 cm diameter that show strong alignment or elongation parallel to the flow layering (Figure 3). They show narrow to large volcanic tubes, as observed around the springs at outcrops and in quarry cuts in the Merawi area. Scoria cones are also associated with this rock unit (Figure 4). The Quaternary basalts are highly jointed and fractured, forming blocky to ball-shaped vesicular flows, commonly known as pillow lava (Figure 4), flowing out of fissures. The joints and fractures developed during cooling. They are vertical to sub-vertical with a dominant N-S direction, while two subsidiary sets are orientated NE-SW and NW-SE. The dominant N-S orientated joints are manifestations of the regional fault direction (an extension of the N-S orientated Tana graben), which defines the weakness direction along which the basaltic lavas likely erupted through fissures. The interconnection of the vesicles, combined with the fractures developed in the unit and its inherent scoriaceous nature, results in high porosity and permeability along the vesicles, joints, and fractures.

Most of this young Quaternary volcanic rock unit forms a gently undulating plain, with a typical landscape of closed depressions surrounded by lineaments of blocky fractured rock (Figure 4A). This is clearly visible on Google Earth (Figure 4B). The depressions are covered by thin alluvial deposits, whereas the blocky fractured bare-rock lineaments surround them. This geomorphology causes water to collect in the closed depressions during the rainy season, preventing surface run-off and favoring infiltration to the groundwater.

Springs Occurrence and Relationship with the Wetland System

The surface geology of the area is made up of highly porous and permeable vesicular basalt with scoriaceous contact zones between individual local lava flows. This provides conduits for rapid local and regional (catchment-scale) flow. The five major springs, along with other smaller springs, are situated approximately 8 km west of Bahir Dar and discharge from these vesicular and highly fractured basalts. On the one hand, intermittent springs and wetlands can be observed; on the other hand, high-discharge springs and the perennial river occur, as shown in Figure 5. Streams upstream of the major spring zone show low flows and run dry during the dry season. Wetlands upstream of the major spring zone are also seasonal, becoming dry in the dry season.
From the fieldwork and random interviews with local people (2013 and 2015 to 2018), it was clear that some streams, seepage zones, and some of the low-discharge springs are drying up in the dry season, even though these are the sources of water for the wetlands during the wet season, and that these conditions have worsened with time. The water withdrawal from the high-discharge springs results in reduced outflow to the downstream wetlands and lower flow in the downstream Infranz River toward the lake. The local people also confirmed that before the extraction of water for the water supply (2002), the wetland was sustained throughout the year, especially around the high-discharge springs, with no access possible for farming or grazing. It was assessed by [53] that the areal coverage of the Infranz wetlands had decreased to 34% by the year 2011. They confirmed, as we have witnessed, that the wetlands now become dry in the dry season, and this encourages local farmers to encroach on the wetland with farming (Figure 5).

Groundwater Level Monitoring

The aquifer storage is affected by the addition or extraction of water from the aquifer, which in turn causes groundwater level fluctuations. Figure 6 shows the response of the shallow groundwater to precipitation. During the rainfall months (June, July, and August), the water level rises and reaches a maximum in July. All the wells respond instantaneously to the start of the rainfall period owing to rapid recharge, reach their maximum in July, stay roughly constant until halfway through September, and then start to drop at the end of the rainfall period (Figure 6). The small peak in the month of February (Figure 6) is related to limited rainfall in winter, which is not very common in the area. During the dry season, the water table is relatively deep in the upstream sections at high elevation (reaching down to 15 m depth in the most upstream Well 1), while it is shallower in the downstream wells (around 2 m depth in the most downstream Well 4). This is due to elevation variations. The rise during the rainy season is largest in the upstream sections, reaching 8 m amplitude in Well 1, while it is restricted in the downstream sections (less than 2 m amplitude in Well 6). In the rainy season, the water level may reach the surface in the downstream part (overflowing in Well 6), while in the most upstream Well 1 the water table is still at 7 m depth. The steep rise in the upstream wells indicates fast and substantial groundwater recharge, which increases the volume of water stored in the aquifer. The recession is also relatively steep in the higher parts, as groundwater drains from the elevated areas to the lowlands along the gradient, which decreases the groundwater storage of the aquifer. Wells in the lowlands show minimal fluctuations and stay almost stable throughout the year.

Groundwater Recharge from Chloride Mass Balance (CMB) Method

The rainfall in Bahir Dar shows chloride concentrations varying from <0.1 mg/L to 4 mg/L, whereas the concentrations of the high-discharge spring water samples vary from 0.3 mg/L to 12.5 mg/L. For the calculation of recharge by the CMB method, we used the average concentration of the rainwater samples (~1.8 mg/L) and of the groundwater samples (~6.1 mg/L).
The correlation between nitrate and chloride was checked beforehand to rule out pollution, and the two were not correlated (Figure 7). This provides confidence that the chloride in the groundwater comes from rainfall only, and that the CMB method is applicable. The mean Cl concentration in rainfall (4.6 mg/L) at Abu Delige in Sudan [54] is higher than the mean value in precipitation of our study area (the former location being nearer to the coast), but comparable with the mean value (2.88 mg/L) found in the Sahel, Senegal [55]. The mean Cl of rainfall samples from Addis Ababa is about 0.95 mg/L [8], which might be due to the location being far from the coast and having low atmospheric dust.

The yearly precipitation at Bahir Dar Meteorological Station is 1467 mm (based on the measuring period 2012-2016). The area is typically characterized by many closed depressions floored and surrounded by highly permeable, open-jointed basaltic rocks, with a very thin soil covering some of the depression floors. The slope is generally flat to very gentle between the low ridges of blocky volcanic rocks, where the closed depressions collect the rainwater, and ponding may occur during the wet season. The land use is predominantly bush and shrub, with a small part of agricultural land. Considering all these physical characteristics of the catchment, we concluded that the surface runoff following rainfall events is negligible, and the runoff was hence set to 0 mm. The effective precipitation is then 1467 mm/year. With these values, the recharge estimated using CMB is 436 mm/year, or about 30% of the annual rainfall. With the approximate surface area of the catchment (~200 km²), this results in a recharge of 87.2 × 10⁶ m³/year, or ~2765 L/s. This is approximately the total groundwater recharge in the entire catchment based on the CMB method. The high-discharge springs thus discharge about 14% of this total recharge. Even if the recharge were only 10% of rainfall (the minimum estimate of [56]) instead of 30%, the recharge would still exceed the discharge of all perennial springs by approximately 400 L/s.

Groundwater recharge was also estimated in a nearby area (Dangila district) by using different recharge estimation methods [57], with results ranging between 280 and 430 mm/year [58]. Based on hydrograph separation, [26] estimated an average annual recharge of 195.6 mm over the entire Lake Tana basin. Another study undertaken east of the Infranz catchment, in the Rib catchment, estimated groundwater recharge at 154 mm/year based on the CMB method.
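The catchment-scale figures quoted above follow directly from the areal recharge:

```latex
V = 0.436\ \mathrm{m/year} \times 200\times10^{6}\ \mathrm{m^{2}}
  = 87.2\times10^{6}\ \mathrm{m^{3}/year},
\qquad
\frac{87.2\times10^{6}\ \mathrm{m^{3}}}{3.154\times10^{7}\ \mathrm{s}}
  \approx 2.76\ \mathrm{m^{3}/s} \approx 2765\ \mathrm{L/s}.
```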
Groundwater Recharge from the Soil Moisture Balance (SMB) Method

The daily PET was calculated from January 2012 to January 2016 using the Penman-Monteith formula; the daily average value was 4.06 mm/day. The daily rainfall depth recorded at the Bahir Dar Meteorological Station for the same period was used; it has a mean annual value of 1467 mm. The runoff percentage (surface runoff over precipitation) is one of the input factors of the SMB method and is set to 0. The hydraulic properties calculator (the SPAW computer program), supported by the table in [41] in which PAW values for land use and soil combinations are compiled, was used to deduce the PAW; a value of 200 mm was adopted. Since the rainfall season is preceded by a long dry period, the thin soil covering the area would be close to dry. Therefore, the initial soil moisture content (which has very little influence on the calculated annual recharge value) was set to 0.5 mm. The mean annual recharge calculated for the five years (2012-2016) with the SMB method amounts to 748 mm, or 51% of rainfall. In other words, 150 × 10⁶ m³ of water annually recharges the whole Infranz river catchment, and the high-discharge springs discharge only some 8% of this total recharge. Groundwater recharge mostly takes place during the main rainy season (Figure 8), while the rest of the yearly rainfall is consumed by the plants to satisfy their evapotranspiration and growth demand and to bring the soil moisture content up to field capacity. The annual actual evapotranspiration is 638 mm, which is 56% of the annual potential evapotranspiration and 42% of the yearly rainfall.

Groundwater Recharge Estimation by the Water Table Fluctuation (WTF) Method

Based on hydrogeochemical and isotopic hydrological data, previous studies on the Lake Tana basin show that there is no or limited groundwater-lake water interaction [20,21]. Recharge is expected to be dominant in the upstream reaches; therefore, the groundwater rise in the most upstream well (Well 1) was considered. The seasonal rise of the water table in this well was observed to be 8 m for the yearly recharge period. Appropriate values of specific yield are difficult to assess for magmatic rocks [59]. The specific yield of the volcanic aquifers of Jeju Island was found to lie mostly within the range of 5%-10% [60], similar to the range reported by [57] and [61] for mafic to intermediate volcanic aquifers. Using this range allows us to express the uncertainty [59]. With this range of specific yield values, we obtained recharge values between 400 and 800 mm/year, corresponding to the range covered by the recharge obtained with the SMB and CMB methods. Groundwater recharge was estimated elsewhere in the central and northern highlands of Ethiopia, where the study area is situated, by the water balance method, discharge analysis, and the chloride mass balance method at about 10%-20% of rainfall [56]. A similar study by [8] in a tropical highland climate in central Ethiopia using the CMB method estimated the groundwater recharge at 25% of the annual effective precipitation. The high values ranging from 33% to 51% found in this study are due to the specific geological conditions of the Infranz catchment, with the highly vesicular and fractured scoriaceous young Quaternary basalts, and the typical geomorphology with closed depressions surrounded by low basalt ridges. Figure 9 shows the results of the three different recharge calculation methods. The recharge estimated by SMB is higher than that of the other two methods (CMB and WTF). The reason could be that SMB calculates the potential recharge without considering the limit of aquifer storage, whereas both other methods calculate the actual recharge. The lower part of the Infranz catchment is expected to be storage controlled (because the water table is near the ground surface) rather than precipitation controlled.
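A compact way to see how the SMB and WTF estimates arise is to code them directly. The sketch below uses a deliberately simplified daily bucket (the study's actual SMB follows a fuller soil-moisture bookkeeping with the parameters quoted above), and the daily series passed in are placeholders, not the Bahir Dar station records:

```python
def smb_recharge(rain, pet, paw=200.0, sm0=0.5, runoff_frac=0.0):
    """Total recharge (mm) from daily rain and PET series (mm/day).

    Simplified bucket: infiltration fills the soil store; AET is taken
    at the potential rate while moisture lasts; any storage above the
    plant-available water capacity (PAW) drains as recharge.
    """
    sm, recharge = sm0, 0.0
    for p, e in zip(rain, pet):
        sm += p * (1.0 - runoff_frac)   # runoff fraction is 0 in this study
        aet = min(e, sm)                # evapotranspiration limited by storage
        sm -= aet
        if sm > paw:                    # soil wetter than field capacity
            recharge += sm - paw
            sm = paw
    return recharge

# Toy illustration: 120 wet-season days at 12 mm/day rain, PET 4.06 mm/day.
print(smb_recharge([12.0] * 120, [4.06] * 120))   # -> 753.3 mm over the season

# Water-table fluctuation: R = Sy * dh, with the 8 m seasonal rise in Well 1
# and a specific yield of 5-10% for the basaltic aquifer.
for sy in (0.05, 0.10):
    print(f"WTF recharge at Sy = {sy:.0%}: {sy * 8000:.0f} mm/yr")
# -> 400 mm/yr and 800 mm/yr, bracketing the CMB and SMB results.
```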
Conceptual Hydrologic/Hydrogeologic Model and Sources of Recharge to the Springs

The conceptual hydrogeologic model was developed to infer the groundwater flow system and to indicate the sources of water for the seasonal springs and the high-discharge springs. It was built by integrating geological, geomorphological, hydro(geo)logical, and hydrochemical observations. The sources of water for the most upstream part of the Infranz River are multiple low-discharge and seasonal springs. The major sources of water for the Infranz River, however, are the Areke, Lomi, Tikur Wuha, and twin Infranz high-discharge springs. The area downstream, and occasionally upstream, of the high-discharge springs is rich in wetlands (Figure 5). The undulating surface that characterizes the area also governs the surface water and local seepage flows. From the lithologic logs of drilled boreholes, a geological study [24], and field observations, less permeable Tertiary rhyolite has been identified under the recent Quaternary volcanics. This indicates that the older Quaternary basalt pinches out and the Tertiary formation, consisting of less permeable rhyolite, appears immediately underneath the recent Quaternary volcanics (Figure 10). The spring occurrences are thought to be linked to geological and geomorphological features: the Tertiary rhyolite, which is less pervious than the overlying Quaternary basalts, together with the local topographic slope break, perches the groundwater, resulting in contact springs. Based on topography and geologic data, a geologic cross-section was developed in a S-N direction, in which the potential groundwater flow lines are represented (Figure 10). As was shown by [15], based on chemical and isotopic data of a larger study area comprising the Infranz catchment, the springs are locally recharged. As can be inferred from the cross-section in Figure 10 and from the geological map in Figure 2, the low-permeability Tertiary volcanics crop out in the area downstream of the major springs and underlie the wetland area. It is clear that the wetlands are mainly sustained by these springs. Based on the above principle, the local and sub-regional (catchment-scale) flow lines are outlined as indicated in Figure 10. The local flow lines (short blue arrows) discharge at seasonal springs and seepage zones; they are influenced by climatic effects and run dry in the dry season. The regional flow lines (long arrows in Figure 10) are less affected by climatic variables and are the sources of water for the perennial high-discharge springs, the river, and the wetlands. All collected data concerning springs, streams, wetlands, their persistence and discharges, groundwater levels, well discharges, and recharge were integrated to draw conclusions on how the hydrological system functions.

Figure 10. Schematic conceptual model of groundwater flow and circulation in the Infranz catchment, Lake Tana basin (location of cross-section A-A1 in Figure 2).

Conclusions

The schematic cross-section and flow lines in the conceptual model show that the springs emerge from the highly pervious young Quaternary basaltic formation at its contact with the underlying lower-permeability Tertiary volcanic rocks. Local shallow flow systems discharge into seasonal springs and streams, while a regional flow system (within the Infranz catchment) sustains the high-discharge springs and the downstream perennial Infranz River. The recharge area of these high-discharge springs covers the whole catchment.
The specific geological conditions of the Infranz catchment, with the highly vesicular and fractured scoriaceous young Quaternary basalts, and the typical geomorphology with closed depressions surrounded by low basalt ridges, create favorable conditions for high groundwater recharge. Employing the CMB, SMB, and WTF methods, it was deduced that 436-748 mm/year, or 30%-51% of the annual rainfall, recharges the groundwater. The comparable results may demonstrate the effectiveness of these methods for estimating recharge under the specific geological, hydrogeological, geomorphological, and hydro-climatic conditions of the area. The range of the values is attributed to the uncertainties inherent to each method (deviations from its assumptions), measurement errors, and spatial heterogeneity. The high-discharge springs discharge about 8%-13% of this total recharge; recharge over the catchment is thus more than sufficient to sustain them. Similarly high recharge percentages may be expected in other areas with comparable geology and geomorphology.

Acknowledgments. ... for the fieldwork conducted from 2015 to 2018 and the data analysis. Last but not least, thanks to Mr Wubneh Belete, who provided GIS shapefiles of the project area.
7,716.8
2020-03-18T00:00:00.000
[ "Environmental Science", "Geology" ]
From Bench to the Clinic: The Path to Translation of Nanotechnology-Enabled mRNA SARS-CoV-2 Vaccines

Highlights

Pfizer–BioNTech's and Moderna's nanotechnology-enabled mRNA vaccines are the first of their kind to be approved for human use. The COVID-19 pandemic has changed our lives, and although SARS-CoV-2 has caused irreversible health, social, and economic damage, continuous and extensive worldwide efforts were essential to reduce its deleterious effects.

Introduction

In December 2019, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was discovered, which then precipitated the emergence of the largest global pandemic since the 1918 Spanish flu, which was caused by the Hemagglutinin Type 1 and Neuraminidase Type 1 (H1N1) influenza A virus [1]. The origin of SARS-CoV-2, the virus responsible for coronavirus disease 2019 (COVID-19), was traced to bats and pangolins, two mammals that serve as major reservoirs for various types of coronaviruses (CoVs), and many have concluded that human transmission possibly resulted from close interactions with one or both species; nevertheless, more evidence is necessary to confirm this hypothesis [2]. On March 11, 2020, the World Health Organization (WHO) officially categorized this disease as a global pandemic as it spread over the world causing death and economic devastation [3]. Within weeks after the outbreak of SARS-CoV-2 began, a myriad of public health measures, including lockdowns, testing, contact tracing, and hygiene campaigns, were implemented in several countries to control the spread of the virus; nevertheless, millions of people have been infected, and an unprecedented number of deaths have occurred around the world [4]. Consequently, global financial markets have been very unstable, millions of people have lost their jobs, and a health and economic crisis has emerged. In the face of this health emergency, innovative solutions became urgently needed to curtail the spread of SARS-CoV-2. In recent years, nanotechnology has been widely used to address some of the most pressing challenges of modern medicine, as it offers the opportunity to modify and manipulate matter at the nanoscopic scale to generate innovative therapeutic, diagnostic, or theranostic platforms [5]. These technologies include drug delivery nanosystems, nanosensors, nanostructured hydrogels, nanoengineered tissues, and nanovaccines. Nanotechnology has played an important role in the response to the COVID-19 crisis, as various nanoparticle-based vaccines have emerged from several companies around the world. Alliances across industrial (Moderna, Pfizer-BioNTech) and governmental settings allowed the acceleration of the development and clinical translation of nanotechnology-enabled SARS-CoV-2 vaccines. Previously, the development of nano-sized particles formulated to transport antigenic components that induce immune responses had been successfully implemented at the preclinical stage for various applications, such as cancer and some infectious diseases, including severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) [6]. Nanoparticle-based approaches can achieve targeted delivery of viral proteins or genetic material to antigen-presenting cells (APCs), causing controlled immunogenic responses [7]. Immature APCs take up nanoparticles (NPs) via phagocytosis or endocytosis and migrate to the closest lymph node through the lymphatic system while undergoing a process of maturation.
Once fully matured, APCs complete antigenic presentation on their membrane, CD4+ and CD8+ T cells are activated, and immunity is produced to target a specific pathogen (Fig. 1) [8]. In this review, we present an overview of the clinical translation of SARS-CoV-2 messenger ribonucleic acid (mRNA) nanotechnology-enabled vaccines, exploring in detail their mechanism of action and clinical development during the COVID-19 pandemic.

Coronaviruses

CoVs are single-stranded ribonucleic acid (RNA) viruses from the Coronaviridae family, with a distinctive crown-like membrane envelope composed of spike glycoproteins localized on their surface [9]. Four genera of CoVs exist: alphacoronavirus, betacoronavirus, gammacoronavirus, and deltacoronavirus [10]. To date, seven CoVs are known to affect humans: 229E and NL63 from the alphacoronavirus genus, and HKU1, OC43, MERS-CoV, SARS-CoV, and SARS-CoV-2 from the betacoronavirus genus [11]. Four main structural proteins, essential for the complete assembly of the viral particle, are encoded by the coronaviral genome: the spike S protein, the nucleocapsid N protein, the membrane M protein, and the envelope E protein (Fig. 2a) [12]. Each protein has a specific function: the S protein mediates virus adherence to the host cell receptors and subsequent fusion; the N protein binds to the CoV RNA genome, arranges the nucleocapsid, and participates in the viral replication cycle; the M protein forms the main structural part of the viral envelope and interacts with all other structural proteins; and the E protein, the smallest integral membrane structural protein incorporated in the viral envelope, is important for virus production and maturation [13]. The S protein of SARS-CoV-2 consists of two subunits: the S1 subunit contains a receptor-binding domain (RBD) that binds to angiotensin-converting enzyme 2 (ACE2) on the surface of host cells, whereas the S2 subunit mediates fusion between the membranes of the virus and the host cell (Fig. 2a) [14]. SARS-CoV-2, the causative pathogen of COVID-19, has produced a global pandemic due to a highly infectious mechanism based on the co-expression of TMPRSS2 and ACE2 receptors on the cellular membrane of host cells [15] (Fig. 2b). Although the ACE2 receptor is expressed on human respiratory epithelial cells, ACE2 is not limited to the lungs, and extrapulmonary spread of SARS-CoV-2 in ACE2-positive tissues has been observed, including the gastrointestinal tract [16][17][18][19]. In addition, it has been observed that apical cilia on airway cells and microvilli on type II pneumocytes may be important in facilitating SARS-CoV-2 viral entry [20]. SARS-CoV-2 infection is assisted by TMPRSS2, a cellular serine protease, through two independent mechanisms: cleavage of the S glycoprotein to activate host entry, and proteolytic cleavage of ACE2 to promote viral uptake [19,21,22]. The priming of the S protein by TMPRSS2 or other proteases is followed by the affinity binding of the viral S1 protein domain to the ACE2 receptor and cellular internalization initiated by plasma membrane fusion and acidic-pH-dependent endocytosis [19,23]. Intracellular replication is then facilitated by RNA-dependent polymerases, and assembly of new viral nucleocapsids from genomic RNA and N proteins occurs in the cytoplasm, whereas new particles are produced by the synergistic action of both the endoplasmic reticulum and the Golgi compartments [14].
Lastly, assembly of the genomic RNA and structural proteins into new viral particles leads to their release via exocytosis [14,24,25]. The evolution of SARS-CoV-2 has led to the emergence of multiple variants containing amino acid mutations, some of which have been classified as 'variants of concern' (VOC) that impact virus characteristics, including transmissibility and antigenicity [26]. Several countries have reported the identification of VOCs, such as the B.1.1.7 lineage first detected in the United Kingdom [26]. Although no significant evolutionary changes occurred during approximately the first 11 months after the emergence of SARS-CoV-2 in late 2019, multiple mutations have been identified since late 2020, and novel lineages are expected to emerge for the duration of the COVID-19 pandemic [26]. Clinical manifestations of COVID-19 range from flu-like symptoms such as cough, fever, and fatigue to more serious clinical consequences including shortness of breath, anosmia, pneumonia, coagulopathy, acute kidney injury, and runaway inflammation referred to as a cytokine storm [33]. Other manifestations have been reported in the gastrointestinal tract, liver, heart, skin, and central nervous system [34]. The high mortality rate and clinical complications of COVID-19 are particularly associated with advanced age and multiple co-morbidities such as obesity, hypertension, diabetes, and heart disease [35]. As the world faced a fast-evolving and highly contagious threat, innovative approaches were crucial to develop effective vaccines with the aim of suppressing the pandemic and decreasing mortality.

Nanotechnology as a Tool to Develop Vaccines

During the last decades, nanotechnology has enabled the development of candidate vaccines for the effective delivery of genetic material and antigenic proteins with high specificity through various administration routes (i.e., oral, intramuscular, intranasal, intradermal, and subcutaneous) [36][37][38][39][40][41]. Nanovaccination delivery systems have been developed in different forms and can be classified into several categories based on their composition: lipid, polymeric, inorganic, and virus-like nanoparticles (VLNPs) (Fig. 3) [42,43]. Each class of nanoparticle contains multiple subclasses, with various advantages and disadvantages regarding cargo, delivery, and patient response.

Lipid-Based Nanoparticles

Lipid-based nanoparticles have been the most common class of nanomedicines approved by the U.S. Food and Drug Administration (FDA) [44]. Lipid-based nanoparticles are excellent platforms for the encapsulation of diverse hydrophobic or hydrophilic therapeutics, including small molecules, proteins, and nucleic acids. Their multiple advantages include formulation and synthesis simplicity, self-assembly, biocompatibility, and high bioavailability [45]. Common fabrication techniques for lipid-based nanocarriers are high-pressure homogenization, high-speed stirring, ultrasonication, emulsion/solvent evaporation, double emulsion, phase inversion, and solvent injection [46]. Lipid-based NPs are divided into several types of systems: liposomal NPs, composed of a lipid bilayer enclosing a hydrophilic core; lipid nanoparticles (LNPs), liposome-like structures with diverse morphologies that usually form an inverse micelle within the core to encapsulate hydrophilic agents; and solid lipid nanoparticles (SLNPs), composed of a lipid monolayer enclosing a solid lipid core [47].
Gene delivery systems often use LNPs to encapsulate nucleic acids in spherical vesicles composed of several materials: ionizable or cationic lipids, helper lipids, and polyethylene glycol (PEG) [48]. Ionizable lipids, which are neutral at physiological pH and positively charged at low pH, are embedded in the micellar structure of LNPs to complex with negatively charged genetic material, aid endosomal escape, and protect against nuclease-mediated degradation [49,50]. Helper lipids, such as distearoylphosphatidylcholine and cholesterol, promote cell membrane binding and provide structural rigidity. Because LNPs can be rapidly taken up by the reticuloendothelial system, PEG is commonly used to decorate the NP surface to increase bioavailability in the human body [51]. Despite these advantages, limitations of lipid-based nanoparticles include low encapsulation efficiency of hydrophobic molecules, poor biodistribution due to high accumulation in the liver and spleen, and, in rare cases, anaphylactic or severe allergy-like reactions as a response to high antibody levels induced by PEG [50,52].

Polymeric Nanoparticles

Polymeric NPs can be fabricated with various types of natural materials such as chitosan, chondroitin, alginate, pectin, guar gum, dextran, and xanthan gum [42]. Similarly, synthetic polymeric materials such as polyacrylates, polycaprolactones (PCL), polylactic acid (PLA), poly(lactic-co-glycolic acid) (PLGA), polylactide-polyglycolide copolymers, and charged polymers such as poly(amidoamine) (PAMAM) and poly(ethylenimine) (PEI) have been widely used for nanoparticle fabrication [42,53]. Techniques for the synthesis of polymeric NPs, such as emulsification, nanoprecipitation, ionic gelation, and microfluidics, allow precise control of multiple features including size, shape, charge, surface chemistry, and solubility [54]. Polymeric NPs enable different modalities for delivery: drugs, proteins, or genetic material can be conjugated to the polymer, encapsulated, immobilized into its matrix, or attached to the NP surface [54]. By fine-tuning properties such as composition and surface charge, the loading efficacies, release kinetics, and tissue-specific accumulation of these therapeutics can be tightly controlled [55]. Polymeric NPs are divided into two main categories: nanospheres (matrix systems) and nanocapsules (reservoir systems) [56]. These categories are further divided into micelles, polymersomes, and dendrimers. Polymeric micelles are nanostructures composed of amphiphilic block copolymers that self-assemble into a core-shell structure in aqueous solutions [57]. Designing safe and effective micelles that exhibit multifunctional properties, by integrating stimuli-sensitive groups and ligands for specific targeting, has become especially relevant for the delivery of antibodies and small interfering RNA (siRNA) [57]. Polymersomes, self-assembled vesicles composed of amphiphilic polymers, offer several advantages over traditional liposomes in terms of structural stability; nevertheless, lipid/polymer hybrid vesicles allow greater control and adaptability of physicochemical properties for any desired functionality or application [58]. Dendrimer NPs, composed of three-dimensional branched synthetic polymers that form interior and exterior layers ideal for molecular conjugation and encapsulation, have been employed in nanovaccination approaches for DNA or RNA delivery [59][60][61].
Polymeric NPs have been regarded as excellent candidates for molecule delivery due to their biodegradability, water solubility, biocompatibility, stability, and ability to perform targeted delivery. Some disadvantages, however, include the risk of particle aggregation and toxicity [54].

Inorganic Nanoparticles

Inorganic-based platforms composed of metallic (e.g., gold and iron oxide), carbon-based (e.g., nanotubes), or semiconductor NPs (e.g., quantum dots) have been proposed as delivery vehicles for vaccination with promising results [62,63]. Inorganic NPs exhibit unique size-dependent electrical, magnetic, and optical properties useful for immunological applications through the targeting of multiple immune signals, enhanced stability, and delivery of otherwise insoluble cargo [64]. Simple surface modifications allow inorganic NPs to bind to antibodies, drugs, or other ligands and increase their biocompatibility [56]. Some common strategies for the fabrication of inorganic NPs include controlled crystallization (solvothermal synthesis or seeded growth), programmed assembly (thermodynamically driven), and templated assembly (coating, casting, or breadboard) [65,66]. Gold nanoparticles (AuNPs) are among the most well-studied inorganic nanosystems due to their tunable properties and ease of functionalization [67,68]. Although limited in number, some inorganic materials such as iron oxide NPs (IONPs) have been FDA-approved for human use, while others are undergoing clinical trials [69,70]. Besides showing potential as delivery vehicles for antigens and adjuvants, carbon-based nanotubes (CNTs) have shown unique infrared-light-responsive properties to induce systemic immune responses [64]. Quantum dots, luminescent nanocrystals with a typical size between 2 and 10 nm, have been primarily used for in vitro and in vivo imaging applications; nevertheless, they have also shown potential as co-delivery vaccine agents [66,71]. In general, inorganic NPs are well suited for theranostic applications and offer the advantage of being highly versatile in size, structure, and geometry [54]. Some concerns that these nanomaterials raise in the scientific community are potential long-term toxicity and limited biodegradability [64].

Virus-Like Nanoparticles

Immunization strategies have succeeded in mimicking the conformation of viral structures to create VLNPs, or in purifying viral proteins to synthesize NPs. VLNPs provide several vaccination advantages, including enhanced uptake through native viral mechanisms and more efficient stimulation of the immune response [72][73][74][75]. These nanoparticle systems can serve as vaccination platforms to facilitate the delivery of functionalized or encapsulated adjuvants, antigens, and genetic material that expresses antigenic structures to immunize the organism against pathogens. VLNP-based vaccines have received attention as they allow the incorporation of ligands, immunomodulators, and targeting moieties into their structure via genetic engineering strategies [76]. VLNPs also offer diverse bioinspired structures based on human (e.g., Ebola virus, hepatitis B virus, and human immunodeficiency virus) or plant (e.g., Tobacco mosaic virus, Cucumber mosaic virus, Cowpea chlorotic mottle virus) viruses, and modifications include synthetic surface glycans that play a crucial role in modulating protein-receptor interactions, proinflammatory signaling pathways, and cytokine expression [77][78][79][80][81][82].
Compared to the delivery of live-attenuated or whole-inactivated virus vaccines, VLNPs do not carry the native viral genetic material and are thus safer and non-infectious [83]. As platforms for vaccine development, VLNPs offer the advantage of being highly scalable and adaptable, and some expression systems such as yeast and bacterial cells may greatly reduce their production cost [76,84]. Additionally, VLNPs may be designed as multivalent antigen structures, which could provide enhanced cellular uptake and superior immune activation [84,85]. Despite the potential of VLNPs, some challenges include eliciting the formation of specific protective antibodies, improving the limited duration of immune responses, and effectively mimicking the complex life cycle of some pathogens [85].

Clinical Development of Vaccines

Several biotechnology companies, hospitals, and universities created alliances across industrial and academic settings to rapidly advance basic and clinical research while developing nanovaccination systems during this pandemic (Table 1). For these vaccines to reach the general population, a rigorous clinical development and evaluation process has been followed. According to the WHO, a successful vaccine should reduce disease by at least 50%, with results precise enough to conclude that the true vaccine efficacy exceeds 30% (the 95% CI of the trial result should exclude efficacies below 30%) [86]. Evaluation by the FDA includes this lower limit of 30% as a criterion for vaccine approval. Part of these efforts resulted in the creation of two highly efficacious vaccines, BNT162b2 by Pfizer-BioNTech and mRNA-1273 by Moderna; both use nanotechnology as an essential part of their design to deliver mRNA [87]. Once a potential antigen of an infectious pathogen has been identified, the first step involves the development of the mRNA sequence that can express this antigen, followed by its cellular and animal testing (the preclinical stage) to determine its efficacy [88]. The second step consists of clinical trials, a sequential four-phase process in which the vaccine candidate is tested on humans [89]. During Phase I, small groups of people (hundreds) receive the trial vaccine to evaluate safety and immunogenicity. If satisfactory results are obtained, the vaccine candidate proceeds to Phase II, with the objective of expanding the safety evaluation, identifying the optimal dose, and studying the efficacy in a larger population (typically 25-1000 volunteers) [90]. Phase III trials assess the efficacy of the vaccine in hundreds or thousands of participants, and if other vaccines for the same pathogen exist, a direct comparison is often performed [91]. If favorable results are obtained in these three phases, an application for registration and approval of a vaccine can be presented to regulatory agencies, such as the FDA in the case of the U.S., which in turn will evaluate the data and make a final decision as to whether it should be approved for clinical use [92]. As not all potential short- and long-term adverse events can be anticipated until the vaccine is administered to the general population and time has passed, a Phase IV clinical trial is often conducted after vaccine approval; this allows monitoring of the safety and efficacy of the vaccine in populations of 100,000 to millions [90]. The probability of success for a vaccine candidate varies by phase and therapeutic area, according to an analysis by Wong et al. that included vaccine candidates from 406,038 trials conducted from January 1, 2000, to October 31, 2015 [93]. The probability of successfully advancing vaccine candidates for infectious diseases to the next clinical phase is 76.8% from Phase I to II, 58.2% from Phase II to III, and 85.4% from Phase III to approval [93]. Based on this analysis, approximately one third (33.4%) of candidates complete all clinical trial phases and reach the public [93].
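As a quick check on these figures, chaining the phase-transition probabilities under a naive independence assumption gives a slightly higher overall rate than the reported 33.4%, since each phase estimate in [93] is derived from a different cohort of candidates; a minimal sketch:

```python
# Chaining the phase-transition probabilities reported by Wong et al. [93].
p_phase1_to_2 = 0.768
p_phase2_to_3 = 0.582
p_phase3_to_approval = 0.854

overall = p_phase1_to_2 * p_phase2_to_3 * p_phase3_to_approval
print(f"naive chained probability: {overall:.1%}")
# -> 38.2%, versus the reported overall success rate of 33.4%; the gap
#    reflects that the per-phase estimates come from different cohorts.
```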
Generally, vaccine development from conception to approval can take years to decades. As examples, the mumps, rotavirus, and varicella vaccines took four, fifteen, and twenty-eight years, respectively, to reach the general population [94]. Nevertheless, because of this global crisis, vaccines against COVID-19 have been developed with unprecedented speed [95]. Each of the stages of preclinical and clinical development was accelerated, and profuse investments from the private and public sectors were provided to facilitate rapid progress. To accelerate the clinical testing process of a SARS-CoV-2 vaccine, several clinical phases ran in parallel rather than sequentially, and in the U.S., Pfizer-BioNTech and Moderna were the first to be granted a fast-track designation by the FDA to expedite clinical studies and approval processes [96,97]. The accelerated development of vaccines, combined with the novelty of the technologies adopted for their production, may give rise to several concerns, including technical manufacturing problems and ethical matters regarding global access and availability of vaccines [98]. The unprecedented speed in the development of vaccines provides many lessons for the future, such as insights on regulations, global access, clinical development, chemistry manufacturing and controls, and post-deployment monitoring [99].

Differences Between DNA- and mRNA-Based Nanovaccines

The use of nucleic acids is one of the strategies that biotechnology companies and academic institutions have implemented to generate vaccines against SARS-CoV-2. This vaccine type relies on the delivery of genetic information to cells, usually as plasmid deoxyribonucleic acid (pDNA) or mRNA, to encode antigens and induce an immune response in the organism [100,101]. While mRNA vaccines only need to cross the cellular membrane and reach the cytoplasm of a targeted cell to elicit an effect, pDNA vaccines require the additional step of crossing the nuclear envelope [102]. However, the delivery of unprotected pDNA or mRNA represents a challenge, as enzymatic degradation of the genetic material and inefficiency in crossing biological barriers such as the cellular and nuclear membranes normally occur [103]. Nanoencapsulation of nucleic acids to produce vaccines has been established as an innovative approach to enable the delivery and protection of genetic material against possible extracellular degradation while preserving its programmed immunologic effect, and several of the nanoparticle-based vaccines against COVID-19 take advantage of this strategy [104,105]. To fabricate nucleic acid nanovaccines, DNA or mRNA is typically mixed with cationic lipids or polymers to form an electrostatic complex that is subsequently encapsulated into a nanoparticle system [106]. The resulting nanocarrier prevents ribonuclease activity on the genetic construct and facilitates its cellular uptake by APCs, via a cell-membrane fusion mechanism in the case of most lipid-based systems or through endocytic and phagocytic pathways when polymer-based systems are used [106][107][108][109].
One advantage of using nanotechnology-based systems to deliver genetic material is that specialized transfection equipment, such as electroporation devices or gene guns, is not necessary [110]. During the internalization process of most lipid-based systems, when the nanoparticle shell integrates into the cell membrane, the genetic material is released directly into the cytoplasm (Fig. 4a) [111]. When hybrid lipid-polymer nanoparticles (LPNPs) with cationic components are taken up by the cell, a phagosome or endosome forms around the particle, and its maturation into a phago- or endolysosome results in disruption via a pH-dependent proton sponge effect, releasing the content into the cytoplasm (Fig. 4b) [112]. In the case of pDNA-based nanovaccines, the genetic material must reach the nucleus, where transcription of mRNA molecules takes place, a more complex mechanism than that of mRNA-based nanovaccines, where the genetic cargo only needs to reach the cytoplasm to have an effect [113]. Both approaches result in ribosomal translation, production of antigenic proteins, proteasome activity, and subsequent extracellular presentation of the genetically encoded antigens [114,115]. After the migration of APCs to local lymph nodes, antigen presentation triggers cytokine release, induction of cellular responses in CD4+ and CD8+ T cells, activation of the adaptive immune system, and humoral immunity by antibody-producing B cells [115]. One advantage of mRNA-based vaccines over their DNA counterparts is that they do not interact with host-cell DNA, avoiding possible risks of genomic integration [110]; compared to viral-vector platforms, mRNA vaccines also avoid anti-vector immunity, as they contain only an open reading frame encoding the selected antigen and specific regulatory elements, which permits administration multiple times [110].

The Emergence of mRNA Nanovaccines During the COVID-19 Pandemic

Before the emergency use authorization (EUA) of Pfizer-BioNTech's and Moderna's vaccines by the FDA, mRNA-based vaccines had never been FDA-approved in humans for any disease. In the past, however, DNA-based vaccines were already commercially available for veterinary uses, such as the prevention of West Nile virus infection in horses and the treatment of canine melanoma, showing no safety concerns [116][117][118]. In addition, during 2016 and 2017, several human clinical trials were ongoing to evaluate the efficacy of mRNA-based vaccines against cancer and some infectious diseases [101,[119][120][121]. Although mRNA technology has shown promising results in in vitro and in vivo models since 1990, there was long no substantial investment in developing mRNA therapeutics, mainly because of concerns associated with mRNA instability, high innate immunogenicity, and inefficient in vivo delivery [101,122]. Novel strategies, including the incorporation of pseudouridine and the development of nanoparticle delivery platforms, were crucial for mRNA-based vaccines to emerge as attractive approaches with excellent biocompatibility profiles, facile scalability, and easy manufacturing [101,123].
Efforts toward effective nanotechnology-enabled mRNA vaccines have been made by several companies (Pfizer-BioNTech, Moderna, and Arcturus) and academic institutions (Imperial College London); nevertheless, to date only Pfizer-BioNTech's and Moderna's mRNA vaccines have received emergency approval, and in the next sections of this review article the preclinical and clinical development of these two SARS-CoV-2 vaccines will be described in detail.

Pfizer-BioNTech SARS-CoV-2 Vaccine

BioNTech is a biotechnology company that collaborated with Pfizer and Fosun Pharma to develop and test a SARS-CoV-2 vaccine. For this purpose, four nanoparticle-based mRNA vaccine candidates (BNT162a1, BNT162b1, BNT162b2, and BNT162c2) were under investigation. Each candidate possessed a different mRNA format encapsulated in an LNP: two vaccines (BNT162b1 and BNT162b2) contained N1-methyl-pseudouridine (m1Ψ) nucleoside-modified mRNA (modRNA); one (BNT162a1) contained uridine-containing mRNA (uRNA); and one (BNT162c2) contained self-amplifying mRNA (saRNA) [124].

Fig. 4 Schematic representation of the nucleic acid-based nanovaccination mechanism. a Liposomal nanoparticle vaccines: (i) the liposomal nanoparticle reaches the APC membrane and fuses with it. (ii) If the cargo is mRNA, it reaches the cytoplasm and is ready for translation; if the cargo is DNA, it must reach the nucleus for transcription into an mRNA molecule. (iii) Subsequently, ribosomes translate the mRNA molecules into proteins. (iv) Proteasome activity breaks the protein down into small antigenic fragments. (v) The antigenic fragments are presented on the APC membrane, and stimulation of the immune response is initiated by CD4+ and CD8+ T cells. b Lipid-polymer nanoparticle vaccines: (i) the lipid-polymer nanoparticle (LPNP) reaches the APC membrane. (ii) The LPNP is taken up into a phagosome or endosome. (iii) As the phagosome or endosome matures, a phago- or endolysosome is formed and later disrupted by pH changes, releasing the genetic material into the cytoplasm; mRNA cargo is then ready for translation, whereas DNA cargo must reach the nucleus for transcription into mRNA. (iv) The ribosomal machinery translates the mRNA to produce a protein. (v) Proteasome activity breaks the protein down into small antigenic fragments. (vi) The antigenic fragments are presented on the APC membrane for stimulation of the immune system by CD4+ and CD8+ T cells.

Two of the vaccine candidates had a genetic sequence that expressed the S protein (BNT162b2 and BNT162c2), and the other two expressed the RBD of the spike protein (BNT162a1 and BNT162b1) [125]. The ~80-nm NPs were composed of ionizable cationic lipids, phosphatidylcholine, cholesterol, and polyethylene glycol [126].

Preclinical Studies

In parallel with Phase I/II clinical trials (NCT04380701), the antigenicity and immunogenicity of BNT162b1 and BNT162b2 were confirmed in vivo in both murine and primate animal models [127]. First, preclinical studies were performed in BALB/c mice (n = 8) by administering 0.2, 1, or 5 μg of BNT162b1 or BNT162b2, or a buffer as control, using a single-dose regimen. Results showed a strongly dose-dependent response of either RBD- or S1-specific binding antibodies after the single dose, which increased more steeply for BNT162b2.
On day 28, the administration of 5 μg of BNT162b1 was enough to elicit a high RBD-binding response [geometric mean titer (GMT) = 752,680], similar to that observed when BALB/c mice were immunized with 5 μg of BNT162b2 (GMT = 434,560). To determine the protective immunity of the nanovaccine, a neutralization assay using a vesicular stomatitis virus (VSV)-based SARS-CoV-2 pseudovirus was performed on mouse serum [127]. On day 28 after injection, a steady increase in 50% pseudovirus-neutralization titers was observed after administering 5 μg of either candidate vaccine (GMT = 1,056 for BNT162b1; 296 for BNT162b2). On days 12 and 28 after the BNT162b injections, an enzyme-linked immunospot assay (ELISpot) showed production of IFN-γ by CD4+ and CD8+ T cells and of IL-2 by CD8+ cells in murine splenic T cells. These results were confirmed by intracellular-cytokine-staining flow cytometry analysis after ex vivo restimulation with a full-length S peptide pool. An additional immunogenicity analysis was performed on re-stimulated splenocytes obtained on day 28 from BNT162b-immunized animals using a full-length S peptide pool [127]. After stimulation, IFN-γ and IL-2 secretion was increased in Type 1 helper T (Th1) cells compared to other cytokines, and IL-4, IL-5, and IL-13 in Type 2 helper T (Th2) cells were undetectable. Although similar CD4+ and CD8+ T cell response patterns were observed for both vaccines, a stronger IFN-γ-producing CD8+ T cell response was observed in mice inoculated with BNT162b2. To investigate the principal compartments of T and B cell priming and evaluate systemic effects, the effects of the BNT162b vaccines on the proliferation and dynamics of immune cells in draining lymph nodes, blood, and spleen were studied [127]. Twelve days after the administration of 5 μg of either vaccine, an increase in plasma cells, class-switched IgG1+ and IgG2a+ B cells, and germinal-center B cells was observed in the draining lymph nodes, as well as an increase in class-switched IgG1+ and germinal-center B cells in the spleens of the mice, compared to the control. Mice injected with either vaccine also showed increased levels of CD8+ and CD4+ T cells in the draining lymph nodes, notably T follicular helper (Tfh) cells. Although both vaccines induced higher levels of Tfh cells in the blood and spleen, only BNT162b2 induced an increase in circulating CD8+ T cells. To further test the clinical potential, nonhuman primates were selected in the same study to evaluate the neutralizing response and protective ability of both BNT162b vaccine candidates [127]. Rhesus macaques (n = 6, male, 2-4 years old) received two intramuscular inoculations (at a 3-week interval) of 30 or 100 μg of BNT162b1, BNT162b2, or saline control. Results showed detectable levels of RBD-specific binding IgG antibodies by day 14 after one dose and increased levels 7 days after the second dose. On day 28, the RBD-binding IgG geometric mean concentrations (GMCs) for BNT162b1 were 20,962 units (U) mL⁻¹ and 48,575 U mL⁻¹ at the 30-μg and 100-μg dose levels, and for BNT162b2 they were 23,781 U mL⁻¹ and 26,170 U mL⁻¹ at the 30-μg and 100-μg dose levels, respectively. Compared to the GMC of RBD-binding IgG in a panel of 38 SARS-CoV-2-convalescent human sera (602 U mL⁻¹), the GMCs of the inoculated primates were higher after one or two doses.
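Throughout these results, antibody levels are summarized as geometric means (GMT or GMC): the antilog of the mean of the log-transformed titers, the natural summary for data spanning orders of magnitude. A minimal sketch with made-up titers, not study data:

```python
import math

def gmt(titers):
    """Geometric mean titer: antilog of the mean of the log titers."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

# Hypothetical serial-dilution titers for four animals.
print(gmt([256, 512, 1024, 2048]))   # -> ~724.1
# The arithmetic mean of the same values is 960; the geometric mean
# damps the influence of the highest responders.
```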
Neutralizing activity was measured from sera collected 7 or 14 days after the second dose using a SARS-CoV-2 neutralization assay [127]. Results showed that animals administered 30 μg and 100 μg of BNT162b1 had reciprocal 50% inhibitory dilution (ID50) GMTs of 768 and 1,714, respectively, and those administered 30 μg and 100 μg of BNT162b2 had ID50 GMTs of 962 and 1,689. To further investigate the antibody responses for viral inhibition, the neutralization GMT of sera collected 21 or 35 days after the second dose from vaccinated animals was compared to the neutralization GMT of human sera from COVID-19 convalescent patients. The GMT neutralization of the macaque sera was substantially higher than that of the human samples (GMT = 94). To determine the protective immunity of the BNT162b2 nanovaccine after one or two doses, ELISpot was employed to evaluate S protein-specific CD4+ and CD8+ T-cell cytokine responses [127]. Results showed strong IFN-γ but low IL-4 responses after the second immunization, and cytokine staining confirmed CD8+ T cell secretion of IFN-γ as well as CD4+ T cell secretion of high IFN-γ, IL-2, or TNF levels but low IL-4 levels, indicating a Th1-biased response. The protective efficacy of both vaccines was further evaluated: macaques (n = 12) previously immunized with either 100 μg of BNT162b1 (n = 6) or BNT162b2 (n = 6) were exposed to a total dose of 1.05 × 10⁶ plaque-forming units (PFU) of the SARS-CoV-2 USA-WA1/2020 strain by the intratracheal and intranasal routes forty-one to fifty-five days after the second vaccine dose was administered [127]. Additionally, control macaques (n = 9), previously immunized with saline, received the same viral challenge. Reverse-transcription quantitative polymerase chain reaction (RT-qPCR) analysis was performed on bronchoalveolar-lavage (BAL) fluid, and viral RNA was found in the control group on day 3 (7 of 9) and on day 6 (4 of 8, with one indeterminate result) after challenge; in contrast, viral RNA was found in BNT162b1-immunized macaques only on day 3 (2 of 6) and was not detected in BNT162b2-immunized macaques at any of the time points. In addition, nasal, oropharyngeal, and rectal swabs were collected, and RT-qPCR analysis showed viral RNA in the control group on the day after challenge (4 of 9) and in BNT162b2-inoculated macaques (5 of 6), but not in BNT162b1-inoculated macaques. Subsequent nasal swabs showed a decrease in viral RNA detections in the control macaques at each sampling time point, a single detection on day 6 in the BNT162b1-inoculated macaques, and no detection in the BNT162b2-inoculated macaques at any time point. Similar patterns were observed in the oropharyngeal and rectal swabs, corroborating the previous results. Analysis of the SARS-CoV-2 neutralizing titers of the inoculated and control macaques showed values ranging from 208 to 1,185 (BNT162b1) and from 260 to 1,004 (BNT162b2), and undetectable levels for saline [127]. An increase in SARS-CoV-2 neutralizing titers was observed in the control macaques as a response to the viral challenge; no such increase was observed in macaques inoculated with either vaccine, confirming suppression of SARS-CoV-2 infection. Histological examination of the animals' lungs showed localized areas of inflammation among all groups, including the control, leading to the conclusion that this primate model primarily reproduces SARS-CoV-2 infection rather than COVID-19 disease.
Phase I/II Clinical Trials

Using the PEGylated lipid nanoparticle system and based on the preclinical results, a combined Phase I/II (NCT04380701), randomized, placebo-controlled, observer-blinded clinical study among healthy adults was initiated to determine the effective dosage, safety, tolerability, and immunogenicity [128]. The vaccine was first administered in a population aged 18 to 55 years, with a subsequent stage for people aged 65 to 85 years. Three different dose levels of the BNT162a1, BNT162b1, and BNT162b2 vaccines following a prime/boost (P/B) regimen were under evaluation; in a separate cohort, the BNT162c2 vaccine was administered using a single-dose (SD) regimen. The design of the BNT162b1 vaccine is based on the nanoencapsulation of modRNA encoding the RBD of a trimerized SARS-CoV-2 S protein [129]. It has previously been shown that the addition of a trimerization "foldon" derived from bacteriophage T4 fibritin promotes the formation of trimers that allow the presentation of multiple sites for protein-protein interactions [130]; the genetic material that encodes the RBD antigens was therefore designed with this modification to increase immunogenicity. The clinical results from three groups of subjects between 18 and 55 years old who were intramuscularly inoculated with the BNT162b1 vaccine at escalating dose levels (10, 30, and 100 µg), plus one placebo group, were reported [129]. The first group (n = 12) received two injections of 10 µg on days 1 and 21; the second group (n = 12) received two injections of 30 µg on days 1 and 21; the third group (n = 12) received a single injection of 100 µg on day 1; and the fourth group (n = 9) received two doses of placebo (control) on days 1 and 21. Within seven days after the first and second doses, localized pain at the injection site was the most frequent reaction, of mild to moderate severity in all treatment groups, except for one participant who reported severe pain after the first administration of 100 µg. Other reported symptoms included muscle and joint pain, fatigue, headaches, and chills. Fever was reported after the first and second administrations of BNT162b1; in the 100-µg group, 50% of participants presented such side effects, and based on this the researchers decided not to administer a second dose at this dose level. In the other groups, only 8.3% had fever, which was self-limited after 1 day, with no other serious adverse effects reported. The concentrations of RBD-binding IgG and SARS-CoV-2 neutralizing titers were evaluated before (day 0) and after the first dose was administered [129]; titers were assessed again 7 and 14 days after the administration of the second dose. By day 21 after the first dose, the GMCs of RBD-binding IgG for the three dosages (10, 30, and 100 µg) were 534-1,778 U mL⁻¹, compared to 602 U mL⁻¹ for the convalescent sera obtained from 38 subjects (18 to 83 years old) 14 days after a confirmed COVID-19 diagnosis. Recipients of the 10-µg dose presented RBD-binding IgG GMC levels similar to those found in the convalescent sera of COVID-19 patients, whereas the 30- and 100-µg groups had significantly higher titer levels than the convalescent serum panel GMC. Seven days after the second dose, the levels increased for the 10- and 30-µg dose groups (4,813-27,872 U mL⁻¹), and highly elevated concentrations persisted until day 35 (5,880-16,166 U mL⁻¹) [129].
These results represent an ~8.0-fold to ~50-fold increase in the RBD-binding IgG GMCs compared to the convalescent serum panel GMC. RBD-binding antibody titers did not increase for the 100-µg dose group beyond 21 days after the first vaccination. Twenty-one days after the first dose of BNT162b1 was administered, a modest increase in SARS-CoV-2 neutralizing GMTs was seen in all treatment groups. Seven days after the second dose of 10 or 30 µg, serum neutralizing GMT levels 1.8-fold and 2.8-fold higher than those found in the convalescent serum panel from SARS-CoV-2-infected patients were detected. No significant difference in immunogenicity was found between the 30- and 100-µg groups, and the authors concluded that a dose range between 10 and 30 µg was well tolerated and produced significant neutralizing titers against SARS-CoV-2. The antibody and T cell responses to the BNT162b1 vaccine were studied in a second non-randomized, open-label Phase I/II (NCT04380701) clinical trial in a population of healthy adults aged 18 to 55 years [131]. Results indicated that after the administration of two doses of 1 or 50 µg of the vaccine, strong antibody, CD4+, and CD8+ T cell responses were observed. RBD-binding IgG concentrations were quantified, and superior levels were detected when compared to those found in the COVID-19 convalescent human serum panel. On day 43, SARS-CoV-2 serum neutralizing GMTs were 0.7-fold (1-µg dose group) and 3.5-fold (50-µg dose group) those of the convalescent human serum panel. Th1-skewed T cell immune responses with RBD-specific CD8+ and CD4+ T cell expansion were observed in most subjects, and IFN-γ production was detected in both immune cell types. These results indicate that the BNT162b1 vaccine elicits a protective response against SARS-CoV-2.

Phase II/III Clinical Trials

As the Phase I/II clinical trial showed promising results, Pfizer-BioNTech decided to advance BNT162b2 into a combined Phase II/III (NCT04368728) human clinical trial [125,132]. BNT162b2, which nanoencapsulates modRNA encoding the SARS-CoV-2 full-length S protein, was initially administered at a 30-µg dose level in a two-dose regimen. The study was estimated to involve 30,000 subjects from 18 to 85 years old across 120 sites globally. In December 2020, the Phase II/III clinical trial results showed an overall 95% protective efficacy against COVID-19 for the BNT162b2 vaccine [133]. This multinational, randomized, placebo-controlled, observer-blinded clinical trial consisted of a total of 43,548 volunteers (≥16 years old) who were randomly assigned at 152 sites worldwide in a 1:1 ratio to receive either the intramuscular vaccine (30 µg) or a placebo, in a two-dose regimen administered 21 days apart. The study evaluated the efficacy, safety, and immunogenicity of the vaccine for the prevention of COVID-19 illness with onset at least 7 days after the second inoculation in participants who were healthy or had a stable chronic condition (including HIV, hepatitis B, or hepatitis C virus infection), had not previously been infected by SARS-CoV-2, and had not received immunosuppressive therapy. After screening, a total of 43,448 participants received either BNT162b2 or placebo in a 1:1 ratio, and data were collected up to October 9, 2020, from a total of 37,706 participants [133]. The mean age of the participants was 52 years; 49% were female, 42% were older than 55 years, 35% were obese, and 21% had at least one co-existing medical condition.
The racial and ethnic proportions of participants were White (83%), Black or African American (9%), and Hispanic or Latinx (28%). In the primary efficacy analysis, among participants with no evidence of existing or prior SARS-CoV-2 infection, 162 cases of COVID-19 with onset at least 7 days after the second dose were identified in the placebo group, while only 8 cases were found in subjects who received the BNT162b2 vaccine, resulting in an efficacy of 95% (95% CI, 90.3 to 97.6) [133]. These results met the prespecified success criteria and greatly exceeded the minimum FDA criterion (primary efficacy > 30%) for authorization. Among participants with or without evidence of prior SARS-CoV-2 infection, nine vaccine recipients exhibited symptoms of COVID-19 at least 7 days after the second dose, against 169 individuals among the placebo recipients (94.6% vaccine efficacy). Cases of COVID-19 with onset after the first inoculation but before the second were identified in 39 individuals in the vaccine group and 82 in the placebo group (52% vaccine efficacy), indicating early protection starting as soon as 12 days after the first dose. Safety evaluations considered specific local or systemic reactogenicity and the use of pain medication within 7 days after each inoculation of vaccine or placebo, as well as unsolicited adverse events through 1 month and through 6 months after the second dose (data through 14 weeks after the second dose are reported) [133]. The most common adverse reactions in participants 16 to 55 years old after the first dose of vaccine or placebo were injection-site pain (83% or 14%), fatigue (47% or 33%), and headache (42% or 34%). In participants over 55 years old, the adverse reactions after the first dose of vaccine or placebo were injection-site pain (71% or 9%), fatigue (34% or 23%), and headache (25% or 18%). Most side effects in both the vaccine and placebo groups were mild to moderate and were less common in the older vaccine groups. Severe adverse events were identified in 0.6% of vaccine recipients and 0.5% of placebo recipients, and both local and systemic reactogenicity resolved shortly after injection [133]. Although other adverse events included shoulder injury, lymphadenopathy, paroxysmal ventricular arrhythmia, and right leg paresthesia, only a few participants were withdrawn because of serious adverse events. Six participants died during the trial; nevertheless, none of the deaths were considered a consequence of either the vaccine or the placebo. Overall, reactogenicity events were transient and resolved within a couple of days after onset, and serious adverse events were minimal. As has been reported, the current BNT162b2 efficacy against new SARS-CoV-2 variants could be compromised [135]; for that reason, the implementation of an updated vaccine boost is under clinical review.
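The efficacy figures above follow from the case split between the two arms. Below is a minimal sketch of the relative-risk-reduction calculation, a reasonable approximation here because 1:1 randomization gives roughly equal person-time per arm; the published figures adjust for surveillance time, hence the small differences:

```python
def vaccine_efficacy(cases_vaccine, cases_placebo):
    """VE = 1 - (attack rate in vaccine arm / attack rate in placebo arm),
    assuming equal person-time in both arms."""
    return 1.0 - cases_vaccine / cases_placebo

print(f"{vaccine_efficacy(8, 162):.1%}")    # primary analysis        -> 95.1%
print(f"{vaccine_efficacy(9, 169):.1%}")    # with/without prior inf. -> 94.7%
print(f"{vaccine_efficacy(39, 82):.1%}")    # after dose 1, pre dose 2 -> 52.4%
```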
Moderna SARS-CoV-2 Vaccine

Moderna is a biotechnology company based in Cambridge, Massachusetts, which developed a vaccine candidate (mRNA-1273) by nanoencapsulating into LNPs a modRNA sequence encoding the S protein of SARS-CoV-2 with two proline substitutions at residues 986 and 987 (SARS-CoV-2 S-2P) [136]. The composition of the nanodelivery system includes ionizable lipids, distearoylphosphatidylcholine, cholesterol, and polyethylene glycol (molar ratio 50:10:38.5:1.5), fabricated using an ethanol nanoprecipitation method. The nanoparticle size of this vaccine is between 80 and 100 nm, with an mRNA encapsulation efficiency above 90%.

Preclinical Studies

Prior to human clinical trials, the antigenicity and immunogenicity of this vaccine candidate were confirmed in vitro and in vivo in several murine strains [136]. Preclinical studies were performed in BALB/cJ, C57BL/6J, and B6C3F1/J mice by administering 0.01, 0.1, or 1 μg of the mRNA-1273 nanovaccine using a two-dose regimen at a 3-week interval. Results showed a dose-dependent response of S-specific binding antibodies in all mouse strains after the first (prime) and second (boost) doses of the vaccine were administered. The administration of 1 μg of mRNA-1273 was enough to elicit a potent neutralizing response, based on the reciprocal IC50 GMT results, similar to that observed when BALB/cJ mice were immunized with 1 μg of Sigma Adjuvant System (SAS)-adjuvanted S-2P protein. Based on these results, a wider dose range of mRNA-1273 (0.0025-20 μg) was investigated in BALB/cJ mice, and dose-dependent correlations between the binding antibodies induced by mRNA-1273 and the neutralizing antibodies were observed. To further explore the potential clinical utility of a single-dose vaccination regimen, BALB/cJ mice were immunized with 1 or 10 μg of the nanoencapsulated vaccine mRNA-1273. Within 2 to 4 weeks, the group administered 10 μg developed solid and increasing neutralizing antibody responses, as determined by the reciprocal IC50 GMT. These data confirmed the efficacy of the SARS-CoV-2 S-2P mRNA nanovaccine in producing neutralizing antibodies with a single dose. Th1 and Th2 responses were evaluated by comparing the levels of S protein-specific IgG2a/c and IgG1 between the nanovaccine group and a group administered the SARS-CoV-2 S-2P protein with the TLR4-agonist SAS [136]. The Th1/Th2 response observed was balanced in both groups, as S-binding antibodies, IgG2a and IgG1, were elicited by both immunogens. A single administration of mRNA-1273 resulted in an S-specific IgG subclass profile similar to that of the two-dose regimen. A different group was intramuscularly administered the SARS-CoV-2 S protein in combination with 250 μg of alum hydrogel, resulting in lower IgG2a/IgG1 subclass response ratios, that is, Th2-biased antibodies. Based on these results, the group concluded that the mRNA-1273 vaccine does not generate Th2-biased responses, which have been related to vaccine-associated enhanced respiratory disease and observed in children vaccinated with whole-inactivated measles or respiratory syncytial virus vaccines [137,138]. An immunogenicity analysis was performed on splenocytes obtained from mRNA-1273-immunized animals and re-stimulated with different peptide pools (S1 and S2) corresponding to the S protein of the virus [136]. After stimulation, IFN-γ secretion was increased compared to other cytokines such as IL-4, IL-5, or IL-13, while the group stimulated with SARS-CoV-2 S protein combined with alum showed a skewed Th2 cytokine secretion. After 7 weeks, intracellular cytokine staining was used to measure the cytokine patterns induced by mRNA-1273 in memory T cells. A dominant Th1 response was found in CD4+ T cells when higher immunogen doses were present; in the case of CD8+ T cells, a robust immune reaction to the S1 peptide pool was observed with the 1-μg dose of mRNA-1273. These results indicate that mRNA nanovaccination is able to induce a balanced Th1/Th2 response, in contrast to the co-administered S protein and alum group, where the Th2 response was dominant.
To determine the protective immunity of the nanovaccine, adult BALB/cJ mice were exposed to a mouse-adapted (MA) SARS-CoV-2 that shows localized viral replication in the nasal airways and lungs [136]. Two doses of 1 μg of mRNA-1273 were administered, and the animals were found to be fully protected, as viral replication was undetectable in the lungs after challenge at 5- and 13-week intervals following the boost. Viral replication was undetectable in nasal turbinates in 6 out of 7 animals. For the 0.01 and 0.1 μg dose levels, a dose-dependent efficacy was observed, as the lung viral load was reduced ~3- and ~100-fold, respectively. Animals challenged with MA SARS-CoV-2 7 weeks after a single dose of 1 or 10 μg of the nanovaccine were completely protected against MA SARS-CoV-2 replication in the lungs. These results confirmed the immunogenicity and efficacy of mRNA-1273 in a murine model and positioned this prototype as a robust SARS-CoV-2 vaccine candidate. In a second preclinical study, nonhuman primates were selected to test the neutralizing response and protective ability of this mRNA vaccine [139]. Rhesus macaques (n = 24, 12 per sex) were divided into three groups and given two intramuscular inoculations (at a 4-week interval) of 10 or 100 μg of mRNA-1273 in 1 mL of 1X PBS, depending on the group. An unvaccinated control group was administered the same volume of PBS without mRNA. Results showed a dose-dependent response of S-specific binding IgG antibodies after two vaccinations [139]. Neutralizing activity in animals that received 100 μg of mRNA-1273 reached ID50 GMTs that were 5 and 18 times greater than those of the 10 μg dose group after the first and second vaccination, respectively. To further investigate the antibody responses for viral inhibition, serum from vaccinated animals was collected and compared to serum from COVID-19 convalescent human samples. Results analyzed by enzyme-linked immunosorbent assay (ELISA) showed that the inhibition of ACE2 binding to the RBD by serum from 100-μg vaccinated animals was 938 and 348 times higher than that of the control animal group and human convalescent serum, respectively. Additionally, to assess the nanovaccine's potential to recognize multiple functional S domains, additive neutralizing activity, post-attachment fusion inhibition, and binding of the NTD of S1 were evaluated. A higher S1 NTD-specific antibody response was elicited by the nanovaccine than observed in the human convalescent serum. Finally, the neutralizing activity was evaluated for 10 and 100 μg vaccinated animals using a live SARS-CoV-2 reporter virus, resulting in ID50 GMTs 12 and 84 times higher than those observed in human convalescent sera. To further characterize the immune response to the nanovaccine, flow cytometry was employed to evaluate the functional heterogeneity of CD4+ and CD8+ T-cell cytokine responses specific for the S protein [139]. Because robust antibody responses to modRNA vaccines had previously been associated with CD4+ Tfh cells, interleukin-21 was also measured. Results showed a dose-dependent increase in Th1 and interleukin-21-producing Tfh cellular responses, and low or undetectable CD8+ T-cell and Th2 responses at both dose levels of the vaccine.
The protective efficacy of the vaccine was further evaluated by exposing the macaques to a total dose of 7.6 × 10⁵ PFU of the SARS-CoV-2 USA-WA1/2020 strain by the intratracheal and intranasal routes four weeks after the second vaccine dose was administered [139]. Polymerase chain reaction (PCR) analysis was performed on BAL fluid, and subgenomic RNA was found in all the groups at different times; nevertheless, lower levels were observed in the 10 and 100 μg groups compared to the control group. Post-challenge S- and N-specific IgG antibody levels in BAL fluid were measured to investigate the immune mechanism against viral replication in the lungs, and a dose-dependent increase in S-specific IgG antibody levels was detected in vaccinated animals [139]. S-specific IgG responses were higher than IgA responses. No anamnestic response was observed: at 2 weeks post-challenge, humoral S- and N-specific IgG levels were stable in vaccinated animals, in contrast with the increased antibody levels observed in the control group. Histological examination was performed on the lungs of the animals, and the 10 μg group presented mild inflammation without viral RNA. No substantial inflammatory response was found in the 100 μg group, as no viral RNA or antigens were detected. These results confirmed that no immunopathologic changes associated with the vaccine were present.

Phase I/II Clinical Trials

After obtaining promising preclinical results, a human Phase I (NCT04283461) open-label, dose-escalation clinical trial was initiated to evaluate the safety and reactogenicity of mRNA-1273 in 45 healthy adults (18 to 55 years old) [140][141][142]. The participants were intramuscularly inoculated with 25, 100, or 250 µg of mRNA-1273 (n = 15 per group) in a prime-boost regimen 28 days apart. On days 29 and 57 after the first and second inoculation, the anti-S-2P antibody GMT was measured by ELISA, and results showed that higher mRNA-1273 doses elicited stronger antibody responses. The GMT levels after the first inoculation were 40,227 for the 25 µg group, 109,209 for the 100 µg group, and 213,526 for the 250 µg group, and the GMT levels after the second inoculation were 299,751 for the 25 µg group, 782,719 for the 100 µg group, and 1,192,154 for the 250 µg group. Serum neutralizing activity was measured by both a pseudotyped lentivirus reporter single-round-of-infection neutralization assay (PsVNA) and a live wild-type SARS-CoV-2 plaque-reduction neutralization testing (PRNT) assay. PsVNA responses were identified in half of the participants after the first vaccination, and in all the participants after the second vaccination [142]. The geometric mean ID50 levels at day 43 were 112.3 for the 25 µg group, 343.8 for the 100 µg group, and 332.2 for the 250 µg group, placing them in the upper half of the distribution of values for human convalescent serum specimens. In parallel, at day 43, all participants had developed wild-type virus-neutralizing activity able to reduce SARS-CoV-2 infectivity by 80% or more (PRNT80) (geometric means of 339.7 for the 25 µg group and 654.3 for the 100 µg group). These responses were higher than the values of three convalescent serum specimens tested. When stimulated by S-specific peptide pools, the 25 and 100 µg groups showed CD4+ T cell responses in line with Th1 cytokines (TNF-α, IL-2, and IFN-γ). Th2 cytokine expression (IL-4 and IL-13) was minimal in these two groups.
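The geometric mean titers (GMTs) quoted throughout these trial summaries are straightforward to compute from individual titers. The snippet below is a generic illustration with made-up titer values, not data from the studies cited above.

```python
import math

def geometric_mean_titer(titers):
    """Geometric mean of a list of positive titers (e.g., reciprocal ID50 values)."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

# Hypothetical reciprocal ID50 titers for a small group (illustrative only)
example_titers = [120, 340, 95, 410, 260]
print(round(geometric_mean_titer(example_titers), 1))
```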
Adverse events such as localized pain at the injection site, nausea, arthralgia, fatigue, chills, headache, myalgia, erythema, and induration were observed after vaccine administration and were mostly mild to moderate. After the first vaccination, the vaccine was considered safe, with mild-to-moderate adverse systemic events in 33%, 67%, and 53% of the 25, 100, and 250 µg groups, respectively; after the administration of the second dose, mild-to-moderate adverse systemic events were observed in 54%, 100%, and 100% of the participants in the 25, 100, and 250 µg groups. Severe adverse events were found in three participants (21%) from the high-dose (250 µg) group. Fever was present after the second vaccination in 0%, 40%, and 57% of the 25, 100, and 250 µg groups, respectively. It was concluded from these results that the mRNA-1273 vaccine was safe and able to generate immunogenic responses. On 29 May 2020, a randomized, observer-blinded, placebo-controlled Phase II clinical study (NCT04405076) was initiated to evaluate the safety and immunogenicity of mRNA-1273 in 600 healthy adults (≥18 to <55 years, n = 300; and ≥55 years old, n = 300) [143,144]. Participants in each age group were randomly assigned to receive intramuscular inoculations of either 50 or 100 μg of mRNA-1273 or a placebo in a two-dose regimen, 28 days apart. Anti-SARS-CoV-2 spike antibody GMTs were measured by ELISA on days 1 and 29 after the first inoculation, and on days 43 and 57 after the second inoculation. Results showed that 14 days following the second vaccination (day 43), antibody levels were significantly enhanced to maximum GMTs that exceeded those of convalescent sera, remaining elevated through day 57. On day 43, GMT levels (95% CI) were 1733 and 1909 µg mL⁻¹ at the 50 µg and 100 µg mRNA-1273 dose levels in younger adults, and 1827 and 1686 µg mL⁻¹ at the 50 µg and 100 µg dose levels in older adults [144]. The most common adverse events were localized pain at the injection site, headache, and fatigue following each vaccination in both age cohorts [144]. Local and systemic adverse reactions were mostly mild to moderate in severity and occurred at higher frequencies after the second dose, and one serious adverse event that occurred 33 days post-vaccination was judged to be unrelated to vaccination. These observations were consistent with previous reports from the Phase I clinical studies. After these results, it was concluded that the mRNA-1273 vaccine was safe and able to generate immunogenic responses. On December 18, 2020, Moderna received investigational new drug approval from the FDA, and the initial placebo-controlled, dose-confirmation Phase I clinical trial was expanded to include older adults (56 to 70 years old or ≥ 71 years; n = 40) [145]. This study evaluated the safety, reactogenicity, and immunogenicity of a prime-boost regimen of two dose levels (25 μg, n = 10 per age group; or 100 μg, n = 10 per age group) of the mRNA-1273 vaccine, administered 28 days apart. On days 29 and 57 after the first and second inoculation, the anti-S-2P antibody GMT was measured by ELISA, and results showed that antibody responses increased at higher mRNA-1273 doses [145]. GMT levels after the second inoculation in the 25 µg group were 323,945 for adults between 56 and 70 years old and 1,128,391 among older adults (≥ 71 years). In the 100 µg group, GMT levels after the second inoculation were far superior to those observed in convalescent sera (GMT = 138,901): 1,183,066 among participants in the 56-70 years old group and 3,638,522 among those 71 years or older.
Pseudovirus neutralization assay responses were identified 7 days after the second dose in participants regardless of age [145]. At day 43, the geometric mean ID50 titers to 614D induced by the 100 µg dose level were 402 among adults between 56 and 70 years of age, similar to the value of 317 observed in adults 71 years of age or older. Responses in the 100 µg subgroups were higher than those in the 25 µg subgroups and above the values in human convalescent serum. On day 43, strong neutralization responses were observed in all participants by nLuc HTNA, similar to those detected by FRNT-mNG. In parallel, on day 43, PRNT80 geometric mean levels were 878 among adults between 56 and 70 years of age, and 317 among adults 71 years of age or older. All neutralization assays (nLuc HTNA, FRNT-mNG, and PRNT) correlated well with one another, with the exception of the correlation between PRNT and ELISA results. When stimulated with S-specific peptide pools, participants in both age groups (56 to 70 and ≥ 71 years) that received the 25-μg or 100-μg dose showed CD4+ T cell responses in line with Th1 cytokines (TNF-α, IL-2, and IFN-γ) [145]. Th2 cytokine expression (IL-4 and IL-13) was minimal in these groups, similar to the results observed in the Phase I trial. Dose-dependent adverse events such as localized pain at the injection site, fatigue, chills, headache, and myalgia were observed after vaccination and were mostly mild to moderate. After the first inoculation, the vaccine was considered safe, with mild adverse systemic events in 30% of the 25 and 100 µg groups (56-70 years), and 50% and 30% of the 25 and 100 µg groups (≥ 71 years), respectively; after the second vaccination, mild-to-moderate adverse systemic events were observed in 70% and 88.9% of the 25 and 100 µg groups (56-70 years), and 30% and 70% of the 25 and 100 µg groups (≥ 71 years), respectively. Severe adverse events were found in only two cases: fatigue in one participant (10%) who was 71 years of age or older in the 100-μg dose subgroup, and fever in one participant (10%) between 56 and 70 years old from the 25 µg subgroup after the second vaccination. Together with the Phase II results, it was concluded from this study that the 100-μg dose of the mRNA-1273 vaccine was safe, able to generate immunogenic responses, and suitable to proceed to Phase III clinical trials.

Phase III Clinical Trials

In late July 2020, the Coronavirus Efficacy (COVE) Phase III clinical trial (NCT04470427) was initiated to evaluate the mRNA-1273 vaccine in preventing SARS-CoV-2 infection. Early results from this randomized, stratified, placebo-controlled, observer-blinded Phase III clinical trial, conducted at 99 centers across the USA, were available in February 2021 [146]. The COVE Phase III clinical trial enrolled 30,420 volunteers who were randomly assigned in a 1:1 ratio to receive either the intramuscular vaccine (100 µg) or a placebo, in a two-dose regimen administered 28 days apart [146]. This study evaluated the efficacy, safety, and immunogenicity of the vaccine for the prevention of COVID-19 illness with onset at least 14 days after the second inoculation in participants who had not previously been infected by SARS-CoV-2. The mean age among participants was 51.4 years, 47.3% were female, 24.8% were older than 65 years, and 16.7% were under the age of 65 and had high-risk chronic diseases that increased the probability of developing severe COVID-19, such as diabetes, severe obesity, or cardiac disease.
The racial and ethnic proportions of participants were White (79.2%), Black or African American (10.2%), and Hispanic or Latinx (20.5%). The efficacy of the vaccine was evaluated according to protocol mRNA-1273-P301, based on the most recent clinical study protocol (CSP) Amendment 3 [146]. Immunogenicity analyses included serum binding antibody levels against SARS-CoV-2, as measured by ELISA specific to the SARS-CoV-2 S protein, and tests such as the VAC58 Spike IgM antibody, VAC58 Spike IgA antibody, and VAC65 Spike IgG antibody assays. Additionally, serum neutralizing antibody titers against SARS-CoV-2 were measured by pseudovirus and/or live virus neutralization assays, including tests such as PsVNT50, PsVNT80, and MN50 (live virus neutralization assay). In the primary analysis for the efficacy of the vaccine, 11 symptomatic COVID-19 cases were identified in the vaccine group and 185 in the placebo group. Secondary efficacy analyses included the identification of severe COVID-19 among participants: 30 placebo recipients developed severe COVID-19 disease, while no cases were observed in the vaccine group. The vaccine showed 94.1% efficacy, with similar results in secondary analyses, including those that incorporated participants who had evidence of infection at baseline and analyses restricted to participants 65 years of age or older. Reactogenicity after vaccination was transient and mostly moderate, occurring more often in the mRNA-1273 group. Safety evaluations considered participants' solicited adverse events after each inoculation, as well as unsolicited adverse events after the second administration [146]. The most common adverse reaction after the two-dose series was injection site pain (84.2%), and systemic adverse events occurred more often in the vaccine group than in the placebo group after both the first dose (54.9% vs. 42.2%) and the second dose (79.4% vs. 36.5%); the most common unsolicited events included headache (1.5% vaccine group, 1.2% placebo) and fatigue (1.4% vaccine group, 0.9% placebo). While many of these adverse events were mild (grade 1) or moderate (grade 2), there was a higher occurrence of severe (grade 3) reactions in the vaccine group after the first (2.9%) and second (15.8%) inoculations. Most local adverse events occurred within the first one to two days after injection and generally persisted for a median of one to two days. The adaptability of mRNA-based vaccines allows the manufacturing of specific nanovaccines against new SARS-CoV-2 variants. For that reason, Moderna will conduct further clinical studies to evaluate mRNA-1273.351 against the B.1.351 variant, and will continue to evaluate a third (booster) dose of mRNA-1273 to determine its efficacy against other new variants that have emerged around the world [147].

Nanoparticle-Based Vaccines to Deliver Nucleic Acids

As discussed above, Pfizer-BioNTech's and Moderna's vaccines were the first vaccines to receive emergency use authorization from the FDA. For the development of their respective vaccines, Pfizer-BioNTech and Moderna used mRNA that encodes genetic variants of the SARS-CoV-2 spike protein that are more stable and immunogenic than the natural protein. To further increase the efficacy of the formulation, both vaccines use LNPs to encapsulate the mRNA molecules, providing them with stability and protection.
Advantages offered by this nanosystem include the use of biodegradable lipids, improving safety and tolerability. Other features include the incorporation of multifunctional lipids that act as adjuvants to boost vaccine efficacy [148]. Although the immunogenicity of mRNA might represent a safety concern, Pfizer-BioNTech's and Moderna's vaccines include chemical modifications of nucleotides (N1-methyl-pseudouridine) that reduce mRNA instability and the innate immune responses triggered by exogenous mRNA translation [148,149]. Protein-based vaccines offer different advantages, as the protein subunits are readily processed into antigens by APCs, avoiding the need for intracellular translation. However, exposed antigens are subject to enzymatic degradation before being recognized by immune cells, which negatively impacts vaccine effectiveness, and multiple booster doses are usually required. By including adjuvants, however, this intrinsic disadvantage of peptide-based vaccines is countered, allowing the induction of stronger immune responses [150]. When comparing mRNA and peptide-based vaccines, one key difference lies in the higher adaptability of mRNA platforms to emergent virus variants or future pandemics. A main concern with genetic vaccines is their safety profile; nevertheless, mRNA vaccines have proven to be non-infectious and cannot be integrated into the host genome [151]. Another difference is the storage conditions for each type of vaccine. Protein-based vaccines are stable at temperatures in the range of 2 to 8 °C, whereas Moderna's and Pfizer-BioNTech's mRNA vaccines require temperatures of −20 °C and −60 to −80 °C, respectively, to remain stable for 6 months. Although it has been possible to store mRNA vaccines at 2 to 8 °C for 30 days, these conditions still represent a significant challenge for an equitable vaccine distribution to many developing countries [101,152]. As third doses and seasonal vaccine boosters may become necessary, worldwide efforts continue to ensure health and safety, including the approval of children's vaccination [153][154][155].

Health and Economic Impact

Around the world, biotechnology companies and academic institutions accelerated the development of nanotechnology-enabled vaccines with the aim of solving the existing health and economic crisis. It is imperative to accelerate the manufacturing and distribution of vaccines in a sustainable and equitable manner to rapidly decrease the economic and health impact caused by this pandemic, especially among the most vulnerable and health professionals. A significant difference between the mRNA vaccines is the dosage required: Moderna's vaccine requires 100 μg, whereas Pfizer-BioNTech's vaccine requires 30 μg. Higher dosages also represent a limitation for mass production. In terms of efficacy against the original SARS-CoV-2, Pfizer-BioNTech's vaccine possesses the highest (95%), followed by Moderna's (94.1%). In a globalized world, a lack of vaccine supply for low-income countries, which are the ones that would likely suffer from limited access, would not only affect public health but would also take a substantial toll on the global economy. Further consequences would be a poor and slow stabilization of the productivity and economy of the worldwide population, as the World Bank estimated a 5.3% contraction of the global gross domestic product (GDP) in 2020 [156].
All these effects are aggravated because a significant percentage of patients who recover from COVID-19 will end up with a potential disability due to organ damage, as early evidence has shown that 75.4% of survivors present abnormalities in lung function [157]. Additionally, 50% of intensive care unit (ICU) patients will experience post-intensive care syndrome (PICS), which includes post-traumatic stress disorder, anxiety, depression, fatigue, insomnia, decreased memory, poor concentration, and difficulty talking, and these factors contribute to a loss of economic productivity [158]. Thus, innovation and equitable manufacturing and distribution of effective vaccines will prevent new infections and significantly spur the economic outlook and recovery of societies around the world. Most vaccines developed or under development are based on injectable solutions. In the near future, inhalable vaccines could be created for self-administration without medical assistance, facilitating their distribution and benefiting people worldwide, especially those with less developed or limited medical access [159,160]. Even when vaccines are available, social distancing, the use of face coverings, and personal hygiene are recommended to control the spread of the pandemic in non-vaccinated populations [161,162]. Planning security protocols for diverse industrial sectors will also be pivotal to lessen adverse consequences and prevent new outbreaks [163]. In addition, it is crucial to consider the continued emergence of SARS-CoV-2 lineages and their impact on vaccine efficacy. As an example, the lineage B.1.1.7 has been associated with a higher viral load and a reduced neutralization by RBD-specific and NTD-specific neutralizing antibodies (16% and 90%, respectively) [164,165]. In the case of lineage B.1.351, concurrent mutations have been hypothesized to overcome the polyclonal antibody response, owing to the significant reduction of neutralizing activity in sera collected from patients recovered from COVID-19 [26,166]. Besides the effects of mutations on antibody binding and neutralization activity, the biological impact of mutated SARS-CoV-2 variants on T cell reactivity has also been investigated [167]. Tarke et al. directly compared the SARS-CoV-2-specific CD4+ and CD8+ T cell responses of COVID-19 convalescent patients previously infected by the B.1.1.7, B.1.351, P.1, and CAL.20C (B.1.427/B.1.429) viral lineages, as well as of recipients of either the mRNA-1273 or BNT162b2 vaccine [168]. Results showed a decrease of 10-22% in total reactivity, in terms of magnitude and frequency of response, against some VOC combinations. As new variants continue to emerge, information from previous mutations will help to predict the altered antigenicity or transmissibility of new viral lineages [169]. Given the urgent need to increase the production and distribution of vaccines against COVID-19, monetary support for scientific and industrial infrastructure is of utmost importance. These efforts will decrease mortality and disability around the world and increase productivity rates across nations, with lasting long-term economic and health benefits. Monitoring and evaluating long-term adverse effects associated with the current mRNA vaccines will also be crucial; however, current clinical evidence supports their short- and long-term biosafety, as severe side effects are rare [170,171]. To date, two cases of thrombosis with thrombocytopenia syndrome (TTS) have been confirmed following Moderna's mRNA vaccination [171].
Other rare side effects reported after mRNA COVID-19 vaccination are myocarditis and pericarditis. These conditions, however, have been reported as rare events, appearing in 0.1% of vaccinated individuals [172]. As of October 27, 2021, 1,784 reports of myocarditis or pericarditis had been confirmed [171,173]; further studies are necessary to evaluate whether there is a relationship between these rare adverse events and COVID-19 mRNA vaccination [174]. An additional concern during vaccine administration is the antibody-dependent enhancement (ADE) effect, which has previously been documented for some viral infections (e.g., respiratory syncytial and dengue viruses), as it may decrease vaccine success and exacerbate disease [175]. Zhou et al. demonstrated that one group of RBD-specific antibodies found in one convalescent donor induced ADE of entry in Raji cells via an Fcγ receptor-dependent mechanism [176]. Nevertheless, this study concluded that although the ADE effect for coronaviruses may be observed in vitro, a pathological relevance during SARS-CoV-2 infection seems unlikely [176].

Conclusion

SARS-CoV-2 represents a severe health risk to the world's population, and a rapid and multidisciplinary response has been necessary to overcome this pandemic. At the intersection of engineering and biology, medical nanotechnology, or nanomedicine as it is more commonly called, has proven to be a versatile field that allows researchers to implement novel nanoscopic strategies against newly evolving pathogens. Several biotechnology companies and academic institutions used nanotechnology for the successful design and development of mRNA vaccines against SARS-CoV-2. Based on these results, it is evident that the nanotechnology field has played a pivotal role in preventing further mortality, curtailing the pandemic, and aiding the economic recovery of the world.

Funding Open access funding provided by Shanghai Jiao Tong University. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
16,415
2022-01-03T00:00:00.000
[ "Medicine", "Engineering", "Materials Science" ]
Nighttime magnetic perturbation events observed in Arctic Canada: 3. Occurrence and amplitude as functions of magnetic latitude, local time, and magnetic disturbances

Rapid changes of magnetic fields associated with nighttime magnetic perturbation events (MPEs) with amplitudes |ΔB| of hundreds of nT and 5-10 min periods can induce geomagnetically-induced currents (GICs) that can harm technological systems. In this study we compare the occurrence and amplitude of nighttime MPEs with |dB/dt| ≥ 6 nT/s observed during 2015 and 2017 at five stations in Arctic Canada ranging from 75.2° to 64.7° in corrected geomagnetic latitude (MLAT) as functions of magnetic local time (MLT), the SME and SYM/H magnetic indices, and time delay after substorm onsets. Although most MPEs occurred within 30 minutes after a substorm onset, ~10% of those observed at the four lower latitude stations occurred over two hours after the most recent onset. A broad distribution in local time appeared at all 5 stations between 1700 and 0100 MLT, and a narrower distribution appeared at the lower latitude stations between 0200 and 0700 MLT. There was little or no correlation between MPE amplitude and the SYM/H index; most MPEs at all stations occurred for SYM/H values between -40 and 0 nT. SME index values for MPEs observed more than 1 hour after the most recent substorm onset fell in the lower half of the range of SME values for events during substorms, and dipolarizations in synchronous orbit at GOES 13 during these events were weaker or more often nonexistent. These observations suggest that substorms are neither necessary nor sufficient to cause MPEs, and hence predictions of GICs cannot focus solely on substorms.

Although early studies of nighttime magnetic perturbation events (MPEs) that induce large geoelectric fields and geomagnetically-induced currents (GICs) noted the small-scale character of these events (e.g., Viljanen, 1997), many efforts to predict GICs have continued to focus on global processes (geomagnetic storms and substorms). Recent observational studies have reinforced this picture: individual events also displayed no close or consistent temporal correlation with substorm onsets. Here we present additional analyses of a large number of nighttime MPEs that document the lack of any close correlation between their occurrence and levels of the SME index, the SYM/H index, or of near-tail dipolarizations, and show that a substantial fraction of these events are not temporally associated with substorms. MPEs occurring in the post-midnight sector showed a different dependence on both latitude and prior substorm activity than did the more numerous pre-midnight MPEs. For each of the five stations we sorted the MPE events as functions of several variables: magnetic local time (MLT), the SYM/H index, the SME index (the SuperMAG version of the AE index, described in Newell and Gjerloev, 2011a), and derivative amplitude. Over the range of magnetic latitudes covered in this study (from 75° to 65° MLAT) all ≥ 6 nT/s perturbation events fell into the local time range from 17 to 07 MLT. Figure 4a shows the distribution of these events in MLT: a broad distribution in the dusk-to-midnight sector appears at all latitudes shown, and a distribution in the midnight to dawn sector (2 to 7 MLT) is prominent only at the lower latitude stations. This difference in latitudinal distribution, which is consistent with observations of large ionospheric equivalent current perturbations by Juusola et al.
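The event selection described above amounts to thresholding the time derivative of the magnetic field. The sketch below illustrates one simple way to flag MPE candidates in a magnetometer time series; the 6 nT/s threshold comes from the text, but the synthetic data, sampling cadence, and grouping window are placeholders, not the survey's actual processing pipeline.

```python
import numpy as np

def flag_mpe_candidates(b_nt, dt_s=1.0, threshold_nt_per_s=6.0, gap_s=600):
    """Return [start, end] index ranges where |dB/dt| exceeds the threshold.

    b_nt: 1-D array of one magnetic field component in nT, sampled every dt_s seconds.
    Threshold crossings closer together than gap_s seconds are merged into one event,
    roughly matching the 5-10 min time scale of MPEs.
    """
    dbdt = np.abs(np.diff(b_nt)) / dt_s
    hits = np.flatnonzero(dbdt >= threshold_nt_per_s)
    events = []
    for i in hits:
        if events and (i - events[-1][1]) * dt_s <= gap_s:
            events[-1][1] = int(i)           # extend the current event
        else:
            events.append([int(i), int(i)])  # start a new event
    return events

# Synthetic example: a smooth background plus one sharp ~300 nT excursion
t = np.arange(0, 3600.0)
b = 50.0 * np.sin(2 * np.pi * t / 3600.0)
b[1800:2100] += 300.0 * np.exp(-((t[1800:2100] - 1950.0) / 30.0) ** 2)
print(flag_mpe_candidates(b))
```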
(2015), appears to reflect the latitudinal dependence of the auroral electrojet, which is located at higher latitudes pre-midnight and lower latitudes post-midnight. As will be shown in later parts of this study, the properties of these two populations also differed somewhat in their association with different geomagnetic conditions. Consistent with the distribution of occurrences shown in Table 2, the distribution of events shown with plus signs was somewhat narrower in time and shifted toward slightly later MLT, and a second post-midnight peak (with similar peak occurrences) appeared between 2-3 and 6 h MLT. In contrast, the distributions for events shown with squares and triangles were flat across the entire MLT range shown (but with fewer occurrences). Figure 4b shows that the largest-amplitude MPEs occurred at all 5 stations between 1800 and 2300 h MLT, but derivatives with amplitude at or above 15 nT/s also appeared after 0300 h MLT at both SALU and KJPK. Table 3 shows an analysis of the distribution of these events as a function of time delay when separated into pre- and post-midnight occurrences. In order to clearly separate these categories, pre-midnight events were chosen to include those observed between 1700 and 0100 MLT, and post-midnight events those between 0200 and 0700 MLT. The time delay distributions were similar for pre- and post-midnight events at all 5 stations, but on average over all 5 stations, post-midnight events were slightly more likely to occur within 30 min after substorm onsets than pre-midnight events (70% vs. 66%), and less likely to occur more than 60 minutes after onset (12% vs. 17%). These differences, however, were not statistically significant. At all five stations > 6 nT/s perturbation events occurred over a wide range of SME values, as shown in Figure 6a, but very few events occurred at any station for SME < 200 nT. At the four highest latitude stations a large majority of events in each of the 3 time delay categories occurred for SME values between 200 and 900 nT. This SME range also held at the lowest latitude station (KJPK) for the Δt > 60 min category, but most of the events in the Δt ≤ 30 min category were associated with SME values > 800 nT. However, fewer events occurred for high SME at KJPK (64.7° MLAT) than at SALU (70.7° MLAT); note the differing vertical scales. Figure 6b shows that there was a modest correlation between the amplitude of the largest derivatives and the SME index only over the SME range between 200 and 600 nT at all 5 stations; the distribution of amplitudes was nearly flat for SME > 600 nT at all stations. Most events at all SME values and all 3 time ranges were below 12 nT/s. Only 7 of the 842 total events occurred when SME exceeded 2000 nT. Table 4 shows the number of MPE events at each station that occurred in association with substorm onsets, indicating that most substorms were not associated with large-amplitude MPEs. The percentages at CDR, IQA, and SALU were near the lower end of this range, and those at RBY and KJPK at the higher end. We note the roughly inverse correlation between these percentages and the number of MPE events observed at each station (Table 2). This suggests that the modest differences in magnetic longitude between the five stations were a smaller factor in determining the dependence of MPEs on substorm onsets than the magnetic latitude.
This dependence on MLAT may reflect the limited spatial extent of large MPEs, such that a station farther away from the statistical auroral oval is more likely to detect an MPE with lower amplitude, and thus in many cases one below our selection threshold of 6 nT/s. We also considered the effect of multiple prior substorm onsets separately for MPEs in the two populations shown in Figure 4a: the "pre-midnight" population observed between 1700 and 0100 MLT, and the "post-midnight" population observed between 0200 and 0700 MLT. Table 6 shows the results of applying Pearson's Chi-squared test to the data in Table 5, after reducing the number of prior substorm categories to 3: after 0, 1, and ≥ 2 onsets within 2 hours, respectively. The p values of << 0.05 confirm that the difference between pre-midnight and post-midnight events is statistically significant at all 3 stations. Taken together, these differences indicate a much stronger relation between multiple substorms and subsequent MPEs in the post-midnight sector than in the pre-midnight sector. Table 7 provides additional information on the relation between MPE onset and the level of magnetic disturbance (as represented by the SME index) following multiple substorms. MPEs simultaneous with SME values ≥ 1000 nT generally occurred after two-hour intervals containing from 2 to 4 substorm onsets, and, because of the large difference in total MPE occurrence in each bin between pre-midnight and post-midnight MPEs, the percentage distribution of pre-midnight MPEs simultaneous with SME values ≥ 1000 nT increased greatly as the number of prior substorm onsets increased from 1 to 4, but was more nearly flat for post-midnight events. The overall fractions of pre-midnight MPEs associated with SME values ≥ 1000 nT were 9.2% at IQA, 8.5% at SALU, and 19.4% at KJPK. The corresponding post-midnight fractions were much larger: 70%, 44%, and 52%, respectively. The SME index is well correlated with auroral power (Newell and Gjerloev, 2011a). In general, the relationship among discrete precipitation, ionospheric conductance, and upward FAC density is instantaneous. In contrast, diffuse precipitation has a certain time lag; particles are injected and then later forced to precipitate into the ionosphere. The associated enhancement of ionospheric conductance lasts longer, which is favorable for more tail current to short-circuit through the ionosphere at subsequent substorms. As a result, SME may increase following subsequent substorm onsets. Some of the smaller GOES 13 Bz perturbations, and especially those in the Δt ≥ 60 min category, were associated with brief (few min) transient pulses rather than step functions (dipolarizations). It is difficult to discern whether such pulses arise from spatial or temporal effects. If spatial, GOES 13 may have been rather distant in MLT from the center of a more large-scale dipolarization. If temporal, the perturbation may have been associated with a bursty bulk flow, dipolarization front, and/or pseudobreakup (e.g., Palin et al., 2015). Further analysis of the features of the GOES 13 dataset during these MPE events is certainly warranted, but is beyond the scope of this paper.
There was a modest correlation between the amplitude of the largest MPEs and the SME index over the SME range from ~200 to ~600 nT at all 5 stations, but the distribution of amplitudes was nearly flat for SME > 600 nT. The amplitude of most MPEs at all SME values and in all 3 time categories was below 12 nT/s. This does not mean that MPEs are restricted to times when SYM/H is large and negative; it simply means that they occur at higher latitudes at these times. We have also found that only 60-67% of the ≥ 6 nT/s MPEs we observed occurred within 30 min after the most recent substorm onset. The main implications of this study are 1) that neither a magnetic storm nor a fully developed substorm is a necessary or sufficient condition for the occurrence of the extreme nighttime magnetic perturbation events that can cause GICs, and 2) that the pre-midnight and post-midnight MPE populations differ in their dependence on magnetic latitude and prior substorm activity.
Nighttime MPEs thus separate into two populations in MLT: a pre-midnight one that appeared at all 5 stations and a post-midnight one that was prominent only at the two lowest latitude stations.
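The Pearson chi-squared comparison of pre-midnight versus post-midnight MPEs mentioned above can be reproduced with standard tools. The contingency table below is purely hypothetical (the actual counts are in Table 5 of the study, which is not reproduced here); the sketch only illustrates the test itself and assumes SciPy is available.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of MPEs grouped by the number of substorm onsets
# in the preceding 2 hours (0, 1, >= 2), for one station.
#                 0 onsets  1 onset  >= 2 onsets
pre_midnight  = [      40,      120,          60]
post_midnight = [       5,       20,          45]

table = np.array([pre_midnight, post_midnight])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```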
3,767.4
2020-05-02T00:00:00.000
[ "Physics" ]
LOCC convertibility of entangled states in infinite-dimensional systems

We advance on the conversion of bipartite quantum states via local operations and classical communication (LOCC) for infinite-dimensional systems. We introduce δ-LOCC convertibility based on the observation that any pure state can be approximated by a state with finite-support Schmidt coefficients. We show that δ-LOCC convertibility of bipartite states is fully characterized by a majorization relation between the sequences of squared Schmidt coefficients, providing a novel extension of Nielsen's theorem for infinite-dimensional systems. Hence, our definition is equivalent to the one of ε-LOCC convertibility (Owari et al 2008 Quantum Inf. Comput. 8 0030), but deals with states having finitely supported sequences of Schmidt coefficients. Additionally, we discuss the notions of optimal common resource and optimal common product in this scenario. The optimal common product always exists, whereas the optimal common resource depends on the existence of a common resource. This highlights a distinction between the resource-theoretic aspects of finite versus infinite-dimensional systems. Our results rely on the order-theoretic properties of majorization for infinite sequences, applicable beyond the LOCC convertibility problem.

Introduction

The purpose of this article is to explore the complexities that may arise for infinite-dimensional quantum systems when dealing with the convertibility of entangled states by local operations and classical communication (LOCC) [1]. For example, it may be the case that a state cannot be converted by LOCC to a target state but can be converted to another state arbitrarily close to the former. To avoid such discontinuity, the notion of ϵ-convertibility under LOCC (ϵ-LOCC) was introduced [2]. Roughly speaking, |ψ⟩ is ϵ-LOCC convertible to |ϕ⟩ if, for any neighborhood of |ϕ⟩, there exists a LOCC operation that takes |ψ⟩ to a state in that neighborhood of |ϕ⟩. Furthermore, ϵ-LOCC convertibility is completely characterized in terms of a majorization relation between the sequences formed by the squared Schmidt coefficients [2,3], which can be viewed as an extension of Nielsen's theorem [4] to the infinite-dimensional case. Additionally, a generalization of this result applies to quantum systems represented by commuting semi-finite von Neumann algebras [5]. The study of infinite-dimensional scenarios is essential both from a purely theoretical perspective and for practical applications to real systems. The qubits currently being used in various quantum computing platforms are ultimately embedded in infinite-dimensional systems, whether trapped ions or superconducting qubits. Also, many other applications of interest involve continuous-variable systems, meaning they are inherently of infinite dimension. A more comprehensive discussion on this point can be found, for example, in a recent work where the authors extend a known result on entanglement cost to the infinite-dimensional case [6]. Studying LOCC convertibility offers the advantage of operationally comparing entangled resources without initially specifying an entanglement measure. Specifically, if one state can be converted to another under any of the discussed LOCC transformations, the former state contains at least as much entanglement as the latter. While this method has some limitations, as state convertibility does not typically establish a total order, it still provides valuable insights.
Our contribution involves the introduction and discussion of a new definition of approximate LOCC convertibility for infinite-dimensional systems, which we refer to as δ-LOCC convertibility. This concept relies on the observation that, for any bipartite pure state, there exists a state that is arbitrarily close to it (in terms of the trace distance) and whose Schmidt coefficients have finite support. We will demonstrate that this approach turns out to be equivalent to ϵ-LOCC convertibility, while offering the added advantage of dealing with states whose sequences of Schmidt coefficients have finite support.

Additionally, we consider the following problem: suppose that two separated parties have to perform a series of quantum information tasks that require different entangled states. Rather than sharing multiple states, they aim to use a single entangled state, manipulating it to suit each task. Thus, the question arises: for any given set of target states, is there a minimal entangled state that can be locally transformed into any other target state using LOCC? This state, if it exists, is known as an optimal common resource of the set [7]. Similarly, we also explore the existence of a maximal entangled state that can be obtained from any state of the original set by LOCC. This state, if it exists, is referred to as an optimal common product of the set [8].

Understanding these problems is crucial for quantum resource theories and entanglement [9]. Exploring these issues in an infinite-dimensional setting provides insights into the fundamental properties of entanglement and its role as a resource in quantum information [10]. We recall that, in the case of pure bipartite finite-dimensional systems, the existence of an optimal common resource and an optimal common product has been established using the link between LOCC convertibility and majorization, as shown by Nielsen's theorem [4], and the fact that majorization forms a complete lattice [11,12].

Here, we exploit the characterization of δ-LOCC (or, equivalently, ϵ-LOCC) convertibility in terms of majorization in order to describe the optimal common resource and optimal common product for infinite-dimensional systems. Unlike the finite-dimensional case, we obtain that the existence of the optimal common resource is conditioned on the existence of a common resource of the set of states under consideration, which does not always exist. In particular, we provide two families of states, created by applying a two-mode squeezer to the product of a Fock state and the vacuum [13], for which the optimal common resource does not exist. This poses a novel distinction in the entanglement resource theories of finite versus infinite-dimensional quantum systems. On the other hand, we show that the optimal common product always exists. These results stem from our characterization of the majorization lattice for the infinite-dimensional setting, which is a result of mathematical interest in itself and can be applied beyond the scope of the LOCC convertibility problem addressed here.
The rest of the paper is organized as follows. In section 2, we present some important definitions and results on majorization theory for infinite sequences. Also, we introduce the notion of majorization for infinite sequences based on finitely supported approximations. In section 3, we recall the definition of ϵ-convertibility and introduce the concept of δ-convertibility. Also, we state an extension of Nielsen's theorem based on our definition, and prove the equivalence between the two notions. In section 4, we show some applications of these ideas. Finally, in section 5, we provide some concluding remarks.

Majorization for infinite sequences

In this section, we present two results regarding the concept of majorization for infinite sequences, which will be useful to discuss the notion of LOCC convertibility. At the same time, they hold mathematical interest in their own right. For references regarding the finite-dimensional case, we recommend consulting the following sources [11,12,14]. To ensure clarity in our discussion, we introduce some notation. We consider the space ℓ₁([0, 1]) ≡ ℓ₁ of sequences with entries in [0, 1] whose series is absolutely convergent. Additionally, we define the space ℓ₁ (ℓ with one as a sub-index) as the set of sequences (x_n)_{n∈N} ∈ ℓ₁([0, 1]) that can be rearranged into non-increasing sequences. Accordingly, we define x↓ as the sequence whose components are rearranged in non-increasing order, i.e. x_n ⩾ x_{n+1} for all n ∈ N, and ℓ↓₁ as the set of correspondingly rearranged sequences. We also introduce the space ∆∞ as the set of sequences in ℓ₁ that satisfy the normalization condition Σ_{n=1}^∞ x_n = 1. This is nothing else than the set of denumerable probability vectors. We use ∆↓∞ to denote the set of denumerable probability vectors whose components are sorted in non-increasing order. In addition, we consider the subset of denumerable probability vectors with finite support, denoted as ∆′∞. We recall the notion of weak submajorization, which is defined as follows [15].

Definition 1. Let x, y ∈ ℓ₁. Then, x is said to be weakly submajorized by y, denoted as x ⪯_w y, if Σ_{n=1}^k x↓_n ⩽ Σ_{n=1}^k y↓_n for all k ∈ N.

In addition to weak submajorization, we are interested in the notion of majorization in infinite dimensions [16]. More precisely, if x and y are sequences in ∆∞ such that x ⪯_w y, then x is said to be majorized by y, denoted as x ⪯ y.

Majorization lattice for infinite sequences

We now present our first result: the poset (ℓ↓₁, ⪯_w) is a lattice and, moreover, it is conditionally complete.

Proof. First, notice that the binary relation ⪯_w gives ℓ↓₁ the structure of a partially ordered set (poset). Indeed, ⪯_w is reflexive and transitive, and by an inductive argument it follows that ⪯_w is anti-symmetric. Now, let us prove the lattice structure of ℓ↓₁. Let S be a non-empty subset and let M be a fixed positive integer. Consider the infimum of the M-partial sums of the elements in S, s_M = inf_{x∈S} Σ_{n=1}^M x_n. The sequence of partial sums {s_M}_{M∈N} is increasing and, given that we are dealing with non-increasingly ordered sequences, it satisfies 2s_M ⩾ s_{M−1} + s_{M+1} [11]. Hence, the sequence z with components z_M = s_M − s_{M−1} (taking s_0 = 0) belongs to ℓ↓₁ and is the infimum of S. Let S′ be another non-empty, upper-bounded subset, and consider the set Up(S′) of all upper bounds of S′. The result follows by recalling that the supremum of S′ equals the infimum of Up(S′), that is, ∨S′ = ∧Up(S′). Thus, the lattice is conditionally complete.

We present the following observation on the order structure of the set ∆↓∞, which arises as a peculiarity of the infinite-dimensional context.

Observation 3.
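Since the criteria above reduce to comparing partial sums of non-increasingly ordered sequences, they are easy to check numerically for finitely supported vectors. The sketch below is an illustrative helper written for this purpose, not code from the paper.

```python
import numpy as np

def weakly_submajorizes(y, x):
    """Return True if x is weakly submajorized by y (x ⪯_w y), i.e. the partial sums
    of the non-increasing rearrangement of x never exceed those of y."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    n = max(len(xs), len(ys))
    xs = np.pad(xs, (0, n - len(xs)))
    ys = np.pad(ys, (0, n - len(ys)))
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12))

# For normalized probability vectors, x ⪯ y coincides with x ⪯_w y.
print(weakly_submajorizes([0.7, 0.2, 0.1], [0.5, 0.25, 0.25]))  # True:  (0.5, ...) ⪯ (0.7, ...)
print(weakly_submajorizes([0.5, 0.25, 0.25], [0.7, 0.2, 0.1]))  # False
```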
The set ∆↓∞ is not bounded from below. In other words, there is no analog of the uniform probability vector for infinite-dimensional systems. An instance of this situation is presented in example 7. On the other hand, it can be proved that any finite subset of ∆∞ is bounded from below.

Lemma 4. Let us consider the poset (∆↓∞, ⪯, 1), where 1 = (1, 0, 0, ...). Then, each non-empty finite subset S of ∆↓∞ admits a lower bound, that is, there exists z ∈ ∆∞ such that z ⪯ x for all x ∈ S.

Proof. Without loss of generality, we can assume S = {x, y}. Let s_M = min{s_M(x), s_M(y)} be the minimum of the M-partial sums. The sequence {s_M}_{M∈N} is increasing and satisfies 2s_M ⩾ s_{M−1} + s_{M+1}. Hence, the sequence z with components z_M = s_M − s_{M−1} (taking s_0 = 0) belongs to ∆↓∞, since its components sum to lim_M s_M = 1, and it satisfies z ⪯ x and z ⪯ y.

Lemma 5. Let S be a non-empty subset of ∆↓∞ and assume that z ∈ ∆↓∞ is a lower bound. Then, the infimum of S exists.

Proposition 6. The poset (∆↓∞, ⪯) is a lattice. Moreover, every non-empty subset bounded from below admits an infimum, and every non-empty subset admits a supremum.

Proof. It is straightforward to observe that lemma 4 guarantees that ∆↓∞ is a lattice. Moreover, by lemma 5 the lattice is conditionally ∧-complete. Let us prove now that ∆↓∞ is indeed ∨-complete. Let S′ be another non-empty subset and let Up(S′) be its (non-empty) set of upper bounds. Notice that any element of S′ is a lower bound of Up(S′). Hence, from the previous lemma, Up(S′) has an infimum in ∆↓∞. In other words, the supremum of S′ is in ∆↓∞.

Let us explore two illustrative examples that shed light on these results (later on, we will discuss the physical relevance of these examples). In the first case, we present two different families of sequences whose infima do not exist, while in the second example, the infimum is well defined. The following example is a family of incomparable sequences that admits an infimum.

Example 8. Consider the family of sequences {x^(k)}_{k∈N⩾3} defined as x^(k) = (1 − 1/log k, 1/(k log k), ..., 1/(k log k), 0, 0, ...), with k components equal to 1/(k log k). First, we can prove that the infimum ∧{x^(k)}_{k∈N⩾3} exists. The M-partial sum of the sequence x^(k) with k ⩾ 3 is given by s_M(x^(k)) = 1 − 1/log k + (M − 1)/(k log k) for 1 ⩽ M ⩽ k + 1, and s_M(x^(k)) = 1 otherwise. In order to compute inf_{k⩾3} s_M(x^(k)) we are going to use some techniques from calculus. For a fixed M, consider the function s(ω) = 1 − 1/log ω + (M − 1)/(ω log ω). For M = 1, 2, we have s′(ω) > 0, so the minimum is attained at ω = 3. For M ⩾ 3, taking the derivative and equating it to 0, it follows that s(ω) has only one minimum at ω₀, where ω₀ satisfies ω₀ = (M − 1)(1 + log ω₀). Given that s(ω) has only one critical point, the value of k for which s_M(x^(k)) is minimum is k = ⌊ω₀⌋ or k = ⌈ω₀⌉. In other words, given M ⩾ 3, there exists k₀ ∈ {⌊ω₀⌋, ⌈ω₀⌉} such that inf_{k⩾3} s_M(x^(k)) = s_M(x^(k₀)). The value ω₀ can be computed (if necessary) with the fixed-point iteration r_{i+1} = (M − 1)(1 + log r_i), starting from r₁ = M − 1. Notice that r₁ = M − 1 and, if r_i ⩾ 1, the value of r_{i+1} is always greater than M − 1. This implies that the function (M − 1)(1 + log x) is a contraction, implying the convergence of the method to ω₀. Finally, it is easy to observe that the supremum of this family is ∨{x^(k)}_{k∈N⩾3} = (1, 0, ...).

Approximate majorization in terms of finite support probability vectors

We will now proceed to define a notion of majorization in the infinite-dimensional case, based on approximations of the original sequences by sequences with finite support. With that purpose in mind, we first prove lemma 9, which provides an upper bound for the trace distance between two sequences in ∆↓∞ coinciding in the first N components; here the trace distance is d_tr(x, y) = (1/2) Σ_{n∈N} |x_n − y_n|.

Lemma 9. Let x ∈ ∆↓∞ and let x′ ∈ ∆′↓∞ coincide with x in the first N components. Then d_tr(x, x′) ⩽ Σ_{n>N} x_n.

Proof. By direct calculation of the trace distance between x and its finite-support counterpart x′, we have d_tr(x, x′) = (1/2) Σ_{n>N} |x_n − x′_n| ⩽ Σ_{n>N} x_n.
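The lattice operations used above are computable for finitely supported vectors: the infimum follows from the pointwise minimum of the cumulative sums (first differences of a minimum of concave increasing sequences remain non-increasing), while the supremum additionally requires flattening the pointwise maximum of the cumulative sums to its least concave majorant, as in the standard finite-dimensional majorization lattice. The helper below is an illustration under those constructions, not code from the paper.

```python
import numpy as np

def _cumsums(vectors):
    """Non-increasing rearrangement, zero-padding to a common length, cumulative sums."""
    n = max(len(v) for v in vectors)
    padded = [np.pad(np.sort(np.asarray(v, float))[::-1], (0, n - len(v))) for v in vectors]
    return np.array([np.cumsum(v) for v in padded])

def majorization_infimum(vectors):
    """Meet: first differences of the pointwise minimum of the cumulative sums."""
    s = _cumsums(vectors).min(axis=0)
    return np.diff(np.concatenate(([0.0], s)))

def least_concave_majorant(s):
    """Upper concave envelope of the points (0, 0), (1, s[0]), ..., (n, s[n-1])."""
    pts = [(0, 0.0)] + [(i + 1, float(v)) for i, v in enumerate(s)]
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point while it lies on or below the chord to p
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    xs, ys = zip(*hull)
    return np.interp(np.arange(1, len(s) + 1), xs, ys)

def majorization_supremum(vectors):
    """Join: flatten the pointwise maximum of the cumulative sums to its least
    concave majorant, then take first differences."""
    s = least_concave_majorant(_cumsums(vectors).max(axis=0))
    return np.diff(np.concatenate(([0.0], s)))

print(majorization_infimum([[0.7, 0.2, 0.1], [0.5, 0.4, 0.1]]))                 # [0.5 0.4 0.1]
print(majorization_supremum([[0.5, 0.2, 0.15, 0.15], [0.35, 0.35, 0.3, 0.0]]))  # [0.5 0.25 0.25 0.]
```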
Building on the previous lemma, we can now demonstrate that any sequence x ∈ ∆↓∞ can be approximated by another finite-support sequence x′ ∈ ∆′↓∞, which is arbitrarily close to x and majorizes the latter.

Proposition 10. Let x ∈ ∆↓∞. For any δ ∈ (0, 1) and any K ∈ N, there exists x′ ∈ ∆′↓∞ such that x ⪯ x′ and d_tr(x, x′) ⩽ δ.

Proof. In order to prove this result, we are going to construct one such x′ that fulfills the requirements. Thus, all that remains is to demonstrate that x_N ⩾ x′_{M+1}, which is also directly satisfied by construction. Finally, by lemma 9, one has that d_tr(x, x′) ⩽ δ.

It is also interesting to note that, as we demonstrate in the following proposition, the newly introduced approximation scheme preserves the majorization order (see figure 1(a)).

Proposition 11. Let x, y ∈ ∆↓∞ with x ⪯ y. Then, for any δ > 0 there exist x′, y′ ∈ ∆′↓∞ such that x ⪯ x′, y ⪯ y′, d_tr(x, x′) ⩽ δ, d_tr(y, y′) ⩽ δ, and x′ ⪯ y′.

Proof. Given δ > 0, there exists y′ such that y ⪯ y′ and d_tr(y, y′) ⩽ δ, by proposition 10. Let K ∈ N be such that y′_n = 0 for all n ⩾ K. For this K and for the given δ > 0, there exists x′ such that x ⪯ x′ and d_tr(x, x′) ⩽ δ, by proposition 10. By construction, one then has x′ ⪯ y′.

In addition, we have the converse result.

Proposition 12. Let x, y ∈ ∆↓∞. If for every m ∈ N there exist x′_m, y′_m ∈ ∆′↓∞ with x′_m ⪯ y′_m, d_tr(x, x′_m) → 0, and d_tr(y, y′_m) → 0, then x ⪯ y.

Proof. By hypothesis we can construct sequences {x′_m}_{m∈N} and {y′_m}_{m∈N} converging to x and y, respectively, with x′_m ⪯ y′_m. Given that all norms are equivalent in R^k, we get that x′_m converges to x and, in particular, s_k being a continuous function, s_k(x′_m) converges to s_k(x); the same holds for s_k(y′_m) and s_k(y). Then, taking the limit in m of the relation s_k(x′_m) ⩽ s_k(y′_m), it follows that s_k(x) ⩽ s_k(y).

Finally, from proposition 11 and appealing to the properties of the lattice, the following observation about the infimum and supremum elements for infinite sequences and their finite-support counterparts follows (see figures 1(b) and (c)).

Observation 13. Consider x, y ∈ ∆↓∞, with x′ and y′ representing their respective approximated finite-support sequences. Then, one has x ∧ y ⪯ x′ ∧ y′. Similarly, for the supremum, one can establish x ∨ y ⪯ x′ ∨ y′.

LOCC convertibility for infinite-dimensional systems

In this section, we present a new definition of LOCC convertibility for infinite-dimensional systems. Later on, we will prove that our definition coincides with the one already defined by Owari et al [2], which is known as ϵ-LOCC convertibility. In what follows, we consider composite systems that consist of two parties, A and B, such that the Hilbert space of the joint system is H = H_A ⊗ H_B, with dim H_A = ∞ and dim H_B = ∞.

ϵ-LOCC convertibility

First, let us recall the Schmidt decomposition of bipartite pure states in the infinite-dimensional case [2]. Furthermore, we recall the theorem stating the equivalence between ϵ-LOCC convertibility and majorization of the squared Schmidt coefficients, which can be viewed as the infinite-dimensional version of Nielsen's theorem [4]. It is worth mentioning that in the work [2], the authors not only give an infinite-dimensional extension of the deterministic conversion protocol via LOCC, but they also study the probabilistic case via stochastic LOCC. Here, we focus on the deterministic scenario and leave the probabilistic case for future work.

δ-LOCC convertibility

The following two observations inspire our definition of LOCC convertibility:

• The trace distance between any two bipartite pure states belonging to an infinite-dimensional Hilbert space is always minimized when they share the same Schmidt orthonormal sets.

Proof. We will extend the second proof in [17, lemma 1] to the infinite-dimensional case. Let σ_ϕ and σ_ψ be diagonal matrices of infinite dimensions constructed from the ordered Schmidt coefficients of |ϕ⟩ and |ψ⟩.
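A simple way to realize a finite-support approximation of the kind described above is to cut the tail of the sequence and move its mass onto the largest component: the result has finite support, majorizes the original vector, and is within the tail mass in trace distance. This is a simplified variant of the construction in the text (which keeps the leading components untouched in a more careful way), offered only as an illustration.

```python
import numpy as np

def finite_support_approx(x, delta):
    """Return a finitely supported x' with x ⪯ x' and trace distance at most delta.

    x: non-increasing probability vector given as a (possibly long) array.
    The tail whose total mass is <= delta is removed and added to the first entry.
    """
    x = np.asarray(x, dtype=float)
    tails = 1.0 - np.cumsum(x)                           # mass remaining after each index
    n_keep = int(np.searchsorted(-tails, -delta)) + 1    # smallest N with tail mass <= delta
    xp = x[:n_keep].copy()
    xp[0] += 1.0 - xp.sum()                              # lump the removed tail onto the largest entry
    return xp

# Example: a geometric sequence of squared Schmidt coefficients
lam = 0.9
x = (1 - lam) * lam ** np.arange(500)
xp = finite_support_approx(x, delta=1e-3)
print(len(xp), 0.5 * (np.sum(np.abs(x[:len(xp)] - xp)) + np.sum(x[len(xp):])))
```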
Then, it follows that the overlap between the two states is governed by a doubly stochastic matrix c. Furthermore, from (the proof of) [18, theorem 4.2], it follows that for every ϵ > 0 there exists a convex combination of permutations Σ_i p_i Π_i such that ∥c − Σ_i p_i Π_i∥ < ϵ. In particular, there exists a sequence of such convex combinations converging to c. Then, by the continuity of the map µ_ψ, and since by lemma 27 tr(σ_ψ P σ_ϕ P†) ⩽ tr(σ_ψ σ_ϕ) for every permutation P, the same bound holds for each convex combination and, for all n ⩾ 0, for each element of the approximating sequence. Taking the limit, tr(σ_ψ c) ⩽ tr(σ_ψ σ_ϕ); that is, the trace distance is minimized when the two states share the same Schmidt orthonormal sets. Similarly, it is possible to prove the analogous result for the second observation. Building on the previous results, we can now provide our definition of δ-convertibility.

Optimal common resource and optimal common product

We have already studied the convertibility between infinite-dimensional entangled states via LOCC, and we have seen how this operation is governed by a majorization relation between the sequences of Schmidt coefficients. We now introduce the notions of optimal common resource and optimal common product. Both concepts rely on the completeness of the majorization lattice, i.e. on the ability to define supremum and infimum elements for subsets of sequences. We formulate these definitions in terms of δ-LOCC convertibility, but they can be formulated equivalently in the ϵ-LOCC setting.

Optimal common resource

First, let us introduce the definitions of common resource and optimal common resource.

Definition 21. Let P be an arbitrary set of bipartite pure states in H_A ⊗ H_B. The state |ψ_cr⟩ is said to be a common resource of P if |ψ_cr⟩ is δ-LOCC convertible to |ϕ⟩ for every |ϕ⟩ ∈ P. Moreover, the state |ψ_ocr⟩ is said to be an optimal common resource of P if |ψ_ocr⟩ is a common resource and, for any other common resource |ψ_cr⟩, one has that |ψ_cr⟩ is δ-LOCC convertible to |ψ_ocr⟩.

For finite-dimensional P, there always exists an optimal common resource. Unlike the finite-dimensional case, the existence of an optimal common resource for infinite-dimensional systems is conditional on the existence of a common resource; this, in turn, is tied to the completeness of the lattice of sequences discussed in proposition 6. More precisely:

Proposition 22. Let P be an arbitrary set of bipartite pure states in H_A ⊗ H_B. Then, if there exists a common resource |ψ_cr⟩ of P, there also exists an optimal common resource |ψ_ocr⟩ of P.

Proof. Let |ψ_cr⟩ be a common resource for the set P, as in definition 21. In that case, proposition 19 gives ψ_cr ⪯ ϕ for all |ϕ⟩ ∈ P, with ψ_cr, ϕ the corresponding sequences of Schmidt coefficients. Thus, ψ_cr is a lower bound for all the considered sequences and, by lemma 5, the infimum exists. That infimum gives us the sequence of Schmidt coefficients associated with the optimal common resource |ψ_ocr⟩.

Let us see two examples: in the first, an optimal common resource does not exist, whereas in the second it does. In particular, the next example was introduced in [13] in the context of the Gaussian channel minimum entropy conjecture. Here, we use it to exhibit two sets of bipartite pure states that do not admit an optimal common resource.

Example 23. Consider a two-mode squeezer with parameter r, that is, U(r) = exp[r(ab − a†b†)], where a, b and a†, b† are the annihilation and creation operators of the input modes, respectively. The action of the two-mode squeezer on the input state |k⟩|0⟩ can be expressed in Schmidt form as |ψ(r, k)⟩, with Schmidt coefficients given by equation (2) and λ = tanh r [13]. Consider the sets of states |ψ(r, k)⟩ obtained in this way; the corresponding sets of sequences of Schmidt coefficients were studied in example 7, where we showed they do not have infima. Hence, optimal common resources for these sets do not exist.
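Proposition 22 reduces the optimal common resource to a lattice infimum, and for finitely many finite-support Schmidt-coefficient vectors that infimum is directly computable: by the construction in the proof of lemma 4, the lower envelope of the partial-sum curves has non-increasing increments. A minimal sketch (our own illustration; for genuinely infinite sequences it would be combined with the finite-support approximations of proposition 10):

```python
import numpy as np

def infimum(vectors):
    """Infimum in the majorization lattice of finitely many probability
    vectors (zero-padded to a common length, sorted non-increasingly),
    following the proof of lemma 4: take the pointwise minimum of the
    partial-sum curves; its increments form a valid non-increasing
    probability vector, since a minimum of concave curves is concave."""
    n = max(len(v) for v in vectors)
    padded = [np.sort(np.pad(v, (0, n - len(v))))[::-1] for v in vectors]
    cums = np.array([np.cumsum(p) for p in padded])
    s = cums.min(axis=0)                         # lower envelope of partial sums
    return np.diff(np.concatenate(([0.0], s)))   # back to a probability vector

# Squared Schmidt coefficients of the optimal common resource of a
# finite family of target states:
x = np.array([0.6, 0.25, 0.15])
y = np.array([0.5, 0.4, 0.1])
print(infimum([x, y]))   # (0.5, 0.35, 0.15), majorized by both x and y
```

Note that the analogous supremum (used for the optimal common product below) is not obtained this simply in general: the upper envelope of partial sums may fail concavity and requires an additional flattening step.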
The following example was introduced in [19] to show that the entropy of entanglement of infinite-dimensional quantum systems is not necessarily continuous in the trace norm. We use it to illustrate a set of bipartite pure states that does admit an optimal common resource.

Example 24. Consider the set of bipartite pure states {|ψ^(k)⟩}_{k∈N⩾3}, where |ψ^(k)⟩ has squared Schmidt coefficients x^(k) given by equation (3). In particular, the states are pairwise LOCC inconvertible, that is, |ψ^(k)⟩ ↮_{δ-LOCC} |ψ^(k′)⟩ for all k ≠ k′. However, an optimal common resource of the set {|ψ^(k)⟩}_{k∈N⩾3} exists, and its Schmidt coefficients can be computed algorithmically as shown in example 8.

Optimal common product

We now introduce the notions of common product and optimal common product of a set of states.

Definition 25. Let P be an arbitrary set of bipartite pure states in H_A ⊗ H_B. The state |ψ_cp⟩ is said to be a common product of P if every |ϕ⟩ ∈ P is δ-LOCC convertible to |ψ_cp⟩. Moreover, the state |ψ_ocp⟩ is said to be an optimal common product of P if |ψ_ocp⟩ is a common product and, for any other common product |ψ_cp⟩, one has that |ψ_ocp⟩ is δ-LOCC convertible to |ψ_cp⟩.

Just as the common resource problem is associated with the existence of lower bounds in the space of Schmidt sequences, the common product problem is linked to the existence of upper bounds. In that sense, given that the majorization lattice always admits suprema (proposition 6), there always exists an optimal common product.

Proposition 26. Let P be an arbitrary set of bipartite pure states in H_A ⊗ H_B. Then, there exists an optimal common product |ψ_ocp⟩ of P.

Proof. It follows directly from proposition 6, noting that there always exists a supremum for the corresponding set of sequences of squared Schmidt coefficients.

Reviewing examples 23 and 24 just discussed, it is evident that in both cases the optimal common products exist; their Schmidt coefficients are determined by the suprema outlined in examples 7 and 8.

Concluding remarks

In conclusion, this article delves into the intricacies of infinite-dimensional systems, specifically focusing on the convertibility of entangled states through LOCC. In particular, we have introduced a new definition of LOCC convertibility for infinite-dimensional systems, termed δ-LOCC convertibility, which is fully characterized by a majorization relation between sequences of squared Schmidt coefficients and proves to be equivalent to ϵ-LOCC convertibility. Notably, this definition offers the mathematical advantage of dealing with finitely supported sequences.

Moreover, we have explored the LOCC convertibility problem in practical situations involving two parties aiming to perform various quantum information tasks using a single entangled state. In these scenarios, the concepts of optimal common resource and optimal common product for a given set of infinite-dimensional target states arise naturally. While the existence of an optimal common product is always guaranteed, the existence of an optimal common resource is conditional on the existence of a common resource, highlighting a novel difference between the entanglement properties of finite- and infinite-dimensional systems.
We have leveraged the majorization lattice characterization of infinite sequences to establish these results. This not only contributes to the understanding of the LOCC paradigm in the infinite-dimensional case, but also presents mathematical insights with broader applicability beyond the specific scope of the addressed problem. Moreover, our results can be applied to other majorization-based resource theories. Overall, the exploration of these issues for infinite-dimensional systems enhances our comprehension of the fundamental properties of entanglement and its role as a quantum resource.
A MS‐lesion pattern discrimination plot based on geostatistics Abstract Introduction A geostatistical approach to characterize MS‐lesion patterns based on their geometrical properties is presented. Methods A dataset of 259 binary MS‐lesion masks in MNI space was subjected to directional variography. A model function was fit to express the observed spatial variability in x, y, z directions by the geostatistical parameters Range and Sill. Results Parameters Range and Sill correlate with MS‐lesion pattern surface complexity and total lesion volume. A scatter plot of ln(Range) versus ln(Sill), classified by pattern anisotropy, enables a consistent and clearly arranged presentation of MS‐lesion patterns based on geometry: the so‐called MS‐Lesion Pattern Discrimination Plot. Conclusions The geostatistical approach and the graphical representation of results are considered efficient exploratory data analysis tools for cross‐sectional, follow‐up, and medication impact analysis. Introduction Multiple sclerosis (MS) is an inflammatory demyelinating disease of the central nervous system with neurodegenerative processes in the later course. It affects over 2.5 million people worldwide and is the leading nontraumatic cause of serious neurologic disability in young adults. MS is characterized by unpredictable episodes of clinical relapses and remissions followed by continuous progression of disability over time (secondary progressive MS) in most instances. The course of MS is highly variable, from benign to disastrous (Compston and Coles 2008). While some patients may acquire severe and irreversible disability within a few years, others may run a benign course with little or no disability even after decades. The hallmark of MS is sclerotic lesions within cerebral white matter, which are hyperintense on T2-weighted brain MRI sequences. These lesions present rather heterogeneously across patients, not only with regard to the number and overall volume but also with regard to spatial pattern, predilection sites, and shape of single lesions (Filippi and Rocca 2011). Researching the geometrical configurations of white matter MS-lesions from MRI investigations is considered an opportunity for greater understanding of the relationship between MS clinics and neuroradiological findings (Pham et al. 2010; Marschallinger et al. 2014; Taschler et al. 2014). Until now, the heterogeneity of MRI findings could not be related fully to the heterogeneity of the disease course. This may be achieved by the application of mathematical tools that are not yet well established in neuroimaging. Here, we aim to characterize white matter lesions in MS using measures from geostatistics. With the advent of brain geometry normalization (Penny et al. 2007) and automatic MS-lesion segmentation (Garcia-Lorenzo et al. 2012), large numbers of classified images can be made available for continued evaluation. For this study, we define an MS-lesion pattern as the ensemble of MS-lesions identified in a specific MRI examination of a single patient. In a pilot evaluation of the approach followed here (Marschallinger et al. 2014), a small yet representative dataset of three synthetic and three manually segmented real-world MS-lesion patterns was used to show the potential of geostatistics to yield key geometrical information on MS-lesion patterns. This study applies the geostatistical approach to 259 automatically segmented binary MS-lesion patterns that are representative of probable MS-lesion pattern geometries.
The dataset consists of 259 binary MS-lesion patterns projected to MNI space. Dimensions of the voxel arrays are (x*y*z) 121*145*121 voxels, with 1.5*1.5*1.5 mm³ per voxel, the MS-lesion voxels assigned gray level 1 and the remaining (void) voxels gray level 0. For the remainder of this study, we refer to this binary, normalized dataset as the "MS-259 dataset". The histograms in Figure 1 summarize the lesion statistics of the dataset. Lesion segmentation was performed with the LST algorithm: neighboring voxels are analyzed and assigned to lesions under certain conditions, iteratively, until no further voxels are assigned to lesions; here, the likelihood of belonging to WM or GM is weighted against the likelihood of belonging to lesions (Schmidt et al. 2012). Finally, 3D binary lesion maps in MNI space are generated, which were used here. The workflow followed in this study is depicted in Figure 2: per pattern (MS-lesion mask), three directional empirical variograms are estimated at orthogonal orientations in 3D. Per empirical variogram, a variogram model function is fitted that provides a summary description of the pattern by means of two parameters: Range (a) and Sill (c). Parameters a and c are expressed in classified scatterplots to provide a straightforward presentation of the geometrical summary characteristics of MS-lesion patterns.

Empirical variograms

Geostatistics provides algorithms for characterizing, modeling, and simulating multidimensional data in a variety of disciplines (Conan et al. 1992; Christakos 2000; Kourgli et al. 2004; Blewett and Kildluff 2006; Caers 2010). The variogram, a measure of spatial correlation, is a central tool in geostatistics and can be used for exploratory data analysis (EDA) (Gringarten and Deutsch 1999). Applied to binary MS-lesion patterns from MRI, variography enables characterization and quantification of the geometrical properties of MS-lesion patterns (Marschallinger et al. 2009). When MS-lesion patterns are normalized to MNI space, variography enables single-patient follow-up analysis and intra- or intergroup analysis (Marschallinger et al. 2014). The empirical variogram c(h) is calculated using (eq. 1)

c(h) = (1 / (2 n(h))) Σ_{i=1}^{n(h)} [z(x_i) − z(x_i + h)]²

where z(x) is the value of the variable at some 3D location x (here, a voxel with binary value 0 or 1), h is the lag vector of separation between observed data (units: mm), n(h) is the number of data pairs [z(x), z(x+h)] at lag h, and c(h) is the empirical variogram value for lag h. The c(h) of a binary MS-lesion pattern is estimated by comparing the binary values (0 or 1) of all voxel pairs within a specified lag h according to equation 1. Calculating c(h) for increasing lag distances |h|, the empirical variogram plot ("the variogram") is derived (Fig. 3C, F, I). Computing variograms for specific lag orientations yields directional variograms that quantify spatial anisotropies in the data. Variograms of MS-lesion patterns generally start with small values of c at small |h|, reflecting the high correlation of adjacent voxel pairs (neighboring voxels tend to have the same binary value). After an initial increase with lag away from the origin, the correlation decreases with further increases in |h|, and eventually the variogram begins to level off. As a rule of thumb, the flatter the variogram near the origin, the more pronounced the spatial correlation (i.e., the larger the lesions). As pointed out in Marschallinger et al. (2014), variograms of binary MS-lesion patterns should be confined to distances from 0 to 15 mm, because this interval holds most of the relevant correlation information and a variogram model can be fitted straightforwardly; a more detailed introduction to using variography with MRI datasets is given there. Since the LST algorithm provides binary MS-lesion patterns in MNI space, LST results can be interpreted directly using variography. For each member of the MS-259 dataset, directional empirical variograms were estimated in the three main orthogonal orientations X, Y, Z (dextral-sinistral, caudal-rostral, dorsal-ventral), within distances between 0 and 15 mm. Figure 3 shows the sensitivity of directional variograms (Deutsch and Journel 1997) to MS-lesion pattern geometry by contrasting three MS-lesion patterns and the associated variograms. Case wbles_274 has dominantly isotropic (spherical) lesions; accordingly, the variograms for the X, Y, Z directions show approximately the same shape, indicating similar spatial correlation in all directions. Most MS-lesions in wbles_212 are anisotropic: they are stretched in the Y and Z directions. Here, the variograms for the Y and Z directions exhibit a shallower slope near the origin than that for the X direction, indicating greater spatial continuity in the Y and Z directions than in X. In wbles_133, the majority of the MS-lesions are stretched in the Z direction; as a consequence, the variogram for the Z direction has the shallowest slope near the origin.

Variogram models

Empirical variograms are graphical representations of spatial correlation, primarily intended for visual analysis. Several permissible variogram model functions exist for quantification (Cressie 1993). After being fitted to an empirical variogram, these model functions express a variogram's shape by the model type (e.g., exponential, spherical) and, commonly, two parameters: the variogram range a and the variogram sill c. Among the available and permissible variogram model functions, the exponential variogram model (eq. 2), here written with practical range a,

c(h) = c [1 − exp(−3 |h| / a)]

was found to be the most suitable for quantifying MS-lesion patterns (Marschallinger et al. 2014). Figure 3(C, F, I) illustrates the process of variogram model fitting. The panels combine the directional empirical variograms (symbols: red square = X, green triangle = Y, blue diamond = Z direction), the fitted exponential variogram model functions (lines: red continuous = X, green dash = Y, blue dash-dot = Z direction), the estimated a and c parameters, and the goodness-of-fit (R²). Model fitting and parameter estimation were computed with the software R (R Development Core Team, 2008). The range a is roughly the same in the X, Y, Z directions for wbles_274, indicating a nearly isotropic pattern. In contrast, for wbles_212 the range a in the X direction is about half of a in the Y and Z directions, indicating greater spatial correlation in the Y and Z directions. This is confirmed by Figure 3(D and E), where the majority of the MS-lesions are stretched in the Y and Z directions. The dominant stretching of MS-lesions in the Z direction in pattern wbles_133 is expressed by a larger range in the Z direction. The sill c increases in the order wbles_212, wbles_274, wbles_133, but is similar per pattern. The variogram measures spatial continuity (or discontinuity), which in the case of 3D structures can be interpreted as surface complexity (Kourgli et al. 2004; Trevisani et al. 2009).
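As a concrete illustration of eqs. 1 and 2, the following Python sketch estimates a directional empirical variogram of a binary 3D mask and fits the exponential model; the synthetic stand-in mask, the lag handling, and the practical-range parameterization of eq. 2 are our assumptions, not code from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import curve_fit

def directional_variogram(mask, axis, max_lag_vox):
    """Empirical variogram (eq. 1) of a binary 3D mask along one axis:
    c(h) = mean of 0.5*(z(x) - z(x+h))^2 over all voxel pairs at lag h.
    For binary data this is half the fraction of discordant pairs."""
    gammas = []
    for h in range(1, max_lag_vox + 1):
        a = np.take(mask, range(0, mask.shape[axis] - h), axis=axis)
        b = np.take(mask, range(h, mask.shape[axis]), axis=axis)
        gammas.append(0.5 * ((a.astype(float) - b.astype(float)) ** 2).mean())
    return np.array(gammas)

def exponential_model(h, a, c):
    """Exponential variogram model (eq. 2): sill c, practical range a (mm)."""
    return c * (1.0 - np.exp(-3.0 * h / a))

# Spatially correlated stand-in for a lesion mask (voxels are 1.5 mm,
# so lags 1..10 cover 1.5..15 mm, matching the text).
rng = np.random.default_rng(0)
field = gaussian_filter(rng.standard_normal((121, 145, 121)), sigma=2.0)
mask = field > np.quantile(field, 0.99)

lags_mm = 1.5 * np.arange(1, 11)
gamma_x = directional_variogram(mask, axis=0, max_lag_vox=10)
(a_fit, c_fit), _ = curve_fit(exponential_model, lags_mm, gamma_x,
                              p0=[5.0, gamma_x.max()])
print(f"range a = {a_fit:.2f} mm, sill c = {c_fit:.4g}")
```

Repeating the estimate for axis = 0, 1, 2 yields the three directional (a, c) pairs per pattern used throughout the study.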
The surface complexity of biological structures is often expressed as the ratio of surface area to volume (Schmidt-Nielsen 1984). To cross-check the correlation between the a and c parameters and lesion-pattern surface complexity, the total lesion volume (mm³) and total lesion surface area (mm²) were calculated for each pattern of the MS-259 dataset. Correlating parameters a and c with total lesion surface area and total lesion volume (Vtot) reveals an almost perfect linear correlation (R² = 0.997) between c (sill) and Vtot. Furthermore, there is a significant log correlation (R² = 0.935) of a with the ratio of total lesion volume to total lesion surface area. In other words, in binary MS-lesion patterns, the variogram model sill c is a substitute for total lesion load (Fig. 4A) and the model range a is a proxy of MS-lesion pattern surface complexity (Fig. 4B). The greater c, the greater the total lesion volume; the greater a, the greater the overall spatial correlation and the smoother (i.e., the less complex) the pattern's surface.

a-c plot

In geostatistics, the fitted variogram model range a and sill c convey information for geostatistical operations such as spatial prediction and simulation (Isaaks and Srivastava 1989). In the current context, a and c are used to characterize MS-lesion pattern geometry by three value pairs, (a_X, c_X), (a_Y, c_Y), (a_Z, c_Z), with a_X, c_X the values of a and c in the X direction, and so on. When lesion patterns are geometrically normalized, their geometry can be conveniently portrayed and compared in a diagram of a versus c, the a-c plot (Marschallinger et al. 2014). Figure 5 is a plot of a (abscissa) versus c (ordinate) for the MS-259 dataset. The plot shows dense clustering near the origin that obscures detail, and a possible bifurcation at medium to large a-c values. To overcome the clustering, natural log scaling was applied to both the a and c axes in Figure 6; the observed ln(c) lies between −11.47 and −4.08. Since the MS-259 dataset comprises a broad range of very mild to extremely severe cases, we consider that the majority of possible MS-lesion patterns will plot within the axis limits of ln(a) = [0, 3] and ln(c) = [−12, −3]. In Figure 6, the MS-259 dataset forms a loose, elliptic cloud with the long axis running roughly diagonally. Within the cloud, the X, Y, and Z directional components also form elliptic, overlapping areas. The visually discernible shift towards larger ln(a), with a_X < a_Y < a_Z, is confirmed by the respective mean centers (for the definition of the mean center, see below). At the individual level, the vast majority of MS-259 lesion patterns show varying a_X, a_Y, a_Z but similar c_X, c_Y, c_Z values (so-called "geometric anisotropy"). Figure 7 compares an isotropic and two anisotropic MS-lesion patterns in the a-c plot (the patterns of Figure 3): the X, Y, Z symbols of the isotropic wbles_274 pattern plot closely together. The symbols of the anisotropic pattern wbles_212 clearly indicate a smaller a for X than for the Y and Z directions (lesion elongation in the Y and Z directions), whereas the anisotropy of wbles_133 is expressed by a larger a for Z than for the X and Y directions (lesion elongation in the Z direction). This is in accordance with the observed lesion pattern geometries in Figure 3. As such, the a-c plot straightforwardly communicates the geometric anisotropy of MS-lesion patterns.
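The volume and surface quantities behind Figure 4 can be recomputed from a binary mask, for example by voxel counting and a marching-cubes surface triangulation. A minimal sketch (our own; the study's exact surface-area estimator is not specified):

```python
import numpy as np
from skimage import measure

def volume_and_surface(mask, voxel_mm=1.5):
    """Total lesion volume (mm^3) and surface area (mm^2) of a binary mask.
    Volume is voxel counting; surface area comes from a marching-cubes
    triangulation of the 0.5 iso-surface."""
    v_tot = mask.sum() * voxel_mm ** 3
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(float), level=0.5, spacing=(voxel_mm,) * 3)
    s_tot = measure.mesh_surface_area(verts, faces)
    return v_tot, s_tot

# Per the text, c tracks V_tot almost linearly (R^2 = 0.997), while a
# correlates with V_tot / S_tot on a log scale (R^2 = 0.935); with the
# per-pattern (a, c) fits, the regressions of Fig. 4 can be reproduced:
# v, s = volume_and_surface(mask); print(v, s, v / s)
```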
While the geometrical characteristics of single patterns can be represented conveniently by separate X, Y, Z symbols per pattern, this can be confusing for larger datasets. When presenting many MS-lesion patterns in the a-c plot, it makes sense to identify each pattern with only one point and to express the magnitude of anisotropy by symbol classes. The mean center (eq. 3a,b) is widely used for representing average location; in the current context

ā = (1/n) Σ_{i=1}^{n} a_i,   c̄ = (1/n) Σ_{i=1}^{n} c_i,

where ā is the mean a (average geostatistical range), c̄ the mean c (average geostatistical sill), and n the number of data, here 3 (x, y, z). Analogous to the standard deviation in univariate statistics, the standard distance ("SD", eq. 4) indicates deviation from the spatial mean (De Smith et al. 2007). The more the SD deviates from 0 (the isotropic case), the more anisotropic a lesion pattern is. In the MS-259 dataset, the SD in the a-c plot varies between 0.001 and 0.473.

SD = √[ (1/n) Σ_{i=1}^{n} (a_i − ā)² + (1/n) Σ_{i=1}^{n} (c_i − c̄)² ]   (4)

Mean center and standard distance are used here to express average location and spatial spread in a-c space because both marginal distributions can be considered normal: both ln(a) and ln(c) of the a-c plot are almost perfectly normally distributed for all (x, y, z) directional variograms. This is confirmed by Figure 8, which gives the relevant box plots.

MS-lesion pattern discrimination plot

Combining the mean center and standard distance (SD) in a single plot, a compact representation of the spatial characteristics of MS-lesion patterns is achieved. We term this plot the Lesion Pattern Discrimination Plot (Fig. 9, "LDP"). The LDP also indicates total lesion load, derived from the correlation in Figure 4(A). Regarding spatial dispersion, the MS-259 dataset shows isotropic and anisotropic patterns scattered over the point cloud, except for a concentration of extremely anisotropic patterns at very small total lesion loads, which is attributed to aliasing in the representation of very small lesions by small numbers of voxels. Comparing the visualization of the MS-259 dataset in the a-c plot (Fig. 6) and in the LDP (Fig. 9), the LDP is more easily understood. The loss of information on X, Y, Z anisotropy indicated by individual symbols is counterbalanced by the introduction of standard-distance symbol classes. Figure 10 is a synopsis of MS-lesion pattern geometries (10A) and the corresponding positions in the LDP (10B). From the MS-259 dataset, 18 patterns were selected that cover a large part of the populated area in the LDP. To ensure representativeness, six volume classes with a large spread (a_max − a_min) were chosen at iso-volumes of 200, 1300, 2000, 9000, 22,000, and 55,000 mm³. From each volume class, three patterns were selected that represent the minimum, average, and maximum a (class volume ± 15%). The patterns and positions can be identified by numbers in Figure 10(A and B). Recalling the volume-surface considerations above, the LDP represents MS-lesion pattern surface smoothness versus total lesion load. Working through Figure 10 reveals that complex patterns with many lesions or a "rough"/"complex" surface generally are positioned at the left fringe of the point cloud, while patterns with few, big, and "smooth" lesions are placed toward the right border. (A minimal computational sketch of the mean center and standard distance, eqs. 3 and 4, follows below.)
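A minimal sketch of the mean center (eq. 3a,b) and standard distance (eq. 4) for the three directional (a, c) points of one pattern; applying the equations to the log-scaled values used in the LDP is our assumption, not spelled out in the text.

```python
import numpy as np

def mean_center_and_sd(a_xyz, c_xyz):
    """Mean center (eq. 3a,b) and standard distance (eq. 4) of the three
    directional (a, c) points of one pattern in ln-ln a-c space.
    SD = 0 corresponds to a perfectly isotropic pattern."""
    a = np.log(np.asarray(a_xyz, dtype=float))
    c = np.log(np.asarray(c_xyz, dtype=float))
    a_bar, c_bar = a.mean(), c.mean()
    sd = np.sqrt(((a - a_bar) ** 2).mean() + ((c - c_bar) ** 2).mean())
    return (a_bar, c_bar), sd

# e.g. an anisotropic pattern: larger range in Z than in X and Y
center, sd = mean_center_and_sd([4.0, 4.2, 8.5], [0.004, 0.0042, 0.0041])
print(center, sd)   # one LDP point plus its anisotropy class
```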
Patterns around the long axis of the elliptic cloud mediate between the rough and smooth extremes. This also holds quantitatively. For example, consider volume class 2000 mm³ in Table 1: proceeding from pattern wbles_207 via wbles_070 to wbles_221, the number of lesions decreases, surface area decreases, and volume per lesion, surface per lesion, and the ratio of volume to surface area increase. Table 1 expresses the quantitative geometry behind Figure 10: groups are defined by their average total lesion volume (± 15%). Within each group, the following holds: a increases top-down and, with increasing a, the number of lesions decreases, total surface area decreases, volume per lesion increases, surface per lesion increases, and the ratio of total volume to total surface area increases. In other words, at constant volume, with increasing a, pattern surface smoothness increases and surface complexity decreases.

Follow-up examination expressed in the LDP

The LDP is a versatile framework to portray lesion-pattern evolution in follow-up exams because it combines total volume, lesion-pattern surface complexity, and geometrical anisotropy information in a single, well-arranged plot. Major changes as well as subtle fluctuations in MS-lesion pattern geometry can be explored straightforwardly. As an example, the follow-up exams of six MS cases (f1-f6) were documented in the LDP. The six cases differ with respect to lesion loads, lesion numbers, lesion-pattern geometry, and lesion-pattern evolution. Total investigation epochs range from 7 to 33 months and comprise three to five follow-up exams at irregular intervals. Figure 11(A) shows the follow-up lesion patterns of cases f1-f6 in projections to the axial plane; Figure 11(B) gives the respective LDP entries (see also Table 2).

[Figure 11. (A) Longitudinal studies f1_1-5, f2_1-5, f3_1-4, f4_1-4, f5_1-3, f6_1-3 (line-oriented, order of follow-ups from left to right); MS-lesion patterns in projections to the axial plane. (B) LDP used to portray the evolution of MS-lesion patterns f1-f6; arrows indicate MS-lesion pattern evolution paths. Color coding: f1, red; f2, gold; f3, green; f4, cyan; f5, blue; f6, magenta. See text for details.]

In the follow-up examination of case f1, major geometric features remain constant, but minor fluctuations in smaller lesions show up (Fig. 11A); accordingly, in the LDP the f1 symbols plot closely together, but minor changes in anisotropy are indicated. Case f2 shows increasing total lesion load (TLL) over time, but the evolution path in the LDP indicates an approximately constant surface roughness: despite fluctuations, the evolution path runs approximately parallel to the volume axis. Case f3 has decreasing TLL from exams 1-3, but a small TLL increase in exam 4. The path connecting exams 1-4 first runs towards the origin of the LDP but then points upwards in the last step. As TLL decreases, surface roughness increases due to the decomposition of large confluent lesion aggregates into smaller, mostly spherical ones; this is why lesion-pattern anisotropy concurrently decreases. Case f4 is dominated by large, spherical lesions, while a number of small elongated lesions account for a pronounced anisotropy. TLL decreases in all exams but the last one. In the LDP, the pattern evolution path points towards the LDP origin except for step 4. Pattern surface roughness increases due to the decay of the large spherical lesions; there are major fluctuations in pattern anisotropy.
Case f5 has a progressive trend with respect to TLL. Lesion-pattern surface roughness decreases due to the confluence of several small lesions; the LDP evolution path points diagonally upwards. Case f6 shows decreasing TLL. Pattern surface complexity remains roughly constant between exams 1 and 2 but then increases due to the concurrent formation of new lesions. In the LDP, the evolution path first runs approximately parallel to the ordinate and then takes a sharp bend, continuing roughly perpendicular to it, towards increasing surface complexity at approximately equal TLL.

Discussion

The geostatistical approach to MS-lesion pattern characterization proposed and explored here is founded on the Theory of Regionalised Variables (ReV), in which a spatially continuous property is represented stochastically as a Random Function (RF). An RF is a stochastic generating mechanism that could have produced the data (represented as a random draw, or ReV, from the RF). Given second-order stationarity (the parameters of the RF are spatially invariant), the variogram parameters, together with the mean, characterize the RF, in particular capturing its spatial correlation properties. The MNI brain creates a Euclidean space that is dissected spatially into voxels of constant size. The binary outcome of the MRI scanning process, expressed in this MNI space, is readily represented using the RF formalism, and the constant nature of its extent and support (voxel) from image to image provides excellent opportunities for sensitive comparison across subjects and through time; in geostatistics, this situation is relatively rare. We were therefore able to interpret very small differences between images, and it was possible to place expectations, including minima and maxima, on each estimated parameter. For example, the MNI space also means that parameter values have clear interpretations in terms of volume and surface-area relations. The variogram is a so-called "two-point statistic", in that the semivariance is calculated between two points (the present location and another at a given lag vector away; compare equation 1). Two-point statistics have only limited capabilities to describe the potentially complex spatial structures exhibited by MS-lesion patterns. There is thus some trade-off between the sensitivity afforded by application of the RF formalism to the standardized MNI space and the limited spatial representation afforded by the variogram. Moreover, empirical variograms of MS-lesion patterns have to be limited to distances of 15 mm to enable meaningful variogram model fitting (Marschallinger et al. 2014); in making this restriction, some information on pattern granularity, such as repetitions (the so-called hole effect), is lost. Moreover, the variogram is not sensitive to the absolute position of objects within a defined space. Recently, much attention in geostatistics has been paid to multiple-point geostatistics (MPG) (Strebelle 2000; Remy et al. 2009). The MPG formalism captures a much richer information set than can be obtained from two-point statistics. For example, MPG has been used to represent properties such as hydraulic connectivity in sedimentary rocks, allowing the modeling of properties such as permeability; two-point statistics are incapable of capturing and representing such properties. Thus, there is scope for exploring the value of MPG for application to brain images.
While a natural choice given the fixed MNI space, the RF formalism is not the most natural interpretation of lesions. We tend to think of lesions as compact objects with fuzzy borders within the MNI space, and this is particularly true in the representation afforded by the MRI scans, in which lesions appear as more or less compact structures. This suggests alternative data models to the RF. The object-based model has been applied widely and with great utility in handling geographic information (Blaschke 2010), for example in the classification of land cover in remotely sensed images. Recently, 3D object-based image analysis applications have emerged in the biological and medical imaging domains (Schmidt et al. 2007; Marschallinger et al. 2011; Al Janab et al. 2012). The object-based model has a lot to offer for the characterization of lesions, including the ability to handle each lesion separately, to logically link corresponding lesions in time series to track their status, and to characterize the interrelations between lesions in a single image. Future research will focus on developing the MPG and object-based models. A further extension of our approach could be the inclusion of parameter uncertainty in the calculation of the mean geostatistical range and sill. Through simulations, this uncertainty could be used to identify significant changes in lesion volume and surface-area relations between individual scans, which would represent a meaningful advantage of the modeling approach over the purely empirical analysis of those values.

Conclusions

An efficient and computationally cheap geostatistics-based method for characterizing MS-lesion patterns from binarized and normalized MRI images was developed and presented. This approach enables the expression of key geometrical aspects of MS-lesion patterns through estimation of the geostatistical range and sill (a, c) parameters, which correlate with lesion-pattern surface complexity and total lesion volume. The MS-lesion pattern discrimination plot ("LDP") introduced here and the a-c plot are based on these geostatistical parameters. The LDP communicates summary information on surface complexity, total volume, and geometrical anisotropy of MS-lesion patterns. The a-c plot complements the LDP, informing on the preferred directional components of MS-lesion patterns. The major advantage over existing methods is the insight achieved into the spatial development of whole MS-lesion patterns (i.e., selective growth or decay in specific directions) without requiring object-based, per-lesion characterization. The approach also offers high precision and comparability between different brains or the same brain at different times. Both the LDP and the a-c plot are considered EDA tools that add to standard neurological image-processing methods by quickly informing on the spatial or spatiotemporal properties of MS-lesion patterns in the course of cross-sectional studies, longitudinal studies, or the evaluation of medication efficacy.
FGF19 increases mitochondrial biogenesis and fusion in chondrocytes via the AMPKα-p38/MAPK pathway Fibroblast growth factor 19 (FGF19) is recognized to play an essential role in cartilage development and physiology, and has emerged as a potential therapeutic target for skeletal metabolic diseases. However, FGF19-mediated cellular behavior in chondrocytes remains largely unexplored. In the current study, we aimed to investigate the role of FGF19 in chondrocytes by characterizing mitochondrial biogenesis and the fission-fusion dynamic equilibrium, and by exploring the underlying mechanism. We first found that FGF19 enhanced mitochondrial biogenesis in chondrocytes with the help of β-Klotho (KLB), a vital accessory protein for assisting the binding of FGF19 to its receptor, and that the enhanced biogenesis was accompanied by a fusion of mitochondria, reflected in the elongation of individual mitochondria and the up-regulation of mitochondrial fusion proteins. We then revealed that FGF19-mediated mitochondrial biogenesis and fusion required the binding of FGF19 to the membrane receptor FGFR4 and the activation of the AMP-activated protein kinase alpha (AMPKα)/peroxisome proliferator-activated receptor-gamma coactivator 1 alpha (PGC-1α)/sirtuin 1 (SIRT1) axis. Finally, we demonstrated that FGF19-mediated mitochondrial biogenesis and fusion was mainly dependent on the activation of p-p38 signaling. Inhibition of p38 signaling largely reduced the high expression of the AMPKα/PGC-1α/SIRT1 axis, decreased the up-regulation of mitochondrial fusion proteins, and impaired the enhancement of mitochondrial network morphology in chondrocytes induced by FGF19. Taken together, our results indicate that FGF19 can increase mitochondrial biogenesis and fusion via AMPKα-p38/MAPK signaling, which enlarges our understanding of FGF19 in chondrocyte metabolism. Video Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s12964-023-01069-5.
Introduction

Mitochondria play a vital role in chondrocyte metabolism because they not only provide the indispensable adenosine triphosphate (ATP) for chondrocytes [1] but also directly participate in many cellular physiological activities by changing their biogenesis [2]. The homeostasis of mitochondrial biogenesis is maintained by a mitochondrial quality control (MQC) system [3]. MQC mainly preserves functional mitochondria by controlling the homeostasis of the fission-fusion process, and even removes redundant, non-functional mitochondria [4]. Mitochondrial fission is mainly driven by dynamin-related protein 1 (Drp1), a cytoplasmic dynamin guanosine triphosphatase (GTPase) [5], and mitochondrial fission protein 1 (Fis1) [6]. Drp1 is dynamically recruited to the outer mitochondrial membrane (OMM), where it oligomerizes in a ring-like structure and drives membrane constriction in a GTP-dependent manner. Fis1 serves as a membrane anchor that can also regulate mitochondrial fission through interaction with Drp1 and other fission components in mitochondria [7]. Mitochondrial fusion is controlled by the two mitofusins (Mfn1 and Mfn2) [8] and dominant optic atrophy 1 (Opa1) [9]. Mfn1 and Mfn2 mediate the fusion of OMMs, and then Opa1 mediates the fusion of the inner mitochondrial membrane (IMM). The outer membranes of two mitochondria are tethered by the Mfns; GTP binding and/or hydrolysis induces a conformational change of the Mfns, resulting in increased mitochondrial docking and membrane contact sites. Following OMM fusion, the interaction between Opa1 and cardiolipin (CL) on either side of the membrane tethers the IMMs, which drives IMM fusion by Opa1-dependent GTP hydrolysis. Mitochondrial fusion promotes the exchange of important components among mitochondria, especially mitochondrial deoxyribonucleic acid (mtDNA), and ensures the continuity of mitochondrial function [10]. Mitochondria adjust their number and network morphology in cells by coordinating the cycle of mitochondrial fission and fusion; these dynamic changes further regulate mitochondrial functions and determine cell metabolism [11].
Fibroblast growth factors (FGFs) are cytokines that play an important role in regulating organ growth, development, maturation, and disease [12]. The family comprises 22 members in 7 subfamilies. FGF19, which belongs to the FGF19 subfamily together with FGF21 and FGF23, was first found to be expressed in human cartilage in 1999 [13] and is now recognized as one of the predominant FGF ligands present in developing human cartilage [14]. These reports indicate its potential role in chondrocyte development and homeostasis. Previous evidence has confirmed that FGF19 signalling is crucial to glucose metabolism [15]: it can increase energy consumption and glucose utilization by enhancing the cyclic-AMP response element binding protein (CREB)-peroxisome proliferator-activated receptor-gamma coactivator 1 alpha (PGC-1α) signalling cascade. Previous studies have shown that mitochondria are potential targets of FGF19. FGF19 has been shown to increase energy homeostasis by increasing fatty acid delivery to mitochondria in the liver [16]. In white adipose tissue, FGF19 levels correlate with mitochondrial number [17]. FGF19 can prevent excessive palmitate-induced dysfunction of differentiated mouse myoblast cells by protecting mitochondrial function [18]. These results suggest that FGF19 may work as a potential mediator of mitochondrial metabolism. Besides, the receptors of FGF19 include fibroblast growth factor receptors 1c, 2c, 3c, and 4 (FGFR1c, 2c, 3c & 4), but FGFR4 is considered the primary receptor due to its high affinity for FGF19 [19]. Reports also indicate that the binding of FGF19 to its receptor FGFR4 requires the participation of β-Klotho (KLB), a co-receptor, to achieve high affinity [20]; FGF19 has been reported to bind the β-Klotho monomer via its C-terminal tail at a 1:1 ratio [21]. To date, although we realize the importance of FGF19 in the development and maturation of cartilage, there is a lack of evidence on how FGF19 regulates chondrocyte behavior, especially mitochondrial changes.

Cartilage is a special structure composed of a dense extracellular matrix (ECM), mainly type II collagen and proteoglycan, and highly differentiated cells called chondrocytes [22]. In general, chondrocytes are localized in a relatively low-oxygen environment, so energy production is vital for them. Mitochondrial dysfunction can break the balance between glycolysis and oxidative phosphorylation (OXPHOS) in chondrocytes, reducing ATP production substantially [23]. Thus, in the current study, we aim to explore the effect of FGF19 on the mitochondrial fission-fusion process in chondrocytes by characterizing the morphology of the mitochondrial network and its fission-fusion mediator proteins, and the underlying bio-mechanism.

Chondrocyte isolation

The tissue materials used in the current study were obtained according to ethical principles, and the protocol was approved before the experiments began by our Institutional Review Board (No. WCH-SIRB-OT-2020-048). Chondrocytes were isolated from 0-3-day-old newborn C57 mice as previously described [24]. In brief, chondrocytes from the cartilage of the knee joints were collected by 0.25% trypsin digestion for 30 min at 37 °C and 0.2% type II collagenase (No. C6885, Sigma, MO, USA) digestion for about 16-18 h at 37 °C, until the cartilage tissue mass was completely digested. The isolated chondrocytes were filtered and cultured in 10% FBS DMEM (No. D6429, HyClone, Logan, UT, USA). We used chondrocytes at passages 1-2.
ATP assay

ATP concentrations were tested with an enhanced ATP assay kit (No. S0027, Beyotime, Shanghai, China) according to the manufacturer's protocol, as previously described [25]. Cells were lysed with ATP lysis buffer (200 μl of lysate per well in 6-well plates) and centrifuged at 15,000 g for 5-10 min at 4 °C. The lysates were collected and stored at −20 °C. Before the ATP test, 100 μl of ATP working solution (ATP test solution : ATP test dilution = 1:5) was added to 1.5-ml EP tubes and incubated for 3-5 min at room temperature (RT). Next, the lysates were transferred to the ATP working solution and mixed quickly. The amount of luminescence emitted was measured immediately with a luminometer (Synergy HTX Multi-Mode Microplate Reader, BioTek Instruments, WI, USA). The luminescence data were normalized to the protein amounts of the control samples. GraphPad Prism 8 was used to process the data and images.

Mitochondrial staining of living cells

The Cell Navigator™ Mitochondrion Staining Kit (No. 22667, AAT Bioquest, CA, USA) was used to stain mitochondria in living chondrocytes. Briefly, cells were cultured in Petri dishes specified for confocal laser microscopy (1,000 cells per dish; Glass Bottom Cell Culture Dish, Φ15 mm, No. 801002, NEST, Jiangsu, China). FGF19 (200 ng/ml, No. 100-32, PEPRO TECH, USA) and/or KLB (200 ng/ml, 2619-KB-050, R&D Systems, USA) at a 1:1 ratio were added to the culture media as the experimental groups, and incubation continued for 72 h. All components of the staining kit were thawed at RT before starting the experiment; 2 µl of 500X MitoLite™ Orange (Component A) was added to 1 ml of Live Cell Staining Buffer (Component B) to make a working solution, and 200 µl of working solution was added to the Petri dishes and incubated at 37 °C for 30-120 min. Fluorescence was detected at Ex/Em = 540/590 nm (TRITC filter set). The dye-loading solution was then replaced with phosphate-buffered saline (PBS, 1×). The cells were fixed in 4% paraformaldehyde for 20 min, rinsed three times with PBS, and permeabilized with 0.5% Triton X-100 (Beyotime, Shanghai, China) for 15 min. Nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI; D9542, Sigma, USA), and the cytoskeleton was stained with phalloidin (FITC, A12379, Thermo, MA, USA). The immunofluorescence images were observed through a confocal laser scanning microscope (FV3000, Olympus, Tokyo, Japan).

RNA sequencing and bioinformatics analysis

Chondrocytes (1 × 10⁶ cells per well) were treated with FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml) for 72 h and harvested by trypsin digestion. The cells were then sent for RNA sequencing at Shanghai Lifegenes Biotechnology Co., Ltd (Shanghai, China), as previously described [26]. Total RNA was extracted from chondrocytes using Trizol reagent (Catalog #15596026, Thermo Fisher Scientific, Waltham, MA), and quantification was performed with an RNA Nano 6000 assay kit (Bioanalyzer 2100 System, Agilent Technologies, CA). The Illumina NeoPrep system was applied to purify and fragment the mRNAs, synthesize cDNAs, and amplify the targets. Sequencing was accomplished with the Illumina NovaSeq 6000 platform, and the raw data were mapped to the reference genome using HISAT2 v2.1.0. The data were reported in Fragments Per Kilobase of exon model per Million mapped fragments (FPKM). Heatmaps were generated with the R package pheatmap.
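For orientation, a rough Python equivalent of the pheatmap step is sketched below: the log2(FPKM + 1) transformation (as reported for Fig. 3a) followed by a clustered heatmap. The gene and sample names here are purely illustrative placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import seaborn as sns

# Hypothetical FPKM table: rows = genes, columns = samples
# (control vs FGF19+KLB triplicates); values are random stand-ins.
fpkm = pd.DataFrame(
    np.abs(np.random.default_rng(1).normal(20, 10, (6, 6))),
    index=["Mfn1", "Mfn2", "Opa1", "Drp1", "Fis1", "Fgfr4"],
    columns=["ctrl_1", "ctrl_2", "ctrl_3", "fgf19_1", "fgf19_2", "fgf19_3"],
)

log_expr = np.log2(fpkm + 1)   # log2(FPKM + 1), as in the paper's heatmaps
# Row-wise z-scoring and clustering; red = up, green = down, matching
# the color convention described for Fig. 2a.
g = sns.clustermap(log_expr, z_score=0, cmap="RdYlGn_r", figsize=(5, 5))
g.savefig("pheatmap_like.png", dpi=150)
```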
Transmission electron microscopy (TEM)

The cell pellets in agarose pieces were treated with 1% OsO4 solution for 1 h at 4 °C to enhance contrast in the TEM images. Samples were further processed for dehydration, infiltration, and embedding into LX-112 resin with serial changes through the following solutions: 25% ethanol at RT for 15 min; 50% ethanol at RT for 15 min; 75% ethanol at RT for 15 min; 95% ethanol at RT for 15 min; 100% ethanol at RT for 15 min, twice; ethanol:LX-112 (3:1) at RT for 30 min; ethanol:LX-112 (1:1) at RT for 30 min; ethanol:LX-112 (1:3) at RT for 30 min; pure LX-112 at RT for 60 min, twice. Finally, the samples were transferred to pyramid-tip molds (Ted Pella; 10585) and polymerized at 60 °C for 72 h. Semi-thin sections (1 μm) were cut using an ultramicrotome (Leica EM UC7) after attaching the pyramid to mounting cylinders (Ted Pella; 10580) and stained with toluidine blue to identify the position of cells. Ultra-thin sections (70-100 nm) were cut and collected on 200-mesh grids. The grids were stained with 1% uranyl acetate at RT for 10 min, followed by Reynolds lead citrate at RT for 5 min. Sections were examined with a JEM-1400FLASH electron microscope (JEOL, Tokyo, Japan) at 80 kV, using the AMT-600 image capture engine software. Images were transferred to Photoshop software for final processing.

Inhibitor treatments

The p38 pathway inhibitor SB203580 (No. A8254, APExBIO Technology, TX, USA) was prepared in dimethyl sulfoxide (DMSO) (No. 196055, MP Biomedicals, OH, USA) as a stock solution, and the treatment procedure followed our previous report [29]. Chondrocytes were pre-incubated with SB203580 (10 µM) for 2 h prior to the addition of FGF19. DMSO was added to the cell culture medium as a control.

Statistical analysis

All protein bands and immunofluorescence images were quantified by optical density (OD) values and fluorescence intensity with ImageJ software (ImageJ2, NIH, Bethesda, MD, USA). Data are presented as the mean ± SD of at least three independent experiments (n ≥ 3) and were plotted with GraphPad Prism. Significance analyses were based on Student's t-test, with the critical significance level set at p = 0.05.

FGF19 increases mitochondrial biogenesis

To explore the influence of FGF19 on the biological behavior of mitochondria, we first used TEM to observe the morphological change of mitochondria in chondrocytes induced by FGF19 with the help of β-Klotho (KLB), a vital accessory transmembrane glycoprotein that assists the binding of FGF19 to its receptor [20]. We found that FGF19 at 200 ng/ml could significantly increase mitochondrial biogenesis, as indicated in Fig. 1a. Quantification confirmed that the number of mitochondria in chondrocytes in the FGF19 + KLB group was significantly enhanced relative to that of the single-FGF19 group or the KLB control group (Fig. 1b). To further confirm the change in mitochondrial number in living chondrocytes, we then used a mitochondria staining kit for living cells (Cell Navigator™ Mitochondrion Staining) and performed immunofluorescence. The results revealed that the mitochondrial number was significantly enhanced in FGF19-treated living chondrocytes in the presence of KLB (Fig. 1c). Linear quantification of fluorescence intensity (red) also showed that the number of mitochondria was increased and the distribution of mitochondria was broader in the cytoplasm of living chondrocytes treated with FGF19 (Fig. 1d). An increase in mitochondrial biogenesis is usually accompanied by the generation of ATP [30]. Thus, intracellular ATP was tested with the enhanced ATP assay kit in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml) at 48 h. The results confirmed that intracellular ATP in chondrocytes was considerably increased by FGF19 (Fig. 1e). By western blotting, we detected that the expression of citrate synthase (CS), one of the key enzymes of aerobic respiration in mitochondria, was up-regulated in chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) (Fig. 1f, g). Taken together, these results indicate that FGF19 can increase mitochondrial biogenesis and thus promote energy generation.

FGF19-induced mitochondrial biogenesis is accompanied by a fusion of mitochondria

We performed RNA sequencing to precisely explore the associated gene changes in mitochondrial metabolism of chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). Genes shown in red are upregulated and genes in green are downregulated in the pheatmap (Fig. 2a; gene information in Additional file 1: Table S1). We analyzed the expression of all changed genes and screened 22 changed mitochondrion-related genes in chondrocytes induced by FGF19 in the presence of KLB. Among them, the mitochondrial fusion genes Mfn1, Mfn2, and Opa1 were substantially upregulated, indicating that FGF19 enhances the expression of mitochondrial fusion genes in chondrocytes. By western blotting, we then confirmed the protein changes of Mfn1, Mfn2, and Opa1 in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml) at 72 h. As shown in Fig. 2b, FGF19 significantly upregulated the expression of Mfn1, Mfn2, and Opa1 in chondrocytes. Quantitative analysis confirmed the significant increase in mitochondrial-fusion proteins in chondrocytes induced by FGF19 (Fig. 2c). Since mitochondrial fission-fusion is a dynamic process [4], we also detected the protein expression of the mitochondrial fission-related proteins Drp1 and Fis1 by western blotting under the same conditions (Additional file 1: Figure S1). The results showed that FGF19 did not significantly change the expression of Drp1 and Fis1 in chondrocytes induced by FGF19 at 200 ng/ml in the presence of β-Klotho (200 ng/ml). We next used TEM to explore the fission-fusion change of mitochondrial morphology in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The results showed that FGF19 could elongate individual mitochondria in chondrocytes (Fig. 2d); in particular, the yellow-boxed images show the elongation of mitochondrial morphology in chondrocytes induced by FGF19. The schematic diagram shows the morphological change of individual mitochondria from regular, circular shapes to irregular, elongated ones. Further, we analyzed the mitochondrial morphology with ImageJ (Fig.
2e). Quantitative results confirmed a significant increase in the spreading area (nm²), 2D perimeter (nm), aspect ratio (major to minor axis), and Feret's diameter (the longest distance within a single mitochondrion) of individual mitochondria, and a significant decrease in the circularity (4π × area/perimeter²) and roundness (4 × area/(π × major axis²)) of individual mitochondria in chondrocytes induced by FGF19. Together, FGF19 could also mediate the fission-fusion process of mitochondria, as characterized by the enhancement of fusion proteins and the elongation of mitochondrial morphology.

FGF19 enhances mitochondrial biogenesis and fusion via up-regulation of AMPKα signaling

It is widely recognized that FGF19 can bind to the FGFR1, 2, 3, and 4 receptors, but it has a high affinity for FGFR4 with the help of KLB [20]. In order to explore the gene expression of FGF19 receptors in chondrocytes, we analyzed the RNA sequencing data (Additional file 1: Table S2). The gene expression of FGFR1 and FGFR4 was significantly increased by FGF19 in the presence of KLB and, moreover, the expression of FGFR4 was much higher than that of FGFR1 in the chondrocytes. Then, we performed qPCR and western blotting to confirm the change of FGFR4 expression in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The results in Fig. 3b show that FGF19 significantly increased the gene expression of FGFR4 in chondrocytes by qPCR, and the up-regulation of the FGFR4 gene in the FGF19 + KLB group was remarkably enhanced relative to that without the help of KLB (the single-FGF19 group). The protein expression of FGFR4 was also increased in chondrocytes induced by FGF19 (Fig. 3c). With the help of KLB, FGFR4 in the FGF19 + KLB group showed higher expression than in the single-FGF19 group.

AMPKα signalling directly regulates the biogenesis of mitochondria through the AMPKα-PGC-1α-SIRT1 axis, a putative signalling axis relevant to mitochondrial biogenesis [30]. By western blotting, we found that the expression of AMPKα, p-AMPKα, PGC-1α, and SIRT1 was up-regulated in chondrocytes by FGF19 (Fig. 3d). Quantitative analysis of these proteins further confirmed the increase in AMPKα-PGC-1α-SIRT1 signalling in chondrocytes induced by FGF19 in the presence of KLB (Fig. 3e). As the phosphorylation of AMPKα and the activation of PGC-1α play a vital role in mitochondrial biogenesis, we further performed immunofluorescent staining to explore the expression and distribution of p-AMPKα and PGC-1α in chondrocytes induced by FGF19 (Fig. 3f-i). The results showed that FGF19 could increase the expression of p-AMPKα and PGC-1α. The expression of p-AMPKα was notably accumulated in the nuclear region (Fig. 3f), while the expression of PGC-1α was increased throughout the cytoplasm of chondrocytes (Fig. 3h). Quantification of fluorescence intensity (per cell) confirmed the increase of p-AMPKα and PGC-1α in chondrocytes induced by FGF19 in the presence of KLB (Fig. 3g, i). Taken together, these results indicate that FGF19 enhances biogenesis and fusion via up-regulation of AMPKα signalling.

FGF19 enhances mitochondrial biogenesis and fusion through the p38/MAPK pathway

To determine the key cytoplasmic pathways related to FGF19-mediated mitochondrial biogenesis and fusion, we analyzed the RNA sequencing data and screened out all changed kinases involved in classical pathways. These kinases were clustered in a pheatmap (Fig.
4a and gene information in Additional file 1: Table S3).It showed that most of the kinases were related to MAPK signaling.In particular, MAP kinases such as Dusp4 and Dusp2 were shown to be significantly enhanced in chondrocytes.We then performed western blotting to confirm the changes of ERK/p-ERK, p38/p-p38 and JNK/p-JNK in chondrocytes induced by FGF19 (Fig. 4b).Among them, we found that the enhancement of total p38 and p-p38 were higher than the other two.Quantitative analysis confirmed a significant increase in total p38 and p-p38 but the increase of ERK/p-ERK and JNK/p-JNK was not as obvious as p38/p-p38 in chondrocytes induced by FGF19 in the presence of KLB (Fig. 4c and Additional file 1: S2).We further used immunofluorescent staining to explore the expression and distribution of p-p38 in chondrocytes induced by FGF19 in the presence of KLB (Fig. 4d, e).From the CLSM images, we found that FGF19 could increase the expression of p-p38 in the cytoplasm of chondrocytes, especially in the nuclear region (Fig. 4d).Quantification of total fluorescent intensity confirmed the increased expression of p-p38 in chondrocytes induced by FGF19 in the presence of KLB (Fig. 4e). Inhibition of p38/MAPK attenuates AMPKα signalling and impairs the biogenesis and fusion of mitochondria induced by FGF19 To further determine the importance of p38/MAPK in regulating the expression of AMPKα signalling, SB203580, a specific inhibitor of p38/p-p38 signalling, Fig. 3 FGF19 increases the mitochondrial biogenesis by up-regulating the expression of AMPKα signalling related proteins in chondrocytes.a RNA sequencing showing the change of FGFRs genes in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml).Three pairs of samples were obtained from three independent cell isolates (n = 3), namely, samples 1, 1′, 1′′and 1′′′, samples 2, 2′, 2′′and 2′′′, and samples 3, 3′, 3′′and 3′′′.The data were present as log2(FPKM + 1).b q-PCR showing the gene changes of FGFR4 in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml).The results were based on three independent experiments (n = 3).c Representative western blotting showing the expression change of FGFR4 in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml).The images were chosen based on three independent experiments (n = 3).d Representative western blotting showing the expression changes of AMPKα, p-AMPKα, PGC-1α and SIRT1 in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml).The images were chosen based on three independent experiments (n = 3).e Quantifications of AMPKα, p-AMPKα, PGC-1α and SIRT1 by western blotting in (d).f Representative immunofluorescent staining showing the change in the distribution of p-AMPKα in chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) for 72 h.The images were chosen based on three independent experiments (n = 3).Red, p-AMPKα; Green, F-actin; Blue, nucleus.g Quantification of fluorescence intensity of p-AMPKα in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml).The data were based on at least eight cells from three independent experiments.h Representative immunofluorescent staining showing the change in the distribution of PGC-1α in chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) for 72 h.The images were chosen based on three independent experiments (n = 3).Red, PGC-1α; Green, F-actin; Blue, nucleus.i Quantification of fluorescence intensity of PGC-1α 
We detected the expression of p38/p-p38, AMPKα/p-AMPKα, PGC-1α and Sirt1 in chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) after pretreatment with SB203580 (10 µM) for 2 h (Fig. 5a). Western blotting revealed that SB203580 effectively impaired the up-regulation of p38/p-p38 and also attenuated the mitochondrial biogenesis proteins, including AMPKα/p-AMPKα, PGC-1α and Sirt1. Quantitative analysis further confirmed a significant decrease in the expression of the p38 pathway and AMPKα signalling in chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) after pretreatment with SB203580 (10 µM) (Fig. 5b). We then used immunofluorescent staining to show the expression and distribution of p-AMPKα and PGC-1α (Fig. 5c-e). The results showed that the expression of p-AMPKα (Fig. 5c) and PGC-1α (Fig. 5d) was largely reduced in chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) after pretreatment with SB203580 (10 µM). Fluorescence quantification further confirmed the changes of p-AMPKα and PGC-1α (Fig. 5e).

Next, we explored the role of p38/MAPK in regulating the biogenesis and fusion of mitochondria. We detected the protein mediators of the fission-fusion process (Additional file 1: Figure S3 and Fig. 6a). The results showed that inhibition of p38 did not significantly change the expression of the FGF19-induced mitochondrial fission proteins, i.e., Drp1 and Fis1 (Additional file 1: Figure S3), but did decrease the expression of the FGF19-induced mitochondrial fusion proteins, i.e., Opa1, Mfn1 and Mfn2 (Fig. 6a, b). Further, we performed immunofluorescence and found impairment of Opa1 (Fig. 6c) and Mfn2 (Fig. 6d) in chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) after pretreatment with SB203580 (10 µM). Fluorescence quantification further confirmed the changes of Opa1 and Mfn2 (Fig. 6e). Finally, to confirm the role of p38/MAPK in controlling mitochondrial network morphology, we applied live-cell mitochondrial staining (Fig. 6f, g). From CLSM, we observed that SB203580 significantly decreased the FGF19-enhanced mitochondrial number and, moreover, sharply reduced the mitochondrial network morphology formed by FGF19 (cyan boxes). Quantitative analysis confirmed an approximately 50% decrease in the total mitochondrial number (per cell) and a 60% decrease in the number of elongated mitochondria (per cell) in chondrocytes treated with SB203580 (Fig. 6g).
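The panel quantifications above all reduce to the same simple statistical recipe: normalized intensities from three independent experiments, compared between two groups with an unpaired Student's t-test. The following is a minimal sketch of that analysis; the intensity values are hypothetical stand-ins for real densitometry or fluorescence readings, not data from this study.

```python
from scipy import stats

# Hypothetical normalized band intensities (target / loading control),
# one value per independent experiment (n = 3 per group).
fgf19_klb = [1.00, 0.95, 1.05]       # FGF19 + KLB
fgf19_klb_sb = [0.48, 0.52, 0.55]    # FGF19 + KLB + SB203580 pretreatment

# Unpaired (two-sample) Student's t-test, as used in the figure legends.
t_stat, p_value = stats.ttest_ind(fgf19_klb, fgf19_klb_sb)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant change
```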
Together, these results indicate that inhibition of p38/MAPK decreases the expression of AMPKα signalling and thus impairs the FGF19-induced biogenesis and fusion of mitochondria.

Discussion

It has been well established that cartilage is an avascular, non-lymphatic, and non-innervated tissue composed of ECM and chondrocytes [32]. Chondrocytes, the only mature cell type in cartilage, are surrounded by a relatively low-oxygen extracellular environment. In chondrocytes, glycolysis and OXPHOS are both present, and the ATP produced by OXPHOS makes up about 25% of the total energy in chondrocytes [23]. Nonetheless, reports have confirmed that OXPHOS is a more efficient method of ATP synthesis [33]. OXPHOS thus plays a significant role in the energy metabolism of chondrocytes. Four ETC complexes (respiratory chain complexes I-IV) and complex V (ATP synthase) of the respiratory chain, located on the IMM, produce ATP during the OXPHOS process [34]. Mitochondrial dysfunction impairs ATP generation and further interferes with the repair process against cartilage degradation [35]. For these reasons, mitochondria are indispensable energy-producing organelles in the OXPHOS of chondrocytes. In this study, we discovered that FGF19 could greatly boost the production of CS, a key enzyme of aerobic respiration, as well as intracellular ATP levels (Fig. 1e-g). Our findings also showed that FGF19 might increase mitochondrial biogenesis by enhancing the number of functional mitochondria in chondrocytes (Fig. 1a-d). These results suggest that FGF19 is involved in mitochondrial biogenesis and fusion, and they enlarge our understanding of chondrocyte metabolism induced by growth factors.

Members of the FGF family, in particular the FGF19 subfamily, are cytokines that play an important role in regulating cellular energy homeostasis and mitochondrial function [36]. Previous research indicated that FGF19 plays a pivotal role in glucose metabolism. According to reports, FGF19 not only improves hepatic gluconeogenesis and glucose catabolism by activating the CREB-PGC-1α signalling cascade, but also enhances glycogen synthesis by increasing glycogen synthetase (GS) activity [16]. Martinez et al. also found that FGF19 levels are correlated with the mitochondrial number in white adipose tissue [17]. In addition to FGF19, the FGF19 subfamily also includes FGF21 and FGF23. FGF21 has been reported as a highly sensitive biomarker for predicting mitochondrial disease in muscle [37]. Moreover, deletion of the fission-related protein DRP1 from mouse liver disrupted mitochondrial fission, which further promoted the expression of FGF21 [38]. Furthermore, it was discovered that FGF21 activates the AMPK-SIRT1-PGC-1α pathway to regulate mitochondrial fission-fusion, increase mitochondrial biogenesis, and promote mitochondrial function [39]. Another FGF19 subfamily member, FGF23, can enhance mitochondrial function by upregulating CS activity [40]: FGF23 treatment increased peroxisome proliferator-activated receptor δ (PPAR-δ) mRNA levels and improved mitochondrial function. Other FGF subfamily members may also affect mitochondrial function; for instance, it has been suggested that FGF13 may improve mitochondrial function in primary cortical neurons [41]. Interestingly, we also found that FGF19 could significantly change mitochondrion-related gene expression in chondrocytes (Fig. 2a).
The regulation of chondrocyte mitochondria by FGF19 extends our understanding of FGF19. Since FGF21 and FGF23 are both FGF19 subfamily members, we are interested in whether they would induce the same mitochondrial changes as FGF19; the mitochondrial metabolism induced by FGF19 may also be similar for the other subfamily members. More detailed studies are needed to confirm this assumption.

The mitochondrial network keeps a proper balance between fission and fusion, which helps to maintain the dynamic homeostasis of mitochondrial biogenesis [42]. In addition to controlling mitochondrial biogenesis, fission and fusion proteins may also control mitochondrial bioenergetics. Traditionally, mitofusins drive the fusion of the outer mitochondrial membrane and regulate the shape of the mitochondrial cristae structure [43]. However, it has been reported that the master-regulator effect of PGC-1 on mitochondrial biogenesis may require, or be mediated by, Mfn2. Mfn2 overexpression activated mitochondrial metabolism by increasing the expression of several subunits of the OXPHOS complexes in muscle cells, and the connection between Mfn2 and mitochondrial metabolism has also been demonstrated using loss-of-function studies [44]. The role of Mfn2 in mitochondrial energy metabolism was likewise demonstrated in Mfn2-knockdown mouse embryonic fibroblasts, in which loss of Mfn2 affected mitochondrial energy metabolism by inhibiting the expression of complexes I, II, III and V and reducing the mitochondrial membrane potential [45]. The deletion of Mfn2 causes a deficiency in coenzyme Q that leads to electron transport chain (ETC) dysfunction and a decrease in ATP production. Opa1 resides and works in the IMM, acting after the Mfn1/2 proteins anchored in the OMM. In general, the crucial determinants of bioenergetic efficiency depend on the cristae structure of the IMM, and any change in mitochondrial morphology is inevitably related to IMM remodeling. OPA1 inactivation significantly alters mitochondrial morphology, resulting in scattered mitochondrial fragments and disordered mitochondrial cristae [46]. On the other hand, OPA1 overexpression can favor the assembly and stability of respiratory chain supercomplexes (RCS) by changing cristae shape [47]. The relationship between these mitochondrial fission-fusion proteins and mitochondrial biogenesis in chondrocytes was the main focus of the current work, and we found that the mitochondrial fusion-related proteins Mfn1, Mfn2 and Opa1 were significantly enhanced by FGF19 (Fig. 2a-c), accompanied by mitochondrial biogenesis (Fig. 1).
Moreover, elongated mitochondria have been reported to have more cristae structure and higher ATP synthase activity [1]. Additionally, mitochondrial fusion can lead to the elongation of the mitochondrial network under physiological conditions [48]. Thus, mitochondria with an elongated morphology are regarded as more bioenergetically efficient. As this study has demonstrated, FGF19 stimulated the fusional changes of mitochondria in chondrocytes (Fig. 2d, e). We discovered for the first time that FGF19 can upregulate mitochondrial biogenesis and the mitochondrial fusion process by regulating fusion-related proteins in chondrocytes, thereby promoting the elongation of mitochondria.

Mitochondria provide energy and are involved in several metabolic activities through various signalling pathways. It is well known that the AMPK pathway is associated with mitochondria: AMPK senses changes in the energy status of cells and adapts mitochondrial function by regulating its biogenesis, MQC and dynamics [30]. Conversely, a deficiency of mitochondrial biogenesis can decrease the phosphorylation of AMPK. In the current study, we provided solid evidence that FGF19 stimulation enhances mitochondrial biogenesis and fusion via up-regulation of AMPKα signalling (Fig. 3d-i). The level of mitochondrial biogenesis may be related to the level of AMPK phosphorylation: it has been reported that a decline in p-AMPK leads to suppression of the NAD+-dependent deacetylase SIRT1 and of the master regulator of mitochondrial biogenesis, PGC-1α. The activation of PGC-1α not only leads to its translocation from the cytoplasm to the nucleus, but also upregulates the transcription of genes that are important for mitochondrial OXPHOS [49]. As we found in this study, SIRT1 and PGC-1α expression was also upregulated in chondrocytes by FGF19 (Fig. 3f-i). These results confirm that FGF19 enhances mitochondrial biogenesis and fusion by upregulating AMPKα signalling.

The downstream signalling of FGF19 is reported to mainly include the MAPK, phosphatidylinositol 3-kinase (PI3K)-AKT, phospholipase C (PLC)γ-protein kinase C (PKC), and signal transducer and activator of transcription (STAT) pathways [50]. Among them, MAPK signalling, as a canonical FGF family signalling pathway, has been confirmed to be an important downstream pathway in maintaining the homeostasis of cartilage [36]. Studies on the development of craniofacial cartilage in zebrafish have found that estrogen may disrupt the bone-related MAPK signalling pathway by affecting FGF19 [51]. According to our data, most of the changed kinases in classical pathways were related to MAPK signalling in chondrocytes induced by FGF19 (Fig. 4a). Hence, MAPK signalling may be a vital pathway in enhancing mitochondrial biogenesis and fusion in FGF19-treated chondrocytes. We also found that FGF19 activated the MAPK subfamilies p-ERK, p-JNK and p-p38, with p-p38 showing the most significant change (Fig. 4b, c). Besides, p38 is generally considered an active regulator of chondrogenesis and chondrocyte differentiation [52]. Hence, we also provided evidence that the mitochondrial biogenesis and fusion processes in chondrocytes are mediated by the p38/MAPK signalling pathway: inhibition of p38/MAPK attenuates AMPKα signalling (Fig. 5) and further impairs the biogenesis and fusion of mitochondria induced by FGF19 (Fig. 6).
For these reasons, the p38/MAPK signalling pathway is one of the most important pathways activated by FGF19. However, it is unlikely to be the only pathway through which FGF19 modulates mitochondrial fusion, since the expression of the mitochondrial fusion-related proteins was not completely abrogated by SB203580. We assume that other signalling pathways may be involved in the regulation of mitochondrial fusion in chondrocytes, and it will be interesting to identify in future studies which pathways also participate in the FGF19-driven mitochondrial fission-fusion process (Fig. 7).

Inflammatory joint diseases, such as osteoarthritis (OA), are characterized by metabolic disorders. In OA, chondrocytes rapidly change their metabolic pathways as the disease progresses [53]. Therefore, exploring the mechanisms of chondrocyte metabolism may provide potential new therapeutic strategies for the treatment of OA and other inflammatory joint diseases. Previous research has verified that FGFs can cause cellular metabolic disorders and act as key participants in morphogenesis, angiogenesis, neoplasia and several other diseases [54]. For instance, FGF21 has been found to be related to glucose and lipid metabolism [55], FGF23 to phosphate and vitamin D metabolism [56], FGF2 and FGF18 to cartilage remodeling [57], and FGF20 to cartilage pathology [58]. As for FGF19, it is recognized as an important growth factor in cell metabolism and cartilage development, because it acts as a critical metabolic regulator in bile acid biosynthesis [59], gallbladder filling [60], glucose metabolism [37] and skeletal muscle development [61]. Besides, FGF19 has also been reported to play a key role in growth plate development [14] and in morphogenesis during craniofacial development [51]. Therefore, exploring FGF19-mediated changes in cellular metabolism in chondrocytes enlarges our understanding of the physiology and pathology of cartilage and chondrocytes.

In summary, we demonstrated that FGF19 promotes the process of mitochondrial fusion and elongates the morphology of the mitochondrial network in chondrocytes, and we revealed the potential mechanism of the regulation of mitochondrial fusion mediator proteins in chondrocytes. These findings enhance our understanding of the molecular mechanisms of mitochondrial dynamics in chondrocytes and provide potential new therapeutic targets for the management of cartilage diseases.

Table S1. RNA sequencing showing the change of mitochondrial metabolism-related genes in chondrocytes treated with FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). Table S2. RNA sequencing showing the change of FGFR genes in chondrocytes treated with FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). Table S3. RNA sequencing showing the changes in the expression of MAPK-related mediators in chondrocytes treated with FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml).
Fig. 1 FGF19 induces a transient increase in mitochondrial number and an enhanced generation of ATP. a Representative TEM images showing the changes in mitochondrial number in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). Orange arrows indicate individual mitochondria. b Quantification of mitochondrial number (per cell) in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). Quantitative analyses of the mitochondrial number were based on nine cells (per group) from three independent experiments (n = 3). c Representative immunofluorescent staining showing the changes in mitochondrial number in living chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) for 72 h. The images were chosen from three independent experiments (n = 3). Red, individual mitochondria; green, F-actin; blue, nucleus. d Linear quantification of the fluorescence intensity of mitochondria in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml), performed with Image Pro Plus 6.0. e ATP assay showing the increase of intracellular ATP in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The results are based on three independent experiments (n = 3). f Representative western blotting showing the expression change of CS in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). g Quantification of CS by western blotting in (f). The data in b are shown as box (25th, 50th and 75th percentiles) and whisker (minimum to maximum) plots. The significance analysis in b, e and g was based on Student's t-test. (See figure on next page.)

Fig. 2 FGF19 promotes the elongation of mitochondrial morphology by up-regulating the expression of mitochondrial fusion proteins. a RNA sequencing showing the change of mitochondrial metabolism-related genes in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). Three pairs of samples were obtained from three independent cell isolates (n = 3), namely samples 1, 1′, 1′′ and 1′′′, samples 2, 2′, 2′′ and 2′′′, and samples 3, 3′, 3′′ and 3′′′. The data are presented as log2(FPKM + 1); FPKM, fragments per kilobase of exon model per million mapped fragments. b Representative western blotting showing the expression changes of Opa1, Mfn1 and Mfn2 in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). c Quantification of Opa1, Mfn1 and Mfn2 by western blotting in b, performed to confirm these protein changes (n = 3). d Representative TEM images showing the changes in mitochondrial network morphology in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). Cyan arrows indicate the elongation of mitochondrial morphology; the schematic diagram illustrates that elongation correlates with mitochondrial fusion. e Measurements of mitochondrial network morphology in d by ImageJ.
Quantitative analyses of mitochondrial network morphology were based on three independent experiments (n = 3). The data in e are shown as box (25th, 50th and 75th percentiles) and whisker (minimum to maximum) plots. The significance analysis in c and e was based on Student's t-test.

[…] expressions of FGFRs in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml), we analyzed RNA sequencing, and the results are shown in the form of a heatmap (Fig. 3a and gene information in Additional file 1: Table S2).

Fig. 4 FGF19 activates p38/MAPK signalling in chondrocytes. a RNA sequencing showing the changes in the expression of MAPK-related mediators in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). Three pairs of samples were obtained from three independent cell isolates (n = 3), namely samples 1, 1′, 1′′ and 1′′′, samples 2, 2′, 2′′ and 2′′′, and samples 3, 3′, 3′′ and 3′′′. The data are presented as log2(FPKM + 1). b Representative western blotting showing the expression changes of ERK, p-ERK, p38, p-p38, JNK and p-JNK in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). c Quantification of p38 and p-p38 by western blotting in (b). d Representative immunofluorescent staining showing the change in the expression and distribution of p-p38 in chondrocytes induced by FGF19 (200 ng/ml) in the presence of KLB (200 ng/ml) for 72 h. The images were chosen from three independent experiments (n = 3). Red, p-p38; green, F-actin; blue, nucleus. e Quantification of the fluorescence intensity of p-p38 in chondrocytes induced by FGF19 at 200 ng/ml in the presence of KLB (200 ng/ml). The data are based on nine cells from three independent experiments. The data in e are shown as box (25th, 50th and 75th percentiles) and whisker (minimum to maximum) plots. The significance analysis in c and e was based on Student's t-test.

Fig. 5 Inhibition of p38 attenuated FGF19-enhanced AMPKα activity. a Representative western blotting showing the expression changes of p38, p-p38, AMPKα, p-AMPKα, PGC-1α and SIRT1 in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). b Quantification of p38, p-p38, AMPKα, p-AMPKα, PGC-1α and SIRT1 by western blotting in (a). c Representative immunofluorescent staining showing the change in the distribution of p-AMPKα in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). Red, p-AMPKα; green, F-actin; blue, nucleus. d Representative immunofluorescent staining showing the change in the expression and distribution of PGC-1α in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). Red, PGC-1α; green, F-actin; blue, nucleus. e Quantification of the fluorescence intensity of p-AMPKα and PGC-1α in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml). The data in e are shown as box (25th, 50th and 75th percentiles) and whisker (minimum to maximum) plots. The significance analysis in b and e was based on Student's t-test.
Fig. 6 (See figure on next page.) Inhibition of p38 decreases the expression of mitochondrial fusion proteins induced by FGF19 in chondrocytes. a Representative western blotting showing the expression changes of Opa1, Mfn1 and Mfn2 in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). b Quantification of Opa1, Mfn1 and Mfn2 by western blotting in (a). c Representative immunofluorescent staining showing the change in the expression and distribution of Opa1 in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). Red, Opa1; green, F-actin; blue, nucleus. d Representative immunofluorescent staining showing the change in the distribution of Mfn2 in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml). The images were chosen from three independent experiments (n = 3). Red, Mfn2; green, F-actin; blue, nucleus. e Quantification of the fluorescence intensity of Opa1 and Mfn2 in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml). The data are based on at least eight cells from three independent experiments. f Representative immunofluorescent staining showing the changes in mitochondrial network morphology in living chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml) and KLB (200 ng/ml) for 72 h. ImageJ shows the change in mitochondrial network morphology in the cyan boxes. The images were chosen from three independent experiments (n = 3). Red, mitochondrial network; blue, nucleus. g Quantification of mitochondrial number (per cell) and elongated mitochondrial number (per cell) in chondrocytes induced by SB203580 (10 µM) in the presence of FGF19 (200 ng/ml), by ImageJ. Quantitative analyses were based on three independent experiments (n = 3). The data in e and g are shown as box (25th, 50th and 75th percentiles) and whisker (minimum to maximum) plots. The significance analysis in b, e and g was based on Student's t-test.

[…] p-ERK and p-JNK in chondrocytes in the presence of β-Klotho. Figure S3. Inhibition of p38 changes the expression of FGF19-induced mitochondrial fission proteins in chondrocytes.
A 5G-Based eHealth Monitoring and Emergency Response System: Experience and Lessons Learned

5G is being deployed in major cities across the globe. Although the benefits brought by the new 5G air interface will be numerous, 5G is more than just an evolution of radio technology. New concepts, such as the application of network softwarization and programmability paradigms to the overall network design, the reduced latency promised by edge computing, or the concept of network slicing – just to cite some of them – will open the door to new vertical-specific services, even capable of saving more lives. This article discusses the implementation and validation of an eHealth service specially tailored for the Emergency Services of the Madrid Municipality. This new vertical application makes use of the novel characteristics of 5G, enabling the dynamic instantiation of services at the edge, the federation of domains, and the execution of real on-the-field augmented reality. The article explains the design of the use case and its real-life implementation and demonstration in collaboration with the Madrid emergency response team. The major outcome of this work is a real-life proof of concept of this system, which can reduce the time required to respond to an emergency by minutes and enable more efficient triage, increasing the chances of saving lives.

I. INTRODUCTION

The new generation of mobile communications, 5G, is expected to bring significant improvements on many fronts: enhanced mobile broadband experiences for the end user; ultra-reliable, extremely low latencies to enable industry automation and autonomous driving; and massive machine-type communications, which will make the wireless Internet of Things a reality. But in addition to these well-known, headline-worthy use cases, there are many application areas that could benefit from 5G and associated technologies. One key example, with a clear and direct impact on society at large, is emergency services and healthcare.

Nowadays, emergency services depend on human intervention. A witness in the vicinity of the emergency will, hopefully, start the emergency procedure described as follows: (i) the witness calls the emergency number (112 in Spain) and explains the situation and the location (this explanation is subjective and prone to errors depending on the background of the witness, since no data on the patient's condition are available; the location is also subjective and very often referred to geographical landmarks that are difficult to locate, e.g., "next to the bakery"); (ii) the operator at the 112 call center assesses the situation and decides which is the most suitable emergency response team; and (iii) the emergency team is deployed. In the city of Madrid, this procedure takes around 4 minutes, while the time an ambulance takes to reach the location is estimated to be around 8 minutes (depending on distance and traffic). By analyzing data provided by the Emergency Services of Madrid, we realized that the efficiency of the emergency service can be considerably improved by employing automated detection systems. The automatic detection of emergencies is an incipient business that will grow in the coming years thanks to the new capacities, in terms of massive connectivity of low-power devices, brought by 5G.
The increasing use of smart wearable devices, together with the great advances in portable medical monitoring and sensing, can enable continuous monitoring of health parameters (e.g., heart rate, blood sugar level, blood pressure) and thus detect, and even predict, potential health issues in a personalized way. Additionally, the introduction of 5G brings more opportunities to improve the quality of care provided by the emergency team. New Augmented Reality (AR) technologies allow for better treatments on-site, as well as for remote support from other medical teams, which may reduce the cost of the service (reduced emergency teams supported remotely). AR requires significant computing resources that cannot be pushed to the cloud due to strict latency requirements. However, 5G capabilities in terms of low latency, rapid edge deployment, and high reliability make it possible to use AR services when treating an emergency. The 5G network also enables providing such a service globally through service federation: Emergency Systems that are customers of a given 5G service provider can track and respond to emergencies of their patients everywhere by using resources of other providers.

With the goal of realizing an improved and automated emergency service, this article proposes the design and realization of a 5G personalized health emergency system. It describes a real-life experience of the deployment of a system capable of detecting and responding to emergency situations in an automatic and personalized way, while enriching the tools at hand for the emergency team by enabling the use of AR services at the location of the emergency, thanks to dynamic network reconfiguration. In short, the system live-monitors patients so that, when an incident occurs (e.g., an irregular heartbeat, which is a sign of a possible cardiac arrest), the system triggers an alarm to send an emergency team to the emergency location while the network is reconfigured with the deployment of a new network service to support the emergency team with AR services on-site. The presented scenario has been completely implemented and has been validated by the emergency services of the Madrid Municipality. As further validated, the time needed to detect and process the emergency, select the best team, and send it to the right location can be mostly eliminated by an automatic and personalized emergency detection system, which also removes the need for a witness. This reduction translates into an increase in the patient's chances of surviving.

The rest of the paper is organized as follows. Section II introduces related work on the role of 5G, AR, and edge computing in eHealth. Section III describes the scenario tackled, and Section IV presents the developed eHealth system. Validation results, using a real prototype with the ambulance and firefighter services of Madrid, are presented in Section V. Finally, Section VI presents the main takeaways from this full 5G validation experience, and Section VII concludes this article.

II. RELATED WORK

5G will allow new kinds of healthcare services that are not feasible with the current capabilities of network technologies. In [1], the authors summarized the healthcare applications enabled by the capabilities of the new mobile network technology.
They divided them into four main categories, namely online consultation, online health monitoring, remote diagnosis, and mobile robot surgery (part of this work falls under the online health monitoring category). One of the challenges identified for this category is the high density of devices required to monitor a large part of the population. 5G is able to cope with the increasing number of chronic patients being monitored, as it introduces higher connection density, higher bandwidth, and lower latency with respect to the previous network generation (i.e., 4G). In [2], the authors demonstrate the need for a 5G network (compared to a 4G network) to guarantee high efficiency and fast responses in a health monitoring scenario.

Different technologies combined with 5G networks aim to improve health services. The works in [3], [4] focus on applying deep learning and artificial intelligence (AI) to enhance performance in heterogeneous networks, with [5] also tackling how to enhance health services. Other works [6], [7] offer solutions to improve security in healthcare systems, some of them focusing on settings with multiple heterogeneous networks [8].

Edge computing is a key part of the 5G concept. The possibility of placing computational resources closer to the user contributes to providing the low latency required by healthcare applications while opening the door to new patient-centric applications. Both [9] and [10] make use of the edge to provide patient care. In [9], a remote patient monitoring system uses edge resources to reduce the bandwidth needs of the telemetry system used to monitor the patients with a variety of sensors and cameras; it also shows how using the edge ensures the real-time constraints of WebRTC. In [10], the authors present a framework to assess voice disorders through deep learning processing at the edge. In our work, we use the edge for two differentiated functions: (i) deploying a network service implementing an AR support system, and (ii) deploying a virtual local breakout point.

Recent research shows that AR has huge potential for applicability in the healthcare system, comprising user-environment interfaces, telemedicine, and education [11]. A key example is the possibility of showing relevant patient health records on a head-mounted AR device without losing focus on the patient. In [12], the authors developed a smart AR application that supports healthcare professionals with procedure documentation and patient information during wound treatments. In addition, they evaluated the interest in their work among healthcare professionals, who preferred AR-based documentation systems over the current documentation procedures (i.e., books, tablets, or smartphones). However, they only considered scenarios in which the AR application runs locally on the AR device. Reference [13] evaluates the use of a particular AR device to assess its performance in a disaster scenario; similar to the previous case, the AR application runs locally on the AR device, which additionally degrades the battery lifetime. In [14], AR devices are used to triage patients in a disaster event, operating in two modes: (i) algorithm-assisted and (ii) with telemedical support from a remote professional. The results showed 90% triage accuracy. The study emphasizes the (WiFi) connectivity limitations, especially in the telemedical support case, and the low battery durability of the AR devices.
III. THE SCENARIO: 5G PERSONALIZED HEALTH EMERGENCY SYSTEM

In this section, we identify several areas where emergency services can be significantly improved by the use of new communication technologies enabled by next-generation mobile networks. On top of that, we present an eHealth emergency scenario, which we implemented as described in Section IV.

A. eHealth IMPROVEMENTS FOR SAVING LIVES

eHealth is defined as the delivery of health services by means of information and communication technologies (ICT). The goal is to improve the information flow between the actors involved (e.g., patients, paramedics, hospitals, doctors, surgeons), supported by ICT. Mobile health, a component of eHealth, is defined as medical health practice supported by wireless devices, including wearable medical devices, patient monitoring devices, and personal assistants [15]. Adults are becoming more concerned about their health and take measures for continuous monitoring by investing in mobile health devices [16]. With the clear goal in mind of exploring how ICT can enhance emergency response services, we have grouped the different improvement opportunities as follows:

1) EMERGENCY RESPONSE TIME AND REAL-TIME DATA

In addition to preventing cardiac arrests through constant monitoring, reducing an emergency team's response time and being able to reach the patient's exact location clearly contribute to saving more lives. For example, the reduction of the emergency response time significantly lowers the door-to-balloon time. According to [17], the guidelines suggest a door-to-balloon time of less than 90 minutes. The study points out a few effective strategies for reducing the door-to-balloon time, among them (i) having a single call to a central operator, and (ii) providing real-time data feedback to the emergency and catheterization laboratory prior to arrival. A similar study [18] analogously concludes that the major delays are due to the time needed to reach the patient and move them to the closest hospital.

2) CONNECTIVITY, DURABILITY AND PERFORMANCE

As mentioned, technologies such as AR have been proven to improve emergency services [14]. However, the main requirements to make AR useful in emergency scenarios are (i) stable, high-bandwidth wireless connectivity, and (ii) long battery duration. Various works [19], [20] suggest that computational offloading at the edge can reduce energy consumption by 90% while improving the overall performance of the devices. On top of that, Ultra-Reliable Low-Latency Communication (URLLC) slices in 5G networks are envisioned to address the connectivity requirements of AR [21], [22].

B. SCENARIO DESIGN

Taking as a starting point the improvement considerations described before, plus the constraints imposed by the way emergency response teams operate and the way cellular networks work, we arrived at the scenario shown in Figure 1. The ultimate goal, identified as critical by the Madrid emergency response team, is to develop a fully automatic and personalized emergency response system. To reach this goal, we set up a simple scenario for which we designed the fully automated system (explained in detail later). The scenario considers a patient whose vital signs (e.g., heart rate) are continuously monitored. Once the patient's heart rate becomes abnormally low, an alarm is triggered, which the patient can dismiss as a false alarm.
If the patient does not mark the alarm as false within a short, pre-configured amount of time, the system automatically dispatches an emergency team to the patient's location. To support the dispatched emergency team, the system deploys an AR system close to the patient's location.

To provide intensive health monitoring capabilities to an increasing part of the population, we rely on the massive connectivity properties of 5G. A smart wearable (e.g., a smartwatch) is used to monitor the health of a person. This device is able to detect potential health issues, such as low blood sugar incidents or, as in our testing case, a heart attack. Although 5G will support direct communication of these wearable devices to a central cloud through low-power communication, at this early stage of 5G deployment we assume the wearable is connected to a mobile phone application which periodically reports the health status and the patient's location to a central cloud (Central eServer, step 1 in Figure 1). At the functional level, there is not much difference between the two solutions as far as the concepts presented in this article are concerned.

If the monitored data reveal a potential (predicted) or actual health issue (e.g., heart rate down to zero), the Central eServer issues an alarm to the user's smartwatch or mobile phone (to check if it is a false alarm, step 2) while continuing to process the emergency event. This involves analyzing the health issue and the medical records of the person, estimating the likely condition, and selecting the most appropriate team to deploy, considering both the time required to reach the location and the skills that best address the emergency (e.g., a quick-intervention medical vehicle, a regular ambulance, or the combined deployment of a firefighter team). The Central eServer automatically dispatches the selected emergency team (this can be canceled at any time if the person notifies that it was a false alarm) to the location of the person (step 3). In this specific example, our system is able to deploy an AR service, for use by the emergency team, to improve the quality of care by displaying geolocation and health information of the patient. To provide a high-performance, stable, and durable AR service, the system requests the deployment of an emergency Edge eServer closer to the emergency location (step 4). This Edge eServer hosts the AR service helping the emergency team once deployed at the emergency location and may include patient health data used by the emergency team (step 5). This edge service is automatically deployed in a matter of a few minutes (with current technologies), while the emergency team travels to the indicated location. The deployed edge application establishes a connection to the emergency team and guides it towards the location of the user by streaming an AR-marked pathway to the doctor's AR headset (step 6). The edge application also obtains the user's health records and live-streams them on the doctor's AR headset together with real-time sensor data from the user's wearable. The AR headset is also used to live-stream video to a remote medical team that can provide specialized support (if needed). Thanks to this, the paramedic team can significantly increase its efficiency (e.g., faster triage and real-time feedback to the hospital), thus lowering the door-to-balloon time and increasing the probability of saving people's lives.
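To make the end-to-end flow concrete, the following is a minimal sketch of the Central eServer decision loop described in steps 1-4. It is illustrative only: the threshold, the confirmation window, and the dispatcher and orchestrator collaborators are hypothetical stand-ins for the team-selection logic and the orchestration interface, not the actual implementation.

```python
import threading

FALSE_ALARM_WINDOW_S = 30      # hypothetical pre-configured confirmation window
HEART_RATE_FLOOR_BPM = 40      # hypothetical threshold for an abnormal reading

class CentralEServer:
    """Toy model of the Central eServer decision loop (steps 1-4);
    no concurrency hardening, for illustration only."""

    def __init__(self, dispatcher, orchestrator):
        self.dispatcher = dispatcher        # stub: selects/contacts emergency teams
        self.orchestrator = orchestrator    # stub: requests Edge eServer instantiation
        self.pending = {}                   # patient_id -> running confirmation timer

    def on_sample(self, patient_id, heart_rate_bpm, location):
        # Step 1: periodic report from the wearable/phone application.
        if heart_rate_bpm < HEART_RATE_FLOOR_BPM and patient_id not in self.pending:
            self.raise_alarm(patient_id, location)

    def raise_alarm(self, patient_id, location):
        # Step 2: notify the patient; dispatch only if not cancelled in time.
        timer = threading.Timer(
            FALSE_ALARM_WINDOW_S, self.dispatch, args=(patient_id, location))
        self.pending[patient_id] = timer
        timer.start()

    def cancel(self, patient_id):
        # Patient marked the alarm as false within the window.
        self.pending.pop(patient_id).cancel()

    def dispatch(self, patient_id, location):
        # Steps 3-4: send the selected team and deploy the Edge eServer.
        del self.pending[patient_id]
        self.dispatcher.send_closest_team(location)
        self.orchestrator.instantiate_edge_service(near=location)
```

The key design point captured here is that dispatching is armed immediately but can be disarmed by the patient within the window, mirroring the false-alarm handling discussed in Section VI.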
The use of AR technology in emergency scenarios has been proposed before, but it is only now, thanks to 5G, that its wide use by emergency response teams becomes feasible, since AR requires very low latency between the AR device and the AR server. As will be shown later in this article, previous mobile networks (i.e., 4G) do not provide a latency low enough to guarantee good AR experiences. Our 5G-based solution is capable of dynamically instantiating an Edge eServer close to the location of the emergency and adapting the mobile network infrastructure to provide a low-latency path to the newly deployed Edge eServer. To achieve this, network service federation across different operators might be needed to satisfy the requirements of the AR service dedicated to the emergency case.

IV. THE SOLUTION: 5G-ENABLED PERSONALIZED HEALTH EMERGENCY SERVICE

This section describes the technical solution enabling the scenario described before. First, we provide an overview of the 5G vertical service orchestration platform used. Then, a detailed description of the developed emergency and AR applications is provided.

A. ORCHESTRATING NETWORK SERVICES IN 5G NETWORKS: 5G-TRANSFORMER

The H2020 5G-TRANSFORMER (5GT) project developed a platform with dynamic and flexible management features to automate the instantiation of multiple heterogeneous services while satisfying the requirements coming from different vertical industries. These services can be concurrently instantiated over a shared infrastructure that includes multiple heterogeneous types of resources in terms of computing, storage, and networking, even spanning multiple administrative domains. The 5GT architecture is made of four main building blocks, namely the vertical slicer (5GT-VS), the service orchestrator (5GT-SO), the mobile transport and computing platform (5GT-MTP), and the monitoring platform (5GT-MON). This architecture has been taken as a baseline by the H2020 5Growth project, which is extending it with additional features and innovations. The 5GT-VS is the entry point for vertical industries to support the creation and management of network slices. The 5GT-SO oversees the end-to-end orchestration and lifecycle management of NFV network services (NFV-NSs) based on the available resources (compute, storage, and network) advertised by the underlying 5GT-MTP, which is the unified controller of the transport stratum for integrated fronthaul and backhaul networks and of the computing and storage resources residing in multiple NFVI Points of Presence. The 5GT-MON provides metrics to the 5GT platform so it can react and take decisions to adapt to network conditions and comply with the service-level agreements embedded in the NFV-NS request. A detailed description of the 5GT architecture can be found in [23].

As mentioned before, the 5GT architecture allows the deployment of network services spanning multiple administrative domains [24], known as network service federation (NSF). This is possible thanks to the capability of the 5GT-SO to orchestrate composite NFV-NSs (composed of multiple nested NFV-NSs). The NSF feature is essential for deploying the 5G personalized eHealth emergency system when and where needed, as shown in [25]. Let us consider a simple example: the emergency services of a city municipality have a contract with a 5G operator to provide the patient-monitoring and edge emergency NFV-NSs and the communication services used by all the emergency teams.
Using the 5GT platform, this operator deploys most of its core network components and the monitoring NFV-NS (Monitoring-NS in Figure 2) in a remote cloud location for operational reasons, e.g., the absence of demanding latency constraints (AD1 in Figure 2). In case of an emergency, the Central eServer requests from the operator (by means of a query to the 5GT-VS) the instantiation of an edge emergency service (Edge-NS in Figure 2), connected to the monitoring NFV-NS and close enough to the emergency location. A placement algorithm (in the 5GT-SO) [26]-[28] is in charge of deciding the placement of the Edge eServer based on (i) location constraints, (ii) information regarding the availability of local computation resources, and (iii) the latency constraints of the AR application. The placement algorithm only computes the ideal Edge eServer location over the operator's local resources. However, there might be situations in which the operator does not have infrastructure available in the proximity of the emergency location, requiring the use of infrastructure from a different operator offering the edge emergency service (AD2 in Figure 2). Hence, without the NSF feature, an operator implementing our proposed solution would (i) require dedicated infrastructure, (ii) incur additional costs, and (iii) suffer from long implementation times. For example, a non-NSF implementation over the Madrid Municipality [27] (an area of ∼8,000 km²) would require dedicated infrastructure to satisfy the stringent AR latency requirements, implying additional deployment and maintenance costs as well as a longer implementation time.

B. HEALTH MONITORING AND AR APPLICATIONS

In addition to the network services and their orchestration logic, we developed the three applications needed for the 5G-enabled personalized health emergency service: (i) the monitoring application, providing the heart rate measurements and location of users; (ii) the server application, processing the monitored information, deciding if an emergency team has to be deployed, and selecting the best one considering different information (e.g., the time to reach the emergency based on the location of available ambulances); and (iii) the AR service, required to compute and stream the information reproduced on the AR headset (guidance to the physical location of the person, collection and representation of the relevant medical information at each moment, and video streaming to a remote medical team to improve the patient's triage).

The monitoring application is based on a smartwatch streaming heart rate data continuously to a 5G smartphone via ANT+ (Bluetooth could also be used). The smartphone is connected through 5G Non-Standalone (NSA) to the 5GT system and continuously sends new data to the Central eServer (see Figure 1). The Central eServer itself continuously checks the state of the patient based on the information received, detecting (or even predicting) when an emergency occurs and contacting the person (to detect potential false alarms). The handling of false alarms is discussed in Section VI. The steps performed by the Central eServer while attending the emergency are: (i) contacting the closest ambulance using the legacy emergency location system of the Madrid Municipality (based on a GPS fleet navigation API), and (ii) triggering the instantiation of the edge emergency network service providing the networking and computational resources required to support the emergency team upon arrival at the patient's location.
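The instantiation triggered in step (ii) depends on the placement decision described in Section IV-A. The snippet below is a minimal sketch of such a decision under simplified assumptions: the candidate sites, their attributes, the latency budget, and the VNF footprint are all hypothetical, and the real 5GT-SO algorithms [26]-[28] are considerably richer. It only illustrates the federation fallback: local sites are preferred, and a federated domain is used when no local site satisfies the constraints.

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    domain: str            # "local" operator or a federated provider
    free_vcpus: int        # available compute at the site
    est_owd_ms: float      # estimated one-way delay to the emergency site

AR_OWD_BUDGET_MS = 10.0    # assumed mobile-AR latency budget (cf. [31])
REQUIRED_VCPUS = 8         # hypothetical footprint of the Edge-NS VNFs

def place_edge_ns(sites):
    """Return the best placement, preferring local sites and falling back
    to federated domains only when no local site meets the constraints."""
    feasible = [s for s in sites
                if s.est_owd_ms <= AR_OWD_BUDGET_MS
                and s.free_vcpus >= REQUIRED_VCPUS]
    if not feasible:
        return None  # no placement can satisfy the AR latency requirement
    local = [s for s in feasible if s.domain == "local"]
    pool = local or feasible          # NSF fallback to another provider
    return min(pool, key=lambda s: s.est_owd_ms)
```

Returning None corresponds to the case in which no placement can meet the AR latency budget, so the service would degrade in the way quantified in Section V.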
The Edge eServer provides a remotely rendered AR/VR video flow, streamed through a 5G smartphone to the AR headset carried by the emergency team (we use a Microsoft Hololens v1). The reason for this setup is the current lack of AR headsets with 5G modems on the market. Once the ambulance arrives at the emergency location, the Edge eServer starts streaming guidance information to the Hololens, indicating directions to reach the patient's location. When the team reaches the patient, medical information is displayed on the Hololens. This information is selected based on its temporal relevance and availability (e.g., results from historical blood tests) and is aimed at facilitating the decision flow of the medical team. This leads to more organized patient transportation, along with a feedback video streamed from the Hololens, thus enabling real-time remote support from other medical teams or specialists in nearby hospitals.

The monitoring application is implemented in Android Studio, using the ANT+ API to gather information from the wearable and REST services to request functions from, and push data to, the server. The Central eServer runs behind an Apache HTTP server and consists of a set of REST APIs and functions developed in PHP; GPS navigation (TomTom) APIs are used to find and contact the ambulance closest to the patient's location. The AR application is developed using Unity 2019 and built as a Universal Windows Platform application; it receives the patient's and the paramedics' positions using the GPS location of the 5G smartphones. The streaming from the Edge eServer to the Hololens is implemented with the Holographic Remoting API provided by the Windows Mixed Reality Toolkit (MRTK) [29]; the Hololens receives the stream using the MRTK native application, Holographic Remoting Player. The AR navigation was implemented using Mapbox [30]. Note that we did not implement any additional algorithms to assist the triage, as mentioned in [14]. The Hololens itself is able to capture an uplink real-time video stream, while a hospital team is able to push augmented feedback information to the Hololens, assisting the emergency medic in real time.

V. VALIDATION RESULTS

This section describes the experiments performed to assess the validity of our design and its usefulness for the emergency system of Madrid. To demonstrate the feasibility of the system, we deployed an end-to-end system as described in Figure 2, including a smartphone (Samsung S10) connected to a cellular 5G NSA network (provided by an Ericsson BB630 baseband and an Advanced Antenna System AIR 6488), shown in Figure 5; the virtualized core network modules (implemented using the OpenEPC framework); a multi-domain 5G orchestration system (using the 5GT stack, with one provider domain using Cloudify (https://cloudify.co/) and the other OpenSource MANO (https://osm.etsi.org/) as coreMANO platforms); and the different applications required (both at the end-user and server sides). Validation was done through demonstrations/drills involving real emergency response teams with ambulances, medical staff, and firefighters. The location of the emergency is the IMDEA Networks Institute, host of the 5TONIC lab. The network functions of the different network services involved were deployed at 5TONIC and also at the CTTC premises in Barcelona, allowing us to resemble a scenario of a mobile operator providing services in Madrid but having some of its core network entities in Barcelona (at a geographical distance of around 650 km).
Following Figure 2, the RAN is deployed at 5TONIC, while the core network and the eHealth Central eServer are deployed at CTTC. This geographical distance accounts for 10-15 ms of measured one-way delay in the communications between a UE and the Central eServer, which we will show to be critical for deploying AR services.

The first step in our experimentation was to evaluate the service deployment time upon the occurrence of an emergency, more specifically, the time it takes to deploy the Edge eServer at the 5TONIC premises using NSF. The bar chart in Figure 3 sums up the average time of all the phases included in the Edge eServer deployment: (i) VNF deployment, (ii) connectivity establishment, and (iii) service integration. The summed average deployment time is 7 minutes over a set of 10 deployments performed at 5TONIC.

To further evaluate the feasibility and usefulness of our design, we analyzed the duration of every emergency operation that occurred in the Madrid Municipality from 01/01/2019 to 31/12/2019. The dataset was exclusively provided by the Emergency Services of the Madrid Municipality, excluding any private information of the patients. Figure 4 shows the cumulative distribution function (CDF) of the duration of every emergency, where the duration is measured from the moment the emergency team/unit accepts the operation until it reaches the emergency location. Given the maximum deployment time of 7.1 minutes, there is a probability of over 0.55 that the AR application is deployed and ready to be used by an emergency team upon arrival at an emergency site. As mentioned before, the average emergency response time in Madrid is around 12 minutes, including around 4 minutes to issue the alert (receive the alarm and allocate the appropriate medical resources); the remaining portion is the time required to reach the patient's location, shown by the CDF in Figure 4. In this context, the automatic detection of the emergency reduces the first 4 minutes almost to zero. Also, by the time the ambulance arrives at the emergency location, the Edge eServer will be up and running. This greatly improves the response time, which translates into more lives saved and fewer side effects of strokes.

The next step in our experiment was to evaluate the delay of the connection between the Hololens device and the server performing the AR computation, considering 4G and 5G technologies and the availability of local computing resources through federation. Note that if 5TONIC (local) computing resources are available for federation, the AR service can be instantiated in the Edge eServer at the 5TONIC site. Otherwise, the AR service cannot be deployed close to the emergency location, and therefore the minimal AR latency requirements are not met, e.g., if the AR service is deployed at the central CTTC location. Table 1 summarizes the average one-way delay (OWD) measurements in the different configurations. From the obtained results, it is clear that in order to achieve an optimal AR service we need 5G network connectivity with the AR application deployed, through a federated service, close to the emergency location. In the cases of 4G with a local federated service and 5G without a local federated service, the measured latency is too close to the minimal AR requirements.
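The readiness figure above is simply one minus the CDF of the travel time, evaluated at the deployment time. The following is a minimal sketch of that computation; the log-normal sample is a hypothetical stand-in for the (non-public) 2019 Madrid dataset, so the printed value is illustrative rather than a reproduction of the 0.55 result.

```python
import numpy as np

def readiness_probability(travel_times_min, deploy_time_min=7.1):
    """Fraction of emergencies whose travel time is at least the Edge-NS
    deployment time, i.e. P(service ready on arrival) = 1 - CDF(deploy_time)."""
    t = np.asarray(travel_times_min)
    return float((t >= deploy_time_min).mean())

# Hypothetical travel times (minutes) standing in for the real dataset.
sample = np.random.default_rng(0).lognormal(mean=2.1, sigma=0.5, size=10_000)
print(f"P(ready on arrival) ~ {readiness_probability(sample):.2f}")
```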
For that reason, the final step in our experimentation was to quantitatively assess the Quality of Experience (QoE) of the AR user, which is extremely sensitive to delay. To do so, we measured the average Frames Per Second (FPS) achieved at the Hololens under the different streaming settings. For example, X frames per second are streamed from the AR application to the Hololens; however, due to latency and packet loss, only Y ≤ X can be reproduced on the Hololens goggles. To capture the effects of latency on the FPS, we implemented an application module, using the diagnostic tool of MRTK, providing statistics about the actual frame rate sampled every half second. In this way, it was possible to analyze the average FPS achieved under the different streaming settings. Also, to account for the caching, frame prediction, and optimizations of the Hololens, we performed our tests over short periods while moving within the AR world; in this way, it was possible to appreciate the real FPS experienced by the user. The obtained results are shown in Figure 6, where we denote the availability of local infrastructure for federated service with the label 5GT. It can be concluded that 5G is a clear must-have to achieve the best performance (60 FPS being the maximum frame rate achievable by currently available AR devices). Although the difference between 50 FPS and 60 FPS may seem small to the reader, it is important to highlight that the FPS value is critical for AR applications: any difference in FPS makes a huge difference in the experience of the user, since head-movement tracking introduces a latency that is clearly visible for FPS values below 60. Frame rate is not the only aspect affected by latency; object misplacement may also occur. Indeed, more than 10 ms of delay leads to object misplacement of at least three degrees [31] in mobile AR. Figure 7 highlights the misplacement of 3D objects in the real world in the 4G scenario, compared to the correct position of the objects in the 5G one. The arrival-point indicator is moved by several degrees from its original position in the test scenario using 4G without a local edge (top-left), while in the best scenario, using 5G with a local edge, the indicator is placed correctly in the real world (top-right). This reinforces the need for the proposed solution for mobile AR applications.
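The FPS statistics above were collected with the MRTK diagnostics tool in C#; the following is a minimal, purely illustrative Python sketch of the equivalent sampling logic, assuming a hypothetical frame_counter callable that returns the cumulative number of frames rendered so far.

```python
import statistics
import time

def sample_fps(frame_counter, duration_s=10.0, interval_s=0.5):
    """Poll a cumulative frame counter every half second (as with the MRTK
    diagnostics tool) and return the per-interval FPS samples and their mean.
    `frame_counter` is a hypothetical callable returning frames rendered so far."""
    samples, last = [], frame_counter()
    for _ in range(int(duration_s / interval_s)):
        time.sleep(interval_s)
        now = frame_counter()
        samples.append((now - last) / interval_s)  # frames in this interval -> FPS
        last = now
    return samples, statistics.mean(samples)
```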
In order to dynamically instantiate the Edge eServer, the GPU must be virtualized, a feature not supported by every GPU (including recent ones).
• Need for AR feedback on AR stream reception. We experienced that in some cases, when the latency is too high (over 100 ms) and the bandwidth is too low (less than 8 Mbps), the device does not get any AR input, without the server receiving any notification or error. This needs to be improved to make the system more reliable.
• False alarms. The occurrence of false alarms is a common and well-studied problem. However, there is no clear solution to approach it [32]. According to [33], almost 25% of health emergency calls are false alarms, and avoiding them could produce immense savings. There are some models that can reduce these unnecessary calls [34]; however, the authors warn that focusing on minimizing false alarms can lead to an increase in more severe outcomes, even deaths. In our view, employing a limited timer to signal a false alarm (so that once it expires, the emergency team is dispatched) could be sufficient to lower the number of false alarms without increasing the patients' risk.

B. NETWORK-RELATED LESSONS
• Some VNFs require function-specific management and platform adaptations. To exploit the capabilities of the 5GT platform, network services and VNFs deployed on top of the 5GT infrastructure might need to be adapted. Although this goes somewhat against the virtualization principle that services and functions should be platform agnostic, as of today many virtualized functions (such as OpenEPC) have been designed with some platform assumptions in mind. In our tests, we had to deal with some specific VNF configurations in order to be able to instantiate and manage services over the 5GT platform.
• VNFs are not just virtual machine images. As an example, the EPC software used imposes a rigid IP and MAC addressing scheme, hindering its deployment in generic scenarios. This may prevent the use of this EPC stack in interoperable scenarios, where VNFs may be provided by different VNF vendors, or pose difficulties in interacting with physical equipment (PNFs), like the RAN component. Additionally, the design constraints of the associated VNFs required the introduction of ad-hoc operations to effectively enable the connection among the PNFs and VNFs, because some interactions could not be captured in the associated NSDs.
• NSF helps in ubiquitous emergency handling. As already mentioned in Section IV-A, the NSF is instrumental in meeting the low-latency requirements of AR, and it provides an extended emergency-service geo-coverage without the need for exclusive resources. This is proven through the validation results we obtained in Section V.

VII. CONCLUSION
Nowadays, it is quite normal to find wearable devices capable of tracking sleep patterns, monitoring the heart rate, measuring the number of steps, or even performing an electrocardiogram. People with chronic diseases are taking these measurement devices to the next level, with patches able to measure glucose levels, connected insulin pumps, or wearable blood pressure meters. These devices will be complemented and augmented in 5G, one of whose main characteristics is the focus on massive machine-type communications. Therefore, we expect all these devices to be connected to eHealth services, provided by public or private companies, which will perform continuous monitoring in order to detect anomalies and act upon them.
In this work, we have departed from this assumption and implemented a real use case showcasing the impact of 5G on the emergency service of Madrid. The deployed service aims at improving the quality of care in two ways: (i) reducing the time required to detect an emergency, while removing the dependency on a human witness, and (ii) providing location and health-related data to the emergency team for increased triage efficiency through augmented reality deployed in the field, and delivering real-time data back to hospitals to improve surgery preparation. These two services leverage the new capabilities of 5G, including not only high-bandwidth and low-latency connectivity, but also the orchestration, federation, and dynamic instantiation of virtual functions at the edge of the network. The use case was tested and validated by a real emergency team, showing that decreasing the response time by 30% is possible. Since every second is relevant when responding to emergencies, we believe the designed use case showcases and exemplifies the future evolution of emergency services. He has over 20 years of experience in the networking field. He held various roles in several publicly funded and industrial research projects, such as EU 5Growth and 5G-REFINE (PI), on virtualization and automated network management, also involving verticals such as automotive and eHealth. His research interests include mobile networks, machine learning for network optimization, and network functions virtualization. He was the Vice-Chair of IEEE WCNC 2018, Barcelona.
8,684.2
2021-01-01T00:00:00.000
[ "Computer Science", "Engineering", "Medicine" ]
Bioinformatics and biomedical informatics with ChatGPT: Year one review

The year 2023 marked a significant surge in the exploration of applying large language model (LLM) chatbots, notably ChatGPT, across various disciplines. We surveyed the applications of ChatGPT in bioinformatics and biomedical informatics throughout the year, covering omics, genetics, biomedical text mining, drug discovery, biomedical image understanding, bioinformatics programming, and bioinformatics education. Our survey delineates the current strengths and limitations of this chatbot in bioinformatics and offers insights into potential avenues for future developments.

INTRODUCTION
In recent years, artificial intelligence (AI) has attracted tremendous interest across various disciplines, emerging as an innovative approach to tackling scientific challenges [1]. The surge in data generated from both public and private sectors, combined with the rapid advancement in AI technologies, has facilitated the development of innovative AI-based solutions and accelerated scientific discoveries [1, 2, 3]. The launch of the Chat Generative Pre-trained Transformer (ChatGPT) to the public towards the end of 2022 marked a new era in AI. The biomedical research community embraced this new tool with immense enthusiasm. In 2023 alone, at least 2,074 manuscripts were indexed in PubMed when searching with the keyword "ChatGPT". These studies demonstrate that ChatGPT and similar models have great potential to transform many aspects of education, biomedical research, and clinical practices [4, 5, 6, 7]. The core of ChatGPT is a large language model (LLM) trained on a vast corpus of text and image materials from the internet, including biomedical literature and code [8]. Its ability to comprehend and respond in natural language positions ChatGPT as a valuable tool for biomedical text-based inquiry [9]. Particularly noteworthy is its potential in assisting bioinformatics analysis, enabling scientists to conduct data analyses via verbal instructions [10, 11, 12]. Surprisingly, a search on PubMed using the keywords "ChatGPT" and […]

Evaluating GPT models in genomics necessitates benchmark datasets with established ground truths. GeneTuring [17] serves this role with 600 questions related to gene nomenclature, genomic locations, functional characterization, sequence alignment, etc. When tested on this dataset, GPT-3 excels in extracting gene names and identifying protein-coding genes, while ChatGPT (GPT-3.5) and New Bing show marked improvements. Nevertheless, all models face challenges with SNP and alignment questions [17]. This limitation is effectively addressed by GeneGPT [18], which utilizes Codex to consult the National Center for Biotechnology Information (NCBI) database.

GENETICS
In North America, 34% of genetic counselors incorporate ChatGPT into their practice, especially in administrative tasks [19]. This integration marks a significant shift towards leveraging AI for genetic counseling and underscores the importance of evaluating its reliability. Duong and Solomon [20] analyzed ChatGPT's performance on multiple-choice questions in human genetics sourced from Twitter. The chatbot achieves a 70% accuracy rate, comparable to human respondents, and excels in tasks requiring memorization over critical thinking. Further analysis by Alkuraya, I. F.
[21] revealed ChatGPT's limitations in calculating recurrence risks for genetic diseases. A notable instance involving cystic fibrosis testing showcases the chatbot's ability to derive correct equations while faltering in the computation, raising concerns over its potential to mislead even professionals [21]. This aspect of plausible responses is also identified as a significant risk by genetic counselors [19]. These observations have profound implications for the future education of geneticists. They indicate a shift from memorization tasks to a curriculum that emphasizes critical thinking in varied, patient-centered scenarios, scrutinizing AI-generated explanations rather than accepting them at face value [22]. Moreover, they stress the importance of understanding AI tools' operational mechanisms, limitations, and ethical considerations essential in genetics [20]. This shift prepares geneticists better for AI use, ensuring they remain informed on the benefits and risks of the technology.

BIOMEDICAL TEXT MINING
For biomedical text mining with ChatGPT, we first summarized works that evaluate the performance of ChatGPT in various biomedical text mining tasks and compared it to state-of-the-art (SOTA) models. Then, we explored how ChatGPT has been used to reconstruct biological pathways and the prompting strategies used to improve performance.

PERFORMANCE ASSESSMENTS ACROSS TYPICAL TASKS
Biomedical text mining tasks typically include named entity recognition, relation extraction, sentence similarity, document classification, and question answering. Chen, Q., et al. [23] assessed ChatGPT-3.5 across 13 publicly available benchmarks. While its performance in question answering closely matched SOTA models like PubmedBERT [24], ChatGPT-3.5 showed limitations in other tasks, with similar observations made for ChatGPT-4 [7, 25, 26]. Extensions to sentence classification and reasoning revealed that ChatGPT was inferior to SOTA pretrained models like BioBERT [27]. These studies highlight the limitations of ChatGPT in some specific domains of biomedical text mining where domain-optimized language models excel. Nevertheless, when training sets with task-specific annotations are not sufficient, zero-shot LLMs, including ChatGPT-3.5, outperform SOTA fine-tuned biomedical models [28]. A compilation of performance metrics for ChatGPT and other baseline models on various biomedical text mining tasks is listed in Supplementary Table S2. Biomedical Knowledge Graphs (BKGs) have emerged as a novel paradigm for managing large-scale, heterogeneous biomedical knowledge from expert-curated sources. Hou, Y., et al. [29] evaluated ChatGPT's capability on question-answering tasks using topics collected from the "Alternative Medicine" subcategory on "Yahoo! Answers" and compared it to the Integrated Dietary Supplements Knowledge Base (iDISK) [30]. While ChatGPT-3.5 showed comparable performance to iDISK, ChatGPT-4 was superior to both. However, when tasked with predicting drugs or dietary supplements that could be repositioned for Alzheimer's Disease, ChatGPT primarily responded with candidates already in clinical trials or existing literature. Moreover, ChatGPT's efforts to establish associations between Alzheimer's Disease and hypothetical substances were less than impressive. This highlights ChatGPT's limitations in performing novel discoveries or establishing new entity relationships within BKGs.
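For reference, the micro-averaged F1 typically reported in such comparisons (e.g., Supplementary Table S2) is computed from pooled true-positive, false-positive, and false-negative counts across the corpus. A minimal sketch with made-up entity sets follows; the gene names are purely illustrative.

```python
def micro_f1(gold_sets, pred_sets):
    """Micro-averaged F1 over a corpus of per-document annotation sets."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        tp += len(gold & pred)   # correctly extracted items
        fp += len(pred - gold)   # spurious extractions
        fn += len(gold - pred)   # missed items
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: gold vs. predicted gene mentions in two abstracts.
gold = [{"BRCA1", "TP53"}, {"EGFR"}]
pred = [{"BRCA1"}, {"EGFR", "KRAS"}]
print(micro_f1(gold, pred))  # ~0.67
```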
ChatGPT's underperformance in some specific text mining tasks against SOTA models or BKGs identifies areas for enhancement. On the other hand, fine-tuning LLMs, although beneficial, remains out of reach for most users due to the high computational demand. Therefore, techniques like prompt engineering, including one/few-shot in-context learning and Chain-of-Thought (CoT; see Table 1 for terminologies cited in this review), can be more practical to improve LLM efficiency in text mining tasks [23, 25, 27, 31]. For instance, incorporating examples with CoT reasoning enhances the performance of ChatGPT over both zero-shot prompting (no examples) and plain examples in sentence classification and reasoning tasks [27], as well as in knowledge graph reconstruction from literature titles [32]. However, simply increasing the number of examples does not always correlate with better performance [25, 27]. This underscores another challenge in optimizing LLMs for specialized text mining tasks, necessitating more efficient prompting strategies to ensure consistent reliability and stability.

BIOLOGICAL PATHWAY MINING
Another emerging application of biomedical text mining with LLMs is to build biological pathways. Azam, M., et al. [33] conducted a broader assessment of mining gene interactions and biological pathways across 21 LLMs, including seven Application Programming Interface (API)-based and 14 open-source models. ChatGPT-4 and Claude-Pro emerged as leaders, though they only achieved F1 scores below 50% for gene relation predictions and a Jaccard index below 0.3 for pathway predictions. Another evaluation work on retrieving protein-protein interactions (PPIs) from sentences reported a modest F1 score for both GPT-3.5 and GPT-4 with base prompts [34]. All these studies underscore the inherent challenges generic LLMs face in delineating gene relationships and constructing complex biological pathways from biomedical text without prior knowledge or specific training. The capabilities of ChatGPT in knowledge extraction and summarization present promising avenues for pathway database curation support. Tiwari, K., et al. [35] explored its utility in the Reactome curation process, notably in identifying potential proteins for established pathways and generating comprehensive summaries. For the case study on the circadian clock pathway, ChatGPT proposed 13 new proteins, five of which were supported by the literature but overlooked in traditional manual curation. When summarizing pathways from multiple literature extracts, ChatGPT struggled to resolve contradictions, but gained improved performance when inputs contained in-text citations. Similarly, the use of ChatGPT for annotating long non-coding RNAs in the EVLncRNAs 3.0 database [36] faced issues with inaccurate citations. Both works emphasize caution in directly using ChatGPT to assist database curation. Supplementing ChatGPT with domain knowledge or literature has been shown to mitigate some of its intrinsic limitations. The inclusion of a protein dictionary in prompts improves performance for GPT-3.5 and GPT-4 in the PPI task [34]. Chen, X., et al. [37] augmented ChatGPT with literature abstracts to identify genes involved in arthrofibrosis pathogenesis. Similarly, Fo, K., et al. [38] supplied GPT-3.5 with plant biology abstracts to uncover over 400,000 functional relationships among genes and metabolites. This domain knowledge/literature-backed approach enhances the reliability of chatbots in text generation by reducing AI hallucination [39, 40].
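A minimal sketch of the abstract-grounded prompting pattern described above: relevant abstracts are retrieved (here by naive keyword overlap, purely for illustration) and prepended to the question so the model answers from the supplied text rather than from memory. The prompt wording and the external `ask_llm` callable are assumptions, not the prompts used in the cited studies.

```python
def keyword_score(text, query):
    """Naive relevance score: number of shared lower-cased words."""
    return len(set(text.lower().split()) & set(query.lower().split()))

def grounded_prompt(question, abstracts, k=3):
    """Build a prompt whose context is the k most relevant abstracts."""
    ranked = sorted(abstracts, key=lambda a: keyword_score(a, question), reverse=True)
    context = "\n\n".join(ranked[:k])
    return (
        "Answer using only the abstracts below; cite them and reply 'unknown' "
        "if they are not sufficient.\n\n"
        f"Abstracts:\n{context}\n\nQuestion: {question}"
    )

# ask_llm would wrap whatever chat API is available; omitted here.
prompt = grounded_prompt("Which genes are linked to arthrofibrosis?",
                         ["abstract 1 ...", "abstract 2 ..."])
```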
Addressing LLMs' intrinsic limitations can also involve sophisticated prompt engineering. Chen, Y., et al. [41] introduced an iterative prompt optimization procedure to boost ChatGPT's accuracy in predicting gene-gene interactions, utilizing the KEGG pathway database as a benchmark. Initial tests without prompt enhancements showed a performance decline across ChatGPT's upgrades from March to July 2023, but strategic role and few-shot prompts significantly countered this trend. The iterative optimization process, which employed the tree-of-thought methodology [42], achieved notable improvements in precision and F1 scores [41]. These experiments demonstrate the value of strategic prompt engineering in aligning LLM outputs with complex biological knowledge for better performance.

DRUG DISCOVERY
Drug discovery is a complex and failure-prone process that demands significant time, effort, and financial investment. The emerging interest in ChatGPT's potential to facilitate drug discovery has captivated the pharmaceutical community [43, 44, 45, 46]. Recent studies have showcased the chatbot's proficiency in addressing tasks related to drug discovery; a compilation of performance metrics for ChatGPT and other baseline models is listed in Supplementary Table S3. GPT-3.5, for example, has been noted for its respectable accuracy in identifying associations between drugs and diseases [47]. Furthermore, GPT models exhibit strong performance in tasks related to textual chemistry, such as generating molecular captions, but face challenges in tasks that require accurate interpretation of Simplified Molecular-Input Line-Entry System (SMILES) strings [48]. Research by Juhi, A., et al. [49] highlighted ChatGPT's partial success in predicting and elucidating drug-drug interactions (DDIs). When benchmarked against two clinical tools, GPT models achieved an accuracy rate of 50-60% in DDI prediction, improved further by 20-30% with internet search through Bing; a comparison to SOTA methods was not conducted [50]. When evaluated using the DDI corpus [51], ChatGPT achieved a micro F1 score of 52%, lower than SOTA BERT-based models [23]. In more rigorous assessments, ChatGPT was unable to pass various pharmacist licensing examinations [52, 53, 54]. It also shows limitations in patient education and in recognizing adverse drug reactions [55]. These findings suggest that, although ChatGPT offers valuable support in drug discovery, its capacity to tackle complex challenges is limited and necessitates close human oversight. In the following few sections, we review three important aspects of using LLM chatbots such as ChatGPT in drug discovery (Figure 3). We first focus on examples and tools that facilitate a human-in-the-loop approach for the reliable use of ChatGPT in drug discovery. Then we highlight the advances brought by strategic prompting, using in-context learning with examples to increase the response accuracy of ChatGPT. Lastly, we summarize the progress in using task- and/or instruction fine-tuning to adapt a foundation model to specific tasks, demonstrated mostly with open-source models but extensible to GPT-3.5 and GPT-4.
HUMAN-IN-THE-LOOP
The application of AI in drug development necessitates substantial expertise from human specialists for result refinement. This collaborative approach is illustrated in a case study focusing on the development of anti-cocaine addiction drugs aided by ChatGPT [56]. Throughout this process, GPT-4 assumes three critical roles: sparking new ideas, clarifying methodologies, and providing coding assistance. To enhance its performance, the chatbot is equipped with various plugins at each phase to ensure a deeper understanding of context, access to the latest information, improved coding capabilities, and more precise prompt generation. The responses generated by the chatbot are critically evaluated against existing literature and expert domain knowledge. Feedback derived from this evaluation is then provided to the chatbot for further improvement. This iterative, human-in-the-loop methodology led to the identification of 15 promising multi-target leads for anti-cocaine addiction [56]. This example underscores the synergistic potential of human expertise and AI in advancing drug discovery efforts. Several tools leveraging LLMs offer interactive interfaces to enhance molecule description and optimization. ChatDrug [57] is a framework that can use the GPT API or other open-source LLMs to streamline the process of editing small molecules, peptides, or proteins (Figure 4). It features a prompt design module equipped with a collection of template prompts customized for different types of editing tasks. The core of ChatDrug is a retrieval and domain feedback module that ensures the response is grounded in real-world examples and safeguarded through expert scrutiny: the retrieval sub-module selects examples from external databases, while the domain feedback sub-module integrates feedback from domain experts through iteration. Additionally, ChatDrug includes a conversational module dedicated to further interactive refinement. Similar tools, though based on other LLMs, have been developed. DrugChat, based on Vicuna-13b [58], offers interactive question-and-answer and textual explanations starting from drug graph representations. DrugAssist [59], based on Llama2-7B, utilizes external database retrieval for hints and allows iterative refinement with expert feedback. This process of iterative refinement, supported by expert feedback and by example retrieval from external databases as contextual hints (also known as retrieval-augmented generation, RAG), enhances the model's accuracy and relevance to practical applications.
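The domain-feedback idea can be sketched independently of any particular tool: a proposed edit is checked against the requested property change, and a failed check is fed back into the next prompt. The logP editing task, the prompt texts, and the injected `ask_llm` callable below are illustrative assumptions, not ChatDrug's or DrugAssist's actual implementations; the property check uses RDKit.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def satisfies_goal(smiles, min_logp):
    """Domain check: does the proposed molecule meet the requested property change?"""
    mol = Chem.MolFromSmiles(smiles)
    return mol is not None and Descriptors.MolLogP(mol) >= min_logp

def edit_with_feedback(ask_llm, seed_smiles, min_logp, max_rounds=3):
    """Iteratively ask for an edit and feed failed domain checks back to the model."""
    prompt = f"Edit {seed_smiles} so that its logP is at least {min_logp}. Reply with SMILES only."
    for _ in range(max_rounds):
        candidate = ask_llm(prompt).strip()
        if satisfies_goal(candidate, min_logp):
            return candidate
        prompt += f"\nYour previous answer {candidate} did not satisfy the requirement; try again."
    return None
```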
IN-CONTEXT LEARNING
In-context learning (ICL) enhances chatbots' responses by leveraging examples from a domain knowledge base through prompting, without fine-tuning a foundation model [60]. This approach utilizes examples closely aligned with the subject matter to ground the responses of ChatGPT in relevant domain knowledge [57, 61]. Evaluating GPTs' capabilities across various chemistry-related tasks has shown that including contextually similar examples yields superior outcomes compared to approaches that use no examples or employ random sampling; the performance of these models improves progressively with the inclusion of additional examples [48, 61, 62]. ICL also boosts accuracy in more complex regression tasks, rendering GPT-4 competitively effective compared to dedicated machine learning models [63, 64]. Lastly, instead of using specific examples, enriching the context with related information, such as disease backgrounds and synonyms in a fact-check task on drug-disease associations [47], also augments response accuracy. These examples, with in-context learning and context enrichment, underscore the critical role of domain knowledge in improving the quality and reliability of GPTs' responses in drug discovery tasks.

INSTRUCTION FINETUNING
Task-tuning language models for specific tasks within drug discovery has shown considerable promise, as evidenced by two recent projects. ChatMol [65] is a chatbot based on the T5 model [66], fine-tuned with experimental property data and molecular spatial knowledge to improve its capabilities in describing and editing target molecules. Task-tuning GPT-3 has demonstrated notable advantages over traditional machine learning approaches, particularly in tasks where training data is limited [62]. Task-tuning also significantly improves GPT-3 in extracting DDI triplets, showcasing a substantial F1 score enhancement over GPT-4 with few-shot prompting [67]. These projects demonstrate that task-tuning of foundation models can effectively capture complex knowledge at the molecule level relevant to drug discovery. Instruction tuning diverges from task tuning by training an LLM across a spectrum of tasks using instruction-output pairs, enabling the model to address new, unseen tasks [68]. DrugAssist [59], a Llama-2-7B-based model, after being instruction-tuned with data on individual molecule properties, achieved competitive results when simultaneously optimizing multiple properties. Similarly, DrugChat [58], a Vicuna-13b-based model instruction-tuned with examples from databases like ChEMBL and PubChem, effectively answered open-ended questions about graph-represented drug compounds. Mol-Instructions [69], a large-scale instruction dataset tailored for the biomolecular domain, demonstrated its effectiveness in fine-tuning models like Llama-7B on a variety of tasks, including molecular property prediction and biomedical text mining.
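Instruction tuning operates on instruction-output pairs. A minimal sketch of what such records might look like is shown below; the JSONL layout and field names follow a common convention and are not the exact format of Mol-Instructions or any cited dataset.

```python
import json

# Illustrative instruction-output pairs in a JSONL-style layout.
records = [
    {
        "instruction": "Give the common name of the molecule.",
        "input": "SMILES: CC(=O)OC1=CC=CC=C1C(=O)O",
        "output": "Aspirin",
    },
    {
        "instruction": "Summarize the function of the gene below in one sentence.",
        "input": "Gene: TP53",
        "output": "TP53 encodes a tumor suppressor that regulates the cell cycle and apoptosis.",
    },
]

with open("instruction_tuning_sample.jsonl", "w") as fh:
    for rec in records:
        fh.write(json.dumps(rec) + "\n")
```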
Task-tuning may be combined with instruction tuning to synergize the strengths of each. ChemDFM [70], pre-trained on LLaMa-13B with a chemically rich corpus and further enhanced through instruction tuning, excelled in a range of chemical tasks, particularly in molecular property prediction and reaction prediction, outperforming models like GPT-4 with in-context learning. InstructMol [71] is a multi-modality instruction-tuning-based LLM featuring a two-stage tuning process, first instruction tuning with molecule graph-text caption pairs to integrate molecule knowledge, and then task-specific tuning for three drug discovery-related molecular tasks. Applied to Vicuna-7B, InstructMol surpassed other leading open-source LLMs and narrowed the performance gap with specialized models [71]. These developments underscore the effectiveness of both task and instruction tuning as strategies for enhancing generalized foundation models with domain-specific knowledge to address specific challenges in drug discovery. It is important to note that the significant improvements observed through task-tuning and/or instruction-tuning primarily involve open-source large language models. These techniques have shown great promise in enhancing model performance in various drug discovery tasks. We noticed that fine-tuning of GPT-3.5 is still in its infancy, but encouraging preliminary results have recently been documented in chemical text mining [72]. Unlike for its predecessors, GPT-4's fine-tuning capabilities are currently being explored in an experimental program by OpenAI. As these options become more broadly available, they are expected to significantly advance the field of drug discovery through task/instruction fine-tuning.

BIOMEDICAL IMAGE UNDERSTANDING
In recent advancements, multimodal AI models have garnered significant attention in biomedical research [73]. Released in late September 2023, GPT-4V(ision) has been the subject of numerous studies that explored its application in image-related tasks across various biomedical topics [74, 75, 76, 77, 78, 79, 80]. For biomedical images, GPT-4V exhibits a performance rivaling professionals in Medical Visual Question Answering [78, 79] and rivals traditional image models in biomedical image classification [81]. For scientific figures, GPT-4V can proficiently explain various plot types and apply domain knowledge to enrich interpretations [82]. Despite the impressive performance, current evaluations reveal significant limitations. OpenAI acknowledges the limitation of GPT-4V in differentiating closely located text and in making factual errors in an authoritative tone [83]. The model is not competent at perceiving the colors, quantities, and spatial relationships of visual patterns in bioinformatics scientific figures [82]. Image interpretation with domain knowledge from GPT-4V may risk "confirmation bias" [84]: either the observation or conclusion is incorrect, but the supporting knowledge is valid by itself in another, irrelevant context [82], or the observation or conclusion is correct, but the supporting knowledge is invalid or irrelevant [85]. Such biases are particularly concerning, as users without the requisite expertise might easily be misled by these plausible responses.
Prompt engineering has been instrumental in enhancing AI responses to text inputs. The emergence of GPT-4V emphasizes the need to develop equivalent methodologies for visual inputs to refine chatbots' comprehension across modalities. The field of computer vision has already witnessed some progress in this direction [86]. Yang, Z., et al. [87] propose visual referring prompting (VRP), setting visual pointer references by directly editing input images to augment textual prompts with visual cues. VRP has proven effective in preliminary case studies, leading to the creation of benchmarks like VRPTEST [88] to evaluate its efficacy. Yet, a thorough, quantitative assessment of VRP's impact on GPT-4V's understanding of biomedical images remains to be explored.

BIOINFORMATICS PROGRAMMING
ChatGPT enables scientists who may not possess advanced programming skills to perform bioinformatics analysis. Users can articulate data characteristics, analysis details, and objectives in natural language, prompting ChatGPT to respond with executable code. In this context, we define "prompt bioinformatics": the use of natural language instructions (prompts) to guide chatbots for reliable and reproducible bioinformatics data analysis through code generation [13]. This concept differs from the development of bioinformatics chatbots before the GPT era, such as DrBioRight [89] and RiboChat [90]. In prompt bioinformatics, the code is generated on the fly by the chatbot in response to a data analysis description. In addition, the generated code inherently varies across different chat sessions even for the same instruction, adding challenges that call for new method developments to ensure result reproducibility. Lastly, the concept covers a broad range of bioinformatics topics, particularly those in applied bioinformatics, where data analysis methods are relatively mature. Early case studies showcase ChatGPT's versatility in addressing diverse bioinformatics coding tasks, from aligning sequencing reads to constructing evolutionary trees [10], and excelling in introductory course exercises [11]. ChatGPT excels at writing short scripts that call existing functions with specific instructions. However, it shows limitations in writing longer, workable code for more complex data analysis, with errors that often require domain-specific knowledge to spot and correct [91].

APPLICATION IN APPLIED BIOINFORMATICS
In applied bioinformatics, established methods for data analysis are widely used, enhancing the likelihood of their incorporation into LLM training datasets. Thus, applied bioinformatics emerges as a fertile ground for practicing prompt bioinformatics and evaluating its effectiveness. AutoBA [12], a Python package powered by LLMs, streamlines applied bioinformatics for multi-omics data analysis by autonomously designing analysis plans, generating code, managing package installations, and executing the code. Through testing across 40 varied sequencing-based analysis scenarios, AutoBA with GPT-4 attained a 65% success rate in end-to-end automation [12]. Error-message feedback for code correction significantly enhanced this success rate. In addition, AutoBA utilizes retrieval-augmented generation to increase the robustness of code generation [12].
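The error-message feedback pattern used by these tools can be sketched generically: generate code for a task, run it, and, if it fails, return the traceback to the model for another attempt. The prompt texts and the `ask_llm` callable are placeholders for whatever chat interface is used; this is not AutoBA's actual implementation.

```python
import subprocess, sys, tempfile

def run_snippet(code):
    """Execute a code string in a fresh interpreter and capture stderr."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
        fh.write(code)
        path = fh.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=120)
    return proc.returncode == 0, proc.stderr

def generate_with_feedback(ask_llm, task, max_rounds=3):
    """Ask for code, execute it, and feed any error message back for correction."""
    prompt = f"Write a complete Python script that does the following:\n{task}\nReturn code only."
    for _ in range(max_rounds):
        code = ask_llm(prompt)
        ok, err = run_snippet(code)
        if ok:
            return code
        prompt += f"\nThe previous script failed with this error; please fix it:\n{err}"
    return None
```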
Mergen [92] is an R package that automates data analysis through LLM utilization. It crafts, executes, and refines code based on user-provided textual descriptions. The inclusion of file headers in prompts and error-message feedback notably improves coding efficacy. The evaluation tasks for Mergen, while relevant to bioinformatics, cater to a general-purpose scope, covering machine learning, statistics, visualization, and data wrangling. Interestingly, the adoption of role-playing does not yield significant enhancements [92], possibly due to the general nature of the tasks and the mismatch between the assumed bioinformatician role and the task requirements. LLMs exhibit inherent limitations in coding with tools beyond their training datasets. Bioinformaticians typically consult user manuals and source code to master new tools, a process LLMs could emulate. The BioMANIA framework [93] exemplifies this approach by creating conversational chatbots for open-source, well-documented Python tools. By understanding APIs from source code and user manuals, it employs GPT-4 to generate instructions for API usage. These instructions inform a BERT-based model that suggests the most appropriate APIs for a user's query, with GPT-4 predicting parameters and executing API calls. Evaluation of the method identifies areas for improvement, such as tutorial documentation and API design, guiding the future development of chatbot-compatible tools [93].

BIOMEDICAL DATABASE ACCESS
Structured Query Language (SQL) serves as a pivotal tool for navigating bioinformatics databases. Mastering SQL requires users to have both programming skills and a deep understanding of the database's data schema, prerequisites that many biomedical scientists find challenging. Recent advancements have seen LLM chatbots like ChatGPT stepping in to translate natural language questions into SQL queries [94], significantly easing database access for non-programmers. The work by Sima, A.-C. and de Farias, T. M. [95] explored ChatGPT-4's ability to explain and generate SPARQL queries for public biological and bioinformatics databases. Faced with explaining a complex SPARQL query that identifies human genes linked to cancer and their orthologs in rat brains, which requires combining data from the UniProt, OMA, and Bgee databases, ChatGPT adeptly broke down the query's elements. However, its attempt to craft a SPARQL query from a natural language description for the same database search revealed inaccuracies that required specific human feedback for correction. Notably, prompts augmented with semantic clues such as variable names and inline comments yield a substantial improvement in performance when translating questions into the corresponding SPARQL queries, as evaluated on a fine-tuned OpenLlama LLM [96]. Another work, by Chen, C. and Stadler, T. [97], applied GPT-3.5 and GPT-4 to convert user inputs into SQL queries for accessing a database of SARS-CoV-2 genomes and their annotations. Through systematic prompting and learning from numerous examples, the chatbot shows proficiency in understanding the database structure and generates accurate queries for 90.6% and 75.2% of the requests with GPT-4 and GPT-3.5, respectively. In addition, the chatbot initiates a new session to explain each query, allowing users to cross-reference the explanation with their own inputs and minimize the risk of misunderstandings.
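A minimal sketch of schema-in-context translation: the table layout is included in the prompt so that the model can only reference real columns. The schema and example request below are invented for illustration; they are not the SARS-CoV-2 database used in the cited work, and `ask_llm` again stands in for whatever chat API is used.

```python
SCHEMA = """
Table sequences(accession TEXT, collection_date DATE, country TEXT, lineage TEXT)
Table mutations(accession TEXT, gene TEXT, aa_change TEXT)
"""

def nl_to_sql_prompt(question):
    """Build a natural-language-to-SQL prompt that pins the model to a known schema."""
    return (
        "You translate questions into a single SQLite query.\n"
        f"Use only these tables and columns:\n{SCHEMA}\n"
        f"Question: {question}\nSQL:"
    )

prompt = nl_to_sql_prompt("How many sequences of lineage BA.5 were collected in Spain in 2022?")
# sql = ask_llm(prompt)  # the returned query can then be explained back to the user
```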
ONLINE TOOLS FOR CODING WITH CHATGPT
Shortly after the release of ChatGPT in November 2022, RTutor.AI emerged as a pioneering web server powered by GPT technology and dedicated to data analysis. This R-based platform simplifies the process for users to upload a single tabular dataset and articulate their data analysis requirements in natural language. RTutor.AI proficiently manages data importing and type conversion, subsequently leveraging OpenAI's API for R code generation. It executes the generated code and produces downloadable HTML reports including figure plots. A subsequent application, Chatlize.AI, developed by the same team, adopts the tree-of-thought methodology [42] to enhance data analysis exploration. This approach, extended to Python, enables the generation of multiple code versions for a given analysis task, their execution, and comprehensive documentation of the results. Users benefit from the flexibility to select a specific code version for further analysis. This feature is particularly valuable for exploratory data analysis, making Chatlize.AI a flexible solution for practicing prompt bioinformatics. The Code Interpreter, which was officially integrated into ChatGPT-4 during the summer of 2023 and became a default option in GPT-4o in May 2024, represents a significant advancement in streamlining computational tasks. This feature facilitates a wide array of operations, including data upload, specification of analysis requirements, generation and execution of Python code, visualization of results, and data download, all through natural language instructions. It stands out for its ability to dynamically adapt code in response to runtime errors and to self-assess the outcomes of code execution. Despite its broad applicability for general-purpose tasks such as data manipulation and visualization, its utility in bioinformatics data analysis encounters limitations such as the absence of bioinformatics-specific packages and the inability to access external databases [98].

BENCHMARKS FOR BIOINFORMATICS CODING
A thorough assessment of bioinformatics coding necessitates the establishment of comprehensive benchmarks that cover a broad range of topics in the field. Writing individual functions is a fundamental skill in the development of advanced bioinformatics algorithms. BIOCODER [99] is a benchmark to evaluate language models' proficiency in function writing. This benchmark encompasses over 2,200 Python and Java functions derived from authentic bioinformatics codebases, in addition to 253 functions sourced from the Rosalind project. Comparative analyses have shown that GPT-3.5 and GPT-4 significantly outperform smaller, coding-specific language models on this benchmark. Interestingly, integrating topic-specific context, such as imported objects, into the baseline task descriptions markedly enhances accuracy. However, even the most adept models, namely the GPT series, reach an accuracy ceiling of about 60% for GPT-4. A significant proportion of the failures are attributed to syntax or runtime errors [99], suggesting that ChatGPT's effectiveness in bioinformatics coding can be further enhanced through human feedback on error messages. Execution success is crucial, yet it represents only one facet of evaluating bioinformatics code quality. Sarwal, V., et al.
[100] proposed a comprehensive evaluation framework that encompasses seven metrics, assessing both subjective and objective dimensions of code writing. These dimensions include readability, correctness, efficiency, simplicity, error handling, code examples, and clarity of input/output specifications. Each metric is scaled from 1 to 10 and normalized independently post-evaluation across models. When applied to a variety of common bioinformatics tasks, this framework highlighted GPT-4's superior performance over alternatives such as Bard and LLaMA. However, the current evaluation remains narrowly focused on a limited number of tasks [100]. Expanding these evaluations to encompass a broader range of bioinformatics domains calls for community-led efforts towards a comprehensive appraisal of these language models.

CHATBOTS IN BIOINFORMATICS EDUCATION
The potential of integrating LLMs into bioinformatics education has attracted significant discussion. ChatGPT-3.5 achieves impressive performance in addressing Python programming exercises in an entry-level bioinformatics course [11]. Beyond mere code generation, the utility of chatbots extends to proposing analysis plans, enhancing code readability, elucidating error messages, and facilitating language translation in coding tasks [101]. The effectiveness of a chatbot's response depends on the precision of the human instructions, or prompts. In this context, Shue et al. [10] introduced the OPTIMAL model, a framework for prompt refinement through iterative interactions with a chatbot, mirroring the learning curve of bioinformatics beginners assisted by such technologies. To navigate this evolving educational landscape, it becomes imperative to establish guidelines that enable students to critically assess outcomes and articulate constructive feedback to the chatbot for code improvement. Error messages, as one form of such feedback, turn out to be an effective way to boost the coding efficiency of ChatGPT across various studies [10, 12, 92]. The convenience of using chatbots for coding exercises poses a risk of fostering AI overreliance, which can lead to a superficial understanding of the underlying concepts [11, 13, 102]. This AI reliance could undermine students' performance in summative assessments [11]. Innovative evaluation strategies, such as generating multiple-choice questions from student-submitted code to gauge their understanding [103], are needed to counteract this challenge. Such methodologies should aim to deepen students' grasp of the material, ensuring an in-depth understanding of coding concepts.
The art of crafting effective prompts emerges as a critical skill that complements traditional programming competencies. General guidelines are well summarized in a recent commentary [104]. In the context of bioinformatics tasks, these include breaking down a complex task into sub-tasks, enriching the context with details (e.g., spelling out package names in code-generation tasks and tissue names for cell type annotation in scRNA-Seq analysis), illustrating intent through examples (e.g., supplying a volcano plot for a data visualization task in differentially expressed gene analysis), and specifying the output format to facilitate downstream data processing when mining gene relationships from literature abstracts. It is important to note that effective prompting is not formulaic. Like coding in bioinformatics and experimental skills for bench work, experience is gained through repeated experimentation [104]. Intriguingly, feedback from a pilot study involving graduate students interacting with ChatGPT for coding highlights the challenges in generating impactful prompts [105]. This prompt-related psychological strain may discourage students from using the chatbot [13]. In this context, the development of a repository featuring carefully crafted prompts for specific bioinformatics analyses, accompanied by quality metrics, reference code, and outcomes, could serve as a valuable resource for students learning bioinformatics and biomedical informatics through prompting with chatbots [10, 13]. In conclusion, while chatbots demonstrate potential as educational tools, their efficacy and effectiveness have not yet been systematically evaluated in classroom settings with controlled experiments. The use of chatbots should be viewed as supplementary to traditional education methodologies [10, 11, 13]. Meanwhile, new assessment methodologies are needed to measure the pedagogical value of chatbots in enhancing bioinformatics learning without diminishing the depth of understanding of concepts and analytical skills.

DISCUSSION AND FUTURE PERSPECTIVES
The year 2023 marked significant progress in leveraging ChatGPT for bioinformatics and biomedical informatics. Early studies affirmed its capability in drafting workable code for basic bioinformatics data analysis [10, 11]. The chatbot has also demonstrated competitiveness with SOTA models in other bioinformatics areas, including identifying cell types from single-cell RNA-Seq data [106], performing question-answering tasks in biomedical text mining [107], and generating molecular captions in drug discovery [48]. These achievements underscore ChatGPT's proficiency in text-generative tasks. Meanwhile, other LLMs are catching up. For example, Google developed Gemini and the open-source LLM Gemma, which have delivered impressive performance in various tasks. Although their applications in bioinformatics and medical informatics have not been reported, they provide users with a viable alternative to ChatGPT.
Though not yet widely adopted in bioinformatics [72], OpenAI's fine-tuning APIs, such as those for GPT-3.5 and GPT-4, hold great potential for performance improvements when the training dataset is large. Nevertheless, the accuracy of ChatGPT's responses can be significantly improved through a strategic design of its input instructions with prompt engineering. Incorporating examples into prompts and employing CoT reasoning has proven an effective strategy, as evidenced in various bioinformatics applications [32, 41, 57, 63, 64, 97]. While examples in prompts are sometimes hardcoded, they can also be dynamically and strategically sourced from external knowledge bases or knowledge graphs [57, 59, 61, 108]. This approach, known as retrieval-augmented generation, improves ChatGPT's reliability by sourcing facts from domain-specific knowledge and represents a promising avenue for future development in bioinformatics with chatbots. Another significant limitation of ChatGPT, like all other LLMs, is hallucination [39, 40]. This occurs when ChatGPT fabricates non-factual content. Instances in bioinformatics applications include inventing functions that do not exist in coding [10], generating false positives when mining gene relationships from biomedical text [41], and fabricating molecular functions for gene annotation [36]. While hallucination in code-generation-related tasks may be detected through code execution and partially corrected through error-message feedback, other types require expert knowledge, posing significant risks to general users. To reduce hallucination, one can condition the chatbot with relevant context, such as through RAG, or supplement it with external tools such as task-specific APIs [18]. Despite these strategies, developing evaluation and remediation techniques for detecting hallucinations in LLMs such as ChatGPT (with the accuracy of human experts and the efficiency of computational programs) is urgently needed and remains an ongoing challenge for bioinformatics applications with chatbots. In this rapidly evolving domain, ChatGPT has experienced several significant upgrades within its first year alone. We acknowledge that not every upgrade enhances performance across the board [109]. Consequently, prompts that are highly effective with the current version for specific tasks may not maintain the same level of efficacy following future updates. The technique of prompt engineering, which includes strategies like role prompting and in-context learning, offers a way to partially counteract this variability [41]. An innovative approach, rather than manually adjusting the prompts, involves instructing ChatGPT to autonomously optimize prompts to align with its latest model iteration. This strategy has shown promise in tasks such as mining gene relationships [41] but remains largely unexplored in other bioinformatics topics and therefore warrants further exploration to fully leverage ChatGPT's capabilities in the field.
Numerous studies repeatedly show that using ChatGPT with human augmentation significantly improves performance. Iterative human-AI communication plays a pivotal role in this process, where feedback from the human operator grounds the chatbot's responses for improved accuracy. This human-in-the-loop methodology is particularly evident in prompt optimization [10] and molecular optimization [56, 59]. For code generation tasks, runtime error messages represent commonly used feedback that has been automated into several GPT-based tools [12, 92, 98]. Conversely, the chatbot can also be instructed to provide feedback to human operators. As demonstrated by Chen, C. and Stadler, T. [97], ChatGPT can produce textual descriptions for the generated code through an inverse generation process. Comparing these descriptions with the original instructions from the human operator ensures that the chatbot's output aligns closely with the intended task requirements. This iterative exchange of feedback between AI and human operators enhances the overall quality of the bioinformatics tasks being addressed. The assessment of ChatGPT's capabilities across various bioinformatics tasks has illuminated both its strengths and weaknesses. Importantly, the reliability of these evaluations largely hinges on the quality of the benchmarks used and the methodologies applied in these assessments. Currently, many benchmarks are available for biomedical text mining and chemistry-related tasks. The development of benchmarks designed specifically for assessing ChatGPT's capability in other bioinformatics tasks, including multimodal ones, is still in its infancy. It is important to recognize that in generative tasks like coding, producing expected results is not the sole criterion for gauging effectiveness and efficiency. Factors such as the readability of the code and the inclusion of code examples also play crucial roles [100]. Similarly, for prediction or classification tasks, extending the evaluation to inspect the text explanations behind the prediction/classification is equally important, as this will facilitate the detection of hidden flaws [85]. Nonetheless, conducting such comprehensive evaluations can be resource-intensive, underscoring the need for community efforts. While alternatives exist for automation, such as transforming tasks into multiple-choice questions or verifying responses against reference texts, for example through lexical overlap or semantic similarity, each method comes with its own set of limitations [7]. Consequently, there is a pressing need to develop new, scalable, and accurate evaluation metrics and benchmark datasets that can accommodate a wide range of bioinformatics tasks, ensuring that assessments are both meaningful and reflective of real-world and cutting-edge applicability.
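One of the cheap automated checks mentioned above, lexical overlap between a response and a reference text, can be made concrete as a token-level F1 score. The metric choice and the toy strings are illustrative conventions, not prescribed by the cited survey.

```python
from collections import Counter

def token_f1(response, reference):
    """Token-level F1: harmonic mean of overlap precision and recall."""
    resp, ref = response.lower().split(), reference.lower().split()
    overlap = sum((Counter(resp) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(resp), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("TP53 is a tumor suppressor gene",
               "TP53 encodes a tumor suppressor"))  # ~0.73
```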
While aiming for comprehensiveness, our review does not encompass areas that, although outside the direct scope of bioinformatics and biomedical informatics, are closely related and significant. These areas include the management of electronic health records [110, 111], emotion analysis through social media [112], and medical consultation [113, 114]. To mitigate transparency and security concerns, locally deployed open-source language models with task-specific fine-tuning present a practical approach. Our review has spotlighted such advancements for drug discovery. However, we refer our readers to additional reviews for an expansive understanding of similar developments in other bioinformatics topics, as well as the ethical and legal issues involved [7, 8, 9, 115, 116]. Looking ahead, we envision a future where both online proprietary models such as ChatGPT and open-source, locally deployable fine-tuned language models coexist for bioinformatics and biomedical informatics, providing users with the most suitable tools to address their specific needs.

Figure 2: ChatGPT-Powered Cell Type Annotation for scRNA-Seq Data Analysis. In this application, marker genes for each cell cluster are identified using standard pipelines such as Seurat. These markers, along with the corresponding tissue name, are then incorporated into a prompt template, slightly modified from the GPTCelltype tool [16]. The prompts are submitted to ChatGPT to predict the cell type for each cluster.

Table 1. Terminologies cited in this review.
Prompt engineering: The practice of designing and refining input prompts (natural language instructions) to elicit desired responses from a language model chatbot.
Zero-shot: A way of prompting where the instruction to the chatbot contains no example of the specified task.
One-shot: A way of prompting where the instruction to the chatbot contains one example of the specified task.
Few-shot: A way of prompting where the instruction to the chatbot contains more than one example of the task.
Chain of Thought (CoT): A way of prompting that asks the chatbot to think step by step. This approach helps enhance the model's ability to solve complex problems by breaking them down into simpler, sequential steps. For one/few-shot prompting, if an example includes details of step-by-step reasoning, the example is called a CoT example.
Tree of Thought (ToT): An extension of the Chain of Thought approach, where the model generates a tree-like structure of reasoning steps instead of a linear chain.
In-Context Learning (ICL): A learning paradigm where a model leverages the context provided within the input to adapt and respond to new tasks or information without explicit retraining.
Retrieval-Augmented Generation (RAG): A technique that combines a retriever model, which fetches relevant documents or data, with a generator model, which uses the retrieved information to generate responses or complete tasks. This approach is useful for tasks that require external knowledge or context.
Fine-tuning: The process of further training a pre-trained model on a specific dataset or task to improve its performance in that area.
Instruction tuning: The process of fine-tuning a pre-trained model to better understand and follow natural language instructions, improving its applicability across different tasks.
Task tuning: The process of fine-tuning a pre-trained model on a specific task to enhance its performance on that task.
AI hallucination: The phenomenon where a generative AI model produces false or misleading information not supported by the input data or its training.

Figure 1: Areas Explored in this Review for ChatGPT's Use in Bioinformatics and Biomedical Informatics in its Year One.

Figure 3: Key Themes from the Application of GPTs and Other LLMs in Drug Discovery Tasks. The human-in-the-loop section highlights a case study and three interactive tools that facilitate communication between users and chatbots. The in-context learning section emphasizes the use of ad-hoc examples or examples sourced by retrieval-augmented generation to guide chatbots for better performance. The fine-tuning section demonstrates examples of task and/or instruction tuning, primarily with open large language models. Works focusing on the use of GPTs are highlighted in red.

Figure 4: Illustration of ChatDrug for Conversational Drug Editing with GPT. In ChatDrug [57], initial prompts are derived from a Prompt Design for Domain-Specific (PDDS) module, which provides tailored templates for specific drug editing tasks. If the response from the chatbot (using GPT-4 as an example) is unsatisfactory, a Retrieval and Domain Feedback (ReDF) module leverages domain knowledge to refine the prompts. Sample prompts, shown in red boxes, are extracted from Liu, S., et al. [57] for a small molecule editing task. In this case, the initial prompts did not yield satisfactory responses (first try), prompting updates from the ReDF module, which subsequently led to satisfactory outcomes (second try).
9,164.4
2024-03-22T00:00:00.000
[ "Computer Science", "Biology" ]
3C-SiC Hetero-Epitaxially Grown on Silicon Compliance Substrates and New 3C-SiC Substrates for Sustainable Wide-Band-Gap Power Devices (CHALLENGE)

The cubic polytype of SiC (3C-SiC) is the only one that can be grown on silicon substrates with the thickness required for the targeted applications. The possibility of growing such layers has long been a real advantage in terms of scalability. Even the relatively narrow band gap of 3C-SiC (2.3 eV), which is often regarded as detrimental in comparison with other polytypes, can in fact be an advantage. However, the crystalline quality of 3C-SiC on silicon has to be improved in order to benefit from the intrinsic 3C-SiC properties. In this project, new approaches for the reduction of defects will be used, and new compliance substrates that can help to reduce the stress and the defect density at the same time will be explored. Numerical simulations will be applied to optimize growth conditions and reduce stress in the material. The structure of the final devices will be simulated using appropriate numerical tools, where new numerical models will be introduced to take into account the properties of the new material. Thanks to these simulation tools and the new material with low defect density, several devices that can work at high power and with low power consumption will be realized within the project.

Introduction
Emerging wide band gap (WBG) semiconductor devices based on both silicon carbide (SiC) and gallium nitride (GaN) have the potential to revolutionize power electronics through faster switching speeds, lower losses, and higher blocking voltages relative to standard silicon-based devices [1]. Additionally, their attributes enable higher operating temperatures, yielding increased power density with reduced thermal management requirements. To date, the advantages demonstrated by WBG power electronics have yet to be fully realized due to their high costs and reliability problems. Silicon carbide (SiC) is a material presenting different crystalline structures called polytypes. Amongst these, only two hexagonal structures (4H-SiC and 6H-SiC) are commercially available, and the cubic form (3C-SiC) is an emerging technology. All these materials are broadly similar, with high breakdown fields (2-4 MV/cm) and a high energy band gap (2.3-3.2 eV), much higher than silicon. The cubic polytype of SiC (3C-SiC) is the only one that can be grown on a silicon substrate, reducing the cost by growing only the silicon carbide thickness required for the targeted application. 3C-SiC/Si technology also offers the possibility of increasing wafer size much faster than will be possible with the difficult crystal growth of the hexagonal polytypes. Different materials can be competitive with 3C-SiC for power applications, as reported in Tab. I. From this table it can be understood that 3C-SiC and GaN can work in the same range of breakdown voltage, but 3C-SiC is more suitable for high-power applications thanks to its high thermal conductivity, while GaN is more suitable for RF applications thanks to its high saturated electron velocity. The relatively narrow band gap of 3C-SiC (2.3 eV) with respect to 4H-SiC (3.28 eV) is often regarded as detrimental in comparison with other polytypes but is in fact an advantage. The lowering of the conduction band minimum brings about a reduced density of states at the SiO2/3C-SiC interface.
Therefore, a Metal Oxide Semiconductor Field Effect Transistor (MOSFET) on 3C-SiC has demonstrated the highest channel mobility, above 300 cm²/(V·s), ever achieved on any of the SiC polytypes, promising a remarkable reduction in the power consumption of these power switching devices. 2 A further advantage of 3C-SiC/Si compared to current 4H-SiC, noted by Anvil, is the much lower temperature coefficient of resistance between RT and 200 °C, ~10% compared with ~100%. This leads to a large reduction of device on-resistance at realistic junction temperatures for power device operation. As in 4H-SiC, the potential electrical activity of extended defects in 3C-SiC is a concern for electronic device functionality. To achieve viable commercial yields, the mechanisms of defect formation must be understood and methods for their reduction developed. This project proposes a toolbox of solutions for the reduction of defects based around new compliance substrates that will help to reduce the inherent hetero-epitaxy stress and the defect density at the same time. The structure of these substrates will force the growth to proceed along selected pathways towards a reduction of the defects. Numerical simulations of the growth and simulations of the stress reduction will drive this growth process. Three different high-voltage devices (Schottky diode, MOSFET and IGBT) operating at high power and with low power consumption will be realized within the project. These devices realized on the 3C-SiC material can have a very low R_on (R_on < 5 mΩ), and thus a considerable reduction of the power loss with respect to Si and 4H-SiC can be obtained in an intermediate breakdown voltage range. If all the silicon power devices currently used worldwide in this range were replaced with 3C-SiC devices, a saving of 1.2 × 10¹⁰ kWh/year could be obtained. This corresponds to a reduction of 6 million tons of CO₂ emissions (an implied grid emission factor of ~0.5 kg CO₂ per kWh). The reduction will be even larger in the coming years, given the growth of the photovoltaic (PV) and hybrid/electric vehicle (HEV/EV) markets. The low cost of the 3C-SiC hetero-epitaxial approach, and the high scalability of this process to 300 mm wafers and beyond, makes this technology extremely competitive in the motor drives of electric/hybrid electric vehicles (EV/HEV), air conditioning systems, refrigerators, and LED lighting systems. Furthermore, the opportunity of using a p+ Si substrate can be exploited to realize an Insulated Gate Bipolar Transistor (IGBT) with a further reduction of R_on. Our consortium is unique in breadth across the whole supply chain (equipment, materials, characterization, processing, power devices, simulations), and contains the wealth of talent that can be brought together only at an international European scale. Furthermore, all the participants have strong technology and Intellectual Property (IP) on the main topics of this project, and several patents will be used during the project. A successful CHALLENGE project will place all the consortium members at the leading edge of this technology, and place Europe in a competitive position to fully exploit this new and vital market. Objectives Typical figures of merit for power devices suggest that SiC is approximately ten times better than Si in terms of device on-resistance for a given operating voltage, and also in power density per unit area.
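This roughly tenfold figure is usually rationalized via the unipolar limit for a vertical drift region; the standard textbook (Baliga) relation below is added here for context and is not stated in the project description:

```latex
% Unipolar (Baliga) limit for the specific on-resistance of a vertical
% drift region: BV is the breakdown voltage, \varepsilon_s the
% semiconductor permittivity, \mu_n the electron mobility, and E_c the
% critical (breakdown) field.
R_{\mathrm{on,sp}} \;=\; \frac{4\,BV^{2}}{\varepsilon_s\,\mu_n\,E_c^{3}}
```

Since E_c enters cubed, the 2-4 MV/cm breakdown fields of the SiC polytypes versus roughly 0.3 MV/cm for silicon lower the ideal R_on,sp by orders of magnitude at fixed BV; the more conservative factor of ten quoted here refers to realized devices, where channel mobility and contact resistance also matter.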
Today 4H-SiC is the preferred material, but its main limitation is the low channel mobility of carriers, which reduces the performance of the MOSFET switch used in high-power applications. This limitation is extremely important in the region below a breakdown voltage of 800 V, where DC-DC converters and DC-AC inverters are needed for electric vehicles or hybrid cars (see Fig. 1). Currently, silicon power devices are used for these applications, where the inevitable power dissipation requires the use of very heavy and expensive heat sinks to keep the device junction temperature in the range where Si devices are able to function, because of the low Si band gap. The best alternative for these applications is 3C-SiC. To become feasible, this emerging technology needs to improve the quality of the material grown on the silicon substrate, which can have a high density of defects at the Si/SiC interface. We propose a new approach to improve the quality and to reduce stress: it is necessary to modify the structure of the substrate (compliance substrate) 3,4 in order to force the system to reduce the defects while increasing the thickness of the layer. Furthermore, by using the typical bulk growth techniques used for 4H-SiC, it is possible to grow bulk 3C-SiC wafers, considerably improving the quality of the material. This will circumvent the need for silicon with its poor thermal conductivity, leading to a more robust system. With these improvements in material quality and robustness, it will be possible to obtain good device characteristics and yields to meet the market needs of low-power-dissipation devices in the automotive applications of the future. The main objectives of the project are the following:
• Develop new compliance substrates that can reduce both the defects (essentially stacking faults) and the stress, hence reducing wafer bow and making device manufacturing easier;
• Develop a new CVD process specifically on these compliance substrates to grow thick 3C-SiC layers that can be used both for the realization of some power devices and as seeds for the 3C-SiC bulk growth;
• Develop a new bulk process using both hot-wall CVD with chloride precursors and PVT systems on the seed obtained on the compliance substrates;
• Develop new fabrication processes (e.g. gate oxidation, laser annealing of implanted layers, vanadium doping, and more) that can be used for the fabrication of power devices;
• Develop new simulation codes (Molecular Dynamics, Monte Carlo, Finite Elements, …) to support the experiments on growth on compliance substrates, to simulate the fabrication process, and to simulate the devices;
• Develop new device structures and processes for the realization of some prototype devices (Schottky diodes, P/N junctions, MOSFETs and IGBTs) that can be used to test the properties of the material (both epitaxial and bulk) and the fabrication processes.
Advance Beyond the State of the Art Although the hexagonal form of silicon carbide is the current technology for advanced power conditioning applications, the possibility of 3C-SiC growth on a silicon substrate remains a real advantage in terms of wafer size and hence cost. To date, growth of 3C-SiC on silicon has been demonstrated on 150 mm Si wafers and is feasible, with the appropriate tools, on 200 mm or 300 mm wafers. The pace of development of 3C-SiC technology has been hampered by the few industrial epitaxial reactors available, but Europe is currently the main centre of excellence.
With this project we will take a further step in the development of this technology and bring it closer to market needs. We will develop two different approaches. In the first, we will try to improve the quality of the hetero-epitaxial material using several structured substrates (compliance substrates) that can decrease the defect density and the stress. This approach keeps the material cost low (a factor of 10-20 lower than 4H-SiC and comparable to silicon), but it does not take full advantage of the quality of 3C-SiC in terms of heat dissipation, due to the low thermal conductivity of silicon. The second approach, bulk growth, increases the cost of the final material, but the complete properties of 3C-SiC can then be used. Furthermore, device processing becomes easier because the high-temperature processes typical of 4H-SiC processing can also be used. In this case, the main advantage with respect to 4H-SiC is the possibility of reaching large dimensions directly using silicon substrates, avoiding the long and expensive process of ingot enlargement. Furthermore, the lower band gap and the higher channel mobility can make the 3C-SiC polytype the optimum candidate for the breakdown voltage region between 200 and 800 V, which is too high for silicon and too low for 4H-SiC. Furthermore, the main patents on this technology are held within the CHALLENGE consortium. The main problem to solve remains the intrinsic stress created during the growth process due to the lattice mismatch between 3C-SiC (4.36 Å) and Si (5.43 Å). In addition, thermoelastic stress is introduced during the post-deposition cooling due to the 8% difference in the thermal expansion coefficients between the materials. The resulting stress, which induces the formation of different planar or extended defects in 3C-SiC, is a major parameter leading to a noticeable degradation of the crystalline quality of the epitaxial layer. Two kinds of defects within epitaxial 3C-SiC films are widely documented. Anti-phase boundaries are planar defects formed at the geometrical separation of two 3C-SiC grains differing from each other by a 90° rotation in the Si(100) growth plane. These anti-phase domains (APDs) are formed by the presence of steps on the Si surface, these steps being constituted by an odd number of Si atomic steps. A second kind of defect of importance is linked to the formation of stacking faults along the {111} planes. Most of the published work dealing with 3C-SiC growth on silicon substrates highlights the intrinsic nature of these defects and points out that a drastic reduction of their density is possible by increasing the film thickness or by converting the surface polarity of the stacking faults. The electrical activity of extended defects in 3C-SiC is a major concern for electronic device functioning. Consequently, a drastic reduction of the defects or of their electrical activity is essential to improve the yield and performance of power electronic devices.
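The mismatch figure behind this stress is easy to verify from the quoted lattice constants; a minimal check (referencing the mismatch to the Si lattice constant is our convention):

```python
# Check of the hetero-epitaxy mismatch between 3C-SiC and Si using the
# lattice constants quoted in the text.
a_sic = 4.36  # 3C-SiC lattice constant, Angstrom
a_si = 5.43   # Si lattice constant, Angstrom

mismatch = (a_si - a_sic) / a_si
print(f"lattice mismatch ~ {mismatch:.1%}")  # ~19.7%, the origin of the
                                             # intrinsic growth stress
```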
Tab. 2 - Advance beyond the state of the art of the CHALLENGE project:

| Current state of the art of 3C-SiC | Benefits offered by Project CHALLENGE |
| --- | --- |
| Stacking fault density > 10⁴/cm | Stacking fault density ~ 10²/cm |
| Bow on 4-inch wafers < 10 m radius for a 10 µm layer | Bow on 4-inch wafers > 25 m radius for a 10 µm layer |
| Leakage current > 10⁻² A/cm² | Leakage current < 10⁻⁴ A/cm² |
| Channel mobility < 100 cm²/(V·s) | Channel mobility > 200 cm²/(V·s) |

With this project the participants want to reduce the defect density to a level that strongly decreases the leakage currents of the devices. Furthermore, the use of compliance substrates should also decrease the bow of the wafers and thus improve the processability of this material. Finally, the good quality of the material and the lower band gap with respect to 4H-SiC should give a high channel mobility and hence a low R_on for the MOSFET and IGBT devices. CHALLENGE will address the lack of native substrates and the structural defects in 3C-SiC epitaxial layers. These two unresolved problems are the main obstacles towards successful device applications based on 3C-SiC, which has an unexplored potential for power devices, solar cells, biosensors, and many others. There will be two main routes towards 3C-SiC material fabrication: (i) Si-substrate-based epitaxy and (ii) boule growth to produce native substrates for subsequent epitaxial growth and device processing. In both routes, different innovative approaches will be pursued. The first approach can have a lower material cost but cannot take advantage of the high thermal conductivity of 3C-SiC (see Tab. I), which can reduce the device temperature in high-current applications (as in HEVs or EVs) without the need for a heavy radiator. The production of a 3C-SiC substrate can solve this problem but will probably increase the cost of the material. An estimate of this cost increase cannot easily be made because it strongly depends on several parameters (growth rate, boule length, growth temperature, market size, etc.) that are not really known at present, but a more accurate prediction can be made after the project's conclusion. Impact 3C-SiC technology can have a large impact on the future power device market. This market is segmented by voltage rating, such that different materials find their applications according to their technical capability and cost. In the low-voltage (~100 V) segment, silicon dominates thanks to the technology that has been developed over the last 50 years. In the high-voltage (>1200 V) segment of the market, 4H-SiC will probably dominate thanks to its material properties and the possibility of growing large wafers (up to 6 inches). The high-voltage segment is not overly cost sensitive, so the high cost of the 4H-SiC substrate is not critical. The key requirements in each area vary, and consequently so does the optimum technology to achieve those requirements. Figure 6 shows the different market sectors and the power ranges in which they operate, with approximate market sizes for 2020. The technology choices for improving power efficiency in the consumer market between 200 V and 1200 V are still being debated. One key characteristic of this market is that it is very price sensitive; consequently, 4H-SiC technology is unlikely to fit here. These huge markets, as illustrated below, are likely to be divided between two emerging technologies: we can suggest that between 200 V and 500 V GaN/Si is best suited to the market needs, while between 600 V and 1200 V the 3C-SiC/Si technology is optimal.
This market is growing rapidly: according to IHS forecasts, it will grow from 100 million dollars in 2020 to 300 million dollars in 2023. Today, low-voltage applications (<1.2 kV) represent over 99% of device sales, and this is where 3C-SiC/Si comes into its own. Whilst not achieving quite such high breakdown voltages as 4H-SiC (it is expected to be limited to below 2 kV), once mature it can achieve device prices near to Si, and here cost is key. These systems need to be efficient to reduce the demand for electrical power, but designers will be driven by the cheapest way of achieving an acceptable efficiency. Using SiC has the advantage of significantly reducing the component count, so SiC components can afford to be slightly higher in price than Si and still achieve a lower-cost system. CHALLENGE is a research and innovation action funded by the European Union's Horizon 2020 programme (8 M€ total budget).
4,194.4
2018-06-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
ATLAS Triggering on SUSY in 2012 In 2012 the LHC has been operating at a higher center-of-mass energy and higher instantaneous luminosity compared to 2011, providing the experiments with unprecedented amounts of hadron collision data. This document summarizes how the challenge of triggering on new physics signatures, such as supersymmetry (SUSY), in the 2012 LHC data was addressed by the ATLAS experiment. Introduction The LHC [1] operational conditions in 2012 presented a big challenge for the ATLAS trigger [2]: the higher center-of-mass energy and luminosity resulted in trigger rates an order of magnitude higher in 2012 than in 2011, as well as in a large non-linearity of the trigger rates as a function of the luminosity, due to pile-up. Triggering on new physics, such as SUSY, represented a particular challenge, as trigger selections had to be inclusive enough to provide a broad coverage of phase space. This was taken into account in the design of the 2012 trigger selection composition, which was developed with the strategy of being as inclusive as possible in object thresholds and multiplicities. The ATLAS trigger system and the 2012 trigger menu The ATLAS trigger system consists of a hardware-based component, the Level-1 (L1), and two software parts, the Level-2 (L2) and the Event Filter (EF). The L2 and the EF are referred to together as the High Level Trigger (HLT). Cost considerations limit the output bandwidth of both the L1 and the HLT. The detector readout bandwidth and the processing power in the HLT limit the L1 output rate to 75 kHz and the L2 output rate to 6 kHz. The offline computing capacity for storing and promptly processing data limits the EF output rate to O(400 Hz). An additional O(200 Hz) is stored for later reconstruction, in the so-called 'delayed stream'. The trigger menu, i.e. the list of trigger selection criteria used for data taking, consists of 'primary' triggers, which are used for physics measurements and typically run unprescaled; 'support' triggers, which are used for efficiency and performance measurements or monitoring, and typically run at a small rate (of the order of 0.5 Hz each); and 'calibration' triggers, which are used for detector calibrations and often run at high rate but store very small events with only the relevant detector information needed for the calibrations. The totality of the selections has to respect the rate limitations of all three trigger levels, which makes the optimal distribution of the available bandwidth a challenge. The distribution is driven by physics priorities, and ATLAS has chosen to give the most generic triggers the largest fraction of the bandwidth. Single electron and muon triggers typically use 50 Hz each; generic triggers, such as multi-jets and multi-leptons, typically use 5-15 Hz; and specialized triggers are given ≈1 Hz. About 20% of the bandwidth is dedicated to the supporting triggers. The main triggers that compose the ATLAS 2012 trigger menu are shown in Table 1. SUSY triggers The triggers outlined in Table 1 are extensively used by SUSY searches, and in some cases their thresholds were adjusted to fit the SUSY requirements (e.g. multi-jet triggers). Several additional selections were added to the ATLAS trigger menu to extend the trigger coverage for SUSY searches. Some examples are outlined in Table 2. Additionally, the SUSY searches motivated the introduction in the delayed stream of looser hadronic and E_T^miss triggers, compared to what is in the prompt stream.
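The per-level rejection factors implied by these rate limits are easy to work out; in the sketch below, the ~20 MHz input rate is our assumption for the 2012 run (50 ns bunch spacing), while the output rates are those quoted above:

```python
# Rejection factors implied by the trigger-rate limits quoted above.
rates_hz = [
    ("bunch crossings", 20e6),     # assumed input rate (50 ns spacing)
    ("L1", 75e3),                  # L1 output limit, from the text
    ("L2", 6e3),                   # L2 output limit, from the text
    ("EF (prompt stream)", 400.0), # EF output limit, from the text
]
for (name_in, r_in), (name_out, r_out) in zip(rates_hz, rates_hz[1:]):
    print(f"{name_in} -> {name_out}: rejection ~ {r_in / r_out:.0f}x")
# Overall reduction: 20 MHz -> 400 Hz, a factor of ~50,000
```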
Examples of such triggers are given in Table 3. Trigger performance improvements in 2012 To cope with the increased energy, luminosity and pile-up conditions of the 2012 data taking, the ATLAS experiment deployed improvements in the trigger selections and algorithms. The improvements mostly affecting SUSY selections were implemented in the jet and missing transverse momentum (E_T^miss) triggers. A summary of these improvements can be found elsewhere [3]. Table 2. SUSY-motivated triggers in the 2012 trigger menu. The ∆φ selection is applied at the EF, between the E_T^miss and the two leading jets with E_T > 45 GeV. H_T is defined as the sum of the E_T of jets with E_T > 45 GeV, and is calculated in events that already satisfied the requirement of a leading jet with E_T > 145 GeV. In some combined triggers, EF-only selections are implemented for jets and E_T^miss when the L2 rejection is sufficient; this feature provides optimal online-to-offline correlations, as at the EF the jet and E_T^miss reconstruction is similar to offline.

| Selection | EF trigger selection | EF avg. rate (Hz) |
| --- | --- | --- |
| 3 electrons & muons | p_T > 2×7 (e), 6 (µ) GeV | <1 |
| 3 electrons & muons | p_T > 7 (e), 2×6 (µ) GeV | <1 |

Table 3. Triggers in the delayed stream, introduced to enhance the trigger coverage for searches for which SUSY was one of the main motivations. The E_T^miss selection differs not only at the HLT but also at the L1, where it is looser by 5 GeV. The jet variable R corresponds to the jet cone size. Summary The improvements made to jet and E_T^miss triggers for 2012, together with new trigger selections and the addition of a delayed processing stream, have allowed ATLAS to meet the challenges of increased luminosity and pile-up and to maintain excellent efficiency for SUSY signals in 2012 data taking.
1,226.4
2013-05-01T00:00:00.000
[ "Physics" ]
Correction of the Electrical and Thermal Extrinsic Effects in Thermoelectric Measurements by the Harman Method Although the Harman method evaluates the thermoelectric figure-of-merit in a rapid and simple fashion, the accuracy of this method is affected by several electrical and thermal extrinsic factors that have not been thoroughly investigated. Here, we study the relevant extrinsic effects and a correction scheme for them. A finite element model simulates the electrical potential and temperature fields of a sample, and enables the detailed analysis of electrical and thermal transport. The model predicts that the measurement strongly depends on the materials, sample geometries, and contact resistance of the electrodes. To verify the model, we measure the thermoelectric properties of Bi2Te3-based alloys with systematically varied sample geometries and with either a point or a surface current source. By comparing the model and experimental data, we understand how the measurement conditions determine the extrinsic effects, and are, furthermore, able to extract the intrinsic thermoelectric properties. A correction scheme is proposed to eliminate the associated extrinsic effects for an accurate evaluation. This work will help make the Harman method more consistent and accurate and contribute to the development of thermoelectric materials. A small current minimizes the Joule heating of both the sample and the lead wires. In spite of these efforts, however, there still exist the Joule heating and the heat transfer via radiation and conduction 2,5,15. In particular, the heat transfer from or to the sample reduces the temperature difference across the sample (ΔT), leading to the reduction of V_DC and the underestimation of ZT. Furthermore, the ZT measured by the Harman method largely depends on the sample size. There have been efforts to correct for this size effect. Theoretical studies have revealed the relation between the intrinsic ZT (ZT_i) and the measured ZT (ZT_m) considering the thermal effects 2,5,12,13,15,16. The theoretical relation suggests that the thermal effect becomes smaller when the sample shape factor, defined as the sample length (L) divided by the cross-section area (A), is reduced. Thus, it seems possible to obtain ZT_i by measuring ZT_m as a function of L/A and extrapolating ZT_m to L/A = 0 5. However, not only thermal effects but also electrical effects become important when L/A becomes smaller. For a sample with a small L/A, the electrical resistance is small, so the contact resistance associated with the voltage probes may add a non-negligible contribution 2. Moreover, for a sample with A much larger than the cross-section area of the lead wires, the current density may be non-uniform over a finite length due to current crowding. In this study, we investigate the electrical and thermal extrinsic effects associated with the Harman method, and seek to obtain the intrinsic ZT from the measured ZT. First, we develop a finite element model that helps to understand the contributions of each of the extrinsic effects. Then, by measuring three types of thermoelectric materials with various shape factors, we verify the developed model and show how the accuracy of the Harman method may be improved. Overview of Harman Measurement In our Harman measurement system, a sample is suspended by two pairs of lead wires in a vacuum chamber (10⁻⁴ Torr) 5. The wires are ~20 mm long with a diameter of either 25 μm (Au wire) or 50 μm (Pt wire).
A pair of lead wires is attached at the sample end surfaces to pass an electric current (25 mA). Figures 1(a) and 2(a) show two attachment configurations: (1) spot-welding of wires directly onto the sample; (2) soldering of wires onto Cu foils (thickness of ~500 μm), which are attached to the sample with electrically conducting paste. We employed an epoxy with dispersed silver particles, which is assumed to have an effective electrical resistivity several orders of magnitude lower than that of the Bi-Te based alloys 12. Nevertheless, it is desirable to distribute the conducting paste uniformly across the sample surface to avoid a non-uniform electrical current distribution. Another pair of lead wires, which is spot-welded on the sample, measures the electrical potential variation. The positions of the voltage probes affect the measurement precision, which will be discussed below. Measurement of V_AC provides the measured electrical resistivity (ρ_m), and an additional measurement of V_DC gives ZT_m. To estimate the uncertainties in both ρ_m and ZT_m, we measured a single sample 10 times using the electrode configuration with Cu foils. For each measurement, the Cu foils and voltage probes were reattached. Table 1 lists the three types of test materials used for this study. The methods to obtain the physical properties of the test materials are described in Section 3. The test samples are Bi2Te3-based sintered materials that were prepared in our group either via a hot-extrusion technique (types 1 and 2) 17 or spark plasma sintering (type 3) 18. To vary the shape factor, we modified either L or A. For types 1 and 2, L was 35.4 mm for the first measurement. Then, the samples were cut to shorter lengths, keeping the same area, and further measurements were carried out. To see the influence of thermal conduction, Joule heating, and the Peltier effect along the lead wires, the wire material was chosen as either Au (k_Au = 314 W/(m·K), ρ_Au = 2.26 μΩ·cm, α_Au = 1.94 μV/K) 19,20 or Pt (k_Pt = 71.6 W/(m·K), ρ_Pt = 10.4 μΩ·cm, α_Pt = −5.15 μV/K) 19,20. Thus, as compared to Pt, if all the wires have the same dimensions, Au has roughly ~4.4× larger thermal loss, ~4.6× smaller Joule heating, and a smaller Peltier effect against the Cu foil (α_Cu = 1.83 μV/K). For type 3, A was 43.6 mm² for the initial measurement. Then, the sample was cut to a smaller area, maintaining the same length, and additional measurements were made. Finite Element Model We developed a three-dimensional thermoelectric finite element model (FEM) using a commercial software package (COMSOL Multiphysics) for the Harman measurement system. This FEM is useful to capture the non-uniform electrical and thermal transport between the lead wires and the sample, which arises from the large mismatch of the cross-section areas. Considering that the typical cross-section area of a sample is ≥ 1 mm², the wire has a ~1000× smaller cross-section area, which results in the spreading or crowding of the current or heat flow over a finite sample length. For the simulation of the DC measurement, the FEM solves the coupled thermoelectric, electrical, and thermal equations. The governing equation for the heat flux, q, in solid-state materials is q = −k∇T + αTJ, where J is the current density. The governing equation for the current density is J = −(∇V + α∇T)/ρ.
The model includes a radiative heat flux (q_rad) based on the relation q_rad = εσ(T⁴ − T₀⁴), where ε is the effective emissivity, σ is the Stefan-Boltzmann constant, and T₀ is the ambient temperature, which is assumed to be 298 K. As a thermal boundary condition, all the end surfaces of the lead wires are maintained at T₀. As an electrical boundary condition, the end surfaces of the voltage probes are insulated. One of the current source wires supplies 25 mA, while the counter-side current source wire serves as a ground electrode. For the simulation of the AC measurement, the FEM solves only the electrical governing equation, i.e. J = −∇V/ρ without the thermoelectric term, since no steady temperature difference develops under AC excitation. The model simulates the Harman measurement by calculating the thermal and electrical potential distributions over the sample and lead wires. The model estimates the voltage between the voltage probes, and gives the simulated V_AC (denoted as V_AC,s) and V_DC (denoted as V_DC,s). Then, the measured resistivity can be simulated by the relation ρ_s = (V_AC,s/I + R_c)·A/d_w (Eq. 1), where I is the electric current, R_c is the contact resistance associated with the voltage probes, and d_w is the distance between the voltage probes. Similarly, the measured Z can be simulated by Z_s = [(V_DC,s + I·R_c)/(V_AC,s + I·R_c) − 1]·[α_t/(α_t − α_w)]/T. For the calculations, the model requires the input values of α, ρ, and k of all the materials. The literature provides the properties of the lead-wire materials 19,20. Table 1 shows the input Seebeck coefficients of the test materials, measured through a static DC method. To find appropriate input values of ρ for the test materials, we fit ρ_s to the measured resistivity (ρ_m) for all the sample shapes and current source types. A particular combination of input ρ and R_c provides the best fit. Likewise, to determine proper input values of k for the test materials, we fit Z_s to the measured Z (Z_m). Since V_DC,s depends on the choice of k, a certain input k gives the best fit. Table 1 shows all the fitted properties, and the data for the fitting procedures will be shown below. Results and Discussion The developed FEM calculated the electric potential and temperature fields of the test samples under the Harman measurement conditions. The calculation results are useful to understand the extrinsic effects and their contributions to the measurement accuracy. By fitting the measured and calculated data, the intrinsic values of the thermoelectric properties and the contact resistance are determined, and the errors due to the extrinsic effects are uncovered. Figures 1 and 2 show the simulated electric potential and temperature fields of a type 3 sample under DC measurement. The data are captured in the symmetry plane to qualitatively assess the electrical and thermal transport. When electric current passes from a thin lead wire directly to a large sample, the lead wire acts as a point current source. The electric potential and the temperature change radially over a finite length, indicating that the current and heat flow spread out over a large region. Hence, there exist regions where the electrical and temperature fields are not one-dimensional. This effect is especially pronounced for samples with a small shape factor. On the contrary, when electric current enters the sample through a sufficiently thick layer of highly conductive material, the conductive layer serves as a surface current source. The calculation shows that a 500 μm-thick Cu layer distributes the current and heat flow well, such that the electric potential and temperature fields are one-dimensional throughout the sample.
Therefore, one-dimensional Ohm's law and the Fourier heat conduction law are well suited to the Harman measurement with a surface current source. Not only the current source type but also the voltage probe position affects the Harman measurement. Figure 3 shows the resistivity as a function of the distance between the voltage probes (d_w), measured with the surface current source. The data for the two types of samples show that the measured resistivity becomes more consistent and smaller with larger d_w. When d_w is too small (≤ 10 mm), the contribution of the error in the d_w measurement becomes prominent. Furthermore, when the measured voltage is small (of the order of 1 mV or less), the electrical extrinsic effects, including the contact resistance, may add more errors to the voltage measurement. Thus, for the following measurements we attached the voltage probes within a few mm of the sample end surfaces in order to make d_w, and equivalently the sample resistance, as large as possible. Figure 4 shows the resistivity of the test samples as a function of the sample shape factor. When L/A approaches 0, the resistivity decreases rapidly with a point current source, while it increases with a surface current source. With appropriate ρ and R_c inputs in the FEM, the measured resistivity is well simulated. R_c = 0.3 mΩ and the ρ denoted as the intrinsic value in the figure provide the best fit. When L/A is below 1, the sample resistance is usually a few mΩ; thus R_c, which is nearly constant regardless of L/A, becomes non-negligible in our system. Accordingly, the resistivity measured with a surface source is not constant, and becomes larger with a smaller L/A. On the other hand, with a point source, the measurement is significantly affected by the non-uniform current density when L/A is below 1. In this case, the measured resistivity is sometimes more than 50% less than the estimated intrinsic resistivity. There are several ways to avoid the effect of the contact resistance. Based on the data in Fig. 4, measuring a sample with a large enough resistance (≥ 100× R_c) almost eliminates the influence of R_c. Another method is to estimate R_c by fitting the measured resistivity with Eq. 1 for various samples. The estimated R_c can then be subtracted from the measured resistance. Moreover, when the resistance is measured for two samples of different L/A, R_c can be removed by subtracting one resistance from the other (known as the differential method). For example, when there are two samples with (L/A)_1 and (L/A)_2, the difference of the resistances (ΔR) is expressed as ΔR = ρ[(L/A)_1 − (L/A)_2] (Eq. 2). Equation 2 evaluates ρ without knowledge of R_c. Note that all these approaches work for the measurement with a surface current source. Figure 5 shows the resistivity of the test samples after subtraction of an effective contact resistivity (ρ_c). ρ_c is simply defined as R_c·A/d_w. For the data taken with a surface current source, ρ − ρ_c obtained either by directly subtracting ρ_c or by the differential method shows values consistent with each other and with the estimated intrinsic resistivity. However, when L/A is extremely small (< 0.5), ρ − ρ_c still fluctuates, suggesting that a sufficiently large L/A ensures a precise measurement. For the data taken with a point current source, ρ − ρ_c still strongly depends on L/A due to the effect of the non-uniform current density. Figure 6 shows Z of the test samples as a function of the sample shape factor. When L/A approaches 0, Z increases almost linearly, and suddenly drops when L/A becomes smaller than 1.
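The differential method lends itself to a short sketch; the resistivity and shape factors below are illustrative assumptions (only R_c = 0.3 mΩ follows the text):

```python
# Minimal sketch of the differential method described above: given the
# measured resistances of two samples with different shape factors L/A,
# the contact resistance R_c cancels out (Eq. 2).
def rho_differential(R1, R2, LA1, LA2):
    """Resistivity from two measurements; R = rho*(L/A) + R_c, so the
    difference Delta R = rho * (LA1 - LA2) eliminates R_c."""
    return (R1 - R2) / (LA1 - LA2)

# Illustrative values: rho = 1e-5 Ohm*m, R_c = 0.3 mOhm
rho_true, R_c = 1e-5, 0.3e-3
LA1, LA2 = 300.0, 100.0            # shape factors L/A, in 1/m
R1 = rho_true * LA1 + R_c          # simulated measured resistances
R2 = rho_true * LA2 + R_c
print(rho_differential(R1, R2, LA1, LA2))  # recovers 1e-5, independent of R_c
```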
With a proper k input in the FEM, the simulated figure-of-merit (Z_s) fits the measured data well. The k value for the best fit enables the estimation of the intrinsic Z. The linear dependence of Z on L/A is due to the thermal effects, such as radiative heat transfer, conductive heat flow through the wires, and Joule heating occurring within both the wires and the sample 2,5,12. If the thermal effects were the only dominant extrinsic factors in the Harman measurement, Z should keep increasing until L/A reaches 0. However, for a sample with a small L/A, the influence of the contact resistance becomes apparent and causes the sudden decrease of Z. For instance, assuming R_c = 0, Z is obtainable from Z = (V_DC/V_AC − 1)·[α_t/(α_t − α_w)]/T. However, if R_c is non-zero, it adds a positive offset to both the numerator and the denominator of the fraction, as Z = [(V_DC + I·R_c)/(V_AC + I·R_c) − 1]·[α_t/(α_t − α_w)]/T, and reduces Z. Thus, to avoid the influence of R_c, the sample should possess a sufficiently large resistance, which is possible with a large L/A. For Bi-Te based alloys, L/A > ~1.5 has ensured a resistance sufficiently large compared with R_c. If a test material possesses a small resistivity, the proper range of L/A should be found based on the dependence of Z on L/A. It was also observed that, for a single sample with L/A = 3.4, the uncertainty of the measurement results was ±1.4% for ρ_m and ±2.2% for ZT_m, which ensured an accuracy up to the second decimal place. These uncertainties would result from the uncertainties in the probe distance measurements, the electrode qualities, or inconsistent thermal loss due to different wire lengths or fluctuations of the surrounding temperature. Differences between the point and surface current sources also originate from the thermal effects. With the surface source, heat flow across the sample end surface is also efficient; thus the conductive thermal loss through the lead wires tends to be large. Thus, for types 1 and 2, V_DC was smaller for the surface source than for the point source. However, when the cross-section area of the sample is much larger (≥ 5000×) than that of the lead wire, V_DC was larger for the surface source than for the point source, as shown for type 3. With the point source, heat due to the Peltier effect at the lead wire-sample junction does not efficiently flow across the sample end surface, resulting in a non-uniform temperature distribution. Thus, the ΔT between the voltage probes may be further reduced when the sample cross-section area is too large. In spite of the non-uniformity problem with the point source, Z and its dependence on L/A were not greatly different between the measurements with the point and surface sources. This fact indicates that the effects of the non-uniform fields on the measured V_DC and V_AC roughly cancel each other. However, it is evident that the surface source must be more reliable, since it ensures one-dimensional electrical potential and temperature fields, which are compatible with the Harman relation and its correction method. By correcting both the electrical and thermal extrinsic effects, the intrinsic Z (Z_i) is obtainable from the Harman measurements. First, the electrical effects are corrected by measuring with a surface current source and subtracting R_c. The Z corrected for R_c (denoted as Z_c) is acquired from Z_c = [(V_DC − I·R_c)/(V_AC − I·R_c) − 1]·[α_t/(α_t − α_w)]/T (Eq. 3). Note that, for materials with α_t ~ 200 μV/K, the factor α_t/(α_t − α_w) corrects 1-2% of the initially measured Z.
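A minimal sketch of this corrected evaluation (Eq. 3): the current and contact resistance follow the values in the text, while the voltages, Seebeck coefficients, and temperature are illustrative assumptions, not measured data from the paper:

```python
# Harman figure-of-merit with the contact-resistance and lead-wire
# Seebeck corrections discussed above (Eq. 3).
def harman_Z(V_dc, V_ac, I, R_c, alpha_t, alpha_w, T):
    """Z corrected for contact resistance R_c and for the Seebeck
    coefficient of the lead wires (alpha_w)."""
    ratio = (V_dc - I * R_c) / (V_ac - I * R_c)
    return (ratio - 1.0) * (alpha_t / (alpha_t - alpha_w)) / T

# I = 25 mA and R_c = 0.3 mOhm as in the text; voltages assumed.
Z = harman_Z(V_dc=2.20e-3, V_ac=1.30e-3, I=25e-3, R_c=0.3e-3,
             alpha_t=200e-6, alpha_w=1.94e-6, T=298.0)
print(f"Z = {Z:.2e} 1/K, ZT = {Z * 298.0:.2f}")  # ~2.4e-3 1/K, ZT ~ 0.70
```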
If there exists a relative error ε in the evaluation of α_t, the term for the Seebeck coefficients becomes α_t(1 + ε)/[α_t(1 + ε) − α_w]. Thus, if α_w is ~1 μV/K, the influence of ε is fortunately not great. Then, the thermal effects are corrected by measuring multiple samples with various shape factors and extrapolating the data to L/A = 0. Based on the previous study that corrects only for the thermal effects 5, the relation between Z_c and Z_i is expressed as Eq. 4, where β is the radiative heat transfer coefficient, P is the sample perimeter, T̄ is the average temperature across the sample, and K_w is the thermal conductance of the lead wires. According to Eq. 4, 1/Z_c is a second-order polynomial in L/A, and 1/Z_i is obtainable as the y-axis intercept of the polynomial. To simplify Eq. 4, we assume ΔT is a linear function of L/A, such that ΔT = c·L/A, where c is a constant. This assumption is not unrealistic, based on the experimental data in which ΔT is estimated from ΔT = (V_DC − V_AC)/(α_t − α_w). Furthermore, near room temperature, or when the sample is in thermal equilibrium with the surrounding radiation shield, we may assume T ~ T₀. Then, Eq. 4 reduces to the simplified relation Eq. 5, in which 1/Z_c is a linear function of L/A. Figure 7 shows 1/Z_c of the test samples as a function of the shape factor. Within a particular range of L/A (~1.5 ≤ L/A ≤ ~4), 1/Z_c is a linear function of L/A, indicating that the radiative loss is not significant near room temperature. Even at high temperature, the linearity between 1/Z_c and L/A would not break if the sample and the surrounding radiation shield possess similar temperatures. Thus, when linearly extrapolating the data measured with the surface source, the y-axis intercepts are almost identical to α_t/[Z_i(α_t − α_w)]. The slope of the linear trend depends on the Joule heating and the conductive heat flow through the lead wires, as dictated by Eq. 5. To estimate the influence of the heat conduction, Eq. 5 was fitted to the measured data by employing K_w as a fitting parameter. For the best fit, K_w was chosen between 100 μW/K and 350 μW/K. Note that the slopes are different for different materials, measuring temperatures, and wiring conditions, as ρ, k, and K_w are not the same. Interestingly, extrapolating the data acquired with the point source also provides similar y-axis intercepts for types 1 and 2, although it exhibited a large discrepancy for type 3. To simply see the influence of the current source on the Harman measurement, the FEM calculated several possible conditions. Figure 8 shows the simulated figure-of-merit (Z_s) of type 3 with several configurations. When the contact resistance (R_c) is not corrected, both the point and surface sources possess a large error when L/A is small. However, without R_c, Z_s becomes larger, 1/Z_s varies almost linearly with L/A, and the extrapolation of Z_s to L/A = 0 gives the intrinsic Z (= 2.40). When the lead wire material is Cu, the Peltier effect at the lead wire-Cu foil interface can be excluded and the Joule heating within the wire can be minimized. However, the Cu wire facilitates the conductive heat flow, such that Z_s becomes smaller.
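A minimal sketch of this extrapolation procedure, using synthetic data and neglecting the small (~1%) lead-wire Seebeck factor; the numbers are illustrative, not the paper's measurements:

```python
# Fit 1/Z_c as a linear function of L/A (the simplified Eq. 5 regime)
# and read the intrinsic 1/Z_i off the y-axis intercept.
import numpy as np

LA = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])  # shape factors L/A
# Synthetic data obeying 1/Z_c = 1/Z_i + slope*(L/A), with an assumed
# Z_i = 2.5e-3 1/K; the slope mimics wire conduction and Joule heating.
inv_Zc = 1.0 / 2.5e-3 + 20.0 * LA
Zc = 1.0 / inv_Zc

slope, intercept = np.polyfit(LA, 1.0 / Zc, deg=1)  # linear fit of 1/Z_c
Z_i = 1.0 / intercept
print(f"intrinsic Z ~ {Z_i:.2e} 1/K, ZT ~ {Z_i * 298:.2f}")  # recovers 2.5e-3
```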
4,951.8
2016-05-20T00:00:00.000
[ "Engineering", "Physics" ]
Extra Surfactant-Assisted Self-Assembly of Highly Ordered Monolayers of BaTiO3 Nanocubes at the Air-Water Interface Assembly of nanocrystals into ordered two- or three-dimensional arrays is an essential technology to achieve their application in novel functional devices. Among a variety of assembly techniques, evaporation-induced self-assembly (EISA) is one of the most promising approaches because of its simplicity. Although EISA has shown its potential to form highly ordered nanocrystal arrays, the formation of uniform nanocrystal arrays over large areas remains a challenging subject. Here, we introduce a new EISA method and demonstrate the formation of large-scale highly ordered monolayers of barium titanate (BaTiO3, BT) nanocubes at the air-water interface. In our method, the addition of an extra surfactant to a water surface assists the EISA of BT nanocubes with a size of 15-20 nm into a highly ordered arrangement. We reveal that the compression pressure exerted by the extra surfactant on the BT nanocubes during solvent evaporation is a key factor in the self-assembly in our method. The BT nanocube monolayers transferred to substrates have sizes up to the millimeter scale and a high out-of-plane crystal orientation, containing almost no microcracks or voids. Introduction Nanocrystals have been attracting interest owing to their size-dependent physical properties and potential applications in novel electric, optical, and magnetic devices [1,2]. Recent progress in solution-based chemical processes has enabled the synthesis of nanocrystals with well-defined shapes (e.g., sphere, cube, cuboid, octahedron, and dodecahedron), narrow size distributions, and various compositions [3][4][5][6][7][8]. The progress in the synthesis of nanocrystals has also accelerated fundamental studies of their intrinsic and collective properties. In this context, the assembly of nanocrystals into ordered arrays has become increasingly important for their applications as well as for fundamental studies. Moreover, the assembly techniques should be able to control the arrangement and crystal orientation of nanocrystals over large areas. Self-assembly of nanocrystals is a practical way to achieve highly ordered arrays that are uniform and continuous over large areas. In particular, evaporation-induced self-assembly (EISA) has been intensely employed by many researchers to fabricate nanocrystal arrays because of its simplicity and controllability. The mechanism of this type of self-assembly is generally explained by the combination of convective flows of solvent driven by solvent evaporation and lateral capillary forces. As the solvent of a nanocrystal suspension evaporates, the nanocrystals dispersed in the suspension are carried to the evaporation front by convective flows and assembled into ordered arrays by lateral capillary forces acting between the nanocrystals [9,10]. The interaction between the surfactant-capped surfaces of nanocrystals is also an important factor in determining their arrangement [11]. In this type of self-assembly, nanocrystal arrays are assembled onto either a solid surface or a liquid surface. Given the practical applications of nanocrystal arrays, the former case can provide a simpler fabrication process than the latter, because the latter requires a process of transferring the nanocrystal arrays floating on a liquid surface to a solid surface.
For this reason, various attempts have been made to form two- and three-dimensional arrays of nanocrystals with diverse shapes, sizes, and compositions directly on substrates [12][13][14][15][16][17]. We have previously reported the fabrication of highly ordered three-dimensional arrays of perovskite cubic-shaped nanocrystals (called nanocubes) directly on substrates by using EISA methods: a capillary-force-assisted self-assembly method and a dip-coating method [18][19][20][21][22][23][24]. Such direct fabrication on a solid surface, including our previous cases, however, commonly suffers from the generation of microcracks during solvent evaporation. Microcracks divide nanocrystal arrays into small domains and render the fabrication of continuous and uniform films of nanocrystal arrays over large areas difficult. In fact, the lateral size of the continuous regions in the three-dimensional arrays was limited to less than a few tens of micrometers in our previous methods. This is a serious problem for practical applications of nanocrystal arrays. Fabrication methods using EISA on a liquid surface have the potential to resolve this problem because nanocrystals on a liquid surface are mobile and can rearrange their positions. Although this advantage has been proven in several reports [25][26][27][28] that demonstrated the formation of monolayers and multilayers of nanocrystals over large areas, most examples are limited to spherical nanocrystals. In the present study, we demonstrate the formation of large-scale highly ordered monolayers of barium titanate (BaTiO3, BT) nanocubes capped with oleic acid on a water surface by introducing a new EISA method. In our method, oleic acid, used as an extra surfactant, is added to the water surface before the BT nanocube suspension is dropped onto the surface. We found that the addition of the extra surfactant allows the EISA of BT nanocubes into highly ordered monolayers under conditions in which monolayers do not form without an extra surfactant. Although excess ligands added to a nanocrystal suspension or a nanocrystal-dispersed water surface promote the formation of nanocrystal arrays and improve their ordering by slowing the solvent evaporation or by modifying the interaction between the nanocrystals [12,29,30], we experimentally confirmed that the major role of the extra surfactant in our self-assembly method is distinct from that of such excess ligands. Our measurements of surface pressure revealed that the extra surfactant exerts a compression pressure on the BT nanocubes on the water surface. This compression promotes the EISA of BT nanocubes and prevents the generation of microcracks and voids, resulting in the formation of uniform monolayers with sizes of up to a few millimeters. Materials and Methods BT nanocubes were synthesized by a hydrothermal method. The details of the fabrication process are described in our previous report [7]. Titanium(IV) bis(ammonium lactate) dihydroxide (Sigma Aldrich Japan, Tokyo, Japan) and Ba(OH)2·8H2O (FUJIFILM Wako Pure Chemical Co., Osaka, Japan) were used as the starting materials for Ti and Ba, respectively. They were dissolved in distilled water and mixed with NaOH (FUJIFILM Wako Pure Chemical Co., Osaka, Japan), tert-butylamine (Sigma Aldrich Japan, Tokyo, Japan), and oleic acid (FUJIFILM Wako Pure Chemical Co., Osaka, Japan). The aqueous solution was heated at 220 °C in an autoclave for 72 h.
The BT nanocubes synthesized via this process were capped with oleic acid and enclosed by {100} facets [7]. Their typical size was in the range of 15-20 nm. The BT nanocubes obtained were rinsed with ethanol and dispersed in mesitylene. Figure 1 shows a schematic illustration of our self-assembly method. Distilled water was poured into a petri dish (7.0 cm in diameter), and the water surface was cleaned using an aspirator. Oleic acid was diluted with toluene to a concentration of 1.9 × 10⁻³ M, and the solution was added to the water surface with a microsyringe. The toluene was evaporated at room temperature so that the oleic acid molecules disperse over the water surface. After toluene evaporation, 40 µL of the BT nanocube suspension was dropped onto the oleic acid-added water surface with a microsyringe. The petri dish was covered with a glass lid to slow the evaporation of mesitylene at room temperature. The resulting monolayers of BT nanocubes floating on the water surface were transferred to silicon substrates by the Langmuir-Schaefer deposition method. The silicon substrates were subjected only to ultrasonication in ethanol before monolayer deposition to keep their surface hydrophobic. After monolayer deposition, the substrates were dried at 60 °C, followed by ultraviolet light irradiation for 2 h and drying at 200 °C for 1.5 h to remove organic residues. The morphologies of the BT nanocube monolayers on substrates were observed using a field-emission scanning electron microscope (FE-SEM; JSM-6335FM, JEOL, Tokyo, Japan). The out-of-plane orientation of the BT monolayers on substrates was evaluated by performing X-ray diffraction (XRD) measurements using a SmartLab XRD (Rigaku, Tokyo, Japan) with a Cu Kα radiation source. The surface pressure on the water subphase surface was measured by the Wilhelmy plate method with a KSV NIMA Layer Builder (Biolin Scientific, Espoo, Finland) to investigate the dependence of the surface pressure on the amount of oleic acid added to the water surface. Additionally, real-time measurements of the surface pressure on the water surface during solvent evaporation were performed. In these measurements, the process of formation of the BT nanocube monolayers was the same as mentioned above; however, the petri dish and the instrument were covered with a plastic case during mesitylene evaporation instead of being covered with a glass lid. Figure 1. Schematic illustration of our self-assembly process. First, oleic acid diluted with toluene is added to a water surface in a petri dish. After toluene evaporation, the barium titanate (BT) nanocube suspension (mesitylene solvent) is dropped onto the oleic acid-added water surface. The petri dish is covered by a glass lid during mesitylene evaporation at room temperature. As a result of mesitylene evaporation, BT nanocube monolayers form at the air-water interface.
Results and Discussion In a typical procedure for the formation of BT nanocube monolayers, 40 µL of the oleic acid solution (corresponding to an amount of oleic acid molecules of 2.0 × 10⁻⁹ mol/cm²) is added to the water surface, and then 40 µL of the BT nanocube suspension with a concentration of about 0.2 mg/mL is dropped onto the surface. The BT nanocube monolayers obtained via the typical procedure are a few hundred micrometers to a few millimeters in size. We employed the Langmuir-Schaefer deposition method since it is suitable for transferring locally floating monolayers on a water surface to a substrate and has been used in similar systems [25,27]. Figure 2a shows an example of a BT nanocube monolayer transferred to a silicon substrate. It can be seen that the monolayer (the bright region in Figure 2a) is uniform and contains no microcracks over the entire area. The magnified image in Figure 2b shows that the BT nanocubes are highly ordered and form small domains with about 10 nm gaps in the monolayer. Since the thickness of the oleic acid capping layers of the BT nanocubes is about 1 nm [31], these gaps seem to be caused by slight variations in the size and shape of the BT nanocubes, and by the penetration of extra oleic acid molecules between neighboring BT nanocubes. Although we also obtained BT nanocube multilayers by using a more concentrated suspension (more than 2.0 mg/mL), we will focus on the formation of monolayers in this study.
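The surface coverage quoted here follows directly from the quantities given in the Methods; a quick arithmetic check (only the dish-area formula is added by us):

```python
# Consistency check of the oleic acid surface coverage quoted above:
# 40 uL of a 1.9e-3 M solution spread over a 7.0 cm diameter dish.
import math

volume_L = 40e-6            # 40 uL of oleic acid solution
conc_M = 1.9e-3             # mol/L (from the Methods)
dish_diameter_cm = 7.0

moles = volume_L * conc_M                        # ~7.6e-8 mol
area_cm2 = math.pi * (dish_diameter_cm / 2)**2   # ~38.5 cm^2
print(f"{moles / area_cm2:.1e} mol/cm^2")        # ~2.0e-9 mol/cm^2, as quoted
```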
The crystal orientation of the BT nanocube monolayer obtained was analyzed from its XRD pattern. Figure 3 shows the results of an XRD 2θ-ω scan of the BT nanocube powders and of a BT nanocube monolayer with an area of about 0.3 cm² on a silicon substrate (1 × 1 cm²). Although the splitting of the (100), (200), (210), and (211) diffraction peaks cannot be observed due to peak broadening caused by their nanometric size, the diffraction pattern of the BT nanocube powder is roughly consistent with that of tetragonal BT (P4mm), represented by the vertical lines in Figure 3. On the other hand, the diffraction pattern of the monolayer shows only the (100) and (200) peaks, at 22° and 45°, respectively. This result clearly indicates that the BT nanocubes in the monolayer have an excellent {100} out-of-plane orientation over the large area. The weak diffraction peaks around 56° in the diffraction pattern of the monolayer were proved to be from the Si substrate, as determined by an XRD measurement of the substrate without BT nanocube monolayers.
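The quoted peak positions can be cross-checked with Bragg's law; in the sketch below, the BaTiO3 pseudo-cubic lattice constant (~4.0 Å) and the Cu Kα wavelength are textbook values we assume, not numbers from the paper:

```python
# Check of the reported perovskite peak positions (~22 and ~45 degrees)
# using Bragg's law, 2*d*sin(theta) = lambda.
import math

wavelength = 1.5406   # Cu K-alpha, Angstrom (assumed textbook value)
a = 4.00              # BaTiO3 pseudo-cubic lattice constant, Angstrom (assumed)

for h, k, l in [(1, 0, 0), (2, 0, 0)]:
    d = a / math.sqrt(h**2 + k**2 + l**2)            # interplanar spacing
    two_theta = 2 * math.degrees(math.asin(wavelength / (2 * d)))
    print(f"({h}{k}{l}): 2theta ~ {two_theta:.1f} deg")
# (100): ~22.2 deg, (200): ~45.4 deg, consistent with Figure 3
```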
To investigate how the amount of extra oleic acid influences the self-assembly of BT nanocubes, we performed the self-assembly process while changing the amount of oleic acid from 0 to 2.0 × 10⁻⁹ mol/cm². Figure 4 shows the SEM images of BT nanocubes transferred to silicon substrates from water surfaces to which various amounts of oleic acid had been added. These images demonstrate that the amount of oleic acid strongly affects the ordering of BT nanocubes. Without the addition of oleic acid (0 mol/cm²), two-dimensional random clusters consisting of a few tens of BT nanocubes were deposited on the substrate; however, monolayers were not observed (Figure 4a). This may be due to the high boiling point of mesitylene (165 °C), which causes slow solvent evaporation at room temperature. Under this condition, BT nanocubes gathered at the evaporation front by convective flows, dispersing away from the suspension droplet before being assembled into monolayers by lateral capillary forces, because the supply rate of BT nanocubes to the evaporation front is not sufficient for the formation of monolayers.
The addition of 2.5 × 10⁻¹⁰ mol/cm² of oleic acid caused the formation of BT nanocube monolayers with a size of a few micrometers. However, the monolayers formed under this condition contain numerous voids and have poor ordering of BT nanocubes, as shown in Figure 4b. When more than 5.0 × 10⁻¹⁰ mol/cm² of oleic acid was added, BT nanocube monolayers grew to more than submillimeter sizes, and voids were dramatically reduced (Figure 4c-e). Moreover, the fast Fourier transform (FFT) patterns of the SEM images shown in the insets of Figure 4 clearly demonstrate an improvement in BT nanocube ordering with an increasing amount of oleic acid. A four-fold symmetry, which indicates the long-range order of BT nanocubes in the monolayers, appears in the FFT patterns when the amount of oleic acid is more than 5.0 × 10⁻¹⁰ mol/cm² (insets in Figure 4c-e), while no symmetry can be observed at oleic acid amounts of 2.5 × 10⁻¹⁰ mol/cm² or less (insets in Figure 4a,b). These results reveal that the addition of extra oleic acid allows BT nanocubes to form large-scale monolayers and that increasing the amount of extra oleic acid improves the ordering of BT nanocubes.
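The FFT analysis above can be approximated with a short script. The following is a minimal sketch, not the authors' procedure: it assumes a grayscale SEM crop saved as "sem_4e.png" (a hypothetical file name) and simply computes the centered 2D power spectrum, in which long-range four-fold order appears as two orthogonal pairs of satellite peaks.

```python
# Minimal sketch: 2D power spectrum of an SEM image to look for four-fold symmetry.
# "sem_4e.png" is a hypothetical file name; any grayscale SEM crop of the monolayer works.
import numpy as np
from imageio.v3 import imread

img = imread("sem_4e.png").astype(float)
if img.ndim == 3:                      # collapse RGB to grayscale if needed
    img = img.mean(axis=2)
img -= img.mean()                      # remove the DC offset so the central peak does not dominate

window = np.outer(np.hanning(img.shape[0]), np.hanning(img.shape[1]))
power = np.abs(np.fft.fftshift(np.fft.fft2(img * window))) ** 2

# Four-fold order shows up as satellite peaks at 0/90 degrees around the center,
# at the spatial frequency corresponding to the nanocube spacing; comparing that
# power with the 45-degree directions gives a quick measure of the symmetry.
np.save("fft_power.npy", power)
```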
To further understand the effect of the amount of oleic acid, we measured the surface pressure on the water subphase surface to which various amounts of oleic acid were added, but without drop-casting the BT nanocube suspension. The measured values are plotted as a function of the amount of oleic acid (black filled circles and solid line in Figure 5). The surface pressure saturates at around 30 mN/m when the amount of oleic acid exceeds about 5.0 × 10⁻¹⁰ mol/cm². This saturated value is well consistent with the equilibrium spreading pressure of a monolayer of oleic acid molecules on a water surface [32]. Interestingly, both the appearance of the four-fold symmetry in the FFT patterns (Figure 4) and the saturation of the surface pressure were observed when the added amount of oleic acid exceeded 5.0 × 10⁻¹⁰ mol/cm². This concurrence may indicate that the ordering of BT nanocubes in their monolayers is related to the surface pressure of the water surface.

We evaluated the density of BT nanocubes quantitatively by calculating their area ratio on the substrate from the SEM images using the image processing software ImageJ. We calculated the area ratios covered by BT nanocubes in five randomly selected regions (1.6 µm × 1.9 µm in area) for each sample. The mean area ratio for each sample is plotted in Figure 5 as a function of the amount of oleic acid (red filled squares and solid line). The plots also show the dependence of the area ratio on the surface pressure: the area ratio increases with increasing surface pressure and saturates when the surface pressure saturates. Therefore, these results indicate that the surface pressure exerted by the extra oleic acid is a key factor in the formation of BT nanocube monolayers and the arrangement of BT nanocubes in our method.
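The area-ratio evaluation was performed in ImageJ; as an illustration only, an equivalent estimate can be scripted as below. This is a minimal sketch assuming bright-on-dark SEM contrast and hypothetical file names for the five 1.6 µm × 1.9 µm regions; Otsu thresholding stands in for whatever threshold was chosen in ImageJ.

```python
# Minimal sketch of the area-ratio estimate, assuming grayscale SEM crops in which
# BT nanocubes appear brighter than the substrate. Illustrative equivalent of the
# ImageJ analysis, not the authors' script.
import numpy as np
from imageio.v3 import imread
from skimage.filters import threshold_otsu

def coverage_ratio(path: str) -> float:
    img = imread(path).astype(float)
    if img.ndim == 3:
        img = img.mean(axis=2)
    mask = img > threshold_otsu(img)   # bright pixels ~ nanocube-covered area
    return mask.mean()                 # covered pixels / total pixels

regions = ["region1.png", "region2.png", "region3.png", "region4.png", "region5.png"]
ratios = [coverage_ratio(p) for p in regions]
print(f"area ratio: {np.mean(ratios):.3f} ± {np.std(ratios):.3f}")
```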
Since the aforementioned measurements of surface pressure were performed in the absence of BT nanocubes and their suspension, it remains unclear how the extra oleic acid affects the surface pressure during the assembly of BT nanocubes. It should be noted that the compression of the suspension droplet by oleic acid could be visually confirmed from the behavior of the suspension droplet on the water surface: the suspension droplet on the oleic acid-added water surface maintains a lens-like shape and floats at the center of the surface during evaporation, while a suspension dropped onto a water surface without oleic acid immediately spreads out to form thin films and covers the water surface. To investigate this point quantitatively, we measured the surface pressure on the water surface with and without the addition of oleic acid during solvent evaporation in real time.

Figure 6a,b show the time profiles of the surface pressure on the water surfaces with 2.0 × 10⁻⁹ mol/cm² of oleic acid and without oleic acid, respectively. It is noteworthy that the surface pressure on the oleic acid-added water surface is higher than that on the water surface without oleic acid throughout the measured time range, although the two profiles show different behaviors of the surface pressure in response to the drop-casting of the BT nanocube suspension (indicated by the red arrow) and the evaporation of the solvent. This verifies that the extra oleic acid exerts a compression pressure on the suspension droplet and the BT nanocubes.
The surface pressure of the oleic acid-added water surface decreased when the BT nanocube suspension was dropped and then recovered to its initial (pre-drop) value as the solvent evaporated. The decrease in surface pressure may be explained by the dissolution of part of the oleic acid molecules on the water surface into the suspension. As the solvent evaporates, the oleic acid molecules in the suspension are released back to the water surface and increase the surface pressure. In contrast, on the water surface without oleic acid, the surface pressure increased in response to the drop-casting of the suspension and then decreased to nearly 0 mN/m after solvent evaporation. This increase in surface pressure can be attributed to the suspension spreading over the water surface. The longer evaporation time for the oleic acid-added water surface compared with the water surface without oleic acid may be due to the different behavior of the suspension droplet on the two water surfaces, which causes a difference in the specific surface area of the suspension droplet. The oleic acid molecules dissolved in the suspension may also contribute to the slowing of evaporation. Slow evaporation of the solvent is favorable for the formation of uniform monolayers.

Figure 6. Time profiles of surface pressure on water surfaces (a) with the addition of 2.0 × 10⁻⁹ mol/cm² of oleic acid and (b) without oleic acid. The red arrow indicates the time at which the BT nanocube suspension was dropped onto the water surface.

From the results mentioned above, we conclude that the major role of the extra oleic acid in our method is as follows: the extra oleic acid added to the water surface exerts a compression pressure on the BT nanocubes, which are carried to the evaporation front by convective flows, as illustrated in Figure 7. This compression pressure confines the BT nanocubes around the suspension droplet and assists the lateral capillary forces acting between the BT nanocubes, leading to the formation of a highly ordered arrangement of BT nanocubes. Furthermore, it can be deduced that the compression by the extra oleic acid reduces the generation of microcracks.

Figure 7. Schematic illustration of the mechanism of oleic acid-assisted self-assembly of BT nanocubes. The extra oleic acid added to the water surface exerts a compression pressure on the BT nanocubes, which are carried to the evaporation front by convective flows. This compression pressure confines the BT nanocubes around the suspension droplet and assists the lateral capillary forces acting between the BT nanocubes.

Finally, we compared the quality of BT nanocube monolayers fabricated using our method and the Langmuir-Blodgett (LB) method, in which nanocrystals on a liquid surface are assembled into monolayers by compressing them with movable barriers. The LB method is commonly used for the fabrication of nanocrystal monolayers [33,34]. Since the LB method is similar to our method in that it uses a compression pressure to fabricate monolayers on a liquid subphase, this comparison is helpful for evaluating the quality of the monolayers produced by our method. In this experiment, the LB method was carried out using the same instrument that was used for the measurement of surface pressure.
BT nanocubes, dispersed on the surface of distilled water poured into a Teflon Langmuir trough, were compressed by the movable barriers with a compression pressure of about 50 mN/m. The compression pressure was selected to be slightly lower than the collapse pressure of the monolayer (about 55 mN/m). The silicon substrates used in the LB method were subjected to ultraviolet irradiation to make their surfaces hydrophilic. Our method was carried out under the same conditions as those used for the fabrication of the monolayer shown in Figure 2. Figure 8a,b show the SEM images of BT nanocube monolayers fabricated using our method and the LB method, respectively. In Figure 8a, almost no microcracks and voids exist; only a few bright spots, where small two-dimensional clusters of BT nanocubes are deposited on the monolayer, are observed. On the other hand, in Figure 8b, numerous voids are present even though a higher compression pressure was applied. Additionally, some overlaps of monolayers can also be observed as bright areas in the image. Such voids and overlaps are a common problem in the LB method, and there is a trade-off relation between them: increasing the compression pressure to reduce voids causes overlapping of monolayers, while voids cannot be reduced under a low compression pressure [35]. Thus, severe control of experimental parameters, such as the compression pressure, is required to reduce the voids and overlaps. Although the differences in the deposition methods and in the surface conditions of the substrates can contribute to the resultant morphology of the monolayers, this result qualitatively demonstrates the potential of our method to provide high-quality monolayers without any dedicated instrument or severe control of experimental parameters.
Conclusions

We demonstrated a novel self-assembly method to fabricate large-scale, highly ordered monolayers of nanocrystals. BT nanocube monolayers with sizes beyond the submillimeter scale were achieved by adding extra oleic acid onto the water subphase surface. SEM observations and XRD measurements revealed that the obtained monolayers contain almost no microcracks and voids and have a high crystal orientation. The measurements of surface pressure on the water surface revealed that the addition of extra oleic acid assists the EISA of BT nanocubes into a four-fold symmetric arrangement by exerting a compression pressure on the BT nanocubes. The combination of the compression pressure with EISA allows the formation of large-scale monolayers under conditions where monolayers cannot form by EISA alone. Since our method is potentially applicable to other types of nanocrystals, we believe that the present work will open a route to constructing highly ordered and large-scale arrays of various nanocrystals.
9,154.6
2018-09-01T00:00:00.000
[ "Materials Science" ]
A lightweight data-driven spiking neuronal network model of Drosophila olfactory nervous system with dedicated hardware support

Data-driven spiking neuronal network (SNN) models enable in-silico analysis of the nervous system at the cellular and synaptic level. Therefore, they are a key tool for elucidating the information processing principles of the brain. While extensive research has focused on developing data-driven SNN models for mammalian brains, their complexity poses challenges in achieving precision. Network topology often relies on statistical inference, and the functions of specific brain regions and supporting neuronal activities remain unclear. Additionally, these models demand huge computing facilities, and their simulation speed is considerably slower than real-time. Here, we propose a lightweight data-driven SNN model that strikes a balance between simplicity and reproducibility. The model is built using a qualitative modeling approach that can reproduce key dynamics of neuronal activity. We target the Drosophila olfactory nervous system, extracting its network topology from connectome data. The model was successfully implemented on a small entry-level field-programmable gate array and simulated the activity of the network in real time. In addition, the model reproduced olfactory associative learning, the primary function of the olfactory system, and the characteristic spiking activities of different neuron types. In sum, this paper proposes a method for building data-driven SNN models from biological data. Our approach reproduces the function and neuronal activities of the nervous system, is lightweight, and can be accelerated with dedicated hardware, making it scalable to large-scale networks. Therefore, it is expected to play an important role in elucidating the brain's information processing at the cellular and synaptic level through an analysis-by-construction approach. In addition, it may be applicable to edge artificial intelligence systems in the future.

Introduction

Elucidating the mechanisms underlying information processing in the brain represents a great challenge in neuroscience. In parallel to collecting data with experiments, building brain models has proven to be a powerful approach to enable in-silico analysis and provide a framework for understanding information processing in the brain. Macroscopic models (Kawato, 1999; Frank et al., 2001; Norman and O'Reilly, 2003; Walther and Koch, 2006) describe information flow at the functional level and present an overview of neural processing. In contrast, spiking neuronal network (SNN) models emulate the brain at the cellular and synaptic level and provide in-silico counterparts that are more tractable and easier to manipulate. From an engineering perspective, properly built SNN models are expected to be capable of intelligent information processing equivalent to the brain. Silicon neuronal network (SiNN) chips, which are highly power-efficient neuromorphic hardware optimized for SNN models, have already been developed (Merolla et al., 2014; Qiao et al., 2015; Davies et al., 2018). Therefore, they have great potential for next-generation artificial intelligence (AI) applications.
The structure of the brain is highly diverse, which makes it demanding to capture comprehensible rules about the network topology. In addition, a wide variety of neuronal and synaptic properties has been reported. The data-driven approach intends to replicate the brain by semi-automatically incorporating vast amounts of anatomical and physiological data. Several large-scale data-driven SNN models (Markram et al., 2015; Bezaire et al., 2016; Ecker et al., 2020) that reproduce parts of the mammalian cortex and hippocampus have been built. They were designed to replicate the network topology, neuronal anatomy and electrophysiology, and synaptic properties, and they successfully reproduced the characteristic spiking activities seen in the target regions. However, in mammalian brains, the considerable number of neurons makes it challenging to measure the exact connection topology between the neurons. Hence, the network topology was inferred based on statistical data. In addition, because each brain region closely interacts with various other brain regions, it is not trivial to understand the specific function of the target region. Generally, data-driven models employ ionic-conductance-based neuronal models, which can reproduce arbitrary electrophysiological properties but incur enormous computational costs. For example, the model in Bezaire et al. (2016) runs on a supercomputer consisting of 3,488 processors, and its simulation speed is 1,600 times slower than real-time. Moreover, these models are not suitable for implementation on SiNNs because they involve complex calculation processes that require enormous circuit resources.

In this study, we built a data-driven SNN model of the olfactory nervous system of Drosophila melanogaster (fruit fly). The system is a relatively small (∼2,200 neurons) network with a known function, whose complete network topology, or connectome, is available. The electrophysiological activity of neurons was reproduced by using the piecewise quadratic neuron (PQN) model, which is a lightweight neuron model suitable for digital arithmetic circuit implementations (Nanami and Kohno, 2016a,b, 2023; Nanami et al., 2016, 2017, 2018).
The PQN model was adopted to reduce the computational cost and enable the SNN model to be run on a SiNN chip. It focuses on reproducing the key dynamics behind neuronal activities with lightweight calculations. The model is designed using the dimension-reduction techniques of nonlinear dynamics such that the dynamical structure of the activity of the target neuron is preserved. Unlike integrate-and-fire (I&F) based models, such as the leaky I&F model, the Izhikevich (IZH) model (Izhikevich, 2003), and the adaptive exponential I&F (AdEx) model (Brette and Gerstner, 2005), the dynamics within the neuronal spike are not replaced by a resetting of the membrane potential. I&F-based models are generally more lightweight than the PQN model. However, they have been reported to have limitations in the reproducibility of neuronal activities. For example, because their spike amplitudes are fixed, they cannot reproduce the propagation of spike intensity observed in some brain regions including the hippocampus (Alle and Geiger, 2006). In addition, the IZH model can only reproduce spiking within a limited range of stimulus intensities (Nanami and Kohno, 2016b). Furthermore, the phase-resetting curve of the Class II mode in Hodgkin's classification (Hodgkin, 1948) of the IZH and AdEx models differs from the typical shape (Nanami and Kohno, 2023). In addition to the aforementioned advantages, the PQN model supports efficient implementation on digital arithmetic circuits. Thus, the SNN model can be executed efficiently (in power and speed) with a SiNN on field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The results in this study were obtained using a SiNN on an entry-level low-cost FPGA chip to demonstrate its potential for low-power brain-morphic artificial intelligence (AI) applications.

In recent years, brain-inspired AI has become popular, where spike-based machine learning (Yang and Chen, 2023a,b; Yang et al., 2023a,b) is studied mainly using I&F-based models. These studies built massively parallel information processing systems inspired by the brain's structure to enable advanced and robust information processing with low power consumption. In contrast, here we aim to provide an in-silico platform that more faithfully reproduces neuronal connectivity and information processing in brain microcircuits, which is distinct from the objective of brain-inspired AI.
The fruit fly brain comprises approximately 100,000 neurons. Moreover, its connectome was recently revealed (Scheffer et al., 2020). It is compact compared to the mammalian brain but capable of complex information processing. Its olfactory nervous system consists of brain regions including the antennal lobe and the mushroom body, the anatomy and physiology of which have been widely studied (Wilson, 2013; Modi et al., 2020). The function and activity of each type of neuron in these regions are better characterized in the context of sensory input and behavioral output than those of the mammalian cortex and hippocampus, enabling us to adequately verify the reproducibility of the model. However, previous modeling studies of the olfactory nervous system (Wessnitzer et al., 2012; Faghihi et al., 2017; Kennedy, 2019), which were not data-driven, used simplified I&F-based neuron models that did not fully reproduce the electrophysiological properties of each type of neuron. Specifically, they did not reproduce the characteristic spiking activities seen in the olfactory nervous system, including (1) odor-evoked oscillatory firing in the projection neurons (PNs) and local neurons (LNs) (Tanaka et al., 2009), (2) the absence of oscillations in Kenyon cells (KCs) (Turner et al., 2008), (3) the different contributions of LN subclasses to the formation of oscillations (Tanaka et al., 2009), and (4) the temporal dynamics of firing in mushroom body output neurons (MBONs) (Hige et al., 2015). Thus, it is uncertain whether they accurately capture the information processing mechanisms in the olfactory nervous system. More sophisticated, ionic-conductance-based SNN models of the insect brain (Bazhenov et al., 2001a,b) had been built for the antennal lobe of the locust. However, they were not data-driven and did not reproduce most of the aforementioned characteristics of spiking activities. This is likely because they modeled only PNs and LNs, and also lacked electrophysiological data on identified neurons. Here we built a model of the fly olfactory system incorporating the connectome data as well as the neuronal and synaptic electrophysiological properties of its neurons. Our model successfully reproduced not only the aforementioned characteristic spiking activities (1)-(4) of the constituent cells, but also olfactory associative plasticity, the primary function of the olfactory system. Although we did not intend to implement every single known neuron or connection in our model, this study lays a foundation for building lightweight data-driven SNN models and is expected to aid in understanding the brain and developing brain-morphic AI systems.

Methods
Network model

This section provides an overview of the proposed network model. The model comprises the antenna, the antennal lobe, and the mushroom body (Figure 1). The antenna contains olfactory receptor neurons (ORNs), and the antennal lobe contains LNs and PNs (Stocker et al., 1990). KCs, an anterior paired lateral neuron (APL), and MBONs are present in the mushroom body (Aso et al., 2014a). Two MBONs, MBON-α1 and MBON-α3, project to the SMP354 neuron, whose excitatory activity can trigger a series of olfactory approach behaviors including upwind steering and locomotion (Aso et al., 2023). ORNs, PNs, KCs, and MBON-α3 are cholinergic (Yasuyama and Salvaterra, 1999; Kazama and Wilson, 2008; Tanaka et al., 2008; Barnstedt et al., 2016) and form excitatory synapses. Some LNs are cholinergic (Shang et al., 2007) or glutamatergic (Das et al., 2011) and are considered as sources of excitatory or inhibitory input (Olsen et al., 2007; Shang et al., 2007). However, most LNs are GABAergic (Python and Stocker, 2002; Wilson and Laurent, 2005) and have been shown to provide inhibitory input (Olsen and Wilson, 2008; Root et al., 2008). Thus, in this model, all LNs were set as inhibitory. APL is GABAergic (Tanaka et al., 2008) and inhibitory, whereas MBON-α1 is glutamatergic (Aso et al., 2014a) and inhibitory. The numbers in Figure 1 represent the numbers of neurons implemented in the model. Synaptic connections were determined using the connectome database hemibrain v1.2.1 (HEM, 2020; Scheffer et al., 2020). Since this model targets the olfactory nervous system of the right hemisphere, we first obtained the number of neurons of each type in the right hemisphere from the hemibrain server using the NC function of the neuprint-python library. We then determined the connections between neurons using the fetch_neurons function of neuprint-python, which returns the number of synaptic connections between neurons. Connections with more than ten synapses were assumed to have sufficient strength, and their weight w was set to 1; otherwise, w was set to 0. Note that the connections of LNs were determined based on a previous study (Seki et al., 2010).

A variety of LN subclasses have been reported (Chou et al., 2010; Seki et al., 2010); Seki et al. (2010) identified four subclasses, each with different spiking properties. However, the connectome database (HEM, 2020; Scheffer et al., 2020) does not describe which subclass each LN belongs to. Therefore, the connections of LNs were determined based on Seki et al. (2010), where the probabilities that each LN subclass has a connection to a certain glomerulus were shown. In the antennal lobe, glomeruli are neuropils comprising the axons and dendrites of PNs, LNs, and ORNs. ORNs and PNs are generally connected to only one glomerulus. We first determined the subclasses to which the 191 LNs belong. As the proportion of each subclass is unknown, we set the number of NP2426_class1 LNs to 47 and that of each of the remaining subclasses to 48 to ensure that the distribution of subclasses was as even as possible. Next, for each LN, we randomly determined whether it innervates each glomerulus according to the innervation probabilities shown in Seki et al. (2010). If an ORN/PN and an LN innervate the same glomerulus, the ORN/PN was assumed to make a synaptic connection onto the LN, and the synaptic weight w was set to 1. For example, LNs of NP1227_class1 connect to glomerulus DA1 with a probability of 75% (Seki et al., 2010); based on this probability, we determined whether each NP1227_class1 LN connects to glomerulus DA1.
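The two connectivity rules described above (binarizing the hemibrain synapse counts at a ten-synapse threshold, and drawing LN-glomerulus connections from the innervation probabilities of Seki et al. 2010) can be expressed compactly. The sketch below is illustrative, not the authors' code: it assumes the pairwise synapse counts have already been fetched with neuprint-python into a pandas DataFrame `conn` with columns bodyId_pre, bodyId_post, and weight, and that `p_innervation` maps (LN subclass, glomerulus) pairs to probabilities.

```python
# Minimal sketch of the connectivity rules described in the text; names are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def binarize_connectome(conn: pd.DataFrame, min_synapses: int = 10) -> pd.DataFrame:
    """Keep a connection (w = 1) only if it has more than `min_synapses` synapses."""
    out = conn.copy()
    out["w"] = (out["weight"] > min_synapses).astype(int)
    return out[out["w"] == 1]

def ln_glomerulus_connections(ln_subclass: str, glomeruli: list[str],
                              p_innervation: dict[tuple[str, str], float]) -> set[str]:
    """Draw the set of glomeruli that one LN of a given subclass innervates."""
    return {g for g in glomeruli
            if rng.random() < p_innervation.get((ln_subclass, g), 0.0)}
```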
Each ORN expresses one of the olfactory receptors, each of which has different odor selectivity. A previous study (Münch and Galizia, 2016) described a correspondence table between glomerulus and olfactory receptor (OR) types. We used this table and the glomerulus type listed for each ORN in the connectome database to determine the OR type for each ORN. When multiple OR types were assigned to a single glomerulus type, one OR was randomly selected.

Odors are first detected by ORNs on the antenna. ORNs express one of the ORs, each possessing different odor selectivity. ORNs extend their axons to the antennal lobe and project to PNs and LNs. As the firing activities of ORNs are the input data, they are not modeled. Figure 2A shows part of the connection structure from ORNs to PNs in the model. The black squares represent the presence of connections. On average, each ORN projects to 1.6 PNs, and each PN receives input from 24.0 ORNs. ORNs and PNs were sorted based on their glomerulus type, the borders of which are represented by blue lines. ORNs generally project to all the PNs in the same glomerulus type (Kazama and Wilson, 2009). This convergent projection is considered (Bhandawat et al., 2007) to enable PNs to produce reliable output by averaging the input from a large number of ORNs, whereas the responses of ORNs to odors are noisy and unreliable (Stocker et al., 1990). LNs receive inputs from a wide range of ORNs and extensively inhibit PNs and LNs. On average, each LN receives input from 1337.4 ORNs and inhibits 90.9 PNs. LNs are considered to contribute to the gain control of the input from ORNs (Olsen and Wilson, 2008) and to the generation of oscillations in the antennal lobe (Tanaka et al., 2009).

PNs extend their axons to the entrance of the mushroom body, where they provide excitatory input to KCs. On average, each PN projects to 67.6 KCs, and each KC receives input from 4.2 PNs. Figure 2B shows part of the connection structure from PNs to KCs in the model. In contrast to the connections between ORNs and PNs, there is no regularity in the connections between PNs and KCs, which was confirmed in a previous study (Caron et al., 2013). APL receives input from almost all KCs and PNs and returns inhibitory feedback to KCs and MBONs.
There are 28 MBONs, or 44 including atypical MBONs (Li et al., 2020), at least some of which signal either positive or negative valence (Aso et al., 2014b). While how MBON signals are further processed by the downstream circuits to determine the behavioral output is still largely elusive, the connectome study discovered that postsynaptic neurons of the MBONs typically receive synaptic input from more than one type of MBON (Li et al., 2020), suggesting that valence signals could be integrated by those neurons. A recent study characterized such a circuit motif experimentally (Aso et al., 2023). A cluster of 8-10 neurons named UpWind Neurons (UpWiNs) directly and indirectly integrates excitatory and inhibitory input from MBON-α3 and MBON-α1, respectively. Direct optogenetic activation of MBON-α3 induces upwind locomotion, which can be interpreted as an olfactory approach behavior (Matheson et al., 2021), while activation of MBON-α1 does not induce such behavior (Aso et al., 2023). Experiments using compartment-specific optogenetic activation of dopaminergic neurons demonstrated that α3 and α1 are an aversive- and an appetitive-memory compartment, respectively (Aso and Rubin, 2016). Moreover, optogenetic activation of UpWiNs triggers robust upwind locomotion (Aso et al., 2023). Thus, the UpWiN cluster is one of the sites where signals of opposite memory valence are integrated and translated into olfactory navigation behavior. Since the neurons in the UpWiN cluster are heterogeneous in their anatomy and connectivity, in our model we focused on one of these neurons, SMP354 (bodyId 390003153 in hemibrain), which receives direct synaptic input from both MBON-α3 and MBON-α1. Because there is no specific genetic driver to label this particular neuron, we were unable to use experimentally determined electrophysiological parameters for it. Although our model is simplified in terms of the readout mechanism of the mushroom body signals, we believe that the SMP354 circuit represents one of the common motifs that interpret the population signals of MBONs.
If a reward or punishment is given to a fly together with an odor, the fly will learn to approach or avoid that odor thereafter (Tully and Quinn, 1985). Multiple studies (Cohn et al., 2015; Hige et al., 2015; Owald et al., 2015) indicate that this olfactory associative learning is caused by long-term depression of KC>MBON synapses. In the case of the circuit associated with UpWiNs, it has been experimentally demonstrated that induction of plasticity in α1, which mimics appetitive conditioning, depresses the olfactory responses of MBON-α1. This in turn potentiates the responses of UpWiNs, whose naïve odor responses are typically weak (Aso et al., 2023). In our model, when an odor is given, excitatory signals reach the KCs via ORNs and PNs, and the KCs generate spikes. Each individual odor elicits spikes in a small, distinct population of KCs. A reward stimulus given following the odor weakens the weights of the KC>MBON-α1 synapses whose presynaptic KCs had been firing within 5 seconds prior to the stimulus. Although the magnitude of the decrease in synaptic weight after a single learning event is not clear, we set the initial value of w to 1 and the weakened value to 0.25. Reward stimuli are transmitted to KC>MBON synapses through dopaminergic neurons innervating the mushroom body (Aso et al., 2014b); however, this pathway was not modeled in this study. Whereas MBON-α1 fires in response to all odors before learning, after learning it reduces its responsiveness only to the learned odor, because the synaptic connections from the KCs representing the learned odor are selectively weakened. Since MBON-α1 is inhibitory, the activity of SMP354, which receives input from MBON-α1, is disinhibited, and SMP354 thus fires only in response to the learned odor. This activity of SMP354 represents the output of the network.
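The reward-gated plasticity described above reduces the weights of KC>MBON-α1 synapses whose presynaptic KCs fired within the preceding 5 seconds from 1 to 0.25. A minimal sketch of this rule is shown below; the array names and the way KC activity is tracked (last spike times) are illustrative assumptions, not taken from the model code.

```python
# Minimal sketch of the reward-gated depression of KC>MBON-α1 synapses.
import numpy as np

W_INIT, W_DEPRESSED, WINDOW_S = 1.0, 0.25, 5.0

def apply_reward(w_kc_mbon_a1: np.ndarray,       # shape (n_kc,), one weight per KC
                 last_spike_time_s: np.ndarray,  # last spike time of each KC, -inf if silent
                 t_reward_s: float) -> np.ndarray:
    recently_active = (t_reward_s - last_spike_time_s) <= WINDOW_S
    w_new = w_kc_mbon_a1.copy()
    w_new[recently_active] = W_DEPRESSED         # LTD only for KCs active before the reward
    return w_new

# Example: a reward at t = 304 s after 3-octanol depresses only the KCs that spiked in [299, 304] s.
```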
We prepared electrophysiological data for each neuron and tuned the PQN models to replicate them. For LNs and KCs, data recorded in previous studies (Seki et al., 2010; Inada et al., 2017) were used. The detailed procedures for the data acquisition from PNs and MBONs are described in the Methods section. Owing to the lack of data on the APL and SMP354 neurons, only the modeling results are shown. Since we did not have data on MBON-α3, we used the data on MBON-α1. The PQN model was used to model the neurons. The parameter sets of the PQN model are shown in Supplementary Tables S1-S9. Figure 3 illustrates the responses of the somatic membrane potentials in vivo (red) and those of the PQN models on the FPGA (blue). The black plots are the step input currents, whose unit in the recordings is pA. The FPGA simulation results have no physical unit. Although a variety of LN subclasses have been observed (Chou et al., 2010; Seki et al., 2010), we employed the four electrophysiologically identified subclasses reported in Seki et al. (2010), namely Krasavietz_class1, Krasavietz_class2, NP1227_class1, and NP2426_class1, and fitted PQN models to each of them. The parameters of the PQN model were automatically determined using a fitting method (Nanami et al., 2017, 2018) based on the differential evolution algorithm (Storn and Price, 1997). Detailed activities of each neuron are shown in Supplementary Figures S1-S7.

Electrophysiological measurements

Recording from PNs

Whole-cell patch-clamp recordings from PN somata were performed as previously described (Inada et al., 2017). Briefly, the brain of w;UAS-ReaChR::Citrine(attP40)/+;VT033006-Gal4(attP2)/+ female flies (von Philipsborn et al., 2011; Inagaki et al., 2013), 3 days post eclosion, was removed from the head capsule and fixed on a glass slide with surgical glue (GLUture, Abbott). Part of the perineural sheath covering the antennal lobe was removed to obtain access to the cell bodies. The external saline added on top of the plate was circulated throughout the experiment. A patch pipette was pulled from a thin-wall glass capillary (1.5 mm o.d./1.12 mm i.d., TW150F-3, World Precision Instruments). The resistance of the pipette was typically 8-10 MΩ. The internal solution contained (in mM) 140 KOH, 140 aspartic acid, 10 HEPES, 1 EGTA, 4 MgATP, 0.5 Na3GTP, 1 KCl, and 13 biocytin hydrazide (pH ∼7.2, osmolarity adjusted to ∼265 mOsm). Electrophysiological recordings were made with a Multiclamp 700B amplifier (Molecular Devices) equipped with a CV-7B headstage. Signals were low-pass filtered at 2 kHz and digitized at 10 kHz. Multiple levels of depolarizing currents were injected into the soma of individual PNs to examine the relationship between the input current and the membrane potential or spike output. PNs were identified based on the signals from Citrine as well as the biocytin included in the internal solution.

Figure 3 (caption). Electrophysiological properties of somatic membrane potentials of the in vivo data (red) and the simulated results of the PQN models in silico (blue) in response to step stimulus inputs (black). Recordings from PNs and MBONs were conducted in this study. The data for the KC and the four subclasses of LNs are from previous studies (Seki et al., 2010; Inada et al., 2017). As there are no recorded data for the APL and SMP neurons, only the simulation results are shown.

ORN input data

The input data were generated using the DoOR dataset (Münch and Galizia, 2016), which comprehensively reports the response properties of the ORs of Drosophila. The dataset shows the response intensities of each OR for a wide variety of odorants. Given a certain odor, the firing frequency r of an ORN that expresses a certain OR is given by Equation (1):

r = c r_ij k_j + r_spo,   (1)

where r_ij is the response intensity of the ith OR to the jth odorant, and its value ranges from 0 to 1. r_spo represents the spontaneous firing frequency, which was set to 8 from the average value examined in de Bruyne et al. (1999). k_j is a constant that abstractly refers to the concentration of the jth odorant; its values range from 0 to 1 and are listed in Supplementary Table S10. As ORNs fire at approximately 200 Hz in response to the most favorable odorants (Hallem and Carlson, 2006), the parameter c was set to 192, such that the maximum firing frequency r is 200 when r_ij and k_j are 1. Based on the Poisson process, each ORN generates a spike with probability r dt at every time step, where the time step dt is 1 ms. In the input dataset, six odorants were applied sequentially for one second every five seconds. The synaptic currents from ORNs were calculated using Equations (2, 3), where x represents the spiking information of an ORN: x is 1 when a spike is emitted in the current time step by the ORN and 0 otherwise. ORNs are cholinergic (Kazama and Wilson, 2008), and their β was set to 203.125, as for the other synapses.
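The ORN input generation can be summarized in a few lines. The sketch below assumes Equation (1) has the linear form r = c·r_ij·k_j + r_spo, which is consistent with the stated constants (c = 192, r_spo = 8, maximum 200 Hz), and draws spikes independently in each 1 ms bin with probability r·dt; variable names are illustrative.

```python
# Minimal sketch of the Poisson ORN spike generation described above.
import numpy as np

C, R_SPO, DT = 192.0, 8.0, 1e-3          # rate scaling (Hz), spontaneous rate (Hz), 1 ms step
rng = np.random.default_rng(1)

def orn_spike_train(door_response: float, k_odor: float, duration_s: float) -> np.ndarray:
    rate_hz = C * door_response * k_odor + R_SPO              # Equation (1), as reconstructed above
    n_steps = int(round(duration_s / DT))
    return (rng.random(n_steps) < rate_hz * DT).astype(int)   # one Bernoulli draw per 1 ms bin

spikes = orn_spike_train(door_response=0.8, k_odor=1.0, duration_s=1.0)
print(spikes.sum(), "spikes in 1 s (expected rate ~", C * 0.8 + R_SPO, "Hz)")
```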
We prepared three types of input data for the in-silico experiments. In the first type of input data, one of the six odorants, 3-octanol, cis-3-hexenol, cyclohexanone, 2,3-butanedione, 2-hexanol, and ethyl butyrate, was applied in turn for 1 second every 5 seconds. In the second type of data, 3-octanol was applied for ten seconds every twenty seconds. In the third type of data, the same six odorants as in the first type were applied in turn for ten seconds every twenty seconds. In all in-silico experiments, the first type of data was initially given for 300 seconds, during which time the PNs' homeostasis was adjusted (details are described in Supplementary Note 1). Subsequently, the first type of data was continuously provided, and the experiments on associative learning and the activity of MBON-α1 were conducted. In contrast, in the experiments on the oscillations in the antennal lobe, the second type of data was applied following the 300-second homeostatic period. The third type of data was only used in the experiment showing the variations in oscillations for each odor (Supplementary Note 2).

PQN model

The piecewise quadratic neuron (PQN) model (Nanami and Kohno, 2016a,b, 2023; Nanami et al., 2016, 2017, 2018) is a qualitative neuron model designed to replicate a wide variety of neurons in the nervous system and to be efficiently implemented on digital arithmetic circuits. Compared with other qualitative models (FitzHugh, 1961; Nagumo et al., 1962; Hindmarsh and Rose, 1984), the PQN model possesses additional parameters, enabling it to represent more functional forms and reproduce a variety of neurons, each with its unique dynamical structure. In addition, although other qualitative models have cubed variable terms, which consume a vast amount of circuit resources in digital arithmetic circuits, the PQN model uses piecewise functions composed of a squared term to represent comparable dynamics and consumes few circuit resources.

The nervous system of Drosophila primarily comprises unipolar neurons, the soma of which is separated from the rest of the cell by a long and thin membrane. In patch-clamp recordings from the soma, only action potentials with extremely small amplitudes are observed. This is attributed to the fact that the action potentials are generated in the axon and propagate with decay to the cell body (Gouwens and Wilson, 2009). Therefore, we modeled PNs, KCs, and MBONs using two-compartment models; one compartment corresponds to the soma, and the other contains the axons and dendrites. In contrast, sufficiently large action potentials were observed in the soma of LNs (Seki et al., 2010); therefore, they were modeled using single-compartment models. APL is an inhibitory, non-spiking neuron whose axons extend to the whole mushroom body. It was reported (Inada et al., 2017; Amin et al., 2020) that APL performs local inhibition; however, the details are not clear. Therefore, in this study, we modeled APL as a simple non-spiking neuron with a single compartment.

The equations of the PQN model in the single-compartment version for LNs, APL, and SMP354 are given by Equations (4-16).
Here, v, n, and q correspond to the membrane potential, the recovery variable, and the slow variable, respectively. In the two-compartment version used for PNs, KCs, and MBONs, v and n are the membrane potential and the recovery variable of the axonal compartment, and v_s is the membrane potential of the somatic compartment. Parameters θ, α, and I_b1 are the time constant, bias constant, and leakage constant, respectively. I_c represents the internal current that flows from the axonal compartment to the somatic compartment, and k_0 is its kinetic parameter. When synaptic currents are given to I, the current injected into the soma (Figure 2) is given to I_r, and k_r is its scaling parameter.

In PNs, homeostatic control of synaptic efficacy has been indicated (Kazama and Wilson, 2008). Although various types of homeostatic mechanisms are found in neurons, we modified the equation for PNs based on the mechanism of synaptic scaling proposed in a previous study (Turrigiano, 1999), where the weights of synaptic connections are gradually scaled according to the activity level of the postsynaptic neuron. The equations used are Equations (21, 22), where F and F_t represent the neuron's current firing frequency and target firing frequency, respectively. The equation of I_c and the differential equations of n and v_s are the same as those for KCs and MBONs (Equations 18, 20). The parameter κ determines the time constant. Note that the value of u is fixed between 0 and 1.

The synaptic current is calculated as Equation (23), where s denotes the synaptic current and the parameters α and β determine the time constants. This synaptic model is a qualitative version of the simplified kinetic model of chemical synapses (Li et al., 2012). Although the dynamics of each synapse are unclear, the decay constants of cholinergic synapses from PNs to KCs and of GABAergic synapses in cultured embryonic neurons have been investigated (Lee et al., 2003; Gu and O'Dowd, 2006) and are both approximately 5 ms. Therefore, the value of β was set to 203.125 so that the decay time constant is close to 5 ms. The value of α was chosen to be 250 so that a single spike results in a synaptic current amplitude of approximately 1.

The synaptic current I of the i-th neuron is calculated as Equation (24):

I_i = Σ_{j=1}^{N} p_x_y w_ji s_j,   (24)

where j represents the index of a presynaptic neuron, w_ji is the weight of the synaptic connection from the j-th neuron to the i-th neuron, and N is the total number of neurons. x and y indicate the classes of the presynaptic and postsynaptic neurons, respectively, and the parameter p_x_y scales the synaptic current. As the extent to which a single spike of each class of neurons affects the membrane potential of different classes of postsynaptic neurons is not clearly known, the values of p_x_y were manually fitted such that the simulation results reproduce the experimental results as closely as possible. Here, the four LN subclasses share the same p_x_y value.

First, the values of p_x_y where y is a PN or LN were set to reproduce the characteristics of the oscillations observed in the antennal lobe in vivo. Next, the values of p_x_y where y is a KC, APL, or MBON-α1 were determined to make the responses of MBON-α1 as consistent as possible with the in vivo data. Finally, the values of p_x_y where y is MBON-α3 or the SMP354 neuron were set such that the success rate of olfactory associative learning becomes as high as possible. Note that p_x_y is positive or negative when x is an excitatory or inhibitory neuron, respectively. All the parameter sets of neurons and synapses are listed in Supplementary Tables S1-S9.
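As a small illustration of the synaptic input computation of Equation (24) described above, the total current delivered to each postsynaptic neuron is a weighted sum of the presynaptic synaptic-current variables. The dense-matrix formulation below is an assumption made for brevity (the model's actual connectivity is sparse), and the array names are illustrative.

```python
# Minimal sketch of Equation (24): I_i = sum_j p_x_y * w_ji * s_j.
import numpy as np

def total_synaptic_current(s: np.ndarray,   # (N,) synaptic current of each presynaptic neuron
                           w: np.ndarray,   # (N, N) binary connection matrix, entry [j, i] = w_ji
                           p: np.ndarray    # (N, N) class-pair scaling p_x_y expanded per neuron pair
                           ) -> np.ndarray:
    return (p * w * s[:, None]).sum(axis=0)  # sum over presynaptic index j for every postsynaptic i
```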
FPGA implementation of the PQN model

In the FPGA implementation, the PQN model is simulated by the PQN engine. As an example, the details of the PQN engine for the PN mode are shown in Figure 4. Figure 4A shows the information flow of the PQN engine. The PQN engine updates the internal state variables and synaptic currents of the 121 individual PNs in turn at each time step. The internal variables (v, n, v_s, u, and F), input currents I, and synaptic currents s are sent via the PQN controller from the block RAMs named PQN internal variables, I5, and s5 shown in Figure 6, respectively. The next-step values of the internal variables and synaptic currents computed by the PQN engine are returned to the block RAMs and stored. Figure 4B shows a block diagram of the PQN engine for the PN mode. The symbols ×, +, and M in the figure represent multipliers, adders, and multiplexers, respectively. Each state variable is computed in four pipelined stages. In the first stage, the square of v and the product of u and I are calculated using two multipliers. The second stage involves the multiplication of the variables and coefficients determined from the parameters; v_x, s_S, and s_L represent the results of these calculations, where x is vv_S, vv_L, v_S, v_L, n, v_s, or I. For example, the calculation of v_vv_S is performed by multiplying the square of v by 0.021484375, whose binary representation is 0.000001011. Therefore, it is computed as the sum of the sixth, eighth, and ninth right-shift operations on the square of v (Figure 4C). In the third stage, the values of v, n, v_s, and u are calculated. In the fourth stage, the values of s and F are determined based on the new value of v. When the old value of v is negative and the new value is zero or greater, the spike detector detects a spike. The values of v, n, v_s, and F are updated every 1 ms, whereas the value of u is updated only once per second. The current firing frequency is calculated from the number of spikes counted in one second. All state variables are expressed in an 18-bit fixed-point representation, of which 10 bits are the decimal part and the remaining bits are the integer part.
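The shift-and-add trick mentioned for the coefficient 0.021484375 can be checked directly: 0.021484375 = 2⁻⁶ + 2⁻⁸ + 2⁻⁹, so with 10 fractional bits the multiplication reduces to three right-shifts and two additions. The snippet below is only a software illustration of that identity, not the HDL used for the PQN engine.

```python
# Minimal sketch: constant multiplication by 0.021484375 as shift-and-add in Q8.10 fixed point.
def mul_by_0p021484375(x_q10: int) -> int:
    """x_q10 is an 18-bit signed value with 10 fractional bits (Q8.10)."""
    return (x_q10 >> 6) + (x_q10 >> 8) + (x_q10 >> 9)   # 2^-6 + 2^-8 + 2^-9 = 0.021484375

v = int(round(3.5 * 2**10))                  # 3.5 encoded in Q8.10
approx = mul_by_0p021484375(v) / 2**10
print(approx, 3.5 * 0.021484375)             # both give 0.0751953125 for this input
```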
Figure 5B shows the activities of the MBONs and the SMP354 neuron before and after olfactory associative learning in silico (FPGA). The application of 3-octanol was followed by a reward signal at t = 304. This resulted in LTD at the KC>MBON-α1 synapses whose presynaptic neurons had fired in the previous five seconds. Subsequently, MBON-α1 became selectively unresponsive to 3-octanol, whereas MBON-α3 remained responsive to all odorants. Consequently, the SMP354 neuron, which receives excitatory input from MBON-α3 and inhibitory input from MBON-α1, fires only when 3-octanol is applied.

Figure 5C shows the success rates of olfactory associative learning for the individual odors. Each set of experiments comprised one associative learning session and ten trials. In each trial, all six odors were applied sequentially in a unique order. A trial was considered successful when the SMP354 neuron responded solely to the learned odor. Ten sets of experiments, comprising 100 trials in total, were conducted for each odor, and the probabilities of success were calculated. The model achieved an average success rate of 84.0%. The variation in the results of each trial originates from the variable input spike streams from the ORNs.

Oscillations in the antennal lobe

In order to test whether our model is applicable to known activity dynamics of the Drosophila olfactory system other than the plastic changes induced by learning, we next focused on neuronal oscillations. Neuronal oscillations are widely observed in the olfactory nervous systems of insects and are believed to be important in odor information processing (Stopfer et al., 1997; Perez-Orive et al., 2002). Oscillations have also been reported in the PNs of Drosophila (Tanaka et al., 2009); they are absent without odors or when LNs are inactivated. A similar oscillatory behavior was observed in our model. Whereas Tanaka et al. (2009) measured the local field potential (LFP) caused by the synaptic currents of PNs, we calculated a virtual LFP by averaging the synaptic currents for each type of neuron. Figure 6A shows the virtual LFPs of PNs, LNs, and KCs when 3-octanol was applied; clear oscillations can be seen in PNs and LNs. The peak amplitudes of their frequency spectra were estimated to clarify their oscillatory nature. Figure 6B shows the power spectra (details are explained in Supplementary Note 3), which peak at approximately 20-30 Hz. Here, the odor was applied for ten seconds. Following this, we applied 3-octanol twenty times and plotted the averaged values of the peak power on a logarithmic scale (Figure 6C). The peak power of the LNs was considerably higher than that of the PNs, whereas the peak power of the KCs was much smaller than that of the PNs, which is consistent with Turner et al. (2008), who reported no clear oscillations in the membrane potentials of Drosophila KCs. In addition, the peak power in the absence of odor was much lower than under normal conditions, which is consistent with the results of Tanaka et al. (2009).
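As a concrete illustration of the virtual-LFP analysis described above, the following sketch averages the synaptic currents of one neuron type and reads off the peak of the power spectrum; the 1 kHz sampling rate follows from the model's 1 ms step, while the array layout and peak-picking details are our own assumptions (the paper's procedure is in Supplementary Note 3).

```python
import numpy as np

FS = 1000.0  # Hz, assuming one sample per 1 ms model step

def virtual_lfp(syn_currents: np.ndarray) -> np.ndarray:
    """Average the synaptic currents over neurons of one type.
    syn_currents has shape (n_neurons, n_samples)."""
    return syn_currents.mean(axis=0)

def peak_power(lfp: np.ndarray) -> tuple[float, float]:
    """Return (peak frequency in Hz, peak power), ignoring the DC bin."""
    spectrum = np.abs(np.fft.rfft(lfp - lfp.mean())) ** 2
    freqs = np.fft.rfftfreq(lfp.size, d=1.0 / FS)
    k = 1 + int(np.argmax(spectrum[1:]))
    return float(freqs[k]), float(spectrum[k])
```

For a ten-second odor application, a PN trace processed this way should show its peak in the 20-30 Hz band reported above.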
The previous study (Tanaka et al., 2009) selectively inactivated the synaptic output of NP1227_class1 and NP2426_class1 LNs in turn and reported that the oscillations of PNs were attenuated only when NP2426_class1 was inactivated. We inactivated each subclass of LNs in turn and plotted the averages of the peak power spectra of the PN oscillations (Figure 6D). Here, inactivation of LNs was performed by forcing the stimulus input to the LNs to zero, and 3-octanol was applied five times for each condition. The peak power was significantly attenuated when NP2426_class1, but not NP1227_class1, was inactivated, which is consistent with the experimental results of Tanaka et al. (2009). The second and third largest attenuations were observed following the inactivation of Krasavietz_class1 and Krasavietz_class2, respectively. We also calculated the virtual LFP for each LN subclass under the normal condition. Figure 6E shows the average peak power when 3-octanol was applied twenty times. The peak power of NP2426_class1 was the largest, indicating that it is the primary source of oscillation among the LNs. The peak powers of Krasavietz_class1 and Krasavietz_class2 are the second and third largest, respectively. There are almost no oscillations in NP1227_class1, which could explain why its inactivation does not attenuate the oscillations in PNs.

Temporal dynamics of firing in MBON-α1

Figure 7A shows the responses of the somatic membrane potential of MBON-α1 in vivo (red) and in silico (blue) before and after olfactory associative learning. The solid plots represent the values of the somatic membrane potential, and the black dots above them represent the detected spike timings. The gray arrows indicate the onset of the 3-octanol input; 3-octanol was given for 1 second. We calculated the firing frequency for each 50 ms time window from the spikes and plotted the average firing-frequency transition over five trials as dotted curves. The procedures for detecting spikes and calculating their frequency are described in Supplementary Note 4. In vivo, the odor-evoked firing frequency of MBON-α1 is constantly high before learning, whereas it decreases rapidly after learning. We reproduced these characteristic temporal dynamics of firing in silico. The somatic membrane potential and synaptic current of APL are shown in Figure 7B to illustrate how the temporal dynamics arise in silico. While MBON-α1 fires immediately after odor onset, the membrane potential of APL reaches the threshold with a delay owing to its slow neuronal dynamics. This delayed inhibition from APL may contribute to suppressing the firing of MBON-α1 from approximately t = 0.7, together with the LTD at KC>MBON-α1 synapses. We tested this possibility in silico by examining the activity of MBON-α1 while inactivating APL (Figure 7C). Without APL, the odor-evoked firing frequency of MBON-α1 remains constantly high even after learning, indicating that APL is essential for the temporal activity pattern of MBON-α1.

FPGA implementation

Our qualitative modeling approach allowed us to implement the entire model on an entry-level FPGA (Xilinx Artix-7 XC7A35T on a Digilent Cmod A7 board) using Xilinx Vivado 2016.4.
Figure 8 presents an overview of the implementation. As the network has nine types of neurons, namely the four LN subclasses, PN, KC, APL, MBON, and the SMP354 neuron, we constructed nine PQN engines, one for each type. The weights of the synaptic connections, the input currents, the synaptic currents, and the neuronal internal state variables are stored in block RAMs. Spike signals of ORNs are generated by the PC and sent to the FPGA through a serial communication bus. The spike signals are composed of 11 bits representing the indices of the ORNs and are initially stored in the FIFO buffer; the synaptic currents of the ORNs are then calculated by the SC engine. The accumulators calculate the input currents for the neurons from the synaptic currents in parallel. The antennal lobe and the mushroom body are distant, and only PNs provide a one-way connection from the antennal lobe to the mushroom body. Therefore, we built three accumulator blocks, a, b, and c, consisting of seven, three, and one accumulator(s), which are responsible for the processing inside the antennal lobe, between the antennal lobe and the mushroom body, and inside the mushroom body, respectively. The weights w of the KC>MBON-α1 synapses are represented by two bits to realize the LTD, whereas all other synaptic weights are represented by one bit. The PQN controller activates each PQN engine in turn. Each PQN engine receives the current values of the internal variables, input currents, and synaptic currents of the corresponding type of neurons and returns the next-step values of the internal variables and synaptic currents. A reward signal is also transmitted via serial communication to the LTD unit, which triggers the LTD of the KC>MBON-α1 synapses. The LTD unit holds the indices of the KCs that have fired in the previous five seconds and, when the reward signal arrives, rewrites the weights w of the synapses made by those KCs onto MBON-α1 (a sketch of this mechanism follows below).

Figure 9A shows the resource consumption of this implementation. The look-up tables (LUTs) are truth tables that were used primarily for addition in this implementation. Digital signal processors (DSPs) are blocks for complex calculations that were used to multiply the state variables. Flip-flops (FFs) and block random-access memories (BRAMs) are memory elements. Most BRAMs store synaptic weights, whereas the rest store state variables. A mixed-mode clock manager (MMCM) was used to generate the 100 MHz clock. Figure 9B lists the on-chip power consumption of each resource as estimated by Vivado. The Static entry represents the steady-state leakage power of the device and is independent of the circuit design. The total power consumption is approximately 0.37 watts.
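A minimal sketch of the LTD unit referenced above is given below; the 5 s eligibility window and the 2-bit KC>MBON-α1 weights are taken from the text, whereas the depression step (a saturating decrement toward zero) is an assumption.

```python
import numpy as np

ELIGIBILITY = 5.0  # seconds; window quoted in the text

class LTDUnit:
    """Reward-gated depression of 2-bit KC>MBON-alpha1 weights."""

    def __init__(self, n_kc: int):
        self.w = np.full(n_kc, 3, dtype=np.uint8)   # 2-bit weights: 0..3
        self.last_spike = np.full(n_kc, -np.inf)    # last firing time per KC

    def record_spike(self, kc_index: int, t: float):
        self.last_spike[kc_index] = t

    def reward(self, t: float):
        # Depress synapses of KCs that fired within the last 5 seconds.
        eligible = (t - self.last_spike) <= ELIGIBILITY
        self.w[eligible] = np.maximum(self.w[eligible].astype(int) - 1, 0)
```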
Figure 8. Architecture overview. The rectangles, rounded rectangles, and arrows represent block RAMs, computation units, and data flow, respectively. The PQN engine x, where x ranges from 1 to 9, simulates the activities of each type of neuron. LN1, LN2, LN3, and LN4 correspond to Krasavietz_class1, Krasavietz_class2, NP1227_class1, and NP2426_class1, respectively. sx and Iy, where x and y range from 1 to 9, are the synaptic and input currents, respectively. w1, w2, and w3 store the weights of the synaptic connections. The PQN internal variables block RAM stores the neuronal internal state variables. Spike signals of ORNs are sent from the PC using serial communication; they are temporarily stored in the FIFO and subsequently converted to synaptic currents by the SC engine. The accumulator blocks a, b, and c comprise seven, three, and one accumulator(s), respectively, and each accumulator calculates the input currents from the synaptic currents in parallel. The PQN controller activates each PQN engine in turn to simulate the neurons. Each PQN engine receives the internal variables, input currents, and synaptic currents of the corresponding type of neurons and returns the next-step values of the internal variables and synaptic currents. The reward signal from the PC activates the LTD unit and reduces the weights of KC>MBON-α1 synapses, which are stored in part of the weight RAMs. The information required for each results section, such as the spike information of the PQN neurons and the values of the membrane potentials and synaptic currents, is selected, sent to the PC, and stored.

Discussion

In this study, we built the first data-driven SNN model of the olfactory nervous system of Drosophila melanogaster. Our modeling approach proposed a way to overcome the trade-off between replicating detailed biological data (the connectome and the electrophysiological activities of neurons) and the computational cost, such that the model can run in real time on a low-power SiNN chip while reproducing the characteristic neuronal activities of the brain. Table 1 compares this work with previous data-driven models (Markram et al., 2015; Bezaire et al., 2016; Ecker et al., 2020) that reproduced parts of the mammalian cortex and hippocampus. Specifically, our model went beyond the preceding models in four aspects: higher fidelity of (1) synaptic connectivity, (2) characteristic spiking activities, and (3) neuronal functions, and (4) a lower computational cost. Whereas the preceding models reproduced the electrophysiological and morphological properties of each type of neuron using multicompartmental ionic-conductance-based models, our model reproduced the electrophysiological properties using the PQN model, which requires a lower computational cost. In Markram et al. (2015) and Ecker et al. (2020), the Tsodyks-Markram (TM) synapse model (Tsodyks and Markram, 1997) with a stochastic mechanism was used to accurately reproduce synaptic physiology, whereas in Bezaire et al.
(2016), the double exponential synapse model reproduced the rising and decaying time constants of the synaptic current for each type of synaptic connection. In this study, the decay time constant of the synapse model was fitted to electrophysiological data for the corresponding type of neurotransmitter. In the preceding models, synaptic connections were randomly determined based on the position and morphology of individual neurons and statistical information for each neuron type. In this model, by contrast, they were based on the connectome (HEM, 2020; Scheffer et al., 2020) identified from comprehensive electron microscopy images. In the preceding models, the vast number of neurons and the complex structures of the mammalian brain limited the validation of the models.

Figure 9. Results of the FPGA implementation. (A) Utilization ratio for each type of resource. The numbers above the bars indicate the number of units used. (B) Power estimation for each type of resource.

In Markram et al. (2015) and Bezaire et al. (2016), synchronous oscillations at the network level were validated, but not for each type of neuron. Spiking activities were not examined in Ecker et al. (2020). Additionally, the preceding models did not reproduce the function of the network, as mammalian cortical and hippocampal functions at the circuit level have not yet been elucidated. In contrast, because the olfactory nervous system has a smaller network size and its function is clearer, we were able to demonstrate that our model successfully reproduces olfactory associative learning and the characteristic spiking activities of each type of neuron, such as odor-evoked oscillatory firing in PNs and LNs, the absence of oscillations in KCs, the different contributions of LN subclasses to the formation of oscillations, and the temporal dynamics of firing in MBON-α1. Whereas the preceding models required supercomputers owing to their enormous computational cost, our model was light enough to be simulated on an entry-level, low-cost FPGA chip at 0.37 watts, which may be acceptable for small robots and portable AI devices. In addition, whereas the simulation speed in Bezaire et al. (2016) was approximately 1,600 times slower than real time, our model performs real-time simulations.

There are also differences between our model and the latest preceding model (Kennedy, 2019) of the Drosophila olfactory system. Unlike our model, the preceding model is not data-driven. It used the leaky I&F model and did not reproduce the electrophysiological properties of each class of neurons. As for the structure of the network, our model employs a slightly extended version of the preceding model. Whereas the preceding model consists of PNs, LNs, KCs, APL, and an MBON, our model has another MBON and the SMP354 neuron in addition, reproducing the valence-balance model (Heisenberg, 2003; Aso et al., 2023), in which learning-induced plasticity in the KC>MBON synapses tips the balance of the valence signals of the MBONs. This competitive memory circuitry is important because it is the basis for the interactions among MBONs that are responsible for flexible and complex behavioral decisions associated with memory. As for the learning rule, both models employ reward-induced depression of KC>MBON synapses to implement olfactory associative learning. As for the synaptic connections, whereas the preceding model stochastically determines the connections between layers such as ORN>PN and PN>KC, our model precisely reproduces the connections based on the connectome database.
As for the spiking dynamics, the characteristic spiking activities of each neuron are not considered in the preceding model. For example, the spiking activities of PNs and LNs are not calculated by spiking neuron models but are generated by a Poisson process. The activities of MBONs are represented using nonlinear activation functions. KCs are described by the LIF model, and their firing properties are not fitted to the in vivo data.

The peak frequency of the PN oscillation in this model was approximately 24 Hz, whereas the experimentally observed peak frequency in the antennal lobe was 10-15 Hz (Tanaka et al., 2009). In the antennal lobe, PNs and LNs are connected via glomeruli, which are neuropils comprising the dendrites and axons of PNs, LNs, and ORNs. The model does not consider the dynamics of the glomeruli, which may account for the gap between the peak frequencies. In addition, the proportions and detailed connections of the four subclasses of LNs are not known; they were therefore not incorporated into the model, which may also have affected the peak frequency. A more detailed model remains to be built to clarify the mechanism and function of the oscillations in the antennal lobe.

To examine the oscillations (Figure 6), we applied only 3-octanol to the network. This is because the magnitude of the PNs' oscillations depends greatly on the identity of the odor, both in vivo (Tanaka et al., 2009) and in silico (Supplementary Figure S8). Since our intention was to measure the effect of inactivating each LN subclass on the PNs' oscillations, we used only one type of odor. In the future, we will comprehensively examine the relationship between oscillations and odors and clarify why the magnitude of the oscillations differs between odors.

In honeybees, oscillations in the antennal lobe are necessary for distinguishing between similar odors (Stopfer et al., 1997). In locusts, oscillations appear not only in the antennal lobe but also in KCs, and they are believed to contribute to the sparse representation of odors in the KC population (Perez-Orive et al., 2002). Although the role of oscillations in Drosophila remains unclear, they likely contribute to the processing of odor information, given the similarity of the olfactory network structure across insects. One possible candidate is the generation of a sparse representation of odors in the antennal lobe.

In this study, the PQN model employs the function m(I), which was not part of the original PQN model (Nanami and Kohno, 2023). This function performs a nonlinear transformation of the stimulus input so that the membrane potential behaves as expected over a wide range of stimulus inputs. However, it slightly complicates the model and has no biological counterpart. By changing the parameters and adjusting the dynamics, we expect to be able to remove this function in future work.

As shown in Figure 7, after olfactory associative learning, MBON-α1 fires for approximately 250 ms immediately after the arrival of the odor signal and subsequently enters a resting period, successfully reproducing the temporal firing observed by Hige et al.
(2015) in MBON-γ1pedc. To our knowledge, there has been no report on the mechanism underlying these characteristic post-learning firing dynamics. The result of our simulation suggests that the delayed activation of APL contributes to shaping this activity pattern. Thus, our modeling not only reproduces observed physiological data but also provides mechanistic insight by proposing an experimentally testable hypothesis.

The SiNN implemented in this study operates at the same speed as the olfactory nervous system with a 100 MHz clock signal. If a higher clock is used, the model can provide accelerated simulations, albeit with increased power consumption. For example, we confirmed that the model can simulate four times faster than real time using a 400 MHz clock on a Xilinx Virtex UltraScale+ xcvu37p-fsvh2892-3-e FPGA; in this implementation, the estimated power consumption was about 4 W. The power efficiency and simulation speed can be further improved by using application-specific integrated circuits (ASICs).

As shown in Figure 9B, most of the power is consumed by the MMCM, the BRAMs, and the steady-state leakage (Static). Except for the few BRAMs that store the neuronal state variables, these resources are not directly used to compute the neuronal dynamics. Ignoring the reproducibility of the spiking properties and using I&F-based models instead of the PQN model might reduce the power of the clocks, signals, logic, and DSPs. However, these resources consume only 18.5% of the total power, so their impact on the overall system is expected to be small. Ionic-conductance-based models can reproduce the dynamics of the spiking process as accurately as or better than the PQN model. However, they contain many exponential terms that consume a large number of DSPs in FPGA implementations (Akbarzadeh-Sherbaf et al., 2018; Khoyratee et al., 2019). Even the most well-optimized implementation (Khoyratee et al., 2019) requires more than 20,000 LUTs and more than 100 DSPs to build a network of 2,000 neurons, which would lead to significantly higher power consumption.

Our modeling approach is applicable not only to FPGAs but also to ASICs. Conversion from FPGA to ASIC improves power efficiency by a factor of 14 to 20 (Amara et al., 2006; Kuon and Rose, 2007). The network reproduced in this study accounts for approximately 2% of the entire brain. Thus, our approach enables the construction of an ASIC chip that simulates the entire Drosophila brain while consuming approximately 1 watt (scaling the 0.37 W measured here by a factor of 50 gives 18.5 W, which the 14-20× ASIC gain reduces to roughly 0.9-1.3 W, assuming power scales linearly with network size). Such chips have considerable potential in the engineering and scientific fields. Because of its low power consumption, such a chip could be mounted on small insect-like robots. The resulting system is expected to move around autonomously, solve unknown tasks, and adapt to changes in the environment, similar to insects. In addition, owing to its intrinsic power efficiency, the chip could serve as a sufficiently fast simulator of the whole brain within the constraints of the power supply typically available in laboratories. It could facilitate the long-term measurement of neuronal activities and is expected to contribute to the analysis of phenomena with long timescales, such as continuous learning and forgetting.

To evaluate the robustness of our approach, we measured how the success rate of olfactory associative learning varied while changing one of the empirically determined parameters (Figure 10A). We varied p_PN_KC, which scales the strength of the synaptic connections from PNs to KCs.
Increasing or decreasing p_PN_KC from its original value (p_PN_KC = 1.03125) decreased the success rate. At the lower value, the inputs from PNs to KCs are weakened and KCs rarely fire (Figure 10B); as a result, KC>MBON synaptic depression, which is the basis of learning, does not occur sufficiently. When p_PN_KC is large, too many KCs fire, preventing the sparse representation of odors in KCs and reducing the success rate. At present, these parameters have to be tuned carefully by hand, which hinders the easy application of this approach to other nervous systems. In future research, we plan to develop a method to determine these parameters automatically so as to achieve the desired network functionality. Metaheuristics will be applied, just as the parameters of the neurons were determined with the differential evolution algorithm.

HK was supported by a grant from RIKEN. TH was supported by grants from the National Institutes of Health (R01DC018874), the National Science Foundation (2034783), and the United States-Israel Binational Science Foundation (2019026). DY was supported by the Toyobo Biotechnology Foundation Postdoctoral Fellowship and a Japan Society for the Promotion of Science Overseas Research Fellowship.

Figure 1. Network overview. The network comprises an antenna, the antennal lobe, and the mushroom body. ORNs, PNs, KCs, and MBON-α3 are excitatory neurons, whereas LNs, APL, and MBON-α1 are inhibitory neurons. The SMP354 neuron receives excitatory and inhibitory input from the two MBONs and produces the approach cue. KC>MBON-α1 synapses can express synaptic plasticity, which is driven by the reward signal.

Figure 3. Synaptic connections from ORNs to PNs (A) and PNs to KCs (B).

I_b0 is a bias constant. Parameter I represents the stimulus current. The function m performs a nonlinear transformation of I, adjusting the scale of I with parameter k_I and extending the dynamic range with parameters m_0 and m_1. The synaptic currents from other neurons and the current injections shown in Figure 2 are given to I. The parameters τ, φ, and ε determine the time constants of the variables. The parameters r_g, r_h, a_x, b_x, and c_x, where x is fn, fp, gn, gp, hn, or hp, are constants that determine the nullclines of the variables. The parameters b_fp, c_fp, b_gp, c_gp, b_hp, and c_hp are determined by the other parameters such that the nullclines are continuous and smooth. All variables and parameters are purely abstract, with no physical units. The initial values of all state variables were set to zero. The equations of the two-compartment version for KCs and MBONs are given by Equations (17-20).

Figure 4. Details of the PQN engine of the PN mode. (A) Information flow of the PQN engine. N_PN is the number of PNs. (B) Block diagram. This circuit calculates the succeeding values of the internal variables (v, n, v_s, u, and F). The symbols ×, +, and M represent multipliers, adders, and multiplexers, respectively. (C) Internal circuit for the calculation of v_vv_S.
Figure 5. Activities of each type of neuron and the success rate of olfactory associative learning. (A) Parts of the raster plot of ORNs, PNs, and KCs. The horizontal axis represents time, and the vertical axes represent the indices of the neurons. The colored bars represent the onset of the one-second odorant applications. The blue dots represent spikes. (B) Waveforms of the somatic membrane potentials of MBON-α1, MBON-α3, and the SMP354 neuron during and after learning. The light blue arrow indicates the timing of the reward signal that triggered learning. (C) Success rates of olfactory associative learning for the six odorants. Error bars represent the standard deviation over the success rates of the six odorants.

Figure 6. Neuronal oscillations in PNs, LNs, and KCs. (A) Examples of the virtual LFP for each type of neuron. (B) Power spectra of the virtual LFPs of PNs, LNs, and KCs. 3-octanol was applied for ten seconds. (C) Averages of the peak power spectra, plotted on a logarithmic scale. The average of the peak power spectra of PNs in the absence of the odor is also plotted. Error bars represent the standard deviation over trials. (D) Averages of the peak power spectra when one of the four subclasses of LNs was inactivated. LNs of Krasavietz_class1 (i), Krasavietz_class2 (ii), NP1227_class1 (iii), and NP2426_class1 (iv) were inactivated, respectively. The results obtained without inactivation are also plotted for comparison (v). Error bars represent the standard deviation over trials. (E) Averages of the peak power spectra of each LN subclass: Krasavietz_class1 (i), Krasavietz_class2 (ii), NP1227_class1 (iii), and NP2426_class1 (iv). The average peak power spectra of the virtual LFPs over all LNs are also plotted for comparison (v). Error bars represent the standard deviation over trials.

Figure 7. Responses of MBON-α1. (A) Comparison of the somatic membrane potential of MBON-α1 in vivo (red) and in silico (blue) before and after olfactory associative learning. The gray arrows indicate the onset of a one-second-long application of 3-octanol. In silico, the odor input causes ORNs to fire instantaneously, whereas in vivo there is a delay while the odor travels through the tubing and reaches the ORNs. Example responses of the somatic membrane potential for a single trial (solid line) and the temporal dynamics of the firing frequency averaged over five trials (dotted line) are shown. Error bars represent the standard deviation over five trials. The black dots above the solid lines represent spikes. (B) The membrane potential and synaptic current of APL. The black dotted line represents the threshold for the release of the synaptic current of APL. (C) Example responses of the somatic membrane potential of MBON-α1 in the absence of APL for a single trial (solid line) and its firing frequency averaged over five trials (dotted line). Error bars represent the standard deviation over five trials. The black dots above the solid line represent spikes.

Table 1. Comparison of the data-driven SNN models.

Figure 10. (A) Success rate of olfactory associative learning while changing p_PN_KC. To calculate the success rate, trials were performed for each odor. (B) Average number of firing KCs per trial.
13,389.6
2024-06-26T00:00:00.000
[ "Computer Science", "Engineering", "Biology" ]
Analysis, Modeling, and Simulation Solution of Induced-Draft Fan Rotor with Excessive Vibration: A Case Study

In modern industry, computer modeling and simulation tools have become fundamental to estimating the behavior of rotodynamic systems. These computational tools allow analyzing possible modifications as well as alternative solutions to design changes, with the aim of improving performance. Nowadays, rotodynamic systems, present in various industrial applications, require greater efficiency and reliability. Although there are deep learning methodologies for monitoring and diagnosing failures that improve these standards, the main challenge is the lack of databases for learning, a problem that can be addressed through experimental monitoring and computer analysis. This work analyzes the vibrations of two induced-draft fans with excessive vibration in a thermoelectric plant in Mexico. A vibration analysis was carried out through the instrumentation and monitoring of accelerometers located at crucial points on the fans. The results of this experimental analysis were validated by computer simulation based on the finite element method (FEM). The results show that the operating speed of the induced-draft fans is very close to their natural frequency, causing considerable stress and potential failures due to excessive vibration. Finally, this work presents a practical solution to modify the natural frequency of the induced-draft fans so that they can function correctly at the required operating speed, thus mitigating the excessive vibration issues.

Introduction

Vibration issues in rotating machinery depend on multiple factors, including misalignment, critical speeds or resonances, system deterioration due to continuous use, bearing defects, and imbalance. It is well known in the field of vibrations that critical speeds or resonances are physical phenomena that arise when the natural frequency of the system matches the operating frequency. These occurrences can be mitigated by altering the system design and adjusting its natural or operating frequency (Xiangyang et al., 2023). Meanwhile, imbalances can be either mechanical or electrical in nature. Mechanical imbalance arises when the principal axis of inertia does not align with the geometric axis of the system. According to Blanco-Ortega et al. (2010), several active and passive methods and devices have been developed through vibration analysis to mitigate this mechanical phenomenon. Electrical imbalance is generated by voltage variations or harmonic distortion in the voltage, which, in the typical case of induction motors, affects their dynamic behavior and vibrations (Donolo et al., 2016; Ren et al., 2023). For steam turbines, rotor vibration is divided into two categories (Kaneko et al., 2022): forced vibration and self-excited vibration. Forced vibration is caused by an external force and is subdivided as follows: a) rotor imbalance, b) mechanical imbalance in gears and connections or couplings, c) electric excitation from the motor or generator, d) fluid excitation, and e) uncommon factors. Self-excited vibration is mainly caused by oil whip and steam whirl.
Long-term vibration defects in machinery significantly impact both the equipment's service life and the regular, stable operation of the unit (Li et al., 2020). According to studies on different types of turbomachines, the leading causes of vibrations are cavitation and resonance; in addition, excess speed and long periods of operation contribute to this effect (Doshi et al., 2021). Although vibrations are unwanted in systems, they also help identify defective designs, poor installation, and wear or deterioration. Over time, improper installation, manufacturing defects, waste accumulation, and the gradual degradation of machines can lead to rotor eccentricity. This eccentricity adversely impacts performance, causing high noise and vibration levels, elevated losses, and the risk of overheating (Chapagain and Silwal, 2023; Yu-Ling et al., 2023).

Induced-draft fans are used to evacuate air from a space or to create negative air pressure in a system. They are one of the crucial elements of a thermal power plant's auxiliary equipment. Vibration studies on industrial fans focus on the main static and dynamic parts, such as the rotor and bearings (Zenglin and Gordon, 2003; Shabaneh and Zu, 2003; Trebuna et al., 2014; Jagtap et al., 2020), the blades (Pingchao et al., 2018; Wang et al., 2020), the casing and structure (Niko et al., 2011), and the ducts. At times, vibration issues arise from the intricate interaction between the various components of the fan-duct-foundation system.

In recent years, various modern techniques have emerged for analyzing, determining, and predicting failures in rotating machinery, based on a) artificial intelligence (Liu et al., 2018), through data acquisition, feature extraction, and fault recognition; b) neural networks (Benrahmoune et al., 2018), by analyzing and comparing monitoring data; and c) machine learning models (Benchekroun et al., 2023), to accurately predict vibration problems in fans. Considering the above, and bolstered by preventive maintenance to reduce downtime and enhance efficiency, operators can proactively identify potential issues before they reach critical levels. Di et al. (2022) propose a novel method for detecting anomalies in rotating machinery, leveraging vibration vectors inspired by the polar plot. These vectors contain the amplitude and phase values of various characteristic frequencies. By converting the original polar vectors to Cartesian plots using fast Fourier transform-based order analysis (FFT-OA), calculations are simplified and visualization is improved. Wei et al. (2022) introduce an innovative condition monitoring approach for induced-draft systems. They combine a genetic algorithm with a long short-term memory network (GA-LSTM) to establish dynamic and static thresholds for anomaly detection. Applying this method to a coal-fired power plant enables early fault diagnosis and detection. Experimental results demonstrate the effectiveness of this approach, successfully detecting minor anomalies in advance and contributing to improved system reliability and proactive maintenance.
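As a side note on the Di et al. (2022) approach described above, the polar-to-Cartesian conversion itself is straightforward; the sketch below is illustrative only, with placeholder amplitudes and phases rather than data from that study.

```python
import numpy as np

def polar_to_cartesian(amplitude: np.ndarray,
                       phase_rad: np.ndarray) -> np.ndarray:
    """Convert (amplitude, phase) vibration vectors to (x, y) points."""
    return np.column_stack((amplitude * np.cos(phase_rad),
                            amplitude * np.sin(phase_rad)))

# Example: 1x and 2x running-speed components from an FFT-based order analysis
points = polar_to_cartesian(np.array([4.2, 1.1]),
                            np.deg2rad(np.array([30.0, 110.0])))
```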
Physical models allow estimating a system's real behavior in order to better perform and interpret computational and simulation analyses (Xie et al., 2023; Novotný et al., 2019). The study by Zenglin and Gordon (2003) is based on the Jeffcott rotor model with external damping in the vertical and horizontal directions. The Lund stability method is applied for the horizontal direction, whereas an analytical and numerical analysis is conducted for the vertical direction to study the threshold speeds. The authors also consider improving the stability characteristics of the system by adjusting the external damping values. Shabaneh and Zu (2003) perform a dynamic analysis of a single-rotor shaft system with nonlinear elastic bearings at the ends, mounted on a viscoelastic suspension. The Timoshenko shaft model represents the flexibility of the shaft, and the viscoelastic supports are modeled using the Kelvin-Voigt model. The authors also find that the primary resonance peak shifts to higher frequencies as the nonlinear characteristics of the elastic bearings increase.

Bearings play a critical role in rotating machines. The inevitability of degradation and bearing failures arises from prolonged and continuous operation, encompassing factors such as poor maintenance practices, inadequate care, and the dynamic stress induced by the dynamic loads of the rotating parts. Recent research employs innovative analysis techniques for the vibration analysis of bearings, recognizing their inherent complexity (Mohamad et al., 2023; Lu et al., 2023). Furthermore, probabilistic techniques such as remaining useful life (RUL) estimation, as well as those based on deep learning, have become very important. To predict the failure of bearings through vibration analysis or estimate their remaining service life, modern techniques are complemented by the Vold-Kalman parametric filter method (Cui et al., 2019). This method, based on the state-space framework, has recently found applications in the navigation and control of vehicles and spacecraft. Deep-learning-based techniques such as the innovative PIResNet (Ni et al., 2023) are motivated by the inherent variations in speed and load experienced by bearings during machine operation; PIResNet stands out by offering a consistent physical solution even in the presence of imperfect data. Given the critical significance of bearings, their behavior was evaluated both in previous unsuccessful interventions and in our ongoing research project; in both instances, the bearings were ruled out as the cause of the excessive vibrations.

The main contribution of this work lies in the comprehensive study and identification of the cause of excessive vibration in two induced-draft fans of a thermoelectric plant in Mexico. In addition, a solution is proposed through computer simulation. Previous attempts to resolve the vibration issue had been unsuccessful, prompting the need for a more rigorous and systematic approach. The hypothesis of this work is that a practical and theoretical analysis through simulation will shed light on the leading causes of the excessive vibration, thus allowing solutions to be proposed. Testing this hypothesis employed a methodology involving the instrumentation of the rotor bearings with accelerometers, fan modeling, simulation, and data analysis.
The research objectives of this work are as follows: (i) to study and identify the cause of excessive vibration in an induced-draft fan at a Mexican thermoelectric power plant; (ii) to develop a methodology that includes instrumentation, fan modeling, simulation, and data analysis to evaluate vibrations and determine the natural frequencies and vibration modes of the fan's operation; (iii) to assess the hydraulic bearings' stiffness and damping constants for a more accurate computer simulation; (iv) to emphasize the importance of vibration measurement and monitoring for the proper operation and maintenance of fans; and (v) to provide valuable insights and recommendations for mitigating vibration issues and improving the performance and reliability of induced-draft fans in thermoelectric power plants.

This document is organized into sections focusing on analyzing, modeling, and simulating an induced-draft fan model with excessive vibration. It starts with an Introduction that provides an overview of vibration issues in rotating machines and a case study conducted in a thermoelectric plant in Mexico, in addition to stressing the need for analyzing fan vibration issues. Figure 1 depicts the proposed approach while considering previous actions and presents an overview of the methodology. The Problem definition section provides information on the system's design, vibration limits, and the need to mitigate excessive vibrations; it also describes previous actions taken to address the vibration issues. The Methodology section outlines the analysis, covering both experimental instrumentation and computer simulation. It explains the approach to vibration analysis, which involved placing accelerometers on the fan bearings to validate the natural frequencies. The document further details the process of modeling and simulating the fan using the finite element method (FEM) in the ANSYS software. The Results section presents the data obtained from the static and dynamic tests, encompassing the modeling and meshing of the individual fan components as well as the complete simulation. Subsequently, the Discussion and Conclusions sections draw upon relevant studies, offering valuable insights and recommendations to enhance the performance and reliability of a particular induced-draft fan in a thermoelectric power plant.

Problem definition

Unpredictable rotor vibrations have been observed in two fans at a thermoelectric plant in Mexico, with each occurrence displaying distinctive characteristics. In order to mitigate the random vibrations, it was necessary to balance the shaft-compressor system of the rotor. This process has been performed frequently over the past decade, whenever the system experiences vibration; however, it has resulted in significant economic losses. The system comprises two 2500 kW motors, each integrated into a rotor, and each duct is supplied by one of these motors (Figure 2). Table 1 presents information on the system. Airflow is supplied to the ducts of these motors: the air inflow is in the vertical direction, from the outside in, and the air outflow is in the horizontal direction, towards the entrance of the ducts (Figures 3a and 3b).
The mechanical vibration issues of the system have persisted since it was put into operation two decades ago, with the bearings reaching unbalanced speeds (Table 2). While induced-draft fan 1A initially experienced minimal issues, it has required frequent balancing since 2002. Achieving motor stability as it passes through the system's natural frequencies is crucial, and the balancing process varies for each of the two 2500 kW motors. Sometimes, up to 20 motor starts are necessary for correct balancing without complications.

Previous actions

Earlier, the internal staff of the thermoelectric plant, as well as private companies, analyzed the vibration issues in the system. In this regard, previous actions included the following:

• In 1996, a new rotor and shaft design was implemented for induced-draft fan 1A, significantly reducing the need for continuous balancing. The revised design specifically involved switching to a more rigid steel material.
• Ash residues, in quantities of approximately 0.300 kg, were detected on the fan blades. In response, a maintenance and cleaning program with increased frequency was proposed. While this initiative has contributed to a partial mitigation of the excessive vibration issues, it remains a necessary measure.
• In some instances, repairing the edges of the fan blades has been necessary.
• To achieve system stability, the structure of the ducts was modified, and gates were added to the fan suction. These actions were also unsuccessful.
• The bearings were tested to determine whether they were the cause of the excessive vibration, and the foundation of the fans was modified. Both modifications were unsuccessful.

In light of the above, the following hypotheses were proposed:

1. The operating frequency of the engine is very close to one of the system's natural frequencies (the simulation, hammer test, and instrumentation sections will be used to test this hypothesis).
2. Improper balancing, whereby an added correction mass introduces a subsequent imbalance, may be the cause of the excessive vibration.
3. The accumulation of external elements (i.e., ash, additives, moisture) or blade wear causes the rotor to become unbalanced.

Accordingly, a methodology was structured with the aim of examining the system's excessive vibration issues. This methodology comprises two main parts: experimental instrumentation and computer simulation.

Methodology

Our approach began with the instrumentation of the system for static and dynamic testing. We modeled and meshed the fan parts; in this model, master nodes connect the meshed parts. A vibration analysis of the complete system was then carried out through a computer simulation of the fan's operation.

Instrumentation of the induced-draft fan

Vibration curves and natural frequencies were obtained by conducting dynamic and static tests with accelerometers placed on the two fans. One fan was operational for the dynamic case, while the other was turned off for the static case.

Dynamic testing

Through the instrumentation process, the operation of the induced-draft fan was evaluated in order to estimate the system's natural frequencies. To properly take measurements on these machines' bearings, the first step involved adjusting the position of the accelerometers. One accelerometer was used to measure vibrations in the vertical direction of the bearing, and the other was employed to measure vibrations in the longitudinal or horizontal direction of the bearing (Figure 4).
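Although the paper does not detail its signal-processing chain, a typical reduction of such accelerometer traces is an RMS vibration velocity, sketched below under assumed values (the 5 kHz sampling rate and the simple detrending are placeholders, not the plant's actual procedure).

```python
import numpy as np

FS = 5000.0  # sampling rate in Hz (assumed)

def rms_velocity_mm_s(accel_m_s2: np.ndarray) -> float:
    """Integrate acceleration to velocity (trapezoidal rule, detrended),
    then return the RMS vibration velocity in mm/s."""
    a = accel_m_s2 - accel_m_s2.mean()          # remove DC offset
    v = np.cumsum((a[1:] + a[:-1]) / 2.0) / FS  # trapezoidal integration
    v -= v.mean()                               # remove integration drift
    return float(np.sqrt(np.mean(v ** 2)) * 1000.0)
```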
Static impact test

An impact test was performed on induced-draft fan 1B, taking advantage of the fact that the unit was out of service due to major maintenance (i.e., expansion joint replacement). The impact test is a vibration measurement using accelerometers wherein the system is excited with a high-intensity mechanical impulse, forcing it to vibrate in its natural modes. If the system is operating, the damping factor of the system can also be obtained. Nevertheless, the only information that could be obtained from this test was the natural frequencies and the first vibration modes.

Modeling the induced-draft fans

The goal of this vibration analysis was to provide a solution to the vibration issues, which cause mechanical damage and result in downtimes, leading to economic losses for the company. The analytical solution aimed to extend the service life of the fans and ensure optimal performance without frequent maintenance. This work focuses on three main aspects:

1. the material's mechanical properties, using specialized techniques to understand its behavior under mechanical stress;
2. the root causes of vibration failure in the system;
3. proposing a series of solutions to mitigate the vibration issues in the system.

Rotor

In rotating machinery, vibration can stem from various sources, with imbalance being a major cause. While rotors ideally consist of a homogeneous material with a uniform mass distribution along their axis and perfect symmetry, the presence of small, randomly distributed mass concentrations unfortunately leads to imbalance. This imbalance can be caused by factors such as corrosion and dirt. Balancing a rotor involves ensuring that its principal axis of inertia aligns with the center of gravity and coincides with the axis of rotation, nearly eliminating vibration.

Rotating machinery may experience instability due to various factors. Factors attributable to incorrect rotor assembly include inadequate shaft alignment and incorrect bearing placement. In many cases, rotors operate above their natural frequency, causing resonance to occur briefly when the operating speed (or impeller speed) passes through this frequency. These occurrences can damage machinery components, such as seals and bearings, leading to instability. In addition, high-speed operation and friction in the bearings can lead to wear and temperature increases, modifying the dynamic characteristics of the lubricants, resulting in poor lubrication and further contributing to instability in the bearings.

There are various models available for dynamic rotor analysis, ranging from very simple to extremely complex. The complexity of these models is based on the number of degrees of freedom (Zenglin and Gordon, 2003; Shabaneh and Zu, 2003; Yang et al., 2024). While some models may be unrealistic and inadequate, others may be too complex to be practically useful.

Figure 5a shows the fan rotor. The model was compared against design models based on manufacturer and field measurements to ensure accuracy. The 3D CAD model of the fan rotor is shown in Figure 5b. After creating the geometric model, the rotor was meshed to generate a sectional model. This involved selecting certain parts or components of the rotor and assembling them into a single model, as illustrated in Figure 6. The mesh corresponding to the complete rotor shaft shell has a complex geometry due to the shape of the blades, as shown in Figure 7, which depicts the appropriate assembly of the elements.
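Once meshed, the model's main use is the modal analysis discussed below: the undamped natural frequencies follow from det(K − ω²M) = 0. The toy sketch below solves this generalized eigenproblem for a 3-DOF placeholder system; it stands in for what ANSYS assembles and solves internally and uses none of the fan's actual matrices.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 3-DOF stiffness and lumped-mass matrices (placeholders)
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]]) * 1e7   # N/m
M = np.diag([500.0, 500.0, 500.0])         # kg

eigvals, modes = eigh(K, M)                # solves K x = w^2 M x
natural_freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
print(natural_freqs_hz)                    # natural frequencies, ascending
```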
Bearings

The bearings used in these fans are hydrodynamic and can be represented by four stiffness and four damping coefficients. These coefficients are usually modeled as four spring-damper systems placed at a separation of 45° (Jorgen et al., 1965; Čorović and Miljavec, 2020; Xie et al., 2023), as shown in Figure 8. Based on the Sommerfeld numbers, and assuming that the system is in resonance when the natural and operating frequencies (through the angular velocity) are similar, the eight stiffness and damping constants to be introduced into the software for the simulation are defined in Table 3. The following is also considered:

• Each bearing carries 9394.849 kg due to the weight of the rotor.
• The radial clearance of the bearing is c = 0.0014 m.
• The critical angular velocity of the system is also considered.

Based on a Sommerfeld number of 0.9, and assuming that the first resonance is the most pronounced, the critical speed is 0.9 (890 rpm) = 801 rpm (83.88 rad/s). The value of 890 rpm is the system's operating speed and corresponds to a frequency of 14.8333 Hz. The value of 0.9 was chosen instead of 1 to prevent the system's natural and operating frequencies from being equal, thus only approaching resonance.

Preparing the performance simulation

The simulation was carried out in the ANSYS software. The analysis was performed based on several factors, i.e., for two materials, considering increases in diameter and the accuracy of the bearings. The materials defined for the rotor shaft were 1026 steel (E = 205 GPa) and 4340 steel (E = 215 GPa), both with a Poisson's ratio of 0.29. The MASS21 element assigned mass values to key points and consequently provided the values of the loads applied to each bearing. These key points were used as master nodes. For the real constant of the MASS21 element, the approximate value of the weight carried by each bearing was introduced. To simulate the spring-damper pairs, the COMBIN14 element was assigned the eight different stiffness and damping constants from Table 3.

Key points were created in the part of the fan where the bearings are mounted. These key points were then meshed as a mass element and used as a master node with which all slave nodes were associated. Attributes were assigned to the previously created key points: the element type (MASS21) and its real constant. This yielded a mesh, which implies a mass element. The key points were created to be connected with lines to the mass element (master node), which simulates the springs and dampers through the COMBIN14 element. This process can be observed in Figure 9; here, the key points are outside the rotor. Afterwards, the master node was associated with the slave nodes: the master node was the MASS21 element, and the slave nodes were all those around it, as shown in Figure 10. As depicted in Figure 11, lines were created to represent the spring-damper system. Each of the lines starts from the master node (MASS21 element) and goes towards the different key points created earlier. To assign attributes to the previously created lines, the following steps were followed:

1. The element type (COMBIN14) was assigned.
2. The corresponding real constant for each line was assigned.
3. Meshing was performed on the lines.

These steps aimed to simulate the operation of the spring-damper system. To avoid unwanted displacement, the following actions were performed (Figure 12):

1. The key points connected to the COMBIN14 elements were restricted in all directions.
2.
The master node was restricted in the z direction.

The finished model can be observed in Figure 13 and is sufficient for the modal analysis. This analysis is very important in the design and development of a fan to prevent premature failure (Manish et al., 2015; Čorović and Miljavec, 2020), as it yields the vibration modes of the system.

Results

The results presented herein start with the data obtained from the static and dynamic experimental tests using instrumentation. Following this, the modeling and meshing of the fan parts are detailed, leading to the final stage of the computer simulation.

Dynamic test

Measurements were obtained for induced-draft fan 1A, and it was possible to measure the vibration in both bearings. The vibration values in the horizontal direction exceed those obtained in the vertical direction. This discrepancy arises from the lower rigidity in the horizontal direction, in contrast to the vertical direction, due to the presence of vertical supports. This behavior was confirmed experimentally.

Static impact test

Figure 20 shows the results obtained from the impact test conducted on the out-of-operation fan 1B. The top part shows the vibration amplitude, and the bottom part shows the Fourier spectrum indicating the natural frequencies. As seen in this test, values of 15 and 20 Hz were obtained for this particular configuration. It is important to emphasize that the unit was out of service: the bearings were not operating, and there was no mechanical connection between the electric motor and the fan rotor. The results show that the natural frequencies of the rotor are very close to its operating frequency, as already validated with the accelerometers mounted during the operation of the other fan.

Simulation without bearings

Through simulation, it is possible to determine the different vibration modes of a fan. These modes are fundamental to determining primary conditions such as the angular velocity or the system design, which are essential for correct operation. The simulation of the system is based on the real operating conditions of the fan, meaning that the model is close to reality. Figure 21 shows the frequencies of the first vibration mode of the rotor for two different materials.

According to the results shown in Figure 21, the system's natural frequency is sensitive to the material. Another factor that we consider fundamental is the rotor diameter. In this vein, analyses were conducted to evaluate the impact of an increased cross-section on the natural frequency. The results are shown in Table 4. According to the results regarding the change in the radius of the inner shaft, increasing the stiffness of the shaft makes it possible to raise the natural frequencies of the system's vibration modes. It should be noted that the longitudinal dimension cannot be modified. It was also necessary to analyze the system's vibration modes. Figures 21 and 22 show the first to the fourth modes. These modes were included to rule out their similarity to the functional frequency of the system.

Table 5. First mode of vibration
Without bearings: 18.559 Hz
With bearings: 15.3622 Hz
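The 15 and 20 Hz values quoted above are read off the Fourier spectrum of the impact response; a minimal sketch of that peak-picking step follows, with an assumed sampling rate and an assumed minimum peak spacing.

```python
import numpy as np

FS = 1000.0  # Hz (assumed)

def natural_frequency_estimates(signal: np.ndarray, n_peaks: int = 2):
    """Return the n_peaks dominant spectral frequencies as estimates of
    the natural frequencies (peaks forced to be more than 2 Hz apart)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
    peaks: list[float] = []
    for k in np.argsort(spectrum)[::-1]:        # bins by descending magnitude
        if all(abs(freqs[k] - f) > 2.0 for f in peaks):
            peaks.append(float(freqs[k]))
        if len(peaks) == n_peaks:
            break
    return sorted(peaks)
```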
Simulation with bearings

Figure 23 shows the vibration mode obtained with the original characteristics of the system, including the bearings. It can be observed that the natural frequency of the system (15.3622 Hz) is very close to its excitation or operating frequency (890 rpm, 14.8333 Hz), which likely indicates the reason for the excessive vibration in the system. Table 5 compares the results obtained from the two initial simulations of the system in its first mode of vibration, represented in Figures 21 and 23.

Discussion

The first natural frequency is 15.3622 Hz. This frequency was obtained from the experimental tests and validated using computer simulation (Table 5). The operating frequency (14.8333 Hz), obtained from the impeller speed (890 rpm), is close to the system's natural frequency. Excessive vibration occurs due to the convergence of these values toward mechanical resonance.

Modifying the rotation speed of the fan rotor in order to move it away from its natural frequency, with the purpose of preventing resonance, was not possible due to two main factors: a) the airflow and the efficiency of the system would be altered and would no longer meet the functionality requirements; and b), according to Qingjie et al. (2020), it is not possible to eliminate the torsional vibration generated in this type of system when adding a variable frequency drive. Inertial factors generate this torsional vibration due to speed changes in induced-draft fans, and the torsional vibration is in turn caused by frequency-conversion driving technology due to dynamic electromechanical coupling. Several alternatives for eliminating the torsional vibration of the shaft were considered by Qingjie et al. (2020), who found that it is only possible to displace its resonance region and reduce its amplitude, not eliminate it.

M and K (2023) propose a MATLAB-based methodology to prevent machine downtime. This methodology consists of continuously monitoring the rotor-bearing system with regard to the vibration responses obtained through data collection and a fast Fourier transform (FFT) analyzer. That work, however, only proposes the detection of the fault and makes some suggestions. In our work, conducting vibration monitoring of the rotor-bearing system was imperative for identifying the primary vibration modes and establishing their similarity to the operating frequency. This understanding was crucial for defining an effective solution. To facilitate our vibration analysis, we utilized the MATLAB software and integrated the Fourier spectrum of the bearing's rotating behavior in both the horizontal and vertical directions.

Numerous research projects, such as that of Manish et al. (2015), have employed the FEM for the modal analysis of rotating industrial equipment. Some works, beyond relying solely on computational approaches, also incorporate physical models to validate their findings, as demonstrated by Čorović and Miljavec (2020) and Noureddine and Noureddine (2022). In Manish et al. (2015), a comprehensive analysis of the vibration modes of a centrifugal fan was undertaken, focusing on the acquisition of the first ten vibration modes. This extensive examination aimed to validate that these modes did not closely align with the fan's operating speed or frequency, thus avoiding vibration issues. The authors of this study emphasized the crucial nature of such analyses for rotating machinery,
stressing their indispensability in predicting design outcomes and enhancing performance. In the context of our project, we conducted a similar analysis, acquiring data on the first six vibration modes. This document presents findings up to the fourth mode (Figure 22). Our study revealed a proximity between the first natural vibration frequency and the operating frequency of the induced-draft fan. This correlation provides enough evidence to attribute the excessive vibration issues to the alignment of the first natural vibration frequency with the fan's operating frequency. Based on our results, in the subsequent vibration modes, the values move away from the operating frequency, as in Manish et al. (2015), Čorović and Miljavec (2020), and Noureddine and Noureddine (2022). These modes increase in numerical value, as shown in Figures 21 and 22.

In the study conducted by Trebuna et al. (2014), a methodology similar to ours was developed. Their approach involved finite element analysis, strategically placing accelerometers in both rotor bearings, and conducting static impact tests. These measures were implemented in order to scrutinize excessive vibration in two air extraction fans. As in our case, both fans were of the same model, resulting in the same behavior. One distinctive aspect of the work by Trebuna et al. (2014) was its focus on air extraction fans with mixed-flow characteristics, combining both flow directions (axial and radial). In this vein, three accelerometers were required for instrumentation: one accelerometer was placed in the axial direction of the rotor, while two accelerometers were used for the horizontal and vertical directions to assess the radial flow effects. Given that the air extraction fans operated within a speed range of 400-1,000 rpm, modal and experimental analyses were conducted at distinct frequency values in order to capture the nuances of their behavior.

Similarly, Tarek et al. (2018) employed accelerometers positioned in all three directions. In an effort to rectify fan behavior issues, their maintenance department opted for a comprehensive replacement of the rotor, encompassing the shaft, fan, and roller bearings. In contrast, in our project, we placed the accelerometers in the two necessary directions: horizontal and vertical. This approach was prompted by the radial flow characteristics of the induced-draft fan under study. Through our experimental and modal analyses, we identified a distinctive pattern: the fan exhibited an operating frequency closely aligned with the first natural frequency of the system. Consequently, we recommend a design modification strategy centered around altering the rotor's diameter, including the replacement of the fan rotor itself.

Our methodology was shaped by a combination of factors, including an assessment of ineffective previous approaches, our team's extensive experience, and a thorough analysis. We proposed a solution based on instrumentation and computer simulation to address the excessive vibration issue, which had not been considered by companies in the public and private sectors or by company maintenance personnel. The root cause of the problem was the lack of analysis and communication between the fan manufacturer and the company's engineering and maintenance area: they were operating in a region close to the fan's resonance, i.e., they failed to consider that the first natural frequency of the system would be close to its operating frequency in the plant.
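The natural-frequency identification described above (impact tests plus the Fourier spectrum of the accelerometer signals, processed in MATLAB) can be reproduced in outline with a short script. The following is a hedged sketch using synthetic data — the decay rates, amplitudes, and sampling rate are invented stand-ins chosen only so the two peaks fall near the 15 and 20 Hz values reported for fan 1B — and it uses Python/NumPy/SciPy rather than the MATLAB toolchain employed in the study.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for an impact-test accelerometer record:
# two decaying modes near the values reported for fan 1B.
fs = 1000.0                              # sampling rate, Hz (assumed)
t = np.arange(0.0, 4.0, 1.0 / fs)
signal = (np.exp(-1.5 * t) * np.sin(2 * np.pi * 15.0 * t)
          + 0.5 * np.exp(-2.0 * t) * np.sin(2 * np.pi * 20.0 * t))

# Fourier spectrum of the impact response
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

# Spectral peaks below 50 Hz are taken as natural-frequency estimates
peaks, _ = find_peaks(spectrum, height=0.2 * spectrum.max())
natural_freqs = [f for f in freqs[peaks] if f < 50.0]

operating_freq = 890.0 / 60.0            # impeller speed: 890 rpm -> ~14.83 Hz
for f_n in natural_freqs:
    separation = 100.0 * abs(f_n - operating_freq) / f_n
    print(f"natural frequency ~ {f_n:.2f} Hz, "
          f"separation from operation: {separation:.1f} %")
```

Applied to the measured records, the same peak-versus-operating-frequency comparison makes the resonance proximity discussed above explicit: a separation of only a few percent between the first natural frequency and the running speed is the condition identified here as the source of the excessive vibration.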
To support our proposal, Table 4 should be considered, as it shows that an increase in the rotor's diameter raises the system's natural frequency, causing it to move away from the operating frequency or speed. This finding aligns with turbomachinery principles, as it is advisable for the natural frequencies to exceed the operating one.

Conclusions

In this work, mechanical vibratory analysis was conducted to address the issue of excessive vibrations in two induced-draft fans at a thermal power plant in Mexico. Excessive vibrations occurred suddenly and frequently, causing continuous shutdowns to balance the rotors, thus generating economic losses.

The research methodology comprised four key processes: a) analyzing previous solutions to the problem, b) conducting a literature review encompassing background and standards, c) performing experiments by instrumenting accelerometers at crucial points in the fans with the purpose of determining their natural frequencies (a crucial component for subsequent vibration analysis), and d) validating the natural frequencies through modeling and computer simulation. This validation allowed defining a solution to mitigate the problem. It is worth adding that the computer simulation was based on finite element analysis.

Based on the data obtained from both experimentation and simulation, the following was observed:

• The first natural frequency of the system is closely aligned with the operating frequency of the induced-draft fans. Consequently, the system is close to mechanical resonance, resulting in excessive vibration.

• After reviewing the available literature and considering the essential operation of the fans under analysis, implementing a variable frequency drive to adjust the operating speed was not considered a viable option.

In light of the results, it is imperative to modify the fans' geometry. This is necessary because the first natural frequency is close to the operating frequency or impeller speed. In this context, the most practical approach would involve modifying the shaft's cross-section to one with a larger diameter. Although the expenses associated with this solution may be substantial, it remains a necessary measure to rectify the initial design flaws and address the ongoing issues.

Future work could leverage real-time monitoring, computer modeling, and simulation tools, along with the integration of deep learning methodologies. These approaches are poised to offer crucial insights into the behavior of systems, aiding in identifying potential faults and developing corresponding solutions. In this case study, establishing a database lays a foundation for applying deep neural network learning in forthcoming projects. Another improvement would be enhancing the accuracy of sensitivity measurements, similarly to the methods outlined in this study, which could be further advanced through techniques such as speckle interferometry (Dhiya et al., 2023; Jesús et al., 2024).

Figure and table captions:
Figure 1. Proposed approach. Source: Authors
Table 1. Fan information — volumetric flow: 2000 m³/hr; static pressure: 0.602 m water column; temperature: 417.5 K; type: double suction, blade or vane control, constant speed. Source: Authors
Figure 2. Lateral section of the ducts and actuation engines. Source: Authors
Figure 4. Placement of the accelerometers and directions of response measurements in the induced-draft fan. Source: Authors
Figure 7. Total mesh of the rotor shaft. Source: Authors
Figure 9. Key points to be connected with the master node. Source: Authors
Figure 11. Lines to simulate each spring-damper assembly. Source: Authors
Figure 13. Completed model prior to modal analysis. Source: Authors
Figure 14. Vibration obtained in the horizontal direction of the inner bearing (left) and rear bearing (right). Induced-draft fan 1A. Source: Authors
Figure 15. Vibration obtained in the vertical direction of the inner bearing (left) and rear bearing (right). Induced-draft fan 1A. Source: Authors
Figure 18. Fourier spectrum and integral of the Fourier effect in the horizontal direction of the inner bearing (top) and rear bearing (below). Induced-draft fan 1A. Source: Authors
Figure 19. Fourier spectrum and integral of the Fourier effect in the vertical direction of the inner bearing (top) and rear bearing (below). Induced-draft fan 1A. Source: Authors
Table 4. Natural frequency of the undamped system according to the radial increment of the shaft and the two materials used
Table 5. Results of the two main simulations
7,879.2
2024-05-29T00:00:00.000
[ "Engineering", "Computer Science" ]
Islamic Business Ethics The purpose of writing this article is to increase knowledge and can be used as a reference to add references related to the topic presented, namely Business Ethics in Islam. The method used is a qualitative method using a literature study approach. In researching this article, the author conducted a literature study by looking for various written sources such as books, archives, articles, scientific journals, and documents that are relevant to the discussion being discussed, namely business ethics in an Islamic perspective. Supporting activities carried out to determine the literature review in this study include reading, searching, and analyzing expert opinions and library materials that contain theories related to the discussion articles. Ethics in business is very necessary for an entrepreneur in running his business. In Islam it has also been explained that ethics is one of the important things and is very helpful in improving the performance of a business as exemplified by the Prophet Muhammad SAW. Therefore, as Muslims are required to apply Islamic business ethics in doing business. INTRODUCTION Basically, ethics (core values) in business can help business actors (traders) in solving problems (morality) in their business practices (Abdelmoety dkk., 2022;Manuel & Herron, 2020;Sholihin dkk., 2020).Therefore, it is very necessary to develop an Islamic economic system, especially in reviving Islamic business as a solution to the failure of the economic system, both capitalism and socialism, learning the basics of Islam relating to the rules of trade (business) derived from the Koran and Sunnah which is also an absolute thing (Baran, 2021;Gallego-Alvarez dkk., 2020;Rabbani dkk., 2021).The framework in this paper discusses the problem of developing Islamic business related to the real sector. The awareness of Islamic scholars in an effort to return to the nature of Islamic teachings found ideas in the economic system based on Islamic teachings or commonly called the Islamic economic system (Alhammadi, 2022;Hasan dkk., 2022;Kuran, 2018).The growing awareness to apply the teachings of Islam in economic life makes Muslims have to change the pattern of thinking that was originally a capitalist economy into a sharia economy that is included in the business environment (Sabiu & Abduh, 2020;Shaukat & Zhu, 2021;Sidani, 2019).The business environment cannot be separated from business ethics.There are many studies that show a positive relationship between business ethics and the success that will be achieved by a company (Ghezzi & Cavallo, 2020;Hair dkk., 2019;Rhou & Singal, 2020).One example that can illustrate the impact of a company's failure due to not practicing business ethics in every business process is the story of the bankruptcy of the Lehman Brothers.Business practices that do not apply the principle of honesty, only seek maximum profit, and harm other parties will eventually lead to the collapse of even small and large companies. 
Business actors or companies that do not maintain ethics will not be able to create a healthy business atmosphere and in the end can also threaten social relations and harm consumers, even themselves (Hock-Doepgen dkk., 2021;Vermunt dkk., 2019;Wamba-Taguimdje dkk., 2020).In Indonesia after the end of the new order period, namely in early 1998, many unethical business practices were revealed (Umam dkk., 2020).Many cases and scandals occurred in business practices, be it KKN (corruption, collusion, and nepotism), bribery, forgery, fraud or misappropriation (embezzlement) of company or state property that we can see in the case of Edi tanzil BLBI (Bantuan, Likuiditas, Bank Indonesia), PT Newmont, Freeport and the Gayus case with its tax scandal (Kurniawan dkk., 2021;Tegnan dkk., 2021;Yustiarini & Soemardi, 2020).In addition, we can see that there are also companies that apply business ethics in running their business.For example, the Indian company Nestle supports cattle farmers with various assistance so that they can maximize and increase milk production by fifty times and the standard of living of farmers can also increase. Ethics are related to a good life, whether it concerns the personality of a person or a group of people or social groups that are passed down from one person to another or from one generation to the next (Robert dkk., 2020;Volarevic dkk., 2018;Winfield & Jirotka, 2018).In research researched by Emi Rosyidah (2014) entitled The effect of business competition and business ethics on entrepreneurs in Sidoarjo.Based on the research it can be seen that business competition and business ethics are very influential on the behavior of an entrepreneur both persially and simultaneously.The similarity of this research with the writing of this article is that both discuss business ethics in Islam.While the difference lies in the indicators and research methods used.(Rosyidah, 2014).Research by Diah Sulistiyani (2015) with the title of her research The effect of knowledge of Islamic business ethics and religiosity on the behavior of Muslim traders (Case study on basic food traders in the karangkobar market).the research gives the results that Islamic business ethics have a significant effect on the behavior of traders.The similarity of the above research is that it examines Islamic business ethics and trade behavior.has a difference, namely in the research methods used. 
Business ethics initially emerged when business activities did not input from the ethical spotlight (Dierksmeier & Seele, 2018;Ferrell dkk., 2019;Martin dkk., 2019).Fraud in business, reduction of scales or portions is one of the concrete examples of the connection between ethics and business.Because of this phenomenon, business ethics received a lot of attention until it became a stand-alone field of scientific research (Agag, 2019;Sarwar dkk., 2020).In Islam, business activities are highly recommended, taking into account what has been determined by both the Qur'an and As-sunnah.Both can be a reference for Muslims, especially conducting their business activities (Faizal dkk., 2021;Jabbar dkk., 2018;Umar & Kurawa, 2019).Among these guidelines there are also several codes of ethics in trade according to Islam including siddiq (honest), amanah (responsibility), not committing usury, keeping promises, not committing fraud, not cheating on scales, not demonizing other traders, not hoarding goods and this can harm others.Based on the explanation stated above, the question arises: What is business ethics in accordance with Islamic law? RESEARCH METHODOLOGY This research discusses theories and concepts based on existing literature, such as articles, scientific journals that are certainly relevant to this research.Literature review is useful as a form of effort in creating concepts or theories that form the basis of study in research (Oztemel & Gursev, 2020;Snyder, 2019).Literature studies or literature reviews are important and required, especially those whose main purpose is to develop theoretical aspects and aspects of practical benefits. This journal article is written using a qualitative method that uses a literature study approach.Collecting data or sources related to the research topic can be done by means of a literature study.In the research of this article, the author conducted a literature study by looking for various written sources such as books, archives, articles, scientific journals, and documents relevant to the discussion studied, namely about business ethics in an Islamic perspective.Supporting activities carried out to determine the literature study in this research include reading, searching and analyzing the opinions of experts and library materials that contain theories related to the discussion of the article.By using this article, philosophical values in the development of Islamic economics in the production of halal products can be found.The methodology used in this article is the methodology of Islamic studies, which contains an accurate and thorough scientific process in understanding Islamic teachings comprehensively from various sources and understanding its history. RESULT AND DISCUSSION Business Ethics in Islamic perspective In terms of ethics means good standards of behavior, some even argue that Islam is an ethic that regulates our overall behavior, or daily life activities (McCann, 2022). Morals or ethics in Islamic teachings are a form of piety, Islam and also a belief based on great faith in the truth of Allah SWT (Ahmad dkk., 2021).In essence, Islam was revealed by Allah to be the main foundation in developing good and correct morals or morals. 
In ethics there are three functions and manifestations, namely descriptive ethics, which contains descriptive moral experiences to determine the motivation, will and purpose of an action in human behavior.Second, normative ethics, which tries to explain why people act the way they do, and what human principles are.Third, it tries to provide the meaning, terminology, and language used in ethics, as well as the way of thinking used to substantiate claims about ethics.Metaethics questions the meanings contained in the terms of morality used to give moral responses. Etymologically, ethics comes from the Greek word ethos which means attitude, way of thinking, habits, customs, manners, feelings, and decency.Aristotle, a Greek philosopher, used the term ethics to refer to moral philosophy.Therefore, ethics means the principles, norms, and standards of behavior that govern one's personality and group to distinguish what is right and what is wrong.Business ethics (business ethics) aims to prohibit improper behavior of the company, its officers and employees (Arrisman, 2018).Business ethics affect the company's relationship with its employees, the relationship of workers with the company, and the company's relationship with agents or other economic entities. Islam has determined that the goal of human life is to achieve eternal or permanent victory, which can also be reflected in the form of meeting God in Heaven.In this case, Islam stakes the principles of faith and tawhid.The basis of monotheism is in line with the example of the Prophet, which is expected to produce humans with noble character (Shahabuddin dkk., 2020).Islam is a religion of tawhid or all its provisions come from Allah SWT as its creator, one of the sources of truth is what is known as the Islamic creed.Allah SWT revealed a truth through the holy book Al-Qur'an which was revealed to the Prophet Muhammad SAW. Ethics are defined as standards of behavior that guide individuals in making decisions.Ethics is the study of good and bad actions and one's moral choices.Ethical decisions are the right decisions about standard behavior (Khan dkk., 2023).Sometimes business ethics is called management ethics, which is the application of ethical standards in business activities.W. F. Schoell states: some philosophers argue that behavior is ethical if it follows the will of Allah SWT.So, actually ethical behavior is an act of holding the commands of Allah SWT and avoiding His prohibitions.In Islam, business ethics has been discussed in various literatures and the basic source is the Koran and hadith. Al-Ghazali in the book Ihya Ulumuddin explains that Ethics (khuluq) is a trait contained in the soul from which actions will flow easily, without the need for consideration (Alfakhri dkk., 2018).So business ethics in Islamic sharia is ethics in running a business that is in accordance with the values of the teachings in Islam so that in running a business there is no need to worry anymore because it is considered a good and right thing.So, from some of the definitions above, researchers can conclude that ethics (morality) is a habit of a person's behavior in carrying out activities that can lead to good or bad characteristics and are related to one another. 
Business is a term that describes all the activities of various organizations (institutions) that produce goods and services needed by the community (consumers) in order to meet their daily needs (Manik, 2019).In general, business is defined as a form of activity carried out by humans in order to obtain income or income or sustenance to meet their needs and desires in life by managing economic resources systematically, effectively and efficiently.The business economy includes agriculture, industry, services and trade. The principles of business ethics that apply in good business practices cannot be separated from our lives as human beings, this means that the principles of business ethics are embedded in the system of values applied by each society (Mamun dkk., 2021).In Islam, business can be understood as a series of corporate activities in various forms of unlimited number (quantity) of ownership of assets (goods / services) including benefits, but limited with respect to the acquisition and use of property (there are rules halal and haram).The above definition can be concluded that Islam is obligatory for every Muslim, especially those who have the responsibility to work.Work is one of the reasons that allows humans to have wealth and allows humans to try to make a living, Allah SWT expands the earth and provides various means used to seek wealth (rizki). Business in the Qur'an is al-tijarah, and in Arabic tijaraha, derived from the root word tajara, tajranwatijarata, which means trade or business.According to Ar-Raghib al-Ashfahan, in the Qur'an at-Tijarah, al-mufradat fi gharib means managing assets for profit (Wijaya dkk., 2022).Islamic business is basically the same as business in general, it's just that it must obey and follow the teachings of the Qur'an, As-Sunnah, Al-Ijma and Qiyas (Ijtihad) and pay attention to the prohibitions contained in these sources. In the book of Prof.Dr.H.Muhammad Djakfar, he wrote that Islamic business ethics is an ethical standard based on the Qur'an and Hadith that must be used as a benchmark for everyone in business.Islamic business ethics is the morality of doing business in accordance with Islamic values.So there is no need to worry about trading, because it is considered something good and right.Ethical, moral, moral or ethical values are values that encourage a person to become a whole human being.Such as honesty, truth, justice, independence, happiness, love and compassion.When this ethical value is realized, it will be able to complete human nature as a whole.While everyone is allowed to have knowledge about values, there are only two units of knowledge that guide and regulate Islamic behavior, namely the Quran and Hadith as the source of all values and guidelines in all aspects of life, including business. Islamic economics and business are closely related to Islamic sharia and aqidah, so that the view of Islamic economics and business cannot be understood without a good understanding of Islamic aqidah and sharia .Attachment to belief (aqidah) or belief leads to self-control, harmonious relationships with partners, the emergence of mutual benefit and not just one-sided gain.Building a culture in a healthy business, ideally begins with the formulation of ethical rules to serve as a standard of behavior before drafting and enforcing a code of ethics (law) or incorporating a code of ethics (standard) into a legal standard. 
.As a guideline for individual business actors, namely through the application of moral habits or culture to understand and live the values of moral principles as the core strength of the company that prioritizes honesty, responsibility, discipline and nondiscriminatory behavior.Islamic business ethics is a moral or moral culture related to the business operations of a company.Meanwhile, Islamic business ethics is a study of a person or organization in business or a mutually beneficial business agreement according to Islamic values. In order to apply business ethics to build an Islamic company, first of all, awareness of the new business must be restored.The view that business ethics is an integral or indispensable part is a fundamental construction as a change in people's widespread assumption and understanding of the awareness of immoral business systems.In the Qur'an, business is defined as tangible and intangible activities.In order for a company to be called valuable, two goals must be observed The fulfillment of material and spiritual needs can be met in balance. Regarding the unity of business and ethics, understanding the principles of business ethics is valuable when it meets material and spiritual needs in a balanced way and does not contain evil, corruption and injustice.But it contains values such as unity, balance, free will, responsibility, truth, benevolence and honesty.This means that business ethics can be applied by everyone.Second, in an effort to apply business ethics to build an Islamic business order, it should be noted the need for a new perspective in the study of economic science that is based more on the paradigm of normative ethical approach and inductive approach.Empiricism that emphasizes the study of the values of the Qur'an and the development of values to face the changes and changes of the times that are accelerating or in the category of the development of modern science, must be developed abductively.-pluralistic way.think. Steps to apply business ethics in order to build a sharia-compliant business in order to face challenges in the business world In this discussion of Islamic business ethics, according to (Taufik dkk., 2021) Dr. Syahata explained that Islamic business ethics is an important step in the journey of professional business activities and it is necessary to know that Islamic business ethics has an important function in the offerings of entrepreneurs, namely: 1) Building an Islamic code of ethics that guides, develops and anchors the method of doing business based on religious teachings.This code of ethics is also a symbol of order protecting entrepreneurs from risks.2) This code can serve as a legal basis for establishing the responsibility of the entrepreneur, especially for himself, between business life, society and others Everything is a responsibility before Allah SWT. 3) The Code is considered a legal document that can resolve issues that arise rather than leaving them to the judiciary.4) The code of ethics can help resolve many of the problems that exist between employers and their work communities.Something that can build brotherhood (ukhuwah) and cooperation among all. 
The function of Islamic business ethics According to (Shamsudheen dkk., 2023) basically, Islamic business ethics has some specific functions, which are explained in the following points: 1) Business ethics seeks to find ways to equalize and harmonize the different scores in world business.2) Business ethics also influences the ever-changing awareness of business, especially Islamic business.And the way is usually in the form of providing a new understanding and perspective on business based on moral and spiritual values, which are then combined in a form commonly called business ethics.3) Business ethics, especially Islamic business ethics, can also be a solution to various modern business problems that are getting away from ethical values.In the sense that ethical business must really refer to its main source, namely the Al-Quran and As-Sunnah. Principles of Business Ethics According to (Kalkavan dkk., 2021) some principles of business ethics formulated by a group of top European, American, and Japanese business leaders called the Caux Round Table (CRT) include: 1) About general principles, including: Economic value to society lies in prosperity and employment.produce goods and services at prices that match their quality.Companies help improve the lives of customers, employees and shareholders.Suppliers and applicants expect honesty and fairness from the company.2) Overseas-based businesses should support the well-being of local communities by creating jobs, increasing purchasing power, upholding human rights and improving education.4)Businesses should value sincerity, honesty, keeping promises and confusion.Comply with regulations and develop more competitive businesses and fair and just treatment for all entrepreneurs at home and abroad.5) It is important to protect and improve the environment, ensure sustainability and prevent the waste of natural resources.6) There is no justification for bribery, money laundering, malicious acts and no involvement in the trade of weapons used for terrorism or illegal drugs.7) Provide the highest quality products and services.8) Treat customers fairly in all transactions.9) Businesses should protect human dignity in the marketing and advertising of products.10) Respect the integrity of the client's culture.11) Relations with employees, employment and fixed wages improve their conditions, improve health, be open to information, be ready to listen to employee complaints, avoid discriminatory practices, respect gender, age, ethnicity, religion, avoid work accidents, develop knowledge and professional responses in the host country.12) Maintain good relations with suppliers, competitors and the community. The principles of business ethics according to the Qur'an, namely First, prohibit business conducted through flawed procedures.Second, contracts must not involve usury.Third, business also has a social function through zakat and alms Fourth, prohibiting the revocation of rights to an item or commodity that is obtained or held by measuring or weighing, because it is a form of injustice.Fifth, loving the values of economic and social balance, security and goodness, and not accepting harm and injustice.Sixth, Obviously.entrepreneurs cannot use (deceive) themselves or other entrepreneurs. 
There are several important things related to Islamic business ethics, namely promises, buying debts, the inability of villagers to block the city limits, honesty in buying and selling, measures and scales, frugal behavior, salary issues, income (Abdelzaher dkk., 2019).Rights of others, maintenance of land, transactions, and settlement of accumulated wealth. If we look at business ethics from an Islamic economic perspective, they come from two sources: Divine value Divinely-sourced values are those that God prescribes to His Messengers about piety, faith, goodness, justice, etc., and are recorded in Divine Revelation.Religion is the most important reference for moral and ethical values.God as the main source of religious teachings has determined truth and error.God has full authority to determine good and bad values (ethics).Values derived from religion are static and the truth is absolute.Human attitudes, actions and behavior must reflect God's will for human beings.As values must be based on truth and love for Him, it also leads to truth and acceptance (His favor), namely sa'adah fi al Dunya wa sa'adah fi al-Akhira.To achieve Sa'adah (happiness), modern humans and business people must develop business ethics sourced from the Quran.Ethics and economics inspired by God's teachings prohibit business people from doing business that harms others because in essence these actions end up with a boomerang where the consequences of these actions not only harm other parties but also lead to the fact that business people experience negative consequences.There is a sense of pleasure after you have done something that harms others.On the other hand, doing business ethically in accordance with religious teachings certainly gives the perpetrator peace of mind because they are not overshadowed by guilt towards others. Insaniyat (humanity) values The values that originate from Godhead are the values that God assigns to His Messenger about The opposite of ethical values that originate from religion are ethical values that arise from the creations and agreements of human thought for the benefit of the people themselves and for goodness.This value is dynamic, limited in time and space. These two values have different sources, but are interrelated.The relationship between divinely transmitted values and humanly inherited values, so closely related to human values because of their relative and dynamic nature, allows for submission to absolute and eternal divine values.Therefore, all human intentions, thoughts, actions and behaviors cannot be separated from divine values.Human dependence on divine values does not mean dissolving him as an independent being, but rather bringing him to a more humane position, humanizing him and raising him to a higher level so that he becomes perfect. CONCLUSION Etymologically, ethics comes from the Greek word ethos which means attitude, way of thinking, habits, customs, manners, feelings, and decency.Ethics are defined as standards of behavior that guide individuals in making decisions.Business is a term that describes all the activities of various organizations (institutions) that produce goods and services needed by the public (consumers) in order to meet their daily needs. In Prof. Dr. H. 
Muhammad Djakfar's book, he wrote that Islamic business ethics is an ethical standard based on the Qur'an and Hadith that must be used as a benchmark for everyone in business.Islamic business ethics is the morality of doing business in accordance with Islamic values.So there is no need to worry about trading, because it is considered something good and right.Ethical, moral, moral or ethical values are values that encourage a person to become a whole human being.Such as honesty, truth, justice, independence, happiness, love and compassion.When this ethical value is realized, it will be able to complete human nature as a whole.While everyone is allowed to have knowledge about values, there are only two units of knowledge that guide and regulate Islamic behavior, namely the Quran and Hadith as the source of all values and guidelines in all aspects of life, including business.Islamic economics and business are closely related to Islamic sharia and aqidah, so that the view of Islamic economics and business cannot be understood without a good understanding of Islamic aqidah and sharia.
5,551
2023-09-30T00:00:00.000
[ "Business", "Philosophy" ]
Dielectric Dependent Absorption Characteristics in CNFET Infrared Phototransistor. In future infrared photodetectors, single-walled carbon nanotubes (SWCNTs) are considered potential candidates due to their band gap, high absorption coefficient (10⁴-10⁵ cm⁻¹), high charge carrier mobility, and ease of processability. SWCNT-based field-effect transistors (CNFETs) are being seriously considered for applications in optoelectronics. In the proposed work, an optically controlled back-gated CNFET is modeled in Sentaurus TCAD to observe the impact of high-dielectric oxides on its photo absorption. The model is based on analytical approximations and parameters extracted from quantum mechanical simulations of the device, depending on the nanotube diameter and the different gate oxide materials. A small deviation in SWCNT chirality shows a significant change (more than 50%) in channel current. Transfer characteristics of the device are analyzed under dark and illuminated conditions. A CNFET integrated with HfO2 dielectrics exhibits superior performance, with a significant rise in photocurrent. Precise two-dimensional TCAD simulation results and visual figures affirm that the ON-state performance of the CNFET has a significant dependency on the dielectric strength as well as the width of the gate oxide, and support its application in enhancing the performance of carbon nanotube based infrared photodetectors.

Introduction

Infrared photodetectors can be used for a variety of applications in the military, security, medical, industrial, and telecommunication areas. Device flexibility, a broadened photosensitive spectral range, and low cost are foreseen to be the fundamental areas in promoting the relevance of IR photodetectors [1]. Amongst the various preferred detector materials for infrared light, SWCNTs are being seriously considered as one of the potential candidates due to their outstanding electrical, optical, thermal, and mechanical properties. Researchers have demonstrated that SWCNTs show strong absorption in the IR (≈10⁵ cm⁻¹) [2,3] due to the high charge carrier mobility, 1D structure, and high aspect ratio of SWCNTs. In particular, SWCNT-based planar photoconductor devices have been one of the important research areas in recent years due to their customizable electronic properties and 2D architecture. Also, the existing CMOS structure can be reused in SWCNT-based phototransistors due to their compatibility with conventional MOSFETs, potentially enabling on-chip integration and multifunctional optoelectronics. The SWCNT acts as the channel in the CNFET and is responsible for the transport mechanisms. Still, the performance of infrared SWCNT phototransistors has thus far been inferior compared to other IR photodetectors, regardless of their potential [2,4]. One of the main reasons for the CNFET's low performance can be attributed to the relatively low dielectric constant and larger thickness of the SiO2 gate oxide. Steep switching between the ON and OFF states of the CNFET can be attained by incorporating a thin high-dielectric gate oxide. The interface between high-dielectric gate oxides and the SWCNT channel develops with reduced dangling bonds and hence does not degrade the carrier mobility in the SWCNT channel [5]. In this proposed work, the possible impact of the high-κ gate dielectric HfO2 on a CNFET phototransistor is studied through TCAD simulation.
The distinctive features of the CNFET device with a high-dielectric HfO2 gate oxide are observed and compared with those of a CNFET with a low-dielectric SiO2 gate oxide in terms of output characteristics and responsivity. In this simulation approach, the 2D CNFET model is implemented including the principles of calculation of the SWCNT charge, which is responsible for the current transport. This charge developed on the SWCNT surface is due to the potential across the SWCNT, which is responsible for the channel band alterations [6]. The CNFET model considers SWCNTs with different diameters and channel lengths. The model is valid as long as the active-channel SWCNT in the CNFET phototransistor behaves as semiconducting.

A. Optical Properties of SWCNT

SWCNTs are tiny, hollow cylinders constructed by rolling up a 2D graphene sheet. Ch is the roll-up vector in the graphene sheet and describes how the sheet is rolled up to form SWCNTs with varying diameters and different electronic properties. Depending on the chirality of the graphene cylinder, the SWCNT diameter can be varied, and the resulting SWCNTs are either semiconducting or metallic, as shown in Figure 1(a). The chirality vector Ch with indices (n, m) defines the diameter, bandgap, and threshold voltage of the SWCNT. Figure 1(b) indicates the presence of several sub-bands, with all energy bands having their maxima or minima at the same K points, indicating that nanotubes are a unique material with a direct bandgap. Such direct-bandgap semiconducting SWCNTs, due to their strong light absorption and fast light response over a broad band, can be used in wide optoelectronics applications. However, with the reduced dimensionality of SWCNTs, many-body effects dominate the light absorption. The most significant many-body effects amongst the various optical properties are excitons, which are electron-hole pairs easily formed with much larger Coulombic interaction energies. As seen in Figure 1(b), photoexcited electrons in a SWCNT can be directly excited from valence bands to conduction bands for energies larger than its bandgap. But in the presence of excitons in semiconducting SWCNTs, the optical absorption spectrum is completely altered; each band-to-band transition gives a series of sharp excitonic peaks. Free electrons and holes may be produced in lower sub-bands via excitation of excitons to higher-energy sub-bands, followed by decay to lower-energy sub-bands to become free electrons and holes. In the first sub-band, excitons have a large binding energy; hence, it is usually difficult for them to decay to become free carriers. In photodetectors, an external electric field or band bending due to doping is usually utilized to separate the photocarriers. In the excitonic picture, a larger electric field needs to be applied to dissociate the excitons into a free electron and hole [2].

B. Phototransistor Structure

Figure 2. Back-gated CNFET structure with high-dielectric HfO2 gate insulation

A schematic of the proposed back-gated CNFET structure is shown in Figure 2. The CNFET structure is that of a standard back-gated MOSFET with an n-doped SWCNT as the active channel. The chiral numbers (n, m), which describe the different SWCNTs, are responsible for their electronic properties:

Ch = n a1 + m a2 (1)

where a1 and a2 are the unit vectors along the unit cell in the 2D graphene sheet. The chiral vector Ch is the linear combination of the lattice vectors a1 and a2 with m and n as integer indices [8]. In a SWCNT, the diameter and bandgap depend on the chiral indices and hence are calculated with the help of (n, m) [9].
The SWCNT diameter is derived from the chiral indices (equation 2), where a is the distance between neighboring carbon atoms, with a value of 0.142 nm [9]. The bandgap of a SWCNT depends on its diameter [10] and is calculated using the carbon π-π bond energy Vppπ, with a value of 3.033 eV [11]. The transport mechanism in a SWCNT, which depends on the intrinsic charge carrier concentration, can be calculated as in [11], where D(E) and f(E) represent the density of states and the Fermi-Dirac distribution in the SWCNT, respectively. The D(E) and f(E) values are calculated using equations (5) and (6), where EC is the conduction band energy. The density-of-states value is calculated for the first sub-band only, since in SWCNTs the contribution of the next-level sub-bands is negligible. T is the operating temperature, K is the Boltzmann constant, and EF is the first sub-band Fermi level in the SWCNT. Substituting (5) and (6) into (4) gives the carrier concentration, and the channel current is calculated using the expression in equation (9) [12]. In an intrinsic SWCNT, the Fermi levels are zero on the source and drain side. By introducing n-type doping into the SWCNT, the nature of the CNFET is affected, and hence the current along the channel also changes accordingly. The shift in the Fermi level EF due to doping is given as in [11], where N represents the doping concentration. With VS and VD as the source and drain bias, respectively, the channel current IDS is calculated using the Landauer equation [15], where h is Planck's constant and q is the electronic charge. Ψ(0) denotes the surface potential on the top and bottom of the SWCNT.

III. Sentaurus TCAD Modeling and Simulation

Technology computer-aided design (TCAD) is a well-established discipline and optimization technique used in electronic device simulation [14].

A. CNFET Model

All the required device dimensions are listed in Table I. Following this, to develop the simulation model, an ordered description of the statements loaded in the sde_dve.cmd file, together with the region-wise doping types and doping concentrations, is defined and listed in Table II. The electrical properties of the CNFET are based on the simultaneous solution of various partial differential equations. During the device simulation, these equations are evaluated at the intersection points of the mesh defined across the various regions of the CNFET, shown in Figure 4. As seen in Figure 4, the active layer is defined with a denser mesh to obtain optimized simulation results, considering the speed and accuracy of the TCAD software. The complete 2D CNFET structure established in the SDE tool is shown in Figure 5, without any physical model applied to it. Therefore, to carry out the CNFET simulation further, the SDEVICE tool is used to initialize appropriate physical and mathematical models across the different regions of the CNFET.

B. CNFET Simulation

The 2D numerical simulation of the CNFET is carried out in TCAD to calculate the electrical parameters. The simulator makes use of the Poisson and drift-diffusion formulations discretized over a multidimensional numerical mesh [14]. The Coupled and Quasistationary commands are used to solve a set of equations and to ramp a solution from one boundary condition to another, respectively. The Coupled command activates a Newton-like solver over a set of equations that includes the Poisson equation, the continuity equations, and the different thermal and energy equations [14]. The simulator solves these basic equations using the drift-diffusion model, evaluating the potential and the hole and electron concentrations under appropriate assumptions, and calculates the drain current at a specified gate bias.
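Before moving on to the optical part of the simulation set-up, the chirality-dependent quantities used above (the diameter of equation 2, the diameter-dependent bandgap, and the metallic or semiconducting character) can be illustrated numerically. The sketch below is a hedged illustration based on the standard first-order tight-binding relations, not necessarily the exact expressions numbered in this paper; the constants a = 0.142 nm and Vppπ = 3.033 eV are the values quoted in the text, and the (n, 0) zig-zag indices are examples in the spirit of Table IV rather than its actual entries.

```python
import math

A_CC = 0.142      # carbon-carbon distance a, nm (value quoted in the text)
VPP_PI = 3.033    # carbon pi-pi bond energy, eV (value quoted in the text)

def swcnt_properties(n, m):
    """Diameter (nm), electronic character, and bandgap (eV) of an (n, m) SWCNT
    from the standard first-order tight-binding relations."""
    d = math.sqrt(3.0) * A_CC / math.pi * math.sqrt(n * n + n * m + m * m)
    if (n - m) % 3 == 0:
        return d, "metallic", 0.0                 # metallic tubes: no usable gap
    e_gap = 2.0 * A_CC * VPP_PI / d               # roughly 0.86 eV / d[nm]
    return d, "semiconducting", e_gap

# Example zig-zag (m = 0) tubes; only the semiconducting ones qualify as channel material.
for n in (10, 12, 13, 16, 19):
    d, kind, e_gap = swcnt_properties(n, 0)
    print(f"({n},0): d = {d:.3f} nm, {kind}, Eg = {e_gap:.3f} eV")
```

The inverse relation between diameter and bandgap visible in this sweep is the same dependence invoked later to explain how the simulated drain current changes as the tube diameter, and hence the channel thickness, is varied.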
Given the very small dimensions of the CNFET, some quantum correction terms are added to the simulation so that the results are closer to the real condition [15,16]. Optical generation is computed using the transfer matrix method (TMM), which is based on the propagation of plane waves through layered media in the CNFET. Shockley-Read-Hall (SRH) and Auger recombination with doping dependence are used to calculate recombination during the simulation.

IV. Simulation Results

Optical generation in the CNFET is activated by illuminating from the top using a monochromatic light source whose spectral range is varied from 600 nm to 1550 nm. The TMM optical solver requires the use of the complex refractive index model and various excitation variables; all the excitation variables specified in the TMM are listed in Table III.

A. Gate Voltage Dependence

Figure 6 presents the plots of the photocurrent under different bias conditions for the CNFET in the dark and under illumination. It shows that the drain current increases with increasing illumination. This is because, with illumination, a forward-biasing potential is developed across the SWCNT active layer, which increases the channel potential. Also, the photosensitivity of the CNFET is highly dependent on the bias voltage; from the plot it can be seen that both the photocurrent and the dark current increase with increasing bias voltage.

B. Channel Thickness Dependence

The channel thickness depends on the diameter of the SWCNT, as shown in equations 1 and 2. For the CNFET simulation, different zig-zag SWCNTs are used, as tabulated in Table IV. Transfer characteristics between the input voltage Vgs and the output drain current Ids are derived for a constant Vds = 0.01 V. The simulation result for both CNFETs with a 100 nm channel length is shown in Figure 7. The drain current increases with the input voltage Vgs. It is observed from this plot that the SWCNT provides high channel mobility, so the current increases and the response is more linear. However, the drain current decreases with growing tube diameter, since in a SWCNT the bandgap is inversely proportional to the diameter, which is responsible for the reduced carriers across the active channel.

C. Gate Oxide Dependence

A relatively low-dielectric oxide such as SiO2 brings limitations for its use as a gate oxide in scaled CNFETs due to high leakage current. With high-dielectric gate oxides, small-thickness deposition is possible, which allows efficient charge injection into the active channel while reducing the leakage current. This further motivated the exploration of the integration of high-κ films (~20-30) as the gate oxide. The dielectric materials considered for the gate oxide during the simulation are listed in Table V. The gate-oxide-dielectric-dependent output characteristics of the CNFET are shown in Figure 8, where the thickness of the gate insulator is kept at 10 nm. The results are obtained by ramping the gate bias Vgs from 0 to 5 V while keeping the drain bias Vds fixed at 0.01 V. Figure 8 shows the Ids-Vgs characteristics of the CNFET for the SiO2 and HfO2 gate oxide materials. For the relative dielectric constant of 3.9, the drain current in the CNFET is 10 μA at Vgs = 3 V. For the high dielectric constant, the drain current increases to 0.388 μA. This rise in the drain current is because of the high-dielectric HfO2 oxide. Sheikh Ziauddin Ahmed et al. found a more than 50% rise in ON current when HfO2 is used as the gate dielectric [7]. This is consistent with our results obtained with the Sentaurus TCAD simulation software.
For SiO2 dielectrics, to obtain an electrostatic coupling capacitance approaching the quantum capacitance of a SWCNT, an ultra-thin 1-2 nm SiO2 layer is required, since in a SWCNT the electrostatic coupling capacitance depends logarithmically on the oxide thickness. However, such a thin SiO2 dielectric causes significant leakage currents, a major transistor-scaling problem.

D. Current Variation at Various Wavelengths

Figures 9 and 10 depict the variation of the current Ids with the voltage Vds for the low- and high-dielectric gate oxides at various wavelengths, showing that both models are wavelength insensitive up to 500 nm. A prominent decrease in Ids is observed when the wavelength increases from 1000 nm to 1550 nm because of the minority carrier lifetime. Table VI shows the comparison of Ids at various wavelengths for the developed model and the reference model [2]. It shows that the optically generated current of the developed model is comparable to that of the reference model for the same dimensions and optical illumination.

Through careful optimization of the CNFET structure using compact models in Sentaurus TCAD, the impact of the gate-oxide dielectric strength on optical generation in the CNFET is examined under varied illumination conditions. It has been observed that the high-dielectric HfO2 reduces the leakage current and significantly improves the output current. The CNFET with HfO2 shows improved performance compared to SiO2, with an incremental increase of 50% under the same operating conditions. The input as well as output current characteristics of the CNFET photodetector show that the drain current is sensitive to the SWCNT thickness and biasing. It has been observed that the output dark and illuminated drain currents decrease with increasing SWCNT thickness. The results obtained from the TCAD simulation of the CNFET photodetector are encouraging. It is expected that TCAD simulation can be used to model the SWCNT material, and thus CNFET phototransistors, for analysis, consequently saving actual production time and cost. The CNFET phototransistor performance can be further improved by optimizing the trap densities at the heterojunction between the channel and the oxide layer.
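As a back-of-the-envelope complement to the gate-oxide comparison above, the benefit of HfO2 at a given physical thickness can be expressed through the areal oxide capacitance and the equivalent oxide thickness (EOT). The sketch below is only indicative: it uses a planar parallel-plate approximation, whereas the real gate-to-nanotube electrostatics is closer to a wire-over-plane geometry with the logarithmic thickness dependence noted above, and the HfO2 permittivity of 22 is a typical literature value rather than one taken from this paper.

```python
EPS0 = 8.854e-12                 # vacuum permittivity, F/m

def oxide_capacitance(kappa, t_ox_nm):
    """Parallel-plate gate-oxide capacitance per unit area, F/m^2."""
    return kappa * EPS0 / (t_ox_nm * 1e-9)

def equivalent_oxide_thickness(kappa, t_ox_nm, kappa_sio2=3.9):
    """SiO2 thickness (nm) that would give the same areal capacitance."""
    return t_ox_nm * kappa_sio2 / kappa

oxides = {"SiO2": 3.9, "HfO2": 22.0}    # HfO2 value: typical literature figure
t_ox_nm = 10.0                           # gate-insulator thickness used in the simulation

for name, kappa in oxides.items():
    c_ox = oxide_capacitance(kappa, t_ox_nm)
    eot = equivalent_oxide_thickness(kappa, t_ox_nm)
    print(f"{name}: C_ox = {c_ox * 1e3:.2f} mF/m^2, EOT = {eot:.2f} nm")
```

In this approximation, a 10 nm HfO2 film couples to the channel roughly as strongly as a sub-2 nm SiO2 film — the regime the text identifies as necessary for approaching the nanotube's quantum capacitance — but without the tunneling leakage of an ultra-thin SiO2 layer.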
3,530.8
2020-12-01T00:00:00.000
[ "Physics" ]
Voluntary Exercise Stabilizes Established Angiotensin II-Dependent Atherosclerosis in Mice through Systemic Anti-Inflammatory Effects We have previously demonstrated that exercise training prevents the development of Angiotensin (Ang) II-induced atherosclerosis and vulnerable plaques in Apolipoprotein E-deficient (ApoE-/-) mice. In this report, we investigated whether exercise attenuates progression and promotes stability in pre-established vulnerable lesions. To this end, ApoE-/- mice with already established Ang II-mediated advanced and vulnerable lesions (2-kidney, 1-clip [2K1C] renovascular hypertension model), were subjected to sedentary (SED) or voluntary wheel running training (EXE) regimens for 4 weeks. Mean blood pressure and plasma renin activity did not significantly differ between the two groups, while total plasma cholesterol significantly decreased in 2K1C EXE mice. Aortic plaque size was significantly reduced by 63% in 2K1C EXE compared to SED mice. Plaque stability score was significantly higher in 2K1C EXE mice than in SED ones. Aortic ICAM-1 mRNA expression was significantly down-regulated following EXE. Moreover, EXE significantly down-regulated splenic pro-inflammatory cytokines IL-18, and IL-1β mRNA expression while increasing that of anti-inflammatory cytokine IL-4. Reduction in plasma IL-18 levels was also observed in response to EXE. There was no significant difference in aortic and splenic Th1/Th2 and M1/M2 polarization markers mRNA expression between the two groups. Our results indicate that voluntary EXE is effective in slowing progression and promoting stabilization of pre-existing Ang II-dependent vulnerable lesions by ameliorating systemic inflammatory state. Our findings support a therapeutic role for voluntary EXE in patients with established atherosclerosis. Introduction Regular exercise training is an essential strategy for both primary and secondary cardiovascular disease prevention [1][2][3][4][5]. In a recent meta-analysis of prospective cohort studies, moderate and high levels of physical activity have been associated with a 12% and 27% relative risk reduction of coronary heart disease, respectively [5]. Along the same line, acute myocardial infarctions patients randomized to exercise-based cardiac rehabilitation had a 47% risk reduction for reinfarction, 36% for cardiac mortality, and 47% for all cause mortality, as recently revealed in a systemic review and meta-analysis of randomized controlled trials [3]. Exercise induces cardio-vascular benefits partly through its direct positive impact on atherosclerosis development and progression. Indeed, exercise is effective in decreasing carotid artery intima-media thickness in healthy asymptomatic subjects as well as in subjects with cardiovascular risk factors and/or disease [6,7]. Moreover, we and others have shown that exercise, including running as well as swimming, delays atherosclerosis progression, stabilizes, and reduces rupture of atherosclerotic plaque in Apolipoprotein E (ApoE) and low density lipoprotein receptor (LDLr) knockout mice [8][9][10][11][12][13][14]. Regression of experimental pre-existing atherosclerotic plaques has also been reported following exercise [15,16]. The renin-angiotensin system, and in particular its final product Angiotensin (Ang) II, plays a pivotal role in atherogenesis and plaque vulnerability [17]. 
We recently reported that exercise prevents the development of Angiotensin (Ang) II-induced advanced atherosclerosis and plaque vulnerability, using the 2-kiney, 1-clip ApoE -/mouse model [18]. In the present study, we investigated whether exercise has any effect on limiting progression of already established vulnerable Ang II-dependent atherosclerotic lesions. Mouse model of Ang II-induced advanced and vulnerable plaque and voluntary running wheel exercise Male and female C57BL/6J ApoE -/mice originally purchased from Charles River Laboratories (L'Arbresle, France) were used. Animals were housed in local animal facility under a 12-h light/dark cycle in a temperature-controlled environment and ad libitum access to normal chow diet (Kliba Nafag, Switzerland) and water throughout the study. All procedures were performed according to the Swiss Ethical Principles and Guidelines for Experiments on Animals, and with approval of local Institutional Animal Committee ("Service de la Consommation et des Affaires Vétérinaires du canton de Vaud"). All efforts were made to minimize suffering. At 12-14 weeks of age, mice underwent left renal artery clipping (2-kidney, 1-clip [2K1C] renovascular hypertension model) under anaesthesia to induce the formation of advanced and vulnerable Ang II-dependent lesions as previously described [18][19][20]. Briefly, mice were anesthetized by halothane inhalation (1% to 2% in oxygen), the left kidney was exposed and reduction in left renal perfusion was induced by placing a U-shaped stainless steel clip of 0.12 mm internal diameter around the renal artery. Renal perfusion reduction induces increased renin secretion from juxtaglomerular cells of clipped kidney leading to increased Ang II production and thus development of chronic Ang II-dependent systemic hypertension. Besides hypertension, 2K1C ApoE -/mice with high circulating Ang II develop advanced atherosclerotic lesions with vulnerable phenotype [18][19][20]. At 4 weeks, four mice were euthanatized to confirm the presence of such atherosclerotic lesions in aortic sinus (data not shown). Prior to surgery and for 2 days following surgery, mice were subcutaneously injected with Temgesic (0.01 mg/kg) for pain relief. Additionally, animals were treated with Dafalgan (200 mg/kg) via drinking water during 7 days following surgery. The remaining 4-week 2K1C ApoE -/mice were randomly assigned to either a voluntary exercise group (EXE; n = 12 total, n = 7 males and n = 6 females) or a control sedentary group (SED; n = 14 total, n = 5 males and n = 9 females) for additional 4 weeks. EXE mice were individually housed and had free 24-h/day access to a running wheel (12 cm diameter). Hemodynamic parameters, plasma renin activity, and plasma total cholesterol measurements At the end of the study (8 weeks after renal artery clipping), mean blood pressure (MBP) and heart rate (HR) were measured in conscious mice by using an intra-carotid catheter connected to a pressure transducer as described elsewhere [18][19][20]. Blood sample obtained through the catheter were used to determine plasma renin activity (PRA) by radioimmunoassay, and plasma total cholesterol levels using enzymatic methods [18][19][20]. Mice were then euthanized with sodium pentobarbital, tissues rapidly harvested, and processed for later analysis. 
En face analysis of atherosclerotic plaque extension After fixation with 10% neutral formalin, aortas were removed from surrounding connective tissues, opened longitudinally, pinned on a petri dish with a black silicone surface, and stained with Oil red O to visualize atherosclerotic plaque. Pictures of stained aortas were taken with a digital camera (Coolpix, Nikon, Japan). The total aortic surface area and the atherosclerotic plaque areas of each pinned aorta were measured using the computer-assisted image analysis software Leica Qwin (Leica Systems, Wetzlar, Germany), and the percentage of atherosclerotic lesion to total aortic surface area was calculated [19,20]. Histologic, immunohistochemical, and morphometric analyses of atherosclerotic plaques After fixation with 10% neutral formalin, hearts were paraffin embedded and cross-sectioned (3 μm) until reaching the aortic sinus. Aortic sinus sections were subsequently stained with Movat pentachrome or with Sirius red for lipid core quantification and total collagen plaque content assessment, respectively. Additionally, the relative plaque content of macrophages and smooth muscle (SM) cells was determined by immunostaining with primary anti-mouse Mac-2 (Cedarlane Labs, Ontario, Canada, dilution 1/200) and α-SM actin (gift of Dr M-L Piallat-Bochaton, University of Geneva, Switzerland, dilution 1/200) antibodies, respectively, followed by appropriate biotinylated secondary antibodies. Antibodies were revealed with a peroxidase-linked avidin-biotin detection system. Images of sections were captured with a digital camera, and specific staining was expressed as a percentage of the total cross-sectional plaque area using the Qwin software [18][19][20]. The histological plaque stability score was calculated as follows: (SM cell area + collagen area)/(macrophage area + lipid core area) [21,22]. Real-time reverse transcription PCR and gene expression analysis Total RNA from aortas and spleens was isolated (RNeasy Mini kit, Qiagen, Switzerland) and reverse transcribed into cDNA (iScript™ cDNA Synthesis Kit, Bio-Rad, Switzerland) following the manufacturer's instructions. Real-time RT-PCR (CFX96 Real-Time PCR detection system, Bio-Rad, Switzerland) using the iQ™ SYBR Green PCR Supermix was used to detect mRNA expression of VCAM-1, ICAM-1, IL-1β, IL-18, TNF-α, IL-6, IL-1ra, IL-4, IL-10, iNOS, Arg1, T-bet, GATA3 and 36B4. Primer sequences are listed in Table 1. Expression of target genes was normalized to the expression of the housekeeping 36B4 gene using the comparative Ct method. All data in EXE mice were expressed as fold change over SED mice, arbitrarily set at 1. Measurements of circulating inflammatory cytokines Plasma levels of IL-1β, IL-18 and IL-4 were quantified using commercial mouse enzyme-linked immunosorbent assay (ELISA) kits (R&D Systems, MBL-Medical and Biological Laboratories, and eBioscience), respectively, following the manufacturers' instructions. Statistical analysis No significant differences between sexes were found and results were averaged. Data are presented as mean ± SD or as interquartile ranges ± minimum and maximum values. Statistical analysis was performed using the unpaired Student's t-test. Body weight (BW) and running distance were analyzed using one-way ANOVA followed by Tukey's multiple comparisons test. A value of P<0.05 was considered statistically significant. Effect of voluntary exercise on physiological parameters Physiological parameters of 8-week 2K1C ApoE-/- SED and EXE mice are presented in Table 2. 
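The two quantities computed above, the comparative-Ct fold change (normalized to 36B4, expressed relative to SED) and the histological plaque stability score, reduce to simple arithmetic. The sketch below illustrates that arithmetic only; all Ct values and tissue areas are hypothetical and are not taken from this study.

```python
import numpy as np

def ddct_fold_change(ct_target_exe, ct_ref_exe, ct_target_sed, ct_ref_sed):
    """Comparative Ct (2^-ddCt) fold change of EXE over SED, normalized to a housekeeping gene."""
    d_ct_exe = np.mean(ct_target_exe) - np.mean(ct_ref_exe)
    d_ct_sed = np.mean(ct_target_sed) - np.mean(ct_ref_sed)
    return 2.0 ** -(d_ct_exe - d_ct_sed)

def plaque_stability_score(sm_area, collagen_area, macrophage_area, lipid_core_area):
    """(SM cell area + collagen area) / (macrophage area + lipid core area)."""
    return (sm_area + collagen_area) / (macrophage_area + lipid_core_area)

# Hypothetical example values, purely illustrative
print(ddct_fold_change([24.1, 24.3], [18.0, 18.1], [23.2, 23.4], [18.0, 17.9]))  # fold change vs SED
print(plaque_stability_score(12.0, 30.0, 8.0, 10.0))                             # stability score
```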
MBP, HR, and PRA did not significantly differ between the two groups. Total cholesterol plasma levels were significantly reduced by 32% in EXE mice (P<0.05 versus SED). There was no significant difference in initial BW (i.e., before EXE) between groups in either males or females. Effect of voluntary exercise on local and systemic pro- and anti-inflammatory mediators Quantitative RT-PCR analysis revealed that expression of VCAM-1 and ICAM-1 in atherosclerotic aortic tissue decreased in EXE compared to SED mice (0.38-fold, P = 0.085 and 0.56-fold, P<0.05, respectively, Fig 3). Consistent with the RT-PCR results, plasma IL-18 concentration decreased in EXE mice compared to SED mice (148.1 ± 52.1 pg/ml versus 198.4 ± 59.2, p = 0.09, n = 7-9 mice per group). IL-1β and IL-4 levels were not detectable in plasma in either SED or EXE mice. As shown in Fig 5B, there was no significant difference in iNOS, Arg1, iNOS/Arg1, T-bet, GATA3 and T-bet/GATA3 splenic expression between the two groups. Discussion To our knowledge, the present study is the first one to examine whether EXE is effective in limiting progression of Ang II-induced pre-existing vulnerable atherosclerotic lesions. The main findings can be summarized as follows: (i) voluntary EXE limited further atherosclerosis progression and promoted stability of established Ang II-dependent plaques; (ii) voluntary EXE reduced aortic VCAM-1 and ICAM-1 expression; (iii) voluntary EXE reduced splenic expression of the pro-inflammatory cytokines IL-1β and IL-18, while increasing that of the anti-inflammatory cytokine IL-4; (iv) voluntary EXE did not modulate aortic and splenic expression of Th1/Th2 and M1/M2 polarization markers; and finally, (v) voluntary EXE reduced total plasma cholesterol levels, but not blood pressure or plasma renin activity. Atherosclerotic plaque destabilization and subsequent rupture is the main pathologic mechanism responsible for the majority of cardiovascular events. Herein, we showed that voluntary EXE stabilized pre-existing Ang II-dependent plaques mainly by reducing lesional macrophage infiltration. This result corroborates earlier reports in both clipped and non-clipped ApoE-/- and/or LDLr-/- mice [9,10,12,13,16]. The reduction in macrophage plaque content following EXE correlated with a reduction in vascular expression of the endothelial adhesion molecules ICAM-1 and VCAM-1 (although the decrease in VCAM-1 did not reach significance). Since these molecules are known to participate in atherogenesis by promoting the recruitment of monocytes, which differentiate into macrophages in the arterial intima, we propose that the observed decrease in their expression constitutes a mechanism of action by which voluntary EXE prevents plaque inflammation. Cytokines such as IL-1β, IL-18, IL-6 and TNF-α are key mediators of the chronic vascular inflammatory response underlying several aspects of atherosclerosis and cardiovascular disease [23][24][25][26]. Numerous animal studies have demonstrated that decreased vascular expression of such pro-inflammatory cytokines, in response to pharmacological treatments, is associated with prevention of the atherosclerotic burden [27][28][29]. In the present study, voluntary EXE failed to modulate vascular expression of the above-mentioned cytokines. Moreover, no difference in the vascular anti-inflammatory/anti-atherosclerotic cytokines IL-1ra and IL-10 was observed between EXE and SED mice. These results indicate that voluntary EXE has no positive impact on local inflammation in our model. 
In accordance with these findings, we previously found no effect of 4-week forced treadmill EXE on local vascular IL-6, IL-1β, TNF-α, IL-18, IL-10 and IL-1ra expression despite Ang II-dependent vulnerable plaque prevention in 4-week 2K1C ApoE-/- mice (unpublished data). Thus, modulation of vascular inflammation does not seem to be a mechanism of action by which voluntary EXE promotes stabilization of established Ang II-dependent plaques. Atherosclerosis is a systemic disease, and there is compelling clinical and experimental evidence showing that EXE has systemic anti-inflammatory effects [30,31]. For example, in patients with stable coronary artery disease, circulating C-reactive protein and IL-6 levels were significantly reduced after regular EXE by 41 and 18%, respectively [32]. In a recent randomized clinical trial, Ribeiro et al. also demonstrated increased circulating IL-10 levels after an 8-week aerobic EXE program in post-myocardial infarction patients [33]. Along the same line, a reduction in systemic IL-1 and IL-6 levels and an increase in IL-10 levels were reported in coronary heart disease patients in response to a 12-week aerobic EXE training program [34]. A reduction in serum pro-inflammatory cytokines in response to EXE has also been reported in ApoE-/- mice with advanced atherosclerosis [8,9]. Based on these considerations, we hypothesized that voluntary EXE would ameliorate the systemic inflammatory status of our mice. Interestingly, voluntary EXE favorably modified the systemic balance of pro- and anti-inflammatory cytokines, as evidenced by reduced IL-1β and IL-18 and increased IL-4 expression in spleen tissue. A reduction in plasma IL-18 levels was also observed in response to voluntary EXE. Taken together, our data suggest that in our mouse model voluntary EXE exerts atheroprotection through systemic rather than local anti-inflammatory properties. Immune cells, from both innate and adaptive immunity, are present throughout all stages of atherosclerotic lesion development. The innate immune cells in lesions are predominantly monocytes/macrophages, while the adaptive ones are mostly CD4+ T cells [35,36]. Naive CD4+ T cells have the ability to differentiate into various T helper subtypes. These include in particular the Th1-cell lineage, which is pro-atherogenic via the secretion typically of IFN-γ and TNF-α, and the Th2-cell lineage, secreting IL-4, IL-10 and IL-13, whose role in atherosclerosis remains controversial [35,36]. Like CD4+ T cells, macrophages can alter their phenotypes and functions in response to changes in the microenvironment. In the setting of atherosclerosis, the concept of macrophage polarization has gained increasing attention given recent work showing that different stages in the progression of atherosclerosis are associated with the presence of distinct macrophage subtypes (i.e., pro-inflammatory/atherogenic M1 or classically activated versus anti-inflammatory M2 or alternatively activated macrophages) [37]. To further understand the cellular mechanisms underlying our observations, we examined expression of genes associated with Th1- and Th2-polarization (T-bet and GATA3, respectively) as well as M1- and M2-polarization (iNOS and Arg1, respectively) in aortic and spleen tissues. Neither Th1/Th2- nor M1/M2-associated gene expression was modulated by voluntary EXE. These findings corroborate recent investigations from our group showing no effect of 4-week forced treadmill EXE in 2K1C ApoE-/- mice on M1 and M2 polarization (unpublished data). 
In conclusion, we showed that voluntary EXE is an effective therapeutic strategy to slow down progression and promote stabilization of pre-existing Ang II-mediated atherosclerotic lesions by ameliorating the systemic inflammatory state.
Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks We investigate MT evaluation metric performance on adversarially-synthesized texts, to shed light on metric robustness. We experiment with word- and character-level attacks on three popular machine translation metrics: BERTScore, BLEURT, and COMET. Our human experiments validate that automatic metrics tend to overpenalize adversarially-degraded translations. We also identify inconsistencies in BERTScore ratings, where it judges the original sentence and the adversarially-degraded one as similar, while judging the degraded translation as notably worse than the original with respect to the reference. We identify patterns of brittleness that motivate more robust metric development. Introduction Automatic evaluation metrics are a key tool in modern-day machine translation (MT) as a quick and inexpensive proxy for human judgements. The most common and direct means to evaluate an automatic metric is to test its correlation with human judgements on outputs of MT systems. However, as such metrics are commonly used to inform the development of new MT systems and even used as training and decoding objectives (Wieting et al., 2019; Fernandes et al., 2022), it is inevitable for them to be applied to out-of-distribution texts that do not frequently occur in existing system outputs. The rapid advancement of MT systems and metrics, as well as the prospect of incorporating MT metrics in the training and generation process, motivates investigation into MT metric robustness. In this work, we examine textual adversarial attacks (TAAs) as a means to synthesize challenging translation hypotheses where automatic metrics systematically underperform. We experiment with word- (Li et al., 2021; Jia et al., 2019; Feng et al., 2018) and character-level (Gao et al., 2018) attacks on three popular, high-performing automatic MT metrics: BERTScore (Zhang et al., 2020), BLEURT (Sellam et al., 2020), and COMET (Rei et al., 2020). We construct situations where the metrics disproportionately penalize adversarially-degraded translations. To validate such situations, we collect a large set of human ratings on both original and adversarially-degraded translations. As BERTScore can also be seen as a measure of semantic similarity between any two sentences, we also explore another scenario of inconsistency where BERTScore judges the original and adversarially-perturbed translations as similar while judging the perturbed translation as notably worse than the original one with regard to the reference translation. Examples are shown in Figure 1. We identify mask-filling and word substitution as effective means to generate perturbed translations where BERTScore, BLEURT, and COMET over-penalize degraded translations and BERTScore is self-inconsistent. In particular, BLEURT and COMET are more susceptible to perturbations in data with higher-quality translations. Our findings serve as a basis for developing more robust automatic MT metrics. 
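As an illustration of the kind of comparison probed here (and not part of the reported experiments), a single original/perturbed pair can be scored against a reference with the off-the-shelf bert-score package; the sentences below are invented, and the paper's own runs go through the Hugging Face evaluate wrappers described in Appendix B.

```python
# pip install bert-score
from bert_score import score

reference = ["The committee approved the new budget on Friday."]
original  = ["The committee approved the new budget on Friday."]
perturbed = ["The committee authorized the new budget on Friday."]  # one-word perturbation

# F1 BERTScore of each hypothesis against the reference
_, _, f1_orig = score(original, reference, lang="en")
_, _, f1_pert = score(perturbed, reference, lang="en")

print(f"original  vs reference: {f1_orig.item():.3f}")
print(f"perturbed vs reference: {f1_pert.item():.3f}")
# A large drop for a minor, meaning-preserving edit is the kind of
# over-penalization this paper probes for.
```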
Methods Formulation Most TAA methods probe for the overreaction of the victim model f (Wang et al., 2022). Given the original text x and associated label y, the methods generate a bounded perturbed x′ with label y′. The perturbation is assumed to be label-preserving (i.e., y′ = y). Robust behavior would be f(x′) = y for classification or f(x′) ≈ y for regression, and the attack is considered successful iff f(x′) is notably different from y. The label-preserving assumption is usually enforced by a set of constraints. In our task, given the original translation x and metric rating y, we aim to generate a perturbed text x′ that misleads the metric f such that f(x′) is notably different from y. The label-preserving assumption amounts to equivalence in meaning and fluency, which is commonly enforced through sentence embedding distance (Li et al., 2021) and perplexity (Jia et al., 2019; Alzantot et al., 2018). However, semantic equivalence can clearly not be adequately enforced in our case: the MT metric can roughly be seen as a model-based measure of semantic similarity, similar to the sentence embedding model enforcing the semantic constraint. When we have a "successful" attack where f(x′) is notably different from y, we cannot be certain whether it is because we have a faulty metric (where the ground truth y′ is close to y but f(x′) is notably different from y′) or a faulty constraint (where the perturbed x′ is semantically different from x and thus y′ should be different from y). We explore two approaches with regard to this issue. Firstly, we experiment with forgoing the semantic constraint and searching for x′ such that f(x′) is notably lower than y with a minimal number of perturbations under only the fluency constraint. The intuition is that when the number of perturbations is small, humans are likely to rate the extent of degradation as less significant than the automatic metric. To validate whether the assumption holds, we collect continuous human ratings on meaning preservation against the reference translation following Graham et al. (2013, 2014, 2017) and compare the extent of degradation as judged by humans and that as judged by the metrics. We focus on meaning preservation as it is aligned with the training objectives of BLEURT and COMET. We describe further details in Appendix A. Code and data are available at https://github.com/i-need-sleep/eval_attack. 
Secondly, we investigate a scenario where BERTScore is self-inconsistent by using itself as a semantic similarity constraint. As BERTScore can be seen as a generic distance metric of semantic similarity, we can use it to measure the distance between the original and the perturbed translations, the original translation and the reference, as well as the perturbed translation and the reference. When the original and perturbed translations are measured as similar, the robust behavior would be for them to have similar ratings with regard to the reference. We search for violations against this where BERTScore(perturbed, reference) is notably smaller than BERTScore(original, reference), but BERTScore(perturbed, original) is close to BERTScore(original, original), which we use as a maximum score of similarity. Adversarial Attack Setup We use the German-to-English system outputs from WMT 12, 17, and 22 (Callison-Burch et al., 2012; Bojar et al., 2017; Kocmi et al., 2022), and randomly select 500 sentences for each system for each year, totalling 19K (source, translation, reference) tuples. For the sake of efficiency, we use MT outputs whose associated references are longer than 10 words. We normalize each metric such that their outputs on this dataset have a mean of 0 and a standard deviation of 1. When probing for overpenalization, we consider three widely used metrics: BERTScore (Zhang et al., 2020), BLEURT (Sellam et al., 2020), and COMET (Rei et al., 2020). We constrain the perturbed sentence to have an increase in perplexity of no more than 10 as measured by GPT-2 (Radford et al., 2019), and search for cases where the perturbed translation causes a decrease of more than 1 standard deviation in the metric rating. When probing for self-inconsistency with BERTScore, we constrain the difference between BERTScore(perturbed, original) and BERTScore(original, original) to be less than 0.3 after normalization, and search for cases where the perturbed translation causes a decrease of more than 0.4 in BERTScore. For both setups, we apply a range of black-box search methods to generate perturbations, including word-level attacks (CLARE (Li et al., 2021), the Faster Alzantot Genetic Algorithm (Jia et al., 2019), Input Reduction (Feng et al., 2018)) and character-level attacks (DeepWordBug (Gao et al., 2018)). CLARE applies word replacements, insertions, and merges by mask-filling, the Faster Alzantot Genetic Algorithm applies word substitutions, Input Reduction applies word deletions, and DeepWordBug applies character swapping, substitution, deletion, and insertion. We further describe these methods in Appendix B. Probing for Overpenalization We generate a total of 102,176 perturbed translations fitting our criteria. The breakdown across the search methods, metrics, and years is shown in Table 1. Table 1: The percentages and numbers of perturbations fitting our criteria for each metric for each year. The WMT 12, 17, and 22 splits contain 8K, 5.5K and 5.5K original system outputs, respectively. All three metrics seem insensitive to character-level perturbations, with DeepWordBug returning a small number of eligible perturbations for each year. The more sophisticated CLARE and Faster Alzantot Genetic Algorithm return a larger ratio of eligible perturbations for BERTScore and BLEURT. On the contrary, COMET appears more sensitive to word deletions, with Input Reduction returning eligible perturbations for more than 50% of the system outputs. The ratio of eligible perturbations fluctuates only slightly for different years. 
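A minimal sketch of the over-penalization eligibility check described above is given below. The thresholds (perplexity increase of at most 10, normalized score drop of more than 1) come from the setup above, but metric_score, the normalization statistics, and the way the perplexity difference is computed here are all stand-ins, not the paper's exact implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """GPT-2 perplexity of a single sentence (one possible definition)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return float(torch.exp(loss))

def eligible(original, perturbed, reference, metric_score, mean, std) -> bool:
    """metric_score(hyp, ref) stands in for BERTScore / BLEURT / COMET;
    mean and std are the normalization statistics computed on the 19K tuples."""
    z_orig = (metric_score(original, reference) - mean) / std
    z_pert = (metric_score(perturbed, reference) - mean) / std
    fluent_enough = perplexity(perturbed) - perplexity(original) <= 10.0
    big_drop = (z_orig - z_pert) > 1.0  # more than 1 standard deviation
    return fluent_enough and big_drop
```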
We collect human ratings for a balanced subset of eligible perturbed translations and corresponding original translations. We aggregate the normalized ratings across annotators, resulting in 2,800 qualifying ratings respectively for original and perturbed sentences. The Pearson r correlations with human ratings are shown in Figure 2. We observe that the attacks lead to worsened correlations in most cases, with CLARE and the Faster Alzantot Genetic Algorithm leading to bigger degradations, suggesting mask-filling and word substitution as effective means of attack. All three metrics are particularly susceptible to perturbations on the WMT 22 data where the original translations are of higher quality. Both CLARE and the Faster Alzantot Genetic Algorithm lead to degradations of over 0.2 in Pearson correlations for BLEURT and over 0.4 for COMET. This is likely because BLEURT and COMET are trained on data from previous years and cannot easily generalize to higher-quality translations with minor modifications. To investigate the cause of the reduced correlations, we compare the degradation of translation quality as measured by the metrics and as judged by humans. Results are shown in Figure 5. We observe that, in most cases, the metrics assign higher differences between the original and perturbed translations. To quantify this observation, we perform a one-sided Wilcoxon rank-sum test on the subsets of the data corresponding to the 36 combinations of metrics, years, and attack methods. Under a significance level of p < 0.05, for 25 out of the 36 combinations, the degradation as measured by the metrics is significantly larger than that as measured by humans. This confirms our assumption of overpenalization. We also find that in most cases, the metrics penalize different perturbation instances more consistently than humans. As an exception, BLEURT and COMET are significantly more inconsistent when measuring CLARE-generated degradations for WMT 22. This, again, suggests vulnerability against perturbed, high-quality translations outside the models' training sets. We also investigate the influence of sentence length on overpenalization. Changing a word in a short sentence may result in a larger score difference, and this difference might differ between human and metric scores. We compare the length of MT outputs (in the number of words) before perturbation for the following subsets: (1) the 19K MT outputs we used to generate the perturbations (subsampled to be balanced across the years); (2) all MT outputs leading to eligible perturbations; and (3) the 500 MT outputs where humans and metrics disagree the most on the degree of penalization. Statistics are shown in Figure 3; we find that (3) is smaller than (1) and (2) at a level of statistical significance. This suggests that shorter MT outputs lead to more severe over-penalization. However, the difference in sentence lengths is small and does not fully explain the different degrees of penalization. Probing for Self-Inconsistency We randomly select a total of 4K system outputs balanced across years and systems, and search for perturbations fitting our criterion of self-inconsistency. Results are shown in Table 2. While all attack methods return a small number of successful attacks, we observe a trend that CLARE and the Faster Alzantot Genetic Algorithm have a higher success rate. This, again, suggests the effectiveness of mask-filling and word substitution as attack methods. 
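The one-sided test used above can be reproduced with SciPy (1.7 or later, where the alternative argument is available); the arrays below are placeholders for the per-instance degradations measured by one metric and by the human annotators for a single (metric, year, attack) combination, not the study's data.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder degradation values: score(original) - score(perturbed),
# one entry per perturbed instance, for the metric and for humans.
metric_degradation = np.array([1.4, 1.1, 0.9, 1.6, 1.2])
human_degradation = np.array([0.6, 0.8, 0.4, 1.0, 0.7])

# One-sided Wilcoxon rank-sum test: is the metric-measured degradation
# significantly larger than the human-measured one?
stat, p_value = ranksums(metric_degradation, human_degradation, alternative="greater")
print(f"statistic={stat:.3f}, p={p_value:.4f}")  # p < 0.05 -> significant over-penalization
```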
Implications of this Work The immediate implication of this work is to augment training of learned metrics such as BLEURT and COMET with the data generated in this work, and to experiment with incorporating automatically-generated synthetic data based on mask-filling and word substitution. Related Work Classical MT metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) have been shown to correlate poorly with human judgements (Mathur et al., 2020; Kocmi et al., 2022), motivating the development of model-based metrics. Supervised metrics such as BLEURT (Sellam et al., 2020) and COMET (Rei et al., 2020) are trained to mimic human ratings as a regression task. Non-learnable metrics such as BERTScore (Zhang et al., 2020), XMoverDistance (Zhao et al., 2020), and UScore (Belouadi and Eger, 2023) do not rely on human ratings and instead leverage the embeddings of the source, reference, and translation. Whereas more recent and higher-performing metrics exist, we focus our investigation on BERTScore, BLEURT, and COMET as they are most commonly used and easily adapted to other domains such as text simplification (Maddela et al., 2023). Wang et al. (2022) define robustness as performance on unseen test distributions. Such distributions can occur naturally (Hendrycks et al., 2021) or be constructed adversarially, and robustness failures are usually identified through human priors and error analysis. Alves et al. (2022) and Chen et al. (2022) use hand-crafted types of perturbations to create challenge sets where MT metrics underpenalize the perturbed translations. Yan et al. (2023) use minimum risk training (Shen et al., 2016) to optimize directly for higher metric scores, resulting in a set of translations with overestimated scores. This work complements previous works by investigating overpenalization, which is a very different behavior from overestimation. We use adversarial attacks targeted at each metric that are not limited to pre-defined categories, which allows us to discover particular failure cases specific to each metric. In addition, we consider the same set of metrics on different years of WMT data. This allows us to draw connections between adversarial robustness and the quality of MT system outputs, and whether the MT system outputs are used when training the metric. Conclusion We apply word- and character-level adversarial attacks and probe for overpenalization with BERTScore, BLEURT, and COMET, and for self-inconsistencies with BERTScore. We observe that mask-filling and word substitution are more effective at generating challenging cases, and that BLEURT and COMET are more susceptible to perturbation of high-quality translations. Our findings motivate more sophisticated data augmentation and training methods to achieve greater metric robustness. In particular, our formulation of self-consistency requires no validation against human ratings and can be applied to other embedding-based metrics (Zhao et al., 2020; Reimers and Gurevych, 2020; Belouadi and Eger, 2023) as a regularization term. We leave this to future work. 
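Stated as code, the self-inconsistency criterion used in the attack setup reduces to two comparisons. In the sketch below, bertscore_f1 is a stand-in for however BERTScore F1 is computed and normalized; the 0.3 and 0.4 thresholds are the ones quoted in the setup, and this is an illustration rather than the exact implementation.

```python
def self_inconsistent(original: str, perturbed: str, reference: str, bertscore_f1) -> bool:
    """bertscore_f1(a, b) is a stand-in for a (normalized) BERTScore F1 function.

    Flags the case where the perturbed translation is judged close to the original
    (relative to the original's self-similarity) yet scores notably worse against
    the reference.
    """
    self_sim = bertscore_f1(original, original)   # used as the maximum similarity score
    pert_sim = bertscore_f1(perturbed, original)
    still_similar = (self_sim - pert_sim) < 0.3

    score_orig = bertscore_f1(original, reference)
    score_pert = bertscore_f1(perturbed, reference)
    much_worse = (score_orig - score_pert) > 0.4

    return still_similar and much_worse
```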
Limitations We use the high-resource German-to-English subset of the WMT datasets, which mainly focuses on the news domain. How readily our results translate across to other language pairs, translation systems, metrics, or domains requires further investigation. We experiment with only word- and character-level attacks, but other methods exist that generate sentence-level (Ross et al., 2022) or multi-level (Chen et al., 2021) attacks. We leave a more comprehensive study of attack methods to future work. Ethics Statement The human ratings are collected from fluent English speakers contracted through a work-sharing company. The annotators are paid fairly according to local standards. Prior to annotation, we informed the annotators of the purpose of the collected data and provided relevant training. We ensured that the annotators had a prompt means of contacting us throughout the annotation process. A Annotation To validate whether the perturbed sentences lead to degraded metric performance, we collected human ratings for a subset of perturbed translations and corresponding original system outputs. We largely follow the DA protocol (Graham et al., 2013, 2014, 2017) and task the annotators to rate on a continuous scale to what extent the meaning of the reference is expressed in the translations. We focus on meaning preservation as it is aligned with the training objectives of BLEURT and COMET. We take a more conservative stance by displaying the original and perturbed sentences in parallel and highlighting the perturbed words. Our intuition is that the annotators are more inclined to exaggerate the quality differences between the original and perturbed translations under this setup. An example of the annotation interface is shown in Figure 4. We randomise the order of the original and perturbed sentences such that it is not immediately clear to the annotator which is which. For each human intelligence task (HIT) of 100 (reference, original, perturbed) tuples, we use the ratings from 70 tuples and include 30 tuples as control items. 15 of the control items are duplicates from the 70 tuples, and the other 15 contain degraded original and perturbed translations from the 70 tuples, where we randomly drop four words. We use the Wilcoxon rank-sum test to ensure that the score differences from the duplicated pairs are smaller than those of the degraded pairs. We reject HITs where the p value is larger than 0.05. We collect ratings from 10 fluent English speakers contracted through a work-sharing company. We conduct training sessions where we describe the task and annotation interface prior to the annotation process. In total, we collected 268 HITs, with 177 (66.04%) HITs passing quality control. The ratio is higher than that reported by Graham et al. (2017) as we work with trained annotators. We obtain the z-scores by normalizing annotations from the same annotator, and aggregate the ratings for the same translation by averaging. We use tuples with at least three annotations, resulting in 10,080 annotations for 2,800 tuples. The annotated data is balanced for the different metrics, years, and search methods. 
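The per-annotator z-scoring and averaging described above can be sketched with pandas; the column names and rating values below are illustrative placeholders, not the actual annotation files.

```python
import pandas as pd

# Illustrative raw ratings: one row per (annotator, item) judgement.
ratings = pd.DataFrame({
    "annotator": ["a1", "a1", "a2", "a2", "a3", "a3"],
    "item_id":   [1, 2, 1, 2, 1, 2],
    "raw":       [72.0, 40.0, 85.0, 55.0, 60.0, 30.0],
})

# z-score within each annotator, then average per item,
# keeping only items with at least three ratings.
ratings["z"] = ratings.groupby("annotator")["raw"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)
agg = ratings.groupby("item_id")["z"].agg(["mean", "count"])
agg = agg[agg["count"] >= 3]
print(agg)
```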
B Implementation Details We use the Huggingface Evaluate implementations of BERTScore, BLEURT, and COMET. For BERTScore, we use roberta-large as the underlying model and use F1 score as the metric output. For BLEURT, we use the improved bleurt-20-d12 checkpoint introduced by Pu et al. (2021). For COMET, we use the wmt20-comet-da checkpoint. We consider four search methods for adversarial attacks: CLARE, the Faster Alzantot Genetic Algorithm, Input Reduction, and DeepWordBug. CLARE iteratively applies contextualized word-level replacements, insertions, and merges by masking and bounded infilling, with each perturbation greedily selected by its impact on the victim. The Faster Alzantot Genetic Algorithm modifies the genetic algorithm proposed by Alzantot et al. (2018), and iteratively searches for word replacements that are close in a counter-fitted embedding space (Mrkšić et al., 2016). Input Reduction iteratively removes the least important word based on its influence on the victim's output. DeepWordBug iteratively applies a heuristic set of scores to determine the word to perturb, and applies character-level swapping, substitution, deletion, and insertion. We use the TextAttack (Morris et al., 2020) implementations of the adversarial attacks. For CLARE, we modify the default implementation by removing the sentence similarity constraint and using beam search with width 2 and a maximum of 10 iterations when probing for overpenalization, and with width 5 and a maximum of 15 iterations when probing for self-inconsistency. For the Faster Alzantot Genetic Algorithm, we modify the implementation by changing the LM constraint and using a population size of 30 and a maximum of 15 iterations when probing for overpenalization, and a population size of 60 and a maximum of 40 iterations when probing for self-inconsistency. We follow the default implementation otherwise. For the GPT-2 perplexity constraint, we use the C Probing for Self-Inconsistency with BLEURT Our formulation of self-consistency does not immediately apply to BLEURT as it distinguishes between the hypothesis and the reference and thus cannot be seen as a distance metric. We instead experiment with a symmetric variant of BLEURT as the semantic constraint. Given BLEURT(hypothesis, reference), we define symmetric BLEURT as the average between BLEURT(original, perturbed) and BLEURT(perturbed, original). We constrain the difference between BLEURT(original, original) and this symmetric measure to be smaller than 0.3. We search for perturbed translations such that the difference between BLEURT(original, reference) and BLEURT(perturbed, reference) is larger than 0.4. None of the search methods returns successful attacks in our preliminary experiments with 1K randomly sampled translations. We find that BLEURT overpenalizes translations with low-quality references, i.e., BLEURT(original, perturbed) is significantly smaller than BLEURT(perturbed, original). This makes it difficult to find perturbations satisfying the semantic constraint. Figure 1: (a) The metric overpenalizes the perturbed translation when compared with human ratings. (b) The metric is self-inconsistent, as it judges the original and perturbed translations to be similar (BERTScore(perturbed, original) ≈ BERTScore(original, original)) while judging the perturbed sentence as a notably worse translation (BERTScore(perturbed, reference) < BERTScore(original, reference)). All ratings are normalized. 
Figure 2: The Pearson correlation (r) with human ratings for different metrics, years, and attack methods on original and perturbed translations. The error bars show the standard error as computed through bootstrapping with 10K resamples.
Figure 4: A screenshot of the annotation interface. The slider is initialized at the middle position. The annotator must interact with the slider before proceeding to the next annotation instance and cannot revisit completed annotations.
Figure 5: The quality difference between the original and perturbed translations as measured by the metrics and humans. Bars marked with red asterisks are the cases where the degradation for metrics is significantly larger (p < 0.05) than that for humans.
Antiviral Effect of Bovine Lactoferrin against Enterovirus E Enterovirus E (EV-E), a representative of the Picornaviridae family, endemically affects cattle across the world, typically causing subclinical infections. However, under favorable conditions, severe or fatal disorders of the respiratory, digestive, and reproductive systems may develop. There is no specific treatment for enterovirus infections in humans or animals, and only symptomatic treatment is available. The aim of this study was to determine the in vitro antiviral effect of bovine lactoferrin (bLF) against enterovirus E using virucidal, cytopathic effect inhibition, and viral yield reduction assays in MDBK cells. The influence of lactoferrin on the intracellular viral RNA level was also determined. Surprisingly, lactoferrin did not have a protective effect on cells, although it inhibited the replication of the virus during the adsorption and post-adsorption stages (viral titres reduced by 1–1.1 log). Additionally, a decrease in the viral RNA level in cells (by up to 75%) was observed. More detailed studies are needed to determine the mechanism of bovine lactoferrin effect on enterovirus E. However, this highly biocompatible protein ensures some degree of protection against infection by bovine enterovirus, which is particularly important for young animals that receive this protein in their mother’s milk. Introduction Enterovirus E (EV-E) belongs to the Picornaviridae family, which is comprised of small, nonenveloped, icosahedral viruses that have a positive-sense, single-stranded RNA. This virus was isolated from cattle faeces in the 1950s and, similar to many human and animal enteroviruses, was initially classified as an enteric cytopathogenic orphan virus (ECBO, enteric cytopathogenic bovine orphan). The reason for this was that these viruses were isolated from healthy specimens and could not be unambiguously linked to any known pathological syndrome [1]. EV-E appears endemically in populations of cattle across the world, and the shedding of large amounts of the virus occurs in animal faeces. This virus spreads through the fecal-oral route and typically causes subclinical infections. However, under advantageous conditions, it can result in disorders of the respiratory, digestive, and reproductive systems with a severe and sometimes fatal course [2][3][4][5]. It is also classified as a member of a large group of pathogens involved in the pathogenesis of the bovine respiratory disease complex (BRDC), a complex, multifactorial disease responsible for large economic losses in cattle breeding [6]. In addition to having a global distribution, enterovirus E invades a broad spectrum of hosts, such as cattle, small ruminants, and animals living in the wild, while the presence of antibodies against EV-E has been confirmed in horses, dogs, donkeys, and humans [7][8][9][10][11]. Because enteroviruses are characterized by a high frequency of mutations and recombination, they are highly capable of evolving over time, which is linked to the risk of the zoonotic potential of animal enteroviruses [12]. In the human population, the percentage of EV-E seropositive patients in rural areas is approximately the same as that in urban areas, which attests to the fact that the virus is relatively widespread in the environment [11]. 
It can also survive in the environment longer than other known bovine viruses, due to the high stability of enteroviruses and their tolerance to changes in pH, high temperature, salinity and many chemical agents [10]. It is also thought that atmospheric precipitation can promote the spread of enteroviruses in the environment, and large amounts of these viruses in water are treated as an indicator of water contamination with faeces [7,13]. For all the above reasons and pursuant to the EU standard [14], enterovirus E is used in studies on the activity of chemical disinfectants and antiseptics used in the veterinary area. Due to the specificity of viral infections, the use of antiviral compounds in medicine is very limited compared to antibacterial or antifungal chemotherapy. According to the Centers for Disease Control and Prevention (CDC), there is no specific treatment for nonpolio enterovirus infection in humans, and only symptomatic treatment is administered [15]. The use of antiviral chemotherapy in veterinary medicine is far less available than in human medicine, especially in the case of food species. Only a few antivirals used in human medicine have been adapted to veterinary clinical medicine, such as acyclovir in feline ocular infections caused by feline herpesvirus 1 [16]. As in the case of human enteroviruses, there is no treatment for enterovirus infections in animals. Lactoferrin is a multifunctional glycoprotein from the transferrin family that is present in body fluids, excretions (including colostrum and milk), and neutrophil granules. It is a significant humoral factor of innate immunity, and its role in infections and inflammatory states of the organism arises from both its antimicrobial and immunomodulatory activities [17]. Lactoferrin shows antiviral activity against both DNA and RNA viruses, acting particularly during the early stages of an infection by binding to the particles of some viruses or blocking the cellular receptors for the viruses. Lactoferrin has also been observed to inhibit the replication of some viruses in infected cells and to enhance the antiviral immunological response of the organism [18,19]. The viruses sensitive to lactoferrin activity include enteroviruses, such as enterovirus 71, coxsackieviruses, and the human viruses ECHO 5 and 6 [19]. It is also suspected that breast-feeding ensures protection from enterovirus infection in children due to the presence of both maternal antibodies in milk and factors associated with nonspecific humoral resistance, such as lactoferrin [20,21]. This observation seems to be confirmed by the experiment conducted by Chen et al. [22] in mice, in which recombinant lactoferrin present in murine milk protected pups from lethal enterovirus 71 infection. The purpose of this study was to determine the antiviral activity of bovine lactoferrin against enterovirus E under in vitro conditions. To date, the antiviral activity of this biocompatible protein against bovine enterovirus has not been tested. Choice of Lactoferrin Concentrations Tested in the Experiment When selecting the concentrations of lactoferrin tested in this study, we were guided by the results of our earlier studies, which were conducted using the same cell line (MDBK) and the same batch of bovine lactoferrin purchased from the same manufacturer (Sigma-Aldrich, Schnelldorf, Germany). The 50% cytotoxic concentration (CC50) of lactoferrin was 4.907 mg/mL, and the maximum tolerable concentration (MTC) was 2.5 mg/mL [23]. 
As in previous studies and according to the literature data, the concentrations of 1, 0.5, 0.25, 0.125, and 0.06 mg/mL bovine lactoferrin were chosen for further testing. Cytopathic Effect Inhibition and Virucidal Activity of Bovine Lactoferrin against Enterovirus E In the first experiment, in which both lactoferrin and the virus were incubated with cells throughout the entire experiment, only the highest concentration of lactoferrin considerably reduced the final titre of the virus (a decrease in the titre by 0.71 log, i.e., by approximately 63%) (Figure 1A). The highest tested lactoferrin concentration did not have a direct virucidal effect on EV-E after 60 min of contact, regardless of the incubation temperature (Figure 1B). Viral Yield Reduction in the Presence of Bovine Lactoferrin - A Time-of-Addition Assay The antiviral activity of lactoferrin was only observed in the case of the low and medium infection doses of the virus (multiplicity of infection, MOI = 0.1 or 1, respectively), whereas when the infection dose was high (MOI = 10), lactoferrin did not have a significant effect on the production of virus progeny by cells (Figure 2). Irrespective of the dose of the virus, bLF did not have a protective effect on cells (pretreatment stage), although an inhibitory effect was observed both at the adsorption stage and immediately after the adsorption of the virus (Figure 2). The highest reduction of the viral titre (by 1-1.1 log, i.e., ca. 90%) was observed in the case of the lowest infection dose of the virus at the adsorption stage (Figure 2C) and at the post-adsorption stage when the medium dose was tested (Figure 2B), with a broader range of bLF concentrations showing antiviral activity at the post-adsorption stage (Figure 2B,C). In the figures, all data are expressed as means ± SD (standard deviation) of three independent experiments (n = 3), and asterisks refer to statistically significant differences between control and lactoferrin-treated cells at * p < 0.05, ** p < 0.01, *** p < 0.001. Effect of Lactoferrin on Viral RNA Load in Enterovirus E-Infected Cells The effect of lactoferrin on the quantity of the viral intracellular RNA was weaker than its effect on extracellular virus titres. This effect was dose-dependent and inversely proportional to the incubation time. The most distinct reduction observed in the amount of viral RNA (by approximately 75%) was observed after 6 h of incubation (Figure 3). In this experiment, lactoferrin was present only during virus adsorption (adsorption) or added just after virus adsorption (post-adsorption); the amount of intracellular enterovirus E RNA from control (lactoferrin-untreated) cells was set as 1, and the results obtained from treated cells are expressed as the relative amount of the control virus RNA. Discussion In the absence of registered enterovirus-specific treatments, the search for antivirals against human enteroviruses has gained importance [24][25][26]. Bovine enterovirus E has not been a frequent subject of studies on the activity of potential antiviral compounds. In recent years, only a team of researchers from Poland has been testing the virucidal activity of various betulin derivatives against this virus [27,28]. However, there have been recent cases of severe infections in cattle, which have been identified as being caused by bovine enterovirus [2,4,5,29,30]. This is probably a consequence of improved testing techniques in veterinary medicine, which may force us to revise our view of EV-E as a harmless, almost commensal cattle virus. In view of the above facts, we decided to analyze the antiviral potential of substances of natural origin, namely, lysozyme, lactoferrin and nisin, against EV-E in our laboratory. The preliminary study showed that only lactoferrin demonstrated a satisfactory level of activity against EV-E and was therefore chosen for further tests. The other substances did not demonstrate a considerable influence on enterovirus E (data unpublished). In in vitro experiments on the antiviral activity of bovine lactoferrin against human enteroviruses, this protein has been confirmed to act against enterovirus 71, coxsackieviruses, echovirus 5 and 6 and poliovirus type 1 [21,[31][32][33][34][35], with quite a wide range of active concentrations. The lowest IC50 values (50% inhibitory concentration) of lactoferrin were noted against coxsackievirus (9.3 µg/mL) [31] and enterovirus 71 (10-34.5 µg/mL) [31,32], whereas the IC50 was 1.56 µM, i.e., ca. 50 µg/mL, against echovirus 6 [21] and as high as 650 µg/mL against poliovirus type 1 [35]. In some studies, the authors used predetermined concentrations of LF in a range of 12.5 µM (i.e., ca. 0.39 mg/mL) to 2 mg/mL [33][34][35]. A concentration of 250 µg/mL produced a 73% inhibitory effect against enterovirus 71 [32], while 2 mg/mL inhibited poliovirus type 1 by 75-90%, depending on the stage of viral replication [35]. 
The 63% reduction in the final titre of EV-E by bLF at a concentration of 1 mg/mL and 90% reduction in the virus titres in the time-of-addition assay at concentrations within 0.125-1 mg/mL bLF that were documented in our study fall within the broad ranges of lactoferrin activity described by other authors. The effect of lactoferrin on the viral RNA load in enterovirus E-infected cells corresponded with the results of the time-of-addition assay, although it was not as strongly observed. It is not easy to explain this finding, as the available literature lacks a similar comparison regarding human enteroviruses. None of the cited studies included the isolation of viral RNA or real-time PCR analysis. As mentioned previously, the antiviral activity of lactoferrin against most viruses takes place at the early stages of infection and is a consequence of lactoferrin's interaction with virus particles or with the cell, while the inhibition of the intracellular stages of viral replication by lactoferrin has been described far less often [18,19]. Enteroviruses are not an exception in this regard. However, the research results provided by different authors concerning the ability of bovine lactoferrin to bind to viral proteins are not conclusive. LF has been demonstrated to be able to bind to the proteins of echovirus 71 [32] but is unable to bind to either intact viral particles (N form) or intermediate particles (A form) of enterovirus 6. Lactoferrin gains this ability only in an acidic (pH 4-5) environment, which is characteristic of late endosomes [33]. Thus, the cited authors observed the lack of the ability of LF to neutralize echovirus 6 at a neutral pH, which is consistent with the results of our study on enterovirus E. In nearly all the studies cited in this paper in which human enteroviruses were tested, lactoferrin showed a protective influence on cells. Such an effect was observed in the cases of enterovirus 71 [31,32], echovirus 5 [34] and echovirus 6 [21,33]. For three decades, it has been known that interactions of lactoferrin with cells are a consequence of its ability to bind glycosaminoglycans and low-density lipoprotein receptors on cell surfaces, which are utilized by various viruses as cell receptors [36]. Although cell surface receptors to EV-E have not been identified to date [37], the protective effect of lactoferrin observed in the case of human enteroviruses has substantiated the hypothesis that this protein can also protect cells from being infected with enterovirus E. With enterovirus 71, lactoferrin demonstrated a stronger protective effect the longer it was incubated with cells, and the shortest time needed to achieve this effect was 30 min [31,32]. Based on these results, we decided in our experiment to incubate MDBK cells with lactoferrin for two hours. Surprisingly, the relatively long cell pretreatment with different concentrations of LF did not produce any protective effect against EV-E, regardless of the multiplicity of infection of the virus used. In our earlier study on another bovine RNA virus (bovine viral diarrhea virus) using the same cell line, bovine lactoferrin demonstrated a protective influence on cells, but the observed effect, typically reaching the highest level after the shortest time period tested, which was 24 h, was weakened when the incubation time was prolonged [23]. 
In the present study, because the enteroviral life cycle is rapid and virus progeny are usually released after 8 h [38], titres of EV-E in the time-of-addition assay were determined only after 24 h. Nonetheless, the incubation time in our study might have been too long to observe the protective action of LF towards the cells. The antiviral activity of bovine lactoferrin at the stage of the adsorption of the virus observed against bovine enterovirus in our experiment is totally consistent with the results of studies reported by other authors who tested the activity of this protein against human enteroviruses [21,[33][34][35]. What surprised us, however, was the powerful antiviral effect of bLF against EV-E at the post-adsorption stage, while no inhibitory effect of LF at this stage of infection against poliovirus type 1, echovirus 5, and enterovirus 71 has been documented [31,34,35]. In turn, Weng et al. [32] showed that it is necessary for this protein to enter cells no later than 60 min after the adsorption of the virus has been completed to achieve the inhibitory effect of lactoferrin against enterovirus 71. The strong inhibitory effect of LF against echovirus 6 after adsorption has been noted by Pietrantoni et al. [21]. These researchers performed a more detailed analysis of this effect in their subsequent experiment, thereby demonstrating that both LF and echovirus 6 enter cells by endocytosis and that the virus-lactoferrin interaction takes place in an acidic environment of endosomes inside the cell [33]. Thus, the cited scholars described novel mechanisms of the antiviral activity of lactoferrin, consisting of the prevention of virus uncoating and prevention of the viral eclipse phase. In our experiment, lactoferrin was added to the cells immediately after viral adsorption was completed and remained present in the medium until the termination of the experiment (24 h). Hence, it could have interfered with virus uncoating. However, it is not known whether enterovirus E can penetrate cells via endocytosis or if it takes advantage of another entry pathway. Analysis of the mechanism of the antiviral activity of lactoferrin against EV-E requires further investigation. Enterovirus E (LCR4 strain, ATCC VR-248) was propagated in MDBK cells. To prepare a stock virus for titration, a confluent monolayer of MDBK cells grown in a 75-cm 2 flask was inoculated with a virus stock at a 1:10 virus dilution in maintenance medium (containing 2% horse serum instead of 10%) and incubated. When an extensive cytopathic effect (CPE) was observed, the infected cells were centrifuged, and the aliquoted supernatant was stored at −80 • C until use. This virus stock was titred by an end-point dilution assay (10-fold serial dilutions of virus) on MDBK cells grown in 96-well plates (eight wells per dilution). Three days after inoculation, the cytopathic effect was recorded using an inverted phase contrast microscope. The 50% endpoint virus titres (CCID 50 , 50% cell culture infective dose) were calculated using the Reed and Muench method [39]. Lactoferrin from bovine milk, purchased from Sigma-Aldrich, was tested for antienterovirus E activity. Just before use, the lactoferrin was dissolved in DMEM at final protein concentrations of 0 (control cells), 0.0625, 0.125, 0.25, 0.5, and 1 mg/mL. When selecting the concentrations of lactoferrin tested in this study, we were guided by the results of our earlier studies, which were conducted using the same cell line (MDBK) and the same batch of bovine lactoferrin [23]. 
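The Reed and Muench endpoint calculation used for titration above reduces to a short cumulative-percentage computation. The sketch below is a minimal illustration of that method; the well counts in the example are hypothetical, not values from this study.

```python
import numpy as np

def reed_muench_log10_endpoint(log10_dilutions, infected_wells, total_wells):
    """50% endpoint dilution (log10) by the Reed and Muench method.

    log10_dilutions: e.g. [-1, -2, ..., -8], ordered from least to most dilute.
    infected_wells / total_wells: wells showing CPE and wells inoculated per dilution.
    The CCID50 titre per inoculated volume is the reciprocal of the endpoint dilution.
    """
    infected = np.asarray(infected_wells, dtype=float)
    uninfected = np.asarray(total_wells, dtype=float) - infected
    cum_infected = np.cumsum(infected[::-1])[::-1]   # accumulated from the most dilute end
    cum_uninfected = np.cumsum(uninfected)           # accumulated from the least dilute end
    pct_infected = 100.0 * cum_infected / (cum_infected + cum_uninfected)

    i = np.where(pct_infected >= 50.0)[0][-1]        # last dilution still >= 50% infected
    prop_dist = (pct_infected[i] - 50.0) / (pct_infected[i] - pct_infected[i + 1])
    step = log10_dilutions[i] - log10_dilutions[i + 1]
    return log10_dilutions[i] - prop_dist * step

# Hypothetical titration with 8 wells per 10-fold dilution
log_dil = [-1, -2, -3, -4, -5, -6, -7, -8]
cpe = [8, 8, 8, 6, 3, 1, 0, 0]
print(reed_muench_log10_endpoint(log_dil, cpe, [8] * 8))  # e.g. about -4.7
```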
Antiviral Assays Since there are no registered specific anti-enterovirus drugs, there was none to use as a positive control for comparisons in the antiviral assays. Cytopathic Effect Inhibition Assay To confirm the potential anti-enterovirus E activity of bovine lactoferrin, its effect on the final virus titre was evaluated. The virus was titrated in the presence of all tested concentrations of lactoferrin, as described above. The control virus was titrated in compound-free medium. In this experiment, both virus and lactoferrin were present in the medium during the entire incubation period (72 h). All experiments were repeated three times. Virucidal (Virus Neutralisation) Assay To evaluate the potential virucidal activity of bovine lactoferrin, the stock virus was incubated with the highest tested concentration of lactoferrin (1 mg/mL) in DMEM (the stock virus final dilution 1:10) under different experimental conditions (contact time, 60 min; contact temperatures, 4, 20 or 37 • C). The control virus was diluted with lactoferrin-free medium and incubated under the same conditions. Each mixture was then titrated in MDBK cells, and CCID 50 titres of lactoferrin-treated virus were compared with control virus titres (tested under the same set of conditions). Each experiment was repeated three times. Yield Reduction Assay MDBK cells were seeded in 24-well plates and grown for 24 h before infection. Then, the growth medium was replaced with maintenance medium containing enterovirus E and incubated at 37 • C for 1 h (adsorption). Afterwards, the inoculum was removed, the cells were washed three times with medium, and fresh maintenance medium was added to the wells. In this experiment, three different infectious doses (multiplicity of infection, MOI) of enterovirus E were tested: high (MOI = 10), medium (MOI = 1) and low (MOI = 0.1). To determine the mode of antiviral activity of lactoferrin, the following experimental procedures were carried out: After 24 h of incubation, culture supernatants were collected, and the extracellular virus was titrated (CCID 50 ) in MDBK cells, as described above. All experiments were repeated three times. RNA Isolation and RT-qPCR In this experiment, only a low infectious dose (MOI = 0.1) of eneterovirus E was used, and only those experimental designs that resulted in decreased extracellular virus titres in the yield reduction assay were further tested for their effects on intracellular viral RNA synthesis. Since the enteroviral life cycle is fast and virus progeny are usually released after 8 h, we evaluated viral RNA levels after 6 and 18 h of incubation. For this purpose, the supernatants were removed, cells were washed twice with PBS, and 0.8 mL of Fenozol reagent (A & A Biotechnology, Poland) was added to each well and mixed by pipetting until complete cell lysis occurred. Isolation of the RNA from the cell culture was carried out with the use of a Total RNA Mini kit (A&A Biotechnology, Gdynia, Poland) according to the manufacturer's recommendations. The concentrations of eluted RNA were measured with a BioSpectrometer (Eppendorf, Hamburg, Germany). Reverse transcription was performed with the use of a High-Capacity cDNA Reverse Transcription Kit (Life Technologies, Waltham, MA, USA) following the manufacturer's protocol. Real-time PCR was performed to identify and quantify enterovirus E. 
The 20-µL reaction sample contained 10 µL of DyNAmo HS SYBR Green qPCR Master Mix 2× (Life Technologies, Waltham, MA, USA), 0.5 µL of each primer at 10 µM (EV-E183 for and EV-E183 rev; Table 1), 2 µL of cDNA, and 7 µL of ribonuclease-free water. The qPCR was conducted under the following conditions: activation of the polymerase at 95 °C for 15 min, followed by 40 cycles of denaturation at 94 °C for 30 s, primer annealing at 62 °C for 20 s, and chain elongation at 72 °C for 20 s. To determine the viral copy number in the analyzed samples, a standard curve was plotted. Standard curves were generated using an EV-E LCR4 strain (ATCC VR-248) amplicon, which contains a nucleotide sequence of the polyprotein gene of EV-E, with complementary primers and probes. Amplification of the product was conducted by PCR. The reaction was carried out in a Nexus Gradient thermocycler (Eppendorf, Hamburg, Germany) with a HotStar TaqPlus Master Mix Kit (Qiagen, Hilden, Germany). The 20-µL reaction sample contained 10 µL of HotStar TaqPlus Master Mix (Qiagen), 0.5 µL of each primer at 10 µM (EV-E802 for and EV-E802 rev; Table 1), 7 µL of RNase-free water, and 2 µL of cDNA. The reaction was carried out under the following conditions: an initial denaturation step at 95 °C for 5 min, followed by 30 cycles of denaturation at 94 °C for 30 s, primer annealing at 60 °C for 30 s, and elongation at 72 °C for 60 s, with a final extension at 72 °C for 10 min. Then, after the PCR products were purified with a Clean-Up Concentrator Kit (A&A Biotechnology, Gdynia, Poland), the amplicon concentration was measured with a BioSpectrometer (Eppendorf, Hamburg, Germany). The gene copy number was calculated based on amplicon concentration and size with a copy number calculator (Genomics and Sequencing Center, University of Rhode Island, Kingston, RI); such calculators typically implement the standard conversion copies/µL = (concentration [ng/µL] × 6.022 × 10²³) / (amplicon length [bp] × 650 × 10⁹), taking an average mass of about 650 g/mol per base pair. Ten-fold serial dilutions of the amplicon (from 10⁸ down to 10³ copies) were used as template cDNA for the standard curve. Intracellular viral RNA detected in enterovirus E-infected untreated (control) cells was assumed to equal 1, and the results obtained from infected lactoferrin-treated cells were expressed as the relative amount of the control virus RNA. Each experiment was repeated three times.

Statistical Analysis

All results were expressed as mean values ± standard deviations (SD) of three independent experiments. Data were analyzed by one-way analysis of variance (ANOVA), and Tukey's post-test was used to determine differences between control and lactoferrin-treated cells. Statistical evaluation of the results was performed using GraphPad Prism 7 software.

Conclusions

In summary, the present data indicate that bLF has an antiviral effect on bovine enterovirus, but with slightly different mechanisms than those described for human enteroviruses. We did not observe a protective effect of lactoferrin towards cells, although the protein inhibited the replication of the virus at both the adsorption and post-adsorption stages and decreased the viral RNA load in the cells. More detailed studies are needed to determine the mechanism underlying the activity of bovine lactoferrin against enterovirus E. However, there is no doubt that this highly biocompatible protein ensures some degree of protection against infection with bovine enterovirus, which can be of particular importance for young animals that receive this protein in their mother's milk.
Data Availability Statement: All data generated and analyzed during this study are available from the corresponding author on reasonable request.
Double Cover of Modular $S_4$ for Flavour Model Building

We develop the formalism of the finite modular group $\Gamma'_4 \equiv S'_4$, a double cover of the modular permutation group $\Gamma_4 \simeq S_4$, for theories of flavour. The integer weight $k>0$ of the level 4 modular forms indispensable for the formalism can be even or odd. We explicitly construct the lowest-weight ($k=1$) modular forms in terms of two Jacobi theta constants, denoted as $\varepsilon(\tau)$ and $\theta(\tau)$, $\tau$ being the modulus. We show that these forms furnish a 3D representation of $S'_4$ not present for $S_4$. Having derived the $S'_4$ multiplication rules and Clebsch-Gordan coefficients, we construct multiplets of modular forms of weights up to $k=10$. These are expressed as polynomials in $\varepsilon$ and $\theta$, bypassing the need to search for non-linear constraints. We further show that within $S'_4$ there are two options to define the (generalised) CP transformation and we discuss the possible residual symmetries in theories based on modular and CP invariance. Finally, we provide two examples of application of our results, constructing phenomenologically viable lepton flavour models.

Introduction

The origin of the flavour structures of quarks and leptons remains a fundamental mystery in particle physics. In the lepton sector in particular, data from neutrino oscillation experiments [1] have revealed a pattern of two large and one small mixing angles, which suggests a non-Abelian discrete flavour symmetry may be at play [2-6]. Future observations are expected to put such symmetry-based scenarios to the test via, e.g., high precision measurements of the neutrino mixing angles and of the amount of leptonic Dirac CP Violation (CPV). Of paramount importance are also the measurement of the absolute scale of neutrino masses and the determination of the neutrino mass ordering. Recent global data analyses (see, e.g., [7,8]) show that data favour values of the leptonic Dirac CPV phase δ close to 3π/2,¹ and a neutrino mass spectrum with normal ordering (NO) over the one with inverted ordering (IO), the IO spectrum being disfavoured at ∼3σ confidence level. Upper bounds on the sum of neutrino masses in the range Σ < 0.12-0.69 eV (at the 2σ level) are also found in the most recent analysis [8]; the largest quoted upper limit corresponds to the choice of cosmological input data leading to the most conservative result. Within the approach of postulating a discrete symmetry and its breaking pattern, one can generically predict correlations between some of the three neutrino mixing angles and/or between some of, or all, these angles and δ (see, e.g., [6]). Majorana CPV phases remain instead unconstrained, unless one combines the discrete symmetry with a generalised CP (gCP) symmetry [10,11]. While the latter scenarios are more predictive, one is still required to construct specific models to obtain predictions for neutrino masses. These models typically rely on the introduction of a plethora of so-called flavon scalar fields, acquiring specifically aligned vacuum expectation values (VEVs), which require a rather elaborate potential and additional large shaping symmetries. The modular invariance approach to the flavour problem put forward in Ref. [12] has opened up a new direction in flavour model building. Modular symmetry is introduced into the supersymmetric (SUSY) flavour picture, with quotients Γ_N of the modular group (N = 2, 3, ...
) playing the role of non-Abelian discrete symmetry groups. For N ≤ 5, these finite modular groups are isomorphic to the permutation groups S₃, A₄, S₄ and A₅, widely used in flavour model building. The traditional approach to flavour is thus generalised, since fields can carry non-trivial modular weights k, further constraining their couplings in the superpotential. Furthermore, no flavons need to be introduced in the model. In such a case, Yukawa couplings and fermion mass matrices in the Lagrangian of the theory are obtained from combinations of modular forms, which are holomorphic functions of a single complex number - the VEV of the modulus τ - and have specific transformation properties under the action of the modular symmetry group. Models of flavour based on modular invariance have then an increased predictive power, constraining fermion masses, mixing and CPV phases.²

¹ The best fit value of δ obtained by the T2K Collaboration in the latest data analysis is also close to 3π/2, while the CP conserving values δ = 0 and π are disfavoured by the T2K data at 3σ and 2σ, respectively [9].
² Possible non-minimal additions to the Kähler potential, compatible with the modular symmetry, may however limit this predictive power.

The modulus τ transforms non-trivially under the modular group Γ, which is the special linear group of 2 × 2 integer matrices with unit determinant, i.e.
$$\Gamma \equiv SL(2,\mathbb{Z}) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \;\middle|\; a,b,c,d \in \mathbb{Z},\ ad-bc=1 \right\}. \tag{2.1}$$
The group Γ is generated by three matrices,
$$S = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad R = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \tag{2.2}$$
subject to the following relations:
$$S^2 = R\,, \qquad (ST)^3 = R^2 = \mathbb{1}\,, \qquad RT = TR\,, \tag{2.3}$$
where 1 denotes the identity element of a group. The modular group Γ acts on the modulus with fractional linear transformations:
$$\tau \xrightarrow{\gamma} \gamma\tau = \frac{a\tau + b}{c\tau + d}\,. \tag{2.4}$$
The matter superfields transform under Γ as "weighted" multiplets [12,65,69]:
$$\psi_I \xrightarrow{\gamma} (c\tau + d)^{-k_I}\, \rho_I(\gamma)\, \psi_I\,, \tag{2.5}$$
where (cτ + d)^(−k) is the automorphy factor, k ∈ Z is the modular weight⁴ and ρ is a unitary representation of Γ. Note that the group action (2.4) has a non-trivial kernel Z₂^R = {1, R}, i.e. the modulus τ does not transform under the action of R. For this reason one typically defines the (inhomogeneous) modular group as the quotient Γ̄ ≡ PSL(2, Z) ≡ SL(2, Z)/Z₂^R, which is the projective version of SL(2, Z) with matrices γ and −γ being identified. However, matter fields of a modular-invariant theory are in general allowed to transform under R, as can be seen from (2.5). Therefore the symmetry group of such a theory is Γ rather than Γ̄, as was stressed recently in [59]. The inclusion of the R generator is crucial in extending finite modular groups to their double covers, as we will see shortly. We assume that representations of matter fields are trivial when restricted to the so-called principal congruence subgroup
$$\Gamma(N) \equiv \left\{ \gamma \in SL(2,\mathbb{Z}) \;\middle|\; \gamma \equiv \mathbb{1} \ (\mathrm{mod}\ N) \right\},$$
with a fixed integer N ≥ 2 called the level. In other words, ρ(γ) of eq. (2.5) is the identity matrix whenever γ ∈ Γ(N), so that ρ is effectively a representation of the quotient group
$$\Gamma'_N \equiv \Gamma / \Gamma(N)\,,$$
where with a slight abuse of notation we denote by S, T, R the equivalence classes of the corresponding generators (2.2) of the full modular group. In the special case when ρ does not distinguish between γ and −γ, i.e. ρ(R) is the identity, we see that ρ is a representation of a smaller quotient group
$$\Gamma_N \equiv \overline{\Gamma} / \overline{\Gamma}(N)\,,$$
called the (inhomogeneous) finite modular group. For N ≤ 5, Γ_N has the following presentation:⁵
$$\Gamma_N = \left\langle S, T \;\middle|\; S^2 = (ST)^3 = T^N = \mathbb{1} \right\rangle.$$
Note that R ∈ Γ(2), hence Γ'₂ = Γ₂. In contrast, for N ≥ 3 one has R ∉ Γ(N), and Γ'_N is a double cover of Γ_N. For small values of N, the groups Γ_N and Γ'_N are isomorphic to permutation groups and their double covers, see Table 1.

⁵ For N > 5, additional relations are needed in order to render the group finite [73].
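As a quick numerical grounding of the group structure above, the generator relations and the order of SL(2, Z₄) can be checked directly. The explicit matrices used here are the standard textbook choice, assumed for the sketch rather than taken from the paper's equations:

```python
# Sanity check of the SL(2,Z) generator relations and |SL(2, Z_4)| = 48.
import numpy as np
from itertools import product

S = np.array([[0, 1], [-1, 0]])
T = np.array([[1, 1], [0, 1]])
R = -np.eye(2, dtype=int)
I = np.eye(2, dtype=int)

assert (S @ S == R).all()                               # S^2 = R
assert (np.linalg.matrix_power(S @ T, 3) == I).all()    # (ST)^3 = 1
assert (R @ R == I).all() and (R @ T == T @ R).all()    # R^2 = 1, RT = TR

# Order of SL(2, Z_4): count 2x2 matrices over Z_4 with det = 1 (mod 4).
count = sum(1 for a, b, c, d in product(range(4), repeat=4)
            if (a * d - b * c) % 4 == 1)
print(count)  # 48, the order of Gamma'_4 = S'_4
```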
As a final remark, let us stress that the level N defining the finite modular group is common to all matter fields ψ_I, which may however carry different modular weights k_I.

Modular Forms and Modular-Invariant Actions

The Lagrangian of an N = 1 global supersymmetric theory is given by
$$\mathcal{L} = \int d^2\theta\, d^2\bar\theta\; K(\Phi, \bar\Phi) + \left[ \int d^2\theta\; W(\Phi) + \text{h.c.} \right],$$
where K is the Kähler potential, W is the superpotential, θ and θ̄ are Graßmann variables, and Φ collectively denotes chiral superfields of the theory. In modular-invariant supersymmetric theories, τ is the scalar component of a chiral superfield, and the superpotential has to be modular-invariant, $W(\Phi) \xrightarrow{\gamma} W(\Phi)$ [69]. In theories of supergravity, the superpotential is instead coupled to the Kähler potential and has to transform with a certain weight −h under modular transformations (up to a field-independent phase) [65,69]:
$$W(\Phi) \xrightarrow{\gamma} (c\tau + d)^{-h}\, W(\Phi)\,. \tag{2.12}$$
The superpotential can be expanded in powers of matter superfields ψ_I as:
$$W(\Phi) = \sum_n \sum_{\{I_1, \dots, I_n\}} Y_{I_1 \dots I_n}(\tau)\; \big(\psi_{I_1} \cdots \psi_{I_n}\big)_{\mathbf{1}}\,, \tag{2.13}$$
where the sum is taken over all possible combinations of fields {I₁, ..., I_n} and all independent singlets of Γ'_N, denoted by (...)₁.⁶ In order to satisfy (2.12) given the field transformation rules (2.5), the field couplings Y_{I₁...I_n}(τ) have to be modular forms of level N and weight k_Y = k_{I₁} + ... + k_{I_n} − h, i.e., transform under Γ as
$$Y_{I_1 \dots I_n}(\gamma\tau) = (c\tau + d)^{k_Y}\, \rho_Y(\gamma)\, Y_{I_1 \dots I_n}(\tau)\,, \tag{2.14}$$
where ρ ≡ ρ_Y is a unitary representation of the homogeneous finite modular group Γ'_N such that ρ ⊗ ρ_{I₁} ⊗ ... ⊗ ρ_{I_n} ⊃ 1. Apart from that, due to holomorphicity of the superpotential, modular forms have to be holomorphic functions of τ. Together with the transformation property (2.14), this significantly constrains the space of modular forms. In fact, non-trivial modular forms of a given level N exist only for positive integer weights k ∈ N and form finite-dimensional linear spaces M_k(Γ(N)) which decompose into multiplets of Γ'_N. As can be seen from Table 1, the spaces M_k(Γ(N)) have low dimensionalities for small values of k and N. Therefore it is possible to form only a few independent Yukawa couplings, which yields predictive models of flavour. By analysing eq. (2.14), one notes that odd-weighted modular forms necessarily have ρ(R) = −1 in order to compensate the minus sign arising from the automorphy factor, while for even-weighted modular forms one has ρ(R) = 1. Therefore, in modular-invariant theories based on inhomogeneous modular groups Γ_N only even-weighted modular forms appear.

Modular Forms of Level 4
3.1 "Weight 1/2" Modular Forms

Modular forms of level 4 and weight k form a linear space M_k(Γ(4)) of dimension 2k + 1 [74], spanned by monomials ε^m(τ) θ^n(τ) with m + n = 2k, where m and n are non-negative integers; both functions can be written in terms of the Dedekind eta function η(τ) (we collect all the necessary definitions and properties of special functions in Appendix A). In other words, M_k(Γ(4)) is spanned by polynomials of even degree 2k in the two functions θ(τ) and ε(τ), defined as
$$\theta(\tau) \equiv \Theta_3(2\tau)\,, \qquad \varepsilon(\tau) \equiv \Theta_2(2\tau)\,. \tag{3.2}$$
Here Θ₂(τ) and Θ₃(τ) are the Jacobi theta constants, related to the Dedekind eta by eq. (A.5). In particular, we conclude that the space of weight 1 modular forms of level 4 is formed by the homogeneous quadratic polynomials in θ and ε, or equivalently, in the theta constants Θ₂ and Θ₃ of double argument (for more details on the correspondence between modular forms of level 4 and the theta constants, see Appendix B). From eqs. (3.2) and (A.2) we find immediately that θ(τ) and ε(τ) admit the following q-expansions, i.e. power series expansions in q₄ ≡ exp(iπτ/2):
$$\theta(\tau) = 1 + 2\sum_{k=1}^{\infty} q_4^{(2k)^2} = 1 + 2 q_4^4 + \dots\,, \qquad \varepsilon(\tau) = 2\sum_{k=0}^{\infty} q_4^{(2k+1)^2} = 2 q_4 + 2 q_4^9 + \dots\,, \tag{3.3}$$
so that θ → 1, ε → 0 in the "large volume" limit Im τ → ∞.
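Assuming the identification θ(τ) = Θ₃(2τ) and ε(τ) = Θ₂(2τ) used above (consistent with the "double argument" statement in the text), the q-expansions (3.3) can be cross-checked numerically with mpmath; a sketch, not code from the paper:

```python
# Numerical check of the q-expansions (3.3); mpmath's jtheta(n, 0, q)
# takes the nome q, so Theta_i(2*tau) corresponds to q = exp(2*pi*i*tau).
from mpmath import mp, jtheta, exp, pi, mpc

mp.dps = 30
tau = mpc(0.3, 2.0)                  # any point in the upper half-plane
nome = exp(2j * pi * tau)            # nome of Theta_i(2*tau)
q4 = exp(1j * pi * tau / 2)          # q4 = exp(i*pi*tau/2)

theta = jtheta(3, 0, nome)           # theta(tau)
eps = jtheta(2, 0, nome)             # eps(tau)

# Leading terms: theta = 1 + 2 q4^4 + ..., eps = 2 q4 (1 + q4^8 + ...)
print(abs(theta - (1 + 2 * q4**4 + 2 * q4**16)))   # tiny residual
print(abs(eps - 2 * q4 * (1 + q4**8 + q4**24)))    # tiny residual
print(abs(eps / (2 * q4) - 1))                     # eps ~ 2 q4 at large Im(tau)
```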
In fact, ε ≃ 2 q₄ in this limit and it can be used as an expansion parameter instead of q₄, which justifies the notation. Note that, due to the quadratic dependence in the exponents of q₄, the series (3.3) converge rapidly in the fundamental domain of the modular group, where one has |q₄| ≤ exp(−π√3/4) ≈ 0.26. We give below the values of θ(τ) and ε(τ) at the values of τ, namely τ_C, τ_L, and τ_T, at which there exist residual symmetries (see Section 5 for details). In particular, one finds the exact relations ε = (√2 − 1) θ at the symmetric point τ_C = i, and ε → 0, θ → 1 at τ_T = i∞. The action of the T generator on θ and ε follows from the corresponding transformation of the theta constants (A.3):
$$\theta \xrightarrow{T} \theta\,, \qquad \varepsilon \xrightarrow{T} i\,\varepsilon\,, \tag{3.6}$$
while for S one finds
$$\theta \xrightarrow{S} \sqrt{-i\tau}\;\frac{\theta + \varepsilon}{\sqrt{2}}\,. \tag{3.7}$$
By requiring that the second action of S should transform the result back to θ(τ), we find the corresponding action on ε(τ), and conclude that
$$\varepsilon \xrightarrow{S} \sqrt{-i\tau}\;\frac{\theta - \varepsilon}{\sqrt{2}}\,. \tag{3.8}$$
From the transformation properties (3.6) and (3.8), one sees that θ and ε work as "weight 1/2" modular forms. Their even powers produce integer weight modular forms, which we consider in the following subsection.

Weight 1 Modular Forms

We have seen that the linear space of weight 1 modular forms of level 4 is spanned by three quadratic monomials in θ(τ) and ε(τ), namely θ², θε and ε², such that the linear space of weight k = 1 has the correct dimension, 2k + 1 = 3. These three functions can be arranged into a triplet furnishing a representation of S'₄ ≡ SL(2, Z₄), which is a double cover⁷ of the permutation group S₄ [66]. We summarise the group theory of S'₄ in Appendix C. In the group representation basis of Table 7, the relevant triplet Y^(1)_3̂(τ) is built out of these monomials (eq. (3.10)) and furnishes an irreducible representation 3̂. Indeed, using the transformation rules (3.6), (3.8) it is easy to check that the triplet (3.10) transforms under the generators of the modular group as expected (eq. (3.11)). The 3̂ modular triplet of eq. (3.10) is the base result of our construction. It can be used to generate all modular forms entering and determining the fermion Yukawa couplings and mass matrices, as we will see in what follows.

Modular Forms of Higher Weights

Modular multiplets of higher weights Y^(k>1)_{r,µ} may be obtained from those of lower weight via tensor products. Here, the index µ labels linearly independent multiplets (in case more than one is present) for a given weight k and irreducible S'₄ representation r. The lowest weight multiplet in eq. (3.10) works then as a 'seed' multiplet, since all higher weight modular multiplets can be recovered from a sufficient number of tensor products of Y^(1)_3̂(τ) with itself. Note that the latter has been written in terms of a minimal set of functions of τ from the start, namely θ(τ) and ε(τ). By doing so, tensor products directly provide spaces of modular forms with the correct dimensions, bypassing the typical need to look for constraints relating redundant higher weight multiplets. In other words, these constraints are manifestly verified given the explicit forms of the multiplet components. First of all, we recover the known [32] modular S₄ lowest-weight multiplets, a doublet and a triplet, which are now expressed in terms of θ(τ) and ε(τ) (eq. (3.12)). Our construction reduces to that of modular Γ₄ ≃ S₄ for even weights (see also Appendix C.1). In order to compare the results in eq. (3.12) with those of Ref. [32], one needs to work in compatible group representation bases, i.e. bases in which the representation matrices ρ_r(S) and ρ_r(T) coincide, for irreducible representations r common to S₄ and S'₄ (those without hats).
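Returning to the functions θ and ε themselves: the symmetric-point relation ε(i) = (√2 − 1) θ(i) quoted above, which follows from standard bilinear theta identities, and the T-transformation phases of (3.6) can both be verified numerically. A sketch, under the same identification of θ and ε as in the previous code block:

```python
from mpmath import mp, jtheta, exp, pi, mpc, sqrt

mp.dps = 30
f = lambda tau: (jtheta(3, 0, exp(2j*pi*tau)),   # theta(tau) = Theta3(2 tau)
                 jtheta(2, 0, exp(2j*pi*tau)))   # eps(tau)   = Theta2(2 tau)

# At the symmetric point tau_C = i: eps / theta = sqrt(2) - 1
th_i, ep_i = f(mpc(0, 1))
print(abs(ep_i / th_i - (sqrt(2) - 1)))          # ~1e-30

# Under T: tau -> tau + 1, the series give theta -> theta, eps -> i*eps
tau = mpc(0.3, 1.2)
th, ep = f(tau)
th1, ep1 = f(tau + 1)
print(abs(th1 - th), abs(ep1 - 1j * ep))         # both ~1e-30
```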
The basis for S₄ compatible with the one for S'₄ we here consider, together with the expressions for modular multiplets in that basis, can be found in Ref. [47] (see Appendices B and C therein); the correspondence can be verified by looking at the q-expansions. Further tensor products with Y^(1)_3̂ produce new modular multiplets of odd weight. At weight k = 3, a non-trivial singlet and two triplets exclusive to S'₄ arise (eq. (3.14)). Finally, at weight k = 4 one again recovers the S₄ result (eq. (3.15)), which can be seen to match known multiplets (up to normalisation) by comparing q-expansions. We collect the explicit expressions of S'₄ modular multiplets with higher weights, up to k = 10 and written in terms of θ(τ) and ε(τ), in Appendix D. Note that odd- (even-)weighted modular forms always furnish hatted (unhatted) representations, since in our notation hatted representations are exactly the ones for which ρ(R) = −1.

Combining gCP and Modular Symmetries

In models possessing a flavour symmetry, one can define a generalised CP (gCP) transformation acting on the matter fields as
$$\psi(x) \xrightarrow{\text{CP}} X\, \bar\psi(x_P)\,, \tag{4.1}$$
with a bar denoting the conjugate field, and where x = (t, x), x_P = (t, −x) and X is a unitary matrix acting on flavour space. Modular symmetry, which plays the role of a flavour symmetry, can be consistently combined with a generalised CP symmetry. This has been done from a bottom-up perspective in [47] for the inhomogeneous modular group Γ̄. The result of [47] can be generalised to the case of the full modular group Γ as follows. Starting with eq. (4.1) one can show that the modulus τ should transform under CP as
$$\tau \xrightarrow{\text{CP}} -\tau^* \tag{4.2}$$
without loss of generality (cf. Ref. [47]). The corresponding action on the modular group Γ is given by an outer automorphism u,
$$\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \;\longrightarrow\; u(\gamma) = \sigma(\gamma) \begin{pmatrix} a & -b \\ -c & d \end{pmatrix}, \tag{4.4}$$
where σ(γ) = ±1. Note that the signs σ(γ) are irrelevant in the case of the inhomogeneous modular group Γ̄ since γ is identified with −γ, and therefore eq. (4.4) uniquely determines the automorphism u(γ). This is no longer the case for the full modular group Γ, and one has to treat the signs carefully. Since u is an automorphism, it is sufficient to define its action on the group generators. From eq. (4.4) one has:
$$u(S) = \sigma(S)\, S^{-1}\,, \qquad u(T) = \sigma(T)\, T^{-1}\,.$$
The fact that u(γ) is an automorphism forbids u(R) = −R = 1, and so σ(R) = +1 and u(R) = +R. Furthermore, the signs σ(γ) must be chosen in a way consistent with the group relations in (2.3). In particular, consistency with (ST)³ = 1 implies σ(S) = σ(T). Thus, from the outset, two different outer automorphisms may be realised (see also [77]):
$$u:\; S \to S^{-1}\,, \quad T \to T^{-1}\,; \tag{4.7}$$
$$u':\; S \to R\,S^{-1}\,, \quad T \to R\,T^{-1}\,. \tag{4.8}$$
We note that S⁻¹ = −S.

CP₁

The first option (4.7), which we call CP₁, corresponds to the trivial sign choice σ(γ) = +1 and therefore admits an explicit formula for generic γ:
$$u(\gamma) = \begin{pmatrix} a & -b \\ -c & d \end{pmatrix}. \tag{4.9}$$
This automorphism can be realised as a similarity transformation within GL(2, Z):
$$u(\gamma) = \chi\, \gamma\, \chi^{-1}\,, \qquad \chi = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \tag{4.10}$$
Applying the chain CP₁ → γ → CP₁⁻¹ to the matter field ψ, which transforms under Γ and CP as in eqs. (2.5) and (4.1), one arrives at the gCP consistency condition on the matrix X:
$$X\, \rho^*(\gamma)\, X^{-1} = \rho(u(\gamma))\,, \tag{4.11}$$
or, equivalently,
$$X\, \rho^*(S)\, X^{-1} = \rho(S)^{-1}\,, \qquad X\, \rho^*(T)\, X^{-1} = \rho(T)^{-1} \tag{4.12}$$
(see also [59]), which coincide with the corresponding expressions in the case of Γ̄ [47]. In a basis where S and T are represented by symmetric matrices, eq. (4.12) is satisfied by the canonical CP transformation X = 1 [47]. Such a basis exists for all irreducible representations of the inhomogeneous finite modular groups Γ_N with N = 2, 3, 4, 5 (see [47] and references therein) and N = 7 [40], as well as for all irreps of the homogeneous modular groups Γ'₃ and Γ'₄ (see Appendix C.2).⁸
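The statement that X = 1 solves eq. (4.12) in a symmetric basis can be illustrated generically: any symmetric unitary matrix M obeys M* = M⁻¹. A small numpy sketch (the construction of M is illustrative, not one of this paper's representation matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Any M = A @ A.T with A unitary is symmetric and unitary.
A = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]
M = A @ A.T
assert np.allclose(M, M.T)                       # symmetric
assert np.allclose(M @ M.conj().T, np.eye(3))    # unitary
# X = 1 version of the consistency condition: rho(gamma)* = rho(gamma)^(-1)
assert np.allclose(M.conj(), np.linalg.inv(M))
```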
This means that CP₁ allows one to define a CP transformation consistently and uniquely for all irreps of the aforementioned finite modular groups, hence u acts as a class-inverting automorphism on these groups [78].⁹ The action of CP₁ on fields (and τ) obeys
$$\psi_i(x) \xrightarrow{\text{CP}_1^2} (X X^*)_{ij}\, \psi_j(x)\,,$$
and X = 1 ⇒ XX* = 1 in the symmetric basis, i.e. CP₁ squares to the identity there. It further follows that X is symmetric in any representation basis [47]. The modular group Γ = SL(2, Z) is then extended by CP₁ to a semidirect product with Z₂^CP (eq. (4.13)). Finally, in a basis where S and T are symmetric, where Clebsch-Gordan coefficients are real and with modular multiplets normalised to satisfy Y(−τ*) = Y*(τ),¹⁰ the requirement of CP₁ invariance reduces to reality of the couplings [47], i.e. of the numerical coefficients in front of the independent singlets in eq. (2.13). In such theories, CP symmetry is broken spontaneously by the VEV of the modulus τ, thus providing a common origin of CP and flavour symmetry violation. We will make use of CP₁ in the upcoming phenomenological examples of Section 6.

CP₂

Let us now discuss the second possibility (4.8) for the modular group outer automorphism, u'. This choice, which we call CP₂, is formally defined by u'(γ) = σ(γ) u(γ) with σ(S) = σ(T) = −1, but cannot be realised as a similarity transformation within GL(2, Z). It leads to a different consistency condition on the matrix X, namely:
$$X\, \rho^*(\gamma)\, X^{-1} = \rho(u'(\gamma))\,, \tag{4.15}$$
or, in terms of the generators S and T,
$$X\, \rho^*(S)\, X^{-1} = \rho(R)\,\rho(S)^{-1}\,, \qquad X\, \rho^*(T)\, X^{-1} = \rho(R)\,\rho(T)^{-1}\,, \tag{4.16}$$
which are equivalent to (4.15), since σ(γ₁)σ(γ₂) = σ(γ₁γ₂). In practice, the consistency condition (4.16) differs from that of eq. (4.12), and CP₂ differs from CP₁, only when (−1)^k ρ(R) ≠ 1, i.e. whenever the matter field ψ transforms non-trivially under R. For these R-odd fields, however, it is only possible to satisfy the consistency condition if i) both the characters of T and S vanish, χ(S) = χ(T) = 0, which follows from eq. (4.16) after taking traces, ii) the dimension of the representation of ψ is even, which follows from eq. (4.16) after taking determinants, and iii) the level N of the finite group is even, which follows from taking the N-th power of the second relation in eq. (4.16).¹¹ This means that, given a finite modular group of level N, CP₂ is incompatible with certain combinations of modular weights and irreps. In particular, combining the groups Γ_N with N = 3, 5, 7 and Γ'₃ with CP₂ means that any matter field must be R-even, i.e. satisfy (−1)^k ρ(R) = 1, and transform canonically under CP, X_CP₂ = 1, in the symmetric basis. In the case of Γ₂, Γ₄ and Γ'₄ there is the additional option to have R-odd fields, (−1)^k ρ(R) = −1, but only for the doublet representations, all of which verify χ(S) = χ(T) = 0. These fields are constrained to transform under CP with an antisymmetric matrix X_CP₂ (so that XX* = −1) in the symmetric basis. Notice that then CP₂² ≠ 1: instead, the action of CP₂² on fields, forms and τ coincides with that of R for these finite groups. Equating these two actions, the modular group is in this context minimally extended to the semidirect product of eq. (4.18).¹²

¹¹ An associated fact is that Γ(N) with N ≥ 2 is only stable under u for even N.

Keeping our focus on Γ₂, Γ₄, and Γ'₄, with their respective symmetric bases and Clebsch-Gordan coefficients given in Ref. [47] and Appendix C, let us briefly comment on the consequences of implementing CP₂ for the couplings in the superpotential W. We start by writing the latter as a sum of independent singlets,
$$W = \sum_s g_s \left( Y_s(\tau)\, \psi_{I_1} \cdots \psi_{I_n} \right)_{\mathbf{1}}\,, \tag{4.19}$$
where Y_s(τ) are modular multiplets of a certain weight and irrep, and g_s are complex coupling constants.
To be non-vanishing, each term must contain an even number of R-odd fields ψᵢ, if any, which are in doublet representations of the finite groups at hand. Taking ψᵢ to be R-odd for i ≤ 2m and R-even for i > 2m, with m being a non-negative integer such that 2m ≤ n, one can explicitly check how each singlet transforms, using the reality and symmetry properties of the Clebsch-Gordan coefficients.¹³ Under CP₂, a term in eq. (4.19) then transforms into the conjugate of the original term, up to a sign; requiring the two to coincide, the independence of singlets implies the constraint g_s = ± g*_s, meaning that all coupling constants g_s have to be real or purely imaginary (depending on the sign) to conserve CP. It should be noted that it is difficult to build phenomenologically viable models of fermion masses and mixing exploiting the novelty of CP₂ with R-odd fields, as i) the choice of irreps for such fields is quite limited and ii) the R-odd and R-even sectors are segregated by the Z₂^R symmetry. Taken together, these facts imply the vanishing of some mixing angles or masses in simple models based on the combination of the novel CP₂ with Γ₂, Γ₄, or Γ'₄. We will not pursue this model-building avenue in what follows.

¹² The non-trivial automorphism defining this outer semidirect product is γ → CP₂ S γ S⁻¹ CP₂⁻¹.
¹³ For each pair of R-odd doublets, (X_CP₂ ψᵢ ⊗ X_CP₂ ψⱼ)_r = ±(ψᵢ ⊗ ψⱼ)_r, where the sign depends on r.

Residual Symmetries

Modular symmetry is spontaneously broken by the VEV of the modulus τ: in fact, there is no value of τ which is left invariant by the modular group action (2.4). However, certain values of τ (called symmetric or fixed points) break the modular group Γ only partially, with the unbroken generators giving rise to residual symmetries [33]. These unbroken symmetries can play an important role in flavour model building [18,25]. To classify the possible residual symmetries, one first notices that with a proper "gauge choice" τ can always be restricted to the fundamental domain D of the modular group Γ̄,
$$\mathcal{D} = \left\{ \tau \,:\; \operatorname{Im} \tau > 0\,,\; |\operatorname{Re} \tau| \le \tfrac{1}{2}\,,\; |\tau| \ge 1 \right\},$$
which describes all possible values of τ up to a modular transformation (see Fig. 1). Note that, by convention, the right half of the boundary ∂D is not included in D, since it is related to the left half by suitable modular transformations. In the fundamental domain D, there exist only three symmetric points, namely [33]:
τ_C = i, left unbroken by S;
τ_L = −1/2 + i√3/2, left unbroken by ST; and
τ_T = i∞, left unbroken by T.
In addition, the R generator is unbroken for any value of τ. Finally, if a theory is also CP-invariant (i.e. its couplings satisfy the constraints discussed in Section 4), then the CP symmetry is spontaneously broken by any τ ∈ D except for the values lying on the fundamental domain boundary or the imaginary axis [47]:
i) Re τ = 0 (the imaginary axis) is invariant under CP;
ii) Re τ = −1/2 (the left vertical boundary) is invariant under CP T;
iii) |τ| = 1 (the boundary arc) is invariant under CP S.
Recall that CP always acts on τ as in eq. (4.2), meaning the above statement does not depend on the choice of CP automorphism (CP₁ vs. CP₂). For a given value of τ, the residual symmetry group is simply the group generated by the unbroken transformations, subject to relations which can be deduced from eqs. (4.13), (4.18). For instance, the symmetric point τ = i is invariant under S, R and CP₁ in the case of the full modular group Γ enhanced by CP₁. The corresponding symmetry group, generated by S (with S² = R) and CP₁, is isomorphic to D₄, where D₄ is the dihedral group of order 8 (the symmetry group of a square). One can find the residual symmetry groups for other values of τ in a similar fashion; we collect the results in Table 2.
When considering finite modular versions Γ^(')_N of the modular group, the residual symmetry groups may be reduced, due to the extra relation T^N = 1 (recall that for N > 5 further constraints are present). For N ≤ 5, the instances of Z^T in Table 2 should be replaced by Z^T_N. Since every symmetric point outside the fundamental domain D is physically equivalent to a symmetric point inside D, its residual symmetry group is isomorphic to one of the groups listed in Table 2. For instance, "the right cusp" τ_R ≡ 1/2 + i√3/2 is related to the left cusp as τ_R = T τ_L, so the residual symmetry group at τ_R is isomorphic to that at τ_L, and the isomorphism is given by a conjugation with T⁻¹. In particular, the unbroken generators are mapped as ST → T(ST)T⁻¹ = TS, R → TRT⁻¹ = R and CP T → T(CP T)T⁻¹ = T CP.

Phenomenology

To illustrate how the results of the previous sections can be applied to model building, we now consider examples of S'₄ modular-invariant models of lepton flavour. As in previous bottom-up works, the Kähler potential is taken to be of the standard minimal form (eq. (6.1)), with Λ₀ having mass dimension one.

Weinberg Operator Model

We first assume that neutrino masses are generated from the Weinberg operator, and assign both lepton doublets L and charged lepton singlets E^c to full triplets of the discrete flavour group. Such an assignment provides a justification for three lepton generations and contrasts with most previous bottom-up modular approaches to flavour. The relevant superpotential is of the form
$$W = \frac{1}{\Lambda} \sum_s g_s \left( Y_s(\tau)\, L\, L\, H_u\, H_u \right)_{\mathbf{1}} + \sum_s \alpha_s \left( Y_s(\tau)\, E^c\, L\, H_d \right)_{\mathbf{1}}\,, \tag{6.2}$$
where one has summed over independent singlets s. In particular, we take L ∼ 3 with weight k_L = 2, and E^c ∼ 3̂ with weight k_{E^c} = 1. Higgs doublets H_u and H_d are assumed to be S'₄ trivial singlets of zero modular weight. To compensate the modular weights of field monomials, the modular forms entering the Weinberg term need to have weight k_W = 4, while those in the Yukawa term need instead k_Y = 3. Note that E^c transforms with an odd modular weight and in an irrep which is absent from the usual Γ₄ ≃ S₄ modular construction. Aiming at a minimal and predictive example, we further impose a gCP symmetry (CP₁, see Section 4) on the model. Then, eq. (6.2) can be written out explicitly (eq. (6.3)), with the g_s and the α_s (s = 1, 2, 3) real as a result of imposing gCP in the working symmetric basis for the S'₄ group generators (see Appendix C.2). This superpotential results in a Lagrangian containing the mass matrices of neutrinos and charged leptons (eq. (6.4)), written in terms of four-spinors, with H_u = (0, v_u)^T, H_d = (v_d, 0)^T, and ν^c_iR ≡ C ν̄_iL^T, C being the charge conjugation matrix. The matrices M_ν and M_e can be obtained from eq. (6.3) and are given in eqs. (6.5) and (6.6).¹⁴ In the above, the Y^(k)_r subscript attached to each matrix denotes the modular form multiplet Y to be used within that matrix. The explicit expressions for these mass matrices in terms of the θ and ε functions are given in Appendix E. Notice that the 13 independent Y_i are all determined once the value of the complex modulus τ is specified. Hence, this model contains 8 real parameters (6 real couplings and τ) while aiming to explain 12 observables (3 charged-lepton masses, 3 neutrino masses, 3 mixing angles, and 3 CPV phases).
Since 8 of these observables are rather well-determined, one expects to predict within the model the lightest neutrino mass and the Dirac CPV phase δ, as well as the Majorana phases α₂₁ and α₃₁, and hence the effective Majorana mass |⟨m⟩| entering the expression for the rate of neutrinoless double beta ((ββ)₀ν-)decay [1]. The functions θ(τ) and ε(τ) are particularly well suited to analyse models in the "vicinity" of the symmetric point τ_T = i∞, i.e. for models where Im τ is large. In this case, one can use ε(τ) as an expansion parameter and obtain the approximate forms of the neutrino and charged-lepton mass matrices given above (eqs. (6.7) and (6.8)),¹⁵ where we have omitted O(ε⁴/θ⁴) corrections, included in full in Appendix E. In these expressions, we have further defined g̃₂₍₃₎ ≡ g₂₍₃₎/g₁ and α̃₂₍₃₎ ≡ α₂₍₃₎/α₁. The statistical analysis, the details and results of which will be reported in subsection 6.3, shows that a successful description of the neutrino oscillation data and of charged-lepton masses can be achieved for a value of τ close to τ_C = i for NO, and close to 1.6 i for IO. In both cases, one cannot rely on the approximations used in eqs. (6.7) and (6.8), and the full expressions given in Appendix E are required.

Type I Seesaw Model

We now assume instead that neutrino masses are generated from interactions with gauge singlets N^c in a type I seesaw, taking L ∼ 3̂ with weight k_L = 4, E^c ∼ 3 with weight k_{E^c} = −1, and N^c ∼ 2 with weight k_{N^c} = 1. Once more, Higgs doublets H_u and H_d are assumed to be trivial S'₄ singlets of zero modular weight. The modular forms entering the Majorana mass term need to have weight k_M = 2, while those in the Yukawa terms of charged leptons and neutrinos need k_{Y_E} = 3 and k_{Y_N} = 5, respectively. Note that here both E^c and N^c transform with odd modular weights, while L transforms in an irrep which is absent from Γ₄ ≃ S₄. We further impose a gCP symmetry (CP₁) on the model, whose superpotential is given in eq. (6.9), where Λ₁, the g_s and the α_s (s = 1, 2, 3) are real, given the working symmetric basis. This superpotential can be cast in matrix form. In the conventions of eq. (6.4), the light neutrino mass matrix M_ν is then obtained from the seesaw relation (6.13), while the charged-lepton mass matrix M_e = v_d λ† is given by eq. (6.6) with α₃ → −α₃. Note that, due to the seesaw relation (6.13), changes in the scale of the g_s can be compensated by adjusting the scale of Λ₁. Hence, this model is effectively described by 8 real parameters at low energy (6 real combinations of couplings and τ).

Numerical Analysis and Results

Our models are constrained by the observed ratios of charged-lepton masses, neutrino mass-squared differences, and leptonic mixing angles. The experimental best fit values and 1σ ranges considered for these observables are collected in Table 3. We do not take into account the 1σ range of the Dirac CPV phase δ in our fit. As a measure of goodness of fit, we use N_σ ≡ √(∆χ²), where ∆χ² is approximated as a sum of one-dimensional chi-squared projections. The reader is referred to Ref. [33] for further details on the numerical procedure.

Table 3: Best fit values and 1σ ranges for neutrino oscillation parameters, obtained from the global analysis of Ref. [8], and for charged-lepton mass ratios, given at the scale 2 × 10¹⁶ GeV with the tan β averaging described in [12], obtained from Ref. [80]. The parameters entering the definition of r are δm² ≡ m₂² − m₁² and ∆m² ≡ m₃² − (m₁² + m₂²)/2.
The best fit value and 1σ range of δ did not drive the numerical searches here reported, i.e. the value of δ does not affect the value of N_σ defined in the text.

Through numerical search, we find that the model of subsection 6.1 can lead to acceptable fits of the leptonic sector (N_σ ≈ 0.07), with the values of τ, α_i and g_i indicated in Tables 4 and 5 for NO and IO, respectively. The phenomenologically viable region in the τ plane is shown, for both orderings, in Figure 2. While for IO the fit is possible with τ ≈ 1.6 i, for NO an annular region close to τ_C = i is selected, with |τ − i| ≲ 0.12. As one can see from the tables, independent singlets in the superpotential of eq. (6.3) can provide comparable contributions to the mass matrices. There is however some fine-tuning present in the coupling constants α_i in order to accommodate charged-lepton mass hierarchies. This model additionally predicts peculiar correlations between observables, which are shown in Figures 3 and 4, for NO and IO, respectively. One can see that, in the NO case, a Dirac phase δ deviating from π is tied to smaller values of the atmospheric angle, which are in turn associated with larger values of the effective Majorana mass |⟨m⟩| and of the sum of neutrino masses Σᵢ mᵢ, i.e. with a larger absolute neutrino mass scale. In the IO case, a deviation of δ from π also favours smaller values of sin²θ₂₃. In both cases, the values of all three CPV phases are highly correlated. Analysing instead correlations between observables and model parameters, one can verify that CP is conserved for Re τ = 0, as anticipated in Section 5. Recall that, in this model, CP symmetry is spontaneously broken by the VEV of τ. The correlation between the Dirac CP phase and the value of Re τ is shown in Figure 5 for both orderings, with δ taking a CP conserving value for purely imaginary τ, as expected. We note also that, in the case of NO, the viable fit region for the ratios g₂/g₁ and g₃/g₁ seems to be unbounded, see Figure 6. However, the correlations between g₂/g₁ and observables suggest that the limit g₁ → 0 is phenomenologically viable, with larger values of the ratio not affecting the values of observables (cf. sin²θ₂₃ and m in the figure). We are then free to limit the range of the ratio g₂/g₁ to 10³ in our numerical exploration. Finally, let us comment on the allowed values of the effective Majorana mass |⟨m⟩| entering the expression for the rate of (ββ)₀ν-decay. At the 3σ level, the IO fit predicts |⟨m⟩| ≈ 22-29 meV, while for NO one has |⟨m⟩| ≲ 29 meV (see Tables 4 and 5). In the latter case, very small values of |⟨m⟩| are allowed, in contrast with a tendency of bottom-up modular-invariant models (see e.g. [68]). This is also the case for the NO best fit point, for which a value of |⟨m⟩| slightly below 1 meV is preferred. However, |⟨m⟩| can be large, |⟨m⟩| > 20 meV, already at the 1.2σ level. This can, for instance, be seen in Figure 7, where we collect the N_σ projections for different model parameters and observables. For the seesaw model of subsection 6.2, we find that a fit of the data summarised in Table 3 is possible. As an example, a viable point in parameter space is described by τ = −0.14 + 1.43 i, with definite predictions for the Majorana phases. The ratio of the masses M_i of the two heavy Majorana neutrinos is additionally predicted to be M₂/M₁ ≈ 1.14. A full numerical exploration of this scenario is postponed to future work.
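For concreteness, the goodness-of-fit measure N_σ defined above can be sketched in a few lines of Python; the observable names and numbers are placeholders, not the fit values of Tables 4 and 5:

```python
# Sketch of N_sigma = sqrt(delta chi^2), with delta chi^2 approximated
# as a sum of one-dimensional chi-squared projections. All values below
# are hypothetical placeholders.
import math

best_fit = {"sin2th12": 0.305, "sin2th13": 0.0222, "sin2th23": 0.545}
sigma    = {"sin2th12": 0.012, "sin2th13": 0.0006, "sin2th23": 0.020}
model    = {"sin2th12": 0.306, "sin2th13": 0.0221, "sin2th23": 0.548}

delta_chi2 = sum(((model[k] - best_fit[k]) / sigma[k]) ** 2 for k in best_fit)
print("N_sigma =", math.sqrt(delta_chi2))
```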
Summary and Conclusions

In the present article we have developed the formalism of the finite modular group Γ'₄ - the double cover group of Γ₄ - which can be used, in particular, for theories of lepton and quark flavour. The finite modular group Γ₄, as is well known, is isomorphic to the permutation group S₄, while Γ'₄ is isomorphic to the double cover of S₄, S'₄. In comparison with S₄, the group S'₄ has twice as many elements and twice as many irreducible representations, i.e. it has 48 elements and admits 10 irreps: 4 one-dimensional, 2 two-dimensional, and 4 three-dimensional. We have denoted them by:
1, 1̂, 1′, 1̂′, 2, 2̂, 3, 3̂, 3′, 3̂′. (7.1)
Our notation has been chosen such that irreps without a hat have a direct correspondence with S₄ irreps, whereas hatted irreps are novel and specific to S'₄. Working in a symmetric basis for the generators of S'₄, we have derived the decompositions of tensor products of S'₄ irreps, as well as the corresponding Clebsch-Gordan coefficients (see Appendix C). Modular forms of level 4 transforming non-trivially under S'₄ can have even or odd integer weight k > 0. In Section 3, we have explicitly constructed a basis for the 3-dimensional space of the modular forms of lowest weight k = 1, which furnishes a 3-dimensional representation 3̂ of S'₄, not present in S₄. The three components of the weight 1 modular form Y^(1)_3̂(τ), transforming as a 3̂, were shown to be quadratic polynomials in two "weight 1/2" Jacobi theta constants, denoted as ε(τ) and θ(τ), τ being the modulus (cf. eq. (3.10)). The functions ε(τ) and θ(τ) are related to the Dedekind eta function, and their q-expansions are given in eq. (3.3). We have further constructed S'₄ multiplets of modular forms of weights up to k = 10. The multiplets of weights k ≥ 2 are expressed as homogeneous polynomials of even degree in the two functions ε and θ - see eqs. (3.12), (3.14) and (3.15), and Appendix D. We have also investigated the problem of combining modular and generalised CP (gCP) invariance in theories based on S'₄. We have shown, in particular, that in such theories the CP transformation can be defined in two possible ways, which we have denoted as CP₁ and CP₂ (see Section 4). They act in the same way on the (VEV of the) modulus τ, but the corresponding automorphisms act differently on the generators S and T of S'₄. The CP₁ transformation coincides with the one that can be employed in Γ₂ ≃ S₃, Γ₃ ≃ A₄, Γ₄ ≃ S₄, and Γ₅ ≃ A₅ modular-invariant theories [47]. The second transformation, CP₂, may or may not differ from CP₁ in practice and is incompatible with certain combinations of modular weights and irreps. Note that CP₂ may also be consistently combined with other finite modular groups, such as Γ₂ ≃ S₃ and Γ₄ ≃ S₄. We have analysed in detail, in Section 5, the possible residual symmetries in theories with modular invariance, and with modular and gCP invariance. Depending on the value of τ, some generators of the full symmetry group may be preserved. The possible residual symmetry groups can be non-trivial and are summarised in Table 2. Finally, we have provided examples of application of our results in Section 6, constructing phenomenologically viable lepton flavour models based on the finite modular S'₄ symmetry, in which neutrino masses are generated by the Weinberg operator or by the type I seesaw mechanism. The approach developed in the present article considerably simplifies the parameterisation of modular forms of level 4 and given weight.
In particular, the derivation of k > 1 modular multiplets in terms of just two independent functions ε and θ automatically bypasses the typical need to search for non-linear constraints, which would relate redundant multiplets coming from tensor products. This approach can be useful in other setups based on modular symmetry, for both homogeneous (double cover) and inhomogeneous finite modular groups.

A Dedekind Eta and Jacobi Theta

The Dedekind eta function is a holomorphic function defined in the complex upper half-plane as
$$\eta(\tau) = q^{1/24} \prod_{n=1}^{\infty} \left(1 - q^n\right), \tag{A.1}$$
where q ≡ e^{2πiτ} and Im τ > 0. In this work, fractional powers q^{1/n}, n being a non-zero integer, should be read as e^{2πiτ/n}. The Jacobi theta functions Θᵢ(z, τ), i = 1, ..., 4 (see e.g. [82]) are special functions of two complex variables. We are primarily interested in the so-called theta constants Θᵢ(τ) ≡ Θᵢ(0, τ), which are functions of one complex variable defined in the upper half-plane by¹⁶
$$\Theta_2(\tau) = \sum_{n \in \mathbb{Z}} e^{i\pi (n+1/2)^2 \tau}\,, \qquad \Theta_3(\tau) = \sum_{n \in \mathbb{Z}} e^{i\pi n^2 \tau}\,, \qquad \Theta_4(\tau) = \sum_{n \in \mathbb{Z}} (-1)^n\, e^{i\pi n^2 \tau} \tag{A.2}$$
(the first theta constant, Θ₁(τ), is identically zero). The theta constants transform under the generators of the modular group as
$$T:\; \Theta_2 \to e^{i\pi/4}\,\Theta_2\,,\;\; \Theta_3 \to \Theta_4\,,\;\; \Theta_4 \to \Theta_3\,; \qquad S:\; \Theta_2(-1/\tau) = \sqrt{-i\tau}\,\Theta_4(\tau)\,,\;\; \Theta_3(-1/\tau) = \sqrt{-i\tau}\,\Theta_3(\tau)\,,\;\; \Theta_4(-1/\tau) = \sqrt{-i\tau}\,\Theta_2(\tau)\,. \tag{A.3}$$
Note that in the S transformation the principal value of the square root is assumed. Apart from the power series expansions (A.2), the theta constants admit the following infinite product representations:
$$\Theta_2(\tau) = 2 q^{1/8} \prod_{n=1}^{\infty} (1 - q^n)(1 + q^n)^2\,, \quad \Theta_3(\tau) = \prod_{n=1}^{\infty} (1 - q^n)(1 + q^{n-1/2})^2\,, \quad \Theta_4(\tau) = \prod_{n=1}^{\infty} (1 - q^n)(1 - q^{n-1/2})^2\,, \tag{A.4}$$
which imply the relations to the Dedekind eta function
$$\Theta_2(\tau) = \frac{2\,\eta^2(2\tau)}{\eta(\tau)}\,, \qquad \Theta_3(\tau) = \frac{\eta^5(\tau)}{\eta^2(\tau/2)\,\eta^2(2\tau)}\,, \qquad \Theta_4(\tau) = \frac{\eta^2(\tau/2)}{\eta(\tau)}\,. \tag{A.5}$$
Finally, using the power series expansions (A.2) one can prove a useful identity.

B Modular Forms of Level 4 in Terms of Theta Constants

The correspondence between modular forms of level 4 and the theta constants is well known. The classical result is [83]
$$\mathcal{M}(\Gamma(4)) = \mathbb{C}\left[\Theta_2^2(\tau),\, \Theta_3^2(\tau),\, \Theta_4^2(\tau)\right],$$
i.e. the ring of modular forms of level 4 is generated by the three squares of the theta constants, subject to one non-linear relation,
$$\Theta_3^4(\tau) = \Theta_2^4(\tau) + \Theta_4^4(\tau)\,. \tag{B.2}$$
The idea we employ to avoid the non-linear relation (B.2) is to re-express Θᵢ²(τ) in terms of Θⱼ(2τ) using bilinear identities on the theta functions [82]:
$$\Theta_2^2(\tau) = 2\,\Theta_2(2\tau)\,\Theta_3(2\tau)\,, \qquad \Theta_3^2(\tau) = \Theta_3^2(2\tau) + \Theta_2^2(2\tau)\,, \qquad \Theta_4^2(\tau) = \Theta_3^2(2\tau) - \Theta_2^2(2\tau)\,, \tag{B.3}$$
which means that modular forms of level 4 are homogeneous even-degree polynomials in Θ₂(2τ) and Θ₃(2τ).

C Group Theory of S'₄
C.1 Properties and Irreducible Representations

The homogeneous finite modular group S'₄ ≡ SL(2, Z₄) can be defined by three generators S, T and R satisfying the relations:
$$S^2 = R\,, \qquad (ST)^3 = R^2 = \mathbb{1}\,, \qquad RT = TR\,, \qquad T^4 = \mathbb{1}\,. \tag{C.1}$$
It is a group of 48 elements (twice as many as S₄), with group ID [48, 30] in the computer algebra system GAP [75,76]. It admits 10 irreducible representations: 4 one-dimensional, 2 two-dimensional, and 4 three-dimensional, which we denote by 1, 1̂, 1′, 1̂′, 2, 2̂, 3, 3̂, 3′, 3̂′. The notation has been chosen such that irreps without a hat have a direct correspondence with S₄ irreps, whereas hatted irreps are novel and specific to S'₄. In fact, for the hatless irreps, the new generator R is represented by the identity matrix and the construction effectively reduces to that of S₄ ≃ S'₄/{1, R}. We also note that the hatless irreps are real, while the hatted irreps are complex, except for 2̂, which is pseudo-real. The 48 elements of S'₄ are organised into 10 conjugacy classes. The character table is given in Table 6, which also shows at least one representative element for each class.

C.2 Representation Basis

In Table 7, we summarise the working basis for the representation matrices of the group generators S, T and R. In this basis, the group generators are represented by symmetric matrices, ρ_r(S, T, R) = ρ_r(S, T, R)^T, for all irreps r of S'₄. Such a basis is convenient for the study of modular symmetry extended by a gCP symmetry (see Section 4 and Ref. [47]).
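The identities invoked in Appendices A and B (the Jacobi relation (B.2) and the duplication identities (B.3)) are classical and can be verified numerically; a brief mpmath sketch:

```python
# Numerical check of the Jacobi identity Theta2^4 + Theta4^4 = Theta3^4
# and the duplication identities used in (B.3).
from mpmath import mp, jtheta, exp, pi, mpc

mp.dps = 30
tau = mpc(0.2, 1.5)
q, q2 = exp(1j * pi * tau), exp(2j * pi * tau)   # nomes for tau and 2*tau
t2, t3, t4 = (jtheta(n, 0, q) for n in (2, 3, 4))
T2, T3 = jtheta(2, 0, q2), jtheta(3, 0, q2)      # Theta_i(2*tau)

print(abs(t2**4 + t4**4 - t3**4))     # Jacobi identity, ~1e-30
print(abs(t3**2 - (T3**2 + T2**2)))   # Theta3(tau)^2 = Theta3(2t)^2 + Theta2(2t)^2
print(abs(t2**2 - 2 * T2 * T3))       # Theta2(tau)^2 = 2 Theta2(2t) Theta3(2t)
```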
C.3 Tensor Products and Clebsch-Gordan Coefficients

We present here the decompositions of tensor products of S'₄ irreps, as well as the corresponding Clebsch-Gordan coefficients, given in the basis of Table 7. Entries of each multiplet entering the tensor product are denoted by αᵢ and βᵢ. Apart from the trivial products 1 ⊗ r = r, these results are collected in Tables 8-11.

(Table 7 columns: r, ρ_r(S), ρ_r(T), ρ_r(R).)
Table 8: Decomposition of all non-trivial tensor products involving 1-dimensional S'₄ irreps, and corresponding Clebsch-Gordan coefficients.
Table 9: Decomposition of tensor products involving two 2-dimensional S'₄ irreps, and corresponding Clebsch-Gordan coefficients. Note that the order is important in order to match the left (tensor product decomposition) and right (Clebsch-Gordan coefficients) columns.
Table 10: Same as Table 9, but for products involving a 2-dimensional and a 3-dimensional irrep.
Table 11: Same as Table 9, but for products involving two 3-dimensional irreps.
Exploring the genetic makeup of Xanthomonas species causing bacterial spot in Taiwan: evidence of population shift and local adaptation

The introduction of plant pathogens can quickly reshape disease dynamics in island agro-ecologies, representing a continuous challenge for local crop management strategies. Xanthomonas pathogens causing tomato bacterial spot were probably introduced into Taiwan several decades ago, creating a unique opportunity to study the genetic makeup and adaptive response of this alien population. We examined the phenotypic and genotypic identity of 669 pathogen entries collected across different regions of Taiwan in the last three decades. The analysis detected a major population shift, in which X. euvesicatoria and X. vesicatoria races T1 and T2 were replaced by new races of X. perforans. After its introduction, race T4 quickly became dominant in all tomato-growing areas of the island. The genomic analysis of 317 global genomes indicates that the Xanthomonas population in Taiwan has a narrow genetic background, most likely resulting from a small number of colonization events. However, despite the apparent genetic uniformity, X. perforans race T4 shows multiple phenotypic responses in tomato lines. Additionally, an in-depth analysis of effector composition suggests diversification in response to local adaptation. This includes unique mutations in avrXv3, which might allow the pathogen to overcome the Xv3/Rx4 resistance gene. The findings underscore the dynamic evolution of a pathogen introduced into a semi-isolated environment and provide insights into potential management strategies for this important disease of tomato.

Introduction

Tomato (Solanum lycopersicum L.) is one of the most important vegetable crops worldwide. Yet, most tomato-growing areas are challenged by bacterial spot caused by Xanthomonas species. This disease induces necrotic spots on aboveground tissues, leading to defoliation, yield losses, and reduced fruit quality and marketability (Pohronezny and Volin, 1983). An increase in the incidence of bacterial spot has been reported in different regions of the world (Hamza et al., 2010; Horvath et al., 2012; Beran et al., 2014; Kebede et al., 2014; Abbasi et al., 2015; Potnis et al., 2015; Osdaghi et al., 2016; Araújo et al., 2017; Burlakoti et al., 2018). In the field, however, symptoms can be caused by different Xanthomonas species, including X. vesicatoria, X. gardneri, X. euvesicatoria, and X. perforans (Jones et al., 2004). A more recent classification also suggests that X. euvesicatoria may include two distinct pathovars: euvesicatoria and perforans (Constantin et al., 2016). While historically X. euvesicatoria and X. vesicatoria have been the predominant species infecting tomato, recent reports point to the emergence of X.
perforans in the United States (Horvath et al., 2012; Ma, 2015), Ethiopia (Kebede et al., 2014), Canada (Abbasi et al., 2015), and Brazil (Araújo et al., 2017), among others. Xanthomonas pathogens causing bacterial spot can be classified into at least five different races (Astua-Monge et al., 2000; Jones et al., 2004). Rapid changes in the pathogen population represent a significant challenge for disease management strategies because no tomato varieties show resistance to all of these races. While the disease is well studied, the emergence of new Xanthomonas strains and races poses a continuous challenge for disease management. Understanding the genetic makeup and adaptation pathways of these bacterial pathogens is crucial for developing effective control strategies.

To proliferate in plant tissues, most bacterial pathogens rely on the translocation of type III effector proteins into the host cell. The Xanthomonas outer proteins (Xop) are a large group of effectors with diverse biochemical functions that target different subcellular compartments (Büttner and Bonas, 2010). Effectors interfere with plant processes to modulate the plant defense response and facilitate access to host nutrients (Toruño et al., 2016). During host colonization, Xanthomonas are known to deliver between 23 and 37 different type III effectors, although individual strains might carry a variable repertoire, most likely due to redundancy in functions (Büttner and Bonas, 2010). Plants have also evolved networks of protein sensors that recognize the effectors and elicit a response that restricts pathogen advance (Jones and Dangl, 2006). For instance, the tomato resistance protein Bs4 triggers a hypersensitive response (HR) when recognizing the cognate avirulence protein AvrBs4 from the Xanthomonas T1 race (Ballvora et al., 2001). The same is true for the effector AvrXv3 (xopAF) from the Xanthomonas T3 race, which is recognized by the nucleotide-binding leucine-rich repeat (NBS-LRR) protein Xv3/Rx4, present in S. pimpinellifolium (Zhang et al., 2021). Understanding the distribution of effector repertoires in pathogen populations is crucial for supporting disease management strategies and guiding breeding efforts in modern agriculture.

Plant pathogens often experience a process of selection and diversification as they adapt to new hosts or environments (Brasier, 1987). This is particularly true in semi-isolated areas, such as islands, where the introduction of new pathogens can drastically shape the existing population structure. The study of Xanthomonas species in Taiwan is particularly relevant since tomato cultivation is relatively new. While the Dutch introduced tomatoes as an ornamental plant in 1622 (Chen, 2005), large-scale cultivation only started when varieties like Ponderosa and Marglobe were brought in from the United States in the early 1930s. Since then, tomato production has occupied the central and southern areas of the island, where hot and humid environments are conducive to the disease. The first report of bacterial spot was in the early 1930s (Okabe, 1933), and recent reports suggest the presence of X. vesicatoria, X. euvesicatoria, and X. perforans (Lue et al., 2010; Burlakoti et al., 2018). This study aims to understand the genetic makeup of Xanthomonas pathogens causing bacterial spot in Taiwan and to uncover their pathway of adaptation to the novel environment.
Genome assembly, core genome determination, and spanning tree reconstruction

The genomes were sequenced at BGI Genomics in China, using a DNBseq platform with a short-insert library to obtain 150 bp paired-end reads. The Velvet algorithm was used to assemble the Xanthomonas genomes (Zerbino and Birney, 2008). To determine the optimal k-mer size and coverage cutoff for each specific genome, the VelvetOptimiser script was employed (Gladman et al., 2017). For detailed statistics of the genome sequencing and assembly, refer to Supplementary Table S3. Subsequently, the contigs originating from X. euvesicatoria and X. perforans strains were mapped using the genomic references of X. euvesicatoria pv. vesicatoria str. 85-10 (NC_007508) and X. perforans strain GEV872 (NZ_CP116305), respectively. Correspondingly, the contigs derived from X. vesicatoria were aligned against the genomic reference of X. vesicatoria strain LMG911 (NZ_CP018725.1) using the Geneious plugin minimap2 (Li, 2018). This approach yielded on average 95.5% of the reference genome for the Taiwanese X. vesicatoria and X. perforans genomes, and 94.4% on average for the X. euvesicatoria genomes. Genome annotation was done within Geneious version 2023.2.1.¹ We also annotated the genomes contained within the scaffolds generated by Velvet. The core genome of X. euvesicatoria, X. vesicatoria, and X. perforans was leveraged with all the available annotated genomes in GenBank (as of December 22, 2022), along with additional strains from Taiwan. The get_homologues software (Contreras-Moreira et al., 2017) was applied to discern the core genomic elements. Subsequently, the identified core genome was used as input for selecting phylogenomic markers with the get_phylomarkers software (Vinuesa et al., 2018). The phylogenetic tree was built with the 62 robust markers ascertained by the get_phylomarkers approach. Spanning trees were constructed using a dataset of 296 genomes available in the NCBI database for X. perforans, X. euvesicatoria, and X. vesicatoria, in conjunction with the 21 genomes assembled from Taiwan. Thirty informative phylomarkers were selected and concatenated to form a representative sequence for each genome. The software PHYLOViZ (Francisco et al., 2012) was used to reconstruct the spanning tree.

¹ www.geneious.com

Xanthomonas populations in Taiwan experienced a major shift

To investigate the historical dynamics of Xanthomonas populations, we mined a dataset of 669 strains causing bacterial spot in Taiwan (Supplementary Table S1). The strains were isolated across 82 tomato-growing areas in 13 major districts over the last 34 years. Along with previous samples collected between 1989 and 2016 (293), we also obtained 376 Xanthomonas strains during the 2017 to 2023 surveys. Overall, we identified three main Xanthomonas species in our collection using DNA markers: X. vesicatoria, X. euvesicatoria, and X. perforans (Figure 1; Supplementary Table S1). We were unable to find any samples corresponding to X. gardneri. We plotted the time distribution of all Xanthomonas strains and found a significant shift in Xanthomonas species, a trend that began in the early 2000s, as previously described in Burlakoti et al. (2018), and persists to the present day. The data consistently show the replacement of X. euvesicatoria and X. vesicatoria by X.
perforans as the dominant population from the early 2000s until 2023 (Figure 1A). Changes in Xanthomonas species prevalence have also been observed in different regions over the past two decades (Kebede et al., 2014; Timilsina et al., 2015; Araújo et al., 2017). For instance, Timilsina et al. (2019) reported a shift from X. euvesicatoria to X. perforans in Florida's tomato-growing areas. Other reports describe a similar phenomenon in North Carolina, Indiana, Ohio, and other regions in the Midwest United States (Ma et al., 2011; Horvath et al., 2012; Egel et al., 2018; Adhikari et al., 2019; Bernal et al., 2022). Rapid shifts in Xanthomonas species are likely the result of a fitness advantage. For instance, resistance to copper-based compounds, commonly used in Taiwan, might explain how resistant populations could become dominant over time. However, Burlakoti et al. (2018) found no significant differences in copper resistance phenotypes among Xanthomonas species. It has been suggested that the increase in X. perforans may be due to the production of antimicrobial peptides, also known as bacteriocins (Marutani-Hert et al., 2020). However, this factor does not fully explain why pepper does not experience a similar surge in X. perforans infections in Taiwan (Burlakoti et al., 2018). It is more plausible that other factors, such as pathogenicity and resistance components, play a pivotal role in shaping the observed trends of bacterial spot in tomato and pepper (Adhikari et al., 2020). Further investigation into these factors is essential to comprehensively understand the dynamics of Xanthomonas species shifts in different crops and regions.

Xanthomonas perforans T4 Race is now dominant in Taiwan

To investigate the change in pathogenicity of the Xanthomonas population in Taiwan, we inoculated 156 strains from the late collection onto four tomato differentials. All the races (T0-T5) were identified. The integrated data on pathogenicity unveiled a shift from races T1 and T2, collected before the 2000s, toward races T3 and T4, collected after this period (Figure 1B). Race T4 showed remarkable prevalence, accounting for more than 95% of the samples collected after 2015 (Figure 1B), suggesting a fitness advantage of this pathogenicity group over T3. Race designation as T3 or T4 is based only on the presence or absence of the hypersensitive response (HR) caused by the recognition of avrXv3 by the cognate Xv3 (Robbins et al., 2009). Further genetic analysis found no differences between T3 and T4 genomes, except for crucial mutations that disrupt the avrXv3 effector in T4 strains (details below). In Taiwan, there is no report on the frequency of Xv3, but a rough estimate suggests that 20% of tomato hybrids might carry Xv3 (Chou, personal communication). As a result, one possible explanation for the increase in T4 races is that the host is driving selection due to the presence of Xv3. However, a recent population shift in the Midwestern United States involving an X. perforans T4 population occurred in the absence of the resistance Xv3 locus (Bernal et al., 2022), suggesting that other factors might also be involved. This new T4 population (2017-2020) appears to be genetically identical, probably due to recent migration events (Bernal et al., 2022).
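As a side note on the spanning-tree methodology described in the Methods above (pairwise distances between concatenated marker sequences, followed by a minimum spanning tree), the basic construction can be illustrated with a toy example. This is only a sketch with made-up sequences, not the PHYLOViZ implementation used in the study:

```python
# Toy spanning-tree construction: concatenated marker sequences ->
# pairwise Hamming distances -> minimum spanning tree. Strain names
# and sequences are invented for illustration.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

strains = {
    "TW_2005_a": "ACGTACGTAA",
    "TW_2016_a": "ACGTACGTAG",
    "NA_ref_1":  "ACGTACCTAA",
    "NA_ref_2":  "ATGTACCTAA",
}
names = list(strains)
seqs = [strains[n] for n in names]

n = len(seqs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = sum(a != b for a, b in zip(seqs[i], seqs[j]))

mst = minimum_spanning_tree(dist).toarray()
for i in range(n):
    for j in range(n):
        if mst[i, j]:
            print(f"{names[i]} -- {names[j]} ({int(mst[i, j])} SNPs)")
```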
We further identified additional virulence patterns within the T3, T4, and T5 populations by inoculating 1 T5, 24 T4, and 5 T3 strains on a different tomato panel. Based on the phenotypic response, which was classified as resistant (R), moderately resistant (M), or susceptible (S), we found eight different pattern combinations among this small sample (Figure 2A; Supplementary Figure S1; Supplementary Table S1). Race T4 strains were classified into 8 patterns, suggesting that the X. perforans population in Taiwan harbors diverse phenotypes and might be going through a process of local diversification. Phenotypic heterogeneity in plant pathogens in response to the environment has been described even for clonal populations (Delgado et al., 2013; Mariz-Ponte et al., 2022). Finally, we looked for new sources of resistance against the predominant virulent pathogen by screening the highly virulent T4 strain (Xant314) against an S. pimpinellifolium diversity panel. We found only one accession (VI007032) with high resistance levels (Figure 2B; Supplementary Table S2). VI007032, which showed only slight symptoms of bacterial spot, will be further tested as a potential resistance donor. The data suggest a fitness advantage of race T4 but also a process of phenotypic diversification.

Xanthomonas in Taiwan come from a narrow genetic background
To investigate the genetic variation of Xanthomonas causing bacterial spot in Taiwan, we obtained complete genome sequences for 21 representative strains collected in highland and lowland tomato-growing areas over 30 years (Supplementary Table S2). The assembled genomes (235-245X coverage range) represent X. perforans (11), X. euvesicatoria (6), and X. vesicatoria (4), and include T1 to T5 races (Supplementary Table S3). Overall, we found indications of a narrow genetic background within the Xanthomonas species from Taiwan. Using phylogenetic reconstruction of 317 genomes, we clearly distinguished the Xanthomonas strains, which clustered at the species level and formed single clusters across the tree (Supplementary Figure S2). The spanning tree constructed from concatenated phylomarkers linked the Taiwanese X. perforans strains to a single North American genotype, particularly represented in the United States and Canada (Figure 3). We identified two clades that derived from the same X. perforans genotypes. The first clade included strains collected in 2005, and the second clade included strains from 2016 (Figure 3). Interestingly, we found no genetic substructure that clusters the T3, T4, or T5 groups, suggesting that these races might be derived from a common background. In addition, the split decomposition tree found no evidence of recombination among Xanthomonas in Taiwan (data not shown). These preliminary results challenge the hypothesis of multiple colonization events on the island and suggest that X. perforans in Taiwan resulted from a single introduction (of one or a few genotypes). It is worth noting that the abundance of genomes from North America in our databases may introduce some bias into the results. The lack of recombination, the absence of structure among races (Supplementary Table S1), and the virulence pattern on reference tomato accessions (Figure 2) also suggest that the original X. perforans diversified locally in the last two decades. A similar scenario has been described recently, where a clonal
population of X. perforans that dominates extensive areas shows the accumulation of genetic variation (Bernal et al., 2022). The case of X. vesicatoria appears to follow the same trend, as it is characterized by a single genotype. A single introduction, probably during the late 1980s, was likely responsible for bringing a race T1 to Taiwan (Figure 3). The spanning tree of X. euvesicatoria detected two main clusters, suggesting more than one colonization event. Both colonization events of race T2 are likely to have come from North America (Figure 3). One cluster was associated with the Taiwanese strain 87-21, collected in the late 1980s (Bouzar et al., 1994), and the other was associated with strain Xant154, collected in 1994 (Burlakoti et al., 2018). The genetic data of representative Xanthomonas genomes from Taiwan point to North America as the main source. Discrete genotypes were likely introduced and then diversified locally. The pathogen could have been introduced during the importation of tomato varieties, but other explanations cannot be excluded.

The effector composition of Xanthomonas populations reveals diversification and local adaptation
To characterize the effector composition of the Taiwanese population, we compared the Xanthomonas outer protein (Xop) repertoires of 317 genomes and found evidence of diversification and local adaptation in Taiwan. Out of 70 Xops reported in the global database (https://euroxanth.eu/), only 48 were present in the target genomes (Supplementary Table S3). Similar to previous studies, we found a variable number of effectors in each strain (Schwartz et al., 2015; Roach et al., 2019): X. perforans displayed 23 to 31 Xops, while X. euvesicatoria displayed 30 to 31 Xops. We found only 11 Xop effectors in all the X. vesicatoria strains (Supplementary Table S3). Based on the presence/absence of effectors, the local X. perforans, X. euvesicatoria, and X. vesicatoria strains showed a unique composition compared to the global population (Figure 4A; Supplementary Figure S2). We identified at least 13 effector profiles in a sample of 21 genomes, with most of the unique profiles occurring in X. perforans genomes (8) compared to X. euvesicatoria genomes (4). The observed pattern is unlikely to result from an assembly or annotation issue, since all the X. vesicatoria genomes assembled the same Xop genes and formed a single profile. The diversity in effector composition observed in X. perforans (Figure 4) aligns with the occurrence of multiple virulence patterns within T4 and T3 strains (Figure 2). Diversification of virulence factors has been reported in other pathogens and is likely the result of local adaptation (Quibod et al., 2016; Richards et al., 2019). The comparative analysis of the Xop dataset also identified 13 effectors with rare alleles present only in Taiwan (Supplementary Table S4). These variants are characterized by one or more nonsynonymous mutations (Supplementary Table S4), which are most likely the result of adaptation to the local environment. Interestingly, we found most of the rare alleles in X.
perforans genomes (Figure 4A). For instance, strain Xant319 showed a unique amino acid substitution (S741G) in a highly conserved region of the protein XopZ1. We found derived alleles of xopD and xopV in X. perforans strains associated with recent collections (Supplementary Table S4), which might indicate more recent mutations. All of these genes have been reported to contribute to Xanthomonas colonization in various host systems (Furutani et al., 2009; Medina et al., 2018; Deb et al., 2020), and therefore the observed mutations might have a fitness role in the local environment. In addition, strains Xant314 and Xant328 contain the effector xopAZ, which is absent from all the other genomes analyzed but was first reported in X. arboricola and X. cynarae (Kara et al., 2017). A few effectors also showed indels and frameshift mutations that potentially disrupt their function (Supplementary Table S5). One clear example involves mutations in xopAF/avrXv3 associated with race T4. Loss of avirulence function in this gene has been associated with a range of changes in the protein sequence (Timilsina et al., 2016; Bernal et al., 2022). We found two nonsense mutations in xopAF alleles from X. perforans T4 strains from Taiwan (Figure 4B). These mutations appear to be the only difference between xopAF alleles from races T3 and T4, suggesting that T4 strains emerged from T3 genetic backgrounds by random mutation (Bernal et al., 2022). Interestingly, the effector repertoires of the X. vesicatoria strains consist of only 11 genes (Figure 4A; Supplementary Table S3), which is smaller than that of any of the global X. vesicatoria genomes reported so far. The pathogenicity of these X. vesicatoria strains despite a minimal number of effectors suggests a high plasticity of the pathogen, associated with the redundancy of effector functions (Castaneda et al., 2005; Long et al., 2018). The overall data indicate that the genetic background of the Xanthomonas population from Taiwan is narrow and most likely the result of a few colonization events that caused profound changes in population structure. However, the phenotypic patterns and the effector composition suggest that this genetically uniform population is diversifying due to local adaptation.

Conclusion
The introduction of Xanthomonas pathogens into Taiwan has led to significant shifts in disease dynamics across tomato-growing areas, particularly evident in the rapid dominance of X. perforans race T4. Although only a few pathogen genotypes managed to colonize the island, the population quickly diversified in response to local adaptation. These findings emphasize the complexity of managing introduced plant pathogens in semi-isolated environments and underscore the importance of ongoing research for effective disease responses.

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. Scientists at the World Vegetable Center are partially funded by The Ministry of Foreign Affairs and The Ministry of Agriculture in Taiwan (107AS-4.5.1-ST-a2, 108AS-1.1.2-ST-a4, 109AS-1.1.2-ST-a3, 110AS-1.2.5-ST-a1). Researchers at Universidad de Las Américas receive funding from internal sources, as part of the program PRG.BIO.23.14.01. This work received financial support from the Consortium of International Agricultural Research Centers/Consultative Group on International Agricultural Research (CGIAR).
FIGURE 1 The population shift of Xanthomonas species causing bacterial spot in Taiwan started in 2004-2006. (A) Relative frequency of Xanthomonas species in the last 34 years (1989-2023); (B) Relative frequency of race groups among Xanthomonas species causing bacterial spot collected in Taiwan.
FIGURE 3 The spanning tree of Xanthomonas species causing bacterial spot in Taiwan depicts the potential linkage of X. perforans, X. euvesicatoria, and X. vesicatoria to North American populations. The circles represent genetic clusters that are linked by lines. The colors inside the circles represent the countries where those clusters have been found. The genomes from Taiwan are depicted in black.
FIGURE 4 Evidence of distinctive effector composition in Xanthomonas from Taiwan. (A) The heat map tree illustrates the presence or absence of Xanthomonas outer proteins (Xop) in each strain. Green squares indicate the presence of a unique allele found in a Xanthomonas genome. (B) Partial sequence alignment of XopAF showing exclusive nonsense mutations in the X. perforans T4 race from Taiwan (highlighted by the red square). Nonsense mutations are depicted in black. Polymorphic residues are depicted with colors. The race of each strain is identified within parentheses.
4,803.4
2024-05-23T00:00:00.000
[ "Environmental Science", "Biology" ]
Multi-Year Mapping of Major Crop Yields in an Irrigation District from High Spatial and Temporal Resolution Vegetation Index Crop yield estimation is important for formulating informed regional and national food trade policies. The introduction of remote sensing in agricultural monitoring makes accurate estimation of regional crop yields possible. However, remote sensing images and crop distribution maps with coarse spatial resolution usually cause inaccuracy in yield estimation due to the existence of mixed pixels. This study aimed to estimate the annual yields of maize and sunflower in Hetao Irrigation District in North China using 30 m spatial resolution HJ-1A/1B CCD images and high accuracy multi-year crop distribution maps. The Normalized Difference Vegetation Index (NDVI) time series obtained from HJ-1A/1B CCD images was fitted with an asymmetric logistic curve to calculate daily NDVI and phenological characteristics. Eight random forest (RF) models using different predictors were developed for maize and sunflower yield estimation, respectively, where predictors of each model were a combination of NDVI series and/or phenological characteristics. We calibrated all RF models with measured crop yields at sampling points in two years (2014 and 2015), and validated the RF models with statistical yields of four counties in six years. Results showed that the optimal model for maize yield estimation was the model using NDVI series from the 120th to the 210th day in a year with 10 days’ interval as predictors, while that for sunflower was the model using the combination of three NDVI characteristics, three phenological characteristics, and two curve parameters as predictors. The selected RF models could estimate multi-year regional crop yields accurately, with the average values of root-mean-square error and the relative error of 0.75 t/ha and 6.1% for maize, and 0.40 t/ha and 10.1% for sunflower, respectively. Moreover, the yields of maize and sunflower can be estimated fairly well with NDVI series 50 days before crop harvest, which implicated the possibility of crop yield forecast before harvest. Introduction Food security is one of the major challenges that humanity is facing. The Food and Agriculture Organization (FAO) reported that there were about 815 million people worldwide suffering from food shortages in 2016 [1]. To support food security, monitoring and estimating of crop yields in large areas is of great significance [2]. Moreover, accurate and real-time estimation of major crop yields is helpful for decision makers to formulate informed food trade policies [3][4][5]. Crop yield estimation are often based on official statistics derived from crop yield survey performed at some administrative level that are made available several days or months after crop harvesting [6][7][8]. Therefore, developing a simple and convenient yield estimation model that can estimate crop yield timely in different spatial scales is urgently required. In recent years, remote sensing has been widely used in agricultural monitoring because of its extensive coverage and regular revisit, which makes regional crop identification [9,10] and yield estimation [6,11] possible at acceptable accuracy. The crop yield estimation models based on remote sensing data mainly include three types, i.e., empirical models, crop growth models, and radiation use efficiency (RUE) models. The empirical or statistical models are based on the relationship between remote sensing indexes and crop yields in selected spatial units. 
The most frequently used remote sensing indexes include the Normalized Difference Vegetation Index (NDVI) [12][13][14], the Enhanced Vegetation Index (EVI) [15,16], the Leaf Area Index [17], and the Soil Adjusted Vegetation Index (SAVI) [4]. Most available studies have shown that there is a strong linear relationship between these remote sensing indexes and crop yields [18][19][20]. Although empirical models are simple and easy to use, these models usually consider only a few key indexes and neglect other influence factors, which would affect the accuracy of crop yield estimation. The crop growth models use indexes derived from remote sensing to simulate crop growth and yield [21][22][23]. The crop growth process and yield can be well simulated based on accurate model inputs, including climate, soil and agricultural management measures [24,25]. However, due to the spatial heterogeneity of field conditions, agricultural management, crop planting dates at regional scale, and the complexity of the land uses, the application of crop growth models was generally limited to small areas [26]. At present, crop growth models are usually driven by field measured data and are difficult to extend to large areas where there is a lack of spatial field measured data [27][28][29]. The RUE models estimate crop yield based on gross primary productivity (GPP) or net primary productivity (NPP) derived from remote sensing data [30]. Compared with the crop growth model, the RUE models need less parameters and has certain crop physiological basis. Main parameters for calculating crop yields include RUE, absorbed photosynthetically active radiation (APAR), and harvest index (HI) [31][32][33]. However, some parameters, especially HI, are difficult to obtain and usually estimated by field experiment or experience. The estimated parameters may neglect the effects of some important environmental factors, which will bring uncertainty to crop yield estimation. Therefore, there are still some problems in regional crop yields estimation using the existing methods. In recent years, machine learning methods, including random forest (RF), support vector machine (SVM), and artificial neural network (ANN), have been gradually used in crop identification based on remote sensing data [34][35][36]. More recently, these machine learning methods have been applied to crop yield estimation. The ANN has been successfully applied to yield estimation of various crops, such as maize [11], wheat [37], potato [38], melon [39] and grassland dry matter yield [40]. The RF method has also been used in crop yield estimation, especially for large areas of maize [7,41,42], soybean [43] and wheat [44]. Therefore, most available studies were on yield estimations of maize and wheat, but few studied yield estimations of sunflower, an important economic crop in arid regions of Northwest China. The key factors of crop yield estimation using machine learning methods are the determination of predictors and model calibration and validation. The periodic variation of vegetation index is closely related to crop growth period, and is therefore usually used as predictors [45,46]. Since most crops have growth periods of several months, remote sensing data with fine temporal resolution are usually needed, such as moderate resolution imaging spectroradiometer (MODIS) [7] or advanced very high-resolution radiometer (AVHRR) [6]. 
However, the finest spatial resolution of these remote sensing data is 250 m, which will affect the feature extraction of crops from possible mixed pixels, especially in the small-sized cropland blocks that are common in China [47]. The inaccuracy of model predictors would inevitably lead to uncertainty in crop yield estimation. Therefore, it is necessary to make use of remote sensing data with both high temporal and high spatial resolution to improve the yield estimation accuracy. In the processes of model calibration and validation, the precision of crop distribution maps has a great influence on the yield extraction of different crops [7]. Therefore, a high precision crop distribution map is the basis for the calibration and validation of crop yield estimation models. China launched the HJ-1A/1B CCD satellite constellation with 30 m spatial resolution and a two-day revisit period on 6 September 2008 [48]. Previous studies have indicated that NDVI derived from HJ-1A/1B CCD images could be applied to regional crop identification [49,50] and phenological characteristics estimation [51]. However, few studies had applied HJ-1A/1B CCD data to crop yield estimation in arid and semi-arid areas [52]. Yu and Shang [49] mapped multi-year maize and sunflower in Hetao Irrigation District from 2009 to 2015 with mean relative statistical errors of 10.82% for maize and 4.38% for sunflower, which provided the basis for further yield estimation in this region. The main objective of this study is to develop practicable RF models for yield estimation of major crops in Hetao Irrigation District of China based on the NDVI series derived from HJ-1A/1B CCD images and high precision crop distribution maps.

Study Area
Four counties in the western and middle Hetao Irrigation District (HID) were selected as the study region, covering an area of 0.91 million ha with 44% being cropland (Figure 1). The HID is the third largest irrigation district and an important food production base in China, located in the western part of the Inner Mongolia Autonomous Region in North China. The HID lies in a typical arid area and mainly relies on Yellow River water for irrigation, while a water-saving rehabilitation program has been carried out since 1998 to reduce water diversion from the Yellow River and slow down the rise of the groundwater table caused by irrigation [53,54]. Consequently, the decrease of available irrigation water influenced the crop planting structure and crop yield in HID.
The dominant crops include maize, sunflower and wheat, and the crop planting structure has changed significantly in recent years due to economic and irrigation factors. According to the statistics of the crop planting area provided by the local government, from 2010 to 2015 the proportion of maize planting area increased from 28% to 40%, sunflower increased from 31% to 40%, and wheat decreased from 22% to 13%. Since maize and sunflower accounted for about 80% of the crop planting area in recent years, only maize and sunflower were considered in this study. According to field investigation in HID, the growth period of maize is generally from the 120th to the 260th day of the year, while that of sunflower is from the 160th to the 260th day. Moreover, maize and sunflower both reach their peak growth periods at about the 220th day of the year [49].

Data Sources
Data used in the present study mainly include HJ-1A/1B CCD images, field measured and official statistical crop yield data, and crop distribution and land use maps. The two-day-repeat CCD sensors of the Chinese HJ-1A/1B satellites capture ground features with 30 m pixel resolution, with each satellite carrying a four-band multispectral optical imager that captures radiation in the blue (0.43-0.52 µm), green (0.52-0.60 µm), red (0.63-0.69 µm), and near-infrared (0.76-0.90 µm) bands [48]. Level 2A images for HID covering the crop growth period from April to October in the years from 2010 to 2015, with cloud cover of less than 5%, were downloaded from the China Centre for Resources Satellite Data and Application (CRESDA) [55]. Crop type and yield were surveyed on the ground in 2014 and 2015. Fifty-five sampling points were first determined considering the spatial distribution uniformity and the homogeneity of crop type within the cropland area, based on the 1:100,000 land use map of the cropland in 2005 (Figure 1) provided by the National Earth System Science Data Sharing Infrastructure [56]. Distributing sampling points uniformly throughout the entire study area ensured the representativeness and diversity of crops with different growth conditions and crop yields. Then we conducted field surveys of crop type and yield based on the location of the sampling points on the map.
Each sampling point represented an area of over one hectare with the same crop, which could prevent the possible existence of mixed pixels in developing remote sensing-based models for crop yield estimation. Meanwhile, the crop type of each point may differ between the two years due to crop rotation. As a result, thirty-four sampling points of maize, fifty-four points of sunflower, and twenty-two points of other crops were obtained in 2014 and 2015, and the statistics of measured crop yields at the sampling points are shown in Table 1. Since the spatial resolution of HJ-1A/1B CCD images was 30 m and the area of each sampling point was over one hectare, we selected eight pixels (30 × 30 m²) at each sampling point. Consequently, we have 272 and 432 pixels in total with measured yield data for model calibration for maize and sunflower, respectively. For each sampling point, we selected thirty plants with relatively uniform growth to measure the average crop yield. The sampled maize cobs and sunflower heads were air-dried, and the maize and sunflower seeds were threshed and weighed to obtain the dry grain weight. Then, the crop yields per unit area of the sampling point were calculated considering the crop density (Table 1). For the selected four counties in HID, we obtained the average maize and sunflower yield statistics per county from 2010 to 2015 from the Bayannur Agricultural Information Network [57]. We also used the crop distribution maps of maize and sunflower with 30 m spatial resolution from 2010 to 2015 obtained in our previous study [49].

Data Processing and Determination of Model Input
The preprocessing procedures for HJ-1A/1B CCD images mainly include radiometric calibration and atmospheric and geometric corrections [49]. For the HJ-1A/1B CCD images, band 4 (near-infrared) and band 3 (red) were used to calculate NDVI [58]. To obtain daily NDVI series, we used an asymmetric logistic curve (Figure 2) [59] to fit the NDVI series of the available days, which was then used to calculate the daily NDVI values and extract the crop phenological characteristics [49]. In the fitting curve equation, t is day of year (DOY), and a, b, c, d, and k are curve parameters to be estimated from the available NDVI series by the least-squares method. After curve fitting, three characteristic points in the fitted NDVI curve (Figure 2) were used to determine the crop phenological characteristics and corresponding NDVI values (Table 2). The equations for calculating the phenological characteristics can be found in Yu and Shang [49].
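The NDVI computation and the least-squares curve fitting can be sketched as follows. The exact asymmetric logistic equation of Yu and Shang [49] is not reproduced in this text, so the rise-and-decline form below (and the synthetic observations) should be read as an assumption for illustration only; the parameter names a, b, c, d, and k are reused, but their roles here are assumed.

```python
# Sketch of NDVI calculation and daily-NDVI curve fitting (illustrative only).
# The functional form is an assumed rise-and-decline logistic, a stand-in for the
# asymmetric logistic curve of Yu and Shang [49]; the observations are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def ndvi(nir, red):
    """NDVI from near-infrared (band 4) and red (band 3) reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def growth_curve(t, a, b, c, d, k):
    """Baseline k plus amplitude a shaped by a rising logistic (inflection near c)
    and a declining logistic (inflection near d), both with steepness b."""
    rise = 1.0 / (1.0 + np.exp(-b * (t - c)))
    fall = 1.0 / (1.0 + np.exp(-b * (t - d)))
    return k + a * (rise - fall)

# Hypothetical per-pixel observations: day of year and NDVI on the available dates.
doy = np.array([120, 135, 150, 165, 180, 195, 210, 225, 240, 255])
ndvi_obs = np.array([0.18, 0.22, 0.30, 0.45, 0.62, 0.74, 0.80, 0.78, 0.60, 0.35])

# Least-squares estimation of the curve parameters, then daily NDVI and the
# day of peak growth (one of the phenological characteristics).
p0 = [0.7, 0.1, 165.0, 250.0, 0.15]                    # rough initial guess
params, _ = curve_fit(growth_curve, doy, ndvi_obs, p0=p0, maxfev=10000)
daily_t = np.arange(120, 261)
daily_ndvi = growth_curve(daily_t, *params)
t_max = daily_t[np.argmax(daily_ndvi)]                 # roughly day 220 for these data
```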
Eight models were developed for crop yield estimation with different predictors, each being a combination of NDVI series with a specified time interval and/or phenological characteristics (Table 3). Available studies showed that crop yield is closely related to NDVI [8,13,60]: Shao et al. [7] used MODIS NDVI 16-day composite data to achieve an accurate estimation of maize yield in the United States, and Bose et al. [37] estimated winter wheat yield accurately using MODIS NDVI data in Shandong province, China. Therefore, to compare the impact of time intervals on yield estimation precision, NDVI series in the crop growth period with time intervals of 5 days and 10 days were selected as the inputs for models 1 and 2, respectively. Considering that the phenological characteristics of crops also have close correlations with crop growth and yield, the three phenological characteristics in Table 2 were added to the inputs of model 2 to obtain the inputs for model 3. Since parameters d and k in the fitted NDVI curve also affect the crop growth period, d and k were added to the inputs of model 3 and used as the inputs for model 4. To test whether crop yield could be accurately estimated using the NDVI series before harvest, the NDVI series before the 210th day (50 days before the harvest), instead of the entire growth period, were used as the inputs for model 5. The phenological characteristic in the early growth period, t_inf_1, was added to the inputs of model 5 and used as the inputs for model 6. To further test whether the three phenological characteristics and the corresponding NDVI indexes can substitute for the NDVI series, these six indexes were used as the inputs for model 7. The combination of the inputs of model 7 and parameters d and k constitutes the inputs for model 8. (Table 3 lists the full predictor set of each model; note that N stands for NDVI, and the inputs for models 1 to 6 for sunflower start from N_160.)

Random Forest Regression Algorithm
The random forest (RF) algorithm is one of the most widely used machine learning methods, and has the advantages of few input parameters, simple operation, and strong stability. It is an ensemble learning method for classification, regression and other tasks.
To achieve these tasks, a multitude of decision trees are randomly generated at the training stage, and each decision tree makes a decision about the problem independently based on a random subset of the input data [61]. For classification problems, the result is determined by the vote of the majority of the decision trees, while for regression problems, the result is the average of the results of the individual decision trees [61,62]. In this study, we used the RF regression function implemented in the "Random Forest" package within MATLAB R2017b software developed by MathWorks (Natick, MA, USA) to estimate the crop yield. Using remote sensing data as inputs, the RF regression algorithm has been successfully applied to crop yield estimation in recent years [7,41,42]. The yield estimation in this study is performed at the pixel level, and the annual crop yield is estimated from [7]: Y_{i,j,k} = F(x), where Y_{i,j,k} is the crop yield at pixel (i, j) in the kth year, x is the vector of predictors, and F is the predictive function of the RF regression algorithm. In the RF regression algorithm, the performance of the model can be improved by adjusting three major parameters. The first is ntree, the number of decision trees, with a default value of five hundred; the second is mtry, the number of features considered at each node, with a default value of 1/3 of the total number of features; and the third is nodesize, the minimum size of the terminal nodes of the decision trees, with a default value of one [63]. Moreover, some studies have shown that the prediction accuracy of the RF model is acceptable when nodesize takes the default value [44,62].

Model Calibration and Validation
We used the field measured crop yield data for the RF model calibration. In previous studies, the calibration of yield estimation models was mostly performed at the regional level [7,19]. Here we attempted to calibrate the RF model using yield data at the pixel scale to improve the model accuracy. We used the root-mean-square error (RMSE), the relative error (RE), the coefficient of determination (R²), and the adjusted R² (R²_adj) to evaluate the performance of the RF models driven by different predictors. These indexes are calculated from S_i and P_i (i = 1, 2, ..., N), the ith field measured or official statistical and model estimated crop yields (or productions), respectively, their corresponding mean values, the total number N of data used for RF model calibration or validation, and the number p of predictors of each model. Among these indexes, R²_adj was used to adjust R² in model calibration to avoid overfitting of the model by accounting for the number of predictors. In model validation using statistical crop yield or production, R² and R²_adj were used to indicate the linear correlation of estimated and statistical production, where p = 1 is used to compute R²_adj for the univariate regression analysis. The closer RMSE and RE are to zero and R² and R²_adj are to one, the higher the accuracy of the model.

Model Calibration with Field Measured Yields at Pixel Level
We used the measured yields to calibrate the maize and sunflower RF models driven by different predictors, respectively. The calibration performance of the RF models is shown in Figure 3. Compared with previous model calibration at the county scale [7,11,41], the model calibration in this study was based on the pixel scale, which could improve the performance of the crop yield estimation model at finer resolution.
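Since the study's RF implementation is the "Random Forest" package in MATLAB R2017b, the following Python/scikit-learn sketch is only a stand-in that mirrors the stated parameter choices (ntree = 500, mtry = one third of the predictors, nodesize = 1). The forms used for RE and R² are written in one common way and are assumptions, as the article's own equations are not reproduced in this text; variable names are hypothetical.

```python
# Sketch of the pixel-level RF yield model and the evaluation indexes (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_rf(X_train, y_train):
    """RF regression mirroring the paper's settings: ntree=500, mtry~=p/3, nodesize=1."""
    model = RandomForestRegressor(
        n_estimators=500,          # ntree
        max_features=1.0 / 3.0,    # mtry: one third of the predictors
        min_samples_leaf=1,        # nodesize
        random_state=0,
    )
    model.fit(X_train, y_train)
    return model

def evaluate(obs, est, n_predictors):
    """RMSE, relative error, R^2, and adjusted R^2 between observed and estimated yields.
    RE and R^2 are written in one standard form (an assumption about the paper)."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    n = obs.size
    rmse = np.sqrt(np.mean((est - obs) ** 2))
    re = np.mean(np.abs(est - obs) / obs)
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
    return rmse, re, r2, r2_adj

# Hypothetical usage: each row of X holds one calibration pixel's predictors
# (e.g., the model-5 NDVI series N_120 ... N_210), and y the measured yield (t/ha).
# rf = fit_rf(X_cal, y_cal)
# rmse, re, r2, r2_adj = evaluate(y_val, rf.predict(X_val), X_cal.shape[1])
```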
For both maize and sunflower, the R² and adjusted R² of all RF models were between 0.80 and 0.90, and the difference between R² and adjusted R² was very small because the number of calibration data points (272 for maize and 432 for sunflower) far exceeded the number of predictors, which indicated that there was no overfitting in our calibrated models. The RMSE for maize ranged from 0.85 to 1.12 t/ha, while the RMSE for sunflower was lower than that for maize and ranged from 0.34 to 0.47 t/ha. For maize, models 1, 2, 4, 5 and 8 had relatively lower RMSE and higher R² than the other three models, while for sunflower, models 1, 2, 5, 6 and 8 had relatively lower RMSE and higher R² than the other three models.

Model Validation with Statistical Yield and Production at the County Level
We used the above-calibrated RF models to estimate the yields of maize and sunflower, and then compared the estimated yield and production with the official statistical data at the county level in the irrigation district over 6 years (from 2010 to 2015) to further validate the models. Figures 4 and 5 show the inter-annual variations of the accuracy of the RF models driven by different predictors for maize and sunflower in the irrigation district, respectively. For maize, models 1, 2 and 5 had higher estimation accuracy in different years, with the RMSE ranging from 0.57 to 1.29 t/ha, from 0.48 to 0.93 t/ha, and from 0.69 to 0.91 t/ha, and the RE ranging from 4.5% to 11.7%, from 4.3% to 8.5%, and from 5.1% to 8.1%, respectively. These three models also performed better in model calibration. Among these three models, model 5 has the highest accuracy, with multi-year average values of RMSE and RE of 0.75 t/ha and 6.1%, respectively. As a result, models 1, 2 and 5 were chosen as the three better models for maize, and models 5, 6 and 8 as the better models for sunflower. Model 5 was among the three better models for both maize and sunflower, which indicates that the yields of maize and sunflower can be estimated fairly well with NDVI series 50 days before crop harvest and implies the possibility of crop yield forecasting before harvest. If we could estimate the NDVI series 50 days before crop harvest, without NDVI data of the final growth period, using appropriate interpolation or curve fitting methods, then the crop yield could be forecast before crop harvest. The possibility of crop yield forecasting before harvest will be considered in further studies.

Furthermore, we calculated the total production of maize and sunflower in the four counties (Dengkou, Linhe, Hangjinhouqi and Wuyuan) using the above eight models by adding up the total production of all maize or sunflower pixels in each county. As a result, we obtained 24 (4 counties in 6 years) estimates of total production for maize and sunflower, respectively, which were compared with the official statistical productions during 2010 to 2015 (Table 4). From Table 4, the R² and the adjusted R² were mostly over 0.45 for maize and over 0.60 for sunflower, and the RE was mostly less than 30% for maize and 35% for sunflower. It can also be found that the R² and adjusted R² for model validation using total production are smaller than those for model calibration using crop yield per unit area, which is reasonable because errors in the estimated total production include errors in both yield estimation and crop classification. Compared with previous crop yield estimation studies using machine learning methods [7,11,43], with R² ranging from 0.2 to 0.8, the accuracy of the present crop yield estimations was acceptable. The scatter plots of the estimated and statistical productions of the three better models in the four counties during 2010 to 2015 are shown in Figures 6 and 7. For maize, model 5 has R² values close to those of the other two models and the smallest RMSE and RE values. For sunflower, model 8 has RMSE and RE values close to those of the other two models and the highest R² value. Consequently, we selected model 5 to estimate the maize yields and model 8 to estimate the sunflower yields.

From the above results, we found that the optimal yield estimation models for maize and sunflower were different. However, there were many studies on the yield estimation of maize [11,19,42,64], a food crop, and few studies on sunflower, an economic crop.
Sunflower is an important economic crop in HID because of its obvious drought-tolerance characteristics. This study was the first to develop a regional sunflower yield estimation model, which is of great significance to the management of sunflower planting in HID. Moreover, most previous studies estimated maize yield in areas where precipitation and temperature were the dominant factors for maize yield while irrigation was not a major influencing factor [7,11,42,64]. Based on the growth conditions of maize in arid areas, the phenological characteristics were added to the yield estimation model in this study, and a yield estimation model suitable for maize in arid areas was obtained.

Figures 8 and 9 show the yield distribution of maize and sunflower from 2010 to 2015. For maize, most yields fell in the range of 9.23 (the 10th percentile) to 13.43 t/ha (the 90th percentile). The maximum maize yield occurred in Hangjinhouqi, which averaged 11.08 t/ha, while the minimum yield occurred in Dengkou and Wuyuan, which averaged 10.83 t/ha. Annual average yields from 2010 to 2015 ranged from 10.62 t/ha in 2011 to 11.41 t/ha in 2015. For sunflower, most yields fell in the range of 2.43 (the 10th percentile) to 5.35 t/ha (the 90th percentile). The maximum sunflower yield occurred in Wuyuan, which averaged 3.76 t/ha, while the minimum yield occurred in Dengkou, which averaged 3.64 t/ha. Annual average yields from 2010 to 2015 ranged from 3.54 t/ha in 2012 to 3.87 t/ha in 2014.

Spatial and Temporal Distribution of Crop Yields
The yields of maize and sunflower in Dengkou were both the lowest compared with the other three counties. A possible reason for this spatial variation is that there is more land with sandy soils in Dengkou, which is not conducive to the growth of maize and sunflower. Meanwhile, maize and sunflower both reached their highest yields in the counties where they are most widely planted, Hangjinhouqi and Wuyuan, respectively. The higher the planting percentage of a crop in an area, the better adapted the crop is to that area. In other words, the present maize and sunflower distributions are generally in agreement with the environmental adaptability of these two crops.

Conclusions
In this study, we used the RF algorithm to estimate annual maize and sunflower yields in Hetao Irrigation District from 2010 to 2015 based on vegetation indexes and phenological characteristics. The main feature of this study is that regional crop yield estimation at the pixel scale was carried out for the first time in HID, and this method can easily be applied to other regions if auxiliary crop planting structure maps and field measured crop yield data are available.
Main conclusions of this study are as follows: (1) The RF model could accurately estimate annual regional crop yields, with the multi-year average values of root-mean-square error and the relative error of 0.75 t/ha and 6.1% for maize, and 0.40 t/ha and 10.1% for sunflower, respectively. (2) Among eight models, the optimal model for maize was NDVI series from the 120th day to the 210th day with 10 days' interval, while the optimal model for sunflower was the combination of NDVI indexes and phenological characteristics. (3) The yields of maize and sunflower could be estimated fairly well with NDVI series 50 days before crop harvest, which implicated the possibility of crop yield forecast before harvest. Author Contributions: S.S. outlined the research topic, assisted with manuscript writing, and coordinated the revision activities. B.Y. performed data collection and processing, data analysis, model building, the interpretation of results, manuscript writing, and coordinated the revision activities. Funding: This research was funded by National Natural Science Foundation of China, grant numbers 51479090 and 51779119.
9,808.4
2018-11-01T00:00:00.000
[ "Agricultural And Food Sciences", "Mathematics", "Environmental Science" ]
A typology of scientific breakthroughs Scientific breakthroughs are commonly understood as discoveries that transform the knowledge frontier and have a major impact on science, technology, and society. Prior literature studying breakthroughs generally treats them as a homogeneous group in attempts to identify supportive conditions for their occurrence. In this paper, we argue that there are different types of scientific breakthroughs, which differ in their disciplinary occurrence and are associated with different considerations of use and citation impact patterns. We develop a typology of scientific breakthroughs based on three binary dimensions of scientific discoveries and use this typology to analyze qualitatively the content of 335 scientific articles that report on breakthroughs. For each dimension, we test associations with scientific disciplines, reported use considerations, and scientific impact. We find that most scientific breakthroughs are driven by a question and in line with literature, and that paradigm shifting discoveries are rare. Regarding the scientific impact of breakthrough as measured by citations, we find that an article that answers an unanswered question receives more citations compared to articles that were not motivated by an unanswered question. We conclude that earlier research in which breakthroughs were operationalized as highly cited scientific articles may thus be biased against the latter. INTRODUCTION The evolution of scientific knowledge is commonly understood as an alternating process between short periods of breakthrough discoveries and long periods in which these breakthroughs are further refined and elaborated (Kuhn, 1962;Toulmin, 1967). Periods of breakthrough discoveries transform the knowledge frontier, while periods of refinement and elaboration allow for the major scientific, technological, and societal contributions of those breakthroughs to materialize (Evans, 2016;Hilgard & Jamieson, 2017;Winnink & Tijssen, 2014). However, while the periods of refinement and elaborations are well understood (e.g., Boyd & Richerson, 1985;Cavalli-Sforza & Feldman, 1981;Nelson & Winter, 1982), the characteristics of discoveries that transform the knowledge frontier remain unclear (Marx & Bornmann, 2013). In recent years, several authors have suggested supportive conditions for scientific breakthroughs, such as cognitively diverse teams (Hage & Mote, 2010;Hinrichs, Seager, et al., 2017;Wu, Wang, & Evans, 2019), combinations of highly conventional and highly novel knowledge (Mukherjee, Romero, et al., 2017;Schilling & Green, 2011) and psychological characteristics of scientists, such as stubbornness and tenacity (Grumet, 2008). Generally, these studies understand scientific breakthroughs as being codified in highly cited publications, typically operationalized as the top 5%, 1%, or 0.1% highly cited articles in a field (Ponomarev, Williams, et al., 2014;Uzzi, Mukherjee, et al., 2013;Zeng et al., 2017). These studies thus treat scientific breakthroughs as a group characterized by high impact and implicitly assume that all scientific breakthroughs are highly cited articles, and conversely that all highly cited articles are scientific breakthroughs. A systematic effort to identify different types of breakthroughs has so far not been made. This is, however, useful, as it might be the case that breakthrough types occur under different circumstances and differ in their citation impact. 
Below, we develop a typology of scientific breakthroughs and examine differences between kinds of breakthroughs in terms of their disciplinary occurrence, considerations of use, and citation impact. We make use of the Charge-Chance-Challenge (Cha-Cha-Cha) theory of scientific discovery as described by Koshland (2007) to develop this typology. Rather than understanding scientific breakthroughs as either Charge, Chance, or Challenge type, as Koshland does, we propose three discovery dimensions that underlie those three types, to provide a better understanding of the varieties of scientific breakthroughs. Using the three dimensions, we test to what extent configurations of these dimensions are observable in the scientific literature by qualitatively coding the full text of 335 articles that, according to experts, report on scientific breakthroughs. We then use these coded articles to explore how the different characteristics are distributed over scientific disciplines and vary in their considerations of use and citation impact. We are particularly interested in these aspects because Koshland, in his paper, provides examples of the different types of discoveries that come primarily from the physical sciences and life sciences. Furthermore, as it is known that scientific breakthroughs have transformational potential both within and beyond science (Winnink & Tijssen, 2014), we explore how the different typological dimensions relate to furthering fundamental understanding and to considerations of use (Stokes, 1997). Finally, we model the effect of breakthrough characteristics on cumulative citation impact over 10 years by means of a set of regression models. We find that breakthroughs vary widely in their citation impact, and that there are telling differences in impact between different types of breakthroughs.

THE CHA-CHA-CHA THEORY OF SCIENTIFIC DISCOVERIES

Koshland's Cha-Cha-Cha theory of scientific discoveries was developed to aid in understanding the heterogeneous nature of major scientific advances and to improve our understanding of the conditions under which scientific breakthroughs occur (Koshland, 2007). The theory is developed from the perspective that different field conditions lead up to different types of scientific discoveries. These field conditions relate to the perceived state of knowledge in a scientific field, which offers opportunities for scientists to make relevant scientific contributions (discoveries). For example, a scientific discovery may provide an answer to a long-standing question in a field. Alternatively, a breakthrough may be a serendipitous encounter with an important new piece of evidence, which may fit or question the existing theory or observations in a field. Scientists may also recognize a set of inconsistencies in the state-of-the-art literature in a field, which they aim to resolve. Koshland's three "Chas" summarize these different kinds of scientific discoveries in three types: Charge, Chance, and Challenge.

Charge, Chance, and Challenge Type Discoveries

Koshland defines Charge type discoveries as discoveries that "solve problems that are quite obvious … but in which the way to solve the problem is not so clear" (p. 761). In other words, Charge type discoveries resolve "known unknowns" (Logan, 2009).
Koshland uses Isaac Newton's discovery of gravity as an example of a Charge type discovery, because "the movement of stars in the sky and the fall of an apple from a tree were apparent to everyone, but Isaac Newton came up with the concept of gravity to explain it all" (p. 761). A recent example of a Charge type scientific breakthrough is "cloaking technology" (Leonhardt, 2006), an invisibility device that has been a longstanding dream of many scientists. While it had been proven that perfect invisibility is impossible due to the wave nature of light, there was reason to believe that "perfect invisibility within the accuracy of geometrical optics" was achievable (p. 1777). Leonhardt (2006) reports the formulation of a "general recipe" for the design of media that can achieve such invisibility, with possibilities for practical demonstrations. This breakthrough has thus, at least in theory, solved a well-known problem in a way that had not been thought of before. Koshland defines Chance type discoveries as "instances of a chance event that the ready mind recognizes as important and then explains to other scientists" (p. 761). For a Chance type discovery, the original contribution lies in recognizing the importance of an unexpected encounter and then explaining that importance to other members of the scientific community. These encounters typically involve some kind of serendipity (Copeland, 2019; Koshland, 2007; Yaqub, 2018). Encounters may take the shape of accidental discoveries of fossils, ancient remains, and other natural phenomena, but also of unexpected outcomes of planned experiments, such as Alexander Fleming's discovery of penicillin (Koshland, 2007). A more recent example of a Chance type discovery is reported in an article by Palmer, Barthelmy, et al. (2005), who report on the neutron star SGR 1806-20 emitting a giant gamma-ray flare on 27 December 2004. Recognizing the importance of this event was crucial: The authors note that this flare was about a hundred times more energetic than the two giant flares observed earlier from this neutron star, whereas the energy of a giant flare is itself usually a thousand times higher than that of a typical burst. Because of that difference, the authors further note that under different circumstances this burst could have been interpreted as another type of burst. Instead, they suggest that the observed flare belongs to a newly discovered subclass. As a third type of scientific breakthrough, Koshland defines Challenge type discoveries as "a response to an accumulation of facts or concepts that are unexplained by or incongruous with scientific theories of the time" (p. 761). Koshland provides Einstein's theory of special relativity as an example of a Challenge type discovery, as it provided a theory that explained anomalies in contemporary theories. Another example presented itself with the report on a draft sequence of the Neandertal genome (Green, Krause, et al., 2010). The authors emphasize in the introduction of the article that "substantial controversy surrounds the question of whether Neandertals interbred with anatomically modern humans" (p. 710). The challenged model was based on the idea that modern humans, after leaving Africa, completely replaced Neandertals without interbreeding. This theory was supported by evidence on morphological features and DNA of modern humans, although the evidence was considered to be inconclusive.
The draft sequence of the Neandertal genome suggests that Europeans and Asians, but not Africans, have inherited genes from Neandertals, a finding that does not fit with the model. Instead, the authors put forward an alternative theory: Neandertals interbred with modern humans after they left Africa, but before they spread into Europe and Asia. While Koshland's categorization is intuitive, it has thus far not been used to systematically map scientific discoveries. One obstacle to using Koshland's theory to classify scientific breakthroughs is that Koshland does not specify whether we should understand scientific discoveries as mutually exclusive (i.e., Chance, Charge, or Challenge alone), or as combinations of types. For example, a discovery can fit the description of both Chance and Challenge. Consider, as an example, an article published in 2000 reporting on the discovery of two early hominid skulls and tools at a site in the Republic of Georgia, which the authors interpret as evidence that "the initial hominid dispersal from Africa was driven not by technological innovation but more likely by biological and ecological parameters" (Gabunia, Vekua, et al., 2000, p. 1025). This discovery fits the definition of a Chance type discovery because it involves a chance encounter that scientists recognized as important, but it also fits the definition of a Challenge type discovery because the authors interpret evidence that is incongruent with scientific theories of the time and propose an alternative theory. Moreover, Koshland also does not specify whether the three types are meant to be exhaustive. It might be that some discoveries do not fit the definition of any of Chance, Charge, or Challenge.

From Discovery Types to Discovery Dimensions

As we aim to characterize and compare scientific breakthroughs, we allow for the possibility that Koshland's discovery types are neither exhaustive nor mutually exclusive. Rather, we assume that breakthroughs can be characterized on three binary discovery dimensions. For each of Koshland's discovery types, the state of two of the three binary dimensions is fixed, while the state of the third dimension may vary (see also Table 1). For both states of each dimension, we provide examples of relevant scientific articles in Table A1. We summarize the dimensions as follows:

1. The discovery is driven by a question, or by a research object.

First, we distinguish between discoveries that are question driven and discoveries that are research object driven. Whereas in the case of question-driven discoveries the area of ignorance and the line of enquiry in the field is well established and widely shared ("we know what we do not know"), discoveries driven by a research object are inverse question driven: The discovery precedes the formulation of the question ("we do not know what we do not know") (Meyers, 2011). For example, archaeologists might discover ancient hominid remains in an unexpected location, which then raises questions about the distribution and social relations of hominids (Brunet, Guy, et al., 2002). The discovery of ancient remains thus drives the formulation of a question that was not asked before. Note that it may be the case that discoveries driven by a research object do actually provide answers to questions, but these questions were not the driver of the discovery. Koshland's Charge type can be characterized as question driven, referring to discoveries that solve long-standing problems.
In our earlier example, the discovery team was able to design a theoretical cloaking device in response to the ambition of engineering invisibility. Chance and Challenge type discoveries are both research object driven: Chance type discoveries start from an encounter with a research object that awaits interpretation, and Challenge type discoveries start from the recognition of a research object that does not fit with existing theories. In our example of a Chance type discovery, this was the observation of a giant gamma-ray burst, and in our Challenge example, this was the genetic evidence from the Neandertal genome.

2. The discovery introduces a new question/research object, or contributes to a known question/research object.

As a second dimension, we distinguish between new and known questions and research objects. We understand questions and research objects that are known as questions and research objects that are documented in the scientific literature, and of which the scientists who made the discovery were aware. Conversely, new questions and research objects are those that are introduced by the scientists who made the discovery and are, therefore, themselves part of the discovery. With regard to the scientific impact of a discovery, this is a relevant distinction, as it indicates whether the discovery team can be credited for introducing the new question or research object, or only for resolving or contextualizing it. Koshland described this in terms of "uncoverers," or scientific teams for whom uncovering the question or research object is (part of) their original contribution, and "discoverers," or scientific teams that contribute to a question or research object uncovered by others (p. 761). Koshland's Chance type can clearly be characterized along this dimension. For Chance type discoveries, it is the "uncovering" that is critical, along with recognizing and interpreting the relevance of the uncovered research object. Without observing the giant gamma-ray burst, its discovery team would not have been able to report any discovery. In the case of Charge or Challenge discoveries, the question or research object can be either new or known. For Charge type discoveries, which answer "obvious" questions, the question may be a long-standing one that many others have tried and failed to solve, such as the ambition of invisibility or the puzzle of gravity, but it may also be a question that the discoverers raised themselves as an extension of the existing literature. For example, the authors of an article that reports on the derivation of germ cells from stem cells argued that "because embryoid bodies sustain blood development, we reasoned that they might also support primordial germ cell formation," thus raising the question of whether germ cells can indeed be made from such embryoid bodies (Geijsen, Horoschak, et al., 2004, p. 148). And Challenge type discoveries can be a response to an accumulation of facts that the discovery team uncovers themselves, or that were already known in the literature before. Our example of a Challenge type discovery based on the draft sequence of the Neandertal genome includes both: It reports original evidence that counters the existing model, and describes pieces of evidence that were uncovered by others. Here, we are in agreement with Koshland (2007), who also argued that Challenge type discoveries can be accompanied by "uncovery" or not. Following his wording, it is the discovery of a new explanation of facts that is critical for Challenge type discoveries, not the uncovering of the facts as such.
3. The question or research object is against, or in line with, state-of-the-art literature.

Third, we distinguish between discoveries that go against state-of-the-art literature and discoveries that fit with or follow logically from the existing literature. In other words, the discovery may have the potential to cause a paradigm shift, or it may fit within the current paradigm (Koshland, 2007; Kuhn, 1962). Koshland's Challenge type discoveries are driven by research objects that are incongruent with the current paradigm, and their interpretation thus calls for a paradigm shift. Challenge type discoveries can thus be characterized as "against state-of-the-art literature." The article on the Neandertal genome, for example, reported existing evidence incongruent with the current paradigm, uncovered additional evidence, and offered an alternative model. Charge type discoveries, on the other hand, answer questions that have been part of the existing literature or follow logically from it, and thus cannot go against state-of-the-art literature. The discovery of a theoretical cloaking device was, indeed, in line with earlier ideas on the feasibility of such a device. Chance type discoveries may or may not be in line with state-of-the-art literature, depending on the interpretation of their discoverers. The article by Palmer et al. (2005) on an observed gamma-ray flare is an example of the former: The flare was interpreted as an additional category of flares, an interpretation that offered an extension of the current model and did not require a paradigm shift. The discovery of hominid skulls and tools in the Republic of Georgia (Gabunia et al., 2000), by contrast, is an example of both a Chance type discovery and a Challenge type discovery, as the evidence is seen as incongruent with scientific theories of the time. In summary, we can define Koshland's types as configurations of three binary dimensions, as summarized in Table 1. Following the table, Charge type discoveries are driven by a question, be it a new or known question, and are in line with existing literature. Chance type discoveries are driven by a new research object and may be in line with or against existing literature. Challenge type discoveries are driven by a new or existing research object and go against existing literature. It follows from Table 1 that Koshland's three discovery types are not exhaustive of the possible types that are analytically conceivable. Indeed, there is no reason to assume that discoveries will only meet the particular configurations of the dimensions that are consistent with Koshland's three types. Using the framework, we are able to characterize scientific breakthroughs in three binary dimensions, so that each scientific breakthrough is classified as one out of eight (2³) possible types, rather than Koshland's three; a sketch of this mapping is given below. The question, then, of which of the eight possible discovery types is most prevalent, is an empirical one.
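To make the mapping in Table 1 concrete, the following minimal Python sketch (ours, not the authors') enumerates the eight dimension configurations and labels each with the Koshland type(s) it matches; configurations that match several types, or none, correspond to the articles discussed later that do not fit the original typology.

    from itertools import product

    def koshland_label(question_driven: bool, new_object: bool, against: bool) -> str:
        """Map one configuration of the three binary dimensions to Koshland's types."""
        labels = []
        # Charge: question driven (new or known question), in line with literature.
        if question_driven and not against:
            labels.append("Charge")
        # Chance: research object driven, new object, in line with or against literature.
        if not question_driven and new_object:
            labels.append("Chance")
        # Challenge: research object driven (new or known), against literature.
        if not question_driven and against:
            labels.append("Challenge")
        return " + ".join(labels) if labels else "none of Koshland's types"

    for q, n, a in product([True, False], repeat=3):
        print(f"question_driven={q!s:5} new={n!s:5} against={a!s:5} -> {koshland_label(q, n, a)}")

Running the loop shows directly that the three types cover only part of the eight configurations, which is the motivation for treating the dimensions, rather than the types, as the unit of analysis.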
Data Collection

To characterize different types of scientific breakthroughs, we make use of Science's annual announcement of the Breakthrough of the Year (BotY) (AAAS, 2018) between 1999 and 2012. Each year, the magazine's scientific editors select "the most significant scientific discovery of the year" (AAAS, 2018) and its nine runners-up. The selected breakthroughs are described by the journal's reporters in the final issue of the year. These descriptions may refer to a single scientific breakthrough or to a multitude of breakthroughs that center on a common theme, and they include a list of references to the original research described and other supportive material. For this paper, we use the reference list of each BotY description to select research articles that report on the scientific breakthrough. We refer to these articles as breakthrough articles. We use the following requirements in our selection of breakthrough articles from the BotY (and runners-up) reference lists: (a) Articles should be written in English; (b) articles should be published in the same year as the year in which they were announced as BotY or runner-up, with the exception of articles published in December of the year before (as these were published after the BotY announcement of the previous year); (c) articles should be published in peer-reviewed academic journals (note that, although BotYs are announced by Science, reference lists include articles published in other peer-reviewed journals); (d) articles should report original results described in the BotY description: Review articles or articles that were included as further reading are omitted; (e) articles should have a DOI and be available on Web of Science (WoS); and (f) articles should not have been retracted afterwards (as retraction of articles is essentially right-censored, because any article may be retracted in the future, there may be a bias against older articles; however, there are few retractions in the data). Although the announcement of BotYs began in 1996 (replacing the annual announcement of Molecule of the Year), data are collected from 1999 onwards, as no runners-up were announced in 1998. BotYs are collected until 2012, to allow the articles at least 6 years to receive citations after publication. This resulted in 335 scientific breakthrough articles, derived from 140 BotYs (14 years: one breakthrough and nine runners-up per year). Table 2 shows a summary of this selection process. We used the DOI of each article to collect data from WoS: (a) publication date; (b) publication source; (c) a citation report consisting of the number of citations per year for 10 years, or as many years as possible for articles published after 2008; (d) the number of authors; and (e) a PDF of the article's full text (including abstract). These data were extracted from WoS in April 2018. The articles' full texts were used to code each breakthrough article in terms of the three discovery dimensions and its reported considerations of use. We also collect data on the scientific discipline of each of our breakthrough articles based on the indexation of Nature (Springer Nature, n.d.); for breakthrough articles that were not published by Nature, we identify the most relevant scientific discipline by determining the discipline of referenced articles published in Science. We distinguish between disciplines as listed by Nature: biological sciences (including anatomy, physiology, cell biology, biochemistry, biophysics, and paleontology), business and commerce, environmental sciences, health sciences (including aspects of health, disease, and healthcare aiming to develop knowledge, interventions, and technology for use in healthcare), humanities, physical sciences, scientific community and society, and social sciences (in our data set, however, we only find articles from the biological, physical, environmental, and health sciences). Because many of the examples of Chance type discoveries supplied by Koshland are specifically from the paleontological and astronomical sciences, whereas examples of Charge and Challenge type discoveries are not (Koshland, 2007), we further distinguish between paleontological sciences and other biological sciences, and between astronomical sciences and other physical sciences. A sketch of the selection step is given below.
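The selection step can be summarized in a few lines of Python. This is an illustrative sketch only: the field names on the record dictionaries are hypothetical stand-ins, not taken from the authors' actual pipeline.

    def keep_article(ref: dict) -> bool:
        """Apply selection criteria (a)-(f) to one candidate reference."""
        in_window = (ref["year"] == ref["announcement_year"]
                     or (ref["year"] == ref["announcement_year"] - 1
                         and ref["month"] == 12))
        return (ref["language"] == "English"           # (a)
                and in_window                          # (b) same year, or December before
                and ref["peer_reviewed_journal"]       # (c)
                and ref["reports_original_results"]    # (d) no reviews / further reading
                and ref["doi"] is not None
                and ref["in_web_of_science"]           # (e)
                and not ref["retracted"])              # (f)

    example = {"language": "English", "year": 2005, "announcement_year": 2005,
               "month": 5, "peer_reviewed_journal": True,
               "reports_original_results": True, "doi": "10.1038/xxxxx",
               "in_web_of_science": True, "retracted": False}
    print(keep_article(example))  # True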
Coding Discovery Dimensions and Reported Considerations of Use

We use directed content analysis (Hsieh & Shannon, 2005; Saldaña, 2015) to code each breakthrough article on each of the three binary discovery dimensions and on the considerations of use of the scientific breakthrough article. The result of this process is used to assess descriptively, statistically, and visually the differences in discipline, citation impact, and reported considerations of use between breakthrough articles by dimension (see also Section 2.2). To code articles on the three discovery dimensions, we use the text of the articles. For each article we search for key phrases indicative of each dimension. Examples of the key phrases used can be found in Table A1; they were developed in three steps. First, two coders, K. S. and M. L. W., coded 14 breakthrough articles from 2014, which were not part of the data set of this paper, by highlighting phrases that signal the state of the three dimensions as defined in Section 2.2. During this stage it was found that, while relevant phrases could be found throughout the whole text of an article, the abstract, first sentence, introduction, and conclusion are the most informative with regard to the state of the discovery dimensions. Coding thus focused on these sections, or on the whole text if abstract, introduction, and conclusion were inconclusive. Second, coding differences between K. S. and M. L. W. were discussed until a consensus was reached on common coding practices. Third, the highlighted phrases of these 14 breakthrough articles were summarized into stylized phrases. Note that some key phrases serve as signals for more than one dimension. For example, the phrase "On [date] we have observed […]" signals that the reported scientific breakthrough is research object driven rather than question driven, but also that this research object is new rather than known, because uncovering this research object is part of the breakthrough. The 335 breakthrough articles in our data set were then independently coded by both coders. For our analyses, the dimension states question driven, new, and against literature are coded as 1, and research object driven, known, and in line with literature are coded as 0. For the identification of reported considerations of use we follow the four quadrants proposed by Stokes (1997) when cross-tabulating two questions: (a) Does the article report applied considerations of use of the scientific breakthrough, or not? And (b) does the article report that the scientific breakthrough is part of a quest for fundamental understanding, or not? Articles that do not report applied considerations of use but do report contributions to fundamental understanding are considered basic research. Articles that only report applied considerations of use without contributing to fundamental understanding are applied research. Articles that report both applied and fundamental considerations of use are considered use-inspired basic research, also known as "Pasteur's quadrant" (Stokes, 1997). Finally, articles may report neither applied nor fundamental considerations of use. A sketch of this coding step is given below.
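A highly simplified sketch of the coding helpers is shown below. The key phrases are hypothetical stand-ins for the stylized phrases in Table A1 (the actual coding was done manually by two coders), and the quadrant labels follow the cross-tabulation just described.

    import re

    KEY_PHRASES = {  # illustrative stand-ins for the stylized phrases in Table A1
        "question_driven": [r"long-?standing question", r"we reasoned that"],
        "new":             [r"we (have )?observed", r"we report the discovery of"],
        "against":         [r"in contrast to current (models|theories)",
                            r"substantial controversy"],
    }

    def code_dimensions(text: str) -> dict:
        """Flag a dimension when any of its key phrases occurs in the text
        (abstract, first sentence, introduction, and conclusion in the paper)."""
        lowered = text.lower()
        return {dim: any(re.search(p, lowered) for p in pats)
                for dim, pats in KEY_PHRASES.items()}

    def stokes_quadrant(applied: bool, fundamental: bool) -> str:
        """Assign one of Stokes' (1997) four quadrants."""
        if applied and fundamental:
            return "use-inspired basic research (Pasteur's quadrant)"
        if fundamental:
            return "basic research"
        if applied:
            return "applied research"
        return "neither"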
For the development of key phrases that signal considerations of use, we followed the same procedure as for the development of key phrases for the discovery dimensions, described above. Such phrases were found to be typically reported in the final paragraph(s) of the breakthrough articles. Key phrases, as well as examples of reported considerations of use, can be found in Table A2. We present intercoder reliability for each of our coded dimensions in Table 3, where we report Cohen's kappa, which takes intercoder agreement by chance into account. We find that the kappa values are sufficiently high (Cohen, 1960). In cases of disagreement, final codes are based on consensus between the two coders. Consensus was found for all articles in the data set, which implies that all 335 publications originally selected serve as empirical observations.

Analysis

We run χ² tests and Tukey's HSD post hoc tests (Tukey, 1949) to test whether discovery dimension states are associated with scientific disciplines and reported considerations of use. We also present bar charts to assess differences visually. To test whether discovery dimensions affect cumulative citation patterns, we run a set of regressions with the number of cumulative citations as the dependent variable and three dummies that represent the three binary discovery dimensions as the main independent variables. Ten regression models estimate cumulative citations from 1 to 10 years after publication. For this, negative binomial regression is appropriate, as our dependent variables reflect overdispersed count data (Cameron & Trivedi, 1998): if we were to use a Poisson regression rather than a negative binomial regression to model cumulative citations 10 years after publication using all our independent variables, the residual variance of 184.073 would exceed the 220 degrees of freedom. As each BotY description can contain references to multiple breakthrough articles, we cluster standard errors at the level of the BotY description; a sketch of this model specification is given below. As using cumulative citations per year makes it difficult to identify differences in the number of citations per year, we also rerun our models using the number of citations per year for 1 to 10 years after publication, rather than cumulative citations, as dependent variables; these models are presented in Figure A1 and Table A3. Individual authors may each boost the cumulative citations to their own articles by bringing their work to the attention of others (Aksnes, 2003). Therefore, we include the number of authors as a control variable. As this variable is heavily skewed, we use a log transformation of the number of authors. In a second set of models, we further include dummy variables for discipline, as it is known that citation rates vary between disciplines, and we find that configurations of discovery dimensions are not randomly distributed over disciplines.
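The model specification described above might look as follows with statsmodels. This is a sketch under the assumption of a pandas DataFrame df with illustrative column names (cites_10y, q_driven, new_qo, against, n_authors, discipline, boty_id); it is not the authors' actual code.

    import numpy as np
    import statsmodels.formula.api as smf

    model = smf.negativebinomial(
        "cites_10y ~ q_driven + new_qo + against + np.log(n_authors) + C(discipline)",
        data=df,
    )
    # Standard errors clustered at the level of the BotY description.
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["boty_id"]})
    print(result.summary())
    # Exponentiated coefficients give the incidence rates plotted in Figure 2.
    print(np.exp(result.params))

Clustering by boty_id reflects that several articles can stem from one BotY description, and the exponentiated coefficients are the incidence rate ratios relative to the reference state of each dimension.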
Configurations of Discovery Dimensions and Associated Disciplines and Considerations of Use

In Table 4, we present the distribution of articles in our data set over the eight different configurations of discovery dimensions. We also compare our typology to Koshland's. We see that some combinations of characteristics are more common than others. Notably, the majority of our articles (77%) are what Koshland would describe as Charge type discoveries: driven by a known question that is in line with theory, irrespective of whether the research object is new or known. We also find that most articles can indeed be classified according to Koshland's typology: Only 43 articles (13%) do not fit that typology, either because they have properties of more than one type (11%) or because they have properties of none (2%). In this sense, the original typology of Koshland can be regarded as useful. Figure 1 presents bar charts of the discipline and reported considerations of use for each configuration of discovery dimensions, discussed in detail below. Most articles in our data set report considerations of use for fundamental understanding only (74%). Just a few are only applied (7%), while another 17% are classified as both fundamental and applied; 2% report neither. A large share (47%) of the articles report on research in the biological sciences (excluding paleontology), while 26% report on research in the physical sciences (excluding astronomy). Research in the fields of health sciences and environmental sciences is less common.

Dimension A

The majority of articles in this study (86%) are question driven. We find that question-driven discoveries are not randomly distributed across disciplines (χ² = 100, df = 5, p < .001). Based on our post hoc test, we find that being question driven is associated with health sciences more than with other disciplines (p < .05). Indeed, almost all (96%) health sciences articles in our data set are question driven. Conversely, being research object driven is associated with astronomy and paleontology more than with other disciplines (p < .05). This may be because disciplines such as paleontology and astronomy more often encounter unexpected physical research objects, for example from fossil records and satellite observations, respectively. Question-driven articles are also not randomly distributed across the four Stokes quadrants (χ² = 20, df = 3, p < .01). Our analysis suggests that being question driven is associated with reporting only applied considerations of use and with reporting both applied and fundamental considerations of use, while research object driven breakthroughs are associated with reporting neither (p < .05).

Dimension B

Articles with a new question or research object are slightly less common than articles with a known question or research object (43% versus 57% of articles). Most of these are articles with a new question rather than a new research object (74%). Of our total set of articles, only 10% report on a new research object that is in line with theory. Our results indicate that this dimension and discipline are not independent (χ² = 20, df = 5, p < .001). Specifically, we find that having a new question or research object is associated more with the biological sciences (but not paleontology) and astronomy, while having a known question or research object is associated with the physical sciences (but not astronomy) (p < .05). It is further worth noting that breakthroughs that are specifically driven by a new research object are primarily found in astronomy (see also Figure 1). We do not find a strong association between this discovery dimension and Stokes' four quadrants regarding considerations of use (χ² = 7, df = 3, p > .05). A sketch of these association tests is given below.
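The dimension-by-discipline association tests reported above (e.g., χ² = 100 with df = 5 for Dimension A) amount to a contingency-table test; a minimal sketch, again assuming the DataFrame df introduced earlier:

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Dimension state (0/1) against the six discipline groups gives df = 5.
    table = pd.crosstab(df["q_driven"], df["discipline"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.0f}, df = {dof}, p = {p:.3g}")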
Dimension C

Breakthroughs that go against the state-of-the-art literature are uncommon (11%), and among them the large majority are question driven. We do not find a strong association between this discovery dimension and discipline (χ² = 9, df = 5, p > .05). Our post hoc tests suggest that breakthroughs going against the literature are somewhat common in paleontological articles, while being in line with the literature is associated more with health sciences (p < .1) and physical sciences except astronomy (p < .05). In terms of reported considerations of use, we do not find significant evidence that being against state-of-the-art literature is associated with reported considerations of use (χ² = 5, df = 3, p > .1). Table 5 shows descriptive statistics of the cumulative number of citations within 10 years per discovery dimension state. On average, the articles in our data set collect 799 citations within the first 10 years after publication. However, with a median of 489 and an interquartile range of 625, this varies broadly: While the lowest decile of the articles in our data set have fewer than 108 citations, the highest decile has more than 1,571 (a sketch of these descriptive statistics is given at the end of this subsection). Interestingly, one article did not receive any citations within 10 years. (This article reports the first results from the Sudbury Neutrino Observatory (Helmer & SNO Collaboration, 2002). It may be that it received no citations because two other articles reporting results from the same observatory are also included here; while all three articles were originally submitted in April 2002, the Helmer et al. paper was published in November, whereas the other two were published in June.)

Citation Impact and Discovery Dimensions

To test whether the discovery dimensions can explain some of the variation in citation counts, we present the incidence rates of negative binomial regression models including control variables in Figure 2, with one regression for each of the 10 years. Incidence rates for the effect of being question driven, driven by a new question or research object, or being against literature are presented relative to being research object driven, driven by a known question or research object, and being in line with literature, respectively. Table 6 presents the regression coefficients of our models, where cumulative citations 1, 2, 5, and 10 years after publication are used as dependent variables. Models based on cumulative citations after 3, 4, 6, 7, 8, and 9 years were omitted from this table for readability reasons. Dummies for the three binary dimensions (with question driven, new, and against literature coded as 1, and research object driven, known, and in line with literature coded as 0) and control variables for the number of authors (log) and discipline, with biological sciences as the reference category, are included.

Dimension A

We find that being question driven has a positive effect on the cumulative citations of scientific breakthrough articles. After 10 years, articles that are question driven are estimated to receive twice as many citations as articles that are research object driven. Controlling for discipline in Models 5-8 slightly reduces the effect of being question driven, suggesting that part of the effect seen in Models 1-4 is, in fact, due to the high citation rates of disciplines that are associated with being question driven. However, this does not alter our conclusion that being question driven has a positive effect on the cumulative citations of scientific breakthrough articles.
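The Table 5 descriptive statistics quoted above (mean, median, interquartile range, and deciles of 10-year cumulative citations) can be reproduced with a few lines; a sketch assuming the same df:

    import numpy as np

    cites = df["cites_10y"].to_numpy()
    q1, med, q3 = np.percentile(cites, [25, 50, 75])
    d10, d90 = np.percentile(cites, [10, 90])
    print(f"mean = {cites.mean():.0f}, median = {med:.0f}, IQR = {q3 - q1:.0f}")
    print(f"lowest decile < {d10:.0f} citations, highest decile > {d90:.0f}")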
Dimension B

We do not observe a significant association between this dimension and cumulative citations. Our coefficients suggest that there may be a small positive effect of being driven by a new question or research object on cumulative citations shortly after publication, which decreases in later years. This may be caused by the unexpectedness and novelty of the new question or new physical evidence introduced in the breakthrough article, and the sudden interest that this may spark. However, this is not a significant finding. (Note to Table 6: *** p < .001, ** p < .01, * p < .05.)

Dimension C

We do not find a significant association between articles going against the literature and cumulative citations. Upon visual inspection, there is some indication that breakthrough articles driven by a question or research object that is against state-of-the-art literature receive more citations in later years (years 9 and 10). The observed trend supports the idea that paradigm-shifting discoveries require more time to have an impact before they can be integrated into future knowledge development. However, the results are statistically insignificant. The results of Models 9-16, where we use citations per year rather than cumulative citations per year as the dependent variable, are presented in Figure A1 and Table A3. These results are in line with our earlier results. Again, we find that only the question-driven dimension significantly affects the number of citations received. We find that the difference in the number of citations per year between question-driven and research object-driven articles is largest after 4-5 years.

DISCUSSION

In this paper, we have developed a typology of scientific breakthroughs and applied this typology to characterize a set of articles reporting on scientific breakthroughs. Using Koshland's Charge-Chance-Challenge theory of scientific discovery as a starting point, we propose that scientific breakthroughs can be characterized along three dimensions: (a) whether the discovery is question driven or research object driven; (b) whether the discovery contributes to a known question or research object or introduces a new one; and (c) whether the discovery is in line with, or against, state-of-the-art literature. We subsequently used the typology to characterize 335 breakthrough articles along the three dimensions and analyzed how breakthrough characteristics relate to scientific disciplines, citation impact, and considerations of use for fundamental understanding and application. One of our main findings is that the large majority of breakthrough discoveries can be classified as one of Koshland's discovery types within his Cha-Cha-Cha framework. However, we also observed that a small proportion of breakthroughs could not be characterized as any of Koshland's types, and some other articles fell into multiple Koshland types. Based on this finding we conclude that, rather than distinguishing between Charge, Chance, and Challenge types, breakthroughs can better be understood as being question driven or research object driven, introducing a new question/research object or contributing to a known one, and having a contribution that is against or in line with state-of-the-art literature. We believe that our framework marks an improvement over the original Cha-Cha-Cha theory, as we have made the underlying dimensions explicit and orthogonal to one another, expanding the typology from 3 to 2³ = 8 types. Our framework, then, can be used in future research to further probe the antecedents and effects of scientific breakthroughs.
It can equally be used to analyze differences between the characteristics of breakthrough and nonbreakthrough discoveries. A logical extension of this paper is also to study whether the configurations of discovery dimensions discussed here are distributed differently over breakthroughs than over nonbreakthroughs, and to test whether the citation patterns we found are also observed for nonbreakthroughs. Our main empirical finding is that most scientific breakthroughs are driven by an already existing question and are in line with the state-of-the-art literature. This finding broadens our view of science in that it questions the popular view of scientific breakthroughs as radical, paradigm-shifting discoveries (e.g., Evans, 2016; Ventegodt & Merrick, 2004). Rather, it suggests that the majority of scientific discoveries that are recognized as breakthroughs are better described as "normal science" (Kuhn, 1962). Our analysis also shows that articles reporting on scientific breakthroughs vary considerably in their citation impact. In particular, breakthrough articles that were driven by a research object rather than a question receive far fewer citations. This finding has implications for the interpretation of earlier research on scientific breakthroughs. Previous research has mainly analyzed scientific breakthroughs based on citation impact, thereby considering breakthroughs a homogeneous group of discoveries (Ponomarev et al., 2014; Uzzi et al., 2013; Zeng et al., 2017). In contrast, our findings suggest that earlier research aimed at identifying supportive conditions for scientific breakthroughs did not recognize the variety of breakthroughs and may have been biased against the minority of breakthroughs driven by research objects. Therefore, those findings may not be generalizable to all scientific breakthroughs. For the literature on scientific breakthroughs, a next step is to identify how conditions such as team composition and sponsorship affect the occurrence of the variety of scientific breakthroughs, and in particular of those breakthroughs that are research object driven, as we have shown that these have been underrepresented in the literature thus far. In this research, discoveries that were marked as scientific breakthroughs by Science served as the point of departure. This operationalization of scientific breakthroughs has several implications for the generalizability of our findings. In the first place, our research only includes discoveries that were recognized as scientific breakthroughs within a year after publication. Discoveries that are recognized as such in a later phase may not have the same characteristics. For example, their citation impact may differ over time. An interesting avenue for future research would be to distinguish between discoveries that are received as breakthroughs shortly after publication and those that are recognized as breakthroughs later on. An interesting question is then whether the relative prominence of the three dimensions introduced here differs between early and delayed recognition. In the second place, it is not unlikely that the nomination for BotY in itself affects the way a discovery is received. The increased visibility of the discovery may inspire others to refine the discovery in other research projects, and it can lead to an increase in citations or even an increase in the likelihood of receiving a significant prize, such as a Nobel Prize or a Fields Medal.
For future research, we encourage alternative approaches to identifying scientific breakthroughs that are more sensitive to delayed recognition and are not based on external assessments. One such approach has been developed by Small, Tseng, and Patek (2017), who identify and characterize biomedical discoveries based on automated text analysis of citing sentences and cocitation analysis. Our analysis of breakthrough discoveries is further limited by what has been reported in the scientific articles. As such, we must limit ourselves to an analysis of the reported drive of the scientific discoveries observed, which may not be the same as the actual drive of the discovery. Indeed, authors may present the process of discovery as more linear and rational than it actually was (Myers, 1985). Similarly, the authors' motivation to write and publish the article may be different from their motivation to start the reported research project. For example, their original line of enquiry may have resulted in a serendipitous finding that solves an unexpected problem in another line of enquiry (Yaqub, 2018), which might lead the authors to change their narrative as well. Furthermore, our analysis is limited by the limited number of articles considered. With more observations, we could test differences in citation patterns for combinations of dimensions, rather than for single dimension states. This may help us understand, for example, whether breakthrough articles that go against existing theory are accepted by the scientific community faster if they provide an answer to a long-standing question. This is likely, as the question-driven approach of such articles may provide more legitimacy to the anomalous finding than if it were driven by new evidence. We therefore encourage others to extend our analysis to a larger set of breakthrough articles, potentially also including a broader range of scientific disciplines.

Figure A1. Incidence rate for noncumulative citations after N years, after controlling for the log of the number of authors and discipline. (Note: *** p < .001, ** p < .01, * p < .05.)
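As a small illustration of the switch from cumulative citations (Models 1-8) to citations per year (Models 9-16 and Figure A1), differencing a cumulative citation series recovers the yearly counts; the numbers below are made up for illustration only.

    import numpy as np

    cumulative = np.array([12, 40, 95, 170, 260, 355, 450, 540, 620, 690])  # years 1..10
    per_year = np.diff(cumulative, prepend=0)
    print(per_year)  # [12 28 55 75 90 95 95 90 80 70]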
10,038
2020-08-01T00:00:00.000
[ "Physics" ]
Some completely monotonic functions involving the polygamma functions

Motivated by existing results, we present some completely monotonic functions involving the polygamma functions.

Introduction

The digamma (or psi) function ψ(x) for x > 0 is defined to be the logarithmic derivative of Euler's gamma function:

ψ(x) = Γ′(x)/Γ(x),  Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt.

The function ψ and its derivatives are called polygamma functions. There are many interesting inequalities involving the polygamma functions in the literature, many of which are closely related to the fact that ψ′ is completely monotonic on (0, +∞). Here we recall that a function f(x) is said to be completely monotonic on (a, b) if it has derivatives of all orders and

(−1)^k f^{(k)}(x) ≥ 0,  x ∈ (a, b),  k ≥ 0,

and f(x) is said to be strictly completely monotonic if the above inequalities are strict. A general result of Fink [4, Theorem 1] on completely monotonic functions implies that for integers n ≥ 2,

(ψ^{(n)}(x))² ≤ ψ^{(n−1)}(x)ψ^{(n+1)}(x).

An inequality in the reverse direction, bounding (ψ^{(n)}(x))² from below by a constant multiple of ψ^{(n−1)}(x)ψ^{(n+1)}(x), is given in [8]; a short proof of that inequality is given in [3]. For integers p ≥ m ≥ n ≥ q ≥ 0 and any real number s, we define

F_{p,m,n,q}(x; s) = ψ^{(m)}(x)ψ^{(n)}(x) − s ψ^{(p)}(x)ψ^{(q)}(x),

where we set ψ^{(0)}(x) = −1 for convenience. In [2, Theorem 2.1], Alzer and Wells established a nice generalization of the above results. Their result asserts that for n ≥ 2, the function F_{n+1,n,n,n−1}(x; s) is strictly completely monotonic on (0, +∞) if and only if s ≤ (n − 1)/n, and −F_{n+1,n,n,n−1}(x; s) is strictly completely monotonic on (0, +∞) if and only if s ≥ n/(n + 1). For a given function f(x) and c > 0, we denote

f_c(x) = (f(x + c) − f(x))/c.

We define, for integers p ≥ m ≥ n ≥ q ≥ 0, a real number c > 0, and any real number s,

F_{p,m,n,q}(x; s; c) = ψ^{(m−1)}_c(x)ψ^{(n−1)}_c(x) − s ψ^{(p−1)}_c(x)ψ^{(q−1)}_c(x),

where we set ψ^{(0)}(x) = ψ(x) and ψ^{(−1)}(x) = −x for convenience. We further define F_{p,m,n,q}(x; s; 0) = lim_{c→0+} F_{p,m,n,q}(x; s; c), and it is then easy to see that F_{p,m,n,q}(x; s; 0) = F_{p,m,n,q}(x; s). Motivated by the above results, it is our goal in this paper to prove the following:

We then deduce that u′(s; a, c) ≤ 0 when 0 < c ≤ 1 and u′(s; a, c) ≥ 0 when c ≥ 1, and this completes the proof.

Proof of Theorem 1.2

We first prove assertions (1)(a) and (2)(a) of the theorem. Note first that if F_{p,m,n,q}(x; s; c) is completely monotonic on (0, +∞), then it follows easily from the mean value theorem and (2.4) that s ≤ α_{p,m,n,q}. Similarly, one shows that if −F_{p,m,n,q}(x; s; c) is completely monotonic on (0, +∞), then s ≥ α_{p,m,n,q}, and this proves the "only if" part of assertions (1)(a) and (2)(a) of the theorem. To prove the "if" part of assertions (1)(a) and (2)(a), it is easy to see that it suffices to show that F_{p,m,n,q}(x; α_{p,m,n,q}; c) is completely monotonic on (0, +∞) when 0 < c ≤ 1 and that −F_{p,m,n,q}(x; α_{p,m,n,q}; c) is completely monotonic on (0, +∞) when c ≥ 1. We first consider the function F_{p,m,n,q}(x; α_{p,m,n,q}; c) with q ≥ 1, following the approach in [2]. Using the integral representations (2.1) and (2.2) for the polygamma functions, and using * for the Laplace convolution, we get

F_{p,m,n,q}(x; α_{p,m,n,q}; c) = (1/c²) ∫_0^∞ e^{−xt} g_{p,m,n,q}(t; α_{p,m,n,q}) dt,

where g_{p,m,n,q}(t; α_{p,m,n,q}) contains the factor t^{m−1}(e^{−ct} − 1). By a change of variable s → ts we can recast g_{p,m,n,q}(t; α_{p,m,n,q}) as t^{m+n−1} times an integral over s from 0 to 1. We now break that integral into two integrals, one from 0 to 1/2 and the other from 1/2 to 1, and make a further change of variable s → (1 − s)/2 in the first and s → (1 + s)/2 in the second.
We now combine them to get a symmetrized expression for g_{p,m,n,q}(t; α_{p,m,n,q}). It follows from a(1; p − q, n − q, α_{p,m,n,q}) > 0 and lim_{t→+∞} a(t; p − q, n − q, α_{p,m,n,q}) < 0 that the corresponding inequality holds for 0 < s ≤ s_0, with the inequality reversed when s_0 ≤ s < 1. We further note, by Lemma 2.3, that the function u(s; t/2, c) is decreasing on s ∈ (0, 1) when 0 < c ≤ 1 and increasing when c ≥ 1. Thus we conclude that the resulting bound on g_{p,m,n,q}(t; α_{p,m,n,q}) holds when 0 < c ≤ 1, with the inequality reversed when c ≥ 1. Note that the integral in that bound is (by reversing the changes of variables above) evaluated using the well-known beta function identity

∫_0^1 s^{x−1}(1 − s)^{y−1} ds = Γ(x)Γ(y)/Γ(x + y),  x, y > 0,

and the well-known fact Γ(n) = (n − 1)! for n ≥ 1. It follows that g_{p,m,n,q}(t; α_{p,m,n,q}) ≥ 0 when 0 < c ≤ 1 and g_{p,m,n,q}(t; α_{p,m,n,q}) ≤ 0 when c ≥ 1, and this completes the proof of the "if" part of assertions (1)(a) and (2)(a) of Theorem 1.2 for F_{p,m,n,q}(x; α_{p,m,n,q}; c) with q ≥ 1. Now we consider the function F_{p,m,n,q}(x; α_{p,m,n,q}; c) with q = 0. In this case p = m + n, and we note that

α_{m+n,m,n,0} = B(m, n) = ∫_0^1 s^{m−1}(1 − s)^{n−1} ds,

and we use this to rewrite α_{m+n,m,n,0}, which yields an integral representation for F_{m+n,m,n,0}(x; α_{m+n,m,n,0}; c). Now we note that, for h_c(s) defined as in (3.1), a corresponding identity holds in terms of v_c(x), where v_c(x) is defined as in (2.5). It follows from the proof of Lemma 2 that, for t ≥ s ≥ 0, the stated inequality holds, with the inequality reversed when c ≥ 1. This implies that the function t → ln h_c(t − s) − ln h_c(t) is increasing (resp. decreasing) for t > s when 0 < c ≤ 1 (resp. when c ≥ 1). Thus we obtain the desired bound when 0 < c ≤ 1, with the inequality reversed when c ≥ 1. One checks easily that this implies the corresponding bound involving the factor 1 − e^{−t} when 0 < c ≤ 1, with the inequality reversed when c ≥ 1. This implies the "if" part of assertions (1)(a) and (2)(a) of the theorem for F_{m+n,m,n,0}(x; α_{m+n,m,n,0}; c). Now we prove assertions (1)(b) and (2)(b) of the theorem. Note that if F_{m+n,m,n,0}(x; s; c) is completely monotonic on (0, +∞), then it follows from (2.3) that s ≤ α_{m+n,m,n,0}/c. Similarly, one shows that if −F_{m+n,m,n,0}(x; s; c) is completely monotonic on (0, +∞), then s ≥ α_{m+n,m,n,0}/c, and this proves the "only if" part of assertions (1)(b) and (2)(b) of the theorem. To prove the "if" part of assertions (1)(b) and (2)(b), it is easy to see that it suffices to show that −F_{m+n,m,n,0}(x; α_{m+n,m,n,0}/c; c) is completely monotonic on (0, +∞) when 0 < c ≤ 1 and that F_{m+n,m,n,0}(x; α_{m+n,m,n,0}/c; c) is completely monotonic on (0, +∞) when c ≥ 1. Similarly to (3.3), we obtain an integral representation for F_{m+n,m,n,0}(x; α_{m+n,m,n,0}/c; c). For fixed t > s > 0, define r_{s,t}(c) accordingly. Then, as it is easy to see that the function x → x/(e^x − 1) is decreasing for x > 0, the function r_{s,t}(c) is an increasing function of c, so that r_{s,t}(c) ≤ r_{s,t}(1) when 0 < c ≤ 1 and r_{s,t}(c) ≥ r_{s,t}(1) when c ≥ 1. One sees easily that the "if" part of assertions (1)(b) and (2)(b) of the theorem follows from this. Lastly, we prove assertion (3) of the theorem.
This is similar to our proof above of the "if" part of assertions (1)(a) and (2)(a) of the theorem for F_{p,m,n,q}(x; α_{p,m,n,q}; c) with q ≥ 1, except that we replace α_{p,m,n,q} by β_{p,m,n,q} and recast the function g_{p,m,n,q}(t; β_{p,m,n,q}) similarly to (3.2), with a leading factor of t². It is then easy to show, using the method in the proof of Lemma 2.3, that the function s → a^{−2}(1 − s²)^{−1} u(s; a, c) is increasing on s ∈ (0, 1) when c > 0, and essentially repeating the rest of the proof of the "if" part of assertions (1)(a) and (2)(a) of the theorem allows us to establish assertion (3) of the theorem.
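As a sanity check on the Alzer-Wells thresholds quoted in the introduction (and on the reconstruction of F_{n+1,n,n,n−1} given there), the signs can be verified numerically with SciPy at a few sample points; this is an illustration, not a substitute for the proof.

    from scipy.special import polygamma

    def F(x, n, s):
        # F_{n+1,n,n,n-1}(x; s) = psi^(n)(x)^2 - s * psi^(n+1)(x) * psi^(n-1)(x)
        return polygamma(n, x) ** 2 - s * polygamma(n + 1, x) * polygamma(n - 1, x)

    for n in (2, 3, 5):
        for x in (0.1, 1.0, 10.0):
            assert F(x, n, (n - 1) / n) > 0    # strictly CM => positive
            assert -F(x, n, n / (n + 1)) > 0   # -F strictly CM => F negative
    print("thresholds consistent at the sampled points")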
2,178.4
2010-11-15T00:00:00.000
[ "Mathematics" ]
Central Monitor Based On Personal Computer Using Single Wireless Receiver (SpO2 Parameter)

A central monitor is a tool that tracks the condition of patients by combining several devices into one display on a personal computer (PC). Pulse oximetry serves to monitor the state of oxygen saturation in the patient's blood without requiring blood gas analysis. This tool uses a wireless delivery system, the HC-11, which can transmit data as far as 10 meters without obstruction. The tool uses a finger sensor, an analog signal conditioner, and a microcontroller whose output is processed to produce a percentage value of SpO2, which is then sent through the HC-11. Operating the tool is very simple: the finger sensor is placed on a finger, the sensor detects the signal, and the result is displayed on the PC. Digital data from the ATmega's ADC is received by the personal computer (PC) via the HC-11 Bluetooth module; the data is then processed with the Delphi program and shown on the monitor in the Delphi application. After measurement, the error in the SpO2 parameter was found to range from a smallest value of 0.8% to a largest value of 1.02%.

INTRODUCTION

In every hospital, patient monitors are present to support the work of doctors and nurses. Patient monitors are medical devices that are used to monitor patients' physiological conditions. They cover the standard examinations, namely ECG, respiration, blood pressure (NIBP), and blood oxygen level, also called blood saturation or SpO2 [1]. The alarm serves as an indicator to alert health workers if oxygen saturation decreases below the level of 90% or the pulse rate is low. Adding an alarm increases the value of pulse oximetry, making it more automatic and quicker to respond for patient safety. By using a buzzer circuit connected to the microcontroller, alarm parameters can be set properly [2]. If the human body lacks, or has an excess of, oxygen, illness and disruption to other body systems will result. Diseases caused by a lack or excess of oxygen include hypoxemia, anemia, and others; at some level, such diseases can pose a risk of death [3]. A lack of oxygen supply in the body can cause tissue damage due to hypoxia [4].

Based on a literature search, a pulse oximeter was made by Teguh Pratomo (2016), "Pulse Oximeter Fingertip PC Display (SpO2)"; it was noted that the tool was not equipped with storage for the pleth-signal analysis process and needed software improvements so that the display would automatically adjust to the reference blood concentration of each patient. A tool called "Designing Oxygen Saturation Measuring Instrument in LCD Graph Blood Display" was made by Pramita Galuh Ajeng Pradana in 2017; this tool was able to display signals on an LCD graph, but the programming used was still AVR and the tool could not be used for central monitoring.
Then, in 2014, research was developed by Desak Putri Puspita Indriani, Yudianingsih, and Evrita Lusiana Utari entitled "Designing Pulse Oximetry with Priority Alarms as a Vital Monitoring for Patients". That study still had an oxygen saturation measurement that was not accurate, because the components used were ordinary components rather than tantalum ones. Research was also carried out with the title "Monitoring Heart Rate equipped with Temperature Sensors by Sending Data to PC via Bluetooth" by Duta (2016). That study used the ATmega8535 microcontroller as a data collector, with data displayed on a PC wirelessly. The patient's temperature was determined using an LM35, and the patient's BPM value was obtained from a BPM instrumentation circuit that uses a finger sensor. That tool also featured internal PC storage of patient data, but it was designed to monitor only one module per patient.

Based on the identification of the problems above, the authors built a central monitor based on a personal computer (PC) via wireless with one receiver (SpO2), where a single receiver is used to save on manufacturing cost. The hope is that monitoring can be done through a wireless system, without cable media as the medium for sending and displaying data, to overcome the shortcomings of the previous studies.

A. Experimental Setup

This study was applied to patients over the age of 17 years. Data retrieval was done on random respondents five times, and data were displayed in real time with a measurement time of 7 seconds, alternating between module 1 and module 2. Data collection was carried out at distances of 5 meters and 10 meters without obstructions, with two patients monitored simultaneously. The parameter used is SpO2.

1) Materials and Tools

This study uses a Nellcor-brand finger sensor for the oxygen saturation parameter; the finger sensor is attached to the patient's finger. An Arduino Nano is used as the microcontroller to provide the SpO2 data obtained from the finger sensor, connected to the HC-11 Bluetooth module port to send data for display on the personal computer.

2) Experiment

In this study, the researchers built a module that conditions the oxygen saturation signal from the finger sensor. Data are sent with a Bluetooth delivery system using the HC-11 module, and the results are displayed on a PC using the Delphi application; every seven seconds, results are sent alternately between module 1 and module 2. The researchers conducted several tests, described below.
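On the PC side, the Delphi application receives the alternating module frames through the single HC-11 receiver. The sketch below is a hypothetical Python stand-in for that receiver logic; the port name, baud rate, and the "<module_id>,<spo2>" frame format are assumptions for illustration, not the device's documented protocol.

    import serial  # pyserial

    port = serial.Serial("COM3", 9600, timeout=10)  # placeholders for the real port/baud
    latest = {1: None, 2: None}  # most recent SpO2 per module

    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        try:
            module_id, spo2 = line.split(",")
            latest[int(module_id)] = float(spo2)
        except ValueError:
            continue  # ignore malformed frames
        for mod, value in latest.items():
            # 90% alarm threshold, following the introduction [2].
            if value is not None and value < 90:
                print(f"ALARM: module {mod} SpO2 = {value:.0f}%")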
a. Test for processing oxygen saturation data using an automatic reference system. The patient's index finger is fitted with a finger sensor, which serves to detect the oxygen saturation signal in the blood; the signal is then processed in a PSA circuit consisting of HPF and LPF filters and an amplifier with a gain of 101, so that the signal detected by the finger sensor can be seen clearly and is ready to be sent to the ADC port of the microcontroller as analog input. The signal is converted into digital data, sent via the HC-11 Bluetooth module, and displayed on the PC as a percentage value of the oxygen level; the data is received by the receiver circuit and processed on the PC. When the oxygen level is below the normal value, the alarm sounds as a warning. The digital finger sensor reading starts with reading the ADC as the result of processing the analog signal from the finger sensor; the flow starts from the sensor reading, which is stored in a variable and sent to the HC-11 Bluetooth module. The AC and DC components of the RED and IR signals enter ADC pins A3, A4, A6, and A7. The SpO2 signal entering the ADC pins is converted by the minimum system into digital data, which is then processed and transmitted with the HC-11 wireless module. The data is received by the receiver, an HC-11 module connected to a PL2303 module attached to the personal computer, so that the information received as string data can be processed using the Delphi 7 application. The reading flow begins with reading the ADC as the result of conditioning the analog signal from the finger sensor; the ADC signal is processed and calculated, and the results are displayed on the PC. The sensor circuit works with a voltage-divider system: the photodiode receives continuous light from the infrared LED, which is then converted into a voltage.

3) Demultiplexer Circuit

The measurement at the demultiplexer input shows a 1199 Hz signal with an amplitude of 584 mV with the sensor installed on the patient, obtained from the photodiode output; at the demultiplexer input, the AC and DC signals from RED and IR are still mixed. The measurement at the output amplifier and second filter, at pin 14 of the demultiplexer, shows a frequency of 609.7 mHz and an amplitude of 6.60 V. This is the second gain stage of the AC RED signal, which is amplified again by a gain of 101x after the first gain; the 1 kHz noise artifacts look thinner because filters with the same cut-off frequency are used, so the filtering is more effective.

III. RESULTS AND DISCUSSION

In this study, the results were compared with a predetermined comparison instrument; measurement and analysis of delivery times, and calculation and analysis of data lost during transmission, were also conducted. The program listing uses an auto-reference system that takes the highest percentage of the signal voltage to compute the SpO2 result. Based on the measurements, the SpO2 value on this central monitor module has an error of 1.03%. From this value, the module can be considered usable, as the error remains close to the maximum limit of ±1%. The SpO2 signal and value are displayed on the PC using the Delphi application; the display shows the number 98 and a normal indication.
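The percentage value of SpO2 is derived from the AC and DC components of the RED and IR signals read on the ADC pins. A common way to compute it is the ratio-of-ratios estimate sketched below; the linear calibration SpO2 = 110 − 25R is a widely used empirical approximation assumed here for illustration, not a constant taken from this paper.

    def spo2_percent(ac_red: float, dc_red: float, ac_ir: float, dc_ir: float) -> float:
        """Ratio-of-ratios SpO2 estimate, clamped to a valid percentage."""
        r = (ac_red / dc_red) / (ac_ir / dc_ir)
        return max(0.0, min(100.0, 110.0 - 25.0 * r))

    print(spo2_percent(ac_red=0.48, dc_red=1.0, ac_ir=1.0, dc_ir=1.0))  # 98.0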
IV. DISCUSSION

After comparing the results against the reference instruments, the average error in the SpO2 parameter is 0.7%, and the temperature reading is within ±1 °C. The missing-data test at a distance of 10 meters shows a 1% error in the values displayed on the PC with the Delphi application, and the signal carries some noise, although it is not large. From the delivery-time tests at different distances, it can be concluded that transmission with the Bluetooth HC-11 works at distances up to 10 meters. Future development could improve the data delivery process.

V. CONCLUSION

This research shows that the central monitor performs vital sign monitoring in real time and alternately between modules. Transmission via the Bluetooth HC-11 can be applied properly. This allows monitoring without cable media and makes it easy for technicians to troubleshoot. The design aims to maximize the monitoring of outpatient conditions.

Fig 4. Mechanism diagram. Module 1 and module 2 data are sent via the HC-11 Bluetooth module to a single receiver and displayed on the PC, with an alarm warning when a parameter is abnormal.

Fig 8. PWM circuit program listing on the Arduino, starting from reading the ADC through the pin and converting the ADC data to a voltage.

Fig 9. The design of the central monitor.

TABLE I. Measurement error between the module and the comparison apparatus.
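As a companion to TABLE I, the following Python sketch shows how the reported error and data-loss figures can be computed from paired readings of the module and the comparison apparatus; the sample values are illustrative placeholders, not the study's measured data.

    # Mean percentage error between the module and the comparison
    # apparatus, as reported in TABLE I (illustrative readings only).
    module_spo2    = [97, 98, 96, 98, 97]   # five trials from the module [%]
    reference_spo2 = [98, 98, 97, 99, 97]   # comparison apparatus [%]

    errors = [abs(m - r) / r * 100.0 for m, r in zip(module_spo2, reference_spo2)]
    print(f"mean error = {sum(errors) / len(errors):.2f} %")

    # Packet-loss rate for the missing-data test at a given distance:
    sent, received = 100, 99
    print(f"lost data = {(sent - received) / sent * 100:.1f} %")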
The physics of the mean and oscillating radial electric field in the L–H transition: the driving nature and turbulent transport suppression mechanism The low-to-high confinement mode transition (L–H transition) is one of the key elements in achieving a self-sustained burning fusion reaction. Although there is no doubt that the mean and/or oscillating radial electric field plays a role in triggering and sustaining the edge transport barrier, the detailed underlying physics are yet to be unveiled. In this special topic paper, the remarkable progress achieved in recent years is reviewed for two different aspects: (i) the radial electric field driving procedure and (ii) the turbulent transport suppression mechanism. Experimental observations in different devices show possible conflicting natures for these phenomena, which cannot be resolved solely by conventional paradigms. New insights obtained by combining different model concepts successfully reconcile these conflicts. Introduction Magnetically confined plasmas, typified by tokamaks and stellarators, are non-equilibrium open systems, in which inherent sources and sinks of particles, momentum, and heat exist in the system. One of the ultimate industrial applications of high temperature magnetically confined plasmas is thermonuclear fusion energy development. To realize a sustainable nuclear reaction in a fusion plant, one has to achieve a sufficiently high plasma performance with a tolerable heat exhaust for the plasma-facing components. Increasing the heat input with the aim to raise the plasma temperature leads to confinement degradation [1] which is considered to be brought about by plasma turbulence, resulting in unacceptable heat flow to the material divertor. Meanwhile, a spontaneous transition to the suppressed turbulence state with improved plasma confinement occurs by applying intense heating power above a threshold value [2]. Plasmas before and after this confinement transition are called the L-mode state and the H-mode state, respectively, and have different physical properties. The H-mode plasma is the prototypical example of the dissipative structure spontaneously formed in the open system [3] and has an attractive nature suitable for use in controlled thermonuclear fusion reactors [4]. Therefore, unveiling the background physics of the L-H mode transition is desirable from both the academic and industrial points of view. Since the first discovery of the L-H transition [5], the underlying physics of the transition have been investigated intensively. The important role of the radial electric field bifurcation for the L-H transition has been pointed out theoretically [6,7], and the existence of the negative radial electric field structure localized at the plasma periphery was spectroscopically measured in the H-mode [8,9]. Further understanding of confinement improvement was achieved through turbulence measurement. Across the L-H transition, the turbulence fluctuation amplitude was found to be reduced, which is considered to be responsible for the suppression of turbulent transport [10][11][12][13][14][15]. A sheared radial electric field or sheared E × B flow [16] was regarded as the major factor for turbulence amplitude suppression, which stretches turbulence eddies out causing thermalization or turns the turbulence into the large scale mean flow [17][18][19]. This prevailing concept is referred to as the shear-amplitude paradigm in this paper. 
Despite the continuous effort that has steadily advanced our understanding of the background physics of the L-H transition, some open questions still remain [20]. In this paper, two of these questions are focused upon. The first open question is how the radial electric field is driven. Several theoretical concepts for the radial electric field driving mechanism at the edge region have been proposed, and experimental validation of these models has been an active research topic. Nevertheless, a comprehensive model that can cover a wide range of observations has not yet been acquired, although some plausible case studies for the validation of various model concepts have been reported in different plasma regimes or experimental devices. The second open question is how the turbulent transport is quenched by the non-uniform radial electric field structure. This transport suppression occurs in a wide radial range of the plasma, not only in the sheared radial electric field region but also in the shear-less part, i.e. at the bottom of the E_r-well structure or in the core region. Recent observations have shown that some ingredients beyond the shear-amplitude suppression paradigm are necessary to capture the whole picture of the transport suppression across the transition. In this paper, we review recent experimental observations and theoretical and numerical works to examine the two open questions raised above, aiming towards establishing a comprehensive model of the L-H transition. Accordingly, sections 2 and 3 are dedicated to topics regarding the radial electric field drive and the turbulent transport suppression, respectively. The last subsections of sections 2 and 3 discuss the issues remaining for future study. A summary of the paper is provided in section 4.

Radial electric field driving mechanism

Naturally, plasma is regarded as quasi-neutral. But in reality, there are several mechanisms through which a toroidal plasma can form a radial electric field structure by itself [21]. First, classical concepts for the possible candidates for the radial electric field excitation are presented in this section, and then key experimental implications that provide a perspective for model validation are introduced. On these bases, cutting-edge results of experimental observations and numerical simulations are reviewed. Finally, an open issue for quantitative model validation, i.e. the relative dielectric constant in toroidal plasmas involving inertia enhancement, is discussed.

Classical concepts of radial electric field driving in toroidal plasmas

In this paper, two major categories of the radial electric field driving mechanism are discussed. The first category is radial electric field excitation due to the different trajectories of ions and electrons, which induce radial charge separation. The neoclassical bulk viscosity [7,22] is one of the representative concepts in this category. The neoclassical bulk viscosity is predicted to induce an excess ion current in the radial direction (equation (1)), where γ_j = 3/2 in the plateau regime, the prime denotes the radial derivative, and D_p = (π/2)(ϵ_t q ρ_i T)/(r e B) is the characteristic diffusivity. The other quantities are as follows: e is the electron charge, n is the plasma density, T is the plasma temperature, B_θ is the poloidal magnetic field, V_|| is the parallel velocity, ϵ_t is the inverse aspect ratio, q is the safety factor, ρ_i = √(mT)/(eB) is the ion gyro-radius, and ρ_p = q ϵ_t^−1 ρ_i is the ion poloidal gyro-radius.
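To give a sense of scale for the quantities just defined, the short Python snippet below evaluates ρ_i, ρ_p, and D_p; all of the input parameters are illustrative edge-like values chosen here for the example, not numbers taken from a specific device.

    import math

    # Illustrative tokamak edge parameters (assumed for this example):
    T_eV  = 100.0        # ion temperature [eV]
    B     = 2.0          # magnetic field [T]
    q     = 3.0          # safety factor
    eps_t = 0.3          # inverse aspect ratio r/R
    r     = 0.45         # minor radius of the surface [m]
    m_i   = 1.67e-27     # proton mass [kg]
    e     = 1.602e-19    # elementary charge [C]

    T = T_eV * e                                  # temperature in joules
    rho_i = math.sqrt(m_i * T) / (e * B)          # ion gyro-radius sqrt(mT)/(eB)
    rho_p = q / eps_t * rho_i                     # ion poloidal gyro-radius
    D_p = (math.pi / 2) * eps_t * q * rho_i * T / (r * e * B)  # characteristic diffusivity

    print(f"rho_i = {rho_i*1e3:.2f} mm, rho_p = {rho_p*1e2:.2f} cm, D_p = {D_p:.3f} m^2/s")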
This ion current initiates the radial electric field excitation, and when the excited radial electric field balances with the gradient terms and the parallel velocity term, the ion current disappears. The ion orbit loss is another representative concept in this category. The normalized radial electric field X is defined as X = ρ_p eE_r/T (equation (2)), and the exponential factor exp(−X²) appearing in these currents rapidly goes to zero once the normalized radial electric field exceeds unity. Only at the edge can ions escape from the confinement magnetic field, owing to their larger radial excursion there, which excites the negative radial electric field [6,7]. The magnitude of this ion orbit loss current (equation (3)) is predicted to scale with the ion-ion collision frequency ν_ii. The outward ion current is reduced when the normalized radial electric field exceeds unity. The above two currents are functions of the kinetic profile and its gradient; therefore these radial currents can vary on the profile time scale.

The second category of radial electric field excitation is that related to the turbulence dynamics. The well known fluid Reynolds stress for the poloidal flow drive can be categorized here. The zonal flow is excited by the fluid Reynolds stress via the modulational instability process [23][24][25][26][27], which is considered to play an important role in the predator-prey dynamics [28], as will be discussed in detail below. In magnetized plasmas, the radial electric field and the poloidal flow are related through the E × B motion. The equivalent radial current can be quantified as in equation (4), where ω_ci is the ion gyro angular frequency [29]. Another possibility is the wave convection of momentum. This produces an excess electron flux at the edge, while in the core it is compensated by ions. The intuitive expression of this radial current (equation (5)) is written in terms of λ = −ρ_p n′/n, the normalized inverse density gradient length, and D_e, the typical turbulent diffusivity [6]. The radial currents related to turbulence dynamics can change quickly, on the turbulence time scale, which is one of the major differences from the radial currents in the first category.

The charge separation due to these radial currents induces the radial electric field through the current balance equation (6), where J^i_CX is the charge exchange current damping term and ϵ_⊥ is the relative dielectric constant of the plasma [21]. For quantitative model validation, the choice of model for the relative dielectric constant in toroidal plasmas is essential [30][31][32][33], as will be discussed below. An imbalance among the current terms in equation (6) in the L-mode induces radial electric field growth, which breaks the steady state condition and pushes the plasma into the H-mode. The radial currents induced by other possible mechanisms and external operations, J_other, are also worth considering. For example, an MHD mode involving magnetic island activity can create an enhanced electron loss channel by short-circuiting the nested magnetic surfaces [34,35]. As a result of this electron loss, the negative radial electric field structure in the H-mode is weakened and concomitant confinement degradation occurs. Another important factor for the current balance equation is the radial current induced by external biasing [36][37][38][39][40]. In these cases, the finite radial current applied externally is balanced by the other terms in equation (6) in the stationary state, as predicted in [41]. The E × B flow is accelerated by the J_r × B force of the bulk radial plasma current, which flows in the opposite direction to the externally applied current to satisfy the balanced current condition. (A schematic numerical illustration of the current balance equation (6) is sketched below.)
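The following Python sketch illustrates the charge balance of equation (6) with a toy net current. J_net is a schematic stand-in (a positive outward ion current for X < 1, suppressed as exp(−X²), vanishing at X ∼ 1) rather than the model expressions of [6,7], and the dielectric constant and profile parameters are assumed values.

    import numpy as np

    # Toy integration of the current balance, equation (6):
    #   eps_perp * eps0 * dE_r/dt = -(sum of radial currents).
    # J_net is NOT the model expression of [6,7]; all magnitudes are
    # illustrative, chosen only to show the approach to the X ~ 1 root.
    eps0, eps_perp = 8.854e-12, 2e4     # toroidal dielectric constant (assumed)
    rho_p, T_eV = 5e-3, 100.0           # poloidal gyro-radius [m], temperature [eV]

    def J_net(X):
        """Net outward radial current density [A/m^2] versus normalized field X."""
        return 3.0 * np.exp(-X**2) * (1.0 - X)

    scale = (rho_p / T_eV) / (eps_perp * eps0)   # converts dE_r/dt to dX/dt
    dt, nsteps = 1e-6, 5000                      # 5 ms of evolution, 1 us steps
    X = 0.05                                     # L-mode: normalized field near zero
    for _ in range(nsteps):
        X += scale * J_net(X) * dt               # a positive current deepens E_r (raises X)
    print(f"final X = {X:.3f}")                  # approaches the X ~ 1 (H-mode-like) root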
Possible bifurcation in toroidal plasmas

As discussed above, the ambipolar particle fluxes, i.e. the radial currents, are nonlinear functions of the radial electric field. By balancing them under the steady state condition ∂E_r/∂t = 0, the bifurcation conditions of the radial electric field were explored. For example, Itoh and Itoh found an E_r bifurcation by balancing the ion orbit loss flux and the wave convection flux [6]. In this case, more than two intersections of the ion orbit loss flux curve and the wave convection flux curve at different E_r values were pointed out as possible steady state conditions. Meanwhile, Shaing and Crume Jr found another type of bifurcation by comparing the ion orbit loss flux and the neoclassical bulk viscosity flux [7]. A different way to reach the H-mode, the so-called predator-prey model, was proposed by Kim and Diamond [28]. This model is not based on the condition ∂E_r/∂t = 0, but adopts a dynamic interplay among turbulence, zonal flow, and mean flow. As a result of the interplay, a consecutive sequence of L-H transitions and H-L back-transitions, the so-called limit cycle oscillation, occurs between the turbulence and the zonal flow, which gradually steepens the edge pressure gradient. Once the pressure gradient driven mean flow reaches a threshold value, the turbulence and the zonal flow are totally quenched and the H-mode transition occurs. Another type of bifurcation mechanism, based on a transport flux equation depending nonlinearly on the radial electric field shear, was proposed by Staebler and co-workers [42]. A set of system equations is closed by assuming that the radial electric field is generated by gradients, and a bifurcation is reproduced by choosing a specific range of coefficients.

Key experimental observations

In order to achieve a comprehensive understanding of the radial electric field excitation mechanisms, numerous experiments have been performed. In this subsection, some of the key experimental observations that are useful for validating theoretical models are introduced. First, let us consider the time scale of the radial electric field excitation, in which an important role of turbulence at the transition is implied. Figure 1 shows the radial profiles of the radial electric field and the pressure gradient before and immediately after the transition, measured by electrostatic probes in DIII-D [43]. (Figure 1 is reprinted from [43], with the permission of AIP Publishing.) Although the radial electric field profile shows a significant change before and after the transition, the pressure gradient profile remains almost identical. Figure 2 shows the relation between the E × B shearing rate ω_E×B = (r/q) d[(q/r)(E_r/B)]/dr and the ion temperature gradient across the L-H transition in JT-60U [44]. Around t ∼ 5.1 s, where forward and back transitions occur, the value of ω_E×B jumps without being accompanied by any change in the ion temperature gradient. These two observations clearly show that the radial electric field can change before the density profile or the temperature profile changes. The L-H transition can occur on a very short time scale of O(10 µs) to O(100 µs) [15]. This is clearly the turbulence time scale, not the profile time scale. The density profile and the temperature profile start to form a pedestal structure after the excited radial electric field suppresses the turbulent transport. Once the transition is initiated, the radial electric field profile continues to develop until it subsides through some saturation mechanism.
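The time scales above imply a sizable displacement current. As a rough Python illustration, the sketch below differentiates a synthetic E_r(t) ramp to estimate the radial current J_r = −ϵ_⊥ϵ_0 ∂E_r/∂t, the relation used in the JFT-2M analysis discussed in the following paragraphs; the ramp shape and depth and the dielectric constant are assumed values, not measured ones.

    import numpy as np

    # Estimating the radial current implied by a fast E_r ramp via
    # J_r = -eps_perp * eps0 * dE_r/dt. The sigmoid ramp and eps_perp
    # are illustrative assumptions for edge-like parameters.
    eps0 = 8.854e-12
    eps_perp = 2e4                                   # ~1 + M_tor c^2/v_A^2 (assumed)

    t = np.linspace(0.0, 2e-3, 2001)                 # 2 ms window, 1 us sampling
    E_r = -8e3 / (1.0 + np.exp(-(t - 1e-3) / 1e-4))  # E_r deepens to -8 kV/m near t = 1 ms

    J_r = -eps_perp * eps0 * np.gradient(E_r, t)     # radial current density [A/m^2]
    print(f"peak J_r = {J_r.max():.1f} A/m^2")       # a few A/m^2, cf. figure 4 below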
Next, we discuss where the radial electric field finally settles in the H-mode. Figure 3(a) shows the radial profile of the radial electric field in the H-mode in ASDEX-Upgrade [45]. The radial profile of the radial electric field is evaluated using charge exchange recombination spectroscopy through the radial force balance equation E_r = (Z_j e n_j)^−1 ∂p_j/∂r − V_θ B_ϕ + V_ϕ B_θ (equation (7)), where the subscript j denotes the ion species of interest, p is the ion pressure, Z is the ion charge, V_θ and V_ϕ are the ion poloidal and toroidal rotation velocities, and B_θ and B_ϕ are the poloidal and toroidal magnetic fields, respectively. In addition, the radial electric field profiles predicted by a theoretical model [46] and by a numerical code [47], and the pressure gradient term in equation (7), are shown. It is proven that the neoclassical theory can account for the main features of the radial electric field structure. Also in ASDEX-Upgrade, it was shown that the radial electric field value settles at a specific threshold value determined by neoclassical theory at the transition, in a wide range of plasma parameters (figure 3(b)) [48]. This observation indicates the crucial role of neoclassical theory in maintaining the radial electric field structure in the H-mode. It is generally accepted that the threshold power of the L-H transition strongly depends on the plasma density, and has a minimum value in a specific range of the density [49,50]. Nevertheless, the threshold neoclassical radial electric field shows no plasma density dependence, providing a unique physical criterion for the L-H transition condition. The input power necessary for exciting the threshold neoclassical radial electric field is therefore considered to depend strongly on the plasma density, and probably suggests different paths to achieve the threshold value below and above the density minimum. Moreover, turbulence is often totally quenched and the transport level decays down to the neoclassical level in the H-mode [51], so that the turbulence-originated part of the radial electric field cannot remain essential. There are two contradictory aspects at first glance, i.e. the important role of the turbulence in accounting for the fast time scale of the transition, and the converging feature of the radial electric field towards the neoclassical value in the steady state H-mode. These can be reconciled by considering possible combinations of multiple driving mechanisms that may reproduce the complex transition sequence. Hereafter, model validation methods taking into account different combinations of multiple concepts will be introduced from recent experimental and numerical works.

Recent model validation efforts

The first case is from the JFT-2M data analysis study [29,52,53]. The electrostatic potential and the electron density at four radial locations were simultaneously measured with a heavy ion beam probe (HIBP) [54,55]. From the obtained dataset, equation (6) was directly examined, as was first performed in the heliotron CHS [56]. Figure 4 shows the relation between the normalized radial electric field and the radial current across the L-H transition [29]. The black curve is the trajectory of the experimental observation, and the red curve is the E_r driven by the neoclassical current (equation (1)) and the ion orbit loss current (equation (3)).
The experimental value of the radial current is estimated by J_r = −ϵ_⊥ϵ_0 ∂E_r/∂t, where E_r is evaluated as the radial difference of the electrostatic potential directly measured by the HIBP. The confinement states of the plasma are indicated by the labels at the top of the figure. The transition sequence seen in the experimental trajectory is as follows: when the plasma is in the L-mode, the normalized radial electric field X = ρ_p eE_r/T is close to zero and the radial current oscillates around zero. At the transition, a positive radial current is excited, leading to charge separation, which deepens the negative radial electric field. The positive radial current peaks at 3-4 A/m² and then decreases towards zero. At X ∼ 1 and J_r ∼ 0, the plasma is considered to be in the H-mode. A few milliseconds later, another transition occurs that further enhances the negative radial electric field, but it is beyond the focus of this paper. At the first transition, the sum of the neoclassical bulk viscosity and the ion orbit loss approximately accounts for the experimental value. However, there is a significant mismatch in the L-mode, where the theoretical models overestimate the radial current by ∼5 A/m². If one admits that the model expressions of equations (1) and (3) are valid, there must be a negative radial current component that compensates the excess positive radial current. The radial current equivalent to the fluid Reynolds stress force (equation (4)) is directly estimated from the turbulence measurement and is found to play a minor role in the L-mode. As another candidate, the wave convection process (equation (5)) is examined and found to possibly account for the L-mode current balance. A scenario of the radial electric field excitation at the L-H transition is deduced as follows. In the L-mode, the radial current balance is satisfied by three components, the neoclassical bulk viscosity current J^i_BV, the ion orbit loss current J^i_LC, and the wave convection current J^e−i_WC, i.e. J^i_BV + J^i_LC + J^e−i_WC = 0. As described in [6], the magnitude of the turbulence diffusivity in equation (5) is sensitive to the radial electric field. Once the radial electric field grows above a certain value, J^e−i_WC is suppressed and the current imbalance is enhanced, which facilitates further growth of the radial electric field. The excited radial currents through the neoclassical process and the ion orbit loss process are suppressed when the normalized radial electric field X approaches unity. When the applied heating power is marginal with respect to the threshold power for the transition, the limit cycle oscillation (LCO) is frequently observed in many toroidal plasmas [57][58][59][60][61][62][63][64][65][66][67][68][69][70]. The LCO phase provides a chance to investigate the basic mechanism of the radial electric field excitation thanks to its repetitive nature, which offers multiple independent events for statistical approaches. Some of these observations are considered to be consistent with the predator-prey model [28], in which the combination of the turbulence contribution and the profile contribution to the radial electric field drive plays a role. Figure 5 shows the time evolution of the L-H transition involving the LCO in DIII-D [60]. Once the LCO is triggered, an oscillation with nearly constant frequency is observed in the D_α emission signal, which is regarded as repetitive transport barrier formation and deformation.
Panels (a) and (b) of figure 5 show the E × B velocity shearing rate ω_E×B and its pressure gradient driven part ω^dia_E×B. The LCO is driven by the radial electric field shear oscillation. In the beginning, the radial electric field shear oscillation is considered to be a turbulence driven zonal flow. Only a minor contribution of the pressure gradient driven part to the LCO is seen. However, in the later phase of the LCO period, the pressure gradient driven part gradually increases. At the transition to the H-mode, the LCO disappears and only the pressure gradient driven part remains. The turbulence driven zonal flow assists the growth of the pressure gradient driven mean radial electric field by regulating the turbulent transport in the LCO phase, as predicted in [28]. This is regarded as a synergetic relation between the zonal flow and the pressure gradient driven mean radial electric field. A similar discussion of zonal flow production was addressed for the L-H transition that involves no LCOs [50,[71][72][73][74]. It was stated that the turbulence is quenched when the zonal flow production rate exceeds the effective growth rate of the turbulence. A detailed parameter scan experiment examining the role of the Reynolds stress driven radial electric field was recently performed in DIII-D [75]. Another example of mean flow generation mediated by turbulence dynamics was reported for the LCO involving the high frequency branch of the zonal flow, the geodesic acoustic mode (GAM), in ASDEX-Upgrade [58]. Figure 6 compares the mean part of the shearing rate (τ_M^−1), the oscillating part of the shearing rate (τ_O^−1), and the turbulence decorrelation rate (τ_c^−1). In the I-phase, which refers to the time period in which the LCO emerges, the oscillating part of the shearing rate dominates over the mean part and behaves very similarly to the turbulence decorrelation rate. This means that the turbulence activity is regulated by the GAM during the I-phase. Meanwhile, the mean part of the shearing rate is gradually enhanced. Across the I-phase to H-mode transition, the mean part of the shearing rate eventually overtakes the oscillating part and limits the turbulence amplitude growth. With the assistance of the turbulence driven part of the radial electric field, which can vary on a fast time scale such as the LCO frequency, the transport barrier can be developed by the gradient driven part of the radial electric field on its own time scale. Recently, edge transport barrier formation was successfully reproduced using a first-principles-based global electrostatic gyrokinetic code, XGC1, in a realistic edge geometry of tokamak plasmas [76,77]. In this study, it is proposed that the synergism between the Reynolds force and the ion orbit loss force comprehensively explains the time evolution of the shear flow structure formation, as shown in figure 7. The edge transport barrier formation is forced to occur by applying a sufficiently high heating input. The bifurcation is initiated by the oscillating fluid Reynolds force that induces the GAM at the edge, as shown in figure 7(a). The GAM appears to mitigate the turbulent transport, which results in edge temperature growth. As a result, the ion orbit loss force gradually grows and finally overtakes the Reynolds force (figure 7(b)). In the final stage of transport barrier formation, the sheared radial electric field structure is maintained by the ion orbit loss force, which can endure even after the turbulence is quenched.
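Several of the LCO observations above are interpreted in terms of the predator-prey interplay of [28]. The Python sketch below integrates a minimal Kim-Diamond-type three-field system (turbulence level, zonal flow, pressure gradient); the coefficients and the input ramp are illustrative choices, so the output should be read only as the qualitative L-mode, limit cycle, H-mode sequence.

    import numpy as np

    # Minimal Euler integration of a Kim-Diamond-type predator-prey system
    # (turbulence level eps, zonal flow V, pressure gradient N). All
    # coefficients and the heating ramp Q(t) are illustrative, not fitted.
    a1, a2, a3 = 0.2, 0.7, 0.7      # turbulence self-, mean-shear, ZF-shear damping
    b1, b2, b3 = 1.5, 1.0, 1.0      # ZF drive, mean-shear suppression, ZF damping
    c1, c2     = 1.0, 0.5           # turbulent and neoclassical gradient relaxation

    dt, T = 1e-3, 400.0
    t = np.arange(0.0, T, dt)
    eps, V, N = 0.01, 0.01, 0.0
    hist = np.empty((t.size, 3))
    for i, ti in enumerate(t):
        Q  = 0.01 * ti              # slowly ramped heating input
        Vp = N                      # mean E x B shear tied to the gradient
        deps = eps * N - a1 * eps**2 - a2 * Vp**2 * eps - a3 * V**2 * eps
        dV   = b1 * eps * V / (1.0 + b2 * Vp**2) - b3 * V
        dN   = Q - c1 * eps * N - c2 * N
        eps, V, N = eps + deps * dt, V + dV * dt, N + dN * dt
        hist[i] = eps, V, N
    # Expected phenomenology: L-mode (finite eps, V ~ 0) -> limit cycle
    # between eps and V -> H-mode (eps quenched, N keeps growing).
    print(hist[-1])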
In the situation where the transition time scale is not very fast, it was demonstrated that the neoclassical process can solely account for the radial electric field excitation [64,78]. In ASDEX-Upgrade, the diamagnetic velocity was found to approximately match the spectroscopically measured E × B velocity oscillation in the LCO phase, as shown in figure 8 [64]. The authors claimed that the neoclassical contribution dominates over other factors. Insufficient drive of the fluid flow by the Reynolds force in the LCO phase was also pointed out in NSTX [79]. In the fluid simulation EMEDGE3D, the bulk part of the E × B flow kinetic energy was supplied by the neoclassical force (figure 9) [78]. In this case, the Reynolds force is responsible both for the high frequency fluctuation and for the sink of the macroscopic flow. According to the classification by Terry [19], these observations are grouped into the two-step transition, where the radial electric field is maintained by turbulence-force-free bifurcation mechanisms, and the turbulence suppression by the excited radial electric field structure occurs afterwards. A different aspect of the LCOs, their strong magnetic fluctuation nature, attracts much attention. A coherent poloidal field fluctuation in the LCO was reported in ASDEX-Upgrade and other devices [65][66][67]. These observations share a unique characteristic of the oscillating spatial structure, that is, the up-down asymmetric m = 1 magnetic oscillation, where m is the poloidal mode number of the oscillation. Figure 10 shows a schematic view of the poloidal magnetic probe array and the time evolution of the poloidal magnetic field fluctuation in the LCO phase. The poloidal magnetic field oscillation is regarded as a parallel current oscillation. It was proposed that the up-down asymmetry of the parallel current oscillation can be explained by the Stringer spin-up mechanism [80]. A Lotka-Volterra-type set of equations that describes the system evolution, in which the LCO arises, can be obtained from that model. The robustness of this interpretation is also shown by the fact that the experimentally measured LCO frequency agrees with the Stringer spin-up relaxation frequency. Another type of H-mode transition involving an LCO stimulated by a quasi-coherent electromagnetic oscillation was also reported in HL-2A [81]. Note that this type of coherent magnetic field oscillation is not always observed in LCO activity. For example, the case in [63] does not show a strong magnetic oscillation at the LCO frequency.

The role of the relative dielectric constant in toroidal plasmas in quantitative model validation

For the quantitative model validation based on the equation of motion or the current balance equation (6), it is essential to consider what model should be used for the relative dielectric constant in toroidal plasmas, ϵ_⊥. The magnitude of ϵ_⊥ can be derived as follows [3]. Starting from a cylindrical geometry with the confinement magnetic field B in the axial direction, the time varying radial electric field causes the polarization current J_pol = (nm/B²) ∂E_r/∂t (equation (10)). The polarization current shields the growth of the radial electric field on the one hand, and drives the E_r × B poloidal flow through the J_pol × B force on the other hand. Assuming that E_r and J_pol are constant on an equiradial surface, a combination of Gauss's law and the charge continuity equation gives ϵ_0 ∂E_r/∂t = −J_pol − ∑J (equation (11)), where ∑J corresponds to the sum of the radial currents that induce the radial charge separation, i.e.
the right hand side of equation (6) in the case of toroidal geometry. Substituting equation (10) into equation (11) gives ϵ_⊥ϵ_0 ∂E_r/∂t = −∑J (equation (12)), with the relative dielectric constant for cylindrical plasmas ϵ_⊥ = 1 + c²/v_A² (equation (13)), where c is the speed of light and v_A = B/√(nmµ_0) is the Alfvén speed. According to [30][31][32][33][82], in toroidal plasmas the effective mass of the ion fluid is enhanced by a neoclassical factor M_tor, given in the banana regime by equation (14) and in the plateau regime by M_tor = 1 + 2q² (equation (15)). Typically at the tokamak edge, M_tor ∼ O(10), owing to q ≫ 1. The relative dielectric constant of the toroidal plasma then becomes ϵ_⊥ = 1 + M_tor c²/v_A². The cause of the inertia enhancement is the finite divergence of the poloidal flow in the toroidal plasma, i.e. ∇ · V_⊥ = −2R^−1 sin θ V_θ [31]. When the flow frequency is much smaller than the GAM frequency, ω ≪ ω_GAM, this finite divergence is compensated by the toroidal return flow along the magnetic field line, V_|| = 2q cos θ V_θ. The up-down asymmetric poloidal flow divergence and the streamlines of the poloidal flow and the return flow are illustrated in figure 11(a). As a result, the poloidal momentum is transferred into toroidal momentum, which leads to the enhanced effective inertia. In the high frequency regime ω ∼ ω_GAM, the poloidal flow divergence can appear as the up-down asymmetric density perturbation of the GAM. A stationary toroidal flow that has the fundamental features of the toroidal return flow is actually observed in different devices [44,83]. The importance of the finite inertia enhancement in equation (14) or equation (15) for the relative dielectric constant in toroidal plasmas is quantitatively presented using data for the GAM dynamics [84,85] and for the LCO dynamics [62,63] in JFT-2M. Figure 11(b) shows the radial profile of the GAM excitation rate by the Reynolds stress and that directly obtained from the rise time of the GAM fluctuation energy [86]. Here the horizontal axis corresponds to the radial distance from the last closed flux surface, r − a. At r − a ∼ −3 cm, the two independently obtained excitation rates overlap, showing that the Reynolds stress plays the major role in the GAM excitation at this location. Here, the GAM driving force by the Reynolds stress is estimated from the equation of motion with no inertia enhancement, i.e. M_tor = 1, since the time scale of interest is high enough, ω ∼ ω_GAM [87]. This observation implies that the estimation of the absolute value of the Reynolds stress is valid. In the same discharge, the GAM disappears and the LCO takes over a few hundred milliseconds before the L-H transition. Note that an LCO with a small amplitude, and therefore not involving transitions to the deep H-mode, is sometimes specifically called a 'small amplitude LCO' (SALCO) [70]. The LCO activity in JFT-2M [62,63], as described in detail below, can be classified as SALCO. In the LCO phase, oscillations in the E × B velocity and the Reynolds stress at a nearly constant frequency are observed. Figure 11(c) shows the conditionally averaged waveforms of these quantities [62]. The equation of motion with the inertia enhancement factor takes the form M_tor mn ∂V_E×B/∂t = −mn r^−1 ∂(rΠ_rθ)/∂r + ···, where ··· indicates additional effects, such as the collisional damping term [21]. The expected oscillatory E × B velocity, induced by the oscillatory Reynolds stress at the angular frequency ω_LCO, is evaluated as δ|V_E×B| ∼ |Π_rθ| L^−1 M_tor^−1 ω_LCO^−1, where L is the scale length of the Reynolds stress change.
Substituting the parameters |Π_rθ|L^−1 ∼ 7 × 10⁶ m s^−2, ω_LCO ∼ 3 × 10⁴ s^−1, L ∼ 1 cm, and M_tor ∼ 20 for q ∼ 3 provides the expected modulation amplitude δ|V_E×B| ∼ 15 m s^−1. Since the modulation amplitude of the E × B velocity in the LCO is ∼500 m s^−1, the Reynolds stress driven part is only a minor contribution to the total flow modulation. Instead, if M_tor = 1 is used, the conclusion reverses completely: δ|V_E×B| ∼ 300 m s^−1 is obtained, so the Reynolds stress driven part would account for the bulk of the oscillatory E × B velocity in the LCO. This inertia enhancement effect was quantitatively assessed with a fluid-type transport code, showing the validity of the model described above [88]. Figure 12 shows the time evolution of the radial electric field driven by the radial current of the high-energy ion transport. The simulations were run both in toroidal geometry and in cylindrical geometry to examine the impact of toroidicity on the inertia enhancement effect. In the toroidal geometry, the time constant of the radial electric field variation is much larger than in the cylindrical case. This is due to the large relative dielectric constant at a finite q value of ∼1.1 at the mid-radius, where a finite inertia enhancement of 1 + 2q² ∼ 3.63 is anticipated for the toroidal geometry. The simulation shows that the time constant in the toroidal geometry is 3.59 times larger than in the cylindrical geometry, confirming the necessity of taking the neoclassical inertia enhancement factor into account in the relative dielectric constant. As discussed in [21], the poloidal flow divergence can also be compensated by the radial flow. Direct measurement of the relative dielectric constant in toroidal plasmas might be challenging, but it is highly desirable for validating the model of the inertia enhancement factor.

Discussion

In this paper, a variety of radial electric field excitation mechanisms were described, considering possible combinations of different concepts. The interpretations of these examples are fairly case dependent, and to pursue a comprehensive model the detailed comparison and unification of different case studies are essential. As a first step, the phenomenological classification of the essential mechanisms in different parameter ranges, e.g. below or above the bottom density point in the rolling-over L-H power threshold diagram or the isotope mass dependent threshold power [49,50], seems to provide a perspective. Considering the fact that the radial electric field settles on the neoclassical value [45,48], a fundamental question arises. As shown in [45], the net fluid flow, which is the sum of the E × B flow and the ion diamagnetic flow, stays at rest in the H-mode. Indeed, the E × B flow of the neoclassical radial electric field is expected to be of a similar magnitude to the ion diamagnetic flow but directed in the electron diamagnetic direction. For the turbulence suppression, equivalent but sign independent roles of the E × B flow and the ion diamagnetic flow are considered, which makes both flows effective [89]. In a different model, the impact of the electron diamagnetic effect on the turbulence linear growth rate is discussed [90]. Critical investigation of this issue through experiments or numerical simulations is anticipated in the future, for further understanding of the role of the radial electric field in turbulent transport suppression.
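To make the magnitudes in the preceding subsection concrete, the short Python check below reproduces the order-of-magnitude estimate δ|V_E×B| ∼ |Π_rθ|L^−1 M_tor^−1 ω_LCO^−1 with the parameters quoted in the text; the small differences from the quoted 15 and 300 m s^−1 come only from rounding.

    # Order-of-magnitude check of the LCO flow-modulation estimate:
    #   delta|V_ExB| ~ |Pi_rtheta| L^-1 * M_tor^-1 * omega_LCO^-1,
    # using the JFT-2M parameters quoted in the text.
    q = 3.0
    M_tor_plateau = 1.0 + 2.0 * q**2      # = 19, consistent with M_tor ~ 20
    force = 7e6                           # |Pi_rtheta| / L [m/s^2]
    omega_lco = 3e4                       # LCO angular frequency [1/s]

    for m_tor in (20.0, 1.0):             # with / without inertia enhancement
        dV = force / (m_tor * omega_lco)
        print(f"M_tor = {m_tor:4.0f}: delta|V_ExB| ~ {dV:5.0f} m/s")
    # ~12 m/s with M_tor = 20 (minor vs. the observed ~500 m/s modulation),
    # ~233 m/s with M_tor = 1 (comparable to the observation).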
Turbulence transport suppression mechanism

This section begins by discussing the limitations of the shear-amplitude suppression paradigm, as addressed in a pioneering work in DIII-D [43]. Motivated by the necessity for new ingredients, recent experimental and theoretical results focusing on the important roles of the turbulence cross phase and of the turbulence spatial redistribution induced by the radial electric field non-uniformity are then presented, to push the physical understanding beyond the paradigm.

Limitations of the shear-amplitude suppression paradigm

As the turbulent transport regulation mechanism in the H-mode, turbulence amplitude suppression by radial electric field shear is acknowledged as being of major importance [17]. However, there are some counterexamples showing the limitations of the shear-amplitude suppression paradigm. For example, figure 13(a) shows the radial profile of the relative turbulence amplitude in the Ohmic H-mode phase with respect to the Ohmic phase. It is defined as Ĩ(H)/Ĩ(OH), where Ĩ(H) and Ĩ(OH) are the turbulence amplitudes of a given quantity in the Ohmic H-mode phase and the Ohmic phase, respectively. Circles and triangles correspond to the density fluctuation and the potential fluctuation, respectively. The solid curve shows the radial electric field profile, where the red shaded area indicates the E_r-well bottom region. Figure 13(b) shows the particle flux profile. Focusing on the radial electric field shear region, i.e. both sides of the E_r-well bottom, both the density fluctuation amplitude and the potential fluctuation amplitude are significantly reduced, which results in transport suppression. However, at the E_r-well bottom, the density fluctuation amplitude reduction is only modest, and the potential fluctuation is even amplified. Even though the fluctuation amplitude reduction is not substantial, the particle flux can be reduced through cross phase alternation, since the turbulent particle flux takes the form Γ = (k_θ/B)|ñ||φ̃| γ_nϕ sin α_nϕ (equation (17)), where |ñ|, |φ̃|, γ_nϕ, α_nϕ, and k_θ denote the density fluctuation amplitude, the potential fluctuation amplitude, the cross coherence and the cross phase between them, and the poloidal wavenumber, respectively [91] (a schematic numerical decomposition of this expression is sketched below). Looking at figures 13(c) and (d), it is found that the particle flux quench after the H-mode transition in the E_r-well bottom region is attributed to the cross phase alternation. The cross phase effect appears to be more essential particularly at locations where the radial electric field shear is not remarkably strong. Experimental activities focusing on the cross phase were also reported for different toroidal plasmas [92][93][94]. In early studies, the impact of the sheared radial electric field on both the fluctuation amplitude and the cross phase was theoretically modelled [89,95], and the experimental validation of these models was performed in a basic experimental device [96,97]. In this paper, the importance of the cross phase behavior is addressed from another aspect, that is, the decoupled dynamics of the cross phase with respect to the amplitude evolution, which has lately been discussed experimentally and theoretically. In the presence of a non-uniform radial electric field structure, it is anticipated that the mutual nonlinear interaction between the radial electric field and the turbulence results in a spatial redistribution of the turbulence profile.
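As flagged above, the decomposition of equation (17) can be made concrete in a few lines of Python; the prefactor convention is schematic (conventions differ between papers), and the L-mode and H-mode numbers are illustrative, chosen only to show that a cross phase rotating towards zero can quench the flux at nearly constant amplitudes.

    import math

    # Decomposition of the turbulent particle flux, equation (17):
    #   Gamma ~ (k_theta / B) * |n| * |phi| * gamma_nphi * sin(alpha_nphi)
    # (the prefactor convention is schematic; values are illustrative).
    def particle_flux(n_amp, phi_amp, coherence, cross_phase, k_theta, B):
        return (k_theta / B) * n_amp * phi_amp * coherence * math.sin(cross_phase)

    # Illustrative L-mode vs. H-mode values at the Er-well bottom: the
    # amplitudes barely change, but the cross phase rotates toward zero.
    gL = particle_flux(1.0, 1.0, 0.8, math.radians(60), k_theta=100.0, B=1.3)
    gH = particle_flux(0.9, 1.1, 0.8, math.radians(5),  k_theta=100.0, B=1.3)
    print(f"flux ratio H/L = {gH / gL:.2f}")   # ~0.10: quenched mainly by the phase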
Figure 13. Radial profiles of (a) the radial electric field and the relative turbulence amplitude in the density fluctuation and the potential fluctuation with respect to the values in the Ohmic phase, and (b) the particle flux. Frequency decomposed spectrum of (c) the particle flux and (d) the cross phase between the density fluctuation and the potential fluctuation. Reprinted from [43], with the permission of AIP Publishing.

Since the E_r-well structure has both a shear region and a curvature region, the different roles of these structures in the turbulent transport regulation have been investigated intensively [41,98,99]. Moreover, it is predicted that radially propagating radial electric field structures, such as the zonal flows or the GAMs, can lead to the spatial transmission of a turbulence clump, as reported with the global gyrokinetic simulation code GYSELA [100]. Figure 14 shows the spatiotemporal evolution of the turbulence diffusivity in the presence of an energetic particle source. There are three different phases in the system evolution: (A) the energetic particle source is applied to a quasi-stationary turbulence regime, (B) a transport barrier is triggered at r/a > 0.5, and (C) nonlinear interaction between the energetic particle driven GAMs (EGAMs) and the turbulence occurs. In particular, in phase (C), the turbulence clump penetrates into the transport barrier region because a portion of the turbulence is trapped by the radially propagating EGAMs. Note that some previous models, including the predator-prey model, are based on spatial integration over the scale of the radial electric field structure. Therefore, neither the different roles of the shear and curvature of the radial electric field nor the turbulence trapping by the radial electric field structure can be treated in those frameworks. To overcome these limitations, a new theoretical model that is not based on spatial integration was developed recently [101,102]. In this model, the interaction between the radial electric field and the turbulence is described locally, so that the turbulence trapping can be treated. Often, experimental model validation is based on local measurements of the radial electric field and the turbulence; therefore, a spatially non-integrated model might better describe the experimental situation. In the subsection below, newly obtained understanding based on this model is discussed.

Decoupled dynamics of the cross phase from the amplitude evolution

One of the interesting results was reported from the TJ-K stellarator experiment [103]. In this device, a pair of 64-pin poloidal Langmuir probe arrays is installed in two different toroidal sections to simultaneously measure the fine structure of the zonal flow and the turbulent transport. One of the probe arrays is shown in figure 15(a). Using these probe arrays, the zonal average can be performed for the particle flux oscillation and the potential oscillation, from which the interaction between the net particle flux and the zonal potential can be discussed. Figure 15(b) shows the conditionally averaged time evolution of the global particle flux variation and the zonal potential. Before the zonal potential bursts, the turbulent particle flux increases to nonlinearly drive the zonal potential. At the moment of the zonal potential burst the particle flux is suppressed, and the state of suppressed particle flux continues for ∼0.1 ms even after the zonal potential returns to its original level.
In order to decompose the contributions of each factor in equation (17), the wavenumber spectra of the ion saturation current fluctuation, the potential fluctuation, the particle flux, and the cross phase between the density fluctuation and the poloidal electric field fluctuation are shown in figures 15(c)-(f), respectively. Note that a π/2 cross phase between the density fluctuation and the poloidal electric field fluctuation makes the particle flux zero, because the phases of the poloidal electric field and the potential differ by π/2. The cross coherence between the density fluctuation and the poloidal electric field fluctuation changes only a little and is therefore not shown here. Before the zonal potential bursts, the turbulence amplitudes in both the ion saturation current fluctuation and the potential fluctuation increase, which results in an increased particle flux. The cross phase also changes so as to enhance the particle flux. At the time instant of the zonal potential burst, the turbulence amplitudes stay at their averaged values and do not contribute to a particle flux change. Meanwhile, the cross phase approaches π/2, playing the main role in the particle flux suppression in this time period. After the zonal potential burst ceases, all components act to reduce the particle flux, keeping the particle flux level lower than the averaged value. Overall, the cross phase responds to the zonal potential variation before the turbulence amplitude does. Another example showing the decoupled dynamics of the cross phase and the turbulence amplitude was reported from the JFT-2M tokamak experiment [104]. Figure 16 shows the time evolutions of the radial electric field, the relative density fluctuation amplitude, the cross phase between the density fluctuation and the potential fluctuation, and the particle flux measured at the E_r-well bottom across the L-H transition. Here, the particle flux disappears when the cross phase between the density fluctuation and the potential fluctuation is zero. The poloidal wavenumber of the turbulence is also moderately reduced across the L-H transition, contributing a further transport reduction. In order to avoid the large uncertainty brought about by the poloidal wavenumber evolution, the poloidal wavenumber in the L-mode is used here to obtain the time evolving particle flux from equation (17). Therefore, the value in figure 16(d) corresponds to the possible upper boundary of the particle flux. Immediately after the radial electric field grows negatively, the density fluctuation amplitude responds quickly, on a time scale of O(0.1 ms). A prompt reduction of the particle flux is brought about by this amplitude reduction of the turbulent density fluctuation. However, the density fluctuation amplitude quickly recovers afterwards, and its net reduction is only moderate. Meanwhile, the cross phase between the density fluctuation and the potential fluctuation changes slowly, on a time scale of O(1 ms). In the later phase, where the density fluctuation amplitude level recovers, the particle flux remains reduced thanks to the cross phase reduction. The clear time scale separation between the amplitude suppression and the cross phase reduction implies the existence of different underlying mechanisms. In particular, the turbulence amplitude is determined by nonlinear saturation, while the cross phase is equivalent to the source of the linear instability, i.e.
the incomplete adiabatic response of electrons with respect to the potential fluctuation in the case of the resistive drift wave. Curiously, the order of the changes here is opposite to the case of TJ-K, in which the cross phase responds before the fluctuation amplitude. Further discussion of the turbulent particle flux reduction in JFT-2M concerns its spatial distribution. Figure 17 shows the radial profiles of the radial electric field, the particle flux, the relative density fluctuation amplitude, the potential fluctuation amplitude, and the cross phase between the density fluctuation and the potential fluctuation in the L-mode phase and in the H-mode phase. As shown in figure 17(b), the turbulent particle flux is reduced over a wide radial region. The different roles of the radial electric field non-uniformity are explored by dividing the entire peripheral region into sub-regions. The two regions with green shading correspond to the inner shear region (−2.2 < r − a < −1.5 cm) and the outer shear region (−0.5 < r − a < 0 cm). In addition, there is the curvature region (−1.5 < r − a < −0.5 cm), indicated by the orange shading, between these two shear regions. Moreover, a further inside region (r − a < −2.2 cm) is characterized by very low shear or curvature of the radial electric field. Across the L-H transition, the density fluctuation amplitude is substantially suppressed in both the inner shear region and the outer shear region, while the variation is only moderate in the curvature region. For the potential fluctuation amplitude, the reduction is visible only in the inner shear region; it stays approximately unchanged in the curvature region, and is even enhanced in the outer shear region. The cross phase approaches zero in the inner shear region and the curvature region, and in the outer shear region it becomes negative. Combinations of the different components account for the overall particle flux suppression observed over a wide range of the radius. Interestingly, in the further inside region, all the components behave so as to reduce the particle flux, despite the very low shear or curvature of the radial electric field there. A similar observation was reported in an early study [13]. The turbulence amplitude reduction in a region where the radial electric field structure is not substantially large can be explained by turbulence spreading theory [105][106][107]. Experimental assessments of the turbulence spreading concept were recently reported [62,108,109]. Turbulence spreading is an important concept which is expected to explain a long-standing mystery, the prompt confinement improvement over a wide region of the plasma at the L-H transition [14]. To understand the decoupled behavior of the turbulence amplitude and the cross phase, a dynamic model was developed, as reported in [110]. Starting from the trapped electron mode (TEM) turbulence model [111], an expression for the cross phase dynamics is obtained (equation (18)), where α_nϕ,k is the cross phase between the density fluctuation and the potential fluctuation, β_k = |ϕ_k|/|n_k| is the amplitude ratio of the potential fluctuation to the density fluctuation, ω_k is the fluctuation angular frequency, the subscript k indicates the wavenumber of interest, η_e = L_n/L_Te is the ratio of the density gradient length to the temperature gradient length, and ν is the de-trapping rate for trapped electrons.
The nonlinear contribution due to the E × B advection is given by equation (19). Here, time and space are normalized by ρ_s c_s^−1 and ρ_s, respectively, where ρ_s is the ion sound gyro-radius and c_s is the ion sound speed. Utilizing this model, the impact of the pump wave-zonal flow interaction on the cross phase of the pump wave is discussed. Figure 18(a) presents a diagram of the model framework. The situation considered in the model is as follows. First, a pump TEM turbulence interacts with a linearly stable seed zonal flow, which generates two sidebands. The sidebands couple with the pump TEM turbulence to nonlinearly drive the zonal flow. At the same time, the nonlinear interaction between the sidebands and the zonal flow occurs, which reacts back on the pump TEM turbulence. The LCO dynamics with and without the finite cross phase effect are plotted in figure 18(b). In the simulation results, the decoupled dynamics of the pump TEM wave amplitude and the cross phase are demonstrated as a trajectory that does not lie on a flat plane.

Turbulence spatial redistribution induced by the radial electric field non-uniformity

A model for the dynamic interaction between the radial electric field structure and the turbulence was developed [101,102] in the framework of wave kinetic theory [112]. Aiming at treating the turbulence trapped by the radial electric field structure, the phase-space dynamics of the turbulence is accounted for. The evolution of the turbulence can be described by the wave kinetic equation ∂N_k/∂t + (∂ω_k/∂k_x)(∂N_k/∂x) − (∂ω_k/∂x)(∂N_k/∂k_x) = γ_L N_k − ∆ω N_k² (equation (20)), where N_k is the dimensionless wave action density, ω_k is the turbulence angular frequency, γ_L is the linear growth rate, and ∆ω is the nonlinear decorrelation rate. Here, time and space are normalized by ρ_s V_d^−1 and ρ_s, respectively, where V_d is the diamagnetic drift velocity. In particular, for drift wave turbulence, N_k and ω_k are given as N_k = (1 + k_x² + k_y²)²|ϕ_k|² (equation (21)) and ω_k = k_y(1 + k_x² + k_y²)^−1 + k_y V̂_y (equation (22)), respectively, where k_x is the radial wavenumber, k_y is the poloidal wavenumber, ϕ_k is the normalized turbulence electrostatic potential, and V̂_y is the poloidal velocity modulation in a macro- or meso-scopic E × B flow structure. In the nonlinear saturation phase of the turbulence, i.e. γ_L N_k − ∆ω N_k² = 0, the equi-frequency plane in the phase-space corresponds to the constant of motion. In the presence of the E × B flow structure, the turbulence frequency contour in the phase-space is distorted by the Doppler shift, and turbulence trapping then occurs. Two applications of the wave kinetic theory, to the interaction between the turbulence and the EGAM [101] and to that between the turbulence and the turbulence driven GAM [102], are demonstrated in the rest of this subsection. A schematic view of the former case is given in figure 19(a). The turbulence is set to be unstable only on the left side of the simulation region, while a stationary mean flow structure that stabilizes the turbulence is placed in the rest of the region. An EGAM propagating radially outward is applied externally. Note that the energy exchange between the EGAM and the turbulence need not be considered, since the EGAM is not excited by the turbulence, which simplifies the situation. The result of the numerical examination of the model is shown in figure 19(c). Due to the existence of the EGAM flow structure, the Doppler shifted turbulence frequency (white contour lines) has island structures in the phase-space, inside which the turbulence is trapped.
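A minimal Python illustration of the trapping mechanism follows, tracing two rays of the wave kinetic equation, dx/dt = ∂ω_k/∂k_x and dk_x/dt = −∂ω_k/∂x, through a sinusoidal flow modulation V̂_y(x). The static flow, its amplitude and wavelength, and the launch points are illustrative simplifications of the propagating EGAM/GAM structures discussed in the text.

    import numpy as np

    # Ray-tracing sketch of turbulence trapping in (x, k_x) phase space,
    # using the Doppler-shifted drift-wave frequency of equation (22)
    # (normalized units: lengths in rho_s, V_d = 1). The static sinusoidal
    # flow and all parameters are illustrative assumptions.
    k_y, V0, lam = 0.5, 0.08, 40.0

    def dVy(x):
        return V0 * (2 * np.pi / lam) * np.cos(2 * np.pi * x / lam)

    def rhs(x, kx):
        denom = 1.0 + kx**2 + k_y**2
        dx_dt  = -2.0 * k_y * kx / denom**2   # group velocity d(omega)/dk_x
        dkx_dt = -k_y * dVy(x)                # refraction: -d(omega)/dx
        return dx_dt, dkx_dt

    # Two rays: one launched at a flow crest with small k_x (trapped),
    # one with large k_x (passing). RK2 midpoint scheme for the trace.
    dt, nsteps = 0.05, 20000
    for x, kx, label in [(10.0, 0.05, "trapped"), (10.0, 1.0, "passing")]:
        x_min, x_max = x, x
        for _ in range(nsteps):
            dx1, dk1 = rhs(x, kx)
            dx2, dk2 = rhs(x + 0.5 * dt * dx1, kx + 0.5 * dt * dk1)
            x, kx = x + dt * dx2, kx + dt * dk2
            x_min, x_max = min(x_min, x), max(x_max, x)
        print(f"{label}: radial excursion x in [{x_min:.1f}, {x_max:.1f}]")

The trapped ray stays within about one gyro-radius scale of the flow crest, while the passing ray drifts freely across the domain, mirroring the bounded island contours versus open contours in figure 19(c).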
Since the EGAM propagates radially outward, the trapped turbulence penetrates into the linearly stable region, x > 20. This concept of turbulence trapping by the EGAM structure can give an explanation for the observation in the GYSELA simulation shown in phase (C) of figure 14. This result suggests that an attenuation of the transport barrier by the interaction between a radially propagating E × B flow structure and the turbulence is possible, depending on the propagation direction.

Figure 18. (a) Schematic diagram of the parametric interaction between the pump TEM turbulence, the zonal flow, and two sidebands, and (b) limit cycle oscillation between the pump wave amplitude, the zonal flow amplitude, and the cross phase. The blue and black curves show the models with and without the phase mismatch effect, respectively. Reproduced from [110]. © IOP Publishing Ltd. All rights reserved.

A similar numerical investigation is performed for the turbulence driven GAM as well. In this case, the GAM excitation by the turbulence and the phase-space interaction between the GAM and the turbulence are considered simultaneously. The energy equations of the GAM and the turbulence are given as equations (23) and (24), where µ_G is the viscosity for the GAM, v_g ≡ ∂ω_k/∂k_x is the turbulence group velocity, and ⟨*⟩_k ≡ ∫ *(1 + k_x² + k_y²)^−1 dk_x is the wavenumber integration. The GAM energy gain W_G and the turbulence energy loss W_turb are written in terms of the fluid Reynolds stress Π_xy. The simulation result in the GAM saturation phase is shown in figure 20, where a periodic boundary condition in the x direction is used. As shown in figure 20(a), the turbulence clump is trapped by the GAM at the location where the E × B flow is directed in the electron diamagnetic drift direction. The energy exchange between the GAM and the turbulence is presented in figure 20(b). The GAM gains energy where the curvature of the GAM flow is strong, while the turbulence loses energy where the E × B shear of the GAM is strong. Although the spatial distributions of W_G and −W_turb differ from each other, the spatially integrated energy budget balances between the GAM and the turbulence, i.e. ∫W_G dx = −∫W_turb dx. The turbulence propagation rate is represented by the second term on the left hand side of equation (24), whose spatial distribution is shown in figure 20(c). Since the turbulence propagation rate and the energy exchange terms are of the same order of magnitude, all of these terms are essential for predicting the turbulence spatial redistribution induced by the E × B flow structure. Note that the spatial integration of equations (23) and (24) recovers the well-known predator-prey model [28], in which the turbulence propagation rate term disappears. The turbulence trapping by the radial electric field structure is an explanation for the enhancement, or only moderate reduction, of the turbulence amplitude in the curvature region [43,104,113]. A similar approach using a different model was reported in [114]. Here, the Hasegawa-Wakatani fluid model [115] is used to describe the spatial redistribution of the drift wave turbulence in the presence of an enforced sinusoidal zonal flow structure. As shown in figure 21(a), when the imposed zonal flow amplitude is large enough, the turbulence is localized where the zonal flow curvature is most negative, both in the linear growth phase and in the nonlinear saturation phase of the turbulence. This result agrees with that obtained with the wave kinetic theory discussed above [101,102].
Around the location where the zonal flow curvature is positively maximum, the cross phase between the density fluctuation and the potential fluctuation is negatively enhanced, which stabilizes the drift wave and simultaneously drives a negative particle flux. Different roles of the shear and the curvature of the radial electric field in the turbulence amplitude redistribution and in the cross phase modification are thus found. Further investigation of the turbulent transport quench at the edge transport barrier region is an interesting direction for interpreting the detailed observation of the radial electric field and the particle flux in JFT-2M (figure 17) [104]. Discussion The contrasting results from TJ-K [103] and JFT-2M [104] regarding the dynamics of the turbulence amplitude and the cross phase are an interesting topic for discussion. Recent numerical simulation works showed that the cross phase modification can occur due to the electromagnetic effect at a tokamak edge, which is a possible mechanism that can dynamically vary the cross phase. Even without a large β ≡ p/(B²/(2μ₀)), the electromagnetic turbulence can play a role because of a large q or a large inverse scale length 1/L⊥ at the tokamak edge, since the ratio of the electromagnetic part to the electrostatic part of the parallel electric field fluctuation is approximated as (qR/L⊥)² β [116]. It is shown that as the electromagnetic effect becomes significant, the cross phase is gradually altered [117, 118]. Immediately after the L-H transition, the pedestal structure is quickly formed and the importance of the electromagnetic turbulence component can accordingly rise through an increase of 1/L⊥. The superposition of the electrostatic fluctuation and the electromagnetic fluctuation can cause a complex time evolution of the turbulence amplitude and the cross phase that strongly depends on the edge plasma parameters. The impact of the electromagnetic fluctuation on the turbulence driven E × B flow is also discussed [119]. Recently, the parallel flow shear driven instability was investigated theoretically [120] as a candidate for the inward particle pinch mechanism in the type-III ELM dynamics [121]. Since the free energy of the parallel flow shear driven turbulence is the toroidal return flow shear in the H-mode, discussed in subsection 2.5, the up-gradient particle flux can be driven without violating the law of entropy increase [122]. Intensive assessments of the elementary process of the parallel flow shear driven instability were conducted in a basic linear plasma device [123, 124]. This idea can be applied to consider the different time scales of the turbulence amplitude reduction and the cross phase alteration. Once the radial electric field excited at the L-H transition secondarily stimulates the parallel flow shear driven instability through the toroidal return flow, competition against the original instability is thought to occur. As a result, the cross phase is altered on a delayed time scale compared to the prompt turbulence amplitude suppression. A detailed practical assessment is left for future research. Summary In this special topic paper, recent progress on the role of the mean and oscillating radial electric field in the L-H transition was reviewed. It was shown that the radial electric field driving process has apparently conflicting characteristics: the quick evolution on the turbulence time scale and the eventual settling on the diamagnetic value.
These points were reconciled by combining different models of the radial electric field excitation, one based on the turbulence driven component and one on the profile driven component. Another point of focus was the turbulent transport suppression mechanism. It was pointed out that the turbulent particle flux behavior in the presence of the well-shaped radial electric field structure cannot be fully captured by the shear-amplitude suppression paradigm. Two additional concepts were proposed, i.e. the cross phase alteration and the turbulence redistribution induced by the interaction with the radial electric field structure. In particular, the different time scales seen in the turbulence amplitude suppression and in the cross phase alteration, which affect the particle flux behavior across the L-H transition, were presented. A new modeling activity based on the wave kinetic framework, which can treat the spatially resolved turbulence-radial electric field interaction, was introduced as a key concept for investigating the spatial redistribution of the turbulence in the H-mode.
13,419
2020-02-26T00:00:00.000
[ "Physics" ]
Self-Assembly of Free-Standing LiMn2O4-Graphene Flexible Film for High-Performance Rechargeable Hybrid Aqueous Battery A novel LiMn2O4-graphene flexible film is successfully prepared by a facile vacuum filtration technique. LiMn2O4 nanowires with diameters of 50-100 nm are distributed homogeneously on the graphene sheet matrix. Used as the cathode in rechargeable hybrid aqueous batteries, the LiMn2O4-graphene film exhibits enhanced electrochemical performance in comparison to LiMn2O4-graphene powder. The LiMn2O4-graphene film shows a stable 13.0 mAh g−1 discharge capacity after 200 cycles at 1.0 C, benefitting from the presence of graphene with strong conductivity and large pore area in this free-standing film. This synthetic strategy for a free-standing film can provide a new avenue for other flexible materials and binder-free electrodes. Introduction The new generation of electronic equipment, such as light wearable electronic devices and electric vehicles with high energy density batteries, is accelerating the development of rechargeable batteries [1,2]. The traditional rechargeable lithium-ion batteries with organic electrolyte are facing an ever fiercer challenge due to their high cost and low safety [3,4]. Recently, aqueous rechargeable lithium batteries have attracted increasing attention for large-scale energy storage systems due to their lower toxicity, lower cost and better safety, thanks to water solutions instead of organic electrolytes [5]. Among these, the rechargeable hybrid aqueous battery (ReHAB) has been attracting increasing attention [6]. The ReHAB is composed of a zinc metal anode and a traditional cathode (such as LiFePO4 and LiMn2O4), in which the Zn anode undergoes the reversible redox reaction, while the LiFePO4 or LiMn2O4 cathode, for instance, undergoes lithium intercalation/de-intercalation. The electrochemical reactions can be written as follows. However, the electrochemical properties of pristine LiFePO4 or LiMn2O4 cathodes often deteriorate drastically with increasing charge-discharge rates. To overcome these issues, one of the effective strategies is anchoring the active LiFePO4 or LiMn2O4 particles into various porous carbon matrixes to enhance the electrical conductivity and suppress the volume change during cycling [10-12]. Among various carbon-based materials, graphene or reduced graphene oxide is intensively investigated, due to its large specific surface area, extraordinary electronic transport property and high electrochemical stability.
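The electrode reactions referred to in the introduction did not survive extraction; as a hedged reconstruction, the standard half-reactions of a Zn || LiMn2O4 hybrid aqueous cell take the form shown below. The paper's own equation numbering and exact notation may differ.

```latex
% Cathode: lithium de-/intercalation in spinel LiMn2O4
\mathrm{LiMn_2O_4} \;\rightleftharpoons\; \mathrm{Li_{1-x}Mn_2O_4} + x\,\mathrm{Li^+} + x\,e^-
% Anode: zinc plating/stripping
\mathrm{Zn^{2+}} + 2\,e^- \;\rightleftharpoons\; \mathrm{Zn}
```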
Some work has discussed the positive effect of graphene on the electrochemical properties of LiFePO4 or LiMn2O4 composites [12-14]. It is worth noting that most of this work is focused on powder graphene composites and their application in lithium rechargeable batteries with organic electrolyte. Recently, research into flexible films as binder-free electrodes for rechargeable batteries has developed rapidly to power new applications, such as light and soft wearable electronic devices [15,16]. Compared to carbon nanotubes and other carbon-based materials, it is most convenient to make graphene into a flexible film because of its layered structure [17]. However, few works have discussed graphene-based flexible film electrodes for rechargeable hybrid aqueous batteries. Herein, a free-standing LiMn2O4-graphene flexible film is designed and prepared by a facile vacuum filtration method, and its electrochemical performance is investigated for the first time in a ReHAB. Compared to the LiMn2O4-graphene powder prepared by simple physical mixing and the slurry casting technique, the LiMn2O4-graphene film exhibits a remarkably stable cycling ability and enhanced rate performance. Materials Preparation Graphene oxide (GO) was synthesized from natural flake graphite according to previous work [18]. LiMn2O4 powder was prepared by the following process. 0.095 g KMnO4 was dissolved in 25 mL deionized (DI) water and underwent ultrasonic radiation for 0.5 h to form the first solution. The second solution was prepared by dissolving 0.220 g Mn(CH3COO)2·4H2O in 25 mL DI water. The mixture of the two solutions was transferred and sealed in a 60 mL Teflon autoclave for 20 h at 160 °C. After the hydrothermal reaction, black MnO2 powder was collected, followed by centrifugation, washing with ethanol and DI water, and drying in vacuum. The prepared MnO2 and LiOH·H2O powders with a molar ratio of 2:1 were ground with ethanol for 1 h. After solid-state reaction in air at 400 °C for 8 h and 750 °C for 10 h, the expected LiMn2O4 powder (LMO) was obtained. The flexible LiMn2O4-graphene film was manufactured by an ordinary vacuum filtration method followed by a thermal reduction process. Typically, 30 mg GO was dispersed in 10 mL deionized water and underwent ultrasonic treatment for 2 h to form a uniform brown GO suspension. 20 mg LiMn2O4 powder was then dispersed into the GO suspension and subjected to ultrasonic treatment for 2 h. The suspension was then filtered through a filter membrane under vacuum. The flexible film was peeled off carefully from the membrane after washing, drying and immersing in acetone for 15 min. After being heat treated in air at 220 °C for 2 h to reduce GO into graphene (GN), the final flexible LiMn2O4-graphene film was obtained, labeled as LMO/GN-F. The preparation procedure and photos of the flexible film are shown in Figure 1. For comparison, the LiMn2O4-graphene powder (LMO/GN-P) was prepared by simple physical ball-milling. Material Characterization The crystalline phases of the as-prepared samples were determined by X-ray powder diffraction (XRD, D8 ADVANCE, Bruker, Billerica, MA, USA) equipped with Cu Kα radiation (λ = 0.15418 nm) at a scanning rate of 0.02° s−1 over 10-70°. The content of LiMn2O4 in the LiMn2O4-graphene powder and the LiMn2O4-graphene film was confirmed by a thermoanalyzer (DSC-TGA; SDT Q600, TA Company, Boston, MA, USA) under air flow from room temperature to 600 °C at 10 °C min−1.
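As a quick sanity check on the precursor amounts quoted above (assuming the "0.220" value is a mass in grams, as for the KMnO4; the paper omits the unit), the molar ratio works out to roughly 2:3, consistent with the standard MnO4−/Mn2+ comproportionation route to MnO2. This is an illustrative calculation, not something stated in the paper.

```python
# Illustrative check of the MnO2 precursor stoichiometry, under the assumed reaction
# 2 MnO4- + 3 Mn2+ + 2 H2O -> 5 MnO2 + 4 H+ (comproportionation).
M_KMNO4 = 158.03             # g/mol
M_MN_ACETATE_4H2O = 245.09   # g/mol, Mn(CH3COO)2.4H2O

n_permanganate = 0.095 / M_KMNO4         # mol of MnO4-
n_mn2plus = 0.220 / M_MN_ACETATE_4H2O    # mol of Mn2+ (assuming 0.220 g)

print(f"MnO4-: {n_permanganate*1e3:.2f} mmol, Mn2+: {n_mn2plus*1e3:.2f} mmol")
print(f"ratio MnO4- : Mn2+ = 1 : {n_mn2plus/n_permanganate:.2f} (comproportionation expects 1 : 1.5)")
```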
To determine the pore volumes and specific surface areas of the prepared films, Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) analyses were carried out using nitrogen adsorption. The surface morphologies of the samples were examined by field emission scanning electron microscopy (SEM, Quanta FEG-400) and transmission electron microscopy (TEM, FEI-Tecnai G2-F20 S-TWIN) techniques. Electrochemical Measurements The free-standing LMO/GN-F electrodes were prepared by cutting the flexible LMO/GN film into 10 mm circles directly. The compared LMO/GN-P electrodes were prepared by brushing an n-methyl-2-pyrrolidinone slurry containing LMO/GN powder, acetylene black, and polyvinylidene fluoride (80:10:10 wt %) onto 10 mm stainless steel foil, followed by vacuum drying at 110 °C overnight. The stainless steel foil was pressed at 10 MPa to achieve superior contact between the active material and the current collector. The CR2025 coin cells were assembled using Zn metal as the anode, 0.5 M LiCH3COO and 0.5 M Zn(CH3COO)2 aqueous solution as the electrolyte, wet absorbent glass mat as the separator, and the LMO/GN-F or LMO/GN-P electrode as the cathode. Cyclic voltammetry (CV) was carried out on a CHI 660D electrochemical workstation at a scan rate of 0.15 mV s−1 in the potential range of 1.35-2.15 V vs. Zn/Zn2+. The galvanostatic charge-discharge tests were performed on a LAND battery program-controlled tester in a cut-off potential window of 1.45-2.05 V. Electrochemical impedance spectroscopy (EIS) was performed using the CHI 660D electrochemical workstation with a frequency range from 0.01 to 100 kHz. Results and Discussion Figure 2a shows the X-ray diffraction (XRD) patterns of the as-prepared samples. The XRD pattern of GO displays only one obvious peak centered at around 11.5°, which can be attributed to the (002) reflection of graphene oxide. The XRD pattern of GN displays one obvious peak centered at around 25.6° and one weak peak at around 43.6°, which can be attributed to the (002) and (100) reflections of graphene, respectively [19]. The XRD pattern of LMO used in our experiment shows the typical reflection pattern of cubic spinel LiMn2O4 with a space group of Fd3m [20]. The XRD pattern of LMO/GN-F exhibits the characteristic features of spinel LiMn2O4 and a broad typical peak of graphene at around 25.6°, indicating that there is no phase transformation of LMO in the LMO/GN film. No detectable peak at around 11.5° is observed, indicating that graphene oxide is reduced completely to graphene during the experimental process. The XRD pattern of LMO/GN-P exhibits a very similar shape to that of LMO/GN-F, indicating that the LiMn2O4-graphene composite can also be made successfully by the physical ball-milling technique. The LMO contents in the LMO/GN film and the as-prepared LMO/GN powder are estimated by TGA under air atmosphere with a heating rate of 10 °C min−1. Both the LMO/GN-F and LMO/GN-P TGA curves in Figure 2b show only one drastic weight loss, from around 300 °C to 450 °C. From 450 °C up to 600 °C, the two TGA curves remain approximately unchanged. The results indicate that the carbon component in LMO/GN-F and LMO/GN-P is completely burned in the air flow [21,22]. The unchanged curves from 450 °C to 600 °C also show that the LMO in the two samples undergoes no phase transformation during the heat treatment. The LMO/GN-F and LMO/GN-P exhibit around 44.2% and 48.5% LMO content, respectively.
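Under the stated picture (carbon fully burned off, LMO itself thermally stable), the LMO weight fraction is simply the residual mass fraction of the TGA run. The sketch below is illustrative only, with the residual masses chosen as hypothetical inputs that reproduce the quoted 44.2% and 48.5% values.

```python
# Illustrative estimate of LMO content from TGA residual mass, assuming the graphene
# fraction burns off completely and the LMO mass is unchanged by the heat treatment.
def lmo_content(initial_mass_mg: float, residual_mass_mg: float) -> float:
    """Return the LMO weight fraction in percent."""
    return 100.0 * residual_mass_mg / initial_mass_mg

# Hypothetical sample masses chosen to reproduce the reported contents.
print(f"LMO/GN-F: {lmo_content(10.0, 4.42):.1f}% LMO")  # ~44.2%
print(f"LMO/GN-P: {lmo_content(10.0, 4.85):.1f}% LMO")  # ~48.5%
```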
The similar amounts of LMO in the two samples eliminate the influence of content differences on the electrochemical performance. The morphologies of the synthesized LMO, GN film and LMO/GN film are examined by SEM (Figure 3). The LMO nanowires are 5-10 µm in length (Figure 3a). There is slight agglomeration among the LMO nanowires. The pristine GN film has a curved and wrinkled surface morphology (Figure 3b). The LMO/GN film has a similarly wrinkled surface to the GN film, except that LMO nanowires are homogeneously distributed on the surface (Figure 3c). The thickness of the LMO/GN film is about 20 µm (Figure 3d). In order to determine the specific surface area and pore volumes of the GN film and LMO/GN film, the N2 adsorption and desorption isotherms are measured and shown in Figure 5. The BET specific surface area of the LMO/GN film is calculated to be 60.4 cm2 g−1, which is obviously higher than that of the pristine GN film (39.6 cm2 g−1). This result can be attributed to the presence of LMO nanowires on or in the surface of the GN support. The LMO nanowires intercalating into the GN nanosheets may not only result in more pores, but also prevent the aggregation of the GN nanosheets. In the meantime, the adsorption of the GN nanosheets can prevent the aggregation of the LMO nanowires. Therefore, the total pore volume of the LMO/GN film is 0.30 cm3 g−1, which is higher than that of the pristine GN film (0.21 cm3 g−1). Combining this result with the TEM measurements, the wrinkled surface morphology, large pore volume and specific surface area of the LMO/GN film can permit easy access of electrons and ions to the electrode/electrolyte interface and accommodate the volume change of the LMO nanowires during the charge and discharge process [23].
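For readers unfamiliar with how a BET specific surface area is extracted from an N2 isotherm, the sketch below fits the linearized BET equation to hypothetical low-pressure adsorption points. The numbers are placeholders for illustration, not the measured isotherm of the films.

```python
# Minimal BET fit sketch (linearized form), using hypothetical isotherm points.
import numpy as np

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # p/p0 (hypothetical)
v_ads = np.array([6.0, 7.2, 8.1, 8.9, 9.7, 10.5])         # adsorbed volume, cm3(STP)/g (hypothetical)

y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))         # BET ordinate 1/[v((p0/p) - 1)]
slope, intercept = np.polyfit(p_rel, y, 1)       # linear fit over the 0.05-0.30 p/p0 range
v_m = 1.0 / (slope + intercept)                  # monolayer capacity, cm3(STP)/g

N_A, sigma_n2, v_molar = 6.022e23, 0.162e-18, 22414.0   # Avogadro, N2 cross-section (m^2), molar volume (cm3)
s_bet = v_m * N_A * sigma_n2 / v_molar           # specific surface area per gram
print(f"monolayer capacity ~ {v_m:.2f} cm3/g, BET area ~ {s_bet:.1f} m^2/g")
```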
The galvanostatic charge-discharge and CV tests are key and challenging aspects for ReHAB applications. To examine the electrochemical performance of the LMO/GN-F electrode sufficiently, the LMO/GN-F cathode for the ReHAB is characterized by CV and galvanostatic charge-discharge tests, as shown in Figure 6. All the CV curves over three cycles exhibit two obvious pairs of oxidation and reduction peaks between 1.6 and 2.0 V vs. Zn/Zn2+, corresponding to the two-step lithium de/intercalation of the LMO/GN-F electrode (Figure 6a). In detail, the oxidation peak at 1.81 V corresponds to the deintercalation of lithium ions from the spinel LiMn2O4 structure until half of the 8a sites are empty in LixMn2O4 (0.5 ≤ x ≤ 1). The subsequent oxidation peak at 1.94 V corresponds to the continued deintercalation of lithium ions until all of the 8a sites are empty. At this point, LiMn2O4 is fully oxidized to λ-MnO2. The anodic peak at 2.15 V vs. Zn/Zn2+ is assigned to O2 evolution due to water decomposition [6]. The reduction peak at 1.85 V is assigned to the intercalation of lithium ions into each available tetrahedral site (8a) in λ-MnO2, until half of the sites are filled in LixMn2O4 (0 < x < 0.5). The other reduction peak at 1.72 V is associated with lithium ions filling the remaining empty 8a sites to form LixMn2O4 (0.5 ≤ x ≤ 1) [24,25]. As clearly shown in the inset of Figure 6a, the peak intensity weakens slightly as the CV cycle number increases, which could be due to a small attenuation of the electrochemical activity. Figure 6b illustrates the first three charge-discharge profiles of the LMO/GN-F electrode at 0.5 C. According to the CV data, the potential of the charge-discharge process is restricted to 1.45-2.05 V vs. Zn/Zn2+ to avoid water decomposition. As shown in Figure 6b, all three curves have two well-defined plateaus at about 1.76 V and 1.92 V vs. Zn/Zn2+ in the charge and discharge profiles, corresponding to the two-step lithium de/intercalation mechanism of the LMO/GN-F electrode, which is confirmed by the CV data.
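The two-step mechanism described above can be summarized by the following half-reactions for the spinel cathode; this is a hedged paraphrase of the standard picture implied by the text, not equations reproduced from the paper.

```latex
% Step 1: half of the 8a sites are emptied
\mathrm{LiMn_2O_4} \;\rightleftharpoons\; \mathrm{Li_{0.5}Mn_2O_4} + 0.5\,\mathrm{Li^+} + 0.5\,e^-
% Step 2: the remaining 8a sites are emptied, giving lambda-MnO2
\mathrm{Li_{0.5}Mn_2O_4} \;\rightleftharpoons\; \lambda\text{-}\mathrm{MnO_2} + 0.5\,\mathrm{Li^+} + 0.5\,e^-
```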
To further examine the effect of GN in improving the electrochemical performance, the electrodes containing LMO/GN-P or LMO/GN-F as cathodes for ReHABs are tested for C-rate and cycling ability for comparison (Figure 7). As shown in Figure 7a, as the C-rate increases stepwise, the specific discharge capacities of both the LMO/GN-P and LMO/GN-F electrodes decrease obviously, which is due to the diffusion-controlled kinetics of the lithium de/intercalation reactions [26]. Compared to LMO/GN-P, the rate performance of LMO/GN-F is significantly improved (Figure 7a). The corresponding capacity retentions are 94.8% and 76.7%, respectively. The coulombic efficiencies of both electrodes reach 99% after a few activation cycles. All the data indicate that the LiMn2O4-graphene film exhibits a remarkably stable cycling ability and enhanced rate performance. Benefitting from the vacuum filtration method, the LMO nanowires and graphene sheets in LMO/GN-F contact each other more closely than in LMO/GN-P prepared by simple physical mixing. The improvement of electrochemical performance is mainly attributed to the positive effects of graphene in enhancing the electronic conductivity [27], decreasing the agglomeration of the LMO nanowires and accommodating the volume expansion of LMO during charge-discharge cycles. The positive influence of the LMO/GN-F electrode on the charge transfer behavior and conductivity of the system can be proved by the EIS measurements (Figure 8). As shown in Figure 8, the impedance plots of the LMO/GN-P and LMO/GN-F electrodes have a similar shape, composed of a semicircle in the high-to-medium frequency region and a straight line in the low frequency region.
The slope angle of the straight line in the low-frequency region of the LMO/GN-F electrode is larger than that of the LMO/GN-P electrode, demonstrating that the LMO/GN-F electrode has a smaller Warburg impedance (W), which represents the solid-state diffusion of Li+ within the electrode [28,29]. The diameter of the semicircle in the high-to-medium-frequency region for the LMO/GN-F electrode is about 15 Ω, which is significantly smaller than that of the LMO/GN-P electrode (about 25 Ω), indicating that the LMO/GN-F electrode has a lower charge-transfer resistance (Rct) at the electrode/electrolyte interface. The equivalent circuits are shown in the inset of Figure 8. In addition to W and Rct, discussed above, RΩ is the ohmic resistance representing the total resistance of the electrolyte, separator, and electrical contacts, and CPE is the constant phase-angle element, representing the double layer capacitance of the active materials. The enhancement of charge transfer and Li+ diffusion, in combination with the lower aggregation of the LiMn2O4 nanowires and the better handling of volume change, could lead to the superior electrochemical performance of the LMO/GN flexible film.
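To make the equivalent-circuit reading of Figure 8 concrete, the sketch below evaluates the impedance of a Randles-type arrangement RΩ + CPE ∥ (Rct + W). The circuit topology and parameter values here are assumptions for illustration and are not fitted to the measured spectra; only the approximate 15 Ω and 25 Ω charge-transfer resistances are taken from the text.

```python
# Illustrative Randles-type impedance: R_ohm in series with [CPE parallel to (Rct + Warburg)].
import numpy as np

def z_randles(freq_hz, r_ohm, r_ct, q_cpe, n_cpe, sigma_w):
    """Complex impedance of R_ohm + [CPE || (R_ct + W)] at the given frequencies."""
    w = 2 * np.pi * freq_hz
    z_cpe = 1.0 / (q_cpe * (1j * w) ** n_cpe)      # constant phase element
    z_w = sigma_w * (1 - 1j) / np.sqrt(w)           # semi-infinite Warburg element
    z_branch = r_ct + z_w
    return r_ohm + z_cpe * z_branch / (z_cpe + z_branch)

freqs = np.logspace(-2, 5, 200)                     # 0.01 Hz to 100 kHz
z_film = z_randles(freqs, r_ohm=2.0, r_ct=15.0, q_cpe=1e-4, n_cpe=0.9, sigma_w=5.0)
z_powder = z_randles(freqs, r_ohm=2.0, r_ct=25.0, q_cpe=1e-4, n_cpe=0.9, sigma_w=12.0)
print("film   Z(0.01 Hz) =", np.round(z_film[0], 1))
print("powder Z(0.01 Hz) =", np.round(z_powder[0], 1))
```

The smaller Rct and smaller Warburg coefficient assumed for the film shrink the semicircle and steepen the low-frequency tail, which is the qualitative behavior the text attributes to LMO/GN-F.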
Conclusions A flexible LMO/GN hybrid film was successfully prepared through a facile vacuum filtration and reduction process. Compared to the LMO/GN powder prepared by physical mixing, the designed LMO/GN film exhibits significantly enhanced electrochemical performance as a cathode in ReHABs. Benefiting from the wrinkled surface and the relatively large pore volume and specific surface area, the LMO/GN film could deliver reversible discharge capacities of 122.
6,253.8
2018-06-21T00:00:00.000
[ "Materials Science" ]
Is Word Order Asymmetry Mathematically Expressible ? The computational procedure for human natural language (CHL) shows an asymmetry in unmarked orders for S, O, and V. Following Lyle Jenkins, it is speculated that the asymmetry is expressible as a group-theoretical factor (included in Chomsky’s third factor): “[W]ord order types would be the (asymmetric) stable solutions of the symmetric still-to-be-discovered ‘equations’ governing word order distribution”. A possible “symmetric equation” is a linear transformation f(x) = y, where function f is a set of merge operations (transformations) expressed as a set of symmetric transformations of an equilateral triangle, x is the universal base vP input expressed as the identity triangle, and y is a mapped output tree expressed as an output triangle that preserves symmetry. Although the symmetric group S3 of order 3! = 6 is too simple, this very simplicity is the reason that in the present work cost differences are considered among the six symmetric operations of S3. This article attempts to pose a set of feasible questions for future research. Problem I would like to pose the question of whether the following phenomenon can be mathematically (Galois theoretically) expressed. 1I am grateful to the editors and anonymous reviewers for their patience in assessing this challenging article over the past two years.I would like to thank Makoto Toma for his valuable comments and suggestions.Without his constructive criticism regarding my amateurish mathematics, I could not have finished this.I thank Massimo Piattelli-Palmarini for allowing me to join his class on biolinguistics at MIT in 2003, which marked the beginning of this project.I am grateful to Lyle Jenkins for the insightful lecture on human language and Galois theory in Massimo's class and for taking the time to listen to my idea in a campus café.Finally, I would like to thank Enago for editing and proofreading my work, which clarified the reasoning that I wished to express.All remaining errors are my own. 1 The author does not claim that the geometrical cost calculation proposed here is the 'third factor' (non-genetic and non-environmental) that is actually at work in C HL .Rather, he claims that it may be a mathematically feasible way to express and translate the unmarked word order asymmetry into a language of geometrical cost calculation that leads us to (1) In terms of phylogeny, 2 C HL shows the following language distribution: <SOV> = 48.5%,<SVO> = 38.7%,<VSO> = 9.2%, <VOS> = 2.4%, <OVS> = 0.7%, <OSV> = 0.5% (Yamamoto 2002). 3 Why do we focus on S, O, and V? 4 There are four reasons.First, many reliable studies since the seminal work of Greenberg (1963) present relatively solid evidence regarding the probability of unmarked word orders.Second, we have reliable data from native speakers, who have relatively clear intuitions about what the unmarked order of the set {S, O, V} is for their languages.The third reason is simplicity: we should start from the simplest possible case.The fourth reason is reducibility: we can and should reduce seemingly complex structures to the simplest possible structures, namely S + V and S + O + V. S and O may be complex, but they are reducible to the simple S and O. 
O may be direct (DO) or indirect (IO), but we start from the simpler DO.Sentence structures contain CP, TP, vP, and VP, but the most basic semantic domain is vP+VP, in which S, O, and V appear originally.Yamamoto (2002: 85) contains a table that is useful for comparing the relevant percentages that have appeared in previous studies.Here I have included Yamamoto (2002), Dryer &Martin (2011), andGell-Mann &Ruhlen (2011). 5This full list is shown in Table 1. algebraic and group-theoretical analyses in the future.I thank an anonymous reviewer for clarifying the issue.With regard to Galois theory, Évariste Galois (a French mathematician; 1811-1832) developed the fundamental mathematical tool, the Galois group (algebraic structure of equations), for examining the symmetry of equations.Modern science would not exist without Galois theory.Group theory is a calculus of symmetry (Stewart 2007: 111).In Chomsky (2002), Fukui and Zushi mention Weil (1969), which is a group-theoretical analysis of an aboriginal kinship structure in Australia (Japanese translation of Chomsky 1982: 356).As regards other group-theoretical analysis on C HL , see Laughren (1982), in which the author attempts a group-theoretical analysis of Walpiri kinship structure (languages of kinship in Aboriginal Australia).See also Jenkins (2013) for the introduction of Laughren (1982).In Chomsky (2002), Fukui and Zushi suggest a possibility of "Galois theory of phrase structure (I-language)" (Japanese translation of Chomsky 1982: 397-398). 2 The phylogeny problem (species puzzle) asks why a language system (the current C HL ) behaves in a particular way, "the historical development of languages" (Di Sciullo 2013).However, we are concerned with synchronic phenomena (why the current C HL appears like this; how it has come to have the property; what the cause is) and we put aside the actual diachronic analysis.The ontogeny problem (individual puzzle) asks how a human child acquires his or her mother tongue, i.e. "the growth of language in the individual" (ibid.). 3 Yamamoto (2002) considers the largest number (2,932) of languages for typological analysis to date (gross=6,000).The actual number used for calculating the percentages is 2,537.Given that many previous studies have only considered 20 or 30 to 200 or 300 languages, Yamamoto (2002) offers a significantly reliable sampling.<…> indicates an ordered set of unmarked (basic) word order.The ratio is rounded to the first decimal place.4 I thank an anonymous reviewer for pointing out this fundamental question.5 I added Yamamoto (2002; gross: 2,537 languages), Dryer & Martin (2011;gross: 1,377), and Gell-Mann & Ruhlen (2011;gross: 2,011).In Dryer & Martin (2011), 189 languages have no dominant order.Selected language families and samples are provided below.<SOV>: Niger-Congo, Semitic, Turkic, Indo-Aryan, Dravidian, Austonesian, Altaic, Chibchan, Native American languages, … <SVO>: Indo-European, Niger-Congo, Tai-Kadai, Sinae, Austronesian, Arawakan, … <VSO>: Celtic, Semitic, Niger-Congo, Austronesian, Native American languages, Chibchan, … <VOS>: Malagasy, Batak, Seediq (Austronesian languages), Native American languages, Chibchan, … However, an anonymous reviewer points out many fundamental problems.Why should we focus on the ordering among S, O, and V? 
Is it not the case that S, O, V are the grossest levels of organization of the clause, hence encompassing the maximal level of complexity?Is it not the case that unmarked orders such as <SOV> and <SVO> are shadows, not the essential substances?Is it not possible that the unmarked <SOV> has many other derivations, hence leading to different varieties of unmarked <SOV>? 7Why is <SOV> the base order?Why should the base order be the most common?If <SOV> is the cheapest, why is it not the case that all languages show <SOV> as the unmarked order?Why does an unmarked order such as <OSV> (0.5%) exist at all? 8 I attempt to answer these questions as far as possible.However, the questions are so fundamental that a complete answer is beyond the reach of this paper.Although the article faces many <OVS>: Päri (Niger-Congo), Ungarinjin (moribund Australian aboriginal language), Hixkaryana (Carib language), Tuvaluan (Austronesian) <OSV>: Kxoe (Kalahari), Tobati (Papua New Guinea), Wik Ngathana (Pama-Nyngan), Nadëb (Brazilian Amazon) 6 11% of languages are unclassified in this study.I thank an anonymous reviewer for pointing out that referring Yamamoto alone is insufficient. 7 The reviewer suggests that distinct operations yielding the same superficial <SOV> unmarked order, for example, are parametrized. 8 I thank an anonymous reviewer who pointed out this serious problem that my approach should solve.The readers can refer to Yang (2002) and Chomsky (2012) for the method behind the explanation of the statistical duality of irregular verbs.The reviewer's puzzle (a phylogeny issue) is particularly important in that it relates to an important statistical paradox (an ontogeny issue) as follows (Yang 2002, Chomsky 2012): Why do low-probability irregular verbs behave like high-probability regular verbs such that irregular verbs are as naturally and frequently used as regular verbs?Why do irregular verbs exist at all? 
Yang (2002) has discovered that irregular verbs are in fact 'regular' for they are grouped into distinct classes and the classes obey the relevant regular rules.For example, the blocking effect of the past tense form went over goed indicates that the 'weight' (probability) of the corresponding rule is 1.0 (must happen) or very close to 1.0 (very likely to happen) as a result of learning.Following his insight, I will argue later that a low-probability unmarked order such as <OSV> behaves like a high-probability order because the cost calculation is 'regular': The gross computational cost is within the threshold permitted for C HL (the minimum cost).The blocking effect of unmarked <OSV> over <SOV> indicates that the 'weight' (probability) of the corresponding cost calculation is 1.0 (must happen) or very close to 1.0 (very likely to happen) as a result of cost equilibrium.Greenberg (1966) showed that <SVO> languages outnumber <SOV> languages, and Yamamoto (2002: 85) attributed this unlikely result to the smaller samples (30 languages) and a bias toward Indo-European and African languages, excluding the languages of New Guinea and Melanesia.The general ranking of unmarked word order seems to be clear: It is significant that <SOV> and <SVO> account for more than 80%.C HL is strongly biased for these two unmarked word orders.Can we say as follows?Starting from <SOV>, <SVO> involves flipping O and V, and <VSO> involves rotating one position rightward, <VOS> involves flipping S and V (or it is a one-dimensional mirror image of <SOV>).Where does the ranking in (2) arise from?Why does C HL select this particular ranking?The main goal of this study is to show that the ranking is expressible as geometrical cost differences, which will ideally lead to a Galois-theoretic explanation, and that C HL chooses the most cost-effective unmarked word orders with respect to the phylogeny (the issue of why we can observe the current probability regarding unmarked word order asymmetry in human language).However, it is also a fact that all six possible unmarked word orders show symmetry and they are each the result of the most efficient computation with respect to ontogeny (the issue of why all six word orders are respectively the most natural and frequent unmarked orders for the respective native speakers).In a sense, phylogenetically minor unmarked orders such as <OSV> are similar to irregular verbs because they show low probability (we do not find many samples) but simultaneously show high probability (they are the most natural, frequent, and unmarked orders for the respective native speakers).Why do minor unmarked orders show low probability but simultaneously show high probability?I will offer a possible answer to this paradox in the last part of Section 3.With regard to the basic statistical data, I tentatively adopt Yamamoto (2002) in the following sections because it contains the largest data set available at present (2,932 languages). 
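As a concrete illustration of the ranking and of the triangle-symmetry reading developed later in the paper, the sketch below maps each of the six S/O/V orders to a permutation of the base <SOV> and attaches the survey shares quoted from Yamamoto (2002). The operation labels (identity, transposition, 3-cycle) are the standard group-theoretic names for the elements of S3; nothing here constitutes the paper's own cost calculus.

```python
# The six unmarked orders as permutations of the base <SOV>, with survey shares
# from Yamamoto (2002) as quoted in the text. Operation names are standard S3 labels.
from itertools import permutations

base = ("S", "O", "V")
share = {"SOV": 48.5, "SVO": 38.7, "VSO": 9.2, "VOS": 2.4, "OVS": 0.7, "OSV": 0.5}

def classify(p):
    """Name the S3 element mapping the base order onto p, by its number of fixed points."""
    fixed = sum(a == b for a, b in zip(base, p))
    if fixed == 3:
        return "identity"
    if fixed == 1:
        return "transposition (reflection of the triangle)"
    return "3-cycle (rotation of the triangle)"

for p in sorted(permutations(base), key=lambda q: -share["".join(q)]):
    order = "".join(p)
    print(f"<{order}>  {share[order]:4.1f}%  {classify(p)}")
```

Printed in descending share, the listing reproduces the ranking discussed above: the identity (<SOV>) and one transposition (<SVO>) dominate, while the remaining transpositions and the two 3-cycles account for the long tail.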
Chomsky's Third Factor The biolinguistic approach tackles the problem of whether we can explain C HL by natural laws, which Chomsky calls the third factor.The third factor includes "principles of neural organization that may be even more deeply grounded in physical law" (Chomsky 1965: 59) and "principles of structural architecture and developmental constraints that enter into canalization, organic form, and action over a wide range, including principles of efficient computation, which would be expected to be of particular significance for computational systems such as language" (Chomsky 2005: 6). 9Approximately half a century of biolinguistic research has revealed that there are parts of C HL that obey the principle of efficient computation, informally stated as follows: (3) Economy Principle (Minimal Computation) Select the most cost-effective computation. Measures of effective computation include the least effort, the shortest distance, the closest element, the fewest steps, the simplest structure, and the minimal search.The initial state of C HL is an organic computational system that includes the Economy Principle that governs an inorganic world.The initial state of C HL , which is given by the human genome, undergoes parameter setting in a linguistic environment until C HL reaches the final state, the point at which the mother-tongue acquisition system deactivates. 10C HL is a system that exhibits the discrete infinity property, which typically appears at the molecular level or below. A system of discrete infinity obeys the Economy Principle, such as a snowflake's hexagonal shape emerging as the idealized (optimized) realization of the atomic structure of H 2 O in midair, free from the noise of gravity and earth's thick air.As Chomsky often mentions, it would be interesting if an inorganic principle were operating on organic matter such as the human brain. 11I assume that the group-theoretical principles of an algebraic structure belong to the third factor.Jenkins (2000Jenkins ( , 2003) ) suggested that "word order types would be the (asymmetric) stable solutions of the symmetric still-to-bediscovered 'equations' governing word order distribution" (Jenkins (2000: 164) and that "the tools of group theory may be able to aid in characterizing the symmetries of word order patterns" (ibid.: 164). 12I believe that a study of the as discrete infinity and merge) and the second factor is the linguistic environment.The first factor is a force internal to C HL , and the second and third factors are external forces (Yang 2000).The first and second factors are responsible for the ontogeny of C HL (how C HL grows in the brain of a human infant), while the third factor is responsible for the phylogeny (why C HL has evolved in such a way).The interaction of these three factors determines the facts of C HL .Boeckx (2009: 46) points out that Chomsky's 'three factors' resemble Gould's 'adaptive triangle' (Stephen Jay Gould, American paleontologist, evolutionary biologist, and historian of science;1941-2002), which has three vertexes: (1) historical (chance); contingencies of phylogeny (mutation of DNA, 1 st factor), (2) functional; active adaptation (environmental pressure, 2 nd factor), and (3) structural constraints; rules of structure (physical laws, 3 rd factor) (Gould 2002).See Uriagereka (2010) and Longa et al. 
(2011) for relevant discussions.10 C HL is generally active for mother-tongue acquisition until approximately the appearance of secondary sex characteristics.Many mysteries exist regarding the issue. 11 With regard to the connection between Hamilton's principle of least action in physics and the third factor in C HL , see Fukui (1996).I thank an anonymous reviewer for suggesting that I should mention Hamilton's principle in this connection. 12 The assumption here is that an asymmetric state is stable; a symmetric state is too tense and expensive to maintain and such an unstable symmetric state becomes stabilized (costless to maintain) when the symmetry is broken.For example, Kayne (1994) proposes that syntactic terms must be in an antisymmetric c-command relation.Moro (2000: 15-29) claims that a symmetric structure (a point of symmetry) is too unstable for C HL to tolerate and that symmetry must be broken, and this drives movement, stabilizing the structure.Di Sciullo (2005Sciullo ( , 2008) ) investigates symmetry breaking (as a result of 'fluctuating asymmetry (oscillation)') in merge and morphology.In contrast, from the viewpoint of physics, a symmetric situation is stable (highly probable).An example is a gas, in which every direction appears the same.Symmetry forming is information diffusion and obeys the algebraic structure of equations (Galois group) will help us to express the phylogeny problem concerning permutation asymmetry in C HL .I attempt to express and translate the unmarked word order asymmetry into Galois-theoretic language, by considering cost. The rest of this paper is organized as follows.In Section 2, I claim that C HL produces the universal base vP, where S c-commands O, and O c-commands V, and that this base vP corresponds to the identity element (I) in mathematics.In Section 3, I propose that geometrical cost asymmetry is a possible "language" to express the unmarked word order asymmetry.I would like to propose that the unmarked ordering asymmetry in C HL can be expressed by Galois-theoretic language: the third factor. 13In particular, I propose a possible "equation governing [unmarked] word order distribution".Moreover, I also attempt to answer an important question: Why is it not the case that all languages show unmarked <SOV> provided that <SOV> derives from the most efficient computation?Section 4 summarizes the paper. The Universal Base vP as the Identity Element I propose that C HL creates the universal base vP, which is the identity element (identity syntactic relation) under the Merge operation. 14The base vP has the c-command relation S≫O≫V, as shown in Figure 1. 
15 The base vP is formed entropy law: Disorder develops (the second law of thermodynamics).Symmetry breaking is information condensation and disobeys the entropy law, i.e., order develops.An example is a crystal, in which things look different according to the viewpoint.For Kayne, Moro, and Di Sciullo, structure building is symmetry breaking, which produces information, disobeying the entropy law.On the other hand, Fukui (2012a) proposes that F(feature)-equilibrium (symmetry formation) drives structure building.F-equilibrium obeys the entropy law.For Fukui, structure building is symmetry formation: information loss.There is no contradiction.Kayne, Moro, and Di Sciullo discuss how structures produce phonetic (sound) and semantic (meaning) information, which must not be deleted, whereas Fukui talks about how structures lose formal features (structural information), which must be deleted.The issue is related to a diachronic question that an anonymous reviewer asks as follows: What will happen to the synchronic unmarked order asymmetry?Will all languages become <SOV> type, provided that it is the most efficient?Although the diachronic issue is beyond the reach of this paper, at this point, let us tentatively assume as follows.The diachronic change may be determined by the dynamic interaction between the two forces noted above: symmetry breaking and symmetry preservation (formation). 13 An anonymous reviewer suggests that S-initiality is largely areal (geographical proximity of other S-initial languages) (Dryer 2012).If so, we should conclude that it is primarily the environmental factor that induces the unmarked word order asymmetry.Although the issue is beyond the scope of this paper, let us tentatively start with the view that all three factors (genetic, environmental, and physical) are involved in the asymmetry in question.14 I focus on the structure of a simple matrix transitive sentence (consisting of S, O, and V) that the relevant native speakers judge to be the unmarked (basic) word order (actually their C HL reaction).C HL is what motivates the universal base vP.I thank an anonymous reviewer for pointing out this unclarity.I call the universal base vP the base vP for simplicity. with the least effort, that is, only an external merge (the simplest possible structure-building operation) builds it.Every sentence structure starts with the base vP.If TRANSFER applies to the base vP, the phonological component Ф (sensorimotor interface) produces <SOV> as the unmarked order. 16 Why is this structure the universal base vP? 17 First, it is the most cost-effective structure: the base vP is built by external merges only.If the cost is zero, the base vP corresponds to the identity (do-nothing) operation, which is the most cost-effective transformation.It is like the identity operation +0 under addition, which does not affect a number (for example, 3 + 0 = 3).Second, it is the most fundamental structure: every sentence structure contains the base vP at its deepest structure.Third, it gives us semantic universality: The base vP is the minimal domain where the V's inherent semantic information is assigned to O and S, and this holds universally.Fourth, there is V's affinity for O: universally, V has an affinity for O rather than S. 18 Thus, C HL disallows other possibilities. 
TRANSFER (Spell Out) sends a halfway-built tree with sound information to Ф.The relevant derivation may involve movements in later steps.An anonymous reviewer asks an important question in this connection: Is it not the case that <SOV>, for example, is always re-derived many times or has many sources?I tentatively assume that the geometrical cost approach mapping a tree to an unmarked word order is compatible with the conception that an unmarked word order (output) derives from many source trees (input) because a function allows many-to-one correspondence (Stewart 1975).For unmarked <SOV> and <SVO>, let us assume that the c-command relation within the vP phase at the point of the first TRANSFER determines the unmarked order.17 I thank an anonymous reviewer for pointing out this crucial question.In an earlier draft, I adopted the view that O moves to Spec, vP for feature checking.The reviewer pointed out that such a vP competes in cost with the one in which V moves to v, that is, both structures have one internal merge.The reviewer's observation has improved the structure of the universal base vP; it is constructed by an external merge alone, which yields the simplest possible architecture for S, O, and V. For phylogeny, the third factor (geometrically lowest cost) determines the six unmarked word orders.But for ontogeny, capitalizing on Yang (2002: 72), who argued that 'irregular'-verb formation is in fact 'regular' in that a child acquires 'irregular' verbs by applying 'regular'-class-forming rules, I propose that a child reliably associates an 'irregular' (minor) order (OSV, VOS, OVS) with its matching 'irregular'-formation rule, and reliably apply the rule over the default <SOV>.The ontogeny (acquisition) of 'irregular' (minor) unmarked orders parallels that of 'irregular' verbs.See section 3 for a detailed discussion. Let us demonstrate how the base vP is constructed.Given that each set includes the empty set by definition and that a syntactic object is a set, each syntactic object includes the empty set ∅ (an axiom).V externally merges with ∅. 19 V' and O merge, and V assigns Patient θ (a semantic role) to O. 20 The light verb v merges with VP.The v' merges with S and v assigns Agent θ to S. Thus, the base vP is the most inexpensive base for building the structure of {S, O, V} because it is formed by external merges only, given the Merge-over-Move hypothesis, and so every sentence starts with the base vP.Every final structure contains the base vP as a subset, and the base vP does not affect the usable c-command relations in the final structure.As noted above, the base vP is like the identity element 0 (zero) in addition.Probe features in v agrees with the goal features in O, the relevant structural features are valuated and deleted (Chomsky 2000). 21The valued selects a that clause as O but the V kill does not), V forms idioms with O (e.g., kick the bucket), a transitive verbal noun N V produces a compound word with O (e.g., manslaughter), and sequential voicing occurs between V and O (e.g., compound words in Japanese). 
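The bottom-up construction just described (V merging with the empty set, the result merging with O, then the light verb v and finally S) can be sketched as iterated set formation. The representation below uses Python frozensets purely as an illustration of bare-phrase-structure-style Merge; it is not a claim about the paper's formal definitions.

```python
# Sketch of the base vP as iterated (external) Merge, following the derivation in the text:
# V merges with the empty set, the result merges with O, then v merges with VP, then S.
def merge(a, b):
    """External Merge modeled as unordered set formation."""
    return frozenset({a, b})

EMPTY = frozenset()           # the empty set that V first merges with
v_bar_lower = merge("V", EMPTY)   # {V, ∅}   (V')
VP = merge(v_bar_lower, "O")      # {{V, ∅}, O}; V assigns Patient theta to O
v_bar = merge("v", VP)            # {v, VP}
base_vP = merge("S", v_bar)       # {S, {v, VP}}; v assigns Agent theta to S

# The nesting mirrors the c-command relation S >> O >> V of the universal base vP.
print(base_vP)
```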
19 An anonymous reviewer points out that construing the empty set as a legitimate syntactic object is something new and that it should be justified.The reviewer points out that it poses a problem because in set theory, the empty set is a subset of every set, not an element of every set.I tentatively adopt the following definition of syntactic object in the bare phrase structure model (Chomsky 1995: 243, 262).I reintroduce the relevant definition stated in Uriagereka (2000: 497). (ⅰ) Syntactic object σ is a syntactic object if it is a. a lexical item or the set of formal features of a lexical item, or b. the set K = {γ, {α, β}} or K = {<γ, γ>, {α, β}} such that α and β are syntactic objects and γ or <γ, γ> is the label of K. If the set of formal features of a lexical item is a syntactic object as in (ⅰa) and if the phonologically empty set lacking any member (phonological feature) is a legitimate phonological object, the syntactically empty set lacking any member (syntactic feature) may also be a legitimate syntactic object.As an alternative, the reviewer suggests 'Self-Merge' that allows vacuous projection, as in Guimarães (2000) and Kayne (2008).I leave open this fundamental problem for future research.See Barrie (2006: 99-100) for the solution adopted here, which avoids the initial-merge problem (or the "bottom of the phrase-marker" linearization problem; Uriagereka 2012: 141, fn. 23, citing Chomsky 1995: chap. 4).In fact, the structure-building space consists of empty set (∅) before V enters, i.e., "take only one thing, call it 'zero,' and you merge it; you get the set containing zero.You do it again, and you get the set containing the set containing zero; that's the successor function" (Chomsky & McGilvray 2012: 15).The operation also satisfies the restriction that "Merge cannot apply to a copy: a trace or an empty category that has moved covertly" (Chomsky 2004).The empty set ∅ is not a copy or an empty category that has moved covertly.Therefore, ∅ is allowed to merge with V. "The empty set is not 'nothing' nor does it fail to exist.It is just as much in existence as any other set.It is its members that do not exist.It must not be confused with the number 0: for 0 is a number, whereas ø is a set" (Stewart 1975: 48)."[T]he empty set ∅ is a subset of any set you care to name -by another piece of vacuous reasoning.If it were not a subset of a given set S, then there would have to be some element of ∅ which was not an element of S. In particular there would have to be an element of ∅.Since ∅ has no elements, this is impossible" (ibid.: 49).See also Fukui (2012b: 259) for the hypothesis that 1 is created by merging 0 with 1.If the natural numbers emerged from the abstraction from merge, the sentence-structure building must involve the empty set merging with V at the first step. 20 An intermediate projection such as V' is used for expository purposes. 21 The base vP is consistent with the Multiple Spell Out (MSO) hypothesis (which states that there is more than one point when a structure with sound features attached is sent to the PF (Ф) (Uriagereka 2012: 113, fn. 
33).According to MSO, a domain, such as S, that is moved to φ-feature is deleted because it is redundant: O contains the same φ-set in the first place.The valued structural Case is deleted as a reflex (side effect) of valued-φ deletion (ibid.: 122).If a formal feature is not deleted within C HL and enters into the external performance systems (Ф and the thought system Σ), the external systems will freeze because such a structural feature is unknown to them. The base vP is the most economical structure (involving the least effort) that satisfies the Linear Correspondence Axiom (LCA; originally proposed by Kayne 1994).LCA is a principle at the sound interface that maps two-dimensional structures to one-dimensional linear orders.LCA demands that a structurally higher term should be pronounced earlier.Let us adopt the following definition of LCA (Uriagereka 2012: 56). 22 (4) LCA: When x asymmetrically c-commands y, x precedes y. The base vP does not influence later structures.For example, suppose we arrived at V≫S≫O as the final output structure of TRANSFER.In Ф, LCA notices only the boxed terms in Figure 2. 23 There, TRANSFER (Spell Out) sends the final CP structure to Ф, and LCA maps this structure to the linear unmarked order <VSO>. 24Although the final CP structure contains the base vP whose syntactic relation is S≫O≫V, the final structure is not affected by the base vP (recall that the base vP is like the identity element 0 (zero) for addition). 25 TP Spec and spelled out independently becomes opaque to subextraction.O in the base vP remains in situ and is not spelled out independently, and hence, no island effect is detected for O. Uriagereka cites Jurka (2010), who maintains that Kayne's (1994) hypothesis that <SVO> derives <SOV> is dubious: it incorrectly predicts that the moved O should exhibit the island effect.The universal base vP hypothesis rejects Kayne's (1994) hypothesis that structure building starts with the base VP in which S c-commands V, which c-commands O. See Fukui & Takano (1998) for arguments for our hypothesis. 22 The original definition of LCA is as follows (Kayne 1994: 6).Given d(X) = the set of terminals T that X dominates and A = the set of ordered pairs <X j , Y j > such that for each j, X j asymmetrically c-commands Y j , where X asymmetrically c-commands Y iff X c-commands Y and Y does not c-command X, LCA = def.d(A) is a linear ordering of T. 23 With regard to the V-initial unmarked order, there is a debate on the derivation, i.e. remnant-VP movement vs. V-movement.For arguments for the former view, see Alexiadou & Anagnostopoulou (1998) and Massam (2000).I use a V-movement analysis for simplicity.The choice does not affect the discussion.See Carnie et al. (2005) for relevant discussions. 24 If T contains EPP and attracts S, V must have reached C at the point of the final TRANSFER for the unmarked order <VSO> to be realized. 
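To make the c-command-based ordering in (4) concrete, here is a minimal Python sketch (my own illustration, not part of the original paper) that computes which leaves asymmetrically c-command which in a toy universal base vP, [vP S [v' v [VP O [V' V ∅]]]], and derives the <SOV> linearization. The tuple encoding of the tree, the helper names, and the restriction to leaf terminals are illustrative assumptions; copies created by later internal merge are not modeled.

```python
from itertools import permutations

# Toy binary tree for the universal base vP: {S, {v, {O, {V, ∅}}}}, built by external Merge only.
BASE_VP = ("S", ("v", ("O", ("V", "0"))))  # "0" stands in for the empty set ∅

def leaves(node):
    """Collect the terminal symbols dominated by a node."""
    if isinstance(node, str):
        return {node}
    left, right = node
    return leaves(left) | leaves(right)

def c_commanded_by(tree, leaf):
    """Leaves c-commanded by `leaf`: everything inside its sister constituent."""
    if isinstance(tree, str):
        return set()
    left, right = tree
    if left == leaf:
        return leaves(right)
    if right == leaf:
        return leaves(left)
    return c_commanded_by(left, leaf) | c_commanded_by(right, leaf)

def asymmetrically_c_commands(tree, x, y):
    """x asymmetrically c-commands y iff x c-commands y but y does not c-command x."""
    return y in c_commanded_by(tree, x) and x not in c_commanded_by(tree, y)

def lca_order(tree, terminals):
    """Linearize per (4): x precedes y when x asymmetrically c-commands y."""
    for order in permutations(terminals):
        if all(asymmetrically_c_commands(tree, order[i], order[j])
               for i in range(len(order)) for j in range(i + 1, len(order))):
            return order
    return None

print(lca_order(BASE_VP, ["S", "O", "V"]))  # -> ('S', 'O', 'V'), i.e. the unmarked <SOV>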
25 The tree building in C HL constitutes a group. It conforms to the four defining conditions of a group. First, it is closed: Merge applies to a tree and creates a tree. Second, it has an identity element: the universal base vP is similar to 1 for multiplication; it does not affect the output. Third, it has inverse elements: there is always a set of remerge operations that returns some c-command relation to the base S ≫ O ≫ V relation. Fourth, it obeys the associative law, (XY)Z = X(YZ), with respect to structure building (head projectability): given the head-final property, both (XY)Z and X(YZ) produce a projection of Z; alternatively, given the head-initial property, both (XY)Z and X(YZ) produce a projection of X. With regard to the fourth condition, Fukui & Zushi hold the view that C HL disobeys the associative law for semantics, i.e., distinct hierarchical (binary) structures produce distinct meanings (Merge disobeying the associative law causes the hierarchical structures). See their comment on pages 19 and 322 of the Japanese translation of Chomsky (1982, 2002). If Merge is the fundamental operation in C HL and the concept of 'group' applies to any system with the possibility of combining two objects to yield another (Stewart 1975: 1), C HL deserves a group-theoretical analysis. "Thus the concept 'group' has applications to rigid motions in space, symmetries of geometrical figures, the additive structure of whole numbers, or the deformation of curves in a topological space. The common property is the possibility of combining two objects of a certain kind to yield another" (ibid.). Word Order Asymmetry as Geometrical Cost Asymmetry The symmetry structure of an equilateral triangle represents the group-theoretical structure of a cubic equation (Stewart 2007). 26 The permutation of three solutions corresponds to that of the three vertexes. Assume counterclockwise rotations, with a 0° rotation serving as the identity I. 27 Let us call the original triangle the identity element or identity triangle. 26 I thank an anonymous reviewer for clarifying the issue. That is, the permutation group S 3 of three letters has only 4 isomorphism classes (or conjugacy classes) of subgroups, namely {id} = I, C 2 (a cyclic group of order 2), C 3 (a cyclic group of order 3), and S 3. The reviewer's criticism is that the observed broken symmetry corresponds most closely to C 2 and amounts to a rather simple observation: V and O seem to remain symmetric, whereas S is not symmetric with the others. Here is the list of six subgroups of S 3. (23) stands for the permutation that switches 2 and 3, leaving 1 intact, as in (1, 2, 3) → (1, 3, 2). (132) stands for the permutation that changes 1 to 3, 3 to 2, and 2 to 1, as in (1, 2, 3) → (3, 1, 2). I is the identity permutation that keeps everything intact, as in (1, 2, 3) → (1, 2, 3). Assume 1 = S, 2 = O, 3 = V. 27 A 0° and a 360° rotation cannot be distinguished group-theoretically, but they are distinct if we take the cost difference into consideration. The do-nothing operation r0 changes <ABC> to <ABC>. The top apex corresponds to the first position, the lower left apex to the second, and the lower right apex to the third. The six transformations are as follows: (6) a. r0 changes <ABC> to <ABC>. b. … The transformation r0 is the most cost-effective. Although Galois groups are indifferent to cost, geometrical operations do have cost differences, given an appropriate cost function. It is true that the structure of the symmetric group S 3 of order 3!
( 6) is too simple to imply anything.However, this simplicity is the very reason why I take operational costs into consideration. 29All six symmetrical 28 Rotations are linear transformations T (or function f) in R 2 (two-dimensional real-number space).Flips are T of R 2 subspace in R 3 (Strang 2009).T or f can be translated into a matrix A. If the unmarked order asymmetry can be expressed by T, we will be able to translate it into the matrix language, which we leave for future research. 29 Algebraic cost means computing time (Strang 2003: 87).An anonymous reviewer offered the criticism that the structure of the symmetric group is too simple to imply anything.I thank trans-formations can be expressed using only r0, r1, and f1, that is, r2, f2, and f3 are derivable operations (Armstrong 1988). 30 (7) a. r0 b. Why do we select r0, r1, and f1 as irreducible atoms for symmetrical transformations? 31Recall that we started from an empirical (physical) fact about the human brain: C HL produces a sentence structure with the base vP as its universal base, in which S, O, and V are externally merged such that S asymmetrically c-commands O, which in turn asymmetrically c-commands V.The base vP is the most cost-effective base with a cost of 0: it is built by external merges alone.Therefore, the base vP corresponds to r0, the identity operation (with a cost of 0).Since we use the cost differences between transformations, we have to rank transformations by their geometrical cost.After r0, the next most cost-effective operation is f1, which switches two (rather than three) positions, O and V.Because f1 switches O and V, which have a strong bond, as stated earlier, and which form a natural class, f1 is the most cost-effective transformation among flips (or reflections).Following r0 (cost 0) and f1 (cost 1), r1 (with cost 2; it is a single-step rotation with three (rather than two) positions replaced) is the second most cost-effective transformation within the rotations. Let us summarize cost calculation.Suppose that the identity operation r0 has cost 0.The geometrical operation r0 syntactically corresponds to doing nothing to the least costly base vP before spell-out, which in turn sent to Ф where LCA produces the linear order <SOV>.The more positions that a computation replaces, the more energy the computation uses. 32This is the reason why r1 is costlier than f1. 33Furthermore, single-step operations are cheaper than two-step operationsmathematicians call this the 'length function' in symmetric groups. 34Hence, r0 is the cheapest of all, f1 is the second cheapest, and r1 is the third. 35Assuming that f1 has cost 1, r1 has cost 2, and that addition is used for the reviewer for clarifying the crucial reason why I should consider geometrical cost, namely it sharpens the tool for observing the phenomena.30 I stipulate that the vertical axis L1 is the default (basis).An empirical reason is as follows.Given that the base vP corresponds to an equilateral triangle in which S is the top vertex, O is on the left, and V is on the right, the vertical axis L1 switches O and V.There is considerable evidence that V has an affinity for O, rather than S.That is, given, S, O, and V, {O, V} constitutes a natural class excluding S, whereas {S, V} excluding O does not.The vertical axis L1 switches elements in a natural class. 31 I thank an anonymous reviewer for pointing out the necessity of clarifying this reasoning. 
32 "[I]n group theory it is the end result that matters, not the route taken to get there" (Stewart 2007: 121).However, the route matters for the geometrical cost approach: A longer route is more expensive. 33 I thank an anonymous reviewer for pointing out unclarity in an earlier draft. 34 I thank an anonymous reviewer for pointing this out. 35 This cost function is consistent with results under the Mobius function, according to which cost accumulation, the costs for the six transformations are as follows: The identity operation r0 is the cheapest (cost 0) followed by f1 (cost 1) and r1 (cost 2). 36This is what we would expect if we replaced A, B, and C with S, O, and V, respectively. 37The identity triangle looks like the following: the equation for flip is simpler than that for rotation. 36 An anonymous reviewer asks a subtle and extremely important question: Exactly what are the relevant 'costs' to be minimized, provided the economy principle?I adopt the view that algebraic cost means computing time (Strang 2003: 87).The longer the root, the more time it takes.Therefore, the relevant 'cost' to be minimized is computing time.The high probability of unmarked <SOV> from phylogenetic point of view emerges from the fact that the identity (do-nothing) transformation is the fastest computation.Also, I thank an anonymous reviewer for pointing out a miscalculation in a previous draft and for clarifying the reason for selecting smaller values.The reasoning is as follows.For f2, there are three sets of operations that lead to the same result: f2 = f1 + r1 = 1 + 2 = 3, f2 = r2 + f1 = 4 + 1 = 5, and f2 = r1 + f1 + r2 = 2 + 1 + 4 = 7.For f3, there are two sets of operations that lead to the same result: f3 = r1 + f1 = 2 + 1 = 3, and f3 = f1 + r2 = 1 + 4 = 5.I select the lowest cost for each, assuming that C HL obeys the Economy Principle.Therefore, f2 = f1 + r1 = 3, and f3 = r1 + f1 = 3. 37 A reviewer points out that "[these] permutations on the SOV 'basic' string as the relevant group-theoretic action" is "the source of the most severe problems".However, what is 'basic' is not the SOV string itself.What is 'basic and universal' is the vP structure without internal merge (copy and remerge) at the point of TRANSFER (movements may occur later). 
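The cost bookkeeping described above can be stated in a few lines of code. The following Python sketch (my own tabulation, not from the paper) records the six transformations with the costs assumed in the text, computes the cheapest decompositions for the derived flips, and lists the word order each transformation yields from the identity triangle <S, O, V>; the pairing of f2/f3 with <VOS>/<OSV> depends on which reflection axis is which, so it is left open here as an assumption.

```python
# Cheapest decompositions of the derived flips, as in the text:
# f2 = min(f1 + r1, r2 + f1, r1 + f1 + r2), f3 = min(r1 + f1, f1 + r2).
f1, r1, r2 = 1, 2, 4
f2_cost = min(f1 + r1, r2 + f1, r1 + f1 + r2)
f3_cost = min(r1 + f1, f1 + r2)
assert (f2_cost, f3_cost) == (3, 3)

# Geometrical cost of each symmetry of the triangle and the unmarked order it yields.
TRANSFORMS = {
    "r0": {"cost": 0, "order": "SOV"},         # identity: do nothing to the base vP
    "f1": {"cost": 1, "order": "SVO"},         # flip about the vertical axis: swap O and V
    "r1": {"cost": 2, "order": "VSO"},         # one 120-degree rotation
    "f2": {"cost": f2_cost, "order": "VOS or OSV"},  # pairing depends on the axis labeling
    "f3": {"cost": f3_cost, "order": "OSV or VOS"},
    "r2": {"cost": 4, "order": "OVS"},         # two 120-degree rotations
}

# Rank by cost: the lower the geometrical cost, the higher the predicted frequency.
for name, info in sorted(TRANSFORMS.items(), key=lambda kv: kv[1]["cost"]):
    print(f"{name}: cost {info['cost']} -> unmarked <{info['order']}>")
```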
The universal base vP per se is not the unmarked <SOV> order.TRANSFER applies to the universal base vP and Ф outputs <SOV> as a possible unmarked order for a simple matrix transitive sentence.The reviewer also has severe doubts on "the author's technique of considering string permutations rather than movement operations in the tree."However, I do not propose string permutations as a new technique to analyze sentence structures.Rather, I claim that movement operations in a tree (including no movement) can be expressed as the group-theoretical transformations of equilateral triangle.The movement operations and the geometrical transformations are compatible and translatable.If a certain structure (order) is not derivable due to a violation of the movement constraint, there is no geometrical expression for it.We consider how a per-mitted tree structure can be expressed algebraically and geometrically.The group-theoretic action acts on an equilateral triangle in a certain coordinates (which is a geometrical expression of a particular permutation of three solutions of a cubic equation).A triangle undergoes various linear transformations in R 2 (e.g., rotations in two-dimensional real-number space) and R 3 (e.g., reflections (flips) in three-dimensional space).However, I admit that the geometrical cost approach does rely on the universal base vP as the identity element.If that approach is untenable (as the reviewer points out), the geometrical cost approach collapses.Internal merge operations including the lack thereof apply to the universal base vP, and the LCA produces various unmarked order types in Ф.This situation is geometrically expressed as symmetric transformations applied to the identity triangle, producing various permutations.Table 2 summarizes the transformations and costs.Following Jenkins (2000, 2003), I speculate that the unmarked word order asymmetry is expressible as a group-theoretical factor (included in Chomsky's third factor): "[W]ord order types would be the (asymmetric) stable solutions of the symmetric still-to-be-discovered 'equations' governing word order distribution".The 'symmetric equation' is a linear transformation f(x) = y, where function f (or transformation T) is a set of merge operations that is expressed as a set of symmetric transformations of an equilateral triangle (or permutations of three solutions of a solvable cubic equation), x is the universal base vP input that is expressed as the identity triangle, and y is a mapped output tree that is expressed as an output triangle that preserves symmetry.The equation f(x) = y can be translated into the matrix language: Ax = y, where A is a matrix that performs the transformation, x is a set of input vectors expressing the identity triangle (the universal base vP), and y is a set of output vectors expressing the transformed symmetrical triangle (the transformed tree). 38The Galois theory and the Economy Principle (choose the cheaper operation) can express the current ratio of languages with the top three unmarked word orders: 38 See Strang (2009) for the basic idea of linear transformations.The condition that a linear transformation must satisfy is as follows: T(cv + dw) = cT(v) + dT(w), where T is a linear transformation, v and w are some vectors, and c and d are some constants.Projections and rotations are examples of linear transformations.f1 (cost 1) produces <SVO> with a ratio of 38.7%.c. 
Although the geometrical cost approach fails to predict the internal ranking among f2, f3, and r2, it does predict their relatively low probability: (10) a. The geometrical cost approach predicts that <OSV> and <VOS> should emerge at the same rate, and that <OVS> should exhibit the lowest rate, which is not reflected in the actual statistics.We are not able to predict this difference.However, it is significant that the approach predicts the internal ranking of the major (top) three unmarked word orders and the division between the higher three and lower three with respect to unmarked word order in C HL . 39 What is symmetry?A state is symmetrical when an operation (or a transformation) does not affect (change) the properties of the state.However, some properties are preserved after transformation (symmetry is formed), whereas some properties are not preserved (symmetry is broken).What properties are preserved and not preserved here?The preserved property is the structure of the equilateral triangle itself located in particular coordinates (the entire shape looks the same after symmetrical transformations); information regarding the locations of S, O, and V is irrelevant.We observe the same-looking equilateral triangle after various symmetrical operations.The property not 39 With regard to <OSV>, I tentatively propose that O raises and becomes the Spec, TP.The operation is very expensive because C HL must find (and actually finds) a solution to circumvent a violation of the minimality principle; T has attracted O, which is more distant than S. With regard to <OVS>, V further raises to T. With regard to <VOS> (e.g., Austronesian languages such as Malagasy, Seediq, and Tzotzil), V further raises to C. However, the analysis wrongly predicts that the probability difference should be OSV > OVS > VOS.As an anonymous reviewer points out, the currently available difference is unexpectedly the opposite: VOS > OVS > OSV.Why should <VOS> be the most probable among the three?It may be that V-movement to C facilitates O-movement, as in Object Shift phenomena.As for the conditions on Object Shift, see Chomsky (2000).Alternatively, it may be related to the mathematical fact that "Inverses come in reverse order" (Strang 2003: 72) In other words, <VOS> could be an inverse of <SOV>.Therefore, (SOV <VOS> shows relatively high probability because it is in inverse relation with the highest probable order, <SOV>.However, neither the exact nature of the derivation nor the linear algebraic reasoning is clear at this point. Furthermore, a question arises as to why the unmarked word orders <OSV>, <VOS>, and <OVS> exist at all; i.e. why do they not show 0% if they are very expensive?From the perspective of phylogeny, I propose that these unmarked word orders are rare (minor) because they have higher geometrical cost.However, from the perspective of ontogeny, I propose, capitalizing on Yang (2002: 69-70), that they exist because they have higher weight (probability that is one or very close to one as a result of learning).These rare (minor) unmarked orders are like 'irregular' verbs: Every 'irregular'-forming rule, which applies to the verb class, is associated with a weight (probability).As a child acquires 'irregular' verbs by applying 'regular' class-forming rules, she acquires a 'minor' basic word order by applying 'regular' transformation (phrasal and head movement) rules. 
preserved is the locational information of S, O, and V regarding where S, O, and V end up in the triangle after symmetrical transformations.We observe different arrangements of S, O, and V after various symmetrical operations.However, the identity (do-nothing) transformation is special in that it always preserves all properties after symmetrical operations. A derivation of a sentence starts out with the universal base vP, in which S c-commands O and O c-commands V.If the base vP (without movement) is transferred to Ф, we obtain <SOV> as the unmarked order.This is geometrically expressed as the identity transformation where nothing is done.If V raises to v (one-step V-movement) before TRANSFER, we obtain <SVO> as the unmarked order.This is geometrically expressed as a flip (three-dimensional transformation) where we have V in the base-O position and O in the base-V position (O and V are switched).If V raises to v and then to T (two-step V-movement) before TRANSFER, we obtain <VSO> as the unmarked order.This is geometrically expressed as a 120° rotation, where we have V in the base-S position, S in the base-O position, and O in the base-V position.The structure-building cost corresponds to the geometrical cost.This causes the probability difference among the three major basic word order types from the phylogenetic viewpoint.Let us summarize the C HL geometry correspondence in the following figures.The boxes in the trees are visible to Ф and to the LCA spelling out the unmarked word order.The above tree-building steps can respectively be expressed as (Galois-theoretic) geometrical transformations (rigid movements) as follows.These geometrical transformations express various permutations of the solutions of the solvable cubic equation. 40 The Our analysis is consistent with the conception that "[o]ptimally, linearization should be restricted to the mapping of the object to the SM [sensorimotor] interface [Ф], where it is required for language-external reasons" (Chomsky 2005).The geometrical cost belongs to a mathematical or physical law that is language external.In Addition, our model supports the view that "order does not enter into the generation of the C-I [thought] interface," and that "syntactic determinants of order fall within the phonological component" (Chomsky 2008).In other words, the permutation among S, O, and V does not influence the meaning of the matrix simple transitive sentence in all languages in the thought system: the idea of "John loves Mary" is the same in all languages, whatever the unmarked order is; symmetry is maintained.On the other hand, with regard to the ordering that takes place in Ф, symmetry breaks in a manner that obeys a mathematical or physical law (except the do-nothing (identity) operation).Ordering is not accidental or random, contra Chomsky (2012). The universal base vP vP However, it is also a fact that all six unmarked-order types behave alike in that they are all possible mother languages; each type is the most natural, frequent, and unmarked word order for the respective native speakers.The computational cost for basic order formation must be within the permissible level in all types; the relevant computation is equally efficient in all languages. 
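The identity, flip, and 120° rotation just described can also be checked numerically. The numpy sketch below is my own illustration: it places S, O, and V at the vertices of an equilateral triangle (S on top, O lower left, V lower right), applies the identity, a reflection about the vertical axis, and a 120° counterclockwise rotation as 2×2 matrices A in Ax = y, and reads off which label ends up at which vertex; the vertex placement and the reading convention (top, lower left, lower right) follow the text, while the numerical details are assumptions.

```python
import numpy as np

# Vertices of the identity triangle on the unit circle: S on top, O lower left, V lower right.
angles = {"S": 90, "O": 210, "V": 330}
positions = {lab: np.array([np.cos(np.radians(a)), np.sin(np.radians(a))])
             for lab, a in angles.items()}

def rotation(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

reflection_vertical = np.array([[-1.0, 0.0], [0.0, 1.0]])  # flip about the vertical axis L1

def read_order(matrix):
    """Apply A to every vertex and read the labels at the top, lower-left, lower-right slots."""
    slots = [positions["S"], positions["O"], positions["V"]]   # top, lower left, lower right
    moved = {lab: matrix @ vec for lab, vec in positions.items()}
    order = [min(moved, key=lambda L: np.linalg.norm(moved[L] - slot)) for slot in slots]
    return "<" + "".join(order) + ">"

print(read_order(np.eye(2)))             # r0 -> <SOV>
print(read_order(reflection_vertical))   # f1 -> <SVO>
print(read_order(rotation(120)))         # r1 -> <VSO>
```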
An anonymous reviewer asks a crucial question: Is it not the case that C HL must produce the unmarked <SOV> only, provided that the unmarked <SOV> derives from the most efficient computation and that C HL obeys the principle of efficient computation?Why does C HL allow other unmarked orders that derive from less efficient computation?Why does the unmarked <OVS> for example exist at all, given that it derives from the least efficient computation?Is it not the case that the unmarked <OVS> cannot exist?Why does it exist at all?A tentative answer is as follows.Suppose that the gross computational cost is 1.0 in all languages and that C HL allows all possible patterns as long as the gross cost is 1.0. 41If the basic (unmarked) word order is <SOV>, approximately cost 0.1 is used for the unmarked order building and the rest (cost 0.9) is used for other operations.If the basic word order is <SVO>, cost 0.2 is used for the unmarked order building and the rest (cost 0.8) is used for other operations.If the basic word order is <VSO>, cost 0.3 is used for the unmarked order building and the rest (cost 0.7) is used for other operations. 42For example, the <SOV> type has the greatest cost 0.9 remaining for other operations.Thus, an <SOV>-type language such as Japanese tends to allow computationally more complex operations in other domains: this type allows (phonologically) null subjects, null expletives, null agreement morphologies, covert (phonologically null) wh-movement, covert extraction of argument-wh phrases out of islands, and scrambling (relatively free word ordering). 43The C HL needs more energy to locate where these silent entities are, how they are moving, and where they went because they are not heard (not pronounced); they are difficult to find and keep track of. 44Therefore, our model predicts that the <SVO> type, unlike the <SOV> type, is less tolerant toward these phonetically null entities and word permutations.The prediction is borne out as comparative syntactic studies have observed: an <SVO>-type language such as English tends not to allow covert subjects, covert expletives, covert agreement morphologies, covert wh-movement, covert extraction of wh-phrases out of islands, and scrambling.In addition, our analysis predicts that the <VSO> type is furthermore less tolerant toward these phenomena. 45We leave the detailed verification for future research.Let us summarize our point in Table 3. 41 Notice that the number 1.0 is tentatively used here for maximum level of computational cost, not the probability 1.0 (it must happen). 42 The specific numbers expressing cost do not matter.What matters is the difference. 43 Covert extraction of adjunct-wh phrases out of islands is not allowed even in this type.The computational cost exceeds the threshold level (cost 1.0) at this point. 44 This idea is the opposite of the standard conception that covert entities and operations need less energy because the costly pronunciation is not necessary. 
45 Unlike <SVO>-type languages such as English, <VSO>-type languages such as Irish (exclusively <VSO>) and Tagalog tend to show severer restrictions on covert elements and word permutations.For example, they require a phonetically realized question marker at the beginning (or the second position) of the question sentence; V-initial languages have pre-V particles (C?), C has a more elaborate system of phonetic realization with respect to feature combination of [±Q] and [±WH], which restricts cyclic wh-movement (Irish), whfronting is obligatory (Irish), the patient wh-phrase, but not the agent wh-phrase, is fronted in the matrix simple transitive question (Tagalog) (Aldridge 2002: 394), an argument movement to the left edge is strictly disallowed (Irish), and null subject is more strictly constrained; a pronoun must appear when V takes an analytic form (Irish), and ordering within nominals is more restricted (strictly head-initial), i.e., nouns must precede demonstratives, adjectives, or relative clauses; and inverted order is prohibited in questions.These observations indicate that the <VSO> type is much less tolerant toward covert elements and word permutations than the <SVO> type.See Carnie et al. (2005) Assume that the gross cost-level for C HL operations is the same in all languages.In addition, assume that the number of parameters is the same in all languages, i.e., the cost for language acquisition is the same.With regard to <SOV>, less parameters are fixed for determining the unmarked order, and more parameters must be fixed for other operations.With regard to <VSO>, more parameters must be fixed for determining the unmarked order, and less parameters are fixed for other operations.However, the gross cost is the same in all languages.Our analysis is compatible with the conceptions that "[c]omplexities [expensive computation] in one domain of language are balanced by simplicity [inexpensive computation] in another domain", "[a]ll languages are necessarily equally complex [the gross cost is 1.0]", and that "[c]omplexity trades off between the subsystems of language." 
46 Conclusion I am grateful to the anonymous reviewers for teaching me reality: My approach may be too simple, immature, groundless, and without promise, and my research has a long way to go even if it should turn out to be tenable.The reviewers pointed out several faults.First, S 3 is too simple to say anything about general patterns.Second, since one can superficially analyze any permutation phenomenon by means of the group theory, there is no substance to the argument that C HL works group theoretically.Third, the classification based on S, O, and V may be too crude for samples.Fourth, it may be too simple to assume that the derivation of the unmarked <SOV>, for example, is done in only one way; there may be many ways to derive the unmarked <SOV>.The reviewers advised me to write this speculative paper without claiming to present any scientific findings, at least raise a set of good questions.I hope that this version manages to do that.I hope that my approach will lead to possible future research from the combined perspective of applied mathematics and biolinguistics.Despite tons of difficulty, let us ask the following question.What would it mean for the geometrical cost approach to express the basic word order asymmetry in C HL ?What does it mean to say that the basic word order asymmetry can be expressed as solving a cubic (or complex quadratic, whatever) equation?What does it mean for the categories as S, O, and V to be described as the roots of an equation? 47Following Noam Chomsky, I speculate that these 46 Fenk & Fenk (2008), Nematzadeh (2013).See p. 329 in the Japanese translation of Chomsky (1982) for a possible hypothesis that every individual language shows the same cost level. 47 I thank an anonymous reviewer for pointing out the necessity of asking these questions in order to provide the raison d'être for this project.According to the reviewer, in Jenkins' for-questions lead us to a partial answer to the traditional question that troubled Alfred Russel Wallace (a British naturalist, explorer, geographer, anthropologist, and biologist;1823-1913), co-author of the evolutionary theory of natural selection, 124 years ago.Chomsky (2005Chomsky ( : 16, 2007Chomsky ( : 7, 20, 2010: 53: 53) quotes Wallace's puzzlement: The "gigantic development of the mathematical capacity is wholly unexplained by the theory of natural selection, and must be due to some altogether distinct cause," if only because it remained unused. 48In favor of Leopold Kronecker (a German mathematician; 1823-1891), who said that God (Mother Nature) made integers; all else is the work of man (Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk), Chomsky states that the theory of natural numbers may have derived from a successor function arising from Merge and that "speculations about the origin of the mathematical capacity as an abstraction from linguistic operations are not unfamiliar." 49Considering Merge within the context of the evolutionary theory, Chomsky (2007: 7) proposed the following hypothesis: (11) Mathematical capacity is derived from language. If so, Wallace's puzzle is partially answered: "Some altogether distinct cause" is an operation in C HL .I speculate the following hypothesis. (12) A simple matrix transitive sentence consisting of S, O, and V can be expressed as a solvable equation with an algebraic-geometrical structure. If this is true, we can study C HL with Galois-theoretic tools. 
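The successor-function abstraction quoted above ("take only one thing, call it 'zero,' and you merge it; you get the set containing zero") can be illustrated directly. The short Python sketch below is only an illustration of that abstraction, not a claim about C HL: it treats the empty set as 0 and unary Merge as singleton-set formation, so each application yields the set containing the previous one.

```python
def merge_single(x):
    """Unary Merge: take one object and form the set containing it."""
    return frozenset({x})

zero = frozenset()           # the empty set, standing in for 0
numbers = [zero]
for _ in range(3):           # successor applied three times: 1, 2, 3
    numbers.append(merge_single(numbers[-1]))

for n, s in enumerate(numbers):
    print(n, s)
# 0 frozenset()
# 1 frozenset({frozenset()})
# 2 frozenset({frozenset({frozenset()})})
```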
50As a Galois group characterizes the algebraic (or symmetry-related) structure of an equation, it can also characterize the algebraic (or symmetry-related) structure of a sentence at a relevant level. Let us summarize the discussion through the key points listed in (13): mulation, the idea was that the word orders themselves (not the coarse individual categories S, O, or V) are the solutions to the equations governing syntactic structure and that the group theory could shed light on the algebraic properties of those equations.This paper attempts a very preliminary study to see how far we can proceed with permutations of these coarse but sufficiently simple categories.48 Wallace's (1889: 467) statement is cited in Chomsky (2007: 7)."The significance of such phenomena, however, is far from clear."(Chomsky 2009: 26, 33).See Chomsky & McGilvray (2012: 16) for relevant discussion. 49 Chomsky has stricter view than Kronecker in that C HL is the origin of nutural numbers, not integers.According to Chomsky (2005: 17), the "most restrictive case of Merge applies to a single object, forming a singleton set.Restriction to this case yields the successor function, from which the rest of the theory of natural numbers can be developed in familiar ways."50 This is a huge 'if'.An anonymous reviewer asks: Could solving algebraic equations be such a fundamental logical operation as to explain whatever symmetry that is found in human brain?The reviewer is inclined to answer no.But at the same time, the reviewer states that "it is always worthwhile pointing out that every discrete structure in human language deserves group-theoretic analysis," and that "at least it must have value if it encourages future research in this direction." (13) a. Although this approach fails to predict the internal relative ranking of the lower three basic word orders, it nevertheless predicts a division between the higher three orders (<SOV>, <SVO>, and <VSO>) and the lower three orders (<VOS>, <OSV>, and <OVS>). c.As Lyle Jenkins suggests, the unmarked word order asymmetry is expressible as a group-theoretical factor (included in Chomsky's third factor): "word order types would be the (asymmetric) stable solutions of the symmetric still-to-be-discovered 'equations' governing word order distribution."The "symmetric equation" is a linear function (transformation) f(x) = y, where the mapping function f consists of various internal merge operations that are expressed as various symmetric transformations (rigid movements) of an equilateral triangle, the input x is the universal base vP that is expressed as the identity triangle, and y is the respective output tree that is expressed as the output triangle that preserves symmetry after transformation. d. The gross computational cost is the same in all languages.The more energy the system uses for the basic word order formation, the less energy is left for other operations (the law of conservation of energy). Our model predicts that the <SOV> type is the most tolerant toward phonetically null entities and operations in which C HL needs more energy to locate them and keep track of the result, the <SVO> type less tolerant, and the <VSO> type still less tolerant. e. The unmarked ordering asymmetry obeys a physical law that has algebraic and geometrical expressions.Ordering is not accidental, contra Chomsky (2012). 
Figure 3: Symmetrical Operations of an Equilateral Triangle
Figure 4: Identity Triangle Expresses the Universal Base vP
Figure 5: C HL Transformation Deriving the Major Three Unmarked Orders
Figure 6: Geometrical Transformations Deriving the Major Three Unmarked Orders
Table 1: Unmarked Word Order Asymmetry Produced by C HL. (Let us first look at what typological studies have found with respect to the probability of unmarked (basic) word order asymmetry and see how far we can go within the geometrical cost approach.)
Table 2: Transformations and Costs for {S, O, V}
Table 3: Cost is balanced
Decreased secretion of adiponectin through its intracellular accumulation in adipose tissue during tobacco smoke exposure Background Cigarette smoking is associated with an increased risk of type 2 diabetes mellitus (T2DM). Smokers exhibit low circulating levels of total adiponectin (ADPN) and high-molecular-weight (HMW) ADPN multimers. Blood concentrations of HMW ADPN multimers closely correlate with insulin sensitivity for handling glucose. How tobacco smoke exposure lowers blood levels of ADPN, however, has not been investigated. In the current study, we examined the effects of tobacco smoke exposure in vitro and in vivo on the intracellular and extracellular distribution of ADPN and its HMW multimers, as well as potential mechanisms. Findings We found that exposure of cultured adipocytes to tobacco smoke extract (TSE) suppressed total ADPN secretion, and TSE administration to mice lowered their plasma ADPN concentrations. Surprisingly, TSE caused intracellular accumulation of HMW ADPN in cultured adipocytes and in the adipose tissue of wild-type mice, while preferentially decreasing HMW ADPN in culture medium and in plasma. Importantly, we found that TSE up-regulated the ADPN retention chaperone ERp44, which colocalized with ADPN in the endoplasmic reticulum. In addition, TSE down-regulated DsbA-L, a factor for ADPN secretion. Conclusions Tobacco smoke exposure traps HMW ADPN intracellularly, thereby blocking its secretion. Our results provide a novel mechanism for hypoadiponectinemia, and may help to explain the increased risk of T2DM in smokers. Introduction Over 1.3 billion people smoke worldwide, and even more are exposed to second-hand smoke. Smokers often exhibit impairments in insulin-mediated glucose handling and an increased incidence of type 2 diabetes mellitus (T2DM) [1]. Smoking cessation improves these conditions [2]. Nevertheless, mechanisms by which smoking impairs insulin-stimulated glucose metabolism and increases T2DM are still unclear. Materials We purchased polyclonal antibodies against mouse adiponectin and ERp44 (endoplasmic reticulum [ER] resident protein of 44 kDa) from Cell Signaling. Monoclonal antibodies against Ero1 L-α (ER oxidoreductase 1-Lα) and GAPDH, as well as secondary antibodies (anti-rabbit and anti-mouse IgG horseradish peroxidase conjugates), were from Santa Cruz. The antibody against DsbA-L (disulfidebond A oxidoreductase-like protein) was from Abcam. Tobacco smoke extract (TSE, 100%) with water-soluble components was prepared by using a Kontes gas-washing bottle to bubble mainstream smoke from research cigarettes through serum-free, phenol red-free RPMI media containing 0.2% BSA (RPMI/BSA), followed by filtration (0.22 μm) and standardization according to their absorbance at 320 nm, as we previously published [12,13]. Cell culture, preparation of primary mouse adipocytes, and TSE exposure Murine 3T3-L1 preadipocytes (ATCC) were cultured and differentiated into adipocytes as described [14]. Briefly, two days after reaching 100% confluence, the cells were stimulated for an additional two days with FBS/DMEM containing 100 nM insulin, 0.5 M IBMX, 0.25 μM dexamethasone, and 1 μM rosiglitazone. Cells were then maintained in FBS/DMEM medium with 100 nM insulin for another 2 days to differentiate into mature adipocytes with fat droplets. Cells were serumstarved for 3 h in DMEM containing 0.2% BSA, followed by exposure to 0-1.5% TSE for 0-20 h. Primary mouse adipocytes were prepared as described [15]. 
Briefly, epididymal adipose tissue from wild-type C57BL/6 mice (Jackson Laboratory) was placed in pre-warmed DMEM with 10% FBS and penicillin/streptomycin, and then minced into 5-10 mg pieces. Minced tissue fragments were filtered through a nylon mesh (350-μm pore size) and washed with DMEM. Then 200-300 mg of minced, filtered tissue was placed into 1 ml DMEM with 0.2% BSA and penicillin/streptomycin for 18 h before being treated without or with 1.5% TSE for an additional 20 h. At the end, the supernatants were collected and the explants were lysed for ADPN immunoblots.

TSE exposure of wild-type mice To mimic the effects of tobacco smoke exposure on a non-respiratory organ with well-controlled dosing, we followed the recently established methodology of intraperitoneal administration of TSE [16][17][18]. Wild-type mice were injected intraperitoneally in the lower left quadrant of the abdomen with 400 μl of pre-warmed, filtered TSE diluted to 20% strength in RPMI-1640 (this amount of TSE is equivalent to smoking 2 packs of cigarettes for a 60-kg person) or RPMI-1640 alone (control) on days 1, 3, 5, 8, and 10 [16]. Twenty-four hours after the final injection, the mice were euthanized by an overdose of pentobarbital. We collected whole blood by cardiac puncture and then epididymal adipose tissue from the lower right abdominal quadrant, away from the application sites. All animal protocols were approved by the Institutional Animal Care and Use Committee of the Philadelphia Veterans Administration Medical Center.

Immunoblots Immunoblots were performed as described in our previous publications [12,13]. For detection of adiponectin oligomers and multimers, cells were lysed in non-reducing lysis buffer and loaded onto a gel without boiling.

Determination of adiponectin concentration Total adiponectin concentrations in conditioned media from control and TSE-treated 3T3-L1 adipocytes and in plasma from control and TSE-treated mice were measured by ELISA for mouse ADPN according to the manufacturer's instructions (BioVendor).

Data analysis All column and line graphs depict mean ± SEM of data that passed tests for normality. Comparisons amongst three or more groups were performed using one-way analysis of variance (ANOVA), followed by pairwise comparisons using the Student-Newman-Keuls (SNK) test, with p < 0.05 considered significant. Comparisons between two groups used Student's unpaired, two-tailed t-test.

Results and discussion Tobacco smoke exposure decreases secretion of ADPN while inducing its intracellular accumulation We found that TSE exposure caused dose- and time-dependent suppression of total ADPN secretion from 3T3-L1 adipocytes, while increasing total intracellular ADPN detected by immunoblots (Figure 1A-1D). Viability of the adipocytes was unaffected by these low concentrations of TSE (not shown). Inhibition of total ADPN secretion into the conditioned medium from TSE-exposed 3T3-L1 adipocytes was confirmed by ADPN ELISA (Figure 1E). In addition, exposure of mouse primary adipocytes to 1.5% TSE ex vivo for 20 h significantly suppressed ADPN secretion, while increasing intracellular ADPN content (Figure 1F). Importantly, exposure of mice to TSE also induced a large decrease in plasma concentrations of total ADPN in vivo (Figure 1G), consistent with prior publications in smoke-exposed mice [11] and in human smokers [6,7]. Thus, smoke-induced retention of ADPN within adipocytes contributes to the decreased secretion of ADPN from adipose tissue and low plasma levels of ADPN in smokers [6,7,9,10].

Figure 1. Tobacco smoke exposure decreases secretion of ADPN while inducing its intracellular accumulation. Panels A,B: Representative immunoblots (A) and summary statistics (B) for dose-dependent effects of TSE on ADPN accumulation in conditioned medium and in cellular homogenates of 3T3-L1 adipocytes during a 20-h incubation. Panels C,D: Representative immunoblots (C) and summary statistics (D) of the time course of the effects of TSE on ADPN accumulation in conditioned medium and in cellular homogenates of 3T3-L1 adipocytes exposed to 1.5% TSE for 0-20 h. Panel E: ADPN concentrations measured by ELISA in the culture supernatants of 3T3-L1 adipocytes exposed for 20 h to 0 (control) or 1.5% TSE. Panel F: Immunoblots of ADPN in conditioned medium and in cellular homogenates of primary mouse adipocytes treated without or with 1.5% TSE for 20 h. Panel G: Immunoblots of total plasma ADPN in mice after RPMI (Control) or TSE injections. Panels B, D, E, G, n = 3-5. In panels B and D, P < 0.01 by ANOVA of all cellular values, and P < 0.01 by ANOVA of all medium values. *P < 0.05, **P < 0.01, ***P < 0.001 vs. control values (0% TSE or t = 0) by the SNK test. In panels E and G, Student's t-test was used.

Tobacco smoke exposure traps HMW ADPN intracellularly Among the three different multimeric forms of ADPN, HMW ADPN has been shown to be the most biologically active [4,5] in promoting insulin-induced glucose handling [3][4][5][6]. In the current study, we assessed the three major multimeric forms of ADPN by immunoblots under non-reducing conditions [19]. We found that the decrease in total ADPN secretion from cultured 3T3-L1 adipocytes after TSE exposure (Figure 1A-E) was mainly attributable to decreased secretion of HMW ADPN (Figure 2A,B), accompanied by increased intracellular accumulation of HMW ADPN (Figure 2A,B). Likewise, we found that mice injected with TSE exhibited a loss of mainly HMW ADPN from plasma (Figure 2C) while HMW ADPN accumulated in their adipose tissue (Figure 2D).

Tobacco smoke exposure dysregulates the expression of ADPN chaperones Assembly and secretion of adiponectin oligomers from adipocytes is tightly regulated by thiol redox status in the ER through ERp44 and Ero1-Lα [20,21]. ERp44 is an ER resident chaperone that inhibits the secretion of ADPN through thiol-mediated retention, while Ero1-Lα releases HMW adiponectin from ERp44 [20,21]. In addition, DsbA-L has been shown to promote adiponectin multimerization and secretion [22,23]. In the current study, we found that TSE exposure of cultured adipocytes induced time- (Figure 3A) and dose- (not shown) dependent up-regulation of ERp44 and down-regulation of DsbA-L. TSE exposure, however, did not affect the amount of Ero1-Lα in adipocytes (Figure 3A), suggesting that the high intracellular levels of ERp44 would be unopposed. Additionally, our confocal microscopic analyses revealed that TSE exposure of adipocytes increased intracellular staining for ERp44 (red) and ADPN (green, Figure 3B). Importantly, this intracellular ADPN colocalized with ERp44 (yellow color in the merged images, Figure 3B), indicating ADPN accumulation in the ER, presumably physically associated with ERp44.

Figure 3 (legend, partially recovered): … DsbA-L in 3T3-L1 adipocytes exposed to 1.5% TSE for 0-20 h. Panel B shows confocal fluorescent micrographs of representative 3T3-L1 cells that were stained simultaneously with anti-ERp44 (red) and anti-ADPN (green) antibodies, as well as DAPI (blue; nuclear stain). The yellow color in the merged images (Merge) demonstrates co-localization of ERp44 and ADPN in the ER around the nucleus.

We conclude that tobacco smoke exposure suppresses ADPN secretion from adipocytes by specifically trapping HMW ADPN intracellularly, thereby contributing to decreased blood levels of ADPN in smokers. These results provide a novel mechanism for hypoadiponectinemia, which may help to explain impaired insulin-mediated glucose handling and the increased risk of T2DM in smokers.
Open-Circuit Fault Detection and Classification of Modular Multilevel Converters in High Voltage Direct Current Systems (MMC-HVDC) with Long Short-Term Memory (LSTM) Method Fault detection and classification are two of the challenging tasks in Modular Multilevel Converters in High Voltage Direct Current (MMC-HVDC) systems. To directly classify the raw sensor data without certain feature extraction and classifier design, a long short-term memory (LSTM) neural network is proposed and used for seven states of the MMC-HVDC transmission power system simulated by Power Systems Computer Aided Design/Electromagnetic Transients including DC (PSCAD/EMTDC). It is observed that the LSTM method can detect faults with 100% accuracy and classify different faults as well as provide promising fault classification performance. Compared with a bidirectional LSTM (BiLSTM), the LSTM can get similar classification accuracy, requiring less training time and testing time. Compared with Convolutional Neural Networks (CNN) and AutoEncoder-based deep neural networks (AE-based DNN), the LSTM method can get better classification accuracy around the middle of the testing data proportion, but it needs more training time. Introduction Modular multilevel converters (MMCs) have been widely applied due to their advantages of modularity, extensibility, high-quality output, and high efficiency [1][2][3]. An MMC is formed by cascading multiple sub-modules (SMs) with the same structure. In a high voltage direct current (HVDC) transmission power system, the numbers of SMs are always up to several hundreds or thousands, which may induce some faults of SMs more likely to arise under complex and harsh conditions. The most application of SM circuits is the half-bridge circuit topology (HB-SM), which consists of two wire-bound insulated gate bipolar transistor (IGBT) modules along with their corresponding antiparallel diodes and a capacitor [4,5]. The HB-SM is commonly used due to its simplicity in terms of component count, lower losses, and ease of control. However, the main disadvantage in HB-SM is that it cannot provide blocking against DC fault. IGBT damage is the most common cause of sub-module failure [6], generally due to short-circuit faults or open-circuit faults [7]. Compared to the IGBT short-circuit fault, the IGBT open-circuit faults can last for a long time without being detected, which can deteriorate the output of the MMCs and can make the capacitors in the faulty SMs over-charged [8]. Therefore, this paper is concerned with the IGBT open-circuit fault diagnosis of Modular Multilevel Converters in High Voltage Direct Current (MMC-HVDC) systems. Recently, several fault diagnosis methods have been discussed for the MMCs. These methods can be categorized into hardware-based and software-based methods [5]. The hardware-based methods are not suitable for the MMCs in HVDC systems because of the large number of the SMs in the MMC. Software-based methods can further be categorized into model-based methods and signal processing-based methods [9,10], according to whether the monitoring characteristics are inner characteristics or output characteristics [11]. The observers such as Luenberger observer [12], sliding mode observer [13,14], and Kalman filter observer [15,16], are prevalent model-based methods used to provide the detection references. Signal processing-based methods have been considered reliable and effective by several researchers [17][18][19][20] in recent years. 
However, both model-based and signal processing-based methods need to obtain suitable and appropriate inner features or thresholds of specific derived indices, such as zero-crossing current slope or harmonic content, which can degrade the robustness of fault diagnosis. An alternative to traditional software-based approaches, artificial intelligence-based methods have been developed, which provide powerful tools to extract useful information for fault diagnosis based on historical data. Neural networks (NNs), one of the most basic artificial intelligence methods, have been used to detect a fault condition in the HVDC systems [21][22][23]. However, NNs need lots of training data and training time. Support vector machine (SVM) [24] and its optimization algorithms [25][26][27][28] have been employed to diagnose the faults of MMC. The support tensor machine (STM) [29], a generalization of the SVM, has been introduced to detect faults for MMC. However, in real-world applications, these artificial intelligent methods depend on feature extraction techniques. The quality of feature extraction directly affects the accuracy and efficiency of fault diagnosis. Deep learning methods can avoid the problems of feature extraction, but the related publications are very limited in the application of MMC-HVDC systems. Convolutional Neural Networks (CNN) [30,31], and 1-D CNN [32] are proposed for fault classification and fault location in MMC-HVDC. Our research group proposed CNN, AutoEncoder-based deep neural network (AE-based DNN), and SoftMax classifier for MMC [33], the results showed that these deep learning methods have good potential. It is worth noting that the fault diagnosis of MMC so far has been mostly concerned on model-based research [34][35][36][37], less on data-driven diagnosis methods [38], and only some pioneering work has arisen in the publications about deep learning fault diagnosis of MMC. Therefore, to develop a new deep learning method used for IGBT open-circuit fault diagnosis of MMC-HVDC systems to shorten such a gap, we aim to provide an LSTM approach to address the above-mentioned problems. The main contributions of this paper are outlined below. (1) The proposed method has the ability to achieve accurate detection and classification of IGBT open-circuit faults but also can reduce the computational cost of sensing and learning from a large number of measurements. This paper is organized as follows: Section 2 describes the MMC open-circuit faults and simulation experiments. Section 3 introduces a Recurrent neural network (RNN) and LSTM. Fault diagnosis of MMC-HVDC systems with LSTM is evaluated in Section 4. Section 5 compares LSTM with BiLSTM, CNN, and AE-DNN methods. Conclusions are drawn in Section 6. MMC Sub-Module and Open-Circuit Faults A typical structure of a three-phase MMC consists of six arms as shown in Figure 1 [33]. Each arm consists of one inductor (L) and several identical SMs. Each SM involves one DC storage capacitor (C) and a half-bridge, which is composed of two IGBTs (i.e., T 1 and T 2 ) and two diodes (D 1 and D 2 ). The circuit of SM is shown in Figure 2. Open-circuit faults of an SM can be sorted into T 1 fault and T 2 fault. When any fault occurs, the SM can be in ON (s i = 1) state or OFF (s i = 0) state, where s i is the corresponding switch function. Table 1 illustrates the output voltages of SM in different states for both normal and abnormal cases. In Table 1, i sm is SM current, u c is the capacitor voltage, and u sm is the output voltage of SM. 
Simulation Experiments In the PSCAD/EMTDC software environment, a two-terminal model of the MMC-HVDC transmission power system was simulated for this study. The system parameters of the operating environment and the MMC are shown in Table 2 [33]. The data recorded for this study are the AC-side three-phase currents (ia, ib, ic) and the three-phase circulating currents (idiffa, idiffb, idiffc). The circulating current and the bridge currents can be related mathematically using the following equation: idiffk = (ikp + ikn) / 2, where k stands for the a, b, and c phases, while p and n denote the upper and lower arms of the MMC, respectively. The symbols ikp and ikn are, respectively, the currents of the upper bridge and lower bridge of each of the three phases. Since the values of iap, ibp, icp, ian, ibn, and icn can be directly measured, we recorded them instead of idiffa, idiffb, and idiffc. Consequently, we recorded nine parameters, i.e., ia, ib, ic, iap, ibp, icp, ian, ibn, and icn (see Figure 1). In our test, Table 3 [33] shows seven different health conditions of the MMC. For each of the seven states of the wind-farm-side MMC, the values of the nine parameters described above were recorded. There are six types of faults occurring at different IGBTs at different times: IGBT open-circuit faults were introduced manually in the A-phase lower, A-phase upper, B-phase lower, B-phase upper, C-phase lower, and C-phase upper SMs, i.e., at six different bridge locations. The total recording time was 0.1 s, while the IGBT open-circuit fault duration was varied from 0.03 s to 0.07 s. The time step is 2 µs and the sampling frequency is 0.5 MHz. We collected 700 cases of the seven different health conditions.

RNN and LSTM Recurrent neural networks (RNNs) have become one of the important subfields of deep learning and have been widely used in the fields of speech recognition [39], rotating machine fault detection and classification [40], medical image segmentation [41], and natural language processing [42]. Figure 3 shows the RNN structure. In order to avoid the problems of gradient vanishing or exploding, a long short-term memory (LSTM) neural network, which involves creating a memory cell [43], is employed. Figure 4 shows the LSTM structure, which illustrates the flow of data at time step t. The cell state at time step t is given by ct = ft ⊙ ct−1 + it ⊙ gt, where ⊙ stands for the Hadamard product (element-wise multiplication of vectors). The output (hidden) state at time step t is given by ht = ot ⊙ tanh(ct). Here are the calculation procedures of the LSTM cell at time step t: it = σ(Wi xt + Ri ht−1 + bi), ft = σ(Wf xt + Rf ht−1 + bf), gt = tanh(Wg xt + Rg ht−1 + bg), ot = σ(Wo xt + Ro ht−1 + bo), where σ(.) stands for the sigmoid function given by σ(z) = (1 + e−z)−1, and x is the input of the time-series data. In an LSTM layer, the input weights W, the recurrent weights R, and the bias b need to be determined by learning. The matrices W, R, and b are concatenations of the input weights, the recurrent weights, and the biases of each component, respectively: W = [Wi; Wf; Wg; Wo], R = [Ri; Rf; Rg; Ro], b = [bi; bf; bg; bo], where i, f, g, and o mark the input gate, forget gate, layer input, and output gate, respectively.

Design of LSTM The data used in this study are collected from the two-terminal simulation model of the MMC-HVDC transmission power system described in Section 2. The seven MMC conditions include one normal condition and six IGBT open-circuit fault conditions in the lower and upper arms of the MMC.
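As a concrete reference for the gate computations above, here is a minimal numpy sketch of a single LSTM time step (my own illustration, not the implementation used in the paper); the weight matrices W, R and bias b are stacked in the (input gate, forget gate, layer input, output gate) order described in the text, and their values here are random placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, R, b):
    """One LSTM time step.

    x_t    : input at time t, shape (n_in,)
    h_prev : previous hidden state, shape (n_hid,)
    c_prev : previous cell state, shape (n_hid,)
    W, R, b: stacked weights [i; f; g; o], shapes (4*n_hid, n_in), (4*n_hid, n_hid), (4*n_hid,)
    """
    n_hid = h_prev.size
    z = W @ x_t + R @ h_prev + b
    i = sigmoid(z[0 * n_hid:1 * n_hid])   # input gate
    f = sigmoid(z[1 * n_hid:2 * n_hid])   # forget gate
    g = np.tanh(z[2 * n_hid:3 * n_hid])   # layer input (cell candidate)
    o = sigmoid(z[3 * n_hid:4 * n_hid])   # output gate
    c_t = f * c_prev + i * g              # cell state (Hadamard products)
    h_t = o * np.tanh(c_t)                # output (hidden) state
    return h_t, c_t

# Example with 9 input channels (the recorded currents) and a small hidden size.
rng = np.random.default_rng(0)
n_in, n_hid = 9, 4
W = rng.standard_normal((4 * n_hid, n_in))
R = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, R, b)
print(h.shape, c.shape)  # (4,) (4,)
```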
A total of 100 examples of each condition, nine current signals for each example, and 5001 time samples of each current signal were recorded. Every current signal represents a time-series sample, so the fault information of MMC-HVDC systems is well suited to an LSTM neural network. Key parameters such as the number of layers, hidden layer size, batches, epochs, time steps, and learning rate are very important to the performance of the LSTM. In order to increase the model generalization ability and reduce the network computation, we tested different values of these parameters. To minimize the training error, backpropagation is used to update the weights and biases. We selected the cross-entropy as the cost function to quantify the error between the estimated value and the true value: E(θ) = −Σi Σj tij ln yj(xi, θ), where tij is the indicator that the i-th example belongs to the j-th class, and yj(xi, θ) denotes the output for the i-th example. Adam, as a stochastic optimization method [44], is used to train the LSTM and to determine the network parameters, weights, and biases, because Adam can adaptively adjust the learning rate by using the mean and variance of the gradient and has been successful in learning-rate optimization. Adam [44] uses an element-wise moving average of both the parameter gradients and their squared values to update the network parameters: ml = β1 ml−1 + (1 − β1) ∇E(θl), vl = β2 vl−1 + (1 − β2) [∇E(θl)]², θl+1 = θl − α ml / (√vl + ε), where l denotes the iteration number, θ is the parameter vector, α is the learning rate, β1 is the decay rate of the gradient moving average, β2 is the decay rate of the squared-gradient moving average, ∇E(θ) is the gradient of the loss function E(θ), m is the first-moment estimate of the gradient, v is the second-moment estimate of the gradient, and ε is a small constant added to avoid division by zero. Here, we set α at 0.001, β1 at 0.9, β2 at 0.999, and ε = 10−8.

Results and Analysis In this section, the parameters of the LSTM are selected, and the performance of the proposed method is illustrated and discussed.

Parameters Selection of LSTM To design an LSTM structure with higher classification accuracy, several parameters such as hidden layer size, mini-batch size, the maximum number of epochs, and learning rate need to be discussed and determined. In this paper, the quantification of hidden layer size, batches, and epochs has been explored to select better values based on a comparative evaluation of the performances. The learning rate is set to 0.001. The accuracy and computation time at different hidden layer sizes are depicted in Figure 5, when the maximum number of epochs is set to 50 and the mini-batch size is set to 7. It can be seen that as the hidden layer size rises from 100 to 300, the computation time has a distinct peak with the hidden layer size set to 260, while the accuracy curve keeps rising as the layer size gets bigger. In theory, the abscissa of the intersection of the two curves would give the optimal hidden layer size. However, in the case of a small difference in time consumption, we are more concerned about the classification accuracy. Therefore, we select the hidden layer size as 300 by considering accuracy and computation time.
Results and Analysis

In this section, the parameters of the LSTM are selected, and the performance of the proposed method is illustrated and discussed.

Parameters Selection of LSTM

To design an LSTM structure with high classification accuracy, several parameters such as the hidden layer size, mini-batch size, maximum number of epochs, and learning rate need to be discussed and determined. In this paper, the hidden layer size, mini-batch size, and number of epochs have been explored to select suitable values based on a comparative evaluation of the resulting performance. The learning rate is set to 0.001.

The accuracy and computation time at different hidden layer sizes are depicted in Figure 5, with the maximum number of epochs set to 50 and the mini-batch size set to 7. It can be seen that as the hidden layer size rises from 100 to 300, the computation time has a distinct peak at a hidden layer size of 260, while the accuracy curve keeps rising as the layer size gets bigger. In theory, the abscissa of the intersection of the two curves would be the optimal hidden layer size. However, given the small differences in time consumption, we are more concerned with the classification accuracy. Therefore, we select a hidden layer size of 300 by considering both accuracy and computation time.

The accuracy and computation time at different mini-batch sizes are depicted in Figure 6, with the maximum number of epochs set to 50 and the hidden layer size set to 300. It can be seen that as the mini-batch size increases from 1 to 7, the accuracy curve and the computation time rise distinctly, and beyond that they tend to go down. We select a mini-batch size of 7.

The accuracy and computation time at different maximum numbers of epochs are depicted in Figure 7, with the hidden layer size set to 300 and the mini-batch size set to 7. It can be seen that as the maximum number of epochs rises from 10 to 80, the accuracy curve has a distinct peak at 50 epochs, while the computation time keeps rising as the number of epochs increases. Considering both accuracy and computation time, the maximum number of epochs is selected as 50.

Detection and Classification of MMC-HVDC System with LSTM

According to the above studies, we set the LSTM hidden layer size to 300, the mini-batch size to 7, the maximum number of epochs to 50, and the learning rate to 0.001.
We conducted experiments with the testing data proportion ranging from 0.1 to 0.9. For each testing data proportion, we ran the experiment 20 times, and the results reported below are the averages of these 20 runs. The testing data proportion is the ratio of the number of test samples to the total number of samples. The detection accuracy of the LSTM is given in Table 4. In terms of fault detection, the network output is divided into two types: normal and abnormal. We can see from Table 4 that the detection accuracy of the LSTM is 100% at every testing proportion.

The results for the training data and testing data are shown in Figure 8. STD in Figure 8 denotes the standard deviation, a measure used to quantify the amount of variation or dispersion of the data values. It is observed that, as the testing data proportion rises, the classification accuracy for the training data stays steady (except for a small dip at a testing data proportion of 0.8), while the classification accuracy for the testing data declines. The maximum mean accuracy on the testing dataset is 98.4% at a testing data proportion of 0.1, and the minimum average accuracy is 92.6% at a testing data proportion of 0.9. The standard deviation of the classification accuracy for the training dataset increases with increasing testing data proportion. For the testing dataset, however, the standard deviation of the classification accuracy at the ends of the testing data proportion range is greater than around the middle. Moreover, the standard deviation of the classification accuracy for the training dataset is less than that for the testing dataset at all data proportions.

Table 5 is a confusion matrix of the classification results for each condition at testing data proportions of 0.2, 0.5, and 0.8. From Table 5, it is observed that the recognition of the normal condition is 100% at testing data proportions of 0.2, 0.5, and 0.8.
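As a concrete illustration of the evaluation protocol described above (repeated random splits at a given testing proportion, averaging over 20 runs, and collapsing the seven classes into normal/abnormal for detection), the following sketch uses scikit-learn utilities with toy data; a logistic-regression classifier stands in for the trained LSTM purely to keep the example runnable, so it is not the authors' code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

def evaluate(X, y, test_proportion, runs=20, normal_label=0):
    """Mean/std classification and detection accuracy over repeated random splits."""
    cls_acc, det_acc, cm = [], [], None
    for seed in range(runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_proportion, stratify=y, random_state=seed)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # stand-in for the LSTM
        y_hat = clf.predict(X_te)
        cls_acc.append(accuracy_score(y_te, y_hat))                # 7-class accuracy
        det_acc.append(accuracy_score(y_te != normal_label,        # normal vs. abnormal
                                      y_hat != normal_label))
        cm = confusion_matrix(y_te, y_hat)                         # last run's confusion matrix
    return np.mean(cls_acc), np.std(cls_acc), np.mean(det_acc), np.std(det_acc), cm

# Toy data standing in for the 700 recorded cases (7 conditions x 100 examples).
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 20))
y = np.repeat(np.arange(7), 100)
print(evaluate(X, y, test_proportion=0.2)[:4])
```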
Comparison

To validate the effectiveness of the proposed method, several deep learning methods are used for comparison. A bidirectional LSTM (BiLSTM) is a sequence processing model that consists of two LSTMs: one accesses past information in the forward direction, and the other accesses future information in the reverse direction. The use of a BiLSTM may not make sense for all sequence prediction problems, but it can offer benefits in terms of better results in the domains where it is appropriate [45]. Therefore, we compare the LSTM with a BiLSTM in terms of detection accuracy, classification accuracy, training time, and testing time, with the testing data proportion ranging from 0.1 to 0.9. We also compare it with a CNN and an AE-DNN. The implementation details of the CNN and the AE-DNN are described in [33].

Comparison with BiLSTM

For the comparison, we set the parameters of the BiLSTM to the same values as those of the LSTM. The results are the arithmetic average of 20 runs and include the detection accuracy, classification accuracy, training time, and testing time. The comparisons are detailed in Table 6. From Table 6, we can see that both the LSTM and the BiLSTM achieve a detection accuracy of 100%. The classification accuracy of the BiLSTM is similar to that of the LSTM, but the BiLSTM requires more training time and testing time.

Comparison with CNN and AE-DNN

Comparing with the CNN and the AE-based DNN in Figure 9, it is observed that, in terms of detection accuracy, the proposed method (LSTM) performs outstandingly well at each testing data proportion. When the testing data proportion ranges from 0.1 to 0.7, all of these deep learning methods can detect faults perfectly. Comparing with the CNN and the AE-based DNN in Figure 10, it is observed that the proposed method (LSTM) offers higher classification accuracy at testing data proportions of 0.3, 0.4, 0.5, and 0.7. When the testing data proportion is 0.1, 0.2, or 0.9, i.e., at the ends of the range, the CNN has better classification accuracy than the LSTM and the AE-based DNN. Figure 11 shows the training time and testing time of the three methods. We can see that at each proportion the LSTM method spends more training time than the other methods and more testing time than the CNN. We also see that the LSTM spends less testing time than the AE-based DNN at testing data proportions of 0.1-0.6.

Conclusions

Fault diagnosis of MMC-HVDC has become one of the most important directions in research and practice. This paper presented an LSTM deep learning method for fault detection and classification that avoids the design of handcrafted features and classifiers. To validate its effectiveness, we compared it with a BiLSTM and two other deep learning methods, a CNN and an AE-based DNN, using raw current sensor data of MMC-HVDC. The simulation results with data generated in PSCAD/EMTDC show that the LSTM and the BiLSTM have the best detection accuracy of 100%.
The CNN and the AE-DNN can achieve a high detection accuracy of more than 99.7%, with the AE-based DNN slightly better than the CNN. Additionally, all four methods achieve high classification accuracies. Compared with the BiLSTM, the LSTM has similar classification accuracy and requires less training time and less testing time. Compared with the CNN and the AE-DNN, the LSTM provides better classification accuracy around the middle of the testing data proportions, though it needs more training time.

Data Availability Statement: The data presented in this study may be available on request from the first author, Q. Wang. The data are not publicly available due to privacy reasons.
5,923.4
2021-06-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Prevention from Dike Failure by Emergency Flood Control Measures

The risk of failure of a flood protection system must always be taken into account. During flooding events, appropriate interim protection systems must be at hand and ready to be deployed to support weak and overloaded structures. Usually sandbags, possibly in combination with fascines and geotextiles, are used to defend endangered dike stretches in case of emergency. Sandbags offer highly flexible employment; however, the enormous personnel, material, and time efforts required for installation and dismantling are problematic. Therefore, more effective constructions for emergency flood control are needed. Within the research projects HWS-Mobile, DeichSCHUTZ, and DeichKADE, different constructions based on the use of flexible membranes have been developed or are in development to ensure easy and effective countermeasures for securing dike stretches that are at risk of breaching. Successful applications of the developed systems took place during the catastrophic flood event at the river Elbe in Northern Germany in 2013.

Introduction

Natural hazards have become natural disasters since people have been settling in flood areas. During the past decades, an increasing population density and concentration of assets in low-lying coastal and river areas have resulted in an increasing need for protection against floods. Therefore, the demand for technical measures for flood control is growing. However, technical structures can never provide complete protection against inundation, only a limited one; a problem that is, not least, driven by restricted financial budgets. Therefore, the degree of safety depends on the cost-benefit assessment of the measure.

The possibility that a flood protection system can fail must always be taken into account. In such an emergency, appropriate interim protection systems must be at hand and ready to be deployed to support weak and overloaded structures.

In the research project HWS-Mobile, funded by the German Federal Ministry of Economics and Technology, different prototypes of water-filled tube systems for emergency flood control were developed and tested in situ as well as on test sites. After the project HWS-Mobile was completed, the water-filled tube constructions were tested and certified by the German Technical Inspection Agency TÜV Nord for deployment in emergency flood control. These innovative systems offer the following advantages:

- Low consumption of resources
- Rapid deployment
- Only few personnel required
- Deployable on different undergrounds without the need for any intrusive/destructive installations

The structures are non-stationary water-filled tube systems made of reinforced plastic membranes. They can be used for strengthening an endangered dike line during long-lasting high water levels.

In the following, the construction, function, and deployment of the marketable water-filled tube systems for dike strengthening, FLUTSCHUTZ Impoundment and FLUTSCHUTZ Load Filter, are described. Additionally, information is given on the ongoing research projects DeichSCHUTZ and DeichKADE, which also deal with improvements in emergency flood control at endangered dike stretches.
Conventional dike protection systems

To secure dike sections that are at risk of breaching, structures made entirely of sandbags, or of sandbags in combination with fascines (cylindrical bundles of brushwood or similar material) or geotextiles, are commonly used. The sandbags can be deployed either on the waterside or, most commonly, on the crest or on the landside dike slope. The latter conventional protection systems are described in the following.

Protection against overtopping

Water overflowing a dike destroys the inner dike face and dike base within a very short time, and consequently the dike breaches. In case flood water levels are expected to exceed the crest of a dike, the structure can only be secured by heightening the dike. Conventionally, this is done with sandbags packed trapezoidally onto the dike crest (Figure 1).

Figure 1. Dike crest heightening by sandbags

By heightening a dike crest, the safety against overtopping increases, but the overall stability of the dike decreases due to higher loads and higher hydraulic gradients. Therefore, the dike heightening must not exceed 50 cm and must be installed at the waterside of the dike crest.

The width of a dam made of sandbags is approximately twice the height of the dam. For a dam 50 cm high, 100 cm wide, and 1 km long, approximately 35,000 sandbags are needed, corresponding to 70 truckloads of filled sandbags. Neglecting the effort of transporting the sandbags to the endangered dike section, 165 helpers are needed over 8 hours (1,320 working hours) to fill, load, and pack the sandbags for a 0.5 m high and 1 km long dam (calculation based on [1]).

Protection against increased local seepage

Increased local seepage in the lower regions of the landside slope must be stopped, or at least hampered, to prevent piping. For this, a sandbag impoundment is built around the leakage, in which the water or the water/sediment mixture is collected. The resulting hydraulic counter-pressure reduces or prevents further seepage.

If seepage occurs around the toe of the landside slope with a low gradient, or behind the dike, impoundments are built as a ring dike (Figure 2). U-shaped impoundments are built around spills further up the landside slope or on landside slopes with steep gradients (Figure 3). In general, it must be ensured that the majority of sandbags are placed at the toe and on the lower landside slope, to impose a higher load and counteract ground failure and breaching of the slope. The height of the impoundment depends on the effective hydraulic gradient. The hydraulic counter-load imposed by the impoundment is a function of its height. Therefore, it has to be built to a height that accommodates the water level at which seepage is blocked, or at least reduced to a level at which piping no longer poses a threat.

Impoundments made from sandbags have a trapezoidal cross-section. Depending on the height, the base is made of three or more sandbags placed lengthwise. On top of this, criss-cross layers are placed to create a watertight bond, in which membranes can be integrated to improve imperviousness. Typically, about 800 to 1,000 sandbags are needed to build an impoundment with a height of 80 cm. When building an impoundment, it has to be taken into account that, by blocking the seepage in the dike, the seepage line will rise. This may cause leakages in the vicinity of the impoundment that will also need to be secured.
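The sandbag logistics quoted above for the crest heightening can be cross-checked with a short back-of-envelope calculation; the derived quantities below (bags per truck, bags per helper-hour) follow directly from the figures in the text and involve no additional assumptions:

```python
# Back-of-envelope check of the sandbag logistics quoted for a 1 km crest heightening.
dam_length_m = 1000.0           # 1 km of dike crest
dam_height_m = 0.5
dam_width_m = 2 * dam_height_m  # width is roughly twice the height
sandbags = 35_000
truckloads = 70
helpers = 165
shift_hours = 8

bags_per_truck = sandbags / truckloads            # 500 bags per truckload
working_hours = helpers * shift_hours             # 1,320 working hours in total
bags_per_helper_hour = sandbags / working_hours   # ~26.5 bags filled/loaded/packed per hour
bags_per_metre = sandbags / dam_length_m          # 35 bags per metre of 0.5 m high dam

print(bags_per_truck, working_hours, round(bags_per_helper_hour, 1), bags_per_metre)
# 500.0 1320 26.5 35.0
```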
Protection against increased laminar seepage

Extensive laminar seepage occurs especially on dikes with a narrow crest and a steep landside slope, and when prolonged high water levels have caused the seepage line in the dike to rise. Although minor seepage can generally be accepted, high flow velocities through the dike cause internal erosion, leading to subsidence and slumping or slides of the landside slope and, ultimately, piping.

When applying measures to prevent dike failure, it has to be ensured that the buoyancy of the dike is not increased and that seepage water is allowed to escape. Filters with an imposed load ensure sufficient drainage and provide support to stabilize a sliding landside slope.

If sandbags are used as the load, a permeable grid of crossed rods or brushwood (e.g., fascines) or drainage mats must be laid out to guarantee sufficient permeability of the filter layer. This grid is placed starting at the toe and then proceeding up the slope, followed by placing the sandbags in the same order (Figure 4). A sandbag load of three layers with a layer thickness of 0.3 m requires about 24 to 30 sandbags per square meter. Membranes or sheeting must not be used on the landside slope, as these block the seepage flow and prevent water from draining out of the dike, leading to a rise of the seepage line and to the dike becoming waterlogged.

If there is a lack of suitable material for building the filter layer, the supporting sandbag load must be intermitted every 2 meters by a gap of 0.2 m [3] (Figure 5).

Dikes can also be ballasted with a permeable layer of bulk material, such as gravel or the like. This method is suitable for reinforcing longer dike stretches if appropriate material, transport, and equipment for placing the material are available. Layers of solid bulk material have to be filter-stable and, if graded, have to be stratified from fine to coarse material. This layer is built up beginning inland of the toe and proceeding up the slope, to prevent hydraulic failure in the lower region of the landside slope.

However, owing to the stress and vibration caused by the heavy equipment required for transporting and placing bulk material on waterlogged dikes, this method is risky, if not impossible. It could additionally weaken, damage, or even destroy the dike, the dike defence paths, and the adjoining hinterland, or cause vehicles to sink into the ground and get stuck.

Another method is a combination of sandbags and permeable bulk material, in which sandbags are placed as cross-ledgers supplemented by an extensive layer of gravel.

The above methods are also suitable for securing dikes that have begun sliding, always taking into account that transportation and placement of material onto the slope should not be carried out prior to reinforcing the dike toe. Currently, two research projects are ongoing that deal with the development of heightening systems for dikes made of flexible water-filled membranes (project DeichKADE), as well as protection sheets for the waterside dike slope to decrease the saturation height in the dike body (project DeichSCHUTZ).

In the following, the results of the completed research project HWS-Mobile as well as preliminary results of the ongoing projects are presented.

FLUTSCHUTZ Impoundment

The FLUTSCHUTZ Impoundment is a water-filled construction for emergency dike defence. Its purpose is to rapidly reduce and confine local seepage on the landward dike slope. For developing the FLUTSCHUTZ Impoundment, material and construction tests were carried out on an even surface and on different test dikes.
The FLUTSCHUTZ Impoundment is designed for a maximum impoundment depth of 0.90 m, which is comparable to the conventional THW sandbag impoundment. The FLUTSCHUTZ Impoundment can be deployed by two people within 15 minutes and replaces about 1,000 sandbags that would be required for a conventional impoundment of the same size. In contrast, for the deployment of a conventional impoundment, 20 helpers are needed to fill the sandbags and 7 helpers to place the sandbags at the site, both over a period of 1 hour. Additional time, helpers, and equipment are required for the transport of the filling material and the filled sandbags.

The FLUTSCHUTZ Impoundment was tested and certified for flood defence by the German Technical Inspection Agency TÜV [4] (Figure 7).

FLUTSCHUTZ Load Filter

The FLUTSCHUTZ Load Filter is a water-filled load filter for emergency dike protection during continued high water periods. Deployed on the landward slope and toe, it stabilizes dike sections in which extensive seepage has arisen against hydraulic failure or breaching.

For developing the FLUTSCHUTZ Load Filter, material and construction tests were carried out in Grunendeich on the tidal River Elbe (Figure 8), at the Mohnetal Dam (Figure 9), and on the test dike belonging to the College of the German Federal Agency for Technical Relief in Hoya (Figure 10). Simple handling makes the FLUTSCHUTZ Load Filter an alternative to the conventional sandbag-type load filters with fascines currently used by the THW, fire brigades, and other relief organisations. A 7.00 m x 3.50 m x 0.60 m FLUTSCHUTZ Load Filter element replaces about 600 sandbags required for building a load filter of equivalent size. It takes two people about 20 minutes to deploy a FLUTSCHUTZ Load Filter element. In contrast, for the deployment of a conventional load filter, 13 helpers are needed to fill the sandbags and 5 helpers to place the sandbags at the site, both over a period of 1 hour. Additional time, helpers, and equipment are required for the transport of the filling material and of the filled sandbags.

The FLUTSCHUTZ Load Filter was tested and certified for flood defence by the German Technical Inspection Agency TÜV.

Dike heightening system DeichKADE

The aim of the ongoing research project DeichKADE is the development of mobile, water-fillable constructions made of flexible membranes that can be fixed without additional anchorage, to heighten dikes during flood events in case overflowing is anticipated. The construction must offer a narrow footprint to enable its use also on old dikes with dike crest widths of only 2 m; it must allow a heightening against additional water levels of 0.5 m (see chapter 2.1), also on uneven dike crests; and it must remain stable over longer time spans of several days or a week without significant maintenance requirements.
In Figures 11 and 12, the production facilities of the research partners and manufacturers OPTIMAL and Karsten Daedler are shown. Different prototypes of the DeichKADE system, from the series of spring and summer 2015 as well as spring 2016, are shown in Figures 13, 14, and 15. It is expected to conclude the prototype development, including TÜV certification for the use of the construction in flood defence, by spring 2017.

In spring 2016, a large-scale test dike will be built at the College of the German Federal Agency for Technical Relief in Hoya. There, the prototype development of the protection system DeichSCHUTZ will be carried out, starting in summer 2016. The dimensions of the test facility are 30 m in length and 25 m in width, with a 3 m high dike construction (Figure 17).

During the 2013 flooding event, dike sections with locally concentrated seepage were successfully protected against internal erosion with FLUTSCHUTZ Impoundments (Figure 19). Curious onlookers followed the easy deployment of the structures by the German Federal Agency for Technical Relief, THW Group Hamburg-Nord (Figure 20). Based on the positive experiences made in the research project HWS-Mobile, follow-up projects are in execution to develop reliable and easy-to-handle systems for heightening dike stretches during a flooding event and to stabilize weak dike bodies by lowering the saturation degree with protection sheets at the waterside dike slope. Successful prototype development in these projects is anticipated in the year 2017.

Figure 2. Ring-shaped sandbag impoundment around the seepage area near the toe
Figure 4. Supporting the landside dike slope with a permeable layer loaded with sandbags
Figure 5. Supporting the landside dike slope with sandbag batches regularly intermitted by drainage gaps
Figure 6. Impoundment prototype on the LSBG test dike
Figure 7. TÜV certification test of the FLUTSCHUTZ Impoundment on the test dike of the College of the German Federal Agency for Technical Relief THW in Hoya
Figure 8. Load Filter prototype during a test in Grunendeich at the tidal River Elbe north of Hamburg
Figure 16. Laboratory tests concerning the saturation and desaturation of soils in a dam construction
Figure 17. Layout of the planned large-scale test dike at the College of the German Federal Agency for Technical Relief in Hoya
Figure 18. Construction of a load filter with sandbags to secure the landward slope of the Elbe Dike at Hitzacker in June 2013
Figure 19. THW Group Hamburg-Nord deploying a FLUTSCHUTZ Impoundment on the Elde Dike at Domitz during the Elbe high water in June 2013
Figure 21. Demonstrating deployment, dismantling and operation of the FLUTSCHUTZ Load Filter by THW Group Hamburg-Altona during the Elbe flood in June 2013
3,477.6
2016-01-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Optimized Speculative Execution to Improve Performance of MapReduce Jobs on Virtualized Computing Environment

Recently, virtualization has become more and more important in cloud computing to support efficient and flexible resource provisioning. However, the performance interference among virtual machines may affect the efficiency of the resource provisioning. In a virtualized environment where multiple MapReduce applications are deployed, the performance interference can also affect the performance of the Map and Reduce tasks, resulting in performance degradation of the MapReduce jobs. In order to ensure the performance of the MapReduce jobs, a framework for scheduling the MapReduce jobs with consideration of the performance interference among the virtual machines is proposed. The core of the framework is to identify the straggler tasks in a job and back up these tasks so that the backed-up ones overtake the original tasks, thereby reducing the overall response time of the job. To identify the straggler tasks, this paper uses a method for predicting the performance interference degree. A method for scheduling the backed-up tasks is also presented. To verify the effectiveness of our framework, a set of experiments is carried out. The experiments show that the proposed framework has better performance in the virtual cluster compared with the current speculative execution framework.

Introduction

Recently, MapReduce [1,2], as a platform for massive data analysis, has been widely adopted by most companies for processing large bodies of data to correlate, mine, and extract valuable features. With the prevailing of virtualization techniques, virtual clusters can provide a much more flexible mechanism for different applications to share common computing resources. Consequently, many MapReduce jobs are currently deployed in virtual clusters. However, modern virtualization techniques do not provide a perfect performance isolation mechanism, for example Xen [3], which may cause the virtual machines to compete for the limited resources and result in performance interference among the virtual machines. Then, how to ensure the performance of MapReduce jobs in the virtual cluster becomes a key issue.

Previous works focusing on the performance of MapReduce jobs have indicated the performance degradation in virtual clusters [4-7]. Other researchers have found that the performance interference [8-10] is one of the important factors causing such degradation. Then, a set of works in the field of task scheduling was conducted [11-13] to ensure the performance of MapReduce applications. However, most of them only focus on I/O-intensive applications and try to find a uniform performance interference model to predict the performance degradation for different types of applications. In fact, for different applications, using a uniform model to evaluate their performance may not always work well.

In this paper, we present an optimized speculative execution framework for MapReduce jobs which aims to improve the performance of the jobs in virtual clusters. The contributions of the paper are as follows.

(1) In order to predict the performance degradation, a method for predicting the performance interference degree is proposed. In this method, a linear regression model is used to relate the interference degree to the system workloads, and a particle swarm algorithm is used for finding the coefficients in the model.
(2) In order to find the stragglers, a method for computing the remaining time of a task is presented, taking the performance interference degree into consideration.

(3) In order to back up the stragglers, a scheduling algorithm is proposed which assigns the tasks to the slots with a global optimization.

The organization of the rest of the paper is as follows. The next part introduces the current works related to MapReduce scheduling in the virtual cluster. Section 3 gives an overview of our speculative execution framework. Sections 4 and 5 show how to predict the performance interference degree, identify the stragglers, and schedule the tasks. Section 6 presents the experimental results to verify our methods. Finally, the paper is summarized in Section 7.

Related Works

Currently, many works in the field of performance analysis in the virtual cluster have been conducted. Reference [14] presents a method for predicting inter-thread cache conflicts based on the hardware activity vector. Reference [15] presents a method to characterize the application performance in order to predict the overheads caused by virtualization. Reference [16] uses an artificial neural network to predict the application performance. References [17, 18] analyze the network I/O contention in the cloud environment. Performance interference among CPU-intensive applications has been discussed in [11]. Reference [12] considers the performance interference of disk I/O-intensive applications and proposes a model for predicting such interference. Reference [8] analyzes the factors related to the performance interference and presents a method for estimating it. Reference [19] targets the problem of application scheduling in data centers with consideration of heterogeneity and interference. Although some of the current works have noticed the performance interference and the degradation of MapReduce applications' performance caused by such interference, they only focus on I/O-intensive applications and try to find a uniform performance interference model to evaluate the performance degradation for different types of applications. In fact, for different applications, using a uniform model to evaluate their performance may not always work well, as the resource usage patterns can be very different. Besides, the method proposed in [19] develops several microbenchmarks to derive interference sensitivity scores and uses a collaborative filtering method to induce the sensitivity score for a newly arrived application, which requires the application to run against at least two microbenchmarks for 1 minute to obtain its profile. As the method relies on the microbenchmarks for analyzing the interference degree, the diversity of the microbenchmarks will affect the accuracy of the analysis. Besides, if the number of distinct microbenchmarks is large, the score matrix for the collaborative filtering may be very sparse, as the new application cannot run against many microbenchmarks for 1 minute before inducing the interference sensitivity score. Then, the collaborative filtering method may not work well with such a sparse matrix. Although some methods have been proposed to solve this problem, the effect is not very good. Meanwhile, in the field of MapReduce job scheduling, the QoS may depend not only on the interference but also on the factor of data locality. Then, making the MapReduce job run against the microbenchmarks may not reflect its actual performance or yield its actual profile, since the MapReduce job may need to read the data file remotely from the
microbenchmarks. The runtime of the job in this situation may therefore differ from the runtime when the job does not need to read its input data files remotely. In this sense, the method proposed in [19] may not be usable in the field of MapReduce job scheduling.

Many researchers have put their efforts into the field of task scheduling in MapReduce. Reference [20] proposes a capacity scheduler to guarantee a fair share of the capacity of the cluster among different users. To ensure data locality, [21] proposes a delay scheduler. With this technique, if the head-of-line job cannot launch a local task, the scheduler can delay it and look at the subsequent jobs. When a job has been delayed for more than the maximum delay time, the scheduler will assign the job's non-local map tasks. Reference [22] uses a linear regression method to model the relation between I/O-intensive applications. Reference [23] uses node status prediction to improve the data locality rate. Reference [24] uses a matchmaking algorithm for scheduling, not only considering the data locality but also respecting the cluster utilization. Reference [25] introduces the Quincy scheduler to achieve data locality. Several recent proposals, such as resource-aware adaptive scheduling [26] and cost-effective resource provisioning [27], have introduced resource-aware job schedulers to the MapReduce framework. Reference [28] mentions the problem of task assignment with consideration of data locality in cloud computing. Reference [29] focuses on scheduling with consideration of data locality to minimize the cost caused by accessing remote files. Reference [30] proposes a scheduling algorithm to make the jobs meet their SLAs. Reference [31] solves the problem of job scheduling with consideration of fairness as well as data locality. Reference [19] proposes a method for application scheduling with consideration of the interference, and a greedy algorithm is presented for finding the optimal assignments. However, this method is only for a single application, whereas for our problem we need to find optimal assignments in each time interval for a set of tasks. As stated above, most of the current works assume perfect performance isolation among virtual machines, and based on such an assumption they seldom consider the performance interference. Some of the works do consider the performance interference; for example, in [22], the scheduler optimizes the assignment with consideration of only one task or only one slot, which makes it hard to achieve the global optimization of minimizing the performance interference. For example, when two slots are free simultaneously and the first job in the wait queue has an acceptable interference degree with both of the two nodes, one needs to determine which slot will be used to serve the job. Current works do not highlight this issue; in fact, a decision with a global optimization needs to be made.
As for the performance of MapReduce in heterogeneous environments, [32] presents the LATE method to improve the performance of MapReduce applications through speculative execution. Reference [33] proposes a method for optimizing the speculative execution by considering the computing power in order to improve the estimation of the remaining time. Reference [34] proposes a scheduling method especially for heterogeneous environments; this algorithm dynamically estimates the execution time according to the historical execution progress of the task, to determine whether to start a backup task for a task with a low progress rate. However, the above literature does not consider the factor of performance interference among virtualized computing resources when identifying the stragglers by estimating the remaining time. Besides, when assigning the backup task to a slot, current works do not consider the performance interference, which may cause future stragglers again. In addition, current works only wait for the straggler to appear, without a prediction that would allow the backup decision to be made early. The effectiveness of these methods may therefore be affected as well.

Given the limitations of the above works, this paper proposes an optimized speculative execution framework for MapReduce jobs on virtualized computing resources. The framework considers the interference: an interference prediction is employed, and, according to the prediction, the framework computes the remaining time of each task to predict the stragglers and assigns the backup task to an appropriate node.

Framework Overview

Figure 1 shows the optimized speculative execution framework for MapReduce jobs. This framework is mainly for MapReduce applications running in a virtual cluster. In the cluster, there is a set of physical servers. We assume that each of the physical servers has the same virtualized environment. Each physical server can allocate its resources to multiple virtual machines, and a virtual machine can host an application. The virtual cluster serves the Hadoop framework.

The Hadoop framework has one master node and multiple slave nodes. The master node is deployed on a dedicated physical host. Each of the slave nodes is deployed on a VM. In the master node, there are four major components: the Straggler Identification Module, the Backup Module, the Heart Beat Receiver, and the Performance Interference Modeling & Prediction module. The Straggler Identification Module computes the remaining time of each task in order to identify the stragglers; the Backup Module assigns the straggler tasks to slots; the Heart Beat Receiver collects the running states of the servers and the tasks by receiving the heartbeat information from the slave nodes; the Performance Interference Modeling & Prediction module trains or retrains the performance interference model used for prediction.

In Sections 4 and 5, the major components of our framework are discussed.

Modeling the Performance Interference. In a virtual cluster, the application deployed on a virtual machine (VM) will consume the resources of this VM. Due to the contention for the limited shared resources, the resource usage of the VMs consolidated on the same physical host may affect others' access to the shared resources. Then, performance degradation of the applications on the VMs may be caused.
To mitigate such degradation, one of the important issues is to predict the extent to which an application's performance is affected by the contention for the shared resources. With this, when the predicted result indicates severe degradation, we can place the application on another VM to mitigate the performance degradation. In the following, for simplicity, the "foreground VM" signifies the VM which serves the application app to be deployed, while the other VMs consolidated with the "foreground VM" are called the "background VMs." As stated above, the contention for the shared resources may cause performance interference among the VMs consolidated on the same physical server. Then, the resource usage pattern of the "background VM" may affect the performance of the "foreground VM." As the resource usage of the background VM varies, the performance of the foreground one will differ; that is to say, the extent to which the foreground VM's performance is affected by the background one differs. The term "performance interference degree" is used to signify this extent.

Definition 1 (performance interference degree). We use (1) to define the performance interference degree:

PID(FW@BW) = (Perf(FW@BW) - Perf(FW@Idle)) / Perf(FW@Idle),   (1)

where we use system-level workloads to reflect the resource usage pattern of a VM. The system-level workloads considered in this paper are shown in Table 1. FW and BW are the workloads of the foreground and background VMs, respectively. The performance of the application on FW may include response time and throughput; we use Perf(FW@BW) to signify such performance when the background VM's workload is BW. Here, Idle refers to the background VM when no application has been deployed on it.

Since the contention for the shared resources can cause the performance degradation, the interference degree of the foreground VM will be related to the resource usage pattern of the background VM. We also performed experiments to show this relation, as Tables 2 and 3 show. Tables 2 and 3 show that, with the background VM serving different types of applications, the response time of the foreground one is different. Here, when the background VM serves different types of applications, its resource usage pattern is different, which causes the difference in the performance of the foreground VM. We therefore model the performance interference degree as a linear function of the background VM's workloads:

PID(FW@BW) = a_0 + a_1·cpuutil + a_2·memutil + a_3·rps + a_4·wps + a_5·await + a_6·svctm,   (2)

where a_0, a_1, a_2, a_3, a_4, a_5, and a_6 are coefficients.

By using (2), the interference degree can be known if the coefficients are known. Then, we need to estimate the coefficients. Imagine that the estimated coefficients are â_0, â_1, â_2, â_3, â_4, â_5, and â_6. Then, according to (2), the model for estimating the performance interference degree is

PID_est(FW@BW) = â_0 + â_1·cpuutil + â_2·memutil + â_3·rps + â_4·wps + â_5·await + â_6·svctm.   (3)

Then, when the background VM's workloads are fed into the above equation, we can estimate the performance interference degree.
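As a small illustration of the linear model (2)-(3), the sketch below evaluates the interference-degree estimate for a given coefficient vector and a background workload vector; the metric names follow the observed-data tuple used later (cpuutil, memutil, rps, wps, await, svctm), and all numeric values are made up for the example:

```python
import numpy as np

def predict_pid(coeffs, workload):
    """Linear interference-degree model: a0 + a1*cpuutil + ... + a6*svctm.
    coeffs: length-7 vector [a0..a6]; workload: length-6 vector of background metrics."""
    return coeffs[0] + np.dot(coeffs[1:], workload)

# Made-up coefficients and a made-up background workload
# (cpuutil, memutil, rps, wps, await, svctm).
a = np.array([0.02, 0.004, 0.003, 1e-4, 2e-4, 0.01, 0.02])
bw = np.array([65.0, 40.0, 120.0, 80.0, 3.5, 1.2])
print(round(predict_pid(a, bw), 3))  # 0.487
```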
To estimate the coefficients, we need to compute the error between the predicted interference degree and the actual one according to the observed data records. Then, the problem of finding the combination of the coefficients can be formulated as follows: given the set of observed data {(pid_1, cpuutil_1, memutil_1, rps_1, wps_1, await_1, svctm_1), . . ., (pid_n, cpuutil_n, memutil_n, rps_n, wps_n, await_n, svctm_n)}, find the coefficients that minimize the overall error, as expressed in (4):

min Σ_{j=1..n} (pid_j - PID_est(FW@BW_j))^2.   (4)

The above problem can be seen as a problem of finding the optimal combination of the coefficients, in order to make the error between the predicted interference degree and the actual one minimal. In this paper, for solving the problem efficiently, we use a particle swarm optimization algorithm.

When using the particle swarm algorithm to solve such a problem, the first task is to define the particle. For this problem, a particle in the swarm can be defined as p = [a_0, a_1, a_2, a_3, . . ., a_6], where the d-th entry x_d signifies the location of the particle in dimension d. The number of particles in a swarm is signified as N. The particle updates its location in dimension d with a speed v_d, which it computes according to the best location pBest the particle has experienced and the best location gBest the swarm has experienced. The best location means the location closest to the optimal solution, which is usually expressed through a fitness function. As for our problem, the fitness function should evaluate how close a particle is to the optimal solution. Then, according to (4), the fitness function of a particle can be defined as

fitness(p) = Σ_{j=1..n} (pid_j - PID_p(FW@BW_j))^2,   (5)

where PID_p(·) is the model (3) with the coefficients given by particle p. Then, we can use formula (6) to update the speed of the particle in dimension d and compute the location of the particle in the same dimension as in formula (7):

v_d(l + 1) = w·v_d(l) + c_1·r_1()·(pBest_d - x_d(l)) + c_2·r_2()·(gBest_d - x_d(l)),   (6)
x_d(l + 1) = x_d(l) + v_d(l + 1),   (7)

where v_d(l+1) signifies the speed in dimension d in the (l+1)-th iteration; x_d(l+1) signifies the location in dimension d in the (l+1)-th iteration; r_1() and r_2() are two functions which return a random number between 0 and 1; c_1 and c_2 are constants; and w is the inertia weight, which can be computed as in formula (8) according to [35]:

w = w_max - (w_max - w_min)·l / l_max,   (8)

where w_max and w_min are the maximum and minimum weights, l is the current iteration number, and l_max is the maximum iteration number. In our experiment, the size of the swarm is 30 and the iteration number is 1000.

Then, the PSO algorithm can find the optimal combination of the coefficients for each attribute. Algorithm 1 presents the detailed algorithm.
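The following sketch illustrates how the coefficient fitting of (2)-(8) can be realized. It is a generic, self-contained PSO over synthetic data, not the authors' Algorithm 1; the values of c_1, c_2, w_max, w_min, and the velocity clamp are assumptions, since the text does not specify them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" records standing in for the training data: six background
# workload metrics per record plus an interference degree generated from a hidden
# linear model with noise.
true_a = np.array([0.05, 0.004, 0.002, 1e-4, 1e-4, 0.01, 0.02])
bw_data = rng.uniform(0, 100, size=(200, 6))
pid_obs = true_a[0] + bw_data @ true_a[1:] + rng.normal(0, 0.05, 200)

def fitness(a):
    """Sum of squared errors between observed and predicted interference degrees (cf. Eq. (5))."""
    pred = a[0] + bw_data @ a[1:]
    return np.sum((pid_obs - pred) ** 2)

def pso(n_particles=30, n_iter=1000, c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, v_max=0.5):
    dim = 7
    x = rng.uniform(-1, 1, size=(n_particles, dim))   # particle positions = coefficient vectors
    v = np.zeros_like(x)
    pbest, pbest_fit = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for l in range(n_iter):
        w = w_max - (w_max - w_min) * l / n_iter       # Eq. (8): linearly decreasing inertia
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (6)
        v = np.clip(v, -v_max, v_max)                  # keep velocities bounded
        x = x + v                                       # Eq. (7)
        fit = np.array([fitness(p) for p in x])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest

best = pso()
print("SSE of fitted model:", round(fitness(best), 2),
      "| SSE of generating model:", round(fitness(true_a), 2))
```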
The method which uses the regression model for estimating the performance interference degree works well when there are historical data for training the coefficients. However, for the problem of MapReduce job scheduling, such historical data may not always be available, because newly arriving jobs may not have historical data about their running status together with the consolidated VMs on the same physical host. For this situation, we discuss the corresponding method in the following.

Inferring the Performance Interference Degree. For two applications, if their resource usage patterns are similar, then, with the same background VM, the extents of their performance degradation may also be similar. Then, when one of the applications is new and little historical data can be used for training its performance interference degree model, we can predict its performance interference by looking at the other one's model. Based on this idea, we discuss our method in the following.

Imagine that the performance interference degree models can be kept and stored. Then, all the models form a set M = {PID(FW_1@), PID(FW_2@), . . ., PID(FW_m@)}. Here, FW_i of each item PID(FW_i@) in M is called the workload pattern. Then, if we do not have enough historical data for training an application's performance interference model, we can use an available and appropriate model in M for prediction.

Let wp be the workload pattern of the virtual machine vm. To find an appropriate equation in M is to find the equation whose workload pattern is the most similar to wp. In the following, we show how to compute the similarity degree.

For comparing the similarities, we use the Euclidean distance. For two VMs vm_i and vm_j with workload patterns wp_i and wp_j, the similarity degree can be computed as

s_ij = 1 / (1 + sqrt( Σ_k (wp_{i,k} - wp_{j,k})^2 )),   (9)

so that a smaller Euclidean distance between the workload patterns yields a higher similarity. Then, we can use (9) to find the workload patterns which are similar to the workload pattern of the VM to be predicted. In this paper, if the similarity is beyond a predefined threshold, the two workload patterns are considered similar.

Then, for a workload pattern wp, by comparing the similarity degrees, we may find multiple workload patterns satisfying the predefined threshold requirement. We can then use the following equation to generate a combined model; by using such a combined model, we can estimate the performance interference degree for a VM which has no historical data for training the model:

PID(FW@BW) = Σ_{i∈S} s_i · PID_i(FW_i@BW) / Σ_{i∈S} s_i,   (10)

where, for the VM whose performance is to be predicted, FW signifies its workload, the workload patterns satisfying the threshold requirement form the set S, PID_i(FW_i@BW) is the interference model corresponding to the i-th workload pattern in S, and s_i is the similarity degree between FW and FW_i. Then, by using the above methods, the performance interference model can be generated, and by using the model we can estimate the performance interference degree of an application.

For a MapReduce job, which may contain a set of tasks, the resource usage patterns of these tasks are always similar [36], and there are also many research works on predicting the resource demand of MapReduce jobs. Then, using this information, the performance interference degree between the tasks to be assigned (no matter whether the corresponding job is newly submitted or has run for a while) and the VMs on the candidate physical host can be predicted.
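A compact sketch of the workload-pattern matching of (9)-(10) is given below. The 1/(1 + distance) similarity form, the representation of stored models as coefficient vectors, and all numeric values are assumptions made for illustration only:

```python
import numpy as np

def similarity(wp_a, wp_b):
    """Euclidean-distance-based similarity between two workload patterns (cf. Eq. (9))."""
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(wp_a) - np.asarray(wp_b)))

def inferred_pid(new_wp, bw, stored, threshold=0.05):
    """Combine the stored models whose workload pattern is similar to new_wp (cf. Eq. (10)).
    stored: list of (workload_pattern, coefficient_vector) pairs; bw: background workload."""
    def predict(coeffs, workload):
        return coeffs[0] + np.dot(coeffs[1:], workload)
    pairs = [(similarity(new_wp, wp), c) for wp, c in stored]
    selected = [(s, c) for s, c in pairs if s >= threshold]
    if not selected:                      # no sufficiently similar pattern found
        return None
    return sum(s * predict(c, bw) for s, c in selected) / sum(s for s, _ in selected)

# Two stored models with made-up workload patterns and coefficients.
stored = [
    (np.array([60, 40, 100, 50, 3, 1]),
     np.array([0.02, 0.004, 0.003, 1e-4, 2e-4, 0.01, 0.02])),
    (np.array([20, 10, 30, 20, 1, 0.5]),
     np.array([0.01, 0.002, 0.001, 5e-5, 1e-4, 0.005, 0.01])),
]
new_wp = np.array([55, 38, 90, 45, 2.5, 1])
bw = np.array([65, 40, 120, 80, 3.5, 1.2])
print(inferred_pid(new_wp, bw, stored))
```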
Methods for Identifying Straggler and Backing-Up in Virtualized Environment

In our framework, the task trackers send heartbeat information which includes the resource status of the VMs. Taking the task profile, the status of the VMs, and the physical host as inputs, the Performance Interference Modeling & Prediction module returns a value evaluating the interference. Then, in every interval, the Straggler Identification Module predicts the remaining time of each running task in the next time interval according to the heartbeat information from the slave nodes and the performance interference degree provided by the Performance Interference Modeling & Prediction module. The Backup Module backs up a new task for each straggler by assigning a new slot to it.

In speculative execution, the task which will finish farthest into the future is backed up, since the backed-up task then has the greatest opportunity to overtake the original one and reduce the overall response time of the job. The core of identifying a straggler is therefore to estimate whether a task has a bad progress rate, that is, whether it has a longer remaining time to finish compared with the other tasks in the job. In the following, we introduce how to estimate the remaining time of a task in order to identify the stragglers.

Imagine we have a job J = {T_1, T_2, . . ., T_n} which contains a set of tasks; we will introduce how to find the straggler tasks in this job. Imagine that the number of allocated map slots for this job is N_m and the number of allocated reduce slots is N_r, and that the number of map tasks still to be executed is n_m and the number of reduce tasks still to be executed is n_r. The overall remaining time of the job is the sum of the remaining times of the map phase and the reduce phase, and the remaining time of either phase depends on its slowest task. Then, the remaining time of J can be computed as in (13), where T_i^(m,predict) is the predicted completion time of the currently running map task T_i, which can be computed as in (11); T_i^(r,predict) is the predicted completion time of the currently running reduce task T_i, which can be computed as in (12); T_i^m and T_i^r are the execution times of map task T_i and reduce task T_i, respectively; T_max^m and T_avg^m are the maximum and average completion times, respectively, of all the map tasks which have already been executed completely; and T_max^r and T_avg^r are the maximum and average completion times, respectively, of all the reduce tasks which have already been executed completely. In (11) and (12), slot() is the function returning the slot on which the task is deployed, PID_predict^slot() is the predicted performance interference degree between the slot slot() and the other slots consolidated on the same physical server in the next time interval, and PID_avg^slot() is the average performance interference degree between the slot slot() and the other slots consolidated on the same physical server, averaged from the beginning of the execution to the current time.

Then, based on (13), the remaining time of the job can be predicted. If there exists a running task whose predicted completion time makes the remaining time bigger than the required one, this task will be the straggler.
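Since Eqs. (11)-(13) are not reproduced here, the sketch below only illustrates the general idea in an assumed form: the per-task completion-time estimate is scaled by the ratio of the predicted to the historically averaged interference degree of its slot, and a task whose predicted remaining time exceeds the required remaining time is flagged as a straggler. The scaling form and the selection rule are assumptions, not the paper's exact formulas:

```python
def predict_completion(elapsed, progress, pid_predict, pid_avg):
    """Assumed scaling: the naive estimate elapsed/progress is inflated when the
    predicted interference for the next interval exceeds the historical average."""
    naive = elapsed / max(progress, 1e-6)          # simple progress-rate extrapolation
    scale = pid_predict / max(pid_avg, 1e-6)       # ratio of predicted to average interference
    return naive * max(scale, 1.0)

def find_stragglers(tasks, required_remaining):
    """tasks: dicts with elapsed time, progress in (0,1], and slot interference degrees.
    Returns the ids of tasks whose predicted remaining time exceeds the required one."""
    stragglers = []
    for t in tasks:
        completion = predict_completion(t["elapsed"], t["progress"],
                                        t["pid_predict"], t["pid_avg"])
        if completion - t["elapsed"] > required_remaining:
            stragglers.append(t["id"])
    return stragglers

tasks = [
    {"id": "map-1", "elapsed": 30.0, "progress": 0.75, "pid_predict": 0.2, "pid_avg": 0.2},
    {"id": "map-2", "elapsed": 30.0, "progress": 0.40, "pid_predict": 0.6, "pid_avg": 0.3},
]
print(find_stragglers(tasks, required_remaining=25.0))  # ['map-2']
```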
Then, after identifying the stragglers, a backup task for each straggler needs to be initiated by assigning a slot to this task. Since, in every time interval, the Straggler Identification Module predicts the stragglers of the next time interval, there may be a set of straggler tasks to be backed up. This problem can be seen as a problem of scheduling this set of tasks in a virtualized computing environment. As the performance interference is an important factor which may affect the execution of the tasks, when a task is scheduled to a slot with a high interference degree with the others, the task may become a new straggler in the future, which may result in bad performance of the job. Then, when dealing with the problem of how to back up the stragglers, the performance interference degree needs to be considered as well. Previous works [37] schedule a task to a slot if the predicted interference degree is not higher than a predefined threshold; otherwise, the task waits for an available node with the required interference degree, or is assigned to a slot once it has been waiting for a long time. In these works, the scheduler optimizes the assignment with consideration of only one task or only one slot, so it is hard to achieve the global optimization of minimizing the performance interference. For example, when two slots are free simultaneously and the first task in the wait queue has an acceptable interference degree with both nodes, the choice of which slot is used to place the task will affect the subsequent assignment plan. That is to say, a decision with a global optimization needs to be made.

This paper presents a scheduling strategy with a global optimization, as given in Algorithm 2 (Input: the set SL of slots to be free in the next interval and the queue of tasks to be assigned; Output: assignment plan AP). In each interval, the Backup Module collects the status of the tasks running in the slots and estimates which slots will be free in the next interval by computing the remaining time of the tasks. Then, in each interval, the Backup Module assigns a set of tasks to the set of free slots for the next interval with the global objective of minimizing the performance interference degree of each task. Optimally solving the above problem is NP-complete, so we propose a greedy algorithm for solving it with better efficiency. Firstly, the algorithm places a task on the slot with the least interference degree. Then, for the remaining slots to be free in the next interval, the first step is repeated until all the slots are assigned a task.
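The greedy assignment just described can be sketched as follows; the interference estimate pid is treated as a given function of (task, slot), and the tie-handling details are assumptions, since Algorithm 2 itself is not reproduced in the text:

```python
def greedy_backup_assignment(tasks, free_slots, pid):
    """Assign straggler backup tasks to free slots, repeatedly picking the
    (task, slot) pair with the smallest predicted interference degree.
    tasks, free_slots: lists of identifiers; pid(task, slot) -> float."""
    assignment = {}
    remaining_tasks = list(tasks)
    remaining_slots = list(free_slots)
    while remaining_tasks and remaining_slots:
        # Pick the globally least-interfering pairing among what is left.
        task, slot = min(((t, s) for t in remaining_tasks for s in remaining_slots),
                         key=lambda ts: pid(ts[0], ts[1]))
        assignment[task] = slot
        remaining_tasks.remove(task)
        remaining_slots.remove(slot)
    return assignment

# Toy interference table between two backup tasks and three free slots.
table = {("t1", "s1"): 0.6, ("t1", "s2"): 0.2, ("t1", "s3"): 0.4,
         ("t2", "s1"): 0.3, ("t2", "s2"): 0.1, ("t2", "s3"): 0.5}
print(greedy_backup_assignment(["t1", "t2"], ["s1", "s2", "s3"],
                               lambda t, s: table[(t, s)]))
# {'t2': 's2', 't1': 's3'}
```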
We evaluate the framework using 10 MapReduce applications, seen in Table 4.These applications are widely used for evaluating the performance of MapReduce framework in the previous research works [21,32,38,39].To verify the effectiveness of our works, the experiments will be carried out for some comparisons between our scheduler and other main competitors which also consider the performance interference in the scheduling. In this section, we evaluate whether our method is effective in estimating the interference degree.We will compare it with the model discussed in previous works [12] which uses a uniform model for evaluating all the applications.In our experiment, the predicted and actual performance interference degrees are considered.Figure 2 shows the prediction error for each type of jobs using different models. From Figure 2, we can see that the current method led to an average of 29% error rate while our method can achieve the average rate of 15%.This is because our method trains the model with the consideration of no historical data about performance interference while the current method relies on establishing a uniform model to evaluate all the types of applications which will sacrifice the prediction accuracy. In the following part, the experiments will be done to show whether our method is effective in predicting the remaining time in every time interval. From Figure 3, we can see that the current method led to an average of 20%.This is because our method considers the performance interference in the estimation of the remaining time while the current method in [32] only takes an average progress rate for the estimation. In the following, the experiments will show the effectiveness of our method in speculative execution.The performance of the backup module is also affected by the data locality.Then, to emphasize the performance interference only, we conduct the experiment in an intranet environment where when accessing the data, it does not need to read the data remotely which minimizes the effect caused by the data locality as much as possible.We select the applications of Matrix and TeraGen which need no input and we also select the applications of TeraSort and Gzip which need to read data.We set the numbers of map tasks in the Matrix job, TeraGen job, TeraSort job, and Gzip job which are 15, 10, 10, and 5, respectively.Every 15 seconds, a batch of jobs which contains 3 Matrix jobs, 3 TeraGen jobs, 5 TeraSort jobs, and 2 Gzip jobs will be submitted in the virtual cluster.The average normalized completion time is used for evaluation.In our method, we model the relation between the performance interference degree and the background workload.Then, in the experiment, we will show the effectiveness of our scheduler under the different status of the background workload.We will adjust the background workload in this way that we let different jobs run on the virtualized slave node in order to adjust the cpu, memory, and other system load to simulate the variations of the background workload.Figures 4 and 5 show the result when using different schedulers in the master node. 
From Figures 4 and 5, when the background workload is heavy, for example with high CPU and memory utilization, all applications suffer severe performance degradation under the FairScheduler [37] and the CapacityScheduler [20]. Even when the background workload is light, speculative execution performs better than the FairScheduler and the CapacityScheduler, because it can identify stragglers and thereby speed up the application. Moreover, our speculative execution outperforms the existing speculative execution: ours finds stragglers by prediction, whereas the existing approach only detects them after waiting for the degradation to appear. In addition, the backup module in our framework considers performance interference when assigning slots, which reduces the risk of future degradation caused by interference. We also notice that when the background workload is light, the different schedulers perform similarly, because under a light background workload the applications suffer little from interference among the virtualized slave nodes. In practice, however, maintaining a light background workload is rarely easy, especially once hardware cost and system utilization are taken into account.

Conclusions

This paper presents an optimized speculative execution framework for MapReduce jobs that aims to improve job performance on a virtual cluster. First, we analyze the factors related to performance degradation in a virtual cluster and present a method for modeling how these factors affect the degradation. Second, we develop an algorithm that uses the performance interference prediction to identify stragglers and assign the backup tasks. In this work, only performance interference is considered when predicting the remaining time of a MapReduce job; other factors, such as the fault ratio of the physical servers, can also affect the accuracy of this estimate. In future work, we will therefore further improve our method for predicting the remaining time of MapReduce jobs.

Figure 4: Comparison of the normalized completion times under a light background workload.
Table 1: System-level workload considered in this paper.
Table 2: Response time of the application with the idle domain.
Table 3: Response time of the application with the background VM varying.
7,638.4
2017-12-07T00:00:00.000
[ "Computer Science" ]
Molecular Mechanism of Lipid Nanodisk Formation by Styrene-Maleic Acid Copolymers Experimental characterization of membrane proteins often requires solubilization. A recent approach is to use styrene-maleic acid (SMA) copolymers to isolate membrane proteins in nanometer-sized membrane disks, or so-called SMA lipid particles (SMALPs). The approach has the advantage of allowing direct extraction of proteins, keeping their native lipid environment. Despite the growing popularity of using SMALPs, the molecular mechanism behind the process remains poorly understood. Here, we unravel the molecular details of the nanodisk formation by using coarse-grained molecular dynamics simulations. We show how SMA copolymers bind to the lipid bilayer interface, driven by the hydrophobic effect. Due to the concerted action of multiple adsorbed copolymers, large membrane defects appear, including small, water-filled pores. The copolymers can stabilize the rim of these pores, leading to pore growth and membrane disruption. Although complete solubilization is not seen on the timescale of our simulations, self-assembly experiments show that small nanodisks are the thermodynamically preferred end state. Our findings shed light on the mechanism of SMALP formation and on their molecular structure. This can be an important step toward the design of optimized extraction tools for membrane protein research. INTRODUCTION Membrane proteins are of great importance to a variety of essential physiological functions in all organisms. Encoded by 30% of all genes, membrane proteins account for almost 70% of known drug targets in the cell. However, they only contribute less than 2% of the structures in the Protein Data Bank (1). These proteins are relatively less studied because of a lack of experimental approaches. One of the major challenges in membrane protein research is the isolation of these proteins without destroying their stability and activity. Extraction of membrane proteins from their lipid environments can lead to their inactivation or aggregation. A widely used solution is to incorporate the protein into a model lipid membrane. In particular, lipid nanodisks have proven to be an efficient way to solubilize membrane proteins while keeping a natural environment (2)(3)(4)(5)(6). In the pioneering work of Sligar and co-workers, these small bilayer patches are surrounded and stabilized by a ring of a-helical peptides (also called membrane scaffold proteins (MSP)) (7,8). One disadvantage is that in preparing these MSP nanodisks, one relies on the use of surfactants hindering the study of membrane proteins in their native lipid environment. Besides, the use of peptides as rim-stabilizing molecules complicates the use of biophysical techniques such as circular dichroism, Fourier transform infrared and NMR spectroscopies (9). An alternative approach to MSP is the use of amphipathic copolymers. These copolymers keep membrane proteins soluble without detergents (9)(10)(11)(12)(13)(14). This implies that membrane proteins, together with their annular lipid shells, can be extracted directly from native cellular membranes or from reconstituted vesicles. An efficient copolymer introduced by Dafforn and coworkers (9,10) is composed of styrene-maleic acid (SMA) units. SMA molecules, together with lipids, spontaneously form disk-shaped particles of 10-12 nm in diameter, which are denoted as SMA lipid particles (SMALPs) (15). 
Bigger particles may also form depending on the shape and diameter of the embedded protein(s), polymer composition, and the polymer/lipid ratio (10, [16][17][18][19]. Importantly, SMA copolymers dissolve in a wide range of membranes without showing specificity for any lipid types (20)(21)(22). They have been used to characterize the annular lipid shells of a variety of membrane proteins (23)(24)(25). The intrinsic hydrophobicity (SMA ratio) and the protonation state of maleic acid groups strongly influence the rate of membrane solubilization (26,27). These properties, together with the varying molecular weight, make SMA copolymers easy to change and adjust (28,29). As a result, these pH-responsive copolymers have also been used as membrane-destabilizing polymers for the delivery of therapeutic molecules (26,30). Despite the promising future of SMA copolymers in membrane protein research, little is known about the molecular mechanism of SMA-lipid nanodisk formation. Scheidelaar and colleagues suggested a model for membrane solubilization by SMA copolymers in which the hydrophobic effect would drive the interaction between SMA copolymers and membranes, modulated by electrostatic interactions (20). Molecular dynamics (MD) simulations provide an attractive tool to study the molecular interactions and the dynamics of the solubilization process in detail (31). Considering the large molecular weight of the polymers, coarse-grained (CG) models are required to access the relatively large timescales involved in membrane destabilization (32)(33)(34)(35). A CG model that has been parametrized for both polymeric systems and lipid membranes is the Martini model (36). This model has already been successfully applied to simulate the interaction of a variety of polymers and lipid membranes, including studies on polymer adsorption (37)(38)(39)(40)(41)(42)(43), polymer-mediated fusion (44), the permeation process of dendrimers (45) and polymer-coated nanoparticles (46,47), and preformed lipid nanodisks (48-53). Here, based on CG MD simulations with the Martini model, we describe the molecular mechanism of action of SMA copolymers in destabilizing a model lipid bilayer. We provide detailed insight into the insertion, penetration, and pore formation of these copolymers and show how they cooperatively lead to complete destabilization of the lipid membrane and the onset of nanodisk formation. Besides, self-assembly experiments of SMA copolymers and lipids of different length show that small nanodisks are the preferred end state. CG SMA model The Martini CG model is used for the parametrization of the basic SMA units (54). Herein, we used SMA copolymers consisting of 23 units, with each unit including two styrene groups and one maleic acid, yielding a molecular weight of $7.4 kDa, which is similar to the molecular weight used in previous experiments (9, 13,27). The copolymers were treated as fully deprotonated with two negative charges in each repeating unit to obtain the high aqueous solubility of these copolymers and to avoid aggregation. For the styrene group, a three-bead mapping scheme was used, similar to the ring-based side chains in the existing Martini models for the aromatic phenylalanine and tyrosine amino acids and the styrene group in the polystyrene molecule (55,56). For the maleic acid groups, a one-bead representation was used to represent the carboxylic group, carrying a full negative charge each. The chosen mapping of the CG SMA copolymer is shown in Fig. 1 a. 
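As a quick consistency check on the stated polymer size, the following sketch tallies the monomer composition described above (23 repeat units, each with two styrene groups and one maleic acid unit, fully deprotonated to carry two negative charges). The monomer masses are standard values; the bead-count comment is only a plausible reading of the described mapping, not taken from the paper.

```python
# Composition of one SMA repeat unit as described in the text:
# two styrene groups (three CG beads each) and one maleic acid group
# (each carboxylate mapped to a single, singly charged bead).
STYRENE_MW = 104.15      # g/mol, C8H8
MALEIC_ACID_MW = 116.07  # g/mol, C4H4O4 (protonated form)
N_UNITS = 23
CHARGE_PER_UNIT = -2     # fully deprotonated maleic acid

def sma_summary(n_units=N_UNITS):
    mw = n_units * (2 * STYRENE_MW + MALEIC_ACID_MW)
    charge = n_units * CHARGE_PER_UNIT
    return mw, charge

if __name__ == "__main__":
    mw, charge = sma_summary()
    # ~7.5 kDa for the protonated form; the deprotonated chain is slightly
    # lighter, consistent with the ~7.4 kDa quoted in the text.
    print(f"approximate molecular weight: {mw / 1000:.1f} kDa")
    print(f"total charge: {charge} e")  # -46 e
```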
Bonds and improper dihedral angles were represented based on standard harmonic potentials, whereas angles and proper dihedral angles were modeled with cosine-based potentials and periodic dihedral potentials, respectively. The set of CG bonded parameters was parametrized by comparison with atomistic simulations of the SMA copolymers at the interface between an aqueous solution and dodecane. Constraints were applied to the aromatic CG beads instead of using ordinary bonds. The target distribution functions were obtained for the various bonds, angles, and dihedrals from the atomistic trajectory. In a couple of iterative steps, the CG parameters were adjusted to obtain the best match between the pseudo-CG and real-CG distributions. A full description of the CG topology and a comparison with atomistic data can be found in Figs. S1, S2, and Table S1. The SMA model is available at http://cgmartini.nl. Simulation details We used the Martini 2.2P force field to model the interactions between lipid membranes and SMA copolymers (54,57,58). The primary setup consists of a bilayer composed of 1352 didecanoylphosphatidylcholine (DDPC) lipids built using the INSANE script (59) and 1, 10, or 20 SMA copolymers FIGURE 1 SMA model and starting configuration of the simulation. (a) A CG model for the SMA copolymer with mapping of the CG SMA model and chosen bead types is shown at the top left, and a zoomed-in image of one CG unit is shown at the top right. (b) The initial configuration of the simulation system with 10 SMA copolymers above the preformed DDPC lipid bilayer is shown. DDPC lipids are shown in gray with phosphate groups in orange and choline groups in blue. The SMA copolymers are shown in green, and the carboxyl groups are shown in yellow. The periodic boundary condition box is shown with blue solid lines. The solvent is omitted for clarity. To see this figure in color, go online. regularly arranged at a distance of 2.0 nm away from the lipid surface ( Fig. 1 b). The solvent layer in those systems comprised between 24,527 and 38,526 water beads, wherein one bead represented four real water molecules. We used a concentration of 150 mM of sodium chloride, which is optimal for nanodisk formation according to previous experimental works (20,27). All systems were neutralized by adding extra sodium ions. After minimization, all systems were first equilibrated at constant volume (NVT) and then at constant pressure (NPT, with semi-isotropic coupling) at a temperature of 310 K, using a Berendsen barostat and a V-rescale thermostat (60,61). After equilibration, we changed the barostat to the Parrinello-Rahman method (62) while the standard Martini water model was still used, which proved to be more efficient at initiating the insertion of the polymers to the membrane surface (within a few hundred nanoseconds). We also applied a flat-bottomed potential on the copolymers to keep them close to the membrane solvent interface. The harmonic distance restraints keep the copolymers within a distance of 3.0 nm around the membrane surface, and the potential was released once the copolymers attached to the membrane. At this point, the standard water model was replaced by the polarizable Martini water model (63) to mimic the electrostatic interactions more realistically. For each polymer concentration, we performed between two and five replicas starting from random initial velocities. Most simulations reached up to 3 ms. 
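The flat-bottomed restraint mentioned above can be illustrated in a few lines: the potential is zero while a copolymer stays within the allowed distance of the membrane surface and grows harmonically beyond it. The force constant and the scalar distance metric used here are illustrative assumptions, not the values used in the simulations.

```python
import numpy as np

def flat_bottom_potential(d, d0=3.0, k=500.0):
    """Flat-bottomed harmonic distance restraint.

    d  -- distance of the polymer from the membrane surface (nm)
    d0 -- radius of the flat (zero-energy) region, 3.0 nm in the text
    k  -- force constant in kJ mol^-1 nm^-2 (assumed value)
    Returns the restraint energy in kJ/mol.
    """
    excess = np.maximum(0.0, d - d0)
    return 0.5 * k * excess**2

distances = np.array([1.0, 2.9, 3.0, 3.5, 4.0])
print(flat_bottom_potential(distances))  # [0., 0., 0., 62.5, 250.]
```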
To assess the thermodynamic stability of the nanodisks, self-assembly simulations were performed, starting from a random mixture of all the components, using 4, 8, or 16 SMA polymers with 600 lipids (corresponding to 150:1, 75:1, and 75:2 lipid ratios) and excess water. A polarizable water model was used, and two replicas were performed for each polymer/lipid ratio. All simulations were run using Groningen Machine for Chemical Simulations version 5.0.7 (64). The total simulation time covered over 60 ms. To test the effect of polymer charge and lipid tail type, additional simulations were performed using 50% instead of fully charged polymers and longertail DMPC (dimyristoyl-PC), DPPC (dipalmitoyl-PC), or polyunsaturated dilinoleoyl-PC lipids. An overview of all simulations is provided in Table S2. Details of the atomistic simulations performed to calibrate the CG interactions can be found in the Supporting Materials and Methods. SMA copolymers spontaneously inserted into the lipid bilayer The starting setup of our simulations consisted of a bilayer composed of 1352 DDPC lipids. The short tail DDPC should facilitate membrane disruption on the accessible timescale of our simulations. We placed 1, 10, or 20 SMA copolymers in the aqueous phase in the vicinity of one of the membrane leaflets ( Fig. 1 b). The initial asymmetric placement of the copolymers represents the experimental situation in which polymers are added in the solution surrounding a liposome or cell. Each SMA polymer consisted of 23 monomeric units and was fully deprotonated with two negative charges per monomer. The highly charged state mimics conditions of high pH, guaranteeing a highly soluble state of the polymers (27,65), although the experimentally highest efficiency is obtained at somewhat lower pH (27). We performed multiple runs for each condition to increase the statistics (Table S2). In all cases, the SMA copolymers quickly adopted a disordered conformation in solution ( Fig. 2 a), in agreement with potentiometric studies and with all-atom simulations ( Fig. S3) (66). In the case of the simulation system with 10 or 20 SMA copolymers, the copolymers could also self-aggregate through their hydrophobic cores in solution, as shown experimentally (27). The membrane affinity of the SMA copolymers, however, is high. We observed the spontaneous insertion of SMA copolymers already in the early phases (10-500 ns) of FIGURE 2 Snapshots of the binding process of an SMA copolymer to the surface of the membrane. (a) At the beginning (t ¼ 0 ns), the polymer is in solution, taking a disordered conformation. (b) After 20 ns, the polymer adheres on the surface of the membrane through the hydrophobic terminal inserted between the lipid acyl tails. (c) At t ¼ 400 ns, the polymer is fully absorbed to the lipid bilayer with sodium ions mediating the electrostatic interactions between the SMA carboxyl and the lipid headgroups. Styrene groups are shown in green, and carboxyl groups are shown in yellow. Lipids are shown with gray tails, orange phosphate, and blue choline groups. Lipids around the hydrophobic termini are highlighted in red in the left insert, with blue and orange beads representing the choline and phosphate groups, respectively. Purple beads represent Na þ ions, and brown beads represent Cl À ions in the right insert. Some lipids in front of the polymer as well as water molecules were removed for clarity. Snapshots were obtained for a system with one copolymer. To see this figure in color, go online. most of our simulations. 
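A simple way to quantify when each copolymer first binds the bilayer in such trajectories is to track the minimum polymer-lipid distance over time. The sketch below uses MDAnalysis; the file names, selection strings, and the 0.6 nm contact cutoff are placeholders that would need to be adapted to the actual topology and bead naming.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import distances

# File names and selections are hypothetical placeholders.
u = mda.Universe("system.tpr", "traj.xtc")
polymer = u.select_atoms("resname SMA")
lipids = u.select_atoms("resname DDPC")

CUTOFF = 6.0  # Angstrom, i.e. a 0.6 nm contact criterion (assumed)

first_contact = None
for ts in u.trajectory:
    dmin = distances.distance_array(
        polymer.positions, lipids.positions, box=ts.dimensions).min()
    if dmin < CUTOFF:
        first_contact = ts.time  # ps
        break

print("first polymer-lipid contact at", first_contact, "ps")
```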
Molecular insertion always started with the styrene moieties of the SMA polymeric termini (Fig. 2 b; Video S1). Hydrophobic interactions between the styrene moieties of the polymers and the lipid acyl chains seem to drive this behavior. The termini appeared to be strongly bound to the lipids because detachments were not observed after insertion. Once this is achieved, the rest of the copolymer slowly followed. This increases the interaction between the copolymer and the water-lipid interface. The inserted copolymers were located under the phosphate headgroups with the styrene moieties fitting between the acyl chains and the carboxyl groups pointing to the solution (Fig. 2 c). In the adsorbed state, the polymers became stretched. The analysis of the radius of gyration confirmed this change in structure upon membrane binding (Fig. S4). Counterions seem to play also an important role in stabilizing the polymer-lipid interactions (Fig. 2 c, inset). Analysis of the density profiles along the membrane normal revealed an asymmetric distribution of sodium ions around the membrane (Fig. S5 a). Insertion of SMA copolymers dragged additional sodium ions into the lipid/water interface. This seems to help the copolymers to overcome the repulsion between the charged carboxyl groups and the lipid phosphate groups (Fig. S5, b and c). SMA copolymers perturbed the bilayer, inducing pore formation The binding of SMA copolymers induced cooperative activities that included membrane bending, lipid extraction, lipid tilting, and water infiltration. In particular, when multiple polymers aggregated, the insertion of the SMA copolymers produced significant local bending of the membrane around the insertion site (Fig. 3 a). The bending originated from the increased size of the hydrophobic core in the leaflet to which they absorbed, causing stress and distorting the planarity of the lipid bilayer. In some simulations, the aggregate pulled lipids out of the membrane, ending up in the hydrophobic core of the polymers in solution (Fig. 3 b). This, however, might be facilitated by the short tail length of the lipids used and become more difficult with typical phospholipids. In most of our simulations, penetration of the copolymers caused infiltration of water molecules between the lipids' tails. This made the lipids close to the copolymers tilt and shield their tails from the carboxyl groups of the copolymers and the water molecules. Some lipids even toppled over, lying horizontal to the membrane surface (Fig. 3 c). The disorder in the lipid bilayer allowed other water molecules on the other side to cross the lipid bilayer, forming transmembrane pores. At the same time, the polymers bridged to the other side, spanning across the membrane (Fig. 3 c). Together, the lipid flip-flopping and polymer translocation relieved the stress imbalance induced by the asymmetric adsorption of the SMA copolymers. Again, the timescale of this process is likely dependent on the length of the lipid tail and artificially enhanced in our simulations. SMA stabilized the pore rim, resulting in pore growth and membrane disruption Even though the initial stress imbalance largely dissipated, water permeation increased after the initial transmembrane pores formed. The amphipathic nature of SMA copolymers would likely favor the interaction with the water molecules inside the pore and the lipid tails. This further stabilized the pore's rim. 
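The asymmetric sodium distribution described above is typically obtained from a number-density profile along the membrane normal. A minimal histogram-based version is sketched below; the file names, ion selection, and bin count are assumptions.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical file names and bead selections.
u = mda.Universe("system.tpr", "traj.xtc")
sodium = u.select_atoms("resname ION and name NA+")

nbins = 100
zmax = u.dimensions[2]          # box height of the first frame (Angstrom)
edges = np.linspace(0.0, zmax, nbins + 1)
counts = np.zeros(nbins)
nframes = 0
for ts in u.trajectory:
    z = sodium.positions[:, 2]  # coordinate along the membrane normal
    hist, _ = np.histogram(z, bins=edges)
    counts += hist
    nframes += 1

centers = 0.5 * (edges[:-1] + edges[1:])
profile = counts / nframes      # average Na+ count per bin per frame
for zc, n in zip(centers, profile):
    print(f"{zc / 10.0:6.2f} nm  {n:8.3f}")
```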
The hydrophobic styrene groups intercalated perpendicularly to the lipid acyl chains, which agrees with the polarized attenuated total reflection Fourier-transform infrared spectroscopy measurements (9). The SMA carboxyl groups and the nearby lipid headgroups faced toward the water pore. This forced the lipids around the pore to tilt, forming a toroidal pore. At the beginning, the pores showed a roughly cylindrical shape and became more irregular as the pores expanded (Fig. 4 a). At the end of the simulations with high concentration of SMA copolymers, we observed big pores forming (with diameters of 5-10 nm) and the original bilayer largely destroyed (see Fig. 4 b). At this point, the systems seemed to reach a metastable state, preventing the complete formation of nanodisks. It is possible that the periodic boundary conditions used in the simulation artificially stabilized the connectivity in the plane of the membrane, resulting in a kinetic trap. The formation of the full pore upon SMA copolymer binding is shown in Video S2. We also quantified the kinetics of pore expansion by measuring the sizes of several pores over time (Fig. S6 c). Complete SMALP nanodisks formed by selfassembly To test the capability of SMA copolymers to form stable SMALPs and to avoid metastable states in preformed membrane bilayers, we also performed self-assembly experiments. We used a mixture of SMA copolymers and DDPC lipids (either 4, 8, or 16 SMA copolymer molecules per 600 lipid molecules), which corresponds to 150:1, 75:1, or 75:2 lipid/polymer ratios. Without copolymers, lipids form stable bilayers in self-assembly simulations (57). However, when we added SMA copolymers to DDPC lipids, stable SMALPs formed with the SMA copolymer bound to the edge of the lipid bilayer disk (Fig. 5). Often a few micelles initially also remained present. As the simulations progressed, however, these micelles merged with other nanodisks through the exposed polymer-depleted sides. The formed nanodisks were always stable. The embedding of the SMA copolymers during the self-assembly simulations was similar to what we observed during the membrane-disruption simulations. SMA copolymers stabilized the pore rim with the styrene and the carboxyl groups in opposite directions (Fig. 4 b). In the self-assembly simulations with a 150:1 lipid/polymer ratio, the SMA copolymers formed four nanodisks, with one polymer chain per nanodisk. The diameters of the nanodisks ranged from 7 to 9 nm. This agrees with the overall structural measurements of free SMALPs in solution. Small-angle neutron scattering measurements have shown that the inner radius of SMA nanodisks is around 3.8 5 0.2 nm (10). The nanodisks comprised one polymer in their annulus, in line with experimental data showing nanodisks surrounded by a one polymer thick belt (9). Despite the use of a different lipid composition in those experiments, earlier experiments have shown that the particle shape and the average diameter of the nanodisks are independent of the acyl-chain length (20). To test whether SMA copolymers can form nanodisks with long-tail lipids, we performed additional self-assembly simulations, replacing DDPC with either DMPC (myristoyl chains, 14 carbons) or DPPC (palmitoyl chains, 16 carbons) lipids. In both cases, stable nanodisks formed with similar sizes and lipid/copolymer ratios compared to DDPC (Fig. S7). With DPPC, however, we often observed two SMA polymer chains per nanodisk, which can be explained by the larger size of the hydrophobic core (Fig. 5 c). 
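One crude way to follow pore growth, in the spirit of the pore-size measurement mentioned above, is to estimate the lipid-depleted area of a leaflet and convert it to an effective circular radius. The area per lipid used below is an assumed typical value for a short-tail PC lipid, and the approach deliberately ignores the irregular pore shapes seen late in the simulations.

```python
import numpy as np

def effective_pore_radius(box_xy_nm, n_lipids_leaflet, area_per_lipid_nm2=0.62):
    """Estimate an effective (circular) pore radius from the area of one
    leaflet that is not covered by lipids.

    box_xy_nm          -- (Lx, Ly) lateral box dimensions in nm
    n_lipids_leaflet   -- number of lipids remaining in that leaflet
    area_per_lipid_nm2 -- assumed area per lipid (typical PC value)
    """
    box_area = box_xy_nm[0] * box_xy_nm[1]
    pore_area = max(0.0, box_area - n_lipids_leaflet * area_per_lipid_nm2)
    return np.sqrt(pore_area / np.pi)

# e.g. a 21 x 21 nm leaflet that has thinned to 600 lipids around a pore
print(effective_pore_radius((21.0, 21.0), 600))  # ~4.7 nm
```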
To further investigate the polymer concentration effect on the nanodisks formed, we also performed self-assembly simulations at higher (75:1 and 75:2 lipid/polymer ratio) polymer concentrations. Again, nanodisks formed (Fig. S7), but several SMA copolymers surrounded the lipid disk, in agreement with the general idea that multiple polymer chains are required to completely surround a nanodisk (67). Fig. 5 d shows an example of such a nanodisk. Limitations of our model and further controls Simulations of membrane solubilization at an all-atom level of resolution are computationally too expensive and cannot currently be performed. The use of a CG model allows such computations but implies that some detail is lost. An extensive discussion of the assumptions and limitations underlying the Martini model can be found in (36). Here, we briefly discuss the main limitations that could influence our results. One such limitation is the directionality of hydrogen bonds, which is missing in the Martini model; hydrogen bonds are represented isotropically only. Previous work, however, indicates that this is not an important limitation in capturing the essence of membrane-polymer interactions (37)(38)(39)(40)(41)(42)(43)(44)(45)(46)(47)(48)(49)(50)(51)(52). Another limitation, in particular when compared to experimental settings, is the small system sizes considered in this study combined with periodic boundary conditions. The latter may cause an artificial stabilization of the lamellar phase, which makes it more difficult for the polymers to break the membrane into nanodisks. Together with the limited timescales that can be reached by our simulations (microsecond range), this prompted us to select the short-tail DDPC lipids to speed up the process. Energy barriers for pore formation in longer-tail lipids are large, estimated to be around 45 and 78 kJ/mol for DMPC and DPPC, respectively, according to previous all-atom simulations (68). Pore formation is even harder to observe in Martini CG simulations (69). Additional simulations using longer-tail lipids, DMPC or DPPC, and the polyunsaturated dilinoleoyl-PC lipids revealed that the initial adsorption process of the polymers was similar to what we observed for DDPC lipids (Fig. S8), but pores did not form. Therefore, pore formation probably requires longer timescales. However, we expect the mechanism of SMALP formation to be generic. Reassuringly, we observed similar nanodisks forming upon self-assembly, comparing short-tail lipids to more common longer-tail lipids, as shown in Figs. 5 and S7. This data suggests that rupture of intact membranes of longer-tail lipids eventually will also happen with our models but perhaps requires the use of smart sampling techniques to observe the process of pore formation. Another important difference regarding typical experimental settings is the optimal polymer charge density and lipid/polymer ratio. To probe the effect of polymer charge, we performed additional simulations using 50% charged SMA copolymers (i.e., every second maleic acid unit was considered protonated). Experimental data suggest that dissociation of $50% is more appropriate (27). Our simulations showed that changing the charged state of the polymer did not lead to qualitative differences for those conditions tested (see Fig. S6). One difference, though, was that the pores expanded at different rates, with the fully charged model expanding faster than the half-protonated one (Fig. 
S6, c and d), probably due to the larger charge density inside the pore in the case of the fully charged model. Concerning the lipid/polymer ratio, the experiments show that nanodisk formation is more efficient at lipid/polymer weight ratios of between 1:1 and 1:3, depending on the type of copolymers (16). However, control simulations with higher polymer concentrations (up to $3:1 lipid/polymer weight ratio) led to extensive clustering of the polymers in the aqueous phase (Fig. S9). This clustering behavior severely hampered the adsorption and insertion efficiency of the polymers into the membrane. On an experimental timescale, this is no problem, but on our simulation timescale, it is. Fortunately, using the self-assembly setup, it was possible to explore higher polymer concentrations, revealing nanodisk formation at a 75:2 lipid/polymer molar ratio ($3:2 weight ratio), with multiple polymers stabilizing the rim (Fig. 5 d). CONCLUSIONS We investigated the molecular mechanism of the early stages of SMA nanodisk formation using CG MD simulations. Despite the limitations associated with our model, we expect our findings to be generic. More detailed allatom models should validate our results. Based on our simulations, we propose the following mechanism for SMA-induced nanodisk formation: 1. SMA copolymers bind to the membrane surface through the styrene moieties of the termini. The hydrophobic interactions drive the initial insertion with the core of the lipid bilayer. 2. Full insertion of the SMA copolymers' hydrophobic side chains follows, causing local membrane undulation. 3. Translocation of the SMA copolymers relieves the induced stress, together with water molecules and accommodated by lipid flip-flop. Small transmembrane pores form. 4. Growth of the transmembrane pores occurs. The SMA copolymers stabilize the rim by orienting the carboxyl moieties to the water pore, and the benzene groups intercalated in between the lipid tails. This likely disrupts the membrane and favors nanodisk formation. Because of periodic boundary effects, we could not observe the last phase, but self-assembly simulations show that SMALP nanodisks are the thermodynamically favorable state of the system. Our findings show the solubilization ability of SMA copolymers and the details of the process at the molecular level. Our simulation protocol paves the way for further studies of SMA nanodisks, exploring different conditions (pH, polymer composition, and multicomponent lipid membranes) and will help the design of optimized copolymers for nanodisk formation and drug-delivery systems. Simulations of nanodisk formation with membrane-embedded proteins are underway. They will contribute to understanding the influence of SMA copolymers on the structural and dynamic properties of proteins and their annular lipid shells in SMA nanodisks. SUPPORTING MATERIAL Supporting Materials and Methods, nine figures, two tables, and two videos are available at http://www.biophysj.org/biophysj/supplemental/ S0006-3495 (18) chains with different chirality, we noticed that the bonded distributions showed better convergence using a tetramer. To predict the most stable stereo isomeric tetramers, we used quantum mechanics calculations for all possible stereoisomers (2 4 ) (see Fig. S1). We used a short chain of styrene and maleic anhydride copolymer since this is the precursor before hydrolysis during the industrial production of the SMA-2000 copolymer. 
We used the semi-empirical PM6 method because it accounts for accurate intramolecular interactions and has the advantage of being less time-consuming than other DFT methods. The conformational search of each tetramer was performed in the gas phase at the PM6 level of theory. Eventually, the five most stable chirality sequences were used for the parametrization of the coarse-grained (CG) SMA model. Mapping: The mapping of the CG beads was chosen based on the mapping of previous Martini models (see built five different SMA copolymers based on the most stable SMA tetramers (see Table S1). In these simulations, one SMA copolymer was placed at the interface of water and dodecane. The system size was set to 4.4´4.4´6.9 nm 3 and a similar number of water and dodecane molecules were included. Sodium ions were added to neutralize the negative charge of the SMA copolymer. Bonds, angles and proper and improper dihedral angles of the CG molecules were optimized by comparison with atomistic simulations using the OPLS-AA force field 2 . The details of the CG simulations can be found in the Methods section of the main manuscript. The target distribution functions were obtained after a few iterative steps and were agree remarkably well with the atomistic distributions (Fig. S2a). We tested the CG models by calculating the radii of gyration depending on polymer chain length, and by comparing the results obtained from CG and AA simulations (Fig. S2b). A very good agreement between the CG and AA models was obtained. SMA copolymers in solutions: To further validate the CG model, we considered the behavior of the full SMA copolymer chain in solutions. Here we compared our CG model with an atomistic model using the CHARMM36 force parameters set 3 . The system consists of a single SMA chain of 23 monomeric units in excess aqueous solvent. Constant temperature was maintained at 310 K and constant pressure was maintained at 1 atm. Full electrostatic forces were evaluated using the particle-mesh Ewald method with a cutoff of 1.4 nm 4 . Short non-bonded terms were evaluated every step using a cutoff of 1.4 nm for van der Waals interactions. The last 200 ns of the simulation were used for data analysis, with the first 50 ns considered as equilibrium. Previous potentiometric titration studies of SMA copolymers showed that, at neutral and high pH, the electrostatic repulsions between the carboxylic groups dominate the hydrophobic interactions between styrene groups within the polymers, resulting in a disordered conformation that dissolves relatively easily in aqueous solution 5,6 . Both our all-atom and CG simulations showed that the fully charged SMA copolymer adopts a mostly disordered conformation in agreement with the potentiometric studies (Fig. S3a,b). The conformations adopted by the polymers range from partly collapsed to full extended. Visual inspection indicates also the presence of loops forming within one copolymer, providing some shielding of the styrene moieties (see Fig. 2A in the main manuscript). Both atomistic and CG MD simulations of the SMA copolymer showed similar flexibility properties according to the end-to-end distances and radius of gyration ( Fig. S3c and S3d).
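The radius-of-gyration comparisons mentioned above (Figs. S2b and S3c-d) rest on the standard mass-weighted definition, which a few lines of NumPy reproduce; the bead coordinates and masses in the toy example are placeholders, not simulation data.

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration.

    coords -- (N, 3) array of bead positions (nm)
    masses -- (N,) array of bead masses
    """
    com = np.average(coords, axis=0, weights=masses)
    sq = np.sum((coords - com) ** 2, axis=1)
    return np.sqrt(np.average(sq, weights=masses))

# toy example: four equal-mass beads on the corners of a 1 nm square
coords = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
masses = np.ones(4)
print(radius_of_gyration(coords, masses))  # ~0.707 nm
```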
6,605.2
2018-06-20T00:00:00.000
[ "Biology" ]
Characterization of Thermoresponsive Poly-N-Vinylcaprolactam Polymers for Biological Applications Poly-N-Vinylcaprolactam (PNVCL) is a thermoresponsive polymer that exhibits lower critical solution temperature (LCST) between 25 and 50 °C. Due to its alleged biocompatibility, this polymer is becoming popular for biomedical and environmental applications. PNVCL with carboxyl terminations has been widely used for the preparation of thermoresponsive copolymers, micro- and nanogels for drug delivery and oncological therapies. However, the fabrication of such specific targeting devices needs standardized and reproducible preparation methods. This requires a deep understanding of how the miscibility behavior of the polymer is affected by its structural properties and the solution environment. In this work, PNVCL-COOH polymers were prepared via free radical polymerization (FRP) in order to exhibit LCST between 33 and 42 °C. The structural properties were investigated with NMR, FT-IR and conductimetric titration and the LCST was calculated via UV-VIS and DLS. The LCST is influenced by the molecular mass, as shown by both DLS and viscosimetric values. Finally, the behavior of the polymer was described as function of its concentration and in presence of different biologically relevant environments, such as aqueous buffers, NaCl solutions and human plasma. Introduction Thermoresponsive polymers are characterized by a drastic and discontinuous change of their physical properties with temperature. Given a solvent/polymer binary mixture, the phase diagram usually exhibits a binodal curve that divides a polymer-rich zone from a zone in which the polymer and the solvent are miscible in every proportion. Accordingly, the critical solution temperature can correspond to the minimum or the maximum of the binodal curve. If the critical point corresponds to the minimum of the curve, the corresponding temperature is called lower critical solution temperature (LCST). On the contrary, the temperature corresponding to the maximum of the curve is called upper solution temperature (UCST) [1]. The ability to respond to a change in temperature makes thermoresponsive polymers a "smart" class of materials that can be applied in a broad range of applications [2]. To date, there are hundreds of thermoresponsive polymers developed for various applications in the biological field, which include tissue engineering, bioseparation, drug and gene delivery [2][3][4]. As a general rule, LCST-type polymers are easily solvated in water through hydrogen bonding and polar interactions [5]. Accordingly, their biological interest relies on presence of NaCl and three different buffers: citrate (pH 3), acetate (pH 5) and phosphate (pH 7). Finally, we report a method for the evaluation of the polymer in human plasma. In particular, it was observed that it is possible to increase LCST by decreasing PNVCL-COOH concentration, while the presence of salts, particularly phosphates, results in a significant lowering of the LCST. The measurements performed in human plasma showed a relevant lowering of the LCST of PNVCL-COOH polymers by about 10 • C. These observations are particularly useful for physiological applications, as the polymers undergoing LCST may result in possible cytotoxic effects. Synthesis of PNVCL-COOH Polymers PNVCL-COOH was synthesized by free radical polymerization by using a modified version of the protocol reported by Prabaharan [35]. 
Prior to their utilization, NVCL was recrystallized in hexane and AIBN was recrystallized in ethanol, while MPA was transferred in a sealed vial that was deoxygenated with Ar. For all preparations, 5 mg of AIBN (0.304 mmol) and 28 µL of MPA (3.278 mmol) were used. Different molar ratios between NVCL and AIBN were used in order to obtain polymer with different molecular mass and LCST. The NVCL/AIBN molar ratio were, respectively, 122, 244, 305, 610, 1220 and 1690. DMF was deoxygenized with Ar for 15 min prior to the addition of the reagents ( Figure 1). After the dissolution of the reagents, the reaction was carried out in sealed vials at 70 • C for 8 h. All sealed vials were dried overnight at 80 • C before the reaction. After the reaction, the solution was dialyzed in a cellulose membrane tubing (MWCO of 1-2 kDa) against distilled water for at least 2 days to remove impurities and unreacted materials. Finally, the frozen product was freeze-dried at −50 • C and 0.05 mbar and stored at 4 • C. Structural Characterization of PNVCL-COOH Polymers NMR spectra were acquired using a High-resolution 500 MHz Bruker NEOn500 Quadruple resonance (H/C/N/2H) equipped with a high-sensitivity TCI 5 mm CryoProbe (Bruker, Billerica, MA, USA) at 25 • C in D 2 O and d 6 -DMSO. FT-IR spectra were recorded with a double-beam Perkin Elmer System 2000 Ft-IR Spectrometer (Perkin Elmer, Waltham, MA, USA) in the range of 4500-370 cm −1 using KBr pellets. The number of terminal carboxyl groups was determined via conductimetric titration. The polymers were dissolved using diluted HCl and the solutions were titrated using a standardized solution of NaOH 0.1 M. The deprotonation of the -COOH end groups resulted in the formation of a small plateau in the conductimetric titration curve. The number of carboxyl groups was determined by the volume difference of added NaOH solution between the initial and the final point of the plateau ( Figure S10). UV-VIS Determination of LCST Phase transition and absorbance measurements were carried out in a Shimazdu UV-visible spectrophotometer model UV-2450 equipped with a TCC-240A Thermoelectrically Temperature Controlled Cell Holder (Shimazdu, Kyoto, JP). Transmission data were used for the realization of the miscibility curves. The miscibility curves were fitted with a sigmoidal model in order to calculate the LCST as the inflection point by using the following equation: where Tr is the calculated transmittance at a specific wavelength at a fixed temperature. DLS Determination of LCST The LCSTs of PNVCL-COOH polymers were determined using a Malvern Zetasizer Nano ZS90 (Malvern Panalytical S.R.L., Malvern, UK). Samples (at a concentration of 0.5 wt.%) were incubated at different temperatures for 5 min in the temperature range between 25 and 50 • C and the cuvettes were examined to check visible turbidity. The LCST was attributed to the temperature at which a dramatic change in the shape of the autocorrelation curve was observed [39]. Dynamic Light Scattering The molecular mass of the polymers was estimated using a Malvern Zetasizer Nano ZS90 (Malvern Panalytical S.R.L., Malvern, UK). All measurements were performed at 25 • C at a concentration of 0.5 wt.% in milliQ water. Each sample was analyzed three times and provided a measurement of an average hydrodynamic diameter (D h ) corresponding to the position of a peak in size distribution located between 5 and 30 nm. D h standard deviation is referred to the position of the peaks and does not provide any information on peak width. 
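The sigmoidal fit used to extract the LCST can be reproduced with a Boltzmann-type sigmoid whose inflection point is taken as the LCST. The specific functional form below is an assumption (the paper's Equation (1) is not reproduced in this text), but any monotone sigmoid with an explicit inflection-point parameter serves the same purpose; the transmittance data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, tr_high, tr_low, lcst, width):
    """Boltzmann-type sigmoid: transmittance vs temperature, with the
    inflection point at `lcst` (assumed functional form)."""
    return tr_low + (tr_high - tr_low) / (1.0 + np.exp((T - lcst) / width))

# synthetic transmittance-vs-temperature data for illustration only
T = np.linspace(25, 50, 26)
Tr = boltzmann(T, 100.0, 2.0, 38.0, 0.8) + np.random.normal(0, 1.0, T.size)

popt, _ = curve_fit(boltzmann, T, Tr, p0=(100.0, 0.0, 37.0, 1.0))
print(f"fitted LCST = {popt[2]:.1f} °C")
```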
For a random coil conformation, the average radius of gyration was calculated as R g = D h × 0.75. The average molecular mass was estimated from R g using the equation reported by Lau [40] (Equation (2)) and Eisele [41] (Equation (3)), respectively: where s 2 is the mean square radius of gyration in cm −1 . Intrinsic Viscosity Measurements The intrinsic viscosity [η] of PNVCL-COOH was measured at 25 • C by means of a Schott Geräte AVS/G automatic measuring apparatus and a Ubbelhode capillary viscometer, using water as a solvent. Polymer solutions and solvents were filtered prior to the analysis through 0.45 µm nitrocellulose filters (Millipore, Germany). [η] was calculated from the polymer concentration dependence of the reduced specific viscosity, η sp /c, according to the Huggins Equation (4): where k is Huggins constant. The values of intrinsic viscosity [η] was calculated at infinite dilution by using two calibration lines per sample, one of which was obtained by excluding the values for the most diluted sample. [η] was calculated as an average of the values calculated by applying both equations, in order to provide a reasonable confidence interval for the calculated value. The corresponding average viscosimetric molecular weight (M η ) of PNVCL-COOH was calculated in agreement with the Mark-Houwink-Sakurada (MHS) Equation (5). K and a parameters used for the calculation are reported by Kirsh [12,42] for the calculation of the molecular mass in different size range at 25 • C. The MHS equations that were used were, respectively: Results and Discussion PNVCL-COOH polymers were prepared via free radical polymerization as reported by Prabaharan [35]. The initial results ruled out the possibility that PNVCL-COOH was able to precipitate in diethyl ether in the conditions described by the original procedure. Accordingly, the procedure was modified in order to produce PNVCL-COOH polymers with higher molecular weight. In free radical polymerization, the degree of polymerization X n is directly proportional to the square of the concentration of the monomer, according to the following equation: Consequently, the preparation of PNVCL-COOH with higher molecular weight was achieved by changing the molar ratio between initiator and monomer (M/I). For this reason, PNVCL-COOH were distinguished according to the M/I molar ratio that was used for their synthesis. The M/I was, respectively, 122 (equal to that reported by Prabaharan), 244, 305, 610, 1220 and 1690. The precipitation in diethyl ether was achieved from values of M/I above 610. The elongation of the hydrophobic portion resulted in a different hydrophilichydrophobic balance that allowed the precipitation of PNVCL_610, PNVCL_1220 and PNVCL_1690 in diethyl ether. FRP of PNVCL is associated with a lack of control on polymer polydispersity. This could be related to the low yield of the synthesis after diethyl ether precipitation (<20%). Accordingly, the fraction of the polymer under a critical molecular mass was unable to precipitate in the solvent. The average yield of the process was raised by purifying the polymer directly with dialysis using with membrane tubings (MWCO = 1 kDa) that were compatible with DMF solutions. In this way, PNVCL-COOH polymers were obtained without diethyl ether precipitation. The main drawback of this purification method is the inability to remove eventual traces of DMF that remain in water after dialysis. 
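For completeness, the Huggins and Mark-Houwink-Sakurada relations referred to above as Equations (4) and (5) are given below in their standard textbook form; the specific K and a values taken from Kirsh, and the Lau and Eisele molar-mass relations, are not reproduced here.

```latex
% Huggins relation: reduced specific viscosity vs concentration
\frac{\eta_{sp}}{c} = [\eta] + k_H \, [\eta]^2 c
% Mark-Houwink-Sakurada relation between intrinsic viscosity and molar mass
[\eta] = K \, M_{\eta}^{\,a}
```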
In a few polymers, it was possible to identify a minor peak at 2.9 ppm (Figure 2) that demonstrates the presence of a residual DMF in freeze-dried products. The peak had been previously mistaken for the methylene group (-C-S-CH 2 ) present in the terminal group [35]. This hypothesis was excluded with the utilization of HSQC-DEPT heterocorrelated spectra (see Supplementary Figure S4). All polymers exhibited different LCSTs in relation to their different molecular mass and their corresponding M/I ratio. Structural Characterization of PNVCL-COOH Polymers NMR characterization was performed in D 2 O and d 6 -DMSO. In the 1 H spectrum of PNVCL-COOH, four main signals were observed. The formation of the polymer was confirmed by the presence of broad signals and from the disappearance of the vinylic signals at 7.36 ppm. Similarly, amide I vibrations at 1631 and 1480 cm −1 (C-N stretching vibration) were observed, while the characteristic signals related to the monomer (C=C, 1658 cm −1 , CH= and CH2=, 3000 and 3100 cm −1 ) disappeared (see Supplementary Figure S5). All 1 H NMR PNVCL-COOH spectra exhibited peaks at 1.77 ppm (3H, -CH 2 ), 2.49 ppm (2H, -COCH 2 ), 3.31 (2H, -NCH 2 ) and 4.36 (1H, -NCH). The HSQC correlation spectrum with 1 H and 13 C assignments is reported in Figure 2. 1 H and 13 C spectra are provided in the Supplementary Materials ( Figures S1 and S3). The presence of carboxyl end groups was confirmed by analyzing PNVCL_244 and PNVCL_305 in d 6 -DMSO using a high number of acquisitions. The signals were identified in small peaks observed at about 12 ppm in both spectra ( Figure S2). In FT-IR spectra, the carboxyl end groups were recognized from the presence of broad signals at 3450 cm −1 ( Figure S5). The carboxyl group content was estimated via conductimetric titration and was found to be inversely dependent on the M/I ratio. The number of terminations for PNVCL_122, PNVCL_305, PNVCL_610, PNVCL_1220 and PNVCL 1690 were, respectively, 0.96 ± 0.25, 0.73 ± 0.17, 0.63 ± 0.14, 0.55 ± 0.12 and 0.45 ± 0.13 mmol/g. 13 C spectra confirmed the structure of the polymers based on the previous literature [10,12] (Figure 3 and Figure S2). Heterocorrelated 2D-HSQC spectra (see Supplementary Figure S4) allowed the identification of two minor signals at 1.55/29 ppm and 2.4/38 ppm that were related to the presence of the sulphur-bonded aliphatic methylene and the methylene bonded to the carboxyl termination group. The comparison between 13 C spectra ( Figure 3) of PNVCL_122, PNVCL_244 and PNCVL_1220 in the range between 50 and 25 ppm confirmed that the signals are associated to the aliphatic portion of the termination groups. As shown, the signal intensity decreases as M/I ratio increases. This observation is in accordance with the presence of different PNVCL-COOH polymer with increasing molecular mass and increasing M/I ratio. The signals at 2.9/37 ppm were associated to CH 3 residues of residual DMF solvent. Spectroscopic Determination of LCST The miscibility curves were represented by plotting the transmittance of the solutions as a function of temperature (Figure 4a). LCSTs were determined from the inflection points of the sigmoidal functions that were used to fit the miscibility curves. PNVCL-COOH polymers (0.5 wt.% solutions) exhibited LCST in a range between 33 and 42 • C. The results showed that LCST diminishes in relation to the M/I ratios that were used for the synthesis of the polymers (Table 1). 
This molecular mass-dependent behavior is in accordance with a "classical" type I miscibility behavior. Accordingly, the increase of the hydrophobicity of the polymer results in the reduction of the LCST. The results excluded the presence of "step" transitions, that are generally observed in highly polydisperse polymers with different LCSTs. As a result, the variation of the M/I ratio provided a reliable procedure for the control of the LCST with simple FRP. The type I behavior of PNVCL-COOH was further validated by measuring the LCSTs in the presence of salt species. The 0.5 wt.% PNVCL-COOH solutions were prepared in citrate (pH 3), acetate (pH 5) and phosphate buffers (pH 7). All buffers were prepared at the concentration of 0.1 M to compare the effect of the different ions on the LCSTs of the polymers. The results (Figure 4c) showed a strong dependence towards the types of ions dissolved in solution, as it was previously demonstrated by other studies on PNVCL polymers [4]. The results suggested that the ionic environment has a stronger effect in relation to the pH of the solution. The effect of salts was more pronounced for short chain polymers, which have a higher LCST (Figure 4c). The most evident effect was observed by dissolving PNVCL-COOH in phosphate buffer. This could be relevant from the biological point of view, since phosphate buffers solutions (e.g., PBS) are widely used in cell treatment kits [4,[28][29][30][31]. During LCST transition, PNVCL-COOH undergoes a coil-to-globule transition [3,4]. Consequently, the lowering of the LCST due to the cell culture medium could result in potential cytotoxic effect due to the conformational change of the polymer. The effect of polymer concentration was assessed by measuring the LCSTs of PNVCL-COOH solutions at the concentration of 0.5, 0.1, 0.05 wt.% (Figure 4b). Normally, the LCST is associated with the point where the transmittance goes to zero [8,35,43]. However, this model was not suitable for describing the behavior of solutions at lower concentration (<0.1 wt.%). The determination of the LCST of dilute solution (<0.1 wt.%) of PNVCL has been previously reported with DLS, static light scattering and differential scanning calorimetry [4,44]. The calculation of the LCST at lower concentration was facilitated using a sigmoidal model for the interpretation of curves (Equation (1)). Dilution resulted in higher values of LCST and slower transition. Accordingly, PNVCL-COOH concentration affects the kinetics and the energy of LCST transition. Finally, the LCSTs were measured in presence of NaCl in range of concentrations between 0 and 0.15 M (Figure 4d). Results showed that LCST decreases proportionally in relation to NaCl concentration [45]. LCST decreased by about 1 • C in a physiological solution (0.9% NaCl, or 0.15 M). Consequently, pharmaceutical applications of PNVCL-COOH polymer in a physiological solution would require a polymer with a LCST of at least > 38 • C in order to avoid cytotoxic effects due to the polymer transition. Scattering Determination of LCST DLS provided lower values of LCST in relation to UV-VIS spectroscopy. The analyses of polymers with higher M/I (PNVCL_610, PNVCL_1220, PNVCL_1690) provided a direct measurement of LCST from the displacement of the autocorrelation curve due to the transition. The analyses of the other samples did not provide a direct estimation due to the small dimensions of the PNVCL-COOH macromolecules in solution. Consequently, the LCST was estimated by checking visible turbidity. 
The observed differences between DLS and UV-VIS measurement were related to the polydispersity of the polymers. DLS is more sensitive to the formation of aggregates in the proximity of LCST. The formation of globular aggregates at temperature inferior to LCST is related to the fraction of PNVCL-COOH polymers with higher molecular mass. Since the scattering intensity is proportional to D h 6 , a small fraction of polymer undergoing coil-to-globule transition is able to produce variation in the autocorrelation curve shape or position ( Figure S7) [39]. Accordingly, the entity of the differences between UV-VIS and DLS measurements provided an overview on the polydispersity of PNVCL-COOH polymers. According to the results, the polymer with highest polydispersity was PNVCL_610. Determination of Molecular Mass The molecular weight determination of polymer was carried out by SEC, DLS and viscosimetry. DLS and viscosimetry provided an estimation of the average molecular mass, while SEC did not produce any appreciable results due to the sorption of PNVCL on the column. This issue has been previously addressed for the analysis of PNVCL column in most solvents, including aqueous buffers and THF [10,46]. DLS allowed the estimation of the molecular mass according to the models described by Lau [40] and Eisele [41]. The viscosimetric molecular mass of PNVCL_244, PNVCL_305 and PNVCL_1220 mass was determined using water as a solvent at 25 • C. Samples were analyzed in dilute conditions (≤ 0.2 wt.%) in order to prevent the formation of foam. Due to the low viscosity of the sample, the differences between the run times were very small. Similarly, the values of inherent viscosity (η inh ) were considered too low for the calculation of the molecular mass with the model described by Kraemer [47]. The comparison between DLS and viscosimetric measurements is reported in Table 1. The estimated molecular mass as a function of the M/I value is reported in Figure 5a and the variation of LCST as a function of the molecular mass is reported in Figure 5b. Both DLS and viscosimetric results are in line with a type I thermoresponsive polymer behavior. As the molecular mass increases, the LCST decreases. The utilization of K and a constant reported by Kirsh [12,42] demonstrated that the two models can be equally applied for PNVCL-COOH polymers in this molecular weight range . Molecular mass values obtained through viscosimetry were slightly lower than those obtained by interpreting DLS data with the model provided by Lau (Equation (2)), while the use of Eisele's equation led to the overestimation of the molecular mass. Interestingly, it was observed that the values calculated using Lau's equation were almost half the values obtained with Eisele's. The bigger differences between DLS and viscosimetry data are observed with increasing molecular mass (Table 1, Figure 5a,b). Accordingly, scattering methods should be considered for the estimation of the molecular mass of PNVCL polymers with lower molecular mass, that are not suitable for viscosimetric analysis due to their low viscosity. The reported relations between LCST and M η are in accordance with the mixing behavior of aqueous PNVCL solutions reported by Meeussen [22] and Kirsh [42]. Accordingly, the small difference between the calculated LCST may be related to the different end groups of the polymers under consideration. The presence of terminations or compounds that increase the hydrophilicity of PNVCL is known to increase the LCST [27,48]. 
Furthermore, PNVCL-COOH polymers with a 32 • C LCST have been frequently used as starting polymers for the preparation of thermoresponsive particles [7][8][9]19,35,38]. This LCST value has been previously associated to the behavior of PNVCL-COOH polymers with a molecular weight of 1 kDa by means of GPC-SEC measurements in THF [35] and never discussed in subsequent publications [7][8][9][16][17][18][19]. PNVCL_1690 shows similar properties as the LCST is close to 33 • C. According to DLS measurements, PNVCL-COOH polymers require a molecular mass higher than 100 kDa in order to exhibit LCST at 32 • C. Similarly, viscosimetric analyses demonstrated that the value should be at least higher than 42 kDa, which is the value that we associated to a LCST of 34 • C. Accordingly, the importance of the molecular mass of PNVCL polymers appeared to be underestimated in studies oriented towards biological applications [7][8][9][16][17][18][19]35]. The molecular mass of the PNVCL has a central role in defining the thermoresponsive behavior of nanoparticles, microgels, gels and other biocompatible devices for drug and gene delivery. Consequently, it is essential to correctly characterize the molecular mass with precise, standardized, and reproducible methods. Determination of LCST in Human Plasma The critical miscibility behavior of PNVCL-COOH was observed in human plasma to evaluate possible cytotoxic effects related to the utilization of the polymer in physiological fluids. Plasma represents 55% of the blood and consists mainly of water, dissolved ions, proteins and gases. Among the main proteins, albumin, fibrinogen and globulins are present, whose functional groups give rise to the intense visible bands in the region between 200 and 600 nm [49]. In both samples analyzed, the peak at 576 nm and the shoulder at 540 nm are related to the presence of oxyhemoglobin [50] due to the hemolysis of residual erythrocytes. A first calibration of the matrix was performed on a pool of plasma realized from the union of samples taken from 12 heathy donors in a temperature range between 25 and 50 • C. The calibration excluded any spectral modification between 25 and 42 • C. Due to the complexity of the matrix, all solutions were analyzed using milliQ water as a reference. The measurements of the LCST of PNVCL_122, PNVCL_244 and PNVCL_305 were provided by the analysis of the region devoid of amino acid signals, between 600 and 800 nm (Figure 6a and Figures S8,S9). Upon heating, the absorbance signal of the spectra was shifted towards higher values due to the conformational change of the polymers. The results show that the human plasma is responsible for a significant lowering of the LCST by about 10 • C, as spectral changes were already observed at 28 • C. This is in line with the results of previous experiments, which have shown that LCST is affected by the presence of ions and proteins. By heating up the solution to 37 • C, the spectra change dramatically, and it was no longer possible to recognize any spectral information. However, the spectral properties of plasma were restored by cooling the system back to 25 • C. Accordingly, the reversibility of PNVCL-COOH transition is maintained within the plasma matrix. LCST values were calculated by fitting the transmittance vs. temperature data with a sigmoidal function (Equation (1)). Transmittance data and fitting functions are reported in Figure 6b. The comparison between the LCST of PNVCL-COOH in milliQ water and plasma are reported in Table 2. 
It can be concluded that the polymer is not suitable for direct utilization within plasma at a concentration of 0.5 wt.%. However, the reversibility of the transition could be an important starting point for future developments. According to the previously reported results, the LCST can be increased simply by decreasing the chain length through the M/I ratio and by diminishing the polymer concentration.

Conclusions

In this work, the preparation of PNVCL-COOH linear thermoresponsive polymers with an LCST of between 33 and 42 °C was reported. The results contradict the established thesis that PNVCL has a characteristic LCST at 32 °C and exhibits a miscibility behavior similar to PNIPAM. The utilization of different ratios between monomer and initiator (M/I) allowed the LCST to be lowered by increasing the molecular mass of the polymer. Accordingly, the increase in molecular mass was associated with a decrease in terminal-group content, which was observed by NMR spectroscopy and conductometric titration. Molecular mass characterization was approached with different methods, and a comparison of the results was provided. While GPC-SEC proved to be unreliable owing to the tendency of the polymer to adsorb on the column, DLS and viscosimetry proved to be simple and effective methods for the estimation of the molecular mass in simple aqueous solution. The behavior of the polymer was described as a function of its concentration and in the presence of different environments, such as buffers, NaCl solutions and human plasma. The use of a sigmoidal model allowed the equilibrium miscibility temperature (LCST) to be identified as the inflection point of the miscibility curve. This allowed the behavior in diluted solutions or in complex matrices to be interpreted with greater precision. The variability associated with the LCST values in the different solutions demonstrated the importance of reporting the LCST of PNVCL polymers in relation to their concentration and molecular mass. In addition, the study demonstrates the importance of screening the behavior of PNVCL polymers in biologically relevant environments (plasma, PBS, physiological solution). The LCST of PNVCL polymers for biological applications should be determined within these solutions in order to prevent cytotoxic effects due to the thermo-induced conformational change.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/polym13162639/s1, Figure S1: 1H spectrum of PNVCL_122; Figure S2: Carboxylic termination group signals in the 1H NMR spectrum of PNVCL_305 dissolved in d6-DMSO; Figure S3: 13C spectrum of PNVCL_122; Figure S4: HSQC-DEPT spectrum of PNVCL_122; Figure S5: Comparison between the NVCL and PNVCL-COOH spectra (PNVCL_305); Figure S6: Reduced viscosity reported as a function of PNVCL_244 (black squares), PNVCL_305 (red circles) and PNVCL_1220 (blue triangles) concentration. The values of η_red were calculated according to the Huggins method by using the calibration lines reported in the figure; Figure S7: Displacement of the correlation curve of a PNVCL_1220 solution (0.5 wt.%) associated with the LCST transition; Figure S8: Variation of the UV-VIS spectrum of a solution of PNVCL_122 in human plasma (0.5 wt.%) with temperature; Figure S9: Variation of the UV-VIS spectrum of a solution of PNVCL_305 in human plasma (0.5 wt.%) with temperature; Figure S10: Conductimetric titration of PNVCL_305.
6,120.8
2021-08-01T00:00:00.000
[ "Materials Science", "Biology", "Medicine" ]
A Label-Free Proteomic Strategy to Investigate the Intramuscular Fat Proteomic Differences Between Biceps Femoris and Longissimus Dorsi in Inner Mongolian Cashmere Goats

Background: Inner Mongolia is a major raw-cashmere-producing province in China. Nearly 700,000 Aerbasi cashmere goats are fed per year, and the corresponding meat production is nearly 10,000 tons. However, there are no reports on the meat of this goat. To better understand the molecular variations underlying intramuscular fat (IMF) anabolism and catabolism in Inner Mongolian cashmere goats, the proteomic differences between the biceps femoris (BF) and longissimus dorsi (LD) were investigated with a label-free strategy. Then, the identified proteins were verified as being involved in IMF anabolism and catabolism by Western blot analysis. Results: The IMF content was significantly higher in the BF than in the LD, suggesting that IMF accumulated more in the BF or was metabolized more in the LD. We performed proteomic analysis of IMF anabolism and catabolism at the proteomic level, and 1209 proteins were identified in the BF (high-IMF) and LD (low-IMF) groups. Among them, 110 were differentially expressed proteins (DEPs), 81 of which were upregulated in the high-IMF group, while 29 were upregulated in the low-IMF group. Gene Ontology (GO) classification showed that the 110 DEPs were functionally classified into 100 annotation clusters. Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis showed that the 110 DEPs covered 34 KEGG pathways. Three pathways were related to IMF metabolism and deposition—fatty acid metabolism, fatty acid degradation and fatty acid elongation—and included 7 proteins. Conclusion: GO and KEGG analyses showed that differentially expressed HADHA, HADHB, ACSL1, ACADS, ACAT1 and ACAA2 in the mitochondria act via fatty acid metabolism, fatty acid degradation and fatty acid elongation to influence the metabolism and synthesis of long-, short- and medium-chain fatty acids and modulate IMF anabolism and catabolism. Protein-protein interaction (PPI) network analysis showed that IMF accumulation in different muscle tissues of Inner Mongolian cashmere goats was affected not only by 5 key enzymes and proteins involved in fatty acid synthesis and metabolism but also by five DEPs (SUCLG1, SUCLG2, CS, DLST, and ACO2) in the TCA cycle. Our results provide new insights into IMF deposition in goats and improve our understanding of the molecular mechanisms underlying IMF anabolism and catabolism.

Background

Inner Mongolia, a major raw-cashmere-producing province in China, harbors three types of cashmere goats: Erlangshan, Aerbasi and Alashan [1]. The Inner Mongolian cashmere goat is a local breed that provides both cashmere and meat, and the production of cashmere in Inner Mongolia accounts for approximately 40% of the total output of the whole country [2]. Nearly 700,000 Aerbasi cashmere goats are fed per year, and the corresponding meat production is nearly 10,000 tons. Aerbasi cashmere has been widely studied, but the meat value of this goat has not been reported.
Therefore, the study of Aerbasi cashmere goat meat is of great significance. Intramuscular fat (IMF) is mainly found in the sarcolemma, including the perimysium, epimysium and endomysium, and is mainly composed of triglycerides and phospholipids [3]. The IMF content and its fatty acid composition play an important role in meat quality, affecting the sensory properties (flavor, juiciness and tenderness) and nutritional value of meat [4]. The positive effect of IMF on the sensory quality of meat has been demonstrated in pork [5], mutton [6], and beef [7]. The IMF deposition capacity is regulated by 3 aspects: transport of fatty acids, fat anabolism and fat catabolism. Recent studies have demonstrated that FABP and CD36 play important roles in the process of fat uptake [8,9], and DGAT1 and SCD regulate the synthesis of IMF [10,11]. The HSL and LPL genes regulate fat breakdown [12,13]. However, it is not known which proteins affect the deposition and metabolism of IMF in cashmere goat meat, so research on the protein composition of cashmere goat meat is urgently needed. In recent years, the label-free approach to proteomics, which is known to be a reliable, versatile, and cost-effective strategy, has received much attention [14]. Furthermore, it has been successfully applied to compare proteome changes related to beef color stability [15] and to gain knowledge of broiler self-regulation mechanisms under heat stress [16]. However, the application of a label-free proteomics strategy to investigate proteomic changes that occur during the process of IMF regulation has not yet been reported in cashmere goats. Moreover, the proteomic abundance in cashmere goats remains unclear. Therefore, the objective of the present study was to identify key proteins involved in IMF anabolism and catabolism and to reveal the underlying biochemical events with the help of bioinformatics analyses.

IMF content of the LD and BF muscles

To assess the IMF content, longissimus dorsi (LD) and biceps femoris (BF) tissues were collected from 20 goats. Independent-sample t-tests for fatty acids in different muscles showed that the IMF content was significantly higher in the BF than in the LD (P < 0.01) (Fig. 1). This suggests that IMF is deposited at higher levels in the BF or metabolized to a greater degree in the LD. To explore IMF anabolism and catabolism at the proteomic level, we selected the BF and LD from 6 goats for the next experiment.

Protein Identification and Comparative Analysis

To understand the protein composition of these six goats, the UniProt/Swiss-Prot/Capra hircus database was used as a reference to investigate the proteome during IMF anabolism and catabolism using a label-free mass spectrometry strategy. A total of 1209 proteins were identified in the BF (high-IMF group) and LD (low-IMF group), and 993 and 896 proteins were identified in each group with a false discovery rate (FDR) ≤ 0.01 (supplementary material 1). Therefore, there are many proteins that are yet to be studied in the Inner Mongolian cashmere goat. To visualize and differentiate the observed sample clusters, principal component analysis (PCA) was performed to compare the high-IMF group and low-IMF group based on the proteins present in both groups. PCA is an unsupervised method that can condense a large number of variables (proteins) into a set of representative and uncorrelated principal components by means of their variance-covariance structure. The score plot of the PCA of the high-IMF group and low-IMF group (Fig.
2a) showed that 45.9% of the variability was explained by the first two principal components, which accounted for 31.8% and 14.5% of the total variance. Samples from the two groups could be separated completely and were located in different quadrants, which indicated the existence of differentially abundant proteins between the high-IMF group and the low-IMF group. The volcano plots present samples with fold changes > 2 or < 0.5 and a P-value < 0.05 (Fig. 2b). In the comparison of the high-IMF group and the low-IMF group, there were a total of 110 significant differentially expressed proteins (DEPs) that were associated with the accumulation or metabolism of IMF; 81 proteins were upregulated in the high-IMF group, whereas 29 proteins were upregulated in the low-IMF group. The detailed results for the DEPs are presented in supplementary material 2.

Function Analysis of Differentially Expressed Proteins

According to the Gene Ontology (GO) classification statistics (Fig. 3), DEPs related to IMF metabolism and deposition can be divided into three main categories: biological processes, cellular components, and molecular functions. In this study, the 110 DEPs were functionally classified into 100 annotation clusters (supplementary material 3). The top 10 annotated GO terms in the biological process category showed that the DEPs participated in the oxidation-reduction process, small molecule metabolic process, oxoacid metabolic process, organic acid metabolic process, generation of precursor metabolites and energy, and carboxylic acid metabolic process (Fig. 3a). The top 10 GO terms in the cellular component category indicated that the DEPs were mainly enriched in mitochondria, the intracellular region, organelles, intracellular organelles, the cytoplasm, and the mitochondrial envelope. The top 10 GO terms in the molecular function category indicated that the DEPs were mainly involved in catalytic activity, ion binding, oxidoreductase activity, cytoskeletal protein binding, and actin binding. It is speculated that during IMF metabolism and deposition, the increase or decrease in proteins or enzymes leads to changes in the metabolic rate or accumulation, which further leads to differences in the IMF content. Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis was performed to evaluate the potential functions of exclusively detected proteins and DEPs. The top 20 enriched KEGG terms of these proteins are described in Fig. 3b. The results show that the 110 investigated proteins cover 34 KEGG pathways. In total, three of the pathways were related to IMF metabolism and deposition, including fatty acid metabolism, fatty acid degradation and fatty acid elongation. Seven proteins were included in these three pathways, such as acetyl-CoA acyltransferase 2 (ACAA2), cholesterol acyltransferase 1 (ACAT1), and enoyl-CoA hydratase (HADHA) (supplementary material 3). The KEGG analysis results provide further insight into IMF anabolism and catabolism. Protein-protein interaction (PPI) network analyses were performed to construct the specific molecular network involving the key DEPs related to IMF anabolism and catabolism between the high-IMF group and the low-IMF group. To search for hub proteins among the DEPs, we used the software CytoHubba. One PPI network was determined, covering 33 DEPs and 147 edges (Fig.
4). ACAT1 and ACAA2 targeted or were targeted by another 9 proteins; SUCLG1 and SUCLG2 targeted or were targeted by another 8 proteins (except HADHA); HADHB targeted or was targeted by another 8 proteins (except ACO2); CS and DLST targeted or were targeted by another 7 proteins (except HADHA and ECHS1); ECHS1 targeted or was targeted by another 7 proteins (except CS and DLST); ACO2 targeted or was targeted by another 7 proteins (except HADHA and HADHB); and HADHA targeted or was targeted by another 5 proteins (except DLST, SUCLG1, SUCLG2 and ACO2). These 10 hub genes play a vital role in regulating IMF anabolism and catabolism. Both the functional analysis and the PPI network analysis showed that HADHA, ACAA2 and ACAT1 might play an important role in regulating IMF anabolism and catabolism in the Inner Mongolian cashmere goat.

Protein Determination by Western Blot Analysis

Western blot quantification is a critical method that provides accurate and reproducible results (Fig. 5). Therefore, this method was used to detect the expression levels of three DEPs (HADHA, fatty acid-binding protein (FABP3) and AMP-binding domain-containing protein (ACSL1)) from the high-IMF group and low-IMF group. All three proteins were upregulated in the high-IMF group. The average band intensities of HADHA, FABP3 and ACSL1 (normalized to β-tubulin) had variation tendencies similar to the results of the MS analyses. The band intensities of these three proteins were significantly higher in the BF than in the LD (p < 0.05).

Discussion

IMF is formed by the deposition of fat in muscle, which is composed of IMF and myofibrils. A previous study showed that the IMF content ultimately depends on fatty acid transport, IMF anabolism and IMF catabolism [17,18]. The process of fatty acid transport involves fatty acids entering intramuscular cells to provide the necessary substrates for IMF synthesis; IMF anabolism includes the synthesis, elongation or desaturation of fatty acid chains and the synthesis of triglycerides; IMF catabolism includes the mobilization of fat in intramuscular cells and the hydrolysis of triglycerides in lipoproteins. Interestingly, in our study, we found that the IMF content was significantly higher in the BF than in the LD. This suggests that IMF is deposited at greater levels in the BF or metabolized to a greater degree in the LD. This provides a good model for studying the regulatory mechanisms of IMF anabolism and catabolism. IMF anabolism and catabolism are closely associated with many critical cellular functions and biological processes. For example, IMF or fatty acid triglycerides are metabolized in the mitochondrial matrix, and this process is called fatty acid beta-oxidation [19]. Fatty acid beta-oxidation generally includes four steps: oxidation, hydration, oxidation and cleavage [20]. In our study, the GO classification statistics of the 110 DEPs showed that the top biological process was the oxidation-reduction process, which included 36 DEPs. In the cellular component analysis, the most enriched cellular component was the mitochondrion, which included 39 DEPs. In the molecular function analysis, 65 DEPs were mainly enriched in catalytic activity. Overall, the DEPs were mostly located in mitochondria, had catalytic activity, and participated in oxidation processes.
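To make the hub-protein counts quoted above concrete, the following sketch computes node degrees from a protein-protein interaction edge list; the edge list shown is a hypothetical subset, whereas the paper's network (33 DEPs, 147 edges) was built and ranked with Cytoscape and CytoHubba.

```python
from collections import Counter
from itertools import chain

# Hypothetical subset of the PPI edge list (pairs of interacting proteins).
edges = [
    ("HADHA", "HADHB"), ("HADHA", "ACAT1"), ("HADHB", "ACAA2"),
    ("ACAT1", "ACAA2"), ("SUCLG1", "SUCLG2"), ("CS", "ACO2"),
    ("CS", "DLST"), ("ECHS1", "HADHB"), ("SUCLG1", "CS"),
]

# Degree = number of partners a protein targets or is targeted by.
degree = Counter(chain.from_iterable(edges))
for protein, n_partners in sorted(degree.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{protein}: targets or is targeted by {n_partners} proteins in this subset")
```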
In summary, the GO analysis results were consistent with the function of the fatty acid beta-oxidation steps, and the DEPs involved in these processes were the key proteins contributing to the differences in fatty acid beta-oxidation, further influencing IMF metabolism. The KEGG enrichment analyses indicated that there were 3 terms (fatty acid metabolism, fatty acid degradation and fatty acid elongation) related to IMF, and 7 of the DEPs were associated with these three pathways. The seven proteins included in these three pathways were ACAA2, short-chain acyl-coenzyme A dehydrogenase (ACADS), ACAT1, ACSL1, short-chain enoyl-CoA hydratase (ECHS1), HADHA, and HADHB. Fatty acids can be classified as long-chain (containing more than 12 carbon atoms), medium-chain (containing 6-12 carbon atoms) and short-chain (containing less than 6 carbon atoms) fatty acids [21]. Short-chain fatty acids can directly cross the outer mitochondrial membrane and enter the mitochondrial matrix for fatty acid oxidation, but long-chain and medium-chain fatty acids need to be transported through the inner mitochondrial membrane under the catalysis of carnitine acyltransferase 1, which is located on the outer mitochondrial membrane. Among the 7 DEPs enriched by the KEGG analysis, ACSL1, HADHA and HADHB are mitochondrial membrane proteins that act on long-chain fatty acids. ACSL1 is known to catalyze the first step (oxidation) of the activation of long-chain fatty acids by converting them into long-chain acyl-CoA thioesters for channeling toward chain elongation, triacylglyceride synthesis or fatty acid oxidation [22]. ACSL1 is necessary for the synthesis of long-chain acyl-CoA esters, fatty acid degradation and phospholipid remodeling [23]. HADHA and HADHB, which break down fatty acids to acetyl-CoA, are specific for long-chain fatty acids. HADHA is involved in fatty acid beta-oxidation, which is a part of lipid metabolism. Research has shown that HADHA overexpression significantly inhibits cell growth, induces cell apoptosis, and decreases the formation of cytoplasmic lipid droplets [24]. ACADS is one of the acyl-CoA dehydrogenases that catalyze the first step (oxidation) of mitochondrial fatty acid beta-oxidation, an aerobic process that breaks down fatty acids to acetyl-CoA. It acts specifically on acyl-CoAs with saturated 4- to 6-carbon-long primary chains. Studies have shown that ACADS not only plays a vital role in free fatty acid β-oxidation but also regulates energy homeostasis [25]. ECHS1, similar to ACADS in function, also participates in the metabolism of fatty acyl coenzyme A esters and is an important mitochondrial fatty acid beta oxidase, but it acts on straight-chain enoyl-CoA thioesters that have 4 to at least 16 carbons [26]. ACAT1 and ACAA2 are involved in lipid metabolism. They are two key enzymes of the fatty acid oxidation pathway, catalyzing the last step (cleavage) of mitochondrial beta-oxidation. They use free coenzyme A/CoA and catalyze the thiolytic cleavage of medium- to long-chain unbranched 3-oxoacyl-CoAs to acetyl-CoA and fatty acyl-CoA, which are shorter by two carbon atoms, thus playing an important role in fatty acid metabolism [27]. GO and KEGG analyses showed that, in the mitochondrion, differentially expressed HADHA, HADHB and ACSL1 participate in the fatty acid metabolism, fatty acid degradation and fatty acid elongation pathways to influence long-chain fatty acid metabolism and synthesis, further influencing IMF anabolism and catabolism.
Differentially expressed ACADS affects fatty acid metabolism, fatty acid degradation and fatty acid elongation, influencing short-chain fatty acid metabolism and synthesis and further influencing IMF anabolism and catabolism; differentially expressed ACAT1 and ACAA2 act on fatty acid metabolism, fatty acid degradation and fatty acid elongation, influencing medium- and long-chain fatty acid metabolism and synthesis and further influencing IMF anabolism and catabolism. In the construction of the PPI network, a total of 10 hub proteins were found. Among them, ACAT1, ACAA2, HADHB, ECHS1, and HADHA were also identified by the GO and KEGG analyses, and the other 5 interacting proteins were SUCLG1, SUCLG2, CS, DLST, and ACO2. All five of these proteins are involved in the tricarboxylic acid (TCA) cycle. Succinate-CoA ligase (SUCL) is a heterodimer consisting of an alpha subunit encoded by SUCLG1 and a beta subunit encoded by SUCLG2, catalyzing an ATP- and a GTP-forming reaction, respectively [28,29]. SUCL is at the intersection of several metabolic pathways [30]. For example, SUCLA2 rebound increases, pleiotropically affecting metabolic pathways associated with SUCL [31]. CS synthesizes citrate from oxaloacetate, and Škorpilová et al. showed that specific limits could be designed for CS activity in chilled and frozen/thawed meats [32]. DLST catalyzes the overall conversion of 2-oxoglutarate to succinyl-CoA and CO2 [33]. ACO2 catalyzes the isomerization of citrate to isocitrate, and Aco2 is needed for mitochondrial translation [34]. Analysis of the PPI network showed that, in addition to the 5 key enzymes and proteins involved in fatty acid synthesis and metabolism that affect the final accumulation of IMF in different muscle tissues of the Inner Mongolian cashmere goat, there are also five differentially expressed proteins in the TCA cycle that affect the accumulation of IMF. In our study, the proteomic profiles of the high-IMF and low-IMF muscles of Inner Mongolia cashmere goats were evaluated. Our results provide new insights into IMF deposition in goats and improve our understanding of the molecular mechanisms associated with IMF anabolism and catabolism.

Conclusion

GO and KEGG analysis results show that differentially expressed HADHA, HADHB, ACSL1, ACADS, ACAT1 and ACAA2 in mitochondria act on the fatty acid metabolism, fatty acid degradation and fatty acid elongation pathways to influence long-chain, short-chain and medium-chain fatty acid metabolism and synthesis and thereby influence IMF anabolism and catabolism. The PPI network analysis results showed that IMF accumulation in different muscle tissues of Inner Mongolian cashmere goats was affected not only by 5 key enzymes and proteins involved in the process of fatty acid synthesis and metabolism but also by five DEPs (SUCLG1, SUCLG2, CS, DLST, ACO2) involved in the TCA cycle. Our results provide new insights into IMF deposition in goats and improve our understanding of the molecular mechanisms associated with IMF anabolism and catabolism.

Permission was obtained from the farm owner. Animals were slaughtered under controlled conditions after being electrically stunned, and then the muscles were collected aseptically into enzyme-free tubes, immediately immersed in liquid nitrogen and subsequently stored at -80°C until analysis.

IMF content of meat

The IMF level for the BF and LD was determined using the Soxhlet extraction protocol following the method of Hopkins et al. [35].
Three grams of freeze-dried meat was weighed into a thimble and extracted in 85 mL of hexane for 60 minutes within individual extraction tins. The solvent was then evaporated off for an additional 20 minutes. The tin was then dried for 30 minutes at 105°C to remove any residual solvent. The variation in meat weight before and after extraction was used to calculate the IMF content. The final value is expressed as a percentage of meat weight.

Protein extraction and digestion

Each frozen sample was ground to powder in liquid nitrogen and dissolved in lysis buffer (proteinase inhibitors (Roche, Basel, Switzerland), 1% SDS (Coolaber, China)). The samples were then incubated at room temperature for 20 minutes, vortexing for 30 seconds every minute. After 20 minutes of ultrasonication, the samples were centrifuged at 12,000 rpm and 4°C for 30 minutes. The supernatant was collected for further study, and the protein concentration was measured with a bicinchoninic acid (BCA) kit (Tiangen, China). One hundred micrograms of protein was added to 200 µL of 8 M urea (Sigma, Germany) and 10 mM DL-dithiothreitol (Sigma-Aldrich, Germany) and incubated at 37°C for 1 hour. After centrifugation at 12,000 rpm for 40 minutes, 200 µL of urea was added to each filtrate tube, which was then agitated. Next, the filtrate tubes were centrifuged twice at 12,000 rpm for 30 minutes each. Then, 200 µL of 50 mM iodoacetamide (Sigma-Aldrich, Germany) was added to each filtrate tube; the reaction was allowed to proceed in the dark for 30 minutes, and then the liquid was removed. Next, 100 µL of ammonium bicarbonate (Fluka, Germany) was added to each filtrate tube, and the samples were centrifuged at 12,000 rpm for 20 minutes. This step was performed 3 times, and then the liquid was removed. The samples were incubated overnight with trypsin at 37°C and centrifuged at 12,000 rpm for 30 minutes. Then, 50 µL of ammonium bicarbonate (Fluka, Germany) was added to each filtrate tube. The samples were centrifuged at 12,000 rpm for 30 minutes, and this step was repeated. The filtrate was collected, freeze-dried, and stored at -20°C [36].

HPLC-MS/MS analysis

Two methods, namely, information-dependent acquisition (IDA) and sequential window acquisition of all theoretical fragment ion spectra (SWATH), were used to acquire data from the separated peptides on the LC-MS/MS system (Sciex, Framingham, MA, USA). Approximately 2 μg of peptides was injected and separated on a C18 HPLC column (75 μm × 15 cm). A linear gradient (120 minutes, going from 5 to 80% B at 500 nL/minute) of 0.1% formic acid in water and 0.1% formic acid in acetonitrile was used to separate the peptides. The conditions for IDA were as follows: nominal resolving power of 30,000, time-of-flight (TOF)-MS collection from 350 to 1800 m/z and automated collision energy for MS/MS with IDA scanned from 400 to 1800 m/z. The conditions for SWATH-MS were as follows: 150-1200 m/z MS1 mass range, 100-1500 m/z MS2 spectra, and nominal resolving power of 30,000 and 15,000 for MS1 and MS2, respectively.

Data processing

Protein Pilot 4.5 software (Sciex, Framingham, MA, USA) was used with the UniProt/SWISS-PROT (https://www.UniProt.org/#) database (downloaded from https://www.UniProt.org; 556,388 proteins) to identify peptides. The results were filtered at a 1% FDR. The selected search parameters included the use of trypsin as the enzyme, allowing up to two missed cleavage sites. The peptide mass tolerance was ±15 ppm, and the fragment mass tolerance was 20 mmu.
The data were loaded into PeakView software (Sciex, Framingham, MA, USA) to search the SWATH data bank using the ion library generated in Protein Pilot. PeakView generated extracted ion chromatograms (XICs) after processing targeted and nontargeted data. Then, the results were interpreted and quantitatively analyzed using MarkerView software (Sciex, Framingham, MA, USA). MarkerView allows a rapid review of data to determine the DEPs. PCA and volcano plot analysis, which combined fold change analysis and t-tests, were performed. A fold change > 2 or < 0.5 and statistical significance (p value < 0.05) were used to identify DEPs [37].

Bioinformatics analysis

The DEPs were subjected to bioinformatics analyses. The g:Profiler (https://biit.cs.ut.ee/gprofiler/gost) online software was used to perform the GO and KEGG pathway analyses. Cytoscape_v3.6.1 was used to perform the PPI analysis of the DEPs, and cytoHubba was used for screening hub genes [38].

Western blotting

The homogenized denatured protein was separated by SDS-PAGE and transferred in a semidry state to a PVDF membrane (Bio-Rad, USA). The PVDF membrane was blocked in blocking buffer (LI-COR, USA) for 2 hours and incubated overnight with a murine monoclonal primary antibody (Abcam, Germany). The membrane was washed 3 times for 10 minutes each time and then incubated with a fluorophore-conjugated goat anti-mouse antibody (LI-COR, USA) for 1 hour. Finally, the membrane was rinsed with water, and the immunoreactive bands were examined using a LI-COR Odyssey (CLX-0496, USA) near-infrared imager.
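A minimal sketch of the DEP selection criteria described in the Data processing paragraph above (fold change > 2 or < 0.5 and p < 0.05) is given below; the abundance table, column names and replicate counts are made up for illustration, since the actual quantification was performed in MarkerView.

```python
import pandas as pd
from scipy import stats

def find_deps(abundance: pd.DataFrame, high_cols, low_cols,
              fc_up=2.0, fc_down=0.5, alpha=0.05) -> pd.DataFrame:
    """Flag proteins with fold change > 2 or < 0.5 and a t-test p-value < 0.05."""
    high = abundance[high_cols].to_numpy(float)
    low = abundance[low_cols].to_numpy(float)
    fold_change = high.mean(axis=1) / low.mean(axis=1)
    pvals = stats.ttest_ind(high, low, axis=1).pvalue
    out = abundance.assign(fold_change=fold_change, p_value=pvals)
    keep = ((fold_change > fc_up) | (fold_change < fc_down)) & (pvals < alpha)
    return out[keep]

# Tiny made-up abundance table: rows are proteins, columns are replicates per group.
df = pd.DataFrame(
    {"BF_1": [10.0, 5.0, 7.0], "BF_2": [11.0, 5.5, 7.2], "BF_3": [9.5, 4.8, 6.9],
     "LD_1": [4.0, 5.1, 7.1], "LD_2": [4.5, 5.2, 7.0], "LD_3": [4.2, 4.9, 7.3]},
    index=["HADHA", "CTRL_A", "CTRL_B"])
print(find_deps(df, ["BF_1", "BF_2", "BF_3"], ["LD_1", "LD_2", "LD_3"]))
```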
5,618.2
2020-12-21T00:00:00.000
[ "Biology" ]
A New Grid Zenith Tropospheric Delay Model Considering Time-Varying Vertical Adjustment and Diurnal Variation over China

Improving the accuracy of zenith tropospheric delay (ZTD) models is an important task. However, the existing ZTD models still have limitations, such as the lack of an appropriate vertical adjustment function and being unsuitable for China, which has a complex climate and greatly undulating terrain. A new approach that considers the time-varying vertical adjustment and delicate diurnal variations of ZTD was introduced to develop a new grid ZTD model (NGZTD). The NGZTD model employed the Gaussian function and considered the seasonal variations of the Gaussian coefficients to express the vertical variations of ZTD. The effectiveness of vertical interpolation for the vertical adjustment model (NGZTD-H) was validated. The root mean squared errors (RMSE) of the NGZTD-H model improved by 58% and 22% compared to the global pressure and temperature 3 (GPT3) model using ERA5 and radiosonde data, respectively. The NGZTD model's effectiveness for directly estimating the ZTD was validated. The NGZTD model improved by 22% and 31% compared to the GPT3 model using GNSS-derived ZTD and layered ZTD at radiosonde stations, respectively. Seasonal variations in the Gaussian coefficients need to be considered; using constant Gaussian coefficients will generate large errors. The NGZTD model exhibited outstanding advantages in capturing diurnal variations and adapting to undulating terrain. We analyzed and discussed the main error sources of the NGZTD model using validation of the spatial interpolation accuracy. This new ZTD model has potential applications in enhancing the reliability of navigation, positioning, and interferometric synthetic aperture radar (InSAR) measurements and is recommended to promote the development of space geodesy techniques.

Introduction

The troposphere, which ranges from the ground to the beginning of the stratosphere, is a layer of the atmosphere near the Earth. Air in the troposphere accounts for 75% of the total atmospheric mass. The troposphere's height over China is approximately 10 km. The global navigation satellite system (GNSS) provides high-precision three-dimensional coordinates and speed. Tropospheric delay limits the accuracy of space geodesy [1]. GNSS technology inevitably generates errors in its applications owing to the tropospheric delay, which seriously affects the accuracy of navigation, positioning, and interferometric synthetic aperture radar (InSAR) [2]. The zenith tropospheric delay (ZTD) consists of the zenith hydrostatic delay (ZHD) and the zenith wet delay (ZWD). Owing to the particularities of the troposphere, the ZHD, with a stable variation law, can be accurately modeled. Unlike the ZHD, the ZWD, which is caused by precipitable water vapor (PWV), is difficult to model accurately [3]. In addition, ZTD can be a useful signal for PWV inversion [4]. Therefore, research on ZTD modeling is of considerable significance for navigation, positioning, InSAR monitoring, and PWV inversion [5].
Existing ZTD models include meteorological and non-meteorological parameter models. The meteorological parameter models rely on measured meteorological parameters, whereas the non-meteorological parameter models only require the input of spatiotemporal information. The Hopfield [6], Saastamoinen [7], and Black models [8] are the main meteorological parameter models. Hopfield established a ZTD model called the Hopfield model using data from 18 radiosonde stations worldwide; this model requires meteorological parameters. Saastamoinen established the Saastamoinen model based on the U.S. Standard Atmospheric Model (SAM), which also requires input values such as temperature, pressure, and station location information. The Black model was developed as a new-generation model that follows the Hopfield model, which also requires the input of temperature and pressure. The accuracy of the abovementioned models can reach the centimeter level, provided that the measured meteorological parameters are available [9]. However, the measured meteorological parameters can only be obtained from stations equipped with meteorological sensors. The distribution of these stations, with low spatial resolution, is uneven, and there is a time delay, which considerably limits the ability of meteorological parameter models to realize real-time applications. Therefore, real-time calculation of ZTD using non-meteorological parameter models has become a considerable challenge that must be solved, and it has also become a hot research topic for scholars [10,11]. The TropGrid and GPT series models were established to realize a higher temporal resolution [12][13][14]. Owing to the GPT2 model only calculating certain meteorological parameters, Böhm et al. [15] established the GPT2w model using monthly ERA-Interim data, which adds two output values: the water vapor decline rate and the atmospheric weighted mean temperature. The global pressure and temperature 3 (GPT3) model is a new-generation model that follows the GPT2w model and improves the empirical mapping function coefficients [16][17][18]. The GPT3 model, which has the ability to output comprehensive meteorological parameters, can calculate ZTD by combining the Saastamoinen and Askne models. The GPT3 model provides two horizontal resolutions of 1° × 1° and 5° × 5°, which need to be further improved. To improve the applicability of the ZTD model globally, studies have been conducted on high-precision global ZTD modeling that considers multiple factors. Li et al. [19] established the IGGtrop model, which considers latitude, longitude, elevation, and the annual period, attaining a further improvement in the global ZTD model's accuracy. In addition, Huang et al. [20] proposed a new global ZTD grid model based on a sliding window algorithm with different spatial resolutions, a novel model which shows excellent performance compared to the GPT3 and UNB3m models and significantly optimizes the model parameters. However, the above models cannot capture the diurnal variation of ZTD, and the temporal resolution of these models needs to be further improved.
Recently, the calculation of the ZTD using the fifth-generation European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalysis (ERA5) and the Second Modern-Era Retrospective Analysis for Research and Applications (MERRA-2) has received much attention [21][22][23]. These atmospheric reanalysis data exhibited high-precision results when they were validated against other reference data [24]. Therefore, the ERA5 and MERRA-2 datasets are expected to be used widely in the future. Because these atmospheric reanalysis grid data exhibit a high spatiotemporal resolution, the grid point data around the target point can be interpolated to calculate the data at the user's position with high precision. However, ZTD varies much more vertically than horizontally. Direct interpolation produces large errors in regions with undulating terrain. Therefore, to solve the aforementioned problems, it is necessary to develop a high-precision model to adjust the ZTD vertically. The selection of a fitting function for the variation in layered ZTD is an important research topic. Therefore, extensive studies have been conducted on ZTD vertical profile functions [25]. The vertical variations of the ZTD are typically modeled using polynomials [26,27] and negative exponential functions [28][29][30]. Zhu et al. [31] developed a segmented global ZTD vertical profile model (GZTD-P) with different spatial resolutions that considers the time-varying characteristics of the ZTD vertical variation factor to address the limitations of a single function in expressing the ZTD vertical profile; it shows better performance compared to the GPT3 model. However, the GZTD-P model can only vertically adjust ZTD from the starting height to the target height and cannot directly calculate ZTD at the target position, which limits its application. Sun et al. [32] proposed a global ZTD model, the GZTDS model, which considers delicate periodic variations by adopting a nonlinear function. The GSTDS model was developed as a new-generation model that follows the GZTDS model, and the performance of the new model deteriorated as the zenith angle increased. Hu and Yao [33] adopted the Gaussian function to fit the vertical ZTD and then established a ZTD vertical profile model that considers seasonal variation, which shows good performance in both time and space. The model achieved good results on a global scale. The model was developed using the monthly mean ZTD with a horizontal resolution of 5° × 5° provided by ERA-Interim. Its adaptability needs to be further improved in regions with greatly undulating terrain and complex climates. Zhao et al. [34] proposed a high-precision ZTD model that considers the height effect on ZTD after analyzing the relationship between the ZTD periodic residual term and the height of the GNSS station in different seasons. Although the aforementioned models have demonstrated their respective advantages, it is necessary to conduct further research on the more delicate vertical and temporal variations in the ZTD over China.
China is characterized by greatly undulating terrain, large latitudinal spans, various climate types, and large diurnal atmospheric differences [35][36][37]. Under such conditions, the ZTD exhibits complex variations in both time and space. Existing ZTD models have difficulty meeting the requirements of applications in China with the aforementioned characteristics. Therefore, a detailed investigation of the spatiotemporal variations of the ZTD and the selection of better functions are of considerable significance for applications in GNSS and InSAR. Our aim was to develop a NGZTD model that considers the time-varying vertical adjustment and delicate diurnal variations of ZTD. To attain this objective, this paper is organized as follows. Section 2 introduces the data and methods for calculating ZTD and analyzes the spatiotemporal characteristics of the Gaussian coefficients and the surface ZTD. Section 3 develops a NGZTD-H model to vertically adjust ZTD, and the vertical interpolation accuracy of the model is validated using ERA5 and radiosonde data. In addition, the NGZTD model is developed to directly calculate ZTD, and its accuracy is validated using GNSS and radiosonde data. Section 4 discusses the main error sources of the NGZTD model. Section 5 contains the conclusions. The research framework is shown in Figure 1. The proposed model compensates for the limitations of existing ZTD models in China. The new model is expected to be applied to precise point positioning (PPP) and InSAR atmospheric correction to improve their monitoring accuracy.

Data

ERA5 is the latest product provided by ECMWF, which is characterized by high temporal and spatial resolution. ERA5 data consist of 37 vertical pressure layers. The ERA5 datasets are generated via assimilation schemes that include observation data from GPS occultation observations, satellite altimeters, satellite radiation, and inversion equipment [38]. Jiang et al.
[39] systematically evaluated the effectiveness of ZTD data calculated using ERA5 data. The datasets are used as important resources for research in GNSS meteorology and space geodesy [40,41]. In this paper, ERA5 data were used to analyze the spatiotemporal characteristics of the Gaussian coefficients and the surface ZTD. They also provided data sources for developing the NGZTD-H and NGZTD models. In the vertical interpolation validation of the NGZTD-H model, ERA5 data were used as the reference values to validate the accuracy of the model.

Radiosonde stations comprise converters, sensors, and radio transmitters, which can provide measured meteorological parameters near the Earth. Radiosonde data are obtained by releasing balloons equipped with sensors and are widely used as reference data to test other data and tropospheric models [42]. Radiosonde data were used as reference values to validate the accuracy of the NGZTD-H and the NGZTD model.

The ZTD from GNSS stations can reach an accuracy of 4 mm [43]. It is recognized as a truth value and is widely used to evaluate the ZTD derived from other data and ZTD models [44]. GNSS-derived ZTD was used as a reference value to validate the accuracy of the NGZTD model. It can be used to validate the NGZTD model's ability to capture diurnal variations of ZTD, as the temporal resolution of the GNSS data is 1 h.

The Method of Calculating ZTD

The layered ZTD were calculated using the integral method for atmospheric refractivity, i.e., by integrating the total refractivity N along the zenith direction:

ZTD = 10^-6 ∫_{h_low}^{h_top} N dH (1)

where e is the water vapor pressure (hPa); N is the total refractivity; sh is the specific humidity; T is the temperature (K); H is the height (km); h_low and h_top are the lowest and topmost heights; and k1, k2, k3 are the refractive constants. The Saastamoinen model was employed to estimate the ZHD at the top of the troposphere, which was used to correct the integral result of the ZTD [29]:

ZHD_top = 0.0022768 P_top / (1 - 0.00266 cos 2φ - 0.00028 h_top)

where ZHD_top is the ZHD at the top of the troposphere (m), P_top is the pressure (hPa) at the top of the troposphere, and φ is the latitude (°).

Analysis of the Gaussian Coefficient

A detailed investigation of the spatiotemporal variations of the ZTD provides important references for developing ZTD models. Direct interpolation results in large errors owing to the inconsistent heights of the four grid points. It is necessary to use appropriate functions to express the variation in ZTD with height. Hu and Yao [33] found that the Gaussian function is superior to the exponential function. Therefore, we used the Gaussian function to fit the vertical variation in ZTD, as expressed by Equation (5), where h is the height (m), a is the ZTD (m) at mean sea level, and b and c (m) are the height-scale factors (also called the Gaussian coefficients).

To verify the effectiveness of this function in expressing the vertical variation in ZTD, four representative grid points in China were selected to determine the ZTD-layered profile information from ERA5, and the vertical variations were fitted using the Gaussian function.
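A minimal sketch of the refractivity-integration idea behind Equation (1) is given below; the k1, k2, k3 values are commonly quoted textbook constants and the toy profile is illustrative, so neither reproduces the exact constants or ERA5 levels used by the authors.

```python
import numpy as np

# Commonly used refractivity constants (approximate textbook values).
K1, K2, K3 = 77.604, 70.4, 3.739e5   # K/hPa, K/hPa, K^2/hPa

def refractivity(p_hpa, e_hpa, t_kelvin):
    """Total refractivity N from pressure, water vapour pressure and temperature."""
    return K1 * (p_hpa - e_hpa) / t_kelvin + K2 * e_hpa / t_kelvin + K3 * e_hpa / t_kelvin**2

def ztd_from_profile(heights_m, p_hpa, e_hpa, t_kelvin):
    """ZTD = 1e-6 * integral of N dH, evaluated with a trapezoidal sum (metres in, metres out)."""
    h = np.asarray(heights_m, float)
    n = refractivity(np.asarray(p_hpa, float), np.asarray(e_hpa, float), np.asarray(t_kelvin, float))
    return 1e-6 * float(np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(h)))

# Toy 4-level profile from the surface to ~10 km (illustrative numbers only).
h = [0.0, 1500.0, 5000.0, 10000.0]
p = [1013.0, 850.0, 540.0, 265.0]
e = [12.0, 7.0, 1.5, 0.1]
t = [293.0, 283.0, 260.0, 223.0]
print(f"ZTD ~ {ztd_from_profile(h, p, e, t):.3f} m")
```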
From Table 1, the mean bias of the four grid points was negative, indicating that a certain systematic error occurs when the Gaussian function is used to fit the ZTD profiles, but the value was small: approximately −1.00 mm. The root mean squared errors (RMSE) of the four grid points were 14.56, 10.24, 10.47, and 8.86 mm, respectively. From the above results, the Gaussian function attained high accuracy and stability in fitting the ZTD profiles. Analyzing the temporal variation characteristics of the Gaussian coefficients b and c is critical for developing a ZTD vertical adjustment model. Therefore, the annual mean values and amplitudes of the Gaussian coefficients b and c were calculated using the ERA5 ZTD to analyze their distribution characteristics over China. The results are shown in Figure 2. To calculate these values, the ZTD-layered profiles calculated using ERA5 atmospheric reanalysis data from 2014 to 2017 were first fitted using Equation (5) to obtain b and c for all grid points. Second, the Gaussian coefficients b and c for each grid point were fitted by considering the annual and semi-annual periods using Equation (6) [20]. Finally, the annual mean values and amplitudes of the Gaussian coefficients b and c were obtained using Equations (7) and (8):

cof^i(doy) = α^i_0 + α^i_1 cos(2π·doy/365.25) + α^i_2 sin(2π·doy/365.25) + α^i_3 cos(4π·doy/365.25) + α^i_4 sin(4π·doy/365.25) (6)

where i represents the i-th grid point, cof^i represents the Gaussian coefficient b or c, α^i_0 is the mean value of the Gaussian coefficient, α^i_1 to α^i_4 are the period amplitude coefficients, and doy is the day of the year.

amp_a = sqrt((α^i_1)^2 + (α^i_2)^2) (7)

amp_s = sqrt((α^i_3)^2 + (α^i_4)^2) (8)

where amp_a represents the annual period amplitude (m) of b and c and amp_s represents the semi-annual period amplitude (m) of b and c.

As shown in Figure 2, the characteristics of the Gaussian coefficient b varied across different regions. The annual mean values of the Gaussian coefficient b were larger in northwest China and smaller in southeast China, showing a decreasing trend from northwest to southeast. The annual period amplitude gradually decreased from north to south, and there was a sudden increase on the Tibetan Plateau. This phenomenon may be caused by climatic differences between northern and southern China and by differences between the ocean and the land. The regional variation characteristics of the semi-annual period amplitude were subtle. In addition, the regional distribution characteristics of the annual mean values and amplitudes of the Gaussian coefficient c were similar to those of the Gaussian coefficient b. The annual mean values of the Gaussian coefficient c showed an apparent regional variation law, as did the annual period amplitude, whereas the regional variation law of the semi-annual period amplitude was also subtle. Because the period characteristics of the Gaussian coefficients b and c varied considerably across different regions, the delicate periods of b and c must be considered for each grid point when developing a ZTD vertical adjustment model.
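The per-grid-point seasonal fit of Equations (6)-(8) amounts to a small least-squares problem; the sketch below fits a mean plus annual and semi-annual harmonics to a synthetic coefficient series and recovers the two amplitudes. The series and its numbers are invented for illustration.

```python
import numpy as np

def fit_seasonal(doy, values):
    """Least-squares fit of a mean plus annual and semi-annual harmonics (cf. Equation (6))."""
    doy = np.asarray(doy, float)
    w = 2.0 * np.pi * doy / 365.25
    design = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w),
                              np.cos(2 * w), np.sin(2 * w)])
    alpha, *_ = np.linalg.lstsq(design, np.asarray(values, float), rcond=None)
    amp_annual = np.hypot(alpha[1], alpha[2])       # cf. Equation (7)
    amp_semiannual = np.hypot(alpha[3], alpha[4])   # cf. Equation (8)
    return alpha, amp_annual, amp_semiannual

# Synthetic daily Gaussian-coefficient series with a known annual cycle plus noise.
rng = np.random.default_rng(0)
doy = np.arange(1, 366)
b_series = 8000 + 400 * np.cos(2 * np.pi * (doy - 30) / 365.25) + rng.normal(0, 20, doy.size)
alpha, amp_a, amp_s = fit_seasonal(doy, b_series)
print(f"mean = {alpha[0]:.1f}, annual amplitude = {amp_a:.1f}, semi-annual amplitude = {amp_s:.1f}")
```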
Analysis of the Surface ZTD

Analyzing the delicate temporal variations of ZTD contributes to the provision of important references for optimizing the ZTD model. The surface ZTD has significant annual and semi-annual periods [34]. Two representative grid points in China were selected to determine the diurnal and semi-diurnal periods by adopting the fast Fourier transform.

As shown in Figure 3, the surface ZTD exhibited significant diurnal and semi-diurnal period variations. The diurnal variation trends of the surface ZTD for the two grid points were different. For the first grid point, the surface ZTD increased and then decreased throughout the day, whereas the second grid point showed the opposite result. The diurnal period variation characteristics of the first grid point were significantly stronger than those of the semi-diurnal period, whereas the diurnal and semi-diurnal period variation characteristics of the second grid point were comparable. Therefore, annual, semi-annual, diurnal, and semi-diurnal periods must be considered when developing a ZTD grid model.

As shown in Figure 4, the annual mean surface ZTD was larger on the Tibetan Plateau and smaller elsewhere, which may be due to altitudinal differences. The annual period amplitude of the surface ZTD showed a decreasing trend from southeast to northwest, which may be because the northwest is far from the ocean and the southeast is close to the ocean, resulting in a large difference in the type of climate. The semi-annual period amplitude of the ZTD did not exhibit an apparent variation law. The distributions of the diurnal and semi-diurnal period amplitudes were similar. Therefore, these results further demonstrate that the delicate periods of the ZTD must be considered when developing a ZTD grid model.
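The diurnal and semi-diurnal components mentioned above can be extracted with a fast Fourier transform, as sketched below; the hourly ZTD series is synthetic, with amplitudes chosen only to make the two peaks visible.

```python
import numpy as np

# Synthetic hourly surface-ZTD series (metres) with diurnal and semi-diurnal signals,
# standing in for one ERA5 grid point; the amplitudes are illustrative only.
hours = np.arange(0, 24 * 365)
ztd = (2.40
       + 0.008 * np.cos(2 * np.pi * hours / 24.0)    # diurnal term
       + 0.003 * np.cos(2 * np.pi * hours / 12.0)    # semi-diurnal term
       + np.random.default_rng(1).normal(0, 0.002, hours.size))

spectrum = np.fft.rfft(ztd - ztd.mean())
freq_per_day = np.fft.rfftfreq(hours.size, d=1.0) * 24.0   # cycles per day
amplitude = 2.0 * np.abs(spectrum) / hours.size

for target in (1.0, 2.0):                                   # diurnal, semi-diurnal
    idx = np.argmin(np.abs(freq_per_day - target))
    print(f"{target:.0f} cycle/day amplitude ~ {amplitude[idx] * 1000:.1f} mm")
```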
Development of the NGZTD-H Model

According to the above analysis, both Gaussian coefficients b and c have annual and semi-annual period characteristics. Therefore, these coefficients were fitted for each ERA5 grid point by considering the annual and semi-annual periods. Five coefficients were calculated using Equation (6) and stored in grid points with a horizontal resolution of 0.25° × 0.25°. After obtaining the Gaussian coefficients b and c, the ZTD was adjusted vertically using Equation (9) [33], where h_r and h_t are the starting and target heights (m), respectively, and ZTD_r and ZTD_t are the ZTD values (m) at the starting and target heights.

The ZTD vertical profile model (NGZTD-H model) consists of Equations (6) and (9). This model requires the following simple steps for application. First, after extracting the five model coefficients of the given grid point and inputting the doy, the Gaussian coefficients b and c were calculated using Equation (6). Second, after inputting the starting and target heights, the ZTD of the target height was calculated using Equation (9). The NGZTD-H model can achieve vertical adjustment of the ZTD from the starting height to the target height before horizontal interpolation, which reduces the interpolation errors. To validate the NGZTD-H model, the effectiveness of vertical interpolation was evaluated using ERA5 and radiosonde data.
Accuracy Validation in ZTD Vertical Interpolation Using ERA5 Data

The proposed vertical profile model (NGZTD-H model) and the GPT3 model were evaluated using the ZTD-layered profile information from ERA5 in 2018 over China. The UNB3 model can adjust the ZHD and ZWD from sea-level height to the target height. The GPT3 model provides the meteorological parameters required by the UNB3 model. Therefore, combining the GPT3 and UNB3 models can achieve a vertical adjustment of the ZTD. As the ERA5-layered ZTD recognized as a reference value does not start from sea-level height, it is necessary to rederive the UNB3 model such that it can achieve vertical adjustment of the ZHD and ZWD from any starting height. In the derived formulas of the UNB3 model, the subscripts t and r represent the target and the starting heights, β is the lapse rate of temperature, T_0 is the thermodynamic temperature (K), R_d is the dry gas constant with a value of 287.0538 J·kg⁻¹·K⁻¹, g represents the ground gravity acceleration, and λ′ is the water vapor lapse rate.

The NGZTD-H model was used to adjust the ZTD vertically from the surface to the ERA5 layer heights. Because the GPT3 model cannot vertically adjust the ZTD directly, the surface ZHD and ZWD derived from the ERA5 data at the grid point need to be adjusted vertically, and then the ZHD and ZWD must be added together to obtain the ZTD. Bias and RMSE were recognized as precision criteria. The bias and RMSE results of the NGZTD-H and GPT3 models are presented in Table 2. As shown in Table 2, the mean bias of the NGZTD-H model and the GPT3 model were 0.45 and −3.02 cm, which indicated that the GPT3 model exhibits a larger systematic error. Furthermore, the mean RMSE of the NGZTD-H model was 1.70 cm. Compared with the GPT3 model, the accuracy of the NGZTD-H model improved by 10% over the ERA5 ZTD profile. Therefore, the NGZTD-H model exhibits better stability for vertically adjusting the ZTD.
The two models were used to calculate the hourly layered ZTD at the ERA5 grid points and then to calculate the annual mean bias and RMSE in order to analyze the geographic distribution of the accuracy. The results are shown in Figure 5. The bias of the NGZTD-H model was approximately 0 cm in most regions, negative in high-elevation regions, and positive in marine regions. The bias of the GPT3 model was positive at high latitudes and negative at low latitudes, especially in the marine regions, where it showed a large negative bias of approximately −5 cm. In addition, the RMSE of the NGZTD-H model was approximately 2 cm and no more than 3 cm in most regions, attaining a better performance in Southwest China. The GPT3 model exhibited low accuracy in both land and marine regions.

To analyze the effectiveness of the NGZTD-H model in different atmospheric pressure layers and latitudes, the ERA5-layered ZTD was again used as the reference value for 2018 over China. Five pressure levels were selected to evaluate the model. In addition, the Chinese region was divided into four latitude zones at 10° intervals. Finally, the accuracy of the two models was validated for the selected pressure layers and latitude bands.
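Bias and RMSE, the two precision criteria used throughout this validation, reduce to the following few lines; the sample values are placeholders, not results from the paper.

```python
import numpy as np

def bias_and_rmse(model_ztd_m, reference_ztd_m):
    """Bias = mean(model - reference); RMSE = root-mean-square of the same differences."""
    diff = np.asarray(model_ztd_m, float) - np.asarray(reference_ztd_m, float)
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

# Illustrative comparison against a handful of reference (e.g. ERA5-derived) ZTD values in metres.
model = [2.310, 2.295, 2.342, 2.401]
reference = [2.305, 2.300, 2.330, 2.410]
bias, rmse = bias_and_rmse(model, reference)
print(f"bias = {bias * 100:.2f} cm, RMSE = {rmse * 100:.2f} cm")
```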
As shown in Figure 6, the NGZTD-H model exhibited a small bias value in all pressure layers, positive values in two pressure layers at 800 and 200 hPa, and a negative value in the rest of the pressure layers, which were all approximately 0 cm.The GPT3 model exhibited a large value in all pressure layers; the value reached −5 cm in the pressure layers with 600 and 400 hPa.The bias of the GPT3 model was negative in all pressure layers, indicating a negative systematic error.Moreover, the RMSE of the NGZTD-H model was approximately 2 cm for all the pressure layers.It attained a better performance in 200 and 10 hPa pressure layers, which indicated that the NGZTD-H model is more precise at larger vertical interpolation heights.However, the GPT3 model exhibited larger errors in most pressure layers, with an RMSE of approximately 5 cm.In addition, the bias of the NGZTD-H model was within 1 cm for all latitude bands.The bias of the GPT3 model reached −4 cm at lower latitudes.This finding may be due to the variations of meteorological parameters in the troposphere being more complicated at low latitudes, which causes difficulties in capturing their variations using the GPT3 model.The bias of the GPT3 model also showed a decreasing trend with increasing latitude.Additionally, The NGZTD-H model exhibited good stability in all latitude bands, and its RMSE ranged from 1.5 to 2 cm.The RMSE of the GPT3 model reached 5 cm at lower latitudes.Therefore, compared with the GPT3 model, the NGZTD-H model is more suitable for the vertical adjustment of the ZTD over China. Remote Sens. 2024, 16, 2023 10 of 21 showed a decreasing trend with increasing latitude.Additionally, The NGZTD-H model exhibited good stability in all latitude bands, and its RMSE ranged from 1.5 to 2 cm.The RMSE of the GPT3 model reached 5 cm at lower latitudes.Therefore, compared with the GPT3 model, the NGZTD-H model is more suitable for the vertical adjustment of the ZTD over China. Accuracy Validation in ZTD Vertical Interpolation Using Radiosonde Data The layered ZTD at radiosonde stations in 2018 over China were considered as reference values to further validate the effectiveness of the NGZTD-H model.The ZTD at the radiosonde station was the starting value for the NGZTD-H model.The surface ZHD and ZWD at the radiosonde station were the starting values for the GPT3 model.The ZHD and ZWD, adjusted vertically using the GPT3 model, were added together to obtain the layered ZTD, which was used for comparison with the NGZTD-H model.The Gaussian coefficients b and c required for the two models at the radiosonde stations were calculated via interpolation from four ERA5 grid points around the radiosonde stations.Finally, the two models were used to calculate the layered ZTD for radiosonde stations over China at UTC 00:00 and 12:00 in 2018. Table 3 shows the mean bias of the NGZTD-H and GPT3 models were −0.32 and −2.66 cm, respectively, indicating that the ZTD from radiosonde stations was larger than the ZTD calculated by the GPT3 model.The mean RMSE of the NGZTD-H and GPT3 models were 3.10 and 3.97 cm.Compared with the GPT3 model, the accuracy of the NGZTD-H model enhanced by 0.87 cm (22%).The NGZTD-H model had a higher accuracy for the vertical adjustment of the ZTD.Compared with the validation results using the ERA5 profile ZTD, the NGZTD-H model exhibited lower accuracy, which may be due to the interpolation errors of Gaussian coefficients from four grid points to the radiosonde stations. 
As shown in Figure 7, the bias of the NGZTD-H model was small at all radiosonde stations and approximately 0 cm in most regions. Both the NGZTD-H and GPT3 models exhibited a large positive bias at higher elevations in southwest China, which may be due to the undulating terrain. The bias of the GPT3 model reached −8 cm at certain stations, indicating that serious systematic errors occur in the GPT3 model. In addition, the RMSE of the NGZTD-H model was stable, without abrupt variations. At higher latitudes, the difference in RMSE between the NGZTD-H and GPT3 models was small, whereas the GPT3 model exhibited a larger RMSE at low latitudes for the reasons noted earlier.

To investigate the adaptability of the NGZTD-H model to seasonal variation, the diurnal mean bias and RMSE were determined at two representative radiosonde stations (Hailar and Hangzhou). As shown in Figure 8, at the Hailar radiosonde station the GPT3 model, which had poor seasonal adaptation, was seriously affected by seasonal variations, and its bias fluctuated throughout the year. In contrast, the NGZTD-H model resisted the influence of the season on the vertical adjustment of the ZTD, and its bias was stable throughout the year; the RMSE of the new model was 2 cm. At the Hangzhou radiosonde station, both models exhibited fluctuations. The bias of the NGZTD-H model fluctuated around approximately 0 cm, and its RMSE was approximately 2 cm, whereas the GPT3 model exhibited a more evident negative bias. Therefore, the NGZTD-H model is more suitable for capturing seasonal variations than the GPT3 model.

Because the vertical adjustment of the ZTD is related to elevation, the radiosonde stations over China were divided into five elevation intervals to validate the effectiveness of the NGZTD-H model at different elevations. Figure 9 shows the bias and RMSE statistics in each interval for the NGZTD-H and GPT3 models. The GPT3 model exhibited a larger positive bias in the interval of 3-4 km and a larger negative bias in the intervals of 0-3 and >4 km. The bias of the NGZTD-H model was smaller in each interval; a larger positive bias occurred in the interval of 3-4 km, indicating that the NGZTD-H model produced larger values than the reference in this interval. Furthermore, the RMSE of the GPT3 model was larger in the intervals of 0-1 and 2-3 km. The above analysis shows that the NGZTD-H model exhibited a more stable performance in the vertical adjustment of the ZTD over China.

Development of the NGZTD Model

From the above analysis of spatiotemporal characteristics, the surface ZTD exhibited regular seasonal and diurnal variations. Therefore, the development of a ZTD model must consider the delicate periods of the surface ZTD. The corresponding expression, Equation (12), is written such that ZTD_s^i is the surface ZTD (m), α_0^i is the annual mean ZTD, α_1^i to α_8^i are the period amplitude coefficients, and hod is the hour of day (UTC); a reconstruction of this equation is sketched below.
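A plausible explicit form of Equation (12), assuming that the nine coefficients correspond to an annual mean plus annual, semi-annual, diurnal, and semi-diurnal harmonics (an assumption consistent with the periods identified in the surface ZTD analysis, not a quotation of the original equation), is:

$$\mathrm{ZTD}_s^{i}=\alpha_0^{i}
+\alpha_1^{i}\cos\!\left(\frac{2\pi\,doy}{365.25}\right)+\alpha_2^{i}\sin\!\left(\frac{2\pi\,doy}{365.25}\right)
+\alpha_3^{i}\cos\!\left(\frac{4\pi\,doy}{365.25}\right)+\alpha_4^{i}\sin\!\left(\frac{4\pi\,doy}{365.25}\right)
+\alpha_5^{i}\cos\!\left(\frac{2\pi\,hod}{24}\right)+\alpha_6^{i}\sin\!\left(\frac{2\pi\,hod}{24}\right)
+\alpha_7^{i}\cos\!\left(\frac{4\pi\,hod}{24}\right)+\alpha_8^{i}\sin\!\left(\frac{4\pi\,hod}{24}\right).$$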
The surface ZTD of all ERA5 grid points over China from 2014 to 2017 was used to calculate the nine coefficients using the least squares method. The new ZTD grid model (NGZTD model), which directly calculates the ZTD, consists of Equation (12) and the NGZTD-H model proposed earlier. The method for using the NGZTD model is as follows. First, the user's coordinates are used to search for the ERA5 grid points around the user; after the model coefficients of these grid points have been obtained, the surface ZTD at each grid point can be calculated by inputting doy and hod into Equation (12). Second, the NGZTD-H model is employed to calculate the ZTD at the same height as the user after obtaining the Gaussian coefficients b and c. Finally, the user's ZTD is calculated using inverse distance interpolation. The NGZTD model was developed based on high-resolution ERA5 data, employs the high-precision Gaussian function to adjust the ZTD vertically, and considers seasonal and delicate diurnal variations in the ZTD, which ensures the effectiveness of the model. Furthermore, the NGZTD model has the advantages of efficiency and simplicity, because measured meteorological parameters are not required and the model coefficients are few in number and are stored at regular grid points.
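A minimal sketch of this three-step workflow is given below. The nine-coefficient harmonic expansion follows the assumed form of Equation (12) above; the Gaussian height scaling ZTD(h) ∝ exp(−(h − b)²/(2c²)) and the inverse-distance power p = 2 are illustrative assumptions, and all coefficient values, heights, and distances are hypothetical.

```python
import math

# Illustrative sketch of the NGZTD workflow described above; the Gaussian
# profile form and all numeric values are assumptions made for this sketch.

def surface_ztd(coeffs, doy, hod):
    """Surface ZTD (m) at a grid point from nine harmonic coefficients."""
    a0, a1, a2, a3, a4, a5, a6, a7, a8 = coeffs
    return (a0
            + a1 * math.cos(2 * math.pi * doy / 365.25) + a2 * math.sin(2 * math.pi * doy / 365.25)
            + a3 * math.cos(4 * math.pi * doy / 365.25) + a4 * math.sin(4 * math.pi * doy / 365.25)
            + a5 * math.cos(2 * math.pi * hod / 24.0) + a6 * math.sin(2 * math.pi * hod / 24.0)
            + a7 * math.cos(4 * math.pi * hod / 24.0) + a8 * math.sin(4 * math.pi * hod / 24.0))

def adjust_height(ztd_start, h_start, h_target, b, c):
    """Vertically adjust ZTD with an assumed Gaussian profile ratio."""
    gauss = lambda h: math.exp(-((h - b) ** 2) / (2.0 * c ** 2))
    return ztd_start * gauss(h_target) / gauss(h_start)

def idw(values, distances, p=2.0):
    """Inverse-distance weighting of the height-adjusted grid values."""
    weights = [d ** (-p) for d in distances]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Example: four surrounding ERA5 grid points with hypothetical coefficients.
# Each entry: (harmonic coefficients, grid height [m], b [m], c [m], distance to user [km])
grid = [
    ([2.35, 0.05, 0.02, 0.01, 0.00, 0.004, 0.002, 0.001, 0.000], 120.0, -7000.0, 30000.0, 12.0),
    ([2.33, 0.05, 0.02, 0.01, 0.00, 0.004, 0.002, 0.001, 0.000], 180.0, -7000.0, 30000.0, 18.0),
    ([2.36, 0.05, 0.02, 0.01, 0.00, 0.004, 0.002, 0.001, 0.000], 150.0, -7000.0, 30000.0, 20.0),
    ([2.34, 0.05, 0.02, 0.01, 0.00, 0.004, 0.002, 0.001, 0.000], 200.0, -7000.0, 30000.0, 25.0),
]
user_height, doy, hod = 310.0, 152, 6.0

adjusted = [adjust_height(surface_ztd(coeffs, doy, hod), h, user_height, b, c)
            for coeffs, h, b, c, _ in grid]
ztd_user = idw(adjusted, [d for *_, d in grid])
print(f"Interpolated ZTD at user: {ztd_user:.4f} m")
```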
Accuracy Validation Using GNSS Data

The effectiveness of the NGZTD model for estimating the ZTD was validated using the GNSS-derived ZTD in 2018 over China. Because the NGZTD model estimates the ZTD based on the ERA5 grid points, the GNSS and ERA5 data were unified into the same elevation system. The ZTD with a temporal resolution of 1 h at the GNSS stations was determined using both the NGZTD and GPT3 models, and the results were compared with the GNSS-derived ZTD. The accuracy statistics for the two models are presented in Table 4 and Figure 10.

As shown in Table 4, the mean biases of the NGZTD and GPT3 models were −0.38 and 0.16 cm, respectively: the ZTD calculated by the NGZTD model was generally smaller than the GNSS-derived ZTD, whereas that of the GPT3 model was generally larger than the reference value. The bias of the NGZTD model ranged from −3.98 to 1.06 cm, and the negative bias was larger, which indicates that some of the ZTD values calculated by the NGZTD model were much smaller than the GNSS-derived ZTD. The NGZTD and GPT3 models had a similar RMSE range, from 1 to 8 cm. The mean RMSEs of the two models were 3.38 and 4.33 cm, respectively. Compared with the GPT3 model, the NGZTD model achieved an improvement of 0.95 cm (22%).
As shown in Figure 10, the bias of the GPT3 model was positive in the western and southwestern regions and negative in the eastern region, indicating that the performance of this model was unstable. The NGZTD model performed better in northern China than in the other regions. In particular, in western and southwestern China, where the terrain is undulating, the NGZTD model exhibited better stability, indicating that it is more adaptable to the influence of complex geographical conditions. The NGZTD model exhibited a stable performance with an RMSE of approximately 2 cm, whereas the RMSE of the GPT3 model exceeded 2 cm in most regions of China and reached 6 cm in the eastern region. Neither model performs well in the eastern region, which may be affected by atmospheric differences between the sea and land.

To investigate the seasonal adaptation of the NGZTD model, two representative GNSS stations (Kuqa and Delingha) were selected as validation stations. The diurnal mean bias and RMSE results are shown in Figure 11. The GPT3 model exhibited a positive bias at both the Kuqa and Delingha GNSS stations, whereas the NGZTD model exhibited a smaller and more stable bias during the entire year. The NGZTD model performed better, with an RMSE below 4 cm, indicating that the new ZTD model can cope with the influence of the seasons.

The ZTD is affected by the season as well as by delicate diurnal variations. Two GNSS stations, Kuqa and Guilin, were selected to investigate the effectiveness of the NGZTD model in capturing diurnal variations in the ZTD. The hourly ZTD over a five-day period in 2018 was estimated by both the NGZTD and GPT3 models, and their results were compared with the GNSS-derived ZTD, as shown in Figure 12. At the Kuqa GNSS station, the GNSS-derived ZTD ranged from 2.08 to 2.09 m and exhibited a diurnal period during those five days. The ZTD estimated by the NGZTD model also exhibited a diurnal period and agreed well with the reference value, indicating that the NGZTD model can capture the delicate diurnal variation of the ZTD. However, the ZTD determined using the GPT3 model was the same throughout each day and was larger than the reference value. At the Guilin GNSS station, although the ZTD calculated by the GPT3 model was close to the GNSS-derived ZTD, this model was not able to capture the diurnal variation in the ZTD. Therefore, the proposed model can capture both seasonal and diurnal variations in the ZTD.

Accuracy Validation Using Radiosonde Data

The layered ZTD at radiosonde stations in 2018 over China was considered as reference data to validate the accuracy of the NGZTD model. To investigate the contribution of modeling the seasonal variations of the Gaussian coefficients b and c to the accuracy of estimating the ZTD, the proposed model, which considers these seasonal variations, was compared with a variant that takes b and c as the constants −7000 and 30,000 m, respectively; both were used to calculate the layered ZTD at the radiosonde stations at 00:00 and 12:00, and the results were also compared with those of the GPT3 model.
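For reference, the seasonal behavior of the Gaussian coefficients can be represented with an annual mean plus annual and semi-annual harmonics; a plausible form, consistent with the quantities mapped in Figure 2 (annual mean, annual, and semi-annual amplitudes of b and c) but written here as an assumption rather than the paper's exact parameterization, is:

$$b(doy)=b_0+b_1\cos\!\left(\frac{2\pi\,doy}{365.25}\right)+b_2\sin\!\left(\frac{2\pi\,doy}{365.25}\right)+b_3\cos\!\left(\frac{4\pi\,doy}{365.25}\right)+b_4\sin\!\left(\frac{4\pi\,doy}{365.25}\right),$$

with an analogous expression for c(doy).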
As shown in Table 5, the mean biases of the NGZTD model and of the model with constant Gaussian coefficients were −1.53 and −4.63 cm, respectively; the ZTD estimated with constant Gaussian coefficients showed an evident difference from the reference value, which indicates that seasonal variations in the Gaussian coefficients need to be considered. The GPT3 model exhibited a more stable performance in this respect, with a mean bias of 0.32 cm. The mean RMSEs of the NGZTD model, the model with constant Gaussian coefficients, and the GPT3 model were 3.62, 5.91, and 5.25 cm, respectively. The NGZTD model achieved the highest accuracy, with improvements of 2.29 cm (39%) and 1.63 cm (31%) compared with the other two models. These results further indicate that considering seasonal variations in the Gaussian coefficients contributes to the effectiveness of the NGZTD model.

Figure 13 shows that the NGZTD model exhibited a negative bias at most stations, whereas the GPT3 model exhibited a positive bias. The RMSE of the GPT3 model exceeded 5 cm in most regions; in particular, in regions with high altitudes and undulating terrain, the NGZTD model is more advantageous. In addition, Figure 14 shows the accuracy percentages for the NGZTD and GPT3 models. The largest proportion of the RMSE for the NGZTD model fell in the interval of 2-3 cm, accounting for 30%, whereas the largest proportion for the GPT3 model fell in the interval of 5-6 cm, accounting for 69%. In the interval in which the RMSE was <2 cm, the NGZTD model accounted for 2%, whereas the GPT3 model accounted for 0%. According to the above accuracy validation results, the NGZTD model exhibited stable and high accuracy in terms of capturing seasonal and diurnal variations, adaptability to terrain, and adaptability to different elevations. The new ZTD model can directly provide reliable and high-precision ZTD for users in China.

Discussion

As shown in Tables 2, 4 and 5, the mean RMSE of the vertical interpolation for the NGZTD-H model was 1.70 cm when the ERA5 data were used as the reference value, whereas the mean RMSE of the NGZTD model for the direct calculation of the ZTD exceeded 3 cm when both the GNSS data and the radiosonde data were used as reference values. Unlike the vertical interpolation validation for the NGZTD-H model in Section 3.1.1, the starting ZTD for the NGZTD model in Sections 3.2.1 and 3.2.2 was obtained by a model considering the seasonal and diurnal variations of the ZTD, whereas the starting ZTD in Section 3.1.1 was obtained by the integral method. In addition, the NGZTD model required horizontal interpolation to calculate the ZTD at the GNSS and radiosonde stations in Sections 3.2.1 and 3.2.2. Based on the above analysis, the errors of the NGZTD model may be attributed to the starting ZTD and to the horizontal interpolation.
To validate the above error sources, the GNSS-derived ZTD in 2018 over China was used to validate the spatial interpolation accuracy of the NGZTD-H model. First, the Gaussian coefficients b and c were determined. Second, the ZTD at the ERA5 grid points was vertically adjusted to the GNSS station height using the NGZTD-H model; it should be noted that the starting ZTD at the ERA5 grid points was obtained by the integral method. Finally, the adjusted ZTD was interpolated horizontally to the GNSS station using an inverse distance interpolation method. A comparison of the interpolated results with the GNSS-derived ZTD validated the effectiveness of the NGZTD-H model. Because the elevations of the GNSS stations are geodetic heights and the elevations of the ERA5 grid points are geopotential heights, it was necessary to unify the elevations of these two products before vertically adjusting the ZTD. In this study, we employed the EGM2008 model [45] to convert the elevations of the ERA5 data to geodetic elevations.

The spatial interpolation accuracy of the NGZTD-H model was compared with that of the GPT3 model. The mean bias of the GPT3 model was −1.14 cm (Table 6), which indicates that the GPT3 model has certain systematic errors. The bias intervals of the two models were similar; both ranged from −3 to 1 cm. The mean RMSEs of the NGZTD-H and GPT3 models were 1.48 and 1.79 cm, respectively. Compared with the GPT3 model, the accuracy of the NGZTD-H model improved by 0.31 cm (17%). Therefore, the NGZTD-H model has higher accuracy for the spatial interpolation of the ZTD.

Figure 15 shows stable results for the NGZTD-H model at each GNSS station in China. Its bias was approximately 0 cm in most regions and reached −3 cm at individual stations in Southwest China, which indicates that the model struggles to express the vertical variation of the ZTD in these regions. In contrast, the GPT3 model shows a large negative bias in most regions. The RMSE of the NGZTD-H model was smaller at high latitudes, whereas the RMSE of the GPT3 model was greater than 2 cm at most GNSS stations. Therefore, compared with the GPT3 model, the NGZTD-H model has better stability in spatial interpolation.

The mean RMSE of the spatial interpolation for the NGZTD-H model was 1.48 cm, which was apparently smaller than that of the NGZTD model in Sections 3.2.1 and 3.2.2. The validation of the spatial interpolation accuracy for the NGZTD-H model used the ZTD obtained by the integral method as the starting ZTD and required horizontal interpolation. This indicates that the errors of the NGZTD model mainly derive from the starting ZTD rather than from the horizontal interpolation.

As shown in Tables 2 and 6, the spatial interpolation accuracy of the NGZTD-H model (mean RMSE of 1.48 cm) is higher than its vertical interpolation accuracy (mean RMSE of 1.70 cm). The validation of the spatial interpolation accuracy for the NGZTD-H model required horizontal interpolation, whereas this was not required for the vertical interpolation in Section 3.1.1. This further indicates that the errors caused by the horizontal interpolation were relatively small. According to the above analysis, the accuracy of the ZTD vertical adjustment was higher in flat-terrain regions and lower in western China, where the terrain is undulating. The GNSS stations are fewer in western China, whereas the distribution of ERA5 grid points is uniform, which resulted in a higher number of GNSS stations with smaller errors. Therefore, the accuracy of the spatial interpolation for the NGZTD-H model was higher than that of the vertical interpolation in Section 3.1.1.
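The elevation unification mentioned above can be summarized by the standard relation linking orthometric and ellipsoidal heights through the EGM2008 geoid undulation N (stated here as an assumption about the exact procedure used, with the ERA5 geopotential heights treated as orthometric heights before the undulation is applied):

$$h_{\mathrm{ellipsoidal}}\approx H_{\mathrm{orthometric}}+N_{\mathrm{EGM2008}}.$$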
Conclusions

To overcome the limitations of existing ZTD models, which lack an appropriate vertical adjustment function and are unsuitable for use in China in light of its complex climate and greatly undulating terrain, this study proposed an NGZTD model that considers a time-varying vertical adjustment based on the Gaussian function together with diurnal variation. First, a ZTD vertical adjustment model, NGZTD-H, was developed that can adjust the ZTD from the starting height to the target height. Then, an NGZTD model that considers the seasonal and delicate diurnal variations in the ZTD was developed to estimate the ZTD directly. The vertical interpolation accuracy of the NGZTD-H model was validated, and the model exhibited stability across different geographic regions, pressure layers, latitudes, seasons, and elevations. The NGZTD model was validated using the GNSS-derived ZTD and the layered ZTD at radiosonde stations, and the results show that the NGZTD model performed better than the GPT3 model in both the time and space dimensions. In particular, in terms of capturing diurnal variation, the NGZTD model can effectively detect the diurnal variation in the ZTD. Because the NGZTD model adopts the Gaussian function to adjust the ZTD vertically, the new model exhibited a more stable performance than the GPT3 model in western and southwestern China, where the terrain is undulating. Seasonal variations in the Gaussian coefficients were also shown to contribute to the model's accuracy.

Section 4 discusses the main error sources of the NGZTD model, and Section 5 contains the conclusions. The research framework is shown in Figure 1. The proposed model compensates for the limitations of existing ZTD models in China. The new model is expected to be applied to precise point positioning (PPP) and InSAR atmospheric correction to improve their monitoring accuracy.

A segmented global ZTD vertical profile model (GZTD-P) with different spatial resolutions, which considers the time-varying characteristics of the ZTD vertical variation factor, was proposed to address the limitations of a single function in expressing the ZTD vertical profile, and it shows better performance than the GPT3 model. However, the GZTD-P model can only vertically adjust the ZTD from the starting height to the target height and cannot directly calculate the ZTD at the target position, which limits its application. Sun et al. [32] proposed a global ZTD model, the GZTDS model, which considers delicate periodic variations by adopting a nonlinear function; the GSTDS model was developed as a new-generation model that follows the GZTDS model, and the performance of the new model deteriorated as the zenith angle increased. Hu and Yao [33] adopted the Gaussian function to fit the vertical ZTD and then established a ZTD vertical profile model that considers seasonal variation, which shows good performance in both time and space. The model achieved good results on a global scale; it was developed using the monthly mean ZTD with horizontal resolutions of 5° × 5° provided by ERA-Interim, and its adaptability needs to be further improved in regions with greatly undulating terrain and complex climates. Zhao et al. [34] proposed a high-precision ZTD model that considers the height effect on the ZTD after analyzing the relationship between the ZTD periodic residual term and the height of the GNSS station in different seasons. Although the aforementioned models have demonstrated their respective advantages, it is necessary to conduct further research on the more delicate vertical and temporal variations in the ZTD over China.
Figure 1. The research framework.

Figure 2. Distributions of the annual mean value and period amplitudes for Gaussian coefficients b and c. (a) The annual mean value of b. (b) The annual period amplitude of b. (c) The semi-annual period amplitude of b. (d) The annual mean value of c. (e) The annual period amplitude of c. (f) The semi-annual period amplitude of c.

2.4. Analysis of the Surface ZTD

Analyzing the delicate temporal variations of the ZTD contributes important references for optimizing the ZTD model. The surface ZTD has significant annual and semi-annual periods [34]. Two representative grid points in China were selected to determine the diurnal and semi-diurnal periods by adopting the fast Fourier transform. As shown in Figure 3, the surface ZTD exhibited significant diurnal and semi-diurnal period variations. The diurnal variation trends of the surface ZTD at the two grid points were different: at the first grid point, the surface ZTD increased and then decreased throughout the day, whereas the second grid point showed the opposite behavior. The diurnal period variation characteristics of the first grid point were significantly stronger than those of the semi-diurnal period, whereas the diurnal and semi-diurnal period variation characteristics of the second grid point were comparable. Therefore, annual, semi-annual, diurnal, and semi-diurnal periods must be considered when developing a ZTD grid model. As shown in Figure 4, the annual mean surface ZTD was larger on the Tibetan Plateau and smaller elsewhere, which may be due to altitudinal differences. The annual period amplitude of the surface ZTD showed a decreasing trend from southeast to northwest, which may be because the northwest is far from the ocean and the southeast is close to the ocean, resulting in a large difference in climate type. The semi-annual period amplitude of the ZTD did not exhibit an apparent variation law. The distributions of the diurnal and semi-diurnal period amplitudes were similar. Therefore, these results further demonstrate that the delicate periods of the ZTD must be considered when developing a ZTD grid model.

Figure 4. Distribution of annual mean value and period amplitudes for surface ZTD. (a) The annual mean value. (b) The annual period amplitude. (c) The semi-annual period amplitude. (d) The diurnal period amplitude. (e) The semi-diurnal period amplitude.

Figure 5. Distribution of vertical interpolation accuracy for the NGZTD-H and GPT3 models using the ERA5 profile ZTD in 2018. (a) The bias of NGZTD-H. (b) The RMSE of NGZTD-H. (c) The bias of GPT3. (d) The RMSE of GPT3.

Figure 6. Distribution of vertical interpolation accuracy for the NGZTD-H and GPT3 models in the selected pressure layers and latitude bands using the ERA5 profile ZTD in 2018. (a) The bias of pressure layers. (b) The RMSE of pressure layers. (c) The bias of latitude bands. (d) The RMSE of latitude bands.

Figure 7. Distribution of vertical interpolation accuracy for the NGZTD-H and GPT3 models using the ZTD-layered profiles at radiosonde stations in 2018. (a) The bias of NGZTD-H. (b) The RMSE of NGZTD-H. (c) The bias of GPT3. (d) The RMSE of GPT3.

Figure 8. Distribution of vertical interpolation accuracy for the NGZTD-H and GPT3 models in different seasons using the ZTD-layered profiles at radiosonde stations in 2018. (a) Hailar. (b) Hangzhou.

Figure 9. Distribution of vertical interpolation accuracy for the NGZTD-H and GPT3 models at different heights using the ZTD-layered profiles at radiosonde stations in 2018. (a) The bias. (b) The RMSEs.

Figure 10. Distribution of accuracy for the NGZTD and GPT3 models using the GNSS-derived ZTD at GNSS stations in 2018. (a) The bias of NGZTD. (b) The RMSE of NGZTD. (c) The bias of GPT3. (d) The RMSE of GPT3.

Figure 13. Distribution of accuracy for the NGZTD model, the model with constant Gaussian coefficients, and the GPT3 model using the ZTD-layered profiles at radiosonde stations in 2018. (a) The bias of NGZTD. (b) The RMSE of NGZTD. (c) The bias of GPT3. (d) The RMSE of GPT3.

Figure 14. The percentage results of the RMSE for the NGZTD and GPT3 models using the ZTD-layered profiles at radiosonde stations in 2018. (a) NGZTD model. (b) GPT3 model.

Figure 15. Distribution of spatial interpolation accuracy for the NGZTD-H and GPT3 models using the GNSS-derived ZTD at GNSS stations in 2018. (a) The bias of NGZTD-H. (b) The RMSE of NGZTD-H. (c) The bias of GPT3. (d) The RMSE of GPT3.

Table 1. The bias and RMSE of ZTD fitting curves using the Gaussian function (unit: mm).

Table 2. Statistics of vertical interpolation accuracy for the NGZTD-H and GPT3 models using the ERA5 profile ZTD in 2018 (unit: cm).

Table 3. Statistics of vertical interpolation accuracy for the NGZTD-H and GPT3 models using the ZTD-layered profiles at radiosonde stations in 2018 (unit: cm).

Table 4. Statistics of accuracy for the NGZTD and GPT3 models using the GNSS-derived ZTD in 2018 (unit: cm).

Table 5. Statistics of accuracy for the NGZTD model, the model with constant Gaussian coefficients, and the GPT3 model using the ZTD-layered profiles at radiosonde stations in 2018 (unit: cm).

Table 6. Statistics of spatial interpolation accuracy for the NGZTD-H and GPT3 models using the GNSS-derived ZTD in 2018 (unit: cm).
Intertextuality in Pre-service Teachers' Argumentative Essay in Raising AI: Practices and Beliefs

English as a Foreign Language (EFL) pre-service teachers arguably face more challenges regarding rhetorical moves in argumentative essays, and one of them is intertextuality, because EFL pre-service teachers' arguments require sufficient and high-quality support and evidence from other scholars. Intertextuality has mainly been studied on the basis of texts produced without external tools such as Artificial Intelligence (AI). In a rising AI era, the objective of this study is to investigate Indonesian EFL pre-service teachers' intertextuality in argumentative essays assisted by AI. Ten EFL pre-service teachers who attended sixteen sessions of an Academic Writing course, with neither teaching nor writing experience, were recruited as participants. We employed a case study design to portray the nature of the phenomena; the data were collected through documents (academic essays) to portray the practices, and through interviews to represent teachers' beliefs on explicit and implicit intertextuality beyond their argumentative essays in facing AI. We employed content analysis of the academic essays and interviews. The findings show that 1) EFL pre-service teachers mostly used reporting phrases and iconic references, but they were oriented to local references that targeted local audiences, so international references should be encouraged.

INTRODUCTION

To help EFL pre-service teachers become skilled writers, having only knowledge and teaching experience in argumentative teaching strategies is insufficient; a broader understanding and practice of argumentative essays is necessary for success (Valdivia & Martínez, 2018). Moreover, writing plays an essential role in connecting EFL pre-service teachers to academia and to teacher communities of practice, based on expressing their creative teaching ideas, relating to the teacher community, preserving cultural and social relevance, and achieving professional requirements (Latham, 2020; Yoo, 2018). More specifically, argumentative essays can elevate EFL pre-service teachers' scientific thinking by integrating causal claims with supporting evidence in the writing process (Valdivia & Martínez, 2018).

Moreover, one of the essential elements in argumentative essays is intertextuality, which requires knowledge of citations. Therefore, pre-service teachers should be capable of critically navigating a body of literature, critical reading, and critical writing to argue and give sufficient evidence to support their arguments. Kristeva (1996) acknowledges that all texts are interconnected through references, so pre-service teachers should be able to distinguish the various voices in their argumentative essays, between their own and experts', to avoid plagiarism. Furthermore, the current body of literature on intertextuality has mainly discussed intertextuality and plagiarism awareness (Hu & Shen, 2021), literature reviews in theses (Badenhorst, 2017), writing e-mails (Bremner & Costley, 2018), writing social arguments (Olsen et al., 2018), and online writing (Strickland, 2019). Furthermore, intertextuality was reported to be the most challenging part for pre-service teachers in writing argumentative essays (Valdivia & Martínez, 2018). With the rise of AI, there is scant evidence for intertextuality studies in argumentative essays assisted by AI. Therefore, this study explored EFL pre-service teachers' intertextuality practices and beliefs in argumentative essays assisted by AI.

1. Producing arguments or claims;
2. Establishing relationships that modify the epistemic value of arguments according to available knowledge;
3. Examining the validity of arguments according to available scientific knowledge.

Pre-service teachers use their prior knowledge to construct scientific arguments to convey perspectives and values to their readers. Scientific arguments go beyond simply organizing theories and empirical studies in order; teachers must also engage in critical reflection and evaluation. Prior studies on argumentation within educational settings have primarily centered on pre-service teachers' argumentative essay training (Fajaryani et al., 2021; Valdivia & Martínez, 2018) and pre-service teachers' collaborative writing (Rosales et al., 2020), and scholars have also employed various technologies, such as AI chatbots (Guo et al., 2022) and mind mapping (Barzilai et al., 2021). However, those studies did not explicitly discuss pre-service teachers' intertextuality in argumentative essays. Moreover, Castelló et al. (2011) point out that the quality of scientific argumentative essays can be assessed according to their structure and the argumentative nature of academic texts, based on three criteria, one of which is intertextuality:

1. Intertextuality refers to the dialogue established with other texts and authors used as explicit references. It includes: (i) sufficiency, clarity, and relevance of statements; (ii) evaluative comments on statements, and the use of other texts or voices for that purpose; (iii) convergence with other accepted theories, laws, or models.
2. Critical approach is characterized by the writer's stance and the use of discursive resources for: (i) making personal attitudes and choices explicit according to assumptions and evidence; and (ii) achieving coherence between arguments and ideas to convince a given audience.
3. Formal aspects represent features of texts that follow specific rules on formal discourse elements. Such characteristics include command of technical language, and grammatical and spelling correctness.

Moreover, among the qualities of argumentative essays (e.g., intertextuality, critical approach, and formal aspects), intertextuality was found to be an area with room for improvement, because pre-service teachers lack the ability to argue and explain the relationships within intertextuality (Valdivia & Martínez, 2018). Therefore, this study focused on the intertextuality aspect, with the expectation that pre-service teachers develop a wider understanding to differentiate various voices (e.g., experts' studied theory, methodology, and empirical findings, and their own voices as writers) and to interweave and critically reflect on them.
The concept of intertextuality, influenced by Kristeva (1996), proposes that all texts are interconnected. Intertextuality can be defined as the way writers incorporate existing texts and audiences to generate a fresh text and audience. It embodies the post-modern idea that every new text element has a background and can be linked to its origins. Essentially, intertextuality involves portraying other sources within one's writing (Badenhorst, 2017). Furthermore, Groom (2000) describes it as a given point whose "voice" is "speaking" (p. 15); therefore, the origin of an idea should be traceable. When writers do not quote or cite, readers will expect them to take responsibility and credit for the statement. However, Bazerman (2004) acknowledges two types of intertextuality: explicit (evident through direct source citation) and implicit (discernible only by knowledgeable individuals within a discourse community). Both explicit and implicit intertextuality can connect a text or statement to previous, current, or possible future texts (Bazerman, 2004). Moreover, implicit intertextuality relies on commonly circulated beliefs, issues, ideas, and statements, often considered common knowledge; however, recognizing and understanding the underlying connections and the voices behind the text may require additional or background knowledge, especially for individuals who are not part of the specific community or context in which the text originated.

Intertextuality is arguably the most challenging part of writing. Farrelly (2020) notes that Fairclough (1992) operationalized Kristeva's idea about intertextuality to 'make the concept of intertextuality somewhat more concrete by using it to analyze texts' and to 'set out rather more systematically the potential of the concept for discourse analysis' (p. 101). Fairclough (1992) divided intertextuality into six forms (discourse representation: direct discourse; discourse representation: indirect discourse; presupposition; negation; meta-discourse; and irony). However, Fairclough (1992) also faced criticism from Farrelly (2020) on the grounds that 1) presupposition and negation are problematically viewed as manifestations of intertextuality; 2) presupposition and negation should instead be included under assumption; and 3) meta-discourse should be excluded because it represents the text itself, not other texts. Therefore, Fairclough (2003) narrowed down his idea of intertextuality by referring to 'the presence of actual elements of other texts within a text - quotations' (p. 39).
EFL Pre-service Teachers' Academic Essay Challenges on Raising AI

Many EFL pre-service teachers have struggled because of inadequate English language proficiency, so their manuscripts were difficult to follow (Vintzileos et al., 2023). In the case of pre-service teachers, some needed to write a graduation paper even though they did not need to publish it, and they were examined by internal examiners from their campus. Some of them gave up during the writing of the graduation paper and did not graduate from university. However, writing has become easier for pre-service teachers because they can get 24-hour language assistance from Artificial Intelligence (AI). AI is not new in education, but it has become more advanced. Since the 1950s, AI has been the subject of growing multidisciplinary and interdisciplinary discussion among scholars, from the conceptual to the practical, through mimicking human intelligence, for instance in language skills (Chowdhary, 2020; Haenlein & Kaplan, 2019; Lund et al., 2023).

Moreover, generative AI is different from AI tools. Generative AI offers a "shortcut" to the writing process, and it has become a concern for scholars to ethically educate pre-service teachers on what they can and cannot do with AI when writing argumentative essays. Moreover, the emerging integration between generative AI and various writing tools (e.g., Google Docs, Grammarly, and Turnitin) has been used to improve writing quality in terms of accuracy and to decrease plagiarism (Ebadi & Rahimi, 2019; Li & Li, 2018; Liao, 2016).

Therefore, pre-service teachers' skills in intertextuality are challenged, and they should have the ability to distinguish and articulate the various voices of writers and prior scholars. Moreover, writing tools powered by AI can support collaborative writing, so pre-service teachers are not limited to shortcut use for producing a written product; a peer-review writing process can be carried out to maintain the accuracy and fluency that AI feedback misses (Ebadi & Rahimi, 2018, 2019; Li & Li, 2018). Therefore, the roles of AI and peer reviewers can strengthen the feedback generated by both. Moreover, pre-service teachers can benefit by focusing more attention on the quality of argumentation, because writing accuracy can be assisted by AI, allowing them to focus on other areas of the writing process (Cotos et al., 2020; Knight et al., 2020; Palermo & Wilson, 2020; Shermis & Hamner, 2013).

In November 2022, a more advanced AI, ChatGPT (an AI chatbot), was massively adopted by various users until it reached over 100 million users (Meyer et al., 2023; Vintzileos et al., 2023). Moreover, this AI can also improve "several aspects of language, such as vocabulary, spelling, punctuation, and grammar" (Vintzileos et al., 2023, p. 89), functions that Grammarly already offered. Although this AI chatbot plays roles similar to those of Grammarly, Quillbot, RewriteGuru, and the like, it goes further by generating arguments: AI not only generates sentences but also creates paragraphs. However, the validity and reliability of its output have been criticized by scholars.

Although ChatGPT's use in language classrooms has been debated, EFL teachers have also shown a positive attitude toward implementing ChatGPT to enhance traditional language classrooms (Mohamed, 2023; Ulla et al., 2023). Ulla et al.
(2023) point out that teachers' roles and technological knowledge should be strengthened when implementing ChatGPT in the classroom, where ChatGPT can help teachers assist students in problem-solving activities during their studies. In comparison, traditional classrooms mostly position teachers as the main source of feedback. Moreover, in the EFL context, students report high affective filters while learning English in the classroom, and ChatGPT helps reduce these affective filters because students can get assistance before performing in English (Mohamed, 2023).

However, when it comes to writing, pre-service teachers face dilemmas around three main issues in using AI for academic essays: 1) authorship, 2) copyright, and 3) plagiarism (Lund et al., 2023). Authorship issues make the intertextuality of a text ambiguous when AI produces the arguments, because the system may generate arguments from its training data without directly citing the original authors, while the practice of citation is essential to introduce to early-career researchers. This raises the question, "Is a graduation paper still reliable as one of the graduation requirements?" It also reminds policymakers and faculty members of a further question: if students do not practice writing academic essays during their undergraduate years, how will they survive when they pursue a master's or doctoral degree, or work in a research field where publishing journal articles is a requirement? Today, academic essays are challenged by generative AI. AI can develop argumentation for academic essays, which is painful for scholars who spend many hours navigating, reading, and writing. However, scholars and educators cannot simply abandon academic writing classes; instead, AI literacy should be introduced to pre-service teachers so that they can use AI wisely. The management, beliefs, and practices of Indonesian education reform, in the context of this study, should therefore be carefully reviewed. Although AI has had a tremendous impact on many aspects of education, it must be noted that AI cannot replace pre-service teachers' critical reading and writing, because AI does not comprehend texts the way professional writers do. We speculate that AI can assist English language quality, but its output still requires critical review from the writers. Therefore, this study attempted to unfold the status quo of AI in argumentative essays from the perspective of intertextuality, because educators need clear boundaries for using AI in the classroom, especially in academic essays that refer to previous works. We generated two research questions: 1) What are the characteristics of pre-service teachers' intertextuality in argumentative essays in the face of AI? 2) How do pre-service teachers' beliefs about argumentative essays with AI contribute to their intertextuality selections?

RESEARCH METHOD

To portray the nature of the data, we employed case study research (Yin, 2018), investigating ten EFL pre-service teachers who practiced writing argumentative essays in a university course in Indonesia. The program employed an AI-powered writing tool called Scribo, which supports 1) classroom management, grouping students into classes and groups, 2) seeking feedback (e.g., self-feedback, peer feedback, and AI feedback), and 3) providing initial scores for each draft together with detailed language proficiency progress.
During this study, the participants were at the beginning-practice stage, so the course focused only on the introduction section of argumentative essays. The EFL pre-service teachers' introductions were therefore based on the create-a-research-space (CARS) model (Swales, 2014): establishing a research territory, establishing a niche, and occupying the niche.

There were sixteen meetings during this study, and the program allowed pre-service teachers to review each other's argumentative essays. During the first and third meetings, students were educated about AI literacy in argumentative essays and how to use Scribo for argumentative essay purposes. Subsequent meetings started by introducing Swales's (2014) CARS model, after which pre-service teachers practiced writing and reviewing. We provided a consent form and obtained permission to use their argumentative essays as the primary data source. The course's argumentative essay topics focused on integrating technology into teaching.

To collect the data, we gathered documents, namely the argumentative essays, to be analyzed through content analysis. We also used interviews with stimulated recall (e.g., 1) What types of references do you use to support your argumentative essay?; 2) How far does AI assist you in writing your argumentative essay?; 3) What challenges do you face in using references to support your argumentative essay, and how do you deal with them?; 4) Do you use any AI from third-party apps instead of the built-in AI in our learning management system?). The purpose was to investigate and clarify the nature of the teachers' textual claims and the value system underlying the attributions they made about argumentative essays assisted by AI.

In the data analysis process, we employed Bengtsson's (2016) content analysis. The first stage is "decontextualization": we familiarized ourselves with the data by reading the argumentative essays to understand what was happening in the practices and by reading the interview results to understand the pre-service teachers' beliefs. During decontextualization, we labeled the data with codes to start open coding (e.g., reporting phrases, named text whole text, iconic references). The second stage is "recontextualization": we reread all the data and highlighted passages to distinguish each unit of meaning; during this process, we compared the highlighted data with the research questions and aims of the study and excluded off-topic data. The third stage is "categorization": we categorized the selected data into practices and beliefs of intertextuality and then grouped them into sub-themes under practices and beliefs. The fourth stage is "compilation": we analyzed the grouped data using Farrelly's (2020) intertextual reference types as a reference. To check the consistency of our analysis, we referred back to the original data. To strengthen the validity of the data, we employed intercoder reliability: the first author acted as the main coder and the other authors as co-coders. We each applied Bengtsson's (2016) content analysis independently and then met to compare similarities and differences in our coding, excluding data that were interpreted differently and did not reach interpretive agreement.
Pre-service teachers' intertextuality practices in argumentative essays with AI

At the macro level of analysis, this study grouped the introductions according to Swales's (2014) create-a-research-space (CARS) model. We found that all teachers already fulfilled the criteria in general, but their texts lacked coherence and cohesion: they mostly quoted previous studies and Indonesian policies without considering connecting sentences or ideas. Theoretically, scientific arguments must be built on sufficient justification and on teachers' skill in locating their own work, whether they agree or disagree with prior studies (Castelló et al., 2011; Jorba et al., 2000). We add that the teacher-teacher reviewing process during the academic writing course was insufficient to build these skills in the short term; we argue that it requires a long-term commitment for pre-service teachers to engage with the lecturer in the same discourse.

Intertextuality is the most challenging part of argumentative writing (Valdivia & Martínez, 2018), and we found that pre-service teachers required sufficient navigating skills as a foundation. As a result, pre-service teachers tended to overclaim; for instance, Teacher 3 stated, "However, none of the previous studies showed the influence and effectiveness of augmented reality on the writing abilities of students. (Step 1B: Indicating gap)". This statement was not sufficiently supported by prior studies, such as literature reviews, systematic reviews, or critical reviews, because the teachers did not put enough effort into navigating and reading the literature, although they had been trained to do so. In line with Valdivia and Martínez's (2018) work, whose participants had difficulty developing an argumentative thesis and generating intertextual dialogue, this study adds that pre-service teachers' introductions were less focused on key areas or variables; for example, Teacher 3 neglected descriptive texts, writing skills, and the research context of senior high school students, focusing only on a medium that is not commonly used in Indonesia and has no specific regulation for education. Prior studies found EFL pre-service teachers sufficient in micro-skills (e.g., grammar and vocabulary) and macro-skills (e.g., coherence and cohesion) in writing (Fajaryani et al., 2021; Valdivia & Martínez, 2018), although we found from our engagement with the pre-service teachers' mental processes that they tended to rely more on cognitive, meta-cognitive, and social strategies. We therefore argue that the complex system of the writing process and sufficient writing skills must be built from pre-service teachers' awareness of intertextuality once they have adequate cognitive, meta-cognitive, and social strategies.

Our content analysis shows that, in the introductions, pre-service teachers mostly used reporting phrases (n = 32), named text whole text (n = 9), iconic references (n = 10), and generic communicative act type (n = 4); quotation of parts of the text (n = 2), generic text type (n = 1), and ambiguity (n = 7) were also used. As Figure 2 shows, pre-service teachers used many reporting phrases, seemingly forgetting that argumentative writing is not limited to reporting previous studies to support the argumentation. Furthermore, the pre-service teachers' intertextuality is displayed in Table 1, which presents examples drawn from the teachers' own works.
Table 1. Examples of Pre-service Teachers' Intertextuality

Reporting phrases:
- "This previous study reported that the students in ninth grades' reading motivation are strongly influenced by their school-related reading practices (Tegmark et al., 2022)." (Teacher 2)
- "In relation to the effectiveness of learning English as a Foreign Language (EFL), the previous research stated that the interaction relationship between teachers and students is also one of the factors supporting success (Vattøy & Gamlem, 2020). This research was conducted in two junior high schools with the aim of knowing the quality of interaction between teachers and students and provide feedback in teaching English as a foreign language (EFL). This research was conducted through the analysis of video recording data from 13 classrooms conducted 65 English learning. The result revealed that interaction between teacher and students and providing feedback is a regulatory process needed to achieve the learning objectives. This research found that English teachers in those two schools were still struggling to provide positive feedback so that it could influence students' learning to be more effective." (Teacher 9)

Generic communicative act:
- "The Indonesian government has created a curriculum, called curriculum 2013. In this curriculum, the process of English class requires the use of a scientific approach in the learning process, where the learning is more focused on students' activity rather than teachers' activity." (Teacher 7)
- "In the case of COVID-19 in Indonesia, the Indonesian government decided to suspend all school-related activities in March 2020 to keep up against the virus. The Ministry of Education in Indonesia recommended schools establish remote teaching arrangements and provide online education possibilities for children." (Teacher 5)
- "The government through the ministry of education made various adjustments to learning activities during the pandemic. One of them is the implementation of an online class system." (Teacher 1)

Ambiguity:
- "There are many studies being conducted and compared to evaluate the effectiveness of written asynchronous computer-mediated communication (WACMC) and oral face-to-face interaction (OF2F) that is used to give feedback in writing class." (Teacher 1)
- "Despite the rise in popularity of digital games as pastimes and research demonstrating the affordability of digital game-based language learning (DGBLL) for English as a Foreign Language (EFL), DGBLL is not widely used in Indonesia." (Teacher 4)

As novice writers, pre-service teachers showed a low level of intertextuality because they were still developing the reading experience needed to recognize theoretical, conceptual, and empirical studies and to layer the complexity of their arguments (Badenhorst, 2017; Hu & Shen, 2021; Valdivia & Martínez, 2018). Valdivia and Martínez (2018) found that novice teachers tend to rely on direct citations. In contrast, our analysis showed that these pre-service teachers more often used indirect citation in the form of "reporting phrases" without considering their own voices, so they only reported (see Table 1, "Reporting phrases," from Teacher 9). Teacher 9 only summarized what prior studies had done and found, without critical analysis of other studies or any link to Teacher 9's own study. The teachers argued by providing various sources that could make their argumentative text appear to be developed from facts, and many pre-service teachers did the same because they forgot to interweave what prior studies had already shown with their own argumentation. Furthermore, the complexity of the represented citations reveals the limits of pre-service teachers' capacity for intertextuality: our study showed that teachers cited at the level of empirical studies and left the theoretical and conceptual frameworks behind.
In the introduction of argumentative writing, providing clear definitions is valuable for readers, because readers may need help understanding some of the discourse used in particular studies. Some pre-service teachers tended to give descriptions or parameters of their key terms in the introduction by using "named text whole text." However, some teachers also used "ambiguity," referring to knowledge or terms from previous studies without clearly indicating whose voice is speaking in the text.

Pre-service teachers' beliefs on intertextuality in argumentative essays with AI

Using a writing management system powered by AI made pre-service teachers believe they could focus on their arguments. However, pre-service teachers seemingly followed the habits of Indonesian writers in local journals, wanting to show their knowledge of and expertise in government policies related to education. Sezen-Barrie et al. (2017) also found a similar tendency of teachers to use government resources as references; in their study, however, teachers supported government policy while rebuttal was also presented. Our participants tended to play it safe and hold to their norms, as rebuttal was stereotyped as an impolite move. Furthermore, international readers might see this as uncritical reporting, because the pre-service teachers only reported and did not critically elaborate on previous regulations or on similar policies in other countries (see Table 1, "Iconic reference" and "Generic communicative act").

"Before, I thought my research and writing did not need to be perfect; my audiences are not experts, and I do not need to publish. Therefore, I used local resources to build the context of the study in my introduction. However, when I wrote my academic writing, the AI gave me scores that motivated me to write better. I did not want scores under 70. Although it was very difficult for me as a novice writer to get higher scores, I was eager to write on the topic because it was based on my research interest." (Teacher 1)

"I learned that I need to be careful in selecting resources as references; for example, when my participants are in junior high school, I need to find the same participants on my topics to compare the result with the literature. But I believe in my introduction that I need to put what is going on in my country to give updates to local readers." (Teacher 10)

"Putting more arguments and facts based on local reports or government policies is more valuable for other Indonesian readers because they might replicate my idea in their classroom, consider the local context, and publish in a local journal. Because I am still new, a local journal is probably a good step to enter academia." (Teacher 3)

Pre-service teachers' beliefs seem to be oriented toward reproducing knowledge rather than seeking new knowledge from the current body of literature (Badenhorst, 2017). They framed their academic writing as practice for conducting classroom research to help students and for local audiences, so they felt no need to achieve novelty in their work.
Moreover, the AI kept them revising because it provided initial scores for their arguments, so pre-service teachers could reflect on the scores and aim for higher scores from the AI. Although the AI scores did not determine pre-service teachers' final grades, they served as external feedback that prompted internal feedback-seeking, and the teachers built awareness of what they could and could not seek from AI in the writing process (Guo et al., 2022).

However, pre-service teachers' intertextuality capacity did not change magically: we found that their use of intertextuality was still developing and insufficient, and that they needed more time to practice. We also found that pre-service teachers only reported and forgot to argue, so their own voices were not represented in their works, indicating a lack of reading and writing strategies. These strategies need to be supported by pre-service teachers viewing intertextuality as a way to interact critically with the body of literature rather than as an anchored convention (Vardi, 2012). In learning intertextuality, this community contributed fundamental skills for pre-service teachers preparing their graduation papers by introducing them to 1) navigating and connecting the literature review, 2) recognizing various methodologies (Badenhorst, 2017), 3) using direct and indirect quotations, and 4) reporting previous studies to give credit and evidence (Guo et al., 2022; Hu & Shen, 2021). Moreover, this study adds that teacher-mentor and teacher-teacher interactions are essential for building intertextuality, because prior studies suggested three areas of teacher development: intertextuality, engagement with various sources, and contextual mediation (Badenhorst, 2017; Fajaryani et al., 2021; Valdivia & Martínez, 2018).

CONCLUSION

This study reports on pre-service teachers' practices and beliefs. In practice, it shows that pre-service teachers need more time to engage with academic essay discourse, as reflected in their heavy use of reporting phrases (n = 32) to provide evidence in their argumentative essays. The limited use of iconic references (n = 10) shows that some pre-service teachers find it challenging to connect their expertise in local or national knowledge to international issues. To support their reporting phrases, pre-service teachers tended to use named text whole text (n = 9), generic communicative act type (n = 4), quotation of parts of the text (n = 2), and generic text type (n = 1). To express their expertise in a research area, they used ambiguity (n = 7), but only to a limited degree. Although their beliefs showed that AI feedback let them focus on their arguments and quotations and worry less about errors of accuracy, varied use of intertextuality in argumentative writing could not be achieved magically with AI support. We argue that writing argumentative essays cannot be accomplished using only generative AI; it requires high-quality feedback to elevate students' intertextuality. Moreover, this study shows that students built initial awareness of AI literacy during the study, although this requires in-depth investigation.
The praxis implication of this study is its contribution to implementing Farrelly's (2020) intertextual reference types, which can be fostered by integrating AI-assisted writing classes so that pre-service teachers can focus on critical engagement with the literature. To connect students with the various discourses in a body of literature, our study recommends 1) building critical reading, 2) familiarizing students with mind mapping of the body of literature, and 3) practicing summarizing, paraphrasing, synthesizing, and arguing. In this way, they would not merely report like "reporters" but would also know how to voice their arguments and locate their study within the body of literature. Ultimately, we expect pre-service teachers to build the capacity to criticize or rebut based on the existing literature or policy.

Regarding the policy implication for higher education, although this study is an early response to the Indonesian government's policy of making the "graduation paper and publication" optional, it shows that the regulation of academic-writing-for-publication classes needs to be reformed in the higher education curriculum. This study indicated that pre-service teachers' intertextuality skills are insufficient, or at an early stage of engaging with academic discourses, and that they require more time to develop AI literacy in response to the rise of AI. We suggest that policymakers and faculty members carefully design academic-writing-for-publication classes within a single national framework. Then, even if pre-service teachers do not need to write graduation papers or choose other optional requirements, they would still engage with academic writing and argumentative essays as the language of academia, with AI adequately utilized and ethically regulated.

This study was limited to a two-month period and a small sample of pre-service teachers, which constrains its transferability and generalizability. Nevertheless, it still represents the voices of EFL pre-service teachers who were challenged to write argumentative essays for the first time. Future research can therefore involve more participants, including those from non-English departments or more experienced writers.
Representative interview excerpts on using AI in the writing process:

"I find it challenging to synthesize by combining similar studies' results into one argument. Although my class gave me the idea to use mind mapping, it is not easy to write up my work's result. Although I did not get lost in my writing, tailoring and summarizing the ideas was difficult." (Teacher 7)

"The AI is quite helpful in checking my vocabulary use and grammar, so I do not worry about my summary or paraphrasing from other scholars. However, if the AI could differentiate between my claim and my citation, it would benefit someone learning academic writing. I still relied on my friends' or my lecturer's feedback about my citations. Because this was my first time learning to cite and write a paragraph, I needed to combine many resources. AI was beneficial for me to avoid plagiarism because it provided me with feedback about paraphrasing or summarising." (Teacher 4)

"After attending this class, I realized that I needed to read the journal articles I cited, because I tried to compare them with the results from ChatGPT and they were different. I would feel embarrassed if I misinterpreted something by following the AI's paraphrasing. So, I strategized by using AI to give me more vocabulary options when I paraphrased. My lecturer showed that if I directly copied and pasted from ChatGPT, Turnitin could also detect the AI-generated text in my citation or summary as plagiarism. This class made me aware that I can use AI, but I need to use it wisely." (Teacher 2)
Survey on clothing image retrieval with cross-domain

Chen Ning, Yang Di, Li Menglu (Xi'an Polytechnic University, Shaanxi, Xi'an, China)

This paper summarizes the research progress on critical region recognition and deep metric learning for accurate clothing image retrieval in cross-domain situations. Critical region recognition is of great value for clothing feature extraction and effectively improves retrieval accuracy, but accuracy decreases on difficult samples that have similar features yet belong to different categories. Deep metric learning is currently an effective way to address this problem; it optimizes different loss functions and ensemble networks to strengthen the discrimination of clothing features. By comparing the experimental results of different algorithms and analyzing the accuracy of cross-domain clothing retrieval, we show that future improvements in retrieval accuracy depend mainly on extracting the important features of clothing and on making those features more discriminative.

Introduction

Clothing image retrieval is the task in which a computer recognizes a given clothing image and recommends clothing images with similar styles. It has been widely used on e-commerce platforms and in search services; Taobao, Jingdong, Baidu, Google, and others all benefit from clothing image retrieval. People like to take photos in daily life and then look for their favorite clothing on the Internet, and cross-domain clothing retrieval technology can help them quickly and accurately find clothing of similar styles online. This not only meets people's daily needs and improves quality of life but also promotes consumption in the clothing industry. According to fashion industry surveys, the domestic and foreign fashion markets are growing steadily; by 2024, the domestic fashion market is expected to grow by 8.8% compared with 2020, reaching a market size of 26.288 million US dollars [1,2].

Although clothing image retrieval technology has made great progress over the past 10 years, clothing image retrieval in cross-domain situations still faces great challenges. Cross-domain clothing retrieval means that the query image and the retrieval database come from two different domains: in a typical scenario, clothing images from online shops are retrieved based on their similarity to street photos of clothing taken in daily life. Recent surveys point to two main difficulties. (1) Clothes are deformable items, and their appearance can differ greatly when viewed from different shooting angles or worn on different body types. The query image provided by the user may be taken under complex conditions, with cluttered backgrounds, varied shooting angles, varied lighting, and even occlusion, whereas most shop images feature clean backgrounds, good lighting, and frontal angles. (2) The intra-class variance is large and the inter-class variance is small, which is an inherent characteristic of clothing images. For example, two dresses from different categories may be very similar in color and design but differ subtly in the shape of the neckline, one being V-shaped and the other U-shaped. Given an image of a user's clothing with a V-neck, returning a dress with a U-neck is not considered a correct search result.
Motivation

Clothing image retrieval in cross-domain situations is widely applicable in daily life. In online shopping, when a user provides a photo taken in daily life, the retrieval system can return clothing images with the same or similar characteristics; this reduces the system's dependence on text, so the desired clothing can be retrieved more directly and accurately. Cross-domain clothing image retrieval is also useful for physical stores. To purchase stock accurately and avoid excess inventory, store owners need to understand the clothing preferences of people in the surrounding neighborhood. The traditional approach is to manually count and classify the clothing styles of consumers around the store. If a program can instead automatically photograph nearby pedestrians and analyze the attributes of their clothes, the number of observations can be greatly increased while time and labor costs are reduced, providing a stronger basis for purchasing decisions. Therefore, researching and surveying cross-domain clothing image retrieval has far-reaching significance for both individuals and society.

In practice, the major domestic e-commerce platforms mainly retrieve clothing images through keywords or text, which is essentially searching for pictures by text. This technique requires clothing images to be finely classified and labeled. With the explosive growth of clothing images, the shortcomings of this approach have become more and more obvious. First, keywords can only describe easily extracted and abstract semantic features and cannot fully reflect the visual features of clothing images, especially fine details that are difficult to describe. Second, because of the huge number of clothing images, manual labeling consumes substantial human and material resources and is prone to subjective bias. Finally, if the keywords entered by the user are not accurate enough, it is difficult to retrieve the desired product. Therefore, this paper surveys content-based cross-domain clothing image retrieval, summarizing and evaluating it from different technical perspectives, in the hope of inspiring researchers and identifying new research directions.

Previous work

With the development of deep learning [3-6], the framework of cross-domain clothing image retrieval has taken the form shown in Fig. 1. Cross-domain clothing image retrieval mainly includes two key steps: feature extraction and similarity measurement. For clothing feature extraction, critical region recognition methods are generally used to identify the important areas of clothing. New similarity measurement methods have also appeared; at present, the best results are obtained with deep metric learning methods. As shown in Table 1, different methods from past studies are summarized based on the latest research. In cross-domain clothing retrieval, deformation, occlusion, complex backgrounds, and other phenomena occur, which challenge retrieval accuracy. At present, most critical region recognition methods for clothing detect foreground objects before extracting features. The main purpose is to suppress background differences, enhance the identification of relevant local details, and provide more discriminative features during image feature extraction, which makes it easier to distinguish different types of objects.
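To make the two-step framework concrete, the following minimal sketch embeds query and gallery images with a pretrained CNN, L2-normalizes the embeddings, and ranks gallery items by cosine similarity. It is written in PyTorch, the framework used in the experiments reported later in this paper; the ResNet-50 backbone, image size, and file names are illustrative assumptions rather than the configuration of any surveyed method.

```python
# Minimal cross-domain retrieval sketch: embed images, then rank by similarity.
# Assumptions: an ImageNet-pretrained ResNet-50 as the feature extractor and
# cosine similarity as the metric; the surveyed methods use task-specific models.
import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()          # drop the classifier, keep 2048-d features
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    """Return L2-normalized embeddings for a list of image paths."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths]).to(device)
    feats = backbone(batch)
    return torch.nn.functional.normalize(feats, dim=1)

# Hypothetical file lists: one street (query) photo and a shop-image gallery.
query_emb = embed(["street_query.jpg"])
gallery_emb = embed(["shop_001.jpg", "shop_002.jpg", "shop_003.jpg"])

# Cosine similarity (dot product of normalized vectors); higher means more similar.
scores = query_emb @ gallery_emb.T
ranking = scores.argsort(dim=1, descending=True)
print(ranking[0])   # indices of gallery images, best match first
```

In the cross-domain setting, the surveyed methods replace this generic backbone and metric with networks trained using the region-recognition and metric-learning techniques discussed below.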
Another major challenge of cross-domain clothing image retrieval is to distinguish similar images from different categories and to cluster images of the same category that look quite different. Deep metric learning maps images to feature vectors through deep neural networks; in the resulting space, the Euclidean distance or cosine distance between two points can be used directly as the distance metric. The contribution of many deep metric learning algorithms is to design a loss function that learns more discriminative features. There is therefore a large body of work on deep metric learning and its loss functions, including the contrastive loss, the triplet loss and its more complex variants, and ensemble methods that combine the outputs of multiple networks or methods.

Bounding box method

The bounding box method uses a detection method to identify the clothing regions in an image and marks them with rectangular boxes, as shown in Fig. 2. The purpose is to separate the clothing from the complex background and other external factors during retrieval, enhancing the neural network's feature extraction for the clothing itself. Kiapour et al. [7] used a selective search method [8], filtering out any candidate with a width less than one-fifth of the image width, and directly used manually labeled clothing bounding boxes to limit the influence of background regions and obtain more accurate retrieval; these steps also reduce some of the variability observed across different online stores and item descriptions. Chen et al. [9] made several improvements to clothing detection based on the R-CNN [10] object detection method, using selective search to generate region proposals and a Network-in-Network (NIN) model to extract features from the local regions. Huang et al. [11] then embedded additional semantic information in the tree-structured layers of an attribute-aware network; after obtaining attribute-aware deep features, they used Support Vector Regression (SVR) to predict the overlap ratio of each candidate box, constrained the size range and aspect ratio of the bounding box, and discarded inappropriate candidates, thereby improving the localization of the clothing bounding box in the image. In general, with the development of object detection [12], the bounding box method is relatively easy to implement, and both the speed and accuracy of recognition have improved. However, for more complicated clothing images with cluttered backgrounds, varied human poses, and occlusion, the features extracted within the bounding box still contain many interfering features, so retrieval accuracy decreases.

Human body landmark recognition method

The human body landmark recognition method focuses on the limbs of the person wearing the clothes. As shown in Fig. 3, it identifies the important regions of clothing according to the important body parts and then uses a convolutional neural network to extract features from these regions.
One line of work builds on the human pose estimation method proposed by Eichner's team [13] to detect important nodes of the human body; the clothing regions are associated with important body parts and divided into nine parts: the torso, the left and right upper arms, the left and right lower arms, the left and right upper legs, and the left and right lower legs [14,15]. In the recognition process, upper-body detection [16] and face detection [17] are combined to estimate the upper-body region. On this basis, the human body is segmented using the GrabCut algorithm [18], and finally the appearance model proposed in [13] is used to estimate the pose of the human body, further dividing the important regions. The human body landmark recognition method uses body landmarks to detect the important parts of the clothing; even when the pose of the person in the image is complex, it can still detect the clothing features of the relevant parts. However, when some regions are occluded, the expressive power of this method is limited.

Clothing landmark recognition method

Clothing landmark recognition directly detects landmarks defined on the clothing itself and is a newer way to locate the important regions of clothing, as shown in Fig. 4. The DeepFashion database proposed by Liu et al. [19] defines a set of clothing landmarks corresponding to points on the clothing structure. For example, the landmarks for upper-body clothing are the left/right collar ends, the left/right cuffs, and the left/right hem; landmarks are likewise defined for lower-body and full-body clothing. Because some landmarks in an image are often occluded, the visibility of each landmark is also annotated. On this dataset, the Deep Fashion Alignment (DFA) network [20] was proposed to detect landmarks. DFA consists of three stages; in each stage the output of the previous stage is used as input, and the network uses VGG-16 as its backbone. In the first stage, DFA uses the original image as input to predict rough landmark locations and pseudo-labels, where the pseudo-labels represent properties such as clothing category and pose. In the second stage, the network predicts landmark offsets, and the pseudo-labels represent the local landmark offsets. The third stage uses two CNN branches with the same inputs and outputs; the choice of branch is determined by the pseudo-label from the second stage. DeepFashion also has shortcomings: each image contains only one piece of clothing, and each clothing category has only 4 to 8 landmarks. To address this, the DeepFashion2 [21] database was proposed, with richer landmark and annotation information. The Match R-CNN model proposed on this database consists of three parts: a Feature Network (FN), a Perception Network (PN), and a Matching Network (MN). The query image first passes through the FN, is then fed to the PN, where landmark positions are obtained through convolutional and deconvolutional layers, and is finally matched using the MN for clothing retrieval. These two large clothing databases provide strong support for future clothing retrieval research.
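As a rough illustration of how predicted landmarks can feed back into feature extraction, the sketch below pools local features around each landmark position on a CNN feature map and concatenates them with a global descriptor. This is only a schematic of the landmark-pooling idea, not the architecture of DFA or Match R-CNN; the window size, channel count, and landmark layout are assumptions.

```python
# Schematic landmark-guided feature pooling (an assumption-laden sketch,
# not the DFA or Match R-CNN implementation).
import torch
import torch.nn.functional as F

def landmark_pooled_features(feature_map, landmarks, window=3):
    """
    feature_map: (C, H, W) convolutional features of one image.
    landmarks:   (K, 2) landmark coordinates normalized to [0, 1] as (x, y).
    Returns a 1-D descriptor: global average pooling concatenated with
    average-pooled local windows around each landmark.
    """
    C, H, W = feature_map.shape
    global_desc = feature_map.mean(dim=(1, 2))            # (C,)

    local_parts = []
    half = window // 2
    for x, y in landmarks:
        # Map normalized coordinates to feature-map indices.
        cx = int(round(float(x) * (W - 1)))
        cy = int(round(float(y) * (H - 1)))
        x0, x1 = max(cx - half, 0), min(cx + half + 1, W)
        y0, y1 = max(cy - half, 0), min(cy + half + 1, H)
        local_parts.append(feature_map[:, y0:y1, x0:x1].mean(dim=(1, 2)))

    descriptor = torch.cat([global_desc] + local_parts)   # (C * (K + 1),)
    return F.normalize(descriptor, dim=0)

# Toy usage with a hypothetical 256-channel feature map and 6 landmarks
# (e.g., collar ends, cuffs, and hem positions predicted by a landmark network).
fmap = torch.randn(256, 28, 28)
lmks = torch.tensor([[0.3, 0.2], [0.7, 0.2], [0.1, 0.6],
                     [0.9, 0.6], [0.35, 0.95], [0.65, 0.95]])
desc = landmark_pooled_features(fmap, lmks)
print(desc.shape)   # torch.Size([1792])
```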
The clothing landmark recognition method is better at handling clothing deformation, occlusion, and fine details, and it greatly improves retrieval accuracy, but it requires a large amount of landmark labeling and annotation, which in turn demands professional knowledge of the clothing industry.

Attention map recognition method

The attention map recognition method uses the idea of the attention mechanism to extract image features from the salient or visually attended regions of the original image, and it usually needs to be combined with clothing attribute information to complete the retrieval task, as shown in Fig. 5. Clothing attributes are semantic attributes of objects or scenes shared across categories, such as color, texture, fabric, and style, so attributes can serve as latent and interpretable connections between image content and abstract labels. By constructing a latent space between fine-grained labels and low-level features, attributes help models find the inter-class and intra-class correlations between clothing categories. The attention map recognition approach does not require large numbers of human-labeled bounding boxes or landmark annotations; it extracts an effective image representation from the spatial locations of the salient regions, reducing annotation cost while maintaining retrieval performance. Most popular attention-based algorithms use image attributes as external information to locate the attention regions of the database images and use the database images as context to infer the attention of the query image. The attention model ignores the noisy background and extracts discriminative features for retrieval. Wang et al. [22] proposed a deep convolutional neural network system, TagCtxYNet, which includes a convolutional layer for image feature extraction and an attention layer for spatial attention modeling; it extracts an effective representation of the image by learning attention weights. Gu et al. [23] proposed a self-learned Visual Attention Model (VAM) to extract attention maps from clothing images. It includes two branch networks: a global branch based on a CNN, which extracts low-level image features to obtain the image feature map, and an attention branch based on a Fully Convolutional Network (FCN) [24], which predicts the salient regions of the image to obtain the attention map. An Impdrop module connects the two branches to obtain the attended feature map; the module introduces randomness between the attention map and the feature map, which reduces the risk of overfitting and allows the neural network to learn more robust features, improving the robustness of the model. Zheng et al. [25] proposed an Attention-based Region Transfer (ART) module to highlight the importance of the foreground in a coarse, class-agnostic way. The attention mechanism in the high-level features is used to extract the foreground objects of interest and mark them while the feature distributions are aligned; through multi-layer adversarial learning, effective cross-domain retrieval can be achieved without complex detection models. Attribute learning models usually treat attribute prediction as a multi-label classification problem, treating each attribute as a category.
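Since attribute learning is typically cast as multi-label classification, a minimal sketch of such an attribute head is shown below: a linear layer on top of pooled CNN features, trained with a sigmoid-based binary cross-entropy loss so that each attribute (color, texture, fabric, style, and so on) is predicted independently. The feature dimension, attribute count, and threshold are illustrative assumptions.

```python
# Minimal multi-label attribute head: one sigmoid output per attribute.
# The feature dimension (2048) and number of attributes (1000) are assumptions.
import torch
import torch.nn as nn

class AttributeHead(nn.Module):
    def __init__(self, feat_dim=2048, num_attributes=1000):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_attributes)

    def forward(self, pooled_features):
        # Raw logits; apply a sigmoid at inference time to get per-attribute scores.
        return self.classifier(pooled_features)

head = AttributeHead()
criterion = nn.BCEWithLogitsLoss()                 # treats every attribute independently

features = torch.randn(8, 2048)                    # a batch of pooled CNN features
targets = torch.randint(0, 2, (8, 1000)).float()   # multi-hot attribute labels

loss = criterion(head(features), targets)
loss.backward()

# At inference, attributes whose probability exceeds a threshold are predicted.
probs = torch.sigmoid(head(features))
predicted = probs > 0.5
```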
In fact, the clothing images used to train each model are associated with a series of attributes, such as "silk pocket shirts," but traditional attribute learning models ignore this sequence information. Although [22,23] combine the attention feature map with the image feature map to obtain a more effective feature representation and improve clothing retrieval, they lack local information and do not study the contextual connections between different parts of the clothing. Luo et al. [26] proposed an attention-based learning strategy for the clothing image retrieval task: by integrating global and local information, which provide complementary cues, clothing images can be described accurately, and a Long Short-Term Memory (LSTM) mechanism [27] is used to model the top-down spatial relationships between different parts of the clothing to obtain more discriminative feature representations. Luo et al. [28] proposed a Deep Multi-task Cross-domain Hashing (DMCH) method that jointly models the sequential correlation between clothing attributes and learns attention-aware visual features of clothing images to further enhance cross-domain clothing image retrieval.

Siamese network

Chopra et al. [29] first applied the contrastive loss function to a Siamese network based on deep neural networks. Kiapour et al. [7] used a Siamese network to predict whether two features represent the same item. Bell et al. [30] used the traditional contrastive loss function to design an end-to-end Siamese network for metric learning. Huang et al. [11] proposed a Dual Attribute-aware Ranking Network (DARN) for feature learning based on the Siamese network. In short, the contrastive loss of the Siamese network is the most widely used pairwise loss in metric learning. As shown in Fig. 6, the Siamese architecture has two parallel feature networks, followed by L2 normalization and a contrastive loss. Jia et al. [31] define the contrastive loss as shown in Eq. (1), which in its standard form is

L(x_a, x_b, y) = y * D(f(x_a), f(x_b))^2 + (1 - y) * max(0, m - D(f(x_a), f(x_b)))^2,   (1)

where f(.) is an embedding function that maps an image to a feature vector, y is a label indicating whether the two images belong to the same category (y = 1 for matching pairs), and D(.,.) is the distance between two feature vectors. The margin parameter m forces the distance between images of different categories to increase, which helps the learned ranking. On this basis, Xiong et al. [32] proposed a contrastive loss with bilateral distance margins, as shown in Eq. (2); a commonly used double-margin form is

L(x_a, x_b, y) = y * max(0, D(f(x_a), f(x_b)) - m_p)^2 + (1 - y) * max(0, m_n - D(f(x_a), f(x_b)))^2,   (2)

where m_p is the positive margin (PM) and m_n is the negative margin (NM). If the positive margin equals the negative margin, the loss is called a symmetric double margin; otherwise it is an asymmetric double margin. The positive margin allows images of the same clothing item to remain somewhat diverse, which is more reasonable than forcing them to be exactly the same. Wang et al. [33] optimized the contrastive loss by adding penalty constraints and proposed a robust contrastive loss function to improve the generalization ability of the learned network.

Triplet network and variants

The triplet loss [34] is widely used in triplet network models and has achieved good results in cross-domain clothing image retrieval. The structure of the triplet network is shown in Fig. 7: three parallel feature networks map images into feature vectors, the feature vectors are normalized, and the three embeddings are fed into the loss function.
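Before turning to the triplet loss in detail, a minimal PyTorch sketch of the pairwise contrastive loss of Eq. (1), together with the double-margin variant of Eq. (2), is given below. It operates on already-computed embeddings, and the margin values are arbitrary placeholders rather than settings from the cited papers.

```python
# Pairwise contrastive losses on precomputed embeddings (illustrative sketch).
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_label, margin=1.0):
    """Standard contrastive loss, Eq. (1): same_label is 1.0 for matching pairs."""
    dist = F.pairwise_distance(emb_a, emb_b)
    loss = same_label * dist.pow(2) + \
           (1.0 - same_label) * F.relu(margin - dist).pow(2)
    return loss.mean()

def double_margin_contrastive_loss(emb_a, emb_b, same_label,
                                   pos_margin=0.3, neg_margin=1.0):
    """Double-margin variant, Eq. (2): positive pairs may differ by up to pos_margin."""
    dist = F.pairwise_distance(emb_a, emb_b)
    loss = same_label * F.relu(dist - pos_margin).pow(2) + \
           (1.0 - same_label) * F.relu(neg_margin - dist).pow(2)
    return loss.mean()

# Toy usage: a batch of four street/shop embedding pairs with pair labels.
street = F.normalize(torch.randn(4, 128), dim=1)
shop = F.normalize(torch.randn(4, 128), dim=1)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])   # 1 = same clothing item

print(contrastive_loss(street, shop, labels))
print(double_margin_contrastive_loss(street, shop, labels))
```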
The triplet loss pushes images of different clothing items farther apart and pulls images of the same clothing item closer together. Unlike the contrastive loss, which considers the absolute distance of a pair, the triplet loss considers the relative distance between the positive pair and the negative pair of the same reference sample. It is defined in Eq. (3), whose standard form is

L = sum_i max(0, D(f(a_i), f(p_i))^2 - D(f(a_i), f(n_i))^2 + m),   (3)

where a_i, p_i, and n_i denote the reference (anchor) sample, the positive sample, and the negative sample, respectively; a_i and p_i share the same label, a_i and n_i have different labels, and m is the margin between the positive and negative pairs. Because each triplet contains a reference, a positive, and a negative sample, N images can generate O(N^3) triplets; even for a moderate number of images it is impossible to consider them all, and not all triplets provide equally useful information for training a model. Randomly selecting triplets is a very inefficient way to train a deep embedding network, which has inspired much recent work on mining difficult samples for training. Wang et al. [35] randomly selected triplets in the first 10 rounds of training and mined difficult triplets in each mini-batch thereafter; [36] used manual methods to mark difficult negative images among those assigned high confidence scores in each round. Simo-Serra et al. [37] analyzed the impact of hard positive and hard negative sample mining and found that combining the two improves discriminative ability. Song et al. [38] designed a mini-batch triplet loss that considers all possible triplet combinations within the mini-batch. Liu et al. [39] proposed a cluster-level triplet loss that considers the relations between the cluster center, the positive samples, and the nearest negative sample. Ge et al. [40] introduced a Hierarchical Triplet Loss (HTL) to address the random sampling problem in triplet training. These studies address how to mine difficult samples during training: using harder triplets not only speeds up the convergence of the learning algorithm but also lets the network learn clearer margins from the positive and negative samples of a given reference, improving the global structure of the embedding space. However, methods based on hard sample mining aim to find, among the existing training samples, the triplets that are hardest for the current network; this is essentially a greedy strategy, which makes the trained feature embedding network vulnerable to bad local optima [41]. Therefore, Zhao et al. [42] sought a method that intentionally generates hard triplets to optimize the whole network instead of greedily exploring existing samples for the current network only. As shown in Fig. 8, to generate such hard triplets they proposed a Hard Triplet Generation (HTG) network, which improves the network's ability to distinguish similar samples of different categories and to group dissimilar samples of the same category. Chopra et al. [43] proposed a novel Grid Search Network (GSN) to learn feature embeddings for clothing retrieval. Similar to the triplet network variants, this method treats the training process as a search problem: it finds matches for a reference image within a grid containing positive and negative images.
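As a concrete illustration of the triplet loss of Eq. (3) combined with the hard-sample-mining idea discussed above, the sketch below implements a common "batch-hard" strategy: within each mini-batch, every sample serves as an anchor, and its hardest positive and hardest negative are selected from the batch. This is a generic sketch of the technique, not the specific mining scheme of any single cited method, and the margin value is a placeholder.

```python
# Batch-hard triplet loss on precomputed embeddings (generic sketch).
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """
    embeddings: (B, D) L2-normalized embeddings.
    labels:     (B,) integer clothing-item identities.
    For each anchor, take the farthest positive and the closest negative in the batch.
    """
    dist = torch.cdist(embeddings, embeddings)                # (B, B) Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)          # (B, B) same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=embeddings.device)

    pos_mask = same & ~eye                                     # positives, excluding self
    neg_mask = ~same

    # Hardest positive: maximum distance among same-identity pairs.
    hardest_pos = (dist * pos_mask).max(dim=1).values
    # Hardest negative: minimum distance among different-identity pairs.
    dist_neg = dist.masked_fill(~neg_mask, float("inf"))
    hardest_neg = dist_neg.min(dim=1).values

    return torch.clamp(hardest_pos - hardest_neg + margin, min=0.0).mean()

# Toy usage: eight embeddings covering four clothing identities (two images each).
emb = torch.nn.functional.normalize(torch.randn(8, 128), dim=1)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(batch_hard_triplet_loss(emb, ids))
```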
The framework also uses reinforcement-learning-based strategies to learn a dedicated feature-vector transformation function instead of simply concatenating feature vectors; applied to feature embedding networks, it further improves clothing image retrieval accuracy. Kuang et al. [44] proposed a Graph Reasoning Network (GRNet) with a similarity pyramid, which uses global and local similarities to learn and compare the similarity between a query clothing image and the images in the clothing database.

Ensemble network

Ensembling is a widely used approach that trains multiple learners and combines them into a single model whose performance exceeds that of any individual model [45,46]. For deep metric learning, an ensemble network concatenates the feature embeddings learned by multiple learners; under the constraint of the distances between given image pairs, a better embedding space can usually be obtained. A good ensemble depends on the high performance of the individual learners and on the diversity among them, but in deep metric learning there has been relatively little research on architectures that generate diverse feature embeddings. The Siamese and triplet networks described above achieve good results: images of the same clothing category are drawn closer, and images of different categories are pushed apart. However, it is difficult to optimize this objective directly because of the number of possible samples, so hard sample mining is widely used, at considerable computational cost, on the subset of samples considered difficult. Moreover, difficulty is defined relative to a specific model: an overly complex model treats most samples as easy, while an overly simple model treats most samples as difficult, and neither situation is conducive to training. Since different samples have different difficulty levels, it is hard to define a model of moderate complexity and equally hard to select difficult samples comprehensively. To address these problems, researchers have proposed several methods, which we summarize and analyze here. The triplet loss and its variants described above mine difficult samples with a single model and cannot make full use of samples at different difficulty levels. Yuan et al. [47] therefore proposed the Hard-Aware Deeply Cascaded (HDC) embedding model, which uses a cascade of increasingly complex models to mine negative samples of different difficulty levels during training. Taking advantage of deeply supervised networks [48,49], they train the lower layers of the network with a contrastive loss to handle easier samples and the higher layers to handle harder samples. Compared with this multi-layer approach, the Boosting Independent Embedding Robustly (BIER) model [50] ensembles within a single high-dimensional embedding: it focuses on reducing correlation within one layer by dividing the high-dimensional embedding into several learners trained with Online Gradient Boosting (OGB). Successive learners are trained on reweighted samples, which greatly reduces the correlation between learners, thereby reducing correlation within the embedding and improving its robustness. In addition, compared with the HDC model, this method continuously reweights samples according to the loss function. Inspired by the BIER method, Xuan et al.
[51] proposed a different way to learn a robust, high-dimensional embedding space, creating independent output embeddings without reweighting the input samples. Because diversity of the feature embeddings is an important aspect of an ensemble network, Kim et al. [52] proposed an Attention-based Ensemble (ABE) model, illustrated in Fig. 9: (a) ordinary ensemble learning; (b) attention-based ensemble learning. The model ensembles multiple learners that attend to different aspects of the shared feature representation, which encourages diversity among the learned embeddings.

Clothing databases

Table 2 gives a detailed introduction to the clothing databases. In recent years, popular clothing databases have differed in size and in the types of annotations they provide. For example, Street2Shop and DARN contain 425K and 540K clothing images, respectively. They contain two types of images: (1) street images, i.e., photos of people actually wearing clothes under uncontrolled everyday conditions, and (2) shop images, i.e., clothing images from online stores, shot by professionals in more controlled environments. Because the clothing category tags are extracted from the metadata of images collected from online shopping sites, these labels contain many errors and inconsistencies. DeepFashion and ModaNet obtain labels by manually annotating clothing categories. Different types of annotations are also provided with these databases. DeepFashion is a large clothing database with comprehensive annotations and four benchmarks; among them, the Consumer-to-shop benchmark pairs street images with shop images, and each clothing item's folder contains a street image and several shop images, for a total of 33,881 clothing items and 239,557 clothing images. Each image is annotated with 4 to 8 clothing landmarks on functional regions (such as the collar) and other related fashion labels. These fashion landmark definitions are shared across all clothing categories, which makes it difficult for them to capture the rich variety of clothing images. In contrast, ModaNet's street images have a single-person mask but no landmarks. Unlike the datasets above, DeepFashion2 contains 491K images with 801K annotated items (landmarks, masks, and bounding boxes) and 873,000 image pairs, making it the most comprehensive clothing benchmark.

Experiment preparation

The hardware and software environment used in the experiments is: an Intel(R) Core(TM) i5-3570 CPU @ 3.40 GHz, an NVIDIA GeForce GTX 1070 8 GB graphics card, and 8 GB of memory. The operating system is Ubuntu 16.04, the programming language is Python, and the deep learning framework is PyTorch. As shown in Table 3, the datasets used in this paper are two subsets of the DeepFashion dataset, namely the In-shop Clothes Retrieval benchmark and the Consumer-to-shop Clothes Retrieval benchmark.

The evaluation of clothing retrieval

Although there are many clothing databases, the evaluation measures commonly used in clothing retrieval are precision, MAP, and Top-k accuracy. 1. The precision is shown in Eq. (4):

precision = A / B,   (4)

where A is the number of similar clothing items in the returned results and B is the total number of returned results. The precision rate therefore measures the proportion of correct results among all results returned by the retrieval model. 2. Although the precision rate statistically evaluates the proportion of correct search results, it does not evaluate the positions of the correct results in the ranking.
Therefore, the MAP value is used to evaluate the positions of the correct results. The MAP is shown in Eq. (5):

MAP = (1 / Q) * sum_{q=1}^{Q} AP(q),   (5)

where Q is the number of query images and AP(q) is the average precision of query q, i.e., the area under its precision-recall (P-R) curve, which captures how precision changes as recall increases. MAP reflects the overall performance of a retrieval method but gives little insight into the details of individual retrieval results. 3. The Top-k accuracy of cross-domain clothing retrieval is the most commonly used evaluation criterion, computed as shown in Eq. (6):

Accuracy@k = (1 / Q) * sum_{q} hit(q, k),   (6)

where Q is the number of query images and q denotes a specific query image. If at least one clothing image in the Top-k list returned for q matches the query, hit(q, k) is set to 1; otherwise it is set to 0. A short code sketch of these measures is given below.
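The following sketch implements the precision of Eq. (4), the average precision underlying the MAP of Eq. (5), and the Top-k hit rate of Eq. (6) for ranked result lists. The inputs are toy relevance flags and the helper names are our own, not functions from any benchmark toolkit.

```python
# Retrieval metrics for ranked result lists (illustrative helper functions).
def precision(relevance, k=None):
    """Eq. (4): fraction of relevant items among the first k returned results."""
    ranked = relevance if k is None else relevance[:k]
    return sum(ranked) / len(ranked)

def average_precision(relevance):
    """Average precision for one query: mean of precision@i at each relevant rank i."""
    hits, precisions = 0, []
    for i, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / hits if hits else 0.0

def hit_at_k(relevance, k):
    """Eq. (6) per query: 1 if any of the top-k results is relevant, else 0."""
    return int(any(relevance[:k]))

# Toy example: relevance flags of the ranked gallery items for three queries.
queries = [
    [0, 1, 0, 0, 1],   # query 1: relevant items at ranks 2 and 5
    [1, 0, 0, 0, 0],   # query 2: relevant item at rank 1
    [0, 0, 0, 0, 0],   # query 3: no relevant item returned
]

map_score = sum(average_precision(r) for r in queries) / len(queries)   # Eq. (5)
top3_acc = sum(hit_at_k(r, 3) for r in queries) / len(queries)          # Eq. (6)
print(map_score, top3_acc)
```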
From the comparison of panels (a) and (b) of Fig. 11, we find that deep metric learning performs better in same-domain clothing image retrieval than in cross-domain clothing image retrieval, because same-domain clothing images are less affected by the external environment. When mainly the inherent attributes of the clothing images matter, deep metric learning can use different loss functions combined with network models to achieve better matching of similar clothing. For cross-domain clothing image retrieval, however, deep metric learning with the contrastive loss, the triplet loss and its variants, or ensemble learning has to account for the influence of the background and other factors, and has to mine difficult samples. At present, ensemble learning is widely used, and it is very useful for mining samples of different difficulty levels to improve the accuracy of cross-domain clothing retrieval.

Clothing image retrieval requires feature extraction and similarity matching, and different research directions place their focus differently. Fig. 12 and Table 6 show the performance of deep network models in cross-domain clothing retrieval in recent years. It can be seen that the overall effect of deep metric learning is not as good as that of clothing critical region recognition for solving cross-domain retrieval problems, indicating that the main problem to be solved in cross-domain clothing retrieval is the recognition of the important clothing regions in the image. This is an important step when using convolutional neural networks to extract features; combining these features with clothing attributes can then achieve better retrieval results. Therefore, in the future, critical region recognition and deep metric learning can be combined into a new algorithm that needs no additional annotation information and achieves better cross-domain clothing image retrieval accuracy.

Conclusion

This paper reviews cross-domain clothing retrieval methods. First, it analyzes the common methods of critical region recognition and deep metric learning in cross-domain clothing retrieval. The surveyed results show that the attention map recognition method not only saves time and cost, but also further improves the effect of clothing retrieval. Deep metric learning is widely used and has achieved good results in both same-domain and cross-domain clothing retrieval. Finally, we find that critical region recognition can extract more of the important clothing detail features, while deep metric learning makes the extracted features more discriminative; both affect the effectiveness of cross-domain clothing retrieval. In summary, although cross-domain clothing retrieval has achieved good retrieval results using clothing critical region recognition and deep metric learning methods, many issues still need to be solved. They mainly include:
1. Attribute labeling problem: Most deep network models need the assistance of clothing attribute labels, i.e., they are supervised or weakly supervised methods. This requires extensive clothing labeling, which is time-consuming and labor-intensive work. How to reduce the clothing attribute labels, save costs, and still improve accuracy needs further research.
2. Model complexity problem: In recent years, research on cross-domain clothing image retrieval has mainly focused on ensemble methods. Although better results have been achieved, the long training time and large memory consumption brought by model ensembling are difficult to avoid. Therefore, how to reduce the high model complexity introduced by ensemble learning while preserving the retrieval performance is a major challenge.
3. Clothing databases: At present, the clothing databases contain different types of clothing distinguished by clothing category, such as dresses, jeans, and shirts. However, with the development of the fashion industry, different clothing combinations can produce ever-changing clothing styles, such as sporty, Japanese, or punk. The retrieval of clothing styles will be another important research direction in the future.
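To make the three evaluation measures used throughout this survey concrete, the following is a minimal Python sketch of precision, MAP, and Top-k accuracy for a retrieval run. It is an illustration only: the query identifiers, ranked result lists, and relevance sets are hypothetical placeholders, not data from any of the benchmarks discussed above.

```python
from typing import Dict, List, Set

def precision(retrieved: List[str], relevant: Set[str]) -> float:
    """Eq. (4): A / B, with A = relevant items returned and B = number of items returned."""
    if not retrieved:
        return 0.0
    a = sum(1 for item in retrieved if item in relevant)
    return a / len(retrieved)

def average_precision(retrieved: List[str], relevant: Set[str]) -> float:
    """AP(q): area under the P-R curve, accumulated over the ranked result list."""
    hits, score = 0, 0.0
    for rank, item in enumerate(retrieved, start=1):
        if item in relevant:
            hits += 1
            score += hits / rank          # precision at each relevant position
    return score / max(len(relevant), 1)

def mean_average_precision(runs: Dict[str, List[str]],
                           ground_truth: Dict[str, Set[str]]) -> float:
    """Eq. (5): MAP = (1/Q) * sum of AP(q) over all queries."""
    aps = [average_precision(runs[q], ground_truth[q]) for q in runs]
    return sum(aps) / len(aps)

def top_k_accuracy(runs: Dict[str, List[str]],
                   ground_truth: Dict[str, Set[str]], k: int) -> float:
    """Eq. (6): fraction of queries with at least one match in the top-k list (hit(q, k))."""
    hits = sum(1 for q in runs
               if any(item in ground_truth[q] for item in runs[q][:k]))
    return hits / len(runs)

if __name__ == "__main__":
    # Hypothetical ranked results for two street-image queries against a shop gallery.
    runs = {"q1": ["s3", "s7", "s1"], "q2": ["s9", "s2", "s4"]}
    ground_truth = {"q1": {"s1"}, "q2": {"s5"}}
    print("precision(q1) =", precision(runs["q1"], ground_truth["q1"]))
    print("MAP           =", mean_average_precision(runs, ground_truth))
    print("top-3 acc     =", top_k_accuracy(runs, ground_truth, k=3))
```

In practice the ranked lists would come from nearest-neighbor search over the embeddings produced by the retrieval network, and the relevance sets from the benchmark's ground-truth item identities.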
8,545.6
2022-05-13T00:00:00.000
[ "Computer Science" ]
Prostaglandin E2 mediates the late phase of ischemic preconditioning in the heart via its receptor subtype EP4 Ischemic preconditioning (IPC) describes a phenomenon wherein brief ischemia of the heart induces a potent cardioprotective mechanism against succeeding ischemic insult. Cyclooxygenase-2 (COX-2), a rate-limiting enzyme in prostanoid biosynthesis, is upregulated in the ischemic heart and contributes to IPC. Prostaglandin E2 (PGE2) protects the heart from ischemia–reperfusion (I/R) injury via its receptor subtype EP4. We sought to clarify the role of the PGE2/EP4 system in the late phase of IPC. Mice were subjected to four IPC treatment cycles, consisting of 5 min of occlusion of the left anterior descending coronary artery (LAD). We found that COX-2 mRNA was significantly upregulated in wild-type hearts at 6 h after IPC treatment. Cardiac PGE2 levels at 24 h after IPC treatment were significantly increased in both wild-type mice and mice lacking EP4 (EP4–/–). At 24 h after IPC treatment, I/R injury was induced by 30 min of LAD occlusion followed by 2 h of reperfusion and the cardiac infarct size was determined. The infarct size was significantly reduced by IPC treatment in wild-type mice; a reduction was not observed in EP4–/– mice. AE1-329, an EP4 agonist, significantly reduced infarct size and significantly ameliorated deterioration of cardiac function in wild-type mice subjected to I/R without IPC treatment. Furthermore, AE1-329 significantly enhanced the I/R-induced activation of Akt, a pro-survival kinase. We demonstrated that the PGE2/EP4 system in the heart plays a critical role in the late phase of IPC, partly by augmenting Akt-mediated signaling. These findings clarify the mechanism of IPC and may contribute to the development of therapeutic strategies for ischemic heart disease. Introduction Ischemic preconditioning (IPC) is well documented as a potent cardioprotective phenomenon [1][2][3]. It refers to a brief ischemic episode of the heart that induces a potent cardioprotective mechanism against succeeding ischemic insult. IPC consists of two phases, an early and a late phase [1][2][3]. The early phase develops within minutes of an initial ischemic episode and lasts for 2-4 h [1][2][3]. The late phase begins 12-24 h after the initial ischemic episode and persists for 72-96 h [1][2][3]. Because the late phase of IPC protects the heart from both myocardial infarction and from stunning for a substantial period of time [4,5], it has potential clinical relevance. Many previous studies have focused on the complex mechanisms underlying IPC; however, the mechanisms remain to be clarified in detail [6]. The current consensus is that the early phase of IPC is mediated by the activation of a preexisting signaling cascade [7], whereas the late phase results from the synthesis of cardioprotective mediators. Recent studies have elucidated that cyclooxygenase-2 (COX-2), a rate-limiting enzyme for the synthesis of prostanoids, is crucial for the late phase of IPC [8,9] and atherosclerotic plaque stabilization [10]. Accordingly, the late phase of IPC is abrogated by COX-2-selective inhibitors, such as NS-398 and celecoxib [11]. Upregulation of COX-2 during IPC results in increased synthesis of cardioprotective prostaglandins (PG), such as prostaglandins I 2 and E 2 (PGE 2 ) [8,9,11]. However, it remains to be determined which type(s) of PG participate in the cardioprotection afforded by COX-2 in the late phase of IPC. 
PGE 2 exerts various actions through each of its receptor subtypes (EP 1 , EP 2 , EP 3 , and EP 4 ) [12]. It has been reported that EP 4 mRNA is highly expressed in the hearts of several species, including mouse and human, which suggests that EP 4 may have some role in the heart. Indeed, studies have shown the upregulation of EP 4 expression levels in mouse models of myocardial infarction [13]. Importantly, our previous study demonstrated that PGE 2 exerted a potent cardioprotective effect via EP 4 in ischemia/reperfusion (I/R) injury [14]. In mice lacking EP 4 (EP 4 -/-), I/R injury was abrogated significantly in an in vivo I/R model and in an ex vivo perfused heart model. This indicated that PGE 2 's cardioprotective effect was mediated within the heart through the PGE 2 / EP 4 system. Furthermore, an EP 4 -specific agonist has been reported to impart a cardioprotective effect by suppressing the expression of macrophage chemoattractant protein-1, thus inhibiting the infiltration of macrophages into the ischemic area [15]. This indicates that EP 4 in macrophages also can participate in the cardioprotective mechanisms of the PGE 2 / EP 4 system. While a recent report showed that PGE 2 /EP 4 activation ameliorates hepatic I/R injury via the ERK1/2/ glycogen synthase kinase (GSK) 3β pathway [16], the role of the PGE 2 /EP 4 system in the IPC remains to be clarified. We hypothesized that PGE 2 , derived from COX-2, is upregulated by brief ischemic stress and contributes to the late phase of IPC via EP 4 . To test this hypothesis, we examined I/R injury in an EP 4 -/mouse model of IPC. Using a novel EP 4 -specific agonist, AE1-329, we sought a mechanistic explanation for the cardioprotective function of the PGE 2 /EP 4 system. Mice The details of the breeding and maintenance of animals used in the present study were previously reported [17]. Most EP 4 -/mice die postnatally as a result of patent ductus arteriosus or do not survive in the C57BL/6 background. Therefore, F2 progenies of surviving EP 4 -/mice and their wild-type litter mates were independently maintained in a mixed genetic background of 129/Ola and C57BL/6 [18]. All experiments were performed using 7-12-week-old male mice per the guidelines of Japan's Act on Welfare and Management of Animals and were approved by the Asahikawa Medical University Committee on Animal Research. IPC and I/R procedures The study mice were anesthetized with pentobarbital (60 mg/kg body weight, intraperitoneally) and secured in a supine position with the upper and lower extremities held on a heated table under a two-lead electrocardiogram (ECG) monitoring device. Tracheal intubation with a blunt 20-gauge polyethylene tube (Terumo, Tokyo, Japan) was performed under direct visualization of the tube through the tracheal wall at the upper border of the thyroid cartilage, which was surgically exposed. The tracheal tube was connected to a mechanical ventilator (SN-480-7; Shinano, Tokyo, Japan) and the mice were ventilated using a volumecontrolled ventilation mode (0.8 ml room air/breath at 110 breaths/min). After the left anterior thoracotomy, the heart was exposed and the pericardium dissected; the dissection table was tilted up and to the left to visually identify the left coronary artery. An 8-0 nylon suture was passed underneath the left anterior descending coronary artery (LAD) at a position 1 mm from the tip of the left auricle. 
A small length of polyethylene tube with a blunt edge (size 3; Hibiki, Tokyo, Japan) was threaded by two lines of suture and mounted vertically on the LAD, with a piece of rubber (1 g) attached to each end of the suture. The LAD was occluded by a suture supporting rubber weights and was reopened by manually releasing the weight loading. To confirm that the LAD was successfully occluded, the myocardium was checked for color change (from brick red to pale). In addition, ECG observations showed prolonged QRS duration, enlarged QRS voltage, and ST elevation after successful occlusion [19]. On Day 1, mice in the IPC group underwent the IPC treatment, consisting of four 5-min cycles of occlusion followed by 5 min of reperfusion of the LAD. Mice in the IPC sham group underwent the same surgery, apart from IPC treatment. On Day 2, 24 h after IPC treatment, the mice were subjected to 30 min of LAD occlusion followed by 2 h of reperfusion. To examine the effects of the EP 4 agonist, AE1-329 [20], I/R injury was induced by occluding the LAD for 30 min, without IPC treatment, followed by indicated times of reperfusion. AE1-329 (30 μg/kg) was injected subcutaneously 30 min before LAD occlusion. Reverse transcription polymerase chain reaction analysis for COX-2 mRNA The hearts of wild-type mice were excised 6 h after IPC or sham treatment and total RNA was prepared from ischemic and nonischemic areas using Isogen (Nippon Gene, Toyama, Japan). Total RNA (2 μg) was reverse-transcribed, as previously reported (10). The resulting cDNA was amplified by polymerase chain reaction (PCR) using primer sets corresponding to COX-2 mRNA, as follows: sense 5′-ACA CTC TAT CAC TGG CAC CC-3′ antisense 5′-GGA CGA GGT TTT TCCAC-CAG-3′ The quantity of PCR product was determined by real-time PCR analysis using Lightcycler apparatus (Idaho Technology, Idaho Falls, ID, USA) and DNA Master SYBR Green I (Roche Molecular Biochemicals, Mannheim, Germany) as previously described [20]. The values for the ischemic areas were expressed in relation to the nonischemic areas. PGE 2 enzyme immunoassay Hearts were excised immediately or 24 h after IPC treatment. Tissue samples were prepared from ischemic (anterior left ventricular wall) and nonischemic (posterior left ventricular wall) areas by homogenization in 0.1 M phosphate buffer containing 1 mM EDTA and 10 μM indomethacin. Prostaglandins were pre-extracted from tissue samples using silicabased octadecylsilane reverse-phase columns. The levels of PGE 2 in the samples were determined using an enzymelinked immunoassay kit (Cayman Chemicals, Ann Arbor, MI, USA), according to the manufacturer's instructions. Determination of the area at risk (AAR) and myocardial infarct size After 30 min of LAD occlusion (with or without IPC treatment) and following reperfusion of 2 h, the size of the AAR and the infarct size were determined using a double-staining technique [21]. Briefly, the right carotid artery was exposed via blunt dissection of the paratracheal muscles, and then cannulated with a catheter. AAR was determined by retrograde injection of 5.0% Evans Blue dye (300 μl) through the catheter while the LAD was occluded. By this procedure, all cardiac tissue except the AAR was stained blue. After the heart was excised and washed in ice-cold phosphatebuffered saline (PBS), it was frozen at − 80 °C for 5 min and cut into five slices. The slices were incubated with 1.0% triphenyltetrazolium chloride (TTC) at 37 °C for 5 min, followed by overnight immersion in PBS at 4 °C. 
Thus, the infarcted area was demarcated as a pale-to-white area, while viable tissue was stained red. AAR and infarct size were determined via planimetry using Photoshop 7.0 (Adobe, San Jose, CA, USA). The infarct size was calculated as ratio of the infarct (pale-to-white area) to AAR (all cardiac tissue except blue areas) since the perfusion territory of LAD was different in each mouse [21]. Echocardiographic examination of the cardiac function Echocardiography was performed before the I/R procedure, and after 2 h of reperfusion following 30 min of LAD occlusion, using a Vevo 660 machine (Primetech; VisualSonics, Toronto, Canada) with a 35-MHz probe. First, a B-mode image of the left ventricle (LV) was obtained in the shortaxis view at the level of the papillary muscles. Then, enddiastolic and -systolic LV dimensions were measured from the M-mode tracings. LV ejection fraction (LVEF) was calculated using the equation: LVEF = (LV diastolic volume − LV systolic volume)/LV diastolic volume) × 100, where LV diastolic and systolic volumes indicate left ventricular diastolic and systolic volumes, respectively. Western blotting analysis for Akt Hearts were excised after 15 min of reperfusion following 30 min of LAD occlusion without IPC treatment. The ischemic area was harvested, frozen in liquid nitrogen, and stored at − 80 °C until use. For detection of Ser473-phospholylated Akt (p-Akt) and Akt, the samples were homogenized in lysis buffer (20 mM Tris, 1 mM EDTA, 1 mM DTT, 1% Triton X-100, 2 mM Na 3 VO 4 , 2 mM NaF, 10 mM sodium pyrophosphate, protease inhibitor cocktail; pH 7.5). After centrifugation for 10 min at 12,000 rpm, protein concentrations in the supernatant were determined using the bicinchoninic acid (BCA) Protein Assay Kit (Pierce; Thermo Fisher Scientific, Waltham, MA, USA). The sample (40 μg protein) was electrophoresed using sodium dodecyl-sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a polyvinylidene difluoride transfer membrane (Millipore, Billerica, MA, USA) using a semidry transfer system (ATTO, Tokyo, Japan). After a blocking procedure using 5% nonfat dried milk for 1 h at room temperature, the membranes were incubated with the antibodies against p-Akt or Akt (×1000; Cell Signaling Technology) at 4 °C overnight. P-Akt and Akt were detected using horseradish peroxidase-conjugated secondary antibodies (GE Healthcare, Chicago, IL, USA) and an enhanced chemiluminescence reagent. Densitometry of the bands was analyzed using Photoshop as previously described [20]. Statistical analyses All data are expressed as mean ± standard error. The data were analyzed using the Student's t test for unpaired samples. P values of < 0.05 were considered statistically significant. Results Augmented COX-2 mRNA expression and PGE 2 production in the heart after IPC treatment The expression level of COX-2 mRNA in the heart of sham-operated wild-type mice was low and barely detectable. However, after IPC treatment, the expression level of COX-2 mRNA in the ischemic area increased remarkably at 6 h (Fig. 1), which indicates that the production of cardioprotective prostanoids (such as PGE 2 ) increases in the heart subjected to IPC treatment. Immediately after the IPC treatment, PGE 2 levels in the ischemic area were similar to those of the nonischemic control areas (Fig. 2), which indicated that there was no increase in PGE 2 production at that time point. 
However, the PGE 2 level in the ischemic area increased significantly at 24 h after IPC treatment, compared with that of the nonischemic control area, in both wild-type and EP 4 -/hearts to a similar degree (Fig. 2), indicating that PGE 2 may play a role in the late phase of IPC. PGE 2 levels in the nonischemic areas at 24 h after IPC treatment did not differ significantly between wild-type and EP 4 -/hearts (5.24 ± 0.73 pg/mg [n = 9] and 4.10 ± 0.38 pg/mg [n = 6], respectively). The late phase of IPC observed in wild-type hearts disappears in EP 4 -/hearts To determine the role of PGE 2 via EP 4 in late-phase IPC, we performed the I/R procedure after IPC treatment in wild-type and EP 4 -/mice (Fig. 3A). In wild-type mice, the infarct size of the heart after IPC treatment was significantly smaller than in sham-operated mice (Fig. 3B, C), indicating that the late phase of IPC works effectively in wild-type mice. In contrast, there was no significant difference in the infarct size between the IPC-treated and sham-operated groups in EP 4 -/mice, indicating that the late phase of IPC disappeared completely in EP 4 -/mice. In sham-operated groups, there were no significant differences in both infarct size and AAR between wild-type and EP 4 -/mice (Fig. 3C). These results clearly show that the PGE 2 /EP 4 system plays a critical role in the late phase of IPC. AE1-329, an EP 4 agonist, reduces the infarct size of wild-type hearts after the I/R procedure without IPC treatment To clarify whether insufficient production of PGE 2 in the heart was responsible for the lack of significant difference Fig. 1 Upregulation of COX-2 mRNA after the IPC treatment. Wildtype hearts received a brief ischemic stress from four 5-min cycles of LAD occlusion, followed by 5 min of reperfusion. At 6 h after IPC treatment, tissue samples were prepared from ischemic and nonischemic areas of the heart, and then examined for the expression of COX-2 mRNA, using A RT-PCR or B quantitative RT-PCR. The values were expressed as percentages of the nonischemic area, representing the mean values from two independent experiments. n = 3. *P < 0.05 vs. sham-operated in infarct sizes between sham-operated wild-type and EP 4 -/hearts, we used AE1-329 to activate EP 4 in wild-type hearts that did not receive IPC treatment. AE1-329 (30 mg/ kg) was injected 30 min before the 30-min LAD occlusion and the infarct size was determined after 2 h of reperfusion. AE1-329 significantly reduced the infarct size in wildtype mice (Fig. 4), whereas no such effect was observed in EP 4 -/mice (data not shown). In contrast to the efficient activation of EP 4 and the resultant reduction in the infarct size in the heart with the IPC treatment (Fig. 3C), this result indicates that the endogenous production of PGE 2 in hearts without IPC treatment was insufficient to activate EP 4 , at least during the 2 h of the I/R procedure. AE1-329 did not significantly affect blood pressure (data not shown). We used echocardiography to examine the effects of AE1-329 pretreatment on the function of wild-type hearts at 2 h of reperfusion following 30 min of LAD occlusion. In the hearts pretreated with AE1-329, movement of the anterior wall was well retained, compared with that of control hearts (Fig. 4C). Additionally, AE1-329 significantly prevented a reduction in the LVEF after the I/R procedure (Fig. 4D), which indicated that the activation of EP 4 protected the heart from I/R injury, both histologically and functionally. 
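The quantities compared in these experiments are the infarct size expressed as a percentage of the AAR, reported as mean ± SEM, with group differences assessed by an unpaired Student's t test. A minimal sketch of that arithmetic is given below; the planimetry areas and group sizes are hypothetical illustrations, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical planimetry areas (pixels) per mouse: (infarct area, area at risk).
ipc_treated = [(1200, 5200), (950, 4800), (1100, 5100), (1300, 5600)]
sham        = [(2600, 5000), (2950, 5400), (2400, 4700), (2800, 5300), (2700, 5100)]

def infarct_percent(groups):
    """Infarct size of each heart, expressed as a percentage of its area at risk (AAR)."""
    return np.array([100.0 * infarct / aar for infarct, aar in groups])

ipc_pct, sham_pct = infarct_percent(ipc_treated), infarct_percent(sham)
print(f"IPC:  {ipc_pct.mean():.1f} +/- {stats.sem(ipc_pct):.1f} % of AAR")
print(f"sham: {sham_pct.mean():.1f} +/- {stats.sem(sham_pct):.1f} % of AAR")

# Unpaired Student's t test for the two groups (significance threshold p < 0.05).
t_stat, p_value = stats.ttest_ind(ipc_pct, sham_pct)
print(f"unpaired t test: t = {t_stat:.2f}, p = {p_value:.4f}")
```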
AE1-329 enhances I/R-induced activation of Akt To further investigate the cardioprotective mechanism of the PGE 2 /EP 4 system, we examined the activation of Akt, a pro-survival serine/threonine kinase. We measured the levels of phosphorylated Akt (p-Akt), an activated form of Akt. -/mice after 2 h of reperfusion following 30 min of LAD occlusion. Tissues that were stained blue (Evans Blue dye) represent nonischemic areas; tissues stained red (TTC) within the ischemic area are live tissues. Unstained tissues appear pale-to-white and represent necrotic myocardium. C Cardiac infarct size and AAR were measured in wild-type and EP 4 -/hearts. The values presenting the infarct size are expressed as percentages of the AAR. n = 4-5. *P < 0.01 vs. sham-operated group Fig. 4 AE1-329, an EP 4 agonist, reduces infarct size and ameliorates impaired function in wild-type hearts after I/R. A Representative photomicrographs of LV sections from control and AE1-329-pretreated mice after 2 h of reperfusion following 30 min of LAD occlusion. B Cardiac infarct size and AAR were measured. The values presenting the infarct size are expressed as percentages of the AAR. n = 6. *P < 0.01 vs. control. C Representative M-mode tracings of the LV from control and AE1-329-pretreated mice after 2 h of reperfusion following 30 min of LAD occlusion. D LVEF was measured. n = 5-6. *P < 0.05 vs. control Although AE1-329 alone did not alter the level of p-Akt, it significantly enhanced the I/R-induced increase in p-Akt level in wild-type hearts (Fig. 5), the enhancement of Akt activation by AE1-329 was not observed in EP 4 -/hearts (Fig. 5). Both the molecular weight of p-Akt and Akt is 60 kD, suggesting that the upper bands in p-Akt were nonspecific. The levels of total Akt were not different between wild-type and EP 4 -/hearts, irrespective of the presence or absence of AE1-329. These results suggested that Akt signaling underlies the cardioprotective mechanism of the PGE 2 /EP 4 system in I/R. Meanwhile, I/R treatment increased p-Akt/Akt ratio in EP4 −/− mice, suggesting that Akt activation might be increased via pathways other than PGE 2 /EP 4 system. Discussion Our results show that the PGE 2 /EP 4 system plays a critical role in the heart in the late phase of IPC, partly by augmenting Akt-mediated signaling. IPC treatment significantly increased the cardiac expression level of COX-2 mRNA and the production of PGE 2 , thus inducing a potent late phase of IPC in wild-type hearts. However, in EP 4 -/hearts, the late phase of IPC did not establish, which indicates that the PGE 2 /EP 4 system critically mediates the late phase of IPC. We have previously reported that PGE 2 protects the heart from I/R injury via EP 4 . We found that the infarct size was significantly larger in EP 4 hearts than that in wild-type hearts at 24 h of reperfusion following 1 h of LAD occlusion. However, the present study found no significant difference in the infarct size between sham-operated wild-type and EP 4 mice at 2 h of reperfusion following 30 min of LAD occlusion. This suggests that PGE 2 production in the heart, without the IPC treatment, was insufficient to activate EP 4 at this time point; effective activation of EP 4 and a resultant decrease in the infarct size were observed in wild-type hearts with IPC treatment. Activation of EP 4 by exogenous AE1-329 significantly reduced the infarct size and rescued the functional deterioration induced by I/R at the same time point. 
Taken together, these results support a hypothesis that PGE 2 originating from COX-2, upregulated by a brief ischemic stress, contributed critically, via EP 4 , to the late phase of IPC. Several reports have demonstrated that EP 4 signaling provides protection from myocardial I/R injury [14,15,22]. This is the first study to demonstrate that COX2 upregulation by IPC attenuated ischemic injury through PGE 2 /EP 4 signaling. Further studies are warranted to clarify the role of PGE 2 /EP 4 signaling in the late phase of IPC. Although a cardioprotective role of the PGE 2 /EP 4 system in I/R injury has been reported previously [14], its precise mechanism remained to be clarified. During myocardial I/R injury, EP 2 and EP 4 play a cardioprotective role after ischemia through the activation of the cyclic AMP/protein kinase A signaling pathway [23]. It is known that phosphatidylinositol 3-kinase (PI3-K)/Akt signaling plays an important antiapoptotic and cardioprotective role in cardiac I/R injury [6,[24][25][26]. Indeed several agents capable of protecting the heart from I/R injury activate PI3-K/Akt signaling when given at reperfusion, such as insulin [27], erythropoietin [28], and bradykinin [29]. A recent study demonstrated that miR-486-5p, a cardioprotective microRNA that can activate the phosphotidylinositol 3-kinase/Akt signaling pathway, was dysregulated in rat models of acute myocardial infarction and in patients with acute myocardial infarction [30]. It has been reported that PGE 2 activates Akt via EP 4 in several types of cells, such as human embryonic kidney cells expressing EP 4 [31], glomerular epithelial cells [32], and human lung carcinoma cells [33]. This suggests the possibility that the PGE 2 /EP 4 system is also able to activate Akt during cardiac I/R, thus protecting the heart. In the present study, AE1-329 significantly augmented the I/R-induced activation of Akt in wild-type hearts, while no such effect The ratio of p-Akt to Akt is expressed as a percentage of that of the nonischemic wild-type control heart. n = 3-5. *P < 0.05 vs. respective AE1-329-untreated control. **P < 0.01 was observed in EP 4 -/hearts. This suggests that Akt signaling underlies the cardioprotective mechanism of the PGE 2 / EP 4 system in I/R. Additionally, several studies demonstrated the utility of EP 4 as a promising therapeutic target in cardiac diseases including not only ischemic heart disease but also inflammatory heart disease [34] and cardiac hypertrophy [35]. Further investigations are necessary to extend the clinical benefits of EP 4 agonists in cardiac disease. This study has several limitations. We did not evaluate cellular necrosis and apoptosis although the signals downstream of the phosphotidylinositol 3-kinase/Akt pathway confers necrosis and apoptosis. Further studies should be considered to clarify the molecular mechanism underlying the cardioprotective role of the EP 4 /Akt pathway. In conclusion, we demonstrated that the PGE 2 /EP 4 system in the heart plays a critical role in the late phase of IPC, partly by augmenting Akt-mediated signaling. Our findings clarify the mechanism of IPC and could contribute to the development of therapeutic strategies for ischemic heart disease. Conflict of interest The authors declare that they have no competing interests. 
5,630.8
2022-12-16T00:00:00.000
[ "Biology", "Medicine" ]
Light-induced renormalization of the Dirac quasiparticles in the nodal-line semimetal ZrSiSe In nodal-line semimetals linearly dispersing states form Dirac loops in the reciprocal space, with high degree of electron-hole symmetry and almost-vanishing density of states near the Fermi level. The result is reduced electronic screening and enhanced correlations between Dirac quasiparticles. Here we investigate the electronic structure of ZrSiSe, by combining time- and angle-resolved photoelectron spectroscopy with ab initio density functional theory (DFT) complemented by an extended Hubbard model (DFT +U +V). We show that electronic correlations are reduced on an ultrashort timescale by optical excitation of high-energy electrons-hole pairs, which transiently screen the Coulomb interaction. Our findings demonstrate an all-optical method for engineering the band structure of a quantum material. The application of topological concepts to condensed matter, which is central to the description of the quantum spin Hall effect [1] and topological insulators [2], has been fruitful in the classification of gapless topological phases [3][4][5]. Dirac and Weyl fermions emerge as low energy excitations, characterized by a discrete number of symmetry-protected nodes [6]. The nodes are responsible for magneto-transport properties that are unknown in topologically trivial compounds [7]. The nodes also exhibit vanishing density of states, which alter the screening of the Coulomb interaction, thus requiring corrections to the Fermi liquid model. Logarithmic corrections play an important role in two-dimensional (2D) compensated semimetals such as graphite [8] and graphene [9]. In the case of Dirac and Weyl particles, the reduced screening enhances correlations, affects charge transport [10] and makes the system prone to instabilities towards new ordered phases [11][12][13]. The long-range Coulomb interaction has been theoretically discussed in nodal-line semimetals (NLSMs) [14], where the intersection of linearly dispersing states forms 1D trajectories in reciprocal space [4,15,16]. It has been shown that in the low-doping regime of NLSMs, even if the Dirac lines are moved away from the Fermi level, weak screening affects the transport properties [17]. Similarly to the Dirac and Weyl semimetals, correlations can drive the materials to symmetrybreaking ground states in the bulk [11] and at the surface [15]. NLSMs are realized in several families of square-net materials [18]. ZrXY (X= Si, Sn, Ge; Y = S, Se, Te) is among the most studied, owing to the great flexibility of the chemical composition, and the high crystal quality which allows the quantum limit in transport to be reached at relatively low magnetic field. The appearance of oscillation in the magnetic response [19] and in the resistivity [20][21][22], and the Berry phases extracted from these experiments, bear fingerprints of the non-trivial topology [20]. Interestingly, magneto-transport measurements under high magnetic field have revealed an enhancement of the effective mass, which is interpreted as a signature of electronic correlations [23]. These findings are supported by theoretical models predicting a large excitonic instability in both ZrSiS [24] and in ZrSiSe [25], as a consequence of the reduced screening, combined with a large degree of electron-hole symmetry and with a finite density of itinerant charge carriers [24]. 
Several angle-resolved photoelectron spectroscopy (ARPES) studies have addressed the band structure of ZrSiSe [26][27][28][29][30], but no experimental evidence of band renormalization has been reported so far. In this letter, we show that electronic correlations in ZrSiSe can be modified at the ultrashort timescale by an optical perturbation. The intense laser pulse creates electrons and holes far from the Dirac nodes, which efficiently screen the Coulomb interaction, leading to a renormalization of the Dirac quasiparticles (QP) dispersion probed by time-resolved ARPES (tr-ARPES). This interpretation is supported by ab initio DFT calculations complemented by an extended Hubbard model (DFT +U +V ) [31]. High quality single crystals of ZrSiSe grown by vapor transport were cleaved in situ under ultra-high vacuum (UHV) conditions. We measured the band structure by ARPES at the MAESTRO beamline 7.0.2 of the Advanced Light Source in Berkeley, with overall energy and momentum resolutions equal to 0.01Å −1 and 30 meV. We performed measurements at the tr-ARPES endstation [32] of Harmonium [33] at EPFL arXiv:1912.09673v1 [cond-mat.str-el] 20 Dec 2019 in Lausanne. We used s-polarized light at 36.9 eV photon energy, corresponding to the 23rd harmonic generated in argon. The optical excitation was s-polarized and centered at 1.6 eV (780 nm), with an absorbed fluence of 0.38 mJ/cm 2 estimated using optical properties from Refs. [34,35]. The temporal overlap between pump and probe was determined by the observation of the laser assisted photoelectric effect (LAPE) for p-polarized optical excitation. All measurements were performed at 80 K. The band-structure calculations were performed on a fully relaxed slab composed of 5 layers of ZrSiSe. We evaluated the on-site U and inter-site V Coulomb terms ab initio and selfconsistently using the recently developed extended ACBN0 functional [31] using the Octopus code [36]. The Hubbard U was evaluated for the d orbitals of Zr, and we restricted ourselves to the nearest-neighbor interaction for the inter-site V . We employed fully relativistic pseudopotentials with a 15 × 15 k-point grid and a grid spacing of 0.158Å −1 to sample the two-dimensional Brillouin zone. More details about geometry optimization and a comparison with DFT+U and hybrid functionals can be found in the supplemental material [37][38][39][40][41][42][43][44][45][46][47][48]. The band structure of ZrSiSe hosts two kinds of nodal lines; one is protected by the non-symmorphic symmetry and is located far from the Fermi level, E F , along the bulk X − R direction [26,29]. The second, more relevant for the screening and the transport properties, is formed by the crossing of the conduction band (CB) and valence band (VB), as schematized in Fig. 1(a). Near E F , the bulk bands are approximately linear, with a small density of states, which is responsible for the relatively poor screening of the long-range Coulomb interaction. The goal of our study is to transiently enhance the screening by optically exciting high-energy electrons-hole pairs far from the Dirac nodes. The ARPES data of Fig 1 provide a background for the tr-ARPES experiment. Figure 1(b) and (c) compare the theoretical and experimental Fermi surface, which consists of two diamond-shaped contours, corresponding to CB (inner) and VB (outer). A surface state (SS) forms short arcs around the X point of the surface Brillouin zone (SBZ) [red line in Fig. 1 (b)] [26]. Dashed black lines in Fig. 1(c) indicate the cuts shown in Fig. 1(d)-(f). 
Figure 1(d) displays the bands along the Γ − M direction, where VB and CB disperse linearly over a broad energy range, in good agreement with the literature [49]. Figure 1(e) and (f) show cuts parallel to the M − X − M direction, intercepting the bulk and the surface states, respectively. Figure 2 illustrates the main result of our study. The tr-ARPES data measured at various delay times provide experimental evidence for the light-induced renormalization of the Dirac QP. Figure 2(a) shows the dispersion of the occupied bulk bands along the same direction of Fig. 1(d), for a photon energy of 36.9 eV. Due to the different matrix elements, the intensity of the inner CB state is strongly suppressed. We find that standard DFT calculation fails to reproduce the band dispersion, in particular the second VB state which reaches its maximum 1 eV below E F . The agreement between theory and experiment is improved by DFT +U , with U = 5 eV, but this value appears unphysically large, and cannot be reproduced by ab initio methods (see the supplemental material for a detailed comparison [37]). Due to the partially delocalized character of correlations, the Coulomb interaction is better accounted for by an extended Hubbard model, with U = 1.47 eV and V = 0.33 eV. The blue lines in Fig. 2(a) show the corre- sponding calculated band structure. We stress that U and V are computed fully ab initio and they are not free parameters. These moderate values are compatible with several recent observations, for the similar compound ZrSiS: mass enhancement under intense magnetic field [23], large Landé factor in the magnetic response [50], reduced screening of excitons in the low energy optical conductivity [35]. Figure 2(b) shows that 50 fs after optical excitation, electrons populate the bulk bands above E F , and their dispersion seems to deviate from the theoretical predictions. The broadening of the bulk bands, which reflects their intrinsic k z dispersion, hampers a quantitative analysis of this effect. Therefore, we turn our attention to the sharper surface state, whose unperturbed dispersion is shown in Fig. 2(d) along the M − X − M direction. Red lines show the DFT +U +V calculation, which nicely reproduces not only the surface state dispersion, but also the dispersion of the VB bands at lower energies. Upon optical excitation, in Fig. 2(e), electrons are excited well above E F , and the band dispersion appears kinked. The band velocity is reduced with respect to the equilibrium one, and it is now better described by the simple DFT calculation shown as yellow lines. This effect will be analyzed quantitatively in Fig. 3. Here we notice that the timescale of the band renormalization is within the temporal resolution of our setup. Already 300 fs after the arrival of the pump pulse the band dispersion has recovered the equilibrium slope [ Fig. 2(f)], only with the electronic system at a larger effective temperature. The ultrafast timescale suggests a purely electronic origin for the light-induced band renormalization. In particular, the observed kink in the optically excited band structure cannot be ascribed to the coupling to a phonon, which would affect the dispersion in limited energy windows both above and below E F corresponding to the energy of the specific mode. By contrast, the observed change in dispersion extends well above the largest phonon energy (50 meV) in ZrSiSe [51,52]. Theory and experiment are quantitatively compared in Fig. 3. We simulate the ARPES intensity in Fig. 
3(a) starting from the DFT +U +V calculation, broadened to account for the experimental resolution. The band occupancy is determined by the effective electronic temperature (T*) estimated from a Fermi-Dirac fit to the experimental data 200 fs after optical excitation (T* ∼ 600 K) [37]. The experimental dispersion is determined from Lorentzian fits to the momentum distribution curves (MDCs) extracted from the data, as shown in Fig. 3(c) for selected energies. The QP energies are shown in Fig. 3(d), where different colors encode the corresponding delay times during and immediately after optical excitation. The flattening of the band is well captured by the change in dispersion between the DFT +U +V (red) and DFT (yellow) calculated bands, as shown in Fig. 3(e). We plot in Fig. 3(f) the momentum renormalization ∆k, i.e. the difference between the experimental (markers) and DFT (black line) dispersion with respect to the dispersion calculated by DFT +U +V . For E − E F ≥ 150 meV ∆k is negative and as large as −0.01 ± 0.003Å −1 . The renormalization is largest during optical excitation (purple markers), and suddenly decreases after the pump pulse (red markers). The momentum renormalization ∆k is found to vary with the pump fluence, as discussed in the supplemental material [37]. Finally, we address the band dispersion near E F in the shaded area in Fig. 3(f). The positive ∆k observed before the arrival of the pump pulse is a trivial apparent deviation from the band dispersion due to the sharp Fermi-Dirac cutoff. This effect is well-known in ARPES and is stronger at low temperature [53]. Interestingly, ∆k remains positive during optical excitation, which indicates that electrons have not yet reached a thermal distribution. According to our analysis of the temporal dynamics of the Fermi-Dirac distribution [37], for the fluence used in the data set of Fig. 3, electrons fully thermalize via electron-electron scattering after 200 fs (green markers). Only when a hot thermalized electron distribution is established does the broader Fermi distribution cancel the positive ∆k, and the band follows the DFT +U +V dispersion both around E F and at larger energies. This observation establishes a clear hierarchy between the timescales of the band renormalization and of electron-electron scattering and thermalization in ZrSiSe. In summary, by combining time-resolved ARPES and ab initio DFT +U +V calculations we have shown that correlations can be optically modified in ZrSiSe, resulting in a QP renormalization. The equilibrium band structure is well reproduced by moderate on-site U = 1.47 eV and an inter-site V = 0.33 eV Coulomb terms, consistent with the reduced electronic screening of the Dirac QP. Upon optical excitation, the enhanced screening by high-energy electrons and holes produces a measurable change in the band dispersion. Previous tr-ARPES experiments have revealed ultrafast changes in the band dispersion of strongly correlated electron systems such as the high-temperature cuprate superconductors (HTSCs). They show changes of the coherent spectral weight [54], of the QP scattering rate [55] or of the QP effective mass [56]. These effects have been interpreted in terms of an optically induced reduction of the phase coherence, a mechanism which is specific to HTSC. Here we have shown that the QP dispersion can be controlled by purely electronic means, by changing the electronic screening of the Coulomb interaction. 
Our results are the first experimental evidence of a more general mechanism, originally proposed to control the electronic dispersion of correlated metal oxides [57], which our study extends to the Dirac QP in NLSMs. Our findings demonstrate how ultrafast optical doping can be used as an alternative way of controlling quasi-particle properties, by tuning many-body interactions on a fs timescale. In particular, this could be exploited in other topological materials with linearly dispersing Dirac and Weyl states, where electronic correlations are believed to enhance the electron mobility [58] or to induce Lifshitz transitions [59].
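As an illustration of the MDC analysis described above (Lorentzian fits to momentum distribution curves, from which the quasiparticle dispersion and the momentum renormalization Δk are extracted), here is a minimal, self-contained Python sketch. The synthetic MDC, peak position, width, and noise level are hypothetical stand-ins for the experimental data, not values from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(k, k0, gamma, amplitude, background):
    """Single Lorentzian peak plus a constant background, the usual MDC line shape."""
    return amplitude * gamma**2 / ((k - k0)**2 + gamma**2) + background

# Synthetic MDC at one binding energy: peak at k0 = 0.30 1/Angstrom with noise (hypothetical).
rng = np.random.default_rng(0)
k = np.linspace(0.20, 0.40, 120)
mdc = lorentzian(k, 0.30, 0.012, 1.0, 0.05) + 0.02 * rng.normal(size=k.size)

# Fit and extract the quasiparticle momentum for this energy cut.
p0 = [0.29, 0.02, 0.8, 0.0]                      # initial guess: k0, gamma, amplitude, background
popt, pcov = curve_fit(lorentzian, k, mdc, p0=p0)
k0_fit, k0_err = popt[0], np.sqrt(pcov[0, 0])
print(f"fitted MDC peak position: k0 = {k0_fit:.4f} +/- {k0_err:.4f} 1/Angstrom")

# Repeating the fit at each energy gives k0(E); comparing it with a calculated dispersion
# k_calc(E) yields the momentum renormalization Delta k(E) = k0(E) - k_calc(E).
```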
3,183.6
2019-12-20T00:00:00.000
[ "Physics" ]
Antioxidant and Anti-Aging Potential of Indian Sandalwood Oil against Environmental Stressors In Vitro and Ex Vivo: Distilled from the heartwood of Santalum album, Indian sandalwood oil is an essential oil that historically has been used as a natural active ingredient in cosmetics to condition and brighten the skin. It has been documented to exhibit antioxidant, anti-inflammatory, and anti-proliferative activities. Here, we investigated the protective and anti-aging effects of Indian sandalwood oil in scavenging reactive oxygen species (ROS) in HaCaT cells and in human skin explants after exposure to oxidative stress. Using a probe DCFH-DA, the antioxidant capacity of Indian sandalwood oil was monitored following exposure to blue light at 412 nm and 450 nm or cigarette smoke. The anti-aging effect of sandalwood oil was also explored in human skin explants via the assessment of collagenase level (MMP-1). We reported that Indian sandalwood oil possessed antioxidant potential that can scavenge the ROS generated by a free radical generating compound (AAPH). Subsequent exposure to environmental stressors revealed that Indian sandalwood oil possessed superior antioxidant activity in comparison to vitamin E (alpha tocopherol). Using human skin explants, this study demonstrated that Indian sandalwood oil can also inhibit the pollutant-induced level of MMP-1. The findings indicated that Indian sandalwood oil can potentially serve as a protective and anti-aging active ingredient in cosmetics and dermatology against environmental stressors.

Introduction

Essential oils and plant parts containing essential oils have long been valued for their ability to have a positive effect on human health. Essential oils are broadly classified as terpenes, and their mono-oxygenated analogues are classified as terpenols. These molecules are produced by many plants in nature, with two main types of terpenes being found: monoterpenes, molecules with a C10 carbon frame, and sesquiterpenes, with a C15 carbon frame. Indian sandalwood oil is the essential oil obtained by steam distillation of the aromatic heartwood of Santalum album [1], consisting predominately of sesquiterpenes. The oil is renowned for its olfactory characteristics, having a soft, warm, and woody odor. As a result of this odor, the oil has found its way into many applications such as perfumery, attars, and incense [2].

Human Skin Explants Preparation

Human skin explants were obtained, with research consent, from residual skin following mammoplasty and trimmed to remove any subcutaneous fat. Eight-millimeter skin biopsies were taken from skin composed of dermis and epidermis using a sterile dermal biopsy punch (Kai Medical, Dallas, TX, USA) and rapidly placed into a 24-well plate. Then, the skin was maintained under air-liquid interface culture conditions in skin culture medium, Gibco™ DMEM, without glutamine or phenol red (Thermo Fisher, Waltham, MA, USA). The skin was maintained at 37 °C in a 5% CO2 atmosphere for sufficient adaptation time.

Experimental Design

The experimental design is shown in Figure 1.
Blue Light Source

Each blue light lamp (412 nm and 450 nm) consisted of 10 identical LEDs (Honglitronic, Guangzhou, China) emitting continuous visible radiation embedded in a reflector, which was covered by a transparent glass window. A single peak with a maximum wavelength of either 412 nm or 450 nm could be observed for the lamps. The aperture on the light source was 4.5 cm × 4.5 cm. The array at the surface of exposure was approximately 10 cm × 10 cm, at an approximate distance of 5 cm from the light source. A thermopile detector (Gentec EO USA Inc., Lake Oswego, OR, USA) was used to measure the precise intensity of the light source in Watt/cm2 at the level of the investigational site. The time of exposure was adjusted to ensure that 1 J/cm2 of blue light was delivered to the investigational site.

Cigarette Smoke

A transparent exposure chamber designed to accommodate the well plates and a cigarette connected to an air pump were used (Tarsons, Kolkata, India). During the 30 min exposure, three cigarettes were used. The cigarette was connected to a pump that mimics the aspiration of the smoker, and the smoke released in the chamber corresponds to exhaled smoke. One cigarette was lit at the start of the exposure and then every 10 min, for a total of three cigarettes per 30 min exposure.

Ozone Exposure

Ozone was produced using an ozone generator. The skin explants were exposed to 1.6 parts per million (ppm) for a total exposure time of 30 min.

MTT Assay

Following incubation with MTT, the MTT solution was removed and isopropanol was added to dissolve the formazan crystals. Then, the absorbance was measured at 570 nm using a Synergy HTX multimode microplate reader (BioTek, Winooski, VT, USA).
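The 570 nm absorbance readings are typically converted into the percent-viability values used later (for example, the 70% viability threshold) relative to a medium-only control. The short sketch below illustrates that calculation; the absorbance values and the helper function are hypothetical, not data or code from the study.

```python
import numpy as np

def percent_viability(a570_treated, a570_control):
    """Cell viability relative to the medium-only control, from background-corrected A570 readings."""
    return 100.0 * np.mean(a570_treated) / np.mean(a570_control)

# Hypothetical triplicate absorbance readings (arbitrary units).
control = np.array([0.82, 0.79, 0.85])                    # cells with culture medium only
treated = {"0.2% oil": np.array([0.65, 0.68, 0.63]),
           "0.3% oil": np.array([0.48, 0.51, 0.45])}

for label, a570 in treated.items():
    v = percent_viability(a570, control)
    flag = "above the 70% threshold" if v > 70.0 else "below the 70% threshold"
    print(f"{label}: {v:.1f}% viability, {flag}")
```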
Intracellular Reactive Oxygen Species (ROS) Scavenging Activity Assay

Approximately 1 × 10^5 HaCaT cells were seeded per well in different 24-well plates. After the adaptation incubation period of 16 h at 37 °C, the Indian sandalwood oil was added to the cells at three different concentrations (0.2%, 0.1%, and 0.05%) for 24 h at 37 °C with 5% CO2. The positive control alpha-tocopherol (vitamin E) (Sigma-Aldrich, St. Louis, MO, USA) was also tested at three different concentrations: namely, 15 units (corresponding to 2%), 7.5 units (corresponding to 1%), and 3.75 units (corresponding to 0.5%). Then, the cells were treated with DCFH-DA (Sigma, St. Louis, MO, USA) for three hours at 37 °C and 5% CO2. After the DCFH-DA treatment, the cells were washed three times with 1X PBS. Then, the 96-well plates were exposed to 412 nm and 450 nm blue light at 1 J/cm2 or to cigarette smoke as an environmental stressor, or left unexposed in a dark environment as a control. Following exposure, lysis solution (with 2% Triton X-100) was added to all 96 wells. Then, the well plates were shaken, and the fluorescence (excitation 485/88 nm, emission 528/30 nm) was measured using the Synergy HTX multimode microplate reader (BioTek, Winooski, VT, USA).

Collagenase Inhibition Assay via Matrix Metalloproteinase-1 (MMP-1)

Following skin biopsy and adaptation time, the test item was applied and incubated for 24 h at 37 °C and 5% CO2. Then, the 24-well plates were exposed to cigarette smoke or ozone (1.6 ppm) for 30 min. The remaining plates were left unexposed in a dark environment as a control. Then, the supernatant was transferred to an antibody-coated well plate to assess the level of matrix metalloproteinase-1 (MMP-1) following the manufacturer's instructions of the human MMP-1 ELISA Kit (Sigma, St. Louis, MO, USA). The increase in color intensity was monitored by spectrometry at 450 nm (BioTek, Winooski, VT, USA).

Statistical Analysis

Experiments were independently repeated in biological triplicate. Error bars in the graphical data represent the standard error of the mean (SEM). A one-way ANOVA was used for the statistical analysis using the software GraphPad Prism Version 7 (GraphPad Software Inc., San Diego, CA, USA), and statistical significance was claimed when the p-value was lower than 0.01 (p < 0.01).

MTT (3-(4,5-Dimethylthiazol-2-yl)-2,5-Diphenyltetrazolium Bromide) Assay for Cell Viability and Proliferation

Prior to the determination of the efficacy of Indian sandalwood oil to protect against the oxidative stress induced in HaCaT cells, an MTT assay was performed to establish the optimum concentration of sandalwood oil that would not induce any defect in cell cycle or cell toxicity. Thus, HaCaT cells were treated with different concentrations of Indian sandalwood oil for either 24 h or 48 h (Figure 2A,B).
At 24 h post treatment, the highest concentrations of Indian sandalwood oil tested (10%, 2%, 0.6%, 0.5%, 0.4%, and 0.3%) revealed a cell viability lower than 70% compared to the cells that were treated with cell culture media only. At concentrations of ≤0.2% of Indian sandalwood oil, a cell viability higher than 70% could be observed. In parallel, the cell count of HaCaT cells treated for 48 h with Indian sandalwood oil was also evaluated (Figure 2B). At concentrations of 10%, 2%, 0.6%, 0.5%, 0.4%, and 0.3%, the cell count dropped below 70% of the control. However, the cells treated with 0.2% of Indian sandalwood oil or less showed more than 70% cell viability, which was reminiscent of what was previously observed for the 24 h treatment time. Based on these results, all subsequent efficacy experiments performed in HaCaT cells and requiring 24 h or 48 h of treatment with Indian sandalwood oil used 0.2% as the highest concentration.

Cellular Antioxidant Assay

Then, the potential of Indian sandalwood oil to protect against the oxidative stress induced by the peroxyl initiator AAPH was monitored in HaCaT cells. As determined in the MTT assay, the highest concentration of Indian sandalwood oil used was 0.2%, and the lowest concentration used was 0.001%. Quercetin, a known antioxidant, was used as a positive control. The highest concentration used was 0.0075%, while the lowest concentration used was 0.00023% (Figure 3A,B).
Indian sandalwood oil showed antioxidant potential at the five highest concentrations tested (0.2%, 0.1%, 0.07%, 0.05%, and 0.025%). The antioxidant potency of Indian sandalwood oil was equivalent to what was observed with the three highest concentrations of the positive control quercetin. These results suggest that at a concentration of 0.2%, Indian sandalwood oil has an antioxidant activity as potent as that of the positive control quercetin at 0.0075%. The IC50 was determined to be 0.03% for Indian sandalwood oil and 0.002% for quercetin (Figure 3B).
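The IC50 values quoted above are the kind of number that falls out of a dose-response fit. The sketch below shows one conventional way such a value could be obtained, by fitting a four-parameter logistic (Hill) curve with SciPy; the concentrations, responses, and the 4PL model itself are assumptions for illustration, since the paper does not state which fitting procedure was used.

```python
# Illustrative IC50 estimation from a dose-response series (placeholder data).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response as a function of concentration (%)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.2, 0.1, 0.07, 0.05, 0.025, 0.0125, 0.006, 0.001])  # % oil (v/v), assumed
ros  = np.array([12, 18, 25, 38, 55, 70, 85, 97])                      # % of AAPH-induced ROS remaining, assumed

params, _ = curve_fit(four_pl, conc, ros, p0=[5.0, 100.0, 0.03, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"fitted IC50 ≈ {ic50:.3f}% (Hill slope {hill:.2f})")
```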
Intracellular Reactive Oxygen Species (ROS) Scavenging Activity Assay The antioxidant potential of Indian sandalwood oil to protect against oxidative stress induced by environmental stressors was subsequently monitored. Three different environmental stressors were used in this study: namely, blue light at 412 nm, blue light at 450 nm, and cigarette smoke. Both wavelengths of blue light were tested at a dose of 1 J/cm². Three concentrations of Indian sandalwood oil showing good cellular antioxidant efficacy from the MTT assay were used for this assay: namely, 0.05%, 0.1%, and 0.2%. Alpha-tocopherol, a lipophilic antioxidant, was used as the positive control at concentrations of 0.5%, 1%, and 2% (Figure 4). A sharp induction in the levels of ROS was observed basally when the untreated HaCaT cells were exposed to the stressors: blue light at 412 nm, blue light at 450 nm, and cigarette smoke. When the HaCaT cells were treated with the three concentrations of Indian sandalwood oil (0.05%, 0.1%, and 0.2%), a significant reduction in the oxidative stress induced by either blue light (412 nm or 450 nm) or cigarette smoke could be observed. Indeed, 66% (p < 0.0001), 73% (p < 0.0001), and 76% (p < 0.0001) decreases in the level of ROS were observed in cells treated with 0.05%, 0.1%, and 0.2% of Indian sandalwood oil, respectively, prior to exposure to blue light at 412 nm. In cells exposed to blue light at 450 nm, a similar trend was observed, with 60% (p = 0.0012), 68% (p = 0.0005), and 75% (p = 0.0002) decreases at the same concentrations of Indian sandalwood oil, respectively. Moreover, in HaCaT cells that were exposed to cigarette smoke, Indian sandalwood oil significantly protected against the ROS induced, although the decrease was more modest than for blue light at 412 nm and 450 nm. Thus, at the highest concentration (0.2%) of Indian sandalwood oil, only a 28% (p = 0.001) decrease in the level of ROS was detected. While all three concentrations (0.5%, 1%, and 2%) of the positive control, alpha-tocopherol, significantly reduced the ROS induced by blue light at 450 nm, only the two highest concentrations of alpha-tocopherol (1% and 2%) induced a protective effect against the ROS induced by cigarette smoke when compared to the untreated samples. However, none of the three concentrations of alpha-tocopherol significantly decreased the ROS induced by blue light at 412 nm. Interestingly, in HaCaT cells treated with the highest concentration of alpha-tocopherol prior to exposure to blue light at 412 nm, an increase in the levels of ROS was observed compared to untreated but exposed cells. This suggests that an interference might be occurring between the DCFH-DA assay, alpha-tocopherol, and blue light at 412 nm.
Collagenase Inhibition Assay (MMP-1) Since the protective effect of Indian sandalwood oil against the oxidative stress induced by environmental stressors was demonstrated, whether Indian sandalwood oil could also protect against the detrimental effect of pollution was investigated by monitoring a more downstream effector, MMP-1 (Figure 5). Figure 5. Assessment of the level of MMP-1 after exposure of human skin explants to different environmental conditions. The graph represents the ability of the Indian sandalwood oil to prevent the secretion of MMP-1 after being exposed to various environmental stressors. The skin explants were exposed to cigarette smoke (30 min) and ozone (30 min). The supernatant was collected 24 h after exposure and was assayed for MMP-1. This assay was performed in triplicate. *** represents p value < 0.001; **** represents p value < 0.0001.
There was a drastic increase in the levels of MMP-1 in the untreated samples exposed to the environmental stressors when compared to the untreated and unexposed samples. When exposed to ozone, Indian sandalwood oil inhibited the expression of MMP-1 by 88% (p < 0.0001). Indian sandalwood oil also displayed an inhibitory effect on the levels of MMP-1 when exposed to cigarette smoke. For this exposure, the difference between the untreated and treated samples was 70% (p = 0.0003), which was statistically significant. Discussion Indian sandalwood has been in use for over 2500 years, and its commercial usage dates back to 300 BCE [2,6]. It has been previously reported for its antioxidant, anti-tyrosinase, anti-inflammatory, anti-proliferative, and anti-microbial properties [10–12,23,24]. Despite reports of the pharmacological activity of sandalwood, only a few studies have evaluated its benefits as a cosmetic ingredient. In particular, the potential benefits of Indian sandalwood oil in protecting the skin against the detrimental effect of environmental stressors have never been reported. In this study, we demonstrated that Indian sandalwood oil showed a significant protective effect against pollutants such as cigarette smoke and ozone in skin explants. Indian sandalwood oil was also shown to protect HaCaT cells against the oxidative stress induced by environmental provocation by both blue light and cigarette smoke. We further demonstrated that Indian sandalwood oil decreases collagenase MMP-1 levels in skin explants. Cellular antioxidant assays were performed, revealing the inhibitory effect of Indian sandalwood oil. In this study, 2,2′-azobis(2-amidinopropane) dihydrochloride (AAPH) was utilized as a free-radical-generating compound that could mimic oxidative stress in the HaCaT cells. This was used to induce a constant rate of radical generation, allowing sufficient time to assess the free-radical-scavenging activity [25]. Quercetin was chosen as the positive control. Our study showed that Indian sandalwood oil has antioxidant capacity equal to that of the established antioxidant quercetin, with an 85–95% decrease in the level of ROS observed at the highest concentration used for both test products. Then, HaCaT cells were subjected to environmental oxidative stressors, utilizing cigarette smoke and subsequently two distinct wavelengths of blue light. Cigarette smoke was shown to impart a far greater stress on HaCaT cells compared to the two wavelengths of blue light used. Indeed, cigarette smoke constitutes a ubiquitous environmental hazard as a source of human exposure to chemically active pollutants. Ozone was used similarly in the ex vivo experiment. Both stressors generate high levels of ROS, including hydroxyl radicals and hydrogen peroxide, which can be linked to damaging pathological processes in the skin [26,27]. The protective effect of Indian sandalwood oil was more modest in cells exposed to cigarette smoke compared to blue light. This could be explained by the fact that, in the different experiments performed in this study, cigarette smoke induced approximately twice as much ROS as blue light. Thus, even the highest concentration tested (0.2%) of Indian sandalwood oil, as well as the alpha-tocopherol control at 2% (15 units), was likely overwhelmed by the elevated amount of ROS induced. Exposure to UV, in conjunction with the natural functional changes due to aging, causes dermal fibroblasts to increase the production of MMP-1 [28,29].
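As a rough illustration of how inhibition percentages like those quoted above can be derived from the ELISA described in the methods, the sketch below interpolates hypothetical OD450 readings against an invented standard curve and computes percent inhibition relative to the stressed, untreated explant; the standard-curve range and the sample values are assumptions, not the kit's actual specifications.

```python
# Back-of-envelope MMP-1 quantification and percent inhibition (placeholder numbers).
import numpy as np

od_standards   = np.array([0.05, 0.12, 0.25, 0.48, 0.90, 1.60])  # OD450 of MMP-1 standards (assumed)
conc_standards = np.array([0, 31.25, 62.5, 125, 250, 500])        # pg/mL (assumed standard range)

def od_to_conc(od):
    # np.interp expects increasing x; OD rises with concentration in a sandwich ELISA.
    return np.interp(od, od_standards, conc_standards)

samples = {"unexposed": 0.10, "ozone_untreated": 1.35, "ozone_sandalwood": 0.30}
conc = {k: od_to_conc(v) for k, v in samples.items()}

inhibition = 100 * (conc["ozone_untreated"] - conc["ozone_sandalwood"]) / conc["ozone_untreated"]
print({k: round(v, 1) for k, v in conc.items()}, f"inhibition ≈ {inhibition:.0f}%")
```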
Matrix metalloproteinase-1 (MMP-1) is a zinc- and calcium-dependent endopeptidase that is synthesized and released by both dermal fibroblasts and keratinocytes. It works by breaking down collagen found in the extracellular matrix (ECM). MMP-1 is released as an inactive proenzyme, which is later activated via proteolytic cleavage, leading to the release of its active form. Elevated MMP-1 activity caused by upregulation has been implicated in the degradation of the extracellular matrix and leads to premature photoaging of human skin as well as skin cancer [30]. In skin tissue or cultured fibroblasts, both the active and inactive forms of MMP-1 are released into the culture media [28]. However, it should be noted that the high-throughput ELISA assay used only quantified the pro-domain and the active form of MMP-1 but did not quantify the inactive form of MMP-1 [28]. Notably, Indian sandalwood oil was able to decrease the level of matrix metalloproteinase-1 (MMP-1) in the ex vivo skin by a significant amount. As such, we can posit that, by producing a significant drop in MMP-1, Indian sandalwood oil likely acted on the activated form of the MMP-1 enzyme. Thus, it can be concluded that Indian sandalwood oil effectively prevented an increase in the activity of at least the activated form of MMP-1. In the future, more studies could be performed to further explore the pathways by which the activity of MMP-1 was disrupted by Indian sandalwood oil. It is well documented that environmental factors contribute to vulnerability in the skin, and this in turn leads to premature skin aging [16]. Accelerated skin aging is driven largely by the over-expression of ROS and the upregulation of MMPs [16]. Indeed, as the skin ages, and due to the imbalance in the expression of free radicals, the turnover of keratinocytes in the epidermis drops, leading to a subsequent decrease in the collagen found in the skin. These changes have been reported to loop back and contribute to an increased production of free radicals [31]. In addition, another factor in premature skin aging is the increased expression of MMPs and a decrease in the level of their inhibitors (TIMPs), which have been found to be related to the upregulation of ROS [32]. Therefore, the level of ROS induced in the skin plays a central role in the responses leading to premature skin aging. Here, we demonstrated that Indian sandalwood oil protected keratinocytes against the oxidation caused by blue light at 412 nm and 450 nm and by cigarette smoke, with an approximate decrease of 75% in ROS activity observed with blue light and an approximate decrease of around 30% in ROS activity observed for cigarette smoke. This decrease in ROS and MMP-1 suggests that Indian sandalwood oil likely possesses skin anti-aging properties. These skin elements that are affected by pollutants could be targeted by a cosmetic ingredient such as Indian sandalwood oil in the prevention of early skin aging. Ultimately, skin aging caused by environmental stress is a crucial element to consider in maintaining good skin health. As reported by Velasco et al. [15], cosmetic ingredients may work by either preventing contact between the skin and pollutants or by triggering biochemical processes that hinder the oxidative primary product. To achieve an ideal cosmetic formulation, both characteristics are heavily sought after.
These would give rise to cosmetic products that can lower short-term damage such as inflammation and upregulate the signaling pathways that increase metabolic activity and cell differentiation [15]. In this study, we reported promising preliminary results attesting to the protective and anti-aging effect of Indian sandalwood oil. Thus, in order for it to be a successful candidate as a cosmetic ingredient, further efficacy tests should be carried out to further assess the functional characteristics of Indian sandalwood oil. Furthermore, in vivo assessment of the dermatological activity and usage levels of the oil remains to be carried out to examine the long-term and short-term effects of exposure to environmental exposomes. Conclusions We described in this study a novel property of Indian sandalwood oil as a protective active ingredient against the detrimental effect of environmental stressors in vitro and on human skin ex vivo. The antioxidant efficacy of Indian sandalwood oil was explored using a cellular antioxidant assay, whereby the test substance significantly reduced the level of ROS induced by the peroxyl initiator AAPH. Consistent with the latter results, Indian sandalwood oil was also shown to protect HaCaT cells against the oxidative stress induced by environmental stressors such as blue light at 412 nm and 450 nm and cigarette smoke. The protective effect of Indian sandalwood oil against the detrimental effect of pollution (cigarette smoke and ozone) was also monitored using a more complex model, human skin explants. Our results have shown that Indian sandalwood oil was capable of significantly decreasing the level of MMP-1 induced by either cigarette smoke or ozone. The results presented in this study suggest that Indian sandalwood oil has the potential to serve as an active ingredient in dermatology for protection against environmental stressors, in addition to its already existing fragrance and aromatherapy applications. Following more in-depth studies, Indian sandalwood oil may also serve as a promising candidate for use as a multipurpose ingredient in cosmetic care. Informed Consent Statement: Consent was acquired from surgical patients. Data Availability Statement: The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to privacy reasons.
7,389.2
2021-06-19T00:00:00.000
[ "Biology" ]
PLANT-Dx: A Molecular Diagnostic for Point-of-Use Detection of Plant Pathogens Synthetic biology based diagnostic technologies have improved upon gold standard diagnostic methodologies by decreasing cost, increasing accuracy, and enhancing portability. However, there has been little effort in adapting these technologies toward applications related to point-of-use monitoring of plant and crop health. Here, we take a step toward this vision by developing an approach that couples isothermal amplification of specific plant pathogen genomic sequences with customizable synthetic RNA regulators that are designed to trigger the production of a colorimetric output in cell-free gene expression reactions. We demonstrate our system can sense viral derived sequences with high sensitivity and specificity, and can be utilized to directly detect viruses from infected plant material. Furthermore, we demonstrate that the entire system can operate using only body heat and naked-eye visual analysis of outputs. We anticipate these strategies to be important components of user-friendly and deployable diagnostic systems that can be configured to detect a range of important plant pathogens. S ynthetic biology has recently contributed to multiple advances in point-of-use nucleic acid diagnostic (PoUD) technologies. 1 These technologies leverage isothermal strategies to amplify target nucleic acids, with new advances in detection of these targets using strand-displacement, 2 or CRISPR-based methods 3,4 that produce fluorescent readouts, or RNA toehold switches that control translation of enzymes that produce colored compounds. 5 Overall these technologies can be used for sensitive detection of pathogen-derived nucleic acids in complex matrices in field-deployable formats that significantly improve upon current laboratory-intensive PCRbased approaches. To date, most synthetic biology diagnostic efforts have focused on detecting pathogens that impact human health. However, there is great potential to leverage these technologies for detecting plant pathogens. In the United States alone, plant pathogens account for an estimated $33 billion annual loss in agricultural productivity. 6 Worldwide, losses in crop yields due to plant pathogens can be more severe and contribute to food scarcity and famine. 7 Among plant pathogens, wide host range viral species such as cucumber mosaic virus (CMV) and potato virus Y (PVY) are particularly devastating, as they infect hundreds of plant species, including agriculturally important species such as beans, maize, and potatoes. 8 PoUDs are an important component of strategies to combat the impacts of these pathogens, as timely identification can lead to the rapid deployment of methods for mitigation and containment. However, current plant pathogen PoUD strategies use a range of approaches, including antibody-based detection, which lacks sensitivity, 9 or isothermal amplification, which by itself does not generate convenient visual outputs, that are amenable to field use. To address these shortcomings, we sought to develop a PoUD system called PLANT-Dx (Point-of-use LAboratory for Nucleic acids in a Tube) that combines the sensitivity of isothermal strategies to amplify target plant pathogen nucleic acids, 10 with the designability of synthetic gene regulatory systems 11 and the robustness of cell-free gene expression reactions 12 to produce colorimetric outputs that are visible to the naked-eye (Figure 1a). 
The overall approach of our PLANT-Dx system is to convert plant pathogen nucleic acids into constructs encoding designed synthetic RNA regulators that, when produced, activate an RNA genetic switch controlling the expression of an enzyme that catalyzes a color change. The RNA genetic switch thus serves as a signal processing layer that filters the noisy output of isothermal amplification products, 10 only triggering the production of color for correctly amplified on-target viral sequences. PLANT-Dx works by first using recombinase polymerase amplification (RPA) 13 (Figure S1) to amplify a target region of a plant pathogen genome to produce a double-stranded DNA construct that encodes the synthesis of a synthetic RNA regulator called a Small Transcription Activating RNA (STAR) (Figure S1). 11 These DNA templates are then used to direct the transcription of STARs within a cell-free gene expression reaction, 12 which, when produced, activate the transcription of a STAR-regulated construct encoding the enzyme catechol 2,3-dioxygenase (CDO) 14 (Figure 1a). Only when the pathogen is present is the RPA product made, leading to expression of CDO, which in turn converts the colorless catechol compound into a visible yellow product. Here we show that this design can detect CMV in infected plant lysate with low picomolar sensitivity, and can be configured to detect nucleic acids from different viral genomes without crosstalk. In addition, we show that this design requires only simple mixing and body heat to induce a color change, which we anticipate will facilitate deployment to field settings. ■ RESULTS To develop PLANT-Dx, we first sought to create pathogen-detecting molecular sensors based upon the Small Transcription Activating RNA (STAR) regulatory system. 11 This transcription activation system is based upon conditional formation of a terminator hairpin located within a target RNA upstream of a gene to be regulated: alone, the terminator hairpin forms and interrupts transcription of the downstream gene, while in the presence of a specific trans-acting STAR the hairpin cannot form and transcription proceeds (Figure S2). Previous work showed that the STAR linear binding region can be changed to produce highly functional and orthogonal variants. 15 Here we sought to utilize this by replacing the linear binding region with sequences derived from genomic pathogen RNA to create new viral sensors. To do this, we utilized the secondary structure prediction algorithm NUPACK to identify regions within the genomes of CMV and PVY that are predicted computationally to be unstructured for target RNA design (Note S1). 16 Once viral STARs were designed, reporter DNA constructs were then created in which these target sequences were placed downstream of a constitutive E. coli promoter and upstream of the CDO reporter gene coding sequence. We next designed RPA primer sets to amplify and transform a pathogen's genomic material into a DNA construct capable of synthesizing a functional STAR. Specifically, a T7 promoter and antiterminator STAR sequence were added to the 5′ end of a reverse RPA primer, which, when combined with a forward primer, amplified an approximately 80 nucleotide (nt) viral sequence to produce a double-stranded DNA encoding the designed STAR which contained the target viral sequence.
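A minimal sketch of the reverse-primer assembly step just described is given below: the reverse RPA primer carries a T7 promoter and the STAR antiterminator sequence as a 5′ overhang, so that the amplicon doubles as a transcription template. The target region, primer length, and the antiterminator placeholder are illustrative assumptions, not the sequences used in the paper.

```python
# Sketch of building RPA primers whose reverse primer carries a T7 promoter + STAR overhang.
T7_PROMOTER = "TAATACGACTCACTATAG"          # canonical T7 RNA polymerase promoter
STAR_ANTITERMINATOR = "NNNNNNNNNNNNNNNNNN"  # placeholder for the STAR antiterminator domain (hypothetical)

def reverse_complement(seq: str) -> str:
    complement = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}
    return "".join(complement[base] for base in reversed(seq.upper()))

def design_rpa_primers(target: str, primer_len: int = 30):
    """Return (forward, reverse) primers flanking `target`; the reverse primer carries
    the T7 promoter and antiterminator overhang on its 5' end."""
    forward = target[:primer_len]
    reverse = T7_PROMOTER + STAR_ANTITERMINATOR + reverse_complement(target[-primer_len:])
    return forward, reverse

viral_target = "ATG" + "ACGT" * 20   # stand-in for an ~80 nt unstructured region of the CMV genome
fwd, rev = design_rpa_primers(viral_target)
print(len(viral_target), fwd, rev, sep="\n")
```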
In this way, we anticipated that combining the CDO-encoding reporter construct and RPA amplified DNA into a cell-free gene expression reaction 12,17 would lead to the production of a detectable colorimetric output signal. We began by investigating the ability of PLANT-Dx to detect the presence of in vitro transcribed (IVT) RNA designed to mimic specific target regions of CMV. We observed rapid color accumulation in samples containing 1 nM of purified transcription product versus the no-RNA negative control (Figure 1b). To test for modularity, we further developed sensors and primer sets for the detection of PVY, and confirmed function with the same assay (Figure 1c). The specificity of our system was also tested by interrogating the crosstalk between the product of various RPA reactions and noncognate molecular sensors. Specifically, we tested color production from cell-free reactions containing the reporter DNA construct for CMV with the PVY IVT-derived RPA product, as well as the converse, and found color production only between cognate pairs of input RPA and reporter constructs (Figure 1d). We next interrogated the inherent limit of detection of our system through titration of input IVT products (Figure 1e) and found it to be between 44pM and 4.4pM of input IVT RNA material. This demonstrated our ability to detect the presence of target nucleic acid sequences down to the picomolar range. Surprisingly, this sensitivity is lower than that previously reported for RPA 3 and is most likely due to loss in amplification efficiency from the addition of the long overhangs present within our primer sets. We next set out to determine whether this methodology was able to differentiate between plant lysate obtained from healthy plants versus lysate from plants infected with CMV virus. To test this, we input 1 μL of CMV-infected plant lysate, or an equivalent volume of a noninfected plant lysate control, into the PLANT-Dx reaction system. Here, we observed rapid color change only from reactions with infected lysate when compared to healthy lysate (Figure 2a). Interestingly, the leak in the system was reduced when challenged with plant extract in comparison to previous results using IVT product. This is most likely due to a slightly inhibitory effect plant lysate may have on the efficiency of the RPA reaction and presents a positive benefit of reducing off target signal production. Despite the great benefits derived from colorimetric enzymes, their usage dictates that any leak in the system will eventually result in the complete conversion of substrate into a visible signal. Therefore, it is important to determine a time point cutoff in which to analyze data for the presence or absence of the color signal to minimize false positives associated with expression leak. In this work using PLANT-Dx for detection of CMV, we suggest utilizing 150 min (Figure 1a). To demonstrate that this assay can be monitored by eye, reactions were carried out and filmed within a 31°C incubator ( Figure 2b). With the naked eye, we detected accumulation of a yellow color only within reactions that were incubated with infected lysate, while no such production was observed in reactions with uninfected lysate. A notable drawback of current gold standard diagnostics is the need for peripheral equipment for either amplification or visualization of outputs. Even simple heating elements for controlled incubations are a major hindrance during deployment within the field and can be cost-prohibitive. 
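The 150 min read point suggested earlier in this section amounts to a simple decision rule. The sketch below spells that rule out with invented absorbance values standing in for the plate-reader time courses; the threshold is an assumption, not a value from the paper.

```python
# Illustrative positive/negative call at the suggested 150 min cutoff (placeholder data).
CUTOFF_MIN = 150
THRESHOLD = 0.2  # assumed absorbance margin above the no-template background

timecourses = {
    "CMV-infected lysate": {120: 0.05, 150: 0.45, 180: 0.90},
    "healthy lysate":      {120: 0.01, 150: 0.04, 180: 0.12},
}

for sample, series in timecourses.items():
    call = "POSITIVE" if series[CUTOFF_MIN] > THRESHOLD else "negative"
    print(f"{sample:>22}: absorbance({CUTOFF_MIN} min) = {series[CUTOFF_MIN]:.2f} -> {call}")
```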
We sought to exploit the flexible temperature requirements of both RPA and cell-free gene expression reactions by attempting to run our diagnostic reactions for CMV-infected lysate using only body heat. This resulted in a clear yellow color only in the presence of infected lysate, with no major difference observed between these reactions and those previously incubated within a thermocycler and observed with a plate reader (Figure 2c). ■ DISCUSSION Here we have demonstrated a novel scheme for combining isothermal amplification and custom synthetic biology viral sensors for the detection of the important plant pathogen CMV from infected plant lysate. Building off of previously elucidated STAR design rules, 15 we have shown that our molecular sensors can be efficiently designed, built, and implemented for use in this important plant diagnostic context. The use of STARs in PLANT-Dx complements previous uses of toehold translational switches for similar purposes in human viral diagnostics, 5 and could lead to more powerful combinations of the two technologies in the future. In addition, we have shown that these reactions can be readily run without the need for extraneous heating or visualization equipment. In particular, the rapid mechanical disruption of infected leaf tissue into a reaction-ready lysate buffer eliminates the need for any nucleic acid isolation. While in these experiments the lysate was snap-frozen, it could equally be used for immediate analysis in the field. The ability of our methodology to selectively detect genomic sequences from CMV and PVY highlights the ability of the growing methodologies and design principles within RNA synthetic biology 18 to contribute to real-world applications. Further modifications to sample preparation will undoubtedly be needed to simplify the user interface still further while improving the sensitivity of detection of lower-replicating and genetically more diverse virus species. We hope that these developments can be incorporated within other synthetic biology-based diagnostic platforms 5 to enable PoUDs to be developed and delivered to regions of the world that need them most.
2,660
2018-12-17T00:00:00.000
[ "Biology" ]
UDC 514.18 SYSTEM OF MODELING OF STRUCTURAL ELEMENTS OF VENTILATION SYSTEMS BY POLYCOОRDINATE TRANSFORMATIONS In this work, a computer system for modeling geometric objects is constructed. This system is instrumental in solving various problems that occur in construction, in particular in the design of ventilation systems. Our approach is based on a method of the polypoint transformations, namely on deformation modeling. Deformations of geometric objects could be described based on the given parameters of a dynamic deformation rather than on analytical equations. An object’s form is changing due to a deformation of a space in which an object is located. Using the machinery of the polypoint transformations, a computer system for modeling geometric objects has been created. The system provides tools that simplify the constriction of surfaces with various types of sections. Introduction. There are various ways to supply and remove indoor air. The choice of ventilation system must take into account technological requirements for working conditions and living space, as well as economic factors. When designing ventilation, it is necessary to apply appropriate design and planning solutions using modern information technologies. The composition of the system depends on its type. One of the most commonly used is mechanical systems. They include the following components: ducts; grates; diffusers; fans; heaters; filters, gearboxes and more. In any ventilation, ducts are an important structural element for supplying fresh air and removing polluted air. When designing ducts in construction, it is necessary to take into account the various forms of cross sections when connecting ducts, which is difficult in practice. Similar problems also arise when designing gearboxes, various ventilation systems for residential and domestic industrial premises. The difficulty of this task is that the transitional structures listed may have cross-sections of different shapes at the ends that need to be joined: for example, round on one side and square on the other. This problem will be solved by creating a system of modeling by means of polycoordinate transformations. 2. Literature Review and the Problem Statement. In [1] formulas of polypoint transformations are given, the concept of poly-point coordinates is introduced and the method of transformation of a straight line in a point cascade is described. [2,3] provides examples of how poly-point transforms are used to control the shape of an object. The analysis of different ways of polypoint transformations is carried out. [4] provides an example of the use of different types of polypoint transformations to solve the extrapolation problem. The analysis of these studies indicates the need to create a computer system for modeling geometric objects, which would allow solving a number of problems that arise in the design and design in construction. 3. Formulating the goals of the article. The purpose of this study is a computer-aided system for modeling geometric objects, which is based on the theory of polypoint transformations, created with the help of modern information technologies, which would greatly simplify the existing processes of constructing surfaces with different types of sections. Main Materials of the Study. Polycoordinate transformations [1] can be used in various fields of production at the stage of modeling of investigated processes. Polycoordinate transformations are divided into polytissues and polypoint. 
Let us take a closer look at polypoint transformations using a multipoint framework (Figure 1). Polypoint transformations allow the position of a straight line (the object of transformation) to be changed by manipulating the points of the transformation basis. As can be seen from Figure 1, the initial basis (points 1, 2, 3, 4, 5) was changed to 1', 2', 3', 4', 5'. In this case, the position of the line changed according to the change in the point basis. This is achieved by solving a system that establishes a functional relationship between the polycoordinate coefficients of the line before and after the transformation. Since polypoint transformations allow one line to be transferred from the initial basis to another, two or more lines can be "transferred" in the same way. This, in turn, means that whole objects can be transformed in this way. Figure 2 shows the polypoint transformation of a circle. As the basis points are moved, the circle turns into a closed curve. Thus, polypoint transformations make it possible to track the deformation of a geometric object by affecting only the space that contains that object. There are different ways of influencing the basis [2,3]. For example, the basis points can be moved in the plane, or weights can be introduced and manipulated by increasing or decreasing the weight at a particular basis point, or at all points in the same way. Figure 3 shows an example in which a circle (the object of transformation), under a change of weights, is converted into a square (with increasing weight) or a curvilinear rhombus (with decreasing weight). In three-dimensional polypoint transformations, the objects of transformation are not straight lines but planes. Two point bases are introduced: the initial one and the converted one. As the weights of the basis points change, the shape of the three-dimensional objects changes. These transformations also have different modifications depending on the form of the weights. Let us take a closer look at polypoint transforms using a multipoint frame. A plane is determined by the coefficients A, B, C, D of its equation Ax + By + Cz + D = 0. In the initial basis, the plane (the prototype) is described by a system of equations involving the distances from the plane to the points of the initial basis. The plane after the conversion is determined by a similar system, written in terms of these same distances from the prototype plane to the points of the initial basis; this system determines A, B, C, D, the coefficients of the transformed plane. To solve the problem unambiguously, a functional of the four unknowns is introduced, and its partial derivatives with respect to all four variables are found and set to zero. These four equations form a linear system of equations. Solving the system, we obtain the values A, B, C, D, the coefficients of the plane after the conversion. The algorithm for converting a three-dimensional body can be described as follows: • the 3D object is immersed in a point basis. This is done by the user by selecting the basis points based on the conditions imposed by the task, such as the final shape of the object; • the body to be deformed is represented by a set of planes (triangles). This can be done, for example, by triangulation; • consecutive polypoint transformations of each plane are performed and their intersections are determined; • using existing methods of spatial interpolation (or the capabilities of modern graphics packages), the resulting surface is smoothed. Figure 4 shows an example of the deformation of a sphere as the basis weights change. The polypoint transformations considered allow modeling of the deformations of different geometric elements.
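As a rough illustration of the workflow just outlined (immerse the object in a point basis, move and re-weight the basis points, and let the object follow), the Python sketch below deforms a circle using an inverse-distance weighted blend of the basis-point displacements. This is a stand-in with the same inputs as the described system, not the paper's exact polypoint formulas.

```python
# Illustrative point-basis deformation of a circle (not the paper's exact polypoint scheme).
import numpy as np

def deform(points, basis_src, basis_dst, weights, power=2.0, eps=1e-9):
    """Move each point by the weighted average displacement of the basis points."""
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(basis_src - p, axis=1) + eps      # distances to the initial basis
        w = weights / d ** power                              # closer / heavier points pull harder
        w /= w.sum()
        out[i] = p + w @ (basis_dst - basis_src)              # blended displacement
    return out

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])      # object of transformation

basis_src = np.array([[ 2.0, 0.0], [0.0,  2.0], [-2.0, 0.0], [0.0, -2.0]])   # initial basis
basis_dst = np.array([[ 3.0, 0.0], [0.0,  2.0], [-2.0, 0.0], [0.0, -1.0]])   # moved basis
weights   = np.array([1.0, 1.0, 1.0, 3.0])                    # heavier pull toward the last point

deformed = deform(circle, basis_src, basis_dst, weights)
print(deformed[:3])   # first few points of the closed curve the circle is mapped to
```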
In particular, the system makes it possible to model reducers (adapters) with differently shaped cross-sections at their ends, which allow the ducts of different ventilation systems of residential and industrial premises to be connected to each other. Examples of some reducers are presented in Figure 5. To visualize the processes of deformation of an object on the basis of polypoint transformations, a system has been developed that allows the user to control the positions of the basis points, i.e., their coordinates, and also to set the weights of these points. The system lets the user view an animation in which the object rotates around three axes in real time. This allows the user to view the shape change of the object in real time and to analyze the deformations that have taken place. An example of how the system works is shown in Figure 6. As can be seen from Figure 6, the user can deform the object and obtain the needed shape by simply manipulating the controls. Conclusions. Studies have shown that polycoordinate transformations can not only model the shape of deformed objects, with the ability to visually track these processes, but can also be used to construct technical objects such as reducer-adapters with different cross-sections at the ends. In the process, a computer system was created whereby the user can deform an object and obtain the needed shape by simply manipulating the controls, which can be useful in the production of reducers. Keywords: deformation modeling; polypoint transformations; computer modeling; ventilation systems; reducer. The research is devoted to the need to create a system for modeling geometric objects that would solve a number of problems that arise in design and construction, namely in ventilation systems. The problem is solved using polypoint transformations.
1,929.2
2020-01-01T00:00:00.000
[ "Computer Science" ]
Small subgraphs with large average degree In this paper we study the fundamental problem of finding small dense subgraphs in a given graph. For a real number $s>2$, we prove that every graph on $n$ vertices with average degree at least $d$ contains a subgraph of average degree at least $s$ on at most $nd^{-\frac{s}{s-2}}(\log d)^{O_s(1)}$ vertices. This is optimal up to the polylogarithmic factor, and resolves a conjecture of Feige and Wagner. In addition, we show that every graph with $n$ vertices and average degree at least $n^{1-\frac{2}{s}+\varepsilon}$ contains a subgraph of average degree at least $s$ on $O_{\varepsilon,s}(1)$ vertices, which is also optimal up to the constant hidden in the $O(.)$ notation, and resolves a conjecture of Verstra\"ete. Introduction Given a graph G and a parameter k, the densest k-subgraph problem asks to find a k-vertex subgraph of G of maximum average degree.This is one of the central problems in theoretical computer science.It is NP-hard, and has no polynomial-time approximation scheme (PTAS) under certain complexity theoretic assumptions [9,19].On the other hand, the currently best known approximation algorithm achieves an O(n 1/4 )-approximation [2]. In addition to the algorithmic perspective, another natural direction for the above problem is to understand the maximum density of the small subgraphs of a given graph which one can theoretically guarantee.The precise problem under consideration, proposed by Feige and Wagner [11], is that given a positive integer n and real numbers d, s satisfying d ≥ s ≥ 2, what is the minimum of t = t(n, d, s) such that every graph G on n vertices with average degree at least d contains a subgraph of average degree at least s on at most t vertices.This question also falls squarely within the context of the so called local-global principle, that states that one can obtain global understanding of a structure from having a good understanding of its local properties, or vice versa.This phenomenon has been ubiquitous in many areas of mathematics and beyond, see e.g.[1,4,13,20]. The question of Feige and Wagner, in the case s = 2, is equivalent to the famous girth problem, that asks for the length of the shortest cycle in a graph on n vertices with average degree d.This problem is extensively studied, and using our notation, it is well known that t(n, d, 2) = Θ(log d−1 n) (see e.g.[3], page 104, Theorems 1.1 and 1.2).However, it is a major open problem to determine the leading coefficient.Much less is known if s > 2. A simple probabilistic argument gives the following result. Proposition 1.1.For every s > 2, there is a positive c s such that for all s ≤ d ≤ n − 1, we have t(n, d, s) ≥ c s nd − s s−2 .In other words, for every s ≤ d ≤ n − 1, there is an n-vertex graph G with average degree at least d in which every subgraph with average degree at least s has at least c s nd − s s−2 vertices. Feige and Wagner [11] proposed the conjecture that this lower bound on t(n, d, s) is optimal, up to polylogarithmic factors in n.In the special case s ≈ 4, they also proved certain results in the support of it.First, they showed that if ε > 0 and s = 4 − ε, then t(n, d, s) = O ε (nd −2 ). Second, they proved that t(n, d, 4) = O ε (nd −1.8+ε ) for every ε > 0. 
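Remark (an illustrative back-of-envelope calculation added here, not taken from the paper's proofs): the exponent $\frac{s}{s-2}$ in Proposition 1.1 can be anticipated from a first-moment computation in the random graph $G(n,p)$ with $p \asymp d/n$, sketched below with constants absorbed into $c_s$.

```latex
% First-moment heuristic behind Proposition 1.1 (illustrative; constants ignored).
% Expected number of r-vertex sets spanning at least rs/2 edges in G(n,p), p ~ d/n:
\[
\binom{n}{r}\binom{\binom{r}{2}}{rs/2}\,p^{rs/2}
\;\le\;
\left[\frac{en}{r}\left(\frac{erd}{sn}\right)^{s/2}\right]^{r}.
\]
% Substituting r = c n d^{-s/(s-2)} makes the d-dependence cancel inside the bracket:
\[
\frac{en}{r}\left(\frac{erd}{sn}\right)^{s/2}
\;=\;
\frac{e}{c}\,d^{\frac{s}{s-2}}\cdot\Bigl(\frac{ec}{s}\Bigr)^{s/2} d^{-\frac{s}{s-2}}
\;=\;
e^{1+s/2}\,s^{-s/2}\,c^{\,s/2-1},
\]
% which is below 1 for small c, since s/2 - 1 > 0 when s > 2. Hence, for a suitable c_s > 0,
% with positive probability G(n,p) has no subgraph of average degree at least s on at most
% c_s n d^{-s/(s-2)} vertices, while a Chernoff bound keeps its average degree of order d.
```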
Also, Alon and Hod (personal communication) proved the aforementioned conjecture for certain special values of s and a limited range of d.Here, we completely settle the conjecture of Feige and Wagner with the following theorem (which is even stronger as the error term is logarithmic in d instead of n). Theorem 1.2.For every s > 2, there is a constant C = C(s) such that the following holds for every sufficiently large d.Let G be an n-vertex graph with average degree at least d, where has average degree at least s. While our proof of this result is non-algorithmic, it gives the best theoretical lower bound on the average degree one is guaranteed to find.It would be interesting to decide whether there is a polynomial time algorithm that finds a subgraph that achieves the bound provided by the above theorem. Theorem 1.2 cannot be used to find a constant sized (independent of n) subgraph with large average degree.In this case, one cannot expect a similar answer as before, as the random deletion method shows the following.Proposition 1.3.For every s > 2 and positive integer t, there exists ε = ε(s, t) > 0 such that the following holds for every sufficiently large n.There exists a graph G on n vertices with average degree at least n 1− 2 s +ε such that every subgraph of G on at most t vertices has average degree less than s. A similar argument shows that in case , the logarithmic error term is indeed needed in Theorem 1.2.Motivated by applications from [21] to parity check matrices, Versträete (see [18]) conjectured that this lower bound presented in Proposition 1.3 is optimal in a certain sense.More precisely, he proposed the conjecture that for every s > 2 and ε > 0 there exists some t = t(s, ε) such that every graph on n vertices with average degree at least n 1− 2 s +ε must contain a subgraph on at most t vertices with average degree at least s.In the special case s is an integer, this was proved by Jiang and Newman [18].Janzer [15] strengthened this result, obtaining under the same hypothesis an s-regular subgraph.More precisely he proved that if G is a graph on n vertices with at least n 2− 1 r + 1 k+r−1 +ε edges for sufficiently large n, then G contains an r-blowup of the cycle C 2k (note that the r-blowup of the cycle C 2k is s = 2r-regular and by taking k large one can make the term 1 k+r−1 arbitrarily small).In our next theorem, we prove the conjecture of Verstraëte for all real values of s > 2. Theorem 1.4.For every s > 2 and ε > 0, there is a positive integer t such that the following holds for all sufficiently large n.Let G be an n-vertex graph of average degree d ≥ n 1− 2 s +ε .Then there is a non-empty set R ⊂ V (G) of size at most t such that G[R] has average degree at least s. 
Our results are closely related to the problem of Erdős, Faudree, Rousseau and Schelp on finding small subgraphs of large minimum degree.In [7], they determined the minimal number of edges in a graph on n vertices which guarantees a proper subgraph (i.e., with u < n vertices) of minimum degree at least s (see [23] for additional details and recent developments).Erdős, Faudree, Rousseau and Schelp [8] further asked the following general question.Given positive integers n and s, and a positive real number d satisfying d ≥ s ≥ 2, what is the minimum of u = u(n, d, s) such that every graph G on n vertices with average degree at least d contains a subgraph of minimum degree at least s on at most u vertices?It is reasonable to suspect that u(n, d, s) ≈ t(n, d, s), that is, that Theorems 1.2 and 1.4 hold with the average degree of G[R] replaced with its minimum degree.In case s is even, the minimum degree version of Theorem 1.4 does hold by the aforementioned results of [15] and [18], the first of which even guarantees a regular subgraph.Moreover, the case s = 3 follows from a recent result of Janzer [16], which refutes a conjecture of Erdős and Simonovits [6] (and again provides a regular subgraph).However, the cases when s is odd and greater than 3 remain open.On the other hand, the minimum degree variant of Theorem 1.2 is completely open for every s ≥ 3, and our methods do not seem to be adaptable for this problem.At least, by noting that every graph of average degree at least 2s contains a subgraph of minimum degree at least s (see Lemma 2.2), we get the following immediate corollary of Theorem 1.2. Corollary 1.5.For every integer s ≥ 2, there is a constant C = C(s) such that the following holds for every sufficiently large d.Let G be an n-vertex graph with average degree at least d, where d ≤ n s−1 s .Then there is a non-empty set R ⊂ V (G) of size at most nd − s s−1 (log d) C such that G[R] has minimum degree at least s. Small subgraphs of large average degree In this section, we prove Theorems 1.2 and 1.4.Both proofs follow the same argument, however, with a different range of parameters.Let us give a brief outline of this argument. The key idea is that for every rational number ρ > 1 we construct a tree T , which we refer to as a balanced tree, with the following property.Let q be the number of leaves of T .If H is a graph that is the union of copies of T having the same set of leaves, then the average degree of H is at least 2ρ(1 − q |V (H)| ).Now suppose we are given a graph G with n vertices and average degree at least d, which does not contain a subgraph of order at most t ≈ nd − s s−2 and average degree at least s.Take ρ such that 2ρ is slightly larger than s, let T be the balanced tree with respect to ρ, and let q be the number of leaves of T .By counting the number of subgraphs of G isomorphic to T and using the pigeonhole principle, we find a large collection T of copies of T in G, all having the same set of leaves.Let H be their union.We show that some subgraph of H will contradict our assumption on G, i.e. 
it has order at most t and average degree at least s.In order to show this, we consider two cases depending on the number of vertices in H.If |V (H)| > t, we take some sub-collection T ′ of T such that H ′ , which denotes the union of the copies of T in T ′ , has order roughly t.Our choice of parameters will guarantee that 2ρ(1 − q |V (H ′ )| ) ≥ s, so H ′ suffices by the above mentioned property of T .Otherwise, we argue that unless H has average degree at least s, it cannot contain the described number of copies of T , and we are done again. Preliminaries In this section, we prove the lower bounds (Propositions 1.1 and 1.3) and collect some basic results.First, let us start with the following simple consequence of the multiplicative Chernoff bound.Lemma 2.1.Let X be the sum of independent indicator (i.e.0-1 valued) random variables.Then Next, we present the promised probabilistic lower bound arguments. Proof of Proposition 1.1.Let c s be sufficiently small.If d ≥ n−1 2 , then c s nd − s s−2 < 1, so we can take G to be any n-vertex graph with average degree at least d. Else, let G be a random graph on n vertices in which each edge is chosen with probability p = 2d n−1 , independently of all other edges.Then |E(G)| is the sum of independent indicator random variables and has mean nd, so by Lemma 2.1, the probability that G has fewer than nd/2 edges (i.e., that G has average degree less than Moreover, for r ≤ ⌊c s nd − s s−2 ⌋, we have Hence, if c s is sufficiently small, then by ( 1), the probability that G has a subgraph of average degree at least s on at most c s nd − s s−2 vertices is at most 1 1000 .It follows that with positive probability G has no such subgraph but has average degree at least d, completing the proof. Proof of Proposition 1.3.We show that ε = 1 ts suffices.Assume that n is sufficiently large with respect to s and t, and let G ′ be the graph on n vertices in which each edge is present independently with probability p = n − 2 s + 2 ts .Letting X be the number of edges of G ′ , we have ts .Let R be the family of graphs R ⊂ V (G ′ ) (2) with r ≤ t vertices and exactly ⌈ rs . ts > n 2− 2 s +ε .But then there exists a choice for G ′ such that X − Y > n 2− 2 s +ε .For each bad R ∈ R, remove an edge of G ′ contained in R, and let the resulting graph be G.Then G contains no element of R as a subgraph, so every subgraph of G on at most t vertices has average degree less than s.Furthermore, G has average degree at least Finally, before we embark on the proofs of our main theorems, let us state a useful lemma about subgraphs of large minimum degree.Lemma 2.2.Every graph G of average degree d contains a nonempty subgraph of minimum degree at least d 2 . Proof.Keep removing vertices of degree less than d 2 as long as there is such a vertex.In total, we removed less than |V (G)|d 2 = |E(G)| edges, so the resulting graph is nonempty and has minimum degree at least d 2 . Balanced trees In this section, we define balanced trees, which one can view as the building blocks of our small subgraph of large average degree.Interestingly, but perhaps not unexpectedly, these trees coincide with the balanced trees constructed by Bukh and Conlon [5] in their celebrated paper on the Rational Exponents conjecture. Definition 2.3.Let T be a tree with leaf set L. For any non-empty S ⊂ V (T ) \ L, let where e S is the number of edges in T incident to at least one vertex from S. Also, set ρ T = ρ T (V (T ) \ L).We say that T is balanced if ρ T ≤ ρ T (S) holds for every non-empty S ⊂ V (T ) \ L. 
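A quick worked instance of Definition 2.3 (added for illustration, not from the paper): take $T$ to be the path on four vertices.

```latex
% Let T = P_4 with vertices u_1 u_2 u_3 u_4, so L = {u_1, u_4} and V(T)\setminus L = {u_2, u_3}.
\[
\rho_T(\{u_2,u_3\}) = \frac{e_{\{u_2,u_3\}}}{2} = \frac{3}{2},
\qquad
\rho_T(\{u_2\}) = \rho_T(\{u_3\}) = \frac{2}{1} = 2,
\]
% so \rho_T = 3/2 and P_4 is balanced; this matches \rho_{T_{a,b}} = b/a for (a,b) = (2,3)
% in the Bukh--Conlon family introduced below. Lemma 2.4 (below) then gives, for a union H
% of copies of P_4 sharing the same two leaves, e(H) >= (|V(H)| - 2) * 3/2; for k internally
% disjoint copies this is tight, since |V(H)| = 2k + 2 and e(H) = 3k.
```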
The main reason balanced trees are useful for us is given by the following simple lemma which one can prove by induction. Lemma 2.4 (Bukh-Conlon [5]).Let T be a balanced tree with q leaves and let H be a graph which is the union of copies of T with the same set of leaves.Then e(H) ≥ (|V (H)| − q)ρ T . Proof.Let H be the union of k copies of T with the same set L of leaves.We prove the inequality e(H) T , as desired.Assume now that k ≥ 2. Let T 0 be one of the k copies of T constituting H and let H ′ be the union of the remaining k − 1 copies of T .By the induction hypothesis, we have e(H ′ ) ≥ (|V (H ′ )| − q)ρ T .Let S = V (T 0 ) \ V (H ′ ).Note that since all copies of T in H have the same set L of leaves, we have S ⊂ V (T 0 ) \ L. Now observe that e S ≥ |S|ρ T (where T 0 is identified with T and S is viewed as a subset of V (T )).Indeed, the inequality is trivial if S = ∅ and else e S = |S|ρ T (S) ≥ |S|ρ T since T is balanced.But then completing the induction step. Next we describe a construction of balanced trees which are caterpillars.A caterpillar is a tree in which the non-leaf vertices form a path.Definition 2.5 (Bukh-Conlon [5]).Suppose that a and b are positive integers satisfying a + 1 ≤ b < 2a + 1 and put i = b − a.We define a tree T a,b by taking a path with a vertices, which are labelled in order as 1, 2, . . ., a, and then adding a leaf to each of the i + 1 vertices In this case we attach two leaves in total to vertex a.)For b ≥ 2a + 1, we define T a,b recursively to be the tree obtained by attaching a leaf to each non-leaf of T a,b−a . Remark.In [5], trees T a,b for b ∈ {a − 1, a} are introduced as well and they are used to define T a,b for b ∈ {2a − 1, 2a}, but one can easily see that our definition gives the same graphs. Bukh and Conlon showed that, indeed, T a,b is balanced for every a < b.Combined with the simple observation that T a,b has maximum degree at most ⌈b/a⌉+ 1, we obtain the following result.Lemma 2.6 (Bukh-Conlon [5]).For any positive integers a < b, T a,b is a balanced caterpillar with a non-leaf vertices, b edges and maximum degree at most ⌈b/a⌉ + 1. Counting trees In this section, we provide lower and upper bounds on the number of copies of a fixed tree in graphs with some prescribed properties.Let us start with the lower bound. For a graph G and a set S of vertices in G, we write Γ G (S) for the set of vertices in G which have a neighbour in S. We make use of the following celebrated theorem of Friedman and Pippenger [12] about large bounded degree trees in expanding graphs. Theorem 2.7 (Friedman-Pippenger [12]).If G is a non-empty graph such that for every S ⊂ V (G) with |S| ≤ 2m−2, we have |Γ G (S)| ≥ (k +1)|S|, then G contains every tree with at most m vertices and maximum degree at most k. Say that a graph G is (ρ, r)-sparse if for every R ⊂ V (G) of size at most r, the number of edges in G[R] is at most ρ|R|.Next, we show that (ρ, r)-sparse graphs of large minimum degree have good expansion properties.Lemma 2.8.Let G be a (ρ, r)-sparse graph with average degree at least 4ρ(k+2).Then G contains every tree with at most r 2(k+2) vertices and maximum degree at most k. 
Proof. By Lemma 2.2, $G$ contains a subgraph $G'$ of minimum degree at least $2\rho(k+2)$. Note that $G'$ is also $(\rho, r)$-sparse. We show that $G'$ already contains every tree with at most $m = \frac{r}{2(k+2)}$ vertices and maximum degree at most $k$. Otherwise, by Theorem 2.7, there is a set $S \subseteq V(G')$ with $|S| \le 2m - 2$ and $|\Gamma_{G'}(S)| < (k+1)|S|$; let $R = S \cup \Gamma_{G'}(S)$, so that $|R| < (k+2)|S| \le (k+2)(2m-2) < r$. Furthermore, by the minimum degree condition, the number of edges in $G'[R]$ is at least $\frac{1}{2}|S| \cdot 2\rho(k+2) = \rho(k+2)|S| > \rho|R|$, which is a contradiction.

Now we are ready to state our first tree counting lemma.

Lemma 2.9. For any $\rho > 1$ and positive integer $k$, there exists $c_0 = c_0(\rho, k) > 0$ such that the following holds for every $n \ge 8$. Let $G$ be an $n$-vertex $(\rho, r)$-sparse graph with average degree at least $d \ge c_0^{-1}$. Let $T$ be a tree with $t \le \frac{r}{2(k+2)}$ vertices and maximum degree at most $k$. Then $G$ contains at least $(c_0 d)^{t-1}$ copies of $T$.

Proof. We show that $c_0 = \frac{1}{16\rho(k+2)}$ suffices. Let $p = \frac{8\rho(k+2)}{d} < 1$, and sample each edge of $G$ independently with probability $p$. Let the resulting graph be $G'$, and let $X = e(G')$. Then $\mathbb{E}(X) = p\,e(G) \ge \frac{pdn}{2} = 4\rho(k+2)n$. As $X$ is the sum of independent indicator random variables, we can use Lemma 2.1 to write $\mathbb{P}\big(X \le \frac{1}{2}\mathbb{E}(X)\big) \le e^{-\mathbb{E}(X)/8} < \frac{1}{2}$. Hence, with probability at least $\frac{1}{2}$, $G'$ has average degree at least $4\rho(k+2)$. If this happens, we can apply Lemma 2.8 to conclude that $G'$ contains a copy of $T$. Thus, the expected number of copies of $T$ in $G'$ is at least $\frac{1}{2}$. On the other hand, writing $N$ for the number of copies of $T$ in $G$, we also have that the expected number of copies of $T$ in $G'$ is $p^{t-1}N$. Hence, we get the inequality $p^{t-1}N \ge \frac{1}{2}$, so $N \ge \frac{1}{2}\big(\frac{d}{8\rho(k+2)}\big)^{t-1} \ge (c_0 d)^{t-1}$.

Now let us turn to our upper bound on the number of copies of a tree. For simplicity, we only present a counting result in case the tree is a caterpillar. However, it seems likely that a similar result should hold for trees in general as well.

Lemma 2.10. Let $G$ be a graph with $n$ vertices and $m$ edges. Let $T$ be a caterpillar with $a$ non-leaf vertices and maximum degree $k$. Then $G$ contains at most $n \cdot \big(\frac{2m}{a}\big)^{ak}$ copies of $T$.

Proof. Let $d_1 \ge d_2 \ge \cdots \ge d_n$ denote the degree sequence of $G$. As $T$ is a caterpillar, its non-leaf vertices form a path on $a$ vertices, so let us first count the number of such paths in $G$.

Claim. For every vertex $v \in V(G)$, the number of paths on $a$ vertices in $G$ starting from $v$ is at most $d_1 \cdots d_{a-1}$.

Proof. We prove this by induction on $a$. If $a = 2$, this is trivial, so let us assume that $a \ge 3$. Let $G'$ be the subgraph of $G$ we get after removing $v$, and let $d'_1 \ge d'_2 \ge \cdots$ be the degree sequence of $G'$. There are $\deg_G(v)$ ways to choose the neighbour of $v$ in the path. If this neighbour is $v' \in V(G')$, we can use our induction hypothesis to conclude that there are at most $d'_1 \cdots d'_{a-2}$ paths on $a - 1$ vertices in $G'$ starting with $v'$. Hence, the number of paths on $a$ vertices in $G$ starting with $v$ is at most $\deg_G(v) \cdot (d'_1 \cdots d'_{a-2}) \le d_1 \cdots d_{a-1}$, finishing the proof.

Hence, the number of ways to embed the non-leaf vertices of $T$ is at most $n \cdot d_1 \cdots d_{a-1}$. Suppose that the non-leaf vertices of $T$ are already embedded in $G$, and their images are $v_1, \ldots, v_a$. Then the number of ways to choose the leaves of $T$ can be bounded in terms of the degrees of $v_1, \ldots, v_a$. Therefore, applying the AM-GM inequality to the resulting product of degrees, the total number of copies of $T$ in $G$ is at most $n \cdot \big(\frac{2m}{a}\big)^{ak}$.

Piecing the trees together

In this section, we present our main technical lemma, which implies both Theorems 1.2 and 1.4 after substituting the right parameters. Before we state this lemma, we show that if a graph $G$ contains many copies of a balanced caterpillar with the same set of leaves, then $G$ cannot be $(\rho, r)$-sparse. Recall that if $T$ is a tree with $t$ vertices and $a$ non-leaf vertices, then $\rho_T = \frac{t-1}{a}$.

Lemma 2.11. Let $\rho > 0$ and let $k$ be a positive integer. Let $T$ be a balanced caterpillar with $t$ vertices, $a$ non-leaf vertices, and maximum degree at most $k$. Assume that $t < r \le n$ and $\rho \le (1 - \frac{t}{r-t})\rho_T$. Let $G$ be an $n$-vertex graph containing at least $r(\frac{2\rho r}{a})^{ak}$ copies of $T$ with the same set of leaves. Then $G$ is not $(\rho, r)$-sparse.
Proof. Assume, for contradiction, that $G$ is $(\rho, r)$-sparse. Let $\mathcal{T}$ be a collection of at least $r(\frac{2\rho r}{a})^{ak}$ copies of $T$ in $G$ with the same set of leaves. Let $R_0 \subseteq V(G)$ be the set of vertices spanned by the elements of $\mathcal{T}$. First, observe that we must have $|R_0| \ge r$. Otherwise, as $G$ is $(\rho, r)$-sparse, $G[R_0]$ has at most $m = \rho|R_0| < \rho r$ edges, so by Lemma 2.10, it contains less than $r(\frac{2\rho r}{a})^{ak}$ copies of $T$, a contradiction.

Therefore, we can take a subcollection $\mathcal{T}' \subseteq \mathcal{T}$ such that the union of the trees in $\mathcal{T}'$ spans at least $r - t$ and at most $r$ vertices. Let $R$ be the set of vertices in $G$ spanned by the union of the trees in $\mathcal{T}'$. By Lemma 2.4, $e(G[R]) \ge (|R| - q)\rho_T$, where $q$ is the number of leaves in $T$. Hence, $e(G[R]) \ge (|R| - q)\rho_T > (|R| - t)\rho_T \ge |R|\big(1 - \frac{t}{r-t}\big)\rho_T \ge \rho|R|$. Since $|R| \le r$, this contradicts the assumption that $G$ is $(\rho, r)$-sparse, and the proof is complete.

Now we are ready to state the promised main technical lemma.

Lemma 2.12. Let $\rho > 1$ and let $c_0 = c_0(\rho, \lceil 2\rho \rceil + 1)$ be given by Lemma 2.9. Let $G$ be an $n$-vertex graph with average degree $d \ge c_0^{-1}$. Assume that there are positive integers $r$, $t$ and $a$ such that the following inequalities are satisfied. Then $G$ is not $(\rho, r)$-sparse.

Proof. Conditions 1 and 3 imply that $t - 1 > a$, so Lemma 2.6 shows that there is a balanced caterpillar $T$ with $a$ non-leaf vertices, $t - 1$ edges and maximum degree at most $k = \lceil t/a \rceil + 1 \le \lceil 2\rho \rceil + 1$. Note that $ka < 3t$. Assume that $G$ is $(\rho, r)$-sparse. Then it follows by Lemma 2.9 that $G$ contains at least $(c_0 d)^{t-1}$ copies of $T$. Since $T$ has $t - a$ leaves and $G$ has $\binom{n}{t-a}$ subsets of size $t - a$, it follows from condition 4 and the pigeonhole principle that there is a collection of at least $r(\frac{2\rho r}{a})^{3t} > r \cdot (\frac{2\rho r}{a})^{ka}$ copies of $T$ in $G$ which share the same set of leaves. Note that $T$ has $t - 1$ edges and $a$ non-leaf vertices, so $\rho_T = \frac{t-1}{a}$ and condition 1 gives $\rho \le (1 - \frac{t}{r-t})\rho_T$. Hence, we can apply Lemma 2.11 to conclude that $G$ is not $(\rho, r)$-sparse, a contradiction.

Completing the proofs

In this section, we put everything together to conclude the proofs of our main results. First, we prove Theorem 1.2 in the following equivalent form.

Theorem 2.13. For every $\rho > 1$, there is a constant $C = C(\rho)$ such that the following holds for every sufficiently large $d$. Let $G$ be an $n$-vertex graph with average degree at least $d$, where $(\log d)^C \le n d^{-\frac{\rho}{\rho-1}} (\log d)^C \le n$. Then $G$ is not $(\rho, r)$-sparse for $r = \lfloor n d^{-\frac{\rho}{\rho-1}} (\log d)^C \rfloor$.

Note that this indeed implies Theorem 1.2 with $s = 2\rho$.

Proof. Let $C$ be sufficiently large with respect to $\rho$, let $d$ be sufficiently large with respect to $\rho$ and $C$, let $f = (\log d)^C$ and let $r = \lfloor n d^{-\frac{\rho}{\rho-1}} f \rfloor$. Then $r \le n$, and by the conditions of the theorem, $r \ge \lfloor (\log d)^C \rfloor$. Let $\varepsilon = \frac{1}{\log d}$, $t = \lceil \frac{r\varepsilon}{8} \rceil$ and let $a = \lceil \frac{t}{\rho}(1 - \varepsilon) \rceil$. Then $t \ge 20 \log d$, assuming that $C$ and $d$ are sufficiently large. It suffices to prove that the four conditions in Lemma 2.12 are satisfied.

Note that, since $t = \lceil \frac{r\varepsilon}{8} \rceil$, we have $\rho \le (1 - \frac{t}{r-t})\frac{t-1}{a}$, so condition 1 is satisfied. Conditions 2 and 3 are immediate from the definitions of $t$ and $a$. Hence, it remains to verify condition 4. First, since $a < \frac{t}{\rho}$, the binomial coefficient $\binom{n}{t-a}$ can be bounded from above, and using that $d^{\varepsilon} = d^{1/\log d} = e$, this yields a bound that holds for some $c_1 = c_1(\rho) > 0$. On the other hand, we have $(\frac{2\rho r}{a})^{3t} \le (\frac{c_2}{\varepsilon^3})^t$ for some $c_2 = c_2(\rho)$. Hence, in order to prove condition 4, it suffices to verify the resulting inequality. As $t > 10 \log d$ and $d$ is sufficiently large, we have $(c_0 d)^{1-\frac{1}{t}} > \frac{c_0 d}{2}$. Also, $t \ge \frac{r}{8 \log d} > \log r$, so $r^{\frac{1}{t}} < 3$. Finally, recalling that $f = (\log d)^C$ and $\varepsilon = \frac{1}{\log d}$, we get that the required inequality holds whenever $C$ and $d$ are sufficiently large in terms of $\rho$. This completes the proof of the theorem.

Finally, we prove the following equivalent version of Theorem 1.4.

Theorem 2.14. For every $\rho > 1$ and $0 < \varepsilon < 1$, there is a positive integer $r$ such that the following holds for all sufficiently large $n$. Let $G$ be an $n$-vertex graph with average degree $d \ge n^{1 - \frac{1}{\rho} + \varepsilon}$. Then $G$ is not $(\rho, r)$-sparse.

Proof. Assume that $r$ is sufficiently large in terms of $\rho$ and $\varepsilon$. Let $t = \lceil \frac{4\rho}{\varepsilon} \rceil$ and let $a = \lfloor \frac{t}{\rho} \rfloor - 1$. It suffices to prove that the four conditions in Lemma 2.12 are satisfied.

Note that, since $r$ is sufficiently large in terms of $\rho$ and $\varepsilon$, we have $\rho \le (1 - \frac{t}{r-t})\frac{t-1}{a}$, and condition 1 is satisfied. Condition 3 is immediate from the definition. Also, by the definitions of $a$ and $t$, we have $\frac{t}{\rho} \ge 4$ and hence $a \ge \frac{t}{\rho} - 2 \ge \frac{t}{2\rho}$, verifying condition 2. Now let us verify condition 4. We have $d \ge n^{1-\frac{1}{\rho}+\varepsilon}$ and $\binom{n}{t-a} \le n^{t-a}$. Hence, it is enough to prove that $(c_0 n^{1-\frac{1}{\rho}+\varepsilon})^{t-1} \ge n^{t-a} \cdot r \big(\frac{2\rho r}{a}\big)^{3t}$. Note that $n$ is sufficiently large compared to the other parameters, so it suffices to prove the corresponding inequality for the exponents of $n$ on both sides, i.e., that $(1 - \frac{1}{\rho} + \varepsilon)(t-1) > t - a$. This holds by the choice of $t$ and $a$, and the proof is complete.

Regular subgraphs

In this paper, we proved optimal bounds on the order of the smallest subgraph $G[R]$ of average degree at least $s$ in a graph $G$ of given order and average degree. As mentioned in the introduction, in case $s$ is an integer, we believe that the strengthening of our results in which the average degree of $G[R]$ is replaced with its minimum degree should also hold. Moreover, one can further strengthen this by requiring that $G[R]$ contains an $s$-regular subgraph.

Conjecture 3.1. For every integer $s \ge 3$, there is a constant $C = C(s)$ such that the following holds for every sufficiently large $n$. Let $G$ be an $n$-vertex graph with average degree at least $d$, where $C \log\log n \le d \le n^{1-\frac{2}{s}}$. Then $G$ contains an $s$-regular subgraph on at most $n d^{-\frac{s}{s-2}} (\log n)^C$ vertices.

Note that in the special case where $d \le (\log n)^{C(s-2)/s}$, Conjecture 3.1 asserts the existence of an $s$-regular subgraph, without a requirement on its order. This is the well studied Erdős-Sauer problem, which was resolved very recently in [17]. It was shown there that for large $C = C(s)$, every $n$-vertex graph with average degree at least $C \log\log n$ has an $s$-regular subgraph. This is tight, as an old construction of Pyber, Rödl and Szemerédi [22] shows that there is some $c > 0$ such that there are $n$-vertex graphs with average degree at least $c \log\log n$ and no $s$-regular subgraph.

On the other hand, when $d$ is very large and we are looking for a subgraph of bounded order, we have the following conjecture generalizing Theorem 1.4. This conjecture quantifies Problem 7.1 from [18].

Conjecture 3.2. For every integer $s \ge 3$ and $\varepsilon > 0$, there is a positive integer $t$ such that the following holds for all sufficiently large $n$. Let $G$ be an $n$-vertex graph of average degree at least $n^{1-\frac{2}{s}+\varepsilon}$. Then $G$ contains an $s$-regular subgraph on at most $t$ vertices.

As we remarked earlier, Conjecture 3.2 is known to be true if $s$ is even [15], or $s = 3$ [16].

Uniform hypergraphs

Another interesting direction one may explore is the analogous question for uniform hypergraphs, which was also considered by Feige and Wagner [11].

Problem 3.3. Let $r, n$ be positive integers, and $s, d > 1$ be real numbers. Determine the asymptotic value of the smallest $t = t_r(n, d, s)$ such that every $r$-uniform hypergraph on $n$ vertices of average degree at least $d$ contains a subhypergraph on at most $t$ vertices of average degree at least $s$.

This problem is closely related to another well known conjecture of Feige [10] about even covers of hypergraphs. An even cover of a hypergraph is a non-empty subhypergraph in which each vertex is contained in an even number of edges. Feige conjectured that every $r$-uniform hypergraph with $n$ vertices and average degree $d$ contains an even cover on at most $n d^{-\frac{2}{r-2}} \operatorname{polylog}(n)$ vertices, which was recently settled by Guruswami, Kothari, and Manohar [14]. Indeed, in order to find a small even cover, one needs to first guarantee a small subhypergraph of average degree at least 2.
Automation of Inertial Fusion Target Design with Python

Abstract—The process of tuning an inertial confinement fusion pulse shape to a specific target design is a highly iterative process. When done manually, each iteration has large latency and is consequently time consuming. We have developed several techniques that can be used to automate much of the pulse tuning process and significantly accelerate it by removing the human-induced latency. The automated data analysis techniques require specialized diagnostics to run within the simulation. To facilitate these techniques, we have embedded a loosely coupled Python interpreter within a pre-existing radiation-hydrodynamics code, Hydra. To automate the tuning process we use numerical optimization techniques and construct objective functions to identify tuned parameters.

Inertial Confinement Fusion

Inertial confinement fusion (ICF) is a means to achieve controlled thermonuclear fusion by way of compressing hydrogen to extremely large pressures (GBar), temperatures (10's of keV) and densities (100x solid density). ICF capsules are typically small (~1 mm radius) spheres composed of several layers of cryogenic hydrogen, plastic, metal or other materials. These extreme conditions are reached by illuminating the capsule with a very high intensity (100's of TW) driver. This compresses the shell to more than 100 times solid density and accelerates the radially converging shell to very high velocity (300 km/s). As the shell stagnates, a fusion burn wave propagates from a central, low-density, high-temperature region to a surrounding high-density, low-temperature fuel region. The inertia of the fuel keeps it intact long enough for a significant fraction of the fuel to burn. There are several approaches to achieving a significant fusion burn, but for this paper we consider the shock ignition [Betti2007] approach with the capsule directly driven by lasers. The capsule is a spherical shell of frozen deuterium-tritium ("DT ice"), coated with plastic or another ablator material. The region within the DT ice is filled with DT gas at the vapor pressure. Laser beams directly illuminate the target and deposit energy in the outermost layer, called the ablator. The ablation of the ablator supplies the pressure to drive the implosion. We assume a spherically symmetric illumination of the capsule with the total incident power varying in time. The power-versus-time profile is referred to as the "pulse shape." We divide the pulse shape into three logical sections, which correspond to the three phases of the capsule implosion dynamics. The first section is called the "pre-pulse" and is responsible for shock compressing the DT shell to high density.
The pre-pulse consists of a short-duration, high-intensity spike in the laser power (the "picket") and three pedestals, each with increasing laser power. The pre-pulse is followed by the main pulse, which accelerates the shell to moderate implosion velocity (~300 km/s). When the imploding shell stagnates, it forms a central, low-density, high-temperature hot spot and a surrounding high-density, low-temperature shell. The final section of the pulse shape is the igniter pulse. The igniter pulse consists of another pedestal of very high intensity. This section launches a strong shock that arrives just as the shell is stagnating; it further heats the hot spot and prevents the low-pressure shell from coming into pressure equilibrium with the high-pressure hot spot. The combination of the stagnation of the shell and the timely arrival of the igniter shock lifts the temperature of the central hot spot above the 12 keV threshold needed to initiate a fusion burn wave. This burn wave propagates into the cold shell where it produces most of the fusion yield. Even restricting our attention to laser-driven shock ignition, there is a lot of potential variability in the composition and structure of capsules and in the pulse shape. A capsule should have sufficient ablator to drive the implosion, but not in excess. Capsule materials must anticipate the effects of fluid instabilities and laser absorption. The capsule should have realistic fabrication tolerances. Laser powers must be set to produce shocks of an appropriate strength, and pulse features should be appropriately timed. Additionally, there are several physical processes important in describing an implosion. Due to all of these sources of complexity, ICF targets are designed using sophisticated multi-physics codes, such as Hydra [Marinak1996]. Extensive simulation helps identify interesting capsule/pulse shapes before resorting to expensive and difficult experiments. The process of designing a capsule is a highly iterative, time-consuming, interactive process. In this paper we describe the use and modification of Hydra to automate significant sections of the target design process. Specifically, we consider the situation where a capsule design and the pulse shape power levels are specified but the timing of the pulse shape is not. When tuning the pre-pulse, we regard the picket as a fixed quantity functioning as a time fiducial for synchronizing the remaining pre-pulse shocks. The picket exists to increase implosion stability by heating the plasma corona, which increases lateral thermal conduction, which in turn smooths out non-uniformities in the deposition of laser energy. These are not effects that can be resolved in one-dimensional (1D) simulations, but the picket affects the 1D dynamics and must be included. The pre-pulse pedestals should have their start times set such that their associated shocks reach the gas/ice interface within 50 ps of the picket shock [Munro2001]. Spacing the pre-pulse shocks in this way prevents them from coalescing in the ice and unnecessarily shock heating the fuel. Also, shocks gain strength with radial convergence, so ensuring that the pre-pulse shocks escape the fuel while it is at large radius helps minimize the shock heating. The main pulse should be timed to get the maximum fuel confinement for a fixed amount of energy. The appropriate measure of fuel confinement is its peak areal density (ρR = ∫ρ(r) dr). Should the fuel ignite, the burn fraction is approximated by f ≈ ρR/(ρR + 7) [Fraley1974].
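To make the areal-density target concrete, the burn-fraction estimate above can be evaluated directly; the following is a minimal illustration only, with made-up ρR values rather than numbers from any particular simulation.

def burn_fraction(rho_r):
    """Approximate burn fraction from fuel areal density rhoR (g/cm^2): f ~ rhoR / (rhoR + 7)."""
    return rho_r / (rho_r + 7.0)

# A shell compressed to rhoR = 1.8 g/cm^2 would burn roughly 20% of its fuel.
for rho_r in (0.5, 1.0, 1.8, 3.0):
    print(f"rhoR = {rho_r:4.1f} g/cm^2  ->  f = {burn_fraction(rho_r):.2f}")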
Finally, the igniter pulse should be timed so that the target ignites robustly. We implement this as maximizing the fusion yield.

Automatic Tuning

We adopt the general strategy that a tuned pulse can be constructed by serially adding tuned pulse segments. Additionally, we require that each property of a pulse segment can be "tuned" by numerically optimizing an appropriately chosen objective function. Our automated pulse tuner ("autotuner") is structured around an iteration over pairs of pulse properties and objective functions. These properties are the start times of the pulse segments, which are initially turned off. The tuner iterates through each pulse segment, numerically optimizing it based on its associated objective function. In addition to fixed power levels, the combined energy delivered in the pre-pulse and compression pulse is constrained. The total igniter pulse energy is also pre-determined. It is important to realize that the sequence of properties and choice of objective functions embodies a strategy to achieve the desired target behavior. The automation of this strategy does not guarantee the tuned pulse will produce the desired performance characteristics, just that the design strategy was faithfully executed. In addition to a sequence of parameters and a definition of an objective function, an autotuning program requires other software infrastructure. It needs to transform parameter values into input files and run directories. The autotuning program needs to gather from a simulation the information needed by the objective functions. Finally, it needs reasonably efficient numeric optimization routines. We generate Hydra input files from a Python proxy class that wraps a nearly complete Hydra input file. The proxy has simple pre-processor-like capabilities for modifying simple input file statements and for injecting more complicated structures into the input file. For complicated structures, like the laser source specification, it delegates responsibility to special-purpose objects. These objects follow the convention that str(obj) produces a string formatted for inclusion in a Hydra input file. This convention allows objects that define __str__() to lazily evaluate their Hydra representation, while actual strings can be inserted with no boilerplate. Certain objective functions require very high sampling rates and thus must be run within a running simulation. For this purpose, Hydra has an embedded Python interpreter. Since our tuning program and Hydra's embedded interpreter use the same programming language, it is relatively easy for the control program and Hydra to share data structures. There are two obvious methods: object serialization with the pickle module and object reconstruction using repr(). Reconstructed objects are easily modified and more explicit, so we use that method. All of the optimizations use a simple eight-way parallel direct search method. In terms of the number of function evaluations, direct search is much less efficient than Newton-like methods. A typical optimization requires 32 function evaluations, while converging to the same tolerance using the BFGS method requires only 12 function evaluations. However, because the direct search evaluations run eight at a time, it requires only 4 iterations, compared to the 12 iterations with BFGS. We are satisfied with the current performance, but recognize that the use of more sophisticated sampling techniques would likely reduce the number of iterations or the number of parallel function evaluations.
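The proxy-and-__str__ convention described above can be sketched in a few lines. The class names, placeholder syntax (%laser%), and the "tv" keyword below are illustrative assumptions, not Hydra's actual input-file syntax.

class PulseSegment:
    """One pulse-shape feature, rendered lazily as (time, power) pairs."""
    def __init__(self, start, duration, power):
        self.start, self.duration, self.power = start, duration, power

    def __str__(self):
        # str(obj) must yield text ready for inclusion in the input deck.
        t0, t1 = self.start, self.start + self.duration
        return f"tv {t0:.4e} {self.power:.4e}  tv {t1:.4e} {self.power:.4e}"

class InputDeck:
    """Wraps a nearly complete input file and substitutes %name% placeholders."""
    def __init__(self, template):
        self.template = template
        self.fragments = {}

    def set(self, name, value):
        # value may be a plain string or any object defining __str__.
        self.fragments[name] = value

    def render(self):
        text = self.template
        for name, value in self.fragments.items():
            text = text.replace(f"%{name}%", str(value))
        return text

template = "laser source\n%laser%\nend\n"
deck = InputDeck(template)
deck.set("laser", PulseSegment(start=0.0, duration=2.0e-9, power=5.0e12))
print(deck.render())

Because repr() of such simple objects can be evaluated to rebuild them, the same objects can also be handed to the embedded interpreter by reconstruction, which is the sharing method the paper settles on.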
Hydra's Parallel Python Interpreters

Hydra is a massively parallel multi-physics code in use since 1993. The code combines hydrodynamics with radiation diffusion, laser ray trace, and several more packages necessary for ICF design, and has over 40 users at national laboratories and universities. Hydra users set up simulations using a built-in interpreter. The existing interpreter provides access to the program parameters and provides functions to access and manipulate the data in parallel. Users can also access and alter the state while the simulation is running through a message interface that runs at a specific cycle, time or if a specific condition is met. To improve functionality, the Python interpreter was added to Hydra. Python was chosen due to its mature embedding and extending APIs and tools and the large number of third party libraries. The Python interpreter was added by embedding instead of extending Python itself. This choice was made due to the large number of existing input files that could not be easily ported to a new syntax. The Simplified Wrapper and Interface Generator (SWIG) [Beazley2003] is used to wrap the Hydra C++ classes and C functions. Users can send commands to the Python interpreter using two separate methods: a custom interactive interpreter based on the CPython interpreter and a file-based Python code block interpreter. The Hydra code base is based on the message passing interface (MPI) library. This MPI library allows for efficient communication of data between processors in a simulation. The embedded interactive and file-based methods must have access to the Python input source on all of the processors. The MPI library is used to broadcast a line read from stdin or a file on the root processor to all of the other processors in the simulation. The simplest method to provide an interactive parallel Python interpreter would be to override the PyOS_ReadlineFunctionPointer in the Python code base. This function cannot be overridden for non-interactive processes due to a check for an interactive tty. An alternative interactive Python interpreter was developed to handle the parallel stdin access and Python code execution. For parallel file access the code reads the entire file in as a string and broadcasts it to all of the other processors. The string is then sent through the embedded Python interpreter function PyRun_SimpleString. This C function takes a char pointer as the input and runs the string through the same parsing and interpreter calls as a file run with the Python program. One limitation of the PyRun_SimpleString call is the lack of exception information. To alleviate this issue a second method was implemented that uses Py_CompileString followed by PyEval_EvalCode. Py_CompileString uses a file name or input file information to give a better location for the exception. The existing Hydra interpreter is the dominant interpreter and must be given control when Python is not in use. The interactive Python interpreter must check for Hydra control commands as well as compiling, executing and checking errors on Python code. The custom interactive interpreter first reads a line from stdin in parallel. Readline support is enabled, which gives the user line editing and history support similar to running the Python program interactively. The line is then checked for any Hydra-specific control sequences and compiled with Py_CompileStringFlags. If the line compiles with no errors, it is executed using PyEval_EvalCode.
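Hydra implements the scheme just described in C against the CPython embedding API; purely as an illustration of the same broadcast-then-execute pattern, the following Python-level sketch uses mpi4py (which is not part of Hydra) and a hypothetical file name.

from mpi4py import MPI

comm = MPI.COMM_WORLD

def run_python_block(path, namespace):
    """Root rank reads the whole source file; every rank compiles and executes the same string."""
    source = None
    if comm.Get_rank() == 0:
        with open(path) as f:
            source = f.read()
    source = comm.bcast(source, root=0)
    # Compiling with the original file name attached gives exceptions a useful
    # location, which is the motivation for the Py_CompileString + PyEval_EvalCode
    # path over PyRun_SimpleString.
    code = compile(source, path, "exec")
    exec(code, namespace)

if __name__ == "__main__":
    run_python_block("setup_block.py", {"rank": comm.Get_rank()})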
Compilation errors and exceptions are checked for a block continuation indicator, a syntax error, or an EOF. Exceptions are displayed as in Python and are available in the output of all the processors. With the above embedded Python support, users can run arbitrary Python code through the Python interpreter. One of the mandates of the effort to embed the Python interpreter was to provide an enhanced version of the existing Hydra interpreter. In order to provide this functionality, Python must be able to access the information in the running Hydra simulation. This is accomplished by wrapping the Hydra data structures, functions, and parameters using SWIG and exposing them through the hydra Python extension module. The code created by SWIG includes a C++ file compiled into Hydra as a Python extension library and a Python interface file that is serialized and compiled into the Hydra code. The hydra Python module allows users to access and manipulate the Hydra state. Hydra has several types of integer and floating point arrays ranging from one to three dimensional. The multi-dimensional arrays have an additional index to indicate the block in the block-structured mesh. A block defines a portion of the mesh on which the zonal, nodal, edge, and face-based information is defined; a mesh can consist of several blocks. The blocks are then decomposed into sub-blocks or domains depending on how many processors will be used in the simulation. Access to the multi-block parallel data structures is provided by structures wrapped by C++ interface objects and then wrapped in SWIG using the numerical Python (numpy) module to provide the array object in Python. Users control the simulation by scheduling messages that conditionally execute based on cycle number, time or specific states. These messages can be redefined from Python to steer the simulation while it is running. In addition to the messages, there is a callback functionality that will run a user-defined Python function after every simulation cycle has completed. An arbitrary number of callable Python objects can be registered in the code. Objects in the top-level (__main__) namespace are saved to a restart file. This restart file is a portable file object written through the mesh and file I/O library silo [SILO2011]. The Python component of the restart information is a binary string created through the pickle interface augmented with a state-saving module. The Python module used for the state-saving functionality is the savestate module by Oren Tirosh [Tirosh2008]. This module has been augmented with the addition of numpy support and None and Ellipsis singleton object support. Multiple versions of the Hydra code are available to users at any given time. In order to add additional functionality and maintain version integrity, the hydra Python module is embedded in the Hydra code as a frozen module. The Python file resulting from the SWIG generator is marshaled using a script based on the freeze module in the Python distribution. This guarantees the module is always available even if sys.path is altered.

Embedded Diagnostics and Objective Functions

Embedding a Python interpreter within Hydra adds significant capability. One of the first applications was to add a fluid characteristic tracker. Characteristics are eigenvectors of the Euler fluid equations and represent the highest possible signal speed.
Characteristics located near a shock, the characteristic will naturally drift toward the shock front or be swept up in int, consequently they can be used to identify the location of the shock front without the difficulty of post processing the moving Lagrangian mesh. The following initial value problem describes the radial location of the characteristic as the flow evolves:ṙ = v(r) − c s (r). u(r) and c s (r) are the flow velocity and sound speed at the characteristic's current location r. Our characteristic tracker implementation is aware of the pulse shape and starts tracking a new characteristic for each significant feature of the pulse shape. Characteristic positions must be updated every cycle and the tracker is registered as a callback. Since the tracker is updated every cycle, it is easy to trigger other events based on the behavior of the characteristic. The first use is trigger the simulation to end just after shock breakout time. This is very important as Hydra's only other relevant mechanism for ending the simulation is a maximum simulation time. Burn is explicitly turned off for these scans, so Hydra's burn rate monitor is not relevant. Setting a time limit either leads to under-estimating the shock breakout time and stopping the calculation before gathering important information or setting the maximum time to be very large and wasting many compute cycles. Additionally, we use the location of characteristics to set the frequency Hydra writes output files. Different stages of the simulation have disparate time scales and it is useful to add resolution only when it is needed. The most important application of the characteristic tracker is producing smooth, non-noisy measurements of the shock breakout time for the shock syncing objective function. To construct a shock syncing objective function, first consider the case of two radially converging shocks launched at two different times from comparable radii. The second shock is faster since the wake of the first is warmer and the sound speed is larger. The second shock will eventually overtake the first. If we define a "shock breakout time" as when the first shock enters the gas region, we can plot the shock breakout time as a function of the launch time of the second shock (black line in 2). The appropriate objective function should maximize the breakout time (recognizing that it saturates for large launch times) while also minimizing the launch time of the second shock. We construct an aggregate objective function as a linear combination of the two constraints ( f (t) = ωt − b(t)). We find an tuned value of 0.01m, where m is the slope between the end points of the search region. The parallel direct search optimization method typically converges within four iterations. Recall from the first section the pre-pulse launches four shocks, all of which should coalesce at the gas-ice interface at the same time. Figure 3 shows the convergence of the pre-pulse shocks well within the required 50 ps tolerance. It should be noted that this shock syncing method only relies on tracking the first shock. Characteristics will sometimes fail to locate the shock if they are located in a region with heat sources that are not sonically coupled to the plasma. Deeply penetrating x-rays, supra-thermal electrons and heavy ion beams are examples. However, it is expected that the ablator and the DT shell should provide sufficient insulation for the picket shock tracker to locate its shock. 
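A per-cycle characteristic tracker and the shock-sync objective described above can be sketched as follows. The hydra calls (register_callback, the zonal profile accessors, stop) and the helper names are illustrative assumptions about the interface, not Hydra's actual API.

import hydra
import numpy as np

class CharacteristicTracker:
    def __init__(self, r_start):
        self.r = r_start
        self.breakout_time = None

    def update(self):
        # Advance r by dr/dt = u(r) - c_s(r) over the last cycle.
        r_zones = hydra.zonal_radius()
        u = np.interp(self.r, r_zones, hydra.zonal_velocity())
        cs = np.interp(self.r, r_zones, hydra.zonal_sound_speed())
        self.r += (u - cs) * hydra.dt()
        if self.breakout_time is None and self.r <= hydra.ice_gas_radius():
            self.breakout_time = hydra.time()
            hydra.stop()          # end the run just after shock breakout

tracker = CharacteristicTracker(r_start=hydra.ablator_outer_radius())
hydra.register_callback(tracker.update)

# Objective for syncing a pedestal launched at time t against the picket shock:
# minimize f(t) = omega * t - b(t), with b(t) the measured breakout time.
def shock_sync_objective(t, breakout_time, omega):
    return omega * t - breakout_time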
Another important embedded diagnostic monitors the fuel areal density (ρR). When tuning the main pulse, the diagnostic monitors the DT ρR, reports the peak value and stops the calculation when the current ρR has fallen to 50% of the peak value. The main pulse start time is chosen to maximize the peak ρR. The igniter pulse start time is tuned by maximizing the fusion yield. Figure 4 shows a peak ρR of 1.8 g/cm^2 with a time width of 500 ps. The peak ρR is typically found within three iterations. The width of the peak corresponds to mistiming robustness. Hydra is already well suited for tuning the igniter pulse for maximum fusion yield and needs no additional diagnostics. Hydra monitors the burn rate and has triggers to end the calculation upon completion of burn. Hydra also reports the total fusion yield.

Conclusions

Tuning an ICF pulse to a target is normally a labor-intensive, high-latency process. We described the desired properties of a tuned pulse and constructed objective functions that identify the tuned properties. Collecting information for the objective functions requires high-frequency sampling of the simulation, and this data must be gathered within the simulation rather than by post-processing a completed simulation. To enable introspective simulations, we added a parallel Python interpreter to Hydra. From these pieces, we constructed a program that tunes a pulse without human intervention. The net result is a significant time savings over manual tuning. Where a typical manual tuning takes several days of attention, an automated tuning takes around 4 hours to execute the same number of simulations. This work was performed under the auspices of the U.S. DOE by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
A Comparison of Electricity Generation System Sustainability among G20 Countries

Planning for electricity generation systems is a very important task and should take environmental and economic factors into account. This paper reviews the existing metrics and methods in evaluating energy sustainability, and we propose a sustainability assessment index system. The input indexes include generation capacity, generation cost, and land use. The output indexes include desirable and undesirable parts. The desirable outputs are total electricity generation and job creation. The undesirable outputs are external supply risk and external costs associated with the environment and health. The super-efficiency data envelopment analysis method is used to calculate the sustainability of electricity generation systems of 23 countries from 2005 to 2014. The three input indexes and three undesirable output indexes are used as the input variables. The two desirable outputs are used as the output variables. The results show that most countries' electricity generation sustainability values have decreasing trends. In addition, nuclear and hydro generation have positive effects. Solar, wind, and fossil fuel generation have negative effects on sustainability.

Introduction

Electric power usage has been growing over time. The electricity generation system is becoming an important factor in the energy sources of industrialized countries. The global electricity generation installed capacity was 4114 gigawatts (GW) in 2005, and it increased to 5699.36 GW in 2014. The electricity generation system of each country has a different constitution and structure, which leads to different performances in terms of electric power quality, prices, emissions, and so on. In order to improve the efficiency and sustainability of electricity generation systems, many countries have worked out corresponding measures. The Chinese government announced an ambitious plan to raise the proportion of non-fossil fuel energy to 20% by 2030. China also plans to increase nuclear generating capacity to 58 GW, with 30 GW more under construction, by 2020 [1]. The U.S. plans to adjust its electricity generation shares to 39% coal, 27% natural gas, 18% nuclear, 16% renewable energy, and 1% oil and other liquids by 2035 [2]. Japan has decreased the proportion of fossil fuels in the power sector and plans to adjust it to 55% by 2030. India also plans to adjust its electricity generation mix to 31% coal, 19% solar PV, 16% hydro, 15% wind, 11% gas, and other sources [3]. However, because each country only considers its own situation when making its energy plan, the rationality and sustainability of these plans have not been calculated and compared with other countries' electricity generation plans.
As a result of these aforementioned circumstances, establishing an evaluation index system and a model to measure the rationality and sustainability of electricity generation systems plays a vital role in this area of research. This paper aims to solve this problem by exploring the relationships between electricity generation systems, society, the economy, the environment, and resources. We gathered data from consultant reports and government websites. The remainder of the paper is as follows: Section 2 overviews the research literature about sustainability at present; Section 3 introduces the status of electricity generation and the developmental tendency of the world, particularly the G20 countries; Section 4 sets up the electricity generation system sustainability evaluation indexes and model; Section 5 presents the results and a discussion; Section 6 summarizes conclusions and defines research directions for future work.

Literature Review

Sustainable development is defined as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs" [4]. Since providing an increasing amount of energy services is a necessary precondition for eradicating hunger and poverty, and even limiting the global population increase, since about three-quarters of anthropogenic emissions of CO2 are released by the energy system, since today's energy system consumes a major share of finite fossil resources and is the single most important source of air pollution, and since securing economic productivity will not be possible without a functioning energy infrastructure and competitive energy prices, energy system sustainability evaluation research is very important. There has been considerable research conducted concerning renewable energy, micro-generation technologies, nuclear fission power, photovoltaic solar power plants, thermal power generation, wind power, biomass generation, and so on. Some published papers are shown in Table 1 (columns: Authors, Evaluation Object, Indexes, Method); for example, Thomas Kurka [10] evaluates bioenergy with the Analytic Hierarchy Process using indexes such as air quality, waste, economic viability, regional energy self-sufficiency, efficiency, technology, regional job creation, and regional food security, while Aviel Verbruggen, Erik Laes, and Sanne Lemmens [11] evaluate nuclear power. From Table 1, we find that most indexes evaluating the sustainability of energy systems come from the environment, the economy, resources, society, and security. The evaluation methods include the Analytic Hierarchy Process (AHP) method, mathematical models, and data envelopment analysis. The AHP method, developed by Saaty [16], is a user-friendly, simple, and logical multi-criteria analysis (MCA) method, but it is subjective, since the rationality of the evaluation depends on the experience of the decision-makers. A mathematical model is a description of a system using mathematical concepts and language. Here, the sustainability index is calculated with a mathematical equation with several variables. However, it is very difficult to set up the coefficients of the equation accurately. Since data envelopment analysis (DEA) was set up by Charnes et al.
(1978) [17], DEA, as a non-parametric approach to efficiency measurement, has been widely studied and applied. DEA is fit to analyze efficiencies in systems featuring multiple inputs and outputs. Many researchers have addressed the applications of different DEA models in various ranking and efficiency measurement problems. Sustainability evaluation of an electric power generation system is known as an efficiency problem, for which DEA is a commonly used method. However, the DEA method has two main disadvantages: (1) results arising from DEA are sensitive to the selection of inputs and outputs; and (2) the number of efficient firms on the frontier tends to increase with the number of input and output variables [18]. To deal with the above disadvantages, we select inputs and outputs based on sustainability theory according to previous works.

Electricity Generation Status and Future

By 2014, the global electricity generation installed capacity was 5699.4 GW. World power generation capacities from all energy sources in 2005 and 2014 are shown in Figure 1. Compared with 2005, the proportions of fossil fuel, hydro, and nuclear electricity capacities in 2014 slightly declined, and the proportions of wind and other electricity capacities slightly increased. Fossil fuel electricity generation is still the main source and constituted over 60% of electricity generation in 2014. Meanwhile, fossil fuel electricity is the main contributor of CO2, NOx, SO2, and other harmful gases among all energy sources. It contributed 42% of total CO2 emissions in 2013. Thus, many countries nowadays are trying to reduce the amount of fossil fuel electricity by rolling out incentive policies. Under a new policy scenario (which takes into account the policies and implementing measures affecting energy markets adopted as of mid-2015), the fossil fuel electricity generation capacity will be less than 50% of the whole in 2040 [19]. The world's electricity generation capacities in the future are shown in Figure 2.
G20 is an international forum comprised of governments and central bank governors from 20 major economies. The members include 19 individual countries and the European Union. Collectively, the G20 economies account for around 85% of the gross world product (GWP), 80% of world trade (or, if excluding EU intra-trade, 75%), and two-thirds of the world population. In this paper, we study the electricity generation system sustainability of Argentina, Australia, Brazil, Canada, China, France, Germany, India, Indonesia, Italy, Japan, South Korea, Mexico, Netherlands, Russia, Saudi Arabia, South Africa, Spain, Sweden, Switzerland, Turkey, the United Kingdom, and the United States. These 23 countries account for about 80% of total global electricity generation capacity. The electricity generation proportions of the 23 countries from all energy sources from 2005 to 2014 are shown in Figure 3. Solar PV and wind electricity generation have high growth rates, with average growth rates of 44.10% and 21.89%, respectively, from 2005 to 2014. The proportions of solar PV and wind in the 23 countries were 0.15% and 1.60% in 2005, and 2.61% and 6.67% in 2014. The 23 countries show different developing trends in the energy sources used for electricity generation. From the historical data, we find that most countries have high fossil fuel electricity generation proportions, such as Argentina, Australia, China, India, Indonesia, Italy, Japan, South Korea, Turkey, the United Kingdom, and the United States. Three countries have high hydroelectric installed capacity proportions, namely Brazil, Canada, and Switzerland. France has a high proportion of nuclear generation.
Germany, Spain, and Sweden have relatively balanced sources of electricity generation. In this paper, we set up an evaluation model to calculate the sustainability of these electricity generation systems.

The Evaluation Indexes of Electricity Generation System Sustainability

According to the concept of sustainable development, the evaluation indexes are set up from four aspects: society, the economy, the environment, and security. The specific indexes are shown in Figure 4. The output indexes include desirable (good) and undesirable (bad) outputs. Electric power generation and job creation are desirable outputs. External supply risk and external costs associated with the environment and health are undesirable outputs.

Analysis Process

In this paper, we use a super-efficiency DEA method to calculate the sustainability of an electricity generation system and use Spearman's correlation test to determine the factors influencing sustainability. The detailed analysis processes are shown in Figure 5.
The Super-Efficiency DEA (SE-DEA) Model

In the SE-DEA model, the efficiency scores are obtained by eliminating the data of the decision-making unit (DMU) to be evaluated from the solution set [20]. In this paper, the DMUs represent the electricity generation systems of the G20 countries from 2005 to 2014. The SE-DEA model is defined as follows: Equation (1) computes the score of the DMU under evaluation by removing it from the constraints. Here, θ*_S indicates the score of the DMU under consideration, and we suppose that θ*_S shows the optimal amount. x_io and y_ro are, respectively, the i-th input and r-th output of the DMU o under evaluation, x_ij is the i-th input of the j-th DMU, y_rj is the r-th output of the j-th DMU, and λ_j is the parameter that needs to be estimated. In this paper, input indexes (I1, I2, I3) and undesirable output indexes (O3, O4, O5) are used as the input variables (x) of the SE-DEA model. Desirable output indexes (O1, O2) are used as the output variables (y) of the SE-DEA model.
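The display for Equation (1) is not reproduced in this text, so purely as an illustration the sketch below implements a standard input-oriented, constant-returns super-efficiency formulation of the kind described above; the orientation, the returns-to-scale choice, and the toy data are our assumptions rather than the authors' exact model.

import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); returns theta*_S for DMU o."""
    n, m = X.shape
    _, s = Y.shape
    others = [j for j in range(n) if j != o]
    # Decision variables: theta, then lambda_j for every j != o.
    c = np.zeros(1 + len(others))
    c[0] = 1.0                                    # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                            # sum_j lambda_j x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[others, i])))
        b_ub.append(0.0)
    for r in range(s):                            # sum_j lambda_j y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[others, r])))
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0.0, None)] * len(others)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[0] if res.success else np.nan

# Toy data: 4 DMUs, 2 inputs (e.g. capacity, cost), 1 output (e.g. generation).
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 4.0]])
Y = np.array([[10.0], [8.0], [9.0], [7.0]])
print([round(super_efficiency(X, Y, o), 3) for o in range(4)])

In a super-efficiency model the efficient units can receive scores above one, which is why the paper can report average sustainability scores "over one" for some countries.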
Spearman's Correlation Test

Spearman's rank-order correlation is equivalent to Pearson's product-moment correlation coefficient performed on the ranks of the data rather than the raw data, and it is the nonparametric version of Pearson's product-moment correlation. Spearman's correlation coefficient can measure the strength of association between two ranked variables. Its calculation equation is shown in Equation (2) [21]:

$r_s = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}$   (2)

Here, r_s is Spearman's correlation, and x̄, ȳ are the average values of the two rank variables x_i and y_i. Spearman's rank correlation yields a value −1 ≤ r_s ≤ 1. The higher the absolute value of r_s, the stronger the correlation between the two variables. A positive value suggests a positive correlation, while a negative value means a negative correlation.

Data Collection

We collected information about 23 countries based on data recorded from 2005 to 2014. The data were gathered from the U.S. Energy Information Administration and the World Bank World Development Indicators. The preliminary data include the installed electricity capacity of the 23 countries by energy source and the corresponding net generation, namely nuclear installed capacity (NC) and net generation (NUG), hydroelectric installed capacity (HC) and net generation (HG), wind installed capacity (WC) and net generation (WG), fossil fuel installed capacity (FC) and net generation (FG), geothermal installed capacity (GC) and net generation (GG), solar, tide, and wave installed capacity (SC) and net generation (SG), and biomass and waste electricity net generation (BG). Table 2 shows the descriptive statistics of the data.
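For illustration only, the rank correlation of Equation (2) above can be evaluated with standard tools; the numbers below are hypothetical and are not drawn from the paper's dataset.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical example: share of nuclear+hydro capacity vs. SE-DEA sustainability score.
capacity_share = np.array([0.78, 0.55, 0.31, 0.12, 0.45, 0.64])
sustainability = np.array([1.12, 0.95, 0.70, 0.52, 0.88, 1.01])

r_s, p_value = spearmanr(capacity_share, sustainability)
print(f"r_s = {r_s:.3f}, p = {p_value:.3f}")   # positive r_s -> positive association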
Parameter Collection

In order to obtain the values of the input and output indexes, some parameters are needed. The parameters include the levelized cost of electricity generation (LCOE), land use (LU), external costs associated with the environment (ECAWE), external costs associated with health (ECAWH), the number of employees per unit of electricity produced (JC), and external supply risk (ESR). The values [22] of the parameters mentioned above are shown in Table 3.

The results show that the sustainability developing trends of the 23 countries' electricity generation systems can be divided into three groups. The details of their developmental trends are shown in Figure 6a-c.

Influencing Factors Analysis

In this paper, we analyze the relationship between the installed capacities of different energy sources and the sustainability of the electricity generation system through Spearman's correlation test. The test results are shown in Table 6. Note: ** means that the probability of no significant correlation is not more than 0.01. The calculation results show that nuclear and hydroelectric electricity generation installed capacities have positive relationships with sustainability, while solar, wind, fossil fuel, and geothermal electricity generation installed capacities have negative relationships with sustainability.

Conclusions

In this paper, we set up a sustainability evaluation index system of electricity generation with undesirable outputs for G20 countries. The evaluation index system includes three input indexes and five output indexes. We use the three input indexes and three undesirable output indexes as the input variables of the SE-DEA method, and the two desirable output indexes as the output variables of the SE-DEA method. We conducted an empirical study on the sustainability of the electricity generation systems in 23 countries based on data from 2005 to 2014. The results indicate that most countries' electricity generation systems have low sustainability scores. Only France and Switzerland have average sustainability scores over one for 2005 to 2014. France has the highest proportion of nuclear electricity generation installed capacity, which is over 50%. Switzerland has the highest combined proportion of nuclear and hydroelectric electricity generation, over 80% of its total. The correlation test results indicate that nuclear and hydroelectric electricity generation have a positive influence on the sustainability of an electricity generation system. The results also show that fossil fuel, solar, wind, and geothermal electricity generation have a negative relationship with sustainability. Future studies are encouraged to gain more insight into the optimal structure calculation of electricity generation systems considering the life cycle cost of electricity generation technology. Meanwhile, a greater amount of data, such as additional environmental cost data for energy technologies, might also be needed.
Figure 1. World electricity generation capacities from all energy sources in 2005 and 2014.

Figure 2. World electricity generation capacities: developing trends under a new policy scenario.

These 23 countries account for about 80% of total global electricity generation capacity. The electricity generation proportions of the 23 countries from all energy sources from 2005 to 2014 are shown in Figure 3. Solar PV and wind electricity generation have high growth rates; their average growth rates are 44.10% and 21.89%, respectively, from 2005 to 2014. The proportions of solar PV and wind in the 23 countries were 0.15% and 1.60% in 2005, and 2.61% and 6.67% in 2014. The 23 countries have different developing trends in the energy sources used for electricity generation. From the historical data, we find that most countries have high fossil fuel electricity generation proportions, such as Argentina, Australia, China, India, Indonesia, Italy, Japan, South Korea, Turkey, the United Kingdom, and the United States. Three countries have high hydroelectric installed capacity proportions, namely Brazil, Canada, and Switzerland. France has a high proportion of nuclear electricity generation. Germany, Spain, and Sweden have relatively balanced sources of electricity generation. In this paper, we set up an evaluation model to calculate the sustainability of these electricity generation systems.

Figure 3. The electricity generation proportions of the 23 countries from all energy sources (2005-2014).

Figure 4. The sustainability evaluation indexes of electricity generation.

Figure 5. The process by which electricity generation system sustainability is determined.

Figure 6. (a) The sustainability development trend of the upward trend group; (b) the sustainability development trend of the downward trend group; (c) the sustainability development trend of the fluctuation group.

Table 1. Information on evaluation indexes and methods in related papers.

Table 2. Descriptive statistics of preliminary data of the 23 countries, 2005-2014.
In Table 3, Nu indicates nuclear electricity generation, H indicates hydroelectric electricity generation, S indicates solar, tide, and wave electricity generation, W indicates wind electricity generation, F indicates fossil fuel electricity generation, G indicates geothermal electricity generation, and B indicates biomass and waste electricity generation. There are no ECAWE and ECAWH data for geothermal electricity generation. Kagel and Gawell (2005) [23]
5,385.8
2016-12-07T00:00:00.000
[ "Economics" ]
Introduction

Turkey started to distribute Global System for Mobile Communications (GSM) 900 licenses in 1998. Turkcell and Telsim were the first players in the GSM market and bought licenses, in that order. In 2000, GSM 1800 licenses were bought by ARIA and AYCELL, respectively. Since then, the GSM market has saturated, and customers have started to switch to other operators to obtain cheap services, number mobility between GSM operators, and availability of 3G services. One of the major problems of GSM operators has been churning customers. Churning means that subscribers may move from one operator to another for reasons such as the cost of services, corporate capability, credibility, customer communication, customer services, roaming and coverage, call quality, billing, and cost of roaming (Mozer et al., 2000). Therefore, churn management becomes an important issue for GSM operators to deal with. Churn management includes monitoring the aims and behaviours of subscribers, and offering new alternative campaigns to improve the expectations and satisfaction of subscribers. Quality metrics can be used to determine indicators to identify inefficiency problems. Metrics of churn management are related to network services, operations, and customer services. When subscribers are clustered or predicted for the arrangement of campaigns, telecom operators should focus on demographic data, billing data, contract situations, number of calls, locations, tariffs, and credit ratings of the subscribers (Yu et al., 2005). Predictions of customer behaviour, customer value, customer satisfaction, and customer loyalty are examples of some of the information that can be extracted from the data stored in a company's data warehouses (Hadden et al., 2005). It is well known that the cost of retaining a subscriber is much lower than that of gaining a new subscriber from another GSM operator (Mozer et al., 2000). When unhappy subscribers are predicted before they churn, operators may retain them with new offerings. In this situation, in order to implement efficient campaigns, subscribers have to be segmented into classes such as loyal, hopeless, and lost. This segmentation has the advantage of defining customer intentions. Many segmentation methods have been applied in the literature. Thus, churn management can be supported with data mining modelling tools to predict hopeless subscribers before they churn. We can follow the hopeless groups by clustering the profiles of customer behaviours. We can also benefit from the prediction advantages of data mining algorithms. Data mining tools have been used to analyze many profile-related variables, including those related to demographics, seasonality, service periods, competing offers, and usage patterns. Leading indicators of churn potentially include late payments, numerous customer service calls, and declining use of services.
In data mining, one can choose a high or low degree of granularity in defining variables. By grouping variables to characterize different types of customers, the analyst can define a customer segment. A particular variable may show up in more than one segment. It is essential that data mining results extend beyond obvious information. Off-the-shelf data mining solutions may provide little new information and thus serve merely to predict the obvious. Tailored data mining solutions can provide far more useful information to the carrier (Gerpott et al., 2001). Hung et al. (2006) proposed a data mining solution with decision trees and neural networks to predict churners, assessing model performance by LIFT (a measure of the performance of a model at segmenting the population) and hit ratio. Data mining approaches that predict customer behaviours from CDRs (call detail records) and demographic data are considered in (Wei et al., 2002; Yan et al., 2005; Karahoca, 2004). In this research, the main motivation is investigating the best data mining model(s) for churning, according to measures of model performance. We have utilized data sets obtained from a Turkish mobile telecom operator's data warehouse system to analyze churn activities.

Material and methods

24,900 loyal GSM subscribers were randomly selected from the data warehouses of GSM operators located in Turkey. 7,600 hopeless candidate churners were filtered from the databases over a period of a year. In real-life situations, annual churn rates of 3 to 6 percent are usually observed. Because of computational complexity, if all parameters within subscriber records were used in predicting churn, data mining methods could not estimate the churners. Therefore, we selected 31% hopeless churners for our dataset and discarded most of the loyal subscribers. In pattern recognition applications, the usual way to create input data for the model is through feature extraction. In feature extraction, descriptors or statistics of the domain are calculated from raw data. Usually, this process involves some form of data aggregation. The unit of aggregation in time is one day. The feature mapping transforms the transaction data ordered in time into static variables residing in feature space. The features used reflect the daily usage of an account. The number of calls and the summed length of calls were used to describe the daily usage of a mobile phone. National and international calls were regarded as different categories. Calls made during business hours, evening hours, and night hours were also aggregated to create different features. The parameters listed in Table 1 were taken into account to detect churn intentions. These attributes are used as input parameters for the churn detection process and are expected to have a higher impact on the outcome (whether churning or not). In order to reduce the computational complexity of the analysis, some of the fields in the set of variables of the corporate data warehouse are ignored. A number of different methods were considered to predict churners from subscribers. In this section, brief definitions of the data mining methods are given. The methods described in this section are general methods used for modelling the data. These methods are JRip, PART, Ridor, OneR, Nnge, Decision Table, Conjunctive Rules, AD Trees, IB1, Bayesian networks, and ANFIS. Except for ANFIS, all the methods are executed in the WEKA (Waikato Environment for Knowledge Analysis) data mining software [Frank & Witten, 2005].
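As an illustration of the daily-aggregation feature extraction described above, here is a minimal pandas sketch. The column names (subscriber_id, start_time, duration, is_international) are hypothetical; the operator's real schema is not given in the chapter:

```python
# Aggregate raw call detail records (CDRs) into per-subscriber daily-usage
# features: call counts and summed durations, split by national vs
# international calls and by business/evening/night hours.
import pandas as pd

cdrs = pd.DataFrame({
    "subscriber_id": [1, 1, 1, 2, 2],
    "start_time": pd.to_datetime([
        "2004-03-01 09:15", "2004-03-01 20:30", "2004-03-02 02:10",
        "2004-03-01 10:05", "2004-03-01 22:45"]),
    "duration": [120, 340, 60, 900, 45],          # seconds
    "is_international": [False, True, False, False, False],
})

# Bin each call into night (0-8h), business (8-17h), or evening (17-24h).
hour = cdrs["start_time"].dt.hour
cdrs["daypart"] = pd.cut(hour, bins=[0, 8, 17, 24], right=False,
                         labels=["night", "business", "evening"])

# One day is the unit of aggregation: count calls and sum durations per
# subscriber, day, daypart, and national/international category.
features = (cdrs
            .groupby(["subscriber_id", cdrs["start_time"].dt.date,
                      "daypart", "is_international"], observed=True)
            .agg(n_calls=("duration", "size"),
                 total_duration=("duration", "sum"))
            .reset_index())
print(features)
```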
JRip method

JRip implements the propositional rule learner "Repeated Incremental Pruning to Produce Error Reduction" (RIPPER), as proposed by [Cohen, 1995]. JRip is a rule learner, in principle similar to the commercial rule learner RIPPER. The RIPPER rule learning algorithm is an extended version of the learning algorithm IREP (Incremental Reduced Error Pruning). It constructs a rule set in which all positive examples are covered, and its algorithm performs efficiently on large, noisy datasets. Before building a rule, the current set of training examples is partitioned into two subsets, a growing set (usually 2/3) and a pruning set (usually 1/3). The rule is constructed from examples in the growing set. The rule set begins empty, and rules are added incrementally until no negative examples are covered. After growing a rule from the growing set, conditions are deleted from the rule in order to improve the performance of the rule set on the pruning examples. To prune a rule, RIPPER considers only a final sequence of conditions from the rule and selects the deletion that maximizes the pruning function [Frank & Witten, 2005].

PART method

The PART algorithm combines two common data mining strategies: the divide-and-conquer strategy for decision tree learning and the separate-and-conquer strategy for rule learning. The divide-and-conquer approach selects an attribute to place at the root node and "divides" the tree by making branches for each possible value of the attribute. The process then continues recursively for each branch, using only those instances that reach the branch. The separate-and-conquer strategy is employed to build rules. A rule is derived from the branch of the decision tree explaining the most cases in the dataset, instances covered by the rule are removed, and the algorithm continues creating rules recursively for the remaining instances until none are left. The PART implementation differs from standard approaches in that a pruned decision tree is built for the current set of instances, the leaf with the largest coverage is made into a rule, and the tree is discarded. Building and discarding decision trees to create a rule, rather than building a tree incrementally by adding conjunctions one at a time, avoids a tendency to over-prune, a characteristic problem of the basic separate-and-conquer rule learner. The key idea is to build a partial decision tree instead of a fully explored one. A partial decision tree is an ordinary decision tree that contains branches to undefined subtrees. To generate such a tree, the construction and pruning operations are integrated in order to find a stable subtree that can be simplified no further. Once this subtree has been found, tree building ceases and a single rule is read off. The tree-building algorithm splits a set of examples recursively into a partial tree. The first step chooses a test and divides the examples into subsets. PART makes this choice in exactly the same way as C4.5 [Frank & Witten, 2005].

Ridor method

Ridor generates the default rule first and then the exceptions for the default rule with the least (weighted) error rate. It then generates the best exception rules for each exception and iterates until no exceptions are left. Thus it performs a tree-like expansion of exceptions, where the leaves have only default rules but no exceptions. The exceptions are a set of rules that predict the improper instances in default rules [Gaines & Compton, 1995].
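The separate-and-conquer loop that JRip, PART, and Ridor all build on can be sketched in a few lines. The following is a schematic toy version, not WEKA's implementation: grow_rule() stands in for each algorithm's own rule-construction step (e.g., RIPPER's grow/prune split), and the attribute names are hypothetical:

```python
# Schematic separate-and-conquer: grow one rule, remove the positive
# examples it covers, repeat until all positives are covered.

def covers(rule, example):
    """A rule is a dict of attribute -> required value."""
    return all(example.get(attr) == val for attr, val in rule.items())

def grow_rule(positives, negatives):
    """Toy heuristic: greedily add conditions (taken from the first
    positive example) that exclude negatives."""
    rule, negs = {}, list(negatives)
    for attr, val in positives[0].items():
        remaining = [n for n in negs if covers({**rule, attr: val}, n)]
        if len(remaining) < len(negs):
            rule[attr], negs = val, remaining
        if not negs:
            break
    return rule

def separate_and_conquer(positives, negatives):
    rules, pos = [], list(positives)
    while pos:
        rule = grow_rule(pos, negatives)
        covered = [p for p in pos if covers(rule, p)]
        if not covered:          # cannot make progress; stop
            break
        rules.append(rule)
        pos = [p for p in pos if not covers(rule, p)]
    return rules

churners = [{"late_payments": "yes", "tariff": "basic"},
            {"late_payments": "yes", "tariff": "premium"}]
loyal = [{"late_payments": "no", "tariff": "basic"}]
print(separate_and_conquer(churners, loyal))
# -> [{'late_payments': 'yes'}]: one rule covering all churners
```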
OneR method

OneR generates a one-level decision tree, expressed in the form of a set of rules that all test one particular attribute. 1R is a simple, cheap method that often comes up with quite good rules for characterizing the structure in data [Frank & Witten, 2005]. It turns out that simple rules frequently achieve surprisingly high accuracy [Holte, 1993].

Nnge

Nnge is a nearest-neighbor-like algorithm using non-nested generalized exemplars, which are hyper-rectangles that can be viewed as if-then rules [Martin, 1995]. In this method, we can set the number of attempts for generalization and the number of folders for mutual information.

Decision tables

As stated by Kohavi, decision tables are one of the simplest possible hypothesis spaces, and they are usually easy to understand. A decision table is an organizational or programming tool for the representation of discrete functions. It can be viewed as a matrix where the upper rows specify sets of conditions and the lower ones sets of actions to be taken when the corresponding conditions are satisfied; thus each column, called a rule, describes a procedure of the type "if conditions, then actions". The performance of this method is quite good on some datasets with continuous features, indicating that many datasets used in machine learning may not require these features, or that these features may have few values [Kohavi, 1995].

Conjunctive rules

This method implements a single conjunctive rule learner that can predict numeric and nominal class labels. A rule consists of antecedents "AND"ed together and the consequent (class value) for the classification/regression. In this case, the consequent is the distribution of the available classes (or the mean, for a numeric value) in the dataset.

AD trees

AD Trees can be used for generating alternating decision (AD) trees. The number of boosting iterations needs to be manually tuned to suit the dataset and the desired complexity/accuracy tradeoff. Induction of the trees has been optimized, and heuristic search methods have been introduced to speed up learning [Freund & Mason, 1999].

Nearest neighbour instance-based learner (IB1)

IBk is an implementation of the k-nearest-neighbors classifier that employs a distance metric. By default, it uses just one nearest neighbor (k = 1, i.e., IB1), but the number can be specified [Frank & Witten, 2005].

Bayesian networks

Graphical models such as Bayesian networks supply a general framework for dealing with uncertainty in a probabilistic setting and thus are well suited to tackle the problem of churn management. The term "Bayesian networks" was coined by Pearl (1985). Every graph of a Bayesian network codes a class of probability distributions. The nodes of the graph correspond to the variables of the problem domain. Arrows between nodes denote allowed (causal) relations between the variables. These dependencies are quantified by conditional distributions for every node given its parents.
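Of the methods above, OneR is simple enough to sketch end-to-end. The following is a compact, hedged implementation with made-up attribute names, not WEKA's code:

```python
# 1R: for every attribute, build one rule branch per value predicting the
# majority class, then keep the attribute whose rule set errs least.
from collections import Counter

def one_r(rows, target):
    best_attr, best_rules, best_errors = None, None, float("inf")
    for attr in (a for a in rows[0] if a != target):
        rules, errors = {}, 0
        for value in {r[attr] for r in rows}:
            classes = Counter(r[target] for r in rows if r[attr] == value)
            majority, count = classes.most_common(1)[0]
            rules[value] = majority
            errors += sum(classes.values()) - count   # misclassified rows
        if errors < best_errors:
            best_attr, best_rules, best_errors = attr, rules, errors
    return best_attr, best_rules

data = [
    {"late_payments": "yes", "usage": "low",  "churn": "yes"},
    {"late_payments": "yes", "usage": "high", "churn": "yes"},
    {"late_payments": "no",  "usage": "low",  "churn": "yes"},
    {"late_payments": "no",  "usage": "high", "churn": "no"},
]
attr, rules = one_r(data, "churn")
print(attr, rules)   # the winning attribute and its one-level rule set
```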
ANFIS

A fuzzy logic system (FLS) can be seen as a nonlinear mapping from the input space to the output space. The mapping mechanism is based on the conversion of inputs from the numerical domain to the fuzzy domain with the use of fuzzy sets and fuzzifiers, and then applying fuzzy rules and a fuzzy inference engine to perform the necessary operations in the fuzzy domain [Jang, 1992; Jang, 1993]. The result is transformed back to the arithmetical domain using defuzzifiers. The ANFIS approach uses Gaussian functions for fuzzy sets and linear functions for the rule outputs. The parameters of the network are the mean and standard deviation of the membership functions (antecedent parameters) and the coefficients of the output linear functions (consequent parameters). The ANFIS learning algorithm is used to obtain these parameters. This learning algorithm is a hybrid algorithm consisting of gradient descent and the least-squares estimate. Using this hybrid algorithm, the rule parameters are recursively updated until an acceptable error is reached. Iterations have two steps, one forward and one backward. In the forward pass, the antecedent parameters are fixed, and the consequent parameters are obtained using the linear least-squares estimate. In the backward pass, the consequent parameters are fixed, the output error is back-propagated through the network, and the antecedent parameters are accordingly updated using the gradient descent method. Takagi and Sugeno's fuzzy if-then rules are used in the model. The output of each rule is a linear combination of the input variables and a constant term. The final output is the weighted average of each rule's output. The basic learning rule of the proposed network is based on gradient descent and the chain rule [Werbos, 1974]. In designing an ANFIS model, the number of membership functions, the number of fuzzy rules, and the number of training epochs are important factors to be considered. If they are not selected appropriately, the system will over-fit the data or will not be able to fit the data. The adjusting mechanism works using a hybrid algorithm combining the least-squares method and the gradient descent method with a mean square error criterion. The aim of the training process is to minimize the training error between the ANFIS output and the actual objective. This allows a fuzzy system to learn its features from the data it observes and implement these features in the system rules. As a Type III fuzzy control system, ANFIS has the following layers, as represented in Figure 1.

Layer 0: consists of the plain input variable set.

Layer 1: every node in this layer is a square node with a node function as given in Eq. (1),

$$ \mu_A(x) = \frac{1}{1 + \left| \dfrac{x - c}{a} \right|^{2b}} \tag{1} $$

where A is a generalized bell fuzzy set defined by the parameters {a, b, c}, in which c is the middle point, b is the slope, and a is the deviation.

Layer 2: the function is a T-norm operator that performs the firing strength of the rule, e.g., fuzzy AND and OR. The simplest implementation just calculates the product of all incoming signals.

Layer 3: every node in this layer is fixed and determines a normalized firing strength. It calculates the ratio of the j-th rule's firing strength to the sum of all rules' firing strengths.

Layer 4: the nodes in this layer are adaptive and are connected with the input nodes (of layer 0) and the preceding node of layer 3.
The result is the weighted output of rule j.

Layer 5: this layer consists of one single node, which computes the overall output as the summation of all incoming signals.

In this research, the ANFIS model was used for churn data identification. As mentioned before, according to the feature extraction process, 7 inputs are fed into the ANFIS model and one output variable is obtained at the end. The last node (the rightmost one) calculates the summation of all outputs [Riverol & Sanctis, 2005].

Findings

The performance of the data mining methods can be benchmarked using the confusion matrix terms listed under Fig. 1 [Han & Kamber, 2000]. The true positive rate (TPR), or sensitivity, is defined as the fraction of positive examples predicted correctly by the model, as seen in Eq. (2). Similarly, the true negative rate (TNR), or specificity, is defined as the fraction of negative examples predicted correctly by the model, as seen in Eq. (3):

$$ \mathrm{TPR} = \frac{TP}{TP + FN} \tag{2} \qquad\qquad \mathrm{TNR} = \frac{TN}{TN + FP} \tag{3} $$

Sensitivity is the probability that the test results indicate churn behaviour given that churn behaviour is indeed present. Specificity is the probability that the test results do not indicate churn behaviour given that no churn behaviour is present. Whenever an input factor has an above-average effect, its membership function shows considerable deviation from the original curve. We can infer from the membership functions that these properties have a considerable effect on the final decision of the churn analysis, since their shapes change significantly.

In Figures 4 to 7, the vertical axis is the value of the membership function; the horizontal axis denotes the value of the input factor. Marital status is an important indicator for churn management; it shows considerable deviation from the original Gaussian curve during the iterative process, as seen in Figure 4. Figure 5 shows the initial and final membership functions. As expected, age group was found to be an important indicator for identifying churn. Monthly expense is another factor strongly affecting the final model; the resultant membership function is shown in Figure 6.

Conclusions

The proposed integrated diagnostic system for the churn management application is based on a multiple adaptive neuro-fuzzy inference system. The use of a series of ANFIS units greatly reduces the scale and complexity of the system and speeds up the training of the network. The system is applicable to a range of telecom applications where continuous monitoring and management are required. Unlike the other techniques discussed in this study, the addition of extra units (or rules) will neither affect the rest of the network nor increase its complexity.

As mentioned in Section 2, rule-based models and decision tree derivatives have a high level of precision; however, they demonstrate poor robustness when the dataset is changed. In order to provide adaptability of the classification technique, neural-network-based alteration of the fuzzy inference system parameters is necessary. The results prove that the ANFIS method combines both the precision of a fuzzy-based classification system and the adaptability (back-propagation) feature of neural networks in the classification of data.
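To make the five-layer forward pass described above concrete, here is a small numerical sketch for a first-order Sugeno system with two inputs and two rules. All parameter values are illustrative, not fitted ones from the chapter:

```python
# One forward pass through the five ANFIS layers: bell memberships,
# product T-norm, normalization, linear consequents, weighted sum.
import numpy as np

def gbell(x, a, b, c):
    """Generalized bell membership function (layer 1, Eq. 1)."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

x1, x2 = 0.4, 0.7                      # layer 0: crisp inputs

# Layer 1: membership grades (two fuzzy sets per input).
mu_A = [gbell(x1, 0.3, 2.0, 0.2), gbell(x1, 0.3, 2.0, 0.8)]
mu_B = [gbell(x2, 0.3, 2.0, 0.2), gbell(x2, 0.3, 2.0, 0.8)]

# Layer 2: firing strengths via product T-norm (rule j pairs A_j with B_j).
w = np.array([mu_A[0] * mu_B[0], mu_A[1] * mu_B[1]])

# Layer 3: normalized firing strengths.
w_bar = w / w.sum()

# Layer 4: each rule's linear consequent f_j = p_j*x1 + q_j*x2 + r_j.
p, q, r = np.array([1.0, -0.5]), np.array([0.5, 1.5]), np.array([0.0, 0.2])
f = p * x1 + q * x2 + r

# Layer 5: overall output as the weighted sum of rule outputs.
print(float(np.sum(w_bar * f)))
```

In training, the consequent parameters (p, q, r) would be fitted by least squares in the forward pass and the bell parameters (a, b, c) adjusted by gradient descent in the backward pass, as described in the text.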
One disadvantage of the ANFIS method is that the complexity of the algorithm is high when more than a handful of inputs are fed into the system. However, when the system reaches an optimal configuration of membership functions, it can be used efficiently on large datasets. Based on the accuracy of the results of the study, it can be stated that ANFIS models can be used as an alternative to current CRM churn management mechanisms (the detection techniques currently in use). This approach can be applied to many telecom networks or other industries, since once it is trained, it can be used during operation to provide instant detection results.

Fig. 1. ANFIS model of fuzzy inference.

The confusion matrix terms are: 1. True positive (TP), the number of positive examples correctly predicted by the classification model. 2. False negative (FN), the number of positive examples wrongly predicted as negative by the classification model. 3. False positive (FP), the number of negative examples wrongly predicted as positive by the classification model. 4. True negative (TN), the number of negative examples correctly predicted by the classification model.

Three types of fuzzy models are most common: the Mamdani fuzzy model, the Sugeno fuzzy model, and the Tsukamoto fuzzy model. We preferred to use a Sugeno-type fuzzy model for computational efficiency. The sub-clustering method is used in this model. Sub-clustering is especially useful in real-time applications requiring high-performance computation. The range of influence is 0.5, the squash factor is 1.25, the accept ratio is 0.5, and the rejection ratio is 0.15 for this training model. Within this range, the system has shown considerable performance. As seen in Figure 2, test results indicate that ANFIS is a good means of determining churning users in a GSM network. The vertical axis denotes the test output, whereas the horizontal axis shows the index of the testing data instances.

Fig. 2. ANFIS classification of testing data.

Figures 2 and 3 show plots of the input factors for fuzzy inference and the output results under the given conditions. The horizontal axis has the extracted attributes from Table 2. The fuzzy inference diagram is the composite of all the factor diagrams. It simultaneously displays all parts of the fuzzy inference process. Information flow through the fuzzy inference diagram is sequential.

Fig. 3. Fuzzy inference diagram.
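A short helper computing the four confusion-matrix counts just listed, together with the sensitivity (Eq. 2) and specificity (Eq. 3) derived from them, might look as follows; the labels are illustrative:

```python
# Sensitivity (TPR) and specificity (TNR) from true and predicted labels.
def confusion_rates(y_true, y_pred, positive="churn"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    tpr = tp / (tp + fn) if tp + fn else float("nan")   # sensitivity
    tnr = tn / (tn + fp) if tn + fp else float("nan")   # specificity
    return tpr, tnr

y_true = ["churn", "churn", "loyal", "loyal", "churn"]
y_pred = ["churn", "loyal", "loyal", "churn", "churn"]
print(confusion_rates(y_true, y_pred))   # (0.667, 0.5) for this toy data
```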
ANFIS creates membership functions for each input variable. The graph shows the membership functions of the Marital Status, Age, Monthly Expense, and Customer Segment variables. For these properties, the changes of the ultimate (after training) generalized membership functions with respect to the initial (before training) generalized membership functions of the input parameters were examined.

Fig. 8. Receiver operating characteristics curve for ANFIS, Ridor, and decision trees.

Figure 8 illustrates the ROC curve for the best three methods, namely ANFIS, Ridor, and decision trees. The ANFIS method is far more accurate where a smaller false positive rate is critical. In this situation, where preventing churn is costly, we would like to have a low false positive ratio to avoid unnecessary customer relationship management (CRM) costs.

Table 2. Ranked attributes. The attributes with the highest Spearman's rho values are listed in Table 2. These factors are assumed to have the highest contribution to the ultimate decision about the subscriber.

Table 3. Training results for the methods used. Correctness is the percentage of correctly classified instances. RMS denotes the root mean square error for the given dataset and method of classification. Precision is the reliability of the test (F-score). The RMS, precision, and correctness values show important variations. Roughly, the JRip and Decision Table methods have the minimal errors and high precision in training, as shown in Table 3. But in the testing phase, ANFIS has the highest values, as listed in Tables 4 and 5. The RMSE (root mean squared error) values of the methods vary between 0.38 and 0.72, while precision is between 0.64 and 0.81. The RMS of errors is often a good indicator of the reliability of methods. Decision table and rule-based methods tend to have higher sensitivity and specificity. While a number of methods show perfect specificity, ANFIS has the highest sensitivity.

Table 5. Testing results for the methods used.
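An ROC curve like the one in Fig. 8 can be produced from a classifier's churn scores along the following lines; the scores below are synthetic, not the chapter's ANFIS outputs:

```python
# Plot an ROC curve and its AUC from churn scores (synthetic data).
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]            # 1 = churner
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.75, 0.3, 0.2, 0.55, 0.45]

fpr, tpr, _ = roc_curve(y_true, scores)
plt.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.2f}")
plt.plot([0, 1], [0, 1], linestyle="--")            # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```

Reading the curve near the left edge, as the chapter does, amounts to comparing methods at the low false-positive rates that matter when unnecessary CRM interventions are costly.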
5,399.8
0001-01-01T00:00:00.000
[ "Computer Science" ]
Are parrots poor at motor self-regulation or is the cylinder task poor at measuring it?

The ability to inhibit unproductive motor responses triggered by salient stimuli is a fundamental inhibitory skill. Such motor self-regulation is thought to underlie more complex cognitive mechanisms, like self-control. Recently, a large-scale study, comparing 36 species, found that absolute brain size best predicted competence in motor inhibition, with great apes as the best performers. This was challenged when three Corvus species (corvids) were found to parallel great apes despite having much smaller absolute brain sizes. However, new analyses suggest that it is the number of pallial neurons, and not absolute brain size per se, that correlates with levels of motor inhibition. Both studies used the cylinder task, a detour-reaching test where food is presented behind a transparent barrier. We tested four species from the order Psittaciformes (parrots) on this task. Like corvids, many parrots have relatively large brains, high numbers of pallial neurons, and solve challenging cognitive tasks. Nonetheless, parrots performed markedly worse than the Corvus species in the cylinder task and exhibited strong learning effects in performance and response times. Our results suggest either that parrots are poor at controlling their motor impulses, and hence that pallial neuronal numbers do not always correlate with such skills, or that the widely used cylinder task may not be a good measure of motor inhibition.

Electronic supplementary material The online version of this article (doi:10.1007/s10071-017-1131-5) contains supplementary material, which is available to authorized users.

Introduction

Inhibitory control, a core component of executive functions, operates at a range of levels, from basic motor self-regulation to taxing self-control (Diamond 2013; Beran 2015). Whereas motor self-regulation requires only the suppressing of an unproductive movement, self-control involves the ability to decline an immediate, small reward in favor of a larger but delayed one. Great apes, corvids, and parrots are proficient in such self-control tasks and surpass most other tested species when it comes to the duration of the delays that are tolerated (e.g., Rosati et al. 2007; Dufour et al. 2012; Auersperg et al. 2013; Koepke et al. 2015). Recently, it has been shown that great apes and corvids of the Corvus genus also outperform most other species in motor self-regulation (MacLean et al. 2014; Kabadayi et al. 2016). The correlation in performance between self-control and motor self-regulation might hint at a common underlying executive function relating to brain capacity and overall cognitive flexibility. Indeed, MacLean and colleagues found that the best performers on motor self-regulation among 36 tested species were those with the largest absolute brain sizes (MacLean et al. 2014). Notably, however, this study included mostly primates and other mammals, and very few bird species; when more birds were tested in a subsequent study, it was shown that absolute brain size may not be the best predictor across phylogenetically distant taxa (Kabadayi et al. 2016; for discussion, see also Chappell 2016). Instead, when the results from both studies were reanalyzed, the success rates roughly correlated with total numbers of pallial neurons, as birds have vastly higher neuronal densities than mammals (Herculano-Houzel 2017; Olkowicz et al. 2016).

In each of these studies, the main test used for investigating motor self-regulation was the so-called cylinder task. It is a detour-reaching task where a reward is placed in the center of a transparent cylinder with openings at both ends. Subjects must inhibit their initial response to reach directly for the visible food through the transparent barrier and instead retrieve the food through one of the side openings. Before being tested, subjects are familiarized with the affordances of the task; they are exposed first to an opaque cylinder and must show they can reliably retrieve the reward from either of the side openings. It is also crucial that the subjects are familiar with transparent surfaces, having experienced that they act as barriers as opaque objects do. A failure in the tests following the familiarization with opaque cylinders is scored if the subject attempts to reach for the reward directly by bumping into the transparent barrier. Failure signifies that the subject cannot inhibit the direct motor movement toward the visible reward despite having previously learnt the correct detour response by means of the opaque cylinder. Previous studies employing the cylinder paradigm have used performance across ten trials as a comparative measure of basic motor inhibition.

In order to examine whether total numbers of pallial neurons (Herculano-Houzel 2017) and high performance in previous cognitive studies (Güntürkün and Bugnyar 2016) indeed link to such basic functions as motor self-regulation, we tested four species of large parrots in the cylinder task. Parrots have brains as large as, or larger than, Corvus species and exhibit similarly high densities of pallial neurons (Olkowicz et al. 2016; Iwaniuk et al. 2004, 2005; Iwaniuk and Nelson 2003; Herculano-Houzel 2017). Several parrot species are also known to perform on par with apes and corvids in various tests on physical and social cognition (Auersperg et al. 2014; Schloegl et al. 2012; O'Hara et al. 2015; Pepperberg 2009; Güntürkün and Bugnyar 2016). Hence, we predicted that if pallial neuron count and cognitive performance were linked with motor self-regulation ability, parrots should show similar performance to corvids and apes.
Subjects

A total of 38 parrots participated in this study: eight African grey parrots (six females, two males, all 1 year old), eight blue-headed macaws (five females, three males, all 1 year old), 13 blue-throated macaws (one female, twelve males, mean age 2.46, SD = 1.76), and nine great green macaws (eight females, one male, mean age 2.33, SD = 2.69). All parrots were hand-raised and subsequently socialized in parrot groups in the Loro Parque Fundación, Tenerife, Spain.

Housing conditions

All parrots were housed in aviaries at the Max-Planck Comparative Cognition Research Station in the Loro Parque in Puerto de la Cruz, Tenerife. The blue-throated macaws and the great green macaws were housed in eight aviaries, divided by species and age into five groups of two to eight individuals. Six of these aviaries were 1.80 × 3.40 × 3 m (width × length × height), and the remaining aviaries were 2 × 3.40 × 3 m and 1.5 × 3.40 × 3 m, respectively. These aviaries were interconnected by 1 m × 1 m windows, which could be closed when desired. The blue-headed macaws were housed together in a separate indoor area (28.61 m²) with access to a smaller outdoor area, and the African grey parrots were housed together in another separate outdoor aviary (21.41 m²). All aviaries had at least one side open to the outside, so they followed a natural light schedule and were kept at ambient outdoor temperature, but they were additionally lit with Arcadia Zoo Bars (Arcadia 54W Freshwater Pro and Arcadia 54W D3 Reptile lamp) to ensure sufficient exposure to UV light. They were also all within the same building as the testing chambers (described below).

Experimental setup and procedures

Training and testing took place in an indoor chamber of 1.5 × 1.5 × 1.5 m (height × width × length) equipped with lamps covering the birds' full range of visible light (Arcadia 39 W Freshwater Pro and Arcadia 39 W D3 Reptile lamp). The birds were already habituated to moving from the aviaries to the testing chambers. The subjects were individually tested in one of the testing chambers with the experimenter in an adjoining room. A sound-buffered one-way glass system permitted zoo visitors to see inside the rooms, but did not allow the birds to see out. All training and testing sessions took place either in the morning or in the afternoon, a minimum of 4 h after the last feeding (or overnight for morning sessions). All birds had free access to water and mineral blocks at all times and were fed fresh fruit and vegetables twice a day. Pieces of walnuts were used as rewards during testing, as they are valued by all individuals and were not available outside of testing. The daily amount of nuts and seeds provided to the birds was adjusted according to their intake during testing for weight regulation purposes. The apparatus consisted of an opaque and a transparent cylinder (Fig. 1). Both cylinders were open at both ends and attached to a wooden base. Following the criterion set by MacLean et al. (2014), the cylinders were long enough that the birds had to put their heads inside the cylinder to reach the reward in the center, but not so large that they could enter the cylinder entirely. The size of the cylinders was adjusted to the size of each species tested (a length of 15 cm and a diameter of 11 cm for the great green macaws, a length of 12.5 cm and a diameter of 9 cm for the blue-throated macaws, and a length of 10 cm and a diameter of 5 cm for the African grey parrots and the blue-headed macaws).
For each species, the size of the opaque and transparent cylinders was identical. Prior to this study, the birds had participated in an extensive physical cognition test battery following the protocol of Herrmann et al. (2007), in which they interacted with humans through holes in a Plexiglas panel on a daily basis for 2 months; consequently, all subjects had experience with transparent surfaces. Although none of the tasks required active contact with the Plexiglas panel, the birds did explore the panel by touching it with their beak and/or tongue in the course of testing.

Training

In the training phase, the birds learned to retrieve a reward from either side opening of an opaque cylinder. Before each trial, the birds were given a signal (which they had been previously trained on) to wait on a perch at the back of the testing chamber. The experimenter then drew the bird's attention to the reward (a piece of walnut) by holding it up at eye level and calling the bird's name. The experimenter then placed the food inside the cylinder while the bird was observing, at which point the bird could approach the cylinder. Birds had 360° access to the cylinder. If the bird did not approach the cylinder within 2 min, the reward was removed from the cylinder, and the trial was repeated after a 30-s time-out interval. Hence, such invalid trials did not count toward the maximum of 10 trials per session. Correct responses were scored when the birds retrieved the food without touching the surface of the cylinder, whereas incorrect responses were scored when the birds made contact with the surface of the cylinder. The birds were allowed to retrieve the food after both correct and incorrect responses. When the parrots finished eating the reward, they were again given a signal to return to their perch, and the next trial commenced. To proceed to the testing stage, the birds had to fulfill a criterion of four out of five correct responses on consecutive trials, following the criteria from MacLean et al. (2014) and Kabadayi et al. (2016). Birds were given a maximum of 10 trials per session. All birds reached criterion within three sessions, but the majority reached it in their first session.

Testing

The testing protocol remained identical to the training procedure, with the exception that the opaque cylinder was now replaced with a transparent cylinder. Ten trials were conducted for each individual, except for two African grey parrots, who participated in one and two trials less, respectively, due to experimenter error. To preclude loss of motivation, the ten trials were divided into two sessions of five trials each, carried out on subsequent days. As in training, a correct response was coded if the birds made a detour to either side of the cylinder and retrieved the food without touching the surface of the cylinder, whereas an incorrect response was coded if the bird made physical contact with the surface of the cylinder before retrieving the food (Online Resource 1). All trials continued until the subject retrieved the reward. The methodology described above followed that of MacLean et al. (2014) and Kabadayi et al. (2016) for both the training and the testing phase. For all trials (correct and incorrect), we also measured the time the birds needed to obtain the reward from the onset of the trial (response times). The onset of the trial was defined as the moment when the bird crossed a boundary line marked on the ground.
The change in response times across trials was used in previous detour studies to study learning processes (Lockman and Adams 2001; Wyrwicka 1959). Thus, we analyzed the change in response times across trials within species, as well as the difference in the rate of this change between species. Because of the slight between-species differences in the distance between the mark on the ground and the cylinder, we did not compare response times between species.

Analyses

Two variables were analyzed: the number of correct responses (response accuracy) and the response times per individual per trial. To analyze response accuracy, we used a generalized linear mixed-effects regression analysis (GLMM) with trial number, species, and bird age (in years) as fixed effects. Individual birds were included in the models as random effects, and the trial effect was allowed to vary for each individual bird (random slopes). The outcome variable was binary, i.e., the response was either correct or incorrect. To analyze response times, we used a linear mixed-effects regression analysis (LMM) with the same fixed and random effects as in the analysis of response accuracy. The outcome variable was the response time in seconds.

Failure patterns

We followed the coding criterion from the previous studies (MacLean et al. 2014; Kabadayi et al. 2016; Vernouillet et al. 2016), where all touches of the surface of the cylinder counted as an error, regardless of the location of the touches. However, we observed differences within the errors, as some touches did not appear to be directed toward the reward but could have been the result of exploration or accident. We therefore also provide additional analyses of the patterns of failures, in order to potentially differentiate between failures caused by lapses of motor self-regulation and those caused by other factors. In this additional analysis, we coded whether the parrots first touched the cylinder toward or away from the reward. When coding this, the cylinder was divided (on the computer screen) into three equal cross-sections (left periphery, center, right periphery), one of which contained the reward. If the birds' initial contact with the cylinder was within the same zone as the reward, it was coded as a reach "toward" the reward; otherwise, it was coded as "away" from the reward. The interobserver reliability when coding the failure patterns was excellent: Cohen's kappa = 0.961 (n = 135, z = 11.2, p < 0.001). We then recalculated the scores for all trials with the following coding criterion: a correct response was coded if the birds made a detour to either side of the cylinder and retrieved the reward without touching the cylinder "toward" the reward, whereas an incorrect response was coded if the bird touched the cylinder "toward" the reward before retrieving the food. We then reran our original model on response accuracy using these new scores. Additionally, we analyzed how many errors were coded as either a reach toward or away from the reward across trials. We analyzed this using a generalized mixed model regression analysis, with error type as the outcome variable (i.e., toward or away from the reward), and species, trial number, and age as fixed effects. As in the first analysis, individual birds were included as random effects, and the trial effect was allowed to vary for each individual bird. All statistical analyses were carried out in R, version 3.1.3 (R Core Team 2015).
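The original analyses were run in R. For readers working in Python, the response-time LMM (reciprocal-transformed times; trial, species, and age as fixed effects; per-bird random intercepts and trial slopes) could be sketched roughly as follows with statsmodels. The data frame is a tiny stand-in for the real records, and such a small sample may trigger convergence warnings:

```python
# Rough Python analogue of the paper's LMM on response times.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "bird":    ["b1"] * 3 + ["b2"] * 3 + ["b3"] * 3 + ["b4"] * 3,
    "species": ["grey"] * 6 + ["macaw"] * 6,
    "age":     [1, 1, 1, 1, 1, 1, 3, 3, 3, 2, 2, 2],
    "trial":   [1, 2, 3] * 4,
    "time_s":  [9.0, 6.5, 5.0, 12.0, 8.0, 6.0,
                7.0, 5.5, 4.0, 10.0, 7.5, 5.5],
})
df["inv_time"] = 1.0 / df["time_s"]   # reciprocal transform, as in the paper

# Random intercept per bird plus a per-bird random slope for trial.
model = smf.mixedlm("inv_time ~ trial + species + age", data=df,
                    groups=df["bird"], re_formula="~trial")
print(model.fit().summary())
```

The binary response-accuracy GLMM has no equally direct statsmodels equivalent; in practice one would stay with R's lme4 for that part, which is presumably why the authors did.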
Analysis of response accuracy showed a significant effect of trial number across the four species (GLMM: EST = 3.587, SE = 0.651, z = 5.510, p < 0.001), suggesting that, as a group, the parrots improved their performance over trials (see Fig. 2 for individual and species performance across trials). Species also differed significantly in how much they improved their performance over trials, as evidenced by the interaction between species and trial number. In particular, the combined slopes of the great green macaws and the blue-headed macaws were significantly steeper than those of the African grey parrots and the blue-throated macaws (GLMM: EST = −4.457, SE = 1.220, z = −3.657, p < 0.001). Finally, the age effect on response accuracy was not significant (GLMM: EST = 0.086, SE = 0.126, z = 0.683, p = 0.495).

Patterns of failure

The recalculations of the scores according to the coding criterion where failures were coded only if the touch was directed toward the reward are shown in Table 1. Averaging across all species, the recalculated scores using this new coding criterion were significantly higher than the original scores (paired t test: t(37) = 4.307, p < 0.001). This increase was particularly noticeable for the blue-throated macaws and the great green macaws (p < 0.001 for both species); however, there was only a marginal increase for the African grey parrots (p = 0.900) and the blue-headed macaws (p = 0.451, Table 1). Analysis of response accuracy using these new scores replicated the results of our initial model, showing a significant effect of trial (GLMM: EST = 4.878, SE = 0.846, z = 5.768, p < 0.001). Additionally, the combined slopes of the great green macaws and the blue-headed macaws were again significantly steeper than those of the African grey parrots and the blue-throated macaws (GLMM: EST = −4.616, SE = 1.563, z = −2.953, p = 0.003). Our separate analysis of the different error types found a significant effect of trial (GLMM: EST = 4.175, SE = 1.436, z = 2.907, p = 0.004), suggesting that the proportion of errors 'away' from the food increased over trials (Fig. 4). Furthermore, the difference in slope between African grey parrots and blue-throated macaws was also significant (GLMM: EST = −7.261, SE = 3.518, z = −2.064, p = 0.039). No other effects were significant, despite the seemingly large differences between the species shown in Fig. 4. One presumable reason for this is that there were too few errors during the later trials of the experiment, so possible differences between the species could not be detected reliably.

Fig. 4 Each species' estimated change in the proportion of 'away' failures (i.e., cases where the contact made with the cylinder was directed away from the food, and thus not food-related) over the 10 trials, predicted from the generalized mixed model regression analysis with error type (toward or away) as a binary outcome. The gap in the regression line for the great green macaws is due to the fact that there were no errors on the seventh and eighth trials in this species.
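The paired comparison of recoded against original scores reported above can be reproduced along these lines; the two score vectors here are invented, not the study's data:

```python
# Paired t test comparing per-bird scores under the two coding criteria.
from scipy.stats import ttest_rel

original = [3, 5, 2, 6, 4, 1, 7, 5]   # correct trials of 10, original coding
recoded  = [5, 6, 4, 6, 7, 3, 8, 6]   # after excluding 'away' touches

t_stat, p_value = ttest_rel(recoded, original)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```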
Response times

Response times ranged from 1 to 107 s, with an average of 5.26 s. The distribution of the response times was positively skewed, with a few extreme large values, notably in the blue-headed macaws. In order to reduce skewness and the effect of extreme values, a reciprocal transformation was applied (1/time) in the regression analysis. The overall trial effect was significant (LMM: EST = 0.282, SE = 0.031, df = 33.05, t = 8.963, p < 0.001), suggesting that all species' responses became faster over the ten trials. In contrast to the analysis of response accuracy, including the interaction between species and trial did not significantly improve the fit of the regression model (χ²(3) = 2.486, p = 0.478), suggesting that the effect of trial on response times was comparable across the four species. Finally, the age effect was not significant (LMM: EST = −0.012, SE = 0.009, df = 32.55, t = −1.293, p = 0.205).

Discussion

The parrots in this study performed correctly on an average of 45% of trials. This performance is markedly poorer than the average 96% success rate of Corvus species on the cylinder task (Kabadayi et al. 2016), despite both groups showing similar neuronal densities and brain sizes, and often showing similar performances in taxing cognitive tests (Kabadayi et al. 2016; MacLean et al. 2014; Olkowicz et al. 2016; Güntürkün and Bugnyar 2016). The scores of the parrots in our study also align with the 50.8% performance of the orange-winged amazon (Amazona amazonica), which was the only parrot species tested previously on the cylinder task (MacLean et al. 2014). Thus, our findings do not support the hypothesis that higher numbers of pallial neurons in birds predict better performance in cognitive tasks (Herculano-Houzel 2017). Nevertheless, as the brain is not a homogeneous organ, but a network of various specialized regions, specific cognitive performances might be better explained by allometrically corrected sizes of specific brain regions, especially associative regions that support executive functions (Lefebvre and Sol 2008). In mammals, detours around transparent barriers are mediated by prefrontal regions: the dorsolateral prefrontal cortex and the orbitofrontal cortex (Diamond 1990; Wallis et al. 2001). In birds, the caudal part of the nidopallium, the nidopallium caudolaterale (NCL), is considered analogous to the mammalian prefrontal cortex (Güntürkün 2012) and is involved in other motor self-regulation tasks (Kalt et al. 1999). The nidopallium is similarly enlarged in parrots and corvids (Sayol et al. 2016; Mehlhorn et al. 2010); however, it is possible that they differ when it comes to the relative size of the NCL. Similarly, it might be that the neuronal numbers in the NCL, as well as the connectivity among brain regions, differ between corvids and parrots (Herculano-Houzel 2017; Sherwood et al. 2008), or that other brain regions are more important in detour tasks in birds. Future research is required to better understand the functional properties of different regions in bird brains and possible differences in neuroarchitecture and function between corvids and parrots. However, performance in the cylinder task may be influenced by many factors other than neuronal capacity and cognitive ability, and thus it may not reflect a species' genuine motor self-regulation skills. Among others, learning processes, motivation to explore objects, and cognitive development are potential confounding factors that could affect performance in the cylinder task. We discuss and review them below and raise the question of whether the cylinder task is always a valid test of motor self-regulation.
In the current study, we found rapid improvements in performance, strongly suggestive of learning processes, in the great green macaws and the blue-headed macaws; and although less rapidly than these two species, the African grey parrots and the blue-throated macaws also increased their scores across trials. Kabadayi et al. (2016) suggested that one potential cause of such learning effects, and of the poor performances on initial trials, might be a lack of experience with transparent surfaces. However, the parrots tested in our study had interacted with transparent objects such as transparent windows and Plexiglas panels in previous experiments before being tested on the cylinder task. Thus, the observed learning effect is unlikely to be attributable to a lack of experience with transparent surfaces per se. We also observed a reduction in response times, which is generally regarded as a sign of learning (Koopmans et al. 2003). Individuals of all species became quicker at retrieving the reward over trials, regardless of whether they first touched the cylinder surface or not. Another cylinder task study, on song sparrows (Melospiza melodia), showed that the subjects reached perfect performance after around 50 trials (Boogert et al. 2011). It is possible that having more than 10 trials would have notably increased the accuracy in all four parrot species. In fact, the scores of the great green macaws reached almost perfect, and stable, accuracy already around trial six and stayed there until the end of the experiment. This type of improvement may have broader implications for how motor self-regulation is measured. A relatively rapid improvement over trials in the cylinder task and a subsequent ceiling level of perfect accuracy, found by ourselves and by previous studies (Vernouillet et al. 2016; Boogert et al. 2011), deviate from the pattern seen in other types of motor inhibition tasks, where improvements are unusual (Cohen and Poldrack 2008; Berkman et al. 2014). In contrast to the cylinder task, classical motor inhibition tasks leave little opportunity for learning to occur because of their task-switching component: on certain trials, the subjects must refrain from the previously reinforced responses and instead choose a different response. This might explain why perfect accuracy did not occur in a study on squirrel monkeys (Saimiri sciureus) when a task-switching component was added to a detour-reaching task around a transparent barrier (Parker et al. 2012). A study on common marmosets (Callithrix jacchus) also suggested that detour tasks around transparent barriers might not always measure inhibition, as it was found that depletion of serotonin in the prefrontal cortex impairs detour-reaching behavior around the barrier during task acquisition, but not after the task is learned (Walker et al. 2006). This suggests that the cylinder task only measures motor self-regulation before there is a detectable overall improvement, which makes the commonly used score of performance across trials difficult to implement, especially in a comparative context. There is a risk that one compares learning speed rather than inhibition. Yet another possible learning effect may apply in the cylinder task paradigm and substantially influence the results, namely the ability to learn to transfer from the opaque cylinder to the transparent one. In the current study, as well as in those we replicated, the subjects received relatively few training trials on the opaque cylinder.
Even if the animal readily learned to retrieve the reward from the opaque cylinder, it is possible that more training would still have been required to entrench the affordances of the task sufficiently for an immediate transfer to the transparent cylinder to occur. A study testing common marmosets on a detour task around a transparent barrier found that success was determined by a combination of inhibitory skill and the ability to transfer the detour response from an opaque to a transparent barrier, with both skills mediated by different brain regions (Wallis et al. 2001). Thus, species differences might reflect not only differences in motor self-regulation and rule-learning speed, but also the ability to transfer between the two types of cylinders. For example, corvids of the genus Corvus, along with great apes, perform close to perfectly (Kabadayi et al. 2016; MacLean et al. 2014), but this could equally be a result of good transfer skills or inhibitory capacity, or a combination thereof. Recently, it was suggested that the total number of pallial neurons might be a better predictor of transfer ability and learning speed than of general cognitive skills (Güntürkün et al. 2017). However, as parrots have a similar number of pallial neurons to corvids, this should not explain the differences, unless, as stated before, there are other neuroanatomical differences. In any case, future comparative studies using the cylinder task should take into account the task transfer skills of the species in order to avoid confounding effects. Ontogeny is another parameter that may influence cylinder task performance. Children undergo a developmental period, between 6 and 12 months of age, during which they have difficulties in retrieving objects from behind transparent barriers (Diamond 1990). Similarly, rhesus monkey infants gradually improve in the same tasks between 1 and 4 months (Diamond 1990). In our study, we did not find an overall age effect across species, but since all African grey parrots and blue-headed macaws were juveniles around 1 year old (and showed the poorest performance in the task), they might not have fully developed their inhibitory skills. Future comparative studies should pay attention to overall differences in cognitive ontogeny in different species and specifically test how development affects the detour response across different species. A species' object exploration style is a motivational aspect that can influence performance. The cylinder task can generate false negatives if the tested animals touch the cylinder in order to explore the surface rather than in an attempt to reach for the reward, which might happen especially since touching the cylinder does not incur any major cost. Considering that touches not directed toward the reward behind the barrier are unlikely to be inhibition failures (Noland and Rodrigues 2012), we ran additional analyses on the failure patterns. They revealed that most failures by the African grey parrots and the blue-headed macaws appeared to be attempts to reach directly for the reward, thus representing true errors, whereas the great green macaws and blue-throated macaws also frequently touched the cylinder in a manner that did not seem to be food-directed. Interestingly, the frequency of such non-food-related failures increased across trials for the great green macaws and blue-throated macaws, constituting the majority of their failures in later trials.
Indeed, all failures of the great green macaws in the last five trials were of this nature, so it is unlikely that the failing individuals did not know the correct solution to the task. Instead, those failures might have occurred due to exploration or boredom resulting from exposure to repeated trials requiring an identical response. Kabadayi et al. (2016) reported similar failure patterns for New Caledonian crows and jackdaws, where the individuals touched the barrier likely in an attempt to explore the surface rather than to reach the reward. Such an examination of failure patterns was missing in the large-scale study (MacLean et al. 2014), and neglecting these analyses might have led to underestimated scores for some species. It is worth noting that focusing only on species average scores might miss remarkable individual performances. For example, although the blue-throated macaws' average score was low, two individuals were successful on all trials (Fig. 2). Such individual variation has significant implications for the interpretation of large comparative datasets, especially with small sample sizes for each species (Thornton and Lukas 2012). In summary, we found that the four parrot species performed relatively poorly in the cylinder task despite having large brains and high pallial neuronal densities, and despite other species from this order demonstrating well-developed cognitive skills in other domains, including self-control (Güntürkün and Bugnyar 2016). This suggests two possibilities: (1) neither brain volume nor the number of neurons in the pallium is a good predictor of cylinder task performance within birds, or at least within Psittaciformes; instead, the relative size, number of neurons, or other anatomical features of specific brain regions might play an important role. (2) The cylinder task may not be an adequate test to capture motor self-regulation skills in parrots (and some other animals). To better tease these two possibilities apart, further comparative studies are needed, which should ideally include a battery of different motor inhibition tests and specifically examine factors that may influence performance in this and other motor inhibition tasks.
7,406
2017-09-19T00:00:00.000
[ "Biology", "Psychology" ]
Low-Molecular-Weight Organogelators Based on N-dodecanoyl-L-amino Acids—Energy Frameworks and Supramolecular Synthons Lauric acid was used to synthesize low-molecular-weight organogelators (LMOGs), derivatives of two endogenous amino acids, (L)-alanine and (L)-leucine, and three exogenous ones, (L)-valine, (L)-phenylalanine, and (L)-proline. The nature of the processes responsible for gel formation by such compounds, both in polar and in apolar solvents, is still under investigation. Knowing that the organization of surfactant molecules affects the properties of nanoscale materials and gels, we decided to elucidate this problem using crystallographic diffraction and energy frameworks analysis. Single crystals of the mentioned compounds were produced successfully from a heptane/tBuOMe mixture. The compounds form lamellar self-assemblies in crystals. The energetic landscapes of single crystals of the series of studied amphiphilic gelators have been analyzed to explore their gelling properties. The presented results may be used as model systems to understand which supramolecular interactions observed in the solid state, and which energy contributions, are desirable in the design of new low-molecular-weight organic gelators. Introduction Pharmaceutical gels are readily applied in pharmaceutical formulations, where active pharmaceutical ingredients (API) are combined to yield a final medicinal product of well-defined properties. The stability of the obtained formulation is a crucial issue in the pharmaceutical and cosmetics industries [1]. Stability, in the case of pharmaceutical products, is the ability to maintain the original form and properties (to ensure that the product maintains its intended quality, safety, efficacy and functionality) under the influence of various factors, such as time, humidity and temperature, when stored under appropriate conditions throughout the shelf life [2]. The problem of stability is especially visible in the cases of semiliquid formulations, gels, and nanoparticle-based drugs [3-6]. The latter offer the unique possibility of overcoming cellular barriers in order to improve the delivery of active ingredients. Such a gel form may improve a medicine's efficacy and decrease side effects. Moreover, drug distribution to target sites via passive and active targeting may be modulated [7-9]. A hydrophilic nanoparticle can be an effective carrier for insoluble drugs of low bioavailability [10]. The properties of nanoparticle-based drugs depend on the particle size. On the one hand, small particles have an extremely large specific surface, and therefore a large specific API capacity. On the other hand, the size of nanoparticles modifies the ability of the drug to pass from the blood vessels to the targeted destination, helping it to reach the extracellular fluid of the tissue [11,12]. General Chemistry Melting points were recorded in open capillaries and are uncorrected. 1H NMR (500 MHz) and 13C NMR (125 MHz) spectra were taken on a Bruker AVANCE 500 MHz spectrometer using CDCl3 as the solvent (Figures S1-S5). Compounds 1-5 were prepared according to our previously reported protocols [48], and their spectral and other physicochemical properties were consistent with the literature data [29,49]. Preparation of the Gels A dry glass vial equipped with a magnetic stirrer bar was charged with 10 mL of the appropriate solvent and 1 g of 1-Ala.
The vial was closed, and the mixture was heated in an oil bath at 20 °C above the boiling point of the solvent used, with continuous stirring for 5 min. Next, the vial was cooled in an ice bath, without stirring, to 0-5 °C. After cooling was complete, gels that were stable at ambient (and lower) temperature had formed in toluene, water, t-BuOMe and petroleum ether, as presented in Figure 1. The gel obtained with CHCl3 melts above 19 °C. Gelation in alcohols, DMF and DMSO was unsuccessful; however, the gel obtained with 40% v/v aqueous ethyl alcohol was stable for a few days at ambient temperature. Here, we present a study of the crystal packing and energy frameworks of a series of amphiphilic gelators based on N-dodecanoyl-L-amino acid derivatives, for which we managed to obtain single crystals by recrystallization from a heptane/tBuOMe mixture, in order to explore whether the solid-state supramolecular synthons and energy frameworks may be an indicator of desirable gelling properties. The obtained gelators proved suitable for the formation of both organogels and hydrogels. Figure 1. Gels of 1-Ala: toluene (transparent gel), water, t-BuOMe, petroleum ether (120-140 °C) and CHCl3 (opaque gels); no gelation in the alcohols C2H5OH and n-C5H11OH; and a gel in a water/ethyl alcohol mixture (6/4 v/v).
Crystal X-ray Diffraction The crystal structures of the analyzed compounds 1-Ala, 2-Pro, 3-Phe, 4-Leu, and 5-Val were determined from X-ray diffraction experiments performed on single crystals at 120 K on a SuperNova diffractometer with CuKα radiation (λ = 1.54184 Å). The structural models were obtained by direct methods using the SHELXS-97 program [50] and refined by the full-matrix least-squares method on F2 using the SHELXL-97 program implemented in the Olex2.refine package [51]. The crystal structure of 3-Phe was redetermined; the first report was published by Chen et al. with a final R1 = 10.96% [30]. The propan-2-yl fragment of the valine derivative in 5-Val was refined as disordered over two positions with site occupation factors of 0.72 and 0.28. The compounds crystallized in non-centrosymmetric space groups. The absolute configuration was established by reference to the unchanging chiral center in the synthetic route, because the Flack test results were ambiguous (no elements heavier than O in the crystal structure). Crystal data for the compounds are summarized in Table S1. Energy Framework Calculations Hirshfeld analysis is a well-known method for exploring crystal data to gain deeper insight into the characteristics of intermolecular interactions in the solid state [52-55]. For each crystal, a cluster of the closest molecules was generated and the unique pair energies were calculated using CrystalExplorer [56-58] with the CE-B3LYP model derived from wavefunctions computed by Gaussian09 [59]. The energies were scaled by factors of 1.057, 0.74, 0.871, and 0.618 for the coulombic, polarization, dispersion and repulsion energy terms, respectively, according to Mackenzie et al. [37]. The initial geometry for the calculations was taken from the single-crystal diffraction experiments. In the case of 1-Ala, with 2 independent molecules in the unit cell, the lattice energy is given as an average of the lattice sums for each of them. In 5-Val, the two disordered orientations of the molecule, with their partial occupancies, were included in the calculations. Results The synthesis of the five N-dodecanoyl-L-amino acid derivatives was based on the optimization of well-defined approaches which usually allow obtaining the amino acid derivatives in high purity and acceptable yields [48]. Thus, lauric acid was converted into lauroyl chloride by reaction with thionyl chloride (Scheme 1). The synthesis was carried out in toluene solution, and the obtained product was of satisfactory quality to be used without additional purification. The lauroyl chloride thus obtained was subjected to the Schotten-Baumann reaction with in situ prepared sodium salts of five selected natural amino acids: the endogenous sodium (L)-alaninate and (L)-leucinate, and the exogenous (L)-valinate, (L)-phenylalaninate, and (L)-prolinate (Scheme 1). The syntheses were carried out in 50% water-dioxane solution, which solubilized the substrates and products, while acidification of the post-reaction mixture with aqueous hydrochloric acid effected the separation of a mineral phase containing the majority of impurities. Additional purification of the desired products to high purity was achieved by a single crystallization from a heptane/tBuOMe mixture. The alternative approaches tested to prepare the targeted fatty acid amides were less efficient [27,29,60,61].
The N-lauroyl-L-alanine (1-Ala)-based hydrogel enriched with gold nanoparticles has been tested as a supramolecular drug carrier [62]. The gelation properties of 1-Ala, 3-Phe, 4-Leu, and 5-Val were reported by MacLachlan and Hum et al. using the inversion test [30]. An example of gelation by 1-Ala is presented in Figure 1. The studied compounds exhibit the ability to form gels both in polar and in apolar solvents. In toluene (apolar) the gel is transparent, whereas in water (polar) it is opaque. Other tests were performed on 9% gels of 1-Ala in different solvents: the gels obtained with t-BuOMe and petroleum ether were stable at ambient temperature, while the gel obtained with CHCl3 melts above 19 °C. Gelation of alcohols was not successful; however, the gel obtained with 40% v/v aqueous ethyl alcohol was stable for a few days at ambient temperature, and it changed slowly from a nanofiber (gel) to a crystalline form by a dissolving-crystallization process. The last observation proved that gels can be obtained in solvent mixtures where the solubility of the gelator in at least one of the solvents used is low. The stability of gels is higher in cases where the possibility of dissolving the gelator nanofibers and their recrystallization is limited. On the other hand, solvents in which the gelator is well soluble (CHCl3, EtOH, DMF, DMSO) might not be gelled in pure form. Crystal Structure and Energetic Landscape Analysis As pure enantiomers, all studied compounds crystallized in chiral space groups (P2₁, P2₁2₁2₁, P22₁2₁ and P1; Table S1). 1-Ala has two symmetrically independent molecules in the unit cell, and 5-Val exhibited positional disorder of the propan-2-yl moiety. The heteroatoms formed bonds within the expected range of bond lengths (Table S2). The conformation of the molecules (especially of the alkyl chain) does not seem to determine the crystal packing mode. 3-Phe, 4-Leu and 5-Val, which form similar folded layers, show conformations of the aliphatic chain ranging from synclinal (5-Val) through anticlinal (3-Phe) up to antiperiplanar (4-Leu) (Figures 1 and 2, Table S2).
The separation of hydrophilic and lipophilic regions (Figure 3), which may favor the gelation process, occurs in a particular way. The connection between the gelling ability of some amino acid surfactants and the 2-dimensional packing type was considered previously by Bastiat and Leroux [22]. To study such potential relationships, we complemented the crystal packing analysis with energy framework calculations. Table 1 contains the pairwise interaction energies for the analyzed crystals, calculated using CrystalExplorer at the CE-B3LYP/6-31G(d,p) level in kJ/mol. All of the values were scaled by factors of 1.057, 0.74, 0.871, and 0.618 for the coulombic, polarization, dispersion, and repulsion energy terms, respectively, according to Mackenzie et al. [37]. The energy frameworks indicate the dominance of coulombic and dispersion energies in the crystal nets. Figures 4-6 show selected energy frameworks viewed along the a and c directions.
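To make the scaling procedure concrete, the short sketch below combines the four CE-B3LYP energy terms into a scaled total using the factors quoted above; the input energies are placeholders, not values from Table 1.

```python
# Hedged sketch: scaled CE-B3LYP pairwise interaction energy (kJ/mol).
# Scale factors follow the values quoted in the text (Mackenzie et al.).
SCALE = {"coulomb": 1.057, "polarization": 0.740,
         "dispersion": 0.871, "repulsion": 0.618}

def scaled_total(e_coul: float, e_pol: float,
                 e_disp: float, e_rep: float) -> float:
    """E_tot = 1.057*E_coul + 0.740*E_pol + 0.871*E_disp + 0.618*E_rep."""
    return (SCALE["coulomb"] * e_coul + SCALE["polarization"] * e_pol
            + SCALE["dispersion"] * e_disp + SCALE["repulsion"] * e_rep)

# Placeholder example: a strongly coulombic hydrogen-bonded pair.
print(round(scaled_total(-87.1, -20.0, -30.0, 65.0), 1))
```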
Various colors represent the energy terms: red for coulombic, yellow for polarization, green for dispersion, and blue for total energy. The scale for tube size was 150, with an energy cut-off of 5 kJ/mol. In the crystal packing, there is an evident division into hydrophilic and lipophilic sections (Figure 3). Within the hydrophilic regions, the amide and carboxylic groups participate in various intermolecular interactions. The supramolecular motifs formed by the stronger hydrogen bonds are presented in Figure 7. To ease the analysis of the supramolecular synthons, graph set notation was used [63,64]. In the case of 1-Ala, the primary-level supramolecular motif is a C(4) chain formed between amide groups (E_tot = −55.6 kJ/mol), which is assisted by dispersion interactions between parallel-arranged amphiphilic molecules. However, it is not the strongest interaction in this crystal: the carboxylic acid fragments preserve the usual cyclic R₂²(8) motif, resulting in a high total energy of ca. −116 kJ/mol. At the higher level of graph set theory, both of the mentioned motifs link into an R₄⁴(24) ring. The final patterning in 1-Ala may be considered as parallel tapes between which dispersion forces are present. Such an arrangement does not seem to favor gel formation, since the hydrophilic parts are strongly bonded and the rearrangement of the molecules to accommodate solvent would demand a large energy input. 2-Pro exhibits only a simple C(7) herring-bone pattern via an O-H...O_amide contact (E_tot = −42 kJ/mol). The maximum total energy of a pairwise interaction is smaller than in the other structures (−46 kJ/mol) and comes mainly from the dispersion term; this may indicate the possible formation of polymorphs. However, the blocked N atom limits the prospects of this compound to interact freely with solvent molecules. Comparing the crystal packing of 3-Phe, 4-Leu, and 5-Val, the main noticeable feature is the common second-level graph set R₄⁴(20) ring, with four donors and four hydrogen bond acceptors, in which the same functional groups are involved. These crystals are not isostructural, since the alkyl parts of each are arranged differently. However, some degree of similarity among these lamellar structures is observed, which is also in accordance with the energy frameworks.
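For readers less familiar with the graph set descriptors used above and below, the general form (standard Etter/Bernstein notation, quoted here for orientation rather than taken from this paper) is:

```latex
% Graph set descriptor for a hydrogen-bond motif:
%   G = C (chain), R (ring), D (discrete) or S (intramolecular)
%   d = number of donors, a = number of acceptors, n = atoms in the motif
G^{a}_{d}(n)
% Example: R^{2}_{2}(8) is the familiar carboxylic acid dimer ring,
% with two donors, two acceptors and eight atoms in the ring.
```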
The 3-Phe structure is dominated by two interactions of very similar total energies (−68.7 and −62.7 kJ/mol); however, their origins are entirely different. One is a strong O-H...O_amide hydrogen bond with the highest contribution from the coulombic factor (−87.1 kJ/mol). The other is an N-H...O_acid bond with a moderate coulombic (−36.6 kJ/mol) and quite high dispersion (−60.3 kJ/mol) input. The strongest interaction in 4-Leu, an O-H...O_amide hydrogen bond with a high coulombic contribution (−77 kJ/mol), has a total energy of −54.5 kJ/mol. However, another pairwise total energy that is nearly equally strong results from the combination of a weaker N-H...O_acid hydrogen bond (E_Coul = −31.5 kJ/mol) supported by a high dispersion term (−52.6 kJ/mol) associated with the parallel alignment of the alkyl chains. The structure of 5-Val is stabilized by similar interactions, but the much higher coulombic energy term of ca. −92 kJ/mol in the case of the O-H...O_amide hydrogen bond is a consequence of the additional contribution of an accompanying C-H...O_acid contact, which closes the R₂²(8) ring. Additionally, the amide N-H...O_acid interaction is assisted by a weaker C-H...O_amide one (motif R₂²(11)), contributing to the electrostatic energy. Conclusions The crystal structures of a series of N-dodecanoyl-L-amino acid derivatives have been successfully determined. Energy frameworks analysis was applied for the first time to study the gelling behavior of a series of low-molecular-weight gelators suitable for the formation of both organo- and hydrogels. The carboxylic O-H and amide N-H groups had the greatest impact on the crystal net energetics, giving high coulombic contributions to the total energy in each of the studied structures. Three of the analyzed crystals (3-Phe, 4-Leu, and 5-Val) share the same cyclic supramolecular synthon, R₄⁴(20), stabilized by hydrogen bonds.
The different alignments of the aliphatic chains make only a minor contribution to the total energy in the crystals, but the alkyl chains exhibited a wide range of conformations, from synclinal (5-Val) through anticlinal (3-Phe) up to antiperiplanar (4-Leu). This information may be useful when predicting the tangling ability of the lipophilic parts of the gelator molecules. The spontaneous formation of a self-assembled fibrillar network (SAFiN), at least in the case of apolar solvents, seems to have its origin in two phenomena: (1) the hydrophilic parts of the amphiphilic organogelators interact with each other, influencing the stability of the gels as well as the energy necessary to initiate the gelling process; and (2) the hydrophobic fibers may tangle up, trapping lipophilic solvent molecules in voids. The combination of crystallographic studies and energy frameworks analysis may be a useful aid in understanding gelling behavior and in designing new low-molecular-weight organogelator (LMOG) materials.
5,390
2023-01-01T00:00:00.000
[ "Materials Science" ]
A Vision of Hadronic Physics We present a vision for the next decade of hadron physics in which the central question being addressed is how one might win new physical insight into the way hadronic systems work. The topics addressed include the relevance of model building, the role of spontaneously broken chiral symmetry, spectroscopy, form factors and physics in the deep inelastic regime. Introduction At times physics can be a very hard way to make a living. There can be few professions where one can work hard all day and go home feeling more stupid than when the day began. And yet, there are times when it is all worthwhile, when it really was worth the effort of getting out of bed. What makes it worthwhile, and instills in us a sense of real achievement, is the feeling that one has actually won some new insight into how Nature works. My vision for the next decade is that, as a community, we will develop a clearer and more satisfying picture of the structure of hadrons and nuclei within the framework of QCD. This quest will, of course, involve new data and calculations of ever higher precision but, more than that, it will involve new physical insight and understanding. Models There are some who are less than thrilled by the use of models in hadron physics, and yet they are of fundamental importance. Flawed as all models necessarily must be, they are a vital part of the machinery needed to imagine new ideas and new ways of investigating Nature. Rather than arguing in generalities, I briefly recall some examples of insights arising from models that have proven critical in motivating new experiments and guiding the development of our current understanding of hadron structure. Flavor asymmetries Although Sullivan and Feynman [1] had suggested in the early 70's that the pion cloud of the nucleon might be significant in deep inelastic scattering (DIS), it would be fair to say that this was not taken at all seriously a decade later. Then, motivated by the cloudy bag model (CBM) [2,3], their approach was used to explain the known asymmetry between strange and non-strange sea quarks in the nucleon [4]; most particle physicists ignored the work. Everyone knew that partons were non-interacting on the light-cone, and this was generally taken to mean that clusters, such as a highly correlated q-q̄ pair in a virtual pion, could not be relevant to DIS. Yet that same calculation also showed that the dominance of the π⁺-n component in the chiral structure of the proton implied an excess of d̄ over ū quarks in the proton sea. Again this was ignored. A decade later it was discovered that there was a very large violation of the Gottfried sum rule [5], and later Drell-Yan experiments [6,7], followed by semi-inclusive DIS measurements [8], confirmed that this originated in an excess of d̄ over ū quarks, consistent with the 1983 prediction [9-11]. The pion cloud of the nucleon, which had led to the prediction of precisely such an effect, slowly began to be taken more seriously, and new experiments are still being planned to explore the phenomenon in detail [12]. Investigations of the structure functions of the nucleon within models such as the MIT bag [13] and the chiral quark soliton model [14] predicted a non-trivial polarization of the anti-quarks in the proton sea arising from the modification of the vacuum inside a hadron, with ∆ū > 0 and ∆d̄ < 0. Experiments underway at RHIC have the capacity to tell us whether indeed this is the way Nature works.
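For concreteness, the Gottfried sum rule mentioned above can be written in the standard parton-model form (a textbook result, quoted here for orientation rather than taken from this talk):

```latex
S_G \;=\; \int_0^1 \frac{F_2^{\,p}(x) - F_2^{\,n}(x)}{x}\,\mathrm{d}x
     \;=\; \frac{1}{3} \;-\; \frac{2}{3}\int_0^1 \bigl[\,\bar{d}(x) - \bar{u}(x)\bigr]\,\mathrm{d}x .
```

A measured value below 1/3 therefore translates directly into the excess of d̄ over ū discussed in the text.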
There is an almost universal assumption of charge symmetry [15,16] (e.g., u ≡ u_p = d_n) in the literature concerned with parton distribution functions. Indeed, phenomenological fits allowing for the possibility of charge symmetry violation (CSV) were only initiated a decade ago [17]. Yet bag model investigations some 20 years ago had unambiguously predicted the sign and magnitude of CSV in the valence quark distributions [18,19]. The neglect of those predictions, which, at least for low moments, have been confirmed by lattice QCD calculations [20] in just the last few years, led to an over-inflated view of the importance of the deviation from Standard Model expectations in neutrino-nucleus DIS. Later studies of the NuTeV anomaly have also revealed, again within a QCD-motivated model of the EMC effect, the possibility of an additional source of difference between u and d valence quarks in nuclei with N ≠ Z. This difference, called the iso-vector EMC effect [21], is a consequence of the iso-vector nuclear force acting on the bound quarks. Again, this insight from a model has suggested a number of experiments, including parity-violating DIS on nuclei [22], which have the capacity to establish the phenomenon. Spin The proton "spin crisis" resulting from the EMC measurements [23,24] of a large violation of the Ellis-Jaffe sum rule has inspired theorists to find ways to explain it. Within a few months of the experimental paper, two very different approaches were proposed. The theoretical beauty of the axial anomaly in QCD led to a rush to explore the possibility that at a scale of a few GeV² gluons might carry as much as 4 units of angular momentum, which, through the box diagram containing the axial anomaly, would resolve the crisis [25]. No one ever suggested how such an enormous gluon spin fraction might arise, and eventually a new generation of polarized pp collider measurements showed that indeed it does not exist [26]. The alternative suggestions, based on rather well understood hadronic physics, which were initially left in the dust, now appear to be correct. In particular, when a pion is emitted the proton tends to flip its spin, p↑ → π⁺ + n↓, and within the CBM it was shown that this naturally accounts for half of the modern discrepancy [27]. In addition, within essentially all QCD-inspired quark models, the exchange of a single gluon is an essential part of the machinery needed to explain the hadron spectrum. For example, within the MIT bag model the N-∆ and Σ-Λ splittings both arise from the one-gluon-exchange hyperfine interaction [28]. This same interaction required by spectroscopy naturally reduces the fraction of spin carried by the quarks [29]. Combining these two effects fully accounts for the current experimental value [30] and lattice results [31]. Of course, the "experimental value" is not independent of theory, because one must subtract the octet component of the nucleon spin (g_A^8) from the experimental data to obtain the singlet spin fraction. Again, recent studies within the CBM suggest that SU(3) symmetry is broken by as much as 20% in this case [32], and that has a significant effect on the deduced value of the proton spin fraction. Incidentally, since many fits to spin-dependent PDFs impose an assumed value of g_A^8, this also has implications for the phenomenological spin-dependent PDFs.
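The role of the octet axial charge in extracting the singlet spin fraction can be made explicit using the standard definitions (added here for clarity; not equations from this talk):

```latex
a_3 = \Delta u - \Delta d, \qquad
a_8 = \Delta u + \Delta d - 2\,\Delta s, \qquad
\Delta\Sigma = \Delta u + \Delta d + \Delta s
\;\;\Longrightarrow\;\;
\Delta s = \tfrac{1}{3}\left(\Delta\Sigma - a_8\right),
```

so the value assumed for a_8 = g_A^8 feeds directly into the deduced strange-quark polarization and hence into the singlet combination.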
Lattice QCD As lattice QCD has developed into a reliable, quantitative tool for calculating hadron properties within QCD, one might imagine that models would be irrelevant. Yet that is far from correct. In the quest to understand how QCD works, lattice studies have created a new dimension beyond that occurring in Nature, namely the study of hadron properties as a function of quark mass. Initially, even for ground state masses, it was essential to work with unphysical quark masses and to extrapolate to the physical quark masses in some way. This is still true now for many form factors and excited state properties. The CBM provided vital guidance in early studies of this kind [33] and eventually inspired what has become known as finite range regularization (FRR) [34,35]. This approach has provided a natural explanation for the general absence of chiral curvature at quark masses above 40-50 MeV in all hadronic observables. It led to the precise calculation of the electric and magnetic strange form factors of the nucleon [36,37] and provided a "back of the envelope" understanding [38] of why those are so remarkably small. Spectroscopy As we hinted above, the situation with respect to excited states calculated using lattice QCD is far more fluid. The JLab and CSSM groups have made enormous progress at light quark masses of two or three times the physical value, with as many as 4 or more excited states found for a given set of quantum numbers [39,40]. It is still far too early to draw conclusions concerning the pattern of these excitations with respect to experiment. Nevertheless, this area will move rapidly over the next few years. As an indication of just what may be possible in terms of insight, we mention the study of the wavefunction of the Roper resonance by Leinweber and collaborators [41], which clearly shows the node indicative of a 2s excitation. Since the Roper, along with the Λ(1405), has been a bugbear in spectroscopy for decades (e.g., as to whether it is a multi-quark state, involves gluonic excitation, etc.), direct pictures of the distribution of quarks within the state should prove extremely valuable in solving the mystery. The second major issue in hadron spectroscopy that must be mentioned is the existence, or otherwise, of exotic hadrons. The JLab lattice studies strongly suggest that these exist at an excitation energy perhaps 0.8 to 1.0 GeV above the corresponding non-exotic hadrons [42]. This is consistent with the cost in terms of energy found in the MIT bag model or, much more recently, within the Dyson-Schwinger formalism [43]. Clearly the major advances in this area must initially be tied to the success of the experimental searches, for example at JLab 12 GeV. Form Factors No discussion of this topic at the present time would be complete without a mention of the mystery surrounding the charge radius of the proton following the muonic-hydrogen measurement of Pohl and collaborators [44]. At the present time the discrepancy between that beautiful measurement and the CODATA value is not understood at all, and there is a distinct possibility that further investigation may reveal a hint of new physics, perhaps related to the muon g-2 discrepancy. In this case there is no model we can point to for inspiration. At higher Q², the JLab discovery of an unanticipated decrease in the ratio of the electric to magnetic form factors of the proton is of great interest [45]. We look forward to seeing data at even higher Q² to show for certain whether or not G_E passes through zero.
Lattice QCD has not yet reached these exalted values of momentum transfer, but there has been some notable progress below 1.5 GeV² [46]. In terms of a deeper understanding of the physics in the region 5-10 GeV², the link between the behaviour of G_E/G_M and the transition between non-perturbative and perturbative behaviour in the quark propagator, suggested in a recent Dyson-Schwinger equation calculation [47], makes this region very interesting indeed. We have already mentioned the success in both measuring and calculating the strange electric and magnetic form factors of the proton, which have occupied many people for the last two decades. This is a vital test of our understanding of non-perturbative QCD, in close analogy with the standing of the Lamb shift in QED. This is because these form factors have their origins in so-called "disconnected diagrams", or non-valence physics, just like vacuum polarization. Pushing these tests to higher precision is at a standstill for the time being, as one looks for new experimental techniques and tries to pin down the extent of CSV in the nucleon elastic form factors. The Deep Inelastic Regime Over the past 40 or more years, unpolarized DIS measurements have managed to define very precisely many important properties of the PDFs of the nucleons. There are cases where physics demands at the LHC may require more but, at least for unpolarized scattering, it is in the area of flavor asymmetries that the most interesting questions remain open. We have already mentioned most of these issues in Sect. 2. Semi-inclusive DIS is one promising technique to explore such questions but, especially when it comes to strange quarks and fragmentation functions that involve kaons or unfavoured fragmentation, there is much we do not know. Model studies may well provide useful information there [48]. However, it is in the measurement of processes involving polarization that there is the greatest activity. The next decade will see a revolution in our understanding of the Collins and Sivers effects, of deeply virtual Compton scattering, and so on. Perhaps because we are at such an early stage in this work, there is a great deal of effort needed to build better models that more accurately reflect the consequences of QCD itself. Eventually we may hope for a much deeper understanding of the role of orbital angular momentum in hadrons. We already know that the conversion of valence quark spin into quark orbital angular momentum, carried by pions and anti-quarks, is the explanation for the EMC spin crisis, but the finer details demand much more effort. Conclusion In this very limited space it has been impossible to do more than set out a rough sketch of a vision for hadronic physics over the next decade. New facilities such as JLab at 12 GeV, GSI-FAIR, J-PARC and one or more electron-ion colliders, backed by upgrades at RHIC, will provide essential new data. Lattice QCD will continue to grow in reliability and accuracy. We will probe the QCD structure and origins of atomic nuclei in new ways. Yet, in the end, our success will be judged by the new insights into how hadrons work and the new paradigms that are established. It will be the leaps in our qualitative understanding, many related to the beauty of new models that capture the essence of hadronic systems, that will stand out. These are exciting times.
3,291.4
2014-04-03T00:00:00.000
[ "Physics" ]
Addressing Emerging Risks: Scientific and Regulatory Challenges Associated with Environmentally Persistent Free Radicals Airborne fine and ultrafine particulate matter (PM) is often generated through widely used thermal processes such as the combustion of fuels or the thermal decomposition of waste. Residents near Superfund sites are exposed to PM through the inhalation of windblown dust, ingestion of soil and sediments, and inhalation of emissions from the on-site thermal treatment of contaminated soils. Epidemiological evidence supports a link between exposure to airborne PM and an increased risk of cardiovascular and pulmonary diseases. It is well known that incomplete combustion can lead to the production of organic pollutants that adsorb to the surface of PM. Recent studies have demonstrated that the interaction of these pollutants with metal centers can lead to the generation of a surface-stabilized metal-radical complex capable of redox cycling to produce reactive oxygen species (ROS). Moreover, these free radicals can persist in the environment, hence their designation as Environmentally Persistent Free Radicals (EPFRs). EPFRs have been demonstrated both in ambient air PM2.5 (diameter < 2.5 µm) and in PM from a variety of combustion sources. Thus, low-temperature thermal treatment of soils can potentially increase the concentration of EPFRs in areas in and around Superfund sites. In this review, we outline the evidence to date supporting EPFR formation and its environmental significance. Furthermore, we address the lack of methodologies for assessing the risk of EPFRs and the challenges associated with regulating this new, emerging contaminant. Introduction Residents near Superfund sites are exposed to fine and ultrafine particles through a variety of routes, including inhalation of windblown dust, ingestion of soil and sediments, and inhalation of emissions from the on-site thermal treatment of contaminants. Since the particles may originate from sites contaminated with hazardous substances, they may also be contaminated. While windblown dust and soils contain a large fraction of coarse particles, designated by the US EPA as PM10 (particles with a diameter of less than 10 microns), they also contain fine particles, or PM2.5 (with a diameter of <2.5 microns), and ultrafine or nanoparticles, PM0.1 (diameter of <0.1 microns, or <100 nm). With the publication of epidemiological data since 1990 linking exposure to PM2.5 with cardiopulmonary diseases, the impact of PM2.5 and PM0.1 on human health has become a major environmental issue [1-3]. In general, the emission of organic pollutants during thermal treatment processes results from poor mixing of gases and the formation of oxygen-starved pockets in various areas of the flame, including the post-flame and cool zones of the thermal treatment process. Combustion and/or oxidation are incomplete in these areas, resulting in the formation of so-called "products of incomplete combustion". Semi-volatile and nonvolatile organic pollutants have long been thought to associate with particulate matter through adsorption to the surface or within the particles' pore systems and cracks. These organic compounds can also rapidly react with the surfaces of particles at moderately elevated temperatures in thermal and combustion systems. It has been shown that such reactions are a major route to the formation of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans, popularly known as "dioxins" [4-6].
During such reactions, the adsorbing molecules interact with metal centers, leading to an electron transfer process and the formation of surface-stabilized metal-radical complexes [7,8]. Upon emission with particulate matter, such entities are resistant to further oxidation in the ambient atmosphere and can persist in the environment for days, weeks or longer [9,10]; they are thus referred to as Environmentally Persistent Free Radicals (EPFRs). These EPFRs can be generated through the combustion of a variety of source materials and typically elicit a single broad, unstructured electron paramagnetic resonance signal characteristic of an oxygen-centered radical, e.g., a g value of ≈2.003 (see Table 1) [11-14]. Indeed, recent studies have indicated the ubiquitous presence of similar radical signatures in ambient air PM2.5 and have attributed them to EPFRs similar to those formed in thermal processes [15]. Table 1. Literature support for the formation of EPFRs through the combustion of organic materials. Early research demonstrated EPFR formation for thermal processes and combustion-derived PM [11-14]. Though there are no data as yet on the levels of EPFRs in PM generated during the thermal remediation of a Superfund site, soils collected in and around such sites have revealed the presence of EPFRs, at levels 2-30-fold higher than in soils collected in areas removed from the sites [18]. This is not unexpected, since soils at Superfund sites should contain many of the same constituents as PM derived from their thermal remediation, including transition metals and organic pollutants. In fact, at sites shown to be contaminated with pentachlorophenol [18,19] and PAHs [18], analysis of collected soils using electron paramagnetic resonance revealed EPFRs with characteristics similar to those generated by the combustion of similar hydrocarbons [20]. While the combustion processes involved in thermal remediation produce vast quantities of EPFRs with reaction times of seconds, similar EPFRs are likely formed in soils, albeit with reaction times on the order of years rather than seconds. The ability to form EPFRs under ambient conditions was further supported by their detection in tar balls collected on the Gulf of Mexico shores of Louisiana and Florida after the Deepwater Horizon platform oil spill [21]. On the other hand, laboratory studies have shown that low-temperature thermal treatment of contaminated soils can increase the concentration of EPFRs [22]. In this review, we summarize the evidence for the presence of EPFRs in a wide range of environmental matrices, e.g., soils, particulate matter, etc. Further, we outline the evidence that, despite a lack of risk assessment strategies adapted for these pollutant-particle systems, EPFRs are an emerging contaminant that deserves attention, research focus, and efforts toward the development of appropriate methodologies for real-time monitoring, risk assessment and public policy. Epidemiological Studies of Hazardous Waste Incineration and Other Thermal Processes Epidemiological data now strongly suggest an association between elevations in particulate matter emissions and a number of cardiovascular and respiratory events. Specifically, air pollution, including PM, has been shown both to exacerbate [23] and to increase the onset of asthma [24-26]. In the Framingham Heart Study, short-term exposures to PM2.5, even within EPA standards (PM2.5 < 12 µg/m³ annually), were shown to be associated with decreased lung function [27].
Moreover, epidemiological data suggest a link between PM exposures and cardiovascular events and mortality [28]. For example, results of the ESCAPE (European Study of Cohorts for Air Pollution Effects) study revealed a 13% increase in coronary events for each 5 µg/m³ increase in annual mean PM2.5 exposure [29]. In a population-based cohort study in Canada, incident hypertension was shown to be associated with elevations in PM2.5, with a 13% increase in risk for every 10 µg/m³ increase in PM2.5 levels [30]. Using Medicare program hospital admissions coupled to spatial resolution of PM exposure data within 20 km of EPA air monitoring stations, Kloog et al. [2] showed that admissions for respiratory and cardiovascular events, such as those related to valvular disease, stroke, ischemic heart disease and chronic obstructive pulmonary disease, were associated with elevations in PM2.5 exposures. Given reports that ~40%-70% of airborne fine particles are derived from emissions from combustion sources [31,32], it is reasonable to assume that combustion-derived PM plays a role in the epidemiological data linking PM exposures and cardiopulmonary diseases. In fact, some studies have directly determined the contribution of combustion-derived PM to these health outcomes. A notable example is the association between decreased lung function and an increased prevalence of chronic obstructive pulmonary disease (COPD) and respiratory infections reported in populations exposed to particulates from the combustion of biomass fuels for cooking [33,34]. In another example, the National Particle Component Toxicity (NPACT) initiative sought not only to determine an association between PM levels and health effects, but also utilized data from the EPA's Chemical Speciation Network (CSN) to compare these health outcomes across putative sources of particulates [3]. Findings from this study suggested that PM2.5 derived from fossil fuel combustion was associated with both short-term and long-term health effects. Moreover, PM2.5 originating from residual oil combustion and traffic sources was associated with short-term health effects, while PM2.5 derived from coal combustion was correlated with long-term health consequences [3]. As has been reviewed extensively by others [35,36], the initiation of cardiovascular diseases such as atherosclerosis typically results from cycles of oxidative stress and inflammation within affected cells. Given the known relationship between certain heavy metals and their ability to initiate oxidative stress [37], many laboratories have sought to elucidate whether the metal content of PM has a role in exacerbating PM-related cardiovascular diseases. As an example, studies have shown elevations in ischemic heart disease in trucking industry workers chronically exposed to particulate emissions [38], and PM emitted from diesel engines is associated with a wide range of heavy metals [39,40]. Moreover, these exposures were associated with elevated plasma biomarkers of oxidative stress and inflammation [41], suggesting that these factors may play a role in the biological mechanisms of action of these PM. In addition, plasma obtained from workers exposed to metal-rich PM, particularly iron, exhibited elevated levels of oxidative stress markers [42]. The investigators hypothesize that the oxidative stress induced by inhalation of metal-containing PM serves as a critical systemic link between PM exposures and the associated increased risk of cardiovascular disease.
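As a small numerical aside, risk estimates quoted per different PM2.5 increments (5 versus 10 µg/m³ above) can be put on a common footing if one assumes the conventional log-linear exposure-response model; the helper below is illustrative only and is not a calculation performed in the studies cited.

```python
# Hedged helper: rescale a relative risk quoted per one PM2.5 increment
# to another increment, assuming log-linearity of the exposure-response.
def rescale_rr(rr: float, from_inc: float, to_inc: float) -> float:
    """RR per `to_inc` units, given RR per `from_inc` units."""
    return rr ** (to_inc / from_inc)

# ESCAPE's 13% per 5 ug/m3, expressed per 10 ug/m3 for comparison with
# the Canadian cohort figure quoted above:
print(round(rescale_rr(1.13, 5.0, 10.0), 3))  # about 1.28
```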
Other epidemiological and basic science research studies have revealed disparate findings with respect to the role of metals in mediating the health effects of PM. For example, comparing biological specimens obtained from volunteers in cities in Japan and China with comparable ambient air PM2.5 levels but dramatically different metal content, Niu et al. showed that exposures to PM2.5 containing Ni, Cu, As, and Se produced more dramatic reductions in circulating endothelial progenitor cells but increased levels of inflammatory cytokines and other markers [43]. On the other hand, analysis from two large cohorts revealed no relationship between cardiovascular mortality and the levels of Cu, Fe, K, N, S, Si, V, and Zn present in PM [44]. Conversely, in a study assessing the incidence of low birth weight in California, increased rates were associated with PM2.5 containing specific metals, such as V, Fe, Ti, Mn, and Cu [45]. Thus, our understanding of the role of the metal component of PM in the health risk of exposure is incomplete. As will be explained below, elucidating its contribution to the health risk of PM exposure may require delving beyond its behavior as an individual unit to its role in a more complex pollutant-particle system. Specifically, we believe that the metal is important as an entity necessary to form the pollutant-particle systems responsible for EPFR formation, and that different metals impart differences in radical formation and stability in the environment and in the host. The Case for Environmentally Persistent Free Radicals Combustion of organic materials is generally known to generate free radicals through gas-phase reactions (at 600 °C-1200 °C). These radicals evolve from the thermal dissociation/scission of chemical bonds and the reaction of these radicals with other radicals and molecules present in the plasma region of the flame, i.e., that which contains ionized gases. These radicals are very reactive and short-lived, with half-lives of nano- to milliseconds, depending on the particular species. Thus, the discovery of EPFRs and the elucidation of mechanisms for their formation [8] has created a new paradigm for the perception of combustion-borne radicals as long-lived entities. Studies utilizing laboratory simulations of a combustion reactor showed that in the cool-zone region, where temperatures are 100 °C-600 °C, chemicals on the surfaces of particles condense to form additional pollutants and stable free radicals detectable using Electron Paramagnetic Resonance (EPR) [46]. Recognition of the presence of long-lived carbon-centered radicals in solid matrices such as coals, chars, and soots dates back to the 1950s [47,48]; these were associated with delocalized electrons in a polyaromatic organic polymer, although with differing spectral characteristics (g ≈ 2.002-2.003). Long-lived radicals in cigarette tar exhibiting spectral characteristics similar to EPFRs were identified as semiquinone radicals [49,50]. These semiquinone radicals were shown to be associated with a quinone/hydroquinone redox cycle capable of producing reactive oxygen species (ROS) [49,50] and with an ability to induce DNA damage [49,51]. EPR studies conducted as early as 1982 demonstrated similar free radical species in diesel exhaust particles [52].
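For orientation, the g-values quoted throughout this section come from the basic EPR resonance condition hν = g·μ_B·B; the snippet below applies it with illustrative X-band numbers (the frequency and field are examples, not data from the cited studies).

```python
# Hedged sketch: g-factor from the EPR resonance condition h*nu = g*mu_B*B.
H = 6.62607015e-34       # Planck constant, J s
MU_B = 9.2740100783e-24  # Bohr magneton, J/T

def g_factor(freq_hz: float, field_t: float) -> float:
    """Return g = h*nu / (mu_B * B)."""
    return H * freq_hz / (MU_B * field_t)

# Illustrative X-band example: ~9.5 GHz resonance near 0.339 T.
print(round(g_factor(9.5e9, 0.339), 4))  # close to the g ~ 2.003 signals above
```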
Since then, numerous laboratories have demonstrated the presence of long-lived free radicals formed in combustion products from a wide range of sources, ranging from wood, coal and biochar, to diesel and gasoline exhaust products, to ambient PM and polymer waste products, as is highlighted in Table 1 [11][12][13][14]17,53]. Importantly, PM 2.5 examined from 5 different sites across the country (LA, AZ, CA, NC, PA) demonstrated levels of free radicals comparable to that of cigarette smoke (10^16–10^17 radicals/gram) [54]. As is illustrated in Figure 1, which depicts an EPR spectrum of ambient PM from Atlanta compared to that of cigarette tar, the spectral parameters of these persistent free radicals, including g-values in the range of 2.0031–2.004, were similar to phenoxyl/semiquinone radicals, and this radical could be detected in samples stored for several months. In both cases, the observed spectra are typical of an EPFR. The larger width of the tobacco tar signal is indicative of more complex convolution due to the presence of multiple radicals. Arrows point to the g-value for each spectrum.
Mechanisms for EPFR Formation
The mechanism for EPFR formation involves the chemisorption of undestroyed parent molecules, incomplete combustion byproducts or molecules formed de novo onto freshly incepted particles, along with metal center domains. Studies aimed at characterizing EPFR formation have shown that incomplete combustion of carbon, carbon and chlorine, carbon and nitrogen, and carbon and bromine forms a number of products, including small hydrocarbons such as ethylene, acetylene and others, but also benzene, phenols, and PAHs, including their chlorinated and brominated derivatives [55][56][57]. These compounds can adsorb onto condensed refractory metal oxides at temperatures ranging from 120 to 900 °C, forming particulate matter and culminating in the formation of pollutant-particle systems that include EPFRs capable of producing ROS. In support of this claim, virtually every metal and organic compound observed at increased body burden downwind of incinerators has been found to be contained within particulate emissions from these incinerators, as are a number of known carcinogens and other toxic compounds [58][59][60]. Most aromatic compounds will chemisorb to the surface of metal-oxide-containing particulate matter under post-combustion, cool-zone conditions (120–400 °C). Chemisorption is defined as the formation of a chemical bond between the particle and a pollutant, resulting in a new metal-pollutant entity that will exist until a subsequent chemical reaction occurs to separate or destroy them. Many transition metal oxides can be easily reduced by a chemisorbed organic (Figure 2) [13]. Using X-ray spectroscopy and FTIR methodologies, as well as surrogate systems for copper-containing fly ash, it was demonstrated that in the post-combustion zone, organic compounds react with surface-dispersed CuO to generate phenolate products, while at the same time, Cu(II) is reduced to Cu(I) [61]. In the process of reducing the metal, a surface-associated organic free radical is formed. This research has revealed that the association of the free radical with the surface of the metal-containing particle stabilizes the radical [7]. In fact, these radicals are stable for days in air at room temperature. The precise mechanism promoting their stabilization is not clear. We postulate that their resistance to oxidation in air is mainly due to the electronic resonance of the radical and the distribution of the unpaired electron over the entire molecule, extending further to the metal center.
Figure 2. Interaction of a pollutant with a metal oxide cluster. In this representation, monochlorophenol is chemisorbed to the surface of the particle by the elimination of a molecule of water. A 1-electron transfer then results in Cu(II) reduction to Cu(I) and the formation of a surface-stabilized, oxygen-centered radical. Resonance with a carbon-centered radical(s) on the ring further stabilizes the radical.
We have now completed multiple studies indicating these EPFRs form and persist in soils from contaminated Superfund sites, PM generated from the thermal treatment of hazardous substances including e-wastes (unpublished data), airborne fine PM, and even e-cigarette aerosols [62]. We have determined the conditions under which persistent free radicals are generated and can reproduce these conditions in the laboratory to produce surrogate systems for biomedical, chemical and physical studies. Superfund sites, in particular, are a rich source of organics and metals that together can form matrices containing EPFRs. As an example, it has already been shown that semiquinone radicals form from the oxidation of PCBs [63], and PCBs are a common contaminant at Superfund sites. Moreover, sediments and soils contaminated with pentachlorophenol in and around a Superfund wood treatment site generated EPFRs [64]. Follow-up studies also indicated the presence of EPFRs in soil and sediment samples from Superfund sites in Montana and Washington [18]. Thus, EPFRs can be presumed a common species in Superfund sites nationwide.
EPFR-Induced Production of Reactive Oxygen Species (ROS) and Their Potential Health Effects
As mentioned earlier, metal and organic compounds observed at increased body burden downwind of incinerators have been found to be contained within particulate emissions generated by these nearby incinerators [58][59][60]. Some have argued that although these metals are accessible for human exposures through even non-inhalation pathways-i.e., via water, foliage, soils, etc.-levels observed in those sources are likely more reflective of ordinary ambient air exposures [65]. Nevertheless, given that combustion-derived metals may exist as a component of an EPFR, these metal-pollutant complexes may exhibit toxicities that are much greater than, or "more-than-additive" compared to, those of their metal and organic components per se. Thus, risks associated with exposure to these combustion-derived metals may be misinterpreted. Studies published in the last decade have suggested the EPFRs' ability to generate ROS such as hydroxyl radical, and consequently, their ability to induce the oxidation of biomolecules in vitro (Table 2) [17,[66][67][68]. Early studies examined the ability of airborne PM to induce oxidative DNA damage [13,17,53,67]. Valavanidis et al. showed that upon suspension together with H 2 O 2 , particulates such as PM 10 , PM 2.5 , diesel and gasoline exhaust particles and wood smoke soot induced the hydroxylation of 2′-deoxyguanosine in a manner correlated with their content of both transition metals and a free radical species detected by EPR [53]. Later that same year, though, the group reported that these same PM were capable of producing ROS independent of exogenously added hydrogen peroxide [13]. EPR characterization of the PM demonstrated a single, broad signal at g = 2.0036, indicative of a semiquinone radical. The authors argued that the broadness of the signal suggested a group of semiquinone radicals in a polymeric matrix. Alaghmand and Blough likewise found that suspensions of airborne PM from a variety of sources, but particularly that derived from diesel exhaust (DEP), generated hydroxyl radical in a metal ion-dependent manner [69].
Biological electron donors such as NADPH further stimulated hydroxyl radical production, and through experiments involving centrifugation and filtration, they showed that hydroxyl radical formation was likely due to particle surface reactions rather than reactions occurring in solution via solubilized PM components [69]. DiStephano et al. also found that DEP and PM collected at sites across California were capable of generating hydroxyl radical in a manner correlated with their Cu content [70]. Similar to studies by Alaghmand and Blough, hydroxyl radical production was further stimulated by co-incubation with an electron donor, in this case, ascorbate [70]. In sum, these studies suggest that PM is a complex mixture, but its ability to redox cycle to produce ROS likely derives from an interaction between its semiquinone radical and transition metal content, presumably through Fenton-type reactions [71] on the particle surface. The implication of the particles' ability to produce ROS is that the particles may have long-lasting effects in biological systems.
Table 2. Evidence for the ability of EPFRs to generate reactive oxygen species.
Source: TSP (Athens); urban street dusts; DEP; GEP. Finding: PM generates hydroxyl radical in aqueous suspension; hydroxyl radical formation was linked with redox-active metal content. References: [72]
Source: Biochar. Finding: Biochar contains persistent free radicals evident by EPR; biochar can activate H 2 O 2 to produce hydroxyl radical. References: [11]
Source: DEP; coal fly ash. Finding: Suspensions of DEP and coal fly ash produce hydroxyl radical; metal ions and superoxide implicated in its production; neither kaolinite nor silica produce •OH. References: [69]
Source: Ambient air PM (California); DEP. Finding: In the presence of ascorbate, ambient air PM and DEP both generate •OH; •OH production is correlated with Cu content. References: [70]
Abbreviations: DEP = diesel exhaust particles, GEP = gasoline exhaust particles, TSP = total suspended particulates.
Although our colleagues have observed EPFRs in all ambient air PM 2.5 samples examined to date [15,73], there are as yet no human exposure data. However, model pollutant-particle systems have been developed to study the biological effects of EPFRs, their ROS formation and their cytotoxicity [8]. These EPFR surrogates contain phenolic and/or chlorinated aromatic precursors (e.g., monochlorophenol or dichlorobenzene) adsorbed onto metal oxide domains bound to a fumed amorphous silica matrix. In in vitro studies, these EPFR surrogates were capable of generating ROS, including superoxide and the highly reactive hydroxyl radical, with a yield comparable to ambient air PM 2.5 [64,68,74]. The measured cycle lengths for these particle systems indicate that they undergo numerous redox cycles [68]. Our hypothesis for EPFR-induced initiation of a redox cycle within a biological environment and their ability to generate ROS is illustrated in Figure 3. In support of this hypothesis, absence of any one of the components of the particle system, be it the transition metal or the organic, produced particles that generated little or no ROS, and the particles containing an EPFR were more toxic to cells in vitro than particles lacking an EPFR [68].
While a number of laboratories have reported that combustion-derived PM exposures elicit oxidative stress responses in both cultured cells and rodents [10,66,[75][76][77], our studies utilizing model particle systems demonstrate that EPFRs produce greater levels of oxidative stress and overall toxicity compared to particles that do not contain an EPFR and are not themselves capable of producing ROS [66,68]. Studies using EPFR surrogates have also demonstrated their ability to induce both pulmonary and cardiovascular dysfunction in animals. In rodent models of asthma, EPFRs increased oxidative stress, dendritic cell activation and innate immune responses in the lungs of adult animals [78]. In neonates, EPFRs, but not size-identical particles lacking an EPFR, induced airway inflammation and hyper-responsiveness [79], as well as an increased severity of influenza disease [80]. Finally, EPFR-exposed neonatal mice exhibited an exacerbated allergic inflammation when challenged with allergen as adults [81]. With respect to cardiovascular toxicities, EPFR-exposed rats exhibited decreased cardiac function and increased oxidative stress at baseline, as well as after ischemia/reperfusion (I/R) injury [82]. EPFR inhalation also prevented the heart's ability to compensate for deficits in left ventricular function induced by an I/R injury [83]. These two findings may help to explain the association between PM exposures and mortality due to myocardial ischemic injury. Finally, in healthy rats, EPFR inhalation reduced left ventricular function associated with an increased oxidative stress, whereas exposures to particles lacking an EPFR produced no functional deficits or oxidant injury [84]. We have not yet specifically examined the impact of particle size on EPFR-mediated functional outcomes in animals, but given that the surrogate EPFR systems can be synthesized at varying sizes, such experiments are possible and likely warranted.
Figure 3. Proposed cycle for the generation of ROS by a pollutant-CuO particle system. The process begins when CuO is chemisorbed to the surface of the particle (Figure 2). For simplicity, in this example, hydroquinone is shown as an example of an organic pollutant. Beginning with the structure indicated in blue and working counter-clockwise, hydroquinone is adsorbed to the surface of the CuO-particle system and an electron is transferred from hydroquinone to Cu(II) to generate Cu(I) and a semiquinone free radical. Following deprotonation of the other phenolic proton, an electron is transferred from the chemisorbed radical compound to molecular O2 to produce superoxide and a non-radical product. Finally, the superoxide is converted to H2O2 in the presence of NADPH, ascorbate or thiols, all in high abundance on the lung surface. The resulting H2O2 can undergo Fenton-type reactions with the chemisorbed Cu(I) to produce •OH, and can regenerate Cu(II) on the particle to complete an ROS-generating redox cycle.
In conclusion, numerous published reports establish that EPFRs induce ROS production and biomolecule oxidation (Table 2), and several in vivo studies demonstrate that EPFRs exhibit greater toxicity than non-EPFR PM [79,80,84]. These outcomes suggest that adequate risk assessment of PM exposures will require a strategy for detecting EPFRs associated with PM and for estimating the contribution of EPFRs to disease risk. Therefore, a better understanding of EPFRs, their mechanisms of toxicity and careful risk assessment strategies for handling EPFRs appear in order.
Particle Aggregation
It is now widely accepted that the toxicological study of nanoparticles either in vitro or in vivo is plagued by a number of methodological challenges. For example, do the nanoparticles actually exist as nanoparticles or as aggregated micron-sized particles? Even more important, to study their effects in vitro and in vivo, should we prevent them from aggregating? For example, if they do indeed aggregate, do the EPFRs interact to form new entities that themselves elicit cytotoxic properties, and should these new entities be studied? Nevertheless, most studies typically involve dispersal and suspension in solution with the aid of surfactants or other such agents to prevent their rapid aggregation. We typically utilize solutions of saline containing a small amount of Tween 80 and have found this surfactant to be relatively non-toxic to cells and not to contribute to ROS production. One may still wonder, though, whether we should be studying the monodispersed nanoparticles or the aggregates, or even whether the use of a surfactant alters the mechanism of action of the particle system. Nanoparticles of many types are furthermore known to reduce the accuracy of cellular cytotoxicity measurements, as the particles typically either quench fluorescence or bind to reagents utilized in assays [85].
Moreover, nanoparticles are known to bind to cytokines, making the accurate assessment of cytokines through ELISA problematic [85]. Although we have found that many of these same challenges apply to experiments utilizing EPFR-containing PM, as will be described below, the study of EPFR toxicity is associated with numerous other unresolved challenges over and beyond those of other nanoparticle exposures.
Particle Storage
A critical factor that cannot be overlooked is the storage of particles. Our prior data have shown that certain radical species associated with PM survive for months after extraction from the environment [54]; however, some radicals are less stable (i.e., hours/minutes). Furthermore, suspension in solution for the sake of creating monodispersed nanoparticles can result in quenching; thus, their storage as dry particles under vacuum is important for maintaining their shelf life [8]. Thus, PM experiments conducted with less than optimal storage conditions likely culminate in EPFR quenching prior to experimentation and may not reflect the exposure effects associated with an EPFR, per se. Since filter systems designed to trap airborne PM typically generate only small quantities of particles over short periods of time, extending the collection time-i.e., days to a few weeks-likely results in the quenching of radicals collected at the beginning of the collection period. Consequently, it is hard to conceive of collecting sufficient EPFR-containing PM in a pristine condition for performing animal studies. In our own experience, studies such as these were restricted to in vitro designs only. Thus, it is likely that animal exposure studies conducted to date that assess the toxicity of PM collected from biological sources poorly assess the contribution of the EPFR. Thus, risks associated with PM exposures may be underestimated.
Use of Controls
Given that PM exposures themselves, independent of the EPFR, can elicit tissue responses such as inflammation, design and utilization of appropriate controls is important, but is also frustratingly difficult to achieve. Our research team has addressed a number of our experimentation issues by developing model EPFRs [8,66]. Our laboratory-generated pollutant-particle systems have certainly overcome the issue of obtaining sufficient pristine EPFR-containing PM samples for exposing animals. However, design of the most appropriate surrogate PM for use as a control has proven problematic. For example, although we have sometimes utilized an EPFR surrogate system containing only a transition metal such as Cu(II) as a control, Cu(II) is itself a known oxidant in biological systems [86]. Considering that it may be unlikely to encounter a particle in the environment that contains only copper, and given that the Cu(II)-particle control will likely present with toxicity but with a differing mechanism of action, it seems unlikely that such a surrogate serves as an ideal control. One might also utilize, for example, amorphous silica as a control having a similar particle size but lacking an EPFR. Although a reasonable solution, silica does not contain an organic component that, together with the EPFR, may be necessary for promoting biologic effects. To date, we have attempted to utilize a number of similar surrogate systems as controls [66,68], but despite our efforts, have yet to identify an ideal control for biological studies.
Mixtures
PM 2.5 is a complex mixture of particles, along with sulfate, nitrates, ammonium, elemental carbon, metals, and organic carbon [87], as well as biological components such as lipopolysaccharide [88]. Epidemiological studies suggest that these components may exacerbate cardiopulmonary disease onset or progression. For example, as reviewed by Guarnieri and Balmes [1], NOx may potentially act as an adjuvant, such that combinations of PM and NOx interact to promote asthma symptomology or onset. Moreover, ozone has been shown to be associated with allergic sensitization [89]; thus, ozone may further exacerbate PM-induced asthma presentation. Therefore, animal experimentation should address not only the impact of PM exposures and the role of the EPFR, but also the synergistic/antagonistic effects of EPFR-containing PM mixtures with other components of air pollution. It is possible that the numerous components synergize to elicit an increased risk of cardiopulmonary disease that may be underestimated from individual exposure studies. In summary, exposure to elevated levels of combustion-derived PM has been associated with adverse cardiopulmonary events. EPFRs exist on airborne PM 2.5 at concentrations that are high compared to most organic pollutants (~1-10 µM/g). Traditional methods of analyzing PM (i.e., solvent or alkaline extraction) result in their conversion to molecular species that may or may not contain an EPFR [90], suggesting that EPFRs are being misidentified as molecular pollutants or, worse, not being detected at all. Thus, we believe that EPFRs represent a new paradigm for the human health impacts of environmental PM and that risk assessment for EPFRs is timely and critical.
EPFRs and the Regulatory Framework
Since EPFRs are a new class of pollutant, policy makers in the U.S. have not formulated specific regulations to address the potential public health risks. However, existing policy governing the incineration of hazardous materials and the Clean Air Act's standards for fine particulate matter suggest how EPFR regulation might be addressed.
Incineration of Hazardous Materials
Emissions from hazardous waste incinerators are regulated under the Clean Air Act (CAA). The policy is implemented by the states through permitting of hazardous waste incineration operations. Under the existing policy, states may adopt more stringent permit requirements than those required by the CAA. The CAA is designed to protect human health and the environment from the most harmful effects of air pollution by requiring significant reductions in the emissions of the most dangerous air pollutants. These pollutants are known or suspected to cause serious health problems such as cancer or birth defects, and are referred to as hazardous air pollutants (HAPs). Under the original CAA, the Environmental Protection Agency established National Emission Standards for Hazardous Air Pollutants (NESHAPs) for seven HAPs. The 1990 amendments to the CAA required that the standards be based not on a specified level, but on the maximum achievable control technology (MACT) for a category of emission sources within an industry group, such as hazardous waste incinerators. The policy applies an environmental policy tool or instrument known as "design standards" and requires that the regulated entity apply the technology used by the most successful pollution-reduction firm within an industrial category.
The specific regulations for hazardous waste incinerator operators are found in the Code of Federal Regulations, Part 264, Subpart O (USEPA website). Permitted incineration facilities are required to conduct risk assessments to ensure that the incineration process does not emit materials that may pose a public health threat. EPA sets a relatively stringent standard for hazardous waste incinerators that is 10 times more protective than the allowable limits for the same substances involved in other permitted processes. The required risk assessment must include direct and indirect potential pathways of human exposure. Direct pathways include inhalation and ingestion, while indirect routes include deposition on soil and in waterways, leading to possible introduction of the material into the food chain. Under current regulations, the estimated emissions must account for any metals, dioxin, and other products of incomplete combustion (PICs) that may be present. There is as yet no requirement for EPFR monitoring.
Clean Air Act (CAA) Regulation of Particulate Matter
The current version of the CAA also provides guidelines and protocols for limits on PM 10 and PM 2.5 , based on their physical properties as pulmonary irritants. The EPA recognizes the ability of these coarse (PM 10 ) and fine (PM 2.5 ) particles to decrease lung function, aggravate existing asthma, and induce chronic bronchitis, irregular heartbeats, and nonfatal heart attacks. All of these health effects are based solely on the mechanical properties of particulates small enough to enter and irritate airways and alveolar sacs. However, ultrafine particles (PM 0.1 ) have recently been shown to cause much more severe problems [91,92]. As a percentage of mass, these particles only make up a small portion of the total particulate assemblage, but as a percentage of the total number of particles, the ultrafines comprise up to 90% of the assemblage [91]. At the same time, it is already widely appreciated that the ultrafine particles are capable of greater penetration into lung tissues. Nevertheless, many unknowns concerning the intersection of ultrafines and EPFRs remain. For example, it is unknown whether EPFR concentrations are greater in ultrafines compared to PM 2.5 or PM 10 . Moreover, given the findings of our studies [66,68], EPFR-containing ultrafine particles likely present with greater toxicity than those lacking an EPFR. Thus, the contribution of EPFRs cannot be overlooked in either strategies for risk assessment or the development of additional policies for regulating PM exposures. What is the outlook for more specific or stringent policy guidelines in the future? Public policy theory offers insights. Kingdon's conceptual model of policy "agenda setting" [93] describes the process by which issues are recognized as important enough to warrant a policy response. The first component of the process is the "problem stream", wherein scientific certainty concerning risks associated with a substance is high and public sentiment supports a policy response to address the risk. Often, a "triggering event" is necessary to convince observers that the hazard is significant and that the regulatory status quo is not adequate. Next, policy approaches recognized as appropriate and effective means for addressing the problem, ranging from direct command-and-control regulations to incentive-based programs, are designed to promote best practices [94].
Finally, solutions to the problem are typically defined largely by the responsible government agency and regulated entities that would bear increased costs associated with new or more stringent regulation. Affected economic interests are more likely to accept new regulation if they believe the regulations would be implemented equitably without bias throughout the industry, and would not introduce unreasonable costs or onerous reporting requirements [93,95]. As a result of the inherent difficulties of achieving movement within these policy realms, many environmental concerns do not advance to the public policy agenda. Applying this descriptive model to the case of EPFRs and PM 0.1 , federal funding opportunities and recent research have increased awareness of the issues and likely risks, at least among the research community and government agencies. However, there is significant uncertainty concerning the extent to which the general public is subject to exposure risks. The dearth of research to date concerning the contribution of EPFRs to PM-associated health effects, and the lack of an ability to measure EPFRs in real-time, hinder progress toward scientific certainty. As a result, the "problem stream" for EPFRs is not well-defined, and advancement through the public policy agenda is not yet supported.
Conclusions
The discussion points introduced here provide a context for the consideration of additional research and public policy actions to address EPFRs, an important emerging contaminant. The lack of scientific certainty concerning the extent to which EPFRs may pose risks to public health and our lack of an ability to measure them in real-time are major impediments to accurate risk assessments and new guidelines to address the threat. As also outlined here, another critical need is the development of risk assessment strategies for EPFR-containing PM, and this can only happen once substantial in vitro and in vivo experimentation has been completed. A feasible next step concerning EPFRs would be to work within the existing policy framework to promote additional monitoring, environmental sampling for EPFRs around permitted hazardous waste incinerators, and studies aimed at mapping human exposures around these sites. This could provide a key source of data to support more refined estimates of the location and extent of human exposure risks associated with EPFRs. This information, along with a better understanding of the science and the development of technologies that could minimize the risk of EPFR formation in the incineration process, could be implemented within the MACT program to reduce public exposure risks associated with this emerging contaminant.
Acknowledgments: This work was accomplished through funding from the LSU Superfund Research Program (P42ES013648).
Author Contributions: All authors, including Tammy R. Dugas, Slawomir Lomnicki, Stephania A. Cormier, Barry Dellinger and Margaret Reams contributed intellectually and in the writing of this review article.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in the manuscript:
Application of Neuromorphic Olfactory Approach for High-Accuracy Classification of Malts Current developments in artificial olfactory systems, also known as electronic nose (e-nose) systems, have benefited from advanced machine learning techniques that have significantly improved the conditioning and processing of multivariate feature-rich sensor data. These advancements are complemented by the application of bioinspired algorithms and architectures based on findings from neurophysiological studies focusing on the biological olfactory pathway. The application of spiking neural networks (SNNs), and concepts from neuromorphic engineering in general, are one of the key factors that has led to the design and development of efficient bioinspired e-nose systems. However, only a limited number of studies have focused on deploying these models on a natively event-driven hardware platform that exploits the benefits of neuromorphic implementation, such as ultra-low-power consumption and real-time processing, for simplified integration in a portable e-nose system. In this paper, we extend our previously reported neuromorphic encoding and classification approach to a real-world dataset that consists of sensor responses from a commercial e-nose system when exposed to eight different types of malts. We show that the proposed SNN-based classifier was able to deliver 97% accurate classification results at a maximum latency of 0.4 ms per inference with a power consumption of less than 1 mW when deployed on neuromorphic hardware. One of the key advantages of the proposed neuromorphic architecture is that the entire functionality, including pre-processing, event encoding, and classification, can be mapped on the neuromorphic system-on-a-chip (NSoC) to develop power-efficient and highly-accurate real-time e-nose systems. Introduction Research in machine olfaction and electronic nose (e-nose) systems has garnered much interest due to a number of novel applications that can be envisaged by implementing this technology [1]. Although foundational work in odor sensing can be traced back to the 1960s starting with Moncrieff's mechanical model [2], a paradigm shift in this domain came after the seminal work of Persaud and Dodd [3] in the early 1980s that sparked the development of sophisticated e-nose systems. Inspired by the biological olfactory pathway, Persaud and Dodd proposed an electronic nose system that implemented a multi-sensor approach, combined with a signal conditioning and processing module, for the identification of various volatile compounds. The past thirty years have seen an increasingly large number of studies building on this foundational research to link the functional emulation of the biological olfactory pathway to artificial olfactory systems that can be implemented for real-world applications [1,[4][5][6]. Typically comprising a sensor array and a pattern recognition engine (PARC), e-nose systems mimic the capabilities of biological olfaction to recognize chemical analytes. A conventional approach of processing electronic nose data includes four key stages: data acquisition of time-series resistance data generated by the front-end sensing array; application of pre-processing or signal conditioning techniques for denoising; feature extraction of robust information to enhance class differentiability; and a subsequent pattern recognition algorithm that can classify the extracted features to identify the odor class. 
Although the dynamics of all the aforementioned processes are vital for the implementation of a robust and reliable e-nose system, the PARC engine, in particular, is a principal determining factor for key performance parameters such as power and computing requirements, portability, and classification latency and accuracy [7,8]. The implementation of traditional computing techniques has imposed limitations in handling continuous multi-dimensional data, which in turn has affected the efficiency of the e-nose systems and impeded their performance [4]. Advanced research in machine learning and statistical algorithms has been a major enabler of improved handling of multivariate data, which has led to novel algorithms being implemented for pattern recognition in e-nose systems [4, 6,[9][10][11]. However, the efficiency of these algorithms has largely depended on pre-processing methods such as dimensionality reduction, and a number of signal conditioning stages that have added to the complexity, power and computational requirements, and the overall processing latency [1,12]. Nevertheless, the limitations observed in these implementations have highlighted the importance of a simplified, robust, and power-efficient PARC engine that can be easily integrated into an e-nose system. The emergence of neuromorphic methods provided a totally different outlook towards solving the artificial olfaction problem. The sparse spike-based data representation used in neuromorphic approaches was crucial for e-nose systems, as the volume of data generated could be minimized by encoding only useful information, enabling optimization of the processing [1,13,14]. Other advantages, such as low-power implementation and rapid processing of sparse data through spiking neural networks (SNNs) and bio-inspired learning algorithms, were vital for the development of efficient and robust artificial olfactory systems. The fully-integrated olfactory chip proposed by Koickal et al. in [15] was one of the first neuromorphic olfactory system implementations. Comprising a chemosensor array, signal conditioning circuitry, and an SNN with bio-inspired learning capabilities, the proposed system emulated the sensing, transformation, and association functionalities of the biological counterpart. Although further research into overcoming the limitations of analogue design and real-world applications of this study was never reported, this groundbreaking work paved the way for future studies in neuromorphic olfaction. Other noteworthy studies in neuromorphic olfaction include the rank-order-based latency coding [16,17], hardware-based olfactory models based on the antennal lobe of the fruit fly [18][19][20], a VLSI implementation of an SNN based on the neurophysiological architecture of a rodent olfactory bulb [21], hardware implementation of the olfactory bulb model [22], a classifier using a convolutional spiking neural network [23], a 3D SNN reservoir-based classifier for odor recognition [24], and the columnar olfactory bulb model inspired by the glomerular layer of the mammalian olfactory pathway that was recently extended for its implementation on Loihi, Intel's neuromorphic research chip [14,25]. However, most of the research in neuromorphic olfaction, such as [15,21,[26][27][28][29][30], is more driven towards implementing a high level of bio-realism to emulate the biological olfactory pathway, which results in impractical models with limited scope for real-world applications [5].
Review articles [1,[4][5][6] present a comprehensive survey on the development, application, and current limitations of neuromorphic olfactory systems. Although application of neuromorphic methods and SNNs for artificial olfactory systems has begun to show promise, only a small number of studies, such as [12,14,21,24,31,32], have been able to deploy these bio-inspired models on an application-ready neuromorphic platform in a realistic field setting. In the work presented in this paper, we extend our previously reported neuromorphic encoding and SNN-based classification approach to include performance parameters when deployed on Akida neuromorphic hardware [12]. The significance of this work is two-fold: Firstly, the neuromorphic processing model for olfactory data hypothesized in [12] is proven by applying the model to a real-world dataset collected to identify eight types of malts. Secondly, the proposed neuromorphic model establishes a general platform for encoding and classifying e-nose data, where all of these functions can be mapped on the Akida neuromorphic hardware to leverage the ultra-low-power and high-performance capabilities for simplified integration in a portable e-nose system. Studies based on implementation of traditional methods for evaluating malt aromas to identify malt types have shown them to be time-consuming and to require costly equipment and trained personnel [33,34]. Accomplishing this task using a non-invasive electronic nose (e-nose) system may be of great interest within the brewing industry because malts, as one of the vital raw materials, significantly impact the beer quality and the brewing process [35]. However, achieving this presents a nontrivial classification task because, as is the case with most aromatic compounds, the instrumental odor characteristics of a malt sample may overlap even if their aroma profiles seem different to human olfaction [36,37]. Therefore, this study aims to implement bioinspired data-encoding and classification techniques on olfactory data obtained using a commercial e-nose system and the Akida Spiking Neural Network (SNN) architecture.
Sample Preparation
The preparation of samples and experimental protocols were based on previous machine olfaction-based studies that included experiments with grains [38][39][40][41] and beer [42,43]. This study used eight types of malt samples obtained from Pilot Malting Australia. The classes of malts and their flavor profiles, as described in [36,[44][45][46][47][48], are listed in Table 1. Samples were prepared using 100 g of each malt type transferred to a 250 mL sterile borosilicate glass flask. The samples were sealed tightly with two layers of paraffin film and stored at room temperature to prevent the loss of volatiles and odor characteristics. Before exposure to the e-nose system, the samples were heated to 25 °C using a digital hotplate with frequent perturbation to ensure that the malts were evenly heated. The paraffin films were punched with holes to prevent moisture accumulation within the flask, and the perturbation continued until a thermal equilibrium was achieved. This process allowed the release of aromatic volatiles, which mainly include aliphatic alcohols, aldehydes, ketones, pyrroles, furans, and pyrazines [49], from the malt samples without a significant increase in relative humidity that would affect the headspace analysis. A total of eight samples, corresponding to each type of malt, were prepared for the experiment.
Electronic Nose System
A commercially available Cyranose-320™ e-nose (Sensigent, Pasadena, CA, USA) was used to obtain the aroma patterns from the headspace of the malt samples. The portable e-nose system incorporates a sensor array consisting of 32 nanocomposite sensors, where each sensor exhibits cross-sensitivity towards specific chemical or aromatic volatile compounds [50]. The e-nose system is exposed to these aromatic compounds through a delivery system where the chemical interaction between the sensing element and the volatiles results in a change in electrical resistance. This change in resistance is proportional to the amount of chemical absorbed by the conducting polymer on the sensing surface. The resulting signal is a change in resistance in a sensing element for the time interval during which it is exposed to the chemical vapors. The raw data acquired consists of changes in resistance in each sensor array element, producing a distribution pattern or a smell-print that can be used to identify the VOC mixture using pattern-recognition techniques. In the study described in this paper, it was observed that four sensors (sensors 5, 6, 23, and 31) were sensitive to polar compounds, such as water vapor from the moisture present in the headspace due to the heating of the malt samples. As a result, data from sensors 5, 6, 23, and 31 was not acquired during the experiments, and the experiments overall resulted in a 28-dimensional e-nose response.
Sampling Protocol
The VOCs were measured using the experimental setup shown in Figure 1. Although the experiments were carried out in a fume cupboard to avoid interference from contaminants such as dust, ambient air was used for the baseline so as to replicate a real-world application where ideal lab conditions and zero-grade dry air for the baseline may not be available. Sensigent's PCNose software was used for data acquisition, and the raw resistance change data was exported to a CSV file. As reheating of the malt sample after initial thermal equilibrium was achieved could potentially change its physical characteristics and adversely affect the experiments, data samples were recorded as consecutive sensor response measurements for as long as the thermal equilibrium could be maintained. In total, nine replicates of measurements were recorded for each malt sample, resulting in a dataset of 72 files with eight classes. Another set of experiments producing three additional replicates per class was carried out under similar laboratory conditions. This dataset, consisting of 24 files, was used to validate the classifier's generalization for inferences of previously unseen data. Before the experiments, the e-nose system was purged with ambient air for six minutes to obtain a steady baseline. For the e-nose analysis, the sample headspace was analyzed for a total of 90 s. This included 15 s of baseline, 50 s for sample intake, and 25 s for snout removal and baseline purge. The substrate temperature was set to 37 °C, the pump speeds for each sampling stage were set as per the manufacturer's recommendation [50], and the sampling frequency was set to 1 Hz. Table 2 shows the sampling parameters used to record responses from the e-nose system.
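To illustrate how such recordings might be organized for downstream processing, the sketch below assembles the per-measurement CSV exports into a single array and drops the four humidity-sensitive channels. The directory layout, column names, and file-naming convention are hypothetical placeholders rather than the actual PCNose export format; this is only a rough sketch of the data handling described above.

```python
import glob
import numpy as np
import pandas as pd

EXCLUDED_SENSORS = [5, 6, 23, 31]   # humidity-sensitive channels (1-indexed sensor IDs)

def load_enose_dataset(csv_dir):
    """Assemble per-measurement CSVs into an array of shape (samples, time, sensors).

    Assumes each CSV holds one 90 s measurement sampled at 1 Hz with one column per
    sensor, named 'S1' ... 'S32', and that the class name prefixes the file name --
    both assumptions for illustration only.
    """
    responses, labels = [], []
    for path in sorted(glob.glob(f"{csv_dir}/*.csv")):
        df = pd.read_csv(path)
        keep = [f"S{i}" for i in range(1, 33) if i not in EXCLUDED_SENSORS]
        responses.append(df[keep].to_numpy())                 # (90, 28) per measurement
        labels.append(path.split("/")[-1].split("_")[0])      # hypothetical label convention
    return np.stack(responses), np.array(labels)

# X, y = load_enose_dataset("malt_measurements")   # X: (72, 90, 28) for the training set
```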
Signal Conditioning and Pre-Processing
The odor data acquired in the form of relative resistance signals was first visually analyzed using the PCNose tool, which is Sensigent's interfacing and data acquisition software for the Cyranose-320™ e-nose system. A typical e-nose response has three key components: a baseline response during the reference phase, a response curve and steady response during the exposure/sniffing phase, and a transition back to the baseline during the recovery phase (shown in Figure 2) [51]. In order to accomplish the identification of aromatic compounds through pattern recognition of e-nose responses, raw sensor responses have to be conditioned to mitigate the effects of noise and differences in resistance ranges of the sensors that can influence the outcomes of the classification process [7,11]. Noise in the sensor responses was mitigated by implementing a rolling mean smoothing technique, and the signals were normalized by fractional manipulation, during which the baseline is subtracted from the signal and divided by the minimum and maximum resistance to generate dimensionless and normalized responses on a unified scale between 0 and 1. In general, normalization using linear scaling was used over other methods in order to avoid computationally expensive operations during the pre-processing stage. Mathematically, the normalization process can be expressed as:
R_norm(x) = |R_i − R_0| / (R_max(x) − R_min(x)),
where R_norm(x) is the absolute value of the normalized relative resistance for sensor x, R_0 is the baseline response of sensor x, R_i is the measured resistance of sensor x at instance i, and R_min(x) and R_max(x) are the minimum and maximum resistance of sensor x for that sample. Although the dataset was limited in terms of the number of samples and classes, each sample is highly multidimensional as responses from 28 sensors are acquired. Despite the fact that each sensing element responds differently to the aromatic compounds, the distinctive information observed in the dataset is limited as the sensor responses follow a typical trend of baseline response followed by an increase or decrease in resistance to a steady-state response when exposed to the malt sample and back to baseline during the recovery phase. As a result, except for the slope of the sensor responses, most of the time-points represent a steady-state feature that may not suffice for classification, especially for a highly multivariate dataset. Another feature set based on enhancing inter-class discrimination was extracted to overcome the limitations of relative resistance features. In this case, the mean of the baseline was subtracted from the signal, and the data was normalized using the min-max values recorded for each sensor across all samples and classes. This global normalization process can be modelled as:
R_norm(x) = |R_i − R_baseline(avg)| / (R_global max(x) − R_global min(x)),
where R_norm(x) is the absolute value of the normalized resistance response for sensor x, R_i is the measured resistance of sensor x at instance i, R_baseline(avg) is the average of sensor x's baseline response, and R_global max(x) and R_global min(x) are the global maximum and minimum resistances for sensor x observed across all samples and classes.
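A minimal NumPy sketch of the two normalization schemes described above is given below. It assumes a per-measurement response array of shape (time, sensors) and a 15-sample baseline window, and is only an illustration of the equations rather than the authors' exact pre-processing code.

```python
import numpy as np

def fractional_normalize(resp, baseline_len=15):
    """Per-sample normalization: |R_i - R_0| scaled by each sensor's min-max range in the sample."""
    r0 = resp[:baseline_len].mean(axis=0)          # R_0: baseline response per sensor (assumed 15 s window)
    rng = resp.max(axis=0) - resp.min(axis=0)      # R_max(x) - R_min(x) for this sample
    return np.abs(resp - r0) / np.where(rng == 0, 1.0, rng)

def global_normalize(resp, global_min, global_max, baseline_len=15):
    """Global normalization: baseline-mean subtraction scaled by dataset-wide min-max per sensor."""
    rb = resp[:baseline_len].mean(axis=0)          # R_baseline(avg) per sensor
    rng = global_max - global_min                  # R_global_max(x) - R_global_min(x)
    return np.abs(resp - rb) / np.where(rng == 0, 1.0, rng)

# The global extrema would be computed once over all samples and classes, e.g.
# global_min, global_max = X.min(axis=(0, 1)), X.max(axis=(0, 1))
```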
The implementation of global normalization highlighted the descriptive information regarding the sensor responses with respect to each class by enhancing their inter-class features. This unique information can be used to distinguish sensor responses more effectively, which boosts classification performance. The pre-processing and conditioning stage is illustrated in Figure 3, which shows the transformation of the raw signal into features that were used for encoding and classification. Data-to-Event Encoding Using AERO One of the key aspects of implementing a neuromorphic approach for a sensing application is the sparse representation of data using a spike-based format that enables rapid processing with minimal power consumption [52]. Although the encoding of data in a spiking format can be achieved using several bioinspired algorithms, such as step forward (SF) thresholding or Ben's spiker algorithm (BSA) [53], address event representation (AER) [54] has become a de facto standard within the neuromorphic domain [55]. Based on the abstraction of the pulse-based neurobiological communication code found in living organisms, AER is an ideal interface for communicating temporal information in an eventbased sparse format from multiple sources using a narrow channel [56]. First conceptualized during the development of the dynamic vision sensor (DVS), the AER protocol's ternary data format for vision applications is used to encode X-axis and Y-axis coordinates of a pixel and ON or OFF spikes that are generated using a thresholding method to represent luminosity changes [4,57]. Following the successful implementation of AER for neuromorphic vision sensors, the AER protocol has been extended for several other neuromorphic systems, such as tactile [58,59] and auditory sensing [60], along with event-driven processing in neuromorphic hardware implementations [52,61,62]. The data-to-event transformation approach used in this work was abstracted from our previously developed AER for olfaction (AERO) encoder [12]. This approach is based on quantizing the normalized sensor responses to encode signal amplitude levels of each sensor within the AER data structure. AERO generates events at each timepoint and translates sensor responses into the AER-based spiking data format to encode the timestamp, the amplitude level of the signal, and the sensor ID information. Similar to one-hot encoding [63], the quantization of the signal amplitude creates time-based bins that are used by the SNN to learn from the non-zero bins and classify the sensor responses. Data-to-Event Encoding Using AERO One of the key aspects of implementing a neuromorphic approach for a sensing application is the sparse representation of data using a spike-based format that enables rapid processing with minimal power consumption [52]. Although the encoding of data in a spiking format can be achieved using several bioinspired algorithms, such as step forward (SF) thresholding or Ben's spiker algorithm (BSA) [53], address event representation (AER) [54] has become a de facto standard within the neuromorphic domain [55]. Based on the abstraction of the pulse-based neurobiological communication code found in living organisms, AER is an ideal interface for communicating temporal information in an event-based sparse format from multiple sources using a narrow channel [56]. 
Based on the number of bits selected for quantization, the signal amplitude is partitioned into 2^n levels, where n is the number of bits used. The quantization levels of signal amplitude are crucial to preserve the features that can significantly influence the learning and classification capabilities of the SNN. Typically, the number of bits selected for quantization determines whether the time-based bins formed are fine- or coarse-grained, which directly impacts the SNN's ability to generalize the odor classes based on the class-specific features it has learnt. This process of encoding continuous e-nose sensor responses into sparse AER-based events implemented through AERO is illustrated in Figure 4 as a conceptual block diagram.
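To make the AERO transformation concrete, the sketch below quantizes a normalized sensor response into 2^n amplitude levels and emits (timestamp, level, sensor ID) events, mirroring the description above. It is a simplified illustration, not the published AERO implementation; the event tuple layout and the 4-bit default are assumptions based on the text.

```python
import numpy as np

def aero_encode(responses, n_bits=4):
    """Quantize normalized e-nose responses into AER-style events.

    responses: array of shape (timepoints, n_sensors), values in [0, 1]
    Returns a list of (timestamp, amplitude_level, sensor_id) tuples.
    """
    levels = 2 ** n_bits                            # e.g. 16 activation levels for 4-bit quantization
    quantized = np.clip((responses * levels).astype(int), 0, levels - 1)
    events = []
    for t, row in enumerate(quantized):
        for sensor_id, level in enumerate(row):
            if level > 0:                           # only non-zero bins produce events (sparse format)
                events.append((t, level, sensor_id))
    return events

# Example: 100 timepoints from 28 sensors
demo = np.random.rand(100, 28)
print(len(aero_encode(demo)), "events generated")
```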
Akida Neuromorphic Framework and Network Architecture Spiking neural networks are a particular class of artificial neural networks (ANNs) that incorporate biological processing principles where neurons process and propagate information in the form of sparse action potential-like representations, also known as spikes. The Akida neuromorphic framework by Brainchip implements these core concepts in the form of a digital neuromorphic system-on-a-chip (NSoC) [64] and the Akida Execution Engine (AEE), a Python-based chip emulator and key component of the Akida MetaTF ML framework (link-https://doc.brainchipinc.com accessed on 15 November 2021) for development and simulation of the behavior of the SNNs supported by the event domain neural processor. The Akida SNN implements a simplistic yet effective integrate-and-fire neuron model where a summation operation of input spikes is performed to simulate the membrane potential of the neuron and causes the neuron to fire if this potential is higher than a predetermined threshold. One of the key features of this neuromorphic framework is the binary implementation of synaptic weights and activation. This significantly reduces the computational overhead, resulting in a low-power rapid processing architecture [65]. The study described in this paper takes advantage of the fact that SNN models developed using the Akida MetaTF framework can be seamlessly deployed on the Akida NSoC, allowing the classifier to run on low-power neuromorphic hardware with support for edge learning. Additionally, the on-chip processor and data-to-spike converter within the Akida NSoC architecture (shown in Figure 5) enable onboard signal pre-processing and event generation, thus eliminating the requirement of a PC for interfacing with the e-nose system. The neuromorphic classifier proposed in this work is based on a feed-forward two-layer network architecture that comprises an input layer that receives AER-based spiking input and a fully connected layer for processing. The input dimensions, such as the number of timepoints (input width), activation levels (input height), and the number of features (number of sensors), are defined in the input layer. The event-based data generated by the AERO encoder is received by the input layer and propagated as spikes to the subsequent fully connected processing layer. This layer is responsible for learning and classification tasks. Several parameters-such as connectivity of neurons, the total number of neurons, minimum plasticity, and learning competition-are defined in this layer, which control the learning and classification performance of the model.
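The following sketch shows, in plain Python, the kind of two-layer feed-forward processing the text describes: binary synaptic weights, a summation of incoming spikes as the membrane potential, and a threshold-based firing decision. It is an illustrative abstraction only, not the Akida MetaTF API; the layer sizes, threshold, and weight initialization are assumptions.

```python
import numpy as np

class BinaryIFLayer:
    """Fully connected integrate-and-fire layer with binary (0/1) weights."""

    def __init__(self, n_inputs, n_neurons, threshold=50, seed=0):
        rng = np.random.default_rng(seed)
        # Binary synaptic weights, as in the framework described above
        self.weights = rng.integers(0, 2, size=(n_inputs, n_neurons))
        self.threshold = threshold

    def forward(self, input_spikes):
        """input_spikes: binary vector of length n_inputs for one presentation."""
        potentials = input_spikes @ self.weights              # summation of input spikes
        return (potentials > self.threshold).astype(int)      # fire if potential exceeds threshold

# Example: input flattened to timepoints x levels x sensors, 80 output neurons
rng = np.random.default_rng(1)
layer = BinaryIFLayer(n_inputs=100 * 16 * 28, n_neurons=80)
sparse_input = (rng.random(100 * 16 * 28) < 0.002).astype(int)   # sparse event vector
print(layer.forward(sparse_input).sum(), "neurons fired")
```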
Classifier Training: Learning Using STDP Learning in the SNN-classifier is implemented using the Akida built-in learning algorithm based on the bioinspired spike-time dependent plasticity (STDP) learning approach with modifications for efficient implementation on low bit-width architectures (refer to [66]). In this unsupervised learning approach, the neurons learn to respond to particular features that are found to repeat over multiple input samples by reinforcing the synapses that match an activation pattern [64]. The synaptic connectivity of the neurons within the network undergoes weight changes to establish a correlation with repeating temporal patterns, and the competition between neurons ensures that they each learn different features. The quantization of the signal during the data-to-event encoding plays an important role in the learning process as the discretized sensor responses are distributed in time-based bins, similar to one-hot encoding, and the network learns the signal characteristics and odor-specific features from non-zero-valued bins. In this case, the level of quantization controls the specificity and generalization of the signal that the network learns over successive presentation of the e-nose data. A 4-bit discretization that partitions the amplitude of the signal into 16 activation levels was selected for this application based on the overall classification performance of the network achieved with minimum use of neural resources. Training the SNN model was based on one-shot learning where the SNN learns repeating temporal patterns through a single feed-forward propagation of event-based data. This approach is much faster than typical deep learning gradient-based training that requires multiple iterations for network convergence and to minimize the error function. Training and testing of the SNN-based classifier for all eight classes of malts was implemented for both of the relative resistance features (local and global) that were extracted during the pre-processing stage. In each case, a randomly allocated combination of six files per sample (70%) was used for training the classifier model, and the remaining three files (30%) were used for testing. The resultant connectivity weights within the neuron population after the learning phase for locally normalized relative resistance features are shown in Figure 6.
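A minimal sketch of the training and evaluation protocol described above (a 70:30 split per class and a single, one-shot presentation of the event-encoded files) is given below. The `snn` object and its `fit_one_shot`/`predict` methods are hypothetical placeholders standing in for the trained model, and the file-handling details are assumptions.

```python
import random

def split_files(files_per_class, train_fraction=0.7, seed=42):
    """files_per_class: dict mapping class label -> list of recording files (9 per malt here)."""
    rng = random.Random(seed)
    train, test = [], []
    for label, files in files_per_class.items():
        shuffled = files[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_fraction)          # 6 of 9 files used for training
        train += [(f, label) for f in shuffled[:cut]]
        test += [(f, label) for f in shuffled[cut:]]
    return train, test

def train_and_evaluate(snn, encode, train, test):
    """encode: callable turning a recording file into AER events (e.g. aero_encode)."""
    for f, label in train:                                  # single pass = one-shot learning
        snn.fit_one_shot(encode(f), label)
    correct = sum(snn.predict(encode(f)) == label for f, label in test)
    return correct / len(test)
```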
Classification Performance The classification within the SNN is based on a winner-takes-all (WTA) logic [67], where the class label of the neuron with the highest activation level among the population is allocated to the presented data. The accuracy of the classifier is determined by comparing the predicted class label to the true class label for the validation data. The experiments for classification of malts using the SNN model were conducted for both of the extracted features, locally normalized relative resistance and relative resistance normalized using global min-max. An optimization process based on differential evolution [68] was implemented to determine a configuration for key parameters of the network. These include the minimum plasticity, plasticity decay, and learning competition, which have a significant influence on the classification performance of the SNN model. The optimum values for the network parameters were derived using a fitness function based on maximizing the stable classification accuracy of the SNN model. Certain parameters-such as the number of neurons per class and the connectivity of neurons (number of weights per neuron)-largely depend on the number of samples within a class, the number of sensors (dimensions of the data) employed, and the number of timepoints used for classification. The initial plasticity parameter was set to the maximum during the network initialization and gradually decreased based on the neuron activations and learning. Table 3 lists the network parameters, a short description of their functionality, their bounds used for the optimization process, and the optimum values for each parameter. Table 3.
SNN parameters with a description of their functionality, their max-min bounds used for the optimization, and the optimum value of the parameter obtained using grid-search.
Parameter | Description | Bounds | Optimum Value
Number of neurons per class | Number of neurons representing each class | 1-30 | 10
The classification performance of the network was determined using a stratified five-fold cross-validation. For the first scenario using the locally normalized relative resistance feature, the SNN model provided a classification performance of 90.83% with a variance of ±4.083%. The classification performance of the SNN model for the second scenario using relative resistance normalized using global min-max increased by 6.25%. In this case, the five-fold cross-validation accuracy of the classifier was found to be 97.08%, with a variance of ±2.08%. For each scenario, the processing latency for the emulated learning and recognition tasks on a standard PC with an i5 CPU, including the data-to-event encoding and other software-based latencies due to looping and control structures, was found to be between 1.5 and 2 s. In order to evaluate the efficiency and accuracy of the SNN-based classifier in regard to the overall classification performance, we compared the obtained results with statistical machine learning tools. As most of the statistical classification methods are based on single vector inputs [7,11,13], the temporal data was reduced to three static features: maximum resistance change, area under the curve, and the slope of the sensor response during the sniffing phase of the sampling.
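For reference, the three static features used in the statistical baseline can be computed as in the sketch below. The sniffing-phase indices and sampling interval are assumptions; the original feature definitions may differ in detail.

```python
import numpy as np

def static_features(response, sniff_start=20, sniff_end=120, dt=1.0):
    """Reduce one sensor's time series to three scalar features.

    response: 1-D array of (normalized) resistance values for a single sensor.
    Returns (maximum resistance change, area under the curve, sniffing-phase slope).
    """
    baseline = response[:sniff_start].mean()
    max_change = np.max(np.abs(response - baseline))              # maximum resistance change
    auc = np.trapz(response, dx=dt)                                # area under the curve
    sniff = response[sniff_start:sniff_end]
    slope = np.polyfit(np.arange(len(sniff)) * dt, sniff, 1)[0]    # slope during the sniffing phase
    return max_change, auc, slope
```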
Statistical machine learning algorithms generally do not perform well for highly multidimensional datasets [1,5,24]. Hence, principal component analysis (PCA) was used for dimensionality reduction and the dataset was reduced to three key components based on maximum explained variance. The comparison of classification accuracy and latency to train and classify the dataset based on a 70:30 train:test split and five-fold cross-validation is shown in Table 4 below. In order to validate the classifier performance, the SNN model was exposed to an entirely unseen dataset. This phase of the work used the secondary dataset, consisting of 24 files. This test was crucial to evaluate the generalization ability of the classifier model and eliminate the effects of inadvertent overfitting resulting from multiple uses of data during the model development. Applying the SNN model to this dataset resulted in 91.66% accuracy for the relative resistance features using global normalization. A confusion matrix of the classification result is shown in Figure 7. As SNN models designed using the Akida MetaTF framework can be seamlessly deployed on the Akida NSoC, the SNN-based classifier proposed in this study was implemented on the Akida neuromorphic hardware to validate the performance parameters.
All functionalities of the proposed pattern recognition engine, including pre-processing, AERO encoder, and the SNN-based classifier, were mapped onto the neuromorphic hardware platform. As anticipated, the classification performance of the SNN model when implemented on the hardware was similar to the results obtained using the software-based chip emulator. The classification latency for a trained SNN model in an inference mode was recorded to be 0.6 ms per inference. The dynamic power consumption of the SNN-based classifier when implemented on the neuromorphic hardware was less than 1 mW. The overall classification results, on both the Python-based emulator and the neuromorphic hardware, confirm that the proposed neuromorphic framework can be efficiently integrated as a pattern recognition engine in a portable artificial olfactory system operating under strict power constraints to deliver highly accurate classification in real time. Conclusions This study presents the implementation of a neuromorphic approach towards the encoding and classification of electronic nose data. The proposed approach was used to identify eight classes of malts and has potential as an application for quality control in the brewing industry. Experiments were conducted using a commercial e-nose system to record a dataset consisting of time-varying information of sensor responses when exposed to different malts under semi-laboratory conditions. The classifier proposed in this study utilized the combination of the Akida SNN and the AERO encoder, a neuromorphic approach that has previously delivered highly accurate results on a benchmark machine olfaction dataset [12]. The proposed method successfully classified the dataset with an accuracy of 97.08% and a maximum processing latency of 0.4 ms per inference when deployed on the Akida neuromorphic hardware. A secondary dataset that was used to validate the classifier model in an 'inference-only' mode was classified with an accuracy of 91.66%. These results could potentially be further improved by refinements to preprocessing that can enhance informative independent components for malt classes that are misclassified. Based on these results, we can conclude that the classifier model implemented using Akida SNN in conjunction with the AERO encoder provides a promising platform for odor recognition systems. An application targeted towards the identification of malts based on their aroma profile, generally considered a nontrivial classification task using traditional machine learning algorithms, was successfully demonstrated in this work with a classification accuracy greater than 90% under different scenarios. The developed model can be deployed on the Akida NsoC, thus enabling the integration of a bio-inspired classifier model within a commercial e-nose system. A comparative analysis of the proposed approach with statistical machine learning classifiers shows that the SNN-based classifier outperforms the statistical algorithms by a significant margin for both accuracy and processing latency. A performance-based comparison of the neuromorphic model proposed in this work with other neuromorphic olfactory approaches, such as [13,14,26,27,69,70], could not be established as their inherent structures, including spike encoding schemes, neuron models, SNN architectures, and implementation of learning algorithms, vary vastly. 
The proposed methodology, however, does not require a graphics processing unit (GPU)-based model simulation, unlike in [13], or a complex bio-realistic model, as used in [14]. Furthermore, the SNN-based classifier can be entirely mapped on a single neural processing unit core, as opposed to multiple cores used in [14], leading to a low-power and low-latency implementation. The application of such real-time and highly accurate e-nose systems can be extended to fields such as food technology, the brewing and wine industries, and biosecurity. Future research in this domain will focus on encoding parameters such as rank-order code within the AERO events to analyze its impact on classification performance.
9,370.2
2022-01-01T00:00:00.000
[ "Computer Science" ]
Artificial Intelligence Algorithms for Multisensor Information Fusion Based on Deep Learning Algorithms Artificial intelligence (AI) has been widely used all over the world. AI can be applied not only in machine learning and expert systems but also in knowledge engineering and intelligent information retrieval and has achieved amazing results. This article aims to study the relevant knowledge of deep learning algorithms and multisensor information fusion and how to use deep learning algorithms and multisensor information fusion to study AI algorithms. This paper raises the question of whether the improved multisensor information fusion will affect the AI algorithm. From the data in the experiment of this article, the accuracy of the neural network before the improvement was 4.1%. With the development of society, the traditional algorithm finally dropped to 1.3%. The accuracy of the multisensor information fusion algorithm before the improvement was 3.1% at the beginning; with the development of society, it finally dropped to 1%; it can be known that the accuracy of the improved neural network is 4.6%, and with continuous improvement, it finally increased to 9.8%. The improved multisensor information fusion algorithm is the same, the accuracy at the beginning was 3.9%, and gradually increased to 9.5%. From this set of data, it can be known that the improved convolutional neural network (CNN) algorithm and the improved multisensor information fusion algorithm should be used to study AI algorithms. Introduction Since AI was proposed, the progress of AI research can be described as "fast." Now, AI has been widely used in many disciplines and has formed an independent system. The history of AI is only a few decades, and its achievements and bumpy experiences have also attracted people's attention, especially that the traditional AI algorithms were once in trouble. Therefore, the study of AI algorithms based on deep learning methods has great research significance. Human progress is inseparable from the development of information. Information development makes people realize an intelligent life. People's life is more and more intelligent. Intelligent technology will bring people a better life. It also promotes the development of society. Therefore, we should make rational use of AI to make people's life more convenient. With the development of society, AI has become more and more important. Yanming found that many scholars recently proposed deep learning algorithms to solve previous AI problems. The purpose of this work is to confirm the latest technology of deep learning algorithms in computer vision through epoch-making topics. He gave an overview of various deep learning methods and their latest developments. Finally, Yanming summarized the future trends and challenges of neural network design and training [1]. Levine explained the coordination method based on the learning of the image grabbing robot. In order to control the coordination of hands and eyes, Levine only uses single-lens camera images to predict the success probability of the final control of spatial motion. This requires the use of the network to observe the spatial relationship in the scene to learn the coordination of hands and eyes. Then, he used the network to acquire masters in real time and successfully mastered the training network. Levine collected more than 800,000 crawling attempts in two months. The experimental evaluation of Levine showed that this method achieved effective real-time control [2].
Oshea introduced and discussed the application fields of deep learning. It can not only have a great impact on communication, but also play a great role in radio transformer. Compared with the previous traditional scheme, his scheme is obviously more advantageous. He applied deep learning to the sending and receiving of the network and achieved great success [3]. Ravi found that deep learning is applied more and more in life, its foundation is neural network. As it is used in more and more fields, its popularity is higher and higher [4]. Makridakis found that AI had a great impact on people's lives. It will have a great impact on people. It will not only have an impact on people's lives, but also promote social development. Through AI, people all over the world communicate more conveniently, and the network makes people's contacts more close. Obviously, AI has great advantages [5]. Lemley found that CNNs in deep learning are widely used. It is caused by the reliability of big data, which can make a lot of data and information perfectly solved in a fast time. Its emergence has improved the data problems in computers, the Internet, and so on. It makes deep learning more widely used. This leads to the wider applicability of deep learning. A new generation of intelligent assistant has also appeared, which is closely related to learning algorithm and deep learning. AI is also inseparable from deep learning [6]. Burton found that AI can help people solve moral, ethical, and philosophical problems. Burton hopes that students can be developed from AI and understand the role that AI will bring to society. Burton also discussed how to use AI to solve ethical and moral issues [7]. Polina found that although AI promoted the development of medical industry, it also brought challenges to the traditional medical system. The new deep learning is helpful to transform any patient's data into medical data and analyze the face and patient information. Patients cannot see their medical data. Polina outlined the advantages of the next generation of AI and proposed solutions. These solutions can solve medical problems and motivate relevant personnel to continuously monitor health [8]. Through the experimental analysis of scholars, people cannot stand still and must break through some traditional concepts and algorithms. Therefore, it is still necessary to study AI algorithms based on deep learning algorithms. The innovations of this article are as follows: (1) Introduce the relevant theoretical knowledge of deep learning and multisensor information fusion, and use the neural network algorithm based on deep learning to analyze how deep learning and multisensor information fusion research AI algorithms. (2) Base on neural network algorithm and multisensor information fusion weighted similarity improvement D-S evidence theory to launch experiments and analysis of AI algorithm research. Through investigation and analysis, it is found that AI algorithms are inseparable from neural network algorithms and multisensor information fusion. Network Algorithm and Multisensor Information Fusion The so-called multisensor information fusion is the use of computer technology to automatically analyze and synthesize information and data from multiple sensors or multiple sources under certain criteria to complete the required decision-making and estimation information processing process. 
Multisensor information fusion is an important content in target recognition, and the similarity between unknown and known targets is the key technology [9,10]. The common methods of multisensor information fusion can basically be summarized into two categories: AI method and random method; random algorithm is an algorithm that uses probability and statistics to make a random selection for the next calculation step during its execution. The random algorithm has certain advantages, as shown in Figure 1. As shown in Figure 1, since the 1970s, ultra-high-speed, ultra-large-scale integrated circuits have been realized through the use of high-precision machine tools. With the improvement of sensor performance, many sensor information systems have appeared, and these information systems are mainly used for applications with a variety of complex application backgrounds [11]. System information is monitored by multiple sensors, and the requirements of information processing speed, information expression form, and information storage capacity are no longer what the human brain's information comprehensive ability can bear. Mobile Information Systems This chapter introduces the functional model of multisensor information fusion; the structural model of sensor information fusion generally has three basic forms: centralized, decentralized, and hierarchical structure. The hierarchical structure is divided into feedback structure and nonfeedback structure, as shown in Figure 2. As shown in Figure 2, in the model of Figure 2, the functions of the information fusion system mainly include feature extraction, classification, recognition, and estimation. Among them, feature extraction and classification are the prerequisites for recognition and speculation, and the actual information fusion process is performed through recognition and speculation [12]. 2.1. Improved D-S Evidence Theory. Scholars have studied a new fusion method, the theory of evidence. The D-S algorithm belongs to the field of information fusion. The D-S theory is the promotion of Bayesian inference method, which is mainly carried out by using Bayesian conditional probability in probability theory. It is necessary to know the prior probability. The D-S evidence theory does not need to know the prior probability, can represent "uncertainty" well, and is widely used to deal with uncertain data. Supposing Bel 1 and Bel 2 are two trustworthiness functions on the same recognition framework Θ, w 1 and w 2 are their corresponding basic trustworthiness distributions, respectively, and the focal element of the two is A i , B j ði = 1, 2, 3, ⋯nÞ. If A j ∩ B j = A I , on the premise that the evidence is independent of each other, the following two trust merging rules are defined as The improved fusion rule is This method divides all conflict information into unknown item Θ and re-judges when new evidence comes. This method solves the problem of evidence synthesis when there is a high degree of conflict and improves the classic DS evidence theory into Here, the element in D is DðA, BÞ; d BPA ðw 1 , w 2 Þ represents the comprehensive influence of the two evidence focal elements and the basic probability assignment, which reflects the difference between the evidence. In order to comprehensively reflect the independence and difference between the evidence, based on the evidence distance, a new method of evidence conflict is proposed: Here, k is the conflict coefficient of the classical evidence theory. 
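The displayed combination formulas do not survive in the text above, so as a reference point the sketch below implements the classical Dempster combination rule and its conflict coefficient k that the improved methods build on. This is the standard textbook rule, not the paper's modified fusion rule, and the frame {x, y, z} simply mirrors the example in the text.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule for two mass functions over the same frame.

    m1, m2: dicts mapping focal elements (frozensets) to basic probability masses.
    Returns the combined mass function and the conflict coefficient k.
    """
    # Conflict k: total mass assigned to pairs of focal elements with empty intersection
    k = sum(w1 * w2 for (a, w1), (b, w2) in product(m1.items(), m2.items()) if not (a & b))
    combined = {}
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2 / (1.0 - k)
    return combined, k

# Example over the frame {x, y, z} used in the text
m1 = {frozenset('x'): 0.6, frozenset('y'): 0.3, frozenset('xyz'): 0.1}
m2 = {frozenset('x'): 0.5, frozenset('z'): 0.4, frozenset('xyz'): 0.1}
print(dempster_combine(m1, m2))
```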
The fusion rule is improved to Formula (5) adopts a weighting method for the evidence body, assigning weight to each evidence body, and realizes the improvement of evidence theory by adjusting the weight value. This method defines the weight probability distribution function on the recognition framework Θ: Based on the above two types of improvement methods, which adjusts the weight of each evidence subject according Mobile Information Systems to the similarity between the evidence subjects, it does not completely negate the conflict evidence, but distributes the conflict part according to a new way of expression, which is also an amendment to the conflict part [13]. In order to reflect the effectiveness of method w 1 proposed in this paper, specific simulation experiments are used to compare with other methods. Now, suppose that the recognition frame is Θ = fx, y, zg, and four sensors are used to observe the three characteristics of the recognition frame to form four evidence bodies, as shown in Table 1. As shown in Table 1, for the fusion of multiple evidence bodies, two pieces of evidence can be fused first and then fused with the others, and the fusion sequence is not affected. This method reduces the conflict of evidence very well. It can be seen from the size of the impact factor that the newly added evidence plays a major role in the fusion, and its impact factor is always 1. This shows that when the sensor collects data, it is very important. If a sensor fails, this method can accurately locate which sensor has failed [14]. CNN Algorithm Based on Deep Learning. The deep learning method represented by the convolutional neural network realizes object recognition and classification, and the feature extraction is completely handed over to the machine. The entire feature extraction process does not need to be manually designed, and all is automatically completed by the machine. Feature extraction is achieved through different convolutions, and scale invariance is achieved through maximum pooling layer sampling. While maintaining the three invariances of traditional feature data, the manual design details are minimized in the feature extraction method. Through supervised learning, the computing power of the computer is brought into play, and the appropriate characteristic data are actively searched for. In fact, the research on CNN has been developed since the last century, but it was limited by the computing power at that time, and it did not bloom like it is today [15]. The CNN is composed of multiple single-layer basic structures such as input layer, convolution layer, fully connected layer, and output layer [16]. These network layers form a deep network structure in a hierarchically connected manner [17]. Take the classic network model as an example, as shown in Figure 3. As shown in Figure 3, the CNN uses the sample image to propagate forward layer by layer through these structures to obtain an output corresponding to the input sample and then use the output value and the actual label corresponding to the sample to define the cost function. Use the BP algorithm back propagation to update the parameters of each layer and minimize the cost function. After a large number of samples are iteratively learned, a trained network is finally obtained; use the network to obtain the output characteristics of each layer for the newly input image, which can be used in tasks such as image representation and image recognition [18]. Convolutional Layer. 
It is linearly convolved through the convolution kernel when the current layer is input, and after adding an offset, the activation function is nonlinearly mapped. The feature map of the convolutional layer is Among them, E i is the feature map of the previous layer, which is also the input of the i-th layer, h represents the weight vector of the i-th layer convolution kernel, w i represents the convolution operation, f i−1 * w represents the offset vector, and a i represents the nonlinear activation function. In image processing, the convolution operation of the convolution layer is a linear convolution of the N × N size convolution kernel and the feature map. The value of each pixel in the convolution kernel is the weight: Among them, Size i represents the size of the feature map of the i-th layer; Size i−1 represents the size of the sliding window, which is also the size of the convolution kernel; and S tride is the step size of each movement of the convolution kernel. In this process, the advantages of CNNs are reflected. A certain size of convolution kernel is used for convolution with the input, which is equivalent to the weight of the neuron is only connected to a part of the input. When the convolution kernel slides according to a certain step size, after experiencing the entire input image, the corresponding output is generally the convolution result plus the offset; instead of directly taking the convolution result, this is to increase the nonlinearity of the network [19]. The commonly used activation function is the Sigmoid function: The output of Tanh function form is zero mean, and the effect is better than Sigmoid in actual use. It is defined as 2.2.2. Pooling Layer. The pooling layer is to down-sample the image according to a certain pooling method. Commonly used pooling methods include maximum pooling and average pooling. The specific operation is shown in Figure 4. As shown in Figure 4, the pooling layer is sandwiched between successive convolutional layers to compress the amount of data and parameters and reduce overfitting. In short, if the input is an image, then the main function of the pooling layer is to compress the image. Among them, Mobile Information Systems the maximum pooling method is shown on the left, and its pooling result is the value with the largest absolute value in the window, corresponding to the points with slashes at the same position before and after pooling. The role of the pooling layer includes two aspects. One is to reduce the dimensionality of the feature map. The feature map will become smaller as the number of layers increases, but it maintains a certain scale invariance [20]. It is defined as Among them, a represents the input, e a represents the transpose of a, and b represents the corresponding output. Training of CNN. Deep neural networks are sometimes called deep CNNs [21]. Artificial neural network is also referred to as neural network or connection model for short. It is an algorithmic mathematical model that imitates the behavioral characteristics of animal neural network and carries out distributed and parallel information processing. This kind of network relies on the complexity of the system and achieves the purpose of processing information by adjusting the relationship between a large number of internal nodes. The neural network structure diagram is shown in Figure 5. 
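Several of the display formulas referenced in this section (the convolution output size, the Sigmoid and Tanh activations, and the pooling operation) are missing from the text, so the sketch below restates the standard forms that the surrounding definitions describe. Variable names are chosen for readability and are not the paper's own notation.

```python
import numpy as np

def conv_output_size(size_prev, kernel, stride):
    """Feature-map size after an unpadded convolution: (previous size - kernel) / stride + 1."""
    return (size_prev - kernel) // stride + 1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))     # Sigmoid activation

def tanh(x):
    return np.tanh(x)                    # zero-mean output, often preferred over Sigmoid in practice

def abs_max_pool2d(feature_map, window=2):
    """Non-overlapping pooling that keeps the value with the largest absolute value
    in each window, matching the description of max pooling given in the text."""
    h, w = feature_map.shape
    h2, w2 = h // window, w // window
    blocks = (feature_map[:h2 * window, :w2 * window]
              .reshape(h2, window, w2, window)
              .transpose(0, 2, 1, 3)
              .reshape(h2, w2, -1))
    idx = np.abs(blocks).argmax(axis=-1)
    return np.take_along_axis(blocks, idx[..., None], axis=-1)[..., 0]

print(conv_output_size(28, kernel=5, stride=1))   # 24, a typical LeNet-style layer size
```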
As shown in Figure 5, people often like to use the cost function to measure the network [22], and its form is a represents the input sample, n represents the number of samples, b represents the corresponding ideal value, and h θ represents the actual output of the network output layer. In the classification task, h θ ðaÞ is a K-dimensional vector, corresponding to K classifications, as is the case with the output of the classifier as described above. The CNN framework used in this article uses a quadratic cost function. The deconvolution layer is the linear convolution of the feature map and the filter and summation to reconstruct the image. Taking the first layer as an example, each color channel of the input image can be expressed as the linear convolution sum of a feature map and a series of filters corresponding to the channel [23], which is Among them, b e 1 represents the approximate reconstructed image of the c-th color channel of the input image b 1 of the first layer; K 1 represents the number of feature maps of the first layer; and Z k,1 represents the k-th feature map. For brevity, Formula (13) is abbreviated as Formula (14) in a matrix form: Among them, b 1 represents the reconstructed image of the input image, F 1 represents the convolutional summation matrix, and Z 1 represents the one-dimensional vector Clustering Algorithm Based on AI Algorithm. AI is widely used and has become a cross domain advanced science [24]. Generally speaking, the purpose of AI is to make computers and machines think like humans. In doing so, these machines can be used to replace a lot of labor. AI has brought great convenience to mankind, and mankind has conducted more and more detailed research on this. One of the important areas of AI is the intelligent retrieval system [25]. Clustering is a widely used exploratory data analysis technique. People's first instinct for data is often through meaningful grouping of data. By grouping objects, similar objects are classified into one category, and dissimilar objects are classified into different categories. Clustering is an effective method for analyzing data and mining potential information. It is a clustering algorithm based on partition and widely used. Therefore, the improvement research of clustering algorithm for the purpose of improving the efficiency and clustering results has important theoretical significance. Since cluster analysis is based on the similarity between individuals, the measurement of similarity between individuals, between classes, and between different clustering results runs through each stage of clustering [26] . Assuming ðm, nÞ is two points in space, their distance is When m = 1, 2, three commonly used distances are obtained, respectively; when m = 1, it is the absolute value distance: It should be noted that distance is only limited to measure the similarity between numerical individuals, and cannot be used to measure the similarity of attribute individuals. Two vectors are randomly selected from the sample, and the distance between the two is where ∑ −1 represents the covariance matrix between samples. Assume that the mean of all data is μ, the standard deviation is σ, the number of classifications is k, and the initial classification center of the i-th category is x i : If it is the classification of d-dimensional data, set the initial center value of the i-th cluster to x il : In summary, although clustering has been well developed, its own shortcomings limit its application in practice. 
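The distance measures mentioned in the clustering discussion (the Minkowski family, the absolute-value distance obtained for m = 1, and the covariance-weighted distance between two sample vectors) also lost their display formulas; the sketch below gives the standard forms consistent with the description.

```python
import numpy as np

def minkowski(a, b, m=2):
    """Minkowski distance; m = 1 gives the absolute-value (Manhattan) distance,
    m = 2 the Euclidean distance."""
    return float(np.sum(np.abs(np.asarray(a) - np.asarray(b)) ** m) ** (1.0 / m))

def mahalanobis(a, b, cov):
    """Covariance-weighted (Mahalanobis) distance between two sample vectors,
    using the inverse of the covariance matrix between samples."""
    diff = np.asarray(a) - np.asarray(b)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

x, y = [1.0, 2.0], [3.0, 0.0]
print(minkowski(x, y, m=1), minkowski(x, y, m=2))
```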
The large and more complex data volume in practice can easily cause the calculation of the clustering algorithm to be too large, and it is difficult to perform effective clustering. Therefore, the traditional clustering method needs to be further improved and perfected [27]. Experiment and Analysis of AI Algorithms The sensor is used to collect environmental information and combined with the conventional robot to form an intelligent robot. If the robot can perceive the surrounding environment information, its performance advantage is also great. It happens that multisensor fusion can solve this problem. The outlier point is the point where the gradient of the sampled value cannot be reached within a sampling period in the actual system. In the field of mathematics, outliers are recorded as the first type of singularity. Outliers are composed of one or more observation points, which contradict other data in the observation data set. There are many methods for outlier elimination: visual inspection, mean square method, point discrimination method, Wright method, etc., among which Wright method is the most commonly used. But using the Wright method to eliminate outliers requires the residuals to meet the normal distribution, which is often not satisfied in the measurement data. If the feature change item in the measurement data is extracted Mobile Information Systems as a trend item, and then the remaining part is eliminated according to the Wright criterion, it will have a good effect. In order to verify the effectiveness of the outlier detection and elimination algorithm, this paper conducts experiments on 4 intelligent robots and controls their time period, respectively, which is between 0 and 0.4 seconds, and the measured value is between 5.7 and 6.8. This paper collects the data of the robot over a period of time and eliminates the outliers in the data. The theoretical value, measured value, and eliminated data of the robot are shown in Table 2. As shown in Table 2, the data collected by the sensor has a certain error compared with the theoretical data, and there are outliers. Therefore, the data collected by the sensor must be preprocessed. This paper compares the data before and after eliminating outliers, as shown in Figure 6. As can be seen from Figure 6, before the outliers are eliminated, the sensor's acquisition accuracy has been declining, from 6% at the beginning to 4.2% at the end. After the outliers are eliminated, the acquisition accuracy of the sensor has been increasing, from 8% at the beginning to 16%; the filtered data can reduce the side effects of abnormal data on the system, but the filtered data accuracy still cannot meet the accuracy requirements of the system. After eliminating outliers, it can meet the accuracy requirements of the system. The data obtained after eliminating outliers is more conducive to the subsequent obstacle detection and path planning of the robot. The results show that prepro-cessing the data collected by the sensor can promote the increase of accuracy and effectiveness of the data. The data fusion and path planning are carried out by the method after the outliers are eliminated. Under the premise of ensuring the accuracy, the convergence speed is accelerated. Many scholars propose to improve the simple model into a practical model. The specific method is to first establish a rough model and then fit a large amount of data to obtain model parameters; this method is called data AI algorithm. 
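As a concrete illustration of the outlier-elimination procedure described earlier in this section (extract the trend of the measurement, then reject points whose residuals violate the Wright criterion), a minimal sketch is given below. Identifying the Wright criterion with the familiar three-sigma residual test is an assumption, as is the linear trend model.

```python
import numpy as np

def remove_outliers(t, y, k=3.0):
    """Reject samples whose detrended residual exceeds k standard deviations.

    t, y: 1-D arrays of sample times and measured values.
    Returns the retained times and values.
    """
    # Extract a simple linear trend as the feature-change term
    trend = np.polyval(np.polyfit(t, y, 1), t)
    resid = y - trend
    keep = np.abs(resid - resid.mean()) <= k * resid.std()
    return t[keep], y[keep]

# Example consistent with the 0-0.4 s time window and 5.7-6.8 value range in the text
t = np.linspace(0.0, 0.4, 40)
y = 5.7 + 2.5 * t + np.random.normal(0, 0.02, t.size)
y[10] = 6.8                                   # injected outlier
print(remove_outliers(t, y)[0].size, "of", t.size, "points kept")
```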
As a kind of AI algorithm, the main purpose of wavelet neural network is to establish the inherent mathematical relationship between the input value and the output value through a certain number of training. However, the discovery of an accurate physical model is difficult and lengthy, and its application at the system level is limited and lacks scalability. The results obtained are incomplete, but they are sufficient to guide practice. The measured data can be used as the sample library for wavelet neural network training, as shown in Table 3. As shown in Table 3, through the fitting of the MTF values of the meridian and sagittal directions of the optical system to be measured at different spatial frequencies, it can be observed that a small defocus has a relatively small effect on the MTF value of the low-frequency optical system. As the amount of defocus increases, the MTF value of the optical system decreases. The initial value of the wavelet neural network parameters is particularly critical for fast and accurate calculation of the optimal wavelet neural network, but the initial parameter calculation is more discrete and less convergent. When trying to use the normal distribution, which is more commonly used in nature, the accuracy and efficiency meet the requirements. Through artificial optimization, the optimal COMS position can be found, and the MTF values of its meridian and sagittal directions can be obtained. Furthermore, combined with the neural network calculation method of wavelet function, the optimal MTF value and the corresponding CMOS Table 4. As shown in Table 4, it can be seen that the MTF value optimized by wavelet neural network is higher than the MTF value obtained by artificial optimization. The wavelet neural network and support vector machine algorithm are used to realize the nonlinear function fitting of the relationship between the focus position and the MTF value, and then the best focus position is approximated by AI to obtain the optimal solution. This paper analyzes the MTF value optimization algorithm through the optimization results of the meridian direction and the sagittal direction, as shown in Figure 7. As shown in Figure 7, the MTF values corresponding to the meridian and sagittal directions of the current AI algorithm can be obtained according to the fitted MTF. It can be seen that the MTF values in the meridian and sagittal directions have little fluctuation, within the allowable range of error. Based on the wavelet neural network, the optimal algorithm of the MTF value optimization algorithm can be obtained. This paper uses CNN and multisensing algorithm to compare the benefits of AI algorithms brought by two estimation algorithms based on machine learning before and after improvement and proves the superiority of the algorithm in the subjective effect and objective evaluation index, as shown in Figure 8. As shown in Figure 8, the accuracy of the neural network before the improvement was 4.1%. With the development of society, the traditional algorithm finally dropped to 1.3%. The accuracy of the multisensor information fusion algorithm before the improvement was 3.1% at the beginning; with the development of society, it finally dropped to 1%. The improved CNN and multisensing algorithm bring high accuracy to the AI algorithm, and it has been on the rise. The improved CNN rose from 4.6% to 9.8% at the beginning, and the multisensing algorithm rose from 3.9% to 9.5% at the beginning. 
Through scientific research, human intelligence can be imitated by machines. Therefore, AI is conducive to the development of intelligent machines. At the same time, AI is widely used in many fields, so AI is very comprehensive. Discussion This article analyzes how to research AI algorithms based on multisensor information fusion. In addition, the concepts related to deep learning algorithms and multisensor Mobile Information Systems information fusion are expounded, and related theories of AI algorithms are analyzed, and the research methods of AI algorithms are explored. Through experiments and analysis of various algorithms, the importance of deep learning algorithms and multisensor information fusion to AI algorithms is discussed. Finally, the AI algorithm incorporates deep learning algorithms and multisensor information fusion as an example for analysis. This paper also makes reasonable use of the neural network based on deep learning and the improved D-S evidence theory based on multisensor information fusion weighted similarity. As the scope of application of multisensor information fusion has become larger and larger, its importance has also increased. Many scholars have begun to apply multisensor information fusion to all aspects of life. According to these two algorithms, it is meaningful to study AI algorithms based on deep learning algorithms and multisensor information fusion. Through experimental analysis, this paper knows that deep learning algorithms and multisensor information fusion are necessary to study AI algorithms, which will make AI algorithms more advanced and accurate. Conclusion This article explains the concepts of deep learning and multisensor information fusion. In the method part, the neural network based on deep learning and the improved D-S evidence theory based on multisensor information fusion weighted similarity are introduced in detail, and finally the clustering algorithm through AI algorithm is introduced. This article has conducted experiments and analysis on wavelet neural network training through intelligence algorithms and found that the use of wavelet neural network for AI approximation is currently a more reliable and true method. Comparing the algorithm and multisensing algorithm before and after the improvement finds that the improved algorithm can improve the authenticity of the AI algorithm. As a scientific cross-discipline, AI research does not have a unified concept so far, and it is difficult to give an accurate definition of AI. Therefore, this article has some shortcomings in the generalization of the concept of AI, but it has been described in the largest scope. Finally, it can be seen that the research of AI algorithms through deep learning has a significant impact on social development. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare no conflicts of interest. 9 Mobile Information Systems the project is Research and Application of Educational Technology based on Artificial Intelligence (taking artificial intelligence major in Higher Vocational Colleges as an example). The project number is 2020ITA05051. Thank the project for supporting this article!
6,909
2022-04-13T00:00:00.000
[ "Computer Science" ]
Whole-Genome Analysis of Recurrent Staphylococcus aureus t571/ST398 Infection in Farmer, Iowa, USA Staphylococcus aureus strain sequence type (ST) 398 has emerged during the last decade, largely among persons who have contact with swine or other livestock. Although colonization with ST398 is common in livestock workers, infections are not frequently documented. We report recurrent ST398-IIa infection in an Iowa farmer in contact with swine and cattle. L ivestock, especially swine, are a reservoir for Staphylococcus aureus sequence type 398 (ST398) (1). Carriage of this strain is primarily reported in persons with occupational exposure to livestock; however, less is known about the frequency and severity of ST398 infections, particularly in the United States, where surveillance for this organism is limited (2). We report colonization and recurrent infection with methicillin-sensitive S. aureus (MSSA), belonging to livestock clade ST398, in a farmer in Iowa, USA. A participant in a longitudinal study of rural Iowans (2) provided swab samples of his current skin infection. Swabs were cultured, and S. aureus isolates were subjected to molecular analyses as previously described (3,4). Institutional review board approval was obtained from the University of Iowa. The 61-year-old man enrolling in the Iowa study in July 2011 reported a skin infection on his foot. He had visited a doctor the previous day and received ciprofloxacin, and noted his infection was chronic. He also reported a history of heart disease and diabetes. The participant was a farmer who raised swine (900 head × 40 years) and cattle (500 head × 10 years), and owned a dog. He reported working directly with swine and cattle ≈2 hours per day and noted that he did not use personal protective equipment such as gloves or masks. His nose and throat were colonized with MSSA ST398 that was mecA-negative, periventricular leukomalacia-negative, and staphylococcal protein A type t571. The isolate was resistant to tetracycline, trimethoprim/sulfamethoxazole, and levofloxacin. The participant's wife was also colonized at enrollment with MSSA in her throat, showing the same molecular characteristics and susceptibility patterns as her spouse. She reported no exposure to livestock. The isolate obtained from the participant's first culture showed the same molecular characteristics as isolates from his nasal and throat swabs and the isolate from his second infection culture received in November 2011 (Table). He described this second infection as cellulitis: he reported draining pus, a general ill feeling, and headaches. The infection was treated empirically with ciprofloxacin, neosporin, warm compresses, and incision and drainage. Isolates from his third incident infection culture in August 2012 showed the same molecular characteristics as previous samples, but 2 different susceptibility patterns: 1 isolate was the same as previous, while another showed additional phenotypic resistance to oxacillin (MIC 4 µg/mL by broth dilution), making this a borderline-resistant S. aureus infection. Genome sequence analyses showed that this strain had not acquired a mec gene and likely had become resistant from a mutation in a penicillin-binding protein. This infection was treated with mupirocin, trimethoprim/sulfamethoxazole, ciprofloxacin, warm compresses, and drainage. All of the farmer's isolates had the t571 spa type, which is common among livestock-independent (i.e., human-associated) ST398 lineages (4). S. 
aureus t571 strains have been isolated from an Iowa childcare worker (5), from inmates of a Texas jail (6), and from community members in New York and New Jersey (7); t571 strains have rarely been isolated from Iowa livestock (8). However, whole-genome sequence analyses that included 89 previously published clonal complex 398 genomes confirmed that the isolates from this case were from the livestock-associated ST398 lineage (CC398-IIa) (4). Longitudinal follow-up complemented with whole-genome sequence analyses showed that the participant repeatedly experienced infections by the same strain over the course of 13 months. The isolates from this patient and his wife formed a distinct clade within the ST398 global phylogeny (online Technical Appendix Figure 1, https://wwwnc.cdc.gov/EID/article/24/1/16-1184-Techapp1.pdf). In-depth genomic analyses revealed a novel ≈265,000 bp recombinant region originating from clonal complex 9 and unique to this lineage. It is unclear whether the farmer's recurrences arose from treatment failure, repeated acquisition from exposure to livestock, or long-term colonization and evolution. In contrast, the farmer's wife, who was colonized by the same strain, reported no direct contact with livestock and may have become colonized by human-to-human transmission from her husband. Comparison with other strains collected as part of this (2) and other studies (9) identified closely related isolates from 3 other participants and 1 pig (online Technical Appendix Figure 1) distributed throughout the state of Iowa and into western Illinois (online Technical Appendix Figure 2). This dataset does not indicate whether the evolution of the borderline-resistant S. aureus phenotype took place in the patient or in the livestock reservoir. The farmer was prescribed ciprofloxacin multiple times; this could have contributed to the emergence of borderline oxacillin resistance (10). The farmer's first phenotypically methicillin-resistant strain was not detected until >1 year after enrollment into this study. This case demonstrates the ability of livestock-associated S. aureus ST398 to cause repeated skin and soft tissue infections in a manner similar to community-associated strains, and the need to screen for both MSSA and methicillin-resistant S. aureus when studying the spread of these organisms.
1,190
2018-01-01T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
Education Majors’ Preferences on the Functionalities of E-Learning Platforms in the Context of Blended Learning The modern stages of higher education development and the actual training of education majors require systematic use of different electronic forms and platforms of education in combination with the traditional educational methods and approaches which will provide students with essential digital skills and competencies, important for their future professional and personal success. Widespread learning management systems provide a common set of basic functionalities. In this study, an assessment of the preferences of education majors on the main functionalities of the electronic platforms used in the context of blended learning in university education is presented. The results reveal a preference on organizational and informational functionalities and less on communication features. Keywords—digital competence, blended learning, e-learning platform, functionality, preferences, education majors Introduction Some contradictory points, concerning the content and the structure of the digital competence of pedagogy specialists, can be outlined in educational theory and practice. They affect the quality of students' university training and professional readiness. The negative aspects of applying information technologies in university programs can be summarized as follows: (1) lack of understanding and adequate approach toward the specific needs of technical students and engineers and information technologies teachers on one hand, and on the other hand other teachers and specialists in the field of education, who are supposed to use information technologies to improve and facilitate their specific work; (2) underestimation of the principles and the potential of the competence approach in designing and implementing training programs, together with the impossibility to define clearly the fundamental components of each competence; (3) vague concept of teachers' digital competence frame-work and its value for their professional profile; (4) unclear conceptual model for design and improvement of educational solutions for digital competence development of pedagogy specialists within their professional training. There is a vast range of platforms for electronic training of education majors that offers a variety of functions and topics and involves different skills, included in the digital competence requirements but specific for teachers' professional environment. Widespread learning management systems such as Blackboard or Moodle provide a common set of basic functionalities. Extending or adapting these functionalities is more or less dictated by the current trends in information and communication technologies, the social networks and the mobile media. Other functionalities include the legacy from the first steps in the development of learning management systems. The contemporary requirements to educational environment guarantee a high level of individualization in education based on the construction of pedagogical situations. Born as an alternative caused by concerns on pure e-learning, blended e-learning has been presented as promising. The realization of blended learning at universities is dependent on the level of motivation and interest on part of the students as its subjects as well as on its potential for information processing within the integration of elements of electronic and traditional education. 
Various approaches for transforming traditional courses into online ones were developed and various practices of blending face-to-face with online teaching and learning have given rise to intensive processes of theoretical reflection of these practices in an attempt for them to be conceptualized by existing pedagogical theory or used as a basis for creating new pedagogical paradigms [1]. Due to the growing of blended learning, some researchers have already studied students' expectations from [2] and satisfaction with [3] blended e-learning system environment. We focus our empirical study on the preferences on basic functionalities of elearning platforms used in the context of blended learning and the interest and motivation of the education majors. Theoretical Background The electronic platforms for education are incredibly flexible and allow adding individual characteristics to the educational process depending on student's personal needs or the aims of the training program. Different electronic platforms and environments offer different options for successful training of big groups and at the same time, they provide individual evaluation and assessment. The communication between instructor and students in the electronic portal happens naturally, in an easy manner, and provides effective feedback on training results. On the one hand, it is an important guideline for the instructor when it comes to selection of adequate training methods and strategies and improvement of the educational interaction. On the other hand, students receive free and easy access to the course content and the assessment tools. They can contact the other students in the group or the instructor himself for further information or help. It is also important to mention the potential of modern platforms for electronic teaching and learning to present the educational content in a very attractive and easy-to-understand way. It also allows different combinations, interpretations and adaptations of knowledge and affects students' motivation. Blended learning is a type of new rather than innovative education, which is supposed to change the structure of educational content. It implies information environment and sources as a prerequisite for turning the educational platform into an algorithmic rather than human space. Through blended learning we economize time, means and efforts. Blended education is the result of the convergence of two classic learning environments -the traditional face-to-face learning and online training. It is also motivated by the need to increase the availability and flexibility of learning in the context of lifelong learning and the need to increase cost efficiency because universities are looking for ways to use technology to achieve both improvement of the quality of education and reduction of costs. The search for opportunities to fully design and implement blended education requires systematization of several of its main models, which play the role of conceptual frameworks. They enable its operationalization and technological realization in practice. Although the "patterns are abstract in their nature, they facilitate effective learning, which requires a specific understanding of the needs of learners, the educational content, the target groups and the organizational conditions and environment" [4]. 
Blended education does not provide the whole range of options for its implementation, but represents a good basis for defining the features in accordance with the objectives, resources and opportunities for its practical implementation. In the context of these, the realization of blended education at universities seems to be highly dependent on the level of motivation and interest on part of the students as its subjects as well as on its potential for information processing within the integration of elements of electronic and traditional education. This research aims at searching and finding more variegated opportunities to integrate and adapt electronic learning environments to traditional university training of education majors as a way to improve the quality of their achievements and to maintain high motivation for learning and success. A model of mixed -electronic and traditional training has been created and implemented. The application of this model in the university training of education majors provides a further opportunity to enrich their digital competence. Digital competence, defined as one of the eight key competences for lifelong learning includes "the confident and critical use of information society technologies for work, leisure and communication" [5]. Digital competence implies connectivity with the skills to use digital technologies that allow teaching professionals to work with modern information and communication technology, computers, software applications and databases, helping them to realize their ideas and objectives in the context of their work. It is important for education majors to have the ability to search, collect and process information and approach it critically and systematically as well as the skills to use the design tools for media information and the capacity to access, search and use Internet-based services, especially in the context of their future activities and opportunities for continuous professional qualification. All this is successfully assisted during their university education through the techniques of blended learning and targeted implementation of electronic platforms in it. Design of the Empirical Study and Analysis of the Results The focus of the presented study is related to the survey of the students' preferences on basic functionalities of electronic platforms used in the context of the implementation of blended learning in university education. This mainly operational level of attitude is directly related to the interest and motivation of students and their adequate involvement in the organization and implementation of their training in specific subjects, using the options for integration of traditional and electronic forms. The paper studies the functionality of e-learning platforms in the context of blended learning (traditional and electronic) of education majors. The aim of the study is to pinpoint the students' preferences on the functionality of a particular e-learning platform and motivation in the course of the implementation of blended learning. The tools for conducting the empirical study include a survey for evaluating the level of motivation and a Questionnaire for the preferences of students on basic functionalities of Blackboard Learn LMS platform. The survey for assessment of the level of motivation consists of: Introduction with guidelines for assessing and completing the survey, which contains 42 statements with answers on a 7-point grading scale in which the answers comply with a certain sequence. 
The questionnaire was developed by V. K. Gerbachevski [6] in order to identify the components of the motivational structure associated with the level of claims immediately advanced in the course of operations and is completed by students in the course of solving specific educational cognitive tasks through the use of the elearning platform Blackboard Learn for Academic Collaboration. The examiner fixes in advance a certain stage during the exercises after the completion of which students must fill cards received earlier and continue their work on the assignments during the exercises in the course. The main purpose of using the proposed questionnaire is to study and assess the level of motivation of students while they use the Blackboard Learn for Academic Collaboration e-learning platform and to track and differentiate the attitudes of students towards the use of the electronic platform and its functionalities in course training where the experiment was conducted. For the level of educational and cognitive motivation, during solving a specific task, by using the e-learning platform (Blackboard Learn for Academic Collaboration), there can be deduced, according to the degree of manifestation, a high, moderate or low level of student's response to success or failure, increase or decrease in the willingness to work on the task. In the first phase of our study, these two types of motivation were explored for a target group including students majoring in physical education and in sports. The second phase of our inquiry covers education majors' opinion on the educational platform functionalities demonstrates the same tendencies and confirms the conclusions made in that study. The result analysis at the beginning of the implementation of the mixed training project (with a duration of 15 weeks) on a specific subject from the curriculum, shows a very low level of students' academic motivation. According to the statistic information obtained after data processing, 27% of students claim to have no academic motivation at all, 47% of students admit to have weak motivation, for 19% of the students the level of motivation is moderate and only 7% claim to have strong motivation for academic learning. At the end of the experimental training the levels of motivation changes as follows: 42% of students claim to have strong motivation, for 46% the level of motivation is moderate, and only 12% admit still to have weak motivation for academic learning. The dynamics of different levels of students' motivation is obvious. It is a result of systematic use of electronic platforms in the context of a mixed educational model, combining electronic and traditional training of pedagogy students. Students, involved in the experimental training, have been asked to share their expectations about such a mixed educational model on the basis of its different components related to the principles of collectivism and constructivism as educational paradigms on one hand, and on the other hand related to the technological aspects of offline and online media, synchronous and asynchronous communication and their educational potential. According to the results, students' expectations refer to a great extend to the easy and fast access to various study materials. 
For 25% of students the best advantage of the mixed model is the electronic access to course lectures in a document format, 65% give priority to online lectures within the electronic environment, 26% would appreciate access to sample tests as separate files to download, for 75% of students a variety of interactive tests will support best their study. 83% of students expect to be able to find and download PowerPoint presentations on different topics. Different types of interactive teaching and learning materials are important for 70% of the students. A vast majority of 86% claims that the access to all kinds of virtual models, training experiments, or video and animation models and simulations are crucial for the successful study. 79% of the students will take advantage if they are offered links to extra Internet resources in both Bulgarian and foreign sites. The results also emphasize on searching for different ways to facilitate and improve communication between students and instructors; on an easy access to information about different extracurricular activities; up-to-date information about course requirements and assessment procedures; electronic portfolio implementation, supporting exams preparation. There are some contradictory students' views about the mixed -electronic and traditional training model as a type of training that offers only mechanical connection with low social and academic value. An important aspect, when it comes to planning and implementing such a model of mixed electronic and traditional training, is students' opinion on the social value of their communication with instructors and professors. Even when it comes to education, students strongly prefer the popular social networks such as Facebook, Twitter, App.net, Linkedin to contact and communicate with the instructors (89% of the students involved), instead of the specialized platforms for electronic and distant learning. The face-to-face communication is not excluded from students' answers. On the contrary, students find it important to integrate successfully both electronic and face-to-face communication to achieve best educational results. Another important factor that gives a priority to the mixed model of training and improves students' motivation is the opportunity to publicize and popularize students' works and achievements throughout the electronic platforms of education. Even when the publicity is within a given platform or group, it affects positively the levels of motivation. The increasing number of registered users who visit a student's portfolio or read a certain presentation affects to a great extent the quality of the implementation of the next task of the same student. To illustrate this process, a research study has been done regarding students' preferences on the basic functionalities of the platform (Figure 1). Fig. 1. Factor analysis on main functionalities of the platform The results regarding the level of motivation in the course of using the electronic platform Blackboard Learn for Academic Collaboration are statistically significant and give a good idea of the impact of technology on the motivation for achievement, which is crucial to the overall educational and cognitive motivation. 
Although the study of motivation is a very complex process involving various factors that guide, regulate and maintain individual actions in the learning process, examining it in the course of using the electronic platform enables us to draw conclusions about the attitude of students towards the activities and their results. In order to establish whether success in solving the task is related to the level of motivation, a chi-squared (χ²) test is applied, since the empirical data are represented by variables on two scales: ordinal (success) and nominal (level of motivation, which is characterized as mainly qualitative); a minimal illustration of such a test is sketched at the end of this article. If the null hypothesis (H₀) states that there is no relationship between students' success rate when using the e-learning platform and their level of achievement motivation, then the alternative hypothesis states that such a relationship exists. The empirical value of the χ² statistic exceeds its theoretical (critical) value, which gives us reason to reject the null hypothesis in favor of the alternative one; this means that there is a relationship between success in using the e-learning platform and the level of achievement motivation. Factor analysis has been performed by converting the set of correlated variables into a new set of uncorrelated variables (factors) that explain the highest possible fraction of the total variation of the output data, thereby reducing the number of input variables: variables that correlate with each other are grouped into a common factor, and uncorrelated variables are assigned to different factors. Conclusions When experimenting with educational techniques and technologies in university education, priority is given to new information and communication networks. The principle of their selection is determined not just by familiarity with electronic platforms, resources, and activity formats, but primarily by the personal expectations and attitudes of students, shaped by the motives for purposeful behavior in training. The survey results give reason to appreciate the dynamics of educational activity and its relation to motivation as a process. In this sense, blended learning is a new opportunity to establish priorities for studies related to the idea of innovative models of interaction between classical education and e-learning, stimulating a motivational mode of behavior in students' learning according to the functionalities of the selected electronic platform. The general conclusion that can be drawn here is that the connection between the mixed model of education and students' motivation helps to focus the effort to improve students' academic behaviour and performance. It is well known that motivation is a result of personal inner needs and outer stimuli. It can be best explained by the ideas and principles of the competence approach, where personal characteristics and skills development are tightly related. The mixed model of electronic and traditional training can be seen as an alternative to the competence approach. Being a regulative factor in human behaviour, motivation creates the right conditions to concentrate students' attention on the most valuable aspects of learning. 
The idea of blended education at university opens up opportunities to search for and find new configurations of gnoseological, axiological, and didactic factors, giving rise to a new concept of the educational environment.
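As a concrete illustration of the chi-squared independence test referred to above, the following sketch runs the same kind of test on a success-by-motivation contingency table. The article reports only the outcome of the comparison, not the raw counts, so the table below is a hypothetical assumption, and the use of Python with scipy is purely an illustrative choice, not part of the study.

```python
# Minimal sketch of a chi-squared test of independence between task success
# and motivation level.  The contingency table is hypothetical: the article
# does not report raw counts, only that the test led to rejecting H0.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: task success (low, high); columns: motivation (weak, moderate, strong).
observed = np.array([
    [18, 10,  4],   # hypothetical counts, low success
    [ 7, 21, 25],   # hypothetical counts, high success
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# Reject H0 (no association between success and motivation) at the 5% level.
if p_value < 0.05:
    print("Reject H0: success and motivation level appear to be associated.")
else:
    print("Fail to reject H0.")
```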
4,240.6
2017-05-31T00:00:00.000
[ "Computer Science", "Education" ]
Basis Functions for a Transient Analysis of Linear Commensurate Fractional-Order Systems : In this paper, the possibilities of expressing the natural response of a linear commensurate fractional-order system (FOS) as a linear combination of basis functions are analyzed. For all possible types of s α -domain poles, the corresponding basis functions are found, the kernel of which is the two-parameter Mittag–Leffler function E α , β , β = α . It is pointed out that there are mutually unambiguous correspondences between the basis functions of FOS and the known basis functions of the integer-order system (IOS) for α = 1. This correspondence can be used to algorithmically find analytical formulas for the impulse responses of FOS when the formulas for the characteristics of IOS are known. It is shown that all basis functions of FOS can be generated with Podlubny‘s function of type ε k (t , c ; α , α ), where c and k are the corresponding pole and its multiplicity, respectively. Introduction The zero-input response, sometimes also referred to as the natural response, is a well-known concept in system theory.It is the response of a system to its initial conditions, represented by the vector of the initial state in the absence of external excitation.The second well-known component of a system motion is its zero-state response, or forced response, which is the response of the system to external excitation under zero initial conditions.For linear systems, as a consequence of the superposition principle, the total response of the system, i.e., its response to external excitation under general initial conditions, is the sum of the zero-input and zero-state response [1]. The natural response plays a specific role in describing phenomena in systems left "to themselves".From the way these systems deal with their own initial conditions and how the corresponding movement of the system can be observed through the chosen outputs, one can infer the stability of the system and the observability of the corresponding state variables from these outputs.The zero-state response, in turn, indicates the way in which signals from given inputs are propagated to given outputs through the state dynamics. Furthermore, it is well known that the natural response of a linear system can be composed of a linear combination of so-called basis functions [2].If a linear differential system with lumped parameters is considered, then the corresponding basis functions are components of the general solution of the corresponding input-output homogeneous differential equation of the system.The same basis functions appear in the general solution of the set of state equations of the system at zero excitation.The basis functions of a particular system are uniquely determined by the system poles, i.e., by the roots of its corresponding characteristic equation or the eigenvalues of the state matrix. For classical integer-order systems (IOSs), the possible basis functions are functions of the form e at , sin(bt), cos(bt) and their combinations e at sin(bt), e at cos(bt), which are further multiplied by the functions t k , k = 0, 1, ..., m − 1, m ∈ N, a ∈ R, b ∈ R+ in the case of m-tuple poles.All these basis functions can be expressed by the real and imaginary components of the exponential function of the complex argument, which will be hereinafter referred to as the generating basis function f ∈ C: f (t) = t k e (a+ib)t . 
(1) The above basis functions play an important role in the concept of modes of motion of a linear system [3], which is known from classical mechanics [4].The linear combination of the basis functions can be used to model not only the natural response, but also selected types of forced responses, which include the impulse responses of so-called proper systems, i.e., systems of an integer order whose transfer functions have a polynomial of a lower order in the numerator than in the denominator.In the opposite case of improper systems, the impulse response contains, in addition to the basis functions mentioned above, the Dirac impulses and their derivatives, which penetrate the output due to the Dirac impulse exciting the system input. The concept of basis functions can be extended to the class of linear fractional-order systems (FOSs).It is well known that the Mittag-Leffler functions (MLFs) with one, two or three parameters play an important role in the description of these systems, and can thus be viewed as a generalization of the classical exponential function from the domain of IOS [5].It can be therefore expected that basis functions from the world of FOS are a generalization of the above basis functions and thus they could be derived from some known kinds of the MLFs.On the other hand, the MLFs are not the only possible candidates: for example, several different definitions of exponential and goniometric functions generalized to the fractional domain are known [6][7][8][9][10][11]. Dictionaries of the Laplace transform for a fractional domain include functions such as Dawson, erc, ercf, Bessel, extended Bessel, hypergeometric, Hermite polynomials [12] or, for example, the function introduced by Podlubny in [13] for solving fractional differential equations.It is necessary to consider which of these or similar functions are appropriate to be selected for the set of basis functions.Furthermore, one should analyze the modification of these functions for the case of multiple poles and try to find an equivalent to the above function t k for FOS.The notional culmination of this work should be to find a generating basis function from which all basis functions for the fractional domain can be generated, i.e., an analogue of function (1) for the integer-order domain. The concept of basis functions can be used to algorithmically generate equations of waveforms of system responses via an analysis program.For example, formulas of responses to initial conditions or impulse or step responses as functions of time can be generated.The Laplace transform of the response, for example, the transfer function as the Laplace transform of the impulse response, is decomposed into partial fractions.A basis function of time is assigned to each fraction.The impulse response is then expressed as a linear combination of the basis functions.Once the basis functions are quantified, the waveform of the corresponding response is directly obtained with an algebraic evaluation, not with a classical numerical solution of differential equations.The accuracy of the result is determined with the accuracy of the pole calculation, the accuracy of the algorithm of partial fraction decomposition and the accuracy in computing the basis functions.This procedure also has the advantage of offering insight into the dynamics of the system through the analytical formula of the response.Simulation programs for a so-called symbolic and semi-symbolic circuit analysis work on this principle [14]. 
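The principle just described (decompose the transfer function into partial fractions, attach a basis function of time to each fraction, and sum the contributions) can be made concrete for the integer-order case with a few lines of code. The sketch below is only an illustration of that principle under assumed values: the transfer function, the choice of Python with SciPy, and the cross-check against scipy.signal.impulse are assumptions made here, not material from the paper.

```python
# Semi-symbolic principle for the integer-order case: the impulse response is
# assembled as a linear combination of basis functions r_k * exp(p_k * t),
# where (r_k, p_k) come from the partial-fraction decomposition of K(s).
# The transfer function below is an arbitrary illustrative example.
import numpy as np
from scipy import signal

num = [2.0, 3.0]                     # K(s) = (2s + 3) / (s^2 + 3s + 2)
den = [1.0, 3.0, 2.0]

r, p, k = signal.residue(num, den)   # residues, poles, direct polynomial term
t = np.linspace(0.0, 5.0, 500)

# Only simple poles occur in this example, so g(t) = sum_k r_k * exp(p_k * t).
g = np.zeros_like(t, dtype=complex)
for r_k, p_k in zip(r, p):
    g += r_k * np.exp(p_k * t)
g = g.real                           # imaginary parts cancel for a real system

# Cross-check against scipy's own impulse-response routine.
_, g_ref = signal.impulse((num, den), T=t)
print("max deviation:", np.max(np.abs(g - g_ref)))
```

For fractional-order systems, the same decomposition applies; what changes is that each classical exponential basis function is replaced by the corresponding Mittag-Leffler-based basis function, as developed in the following sections.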
As far as numerical methods for calculating the standard responses of linear systems are concerned, particularly the impulse or step responses, either standard simulation programs such as SPICE or another platform for scientific and technical calculations such as MATLAB can be used.Of course, programs like MATLAB also allow an analysis using various numerical algorithms that are not available in classical simulation programs [15,16].A review of numerical methods for solving linear fractional differential equations and a discussion of specific problems associated with them are given in [17].Additional procedures are published in [15] together with codes for implementation in MATLAB.The method based on closed-form solutions to linear fractional-order differential equations starts from the approximate Grünwald-Letnikov definition of the fractional-order derivative.However, the accuracy of the transient analysis strongly depends on the choice of the computational step, so the method is of little use when solving practical problems.In [18], a generalization of the classical integer-order integrator approach for the numerical solution of the free (zero-input) response is made by elaborating the concept of the fractional integrator.In [19], the concept of LFD (Local Fractional Derivative) is used to compute zero-input responses. Numerical algorithms of the Laplace inversion can also be used for a transient analysis [20,21].An excellent discussion of the selected algorithms is given in [22,23]. Procedures utilizing analytical preprocessing of results are also popular.The analytical solution of a linear homogeneous fractional-order differential equation was first published in the classic work [13].However, the corresponding formula contains infinite sums of derivatives of two-parameter MLFs, and is thus of little use for practical calculations. Interesting possibilities of a transient analysis are offered for the so-called commensurate systems, where all non-integer powers of the operator s in their transfer functions are rational numbers.For commensurate systems, one can use a procedure leading to expressing their impulse or step responses in terms of a finite sum of MLFs [15,24,25]. The algorithmic generation of waveform formulas, constructed from basis functions, is an interesting alternative to the above numerical methods because it completely eliminates the numerical solution of differential equations of a non-integer order. The above summary of the state of the art indicates the usefulness of extending the concept of basis functions to the domain of linear fractional-order systems.In the following section, the problem to be solved is specified, and new pieces of knowledge arising from its solution as well as their application potential are pointed out. Problem Formulation The purpose of this article is to solve the following problem: Consider a linear time-invariant IOS, described with a proper s plane transfer function where a i , b i ∈ R, a n = 0, m, n ∈ N+ and m < n. Let us define an associated commensurate FOS with the transfer function K F , obtained from transfer function (2) by substituting thus Denote the impulse responses of systems ( 2) and (4), i.e., the Laplace inverse of ( 2) and (4), as g(t) and g F (t), respectively. 
Let us solve the following problem: Suppose we know the analytic formula for g(t) as a linear combination of the basis functions of time.Let us find a procedure for deriving g F (t) from g(t) for all possible configurations of the coefficients a i , b i from ( 2) and (4), respectively. It will be shown below that g F (t) can be obtained from g(t) such that to each known basis function of the IOS, the corresponding basis function of the FOS will be assigned.This result can be arrived at in the following steps: transfer function (1) will be decomposed into partial fractions, and each fraction will be converted into a basis function of time according to whether the corresponding pole is real or complex and simple or multiple.In the next step, substitution (2) will be applied to the formula of partial fraction decomposition, and the Laplace inversion will be used to arrive at the basis functions of associated system (3).By comparing the results from the two steps, we come to unambiguous correspondences between the basis functions in the integer-order and fractional-order domains. The above procedure leads to the main contribution of this paper, which is the definition of a complete set of basis functions of commensurate FOS with transfer function (4).These functions allow an analytical description of the natural responses of all the above FOSs for an arbitrary set of their eigenvalues.This automatically implies the possible use of this analytical modeling for an accurate transient analysis of FOS without the need for a numerical solution of fractional-order differential equations, while the accuracy is guaranteed for both stable and unstable systems. Another possible use arises from revealing the above correspondence between the basis functions of the FOS and the IOS: it can be used for an effective computer-aided transient analysis of commensurate FOSs with a direct utilization of the well-known algorithms for finding analytic formulas of responses of IOSs, with the proviso that in the final stage, the classical basis functions are replaced by new basis functions of associated FOS (3).In this paper, we will point out another interesting use of the formulas of impulse responses of FOS constructed from basis functions: reflecting them back into the space of IOSs, one can arrive at hitherto unpublished or little-known correspondences between the Laplace transforms and their time-domain representations from the world of classical IOS. This paper is structured as follows: in Section 3, following this section, the mathematical prerequisites for constructing the basis functions are summarized.In Sections 4 and 5, the Laplace inversions of the transfer function of a commensurate proper FOS for real and complex, single and multiple poles are performed with the aim of expressing the corresponding basis functions in a unified way.Section 6 proposes the formalism for writing these functions and the function generating all the proposed basis functions, together with an unambiguous correspondence between the basis functions for IOS and FOS.Section 7 discusses the possibility of algorithmically expressing the analytical formulas for the step response from the knowledge of the formula of the impulse response composed of the basis functions.Section 8 discusses the numerical aspects of calculating the impulse and step responses with the method of generating their formulas from the transfer function.In Section 9, the procedures are demonstrated on examples. 
Mathematical Prerequisites The purpose of this section is to summarize the mathematical prerequisites for analyzing to what extent certain functions known from fractional calculus are suitable candidates for their inclusion in the set of basis functions of a transient analysis. This section has three objectives: 1. To summarize the well-known definitions of one-, two-and three-parameter MLFs of a complex argument, derivatives of two-parameter MLF and their interrelations, as well as relations with the classical exponential function.2. To point out the possibility of expressing two-parameter MLFs of type E α,β and their derivatives for the specific parameters α, β using functions of type E α,α and their derivatives.This analysis will make it possible to verify the hypothesis that the set of basis functions can be reduced just to functions of type E α,α and their derivatives. 3. To analyze the relations for real and imaginary parts of two-parameter MLFs and their derivatives, and the resulting appropriate definitions of fractional exponential functions and goniometric functions such as sine and cosine as potential building blocks of basis functions. Mittag-Leffler Functions The diagonal part of Table 1 summarizes the defining relations of the complex Mittag-Leffler functions of complex argument z: the classical MLF E α (z) with one parameter α; the function E α,β (z) with two parameters α, β; the m-th derivative of the function E α,β (z) with respect to the argument z; and the function E χ α,β (z) with three parameters α, β, χ [5].For completeness, the classical function exp(z) was added to the table and its known connections to MLFs are indicated. Table 1.Overview of the types of complex Mittag-Leffler functions (MLF) of the complex argument z and their relationships with each other and with exponential and gamma functions. In the legend to Table 1, it is recalled that the corresponding parameters α, β, χ may be complex with positive real parts [6].In fractional calculus, the parameters in question often appear as positive integers. It is obvious that the classical exponential function and the one-parameter and twoparameter MLFs are special cases of the three-parameter MLF, and that there is an unambiguous mapping between the integer derivatives of the two-parameter MLF and the three-parameter MLF.For χ = 1, the three-parameter MLF and its derivatives can be expressed in terms of the two-parameter MLF and its derivatives.A further analysis shows that the condition χ = 1 suffices for the construction of all variants of the formulas of the time responses of commensurate FOSs.While searching for basis functions, one can then focus on computationally simpler two-parameter MLFs. Selected Relations between Two-Parameter Functions E α,β , E α,α and their Derivatives Based on the analogy between the mathematical description of the response of classical and fractional systems, it is shown in Sections 4 and 5 that it may be convenient to choose two-parameter MLFs of type E α,α (z) and their derivatives as basis functions for a semisymbolic analysis.At the same time, however, it turns out that the Laplace transforms of some waveforms lead to MLFs and their derivatives with a different distribution of parameters, namely to E α,0 functions.Therefore, it may be useful to be able to transform these functions and their derivatives to E α,α (z) functions and their derivatives. 
However, the two-parameter MLF for the parameter β = 0, i.e., E α,0 (z), contradicts the classical domain of MLF: β ∈ C, Re(β) > 0 [5].The problem is that the gamma function, appearing in the defining relation of E α,β (z), exhibits an infinite limit at 0. However, since the corresponding gamma function appears in the denominator of the defining relation of the two-parameter MLF, the first term of the infinite series defining the function E α,β (z) is zero for β = 0 and k = 0.Then, the recurrence relation [26] can be used to derive the identity Numerical algorithms for computing the MLF available in [27-29], as well as the Wolfram Computational Notebook, for example, evaluate the function E α,0 (z) in accordance with (6). By successively differentiating both sides of (5), we arrive at a recurrence relation for the m-th derivative of the two-parameter MLF, whose solution is Thus, for β = 0, arbitrary derivatives of the function E α,0 (z) can be expressed using the derivatives of the function E α,α (z): Note that the defining relation for the m-th derivative of the two-parameter function E α,0 (z) exhibits no singularity for k = 0 when m > 0. Thus, the algorithms designed for this purpose [30,31] can be used directly for numerical computations of the derivatives of the function E α,0 (z) even for β = 0. It is worth noting that the robust algorithms for MLF computations [25,31] return correct results even for real parameters β < 0. The algorithm deals with the problem of infinite limits of the gamma function for the arguments −1, −2, −3, ... in the same way as for the argument of zero discussed above. The fact that there is a clear relationship between the three-parameter MLF and the derivatives of the two-parameter MLF contributes to further possible narrowing of the set of basis functions.If an integer positive parameter χ = m + l, m ∈ N is substituted into the defining relation of the three-parameter MLF, then After comparing the result with the definition of the m-th derivative of the twoparameter MLF, we find that [31] The back conversion, i.e., expressing the m-th derivative of the two-parameter MLF using the three-parameter MLF, is given in Table 1.This is to say that, for χ = m + l, m ∈ N, one can alternatively work with either one or the other type of basis functions as desired. The above transformation relations are important not only from the methodological point of view (narrowing down the set of basis functions) but also from the point of view of numerical calculations, as will be discussed in Section 8. Some Fractional Goniometric and Other Functions In Section 4, we show that when computing some responses, it is convenient to compute the MLF as a complex function of the complex argument and, in a second step, to extract its real or imaginary part from the result.Consider the m-th derivative of a two-parameter MLF with complex argument z or complex conjugate z*, Then, the defining relation in Table 1 implies where Although relations (15) and ( 16) are trivial, they are mentioned here since the real and imaginary parts of the corresponding MLFs and their derivatives described by these formulas play an important role in the semi-symbolic analysis of systems with complex poles (see Section 4). 
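A naive way to experiment with these relations is to evaluate the two-parameter MLF directly from its defining series in Table 1. Such a truncated series is only adequate for moderate arguments, but it suffices to spot-check identity (6), E_α,0(z) = z·E_α,α(z). The Python sketch below is an assumption of this rewrite (not code from the paper); the sample arguments are arbitrary.

```python
# Naive truncated-series evaluation of the two-parameter Mittag-Leffler
# function E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta),
# usable only for moderate |z|; here it spot-checks identity (6):
# E_{alpha,0}(z) = z * E_{alpha,alpha}(z).
import numpy as np
from scipy.special import gamma

def ml2(z, alpha, beta, terms=100):
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

alpha = 0.8
for z in [0.5, -1.3, 2.0 + 1.0j]:
    lhs = ml2(z, alpha, 0.0)          # E_{alpha,0}(z); the k = 0 term vanishes
    rhs = z * ml2(z, alpha, alpha)    # z * E_{alpha,alpha}(z)
    print(z, abs(lhs - rhs))          # differences at machine-precision level

# Sanity check against the classical exponential: E_{1,1}(z) = exp(z).
print(abs(ml2(1.7, 1.0, 1.0) - np.exp(1.7)))
```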
The case of MLF with the purely imaginary argument z = iy, which corresponds to modeling the time response of a classical IOS without a damping factor, also deserves attention.In this case, For the integer-order case, i.e., α = β = 1, the m-th derivative of the function E α,β (iy) on the left-hand side of (19) translates into the classical exponential function of the imaginary argument, and the sums on the right-hand side translate into the Maclaurin series of functions of the cosine and sine type: Thus, the sums on the right-hand side of ( 19) can be viewed as the m-th derivatives of the generalized (fractional) functions cos α,β (y) and sin α,β (y), and ( 19) can be rewritten in the form where cos The case m = 0 yields simple defining equations of fractional functions of the cosine and sine type: Formulas ( 24) and ( 25) are generalizations of one-parameter fractional functions of the cosine and sine type from [8,9] to two-parameter functions. Relation (21) gives guidance on how to compute fractional functions ( 22) and ( 23) in a simple way: as real and imaginary parts of the derivatives of MLF with an imaginary argument. Equation (21) as well as the identity E 1,1 (z) = exp(z) confirm that the two-parameter MLF can be viewed as a generalization of the classical exponential function to the noninteger domain.Alternative definitions of the generalized exponential function of a noninteger order are published in [6][7][8]10].Interesting for the purpose of a semi-symbolic analysis of fractional systems in the time domain is the following definition [7,8,10]: which can be formally extended to a two-parameter exponential function Relations ( 26) and ( 27) are special cases of the function introduced in [13], namely e α,α (a, t) = ε 0 (t, a; α, α), (29) The definition of the two-parameter MLF implies the development of ( 27) into an infinite series From (31), it is easy to deduce that where are goniometric functions, associated with previously defined functions ( 24), ( 25) and ( 28) by the relations The Laplace transforms of exponential function ( 27) and goniometric functions ( 34) and ( 35) can be derived from ( 31), ( 34) and ( 35): (40) In Section 5, we discuss the possibility of utilizing the above functions as basis functions for describing the transient responses of commensurate FOSs. Time-Domain Response via Basis Functions: Real s 1 -Domain Poles For the case of an m-tuple real pole s = a of the transfer function of an IOS, the corresponding partial fractions of decomposition (2) are of the familiar form where P k are the residues belonging to the root a. The partial fractions of the decomposition of associated transfer function (4) of a commensurate FOS have the same form as (41), differing only in substitution (2), so they can be written as follows: From the well-known formula published in various notations in a number of works, e.g., [13], the relations can be derived between the time and Laplace domains, relevant to partial fractions (41) and (42), for single and multiple real poles, as shown in Table 2. 
From a comparison of the relations in Table 2, analogies begin to emerge between the basis functions for IOS and FOS. For a simple pole, the classical exponential function of an IOS corresponds to a MLF of type E_α,α multiplied by a factor t^(α−1); however, this is a generalized exponential function (26). For the multiple pole, the classical exponential function is accompanied by a multiplicative term t^(k−1). In the fractional domain, this is reflected both by the derivative of the function E_α,α of order (k − 1) and by the multiplicative term t^(α(k−1)). It is easy to see that the resulting waveform can then be smartly expressed by function (28) of type ε_(k−1). A more detailed analysis will be carried out in Section 6. Time-Domain Response via Basis Functions: Complex s^1-Domain Poles Consider that the transfer function K(s) in (1) contains an m-tuple complex conjugate pair of poles c, c*, where c = a + ib and c* = a − ib. The corresponding partial fractions of the decomposition of transfer function (2) can be written in the form The k-th term of the associated transfer function of FOS (3) is The following analysis is somewhat more complicated than for real poles. For the sake of clarity, it will be done separately for single and multiple poles. Simple Complex Poles For the case of simple poles (k = 1), (46) can be converted to the time domain using the following correspondence, published in [25]: where Im denotes the imaginary part of the complex MLF. By successively substituting σ = 0, σ = α into (47), Formula (46) can be converted to the time domain for k = 1 as follows: Using (6) and (13)-(16), the right-hand side of (48) can be expressed using the functions E_α,α: Equation (49) is the basic formula from which the various correspondences between the Laplace and time domains are derived in Table 3 for simple complex poles. The table shows that the classical waveforms of type exp(at)·sin(bt) and exp(at)·cos(bt) for IOS correspond to the combinations of basis functions of type t^(α−1)·Im{E_α,α(c·t^α)} and t^(α−1)·Re{E_α,α(c·t^α)} for FOS. In fact, these are the imaginary and real parts of fractional exponential function (26) with a complex argument. Interesting results can be obtained for the Laplace transforms without a damping factor, i.e., with the parameter a = 0, which lead to undamped oscillations of type sin(bt) and cos(bt) for IOS. The corresponding fractional goniometric functions are sin_α,α(b·t^α) and cos_α,α(b·t^α) multiplied by the function t^(α−1); however, these are modified fractional goniometric functions (36) and (37). Multiple Complex Poles To convert partial fraction (46) into the time domain, one can proceed in a similar way as for the simple pole, but this time using the relation we published in [25]: where [x] denotes the integer part of x, and Table 4 summarizes the Laplace transforms and their time representations (50) for the cases δ = 0, δ = α, and for the case of zero damping (a = 0). The responses for δ = α, which are expressed using the derivatives of the E_α,0 function according to (50), are adjusted to forms that use the E_α,α function via identity (9). Table 4. Basic relations between the time and Laplace domains for a multiple pair of complex conjugate poles c = a + ib, c* = a − ib. 
It can be seen from Table 4 that the classical basis functions of IOS of type exp(at)·t^k·sin(bt) and exp(at)·t^k·cos(bt) correspond to the imaginary and real parts of the k-th derivative of the function E_α,α multiplied by the functions t^(α−1) and t^k. The k-th derivatives of the fractional functions of sine and cosine type (25) and (24) for β = α, multiplied by the functions t^(α−1) and t^k, are assigned to the functions t^k·sin(bt) and t^k·cos(bt). Note that the compact formulas of waveforms in the fractional domain obtained in this way, after substituting α = 1, directly yield the waveforms that correspond to the Laplace transforms for classical IOSs (see the "integer order" section of Table 4), which are not given even in the very detailed dictionaries of the Laplace transform [32]. Basis Functions for Integer- and Fractional-Order Commensurate Systems: One-to-One Correspondence Starting from the possible forms of the decomposition of the transfer function into partial fractions for real and complex poles (41), (42), (46), and comparing the formulas of the corresponding waveforms for the IOS and FOS in Tables 2-4, we arrive at the summary of basis functions given in Table 5 in the "Basis Functions" columns. There are specific connections between these functions. These connections, which are different for single and multiple poles, are best seen by comparing the contents of the "Generated from" columns, which reveal from which general "generating function" these basis functions are generated. Let us first compare the part of Table 5 for integer-order systems with the "Basis Functions" column for fractional-order systems. If we restrict ourselves to simple poles, then the basis function e^(at) of the IOS, i.e., the classical exponential function, corresponds to the basis function e_α,α(a, t) of the FOS, i.e., fractional exponential function (26). Undamped functions of the sine and cosine type are projected into the fractional domain as fractional sine and cosine functions of type (36) and (35). Note that all of these basis functions of FOS can be expressed using a two-parameter MLF of type E_α,α. For multiple poles, the basis functions of IOS are given by the product of the integer power of time, t^k, and the classical exponential function or its imaginary or real part. For FOS, they are the products of the non-integer powers of time t^(αk), the non-integer powers of time t^(α−1) and the k-th derivatives of the function E_α,α or its imaginary or real part. For the purely imaginary poles, the k-th derivatives of the functions sin_α,α and cos_α,α multiplied by the terms t^(α−1) correspond to the classical sine and cosine functions for IOS. Note that all these basis functions of FOS for multiple poles can be expressed using the corresponding derivatives of the two-parameter MLF of type E_α,α. Based on defining relation (28) for Podlubny's function ε_k, it is easy to prove that all basis functions for the FOS, listed in the "Basis Functions" column, can be expressed just using the function ε_k as listed in the "Generated from" column. 
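This one-to-one correspondence can be checked numerically for the simplest entry of Table 5: for a simple real pole a, the FOS basis function t^(α−1)·E_α,α(a·t^α) must collapse to the IOS basis function e^(at) when α = 1. The sketch below performs this check with a naive truncated series, adequate only for the small arguments used; the pole value, the α values, and the time grid are arbitrary illustrative choices made here.

```python
# Spot-check of the Table 5 correspondence for a simple real pole:
# t^(alpha-1) * E_{alpha,alpha}(a * t^alpha) reduces to exp(a*t) at alpha = 1.
import numpy as np
from scipy.special import gamma

def ml2(z, alpha, beta, terms=100):
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

def basis_real_pole(t, a, alpha):
    return t**(alpha - 1.0) * ml2(a * t**alpha, alpha, alpha)

a = -2.0
t = np.linspace(0.01, 3.0, 6)      # avoid t = 0, where t^(alpha-1) may diverge

for alpha in (1.0, 0.9, 1.1):
    vals = np.array([basis_real_pole(ti, a, alpha) for ti in t])
    if alpha == 1.0:
        print("alpha = 1 deviation from exp(a t):",
              np.max(np.abs(vals - np.exp(a * t))))
    else:
        print(f"alpha = {alpha}:", np.round(vals, 4))
```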
Thus, Table 5 provides guidance on how to obtain the time response of a commensurate FOS from the response of the associated classical IOS, namely by substituting for each other the appropriate basis functions.The most systematic correspondence is provided by comparing the generating functions f (t) for IOS (1) and f F (t) for FOS, (see the comparison of the "Generated from" columns in Table 5): for real poles of a general multiplicity, the basis functions are generated with the function t k e at or ε k (t, a; α, α), and for complex poles of a general multiplicity, by the real and imaginary parts of the function t k e (a+ib)t or ε k (t, a + ib; α, α). Calculation of Step Response The correspondence between the basis functions of the IOS and FOS from Table 5 allows a convenient determination of analytical formulas for the impulse response g F (t) of commensurate FOS from the known impulse response g(t) of IOS.Let us reiterate that transfer functions ( 2) and ( 4) of both systems are bounded by relation (3). Let us analyze to what extent the correspondences of Table 5 can be used to determine the step response h F (t) of the FOS.Since the step response is the forced response to a unit step, the question is whether it can be constructed from the basis functions from Table 5, which describe natural, not forced, responses. It turns out that in the general case, we really cannot make do with the above basis functions when constructing the step response. The step response h F (t) of the associated commensurate FOS of transfer function K F (s α ) (4) is obtained with the Laplace inversion of the function K F (s α )/s = K(s α )/s.Similarly, the step response h(t) of the IOS is the Laplace inverse of the expression K(s)/s: Since the denominator of the second equation contains the s operator, not its power s α , the assumption from Section 1 that the Laplace transform of the time response of a FOS is obtained from the Laplace transform of IOS with substitution (2) cannot be used.However, all the conclusions that follow from this assumption, including the transformation relations of Table 5, are based on this assumption. On the other hand, the step response can be easily obtained with a direct integration of the impulse response with respect to time.Since the impulse response of proper systems can be thought of as a linear combination of basis functions, the step response is actually constructed from a linear combination of the integrals of the basis functions with respect to time.Using the rule of integration of Podlubny's function (28) [13], the first integral of generating function f F (t) (52) can be written as follows: Thus, it is obvious that the step response can be composed from the basis functions that are built of MLFs with the parameters α, α + 1. Starting from transfer function (4), it is possible to first determine the impulse response based on Table 5 and then proceed to the step response by integrating each basis function according to (54). Numerical Aspects Reliable algorithms for computing the basis functions are necessary for an accurate and fast computation of waveforms from their semi-symbolic formulas that use these functions. 
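Before turning to robust algorithms, a plain truncated series already allows a quick consistency check of the step-response construction described above: integrating the simple-real-pole impulse basis function term by term gives t^α·E_α,α+1(a·t^α), which for α = 1 must reduce to the classical single-pole step response (e^(at) − 1)/a. The sketch below (illustrative pole value and grid, naive series, Python rather than MATLAB) performs this check; the limited accuracy of such a series for large arguments is precisely why the dedicated routines discussed next are needed in general.

```python
# Consistency check of the step-response construction: the term-wise integral
# of t^(alpha-1) * E_{alpha,alpha}(a t^alpha) is t^alpha * E_{alpha,alpha+1}(a t^alpha),
# which for alpha = 1 equals the classical step response (exp(a t) - 1) / a.
import numpy as np
from scipy.special import gamma

def ml2(z, alpha, beta, terms=100):
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

a = -2.0
t = np.linspace(0.0, 3.0, 7)

for alpha in (1.0, 1.2):
    h = np.array([ti**alpha * ml2(a * ti**alpha, alpha, alpha + 1.0) for ti in t])
    if alpha == 1.0:
        ref = (np.exp(a * t) - 1.0) / a
        print("alpha = 1 deviation:", np.max(np.abs(h - ref)))
    else:
        print("alpha = 1.2 step-response samples:", np.round(h, 4))
```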
Table 5 shows that all the basis functions for FOS can be uniformly generated with Podlubny's function (28) ε k (t, a + ib; α, α).In terms of numerical computations, it is a complex function of three real and one complex argument.Relation (28) defining this function contains the k-th derivative of the complex two-parameter MLF of the complex argument.The evaluation of this function can be approached either directly, i.e., with an algorithm for computing the derivatives of the two-parameter MLF, or indirectly by computing the three-parameter MLF, from which the derivatives of the two-parameter MLF can then be easily determined (see Table 1). A number of papers deal with the numerical calculation of MLF.Some of them have become the basis for developing robust algorithms for evaluating two-and three-parameter MLFs [30,31].Among the best known is the MATLAB function mlf.m of Podlubny and Kacenak [27], which computes the two-parameter MLF with a complex argument.However, since it is not designed to compute derivatives of MLFs, it cannot be used to compute the basis functions belonging to multiple poles.Garrappa's ml.m function is available on the MATLAB Central File Exchange, allowing for computing one-parameter, twoparameter and three-parameter MLFs [29].Thus, the ml.m function allows for computing the derivative of the two-parameter MLF indirectly, using three-parameter MLFs.The corresponding routine, which implements the optimal parabolic contour (OPC) algorithm described in [33], is based on the inversion of the Laplace transform on a parabolic contour suitably chosen in one of the regions of analyticity of the Laplace transform [29].However, there are some limitations in computing three-parameter MLFs via the ml.m: the parameter α must be greater than 0 and less than 1, and the absolute value of the complex argument of the MLF must not be less than απ radians.In particular, the second condition can pose a significant limitation for computing derivatives of the two-parameter MLF as part of Podlubny's generating function. To evaluate two-parameter MLFs and their derivatives, it is convenient to use another MATLAB function, ml_deriv(α, β, k, z) by R. Garrappa.The function computes the k-th derivative of the MLF with the parameters α and β at each entry of the complex vector z.Derivatives of the ML function are evaluated by exploiting an algorithm combining (by means of the derivative balancing technique devised in [31]) the Taylor series, a summation formula based on the Prabhakar function and the numerical inversion of the Laplace transform obtained after generalizing the algorithm described in [33].The function was successfully tested in [25] for the accuracy of computing the derivatives.Based on the results published in [25], the given algorithm was used to evaluate the basis functions of Table 5, which belong to all kinds of poles, real or complex, simple or with a general multiplicity.The algorithm from ml_deriv is also continuously implemented in the current versions of SNAP for symbolic, semi-symbolic and numerical analyses of analog fractionalorder circuits [14]. Illustration of the Use of Basis Functions for Transient Analysis The use of basis functions for a transient analysis is illustrated by two examples.In the first step, the formulas for impulse and step responses are always found.In the second step, these responses are quantified.The MATLAB script ml_deriv by R. Garrappa [25] was used to evaluate the basis functions. 
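Where ml_deriv is not available, a rough stand-in for small arguments is term-wise differentiation of the defining series. The sketch below is such a naive substitute (it is not the derivative-balancing algorithm of [31], and the sample argument is an arbitrary assumption); it cross-checks the integer-order case, where every derivative of E_1,1(z) = e^z equals e^z itself.

```python
# Naive evaluation of the m-th derivative of the two-parameter MLF by
# differentiating its power series term by term:
#   d^m/dz^m E_{alpha,beta}(z) = sum_j (j+m)!/j! * z^j / Gamma(alpha*(j+m) + beta)
# Adequate only for moderate |z|; robust routines are preferable in general.
import math
import numpy as np
from scipy.special import gamma

def ml2_deriv(z, alpha, beta, m=0, terms=100):
    total = 0.0
    for j in range(terms):
        coeff = math.factorial(j + m) / math.factorial(j)
        total += coeff * z**j / gamma(alpha * (j + m) + beta)
    return total

# Cross-check against the integer-order case, where every derivative of
# E_{1,1}(z) = exp(z) is exp(z) itself.
z = 0.7
for m in range(4):
    print(m, abs(ml2_deriv(z, 1.0, 1.0, m) - np.exp(z)))
```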
The responses obtained with this procedure were also simulated in the following SPICE-family simulation programs: Cadence PSPICE 16.5, LTSpice XVII, and Micro-Cap 12. The fractional-order transfer functions were modeled using Laplace sources. In the case of inconsistent results, data from MATLAB were imported into the simulation program for comparison. The decision on which results were correct was made via the so-called FFT check, i.e., by taking the Fourier transform of the impulse responses and comparing the results with the frequency characteristics derived from the transfer functions.

Significantly different outputs of the SPICE-family programs when solving the same task were observed for all the simulations described below. It turned out that, in all cases, the correct results were generated only in MATLAB via the basis-function method. As far as the SPICE-family programs are concerned, the most accurate analysis was achieved with Micro-Cap 12. For this reason, the Micro-Cap outputs were chosen for the MATLAB-SPICE comparison.

For system (55), let us find the basis functions of its natural response and use them to construct the formulas for the impulse and step responses.

First, the basis functions and the impulse response of the associated IOS will be found. Substituting α = 1 into (55) yields the transfer function K(s) (see Equation (2)). The decomposition of K(s) into partial fractions and the Laplace inversion lead to the impulse response

g(t) = Σ_{k=1..6} Pk e^(ak t) + 2 e^(a7 t) (P7r cos(b7 t) − P7i sin(b7 t)) + 2 e^(a9 t) (P9r cos(b9 t) − P9i sin(b9 t)). (56)

Equation (56) implies that the transfer function K(s) has six real poles ak, k = 1...6, and two pairs of complex conjugate poles of the type ck = ak ± ibk. The corresponding residues are denoted by the symbols Pk = Pkr + iPki. The poles and residues were calculated with high precision using the MATLAB symbolic toolbox and, after rounding, are summarized in Table 6. A similar decomposition of the function K(s)/s can be used to obtain the formula for the step response.

Figure 1 shows the plots of the corresponding impulse and step responses in MATLAB. For illustration, the responses of the IOS (α = 1) were added. Since the same results were obtained in Micro-Cap, they were not added to Figure 1. It follows from the comparison of the IOS and FOS responses in Figure 1 that the FOS responses exhibit less damping due to the increase in the α parameter from 1 to 1.2. In [35], only the step response hF is published, and its exact comparison with the analysis results of Figure 1 cannot be performed owing to the lack of numerical data. Therefore, the correctness of the results from Figure 1 was verified with the FFT test.
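To make this construction concrete, the following minimal sketch (with placeholder poles and residues rather than the values of Table 6) assembles an impulse response of the form (56) from a partial-fraction expansion and applies the FFT check mentioned above, comparing the FFT of the sampled g(t) with the frequency response computed directly from the same poles and residues. The pole/residue values, time span, and frequency limit are all hypothetical.

import numpy as np

# hypothetical real poles/residues and one complex-conjugate pair c7 = a7 +/- i*b7
real_poles    = np.array([-1.0, -3.0, -7.0])
real_residues = np.array([ 0.5, -0.2,  0.8])
a7, b7 = -2.0, 12.0
P7 = 0.3 - 0.1j                               # P7 = P7r + i*P7i

t  = np.linspace(0.0, 20.0, 2 ** 14, endpoint=False)
dt = t[1] - t[0]

# g(t) = sum_k Pk e^{ak t} + 2 e^{a7 t} (P7r cos(b7 t) - P7i sin(b7 t))
g  = (real_residues[:, None] * np.exp(np.outer(real_poles, t))).sum(axis=0)
g += 2.0 * np.exp(a7 * t) * (P7.real * np.cos(b7 * t) - P7.imag * np.sin(b7 * t))

# FFT check: FFT(g)*dt (with a trapezoid end-correction) approximates G(j*omega),
# which must match K(j*omega) rebuilt from the same partial-fraction expansion
G_fft = (np.fft.rfft(g) - 0.5 * g[0]) * dt
omega = 2.0 * np.pi * np.fft.rfftfreq(len(t), dt)
s     = 1j * omega
K     = (real_residues[:, None] / (s - real_poles[:, None])).sum(axis=0)
K    += P7 / (s - (a7 + 1j * b7)) + np.conj(P7) / (s - (a7 - 1j * b7))

print(np.max(np.abs(G_fft - K)[omega < 50.0]))   # small -> response and K(s) are consistent
# For the associated FOS, Table 5 replaces e^{a t} by eps_0(t, a; alpha, alpha) (and the
# complex pair analogously); the same FFT check can then be applied to g_F(t).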
Commensurate Fractional-Order Sallen-Key Filter

Figure 2 shows a simplified version of the Sallen-Key filter from [35] with an operational amplifier as the voltage buffer. Instead of classical capacitors, fractors with the impedances 1/(s^α C1) and 1/(s^β C2) are used, where α, β ∈ R+. For α = β, the filter behaves like a commensurate FOS with the transfer function (59), in which the pseudo-natural frequency and pseudo-quality factor are given by (60). Transfer function (59) can be rewritten in the form (61), where ω0F is the corner frequency of the asymptotic frequency characteristic of the fractional filter. Consider the Sallen-Key filter whose parameters are chosen such that Q = 5 and the corner frequency f0F = ω0F/(2π) is 1 kHz. Note that the natural frequency of the associated IOS, resulting from (60), is about 174 Hz.

First, the impulse response for the case α = 1 will be found. Transfer function (59) has a pair of complex conjugate poles c = a ± ib, a = −109.2808, b = 1091.4412. According to Table 3, the impulse response of the IOS follows directly from this pole pair, and the impulse and step responses of the fractional filter are then obtained by the corresponding basis-function substitution.

If the parameters C1 and C2 in the filter are changed to the values C1 = C2 = 10 nF, then the corner and natural frequencies are preserved, but the quality factor is reduced to 0.5. This corresponds to an integer-order filter with critical damping and the two-fold real pole −ω0. For critical damping, the impulse response has the two-fold real pole form t e^(−ω0 t) (up to a gain constant), and the corresponding responses of the fractional filter again follow from the basis functions of Table 2. The respective waveforms computed in MATLAB are shown in Figure 3.
The obtained plots of the impulse and step responses satisfy the check of the basic features (the limit values at times zero and infinity are correct, and the local maxima and minima of the step response correspond to the passages of the impulse response through zero). Looking at the step response for Q = 5, we find a much smaller overshoot than for a classical second-order IOS. This observation is consistent with the well-known fact that lowering the order 2α below 2 (i.e., α = 0.8 < 1) acts as additional damping.

The filter in Figure 2 was also modeled in SPICE. Two different models were used:

Model 1: Modeling transfer function (61) using the Laplace voltage-controlled voltage source.
Model 2: Model of the circuit in Figure 2.

In Model 2, for a conclusive comparison with the MATLAB results, the OpAmp follower was modeled with a simple controlled source (gain = 1). The fractors were modeled with Laplace current-controlled voltage sources.

Regarding the Micro-Cap outputs, Models 1 and 2 show a perfect match with the MATLAB results. However, the experiments revealed errors in the SPICE transient analysis if the fractional-order filter was set to an unstable mode due to an inappropriately chosen parameter α. In contrast, the method based on basis functions gives correct results.

The stability analysis of transfer function (59) or (61), respectively, leads to the observation that the fractional-order Sallen-Key filter in Figure 2 is stable only for α below a threshold value; for Q = 5, the threshold value of the parameter α is 1.0638. The filter was therefore re-designed for α = 1.15, i.e., for the unstable mode, with Q = 5 and f0F = 1 kHz preserved.
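The threshold quoted above can be reproduced with the standard stability criterion for commensurate fractional-order systems (Matignon's condition, |arg(c)| > απ/2 for every s^α-domain pole c), which is presumably what the stability analysis of transfer functions (59)/(61) relies on; since only the pole angle enters, the pole pair quoted earlier for the filter is sufficient for this check. A minimal sketch:

import cmath, math

c = complex(-109.2808, 1091.4412)        # s^alpha-domain pole quoted above (its conjugate behaves identically)

def is_stable(alpha, poles):
    # Matignon: every s^alpha-domain pole must satisfy |arg(c)| > alpha*pi/2
    return all(abs(cmath.phase(p)) > alpha * math.pi / 2.0 for p in poles)

alpha_threshold = 2.0 * abs(cmath.phase(c)) / math.pi
print(alpha_threshold)                       # ~1.0635, close to the reported 1.0638
print(is_stable(0.80, [c, c.conjugate()]))   # True  -> the alpha = 0.8 design is stable
print(is_stable(1.15, [c, c.conjugate()]))   # False -> the re-designed alpha = 1.15 filter is unstable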
Figure 4 shows the results of the simulation of the step response in Micro-Cap. The simulation was performed with the default simulation parameters and with a step ceiling of 10 µs, which is one thousandth of the simulation time. For comparison, data from MATLAB were imported into the simulation program environment, showing the results obtained with the basis-function method.

Figure 4 shows that Micro-Cap calculates completely different responses for Models 1 and 2, which should be interchangeable. The response for Model 2 is close to the response computed from the basis functions in MATLAB. The FFT check cannot be used to verify the results because the system is unstable. However, by lowering the step ceiling 100-fold to 0.1 µs, the simulation program can be forced to perform a more accurate, but time-consuming, transient analysis. Then, the step response obtained using Model 2 is overlaid with the response computed using MATLAB.

The above pitfalls of the transient analysis of FOS in a SPICE simulation environment certainly deserve attention. The unpleasant fact is that different SPICE-family programs may deal with the same simulation task differently, depending on the properties of their internal algorithms, and that the particular simulator chosen may evaluate the response differently depending on how the model of the fractional-order circuit is constructed. The cause should be sought in the specifics of the convolutional transient analysis that SPICE uses to solve circuits with Laplace sources. In any case, the method utilizing basis functions does not suffer from such problems in principle.

Conclusions

The main results of this work are summarized in the following points:
1. The natural response of an arbitrary commensurate fractional-order system with transfer function (4) and parameter α can be expressed as a linear combination of basis functions of time t, which are built from the two-parameter Mittag-Leffler functions E_α,β with β = α and their derivatives.
2. All the basis functions can be uniformly generated by using Podlubny's function ε_k(t, c; α, α), where c is the corresponding s^α-domain pole of the system and k is its multiplicity.
3. There are unambiguous assignments between the generating functions, as well as between the individual basis functions, of the integer-order system and the associated commensurate fractional-order system according to Table 5. This can be used to quickly construct an analytical formula for the impulse response of the fractional-order system if one knows the response formula of the classical integer-order system.
4. The step response of any commensurate fractional-order system with transfer function (4) can be written as a linear combination of basis functions whose generating function is ε_k(t, c; α, α + 1).
5. The transient analysis of FOSs via basis functions can lead to credible results also in cases where SPICE-family programs fail.
Figure 1. (a) Impulse responses gF and g; (b) step responses hF and h of the commensurate system with transfer function (55) and the associated IOS with α = 1.
Figure 3. (a) Impulse responses gF; (b) step responses hF of the commensurate Sallen-Key filter with transfer function (59) for α = 0.8 and for two different values of damping.
Table 2. Basic relations between the time and Laplace domains for a single and a multiple real pole a.
Table 3. Basic relations between the time and Laplace domains for a simple pair of complex conjugate poles.
Table 5. Correspondence between the basis functions of IOS and FOS for various types of poles.
Table 6. Parameters of the partial fraction expansion of transfer function (55).
11,702.6
2023-07-13T00:00:00.000
[ "Engineering", "Mathematics" ]
Open source solutions for Librarians

Open source software is software that users have the ability to run, copy, distribute, study, modify, share, and improve for any purpose. Open source library software does not carry the initial cost of commercial software, and it allows libraries to have more control over their working environment. Library professionals should be aware of the advantages of open source software and should be involved in its development. They must have basic knowledge of its selection, installation, and maintenance. Open source software requires more hands-on responsibility than commercial software does. Library professionals often do not seriously consider the benefits of open source software for automation and are therefore reluctant to use it, frequently because they lack the skills to support it. This document highlights the core open source library software.

Introduction

What is open source software technology? Open source software is software whose source code is available under a license (or public domain agreement) that allows users to study, change, and improve the software and redistribute it in modified or unmodified form. It is often developed in a public and collaborative way, and it is frequently compared to user-generated content. For many libraries, organizing their books and other media can be a daunting task, especially as the library grows with more material. Years ago, we had paper catalog systems (remember the Dewey Decimal System) that kept things organized but were difficult to maintain. With today's computer technology, organizing our libraries has never been easier or more efficient. The card catalog is finished, and in some libraries it is much easier to find a book over an Internet connection and pick it up on arrival, rather than wasting time searching the aisles for the next read.

Now, just because the world has been blessed with wonderful software solutions that make everything easier to do does not mean that every library in the universe is using these solutions. Many libraries do not have large amounts of money to spend, and what money they do have usually goes toward additional resources. Because of this need for software (and the installation and training costs associated with it) and the lack of money available to spend on it, many libraries are left behind when it comes to keeping up with the latest technologies, unless, of course, they embrace the open source movement and use some of the myriad software solutions available to help.

Most of the software we use every day is known as "proprietary," which in summary means that it costs money and that the actual software code is restricted: it cannot be edited, copied, or changed from its original construction. The code is closed, and the software is, practically speaking, what it is. Open source software, on the other hand, is the exact opposite. The open source mindset revolves around sharing and collaboration, and these two important elements perfectly describe open source software. First of all, open source software is free for everyone; more importantly, not only is the software free, it is also free for anyone to copy, modify, and extend. This increases the potential of a software program thanks to this free-thinking model.
Many large developer groups have customized basic open source programs to what they deem necessary and, in turn, have returned these changes to the open source community for free, where others can continue to develop their work. There are many different types of open source software solutions today that could be adopted by a library: basic operating systems, document processing programs, library management software (LMS), and digital library software. "Open source promotes the reliability and quality of software by supporting independent peer review and the rapid evolution of source code. To be certified as open source, a program license must guarantee the right to read, redistribute, modify, and use it freely."

Library management software

Evergreen
Evergreen is a robust, business-class ILS solution developed to support the workload of large libraries in a fault-tolerant system. It is also standards compliant, uses an OPAC interface, and offers many features, including flexible administration, workflow customization, and customizable programming interfaces; since it is open source, libraries cannot be locked in and can benefit from contributions from the community.

Digital library

Greenstone Digital Library Software
Greenstone digital library software is an open source system for creating and presenting collections of information. It creates collections with effective full-text search and metadata-based navigation services that are attractive and easy to use. Furthermore, the collections are easily maintained and can be fully expanded and rebuilt automatically. The system is extensible: software "plug-ins" can handle different types of documents and metadata. The purpose of the Greenstone software is to allow users, especially in universities, libraries, and other public service institutions, to create their own digital libraries.

DSpace
DSpace is an innovative digital institutional repository that captures, archives, indexes, preserves, and redistributes the intellectual output of a university's research faculty in digital formats. It manages and distributes digital items, which consist of digital files, and allows the creation, indexing, and searching of associated metadata to locate and retrieve items. DSpace was designed and developed by the Massachusetts Institute of Technology (MIT) Libraries and Hewlett-Packard (HP). DSpace was designed as an open source application that institutions and organizations could run with relatively few resources. It is used to support the long-term preservation of digital material stored in the repository, and it is also designed to facilitate submission. DSpace supports the submission, management, and accessing of digital content.

EPrints
EPrints is an open source software package for creating open access repositories that comply with the Open Archives Initiative protocol for metadata harvesting. It shares many of the features commonly seen in document management systems, but it is used primarily for institutional repositories and scientific journals. EPrints was developed at the University of Southampton School of Electronics and Computer Science and was released under the GPL license.

Fedora
Fedora's open source software provides organizations with a flexible, service-oriented architecture to manage and distribute their digital content. At its core is a powerful digital object model that supports multiple views of each digital object and the relationships between digital objects. Digital objects can encapsulate locally managed content or reference remote content.
Dynamic views are possible by associating web services with objects. Digital objects exist within a repository architecture that supports a variety of management functions. All Fedora functions, both at the object level and at the repository level, are exposed as web services. These functions can be protected with specific access control criteria. This unique combination of features makes Fedora an attractive solution in a variety of domains. Examples of Fedora-based applications include library collection management, media authoring systems, file repositories, institutional repositories, and digital libraries for education.

Publication on the web

WordPress
WordPress began as a fast, free, and open source blogging solution a few years ago; today it is a perfect alternative for creating a website from scratch. In addition to being free (and easy to install), the WordPress community has exploded, with thousands of users and programmers creating custom themes and plugins to modify the appearance and operation of the software. The most important aspects of the software are its intuitive interface and its content management system. With its visually rich editor, anyone can post text and photos on the website. Other options include multiple authors (with separate logins), integrated Really Simple Syndication (RSS) technology to keep subscribers up to date, and a comment system that allows readers to interact with site content. It is a great way to communicate with users, staff, and others.

Drupal
Drupal is another open source web publishing option that allows a person or a community of users to easily publish, manage, and organize a wide range of content on a website. Tens of thousands of people and organizations have used Drupal to power many different kinds of websites, including community web portals, discussion sites, corporate websites, intranet applications, personal websites or blogs, e-commerce applications, resource lists, and social networking sites.

Conclusion
Using open source software is as good as owning it, and it is suitable for the long-term needs of the library. It is worth spending time and energy on learning and adopting it. It therefore seems that there are some very powerful solutions available today that could be used to create a much more enterprising library. By using open source software in the library, money that would otherwise be spent on software solutions can be used for other important resources, such as the purchase of additional multimedia resources (books, magazines, etc.), or to recruit qualified technical support personnel who can show users how to make better use of existing resources. Additionally, this free software is constantly updated, modified, and customized to meet the needs of the library. While this is all well and good, and sounds like an advantageous solution for your library, there are still hurdles and obstacles that will have to be overcome. I hope this article provides introductory information on how to move your library away from traditional IT products and immerse yourself in the set of open source resources available today.

Source of Funding
None.
2,122.4
2020-07-28T00:00:00.000
[ "Computer Science" ]
Reducing Off-State and Leakage Currents by Dielectric Permittivity-Graded Stacked Gate Oxides on Trigate FinFETs: A TCAD Study Since its invention in the 1960s, one of the most significant evolutions of metal-oxide semiconductor field effect transistors (MOSFETs) would be the 3D version that makes the semiconducting channel vertically wrapped by conformal gate electrodes, also recognized as FinFET. During recent decades, the width of fin (Wfin) and the neighboring gate oxide width (tox) in FinFETs has shrunk from about 150 nm to a few nanometers. However, both widths seem to have been leveling off in recent years, owing to the limitation of lithography precision. Here, we show that by adapting the Penn model and Maxwell–Garnett mixing formula for a dielectric constant (κ) calculation for nanolaminate structures, FinFETs with two- and three-stage κ-graded stacked combinations of gate dielectrics with SiO2, Si3N4, Al2O3, HfO2, La2O3, and TiO2 perform better against the same structures with their single-layer dielectrics counterparts. Based on this, FinFETs simulated with κ-graded gate oxides achieved an off-state drain current (IOFF) reduced down to 6.45 × 10−15 A for the Al2O3: TiO2 combination and a gate leakage current (IG) reaching down to 2.04 × 10−11 A for the Al2O3: HfO2: La2O3 combination. While our findings push the individual dielectric laminates to the sub 1 nm limit, the effects of dielectric permittivity matching and κ-grading for gate oxides remain to have the potential to shed light on the next generation of nanoelectronics for higher integration and lower power consumption opportunities. Introduction Silicon oxide has been used as a gate dielectric material on thin film transistors for over 40 years, but as dimensions shrink, alternatives with higher dielectric constants are necessary to reduce leakage currents.While high-κ dielectrics have been investigated for their thermal stability and compatibility with Si, FinFET technology, with 3D doublegate and triple-gate transistors, has further advanced, leading to smaller, more efficient transistors with reduced power consumption [1][2][3][4][5]. The continuous downscaling of MOS devices is indispensable for increasing the transistor density and performance, leading to efficient chip functionality at higher speeds.However, this scaling poses challenges such as severe short channel effects (SCEs), increased fabrication costs, and difficulties in device processing [6][7][8].Multi-gate MOS device structures like FinFETs, which use multiple gate electrodes and an ultrathin body, have been developed to address these challenges, showing an excellent device performance at The permittivity matching TFT designs appeared [42][43][44] when the SiO 2 and SiN x gate insulators were discovered to be behaving well when neighboring the Si channel [44], and designers frequently used the Equivalent Oxide Thickness (EOT) convention [41,[45][46][47] for the determination of the thickness of hi-κ gate oxide to replace the SiO 2 or SiN x .But EOT also had its disadvantages, like its invalidity for non-planar devices due to the impact of device geometry on capacitance behavior [48] and a gate-leakage current increase when the gate oxide layer is scaled down below 2 nm [49]. 
With κ-grading (also called as "epsilon grading" (ε-grading), so that dielectric permittivity changes through device depth is interchangeably designated as "ε" or "κ" in different references), our aim is to match the dielectric permittivity of stages; i.e., the Si channel is followed by a dielectric material with the lowest bulk dielectric constant κ b , followed by a material with a higher κ b , then followed by a material with a higher κ b again, until the gate is reached.κ-grading together with an effective dielectric constant (κ EFF ) calculation of the staged/graded gate oxide structure is proposed for the better effectivity of gate oxide.We highlight three steps in the incorporation of this technique as follows: 1. κ-grading is employed for stacked gate oxide.This is detailed in Section 3.1.1. 2. Even when a single material gate dielectric is used, the Penn model [50,51] can be utilized for the calculation of effective dielectric constants of the gate oxide layer, κ EFF , as the bulk dielectric constant usage will be misleading for gate oxides with thicknesses of a few nanometers.This is detailed in Section 3.1.2. 3. With each addition of a new laminate material, the overall effective dielectric constant of the gate oxide layer, κ EFF , can be recalculated using the Maxwell-Garnett [52] mixing formula, so that a fair mechanism is established to compare the performance of FinFETs with respect to this κ EFF as the independent variable.The mentioned calculations are given in Section 3.1.3. Our research work offers the most comprehensive simulation work in the investigation of stacked gate oxides on FinFETs with 41 different gate oxide combinations, all with a 3 nm total thickness, adding two-stage or three-stage κ-grading features and taking an effective dielectric constant (κ EFF ) calculation into account.In this paper, we present the simulation results obtained using SILVACO ATLAS for a 3D silicon on insulator (SOI) n-FinFET structure with κ-graded stacked gate oxides. This manuscript is divided into several sections: In Section 2, the FinFET device structure, its geometry and gate dielectric combinations, and their designations are introduced.In Section 3, details of the κ-grading, effective dielectric constant κ EFF calculation, mathematical methods for FinFET modeling, simulation tool usage, and choice of performance metrics are presented.Our simulation results are exhibited and discussed with some analysis and insights that we derived in Sections 4-6.Finally, fabrication considerations and the conclusions are reported in Sections 7 and 8. Device Structure 2.1. 
FinFET Geometric Model

The 3D Technology Computer-Aided Design (TCAD) structure for a FinFET with a gate oxide of graded dielectric permittivity is shown in Figure 1. SILVACO ATLAS is used for the device simulation, with a gate oxide thickness (tox) of 3 nm; the buried oxide (BOX) material is kept as HfO2 and is never changed throughout all simulations. An equal doping concentration (Nd) of 5 × 10^19 cm−3 is used in the source, drain, and channel regions. Other FinFET properties are shown in Table 1. We call this FinFET type "FinFET with κ-graded gate oxide" or "gκ-FinFET" throughout the paper. The device structure is an n-type FinFET comprising three gates, one on top and two at the sides of the fin-shaped channel, not isolated but behaving as a single inverted U-shaped gate. Metal with a work function (ϕw) of 5 eV is applied at the gate, common for n+-doped Si channel junctionless architectures [7,27,28]. A Ni or CrAu alloy is suitable for this work function value, common for junctionless n-TFTs.

Gate Dielectrics

Six base dielectric materials, SiO2, Si3N4, Al2O3, HfO2, La2O3, and TiO2, whose bulk dielectric constants are shown in Table 2, are selected as single-layer gate dielectrics of 3 nm thickness (tox) for a 14 nm channel length (LFET) gκ-FinFET structure. These six materials are used one by one for the first six simulations to form the control group. Then, 15 different two-stage and 20 different three-stage κ-graded material combinations composed of these six base dielectrics, as designated in Table 3, are devised between the Si channel and the gate. The AHT case consists of Al2O3: HfO2: TiO2 gate oxides, as shown in Figure 1.
In Table 3, we introduce reference designators in the last column for the gκ-FinFET equipped with each gate oxide material, for easy reading of the figures incorporated in the results. The designator consists of two to four alphanumeric characters, including the first character of each gate oxide it consists of. Since SiO2 and Si3N4 have the same first character, gκ-FinFETs with these respective gate oxides were designated as S1 and S2. All the parameters of the gκ-FinFET were kept the same in each simulation; only the gate oxide layer material combination was changed, making a total of 41 simulations. The performances of the FinFETs with these gate oxide combinations will be shown in the subsequent pages and can be followed with these designations, which appear in boldface throughout the paper, with the individual stage thicknesses read from Table 4. For example, the FinFET with a gate oxide of a single layer of SiO2 is designated as S1 and the same with a single layer of Si3N4 as S2; for the Al2O3: TiO2 gate oxide combination, the FinFET is designated as AT, and for a Si3N4: La2O3: TiO2 combination, the same is designated as S2LT.

Methods

Our methods, mathematical derivations and modeling, choice of performance metrics, and usage of these figures of merit (FoM) for evaluation are presented herein in the following main steps: (1) κ-grading and calculation of the effective κ of the gate oxide; (2) mathematical modeling in ATLAS software v5.34.0.R; (3) choice of performance metrics for performance evaluation.

κ-Grading

Regarding κ-grading, we mean that, among the dielectric materials selected for stacking, the Si channel should be followed by the dielectric material with the lowest bulk dielectric constant κb, followed by a material with a higher κb, then followed by a material with a higher κb again, until the gate is reached, as in Figure 2. We mainly target dielectric permittivity matching of the gate oxide at both ends, the Si channel side and the metal side. Thus, as permittivity matching at both ends of the gate oxide is considered, we implement this concept herein by κ-grading, keeping the permittivity of neighboring materials as close as possible.
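As an illustration, the sketch below orders a chosen subset of the six base dielectrics by bulk κ, as described here, and then combines the stages into a single κEFF with the Maxwell-Garnett mixing rule developed in Section 3.1.3, applied pairwise as in Equations (3) and (4). The bulk κ values are typical literature numbers (the paper's Table 2 may differ slightly), the Maxwell-Garnett expression is the standard spherical-inclusion form restated in [52], the choice of which constituent plays host versus inclusion is an assumption, and the Penn-model thickness correction of Section 3.1.2 is omitted for brevity.

KAPPA_BULK = {"SiO2": 3.9, "Si3N4": 7.5, "Al2O3": 9.0,
              "HfO2": 25.0, "La2O3": 30.0, "TiO2": 95.0}   # assumed nominal bulk values

def grade_stack(materials):
    # kappa-grading: order the chosen dielectrics from lowest to highest bulk kappa
    # (channel side first, gate side last)
    return sorted(materials, key=lambda m: KAPPA_BULK[m])

def maxwell_garnett(k_incl, k_host, f_incl):
    # standard Maxwell-Garnett mixing formula (spherical inclusions), cf. [52]
    beta = (k_incl - k_host) / (k_incl + 2.0 * k_host)
    return k_host * (1.0 + 2.0 * f_incl * beta) / (1.0 - f_incl * beta)

def kappa_eff(kappas, fractions):
    # pairwise application for two- and three-stage stacks, as in Equations (3) and (4):
    # mix A with B first, then treat the result as one medium and mix it with C
    k_eff, f_acc = kappas[0], fractions[0]
    for k_next, f_next in zip(kappas[1:], fractions[1:]):
        f_incl = f_next / (f_acc + f_next)    # volume fraction of the newly added stage
        k_eff = maxwell_garnett(k_next, k_eff, f_incl)
        f_acc += f_next
    return k_eff

stack = grade_stack(["HfO2", "Al2O3", "TiO2"])      # -> ['Al2O3', 'HfO2', 'TiO2'] (the AHT case)
print(stack, kappa_eff([KAPPA_BULK[m] for m in stack], [1/3, 1/3, 1/3]))  # three equal 1 nm laminates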
Suppose κbA and κbB are the bulk dielectric constants of materials A and B, and κA and κB are the calculated dielectric constants of their respective nanolaminates, with f the volumetric filling factor of material A and 1 − f that of material B, in the two-phase dielectric system of Figure 3.

A theoretical foundation was first given by Penn's 1962 paper [50]. For Si, it has been shown that for thicknesses greater than 200 Å (20 nm), the bulk κbA can be considered unchanged and equivalent to κA, whereas if tA is less than 200 Å, one needs to use the wave-number-dependent equation for the changing dielectric function. For practical purposes, this equation evolved into a modified model [54] by Tsu in 1997, and then into a generalized one [51] by Sharma in 2006, for the calculation of the size-dependent energy gap and dielectric permittivity of nanolaminated dielectric structures under quantum confinement effects, where κA becomes less than κbA. A patent by Gealy [23] in 2012 incorporated similar equations to calculate the dielectric constant of a thin nanolaminate, as stated in Equation (1). Since our FinFET under consideration requires 1 nm, 1.5 nm, and 3 nm gate oxide nanolaminates, we chose to use Sharma's generalized Penn model. The calculation of the effective κ, hereinafter κEFF, of this dielectric system in the case of any individual thickness tA or tB narrowed below 200 Å is presented in two steps. First, the nanolaminate dielectric constant κA due to a thickness tA of nanometer order is calculated by Equation (1), where κbA is the bulk dielectric constant, κ∞A is the high-frequency dielectric constant, KfA is the Fermi wave vector, and tA is the planar thickness of the nano-scaled dielectric material A.
Equation (1) can be numerically generalized and further fitted to Equation (2), as in [51], forming the generalized Penn model, which we utilize for our calculations of κA for the desired thickness tA. When we calculate the resultant κA of a material due to its nanolaminate thickness tA, we observe a significant loss in the dielectric effect. This numerical approximation is depicted in Figure 4 for the TiO2 material, showing that at the order of a few nanometers the reduction in κA is significant: at a 3 nm thickness κA becomes 77, at 1.5 nm it is 52.6, and at 1 nm it is 35.8, compared to its bulk value of 95.

Maxwell-Garnett Model: Calculation of κEFF for the Whole Gate Oxide

The dielectric constant κEFF of the system of nanolaminates with total thickness tox = tA + tB and the corresponding volumetric filling factors is calculated by the Maxwell-Garnett mixing formula. Niklasson et al. [55] used, in 1981, the Maxwell-Garnett and Bruggeman effective medium theories to derive the average dielectric permeability of heterogeneous materials and estimated the dielectric properties of a composite material composed of cobalt and alumina. Petrovsky [56] laid the foundations of the multi-material "effective dielectric constant" calculation in profound detail in 2012, mainly with the Bruggeman equations with respect to the volumetric filling factor f. Markel [52] in 2016 issued a framework tutorial, surveying existing methods and restating the Maxwell-Garnett mixing formula for the calculation of κEFF for two-stage dielectrics. This formula gives the effective permittivity in terms of the permittivities and volume fractions of the individual constituents of the complex medium and is shown in Equation (3). To extend this formula to a three-phase system, we denote the dielectric constants of the three materials as κA, κB, and κC and their respective volumetric filling factors as
Calculate the effective dielectric constant κ AB for materials A and B using the Maxwell-Garnett mixing formula.ii.Consider κ AB as single-material AB's dielectric constant and apply the Maxwell-Garnett formula again, with input variables κ AB and κ C , to find the overall effective dielectric constant κ EFF , with f AB + f C = 1, where f AB = f A + f B , and finally, our equation becomes Equation ( 4) for a complex medium of three phases, A, B, and C. Therefore, using Equations ( 3) and ( 4), we calculated the κ EFF of two-stage and threestage dielectric materials denoted in last column of Table 4. Mathematical Models in ATLAS This section lays out modeling methods we utilize in ATLAS, Non-Equilibrium Green's Function, Hot Electron/Hole Injection Model and Direct Quantum Tunneling Model, equations of which are employed within simulations. Quantum Transport: Non-Equilibrium Green's Function (NEGF) Approach This fully quantum method treats such effects as source-to-drain tunneling, ballistic transport, and quantum confinement on equal footing.This situation is common to double gate and trigate transistors, FinFETs, and nanowire FETs. By specifying the NEGF_MS and SCHRODINGER options on the MODELS statement, we can launch a NEGF solver to model ballistic quantum transport in such devices as double gate or surround gate MOSFET.An effective-mass Hamiltonian H o of a two-dimensional device is given by: when discretized in real space using a finite volume method.A corresponding expression in cylindrical coordinates is: Rather than solving a 2D or 3D problem, which may take vast amounts of computational time, a Mode Space (MS) approach is used.A Schrodinger equation is first solved in each slice of the device to find eigenenergies and eigenfunctions.Then, a transport equation of electrons moving in the sub-bands is solved.As only a few lowest eigen sub-bands are occupied and the upper sub-bands can be safely neglected, the size of the problem is reduced.In the devices where the cross-section does not change, the sub-bands are not quantum-mechanically coupled to each other, and the transport equations become essentially 1D for each sub-band.Therefore, we can further divide the method into Coupled (CMS) or Uncoupled Mode Space (UMS) approaches.ATLAS tool automatically decides on the minimum number of sub-bands required and the method to be used.It is possible, however, to set the number of sub-bands by using the EIGEN parameter on the MOD-ELS statement.To enforce either CMS or UMS approaches, we can use NEGF_CMS or NEGF_UMS instead of NEGF_MS on the MODELS statement.The transformation of a real space Hamiltonian H o to a mode space is done by taking a matrix element between m th and n th wave functions of k th and l th slices: Skipping some middle steps of derivation from [57], 2-dimensional carrier density and corresponding current density functions are laid as follows: Carrier density function: x-component of current density: y-component of current density: Total current density: Here, G < is the Green's function as a matrix, whose diagonal elements are carrier densities as function of energy.t ijkl is an off-diagonal element of real space Hamiltonian H o , which couples nodes (x i ,y k ) and (x j ,y l ).In our overall model, this current density J is to be integrated through the model geometry to yield the total current that will add up with the currents calculated by other models stated in next two sections. 
Lucky-Electron Hot Carrier Injection Model The Lucky-Electron Model (LEM), proposed in 1984 by Tam, Ko, and Hu, focuses on channel hot-electron injection in MOSFETs [58].This model was later challenged by the Energy-Driven Model (EDM) introduced in 2005, which emphasized the role of available energy over peak lateral electric field in predicting hot carrier effects in MOS devices.Furthermore, recent research has concentrated on electron-electron scattering-induced channel hot-electron injection in nanoscale n-MOSFETs with high-κ/metal gate stacks, highlighting the significance of trapping mechanisms in high-κ dielectric devices.Additionally, investigations on partially depleted SOI NMOSFETs revealed the impact of hot-electron injection on the back-gate threshold voltage and interface trap density, influencing the device's direct-current characteristics and radiation hardness performance [59]. In the Lucky-Electron Hot Carrier Injection Model, it is proposed that an electron is emitted into the oxide by first gaining enough energy from the electric field in the channel to surmount the insulator/semiconductor barrier.Once the required energy to surmount the barrier has been obtained, the electrons are redirected towards the insulator/semiconductor interface by some form of phonon scattering.When these conditions are met, the carrier travelling towards the interface will then have an additional probability that it will not suffer any additional collision through which energy could be lost. The model implemented into ATLAS is a modified version of the model proposed by Tam [58] and is activated by the parameters of HEI and HHI, for electron and hole injection, respectively, on the MODELS statement.The gate electrode-insulator interface is subdivided into several discrete segments which are defined by the mesh.For each segment, the lucky electron model is used to calculate the injected current into that segment.The total gate current is then the sum of all the discrete values. 
If we consider a discrete point on the gate's electrode-insulator boundary, we can write a mathematical formula for the current injected from the semiconductor. The formula calculates the injected gate current contribution from every node point within the semiconductor according to the injection current formula, stated as a two-dimensional integral of the probabilities of hot-electron and hot-hole injection convolved with the electron and hole current densities:

I_inj = ∬ P_n(x, y) J_n(x, y) dx dy + ∬ P_p(x, y) J_p(x, y) dx dy, (12)

where P_n and P_p are the injection probabilities and J_n and J_p are the electron and hole current densities.

Direct Quantum Tunneling Model

For deep submicron devices, the thickness of the insulating layers can be very small. For example, gate oxide thicknesses in MOS devices can be as low as several nanometers. In this case, the main assumptions of the Fowler-Nordheim approximation [60] are generally invalid, and we need a more accurate expression for the tunneling current. The model ATLAS uses is based on a formula which was introduced by Price and Radcliffe [61] and developed by later authors. It formulates the Schrödinger equation in the effective mass approximation and solves it to calculate the transmission probability, T(E), of an electron or hole through the potential barrier formed by the oxide layer. The incident (perpendicular) energy of the charge carrier, E, is a parameter. It is assumed that the tunneling process is elastic. After considering carrier statistics and integrating over the lateral energy, the formula is obtained which gives the current density J (A/m^2) through the barrier. The effective masses my and mz are the effective masses in the lateral direction in the semiconductor. For example, for a direct bandgap material, where the Γ valley is isotropic, both my and mz are the same as the density-of-states effective mass. The logarithmic term includes the carrier statistics, and EFl and EFr are the quasi-Fermi levels on either side of the barrier. The range of integration is determined according to the band edge shape at any given contact bias [17].

Employing the Computational Models in ATLAS

We model our gκ-FinFET using the SILVACO ATLAS Deckbuild software tool. This family of tools has been used in a vast amount of research to design and simulate MOSFET devices. ATLAS takes a text-based input file that is run to simulate the TFT devices. After building the mesh and device geometry definitions, the basic procedure for selecting the mathematical models is adding the two-line statement starting with the keywords "MODELS" and "INTERFACE" to the ATLAS file, given in statement (14). By adding these within the ATLAS file, researchers can employ the direct quantum tunneling model (QTUNN.EL, QTUNN.HO) for both electrons and holes, the hot-electron/hot-hole injection (HEI, HHI) model, the non-equilibrium Green's function (NEGF_MS) model, and the Schrödinger model [57] (SCHRODINGER), together with interface trap effect considerations, simultaneously, in order to model the complete current densities required for the drain and gate leakage of any transistor with a defined geometry, which is also defined in the ATLAS input (*.in) file. SP.FAST activates a fast product-space approach in the 2D Schrödinger solver. SP.GEOM = 2DYZ sets the dimensionality and direction of the Schrödinger solver; the value 2DYZ is the default for the mesh structure in ATLAS 3D.
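To give a rough feel for the thickness dependence that the direct tunneling model above captures, the toy estimate below uses a WKB transmission probability through a rectangular barrier; this is emphatically not the Price-Radcliffe/Schrödinger formulation that ATLAS solves, and the barrier height and tunneling effective mass are assumed round numbers, but it shows why each added nanometer of physical oxide suppresses the tunneling leakage by orders of magnitude.

import math

HBAR = 1.054571817e-34     # J*s
M0   = 9.1093837015e-31    # kg
Q_E  = 1.602176634e-19     # C

def wkb_transmission(t_ox_nm, barrier_eV=3.1, m_eff=0.4 * M0):
    # T ~ exp(-2 * t_ox * sqrt(2 m* E_b) / hbar) for a rectangular barrier (toy estimate)
    kappa = math.sqrt(2.0 * m_eff * barrier_eV * Q_E) / HBAR
    return math.exp(-2.0 * kappa * t_ox_nm * 1e-9)

for t_nm in (1.0, 2.0, 3.0):
    print(t_nm, "nm ->", wkb_transmission(t_nm))
# Each added nanometer suppresses T by several orders of magnitude, which is why a
# physically thicker stack at the same gate capacitance (i.e., a higher kappa) lowers IG.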
Choice of Performance Metrics

Our performance metrics were selected as in the FinFET benchmarking paper by Nagy [31], with DIBL added as the most researched short-channel effect, as follows:
i. IG, the on-state gate leakage current, in Amperes, which leaks from the gate metal through the dielectric into the channel when VGS = 1 V. We favor minimizing it.
ii. ION, the on-state drain current, in Amperes, when VDS = VDD (= 1.25 V in our case) and VGS = VDD. We favor maximizing it.
iii. IOFF, the off-state drain current, in Amperes, when VDS = VDD and VG = −1.5 V. We favor minimizing it.
iv. The ION/IOFF ratio, unitless, an accepted and powerful measure of TFT design quality. We favor maximizing it.
v. VTH, the threshold voltage, in Volts, the minimum VGS at which the drain current ID slightly exceeds a limit current (1 × 10^−7 A in our case) significant for the design. We favor minimizing it.
vi. SS, the subthreshold slope, in mV/decade, the change in gate voltage required to change the drain current ID by one decade, SS = ΔVGS/Δlog(ID). We favor minimizing it.
vii. DIBL, drain-induced barrier lowering, in mV/V, which represents the influence of the drain voltage VDS on the threshold voltage VTH, defined as DIBL = |ΔVTH|/|ΔVDS|. We favor minimizing it.
These are the primary FoMs for the evaluation of thin-film transistor performance, as also restated by Nowbahari [62] in his comprehensive review on junctionless transistors.

Results

We herein exhibit the performance of the simulations carried out in ATLAS with the model given in Figure 1, for the gκ-FinFET with the gate oxide combinations tabulated in Table 4, in Figures 5-13 and Tables 5-12.

Drain Current Performance

First, our drain current modeling is verified against the results given in papers with FinFET fabrication examples [12,13,31,63]. Figure 5 shows the drain current ID for all of the single, two-stage, and three-stage graded gate oxides of the gκ-FinFET device we examined, depicting the single and compounded performances of the SiO2, Si3N4, Al2O3, HfO2, La2O3, and TiO2 gate dielectrics. S1AL (SiO2: Al2O3: La2O3) has the highest ION, with 20.8 µA at VG = 1.25 V. AT (the Al2O3: TiO2 combination) has the lowest IOFF current of 6.45 × 10^−15 A. The IOFF current changed significantly with the changing dielectric combination; it varied between 4.73 × 10^−11 A and 6.45 × 10^−15 A, more than four orders of magnitude, just because of modifying the gate oxide layer.

If a single layer were used, this range would lie between 2.14 × 10^−12 A (for SiO2) and 8.18 × 10^−14 A (for HfO2). The ION current does not vary a great deal with changing gate oxides. However, the gκ-FinFET S2T (Si3N4: TiO2 gate oxide) has the highest ION current of 2.08 × 10^−5 A, better than any of the single gate oxides, including FinFET H. For ION/IOFF, S2T also performed the best at 2.4 × 10^9, one order higher than that of FinFET H. As depicted in Figure 9, the best IOFF performance gate oxides are AT, S2T, AHT, S2LT, and ALT, and from Figure 10, the best ION performance gate oxides are S1AL, S1S2A, S1L, S1S2H, S1AH, and S1H. We can observe that no single-material gate oxide has performed better than the two-stage or three-stage gate oxides in the drain current performances.
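For reproducibility, the sketch below shows how the figures of merit defined above could be extracted from exported ID-VGS sweeps; the array arguments are placeholders for ATLAS log-file data, and the extraction simply follows the stated definitions (constant-current VTH at 1 × 10^−7 A, SS from the subthreshold region, DIBL from two VDS sweeps, and ION, IOFF, ION/IOFF at the stated bias points).

import numpy as np

def vth_constant_current(vgs, i_d, i_crit=1e-7):
    # V_GS at which I_D crosses the criterion current (linear interpolation in log10 I_D)
    logi = np.log10(i_d)
    idx = int(np.argmax(logi >= np.log10(i_crit)))
    x0, x1, y0, y1 = vgs[idx - 1], vgs[idx], logi[idx - 1], logi[idx]
    return x0 + (np.log10(i_crit) - y0) * (x1 - x0) / (y1 - y0)

def subthreshold_slope(vgs, i_d, vth):
    # SS = dV_GS / dlog10(I_D), fitted over the region below V_TH, in mV/decade
    mask = vgs < vth
    return np.polyfit(np.log10(i_d[mask]), vgs[mask], 1)[0] * 1e3

def extract_foms(vgs, id_lin, id_sat, vds_lin, vds_sat, vdd=1.25, vg_off=-1.5):
    # vgs: gate sweep; id_lin / id_sat: drain currents at the low / high V_DS of the two sweeps
    vth_lin = vth_constant_current(vgs, id_lin)
    vth_sat = vth_constant_current(vgs, id_sat)
    i_on  = float(np.interp(vdd,    vgs, id_sat))
    i_off = float(np.interp(vg_off, vgs, id_sat))
    return {"VTH [V]": vth_sat,
            "SS [mV/dec]": subthreshold_slope(vgs, id_sat, vth_sat),
            "DIBL [mV/V]": abs(vth_sat - vth_lin) / abs(vds_sat - vds_lin) * 1e3,
            "ION [A]": i_on, "IOFF [A]": i_off, "ION/IOFF": i_on / i_off}

# usage: extract_foms(vgs, id_at_low_vds, id_at_high_vds, vds_lin=0.05, vds_sat=1.25)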
Leakage Current Performance

First, we observed that our gate leakage current model is verified, as Rudenko [64], Garduno [32], Khan [65], and Golosov [66] report similar trends for IG: starting from a negative VG, IG first decreases significantly, by around 6-14 orders of magnitude depending on the gate oxide, reaches a minimum at some VG value, and then increases steeply again.

Figure 6 shows the IG leakage current characteristics [57] for the traditional single-material gate dielectrics together with the two-stage and three-stage κ-graded dielectrics, with the lowest gate leakage current of 2.04 × 10^−11 A (20.4 pA) at VG = 1.0 V for our specific FinFET under study. The leakage current curves generally show a similar trend and all tend to have local minima at VG = 1 V, with the exception of that of TiO2, which has a local minimum around VG = 0.75 V and a leakage current of 4.0 × 10^−12 A (4 pA). Despite this low leakage current, TiO2 does not behave well, especially regarding its DIBL, ION, IOFF, and ION/IOFF performance; thus, the sole usage of TiO2 as a gate dielectric cannot be advised.

DIBL, SS, ION, IOFF, ION/IOFF, and VTH Performance

The DIBL plot suggests that as the effective dielectric constant increases, the DIBL effect decreases steeply and significantly from κEFF ≈ 3.35 until κEFF ≈ 35, and then increases back until κEFF ≈ 77, point T (which designates the FinFET with TiO2 as the gate oxide). The DIBL performance of S2T, with 41.9 mV/V, is 37.4% lower than that of H; S2T, S2LT, AHT, AT, and S2HT are the five best-performing gκ-FinFETs in DIBL performance. See Table 5 for concise results.
Figure 9 plots the I OFF of the gκ-FinFETs against the effective dielectric constants of their gate oxides. One of the primary advantages of a lower I OFF is the decrease in power consumption, especially important in battery-powered devices like smartphones and laptops. When transistors leak less current in their off state, the overall power efficiency of the device improves, leading to a longer battery life and less heat generation. Also, with lower I OFF values, it is possible to pack more transistors into a given area without significant overheating or power drain issues. This is critical for the ongoing trend of miniaturization in semiconductor technology.

The I OFF plot suggests that as the effective dielectric constant increases, I OFF decreases steeply and significantly from κ EFF ≈ 3.35 until κ EFF ≈ 26 (that of AT), and then increases again until κ EFF ≈ 77. AT, S2T, AHT, S2LT, ALT, and S2HT are the best-performing gκ-FinFETs in I OFF performance. The I OFF performance of AT, 6.45 × 10 −15 A, is 92% lower than that of H. See Table 7 for concise results.
Figure 10 plots the I ON of the gκ-FinFETs against the effective dielectric constants of their gate oxides. A higher I ON implies that the transistor can deliver more current rapidly, which generally translates to faster switching speeds. With a higher I ON , a transistor can drive larger currents through a circuit, which is essential for applications that demand strong drive capability. The I ON plot suggests that as the effective dielectric constant increases, I ON decreases steeply and significantly from κ EFF ≈ 3.35 until κ EFF ≈ 26 (that of AT), and then increases again until κ EFF ≈ 77, point T. S1AL, S1S2L, S1L, S1S2H, S1AH, and S1H are the best-performing gκ-FinFETs in I ON performance. The I ON performance of S1AL is 2.08 × 10 −5 A, which is 35% higher than that of H. See Table 8 for concise results.

Figure 11 plots the I G of the gκ-FinFETs against the effective dielectric constants of their gate oxides. The I G plot suggests that as the effective dielectric constant increases, I G decreases steeply and significantly from κ EFF ≈ 3.35 until κ EFF ≈ 22 (that of S1T), and then increases again until κ EFF ≈ 77. A lower I G means the device has better performance and less heating. A lower leakage current is preferable, especially for memory devices such as EEPROMs, where a high I G can contribute to charge loss and memory degradation over time [67-69]. With this fact in mind, AHL, S1, S2LT, AHT, S2HT, and S1S2H appear to be the best performers with respect to I G . Apart from S1, all of these are FinFETs with three-stage gate oxides, meaning that κ-grading works properly in all cases.
We observe that no single-material gate oxide has performed better than the two-stage or three-stage gate oxides in leakage current performance. We find that the use of κ-graded stacked gate oxide dielectrics has the potential to generate lower gate-to-channel leakage currents, as the stacked gate oxide AHL achieved a 76% lower I G than the FinFET with a single HfO 2 dielectric. The performance of κ-graded gate oxides in terms of I G appears to be better than that of single-material dielectrics, suggesting that κ-grading in gate oxides may provide a significant advantage in reducing I G . Also, they do not tend to exhibit any deficiency in device reliability, within the scope of this study. See Table 9 for concise results.

Figure 12 plots the I ON /I OFF of the gκ-FinFETs against the effective dielectric constants of their gate oxides. A higher I ON /I OFF is desirable in almost any transistor application, as it indicates a distinct and clear differentiation between the "on" and "off" states of the transistor. With a higher ratio, the transistor leaks significantly less current in the "off" state compared to the current it conducts in the "on" state. As transistors are miniaturized further, maintaining a high I ON /I OFF ratio becomes increasingly important to ensure that the devices operate reliably without interference from leakage currents. It enables the continued scaling down of semiconductor devices following Moore's Law, without performance degradation.

Our I ON /I OFF plot suggests that as the effective dielectric constant increases, the I ON /I OFF ratio increases steeply and significantly from κ EFF ≈ 3.35 until κ EFF ≈ 24.26 (point S2T), and then decreases again until κ EFF ≈ 77. S2T, AHT, S2LT, AT, ALT, and S2HT are the best-performing gκ-FinFETs in I ON /I OFF performance. The I ON /I OFF performance of S2T is 2.4 × 10 9 , which is 11.73 times higher than that of FinFET H. We observe that no single-material gate oxide has performed better than the two-stage or three-stage gate oxides in I ON /I OFF performance. See Table 10 for concise results.
Figure 13 plots the V TH of the gκ-FinFETs against the effective dielectric constants of their gate oxides. Devices with a lower V TH can operate effectively at lower voltages. This is particularly advantageous in low-power applications such as mobile devices and wearable technology, where preserving battery life is crucial. A lower threshold voltage generally allows transistors to switch on and off more quickly. This can improve the overall speed of a processor, and faster switching is beneficial for high-performance computing and digital circuits where rapid state changes are necessary.

The V TH plot suggests that as the effective dielectric constant increases, V TH increases steeply and significantly from κ EFF ≈ 3.35 until κ EFF ≈ 26 (that of AT), and then decreases until κ EFF ≈ 77. S2A, A, S2, S1AH, S1S2H, S1S2L, and S1AL are the best-performing gκ-FinFETs in V TH performance. The V TH performance of S2A, 0.4731 V, is 3.76% lower than that of A, 10.5% lower than that of S2, and 42% lower than that of H. This shows how a graded oxide is better than any single dielectric, including S2 and A individually, as shown in Table 6. See Table 11 for concise results.

Discussion

As seen in Figure 9, the minimum I OFF occurs in the gκ-FinFETs AT, S2T, AHT, S2LT, ALT, and S2HT. We observe that they have TiO 2 in common. We may safely conclude that TiO 2 matched the metal side best, better than the others, and that Al 2 O 3 and Si 3 N 4 matched the Si channel side (not as well, but better than SiO 2 , HfO 2 , and La 2 O 3 ) when the FinFET was in depletion mode.
As seen in Figure 10, the maximum I ON occurs in the gκ-FinFETs S1AL, S1S2L, S1L, S1S2H, S1AH, and S1H, and they all have SiO 2 in common. We may also conclude that SiO 2 matched the Si channel side best, better than the others, and that La 2 O 3 and HfO 2 matched the metal side (not as well, but better than Si 3 N 4 , Al 2 O 3 , and TiO 2 ) when the gκ-FinFET was in inversion mode.

All of these observations, and the optimal values of all FoMs (Table 4), occur between κ EFF values of 4.95 and 24.87. Observing Figures 5-13, according to our findings, for the n+ Si family of gκ-FinFETs, seeking dielectrics with κ EFF higher than 25 might not be efficient, as the favorable FoM values all appear in the mentioned range of κ EFF .

Therefore, the choice of gate oxide should logically depend on the mode of operation or on the FoM we favor. In order to achieve this with a highly effective gate oxide layer, dielectric permittivity matching should be considered at both the neighboring Si channel side and the neighboring gate metal side simultaneously. This is the reason why we employed κ-graded stacked gate oxides: their lowest-permittivity side matches the Si channel side and their highest-permittivity side matches the metal side, yielding fewer interface problems and widening the limits for a better gate oxide and transistor performance, while we restate the facts presented in the works of Giustino, Peng, and others. Below, we add our insights, which may lead to brief design rules in the future.

Analysis and Insights

Scanning through the 41 simulation results, we freely present our insights as follows (a toy ranking sketch illustrating the matching rules is given after the Fabrication Considerations below):
• No obvious linear or quadratic relationship exists between the composite gate oxide κ EFF and any of the FoMs examined; thus, a curve fit was not possible.
• According to Table 7, the best I OFF performances have a TiO 2 laminate in common as the last stage of the κ-graded structure. To minimize I OFF , the dielectric permittivity of the gate metal and of the neighboring gate oxide laminate should be kept as close as possible.
• According to Table 8, the best I ON performances have a SiO 2 laminate in common as the first stage of the κ-graded structure. To maximize I ON , the dielectric permittivity of the channel material and of the neighboring gate oxide laminate should be kept as close as possible.
• According to Table 11, the lowest values of V TH appeared at the lowest values of κ EFF .
• According to Table 12, the best DIBL performance appeared in the S2T (Si 3 N 4 : TiO 2 ) gate oxide combination. To minimize DIBL and maximize I ON /I OFF , both the permittivity difference between the channel material and the neighboring gate oxide laminate and the permittivity difference between the gate material and the neighboring gate oxide laminate should be kept small. In this case, the S2T gate oxide dielectric showed the best permittivity-matching behavior between the neighboring Si and the neighboring CrAu alloy.
• According to Table 12, at least one two-stage or three-stage κ-graded dielectric combination exists which behaves much better than all of the single-stage counterparts with respect to all our FoMs.

Fabrication Considerations

The deposition of the graded dielectric stack shown in Figure 1 should be carried out using the Atomic Layer Deposition (ALD) method, so that the thin films of the dielectric stack are obtained in an ALD reactor. ALD, a very slow process, provides the deposition of thin-film oxides with thicknesses on the order of a few angstroms that are excellently uniform, accurate, and pin-hole free [70,71]. Finally, the metal layer should be deposited onto the gate oxide layer by magnetron sputtering or thermal evaporation [72].
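The following sketch is only a toy reading of the two matching rules listed above: it enumerates two- and three-stage stacks of the six dielectrics, keeps the κ-graded ones (permittivity increasing from the channel toward the metal), and ranks them by channel-side closeness to silicon plus a bonus for a high-permittivity metal-side laminate. The nominal κ values, the weighting, and the scoring function are assumptions for illustration only; they are not how Table 12 was obtained.

```python
from itertools import permutations

# Nominal permittivities (assumed for illustration; the paper's own values may differ).
KAPPA = {"SiO2": 3.9, "Si3N4": 7.0, "Al2O3": 9.0, "HfO2": 25.0, "La2O3": 27.0, "TiO2": 80.0}
KAPPA_SI = 11.7   # silicon channel

def is_graded(stack):
    """kappa-graded: permittivity strictly increases from the channel side to the metal side."""
    ks = [KAPPA[m] for m in stack]
    return all(a < b for a, b in zip(ks, ks[1:]))

def score(stack):
    """Smaller is better: channel-side mismatch against Si minus a bonus for a high-kappa metal-side laminate."""
    channel_mismatch = abs(KAPPA[stack[0]] - KAPPA_SI)
    metal_side_bonus = KAPPA[stack[-1]]
    return channel_mismatch - 0.1 * metal_side_bonus   # assumed weighting

candidates = [s for n in (2, 3) for s in permutations(KAPPA, n) if is_graded(s)]
for stack in sorted(candidates, key=score)[:5]:
    print("/".join(stack), round(score(stack), 2))
```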
Conclusions

We showed by simulations that it is possible for κ-graded stacked gate oxides to increase I ON and to reduce the I OFF and I G currents, DIBL, SS, and V TH . A numerical analysis was conducted to show the viability of κ-graded dielectric structures against conventional single-layer high-κ dielectrics on a 14 nm FinFET geometry. The impact on the key electrical performance parameters was analyzed using SILVACO ATLAS as the device simulation tool. Among 41 different two- and three-stage κ-graded stacked gate oxide combinations, some FinFET structures with κ-graded gate oxides (gκ-FinFETs) promise a gate leakage current I G lower by up to 76%, drain-induced barrier lowering (DIBL) lower by up to 37.4%, a subthreshold slope (SS) lower by up to 10.5%, a drain off-current I OFF lower by up to 92%, a drain on-current I ON higher by up to 35%, an I ON /I OFF ratio higher by up to 11.7 times, and a threshold voltage V TH lower by up to 42%, with respect to the FinFET of the same dimensions with a single-layer HfO 2 gate dielectric. It became apparent that adverse interface effects will be minimized when smoother dielectric permittivity transitions are achieved by nanofabrication from the FinFET's channel up to its gate metal.

o κ-grading and calculation of the effective κ of the gate oxide.
o Mathematical modeling in ATLAS Software v5.34.0.R.

Figure 3. Two-phase dielectric system connected in series in parallel sheets.

Figure 5. (a) Drain current I D for gκ-FinFETs with single, two-stage, and three-stage κ-graded gate oxides, (b) I OFF zoomed for V G between −1.6 and −1.4 V, (c) I OFF further zoomed for V G between −1.6 and −1.4 V, best six gκ-FinFETs, (d) I ON zoomed for V G between 1.2 and 1.3 V. See Tables 7, 8 and 12 for summarized results of this figure.

Figure 6. (a) Leakage current I G for gκ-FinFETs with single, two-stage, and three-stage κ-graded gate oxides, (b) I G zoomed for V G between 0.90 and 1.1 V. See Tables 9 and 12 for summarized results of this figure.
Table 3. gκ-FinFET reference designators for single and compound gate oxides of the 41 simulations.

Table 4. Effective dielectric constants κ EFF of the stacked nano-laminated gate oxides.

Table 7. Five best-performing gκ-FinFETs with the lowest I OFF versus the nearest-performing single-dielectric FinFET H.

Table 9. Five best-performing gκ-FinFETs with the lowest I G versus the nearest-performing single-dielectric FinFET H.

Table 10. Five best-performing gκ-FinFETs with the highest I ON /I OFF versus the nearest-performing single-dielectric FinFET H.

Table 11. Five best-performing gκ-FinFETs with the lowest V TH versus the nearest-performing single-dielectric FinFET A.

Table 12. FoM champions of gκ-FinFETs with two- and three-stage graded gate oxides compared with the FinFET with a single-layer HfO 2 of t ox = 3 nm. Boldface indicates the best value among all 41 gκ-FinFET configurations.
12,702.2
2024-05-30T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Global Classical Solutions to the Mixed Initial-boundary Value Problem for a Class of Quasilinear Hyperbolic Systems of Balance Laws

Introduction and Main Result

Consider the following quasilinear hyperbolic system of balance laws in one space dimension (1.1), where L > 0 is a constant, u = (u_1, …, u_n)^T is the unknown vector function of (t, x), and f(u) is a given C^3 vector function of u. Let λ_i(u), l_i(u), and r_i(u) denote the eigenvalues and the left and right eigenvectors of A(u), normalized so that

l_i(u) r_j(u) ≡ δ_ij, r_i(u)^T r_i(u) ≡ 1 (i, j = 1, …, n), (1.6)

where δ_ij stands for the Kronecker symbol. Clearly, all λ_i(u), l_ij(u), and r_ij(u) (i, j = 1, …, n) (1.7) have the same regularity as A(u), i.e., C^2 regularity. We assume that, on the domain under consideration, each characteristic with positive velocity is weakly linearly degenerate and the eigenvalues of

A(u) = ∇f(u) (1.8)

satisfy the non-characteristic condition

λ_r(u) < 0 < λ_s(u) (r = 1, …, m; s = m + 1, …, n). (1.9)-(1.10)

We are concerned with the existence and uniqueness of global C^1 solutions to the mixed initial-boundary value problem for system (1.1) in the half space

D = {(t, x) | t ≥ 0, x ≥ 0} (1.11)

with the initial condition

t = 0 : u = φ(x) (x ≥ 0) (1.12)

and the nonlinear boundary condition (1.13), in which the given functions are all C^1 functions with respect to their arguments and satisfy the conditions of C^1 compatibility at the point (0, 0). Also, we assume that there exists a constant µ > 0 such that the corresponding condition holds.

For the special case where (1.1) is a quasilinear hyperbolic system of conservation laws, i.e., L = 0, such problems have been extensively studied (for instance, [1-8] and the references therein). In particular, Li and Wang proved the existence and uniqueness of global C^1 solutions to the mixed initial-boundary value problem for first-order quasilinear hyperbolic systems with general nonlinear boundary conditions in the half space {(t, x) | t ≥ 0, x ≥ 0}. On the other hand, for quasilinear hyperbolic systems of balance laws, many results on the existence of global solutions have also been obtained by Liu et al. (for instance, see [8-14] and the references therein), and some methods have been established. So the following question arises naturally: when can we obtain the existence and uniqueness of semiglobal C^1 solutions for quasilinear hyperbolic systems of balance laws? It is well known that for first-order quasilinear hyperbolic systems of balance laws, generically speaking, the classical solution exists only locally in time and a singularity will appear in finite time even if the data are sufficiently smooth and small [15-20]. However, in some cases global existence in time of classical solutions can be obtained. In this paper, we generalize the results in [21] to a nonhomogeneous quasilinear hyperbolic system; the analysis relies on a careful study of the interaction of the nonhomogeneous term.
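Several of the displayed formulas above were garbled in extraction; the LaTeX block below restates only the parts of the setup that can be recovered from the surrounding text. The explicit form of system (1.1) and of the boundary condition (1.13) is not restated, since it cannot be read from this text.

```latex
% Recoverable pieces of the problem setup (equation numbers follow the original).
\begin{align}
  & l_i(u)\, r_j(u) \equiv \delta_{ij}, \qquad r_i(u)^{T} r_i(u) \equiv 1
      \quad (i, j = 1, \dots, n), \tag{1.6}\\
  & A(u) = \nabla f(u), \tag{1.8}\\
  & \lambda_r(u) < 0 < \lambda_s(u)
      \quad (r = 1, \dots, m;\ s = m+1, \dots, n), \tag{1.9--1.10}\\
  & D = \{(t, x) \mid t \ge 0,\ x \ge 0\}, \tag{1.11}\\
  & t = 0:\ u = \varphi(x) \quad (x \ge 0). \tag{1.12}
\end{align}
```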
Our main results can be stated as follows.

Theorem 1.1. Suppose that the non-characteristic condition (1.10) holds and that system (1.1) is strictly hyperbolic. Suppose furthermore that, for j = m + 1, …, n, each j-characteristic field with positive velocity is weakly linearly degenerate. Suppose finally that

The rest of this paper is organized as follows. In Section 2, we present the main tools of the proof, namely several formulas on the decomposition of waves for system (1.1). The main result is then proved in Section 3. Finally, an application is given in Section 4 [22].

Decomposition of Waves

Suppose that, on the domain under consideration, system (1.1) is strictly hyperbolic and (1.2)-(1.6) hold. Suppose that A(u) ∈ C^k, where k is an integer ≥ 1. By Lemma 2.5 in [23], there exists an invertible C^{k+1} transformation ũ = ũ(u) (ũ(0) = 0) such that, in ũ-space, for each i = 1, …, n, the ith characteristic trajectory passing through ũ = 0 coincides with the ũ_i-axis, at least for |ũ_i| small; namely,

Let d_i/dt denote the directional derivative along the ith characteristic. Our aim in this section is to prove several formulas on the decomposition of waves for system (1.1), which will play an important role in our discussion. On the other hand, we have

where γ̃_ijk(u) is given by (2.8), and

Proof. Differentiating the first equation of (2.27) with respect to y gives

Then, noting (2.6), it follows from (2.31) that

Then, along the ith characteristic and for i = m + 1, …, n, let

where η > 0 is suitably small (Figure 1). Noting that η > 0 is small, by (3.3) it is easy to see that

where c and C are positive constants independent of T.

Proof. Differentiating the first equation of (2.27) with respect to y gives

Then, noting (2.19), it follows from (2.44) that

Proof of Theorem 1.1

By the existence and uniqueness of a local C^1 solution for quasilinear hyperbolic systems [22], there exists T_0 > 0 such that the mixed initial-boundary value problem (1.1) and (1.12)-(1.13) admits a unique C^1 solution u = u(t, x) on the domain

Thus, in order to prove Theorem 1.1, it suffices to establish a uniform a priori estimate for the C^0 norm of u and u_x on any given domain of existence of the C^1 solution u = u(t, x). Thus, there exist sufficiently small positive constants δ and δ_0 such that

For the time being, it is supposed that, on the domain of existence of the C^1 solution u = u(t, x) to the mixed initial-boundary value problem (1.1) and (1.12)-(1.13), we have

At the end of the proof of Lemma 3.3, we will explain that this hypothesis is reasonable. Thus, in order to prove Theorem 1.1, we only need to establish a uniform a priori estimate for the piecewise C^0 norm of v and w, defined by (1.14) and (2.1), on the domain of existence of the C^1 solution, where C̃_j denotes any given jth characteristic. In the present situation, similar to the corresponding results in [24,30-33], for the C^1 solution to the mixed initial-boundary value problem (1.1) and (1.12)-(1.13) we have the following uniform a priori estimates:

where, here and henceforth, k_i (i = 1, 2, …) are positive constants independent of θ and T.
By integrating (2.6) along ξ = ξ_j(s; t, x) and noting (2.9) and (2.11), we have

By using Lemma 3.2 and noting (3.54) and (3.57), it is easy to see that

By Hadamard's formula, we have

Thus, noting the fact that L > 0, and using (3.13) and (3.54), we obtain from (3.59) that

Similarly to Lemma 3.2 in [21], differentiating the nonlinear boundary condition (1.13) with respect to t, we get

By (1.1), (1.3), and (2.4), it is easy to see that

Therefore, it follows from (3.63)-(3.65) that

where B_1(u) is a matrix whose elements are all C^1 functions of u, which satisfy

in which F_s (s = m + 1, …, n) are continuous functions of t and u.

x = 0 : w_s

Proof. Noting (3.4), it is easy to see that

On the other hand, by (3.8), we have

Noting the fact that

By employing the same arguments as in (i), we can obtain

which gives a one-to-one correspondence t = t(y). Thus, the integral on C̃_j with respect to t can be reduced to an integral with respect to y. Differentiating (3.80) with respect to t gives

It suffices to estimate |q(t, x(t, y))| and |q̃(t, x(t, y))|. Similarly, we have

Hence, noting the fact that L > 0, we obtain from (3.123) that

Combining (3.122) and (3.128), we get

We next estimate Ṽ_1(T) and V_1(T). For i = m + 1, …, n, and for any given jth characteristic, as in the proof of (3.90), in order to estimate Ṽ_1(T) it suffices to estimate

Thus, by continuity there exist positive constants k_2, k_3, k_4, k_5, k_6, k_7, and k_8, independent of µ, such that (3.47)-(3.53) hold at least for

Finally, we observe that when µ_0 > 0 is suitably small, by (3.52) we have

In addition, we assume that

By (4.4), it is easy to see that, in a neighborhood of (u, v)^T = (0, 0)^T, system (4.1) is strictly hyperbolic and has the following two distinct real eigenvalues:

The corresponding right eigenvectors r_1(u) and r_2(u) are given by (4.9). It is easy to see that, in a neighborhood of (u, v)^T = (0, 0)^T, all characteristics are linearly degenerate, and hence weakly linearly degenerate, provided that

σ″(w) ≡ 0 for all |w| small. (4.10)

The corresponding left eigenvectors can be taken as

Suppose finally that h(t) ∈ C^1 satisfies (4.14) and that the conditions of C^1 compatibility are satisfied at the point (0, 0). Then there is a sufficiently small θ_0 > 0 such that for any given
2,417.8
2014-12-10T00:00:00.000
[ "Mathematics" ]
Maintenance of cooperation in a public goods game: A new decision-making criterion with incomplete information

Hardin's "The Tragedy of the Commons" prophesies the inescapable collapse of many human enterprises. The emergence and abundance of cooperation in animal and human societies is a challenging puzzle to evolutionary theory. In this work, we introduce a new decision-making criterion into a voluntary public goods game with incomplete information and choose successful strategies according to previous payoffs for a certain strategy as well as the risk-averse benefit. We find that the interest rate of the common pool and the magnitude of memory have crucial effects on the average welfare of the population. The degree of individuals' innovation also substantially influences the equilibrium strategy distribution in the long run.

Cooperative behaviors are a trademark of both insect and human societies [1]. These behaviors are explained by kinship in the former and by as yet unknown mechanisms in the latter. Hardin's "The Tragedy of the Commons" indicated that the overuse of public resources leads to an inevitable collapse of cooperation among non-kin populations; for example, in the drive to reduce pollution and sustain the global climate [2]. Moreover, the prisoner's dilemma (PD) has long been viewed as a typical example showing the disintegration of cooperation through pairwise interactions. However, conspicuous examples of cooperation (although almost never of ultimate self-sacrifice) also occur where relatedness is low or absent [3]. For the purpose of qualitatively studying interactions between humans, the public goods game (PGG) has been used by economists, social scientists, and evolutionary biologists as a paradigm to explain the maintenance of cooperation in a group of unrelated individuals [4-7]. In a typical PGG, an experimenter endows four players with 20 money units (MUs) each. The four players then have the opportunity to invest part or all of their money into a common pool. The experimenter doubles the total capital in the common pool and divides it equally among all players regardless of their investment. If everyone cooperates and invests all the money, each player ends up with 40 MUs. However, every player faces the temptation to defect and to free-ride on the other players' contributions by withholding the money, because each MU invested returns only half to the investor. If all the players were perfectly rational, they would invest nothing. Such behavior, attributed to Homo economicus, differs considerably from experimental evidence [4]. Note that for pairwise players with a fixed investment amount, the PGG reduces to the PD. Recently, various mechanisms have been proposed to explain the persistence of cooperation in the absence of genetic relatedness. For example, repeated interactions and direct reciprocity [3,8], indirect reciprocity [8-10], punishment [11,12], reward [13], and structured populations [1,14] have been suggested. Voluntary participation may provide a way to escape from the economic stalemate and maintain substantial levels of cooperation in a large population without secondary mechanisms [5,15-19].
This volunteering PGG considers three types of players: (1) cooperators C and (2) defectors D, who are both willing to join the PGG; the former contribute a fixed amount to the common pool, while the latter attempt to free-ride on the others' investment and contribute nothing; and (3) loners L, who are unwilling to participate in the PGG and obtain a fixed small payoff. In other words, loners are risk-averse investors. Traditionally, most researchers concerned with cooperation in social dilemmas focus on imitation or learning rules, with the central argument that an individual can compare their payoff with that of others, randomly choose a role model, and take over the strategy of the role model with a certain probability [7,20-30]. Game theory has been an interesting subject in recent years [31-34]. We position the decision-making procedure among games of incomplete information: in fictitious play, player i's weight function is updated by adding 1 to the weight of a strategy each time it is played [23]. Moreover, in the experience-weighted attraction model, the strength of hypothetical reinforcement of strategies is also considered, so players make decisions partly according to payoffs that strategies would have yielded. However, without any information about their von Neumann neighborhoods, how does a person make decisions if they only know their own payoffs in each round of the PGG? In other words, what is the decision-making criterion for a person with incomplete information, who only obtains their own information? In this work, we introduce a new decision-making criterion to show how cooperation is maintained. Numerical simulation of this evolutionary mechanism illustrates that historical experience is a large factor when a person chooses an appropriate strategy. Moreover, compared with random selection, a proper length of retention is more conducive to the welfare of the whole population. The simulation also indicates that cooperation stays at a substantial level, which means that the introduced criterion suppresses free-riding behavior, irrespective of the initial conditions.

Model

Here, we consider the voluntary PGG on a spatial lattice with periodic boundary conditions. The players are arranged on a rigid regular two-dimensional square lattice with 10000 members and interact with their von Neumann neighborhoods only. The von Neumann neighborhood consists of the nearest players to each lattice point. Confined to a site x on the lattice with incomplete information, player x chooses a strategy according to their own historical records. The profit of player x then depends on their strategy as well as the choices of their neighbors. Considering a single PGG involving the player and their four nearest neighbors, the payoffs are determined by the strategies of the five players. Namely, if n c , n d , n l (with n c + n d + n l = N = 5) denote the numbers of C, D, and L players, then the net payoffs P c , P d , and P l of cooperators, defectors, and loners, respectively, are determined by these numbers. The cooperative investments are set to unity and r denotes the interest rate of the common pool. In particular, if only one player joins the PGG (i.e., n c + n d = 1), the solitary player is counted as a loner. For a voluntary PGG deserving its name, we must have 1 < 1 + K < r < N. In the rigorous sense of the spatial PGG, the payoffs to each player are accumulated by summing the player's performance in the PGGs taking place on the player's site and neighboring sites.
For convenience, we assume that the payoff of a player is the average of the payoffs obtained over all the games they take part in. Note that this simplification does not change the system's dynamics. Confined to a site x on the square lattice, player x uses a mixed strategy s x = (p l , p c , p d ), i.e., a probability distribution over the pure strategies. Obviously, there is the normalization condition p l + p c + p d = 1, where p i represents the probability that player x chooses strategy i. That is to say, at each decision stage, every player picks one of their feasible actions with the corresponding probability. From time to time, player x reassesses and updates their mixed strategy according to the payoffs obtained in the previous rounds, i.e., from their historical experience. They increase the probability of the last round's strategy if the payoff satisfied them, and vice versa. But when do they feel satisfied? In our model, all the players realize that they can obtain a small but fixed income in the long run when they refuse to join the PGG. They also remember their decisions and their deficits and profits for a certain strategy in each round, from the beginning until now. It is therefore safe to say that each player remembers their average payoff P i for a certain choice i. In our model, we suppose that each player updating their mixed strategy compares their last round's payoff with their decision criterion: the better of K and P i . Assuming that player x chose strategy i in the last round, the evolution of the mixed strategy is then given by the update rule, in which the parameter ω denotes the magnitude of memory, and i = c, d and j = d, c (i.e., the opposite strategy), respectively. Hence, the players in our model can choose the strategy which has been successful during the game's history. In addition, as former researchers have done [24], we postulate that the players have the possibility to explore the available strategies, and define µ as the probability that players choose a random strategy.
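To make the decision criterion and the memory parameter ω concrete, here is a minimal sketch of the update loop for a single player. Because the explicit payoff formula and the exact update equation are not reproduced in the text, the group_payoff function, the functional form of the probability update, and all parameter values below are assumptions chosen only for illustration (the payoff follows a standard volunteering-PGG form).

```python
import random

STRATEGIES = ("c", "d", "l")                    # cooperate, defect, lone
K, R, OMEGA, MU = 1.0, 3.0, 10.0, 0.01          # illustrative values only

def group_payoff(my_strategy, neighbor_strategies, r=R, k=K):
    """Assumed volunteering-PGG payoff for a five-player group; the paper's
    exact formula is not reproduced in the text, so this is a placeholder."""
    group = [my_strategy] + list(neighbor_strategies)
    n_c, n_d = group.count("c"), group.count("d")
    if my_strategy == "l" or n_c + n_d <= 1:     # loners, or a solitary participant, get the fixed payoff
        return k
    share = r * n_c / (n_c + n_d)                # pool multiplied by r, shared among participants
    return share - 1.0 if my_strategy == "c" else share

class Player:
    def __init__(self):
        self.p = {s: 1.0 / 3.0 for s in STRATEGIES}    # mixed strategy
        self.history = {s: [] for s in STRATEGIES}     # remembered payoffs per pure strategy

    def choose(self):
        if random.random() < MU:                       # innovation: random strategy with probability mu
            return random.choice(STRATEGIES)
        return random.choices(STRATEGIES, weights=[self.p[s] for s in STRATEGIES])[0]

    def update(self, strategy, payoff):
        """Reinforce or weaken the last-round strategy against the criterion max(K, mean past payoff)."""
        self.history[strategy].append(payoff)
        past = self.history[strategy]
        criterion = max(K, sum(past) / len(past))
        # Assumed functional form: a larger omega reacts more strongly to the current round,
        # matching the qualitative description of omega as the magnitude of memory.
        step = OMEGA / (1.0 + OMEGA)
        factor = 1.0 + step if payoff >= criterion else 1.0 - step
        self.p[strategy] *= factor
        total = sum(self.p.values())
        self.p = {s: v / total for s, v in self.p.items()}   # renormalize to a probability distribution

# Tiny demonstration with hypothetical, randomly behaving neighbors.
player = Player()
for _ in range(50):
    s = player.choose()
    neighbors = [random.choice(STRATEGIES) for _ in range(4)]
    player.update(s, group_payoff(s, neighbors))
print({name: round(prob, 3) for name, prob in player.p.items()})
```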
Numerical simulation

Simulations were carried out for a population of 100 × 100 = 10000 individuals. In the quantitative analysis, numerical simulations were performed to investigate the frequencies of cooperators and defectors when this decision criterion is introduced into the PGG. Figure 1 shows that this mechanism can maintain the evolution of cooperation and avoid the dilemma. Obviously, when r → 2, most people view quitting the game as their best choice. When r → 5, cooperation means the greatest benefit for all players. Moreover, if r = 5, each player knows that defection is a reasonable choice only if they alone make this decision. Every player faces the maximum temptation to defect and to free-ride on the other players' contributions. Thus, from Figure 1, we can see that when r gets close to 5, the average frequency of defection is higher than 40%. Figure 2 shows that the payoff of defection approaches 2.2 when r = 5, which means that a sufficiently smart player at this value will definitely take part in the game, regardless of the probability of the random choice. As Figure 1 shows, most players participate in the game; the frequency of loners is close to zero. Additionally, when r is small, the decision-making criterion for a certain player is the fixed payoff K of the loner. The frequencies of C, D, and L are the same as in former research [17].

In Figure 2, the payoffs of cooperators C and defectors D increase as r moves from 2 towards 5, and this result conforms to the intuition of players in real life who know the interest rate of the common pool. Compared with previous results [17], payoffs increasing monotonically with r are more reasonable. In the remaining text, we discuss how the magnitude of memory ω and the mutation rate µ affect the welfare of all the players. When ω → ∞, the behavior of individuals is influenced only by the most recent payoff of a certain strategy, which means the players are myopic. The rule in this case is similar in spirit to Win-Stay-Lose-Shift rules. When ω → 0, the present round's payoffs have only a minor effect on the evolution of the mixed strategies, i.e., the players have longer memories. More precisely, the magnitude of memory represents an individual's sensitivity to their neighborhood. The lower the ω value, the more slowly an agent reacts to their surroundings, and vice versa. Thus, for a low ω value, even if individuals cannot make more money than loners, they will still keep their original action for a very long time. Consequently, the population may not obtain very efficient outcomes in the long run. Conversely, for a high ω value, the individuals are mainly affected by current payoffs, and the C clusters collapse immediately once they are invaded by defectors D. That is to say, the strategy is unable to correct mistakes and, like tit-for-tat [3], an accidental defection would lead to a breakdown in cooperation, which decreases the average outcomes of the population. We conclude that an appropriate magnitude of memory is helpful to promote the outcomes of the collective. In general, it may result in a more efficient outcome for the population when ω is approximately 10 (Figure 3). In fact, this result suggests that, in real-life situations, both marching in lockstep and focusing on short-term goals work against the development of the population. It is also in close accord with the conclusion of a previous study [17]. The value of µ characterizes the probability that an individual makes their decision randomly. For µ → 1, the individual is completely irrational and their decision is random. Apparently, the only equilibrium is then the equal abundance of all strategies. For smaller µ, the mutation rate causes minor modifications in the equilibrium of the system when r > 3, whereas it affects the equilibrium significantly when the value of r is small (Figure 4). The single extreme value in Figure 4 is much more credible than that in Figure 4(a) of [17], where there are two extreme values.

Conclusion

In this work, we introduce a new yet simple evolution method into spatial voluntary public goods games with incomplete information, resulting in interesting dynamic properties. In our model, players are assumed to know only their own payoffs in each round. They adjust their mixed strategies continually according to their decision-making criterion: the better of the risk-averse benefit K and the historical average payoff for a certain strategy. If they are satisfied with their payoffs, the probability of using the previous round's strategy in their mixed strategy increases, and vice versa. Thus, this rule is no longer myopic. By introducing this criterion, we found that both the interest rate of the common pool and the magnitude of individuals' memory have an important effect on the evolution of cooperation.
To maintain the best welfare of the whole population, people should choose an appropriate magnitude of memory. Meanwhile, our simulation results suggest that, in real-life situations, people make decisions from their own history of experience, which is in turn shaped by their neighborhoods; this implies that the heterogeneity of individuals is due to the different environments in which they live. A person may make a different choice as their environment changes. With this evolution rule, the proportion of cooperation stays at a substantial level, which reduces free-riding behavior in enterprise decision-making.
3,211.6
2012-01-28T00:00:00.000
[ "Economics" ]
Dicer structure and function: conserved and evolving features Abstract RNase III Dicer produces small RNAs guiding sequence‐specific regulations, with important biological roles in eukaryotes. Major Dicer‐dependent mechanisms are RNA interference (RNAi) and microRNA (miRNA) pathways, which employ distinct types of small RNAs. Small interfering RNAs (siRNAs) for RNAi are produced by Dicer from long double‐stranded RNA (dsRNA) as a pool of different small RNAs. In contrast, miRNAs have specific sequences because they are precisely cleaved out from small hairpin precursors. Some Dicer homologs efficiently generate both, siRNAs and miRNAs, while others are adapted for biogenesis of one small RNA type. Here, we review the wealth of recent structural analyses of animal and plant Dicers, which have revealed how different domains and their adaptations contribute to substrate recognition and cleavage in different organisms and pathways. These data imply that siRNA generation was Dicer's ancestral role and that miRNA biogenesis relies on derived features. While the key element of functional divergence is a RIG‐I‐like helicase domain, Dicer‐mediated small RNA biogenesis also documents the impressive functional versatility of the dsRNA‐binding domain. Introduction Small RNAs (20-30 nucleotides [nt] long) loaded on Argonaute proteins function as sequence-specific guides in numerous RNA silencing pathways (reviewed in Ketting, 2011). Small RNAs in many RNA silencing pathways are generated by RNase III Dicer from substrates with doublestranded RNA structures (reviewed in Jaskiewicz & Filipowicz, 2008). Dicers and numbers of their paralogs in each species vary. Some organisms use a single Dicer for small RNA production, while others have two or more Dicers with distinct features. For example, mammals and Caenorhabditis elegans utilize a single Dicer, Drosophila two, and Arabidopsis has four (Fig 1A). This review summarizes recent remarkable progress on understanding the structural and functional variability of Dicer proteins in different model systems and discusses the main principles of Dicer protein function and evolution. Dicer Architecture Dicer is a general name for RNase III endoribonucleases producing small RNAs for RNA silencing pathways. There is a common order of specific modules present in Dicer across eukaryotic kingdoms, which reflects the common order of protein domains in the primary protein sequence. This protein organization constitutes the canonical Dicer architecture, despite some eukaryotic Dicers differing considerably from it (Fig 1A and B). The three-dimensional architecture of canonical Dicer consists of three major structural regions: the cap (head), the core (body), and the base (Lau et al, 2009;Wang et al, 2009;Taylor et al, 2013;Liu et al, 2018; Fig 1B). The core-an internal dimer of the RNase III domains The rigid core of Dicer-related enzymes ( Fig 1B) is formed by an intramolecular dimer of two RNase III domains, which resembles a bacterial RNase III dimer (Zhang et al, 2004;MacRae et al, 2006). Each RNase III domain of Dicer cleaves one strand of the substrate (also known as "dicing"). The same intramolecular RNase III dimer organization is common for proteins called Dicer or Dicer-like, even if they otherwise deviate from the canonical architecture. The RNase III core module is also present in Drosha, a Dicer-related enzyme (Fig 1A), which likely evolved from a Dicer duplication in a Metazoan ancestor (Maxwell et al, 2012;Brate et al, 2018). 
Drosha supports animal miRNA biogenesis as the catalytic core of the nuclear Microprocessor complex that cleaves primary microRNA transcripts (pri-miRNAs) into precursor miRNA (pre-miRNA) substrates for Dicer (reviewed in Bartel, 2018).

The cap: the PAZ domain and platform with a connector helix

The cap module is composed of the Piwi/Argonaute/Zwille (PAZ) domain, the platform, and the connector helix (Fig 1B). The PAZ domain is an RNA-binding domain found in Dicer and Argonaute proteins (Lingel et al, 2004;Ma et al, 2004). In Dicer, PAZ is a dsRNA-binding element (MacRae et al, 2006) that anchors one end of the substrate. The PAZ domain is present in animal and plant Dicers as well as in Dicer of Giardia intestinalis (MacRae et al, 2006), a flagellated unicellular parasite. The low sequence conservation of the PAZ domain impedes its identification by sequence homology analysis in Dicers from the fungi Schizosaccharomyces pombe and Neurospora crassa (Colmenares et al, 2007;Vetukuri et al, 2011;Paturi & Deshmukh, 2021). However, despite the lack of sequence conservation, a PAZ-like fold appears in the structural prediction for Dicer from S. pombe (Fig 1C), suggesting that the canonical Dicer architecture evolved in a common ancestor of the main eukaryotic kingdoms.

The PAZ/platform domains in different Dicers may contribute to discriminating Dicer substrates. While the PAZ has specificity for the two-nucleotide 3′ overhang (3′ pocket; Lingel et al, 2004;Ma et al, 2004), the adjacent platform domain may carry a phosphate-binding pocket, enabling combined binding of both strands of a substrate RNA duplex with the 3′ two-nucleotide overhang and assuring efficient and accurate substrate processing (Park et al, 2011;Tian et al, 2014). The 5′ binding pocket is present in Drosophila Dicers and human Dicer but not in Giardia or S. pombe Dicer. Functional studies suggested that simultaneous anchoring of the 3′ and 5′ ends of the substrate terminus is a feature important for the fidelity of small RNA biogenesis (Park et al, 2011;Fukunaga et al, 2014;Kandasamy & Fukunaga, 2016). A unique phosphate-binding pocket exists in Drosophila Dicer-2, which is required to bind the 5′ monophosphate of short dsRNA substrates but not long dsRNAs (Cenik et al, 2011;Fukunaga et al, 2014). Inorganic phosphate binding by the pocket suppresses miRNA processing in vivo and alters substrate specificity in favor of long dsRNAs (Fukunaga et al, 2014). At the same time, the phosphate-binding pocket of Dicer-2 supports length fidelity of siRNA production from long dsRNA (Kandasamy & Fukunaga, 2016). In monocot plants, divergence of the PAZ domain in the DCL3 and DCL5 paralogs determines distinct substrate specificity in the biogenesis of distinct classes of 24-nt siRNAs.

(Figure 1 legend, continued) (Zapletal et al, 2022). (C) Comparison of the folds of the PAZ domain of S. pombe (gray fold) and H. sapiens (green fold). The red connector helix was retained in the fold to indicate the orientation of the fold relative to the rest of Dicer. The S. pombe fold is based on AlphaFold (Varadi et al, 2022); for the human Dicer, a published structure [PDB ID: 5ZAM] was used (Liu et al, 2018).

The PAZ domain is key to the biogenesis of small RNAs of defined length by different Dicers, as the length of the product is defined by the distance between the end of the substrate anchored in the PAZ domain and the catalytic center (MacRae et al, 2006).
This distance is defined by an α helix (the connector or ruler helix; Fig 1B, D and E), a structural component first reported in the Dicer structure from Giardia (MacRae et al, 2006), which directly connects the PAZ domain with the RNase III domains (Fig 1D).

The base: the N-terminal helicase domain

The base of the enzyme consists of an N-terminal helicase domain of the RIG-I-like receptor subgroup, which forms a clamp-like structure near RNase III (Lau et al, 2009, 2012;Taylor et al, 2013;Liu et al, 2018). This domain is composed of three globular subdomains: an N-terminal DExD/H domain (HEL1), an insertion domain (HEL2i), and a helicase superfamily C-terminal domain (HEL2). As we will discuss later, the N-terminal helicase is another structural element involved in substrate recognition and functional differentiation of Dicer homologs, as shown by structural and functional analyses (Welker et al, 2011;Liu et al, 2012;Kidwell et al, 2014;Sinha et al, 2015;Wang et al, 2021;Wei et al, 2021;Jouravleva et al, 2022;Su et al, 2022;Yamaguchi et al, 2022;Zapletal et al, 2022;Aderounmu et al, 2023). It also mediates interactions with regulatory proteins (reviewed in Hansen et al, 2019).

dsRBD and DUF283 domains

Two additional domains are part of the common Dicer architecture: the C-terminal dsRNA-binding domain (dsRBD), which localizes near the catalytic core, and the DUF283 domain, localized next to the helicase domain at the boundary between the core and the base (Fig 1). DUF283 has a dsRBD fold (Dlakic, 2006) and, like the dsRBD, appears to interact with the substrate during dicing (Wang et al, 2021;Wei et al, 2021;Jouravleva et al, 2022;Su et al, 2022;Yamaguchi et al, 2022;Zapletal et al, 2022). The C-terminal dsRBD has also been implicated in affecting the nucleocytoplasmic localization of Dicer (Vagin et al, 2009;Doyle et al, 2013).

While the above-described canonical Dicer architecture (Fig 1B) exists across eukaryotic kingdoms, there are Dicer variants deviating from the cap-core-base architecture. The abovementioned Dicer from Giardia lacks the base as well as the C-terminal dsRBD, representing a "minimal Dicer" composed of the core and head parts. There are other Dicers whose architecture deviates even more. For example, DrnA/B in Dictyostelium retain the tandem RNase III core module, but the dsRBD is placed at the N terminus and the central part of the protein does not show sequence homology with canonically built Dicers (Hinas et al, 2007;Kruse et al, 2016;Liao et al, 2018; Fig 1A). How DrnA/B determine the length of the product remains unclear. Similarly, the abovementioned Drosha proteins may be considered highly derived Dicer variants retaining the core module with the dsRBD (Kwon et al, 2016). Remarkably, the noncanonical Dicer Dcr1 in Kluyveromyces polysporus, carrying just a single RNase III domain and a tandem dsRBD (Fig 1A), is also able to generate small RNAs of specific lengths (Weinberg et al, 2011). Structural analyses of Dcr1 suggested that the siRNA length of ~23 nt is generated upon cooperative binding of long dsRNA by an array of Dicers, where dicing by two adjacent Dicers yields a correctly sized siRNA duplex.

Main Dicer Binding Partners

Dicer interacts with many proteins, but two binding partner families stand out: (I) proteins from the Argonaute family, which bind small RNAs generated by Dicer, and (II) dsRNA-binding proteins with tandemly arrayed dsRBDs, which facilitate substrate recognition, dicing fidelity, and Argonaute loading.
Argonaute proteins form the core of effector complexes and provide another layer for the divergence of RNA silencing pathways (reviewed in Meister, 2013). Argonautes exist in both prokaryotes and eukaryotes, suggesting they emerged before the canonical Dicer architecture was formed. Eukaryotic Argonautes have a stereotypical bilobed architecture consisting of four domains: N-terminal, PAZ, MID, and PIWI. The Argonaute PAZ domain binds the 3′ end of the loaded small RNA (Lingel et al, 2003;Song et al, 2003;Yan et al, 2003). The 5′ end of the loaded small RNA is recognized by the MID domain (Boland et al, 2010;Frank et al, 2010). The PIWI domain has an RNase H fold and provides nuclease activity for Argonaute homologs able to cleave RNAs complementary to the loaded small RNA (Liu et al, 2004;Meister et al, 2004). Interaction of human Dicer with the AGO subfamily of Argonautes was proposed to occur through a region near the RNase IIIa domain (Sasaki & Shimizu, 2007).

Dicer Substrates

Dicer substrates can be classified in different ways. Here, we divide substrates into three groups according to their structure and the predictability of the small RNA sequence (Fig 2A).

Perfectly complementary long dsRNAs with a blunt end

These are typically produced by an RNA-dependent RNA polymerase (RdRP) that synthesizes the antisense strand from the end of a template. They may be a hallmark of replicating viral RNA, and their processing by Dicer is one of the mechanisms of innate immunity (reviewed in Hur, 2019). RdRPs are also an intrinsic element in many RNA silencing mechanisms (e.g., in transcriptional silencing in plants and S. pombe; Mourrain et al, 2000;Colmenares et al, 2007), where RdRPs convert specific RNAs into Dicer substrates. A blunt-end dsRNA can be recognized through the PAZ domain (Zhang et al, 2002). However, the point of entry for long dsRNA into Dicer may also be the RIG-I-like N-terminal helicase domain (Welker et al, 2011;Sinha et al, 2018). In the processive mode of dicing (Fig 2B), the ATPase activity of the helicase domain facilitates the enzyme's movement along the substrate, which is processed by a single Dicer molecule in an ATP-dependent series of consecutive dicing steps (Zamore et al, 2000;Bernstein et al, 2001;Ketting et al, 2001;Nykanen et al, 2001;Cenik et al, 2011;Naganuma et al, 2021).

Perfectly and nearly perfectly complementary long dsRNAs with longer single-stranded overhangs

Such dsRNAs typically arise from genomic transcription followed by intra- or intermolecular base pairing of cellular transcripts (reviewed in Chen & Hur, 2022). Such structures usually have one or two long single-stranded overhangs (Fig 2A), as such transcripts are unlikely to match precisely to generate a blunt end. This precludes direct access of Dicer to the ends of the double-stranded region. However, Dicer is able to dice such substrates by a poorly understood endonucleolytic activity (Zhang et al, 2002), which yields fragments with two-nucleotide overhangs, which are then efficiently recognized and processed by Dicer.

Small hairpin substrates

Small hairpin substrates are common substrates for the biogenesis of miRNA-like molecules. Annotated miRNAs in animals, plants, and other taxonomic groups (Kozomara et al, 2019) show that miRNAs are produced from a heterogeneous assortment of substrates that are not processed in a uniform way. At the same time, a common feature of miRNAs is that their biogenesis results in a small RNA with a defined sequence.
(Fragment of the Fig 2 legend: annotated mature miRNAs are highlighted in red when on the ascending strand (5p miRNA) and in blue when on the descending strand (3p miRNA); miRNA annotations are from Kozomara et al, 2019.)

Plant miRNAs are thought to have evolved from eroded larger inverted repeats originally producing long dsRNA (reviewed in Svoboda & Di Cara, 2006) and often suppress complementary sequences by RNAi-like endonucleolytic cleavage. Consequently, many plant miRNAs retain longer stems (Fig 2C) and are diced by DCL1 in two consecutive cleavages in an ATP-dependent manner reminiscent of siRNA biogenesis (Fig 2B). Canonical animal miRNAs are a distinct small RNA class whose biogenesis involves two-step processing by two RNase III family proteins, Drosha and Dicer (Fig 2B). Nuclear Drosha generates pre-miRNA, a small precursor hairpin, from a long pri-miRNA transcript. Subsequently, cytoplasmic Dicer cleaves the pre-miRNA and releases a ~22 nt miRNA, which is loaded onto Argonaute. Canonical miRNAs in Eumetazoan animals typically emerge de novo (Chen & Rajewsky, 2007; Meunier et al, 2013), presumably from random small hairpin structures entering the miRNA biogenesis machinery. Given the distinct modes of biogenesis and the independent adaptation of animal Dicers for miRNA biogenesis, miRNAs in Eumetazoan animals have shorter stems and smaller loops than plant miRNAs, while showing slight differences in precursor length distribution (Fig 2C). There are also noncanonical animal miRNAs, such as mirtrons, which are Drosha-independent and are Dicer hairpin substrates derived from specific small spliced-out introns (Fig 2D). How the biogenesis of annotated miRNAs from species of other taxonomic groups (e.g., Dictyostelium or Phytophthora) occurs is unclear, but the precursors of these miRNAs seem to have longer stems and loops, as plant precursors do (Fig 2D). Taken together, Dicer substrates have different structural features, which facilitate their recognition and cleavage. Three types of endonucleolytic cleavage by Dicer can be recognized: ATP-dependent processive cleavage of dsRNA from its terminus, poorly understood internal endonucleolytic cleavage of long dsRNA with inaccessible ends, and loop removal from small hairpin miRNA precursors.

Dicer Function at the Molecular Level
While Dicer was identified as an enzyme producing small RNAs more than two decades ago, only recent structural analyses of Dicers in plants (Wang et al, 2021; Wei et al, 2021), Drosophila (Jouravleva et al, 2022; Su et al, 2022; Yamaguchi et al, 2022) and mammals (Liu et al, 2018; Zapletal et al, 2022; Lee et al, 2023b) have provided detailed structural insights into miRNA and siRNA biogenesis by Dicer. Several Dicer homologs were captured in apo form and in multiple RNA-bound states (Table 1), revealing details about substrate selection, catalysis, and product release, and how Dicer cooperates with dsRBP cofactors during RNA processing. These structural data from multiple organisms allow us to take a closer look at common and derived features of Dicer in substrate recognition and processing.

Dicer helicases determine how the enzyme selects and loads its substrate
The determined structures of Dicer from invertebrates (Drosophila Dicer-1 and Dicer-2) and Dicer-1 from mammals all adopt very similar closed "L"-shaped conformations in the apo state (Liu et al, 2018; Jouravleva et al, 2022; Su et al, 2022; Yamaguchi et al, 2022; Zapletal et al, 2022; Lee et al, 2023b).
In this autoinhibited conformation, which reflects in vitro data (Ma et al, 2008), the helicase domain is associated with RNase IIIb and limits the loading of RNA into the catalytic site. The helicase domains of these Dicers have diverged during evolution. For example, Drosophila Dicer-1 has a degenerate helicase domain and is an ATP-independent enzyme (Tsutsumi et al, 2011), Drosophila Dicer-2 has a conserved helicase domain that hydrolyzes ATP (Zamore et al, 2000; Nykanen et al, 2001; Cenik et al, 2011; Sinha et al, 2015), and mammalian Dicer-1, despite conservation of its helicase domain, does not hydrolyze ATP (Zhang et al, 2002). Accordingly, Dicers employ distinct mechanisms of substrate recognition and loading. This view is in line with the recent attempt to reconstruct ancestral helicase sequences of animal Dicers and analyze their properties in vitro (Aderounmu et al, 2023). Upon RNA binding, the horseshoe-shaped structure of the helicase domain of Drosophila Dicer-2 associates with the DUF283 domain to form a ring (similar to RIG-I-like receptors) that threads the RNA substrate into the enzyme (Fig 3A; Sinha et al, 2018; Su et al, 2022). Using this mechanism, Dicer-2 can process not only dsRNA with a blunt end (Sinha et al, 2018) but also dsRNAs with an overhang at the 3′ end, as suggested by quantitative biochemical evidence (Cenik et al, 2011), single-molecule fluorescence microscopy (Naganuma et al, 2021), and a recent structural study (Su et al, 2022). While it is still controversial whether or not the initial binding of RNA requires ATP, the subsequent translocation of the RNA substrate through the helicase-DUF283 ring is ATP-dependent (Sinha et al, 2018; Naganuma et al, 2021; Singh et al, 2021; Su et al, 2022). By contrast, the helicase domain of mammalian Dicer-1 is not rearranged: it remains in the closed "L"-shaped conformation and encircles the pre-miRNA substrate during initial RNA binding. This mechanism discriminates between authentic miRNA precursors and other RNA hairpins by anchoring three elements: the RNA ends, the central region, and the terminal loop (Fig 3B). Upon binding of the substrate (and TARBP2), Dicer switches into an active open state that allows repositioning of the substrate into the RNA processing center. This substrate repositioning requires disengagement of the helicase domain and DUF283 from the Dicer core, and the unlocked helicase becomes highly flexible in the dicing state (invisible in cryo-EM); the same has been reported for Dicer in mice (Zapletal et al, 2022) and humans (Lee et al, 2023b). Interestingly, the helicase domain of Dicer-1 in flies remains fully bound to the Dicer core and undergoes only a small conformational change toward an open conformation that is sufficient to accommodate authentic miRNA precursors (Jouravleva et al, 2022). Structural studies in plants showed that DCL-3, an siRNA-producing enzyme, adopts the same active conformation as mammalian miRNA-producing Dicer-1, whose helicase domain and DUF283 are disengaged from the Dicer core and are highly flexible (Wang et al, 2021). In contrast, the miRNA-producing DCL-1 in plants, which combines the activities of both Drosha and Dicer in mammals, uses a helicase-mediated threading mechanism typical of the siRNA-producing Dicers in animals. The DCL-1 structures showed that the helicase-DUF283 threading ring transfers the RNA substrate in an ATP-dependent manner while the enzyme performs two cuts, first cleaving the pri-miRNA to form the pre-miRNA and then the pre-miRNA to yield the miRNA (Wei et al, 2021).
Dicing mechanism
The set of recent structures showed that the adaptation of Dicer to specific roles in different organisms and pathways (Box 1) is encoded in the substrate selection and in the loading mechanism, while the actual mechanism of catalysis remains invariant. The superposition of Dicer-RNA structures, regardless of organism or pathway, shows virtually identical conformations of the core elements involved in the catalysis (Fig 3C and D; Wang et al, 2021; Jouravleva et al, 2022; Zapletal et al, 2022; Lee et al, 2023b). This includes the insertion of the 3′ and 5′ ends of the pre-miRNA into a basic pocket of the PAZ-Platform cassette, which aligns the substrate and determines the dicing site. The pre-miRNA is further aligned in the positively charged groove formed by the RNase IIIa/b domains. The internal dsRBD of Dicer clamps the RNA in the catalytic sites of Dicer. A recent study proposed that the dsRBD of human Dicer recognizes the GYM motif (G, guanine; Y, paired C/U; M, mismatched nucleotide) and stabilizes the interaction between the RNA and Dicer for a more efficient dicing reaction (Lee et al, 2023b). This molecular interaction could explain the evolutionary conservation of the GYM motif in a subset of miRNAs (Lee et al, 2023a).

Internal and external dsRBDs modulate Dicer's activities
Recent structures of mammalian Dicer-1 unexpectedly revealed that its internal C-terminal dsRBD has two distinct RNA-binding sites. The first RNA-binding site, situated on the β-sheet surface of the domain, is responsible for binding the central double-helical region of the pre-miRNA in the pre-dicing state. In contrast, the second RNA-binding site is located on the α-helical surface on the opposite side of the domain and is used to clamp RNA in the catalytic sites of Dicer in the dicing state (Fig 4B and C; Zapletal et al, 2022; Lee et al, 2023b). The fact that the dsRBD switches between its two RNA-binding sites during the multistep catalysis process of Dicer demonstrates the remarkable plasticity of this important domain. The structural studies of D.m. Dicer-2 showed that the dsRBD also helps to induce bending of the dsRNA substrate, which facilitates the proper alignment of the substrate with the core of Dicer and its translocation into the PAZ domain (Su et al, 2022). Additionally, Dicers associate with accessory dsRBD-containing proteins such as TARBP2, ADAR1, PKR, and PACT in mammals, and Loquacious-PA/PB/PD and R2D2 in invertebrates (Fig 4A; Hansen et al, 2019). These external regulatory dsRBD-containing proteins associate with the Dicer helicase domain via their C-terminal Type B dsRBD (a protein-protein interacting domain; Doyle & Jantsch, 2002; Hansen et al, 2019), as in TARBP2, R2D2, and Loquacious-PA/PB (Fig 4D-F), or via unstructured regions, as in Loquacious-PD (Su et al, 2022). In general, these binding partners facilitate miRNA or siRNA precursor interaction with Dicers. Recent structural work on D.m. Dicer-2 showed that the C-terminal tail of Loquacious-PD binds to the helicase domain of Dicer-2 and its two dsRBDs support substrate binding (Su et al, 2022). However, the dsRBDs of Loquacious-PD were not observed in the density map of the initial binding state, suggesting that the dsRBDs have only an assisting role in the initial binding of dsRNA to the Dicer-2 helicase domain. In mammals, TARBP2 stimulates the transition from the pre-dicing to the dicing-competent state (Zapletal et al, 2022).
In the pre-dicing state, the dsRBD3 of TARBP2 stably associates with the Dicer-1 helicase (Fig 4D; Liu et al, 2018; Zapletal et al, 2022), while the first two dsRBDs bind dsRNA in a flexible manner, similar to what was observed for Loquacious-PD (Fig 4D; Zapletal et al, 2022). Akin to TARBP2, the C-terminal dsRBD3 of Loquacious-PB interacts with the helicase domain of D.m. Dicer-1. By contrast, the first two dsRBDs of Loquacious-PB are stably bound to the miRNA precursor and assist D.m. Dicer-1 in the positioning of the pre-miRNA into the RNA processing center (Fig 4E; Jouravleva et al, 2022). Recent structures of the Dicer-2-R2D2-siRNA complexes provided insight into the substrate-release and strand-selection state mediated by R2D2 in the siRNA-loading process. In this state, the internal dsRBD of Dicer is dissociated from the siRNA and R2D2 stably and asymmetrically recognizes the end of the siRNA duplex with the higher base-pairing stability, while the other end (the guide strand for target silencing) is accessible for loading onto AGO2 (Fig 4F; Yamaguchi et al, 2022). In previous structural studies, the isolated dsRBDs were shown to bind dsRNA with little or no specificity (Green & Mathews, 1992; St Johnston et al, 1992; Bass et al, 1994; Bycroft et al, 1995; Kharrat et al, 1995; Nanduri et al, 1998; Ryter & Schultz, 1998; Lehmann & Bass, 1999; Fierro-Monti & Mathews, 2000; Ramos et al, 2000; Blaszczyk et al, 2004; Gan et al, 2006; Stefl et al, 2006; Masliah et al, 2018), but several studies have reported that they can bind preferentially when irregularities (e.g., mismatches, internal loops, and stem-loop junctions) are present in the RNA double helix (Stefl et al, 2005, 2010; Wang et al, 2011; Jayachandran et al, 2016; Lazzaretti et al, 2018; Yadav et al, 2020). It was shown that the irregularities widen the minor groove of the RNA double helix, thereby facilitating the positioning of the α1 helix of the dsRBD on dsRNA. Recent Dicer-RNA structures with dsRBD-containing regulatory proteins show that the mode of dsRBD binding is governed by the dynamic structural context of multistep Dicer catalysis. This versatility of dsRBDs ranges from nonspecific interactions with multiple dsRNA registers during initial substrate recruitment to highly specific interactions (mainly the dsRBD α1 helix in the minor groove of dsRNA) that directly modulate Dicer catalysis and control substrate release to properly feed AGO2 with a guide strand for target silencing (Liu et al, 2018; Jouravleva et al, 2022; Su et al, 2022; Yamaguchi et al, 2022; Zapletal et al, 2022; Lee et al, 2023b).

Ancestral and Derived Dicer Functions
For a more detailed overview of the evolution of Dicer and its paralogs, we refer readers to the works of Mukherjee et al (2013) and Jia et al (2017). Here, we will discuss only selected issues, which have emerged from the recent structural analyses. As mentioned earlier, the complex canonical Dicer architecture found in animals, plants, fungi, and Protista implies that the canonical Dicer architecture with a functional ATPase/helicase had evolved already in the common ancestor of all three multicellular kingdoms and at least some extant Protista. The RIG-I-like helicases serve as cytoplasmic dsRNA sensors for antiviral immunity across animals (Yoneyama & Fujita, 2007; Kowalinski et al, 2011; Guo et al, 2013). RIG-I recognizes blunt-end dsRNA with a 5′ triphosphate (Kowalinski et al, 2011), which is a hallmark of viral replication.
Box 1. Characteristics of Dicer structures
Drosophila Dicer-2. Inactive form: L-shaped. Active form: the horseshoe-shaped structure of the helicase domain associates with the DUF283 domain to form a ring that threads the RNA substrate into the enzyme (yet side-loading, which is ATP-independent and distributive, can occur for substrates with a 3′ overhang).
Drosophila Dicer-1. Inactive form: L-shaped. Active form: small opening of the helicase domain during miRNA substrate loading; the helicase stays associated with the core.

The primary function of the canonical Dicer architecture was thus most likely linked to dsRNA processing in antiviral defense in early eukaryotes living in aquatic environments which hosted virioplankton. While it is unknown how dense and diverse virioplankton was in primeval oceans, the exposure of early eukaryotes to viruses was likely intense. This notion is consistent with present-day high-throughput analyses of seawater, which reveal unexpected density and diversity of extant virioplankton, including RNA viruses (Vlok et al, 2019; Wolf et al, 2020). The antiviral role is highly plausible for explaining the emergence and success of the canonical Dicer architecture. Given the major regulatory potential of sequence-specific targeting by small RNA, it is not surprising that other RNA silencing pathways emerged, such as the gene expression-regulating miRNA pathway or transcriptional silencing mechanisms. The major requirement for the miRNA pathway is the biogenesis of precisely defined small RNAs, which enables their specific engagement in gene regulation. In plants and animals, miRNA biogenesis involves distinct mechanisms (reviewed in Rogers & Chen, 2013; Bartel, 2018). In plants, two cleavages by DCL1 release a miRNA from its precursor. In animals, the mechanism involves first nuclear cleavage by Drosha and then cytoplasmic cleavage by Dicer. In this context, Drosha appears as a highly specialized animal Dicer variant emerging from Dicer duplication early in Metazoan evolution. Animal Dicer-1 shows distinct adaptations to miRNA biogenesis in different animal lineages, further supporting the notion that animal miRNA biogenesis by Dicer is a derived character. The main adaptation concerned the helicase domain, whose evolution in animal Dicers took diverse paths. In C. elegans DCR-1, the helicase is fully functional, as the enzyme efficiently processes both types of substrates, long dsRNA and pre-miRNAs. In Drosophila, the helicase of miRNA-producing Dicer-1 lost its original function and degenerated, while siRNA-producing Dicer-2 carries a functional RIG-I-like helicase hydrolyzing ATP and feeding dsRNA substrate into the enzyme. A recent attempt to reconstruct ancestral helicase sequences of animal Dicers suggests that the decline of the functionality of the helicase started relatively early in the lineage leading to Deuterostomes and vertebrates (Aderounmu et al, 2023). Notably, Dicer in the earliest branching animal groups (ctenophores, sponges, and cnidarians; Schultz et al, 2023) uses a binding partner that resembles HYL1 from plants rather than Loquacious/TARBP2 (Moran et al, 2013), suggesting that some miRNA-like mechanism preceded the existence of miRNA pathways in animals. Mammalian Dicer-1, which primarily generates miRNAs, represents a remarkable case where the helicase lost its helicase/ATPase function, yet amino acid residues critical for ATPase function are highly conserved across mammals.
Structural and functional analyses of the helicase showed that it acquired an ATP-independent unique structural role in miRNA biogenesis (Zapletal et al, 2022). The previously observed dicing-incompetent "pre-dicing state" where a pre-miRNA interacts with PAZ, dsRBD, and helicase domains (Liu et al, 2018) appears to be a specific adaptation of mammalian Dicer for recognition of bona fide pre-miRNAs. In Drosophila, the single-stranded region of the pre-miRNA also interacts with the helicase but the arrangement is different from that in mammals (Jouravleva et al, 2022). As mentioned above, a recent reconstruction of ancestral helicases of animal Dicers (Aderounmu et al, 2023) suggests that degeneration of the helicase activity in the lineage leading to mammals likely started already in early deuterostomes and the helicase was inactive in vertebrate ancestors already. This creates a remarkable paradox-removal of HEL1 from a mammalian Dicer increases its ability to dice long dsRNA (Ma et al, 2008;Flemr et al, 2013), which is the opposite of its original role to facilitate long dsRNA dicing. Furthermore, there is no evidence that mammalian Dicer lacking HEL1 is truly processive. Analyses of small RNA populations generated by the full-length Dicer and the shorter isoform suggest that the shorter isoform might only be more active (Flemr et al, 2013;Zapletal et al, 2022). Summary and Outlook Structural analyses of Dicers with canonical Dicer architectures reveal features associated with miRNA and siRNA biogenesis and provide a good framework for interpreting future structures and/or their predictions (Box 2). It is likely that future studies of Dicer in nematodes, mollusks and other animals will identify additional unique adaptations of Dicer for miRNA biogenesis. At the same time, some organisms rely on Dicers, which retain only the core module and the mechanism by which these Dicers generate small RNAs of defined length has not yet been determined. AlphaFoldbased models (Varadi et al, 2022) offer interesting insights into predicted structural organization of these divergent Dicers and suggest that the future may hold additional mechanisms for small RNA biogenesis by Dicer. Box 2. In need of answers How do noncanonical Dicers, such as DrnA and DrnB in Dictyostelium, recognize and cleave substrates to produce defined lengths of small RNAs? Are there other Dicer architectures in Protists? A single Dicer in C. elegans produces both, miRNAs and siRNAs. Are long dsRNA and pre-miRNA substrates recognized and cleaved in a similar way or differently? That is, is long dsRNA recognized and loaded into the enzyme through the helicase domain and do pre-miRNAs interact with Dicer primarily through the PAZ domain? Cryo-EM structures of C. elegans Dicer with both substrates would answer this question. How does AGO bind Dicer and select 5p/3p miRNAs? It remains unclear how AGO interacts with Dicer and binds cleavage products in different model organisms. Once a cleavage product is released from Dicer, thermodynamic sensing promotes selection and loading of the main strand. However, structural insights into the organization of the RISC loading complex and the loading process are still needed. What is the physiological relevance of PACT, a mammalian TARBP2 paralog? Is it having any specific role? How do other Dicer-binding partners regulate its activity/substrate specificity?
7,408.8
2023-06-13T00:00:00.000
[ "Biology" ]
Supersymmetric Continuous Spin Gauge Theory Taking into account the Schuster-Toro action and its fermionic analogue discovered by us, we supersymmetrize the unconstrained formulation of the continuous spin gauge field theory. Afterwards, building on the Metsaev actions, we supersymmetrize the constrained formulation of the theory. In each formulation, we provide supersymmetry transformations for the $\mathcal{N}=1$ supermultiplet in four-dimensional flat space-time, in which the continuous spin particle (CSP) is considered to be a complex scalar continuous spin field, and its superpartner, which can be called ``\,CSPino\,'', is considered to be a Dirac continuous spin field. It is shown that the algebra of these supersymmetry transformations closes on-shell. Furthermore, we investigate whether the obtained supersymmetry transformations reproduce the known result of the higher spin gauge field theory in the helicity limit. Finally, we illustrate how these two separate sets of transformations are related to each other.

I. INTRODUCTION

Elementary particles propagating on Minkowski space-time were classified long ago by Wigner using the unitary irreducible representations (UIRs) of the Poincaré group ISO(3,1) [1] (see also [2] for more details in any dimension). In d space-time dimensions, the massive particles are determined by representations of the rotation group SO(d − 1), while the massless particles (helicity particles), which describe particles with a finite number of degrees of freedom, are determined by representations of the Euclidean group E_{d−2} = ISO(d − 2). Another massless representation, called the continuous spin representation 1, describes a continuous spin particle (CSP) with an infinite number of physical degrees of freedom per space-time point, characterized by the representations of the short little group SO(d − 3), the little group of E_{d−2} [3]. This representation is labeled by a dimensionful parameter µ (a real parameter with the dimension of a mass): when µ vanishes the helicity eigenstates do not mix, while they do mix when µ ≠ 0. Thus, the continuous spin parameter µ controls the degree of mixing. In fact, in the "helicity limit" µ → 0, the continuous spin representation becomes reducible and decomposes into the direct sum of all helicity representations. We recall that for both massless representations, helicity and continuous spin, the eigenvalue of the quadratic Casimir operator C_2 := P^2 (the square of the momentum P_µ) vanishes. However, for the helicity representation, the eigenvalue of the quartic Casimir operator C_4 := W^2 (the square of the Pauli-Lubanski vector W^µ = (1/2) ǫ^{µνρσ} P_ν J_{ρσ}) is zero, while the one for the continuous spin representation becomes µ^2. Historically, constructing a local covariant action principle for the continuous spin particle remained an open problem for decades; however, about 75 years after Wigner's classification, the first action principle for the bosonic continuous spin particle was presented by Schuster and Toro [4] in 2014, and the first action principle for the fermionic continuous spin particle was suggested in 2015 [5]. In these two action principles there are no constraints on the gauge fields and parameters, so in this sense one can refer to them as unconstrained formulations of the CSP theory.
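For reference, the conditions just quoted can be collected in a single display; this is only a restatement of the statements above, in the conventions used there:

\[
  C_2 := P_\mu P^\mu = 0 \quad \text{(helicity and continuous spin)}, \qquad
  C_4 := W_\mu W^\mu =
  \begin{cases}
    0, & \text{helicity representations},\\
    \mu^2, & \text{continuous spin representation},
  \end{cases}
\]
with the Pauli-Lubanski vector $W^\mu = \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma} P_\nu J_{\rho\sigma}$.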
Along with the unconstrained formulation, Metsaev established a constrained formulation of the CSP theory for both the bosonic [6] and fermionic [7] continuous spin fields, in d-dimensional (A)dS space-time, in which the gauge fields and parameters are constrained. These two formulations of the CSP theory, unconstrained and constrained 2,3 , that have been formulated based on the metric-like approach, have not yet been supersymmetrized in the literature, which is the main purpose of the present paper. Indeed, for each formulation, we provide supersymmetry transformations for the N = 1 continuous spin supermultiplet in 4-dimensional Minkowski space-time. We observe that, in the CSP supermultiplet, the bosonic field should be a complex scalar continuous spin field and the fermionic one must be a Dirac continuous spin field: We note that the first supersymmetry transformations, in the frame-like approach, for the N = 1 continuous (infinite) spin supermultiplet was presented by Zinoviev [15] in three-dimensional Minkowski space-time, which was recently generalized to four dimensions [16], however our approach to supersymmetrize the theory in this paper is different. Furthermore, there are other papers discussing the supersymmetric continuous (infinite) spin gauge theory [17]- [19]. Apart from supersymmetry, a number of papers have studied other aspects of the continuous spin theory in different approaches [20]- [46]. For instance, since an interacting theory is more favored, possible interactions of continuous spin particle with matter have been investigated in [8,33], while interactions of continuous spin tachyon 4 is examined in [41,44]. The presence of the dimensionful parameter µ = 0 in the CSP theory makes it in some ways similar to a massive theory. More precisely, one may find an apparent connection (in formulations) between the massive higher spin gauge field theory and the continuous spin one. For instance, although continuous spin particle is massless, its representation is not conformally invariant since it is characterized by a parameter with the dimension of a mass, like massive particles [47]. Moreover, we shall show that the Dirac continuous spin field equation does not decouple into two Weyl equations, which is similar to the massive Dirac spin-1 2 field equation. In addition, the number of real CSP fields we use for the N = 1 CSP supermultiplet (1) equals the number of real fields in the massive higher spin N = 1 supermultiplet, in which two bosonic fields (with opposite parity) and two fermionic fields are used [48]. On the other side, there is a tight connection between the massless higher spin field theory and the continuous spin one at µ = 0 (refer e.g. to footnote 3). These two connections with the massive and massless higher spin gauge theories can give us a better understanding of how to deal with and develop the continuous spin gauge field theory. The layout of this paper is as follows. In section II, we will briefly review the supersymmetric higher spin theoryà la Fronsdal for both half-integer and integer spin supermultiplets. The review contains and pursues a method we have used to find the supersymmetry transformations for the CSP theory, however, the reader can jump to next section and follows the main part of the paper. In section III, we will present supersymmetry transformations for unconstrained formulation, while in section IV, we will provide those for the constrained formulation of the CSP theory. 
In section V, we will make a connection between two obtained supersymmetry transformations, presented in III and IV. The conclusions are displayed in section VI. In appendices; we present our conventions in the appendix A. The appendix B includes a proof related to section III. Transformation rules of the chiral supermultiplet will be presented in appendix C. A discussion on inverse operators will be displayed in appendix D. Useful relations concerning supersymmetry and so on will be presented in the appendix E. II. SUSY HIGHER SPIN GAUGE THEORY: A BRIEF REVIEW This section reviews the massless half-integer and integer higher spin N = 1 supermultiplets in four-dimensional Minkowski space-time in the metric-like approach which is known for a long time [49] 5 (see also [48] for review). However, our approach is based on the generating functions and we deal with operators, so as this fashion somewhat facilitates calculations. Moreover, the applied method to supersymmetrize the massless higher spin (HS) theory in this section has been employed for the CSP theory in sections III and IV, so in this respect the present review may be informative. In 4-dimensional flat space-time, a real massless bosonic higher spin field (except the spin-0 field which has one degree of freedom) has two degrees of freedom. Thus one can consider a Majorana spinor as its superpartner, which has also two real degrees of freedom for any arbitrary half-integer spin. Therefore, in what follows, we will take into account the Fronsdal actions [13] (except the Klein-Gordon action) for real massless higher spin fields, as well as the Fang-Fronsdal actions [14] in which the fermionic field is a Majorana spinor. Two possible supermultiplets, half-integer and integer ones, will be discussed separately in the following. A. Half-integer spin supermultiplet: ( s , s + 1/2 ) Let us first introduce the bosonic and fermionic massless higher spin fields, respectively, by the generating functions where φ µ1...µs is a tensor field of integer spin s, and ψ µ1...µs is a spinor-tensor field of half-integer spin s + 1 2 . To ignore the chiral supermultiplet ( 0 , 1/2 ) which is irrelevant for higher spins, we consider s 1 in the half-integer spin supermultiplet ( s , s + 1/2 ) and consequently in the gauge fields (2). The generating functions (2) are considered to be double-and triple gamma-traceless, that is and obey the following homogeneity conditions where N = w · ∂ ω . Then, the Fronsdal [13] and Fang-Fronsdal [14] actions can be given respectively by 6 where the operators B and F are respectively the bosonic and fermionic operators, defined as We note that the hermiticity of the actions (5), (6) satisfy by with respect to the following Hermitian conjugation rules The bosonic (5) and fermionic (6) actions are invariant under the following gauge transformations where ξ s and ζ s are gauge transformation parameters introduced by the generating functions subject to the traceless and gamma-traceless conditions In order to find supersymmetry transformations which leave invariant the sum of both free actions (5), (6) one can consider the following ansatz where ǫ is the global supersymmetry transformation parameter which is a Majorana spinor, α is considered to be a real number determining from the closure of the SUSY algebra, and X (assuming that X † = γ 0 X γ 0 ) is an operator which we would like to find out. 
To this end, one can vary the SUSY action (16) with respect to the ansatz, which yields 6 We note that the spinor field ψs in (6) is considered to be a Majorana field, thus the overall factor of 1 2 compared to the Fang-Fronsdal action [14] is usual for selfconjugate fields, introduced to ensure a consistent normalization of the field operators in quantum field theory. Demanding δI (s, s+1/2) = 0, one can cancel the first term in (19) by choosing Then, taking hermitian conjugation of (20) leads to α B = − X F which, in turn, vanishes the second term of (19). Now we are in a position to find the operator X. For this purpose, we consider a general form (which is considered to be similar to the fermionic operator F (8)) with undetermined coefficients where A i (i = 1, . . . , 7) are considered to be real functions of N (:= ω · ∂ ω ) to satisfy our assumption: Plugging the operators (7), (8), (21) into (20), and using (anti-)commutation relations presented in appendix of [9] which lead to some useful relations (E2)-(E11), one can read the coefficients A i , which become Therefore, we could determine the operator X and consequently the expression for δ ψ s , which is To find the parameter α, we should check the closure of the SUSY algebra. We will then find that the algebra closes up to a field dependent gauge transformation parameter by choosing α = √ 2, that is where To illustrate how the SUSY algebra closes we have used the Majorana flip relations (E1) and the identity (E13). Hence, we find that the SUSY action (16) is invariant under the following supersymmetry transformations: This is equivalent to the well-known result of the supersymmetry transformations for the half-integer spin supermultiplets ( s , s + 1/2 ) with s 1, which was first presented by Curtright in [49]. B. Integer spin supermultiplet: ( s + 1/2 , s + 1 ) Let us take into account s 0 for the integer spin supermultiplet ( s + 1/2 , s + 1 ), and as a result for the generating functions in (2). In this case, one can consider the bosonic [13] and fermionic [14] higher spin actions respectively by where the bosonic B and fermionic F higher spin operators were given in (7), (8) . In order to find the supersymmetry transformations for the SUSY action one can start with the following ansatz where ǫ is the global supersymmetry transformation parameter, f can be in general a real function of N determining from the closure of the SUSY algebra, and Y is an operator (assuming that Y(∂ ω ) † = − γ 0 Y(ω) γ 0 ) that we would like to find. We note that presence of the unit imaginary number i in the ansatz guaranties that the Majorana spinor field is real. Varying the SUSY action (30) with respect to the above ansatz, we will reach to Demanding δI (s+1/2, s+1) = 0, we have to choose leading in turn to B ω / f = − i Y(ω) F, by taking hermitian conjugation of (34). Hence, the remaining task is determining the operator Y(∂ ω ). Considering the property we adopted to the operator Y(∂ ω ), one can drop an ω / from the left-hand-side of the operator F (8), and surmise a general form for Y(∂ ω ) as where B i (i = 1, . . . , 5) are considered to be real functions of N (:= ω · ∂ ω ). Then, plugging (7), (8), (35) into (34), and applying the (anti-)commutation relations presented in appendix of [9], we will find the coefficients as Therefore, the operator Y(∂ ω ) and as a result the expression for δψ s can be given by The closure of the SUSY algebra will fix the f operator. 
In fact, we will find by choosing the algebra will be closed up to a field dependent gauge transformation parameter where Therefore, we find that the SUSY action (30) is invariant under the following supersymmetry transformations: This is also equivalent to the well-known result of the supersymmetry transformations for the integer spin supermultiplets ( s + 1/2 , s + 1 ) with s 0, which was first discovered in [49]. III. UNCONSTRAINED FORMULATION OF THE CSP THEORY This section, and the next one, include main results of this paper. As we know, a general property of all supersymmetric theories is that the number of physical bosonic degrees of freedom is always identical to the number of fermions. On the other hand, we know that a continuous spin particle (bosonic or fermionic) has infinite number of physical degrees of freedom per space-time point. Hence, the equality of the number of bosonic and fermionic degrees of freedom in a CSP supermultiplet looks like meaningless. Therefore, in four-dimensional flat space-time, there would be in principle four possibilities for the N = 1 supermultiplet containing of a CSP and CSPino (superpartner of CSP) so as one can consider CSP to be a real or complex scalar continuous spin field, while one may consider CSPino to be a Majorana or Dirac continuous spin field. Among these possibilities, we find that the mentioned case in (1) with complex scalar CSP field and Dirac CSP field is the only choice which is consistent with supersymmetry expectations. Here, in this section, we first present bosonic [4] and fermionic [5] unconstrained formulations of the continuous spin gauge field theory. Then we provide supersymmetry transformations for the N = 1 continuous spin supermultiplet which leave the sum of the bosonic and fermionic actions invariant and simultaneously satisfy the SUSY algebra, as we expect. We also investigate the helicity limit of the SUSY CSP theory and supersymmetrize unconstrained formulation of the higher spin gauge field theoryà la Segal, given by the actions [11], [12]. We note that this formulation of the higher spin theory has not been already supersymmetrized. Notice again that in these formulations of the CSP and HS theories there is no constraint on gauge fields (bosonic or fermionic) and their related gauge transformation parameters. A. Bosonic action Let us consider the Schuster-Toro action [4] in four-dimensional Minkowski space-time, in which the scalar continuous spin gauge field is complex. Applying partial integration to the Schuster-Toro action, the complex scalar continuous spin gauge field action is given by 7 where µ is continuous spin parameter, η µ is a 4-dimensional auxiliary Lorentz vector localized to the unit hyperboloid η 2 = −1, and δ ′ is the derivative of the Dirac delta function with respect to its argument, i.e. δ ′ (a) = d da δ(a) . The complex scalar CSP field Φ is unconstrained and introduces by a collection of totally symmetric complex tensor fields Φ µ1...µs (x) of all integer rank s, packed into a single generating function We note that in the infinite tower of spins (45), every spin state interns only once, and the spin states are mixed under the Lorentz boost, so as the degree of mixing is controlled by the continuous spin parameter µ. The action (44) is invariant under gauge transformations where ξ 1 , ξ 2 are two arbitrary complex gauge transformation parameters, which are unconstrained. 
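To make the expansion underlying the generating function (45) concrete, a minimal sketch is given below; the 1/s! normalization is one common convention and is assumed here only for illustration, not quoted from (45):

\[
  \Phi(x,\eta) \;=\; \sum_{s=0}^{\infty} \frac{1}{s!}\,
  \eta^{\mu_1}\cdots\eta^{\mu_s}\,\Phi_{\mu_1\ldots\mu_s}(x),
\]
so that every totally symmetric complex tensor Φ_{µ1...µs}(x) enters the tower exactly once; the Dirac CSP field Ψ(x,η) of the next subsection is packaged analogously, with spinor-tensor coefficients.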
By varying the action (44) with respect to the gauge fields Φ† and Φ, one can obtain two independent equations of motion; the one for the CSP gauge field Φ reads as in (48), and an analogous independent equation of motion holds for the CSP gauge field Φ†. 7 We note that an overall factor of 1/2 has been dropped from the Schuster-Toro action [4] compared to (44), because we deal with a complex scalar CSP field.

B. Fermionic action

Let us now consider the fermionic version of the Schuster-Toro action in four-dimensional flat space-time [5], in which the fermionic continuous spin field is a Dirac spinor. By applying partial integration to the action of [5], the Dirac continuous spin gauge field action is given by (49), where ∂̸ (or η̸) is defined according to the Feynman slash notation, ∂̸ ≡ γ^µ ∂_µ, with the γ^µ the gamma matrices in 4 dimensions. The fermionic CSP field Ψ is considered to be a Dirac spinor field, which is unconstrained and introduced by the generating function (50), where Ψ_{µ1...µs}(x) are totally symmetric Dirac spinor-tensor fields of all half-integer spin s + 1/2, in such a way that the spinor index is left implicit. Again, as in the bosonic case, in the infinite tower of spins (50) every spin state enters only once, and the spin states mix under the Lorentz boost, with the degree of mixing controlled by the continuous spin parameter µ. The action (49) is invariant under the spinor gauge transformations (51), (52), where ζ_1, ζ_2 are unconstrained arbitrary spinor gauge transformation parameters. Varying the action (49) with respect to the spinor gauge field Ψ yields the equation of motion (53) for the unconstrained Dirac continuous spin field Ψ. We note that there are two possibilities for presenting the unconstrained formulation of the fermionic CSP/HS theory (see appendix B of [8]). One possibility is the one we have used in [12] and here. Another possibility can be expressed by converting i → −i in relations (49)-(53), which has been used in [5], [8].

Remark: Here, we recall that a four-component Dirac spinor field can be written in terms of two-component objects ψ_L and ψ_R, the left-handed and right-handed Weyl spinors, respectively. If one uses the two-component notation of [53], then the Dirac equation for a massive spin-1/2 particle demonstrates that the two Lorentz group representations ψ_L and ψ_R are mixed by the mass term. However, in the massless case, the equations for ψ_L and ψ_R decouple and yield the Weyl equations. Based upon the above discussion, as the Dirac continuous spin gauge field (governed by the equation of motion (53)) describes a massless particle, one can expect to derive the so-called Weyl continuous spin equations. To this end, using the above notation, let us write (53) in terms of Ψ_L and Ψ_R, as in (57). This equation manifestly demonstrates that the Dirac continuous spin equation (57) cannot be decoupled into two independent Weyl CSP equations. Even in the helicity limit (µ → 0), in which massless higher spin equations are expected to be reproduced, the equation (57) does not decompose into Weyl equations. However, the latter feature is due to the unconstrained formulation we use; in the constrained formulation (next section) we will see that it can be decomposed.

C.
Supersymmetry transformations Now we are in a position to supersymmetrize unconstrained formulation of the continuous spin theory in 4dimensional flat space-time for the N = 1 supermultiplet, in which we have considered the bosonic CSP as a complex scalar continuous spin filed and the fermionic CSP as a Dirac continuous spin field. By this feature, we find conveniently that the SUSY CSP action, a sum of the bosonic (44) and fermionic (49) is invariant under the following supersymmetry transformations where ǫ is an arbitrary constant 8 infinitesimal, anticommuting, Dirac spinor object that parameterizes the supersymmetry transformations (see (E1) for its properties), γ 5 is the fifth gamma matrix, and Φ, Ψ are respectively the complex scalar and Dirac CSP fields. Let us now calculate commutator of the supersymmetry transformations (60), (61) acting on the bosonic and fermionic CSP fields. We straightforwardly find the SUSY commutator on the bosonic CSP field yields which corresponds to the translation, while the one on the fermionic CSP field becomes where " ≈ " denotes that we have applied the Dirac continuous spin field equation of motion (53). Taking into account a field dependent spinor gauge transformation parameter, given by the second line in (63) would be the ζ 1 gauge transformation (51), demonstrating that the SUSY commutator acting on the fermion CSP field is closed on-shell, up to a gauge transformation. Remarks: Concerning the supersymmetry transformations we obtained in (60) and (61), there are some remarks which are useful to discuss: • By starting from the ansatz δΦ = αǭ Ψ which α is an arbitrary parameter, one can prove 9 that there would be no δΨ to leave invariant the sum of the bosonic and fermionic actions. In other words, in (60), the term (η / − i) is necessary for invariance of the SUSY action (see the appendix B for the proof). • Employing the gamma fifth matrix γ 5 in the above set was essential for the closure of the SUSY algebra. In fact, by omitting γ 5 , one can consider the bosonic field as a real scalar CSP field and the fermionic one as a Majorana or Dirac CSP field. However, in these two cases, although the SUSY action will be invariant under such transformations, the SUSY algebra will not be closed. • When the gamma fifth was employed, the CSP field has to be complex while the CSPino can be either a Majorana or a Dirac CSP field. We note that again a CSP has infinite physical degrees of freedom per space-time point, so as the Majorana or Dirac CSP field can be candidate of the CSPino. • If one chooses the Majorana CSP field as superpartner of the complex scalar CSP field, then the right-hand-side of (61) does not satisfy the Majorana spinor condition and should be improved by adding a complex conjugate of the right-hand-side of (61). However, by adding the complex conjugate term, one finds that the SUSY algebra can not be closed 10 . • For the N = 1 supermultiplet, we had to pick the Dirac CSP field as superpartner of the complex scalar CSP field. Therefore, one concludes that in the context of the CSP theory, instead of the equality of the number of bosonic and fermionic physical degrees of freedom, the number of bosonic and fermionic real CSP fields should be equal. This fact can be seen here for the N = 1 supermultiplet, and may hold for N > 1 but it remains to be checked. • As we employed the Dirac CSP field, we deal with supersymmetry transformation parameter ǫ which is also a Dirac spinor object. 
Therefore, to illustrate how the SUSY algebra closes, we have used the so-called "Dirac flip relations" (E1). Indeed, it is notable to mention that the Majorana flip relations hold also for Dirac spinors 11 (see [54], page 49, for more details) . • Although the supersymmetry transformation parameter ǫ is a Dirac spinor object, it is effectively not a Dirac spinor, but the right-handed Weyl spinor. This is due to the fact that, by defining 1 2 (1 − γ 5 ) ǫ := R ǫ := ǫ R , one can see that ǫ always appears as ǫ R and ǫ R in the SUSY transformations (60) and (61) respectively, which reflects the fact that we deal with N = 1 SUSY 12 . • In the frame-like approach, authors of [16] have supersymmetrized the infinite spin theory using four fields; two real infinite spin fields with opposite parity, as well as two Majorana infinite spin fields. In this regard, the number of real CSP fields we have used is quite consistent with [16]. D. Helicity limit By the term "helicity limit" 13 , we refer to a case that the continuous spin parameter µ vanishes, and consequently the known results of the higher spin theory are expected to be reproduced. Since, in this section, we deal with unconstrained formulation of the CSP theory, it is natural to arrive at unconstrained formulation of the higher spin theory 14 in the helicity limit. In the approach we follow here, unconstrained formulation of the bosonic higher spin gauge field theory was established by Segal in d-dimensional (A)dS space-time [11]. In four-dimensional flat spacetime, this theory becomes the helicity limit of the Schuster-Toro formulation (see [4], [8] and [12] for more details). In addition, unconstrained formulation of the fermionic higher spin gauge field theory was constructed in d-dimensional 9 We thank Mohammad Khorrami for discussion and sending us the proof. 10 We thank Dmitri Sorokin for many fruitful discussions on this issue. 11 We thank Antoine Van Proeyen for clarifying the subject. 12 We thank again Dmitri Sorokin for pointing out this important comment. 13 Authors of [4] used the term "helicity correspondence". 14 Here, by the term "unconstrained" we mean there are no constraints on the gauge fields and parameters, as well as there are no auxiliary fields in the theory. However, there are some differences in the meaning of the term, e.g. see unconstrained formulations in [55][56][57][58][59][60][61]. (A)dS space-time [12], which in four-dimensional flat space-time becomes the helicity limit of the fermionic CSP action [5]. However, as we know, unconstrained formulation of the higher spin gauge theoryà la Segal has not been supersymmetrized by now. Therefore, in the helicity limit, we will reach to a result that has not been already in the literature and thus its accuracy should be examined, what we will do here. At µ = 0, the bosonic and frmionic CSP actions (44), (49) reduce respectively to the following bosonic and fermionic higher spin actions where, here, the bosonic field Φ is a complex higher spin field and the fermionic one Ψ is a Dirac higher spin field. These fields can be introduced respectively by the generating functions in (45) and (50), but by this difference that here the infinite towers of spins are a direct sum over all integer helicity states (s = 0, 1, · · · , ∞) and all half-integer helicity states (s = 1/2, 3/2, · · · , ∞), in which helicity states do not mix under the Lorentz boost. 
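Schematically, the decomposition described above (and in the Introduction) can be summarized as follows; D_s here denotes the massless helicity-s representation, a notation introduced only for this summary:

\[
  D^{\rm boson}_{\rm CSP}(\mu)\;\xrightarrow{\;\mu\to 0\;}\;\bigoplus_{s=0}^{\infty} D_s,
  \qquad
  D^{\rm fermion}_{\rm CSP}(\mu)\;\xrightarrow{\;\mu\to 0\;}\;\bigoplus_{s=0}^{\infty} D_{s+1/2},
\]
with each helicity appearing exactly once, which is why the µ = 0 actions (65) and (66) describe infinite, non-mixing towers of ordinary higher spin fields.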
These higher spin actions are invariant under gauge transformations (46), (47), (51), (52), and their equations of motion are given by (48), (53), when we set µ = 0. By taking the helicity limit of the CSP theory (setting µ = 0), one can propose that the SUSY higher spin actioǹ a la Segal, a sum of the complex higher spin action (65) and the Dirac higher spin action (66) is invariant under the following supersymmetry transformations We have examined and found that indeed the SUSY higher spin action (67) is invariant under the above supersymmetry transformations, and the SUSY algebra closes on-shell up to a gauge transformation. More precisely, relations of (62)-(64) with µ = 0 will be obtained for the closure of the SUSY higher spin algebra. Here, we just provided the supersymmetry transformations (68), (69) for unconstrained formulation of the higher spin gauge theory (à la Segal), and let us postpone further discussion to subsection IV D, where we will investigate the helicity limit of constrained formalism. IV. CONSTRAINED FORMULATION OF THE CSP THEORY In this section, we first display bosonic [6] and fermionic [7] constrained formulations of the continuous spin gauge field theory in 4-dimensional flat space-time, discovered by Metsaev in d-dimensional (A)dS space-time. Then we provide supersymmetry transformations for the N = 1 continuous spin supermultiplet which leave the sum of the bosonic and fermionic actions invariant. Again, as the previous section, we consider a supermultiplet consist of one complex scalar CSP field and one Dirac CSP field. A. Bosonic action Let us define the complex scalar continuous spin gauge field as the generating function where Φ µ1...µs represent for all totally symmetric complex tensor fields of all integer rank s, and ω µ is a 4-dimensional auxiliary vector. Then, the bosonic CSP action [6], in which the boson field is complex, is given by the complex scalar continuous spin action with where N := ω · ∂ ω , and µ is the continuous spin parameter. We note that the operators B, B 1 and B 2 are Hermitian (i.e. B † = B) with respect to the Hermitian conjugation rules The action (71) is invariant under the gauge transformation where χ is the gauge transformation parameter introduced by the generating function We note that this formulation of the CSP theory is constrained, that is, the gauge field Φ and the gauge transformation parameter χ are respectively double-traceless and traceless By varying the action (71) with respect to the gauge field Φ † , we shall arrive at the bosonic CSP equation of motion which after dropping a factor of ( 1 − 1 4 ω 2 ∂ 2 ω ) from its left-hand-side can be expressed as the following form In comparison to the spin-two case, one can refer to (79) and (80) as the Einstein-like and Ricci-like equations respectively. We note that in the helicity limit µ → 0, the above equation of motion reduces to a direct sum of all Fronsdal equations where φ s was given by the generating function in (2). B. Fermionic action Let us introduce the Dirac continuous spin gauge field by the generating function where Ψ µ1...µs denote for all totally symmetric Dirac spinor-tensor fields of all half-integer spin s + 1 2 , and the spinor index is left implicit. The fermionic CSP action [7], in which the fermion field is a Dirac spinor, is then given by the Dirac continuous spin action where We note that operators F, F 1 are Hermitian (i.e. F † = γ 0 F γ 0 ) with respect to the Hermitian conjugation rules (75). 
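In the oscillator notation used throughout (traces taken with ∂_ω·∂_ω and gamma-traces with γ·∂_ω), the constraints that are stated in words for the bosonic fields above and for the fermionic fields just below take the standard Fronsdal-type form; this is a sketch under the usual conventions, not a verbatim quotation of the paper's numbered constraint equations:

\[
  (\partial_\omega\!\cdot\partial_\omega)^2\,\Phi(x,\omega)=0,\qquad
  (\partial_\omega\!\cdot\partial_\omega)\,\chi(x,\omega)=0,\qquad
  (\gamma\!\cdot\partial_\omega)^3\,\Psi(x,\omega)=0,\qquad
  (\gamma\!\cdot\partial_\omega)\,\tau(x,\omega)=0 .
\]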
The action (83) is invariant under the gauge transformation where τ is the spinor gauge transformation parameter introduced by the generating function The formulation is constrained so as the spinor gauge field Ψ and the spinor gauge transformation parameter τ are respectively triple gamma-traceless and gamma-traceless By varying the action (83) with respect to the gauge field Ψ, one can easily obtain the Dirac CSP equation of motion which after removing a factor of ( 1 − 1 2 ω / ∂ ω / − 1 4 ω 2 ∂ 2 ω ) from its left-hand-side will take the following form We note that, similar to the previous section, in constrained formulation there are also two possibilities for presenting the fermionic CSP theory 15 . One possibility is the one we have stated in above, however there exists another possibility which obtains by converting i → − i in relations of (85), (86), (90) that has been applied in [9] (see also appendix D). Remark: Let us here pursue again the discussion we had in the previous section about Weyl equations. Referring to the issue and using the notation we applied in that section, one can write the Dirac CSP equation of motion (90) as where 15 Notice that, in contrast to the unconstrained formulation, here there exists just one possibility for expressing the fermionic HS theory, which is the Fang-Fronsdal formalism [14]. It is clear, when µ = 0, the operator M is non-zero and as a result the equation (91) does not decompose into two Weyl equations. However, in the helicity limit µ → 0, which the higher spin equations should be reproduced, the operators M and Ξ vanish, and consequently the equation (91) decouples into two Weyl higher spin equations: Therefore, one may conclude that the continuous spin parameter µ (which has a dimension of mass) in the Dirac CSP equation plays a role as mass in the massive Dirac spin-1 2 equation. Accordingly, one can observe that although continuous spin particle is a massless object, there would be no Weyl continuous spin equation, at least in its two formulations which we have studied in this paper. We recall that the existence of Weyl equations was dependent on formulation we use, so as at µ = 0 there was no weyl equations based on unconstrained formulation, while there exists in constrained one. C. Supersymmetry transformations In previous subsections, we discussed constrained formulation of the bosonic and fermionic continuous spin gauge field theories in 4-dimensional flat space-time. At this stage we are ready to provide supersymmetry transformations for the N = 1 continuous spin supermultiplet, in which CSP and CSPino are respectively a complex scalar and a Dirac continuous spin fields. We acquire that the supersymmetry continuous spin action (sum of the bosonic (71) and fermionic (83) continuous spin actions) is invariant under the following supersymmetry transformations where the supersymmetry transformation parameter ǫ is a Dirac spinor object, and Φ, Ψ are respectively the complex scalar and Dirac CSP fields. Using the above transformations, it is tedious but straightforward to check the closure of the SUSY algebra. We find that the algebra closes on-shell up to a gauge transformation which is proportional to (86). 
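Schematically, the on-shell closure found here has the standard form of a translation plus a field-dependent gauge transformation; the overall coefficient a and the precise field-dependent parameter depend on the conventions of (97), (98) and are left unspecified in this sketch:

\[
  [\delta_{\epsilon_1},\delta_{\epsilon_2}]\,\Phi
  = a\,(\bar\epsilon_1\gamma^{\mu}\epsilon_2-\bar\epsilon_2\gamma^{\mu}\epsilon_1)\,\partial_\mu\Phi,
  \qquad
  [\delta_{\epsilon_1},\delta_{\epsilon_2}]\,\Psi
  \approx a\,(\bar\epsilon_1\gamma^{\mu}\epsilon_2-\bar\epsilon_2\gamma^{\mu}\epsilon_1)\,\partial_\mu\Psi
  + \delta_{\tau(\epsilon_1,\epsilon_2,\Psi)}\Psi,
\]
where "≈" indicates use of the equations of motion and δ_τ is a gauge transformation of the type (86) with a field-dependent parameter.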
Remarks: Most of remarks in the preceding section are valid here, however, let us add a few points concerning the supersymmetry transformations (97) and (98): • The gamma fifth matrix γ 5 is responsible for closure of the SUSY algebra, so as by dropping the γ 5 from the above supersymmetry transformations, the SUSY action (96) will remain still invariant under (97) and (98). • It is notable to see that the SUSY CSP variation of boson field (97) contains two terms. The first term is proportional to the SUSY variation of the half-integer spin supermultiplet (26), and the second term is corresponding to the SUSY variation of the integer spin supermultiplet (41). • Moreover, one can observe that the first line in the SUSY CSP variation of fermion field (98) is identical to the SUSY variation of the half-integer spin supermultiplet (27), while the second line in (98) is proportional to the integer spin supermultiplet (42). D. Helicity limit Let us go on the discussion was carried out about the helicity limit in the previous section. However, here, the formulation is constrained and one expects to reach to the well-known result of [49] in the helicity limit. To be more precise, in the helicity limit, result of the supersymmetric higher spin theory, i.e. supersymmetry transformations of half-integer and integer spin supermultiplets (26), (27) and (41), (42) discussed in the section II, are expected to be recovered. However, we note that the chiral supermultiplet ( 0 , 1/2 ) was irrelevant for higher spins and was not discussed, while here in the helicity limit of the continuous spin theory it may be reproduced. In order to make clear the discussion, let us take into account the CSP supermultiplet (1), in which the complex scalar CSP and Dirac CSP fields are given respectively by (45) and (50): On the other side, in the helicity limit µ → 0, we know that the continuous spin representation becomes reducible and decomposes into the direct sum of all helicity representations. Therefore, at µ = 0, one can expect that the above supermultiplet decomposes into a direct sum of the chiral supermultiplet as well as the well-known half-integer and integer spin supermultiplets of the higher spin theory, i.e. In what follows, let us discuss and attempt to reproduce each case separately. Chiral supermultiplet ( 0 , 1/2 ) : At µ = 0, if one just considers spin-0 and spin-1 2 fields in the infinite towers of spins (101), there would be no ω in the gauge fields and consequently the act of ω-dependent derivatives on the gauge fields vanish. Thus, the supersymmetry transformations (97) and (98) reduce to those for the chiral supermultiplet (appendix C) where we have considered decomposition of the Dirac field ψ = ψ L + ψ R and the supersymmetry transformation parameter ǫ = ǫ L + ǫ R in terms of Weyl spinors, and took into account Half-integer ( s , s + 1/2 ) and integer ( s + 1/2 , s + 1 ) spin supermultiplets : Ignoring the chiral supermultiplet, one can redefine the supersymmetry transformation parameter and as a result, at µ = 0, one can illustrate that (97) and (98) are a direct sum of the reducible supersymmetry transformations of the reducible half-integer spin supermultiplet ( s , s + 1/2 ) as well as the reducible supersymmetry transformations of the reducible integer spin supermultiplet ( s + 1/2 , s + 1 ) In these two above SUSY transformations, the bosonic field φ(x, ω) is a complex higher spin field, and the fermionic field ψ(x, ω) is a Dirac higher spin field. 
However, as we know, such transformations are reducible and reduce, respectively, to the well-known irreducible supersymmetry transformations of the irreducible half-integer spin supermultiplet (26), (27) and of the integer spin supermultiplet (41), (42), which were presented by Curtright in [49]. Let us close this section with a remark on the helicity limit, where we tried to demonstrate that in the limit µ = 0 we obtain the correct reducible supersymmetry transformations of the reducible higher spin supermultiplets. In the SUSY CSP transformations (97), (98) the fermionic CSP field is a Dirac field; however, at µ = 0 we arrive at (103) and (104), illustrating that the chiral supermultiplet (0, 1/2) involves only the left-handed part of the Dirac spin-1/2 field. The natural question is then what happens with the right-handed part of the Dirac spin-1/2 field. It seems that this right-handed part combines with the real (or imaginary) part of the spin-1 field into the integer spin-1 supermultiplet (1/2, 1), while the imaginary (or real) part of the spin-1 field couples to the left-handed (or right-handed) Weyl part of the spin-3/2 Dirac field, thus forming the half-integer spin-3/2 supermultiplet (1, 3/2), and so on and so forth (footnote 16). V. RELATIONSHIP OF SUPERSYMMETRY TRANSFORMATIONS In this section we aim to establish the relationship between our unconstrained continuous spin SUSY transformations (60), (61) and the constrained SUSY transformations (97), (98). For this purpose, we begin from the unconstrained CSP SUSY transformations and pursue three steps: performing a Fourier transformation, applying a field redefinition, and changing the auxiliary space variable. Note, however, that one could follow the reverse approach by starting from the constrained CSP SUSY transformations. A. Fourier transformation Let us multiply the SUSY variation of the boson field (60) by δ′(η² + 1), and the SUSY variation of the fermion field (61) by δ′(η² + 1)(η/ − i) from the left, which yields (111), (112). We then perform a Fourier transformation in the auxiliary space variable η^µ to express relations (111), (112) in their Fourier-transformed auxiliary space, i.e. ω-space. Notice that the fields on the left-hand sides of the latter relations are constrained, while those on the right-hand sides are unconstrained. More precisely, equations (113) and (114) can be understood, respectively, as the general solutions of the double traceless-like and the triple gamma-traceless-like conditions. Using (113), (114), we perform the Fourier transformation over the auxiliary variable η and rewrite (111) and (112) in ω-space. C. Change of variable As a final step, let us make a change of variable in the auxiliary space ω by a shift, which in turn induces corresponding changes. Applying these changes to relations (121) and (122) converts them into the supersymmetry transformations we presented in (97), (98) for the constrained formulation of the CSP theory. Therefore, by following the above three steps, we have established a precise relation between the two separate sets of SUSY transformations: the unconstrained (60), (61) and the constrained (97), (98) ones. VI. CONCLUSIONS AND OUTLOOK In this paper, we first reviewed the supersymmetric higher spin gauge theory and obtained supersymmetry transformations for the N = 1 half-integer and integer spin supermultiplets, studied long ago by Curtright [49]. Our review was, however, based on generating functions, and we dealt with operators that facilitate the calculations.
In addition, the review presented in detail the method we applied to find the SUSY CSP transformations. Then, taking into account the Schuster-Toro action [4] and its fermionic analogue [5], we supersymmetrized the unconstrained formulation of the continuous spin gauge field theory. To this end, we provided supersymmetry transformations (60), (61) for the N = 1 supermultiplet which leave the SUSY continuous spin action (59) invariant. Since a CSP (bosonic or fermionic) has infinite physical degrees of freedom per space-time point, we observed that the number of real CSP fields should be equal in the N = 1 CSP supermultiplet (which may also hold for N > 1). Therefore, we took into account a CSP supermultiplet (43), in which the CSP is a complex scalar continuous spin field and the CSPino is a Dirac continuous spin field. We note that, in the frame-like approach, the authors of [16] provided supersymmetry transformations for the N = 1 infinite spin supermultiplet containing four real fields: a pair of massless bosonic CSP fields with opposite parity, and a pair of massless fermionic CSP fields. Therefore, in this regard, the number of real fields we used to supersymmetrize the CSP theory in the metric-like approach is compatible with [16]. We then took the helicity limit of the CSP theory, and supersymmetrized the unconstrained formulation of the higher spin gauge theory à la Segal, given by the bosonic [11] and fermionic [12] actions, in 4-dimensional flat space-time. To supersymmetrize this theory, similar to the CSP case, we considered the N = 1 higher spin supermultiplet in which the HS field is a complex higher spin field and the so-called "HSpino" is a Dirac higher spin field. In both cases, continuous spin and higher spin, the fact that we should have a complex field in the supermultiplet is related to the presence of the spin-0 field in the spectrum, which should be complex in the chiral supermultiplet (footnote 17). We recall that in the supersymmetric higher spin theory à la Fronsdal [49] the spin-0 field does not exist in the spectrum, while here it does. Afterwards, building on the Metsaev actions in 4-dimensional flat space-time [6], [7], we supersymmetrized the constrained formulation of the continuous spin gauge theory by providing the supersymmetry transformations (97), (98). In both formulations, the gamma-five matrix γ5 was employed to close the algebra, and we illustrated that the SUSY algebra closes on-shell up to a gauge transformation. Moreover, we demonstrated that although the CSP is a massless elementary particle, the continuous spin parameter µ in the theory plays the role of a mass, and thus the Dirac continuous spin equation cannot be decoupled into Weyl equations. We also related the two sets of unconstrained and constrained supersymmetry transformations by performing a Fourier transformation, a field redefinition, and a change of variable. As we know, in the helicity limit µ → 0, the continuous spin representation becomes reducible and decomposes into the direct sum of all helicity representations. Therefore, in this limit, the bosonic (fermionic) CSP field gives rise to an infinite set of bosonic (fermionic) higher spin fields in which each spin appears only once. Let us mention that there is a different formulation in which similar infinite sets of higher spin fields appear. In this formulation the infinite sets of higher spin fields are described as scalar and spinor fields in the so-called tensorial (or hyper) spaces (for a review and references see [62]).
Supersymmetric higher spin models constructed in hyperspace [63][64][65][66] describe infinite-dimensional higher spin supermultiplets and thus differ from the conventional higher spin supermultiplets obtained in the helicity limit of the supersymmetric CSPs. The constrained formulation of the continuous spin theory à la Fronsdal is more favorable for the higher spin community; however, calculations seem more convenient in the unconstrained formulation à la Segal or Schuster-Toro. For instance, one can see at a glance that the supersymmetry transformations (60), (61) take a more compact form than those in (97), (98); nevertheless, both are equivalent and can be converted into each other. Therefore, it would be interesting to establish the massive higher spin gauge theory in the unconstrained formulation and to find its supersymmetry transformations, which will probably take a simple form equivalent to the existing ones [48]. In addition, it would be worthwhile to construct a supersymmetric massive higher spin gauge theory in the constrained formulation, such that taking the continuous spin limit (m → 0, s → ∞ while ms = µ = constant) reproduces the result of this paper. It would also be interesting to extend the cubic interaction vertices for the N = 1 arbitrary spin massless supermultiplets [67], [68] to the continuous spin gauge theory. It is convenient to show that the operators P_Φ and P_Ψ, defined in (119) and (120), are related to each other. The quantities ∂^α_ω, ∂²_ω and ω^α act on the bosonic operator P_Φ (119) as (for details, see the appendices in [9]) ω^α P_Φ = P_Φ ω^α + ω² ω^α …, where the terms containing O(ω⁴) in the last two relations are eliminated at the level of the action, owing to the double-traceless condition on the gauge field Φ(x, ∂_ω), (ω²)² = 0.
10,963.4
2019-12-27T00:00:00.000
[ "Physics" ]
Exploring Amharic Sentiment Analysis from Social Media Texts: Building Annotation Tools and Classification Models This paper presents the study of sentiment analysis for Amharic social media texts. As the number of social media users is ever-increasing, social media platforms would like to understand the latent meaning and sentiments of a text to enhance decision-making procedures. However, low-resource languages such as Amharic have received less attention due to several reasons such as lack of well-annotated datasets, unavailability of computing resources, and fewer or no expert researchers in the area. This research addresses three main research questions. We first explore the suitability of existing tools for the sentiment analysis task. Annotation tools are scarce to support large-scale annotation tasks in Amharic. Also, the existing crowdsourcing platforms do not support Amharic text annotation. Hence, we build a social-network-friendly annotation tool called ‘ASAB’ using the Telegram bot. We collect 9.4k tweets, where each tweet is annotated by three Telegram users. Moreover, we explore the suitability of machine learning approaches for Amharic sentiment analysis. The FLAIR deep learning text classifier, based on network embeddings that are computed from a distributional thesaurus, outperforms other supervised classifiers. We further investigate the challenges in building a sentiment analysis system for Amharic and we found that the widespread usage of sarcasm and figurative speech are the main issues in dealing with the problem. To advance the sentiment analysis research in Amharic and other related low-resource languages, we release the dataset, the annotation tool, source code, and models publicly under a permissive. Introduction Sentiment analysis is the task of detecting the orientation of someone's opinion and analyzing the emotions, feelings, and attitudes of a speaker or a writer in a piece of information concerning a certain situation, object, or event (Pandey and Govilkar, 2015). The most widely adopted approach in sentiment analysis to explore opinions is by employing very large datasets that target products and services, political, economical, social, and cultural feelings (Kauffmann et al., 2019;Caetano et al., 2018;Lennox et al., 2020). Understanding the sentiment of text content helps governments, organizations, and institutions to make correct, timely, and economical decisions (De Souza Bermejo et al., 2019). Sentiment analysis has been researched intensively for resource-rich languages such as English and German (Liu, 2012;Feldman, 2013;Tymann et al., 2019;Akhtar et al., 2016;D'Andrea et al., 2015;Wojatzki et al., 2017). However, existing models and approaches for most resource-rich languages can not easily be adapted to Amharic due to context variations in language, culture, and technology, especially for social media communication (Gangula and Mamidi, 2018). The works by Gezmu et al. (2018) and Abate and Assabie (2014) indicate that natural language processing (NLP) components, such as part of speech tagging (POS), named entity recognition (NER), and sentiment analysis are nontrivial due to the morphological, syntactic, and semantic complexity of the language. The absence of well-annotated corpora and NLP resources like parsers and taggers make Amharic sentiment analysis still challenging (Gezmu et al., 2018;Pandey and Govilkar, 2015). 
In addition to resource scarcity and morphological challenges for Amharic, the nature and structure of social media texts is also another bottleneck by itself (Nakov et al., 2013). On one hand, the statements in social media are noisy and do not usually follow proper rules, contain spelling errors, mixed scripts, and non-standard abbreviations (Badaro et al., 2019). On the other hand, punctuation marks, emoticons, emojis, and even other special symbols are very crucial to portray sentiment orientation in context. These issues make social media texts challenging for sentiment analysis tasks (Virmani et al., 2017;Badaro et al., 2019). In general, sentiment analysis research for low-resource languages is under-researched. For Amharic, the first attempt of sentiment analysis is described by Philemon and Mulugeta (2014), who focused on the prediction of sentiment polarity. They have used a Naïve Bayes classifier based on unigram and bigram features on 600 tweets. While the datasets they have used are very small in size, it is not also publicly available for further investigation. Considering the importance of sentiment analysis tasks for several applications, it is essential to properly explore the challenges, develop readily usable models, and describe future challenges. The main motivation of this work is to address sentiments analysis issues of Amharic comments, which become widely available on Facebook and Twitter. Another motivation stems from annotation tool challenges, where one of the major problems is the limited bandwidth in Ethiopia. Also, people favor smartphones over desktop applications. Hence classical web-based annotation tools will not be suitable. Moreover, the majority of crowdsourcing platforms, for example, MTurk, do not support workers and task requesters from Ethiopia. The main foci of this work are, 1) to explore different annotation strategies and tools for low-resource languages, 2) to collect a large dataset, and 3) to build different machine learning models for Amharic sentiment analysis. We will also publicly release the collected datasets, annotation tools, pre-trained models, and the associated source codes to advance the sentiment analysis research in Amharic. In this work, we will address the following research questions: RQ1) How to identify appropriate data annotation tools and collect large-scale sentiment analysis corpus for Amharic? RQ2) Which machine learning model is most appropriate for Amharic sentiment analysis? RQ3) What are the main challenges in Amharic sentiment analysis? The remainder of the paper is organized as follows. In Section 2, we will discuss the data acquisition, annotation strategy, annotation tools, and characteristics of the annotated data. In Section 3, the main approaches in the development of sentiment analysis tasks and related works in Amharic are presented. In Section 4, we have discussed the different experimental setups that are used to build different models. While Section 5 discusses the experimental results and analyses the different errors associated with the proposed models, Section 6 briefly summarises the main findings of the study. Data Collection and Analysis In this section, we will briefly describe the data collection and sampling strategies we have followed to annotate the dataset. Furthermore, we discuss the limitations of the existing annotation tools and proposed a novel annotation tool that is appropriate for low-resource language data collection. 
Data Acquisition and Dataset Characteristics The data source for this study is the Ethiopic Twitter Dataset for Amharic (ETD-AM) that we have collected using the Twitter API and previously introduced in Yimam et al. (2019). The collection spans two months of data (December 2019 and January 2020) that are selected purposefully due to specific political and social events happening in Ethiopia. During those months: 1) The current Ethiopian Prime Minister Dr. Abiy Ahmed has received the 100 th Nobel peace prize. 2) Around 17 university students were kidnapped. 3) The ruling party EPRDF was resolved and transformed itself to 'prosperity party', and 4) many other socio-political changes such as religious conflicts were aggravated and were intensely discussed in mainstream and social media platforms. In total, we have collected more than 300k tweets. Because of resource limitations for annotating 300k tweets, we have selected tweets based on an extended sentiment lexicon (435 positive and 660 negative lexicon entries) generated by Gebremeskel (2010). Table 1 shows sample positive and negative examples with their English translation from the Amharic sentiment lexicon. For the final dataset annotation, we have considered tweets that contain at least one sentiment lexicon entry. Our final dataset is comprised of 9.4k tweets, where each tweet is annotated by three different users. Annotation Strategy Annotation is one of the most laborious and complex tasks in the process of developing machine learning components (Wang et al., 2013;Finlayson and Erjavec, 2017). In countries where technology is not fully exploited, people usually conduct surveys or annotations using manual labor, for instance, by answering to a printed version of questionnaires (Dickinson et al., 2019;Lupu and Michelitch, 2018). For the machine learning approach, this entails that the questionnaire should be transformed into a digital format that takes a lot of effort and could even introduce errors during encoding (Lupu and Michelitch, 2018). To annotate a large number of datasets with minimum compensation, one has to use crowdsourcing platforms (Wang et al., 2013;Sabou et al., 2014). However, most of the crowdsourcing platforms require a complicated registration process, especially to incorporate transaction of payments for workers. For example, Amazon Mechanical Turk (MTurk) does not allow registration of task requesters from Ethiopia 1 . Moreover, we can not conduct our annotation using MTurk for Amharic sentiment analysis as there are no adequate online payment methods in Ethiopia and a lack of Amharic speakers on MTurk. Hence, our first possible approach was to label our dataset using a spreadsheet application where we distribute our data in a tabular format (see Figure 1a). The users found that annotating the data using a spreadsheet is time-consuming, error-prone, and difficult to interact with. Therefore, we build a specific web-based annotation tool that can facilitate the annotation process. Figure 1b shows the annotation tool with the annotation instructions, the tweets to annotate, and the annotation types (sentiment classes). While this approach was more attractive and easy to use, the main challenge was that a lot of users in Ethiopia do not use desktop computers. Besides, most users have no experience in using web browsers with their mobile phones, as only 10% of smartphone users spent time in web browsers, and the rest 90% mostly spent time only in applications (Wurmser, 2018). 
This is a critical limitation to collect large-scale datasets from users with various backgrounds. As the number of mobile device users is tremendously increasing (Sabou et al., 2014), social mediabased annotation tools for the data collection could be an option. We choose chatbots as a great candidate (Fadhil and Villafiorita, 2017) to perform sentiment analysis annotations and extract emotional context from a text corpus. A lot of users adopt Telegram Messenger to channel and share data for their followers. We opt to build a Telegram Bot-based chatbot for our work due to its applicability for the task, availability of active users, and simplicity of use. Design of the Annotation Tool: Amharic Sentiment Annotator Bot (ASAB) Telegram Bot 2 supports a client-server architecture, where it enables users with Telegram accounts to directly communicate with 'Telegram Server'. We have implemented an application server that can communicate with the 'Telegram Server' using different endpoints. The '/start' endpoint initiates the communication with our application server while the '/instruction' endpoint displays the annotation in- structions. The '/update' and '/end' endpoints enable us to receive responses (annotations) from the users and quit the communication with our server respectively. The user interface of ASAB as it can be seen on the user's mobile device is depicted in Figure 1c. ASAB is designed to support rewards (in the form of mobile card vouchers) as soon as the user successfully annotated enough tweets. After conducting a pilot study, the number of tweets to annotate and get a reward was set at 50. When the worker completes the task, the voucher will be displayed instantly to the user. In general, controlling the quality of the annotated data by blocking bad workers or spammers is crucial on crowdsourcing platforms (Stenetorp et al., 2012;Hovy and Lavid, 2010). The chatbot-based annotation is much more restrictive, mainly designed with built-in control mechanisms to assure annotation quality. ASAB integrates a controlling strategy in the form of control questions. For every 6 tweets, we have included one control question with a known answer. Users who have made 3 consecutive mistakes will receive a warning message. If the user still keeps on randomly annotating the tweets, he/she will be blocked after the fourth attempt. Another challenge in the crowdsourcing annotation framework is the preparation of concise annotation instructions that users can read and understand it instantly. However, it is very tough to display long instructions and annotation examples that can fit mobile devices. To mitigate this issue, we have published a separate web page that shows detailed instructions and annotation examples 3 . We have also prepared a YouTube video demonstrating the annotation steps 4 . Even if such elaborated instructions exist, users are tempted to start the annotation task immediately. We partly address this limitation by presenting minimal instructions and examples every time a user restarts the annotation task, before displaying the first tweet. Besides, since the texts are collected from Twitter and as we do not have direct control of the content, some of the tweets might not be appropriate for users. Thus, users are warned about such texts and they have to agree before proceeding to the main annotation task. 
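To illustrate the control-question mechanism described above, the following minimal Python sketch encodes the stated thresholds (one control question per 6 tweets, a warning after 3 consecutive mistakes, blocking on the 4th); the class and function names are illustrative assumptions, not the released ASAB code.

```python
# Sketch of ASAB-style annotation quality control (illustrative only).
CONTROL_EVERY = 6   # one control question is injected for every 6 tweets
WARN_AFTER = 3      # warn after 3 consecutive control mistakes
BLOCK_AFTER = 4     # block on the 4th consecutive mistake

class AnnotatorState:
    """Tracks one Telegram user's control-question record."""

    def __init__(self):
        self.consecutive_mistakes = 0
        self.blocked = False

    def record_control_answer(self, correct: bool) -> str:
        """Update the state after a control question and return the action."""
        if correct:
            self.consecutive_mistakes = 0
            return "ok"
        self.consecutive_mistakes += 1
        if self.consecutive_mistakes >= BLOCK_AFTER:
            self.blocked = True
            return "blocked"
        if self.consecutive_mistakes >= WARN_AFTER:
            return "warning"
        return "ok"

# Usage: the bot would call record_control_answer() whenever the served tweet
# index is a multiple of CONTROL_EVERY and the gold answer is known.
state = AnnotatorState()
for answer in [False, False, False, False]:
    print(state.record_control_answer(answer))   # ok, ok, warning, blocked
```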
Analysis of Annotated Data We have collected 9.4k tweets, a total of 143,848 words (total tokens), and 45,525 types (unique tokens) that are annotated by employing ASAB where each tweet is annotated by three users. In total, 92 Telegram users have visited ASAB while 53 users (58% of the total users) completed at least 50 tweets (rewarded one mobile card voucher). Furthermore, the system blocked 4 users who have made consecutive mistakes while annotating the control questions. ASAB is the first of its kind to conduct surveys based on a specific reward scheme, which is mobile card vouchers. In the beginning, it was challenging to convince annotators regarding the reward. We have advertised the task on several social media networks such as Facebook and Twitter. We have also created a Telegram channel group to announce the release of new tasks and to answer questions raised by the annotators. Once the annotators obtain their first mobile card voucher as a reward, we have observed that the popularity of ASAB has increased and users rely on our system. We have conducted the annotation in batches. In the first batch, it took more than two days to complete the annotation. Whereas, in the second and third iterations, the annotation was completed in less than 6 hours for each iteration. From the annotated dataset, we have observed that 2 or more annotators agreed on the 7,317 tweets (78%) while all the three annotators disagree on the remaining 2,072 tweets (22%). As an indicator of annotation quality, Fleiss' Kappa measurement metrics are used to evaluate the inter-annotator agreement (IAA) of the sentiment corpus. Fleiss' Kappa is chosen for its suitability to compute the inter-annotator agreement for multi-rater annotators (number of annotators larger than two) (Pustejovsky and Stubbs, 2012). A substantial level of inter-annotator agreement (0.785) is achieved on the four sentiment labels. This inter-annotator agreement value is relatively higher (very close to an almost perfect agreement, which spans from 0.81-1.00). The score indicates that human annotators can associate meanings and sentiments to texts despite the lack of background contexts for the tweets and the complex properties of social media texts such as incomplete phrases and sentences, mixed script, figurative and sarcastic speeches, and spelling and grammar errors. Even though the ASAB tool and the rewarding scheme are completely new to the annotators, the high IAA result indicates that ASAB is appropriate for sentiment analysis annotation tasks, which addresses our research question 1 (RQ1). We tried to analyze some of the tweets on which the three annotators fail to agree as can be seen in Table 2. For example, the tweets of the 'Sarcasm' group are difficult to interpret as they need extensive background knowledge. For instance, the first example might have a different interpretation depending on what "butter" refers to. Figurative speech is also very common in the Amharic language that is also present in abundance in the Twitter dataset. 'Mixed-Script' and 'Incomplete' tweets also need more background information as well as knowledge of other languages. For instance, in Table 2, the first 'Mixed-Script' example represents a tweet, which is an English sentence transliterated in 'Fidel' script. Related Works According to Liu (2012), opinion mining is a field of study that analyzes people's opinions, sentiments, evaluations, attitudes, and emotions from written language. 
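As a hedged illustration of how the reported inter-annotator agreement can be computed, the sketch below uses statsmodels' Fleiss' kappa implementation on per-tweet ratings from the three annotators; the class encoding and data layout are assumptions and not necessarily the authors' exact pipeline.

```python
# Sketch: Fleiss' kappa for 3 annotators over 4 sentiment classes.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings: shape (n_tweets, 3); each value is a class id.
# Encoding is an assumption: 0 = positive, 1 = negative, 2 = neutral, 3 = mixed.
ratings = np.array([
    [0, 0, 2],
    [1, 1, 1],
    [2, 3, 2],
    # ... one row per annotated tweet
])

table, _ = aggregate_raters(ratings)        # per-item counts for each category
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa: {kappa:.3f}")        # the paper reports 0.785 on 9.4k tweets
```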
It can be asserted that the expansion of digital technology and the volume of data made available by such technologies affect the trends of the sentiment analysis task. The work by Feldman (2013) differentiates sentiment analysis into four levels. Document-Level Sentiment Analysis: This is the simplest form of sentiment analysis, and it is assumed that the document contains an opinion on one main object expressed by the author of the document. Sentence-Level Sentiment Analysis: A single document may contain multiple opinions, even about the same entities. When we want to have a more generalized view of the different opinions expressed in the document about the entities, we must move to the sentence level. Aspect-Based Sentiment Analysis: The above methods work when either the whole document or each individual sentence discusses a single concept. However, in many cases, people talk about entities that have many aspects (attributes) and hold a different opinion about each of the aspects. Comparative Sentiment Analysis: Usually, people do not give opinions about a product directly; instead, they give comparative opinions. Sentiment analysis systems for such situations identify sentences that contain a comparative opinion and extract the preferred entity(-ies) in each opinion. Sentiment Classification Approaches The work by Anitha et al. (2013) defines sentiment classification as a task of categorizing sentimental text in a specific document into 'positive' or 'negative' classes. This approach completely ignores the neutral and mixed sentiment classes. We therefore follow the four-class categorization of De Souza Bermejo et al. (2019) to explore which classes mostly fit the tweets in the Amharic language. However, as depicted in Figure 2a, the number of mixed examples is much lower than the rest. The types of tweets labeled as mixed by the annotators and their significance during the model evaluation will be discussed in Section 5.2. Sentiment Analysis for Amharic The task of sentiment analysis for low-resource languages like Amharic remains challenging due to the lack of publicly available datasets and the unavailability of required NLP tools. Moreover, there have been no attempts to analyze the complexities of sentiment analysis on social media texts (e.g. Twitter datasets), where the intended meaning is highly context-dependent and influenced by the user experience (Gangula and Mamidi, 2018). The existing works in Amharic either targeted the generation of a sentiment lexicon or were limited to the manual analysis of very small collections of social media texts. The work of Alemneh et al. (2019) focuses on the generation of an Amharic sentiment lexicon using the English sentiment lexicon. The English lexicon is translated to Amharic using a bilingual English-Amharic dictionary. The work proposed by Gebremeskel (2010) describes a rule-based sentiment polarity classification system. Using movie reviews, 955 sentiment lexicon entries are generated. The system then tries to detect the presence or absence of the positive and negative terms from the lexicon to classify the polarity of the document. We have extended this sentiment lexicon (to a total of 1194 entries) to select tweets for further annotation. We have found that, even after filtering tweets with the sentiment lexicon, the majority of the tweets are annotated as 'neutral'. This indicates that the sole use of a sentiment lexicon is not enough to build a proper sentiment classification model. Experimental Setup For the sentiment classification task, we follow the document classification approach as it is mainly addressed in the literature (Prabowo and Thelwall, 2009).
Our tweets mainly constitute of one or two sentences, which are limited to the maximum length of texts allowed by Twitter. We explore both classical supervised machine learning and deep learning approaches for the classification task. Instead of manually crafted features, we have used automatic text representation techniques such as Term-Frequency Inverse Document Frequency (TF-IDF) and word representations (embeddings). The TF-IDF document representation is produced using the scikit-learn (Pedregosa et al., 2011) CountVectorizer and TF-IDFTransformer built-in methods. For the TF-IDF computation, each tweet is considered as a document. To build word2vec-based word representations, we have collected around 15 Million sentences from dif-ferent sources, such as a News dataset using the Scrapy Python API 5 , YouTube comments using the YouTube Data API 6 , and a Twitter dataset using the Twitter API 7 . We collect the datasets every day and store relevant metadata such as the date, title, and language of the dataset. As we have discussed in Section 2 and Section 3.1, each tweet is annotated with 'Positive', 'Negative', 'Neutral', and 'Mixed' sentiment classes. The 9.4k annotated tweets are further split into training, development, and test instances using an 80:10:10 split. We have used the development dataset to optimize the learning algorithms. All the results reported in the remaining sections are based on the test dataset instances. After error analysis, we found out that tweets annotated with the 'Mixed' class are noises, which can be regarded as 'Positive', 'Negative', or 'Neutral'. Hence, we further cleanse the dataset and exclude tweets labeled as 'Mixed', which leads to a final dataset of size 8.6k tweets. We follow the same split and report the results accordingly (see column 'Cleaned' in Table 3). Baseline Methods We build the baseline models using the scikit-learn Python machine learning framework. The 'Dum-myClassifier' includes the following strategies to build the baseline models: 1) Stratified: Generates predictions by respecting the training set's class distribution. 2) Uniform: Generates predictions uniformly at random. 3) Most frequent: Always predicts the most frequent label in the training set. Supervised Approach In this work, we do not consider handcrafted features, such as N-gram features, lexical and syntactic features, word frequencies, and sentiment lexicon entries to train a supervised machine learning model. Instead, we rely on word representations that are obtained using different approaches. For the supervised machine learning approach, we have used TF-IDF and word embeddings. We have used the following machine learning algorithms from scikit-learn based on the TF-IDF feature vectors. Support Vector Machine (SVM): It is a machine learning algorithm for two-group classification problems. The 'SGDClassifier' in sci-kit learn supports multiclass probability estimation, which is derived from the binary 'one-versus-rest' estimates by simpler normalization (Cortes and Vapnik, 1995;Zadrozny and Elkan, 2002). The hyperparameters used for final model include (loss='modified_huber',penalty='l2',alpha=1e-3,and max_iter=100). K-Nearest Neighbor (KNN): KNN works by determining the nearest neighbors to a given query and use those classes to predict the right class of the query (Cunningham and Delany, 2020). We use n_neigh-bors=10 and weights='distance' as a hyperparameter to build the model. 
Logistic Regression: This is a common supervised learning technique that classifies a text into two or more classes. It employs a discriminative classification approach (Jurafsky and Martin, 2019). The model is tuned with the following parameters: solver='newton-cg', multi_class='multinomial', and max_iter=100. NearestCentroid: This is a simple machine learning approach that achieves classification by assuming a locally constant class-conditional probability. It computes the centroid of each class and assigns a given observation to the class with the nearest centroid (Pedregosa et al., 2011). The default parameter settings are used to build the final model.
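Before turning to the deep learning models, the following minimal scikit-learn sketch illustrates the TF-IDF-based supervised setup with the SGDClassifier hyperparameters quoted above; the placeholder data and the 80:10:10 split shown here are illustrative assumptions, not the authors' released code.

```python
# Sketch: TF-IDF features + SVM (SGDClassifier) with the hyperparameters quoted above.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Placeholder data; in the paper this would be the 9.4k annotated tweets (assumption).
tweets = [f"tweet text {i}" for i in range(10)]
labels = ["positive", "negative", "neutral", "positive", "negative",
          "neutral", "positive", "negative", "neutral", "positive"]

# 80:10:10 split into train / dev / test, as described in the text.
train_x, rest_x, train_y, rest_y = train_test_split(tweets, labels,
                                                    test_size=0.2, random_state=42)
dev_x, test_x, dev_y, test_y = train_test_split(rest_x, rest_y,
                                                test_size=0.5, random_state=42)

svm_pipeline = Pipeline([
    ("counts", CountVectorizer()),      # each tweet is treated as one document
    ("tfidf", TfidfTransformer()),
    ("clf", SGDClassifier(loss="modified_huber", penalty="l2",
                          alpha=1e-3, max_iter=100)),
])

svm_pipeline.fit(train_x, train_y)
pred = svm_pipeline.predict(test_x)
print("weighted F1:", f1_score(test_y, pred, average="weighted"))
```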
Deep Learning Approach Over the previous couple of years, many NLP applications have started employing deep learning approaches for their automation components (Young et al., 2018). Unlike classical supervised machine learning approaches, deep learning methods avoid the hand-crafted feature engineering pre-processing steps (Minaee et al., 2020). The effectiveness of deep learning models is improving over time as newer algorithms, better hardware infrastructure, and, above all, substantially larger amounts of free text become available (Torfi et al., 2020). Unlike for high-resource languages such as English and German, the impacts, limitations, and perspectives of using deep learning models in sentiment analysis for low-resource languages, particularly for Amharic, are not yet explored. In this work, three types of embeddings, namely static (Mikolov et al., 2013), contextualized (Devlin et al., 2019), and network (Hamilton et al., 2017) embeddings, are considered to build different deep learning models for sentiment classification. Word2Vec: Word2vec learns word representations (word embeddings) using a two-layer neural network architecture (Mikolov et al., 2013). Embeddings can be computed by feeding a large set of texts as input to the neural network architecture. We have used the Gensim Python library (Řehůřek and Sojka, 2011) to train the embeddings using the default parameters. Network embeddings: Network embeddings represent the nodes of a graph as low-dimensional vectors (embeddings) that preserve the relationships between nodes (Hamilton et al., 2017; Sevgili et al., 2019; Cai et al., 2018). In this paper, we first compute the network-based distributional thesaurus (DT) (Ruppert et al., 2015) and later convert the DT into network embeddings following the approach by Jana and Goyal (2018). Contextual embeddings: With the release of Google's Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019), word representation strategies have shifted from the traditional static embeddings to contextualized embedding representations. BERT-like models have an advantage over static embeddings as they can accommodate different embedding representations for the same word based on its context. In this task, we have used RoBERTa (A Robustly Optimized BERT Pre-training Approach), which is a replication of BERT developed by Facebook (Liu et al., 2019). Unlike BERT, RoBERTa removes the 'next sentence prediction' objective, allows training on longer sequences, and dynamically changes the masking pattern. We also train and fine-tune contextual embedding models using the FLAIR framework (Akbik et al., 2018; Akbik et al., 2019). Document Embedding: Unlike word embeddings, document embeddings provide a single embedding for the entire text (sentence, paragraph, or the entire document). The FLAIR framework has document embedding implementations such as 'DocumentPoolEmbeddings', which produces document embeddings from pooled word embeddings, and 'DocumentLSTMEmbeddings', which produces document embeddings from an LSTM over word embeddings (Akbik et al., 2019). Table 3 shows the experimental results based on the baseline, supervised, and deep learning models. As we can see in the table, both the supervised and deep learning approaches outperform the three baseline systems. We have followed the suggestions by De Souza Bermejo et al. (2019) to categorize sentiment classes into 'positive', 'negative', 'neutral', and 'mixed'. However, we have observed that the number of tweets annotated as 'mixed' is substantially smaller than for the other classes, see Figure 2a. For this reason, we have conducted two variant experiments: 1) with the whole dataset ('All' column in Table 3) and 2) removing all the 'mixed' instances from the training data ('Cleaned' column). For both datasets ('All' and 'Cleaned'), the models based on the deep learning approach (models with prefix F-) perform better than the supervised machine learning approaches. Moreover, the models perform better when the 'mixed' sentiment class is removed from the dataset; hence, the 'mixed' sentiment class can be considered as noise for the model. Discussion Concerning the dataset, we have observed that, with proper integration of control questions and concise and clear instructions, the Telegram bot-based annotation strategy is viable for low-resource language data collection. The reward technique in ASAB can be enhanced with a more general vouchering system, such as incorporating Amazon vouchers or even direct monetary rewards, which would be more attractive. The significant contributions of the tool are: 1) ASAB, a Telegram bot-based annotation tool, does not require the installation of extra applications. 2) Verification and authentication of users are managed directly by the Telegram server. 3) It is very convenient to communicate directly with the users in case of errors or problems. 4) The annotation can be conducted even with a very slow internet connection. Table 3: Experimental results using the test set. The 'All' column shows performance on the four sentiment classes and the 'Cleaned' column shows the performance on three classes ('mixed' class removed). Regarding the developed classifier models, we have observed that: 1) Deep learning models generally outperform the classical supervised models. 2) Fine-tuned pre-trained models perform very well (compare the results of the 'F-Multi-flair' and 'F-Multi-Flair-Finetuned' models). 3) Most interestingly, models based on network embeddings perform better than Word2Vec embeddings (see the 'F-DeepWalk' and 'F-Role2Vec' models). Hence, using network embeddings in a deep learning setup works better for the Amharic sentiment analysis task, which addresses our research question 2 (RQ2). As can be seen from Figure 2b and Figure 2c, the supervised machine learning model based on 'NearestCentroid' predicts more 'mixed' sentiment classes than the deep learning model based on 'role2Vec' document embeddings. However, 'NearestCentroid' fails to correctly classify the 'neutral' and 'positive' sentiment classes.
Hence, we suggest that it is better to use ensemble methods based on supervised and deep learning models to improve classification performance. Error Analysis We have randomly selected tweets where the model prediction (using the 'role2Vec' model) and the user annotations differ. As can be seen from Table 4, for the errors under 'a)', the machine was able to correctly classify these tweets while the users wrongly annotate them to different classes. We suppose that such annotation errors by the users occur due to 1) users press the wrong button by mistake, 2) some users might not understand the tweet, or 3) due to slow internet connection, some users reported that there was a delay between the first and the second tweet. At this interval, it is possible to click the button for the second tweet even before the actual tweet is displayed to the user. The tweets under 'b)' are wrong predictions of the model. These tweets are mostly figurative speech, for example, the first tweet in the category contains the phrase ሆድ ይፍጀው -with a literal meaning of each word as "abdomen" and "burn" while the correct meaning is "Let it go, I can't say anything". The model seems to consider the literal meaning and classify it as 'neutral'. We found out that sarcasm, figurative speech, mixed scripts, incomplete phrases and sentences, and spelling and grammar errors in social media texts are the main challenges for Amharic sentiment analysis (answering our research question 3 -RQ3). Conclusion and Future Directions In this paper, we have presented the first work of sentiment analysis for the Amharic language based on the Twitter dataset. The source dataset is collected using the Twitter API for two months, targeting only tweets written in the 'Fidel' or 'Ethiopic' script. Non-Amharic texts in languages such as Geez and Tigryinga are removed. Using extended sentiment lexicon, a total of 9.4k tweets are processed for sentiment class annotation. As there is no well-established annotation framework to conduct an annotation task for the Amharic sentiment classification, we developed a mobile and social network-based annotation tool called ASAB using the Telegram bot chatbot framework. ASAB incorporated the following: 1) Support of parallel annotation for a large number of users. 2) Integrate controlling mechanisms to block spammers or users with repetitive wrong annotations. 3) Employ an automatic rewarding scheme in the form of mobile card vouchers, which can be extended for various vouchering systems. 4) Allow seamless communication between the users and the annotation task managers in the form of Telegram and email messages. ASAB demonstrated a success story for tackling low-resource language data annotation problems, which resembles the existing crowdsourcing annotation platforms. We further developed different classical supervised machine learning and deep learning models that are trained on the collected dataset. While the supervised models performed significantly better than the baseline systems, the deep learning models demonstrated superior performance over the classical supervised approaches. The dataset, the extended sentiment lexicon, the best performing models, and associated source codes are released under a permissive license 8 . In the future, we plan to integrate different rewarding mechanisms to the ASAB tool to increase its usability. ASAB can also be extended for different annotation problems such as named entity recognition, relation extraction, machine translation, and so on. 
Moreover, we will improve the classification models by employing an ensemble approach using batteries of supervised and deep learning models, to lift automatic sentiment labels to a usable level in applications.
7,303.8
2020-12-01T00:00:00.000
[ "Computer Science" ]
Antibiotic susceptibility profile of bacterial pathogens isolated from febrile children under 5 years of age in Nanoro, Burkina Faso. Background: The curative power of antimicrobials is severely threatened due to emerging resistance to first-line antibiotics worldwide. With a limited reserve of antibiotics, increasing antimicrobial resistance has become a global concern, but there is a paucity of such data in Burkina Faso, and the West African region in general. Therefore, this study aims to determine the antibiotic susceptibility profile of bacterial species isolated from febrile children under 5 years of age in Nanoro (Burkina Faso). Methods: Clinical specimens (blood, stool, and urine) were collected from 1099 febrile children attending the peripheral health facilities and the referral hospital in Nanoro. Bacterial isolates from these clinical specimens were assessed for their susceptibility against commonly used antibiotics by the standard disc diffusion procedure and the minimal inhibitory concentration method (when appropriate). Results: In total, 141 bacterial strains were recovered from 127 febrile children, of which 65 strains were isolated from blood, 65 from stool, and 11 from urine. Predominant bacterial isolates were Salmonella species (56.7%; 80/141) followed by Escherichia coli (33.3%; 47/141). Antibiotic susceptibility testing revealed that Salmonella species were highly resistant to ampicillin (70%; 56/80), trimethoprim-sulfamethoxazole (65%; 52/80), and chloramphenicol (63.8%; 51/80). E. coli isolates were highly resistant to trimethoprim-sulfamethoxazole (100%), ampicillin (100%), ciprofloxacin (71.4%; 10/14), amoxicillin-clavulanate (64.3%; 9/14), ceftriaxone (64.3%; 9/14), and gentamicin (50%; 7/14). Moreover, 7 out of 14 E. coli isolates were producers of the β-lactamase enzyme, suggesting multi-drug resistance against β-lactam as well as non-β-lactam antibiotics. S. pneumoniae isolates were fully resistant to tetracycline and 50% were resistant to penicillin G. Multi-drug resistance was observed in 54.6% (59/108) of the isolates, of which 56 (54.9%) were Gram-negative bacteria and 3 (50.0%) Gram-positive bacteria. Conclusions: The antibiotic susceptibility profiling showed an alarmingly high resistance to the antibiotics commonly used to treat bacterial infections in the study region. The work prompts the need to expand antibiotic resistance surveillance studies in Burkina Faso, and probably the whole region (West Africa). Moreover, it implies the need for a revision of the antibiotic-treatment guidelines by the Ministry of Health in Burkina Faso to avoid further development of resistance. Background The development of antibiotics against bacterial infections has been one of the greatest achievements of modern medicine (1)(2)(3)(4)(5).
However, the efficacy of these antibiotics is not endless, and this success is now being jeopardized by the increasing occurrence of antibiotic resistance (ABR). Nowadays, ABR is considered one of the most important threats to public health and one of the biggest health challenges that mankind faces (6)(7)(8)(9)(10)(11). Indeed, it is associated with an increased risk of infection severity, patient morbidity and mortality, prolonged hospitalization time, and healthcare costs (12). One of the main obstacles in low- and middle-income countries is the lack of practical tools in the primary healthcare facilities to reliably differentiate bacterial infections from other febrile infections. As a result, antibiotics are systematically prescribed without any evidence base, thereby significantly contributing to increasing ABR (13). To solve this alarming situation, the World Health Organization (WHO) has developed a global antimicrobial resistance (AMR) action plan that encompasses reinforcing AMR knowledge through surveillance and research (12). A better understanding of local AMR patterns is crucial, firstly to guide the clinical management of infectious diseases and secondly for the early detection of ABR to first-line antibiotics used in primary healthcare facilities. However, the information about the true extent of the antibiotic resistance threat in the African region is limited to 6 out of 47 countries where studies on AMR have been performed. The resulting gap in monitoring AMR weakens decision-making on antibiotic resistance policy (14,15). The same applies to Burkina Faso, where studies have revealed the worrying situation of the most commonly prescribed first-line antibiotics in primary healthcare facilities, such as amoxicillin, amoxicillin-clavulanic acid and ampicillin (9,10,(16)(17)(18). These studies highlight that significant resistance is recorded for several bacterial species, which have spread into hospitals and communities. It has, for example, been observed that nurses providing first-line care in primary healthcare facilities use the 10-year-old national treatment recommendations (18), but this guideline does not contain up-to-date information about the resistance profiles of the different circulating bacterial strains and species. In addition, this guideline is mostly based on findings in high-income countries and does not necessarily reflect the best treatment options for low-income countries such as Burkina Faso. Furthermore, this situation is exacerbated by the fact that the general public has access (without a proper prescription) to antibiotics at local markets and shops, where the supply and quality of drugs are not appropriately controlled. This not only threatens the effectiveness of current first-line antibiotic treatments used in peripheral health facilities, but also that of the second- and third-line antibiotics (6,19). There are currently no structural mechanisms in place in Burkina Faso to monitor antibiotic use and the susceptibility of bacteria to available antibiotics. The existing sentinel sites for antibiotic resistance surveillance are mainly in tertiary urban hospitals and often not operational. The present study aims to fill part of the gap in our knowledge on the current effectiveness of antimicrobials by presenting the antibiotic susceptibility profile of bacteria isolated from various clinical specimens of febrile children less than 5 years of age in the Nanoro health district, Burkina Faso.
Among the antibiotics tested in this study, several are recommended as first-line antibiotics by the Ministry of Health (MoH) of Burkina Faso to treat various bacterial infections (Table 1). According to this guideline, sepsis/suspected bacterial bloodstream infections (bBSIs) and suspected severe pneumonia are treated with ampicillin (AMP) and gentamicin (GEN). In cases of suspected typhoid fever, it is recommended to treat the infection with ciprofloxacin (CIP). Furthermore, it is advised to treat suspected simple pneumonia with trimethoprim-sulfamethoxazole (SXT) (18). For suspected cases of bacterial gastroenteritis (bGE), the first-line antibiotic is also CIP, and for suspected bacterial urinary tract infections (bUTIs) either SXT or amoxicillin (AMOX) is used (18). The first-line therapy for the treatment of meningitis infections is chloramphenicol (CL) and AMP; in case CL appears to be ineffective, ceftriaxone (CRO) is used as second-line treatment (18). Patients and clinical samples The present study was conducted in the framework of a larger project investigating the aetiologies, diagnoses, and treatment of febrile children in the Health district of Nanoro (20). Briefly, any child under 5 years of age attending the primary healthcare facilities or the referral hospital of Nanoro with documented fever (axillary temperature ≥ 37.5 °C) was invited to participate in the study. Cases were managed by health facility or referral hospital staff according to the Burkinabe national protocol of disease management based on the Integrated Management of Childhood Illness (IMCI) (21). Furthermore, clinical specimens (blood, stool and urine) were collected at enrolment for microbiological analyses at the laboratory of Microbiology of the Clinical Research Unit of Nanoro (CRUN). In case the children could not produce a urine or stool sample at the time of enrolment, sterile containers were provided to the legal guardian to collect these samples at home and return them as soon as possible to the health facility within 48 hours after inclusion. Written informed consent was obtained from parents or legal guardians before any data and specimen collection. The diameter of the inhibition zone was measured and recorded in millimetres for each disc and in micrograms per millilitre (µg/mL) for each E-test. The results of antibiotic susceptibility testing were interpreted according to the criteria of the CLSI (27,28). The antibiotic discs (BD Sensi-Disc™, Becton Dickinson and Company, B.V., Vianen, The Netherlands) used for antibiotic susceptibility testing, as well as the minimal inhibitory concentration (MIC; E-tests; Liofilchem S.r.l., Roseto degli Abruzzi (TE), Italy), are reported in Table 1. Furthermore, the extended-spectrum beta-lactamase (ESBL) producing Enterobacteriaceae were determined by using both ceftazidime (CAZ) (30 µg) and cefotaxime (CTX) (30 µg) discs, alone or in combination with clavulanate (C) (10 µg) discs, as described in CLSI (27,28). A bacterial strain was considered a potential ESBL producer when the inhibition zone diameters were ≤ 25 mm for the ceftriaxone (CRO) disc, ≤ 22 mm for the CAZ disc, and ≤ 27 mm for the CTX disc (27,28).
An Enterobacteriaceae isolate is unambiguously considered an ESBL-producing bacterium if the difference between the inhibition zone diameter of either antibiotic tested in combination with clavulanate, (CAZ + C) or (CTX + C), and the inhibition zone diameter of the corresponding antibiotic tested alone (CAZ or CTX) is ≥ 5 mm (27, 28). S. aureus isolates were considered MRSA strains when the inhibition zone diameter of the cefoxitin disc (FOX; 30 µg) on a Mueller Hinton (MH) agar plate was ≤ 21 mm after 16-18 hours of incubation, according to CLSI guidelines (27,28). An isolate was considered multi-resistant when it was resistant to at least one antibiotic agent in each of three antibiotic categories used for therapy or prophylaxis, based on the Burkina Faso national treatment guidelines. Data analysis The inhibition zone diameters for each antibiotic tested on each investigated bacterium were recorded using Excel 2016. These data were double entered by 2 independent technicians and subsequently validated by the lab manager. Data analysis was performed using STATA version 13 software. For the interpretation of the resistance rates of the strains identified in the present study, the following classification was used for the antibiotics tested: low (resistance rate < 20%), moderate (resistance rate from 20 to 50%), high (resistance rate from 50 to 75%), and alarming (resistance rate from 75 to 100%) (30, 31).
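To make the interpretation rules above concrete, the following minimal Python sketch encodes the stated ESBL screening and confirmation cut-offs and the resistance-rate categories; it is only an illustration of the thresholds quoted in the text, not a substitute for the CLSI breakpoint tables, and the function names and the handling of exact boundary values are assumptions.

```python
# Sketch of the ESBL screening/confirmation rules and the resistance-rate
# classification quoted above (illustrative only).

def esbl_screen(cro_mm: float, caz_mm: float, ctx_mm: float) -> bool:
    """Potential ESBL producer if any screening zone diameter is at or below
    the stated cut-offs: CRO <= 25 mm, CAZ <= 22 mm, CTX <= 27 mm."""
    return cro_mm <= 25 or caz_mm <= 22 or ctx_mm <= 27

def esbl_confirm(caz_mm: float, caz_clav_mm: float,
                 ctx_mm: float, ctx_clav_mm: float) -> bool:
    """Confirmed ESBL producer if clavulanate increases the zone diameter of
    CAZ or CTX by at least 5 mm."""
    return (caz_clav_mm - caz_mm) >= 5 or (ctx_clav_mm - ctx_mm) >= 5

def resistance_category(rate_percent: float) -> str:
    """Reporting classification: low < 20%, moderate 20-50%,
    high 50-75%, alarming 75-100%."""
    if rate_percent < 20:
        return "low"
    if rate_percent < 50:
        return "moderate"
    if rate_percent < 75:
        return "high"
    return "alarming"

# Example: 56 of 80 NTS isolates resistant to ampicillin -> 70% -> "high".
print(resistance_category(100 * 56 / 80))
```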
Less frequently isolated E. agglomerans (0.7%) and H. influenzae b (0.7%) were found to be sensitive to most of the antibiotics tested, except for trimethoprim-sulfamethoxazole (SXT), to which H. influenzae b showed 100% resistance. The single Klebsiella species isolated from urine was fully resistant to SXT (100%) and AMP (100%). In Table 5, the resistance rates to the first-line therapies recommended by the MoH of Burkina Faso are presented in more detail. The resistance rates of bacteria associated with bacterial gastroenteritis were in general low to moderate. However, in the case of bacterial urinary tract infections, the resistance rate of E. coli and Klebsiella against ampicillin (AMP) and SXT was 100%. AMP is commonly used to treat invasive bacterial infections, but high to alarming resistance rates were found in the present study. In contrast, GEN and CRO seemed to remain effective against NTS. In addition to the first-line antibiotics tested, all S. pneumoniae isolates showed resistance to TET (100%). As for S. aureus, 1 out of 2 isolates was resistant to clindamycin (CC) and tetracycline (TET). In contrast, CRO, which is used as the first-line antibiotic to treat bacterial meningitis, was shown to be effective against S. pneumoniae. Resistance profiling of co-infections The resistance profiling results of bacterial co-infections are presented in Table 6. In total, 11 bacterial isolates (10 NTS and 1 E. coli) were identified simultaneously in blood and stool. The resistance rate of the NTS strains identified from both infection sites was alarming for the first-line antibiotics (AMP and SXT) tested. Importantly, 2 children had three different types of infection: in one child, a single E. coli strain was responsible for bBSI, bGE, and bUTI; in the other child, 2 NTS strains were responsible for bBSI and bGE, and one E. coli caused bUTI. Overall, all the bacteria identified from these 3 infection sites were fully resistant to AMP and SXT, which are commonly used as first-line antibiotics to treat these infections. Table 6 Resistance rate to the recommended first-line therapy* for the treatment of the bacterial co-infections identified. Co-infection types: bBSI + bGE (11), bBSI + bUTI (2), bGE + bUTI (3), bBSI + bGE + bUTI (2); isolated bacteria as listed in the table. In total, 56/102 (54.9%) of Gram-negative and 3/6 (50%) of Gram-positive bacteria were MDR (Table 7). Ten out of fourteen (71.4%) E. coli strains isolated in this study were resistant to SXT, AMP and CIP. For Salmonella species, 56.3% (45/80) of the isolated strains were resistant to SXT, AMP and CL. These antibiotics are recommended by the national treatment guidelines of Burkina Faso to treat the infections found in this study (Table 5). Discussion Our findings are consistent with earlier observations (17) and with reports from other sub-Saharan African countries that also documented alarming resistance of E. coli and NTS to first-line antibiotics (32)(33)(34)(35). Urinary tract infections suspected to be caused by E. coli or Klebsiella species are treated with trimethoprim-sulfamethoxazole (SXT) or amoxicillin (AMOX) (the same antibiotic category as AMP), but these first-line antibiotics for UTI treatment showed a resistance rate of 100% in this study. Although low resistance of NTS to CIP (a fluoroquinolone) was found, the efficacy of this antibiotic must be carefully monitored, as it is widely used to treat bacillary dysentery in children under 5 years in West Africa (18,36). The present work also raised questions regarding the role that the urinary tract plays in the proliferation of β-lactamase-producing E. coli. It was found that 85.7% of the E. coli isolates from urine were β-lactamase producers.
The children from whom these bacteria were isolated could transmit these resistant E. coli strains to their mothers, subsequently to their families, and even to the community. Furthermore, strains producing β-lactamase usually show co-resistance to non-β-lactam antibiotics, such as aminoglycosides and fluoroquinolones (37)(38)(39). This explains the high resistance of E. coli isolated from urine to β-lactam antibiotics and the cross-resistance to non-β-lactam antibiotics reported in this study. It has been reported that this enzyme is predominantly produced by bacteria that are multi-resistant to the group of β-lactam antibiotics (1,37,39,40), which are frequently used to treat infections caused by Gram-negative bacteria such as Enterobacteriaceae. This observation, supported by data from another study from Burkina Faso (10) and from other African countries (15,33,41,42), implies that treatment options for bacterial diseases are further reduced. In particular, the treatment of paediatric bUTIs caused by ESBL-producing E. coli is nowadays seriously jeopardized by antibiotic resistance in sub-Saharan African countries. The observed high resistance of E. coli to a third-generation cephalosporin (CRO) and to fluoroquinolones (CIP), two essential antibiotics widely used in our study area, further pinpoints the severe threat of antibiotic resistance at the community level. Together, these data confirm that the efficacy of many first-line antibiotics commonly used in Nanoro to treat infections with principal bacterial pathogens such as E. coli and NTS is at high risk. This is likely to further undermine the precarious health systems in place in low- and middle-income countries (LMICs) such as Burkina Faso if nothing is done to stop the spread of resistance. It should be noted that our results are fully in line with other observations warning of declining antibiotic effectiveness (17,43). Therefore, action has to be taken urgently to prevent the inappropriate use of antibiotics that are still (highly) effective against common pathogenic bacteria encountered at primary health facilities. In order to deal with this threat, it is essential that practical tools or diagnostic algorithms that can easily be implemented in primary health care settings in LMICs be developed to correctly diagnose bacterial infections. Furthermore, national and regional guidelines for the integrated management of childhood illness (IMCI), which recommend syndrome-based management and treatment of bacterial infections, need to be reconsidered, as they may contribute to the spread of antibiotic resistance. The untargeted, prolonged, and repeated exposure of bacteria to essential antibiotics that results from the use of the IMCI guidelines contributes substantially to emerging resistance and jeopardizes action plans to fight it. Despite the rare cases of N. meningitidis (2 cases) and H. influenzae b (1 case) reported in the present study, it is relevant to note that these bacteria were fully susceptible to CL and CRO. This is important, as these antibiotics are used to treat meningitis as recommended by the MoH of Burkina Faso (the country lies in Lapeyssonnie's belt, the African meningitis belt). Moreover, GEN, used in combination with AMP as a first-line antibiotic, was shown to be effective against most of the pathogens isolated in this study, except for E. coli, which showed moderate resistance.
In addition, a low level of resistance to this antibiotic was found among NTS isolated from blood in the present study. This is worrying, as this combination has always been highly effective against Enterobacteriaceae in Burkina Faso, and it might be an indication of emerging resistance to this antibiotic. The study also revealed a high prevalence of MDR bacteria. The emergence of MDR is a serious public health problem and a threat to the effective treatment of bacterial infections. The emergence of specific MDR bacteria is closely linked to the use of broad-spectrum antibiotics for both presumptive and definitive therapy. Community spread of MDR bacteria greatly enlarges the population at risk and increases the number of infections caused by MDR bacteria. A limitation of the study is that it did not include respiratory tract infections, as these infections are often (presumptively) treated with antibiotics irrespective of the cause of infection (bacterial or viral). This treatment practice often leads to significant resistance (19,44). In the case of suspected simple pneumonia, for example, it is advised to treat with trimethoprim-sulfamethoxazole (SXT). In the present study, this antibiotic was found to be ineffective against many of the bacterial infections studied, and it would be valuable to determine its effectiveness against bacterial infections causing pneumonia. Another possible limitation of the study is the low number of Gram-positive bacteria isolated from the clinical specimens studied. The low prevalence of S. pneumoniae is likely a positive effect of the introduction of the pneumococcal conjugate vaccine into the expanded programme of immunization (EPI) in October 2013 (45,46). However, it remains a concern that the few isolates recovered in the present study (from blood) showed moderate to high resistance against the first-line antibiotics recommended in our study area (6,18). Finally, another limitation of our study is that the recruited children were not followed up post-treatment within the framework of the study. Consequently, it remains unknown whether the treatments administered actually failed or were successful in vivo. However, based on the evidence provided by the susceptibility testing, it is likely that several treatments failed, thereby jeopardizing the health of the children. We therefore propose updating the current national antibiotic treatment guidelines in order to use effective drugs to treat these infections. The study demonstrated that various first-line antibiotics are no longer effective for treating common bacterial infections, and a revision of the current treatment guidelines in Burkina Faso, and probably in the whole West African region, is needed. Based on our study outcomes we recommend the following revisions (Table 8): when sepsis or a simple (uncomplicated) bBSI is suspected, the proposed treatment would be a single third-generation cephalosporin (CRO). In the case of severe sepsis or severe bBSI, the proposed treatment would be a combination of a third-generation cephalosporin such as CRO with an aminoglycoside such as gentamicin (GEN). In the case of a suspected bUTI, we suggest distinguishing between hospitalized and non-hospitalized cases, because the route of administration of GEN may pose a safety risk for outpatients, as it needs to be administered intravenously. For a hospitalized patient with bUTI, the proposed treatment would be an aminoglycoside (GEN).
However, for a non-hospitalized case, we propose to use amoxicillin-clavulanate (AMC), which combines a β-lactamase inhibitor, clavulanic acid (C), with another antibiotic agent, amoxicillin, and can be administered orally. For the treatment of bGE we propose to use a fluoroquinolone (CIP), but it is important to monitor resistance to this antibiotic as well, since it is very frequently used even without proper laboratory examination and/or prescription. Conclusions In conclusion, this study showed alarmingly high resistance to many first-line antibiotics used to treat common bacterial infections in Burkina Faso, and a revision of the current treatment guidelines is needed. The work highlights the need to expand antibiotic resistance surveillance studies in Burkina Faso, and probably in the whole West African region. Abbreviations
5,324.2
2020-07-17T00:00:00.000
[ "Medicine", "Biology" ]
X-Ray Thomson scattering without the Chihara decomposition X-Ray Thomson Scattering (XRTS) is an important experimental technique used to measure the temperature, ionization state, structure, and density of warm dense matter (WDM). The fundamental property probed in these experiments is the electronic dynamic structure factor (DSF). In most models, this is decomposed into three terms [Chihara, J. Phys. F: Metal Phys. {\bf 17}, 295 (1987)] representing the response of tightly bound, loosely bound, and free electrons. Accompanying this decomposition is the classification of electrons as either bound or free, which is useful for gapped and cold systems but becomes increasingly questionable as temperatures and pressures increase into the WDM regime. In this work we provide unambiguous first principles calculations of the dynamic structure factor of warm dense beryllium, independent of the Chihara form, by treating bound and free states under a single formalism. The computational approach is real-time finite-temperature time-dependent density functional theory (TDDFT) being applied here for the first time to WDM. We compare results from TDDFT to Chihara-based calculations for experimentally relevant conditions in shock-compressed beryllium. Warm dense matter (WDM) arises in many contexts ranging from planetary science [1][2][3] to the implosion stage of inertial confinement fusion [4][5][6]. While there are no sharp pressure, temperature and density boundaries for the WDM regime, it is generally viewed as an intermediate state between a condensed phase and an ideal plasma where Fermi degeneracy is present, and the Coulomb coupling and thermal energy are comparable in magnitude [7]. Experimental characterization of warm dense matter is challenging due to the difficulty of producing uniform samples at extreme conditions and developing diagnostic techniques that can provide accurate and independent measurements of these conditions for transient samples opaque to optical photons. X-ray Thomson Scattering (XRTS) [7,8], one such diagnostic technique, exploits the scattering of hard coherent x-rays to directly probe the system's dynamic structure factor (DSF). Through the fluctuation-dissipation theorem, the DSF is related to the system's density-density response, and consequently XRTS provides direct insight into electron dynamics. XRTS experiments have been performed on a variety of materials including beryllium [9,10], lithium [11], carbon [12], CH shells [13], and aluminum [14,15]. With recent improvements in source brightness [15] producing increasingly high resolution and high signal-to-noise data, the full DSF is expected to become routinely available. In anticipation of these advances, it is critical and timely to examine the theoretical constructs underpinning the interpretation of these experiments. The most common model of XRTS experiments relies on an additive form of the DSF due to Chihara [16,17]: S(q, ω) = |f I (q)+ρ(q)| 2 S ii (q, ω)+Z f S ee (q, ω)+S bf (q, ω) (1) The DSF varies with momentum and energy transfers (q and ω) and is partitioned into 3 features that can be interpreted in terms of x-ray scattering processes. These include scattering from electrons bound to and adiabatically following ions (|f I (q) + ρ(q)| 2 S ii (q, ω)), from Z f free electrons per ion (Z f S ee (q, ω)), and from bound electrons that are photo-ionized (S bf (q, ω)). While successfully applied to many systems, this model relies on numerous approximations and assumptions. 
Most critically, the electrons are separated into bound and free populations, a distinction that is often ambiguous in the WDM regime. Each term in Eqn. 1 is subject to different models, potentially leading to under-constrained fits to experimental data [18]. The ionic feature typically relies upon a decomposition into a product of an ion-ion structure factor, S_ii(q, ω), and an average atomic form factor, f_I(q) + ρ(q), with the first term describing the unscreened bound electrons and the latter the screening cloud, which must be treated carefully [19]. However, recent work has focused on moving past this decomposition of the elastic peak [20]. In this work, we transcend the Chihara decomposition by explicitly simulating the real-time dynamics of warm dense matter using a finite-temperature form of time-dependent density functional theory (TDDFT) [21,22] and the Projector Augmented-Wave (PAW) formalism [23,24]. In PAW, the all-electron Kohn-Sham orbitals (and their associated density) can be accessed via an explicit linear transformation on the smoother pseudo orbitals. By performing calculations with none of the core states frozen, we avoid making any assumptions about the division between bound and free electrons, and treat all electrons on the same footing. We work with a real-time implementation of TDDFT in the Vienna Ab initio Simulation Package (VASP) [24-26] (see Supplemental Material [27]) that provides a number of attractive features. Physically, higher-order response phenomena and Ehrenfest molecular dynamics are accessible in this framework. Computationally, the orthogonalization bottleneck that limits standard DFT approaches is removed, as it is only explicitly required for the calculation of the initial state of the Kohn-Sham orbitals. This leads to excellent strong scaling [28,29], and we have observed near-perfect scaling up to 65,536 cores in our implementation. We next outline the details of our TDDFT calculations, noting that we use Hartree atomic units (m_e = e² = ħ = 1/(4πε₀) = 1) unless otherwise indicated. Time-dependent quantities evaluated at t = 0 are indicated by the addition of a subscript 0 and the absence of a temporal argument. Fourier-transformed quantities are indicated by a diacritic tilde. All calculations are spin unpolarized. The equation of motion in real-time TDDFT is the time-dependent Kohn-Sham (TDKS) equation: i ∂φ_{n,k}(r, t)/∂t = [ -∇²/2 + v_ext(r, t) + v_H[ρ](r, t) + v_xc[ρ](r, t) ] φ_{n,k}(r, t). (2) The orbitals, indexed by band and Bloch wave number, are such that their weighted sum produces the time-dependent density, ρ(r, t). The external potential, v_ext, includes contributions due to the Coulomb field of the bare nuclei as well as a model of the x-ray probe, v_probe(r, t), which is quiescent until t = 0. v_H and v_xc are the Hartree and exchange-correlation potentials, with accurate and efficiently computable approximations to the latter being a central theoretical concern of TDDFT. The initial conditions for the {φ_{n,k}(r, t)} in Eqn. 2 are the self-consistent solution to a Kohn-Sham Mermin DFT calculation [30] at electron temperature T_e in a supercell of volume Ω_sc. The initial density is then ρ_0(r) = Σ_{n,k} f_{n,k} |φ_{n,k}(r, 0)|², (3) where f_{n,k} is a composite weight consisting of the measure of the specific Bloch orbital weighting and the Mermin weight encoding temperature dependence according to a Fermi-Dirac distribution. It is important to note the implicit dependence of these initial orbitals on the equilibrium electron temperature, T_e.
Evolving these orbitals under Eqn. 2, the time-dependent density becomes ρ(r, t) = Σ_{n,k} f_{n,k} |φ_{n,k}(r, t)|². (4) The weights are not time-evolved, as we do not expect them to change in the linear response regime [31]. For stronger perturbations, additional formalism might be required for the weights. That the Mermin formalism is sensible within TDDFT in the linear response regime is supported by recent foundational work [32]. The action of a probe potential, v_probe(r, t), turned on at t = 0 leads to a change in the time-dependent density. The linear density-density response function, χ_ρρ(r, r′, t), encodes the relationship between these two quantities: δρ(r, t) = ∫_0^∞ dτ ∫_{Ω_sc} dr′ χ_ρρ(r, r′, τ) δv_ext(r′, t − τ), (5) where we use the notation δf(r, t) = f(r, t) − f_0(r) for δρ(r, t) and δv_ext(r, t). If we fix the ionic positions at time t = 0, since they evolve slowly on the relevant attosecond timescales, then δv_ext(r, t) = v_probe(r, t) and we can construct a probe potential that can be used to extract the DSF, similar to Sakko et al. [33]. The real-time density response to such a probe potential is shown in Fig. 1. FIG. 1. The density response of warm dense beryllium (density 5.5 g/cm³ and T_e = 13 eV, movie in Supplemental Material [27]) due to (a) a perturbing potential with the illustrated envelope, observed at times coinciding with (b) the peak of the perturbation, (c) the peak of the density response, and (d) the plasmons continuing to ring around the system after the perturbation has ceased. Red (blue) isosurfaces bound volumes of charge accumulation (depletion) and yellow isosurfaces indicate the nodal surface. The amplitude of the perturbation is within the linear response regime, a discussion of which is in the Supplemental Material [27]. In principle, any sufficiently weak analytic probe potential will allow us to extract the response function. For convenience, we choose v_probe(r, t) = v_0 e^{iq·r} f(t), where f(t) is a Gaussian envelope and v_0 is related to the probe intensity. The Fourier-transformed response function, χ_ρρ(q, −q, ω) = δρ(q, ω)/[v_0 f̃(ω)], is then related to the DSF through the fluctuation-dissipation theorem. We are careful to note that Fourier transforms are normalized such that δρ(q, ω) has units of inverse frequency. As a proof of principle, we report calculations of the DSF for 3×-compressed beryllium, consistent with the conditions reported in [10]. We consider the same momentum transfers as in [34] to study a range of excitations spanning the collective regime to the beginning of the non-collective regime. For convenience of presentation, each q value is mapped onto an XRTS scattering angle, θ, relative to the 2 Å probe wavelength in [10] (see Supplemental Material [27] for more information). Our results come from averaging the response of electronic densities generated from several static, uncorrelated ionic configurations sampled from thermally equilibrated DFT-MD calculations. These calculations were performed on 32- and 64-atom supercells with a four-electron beryllium PAW potential within the local density approximation (LDA), with electrons and ions thermostatted at T = 13 eV, and k-point sampling at (1/4, 1/4, 1/4), analogous to the Baldereschi mean-value point for cubic supercells. For these conditions a plane wave cutoff of 1400 eV was required to converge the pressure to within 1%, and 576 (32-atom) / 1152 (64-atom) Kohn-Sham orbitals were needed to represent the thermal occupation (f_{n,k}(T_e) ≥ 10⁻⁵).
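To make the post-processing chain just described concrete, the sketch below shows how a recorded real-time response δρ(q, t) could be converted to χ_ρρ(q, −q, ω) by dividing out the Fourier transform of the probe envelope (Eqn. 5 in frequency space). This is a schematic illustration, not the post-processing code used in the paper: the data file, array names, grid, and the simple rectangular-rule Fourier integral are our own choices, and the final fluctuation-dissipation step is left as a comment because its prefactor depends on the normalization convention.

```python
import numpy as np

# Schematic post-processing: from a recorded delta-rho(q, t) time series and the
# known Gaussian probe envelope f(t) to the response function chi(q, w).
# Hypothetical inputs; parameter values are those quoted in the Supplemental Material.

dt = 1e-3                                   # time step in fs (1 as)
t = np.arange(0.0, 8.0, dt)                 # 8 fs propagation window
t_d, t_w, v0 = 0.010, 0.002, 0.001          # delay (fs), width (fs), strength (eV*fs)

f_env = np.exp(-0.5 * ((t - t_d) / t_w) ** 2)          # probe envelope f(t)
# (spectral 1/e half-width ~ hbar/t_w ~ 330 eV for t_w = 2 as, so modes with
#  energies of hundreds of eV are excited with appreciable amplitude)
drho_t = np.loadtxt("delta_rho_q.dat", dtype=complex)  # assumed file: delta-rho(q, t) on the same grid

hbar = 0.6582                                # eV*fs
omega = np.linspace(0.0, 150.0, 1500)        # energies in eV

# One-sided Fourier transforms, normalized so delta-rho(q, w) carries units of
# inverse frequency (here, fs).
phase = np.exp(1j * np.outer(omega / hbar, t))
drho_w = phase @ drho_t * dt
f_w = phase @ f_env * dt

chi_w = drho_w / (v0 * f_w)                  # chi(q, -q, w) = delta-rho(q, w) / (v0 * f(w))

# Fluctuation-dissipation step (prefactor/normalization convention dependent):
# S(q, w) ~ -Im(chi_w) / (1 - exp(-hbar*w / k_B T_e)), with k_B T_e = 13 eV here.
```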
Each sample configuration is used to seed a TDDFT calculation of the DSF, utilizing the same cutoff and number of orbitals. The initial Mermin electronic state is recomputed using a denser k-point sampling on a 3×4×4 (32 atom) or 2×2×2 (64 atom) Monkhorst-Pack grid. To assess the effect of the frozen core approximation (FCA), we consider electronic initial conditions and dynamics generated using both two and four electron beryllium PAW potentials. Results of our calculations are illustrated in Figs. 2 and 3. Details of the averaging procedure used to generate results, and information concerning the satisfaction of sum rules, can be found in the Supplemental Material [27]. ∆t=4 as ∆t=2 as ∆t=1 as ∆t=0.5 as FIG. 2. The DSF of warm dense beryllium (density 5.5 g/cm 3 and Te = 13 eV) at the scattering angles (θ) considered in [34] with (dashed lines) and without (marked lines) the frozen core approximation. All TDDFT calculations utilize the adiabatic LDA. Fig. 2 directly compares the DSF computed with and without the FCA. While the PAW method still includes the proper all-electron density in aggregate, only the orbitals tied to the two outermost valence electrons are included in the time-evolved response within the FCA. This effectively removes the dynamics of the core states from the density response and the high energy shoulder above 80 eV in the DSF is removed. For the temperatures and densities being considered, this is roughly equivalent to partitioning the inner two and outer two electrons into bound and free groups in the Chihara picture, such that the two-electron response corresponds to S ee (q, ω). However, there are important distinctions to keep in mind. First, in the four-electron calculation, all electrons are being treated identically, whereas it is typical to treat S ee (q, ω) and S bf (q, ω) using different levels of theory and without self-consistency in the Chihara framework. Second, even in the two-electron calculation, the response of the outer two electrons is still aware of the two frozen core states tied to each atom through their screening of the nuclear potential. Finally, these calculations are based upon explicit simulations of the realtime electron dynamics of a bulk supercell of warm dense beryllium rather than a phenomenological model of the response based upon a jellium plus average-atom picture. Based upon our observation that the two-electron response roughly corresponds to S ee (q, ω), we can also extract a quantity akin to S bf (q, ω) by differencing the fourelectron and two-electron DSFs. The effective S ee (q, ω) and S bf (q, ω) computed within TDDFT are illustrated in Fig. 3. Here we compare our TDDFT calculations to calculations done using state-of-the-art models for S ee (q, ω) and S bf (q, ω). The former is treated with an RPAlevel model dielectric function with lifetime effects taken from four-electron DFT-MD calculations of the optical conductivity; the Mermin approximation-ab initio collision frequencies (MA-AICF) method in [34]. The latter is calculated with the formalism developed in [35] and a quantum mechanical average-atom ion-sphere model with Slater exchange. As we are interested in studying energies relevant to the electronic response, we ignore S ii (q, ω), though it is necessarily present in the Chiharaindependent TDDFT calculation. Examining the dispersion of the primary plasmon peak in Fig. 
3a, we see that TDDFT predicts a slight (∼5 eV) blue shift relative to the MA-AICF calculation θ = 20 • and 40 • , whereas it predicts a stronger (∼10 eV) red shift relative to MA-AICF at θ = 60 • and 90 • . We attribute this shift to exchange/correlation and band structure effects, not present in the MA-AICF dielectric function. Previously, comparisons of inelastic x-ray scattering spectra to RPA and LDA in cold free electron metals (Na and Al) indicate a similar trend in which both LDA and experiment are red shifted relative to RPA [36]. However, these calculations also indicate that the addition of lifetime effects to the LDA are necessary to totally reconcile theory and experiment. While non-adiabatic exchange IG. 3. Comparison of the DSF of warm dense beryllium computed using TDDFT (marked lines) and individual terms in a Chihara decomposed theory (dashed lines). (a) Illustrates the two-electron TDDFT response compared to the MA-AICF method [34] for Z f See(q, ω). (b) Illustrates the difference of the four-and two-electron TDDFT responses compared to an average atom treatment [35] of S bf (q, ω). correlation kernels are available for energy domain linear response TDDFT, no such time-domain exchange correlation potentials are currently available for real-time TDDFT. As warm dense matter requires a large number of thermally occupied states such that energy domain TDDFT may become computationally prohibitive, warm dense matter may provide compelling motivation for the development and testing of these functionals. Considering the bound-free shoulder in Fig. 3b, we see that TDDFT is generally in good agreement with the average-atom model treatment of S bf (q, ω) with some minor differences. Such average atom models agree well with real-space Green's function methods for cold solid beryllium [37]. We observe a trend opposite the free-free feature, in which there is a red shift of the TDDFT result at small angles, and a blue shift at large angles. We expect that LDA will do an increasingly poor job of describing the Compton scattering limit at large θ due to its well-established self-interaction error. Applying a func-tional with some fraction of Fock exchange should remedy this behavior and will be the subject of future investigations. Further, the TDDFT bound-free feature computed by differencing the two-and four-electron DSFs has a small negative dip below the 80 eV onset of the core feature. It is difficult to determine whether this is due to core-polarization suppressing the response of the valence electrons in the four-electron calculation, or potentially an artifact of the different pseudization procedures used to generate the two PAW potentials used for this comparison. This motivates further investigations of the PAW formalism applied to both real-time TDDFT and specifically, warm dense matter in which thermal effects will start to blur the line between core and valence electrons. We presented a method for the direct calculation of the DSF for warm dense matter, independent of the Chihara decomposition, by applying real-time TDDFT to configurations drawn from thermal Mermin DFT-MD calculations. Comparison of our results with state-ofthe-art models applied within the Chihara picture illustrates some subtle differences between the two, though it generally supports the use of the Chihara formalism as an inexpensive alternative to the very detailed and computationally intensive TDDFT calculations. 
We anticipate that TDDFT may provide a powerful discriminating tool for arbitrating disagreements between these more phenomenological theories and experiment, especially as experimental data becomes more highly resolved. Our framework enables future explorations of systems in which the partition between bound and free electrons is more ambiguous. It also provides a platform for studying the impact of recent foundational developments in DFT at non-zero temperature [32,[38][39][40][41][42][43]. Supplemental Materials: X-ray Thomson scattering without the Chihara decomposition IMPLEMENTATION DETAILS We have implemented real-time TDDFT using a plane wave basis and the Projector Augmented Wave (PAW) formalism [23,24] within version 5.3.5 of VASP [25,26]. As in other implementations using ultrasoft pseudo-potentials [44] or PAW [45] we chose a Crank-Nicolson (CN) time integration scheme to propagate the time-dependent Kohn-Sham (TDKS) equations numerically rather than chosing an integrator to achieve a high-order convergence in the local error. The unitarity of the discrete propagator was deemed an essential factor to guarantee the satisfaction of charge conservation and other sum rules associated with the dynamic structure factor (DSF) itself. At each timestep, the CN-discretized TDKS equations are solved using the Generalized Minimal Residual method (GMRES) [46]. Given the perturbation on identity form of the discrete propagator we expect and observe rapid convergence in the number of iterations. As many details of our implementation have not been published elsewhere, we begin by demonstrating that it is robust. We first test stability of the time integration. Specifically, we illustrate that our implementation is not susceptible to any significant or uncontrolled errors, given the intrinsic nonlinearity of the TDKS equations and the use of an iterative solver on the linearized problem. To do so, we time propagate a Mermin thermal equilibrium set of orbitals in the absence of a probe potential or ionic motion. Here, the time-dependent potential should be constant in time, and any variations in the instantaneous total free energy or supercell charge are due to the accumulation of numerical errors. Results are reported for 32 beryllium atoms under the warm dense conditions in the paper and using the adiabatic zero-temperature local density approximation throughout. The initial condition was generated from a Mermin DFT calculation done on a single atomic configuration drawn from a thermalized DFT-MD run with a four-electron PAW potential, 576 thermally occupied orbitals, and a plane wave cutoff of 1400 eV. The time propagation utilized the same plane wave cutoff and number of orbitals, and a time step (∆t) of 1 attoseconds (as) carried out for 8000 steps (8 fs). We varied the relative tolerance of the iterative solver, and kept the absolute tolerance fixed to be 2 orders of magnitude smaller. As the right hand side of the CN-discretized TDKS system is of order unit norm, the absolute tolerance is effectively irrelevant. The resultant free energy per atom and total charge density of the supercell are reported in Figure S1. Here, we see that the free energy drift per atom is −2.6 × 10 1 µeV/(atom·fs) and that the drift in the total charge of the unit cell is 8.7 × 10 −6 electrons/fs in the most permissive case (8 digits). 
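For orientation, the structure of a single Crank-Nicolson step solved iteratively, as described above, can be sketched as follows. This is a toy illustration on a small dense Hermitian matrix using SciPy's GMRES, not the plane-wave/PAW implementation in VASP; it is only meant to show why the discrete propagator preserves the norm to roughly the accuracy of the linear solve.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Toy Crank-Nicolson step for i d(phi)/dt = H phi, solved with GMRES.
# Schematic only; the production code propagates plane-wave/PAW orbitals.

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = 0.5 * (A + A.conj().T)                  # Hermitian "Hamiltonian" at the half step
phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi /= np.linalg.norm(phi)
dt = 0.01

def cn_step(H, phi, dt, rtol=1e-12):
    """Solve (1 + i dt/2 H) phi_new = (1 - i dt/2 H) phi_old for phi_new."""
    rhs = phi - 0.5j * dt * (H @ phi)
    lhs = LinearOperator((n, n), matvec=lambda v: v + 0.5j * dt * (H @ v),
                         dtype=complex)
    # Older SciPy versions name this keyword `tol` instead of `rtol`.
    phi_new, info = gmres(lhs, rhs, rtol=rtol, atol=0.0)
    assert info == 0, "GMRES did not converge"
    return phi_new

phi_new = cn_step(H, phi, dt)
print(abs(np.linalg.norm(phi_new) - 1.0))   # norm preserved to ~ solver tolerance
```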
At the least permissive tolerance (14 digits), the free-energy and total-charge drifts fall to 1 × 10⁻³ µeV/(atom·fs) and 1.3 × 10⁻¹¹ electrons/fs, with the 12-digit case producing drifts of a similar order of magnitude. Going from 8 digits to 14 digits, the average CPU time per step varies linearly from 0.85 s/time step to 1.64 s/time step, i.e., the solver converges exponentially in the number of GMRES iterations. As the 12-digit results are practically indistinguishable from the 14-digit ones and the CPU time per time step is 20% shorter, a relative tolerance of 12 digits was used in all subsequent calculations. Pertinent to the satisfaction of sum rules on δρ(r, t), in all calculations in this work we have verified that charge conservation is guaranteed to a relative accuracy of better than 8 digits. We next demonstrate convergence of the integrated density response, the observable of interest, in ∆t. To do so, we consider the same 32-beryllium-atom initial condition described above, this time applying a time-varying scalar perturbation of the form described in the main body of the paper, with t_d = 10 as, t_w = 2 as, v_0 = 0.001 eV·fs, and q = 1.091 x̂ Å⁻¹. ∆t is varied from 4 as to 0.25 as, and the associated convergence of the real-time density response is illustrated in Figure S2. From these results, it is evident that the density response exhibits first-order convergence. Further, the convergence supports the decision that ∆t = 1 as (attosecond) provides a reasonable balance between accuracy and the time required per calculation for generating production results. Next, we consider the determination of parameters for v_probe(r, t). We chose f(t) to take the form of a Gaussian envelope to ensure that the exciting potential and its response are approximately band- and time-limited. To this end, the pulse width (t_w) determines the bandwidth of the response and was chosen to ensure that modes of the density response with energies on the order of hundreds of eV are excited with appreciable amplitude. The delay, t_d, was chosen to ensure that the excitation is approximately quiescent at t = 0 as. The remaining parameter, v_0, determines the effective intensity of the probe potential, and must be chosen to be large enough that the response is not dominated by numerical noise, yet small enough to remain in the linear response regime. Varying v_0 over 4 orders of magnitude from 1 eV·fs to 0.001 eV·fs, we do not observe numerical noise to be a problem at any value. The post-processed DSF computed using these probe amplitudes for a single 32-atom configuration at |q| = 1.091 Å⁻¹ (θ = 20°) is illustrated in Figure S3. The results for v_0 ranging from 0.001 eV·fs to 0.1 eV·fs are indistinguishable, while the distortion of the results at 1 eV·fs indicates the onset of physics beyond linear response. That we can easily access this regime is one of the benefits of real-time TDDFT, though we do not explore it further in this work. GENERATION OF INITIAL CONDITIONS Each TDDFT calculation requires a set of Kohn-Sham orbitals and occupancies as initial conditions. These are generated using DFT-MD as implemented in the standard version of the VASP software package [24-26]. To assess the impact of the supercell shape and the underlying ionic positions on our DSF calculations, for each value of q we run 2 separately thermalized DFT-MD trajectories on different supercells and draw 5 independent sets of initial conditions from each. Each set of initial conditions is used to seed independent TDDFT calculations of the DSF at a fixed q.
Separate DFT-MD configurations are needed for each value of q due to the requirements of the probe potential. For the DSF calculations, v probe (r, t) must be commensurate with our supercell, and consequently any realizable value of q must be in the reciprocal lattice of our supercell. To precisely specify the value of q we work with tetragonal supercells in which the perturbing q is directed along the c-axis. Then q can be set by scaling the c-axis and the a-axes can be adjusted to ensure that the desired mass density is realized. Given the liquid-like ordering in the extreme conditions under investigation, we do not anticipate that this biases our results. Table I gives the dimensions of the 8 supercells used in this study. It is worth noting that all DFT-MD trajectories are generated with the full four-electron PAW potential. However, both the two-electron and four-electron TDDFT calculations are initialized from this same set of ionic configurations, with the Kohn-Sham orbitals and occupancies being recomputed for each set of fixed ionic positions in the two-electron case. This is done to assess the impact of the frozen core approximation on the DSF in isolation. All DSF results reported are averaged over 10 configurations sampled at each value of q. The sample size is necessarily small due to the significant computational resources required for each TDDFT calculation, with each production calculation being done on 1,152 cores. However, because the electronic density of warm dense beryllium is relatively uniform, we do not see much variability in the DSF from configuration to configuration. In an effort to quantify this variability, we apply jackknife resampling to estimate the variance and indicate single standard deviation intervals. For the scattering angles considered in the manuscript, we show the estimated error intervals for the DSF computed with four-electron PAW potentials in Figure S4. ] IG. S4. Jackknife error estimates of the DSF for warm dense beryllium at 4 different scattering angles. The colored shaded regions bound ±4σ about the mean, which is a solid black line. The ±1σ intervals are barely perceptible, and we simply present the mean values in the manuscript. SATISFACTION OF SUM RULES The units on the density response and DSF are such that the following form of the f-sum rule [47] is satisfied: Here, N e is 2 for the two-electron PAW, and 4 for the four-electron PAW. Similar real-time calculations in the gas phase have reported satisfaction to within 5% [33] and we report similar results. Forms of the DSF based upon model dielectric functions may or may not be consistent with various sum rules by construction. Our DSF is derived strictly from a time-evolved electronic density for which charge conservation is numerically guaranteed to high precision, and makes no assumptions about the form of the equivalent dielectric response. To this end, the primary source of error in our satisfaction of sum rules is due to numerical errors in the post-processing, e.g., the numerical evaluation of Fourier integrals with integrands which become increasingly oscillatory at higher energies, and thus more prone to small errors in the response. When checking sum rules produced using linear response TDDFT, it is common practice to fit the high energy tail of the DSF to force it to go to zero in a way that is consistent with the integrand in the above sum rule [48]. 
This is also critical for real-time simulations where small phase errors between the real and imaginary parts of the time domain density response are amplified at high energies relative to low energies when post-processing to generate the energy domain response. Rather than force the tail to fit some form, we simply impose a high energy cutoff on the integrand once the DSF has decayed to < 1% of its peak value. We simply seek to verify that the majority of the weight of our response is consistent with the sum rule, and could resort to fitting tails if necessary to improve agreement. Applying this technique, we verify that we satisfy the f-sum rule for our four-electron data with a relative error of −7% (θ = 20 • ), −3% (θ = 40 • ), −2% (θ = 60 • ), and −4% (θ = 90 • ). In all cases, we underestimate the value of the f-sum rule, giving us confidence that a fit of the slowly-decaying high energy tail might be used to improve this agreement. Applying this same technique to the two-electron data we find relative errors of 8% (θ = 20 • ), 5% (θ = 40 • ), 0.4% (θ = 60 • ), and −2% (θ = 90 • ). Here, we do not uniformly underestimate the sum rule as was the case with the four-electron data. The two-electron data decays much more abruptly at high energies, and does not need to represent the response of slowly-decaying core states, so we do not view this as a deficiency (i.e., fitting a tail would not have as strong of an impact here). However, this seems to point to the two-electron response as overestimating the free-free peak, consistent with the small negative dips in the bound-free data in Fig. 3b for θ = 20 • , 40 • , and 60 • (the calculations for which the sum rule was over-estimated). Whether or not this can be improved with a different two-electron PAW may be an interesting topic of further study. Perhaps a more interesting sum rule to study is the one that defines the static structure factor through the integral of the DSF: Computing this integral for the effective free-free and bound-free data from TDDFT, we can extract structure factors that we can compare against physically intuitive models. Results are presented in Fig. S5. The static structure factor computed using Eqn. S2 applied to the effective free-free and bound-free responses extracted from TDDFT. The bound-free structure factor is compared to that of doubly-and triply-ionized beryllium [49].. The free-free structure factor is compared to an analytic fit of QMC data for jellium with rs = 1.3 a.u [51]. For the bound-free component we compare to results obtained using a configuration-interaction expansion by Brown for doubly-and triply-ionized beryllium [49]. For θ = 60 • and 90 • , the TDDFT gives results that are closer to doublyionized beryllium, consistent with physical intuition for the thermodynamic conditions under consideration. For θ = 20 • and 40 • , the TDDFT gives results that are closer to triply-ionized beryllium, though the difference between the doubly-and triply-ionized structure factors are smaller at these angles. We note that by excluding the negative difference data below 80 eV in our integration (evident in Fig. 3b), we can improve agreement with the doubly ionized curve at all angles. This indicates that the underestimation of the TDDFT structure factor may be due to differences between the free electron response for the two-electron and four-electron PAWs, which may or may not be physical. 
This highlights the importance of being able to self-consistently compute the four-electron response without the Chihara decomposition using our methodology. For the free-free component (our frozen core result) we compare to results for the static structure factor of jellium with r s = 1.3 a.u., consistent with the experimentally determined free-electron density [10]. In evaluating Eqn. S2, we remove the elastic peak in a region of width ∼ k B T e centered at ω = 0 and apply a simple linear interpolant between the DSF data on either side of the excluded region. We compare the resultant free-free static structure factor to an analytic fit to QMC data [53] by Gori-Giorgi, et. al. [51]. The TDDFT free-free structure factor exhibits the same trend as the QMC fit, but is not expected to agree perfectly. We only expect qualitative agreement because the external potential experienced by the free electrons in warm dense beryllium is not a uniform neutralizing background as is the case in jellium. This qualitative agreement between results from TDDFT and QMC stands in contrast to Vorberger and Gericke's recent work in which DFT-MD is used to compute a free electron density, from which the free-free static structure factor of warm dense beryllium is extracted [20]. In their work, the static structure factor computed from DFT differs greatly from QMC, namely it does not go from one to zero for momentum transfers less than 2k F (k F = 2.88Å −1 for our conditions), but instead oscillates slightly about unity. The authors postulate that this is due to the mean field nature of the Kohn-Sham equations, and that this may be beyond the capability of DFT. Our results indicate that this basic physics is in fact within the grasp of TDDFT. To this end, it is worth noting that TDDFT has provided us with a convenient means of computing the static structure factor as an exact functional of the time-dependent density. In this case, we mean exact in the sense that if we are given a representation of the exact interacting time-dependent density, we can map it onto the exact DSF and the exact structure factor through Eqn. S2.
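The two integrals used throughout this supplemental discussion (the f-sum rule check with a 1% high-energy cutoff and the static structure factor of Eqn. S2 with the elastic peak excised) are straightforward to evaluate numerically. The sketch below assumes the standard per-atom forms ∫ ω S(q, ω) dω = N_e q²/2 and S(q) = ∫ S(q, ω) dω in Hartree atomic units; the data file and array names are hypothetical, and the grid handling is our own simplification.

```python
import numpy as np

# Hypothetical post-processing: f-sum rule check and S(q) from tabulated S(q, w).
# Assumes omega and S are in Hartree atomic units and that S is given per atom.

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

data = np.loadtxt("dsf_theta20.dat")        # assumed columns: omega (Ha), S (1/Ha)
omega, S = data[:, 0], data[:, 1]

q = 1.091 * 0.52918                          # 1.091 1/Angstrom expressed in 1/bohr
N_e = 4                                      # electrons per atom (four-electron PAW)

# High-energy cutoff used in the text: drop the tail once S has decayed
# below 1% of its peak value (beyond the peak position).
peak = np.argmax(S)
tail = np.where((S < 0.01 * S.max()) & (np.arange(len(S)) > peak))[0]
cut = tail[0] if tail.size else len(S)
omega_c, S_c = omega[:cut], S[:cut]

f_sum = trapz(omega_c * S_c, omega_c)
print(f"f-sum rule relative error: {f_sum / (N_e * q**2 / 2) - 1.0:+.2%}")

# Static structure factor (Eqn. S2): excise the elastic peak over a window of
# width ~ k_B T_e around omega = 0 and bridge it with a linear interpolant.
kT = 13.0 / 27.2114                          # 13 eV in Hartree
mask = np.abs(omega_c) > 0.5 * kT
S_bridge = np.interp(omega_c, omega_c[mask], S_c[mask])
S_q = trapz(np.where(mask, S_c, S_bridge), omega_c)
print(f"S(q) = {S_q:.3f}")
```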
7,571.2
2015-12-17T00:00:00.000
[ "Physics" ]
Magic electron affection in preparation process of silicon nanocrystal It is very interesting that magic electron affection promotes growth of nanocrystals due to nanoscale characteristics of electronic de Broglie wave which produces resonance to transfer energy to atoms. In our experiment, it was observed that silicon nanocrystals rapidly grow with irradiation of electron beam on amorphous silicon film prepared by pulsed laser deposition (PLD), and silicon nanocrystals almost occur in sphere shape on smaller nanocrystals with less irradiation time of electron beam. In the process, it was investigated that condensed structures of silicon nanocrystals are changed with different impurity atoms in silicon film, in which localized states emission was observed. Through electron beam irradiation for 15min on amorphous Si film doped with oxygen impurity atoms by PLD process, enhanced photoluminescence emission peaks are observed in visible light. And electroluminescence emission is manipulated into the optical communication window on the bigger Si-Yb-Er nanocrystals after irradiation of electron beam for 30min. S ilicon is the most important semiconductor material for the electronic industries. But the optical properties of silicon are relatively poor, owing to its indirect band gap which precludes the efficient emission and absorption of light. Due to the modification of the energy structure afforded by quantization, silicon nanocrystals emerge as ideal candidates for photonic applications involving efficient radiation recombination. The emission from silicon nanocrystals has characteristic features: photoluminescence (PL) intensity increases and its wavelength occurs blue-shift with decreasing crystal size. Further, localized states due to impurity atoms, such as oxygen or nitrogen, and surface or interface defects have been suggested that leads to a stabilization of the PL wavelength for smaller silicon nanocrystals [1][2][3] . Silicon nanocrystals have been studied intensively over the past decade 4,5 . The popular methods for fabricating silicon nanocrystals are self-assembly from silicon-rich silicon oxide matrices 6,7 and plasma synthesis [8][9][10][11] . The interesting method to fabricate silicon nanocrystals is growth under photons interaction [12][13][14] . In the first case, SiOx (with x , 2) is formed by a thin-film deposition technique such as pulsed laser deposition (PLD). Subsequent high-temperature annealing of the substoichimetric film (typically 900,1100uC) produces a phase separation between Si and SiO 2 with the formation of Si nanoclusters. The dimensions, crystallinity and size distribution of the nanoclusters depend on the Si excess, the temperature and the annealing time 5,6 . In the article, the most interesting and simplest method discovered in our experiment for fabricating silicon nanocrystals is self-assembly growth by assistance of electron interaction, in which silicon nanocrystals rapidly grow with irradiation of electron beam on amorphous silicon film prepared by PLD, and shape of silicon nanocrystals is almost sphere when crystal size is smaller with less irradiation time of electron beam. The method of electron affection could be used to replace the traditional annealing methods in preparing process of silicon nanocrystals. 
In the process, it was investigated that condensed structures of silicon nanocrystals are changed with different impurity atoms in silicon film, for examples oxygen or Er atoms make a stronger condensed affection than doing of nitrogen or Yb atoms in impurity, in which various localized states emission was measured. It is very interesting that magic electron affection promotes the growth of nanocrystals, whose physical mechanism may be from nanoscale characteristics of electronic de Broglie wave which produces resonance to transfer energy to crystal atoms. In natural sciences, many analogous structures and properties occur on different size hierarchy, for example in nanoscale space related to electronic de Broglie wavelength and in sub-micrometer scale related to photonic de Broglie wavelength, in which the photon affection from nanosecond or femtosecond laser is used to fabricate periodic surface structures with 100nm spatial periods on silicon 15,16 , and the electron affection is used to prepare silicon nanocrystals, which were demonstrated in experiment. The amorphous silicon film was prepared in the combination fabrication system with pulsed laser etching (PLE) and PLD devices (see Methods), as shown in Fig. 1. Then, the amorphous silicon film was exposed under electron beam with 0.5 nA/nm 2 for 5,30min in Tecnai G2 F20 system, it was observed that silicon nanocrystals rapidly grow with irradiation of electron beam and various shape of crystals forms with different irradiation time on the amorphous silicon film with impurity (see Methods). In Fig. 2(a), the image of transmission electron microscope (TEM) shows Si quantum dots (QDs) embedded in the Si y N x amorphous film prepared in PLD device with nitrogen gas tube, whose diameter is about 2,5nm after irradiation under electron beam for 15min. The Si-Yb QDs structures are built after irradiation under electron beam for 20min on the amorphous film with impurity prepared in PLD device with Si and Yb bars, as shown in the TEM image of Fig. 2 It is interesting that gradually growing process of QDs structure was observed under electron beam with increase of irradiation time. For example, not only QD structure has been observed after irradiation for 5min (TEM image in Fig. 3(a)), and a few QDs are observed after irradiation for 15min under electron beam (TEM image in Fig. 3(b)) on amorphous Si film prepared in oxygen gas. Figure 3(c) shows the Fourier transform image on the structures with QDs embedded in SiO x . As shown in TEM images of Fig. 4, no QD structure occurs after electron beam irradiation for 5min (a), a few QDs appear after radiation for 20min (b) and QDs structure has been broken over irradiation time of 30min (c) on the Si-Yb amorphous film prepared by PLD process in the device related to Fig. 1(b). The inset in Fig. 4(a) shows composition of Si and Yb in X-ray energy spectrum on the amorphous film. In the crystallizing process under irradiation of electron beam, it was observed that condensed speed is different to form different structures of silicon nanocrystals with different impurity atoms on silicon film, for examples oxygen or Er atoms make a stronger condensed affection than doing of nitrogen or Yb atoms in impurity. In Fig. 5, TEM images show different silicon nanocrystals with different impurity gas atoms after irradiation of electron beam for 30min, such as in nitrogen gas (a), in oxygen gas (b) and in SF gas (c) on the samples prepared in the device related to Fig. 
1(a), in which the film structure of Si-N nanocrystals is still kept, but the film structures of Si-O and Si-S nanocrystals have been broken. As shown in Fig. 6, TEM images show different silicon nanocrystals with different impurity solid atoms, such as Yb bar (a), Ge bar (b) and Er bar (c) on the samples prepared in the device related to Fig. 1(b), in which the film structure of Si-Yb nanocrystals is still kept, but the film structures of Si-Ge and Si-Er nanocrystals have been broken. Here, it is obvious that some atoms have a stronger condensed ability, such as Er atom. On the silicon nanocrystal samples prepared by using irradiation of electron beam in the PLD device with oxygen gas tube related to Fig. 1(a), photoluminescence (PL) spectra were investigated in various impurity atoms after annealing for different time (see Methods). Figure 7(a) shows silicon QDs embedded in SiO x prepared by using irradiation of electron beam for 15min, whose sharper PL peak at 604nm is observed after annealing at 1050u C for 20min as shown in Fig. 7(b). Various PL spectra are observed with annealing time of 10min, 15min or 20min on the samples, as shown in Fig. 7(c), which shows that annealing time of 20min is suitable for localized states emission. In Fig. 8(a), Si QDs embedded in Si y N x is observed on the sample prepared by using radiation of electron beam for 15 min in the device with nitrogen tube related to Fig. 1(a). PL peak at 605nm (near 2eV) on the sample is found in Fig. 8(b). In the PLD device with Yb bar related to Fig. 1(b), Si-Yb QDs embedded in Si y N x are prepared by using irradiation of electron beam for 15min, as shown in Fig. 9(a), whose PL peaks occur in Fig. 9(b). It is very interesting that in the PLD device with Yb and Er bars related to Fig. 1(b), bigger Si-Yb-Er nanocrystals with various shapes appear after irradiation of electron beam for 30min, as shown in Fig. 10(a), whose electroluminescence (EL) spectra occur in optical communication window in Fig. 10(b) (see Methods), which have the characteristics of localized states emission on surface defects In experiments, PL bands in visible light are observed on smaller Si nanocrystals such as Si QDs structures prepared by using irradiation of electron beam for less time, and localized states emission peaks are measured at some fixative wavelengths with different impurity atoms bonding on QDs surface. In Fig. 11, simulation analysis of PL spectra shows that PL band emission (Fit peak 3) on Si QDs with various sizes belongs to the QC effect emission, and localized states emission (Fit peaks 1 and 2) comes from impurity atoms bonding on nanocrystals surface, which form competition in physical mechanism. A physical model of the localized states emission could be built on Si nanocrystals doped with various impurity atoms by using irradiation of electron beam, in which electron from bottom pumping states of conduction band (CB) opened, owing to quantum confinement (QC) effect in Si QDs, is relaxed into the localized states due to impurity atoms bonding on QDs surface, and then enhanced PL emission is obtained due to recombination between electron in the localized states near CB and hole in the localized states near valence band. As shown in Fig. 12, electrons in pumping states of CB opened due to smaller Si QDs (,3nm) are relaxed into the localized states of Si 5 O bonds on QDs surface to form inverse population between up localized states and low localized states, then stimulated emission near 700nm could be built. 
In same way, the localized states due to Si -O -Si bonds on surface of smaller Si QDs (,2.5nm) could produce stimulated emission near 600nm. But the localized states due to Si-Yb -Er bonding on surface of bigger nanocrystals prepared by electron beam irradiation for longer time will be deeper position in band gap to manipulate emission wavelength into optical communication window. In fact, this physical model involves a four-level system for emission. Figure 13 shows the cavity affection for stimulated emission on Si QDs embedded in SiO x prepared by using irradiation of In conclusion, various silicon nanocrystals are fabricated by using irradiation of electron beam on Si film prepared by PLD process, in which Si QDs with 2,5nm diameter could be obtained by controlling irradiation time of electron beam. Through electron beam irradiation for 15min and annealing at 1050uC for 20min on amorphous Si film doped with different impurity atoms by PLD process, enhanced PL emission peaks are observed in visible light. And EL emission wavelength could be manipulated into optical communication window by irradiation of electron beam for longer time to form bigger Si crystals doped with Yb and Er impurity atoms. In the process, physical phenomena and effects are very interesting, and a new way will be developed for fabrication of silicon nanostructures, which would have good application in emission materials and LED devices. Methods Preparation of amorphous silicon film. Some silicon wafers of P-type (100) oriented substrate with 1-10 Vcm were taken on the sample stage in the combination fabrication system with pulsed laser etching (PLE) and pulsed laser deposition (PLD) devices, as shown in Fig. 1. A pulsed Nd:YAG laser (wavelength: 1064nm, pulse length: 60 ns FWHM, repetition rate: 1000) was used to etch the Purcell micro-cavity on Si sample in PLE process. In the cavity, a third harmonic of pulsed Nd:YAG laser at 355nm was used to deposit the amorphous silicon film in PLD process. Figure 1(a) shows the impurity process on the amorphous silicon film through gas tube, such as nitrogen or oxygen gas. Figure 1(b) shows the impurity process on the amorphous silicon film by using PLD bars, such as SiO 2 , Ge, Yb or Er bar. Fabrication of silicon quantum dots under irradiation of electron beam. The amorphous silicon film was exposed under electron beam with 0.5 nA/nm 2 for 5,30min in Tecnai G2 F20 system, in which electron beam from field-emission electron gun, accelerated by 200 KV, has higher energy and better coherent. After irradiation under electron beam for 15min, silicon quantum dots (Si QDs) structures are built and embedded in SiO x (with x , 2) or Si y N x (with x , 4 and y . 3) amorphous film related to oxygen or nitrogen gas tube respectively in the PLD device. www.nature.com/scientificreports SCIENTIFIC REPORTS | 4 : 9932 | DOI: 10.1038/srep09932 silicon amorphous films with impurity, and the compositions were measured on the samples by using analysis in X-ray energy spectra. Photoluminescence (PL) measurement. PL spectra of the samples were measured under the 514nm excitation by using RENISHAW Micro-Raman Systems at room temperature. Electroluminescence (EL) measurement. EL spectra were measured on the sample whose surface was deposited the ITO film for positive pole and bottom side was deposited the Au film for negative pole. Annealing process. The samples were sent into the annealing furnace filled with nitrogen atmosphere to make annealing at 1050uC for 10min, 15min or 20min. 
The PL spectra show that an annealing time of 20 min is suitable for localized-states emission.
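The size dependence of the QC emission invoked above can be illustrated with a simple Brus-type effective-mass estimate. The sketch below is not from the paper; the bulk gap, effective masses, and dielectric constant are assumed textbook values, and the effective-mass approximation is known to be only rough for silicon, so the numbers indicate the trend rather than exact peak positions.

```python
import numpy as np

# Brus-type effective-mass estimate of the confined-gap emission of a Si quantum dot.
# All material parameters are assumed textbook values, not numbers from this paper.
HBAR = 1.0546e-34    # J s
M0   = 9.109e-31     # free electron mass, kg
Q    = 1.602e-19     # elementary charge, C
EPS0 = 8.854e-12     # vacuum permittivity, F/m

E_GAP_EV = 1.12      # assumed bulk Si band gap, eV
M_E      = 0.26      # assumed electron effective mass (units of m0)
M_H      = 0.39      # assumed hole effective mass (units of m0)
EPS_R    = 11.7      # assumed Si relative permittivity

def emission_energy_ev(diameter_nm):
    """Bulk gap + kinetic confinement term - screened electron-hole Coulomb term."""
    r = 0.5 * diameter_nm * 1e-9
    kinetic = (HBAR ** 2 * np.pi ** 2) / (2.0 * r ** 2) * (1.0 / (M_E * M0) + 1.0 / (M_H * M0))
    coulomb = 1.8 * Q ** 2 / (4.0 * np.pi * EPS0 * EPS_R * r)
    return E_GAP_EV + (kinetic - coulomb) / Q

for d in (2.5, 3.0, 4.0, 5.0):
    e = emission_energy_ev(d)
    print(f"d = {d:.1f} nm: E ~ {e:.2f} eV, lambda ~ {1239.8 / e:.0f} nm")
```

With these assumed parameters the estimate gives roughly 2.0 eV (about 610 nm) for a 3 nm dot, near the reported 604-605 nm peaks, and it lies above the localized-states peaks for the smallest dots, which is consistent with the picture of carriers relaxing from the QC-opened band states into surface localized states before emitting.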
3,129
2015-04-24T00:00:00.000
[ "Physics", "Materials Science" ]
Optimization of SABRE for polarization of the tuberculosis drugs pyrazinamide and isoniazid Hyperpolarization produces nuclear spin polarization that is several orders of magnitude larger than that achieved at thermal equilibrium thus providing extraordinary contrast and sensitivity. As a parahydrogen induced polarization (PHIP) technique that does not require chemical modification of the substrate to polarize, Signal Amplification by Reversible Exchange (SABRE) has attracted a lot of attention. Using a prototype parahydrogen polarizer, we polarize two drugs used in the treatment of tuberculosis, namely pyrazinamide and isoniazid. We examine this approach in four solvents, methanol-d4, methanol, ethanol and DMSO and optimize the polarization transfer magnetic field strength, the temperature as well as intensity and duration of hydrogen bubbling to achieve the best overall signal enhancement and hence hyperpolarization level. Introduction By producing nuclear spin polarization far beyond that available at thermal equilibrium, hyperpolarization can provide improved sensitivity for NMR, enabling the detection of less concentrated molecules. In the area of molecular imaging, MRI has recently been used to study the distribution [1] and metabolism [2][3][4] of hyperpolarized substrates. For instance, multiple studies have reported on the conversion of hyperpolarized 13 C-labeled pyruvate to its metabolic products, alanine, lactate and carbonate in vivo [2][3][4][5][6], in which higher lactate production is an important indicator of cancer. This technique is already being translated to the clinic and a first trial is ongoing [7]. Major hyperpolarization techniques include dynamic nuclear polarization (DNP) [8,9], spin exchange optical pumping polarization of noble gases [10] and parahydrogen induced polarization (PHIP) [11][12][13][14][15][16]. Parahydrogen is a spin isomer of hydrogen with an antisymmetric singlet spin state. By incorporating this pure spin state into a molecule through a hydrogenation reaction, large sig-nal enhancements have been observed in a variety of situations as first conceived by Bowers and Weitekamp [12] and Pravica and Weitekamp [14]. In 2009, Duckett's group developed a parahydrogen polarization technique that works without the need for the chemical modification of the substrate [17]. In this approach, the substrate and the parahydrogen bind to a catalyzing metal complex simultaneously, thus enabling polarization to be transferred to the substrate through the scalar coupling network. The polarized substrate is subsequently released, and replaced by new substrate which is polarized in turn. Such Signal Amplification By Reversible Exchange (SABRE) has already been applied to detect trace amounts of chemicals [18][19][20] and used in conjunction with zero-field NMR spectroscopy [21]. According to a theoretical description of SABRE, the signal enhancement level depends on the binding kinetics and the magnetic field in which polarization transfer occurs [22]. In order to achieve better enhancement, new catalyst precursors have been developed to tune the binding kinetics. Enhancements can be boosted by using the bulky electron-donating phosphines of the Crabtree catalyst [23]. Currently, an iridium N-heterocyclic carbene complex [24] shows the highest polarization transfer efficiency. In this paper, we investigated the SABRE polarization of two drugs that are used clinically, isoniazid and pyrazinamide [25]. 
Isoniazid treats tuberculosis meningitis, and pyrazinamide is used in combination with other drugs in the treatment of Mycobacterium tuberculosis. Isoniazid is a pyridine derivative, and pyrazinamide is a pyrazine derivative. They are nitrogen containing heterocyclic aromatic organic compounds ( Fig. 1) and are thus able to bind to the iridium atom of the catalyst precursor. Therefore, they are suitable for SABRE polarization. In previous work, methanol-d 4 was used as a solvent for SABRE polarization, which is not suitable for injection into small animals. In this paper, we therefore also investigated the possibility of SA-BRE polarization in solvents more suitable for in vivo applications, namely DMSO and ethanol. The enhancement efficiency depends on the polarizing magnetic field and temperature as well as on the hydrogen bubbling intensity and time. These conditions were optimized for each solvent. . This catalyst to substrate ratio of 1:10 was chosen following Ref. [26]. The solvents were methanol-d 4 (Cambridge Isotope Laboratories, Andover, MA), methanol, ethanol and dimethyl sulfoxide (DMSO) (Sigma-Aldrich, St. Louis, MO). The total sample volume was 3.5 mL. SABRE polarization Parahydrogen was prepared using a parahydrogen generator that cools the hydrogen gas to 36 K in the presence of a metal catalyst, after which the fraction of parahydrogen becomes 92.5%. Subsequently, the sample containing the substrate and the catalyst precursor was loaded into a mixing chamber positioned underneath the magnet of a Bruker 700 MHz spectrometer. The temperature of the sample was controlled by a home-built water bath system. Polarization was achieved by bubbling parahydrogen through the sample. The sample was then pneumatically transferred to the flow cell in the spectrometer. This process took about 2 s. Once the sample was in the NMR probe, spectra were acquired immediately. After data acquisition, the sample was returned to the mixing chamber for repolarization. NMR measurement In experiments using methanol-d 4 as a solvent, NMR spectra were acquired after a p/2 hard pulse. When non-deuterated solvents were used, solvent suppression was achieved using excitation sculpting pulse sequences [27]. The shaped pulses were 20 ms Gaussian pulses that excite all of the solvent peaks. Pyrazinamide The total magnetic field of the sample in the preparation chamber is the vector summation of the stray field of the scanner magnet and the magnetic field generated by a small electromagnetic coil surrounding the sample, which is tunable up to ±145 G. The mixing chamber is placed directly underneath the NMR magnet in a region where the stray field is vertical so that it can be completely compensated to achieve a zero overall magnetic field in the sample. According to the SABRE mechanism, optimum enhancement occurs when the differences in resonance frequency of the protons are of the same order as the scalar couplings [22]. While the optimal polarization field cannot be predicted straightforwardly, it can be easily determined experimentally by varying the magnetic field with the small coil around the sample. Fig. 2 shows the dependence of the enhancement of pyrazinamide on local magnetic field strength in methanol-d 4 at room temperature. In the range of 0-120 G, the signal enhancement for all the three aromatic protons of pyrazinamide was always of the same order of magnitude and negative. 
The shape of the dependency of the enhancement was a ''V'' curve with a maximum absolute enhancement at 65 G, which is very close to the value 70 G reported by Cowley et al. for pyridine [24]. Subsequently, the parameters for the hydrogen bubbling were tested at the optimal magnetic field of 65 G. The mixing of hydrogen gas with the catalyst precursor and substrate in liquid phase, which is required by the SABRE mechanism, was achieved by bubbling the hydrogen gas through a porous ceramic rod. This bubbling was controlled by the input and output pressure of parahydrogen in the mixing chamber. Usually, a larger pressure difference meant more intense bubbling. However, a very large bubble size produced by a pressure difference that is too large should be avoided. The hydrogen bubbling time should be long enough to ensure complete reaction of hydrogen, substrates and the catalyst. In our case, we increased the hydrogen bubbling time until the polarization stopped increasing. These timing and pressure parameters were solvent dependent ( Table 1). The temperature dependency was also investigated. For the polarization of pyrazinamide in methanol-d 4 in a magnetic field of 65 G, the enhancements (Fig. 3) of all three protons were relatively low for temperatures below 20.0°C. From 20.0 to 46.1°C, the enhancements of all three protons increased dramatically, before leveling off. Methanol-d 4 was chosen as the first test solvent based on the literature [17,20,23,24]. Methanol was also investigated and found to give enhancements only slightly lower than its deuterated analog (Fig. 4). Two other solvents, ethanol and DMSO, were chosen because of their lower toxicity and suitability for intravenous injection for study in vivo. DMSO is often used as a drug vehicle in medical research. Water was not considered as a solvent due to the catalyst precursor being insoluble. The polarization field dependencies for pyrazinamide in these other solvents showed patterns similar to methanol-d 4 , with optimal enhancement at 65 G. While the enhancement in ethanol resembled that in methanol, it was about an order of magnitude smaller in DMSO (Fig. 4). The enhancement dependencies on temperature in these three solvents at 65 G are plotted in Fig. 5. The polarization studies in DMSO were only carried out at higher temperatures because it was difficult to transfer the sample when it is too viscous, which occurs at a temperature close to the freezing point of the solvent (DMSO, 19°C). Compared to methanol-d 4 , the enhancements in methanol were reduced to a half and in ethanol to a quarter, while those in DMSO were an order of magnitude smaller and thus less suitable to polarize pyrazinamide. Isoniazid In the case of isoniazid, the enhancements of the two protons again showed a ''V-curve'' dependency on polarization magnetic field (Fig. 6). Interestingly, at 0 G, the polarization of proton 2 was negative while that of proton 3 was positive. The optimal magnetic field for both protons was again very similar, namely around 60-65 G. A magnetic field of 65 G was therefore again chosen to study the temperature dependence. At this field strength, the polarization of protons was almost twice of that of proton 3, probably due to proton 2 being closer to the nitrogen atom, which directly bonds to iridium upon ligation. The polarization of isoniazid in methanol-d 4 at a magnetic field of 65 G was measured over the temperature range 4.7-54.4°C (Fig. 7). 
The signal enhancements observed for both protons increased with temperature until reaching maximum enhancements of −220 and −150 fold at 46.1°C. At higher temperature (54.4°C), the enhancements were slightly decreased. The polarization of isoniazid in the other three solvents was also investigated for a polarization transfer magnetic field of 65 G (Fig. 9), even though this magnetic field was not optimal for the polarization in ethanol at room temperature (Fig. 8). The best enhancements were always at 46.1°C. The SABRE enhancement of isoniazid shows a solvent dependence similar to that of pyrazinamide. Compared to methanol-d 4 , the enhancements in methanol were slightly lower, in ethanol about a half, and in DMSO about one fifth, making it a less suitable solvent in which to polarize isoniazid via SABRE. Discussion According to SABRE theory [22], polarization transfer, binding kinetics and spin relaxation determine the size of the enhancement. The polarization of parahydrogen is transferred to the substrate through J coupling networks, the strength of which is determined by the chemical structure and bonding strength of the substrate-metal complex. Since the multi-bond J couplings between the parahydrogen and the substrate are small, a relatively long residence time on the metal (on the order of 100 ms to s) is required for effective transfer (Table 1 lists the parahydrogen bubbling parameters for the four solvents used in the paper). Thus, in the case of fast binding kinetics, the short lifetime of the substrate-metal complex will decrease SABRE enhancements. On the other hand, since the concentration of the substrate is much larger than that of the catalyst precursor, polarization of all of the substrate molecules requires relatively fast exchange between the free and metal-bound forms of the substrate. During the exchange, spin relaxation decreases the polarization. Therefore, binding kinetics that is too slow does not lead to significant SABRE enhancements either. These data show a general trend that the enhancements are better at higher temperature than at room temperature. Generally, the binding kinetics and molecular tumbling are faster at higher temperature. Also, the spin relaxation rates are smaller at higher temperature for these small molecules in the extreme narrowing limit. Faster binding kinetics and slower relaxation lead to higher enhancements, with the best enhancement in most cases occurring at 37.5-46.1°C. The enhancements are negatively correlated with the viscosity of the solvents (methanol < ethanol < DMSO). In the extreme narrowing limit, proton spin relaxation rates are faster for the substrate-metal complex in more viscous solvents, causing polarization loss and a concomitant lower SABRE enhancement. Replacing the protons with deuterons reduces spin relaxation, and methanol-d 4 accordingly showed the best enhancement. In pyrazinamide the parahydrogen spin order is shared with three protons, while it is shared with four in isoniazid. One might therefore expect that for equivalent transfer efficiency the levels of signal enhancement would be in the ratio 4:3. Given the 1400:230 ratio, we conclude that pyrazinamide reflects a better spin system. SABRE enhancement requires complexation of the substrate with the catalyst precursor, which occurs through the formation of a chemical bond between the iridium and a nitrogen in the aromatic ring. Effective polarization transfer requires strong J coupling.
In the substrate-metal complex, the polarization can be transferred to proton 2 in isoniazid and all three aromatic protons in pyrazinamide through a 4-bond J coupling. However, the transfer to proton 3 in isoniazid is through a much smaller 5-bond J coupling. This is the probable cause of the much smaller enhancement of proton 3 compared to that of proton 2 in isoniazid. In addition, pyrazinamide has two nitrogen atoms in the aromatic ring, both of which are able to bind to iridium. This is one possible reason that the enhancement for pyrazinamide is much higher than that of isoniazid. Conclusion We report the SABRE polarization of two drugs used clinically for treating tuberculosis, pyrazinamide and isoniazid [25]. To achieve the best enhancement level, the strength of the polarizing magnetic field and the temperature were optimized together with the bubbling of parahydrogen. Using a fixed catalyst-to-substrate ratio of 1:10, the best enhancements for all three protons in pyrazinamide were obtained in a polarizing magnetic field of 65 G for all solvents. In all solvents, the enhancements at higher temperature were better than those at room temperature. In methanol-d 4 , up to −1400 times enhancement was obtained, corresponding to 8% polarization, which is comparable to that of DNP [28][29][30]. In methanol, ethanol and DMSO, the best enhancements were −960, −320 and −40 respectively. For isoniazid, the best enhancements in methanol-d 4 , methanol, ethanol and DMSO were −230, −140, −120 and −34 respectively in a magnetic field of 65 G. The enhancement of proton 3 was only 50-70% of that of proton 2. Both systems show a similar temperature profile where 37.5-46.1°C seems to reflect the optimum temperature and hence lifetime of the polarization transfer catalyst. It would therefore appear that the J-coupling framework in the pyrazinamide system is more suited for optimal transfer. Considering the solvent effects, the SABRE enhancement can be increased by minimizing the spin relaxation of the substrate-metal complex, namely by using less viscous and deuterated solvents.
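The reported enhancement factors can be converted to absolute polarization levels by scaling the thermal proton polarization at the detection field. The sketch below is a minimal illustration, not code from the paper; the 16.4 T field (roughly a 700 MHz proton frequency) and 298 K are assumptions based on the described setup.

```python
import numpy as np

# Thermal proton polarization in a two-level approximation, P = tanh(gamma*hbar*B0 / (2*k*T)),
# scaled by a SABRE enhancement factor to estimate the hyperpolarized polarization level.
GAMMA_H = 2.675e8      # proton gyromagnetic ratio, rad s^-1 T^-1
HBAR    = 1.0546e-34   # J s
KB      = 1.381e-23    # J K^-1

def thermal_polarization(b0_tesla, temp_k):
    return np.tanh(GAMMA_H * HBAR * b0_tesla / (2.0 * KB * temp_k))

B0 = 16.44   # assumed spectrometer field, T (~700 MHz)
T  = 298.0   # assumed sample temperature, K

p_thermal = thermal_polarization(B0, T)
for enhancement in (1400, 960, 320, 230, 40):
    print(f"|enhancement| = {enhancement:5d} -> polarization ~ {100 * enhancement * p_thermal:.2f} %")
```

Under these assumptions the −1400-fold enhancement corresponds to roughly 8% polarization, consistent with the value quoted in the conclusion.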
3,424.4
2013-12-01T00:00:00.000
[ "Physics" ]
A Simple Approach to the Synthesis of Hexagonal-Shaped Silver Nanoplates This paper deals with the synthesis of hexagonal-shaped silver nanoplates with diameters ranging from 15 to 20 nm along with a smooth nanobulk of 120 nm. These nanoplates were prepared by a kinetically controlled solution growth method using mild reducing agent dextrose, polyvinylpyrrolidone (PVP Mw 40 k Da) as the capping agent and urea as a habit modifier and at a moderate temperature of 50◦C. The crystal structure of the highly faceted particles consists mostly of (111) surfaces as revealed by both X-ray diffraction (XRD) studies and transmission electron microscopy (TEM). Introduction Silver (Ag) nanoparticles of different shapes possess unique and tunable optical properties.Their surface Plasmon resonance (SPR) and surface-enhanced Raman spectroscopy (SERS) can be used for optical labels, contrast enhancement agents, near-field optical probes, and chemical and biological sensors [1][2][3][4][5][6].They can be treated as fluorescent analogue tracer in biological assay [7].These properties and applications of silver nanoparticles relate to their morphology and size.Hence, controlled synthesis of silver nanoparticles with well-defined shapes and sizes is pursued for investigating their size and morphology-dependent properties and their practical applications.A number of solution-phase methodologies have been developed to synthesize the nanoparticles successfully with a wide range of well-defined shapes [8,9] such as cubes and octahedrons [10,11], tetrahedrons [12], icosahedrons and decahedrons [13], wires/rods [14,15], plates with varying profiles such as triangular plates [16], and nanodiscs [17].Among these particles, two-dimensional nanoplates have particularly attracted attention due to their ability to control optical properties.They exhibit intense inplane dipole surface Plasmon resonance peak which is tunable in Visible and NIR regions by simply controlling its edge length to thickness ratio and/or truncation degree of tips which is not possible with spherical Ag particles.Strong absorption coefficient of Ag nanoplates over the broad UV-NIR spectral regions and the scattering of light by these nanoplates make them an ideal class of light trapping material which enhances charge carrier generation in organic photovoltaic films [18].Nanoplates are generally synthesized by transformation of spherical nanoparticles into plates by photo-induced methods.Mirkin's [19] group pioneered this strategy by using fluorescent light and laser with controlled wavelength.Visible light [20], UV light [21], microwaves [22] were also used to induce the conversion of spheres to nanoplates.Later on, kinetically controlled reduction of Ag + ions provided a simple cost effective method to synthesize plate like morphologies [23].Seed-mediated growth method [24,25] is another method used for the synthesis of nanoplates.Xionghui and Aixia [26] have used sulphuric acid as a modifier to prepare hexagonal silver nanoplates.All of these methods require high temperature and long duration of reaction time.In the present work, Ag nanoplates were prepared by heterogeneous nucleation method using preformed silver seeds of size 3-10 nm.The growth solution contained reagents according to a method proposed by Lu and Chou [27].The reaction conditions and the molecular weight of the capping agent are altered to follow a kinetically controlled route to synthesize hexagonal plates.Moderate temperature of 50 • C was used to synthesize hexagonal plates 
with a reaction time of 3 hrs.Urea was added to the growth solution as a habit modifier.Urea is known to change crystal habits in bulk crystals [28].PVP (M w 40 k Da) was used as a capping agent.Silver nanoparticles, 3-10 nm in diameter, were used as seeds to grow hexagonal silver nanoplates of 15-20 nm size. Materials and Methods 2.1.Materials.Qualigen samples of silver nitrate AgNO 3 and polyvinylpyrrolidone (PVP, M w 40 k Da) and Merck samples of dextrose, urea, and trisodium citrate were used as such.All solutions were prepared in deionized water. Synthesis of Silver Seeds.20 mL aqueous solution of silver nitrate (0.156 M) was heated to boiling, 2 mL of sodium citrate (0.025 M) solution was added to it, and the solution was stirred for 3 minutes and was allowed to react for 5 minutes.The solution turned yellow indicating the formation of silver nanoparticles of size 3-10 nm. Preparation of Growth Solution. A growth solution containing 20 mL of aqueous solution of AgNO 3 (0.156 M) and 20 mL of urea solution (0.63 M) was prepared in a 100 mL conical flask.To this flask was added 20 mL alkaline solution of PVP (1 g/1 g of AgNO 3 ).This solution was used as the growth solution. Kinetically Controlled Synthesis of Silver Nanoplates. Silver nanoplates with hexagonal shape were prepared by adding 20 mL of seed solution to the growth solution in a flask and mixed well by stirring.20 mL of an ice cold aqueous solution of dextrose (0.313 M) was added to the flask dropwise with continuous stirring at room temperature.The solution turned jet black and was heated in a thermostat for 3 hrs at 50 • C. The silver nanoparticles were centrifuged at 4000 rpm to produce the precipitate and analyzed by UV-Visible absorption spectra, XRD, and TEM. Characterization. UV-Visible spectrum of the silver nanoparticles was recorded using a Cary 5 E UV-VIS-NIR spectrophotometer.Powder X-ray diffraction (XRD) patterns were measured using (RICHSEIFER) powder diffractometer, using nickel filtered copper K-alpha radiations (λ = 1.5461Å) with a scanning rate of 0.02 • .The diffraction pattern was recorded in the range of 5-70 • .The HRTEM images of the silver nanoparticles were recorded using a JEOL JEM 3010 instrument with a UHR pole piece electron microscope operating at 200 kV.particle size is around 3-10 nm.For small nanoparticles, the peaks in the absorption spectra are due to light absorption only.The product solution containing the hexagonal nanoplates shows two bands and a strong SPR absorption peak at about 418 nm and a shoulder at about 586 nm are contrary to a single SPR band in the range of 390 and 430 nm for spherical Ag nanoparticles.Additional peak at 586 nm is attributed to reduction in symmetry, and the number of peaks correlates with the number of ways the electron density can be polarized [29].The peak at 416-418 nm is attributed to the transverse oscillation of electrons and has contributions from the light scattering as well.The new SPR band at 586 nm has originated from the hexagonal-shaped plates.This peak is attributed to longitudinal oscillation of electrons which can be shifted even up to 1000 nm in the near-IR region when the aspect ratio of the nanoplates increases [30].The UV-Visible SPR information is an evidence for the generation of the Ag nanoplates with hexagonal shape.A comparison among (111), (200), and (220) diffraction intensities reveals that the ratio between the (111) and (200) peaks is much lower than that of the conventional bulk silver.This fact indicates that the obtained 
silver nanoplates are dominantly ruled by (111) planes.The diffraction peak for the (220) lattice planes is also present, but its intensity is very weak.Interestingly, there is an obvious increase in the diffraction intensity of the (111) peak relative to that of the (200) peak in going from particles in seed solution to product solution (Figure 2(b)).This is a strong evidence of the preferential growth of the particles along (111) directions.The formation of nanoplates with essentially (111) facets may be the result of the lower free energy of the (111) planes and the effective adsorption of urea and PVP to the (100) plane. Transmission Electron Microscopy. The particle structure and size distribution can be determined by transmission electron microscopy (TEM).The electron micrographs of PVP-stabilized silver nanoparticles in the seed solution and product solution are given in Figures 3(a) and 3(b).It may be seen that in the seed solution (Figure 3(a)) the particles are spherical and moderately monodispersed.In the product solution (Figure 3(b)), however, the images reveal well-defined hexagonal facets present in many of the nanoparticles.A twin boundary in the middle of the particles can be seen when viewing these nanoparticles at different angles (e.g., Figure 3(b) particles 1, 2, 3).We can also see a nanobulk with a smooth surface of about 120 nm in size with a highly faceted hexagonal shape (Figure 3(c)). (The diameter of the hexagonal particles is determined as the mean of the three distances between opposite corners.) The longest side has a length of around 100 nm and the shortest side 60 nm.It may be the twinning boundary that allows the Ag nanoplates to grow into microplates.The bulk may also be a single crystal according to the "surfacewrapping" mechanism proposed by Feng and Zeng [31]. To further understand the crystal habit of the hexagonal silver nanoplates prepared in this study, a high-resolution TEM characterization was performed on the hexagonal silver nanoplates.HRTEM image of a hexagonal-shaped silver nanoplate (Figure 3(d)) shows the lattice planes of the crystal.The plate seems to contain twin boundaries, which is typical for many of the nanoparticles obtained in this study.The lattice planes have a d-spacing of 2.36 Å for adjacent lattice planes that corresponds to the (111) planes of face-centered cubic silver.In addition to the presence of twin boundaries, the particle contains ( 111) and ( 200) lattice planes with the (111) planes covering a much larger fraction of the particle surface. Mechanism of Formation. 
The mechanism of the kinetically controlled synthesis of the hexagonal Ag nanoplates can be discussed as follows. In order to get a plate-like structure, the precursor reduction rate must be greatly reduced, forcing the Ag atoms to form seeds through random hexagonal close packing with the inclusion of stacking faults. This is achieved by using wider growth conditions. Firstly, a mild reducing agent, dextrose, was used, which promotes the formation of twinned seeds in addition to making the synthesis a kinetically controlled one. Xiong and Xia [32] have demonstrated that the use of a mild reducing agent such as citric acid or ascorbic acid can promote the formation of twinned seeds. Murphy and Jana [33] have shown that the addition of preformed seeds to a metal precursor in the presence of a weak reducing agent effectively isolates the nucleation and growth events, allowing control over the size and shape of the resulting nanoparticles. Secondary nucleation during the growth stage was inhibited. Secondly, urea as a habit modifier and PVP as a capping agent help in the formation of two-dimensional plate-like structures. The use of PVP in the anisotropic synthesis of silver nanoplates and the preferential adsorption of PVP to the (100) plane have been reported in the literature [9,34]. Thus the (100) plane is retained, and the preferential addition of silver atoms to the (111) planes is facilitated. In addition to being a habit modifier by adsorbing to specific crystal planes, urea was reported [27] to change the reaction path significantly by being a source of cyanate and carbonate ions and thus forming the composite intermediates AgOCN and Ag 2 CO 3 . Decomposition of urea in solution occurs as follows [27]: NH 2 CONH 2 → NH 4 + + OCN − (1). In an alkaline solution, cyanate ions react with hydroxide ions to form carbonate ions and ammonium ions. The intermediates were then reduced by dextrose to silver at a relatively slow rate. The kinetically controlled reaction pathway provided sufficient time for the PVP molecules and urea to adsorb onto the silver surfaces. As a result, we need not add many PVP molecules to obtain a uniform dispersion of nanosilver. Some of the urea might have chelated with silver, further slowing down the reduction reaction of the silver ion, and significantly altered the reduction kinetics. The shape of silver nanoparticles is determined by the nature of the reducing agent, the capping agent, the molar concentration ratio of capping agent to silver nitrate, the temperature, and the reaction time. In the case of a mild reducing agent like dextrose, the rate of reduction of Ag + is moderate, and, when the PVP/AgNO 3 ratio is 1:1, PVP can act mostly as a capping agent and may not be a reducing agent in the present study. Yang et al. [35] have reported the solvothermal synthesis of hexagonal Ag nanoplates in DMF assisted by PVP at 140 °C. Conclusion The use of a mild reducing agent like dextrose, a higher concentration of AgNO 3 , polyvinylpyrrolidone (PVP), and urea as a modifier steered the reduction of AgNO 3 along a kinetically controlled pathway to produce hexagonal silver nanoplates at a moderate temperature of 50 °C.
The reaction time is also considerably less when compared to other existing methods. Moderately monodispersed hexagonal plates of size 15-20 nm were achieved by the combined use of urea and PVP, through adsorption of both to specific crystal planes. The crystal structure was confirmed by XRD data and was determined to possess mostly (111) facets, with smaller regions of (100) facets as the particle size increases. The formation of these hexagonal nanoplates is attributed to the possible formation of intermediates that slow down the synthesis and to the adsorption of urea and PVP to specific crystal planes. The present work may help to modify the optical properties of Ag nanoplates in the visible and near infrared (NIR) region by controlling their edge length to thickness ratio. XRD Studies. The crystalline character of the silver nanoplates is supported by the XRD technique. The appearance of the XRD peaks (Figures 2(a) and 2(b)) corresponding to seed and product particles demonstrates the typical pattern of pure FCC crystalline silver (JCPDS Card no. 89-3722). Figure 2: Typical X-ray diffraction patterns of (a) Ag nanoseeds and (b) product solution containing hexagonal silver nanoplates.
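As a cross-check of the (111) assignment, the lattice spacing and expected diffraction angles can be computed from the FCC silver lattice constant and the Cu Kα wavelength quoted in the Methods. The sketch below is illustrative only; the lattice constant a = 4.086 Å is an assumed literature value, not a number reported in this paper.

```python
import math

# d-spacing of cubic (hkl) planes and the corresponding Bragg angle for Cu K-alpha radiation.
A_AG   = 4.086    # assumed FCC silver lattice constant, Angstrom
LAMBDA = 1.5461   # Cu K-alpha wavelength used in the paper, Angstrom

def d_spacing(h, k, l, a=A_AG):
    return a / math.sqrt(h * h + k * k + l * l)

def two_theta_deg(d, wavelength=LAMBDA):
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    d = d_spacing(*hkl)
    print(f"({hkl[0]}{hkl[1]}{hkl[2]}): d = {d:.3f} A, 2-theta = {two_theta_deg(d):.1f} deg")
```

The computed d(111) of about 2.36 Å matches the HRTEM lattice spacing reported above, supporting the (111)-dominated surface assignment of the nanoplates.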
3,039.4
2011-01-01T00:00:00.000
[ "Materials Science" ]
Understanding the Processes Causing the Early Intensification of Hurricane Dorian through an Ensemble of the Hurricane Analysis and Forecast System (HAFS) The early stages of a tropical cyclone can be a challenge to forecast, as a storm consolidates and begins to grow based on the local and environmental conditions. A high-resolution ensemble of the Hurricane Analysis and Forecast System (HAFS) is used to study the early intensification of Hurricane Dorian, a catastrophic 2019 storm in which the early period proved challenging for forecasters. There was a clear connection in the ensemble between early storm track and intensity: stronger members moved more northeast initially, although this result did not have much impact on the long-term track. The ensemble results show several key factors determining the early evolution of Dorian. Large-scale divergence northeast of the tropical cyclone (TC) appeared to favor intensification, and this structure was present at model initialization. There was also greater moisture northeast of the TC for stronger members at initialization, favoring more intensification and downshear development of the circulation as these members evolved. This study highlights the complex interplay between synoptic and storm scale processes in the development and intensification of early-stage tropical cyclones. Introduction Understanding the processes underlying tropical cyclone intensity change continues to be a major goal of research in tropical meteorology. Predicting tropical cyclone (TC) intensity change is important for many reasons, including providing accurate warnings of potential impacts. TC intensity can also affect the track a TC will take [1], and the interplay between them can prove to be very difficult in some forecast scenarios. In this study, we examine the complexities associated with the early intensification of Hurricane Dorian (2019), one of the most devastating and powerful tropical cyclones of the 21st century [2]. The study explores key factors that allowed Dorian to intensify in the Eastern Caribbean despite the presence of marginal to hostile environmental conditions (including dry air), and the connection between the track and intensity of the TC during this critical early period. The link between TC structure and forecast errors of track and intensity, and sources of forecast uncertainty, are both topics of interest in the tropical meteorology community. Davis and Bosart [3] studied numerical simulations of Hurricane Diana (1984), and found that its track was sensitive to the cumulus scheme chosen, because of how the associated precipitation and outflow affected the large-scale steering of the storm. Bassill [4] found a similar sensitivity of the modeled track of Hurricane Sandy (2012) to cumulus parameterization. Torn and Snyder [5] found that initial position uncertainty tends to be higher for weaker storms (when the vortex may be broad and ill-defined). Alaka et al. [6] used an ensemble of Hurricane Weather Research and Forecasting (HWRF) model runs to study the evolution of Hurricane Joaquin (2015). They found that the track of the TC did not seem to be sensitive to the initial vortex location or intensity, but rather was much more sensitive to subtle differences in the synoptic environment. Nystrom et al. [7] also examined Hurricane Joaquin, using an ensemble of WRF runs.
They found that the track of the TC was most sensitive to differences in the initial conditions 600-900 km from the TC center, while intensity was most sensitive to initial conditions in the 300 km closest to the TC. What is clear from these studies is that TC track and intensity can be sensitive to a number of environmental and local perturbations, and the relative contributions of local and large-scale factors can be different for different cases. Here, we explore which factors were most important for allowing Dorian to intensify and move further northeast than initially forecasted in the Eastern Caribbean. Ensemble systems are a useful tool for studying the uncertainty of certain TC forecasts and setups. Zhang and Krishnamurti [8] was an early study that used an ensemble setup to study TC track forecasts and the sensitivity to perturbations in both TC location and the environment around the TC. More recent studies have utilized an ensemble approach to elucidate details of TC intensification and evolution, especially in sheared or otherwise hostile environments. Leighton et al. [9] used an ensemble of HWRF forecasts to examine the intensification of Hurricane Edouard (2014) and found that both vortex-scale processes (like vortex tilt and eddy radial vorticity flux) as well as environmental factors (such as environmental moisture) were key in distinguishing between intensifying and non-intensifying members. Rios-Berrios et al. [10] similarly demonstrated the importance of moisture to the intensification of Hurricane Katia (2011) using the ensemble of Advanced Hurricane WRF (AHW). As these and other studies demonstrate, ensembles are a useful tool for evaluating complex and uncertain processes in TC formation and evolution. Such complex processes were at work in the early evolution of Dorian. Here, we make use of an ensemble set based on the Hurricane Analysis and Forecast System (HAFS), which uses the nested version of the finite-volume cubed sphere (FV3) dynamical core. Similar versions of high-resolution nested FV3 have been used both for realtime prediction [11] and comparison with observational datasets [12], and demonstrated the capability of this model for high-resolution TC prediction. HAFS is a developing component of the Unified Forecast System (UFS) [13]. Hurricane Dorian Background Hurricane Dorian was the strongest hurricane of the 2019 Atlantic Hurricane season, and also the strongest hurricane to impact the Bahamas in recorded history [2]. Dorian was a small storm that formed in the Central Atlantic, and then moved into the Eastern Caribbean Sea and then the Southwest Atlantic. It rapidly intensified and became a Category 5 Hurricane before impacting the northern Bahamas with devastating wind, rain, and storm surge. Dorian slowed to a crawl, stalled, and then recurved and impacted the Outer Banks of North Carolina as a Category 1 hurricane. One of the biggest forecast challenges with Dorian was the motion and intensity change early in its lifecycle, while the storm was moving through and past the Lesser Antilles. This period included a "jump" of the center north of the previous position [2]. During this time, Hurricane Dorian also intensified more than most forecasts suggested [2], and became a hurricane while moving northwest through the Virgin Islands. This early period of intensification, the factors that led to this unexpected intensification, and the connection to the track of the TC will be explored in this study. 
HAFS-globalnest uses physics parameterizations that are similar to the operational GFS model, with some key modifications to account for changes in the tropical cyclone environment. The convective parameterization [16] is used on the global domain, but not on the 3-km nest. The GFS EDMF planetary boundary layer (PBL) scheme [17] is used, with modifications to be more consistent with observed eddy diffusivity and PBL structure in the TC environment [18,19]. For radiation, HAFS-globalnest uses the rapid radiation transfer model for GCMs (RRTMG) [20]. The microphysics scheme is the 6-class single moment GFDL microphysics [21]. The surface scheme accounts for changes in drag coefficient at high wind speeds in the TC core [22]. For the analysis in this study, an 80-member ensemble is used, created by initializing the model from 80 different members from the Global Ensemble Forecast System (GEFS), which uses an 80-member Ensemble Kalman Filter (EnKF) data assimilation system [23]. A similar methodology was applied to initialize a 40-member HFV3 (a nested FV3GFS predecessor to HAFS) forecast ensemble to study the rapid intensification of Hurricane Michael (2018) [14]. The ensemble forecasts for Hurricane Dorian were initialized at 00 UTC on 27 August 2019, when the TC was approaching the island of Barbados. The forecasts were run for 168 h. All 80 members use the same surface analysis from the GFS analysis, including sea surface temperature (SST). Currently, HAFS-globalnest is not coupled to an ocean model to account for dynamic ocean cooling, although that capability is in development. Figure 2 shows the track and intensity forecasts from all 80 ensemble members analyzed in this study. There was a wide range of track outcomes, ranging from a turn east of the Bahamas to a landfall in the Florida Peninsula. A later study will examine synoptic influences on the longer-range track (near the Bahamas and Florida) in these simulations.
This study will focus on the observed track that was mostly outside of the ensemble group in the early part of the forecast period. This bias was seen in several operational ensemble systems as well, including the GFS ensemble (which was used for the initialization of this set) and even the European Center for Medium-Range Weather Forecasting (ECMWF) ensemble (not shown). The intensity forecasts showed a large range, with some members keeping the TC as a weak tropical storm for several days, with others more correctly showing the intensification into a major hurricane. Although the middle of the ensemble group was generally too weak throughout the period, several members did capture the early intensification, and these members will be the focus of detailed analyses for much of the paper. As will be discussed in the next section, the weak bias of the ensemble mean may also be connected to the fact that the ensemble mean track was west of the Best Track, given the relationship between track and intensity that will be discussed. Correlation between Track and Intensity Errors The synoptic-scale influences on Dorian's track, and the factors that were most important for causing the storm to stall in the Bahamas rather than continuing westward into Florida, will be examined in detail in a later study of this ensemble set. Here, we focus on the relationship between the early intensification and the track during this time period (0-48 h). Figure 3 shows the correlation between cross-track errors and intensity errors at each forecast hour. Note that the intensity bias is used to represent the intensity error, such that negative bias indicates a forecast that was too weak. From hours 42 to 96, there is a statistically significant relationship (based on a t-test), with members that were too weak having larger cross-track errors. As can be seen in Figure 2, this result means that weaker members were too far southwest compared to the stronger members, which were farther northeast and closer to the Best Track. Some of this relationship is partially due to the effects of land interaction with Puerto Rico and Hispaniola: tracks that were further northeast had less land interaction and therefore provided more opportunity for the TC to strengthen. However, the track and intensity relationship was already apparent even before the period of land interaction (around hours 36-42), as can be seen in the track plots with intensity overlaid in this time period (Figure 4). The next section will explore some of the key large-scale factors that led to this relationship between track and intensity in this early period.
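The correlation-with-significance analysis described above can be reproduced schematically as follows. This is a minimal sketch rather than the authors' code; the array shapes, the synthetic stand-in data, and the use of a two-sided test on the Pearson correlation at each lead time are assumptions about how such a calculation is typically set up.

```python
import numpy as np
from scipy import stats

def track_intensity_correlation(cross_track_err, intensity_bias):
    """Pearson correlation between cross-track error and intensity bias at each lead time.

    cross_track_err, intensity_bias: arrays of shape (n_members, n_leads).
    Returns the correlation and two-sided p-value for every lead time.
    """
    _, n_leads = cross_track_err.shape
    corr = np.empty(n_leads)
    pval = np.empty(n_leads)
    for t in range(n_leads):
        corr[t], pval[t] = stats.pearsonr(cross_track_err[:, t], intensity_bias[:, t])
    return corr, pval

# Synthetic stand-in for an 80-member ensemble at 6-hourly leads out to 96 h.
rng = np.random.default_rng(0)
leads = np.arange(0, 97, 6)
xte = rng.normal(size=(80, leads.size))            # hypothetical cross-track errors (km)
bias = 0.5 * xte + rng.normal(size=xte.shape)      # hypothetical intensity bias (kt), partly correlated
corr, pval = track_intensity_correlation(xte, bias)
for h, r, p in zip(leads, corr, pval):
    print(f"{h:3d} h: r = {r:+.2f}, p = {p:.3f}")
```

With real cross-track and intensity-bias arrays in place of the synthetic stand-ins, this reproduces the style of lead-time-by-lead-time significance testing described for Figure 3.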
Examination of Environmental Variables and Correlation with Early Intensity Next, the ensemble data is used to explore some of the key early factors that allowed Dorian to intensify in the early period. The "early intensity" is defined as the intensity at 42 h, which was right before many members passed over or near Puerto Rico and Hispaniola. This allowed for the early intensity to be examined without the complicating influences of land interaction (which certainly played a role in some members after this early period). The first analysis involved examination of a number of synoptic-scale environmental variables that were calculated from the ensemble data, and their relationship with the intensity at 42 h. These variables are similar to those used to drive the Statistical Hurricane Intensity Prediction Scheme (SHIPS) [24], and are listed in Table 1. The linear correlations between each of these variables at each lead time from 0 to 42 h and the intensity (minimum pressure) at 42 h are shown in Figure 5. Even at this early time, there was a large spread in forecast intensity, with the minimum pressure in the members ranging from 982 hPa to 1009 hPa. Several interesting patterns stand out from the correlation. First, almost all of the variables show little correlation at early lead time (~0 to 6 h), indicating that differences in the environment at model initialization were not directly responsible for the later intensity differences, but the evolution of the environment and TC vortex throughout the forecast was important. It should be noted, however, that some of the variables are likely related (humidity and upper divergence, for example), so it is difficult to fully examine each variable independently. One notable variable that did show a significant relationship was the upper-level zonal wind near the TC, indicative of some difference in upper-level anticyclone structure between the members at the initialization time that will be examined in more detail. The vertical shear at early lead times was not well-correlated with intensity at 42 h, but the relationship improved closer to that forecast hour as the environment around the TC evolved. The variables that were most significantly correlated with intensity at 42 h, with 12-24 h lead time, were the moisture (in all layers), the upper-level divergence, and the mid-level (850-500 hPa) shear. These correlations generally peaked at around 12-30 h, suggesting that this period after spin-up was key to determining the intensity and structure of Dorian in the different members. The SHIPS-like variables are useful for assessing the overall importance of large-scale synoptic factors, but because of the large spatial averaging involved, it can be difficult to ascertain the exact nature of the large-scale features responsible for influencing the TC intensity [9]. Thus, the structure of the synoptic environment early in the lifecycle of Dorian is examined next. In order to tease out the relationship with intensity change, composites of the 20 strongest and 20 weakest members (as defined by the minimum pressure at 42 h) are compared.
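A minimal sketch of how such strong/weak composites and their difference can be formed and tested for significance is shown below. It is illustrative only, not the study's code; the member ranking by 42-h minimum pressure and the gridpoint t-test mirror the description above, but the array names, grid sizes, and synthetic data are assumptions.

```python
import numpy as np
from scipy import stats

def strong_weak_composites(field, min_pressure_42h, n_group=20, alpha=0.05):
    """Composite a gridded field over the strongest/weakest members and test the difference.

    field: array of shape (n_members, ny, nx), e.g. 200-hPa geopotential height at one lead time.
    min_pressure_42h: array of shape (n_members,) used to rank members (lower = stronger).
    Returns (strong_mean, weak_mean, difference, significance_mask).
    """
    order = np.argsort(min_pressure_42h)
    strong = field[order[:n_group]]          # lowest minimum pressure = strongest members
    weak = field[order[-n_group:]]           # highest minimum pressure = weakest members
    _, p_val = stats.ttest_ind(strong, weak, axis=0, equal_var=False)
    diff = strong.mean(axis=0) - weak.mean(axis=0)
    return strong.mean(axis=0), weak.mean(axis=0), diff, p_val < alpha

# Synthetic example standing in for 80 members on a small grid.
rng = np.random.default_rng(1)
z200 = rng.normal(loc=12400.0, scale=20.0, size=(80, 50, 60))   # hypothetical 200-hPa heights (m)
pmin = rng.uniform(982.0, 1009.0, size=80)                       # hypothetical 42-h minimum pressure (hPa)
strong_mean, weak_mean, diff, sig = strong_weak_composites(z200, pmin)
print("max |difference|:", np.abs(diff).max(), "m; significant points:", int(sig.sum()))
```

The same function could be applied to the wind or moisture fields at any lead time to produce difference maps in the style of Figures 6 and 7.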
Figure 6 shows the differences in 200-hPa height between the strong composite and the weak composite, as well as the difference in the composite wind vectors, at 0 h (initialization), 24 h (near when most variables showed peak correlation with 42 h intensity), and 42 h. There are clear differences even at the initial time, with the strong group having a much stronger anticyclone to the north and east of the TC center. This difference is significant based on a t-test. The vector difference shows that there is stronger upper-level divergence in this region (including at/near the TC center), and the upper-level low pressure area to the north of the TC is weaker at initialization in the strong group (see a later section for an individual example from two members). As time went on, this difference in the upper-level fields became even more pronounced, with a stronger upper-level anticyclone over and especially to the east of the TC center in the strong group, and a weaker upper-level low due north of the TC (as seen in the dipole pattern in Figure 6c). This difference in the upper-level wind and height fields was clear at the initial time and grew with time in the forecasts, with an apparent positive feedback between the upper divergence and convection on the north and east sides of the TC. This difference is also reflected in the mid-to-upper-level moisture field (Figure 7). There is slightly higher moisture initially to the northeast of the TC center in the strong composite, seemingly coupled with the area of stronger upper divergence. As time went on, this difference became more pronounced. Interestingly, while the strong group has higher overall humidity due to the large difference on the east side, the strong group also has a pronounced area to the southwest where there is lower humidity. This implies a preferred location for convection to the northeast in the strong group, and to the southwest in the weak group, which will be seen in the individual members examined later. Examination of Structural Variables and Correlation with TC Intensity The above analysis highlighted some of the composite synoptic variables that were important for the early evolution of Dorian. Next, some of the storm-scale structural metrics that have been shown to be important for other TCs were examined to see which ones correlated with the intensity of the TC in the ensemble members. For this analysis, a precipitation partitioning algorithm was used that has been applied to radar data in several previous studies [25,26] to distinguish between stratiform and convective precipitation. The algorithm also sorts convective precipitation into shallow, moderate, and deep convection based on height and reflectivity thresholds. Originally developed by Steiner et al. [27], the partitioning algorithm has also been applied to numerical model output by Rogers [28].
As in that study, some adjustments to several thresholds and tuning parameters were made due to differences between model simulated reflectivity and observed reflectivity from aircraft data. These differences can include model bias as well as attenuation in the aircraft radar observations. Table 2 lists several key height and reflectivity thresholds used in the model, and compares these with the radar method of Steiner et al. [27]. The radar thresholds are also applied to the observed NOAA P-3 aircraft radar data used for comparison with the model data. In this analysis, we examine time series of the evolving inner-core precipitation structure. To do this, the metric chosen is eyewall closure, similar to that discussed in Matyas et al. [29] and Hazelton et al. [14]. The threshold is based on the integer value of the precipitation type in the algorithm (one for weak echo, two for stratiform, three for shallow convection, four for moderate convection, five for deep convection). Thus, a closure of one indicates a perfectly closed ring, and a closure of zero indicates no precipitation meeting the required threshold. The region analyzed is the eyewall, defined as r* = 0.75 RMW 2km to r* = 1.25 RMW 2km (as in Rogers et al. [30] and Hazelton et al. [31]), where RMW 2km is the azimuthal mean radius of maximum winds at 2-km altitude. A few other dynamic and thermodynamic variables were also examined. The local Rossby number is related to both the size and magnitude of the incipient vortex [32], and is defined as Ro = V t,RMW10m / (f RMW 10m ), where V t,RMW10m is the tangential wind at the 10-m RMW, RMW 10m is the 10-m RMW (similar to but usually slightly different from the 2-km RMW due to eyewall slope [33]), and f is the Coriolis parameter. Another variable examined is the magnitude of the inner-core temperature anomaly, which is related to the developing TC warm core and defined similarly to Zhang et al.
[34] as the difference ΔT = T 0-15 − T 200-300 , where T 0-15 is the temperature averaged within the 0-15 km radius from the TC center and T 200-300 is the temperature averaged within the 200-300 km radius from the TC center. The warm core magnitude is taken as the maximum of this difference in the 0-18 km vertical layer. A final dynamic variable examined is the vortex tilt. The tilt is examined in two layers, 2-5 km and 2-10 km, with the center defined based on the vorticity centroid at each height. Table 3 lists the structural variables examined, including their abbreviations and definitions. Figure 8 shows the correlations between these structural metrics and the 42 h intensity of the TC in the ensemble members, again defined as the minimum central pressure. Unsurprisingly, the precipitation organization, vortex strength, and warm core anomaly all correlated strongly (and statistically significantly) with the intensity at 0 lead (42 h). As with the synoptic variables, though, several variables show strong relationships at earlier leads. In particular, the percentages/closure of moderate and deep convection at hours ~15 and onward are strongly correlated with the 42 h TC intensity. Similarly, the Rossby number and warm-core magnitude around this time (15 h) are well-correlated with the intensity over a day later. This further highlights that, in addition to the large-scale differences, vortex-scale and convective structural differences between members were critical in determining the evolution in the Eastern Caribbean. Inner-core structure has been shown to be a critical component of RI onset in moderately-sheared environments. A decrease of RMW 10m , which leads to a larger Rossby number, is one such structural change that has been linked to RI onset [32]. Precipitation symmetry has also been shown to be a critical part of conditioning a vortex for rapid intensification [35,36], and these relationships imply that was true in the early intensification of Dorian. There is also a strong relationship (r = 0.88) between the initial intensity in the simulations and the intensity at 42 h, implying that some of the structure differences (and perhaps some of the large-scale initial differences discussed above) were tied to the initial state of the TC. More details of the structural evolution are highlighted in the comparison of two members in the next section.
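A compact sketch of how the structural metrics defined above (eyewall closure, local Rossby number, and warm-core magnitude) might be computed from gridded model output is given below. This is an illustrative reconstruction based on the definitions in the text, not the authors' code; the variable names, the azimuthal binning of the eyewall annulus, and the example values are assumptions.

```python
import numpy as np

def eyewall_closure(precip_type_ring, threshold=4):
    """Fraction of azimuthal bins in the eyewall annulus (0.75-1.25 RMW_2km) at or above
    a precipitation-type threshold (4 = moderate convection). 1 = closed ring, 0 = none."""
    return float(np.mean(precip_type_ring >= threshold))

def local_rossby_number(v_tangential_at_rmw10m, rmw10m_m, latitude_deg):
    """Ro = V_t / (f * RMW), using the 10-m RMW and the tangential wind at that radius."""
    omega = 7.292e-5                                   # Earth's rotation rate, s^-1
    f = 2.0 * omega * np.sin(np.radians(latitude_deg))  # Coriolis parameter
    return v_tangential_at_rmw10m / (f * rmw10m_m)

def warm_core_magnitude(temp, radius_km, height_km):
    """Max over 0-18 km of (mean T within 0-15 km radius) - (mean T in 200-300 km radius).
    temp has shape (n_levels, n_radii) on an azimuthally averaged grid."""
    inner = radius_km <= 15.0
    outer = (radius_km >= 200.0) & (radius_km <= 300.0)
    anomaly = temp[:, inner].mean(axis=1) - temp[:, outer].mean(axis=1)
    return float(anomaly[height_km <= 18.0].max())

# Hypothetical example values (not taken from the paper):
ring = np.array([5, 4, 4, 3, 5, 4, 4, 5, 2, 4, 5, 4])
print("eyewall closure:", eyewall_closure(ring))
print("local Rossby number:", round(local_rossby_number(25.0, 40e3, 15.0), 1))
height = np.linspace(0.0, 19.0, 20)
radius = np.linspace(0.0, 400.0, 40)
temp = 280.0 - 0.5 * height[:, None] + 3.0 * np.exp(-(radius[None, :] / 50.0) ** 2)
print("warm-core magnitude (K):", round(warm_core_magnitude(temp, radius, height), 2))
```

These are schematic definitions only; in the study the closure is computed from the partitioned precipitation field and the RMW and tangential wind come from the simulated wind field.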
Detailed Examination of Two Divergent Members In order to further explore the details of the synoptic and vortex-scale factors that were responsible for Dorian's evolution in the period of its lifecycle near the Antilles, two individual members are next examined in detail. Track/Intensity Comparison between Two Members Figure 9 shows the tracks of the two members as well as the Best Track. One of the members, member 57 (the "weak member"), was an outlier on the weak and southwest side of the ensemble envelope, showing very little development in the first few days and passing between Puerto Rico and Hispaniola. This was similar to the forecast from several operational and experimental models at this point during Dorian's lifecycle (not shown). On the other hand, member 72 (the "strong member") was on the right side of the ensemble envelope, one of a few members to predict a track that was close to the observed path. It was also one of the stronger members during this early time period, correctly predicting intensification as the TC moved to the northwest and passed over St. Croix and east of Puerto Rico. As shown in Figure 9, the track and intensity of the strong member was very close to the "Best Track" for this early period. The fact that neither of these members was directly impacted by a large land mass during this period also helps with the analysis of the processes responsible for intensity change. To provide another illustration of the evolution of intensity in these two members compared to the observed TC, Figure 10 shows the time series of maximum wind speed and minimum central pressure from both members, as well as from the "Best Track" observations. The two members followed each other and the Best Track closely for the first 12 h of the forecast.
However, just after 12 h, a divergence began to occur, with the weak member continuing to stay basically flat in intensity while the pressure began falling and the winds began to increase in the strong member as well as in the observed TC. The rate of intensification was slightly quicker initially in the strong member than in observations, with more fluctuations (which is typical of noisier model data) and a slightly different pressure-wind relationship than observed. However, by the end of this 48-h period the strong member was close in intensity (~75 kt) to the observed intensity (70 kt), while the weak member remained weak (35-40 kt) through the entire forecast period. Looking back at Figure 9, 12-15 h was when the tracks of the two members started to diverge as well. At 12 h (5th forecast point), both members (and Best Track) showed the center of the TC near St. Lucia. The trajectories of the members diverged from this point forward, however. The coincidence of the divergence in both track and intensity around this point is consistent with the full ensemble results discussed above. The environmental and structural changes leading to this difference are discussed next. Synoptic Differences between Two Members In this section, we dig deeper into the synoptic-scale processes that were responsible for the intensity divergence between the two members. As shown in the composites above, one of the key differences appeared to be the early evolution of the subtropical ridge to the northeast of the storm, and the upper-level low to the northwest. The interplay between these two features, while somewhat subtle, appeared to play a major role in determining the intensity and structural evolution of the TC in the Eastern Caribbean. Figure 11 illustrates this point, showing the 200-hPa height and wind at initialization/0 h, 24 h, and 36 h for the weak and strong members.
While the key synoptic features are similar in both members, the differences in their structures were important. In the strong member, the upper-level low to the north of Dorian was weaker than in the weak member at initialization, and the upper-level anticyclone to the north and east of the TC (around 20 N, 50-55 W) was stronger at initialization as well. These differences persisted and grew throughout the forecast. This contributed to the slightly stronger environmental shear in the weak member, and the tendency for more precipitation on the east side of the strong member (discussed later), helping the evolving TC circulation to develop closer to the convection on the east side. As another way of visualizing the large-scale differences between the two members, the 200-hPa potential vorticity (PV) at 24 h is shown in Figure 12. This variable highlights the key synoptic features impacting Dorian in the early lifecycle. The large-scale pattern was very similar in the two members, but some key subtle differences were apparent, especially in the difference plot (Figure 12c). The PV streamer (upper low) was farther southeast (closer to the TC) in the weak member, and there was a generally larger area of low PV surrounding the TC in the strong member, associated with the larger/stronger upper-level anticyclone seen in the ensemble composites. Both of these differences provided a more favorable environment for convection development in the strong member. This point is explored next in a detailed analysis of the shear structure near Dorian in these two members. Although the upper-level wind fields evolved differently in the strong and weak members, the earlier correlations (Figure 5) also showed that the mid-level shear was a distinguishing factor for the TC intensity as well. Some previous studies have in fact suggested that shallower, mid-troposphere shear can actually be a greater detriment to intensification than deep-layer shear [37]. Figure 13 shows the mid-level (850-500 hPa) shear in the two members in the area near and around the TC, at initialization and 24 h into the forecast.
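Before comparing the two members' shear fields, the following minimal sketch (an assumption-laden illustration, not the authors' diagnostics) shows one way the 850-500 hPa shear could be estimated from hypothetical u and v wind grids; the annulus average is only a rough stand-in for the vortex-removal step mentioned for Figure 5.

```python
# Minimal sketch: estimating the 850-500 hPa layer shear from hypothetical
# u/v wind grids centered on the TC. Averaging over an annulus away from the
# center is used here as a crude approximation of removing the TC circulation.
import numpy as np

def layer_shear(u850, v850, u500, v500, x_km, y_km, r_inner=200.0, r_outer=800.0):
    """Return (magnitude in m/s, direction-toward in deg) of the 850-500 hPa
    shear, area-averaged over an annulus to reduce the vortex's own signature."""
    r = np.hypot(x_km, y_km)                     # distance from the TC center
    mask = (r >= r_inner) & (r <= r_outer)
    du = (u500 - u850)[mask].mean()
    dv = (v500 - v850)[mask].mean()
    mag = np.hypot(du, dv)
    heading = (np.degrees(np.arctan2(du, dv)) + 360.0) % 360.0
    return mag, heading

# Toy example on a 401x401 grid with 5-km spacing
x = np.arange(-1000, 1005, 5.0)
x_km, y_km = np.meshgrid(x, x)
u850 = np.full_like(x_km, 2.0); v850 = np.zeros_like(x_km)
u500 = np.full_like(x_km, 8.0); v500 = np.full_like(x_km, 1.0)
print(layer_shear(u850, v850, u500, v500, x_km, y_km))  # ~(6.1 m/s, toward ~80 deg)
```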
The shear distributions in the two members were similar initially, with the dominant feature related to the circulation of the TC itself. At 24 h, the TC circulation is also evident (this circulation is removed in the shear calculated for Figure 5), but there is also noticeably stronger environmental westerly mid-level shear west and northwest of the TC in the weak member, so there is stronger environmental shear in addition to the changes due to vortex structure. As the correlations show, this large mid-level shear contributed to the slower intensification in the weak member. The structural impacts of slightly stronger shear (both mid and upper level) in the weak member, including precipitation and vortex tilt, are discussed in the next section. Structure Comparison between Two Members and with Observations Due to its proximity to land for much of its lifecycle, as well as the length of time that it spent in the Western Atlantic, Dorian was one of the most directly observed tropical cyclones in modern history, including 15 missions [38] from the NOAA WP-3D Orion Hurricane Hunter aircraft, which collected Doppler radar data from the tail radar of the plane. These radar data were used for comparison between the model results and the observed TC, to understand how realistic the structural changes observed in each member were, and how these structure changes may have been related to the intensity differences in the modeled TCs. Figure 14 shows the comparison between the modeled and observed precipitation and wind structure around hour 24 of the simulations, while Figure 15 shows the comparison 24 h later (a new flight, near forecast hour 48). At 24 h, differences between the two members were already apparent. The mid-level vortex was tilted in both members and in observations. However, it was tilted about 40 km to the southeast in the strong member and observations, while in the weak member it was tilted about 70 km to the southeast. In addition, while an asymmetric precipitation structure was present in both members and observations, more precipitation wrapped around the upshear side in the strong member. Neither member had as much precipitation on the west side of the TC as was observed, but the strong member was closer to reality.
From these analyses, it appears that some of the structural changes in Dorian (vortex tilt and precipitation symmetry) related to intensity change were similar to those seen in sheared TCs [10,14,39], although the large-scale shear was moderate at this point (7 kt in the strong member and observations, 10 kt in the weak member). At 48 h (Figure 15), the differences between the two members are even more apparent. At this point, while the TC was near the end of this "early intensification" period, the large-scale shear was almost identical in the two members and observations (10-11 kt from the southwest), but the weak member was still extremely asymmetric, with almost all of the precipitation on the east side of the low-level center. In contrast, while the large-scale precipitation pattern in the strong member was somewhat asymmetric (and in the aircraft radar data as well), there was a small, symmetric core at the center. This small but robust circulation was seen in both the reflectivity and velocity fields. In addition, while the vortex tilt was still at around 70 km in the weak member, it was largely reduced and the TC vortex had become nearly vertically aligned in the strong member. The radar data also showed that the observed TC had become vertically aligned by this time. Clearly, despite similar initial intensities and positions, the evolution of TC structure diverged significantly between these two members, and it should prove insightful to further examine synoptic-scale and storm-scale details of the evolution of these two members, to understand how Dorian was able to intensify in the Eastern Caribbean. Storm-Scale Structure Differences between Two Members This section examines some of the key storm-scale differences between the two divergent members, to identify some of the smaller-scale details that were important for Dorian's evolution in the Eastern Caribbean. One key difference was seen in the precipitation structure of the two members. Figure 16 shows the reflectivity at z = 2 km, as well as the precipitation types, for the weak member and the strong member at 24 h, 33 h, and 48 h. Observed reflectivity and precipitation types from the NOAA P-3 at times corresponding to 24 h and 48 h are also shown. For the weak member, the precipitation is initially extremely disorganized, showing a sheared pattern with almost all moderate and deep convection displaced to the south and east of the low-level center. At 33 h, a strong band of precipitation forms and begins to wrap around the center, but this symmetrization does not last, and by 48 h the pattern is back to that of a highly asymmetric and disorganized system. In contrast, the symmetrization process progresses much more effectively in the strong member. At 24 h, a comma-shaped curl of deep and moderate convection forms on the northeast (downshear) side and is beginning to wrap around the center of the TC. By 33 h, this precipitation has effectively symmetrized into a developed inner-core vortex, with much more symmetric stratiform precipitation, that is maintained for the next 6 h and beyond. Interestingly, a symmetric rainband can also be seen propagating away from the TC between 33 h and 39 h (not shown) in the strong member, likely a result of the diurnal cycle described by Dunion et al. [40].
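A minimal sketch of how the eyewall closure metric used below (and defined earlier from the precipitation-type algorithm) could be computed from a 2-km precipitation-type field is given next; the grid layout and azimuthal bin count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the eyewall "closure" metric: the fraction of azimuthal
# bins in the eyewall annulus (0.75-1.25 x RMW_2km) that contain precipitation
# at or above a given type (1 weak echo, 2 stratiform, 3 shallow convection,
# 4 moderate convection, 5 deep convection).
import numpy as np

def eyewall_closure(precip_type, x_km, y_km, rmw_2km, min_type=2, n_az=72):
    """precip_type: 2D integer field of precipitation classifications at z = 2 km."""
    r = np.hypot(x_km, y_km)
    theta = np.arctan2(y_km, x_km)
    annulus = (r >= 0.75 * rmw_2km) & (r <= 1.25 * rmw_2km)
    az_bins = ((theta + np.pi) / (2 * np.pi) * n_az).astype(int).clip(0, n_az - 1)
    closed = np.zeros(n_az, dtype=bool)
    for b in range(n_az):
        sel = annulus & (az_bins == b)
        if sel.any() and (precip_type[sel] >= min_type).any():
            closed[b] = True
    return closed.mean()   # 1.0 = fully closed ring, 0.0 = no qualifying precip

# Toy example: a half ring of stratiform precipitation around a 40-km RMW
x = np.arange(-150, 151, 2.0)
x_km, y_km = np.meshgrid(x, x)
field = np.zeros_like(x_km, dtype=int)
ring = (np.hypot(x_km, y_km) > 30) & (np.hypot(x_km, y_km) < 50) & (y_km > 0)
field[ring] = 2
print(eyewall_closure(field, x_km, y_km, rmw_2km=40.0))  # ~0.5
```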
In the observed radar data, the structure of the TC consists of 1-2 curved convective bands wrapping around the TC at 24 h, that then grow into a small but robust and symmetric inner core by 48 h, with moderate to deep convection surrounding a small eye. A prominent rain band is evident in the observations as well. It is clear from this comparison that the development of the TC core in the strong member was much more in line with observations. Figure 17 shows the evolution of the closure metric for the weak and strong members, and also shows the observed values of eyewall closure based on P-3 radar data. In the strong member, the stratiform precipitation has high closure initially and remains high. The closure of shallow and moderate precipitation increases through the first 48 h. In the weak member, there is less stratiform precipitation initially, and the overall trend in closure is downward. The overall shallow, moderate, and deep convective trends in the weak member were all flat as well, despite the "spike" at 33 h that did not persist. Looking at the observed radar data, there were some fluctuations (likely tied to both convective pulses and data quality), but the overall trend was for an increase in closure of all precipitation types between the flight around 24 h and the flight around 48 h. The evolution of these metrics in the strong member was more consistent with observations. Clearly, the symmetrization process was much more efficient in the strong member than in the weak member, helping the storm to form a compact but robust inner core that allowed it to intensify despite the relatively dry environment around the TC. As can be seen in the evolution between Figures 13 and 14, the mid- and upper-level vortices in the strong member became aligned more quickly than in the weak member, which was consistent with the observed evolution of the TC. This evolution of the two members, with the stronger member having stronger downshear convection near the TC center, earlier vortex alignment, and then greater symmetrization of precipitation (and increase in closure of stratiform precipitation prior to intensification), is consistent with the evolution of Hurricane Edouard (2014) as shown in both an observational study by Rogers et al. [41] and an ensemble study by Alvey et al. [35], and also with the intensification of Hurricane Earl (2010) in simulations [42]. Discussion This study highlights the utility of an ensemble technique using HAFS to study the evolution and intensification of a complex tropical cyclone.
This builds off of prior studies using ensembles to study tropical cyclone evolution, using FV3-based models [13] or other models such as WRF or HWRF [4,7,8]. Ensembles allow for the study of both statistics in a large group of members as well as detailed examination of case studies within the ensemble group, and this is the method applied in this study. The analysis presented here highlights several of the key factors leading to the early intensification of Hurricane Dorian in the Eastern Caribbean. Tropical cyclones are unique systems that span a variety of scales, and the evolution of Dorian appeared to be connected to processes on several different scales, including the synoptic scale, vortex scale, and convective scale. The synoptic environment, especially over and to the northeast of the TC, was significantly different between strong and weak members from the time of the analysis, and this difference grew over time. The stronger members had a large-scale environment characterized by stronger upper-level divergence and larger values of upper-level moisture, which likely promoted convective growth especially on the northeast side of the developing circulation. Over time, this large-scale flow evolved into a larger upper-level anticyclone more favorably placed over the TC in the stronger members, insulating it from the shear due to a PV streamer to the west. This feature was present in both the strong and weak members, but subtle location and strength differences of the anticyclone appeared to be key to the intensity evolution of the TC. It is worth noting that some of the differences in composite synoptic flow near the mean TC position could be partially due to the differences in TC intensity, as the more favorable synoptic conditions in the stronger members feed back positively on a stronger TC circulation. For example, the greater upper-level divergence in the stronger member(s) promoted a stronger secondary circulation, which leads to increased convection [43] and latent heat release, which then, in turn, further strengthens the upper-level anticyclone and associated upper divergence. The exact timing of the related changes in synoptic flow and storm-scale convective structure can be difficult to determine, but as the precipitation analyses showed, the favorable synoptic pattern in the strong member did appear to be connected to a more organized core convective structure favorable for intensification. The favorable synoptic environment appeared to precondition the atmosphere to allow a more symmetric convective structure to develop in the stronger ensemble members. The development of a symmetric core is key to tropical cyclone rapid intensification [44], and this was reflected in this ensemble set (and was also consistent with the observed data). The visual, qualitative comparisons with aircraft data provided evidence of the importance of the core formation in the intensification of Dorian. Calculation of structure metrics allowed for a more quantitative analysis of this aspect of the forecast. Both precipitation symmetry and metrics related to vortex intensity are unsurprisingly closely related to the intensity when the TC is in the northeast Caribbean, but the correlations at earlier lead times highlight the importance of TC core formation early in this period to the overall intensity evolution.
This was highlighted in the comparison of two diverging members, with a small, increasingly symmetric inner core forming in the more realistic member, while only pulsing convection with a lack of consistent organization was occurring in the weaker member. Conclusions and Future Work In this study, we demonstrated the usefulness of a HAFS ensemble system to analyze a complex system like Hurricane Dorian. Detailed examination of the ensemble members in both composite and individual frameworks highlighted several of the key synoptic-scale features that evolved and allowed Dorian to intensify in the Eastern Caribbean. These factors included enhanced upper-level divergence and moisture northeast of the TC as it moved into the Caribbean, as well as the TC's more northeast location keeping it farther away from an upper-level low and its associated shear. The more favorable upper-level pattern also allowed a more robust and symmetric inner core to develop and enhance the intensification of the TC in the Eastern Caribbean. This enhanced core structure and convection (seen in the precipitation plots) may have also contributed to reducing the shear around the TC [45]. Inner-core structure metrics related to the convective and vortex structure (including tilt and eyewall closure) also showed strong correlation to the intensity at 42 h (as the storm reached the edge of the Caribbean). The strong member that was analyzed in detail became vertically aligned with symmetric precipitation, while the weaker member remained vertically tilted and highly asymmetric. This analysis highlights the importance of processes across multiple scales for TC intensification, and the complex interplay between these processes. Future work will explore how the early intensification was related to the track differences at later periods in the ensemble set, and whether the track differences were mostly due to synoptic-scale or storm-scale processes.
11,884.6
2021-01-10T00:00:00.000
[ "Environmental Science", "Physics" ]
Perception of IT Professionals Introduction The Special Interest Group on HCI of the Association for Computing Machinery (ACM) describes HCI as the discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use, and with the study of major phenomena surrounding them (SIGCHI Curriculum Development Group). HCI knowledge is seen as an essential requirement for developing technology that supports human physical and mental abilities [1]. HCI has been considered very important in the IT industries of the United States and Europe since the 1980s [2]. HCI is the practice and study of usability [3]. Usability is a measurable property of a product, for example how easy it is to use and to learn [3]. Usability is a quality attribute of the complete system, not only of the user interface [4]. With the transition to windowed systems, the Internet, and smart mobile phones, usability has become a very significant attribute of a system, and scholars throughout the world are now paying considerable attention to it [5]. Usability awareness is a phenomenon concerned with the usability attributes of learnability, efficiency, memorability, errors, and satisfaction. Jakob Nielsen identified eight levels of usability, called the levels of organizational usability maturity, to be implemented by an organization during the product development phase. Assurance of usability on projects can be increased by executing these eight stages; after applying them, the system should become usable enough that the user needs no training to use it [6]. In contrast with developed countries, the current status of HCI and usability in Pakistan is still developing. However, the IT revolution in Pakistan is clearly coming of age, encouraged by the use of the Internet and the falling price of telecommunication. Therefore, we conducted this research to find the current state of HCI/usability in the Pakistani IT industry. This paper is divided into seven sections. The first section introduces the study. The second section describes the literature review. The third part explains the methodology. Data analysis is explained in the fourth section. Results of this study are presented in section five. The sixth section contains the discussion, and the last section contains the conclusion. Literature Review A survey study was conducted in Malaysia. A total of seventy-two respondents participated in this study: 23 were IT practitioners, 27 were IT students, and 22 were non-IT practitioners. The participants were system analysts, salesmen, software developers, marketing persons, and IT students. Outcomes of the study show that there is no big difference in usability awareness between IT experts, non-IT practitioners, and IT students. Results reveal that usability is treated as a God-given skill and common-sense knowledge by both IT and non-IT staff. Organizations were considered to be at the unrecognized level of usability. The Usability Maturity Model was used to find the current state of awareness of an organization [7]. Collaboration between academia and industry is considered vital for better perceptions of HCI. HCI experts from the IT industry are invited to universities to convey the real-world view of HCI to students. At the postgraduate and undergraduate levels, HCI is considered an important subject in Malaysia. MIMOS Berhad is a government agency concentrating on cooperation between industry and academia.
Through MIMOS, the collaboration between industry and academia includes offering internships to students, delivering lectures, and evaluating students' projects. To promote HCI, these initiatives need to be taken up by the whole Malaysian industry [6]. Another survey was conducted in the UAE, where the participants were IT managers and end users. The findings reveal that the participants did not think that usability-related issues existed. Usability of the system was not considered an important factor. They did not have knowledge about usability and HCI. A lack of usability staff was witnessed, and end users were not involved in the design phase of the system [6]. A survey was conducted in Korea with a total of 72 respondents, who were development practitioners and UI/usability practitioners. A lack of usability staff and a lack of usability methods were identified, and the findings show that usability is not applied to products [8]. Top Chinese organizations recognize User Experience (UX) as a basic issue. These leading Chinese organizations have their own research laboratories and usability engineering staff; about 200 HCI/usability experts are employed in these laboratories [9]. Most recently, a survey was conducted in the Pakistani IT industry to find out the current state of usability awareness in organizations. Most of the participants in that survey were from open software houses. It was a pilot survey (study 1) conducted by the same author, and the data were analysed on the basis of the UMM. After completion of study 1, the second survey study was started, and both PSEB-registered and non-registered firms were included in the current study. The results of survey study one [10] reveal that software developers were found to be confused about usability and not interested in HCI or usability-related issues. Most of the respondents did not consider usability an important software factor. Usability awareness levels of experienced and non-experienced IT professionals were equal. In most of the organizations, end users were not involved in the user interface design phase of the product. A lack of HCI or usability staff was witnessed in most of the organizations, and most of the organizations were categorized at the unrecognized level of the UMM. Methodology Different usability maturity evaluation models have been proposed. To find out the usability awareness in a corporation, this study uses the Usability Maturity Model (UMM), briefly explained in Table 1. Recent studies [11] in Asia have adopted the UMM to find the level of usability awareness in a company. Respondents profile A total of 117 respondents took part in the study. Respondents were developers (Android, iOS, Web, and computer systems), software engineers, quality assurance engineers, managers, internees, designers, software analysts, senior executives, and CEOs. As many as ninety-seven (82.9%) respondents were in the 30 years or less age group. Nineteen (16.2%) respondents were in the 31 to 40 years age group. Only 1 respondent was in the more than 60 years age group. Eighteen (45%) respondents were bachelor degree holders, 19 (47.5%) were master degree holders, and only 2 (5%) respondents were MS degree holders. Only one respondent had an intermediate level of education. All the participants have a computer science or software engineering academic background. Thirty-four (29.1%) respondents have one year of professional experience. Twenty-seven (23.1%) respondents have two years' experience.
Nineteen (16.2%) respondents have five years' experience. Fourteen (12%) participants have three years of working experience in their current job. Twelve (10.3%) participants had four years' experience. Six (5.1%) respondents had ten years of job experience. Five (4.3%) participants have more than ten years of experience. Design An online questionnaire was designed and sent to the focus group. Most of the questions in the questionnaire were closed-ended, followed by response options. However, some open-ended questions were also asked in the questionnaire to explore the views of the respondents. The following six questions were included in the questionnaire to find out the current state of usability of an organization according to the UMM. • Q#1: Do you think your current working system is usable? • Q#2: Something is incorrect with the system interface, but you are not sure what the issue is and do not know how to fix it. • Q#3: Does your company consistently produce usable products? • Q#4: This company has started to set up a specialized group to handle user interface issues. • Q#5: Is there any usability budget in your organization? • Q#6: The top decision makers in this company have started focusing on design for human use rather than inner technology. Live interviews were also held with some IT experts; the same questions, in open-ended form, were asked during the interviews. Perceptions of IT professionals about usability Comments of a senior software engineer about usability are mentioned below. He had more than 5 years of experience and sixteen years of education. Usability maturity model (UMM) The UMM is used to figure out the usability awareness level of an organization; it is explained in Table 1. At the Unrecognized level of the UMM, the employees in a company think that the product they have developed has no problems regarding usability. Level A (Problem recognition) is reached when employees of an organization identify problems regarding the usability of their developed product and recognize that its usability needs to be improved; at the Performed processes stage, members of an organization know that usability covers the whole system and does not concern only the interface used by the end user. At Level B (Quality in use awareness), employees know that the quality of a product comes through a series of HCI processes based on collecting end-user requirements; User focus means keeping in mind during product development that the end user is not as technical as the development team. Mapping of UMM and study questionnaire Each level of the UMM is mapped to one of the questions asked of the respondents, as shown in Table 2. The integration of the questionnaire with the UMM is described in Table 3. Each question represents a level of the UMM: questions 1, 2, 3, 4, 5, and 6 represent UMM levels X, A, B, C, D, and E, respectively. Data analysis To find the current level of awareness of an organization, the questions described in Section III B were asked in this survey study. YES or NO were the two options for respondents to express their response according to their knowledge (Figure 1). The average score of all six questions is calculated in Table 3, and Figure 2 shows the responses of the respondents. Whether heard about usability or not To gauge the participants' familiarity with the term usability, respondents were asked whether they had ever heard the word "usability". Most of the respondents (92.3%) answered yes, they had heard about usability. Only nine (7.7%) respondents were unaware of the term.
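The sketch below illustrates one way the Yes/No responses described in the data analysis above could be aggregated into per-question average scores and mapped to UMM levels; the 1 = Yes / 2 = No coding and the majority rule are assumptions made here for illustration, not the authors' exact procedure.

```python
# Minimal sketch (an illustration, not the authors' analysis script) of how the
# Yes/No questionnaire responses could be aggregated: per-question average
# scores and a simple majority-based mapping of each question to its UMM level.
from statistics import mean

UMM_LEVELS = ["X (Unrecognized)", "A (Recognized)", "B (Considered)",
              "C (Implemented)", "D (Integrated)", "E (Institutionalized)"]

def question_averages(responses):
    """responses: list of 6-element lists, one per respondent, coded 1=Yes, 2=No."""
    return [round(mean(col), 3) for col in zip(*responses)]

def levels_reached(responses):
    """A level counts as 'reached' when a majority answered Yes to its question."""
    reached = []
    for level, col in zip(UMM_LEVELS, zip(*responses)):
        yes_share = sum(1 for r in col if r == 1) / len(col)
        if yes_share > 0.5:
            reached.append(level)
    return reached

# Toy example with three respondents (answers to Q1..Q6)
sample = [[1, 2, 1, 2, 2, 2],
          [1, 1, 1, 2, 2, 1],
          [1, 2, 2, 2, 1, 2]]
print(question_averages(sample))
print(levels_reached(sample))
```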
Whether know about HCI or not Another question was asked of respondents: "Have you ever heard about Human Computer Interaction (HCI)?" As many as ninety-nine (84.6%) of the respondents stated YES, they had heard about HCI and were familiar with it. On the other side, only eighteen (15.4%) respondents replied NO, they had never heard about HCI. If we compare HCI and usability, many IT professionals are familiar with usability but not with HCI, due to the fact that usability is a more generic term. Figure 2 describes the state of awareness about HCI. Importance of usability for success of project A question was asked of respondents regarding the usability of the system as a factor for success. Sixty-two (53%) respondents out of 117 selected that the usability of the system is very important for a project's success, because they were aware of its importance. Fifty-two (44.4%) respondents said that usability is important for the success of the project. On the other hand, only 2.6% of respondents consider usability a useless or unimportant feature for a project's success. HCI/usability experts inside organization In fifty-nine (50.4%) organizations there is no staff for usability or HCI. In thirty organizations, USER EXPERIENCE (UX) PROFESSIONALS were hired; this option got a high response because usability and UX have been buzzwords over the last two decades. Seventeen (14.5%) respondents selected the INTERACTION DESIGNER option, which got a relatively high response due to its usage mostly in computer games and mobile application development [12]. Seventeen (14.5%) respondents selected the USABILITY ENGINEER option, and sixteen (13.7%) respondents selected the USABILITY EXPERT option; both got this response because they are general terms [13]. Only thirteen (11.1%) respondents selected HCI EXPERT, which got a comparatively low response because the IT industry is not very familiar with HCI practices. Only four respondents opted for OTHERS and did not mention any other related word. In seventeen IT companies, respondents selected more than one option. Figure 1 represents the availability of HCI/usability experts in the organizations. End user involvement in design phase The results showed that in twenty-nine (24.8%) IT companies, end users were not involved in the design phase of software development. There is little chance of a system being usable for the end user if users are not involved in its design phase. Active and frequent user participation throughout system development is a basic principle of user-centred system design (UCSD) [14], and user participation is valuable for user approval and system success [15]. In eighty-eight (75.2%) companies, end users were involved in the design phase of the system, which shows that these companies know user participation is a key to success and recognize the importance of usability. Results from six questions Six questions were asked of the respondents to find out the usability awareness level of an organization. The first question was about the usability of the current working system, that is, the system they are currently working on. Most of the respondents think that their current working system interface is error free. In reality, no system is error free; every system has some errors. Therefore, this indicates that most of the respondents are at the unrecognized level of the UMM. The average scores of the questions were as follows: • Do you think your current working system is usable? - 1.179 • Something incorrect with the system interface but not sure what was the issue and don't know how to fix it - 1.786 • Does your company consistently produce usable products? - 1.316 • Is there any usability budget in your organization? - 1.495 • The top decision makers in this company have started focusing on design for human use rather than inner technology - 1.487 In a few organizations, respondents pointed out some problems with the user interface, which indicates that those respondents identified issues with the user interface. Identification of system interface issues indicates that those IT professionals are at the recognized level of the UMM. Discussions The study results reveal that a little more than half (50.4%) of the firms have no experts for handling usability issues. In those companies, usability-related problems and user interface-related issues were not addressed due to the unavailability of usability staff. The other problem highlighted by the survey study was the lack of user involvement in the interface design phase of system development. It is a basic rule to involve the user in the user interface design phase of the system in order to develop a usable product [16]; if the user is not involved, there is very little chance of user satisfaction. In 49.57% of organizations there is no budget for usability, there is no employee training on usability, there are no staff to handle usability issues, and system testing is not performed according to HCI and usability principles. From these facts, it can be concluded that usability was not considered an important issue. An important thing noticed in the study was the respondents' perception of usability. Most of the software engineers and QA engineers were comparatively clear about usability; they consider it very productive for companies and end users. In the case of software developers and system engineers, either they were confused about usability or they did not know about it. Even software developers who already had more than five years of professional experience were not aware of usability. Developers are also not interested in learning about usability: they do not want on-the-job training or seminars about usability, they do not prefer to meet with end users, and some of them consider usability an inborn creativity. In the interviews and the questionnaire, five questions were asked about usability to check the current state of usability in an organization. The results of all those five questions reveal that most of the organizations are at the unrecognized level, in accordance with the findings of [17]. If an organization considers its user interface to be error free, it means that the organization has not yet recognized usability. On the other side, very few organizations are at the recognized level of the Usability Maturity Model. Conclusion Usability gets additional attention from industry experts and researchers throughout the world [16], but this attention has not yet spread in Pakistan. In most of the organizations, the end users were involved in the product user interface design phase. Most of the software developers and system engineers have shown no interest in usability. Most of the software developers either do not know about usability or are confused about it. From the survey, we conclude that software engineers and quality assurance engineers have better perceptions of usability than software developers, due to the fact that software engineers have more interaction with end users. An absence of HCI/usability experts is witnessed in most of the companies.
Hence, in most cases, companies are categorized at the unrecognized level of the UMM. Very few organizations are ranked at the recognized level of the UMM. Most organizations are not interested in usability because they have no financial budget for it.
3,834.6
2018-01-01T00:00:00.000
[ "Computer Science" ]
A Personalized Electronic Movie Recommendation System Based on Support Vector Machine and Improved Particle Swarm Optimization With the rapid development of ICT and Web technologies, a large amount of information is becoming available, and this is producing, in some instances, a condition of information overload. Under these conditions, it is difficult for a person to locate and access useful information for making decisions. To address this problem, there are information filtering systems, such as the personalized recommendation system (PRS) considered in this paper, that assist a person in identifying possible products or services of interest based on his/her preferences. Among available approaches, collaborative filtering (CF) is one of the most widely used recommendation techniques. However, CF has some limitations, e.g., the relatively simple similarity calculation, the cold start problem, etc. In this context, this paper presents a new regression model based on support vector machine (SVM) classification and an improved PSO (IPSO) for the development of an electronic movie PRS. In its implementation, an SVM classification model is first established to obtain a preliminary movie recommendation list, based on which an SVM regression model is applied to predict movies' ratings. The proposed PRS not only considers the movie's content information but also integrates the users' demographic and behavioral information to better capture the users' interests and preferences. The efficiency of the proposed method is verified by a series of experiments based on the MovieLens benchmark data set. Introduction With the growth in computer networks, information technology, and the availability of online resources, electronic commerce (E-commerce) has grown extensively over the last decades. Nowadays, the large amount of information available to users is not always assisting them in making decisions, because useful and relevant information is not readily distinguishable, i.e., information overload. To partly address this problem, personalized recommendation systems (PRS) have been developed within the discipline of service computing to assist users in identifying possible products or services of interest based on their preferences. This is usually achieved by extrapolating, from historical data of users' preferences and online behaviors, possible recommendations of services and products that might be relevant and of interest to the users. The underlying techniques used in most state-of-the-art recommendation systems can be generally classified into two classes: content-based recommendation (CBR) techniques and collaborative filtering (CF) techniques. In particular, CBR selects items suitable for a user by comparing representations of the item content and a user interest model [1], while CF utilizes explicit or implicit ratings from many users to recommend items to a user [2]. Content-based methods are limited in their applicability because they are based on the textual information of items only. Typically, a profile is formed for an individual user by analyzing the content of items in which s/he is interested (e.g., movie name, director, description, etc.), and additional items can then be inferred from this profile. CF algorithms are widely applied in areas in which the product contents are non-textual, such as music recommendation [3], news recommendation [4], movie recommendation [5], and product recommendation [6]. CF-based recommendations can be further subdivided into memory-based and model-based algorithms.
The memory-based CF algorithms find neighbors for an active user (new user) and rely on the neighbors' preferences to predict the preferences of the active user [7]. Shortcomings of the memory-based CF algorithms include the over-simplicity of the similarity calculation and the high computational complexity. The implementation of model-based CF algorithms starts from the development of a model from the historical data, which is then used to predict new preferences for an active user [8]. Currently, many machine learning methods rely on model-based CF, such as the backward propagation (BP) neural network [9], adaptive learning [10], linear classifiers [11], Bayesian learning [12], gradient boosting [13], and graph neural networks [14]. Compared with many other machine learning approaches [15,16,17,18], the support vector machine (SVM) approach has many advantages. For example, a solution identified with the SVM has the characteristic of being a global optimum and has a strong generalization ability [19]. It is worth highlighting that the choice of the parameters of an SVM heavily influences its prediction accuracy [20]. To date, many heuristic techniques such as grid search (GS), genetic algorithms (GA), and particle swarm optimization (PSO) have been used for the parameter optimization of SVM [21,22]. Compared with other methods, PSO possesses excellent global search capability and can be easily implemented [23]. Despite this, the standard PSO has some drawbacks, such as relapsing into local optima, slow convergence speed, and low convergence precision in the later stages of the evolution. This paper presents a new personalized recommendation system for electronic movies whose particularities (and contributions) rely on the development of an improved PSO (referred to in the following as IPSO) and of a support vector machine (SVM) based regression model. In the proposed IPSO, the evolution speed factor and aggregation degree factor of the swarm are introduced to improve the convergence speed, and a position-extreme strategy is used to avoid the search process plunging into a local optimum. In each generation, the inertia weight is updated dynamically based on the current evolution speed factor and aggregation degree factor, which gives the algorithm effective dynamic adaptability. The proposed IPSO has stronger global searching performance than the standard PSO, and can yield more accurate prediction results in the proposed recommender system. With the use of the SVM based regression model, the proposed recommendation method overcomes the limitations of the traditional CF methods. Compared with traditional CF methods that only use historical rating data to calculate similarity, the proposed PRS not only utilizes the users' demographic information but also relies on their rating information. In the implementation of the proposed PRS, the movie data are first classified and then the ratings of the testing movie data are predicted. This procedure limits the movie data to the same category range, reduces the forecasting range, and thus enhances the forecast accuracy. Basic Principles of SVM Denote the training data set as {(x_1, y_1), ..., (x_l, y_l)} ⊂ R^n × R, where x_i is the input vector, y_i is the output value, and l is the total number of training data. The relation between x_i and f(x_i) can then be defined by the regression model f(x) = ω · x + b, where ω is the weight vector and b is the threshold (bias).
ω and b are determined by the following optimization model: min (1/2)||ω||^2 + C Σ_{i=1..l} (ξ_i + ξ_i*), s.t. y_i − ω · x_i − b ≤ ε + ξ_i, ω · x_i + b − y_i ≤ ε + ξ_i*, ξ_i, ξ_i* ≥ 0, where ξ^(*) are the slack variables, C is the penalty coefficient, and ε is the insensitive loss parameter. ξ^(*) guarantees the satisfaction of the constraint conditions; C controls the equilibrium between the model complexity and the training error; ε is a preset constant which controls the tube size. Assume a transform ϕ: R^n → H, x ↦ ϕ(x), which makes K(x, x') = ϕ(x) · ϕ(x'), where (·) denotes the inner product operation. If a kernel function K(x, x') satisfies the Mercer condition, then, according to functional theory, it corresponds to the inner product of a transform space. The nonlinear regression model can thus be estimated as f(x) = Σ_{i=1..l} (α_i − α_i*) K(x_i, x) + b, s.t. Σ_{i=1..l} (α_i − α_i*) = 0 and 0 ≤ α_i, α_i* ≤ C, where α^(*) = (α_1, α_1*, ..., α_l, α_l*) are the Lagrange multipliers. In this study, we use the Gaussian function as the kernel function, in the form K(x, x') = exp(−||x − x'||^2 / σ^2), where σ (which can also be expressed as g) is the kernel parameter. σ defines the structure of the high-dimensional feature space and thereby controls the complexity of the final solution. The selection of the parameters C and σ is critical to the performance of the SVM and consequently impacts its generalization and regression efficiency. Standard PSO Algorithm PSO is a heuristic optimization algorithm proposed by Kennedy and Eberhart in 1995 [24] and has been used in many applications [25,26,27,28]. Consider a swarm consisting of n particles; each particle has a position vector X_i = (x_i1, x_i2, ..., x_iD) and a velocity vector V_i = (v_i1, v_i2, ..., v_iD), where i = 1, 2, ..., n. Each particle represents a potential solution to the given optimization problem in a D-dimensional search space. In each generation, each particle is accelerated toward its previously visited best position and the global best position of the swarm. The best previously visited position of the i-th particle is denoted as P_i = (p_i1, p_i2, ..., p_iD); the best previously visited position of the swarm is denoted as P_g = (p_g1, p_g2, ..., p_gD). The new velocity value is then used to calculate the next position of the particle in the search space. This process repeats until the preset termination criterion is reached. The update of the velocity and position vectors of a particle can be mathematically formulated as v_id^(l+1) = w v_id^l + c_1 rd_1^l (p_id − x_id^l) + c_2 rd_2^l (p_gd − x_id^l) and x_id^(l+1) = x_id^l + v_id^(l+1), where i = 1, 2, ..., n and d = 1, 2, ..., D; w denotes the inertia weight coefficient; c_1 and c_2 are learning factors; rd_1^l and rd_2^l are positive random numbers in the range [0, 1]; l is the iteration index; and x_id^l is the position of particle i in the d-th dimension. When applying PSO to SVM, x_id^l also denotes the current values of the parameters C and σ; v_id ∈ [v_min, v_max] denotes the velocity of particle i in the d-th dimension. The inertia weight w controls the impact of the previous history of velocities on the current velocity. A larger value of w facilitates global exploration, while a small value tends to facilitate local exploration. In order to balance the global and local exploration capabilities, a linearly decreasing inertia weight can be used, in which w(k) is reduced linearly between iterations. This updating process can be described as w(k) = w_start − (w_start − w_end) k / T_max, where k is the iteration index, T_max is the maximum number of iterations, and w_start and w_end are the maximum and minimum values of the inertia weight, respectively.
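A minimal sketch of the standard PSO described above, applied to tuning SVR parameters, is given below (Python with scikit-learn; the bounds, swarm size, and the use of scikit-learn's gamma in place of σ are illustrative assumptions, not the paper's settings).

```python
# Minimal sketch (assumptions, not the paper's implementation) of the standard
# PSO velocity/position updates with a linearly decreasing inertia weight,
# used to tune SVR parameters (C, gamma) by minimizing cross-validated MAE.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

def fitness(params):
    C, gamma = params
    model = SVR(kernel="rbf", C=C, gamma=gamma)
    mae = -cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_absolute_error").mean()
    return mae  # smaller is better

# PSO settings (illustrative values)
n_particles, n_iter, dim = 15, 30, 2
w_start, w_end, c1, c2 = 0.9, 0.4, 2.0, 2.0
lo, hi = np.array([0.1, 1e-4]), np.array([100.0, 1.0])   # bounds for (C, gamma)

pos = rng.uniform(lo, hi, size=(n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for k in range(n_iter):
    w = w_start - (w_start - w_end) * k / n_iter          # linearly decreasing inertia
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([fitness(p) for p in pos])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmin()].copy()

print("best (C, gamma):", gbest, " CV MAE:", pbest_fit.min())
```

The IPSO modifications described next would replace this fixed linear schedule for w with the evolution-speed and aggregation-degree update and add a restart mechanism when the global optimum stalls.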
IPSO Algorithm For the standard PSO, when a good solution is found during the early evolution, it is likely that the search remains trapped in a local optimum. In order to enhance the global searching capability of the standard PSO, the linearly decreasing strategy was designed to self-adaptively adjust the inertia weight w. One limitation of this strategy is that, as w decreases in the later evolution, the global searching capability of the algorithm and the diversity of the particles are also weakened. In order to overcome these deficiencies, this paper proposes a non-linearly descending strategy for self-adaptively adjusting the PSO inertia weight. Evolution Speed and Aggregation Degree Strategy Let f(P_g^t) be the fitness function value corresponding to the best global position at generation t and f(P_g^(t-1)) be that of generation t-1; then we can define the evolution speed as follows. Definition 1 (Evolution speed h): h = min(f(P_g^(t-1)), f(P_g^t)) / max(f(P_g^(t-1)), f(P_g^t)), where min(·) represents the minimum value function and max(·) represents the maximum value function. Definition 2 (Aggregation degree α): α = min(F_t, F̄_t) / max(F_t, F̄_t), in which F_t is the best fitness value at the t-th generation and the average fitness function value at the t-th generation is determined as F̄_t = (1/n) Σ_{i=1..n} f_i. Based on the above definitions, the non-linear inertia weight can be expressed as w = w_start − A (1 − h) + B α, where A is the weight of the evolution speed and B is the weight of the aggregation degree. Position-Extreme Strategy To avoid settling on a local optimum, a judgment condition is introduced to influence the selection of the global optimal values during the evolution process. If the global optimal value does not improve in k consecutive iterations (that is, k > limit), the algorithm is assumed to be trapped in a local optimum. In such a case, the search strategy of the particles is changed so that the particles escape from the local optimum and start exploring new positions; in the corresponding update, the positions are perturbed using rand(0,1), a random number in the range [0, 1]. Principles of IPSO Based on the strategies mentioned above, the procedure of the IPSO algorithm can be summarized as follows: Step 1: Initialization. Initialize all particles; initialize the parameters of the IPSO algorithm, including the velocity v_id and position x_id of each particle. Set the acceleration coefficients c_1 and c_2, the particle dimension, the maximum number of iterations T_max, the maximum number of consecutive non-improving iterations limit, the weight of evolution speed A, the weight of aggregation degree B, the maximum value of the inertia weight w_start, the minimum value of the inertia weight w_end, and the fitness threshold ACC. rd_1^l and rd_2^l are two random numbers between 0 and 1, and t is the current iteration number. Step 2: Set the values of P_i and P_g. Set the current optimal position of particle i as X_i = (x_i1, x_i2, ..., x_iD), i.e., P_i = X_i (i = 1, 2, ..., n), and set the best individual in the group as the current P_g. Step 3: Define and evaluate the fitness function. For classification problems, Acc is defined as the classification accuracy, Acc = (number of correctly classified samples) / (total number of samples). For regression problems, Acc is defined as the regression error (MAE), MAE = (1/n) Σ_{i=1..n} |y_i − ŷ_i|, where n is the number of samples, y_i are the original values, and ŷ_i are the forecast values. Step 4: Update the velocity and position of each particle. Search for better kernel parameters according to Eqs (6) and (7). The inertia weight is changed dynamically based on the current evolution speed factor and aggregation degree factor, as formulated in Eq (12). Step 5: Update the iteration index by setting t = t + 1. Step 6: Check the termination condition.
Design of Personalized Movie Recommendation System

This section outlines the design principles of the proposed personalized movie recommendation system.

SVM Classification Based Regression Model

The nonlinear regression problem is solved by first using the SVM to classify the items considered and then performing the regression based on the obtained classification results, with the proposed IPSO used to optimize the SVM parameters. The detailed steps of the SVM-classification-based regression, that is, personalized recommendation with a regression method based on SVM classification optimized by IPSO, are as follows:

Step 1. Divide the sample data set S into classes N_q (q = 1, 2, ..., s) according to the actual application.

Step 2. Use a training sample data set of S to generate an SVM classifier.
Step 2.1. Normalize the sample data.
Step 2.2. Select a kernel function and use the IPSO algorithm to optimize the parameters.
Step 2.3. Train on the normalized sample data and obtain the SVM classification model.

Step 3. Adopt this classifier to forecast the class labels of the testing data: classify the testing data and obtain the class label j of each sample (x_p, y_p), where p is the index of the sample.

Step 4. For the training samples N_q belonging to class j and the testing samples (x_p, y_p) assigned to class j, take N_q as the training data set and use the SVM regression algorithm to predict the y_p value of each testing sample.
Step 4.1. Normalize N_q and the samples (x_p, y_p) that belong to the same class j.
Step 4.2. Select a kernel function and use the IPSO algorithm to optimize the parameters.
Step 4.3. Train on the normalized training data set and establish the SVM regression model.
Step 4.4. Adopt the established SVM regression model to forecast the y_p value of each testing sample.
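As an illustration of Steps 1-4, the sketch below implements the classification-then-regression idea for the two-class ("like"/"dislike") setting used later for movies, with scikit-learn's RBF-kernel SVC and SVR standing in for the SVM models. It is simplified to the class that ends up in the recommendation list, the fixed C and gamma values are placeholders for the values that IPSO would return, and the feature matrices are assumed to be the "user-movie" vectors described in the next subsections.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

def classify_then_regress(X_train, y_train_rating, X_test,
                          C_clf=10.0, g_clf=0.1, C_reg=2.18, g_reg=10.46):
    """Sketch of SVM-classification-based regression for two classes.

    Step 2: train an SVM classifier on "like" (rating >= 4) vs "dislike" labels.
    Step 3: predict the class label of every test sample.
    Step 4: train an SVM regressor on the "like" training samples only and
            forecast ratings for the test samples classified as "like".
    C/gamma are placeholders; in the paper they are optimized by IPSO.
    """
    y_train_rating = np.asarray(y_train_rating, dtype=float)
    scaler = StandardScaler().fit(X_train)                      # Step 2.1 / 4.1: normalization
    Xtr, Xte = scaler.transform(X_train), scaler.transform(X_test)

    y_class = (y_train_rating >= 4).astype(int)                 # 1 = "like", 0 = "dislike"
    clf = SVC(kernel="rbf", C=C_clf, gamma=g_clf).fit(Xtr, y_class)
    recommended = clf.predict(Xte) == 1                         # preliminary recommendation list

    liked = y_class == 1
    reg = SVR(kernel="rbf", C=C_reg, gamma=g_reg).fit(Xtr[liked], y_train_rating[liked])
    ratings = np.full(len(X_test), np.nan)                      # NaN for "dislike" predictions
    if recommended.any():
        ratings[recommended] = reg.predict(Xte[recommended])    # forecast ratings for the list
    return recommended, ratings
```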
Personalized Recommendation Model

The proposed PRS requires the user's demographic information, the user's behavioral information (ratings), and the movie's content information to form a "user-movie" correlation matrix. The correlation matrix is then used to train the model, after which the movies are ranked. Based on the classification results, the PRS provides a list of recommended movies to the users. Before the classification model is established, the movies are divided into two categories according to the users' ratings: "like" (recommended) and "dislike" (not recommended). A movie is rated with a number of stars representing the user's level of preference: movies with 4 or 5 stars are placed in the "like" category, and movies with 1, 2, or 3 stars are placed in the "dislike" category.

"User-Movie" correlation feature extraction. In the proposed movie PRS, the relationship information between users and movies is essential for establishing the classification model. Based on the observation that the records in the MovieLens data set can be associated through shared keys, we use the user's demographic information, the movie's content information, and the user's ratings of movies to correlate the user's preference characteristics with the movie's information. The proposed user-movie correlation feature extraction method is shown in Fig 1.

Personalized recommendation based on SVM. As discussed before, collaborative filtering methods have some limitations. The user-based collaborative filtering (UserCF) method needs to calculate the similarity between pairs of users from the rating matrix, while the item-based collaborative filtering (ItemCF) method needs to calculate the similarity between pairs of items from the same matrix. The computational complexity of UserCF grows with the number of users and is proportional to the square of the number of users. For ItemCF, when the number of items is large, the computational complexity is also very high, being proportional to the product of the square of the number of items and the sparsity. Taking the user's demographic information into account alleviates the "cold start" problem to some extent, because such information provides useful hints about the users' preferences. The personalized movie recommendation model is described in Fig 2. The SVM classification model is built from the obtained feature vectors between users and movies; the movies are then classified and a preliminary recommendation list is produced. The latter is used to build the regression model for the rating forecasts and to form the final recommendation list. In particular, the workflow of the proposed movie recommender system can be summarized in the following steps:

1. The recommender system extracts the movie's content information and the user's demographic information, and correlates them by forming combinations of the movie and user features;
2. The feature transformation is performed and the "User-Movie" correlation feature vector is formed;
3. The recommender system trains the SVM classification model on the obtained feature vectors, classifies the movies that have no ratings, and forms a preliminary recommendation list according to the classification results;
4. The SVM regression model is trained on the feature vectors of the movies in the preliminary recommendation list;
5. The movies' ratings are forecast, thereby narrowing the forecast data range and, as a consequence, improving the forecasting accuracy;
6. The "movie, rating" pairs are obtained from the preliminary recommendation list and the forecast ratings;
7. The list is filtered according to the forecast ratings, and the final recommendation list is established.

Experimental Data Set

To test the performance of the proposed recommender system, we select the MovieLens 1M data set as the experimental data set [29]. The MovieLens data set includes the movie information as well as the users' demographic information. The MovieLens 1M data set contains anonymous ratings of approximately 3,900 movies made by 6,040 MovieLens users, stored in 3 data files: ratings.dat, users.dat, and movies.dat. The information in these 3 data files is shown in Table 1; among other fields, users.dat encodes each user's occupation as an integer code (e.g. 1: "academic/educator", 2: "artist", 3: "clerical/admin", 4: "college/grad student", ..., 9: "homemaker", 10: "K-12 student", 11: "lawyer", 12: "programmer", 13: "retired", 14: "sales/marketing", 15: "scientist", 16: "self-employed", 17: "technician/engineer", 18: "tradesman/craftsman", 19: "unemployed", 20: "writer"), and movies.dat stores the MovieID, Title, and Genres.
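For readers who want to reproduce the data preparation, the sketch below shows one way to read the three MovieLens 1M files and join them into a single "user-movie" table with pandas. The "::" separator and the column names follow the MovieLens 1M file layout, while the use of pandas and the simple feature encoding at the end are our own assumptions rather than the paper's implementation.

```python
import pandas as pd

# MovieLens 1M files use "::" as the field separator (latin-1 encoding for movie titles).
ratings = pd.read_csv("ratings.dat", sep="::", engine="python", encoding="latin-1",
                      names=["UserID", "MovieID", "Rating", "Timestamp"])
users = pd.read_csv("users.dat", sep="::", engine="python", encoding="latin-1",
                    names=["UserID", "Gender", "Age", "Occupation", "Zip-code"])
movies = pd.read_csv("movies.dat", sep="::", engine="python", encoding="latin-1",
                     names=["MovieID", "Title", "Genres"])

# Join the three tables on their primary/foreign keys to obtain one record per rating,
# carrying both the user's demographic fields and the movie's content fields.
user_movie = ratings.merge(users, on="UserID").merge(movies, on="MovieID")

# Binary class label used by the classifier: "like" for 4-5 stars, "dislike" otherwise.
user_movie["like"] = (user_movie["Rating"] >= 4).astype(int)

# A simple illustrative feature encoding: one-hot genres plus basic demographic fields.
genre_features = user_movie["Genres"].str.get_dummies(sep="|")
features = pd.concat([user_movie[["Age", "Occupation"]],
                      pd.get_dummies(user_movie["Gender"], prefix="Gender"),
                      genre_features], axis=1)
```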
"User-Movie" Correlation Feature Extraction

In our movie recommendation system, the relationship information between users and movies is essential for establishing the prediction model. Based on the MovieLens data set, we use the user's demographic information, the movie's content information, and the user's ratings of movies to correlate the user's preference characteristics with the movie's information. The 3 files store the information of the movies, the users, and the users' ratings of the movies, respectively. The primary and foreign keys of the 3 data tables provide the correlation relationships between these 3 categories of information. By analyzing these correlation relationships, we extract the users' behavior and preference information about the movies, and the "User-Movie" relationship feature vector can be formed.

Classification Results and Analysis

We select the rating data of 2000 users from the MovieLens 1M data set as the experimental data set. For each user, we randomly select 10 data records as testing data, and the remaining data are used as the training set. For the PSO and IPSO, the parameter settings are as follows: c_1 = 1.5, c_2 = 1.5, w_start = 0.9, w_end = 0.4; the initial speed range of the particles is set to [−5, 5]; the population size is set to 20; and the maximum number of iterations is set to 100. For the SVM prediction model, the search ranges of the Gaussian kernel parameters are set to C ∈ [0, 100] and σ ∈ [2^−10, 2^10]. In order to prevent errors caused by the random sample selection, we repeat the experiment five times and take the average as the final classification accuracy.

From Fig 3, it can be clearly seen that IPSO gives the best performance for the SVM parameter optimization. When the training samples reach 90% of the entire training set, the classification accuracy of IPSO-SVM reaches 75.4%, which is higher than that of PSO-SVM (73.7%), GA-SVM (72.2%), and GS-SVM (74.5%). Table 2 shows the average classification accuracy and deviation of IPSO, PSO, GS, and GA over the five experiments. From Table 2, it can be seen that in many cases IPSO and GS reach similar classification accuracies. However, the deviations of GS are larger than those of the other three methods, indicating that the GS optimization algorithm is not sufficiently stable for practical applications. This is a consequence of the fact that GS is essentially an exhaustive method whose search precision depends strongly on the step size, and it becomes very time-consuming for small step sizes. This problem does not exist in the proposed IPSO algorithm, and IPSO can therefore be regarded as a good compromise between classification accuracy and computational time.
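For concreteness, a candidate (C, σ) pair proposed by PSO, IPSO, GA, or GS can be scored as in the sketch below, where the fitness is the classification accuracy of Eq (15) estimated by cross-validation. The use of scikit-learn and 5-fold cross-validation here is an assumption for illustration; the paper's rating-prediction experiments use 5-fold cross-validation, while the classification experiments average five repeated runs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def svm_fitness(params, X, y, n_folds=5):
    """Fitness of one particle position (C, sigma): negative cross-validated accuracy,
    so that the PSO/IPSO minimizers sketched earlier can be reused unchanged."""
    C, sigma = params
    gamma = 1.0 / (sigma ** 2)                # RBF form exp(-||x - x'||^2 / sigma^2)
    model = SVC(kernel="rbf", C=max(C, 1e-6), gamma=gamma)
    acc = cross_val_score(model, X, y, cv=n_folds, scoring="accuracy").mean()  # Eq (15)
    return -acc                               # minimize the negative accuracy

# Example (illustrative bounds): best, val = standard_pso(lambda p: svm_fitness(p, X, y), dim=2, lo=0.01, hi=100.0)
```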
Rating Prediction Results and Analysis

Based on the classification results, the proposed recommender system obtains a preliminary recommendation list. It then builds a regression model based on this recommendation list, where the IPSO is used to optimize the SVM parameters. The parameter optimization results are reported in Fig 4, which shows that after 100 iterations IPSO obtains the optimal parameter combination (c = 2.1803, g = 10.462). Fig 4 also shows the profiles of the best and average fitness values over the whole population. The GA and GS methods are also applied and compared with the IPSO. All three methods adopt 5-fold cross-validation, and the maximum number of iterations is set to 100. The search range of IPSO is set to [0, 100], and the settings of GA are basically the same as those of IPSO. The search step size of GS is set to 0.5, and its parameter search range is set to [2^−8, 2^8]. The parameter optimization results of GA are shown in Fig 5. After 100 iterations, GA obtains an optimal parameter combination with c = 90.154, and after 100 iterations GS obtains the optimal parameter combination (c = 90.5097, g = 0.5). In the meantime, we observe that the fitness values of the particles keep changing under different parameter combinations. In summary, the results in Figs 4-6 clearly show that, in terms of overall fitness, the performance of the IPSO algorithm is better than that of the other two algorithms.

To verify the performance of the proposed regression model for personalized recommendation, several other methods are tested for comparison. These methods include the item-based collaborative filtering (ItemCF), the user-based collaborative filtering (UserCF), the direct SVM regression model, the BP neural network, and multiple linear regression. Fig 7 shows the prediction error (MAE) as a function of the proportion of the sample set used for training. The results show that the proposed classification-based regression method has the lowest error, followed by the direct SVM regression method, while the UserCF and ItemCF methods exhibit the highest errors; the errors of the BP neural network and multiple linear regression are very close to each other. The results also show that the prediction errors decrease as the sample size increases. This is because, with a larger sample size, the numbers of similar users and similar movies also increase, which helps to improve the accuracy of the recommendation system.

From the analysis of the collaborative-filtering-based recommendation methods, the major difference between them and the machine-learning-based recommendation algorithms is the following: traditional collaborative filtering algorithms only consider the unilateral rating information of the movies, while the machine-learning-based recommendation algorithms use not only the user's demographic information and the movie's content information, but also the user's ratings of the movies. The advantage of this is that it can alleviate the user's "cold start" problem to some extent; in addition, the user's demographic information together with the rating information can better reflect the user's preferences. It is worth mentioning that the proposed PRS first builds an SVM classification model to obtain a movie recommendation list and then forecasts the movies' ratings according to this list. In this way, the range (or number) of movie samples is narrowed to the movies in the recommendation list, which consequently enhances the forecast accuracy and efficiency.
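As a final illustration, the MAE used in Fig 7 to compare the methods can be computed as below; the rating arrays are placeholders for the actual and forecast ratings of the test samples of any of the compared methods.

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """MAE between the original ratings y_i and the forecast ratings ŷ_i."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Example: mean_absolute_error([4, 5, 3, 4], [4.2, 4.6, 3.5, 3.9]) ≈ 0.3
```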
Conclusions

This paper has presented a new rating prediction model that performs a pre-classification followed by a regression for personalized recommendation. The main advantage of the approach lies in its ability to overcome the limitations of existing collaborative filtering recommendation methods. In particular, the proposed system starts by establishing an SVM classification model and identifying a preliminary recommendation list. It then builds an SVM regression model based on the preliminary recommendation list and predicts the items' ratings. The proposed method is capable of using the items' content information, as well as the user's demographic and behavioral information, to establish the "user-item" correlation information matrix and to capture the user's interests and preferences. To improve the performance of the recommendation system, an improved PSO algorithm with an evolution speed factor and an aggregation degree factor (IPSO) is also proposed to optimize the parameters of the model. To validate the proposed method, experiments are conducted on the public MovieLens data set, and five state-of-the-art recommendation methods are compared. The experimental results show that the proposed model provides better recommendation results than the other methods.